Architecture: Drawing Classifier
The drawing classifier is a fairly complicated piece of code, so the purpose of this document is to clarify how it works.
To start, it’s worth taking a look at the InVision prototypes to understand the difference between tablet and handset: https://projects.invisionapp.com/d/main#/projects/prototypes/13393648.
The main difference is that handset displays the drawing screen in a modal, whereas tablet does not.
DrawingClassifier
- This class assembles all of the components that make up the classifier screen, and passes the workflow and subject information down to them.
DrawingClassifierSubject
- This class handles calculating and reporting the dimensions of the image when it lays out. Because we size the image using flex, the image grows to fill the container it's in. This class also handles rendering the blur view over the DrawingToolView.
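Because the container is flex-sized, the rendered image dimensions are only known after layout. A minimal sketch of the kind of calculation involved (the function name and shape of the return value are assumptions, not the actual code): given the measured container and the image's natural size, compute the dimensions the image actually occupies under a "contain" fit.

```javascript
// Hypothetical helper illustrating the sizing math: the image scales to fill
// the flex container while preserving its aspect ratio.
function fittedImageDimensions(containerWidth, containerHeight, naturalWidth, naturalHeight) {
  const imageAspect = naturalWidth / naturalHeight
  const containerAspect = containerWidth / containerHeight
  if (imageAspect > containerAspect) {
    // Image is proportionally wider than the container: width is the limit.
    return { width: containerWidth, height: containerWidth / imageAspect }
  }
  // Image is proportionally taller: height is the limit.
  return { width: containerHeight * imageAspect, height: containerHeight }
}
```

In React Native this calculation would typically be driven by an `onLayout` callback reporting the container size.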
DrawingToolView
- This class renders the MarkableImage and the DrawingButtons. It also handles the animation when a new subject is loaded into the view.
MarkableImage
- Handles rendering the image with the SvgOverlay over it, and reports all of the actions that bubble up from the SvgOverlay.
SvgOverlay
- Frankly, this class has been pared down so much that it could probably be deleted by now. It essentially serves as a wrapper around the ShapeEditorSvg.
ShapeEditorSvg
- This class handles the bulk of the drawing logic: it handles the user's pan gestures and draws both the preview shape (the shape shown while drawing) and the already-drawn shapes. This class also restricts the pan handler movements according to rules such as not allowing shapes to be dragged off screen.
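The "can't drag off screen" rule amounts to clamping the proposed translation before applying it. A hedged sketch of that idea (the function and the rect/bounds shapes are illustrative, not the actual ShapeEditorSvg code):

```javascript
// Hypothetical clamp: apply a pan delta (dx, dy) to a rectangle, but keep it
// fully inside the visible SVG bounds.
function clampRectToBounds(rect, dx, dy, bounds) {
  const x = Math.min(Math.max(rect.x + dx, 0), bounds.width - rect.width)
  const y = Math.min(Math.max(rect.y + dy, 0), bounds.height - rect.height)
  return { ...rect, x, y }
}
```

The same pattern extends to resize gestures by clamping the edge being dragged instead of the whole rectangle.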
EditableRect
- This component handles all of the animated updates to the drawn rectangles.
At this point it is worth talking about how we store the drawing state in our system. If you take a look at drawingReducer, there are some comments explaining how it works. Essentially, while the user is in edit mode (the drawing modal is open) we record each action they take in an array. These actions are recorded as an 'add', 'edit', or 'delete' action. After each action is added, we run through all the actions in the array and infer which shapes are drawn (or not drawn). This allows the user to 'undo' any action they take, treating the array as a stack. Once the user confirms their changes, we flush the action stack and save the shapes as they are rendered.
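The replay-and-undo idea above can be sketched as follows. This is a simplified illustration of the pattern, not the actual drawingReducer; the action shapes and function names are assumptions.

```javascript
// Re-derive the current shapes by replaying the whole action array. Because
// the shapes are always inferred from the array, 'undo' is just popping the
// last action off the stack.
function replayActions(actions) {
  const shapes = {}
  for (const action of actions) {
    switch (action.type) {
      case 'add':
        shapes[action.id] = action.shape
        break
      case 'edit':
        shapes[action.id] = { ...shapes[action.id], ...action.changes }
        break
      case 'delete':
        delete shapes[action.id]
        break
    }
  }
  return shapes
}

// Undo: drop the most recent action; the next replay reflects the older state.
const undo = (actions) => actions.slice(0, -1)
```

On confirm, the array would be cleared and the replayed shapes persisted as the saved annotation.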