- Moderate abusive (hateful, harmful) content
- Phase 1 - Moderator manually publishes all Prompts and Posts as they are submitted
- Phase 2 - Content filtered by algorithm, and published automatically or reviewed by moderator when flagged as potentially abusive
- Phase 3 - If an abusive post passes the algorithm, users can flag it as abusive
- Phase 4 - Supportive intervention extended to the user when their content is flagged as abusive
- Make avatar stick to response
- User can respond to comments
- Private groups / conversations
- Invite friend(s) to group
- This feature will require people to create accounts
- The minimum data required from an end user is a username and password
- Optionally collect an email address if the user wants password resets
- The User owns their data. If they delete their account, all of their Posts should be deleted.
- What about Prompts they have issued to the Group?
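The account-deletion rule above maps naturally onto a cascading foreign key. A minimal sketch using SQLite's standard `ON DELETE CASCADE` follows; the table and column names are assumptions, and the open question about Group Prompts is deliberately left unresolved here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires this per connection
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("""
    CREATE TABLE posts (
        id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
        body TEXT
    )
""")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.execute("INSERT INTO posts VALUES (1, 1, 'my response')")

# Deleting the account removes all of the user's Posts automatically.
conn.execute("DELETE FROM users WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
```

Whether Prompts issued to a Group should cascade the same way, be reassigned, or be anonymized is exactly the open question above; a `prompts` table would need its own deliberate `ON DELETE` behavior.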
- Prompt and Post moderation
- Group name moderation
- Moderation of a Group's Users
- Maintain a safe environment
- Conversations organized by category (mental health, relationships, etc.)
- Allow user to read other people's Responses only if they have first posted a Response to that Prompt
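The read-gating rule above (respond before you can read) can be expressed as a simple visibility check. A sketch under assumed names: the response structure and the `visible_responses` helper are hypothetical.

```python
def visible_responses(prompt_responses, current_user):
    # A user may read other people's Responses to a Prompt only after
    # posting their own Response to that same Prompt.
    has_responded = any(r["author"] == current_user for r in prompt_responses)
    if has_responded:
        return prompt_responses
    return []
```

In practice this check would live in the query layer, so unearned Responses are never sent to the client at all.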
- All users can add prompts
- Migrate the production database on deploy
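One way to run migrations on deploy is a small pre-start hook that refuses to boot the app if migrations fail. This is a sketch only: the `manage.py migrate` command assumes a Django-style stack, which the list does not confirm.

```python
import subprocess
import sys

def migrate_on_deploy():
    # Apply pending migrations before the app starts serving traffic.
    # The exact command ("manage.py migrate") is an assumption about the stack.
    result = subprocess.run(
        [sys.executable, "manage.py", "migrate", "--noinput"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"Migration failed: {result.stderr}")
```

Failing loudly here is deliberate: deploying app code against an unmigrated schema is usually worse than a blocked deploy.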
- Make the layout less cramped on mobile
- Make the Prompt navigation buttons easier to locate