diff --git a/README.md b/README.md
index a35f64a..856f35d 100644
--- a/README.md
+++ b/README.md
@@ -27,6 +27,8 @@ version 1.0.0
 
 ROBODADA is an open-source toolkit that maps facial expressions to the body language of a robot. A web-based interface lets you move a two-axis robot, record different movement patterns, and assign them to [face expressions](https://www.researchgate.net/publication/318447136_Basic_Emotions). Once you activate your webcam, ROBODADA tries to interpret your emotion with a pre-trained model and triggers the robot's recorded movement.
 
+Find examples of ROBODADA in use [here](https://hybridthings.tha.de/robodada/).
+
 We use the [face-api.js API](https://github.com/justadudewhohacks/face-api.js/) for face expression detection.
 
 We are aware that machine-based emotion detection has to be [discussed](https://www.theverge.com/2019/7/25/8929793/emotion-recognition-analysis-ai-machine-learning-facial-expression-review) very critically. ROBODADA underlines that a machine is not really able to [detect human emotions](https://www.theatlantic.com/technology/archive/2021/04/artificial-intelligence-misreading-human-emotion/618696/); it just makes decisions based on values that can be [measured](https://github.com/justadudewhohacks/face-api.js/#features) by a machine. We see ROBODADA as a playful approach for further discussion:
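
For context on the detection step the README describes, here is a minimal browser-side sketch of expression detection with face-api.js. The model path (`/models`), the `webcam` element ID, and the polling interval are illustrative assumptions, not ROBODADA's actual wiring:

```js
import * as faceapi from 'face-api.js';

const video = document.getElementById('webcam'); // hypothetical <video> element

async function start() {
  // Load a lightweight face detector and the expression classifier.
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  await faceapi.nets.faceExpressionNet.loadFromUri('/models');

  // Stream the webcam into the video element.
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });

  setInterval(async () => {
    // Detect the most prominent face and score its expressions.
    const result = await faceapi
      .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceExpressions();
    if (!result) return;

    // result.expressions holds per-class probabilities, e.g.
    // { neutral, happy, sad, angry, fearful, disgusted, surprised }.
    const [emotion, score] = Object.entries(result.expressions)
      .sort((a, b) => b[1] - a[1])[0];
    console.log(`detected ${emotion} (${score.toFixed(2)})`);
    // A controller could trigger the movement recorded for `emotion` here.
  }, 500);
}

start();
```

Note that the classifier only ranks measurable pixel-level features per expression class, which is exactly the limitation the README's last paragraph points to.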