# Musync - Music Streaming Application with Emotion Recognition and Playlist Generation Using Image Processing (Frontend)
Musync is designed to enhance the user's listening experience by analyzing their emotions through facial expressions and creating a customized playlist to match their mood. The system uses the device camera to capture the user's facial expressions and an image-processing model to determine their current emotional state.
Built with React, Tailwind CSS, and TensorFlow, Musync provides a personalized music experience by analyzing user emotions from images and generating suitable playlists.
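In a browser frontend, this kind of emotion recognition would typically run with TensorFlow.js on frames captured from the webcam. The snippet below is only a minimal sketch of that flow: it assumes a pre-trained classifier converted to TensorFlow.js format is served at `/models/emotion/model.json`, expects a 48x48 grayscale input, and uses the label order shown. The model path, input size, and labels are illustrative assumptions, not this repo's actual model contract.

```ts
// emotionDetector.ts: minimal sketch of browser-side emotion recognition with TensorFlow.js.
// The model URL, 48x48 grayscale input, and label order are assumptions for illustration.
import * as tf from '@tensorflow/tfjs';

const EMOTIONS = ['angry', 'happy', 'neutral', 'sad', 'surprised'] as const;
export type Emotion = (typeof EMOTIONS)[number];

let model: tf.LayersModel | null = null;

export async function loadEmotionModel(): Promise<void> {
  // Assumed path to a converted Keras/TF.js emotion classifier hosted with the app.
  model = await tf.loadLayersModel('/models/emotion/model.json');
}

export async function detectEmotion(video: HTMLVideoElement): Promise<Emotion> {
  if (!model) throw new Error('Call loadEmotionModel() first');
  const m = model;
  const logits = tf.tidy(() => {
    // Grayscale frame from the <video> element, normalized to [0, 1].
    const frame = tf.browser.fromPixels(video, 1).toFloat().div(255);
    // Resize to the assumed 48x48 model input and add a batch dimension.
    const input = tf.image.resizeBilinear(frame as tf.Tensor3D, [48, 48]).expandDims(0);
    return m.predict(input) as tf.Tensor;
  });
  const scores = await logits.data();
  logits.dispose();
  // Return the label with the highest score.
  let best = 0;
  for (let i = 1; i < scores.length; i++) if (scores[i] > scores[best]) best = i;
  return EMOTIONS[best];
}
```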
## Features

- Emotion Recognition: Detects user emotions from images using TensorFlow.
- Playlist Generation: Creates personalized playlists based on the detected emotion (see the sketch after this list).
- Music Streaming: Streams high-quality music from an extensive song library.
- User-Friendly Interface: Intuitive and responsive design using React and Tailwind CSS.
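To illustrate how a detected emotion could drive playlist generation, here is a minimal sketch that maps each emotion label to seed genres and requests matching tracks. The `MOOD_SEEDS` mapping, the `Track` shape, and the `/api/tracks` endpoint are hypothetical placeholders, not the actual Musync backend contract.

```ts
// playlist.ts: minimal sketch of emotion-driven playlist generation.
// The mood-to-genre mapping and /api/tracks endpoint are illustrative assumptions.
import type { Emotion } from './emotionDetector';

export interface Track {
  id: string;
  title: string;
  artist: string;
  streamUrl: string;
}

const MOOD_SEEDS: Record<Emotion, string[]> = {
  happy: ['pop', 'dance'],
  sad: ['acoustic', 'piano'],
  angry: ['rock', 'metal'],
  neutral: ['chill', 'lo-fi'],
  surprised: ['electronic', 'indie'],
};

export async function generatePlaylist(emotion: Emotion, limit = 20): Promise<Track[]> {
  const seeds = MOOD_SEEDS[emotion].join(',');
  // Hypothetical endpoint; the real track source may differ.
  const res = await fetch(`/api/tracks?genres=${encodeURIComponent(seeds)}&limit=${limit}`);
  if (!res.ok) throw new Error(`Playlist request failed: ${res.status}`);
  return (await res.json()) as Track[];
}
```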
## Tech Stack

- React: Frontend library for building user interfaces (a minimal integration sketch follows this list).
- Tailwind CSS: Utility-first CSS framework for styling.
- TensorFlow: Machine learning framework for emotion recognition.
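The pieces above could be wired together in a small React component styled with Tailwind utility classes. This is only an integration sketch under the same assumptions as the previous snippets; the component name, layout, and class choices are illustrative.

```tsx
// MoodPlayer.tsx: illustrative React + Tailwind component tying the sketches together.
import { useEffect, useRef, useState } from 'react';
import { loadEmotionModel, detectEmotion, type Emotion } from './emotionDetector';
import { generatePlaylist, type Track } from './playlist';

export default function MoodPlayer() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const [emotion, setEmotion] = useState<Emotion | null>(null);
  const [tracks, setTracks] = useState<Track[]>([]);

  // Load the model and start the webcam once on mount.
  useEffect(() => {
    (async () => {
      await loadEmotionModel();
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });
      if (videoRef.current) videoRef.current.srcObject = stream;
    })();
  }, []);

  // Detect the current mood from the live video frame and fetch a matching playlist.
  const handleScan = async () => {
    if (!videoRef.current) return;
    const detected = await detectEmotion(videoRef.current);
    setEmotion(detected);
    setTracks(await generatePlaylist(detected));
  };

  return (
    <div className="flex flex-col items-center gap-4 p-6">
      <video ref={videoRef} autoPlay muted className="w-72 rounded-xl shadow" />
      <button onClick={handleScan} className="rounded-lg bg-indigo-600 px-4 py-2 text-white">
        Scan my mood
      </button>
      {emotion && <p className="text-lg">Detected mood: {emotion}</p>}
      <ul className="w-full max-w-md space-y-2">
        {tracks.map((t) => (
          <li key={t.id} className="rounded border p-2">
            {t.title} - {t.artist}
          </li>
        ))}
      </ul>
    </div>
  );
}
```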
## License

This project is licensed under the MIT License. See the LICENSE file for details.