MIT demos a smartwatch app that can detect your emotions

Massachusetts Institute of Technology (MIT) has shown off a demo of a wearable AI system that can predict whether a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals. “Coupled with audio and vital-sign data, deep-learning system could someday serve as a ‘social coach’ for people with anxiety or Asperger’s,” says MIT.

PhD candidate Mohammad Ghassemi and graduate student Tuka Alhanai built an algorithm that can analyze speech and tone. The researchers used an iPhone 5S to record audio of the conversation, allowing the system to analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy.

Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation. This was implemented on a Samsung Simband, which can measure metrics such as movement, heart rate, blood pressure, and skin temperature. The system also captured audio data and text transcripts to analyze the speaker’s tone, pitch, energy, and vocabulary.
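
The article does not include the team’s code, but the per-interval idea can be illustrated with a rough sketch. In the hypothetical Python example below, each five-second window of audio is reduced to a few features (MFCCs and energy via librosa), concatenated with made-up vital-sign readings, and fed to a simple logistic-regression classifier that stands in for the deep-learning model described here; the window length, feature set, data layout, and model are all assumptions for illustration, not the researchers’ actual pipeline.

```python
# Illustrative sketch only -- not MIT's published pipeline.
# Assumes librosa for audio features and scikit-learn for a stand-in classifier.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

WINDOW_SECONDS = 5  # the five-second interval mentioned in the article

def window_features(audio, sr, vitals):
    """Build one feature vector per five-second interval.

    audio  : 1-D waveform array
    sr     : sample rate in Hz
    vitals : array of shape (n_windows, k) with per-window vital signs
             (e.g. heart rate, skin temperature) -- hypothetical layout.
    """
    hop = WINDOW_SECONDS * sr
    feats = []
    for i in range(len(vitals)):
        chunk = audio[i * hop:(i + 1) * hop]
        if len(chunk) == 0:
            break
        mfcc = librosa.feature.mfcc(y=chunk, sr=sr, n_mfcc=13).mean(axis=1)
        energy = librosa.feature.rms(y=chunk).mean()
        feats.append(np.concatenate([mfcc, [energy], vitals[i]]))
    return np.vstack(feats)

# Toy data: 60 s of noise standing in for speech, plus fake vitals and labels.
sr = 16000
audio = np.random.randn(60 * sr).astype(np.float32)
vitals = np.random.randn(12, 2)            # 12 windows x (heart rate, skin temp)
labels = np.random.randint(0, 3, size=12)  # 0 = negative, 1 = neutral, 2 = positive

X = window_features(audio, sr, vitals)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X[:3]))                  # a sentiment label per 5-second window
```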

“On average, the model could classify the mood of each five-second interval with an accuracy that was approximately 18 percent above chance, and a full 7.5 percent better than existing approaches,” says MIT. The algorithm is not yet reliable enough to be deployed for social coaching, but the researchers are actively working toward that goal.

The team plans to collect data on a much larger scale, potentially using commercial devices such as the Apple Watch, which would allow them to more easily deploy the system out in the world.

Source 1, 2
