AI With An Empathic Voice

Voice AI is a subset of AI that people have been experimenting with both professionally and for personal use.

"Voice" can mean two things in English. It is both the sound we make when we speak but also when  a particular attitude is expressed. This is evident in both speech and writing when a voice takes on mood, tone etc.

The term "empathic" as used here means the voice responding shows an ability to understand and share the feelings of the human. In this AI sense, if the input is sad or depressed, the output is sympathetic and understanding.

Hume is a research lab and technology company whose stated mission is "to ensure that artificial intelligence is built to serve human goals and emotional well-being." Hume's Empathic Voice Interface (EVI) is described as the first AI with emotional intelligence. It understands the user's tone of voice, which adds meaning to every word, and it uses the user's vocal signals to guide its own language and speech. You can talk to it like a human, so it should respond better, faster, and more naturally. Developers can use EVI as an interface for any application; it is an API powered by an empathic large language model (eLLM).
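To make the developer-facing side concrete, here is a minimal sketch in Python of what talking to a voice API like EVI over a WebSocket could look like. The endpoint URL, message types (`audio_input`, `assistant_message`, `assistant_end`), and field names below are assumptions for illustration, not Hume's documented protocol; a real integration should follow Hume's API reference.

```python
# Minimal sketch of streaming audio to an empathic voice API over a
# WebSocket. The endpoint URL, query parameter, and message fields are
# assumptions for illustration; consult Hume's EVI docs for the real
# protocol.
import asyncio
import base64
import json

import websockets  # pip install websockets

API_KEY = "your-hume-api-key"  # placeholder credential
EVI_URL = f"wss://api.hume.ai/v0/evi/chat?api_key={API_KEY}"  # assumed endpoint


async def chat():
    async with websockets.connect(EVI_URL) as ws:
        # Send one chunk of audio (here, a pre-recorded WAV file) encoded
        # as base64, a common pattern for audio-over-WebSocket APIs.
        with open("hello.wav", "rb") as f:
            audio_b64 = base64.b64encode(f.read()).decode("ascii")
        await ws.send(json.dumps({"type": "audio_input", "data": audio_b64}))

        # Read responses until the server signals the turn is complete.
        # An empathic model's reply would interleave text, inferred vocal
        # expressions, and synthesized audio.
        async for raw in ws:
            message = json.loads(raw)
            if message.get("type") == "assistant_message":
                print("EVI says:", message.get("text"))
            elif message.get("type") == "assistant_end":
                break


asyncio.run(chat())
```

The point of the sketch is the shape of the exchange: the client streams raw voice rather than transcribed text, so the model can read tone as well as words before it replies.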

The foundation of Hume's research is semantic space theory (SST), an inductive, data-driven approach to mapping the full spectrum of human emotion.

EVI will be generally available in April 2024. A demo is available online.
