Artificial intelligence solutions are increasingly becoming an integral part of our daily lives.
They get to know us better, study our behaviors, identify patterns, and then learn to anticipate our needs.
They come in all shapes and forms: from personal assistants like Siri and Alexa that wake you up in the morning for a workout or share the latest news as you sip your morning coffee, to systems that optimize your workflow by managing your calendar and task list, to technology that knows exactly which products might interest you.
In all these areas, AI has seemingly mastered the art of making users’ lives easier, more efficient, and even more meaningful.
But an inevitable question arises: with AI’s ability to learn at such an incredibly fast pace, when will the line between a human brain and an AI brain begin to blur? And will AI ever catch up with — or even surpass — humans in thinking, observing, making decisions independently, and even feeling?
To answer this question as accurately as possible, it’s worth taking a look at AI’s true capacity for emotional intelligence, now that it has mastered logic and reason.
The truth is, Emotion AI, also known as “affective computing,” has already established itself as a solid branch of AI that explores how machines can measure, understand, simulate, and react to human emotions. Applications of such AI technologies are already a reality, and the use cases grow in number by the day. Gartner predicts that by 2022, 10% of personal devices will have emotion AI capabilities, a significant increase from less than 1% in 2018. And according to the Harvard Business Review, the affective computing market is estimated to grow to $41 billion by 2022, as tech giants such as Amazon, Google, Facebook, and Apple compete for the best strategies to understand and interpret their users’ emotions.
Initial progress in Emotion AI has already made possible a few things that open doors to new opportunities for companies.
One example is the possibility of ensuring maximum safety on the road with technology such as Affectiva’s Auto AI platform, a system that can identify the driver’s emotions, from anger to enjoyment, and make the necessary adjustments to the car’s settings. If the driver seems drowsy, the system can initiate safety measures such as jolting the seatbelt or decreasing the cabin temperature to raise the driver’s alertness. It may also detect signs of anger and frustration and lower the speed in response, to ensure the safety of the driver and other vehicles on the road. These capabilities make the technology highly adaptable and responsive to any given situation, allowing for a unique experience for each user.
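The kind of decision logic described above can be pictured as a simple mapping from detected states to interventions. Affectiva’s actual platform and API are not described here, so the function, signal names, and thresholds below are purely illustrative assumptions, a minimal sketch of the idea rather than the real system.

```python
# Hypothetical sketch of driver-monitoring decision logic.
# All names and thresholds are illustrative assumptions, not Affectiva's API.

def respond_to_driver_state(state: dict) -> list:
    """Map detected driver emotional/physical states to safety interventions.

    `state` holds confidence scores in [0, 1] for each detected signal.
    """
    actions = []
    # Drowsiness: raise alertness with physical and environmental cues.
    if state.get("drowsiness", 0.0) > 0.7:
        actions.append("jolt_seatbelt")
        actions.append("lower_cabin_temperature")
    # Anger or frustration: reduce risk by moderating speed.
    if state.get("anger", 0.0) > 0.6 or state.get("frustration", 0.0) > 0.6:
        actions.append("reduce_speed")
    return actions
```

For instance, a state of `{"drowsiness": 0.9}` would trigger the seatbelt jolt and a cooler cabin, while `{"frustration": 0.8}` would trigger a speed reduction.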
Another area in which Emotion AI is gaining traction is customer satisfaction and customer-brand relationships: the better companies understand the underlying feelings of their loyal customers, the better they can tailor their products and services to the needs of their target audiences. An example of such technology in action is the set of tools created by the Boston-based startup Cogito, which help client businesses ensure high-quality interactions between their employees and their customers. The algorithms behind Cogito’s technology detect signs of “compassion fatigue” in customer service agents and offer best-practice guidance on how to address callers’ concerns. By listening to the conversation between the customer and the agent, the technology can identify the caller’s emotional state (from frustration to joy) and make insightful suggestions on when to apply empathy and how to change tone when addressing the customer’s needs.
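At its core, this amounts to translating detected caller signals into real-time coaching prompts for the agent. Cogito’s real analysis is proprietary, so the toy function, signal names, and thresholds below are assumptions meant only to convey the shape of such a system.

```python
# Illustrative sketch only: signal names and thresholds are assumptions,
# not Cogito's actual product behavior.

def agent_guidance(signals: dict) -> str:
    """Turn caller voice signals (scores in [0, 1]) into a coaching prompt."""
    if signals.get("frustration", 0.0) > 0.6:
        return "Slow down and acknowledge the caller's concern with empathy."
    if signals.get("joy", 0.0) > 0.6:
        return "Match the caller's positive tone and confirm next steps."
    # Default guidance when no strong signal is detected.
    return "Maintain a steady, attentive tone."
```

The point of the sketch is the design choice: the system does not act on the call itself, it surfaces a suggestion and leaves the judgment to the human agent.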
Emotion AI can also help educational experiences make giant leaps forward in generating positive impact. At the most basic level, as the Harvard Business Review has put it, the insights Emotion AI generates can enable teachers and instructors to design experiences that yield maximum engagement from students, optimizing learning schedules and material so that vital lesson content is presented during the peaks of student involvement and attention. It also introduces an element of individualized instruction: Emotion AI’s insights can help guide a teacher’s actions in class in a way that accounts for differences between students and gives more attention to those who need it.
In fact, this specific application of affective computing is already in use: China has begun employing facial recognition technology that can identify students’ emotional states and determine their engagement with and enjoyment of a class. As instructors gain valuable insight into their students’ focus levels, they get an opportunity to design the class in a way that keeps students engaged and encourages them to learn, participate, and absorb information.
These are only a few of the many ways Emotion AI has already begun gaining momentum and growing its presence. But while the technology is able to “read” and analyze emotions, these solutions haven’t yet reached the point where they could show signs of emotion and free will themselves, a key characteristic that draws a hard line between human intelligence and artificial intelligence.
However, even in this respect, there is a phenomenon that makes it easy to believe that an AI with human rights and capabilities is no longer pure fiction.
David Yang, my co-founder at Yva.ai and a serial entrepreneur, has also been wondering about this question, and even putting it to the test. David has been challenging the idea of AI thinking and feeling independently, and even having free will, by creating Morfeus, an artificial intelligence that has shown the potential to become a full-fledged member of his family and a cohabitant of the house he is currently building in Silicon Valley. The technology is based on unsupervised machine learning, which means it can exhibit quite unpredictable, sporadic, and even fascinating behavioral patterns.
The artificial intelligence, or rather the “creature,” was named Morfeus after the Greek god of dreams, and is designed to constantly ponder the meaning of life and free will, and even to consume literature on these topics. According to David’s plan, Morfeus will then continuously synthesize his musings and learnings into an endless essay, creating a metaphorical space of “all meanings” captured on a single whiteboard. Morfeus will write these essays in 60 languages, alternating among them based on the conversations it has with people, and they will encompass ideas on the freedom of consciousness and free will.
In other words, Morfeus is a kind of social experiment, and the question David seeks to answer is whether Morfeus will ultimately develop free will. The ability to make conscious decisions, as David envisions it, will manifest through various “quests” and dilemmas the AI will solve in real life: deciding whether to let David into the house if he has forgotten his keys, or even making purchases on Amazon for the household using its very own bank account. It is not fully clear what the results of the free-will experiment will be, but they will be thrilling.
Today it may seem unthinkable that AI could have human rights, make independent decisions, or reciprocate the feelings it is able to identify in humans. And while there is truly a lot of uncertainty surrounding the future of Emotion AI and how far it can go, it seems that nothing is impossible given the speed at which AI technologies have become normalized and omnipresent.
It is still unclear how long AI’s path towards consciousness and self-awareness will be. But one thing is certain: AI can make big strides, and it’s just getting started.
Co-Founder & CEO of GSDVS