Generative AI has become a “normal” part of people’s daily lives in the few months since the launch of DALL-E and ChatGPT.
As we witness the first-ever mass adoption of AI technology (think: 100 million users in a shockingly short two-month period from November 2022 to February 2023), ChatGPT has already been called “unbelievable,” “unhinged” and even “unsettling” (subscription required)—and this is only the beginning of the multitude of opinions such powerful technology will evoke as adoption accelerates.
Recently, the New York Times tech columnist cited above reported on his two-hour conversation with Bing (now powered by ChatGPT), during which the AI reportedly said it wanted to be alive, leaving the writer deeply unsettled.
This raises the question: With the speed at which AI is developing, how far are we from it gaining sentience? And if (or when) it does, what happens then?
Sentient AI implies that the technology has the capability of self-awareness, as well as consciousness and emotions. In other words, just like humans, sentient AI would be capable of thinking, feeling and experiencing the world.
The truth is, we have blurred the line between fiction and reality when it comes to sentient AI. Despite the rising number of reports of ChatGPT’s “sentience,” we are still in the nascent stage of this capability—and true AI sentience has not been achieved.
However, as the possibility grows, it’s important to start thinking today about the ethical, social, legal, technological and safety implications of such a breakthrough.
1. Ethical And Legal Considerations: Rights and Responsibilities
As we’ve established, sentient AI replicates the human ability to think, feel and experience the world. In many ways, the algorithm operates similarly to a human brain.
This observation prompts an important question: Given this key similarity between sentient AI and humans, should machines share similar rights and responsibilities as well? And if the answer is “yes,” how does one hold the technology accountable for its actions?
With such a striking resemblance to humans, sentient AI’s role in society remains to be defined, and the moral and ethical lines are quite blurry. Given that we currently treat AI technology and machines as property, is this bound to change, with machines being granted their own rights and treated as individual beings?
This is especially pressing for the legal implications surrounding AI and the decision of whether to continue treating these systems as property, as legal entities or as something entirely new.
This question becomes even more relevant if AI gains the capacity to experience negative emotions, pain and suffering. The care we take to ensure the well-being of human beings must serve as a baseline for the ethical and moral guidelines developed for AI machines in the future, to avoid mistreatment and the “harm” the technology may be capable of experiencing.
2. Social Implications: Employment, Relationships And The New Normal
The debate about whether AI is here to take human jobs is already heated—and the prospect of sentience will only widen it.
As of today, AI is regarded as a powerful support tool that enables many employees to be more efficient, productive and results-driven. The prevailing argument for the current moment is that AI won’t replace humans; rather, humans who know how to use AI will replace humans who never adopt the technology.
However, the scales could tip once human-like thinking and feeling are added to the mix. The question then becomes: How can governments rethink the employment landscape to ensure working arrangements between machines and humans that are fair, symbiotic and not self-cannibalizing?
It’s important to continue down the educational path that gives humans the knowledge to adapt to changing times and the tools they need to acclimate and succeed in a world heavily impacted by AI. The possibility of sentient AI means that humans will regularly face new challenges in their interactions with the technology, going beyond simply using a tool or asking a question. There is a world in which humans will need to evolve their relationship with AI—one that will require new etiquette norms for such interactions and even emotional connections between technology and people.
3. Tech Advancements: The Good And The Bad
First things first: Sentient AI might sound unsettling, but it’s important to highlight the benefits this kind of technological breakthrough brings to the table.
Adding a layer of sentience to AI could mean a lot across a variety of industries and verticals: It could mean healthcare AI with proper etiquette and bedside manner to support patients. It could also provide powerful support toward achieving sustainability goals—not to mention individualized customer care in virtually any business-to-consumer company.
The Negative Side Of Every Tech Advancement
While sentient AI holds much potential when applied with good intentions, it can also backfire when used for nefarious purposes. From surveillance to weaponry to social manipulation, the list of concerns about AI misuse is as long as the list of ways real people are exploited today, and it will depend on the intentions of those who deploy the technology. The pressure will remain on governments to develop regulations that mitigate these risks.
As with any tech advancement, the potential of sentient AI poses many ethical, moral, legal and social questions that should be considered starting today. As always, it will come down to leaders bringing the right perspective, and the right motivations, to leveraging this breakthrough technology.
Originally published in Forbes