Leveraging technology to stay ahead of the competition has become a prerequisite for modern companies to succeed; every day is a new attempt to win the race of digital transformation and adopt the latest innovations. Artificial intelligence has become one of the most widely discussed methods of optimization and growth on a global scale.
According to the New York Times, beyond business and operations, artificial intelligence has already proven able to contribute to the development and transformation of transportation, healthcare and scientific research. But there are two sides to every coin. The truth is AI solutions have also been linked to mass surveillance, identity theft and the spread of false news.
Today, the biggest tech companies in the world — Microsoft, Facebook, Twitter, Google and more — have begun taking concrete steps toward developing internal teams that will address the ethical issues that come with the widespread collection and processing of massive amounts of data — especially data used to train AI and machine learning models. Without these internal checks, a company may cross the line of ethical applications, and in those instances, AI can pose a risk to the company’s reputation.
According to the Harvard Business Review, tackling the issue of ethics is a complicated process, and companies often oscillate between two extremes. On one hand, some of the companies HBR studied had no explicit frameworks in place for AI innovation and instead tackled issues on a case-by-case basis, sometimes hoping that a problem would resolve itself — a big risk. On the other hand, there have been cases where companies introduced rigid yet vague guidelines and rules for technology development that created an excess of regulation and inhibited progress.
The solution may well be somewhere in between the two extremes. Companies cannot ignore the risks associated with ethical AI applications, but guidelines need to be more palatable and concrete in order to fuel progress. And the answer is not a one-size-fits-all solution either — the optimal solution will be uniquely tailored to the values, goals and needs of each company, making the journey to find the right ethics-growth balance highly individualized.
There are a few steps companies can take to unlock the door to this balance between ethical AI and progress.
1. Assess current data availability and the company’s technical capabilities.
Companies differ in their level of digitization and technological capabilities. While the world has increasingly moved toward a more digital-first future — something the pandemic accelerated — companies find themselves at various levels when it comes to adopting technology or developing it. Depending on what stage the companies find themselves in — be it at the beginning, when they seek technology for initial optimization, or much later, when they use AI to understand customer needs and preferences — the ethical questions and implications will vary immensely.
2. Gauge respective industry standards.
The practices relying on AI technology vary by industry as well. While certain companies may choose to rely on AI to increase internal productivity or improve workflow, others may take it a few steps further and use AI to achieve better audience targeting, user behavior analysis or fraud identification. Depending on the sensitivity level of the data used to train the AI model, various industries will have different needs when it comes to developing ethical approaches to adopting and developing AI solutions.
3. Create a team that will work on an ethics framework.
One thing matters beyond the individual differences between companies and industries — each organization absolutely needs a team that works on developing ethics standards for the company. The team should balance academic thought leaders in AI with employees directly involved in developing and adopting AI solutions. While the academic perspective provides a high-level, theoretical understanding of ethical standards, the employees — from software engineers to product managers — drive the frameworks home by introducing the nuances of the company’s technological capabilities and long-term goals, helping create a set of standards that uniquely fits the company.
4. Involve the entire organization and collaborate.
While involving internal stakeholders who work directly on AI applications is crucial to developing a highly relevant and tailored framework, it’s equally important to involve every part of the organization in the process and educate everyone in the company about the risks the frameworks aim to avoid.
Employees need to develop a good understanding of the data the company possesses and uses to train the AI models, as well as the need for ethical standards in dealing with vast amounts of data. The goal of this involvement is to hone a culture within the organization that constantly questions the way data is collected and used and to identify areas of improvement as the company progressively incorporates AI in its operations — internally and externally.
Originally published in Forbes