Building an Organizational Approach to Responsible AI
To engineer successful digital transformation with AI, companies must embrace a new approach to the responsible use of technology.
As technology has advanced and become ubiquitous in our lives, a common philosophical question is whether technology itself is neutral. There are many good arguments to be made that it is — and that it is how technology is used and deployed that creates good or bad outcomes for individuals, companies, and society.
This question matters for the digital transformation shaping businesses today. With data acting as the fuel for artificial intelligence, concerns about customer privacy and data tracking are growing. Organizations and governments are recognizing this, as evidenced by the European Union’s General Data Protection Regulation, which went into effect in 2018 to protect the privacy of European citizens.
AI differs from many other tools of digital transformation and raises distinct concerns because it is the only one of these technologies that learns from data and changes its behavior as a result. Accordingly, AI can make graver mistakes more quickly than a human could. Despite the risks amplified by its speed and scale, AI can also be tremendously valuable in business: PwC estimates that AI could contribute up to $15.7 trillion to the global economy in 2030.
To different degrees, all companies will need to become “AI companies” so they can leverage greater knowledge of their customers, explore new markets, and counter new, AI-driven competitors that might seek their market share. In recent years, we’ve watched Netflix overtake the likes of ExxonMobil in market value, a reminder to legacy companies that a strategic approach to becoming AI- and data-driven is key to embracing a new vision of the future.
To extract the benefits of AI while mitigating the risks, companies must be agile enough to adopt best practices for responsible transformation with AI. At a fundamental level, that means rethinking how they approach their organization, workforce, product design, development, and use of AI. Many tools are now available to help leaders and organizations navigate the complexities of using AI; one example is the work of the World Economic Forum’s AI platform, which provides recommendations for companies on various aspects of the responsible use of technology. Our platform identifies three foundational changes that are important for companies to make as they implement responsible AI. When these are overlooked in the digital transformation process, companies risk failure and damage to their brand reputation. The three principles of responsible AI are:
- The whole organization must be engaged with the AI strategy, which involves a total organizational review and potential changes.
- All employees need education and training to understand how AI is used in the company so that diverse teams can be created to manage AI design, development, and use. Additionally, employees should understand how the use of AI will impact their work and potentially help them do their jobs.
- Responsibility for AI products does not end at the point of sale: Companies must engage in proactive responsible AI audits for all ideas and products before development and deployment.
Responsible AI and the Employee Experience
In addition to delivering better ethical outcomes and helping protect stakeholder trust, responsible AI is a good bet for companies that want to save money. Gartner has predicted that through 2022, 85% of AI projects will deliver erroneous outcomes because of bias in data, algorithms, or the teams managing them, and the costs of those errors add up. Companies can protect against this risk by providing better training and taking proper measures to detect and address bias in algorithms.
Bias in AI has also proved detrimental in hiring and retention, yet many companies are rushing to implement the technology for these purposes. Organizations must recognize the drawbacks that some algorithms bring into the screening and hiring process as a result of the way they are trained, which can have a direct impact on outcomes such as diversity and inclusion. When it comes to reskilling and retention, AI can help companies screen employees for new positions and train them to fill those roles. Many employees have skills that can be built upon to cross into a new position, but companies often don’t realize the full extent of their employees’ capabilities.
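To make the idea of a proactive bias audit more concrete, here is a minimal sketch in Python of one common check: comparing selection rates across demographic groups in a screening process. The dataset, the column names (`group`, `advanced`), and the four-fifths threshold are illustrative assumptions, not a prescribed method or a legal standard.

```python
# Minimal sketch of a bias check on screening outcomes (illustrative assumptions throughout).
import pandas as pd

def selection_rate_report(df: pd.DataFrame,
                          group_col: str = "group",
                          outcome_col: str = "advanced") -> pd.DataFrame:
    """Compare selection rates across groups and flag large gaps.

    Uses the common "four-fifths" heuristic: flag any group whose selection
    rate falls below 80% of the highest group's rate. This is a screening
    heuristic for further review, not a legal or statistical test.
    """
    rates = df.groupby(group_col)[outcome_col].mean()  # share of each group that advanced
    reference = rates.max()
    report = pd.DataFrame({
        "selection_rate": rates,
        "ratio_vs_highest": rates / reference,
    })
    report["flagged"] = report["ratio_vs_highest"] < 0.8
    return report

# Hypothetical screening results: 1 = candidate advanced past the automated screen.
screening = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "advanced": [1,   1,   1,   0,   1,   0,   0,   0],
})
print(selection_rate_report(screening))
```

In practice, a check like this would be only one input to a broader audit that also examines training data, how the model’s behavior shifts over time, and how humans review its recommendations.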
For companies seeking to attract and retain employees with AI skills, it helps to develop responsible AI policies, because many of the most talented AI designers and developers value their company’s positions on ethics and transparency in their work. Skilled AI designers are scarce and demand for their work is huge, so a robust responsible AI program can bolster a company’s recruitment strategy and provide a competitive edge.
Responsible AI for Customers and Stakeholders
As AI touches more of society, the general public has become increasingly concerned about the technology and its uses. Indeed, as many as 88% of Europeans and 82% of Americans believe that AI needs to be carefully managed. It’s more important than ever that companies develop a strategy for responsible AI and communicate it well to internal and external stakeholders in order to maintain accountability. Companies should also keep in mind that a one-size-fits-all approach does not always work with emerging technology; instead, they need to match the right AI solution to the right customers and create business offerings that align with customer needs.
Organizations should also be forward-looking and recognize that where there’s disruption, there will likely be more regulation. In 2021, the European Union proposed new rules for AI that would have implications for organizations across the globe. The proposal takes a risk-based approach, and all companies using AI within the EU should be gearing up to comply. Of course, as with all legislation, it should be a first step, not the last word, in using AI wisely.
Another developing area is the addition of responsible AI to the environmental, social, and governance (ESG) framework. Increasingly, investors want to know how companies use AI and how they are addressing responsible AI problems. Likewise, venture capital firms are beginning to question whether it’s wise to invest in startups that haven’t thought about responsible AI. This affects traditional businesses in three ways:
- Startups with responsible AI strategies will be more valuable.
- The acquisition of a startup with a responsible AI strategy may depend on the startup’s approval of the acquirer’s own approach to responsible AI.
- Investors may refuse to buy stock in companies that don’t have responsible AI practices. Indeed, there may be an increase in activist investors in this space.
Responsible AI Needs Support at the Top
To gain traction throughout an organization, support for responsible AI needs to come from its leadership. Unfortunately, many board members and executive teams lack an understanding of AI. The World Economic Forum created a toolkit to help boards learn about their oversight responsibilities in companies that use AI. They can use it to understand how responsible AI can be adopted across different areas of the business, including branding, competitive and customer strategies, cybersecurity, governance, operations, human resources, and corporate social responsibility, and to prevent ethical issues from taking hold.
Responsible AI is too important to leave to one member of the C-suite. Instead, it requires collaboration and a shared understanding of the risks and benefits by all. If every company is to become an AI company, then every company must have a board and C-suite with knowledge and understanding of compliance best practices.
Most organizations still have far to go in their AI journeys, but those that adopt responsible AI practices can realize far-reaching benefits for their business, employees, customers, and society.