The PwC 2022 AI Business Survey finds that “AI success is becoming the rule, not the exception,” and, according to PwC US in its 2021 AI Predictions & 2021 Responsible AI Insights report, “Responsible AI is the leading priority among industry leaders for AI applications in 2021, with emphasis on improving privacy, explainability, bias detection, and governance.”
Artificial intelligence (AI) is of course already ever-present in our daily lives, for example on our phones: Siri, Alexa, and Google support our searches and bring us recommendations. Yet while the use and application of AI across businesses are spreading, knowledge about AI and the implications of its use is not growing at the same exponential rate.
This article seeks to shed some light on AI as a topic generally, as well as on ethical practices and implementations: what AI does, what “responsible” or “trustworthy” AI is, why this is such an important topic, and how the drafting of the European Union’s AI Act is already shaping how organizations and businesses approach the development and deployment of AI technology today. “Trustworthy” AI is more than just morally sound; it is more tailored and effective than its less-considered competitors.
The impact of AI
What is AI? AI is a fast-evolving family of technologies that are already delivering a wide array of economic and societal benefits across the entire spectrum of industries and social activities.
AI technology can be used to find information, improve predictions, optimize operations and resource allocation, and personalize digital solutions and journeys. Most critically, AI can be trained to do things that previously only humans were capable of, but it can do them much faster and at much larger scales, a process known in digital transformation as “automation.” As a result, the ability to automate analysis and information processing has revolutionized almost every industry globally.
From the individual user to global organizations, this is of course a very powerful and impactful tool. However, as with every tool, the goal must be to implement such groundbreaking technology in a sustainable and responsible way. The forerunner in regulatory leadership in this area is the European Union (EU), which is currently drafting the EU AI Act, expected to be published in 2023.
The origin of tech regulation
In order to understand such regulations and their importance, it is useful to look at an example from the past and the positive impact regulations can, and should, have.
When internet giants started to gather more and more information about individuals in the late 2000s, services improved and users enjoyed a growing number of appealing features.
This continuous improvement of functionality by the tech giants, and the data capture required to deliver it, grew quickly and became increasingly problematic, especially since, as is often the case, public awareness lagged behind developments.
Users of the new services were of course exploring and enjoying the convenience of the digital age, yet they were often oblivious to the data-gathering taking place in the background—and unaware that this data capture was essentially funding the “free” platforms they used.
Users begin to understand the implications of data sharing
Early social media users saw recommended connections—family and friends—in a relatively non-intrusive manner. However, as face recognition and tagging in photos improved and added to usability and user excitement, AI began to make connections between previously unconnected users in the same photo. This led some users to become increasingly concerned about their digital footprint online.
Consumers gradually began to understand that anything put online might be accessible to anyone who was interested, possibly forever. Of particular concern for some users was the possibility that a potential future employer might find photos in which they had been tagged, and that they had little control over the privacy settings for those images.
Concerns also rose significantly when users realized that many apps had mapped their entire contact lists, automatically and without consent. I personally remember seeing several of my contacts and their details, such as their mobile numbers, on social media sites, knowing that these were friends or colleagues who had deliberately decided not to join social media platforms for exactly that reason: a desire to protect their personal data. Yet here they were, included and “known” within the digital ecosystem, regardless of whether they wanted to be. These profiles of non-users became known as “shadow profiles.”
There are, of course, many more examples; I have merely chosen this one because it is a personal experience. The takeaway is that as functionality and personalization grew, so, slowly, did concerns.
The GDPR emerges
The EU led the way and developed the General Data Protection Regulation (GDPR) to protect personal data. This regulation came into force in early 2018, setting strict limits on data use and clear expectations around consent. The GDPR has been so successful that it has become the de facto global standard, and it is rightfully celebrated as a major regulatory achievement.
Critical to the success of the GDPR was its unique stance on territoriality: It requires any organization doing business in the EU or marketing to EU citizens to meet specific data protection standards. The result of this was that all international businesses were required to meet GDPR standards as a baseline, which pushed the global regulatory consensus forward in a significant and positive way.
The next phase of regulation
Having learned lessons from this progression of digitalization, data capture, and personalization, the EU is now developing its AI Act in order to regulate the development and application of AI. The goal of the Act is to ensure the ethical use of AI and to prevent uses that are discriminatory or that violate human rights.
All AI applications will have to abide by the framework of “trustworthy AI” defined by the Act. Given the role of the GDPR in shaping data privacy law, the EU’s AI Act is expected to be similarly influential and to become the global standard in this space.
The EU framework for ‘responsible’ AI
The way the EU framework will deliver this vision for responsible AI, sometimes referred to as RAI, is by emphasizing the following four ethical pillars of safe AI development:
- Transparency: How openly the system’s logic and use of data are disclosed
- Explainability: How clearly the AI’s decisions can be traced from input to outcome
- Accountability: How closely the AI is overseen by a responsible person
- Safety: How accurately, efficiently, fairly, and reliably the system runs
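To make these pillars a little more concrete for teams building or buying AI, the short Python sketch below shows one way an organization might track them as an internal review checklist. It is a minimal, purely hypothetical illustration: the class names (`PillarCheck`, `ResponsibleAIReview`), the review questions, and the release gate are my own assumptions, not requirements drawn from the AI Act.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration only: the pillar names follow the list above, but
# the checklist structure, field names, and release gate are not defined by
# the EU AI Act.

@dataclass
class PillarCheck:
    pillar: str           # e.g. "Transparency"
    question: str         # the review question a human reviewer must answer
    passed: bool = False  # set to True once the team can answer "yes"
    evidence: str = ""    # link or note documenting how the check was met

@dataclass
class ResponsibleAIReview:
    system_name: str
    checks: List[PillarCheck] = field(default_factory=list)

    def outstanding(self) -> List[PillarCheck]:
        """Return the checks that do not yet have a documented, passing answer."""
        return [check for check in self.checks if not check.passed]

    def ready_for_release(self) -> bool:
        """A simple gate: every pillar check must pass before deployment."""
        return not self.outstanding()

# Example usage with the four pillars named above
review = ResponsibleAIReview(
    system_name="resume-screening-assistant",
    checks=[
        PillarCheck("Transparency", "Is the system's logic and use of data disclosed to users?"),
        PillarCheck("Explainability", "Can each decision be traced from input to outcome?"),
        PillarCheck("Accountability", "Is a named person responsible for ongoing oversight?"),
        PillarCheck("Safety", "Have accuracy and fairness been measured and documented?"),
    ],
)

print(review.ready_for_release())                        # False until every check passes
print([check.pillar for check in review.outstanding()])  # all four pillars, initially
```

The point of the sketch is simply that each pillar becomes an explicit, answerable question with documented evidence, rather than a vague aspiration.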
The AI Act also flags areas of AI usage as deserving of particular scrutiny. These are identified as areas particularly critical to the functioning of a healthy society, and thus of central necessity to protect from the risks of poorly considered automated decision-making. They are:
- Education and training
- Employment
- So-called societally important services, such as healthcare and potentially media
- Law enforcement and the judiciary
International organizations are already preparing
The good news is that while the EU is still finalizing the AI Act, progress is already being made. Internationally, organizations are implementing safeguards and strategies, developing internal procedures and protocols, and incorporating control mechanisms at all levels of their businesses.
Didem Un Ates, head of applied strategy, data & AI at Microsoft Customer & Partner Solutions, shared her thoughts on the topic, as well as the vision and steps Microsoft is implementing, in an article for the Forbes Technology Council. She observes that, “This initiative from the EU is exemplary regulatory leadership, embodying respect for human rights and dignity. I wholeheartedly applaud and support this monumental undertaking EU leaders have initiated and expect this act to accelerate the adoption of ‘Responsible AI’ practices as a must-have and an essential business function for any organization using or developing AI/ML solutions.”
Evidence of organizations adopting such measures and procedures for responsible AI is publicly available in abundance, as these practices have already become the norm for many organizations; HSBC’s Principles for the Ethical Use of Data and AI are another example.
Ethical AI frameworks are essential
No industry or business can overlook the power of AI to augment decision-making, and critical to that recognition is an understanding that mishandled AI is worse than no AI. Ethical AI frameworks are not merely moral guidelines; they are opportunities for companies to develop stronger and more effective systems.
In the context of technology, the metaphor that “the genie is out of the bottle” is frequently used, generally as a reference to the fact that the impact of scientific innovation cannot be contained once it reaches the general public. In mythology, genies were beings of unimaginable and dangerous power if not adequately constrained, yet they could create limitless riches, magic, and wonder when engaged thoughtfully.
It is therefore encouraging to see that the EU is busy developing a “bottle” that is fit for this essential purpose: creating a competitive business playing field for progress and innovation, while keeping a close eye on our individual rights and interests!