Dominique Shelton Leipzig is a privacy and cybersecurity partner at Mayer Brown and leads its global data innovation practice.
The power of AI to transform our lives for the better is enormous. From healthcare and education to energy, finance, and entertainment, AI increases productivity, improves accuracy, and enables personalized services. Consultants project that AI will add $7 trillion to the global economy over the next ten years; if AI were a country, its economy would be the world's third largest, behind only the United States and China.
With the rapid and widespread adoption of AI across industries, the need for AI governance to protect consumers is clear, yet only 28% of executives recently surveyed believe their companies are ready for AI regulation. The hardest part, however, has already been done for them: in my research, I found that pending legislation codifying "trustworthy" AI, which exists across six continents and 37 countries and which is grounded in data, provides the answers on how to make AI safe and effective. Companies should consider building their AI governance around this pending legislation now, because standards are coming, and waiting until the laws are passed will be too late.
The model of trustworthy AI
CEOs, lawmakers, and community groups alike have raised concerns that without AI governance for sensitive use cases, we risk embedding harms such as bias, privacy violations, and misinformation into our global society for decades to come. To avoid this outcome, a top tech CEO has called on the industry to be "proactive" rather than "reactive," something that can readily be achieved by following the existing AI frameworks.
Inspired by ideas from computer science experts, the pending legislation calls on tech companies creating AI systems, and on their business customers, to classify AI by risk into categories analogous to a traffic light at an intersection: "red light" applications are prohibited and should be avoided, such as continuous surveillance of people in public spaces; "green light" applications are low-risk, such as chatting with an AI-powered chatbot on a retailer's website; and "yellow light" applications are high-risk AI, which is the focus of most AI governance.
The pending legislation allows high-risk uses of AI, but, just as when crossing an intersection on a yellow light, regulators are calling on businesses and individuals to proceed with caution. Examples of high-risk AI include use cases that could seriously harm individuals' emotional or physical well-being: for example, AI used for health, employment, or personal finance; surveillance at work or school; sensitive data (e.g., race, ethnicity, religion, political beliefs, sexual orientation, trade union membership); children; criminal justice; democracy (e.g., the right to vote); and critical infrastructure (e.g., energy grids, hospitals, the food supply). A sketch of this traffic-light classification appears below.
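For teams inventorying their AI use cases, the traffic-light scheme can be expressed as a simple classification table. The sketch below is purely illustrative: the tiers mirror the categories above, but the RiskTier enum, the USE_CASE_TIERS mapping, and the classify function are hypothetical names, not drawn from any statute or standard.

```python
from enum import Enum

class RiskTier(Enum):
    RED = "prohibited"    # e.g., continuous surveillance of public spaces
    YELLOW = "high-risk"  # e.g., health, employment, children, voting
    GREEN = "low-risk"    # e.g., a retail customer-service chatbot

# Hypothetical inventory mapping use cases to tiers, using examples from above.
# In practice, legal and compliance teams would maintain this inventory.
USE_CASE_TIERS = {
    "public_space_surveillance": RiskTier.RED,
    "hiring_screening": RiskTier.YELLOW,
    "retail_chatbot": RiskTier.GREEN,
}

def classify(use_case: str) -> RiskTier:
    # Default unknown use cases to the cautious "yellow light" tier.
    return USE_CASE_TIERS.get(use_case, RiskTier.YELLOW)

print(classify("retail_chatbot").value)     # low-risk
print(classify("emotion_detection").value)  # high-risk: unknown, so cautious default
```

Defaulting unknown use cases to the high-risk tier mirrors the "proceed with caution" posture regulators recommend.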
Given the stakes in "high-risk" areas, governments want companies to ensure that AI is trained on accurate data and is accompanied by technical documentation of prior testing and mitigation measures, so that if problems arise they can be quickly diagnosed and resolved. Finally, if an AI system cannot be fixed to avoid harming high-risk groups such as children, the pending legislation calls for companies to have a "failsafe": a way to stop, or kill, that particular use case of the AI.
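As a minimal sketch of what such a "failsafe" might look like in software, the wrapper below assumes a hypothetical predict callable and an operator-supplied harm check; none of these names come from the legislation, and a production kill switch would involve far more than a boolean flag.

```python
from typing import Callable, Optional

class FailsafeWrapper:
    """Wrap a single AI use case so it can be stopped if harm is detected."""

    def __init__(self,
                 predict: Callable[[dict], dict],
                 harm_detected: Callable[[dict, dict], bool]):
        self.predict = predict              # the underlying AI call (hypothetical)
        self.harm_detected = harm_detected  # operator-supplied harm check
        self.enabled = True                 # the "kill switch" state

    def __call__(self, inputs: dict) -> Optional[dict]:
        if not self.enabled:
            return None                     # this use case has been stopped
        outputs = self.predict(inputs)
        if self.harm_detected(inputs, outputs):
            self.enabled = False            # trip the failsafe for this use case
            return None
        return outputs

# Example: a toy model is disabled the moment its output trips the harm check.
guarded = FailsafeWrapper(lambda x: {"score": x["age"] * 2},
                          lambda i, o: o["score"] > 100)
```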
President Biden's Executive Order on AI follows the principles discussed above and will likely affect suppliers, recipients of government benefits, and federal government contractors. The message is clear: trustworthy AI is a priority for the federal government, and leaders should act accordingly.
Why you should consider adopting AI frameworks
Traditionally, businesses do not follow proposed laws, often out of concern that the regulations will change. However, key lessons from the past show that there are pivotal moments when successful companies have embraced legislative trends before the final laws came into force. For example, one CEO compared proactive AI safety to seat belts. Automakers that proactively built seat belts into their cars before 1968, when federal law made them mandatory, were able to make safety a product differentiator, save millions of lives, and become market leaders by being trustworthy. In the same vein, trusted companies are up to 400% more successful than their competitors.
The company with the world's largest market capitalization ($3 trillion) adopted privacy trends well before the final legislation was passed. Other companies lost more than $1.4 trillion in market capitalization while waiting for privacy laws to pass before implementing protections. Where there is consistency in bills across the world, it reflects a growing consensus that will not reverse itself.
As was the case with privacy and seat belts, the data-driven pending AI legislation offers sound recommendations for the safe use of AI and aligns the interests of business with the well-being of society. Adhering to these frameworks can maximize AI's benefits and minimize harm to people.
The opinions expressed are those of Dominique Shelton Leipzig and do not constitute legal advice or create an attorney-client relationship. Nor do they represent the views of her employer, her clients, or any other company.