Government Regulation Of AI Businesses: UK CMA Starts Review
The UK Competition and Markets Authority (CMA) has announced an initial review into the competition and consumer protection considerations surrounding the development and use of artificial intelligence (AI) foundation models.
Chandu Gopalakrishnan May 4th, 2023
Government regulation of AI businesses is a hot topic across the world. The US has announced its plan for an AI Bill of Rights, while the Indian government has made it clear that it has no plans to regulate AI businesses.
Joining the club, the UK Competition and Markets Authority (CMA) today announced an initial review into the competition and consumer protection considerations surrounding the development and use of artificial intelligence (AI) foundation models.
These models, including large language models and generative AI, have the potential to transform many aspects of business and daily life.
Government regulation of AI businesses: The UK way
The CMA’s review seeks to produce guiding principles that will best support the development and future use of foundation models, while also examining how competitive markets for these models may evolve and exploring the opportunities and risks they present for competition and consumer protection.
“The development of AI touches upon a number of important issues, including safety, security, copyright, privacy, and human rights, as well as the ways markets work,” said the CMA announcement.
“Many of these issues are being considered by government or other regulators, so this initial review will focus on the questions the CMA is best placed to address – what are the likely implications of the development of AI foundation models for competition and consumer protection?”
Seen as the first step towards government regulation of AI businesses in the UK, the review invites views and evidence from stakeholders, with a deadline for submissions set for June 2, 2023.
Following evidence gathering and analysis, the CMA plans to publish a report in September 2023 that will set out its findings.
The review is in line with the UK government’s AI white paper, which seeks a pro-innovation and proportionate approach to regulating the use of AI.
Sarah Cardell, Chief Executive of the CMA, emphasized that AI is a rapidly scaling technology with the potential to transform the way businesses compete and drive substantial economic growth.
The CMA’s goal is to ensure that the potential benefits of AI are readily accessible to UK businesses and consumers while protecting them from issues like false or misleading information, she said in the official announcement.
The CMA’s work in this area will be closely coordinated with the Office for AI and the Digital Regulation Cooperation Forum (DRCF) and will inform the UK’s broader AI strategy.
Key concerns in government regulation of AI businesses
AI activists have been campaigning for scrutiny of AI businesses’ foundation models on issues including data protection, intellectual property and copyright, and online safety.
That campaigning has not escaped the attention of regulators, as recent attempts at government regulation of AI businesses show.
The UK government announced in March that it intends to divide responsibility for regulating AI among existing bodies responsible for human rights, health and safety, and competition, rather than creating a new entity dedicated solely to the technology.
Focus areas in government regulation of AI businesses vary across the world, depending on political concerns, consumer awareness, and even business needs.
United States
In March 2021, the U.S. Federal Trade Commission issued guidance for companies using AI to promote transparency and minimize the risk of discriminatory outcomes.
The Algorithmic Accountability Act, introduced in Congress in 2019, would require companies to assess the impact of their AI systems on data privacy, accuracy, and fairness.
The National Institute of Standards and Technology (NIST) has developed guidelines for the development and use of trustworthy AI systems.
European Union
The EU proposed new regulations in April 2021 that would classify some AI systems as “high-risk” and subject them to strict transparency and accountability requirements.
The General Data Protection Regulation (GDPR), which went into effect in 2018, already includes provisions regulating the use of AI systems in data processing.
The European Commission has established a High-Level Expert Group on AI to provide policy recommendations and guidance on the development of ethical and trustworthy AI.
China
In 2017, China released the “New Generation Artificial Intelligence Development Plan,” which outlines the country’s goal to become a world leader in AI by 2030.
The country has also implemented regulations requiring companies to conduct risk assessments and obtain government approval before exporting certain AI technologies.
China’s approach to regulating AI has been criticized for prioritizing economic growth over privacy and civil liberties.
Canada
In 2018, Canada released its national AI strategy, which includes plans to promote ethical and human-centric AI development and ensure that AI systems are transparent and accountable.
The country’s privacy laws, including the Personal Information Protection and Electronic Documents Act (PIPEDA), already regulate the use of personal data in AI systems.
In 2019, the Canadian government established the Advisory Council on Artificial Intelligence to provide advice on the responsible development and use of AI.
Japan
Japan’s government released its AI strategy in 2019, which includes promoting the development of AI technologies that are transparent, fair, and trustworthy.
The country has established guidelines for the use of AI in healthcare and plans to develop similar guidelines for other sectors.
Japan has also established a public-private council on AI ethics to provide guidance on the responsible use of AI.
Regulation of AI businesses: Private players in play
The UK government released a white paper on March 30 that advocates a “pro-innovation approach” to AI regulation, which involves no dedicated AI watchdog and no new legislation, but instead, a “proportionate and pro-innovation regulatory framework.”
The UK white paper appears to place the responsibility for responsible AI on the user, potentially shifting any liability that arises from the misuse of the technology to them. Notably, there was no mention of ChatGPT.
The latest attempt at government regulation of AI businesses in the UK seems to have been influenced by a popular campaign backed by tech heavyweights, urging a pause on AI development.
Hundreds of leading technology figures, such as Elon Musk and Steve Wozniak, last month signed an open letter from the US-based non-profit, Future of Life Institute, calling for a halt on generative AI development.
The signatories assert that before continuing the technology’s progression, a better understanding of its potential risks is necessary.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects,” read the letter.
“Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in.”
Apart from Musk and Wozniak, popular signatories to the open letter include Israeli historian and author Yuval Noah Harari, and Pinterest co-founder Evan Sharp, among others. A small number of staff from Google, Microsoft, Facebook, and DeepMind have also signed.
The push for a pause follows rapid advances in generative AI over the past six months, which have set off an arms race between Big Tech companies such as Google and Microsoft as they look to build advanced AI into everyday productivity tools.
Google recently announced a new cybersecurity suite powered by Sec-PaLM, a specialized “security” AI language model that incorporates security intelligence such as software vulnerabilities, malware, and threat actor profiles.
Named Google Cloud Security AI Workbench, it provides customers with enterprise-grade data control capabilities such as data isolation, data protection, sovereignty, and compliance support, said the company announcement.
While it was important for a company like Google to allay privacy concerns associated with such an AI initiative, regulators recognize that much about the technology remains to be understood.
“According to some estimates, by 2023 there will be some 29 billion connected devices on the planet that use AI technologies, and the underlying algorithms are becoming ever more central,” wrote Christian Archambeau, Executive Director of the European Union Intellectual Property Office (EUIPO) in a 2022 EU report on AI and IP violations.
“Understanding the implications of these transformations is vital at a time when the Fourth Industrial Revolution (4IR) is transforming virtually every area of the economy and society.”