Regulators are starting to ask whether innovation in AI would progress faster if it were free from big tech's influence.
The UK's antitrust regulator has put tech giants on notice after expressing concern that developments in the AI market could stifle innovation.
Sarah Cardell, CEO of the UK's Competition and Markets Authority (CMA), delivered a speech on the regulation of artificial intelligence in Washington DC on Thursday, highlighting new AI-specific elements of a previously announced investigation into cloud service providers.
The CMA will also investigate how Microsoft's partnership with OpenAI might be affecting competition in the wider AI ecosystem. Another strand of the probe will look into the competitive landscape in AI accelerator chips, a market segment where Nvidia holds sway.
While praising the rapid pace of development in AI and numerous recent innovations, Cardell expressed concerns that existing tech giants are exerting undue control.
"We believe the growing presence across the foundation models value chain of a small number of incumbent technology firms, which already hold positions of market power in many of today's most important digital markets, could profoundly shape these new markets to the detriment of fair, open and effective competition," Cardell said in a speech to the Antitrust Law Spring Meeting conference.
Vendor lock-in fears
Anti-competitive tying or bundling of products and services is making life harder for new entrants. Partnerships and investments - including in the supply of critical inputs such as data, compute power and technical expertise - also pose a competitive threat, according to Cardell.
She criticised the "winner-take-all dynamics" that have resulted in the domination of a "small number of powerful platforms" in the emerging market for AI-based technologies and services.
"We have seen instances of those incumbent firms leveraging their core market power to obstruct new entrants and smaller players from competing effectively, stymying the innovation and growth that free and open markets can deliver for our societies and our economies," she said.
The UK's pending Digital Markets, Competition and Consumers Bill, alongside the CMA's existing powers, could give the authority the ability to promote diversity and choice in the AI market.
Amazon and Nvidia declined to comment on Cardell's speech, while the other vendors name-checked in the speech - Google, Microsoft, and OpenAI - did not immediately reply.
Dan Shellard, a partner at European venture capital firm Breega and a former Google employee, said the CMA was right to be concerned about how the AI market was developing.
"Owing to the large amounts of compute, talent, data, and ultimately capital needed to build foundational models, by its nature AI centralises to big tech," Shellard said.
"Of course, we've seen a few European players successfully raise the capital needed to compete, including Mistral, but the reality is that the underlying models powering AI technologies remain owned by an exclusive group."
The recently adopted EU AI Act and the potential for US regulation of the AI marketplace make for a shifting picture, in which the CMA is just one actor in a growing movement. The implications of regulation and oversight of AI tooling by entities such as the CMA are significant, according to industry experts.
"Future regulations may impose stricter rules around the 'key inputs' in the development, use, and sale of AI components such as data, expertise and compute resources," said Jeff Watkins, chief product and technology officer at xDesign, a UK-based digital design consultancy.
Risk mitigation
It remains to be seen how regulation to prevent market power concentration will influence the existing concentrations - of code and of data - around AI.
James Poulter, CEO of AI tools developer Vixen Labs, suggested that businesses looking to develop their own AI tools should look to utilise open source technologies in order to minimise risks.
"If the CMA and other regulatory bodies begin to impose restrictions on how foundation models are trained - and more importantly, hold the creators liable for the output of such models - we may see an increase in companies looking to take an open-source approach to limit their liability," Poulter said.
While financial services firms, retailers, and others should take time to assess the models they choose to deploy as part of an AI strategy, regulators are "usually predisposed to holding the companies who create such models to account - more than clamping down on users," he said.
Data privacy is more of an issue for businesses looking to deploy AI, according to Poulter.
Poulter concluded: "We need to see a regulatory model which encourages users of AI tools to take personal responsibility for how they use them - including what data they provide to model creators, as well as ensuring foundation model providers take an ethical approach to model training and development."
Developing AI market regulations might introduce stricter data governance practices, creating additional compliance headaches.
"Companies using AI for tasks like customer profiling or sentiment analysis could face audits to ensure user consent is obtained for data collection and that responsible data usage principles are followed," said Mayur Upadhyaya, CEO of APIContext. "Additionally, stricter API security and authorisation standards could be implemented."
Dr Kjell Carlsson, head of AI strategy at Domino Data Lab, said: "Generative AI increases data privacy risks because it makes it easier for customers and employees to engage directly with AI models, for example via enhanced chatbots, which in turn makes it easy for people to divulge sensitive information, which an organisation is then on the hook to protect. Unfortunately, traditional mechanisms for data governance do not help when it comes to minimising the risk of falling afoul of GDPR when using AI because they are disconnected from the AI model lifecycle."
APIContext's Upadhyaya suggested that integrating user consent mechanisms directly into interactions with AI chatbots and similar tools offers one way to mitigate the risk of falling out of compliance with regulations such as GDPR.