
Has AI already evolved beyond regulation?

In our last post we discussed the need to support AI play, but as Azeem Azhar reminded us in the opening of the AI for Good Summit:

“Engaging with workers and providing training, education and upskilling. Those are the hygiene factors. It’s really about creating the economic conditions that favour augmentation [as opposed to replacing humans with AI] and that means they favour growth, they favour entrepreneurship and the creation of new human and human-augmented tasks.”

As IMF First Deputy Managing Director Gita Gopinath noted, the biggest test of how well we have achieved this will come in a downturn: “During good times, firms are often flush with profits. They can afford to invest in automation and hold on to workers, even if the value-added of those workers declines. However, in a downturn, these firms simply let go of workers to cut costs. Therefore, the extent to which automation could replace humans only becomes fully visible during or immediately after a downturn.”

As policy and governance practices emerge, TFD is monitoring this space carefully to ensure we can stay ahead of change and support our clients in forging a responsible future.

Encouragingly, setting the right economic conditions and AI governance were dominant themes at several major events over the past month, from the AI for Good Summit to the London AI Summit and LSE’s AI Guardians event.

In one of the opening keynotes to the AI for Good Summit, Tristan Harris, of the Center for Humane Technology and The Social Dilemma, encapsulated these challenges in two quotes:

Firstly, one from Charlie Munger (Warren Buffett’s business partner):

“If you want to predict what’s going to happen, you show me the incentive, and I will show you the outcome.”

and secondly, one from Ajeya Cotra (of Open Philanthropy), describing the governance challenge: “AI is like 24th century technology crashing down on 20th century governance.”

Given that market dominance was generally accepted to be the main incentive driving AI development, the outlook isn’t encouraging.

As Azeem Azhar put it:

“This is a technology that tends to winner take all, and the rewards are so high, there is quite the unseemly land grab going on at the moment. We’re seeing firms trying to get monopolistic positions, dominant positions, and engage in some of the most regulatory capture that historians have ever seen, and in the process, cut corners in the name of progress.”

It becomes a race to train the next big AI model, release it faster, and capture users before a competitor does.

The race operates at both a corporate and a geopolitical level: we’re driven by the fear that if we don’t build it or deploy it, we’re just going to lose to the company or the country that will. It’s essentially an AI arms race.

So are we all just doomed?

Not necessarily. As Tristan points out, “social media was kind of like first contact between humanity and a runaway AI.” Social media was generally welcomed as a great liberator that would connect people around the world. Regulation only came around a decade later, in response to significant harms. The risks of getting this wrong are far greater with AI, because now “we need to think about AI multiplied by social media.”

So the mere fact that governance, commercial models and incentives dominated so much of the discussion at these big events (with an entire day dedicated to governance at the AI for Good Summit) bodes well.

Moreover, we are already seeing action being taken. In the UK, the Digital Markets, Competition and Consumers Act made it through just before Parliament dissolved. As Neil Lawrence, the DeepMind Professor of Machine Learning at Cambridge, noted: “This is an extremely important piece of legislation… that does start giving our regulators some power.”

The regulator in question, the UK’s Competition and Markets Authority, is no lightweight either. As Azeem Azhar pointed out, it has already posed some “suitably challenging questions about emerging market dominance”, and according to Neil Lawrence (one of the team advising it), it is “one of the most equipped regulators to deal with this.”

More encouraging still was that speakers were already looking at potential solutions. Below is a quick round-up of some of the options and approaches being discussed.

Potential Solutions

As Boston Dynamics’ Brendan Schulman pointed out, it is very hard for industry or government to solve so many big issues all at once. You need to start by defining what the risk is and bringing some scientific rigour to it. After all, the companies themselves know best how their technologies might be misused.

His advice: “Identify the issue and work on it in a scientific way is, I think, the way you solve problems.”

The IMF’s First Deputy Managing Director, Gita Gopinath, points to three ways to mitigate the risk of automation in an economic downturn:

1. "Make sure that tax systems do not inefficiently favour automation over people."

She notes that she is not calling for a special tax on AI. Rather, this is a call to reconsider existing corporate tax incentives that may be treating AI as ‘special’ and encouraging labour-substituting investments.

2. "Take measures to help workers cope with the impacts of AI."

To protect workers from AI-driven labour market disruptions, heavier investment in education and training is essential.

3. "Adopt measures to lower financial and supply-chain amplification risks.

"To mitigate the threat of an AI-amplified event, financial regulators will need to enhance both supervision and regulation. To do so, regulators will themselves need upskilling to help them understand AI-related risks.

"Disclosures by financial institutions and securities issuers may need to be strengthened, to provide visibility on how they use AI, on the source of their AI models, and on their circuit breakers to reduce herding and distress sales.

"With growing reliance on AI decision-making, both financial and non-financial companies may need to stress test their AI models against “events like no other” and establish sufficient human oversight to prevent cascading breakdowns."

Tristan Harris, from the Center for Humane Technology, suggests looking to AI to carry out some of the upgrades to governance so that it can match the speed of technology. It’s an idea echoed in the concluding comments of the OECD’s recent paper on Artificial Intelligence, Data and Competition: “The potential for AI to transform and empower authorities themselves should also not be forgotten and is perhaps a nice synergy to consider from increased knowledge and capacity in the subject.”

Tristan suggests:

  • “You could use AI to optimise laws to be saying, how do we look at all laws that are getting outdated because the assumptions upon which the law was written have actually changed.”

He goes further, proposing that AGI labs could also fund some of these upgrades:

  • “What if, for every $1 million that were spent on increasing AI capabilities, AGI labs had to spend a corresponding $1 million on actually going into safety?” he asked at the AI for Good Summit.

As Neil Lawrence pointed out at LSE’s AI Guardians event, it’s not just a matter of finance; we also need to consider carefully how and where that money is spent.

  • “We’ve spent £100 million on an AI Safety Institute with unclear aims with a group of people who are not accountable to anyone… and we gave £10 million to our 19 regulators to equip them with the capabilities to deal with this technology… that’s shocking.”

Another solution Tristan proposes for consideration is making the developers of AI models liable for the downstream harms that occur.

  • “That would move the pace of release of AI models to a slow enough pace that everyone would know I’m not going to release it. I’m not going to be forced to release it as fast as everybody else, because I know everyone has to go at the pace of being responsible for the things that you create.”

We may not be there yet, but the discussions taking place do suggest a shift away from the existential threats and dystopian dialogue of 2023 towards a more pragmatic approach, with guardrails and frameworks emerging.

By proactively engaging with emerging governance, we as leaders and innovators can shift from reacting to harms to embracing this opportunity to sculpt the future with ethical foresight.

Further reading on this topic

Full guidance from IMF First Deputy Managing Director, Gita Gopinath: Crisis Amplifier? How to Prevent AI from Worsening the Next Economic Downturn

Full OECD paper on Artificial Intelligence, Data and Competition
 
