Technology Top Stories
6 mins read

Putting the brakes on AI. Why am I smelling gas?

Amid the grandeur of the UK hosting the world’s first global summit on AI safety last week, a question lingers: Beyond the politics and posturing, who is actually setting the pace in regulating AI, and how?

It was while mending broken boilers for British Gas that I convinced myself my job would never be under threat from artificial intelligence (AI). Even the smartest of bots would need to do a much deeper dive into deep learning – and grow ears, eyes and hands – to do what I was doing: my ears telling me if a heat exchanger was ‘kettling’, my eyes alerting me to the flame picture that oozed CO and, of course, my hands taking bits apart and mending them. I told myself it was the actuaries and code writers who needed to lie awake at night, fearful of ‘the simulation of human intelligence in machines’ marching in while they scattered off to learn plumbing.

When I returned to journalism earlier this year, the potential impact of AI did strike home: advanced large language models (LLMs) will not only change the world of content creation for good, but the rate at which AI changes and advances will grow exponentially. That said, I was hardly on the brink of an existential crisis – ‘you can’t manufacture experience’ was my (very) naïve refrain.

GLOBAL AI SAFETY SUMMIT

But those far more intelligent, powerful (and wealthy) than me are far less sanguine. Last week the British government hosted the first global summit on AI safety, with the aim of addressing two key categories of risk: the misuse of AI capabilities, and loss of control – a situation in which the AI that humans create could be turned against humanity.

The location of the two-day summit was carefully chosen: Bletchley Park, where, in 1941, codebreakers led by British mathematician Alan Turing cracked Nazi Germany’s Enigma machine, an encryption device used to transmit coded strategic messages to German armed forces.

No sooner had the representatives of 28 nations arrived than they signed the Bletchley Declaration, a document pre-prepared by the UK government which warns “of the dangers posed by the most advanced ‘frontier’ AI systems”. Elon Musk, tech entrepreneur and business magnate, echoed the threat on the sidelines with a sound-bite warning that AI is one of the biggest threats to humanity: “We’re not stronger or faster than other creatures, but we are more intelligent. And here we are, for the first time really in human history, with something that’s going to be far more intelligent than us.”

On initial inspection, the summit seemed to go down quite splendidly. And make no mistake, nobody denies that AI comes with a major set of risks. But take a closer look and something seems off, as most in attendance were jockeying for position. While the EU is on the verge of passing its own AI Act, China has already pushed through its own rules governing generative AI, and the US passed an Executive Order only two days before the summit “to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence”. Playing catch-up, the UK, as host, hailed the summit as a diplomatic coup – the first of its kind – waving a signed declaration of intent to prove it.

WHO IS SETTING THE PACE?

Setting the politics and posturing aside, who is actually setting the pace in regulating AI?

In general

  • Atomium – European Institute for Science, Media and Democracy (EISMD)

This body convenes leading European universities, media, global businesses, policymakers and governments to develop innovative initiatives and frontier thinking. It works on issues directly – or with strategic partners from the business, government, media and non-profit sectors – to foster innovation, promote evidence-based policymaking, shape the social impact of new applications of AI and inspire citizen engagement in science.

As far back as 2018, it launched AI4People, an initiative taking aim at “the societal impact of AI technologies to ensure they align with human values, ethics, and the greater good”. At the end of the same year, on behalf of AI4People and its partners, Prof Luciano Floridi presented AI4People’s Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations at the European Parliament.

  • The Global Partnership on Artificial Intelligence (GPAI)

Formed by Canadian prime minister Justin Trudeau and French president Emmanuel Macron in 2020, the GPAI is an international body that aims to share research and information on AI, foster international research collaboration around responsible AI, and inform AI policies around the world. The organisation includes 29 countries, some in Africa, South America and Asia.

  • Microsoft, OpenAI, Meta and the like

Am I suggesting the poacher turns gamekeeper? I am indeed. In July this year, seven of AI’s top companies attended a White House summit, reaching a deal with the Biden administration to roll out new guardrails to enhance the safety and security of their systems and users. They promised more would follow.

Take a closer look at the Frontier Model Forum (FMF) and you might be convinced that it is the same as the above. It might be, but while big tech players like Microsoft and heavily funded startups like OpenAI got invitations to the White House, smaller players were left out. The FMF opens the door to all as “one vehicle for cross-organisational discussions and actions on AI safety and responsibility”. It has already launched a new AI Safety Fund with over $10 million to accelerate academic research on frontier model safety.

In media

  • Associated Press – In July the Associated Press (AP) reached a two-year deal with OpenAI, the company behind ChatGPT, to share access to select news content and technology. In striking such a deal, AP placed itself in a position to become an industry leader in developing standards and best practices around generative AI for other newsrooms. AP is now one of only a handful of news organisations that have begun to set rules on how to integrate fast-developing tech tools.
  • JournalismAI – A global initiative run by Polis, the London School of Economics’ journalism think-tank, which assists news organisations in using artificial intelligence responsibly.
  • The Nieman Journalism Lab – Founded to promote and elevate the standards of journalism, Nieman Lab publishes industry-first information about how AI can improve newsrooms and B2B messaging.

PROMOTING OWN INTERESTS?

These players are merely the snout of the hippo. But it does raise the question: is it time to smell gas, given that the Bletchley talking shop was, in essence, nothing more than 48 hours of scaremongering?

Mark Surman, president of the Mozilla Foundation, might have touched on the real issue here. He suggested the summit was a world stage for private companies to promote their own interests. His comments were followed by an open letter on the last day of the summit, signed by a diverse group of academics, politicians and employees of private companies such as Meta, as well as Nobel Peace Prize laureate Maria Ressa.

It reads: “We have seen time and time again that the same holds true for proprietary technologies — and that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.

“Further, history shows us that quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation. Open models can inform an open debate and improve policymaking. If our objectives are safety, security, and accountability, then openness and transparency are essential ingredients to get us there.

“When it comes to AI safety and security, openness is an antidote, not a poison.”

Some observers have pointed out that last week’s summit was held at Bletchley Park to remind the world that the UK had, and still has, brilliant minds like Turing. What they failed to point out was that had AI been around in 1941, the Enigma code could have been broken in under 10 seconds.

This is not the time to put unnecessary brakes on AI development, no matter the motive. We all want to live in a better world where we can do fewer mundane tasks and focus on the things we like, even if it means a team of Second Generation Robotic Droid Series-2 (R2D2) units mending our boilers while saving actuaries and code writers from fixing our taps.



Media Makers Meet – Live

We’d like to see you at our upcoming live events in the UK, Spain and Portugal!

  • Mx3 AI takes place on 7 December in London. From looking at the big picture to drilling down into current experiments, come and learn, discover, and discuss AI and the media with fellow professionals. Click here for more.
  • Mx3 Barcelona focuses squarely on innovation in media, emphasising creator-led, consumer and B2B media operating in and across media verticals. It takes place on 12-13 March. Click here for more.
  • The FIPP World Media Congress, which we produce under licence from FIPP, takes place from 4-6 June in Cascais, Portugal. The magazine media world’s flagship global event, this will be the 46th edition since it first launched in Paris in 1925. Click here for more.