The global AI regulation dilemma: from blueprints to bans

3 May 2023

Last year, we scrutinised the 4th October 2022 US Blueprint for an AI Bill of Rights – an advisory document offering merely voluntary guidelines for tech firms. The world was on the cusp of an AI revolution, but little did we know how imminent it was. Just two months after this Blueprint, we toppled into the post-ChatGPT era. Now, the US is looking to introduce actual legislation.

The aftermath of ChatGPT’s release may be the wakeup call Congress needed to finally curb the power of Big Tech. The dangers of social networking and surveillance capitalism, depicted in the documentary The Social Dilemma, were insufficient to warrant regulation. But now, with even Elon Musk himself calling for a six-month halt to AI development, conversations are brewing (though Musk has since hinted at starting his own AI company, dubbed ‘TruthGPT’, so his true motives are unclear!).

AI’s potential as a force for good should not be curtailed. However, realising it is only possible with regulation that builds confidence that potent AI systems can be trusted – some scientists warn that rogue AI “could kill everyone” if it is not treated like nuclear weapons.

In this article, we first bring you up to speed on how some nations are taking a pro-innovation approach to AI regulation (the UK, the US), while others are clamping down on AI (the EU, China, Italy). Next, we recap the Springbok response to the 2022 Blueprint, which we found unsatisfactory. Finally, we look at what the audit and assessment of AI could involve, taking in data privacy, image copyright, and other legal entanglements.

Moving towards US regulation: the latest news

We are seeing the growing pains of the post-ChatGPT world. Initially, the world was racing to leverage ChatGPT, and enterprises panicked about missing out on a competitive edge.

Now, the world is racing to protect itself – businesses and governments alike. Italy’s outright ban on ChatGPT over GDPR privacy concerns has had a chilling effect on the chatbot. In other countries, regulation is in its early stages.

Even the US government – historically a world player with more light-touch regulation – is bowing to pressure. The US is taking a slower approach than the EU, but it is taking steps nonetheless.

The NTIA (National Telecommunications and Information Administration) is seeking feedback and expertise on the best way to pull together AI audits and assessments. The NTIA is consulting researchers, industry groups, and privacy and digital rights organisations.

What tech regulation in the US currently looks like

The US lacks anything equivalent to GDPR at the federal, nationwide level. At the state level, there is some comparable legislation – for example, the 2018 California Consumer Privacy Act (CCPA), which grants consumers new privacy rights, including the right to delete personal information collected about them and to opt out of the sale or sharing of their personal information.

OpenAI itself has terms and conditions, and some checks are in place, but these are commercial measures rather than a government mandate.

What’s happening around the world: the UK, EU, and China

The European Parliament is acting harder and faster. In recent years, the EU has been the influential frontrunner in responsible tech regulation. When GDPR came into effect in 2018, it swiftly became the global standard, and the UK implemented its own version, the Data Protection Act (DPA). The EU aspires to set the gold standard for AI, too.

The European AI Alliance was launched by the EU in 2018, and talks on an Artificial Intelligence Act began in 2021. Now, MEPs have truly picked up the pace. Proposals include stricter rules for copyrighted material used to train chatbots and image-generating tools, and a total ban on the use of facial recognition (biometric identification) in public spaces (although pushback from local police forces is expected).

The post-Brexit UK’s priority is incentivising overseas tech companies to establish a presence on its shores. “A heavy-handed and rigid approach can stifle innovation and slow AI adoption”, said the Secretary of State for Science, Innovation and Technology.

ChatGPT is outright banned in Italy, China, Russia, North Korea, Iran, Cuba, and Syria.

Italy became the first Western country to block ChatGPT over privacy concerns. The Italian data-protection authority is investigating whether it complies with GDPR, and has given OpenAI 20 days to address the watchdog’s concerns, under penalty of a fine of €20 million. It appears that Germany may follow suit, while other EU nations have been inspired to investigate whether harsher measures may be necessary.

China outright banned ChatGPT. The Chinese Communist Party’s priority is maintaining national unity, so companies must ensure that AI does not call for the subversion of state power, or produce content that encourages violence, extremism, terrorism, or discrimination. China has also banned AI-generated images that do not carry a watermark labelling them as such.

The Springbok response to the October 2022 US Blueprint for an AI Bill of Rights

Last year, we ruled the US Blueprint insufficient to mobilise tangible change in the tech industry, given that it is not legally binding. It equates to saying “wouldn’t it be nice if tech firms implemented ethical principles, didn’t abuse our data, were open and honest when AI is being used, and eliminated bias?”, and hoping that it will create social pressure.

We applauded Biden’s step in the right direction, especially in an economic environment historically inclined towards free-market economics and light-touch regulation. However, we ultimately concluded that its merely advisory nature means it will have a negligible impact.

To recap, the 5 principles outlined in the Blueprint are:

1. You should be protected from unsafe or ineffective systems.

2. You should not face discrimination by algorithms and systems should be used and designed in an equitable way.

3. You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.

4. You should know that an automated system is being used, and understand how and why it contributes to outcomes that impact you.

5. You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

The audit and assessment of AI is not straightforward

AI above a certain threshold (by number of users, volume of data, or revenue), or touching certain sectors (policing, legal), could be regulated.
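
To make that scoping idea concrete, here is a minimal sketch in Python of how a threshold-or-sector rule might be expressed. The numbers, field names, and sector list are invented purely for illustration; they do not reflect any actual or proposed legislation.

```python
# Illustrative only: thresholds, field names, and sectors below are invented
# for this sketch, not drawn from any actual or proposed legislation.
from dataclasses import dataclass

REGULATED_SECTORS = {"policing", "legal"}  # hypothetical high-risk sectors

@dataclass
class AISystem:
    monthly_users: int
    training_data_gb: float
    annual_revenue_usd: float
    sectors: set

def in_scope_for_audit(system: AISystem) -> bool:
    """Threshold-or-sector test: large scale OR a sensitive sector triggers audit."""
    above_threshold = (
        system.monthly_users >= 1_000_000
        or system.training_data_gb >= 10_000
        or system.annual_revenue_usd >= 50_000_000
    )
    touches_regulated_sector = bool(set(system.sectors) & REGULATED_SECTORS)
    return above_threshold or touches_regulated_sector

# A widely used chatbot would be in scope; a small internal tool would not.
print(in_scope_for_audit(AISystem(2_000_000, 500, 1_000_000, {"customer service"})))  # True
print(in_scope_for_audit(AISystem(5_000, 1, 100_000, {"marketing"})))                 # False
```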

If the US were serious about regulation, we would advise the following timeline:

  1. In the next 6 months: regulation will be drawn up and put through Congress, and regulatory authority will be given to the FTC or a new body.

  2. 12-18 months: tech firms will be required to produce AI audit reports.

  3. 3-5 years: independent third-party auditors will be set up.

However, we do not expect this to happen.

The regulation could be comparable to financial audits conducted by independent third parties, such as Deloitte and PwC. Financial audits seek to identify inaccuracies in the financial statements of a company – this is fairly straightforward.

“In the same way that financial audits created trust in the accuracy of financial statements for businesses, accountability mechanisms for AI can help assure that an AI system is trustworthy,” said Alan Davidson, the head of the NTIA.

An AI audit, however, might prove more complex and arduous, demanding computer scientists far more advanced than the graduate recruits who staff financial audits. This is likely to be an obstacle not only for US legislation but also for other countries’ legislation, as training and building up these audit departments may take years.

The auditors would need to examine the data sets and the model itself, monitoring them for abuse and malicious intent. They would need to run the model to see whether it violates bias rules. However, it would be difficult to have a standardised test suite; perhaps each tech firm would run its own tests and then submit the results. Reports alone are inadequate, as they can be forged, so in an ideal world the technical analysis would be reproducible, with code and data published.
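
As a flavour of what one reproducible bias check might involve, here is a minimal sketch in Python. It assumes a simple binary classifier whose decisions and protected-group labels are available on a shared evaluation set; the “four-fifths” threshold is a common rule of thumb used here purely for illustration, not a legal standard.

```python
# Minimal, illustrative bias check: compare a model's positive-outcome rate
# across groups and flag disparities below an illustrative "four-fifths" threshold.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-outcome rate for each protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

# Example: auditors re-run the published model's decisions on a shared evaluation set.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]                    # model decisions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]          # protected attribute

rates = selection_rates(predictions, groups)
ratio = disparate_impact_ratio(rates)
print(rates, ratio)
if ratio < 0.8:  # "four-fifths" rule of thumb, illustrative rather than a legal test
    print("Potential disparate impact – flag for human review")
```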

Data privacy and fake news

With regard to data privacy, a US equivalent of GDPR would be beneficial. Unfortunately, given that the US economy thrives on surveillance capitalism, we do not anticipate that this will materialise. Over the past five years or so, the arguments for such a law have remained largely the same. ChatGPT is hardly worse for data privacy than, say, Cambridge Analytica, which harvested Facebook profiles to provide analytical assistance during the 2016 US presidential election.

Disobedient companies could face heavy fines and the halting of operations – or so we’d hope. In reality, the US track record of regulating businesses looks pretty lenient, all things considered. For example, the $425 million Equifax data breach settlement was a drop in the ocean compared to the magnitude of the leak and the firm’s profits. This leniency promotes a business culture in which fines are viewed as just another cost of doing business.

The auto-generation of fake news is another concern. When we wrote about ChatGPT’s risks back in February, this was one consideration, alongside copycat radicalisation, as a way the technology could be weaponised. Syria’s ban on ChatGPT, for instance, is, we imagine, a move to protect a post-war nation from misinformation circulating on the internet.

Sticky situations – who is to be punished?

It’s not always crystal clear with whom the responsibility lies. If we take the example of fake news – would the firm get into trouble for providing the platform, or the users who weaponise it to malicious ends?

AI-related legal entanglements are on the rise. Given the rapid pace of AI development, these lawsuits are often unprecedented – and, without up-to-date regulation, they are evermore complex to settle.

Getty Images has sued Stability AI (the creators of text-to-image generator Stable Diffusion, similar to OpenAI’s DALL-E). Getty is accusing Stability of “unlawfully” scraping over 12 million images without a licence to train its system. In a press statement, Getty Images states that it provides licences “for purposes related to training artificial intelligence systems in a manner that respects personal and intellectual property rights”, but that Stability AI ignored “viable licensing options and long-standing legal protections”.

Returning to the question of which agent holds responsibility – the tech firm or the user: if ChatGPT were used to auto-generate fake news, should regulators go after OpenAI, or the user who instructed this abuse of AI? The same holds for other tech platforms: misinformation proliferating on YouTube, underage and nonconsensual pornography on Pornhub, and drug dealers communicating on Telegram.

Key takeaways

Given that Europe is generally keener on regulation than the US, it is unsurprising that it is further ahead in its timeline for action. We predict that the EU will implement some legislation before the US does. But we hope that the ChatGPT frenzy will prove more of a wakeup call than a final nail in the coffin.

This article is part of our series exploring ChatGPT. Others in the series include:

Springbok have also written the ChatGPT Best Practices Policy Handbook in response to popular client demand. Reach out or comment if you'd like a copy.

If you’re interested in anything you’ve heard about in this article, reach out at victoria@springbok.ai!

Sign up to our blog