The Willy Wonka experience: navigating misrepresentation in the age of AI

By Emma Campbell

Emma Campbell, law student at the University of Strathclyde, explores how regulation of AI can protect consumers against misrepresentation

Credit: Willy Chocolate Experience/Stuart Sinclair

The ‘Willy’s Chocolate Experience’ captured global attention earlier this year by promising an event where you could “indulge in a chocolate fantasy, like never before”, but instead delivering a bitter dose of reality. Visitors left feeling deflated by an event that shone a spotlight on the pitfalls of using Artificial Intelligence (AI) in promotional materials.

Inspired by the Roald Dahl book Charlie and the Chocolate Factory, the ‘Willy’s Chocolate Experience’ held in Glasgow was sold as a magical event complete with chocolate fountains and dancing Oompa Loompas (or, to avoid copyright infringement, “Wonkidoodles”). The promised magical chocolate factory was instead a sparsely decorated warehouse containing a bouncy castle and a lone bowl of jellybeans.

The advertisement for the event was created using AI, a technology that enables computers to mimic human intelligence, with algorithms drawing on available data to produce media such as text and images.

While AI can offer many benefits to its users, it also poses a risk to consumers who may not realise the content they are consuming is AI-generated. The AI-created illustrations used to advertise the event were in the typical AI style: bright colours, distorted subjects and spelling errors. While to some it may be obvious that the event’s creators used AI to advertise (who wouldn’t want to buy tickets to a “paradise of sweet treats”?), the problem of consumers failing to recognise AI-generated content is growing. The Office for National Statistics found that only one in six adults could “always or often” detect when they were using AI, which highlights the dangers of AI use in consumer-facing settings.

Many attendees argued that the experience was a waste of time and money. However, AI poses a risk of far more serious harms, which the UK government has moved quickly to try to regulate.

Influenced by the White House Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, the House of Lords proposed the Artificial Intelligence (Regulation) Bill. The bill proposes the establishment of a new regulatory body, the AI Authority, with various functions to help address AI regulation. It would require the AI Authority to “have regard” to core principles such as safety, security and transparency.

The question of transparency in the use of AI has gained attention. Social networking companies such as TikTok and Meta have begun applying “AI generated” labels to indicate when a post has been created using AI. This has proven effective, with warning labels reducing an individual’s likelihood of engaging with misleading information. These AI warning labels may help to facilitate the proposed AI Authority’s core principle of transparency in business use of AI, but more needs to be done to ensure that all businesses have AI identification markers in place.


The fact that there is no legislative framework requiring social media and e-commerce companies to mark AI-generated content is concerning. This is illustrated by the e-commerce site Etsy, where selling AI-created products is regarded as a grey area: Etsy’s policies neither expressly allow nor prohibit the sale of AI-generated products on the site. This has prompted calls for change within the Etsy community, where some believe that allowing AI images to be sold as products on the site puts users at risk of being scammed, as it strays too far from the company’s “handcrafted products” image.

This begs the question of whether the UK government is doing enough to safeguard against the dangers of AI. Rather than adopting blanket legislation, the Artificial Intelligence (Regulation) Bill takes a principles-based approach, which may cause confusion as to what companies are required to do to meet their transparency obligations for AI.

Adopting clear legislation requiring companies to label AI-generated content would help prevent consumers from being misled. Such a label would protect consumer rights as established in the Consumer Protection from Unfair Trading Regulations 2008, which prohibit misleading commercial practices that may influence the consumer. The AI-generated images of Willy’s Chocolate Experience undoubtedly influenced attendees to purchase tickets. If AI-generated labels had been attached to those images, they might at least have alerted Wonka fans that the advertisement might not accurately reflect the actual experience.

However, when considering the principle of transparency, when should the law demand it? Clause 5 of the bill proposes that any person supplying an AI product or service would have to give customers “clear and unambiguous health warnings, labelling and opportunities to give or withhold informed consent in advance”. This short clause does not clarify what is expected of those supplying AI products to customers. In framing clear labelling requirements for AI-generated products, a consumer-focused approach should be considered rather than a more general, innovation-led one. For example, the declaration of potential allergens on food labels is governed by a statutory instrument (the Food Labelling (Declaration of Allergens) (England) Regulations 2008), which sets out both what must be declared on food packaging and how it must be formatted. Providing similar clarity as to the format and application of AI labelling would encourage its effective use and increase transparency for the consumer.

AI is developing at an extraordinary rate, becoming more advanced and ‘human-like’. The government has stated that it is too early in the development of AI to introduce primary legislation, a position that contrasts with the EU’s approach of introducing wide-ranging legislation across all member states. However, with such drastic developments happening at such a pace, regulation must be able to keep up. As artificial intelligence becomes more human-like, distinguishing what is human from what is artificial will become increasingly difficult, which only strengthens the case for more concrete AI regulation. While an AI Authority may be useful in providing consensus on core principles of AI usage, its powers should be extended so that it can use its expertise to propose relevant and useful legislation and, in future, to amend legislation in response to changes in AI.

The need for concrete legislation will only grow as AI becomes more human-like and less obviously artificial. The Willy’s Chocolate Experience highlights the dangers of leveraging AI without proper safeguards. A more concrete framework, in the form of legislation, must be adopted to complement the creation of a regulatory body, balancing accountability and ethical AI use to safeguard users and consumers.

Emma Campbell is a second-year LLB law student at the University of Strathclyde and a student legal advisor at the University of Strathclyde Law Clinic. 
