AI safety – A consumer and public interest perspective

Uses of AI technologies are concentrated in industrial applications that affect the public mainly indirectly, but public debate naturally focuses on applications that affect people directly, such as decisions on eligibility for benefits. The UK Government’s recent international conference on AI safety and associated events such as the “AI Fringe” have fostered discussion on the safety of this latter type of application, with an unusual and welcome emphasis on participation by representatives of civil society and consumer groups. This short article is based on my remarks in a panel session at the British Standards Institution (BSI)’s “AI for All” fringe conference on 26 October 2023, at which I represented the Consumer and Public Interest Network (CPIN) of BSI. [1]

What does “safe AI” mean for consumers?

AI safety for consumers and the public seems to be broadly understood as the minimisation of harms from AI, with a wide range of harms rating a mention (unlike product safety, which has traditionally focused on risks of physical harm). Bias (from models built on skewed data) has raised widespread concern, as have misinformation (such as outputs from generative AI), job displacement, and risks to the privacy and security of personal data. Accessibility and inclusiveness of AI-supported applications are also mentioned, as of course is physical harm, for example from automated vehicles.

One area of harm which I feel deserves far more attention than it gets is environmental sustainability. Personally, I would make this the top priority, because without sustainability there will be no future. It cannot be called “safe” to bring forward a world of floods, fires and depleted natural resources, which means that any product or service that works against sustainability goals is clearly not safe in a broad sense. In my view, all AI-supported applications should be tested for their contribution to sustainability, over their whole foreseeable lifecycle, before being released to market. A recent German initiative explores this type of thinking in more depth.

Risks to sustainability can occur directly, for example through high energy consumption, but also – and this is important – indirectly, for example through rebound effects, such as people spending the spare time or money that AI gives them in environmentally unfriendly ways. How to reduce undesirable rebound effects is a tough question, but AI itself can at least help people to be aware of the sustainability implications of their choices.

Consumers want more control

One way to address consumer safety may be to give consumers more control over their AI-based applications. There is good evidence that consumers generally will feel (and may well be) safer if they have more control over the online and technology-supported parts of their lives. Companies marketing AI-supported applications to consumers could therefore consider an “OFF switch” for personal data gathering, countering any fear of being “snooped on”. A distinction between essential and inessential functions would be needed for products where complete switching off is not in the consumer’s interest, such as medical devices. Of course, these “switches” must be guaranteed to work properly: reliably certified control mechanisms should do much to build consumer safety and hence justify trust.

A related type of control is the fine-tuning of algorithmic recommendations based on personal profiling, as required by the EU Digital Services Act (DSA)[2] and by recent Chinese regulation on recommendation algorithms[3]. This should give end-users the chance to change the ways they have been tagged, which may be incorrect or may represent past behaviours that the individual wants to move away from, such as eating certain foods.

Another way for people to regain some control is through rights of appeal against automated decisions, often referred to as having a “human in the loop”. A point not often made is that these humans must be equipped to exercise genuine independent judgement, rather than acting as cogs in corporate structures designed for other goals.

Beyond voluntary standards: the need for regulation

Around the recent AI Safety Summit there has been much expert comment[4] on the need to act now to prevent further harm of already well-known types, such as those mentioned above, with even greater urgency than is given to the longer-term issues of frontier AI. Thinking of consumer and public interests, I strongly agree with this viewpoint.

As someone working in the standards world, I naturally have the contribution of standards front of mind. But we must remember that not all AI players are good ones: many simply do not think or care about the wider impacts of their activities, and many are actively bad actors. Voluntary standards will therefore never be enough; their most important provisions need to be identified and made enforceable.

So we need strong and visible regulation, backed up by detailed sets of guidelines and standards. As regulation inevitably lags behind the technology, it is fortunate that guidelines and standards can be put in place relatively fast, and the best of them can then be cemented into enforceable rules. An AI watchdog is a good idea, but it must be able to bite as well as bark.


[1] The author would like to extend her thanks to BSI for the speaking invitation and to CPIN colleagues for their support. CPIN consists of over 50 unpaid experts with various professional backgrounds who work on standards from a consumer perspective.

[2] Articles 27 and 38 on Recommender Systems of the EU Digital Services Act.

[3] See Articles 16 and 17 of the Provisions on the Management of Algorithmic Recommendations in Internet Information Services (chinalawtranslate.com).

[4] See for example the Ada Lovelace Institute’s reports on regulating AI and lessons from other sectors, and remarks from a selection of Oxford academics.