Virtual Session "Do Large Language Models have a Duty to Tell the Truth?"

Sep 6 @ 17:00 - 17:30 JST
Details:

Topic: "Do Large Language Models have a Duty to Tell the Truth?"

Speaker: Brent Mittelstadt, PhD / Associate Professor / University of Oxford
He leads the Governance of Emerging Technologies (GET) research programme, which works across ethics, law, and emerging information technologies. He is a data ethicist and philosopher specializing in AI ethics, algorithmic fairness and explainability, and technology law and policy. Prof. Mittelstadt is the author of foundational works addressing the ethics of algorithms, AI, and Big Data; fairness, accountability, and transparency in machine learning; data protection and non-discrimination law; group privacy; ethical auditing of automated systems; and digital epidemiology and public health ethics. His contributions in these areas are widely cited and have been taken up by researchers, policy-makers, and companies internationally, featuring in policy proposals and guidelines from the UK government, the Information Commissioner’s Office, and the European Commission, as well as in products from Google, Amazon, and Microsoft. He also serves on the Advisory Board of the IAPP AI Governance Centre.

Abstract:
Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education, and the development of shared social truths in democratic societies. LLMs produce responses that are plausible, helpful, and confident, but that contain factual inaccuracies, inaccurate summaries, misleading references, and biased information. These subtle mistruths are poised to cause a severe cumulative degradation and homogenisation of knowledge over time.

This talk examines the existence and feasibility of a legal duty for LLM providers to create models that “tell the truth.” LLM providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes. Careless speech is defined and contrasted with the simplified concept of “ground truth” in LLMs and with prior discussions of truth-related risks in LLMs, including hallucinations, misinformation, and disinformation. EU human rights law and liability frameworks contain some truth-related obligations for products and platforms, but these are relatively limited in scope and sectoral reach.

The talk concludes by proposing a pathway to create a legal truth duty applicable to providers of both narrow- and general-purpose LLMs, and by discussing “zero-shot translation” as a prompting method to constrain LLMs and better align their outputs with verified, truthful information.
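As a rough illustration only (the abstract does not spell out the technique), a “zero-shot translation”-style prompt would constrain the model to rephrase verified source text rather than assert facts of its own. In the sketch below, the template wording and the names build_prompt and call_llm are hypothetical assumptions, not the speaker’s implementation:

```python
# Illustrative sketch: constrain an LLM to "translate" (rephrase) verified
# source material instead of generating facts from its own parameters.
# All names and template wording here are hypothetical.

VERIFIED_SOURCE = """\
(verified reference text would go here, e.g. an excerpt from an
authoritative document retrieved for this query)"""

PROMPT_TEMPLATE = """\
You are a translator, not an author. Answer the question using ONLY
the verified source below. Rephrase or summarise its content; do not
add facts that are not stated in it. If the source does not contain
the answer, reply exactly: "The verified source does not answer this."

Verified source:
{source}

Question:
{question}
"""

def build_prompt(question: str, source: str = VERIFIED_SOURCE) -> str:
    """Embed the question and verified source in the constraining template."""
    return PROMPT_TEMPLATE.format(source=source, question=question)

def call_llm(prompt: str) -> str:
    """Placeholder for a provider-specific LLM API call."""
    raise NotImplementedError("wire up your LLM provider here")

if __name__ == "__main__":
    # Inspect the constrained prompt that would be sent to the model.
    print(build_prompt("What obligations do LLM providers have under EU law?"))
```

The fixed refusal string is a deliberate design choice in this sketch: forcing an exact abstain response makes it easy to detect when the verified source does not support an answer, rather than letting the model improvise.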