Wednesday's session of the Science and Technology Select Committee was supposed to be a routine evidence-gathering exercise ahead of the third reading of the Artificial Intelligence (Governance and Accountability) Bill. It became something else entirely: a seven-hour confrontation between parliamentarians alarmed by the speed of AI development and an industry that believes Britain is about to legislate itself out of the most consequential technological race of the century.
At issue is Clause 14 of the bill, which would impose strict liability on AI developers for harms caused by their systems in high-stakes domains including healthcare, financial services, law enforcement, and critical infrastructure. Unlike negligence-based liability, which requires proof of fault, strict liability holds a developer responsible regardless of whether it took reasonable precautions. It is the same standard that product liability law already applies to pharmaceutical companies.
"Clause 14, as written, would make every AI deployment in a hospital or a bank a potential existential legal liability. No one will build here. They'll build in Delaware." — Mustafa Hassan, CEO, Meridian AI, giving evidence to the committee
The government's position
The Science Minister, Cressida Park, defended the clause as a proportionate response to documented harms from AI systems deployed without adequate accountability. She cited three incidents from 2025: a diagnostic AI that returned false-positive cancer results in an NHS screening programme, an automated credit-scoring system found to have produced discriminatory outcomes against applicants from ethnic minority backgrounds, and a policing prediction tool whose outputs a High Court judge had described as "legally indefensible".
"The question is not whether AI can cause harm," Ms Park told the committee. "It manifestly can and has. The question is who bears the cost when it does — the people harmed, or those who profited from the deployment."
The minister also pushed back on industry warnings of capital flight, noting that the EU's AI Act already imposes comparable requirements and that developers had not abandoned the European market. Critics on the committee countered, however, that the UK market is considerably smaller than the EU's, and that the regulatory calculus facing a company weighing London against Brussels or New York is materially different.
Where the politics stand
The bill has cross-party support in principle but faces significant amendment pressure from the Conservative benches and from a group of Labour backbenchers with constituency interests in the technology sector. A coalition of 14 MPs from both major parties wrote to the Secretary of State this week urging a "safe harbour" provision that would shield developers who comply with a forthcoming government certification scheme from strict-liability exposure.
The government has signalled it is open to discussion but has not committed to the safe-harbour mechanism, wary of creating a box-ticking compliance culture that gives developers cover without guaranteeing actual safety.
The bill is scheduled for its third reading on 12 May. Observers believe the government has the numbers to pass it, but the amendment landscape remains fluid, and the House of Lords, where several peers with technology backgrounds sit, is expected to subject the liability framework to rigorous scrutiny. The outcome will shape Britain's relationship with the AI industry for a generation. Wednesday's session suggested that neither side has yet found a formulation both can live with.