Who Killed Elaine Herzberg: A Tragic Accident or a Political Experiment in Public?

Shengyu Yang—Feb 9, 2026

Elaine Herzberg was walking her bike across a road in Tempe, Arizona, in 2018 when an Uber self-driving car hit and killed her. It was late at night. The road was dark. The car was in autonomous mode, with a human safety driver sitting behind the wheel. Herzberg did not know she was part of an experiment. She did not agree to take part. Yet she became the first pedestrian to die in a crash involving a self-driving car, and the story was quickly framed as a tragic mistake rather than a warning sign (https://www.bbc.com/news/technology-54175359). The uncomfortable question is not only what went wrong, but who should be held responsible.

This crash is often explained as a simple case of human error. The safety driver was distracted. The pedestrian was in the wrong place. The system failed to react in time. But this explanation is too neat, and perhaps too convenient. The deeper issue is that companies are allowed to test risky technologies on public roads, where ordinary people become test subjects without consent. This turns innovation into a public experiment. It also blurs responsibility, spreading it across the company, the engineers, and the government that permitted the testing in the first place. If we only blame the one person at the scene, we avoid asking whether this experiment should have been happening at all.

After the crash, police quickly blamed Herzberg. They suggested she should not have been crossing the road, as if her decision were the main cause of what happened. Stilgoe points out that this kind of reaction is predictable, because it focuses on the closest and easiest explanation rather than the wider system that made the crash possible (https://link.springer.com/chapter/10.1007/978-3-030-32320-2_1). Blaming the pedestrian creates a clear story. It also protects the organisations involved. If the crash is described as an accident, it becomes something unfortunate but unavoidable. Yet this language is political. It closes down debate. It encourages us to stop asking why the car was there, why the technology was treated as ready enough, and why public streets were used as a testing ground. The same pattern appears in how we talk about unintended consequences. Calling harm unintended can make it sound natural and blameless, even when it flows from deliberate choices. Calling the crash an accident hides the chain of decisions that made it possible.

This is why it matters to treat technology as more than a neutral tool. In science and technology studies, a key idea is that technologies shape the world around them. They organise behaviour, create new risks, and quietly change what society accepts as normal. Langdon Winner makes this argument strongly when he suggests that technologies can act like forms of legislation, because they shape how people live, often without public debate (https://www.taylorfrancis.com/chapters/edit/10.4324/9781315259697-21/artifacts-politics-langdon-winner). In the Herzberg case, the road became a testing ground, and her death became part of the cost of innovation. Sheila Jasanoff makes a similar point when she argues that technology is a form of power, because it reorganises social life and decides whose values are built into the future. Uber’s self-driving system was not just a machine. It carried assumptions about what counts as safe enough, whose safety matters most, and which risks are acceptable. If we ignore these political choices, we fall into what Winner calls technological somnambulism, where society sleepwalks into a future shaped by private companies rather than democratic decision-making.

What counts as safe is not a fixed measure of physics or engineering. It is the outcome of choices made by companies and regulators about which risks are acceptable and for whom (https://www.sciencedirect.com/science/article/abs/pii/S0951832097001348). In the Uber crash, the automated system detected Elaine Herzberg several seconds before the collision, but it never engaged emergency braking in autonomous mode, because Uber had disabled that function to avoid what it saw as erratic vehicle behaviour and relied on the safety operator to intervene instead. The system also struggled to classify her correctly, identifying her first as an unknown object and then as a bicycle shortly before impact, which limited its ability to respond in time. These design decisions reflect trade-offs between false positives and false negatives that engineers and managers make under pressure from cost, speed of development, and commercial competition (https://www.ntsb.gov/investigations/Pages/HWY18MH010.aspx?utm_source). Safety, then, becomes a matter of negotiation: safe enough for what purpose, and for whose safety?
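The interaction between classification uncertainty and the disabled braking function is easier to see in a simplified sketch. The Python snippet below is purely illustrative: the thresholds, timings, labels, and the plan function are assumptions invented for this essay, not Uber's software or figures from the NTSB report. It only shows how a planner that resets its track whenever an object is relabelled, and that hands emergency decisions to a human operator rather than braking, can use up the entire window between detection and impact.

```python
from dataclasses import dataclass

# All thresholds and timings below are illustrative assumptions, not figures
# taken from the NTSB report or from Uber's actual system.
BRAKE_CONFIDENCE = 0.9   # minimum classifier confidence before hard braking is considered
MIN_TRACK_AGE = 2        # frames a label must persist before its predicted path is trusted
OPERATOR_REACTION = 1.5  # seconds assumed for a human operator to notice an alert and brake


@dataclass
class Detection:
    time_to_impact: float  # seconds remaining when this frame is processed
    label: str             # classifier output for the object ahead
    confidence: float      # classifier confidence in that label


def plan(frames: list[Detection], emergency_braking_enabled: bool) -> str:
    """Step through perception frames and report the first action the planner would take."""
    previous_label = None
    track_age = 0
    for frame in frames:
        # Each reclassification restarts the track, so the planner has less
        # accumulated history from which to predict where the object is heading.
        if frame.label != previous_label:
            previous_label = frame.label
            track_age = 0
        track_age += 1

        confident = frame.confidence >= BRAKE_CONFIDENCE and track_age >= MIN_TRACK_AGE
        if not confident:
            continue

        if emergency_braking_enabled:
            return f"emergency brake {frame.time_to_impact:.1f}s before impact"
        # With automatic emergency braking suppressed, the only remaining move is
        # to hand the decision to the human operator, which costs reaction time.
        if frame.time_to_impact > OPERATOR_REACTION:
            return f"alert operator {frame.time_to_impact:.1f}s before impact"
        return "operator alerted too late to brake"
    return "no confident, stable classification before impact"


# A hypothetical detection sequence: the object is relabelled twice
# before the classifier becomes confident.
frames = [
    Detection(6.0, "unknown object", 0.40),
    Detection(4.0, "vehicle", 0.55),
    Detection(2.5, "bicycle", 0.80),
    Detection(1.3, "bicycle", 0.95),
]

print(plan(frames, emergency_braking_enabled=True))   # braking path
print(plan(frames, emergency_braking_enabled=False))  # operator-handoff path
```

Even in this toy version, the values of BRAKE_CONFIDENCE and MIN_TRACK_AGE are exactly the trade-off described above: loosen them and the car brakes for shadows and plastic bags (false positives); tighten them and it becomes sure about a pedestrian only when there is no time left to delegate the decision to a human (false negatives). Whoever sets those numbers is deciding whose safety carries the cost.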

The problem in Tempe was not only Uber’s software or its choice to rely on a single safety operator. It was also the context in which that choice happened. American streets often prioritise vehicle traffic over pedestrian safety (https://www.thezebra.com/resources/driving/complete-streets/), creating environments that are less forgiving of mistakes. More importantly, Arizona had cultivated a permissive atmosphere for autonomous vehicle testing, encouraging companies to bring unproven technology onto public roads with few specific regulatory safeguards (https://www.bloomberg.com/news/articles/2018-03-20/arizona-became-self-driving-proving-ground-before-uber-s-deadly-crash?utm_source). After the crash, the state suspended Uber’s tests, but its initial laissez-faire approach had already helped create a regulatory landscape in which companies competed to be first rather than safest. What should governments do in this situation? The crash shows that weak oversight and regulatory competition can turn public spaces into de facto laboratories for corporate experiments, pushing the risk onto ordinary citizens.

Stilgoe calls the pattern of waiting for tragedy before acting a tombstone mentality. If rules are only put in place after someone dies, then the public is treated as an unwitting test group for technology that has not been proven safe. This approach is ethically troubling and politically avoidable. A better system would require stronger pre-market safety assessments, clear transparency about test protocols, avenues for public participation in deciding acceptable risk, and legally enforceable safety standards that cannot be bypassed in the name of innovation. These measures would shift the emphasis from reactive responses after harm to careful governance before harm occurs, aligning with seminar discussions on anticipatory or pre-emptive governance.

Elaine Herzberg was not merely struck by a malfunctioning car; she was harmed by a system that allowed the public to become part of a technological experiment without democratic consent. If we treat her death as an accident, we accept the politics embedded in our technological choices without question. Technologies do not just solve problems; they shape futures and distribute harm. If self-driving cars are to become commonplace, we must ask not only how safe they are, but who decides what safety means, and whether we are willing to use people’s lives as the price of testing.
