Waymo Vehicles 65 Times More Likely to Pass School Buses Illegally Despite Safety Training

Austin, Sunday, 29 March 2026.
Austin’s school district attempted to help train Waymo’s self-driving cars to stop for school buses, but the collaboration failed dramatically. Between August 2025 and February 2026, Waymo vehicles committed 25 illegal passes of stopped school buses, making them 65 times more likely than human drivers to violate this critical safety rule. Despite a federal recall in December 2025 and subsequent software updates, the violations continued, with at least four more incidents by January 2026. School officials report that while 98% of human drivers who receive one violation never receive another, Waymo’s AI system shows no sign of learning from repeated mistakes, raising serious questions about whether autonomous vehicles are ready to operate around children.

The Scale of the Problem

The magnitude of Waymo’s school bus safety failures becomes clear in the numbers. Between mid-August 2025 and February 10, 2026, Austin recorded 8,000 instances of human drivers illegally passing stopped school buses [2]. During the same period, Waymo’s driverless taxis committed 25 illegal passes of stopped school buses [2]. This creates a stark statistical reality: Waymo vehicles are 65 times more likely to illegally pass a stopped school bus than the average Austin motorist [2]. Austin Independent School District works with BusPatrol to install enforcement cameras on its buses, and each illegal pass carries a $300 fine [2]. The district’s collaboration with Waymo to address these violations represents an unprecedented attempt by a school system to directly train autonomous vehicle technology for child safety.
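The 65x figure is a ratio of violation *rates*, not raw counts (25 vs. 8,000), since Austin has far more human drivers than Waymo vehicles. The following is a minimal sketch of how such a per-vehicle ratio could be derived; the fleet and driver counts are hypothetical placeholders chosen only to make the arithmetic come out to 65, not figures from the article.

```python
def violation_rate_ratio(av_violations, av_vehicles,
                         human_violations, human_drivers):
    """Ratio of per-vehicle violation rates: AV fleet vs. human drivers."""
    av_rate = av_violations / av_vehicles
    human_rate = human_violations / human_drivers
    return av_rate / human_rate

# Reported violation counts, mid-Aug 2025 to Feb 10, 2026 [2]:
av_violations = 25        # Waymo illegal passes
human_violations = 8_000  # human-driver illegal passes

# Hypothetical exposure figures (the article does not report them):
av_vehicles = 100
human_drivers = 2_080_000

ratio = violation_rate_ratio(av_violations, av_vehicles,
                             human_violations, human_drivers)
print(f"{ratio:.1f}x")  # prints 65.0x under these assumed counts
```

The key point the sketch illustrates: the ratio depends entirely on the exposure denominators, which is why a fleet of a few hundred robotaxis can rack up a far worse per-vehicle rate than two million motorists despite committing far fewer absolute violations.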

Failed Training Initiative and Persistent Violations

Austin Independent School District took a proactive approach, hosting a “data collection” event in mid-December 2025 in a school parking lot specifically so Waymo could gather information on school buses and their flashing lights [1]. The training initiative, however, failed to achieve its intended results. Although Waymo engineers developed software changes to address the behavior weeks before the company’s December 2025 federal recall [1], violations continued unabated. By mid-January 2026, the school district reported at least four more school-bus-passing incidents in Austin [1]. In a particularly concerning incident on January 12, 2026, a Waymo remote assistant incorrectly told a robotaxi that the school bus ahead did not have its stop signals active, leading six vehicles to pass the stopped bus [1]. The episode highlights not just software failures but potential human-oversight errors in Waymo’s remote monitoring system.

Learning Patterns: Humans vs. Machines

The difference between human and machine learning shows up clearly in Austin’s violation data. According to an official with the school district’s police department, “The data we collected from the beginning of the school year to the end of the semester shows that about 98 percent of people that receive one violation do not receive another… That tells us that the person is learning, but it does not appear the Waymo automated driver system is learning through its software updates, its recall, what have you, because we are still having violations” [1]. The repeat-offense statistics underscore the contrast: Waymo AVs repeated the illegal-passing mistake 24 times over the last seven months, while human drivers have roughly a 1% repeat rate [2]. The pattern suggests a fundamental challenge in how machine learning systems absorb and apply safety rules in real-world driving.
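The district’s 98 percent figure is equivalent to a repeat rate of about 2 percent: the share of distinct violators cited more than once. A minimal sketch of how such a statistic could be computed from a citation log, using made-up license-plate identifiers purely for illustration:

```python
from collections import Counter

def repeat_rate(citations):
    """Fraction of distinct violators who were cited more than once."""
    per_violator = Counter(citations)
    repeaters = sum(1 for n in per_violator.values() if n > 1)
    return repeaters / len(per_violator)

# Made-up citation log keyed by license plate (illustrative only):
log = ["TX-001", "TX-002", "TX-003", "TX-001", "TX-004", "TX-005"]
print(f"{repeat_rate(log):.0%}")  # prints 20% (1 repeater among 5 violators)
```

Under this framing, the article’s comparison is between a human repeat rate of about 1–2% and a single automated “driver” that, as one system, re-committed the same violation 24 times.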

Technical Challenges and Expert Analysis

The technical complexity behind Waymo’s school bus recognition problems stems from the varied contexts in which stop signs appear throughout urban environments [1]. Philip Koopman, an autonomous-vehicle software and safety researcher at Carnegie Mellon University, explains the core issue: “Waymo is struggling to teach their machine learning the lesson Waymo wants it to learn. That’s not a surprise. This was always going to be a problem” [1]. The challenge extends beyond simple object recognition to contextual understanding of when and how different stop signals must be obeyed. Missy Cummings, an autonomous vehicle researcher, warns that the problem will only escalate: “If [the company] didn’t fix this a few years ago, the more they drive, the more it’s going to be a problem. That’s exactly what’s happening here” [1]. Cummings has recommended that Waymo “should not be allowed to operate around schools during school pickup and drop-off until they get this problem fixed and can demonstrate it with specific tests” [1]. Austin ISD and its police department did suggest that Waymo avoid driving during school bus pickup and drop-off hours, but the company refused [2].

Company Response and Future Expansion Plans

Despite the mounting evidence of safety failures, Waymo maintains its position on safety performance. A Waymo spokesperson stated, “Our safety performance around school buses is superior to human drivers” [2]. This claim stands in stark contrast to the statistical evidence showing the company’s vehicles are 65 times more likely to commit school bus violations. On February 11, 2026, Waymo Co-CEO Tekedra Mawakana would not confirm that the problem had been solved [2]. The company appears undeterred by these safety concerns: Waymo plans to expand into Washington, Detroit, Las Vegas, San Diego, Denver, and nine other U.S. and international cities in 2026 [2], and anticipates reaching 1 million paid robotaxi rides per week in the U.S. by the end of 2026 [2]. The expansion timeline raises questions about whether the company will resolve its school safety protocols before deploying in new markets with different school districts and traffic patterns.

Sources

