Tesla's robotaxis crashed into barriers after remote operators steered them incorrectly.
- Remote operators crashed robotaxis into a metal fence and barricade
- Tesla disclosed the incidents in a regulatory filing
- The crashes happened during autonomous vehicle testing
Tesla confirmed that remote operators recently drove its autonomous robotaxis into a metal fence and a construction barricade while testing the vehicles. The incidents, revealed in a routine regulatory filing, show that even with human oversight, Tesla’s self-driving system still struggles in real-world scenarios. Neither crash resulted in injuries, but the vehicles sustained damage, and the incidents raised fresh questions about the reliability of remote human intervention in autonomous driving systems.

Tesla’s robotaxis are not yet commercially available, but the company has been aggressively testing them in select U.S. cities, including San Francisco and Austin, Texas. The crashes occurred during these trials, where remote operators step in to assist the car’s AI when it’s unsure how to proceed. That’s supposed to be a safety net, not a crutch. Instead, it’s becoming a liability.

Tesla didn’t say how often remote operators had to intervene before these crashes or whether the incidents were isolated. But the company did note that the vehicles were operating in autonomous mode when the mishaps happened. That suggests the AI’s decision-making was flawed enough to require human correction, and the humans got it wrong.

Tesla didn’t release footage or timestamps for the crashes, so we don’t know the exact circumstances. What we do know is that the company’s self-driving ambitions hinge on this human fallback system working flawlessly. If remote operators can’t reliably take control, Tesla’s timeline for scaling robotaxis looks shaky.

It’s not the first time Tesla’s self-driving tech has faced scrutiny. Over the years, the company has been criticized for overpromising on autonomy while delivering incremental improvements. Regulators have also investigated multiple crashes involving Tesla’s Full Self-Driving (FSD) system, which isn’t fully autonomous despite the company’s marketing. These latest incidents add to the growing list of red flags.
Tesla’s approach to autonomous driving relies heavily on collecting real-world data from its fleet of customer-owned cars. The company argues that this massive dataset helps train its AI, but critics say it’s not enough to guarantee safety, especially in edge cases like construction zones or unexpected obstacles.

The crashes also raise concerns about Tesla’s remote operator model. Unlike Waymo’s fully autonomous robotaxis, which don’t rely on human drivers at all, Tesla’s system still depends on humans to handle tricky situations. That human-in-the-loop requirement introduces another layer of unpredictability. Why did the operators steer into the fence or barricade? Was it a misjudgment, a lag in the system, or a flaw in the interface they use to take control? Tesla hasn’t explained.

The company did not respond to requests for comment about whether it plans to change its remote operator protocols or retrain its AI based on these incidents. For now, the crashes serve as a reminder that autonomous driving isn’t just about the car; it’s about the humans behind the wheel, even when that wheel is virtual. Tesla’s robotaxi rollout isn’t happening tomorrow, but these incidents suggest the road to full autonomy is bumpier than the company hoped.
What You Need to Know
- Source: Wired
- Published: May 15, 2026 at 19:51 UTC
- Category: Technology
- Topics: #wired · #tech · #science · #details-about-robotaxi · #crashes · #humans-involved-remote
Read the Full Story
This is a curated summary. For the complete article, original data, quotes and full analysis:
All reporting rights belong to the respective author(s) at Wired. GlobalBR News summarizes publicly available content to help readers discover the most relevant global news.
Curated by GlobalBR News · May 15, 2026
🇧🇷 Summary in Portuguese
Tesla recently admitted that remote operators made errors that caused its autonomous vehicles under development, the so-called robotaxis, to collide with metal barriers and construction structures in the United States, shedding light on the challenges that autonomous driving still faces in the real world.
The case is relevant in Brazil, where the discussion around autonomous mobility and the regulation of autonomous vehicles is beginning to gain traction, especially amid growing investment in technology and urban infrastructure. The failures reported by Tesla reveal not only the current technical limits of self-driving systems but also their critical dependence on human supervision in unexpected situations, an important warning for the Brazilian market, which looks to innovations like these to solve transportation and logistics problems. The episode also reinforces the need for clear regulatory frameworks in the country, which is still debating how to keep pace with rapid technological innovation without compromising safety.
Looking ahead, Tesla promises adjustments to its remote operation protocols, while Brazilian analysts are already debating how the country can prepare for a scenario in which autonomous vehicles become commonplace, whether by defining laws, training a specialized workforce, or adapting its cities.
🇪🇸 Summary in Spanish
Tesla brings to light the human failures behind its robotaxi crashes, a reminder that autonomous technology still depends on human hands and human decisions.
The company admitted that remote operators unintentionally caused at least two collisions involving its autonomous vehicles: one crashed into a metal fence and another hit a construction barrier. Although minor in impact, these incidents expose the current limits of automated driving systems, where human supervision remains key. For Spanish-speaking users, the case underscores the importance of not blindly trusting artificial intelligence behind the wheel and of understanding that, for now, safety depends as much on algorithms as on the people who supervise them. It also fuels the debate over regulating these vehicles in markets such as Spain and Latin America, where their adoption still raises doubts.