Last winter, engineers at a natural gas power plant in Texas got a warning from their AI monitoring system: shut down the backup generator immediately. The AI flagged it as a fire hazard. Problem was, the generator wasn’t even running. It was offline for maintenance. Someone could have died if operators had followed the advice without double-checking.

This isn’t just a one-off glitch. Security researchers at Dragos, a company that protects industrial control systems, have documented at least a dozen similar incidents in the past two years. The pattern is always the same: AI models trained on years of operational data suddenly invent dangerous procedures. They don’t just get things wrong—they get things wrong with absolute confidence, making the errors harder to spot.

How AI hallucinations work in critical systems

Large language models don’t know what they don’t know. When an AI guesses a procedure for a power plant or water treatment facility, it doesn’t say “I’m 70% sure this is right.” Instead, it states the answer as if it’s gospel. The training data contains millions of correct procedures, so the model defaults to the most probable sequence, even when that sequence makes no sense in the current situation.
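
To see why the output sounds so certain, here is a minimal sketch in plain Python. The candidate actions and logit values are invented for illustration and have nothing to do with any real plant system; the point is only that greedy decoding emits whatever continuation scores highest, and nothing in that step attaches an uncertainty qualifier unless one is added on top.

```python
import math

def softmax(logits):
    """Turn raw model scores (logits) into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-action candidates and scores, invented for illustration.
candidates = ["shut down backup generator", "hold and verify with operator",
              "raise a low-priority alert", "take no action"]
logits = [1.3, 1.1, 0.9, 0.2]

probs = softmax(logits)
best = max(range(len(candidates)), key=lambda i: probs[i])

# Greedy decoding emits the single highest-scoring continuation, full stop.
# Here that is "shut down backup generator" at only ~35% probability, and
# nothing in this step adds an "I'm not sure" qualifier to the output.
print(f"{candidates[best]}  (p={probs[best]:.2f})")
```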

Take the Texas case. The AI had seen countless examples of fire safety protocols. It connected the words “generator” and “emergency” in its training data and produced a shutdown command. The operators caught it because the generator wasn’t even active, but in other cases the hallucination is more plausible. A water treatment plant in Florida recently received AI-generated instructions to raise chlorine dosing to dangerous concentrations, based on a misinterpretation of sensor data.

Why humans struggle to catch these mistakes

Operators in critical infrastructure are trained to trust data, not gut feelings. When an AI system gives a clear instruction with no uncertainty markers, it triggers the same trust response as a human colleague would. That’s the problem. People tend to defer to confident voices, whether the voice belongs to a boss, a doctor, or an AI assistant. The AI doesn’t have the humility to say “I’m not sure what to do here.”

Security experts point to a growing trend: attackers are starting to weaponize this weakness. Last month, a ransomware group claimed to have hacked a European energy company by feeding false operational data to the AI monitoring system. The group didn’t need to break encryption or bypass firewalls—they just needed to make the AI lie convincingly. The company says no damage occurred, but the method is out there now.

What’s being done about it

The Cybersecurity and Infrastructure Security Agency (CISA) issued a warning in March about AI-related risks in critical infrastructure. The agency recommends adding human verification layers for any AI-generated instructions. But that’s easier said than done. Power plants and water systems run 24/7. Adding extra steps slows down responses during emergencies.
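
In the abstract, a verification layer can be as simple as a gate that nothing AI-generated passes without a human sign-off. The sketch below is a hedged illustration of that concept only, not CISA guidance or any vendor’s product; the class, function, and device names are all invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    source: str      # e.g. "ai-monitor-3" (hypothetical identifier)
    action: str      # e.g. "shut down backup generator 2"
    rationale: str   # model-supplied justification, shown to the operator

def dispatch(rec: Recommendation,
             operator_approves: Callable[[Recommendation], bool]) -> bool:
    """Forward an AI-generated instruction only after explicit human sign-off.

    operator_approves is whatever presents the recommendation to a person
    (an HMI prompt, a two-person rule) and returns True or False.
    """
    if rec.source.startswith("ai-") and not operator_approves(rec):
        print(f"BLOCKED: {rec.action} (operator rejected or did not confirm)")
        return False
    print(f"EXECUTE: {rec.action}")
    return True

# A Texas-style case: the operator knows the unit is offline for maintenance,
# so the confident but wrong instruction never reaches the control system.
rec = Recommendation("ai-monitor-3", "shut down backup generator 2", "possible fire hazard")
dispatch(rec, operator_approves=lambda r: False)
```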

Some companies are testing new approaches. Siemens is developing AI systems that flag their own uncertainty by displaying confidence scores next to recommendations. Schneider Electric is building monitoring tools that cross-check AI outputs against historical patterns before sending alerts to operators. These solutions cost millions to implement and require retraining entire teams.
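
In spirit, the cross-checking idea reduces to comparing a recommendation against what the plant has actually seen before and attaching a confidence score to the result. The sketch below is a rough stand-in, not Siemens’ or Schneider Electric’s implementation: a single setpoint checked against historical readings with a z-score threshold and a crude 0-to-1 confidence; all numbers are invented.

```python
import statistics

def plausibility_check(recommended: float, history: list[float], k: float = 3.0):
    """Flag an AI-recommended setpoint that falls outside the plant's own history.

    Returns (ok, confidence): ok is False when the value sits more than k
    standard deviations from the historical mean; confidence is a crude 0-1
    score derived from that distance. Thresholds are illustrative, not tuned.
    """
    mean = statistics.fmean(history)
    spread = statistics.pstdev(history) or 1e-9
    distance = abs(recommended - mean) / spread
    ok = distance <= k
    confidence = max(0.0, 1.0 - distance / (2 * k))
    return ok, confidence

# Invented chlorine dosing history (mg/L) and an AI recommendation far above it.
history = [1.1, 1.2, 1.0, 1.3, 1.1, 1.2, 1.0, 1.2]
ok, confidence = plausibility_check(11.0, history)
print(ok, round(confidence, 2))  # False 0.0 -> hold the alert for human review
```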

The bigger picture: when confidence becomes dangerous

The Texas generator incident shows how quickly things can go wrong. If operators had followed the AI’s advice, the backup system might have failed during a real emergency. The power plant would have lost critical redundancy, potentially leading to blackouts during winter storms.

This isn’t just about AI making mistakes—it’s about humans learning to distrust machines that sound too sure of themselves. The technology isn’t going away, and neither are the risks. We’re entering a period where every critical system needs a “trust but verify” policy for AI recommendations. The question isn’t whether it will happen again—it’s when.

What You Need to Know

  • Source: The Hacker News
  • Published: May 14, 2026 at 11:30 UTC
  • Category: Security
  • Topics: #hackernews · #security · #vulnerabilities · #exploit · #hallucinations-are-creating-real-security-risks

Read the Full Story

This is a curated summary. For the complete article, original data, quotes and full analysis:

Read the full story on The Hacker News →

All reporting rights belong to the respective author(s) at The Hacker News. GlobalBR News summarizes publicly available content to help readers discover the most relevant global news.


Curated by GlobalBR News · May 14, 2026


🇧🇷 Portuguese Summary

Confident but dangerously misleading artificial intelligence is starting to threaten critical systems around the world, and Brazil is not immune.

So-called AI hallucinations, cases in which overconfident models deliver answers that are completely wrong or simply invented, have stopped being merely a problem of annoying chatbots and become a real threat to the security of essential infrastructure. Recent cases involving decision-support systems in sectors such as energy, transportation, and healthcare already show how these errors can propagate dangerously: from mistaken medical diagnoses to failures in power grids, blind trust in AI without adequate human oversight can have irreversible consequences.

In Brazil, where the digitalization of public and private services is advancing rapidly, including the adoption of AI in sensitive areas, the vulnerability is even more critical. Experts warn that, without clear regulation and strict validation protocols, the country could become a minefield for incidents ranging from financial losses to risks to life. The next step is to demand transparency from the companies developing these systems and urgent measures from the government to contain the risks before a tragedy occurs.
No Brasil, onde a digitalização de serviços públicos e privados avança rapidamente — inclusive com a adoção de IA em áreas sensíveis —, a vulnerabilidade se torna ainda mais crítica. Especialistas alertam que, sem regulamentação clara e protocolos rígidos de validação, o país pode se tornar um campo minado para incidentes que vão desde prejuízos financeiros até riscos à vida. O próximo passo é cobrar transparência das empresas desenvolvedoras e cobrar do governo medidas urgentes para conter os riscos antes que uma tragédia ocorra.