StormWatch | When Public AI Meets Government Secrets | Lessons from the CISA ChatGPT Incident
What Happened
- The Acting Director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA)—Madhu Gottumukkala—uploaded government documents marked “for official use only” to a public version of ChatGPT last summer.
- These were not classified documents, but they were still sensitive and not intended for public disclosure.
Security Alerts & Review
- The uploads triggered automated security warnings within CISA’s own cybersecurity monitoring systems.
- Because the documents left federal networks and entered a public AI service, officials at the Department of Homeland Security launched an internal review to assess whether any harm to government security resulted.
Why It’s Controversial
- Public AI vs. Government Security: Uploading materials to a public AI platform can pose risks because — depending on tool settings — the content could be retained, potentially used in training future responses, or otherwise exposed to other users.
- Access Privilege: Gottumukkala was reportedly given a special exception to use ChatGPT — even though other Department of Homeland Security (DHS) employees were blocked from accessing the service.
- One anonymous official reportedly criticized the decision, saying Gottumukkala essentially “forced CISA’s hand into making them give him ChatGPT, and then he abused it.”
Background on the Acting Director
- Madhu Gottumukkala was appointed acting head of CISA in May 2025 after serving as the Chief Information Officer for South Dakota.
- Reports indicate he previously took and failed a polygraph tied to intelligence access, which has itself been a source of internal controversy at the agency.
Official Response
- A CISA spokesperson characterized his use of ChatGPT as “short-term and limited” and conducted under an authorized exception, but the agency’s statements did not fully clarify the scope of documents involved or the outcome of the internal review.
In short: a senior U.S. cyber official responsible for defending federal systems made a significant procedural misstep by placing sensitive government documents into a public AI model, prompting security alerts and an internal government review. The episode has sparked debate about leadership judgment and about how government agencies should interact with commercial AI tools.
How Data Can Be Retained in AI Systems
When someone uploads a document to a public AI system like ChatGPT:
- Short-term processing: The text is sent to the model to generate a response.
- Logging and caching: Many AI services temporarily store inputs and outputs to monitor usage, debug issues, and improve performance.
- Training datasets (possible): Some services may use user interactions to fine-tune future models, though OpenAI says it does not use data submitted through certain API endpoints or enterprise accounts for model training unless the customer opts in.
The risk: even if a document isn’t publicly viewable, it can exist in internal logs, backups, or cached copies, which are harder to fully erase than a file on your personal computer.
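To make the first step concrete, here is a minimal Python sketch of what “uploading a document” to a hosted chat service looks like under the hood. The endpoint and field names follow OpenAI’s public Chat Completions API, but the file name, model name, and prompt are illustrative assumptions; the general point is that the full document text travels in the request body, so anything the provider logs or caches can include that text.

```python
import requests

API_KEY = "sk-..."  # placeholder credential for the hosted service

# Read the local document. From this point on, its full text is in memory
# and about to be transmitted to a third-party service.
with open("internal_memo.txt", "r", encoding="utf-8") as f:
    document_text = f.read()

# The document text is embedded verbatim in the request body. Anything the
# provider retains (request logs, caches, abuse-monitoring stores) can
# therefore contain the sensitive content itself, not just metadata.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o-mini",  # illustrative model name
        "messages": [
            {"role": "user",
             "content": f"Summarize this document:\n\n{document_text}"},
        ],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```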
Why Erasing Data Is Complicated
Think of it like trying to remove a drop of dye from a swimming pool:
- Replication: Logs might be copied across multiple servers or regions for redundancy.
- Backups: Periodic backups can capture the data before it’s deleted.
- Propagation in training: If a model was trained on the text (even partially), the knowledge can be baked into weights and patterns. You can’t “delete” a single fact from a neural network once it’s integrated—it’s entangled with millions of other parameters.
Even if a company deletes a file, there’s no guarantee it disappears from every backup or internal system immediately.
Mitigation Strategies
For sensitive materials:
- Enterprise or isolated instances: Use enterprise or private instances of AI tools, so data isn’t sent into a shared public system.
- Redaction: Remove or mask sensitive identifiers before submitting anything to an AI tool (see the sketch after this list).
- Data deletion policies: Some AI providers accept formal deletion requests, but these mainly guarantee removal from active storage, not necessarily from historical backups.
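As one concrete illustration of the redaction step, here is a minimal Python sketch that masks a few common identifier patterns before text is sent anywhere. The patterns shown (emails, phone numbers, SSN-style numbers) are assumptions chosen for the example; real redaction of government material would need a vetted, far broader rule set plus human review.

```python
import re

# Illustrative patterns only; a production redaction pass would need a much
# more complete and reviewed rule set (names, case numbers, addresses, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

sample = "Contact J. Doe at jdoe@agency.gov or 202-555-0101 re: case 123-45-6789."
print(redact(sample))
# Contact J. Doe at [REDACTED-EMAIL] or [REDACTED-PHONE] re: case [REDACTED-SSN].
```

Even with a pass like this, redaction only reduces exposure; the surrounding context of a sensitive document can still reveal more than intended, which is why it complements rather than replaces the use of private instances.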
The Internet Never Forgets, and Neither Does AI
Bottom line: Once sensitive information leaves the internal network and is uploaded to a public AI system, complete erasure is very hard to guarantee. That’s why government agencies treat this as a serious operational security issue.
StormWatch™ | Real-World Cybersecurity Advisories
They say lightning never strikes twice. In cybersecurity, patterns repeat.
StormWatch™ goes beyond surface-level incident reporting. Each advisory traces events to the end of the risk chain — examining root causes, second-order effects, and structural vulnerabilities that others stop short of exploring.
Where mitigation is possible, StormWatch provides concrete, actionable steps. Where it is not, that reality is stated plainly.
Signal. Structure. Solutions.
Discover More from Hunter Storm
- Hunter Storm Official Site
- Hunter Storm’s Year with ChatGPT 2025 | Creativity, Research, and Collaboration
- Public AI, Private Data | Why Uploading Sensitive Information Is a Systemic Failure—Not an AI Problem
- StormWatch | Apple Plans to Scan U.S. Phones for Child Abuse Imagery
- StormWatch | ChatGPT | Public Data Exposure
- StormWatch | Lessons from the CISA ChatGPT Incident
- StormWatch | Real-World Cybersecurity Advisories
- StormWatch | When the Firewall Becomes the Breach | The F5 Hack and What It Teaches Us About Trust, Timing, and Transparency