Cyber Chief Turns to ChatGPT as Public AI Tools Tempt Executives

Jan. 29, 2026, 10:00 AM UTC

Widely available general-purpose AI tools like ChatGPT continue to infiltrate workplaces across companies and government—and even one of the top cybersecurity officials in the US is not immune.

Madhu Gottumukkala, acting director of the Cybersecurity and Infrastructure Security Agency, which is responsible for protecting critical infrastructure, turned to ChatGPT for work last summer under a temporary exception to the agency’s block of the OpenAI chatbot, CISA said in a statement Wednesday. Politico earlier reported that Gottumukkala’s upload of sensitive contracting documents into ChatGPT triggered several security warnings meant to prevent the unintentional disclosure of government material.

The use of ChatGPT by a senior official, even temporarily, is raising security concerns. It also highlights an issue that the government and corporate world continue to battle: Despite the spread of safer, in-house enterprise AI tools, executives and senior officials keep turning to publicly available tools like ChatGPT, which carry higher security and privacy risks.

Jennifer Ewbank, a former CIA deputy director for digital innovation, said this kind of employee use of an external AI model—sometimes referred to as shadow AI—happens when there is an urgency around work, available tools don’t meet the need and there’s ambiguity over rules.

“I can’t speak to CISA, but shadow AI is a growing problem across organizations (public and private),” Ewbank said in an email. “While there are risks of data leakage and a weakening of security culture in an organization, the cybersecurity risks themselves are often under-appreciated.”

‘Not A Surprise’

The agency said Gottumukkala was granted exceptional permission to use the tool with “DHS controls in place,” as CISA remains committed to “harnessing AI and other cutting-edge technologies” to deliver on President Donald Trump’s AI executive order. CISA declined to specify what controls the Department of Homeland Security had in place for the chatbot or why its acting director used ChatGPT instead of the agency’s internal AI tools.

As AI tools become ubiquitous, many companies are struggling to keep track of which tools employees and executives are using—and what they’re using them for.

Despite companies’ investments in enterprise-level AI—which promises to keep company data private and secure—more than a third of employees said they’re still using free versions of company-approved AI tools, according to a Jan. 27 report from BlackFog, a provider of AI-based cybersecurity services.

About 21% said they believe their employer would “turn a blind eye” to the use of unapproved AI tools if it got them to complete their work on time.

Senior executives are more likely to accept additional risks, the survey found. Almost 70% of president- or C-suite-level executives said speed is a priority over privacy or security risks.

As a result, sensitive data is leaving organizations. One-third of employees surveyed said they've shared research or data sets. More than a quarter said they've shared payroll information, while 23% admitted to sharing financial statements.

“It was really not a surprise that it happened,” said Itai Schwartz, co-founder and chief technology officer of MIND, an AI-native data security platform, referring to Gottumukkala’s use of ChatGPT. “It’s almost like this is a 2025 kind of problem. It’s obvious that it’s going to happen. But what we’re going to see in 2026 is a real acceleration of adoption of agentic AI tools—and that’s a whole different level of problems to secure.”

He added, “Tomorrow we’re going to see an agentic AI tool doing a thousand times more activities and that’s going to create exponentially more severe data loss events.”

Explicit Risks

The risks stemming from uploading sensitive information to a public chatbot vary depending on the agreements between a customer and AI provider. In some cases, data can be used to train future versions of an AI model. Information can also be obtained by a bad actor or even leaked during a data breach.

Inputting information into a commercial model triggers the threat of “every user of these services having access to that data,” said Casey Bleeker, co-founder and CEO of SurePath AI Inc., a platform that helps companies govern generative AI solutions.

“So that’s a huge, huge national security concern,” he said.

Companies are increasingly trying to prevent data from leaving their organization because of AI technologies, whether it’s to manage risks related to data privacy, intellectual property, or trade secrets.

Bleeker added, “AI models provide enough data to actually extrapolate and extract a lot of the value these organizations are trying to keep internal.”

To contact the reporters on this story: Cassandre Coyer in Washington at ccoyer@bloombergindustry.com; Kaustuv Basu in Washington at kbasu@bloombergindustry.com

To contact the editors responsible for this story: Jeff Harrington at jharrington@bloombergindustry.com; Michelle M. Stein at mstein1@bloombergindustry.com
