The Samsung ChatGPT Leak: Why Every Company Needs an AI Acceptable Use Policy

Jillian Bommarito

Samsung has just offered every company a useful little warning label for the AI era.

According to reports, employees in Samsung’s semiconductor business unit pasted proprietary source code and other confidential material into ChatGPT while trying to solve ordinary work problems. One prompt involved source code from a faulty semiconductor database. Another involved meeting notes. Another, reportedly, involved code optimization work. In other words: the sort of thing people do every day when they are moving fast, trying to be helpful, and assuming the shiny new tool on their browser tab is somehow safer than the old one. It is not.

This is not a story about one careless employee and one unlucky chatbot prompt. It is a story about what happens when useful technology outruns governance. And yes, that is exactly when the lawyers, privacy folks, security teams, and board members start getting very interested in what people are doing on corporate devices at 11:47 p.m.

What Happened At Samsung?

The basic facts are simple enough.

Samsung reportedly allowed engineers in its semiconductor arm to use ChatGPT for work. Within a short period, the company identified three separate incidents of sensitive information being entered into the tool. The reported examples are the ones every technology leader should now have memorized:

  • source code pasted into a public AI tool to debug errors
  • internal meeting notes pasted in to generate a summary or minutes
  • confidential technical data used to ask for optimization help

That is the risk in one sentence: employees will use whatever tool helps them get through the day, unless you clearly tell them what they can and cannot do.

And to be fair, the behavior is understandable. Engineers are under pressure. They want faster fixes, cleaner code, better documentation, and fewer meetings that should have been emails. ChatGPT looks like a miracle productivity machine. But if your first reaction is, “well, they should have known better,” you are missing the point. The point is not whether the employee was smart. The point is whether the company made the safe choice the easy choice.

Why This Matters Beyond Samsung

Samsung is huge, technical, and highly competitive. Semiconductor IP is not the kind of thing you casually toss into a public model and hope for the best. But the lesson is not limited to chip design.

Every company now has the same basic exposure:

  • employees using public AI tools for drafting, coding, summarizing, and brainstorming
  • unclear boundaries between public, internal, confidential, and restricted data
  • a growing gap between what leaders think is happening and what people are actually doing
  • no practical rulebook for acceptable use

That last one is the killer.

A lot of organizations think they have an “AI policy” because someone drafted a one-page memo that says, in essence, “Please be responsible.” That is not a policy. That is a wish with formatting.

If your workforce can paste source code, customer records, legal strategy, pricing data, product roadmaps, or private meeting notes into a public model without tripping a control, then you do not have governance. You have a hope-based security program. Those are fun right up until they are not.
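To make "tripping a control" concrete, here is a toy sketch in Python of the kind of pre-submission check a governance program might put in front of a public model. Everything in it, including the patterns and the example prompt, is invented for illustration; real data loss prevention tooling is far more sophisticated, but the principle is the same: the check runs before the prompt leaves the building.

    import re

    # Toy patterns for obviously sensitive content. Real DLP tools use far
    # richer detection (classifiers, document fingerprinting, exact hashes).
    SENSITIVE_PATTERNS = {
        "credential": re.compile(r"(?i)\b(api[_-]?key|password|secret)\s*[:=]"),
        "source_code": re.compile(r"\b(def |class |import )|#include"),
        "customer_record": re.compile(r"(?i)\b(ssn|social security|account number)\b"),
    }

    def check_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive categories the prompt matches."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    prompt = "Can you debug this? def decode_wafer_log(path): ..."
    hits = check_prompt(prompt)
    if hits:
        print(f"Blocked before submission: matched {hits}")

The exact mechanism matters less than its placement: the control sits between the employee and the model, not in a memo nobody reads.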

A Ban Is Not A Strategy

The easy answer here is to ban generative AI altogether.

And sometimes companies do exactly that, at least temporarily. But a ban is a blunt instrument. It may reduce risk in the short term, but it also creates a different problem: shadow AI. Employees keep using the tools anyway, just without approval, visibility, or controls.

So what is the better answer?

An AI acceptable use policy.

Not a slogan. Not a slide deck. Not a vague “use your judgment” note from IT.

A real policy says:

  • which tools are approved
  • which tools are prohibited
  • what data is never allowed in public models
  • whether employees may use AI for coding, drafting, summarization, translation, or analysis
  • what review is required before AI-generated output is used externally
  • who owns escalation, exception approvals, and incident response
  • what happens when someone violates the policy

If that sounds boring, good. Boring is what compliance is supposed to sound like when it works.
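For illustration, here is a hypothetical sketch of those same elements written down as structured data in Python. Every tool name and owner below is invented; the point is that a real policy is specific enough to be captured this way, while "please be responsible" is not.

    from dataclasses import dataclass

    @dataclass
    class AIAcceptableUsePolicy:
        # Explicit allow and deny lists, not "use your judgment".
        approved_tools: list[str]
        prohibited_tools: list[str]
        # Data categories that must never enter a public model.
        banned_data_in_public_tools: list[str]
        # Permitted task types (coding, drafting, summarization, ...).
        permitted_uses: list[str]
        # Review required before AI-generated output is used externally.
        external_output_review: str
        # Named owners, not "IT" in the abstract.
        escalation_owner: str
        exception_approver: str
        incident_response_owner: str

    # Hypothetical example values, for illustration only.
    policy = AIAcceptableUsePolicy(
        approved_tools=["enterprise-llm-gateway"],
        prohibited_tools=["any public consumer chatbot"],
        banned_data_in_public_tools=[
            "source code", "credentials", "trade secrets",
            "customer data", "regulated personal information",
        ],
        permitted_uses=["drafting", "summarization", "translation"],
        external_output_review="human review and sign-off required",
        escalation_owner="Security",
        exception_approver="CISO",
        incident_response_owner="Incident Response team",
    )

If a field in that record would be blank for your organization, that is the gap in your policy.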

What A Real Policy Should Cover

At minimum, companies should define data categories in plain English.

If a tool is public and third-party, then confidential and restricted data should be off-limits. That includes source code, credential material, trade secrets, internal meeting content, customer data, regulated personal information, and anything else you would not want sitting on a vendor’s server without a very clear business and legal basis.
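As a toy illustration of how little machinery "plain English categories" actually require, here is a sketch assuming a common four-tier scheme (public, internal, confidential, restricted). The tiers, tool classes, and ceilings below are my assumptions for the sketch, not Samsung's scheme; treating public third-party tools as public-data-only is a deliberately conservative choice.

    from enum import IntEnum

    class DataClass(IntEnum):
        # Ordered from least to most sensitive.
        PUBLIC = 0
        INTERNAL = 1
        CONFIDENTIAL = 2
        RESTRICTED = 3

    # The most sensitive class each tool type may receive (assumed values).
    TOOL_CEILING = {
        "public_third_party": DataClass.PUBLIC,
        "enterprise_contracted": DataClass.CONFIDENTIAL,
    }

    def may_submit(data: DataClass, tool: str) -> bool:
        """True only if this data class may enter this tool class."""
        return data <= TOOL_CEILING[tool]

    assert not may_submit(DataClass.CONFIDENTIAL, "public_third_party")
    assert may_submit(DataClass.INTERNAL, "enterprise_contracted")

Once the categories exist, the rule fits in one line. The hard part is defining the categories so employees can actually apply them.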

The policy should also address output risk.

A lot of leaders focus on what goes into the model, which is necessary, but not sufficient. What comes out can be wrong, incomplete, plagiarized, biased, or just weird. If employees are using AI-generated code or text, someone needs to review it before it ships, gets filed, or goes out the door. Otherwise you are outsourcing judgment to a machine that has no stake in the outcome.
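One way to picture that review step, as a sketch with invented field names: a simple gate that refuses to release AI-assisted output until a named human has signed off.

    from dataclasses import dataclass

    @dataclass
    class Artifact:
        content: str
        ai_assisted: bool
        reviewed_by: str | None = None  # named human reviewer, if any

    def release(artifact: Artifact) -> str:
        """Refuse to ship AI-assisted output nobody has reviewed."""
        if artifact.ai_assisted and artifact.reviewed_by is None:
            raise PermissionError("AI-assisted output needs human review first")
        return artifact.content

    draft = Artifact(content="Dear customer, ...", ai_assisted=True)
    try:
        release(draft)                 # blocked: no reviewer yet
    except PermissionError as err:
        print(err)
    draft.reviewed_by = "j.smith"      # a human takes responsibility
    print(release(draft))              # now it can go out the door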

Then there is vendor risk. Where does the data go? How long is it retained? Can it be used for training? Can it be deleted? Can you audit access? Can you even get a straight answer from the vendor? These are not small questions. They are the difference between a tool you can manage and a liability you are pretending not to see.
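Those questions translate directly into a due-diligence record you can actually track. As a sketch, with every field an assumption about what such a record might hold:

    from dataclasses import dataclass

    @dataclass
    class VendorAssessment:
        # One field per question, so "we never asked" shows up as a gap.
        vendor: str
        data_location_known: bool     # where does the data go?
        retention_days: int | None    # how long is it retained?
        used_for_training: bool       # can it be used for training?
        deletable_on_request: bool    # can it be deleted?
        access_auditable: bool        # can you audit access?

        def manageable(self) -> bool:
            """A vendor you can manage has an acceptable answer to all five."""
            return (self.data_location_known
                    and self.retention_days is not None
                    and not self.used_for_training
                    and self.deletable_on_request
                    and self.access_auditable)

If the vendor cannot help you fill in those fields, you already have your answer.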

This is exactly where AI Governance & Compliance work becomes practical instead of theoretical. The useful part is not writing policy language that sounds impressive. The useful part is making sure the policy matches how people actually work, how data actually moves, and where the real risk lives.

The Real Lesson

Samsung’s problem is not that it used AI.

Samsung’s problem is that AI entered the workflow faster than the company could define the rules of the road.

That is the lesson for everyone else. If your teams are already experimenting with ChatGPT, Claude, Copilot, or whatever the next tool happens to be, then you are already in the governance business whether you like it or not. The only question is whether you are doing it deliberately or cleaning up after a leak.

And cleanup is expensive. It is expensive in trust, in legal exposure, in operational friction, and in the time it takes to explain to leadership why “we thought people would know better” is not an acceptable control.

So yes, companies should absolutely experiment with AI. They should use it, test it, and learn where it helps. But they should also write down the rules before the first proprietary code sample goes into the wrong box.

Because once the data is out, it is out.

And as a management strategy, “please un-send that” is not especially strong.

The Bottom Line

If your organization has not written an AI acceptable use policy yet, Samsung just gave you the business case.

Do not wait for a headline, a regulator, or an internal incident response call to decide what employees may paste into a model. Define the boundaries now, train people on them, and make the safe path the easiest path.

That is how you keep the benefits of AI without handing over your confidential information to the nearest chat window.
