According to a recent study[1], as many as 32% of US employees have shared customer data and 27% have entered internal company documents into unapproved AI tools at work. With AI use in the workplace growing rapidly, having the right policy and guardrails in place is imperative. Recent news reports make it clear that business leaders who fail to define minimum AI rules of engagement for their teams may find themselves navigating ethical landmines, compliance violations, loss of intellectual property rights, and reputational damage.

What does a smart, company-wide AI policy look like? How do you create one that’s realistic, flexible, and built for the future?

Start with Principles, Not Panic

An AI policy shouldn’t scare people, and it’s not about locking everything down. It’s about giving teams clear guidance for using AI safely and responsibly without stifling innovation. Where is the best place to begin? Your organization’s core values. Ground your AI policy in the principles your company already stands for. A few must-haves:

  • Transparency: Employees should understand which tools are allowed, how they’re expected to use them, and what types of data are off-limits to share.
  • Accountability: Make it clear who is responsible for AI oversight, from IT security to legal and compliance teams.
  • Equity and Fairness: AI shouldn’t reinforce bias or be used to make unreviewed decisions about people, especially in hiring, promotions, or performance evaluations.

Clearly Define the Do’s and Don’ts

AI tools are only as effective as the boundaries you set around them. A good policy spells out what your organization considers acceptable use and includes real-world examples that resonate with employees. For example:

  • Do: Use ChatGPT to draft client emails using only public information.
  • Don’t: Input confidential financial forecasts into a generative tool without approval.
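Rules like these are easier to follow when they are backed by lightweight guardrails. As a minimal sketch (not a substitute for a real data-loss-prevention tool), a pre-submission check might scan a draft for off-limits content before it goes to a generative tool. The pattern names and regular expressions below are illustrative assumptions; a real deployment would use the organization’s own data classifications:

```python
import re

# Hypothetical categories an organization might declare off-limits for AI tools.
# These patterns are illustrative only; real classifiers would be more robust.
OFF_LIMITS_PATTERNS = {
    "customer email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential label": re.compile(r"\bconfidential\b", re.IGNORECASE),
    "forecast figures": re.compile(r"\bforecast(ed)?\b", re.IGNORECASE),
}

def policy_violations(text: str) -> list[str]:
    """Return the names of off-limits categories detected in `text`."""
    return [name for name, pat in OFF_LIMITS_PATTERNS.items() if pat.search(text)]

if __name__ == "__main__":
    draft = "Please summarize our confidential Q3 revenue forecast."
    print(policy_violations(draft))  # flags 'confidential label' and 'forecast figures'
```

A check like this can run in a browser extension, a chat proxy, or a pre-commit hook; the point is that the policy’s do’s and don’ts become something the tooling can enforce, not just a document employees are asked to remember.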

Set Up a Governance Structure

Who owns your AI strategy? If your answer is “everyone” or “no one,” it’s time to change that. Designate a cross-functional AI Governance Committee with representatives from IT, legal, compliance, and the business to:

  • Review and approve allowed AI tools
  • Monitor how your organization is using AI
  • Adapt policies as technologies and regulations evolve
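One way to make the committee’s tool reviews auditable is to keep the approved-tool list as a machine-readable register rather than a wiki page. The fields, tool names, and data classes below are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedTool:
    name: str
    permitted_data: tuple[str, ...]  # data classes cleared for this tool
    owner: str                       # accountable reviewer on the committee
    next_review: date                # when approval must be reconfirmed

# Illustrative entries; real classifications, owners, and dates vary by company.
REGISTER = [
    ApprovedTool("ChatGPT", ("public",), "IT Security", date(2026, 3, 1)),
    ApprovedTool("Internal drafting assistant", ("public", "internal"), "Legal", date(2026, 1, 15)),
]

def tools_allowing(data_class: str) -> list[str]:
    """List the approved tools cleared for a given data classification."""
    return [t.name for t in REGISTER if data_class in t.permitted_data]
```

Because each entry carries an owner and a review date, the register doubles as the committee’s meeting agenda: anything past `next_review` is up for reassessment.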

This group should meet regularly to assess risk, respond to incidents, and refine policy language to stay ahead of change.

Think Long Term

An AI policy isn’t a one-and-done document. It’s a living framework that evolves with technology. Schedule regular reviews to:

  • Reassess risk based on new AI capabilities
  • Update approved tool lists
  • Respond to changes in legislation or industry norms

The goal of an AI policy isn’t to block innovation; it’s to harness it responsibly while protecting the data that warrants protection. When employees know the rules, they’re more confident, creative, and compliant. And when leadership sets clear, forward-thinking AI standards, the entire organization benefits.

AI is here to stay. The question is: Will your organization lead the way or play catch-up?

[1] Cint Market Research, August 2025, https://www.emarketer.com/content/employee-data-most-risk-unapproved-ai-use