Millions of people use AI tools like ChatGPT to become more efficient at home and at work. As a result, your employees are almost certainly using ChatGPT and other AI tools, often without official approval, and that unsanctioned use puts AI safety in the workplace at risk.
At first glance, it may seem like employers benefit from this free AI use. That assumption is misleading: almost all AI tools used under a free or personal license train their models on the data entered into them. In other words, when employees use AI tools independently, they may be unknowingly exposing your company’s information, including potentially sensitive data.
What Are the Risks?
AI models, by nature, learn from the data they’re fed. As a result, several key concerns arise:
- Data Leakage: Entering confidential information into public AI interfaces can unintentionally expose sensitive corporate data or customer information (a minimal redaction sketch follows this list).
- Model Training: AI providers may use submitted data to train their models, potentially compromising sensitive details.
- Third-Party Vulnerabilities: Relying on external AI services introduces risks associated with the provider’s security practices.
- Compliance Issues: Mishandling data can lead to violations of regulations like GDPR, HIPAA, or CCPA, or the breaking of contractual obligations.
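To make the data-leakage risk concrete, here is a minimal sketch of a pre-submission filter that redacts common sensitive patterns before a prompt ever leaves your network. The patterns and the `redact` helper are illustrative assumptions, not a production data-loss-prevention solution:

```python
import re

# Illustrative patterns only; a real data-loss-prevention policy
# would cover far more formats (names, account numbers, secrets).
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the
    prompt is sent to any external AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(redact("Summarize the ticket from jane.doe@example.com, SSN 123-45-6789."))
# -> Summarize the ticket from [REDACTED-EMAIL], SSN [REDACTED-SSN].
```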
How to Use AI Safely
To mitigate these risks, businesses must be proactive. Start by taking the following steps:
- License Enterprise Versions of AI Tools for Your Employees
This ensures that your employees use AI instances tied to your business rather than their personal accounts, giving you more control. Enterprise versions of the most popular tools also provide administrator controls and privacy protections; a brief code sketch of enforcing company-account use follows the list below. For example:
- ChatGPT: Purchase ChatGPT Team or Enterprise. Models aren’t trained on your data at these subscription levels,[1] and an admin console lets you restrict access to third-party GPTs.
- Gemini (Google Workspace): Included with Workspace subscriptions. Gemini does not use Workspace data to train its models,[2] and administrators can manage access to Gemini features in the Admin console.
- Perplexity: Data from Enterprise use is never used for model training.[3]
- GitHub Copilot: Does not store users’ data for model training or long-term storage.[4]
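For programmatic use, one way to enforce the company-account rule is a small internal wrapper that holds the business credentials so employees never paste in personal keys. The sketch below uses the official OpenAI Python SDK; the environment-variable names and model choice are assumptions, and the same pattern applies to other providers’ SDKs:

```python
import os
from openai import OpenAI

# Credentials come from the company-managed environment, never from
# an employee's personal account. (Variable names are illustrative.)
client = OpenAI(
    api_key=os.environ["COMPANY_OPENAI_API_KEY"],
    organization=os.environ.get("COMPANY_OPENAI_ORG"),  # pins requests to the business org
)

def ask(prompt: str) -> str:
    """Send a prompt through the company's enterprise account."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whatever your plan includes
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```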
- Employee Training and Awareness
Next, educating your workforce is essential:
- Educate employees about the potential risks associated with AI usage and best practices for data security.
- Foster a culture of security awareness throughout your organization.
- Review Terms of Service and Privacy Policies
Before adopting any AI tool, carefully examine its privacy terms:
- Scrutinize the terms of service and privacy policies.
- Understand how your data will be used and what security and privacy measures the provider has in place.
- Avoid any tool that trains on your data.
- Look for providers that offer strong data protection and compliance certifications.
Don’t Forget About Compliance!
Finally, stay alert to evolving compliance requirements. Many companies now include AI-specific clauses in vendor contracts to manage risk and compliance, so be aware of any restrictions on AI usage or reporting requirements that could affect your decision to use generative AI tools in the workplace.
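If a vendor contract does impose reporting obligations, a lightweight starting point is to log every AI interaction centrally so usage can be audited later. This is a minimal sketch; the log destination, record fields, and retention are assumptions to adapt to your own policy:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit log; destination and fields should follow your
# own retention and compliance policy.
audit_log = logging.getLogger("ai_audit")
audit_log.addHandler(logging.FileHandler("ai_usage_audit.jsonl"))
audit_log.setLevel(logging.INFO)

def record_ai_use(user: str, tool: str, purpose: str) -> None:
    """Append one audit record per AI interaction for later reporting."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "purpose": purpose,
    }))

record_ai_use("j.doe", "ChatGPT Enterprise", "summarize support ticket")
```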
[1] OpenAI’s Enterprise Privacy – https://openai.com/enterprise-privacy/
[2] Gemini privacy and security hub (Google Workspace Admin Help) – https://support.google.com/a/answer/15706919
[3] Perplexity’s Trust Center – https://trust.perplexity.ai/
[4] Copilot.live blog, “Does GitHub Copilot Use Your Code?” – https://www.copilot.live/blog/does-github-copilot-use-your-code