AI Strategy · 12 min read

Is It Safe to Use AI at Work? A Data Privacy Guide

Jonathan Lasley


Yes, it's safe to use AI at work if your accounts are set up correctly. Free-tier AI tools train on your conversations by default (except Claude, which requires opt-in). Team subscriptions at $25-30/seat/month disable training organization-wide with admin controls you can verify. But right now, 77% of employees are pasting company data into AI tools, mostly through personal accounts that have none of those protections. The question isn't whether your company is using AI. It's whether you've set up a safe way to do it.


Key Takeaways

  • Free AI tools train on your data by default (except Claude, which is opt-in). Team subscriptions ($25-30/seat/month) disable training organization-wide with admin controls, eliminating the need to verify individual settings across your workforce.
  • 77% of employees paste company data into AI tools, and 82% of that activity flows through personal, unmanaged accounts (LayerX, 2025). This is a procurement gap, not an employee discipline problem.
  • Not every department needs the same tool tier. Map your data classification (public, internal, confidential, regulated) to the minimum tool tier required, and you'll avoid overspending on Enterprise licenses while still protecting sensitive information.
  • Five protections you can set up this week without hiring a compliance consultant or buying new security software.
  • State AI regulation is accelerating. Texas TRAIGA took effect January 2026, Colorado AI Act launches June 2026, and 45 states introduced over 1,000 AI bills in 2025.

Free Download

AI Data Privacy Action Kit

Printable worksheets your department heads fill in and hand to IT: data classification wall chart, vendor evaluation scorecard weighted for mid-market priorities, shadow AI audit checklist with owner fields, and a 30-day deployment plan with weekly milestones. The artifacts you bring to Monday's leadership meeting.

What Actually Happens to Your Data

When you type a prompt into any major AI tool, whether it's Claude, ChatGPT, Gemini, or Copilot, three things can happen to that text:

Training. Can AI tools see your company data? Yes, and they can learn from it. The AI provider can use your conversation to improve future models. Free and consumer accounts on ChatGPT, Gemini, and Copilot allow this by default (Claude is the exception, requiring users to opt in). Every major platform now offers a privacy toggle to disable training on free tiers, and doing so typically drops retention to 30 days or less. The catch: these are individual settings. There's no admin dashboard to verify compliance across your workforce.

Retention. Your conversations are stored on the provider's servers. With training enabled on free accounts, retention can stretch months or indefinitely. Opting out of training reduces retention on some platforms: ChatGPT drops to 30 days, Claude defaults to 30 days (training is already opt-in). Gemini is the outlier: even with activity controls paused, Google retains free-tier conversations for up to 18 months, and human-reviewed data for up to three years. Business and Team subscriptions give you organization-level retention controls with no action required from individual users.

Access. Provider employees can review conversations for safety monitoring and abuse prevention. Enterprise tiers restrict this access to a small trust-and-safety team with audit trails.

The specific opt-out locations: ChatGPT has an "Improve the model for everyone" toggle in Settings > Data Controls. Gemini has a "Keep Activity" setting. Copilot has "Model training" toggles for text and voice. Claude doesn't require action (training is off unless you opt in). For a company, verifying that 50 or 200 employees have all found and maintained these settings is impractical. Team subscriptions ($25-30/seat/month) solve this at the organization level.

[Image: Shadow AI data exposure statistics]

Shadow AI adoption makes this urgent. LayerX's 2025 Enterprise AI Security Report found 77% of employees paste company data into AI tools, and 82% of that activity flows through personal, unmanaged accounts. In a 200-person company, that's 154 people feeding proprietary information into tools that train on it. Only 17% of organizations have DLP scanning for AI tools.

The worst cases I've seen involve customer data, especially regulated information that someone pasted into a free-tier account without thinking twice. Even the "minor" cases are bad: employees routinely paste unsanitized company information into AI tools, complete with client names, internal project codes, and financial figures that make the data trivially identifiable. It's never malicious. It's a Tuesday afternoon shortcut that nobody thought to question.

What to Do If Company Data Was Already Shared With AI Tools

If you're reading this because exposure already happened, start here. Identify what was shared and on which platform, then check that tier's retention and training policies. If training was enabled, the data may already be in model weights with no way to extract it. Change any shared credentials or API keys immediately. For client data, check NDA and notification obligations.

Then fix the root cause: upgrade to Team subscriptions and distribute the data classification guide below. Most shadow AI exposure is a procurement gap, not a breach notification event, but check with counsel if regulated data (PHI, PII) was involved.

IBM's 2025 Cost of a Data Breach Report puts a price on the exposure: breaches involving shadow IT cost $4.63 million on average, $670,000 more than the $3.96 million baseline. Shadow AI was a factor in 20% of all breaches studied. Uncontrolled AI adoption is one of the fastest paths to the kinds of failures documented in why AI projects fail at mid-market companies.

The Five Tiers of AI Data Protection

Not all AI subscriptions offer the same protections. I break vendor offerings into five tiers based on what happens to your data at each level.

[Image: The five tiers of AI data protection]

Tier 1: Free and Consumer ($0-20/user/month)

Claude Free, ChatGPT Free, Gemini Free, and most "Plus" or "Pro" plans. Training is on by default for ChatGPT, Gemini, and Copilot (Claude is opt-in). Every platform provides individual privacy toggles to disable training, dropping retention to 30 days or less. For companies, the gap is admin visibility: no way to verify settings across your org, no compliance certifications, no audit trails.

Tier 2: Team and Business ($25-30/user/month)

Claude Team, ChatGPT Business (formerly ChatGPT Team), Gemini Business, Microsoft 365 Copilot. Data is excluded from training. Retention is capped (typically 30 days). Admin controls for user management. SOC 2 Type II certification from most providers.

For most mid-market companies, this tier solves 80% of the data privacy concern. A 50-person company pays $1,250-$1,500/month, less than the monthly office coffee budget. When you're evaluating the return on an AI investment, moving from free to Team is the highest-ROI security decision available.

For the CFO: that's $15,000-$18,000 per year in subscriptions. IBM's 2025 study found shadow AI was a factor in 20% of breaches, with those breaches costing $670,000 more than baseline. Those are global averages across companies larger than yours, but the math works at any scale: against the full $670,000 excess, the subscription pays for itself if it reduces your annual breach probability by roughly 3 percentage points, and even if your real exposure is a quarter of that figure, the threshold is only about 11%. Eliminating 150 unmanaged personal AI accounts does considerably more than that.
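A back-of-envelope version of that break-even, in code. The seat count and per-seat price are the figures from this article, and the loss number is IBM's global average, so treat the output as an order-of-magnitude check, not a forecast:

```python
# Break-even sketch: what annual breach-probability reduction makes a
# Team subscription pay for itself? Substitute your own figures.

seats = 50
cost_per_seat_month = 30                          # upper end of the $25-30 range
annual_cost = seats * cost_per_seat_month * 12    # $18,000/year

excess_breach_cost = 670_000                      # shadow-IT breach premium (IBM 2025)

# Expected-value break-even: delta_p * loss >= cost  =>  delta_p >= cost / loss
breakeven_full = annual_cost / excess_breach_cost            # ~2.7%
breakeven_quarter = annual_cost / (excess_breach_cost / 4)   # ~10.7%

print(f"Annual subscription cost: ${annual_cost:,}")
print(f"Required probability reduction (full IBM figure): {breakeven_full:.1%}")
print(f"Required probability reduction (quarter-scale loss): {breakeven_quarter:.1%}")
```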

Tier 3: Enterprise (Custom pricing, typically $50–60+/user/month)

Claude Enterprise, ChatGPT Enterprise, Gemini Enterprise. Everything in Tier 2 plus SCIM provisioning (SSO is available at Team tier for some platforms, including Claude), data residency controls, custom retention policies, and HIPAA BAA eligibility for most vendors. Anthropic offers BAAs through both the API and Enterprise plans. OpenAI launched Enterprise Key Management (EKM) in late 2025 for customer-managed encryption keys. Necessary for companies handling regulated data or requiring compliance documentation.

Tier 4: API Access (Per-token pricing)

Direct API access from OpenAI, Anthropic, and Google. Zero-retention options, no training on inputs by default, maximum control over data handling. Requires technical implementation but offers the most flexibility for custom applications.
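For illustration, here's what Tier 4 access looks like in practice. This is a minimal sketch using the Anthropic Python SDK; the model ID is a placeholder, and you should confirm current model names and your retention terms with the vendor:

```python
# Minimal Tier 4 sketch: a direct API call via the Anthropic Python SDK
# (pip install anthropic). API inputs are not used for training by default;
# retention is governed by your API agreement, not a consumer privacy toggle.
import os

import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; check current model IDs
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize this meeting transcript: ..."}],
)
print(response.content[0].text)
```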

Tier 5: Cloud AI Platforms (Thousands of dollars/month base plus per-token)

Amazon Bedrock, Microsoft Azure OpenAI Service, Google Vertex AI. Full VPC isolation where data never leaves your cloud environment. Integrates with existing IAM, logging, and compliance infrastructure. Required for companies building AI into their products, processing PHI/PII at scale, or operating in highly regulated industries.
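And a comparable Tier 5 sketch using Amazon Bedrock's Converse API via boto3. The point of this tier isn't the call itself; it's that the request stays inside your AWS account under your IAM policies and CloudTrail logging. The model ID and region below are illustrative:

```python
# Minimal Tier 5 sketch: Amazon Bedrock via boto3 (pip install boto3).
# Requests run inside your AWS account, governed by your existing IAM,
# logging, and compliance infrastructure.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative
    messages=[{"role": "user", "content": [{"text": "Classify this record: ..."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```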

The jump from Tier 1 to Tier 2 delivers the highest return on security investment. If your company is still on free or consumer plans, start there.

Selecting the right tier is the straightforward part. The implementation work is where companies stall: migrating scattered personal accounts to managed subscriptions, mapping data classification to department workflows, and configuring admin controls that actually get enforced. An AI Strategy Assessment covers vendor evaluation, data classification, and deployment planning as standard deliverables, so you get a rollout plan, not just a recommendation.

Head-to-Head: Comparing AI Platform Privacy Policies

The four major AI providers handle data privacy differently across tiers. "Business tier" means different things to different vendors. All policies below are current as of February 2026; verify directly with each vendor before making purchasing decisions, as these change quarterly.

| Feature | Claude Team | ChatGPT Business | Gemini Business | Microsoft Copilot |
| --- | --- | --- | --- | --- |
| Trains on your data | No (opt-in since Oct 2025) | No | No | No |
| Free tier opt-out | Not needed (opt-in) | Yes (toggle in Data Controls) | Yes (Keep Activity toggle) | Yes (Model training toggles) |
| Data retention | 30 days (default) | 30 days | Customer-controlled | Follows M365 policy |
| SOC 2 Type II | Yes | Yes | Yes | Yes (via Azure) |
| HIPAA BAA | API + Enterprise | Enterprise only | Enterprise only | Available with E5 |
| Admin console | Yes | Yes | Yes | Yes (via M365) |
| SSO/SCIM | SSO (Team+) / SCIM (Enterprise) | SSO (Business+) / SCIM (Enterprise) | Enterprise only | Yes (via Entra ID) |
| Data residency | Enterprise only | Enterprise only | Enterprise (EU available) | Follows Azure region |
| Price/seat/month | $25 | $25 | Included in Workspace ($14+) | $30 (M365 Copilot); $21 (Copilot Business, under 300 users) |

For mid-market companies starting fresh, I recommend Claude Team. I use Claude daily for client work, and three factors put it ahead: Anthropic made free-tier training opt-in rather than opt-out in October 2025 (users must explicitly choose to share data, and opting in extends retention from 30 days to five years), the privacy controls are the strongest at every tier, and the capabilities beyond basic chat (code generation, collaborative workspaces, workflow automation) make it the most versatile business AI platform available.

If your company runs on Microsoft 365 with E5 licensing, Copilot integrates with your existing security infrastructure and is the pragmatic starting point. Keep in mind that Copilot is one interface layer. You'll still configure which AI models and data flows power your custom workflows, and those choices carry their own data handling implications.

One caution for every platform: the January 2026 Copilot DLP bypass bug caused Copilot to summarize emails marked "Confidential," bypassing DLP policies for weeks before a fix rolled out in February 2026. No vendor is "set and forget."

According to Microsoft's own Data Security Index (January 2026), generative AI is now involved in 32% of data security incidents, and only 47% of security leaders have implemented GenAI-specific controls.

[Image: AI privacy by vendor and tier: training and retention policies across all subscription levels]

Download the AI Data Privacy Action Kit: the comparison table and tier matrix above are reference material, but reference material doesn't close the gap. The Action Kit gives you the artifacts to execute: a fill-in vendor evaluation scorecard where you score each platform against your specific requirements (not a pre-filled recommendation), a data classification wall chart with blank fields for your department-specific data types, a shadow AI audit template with owner and date columns so nothing falls through the cracks, and a 30-day deployment checklist with weekly milestones and assigned owners. These are the worksheets you hand to IT and bring to the leadership meeting, not a PDF of the article.

Get the AI Data Privacy Action Kit

Vendor comparison, data classification worksheet, shadow AI audit checklist, and a 30-day implementation plan with owner and date fields. The package you bring to the leadership team when someone asks 'what's our AI data policy?'

Free download. No spam. See our privacy policy.

Match Your Data Classification to the Right Tool Tier

Most mid-market companies don't need Enterprise subscriptions for every employee. The smarter approach: classify your data into four categories and match each to the minimum tool tier that handles it safely.

[Image: Data classification matrix mapping data types to AI tool tiers]

The matrix works like this: public data (marketing copy, published research) is safe on any tier including free. Internal data (meeting notes, project plans, strategy drafts) requires Team or Business at minimum, and this is where most employee day-to-day AI usage falls. Confidential data (customer PII, financials, trade secrets) requires Enterprise with SSO and audit trails. Regulated data (PHI, SOX/GLBA, GDPR-covered PII) requires API access or cloud platforms with VPC isolation.

This doesn't need to be a 50-page policy document. A one-page grid that every department head can reference in 30 seconds works better than a governance manual nobody reads. If you already have a data governance framework, add an "AI tool tier" column to your existing classification rather than building a separate system. That article covers the full policy framework; here we're focused on which tool tier handles which data safely.
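If you want the grid to live in tooling as well as on the wall, it reduces to a four-row lookup. A minimal sketch, assuming the four categories described above (rename them to whatever your existing framework uses):

```python
# The one-page classification grid as a lookup table. Category and tier
# names are illustrative; align them with your governance framework.

MIN_TIER = {
    "public":       "any tier (free OK)",
    "internal":     "Team/Business",
    "confidential": "Enterprise or API",
    "regulated":    "API or cloud platform (VPC isolation)",
}

def minimum_tier(classification: str) -> str:
    """Return the minimum AI tool tier for a data classification."""
    try:
        return MIN_TIER[classification.lower()]
    except KeyError:
        # Unknown classifications should fail closed, not open.
        return "unclassified: treat as regulated until reviewed"

print(minimum_tier("internal"))   # Team/Business
print(minimum_tier("Regulated"))  # API or cloud platform (VPC isolation)
```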

Data That Should Never Enter Any AI Tool

Even on Enterprise or API tiers with zero-retention policies, some data doesn't belong in AI prompts:

  • Credentials and secrets -- passwords, API keys, SSH keys, database connection strings, MFA codes
  • Unredacted customer PII -- raw names paired with SSNs, account numbers, medical records, or financial details
  • Active legal materials -- litigation documents, sealed records, attorney-client privileged communications
  • Source code with embedded secrets -- hardcoded credentials, internal infrastructure URLs, private repository tokens
  • Pre-announcement financial data -- M&A terms, material non-public information, unfinished audit findings

If the data would trigger a compliance review when emailed to the wrong person, it doesn't belong in an AI prompt. The data classification matrix above covers the gray areas. This is the bright-line list for the cases with no gray at all.
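A lightweight pre-prompt check can catch the most obvious of these before they leave the clipboard. The sketch below is illustrative, not DLP: the patterns are a starter set and pattern-matching will miss plenty, but it blocks the Tuesday-afternoon paste of a connection string or an SSN:

```python
# Illustrative bright-line check for prompts. A starter set of patterns,
# not a substitute for real DLP tooling.
import re

BRIGHT_LINE_PATTERNS = {
    "AWS access key":       re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key block":    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "US SSN":               re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DB connection string": re.compile(r"\b\w+://\w+:[^@\s]+@[\w.-]+"),
}

def flag_bright_line_data(prompt: str) -> list[str]:
    """Return the names of bright-line patterns found in a prompt."""
    return [name for name, pat in BRIGHT_LINE_PATTERNS.items() if pat.search(prompt)]

hits = flag_bright_line_data("connect with postgres://admin:hunter2@db.internal/prod")
if hits:
    print(f"Blocked: prompt contains {', '.join(hits)}")
```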

When You Need a Cloud AI Platform for Data Privacy

For most mid-market companies, Team or Business subscriptions (Tier 2) are enough. Don't buy a cloud AI platform because a vendor told you it's more secure. Buy one because your use case requires it.

You likely need Tier 4 (API) or Tier 5 (cloud platform) when:

  • You're building AI into your product or service. If AI processes customer data as part of your offering, not just internal productivity, you need infrastructure-level control over data handling.
  • You process regulated data at scale. A handful of employees asking Claude about HIPAA-adjacent topics doesn't require Amazon Bedrock. Processing thousands of patient records through an AI pipeline does.
  • You need VPC isolation or data residency guarantees. Some industries and geographies require that data never transit a third party's infrastructure, even with contractual protections.
  • You're running fine-tuned or custom models. Cloud platforms support model customization, retrieval-augmented generation, and other architectures that subscription tools can't provide.

The cost difference is substantial. Team subscriptions run $25–30/seat/month. Cloud platforms start at several thousand per month in base costs plus per-token usage fees. For a company weighing the build-vs-buy decision on AI tools, the infrastructure tier is where that decision gets concrete.

[Image: Decision tree for choosing the right AI data privacy tier for your business]

If you're unsure which tier your company needs, an AI Strategy Assessment maps your specific use cases, data types, and compliance requirements to the right infrastructure, so you avoid both overspending and underprotecting.

For the CEO who needs the 30-second version: Your company needs Team subscriptions ($25-30/seat/month), a one-page data classification guide, and about 30 days to roll it out. Total cost for a 50-person company: under $1,500/month. The five protections below are the playbook. If you already know what needs to happen and just need the schedule, skip to the 30-day timeline.

Five Protections You Can Set Up This Week

The comparison tables and tier frameworks above are reference material. Bookmark them, share them with IT, hand them to your CFO. But knowing which vendor to pick isn't the hard part. The hard part is migrating scattered personal accounts, configuring admin controls that actually get enforced, and getting your team to use the sanctioned tools instead of their personal ones. These five actions close that gap, and they take less than a day total.

1. Audit Your Current AI Tool Usage (1 hour)

Check with department heads: what AI tools are people using, and on what subscription tier? Don't limit this to "approved" tools. Ask specifically about personal accounts on Claude, ChatGPT, Gemini, and Copilot used for work tasks. The Netskope Shadow AI Report (2025) found that 72% of ChatGPT users at work were on personal accounts. The answer will probably surprise you.

2. Upgrade to Team or Business Subscriptions (30 minutes)

This is the highest-impact action on this list. Move everyone currently on free or consumer accounts to Team or Business tiers. At $25–30/seat/month, the cost is trivial compared to the risk. Prioritize departments that handle customer data, financial information, or competitive intelligence.

3. Create a One-Page Data Classification Guide (2 hours)

Don't write a 50-page data security policy. Create a single page with four rows: public data (any AI tool), internal data (Team tier minimum), confidential data (Enterprise or API), regulated data (cloud platform required). Distribute it to every department head.

4. Configure Admin Controls and Set a Review Cadence (1 hour setup + quarterly)

Every Team and Business subscription includes an admin console. Enable it. Set up centralized billing, review user access, and turn on available audit logging. For Microsoft Copilot users, verify that DLP policies are configured correctly: the January 2026 Copilot DLP bypass bug caused confidential emails to be summarized by AI for weeks before a fix rolled out, and it only affected organizations that hadn't configured sensitivity labels. That's the lesson: purchasing the right tier isn't the finish line. Set a quarterly review of admin settings, DLP configurations, and user access lists. Vendors update features, change defaults, and occasionally ship bugs. Governance includes monitoring, not just procurement.

5. Provision Sanctioned Tools (30 minutes)

Don't just block personal AI accounts. Blocking pushes usage to phone hotspots and personal devices, and you lose what little visibility you had. The math doesn't support it either: a 50-person company pays $1,250-$1,500/month for Team subscriptions, and the productivity loss from blocking AI tools your employees are already using costs multiples of that.

My recommendation: provision Claude Team or ChatGPT Business for every employee who touches AI (which, statistically, is most of them). If your company runs on Microsoft 365 with E5 licensing, Copilot is the faster path. Make the sanctioned tool better and easier than the free alternative, distribute the data classification guide from Protection #3, and most shadow AI disappears on its own. The AI governance framework I've written about covers this in detail: governance means procurement alongside restriction. For the specific policy document, the AI acceptable use policy template gives you a ready-made structure covering tool approval tiers, data classification, and shadow AI response.

Your 30-Day Privacy Implementation Timeline

Week 1: Run the AI tool audit (Protection #1) and create your data classification guide (Protection #3). By Friday, you should know what tools people are using and what data goes where.

Week 2: Purchase Team subscriptions and migrate users from personal accounts (Protection #2). Configure admin controls as you go (Protection #4).

Week 3: Roll out sanctioned tools (Protection #5). Hold a 30-minute department-head briefing covering the data classification guide and approved platforms.

Week 4: Verify adoption. Check admin consoles for active users, review remaining personal account usage, and document your governance decisions for audit readiness.

If your team needs hands-on training to adopt sanctioned tools, a structured AI workshop compresses weeks of self-guided learning into a single day. For ongoing oversight of AI tool policies and vendor relationships, a Fractional AI Director provides that accountability without a full-time hire.

What to tell your board: "We've audited AI tool usage company-wide, upgraded to managed Team subscriptions at $25-30 per seat, classified our data into four tiers mapped to approved tools, and configured admin controls with audit logging. Implementation took 30 days and costs less than $30 per employee per month. We're aligned with Texas TRAIGA requirements and positioned for Colorado AI Act enforcement in June."

AI Data Privacy Regulation: What's Coming

AI-specific regulation is moving faster than most companies expect. Three developments matter for mid-market privacy planning.

[Image: AI data privacy regulation timeline for mid-market companies]

Texas TRAIGA (Texas Responsible AI Governance Act) took effect January 2026. It requires companies deploying "high-risk" AI systems to conduct impact assessments, maintain documentation, and notify consumers about automated decision-making. Penalties run up to $200,000 per violation.

Colorado AI Act takes effect June 2026. It targets "high-risk" AI systems that make consequential decisions about consumers in areas like employment, lending, insurance, and housing. Companies must document data inputs, test for algorithmic discrimination, and disclose AI involvement in decisions.

The broader wave. 45 states introduced over 1,000 AI-related bills in 2025. Most address data privacy, algorithmic transparency, or sector-specific AI use. Federal legislation remains fragmented, but state-level regulation is filling the gap rapidly.

Here's what the vendor white papers and cybersecurity blogs leave out: if you've followed the five protections in this guide, you already satisfy the core requirements of most pending legislation. Texas TRAIGA requires documentation of AI use and impact assessments. You built those in Protection #1 (your audit) and Protection #3 (your data classification guide). Colorado's AI Act requires transparency about AI-driven decisions. Your data classification guide and admin audit logs cover that. The companies facing regulatory scrambles aren't the ones using AI. They're the ones still running AI on consumer accounts with no governance structure, no documentation, and no audit trail. The five protections in this guide close that gap before the deadlines arrive.

An AI readiness checklist helps identify remaining gaps before the next deadline hits, and it takes five minutes to complete.

Frequently Asked Questions

Can my employees use ChatGPT safely for work?

Yes, with the right setup. This applies to ChatGPT, Claude, Gemini, and Copilot equally. The safest approach is a Team ($25-30/seat/month) or Enterprise subscription on any major platform, which disables training at the organization level and provides admin controls. Free accounts can also be made safer: every platform offers privacy settings to disable training, which drops retention to 30 days or less. Claude goes further and doesn't train on free-tier data at all unless users opt in. The difference is enforcement: on free accounts, each employee manages their own settings with no way for IT to verify. On Team accounts, the protection is automatic and admin-controlled. Take the free AI readiness assessment to see where your company stands.

Does ChatGPT use my business data for training?

On free and Plus accounts, yes, by default. You can turn it off in Settings > Data Controls by toggling off "Improve the model for everyone." When disabled, retention drops to about 30 days and your conversations aren't used for training. On ChatGPT Business (formerly Team) and Enterprise, training is disabled automatically with no individual action needed. The same pattern holds across platforms: Claude doesn't train on free-tier data unless you opt in, Gemini and Copilot offer similar opt-out toggles, and every vendor's business tier disables training organization-wide.

What's the difference between Team and Enterprise tiers for AI data privacy?

Every major platform (ChatGPT, Claude, Gemini, Copilot) follows the same pattern: both Team and Enterprise exclude your data from training. Enterprise adds SSO and SCIM integration (so users authenticate through your company's identity provider), custom data retention policies, HIPAA BAA eligibility, customer-managed encryption keys, and admin audit controls. For most mid-market companies without regulated data, Team is sufficient. Enterprise becomes necessary when you need compliance documentation for audits or handle health, financial, or legal data subject to industry-specific regulations.

How do I stop employees from using unapproved AI tools?

The most effective approach is making sanctioned tools readily available rather than blocking access. Provide Team or Business subscriptions for approved AI platforms, distribute a one-page guide mapping data types to approved tools, and make the policy simple enough that employees don't work around it. Network-level blocks tend to push usage to personal devices and phone hotspots, which creates less visibility than managed accounts. For a broader approach, the AI governance framework covers policy templates and approval workflows in detail.

When does a company need Amazon Bedrock or Azure OpenAI instead of a regular subscription?

When you're building AI into your product (processing customer data through AI pipelines), handling regulated data at volume (thousands of medical records or financial transactions), or needing VPC isolation where data never transits a third party's network. For a 200-person company using AI for internal productivity, Team subscriptions are almost always sufficient. Cloud platforms add significant cost and complexity; they're worth it only when subscription-tier controls can't meet your compliance or architectural requirements.

Take Action on AI Data Privacy

The gap between knowing about AI data privacy risks and fixing them is usually one purchasing decision and a few hours of admin configuration. For most mid-market companies, upgrading to Team or Business subscriptions and distributing a one-page data classification guide solves the majority of the problem.

If you're not sure where your company stands, the free AI readiness assessment takes five minutes and identifies your biggest exposure areas.

If you already know you need help mapping tool tiers to departments, configuring governance policies, or evaluating cloud platform options, book a free 30-minute strategy call and I'll walk through your specific situation.


Jonathan Lasley

Fractional AI Director

Jonathan Lasley is an independent Fractional AI Director based in Michigan, with 25+ years of enterprise IT experience. He helps mid-market companies turn AI from a buzzword into measurable business outcomes.

Learn more about fractional AI leadership

Ready to Turn AI into Results?

Take the free AI readiness assessment to see where your company stands, or book a strategy call to discuss your specific situation. No pitch, no pressure.

Take the Free Assessment

30 minutes. No pitch. Just clarity on your best AI opportunities.