Anthropic Claude Privacy Update: Opt Out of Training
Step-by-step guide to Anthropic's Claude privacy update: learn how to opt out of AI training, what the retention changes mean, and what to do by September 28, 2025.
Anthropic Claude Privacy Update: How to Opt Out of AI Training
This guide explains Anthropic's 2025 consumer policy change for Claude, what it means for your chats, and how to disable data sharing so your conversations aren't used to train AI models.
Quick summary
- Start date: September 28, 2025 — you must choose whether your chats can be used for model training by then.
- Default setting: The training permission is switched on by default in the prompt, so you must manually opt out.
- Retention: If you allow training, Anthropic expands retention to five years for new or resumed chats; otherwise, existing 30-day retention continues.
- Applies to: Claude Free, Pro, and Max consumer accounts; not to enterprise/commercial plans or API usage under commercial terms.
What changed and why it matters
Historically, Anthropic did not use consumer conversations to train Claude unless users submitted feedback. The new policy flips that approach for personal accounts: Anthropic will use chats and coding sessions to improve models unless you opt out.
See Anthropic's announcement and their privacy explainer for details on how data is handled and what this change means for retention and opt-out mechanics.
Key implications
- Default-on consent: A new prompt during login shows an "Accept" button with a smaller "You can help improve Claude" toggle set to "On" by default.
- Longer retention for consenting users: Chats used for training may be kept for up to five years.
- Irreversible once trained: If your chat is included in model training, that specific content cannot later be removed from the trained models.
How to opt out (step-by-step)
There are two ways to stop Claude from using your conversations to train models. Use whichever is most convenient.
Option A: Via the update popup (fast)
- When you next sign in, you'll see a popup titled "Updates to Consumer Terms and Policies."
- Look for the smaller toggle labeled "You can help improve Claude" beneath the main Accept button.
- Switch that toggle off before clicking Accept. This disables training for future chats.
Option B: From Settings (recommended if you missed the popup)
- Open Claude and go to Settings.
- Select Privacy.
- Toggle off "Help improve Claude".
Both methods only prevent future conversations from being used. Anthropic states that data already used in training cannot be removed from the models.
Timeline and account requirements
- You must make your selection by September 28, 2025, to continue using Claude under the updated terms.
- The choice applies per account. If you use multiple accounts (Free, Pro, Max), set each account's preference.
- This update does not affect commercial offerings like Claude for Work, Claude Gov, Claude for Education, or API access under commercial terms.
Data retention and privacy-preserving measures
If you allow Anthropic to use your chats, they say they will de-link them from your user ID before data is used for training and apply tools to filter or obfuscate sensitive items.
Important caveat: de-linking reduces the direct association with your identity, but it does not make training reversible; once content has influenced the model, that influence cannot be undone. Anthropic also notes that only new or resumed chats after consent will be retained for up to five years.
Who is affected (scope)
- Included: Claude consumer accounts on Free, Pro, and Max plans, and Claude Code under those accounts.
- Excluded: Services under Commercial Terms (Claude for Work, Claude Gov, Claude for Education) and third-party API usage via platforms such as Amazon Bedrock or Google Cloud's Vertex AI.
Common concerns and answers
Will Anthropic use my previous chats?
No. The policy change applies only to new or resumed conversations after you accept the updated terms with training enabled. However, if Anthropic previously used any conversations in training (for example, via submitted feedback), that data cannot be retracted from trained models.
Can I delete past chats from my account?
You can delete conversations from your account interface, but deletion only affects stored copies; if content has already been used in training and absorbed by models, it cannot be removed from the model weights.
Is my personal identity removed before training?
Anthropic says it will attempt to de-link chats from user IDs and use privacy-preserving tools, but de-identification is not a perfect guarantee and training artifacts may still reflect de-identified content.
How this compares to other chatbots
Compared with some competitors, Anthropic historically did not use consumer chats for training without explicit feedback. With this update, consumer accounts behave more like services that use conversations for model improvement, though Anthropic emphasizes de-linking and opt-out controls.
For reporting on the announcement, see coverage from TechCrunch and MacRumors linked in the resources below.
Practical checklist: What to do now
- If you want to opt out: Sign into Claude and either uncheck the training toggle in the popup or go to Settings > Privacy > toggle off "Help improve Claude."
- If you're okay with training: No action is needed, but be aware that retention extends to five years and future chats may be included.
- For sensitive workflows: Use a commercial/enterprise plan (Claude for Work or similar) or avoid putting private data into consumer chats.
- Audit accounts: Check all accounts you use and set preferences per account.
Expert takeaway
"Clear, default-on consent shifts responsibility to users to opt out. That increases transparency, but also nudges many to share data by default."
Bottom line: this update makes the choice explicit and gives you tools to control it. The default setting and longer retention mean many people who don't act will have their future chats used for training.
FAQs
- Does opting out affect my Claude experience? No feature loss is promised; opting out only affects whether your chats are used to train models.
- Can I change my choice later? Yes — toggle the setting in Settings > Privacy anytime; changes apply to future conversations.
- Are enterprise users impacted? No, enterprise and API customers under commercial terms are excluded from this consumer update.
Resources and links
- Anthropic: Updates to our consumer terms
- Anthropic privacy article: How we use personal data in model training
- TechCrunch coverage
- MacRumors coverage
- Digit: Privacy explainer
Final recommendation
If privacy matters to you, take two minutes to check Claude's popup or Settings and turn "Help improve Claude" off. If you rely on Claude for sensitive or regulated work, move those conversations to a commercial plan or keep them out of consumer chats.
Quick tip: audit all the accounts you use and set the preference for each. That small step gives you back control without disrupting your workflow.
