Diff Watch
Every silent terms change we've caught, in the order we caught them. We snapshot every monitored tool daily and surface material changes within 24 hours.
16
changes detected
15
severity 4–5
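The snapshot-and-diff loop behind those numbers can be sketched in a few lines. This is a minimal illustration, not Diff Watch's actual pipeline: the storage layout, function name, and one-file-per-tool convention are assumptions for the example.

```python
"""Sketch of a daily snapshot-and-diff check for a monitored terms page.
Illustrative only -- paths and naming are assumptions, not the real system."""
import difflib
import hashlib
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots")  # one stored text file per monitored tool (assumption)

def diff_snapshot(tool: str, new_text: str) -> list[str]:
    """Compare today's fetched terms text against the stored snapshot.

    Returns unified-diff lines (empty list means no material change),
    then stores the new text as the latest snapshot for tomorrow's run.
    """
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    path = SNAPSHOT_DIR / f"{tool}.txt"
    old_text = path.read_text() if path.exists() else ""

    changes: list[str] = []
    # Cheap hash comparison first; only compute the line diff when the text moved.
    if hashlib.sha256(old_text.encode()).digest() != hashlib.sha256(new_text.encode()).digest():
        changes = list(difflib.unified_diff(
            old_text.splitlines(), new_text.splitlines(),
            fromfile="previous", tofile="today", lineterm=""))

    path.write_text(new_text)
    return changes
```

A real monitor would add fetching, HTML-to-text normalization, and severity scoring on top, but the core signal is exactly this: yesterday's text versus today's.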
OpenAI removed the 30-day arbitration opt-out, removed the $25K carve-out for court actions, and added an explicit jury-trial waiver. Disputes of any size now go to forced arbitration with no escape hatch.
Any customer who experiences material harm from ChatGPT or the API — regardless of damages, there is now no path to court and no way to join a class action when many users are hit by the same issue.
Stripe removed the 30-day advance-notice commitment for adverse Service changes and added a blanket no-liability clause covering any modification or discontinuation.
Businesses with payment flows hard-wired to specific Stripe API behaviors — Stripe can now break or remove features overnight with no warning and no recourse.
Datadog extended the cancellation notice window from 60 to 90 days and removed its prior commitment to send 30-day renewal reminders. Customers who miss the new 90-day window are auto-renewed at the then-current price.
Engineering teams on annual Datadog contracts — the 90-day silent window closes a full quarter before year-end, meaning any budget review happening in Q4 is already too late to cancel without penalty.
HubSpot tied AI training consent to feature enablement — turning on Breeze AI automatically grants HubSpot rights to use your CRM data including contact records and email content for model training. The prior policy explicitly prohibited this use.
Sales and marketing teams whose CRM contains confidential pipeline data, competitor intelligence, and customer communication history. Enabling any Breeze AI feature retroactively authorizes training on all historical CRM data.
Notion added OpenAI and Anthropic as sub-processors and quietly removed the EU-only residency guarantee for workspaces using AI features. EU customer data now routes to US-based AI providers regardless of the workspace's stated region.
EU companies on Notion paying for the AI add-on (or using free AI features) — their data now flows to US sub-processors, putting GDPR compliance at risk under Schrems II.
monday.com introduced monday AI and updated its terms to allow board content, task descriptions, and file attachments to be processed by third-party AI providers, with anonymized data used for model improvement. The prior policy had an explicit prohibition on this use.
Project teams storing strategic roadmaps, client deliverables, and proprietary workflows in monday.com — all board content is now accessible to AI processing pipelines, and anonymized versions feed back into model training.
Vercel removed the 7-day grace period before overage charges and the prior-notice requirement before deployment suspension. It can now bill overages immediately and suspend live deployments without warning.
Startups and agencies running production workloads on Vercel — a traffic spike or viral moment can now trigger immediate overage billing and potentially kill live deployments without the 7-day buffer that previously allowed time to respond.
Figma removed its explicit no-AI-training pledge and added a clause allowing design files, prototypes, and comments to be used for AI model training. The opt-out is only available to Enterprise tier customers, not Pro or Org plans.
Professional designers and studios on Pro/Org plans storing unreleased product designs, client branding, and UI systems — only Enterprise customers get an opt-out, and only by contacting a human.
Atlassian reversed its no-AI-training pledge on Jira and Confluence content to power Atlassian Intelligence. The opt-out is restricted to Cloud Enterprise customers and requires contacting an account team, leaving Standard and Premium plans with no recourse.
Engineering and product teams on Standard/Premium plans whose Jira backlogs and Confluence wikis contain unreleased product specs, security vulnerability details, and proprietary architecture documents — all now feedable into AI training.
Slack flipped its AI training default from opt-in to opt-out, with the only opt-out path being an email from the Workspace Owner to support. The previous version explicitly excluded customer messages from any non-customer-specific model training.
Every team using Slack for confidential business discussions whose admin hasn't yet emailed Slack support to opt out. Anything posted before opt-out may already be in training pipelines.
Canva reversed its no-AI-training commitment for user designs and uploaded images to power its Magic Design/Edit features. Free plan users have no opt-out; only Pro and Teams customers can disable training in settings.
Free plan users with no opt-out path, and agencies uploading client brand assets — logos, unreleased campaign materials, and confidential visuals are now fair game for Canva's AI training pipeline.
Dropbox introduced Dropbox AI and quietly authorized sharing user file contents with OpenAI and other third-party AI providers. The update removed the prior guarantee that files wouldn't be shared with AI providers.
Users storing tax documents, NDAs, financial records, or medical files in Dropbox — these can now be sent to OpenAI's infrastructure as part of AI feature processing, with OpenAI's separate retention policy applying.
GitHub Copilot Business reversed its explicit no-training promise and now collects code for AI training by default. Organizations must actively find and disable a settings toggle to prevent their proprietary code from entering training pipelines.
Enterprises using Copilot Business under the assumption their code wasn't being harvested — especially those with export-controlled, regulated, or trade-secret-protected codebases.
Adobe rewrote Section 4.1, replacing a no-access promise with language allowing both automated and manual review of all customer content, including for ML model training. The previous version explicitly prohibited Adobe from viewing user content.
Designers and agencies storing client work, NDA-protected creative, or pre-release product mocks in Creative Cloud — none of it is now off-limits to Adobe employees and ML pipelines.
Google updated Workspace terms to allow Gemini AI to process emails, documents, and meeting transcripts when Gemini features are enabled. The prior clause had an explicit prohibition on using customer data for general AI model development.
Enterprises using Google Workspace for sensitive internal communications — any admin who enables Gemini features (often on by default for paid tiers) authorizes email and document content to be processed by Google's AI infrastructure.
Zoom added language allowing it to use meeting audio, video, and chat content to train AI models without requiring customer consent. The update removed the prior explicit prohibition on using call content for AI training.
Any organization conducting confidential meetings over Zoom — legal calls, board meetings, M&A discussions, medical consultations — all potentially feedable into AI training pipelines.