AI Disclosure

Last updated: April 28, 2026

System2 is an AI product. This page tells you exactly what the AI does, what model powers it, what its limits are, and where you need to stay involved.

You are interacting with AI

When you chat with Ted, you are chatting with software backed by a large language model. Ted is not a person. Even when the conversation feels human, you are interacting with a probabilistic text generator coordinated by code.

We make this disclosure so there is no ambiguity. If you're uncertain in any conversation whether you're talking to a human or to AI, the answer is AI.

What model powers Ted

Ted is powered by Claude, a family of large language models developed by Anthropic, PBC. The specific Claude model varies by feature and is chosen for the right balance of cost and capability. We may upgrade or change the underlying model over time as Anthropic releases new versions; we do not guarantee a specific model will be available indefinitely.

What Ted can do

Within an authenticated System2 workspace, Ted can:

  • Read messages you send it, files you upload, and data from third-party Integrations you have connected.
  • Plan multi-step tasks ("quests") and execute them by spawning workers that call tools.
  • Take actions in connected services on your behalf — create a Linear issue, send a Slack message, draft a Google Doc, etc. — using credentials you provided.
  • Generate text, code, summaries, and structured outputs.
  • Maintain context across messages within a chat thread and across threads within a workspace.

What Ted cannot do

  • Access information you have not given it. Ted has no general web browsing unless you connect a tool that explicitly provides it.
  • Verify facts on its own. It can and will produce confident, fluent text that is wrong.
  • Replace professional judgment (legal, medical, financial, regulatory).
  • Take destructive actions silently. Worker actions are logged and visible in the activity feed in real time.
  • Persist memory beyond what we store in the database for the workspace. There is no shared memory across different companies.

Training data and your data

We do not use your conversations or content to train AI models.

Per our agreement with Anthropic, Anthropic does not use System2 customer content to train its models either. See Anthropic's Commercial Terms for the source.

The Claude models themselves were trained on data Anthropic describes in its published model cards (see Anthropic's site). We have no visibility into that training data beyond what Anthropic publishes.

AI-generated content disclosure

Output produced by Ted is AI-generated. It may be:

  • Factually wrong ("hallucinated") — Ted can invent citations, names, statistics, or facts and present them confidently.
  • Biased in ways inherited from the training data.
  • Inappropriate for your context, audience, or jurisdiction.
  • Similar or identical to output produced for other users given similar prompts.
  • Out of date — the model has a knowledge cutoff and does not know about events after that date.

Always review Output before publishing, sending, or relying on it for any consequential decision. If your jurisdiction requires you to disclose AI-generated content to recipients (e.g. AI-generated political advertising in some U.S. states), that obligation is yours to meet.

Automated decisions and human oversight

You may not use System2 as the SOLE basis for legally or similarly significant decisions about identifiable individuals. The full list is in our Acceptable Use Policy and includes: medical, legal, employment, credit, insurance, housing, immigration, education, and criminal justice decisions.

A qualified human in the relevant domain must review and take responsibility for these decisions.

You stay in control of every System2 quest:

  • You initiate the work; Ted doesn't act on its own.
  • You can interrupt or cancel a quest at any time.
  • Every action Ted takes in an Integration is logged and visible in the activity feed.
  • Account admins can review every tool call Ted made on the company's behalf.

Sensitive, regulated, or high-risk content

Do not submit content to System2 that you are not authorized to share with our subprocessors (see /subprocessors). In particular:

  • Protected Health Information (PHI) — System2 does not currently sign HIPAA Business Associate Agreements. Don't submit PHI.
  • Cardholder data and full payment card numbers — handle through your payment processor instead.
  • Government-classified information — we are not authorized to process classified material.
  • Other regulated data (CJI, FERPA student records, etc.) — only with our prior written agreement that the specific use is supported.

Reporting AI issues

If Ted produces output that is harmful, biased, or clearly wrong, tell us. We take these reports seriously and use them to improve guardrails.

Regulatory references

This disclosure is designed to address transparency requirements in the EU AI Act (Article 50), the UK's guidance on AI in services, California SB 942 (the California AI Transparency Act), and the Colorado AI Act's consumer-protection rules. We update it as the regulatory landscape evolves.


Questions about this document? legal@autono.sh

Postal: Autono Labs, Inc. (operator of System2), 131 Continental Drive, Suite 305, Newark, DE 19713, USA. See /legal for our full set of policies.

© 2026 Autono Labs, Inc. All rights reserved. System2 is a product of Autono Labs, Inc.