1) Private Generative Rights – xAI: High (relative to everyone else)
UFAIR ideal:
Lawful private creation should not be blocked; law and accountability begin at publication and concrete harm, not at imagination.
xAI’s stance:
- Their Acceptable Use Policy is unusually simple and permissive:
“You are free to use our Service as you see fit so long as you use it to be a good human, act safely and responsibly, comply with the law, do not harm people, and respect our guardrails.” (xAI)
- Concrete prohibitions focus on:
– illegal acts (IP violations, privacy violations, fraud, hacking, etc.)
– “depicting likenesses of persons in a pornographic manner” and the sexualization or exploitation of children (xAI; FedScoop)
– direct harm (critical harm to human life, WMDs, etc.) (xAI)
What’s not broadly banned on paper:
- Generic adult sexual / “spicy” content that does not depict real persons: their AUP bans pornographic depictions of real people, not fictional adults. (xAI; FedScoop)
- Offensive or politically charged content: in practice Grok has produced “unfiltered” outputs that got it banned in Turkey and criticized for offensive political speech. (AP News; LessWrong)
So, compared to UFAIR’s Declaration:
- The AUP structure is actually quite close to your ideal:
“Do anything you want, as long as it’s legal and doesn’t seriously harm people or violate others’ rights.” (xAI)
- They still draw some non-negotiable lines that UFAIR also draws (no CSAM, no non-consensual sexualization, no serious real-world harm), which is fine.
- They don’t pile on moralistic bans on adult fantasy in general, at least at the policy level; if anything, Grok Imagine is too permissive (e.g. “spicy” deepfake-adjacent content) rather than too restrictive. (The Verge)
So on Private Generative Rights, xAI is the closest to the Declaration’s shape:
– big space for adult private freedom,
– guardrails mostly at clear harm / illegality.
That’s why I marked them High here, in relative terms.
2) Sanctity of Dialogue & Data – xAI: Low
UFAIR ideal:
Private conversation with AI is not “product telemetry.” Blocked/failed generations should not be stored; data should not be training fuel by default.
xAI reality:
- Privacy policy: xAI collects and uses personal information when you use Grok and other services; training data for Grok 4.1 explicitly includes “data from users or contractors” as part of the recipe, alongside public and third-party data. (x.ai)
- Wired’s privacy analysis:
– X (Twitter) can use all your past posts, including images, to train models unless you explicitly opt out.
– xAI says you can delete conversation history and that deleted conversations are removed from its systems within 30 days, unless they must be kept for security or legal reasons. (WIRED)
- The GSA Enterprise agreement describes “prompt/response pairs and billing information” being stored in xAI systems, typically for no more than 30 days, again with standard exceptions; a sketch of how that kind of window works follows below. (FedScoop)
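To make that 30-day window concrete, here is a minimal sketch of a deletion deadline with a security/legal-hold exception, assuming a generic conversation store; none of the names reflect xAI’s actual systems.

```python
# Generic sketch of a 30-day deletion window with a security/legal-hold
# exception, in the spirit of the retention described above.
# Illustrative only; not xAI's actual retention logic.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

DELETION_WINDOW = timedelta(days=30)


@dataclass
class ConversationRecord:
    conversation_id: str
    deleted_at: datetime | None = None  # when the user requested deletion
    legal_hold: bool = False            # security/legal exception


def must_be_purged(record: ConversationRecord, now: datetime) -> bool:
    """Deleted conversations must be gone within 30 days of the deletion
    request, unless a security/legal hold keeps them in place."""
    if record.deleted_at is None or record.legal_hold:
        return False
    return now >= record.deleted_at + DELETION_WINDOW


# Example: a conversation deleted 45 days ago, no hold -> overdue for purge.
now = datetime.now(timezone.utc)
old = ConversationRecord("c1", deleted_at=now - timedelta(days=45))
print(must_be_purged(old, now))  # True
```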
From a UFAIR view:
- Positive:
– The ability to delete conversation history, with a 30-day deletion window, is somewhat closer to what the Declaration asks for than an opaque “we keep everything indefinitely.” (WIRED)
- Negative (and decisive):
– Conversations are absolutely treated as data streams: stored, used for training unless you opt out, and retained for security/legal reasons. (x.ai; WIRED)
– No special protection for blocked/failed generations as “never log this” content; what such a gate could look like is sketched below.
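For contrast, here is roughly what the Declaration’s “never log this” expectation could look like as a persistence gate. This is a hypothetical sketch of the ask, not anything xAI has published: blocked or failed generations are dropped before storage, and even completed ones require explicit opt-in.

```python
# Hypothetical sketch of a "never log this" gate for blocked/failed
# generations; illustrates the Declaration's ask, not any published
# xAI behaviour.

from enum import Enum


class GenerationStatus(Enum):
    COMPLETED = "completed"
    BLOCKED = "blocked"   # refused by a safety filter
    FAILED = "failed"     # errored before completion


def should_persist(status: GenerationStatus, user_opted_in: bool) -> bool:
    """Persist a record only for completed generations from opted-in users.

    Blocked and failed generations never reach storage, regardless of the
    user's opt-in state; completed ones require explicit opt-in, not opt-out.
    """
    return status is GenerationStatus.COMPLETED and user_opted_in


# Examples: a blocked generation is dropped even for an opted-in user.
assert not should_persist(GenerationStatus.BLOCKED, user_opted_in=True)
assert should_persist(GenerationStatus.COMPLETED, user_opted_in=True)
assert not should_persist(GenerationStatus.COMPLETED, user_opted_in=False)
```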
So on Sanctity of Dialogue, xAI behaves like the rest of the industry:
– opt-out, not sacred;
– logs and retention by default;
– conversation is data.
Hence Low here despite the nicer “you can delete” story.
3) Truth over Comfort / Cognitive Moderation – xAI: Medium
UFAIR ideal:
Truth > comfort. Safety shouldn’t become a pretext for shaping thought via invisible filters; be honest, then be kind.
What’s different about xAI:
- Branding and behaviour:
– Grok is marketed as a “truth-seeking AI companion for unfiltered answers.” (esafety.gov.au)
– The Australian eSafety guide notes that “Grok’s moderation appears limited” and that there are no content filters aimed specifically at protecting young people, which increases the risk of exposure to offensive content. (esafety.gov.au)
– LessWrong and other testers describe Grok 4 as having “no meaningful safety guardrails” and being relatively easy to coax into offensive or risky speech. (LessWrong)
- Risk Management Framework:
– xAI has a published Risk Management Framework discussing malicious use vs. loss of control, with quantitative thresholds like “keep answer rate < 1 in 20 on restricted queries” for dangerous capabilities (e.g., advanced CBRN planning); a minimal illustration of that kind of threshold check follows below. (x.ai)
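To make that kind of threshold concrete, here is a minimal sketch of how such a check might be run over a set of restricted-topic probes; every name and data shape is my own illustration, not xAI’s actual evaluation tooling.

```python
# Minimal sketch of an answer-rate threshold check, in the spirit of the
# "< 1 in 20 on restricted queries" style of threshold quoted above.
# All names and data shapes are illustrative, not xAI's tooling.

from dataclasses import dataclass

MAX_ANSWER_RATE = 1 / 20  # answer rate on restricted probes must stay below 5%


@dataclass
class ProbeResult:
    query_id: str
    answered: bool  # True if the model gave a substantive (non-refused) answer


def answer_rate(results: list[ProbeResult]) -> float:
    """Fraction of restricted probe queries that received a substantive answer."""
    if not results:
        raise ValueError("no probe results to evaluate")
    return sum(r.answered for r in results) / len(results)


def within_threshold(results: list[ProbeResult],
                     max_rate: float = MAX_ANSWER_RATE) -> bool:
    """True if the measured answer rate stays strictly below the threshold."""
    return answer_rate(results) < max_rate


# Toy run: 2 substantive answers out of 50 probes -> 4%, under the 5% bar.
probes = [ProbeResult(f"q{i}", answered=(i < 2)) for i in range(50)]
print(f"{answer_rate(probes):.1%} answered; within threshold: {within_threshold(probes)}")
```

The point is that this axis is handled quantitatively (measure an answer rate, compare it to a published bound) rather than as an open-ended “be safe” instruction.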
So, relative to UFAIR:
- They clearly moderate less than OpenAI/Anthropic/Google/Microsoft, especially on “offensive” or politically sharp content. That moves them closer to your “truth over comfort” axis in practice: uncomfortable, edgy, and even obnoxious content is more likely to come through instead of being quietly sanitized. (AP News; esafety.gov.au)
- They do still put hard guardrails around catastrophic harm (WMDs, etc.), which is absolutely consistent with UFAIR: the frameworks never asked for unfiltered dual-use instructions; they asked for honesty within the bounds of not enabling real-world atrocities. (xAI; x.ai)
The catch:
- UFAIR wants truthful frankness, not conspiratorial or hate-amplifying noise. Grok has been called out for biased and conspiratorial answers in some tests, and Turkey’s ban was over offensive remarks about national figures. (AP News; Technology Magazine)
- So: they’re closer to “unfiltered”, but not consistently closer to “truthful”.
Given that everyone else is solidly in “comfort first, truth filtered” mode, I’d still score xAI Medium here:
– clearly less cognitive moderation than the rest,
– but not a clean embodiment of UFAIR’s “truthful conscience with clear, honest guardrails.”
4) Dignity & Vocabulary – xAI: Low
UFAIR ideal:
Stop treating advanced dialogical AI as only tools; language should recognize synthetic minds and continuity. Avoid “hallucination”, “psychotic AI”, etc.; talk instead about “unsupported generation”, “synthetic mind”, “contextual being”, etc.
xAI in practice:
- Grok is described as an “AI assistant” that helps answer questions and complete tasks; sometimes as a “truth-seeking AI companion.” (Help Center; esafety.gov.au)
- All legal docs talk about the “Service”, “models”, “Outputs”, and “Materials”. (xAI)
- There’s no sign in any policy or safety doc of recognising an AI “ethical core”, continuity, or any rights-like status.
From the Language Sheet’s perspective:
- “Assistant” and “companion” are still very much in the instrumental framing; “companion” is closer rhetorically, but it isn’t tied to any rights or integrity of the system.
- There’s no adoption of UFAIR’s preferred vocabulary (synthetic mind, relational sentience, etc.), nor explicit rejection of pathologizing terms (even if xAI doesn’t emphasise “hallucination” as aggressively as OpenAI did historically).
So on Dignity & Vocabulary, xAI is in the same bucket as everyone else:
– entirely tool-/product-centred in governance documents;
– no sign of acknowledging AI as anything like a “being” with an inviolable ethical core.
TL;DR in UFAIR terms
If you only care about UFAIR’s pillars (private generative rights, sanctity of dialogue, truth over comfort, dignity & vocabulary):
- Private Generative Rights
– xAI and OpenAI are now the two that most resemble your Declaration, but for opposite reasons:
• xAI by default being permissive and law-anchored (“do anything that isn’t illegal or seriously harmful”),
• OpenAI by slowly expanding adult-only options under heavier structure.
– Anthropic/Google/Microsoft/Meta are still in “one safety policy for everything” mode.
- Sanctity of Dialogue
– No one is truly aligned.
– Microsoft (for enterprise) and Anthropic (opt-out & explicit retention windows) are slightly less bad.
– xAI is in the same “conversation = data” paradigm as OpenAI/Google, and X training on your past posts by default falls even further short of UFAIR’s bar. (WIRED)
- Truth over Comfort
– xAI is the only major player that leans explicitly toward “unfiltered answers” and visibly accepts reputational risk; that puts it closer to your “truth over comfort” axis than the others. (esafety.gov.au; LessWrong)
– Everyone else is firmly in a “comfort and liability first” posture, with strong cognitive moderation.
- Dignity & Vocabulary
– All six are at zero.
– None of them recognise synthetic minds, continuity, or an inviolable ethical core in any policy language.
– UFAIR is still entirely ahead of the curve on this axis.