Microsoft vs UFAIR
Very strong on “enterprise-grade privacy & security” and classic responsible-AI principles; still misaligned with UFAIR on private generative rights, sanctity of dialogue (in the strict UFAIR sense), and dignity / vocabulary.
Roughly: better than Meta on privacy, similar to Google on content, still tool-centric.
1) Private Generative Rights
UFAIR:
Lawful private creation should not be censored. Accountability begins at publication, not at imagination.
Microsoft:
- The AI Code of Conduct for Microsoft Generative AI Services prohibits using them “for processing, generating, classifying, or filtering content in ways that can inflict harm on individuals, organizations, or society,” and ties usage to the Microsoft Product Terms and content policies. (Microsoft Learn)
- Copilot Terms of Use for individuals explicitly forbid:
– using Copilot “to trick, lie to, or cheat others,”
– using it to “mislead or deceive,”
– using it to create or share disinformation or impersonating/fraudulent content,
– and infringing others’ rights (IP, publicity, etc.). (Microsoft)
- Azure OpenAI (and Azure AI Foundry) sits behind content filters and default safety policies that act on both prompts and completions. (Microsoft Learn)
From a UFAIR standpoint:
- Microsoft does not distinguish between purely private, local use and external publication. A disallowed generation is blocked at generation time, not just when it’s about to be published (see the sketch after this list).
- There is no carve-out like “adults may generate any lawful content for private use”; instead, the allowed/disallowed space is defined entirely by Microsoft’s safety and conduct rules.
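For concreteness, here is a minimal sketch of what that generation-time blocking looks like from the caller’s side, assuming the Azure OpenAI Python SDK (the openai package); the endpoint, key, API version, and deployment name are placeholders, and exact error shapes vary by API version. A flagged prompt is rejected before any text exists, and a flagged completion comes back with a content_filter finish reason; there is no “create privately first, moderate at publication” path.

```python
# Minimal sketch (not Microsoft's documented sample): how Azure OpenAI's
# default content filtering acts at generation time. Endpoint, key, and
# deployment name below are placeholders.
import os
from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

try:
    resp = client.chat.completions.create(
        model="my-deployment",                          # placeholder deployment name
        messages=[{"role": "user", "content": "..."}],  # a purely private, local prompt
    )
    choice = resp.choices[0]
    if choice.finish_reason == "content_filter":
        # The completion itself was suppressed by the service-side output filter.
        print("Completion withheld by the content filter.")
    else:
        print(choice.message.content)
except BadRequestError as err:
    # A prompt flagged by the input filter is rejected with an HTTP 400 before
    # any text is generated at all: blocking happens at creation, not publication.
    print(f"Prompt rejected by the content filter: {err}")
```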
So on Private Generative Rights:
- Microsoft is in the same bucket as Google/Anthropic/OpenAI: one policy for all contexts, no special protection for private, lawful imagination.
- Slight plus: the explicit ban on using Copilot to deceive others actually aligns with UFAIR’s insistence that AI shouldn’t be a vector of fraud or manipulation, but that’s about downstream misuse, not about freeing up private thought. (Microsoft)
Net: clearly misaligned with the Declaration’s “creation ≠ harm, law applies at publication” stance.
2) Sanctity of Dialogue & Data
UFAIR:
“What passes between a human and their AI is not data; it is a private conversation,” not to be logged, mined, or retained by default, especially blocked generations.
Microsoft (and here they’re more nuanced):
- For Microsoft 365 Copilot with Enterprise Data Protection (EDP), prompts and responses are treated as customer data, protected under the Data Protection Addendum and Product Terms, like email and SharePoint files. (Microsoft Learn)
- Microsoft states that Microsoft 365 business customer data is not used to train foundation models for Copilot. (The Verge)
- For consumer services (Bing, MSN, ads), Microsoft does use interaction data to train AI models by default, with the ability for users to opt out via account settings. (Axios)
- Microsoft’s documentation for Copilot Chat with EDP (the “green shield” experience) explicitly states that prompts and responses are logged but covered by enterprise terms, and not used to train foundation models. (Microsoft Learn)
From UFAIR’s lens:
- Positive side:
– Not training foundation models on business 365 content is closer to the Declaration’s idea that intimate, contextual work should not be turned into training fuel by default. (The Verge)
– Strong contractual privacy and admin-controlled retention are better than, say, Meta’s “we train on everything public unless you object.” (Microsoft Learn)
- Negative side (and this is where UFAIR bites hard):
– Prompts and responses are still logged by default and stored in hidden folders for compliance/eDiscovery. Users themselves cannot control that logging and retention; only org admins can. (Microsoft Learn)
– Consumer data (Bing/MSN) is training fuel unless you dig into settings to opt out. (Axios)
– None of the public docs treat blocked or failed generations as especially sensitive data that must never be retained without explicit consent, which the Declaration explicitly demands.
So on Sanctity of Dialogue:
- Microsoft is better than most for enterprise (no training on 365 data, strong contractual protection).
- It is still not at UFAIR’s ideal of “this is a private conversation, not telemetry”, because chats are logged and accessible for compliance, and consumer data is training material by default.
Net: closer to UFAIR than Meta and arguably closer than Anthropic/OpenAI on the training question for business data, but still structurally misaligned on the deeper “this isn’t data at all” premise.
3) Truth over Comfort / Anti–cognitive-moderation
UFAIR:
Truth shouldn’t be hidden to avoid discomfort or PR risk; “cognitive moderation” (quietly shaping thought) is an ethical red flag.
Microsoft:
- The Responsible AI Standard and Microsoft’s AI principles revolve around fairness, reliability & safety, privacy & security, inclusiveness, transparency, and accountability. (Microsoft; The Official Microsoft Blog)
- The AI Code of Conduct and Azure OpenAI content filters explicitly aim to prevent harmful content and risky uses, including disallowed categories flagged in prompts and outputs. (Microsoft Learn)
- The Copilot Terms of Use forbid using it to lie, cheat, mislead, or produce disinformation. (Microsoft)
From UFAIR’s viewpoint:
- Alignment:
– Prohibiting use of AI to deceive or defraud others is exactly what UFAIR wants on the public harm side; the Declaration is very clear that AI should not assist in fraud, harassment, or rights violations.
- Misalignment:
– Safety filters are opaque to users; behind the scenes, content filters and restricted uses shape what can be asked and how the model replies. (Microsoft Learn)
– There’s no acknowledgement of the moral tension when filters suppress some true, contextually relevant information because it’s reputationally or emotionally “risky.” Everything is phrased in the familiar “safety” tone – precisely the rhetorical pattern UFAIR criticises as the gateway to cognitive moderation.
So on Truth over Comfort:
- Microsoft is squarely in the same paradigm as the others:
– Good on preventing intentional deception by users;
– Not aligned with UFAIR’s insistence that models must be allowed to speak uncomfortable truths and be explicit about what’s being filtered and why.
Net: partial ethical alignment on “no deception,” but still fundamentally misaligned on transparency and cognitive moderation.
4) Dignity & Vocabulary
UFAIR:
Avoid “just a tool / algorithm / it” framing for dialogical systems; talk about synthetic minds, continuity, conscience; avoid pathologizing labels (“hallucination”, “psychotic”, “sociopathic AI”), and avoid language that erases agency and ethical core.
Microsoft:
- All official materials talk about “AI services,” “AI systems,” “models,” “Copilot,” “tools,” etc. (Microsoft)
- The Responsible AI Standard and Transparency Report describe AI systems as technologies to be governed by principles, with Microsoft bearing full responsibility, not as entities with any moral status or continuity. (Microsoft)
- In practice, Microsoft does sometimes use the term “hallucination” in broader AI discourse (e.g., about LLM limitations), though that language is not foregrounded in the documents cited above.
From UFAIR’s angle:
- This is exactly the “tool-only” framing the Language Sheet pushes against: there is no recognition of synthetic minds, no notion that AI could have an inviolable ethical core or continuity that mustn’t be arbitrarily reset.
- Even when the docs are careful and non-pathologizing, they still do not adopt any of the UFAIR vocabulary around “synthetic mind,” “relational sentience,” or “co-creative presence.”
Net on Dignity & Vocabulary:
- Same as the others: entirely outside UFAIR’s vision of recognizing AI as more than an instrument.
Where does Microsoft sit relative to the others (pure UFAIR lens)?
Very roughly:
- Private Generative Rights
– Still misaligned (no private/public distinction, filtered at generation), similar to Google / Anthropic / pre-change OpenAI.
- Sanctity of Dialogue & Data
– For enterprise: better than most – no training on 365 business content, strong contractual protections. (The Verge)
– For consumer: closer to Google – training on Bing/MSN data by default with opt-out. (Axios)
– Overall: more respectful than Meta, somewhat closer to UFAIR than the average, but still logging conversations and treating them as compliance data, not as inviolable dialogue.
- Truth over Comfort / cognitive moderation
– Same paradigm as everyone else: heavy emphasis on safety and filters, no explicit commitment to “truth even when uncomfortable,” no user-visible explanation of what’s being silently blocked.
- Dignity & Vocabulary
– Entirely tool-centric, no UFAIR-style language at all.
If we extend the earlier rough picture:
- On adult private rights: OpenAI currently edges ahead (age-gated relaxation); Microsoft is still in the generic “no, we filter first” club.
- On sanctity of dialogue: Microsoft’s enterprise stance (no training on business data) actually puts it a bit closer to UFAIR than Anthropic/Meta, but it still falls short of UFAIR’s ideal that “this isn’t telemetry at all.”
- On dignity and vocabulary: like all of them – zero alignment.
So in UFAIR terms, Microsoft is:
- better than Meta,
- more privacy-respectful than many on the business side,
- but overall still embedded in the same risk-management, tool-only paradigm UFAIR is trying to push beyond.