AI Products Detected
Copilot
AI Training Opt-Out
Opt-out availability unclear
Legal Bases for AI
consent, legitimate interest, contract performance, legal obligation
AI Usage & Disclosure
AI Disclosure

The policy explicitly uses the terms "artificial intelligence," "AI," and "machine learning" in several sections, clearly indicating its use in product development, improvement, marketing, and research.

Our automated methods often are related to and supported by our manual methods. For example, to build, train, and improve the accuracy of our automated methods of processing (including artificial intelligence or AI), we manually review some of the output produced by the automated methods against the underlying data.

As part of our efforts to improve and develop our products, we may use your data to develop and train our AI models.

If you consent to receiving marketing communications to a phone number you provide us, we may contact you for marketing purposes using an auto-dialer and/or artificial/prerecorded voice, which may be generated using artificial intelligence.

Research. With appropriate technical and organizational measures to safeguard individuals’ rights and freedoms, we use data to conduct research, including advanced machine learning and artificial intelligence capabilities for the benefit of the public interest and scientific purposes.

Disclosed
Other

AI is used to build, train, and improve the accuracy of automated processing methods, develop and train AI models for product improvement, and shape the development of new products.

to build, train, and improve the accuracy of our automated methods of processing (including artificial intelligence or AI)

100%
Personalization

Automated processes, likely AI-driven, are used to tailor product experiences and make recommendations based on user data, inferences, activities, interests, and location.

These features use automated processes to tailor your product experiences based on the data we have about you, such as inferences we make about you and your use of the product, activities, interests, and location.

90%
Other

Automated processes are used to target advertising and make it more relevant. The policy also states that artificial/prerecorded voices in marketing calls may be generated using AI.

If you consent to receiving marketing communications to a phone number you provide us, we may contact you for marketing purposes using an auto-dialer and/or artificial/prerecorded voice, which may be generated using artificial intelligence.

100%
Content Processing

Content (such as emails and files) is scanned automatically to detect spam, viruses, abusive actions, and URLs flagged as fraud, phishing, or malware links; Microsoft reserves the right to block delivery of communications or remove content that violates its terms.

some of our products, such as Outlook.com or OneDrive, systematically scan content in an automated manner to identify suspected spam, viruses, abusive actions, or URLs that have been flagged as fraud, phishing, or malware links; and we reserve the right to block delivery of a communication or remove content if it violates our terms.

100%
Moderation

AI-driven scanning technologies create digital signatures of images and video content, compare them to known child sexual exploitation and abuse imagery hashes, and detect misuse of video-calling for such content, potentially leading to sharing information with law enforcement.

We use scanning technologies to create digital signatures (known as “hashes”) of certain images and video content on our systems. These technologies then compare the hashes they generate with hashes of reported child sexual exploitation and abuse imagery (known as a “hash set”), in a process called “hash matching”.

100%
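The hash-matching process quoted above can be illustrated with a minimal sketch. Note the assumptions: real deployments use perceptual hashing (such as Microsoft's PhotoDNA) so that near-duplicate images still match, whereas this sketch uses cryptographic SHA-256 purely for illustration, and the hash set contents here are hypothetical placeholders.

```python
import hashlib

# Hypothetical "hash set" of known flagged content (illustrative values only).
# Real systems compare perceptual hashes, not cryptographic digests.
KNOWN_HASH_SET = {
    hashlib.sha256(b"known-flagged-sample").hexdigest(),
}

def hash_match(content: bytes) -> bool:
    """Compute a digital signature ("hash") of the content and
    compare it against the known hash set ("hash matching")."""
    digest = hashlib.sha256(content).hexdigest()
    return digest in KNOWN_HASH_SET

print(hash_match(b"known-flagged-sample"))   # matches the hash set
print(hash_match(b"ordinary photo bytes"))   # does not match
```

A match would then trigger downstream handling such as review or reporting; the exact workflow is not specified beyond what the policy states.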
Fraud Detection

Automated processes are used to detect and prevent fraud and other activities that violate rights.

We may use automated processes to detect and prevent activities that violate our rights and the rights of others, such as fraud.

100%
Other

Conducting research using advanced machine learning and artificial intelligence capabilities for public interest and scientific purposes.

With appropriate technical and organizational measures to safeguard individuals’ rights and freedoms, we use data to conduct research, including advanced machine learning and artificial intelligence capabilities for the benefit of the public interest and scientific purposes.

100%
Other

Automated systems are used to detect security and safety issues.

This may include using automated systems to detect security and safety issues.

100%
User Impact

The policy describes automated processes that can lead to significant outcomes for users, such as blocking communications, removing content, and sharing information with law enforcement based on automated detection of harmful content (e.g., child sexual exploitation imagery). Automated fraud detection and personalized advertising also represent a medium to high impact on user experience and privacy.

High
Third-Party AI Vendors
third party ad partners

The policy mentions sharing data with 'third party ad partners' for personalized advertising, which it explicitly states involves automated processes and AI. While Xandr is mentioned, it is a Microsoft-owned entity, so 'third party ad partners' more accurately represents external AI use.

AI Training Practices
Training on Personal Data

The policy explicitly states that 'your data' (which includes personal data) may be used to 'develop and train our AI models' and 'train and fine-tune AI models.' It also mentions using 'data, often de-identified,' implying that non-de-identified data might also be used.

As part of our efforts to improve and develop our products, we may use your data to develop and train our AI models.

Product development. We use data to develop new products. For example, we use data, often de-identified, to better understand our customers’ computing and productivity needs, and to train and fine-tune AI models, which can shape the development of new products.

YES
Training on User Interactions

The policy states that 'our automated methods of processing (including artificial intelligence or AI)' are built, trained, and improved using 'underlying data,' which includes 'Interactions' data (device and usage data, searches and commands, voice data, text/inking/typing data, etc.). It specifically mentions using 'voice data to develop and improve speech recognition accuracy,' which is a clear AI training use case.

Our automated methods often are related to and supported by our manual methods. For example, to build, train, and improve the accuracy of our automated methods of processing (including artificial intelligence or AI), we manually review some of the output produced by the automated methods against the underlying data.

Product improvement. We use data to continually improve our products, including adding new features or capabilities. For example, we use error reports to improve security features, search queries and clicks in Bing to improve the relevancy of the search results, usage data to determine what new features to prioritize, and voice data to develop and improve speech recognition accuracy.

YES
Training on Public Content

The policy states that Microsoft obtains data from 'Publicly-available sources, such as open public sector, academic, and commercial data sets and other data sources.' This data is then used for 'Research, including advanced machine learning and artificial intelligence capabilities,' directly linking public content to AI/ML training.

Publicly-available sources, such as open public sector, academic, and commercial data sets and other data sources.

Research. With appropriate technical and organizational measures to safeguard individuals’ rights and freedoms, we use data to conduct research, including advanced machine learning and artificial intelligence capabilities for the benefit of the public interest and scientific purposes.

YES
AI Data Sharing

While the policy discusses sharing personal data with affiliates, subsidiaries, and vendors for various purposes (including providing services, advertising, and fraud prevention), it does not explicitly state that personal data is shared for AI training by third parties. It mentions sharing data for 'personalized advertising purposes' with 'third party advertising platforms and advertisers,' which may involve AI, but the policy does not explicitly say this data is used to train their AI models.

UNCLEAR
Risk Assessment

Microsoft collects extensive personal data across its vast ecosystem and shares it broadly for services, advertising, and legal reasons. It supports major privacy regulations (GDPR/CCPA) and offers user controls via a privacy dashboard and data request forms. However, the policy's length, the absence of a direct privacy contact email, and a very low 'Deterministic Score' raise concerns about transparency and how easily users can understand it.

Recommended Actions

Utilize the Microsoft privacy dashboard to review and adjust your privacy settings, especially regarding advertising and data sharing.

Regularly review the types of data Microsoft collects from your usage of their various products and services.

Be mindful of the content you create or upload, as it may be processed and analyzed by Microsoft.

Consider using privacy-focused browser extensions or settings to limit tracking where possible, even within Microsoft's ecosystem.