As artificial intelligence (AI) plays a larger role in our daily lives, it's more important than ever that AI systems are built to provide a helpful, safe, and trustworthy experience for everyone. 

This is why Microsoft develops and deploys technology using Responsible AI practices. Responsible AI keeps people and their goals at the center of the design process and considers the benefits and potential harms that AI systems can have on society. 

Our work is guided by a core set of six Responsible AI principles, and we take a cross-company approach through cutting-edge research, best-of-breed engineering systems, and excellence in policy and governance. 

Responsible AI Principles 

These six principles lay the foundation for all of our AI efforts across the company: 

  • Fairness – Microsoft AI systems are designed to provide a similar quality of service and availability of resources to everyone, and to minimize the potential for stereotyping based on demographics, culture, or other factors.

  • Reliability and safety – Microsoft AI systems are developed in a way that is consistent with our design ideas, values, and principles so as not to create harm in the world.

  • Privacy and security – With an increased reliance on data to develop and train AI systems, we’ve established requirements to ensure that data is not leaked or disclosed.

  • Inclusiveness – Microsoft's AI systems should empower and engage communities around the world, and to do this, we partner with underserved minority communities to plan, test, and build AI systems.

  • Transparency – People who create AI systems should be open about how and why they are using AI, and open about the limitations of the system, so that everyone can understand how AI systems behave.

  • Accountability – Everyone is accountable for how technology impacts the world. For Microsoft, this means we are consistently enacting our principles and taking them into account in everything that we do.


Microsoft's Responsible AI Standard

The Responsible AI Standard is the set of company-wide rules that help to ensure we are developing and deploying AI technologies in a manner that is consistent with our AI principles. 

We are integrating strong internal governance practices across the company, most recently by updating our Responsible AI Standard. With this update, we sought to improve on our earlier Standard, released in the fall of 2019, making it more concrete and actionable, and easier to integrate into existing engineering practices. 

We've taken a thoughtful, cross-discipline approach to this work, consulting experts within and beyond Microsoft to ensure we are being deliberately inclusive and forward-thinking. We believe our Responsible AI Standard is a durable framework for the maturing practice of responsible AI and evolving regulatory requirements. 

To view the complete guide, see Microsoft Responsible AI Standard. 

Use of data

Our approach to privacy and data protection is grounded in our belief that customers own their own data, and in our commitment to ensuring that any product or service we provide is built with privacy by design from the ground up. We've defined clear privacy principles that include a commitment to be transparent in our privacy practices, to offer meaningful privacy choices, and to always responsibly manage the data we store and process. 

To learn more, see responsible AI in action at Microsoft.  
