Understand Responsible AI
As a data scientist, you may train a machine learning model to predict whether someone will be able to repay a loan, or whether a candidate is suitable for a job vacancy. Because models are often used to make decisions that affect people, it's important that the models are unbiased and transparent.
Whatever you use a model for, you should consider the Responsible Artificial Intelligence (Responsible AI) principles. Depending on the use case, you may focus on specific principles. Nevertheless, it's a best practice to consider all principles to ensure you're addressing any issues the model may have.
Microsoft has listed five Responsible AI principles:
- Fairness and inclusiveness: Models should treat everyone fairly and avoid treating similar groups of people differently (see the fairness check sketched after this list).
- Reliability and safety: Models should be reliable, safe, and consistent. You want a model to operate as intended, handle unexpected situations well, and resist harmful manipulation.
- Privacy and security: Be transparent about how data is collected, used, and stored, and give individuals control over their data. Treat data with care to protect an individual's privacy.
- Transparency: When models influence important decisions that affect people's lives, people need to understand how those decisions were made and how the model works.
- Accountability: Take accountability for decisions that models may influence and maintain human control.
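To make the fairness principle concrete for the loan scenario above, the following is a minimal sketch of a per-group fairness check, assuming the open-source Fairlearn library is installed and using illustrative placeholder labels, predictions, and a hypothetical sensitive feature (age group). It isn't a complete fairness assessment; it only shows how metrics can be broken down by group so unequal treatment becomes visible.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Placeholder ground truth, model predictions, and a sensitive feature.
# In practice these come from your evaluation data and trained model.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
age_group = np.array(["<40", "<40", "40+", "40+", "<40", "40+", "<40", "40+"])

# MetricFrame computes each metric separately for every group of the
# sensitive feature, so you can compare accuracy and approval rates.
metrics = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=age_group,
)

print(metrics.by_group)      # per-group accuracy and approval (selection) rate
print(metrics.difference())  # largest gap between groups for each metric
```

Large gaps in the per-group results are a signal to investigate the data and the model before using it to make decisions.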
Tip
Learn about the Responsible AI Standard for building AI systems according to the six key principles.