Introduction

The subject of AI is frequently discussed these days, and the number of copilot announcements for Power Platform and D365 products keeps growing (Office, Windows, and Azure have plenty of copilot capabilities too, but this post focuses only on the Power Platform and D365). As these AI capabilities are integrated into Microsoft products, they also follow responsible AI standards. In this blog post, we'll examine what responsible AI is and how Microsoft is approaching the subject.

Why do you need Responsible AI?

The goal of responsible AI is to ensure that AI technologies are developed and applied in a fair, transparent, and ethical manner. It seeks to reduce bias, promote fairness, and make AI outcomes easier to explain.

AI systems generate their outputs using algorithms designed specifically for that purpose. These algorithms analyze data and produce outcomes or forecasts based on the patterns they find.

Let us also understand what an algorithm is. Algorithms are software artifacts used for data processing; they inherit the ethical issues raised by the development and accessibility of new technologies, as well as those arising from the handling of large amounts of personal and other data.

There are six types of ethical concerns raised by algorithms.

  1. Inconclusive evidence – AI features in ERP and CRM may offer uncertain or incomplete insights, affecting the accuracy of decision support.
  2. Inscrutable evidence – The inner workings of the AI algorithms used in these products may not be transparent, making it difficult to comprehend their conclusions.
  3. Misguided evidence – AI features in ERP and CRM can offer erroneous insights, which could result in poor business judgments.
  4. Unfair outcomes – AI bias could lead to unfair or discriminatory outcomes for users of AI features in ERP and CRM.
  5. Transformative effects – The effects of AI features in ERP and CRM on user behavior and business procedures may have unanticipated and occasionally disruptive results.
  6. Traceability – It may be difficult to determine where AI-driven decisions and actions in these products came from, which could undermine ethical supervision and accountability.

The above is a prescriptive framework that can be used to address the ethical challenges related to the use of algorithms. The framework comes from a research article (link); I have taken it and applied it to the AI features in ERP and CRM.

Microsoft’s Responsible AI approach

Microsoft has been investing in a cross-company program to ensure responsible AI systems since 2017. That year, the Aether Committee was launched to focus on AI issues and develop principles. In 2019, the Office of Responsible AI was created to coordinate AI governance, and the first version of the Responsible AI Standard was launched. The program was expanded in 2021, and in 2022 the Standard was strengthened with its second version. Microsoft has also engaged a multidisciplinary team with OpenAI to assess the latest AI technology before additional safeguards are applied. This has led to rapid progress in understanding potential harms, building bespoke measurement pipelines, and developing effective mitigation strategies. It has also reinforced the need for new norms, standards, and laws in responsible AI.

Microsoft’s Responsible AI Standard was created to help the company build AI systems that are ethical, reliable, and beneficial to society. The Standard is built on six guiding principles — fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability — which Microsoft believes should govern AI development and use.

Below is a great video from Microsoft that explains Microsoft’s Responsible AI Standard.

There are six key principles in Microsoft’s Responsible AI principles.

  1. Fairness – Fairness is crucial in AI systems, especially when developing AI. Microsoft offers an AI fairness checklist with five stages: envision, prototype, build, launch, and evolve. Fairlearn integrates with Azure Machine Learning to help data scientists assess and improve AI systems’ fairness. The toolkit provides unfairness-mitigation algorithms and an interactive dashboard, ensuring fairness is a part of the data science process.
  2. Reliability and safety – AI systems must be reliable and safe, able to respond safely to new situations and resist manipulation. Organizations should conduct rigorous testing, integrate A/B testing, and incorporate champion/challenger methods. A robust monitoring and model-tracking process is necessary to measure and modernize AI systems.
  3. Privacy and security – Azure differential privacy ensures privacy by randomizing data and adding noise to conceal personal information from data scientists, ensuring data holders maintain their obligations to protect their data.
  4. Inclusiveness – Inclusive AI should consider all human races and experiences, using speech-to-text, text-to-speech, and visual recognition technology to empower people with hearing, visual, and other impairments.
  5. Transparency – Transparency in machine learning involves understanding the data, algorithms, transformation logic, final model, and associated assets, enabling transparent reproducibility. Azure Machine Learning workspaces support transparency by recording and retaining training-related assets and metrics.
  6. Accountability – Accountability is essential for the ethical development of AI. As part of their AI journey, organizations should create an internal review body to provide supervision, guidance, and insights on the development and deployment of AI systems.
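To make the fairness principle above more concrete, here is a minimal sketch in plain Python of the kind of metric the Fairlearn dashboard reports: the demographic parity difference, i.e. the gap in selection rates (share of positive predictions) between sensitive groups. The predictions and group labels here are hypothetical, and this is an illustrative re-implementation rather than Fairlearn's own code.

```python
from collections import defaultdict

def demographic_parity_difference(y_pred, sensitive_features):
    """Gap between the highest and lowest selection rate across groups."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(y_pred, sensitive_features):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical loan-approval predictions for two demographic groups:
# group A is approved 3 times out of 4, group B only once out of 4.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value of 0 means every group is selected at the same rate; the closer to 1, the larger the disparity the mitigation algorithms would need to address.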
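The differential privacy mentioned under the privacy and security principle rests on one core idea: add calibrated random noise to query results so that no individual record can be inferred. The sketch below shows the classic Laplace mechanism for a counting query; it is a generic illustration under assumed sensitivity and epsilon values, not Azure's actual implementation.

```python
import random

def laplace_scale(sensitivity, epsilon):
    # Noise scale b = sensitivity / epsilon: a smaller epsilon means
    # stronger privacy and therefore more noise in the released answer.
    return sensitivity / epsilon

def noisy_count(true_count, epsilon, sensitivity=1.0):
    # A counting query changes by at most 1 when a single record is
    # added or removed, so its sensitivity is 1.
    b = laplace_scale(sensitivity, epsilon)
    # Sample Laplace(0, b) as the difference of two exponential draws.
    noise = random.expovariate(1 / b) - random.expovariate(1 / b)
    return true_count + noise

# The data scientist sees only the noisy count, never the exact one.
print(noisy_count(1000, epsilon=0.5))
```

Because the noise has mean zero, aggregate statistics remain useful to the data scientist while any single person's presence in the data stays concealed.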

There will be a few more blog posts related to Responsible AI and its application in D365 and the Power Platform, with examples. Stay tuned.

Thanks for reading this blog post. Will meet you again with the next post.
