Sarah Bird, an expert in Responsible AI implementation who leads Responsible AI for Azure AI at Microsoft, gave the audience her perspective on why artificial intelligence matters so much at this moment.
Having worked in AI for her entire career, Sarah said, “This time there is something different happening.”
Sarah started by describing one of the best-known generative AI applications, GitHub Copilot, which is used by developers all over the world. Among its users, 46% of new code is generated by AI, developers work 55% faster, and satisfaction is over 75%.
In a video presentation, Sarah showed how AI can improve lives: how ChatGPT helps agriculture in India, and how image recognition helps visually impaired people in their daily lives via a camera app.
In implementing Responsible AI, Microsoft starts from principles that ensure its systems are fair, inclusive, safe, reliable, private, and secure, while also providing transparency to users and the outside world. Microsoft also makes sure that humans remain accountable for the outcomes.
Having laid that foundation in 2016, Sarah expressed her excitement that Microsoft is well prepared for this moment and for what AI can do, not just for business but for the whole of humanity.
Sarah emphasized that Microsoft’s Responsible AI Standard is critically important because it defines what its AI systems must achieve. Broken down into its anatomy: the Principles guide the values of responsible AI, the Requirements specify what must be done to secure the Goals for their outcomes, and the Tools and Practices help teams meet those Requirements.
Sarah explained that generative AI is a new, cutting-edge type of deep learning that has crossed a critical threshold in the past two years and now has the potential to be used in many applications.
One of these generative AI applications is also the newest product in Microsoft’s portfolio: Bing Chat. Beyond Bing Chat, Microsoft also implements generative AI in various products such as Microsoft 365, GitHub, and more, using Azure AI as the platform to bring generative AI to all of them.
However, there are risks in using generative AI, and Sarah stated the key concerns.
In addition, she said there is a new type of concern: manipulation and human-like behavior. Microsoft had to find the boundary for how human-like an AI should behave, preventing the system from going too far so that people remain aware of what they are interacting with. Sarah reassured the audience that technical methods can be combined to address and mitigate these risks; for instance, Azure AI can detect harmful content in prompts and filter it before returning information to the user. This safety system is built in across the Azure AI platform.
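The idea of screening prompts before they reach a model can be illustrated with a minimal sketch. Note this is a toy keyword-based filter for illustration only: Azure AI's actual safety system uses machine-learned classifiers, and the category names and `filter_prompt` helper here are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical harm categories and trigger phrases for illustration;
# a real safety system would use trained classifiers, not keyword lists.
BLOCKLIST = {
    "violence": ["build a weapon"],
    "self_harm": ["hurt myself"],
}

@dataclass
class FilterResult:
    allowed: bool
    flagged_categories: list = field(default_factory=list)

def filter_prompt(prompt: str) -> FilterResult:
    """Screen a prompt before it reaches the model; block it if any category matches."""
    text = prompt.lower()
    flagged = [cat for cat, terms in BLOCKLIST.items()
               if any(term in text for term in terms)]
    return FilterResult(allowed=not flagged, flagged_categories=flagged)
```

In a pipeline, the application would call `filter_prompt` on the user's input, refuse flagged requests, and apply a similar check to the model's output before showing it to the user.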
In terms of the Azure AI user experience, Sarah explained how the platform can be adapted to each application. Since applications process data differently, Azure can be fine-tuned to fit each one efficiently, from the overall behaviour of the AI to the cost of operating it, and users can choose from thousands of available models to find one that fits their application.
Sarah also described Azure AI’s evaluation system, which follows up on the AI’s performance both manually, through expert review, and automatically, through an augmented system, and reports the results as metrics.
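The shape of such an evaluation report can be sketched as follows. This is a minimal illustration, not Azure AI's actual evaluation API: the `evaluation_metrics` helper and the 1-to-5 scoring scale are assumptions made for the example.

```python
from statistics import mean

def evaluation_metrics(auto_scores, expert_scores):
    """Summarize automatic and expert evaluation scores (assumed 1-5 scale)
    into a single metrics report, keeping the two sources separate so
    disagreement between them is visible."""
    return {
        "auto_mean": mean(auto_scores),
        "expert_mean": mean(expert_scores),
        "n_auto": len(auto_scores),
        "n_expert": len(expert_scores),
    }
```

Keeping the automatic and manual means separate, rather than blending them, lets a team spot cases where the automated judge diverges from human experts.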
Sarah summed up by highlighting use cases of generative AI: connecting with scientists for new scientific discoveries, enabling people to connect with healthcare providers, and understanding climate change and sustainability. She hopes that Microsoft’s tools and guides will help others innovate responsibly and truly change the world.