Microsoft's New Tools to Secure Chatbots and AI Applications

By Emilio Fernandez

Microsoft has launched a suite of new Azure tools aimed at improving the safety and security of generative AI applications, with a particular focus on chatbots. The tools address concerns about abusive content, prompt injections, and other risks that come with deploying generative AI. Features include real-time monitoring to track and shut down abusive content or users, along with protections against newer attack vectors such as jailbreaks and prompt injections.

The release responds to a recent McKinsey survey in which corporate leaders reported feeling unprepared for the risks associated with generative AI. Microsoft's new tools build on its own technical research and its experience with in-house products such as Copilot; the company's multibillion-dollar investment in OpenAI has also been pivotal.

Prompt Shields, part of the new suite, is designed to block both direct and indirect prompt attacks using machine learning and natural language processing. Stress testing and real-time monitoring round out the offering, aiming to improve the reliability and safety of generative AI applications. Together, these advances reflect Microsoft's commitment to responsible and secure AI.
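Prompt Shields itself is a managed Azure service backed by trained models, but the underlying idea, screening an incoming prompt for attack phrasing before it reaches the model, can be sketched as a toy heuristic. The patterns and function below are purely illustrative assumptions, not Microsoft's implementation:

```python
import re

# Hypothetical jailbreak phrasings for illustration only; a real prompt
# shield uses trained classifiers, not a fixed keyword list.
JAILBREAK_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now (in )?developer mode",
    r"pretend (that )?you have no (rules|restrictions)",
]

def looks_like_prompt_attack(user_prompt: str) -> bool:
    """Return True if the prompt matches a known jailbreak phrasing."""
    text = user_prompt.lower()
    return any(re.search(pattern, text) for pattern in JAILBREAK_PATTERNS)
```

An application would run such a check (or call the managed service) on every user turn and refuse or log prompts flagged as attacks, which is the same gating pattern the real-time monitoring features automate.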
