The Responsible AI Transparency Report focuses primarily on 2023, with Microsoft touting its achievements in safely deploying AI products. The annual transparency report is one of the commitments the company made after signing a voluntary agreement with the White House last July, under which Microsoft and other companies pledged to build responsible and secure AI systems.
In the report, Microsoft says it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams building generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which watermark photos and tag them as created by an AI model.
The company said it gives Azure AI customers access to tools that detect problematic content such as hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March of this year to cover indirect prompt injections, where malicious instructions are embedded in the data ingested by the AI model.
Microsoft is also expanding its red-teaming efforts. These include in-house red teams that deliberately try to bypass the safety features of its AI models, as well as red-teaming applications that allow third parties to test new models before release.
However, those red teams have their work cut out for them: the company's AI rollouts have not been without controversy.
Natasha Crampton, Chief Responsible AI Officer at Microsoft, told The Verge that the company understands AI is still evolving, and so is responsible AI.
“Responsible AI has no finish line, so we’ll never consider our work under the voluntary AI commitments done. But we have made strong progress since signing them, and we look forward to building on that momentum this year,” Crampton said.