Microsoft recently released WizardLM-2, the successor to its WizardLM AI model, but withdrew it within a day because required toxicity testing had been skipped. The company removed the announcement post and related files and said it is working to complete the testing and re-release the model. Despite the removal, some users claim to have downloaded and re-uploaded the model files, raising concerns about potential risks to users. PCMag has reached out to Microsoft for further details on the removal and its toxicity-testing process.
Key Takeaways
- Microsoft released WizardLM-2, the successor to the WizardLM AI model, but quickly pulled it from the web within a day.
- The announcement post, the files on GitHub, and the model data were deleted because the required toxicity testing had not been performed.
- The developers apologized for the oversight and announced that they are completing the test to re-release the model.
- Users claim to have downloaded and re-uploaded the deleted model files, raising uncertainty about potential risks.
- Researchers and tech experts are questioning the effectiveness of Microsoft's "toxicity" test and what was specifically censored.
Analysis
Microsoft's abrupt withdrawal of WizardLM-2 raises concerns about potential risks to users and casts doubt on the effectiveness of its toxicity testing. The oversight may directly harm Microsoft's reputation and user trust, while the re-uploaded model files could pose security risks. Indirectly, researchers and tech experts may face credibility issues, and the AI industry as a whole could come under heightened scrutiny. If not addressed transparently, the short-term impact may damage Microsoft's standing, while the long-term consequences could affect the adoption of future AI models and the perception of AI safety.
Did You Know?
- WizardLM-2 AI Model: A new artificial intelligence model developed by Microsoft as the successor to its previous model, WizardLM. It was released but quickly withdrawn because toxicity testing had not been performed, resulting in the deletion of the announcement post, related files, and model data.
- Toxicity Testing: A process used to assess whether an AI model can generate toxic or otherwise harmful content. Microsoft withdrew WizardLM-2 because this testing had been missed, prompting concerns about potential risks to users.