
Researchers Flag Accuracy Concerns in OpenAI's Whisper Transcription Tool

Researchers are raising concerns about mistakes in OpenAI’s Whisper, a tool that transcribes audio into text. According to the Associated Press, engineers and researchers found that Whisper sometimes “hallucinates,” meaning it fabricates details that aren’t actually in the audio. That’s a troubling flaw for a transcription tool, whose whole job is to capture what was actually said.

Instead, Whisper sometimes inserts unexpected content, including racial remarks and made-up medical advice. That is especially concerning given that the tool is used in hospitals and other medical settings where accuracy is critical.

A University of Michigan researcher found these mistakes in 8 out of 10 transcriptions of public meetings. A machine learning engineer who reviewed over 100 hours of Whisper transcripts found that more than half contained hallucinations. Another developer found issues in nearly all of the 26,000 transcriptions he created with the tool.

OpenAI says it is actively working to improve Whisper’s accuracy and reduce hallucinations, and it reminds users that Whisper isn’t intended for high-stakes contexts like medical decision-making. “We thank researchers for sharing their findings,” an OpenAI spokesperson said.

Image: DIW-Aigen
