A security flaw in ChatGPT, identified last year, is increasingly being exploited by cybercriminals to target artificial intelligence systems, according to a March 12 report from cybersecurity firm Veriti. Although the National Institute of Standards and Technology rates the vulnerability as medium severity, Veriti reports that the flaw has been used in more than 10,000 attack attempts globally.
Financial institutions, healthcare providers, and government organizations are among the primary targets. Successful attacks can lead to data breaches, unauthorized transactions, regulatory fines, and reputational damage.
“This vulnerability could allow attackers to steal sensitive data or disrupt the AI tool’s functionality,” said Scott Gee, Deputy National Advisor for Cybersecurity and Risk at the American Hospital Association. “It underscores the importance of patch management as part of a broader AI governance plan, especially in hospital settings. With the vulnerability being over a year old and an exploitation proof of concept already available, it also serves as a reminder of the need for timely software updates.”