OpenAI Limits Release of New Cybersecurity AI Model
WASHINGTON: (Web Desk) – Artificial intelligence firm OpenAI has announced plans to roll out its latest cybersecurity-focused model, GPT-5.4-Cyber, to a select group of trusted partners, following a similar move by Anthropic. The cautious approach from both companies highlights growing concerns over the potential misuse of advanced AI tools in cyberattacks.

According to OpenAI, the new model will be made available only to top-tier users under its Trusted Access for Cyber (TAC) programme, which includes verified cybersecurity professionals and organizations responsible for protecting critical systems. The company emphasized its intention to balance broader access with safeguards against misuse.
Anthropic recently restricted access to its Claude Mythos model, offering it to just 40 major technology firms through its Project Glasswing initiative. Despite not being specifically trained for cybersecurity, the model impressed experts by identifying thousands of vulnerabilities in widely used software, including some that had remained undetected for years.

The rapid advancement of generative AI in coding and system analysis has raised alarms about a potential arms race between cybersecurity defenders and malicious hackers. While these tools can help identify weaknesses, they could also be exploited if they fall into the wrong hands.

OpenAI stated that its GPT-5.4-Cyber model is designed to be more flexible for defenders, allowing them to test systems for vulnerabilities without unnecessary restrictions. Meanwhile, Anthropic has defended its limited rollout as a way to give security experts a critical advantage in addressing threats before attackers can act.