The White House is looking into an executive order (EO) that would create a vetting system for new artificial intelligence (AI) models like Anthropic PBC’s Mythos, in a bid to protect business and government networks from AI-related cyber risks, a top economic adviser said on 5/6/2026.
“We’re studying, possibly an executive order to give a clear roadmap to everybody about how this is going to go and how future AIs that also potentially create vulnerabilities should go through a process so that they’re released to the wild after they’ve been proven safe, just like an FDA drug,” National Economic Council Director Kevin Hassett (R) told Fox Business.
The directive is taking shape weeks after Anthropic revealed that its breakthrough Mythos model was adept at finding network vulnerabilities and could pose a global cybersecurity risk. For now, the company has limited access to a handful of large tech and financial companies, and the Trump (R) administration has been pressing to make Mythos available to federal agencies to test government systems. Hassett acknowledged that effort on 5/6/2026.
“We have scrambled an all of government effort and all the private sector to coordinate and make sure that before this model is released out into the wild, that it’s been tested left and right, to make sure that it doesn’t cause any harm to the American businesses or the American government,” Hassett said.
Anthropic’s Mythos announcement spurred Trump administration officials to accelerate existing efforts to shape AI policy. White House Chief of Staff Susie Wiles (R) and other senior administration officials met with Anthropic CEO Dario Amodei last month, where topics discussed included Mythos.
Hassett said it’s “really quite likely” that any testing spelled out under the order would ultimately extend to all AI companies. “I think that, that Mythos is the first of them, but it’s incumbent on us to build a system,” he said. “And that’s really pretty much what we’re working almost full time on right now.”
It’s unclear whether the order described by Hassett would call for mandatory testing of AI systems. Such a requirement would mark a shift in Trump’s stance on AI, one that has stressed a hands-off approach to regulation. The White House did not immediately respond to a request for comment on whether the executive order would mandate that AI models secure government approval prior to their release.
On 5/5/2026, the Commerce Department (DOC) announced the expansion of a voluntary program to test AI models before their release. Alphabet Inc.’s Google, Microsoft Corp., and xAI have agreed to give the U.S. government access to their models to assess the systems’ capabilities and help improve security. OpenAI and Anthropic were already part of the initiative, led by the department’s Center for AI Standards and Innovation (CAISI).
Separately, White House officials have been preparing a wide-ranging AI policy memo that outlines requirements for AI deployment by national security agencies, Bloomberg News previously reported. It would call for U.S. agencies to use multiple AI providers and for companies to abide by the military’s chain of command.
Anthropic has been embroiled in a dispute with the Defense Department (DOD/DOW) over the terms of access to its AI tools. Talks with the Pentagon collapsed in 2/2026 after Anthropic insisted on guarantees that its products wouldn’t be used for fully autonomous weapons or mass surveillance of Americans. Those conditions were rejected by defense officials, who declared the company a supply-chain risk and began moving to drop it as an AI provider.
More recently, Trump has signaled that the U.S. would ultimately have a good relationship with Anthropic, a significant shift in tone after months of tensions with the company. In a CNBC interview, Trump said U.S. officials had “very good talks” with Anthropic executives, adding, “I think we’ll get along with them just fine.”

NYC Wins When Everyone Can Vote! Michael H. Drucker
