U.S. to review AI models from Microsoft, Google, xAI before public release



Microsoft, Google, and xAI have agreed to provide the U.S. government with early access to upcoming artificial intelligence models, allowing officials to examine potential national security risks before the systems are released to the public.

Summary

  • Microsoft, Google, and xAI will provide U.S. authorities early access to new AI models for national security testing before public release.
  • The Commerce Department’s CAISI will evaluate model capabilities and risks, including potential misuse in cyberattacks, using versions with reduced safety guardrails.
  • The move follows broader Pentagon efforts to expand AI partnerships, including new agreements to deploy advanced systems across classified military networks.

The arrangement will enable the Center for AI Standards and Innovation (CAISI), which operates under the United States Department of Commerce, to test these models in advance. 

According to the agency, the process will include technical evaluations and research into how the systems perform, as well as the risks they may pose.

Recent advances in AI, including systems such as Anthropic Mythos, have intensified concerns both in Washington and in the corporate sector. Officials and executives have warned that increasingly capable models could be misused, particularly in cyberattacks, where automation may increase the scale and speed of malicious activity.

Anthropic was not referenced in the latest announcement. The company has been at odds with the United States Department of Defense over the extent of safeguards applied when its tools are used in military environments, highlighting ongoing tensions around how such systems should be deployed.

“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” CAISI Director Chris Fall said, underscoring the need for structured evaluation frameworks as model capabilities advance.

CAISI acts as the federal government’s central hub for testing advanced AI systems. It said it has already carried out more than 40 evaluations, including on models that had not yet been made publicly available at the time of testing.

To support these assessments, developers often submit modified versions of their models with certain safety guardrails relaxed. That allows the agency to examine worst-case scenarios and identify vulnerabilities that may not appear under standard usage conditions.

The move comes shortly after the Pentagon expanded its own AI partnerships. The United States Department of Defense said last week it had signed agreements with seven companies to deploy advanced AI capabilities across classified networks, as it looks to diversify its pool of technology providers supporting military operations.
