AI, Pentagon and Anthropic
The OpenAI-Pentagon deal and the federal standoff with Anthropic signal the urgent need for a more developed AI safety industry to provide external security standards.
Amodei recommended that all labs develop “a true ‘MRI for AI,’” but he acknowledged that they might not have enough time, given how quickly AI is advancing. This interpretability problem gets to the core of Anthropic’s concern about autonomous weapons.
OpenAI CEO Sam Altman announced late Friday that the company had signed a deal with the Pentagon for its AI tools to be used in the military’s classified systems, seemingly with guardrails similar to those that rival Anthropic had also requested.
However, some scholars, activists and proponents of AI regulation warn the race between the two countries could get out of hand. They fear that, in a blind rush to get ahead, both are creating systems that could eventually pose cataclysmic risks with few guardrails.
Researchers found that interest in AI agents has undoubtedly skyrocketed in the last year or so. Research papers mentioning “AI Agent” or “Agentic AI” in 2025 more than doubled the total from 2020 to 2024 combined, and a McKinsey survey found that 62% of companies reported that their organizations were at least experimenting with AI agents.
More than 60 percent of K-12 teachers told the EdWeek Research Center that they used AI-based tools in their classrooms in 2025, nearly double the share that used the technology just two years before. Half of teachers said they have received at least some training in the tools, though the substance varied widely.
India should utilise the opportunity to push for a non-binding framework rooted in its principles of accountability and aligned with its interests.
AI agents in 2026 rely on tools, memory, guardrails, and evaluation; learn how to build them more safely and avoid common failure modes.