Anthropic tries 'to enable beneficial uses' of AI by government agencies

Not keen on smart weapons, more interested in stopping human trafficking

Anthropic wants governments to think of it when they want AI to make the world a better place. No, seriously.

The AI startup's ambitions were this week expressed in its decision to offer its Claude 3 Haiku and Claude 3 Sonnet AI models in the AWS Marketplace for the US Intelligence Community and in AWS GovCloud, suggesting their debut in those digital domains will see American government agencies "provide improved citizen services, streamline document review and preparation, enhance policymaking with data-driven insights, and create realistic training scenarios."

Anthropic choosing a popular route to market – AWS – is not unusual. Nor is it surprising that the company wants to target government buyers. March 2024 analysis by think tank the Brookings Institution found a 1,200 percent jump in AI-related contracts dangled by Washington, and that somewhat spooky AI provider Palantir has dominated past awards.

Anthropic has positioned itself on the lighter side of AI, but is willing to tackle some mucky tasks as outlined in a list of contractual exceptions to its general usage policy.

The exemptions "allow Claude to be used for legally authorized foreign intelligence analysis, such as combating human trafficking, identifying covert influence or sabotage campaigns, and providing warning in advance of potential military activities, opening a window for diplomacy to prevent or deter them," the image-conscious startup declared. Limitations on other harmful uses like disinformation, weaponry, and the like remain in place.

That leaves the company happier to clean things up than do dirty work, and perhaps avoid scenarios that inspire fears of rogue AI or drive regulators to draft rules to prevent AI creating harmful outcomes. Nor are Anthropic's ambitions focused on applications that have really piqued the interest of the US military – like AI dogfighting, killer drones, or battlefield awareness systems.

Anthropic's imagined AI scenarios are, however, representative of the ways that US government agencies – outside the defense community – actually use AI. "Agencies are currently using AI in various areas, including agriculture, financial services, healthcare, internal management, national security and law enforcement, public services and engagement, science, telecommunications, and transportation," the US Government Accountability Office (GAO) observed in a December 2023 report.

The major current user of AI, per the GAO, is NASA – for applications like global volcano surveillance and picking targets for planetary rovers that match scientists' specifications.

Taking a cloudy route to market is also notable because big players like Microsoft and Oracle win vast amounts of business from Washington without having to compete for it thanks to deals that specify certain vendors. Even when Microsoft brings pain to Uncle Sam with its security failings, the deals and dollars keep coming.

Anthropic may not yet be good at playing the lobbying games that help win those contracts. But it has made it easy for government agencies to buy its stuff, and positioned itself as the ethical choice among rivals such as OpenAI and Google – both of which, as older companies, have had vastly more time to make missteps.

But Anthropic's stance may be hard to sustain if open source models like Meta's Llama 3 continue to become more competitive. Many orgs faced with comparable AI models will choose the one without contractual usage restrictions.

And of course even with lofty motives, Anthropic is not fundamentally different from Google and OpenAI partner Microsoft – each keen to sell Secure AI in Google Cloud and Azure OpenAI Service in Azure Government respectively – not to mention AWS now with added Claude. For all players, the point is billable AI first, responsible machine learning a little lower down the list. ®
