Pentagon Allegedly Used Claude AI During Maduro Raid
“We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” an Anthropic spokesperson said. “Any use of Claude – whether in the private sector or across government – is required to comply with our Usage Policies.”
Claude, developed by the San Francisco-based AI lab, is subject to policies that prohibit its use to “facilitate violence, develop weapons or conduct surveillance.” No Americans were killed during the operation, but dozens of Venezuelan and Cuban soldiers and security personnel died on January 3.
Unlike some competitors, Claude is reportedly deployed on classified military platforms through a partnership with Palantir Technologies, giving it access to some of the U.S. military’s most sensitive operations. Other AI providers hold Pentagon contracts but are not integrated into classified systems in the same way.
The revelations have emerged at a sensitive moment for Anthropic, which has emphasized AI safety and positioned itself as a cautious, responsible alternative within the industry. Critics say the deployment raises questions about how commercial AI tools are applied in military operations and how company safeguards are enforced.