ET 20:01

U.S. Military Uses Anthropic Claude in Venezuela Capture; PLTR Facilitates Deployment Amid Ethics Scrutiny

IMP6.0
SNT0.0
CONF50%
Macro

U.S. forces executed a decapitation-style operation in Venezuela on January 12, 2026, arresting former President Nicolás Maduro, and reportedly used Anthropic’s Claude AI during the operation. The use of Claude, whose usage policy prohibits applications involving violence, surveillance, or weapons development, raises significant ethical and regulatory concerns. The Department of Defense declined to comment, and an Anthropic spokesperson said the company cannot comment on specific deployments, emphasizing that all uses must comply with its policies and be conducted through approved partners.

The system is reportedly being accessed through a collaboration with Palantir Technologies (PLTR-US) under a classified contract, valued at up to $2 billion, signed in summer 2025. Earlier reports suggested the administration considered canceling the deal amid pushback over the risks of AI in lethal and domestic-surveillance applications. Within Anthropic, internal pressure led by CEO Dario Amodei has grown for stronger oversight and regulation. This follows remarks by Secretary of Defense Pete Hegseth indicating the Department would not employ AI models barred from combat use, as well as tensions with the Trump administration over regulatory and export restrictions on AI chips. OpenAI has also joined Google’s Gemini platform in providing services to about 300,000 DoD personnel for document analysis, report generation, and research support.

Editor: Tan Wei Jie