02:01 ET

OpenAI Unveils GPT-5.3-Codex: Self-Training AI Outpaces Previous Models, Efficiency Jumps 25%

OpenAI has released GPT-5.3-Codex, the first model capable of training itself and iteratively refining its own development environment, a step that significantly accelerates software engineering and operational efficiency. Earlier Codex versions already diagnosed infrastructure issues autonomously, optimized GPU clusters, and proactively suggested improvements, cutting release cycles from years to months. The new model leads across benchmarks: 56.8% on SWE-Bench Pro, 77.3% on Terminal-Bench 2.0, 64.7% on OSWorld-Verified (up from 38.2%), and 77.6% on CTF security challenges. Token costs are nearly halved, and generation is more than 25% faster. GPT-5.3-Codex now operates as an end-to-end software collaborator, handling everything from product requirements documents (PRDs) and UI design to deployment and monitoring, and can iteratively build complex applications with minimal human input. A new interactive collaboration mode maintains context and invites human input at key decision points, improving transparency and control. Released on February 6, 2026, the model intensifies competition with Anthropic's Claude Opus 4.6.

Editor: Thomas Ho