
Factory AI Empowers Droids with Open-Source AI Integration
Factory AI has taken a bold step in reshaping how developers interact with artificial intelligence. On October 7, 2025, the company announced that its Droids—a line of intelligent software development agents—can now run on any open-source AI model. This upgrade makes it possible for developers to integrate models like GLM 4.6, Qwen3-Coder, and GPT-OSS directly into their coding workflows without relying on closed systems.
In simple terms, Droids are AI-powered coding companions that connect with tools like GitHub, Slack, and terminal environments. They assist in writing, debugging, and reviewing code in real time. Until now, however, they worked mainly with proprietary models. Factory AI’s latest move changes that, giving developers unprecedented freedom.
Speaking at the announcement, a company spokesperson said, “We believe open-source AI is no longer a compromise—it’s the future of scalable agent performance.”
GLM 4.6 Pushes the Limits of Open-Source Performance
Among all supported models, Zhipu AI’s GLM 4.6 stands out. With a staggering 357 billion parameters and a massive 200,000-token context window, this model isn’t just powerful—it’s practical. When integrated into Droids, GLM 4.6 achieved 43.5% on the Terminal-Bench benchmark, outperforming Anthropic’s Claude Sonnet 4 in coding performance and approaching the level of top-tier proprietary systems.
Terminal-Bench is a benchmark that measures how effectively AI agents complete terminal-based coding tasks such as debugging, repository management, and multi-step reasoning. A 43.5% score represents a significant leap for open-source models, narrowing the gap between freely available AI and commercial giants like OpenAI and Anthropic.
Factory AI emphasized that GLM 4.6 not only enhances code generation but also improves context awareness across large projects. This makes it especially valuable for enterprise-level software teams managing massive repositories.
The Numbers That Matter: Open-Source Models Go Mainstream
The announcement included several revealing Terminal-Bench scores from Factory AI’s internal testing:
- GLM 4.6 – 43.5% (frontier performance)
- GPT-OSS-120B – 38% with 5.1B active parameters
- GLM-4.5 Air – 34.6% with 12B active parameters
- Qwen3-Coder-30B – 29% with 3.3B active parameters
However, the company also warned that performance drops sharply below 30 billion parameters. For example, DeepSeek-Coder-7B scored 0% due to failures in tool-calling and multi-step reasoning. That means while smaller models are easy to run locally, they still lack the intelligence needed for complex agent tasks.
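To make that trade-off concrete, the reported figures can be expressed as a small lookup table with a filtering helper. This is an illustrative sketch only: the model names and scores come from the article above, while the function and threshold are hypothetical.

```python
# Terminal-Bench scores reported in Factory AI's internal testing
# (from the article); active-parameter counts where the article gives them.
MODELS = {
    "GLM-4.6":           {"score": 43.5, "active_params_b": None},
    "GPT-OSS-120B":      {"score": 38.0, "active_params_b": 5.1},
    "GLM-4.5-Air":       {"score": 34.6, "active_params_b": 12.0},
    "Qwen3-Coder-30B":   {"score": 29.0, "active_params_b": 3.3},
    "DeepSeek-Coder-7B": {"score": 0.0,  "active_params_b": None},
}

def viable_models(min_score: float = 29.0) -> list[str]:
    """Return model names at or above a minimum score, best first.

    The 29.0 cutoff is an illustrative choice, roughly matching the
    article's observation that capability collapses in smaller models.
    """
    ranked = sorted(MODELS.items(), key=lambda kv: kv[1]["score"], reverse=True)
    return [name for name, info in ranked if info["score"] >= min_score]

print(viable_models())
```

Ranking by score rather than parameter count reflects the article’s point: active-parameter counts alone (5.1B vs. 12B vs. 3.3B) do not predict ordering once sparse mixture-of-experts architectures enter the picture.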
A Factory AI engineer noted, “We’re seeing the beginning of a new balance between compute efficiency and intelligence. Sparse mixture-of-experts architectures make this possible.”
Bringing Choice to Developers: From Cloud Giants to Laptops
With this update, developers have more control than ever. Factory AI’s “bring-your-own-key” system allows seamless integration with third-party platforms like Fireworks AI and OpenRouter, enabling users to choose models based on cost, performance, or licensing preferences.
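In practice, a bring-your-own-key setup like this tends to be simple because providers such as OpenRouter expose an OpenAI-compatible chat-completions endpoint: switching models mostly means changing the base URL, API key, and model identifier. The sketch below assembles such a request without sending it; the model slug and helper are illustrative assumptions, not Factory AI’s actual configuration.

```python
import json

# Hypothetical sketch of a bring-your-own-key request builder.
# OpenRouter's endpoint follows the OpenAI chat-completions shape;
# the exact model slug should be checked against the provider's catalog.
OPENROUTER_BASE = "https://openrouter.ai/api/v1"

def build_chat_request(model: str, prompt: str, api_key: str):
    """Assemble (url, headers, body) for an OpenAI-style chat call."""
    url = f"{OPENROUTER_BASE}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",   # the user's own key
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Swapping providers or models is a one-argument change.
url, headers, body = build_chat_request(
    "z-ai/glm-4.6", "Explain this stack trace.", api_key="sk-placeholder"
)
```

Because the request shape is shared across providers, the same builder can target a self-hosted open model by changing only the base URL, which is the flexibility the article describes.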
This flexibility also opens the door for smaller teams and startups. Instead of paying premium API costs to proprietary providers, they can deploy competitive open-source models directly on their own servers—or even consumer hardware—depending on scale.
For developers, this shift means independence. A coder can now choose between a high-end frontier model like GLM 4.6 or a lightweight alternative for offline environments. In essence, the same Droid framework that powers Fortune 500 companies can now run in a student’s local environment with minimal configuration.
A Turning Point for Open-Source AI
The broader message behind Factory AI’s move is clear: open-source no longer means inferior. In the last two years, open models have evolved from niche experiments to production-ready solutions. With benchmarks like Terminal-Bench confirming their capability, they’re becoming serious contenders to proprietary systems.
Experts say this shift could spark new innovation in the AI ecosystem. By opening Droids to external models, Factory AI has effectively built a bridge between cutting-edge research and real-world software development. It encourages competition, speeds up model improvements, and keeps the AI ecosystem decentralized.
AI analyst Meera Patel commented, “What Factory AI has done is more than a product upgrade—it’s a philosophical statement about the future of developer tools.”
The Road Ahead
While the integration marks a breakthrough, it also sets high expectations. Factory AI plans to expand compatibility to dozens of other open models in the coming months, potentially adding options like Mistral, Llama 3.2, and Falcon.
The company hinted that upcoming versions of Droids will feature enhanced context retention and long-form reasoning, allowing entire repositories to be understood and modified intelligently in one go. This would push AI-assisted programming even closer to full autonomy.
As open-source AI accelerates, the competition between free and proprietary models is heating up. And with GLM 4.6 leading the charge through Droids, Factory AI might have just redefined what’s possible in the world of coding automation.
In short, this move isn’t just about integrating models—it’s about redefining freedom for developers. Factory AI’s Droids now stand as a symbol of open innovation, giving every coder—from startups to enterprises—a chance to work smarter, faster, and independently.