Open Core Ventures (OCV) is proud to announce the launch of RayAI, the first Ray platform for AI agents. RayAI gives agent builders production-ready infrastructure to run agents reliably at scale by extending Ray OSS for agentic workloads.
Founder and CTO Pavitra Bhalla is a respected engineering leader with deep expertise in building enterprise platforms at scale. As Director of Engineering at the Long-Term Stock Exchange (LTSE), he led development of an equity management platform trusted by over 32,000 venture-backed companies and leading accelerators like Y Combinator and Techstars. Most recently, he was a Staff Software Engineer at Dosu, where he helped build an AI code-research agent deployed across more than 70,000 GitHub repositories.
While evolving Dosu into a multi-agent system, Pavitra had a key insight: production agents need infrastructure designed specifically for their workloads, not adapted from pre-LLM systems. That's why he founded RayAI.
RayAI brings Ray—the distributed computing framework with over 237 million downloads that powers training and serving of frontier models like GPT—to agent builders. "As AI evolves from chatbots to agents that perform economically valuable real-world tasks, infrastructure must evolve too," said Pavitra. "RayAI gives developers the tools to scale from laptop prototypes to production clusters, without becoming distributed systems experts."
The AI agent infrastructure gap
Building domain-expert agents is challenging on its own, and infrastructure complexity compounds the difficulty. Agents search and retrieve data, execute code, drive browsers, call APIs, and coordinate multi-step workflows with parallel async tasks. A research agent might spawn sub-agents to scrape dozens of websites, run model inference to generate analysis code, execute that code in sandboxes, and iterate until results are ready. Each step can fail, can vary widely in duration, and may be compute-, network-, or I/O-bound.
This requires dynamic orchestration of fine-grained tasks across different compute resources—entirely new requirements that pre-LLM infrastructure wasn't built to handle. "Ray is the compute engine that powers the AI-native stack," said Pavitra.
Originally designed with simple primitives for distributed computing, Ray has evolved to enable large-scale data processing, reinforcement learning, model training, and serving massive models across many GPUs and CPUs. "Agentic workloads and the desire to integrate frontier and custom models are driving Ray adoption," Pavitra said. "At RayAI, our mission is to democratize this AI-native stack for agent builders."
From prototype to production
RayAI transforms laptop prototypes built with any agent framework into production-ready systems. Agents deployed through the platform get resource awareness, auto-scaling, high availability, and automatic resilience—eliminating months of infrastructure work.
The platform leverages Ray's proven runtime, which has scaled to thousands of nodes for training the world's largest language models. Agents can parallelize tool calls across hundreds of nodes with automatic retries and failure recovery, while Ray handles scheduling, resource planning, and distributed execution. For code execution, RayAI provides secure sandboxes that maintain the high-throughput performance required for function calling. Automatic checkpointing, retry logic, and failover handling on the platform keep agents running even when individual nodes fail.
Beyond giving agents a reliable runtime, RayAI is building observability features that enable continuous improvement. "Agent-level visibility shows where agents perform well, where costs accumulate, and where bottlenecks emerge," Pavitra said. "These insights drive continuous prompt-tuning and post-training on agentic tasks."
RayAI works with existing code and provides adapters for LangGraph and other frameworks. Developers keep their tools and workflows while Ray handles infrastructure. Because Ray runs anywhere—AWS, GCP, Azure, on-premise, or laptops—teams avoid vendor lock-in. The same API works on self-hosted clusters or RayAI's managed platform.
"Bring your agent code, deploy anywhere, scale without infrastructure complexity," said Pavitra. "For engineers already building on Ray, it's a natural extension. For new agent builders, it's infrastructure that gets out of the way."
If you're interested in building infrastructure for AI agents, RayAI is hiring. View open positions.
