
Cachengo’s Decentralized AI Infrastructure Is Solving the Bottleneck in AI Deployments

Cachengo CIO explains how decentralized AI infrastructure solves real bottlenecks in AI deployments, from transit systems to remote solar farms.

The conversation around AI typically focuses on model performance, but according to Steve Westmoreland, CIO of Cachengo, the real bottleneck isn’t your model—it’s the infrastructure it runs on. This insight drives Cachengo’s recent platinum membership with the OpenInfra Foundation, signaling a strategic shift toward decentralized AI computing solutions.

The Physics Problem in AI Infrastructure

Westmoreland explains the fundamental challenge: “If you have to bring that back to the cloud to make every decision, physics becomes your enemy.” This reality becomes critical in applications like New Jersey Transit’s innovation center, where Cachengo processes video and sensor feeds directly on buses to detect weapons and safety incidents without relying on wireless connections that can’t handle high-density video streams.

The solution involves deploying AI inference models directly at the edge. “Our AI components—our inference engine—running on a piece of Cachengo hardware and software, are analyzing those videos using AI models that detect weapons and events such as someone falling down,” Westmoreland notes. These models can be updated remotely, and new capabilities such as facial recognition can be added, without requiring constant cloud connectivity.
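Cachengo has not published its on-vehicle stack, but the pattern Westmoreland describes, local inference with opportunistic model updates, can be sketched in a few lines. In the hypothetical Python sketch below, the camera source, model path, and update endpoint are all placeholder assumptions, and the detector itself is stubbed out.

```python
# Hypothetical sketch of an on-vehicle inference loop: frames are analyzed
# locally so only small alert messages ever leave the bus, and model files
# are refreshed opportunistically whenever connectivity happens to be available.
import time
import urllib.request

import cv2  # camera capture; any frame source would work

MODEL_URL = "https://example.com/models/latest.onnx"  # placeholder update endpoint
MODEL_PATH = "/var/lib/edge-ai/model.onnx"            # placeholder local model path
UPDATE_INTERVAL_S = 3600                              # check for a newer model hourly


def load_detector(path):
    """Load whatever inference runtime the device uses (stubbed here)."""
    return lambda frame: []  # placeholder: returns a list of detected event labels


def maybe_update_model(last_check):
    """Best-effort model refresh; failures are ignored because the bus may be offline."""
    if time.time() - last_check < UPDATE_INTERVAL_S:
        return last_check, False
    try:
        urllib.request.urlretrieve(MODEL_URL, MODEL_PATH)
        return time.time(), True
    except OSError:
        return time.time(), False  # no connectivity right now; keep the current model


def main():
    detector = load_detector(MODEL_PATH)
    camera = cv2.VideoCapture(0)
    last_check = 0.0
    while True:
        last_check, updated = maybe_update_model(last_check)
        if updated:
            detector = load_detector(MODEL_PATH)  # hot-swap the model without stopping
        ok, frame = camera.read()
        if not ok:
            continue
        for event in detector(frame):  # e.g. "weapon", "person_fell"
            print(f"ALERT: {event}")   # in practice: queue a compact message for upload


if __name__ == "__main__":
    main()
```

The point of the structure is that the decision path never depends on the network: detection runs continuously on the vehicle, while model updates remain purely best-effort.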

Beyond Traditional Data Center Deployment

Cachengo’s approach extends far beyond typical enterprise deployments. Its hardware runs in non-traditional environments, from transit buses to remote solar farms, where reliable cloud connectivity is impractical or expensive. Westmoreland describes working with organizations managing large solar installations in remote locations: “You care about what they’re doing. You want to store what they’re doing. But you don’t necessarily want to keep it or store it in the cloud. You want to process it in near real time.”

This distributed approach also includes predictive maintenance, where AI models analyze audio patterns to detect bearing failures before they occur, demonstrating practical applications beyond typical AI use cases.
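The article does not describe how these acoustic models work internally, but a common baseline technique is to compare the high-frequency spectral energy of incoming audio against a recording of the machine in a known-healthy state. The sample rate, frequency band, and threshold in this Python sketch are illustrative assumptions, not Cachengo’s parameters.

```python
# Illustrative bearing-health check: compare the spectral energy of a new audio
# window against a baseline captured while the machine was known to be healthy.
# The sample rate, the 1-6 kHz band, and the 3x threshold are illustration values.
import numpy as np

SAMPLE_RATE = 16_000          # Hz, assumed microphone sample rate
FAULT_BAND = (1_000, 6_000)   # bearing wear often shows up as high-frequency energy


def band_energy(window: np.ndarray, low_hz: float, high_hz: float) -> float:
    """Total spectral energy of the window inside [low_hz, high_hz]."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLE_RATE)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return float(spectrum[mask].sum())


def is_anomalous(window: np.ndarray, healthy_baseline: float, factor: float = 3.0) -> bool:
    """Flag the window if its fault-band energy exceeds the healthy baseline by `factor`."""
    return band_energy(window, *FAULT_BAND) > factor * healthy_baseline


# Usage: establish a baseline from healthy audio, then check each new one-second window.
rng = np.random.default_rng(0)
healthy = 0.1 * rng.normal(size=SAMPLE_RATE)            # stand-in for healthy audio
baseline = band_energy(healthy, *FAULT_BAND)
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
worn = healthy + np.sin(2 * np.pi * 3_000 * t)          # simulated fault harmonic
print("maintenance needed:", is_anomalous(worn, baseline))  # -> True
```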

Open Source Strategy and Community Integration

Jimmy McArthur, Director of Business Development at OpenInfra, emphasizes the significance of this partnership: “This is exactly the kind of Platinum Member we want at the OpenInfra Foundation, because they’re engaging with the community — they’re coming in with eyes open.”

Cachengo’s integration roadmap focuses on OpenStack projects including Nova, Swift, Ironic, and Neutron. The company plans to present at the OpenInfra Summit in Europe, sharing code that enables geographically dispersed storage components for decentralized and distributed processing.
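That code has not been released yet, but the general pattern of an edge node shipping compact inference results into OpenStack object storage can be sketched with the standard python-swiftclient library. The endpoint, credentials, and container names below are placeholders, not Cachengo’s integration.

```python
# Sketch of an edge node pushing compact inference results into an OpenStack
# Swift container. Credentials, endpoints, and names are placeholders.
import json
import time

from swiftclient.client import Connection

conn = Connection(
    authurl="https://keystone.example.com/v3",  # placeholder Keystone endpoint
    user="edge-node-01",
    key="secret",
    auth_version="3",
    os_options={
        "project_name": "edge",
        "user_domain_name": "Default",
        "project_domain_name": "Default",
    },
)

container = "inference-results"
conn.put_container(container)  # creates the container if it does not already exist

result = {
    "node": "bus-1423",
    "timestamp": time.time(),
    "event": "person_fell",
    "confidence": 0.93,
}

# Ship only the small JSON summary, not the raw video, so objects stay cheap to
# replicate across geographically dispersed Swift regions.
object_name = f"bus-1423/{int(result['timestamp'])}.json"
conn.put_object(
    container,
    object_name,
    contents=json.dumps(result),
    content_type="application/json",
)
```

Keeping raw video on the device and replicating only small summaries is what makes geographically dispersed object storage practical over constrained links.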

Market Implications for AI Infrastructure

The partnership reflects broader market trends. As McArthur explains, “OpenStack is the de facto software for running AI workloads,” but Cachengo’s approach of running inference models at the edge is an application area that other organizations haven’t yet explored with OpenStack.

This collaboration comes as the OpenInfra Foundation joins the Linux Foundation, creating a larger ecosystem for infrastructure innovation. The addition of Cachengo as the third platinum member in the past year demonstrates growing momentum in open source infrastructure solutions.

Looking Forward

Westmoreland’s 35-year journey through open source—from VA Linux to the early days of SourceForge—provides perspective on infrastructure evolution. His surprise at OpenStack’s continued growth and stability reinforces the platform’s role as foundational technology supporting modern containerization and Kubernetes workloads.

The partnership between Cachengo and OpenInfra represents more than a business relationship—it’s a signal that open source infrastructure is evolving to meet the massive demands of AI workloads where traditional centralized approaches fail.

This interview was conducted by TFIR. For more B2B tech startup video interviews and in-depth information, you can visit their website tfir.io.