Our genesis is an interesting tale and probably requires a beer to really appreciate. There are now more than 20 of us, as we continue to grow and convert folks with our radical way of thinking. The best analogy for the reactions we sometimes receive is that we're immersed in a world of flat-earthers as we present that the world truly is round. Even in technology, we sometimes grow so accustomed to a particular way of doing things that we fail to question whether it is still a good thing. This is certainly the situation when we think about storage. It is typically the single least appreciated part of our infrastructure purchase: the part we have to have, but desperately wish we didn't. We felt that sentiment needed to change.
The original inspiration for Cachengo came from our co-founder, Ash Young, who has spent the past 30+ years pioneering in the storage industry. When you meet him, you wouldn't think he has been around so long, especially if you hear him discuss his passions and what he does in his spare time. Yet the storage architecture and software stack that is the de facto standard today was his idea more than two decades ago, back when most people had never even heard of Linux or open source. But paradigms change for good reasons, and the time finally seemed right to make a change.
To appreciate this, take a look back at the conditions 20 years earlier. Businesses were dominated by different operating system platforms. Windows NT owned the average Sales & Marketing organization. Netware was typically used in Finance and Operations. And Unix was almost always found in the Engineering departments. Companies needed a way to share information across different departments, and the use of file translation software on every client was cumbersome and expensive. These challenges created the opportunity for a new class of Network Attached Storage (NAS) to be born. The initial solutions were expensive, large, and consisted of multiple rack-mountable parts. Ash saw this and felt things could become more mainstream if the appliances were all-in-one, took a more open source approach to the operating system, and leveraged more commodity components. This quickly became the recipe for modern Enterprise and Cloud-based storage systems.
Today, storage systems have become fairly complex as they try to meet the needs of Cloud computing. While the fundamental building blocks are the same, the workload has increased tremendously, which requires a different way of doing things that doesn't radically change the ecosystem. In other words, we want to avoid throwing the baby out with the bath water. Instead, we want to leverage the existing protocols and application stacks used on the computing side, while addressing performance bottlenecks, latencies, CAPEX, and OPEX. The current methodology of throwing the latest and greatest CPU into each storage appliance, and then increasing the number of appliances in the cluster, has not reduced either CAPEX or OPEX. And some would argue it hasn't really changed latency or performance much, either. So, this is where we focused our attention.
What we did was relatively simple; but it was so simple and so obvious that no one else really bothered to do it. Just as the economic climate for components had shifted 20 years earlier, the climate now allowed us to migrate from a single CPU complex to placing a CPU onto each drive. By doing this, we found that we could not only reduce latencies and increase performance, but also significantly reduce both CAPEX and OPEX. When we say "significantly", we mean by a factor of ten. That was simply too significant to ignore. So, we started to assemble prototypes and give demonstrations of our concept at events such as Mobile World Congress (MWC) and the Open Networking Summit (ONS) in early 2018. No one really expected to get excited about storage, but all of that quickly changed.
We want to provide the best-performing and easiest-to-deploy storage systems to meet the increased demands of Edge Computing, Machine Learning (ML), and Artificial Intelligence (AI), by replacing the overuse of single, large CPUs with many smaller ones, embedded on each drive, to perform localized computing where the data sits.
We wish to dramatically alter the status quo computing paradigm that relies on migrating data from disk to CPU and back to disk, replacing it with one where the computing is truly distributed and driven down to each drive, resulting in a dramatic reduction in data center footprint and greenhouse gases.
We are based in West Tennessee. While many in our company are from Silicon Valley, we wanted a location that would allow us to easily bring manufacturing back to the United States. Sure, a lot of our components necessarily come from overseas, but final assembly, test, QA, etc., are all based here. We felt we could bring trust and confidence back to the manufacturing of computing equipment, while also creating new jobs, at higher wages, in a part of the country that might better appreciate them. We looked at many states, and continue to consider others, but Tennessee quickly felt like home.
Ash is our Chief Executive Officer. He has served in a number of VP and CXO roles for some of the pioneers in storage and open source, including Xyratex (now Seagate), Snap Appliance (now Microsemi), and VA Linux Systems. His educational background includes undergraduate studies in math and physics and postgraduate studies in theology, law, and data science.
Jimmy is our Chief Scientist. He previously served as a Software Engineer and Data Scientist at Facebook, where he worked on detecting automated attacks against the site as part of the Infrastructure organization. Jimmy received his PhD in Chemical Engineering from UCLA while working on models of bacterial metabolism.