Talking Edge Computing Tech with Scale Computing CTO

ValueWalk Q&A with Alan Conboy, office of the CTO, Scale Computing. In this interview, Alan discusses his and his company’s background, the problems that Scale Computing can solve, their competitors, hyperconverged infrastructure and edge computing, and his technology predictions for 2020.

Can you tell us about your background?

With more than 20 years of experience, I am an industry veteran and technology evangelist specializing in designing, prototyping, selling and implementing disruptive storage and virtualization technologies. I have served in the office of the CTO at Scale Computing since 2009, where I have held multiple roles, including senior systems engineer and global solutions architect.

What about Scale Computing’s background?

Scale Computing Chief Executive Officer Jeff Ready and co-founders Jason Collier and Scott Loughmiller set out to simplify IT and revolutionize the virtualization market when they founded the company in 2007. To do so, Scale Computing engineered HC3, the IT infrastructure platform that allows organizations to do more with less. Scale Computing HC3 eliminates the need for traditional IT silos of virtualization software, disaster recovery software, servers, and shared storage, replacing these with a fully integrated, highly available platform for running applications.

Can you explain for those of us with a non-tech background what you do, and what problems Scale Computing solves?

The two biggest costs in IT are downtime and people, and the Scale Computing HC3 platform addresses both by simplifying IT operations. Scale Computing HC3 software uses self-healing and automation to maximize application uptime and performance, simplify management, and protect data. When ease of use, high availability, and total cost of ownership matter, Scale Computing HC3 is the ideal IT infrastructure platform for distributed enterprises, large retailers, and SMBs alike.

Who are Scale Computing’s competitors?

Scale Computing competes within the hyperconvergence, virtualization and edge computing market spaces against competitors such as VMware, Nutanix and Dell EMC. Scale Computing differentiates itself within the marketplace by delivering affordable, reliable HCI and edge computing solutions to businesses. By retaining our focus on serving companies across the globe, we are able to help organizations achieve their IT infrastructure goals at a fraction of the price of our competitors.

What is the most exciting thing Scale Computing is working on?

With edge computing on the rise, organizations require solutions that fit small footprint requirements with robust application performance, while still being affordable, efficient and simple to manage remotely. In December, we launched the latest appliance in our HC3 family, the HE150: a small, all-flash, NVMe storage-based compute appliance ideal for distributed enterprises and sites in need of highly available infrastructure. The HE150 is powered by the Intel NUC, offering a low-cost edge solution in a tiny form factor and making deployment possible in small clusters where highly available computing was previously cost-prohibitive.

We engineered the HE150 to meet a growing demand among distributed organizations that require infrastructure at the edge of the network, specifically at sites with limited IT staff and resources. Our ability to consistently deliver HCI technology in a smaller form factor and at a lower price point makes edge computing capabilities and resources accessible to more organizations.

Can you explain what hyperconverged infrastructure (HCI) and edge computing are for those of us without a tech background?

In a nutshell, hyperconverged infrastructure (HCI) is an appliance-based approach that combines all IT processes (servers, storage, and virtualization) into a single-vendor solution. HCI's top benefits are well-established in the tech industry: simpler management, less rack space and power, fewer overall vendors, and an easy transition to commodity servers. An HCI solution is ideal for enterprises with numerous locations, including retailers, government agencies and departments, educational institutions, and oil and gas companies.

The next big frontier for HCI now sits at the edge of the network. Edge computing is physical computing infrastructure, intentionally located outside the four walls of the centralized data center, so storage and compute resources can be placed where they are needed. Using a small hardware footprint, infrastructure at the edge allows users to collect, process and manage vast quantities of data, which can then be uploaded to either a centralized data center or the cloud.

What is one technology prediction you have for 2020?

We are living in a world that is increasingly data-driven, and that data is being generated outside the four walls of the traditional data center. As we start this new decade, organizations are taking a much deeper look at their cloud usage. Cloud was originally positioned as the answer to all problems, but now the question is, at what cost? More organizations are turning to hybrid cloud and edge computing strategies, choosing solutions that process data at the source of its creation. In 2020, organizations will rely on hybrid environments, with edge computing collecting, processing and reducing vast quantities of data, which are then uploaded to a centralized data center or the cloud.