Today we are announcing the general availability of the world’s first
commercial cloud computer — along with
our $44M Series A financing.

From the outset at Oxide, and as I outlined in
my 2020 Stanford talk,
we have had three core beliefs as a company:

  1. Cloud computing is the future of all computing infrastructure.

  2. The computer that runs the cloud should be able to be purchased and not
    merely rented.

  3. Building a cloud computer necessitates a rack-level approach — and the
    co-design of both hardware and software.

Of these beliefs, the first is not at all controversial: the agility,
flexibility, and scalability of cloud computing have been indisputably
essential for many of the services that we depend on in the modern economy.

The degree to which the second belief is controversial, however, depends on who
you are: for those who are already running on premises due to security,
regulatory, economic, or latency reasons, it is self-evident that computers
should be able to be purchased and not merely rented. But to others, this has
been more of a revelation — and since we started Oxide, we have found more and
more people realizing that the rental-only model for the cloud is not
sustainable. Friends love to tag us on links to
VC thinkpieces,
CTO rants, or
analyst reports on
industry trends
— and we love people thinking of us, of course (even when
being tagged for the dozenth time!) — but the only surprise is how surprising
it continues to be for some folks.

The third belief — that the development of a cloud computer necessitates
rack-scale design of both hardware and software — may seem iconoclastic to
those who think only in terms of software, but it is in fact not controversial
among technologists: as computing pioneer Alan Kay famously observed, “people
who are really serious about software should make their own hardware.” This is
especially true in cloud computing, where the large public cloud companies
long ago came to the conclusion that they needed to design their
own holistic systems. But if this isn’t controversial, why hasn’t there been
a cloud computer before Oxide’s? First, because it’s big: to meaningfully
build a cloud computer, one must break out of the shackles of the 1U or 2U
server, and really think about the rack as the unit of design. Second,
it hasn’t been done because it’s hard: co-designing hardware and
software that spans compute, networking, and storage requires building an
extraordinary team across disparate disciplines, coupling deep expertise with
a strong sense of versatility, teamwork, and empathy. And the team isn’t
enough by itself: it also needs courage, resilience, and (especially) time.

So the biggest question when we set out was not “is the market there?” or “is
this the right way to do it?”, but rather: could we pull this off?

Pulling it off

We have indeed
pulled it off — and it’s been a
wild ride! While we have talked about the trek quite a bit on our podcast,
Oxide and Friends (and
specifically, Steve
and I recently answered questions about the rack), our general availability
is a good opportunity to reflect on some of the first impressions that the
Oxide cloud computer has made upon those who have seen it.

“Where are all the boxes?”

The traditional rack-and-stack approach starts with a sea of boxes arriving
with servers, racks, cabling, etc. This amounts to a literal
kit car approach — and it starts with
tedious, dusty de-boxing. But the Oxide rack ships with everything installed
and comes in just one box — a crate that is its own feat of
engineering. All of this serves to dramatically reduce the latency from
equipment arrival to power on and first provision — from weeks and months to
days or even hours.

“Is it on?”

We knew at the outset that rack-level design would afford us the ability to
change the geometry of compute sleds — that we would get higher density in the
rack by trading horizontal real estate for vertical. We knew, too, that we
were choosing to use 80mm fans for their ability to move more air much more
efficiently — so much so that we leveraged
our
approach to the supply chain
to partner with Sanyo Denki (our fan provider) to
lower the minimum speed of the fans from 5K RPM to the 2K RPM that we needed.
But adding it up, the Oxide rack has a surprising aesthetic attribute: it is
whisper quiet.
To those accustomed to screaming servers, this is so
unexpected that when we were
getting FCC
compliance, the engineer running the test sheepishly asked us if we were sure
the rack was on — when it was dissipating 15 kW! That the rack is quiet wasn’t
really deliberate (and we are frankly much more interested in the often hidden
power draw that blaring fan noise represents), but it does viscerally embody
much of the Oxide differentiation with respect to both rack-level design and
approach to the supply chain.

“Where are the cables?”

Anyone accustomed to a datacenter will note the missing mass of cold-aisle
cabling that one typically sees at the front of a rack. But moving to the back
of the rack reveals only a DC busbar and
a tight, cabled backplane. This represents one of the bigger bets we made:
we blind-mated networking. This was mechanically tricky, but the payoff is huge: capacity
can be added to the Oxide cloud computer simply by snapping in a new compute
sled — nothing to be cabled whatsoever! This is a domain in which we have
leapfrogged the hyperscalers, who (for their own legacy reasons) don’t do it
this way. This can be jarring to veteran technologists. As one exclaimed upon
seeing the rack last week, “I am both surprised and delighted!” (Or rather:
a very profane variant of that sentiment.)

“You did your own switch too?!”

When we first started the company, one of our biggest technical quandaries was
what to do about the switch. At some level, both paths seemed untenable: we
knew from our own experience that integrating with third-party switches would
lead to exactly the kind of integration pain for customers that we sought to
alleviate — but it also seemed outrageously ambitious to do our own switch in
addition to everything else we were doing. But as we have many times over the
course of Oxide, we opted for the steeper path in the name of saving our
customers grief, choosing to
build our own
switch. If it has to be said,
getting it working
isn’t easy! And of course, building the switch is insufficient: we also
needed to build our
own networking software
— to say nothing of the
management network
required to be able to manage compute sleds when they’re powered off.

“Wait, that’s part of it?!”

It’s one thing to say that all of the software that one needs to operate the
cloud computer is built in — but it’s another to actually see what that
software includes. And for many, it’s seeing the
Oxide web console (or its
live demo!) that really drives the
message home: yes, all of the software is included. And because the
console
implementation
is built on the public API,
everything that one can do in the console for the Oxide rack is also available
via CLI and API — a concrete manifestation of our
code-as-contract approach.
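
To make that concrete, here is a minimal sketch of what “built on the public
API” looks like in practice: listing the instances in a project by calling the
rack’s HTTP API directly, the same kind of call the console itself makes. The
host, token, endpoint path, and query parameter below are illustrative
placeholders rather than a definitive reference; the published API
documentation is the source of truth.

```typescript
// Minimal sketch: list the instances in a project over the rack's HTTP API,
// the same public API that the web console is built on.
// NOTE: the host, token, endpoint path, and "project" query parameter are
// illustrative placeholders; consult the API reference for exact shapes.

const OXIDE_HOST = "https://oxide.example.internal"; // your rack's API endpoint (placeholder)
const OXIDE_TOKEN = "<api-token>";                   // an API token for your silo (placeholder)

async function listInstances(project: string): Promise<string[]> {
  const resp = await fetch(
    `${OXIDE_HOST}/v1/instances?project=${encodeURIComponent(project)}`,
    { headers: { Authorization: `Bearer ${OXIDE_TOKEN}` } },
  );
  if (!resp.ok) {
    throw new Error(`request failed: ${resp.status} ${resp.statusText}`);
  }
  // A paginated list of instances; the console renders this same payload.
  const body: { items?: Array<{ name: string }> } = await resp.json();
  return (body.items ?? []).map((instance) => instance.name);
}

// Usage: the same operation one would perform in the console's instances view.
listInstances("my-project").then((names) => console.log(names), console.error);
```

Because the console has no private back channel, parity across console, CLI,
and API falls out by construction rather than by promise.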

“And there’s no separate licensing?”

One common source of pain for users of on-prem infrastructure has been license
management: financial pain due to over-paying and under-utilizing, and
operational pain in the navigation of different license terms, different
expiration dates, unpredictable dependencies, and
uncertain
vendor futures. From the beginning we knew that we wanted to deliver a
delightful, integrated experience: we believe that cloud computers should come
complete with all system software built-in, and with no additional licensing to
manage or to pay for. Bug fixes and new features are always only an update
away and do not require a multi-departmental discussion to determine value and
budget.

“It’s all open source?”

While the software is an essential part of the Oxide cloud computer, what we
sell is in fact the computer. This allows Oxide, as a
champion of open source, a particularly straightforward open
source strategy: our software is all open.
So you don’t need to worry about hinky open core models or relicensing
surprises. And from a user perspective, you are assured levels of transparency
that you don’t get in the public cloud — let alone the proprietary on-prem
world.

Getting your own first impression

We’re really excited to have the first commercial cloud computer — and for
it to be generally available! If you yourself are interested, we look
forward to it making its first impression on you —
reach out to us!
