Dedicated Servers vs. Amazon EC2 Dedicated Instance
Amazon EC2 Dedicated Instances launched within Amazon VPC run on hardware dedicated to one customer. Compared to an enterprise server in a private data center, that's less secure; compared to existing hosting, it's no less secure than decade-old enterprise-grade hosting.
Amazon Web Services today announced the availability of dedicated compute instances within a VPC:
Dedicated Instances are Amazon EC2 instances launched within your Amazon Virtual Private Cloud (Amazon VPC) that run on hardware dedicated to a single customer. Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud – on-demand elastic provisioning, pay only for what you use, and a private, isolated virtual network, all while ensuring that your Amazon EC2 compute instances will be isolated at the hardware level.
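For concreteness, the hardware-level isolation in the announcement is requested at launch time through the instance's tenancy attribute; everything else is an ordinary VPC launch. A minimal sketch of the launch parameters, using the naming of a modern EC2 SDK (boto3) purely as an illustration – the AMI and subnet IDs below are placeholders, not values from the announcement:

```python
# Hypothetical sketch: the "Tenancy" placement attribute is what requests
# single-customer hardware. AMI and subnet IDs are placeholders.
launch_params = {
    "ImageId": "ami-xxxxxxxx",             # placeholder AMI ID
    "InstanceType": "m1.large",
    "MinCount": 1,
    "MaxCount": 1,
    "SubnetId": "subnet-xxxxxxxx",         # must be a subnet in your VPC
    "Placement": {"Tenancy": "dedicated"}, # hardware dedicated to one account
}
```

Passing this dict to `boto3.client("ec2").run_instances(**launch_params)` would request a dedicated-tenancy instance inside the VPC subnet; omitting the `Placement` entry would give an ordinary shared-tenancy instance.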
Of course, the humor here is that Amazon didn’t explain which hardware it was referring to. As Chris Hoff of Cisco writes in his Rational Survivability blog, Amazon isn’t isolating the network or storage layers, which at first blush leaves the new offering open to the criticism that it’s hardly more isolated than a normal AMI. This is true. But the bigger question is: what are you comparing Amazon’s new offering to? If you compare it to an enterprise server in a private data center, it’s clearly less secure (unless, perhaps, you apply cloud-specific security and encryption tools). But if you compare it to existing hosting offerings, it’s no less secure than enterprise-grade hosting has been for a decade.
Shared networks? Layer 2 VLANs have been used for network separation in large hosting environments since right after they were invented. That didn’t stop the most paranoid of the first wave of enterprise co-location customers from insisting on physical switches to separate each tier of their environment, even though VLANs were safe. In HA architectures, that paranoid stance probably quadrupled the opex and capex of their networks.
Shared storage? Anyone remember SNI, a shared data-center storage company that went public in 2000 before going bankrupt along with most of its dozen-odd competitors? Some co-lo customers used it and saved money, while the more paranoid bought their own EMC gear.
That’s why criticism about shared infrastructure in Amazon’s new dedicated compute instances will fall mostly on deaf ears. Networks and storage have been shared for longer than cloud-style compute cycles have been.
It also doesn’t look like this is a defensive move on Amazon’s part. Just about every cloud evangelist, including myself, has pointed out in the last year that it’s harder to be compliant with security standards on Amazon than on some other cloud providers like Terremark or Savvis. Amazon is evolving the service in the right direction based on market demand, and doing it blindingly fast for a company of its size.
Hosting and co-location companies spent a long time trying to figure out what to do about burstable capacity. For years the answer was to use a CDN. That worked pretty well, especially for web 1.0 sites. But as web 2.0 architectures evolved, and scaling needs became more extreme, we first saw hosting providers build their own CDNs, then we found dedicated server companies like Rackspace, OpSource, and Savvis roll out public and private cloud offerings next to their dedicated server offerings. This made a lot of sense because their customers with dedicated servers sometimes need burstable server capacity. In fact, some of them were going to Amazon for that burstable capacity, and they weren’t coming back, in spite of AWS’s shortcomings as compared to dedicated servers. Given that dynamic, it’s no surprise that Amazon is giving its customers an easier way to make AWS their permanent home rather than a place for overflow capacity.
Better yet, this solves the bandwidth and latency problem in cloudbursting that makes hybrid clouds harder to implement. I’ve blogged about this several times, most notably here in the NY Times. When you can “burst” traffic from a dedicated server at AWS to a shared server, or better still to another dedicated server provisioned on the fly, you can simply ignore inter-cloud latency as a performance barrier. That’s what’s really cool about the new AWS dedicated compute instance.
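The burst path described above amounts to a simple capacity check: when load on the dedicated fleet crosses a threshold, overflow traffic is routed to on-demand capacity provisioned in the same VPC, so requests never cross a cloud boundary. A minimal sketch of that routing decision – the function name, pool labels, and threshold are my own illustrative choices, not anything from AWS:

```python
# Hypothetical sketch of same-cloud bursting: overflow goes to on-demand
# capacity in the same VPC, so there is no inter-cloud network hop.

def route_request(dedicated_load: float, burst_threshold: float = 0.8) -> str:
    """Return which pool should serve the next request."""
    if dedicated_load < burst_threshold:
        return "dedicated"           # normal case: the dedicated fleet
    return "on-demand-same-vpc"      # burst case: elastic capacity nearby

# Example: at 90% load, traffic spills to on-demand instances in the same VPC.
pool = route_request(0.9)
```

The point the sketch makes is that both branches land inside the same provider's network; a first-generation hybrid setup would have had the burst branch cross a WAN link to a different cloud.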
Providers with co-lo and dedicated server offerings like Rackspace, Savvis, and Terremark, and others like OpSource, are going to feel increased competition from AWS’s dedicated compute instances, although in a surprising twist the AWS offering appears to be more expensive. Perhaps it ought to be, given the seamless interoperability between AWS’s dedicated and burstable instances, especially compared to most first-generation providers, which lack seamless integration between their dedicated and burstable offerings.
[Ed. note: Trend Micro would like to know what you think about this. We enthusiastically invite your comments, and we will read every one of them.]