Five Things Rackspace Can Do To Win Again

May 14, 2013 at 08:00 PM

I've been reading about slow growth for the Rackspace cloud and how apps are pulling their support. I shouldn't be surprised, given that my own usage of the Rackspace cloud has also dwindled, despite the ORD datacenter being one of the most rock-solid facilities I have ever used. I know Rackspace has spent the last few years working hard and innovating, but somehow they still seem to be missing the boat. Here is a list of key things that drove me back to AWS and that Rackspace could implement to reverse the trend.

1. Bring Your Own Kernel/Image - One of the big issues I have with Rackspace is flexibility. This is also an area where AWS shines. When it comes to VM images, AWS has built a marketplace around them; there are thousands. Rackspace has 27 starter images in its "First Generation" platform and 49 images in its "Second Generation".

2. Get Rid of Storage Templates - Rackspace's new OpenStack storage lets you add block storage from 100 GB to 1024 GB in size. On AWS I can allocate pretty much any size I want.

3. Notify Customers When Logging Into an Instance - One of the eerie things about using the Rackspace cloud is that I randomly get logins from Rackspace staff. They have a few daemons they set up and like to run on each instance. Since I run Arch Linux and keep it up to date, their daemon sometimes stops working when its dependencies get upgraded (they don't use a package!). So every once in a while I log into an instance and see that someone has been there, tinkering with the daemon to get it running again. I wouldn't mind this if there was communication. Or maybe I would. I would rather get a ticket that says "Please run this software on your instances, or ask us if you'd like us to install it."

4. Add an API for Health Checks/Failover - Provide basic scripting hooks in the monitoring so that failover can be initiated; see the sketch after this list. (Shortcut: buy a DNS provider like Ultra or Dyn)

5. Mirror Databases in Both Datacenters - Let me configure MySQL replication between DFW and ORD.
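To make item 4 concrete, here is a rough sketch of the kind of scripted failover I have in mind, written against AWS since that is where you can do it today: poll a health endpoint and repoint a DNS record when the primary stops answering. The URL, zone, and addresses below are placeholders, and real monitoring would add retries and alerting, but the hook Rackspace needs to expose amounts to this:

    #!/usr/bin/env python2
    # Illustrative failover sketch (not a Rackspace API): poll a health
    # URL and, if the primary is down, repoint an A record at a standby.
    # All names and addresses here are placeholders.
    import urllib2
    import boto

    HEALTH_URL = 'http://www.example.com/healthz'  # hypothetical endpoint
    RECORD = 'www.example.com.'
    STANDBY_IP = '203.0.113.10'                    # documentation address

    def primary_healthy():
        try:
            return urllib2.urlopen(HEALTH_URL, timeout=5).getcode() == 200
        except Exception:
            return False

    if not primary_healthy():
        conn = boto.connect_route53()              # uses ambient credentials
        zone = conn.get_zone('example.com')
        if zone:
            zone.update_a(RECORD, STANDBY_IP, ttl=60)  # cut over to standby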

Being able to copy Cloud Files objects and machine images between the two datacenters (something AWS introduced a few months back) is very handy for building resilient distributed infrastructure as well, but I think the five items above would go a long way toward stemming the bleeding. Since Rackspace does have more memory options, it could prove to be the more flexible platform. While I'm at it though, three more wish list items:

6. Unify the US and UK Control Panels - Having to log in and manage Rackspace UK separately is a pain.

7. Multi-Factor Authentication for the Console - There are many ways to get MFA into your infrastructure, but the management console itself is still MFA-less. I have read that Rackspace is working on this, and they may well be working on other items on this list.

8. Encrypted Server Images - Since there is console support, you could completely secure an instance. This isn't practical for every application. I encrypt a lot of my storage at Amazon, but since AWS doesn't have interactive console support, the boot chain remains unencrypted. This is another way Rackspace could pull ahead, although, much like the templating of OS images and block storage, it seems Rackspace is really tied to a philosophy of managed services rather than manage-your-own. Encrypted images could become a hurdle to upgrading to managed services.

The pace of innovation coming out of Amazon is really amazing for a company that size. I know many hard working people at Rackspace as well and they aren't standing still. Maybe the difference is a little marketing, but a few of the items on my list point to a shift in perspective on the Rackspace cloud product offering itself. An "open" cloud behind a set of pre-defined templates doesn't seem to be all that flexible or fun.

Permanent Link — Posted in Cloud Computing, Technology Management

Arch Linux AMI for Amazon EC2

April 02, 2013 at 08:00 PM

Update August 21, 2016

I am no longer maintaining Arch Linux images for Amazon EC2, and I no longer recommend using Arch Linux on servers. The attitude in some of the core pieces of the system has become far less disciplined and, to put it politely, more centered on agenda than on users or actual system use.

Specifically, the issue that broke this for me is the way pacman versions since the file reorganization effort remove symlinks in the root install path of a package. This bug has been brought up several times in pacman's history. The author and current Arch czar has stated that symlinks are improper and should be replaced with bind mounts. That approach breaks the best practice of separating the OS from the data, and using bind mounts causes disk metrics, analysis, and monitoring to misreport. In previous instances the bug was fixed; this time, so far, it is not being addressed.

I continue to be a proponent of Arch Linux for desktop use, but I have stopped using it on servers. I'm currently deploying using CentOS and most of the scripts I have open sourced for system management have been updated to work with CentOS.


The content below is kept for historical purposes only.


These Arch Linux images for Amazon EC2 use my ec2-init script, which requires python2 and boto; other than that, they are stock Arch Linux with just the base package group and the LTS kernel.

Usage Notes:

The ec2-init script will find the following variables in the user-metadata for the instance:

  • hostname - The hostname to set for the instance
  • mailto - The address to email with a message listing the instance information and IP address
  • mailfrom - The From address of the email message

The user-metadata should be pipe-delimited like this:

hostname=myhost.example.com|mailto=myemail@example.com|mailfrom=ec2host@example.com
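For illustration, parsing that format only takes a couple of lines. This is a minimal sketch, not the actual ec2-init source, using boto's helper to fetch the user data:

    #!/usr/bin/env python2
    # Sketch: fetch and parse the pipe-delimited user data shown above.
    # Not the actual ec2-init code.
    import boto.utils

    raw = boto.utils.get_instance_userdata() or ''
    params = dict(pair.split('=', 1) for pair in raw.split('|') if '=' in pair)

    print params.get('hostname')   # e.g. myhost.example.com
    print params.get('mailto')
    print params.get('mailfrom')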

Additionally, if the instance is granted IAM role permissions for Route53, the script will create or update a DNS entry for the hostname if it finds a matching zone in Route53.
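The Route53 piece looks roughly like the following. Again, this is a hedged sketch of the behavior rather than the script's actual code; boto picks up the IAM role credentials automatically, and the names are placeholders:

    #!/usr/bin/env python2
    # Sketch: upsert an A record for the instance's hostname in Route53.
    # Assumes an IAM role granting Route53 access; names are placeholders.
    import boto
    import boto.utils

    hostname = 'myhost.example.com'   # from the parsed user data
    public_ip = boto.utils.get_instance_metadata()['public-ipv4']

    conn = boto.connect_route53()
    zone = conn.get_zone('example.com')        # matching hosted zone, if any
    if zone is not None:
        record = hostname + '.'
        if zone.get_a(record):
            zone.update_a(record, public_ip, ttl=300)
        else:
            zone.add_a(record, public_ip, ttl=300)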

Pacman is functional, but package signing has not been initialized. I recommend installing haveged and then initializing the signing keys:

# pacman-key --init

# pacman-key --populate archlinux

The pacman-key --init command will take a while, or may seem hung, while the system gathers enough entropy for the random number generator. To help it along, you can log into another session and run an ls -lR /, since the entropy pool is fed by system activity.

See Pacman-key on the Arch Linux Wiki for more information.

Permanent Link — Posted in Arch Linux, Cloud Computing, Amazon Web Services

Arch Linux Boot Script for Amazon EC2

January 17, 2013 at 08:00 PM

I have an updated Arch Linux image for Amazon EC2 that is systemd-based. I created a boot script that sets the hostname and root keys. It will even update DNS in Route53 and send you an email letting you know the instance IP.
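If you are curious, the notification step boils down to something like this simplified sketch (placeholder addresses, not the script verbatim):

    #!/usr/bin/env python2
    # Sketch: mail the instance details via the local MTA.
    # Addresses and instance details are placeholders.
    import smtplib
    from email.mime.text import MIMEText

    msg = MIMEText('myhost.example.com is up at 203.0.113.10')
    msg['Subject'] = 'EC2 instance ready'
    msg['From'] = 'ec2host@example.com'
    msg['To'] = 'myemail@example.com'

    server = smtplib.SMTP('localhost')
    server.sendmail(msg['From'], [msg['To']], msg.as_string())
    server.quit()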

Released under the MIT license on github.

I am working on cleaning up the base image that I use on Amazon EC2 and publishing the AMI as well.

Permanent Link — Posted in Arch Linux, Cloud Computing, Amazon Web Services

2012 Cloud Computing Adoption Survey

June 26, 2012 at 06:00 PM

Rackspace has put out a nice infographic highlighting what IT decision makers are looking for as well as what they are concerned about when it comes to cloud.

Rackspace® — Rogue IT, Cloud Lock-In Dominate Cloud Concerns [INFOGRAPHIC]

Permanent Link — Posted in Cloud Computing, Technology Management, Amazon Web Services

Adjusting IT for Cloud Computing

June 19, 2012 at 08:00 AM

Cloud Computing is not just a paradigm shift for infrastructure. IT operations, accounting and even staffing structure need to be updated to effectively harness the benefits.

In a previous article I illustrated deploying a multi-terabyte RAID array in the cloud. That takes just a few minutes these days, but it used to take most organizations over a month to provision that much storage through their IT channels. Moving to the cloud will allow organizations to reduce, and potentially eliminate, IT staffing around procurement.

From an accounting standpoint, most IT departments are structured and budgeted around a list of services that they provide: server provisioning, incident management, resource monitoring, web support, database support, and infrastructure. These services to the business typically get rolled up into infrastructure OPEX. Actual integration support is often missed in terms of process, time planning, and accounting.

While some roles can be eliminated, there are new roles that need to be filled. As it gets easier to deploy into the cloud, it is important to put process and authority checks in place to avoid cost overruns and "server creep". Easy deployment allows IT operations to put more focus into integration support. This has given rise to the "DevOps" movement:

In computing, "DevOps" is an emerging set of principles, methods and practices for communication, collaboration and integration between software development (application/software engineering) and IT Operations (systems administration/infrastructure) professionals. It has developed in response to the emerging understanding of the interdependence and importance of both the development and operations disciplines in meeting an organization's goal of rapidly producing software products and services.
~ Wikipedia

Operating successfully in this new environment means adding integration support roles that focus on the interoperability between infrastructure and applications, both during development and in ongoing support. IT staffing needs to shift from hardware support to overall platform management and analysis, taking a consultative approach to building environments in which the organization's products can thrive. This requires understanding and supporting the ecosystem of the business, which for some may require a different skill set than they currently have.

Adopting a model of having dedicated integration support staff and infrastructure support staff will bring transparency to the true operating costs of the product or application. This increased visibility will allow for better planning for IT decision makers.

Permanent Link — Posted in Cloud Computing, Technology Management