
Managed Hosting and the DevOps movement – Part VI – Technology

Updated: Jun 11, 2020


Originally published in February 2015


So far I have explored Sales, People, and Process; now I tackle the technology aspects of DevOps in the world of Managed Service Providers.

In Cameron Haight's Gartner keynote, he outlined the one-button "Build, Deploy, Test" concept. From a technology perspective, and in the world of the managed services provider, there are two concepts here:

Hardware and operating systems to run the customers' applications

Hardware and systems necessary to provision, monitor, and support those customers' applications.

Application Platform

As I talked about in the Sales post, customer requirements drive initial hardware acquisition, and often those requirements do not come from the operations team. For the Managed Services Provider, the objective has been to meet (beat?) the customer's stated goals and provide hardware that satisfies the customer's application delivery requirements. Typically we would then add in overhead for "Best Practices" and then discuss pricing :)

Historically, servers were purchased and built with a role in mind. CPU, memory, and disk were all specified at time of purchase based on the server's intended role within the application delivery platform. On the network side, switch ports were configured into VLANs using access ports appropriate to the server's role and cabled accordingly. On the security side, firewalls were configured with the most restrictive ACLs the policies allowed, ensuring the tightest regulation over the network.

I suspect you can already begin to see some limitations here, but for completeness, consider the following.

The application changes, so the server roles need to change; CPU, memory, and disk need to change. This prompts the need for hardware acquisition. Hail the sales process: quoting, sales orders, signatures, budget, and so on. This hardware acquisition results in delays, outages, and errors. One example that I always remember happened not that long ago: maxing out the RAM in Dell servers apparently required an upgraded power supply. At the time the memory was ordered, no one knew that the power supplies had to be replaced, causing multiple delays, multiple maintenance windows, and extra costs.

Changing a server's role often required a rebuild of the operating system as disk layouts changed, and upgrades sometimes had to occur to support more memory.

Now - many folks will say, "Virtualization fixes all that, right?" - and the answer is, in many ways, yes. As long as there are sufficient host resources, guest resources can be changed pretty much at will. Multiple VLANs can be presented to the virtual switch, and a server's network access can be changed without the intervention of network engineering resources. Firewalls and load balancers may need interventions; however, those are generally quick and easy.

For many customers, though, virtualization is not the answer: their workloads do not lend themselves to that type of infrastructure.

So how do you build out a physical infrastructure with the flexibility of a virtual one?

There are a variety of techniques that can be used in the physical world; a couple of rough sketches follow the list below.

1 - Networking - Treat physical servers as though they were ESX hosts: trunk all VLANs for all roles to every server and use the operating system to manage tagging. Use a balance of host-based and subnet-based firewall rules to allow flexibility (or use Puppet-managed host-based firewalls).

2 - Operating systems - Use tools like Puppet to deploy operating systems and packages to support various server roles in minutes.

3 - Applications - Use Puppet to deploy applications in a consistent, scripted manner.
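
To make technique 1 concrete, here is a minimal Puppet sketch of the host side: the operating system owns the tagged VLAN interface, and the host-based firewall is Puppet-managed. This is a sketch under assumptions, not a prescription - it assumes the puppetlabs/firewall module (versions that still use the action parameter), a Red Hat-style ifcfg layout, and made-up interface names, addresses, and rule numbers.

# VLAN 100 arrives tagged on the trunked port eth0; the operating
# system handles tagging via a sub-interface (RHEL-style ifcfg file).
file { '/etc/sysconfig/network-scripts/ifcfg-eth0.100':
  ensure  => file,
  content => @(EOT),
    DEVICE=eth0.100
    VLAN=yes
    BOOTPROTO=static
    IPADDR=10.0.100.25
    NETMASK=255.255.255.0
    ONBOOT=yes
    | EOT
}

# Host firewall rules live in Puppet, so a role change updates the
# rules on the next agent run (rule number and subnet are made up).
firewall { '100 allow http from load balancer subnet':
  proto  => 'tcp',
  dport  => 80,
  source => '10.0.10.0/24',
  action => 'accept',
}

With every VLAN trunked to every server, changing a server's role means changing which sub-interfaces and rules Puppet declares for it - nobody has to touch the switch port.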
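
For techniques 2 and 3, the role-based pattern looks something like the sketch below: a server's role is a Puppet class that pulls in the packages, services, and application code for that role. The class names, package list, and repository URL are illustrative only, and the application deploy assumes the puppetlabs/vcsrepo module.

# A role class bundles everything the "web server" role needs.
class role::webserver {

  package { ['httpd', 'php']:
    ensure => installed,
  }

  service { 'httpd':
    ensure  => running,
    enable  => true,
    require => Package['httpd'],
  }

  # Application code deployed the same consistent, scripted way
  # (puppetlabs/vcsrepo assumed; the repository URL is hypothetical).
  vcsrepo { '/var/www/app':
    ensure   => latest,
    provider => git,
    source   => 'https://git.example.com/app.git',
    notify   => Service['httpd'],
  }
}

# Re-purposing a server is now a one-line change in its node definition.
node 'server01.example.com' {
  include role::webserver
}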

Next up - MSP deployment and support technologies.



