We’ve entered an era in which our IT infrastructure is becoming a compilation of capacity that is spread out and running on a wide range of platforms: some we completely control, some we control partially and some we don’t control at all. No longer should our IT services discussions start with ‘And in the data center we have…’; instead, they need to center on the mission-critical business applications and transactions delivered by ‘the fabric’.
Who would have thought that all of us long-time ‘data center professionals’ would now be on the hook to deliver IT services using a platform, or set of platforms, over which we have little or no control? Who would have thought we’d be building our infrastructures like fabric, weaving various pieces together like a finely crafted quilt? Yet here we are, and between the data centers we own, the co-location sites we fill and the clouds we rent, we are putting a lot of faith in a lot of people’s prowess to create these computing quilts, or fabrics.
We all know that the executive committee will regularly ask us, “We have now transformed to be digital everything. How prepared are we to deliver these essential, business-critical services?”, and we in turn know that we must respond with a rehearsed confirmation of readiness. The reality is that we are crossing our fingers and hoping that the colos we’ve chosen and the cloud instances we’ve spun up won’t show up on the 6 o’clock news each night. We simply have less and less control as we outsource more and more.
A big challenge, to be sure. What we need to do is focus on the total capacity needed, identify the risk tolerance for each application, and then look at our hybrid infrastructure as a compilation of sub-assemblies, each with its own characteristics for risk and cost. While it’s not simple math to figure out our risk and cost, it *IS* math that needs to be done, application by application. Remember, I can now run nearly any application in my in-house data centers, spin it up at a co-location site, or even burst up to the cloud on demand. The user of that application would likely never notice the difference in platform, yet the cost and risk of processing that transaction would vary widely.
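That application-by-application math can be as simple as weighing the run cost of each platform against its risk-weighted outage cost. The sketch below is purely illustrative: the platform names, dollar figures and outage probabilities are assumptions invented for the example, not real benchmarks.

```python
# Illustrative sketch: comparing platform choices for ONE application by
# expected annual cost = run cost + (outage probability x outage impact).
# All figures are hypothetical assumptions, not real data.

platforms = {
    # name: (annual run cost $, est. annual outage probability, outage impact $)
    "in-house data center": (120_000, 0.02, 500_000),
    "co-location site":     (90_000,  0.04, 500_000),
    "public cloud":         (70_000,  0.06, 500_000),
}

def expected_annual_cost(run_cost, outage_prob, outage_impact):
    """Run cost plus the risk-weighted cost of a failure."""
    return run_cost + outage_prob * outage_impact

for name, (run, prob, impact) in platforms.items():
    print(f"{name}: ${expected_annual_cost(run, prob, impact):,.0f}")
```

With these made-up numbers, the cheapest platform to run is not the cheapest once risk is priced in, which is exactly why the calculation has to be done per application rather than assumed.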
But we have SLAs to manage all of this third-party risk, right? Nope. SLAs are part of the dirty little secret of the industry: most of their prose is spent explaining what the penalties will be WHEN the service fails. SLAs do not prevent failure; they just articulate what happens when failures occur.
So this now becomes a pure business discussion about supporting a mission-critical ‘fabric’. This fabric is the hybrid infrastructure we are all already creating. What needs to be added to the mix are the business attributes of cost and risk: for each platform choice, a cost calculation and a risk justification for why we made it. Remember, we can run nearly ANY application on any one of the platforms described above, so there must be a clear reason WHY we have done what we have done, and we need to be able to articulate and defend those reasons. And we need to think about service delivery when it spans multiple platforms and can actually traverse from one to another over the course of any given hour, day or week. It’s all a set of calculations!
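Even the traversal case reduces to arithmetic: price each leg of the day’s journey across the fabric and sum it. The sketch below assumes hypothetical per-transaction costs and a made-up daily routing plan, just to show the shape of the calculation.

```python
# Illustrative sketch: a service whose transactions traverse platforms over
# one day. Per-transaction costs and volumes are hypothetical assumptions.

per_txn_cost = {"data center": 0.010, "colo": 0.008, "cloud": 0.012}

# (platform, transactions handled) for each segment of the day
day_plan = [
    ("data center", 40_000),  # overnight batch stays in-house
    ("colo", 25_000),         # business-hours steady state
    ("cloud", 15_000),        # afternoon peak bursts to the cloud
]

total = sum(per_txn_cost[platform] * count for platform, count in day_plan)
print(f"Blended daily cost: ${total:,.2f}")  # -> Blended daily cost: $780.00
```

The same structure extends naturally: attach a risk figure to each leg and you can defend, in business terms, why each transaction landed where it did.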
Put your screwdrivers away and fire up your risk management tools, your financial modelling tools, or even your trusty copy of Excel! This is the time to work through the business metrics rather than the technical details.
Welcome to the era of Mission Critical Computing Fabric!
The post Mission Critical Computing Fabric appeared first on Uptime Institute Blog.