The world is set to burn. Yes, that sounds like hysterical hyperbole, but recent reports have shown that the Earth’s temperature is likely to rise 1.5 degrees C above pre-industrial levels by 2027 – significantly sooner than humanity had hoped. We no longer have the time or the luxury to treat the effects of man-made climate change as a hypothesis – we need to act now. That being the case, and given our continued and growing demand for energy-hungry data centers, a push towards Net Zero – carbon neutrality across the whole of an economy – is vital.
In Part 1 of this article, we spoke to Paul Mackay, Cloud Director, EMEA at Cloudera, to ask whether the UK government’s goal of achieving Net Zero by 2050 was feasible – and if it was, what needed to happen to get there.
In Part 2, Paul explained the role virtualization could play in tackling the energy consumption and heat loss of even relatively modern data centers.
And at the end of Part 2, Paul mentioned that however much virtualization could help, it didn’t constitute a complete solution, and that some level of hybrid provision would probably always be necessary.
The need for hybridization.
We asked him if there was a generalizable prescription for how this hybridization would work, or whether it was a mix that would have to be arrived at individual company by individual company.
PM:
The reality is that there will always have to be some things on-prem, from either a security or a governance perspective, which is very relevant in the public sector. Then there are the performance economics, right? Cloud isn’t always cheaper; it can be more expensive. Many organizations are trying to figure out how they can move as much as possible to the cloud while still maintaining some environments on-prem.
The problem with that is that you very quickly end up building silos.
Hybridization is the idea that you have a common operating layer or consistency that allows you to interact with those two locations, so you can use a single set of skills to manage them, but also move between them.
Some organizations will be able to move more to the public cloud, because they exist in a less stringent sector or vertical, whereas others will have to keep some storage on-prem. I think the one true thing we’re seeing at the moment is that you have to have this ability to move between them. We know that a huge amount of repatriation is going on: organizations move more content and data to the public cloud, then realize it’s five or six times as expensive and they can’t afford it, so they need to move it back.
Giving yourself a technology platform that allows you to do that is becoming more and more critical. But it is different in each organization. There’s no golden rule that works for all; you just have to find the flexibility, I believe.
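To make that idea of a common operating layer a little more concrete, here is a minimal sketch of the principle in Python – our illustration rather than anything Cloudera-specific – assuming an on-prem object store that speaks the S3 API, and using made-up endpoint, bucket, and object names. The point is that the same client code, and therefore the same skills, work in both locations, and repatriating data is just the same call with the arguments reversed.

```python
import boto3

# One client-building routine for both locations. The endpoint URL and the
# bucket/object names below are illustrative placeholders, not real systems;
# credentials are assumed to come from the normal environment configuration.
LOCATIONS = {
    "on_prem": {"endpoint_url": "https://objects.internal.example.com"},
    "public_cloud": {"endpoint_url": None},  # None -> the default AWS S3 endpoint
}


def storage_client(location: str):
    """Return an S3-compatible client for either the on-prem store or the cloud."""
    return boto3.client("s3", endpoint_url=LOCATIONS[location]["endpoint_url"])


def copy_between(src_location: str, dst_location: str, bucket: str, key: str) -> None:
    """Copy one object between locations using the same API in both directions."""
    src = storage_client(src_location)
    dst = storage_client(dst_location)
    body = src.get_object(Bucket=bucket, Key=key)["Body"].read()
    dst.put_object(Bucket=bucket, Key=key, Body=body)


# Moving a dataset out to the cloud:
#   copy_between("on_prem", "public_cloud", "analytics-data", "reports/2023.parquet")
# Repatriating it when the bills arrive is the same call, reversed:
#   copy_between("public_cloud", "on_prem", "analytics-data", "reports/2023.parquet")
```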
The legacy question.
THQ:
We expected the answer to vary from organization to organization, but wondered whether there was some benchmark or baseline.
PM:
There are certain things that are, by default, more cloud native. So as organizations move towards more SaaS models, or PaaS models, the default is that you’re starting to use more and more cloud services. But while you can’t move everything to a SaaS or a PaaS model, the things you can’t move will tend to be legacy. If I ran a brand new company today, everything would be either in a SaaS model or a high-end PaaS model. Organizations that unfortunately carry legacy debt – the mainframes, the systems running mission-critical systems of record – are limited as to how much they can move.
A highway to hyperscale.
THQ:
How does moving to one of the hyperscalers drive down the carbon footprints of businesses and get us closer to Net Zero? And is it a case of offloading the issue onto organizations which have the scale to deal with it more ecologically?
PM:
I think it is, yes. The hyperscalers are on an absolute mission to be carbon-neutral. Google, for instance, as the last player into the hyperscaler space, has been able to build things from the ground up in a way that’s maybe more in line with that, whereas others, like AWS, are having to change things as they go. Even so, AWS is aiming to be powered by 100% renewable energy by 2025, and compared with a traditional data center under the streets of London, they say they expect to reduce carbon footprints by 70-80%, which is incredible. And it’s done by scale, to your point.
The fact that they have these huge data centers means they have huge numbers of customers adopting them. That means they have to be thinking about how they’re cooling these things, how they’re powering them. How are they recycling air?
So, yeah, 100%, the hyperscalers can make an impact on Net Zero goals for those workloads that you can move there. Cloud is a huge pillar, so you’re going to drive huge efficiencies there. But also, if I think about the things I can’t move, the stuff I have left on-prem, how do I make that look and feel more like a cloud? How do I use virtualization and containers? How do I use automation and orchestration, so that I’m delivering as much efficiency as possible? How am I making sure that the new hardware I bring in on my five-year cycle is meeting the goals I have?
They’re hell-bent on it, the hyperscalers. And it’s really good to see them thinking about how to adopt and use some of these technologies – because it shows that it can be done. Getting to Net Zero can be done – especially if the hyperscalers lead the way.
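As a footnote to Paul’s point about making on-prem infrastructure look and feel more like a cloud: the sketch below is our illustration of one way that is commonly done, assuming both environments run Kubernetes and using made-up cluster context and image names. The same workload definition is deployed unchanged to either an on-prem or a cloud-managed cluster, which is what lets an orchestrator pack workloads onto fewer, better-utilized – and therefore less power-hungry – servers.

```python
from kubernetes import client, config


def build_deployment(name: str, image: str, replicas: int) -> client.V1Deployment:
    """Describe a workload once; the same object is valid in any Kubernetes cluster."""
    container = client.V1Container(name=name, image=image)
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=template,
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name),
        spec=spec,
    )


def deploy(context_name: str, deployment: client.V1Deployment) -> None:
    """Deploy to whichever cluster the named kubeconfig context points at."""
    config.load_kube_config(context=context_name)  # on-prem or cloud credentials
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)


if __name__ == "__main__":
    # "reporting-batch", the image, and the context names are hypothetical.
    workload = build_deployment("reporting-batch", "registry.example.com/reporting:1.4", replicas=2)
    deploy("on-prem-cluster", workload)   # run it on the estate you already own...
    # deploy("cloud-cluster", workload)   # ...or send it to a hyperscaler, unchanged
```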