
Interoperability in Multi-Cloud Considerations

What we’re seeing in the industry today is that most of our customers aren’t adopting a single platform for all of their workloads. With so many factors in play (RTO/RPO, application requirements, latency requirements, etc.), it truly is becoming a multi-cloud world.

But with lots of options comes complexity.

In this blog series, we tackle some of the key factors you should consider when making workload placement decisions. This month, we’re talking with Derek DeHaan about interoperability in multi-cloud when deciding the best execution venue for your workloads.

Derek, you spend a lot of time with OneNeck’s customers, helping them weigh their options when deciding what platform best suits their workload. When it comes to interoperability, what should they absolutely consider?

So, when we talk about interoperability in multi-cloud, the most important consideration we hear from our customers is around multi-cloud security. When tying workloads together across multiple platforms, it is critical to understand how the workloads and data can interact and traverse between those platforms in a secure manner. This includes both the data itself and user/administrator access, to ensure your data stays protected.

Another key interoperability consideration is what bare-metal compute requirements your organization has today and whether those can be fulfilled by the new platform of choice. What I mean by that is that customers often have legacy applications and hardware they’ve invested in, either in their own data center or at a colocation provider, and maybe those assets are only a year or so into a 3-year lifecycle. With a hosted private cloud option, if required (especially at OneNeck – I can’t speak for other providers), we can bring that hardware into our data centers and attach it directly to our private cloud, while still giving you access to the hyperscalers over our backbone.

What that gives you is the flexibility to go ahead and finish the usable life on that equipment before you make the full migration to a public cloud. Some of the hyperscale providers do not offer this as an option. So, if you have an appliance or a network device – something that cannot be virtualized or is not end of life – it’s important to know that you do have options to bring that equipment with you in some of the hosted private cloud scenarios and still connect it and use it for the rest of its usable life.

Are platform requirements still a big interoperability consideration?

I would have said they were even more of a concern a year or so ago, before Azure started supporting Linux and other operating systems outside the Microsoft stack. So today it’s probably a little less of a consideration, because the hyperscale public cloud providers and the hosted private cloud providers have realized the need to cover the full gamut of platforms. And what I mean by that is x86, Solaris, Linux, Unix – they’ve all implemented solutions to allow many, if not all, of those operating systems on their platforms.

The same used to be true of hypervisor choice, but the hyperscale providers are going down the path of offering different hypervisors now too, so that’s not as big of a concern as it used to be either.

As a cloud and data center provider, we often help our customers understand the importance of carrier connectivity in their overall cloud strategy. What connectivity advice would you have for organizations looking at moving to a cloud provider?

Carrier connectivity is all about understanding how you’re going to connect to your applications and how your users are going to connect to those applications. The big key here is that connectivity options vary widely, and depending on whether you choose dedicated circuits or VPNs over the internet, both the cost and the user experience can differ significantly.

Hosted private cloud options with private carrier circuits are usually more cost effective than connecting to a large public cloud when you’re going to be moving a lot of data in and out, because you’re going to get hit with those ingress and egress charges. So it’s something to consider before making the decision: hyperscale public cloud may look a lot cheaper up front, but once you have to pull data out of the database servers and access more data for reports, the cost of moving that data out becomes an expense you didn’t calculate at the start. Maybe it looked cheaper initially, but once you see all of your data transfer costs, it really isn’t as cheap as you thought.
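To make that concrete, here’s a minimal back-of-the-envelope sketch in Python. The egress rate, data volume and circuit price are assumptions for illustration only – not any provider’s published pricing – so swap in your own numbers before drawing conclusions.

```python
# Rough comparison of a flat private-circuit cost versus per-GB public cloud
# egress charges. The rates below are illustrative assumptions, not any
# provider's published pricing -- substitute real numbers.

def monthly_egress_cost(gb_out_per_month: float, price_per_gb: float) -> float:
    """Estimated monthly data-transfer-out (egress) charge."""
    return gb_out_per_month * price_per_gb

# Hypothetical inputs for illustration only.
reporting_egress_gb = 20_000         # GB pulled out each month for reports, extracts, backups
assumed_egress_rate = 0.09           # $/GB -- assumed hyperscaler egress rate
assumed_private_circuit = 1_500.00   # $/month -- assumed flat-rate dedicated circuit

egress_cost = monthly_egress_cost(reporting_egress_gb, assumed_egress_rate)
print(f"Estimated public cloud egress: ${egress_cost:,.2f}/month")
print(f"Flat private circuit:          ${assumed_private_circuit:,.2f}/month")
```

Even a rough estimate like this, run against your actual reporting and backup volumes, shows whether per-GB egress or a flat-rate circuit fits your traffic pattern better.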

Obviously, having a plan B is critical in any IT strategy these days, so what should one consider around replication in the cloud?

Bottom line: replication technology is incredibly important. And there are many offerings out there today that are leaps and bounds ahead of what used to be just hardware replication or vendor-specific replication – we’re now in the software-replication space. Software utilities that can go across platforms, regardless of vendor, and can replicate from on-premises, hosted private cloud or into the hyperscale public clouds are always the best option. Some examples are Zerto or Azure Site Recovery, but there are many others.

But it’s important to define your replication strategy if you’re going to do DR, or if you’re going to have multiple environments like dev/test that reside in a different place from production. This just makes sense in today’s world, as you can take advantage of a multi-cloud approach and get economies of scale and cost optimization. So it’s important to understand how you’re going to replicate and move that data between those sites most effectively and at the lowest cost.
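To put rough numbers behind that, here’s a minimal sizing sketch in Python. The 500 GB/day change rate and 15-minute RPO are assumed figures for illustration only; they aren’t tied to any particular replication product, and real planning should use measured change rates and your provider’s actual circuit options.

```python
# Minimal sketch of two replication-planning numbers. The change rate and RPO
# used below are assumptions for illustration; in practice you would measure
# the actual daily change rate of the workloads you plan to replicate.

def steady_state_mbps(changed_gb_per_day: float) -> float:
    """Average bandwidth needed just to keep pace with the daily change rate."""
    return changed_gb_per_day * 8 * 1000 / 86_400  # GB/day -> megabits/second

def data_at_risk_gb(changed_gb_per_day: float, rpo_minutes: float) -> float:
    """Data exposed if replication stalls for one RPO window (flat change rate assumed)."""
    return changed_gb_per_day * rpo_minutes / (24 * 60)

# Hypothetical workload: 500 GB of changed data per day, 15-minute RPO target.
print(f"Sustained bandwidth needed:  {steady_state_mbps(500):.1f} Mbps")
print(f"Data at risk per RPO window: {data_at_risk_gb(500, 15):.1f} GB")
```

The exact math matters less than the relationship it exposes: the change rate and RPO you commit to are what drive the connectivity and data transfer costs discussed above.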

Any final thoughts on interoperability considerations for our readers?

The final thought I would leave you with is that there are many considerations, as I mentioned above, and the importance of each is different for every organization – so I would say test a small set of workloads on the platforms you’re considering, if possible. This lets you validate the interoperability, security, connectivity, etc., and make sure everything works as you expect without negatively affecting the end-user experience.

Want to read some more of Derek’s multi-cloud tips? Check out our conversation on critical security considerations as you build your multi-cloud strategy.

Download our Multi-Cloud Toolkit
