
Virtual Machine or Container…or Hypervisor? Read this, and you can make the call


By Richard Arneson

Containers have been around for years, but we’ll leave their history for another blog. Hypervisors, if you recall, are software that manages virtual machines (VMs), each of which can run its own programs while appearing to have the host hardware’s memory, processor, and other resources all to itself. Hypervisors are, basically, a platform for VMs. But don’t be surprised to hear hypervisor and VM used interchangeably; they shouldn’t be, but it’s not uncommon. Just remember―hypervisors are the software that run VMs.

They’re both abstractions, but at different layers

Hypervisors (VMs)―physical layer

An abstraction is something that’s pulled, or extracted, from something else. Hypervisors abstract physical resources, such as those listed above (memory, processor, and others), from the host hardware, and those resources can be carved up for each virtual machine. Because the hypervisor abstracts resources at the physical level, it can, for example, turn a single server into many, allowing multiple VMs to run on one machine. Each VM runs its own OS and applications, which can consume loads of resources and make for slow boot times.

Containers―application layer

Containers are, again, an abstraction, but they pull from the application layer, packaging code and its related dependencies into one happy family. What’s another word for this packaging? Yep, containerization.

What are the benefits of containers over VMs?

Application Development

There are several benefits to containers, but we’ll start with the differentiator that provides the biggest bang for the buck. Prior to containers, software couldn’t be counted on to run reliably when moved to a different computing environment. Let’s say DevOps wants to move an application to a test environment. It might run fine, but it’s not uncommon for it to behave―here’s a technical term―squirrelly. Maybe tests are conducted on Red Hat and production runs on, say, Debian. Or the two environments have different versions of Python. Yep, squirrelly results.
In short, containers make it far easier for software developers by enabling them to know their creations will run, regardless of where they’ve been deployed.
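To make that concrete, a container image bakes the base OS and language runtime into the artifact itself, so dev, test, and production can’t drift apart. Here’s a minimal, hypothetical Dockerfile sketch (the base image, Python version, and file names are illustrative, not from any particular GDT project):

```dockerfile
# Pin the base OS userland and the exact Python version, so the
# Red Hat-vs-Debian and mismatched-Python problems above disappear.
FROM python:3.12-slim-bookworm

WORKDIR /app

# Install the exact dependency versions recorded for the app.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code and define how the container starts.
COPY . .
CMD ["python", "app.py"]
```

Build the image once, and that same image runs identically on a developer’s laptop, a test server, or production.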


Resource Efficiency

Containers take up far less space than VMs, which, again, each run their own OS. A single host can also run more applications in containers than it can in VMs. Make no mistake, VMs are great, but when heavy scaling is required, you may find yourself dedicating resources that are, basically, just managing a spate of operating systems.
And consider moving workloads between vendors with VMs. It’s not as simple as dragging an application from one hypervisor to another. A vSphere-based VM can’t have its workloads moved to, say, Hyper-V.


Microservices

Microservices, which run naturally in containers, break applications down into smaller, bite-sized chunks. This allows different teams to work independently on different parts of an application. The result? Faster software development.

No, containers don’t mark the end of VMs and Hypervisors

In fact, containers and VMs aren’t mutually exclusive; they can co-exist beautifully. As an example, a particular application may need to talk to a database running on a VM. Containers can easily accommodate that scenario.
Sure, containers are efficient, self-contained packages that allow applications to run regardless of where they’ve been deployed. But containers aren’t the best option for every situation. And without the in-house expertise to understand the difference, you’ll probably be left wondering which―VMs or containers―will be most beneficial to your organization. Again, it might not be an either/or situation. And because containers on a host share a single OS, they can, if you don’t have security expertise, leave you more open to security breaches than VMs would. Your best bet? Talk to experts like those at GDT.

Please, use your resources

You won’t find better networking resources than GDT’s talented solutions architects and engineers. They hold the highest technical certifications in the industry and have designed and implemented complex networking solutions for some of the largest enterprises and service providers in the world. They’d love to hear from you.

