Behind the Software Q&A with Midokura CEO Dan Mihai Dumitriu
Any business with experience building a network infrastructure understands the complexities inherent in a physical network. In addition to being time-consuming to adopt and maintain, physical networks tend to be inflexible and have numerous potential points of failure as a result of being linked to the underlying hardware. Network virtualization platforms eliminate those weak points — decoupling the infrastructure from the physical framework — while also boosting security, reliability and flexibility.
MidoNet, the flagship product from vendor Midokura, is a network virtualization system with a distributed architecture that simplifies and stabilizes your system network. In this exclusive Q&A with Midokura CEO Dan Dumitriu, we talk about MidoNet’s value as a network virtualization platform, the importance of scalability in network virtualization and the company’s recent focus on expanding its partnerships.
To begin, I’d love to hear more about how Midokura got its start. Why did you and co-founder Tatsuya Kato feel that pre-existing software and hardware options for building and maintaining networks were inadequate?
Our team was trying to put together a software stack, basically from available open source components. We were really unable to find any solution for the networking side, so we set out to build our own. Over the course of the next year and a half or so, we realized that virtual networking was the bigger opportunity, so we shifted the company to a software company focusing on network virtualization. It was really interesting because the need that we solved was our own need.
For those of our readers unfamiliar with the topic, can you describe exactly how a network virtualization system operates?
The idea of network virtualization is to create an abstraction between the applications that are running in some kind of virtualized computing environment – either a virtual machine or a container like Docker – and the physical network that transports packets around. The way it works is by using a virtual switch that’s installed in the hypervisor host (a privileged piece of software that’s running on each host), and there’s also a centralized control system that distributes the configuration information to all of the agents that are running on the computers. The agents then use that configuration, which is programmed centrally, usually via the orchestration system (like OpenStack).
These agents handle the traffic in such a way that, to the application running inside the virtual machine, it looks as though it has its own private network. The configuration manipulates the traffic to create that illusion. It also creates perfect isolation between multiple tenants by encapsulating the IP traffic between servers, so that even though the traffic shares the same single network, it’s able to go from host to host without interfering in any way. These virtual switches are actually steering traffic in the right direction.
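To make the encapsulation idea concrete, here is a minimal sketch in Python of what an overlay agent might do with an outbound tenant frame. It assumes a VXLAN-style tunnel header carrying a per-tenant virtual network identifier (VNI); the interview doesn’t specify MidoNet’s actual agent design or wire format, so the header layout, names and addresses below are illustrative assumptions only.

```python
import socket
import struct

VXLAN_UDP_PORT = 4789   # standard VXLAN port, used here only as an assumption
FLAG_VNI_VALID = 0x08   # "I" flag: marks the VNI field as valid

def encapsulate(tenant_frame: bytes, vni: int) -> bytes:
    """Wrap a tenant's Ethernet frame in a VXLAN-style 8-byte header.

    Layout: flags (1 byte), 3 reserved bytes, 24-bit VNI, 1 reserved byte.
    The receiving hypervisor uses the VNI to keep tenants apart, even
    though every tenant's traffic crosses the same physical IP network.
    """
    header = struct.pack("!B3xI", FLAG_VNI_VALID, vni << 8)
    return header + tenant_frame

def send_to_peer_host(tenant_frame: bytes, vni: int, peer_ip: str) -> None:
    """Tunnel the frame to the hypervisor hosting the destination VM.

    A real agent would look the peer host up from the centrally
    distributed configuration (which VM lives on which host); here
    the peer address is simply passed in.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(encapsulate(tenant_frame, vni), (peer_ip, VXLAN_UDP_PORT))

# Two tenants can even reuse the same private IP addresses without clashing,
# because their frames carry different VNIs across the shared underlay.
send_to_peer_host(b"\x00" * 64, vni=1001, peer_ip="192.0.2.10")  # tenant A
send_to_peer_host(b"\x00" * 64, vni=1002, peer_ip="192.0.2.10")  # tenant B
```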
You mentioned OpenStack, which I know MidoNet is able to integrate with. Can you elaborate on why you decided to work with an open-source platform and allow that capability for users?
In a sense OpenStack was actually good timing for us, because around the end of 2010 [when we started developing MidoNet], we also got started with the OpenStack project. It was quite early at the time, but the growth in the project, the community and the market has been tremendous — to the extent that OpenStack has essentially eclipsed all other open source cloud management platforms. We had tried to keep a platform-neutral approach for a while, which seemed sensible before there was a clear winner in the market; now that OpenStack is that clear winner, we’re all over it.
Your software platform co-exists in a market with a number of legacy vendors. How does MidoNet stand out from other network virtualization systems offered by big-name competitors like VMware and HP?
We are an overlay network — a network virtualization overlay solution — which is similar to the VMware NSX product, but dissimilar from products by vendors like HP or Cisco that generally focus on more traditional physical network fabric management. Those products are not meant for doing the same thing [that MidoNet does]; our solution is implemented over the top of the IP network.
“The unique aspect of our product is that it…[has] no single point of failure.”
I think the unique aspect of our product is that it is designed from the ground up to be completely distributed and decentralized, with no single point of failure and no bottlenecks. We consciously made the decision from the beginning not to use any kind of virtual appliance or router or other type of network device in a VM or physical device model, because those create bottlenecks. We went to great pains to make sure that all of our virtual network constructs are fully distributed and implemented only by the agents on the edge hosts. I think that is a very compelling differentiator of our solution.
A big theme throughout your website is scalability – why is scalability such an important component of building a network in today’s world? How are MidoNet users able to employ and benefit from the system’s scalability?
This type of system is intended to be scalable, because it’s typically going to be used as part of an infrastructure cloud – whether that’s a public cloud that’s shared among multiple tenants or a private cloud inside the enterprise.
I think there’s a misunderstanding of what multi-tenancy is all about. Some people assume that if the cloud is run within only one organization, it’s not multi-tenant. In that sense you can say that, yes, if it’s all my company’s work, they’re not going to try and steal data from each other or something like that. But they can still interfere in a non-malicious sense. The thing to do is to segment the application space [in a way that is] more fine-grained than what was possible in the past. In the past, the cloud wasn’t powerful enough to give each application its own isolated compute, networking and storage, but now it is. If you can do it, why not do it? It only makes the system safer and more robust.
Taking this all into account, scalability is very important because we’re going to have an explosion of the number of micro-segments in the software-defined data center. We have to scale along multiple axes. One is the total number of servers – which could range from fairly small to very large, depending on the enterprise or the provider we are talking about. Another is the number of isolated virtual infrastructures and virtual data centers – VMs, virtual networks and virtual storage units. Those could be arbitrarily large, depending on how fine-grained the isolation is.
Could you provide a hypothetical situation where an enterprise would benefit from deploying MidoNet, and how that company would go about transitioning from a physical network structure to a virtual one?
I think there are a couple of initial conditions that we could consider. One is where the applications are already running in virtualized computing environments, which is very likely in a lot of cases. Or maybe they’re running on bare metal right now, but they’re moving into VMs or containers. Let’s take a look at an example of a configuration issue when you have shared infrastructure. In the traditional network environment, we have several different types of physical devices. We have Layer 2 switching; we have routing; we have firewalls; we have load balancing. Even if there is some isolation using E-LAN [Ethernet Virtual Private LAN] at Layer 2, that’s limited because the number of E-LANs is inherently limited. Typically these other types of devices — like the load balancers, firewalls and routers — are shared. In other words, we have an infrastructure with 10 applications, and they are all actually sharing the same physical firewall devices and load balancers. When there is a new application deployed, or if an application needs to be changed somehow, then there is a shared configuration that needs to be modified — and that’s an opportunity for error. Despite the best efforts of infrastructure and network operators to be careful, this happens a lot.
What happens in moving this into a virtualized environment with virtualized networking is that each application can have its own firewall, its own load balancer, its own router. The configuration is completely distinct because we separate these into different virtual tenants, and they can’t possibly screw each other up.
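As a hedged illustration of that per-application isolation, the sketch below uses the openstacksdk Python library to give one application its own network, router and security group (playing the firewall role) through OpenStack Neutron; a MidoNet-backed cloud would realize these as distributed virtual devices. The cloud profile name, application name and addressing are assumptions for the example, and a per-application load balancer (via Octavia) is omitted for brevity.

```python
# Assumes an OpenStack cloud reachable through a clouds.yaml entry named
# "mycloud" and the openstacksdk package installed (pip install openstacksdk).
import openstack

conn = openstack.connect(cloud="mycloud")

APP = "billing-app"  # hypothetical application name

# A private network and subnet used only by this application; the CIDR
# can overlap with other tenants' ranges because the overlay isolates them.
net = conn.network.create_network(name=f"{APP}-net")
subnet = conn.network.create_subnet(
    name=f"{APP}-subnet",
    network_id=net.id,
    ip_version=4,
    cidr="10.20.0.0/24",
)

# The application's own virtual router rather than a shared physical one.
router = conn.network.create_router(name=f"{APP}-router")
conn.network.add_interface_to_router(router, subnet_id=subnet.id)

# A per-application security group plays the firewall role: changing these
# rules cannot disturb any other application's configuration or traffic.
fw = conn.network.create_security_group(name=f"{APP}-fw")
conn.network.create_security_group_rule(
    security_group_id=fw.id,
    direction="ingress",
    protocol="tcp",
    port_range_min=443,
    port_range_max=443,
    remote_ip_prefix="0.0.0.0/0",
)

print(f"Isolated virtual network stack created for {APP}: network {net.id}")
```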
Another scenario is if the traffic on the network itself is interfering in terms of performance. Or maybe you have an address conflict between multiple applications, where the traffic from one application could go to an unintended server by mistake. None of that is possible in the virtualized network environment. Virtualization is important, particularly if you’re deploying a private cloud infrastructure, and marrying it with network virtualization is the most natural thing to do.
Zetta.io was recently announced as a client of MidoNet, and there’s also been talk about Midokura’s developing partnerships with Dell, Cumulus and Fujitsu. How do obtaining these partnerships and securing a buzz-worthy client influence Midokura’s company mission?
We want to see our customers succeed, so we’re doing everything we can to help them implement and scale their clouds. The partnerships with Dell, Cumulus and Fujitsu are really helpful as a channel for us to multiply our reach in the market. These partnerships are all different: Dell is a reseller channel relationship, and Cumulus is a technical partnership where we do lead sharing in the field because our products are so complementary. Fujitsu is both a product partnership — integrating MidoNet with their cloud platform, which is also based on OpenStack — as well as a global sales partnership.
Virtualization is very much an expanding area of the software industry – do you think virtualization technologies will continue to gain traction across the board? What types of software do you think would most benefit from widespread adoption of virtualization?
I think virtualization of some sort will keep growing. I think there’s been a pendulum swing to some degree between the types of virtualization — whether it’s based on operating system isolation or full machine virtualization — and throughout the past decade, this has been swinging back and forth.
Recently the most talked-about technology is Docker, which is not full machine virtualization but a way to use an operating system container to package up applications with all of their dependencies other than the operating system. Docker is a middle ground between just running apps on a shared server and full machine virtualization like you get on Amazon or Rackspace. I think the really interesting aspect is that it makes it easier for the application owners to deploy their application with everything that they need — all of the dependencies included. I think that will continue to grow and gain market share, because it makes a lot of sense and it’s so easy to deploy.
Returning to Midokura’s role in the industry, what is your company’s strategy for continuing to expand? Are there any new functions you’re currently working on or would like to develop down the line?
“The well-known vendors do too many things…whereas we are focused on solving one problem.”
Our strategy to expand is through partnerships, and that includes technology partnerships, product integration partnerships and sales partnerships. This segment is populated with well-known vendors, but I would say that the well-known vendors sometimes have an Achilles’ heel, which is that either they’re too tied to their traditional businesses or they move too slowly. They do too many things and they can’t really focus [on one technology], whereas we are focused on solving one problem. I think that enables us to come up with the right solution and react better. Our weakness is sales power. We just don’t have as much sales power because we are very small. That’s where the channel partnerships and sales partnerships come in.
We’re working on several new things. There’s nothing I can really talk about right now, but essentially we are making our integration with OpenStack better and deeper, which will make it even easier for users to consume our technology. We will also be adding more capabilities to our virtual network functions, so that it will be even easier for the user to deploy the same familiar constructs they are used to in the physical world — just in a much more easy-to-operate way.
Be sure to check out all of our resources on IT management needs like network virtualization — plus tons of other great content on top software reviews, implementation advice, top features and other best practices — by visiting the Business-Software.com blog homepage.