A better deployment method from development to production
Today, most applications, both open source and commercial, are distributed as Docker container images or Helm charts, and can be deployed to a Kubernetes® service with a single command.
Kubernetes® intelligently distributes containers and services across different nodes. Need to separate development, testing, acceptance and production? Simply apply the same configuration file to each cluster. With its declarative syntax, you describe the desired state, and Kubernetes® takes care of the rest.
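The declarative approach can be illustrated with a minimal Deployment manifest. The names (`my-app`, the `nginx` image) are purely illustrative, not part of any OVH offering:

```yaml
# Hypothetical example: a manifest describing the desired state —
# three replicas of a web container. Kubernetes® works to make the
# cluster match this description.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

The same file can be applied, unchanged, to a development cluster and a production cluster with `kubectl apply -f deployment.yaml`.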
Scalability and High Availability
Exposing a service on multiple worker nodes takes just a few commands: Kubernetes® launches the containers and configures the load balancer for you. You can also define health conditions for each service, and Kubernetes® will relaunch any pods and containers that do not meet them. Your nodes can also be monitored. Your services enjoy the high availability of the OVH Infrastructure-as-a-Service (IaaS) solutions, and you can add new compute nodes instantly.
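As a sketch of both ideas — external exposure and health conditions — the fragment below (with illustrative names and paths, not taken from any specific product) pairs a LoadBalancer Service with a pod whose container declares a liveness probe:

```yaml
# Hypothetical sketch: a Service of type LoadBalancer routes external
# traffic to matching pods on any worker node.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
---
# The liveness probe is the "health condition": if it fails repeatedly,
# Kubernetes® restarts the container automatically.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 5
```

In practice the pod template (including the probe) would usually live inside a Deployment rather than a standalone Pod, so that restarts and rescheduling are fully automatic.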
Reversibility, multi and hybrid cloud
Thanks to the CNCF conformance programme, many software vendors and cloud providers guarantee full reversibility of your data. Kubernetes® has in fact become the standard for multi-cloud (i.e. spanning different cloud providers or datacentres) and hybrid (i.e. workloads split between on-premises infrastructure and the cloud) scenarios. The same configuration can be transferred from one provider to another in no time.
Architecture for microservices and highly distributed tasks
The responsiveness and low overhead of Kubernetes® allow you to reduce the underlying infrastructure needed for microservices.
Compatible with legacy applications (stateful workloads are supported through persistent volumes), Kubernetes® intelligently distributes tasks based on the expected RAM and CPU usage.
You can also define thresholds that govern the automatic creation or destruction of resources, and combine monthly and hourly IaaS resources for optimised billing.
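The two mechanisms above map to standard Kubernetes objects: per-container `resources.requests` guide scheduling by expected RAM and CPU usage, and a HorizontalPodAutoscaler scales the workload around a threshold. A hedged sketch, with illustrative names and values:

```yaml
# Hypothetical example: scale the "my-app" Deployment between 2 and 10
# replicas, targeting 70% average CPU utilisation. The percentage is
# measured against each container's declared resources.requests.cpu.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```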
Transparent and controlled upgrades
Updating your application layers is simple, thanks to the different abstraction levels provided by Kubernetes®. By choosing the “rolling upgrade” option, you can roll out new versions transparently for your end users. Our team uses the same approach to update the components of your Kubernetes® clusters, patching minor bugs and security vulnerabilities.
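A rolling upgrade is configured in the Deployment's update strategy. The fragment below (values chosen for illustration) replaces pods gradually, so the service stays available throughout:

```yaml
# Illustrative fragment of a Deployment spec: during a rollout,
# Kubernetes® never removes a pod before its replacement is ready.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep every existing pod until a new one is ready
      maxSurge: 1         # create at most one extra pod during the rollout
```

Changing the container image (for example with `kubectl set image deployment/my-app web=nginx:1.26`) then triggers the rollout; `kubectl rollout status` watches its progress, and `kubectl rollout undo` reverts it if needed.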
...and much more!
Even if we manage the Kubernetes® components, the standard API and the many compatible tools provide you with the freedom to enhance your Kubernetes® experience!
"At Saagie, we publish an orchestrator for datalabs that uses Kubernetes® in an advanced way. We have tested Kubernetes® managed services from other providers. OVH's Managed Kubernetes® solution is based on standards, which has given us a very good portability experience!"
Youen Chéné, CTO Saagie.
"We had already tried to set up our Kubernetes® cluster internally, but we couldn't do it completely, and we found it too complex to install and maintain. OVH's Managed Kubernetes® offer has allowed us to migrate our applications to Kubernetes® without having to worry about installing and maintaining the platform. The beta phase of the Kube® project went very smoothly, thanks to the presence and reactivity of the OVH teams."
Vincent Davy, DevOps at ITK.
"OVH's Managed Kubernetes® offer provides all the performance necessary for the smooth running of our services, and we are sure to have no surprises on the invoice."
Jérôme Balducci, CTO at Whoz.
Frequently asked questions about Kubernetes®
What is a container orchestrator?
We often read that containers, especially Docker, are a simple way to port software from one infrastructure to another. While it’s true that containers are an excellent way to ensure that a software building block runs identically from one server to another, things are more complex in practice: any solid IT project is made up of several containers, hosted on multiple servers. This is where the orchestrator comes in. It intelligently distributes the containers the way you want across the servers you have chosen, and it monitors the health of the containers and of the underlying components needed to run them.
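"The way you want" can be expressed declaratively. As one hedged example (the label and constraint values are illustrative), a topology spread constraint in a pod template asks the scheduler to spread replicas evenly across the cluster's nodes:

```yaml
# Hypothetical fragment of a pod template spec: keep the number of
# "my-app" pods per node as balanced as possible (skew of at most 1),
# so losing one server takes down only a fraction of the replicas.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: my-app
```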