Application Performance Management for Microservice Applications on Kubernetes

The Ultimate Guide to Managing Performance of Business Applications on Kubernetes

There’s a reason everyone is talking about Kubernetes these days. It has become the go-to container orchestration solution for organizations of all sizes as they migrate to microservice application stacks running in managed container environments.

Kubernetes is certainly worthy of the recent excitement it has garnered, but it doesn’t solve every management problem, especially around performance. It’s important to understand what Kubernetes does and doesn’t do, and what specific capabilities DevOps teams require from their tooling to fully manage orchestrated microservice applications and achieve operational excellence.

This eBook examines Kubernetes, the operational issues it addresses, and those it does not. It also examines the modern DevOps process, with a discussion of the management tooling needed to achieve continuous delivery of business services and excellent operational performance. The eBook concludes with a detailed analysis of the capabilities your tooling needs to successfully operate and manage the performance of microservice applications running on Kubernetes.

Kubernetes Basics
(or the A-B-C’s of K-8-S)


Kubernetes (sometimes abbreviated K8s) is a container orchestration tool for microservice application deployment.

It originated as an infrastructure orchestration tool built by Google to help manage container deployment in their hyper-scale environment. Google ultimately released K8s as an open source solution through CNCF (the Cloud-Native Computing Foundation).

Orchestration is just a fancy word that summarizes the basic Kubernetes features:
• Container deployment automation, relieving admins of the need to manually start them
• Instance management - balancing the number of instances of a given container running concurrently to meet application demand
• Service discovery and DNS management for microservice/container load balancing and clustering, helping applications scale under increased request load
• Container distribution management across host servers to spread application load evenly across the host infrastructure (which can help maximize application availability)

Notice there is a critical aspect of operational management missing - application performance management. The whole discipline of application performance visibility and management is not part of the Kubernetes platform.

Why Is Kubernetes Important?

Remember, the goal of DevOps is speed! Orchestration is all about enabling fast and easy changes to production environments so that business applications can rapidly evolve.

The message is clear: speeding up your application delivery cycles adds huge value to your business. Automating container orchestration is a great complement to agile development methods and the microservice architecture.

Modern CI/CD is automating the testing and delivery stages of development - containers and Kubernetes make it much easier to get your code into production and manage resources.

Kubernetes distributions

Many cloud providers have their own versions of Kubernetes (called “distributions”) that add unique enterprise capabilities to the open source Kubernetes version, providing a few distinct advantages:

• Organizations concerned about enterprise readiness get a fully tested and supported version of K8s
• Additional enterprise functionality is included - for example, Red Hat’s OpenShift K8s distribution adds security features and build automation to the mix

For most enterprise use cases, it’s much faster and easier to use a cloud provider’s Kubernetes distribution than to set up the open source version.

A wide variety of Kubernetes distributions are available, designed to run either on local infrastructure or as a hosted service in the cloud.

You can find an up-to-date list of distribution providers in the Kubernetes online documentation.

Container Management is NOT Application Performance Management

Now that we’ve discussed what Kubernetes does, let’s explain what it does not do. Remember, Kubernetes orchestrates the containers that are part of an application. It does not manage application performance or the availability of highly distributed applications. And just as it ignores application performance, Kubernetes doesn’t consider performance when managing infrastructure.

Kubernetes effectively adds a layer of abstraction between the running application (containers) and the actual compute infrastructure. On its own, Kubernetes makes decisions about where containers run, and can move them around abruptly.

Visibility of exactly how your technical stack is deployed, and how service requests are flowing across the microservices is not easily available via Kubernetes, nor is performance data (request rate, errors and duration or latency) of services a native part of Kubernetes.
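As an illustration of the performance signals mentioned above (request rate, errors, and duration), here is a minimal sketch of how they can be computed from raw request records. The sample data is entirely made up for the example:

```python
from statistics import median  # stdlib only; median shown alongside p95

# Hypothetical request records: (timestamp_sec, status_code, duration_ms)
requests = [
    (0.0, 200, 12.0), (0.5, 200, 15.0), (1.0, 500, 250.0),
    (1.5, 200, 11.0), (2.0, 404, 9.0),  (2.5, 200, 14.0),
    (3.0, 200, 13.0), (3.5, 503, 400.0),
]

window_sec = (requests[-1][0] - requests[0][0]) or 1.0
rate = len(requests) / window_sec                       # requests per second
errors = sum(1 for _, status, _ in requests if status >= 500)
error_ratio = errors / len(requests)

durations = sorted(d for *_, d in requests)
# 95th-percentile latency using the nearest-rank method
p95 = durations[max(0, int(round(0.95 * len(durations))) - 1)]

print(f"rate={rate:.2f} req/s, errors={error_ratio:.1%}, "
      f"median={median(durations)} ms, p95={p95} ms")
```

An APM tool performs this kind of aggregation continuously, per service, which is exactly the data Kubernetes itself never collects.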

Operational production monitoring of application performance and health is absolutely not available via Kubernetes.

Let’s look at other aspects of orchestrated containerized application environments that further complicate monitoring.

More Moving Parts - and Complexity

Any microservice application creates a trio of issues:
• Exponentially more individual components
• Constant change in the infrastructure and applications (the application stack)
• Dynamic application components that come and go as demand changes

In a Kubernetes environment, there are many more moving parts than there would be in a traditional application stack.

With the addition of containers - and then orchestration with Kubernetes - each of these management challenges becomes even more difficult.

Every time there is a decoupling of physical deployment from the application functionality, it becomes more difficult to monitor application performance and solve problems. Instead of host servers connected with a physical network, Kubernetes utilizes a cluster of nodes and virtualizes the network, which can be distributed across a mixture of on-premise and cloud-based infrastructure, or even multiple clouds.

With so many different pieces of infrastructure and middleware, as well as the polyglot of languages used to create the microservices, it’s difficult for monitoring tools to distinguish the different needs and behaviors of all these critical components in the application stack. For example, collecting and interpreting monitoring data differs from one platform to the next. What do you do when you have Python, Java, PHP, .NET, application proxies, 4 different databases and a multitude of middleware?

Decoupling of Microservices from Physical Infrastructure

Kubernetes takes control of running the containers that make up the microservices of your application, completely automating their lifecycle management and abstracting the hardware.

Kubernetes will run the requested workloads on any available host/node, using software-defined networks to ensure that those workloads are reachable and load balanced.

Compute resources (memory and CPU) are also abstracted, with each workload having a configured limit for those resources. Because containers are ephemeral, any long-term storage is provisioned through Persistent Volume Claims backed by various storage drivers.
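Both points can be illustrated with a hypothetical fragment of a pod spec (the names, image, and sizes are invented for the example): resource requests/limits cap the abstracted compute, and a Persistent Volume Claim holds state that must outlive the container.

```yaml
# Hypothetical pod spec fragment - illustrative only
containers:
- name: orders-db
  image: example.com/orders-db:2.1
  resources:
    requests:                 # minimum guaranteed resources
      cpu: "250m"
      memory: "256Mi"
    limits:                   # hard ceiling enforced by Kubernetes
      cpu: "500m"
      memory: "512Mi"
  volumeMounts:
  - name: data
    mountPath: /var/lib/data
volumes:
- name: data
  persistentVolumeClaim:
    claimName: orders-data    # bound by whichever storage driver is installed
```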

The already deep level of abstraction may be further compounded by the Kubernetes nodes running on external cloud computing services such as EC2, GCE or Azure.

The high level of disconnect from the application code to the hardware it’s running on makes traditional infrastructure monitoring less critical.

It is considerably more important to understand how the microservices and overarching applications are performing and if they are meeting their desired SLAs. An understanding of the overall health of the Kubernetes backplane is also essential to ensure the highest levels of service for your application.

Service Mapping - A New Layer of Abstraction

As noted earlier in this eBook, one of the main reasons for using an orchestrator like Kubernetes is that it automates most of the work required to deploy containers and establish communications between them. However, Kubernetes on its own can’t guarantee that microservices can communicate and integrate with each other effectively. To do that, you need to directly monitor the services and their interactions.

That is challenging because Kubernetes doesn’t offer a way to automatically map or visualize relationships between microservices. Admins must manually determine which microservices are actually running, where within the cluster they exist, which services depend on other ones and how requests are flowing between services.

They must also be able to determine quickly how a service failure or performance regression could impact other services, while also looking for opportunities to optimize the performance of individual services and the communications between them.

Root-Cause Ambiguity

APM tools exist because middleware-based business applications - first using Java and .NET, then SOA principles, and now microservices and containers - make it difficult to monitor performance, trace user requests, and identify and solve problems.

The more complex the application environment, the harder it becomes for DevOps teams to get the performance visibility and component dependencies needed to effectively manage application performance.

In a Kubernetes environment, determining the root cause of a problem based on surface-level symptoms is even more difficult, because the relationships between different components of the environment are much harder to map and continuously change. For example, a problem in a Kubernetes application might be caused by an issue with physical infrastructure, but it could also result from a configuration mistake or coding problem. Or perhaps the problem lies within the virtual network that allows microservices to communicate with each other.

Of course, when the problem lies within the application code, it’s important to have the deep visibility required to debug actual code issues, even understanding when bad parameters or other inputs are causing application problems. Ultimately, there could be any number of root causes for the issue, ranging from configuration problems in Kubernetes, to an issue with data flows between containers, to a physical hardware failure.

To put it simply, tracing problems in a Kubernetes environment back to their root cause is not feasible in many cases without the help of tools that can automatically parse through the complex web of data and dependencies that compose your cluster and your microservice application’s structure.

Microservice Relationships

You also need to know how your Kubernetes services map to application services, the microservices they are built upon and their physical infrastructure in order to determine how the infrastructure impacts the services’ availability and performance. Kubernetes doesn’t easily reveal all of this information; you need to run multiple kubectl commands to manually build a mapping at a single point in time.

Good luck doing that when there is a production issue that needs to be fixed immediately.
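To make the point concrete, the manual mapping might involve a series of commands like the following sketch; the placeholders and exact flags will vary by cluster, and the resulting picture is stale the moment Kubernetes reschedules anything:

```shell
# Piecing together a point-in-time service map by hand
kubectl get services --all-namespaces      # which services exist
kubectl get endpoints <service>            # which pods back each service
kubectl get pods -o wide                   # which nodes those pods run on
kubectl describe pod <pod>                 # images, labels, restart history
```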

Application Request Mapping and Tracing

The microservices that comprise an application constantly send and receive requests from each other. Effective microservice application monitoring requires your APM tool to detect all the services, as well as the interdependencies between them - and visualize the dynamic relationships (i.e., map them) in real time.

Additionally, to solve problems, you will need exact traces from each individual application request across all the microservices it touches.
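The core idea behind such traces can be sketched in a few lines: every hop of a user request carries the same trace ID, so the request’s path across services can be reassembled afterwards. This is a toy illustration with invented service names, not a real tracer:

```python
import uuid

# Toy illustration: each "service" call records a span tagged with the
# trace ID of the user request it belongs to.
spans = []

def call(service, trace_id=None, parent=None):
    """Record a span for one service handling part of a request."""
    trace_id = trace_id or uuid.uuid4().hex   # a new trace starts at the edge
    spans.append({"trace": trace_id, "service": service, "parent": parent})
    return trace_id

# One user request flowing through three hypothetical microservices:
t = call("frontend")
call("checkout", trace_id=t, parent="frontend")
call("payments", trace_id=t, parent="checkout")

path = " -> ".join(s["service"] for s in spans if s["trace"] == t)
print(path)   # frontend -> checkout -> payments
```

Real tracers propagate this context in request headers between processes; an APM tool collects the spans centrally and rebuilds the exact path of each individual request.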

Performance Optimization Opportunities

In Agile development environments, developers often push new code into production on a daily basis.

How do they know that their code is delivering good response time and not consuming too many resources?

To help with this, the APM solution must work at the speed of DevOps, automatically and immediately recognizing when new code has been deployed - or any changes to the structure of the environment (including infrastructure). It must also make it easy for developers to analyze the efficiency of their code.

This use case calls for granular visibility into user requests, host resources (K8s nodes), and workload patterns. It’s also critical that you have a robust analytics mechanism for all of this data. You cannot accomplish this use case with Kubernetes alone.

Root-Cause Analysis Within the Application, Containers and Orchestration

One critical capability is the ability to identify the root cause of performance issues automatically. It’s not good enough to just be aware of problems within your Kubernetes environment.

You must be able to trace those problems to their exact root cause and fix them in minutes.

Given the extreme complexity of a Kubernetes-based application and the lack of visibility into the environment, identifying the root causes of availability or performance problems is exceptionally challenging to do manually.

When your APM tool understands the relationships between Kubernetes, application services, and infrastructure, it can automatically identify the root cause of issues anywhere within the system.

Integrated Service / Infrastructure Mapping

Given that Kubernetes doesn’t offer full visibility into how services interact with each other, your monitoring tool must be able to map services automatically.

Equally important, it must have the ability to interpret the relationships and dependencies between those services in order to identify problems and understand how one service’s performance will impact that of others.

 

Conclusion
Kubernetes is rapidly becoming the standard orchestration platform in enterprises, augmenting and even completing the transition to DevOps, but it does not include application performance visibility or management. Furthermore, Kubernetes introduces a new layer of abstraction into the datacenter, creating observability challenges that make it more difficult to manage application availability and deliver the performance SLAs your business demands.

 

To properly manage business critical applications on Kubernetes, Instana recommends an APM tool with these key capabilities:
• Full-stack visibility (including infrastructure, code, microservices, request traces, middleware, containers and Kubernetes) of all technology layers
• Continuous discovery of the full application stack to automatically adjust to changes in the environment
• Dependency mapping and correlation between the layers of technology
• Automatic root cause determination and assistance for the DevOps teams to troubleshoot application issues.


THE TQ CULTURE

Information Technology, Simplified

We are a company of self-motivated individuals who share a common goal and purpose.

Each individual in the company has a high degree of autonomy and we are managed and remunerated based on outcomes.