
What is a Service Mesh?

A service mesh is a configurable infrastructure layer that makes communication between microservice applications possible, structured, and observable. A service mesh also monitors and manages network traffic between microservices, redirecting, granting, or limiting access as needed to optimize and protect the system. The benefits of using a service mesh include improved observability, security, and granular control over network traffic. It can also simplify application management and make it easier to add or remove microservices from an application without impacting the rest of the system. 

A service mesh refers to the way that software code from cloud-hosted applications is woven together across different levels of the web server in integrated layers. Rather than running in an isolated runtime at the top layer of a web server stack, cloud-hosted application code can be built with APIs that facilitate calls to other software-driven services. Those services may be available at the operating system, web server, network, or data center level. A service mesh increases the potential functionality of software applications by extending interoperable communication between infrastructure elements in production.  

A service mesh weaves together thousands of microservices across VMs in an elastic cloud data center through automated, cross-channel communication between running applications. Dedicated service-to-service communication functionality is required by cloud orchestration, load balancing, resource discovery, SDN routing, API communication, database synchronization, and script optimization applications across all levels of data center operations. A service mesh can also supply data analytics and traffic metrics for multi-tiered network architectures spanning millions of multi-tenant rack servers at a time. 
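For a concrete sense of how a workload joins such a mesh, here is a minimal sketch assuming a Kubernetes cluster with an Istio-style sidecar model; the namespace name is purely illustrative. Labelling the namespace tells the mesh to inject a proxy into every pod scheduled there, and that proxy then carries the pod's service-to-service traffic without changes to application code:

```yaml
# Hypothetical example: opt a namespace into the mesh so that each pod
# created in it automatically receives a sidecar proxy (Istio model).
apiVersion: v1
kind: Namespace
metadata:
  name: shop                     # illustrative namespace name
  labels:
    istio-injection: enabled     # tells the mesh to inject sidecar proxies
```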

Benefits of a Service Mesh

A service mesh is focused on connecting services, securing communication between them, and monitoring their performance in real time. It also provides a control plane, enabling policies and configurations to be updated at once, in real time, for every data plane within the mesh. Many benefits result from using a service mesh, but here are a few of the highlights: 

  • Increased interoperability: A service mesh extends the functionality of SDN routing features for integrated microservice environments in support of web, mobile, and SaaS application code. 
  • Enhanced microservice discovery: A service mesh improves network configuration and management through better microservice discovery. 
  • Detailed real-time monitoring and analytics of network activity: A service mesh can increase observability by instrumenting backend processes and the traffic that flows between them, providing more detailed real-time monitoring and analytics of network activity. 
  • Powerful automation of web and mobile scripts: Developers can script service mesh functionality through YAML files or utilities like Vagrant, Jenkins, Puppet, Chef, etc., to build powerful automation of web and mobile scripts at scale. This type of architecture is required to support complex SaaS applications in enterprise production environments. A service mesh provides coordination for thousands or millions of containers running simultaneously in the cloud. Since Kubernetes has become the standard for containerized applications, a service mesh works well with Kubernetes deployments. 
  • Increased security: A service mesh enhances security in many ways and provides a control plane that makes it easy to implement policies on a global scale. Because the service mesh provides centralized control for applications, security teams can deploy policies across all microservices at once without redeploying entire applications, ensuring that every app within the company is protected by the same security policies. A service mesh also enables the authentication of end-user credentials; if desired, the authentication process can be moved out of individual applications and consolidated so that authentication and authorization are managed at the company level. 
  • Safe rollout of deployments: A service mesh provides the option of blue/green deployments to enable application updates without service interruptions, as sketched in the example after this list. For more information on blue/green deployments, see our blog on Kubernetes deployments. 
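To make the blue/green point concrete, here is a hedged sketch assuming an Istio-based mesh and a hypothetical reviews service with a blue (current) and a green (new) Deployment; the hostnames, subset labels, and weights are illustrative. The DestinationRule names the two versions and the VirtualService splits traffic between them, so the share sent to green can be raised gradually or rolled back instantly:

```yaml
# Hypothetical example: weighted blue/green traffic split for a service
# in an Istio-based mesh. Names and weights are illustrative only.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews              # Kubernetes Service fronting both versions
  subsets:
    - name: blue
      labels:
        version: blue        # pod label on the current Deployment
    - name: green
      labels:
        version: green       # pod label on the new Deployment
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: blue
          weight: 90         # keep most traffic on the current version
        - destination:
            host: reviews
            subset: green
          weight: 10         # shift a small share to the new version
```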

How does a Service Mesh work?

A service mesh works through discovery and routing agents that are installed on every VM instance or node in an elastic web server network to register running microservices by IP address. A central registry is used for configuring, managing, and administering all of the microservices that run simultaneously on the network. The service mesh can be referenced by parallel applications operating at the various layers of a web server, data center, or application to extend interoperable functionality through data analytics and network monitoring. This leads to increased data center automation at the level of IP routing, SDN definitions, firewall settings, filters, rules, and cloud load balancing.  
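As one hypothetical illustration of that registry idea, assuming an Istio-based mesh, a dependency that runs outside the mesh can be added to the mesh's service registry with a ServiceEntry so that the proxies can route, secure, and observe calls to it; the hostname below is invented:

```yaml
# Hypothetical example: register an external API in the mesh's service
# registry so in-mesh workloads can reach it through their proxies.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-payments-api
spec:
  hosts:
    - payments.example.com   # illustrative external hostname
  location: MESH_EXTERNAL    # the service runs outside the mesh
  resolution: DNS
  ports:
    - number: 443
      name: https
      protocol: TLS
```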

API connections can reference the service mesh for definitions of where to discover running applications and microservice features for data transfers or required processing activity. Elastic platforms that scale automatically with Kubernetes commonly use Istio, which builds its service registry and configuration management on top of the Kubernetes API, for microservice discovery. Elastic web server platforms like AWS EC2 and Kubernetes utilize the service mesh for managing multiple copies of cloud applications in simultaneous runtimes while synchronizing changes to the primary database and storage information. A service mesh permits the application layer to communicate with the web server, internet, and data center network resources through APIs, or vice versa, depending on the microservice or code requirements. 

Service Mesh architecture

A service mesh is based on an abstraction layer that is installed across VMs or containers in a cloud data center. A proxy is deployed on every VM or node and communicates with a central administration instance that runs the data center orchestration. Service mesh solutions like VMware NSX and Istio rely on Envoy to create the data plane at the node level. Envoy manages information related to the running microservices, their assigned IP addresses, HTTPS encryption, active database formats, etc., for every VM or node. With NSX this includes distributed firewall integration at the level of the hypervisor. In elastic cloud networks, the data plane information for each VM or node is used for load balancing. API connections rely on the service mesh architecture for inter-application routing requirements. Layer 7 telemetry in the service mesh covers protocols such as DNS, HTTP/S, SMTP, POP3, and FTP. 
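To show how the encryption piece is typically expressed, here is a minimal sketch assuming Istio's security API; placing the policy in the mesh's root namespace makes it apply mesh-wide, so every proxy-to-proxy connection must use mutual TLS:

```yaml
# Hypothetical example: require mutual TLS for all service-to-service
# traffic in an Istio-based mesh (root namespace makes it mesh-wide).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # reject plaintext traffic between workloads
```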

Service Mesh implementation

A service mesh implementation includes load balancing and service discovery across the SDN, IP address, microservice, and API resources of a web or mobile application. The service mesh manages communication, synchronization, and encryption for connections in the web server backend across hardware in an elastic web server architecture. In cloud applications, the scripts, database, and static web files are often separated onto different hardware, then assembled into the final page in the web browser. The SDN routing between hardware, scripts, databases, and files becomes more complex when third-party APIs are part of the code. When this must be assembled across resources for every page load, the service mesh integrates, synchronizes, and standardizes the operation across VMs in elastic web server frameworks. The service mesh was created to meet a need that no other software in the data center provides. It also includes data analytics and user metrics from web traffic connections.  
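As a hedged example of the load-balancing side of an implementation, assuming Istio's traffic-management API and a hypothetical checkout service, a DestinationRule can set the balancing strategy, connection limits, and health-based ejection that the proxies apply on every call:

```yaml
# Hypothetical example: client-side load balancing and connection limits
# for calls to one service, applied by the sidecar proxies.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: checkout
spec:
  host: checkout               # illustrative service name
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST    # prefer the least-loaded endpoint
    connectionPool:
      http:
        http1MaxPendingRequests: 100   # cap queued requests per proxy
    outlierDetection:
      consecutive5xxErrors: 5          # eject endpoints that keep failing
      interval: 30s
      baseEjectionTime: 60s
```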

Open source Service Mesh

Istio is currently the most advanced open source service mesh project, with Envoy providing the central features related to the management of the data plane across nodes. Istio was originally developed by Google, IBM, and Lyft; Envoy, its data plane proxy, is a Cloud Native Computing Foundation (CNCF) project. Istio works within the VMware NSX Service Mesh and Enterprise PKS platforms. PKS is VMware's Kubernetes distribution, which orchestrates cloud web servers through containers and is available as a self-hosted package for public and private cloud requirements or as a fully managed Containers-as-a-Service (CaaS) product. Istio is used for microservice communication in Kubernetes, with complex IP address routing capabilities and encryption for elastic web server orchestration in enterprise data centers at scale. Linkerd (which absorbed the earlier Conduit project), Consul, and Aspen Mesh are other notable projects being developed as components of service mesh frameworks. 

Elastic Service Mesh

An elastic service mesh is required to synchronize database and website files in a cloud hosting framework like AWS EC2 or Kubernetes. The service mesh controls the routing between VMs in the web server backend for API and SDN requirements in software application support. When the service mesh is also used for discovery and load balancing in elastic web server networks, administrators can automate the allocation of data center resources to match the demands of user traffic in production. Web servers can be configured to launch automatically under load and terminate when no longer required, making more efficient use of cloud hardware resources. The ability to embed real-time monitoring and analytics capabilities into a service mesh at the level of the VM or node gives software developers, programmers, and web publishers the ability to create new application features using microservices. 
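In a Kubernetes-based mesh, that elastic launch-and-terminate behavior is usually delegated to the platform's own autoscaling rather than to the mesh itself; a minimal sketch, assuming a hypothetical web-frontend Deployment, looks like this:

```yaml
# Hypothetical example: scale a meshed web Deployment up and down with
# demand using the standard Kubernetes autoscaler.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend       # illustrative Deployment name
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above 70% average CPU
```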

Why microservices architecture needs a Service Mesh

A public cloud may contain millions of simultaneously running microservices across containers or virtual machines, supporting different applications and databases in parallel through isolated runtimes. Multi-tenant environments based on virtualization require a better method to discover and register microservices so the unique functionality of each can be integrated by applications or shared with other devices using APIs. Many microservice formats are not explicitly designed for elastic web server platforms and need a service mesh to manage their operation in containers. A service mesh provides the fine-grained routing and encryption functionality over SDN that allows different APIs to communicate between running code processes on web servers, endpoints, and other devices. 
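As a final hedged sketch of granting or limiting access at the mesh layer, assuming Istio's security API and invented service names, an AuthorizationPolicy can restrict which workloads may call a service and over which methods:

```yaml
# Hypothetical example: allow only the "orders" workload to call the
# "payments" service, and only over GET/POST; other traffic is denied.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-orders
  namespace: shop
spec:
  selector:
    matchLabels:
      app: payments                     # workload this policy protects
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/shop/sa/orders"]
      to:
        - operation:
            methods: ["GET", "POST"]
```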

 

Related Solutions and Products

VMware Tanzu Service Mesh

Connectivity and security for modern applications.

VMware Tanzu Application Platform

A superior multi-cloud developer experience on Kubernetes.