Features of a Service Mesh
1. Traffic Splitting
One of the most important features of a service mesh is traffic split configuration.
For example, when changes are made to the payment microservice (continuing the earlier example), a newer version is built (say, version 3.0), tested, and deployed to the production environment. Of course, you can rely on tests of the new version. But what if the new version has a bug that the tests didn't catch?
This happens quite often, depending on test coverage. In that case you don't want to end up with a new version of the payment service in production that doesn't work. It may cost your company a lot of money.
So you may want to send just 1% or 10% of the traffic to the new version for a period of time to make sure it really works.
With a service mesh, you can easily configure the web server microservice to direct 90% of the traffic to payment service version 2.0 and 10% to version 3.0. This is also known as a canary deployment.
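As a concrete sketch, here is how such a 90/10 split could look in Istio, one popular service mesh implementation (the `payment` host and the `v2`/`v3` subset names are assumptions; the subsets themselves would be defined in a separate DestinationRule, not shown):

```yaml
# Hypothetical Istio VirtualService: send 90% of traffic to payment v2, 10% to v3.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: payment
spec:
  hosts:
    - payment            # internal service name (assumed)
  http:
    - route:
        - destination:
            host: payment
            subset: v2   # subset defined in a DestinationRule (not shown)
          weight: 90
        - destination:
            host: payment
            subset: v3   # the new canary version
          weight: 10
```

Shifting more traffic to the new version is then just a matter of adjusting the weights, with no change to the services themselves.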
2. Security
A service mesh typically also handles the security aspects of the service-to-service communication. This includes enforcing traffic encryption through mutual TLS (mTLS), providing authentication through certificate validation, and ensuring authorization through access policies.
There can also be some interesting use cases of security in a service mesh. For instance, we can achieve network segmentation allowing some services to communicate while prohibiting others. Moreover, a service mesh can provide precise historical information for auditing requirements.
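For illustration, in Istio strict mutual TLS can be enforced mesh-wide with a single PeerAuthentication resource (a sketch; `istio-system` is Istio's default root namespace, so the policy applies to every workload in the mesh):

```yaml
# Enforce mutual TLS for all service-to-service traffic in the mesh (Istio sketch).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace => policy applies mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plain-text traffic between sidecars
```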
3. Observability
Robust observability is the underpinning requirement for handling the complexity of a distributed system. Because a service mesh handles all communication, it's rightly placed to provide observability features. For instance, it can provide information about distributed tracing.
A service mesh can generate a lot of metrics like latency, traffic, errors, and saturation. Moreover, a service mesh can also generate access logs, providing a full record for each request. These are quite useful in understanding the behavior of individual services as well as the whole system.
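As a sketch of how this is configured, Istio exposes access logging through its Telemetry API (`envoy` here refers to Istio's built-in access-log provider; the resource name is an assumption):

```yaml
# Enable Envoy access logs for the whole mesh (Istio Telemetry API sketch).
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy   # built-in access-log provider
```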
4. Load balancing
Most orchestration frameworks already provide Layer 4 (transport layer) load balancing. A service mesh implements more sophisticated Layer 7 (application layer) load balancing, with richer algorithms and more powerful traffic management.
Load‑balancing parameters can be modified via API, making it possible to orchestrate blue‑green or canary deployments.
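For example, in Istio the load-balancing algorithm for a service can be switched declaratively via a DestinationRule (a sketch; the `payment` host is an assumption):

```yaml
# Use least-request Layer 7 load balancing for the payment service (Istio sketch).
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: payment
spec:
  host: payment
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST   # alternatives include ROUND_ROBIN, RANDOM, consistent hashing
```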
5. Authentication and authorization
The service mesh can authorize and authenticate requests made from both outside and within the app, sending only validated requests to instances.
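As an illustrative sketch, an Istio AuthorizationPolicy can restrict which workload identities are allowed to call the payment service (the `app: payment` label and the service-account principal shown are hypothetical):

```yaml
# Allow only the web-server identity to call payment workloads (Istio sketch).
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payment-allow-web
spec:
  selector:
    matchLabels:
      app: payment                  # assumed workload label
  action: ALLOW
  rules:
    - from:
        - source:
            principals:
              - cluster.local/ns/default/sa/web-server   # hypothetical mTLS identity
```

Requests from any other identity are rejected before they ever reach the application code.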
6. Service discovery
Service discovery is how applications and (micro)services locate each other on a network.
When an instance needs to interact with a different service, it needs to find – discover – a healthy, available instance of the other service.
Typically, the instance performs a DNS lookup for this purpose. The container orchestration framework keeps a list of instances that are ready to receive requests and provides the interface for DNS queries.
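In Kubernetes, for instance, that interface is a Service object; the name below (`payment`, assumed) becomes a DNS name such as `payment.default.svc.cluster.local` that other instances can resolve:

```yaml
# Kubernetes Service sketch: a stable DNS name in front of healthy payment pods.
apiVersion: v1
kind: Service
metadata:
  name: payment        # resolvable as payment.default.svc.cluster.local
spec:
  selector:
    app: payment       # only ready pods with this label receive traffic
  ports:
    - port: 80
      targetPort: 8080 # assumed container port
```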