Author(s): Shikha Atri, Associate Engineer – CloudDevOps and Mohinder Kumar, Director
Problem Statement
Deploying applications into production with minimal outage is always a challenge. The shift from monolithic applications to microservices has solved the deployment problem to some extent, since individual microservices can be deployed independently of each other. However, large applications still need a mechanism to test an application or microservice in production before rolling it out to actual users.
Deploying microservices with an Istio service mesh can address this by enabling a clear separation between application versions and traffic management. Istio's fine-grained traffic control decouples traffic distribution from replica scaling: you define the traffic percentages and Istio manages the rest.
Before we move into the setup, let us look at the basics of the Istio service mesh and deployment strategies.
What is Istio?
The term service mesh describes the network of microservices that make up such applications and the interactions between them. As a service mesh grows in size and complexity, it becomes harder to understand and manage; its requirements can include service discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication. Istio is an open source tool written in Go that creates an abstraction layer above the various microservices running in Kubernetes. It addresses the challenges developers and operations teams face as monolithic applications transition to a distributed microservice architecture, and it is the first and most widely used open source project to make the service mesh accessible to a wider audience. Istio helps to:
- Connect microservices.
- Control API calls between services and the traffic flow between them.
- Secure microservices.
- Provide security by default, with no modifications required in application code or infrastructure.
- Provide defense in depth by adding multiple layers of security that integrate with existing security systems.
- Encrypt traffic, which helps defend against MITM attacks.
- Control microservices.
- Apply enforcement policies.
- Provide automatic tracing, logging, and monitoring of all microservices, visualizing what is happening under the hood.
Deployment Strategies
A deployment strategy describes the process of how an application upgrade should be performed on a live system.
Recreate Deployment Strategy
As the name suggests, in this deployment strategy we stop the running application, deploy the newer version, and then start the application again. Since the older version is stopped first, this technique creates downtime between shutting down the old version of the application and booting the new one.
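In Kubernetes, this behaviour can be selected directly in the Deployment spec. A minimal sketch follows; the resource name and image tag are illustrative, not part of the sample app:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demoapp            # illustrative name
spec:
  replicas: 2
  strategy:
    type: Recreate         # all old pods are terminated before any new pod starts
  selector:
    matchLabels:
      app: demoapp
  template:
    metadata:
      labels:
        app: demoapp
    spec:
      containers:
        - name: demoapp
          image: demo-app:v2   # illustrative image tag
```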

Rolling Deployment Strategy
A rolling deployment is a strategy that slowly replaces instances of the previous version of an application with instances of the new version, a few at a time, until all traffic is served by the new version. This is the most commonly used (and the default) deployment strategy in the Kubernetes world.
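In a Kubernetes Deployment this is controlled by the strategy stanza; only that fragment is sketched here, and the specific surge values are illustrative:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1          # at most 1 extra pod above the desired count during rollout
    maxUnavailable: 1    # at most 1 pod below the desired count during rollout
```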

Blue/Green Deployment Strategy
In the Blue/Green deployment strategy we create a second set of server groups identical to the live environment. The newer version is deployed to the new server group and thoroughly tested before being made live for end users. Once the testing results are satisfactory, end-user traffic is switched from the old server groups to the new ones.
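With plain Kubernetes (no Istio), one common way to do the final switch is to flip a label in the Service selector; this sketch assumes both versions run with a distinguishing version label, and the names are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp            # illustrative name
spec:
  selector:
    app: demoapp
    version: green         # was "blue"; changing this value switches all traffic at once
  ports:
    - port: 80
      targetPort: 5000
```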

Canary Deployment Strategy
The canary deployment strategy is similar to Blue/Green in that we deploy the newer version of the application to another set of server groups. However, instead of switching end-user traffic from the old version to the new version in one go, we roll out the newer version to a small subset of users and gradually shift traffic from the old version to the new one.

A/B Testing Deployment Strategy
A/B testing is implemented much like the canary deployment strategy: we have a baseline feature v1 and deploy a new feature v2 in parallel to v1, then monitor the result of the feature release by splitting traffic between v1 and v2.
The primary difference is focus: A/B testing experiments with new features on a specific set of users before releasing to all, while the canary deployment strategy focuses on releasing to production with minimal impact.
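In Istio, an A/B split is typically expressed as header- or cookie-based routing rather than a percentage weight. A sketch follows, assuming v1/v2 subsets defined in a DestinationRule as used later in this post; the resource name and header are illustrative:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp-ab               # illustrative name
spec:
  hosts:
    - demoapp
  http:
    - match:
        - headers:
            x-experiment-group:  # illustrative header set by the frontend
              exact: "beta"
      route:
        - destination:
            host: demoapp
            subset: v2           # experiment group sees the new feature
    - route:
        - destination:
            host: demoapp
            subset: v1           # everyone else stays on the baseline
```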
Steps to setup Canary deployment using Istio
Pre-requisites
This blog assumes that the following components are already provisioned or installed:
- Kubernetes cluster
- Istio — setup instructions can be found here: Getting Started
- For our canary deployment, we will use a sample app https://github.com/cloudtechner/ct-k8s-canary-deployment
– Branch “feature/v1” — the existing version, v1, of our application. This version is already deployed, and we will partially replace it with a newer version of the application.
– Branch “feature/v2” — the newer version, v2, of our application. This will be deployed in parallel to v1, and we will gradually shift traffic from version v1 to version v2.
- Docker images built from both versions of the application:
– Create docker images from the “feature/v1” and “feature/v2” branches using the Dockerfile in the repository
– Push both to the docker registry
Deployment Steps
1. Create the Kubernetes resources for deploying the application. The Deployment manifest for the v1 application is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demoapp-v1
  namespace: canary-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demoapp
      version: v1
  template:
    metadata:
      labels:
        app: demoapp
        version: v1
    spec:
      containers:
        - image: xxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/demo-app:v1
          imagePullPolicy: IfNotPresent
          name: demo-app-v1
          ports:
            - containerPort: 5000
2. Create the corresponding Service for it:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demoapp
  name: demoapp
  namespace: canary-deployment
spec:
  ports:
    - port: 80
      name: http
      targetPort: 5000
      protocol: TCP
  selector:
    app: demoapp
  type: ClusterIP
3. Now create the Istio resources: a Gateway and a VirtualService.
Gateway Resources
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: demoapp-gateway
  namespace: canary-deployment
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - 'canary-deployment.test.cloudtechner.com'
      port:
        name: http
        number: 80
        protocol: HTTP
VirtualService Resources
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp-virtualservice
  namespace: canary-deployment
spec:
  gateways:
    - demoapp-gateway
  hosts:
    - 'canary-deployment.test.cloudtechner.com'
  http:
    - route:
        - destination:
            host: demoapp.canary-deployment.svc.cluster.local
            port:
              number: 80
4. Please make sure the namespace we are using for the deployment has Istio injection enabled:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    istio-injection: enabled
  name: canary-deployment
5. Verify the application from the browser.

6. Now we will deploy version v2 of the same app alongside v1 using a Kubernetes Deployment resource:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demoapp-v2
  namespace: canary-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demoapp
      version: v2
  template:
    metadata:
      labels:
        app: demoapp
        version: v2
    spec:
      containers:
        - image: xxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/demo-app:v2
          imagePullPolicy: IfNotPresent
          name: demo-app-v2
          ports:
            - containerPort: 5000
7. Now add a DestinationRule to define the v1 and v2 subsets of the service:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: demo-app
  namespace: canary-deployment
spec:
  host: demoapp.canary-deployment.svc.cluster.local
  subsets:
    - labels:
        version: v1
      name: v1
    - labels:
        version: v2
      name: v2
In the above, the DestinationRule splits traffic between the two application versions using the deployment labels v1 and v2, with the same service endpoint demoapp.canary-deployment.svc.cluster.local for both versions.
8. Modify the VirtualService to shift 10% of the traffic to the newer version, based on the weight parameter:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp-virtualservice
  namespace: canary-deployment
spec:
  gateways:
    - demoapp-gateway
  hosts:
    - 'canary-deployment.test.cloudtechner.com'
  http:
    - route:
        - destination:
            host: demoapp.canary-deployment.svc.cluster.local
            port:
              number: 80
            subset: v1
          weight: 90
        - destination:
            host: demoapp.canary-deployment.svc.cluster.local
            port:
              number: 80
            subset: v2
          weight: 10
9. Go to the browser again and refresh it multiple times; you will see that roughly 1 out of 10 requests goes to version v2 and loads the newer page.

In the current state, we have two deployments of our application with active replicas, one for each version. 10% of users are exposed to the new version without opting in. We should now monitor the new version and check for errors to identify any potential issues.
10. Once we are satisfied with the test results and the behaviour of the newly deployed version for this smaller set of users, we can gradually increase the traffic to version v2 by increasing its weight parameter in the VirtualService and reducing the weight for v1, eventually reaching 100% weight for version v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp-virtualservice
  namespace: canary-deployment
spec:
  gateways:
    - demoapp-gateway
  hosts:
    - 'canary-deployment.test.cloudtechner.com'
  http:
    - route:
        - destination:
            host: demoapp.canary-deployment.svc.cluster.local
            port:
              number: 80
            subset: v1
          weight: 60
        - destination:
            host: demoapp.canary-deployment.svc.cluster.local
            port:
              number: 80
            subset: v2
          weight: 40
11. Monitor the traffic pattern using the command below:
while true; do curl -s http://canary-deployment.test.cloudtechner.com | grep label; sleep 0.1; done

12. Once the application is fully switched to version v2, we can reduce the number of replicas for version v1 to 0 to save resources.
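If the v1 Deployment from step 1 is kept around for quick rollback, only its replica count needs to change. The fields involved are sketched below as a strategic-merge patch; apply it with kubectl patch (or edit the full manifest and re-apply):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demoapp-v1
  namespace: canary-deployment
spec:
  replicas: 0    # free the v1 pods but keep the Deployment object for fast rollback
```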

As we have seen above, we deployed a newer version of a demo app using Istio and Kubernetes and tested it with a canary deployment strategy before rolling it out to all users. Kubernetes and Istio greatly simplify a process that would otherwise require complicated traffic control through load balancer rules and similar mechanisms.