Cloud Run is a fully managed computing platform for deploying and scaling applications in containers.
Deploying implies choosing a deployment architecture, and as you know, CI/CD is the most widespread and widely adopted approach.
But it remains to be seen whether the tools currently available for CI/CD are compatible with Cloud Run 🤔.
When setting up a CI/CD pipeline for a new project, I realized that there weren’t many tools compatible with Cloud Run. On top of that, most of them only handle continuous deployment, not continuous delivery. As you know, it’s important to control the deployment of applications in production. After a lot of research, I set up two CI/CD architectures, one for the development environment and the other for the production environment 😊.
Development environment
[Architecture diagram: development environment CI/CD pipeline]
When the development team finishes its work, it performs a push on the repository. Once the code change is detected, Cloud Build is triggered to perform the following tasks:
- Build the Docker image
- Run unit tests
- Push the Docker image to Artifact Registry
- Retrieve the Docker image from Artifact Registry and deploy it on Cloud Run with 100% of the traffic
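To make this concrete, here is a minimal cloudbuild.yaml sketch covering these four steps (the Artifact Registry repository, image and service names, the region, and the test command are assumptions that depend on your stack):

```yaml
steps:
  # Build the Docker image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'europe-west1-docker.pkg.dev/$PROJECT_ID/my-repo/my-service:$SHORT_SHA', '.']

  # Run the unit tests inside the freshly built image (test command is an assumption)
  - name: 'gcr.io/cloud-builders/docker'
    args: ['run', '--rm', 'europe-west1-docker.pkg.dev/$PROJECT_ID/my-repo/my-service:$SHORT_SHA', 'npm', 'test']

  # Push the Docker image to Artifact Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'europe-west1-docker.pkg.dev/$PROJECT_ID/my-repo/my-service:$SHORT_SHA']

  # Deploy the image on Cloud Run; by default the new revision receives 100% of the traffic
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'gcloud'
    args:
      - 'run'
      - 'deploy'
      - 'my-service'
      - '--image=europe-west1-docker.pkg.dev/$PROJECT_ID/my-repo/my-service:$SHORT_SHA'
      - '--region=europe-west1'
      - '--platform=managed'
```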
Once our service is ready, we will trigger a workflow (Google Cloud Workflows) that will retrieve the service URL. This URL will be sent to a function (Cloud Functions) which will take care of sending an e-mail to all the members of your organization who need to access the service to perform tests.
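A minimal Google Cloud Workflows sketch of this step might look like the following (the service name, region, and the send-review-email Cloud Functions URL are assumptions):

```yaml
main:
  steps:
    # Read the Cloud Run service to get its URL (Cloud Run Admin API connector)
    - get_service:
        call: googleapis.run.v1.namespaces.services.get
        args:
          name: ${"namespaces/" + sys.get_env("GOOGLE_CLOUD_PROJECT_ID") + "/services/my-service"}
          location: europe-west1
        result: service
    # Hand the URL over to the Cloud Function that e-mails the team
    - notify_team:
        call: http.post
        args:
          url: https://europe-west1-my-project.cloudfunctions.net/send-review-email
          auth:
            type: OIDC
          body:
            serviceUrl: ${service.status.url}
    - done:
        return: ${service.status.url}
```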
As you can see, to access the services we will use Identity-Aware Proxy to control and limit access to our application. We will then use Cloud Load Balancing to distribute the load of incoming requests (only needed when the service is deployed in several regions).
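For reference, putting the Cloud Run service behind the load balancer so that Identity-Aware Proxy can be enabled might look roughly like this (all resource names, the region, and the OAuth client values are assumptions; the URL map, target proxy, and forwarding rule are omitted for brevity):

```bash
# Create a serverless NEG pointing at the Cloud Run service
gcloud compute network-endpoint-groups create my-service-neg \
  --region=europe-west1 \
  --network-endpoint-type=serverless \
  --cloud-run-service=my-service

# Create a global backend service and attach the NEG to it
gcloud compute backend-services create my-service-backend --global
gcloud compute backend-services add-backend my-service-backend \
  --global \
  --network-endpoint-group=my-service-neg \
  --network-endpoint-group-region=europe-west1

# Enable Identity-Aware Proxy on the backend service
gcloud compute backend-services update my-service-backend \
  --global \
  --iap=enabled,oauth2-client-id=CLIENT_ID,oauth2-client-secret=CLIENT_SECRET
```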
NB :
- The use of Cloud Load Balancing is not mandatory, but it is required to configure Identity-Aware Proxy.
- External HTTP(S) Load Balancing with serverless NEGs is not supported with Cloud Run for Anthos.
- With GitLab CI you can run all the tests (unit tests, etc.) you want before triggering Cloud Build to deploy to Cloud Run. All you have to do is define the order of the jobs.
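As an illustration, a .gitlab-ci.yml along these lines could run the unit tests first and then hand over to Cloud Build (the runtime image, branch name, and the GCP_SA_KEY / GCP_PROJECT_ID variables are assumptions):

```yaml
stages:
  - test
  - deploy

unit-tests:
  stage: test
  image: node:20          # runtime image depends on your stack
  script:
    - npm ci
    - npm test

trigger-cloud-build:
  stage: deploy
  image: google/cloud-sdk:slim
  script:
    # Authenticate with a service account key stored as a CI/CD variable
    - gcloud auth activate-service-account --key-file="$GCP_SA_KEY"
    # Hand over to Cloud Build, which builds, pushes, and deploys to Cloud Run
    - gcloud builds submit --project="$GCP_PROJECT_ID" --config=cloudbuild.yaml .
  only:
    - develop
```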
Production environment
[Architecture diagram: production environment CI/CD pipeline]
Although some elements of the development environment are also present in the production environment, we will go through the explanation from the beginning for better understanding.
When the development team finishes its work, it performs a push on the repository. Once the code change is detected, Cloud Build is triggered to perform the following tasks:
- Build the Docker image
- Run unit tests
- Push the Docker image to Artifact Registry
- Retrieve the Docker image from Artifact Registry and deploy it on Cloud Run with a tag and 0% of the traffic
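The build, test, and push steps mirror the development pipeline; only the deploy step changes. A sketch of that step (the service and image names, region, and the candidate tag are assumptions):

```yaml
  # Deploy a new revision with a tag and without routing any traffic to it
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'gcloud'
    args:
      - 'run'
      - 'deploy'
      - 'my-service'
      - '--image=europe-west1-docker.pkg.dev/$PROJECT_ID/my-repo/my-service:$SHORT_SHA'
      - '--region=europe-west1'
      - '--tag=candidate'
      - '--no-traffic'
```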
Once our service is ready, we’ll trigger a workflow (Google Cloud Workflows) that will retrieve the revision URL of the tagged deployment. This URL will be sent to a function (Cloud Functions) which will take care of sending an e-mail to all the members of your organization who need to access the service to perform tests.
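For reference, outside the workflow you can inspect the traffic targets and their tag URLs with a gcloud command roughly like this (service name and region are assumptions):

```bash
# Show the traffic targets of the service, including the URL attached to each tag
gcloud run services describe my-service \
  --region=europe-west1 \
  --format="json(status.traffic)"
```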
As you can see, the new deployment will not be accessible to the public, as it is not receiving any traffic. The objective here is to monitor the deployment in production and perform functional tests.
Thanks to the Cloud Run tag, you will get a revision URL that all the teams in your organization can use to access the new version of the application.
For more information on the use of tags you can read this article.
With the revision URL, your teams will be able to perform functional tests. Once the new deployment is approved, we will begin deployment in the production environment.
Still with the aim of controlling our deployment, we will use a gradual rollout to control the distribution of traffic. Here is a small illustration.
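A sketch of what this gradual rollout could look like with gcloud (the service name, region, and the candidate tag are assumptions):

```bash
# Send 10% of the traffic to the tagged revision, keep 90% on the current one
gcloud run services update-traffic my-service \
  --region=europe-west1 \
  --to-tags=candidate=10

# If monitoring looks healthy, keep promoting (e.g. 50%, then 100%)
gcloud run services update-traffic my-service \
  --region=europe-west1 \
  --to-tags=candidate=100
```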
If during the production rollout you encounter errors, you can perform a rollback.
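A rollback is then just another traffic update, for example (the previous revision name is an assumption):

```bash
# List revisions to identify the last known-good one
gcloud run revisions list --service=my-service --region=europe-west1

# Route all traffic back to the previous revision
gcloud run services update-traffic my-service \
  --region=europe-west1 \
  --to-revisions=my-service-00041-abc=100
```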
Finally, the application must be made accessible to the customer. To do this, we will use Cloud Armor for security, Cloud CDN for reliable and fast delivery of our content and Cloud Load Balancing for load balancing.
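As a rough sketch, enabling Cloud Armor and Cloud CDN on the backend service created earlier could look like this (all resource names are assumptions):

```bash
# Create a Cloud Armor security policy and attach it to the backend service
gcloud compute security-policies create my-service-policy \
  --description="Edge security policy for my-service"

gcloud compute backend-services update my-service-backend \
  --global \
  --security-policy=my-service-policy

# Enable Cloud CDN on the same backend service
gcloud compute backend-services update my-service-backend \
  --global \
  --enable-cdn
```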
With this automated architecture, you can easily control the deployment of your applications.
In case you are interested in configuring this architecture, please read this article.