Docker has disrupted software development in many wonderful ways. With Docker-based deployments, you can ensure that your development, test, staging, and production environments are very similar, which mitigates the frustrating 'it works on my machine/in a specific environment' class of issues. Even better, environments can be brought up and torn down on demand, since Docker containers are lightweight and spin up much faster than traditional machine images.
However, adopting Docker introduces an additional step in setting up Continuous Delivery: you need to choose a Docker registry, which provides a central place to store your images. Just like your application components, your images need to be part of your Continuous Delivery pipeline. This ensures that any time one of your images changes, it is built, versioned, and pushed to the Docker registry of your choice. This in turn triggers the rest of the pipeline, which rebuilds all dependent images and applications and runs through the rest of your Continuous Delivery workflow.
It is important to set this workflow up correctly to ensure that Docker image management is automated and is an integral part of your CD pipeline.
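The build-version-push step described above can be sketched as a small shell function. The image name, the CI-provided build number variable, and the use of plain `docker` commands are all illustrative assumptions, not a prescribed implementation:

```shell
# Minimal sketch of a CI step that builds, versions, and pushes an image.
# "myorg/myapp" and CI_BUILD_NUMBER are hypothetical placeholders.
DOCKER="${DOCKER:-docker}"   # override with DOCKER=echo to dry-run

build_and_push() {
  image="myorg/myapp"                    # hypothetical image name
  version="${CI_BUILD_NUMBER:-0.0.1}"    # version supplied by the CI system
  # Build the image, tag it with both the version and "latest",
  # and push both tags to the registry.
  $DOCKER build -t "$image:$version" . &&
  $DOCKER tag "$image:$version" "$image:latest" &&
  $DOCKER push "$image:$version" &&
  $DOCKER push "$image:latest"
}
```

Running `build_and_push` from a CI job after each commit gives every image a traceable version tied to the build that produced it.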
You can build Docker images as part of your CI/CD workflow.
You can custom-build your base images to speed up builds for your applications, and trigger a build for any Dockerfile in your source control repository.
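Building a base image from a Dockerfile kept in source control might look like the following sketch; the Dockerfile path and image name are hypothetical:

```shell
# Sketch: build a custom base image from a Dockerfile in the repository.
DOCKER="${DOCKER:-docker}"   # override with DOCKER=echo to dry-run

build_base_image() {
  # $1: path to the Dockerfile, $2: image name, $3: version tag.
  # The build context is assumed to be the Dockerfile's directory.
  $DOCKER build -f "$1" -t "$2:$3" "$(dirname "$1")"
}
```

For example, `build_base_image base/Dockerfile myorg/base 1.0.0` would build `myorg/base:1.0.0` from `base/Dockerfile`.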
Your base image build is part of your CD workflow.
Configure your pipeline so that every time a base image is updated, all dependent images and applications are rebuilt and the rest of the pipeline is triggered.
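The rebuild trigger can be sketched as a loop over dependent images. The dependent list and directory layout are assumptions; a real pipeline would derive the list from its own dependency graph:

```shell
# Sketch: when a base image changes, rebuild every image that depends on it.
DOCKER="${DOCKER:-docker}"   # override with DOCKER=echo to dry-run

rebuild_dependents() {
  base="$1"; shift
  echo "rebuilding images that depend on $base" >&2
  for image in "$@"; do
    # Each dependent Dockerfile is assumed to live in a directory named
    # after the image and to start FROM the updated base image.
    # --pull ensures the freshly updated base is fetched before building.
    $DOCKER build --pull -t "$image:latest" "./$image" &&
    $DOCKER push "$image:latest"
  done
}
```

A registry webhook or CI trigger on the base image's build would invoke this step, e.g. `rebuild_dependents myorg/base api worker`.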
You can push your images to a registry with one command.
All popular Docker registries, such as Docker Hub, Amazon ECR, Google Container Registry, and Quay, are natively supported.
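With the Docker CLI, the target registry is selected by the hostname in the image tag. The helper below is a sketch; the account IDs, project names, and repository paths in the examples are placeholders:

```shell
# Sketch: push a local image to a registry chosen by the tag's hostname.
DOCKER="${DOCKER:-docker}"   # override with DOCKER=echo to dry-run

push_to_registry() {
  # $1: local image:tag, $2: registry-qualified name.
  $DOCKER tag "$1" "$2" && $DOCKER push "$2"
}

# Examples (account/project identifiers are placeholders):
# push_to_registry myapp:1.0 docker.io/myorg/myapp:1.0                            # Docker Hub
# push_to_registry myapp:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0  # Amazon ECR
# push_to_registry myapp:1.0 gcr.io/my-project/myapp:1.0                          # Google Container Registry
# push_to_registry myapp:1.0 quay.io/myorg/myapp:1.0                              # Quay
```

Authentication (`docker login` or a cloud-specific credential helper) is assumed to have happened earlier in the pipeline.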