A canary release is a deployment strategy in which a small percentage of servers is chosen to run a new code release. If the new code runs as expected on those servers, more servers are switched over to it. The end state is either all servers running the new code, or errors being detected and the release rolled back to its previous state. The name comes from the saying "a canary in a coal mine".
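The promote-or-roll-back loop described above can be sketched in a few lines. This is a toy model, not a real deployment tool: the `error_rate_at` callback, the default threshold, and the traffic step size are all illustrative assumptions.

```python
def canary_rollout(error_rate_at, threshold=0.01, step=0.05):
    """Shift servers to the new release in steps; roll back on errors.

    error_rate_at(fraction) is assumed to report the observed error
    rate while `fraction` of servers are running the new code.
    """
    fraction = step
    while fraction < 1.0:
        if error_rate_at(fraction) > threshold:
            return "rolled back"              # canary failed: revert everything
        fraction = min(1.0, fraction + step)  # canary healthy: widen the rollout
    return "fully deployed"
```

A healthy release (error rate stays near zero) walks all the way to full deployment; a broken one is reverted at the first unhealthy stage, limiting the blast radius to a small slice of servers.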
Capacity management is the process of planning, designing, and controlling the use of resources in a DevOps environment. The goal of capacity management is to ensure that the necessary resources are available to meet the demand of the systems and applications that are deployed in a DevOps pipeline.
In DevOps, capacity management is a critical component of the overall IT infrastructure management process. It involves the identification, analysis, and prediction of the resource requirements of the systems and applications that are being developed and deployed. This helps to ensure that the necessary resources, such as computing power, memory, storage, and network bandwidth, are available to meet the demand of the system at any given time.
Capacity management also involves monitoring the usage of resources in real-time and making adjustments as needed to ensure that the systems are performing optimally. This may involve adding or removing resources as demand changes, or making changes to the configuration of the systems to improve performance.
Overall, capacity management plays a crucial role in DevOps by ensuring that systems have the resources they need as demand changes. By continuously monitoring and adjusting resource usage, teams keep their systems performing optimally, which improves the efficiency and reliability of the software development and deployment process.
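One small piece of the planning described above can be shown as arithmetic: sizing a fleet for peak demand plus headroom. The function name, the requests-per-second model, and the 30% headroom default are all hypothetical, a sketch of one simple capacity model among many.

```python
import math

def required_instances(peak_rps, rps_per_instance, headroom=0.3):
    """Size a fleet for peak demand plus spare headroom.

    peak_rps: expected peak requests per second.
    rps_per_instance: measured throughput of one instance.
    headroom: extra fraction of capacity kept free for spikes.
    """
    return math.ceil(peak_rps * (1 + headroom) / rps_per_instance)
```

For example, a service expecting 1,000 requests/second, on instances that each handle 150, needs nine instances once 30% headroom is included. In practice these inputs come from the real-time monitoring the text describes.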
A capacity test is a type of performance test that is used in DevOps to determine the maximum load that a system can handle before it becomes unavailable or starts to degrade in performance. The goal of a capacity test is to identify the limitations of a system and determine the resources (such as CPU, memory, and network bandwidth) that need to be increased in order to handle higher loads.
Capacity tests are typically performed before a new release of a software application is deployed to production, or when changes are made to the infrastructure that supports the application. The test is designed to simulate the maximum expected load on the system, and the results are used to identify potential bottlenecks and performance issues.
In DevOps, capacity testing is an important step in the continuous delivery and deployment process. By performing capacity tests before deploying a new release, teams can ensure that the system will perform well under real-world conditions and that users will have a positive experience. Additionally, capacity testing can help to prevent system failures and downtime, which can result in lost revenue and damage to the reputation of the company.
Overall, capacity testing is a critical component of DevOps that helps teams to validate the performance and scalability of a system before it is deployed to production. By performing regular capacity tests, teams can continuously monitor and optimize the performance of their systems, ensuring that they are able to handle increasing loads as the user base grows.
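A capacity test of the kind described above can be sketched as a step-load loop: keep raising the load until the system degrades, then report the last load it handled acceptably. The `measure_latency` callback and the latency/RPS numbers are illustrative assumptions, not a real load-testing tool.

```python
def find_capacity(measure_latency, start_rps=100, step_rps=100,
                  max_latency_ms=200, max_rps=10_000):
    """Step the load up until latency degrades; return the last good load.

    measure_latency(rps) is assumed to return the observed latency
    (in ms) while the system serves `rps` requests per second.
    """
    rps = start_rps
    last_good = 0
    while rps <= max_rps and measure_latency(rps) <= max_latency_ms:
        last_good = rps        # system still healthy at this load
        rps += step_rps        # increase the simulated load
    return last_good           # the capacity limit found by the test
```

The returned limit is what feeds the decisions the text describes: whether CPU, memory, or network resources must grow before the next release ships.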
A commit creates a checkpoint of the code at a point in time and represents a unit of work being completed.
Early IT operations made manual changes to systems that were hard to track and could lead to system outages if not completed properly. Configuration management aims to solve this by ensuring that all changes are captured as code in a repository. Those changes are then pushed down CI/CD pipelines along with application code, keeping infrastructure and applications in sync.
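The "capture changes as code" idea above boils down to desired-state convergence: compare the recorded configuration against what a system currently has, and compute the changes needed. This toy model uses plain dicts; real tools (Ansible, Puppet, Terraform, etc.) do the same comparison against live systems.

```python
def plan_changes(current, desired):
    """Compute the edits needed to converge current state to desired.

    Both states are dicts of setting -> value: a toy stand-in for
    configuration stored as code in a repository.
    """
    changes = {}
    for key, value in desired.items():
        if current.get(key) != value:
            changes[key] = value   # setting to create or update
    for key in current:
        if key not in desired:
            changes[key] = None    # None marks a setting to remove
    return changes
```

Because every change flows through the repository and this comparison, nothing is applied by hand, which is exactly the drift the paragraph says configuration management exists to prevent.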
A container image is, in essence, an archive of a filesystem (in practice, a set of tar layers rather than a zip file). When run, the image is unpacked and the resulting process is assigned resources via a cgroup.
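The "image is an archive of a filesystem" idea can be demonstrated with Python's standard `tarfile` module. This is a deliberately simplified model: real images carry multiple layers plus a JSON manifest, and the pack/unpack function names here are invented for illustration.

```python
import io
import tarfile

def pack_image(files):
    """Pack a dict of path -> bytes into a tar archive (a toy image layer)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for path, data in files.items():
            info = tarfile.TarInfo(name=path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def unpack_image(blob):
    """Extract the archive back into a dict, as a runtime does at container start."""
    files = {}
    with tarfile.open(fileobj=io.BytesIO(blob), mode="r") as tar:
        for member in tar.getmembers():
            files[member.name] = tar.extractfile(member).read()
    return files
```

Round-tripping a filesystem through `pack_image` and `unpack_image` reproduces it exactly, which is the core property an image format needs: the same bytes unpack into the same filesystem on any host.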
Containerization is the act of taking an application that runs on bare metal and transforming it to run inside a container.
Containers are slices of a Linux machine's resources, carved out via cgroups (control groups). This allows for fine-grained slicing and optimization of resources in an environment.
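The proportional slicing that cgroups perform can be illustrated with simple arithmetic: CPU time is divided between containers in proportion to their weights, in the spirit of cgroup v2's `cpu.weight`. The function and the container names are hypothetical; the kernel does this scheduling dynamically, not as a one-shot calculation.

```python
def cpu_allocation(total_cpus, weights):
    """Split CPU capacity between containers in proportion to their weights,
    mirroring how cgroup CPU weights divide a machine under contention."""
    total_weight = sum(weights.values())
    return {name: total_cpus * w / total_weight for name, w in weights.items()}
```

On an 8-CPU host, a container weighted 300 against a sibling weighted 100 receives three quarters of the CPU time when both are busy; when one is idle, the other may burst beyond its share.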
Continuous delivery is the act of building workflows, processes, and technology that allow for rapid delivery of valuable technology to customers with the lowest delay possible.
Continuous deployment is the act of deploying technology changes to production many times per day.
Continuous integration is the act of pulling together code changes and infrastructure configuration into a single artifact, then running automated testing to ensure the quality of the newly created deliverable. The goal of continuous integration is to ensure that continuous delivery is fed with quality products that can be moved into production rapidly.
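The gatekeeping described above (integrate, test, and only then produce a deliverable) can be sketched as a tiny pipeline function. This is a schematic, not a real CI server: the callbacks and the string-based "merge" are stand-ins for a real build system.

```python
def ci_pipeline(changes, run_tests, build_artifact):
    """Integrate a list of changes, run automated tests, and only
    produce an artifact if the tests pass (a minimal CI sketch).

    run_tests(merged) -> bool and build_artifact(merged) are assumed
    to be supplied by the surrounding tooling.
    """
    merged = "\n".join(changes)       # integrate code + config changes
    if not run_tests(merged):
        return None                   # a failing build never ships
    return build_artifact(merged)     # quality-checked deliverable
```

The key property is that the artifact handed to continuous delivery is produced only on a green test run, which is what lets downstream stages trust it enough to move quickly.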
Continuous quality is the act of consistent review of work products (code changes) against quality standards. This can be done manually through code reviews, or automatically via tools.
Continuous testing is the idea that every change should be run through testing before being deployed to production, whether manually or via automation tools. Strategies such as test-driven development (TDD) and behavior-driven development (BDD) were developed to formalize continuous testing as a methodology that can be communicated between different parties.
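The TDD rhythm mentioned above (write the test first, then just enough code to pass it) can be shown with a trivial example. The `slugify` function and its expected behavior are invented purely for illustration.

```python
# Step 1: the test is written first and acts as the specification.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  DevOps 101 ") == "devops-101"

# Step 2: write just enough code to make the test pass.
def slugify(text):
    """Lowercase the text and join its words with hyphens."""
    return "-".join(text.lower().split())
```

Because the test exists before the code, it runs on every subsequent change as well, which is precisely what makes the testing "continuous" rather than a one-time gate.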