If you want golden eggs, you have to look after the goose that lays them. In this piece, the goose I’d like to draw your attention to is your CI/CD pipeline.

The software industry has a lot of sick geese, so it’s time to go through some prevalent anti-patterns, their effects, and how to remedy them.

Underestimating the importance of the pipeline

The CI/CD pipeline is to software development what the production line is to manufacturing. Successful manufacturers understand the need to optimize the production line, since it affects quality, speed, and throughput, and they carefully match their investments to production capacity and intended outcomes.

The CI/CD pipeline, by contrast, is frequently treated as a second-class citizen in software development. Building a new feature always takes precedence over investing in the pipeline. As a result, your software’s speed and quality steadily degrade.

Instead, split your investment between improving the pipeline and adding new features.

Not implementing pipeline modeling

A map is meaningless if you don’t know where you are. So how do you figure out which issues in your pipeline are the most pressing? Many people choose improvement projects on gut instinct: adopting a new technology or yet another tool, or simply working on their favorite areas.

They are rightfully pleased with the improvements they make, yet the effort yields no major return on investment: any enhancement that does not address an existing bottleneck is unlikely to have a significant influence on overall performance.

Pipeline modeling is a methodical way to detect bottlenecks in your pipeline. It gives you an end-to-end picture of the pipeline and helps you identify where the bottlenecks are. As part of your modeling, measure lead and process times as well as the percentage of work that arrives complete and accurate (%C/A). Measure each stage first, then calculate these figures for the entire end-to-end process.

Create a prioritized improvement backlog from this data. Regularly allocate enough capacity to eliminate bottlenecks, and track progress by comparing end-to-end performance against the previous baseline.
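To make the modeling step concrete, here is a minimal sketch of the rollup: the end-to-end %C/A is the product of the per-stage values, and the stage with the worst figure is a prime bottleneck candidate. All stage names and numbers below are illustrative assumptions, not data from any real pipeline.

```python
# Per-stage metrics from a hypothetical pipeline model: process time
# (hands-on work), lead time (elapsed time), and %C/A (the share of
# incoming work that is complete and accurate). All numbers illustrative.
stages = [
    {"name": "code review", "process_h": 1.0, "lead_h": 24, "pct_ca": 0.90},
    {"name": "build",       "process_h": 0.5, "lead_h": 1,  "pct_ca": 0.99},
    {"name": "system test", "process_h": 8.0, "lead_h": 72, "pct_ca": 0.80},
    {"name": "deploy",      "process_h": 2.0, "lead_h": 48, "pct_ca": 0.95},
]

total_lead_h = sum(s["lead_h"] for s in stages)

# Rolled %C/A: the product of the per-stage values, i.e. the odds that
# work flows through the whole pipeline without needing rework.
rolled_ca = 1.0
for s in stages:
    rolled_ca *= s["pct_ca"]

# The stage with the worst %C/A is a prime bottleneck candidate.
bottleneck = min(stages, key=lambda s: s["pct_ca"])
```

Note how quickly the rolled figure degrades: four stages that each look reasonably healthy in isolation combine to a pipeline where roughly a third of all work needs rework somewhere.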

Letting infrastructure become a deployment bottleneck

Developing your application also means developing the infrastructure it runs on and the scripts that deploy it; together they make up the deployment phase of your pipeline’s continuous integration (CI) cycle. Don’t just test the application during the early test stages; test your deployment scripts as well.

The time it takes to set up infrastructure and deploy the solution is frequently a bottleneck, especially where the concept of flow is not well understood. The longer it takes to stand up your infrastructure, the more likely you are to work in larger batch sizes and, as a result, with reduced flow.

If infrastructure isn’t available on demand, or provisioning takes too long because of manual steps, change that. Invest in a private or public cloud to ensure your infrastructure never becomes the bottleneck.

Setting up the DevOps pipeline manually

Setting up a CI/CD pipeline requires knowledge and time; at enterprise scale, it is a full-time job. Tools for configuration management, build, static code analysis, binary repositories, test, deployment, code signing, metrics, performance, and infrastructure all have to be set up, integrated, and deployed.

After the setup, you’ll need to maintain the pipeline, including all of its tools, plugins, and integrations: updating individual tools to new versions, integrating those versions into the larger ecosystem, troubleshooting issues, and making improvements. These efforts are frequently underfunded and understaffed.

To avoid these problems, run the pipeline as a shared service or software factory. Dedicated professionals keep the pipeline working well while relieving the product teams of these responsibilities. But don’t let the service become a bottleneck or a silo; a well-balanced interaction model is the key to success.

Discovering and resolving defects late

How frequently do you discover defects late in the development process, and how often does this force you to postpone a release or compromise the quality of your solution?

A defect found later than necessary typically blocks the discovery of further defects, or demands far more effort to locate and fix. Integration becomes a nightmare plagued by protracted stabilization periods. That is nowhere close to continuous delivery.

Some examples include:

  • A bug that crashed the system shortly after tests began halted performance testing, so a performance issue was discovered late. The root cause was a large defect backlog, which delayed the fix and its deployment to the performance-testing environment by weeks.
  • Because a new component version’s quality was low, the consuming team, which relied on it to complete its work, considered early integration a waste of time. But the component was required for a lot of functionality, and when it was finally merged, a number of issues surfaced late. The fixes took time, and the system as a whole was delayed. The size of the deployed batch raised the deployment risk, followed by more service outages and repair effort.

Faster feedback lets developers learn and repair defects early (defect prevention), so they improve and make fewer mistakes. The fewer defects in the system, the higher the throughput.

Make sure your pipeline is set up for fast feedback and that defects are fixed as soon as they appear. This boosts productivity and your engineers’ performance while reducing work in progress.

Ignoring error types

Most error types can only be detected and corrected once a specific stage has been reached. If that stage is skipped, the late-discovery anti-pattern described above emerges. Each test stage has its own characteristics: feedback time, the error types it can catch, repair costs, test-infrastructure cost and effort, and environment congruence.

Feedback is most immediate in the developer IDE: you get it while typing or, at the latest, when you build and run your local unit tests. Fixing is fast and cheap because the developer is still immersed in the code and needs little effort to locate the problem.

The most common fault type here is the error of commission: the IDE’s static-analysis plugin flags a potential memory leak, the security scanner exposes a potential vulnerability, the unit tests catch an off-by-one error, and so on.
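As a small illustration (the function and tests are hypothetical), a local unit test catches an off-by-one error of commission the moment it is written, at essentially zero repair cost:

```python
def last_n(items, n):
    """Return the last n elements of a list."""
    # A classic error of commission here would be writing items[-n:-1],
    # which silently drops the final element; a unit test catches it at once.
    if n <= 0:
        return []
    return items[-n:]

# Minimal unit tests, including the boundary cases where off-by-one
# errors typically hide.
assert last_n([1, 2, 3, 4], 2) == [3, 4]
assert last_n([1, 2, 3], 3) == [1, 2, 3]
assert last_n([1], 0) == []
```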

Static code analysis at build time can detect the same errors as the IDE plugin, but usually does so later. Issues pile up in a report that must be assigned to the appropriate developer, so they take longer to resolve. Given the long list of potential findings, time constraints, and process overhead, fixes are frequently never implemented at all.

While it makes sense to run static code analysis during the build to enforce code cleanliness, IDE-based checks are preferable for developer feedback and productivity.

Errors of omission are things the developer overlooked in the implementation. Unit tests don’t find them because you don’t know what you don’t know, but component tests frequently do, since their goal is to check whether the component accomplishes what it should. TDD applies this perspective during unit testing and can help simulate the consumer’s view earlier in the process.
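A minimal TDD sketch of that idea, with entirely hypothetical names: the test is written first and encodes the consumer’s expectations, including the edge cases an implementer might otherwise omit.

```python
# The test is written first; it encodes the consumer's expectations,
# including edge cases an implementer might otherwise overlook.
def test_parse_price():
    assert parse_price("$1,234.50") == 1234.50
    assert parse_price("  $7 ") == 7.0
    assert parse_price("") is None   # omission errors often hide in edge cases

# The implementation follows, shaped by the expectations above.
def parse_price(text):
    cleaned = text.strip().lstrip("$").replace(",", "")
    if not cleaned:
        return None
    return float(cleaned)

test_parse_price()
```

Writing the empty-string case up front is exactly the kind of consumer-perspective thinking that turns a would-be omission error into a cheap, early fix.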

Integration tests can detect interface flaws and any other dynamic issue that only occurs at runtime. Up to this point, testing has focused on smaller parts of the system, feedback has been fast, and test-environment costs have been low.

With system tests, cost becomes a more serious issue. These tests need the “full” environment: systems, integrated systems, software configuration, and installation. That means completely setting up and deploying what is often a massive environment.

System tests exercise the entire system, but technical or financial constraints may force you to operate with stubbed external integrations or reduced infrastructure.

Nonetheless, this is the first opportunity to verify actual end-user functionality. It reveals errors of omission once more, such as missing routines or parts of a given feature. It is also the first time you’ll encounter behavioral issues, such as crashes or errors caused by bad runtime interactions or network problems between system components.

Testing from the end user’s perspective is also the first opportunity to spot functionality that was implemented incorrectly, whether due to ambiguity and lack of clarity in the specification or misunderstood requirements from the product owner or product manager.

Staging, in turn, checks two things: whether the (hopefully) automated deployment works, and whether production-like infrastructure and external system integrations behave as planned. Environment congruence ensures that problems with networking, firewalls, storage, security rules, high availability, performance, and the like are discovered before they cause trouble in production.

Production is the last possible stage in your pipeline, so feedback arrives as late as it possibly can. Locating, debugging, reproducing, and correcting a problem at this point is time-consuming and costly. Yet because the earlier phases lack full environment congruence (the degree to which setups are identical), some environment-specific faults can only surface here. The same goes for unknown usage patterns and issues with production data.

Examine which types of defects you currently find at which stage. That data will show you how to improve your testing approach, speed up feedback and learning, reduce costly rework, and improve pipeline quality, flow, and throughput.
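One lightweight way to start that examination is to record, for each defect, the stage where it was introduced and the stage where it was found, then tally the “escape distance” between the two. The records below are hypothetical; the point is the shape of the analysis, not the numbers.

```python
from collections import Counter

# Hypothetical defect records: (stage_introduced, stage_found).
STAGES = ["unit", "component", "integration", "system", "staging", "production"]
defects = [
    ("unit", "unit"), ("unit", "system"), ("component", "integration"),
    ("unit", "production"), ("integration", "staging"),
]

# A defect "escapes" every stage between introduction and discovery;
# the greater the distance, the costlier the rework.
escape_distance = Counter()
for introduced, found in defects:
    escape_distance[STAGES.index(found) - STAGES.index(introduced)] += 1

caught_immediately = escape_distance[0]        # found where introduced
escaped = sum(v for d, v in escape_distance.items() if d > 0)
```

A rising share of long-distance escapes is a direct signal that an earlier test stage is being skipped or is not catching the error types it should.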

Missing an explicit deployment strategy

Automatic deployment of new code and infrastructure is critical to continuous flow. Any reliability problems here, whether from a lack of automation or from error-prone deployment code, stifle flow, delay feedback, and result in bigger batch sizes. You therefore need a well-defined deployment strategy.

Develop your deployment scripts and configurations alongside your application, following the same guidelines: use configuration management, and architect for flow, testability, robustness, and ease of debugging. Optimize rollout speed where necessary.

Application code is built and packaged, and the packages are then installed in the system test environment. This step verifies both that the deployment scripts work as expected and that the application’s functionality passes its tests.

As soon as you have a good enough version, deploy to staging to see whether the solution works in this “as close to production as possible” system. Include any tests that could not run in the system test environment for lack of production-like data or infrastructure, such as standby databases for high-availability setups.

For specialized tests such as performance, stress, internationalization, continuous hours of operation, and disaster recovery, you may need to establish additional environments.

Before you finally deploy to production, you should have high confidence that all relevant issues have been resolved. Failing in staging is fine; failing in production is a different story.

The less alike staging and production are, the more likely you are to suffer service delays and outages. This is especially true if your deployment scripts have to branch between staging and production. Balance your investment toward a staging environment that is congruent with production.
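One way to avoid that branching is to keep a single deployment code path and inject only environment-specific configuration. A minimal sketch, with all names and values hypothetical:

```python
import json

# One deployment code path for every environment; only configuration
# differs. All names and values here are hypothetical.
ENVIRONMENTS = {
    "staging":    {"replicas": 2, "db_host": "db.staging.internal"},
    "production": {"replicas": 6, "db_host": "db.prod.internal"},
}

def render_deploy_config(env):
    """Produce the deployment manifest for a given environment."""
    if env not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {env}")
    cfg = ENVIRONMENTS[env]
    # The manifest structure is identical everywhere, so a staging deploy
    # genuinely rehearses the production deploy.
    return json.dumps({
        "app": "shop",
        "replicas": cfg["replicas"],
        "db_host": cfg["db_host"],
    })

staging_manifest = json.loads(render_deploy_config("staging"))
```

Because the same function renders every environment, any script bug shows up in staging before it can take production down.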

Not separating deploy and release

Many people don’t understand the difference between deployment and release, and they miss out on a significant benefit as a result. Deployment means getting new code into the production system and ensuring it is compatible with the existing code, while keeping the new feature hidden from end users. Releasing means making that new functionality available to users.

With the necessary technical capabilities in place, this need not be an all-or-nothing step; functionality can be introduced gradually to selected users.

Here is a hypothetical scenario:

  1. The code is pushed to the production environment.
  2. Once deployment succeeds, the new features are turned on for a group of testers only. They can carry out the necessary tests without affecting real users. Meanwhile, watch your telemetry and look for any irregularities.
  3. If everything works as planned, expose the functionality to real users. Start with a small group and expand gradually until all users have access.

If unwanted side effects appear at any point, roll back or fix forward right away.

Treating deployment and release as one and the same increases the risk of service disruptions and poor quality. To avoid this, keep deployment and release separate.
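The scenario above is typically implemented with feature flags. The sketch below shows the idea under stated assumptions: flag names, tester lists, and percentages are invented, and the bucketing uses a stable hash so each user keeps a consistent experience as the rollout percentage rises.

```python
import hashlib

# The deployed code already contains the new feature; a flag decides who
# sees it. Flag names, testers, and percentages are all hypothetical.
FLAGS = {
    "new_checkout": {"testers": {"alice", "bob"}, "rollout_percent": 0},
}

def is_enabled(flag, user_id):
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False                 # unknown flags stay off
    if user_id in cfg["testers"]:
        return True                  # testers see the feature first
    # A stable hash gives each user a consistent experience as the
    # rollout percentage is raised step by step (e.g. 5 -> 25 -> 100).
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < cfg["rollout_percent"]

# Deployed but not released: real users see nothing new yet.
assert is_enabled("new_checkout", "alice")        # tester
assert not is_enabled("new_checkout", "carol")    # rollout still at 0%
```

Releasing then becomes a configuration change (raising `rollout_percent`) rather than another deployment, which is exactly what makes instant rollback cheap.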

Not making a conscious decision about your delivery model

Some circumstances make working with small batch sizes difficult, negating the benefits of a well-functioning flow-based system. These may prompt you to reconsider your delivery model.

Here are a few circumstances that can cause this.

Scenario A

An on-prem product has been successful on the market for years, with many customers and a large support matrix. Testing all the diverse configurations takes a lot of time and capacity, which piles up work and leads to larger batch sizes and longer stabilization phases.

Customers are dissatisfied as they keep discovering issues in their on-premises systems. That increases the maintenance burden and eats into available capacity, capacity that can no longer fund new, innovative features or solutions.

Meanwhile, the software-as-a-service (SaaS) competition is catching up. In this situation, switching to a SaaS offering with a continuous delivery model could free up the needed capacity, allowing the company to improve customer satisfaction and market share.

Scenario B

A corporation adopts a new version of a commercial off-the-shelf (COTS) product every six to 18 months. Because of the adjustments and integrations required in the existing system, consuming a new version is a risky and costly three-to-six-month project.

To avoid the risks and costs of such an upgrade, the corporation skips a couple of releases. It gets no new features yet still has to cope with defects, while the vendor must support a large number of older versions and go to great lengths to provide the necessary updates and hotfixes.

It’s a lose-lose situation. Replacing the COTS product with a SaaS solution could solve the problem: regular, small updates lower both the risk of failure and the time needed to incorporate each one into the customer’s environment. The customer can choose when each upgrade takes place so they can test and prepare for the deployment, and the SaaS vendor should be transparent about the solution’s backward compatibility.

Not having a shift-left focus

The later you complete specific tasks, the later new capability becomes available. This becomes obvious when every release goes through a hardening period of several weeks. The longer the hardening period, the more opportunity there is to shift activities to the left.

Look at the activities you perform during the hardening phase and ask how they could be done sooner; that tells you what you can shift left. Turn the items in your definition of done (DoD) into exit criteria for stages in your story or feature kanban system.

It’s time to optimize your pipeline

Keep an eye on your golden goose: to get the most out of your software development, optimize your DevOps pipeline. Always think in terms of flow and work to eliminate bottlenecks. Automate everything; it pays dividends in higher flow, faster feedback, and better quality over the mid to long term.

Make the most of your test stages and uncover defects as early as feasible. Remember that shoddy code will not scale.

Also, be deliberate about your delivery model: will it let you keep up with your competitors in the long run? Reduce and eliminate hardening periods by shifting activities to the left. Keep these points in mind, and you’ll be able to get your hands on the golden eggs.