Five Ways Log Analytics Benefits Your CI/CD Strategy

The following is a guest post from Ariel Assaraf, CEO of Coralogix.

In a recent study conducted by IDC, 90 percent of organizations surveyed were either already making use of log analytics or planning to build out their log analytics capability. Logs are among the heroes of continuously changing environments, and a methodical, consistent approach to logging across your applications, network infrastructure, firewalls, databases and more will pay dividends time and time again.

We often discuss logs as a troubleshooting mechanism, to be employed after an issue is detected, owing to their historical place in the operational hierarchy. This attitude is not entirely incorrect, but it ignores much of the tooling that is becoming ubiquitous across the software industry. Logs can do much more than simply tell you when things are going wrong. Rather than waiting for your customers to tell you when there is a problem, your log analytics can tell you when there is going to be a problem. This early warning, in turn, reduces outages and contributes to an exceptional customer experience.

In the same IDC survey, 88 percent of companies were either using or planning to adopt CI/CD strategies to drive forward their organizational agility. Classic change management has become the hallmark of the failing enterprise, characterized by slow, clunky, error-prone releases. In the modern software economy, your ability to change is your currency. To survive and, most importantly, compete, your organization must see change as one of the key components of its operations process. You need to stop simply delivering, and start continuously delivering.

1. Measuring Success

When CI/CD is first rolled out, many organizations run into a common problem: what does “success” look like?

For an engineer, success is how smoothly the code is deployed into production. If the deployment finishes in good time, without any errors, then for your IT department it’s time to break out the champagne. Your sales team, however, will still be glued to their screens, waiting to see what the customer does and whether all of that effort was really worth it. This divide is common across many companies, and without a single way to measure the success of the organizational effort, silos naturally form.

“Business analytics” was cited by 7 percent of surveyed organizations as one of their uses of log analytics, with a further 41 percent planning to make use of this functionality in the future. Instead of each department deciding that its work is done once a feature is deployed, everyone can know firsthand the impact they had on the company strategy.
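
To make this concrete, here is a toy Python sketch of the idea, using an invented event schema: a release marker in your logs lets engineering and sales look at the same before-and-after picture of a business metric.

    from datetime import datetime

    # Hypothetical events pulled from a log analytics query; the schema and the
    # release timestamp are invented for illustration.
    release_at = datetime(2020, 5, 1, 14, 0)
    events = [
        {"at": datetime(2020, 5, 1, 10, 0), "type": "view"},
        {"at": datetime(2020, 5, 1, 11, 0), "type": "view"},
        {"at": datetime(2020, 5, 1, 12, 0), "type": "purchase"},
        {"at": datetime(2020, 5, 1, 15, 0), "type": "view"},
        {"at": datetime(2020, 5, 1, 16, 0), "type": "view"},
        {"at": datetime(2020, 5, 1, 17, 0), "type": "purchase"},
        {"at": datetime(2020, 5, 1, 18, 0), "type": "purchase"},
    ]

    def conversion_rate(evts):
        """Share of product views that ended in a purchase."""
        views = sum(1 for e in evts if e["type"] == "view")
        purchases = sum(1 for e in evts if e["type"] == "purchase")
        return purchases / views if views else 0.0

    before = conversion_rate([e for e in events if e["at"] < release_at])
    after = conversion_rate([e for e in events if e["at"] >= release_at])
    print(f"conversion before release: {before:.0%}, after: {after:.0%}")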

2. Testing the Waters

We often think of the mayhem that ensues after a failed deployment. Customers booted off product pages, login failures, payment issues, data loss, security breaches and more are all possible outcomes when changes are released into production. When an issue does occur, we often look to roll back the latest release.

In an environment of continuous change, however, it is not only the latest change that we care about. There could have been 25 changes in the past hour. Is it prudent to keep track of all of them manually? That mentality simply won’t scale. Confidence that your system is healthy is the basis of rapid change.

Fifty-two percent of the organizations surveyed were using log analytics for passive performance monitoring. Fifty-six percent were using log analytics to detect security issues, and nearly 28 percent were inspecting their logs for regulatory violations. These are not “after the fact” debugging techniques; they are continuous pulses of information emanating from your system.

Using log analytics in this way gives you a bedrock on which to deploy your latest features. If you know the system isn’t faltering, is compliant with internal policy and regulation, and is performing optimally, you can be confident that your change is about to be released into a stable environment, maximizing your chances of a successful release, time and time again.

A key requirement here is that your change mechanism (usually a CI/CD pipeline built on a tool such as Jenkins) regularly feeds information into your log analytics platform. For example, a log analytics solution like Coralogix can consume data directly from your Jenkins pipeline, differentiating between releases and generating machine-learning-assisted reports to detect unexpected changes. A direct line from your pipeline to your log analytics will give you confidence that if a change has disrupted the harmony of your architecture, it won’t go unnoticed.
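
As a rough illustration of that direct line, the Python sketch below could be invoked as a pipeline step to forward release metadata to an analytics platform over HTTP. The endpoint, environment variables and payload fields are assumptions made for the example, not Coralogix’s actual API.

    # deploy_notify.py - a minimal sketch of forwarding release metadata from a
    # CI/CD pipeline to a log analytics platform. The endpoint URL, API key and
    # payload fields are illustrative assumptions, not a specific vendor API.
    import json
    import os
    import sys
    import urllib.request

    def report_release(version: str, status: str) -> None:
        payload = {
            "application": "checkout-service",  # hypothetical service name
            "release": version,                 # e.g. the Jenkins BUILD_TAG
            "status": status,                   # "started" | "succeeded" | "failed"
        }
        req = urllib.request.Request(
            os.environ["LOG_ANALYTICS_URL"],    # assumed ingestion endpoint
            data=json.dumps(payload).encode(),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {os.environ['LOG_ANALYTICS_KEY']}",
            },
        )
        urllib.request.urlopen(req, timeout=10)  # raise, failing the build step

    if __name__ == "__main__":
        # Typically called from a pipeline step, e.g.:
        #   python deploy_notify.py "$BUILD_TAG" succeeded
        report_release(sys.argv[1], sys.argv[2])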

3. Getting Back on your Feet

Even with all of these amazing new capabilities, changes will fail. Even with the most rigorous analysis, testing, measurements, operational insight and expertise, something is going to blow up at some point. It is a simple fact of modern software engineering that, in much the same way that we need to befriend change, we need to accustom ourselves to the reality of failure.

Fifty-six percent of organizations were using their log management capabilities for IT troubleshooting, and 19 percent were using their logs to find problems in their production applications. With a powerful log management solution, problems that would typically have taken days to diagnose can be triaged and identified in minutes.

By bringing logs from different parts of your system into the same place, relationships can be found. For example, the application code might be fine, but the firewall through which its traffic passes might be manipulating the data in an unforeseen way. These types of issues are almost impossible to diagnose quickly without an in-depth understanding of both the journey that each request takes and the components through which it moves.
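
To illustrate, once every source stamps its logs with a shared request ID, a correlation as simple as the Python sketch below can surface requests that were altered in flight. The field names are hypothetical stand-ins for whatever your schema provides.

    # A minimal sketch of cross-source log correlation. Field names such as
    # "request_id" and "body_hash" are hypothetical; real schemas will differ.
    def find_mutated_requests(app_logs, firewall_logs):
        """Return request IDs whose payload changed between firewall and app."""
        seen_at_firewall = {log["request_id"]: log["body_hash"] for log in firewall_logs}
        return [
            log["request_id"]
            for log in app_logs
            if log["request_id"] in seen_at_firewall
            and log["body_hash"] != seen_at_firewall[log["request_id"]]
        ]

    # Example: the firewall rewrote the body of request "r2" en route.
    firewall = [{"request_id": "r1", "body_hash": "aaa"},
                {"request_id": "r2", "body_hash": "bbb"}]
    app = [{"request_id": "r1", "body_hash": "aaa"},
           {"request_id": "r2", "body_hash": "ccc"}]
    print(find_mutated_requests(app, firewall))  # ['r2']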

By coupling a highly efficient log management solution with a quick, sophisticated CI/CD process, issues can be diagnosed and fixed in mere minutes, reducing the danger of deployment and turning dangerous gambles into calculated risks.

4. Looking Inward

Continuous delivery does not happen overnight. Many organizations find themselves frustrated in the early days of continuous change. Our minds are habit-forming and the regularity of release cadences provides a safety blanket that can feel much more comfortable than the dangerous world of free-flowing, autonomous change.

This is why we simply cannot rely on our intuition to measure our performance. We need a source of data, analyzed and rendered, to give us a clear indication of how far along our CI/CD journey we are. As noted in the IDC report, organizations are increasingly turning to log analytics to shed light on this area.

By measuring deployment frequency, average time to resolution, average time to failure and lead time to production, and by benchmarking release versions within their CI/CD pipelines, organizations can begin to truly reflect on their performance.
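
As a toy illustration of how these numbers fall out of structured logs, the Python sketch below derives deployment frequency and average time to resolution from a handful of invented event records.

    from datetime import datetime, timedelta

    # Hypothetical structured events, as a log analytics query might return them.
    events = [
        {"type": "deploy",   "at": datetime(2020, 5, 1, 9, 0)},
        {"type": "deploy",   "at": datetime(2020, 5, 1, 14, 0)},
        {"type": "incident", "opened": datetime(2020, 5, 1, 14, 5),
                             "resolved": datetime(2020, 5, 1, 14, 25)},
        {"type": "deploy",   "at": datetime(2020, 5, 2, 11, 0)},
    ]

    deploys = [e for e in events if e["type"] == "deploy"]
    incidents = [e for e in events if e["type"] == "incident"]

    # Deployments per day over the window spanned by the deploy events.
    span_days = (deploys[-1]["at"] - deploys[0]["at"]) / timedelta(days=1)
    frequency = len(deploys) / max(span_days, 1)

    # Average time to resolution across all incidents.
    mttr = sum((i["resolved"] - i["opened"] for i in incidents), timedelta()) / len(incidents)

    print(f"deployment frequency: {frequency:.1f}/day, time to resolution: {mttr}")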

CI/CD isn’t simply about ensuring rapid change in your software. It’s about creating feedback cycles in your organization. Some of those feedback cycles focus on the market and its trends, but others will look towards your processes and practices.

By embracing change in both your software and your operational practice, you are not just placing yourself in the best possible position to build a reliable product that excites your customers. You are creating an environment in which those products can continue to grow and change, scalably and reliably.

5. Scalable Maintenance Costs

As your organization grows, so too does the complexity of your software. More and more time is spent simply trying to keep the lights on and, more importantly, working out which fuse has blown when the lights go out. The history of software enterprise is littered with organizations that failed to keep track of their operational expenditure.

Keeping track of the changes that are occurring is a key element of this. Audit trails are important for regulatory compliance, but they’re also an invaluable tool for debugging and tracing.

As you can imagine, this growing volume of log data takes up quite a lot of space. Thirty-two percent of organizations surveyed were processing between 100GB and 500GB of log data per day for their analytics systems, with a further 4 percent processing over 1,000GB per day. This was especially true of financial services companies, of which 15 percent found themselves north of the 1,000GB mark.

Storing this data is only part of the problem. Performing calculations over such a massive dataset can be slow and expensive, as indicated by 32 percent of respondents. To speed up these queries, organizations hire experts just to access the data they already have, and their costs quickly spiral.

This is a challenge you will need to overcome, but with providers such as Coralogix offering tools to optimize log data usage, there are ways around it. For example, you could assign priorities to different types of logs based on their business value, then ingest and process them to varying degrees to strike the right balance between cost and value, as in the sketch below. This would enable you to store the volumes of data found at the top end of this survey without needing to hire an army of engineers to keep your log analytics afloat.
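
Here is a minimal Python sketch of such a priority scheme. The tier names, priorities and record fields are illustrative assumptions rather than a specific vendor feature.

    # A minimal sketch of priority-based log routing. The priorities, tiers and
    # record fields are illustrative assumptions, not a specific vendor feature.
    TIERS = {
        "high":   "index_and_alert",  # full indexing, alerting, dashboards
        "medium": "index_only",       # searchable, but no real-time alerting
        "low":    "archive",          # cheap object storage, rehydrate on demand
    }

    def route(record: dict) -> str:
        """Pick a processing tier for a log record based on its priority."""
        return TIERS.get(record.get("priority", "low"), "archive")

    logs = [
        {"msg": "payment failed", "priority": "high"},
        {"msg": "GET /healthz 200", "priority": "low"},
        {"msg": "cache miss", "priority": "medium"},
    ]
    for record in logs:
        print(route(record), "<-", record["msg"])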

Continuous Delivery is Here to Stay

Log analytics offer a wellspring of information that can help accelerate your CI/CD processes. No longer reacting to outages, you can prevent them proactively. No longer guessing at the underlying cause of an issue. No longer guessing whether your organization is delivering software more efficiently over time. These are the patterns of a company that is ready to take on even the most competitive markets.

Combining a mindset of continuous delivery with the data-driven insights that log analytics can offer is an outstanding first step on that journey.