Running a vulnerability management process is a thorough job in which the advantages of each step must be identified and analyzed
When we talk about vulnerability management, many will think that we are just talking about keeping track of what has been detected, how serious it is and whether it has been fixed.
It’s true that these items are important and that a vulnerability management process should include them. However, many times there are other points that are overlooked or where it is decided not to dedicate resources because there is no clear benefit.
The idea of this post is to identify some of these points and analyze the advantages they can bring. One of our main goals will be to obtain information about our assets that will allow us to take actions to improve their security.
The Asset Catalog
The first step to gain control of the vulnerabilities in an organization is to be clear about what is under analysis. In this case, being in a software environment, we should know which applications are being used in the company and what their features and deployments are like.
Sometimes it is complex even to define what an application is, and this problem makes a lot of sense, since every organization treats the developments in a different way.
For this post, we will assume that an application is a set of interrelated functionalities that share both a common objective and some technological characteristics.
Once the minimum unit of work (the application) is clear, wouldn’t it be interesting to have an inventory or catalog with all the applications? We will call this the Asset Catalog, and these are some of the advantages of having one:
It is very useful to establish a categorization of the applications according to their relevance for the business, the sensitivity of the data they work with, etc. A vulnerability in the application the employees use to book parking spaces will not have the same relevance as the one that manages our customers’ payments.
This part will be closely related to the time required to resolve a vulnerability.
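As a minimal sketch of this idea, a catalog entry can record an application's business criticality and data sensitivity and derive a resolution deadline from them. The criticality tiers, application names, and the deadline values below are hypothetical; adjust them to your own classification scheme:

```python
from dataclasses import dataclass

# Hypothetical criticality tiers, each mapped to a maximum
# number of days allowed to resolve a vulnerability in the asset.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

@dataclass
class Application:
    name: str
    business_criticality: str   # one of SLA_DAYS' keys
    handles_sensitive_data: bool

def resolution_deadline_days(app: Application) -> int:
    """Days allowed to fix a vulnerability, tightened for sensitive data."""
    days = SLA_DAYS[app.business_criticality]
    return days // 2 if app.handles_sensitive_data else days

payments = Application("payments", "critical", True)
parking = Application("parking-booking", "low", False)
```

With this sketch, a finding in the payments application would have to be fixed in days, while the parking-booking tool gets a much longer window, which reflects the relevance-based prioritization described above.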
Asset Security Healthiness
Based on the information about asset vulnerabilities, and some additional parameters, the Asset Security Healthiness can be calculated. This is a metric developed by our Cybersecurity team that expresses the security level of an asset in a quantitative way.
An application is usually developed by a team, and this team will be responsible for fixing the vulnerabilities that are detected in it.
When applications are not clearly defined, we might be doubtful about who is to fix a new vulnerability. By having an asset catalog in which each application is mapped to a development team, it will be very clear which team is responsible for fixing a new vulnerability when it is detected, and it will even be possible to automate this process.
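The automation can be as simple as a dictionary lookup against the catalog. The application and team names below are invented for illustration:

```python
# Hypothetical application -> owning team mapping taken from the asset catalog.
CATALOG = {
    "payments": "payments-squad",
    "parking-booking": "facilities-it",
}

def assign_owner(finding: dict) -> dict:
    """Attach the responsible team to a newly reported finding."""
    team = CATALOG.get(finding["application"])
    if team is None:
        raise LookupError(f"{finding['application']!r} is not in the catalog")
    return {**finding, "owner": team}

finding = assign_owner({"application": "payments", "title": "SQL injection"})
```

The interesting failure mode is the `LookupError` branch: an unmapped application is itself a signal that the catalog has a gap that should be fixed.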
Advanced use cases
If we gather the technological characteristics of our applications in the asset catalog, when a security scan detects a vulnerability in a component, we will be able to establish which applications are vulnerable and how that affects the organization’s risk.
Another advanced use case would be the ability to prioritize improvements according to the overall impact; we may be able to fix a flaw in multiple applications at once if we address something common that affects them all.
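A sketch of the first use case, the component lookup: when a scanner flags a library, we query the catalog for every application that ships it. The component inventory below is invented for illustration:

```python
# Hypothetical technology inventory per application, from the asset catalog.
COMPONENTS = {
    "payments": {"spring-core", "log4j", "postgres-driver"},
    "parking-booking": {"flask", "sqlite"},
    "crm": {"log4j", "hibernate"},
}

def affected_applications(component: str) -> list[str]:
    """Return the applications that include a vulnerable component."""
    return sorted(app for app, deps in COMPONENTS.items() if component in deps)

hits = affected_applications("log4j")
```

Here a single advisory for `log4j` immediately yields the list of exposed applications, and a shared fix (upgrading the library) addresses all of them at once.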
Where to start?
This would be great, but I don’t have a catalog and it doesn’t look like I’m going to have one tomorrow… Rome wasn’t built in a day, and our asset catalog doesn’t need to be either.
A good way to build a catalog is to establish security milestones in the application lifecycle. This way, you can identify the applications while performing the security analyses you need, and go step by step.
Taking it one step further, if the approval of the application’s budget is bound to passing the first security milestone, then the onboarding of the applications in our catalog will be bulletproof. Of course, it is also possible to carry out an application inventory project, depending on the urgency and the available resources.
When a critical vulnerability is reported in the production environment of a business-critical application, it is usually resolved immediately. In the analogy of a car, this would be the equivalent of having the engine on fire.
It may not explode, but no one wants to wait until it happens. However, it is not uncommon to have a few minor mechanical failures that after some time result in a major breakdown.
It’s the same with security. It is not a matter of paying attention to the car only when the engine is on fire, but of fixing small breakdowns so that they do not become serious, and implementing preventive actions so that they never take place. When we are focused on delivery, it is easy for us not to solve all the security defects if they are not of high importance.
Continuing with the car analogy, it would be as if we see that we have worn tires and we leave them alone because we have a busy week. Perhaps we see that the wipers are not working properly but, since it is summer, we do not replace them, and so on.
It could even happen that we take the car to the workshop for an oil change and don’t even remember these things. To avoid such a situation, one approach would be to assign a severity, an owner, and a target resolution date for the vulnerabilities as they are reported. This way we will end up solving them, and we will progressively reduce our security technical debt instead of increasing it.
Many times, we will be subject to a regulation that will force us to have records and metrics on our vulnerabilities. To avoid having to collect all the information before the audit, it is advisable to use vulnerability management systems.
There are different approaches, and each one will be valid for a type of client and its volume of applications. Ideally, if you have many applications and an industrialized environment, you will opt for the implementation of a defect manager, but there will be situations where a spreadsheet might do the job.
It will depend on the process and the business needs more than on the tools that bring them to life.
Unifying the results of different types of analysis
It is most likely that we have different mechanisms able to identify security defects in our software. This is convenient, as it helps us cover different attack scenarios and enables a defense-in-depth security model. For example, we could have inputs from the following processes:
- Static application security testing (SAST)
- Dynamic web application security testing (DAST)
- Manual pentests
- Bug bounty programs
- Red Team exercises
It is complicated to manage application security by having to review the output of each of these processes manually. In addition, it makes it hard to correlate all the information to see the big picture.
However, by centralizing all security defects (for example in a defect manager), we will be able to have a global perspective of the asset without the need to review different platforms, which will help us to prioritize and obtain metrics focused on the asset and not on the analysis process.
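Centralization starts with normalizing each tool's output into one common finding schema. The field names per tool below are invented; real scanners each have their own output format:

```python
def normalize(tool: str, raw: dict) -> dict:
    """Map tool-specific fields onto one common finding schema."""
    if tool == "sast":
        return {"asset": raw["project"], "title": raw["rule"], "severity": raw["level"]}
    if tool == "dast":
        return {"asset": raw["target"], "title": raw["alert"], "severity": raw["risk"]}
    raise ValueError(f"unknown tool: {tool}")

findings = [
    normalize("sast", {"project": "payments", "rule": "SQL injection", "level": "high"}),
    normalize("dast", {"target": "payments", "alert": "XSS", "risk": "medium"}),
]

# Once all findings share one schema, grouping per asset is trivial,
# which gives the asset-centric (not tool-centric) view described above.
per_asset: dict[str, list[str]] = {}
for f in findings:
    per_asset.setdefault(f["asset"], []).append(f["title"])
```

The design choice is to translate at the ingestion boundary: every downstream report, metric, or SLA check then works on one schema regardless of which tool produced the finding.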
One of the key points to ensure that vulnerabilities are resolved is to designate owners and establish procedures that set resolution deadlines, commonly called Service Level Agreements or SLAs. To do this, we generally take into account both the severity of the vulnerability and the relevance of the asset.
But, if we are in charge of managing the vulnerability resolution process, keeping track of SLA breaches and notifying the owners is probably one of the last things we want to invest time and energy in. What if this was done automatically?
There are solutions that allow us to keep track of deadlines in real time and notify the owner about that low-severity finding that was reported five months ago and is about to breach its SLA.
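A sketch of such a check (the severity-to-deadline mapping and the findings below are assumptions; adapt them to your own SLAs):

```python
from datetime import date, timedelta

# Hypothetical SLAs: maximum days to resolution per severity.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def findings_to_notify(findings, today, warn_days=14):
    """Return findings past their SLA deadline or within warn_days of it."""
    due_soon = []
    for f in findings:
        deadline = f["reported"] + timedelta(days=SLA_DAYS[f["severity"]])
        if deadline - today <= timedelta(days=warn_days):
            due_soon.append((f["title"], f["owner"], deadline))
    return due_soon

open_findings = [
    {"title": "Weak cipher", "severity": "low", "owner": "web-team",
     "reported": date(2023, 1, 10)},   # deadline approaching
    {"title": "XSS", "severity": "high", "owner": "web-team",
     "reported": date(2023, 6, 20)},   # still comfortably within SLA
]
alerts = findings_to_notify(open_findings, today=date(2023, 6, 25))
```

Run daily, a check like this surfaces exactly the five-month-old low-severity finding mentioned above before it turns into a breach, without anyone manually reviewing deadlines.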
Keeping track of which projects have the highest rates of SLA breaches is another point to consider, as it will allow us to detect potential gaps with the development teams and find ways to close them.
The vulnerability management process provides us with information that, when analyzed correctly, can be very valuable. Next, we will look at some metrics we could obtain from it and the advantages they can bring:
Dynamic Security Risk
Dynamic Security Risk is a methodology developed by Tarlogic’s Cybersecurity team that allows us to identify, through different metrics, how the vulnerabilities present in a technology produce an impact that affects not only the technology itself, but also the information it stores and processes. The reason for this is the relationship between technology and information.
The most common defects
Each development team works in a different way, has a different expertise, uses one technology or another, and this means that the most common security defects tend to vary from one organization to another, and even between teams within the same company.
Identifying them can allow us to organize trainings or workshops focused on their prevention. In this way, through training and awareness, it is possible to reduce their occurrence and, more importantly, their cost of resolution.
Improve detection activities
When a vulnerability appears repeatedly, it can be interesting to design a procedure for its detection. The Security team could rely on development teams to perform basic checks before audits, or even automate periodic reviews.
Sizing the work teams
This is a delicate task. Relying on vulnerability management metrics we can, for example:
- Establish whether development teams are able to keep up with the resolution of reported security vulnerabilities, based on data about the volume of vulnerabilities reported and fixed.
- Determine whether security audits are causing a bottleneck in the production chain or whether, on the contrary, they keep up with the pace of releases.
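A simple sketch of the first check (the monthly counts are made up): compare findings reported against findings resolved to see whether the backlog grows over time:

```python
# Hypothetical monthly counts of findings reported and resolved.
reported = [12, 15, 10, 18]
resolved = [10, 11, 9, 12]

def backlog_trend(reported, resolved):
    """Cumulative open findings after each period; a steadily rising
    trend suggests the teams cannot keep up with the incoming volume."""
    backlog, trend = 0, []
    for new, fixed in zip(reported, resolved):
        backlog += new - fixed
        trend.append(backlog)
    return trend

trend = backlog_trend(reported, resolved)
```

In this made-up series the backlog grows every month, which would be a signal to revisit team sizing or prioritization before the debt compounds further.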
Vulnerability management process implementation
Depending on the model you start from, you will need to work on different aspects to make your vulnerability management model progress.
It is possible that, in some cases, the procedure is very well defined, but you have not found a way to implement it efficiently, in which case it makes sense to study which tools can be used to bring the model to life.
There are solutions, both commercial and open-source (for example DefectDojo), that simplify vulnerability management and make it possible to implement almost all the ideas discussed in this post.
However, we might be happy with our tool, but not so happy with the procedure. A symptom of this could be that we find ourselves recurrently facing the same problems: difficulties fitting the input from different tools into our data model, doubts about the steps to follow when a vulnerability is detected, and so on.
This situation is very similar to walking with a stone in our shoe. We know we are uncomfortable, but we keep walking and, somehow, we get used to that discomfort. To address it, we could stop and remove the stone from our shoe.
In our case, this would mean analyzing the vulnerability management process, finding possible improvements, and working on them until the process is optimized.
Can we help you?
At Tarlogic we have worked with different customers on the development of their vulnerability management systems, starting from an initial analysis, proposing incremental improvements to raise their maturity level, and looking for the tool that best suits the resulting model. If you think we can help you, do not hesitate to contact us.