Top Ten Challenges - Part 1: Image Vulnerability Scanning

(This is part 1 of a series about challenges and considerations that I have come across with customers when deploying software into Kubernetes. You can find the intro post here.)

Kubernetes is a management framework for containers. Each container is based on an 'image', which is a binary representation of the software and all related files the container uses, including files supporting operating-system-level functions, plus any packages needed to run the software, for example, a Java runtime.

Container images can contain so-called 'vulnerabilities', that is, weaknesses that hackers or malicious software can exploit to execute logic the software was never intended to run. New vulnerabilities are discovered all the time; they are prioritized and published as 'Common Vulnerabilities and Exposures', or CVEs. Various tools exist in the market that scan container images to detect and report these vulnerabilities.

Organizations using Kubernetes will typically scan any image running in their environment with a scanner of their choice and apply whatever policy they have defined for handling the results. I have seen IT teams scan images before making them available for internal consumption, in an offline process where the scan sits between the image being pulled from a public registry and the internal image registry that serves internal IT systems. There are also teams who use scanners that run 'live' on the cluster, that is, they continuously scan every image being pulled.
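The offline "gate" between a public registry and the internal one can be sketched as a simple policy check over a scan report. The report shape below is hypothetical (loosely modeled on the JSON that scanners tend to emit), and the blocking severities are illustrative, not a standard:

```python
# Minimal sketch of an offline scan gate. The report format, image name,
# and severity threshold are all illustrative assumptions.

BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}

def image_is_promotable(report: dict, blocking=BLOCKING_SEVERITIES) -> bool:
    """Return True if the scanned image may be pushed to the internal registry."""
    for finding in report.get("vulnerabilities", []):
        if finding.get("severity") in blocking:
            return False
    return True

# Example report, as such a scanner might produce it (hypothetical data):
report = {
    "image": "registry.example.com/library/myapp:1.2.3",
    "vulnerabilities": [
        {"id": "CVE-2021-0001", "severity": "MEDIUM"},
        {"id": "CVE-2021-0002", "severity": "CRITICAL"},
    ],
}

# The critical finding blocks promotion to the internal registry.
print(image_is_promotable(report))
```

A real pipeline would of course read the report from the scanner's actual output and apply the organization's own severity policy; the point is only that the gate is a mechanical check once the policy is defined.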

Typically, there is a defined time window within which a known vulnerability has to be addressed and patched images published accordingly. Those policies will almost always differentiate between critical and less critical vulnerabilities.
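Such a severity-based time window can be expressed as a small lookup. The window lengths below are made-up examples; real numbers come from the organization's own policy:

```python
from datetime import date, timedelta

# Sketch of a severity-based remediation policy. The window lengths are
# illustrative assumptions, not a recommendation.
REMEDIATION_WINDOWS = {
    "CRITICAL": timedelta(days=7),
    "HIGH": timedelta(days=30),
    "MEDIUM": timedelta(days=90),
    "LOW": timedelta(days=180),
}

def remediation_deadline(published: date, severity: str) -> date:
    """Date by which a patched image must be published under this policy."""
    return published + REMEDIATION_WINDOWS[severity]

def is_overdue(published: date, severity: str, today: date) -> bool:
    """True if the remediation window for this finding has elapsed."""
    return today > remediation_deadline(published, severity)
```

For example, a critical CVE published on January 1 would, under these illustrative windows, need a patched image by January 8.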

Consequently, these organizations will scrutinize images they obtain from software vendors such as IBM, or from any other source, for example, when using container images based on open source. The discussions I get involved in are often about the processes we apply internally to scan for and patch vulnerabilities in our images, and how those align with the policies the customer has in place.

The challenges related to this include both the vendor's ability to provide patches within the defined time windows and the consuming organization's ability to apply the patched images to its environments. Both require a high degree of automation, ideally a fully automated CI/CD toolchain that can drive the end-to-end process - on both sides.

And remember, a container image consists of three main parts: (1) the base OS (we use Red Hat's UBI for all of our images), (2) any packages installed on top of the base OS, and (3) the application software itself. Vulnerabilities in the base have to be patched by whoever provides that base (in our case, Red Hat), and vulnerabilities in a package by that package's provider. Often, these are open source packages, where you rely on the owning community to do the work, so the time at which a patch becomes available is not under your control.
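The three parts map directly onto the layers of a typical Dockerfile. A minimal sketch, where the application jar and the choice of Java package are illustrative:

```dockerfile
# (1) The base OS layer -- here Red Hat's UBI, as mentioned above.
FROM registry.access.redhat.com/ubi8/ubi-minimal

# (2) Packages installed on top of the base, e.g. a Java runtime.
RUN microdnf install -y java-11-openjdk-headless && microdnf clean all

# (3) The application software itself (path and name are illustrative).
COPY target/myapp.jar /opt/myapp/myapp.jar
CMD ["java", "-jar", "/opt/myapp/myapp.jar"]
```

A scan of the resulting image can surface findings in any of the three layers, but only layer (3) is fully under the application team's control; (1) and (2) depend on the base and package providers shipping fixes first.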

Finally, note that the scanners, while all supposedly drawing on the same vulnerability data, don't all deliver exactly the same results. One difference, for example, may lie in whether all vulnerabilities are reported, or only those for which a fix is already available. That's important small print, of course.
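When two scanners disagree, a simple set difference over their reported CVE ids makes the gap visible. The CVE ids here, and the idea that the second scanner omits findings without an available fix, are illustrative:

```python
# Sketch of diffing two scanners' findings for the same image.
# All CVE ids and scanner behaviors are hypothetical examples.

def diff_findings(scanner_a: set, scanner_b: set) -> tuple:
    """Return (ids only reported by A, ids only reported by B)."""
    return scanner_a - scanner_b, scanner_b - scanner_a

# Scanner A reports everything it knows about:
report_a = {"CVE-2021-0001", "CVE-2021-0002", "CVE-2021-0003"}
# Scanner B, say, only reports findings that already have a fix:
report_b = {"CVE-2021-0001", "CVE-2021-0003"}

only_a, only_b = diff_findings(report_a, report_b)
```

In this example the diff immediately shows which finding the second scanner suppressed, which is exactly the "small print" worth checking in a tool's documentation.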

The takeaway here is that, in my opinion, every IT organization using containers and Kubernetes should have a defined policy and a related process for dealing with vulnerabilities. Most likely, your CISO team will insist on it anyway.
 
(Photo by Markus Spiske on Unsplash)

Comments

  1. Nice article @andre. Thanks for sharing.

  2. @andre do organizations integrate vulnerability scans in their DevOps pipelines? If not, what problems are caused by doing that?
