End of open source: Dispelling the myth

Following the Log4j vulnerability disclosure in December 2021 and the recent case of a developer sabotaging his own JavaScript libraries, ‘colors.js’ and ‘faker.js’, the state of open source software has been called into question.

With a high-profile meeting at the White House on open source, and Executive Orders from US President Biden, some have even suggested that we are witnessing the ‘end of open source’. While it might be tempting to view a major vulnerability as an indication that open source is somehow deficient, the reality is far from that.

Open source software is neither more nor less secure than commercial software. In fact, most commercial software either includes or runs on open source technologies. Open source simply means that the software is developed in a manner where the source code is available to anyone who wants it.

What we have seen in the Log4j response from the Apache Log4j team is exactly what we would expect – a team that takes the software it produces seriously and is responsive to the needs of its install base. Considering that they are volunteers, such a response is indicative of the pride of ownership we often see within open source communities.

Rather than instigate the end of open source, an incident like Log4j is likely to improve open source development as a whole – much as Heartbleed improved the development practices of both open and closed source teams. So, if open source is here to stay, what should organisations do going forward to identify and mitigate vulnerabilities more efficiently?

Addressing software security at an organisational level

Well, the idea of identifying and mitigating vulnerabilities requires us to define some roles up-front. Most people expect their software suppliers – that is to say the people who produce the software they depend upon – to test that software. The outcome of that testing would be a set of findings highlighting the weaknesses in the software that the supplier produces. In an ideal world, each of those weaknesses would be resolved prior to the software shipping. 

In the real world, however, some of those weaknesses will be fixed, some will be marked as “no plan to fix”, and some will optimistically be fixed in a future release. What those weaknesses are, and which ones were fixed, is not something a supplier typically divulges. Moreover, no single tool can find all weaknesses: some only work if you have the source code, while others require a running application.

You will note that the word ‘vulnerability’ has not been used so far, as it has a special and simple meaning. In software, a vulnerability is simply a weakness that can be exploited, or that has a reasonable chance of being exploited.

Most, but not all, vulnerabilities are disclosed via a centralised system known as the National Vulnerability Database, or simply the NVD. While the NVD has roots in the US, and is maintained by the US Government, the contents of the NVD are available to all and replicated in multiple countries. From a governance perspective, monitoring for changes in the contents of the NVD is a good way of staying on top of new vulnerability disclosures. 
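
As a rough illustration of such monitoring, a simple job might poll the NVD’s public CVE API for recently modified records. The sketch below assumes the CVE API 2.0 endpoint and its lastModStartDate/lastModEndDate parameters; check the current NVD documentation before relying on it:

```python
import datetime

import requests

# Assumption: the NVD CVE API 2.0 endpoint; verify against nvd.nist.gov.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def recently_modified_cves(hours: int = 24) -> list:
    """Fetch CVE records modified within the last `hours` hours."""
    end = datetime.datetime.now(datetime.timezone.utc).replace(microsecond=0)
    start = end - datetime.timedelta(hours=hours)
    params = {
        "lastModStartDate": start.isoformat(),
        "lastModEndDate": end.isoformat(),
    }
    resp = requests.get(NVD_API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])


if __name__ == "__main__":
    for record in recently_modified_cves():
        cve = record["cve"]
        print(cve["id"], cve["descriptions"][0]["value"][:100])
```

Run on a schedule and filtered against the software an organisation actually uses, a job like this forms the skeleton of the governance approach described below.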

The problem is that the NVD updates more slowly than media coverage, so with major vulnerabilities like Log4Shell, Heartbleed and Dirty Cow, the team discovering the vulnerability might create a branded name for it in an effort to broaden awareness of the issue. But a governance policy built on monitoring media coverage of these cyber events is certainly not good practice.

If media coverage is a bad input to vulnerability management, and the NVD is a little slow to provide all the details, what is the best governance policy? The answer comes from a type of security tool known as “Software Composition Analysis”, or SCA. An SCA tool looks at either the source code for an application, or the executables and libraries that define it, and attempts to determine which open source libraries were used to create that application.
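
To make the idea concrete, a drastically simplified source-level pass might do nothing more than inventory the dependencies a project declares – here, pinned entries in a Python requirements.txt. This is only a sketch; real SCA tools also fingerprint binaries, resolve transitive dependencies, and match against curated knowledge bases:

```python
import re
from pathlib import Path

# Matches pinned requirements of the form "package==1.2.3".
PIN = re.compile(r"^([A-Za-z0-9._-]+)==(\S+)$")


def inventory(requirements: Path) -> list:
    """Return (name, version) pairs for dependencies pinned in the file."""
    components = []
    for line in requirements.read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        match = PIN.match(line)
        if match:
            components.append((match.group(1), match.group(2)))
    return components


if __name__ == "__main__":
    for name, version in inventory(Path("requirements.txt")):
        print(name, version)
```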

The listing of those libraries is known as an SBOM, or Software Bill of Materials. Assuming the SCA software does its job properly, a governance policy can then be created that maps the NVD data to the SBOM so you know what to patch… except that there is still the latency of NVD data to account for.
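
Setting the latency problem aside for a moment, here is a minimal sketch of that mapping, assuming a CycloneDX-style JSON SBOM and a hypothetical advisory table keyed by package name and version (production matching uses CPE or purl identifiers and version-range logic rather than exact lookups):

```python
import json
from pathlib import Path

# Hypothetical advisory data; in practice this would be built from NVD
# records or an SCA vendor's feed. The Log4Shell entry is illustrative:
# log4j-core 2.14.1 was affected by CVE-2021-44228.
KNOWN_VULNERABLE = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228",
}


def patch_list(sbom_path: Path) -> list:
    """Return (component, version, advisory) triples needing attention."""
    sbom = json.loads(sbom_path.read_text())
    findings = []
    for component in sbom.get("components", []):  # CycloneDX component array
        key = (component.get("name"), component.get("version"))
        if key in KNOWN_VULNERABLE:
            findings.append((key[0], key[1], KNOWN_VULNERABLE[key]))
    return findings


if __name__ == "__main__":
    for name, version, cve in patch_list(Path("sbom.json")):
        print(f"Patch {name} {version}: {cve}")
```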

Some of the more advanced SCA tools solve that problem by issuing advisories that proactively alert users when an NVD entry is pending, augmenting the details of that entry with the vendor’s own research. Some of the most advanced tools also invest in testing or validating exactly which versions of the software are affected by the vulnerability disclosure.

Nevertheless, while SCA software can close the gap between disclosure and identification, it has a fundamental limitation: if the SCA software has not scanned all of your applications, then at best it can flag new vulnerability disclosures only for the subset it has seen.

From a governance policy perspective, it then becomes an IT function to identify all software, and a procurement function to ensure that all software, including updates and free downloads, is covered by an SBOM and that the SBOM is validated using SCA software. Since software is available in both source and binary formats, it is critical that governance teams heading down this path select SCA software that can effectively process software in all forms and formats. Such a governance policy would assist in identifying new vulnerability disclosures and their impact on the business, but would leave the matter of effective mitigation to a different policy, since mitigation requires application testing.

Addressing software security as a community

Ensuring the security of one’s own technology is one thing, but the beauty of open source is that it is built to be collaborative.

To paraphrase Abraham Lincoln, open source is technology of the people, by the people and for the people. The modern open source movement was founded on the principle that if you did not like the way the code was working, then you were free to modify it and address whatever gaps in functionality were perceived to exist.

Part of the problem we face today is a sentiment that has consumers and users of open source projects behaving as if the project were a commercial software vendor.

If you look at the issues list of any reasonably popular open source project on GitHub, you will see feature requests and comments about when certain problems might be resolved. Such issue reports and complaints about serviceability carry an implicit expectation that a product manager is on the receiving end of those requests, and that they will be added to a roadmap and eventually released – all for free.

In reality, gaps in functionality, and even perceived bugs, represent opportunities not to request free programming services but to contribute to the future success of code that is evidently important to the person complaining.

Yes, some people won’t know the programming language used by the project, but to expect other people to prioritise a complaint from an unknown third party over changes that solve problems for active contributors is not realistic. As much as anything, open source functions through the altruism of contributors.

Over recent years we have heard core contributors to popular open source projects express frustration about the profits large businesses make from the use of their software. While it is easy to sympathise with someone who puts their energy into a project only to have a third party profit from the effort, the reality is that if a third party is profiting from the efforts of an open source development team, then it should be contributing to the team’s future success.

If it doesn’t, it runs the risk not only that the code in question might change in unexpected ways, but also that it might be delayed in applying fixes when security issues are identified and resolved. After all, if a business isn’t taking the time to engage with the teams creating the software that powers it, then it likely does not know where all of that software originates and cannot reliably patch it.

Finding vulnerabilities in open source is not the problem; detecting the software defects that represent exploitable weaknesses is the important task. While open source and closed source software have an equal potential for security issues, with open source it is possible for anyone to identify those issues. With that in mind, organisations must take proactive steps – ones that do not rely on media coverage – to monitor the latest vulnerabilities.

Equally important, they must play a contributing role in the open source projects they benefit from; otherwise they might fall victim to unexpected code changes or delayed awareness of critical patches.

Tim Mackey is Principal Security Strategist at Synopsys.

Original source: https://www.itproportal.com/features/end-of-open-source-dispelling-the-myth