The Golden Repo Is No Silver Bullet For Safer DevOps
When a manufacturer designs and ships a physical product that is proven to harm people, or even to put them at risk of harm, the maker generally withdraws it quickly. Factory recalls like this are commonplace across all industries and especially notable in areas such as children’s toys, automobiles, and industrial machinery, to take three examples. Yet software publishers are not held accountable when they ship products whose cybersecurity flaws endanger people and systems. That situation seems anomalous, given that in today’s world the majority of systems on which life relies run on software.
As data breaches become more commonplace and more damaging, that anomaly grows harder to justify. Can, and should, a software vendor be held responsible for flaws in its products that endanger, for example, the power supply of millions of people?
For Brian Fox, co-founder and CTO of Sonatype, the answer is a firm ‘yes;’ in fact, he argues it is not only an inevitable evolution of the software industry, but that the US Federal Trade Commission’s Bureau of Consumer Protection already has the power to hold publishers negligent. Speaking exclusively to Tech HQ, he said that if software makers ship an inherently insecure product, “that’s potentially already against the rules, you can be held accountable when harm is reasonably foreseeable […] Every other industry before us has gone through this transformation. Believe it or not, there was a time when food manufacturers were not liable for pests getting into their bottles and people getting sick, just 100 years ago […] There was a time when auto manufacturers were not held responsible when the wheels fell off their cars and hurt people. I say, look, if you’re betting that some form of accountability and liability doesn’t end up on the producers of software, you’re basically betting on something that has literally never happened in the history of […] any other industry. I don’t know what you think but I’m not taking that bet.”
The software industry has by no means agreed on what shape responsibility and accountability should take. A few decades ago, when proprietary products were physically shipped on CD, the chain of responsibility was clearer. But today’s quickly iterating production cycles and reliance on open-source libraries, frameworks, and components make the eventual picture much murkier. Ranking high in search engine results at the moment is the concept of the SBoM, the software bill of materials, which has attracted far greater interest since some of the better-known software supply chain vulnerabilities came to light, especially the Apache Log4j flaw of last year.
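To make the idea concrete: an SBoM is a machine-readable inventory of the components inside a piece of software. A minimal, illustrative fragment in the style of CycloneDX, one of the common SBoM formats, might look like this (the component shown is just an example, abbreviated to its core fields):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
```

Each listed component carries enough identifying detail (name, version, package URL) that it can later be matched against published vulnerability advisories, which is the whole point of keeping the inventory.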
Software organizations, companies, foundations, and communities are coalescing around positions, and the beginnings of governance and legislation are emerging. In Europe, the focus is currently on recalls and regulation in the form of elements of the Cyber Resilience Act. But, Brian said, although a step in the right direction, the EU is off-target in muddying the waters around what is and isn’t considered open source.
In the US, the Executive Order’s concentration on an SBoM seems to be moving towards a type of fire-and-forget legislation. “Pushing on […] being able to do a recall […] is one thing I’ve always kind of pushed on [too]. The thing that I didn’t like about the initial SBoM mandate was that it was a little bit too focused on the wrong part of the problem. They focused on the need to create a bill of materials and give it to us [end-users] when you sell a software. And the analogy I like to use is, imagine if we told our auto manufacturers you don’t need to do recalls anymore. All you got to do when you sell a car is print out the bill of parts and stick it in the glove box, right? People always laugh when I say that because it’s ridiculous. But that’s exactly what the executive order on SBoM says to do. It doesn’t do anything to encourage the companies to actually pay attention to that SBoM, and make sure when their vulnerability pops up in the future, they do something about it. The implication of that is putting the responsibility on the end user.”
It seems logical that if software development were to slow down and perhaps lose its tendency to embrace fast iterations, the end product would be somehow safer for the end user. But that’s not the case, Brian told us. Of the companies Sonatype had surveyed recently, “We found that companies that are solving for both problems [fast and secure development processes] are actually both more secure, and faster than the companies that focus only on being secure, or the company that is focused only on going fast. So that seems paradoxical.” He explained that companies that develop software quickly and safely have the right DevSecOps-focused CI/CD processes in place, and those companies are the most successful.
“Does doing a better job managing your supply chain mean you have to go slower? No, the answer to this is actually you’re safer, and you’re faster because you have fewer unplanned fixes and rework. Having efficient ways to manage your dependencies means developers can also safely innovate. This is one of these rare win-win scenarios, if it’s done properly. [Security] doesn’t have to be a drag on the system.”
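The “fewer unplanned fixes” point comes down to catching known-bad dependencies before they ship rather than after. A minimal sketch of that kind of automated check follows; the dependency list and the advisory set here are illustrative placeholders, not a real vulnerability feed:

```python
# Sketch: flag declared dependencies that match known-vulnerable releases.
# In practice the advisory set would come from a vulnerability database;
# here it is hard-coded purely for illustration.

# (name, version) pairs a build might declare.
dependencies = [
    ("log4j-core", "2.14.1"),
    ("jackson-databind", "2.13.4"),
    ("commons-text", "1.10.0"),
]

# Hypothetical advisory data: releases with published vulnerabilities.
known_vulnerable = {
    ("log4j-core", "2.14.1"),  # a Log4Shell-era release
    ("commons-text", "1.9"),
}

def audit(deps, advisories):
    """Return the subset of deps that appear in the advisory set."""
    return [d for d in deps if d in advisories]

flagged = audit(dependencies, known_vulnerable)
for name, version in flagged:
    print(f"ACTION NEEDED: {name} {version} has a known vulnerability")
```

Run in a CI/CD pipeline on every build, a check like this turns a future advisory into a routine failed build rather than an emergency rework, which is the win-win Fox describes.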
It’s borderline impossible to ship software that is and will remain entirely secure. Dependencies, updates, and new exploitation methods all conspire to ensure there’s no such thing as a sure thing. But, Fox points out, many vulnerabilities, present or future, remain theoretical: an exploit can only be leveraged by bad actors if the software is used in a way that exposes it. The wrong thought process, he told us, was “to think about this problem like vulnerabilities are tainted lettuce, and you shouldn’t be selling tainted lettuce. That’s not true. Malicious attacks are like tainted lettuce, and those things do get taken down from the repository post haste, but what the typical vulnerability looks like is more like peanut butter. It causes deadly allergies in some people, and for a bunch of other people, it’s totally fine.”
Brian Fox’s advisory role to governments and organizations has taken him all over the world, most recently working with the Biden administration on cybersecurity and the software issue. There are no easy answers like, for instance, a pre-vetted repository of so-called safe components. “There’s millions of components out there, it’s impossible for you to vet all of them in a sufficient way. It’s impossible. And companies that do that, they call it the golden repository. The problem is you indirectly harm innovation because you stop developers from using something that hasn’t already gone through some crazy process. But the worst problem is, most processes don’t actually follow back up and check the stuff that’s in the repo.”
And if that’s the case, the owner of the golden repo, the software development company maintaining the ‘all-safe’ component repository, remains effectively liable in the eyes of the end user for any damage arising from the software’s use.
There is no quick way to navigate the trail of legal liability for software, and experts deeply embedded in the software and cybersecurity industries, like Fox, will be instrumental in forming coherent policy. It will simply take more time before the rules amount to more than guidelines produced by multiple entities with different takes on the same problems. For the next few years, caveat emptor remains the best advice for staying safe in the daily use of software. And for development companies, DevSecOps has to be more than this month’s buzzphrase.