Posted on 2021-03-03 by Matt Strahan in Business Security
Earlier today Xerox reportedly threatened the Airbus Security Lab researcher Raphaël Rigo with legal action to prevent him from presenting at the Infiltrate security conference. Although obviously we haven’t seen the presentation, the summary said that he was going to talk about vulnerabilities in Xerox printers and give tips on how to secure them.
Is this going to prevent vulnerabilities from being exploited in the wild, or are the organisations that have Xerox printers now just less secure because they won't know the steps they might need to take to protect themselves?
Silencing research
Attempts to silence security research are nothing new. Salesforce fired two security researchers who presented on their new security tool, and Ars Technica and CSO were sued for trying to report on security issues.
This makes total sense from the product developers' point of view. Their risk is reputational: a bad reputation for cyber security will stop people and organisations from buying their products.
For the organisations and people who buy the software, the risks can be much more real. The software could lead to wider breaches, to ransomware and the massive disruption of business operations, to direct financial costs and to huge privacy violations. Businesses have been destroyed over vulnerabilities in software that simply weren’t their fault.
When you think of the total amount of risk incurred by a security vulnerability in a product, only a tiny sliver of that risk is borne by the developers of the product, and that sliver is mostly reputational. From the developer's point of view, it makes sense to try and just stop people from talking about the vulnerability, doesn't it? It's their way of balancing the risk.
Transparency leading to security
Sometimes making security issues public is what's needed to push companies into making things more secure. The unfortunate truth is that the companies that make the software aren't always the ones most at risk.
Sometimes it's only when people are talking about the vulnerabilities that the fixes get made. Take the case of Foxit Reader. Foxit declined to fix vulnerabilities in their software and only fixed them when they were made public. They sat on the vulnerabilities for months after first being made aware of them, then patched them within a week of their customers finding out.
It's a sad fact that because the direct risk to the developers tends to be mostly reputational, the incentive is to focus purely on reputational solutions: trying to control the conversation and prevent security issues from coming to light. Yet it is sometimes only when these security issues do come to light that things become more secure.
Living in the dark
A common feeling in the security industry is that you're living in the dark. You don't know what you don't know. Is signing off on this new project going to lead to our company being in the news?
You feel like you don’t know enough about your internal systems and the steps you need to take to protect them. There could be hidden vulnerabilities in any of them and a lack of information certainly doesn’t help.
This is where "security tips" like the ones apparently in the cancelled talk can really help. They give you actionable guidance you can use to protect your organisation, plus the information you need to make sensible decisions about whether to take those actions. It's a light in the dark world of cyber security.
The more actionable information you have, the better off you will be. With the right information you can know that the steps you are taking will genuinely make you more secure. This is where transparency in cyber security doesn't just help fix vulnerabilities but also helps improve internal practices and security architecture. Especially for devices (like, say, printers), you can make informed decisions about whether to isolate them on the network or how to treat them in a zero trust environment, as in the sketch below.
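As a purely illustrative sketch (the addresses and ports below are hypothetical, not specific to any Xerox device), a quick check like this can confirm that a printer only exposes the services you expect to the rest of the network:

```python
import socket

# Hypothetical printer addresses and the ports we expect to be reachable.
# Anything else answering suggests the device isn't as isolated as intended.
PRINTERS = ["10.20.30.11", "10.20.30.12"]
EXPECTED_PORTS = {9100, 631}  # raw printing and IPP
PROBE_PORTS = [21, 22, 23, 80, 139, 443, 445, 515, 631, 9100]


def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for printer in PRINTERS:
    open_ports = {port for port in PROBE_PORTS if reachable(printer, port)}
    unexpected = open_ports - EXPECTED_PORTS
    if unexpected:
        print(f"{printer}: unexpected ports open: {sorted(unexpected)}")
    else:
        print(f"{printer}: only expected services reachable")
```

Run from a workstation subnet, a result other than "only expected services reachable" is a prompt to tighten the network controls around the device.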
Balancing risk
There is, however, genuine risk in talking about unpatched vulnerabilities. Web application vulnerabilities can be exploited in the wild within hours of details coming to light. For a lot of organisations it's unrealistic to expect internal IT to be on standby 24 hours a day just in case a critical update for their infrastructure is released. With automated exploitation of vulnerabilities, organisations might be hacked before they're even aware an update exists.
This becomes even trickier for devices that are hard to update (let's say printers). Some devices may need to be updated out of hours, or they may not fit neatly into your patch management system. The time between a vulnerability dropping and the update being applied can stretch out.
So we have to have some way of balancing the risk. We don't want to leave internal IT in the lurch, yet we need to get information out. Luckily there are methods for handling this. Vendors can publish security updates with general details, including the severity of the security issues, and supplement them with simple processes for updating. The full vulnerability writeups can follow a month or two later, once there has been plenty of time to update.
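As a rough sketch of the "simple processes for updating" side (the device names, firmware versions and advisory minimum below are made up for the example), even a small comparison of your fleet against the minimum fixed version in a vendor advisory tells you where you still need to act:

```python
# Hypothetical fleet inventory: device name -> installed firmware version.
FLEET = {
    "printer-accounts": "3.2.1",
    "printer-reception": "3.4.0",
    "printer-warehouse": "2.9.5",
}

# Hypothetical vendor advisory: the first firmware version containing the fix.
ADVISORY_MINIMUM = "3.4.0"


def version_tuple(version: str) -> tuple:
    """Turn '3.2.1' into (3, 2, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))


for device, installed in sorted(FLEET.items()):
    if version_tuple(installed) < version_tuple(ADVISORY_MINIMUM):
        print(f"{device}: {installed} is below {ADVISORY_MINIMUM} - schedule an update")
    else:
        print(f"{device}: {installed} is patched")
```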
This won't stop exploitation completely, since attackers can, after all, reverse engineer the security update itself. It does usually buy enough time, though, to balance giving out vulnerability details against managing the threat of exploitation in the wild.
Coming back to Xerox
We don't really know the details of the Xerox issues yet. Have those bugs already been fixed? Was Xerox given enough time to patch them? Could the "security tips" the researcher would have recommended be vital information for protecting organisations?
The sad fact is that when it comes to vulnerabilities in printers, it isn't the developer that's at risk; it's their customers who might face severe disruption to their organisations. Unfortunately, my gut feeling is that the company is simply playing the game of managing its reputation at the expense of its customers' security.
About the author
Matthew Strahan is Co-Founder and Managing Director at Volkis. He has over a decade of dedicated cyber security experience, including penetration testing, governance, compliance, incident response, technical security and risk management. You can catch him on Twitter and LinkedIn.
Feature image by Bank Phrom on Unsplash.
If you need help with your security, get in touch with Volkis.
Follow us on Twitter and LinkedIn