The Case Against ‘Responsible Disclosure’

The debate over vulnerability disclosure is as significant as it is contentious. ‘Responsible disclosure’, a term popularized by software vendors in the early 2000s, advocates that security researchers privately report vulnerabilities to the affected vendor, allowing the company time to fix the issue before making it public. On the surface this approach seems reasonable – it gives vendors a chance to patch vulnerabilities without exposing their existence to the world, and by extension exposing users to exploits before a solution is available. However, a deeper examination reveals critical flaws in this system: it disproportionately favours corporate interests over public safety. Worse, the term “responsible” is both misleading and manipulative, suggesting that disclosures outside of this model are reckless, when in reality they may serve the greater good.

Thesis statement: Responsible disclosure unfairly empowers vendors to control vulnerability information, delaying public awareness and, in some cases, hampering security researchers. The terminology itself is misleading, pressuring researchers to conform to a system that prioritizes vendor reputation and profits over transparency and user safety. A more equitable disclosure model is needed.


Responsible disclosure, at its core, places power over vulnerability information in the hands of vendors. This is problematic because vendors, as profit-driven entities, often have priorities that conflict with the urgency felt by security researchers and users. When vulnerabilities are reported privately, vendors control how and when information is released, as well as how quickly (or slowly) they respond. In theory, this setup gives vendors time to address issues without exposing users to unnecessary risk; in practice, it often leads to significant delays in patching. Worse, during this period of silence, users remain unaware that they are even at risk of being exploited.

For example, in 2019 a security researcher discovered a severe flaw in Zoom, the popular video conferencing platform, which allowed hackers to access users’ webcams without their permission. Despite being informed of the vulnerability, Zoom was slow to act, and the researcher publicly disclosed it after the vendor failed to provide an adequate response. Only after the disclosure gained widespread attention did Zoom rush to fix the issue. In this case, public pressure was necessary to motivate the vendor to prioritize user security. This situation exposes the fundamental issue with responsible disclosure: it grants vendors the ability to manage the timing and scale of their response, often to users’ detriment.


The concept of responsible disclosure becomes even more troubling when we consider how it manipulates language to shift control over information. The word “responsible” is used to frame the disclosure of vulnerabilities as a moral obligation on the part of the researcher. It implies that researchers who go public with a vulnerability before the vendor has fixed it are acting recklessly. This characterization is not only misleading; it is a form of Orwellian doublespeak, wherein the term is used to control the narrative and conceal the true intentions of the system.

Consider George Orwell’s concept of “Newspeak” from his novel 1984 – it is language designed to eliminate critical thinking and dissent by redefining words to suit the needs of a ruling party. In much the same way, “responsible disclosure” redefines what is considered responsible behaviour, shaping the narrative in a way that benefits vendors. By labeling public disclosure as “irresponsible”, vendors are able to position themselves as the protectors of public safety, even though their delays in addressing vulnerabilities often leave the public exposed to harm. Researchers who choose to challenge the system by making vulnerabilities public are thus unfairly branded as reckless or self-serving.


A major counterpoint to the idea of responsible disclosure is the practice of full disclosure: vulnerabilities are made public as soon as they are discovered. This approach forces vendors to act quickly, as the public – and potential attackers – are immediately aware of the issue. While critics of full disclosure argue that it puts users at risk by exposing vulnerabilities before they are fixed, there are numerous examples where this method has led to faster patches and improved security in the long run.

Take the example of Google’s Project Zero, which has a policy of disclosing vulnerabilities 90 days after reporting them to the vendor, regardless of whether a patch has been issued. In 2020, Project Zero publicly disclosed a critical vulnerability in Windows after Microsoft failed to patch it within the 90-day window, forcing Microsoft to expedite its efforts and release a fix. Project Zero’s philosophy is that public disclosure creates a sense of urgency for vendors, preventing them from delaying patches indefinitely. Without the threat of public disclosure, vendors might be less motivated to prioritize security updates, knowing that they face no immediate consequences for failing to act swiftly.


One of the most troubling aspects of responsible disclosure is the way it frames public discourse by misusing the word “responsible”. The word suggests a moral high ground, where researchers must adhere to a vendor’s demands to avoid causing harm. In reality the opposite is often true – responsible disclosure can prolong the time users are vulnerable to attacks. Researchers who disclose vulnerabilities to the public are not monsters; they are not reckless or irresponsible. They are prioritizing public safety by ensuring that users are informed of the risks they face.

In 2015, Tavis Ormandy, a researcher from Google’s Project Zero, discovered a critical vulnerability in the widely used antivirus software Kaspersky. After reporting the issue to Kaspersky, Ormandy allowed the vendor time to address it but made the vulnerability public after the agreed-upon period expired. The public disclosure led Kaspersky to quickly issue a patch. This case shows that researchers who adhere strictly to vendor timelines may not always be acting in the best interests of the public, and that public disclosure plays an important role in holding vendors accountable.


AMENDMENT (5/12/2024):

Read up on the recent RCE vulnerabilities found in legacy D-Link routers for this next part to make sense.

D-Link, in their infinite wisdom, has decided that a remote code execution vulnerability affecting SIX of their router models is no longer their problem. Their solution? “Buy a new router from us, we’ll keep taking your money and screwing you over, thanks.” That’s it. No patch, no workaround, just a wave towards their newer models and a smug “Good luck out there!” to go with it.

Let’s get this straight: D-Link has known about a critical vulnerability that lets attackers run arbitrary code on these routers, and their response is essentially “That’s not our problem anymore.” You’re the ones who made the damn hardware. You profited from it. And now, when there’s a major security issue that puts your users at risk, you wash your hands of the whole thing like you’ve done your part?

This is corporate apathy at its absolute worst. In 2019, Microsoft released a security patch (the BlueKeep fix) for WINDOWS XP, a fossil of an operating system that had reached its EOL five years prior, because they understood that leaving people exposed to catastrophic vulnerabilities wasn’t just bad optics – it was irresponsible. But D-Link can’t be bothered. They’ve got new products to sell. What a convenient little racket: “Oh no, your old device has a critical flaw! Guess you’ll just have to buy our new one.” This is infuriating.

It’s not about EOL policies or lifecycle management, it’s about passing the buck. It’s about a company prioritizing profits over people’s security, plain and simple – the exact argument I was making above. The fact that D-Link, and others like them, can frame this as normal whenever things get inconvenient for them is an insult to everyone who ever trusted them to deliver a secure product.

Understand, none of this even depends on whether the disclosure of these vulnerabilities was “responsible” or not. D-Link knew. They had the information in their hands, handed to them by a researcher who played by the rules. But clearly, the rules don’t matter to companies that just don’t give a damn. Responsible disclosure, full disclosure, it’s all irrelevant when the company at the other end of the process shrugs and says “Not our problem.” Companies like this have no intention of stepping up, no matter how much lead time or opportunity they’re given. A complete lack of accountability… that is the real issue here.
