By Brad Antoniewicz.
The excitement of finding a vulnerability in a piece of commercial software can quickly shift to fear and regret when you disclose it to the vendor and find yourself in a conversation with a lawyer questioning your intentions. This is an unfortunate reality in our line of work, but you can take steps to protect yourself. In this post, we'll take a look at how vulnerability disclosure is handled in standards, by bug hunters, and by large organizations so that you can figure out the approach that makes the most sense for you.
Disclosure Standpoints
While it's debatable, I think hacking, and more specifically vulnerability discovery, started as a way to better the overall community – e.g., we can make the world a better, more secure place by finding and fixing vulnerabilities in the software we use. Telling software maintainers about the vulnerabilities we find in their products falls right in line with this idea. However, there is something else to consider: recognition and sharing. If you spend weeks finding an awesome vulnerability, you should be publicly recognized for that effort, and moreover, others should know about your vulnerability so they can learn from it.

Unfortunately, vendors often lack the same altruistic outlook. From a vendor's perspective, a publicly disclosed vulnerability highlights a flaw in their product, which may negatively impact its customer base. Some vendors even interpret vulnerability discovery as a direct attack against their product, or even their company. I've personally had lawyers ask me "Why are you hacking our company?" when I disclosed a vulnerability in their offline desktop application.
As time progressed, vulnerability discovery shifted from a hobby and "betterment" activity into a profitable business. There are plenty of organizations out there selling exploits for undisclosed vulnerabilities, and a seemingly even greater number of criminal and state-sponsored groups leveraging undisclosed vulnerabilities for corporate espionage and nation-state attacks. This shift has turned computer hacking from a "hippy" activity into serious business.
The emergence of bug bounty programs has really helped steer bug hunters away from criminal outlets by offering monetary rewards and public recognition. It has also demystified how disclosure is handled. However, not all vendors offer a bug bounty program, and many times lawyers may not even be aware of the bug bounty programs available within their own organization, which can put you in a sticky situation if you take the wrong approach to disclosure.
General Approaches
In general, there are three categories of disclosure:
- Full disclosure – Full details are released publicly as soon as possible, often without vendor involvement
- Coordinated disclosure – Researcher and vendor work together so that the bug is fixed before the vulnerability is disclosed
- Private or non-disclosure – The vulnerability is shared only with a small group of people (not the vendor) or kept entirely private
These categories broadly classify disclosure approaches, but many actual disclosure policies add their own specifics, such as time limits on vendor response.
Established Disclosure Standards
To give better perspective, let's look at some existing standards that can help guide you in the right direction.
- Internet Engineering Task Force (IETF) – Responsible Vulnerability Disclosure Process - The Responsible Vulnerability Disclosure Process established by this IETF draft is one of the first efforts to create a process that establishes roles for all parties involved. It accurately defines the appropriate roles and steps of a disclosure; however, it fails to address publication by the researcher if the vendor fails to respond or causes unreasonable delays. At most, the process states that the vendor must provide specific reasons for not addressing a vulnerability within 30 days of initial notification.
- Organization for Internet Safety (OIS) Guidelines for Security Vulnerability Reporting and Response - The OIS guidelines provide further clarification of the disclosure process, offering more detail and establishing terminology for common elements of a disclosure, such as the initial vulnerability report (Vulnerability Summary Report), the request for confirmation (Request for Confirmation Receipt), and the status request (Request for Status). As with the Responsible Vulnerability Disclosure Process, the OIS Guidelines do not define a hard time frame for when the researcher may publicize details of the vulnerability. If the process fails, the OIS Guidelines define a "Conflict Resolution" step, which ultimately allows the parties to exit the process; however, no disclosure option is provided. The OIS also addresses the scenario where an unrelated third party discloses the same vulnerability – at that point the researcher may disclose without waiting for a vendor fix.
- Microsoft Coordinated Vulnerability Disclosure (CVD) - Microsoft's Coordinated Vulnerability Disclosure is similar to responsible disclosure in that its aim is to have the vendor and the researcher (finder) work together and disclose information about the vulnerability only after a resolution is reached. However, CVD refrains from defining any specific time frames and only permits public disclosure after a vendor resolution is available or evidence of exploitation is identified.
Coordinator Policies
Coordinators act on behalf of a researcher to disclose vulnerabilities to vendors. They provide a level of protection to the researcher and also take on the role of finding an appropriate vendor contact. While a coordinator's goal is to notify the vendor, it also satisfies the researcher's aim of sharing the vulnerability with the community. This section gives an overview of coordinator policies.
- Computer Emergency Response Team Coordination Center (CERT/CC) Vulnerability Disclosure Policy - The CERT/CC vulnerability disclosure policy sets a firm 45-day time frame from initial report to public disclosure. This occurs regardless of whether a patch or workaround has been released by the vendor. Exceptions to this policy exist for critical issues in core components of technology that require a large effort to fix, such as vulnerabilities in standards or in core components of an operating system.
- Zero Day Initiative (ZDI) Disclosure Policy - ZDI is a coordinator that offers monetary rewards for vulnerabilities. It uses the submitted vulnerabilities to generate signatures so that its security products can offer clients early detection and prevention. After making a reasonable effort, ZDI may disclose vulnerabilities within 15 days of initial contact if the vendor does not respond.
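To make those time frames concrete, here is a minimal Python sketch that computes the earliest public-disclosure date the CERT/CC and ZDI windows above would allow for a given report date. The day counts come from the policies described above; the function and constant names are mine, not part of either policy.

```python
from datetime import date, timedelta

# Day counts taken from the coordinator policies described above;
# the names below are hypothetical, used only for illustration.
CERT_CC_DISCLOSURE_DAYS = 45   # fixed window from initial report, patch or no patch
ZDI_NO_RESPONSE_DAYS = 15      # applies only if the vendor never responds

def disclosure_deadlines(report_date: date) -> dict[str, date]:
    """Return the earliest public-disclosure date implied by each policy."""
    return {
        "CERT/CC (45 days from report)": report_date + timedelta(days=CERT_CC_DISCLOSURE_DAYS),
        "ZDI (15 days, unresponsive vendor)": report_date + timedelta(days=ZDI_NO_RESPONSE_DAYS),
    }

# Example: a report sent on 2014-01-06
for policy, deadline in disclosure_deadlines(date(2014, 1, 6)).items():
    print(f"{policy}: {deadline.isoformat()}")
```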
Researcher Policies
Security companies commonly support vulnerability research and make their policies publicly available. This section provides an overview of a handful:
- Rapid7 Vulnerability Disclosure Policy - Rapid7 attempts to contact the vendor via telephone and email, then after 15 days, regardless of response, posts its findings to CERT/CC. This combination gives the vendor a potential 60 days before public disclosure, because it is CERT/CC's policy to wait 45 days.
- VUPEN Policy - VUPEN is a security research company that adheres to a "commercial responsible disclosure policy": any vendor under contract with VUPEN is notified of vulnerabilities, while all other vulnerabilities are largely kept private to fund the organization's exploitation and intelligence services.
- Trustwave/SpiderLabs Disclosure Policy - Trustwave makes a best-effort attempt to contact the vendor and, if the vendor is unresponsive, ultimately leaves the decision on public disclosure to its management.
Summary of Approaches
The following table summarizes the approaches mentioned above.

| Policy | Notification Emails | Receipt Time Frame | Status Update Time Frame | Verification/Resolution Time Frame | Disclosure Time Frame |
|---|---|---|---|---|---|
| Responsible Vulnerability Disclosure Process | security@, security-alert@, support@, secalert@, and other public info such as the domain registrar | 7 days | Every 7 days or as otherwise agreed | Vendor makes a best effort to address within 30 days; can request an additional 30-day grace period and further extensions without defined limits | After resolution by vendor |
| OIS Guidelines for Security Vulnerability Reporting and Response | security@, secure@, security-alert@, secalert@; alternates: abuse@, postmaster@, sales@, info@, support@, and other public info such as the domain registrar | 7 days, then the finder can send a request for confirmation receipt; after three days, go to conflict resolution | Every 7 days or as otherwise agreed; the finder can send a request for status if the vendor does not comply, and after three days, go to conflict resolution | 30 days from vendor receipt suggested, although this should be defined on a case-by-case basis | After resolution by vendor |
| Microsoft Coordinated Vulnerability Disclosure | security@, secure@, security-alert@, secalert@, support@, psirt@, info@, sales@, and search engine results | Not defined | Not defined | Not defined | After resolution by vendor |
| CERT/CC Vulnerability Disclosure Policy | Not published | Not defined | Not defined | Not defined | 45 days from initial notification |
| ZDI Disclosure Policy | security@, support@, info@, secure@ | 5 days, then telephone contact; 5 days for a telephone response, then an intermediary | Not defined | 6 months | 15 days if no response is provided after initial notification; up to 6 months if a response to the notification is provided |
| Rapid7 | Not defined | 15 days after phone/email | Not defined | Not defined | 15 days, then disclosure to CERT/CC |
| VUPEN | Not defined | Notification only to customers under contract | Not defined | Not defined | Disclosure only to customers |
| Trustwave/SpiderLabs | Not defined | 5 days | 5 days | Not defined | If the vendor is unresponsive for more than 30 days after initial contact, potential disclosure decided by Trustwave management |
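As a rough illustration of the "Notification Emails" column, the sketch below builds the list of candidate contact addresses that the standards above suggest trying for a given vendor domain. The aliases are drawn from the table; the helper function itself is hypothetical.

```python
# Security-contact aliases drawn from the table above.
PRIMARY_ALIASES = ["security", "secure", "security-alert", "secalert", "psirt"]
FALLBACK_ALIASES = ["support", "abuse", "postmaster", "info", "sales"]

def candidate_contacts(vendor_domain: str) -> list[str]:
    """Return the addresses to try, primary aliases first, then fallbacks."""
    return [f"{alias}@{vendor_domain}"
            for alias in PRIMARY_ALIASES + FALLBACK_ALIASES]

# Example:
# candidate_contacts("example.com")
# -> ['security@example.com', 'secure@example.com', 'security-alert@example.com', ...]
```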