The European Union (EU) may soon require software publishers to disclose unpatched vulnerabilities to government agencies within 24 hours of learning they are being actively exploited. Many IT security professionals want this new rule, set out in Article 11 of the EU’s Cyber Resilience Act (CRA), to be reconsidered.

The rule requires vendors to report a vulnerability under active exploitation within one day of learning about it, regardless of whether a patch is available. Some security professionals also see the potential for governments to abuse the disclosure requirements for intelligence or surveillance purposes.

In an open letter signed by 50 prominent cybersecurity professionals across industry and academia, among them representatives from Arm, Google, and Trend Micro, the signatories argue that the 24-hour window is too short and would open the door for adversaries to jump on the vulnerabilities before organizations have had enough time to fix the issues.

“While we appreciate the CRA’s aim to enhance cybersecurity in Europe and beyond, we believe that the current provisions on vulnerability disclosure are counterproductive and will create new threats that undermine the security of digital products and the individuals who use them,” the letter states.

Gopi Ramamoorthy, senior director of security and GRC at Symmetry Systems, says there is no disagreement about the urgency of patching the vulnerabilities. The concerns center on publicizing the vulnerabilities before updates are available, as that leaves organizations at risk of attack and unable to do anything to prevent it.

“Publishing the vulnerability information before patching has raised concerns that it may enable further exploitation of the unpatched systems or devices and put private companies, and citizens, at further risk,” Ramamoorthy says.

Prioritize Patching Over Surveillance

Callie Guenther, senior manager of cyber threat research at Critical Start, says the intent behind the EU’s Cyber Resilience Act is commendable, but it’s vital to consider the broader implications and potential unintended consequences of governments having access to vulnerability information before updates are available.

“Governments have a legitimate interest in ensuring national security,” she says. “However, using vulnerabilities for intelligence or offensive capabilities can leave citizens and infrastructure exposed to threats.”

She says a balance must be struck wherein governments prioritize patching and protecting systems over exploiting vulnerabilities, and she proposes some alternative approaches for vulnerability disclosure, starting with tiered disclosure.

“Depending on the severity and impact of a vulnerability, varying timeframes for disclosure can be set,” Guenther says. “Critical vulnerabilities may have a shorter window, while less severe issues could be given more time.”

A second alternative is preliminary notification, under which vendors are given an initial notice and a brief grace period before the detailed vulnerability is disclosed to a wider audience.

A third approach focuses on coordinated vulnerability disclosure, encouraging a system in which researchers, vendors, and governments work together to assess, patch, and disclose vulnerabilities responsibly.

She adds that any rule must include explicit clauses prohibiting the misuse of disclosed vulnerabilities for surveillance or offensive purposes.

“Additionally, only select personnel with adequate clearance and training should have access to the database, reducing the risk of leaks or misuse,” she says. “Even with explicit clauses and restrictions, there are numerous challenges and risks that can arise.”

When, How, and How Much to Disclose

John A. Smith, CEO at Conversant Group, notes that responsible vulnerability disclosure has traditionally been a deliberate process, one that gives organizations and security researchers time to understand the risk and develop patches before the vulnerability is exposed to potential threat actors.

“While the CRA may not require deep details about the vulnerability, the fact that one is now known to be present is enough to get threat actors probing, testing, and working to find an active exploit,” he cautions.

From his perspective, the vulnerability should also not be reported to any individual government or to the EU; requiring this would reduce consumer confidence and damage commerce because of nation-state spying risks.

“Disclosure is important — absolutely. But we must weigh the pros and cons of when, how, and how much detail is provided during research and discovery to mitigate risk,” he says.

Smith notes that an alternative to this “arguably knee-jerk approach” is to require software companies to acknowledge reported vulnerabilities within a specified but expedited timeframe, then report progress to the discovering entity regularly, and ultimately provide a public fix within a maximum of 90 days.

Guidelines on how to receive and disclose vulnerability information, as well as techniques and policy considerations for reporting, are already outlined in ISO/IEC 29147.

Impacts Beyond EU

Guenther adds the US has an opportunity to observe, learn, and subsequently develop well-informed cybersecurity policies, as well as proactively prepare for any potential ramifications if Europe moves forward too quickly.

“For US companies, this development is of paramount importance,” she says. “Many American corporations operate on a global scale, and regulatory shifts in the EU could influence their global operations.”

She points out that the ripple effect of the EU’s regulatory decisions, as evidenced by the GDPR’s influence on the CCPA and other US privacy laws, suggests that European decisions could presage similar regulatory considerations in the US.

“Any vulnerability disclosed in haste due to EU regulations doesn’t confine its risks to Europe,” Guenther cautions. “US systems employing the same software would also be exposed.”

Source: www.darkreading.com