One of the aims of the new cybersecurity disclosure rules approved by the Securities and Exchange Commission last month is to give investors better information about the cybersecurity risks associated with public companies. The other objective is to encourage public companies to improve their cybersecurity and risk posture.
But it appears the devil is in the details, as concerns swirl over exactly which incidents must be reported and what details are required when disclosing them. Most significantly, the rules require enterprises to create a mechanism for determining when a security incident is material. For several reasons, that task is deceptively difficult.
The SEC considers an incident material if it can have a significant impact on the company’s financial position, operations, or relationship with its customers. The new rules, as written, include a requirement for a “Form 8-K disclosure of material cybersecurity incidents within four (4) business days of the company’s determination that the cybersecurity incident is material.” There are specific requirements for what must be disclosed in the 8-K: when the incident was discovered and whether it is ongoing; a brief description of the nature and scope of the incident; whether any data was stolen, altered, accessed, or used for any other unauthorized purpose; the effect of the incident on the enterprise’s operations; and whether the company has remediated or is currently remediating the incident.
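To make the timetable concrete, here is a minimal sketch, in Python and purely illustrative, of how a compliance team might estimate the Form 8-K filing deadline once materiality has been determined. It simply counts four business days, skipping weekends; it does not handle federal holidays or market closures, which a real compliance calendar would need to, and nothing here comes from the SEC rule text itself.

```python
from datetime import date, timedelta

def form_8k_deadline(determination_date: date, business_days: int = 4) -> date:
    """Illustrative only: estimate the Form 8-K filing deadline by counting
    business days (Mon-Fri) after the date materiality was determined.
    Federal holidays are NOT handled here; a real compliance calendar must be used.
    """
    deadline = determination_date
    remaining = business_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday=0 ... Friday=4, so this skips Sat/Sun
            remaining -= 1
    return deadline

# Example: materiality determined on a Thursday
print(form_8k_deadline(date(2023, 8, 10)))  # -> 2023-08-16, the following Wednesday
```

The example also shows why the clock matters: a determination made late in the week pushes the deadline across a weekend, leaving little working time for the review chain described below.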
But determining whether an incident is “material” may be more complex than organizations are prepared for. Beyond the bureaucratic and logistical issues involved in creating a group of senior managers to make that determination on a regular basis, the ugly truth is that a security incident looks very different as time goes by and additional analysis is completed. That means that if the committee looks at a data breach discovered only a day earlier, there is a very high chance it will be making its decision based on incomplete and likely flawed preliminary data.
That puts enterprise executives in a no-win scenario. Option one is to move quickly and run the risk of reporting as a material security event an incident that turns out not to have been material at all. Option two is to wait as long as possible so that forensic analysis and examination of backup files can deliver a more complete and accurate picture, but run the risk that the SEC, investors, or both will later discover the timetable and accuse the enterprise of failing to disclose in a timely manner.
Disclosure Timetable Also a Challenge
The SEC’s four-day disclosure timetable, which doesn’t start its countdown until the enterprise has determined that an incident is material, is also problematic. Any SEC filing is going to require Security Operations Center (SOC) staff to prepare a list of the incident’s specifics. Those details would go to Legal to draft the SEC filing, which would also require review by investor relations. Any such filing would also have to be reviewed and approved by the CFO and the CEO, and the CEO may want to run it by board members before filing. That process, even under ideal circumstances, could take longer than four days.
Mark Rasch, an attorney specializing in cybersecurity issues who used to head the U.S. Justice Department’s high-tech crimes group, stressed that there is nothing new about the requirement for companies to report material security incidents. The SEC has required publicly held companies to report any material incident since its founding in 1934. What is new is the timetable.
This requires hard thinking by corporate leadership about what constitutes a material incident. Factors to consider include the organization’s verticals, the geographies involved, the nature of its operations, and the kinds of attackers and attacks the business is likely to attract. A military subcontractor working on weapons systems, for example, might conclude that someone stealing product blueprints is material in a way that an agricultural company might not.
Another point Rasch stressed involves definitions. Security professionals and lawyers define “data breach” very differently. To a security manager, any time an unauthorized individual gets through an authentication system and into protected areas, it is a security breach. To an attorney, a breach occurs when data is accessed, exfiltrated, modified, or deleted, a definition rooted in various compliance requirements.
The SEC, by contrast, is looking for any material security incident. A DDoS attack, for example, could absolutely be a material security incident, but on its own would usually not be considered a data breach.
Key Information Left Out
Importantly, the SEC has carved out an exemption regarding the information that must be included in the 8-K filing. The requirement would not extend to “specific, technical information about the registrant’s planned response to the incident or its cybersecurity systems, related networks and devices, or potential system vulnerabilities in such detail as would impede the registrant’s response or remediation of the incident.”
Rasch says the exemption is necessary, as disclosing certain details about the attack could hinder the investigation or give too much information to potential attackers. But the exemption will also likely be used by companies to avoid saying anything specific enough to provide meaningful and valuable information to investors and potential investors.
Many disclosures today speak of vague hypothetical risks, such as that customers might tire of a particular product and stop buying it. Rasch calls those speculative comments “pablum” and argues that they are almost always worthless to investors. “You’re just going to end up with a lot more of these pablum disclosures,” Rasch says.
Another cybersecurity expert, Michael Isbitski, director of cybersecurity strategy for security tool vendor Sysdig, agrees with Rasch’s concern and points to an incident in July when mattress company Tempur Sealy reported a data breach. The disclosure revealed that a cybersecurity event occurred and that, as a result, the company shut down “certain of the company’s IT systems” and experienced a “temporary interruption” of operations. It also said that the company “has begun the process to bring certain of its critical IT systems back online,” meaning some IT systems were still offline. But there are no details about which systems were shut down, for how long, or how long the other systems would remain down.
Isbitski says that he expects this to result in “a deluge of paperwork. Companies will report far too much, there will be too many form 8Ks filed.”
“There is no clear definition. I don’t see organizations doing it clearly or effectively. We don’t even have alignment in the security community about what is a breach,” Isbitski says, adding that executives will worry that reporting almost any meaningful details will make potential attackers “see that we’re poor in security or that our development teams suck.”
Who Makes the Determination?
A potentially daunting logistical problem is the sheer number of security incidents an enterprise sees every week, a figure that depends on how the company chooses to define a security incident and on the size and nature of the business.
Most experts interviewed agreed that a management committee would be given only a few incidents to review, and almost certainly no more than 20. That means that someone in the CISO’s office, likely a SOC manager, would decide which incidents are considered possibly material.
“This is where a lot of SOCs are going to fail. They need a way to filter down a lot of these vulnerabilities so that they tell (executives) things that are truly exploitable.”
Matthew Webster, a veteran CISO with stints at B&H Photo and Healthix who currently runs virtual CISO firm Cyvergence, agrees that it is a problem for the CISO and the SOC team to wade through all incidents and decide which handful will be presented to the management committee. An important objective of creating a committee with representatives from the offices of the CFO, IR, CIO, CISO, Legal, Risk, Audit, and Compliance is to arrive at strategic business decisions for the enterprise about what is material. But if such decisions are most often made by a SOC staffer, that could easily undermine the point of creating the committee in the first place.
“If the SOC is making that cut, you have already failed,” Webster says.
Rasch says that this puts the onus right back on the management committee. “The committee needs to tell the SOC what it needs to know. And the board needs to tell those managers what the board wants to know,” Rasch says. “The committee needs to give clear guidance to the CISO what they want to know and that includes non-reportable stealing of trade secrets and business processes. In a cyber environment and AI environment, there are very substantial risks. These are risks related to availability, confidentiality, integrity, supply chain, liability. It is not just breaches and it is not even primarily breaches.”
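One way to read Rasch’s advice in engineering terms is that the committee’s guidance could be written down as explicit escalation criteria the SOC applies when deciding which incidents to flag for materiality review. The sketch below is hypothetical: the incident fields, rules, and thresholds are illustrative assumptions, not anything prescribed by the SEC or described by the people quoted here.

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    """Hypothetical incident record a SOC might keep; field names are illustrative."""
    name: str
    data_exfiltrated: bool = False
    data_altered: bool = False
    operations_disrupted_hours: float = 0.0
    systems_affected: list[str] = field(default_factory=list)
    trade_secrets_involved: bool = False
    supply_chain_impact: bool = False

# Illustrative escalation rules a committee might hand the SOC. Each returns True
# if the incident should be flagged for materiality review -- covering availability,
# confidentiality, integrity, and supply-chain risk, not just data breaches.
ESCALATION_RULES = [
    lambda i: i.data_exfiltrated or i.data_altered,              # confidentiality / integrity
    lambda i: i.operations_disrupted_hours >= 8,                 # availability; threshold is an assumption
    lambda i: i.trade_secrets_involved,                          # theft that isn't a "breach" can still matter
    lambda i: i.supply_chain_impact,
    lambda i: "erp" in (s.lower() for s in i.systems_affected),  # business-critical system, as an example
]

def flag_for_committee(incident: Incident) -> bool:
    """Return True if any committee-defined rule says this incident needs review."""
    return any(rule(incident) for rule in ESCALATION_RULES)

# Example: a DDoS attack with no data loss still gets flagged on availability grounds
ddos = Incident(name="DDoS on customer portal", operations_disrupted_hours=12)
print(flag_for_committee(ddos))  # -> True
```

The point of writing the rules down, rather than leaving the cut to a SOC manager’s judgment, is that the filtering then reflects decisions the committee and the board have already made about what they want to see.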
Source: www.darkreading.com