By Stephen Goudreault, Cloud Security Evangelist, Gigamon
As with all technology, new tools are iterations built on what came before, and classic network logging and metrics are no different. The tooling and instrumentation of network traffic can be found virtually unchanged in the private cloud and on-prem. Many of the logs and metrics in use today are almost two decades old and were originally designed to solve billing and similar problems; visibility into traffic flow patterns was a bonus. Traffic logging just happens to be the use case that has withstood the test of time.
What is port spoofing and why is it important?
Just like application and data visibility on the network, many of the Requests for Comments (RFCs) relied on today were written over a decade ago. They describe how things ‘should’ work, but nothing truly enforces compliance. That leaves a great deal of flexibility in how services can be deployed, flexibility that is rarely exercised. When an app or service is misconfigured, or when someone wants to evade detection, even a slight change to a standard port can defeat most current visibility and detection schemes. Port spoofing is a well-known technique, and MITRE ATT&CK has a whole category dedicated to evasion.
Example: Port spoofing secure shell protocol (SSH)
One of the most common and versatile examples of evading visibility is running the Secure Shell (SSH) protocol on a non-standard port. SSH is usually assigned to Port 22. Security tools assume SSH traffic will use Port 22, and nearly every security team in the world keeps that port thoroughly locked down. Common practice is to block the port at the perimeter and call the network secure. Easy, right?
Not quite.
Not Port 22, but Port 443
What if someone changed the default port on their SSH traffic to Port 443? Port 443 is widely used for Hypertext Transfer Protocol Secure (HTTPS)/Transport Layer Security (TLS) and is almost always kept open. HTTPS traffic is utterly ubiquitous in the modern enterprise for both business-critical and personal activities. IT firewalls are not going to routinely block Port 443/HTTPS, making it an ideal point of entry for attackers. Changing SSH to operate on Port 443 is a very easy task, and many forums give detailed instructions for doing so, for both legitimate and illegitimate reasons. Almost all modern cloud visibility tools will report the traffic as what it appears to be, not what it actually is.
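To illustrate just how easy this is (assuming a typical OpenSSH server on Linux; paths may vary by distribution), moving SSH to Port 443 is a one-directive change:

```
# /etc/ssh/sshd_config -- listen on 443 instead of the default 22
Port 443
```

After restarting the SSH daemon (for example, `sudo systemctl restart sshd`), every port-based tool on the path will see the session as HTTPS.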
[Screenshot: a Gigamon cloud workload reporting the connection as TLS on Port 443.]
Even workloads in the cloud can misidentify their own connections. In the screenshot above, we see an active SSH session being misreported as TLS because the Linux OS infers the protocol from the port alone. The network gets it wrong, and operating system tools get it wrong, reporting this traffic as a “known known.” The assumption is that all traffic is RFC compliant; when it is not, it is not seen correctly.
The problem as it exists today
Today, almost all traffic is assessed by its Transmission Control Protocol (TCP) port. This leads to a lot of assumptions about the nature of traffic, in the public cloud, on-prem, and in the private cloud alike. In today’s ever more security-conscious world, making assumptions about the nature of traffic isn’t as safe as it once was. For example, a recent report found that 81 percent of all North-South traffic and 65 percent of all East-West traffic is encrypted, leaving a large portion of traffic unencrypted yet largely uninspected, a perfect hiding place for cybercriminals to sit, wait, and plan their attack.
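The port-based assumption described above can be reduced to a lookup table, which is essentially what flow logs and many monitoring tools do. This is an illustrative sketch, not any vendor's actual implementation; real tools use much larger IANA port tables:

```python
# Naive port-based classification, as flow logs and port-aware tools do.
# Illustrative sketch only; real tools consult far larger port tables.
WELL_KNOWN_PORTS = {22: "ssh", 53: "dns", 80: "http", 443: "https/tls"}

def classify_by_port(dst_port: int) -> str:
    """Guess the protocol from the destination port alone."""
    return WELL_KNOWN_PORTS.get(dst_port, "unknown")

# An SSH server moved to port 443 is happily mislabeled:
print(classify_by_port(443))  # -> "https/tls", even if the payload is SSH
```

The lookup never touches the payload, so any service moved onto a well-known port inherits that port's label.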
SSH is a very powerful tool that can give a threat actor file transfers, tunneling, and lateral movement across any network. This is only one example of how a single tool can have many uses. Factor in other applications and protocols, and a massive amount of data remains hidden. MITRE has its own category for port spoofing, and the trend is only growing.
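To make the tunneling risk concrete, a single standard OpenSSH client invocation (the hostnames below are hypothetical placeholders) can forward an internal service over what port-based tooling sees as HTTPS:

```shell
# Forward local port 3306 to an internal database through an SSH server
# listening on 443. jump.example.com and db.internal are placeholders.
ssh -p 443 -N -L 3306:db.internal:3306 user@jump.example.com
```

From the network's point of view, this is just a long-lived connection to Port 443.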
East-west traffic needs deep packet inspection too
Next-generation firewalls (NGFWs) have solved this problem on-prem at perimeter points. The public cloud is a different story: the problem has not been solved at scale for east-west traffic, otherwise known as intra-organization lateral communications. Virtual private cloud (VPC) flow logs only record which conversations took place on which ports, without knowing what application or protocol was actually in use. Deep packet inspection examines the conversation itself and can properly identify the applications and protocols in use. This capability is also referred to as “application intelligence,” which can identify more than 5,000 applications, protocols, and attributes in network traffic.
Application Metadata Intelligence doesn’t just look at the outer headers; it also looks deeper into the packet. By examining the unique characteristics of the payload, it can identify a given application.
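The idea of identifying an application by its payload rather than its port can be sketched in a few lines. This is a toy illustration, not Gigamon's engine: real application intelligence matches thousands of protocol signatures, while this sketch checks only the two protocols from the SSH-on-443 example (SSH's opening identification string per RFC 4253, and the TLS record header):

```python
def classify_by_payload(first_bytes: bytes) -> str:
    """Classify a TCP stream by its opening bytes, not its port.

    Toy sketch: real DPI engines carry thousands of signatures.
    """
    if first_bytes.startswith(b"SSH-"):
        # RFC 4253: an SSH endpoint opens with "SSH-protoversion-software"
        return "ssh"
    if len(first_bytes) >= 2 and first_bytes[0] == 0x16 and first_bytes[1] == 0x03:
        # TLS record header: content type 0x16 (handshake), major version 0x03
        return "tls"
    return "unknown"

print(classify_by_payload(b"SSH-2.0-OpenSSH_9.6"))             # -> "ssh"
print(classify_by_payload(bytes([0x16, 0x03, 0x01, 0x00, 0xC8])))  # -> "tls"
```

Run against the SSH-on-443 scenario, this correctly reports “ssh” where a port lookup would report “https.”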
Only with this level of visibility can an organization easily span east and west across their entire enterprise in the public cloud, including container to container communications.
In the public cloud, deep packet inspection faces a unique set of challenges. There is no broadcast domain, so to inspect traffic there must either be traffic mirroring or a security VPC to funnel traffic through. Mirroring the traffic to the appropriate tools is the less complicated option: it reduces deployment complexity and operational friction while avoiding performance bottlenecks.
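As one sketch of the mirroring approach (AWS shown here; all resource IDs are placeholders, and the mirror target and filter must already exist), a VPC Traffic Mirroring session copies a workload's packets to an inspection tool:

```shell
# Mirror traffic from one network interface to a pre-created mirror target.
# All IDs below are placeholders.
aws ec2 create-traffic-mirror-session \
  --network-interface-id eni-0123456789abcdef0 \
  --traffic-mirror-target-id tmt-0123456789abcdef0 \
  --traffic-mirror-filter-id tmf-0123456789abcdef0 \
  --session-number 1
```

Other clouds offer comparable packet-mirroring primitives; the common pattern is copying packets out-of-band so inspection never sits in the data path.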
The “known known” is that developers will continue to move fast, DevOps will deploy unknown or misconfigured apps, and threat actors will continually exploit vulnerabilities and blind spots. Additionally, SecOps will try to verify rules and protections, which can only be done with deep packet inspection.
If you cannot detect a simple use case of SSH on a non-standard port, what “known unknowns” are out there?
About the Author
Stephen Goudreault is a Cloud Security Evangelist at Gigamon. You can read more of Stephen’s thought leadership on the Gigamon company blog https://blog.gigamon.com/author/stephengoudreault/.
Source: www.cyberdefensemagazine.com