
"PCAP or It Didn't Happen" - Now @ Scale

The new release (38.0) brings Kubeshark scalability to a whole new level, introducing a distributed PCAP storage system, DNS support, TCP/UDP stream replay, historic traffic snapshots, and much more.

Viewing Kubernetes’ internal API traffic is one thing, supporting large-scale production clusters is another.

The new release, 38.0, is all about increasing Kubeshark's scalability and making it fit to run on large-scale production clusters, alongside cool new features including:

  • Distributed PCAP-based storage
  • Historic traffic snapshot
  • PCAP export & view
  • TCP and UDP stream replay
  • Identity-aware service map
  • DNS support (introduced in v38.2)

Scalable New Architecture

Kubeshark has been reinforced with a new architecture that promises very low CPU and network overhead, and it is now capable of processing significantly more traffic than before.

By distributing the CPU-intensive operations and the storage, we've been able to bring Kubeshark's scalability to a whole new level.

Distributed Storage, Low CPU and Network Overheads

The new Kubeshark architecture introduces the concept of workers, which are responsible for capturing traffic and storing it locally at the node level.

The CPU-intensive operations of traffic dissection are now distributed and performed on demand by the workers at the node level, so only a fraction of the traffic is sent over the network.

The node level storage limit can be extended to as much as the volume attached to the node permits.

Distributed PCAP-based Storage

At its core, Kubeshark's architecture is based on distributed PCAP storage, limited only by the combined size of all volumes attached to the nodes.

Why PCAP

PCAP provides all packet information from the Ethernet header all the way to the application payload, providing full visibility of the application and network interaction, pre- and post-event.
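To make this concrete, here is a minimal sketch (plain Python, standard library only; not Kubeshark code) that writes a one-packet PCAP capture in memory and parses it back, recovering everything from the Ethernet header down to the payload:

```python
import struct

# pcap global header: magic, version 2.4, tz offset, sigfigs, snaplen, linktype (1 = Ethernet)
global_hdr = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)

# A fabricated Ethernet frame: dst MAC, src MAC, EtherType 0x0800 (IPv4), payload
frame = bytes.fromhex("ffffffffffff" "aabbccddeeff") + struct.pack(">H", 0x0800) + b"payload"

# Per-packet record header: ts_sec, ts_usec, incl_len, orig_len
rec_hdr = struct.pack("<IIII", 0, 0, len(frame), len(frame))
pcap = global_hdr + rec_hdr + frame

# Parse it back: every byte from the link layer up is preserved in the capture.
magic, vmaj, vmin, _, _, snaplen, linktype = struct.unpack("<IHHiIII", pcap[:24])
incl_len = struct.unpack("<IIII", pcap[24:40])[2]
pkt = pcap[40:40 + incl_len]
dst_mac, src_mac = pkt[:6].hex(":"), pkt[6:12].hex(":")
ethertype = struct.unpack(">H", pkt[12:14])[0]
print(vmaj, vmin, linktype, dst_mac, src_mac, hex(ethertype), pkt[14:])
```

The point is that nothing is summarized away: the raw bytes of every layer survive in the file, which is what makes post-event analysis possible.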

“PCAP or it didn’t happen” - Now Possible

“PCAP or it didn’t happen” is a phrase commonly used by security experts: to prove that an attack or compromise occurred, you capture and analyze the packets surrounding the event. This capability is traditionally considered challenging and expensive.

By moving to an architecture based on distributed PCAP storage, and by optimizing CPU, network and storage consumption, we plan to democratize this capability and make capturing, monitoring and analyzing traffic easy and accessible.

PCAP Operations

Kubeshark enables exporting any traffic snapshot to PCAP and viewing any previously exported PCAP file.

The example below shows how to export the TCP streams from the past 72 hours to a PCAP file:

[Image: Historical Traffic]

You can view any previously exported PCAP snapshot using the CLI:

kubeshark tap --pcap <pcap-snapshot.tar.gz>

Historic Traffic Snapshot

Kubeshark can retain the captured traffic over a long period of time, enabling it to present a historic traffic snapshot.

The example below presents traffic captured between two timestamps:

[Image: Historical Traffic]

TCP and UDP Streams

Kubeshark stores complete TCP and UDP streams, including all of the request-response pairs exchanged between two endpoints from the moment a connection is established until it is closed.

[Image: Request-Response]

Kubeshark can replay a TCP or UDP stream by opening a connection to the server, using the original server IP and port, and sending only the request packets.

[Image: TCP Stream Replay]
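Conceptually, a replay looks like the sketch below (plain Python, standard library; not Kubeshark internals). A stand-in local server replaces the original endpoint so the example is self-contained, and the recorded request bytes are fabricated:

```python
import socket
import threading

# Bytes of a previously captured request (e.g. the request side of a PCAP'd TCP stream).
recorded_request = b"GET /health HTTP/1.1\r\nHost: example.internal\r\n\r\n"

# Stand-in server; a real replay would target the original server IP and port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
host, port = srv.getsockname()

def serve_once():
    conn, _ = srv.accept()
    conn.recv(65536)  # consume the replayed request
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Replay: open a fresh connection and send only the request side of the
# captured stream, then read the live response from the server.
with socket.create_connection((host, port)) as c:
    c.sendall(recorded_request)
    c.shutdown(socket.SHUT_WR)
    response = b""
    while True:
        chunk = c.recv(65536)
        if not chunk:
            break
        response += chunk

t.join()
srv.close()
print(response.decode())
```

The server processes the replayed request exactly as if it were live traffic, which is what makes replay useful for reproducing and debugging past interactions.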

Identity-aware Service Map

With support for DNS protocol dissection, Kubeshark now assigns DNS-aware identities to external workloads, in addition to the pod-label identities assigned to internal workloads by subscribing to Kubernetes API events.

The new Service Map now works in conjunction with KFL queries to focus on specific parts of the cluster.

For example, this query will analyze the dependencies of three Pods:

[Image: Query a Subset of Traffic]
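As a sketch, a KFL query of that shape might look like the fragment below. The pod names here are hypothetical, and the exact selector fields should be checked against the KFL reference:

```
src.name == "front-end" or dst.name == "front-end" or
src.name == "catalogue" or dst.name == "catalogue" or
src.name == "orders" or dst.name == "orders"
```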

The query results in the following service map:

[Image: Service Map Subset]

Another example would be to analyze the traffic only at a certain node or a set of nodes.

[Image: Kubernetes Node]

Read more about the new Service Map here.

DNS Support

Kubeshark provides protocol-level visibility into Kubernetes’ DNS traffic by capturing all UDP streams that include DNS traffic. Once captured, DNS traffic is dissected and becomes available like any other protocol supported by Kubeshark.

DNS support provides the following capabilities:

  • DNS log
  • DNS investigation
  • Service-to-DNS connectivity map
  • DNS payload replay
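As a sketch of what protocol-level DNS dissection recovers from a raw UDP payload (plain Python, standard library; the query below is fabricated and assumes nothing about Kubeshark's internals):

```python
import struct

# Build a fabricated DNS query payload for "kubernetes.default.svc" -- the kind
# of UDP payload a dissector would pull out of a captured stream.
def encode_qname(name):
    out = b""
    for label in name.split("."):
        out += bytes([len(label)]) + label.encode()
    return out + b"\x00"

header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # id, flags (RD), QDCOUNT=1
question = encode_qname("kubernetes.default.svc") + struct.pack(">HH", 1, 1)  # type A, class IN
payload = header + question

# Dissection: recover the transaction id, flags and queried name from raw bytes.
txid, flags, qdcount, *_ = struct.unpack(">HHHHHH", payload[:12])
labels, pos = [], 12
while payload[pos] != 0:
    ln = payload[pos]
    labels.append(payload[pos + 1:pos + 1 + ln].decode())
    pos += 1 + ln
qname = ".".join(labels)
qtype, qclass = struct.unpack(">HH", payload[pos + 1:pos + 5])
print(hex(txid), qname, qtype, qclass)
```

Fields like the queried name are exactly what makes a DNS log and a service-to-DNS connectivity map possible.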

[Image: DNS Log]

Read more about it here.

New and Improved Documentation

You can learn more about new and existing features in our new and revamped documentation available here.

Summary

Kubernetes’ distributed and highly dynamic nature renders traditional observability and network-based security tools less relevant.

Kubeshark provides real-time visibility into Kubernetes’ internal network, capturing, dissecting and monitoring all traffic and payloads going in, out and across containers, pods, nodes and clusters. It is now reinforced with a new architecture that makes it a better fit for large-scale production clusters.

We are just starting our journey with Kubeshark and are always looking to learn about use-cases Kubeshark can potentially support. If you know of any, please let us know.

As always, if you like Kubeshark, don’t forget to give us a star :star2: on GitHub.