Here are some (raw, down-in-the-trenches) notes from Black Hat 2016 presentations I attended.
The whitepapers and slides can be found here: https://www.blackhat.com/us-16/briefings.html
Keynote: Dan Kaminsky
Speed is super important. Cloud impact.
- Ppl moving away from the Internet: fears of privacy, security.
- Internet could be regulated out of existence.
Companies do not compete on security. Do not share internal information.
Defense – Offense == Compliance. Checkboxes. Everyone is busy doing things but the house burns down.
Bugs are not random. Neither are fixes. Both are a function of poor architecture and bad design.
Eg ASN.1 C implementations. Most of the C implementations are extremely vulnerable. Why then do we use this pattern of development? Eg performance, embedding. We need to understand which patterns produce vulnerabilities, and address those.
- If I can trust cloud providers to provide reasonable isolation, I should be using cloud providers much more.
- Rate limit losses: eg choke access to confidential data.
- Put my vulnerable stuff “elsewhere”. Eg pwds, keys, etc in the cloud provider service.
We don’t share information or learn from previous lessons.
No mandate to “do it right”. Companies just “build code”.
Every generation tried to invent the Internet (e.g. Minitel, other telco experiments, AOL). Our generation actually did, because there were no kingmakers, no gatekeepers, and it was not designed to make someone a lot of money. No fine-grained control. Swap components in and out.
Most cloud providers are US. Stopping ppl from storing data on US cloud is counter-productive. Increases deployment costs.
- Docker container running in VM. Vulnerable stuff loaded + running before the 1st attacker packet comes in.
- Simplistic but effective.
HTTP2 + QUIC
Page sizes increasing
MPTCP slow, OS-dependent, didn’t support many use cases, eg multi-homing, HA.
Slns: UDP → QUIC, then app level → SPDY, then standard → HTTP/2
Not many security tools support these protocols.
QUIC (everything Chrome), widely deployed: Google Duo, Google Chrome, Google websites
HTTP/2: Chrome, Edge, Facebook, Twitter, Yahoo, Google
Transport encapsulates HTTP:
- Binary framing
- MPX requests
- One connection per origin, with a number of bidirectional, binary-framed streams per connection
- Upgrade hdr
- TLS with ALPN
- Or if prior knowledge, then special marker octets to start stream
- Header compression (HPACK) with Huffman encoding
- Connection reuse: connections may be reused for requests with multiple different URI authority components
- Server push
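The connection-establishment mechanisms in the list above (Upgrade header, ALPN, "special marker octets") can be sketched concretely. This is a minimal illustration assuming the RFC 7540 conventions: the marker octets are the fixed 24-byte client connection preface, and the plaintext path uses an `Upgrade: h2c` request (the `example.com` host and helper names are hypothetical).

```python
# The fixed 24-byte client connection preface (RFC 7540, section 3.5) --
# the "special marker octets" an HTTP/2 client sends before any frames.
CONNECTION_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

# Plaintext upgrade path: the client starts as HTTP/1.1 and asks to switch.
# HTTP2-Settings carries a base64url-encoded SETTINGS payload (empty here).
UPGRADE_REQUEST = (
    b"GET / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Connection: Upgrade, HTTP2-Settings\r\n"
    b"Upgrade: h2c\r\n"
    b"HTTP2-Settings: \r\n"
    b"\r\n"
)

def looks_like_http2(first_bytes: bytes) -> bool:
    """Return True if a byte stream begins with the HTTP/2 connection preface."""
    return first_bytes.startswith(CONNECTION_PREFACE)

print(len(CONNECTION_PREFACE))                              # 24
print(looks_like_http2(CONNECTION_PREFACE + b"\x00" * 9))   # True
```

A security tool that only pattern-matches HTTP/1.x request lines will not recognize either path, which is part of why these protocols evade existing inspection tools.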
QUIC
- Takes things from HTTP/2, adds a network layer as well.
- Connection: combines connection setup + TLS handshake in 1 operation
- UDP transport protocol
- Latency optimized → main driving force for QUIC
- Reliable (even though UDP)
- Open src
- No OS reqts
- Always encrypted
Demo of HTTP/2 → QUIC → QUIC → HTTP/2 using Metasploit listeners.
New protocols could have hostile traffic in multiple streams, hopping from stream to stream.
Less and less use in trying to detect hostile traffic in these complex, binary-oriented, encrypted protocols. Tools are just not keeping pace with the speed of change / rapid deployment of the new protocols.
Joshua Saxe, Invincea Labs
Neural network, deep learning based on raw strings to detect malicious URLs
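As a toy illustration of feeding raw URL strings to a model (this is not Invincea's actual architecture; the trigram hashing, bucket count, and function names are assumptions for the sketch), raw URLs can be turned into fixed-size feature vectors without any hand-crafted features:

```python
# Toy sketch: hash each character trigram of a raw URL into a fixed-size
# count vector -- the kind of raw-string input a neural network could be
# trained on to classify URLs as benign or malicious.
import hashlib

N_FEATURES = 1024  # hypothetical feature-vector width

def url_features(url: str, n: int = N_FEATURES) -> list[int]:
    """Hash each character trigram of the URL into one of n count buckets."""
    vec = [0] * n
    padded = f"^{url.lower()}$"  # mark start/end so edge trigrams are distinct
    for i in range(len(padded) - 2):
        trigram = padded[i:i + 3]
        bucket = int(hashlib.md5(trigram.encode()).hexdigest(), 16) % n
        vec[bucket] += 1
    return vec

v = url_features("http://evil.example/login.php")
print(sum(v))  # total trigram count equals len(url)
```

The point of the raw-string approach is that the model learns which substrings matter, rather than relying on manually engineered URL features.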
Memory Forensics using VM Introspection for Cloud computing
Built a POC using open-source tools to introspect VM memory for DFIR memory forensics.
Imperva: HTTP/2 Attacks
Whitepaper is on Imperva web site: http://www.imperva.com/DefenseCenter/HackerIntelligenceReports
Hdr compression attack: HPACK uses dictionary tables for the static / dynamic parts of the data. HPACK Bomb attack: send an extremely long header to fill up the dynamic table, then send 1000’s of references to that specific header. Inflates srv memory by 4000x: 64 KB of attack packets → 64 MB of data on the server, and the server crashes. Wireshark was also affected (both used the same vulnerable library).
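The amplification arithmetic behind the HPACK bomb can be modeled in a few lines. This is a toy model, not real HPACK wire format; the specific header size and reference count below are illustrative assumptions chosen to reproduce the "under 64 KB sent, over 64 MB allocated" shape described in the talk:

```python
# Toy model of HPACK-bomb amplification: the attacker inserts one oversized
# header into the decoder's dynamic table, then sends many tiny index
# references; each reference expands to the full header on the server.

HEADER_SIZE = 16 * 1024   # bytes of one oversized header inserted into the table
REFERENCE_SIZE = 1        # an indexed-header-field reference is ~1 byte on the wire
N_REFERENCES = 4096       # number of index references the attacker sends

bytes_sent = HEADER_SIZE + N_REFERENCES * REFERENCE_SIZE   # attacker's cost
bytes_decoded = HEADER_SIZE + N_REFERENCES * HEADER_SIZE   # server's memory cost

amplification = bytes_decoded / bytes_sent
print(f"sent {bytes_sent} B, server allocates {bytes_decoded} B, x{amplification:.0f}")
```

The asymmetry is the whole attack: insertion into the dynamic table is paid once by the attacker, while every subsequent one-byte reference forces the server to materialize the full header again.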
Stream abuse: send frames out of context. Two requests on the same stream → MS IIS bluescreen.
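To make "frames out of context" concrete, here is a sketch that serializes two HEADERS frames reusing the same stream ID. The 9-byte frame header layout (24-bit length, type, flags, 31-bit stream ID) follows RFC 7540; the `frame` helper is a name invented for this sketch, and the one-byte payload is the HPACK indexed representation of the static-table entry `:method: GET`:

```python
# Sketch of the HTTP/2 frame header (RFC 7540): 24-bit length, 1-byte type,
# 1-byte flags, 31-bit stream id. Used to build two HEADERS frames that
# reuse stream id 1 -- the out-of-context stream abuse described above.
import struct

HEADERS = 0x1        # frame type
END_HEADERS = 0x4    # flag bit

def frame(ftype: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    """Serialize one HTTP/2 frame: 9-byte frame header followed by the payload."""
    length = len(payload)
    header = struct.pack(">BHBBI",
                         (length >> 16) & 0xFF, length & 0xFFFF,  # 24-bit length
                         ftype, flags,
                         stream_id & 0x7FFFFFFF)                  # 31-bit stream id
    return header + payload

# Two HEADERS frames on the same stream: a second request on a stream that is
# already (half-)closed is a protocol violation the server must reject cleanly.
f1 = frame(HEADERS, END_HEADERS, 1, b"\x82")  # 0x82 = HPACK index for ':method: GET'
f2 = frame(HEADERS, END_HEADERS, 1, b"\x82")
print(len(f1))  # 10 (9-byte frame header + 1-byte payload)
```

A server that tracks per-stream state incorrectly may mishandle the second frame instead of answering with a stream or connection error, which is the failure mode the talk demonstrated.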
HTTP/2 is still an immature protocol. But is the way of the future.