Security Implications of Zygote Process Creation Model

In the previous post I discussed the Zygote process creation model in Android and why a mobile device benefits from a process creation model different from the standard Linux one. Before getting into the technical specifics, it is advisable to refresh the concepts of Linux process creation and ASLR.

In the Zygote process creation model, a template process is created at system startup with a Dalvik VM (DVM) instance initialized and the essential libraries already loaded. When an application launch request is received, this template process is forked and the application is loaded into the child, saving significant time by avoiding library loading and DVM instantiation on every launch. Linux copy-on-write (COW) also helps reduce global memory usage. But this trade-off has a major security implication.

The Zygote process creation model causes two types of memory layout sharing on Android, which undermine the effectiveness of ASLR. First, the code of an application is loaded at the exact same memory location across different runs, even with ASLR present. Second, all running apps inherit the commonly used libraries from the Zygote process (including libc) and thus share the same virtual memory mappings for these libraries. If an attacker obtains the memory mapping information for one process, he can trivially predict the mappings of a target process, since both share the Zygote-loaded mappings. This makes developing ROP attacks much easier, ASLR notwithstanding. For more details, read this excellent article by the Copperhead team.
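The second kind of sharing is easy to demonstrate on any Linux machine: a forked child inherits the parent's library mappings verbatim, which is exactly the situation of apps forked from Zygote. A minimal sketch (the helper names are mine, not Android's):

```python
import ctypes
import os

def libc_symbol_address():
    # Address of a libc function inside this process's mappings.
    libc = ctypes.CDLL(None)
    return ctypes.cast(libc.printf, ctypes.c_void_p).value

def child_shares_libc_mapping():
    # fork() duplicates the address space, so the child sees libc at the
    # same virtual address as the parent -- no re-randomization happens.
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:  # child: report where libc lives for us
        os.write(w, libc_symbol_address().to_bytes(8, "little"))
        os._exit(0)
    os.close(w)
    child_addr = int.from_bytes(os.read(r, 8), "little")
    os.waitpid(pid, 0)
    return child_addr == libc_symbol_address()
```

An exec(), by contrast, rebuilds the process image from scratch and lets the loader re-randomize the layout.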

After understanding the pros and cons of the approach, the natural question is how it can be fixed without breaking existing applications. One approach is to switch to the classic Linux process creation model, i.e., fork and exec, rather than creating a template and repeatedly forking it. This fixes the security issue, but reintroduces the very performance problem the Zygote model was created to solve. We will revisit this approach at the end and re-evaluate its applicability in light of the performance enhancements introduced in newer versions of Android.
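For reference, the fork-and-exec model looks like the sketch below; the exec() step is what gives each new process a freshly randomized layout (the binary path and helper name are illustrative, not Android internals):

```python
import os

def launch_fork_exec(binary, args):
    # Classic Linux process creation: fork() duplicates the parent,
    # then exec() replaces the child's image with a brand-new one,
    # so the loader applies ASLR afresh for every launch.
    pid = os.fork()
    if pid == 0:  # child
        os.execv(binary, [binary] + args)
        os._exit(127)  # only reached if exec fails
    return pid

# Zygote, in contrast, stops after the fork: the child keeps the
# template's memory image, and with it the template's layout.
```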

Another approach to fixing the security issue with the Zygote process creation model was proposed by Lee et al. in this paper. They name their approach the Morula process creation model. Why it is called Morula is left as an exercise for the reader.

Morula Process Creation Model:
In this approach, a template process performs the common and time-consuming initialisation tasks beforehand. The whole process is divided into two phases:

1. Preparation Phase: initiated by the Activity Manager. A preparation request is made to the Zygote process when the system is idle or lightly loaded. The Zygote process forks a child, which immediately calls exec() to establish a new memory image with a fresh randomized layout. The new process then constructs a DVM instance and loads all shared libraries and common Android classes, work that would tremendously prolong app launch time if not done in advance. The Morula process is now fully created, waiting for a request to start an app. Multiple Morula processes can be created to accommodate a flurry of requests to start several apps. If a newly created process is not used immediately, it enters sleep mode and is awakened only when needed.

2. Transition Phase: When the Activity Manager requests an app launch, the request is routed through the Zygote process first, where a decision is made as to whether the app should be started in a Morula process or in a fork of the Zygote process. Having this option keeps the Morula model backward compatible with the Zygote model and allows for optimization strategies. Depending on the configuration, the application is launched in one or the other.

From Android 4.4 onward, Google previewed a new runtime environment called ART as a replacement for Dalvik. From Android 5.0 it was fully deployed, and it is considered faster than the Dalvik VM. Factoring in the advances made by ART, Copperhead experimented with the fork-and-exec approach in CopperheadOS. As per their findings, “The Morula proof of concept code has some issues like file descriptor leaks and needs to be ported to Lollipop. It’s much less important now that ART has drastically improved start-up time without the zygote.” To conclude: with ART, the fork-and-exec approach is not as slow as it was under the Dalvik runtime, and for the security-paranoid a small performance hit should not be a big barrier.

It is highly recommended to read both the Morula paper and the Copperhead blog post to understand the nitty-gritty of the topic. And for the hackers out there, a patch implementing Morula is available here.

TLS Sequence Numbers

When talking about SSL/TLS, most of the discussion centers around cipher suites, message types, or other complex cryptographic aspects. But there are many subtle things embedded in the protocol which are often skipped or rarely discussed. One such thing is sequence numbers. As in TCP, a sequence number for messages is maintained in the SSL/TLS protocol, and one only gets to know this by delving into the RFCs.

In SSL/TLS, the sequence number is a simple count of records sent and received. It is maintained implicitly, i.e., never transmitted in the messages themselves. The protocol requires each side to maintain separate sequence number counters for the read and write states.

A touch of history: sequence numbers were not used in SSLv1 and were only introduced in SSLv2, leaving SSLv1 prone to the replay attacks that sequence numbers protect against.

The question arises: if the sequence number is maintained per connection but never explicitly transmitted, how is it useful? The answer is that sequence numbers are used in the MAC. To prevent message replay or modification attacks, the MAC is computed over the MAC secret, the sequence number, the message length, the message contents, and two fixed character strings. When either side calculates the MAC for a given message, if the sequence number does not correspond to the current message, message authentication fails and the connection is terminated with a fatal alert.

RFC 6101 (the SSL 3.0 specification) states the following about how the sequence number should be maintained and what data type should be used. Note that with a uint64, overflow is practically impossible.

“Each party maintains separate sequence numbers for transmitted and received messages for each connection.  When a party sends or receives a change cipher spec message, the appropriate sequence number is set to zero.  Sequence numbers are of type uint64 and may not exceed 2^64-1.”
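The role the sequence number plays can be sketched with a simplified record MAC in the style of TLS's HMAC-based construction (the exact input layout below is a simplification, not the full RFC encoding):

```python
import hashlib
import hmac
import struct

def record_mac(mac_key, seq_num, content_type, version, fragment):
    # Simplified TLS-style record MAC: the implicit 64-bit sequence
    # number is prepended to the record data before MACing, so a
    # replayed or reordered record fails verification.
    header = struct.pack("!QB2sH", seq_num, content_type, version,
                         len(fragment))
    return hmac.new(mac_key, header + fragment, hashlib.sha256).digest()

key = b"\x00" * 32
rec = b"hello"
mac0 = record_mac(key, 0, 23, b"\x03\x03", rec)  # first record
mac1 = record_mac(key, 1, 23, b"\x03\x03", rec)  # same bytes, next slot
# mac0 != mac1: replaying record 0 in slot 1 is detected even though
# the record bytes on the wire are identical.
```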

To summarize, the sequence number provides protection against attempts to delete or reorder messages.

[3] SSL and TLS: Theory and Practice by Rolf Oppliger

KCI attacks against TLS

For the past two years or so, the SSL/TLS protocol has been under severe scrutiny, and rightly so, as it is one of the most widely used cryptographic protocols on the Internet. How secure the Internet is depends, directly or indirectly, on SSL/TLS. Many vulnerabilities have been discovered in the past, ranging from design issues like POODLE, FREAK, and LOGJAM to implementation bugs like HEARTBLEED and the OpenSSL CCS injection. Recently, at the USENIX WOOT ’15 workshop, another attack on TLS was presented – Prying open Pandora’s box: KCI attacks against TLS – where KCI stands for Key Compromise Impersonation. Although this attack is not as severe as other existing attacks, per the authors’ claim it might still affect a tiny part of the Internet. In this post, let’s try to understand how the attack works and what the ways to protect against it are.

TL;DR: this attack is only possible when the communication uses non-ephemeral (EC)DH and client certificates are used to authenticate clients, with a few more conditions thrown into the mix. Use of non-ephemeral (EC)DH is very limited on the Internet, and client certificates are rarer still, making this vulnerability unlikely in practice.

Before getting into the details of this attack, it is presumed that readers know how the SSL/TLS protocol works in general, and how it works when cipher suites supporting non-ephemeral (EC)DH are used. An understanding of how a typical MITM attack against a TLS session works will also be helpful.

Before the actual attack phase, there is a pre-attack phase in which the attacker needs to collect information to carry out the attack.

Pre-Attack Phase: The attacker must get possession of a client certificate, and its corresponding private key, that is or will be installed at the client. This might seem unlikely, but the authors argue it is possible in cases such as a software vendor shipping a product with pre-installed client certificates, or a malicious Android app installing one on the system.

Attack Phase: A client C (e.g. a browser) initiates a TLS connection to a server S. The attacker M, acting as a man-in-the-middle, blocks the traffic from C to S. To the ClientHello message sent by the client, M responds with a ServerHello message, choosing a non-ephemeral cipher suite for the communication.

In the Certificate message, the attacker M sends the original server’s certificate, unlike in a typical MITM attack, where a new certificate (not the original server’s) is sent. We will see shortly how the attacker derives the master secret despite not having the private key of the certificate he sent.

The attacker asks for the client certificate by sending a CertificateRequest message, requesting non-ephemeral (EC)DH client authentication. In the message, the attacker also asks for the client certificate to be of the same type as the server certificate he sent. By specifying the CA name in the CertificateRequest message, the attacker can ensure that the client uses the compromised certificate and secret key pair to authenticate itself.

The client proceeds to finish the handshake as it normally would, oblivious to the attack being performed. The attacker does not have the private key of the server’s certificate and hence cannot derive the master secret that way. But he does know the client’s secret key, and in a fixed-DH handshake the premaster secret is computed from the client’s private key and the server’s public key (which sits in the public certificate), so the attacker can calculate the master secret and complete the handshake.
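The arithmetic behind that last step is plain Diffie-Hellman: in a fixed-(EC)DH handshake the premaster secret is g^(ab) mod p, computable from either private key and the other side's public value. A toy sketch (the tiny numbers are illustrative only):

```python
def dh_shared(peer_public, own_private, p):
    # Textbook finite-field Diffie-Hellman: (peer_public ^ own_private) mod p
    return pow(peer_public, own_private, p)

# Toy parameters for illustration; real TLS uses ~2048-bit primes.
p, g = 23, 5
client_priv, server_priv = 6, 15
client_pub = pow(g, client_priv, p)   # static value bound in the client cert
server_pub = pow(g, server_priv, p)   # static value bound in the server cert

# What the honest server would compute during a fixed-DH handshake:
server_view = dh_shared(client_pub, server_priv, p)

# What the attacker computes: he holds the compromised client_priv and
# reads server_pub straight out of the server's public certificate.
attacker_view = dh_shared(server_pub, client_priv, p)
# server_view == attacker_view: the attacker derives the same premaster
# secret without ever learning server_priv.
```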

Let’s summarize the pre-conditions needed to pull off this attack and then see how to prevent it:

  1. Client Support: The client must support non-ephemeral (EC)DH cipher suites, i.e., offer the (EC)DH suites in the ClientHello message. Additionally, the client implementation must support one of the fixed (EC)DH client authentication options implied by the client certificate types: rsa_fixed_dh, dss_fixed_dh, rsa_fixed_ecdh, ecdsa_fixed_ecdh.
  2. Server Support: The adversary attacking a server must get possession of a certificate of the server that contains static (EC)DH values. There are further subtle details discussed in the paper, such as why the attack is also possible with DSS or ECDSA certificates.
  3. Compromised Client Cert: The adversary must have possession of a client certificate and the corresponding secret key that must both be installed at the client.

To prevent such attacks, on the server side:

  1. Disable non-ephemeral (EC)DH handshakes
  2. “Set appropriate X509 Key Usage extension for ECDSA and DSS certificates, and disable specifically the KeyAgreement flag.”

In the paper, the authors go on to argue why this attack might be unlikely at present but still possible in future scenarios. IMHO, this might not be the most sought-after attack vector, but such research surely ensures that the TLS protocol will become more secure in the future.

Keep Hacking 😀

Why Firefox’s new Control Center design is not good

In the latest release of Firefox, version 42, Mozilla has added a new Control Center feature to manage a site’s privacy and security controls. The way HTTPS connection indicators are shown in the address bar has also been updated. Mozilla’s blog post explains the changes and the motivation behind them in detail. The changes are summarized in the image below.

One major change many might have noticed is the way certificate information is shown on clicking the lock icon in the address bar. In older versions, clicking the HTTPS lock in the address bar showed information about the certificate’s issuer.

Figure 2: Old style (source: wikipedia)

Post-update, this has been reduced to a mere indication of whether the connection is secure or not. You now have to make another click (the arrow icon on the right of the pop-up) to see information about the certificate’s issuer.

Figure 3: New Style

At first glance this might seem innocuous, but in light of recent MITM fiascoes like Lenovo’s Superfish and then Dell’s, it might not be. With the new design, an additional click is required to see the issuer information. This extra click discourages users from checking the certificate’s issuer and might inadvertently help certain MITM attacks go unnoticed.

Some might argue that only power users look into such information, and that for normal users all this is too complicated to comprehend, so they don’t. A green lock in the address bar is all they care about (is it?). IMHO, showing an additional line about the issuer in the pop-up does not hurt the UX and ensures that users can keep an eye out for possible malicious activity.


iOS Solid State NAND Storage

There is not much literature available on what the NAND storage of Apple’s iDevices looks like. While reading “Hacking and Securing iOS Applications” by Jonathan Zdziarski, I came across a description of the NAND layout as it stood up to iOS 5. As a note of caution, it is quite possible that this structure has changed in subsequent iOS versions.

Knowing the structure of the storage is an important step toward understanding how encryption works in iOS. The text below is verbatim from the book.
The NAND is divided into six separate slices:

  • BOOT: Block zero is referred to as the BOOT block of the NAND, and contains a copy of Apple’s low level boot loader.
  • PLOG: Block 1 is referred to as effaceable storage, and is designed as a storage locker for encryption keys and other data that needs to be quickly wiped or updated. The PLOG is where three very important keys are stored, which you’ll learn about in this chapter: the BAGI, Dkey, and EMF! keys. This is also where a security epoch is stored, which caused iOS 4 firmware to seemingly brick devices if the owner attempted a downgrade of the firmware.
  • NVM: Blocks 2–7 are used to store the NVRAM parameters set for the device.
  • FIRM: Blocks 8–15 store the device’s firmware, including iBoot (Apple’s second stage boot loader), the device tree, and logos.
  • FSYS: Blocks 16–4084 (and higher, depending on the capacity of the device) are used for the filesystem itself. This is the filesystem portion of NAND, where the operating system and data are stored. The filesystem for both partitions is stored here.
  • RSRV: The last 15 blocks of the NAND are reserved.
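As a quick reference, the layout above can be captured in a small lookup table. The block ranges follow the book's description; since FSYS's upper bound and RSRV's position vary with device capacity, the concrete numbers below assume the 4100-block example implied by the text:

```python
# Summary of the iOS (<= 5) NAND slices described above; ranges are
# inclusive block numbers for a hypothetical 4100-block part.
NAND_SLICES = [
    ("BOOT", 0, 0, "Apple's low-level boot loader"),
    ("PLOG", 1, 1, "effaceable storage: BAGI, Dkey, EMF! keys"),
    ("NVM", 2, 7, "NVRAM parameters"),
    ("FIRM", 8, 15, "iBoot, device tree, logos"),
    ("FSYS", 16, 4084, "filesystem (both partitions)"),
    ("RSRV", 4085, 4099, "reserved (last 15 blocks)"),
]

def slice_for_block(block):
    # Map a block number to the slice that contains it.
    for name, lo, hi, _desc in NAND_SLICES:
        if lo <= block <= hi:
            return name
    raise ValueError(f"block {block} out of range")
```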

If there is any recent material on this topic, please let me know by commenting below.

Keep Hacking 😀

RC4 must die

Rivest Cipher 4 (RC4) is one of the most popular stream ciphers, and as per the ICSI Certificate Notary project statistics, RC4-encrypted communication accounts for about 30 percent of SSL/TLS traffic. In the past few months, a couple of attacks, or rather enhancements of previously known attacks, against this widely used algorithm have been proposed, making future use of RC4 perilous and hopefully bringing the curtain down on its 28 years of successful existence.

Before getting into the details of the attacks, let us first revisit what SSL/TLS is. Secure Sockets Layer (SSL), together with its successor Transport Layer Security (TLS), is the most widely used secure communication protocol on the Internet today. SSL/TLS presently protects various kinds of application-level traffic: in HTTPS to encrypt web browsing traffic, and in IMAP and SMTP to cryptographically protect email traffic, to name a few. It is also extensively used in embedded systems, mobile devices, point-of-sale payment devices, etc. To learn more about SSL/TLS, please read my previous post.

An SSL session, to explain briefly, consists of two phases: the Handshake Protocol and the Record Protocol. The former consists of the two parties authenticating each other and establishing cryptographic session keys, which will be used to protect further communication. The latter phase uses the established session keys and symmetric-key cryptography to build a secure channel for application-layer data. Symmetric-key algorithms come in two flavors based on their underlying design:

– Block ciphers (CBC mode of operation, e.g. AES)
– Stream ciphers (e.g. RC4)

In the recent past, SSL/TLS has been in the limelight for various security flaws of varying severity, discussed over here. Broadly, the flaws are either algorithmic weaknesses or implementation weaknesses. Heartbleed and OpenSSL CCS occurred because of improper implementation in OpenSSL, a widely used SSL/TLS library. In contrast, POODLE involved an algorithmic weakness. Implementation vulnerabilities are relatively easy to fix, as patching the vulnerable code suffices, but vulnerabilities arising from shortcomings in algorithm design require replacing the vulnerable components altogether.

The two attacks are based on inherent design flaws of RC4 and are discussed below.

The first attack [1] is based on the fact that some of the pseudo-random bytes RC4 uses to encrypt messages are predictable. In 2013, researchers demonstrated that this lack of randomness can be exploited by observing more than 17 billion (2^34) separate encryptions of the same data. At the time, this was considered an impractical attack. Researchers have since refined the attack and claim that it now takes a tad more than 67 million (2^26) encryptions of the same data to recover the underlying plaintext with a 50 percent success rate. They successfully demonstrated that the enhanced attack can extract security tokens from both basic access authentication over HTTPS and the IMAP protocol, used for retrieving and storing e-mail.
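The biases these attacks exploit live in RC4's keystream generator, which is only a few lines of code. A textbook implementation, shown strictly for illustration and never for production use:

```python
def rc4_keystream(key, n):
    # Key-scheduling algorithm (KSA): permute S under the key bytes.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): emit n keystream bytes.
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)
```

The single-byte biases (e.g. the second keystream byte coming out zero about twice as often as chance) appear when this generator is run over many random keys, which is exactly the many-encryptions setting the attack needs.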

The second attack [2] is dubbed the “bar mitzvah” attack, given the age of the “invariance weakness” on which it is based. The attack was devised by researchers at the security firm Imperva and presented at the Black Hat security conference in Singapore. The invariance weakness in RC4 was responsible for the fatal exploits in 2001 against Wired Equivalent Privacy (WEP), the technology then used to encrypt Wi-Fi networks. The bar mitzvah attack requires sampling about one billion RC4 encryptions to decrypt a credit card number, password, or authentication cookie. The attack has a rather limited scope, as it is restricted to the first 100 bytes of ciphertext generated by RC4. In spite of this limitation, it is significantly faster than an exhaustive attack that guesses passwords or credit card numbers, and should thus be considered a practical threat.

Historically, block ciphers have not had a clean record either. AES in CBC mode has seen significant cryptanalytic attacks, like BEAST, Lucky 13, and POODLE, to name a few, all based on design shortcomings of the CBC mode of operation. After this series of attacks against block ciphers, RC4, a stream cipher, was recommended as a workaround, which also explains the huge chunk of RC4-encrypted traffic on the Internet. And now the two new attacks on RC4 push it toward the cliff.

Many argue that the above attacks against RC4 may not be easy to execute in practice, but if anything is to be learned from BEAST, Lucky 13, and POODLE, it is that attacks only get better. The FREAK vulnerability is a perfect example of how using old, archaic standards beyond a certain threshold can come back and bite us really hard. To further strengthen the case against RC4, RFC 7465 [4] prohibits the use of RC4 with any TLS version.

Various browser vendors are working on doing away with RC4 support in their browsers [7]. Mozilla Firefox [5] will stop supporting RC4 in Firefox 44 (expected in January 2016), while Google’s Chrome [6] and Microsoft’s Edge [8] will stop support by February 2016.


Detecting Microsoft HTTP.sys vulnerability

On the April 14th, 2015 Patch Tuesday, Microsoft released a patch for a remote code execution vulnerability in the HTTP.sys module of Windows. The vulnerability affected all recent versions of Windows, from Windows 7 to the Windows Server line. Microsoft’s bulletin MS15-034 described the vulnerability only briefly, leaving the details to be revealed by reverse engineering the patch. It was a race against time: administrators rushing to patch their servers, and attackers reverse engineering the patch to zero in on the exact vulnerability. The vulnerability was assigned CVE-2015-1635. In this post we will see what HTTP.sys is and how to detect the vulnerability. Understandably, the fix is to apply Microsoft’s patch.

What is HTTP.sys?

HTTP.sys is a kernel-mode driver that acts as an HTTP listener. Prior to HTTP.sys, Windows used the Windows Sockets API (Winsock), a user-mode component, to receive HTTP requests. Having the HTTP listener in the kernel has the following advantages [1]:

  • Kernel-mode caching: Requests for cached responses are served without switching to user mode.
  • Kernel-mode request queuing: Requests cause less overhead in context switching because the kernel forwards requests directly to the correct worker process. If no worker process is available to accept a request, the kernel-mode request queue holds the request until a worker process picks it up.

For more details on working and advantages of HTTP.sys, visit [2].


The vulnerability exists in the parsing of the Range header [3] of HTTP requests sent to the server. Sending a Range header with the value bytes=0-18446744073709551615 (2^64 - 1) triggers the overflow, and this can be used as a test to detect the HTTP.sys vulnerability on a server.

The following curl command can be used for testing (the URL is a placeholder; point it at a static, cacheable resource on the target server):

$ curl -v http://<target>/<static-resource> -H "Host: irrelevant" -H "Range: bytes=0-18446744073709551615"

If the response is “HTTP Error 400. The request has an invalid header name.”, the server is patched; any other response indicates the server is still vulnerable.
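The same probe is easy to script. A minimal checker, where the function name and result strings are my own (as with the curl test, point it at a static, cacheable resource):

```python
import http.client

def check_ms15_034(host, resource="/", port=80, timeout=5):
    # Send the oversized Range probe; a patched HTTP.sys rejects it
    # with HTTP 400, anything else suggests the host may be unpatched.
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", resource, headers={
            "Host": "irrelevant",
            "Range": "bytes=0-18446744073709551615",
        })
        status = conn.getresponse().status
    finally:
        conn.close()
    return "patched" if status == 400 else "possibly vulnerable"
```

This only sends the safe 0- variant; never send the BSoD variant below against a machine you do not own.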

Sending the following request can cause a Blue Screen of Death (BSoD), resulting in denial of service (again, the URL is a placeholder for a static resource on the target):

$ curl -v http://<target>/<static-resource> -H "Host: irrelevant" -H "Range: bytes=20-18446744073709551615"

As per comments on Hacker News [4], the vulnerability affects only those servers on which “Output Cache” or “Enable Kernel Caching” is enabled.

There is a ready-to-use tool available on GitHub for testing your server for the HTTP.sys vulnerability.

This attack is similar to the Range attack on Apache servers, which caused denial of service [5].

Keep Hacking :D.