Android Root Detection Techniques

I am trying to create an exhaustive list of the various techniques that can be used to detect whether an Android device is rooted. These techniques have been taken from various sources, listed in the references section at the end of this post. If you are aware of techniques apart from the ones mentioned here, please add them in the comments.

  • Check Installed Packages:
    1. SuperSU app
    2. Rooting apps: apps that exploit privilege-escalation vulnerabilities to root the device (e.g., One Click Root, iRoot)
    3. Root apps: apps that require root privileges for their functionality, e.g., BusyBox, SetCPU, Titanium Backup.
    4. Root cloakers: apps that hide whether the device is rooted, e.g., Root Cloaker, Root Cloaker Plus.
    5. API hooking frameworks: libraries that provide API hooking functionality, e.g., Cydia Substrate, Xposed Framework.
  • Check Installed Files:
    1. Static Paths:
      1. /system/xbin/su, /system/bin/su or /system/xbin/../xbin/su (path manipulated)
      2. /system/xbin/busybox and all symbolic links of commands created by BusyBox.
      3. /data/app/<APK name> or /system/app/<APK name> of popular apps packages that are installed during or after rooting.
    2. Dynamic Paths: Parse the PATH variable, appending “/su” to each entry; open each in a loop
  • Check the BUILD tag: stock Android images from Google are built with the “release-keys” tag. If “test-keys” is present, this probably means the Android image is a developer build or an unofficial build. This value comes from the “ro.build.tags” system property (exposed in the SDK as android.os.Build.TAGS).
  • Check Directory Permissions:
    1. Rooting makes certain root folders readable, like /data, or writable, like /etc, /system/xbin, /system, /proc, /vendor/bin, etc. Run the mount command and check whether any partition is mounted with the “rw” flag, or try to create a file under the “/system” or “/data” folder.
    2. Attempt to remount the “/system” partition with the command “mount -o remount,rw /system” and check the return code.
  • Check Processes/Services/Tasks:
    1. ActivityManager.getRunningAppProcesses method returns a list of currently running application processes. This API can be used to determine if any app which requires root privileges is running.
    2. getRunningServices or getRunningTasks: Get currently running services or tasks.
  • Check Rooting Traits Using Shell Commands: Using Runtime.exec, ProcessBuilder or execve()
    1. su
    2. which su
    3. ps | grep <target> : lists currently running processes.
    4. ls -l <target>: check the existence of a file in the file system
    5. pm list packages
    6. pm path <package>: Output the full path of the targeted package
    7. cat /system/build.prop, or grep it, and check whether the build tags value is release-keys. This test can be used only as an indicator, as there are many contrary observations in the wild.
    8. Build Version: “ro.modversion” can be used to identify certain custom Android ROMs (e.g., CyanogenMod)
  • Check System Properties:
    1. ro.secure=0 means adb shell will run as root instead of the shell user.
    2. If ro.debuggable=1 or service.adb.root=1, then adb will run as root as well.

Needless to say, these techniques can be bypassed by function hooking, custom-built Android ROMs, etc.
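Several of the file- and property-based checks listed above can be sketched as follows. This is a minimal Python sketch for illustration only (on a device such checks would typically be written in Java or native code), and the path list is just a sample of the locations named earlier:

```python
import os

# Static paths where the su binary is commonly found on rooted devices
SU_PATHS = [
    "/system/xbin/su",
    "/system/bin/su",
    "/sbin/su",
]

def su_candidates(path_env):
    """Dynamic check: derive candidate su locations from a PATH string."""
    return [os.path.join(d, "su") for d in path_env.split(":") if d]

def device_looks_rooted(path_env=None):
    """Return True if su exists at a static or PATH-derived location."""
    candidates = list(SU_PATHS)
    if path_env is None:
        path_env = os.environ.get("PATH", "")
    candidates += su_candidates(path_env)
    return any(os.path.isfile(p) for p in candidates)

def build_tags(build_prop_text):
    """Extract the build tags value from the contents of build.prop."""
    for line in build_prop_text.splitlines():
        if line.startswith("ro.build.tags="):
            return line.split("=", 1)[1].strip()
    return None
```

As noted above, a tags value other than "release-keys" is only an indicator, not proof of rooting.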



Security Implications of Zygote Process Creation Model

In the previous post I discussed the Zygote process creation model in Android OS and the importance of having a process creation model different from plain Linux process creation on a mobile device. Before getting into the technical specifics, it is advisable to refresh the concepts pertaining to Linux process creation and ASLR.

In the Zygote process creation model, a process template is created at system startup, with a Dalvik VM (DVM) instance initialized and other essential libraries loaded. When an application launch request is received, this template process is forked and the application is loaded into it, saving significant time by avoiding the loading of libraries and the instantiation of the DVM on every launch. Also, Linux COW (copy-on-write) helps in reducing global memory usage. But this trade-off has a major security implication.

The Zygote process creation model causes two types of memory layout sharing on Android, which undermine the effectiveness of ASLR. Firstly, the code of an application is always loaded at the exact same memory location across different runs even when ASLR is present; and secondly, all running apps inherit the commonly used libraries from the Zygote process (including the libc library) and thus share the same virtual memory mappings of these libraries. If an attacker is able to get the memory mapping information for one process, he can easily predict it for the target process (as both share the same mappings for Zygote-loaded libraries). Thus developing ROP attacks becomes very easy, in spite of ASLR. For more details, read this excellent article by the Copperhead team.
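This sharing can be observed directly. The following Python sketch (an illustration on a desktop Linux box, not Android code) forks a process and compares the address of a libc symbol in parent and child; because fork duplicates the address space, the addresses come out identical, which is exactly the property an attacker exploits. A fork-plus-exec'd process, by contrast, would get a freshly randomized layout.

```python
import ctypes
import os

# Resolve the address of a libc symbol in this process
libc = ctypes.CDLL(None)
parent_addr = ctypes.cast(libc.getpid, ctypes.c_void_p).value

# Fork a child (the Zygote model) and have it report the same symbol's address
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: resolve the symbol again in its own address space
    child_libc = ctypes.CDLL(None)
    child_addr = ctypes.cast(child_libc.getpid, ctypes.c_void_p).value
    os.write(w, str(child_addr).encode())
    os._exit(0)

os.waitpid(pid, 0)
child_addr = int(os.read(r, 64))

# The forked child shares the parent's library mappings byte for byte
assert child_addr == parent_addr
```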

After understanding the pros and cons of the approach, a natural question is how it can be fixed without breaking existing applications. To start with, one approach is to switch to the Linux-style process creation model, i.e., fork and exec, rather than creating a template and forking it subsequently. This approach fixes the security issue, but reintroduces the very problem the Zygote model was created to solve. We will revisit this approach at the end and re-evaluate its applicability in light of the new performance enhancement measures introduced in Android.

Another approach to fixing the security issue with the Zygote process creation model was proposed by Lee et al. in this paper. They name their approach the Morula process creation model. Why it is called Morula is left as an exercise for the reader.

Morula Process Creation Model:
In this approach, a template process performs the common and time-consuming initialization tasks beforehand. The whole process is divided into two phases:

1. Preparation Phase: initiated by the Activity Manager. A preparation request is made to the Zygote process when the system is idle or lightly occupied. The Zygote process forks a child, which immediately calls exec() to establish a new memory image with a fresh randomized layout. Then, the new process constructs a DVM instance and loads all shared libraries and common Android classes, which would tremendously prolong the app launch time if not done in advance. Now the Morula process is fully created, waiting for a request to start an app. Multiple Morula processes can be created in order to accommodate a flurry of requests to start several apps. If a newly created process is not used immediately, it enters sleep mode and is awakened only when needed.

2. Transition Phase: When the Activity Manager requests an app launch, the request is routed through the Zygote process first, where a decision is made regarding whether the app should be started in a Morula process or in a fork of the Zygote process. Having this option allows the Morula model to be backward compatible with the Zygote model, in order to carry out an optimization strategy. Depending on the configuration, the application is launched using either of the two processes.
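The two phases can be sketched as a toy model. This is an illustrative Python sketch under the assumption of a simple in-memory pool standing in for pre-exec'd Morula processes; names like `MorulaPool` and `ZYGOTE_LAYOUT` are hypothetical, not from the paper:

```python
import random

ZYGOTE_LAYOUT = 0x40000000  # stands in for the one template layout every Zygote fork shares

class MorulaPool:
    """Toy model of the Morula two-phase process creation scheme."""

    def __init__(self):
        self.ready = []  # prepared Morula "processes" waiting for an app

    def prepare(self, n=1):
        """Preparation phase: while the system is idle, pre-create processes
        that have already done fork + exec (fresh randomized layout) and the
        slow DVM/library initialization."""
        for _ in range(n):
            layout = random.getrandbits(32)  # stands in for a fresh ASLR base
            self.ready.append({"layout": layout, "app": None, "model": "morula"})

    def launch(self, app):
        """Transition phase: launch in a prepared Morula process if one is
        available, otherwise fall back to a plain fork of the Zygote template
        (backward compatibility with the Zygote model)."""
        if self.ready:
            proc = self.ready.pop()
            proc["app"] = app
            return proc
        return {"layout": ZYGOTE_LAYOUT, "app": app, "model": "zygote"}
```

The point of the sketch: apps launched from the pool each get their own layout, while the fallback path reproduces the shared-layout behaviour of plain Zygote forking.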

From Android 4.4 onward, Google previewed a new runtime environment, called ART, as a replacement for Dalvik. From Android 5.0 it was fully deployed, and it is considered faster than the previous Dalvik VM. Factoring in the advances made by ART, Copperhead experimented with the fork and exec approach in their Copperhead OS. As per their findings, “The Morula proof of concept code has some issues like file descriptor leaks and needs to be ported to Lollipop. It’s much less important now that ART has drastically improved start-up time without the zygote.” To conclude, with ART, the fork and exec approach is not as slow as it was with the Dalvik runtime. For security-conscious users, a small performance hit should not be a big barrier.

It is highly recommended to read both the Morula paper and the blog by Copperhead to understand the nitty-gritty of the topic. And for the hackers out there, a patch implementing Morula is available here.

Android Zygote

In this post I will discuss a very interesting piece of the Android operating system. If you have worked with Android, you might have run the ps command and observed that all applications have the same parent PID (PPID). Android takes an unconventional approach to spawning processes, which ensures application startup is snappy. The process from which all Android applications are derived is called Zygote. So in the screenshot below, all the applications have a PPID of 1914, which is the PID of Zygote. In the rest of the post, I will talk about the need for Zygote, how it comes into existence, and some discussion about Zygote in general.


Need of Zygote?

When a typical Linux process is started – by forking the parent process – it goes through various setup steps, including the loading of libraries and resources. The details are out of scope for this post. This process setup consumes time; on our beefy desktops it is hardly noticeable, but in the case of Android, not all devices are high-spec, and the setup time is noticeable to the end user. As a workaround to normalize process startup times across devices, Android cold-starts a process during OS startup, from which applications can be forked when required. This process is called Zygote.

Zygote Startup?

After the Android device is turned on, and following all the boot-up steps, the init system starts and runs the /init.rc file to set up various environment variables and mount points, start native daemons, etc. There are many resources available on the internet discussing the Android boot-up process and the init system, so those details are skipped in this post. It is while executing init.rc that Zygote is started. There is no binary directly corresponding to Zygote; instead it is started by a binary called app_process. The corresponding line in init.rc can be found here.

service zygote /system/bin/app_process -Xzygote /system/bin --zygote --start-system-server

app_process first starts the Android Runtime, which in turn starts the system’s Dalvik VM, and finally the Dalvik VM invokes Zygote’s main().

Zygote initialization can be simplified into the following steps:
1. Register the Zygote socket (listens for connections on /dev/socket/zygote) for requests to start new apps
2. Preload all Java classes
3. Preload resources
4. Start the system server (not covered in this post)
5. Open the socket
6. Listen for connections
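The accept-and-fork loop at the heart of these steps can be illustrated with a small Python sketch. This is a stand-in for Android's native implementation, using a throwaway Unix socket path rather than /dev/socket/zygote: a listener accepts a "launch request", forks, and the child becomes the "app".

```python
import os
import socket
import tempfile
import threading

SOCKET_PATH = os.path.join(tempfile.mkdtemp(), "zygote")  # stand-in for /dev/socket/zygote

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCKET_PATH)
server.listen(1)

def serve_one_request():
    """Accept a single launch request, fork, and let the child act as the app."""
    conn, _ = server.accept()
    pid = os.fork()
    if pid == 0:
        # Child: inherits the preloaded state via COW and reports its own PID
        conn.sendall(str(os.getpid()).encode())
        conn.close()
        os._exit(0)
    # Parent ("Zygote"): reap the child and go back to listening
    os.waitpid(pid, 0)
    conn.close()

t = threading.Thread(target=serve_one_request)
t.start()

# An "Activity Manager" connects and asks for an app to be started
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(SOCKET_PATH)
app_pid = int(client.recv(64))
client.close()
t.join()

# The launched "app" is a distinct process forked from the listener
assert app_pid != os.getpid()
```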

Zygote Socket

As mentioned above, Zygote opens up a socket, /dev/socket/zygote, and listens on it for requests to start new applications. On receiving a request, it simply forks itself and launches the newly requested application. So the new application has the Dalvik VM already loaded, along with other necessary libraries and resources, and can start executing straight away.

One feature of Linux is important to understand here: the Copy-on-Write (COW) policy for forks. Forking in Linux involves creating a new process which is an exact copy of the parent process. With COW, Linux doesn’t actually copy anything; it just maps the pages of the new process over to those of the parent process and makes copies only when the new process writes to a page, thus saving a significant amount of memory and setup time as well. To add, in the case of Android, these pages are rarely written, as the libraries are mostly immutable and hardly change over the process lifetime.

If you want to learn more details, with corresponding code, this is an excellent resource.

In an upcoming post I will discuss the security implications of the Zygote model and various existing alternative workarounds.

TLS Sequence Numbers

When talking about SSL/TLS, most of the discussion centers around the cipher suites, the types of messages, or other complex cryptographic aspects. But there are many subtle things embedded in the protocol which are often skipped or not generally discussed. One such thing is sequence numbers. Like in TCP, a sequence number for messages is also maintained in the SSL/TLS protocol, and one gets to know of it only if one delves into the RFCs.

In the case of SSL/TLS, the sequence number is a simple count of messages sent and received. It is maintained implicitly, i.e., not sent in the messages explicitly. The protocol requires maintaining separate sequence number counters for the read and write sides respectively.

A touch of history: sequence numbers were not used in SSLv1 and were introduced only in SSLv2, making SSLv1 prone to replay attacks (against which sequence numbers protect).

The question arises: if the sequence number for a connection is maintained but never explicitly transmitted, then how is it useful? The answer is that sequence numbers are used in the MAC. To prevent message replay or modification attacks, the MAC is computed using the MAC secret, the sequence number, the message length, the message contents, and two fixed character strings. When either side calculates the MAC for a given message, if the sequence number does not correspond to the current message, message authentication will fail and the connection will be torn down with a fatal alert.
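A simplified illustration of this in Python, using HMAC-SHA256 rather than the exact SSLv3/TLS MAC construction: each record's MAC covers an implicit 64-bit sequence number, so a replayed or reordered record fails verification even though the number never appears on the wire.

```python
import hashlib
import hmac
import struct

def record_mac(mac_key, seq_num, record):
    """MAC over the implicit 64-bit sequence number plus the record bytes."""
    msg = struct.pack(">Q", seq_num) + record
    return hmac.new(mac_key, msg, hashlib.sha256).digest()

def verify(mac_key, seq_num, record, tag):
    """Recompute the MAC under the receiver's own sequence counter."""
    return hmac.compare_digest(record_mac(mac_key, seq_num, record), tag)

key = b"mac-secret"
record = b"application data"

# Sender MACs the record under its current write sequence number (say, 5)
tag = record_mac(key, 5, record)

# Receiver, whose read sequence number is also 5, accepts the record...
assert verify(key, 5, record, tag)

# ...but the same record replayed later (read counter now 6) fails the check
assert not verify(key, 6, record, tag)
```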

RFC 6101 states the following about how the sequence number should be maintained and what data type should be used. Note that by using uint64, the chances of overflow are minimized.

“Each party maintains separate sequence numbers for transmitted and received messages for each connection.  When a party sends or receives a change cipher spec message, the appropriate sequence number is set to zero.  Sequence numbers are of type uint64 and may not exceed 2^64-1.”

To summarize, the sequence number provides protection against attempts to delete or reorder messages.


KCI attacks against TLS

In the past two years or so, the SSL/TLS protocol has been under severe scrutiny, and rightly so, as it is one of the most widely used cryptographic protocols on the Internet. How secure a place the Internet is depends, directly or indirectly, on SSL/TLS. In the past there have been many vulnerabilities discovered, ranging from design issues like POODLE, FREAK, or LOGJAM to implementation bugs like HEARTBLEED and OpenSSL CCS. Recently, at the Usenix ’15 conference, another attack on TLS was presented – Prying open Pandora’s box: KCI attacks against TLS – where KCI stands for Key Compromise Impersonation. Although this attack is not as severe as other existing attacks, as per the authors’ claim it might still affect a tiny part of the Internet. In this post, let’s try to understand how the attack works and what the ways to protect against it are.

TL;DR: this attack is only possible when the communication happens using non-ephemeral (EC)DH and client certificates are used for authenticating the clients, with some more conditions thrown into the mix. Use of non-ephemeral (EC)DH is very limited on the Internet, and the use of client certificates is also rare, thus making this vulnerability less likely to occur.

Before getting into the details of this attack, it is presumed that readers know about the working of the SSL/TLS protocol in general and how SSL/TLS works when cipher suites supporting non-ephemeral (EC)DH are used. Also, understanding how a typical MITM attack against a TLS session works will be helpful.

Before getting into the actual attack phase, there is a pre-attack phase in which the attacker needs to collect information to carry out the attack.

Pre-Attack Phase: The attacker should be able to get possession of a client certificate and its corresponding private key that is or will be installed at the client. This might seem a little unlikely, but the authors say it is possible in cases like a software vendor shipping a product with pre-installed client certificates, or a malicious Android app installing one on the system, etc.

Attack Phase: A client C (e.g., a browser) initiates a TLS connection to a server S. The attacker M, acting as a man-in-the-middle, blocks the traffic from C to S. For the ClientHello message sent by the client, M responds with a ServerHello message, choosing a non-ephemeral cipher suite for the communication.

In the Certificate message, the attacker M sends the original server’s certificate, unlike in a typical MITM attack, where a new certificate (not the original server’s) is sent. We will see shortly how the attacker derives the master secret, as he does not have the private key of the certificate he sent.

The attacker asks for the client certificate by sending a CertificateRequest message and requests non-ephemeral (EC)DH client authentication. In the message, the attacker also asks for the client certificate to be of the same type as the server certificate it sent. By specifying the CA name in the CertificateRequest message, the attacker can ensure that the client uses the compromised pair of certificate and secret key to authenticate itself.

The client proceeds to finish the handshake as it normally would, oblivious to the attack being performed. The attacker does not have the private key of the server’s certificate and hence cannot derive the master secret the usual way. But he does know the client’s secret key, and uses that, together with the server’s public (EC)DH value, to calculate the master secret and complete the handshake.
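The core of the attack can be illustrated with a toy finite-field DH computation in Python (tiny illustrative parameters and made-up key values, not a real TLS group): with static DH on both sides, the shared secret depends only on the client's static key and the server's certified public value, so an attacker who holds the compromised client key can compute the same secret without ever knowing the server's private key.

```python
# Toy finite-field DH illustration (stand-in parameters, NOT a real TLS group)
p = 0xFFFFFFFFFFFFFFC5  # a 64-bit prime (2**64 - 59), for illustration only
g = 5

server_priv = 0x1234567890ABCDEF     # known only to the legitimate server
server_pub = pow(g, server_priv, p)  # static DH value in the server's certificate

client_priv = 0x0FEDCBA987654321     # the compromised client key the attacker holds
client_pub = pow(g, client_priv, p)  # static value in the client's certificate

# Both legitimate endpoints derive the same fixed-DH premaster secret
client_secret = pow(server_pub, client_priv, p)
server_secret = pow(client_pub, server_priv, p)

# The attacker never learns server_priv, yet the compromised client key plus
# the server's public certificate value yield the very same secret
attacker_secret = pow(server_pub, client_priv, p)

assert client_secret == server_secret == attacker_secret
```

This is exactly why ephemeral (EC)DHE handshakes are immune: fresh server randomness enters the secret on every connection, so the compromised client key alone is no longer enough.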

Let’s summarize the preconditions needed to pull off this attack, and then see how to prevent such an attack:

  1. Client Support: The client must support non-ephemeral (EC)DH cipher suites, i.e., offer the (EC)DH protocols in the ClientHello message. Additionally, the client implementation must support any of the fixed (EC)DH client authentication options implied by the client certificate types: rsa_fixed_dh, dss_fixed_dh, rsa_fixed_ecdh, ecdsa_fixed_ecdh.
  2. Server Support: The adversary attacking a server must get possession of a certificate of the server that contains static (EC)DH values. There are further subtle details discussed in the paper, like why such an attack is possible in the case of DSS or ECDSA certificates.
  3. Compromised Client Cert: The adversary must have possession of a client certificate and the corresponding secret key that must both be installed at the client.

On the server side:

  1. Disable non-ephemeral (EC)DH handshakes
  2. “Set appropriate X509 Key Usage extension for ECDSA and DSS certificates, and disable specifically the KeyAgreement flag.”

In the paper, the authors go on to motivate why this attack might be unlikely at present but still possible in future scenarios. IMHO, this might not be the most sought-after attack vector, but such research surely ensures that the TLS protocol will become more secure in the future.

Keep Hacking 😀

Why Firefox’s new Control Center design is not good

In the latest release of Firefox, version 42, Mozilla has added a new feature, the Control Center, to manage a site’s privacy and security controls. The way HTTPS connection indicators are shown in the address bar has also been updated. Mozilla’s blog post goes into detail to explain the changes and the motivation behind them. The changes are summarized in the image below.

One major change many might have noticed is the way the certificate information is shown on clicking the lock icon in the address bar. In older versions, clicking the HTTPS lock in the address bar used to show information about the certificate’s issuer.

Figure 2: Old style (source: wikipedia)

Post update, this has been changed to a mere indication of whether the connection is secure or not, and now you have to make another click (the arrow icon on the right of the pop-up) to see the information about the certificate’s issuer.

Figure 3: New Style

At first glance it might look innocuous, but in the light of recent MITM fiascoes like Lenovo’s Superfish and then Dell’s, it might not be. With the new design, an additional click is required to see the issuer information. This additional click discourages users from checking the issuer of the certificate and might inadvertently help in certain MITM attacks.

Some might argue that only power users look into such information, and that for a normal user all this is too complicated to comprehend, so they don’t. A green lock on the top right of the address bar is all they care about (do they?). IMHO, showing an additional line about the issuer in the pop-up does not alter the UX and ensures that users can keep an eye on possible malicious activity.


iOS Solid State NAND Storage

There is not much literature available on what the NAND storage of Apple’s iDevices looks like. While reading “Hacking and Securing iOS Applications” by Jonathan Zdziarski, I came across a description of how the NAND storage is laid out up to iOS 5. As a note of caution, it is very possible that this structure has changed in subsequent iOS versions.

Knowing the structure of storage is an important step in order to understand how encryption works in iOS. The text below is verbatim from the book.
The NAND is divided into six separate slices:

  • BOOT: Block zero is referred to as the BOOT block of the NAND, and contains a copy of Apple’s low level boot loader.
  • PLOG: Block 1 is referred to as effaceable storage, and is designed as a storage locker for encryption keys and other data that needs to be quickly wiped or updated. The PLOG is where three very important keys are stored, which you’ll learn about in this chapter: the BAGI, Dkey, and EMF! keys. This is also where a security epoch is stored, which caused iOS 4 firmware to seemingly brick devices if the owner attempted a downgrade of the firmware.
  • NVM: Blocks 2–7 are used to store the NVRAM parameters set for the device.
  • FIRM: Blocks 8–15 store the device’s firmware, including iBoot (Apple’s second stage boot loader), the device tree, and logos.
  • FSYS: Blocks 16–4084 (and higher, depending on the capacity of the device) are used for the filesystem itself. This is the filesystem portion of NAND, where the operating system and data are stored. The filesystem for both partitions is stored here.
  • RSRV: The last 15 blocks of the NAND are reserved.

If there is any recent material on this topic, please let me know by commenting below.

Keep Hacking 😀