Abstracts

Paper Abstracts

Session 1: Systems Security

GRIM: Leveraging GPUs for Kernel Integrity Monitoring
Lazaros Koromilas, Giorgos Vasiliadis (Qatar Computing Research Institute, HBKU), Elias Athanasopoulos (VU University Amsterdam), Sotiris Ioannidis (FORTH)

Kernel rootkits can compromise an operating system and give attackers persistent access and control, despite all recent advances in software protection. Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious changes, are a promising defense mechanism against rootkits. A KIM can be implemented either in software, using a hypervisor, or using extra hardware. The latter option is more attractive due to better performance and higher security, since the monitor is isolated from the potentially vulnerable host. To remain under the radar and avoid detection, it is paramount for a rootkit to conceal its malicious activities. To detect self-hiding rootkits, researchers have proposed snooping to infer suspicious behaviour in kernel memory. This is accomplished by constantly monitoring all memory accesses on the bus, rather than the actual memory area where the kernel is mapped. In this paper, we present GRIM, an external memory monitor that is built on commodity, off-the-shelf graphics hardware and is able to verify OS kernel integrity at a speed that outperforms all snapshot-based systems published so far. GRIM allows for checking eight thousand 64-bit values simultaneously at a 10 kHz snapshot frequency, which is sufficient to accurately detect the insertion of a self-hiding loadable kernel module. According to the state of the art, this detection could previously only be achieved with a snoop-based monitor. GRIM not only demonstrates that snapshot-based monitors can be significantly improved, but also offers a fully programmable platform that can be instantly deployed without requiring any modifications to the host it protects. Note that all snoop-based monitors require substantial changes at the microprocessor level.
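
As a rough illustration of the snapshot-based checking idea described above (and not of GRIM's GPU-side implementation, which reads kernel memory over DMA), the following Python sketch periodically hashes a memory region and flags any deviation from a trusted baseline; the region path and sampling interval are placeholders.

```python
import hashlib
import time

REGION_DUMP = "/tmp/kernel_text.img"   # hypothetical snapshot source of the monitored region
INTERVAL = 1e-4                        # 10 kHz snapshot frequency, as in the abstract

def snapshot() -> bytes:
    with open(REGION_DUMP, "rb") as f:
        return f.read()

def monitor() -> None:
    baseline = hashlib.sha256(snapshot()).digest()   # trusted reference, taken while clean
    while True:
        if hashlib.sha256(snapshot()).digest() != baseline:
            print("integrity violation: monitored region changed")
            break
        time.sleep(INTERVAL)

if __name__ == "__main__":
    monitor()
```

A snoop-based monitor would instead observe individual memory transactions on the bus, which is why it requires the hardware changes that a snapshot-based design like this avoids.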

Taming Transactions: Towards Hardware-Assisted Control Flow Integrity using Transactional Memory
Marius Muench (Eurecom), Fabio Pagani (Eurecom), Yan Shoshitaishvili (University of California, Santa Barbara), Christopher Kruegel (University of California, Santa Barbara), Giovanni Vigna (University of California, Santa Barbara), Davide Balzarotti (Eurecom)

Control Flow Integrity (CFI) is a promising defense technique against code-reuse and code-injection attacks. While proposals to use hardware features to support CFI already exist, there is still a growing demand for architectural CFI support on commodity hardware. To tackle this problem, in this paper we demonstrate that the Transactional Synchronization Extensions (TSX) recently introduced by Intel in the x86-64 instruction set can be used to support CFI. The main idea of our approach is to map control flow transitions into transactions. This way, violations of the intended control flow graph trigger transactional aborts, which constitutes the core of our TSX-based CFI solution. To prove the feasibility of our technique, we designed and implemented two coarse-grained CFI proof-of-concept implementations using the new TSX features. In particular, we show how hardware-supported transactions can be used to enforce both loose CFI (which does not need to extract the control flow graph in advance) and strict CFI (which requires pre-computed labels to achieve better precision). Both solutions are based on compile-time instrumentation. We evaluate the effectiveness and overhead of our implementations to demonstrate that a TSX-based implementation contains useful concepts for architectural control flow integrity support.
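
The enforcement itself relies on Intel TSX intrinsics and compile-time instrumentation; purely as a language-agnostic model of the strict, label-based variant, the Python sketch below treats each indirect transfer as a transaction that only "commits" if the target carries the label the call site expects. All names and the label value are illustrative, and the hardware transaction semantics are only mimicked by an exception.

```python
class TransactionAbort(Exception):
    """Stands in for a hardware transactional abort."""

def checked_call(target, expected_label, *args):
    # Caller side: "begin transaction" and remember the label it expects.
    label = getattr(target, "cfi_label", None)
    # Callee side: the instrumented prologue would read and compare the label;
    # a mismatch means the transaction never commits and execution rolls back.
    if label != expected_label:
        raise TransactionAbort(f"CFI violation calling {target!r}")
    return target(*args)           # transaction commits: transfer was legitimate

def legitimate_target(x):
    return x + 1
legitimate_target.cfi_label = 0x1337   # label assigned at compile time

def gadget(x):                          # no label: not a valid indirect-call target
    return x

print(checked_call(legitimate_target, 0x1337, 41))   # -> 42
try:
    checked_call(gadget, 0x1337, 41)
except TransactionAbort as e:
    print("aborted:", e)
```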

Automatic Uncovering of Tap Points From Kernel Executions
Junyuan Zeng (University of Texas at Dallas), Yangchun Fu (University of Texas at Dallas), Zhiqiang Lin (University of Texas at Dallas)

Automatic uncovering of tap points (i.e., places to deploy active monitoring) in an OS kernel is useful in many security applications such as virtual machine introspection, kernel malware detection, kernel rootkit profiling, and active kernel malware defense. However, the current practice for extracting tap points from an OS kernel is to either analyze the kernel source code or manually reverse engineer the kernel binary. This paper presents AutoTap, the first system that can automatically uncover tap points directly from a kernel binary. Specifically, starting from the execution of system calls (i.e., the user-level programming interface) and the exported kernel APIs (i.e., the kernel module/driver development interface), AutoTap automatically tracks kernel objects, resolves their kernel execution context, and associates that context with the objects, from which it derives the tap points based on how an object is accessed (e.g., whether the object is created, accessed, updated, traversed, or destroyed). The experimental results with a number of Linux kernels show that AutoTap is able to automatically uncover the tap points for many kernel objects, which would be very challenging to achieve with manual analysis. A case study using the uncovered tap points shows that they can be used to build a robust hidden-process detection tool at the hypervisor layer with very low overhead.
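
For illustration only (AutoTap's actual analysis works on binary execution traces), the following Python sketch shows the final step in spirit: turning a trace of typed object accesses into candidate tap points, keyed by object type and access kind. The trace records and program counters are invented.

```python
from collections import defaultdict

# (program counter, object type, access kind) records, invented for illustration
trace = [
    (0xffff0010, "task_struct", "create"),
    (0xffff0480, "task_struct", "update"),
    (0xffff0900, "task_struct", "destroy"),
    (0xffff0010, "task_struct", "create"),
]

def derive_tap_points(trace, kinds=("create", "destroy")):
    """Group interesting accesses by (object type, access kind)."""
    taps = defaultdict(set)
    for pc, obj, kind in trace:
        if kind in kinds:
            taps[(obj, kind)].add(pc)
    return taps

for (obj, kind), pcs in derive_tap_points(trace).items():
    print(obj, kind, sorted(hex(pc) for pc in pcs))
# task_struct create ['0xffff0010']
# task_struct destroy ['0xffff0900']
```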

Detecting Stack Layout Corruptions with Robust Stack Unwinding
Yangchun Fu (University of Texas at Dallas), Junghwan Rhee (NEC Laboratories America), Zhiqiang Lin (University of Texas at Dallas), Zhichun Li (NEC Laboratories America), Hui Zhang (NEC Laboratories America), Guofei Jiang (NEC Laboratories America)

The stack is a critical memory structure for ensuring the correct execution of programs, because control flow changes through the data stored on it, such as return addresses and function pointers. The stack has therefore been a popular target of attacks and exploits such as buffer overflows and return-oriented programming (ROP). We present a novel system that detects corruption of the stack layout using a robust stack-unwinding technique and detailed stack layouts extracted from the exception handling information commonly available in off-the-shelf binaries. Our evaluation with real-world ROP exploits demonstrates successful detection with an average performance overhead of only 3.93%, transparently and without access to the source code or debugging symbols of the protected binary.
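
As a highly simplified, self-contained model of layout-guided unwinding (the real system derives frame layouts from a binary's exception handling metadata), the sketch below walks return addresses frame by frame through an invented stack snapshot and flags a return address that falls outside known code, as a ROP pivot would.

```python
FRAME_TABLE = {
    # function: (code address range, number of local slots before the return address)
    "main":    (range(0x400000, 0x400100), 4),
    "handler": (range(0x400100, 0x400180), 2),
}

def unwind(stack):
    """Walk return addresses frame by frame, innermost first; flag corruption."""
    sp, fn = 0, "handler"                     # innermost frame in this snapshot
    while True:
        _, slots = FRAME_TABLE[fn]
        ret = stack[sp + slots]               # return-address slot per the layout
        if ret == 0:                          # sentinel marking the stack bottom
            return "stack layout consistent"
        caller = next((n for n, (r, _) in FRAME_TABLE.items() if ret in r), None)
        if caller is None:
            return f"corrupted stack: return address {hex(ret)} outside known code"
        sp, fn = sp + slots + 1, caller

benign = [0xaa, 0xbb, 0x400050, 0xcc, 0xdd, 0xee, 0xff, 0x0]     # handler -> main
rop    = [0xaa, 0xbb, 0xdeadbeef, 0xcc, 0xdd, 0xee, 0xff, 0x0]   # pivoted return address
print(unwind(benign))
print(unwind(rop))
```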

Session 2: Low-level Attacks and Defenses

APDU-level attacks in PKCS#11 devices
Claudio Bozzato (Ca’ Foscari University, Venice), Riccardo Focardi (Ca’ Foscari University, Venice and Cryptosense, Paris), Francesco Palmarini (Ca’ Foscari University, Venice), Graham Steel (Cryptosense, Paris)

In this paper we describe attacks on PKCS#11 devices that we successfully mounted by interacting with the low-level APDU protocol, used to communicate with the device. They exploit proprietary implementation weaknesses which allow attackers to bypass the security enforced at the PKCS#11 level. Some of the attacks leak, as cleartext, sensitive cryptographic keys in devices that were previously considered secure. We present a new threat model for the PKCS#11 middleware and we discuss the new attacks with respect to various attackers and application configurations.
NOTE: all the attacks presented in this paper were promptly reported to the manufacturers, following a responsible disclosure process.

CloudRadar: A Real-Time Side-Channel Attack Detection System in Clouds
Tianwei Zhang (Princeton University), Yinqian Zhang (Ohio State University), Ruby B. Lee (Princeton University)

We present CloudRadar, a system to detect, and hence mitigate, cache-based side-channel attacks in multi-tenant cloud systems. CloudRadar operates by correlating two events: first, it uses signature-based detection to identify when the protected virtual machine (VM) executes a cryptographic application; at the same time, it uses anomaly-based detection techniques to monitor the co-located VMs for abnormal cache behaviors that are typical of cache-based side-channel attacks. We show that the correlated occurrence of these two events offers strong evidence of a side-channel attack. Compared to other work on side-channel defenses, CloudRadar has the following advantages: first, CloudRadar focuses on the root causes of cache-based side-channel attacks and is hence hard to evade using metamorphic attack code, while maintaining a low false positive rate. Second, CloudRadar is designed as a lightweight patch to existing cloud systems, which requires no new hardware support and no hypervisor, operating system, or application modifications. Third, CloudRadar provides real-time protection and can detect side-channel attacks within milliseconds. We demonstrate a prototype implementation of CloudRadar in OpenStack. Our evaluation suggests that CloudRadar achieves negligible performance overhead with high detection accuracy.
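
To make the correlation step concrete (this is an illustrative sketch, not CloudRadar's algorithm), the Python snippet below raises an alarm only when a cache anomaly observed on a co-located VM falls within a short window of the protected VM running a cryptographic workload; the window size and event timestamps are invented.

```python
WINDOW = 0.005   # seconds; hypothetical correlation window

def correlate(crypto_events, anomaly_events, window=WINDOW):
    """Both arguments are sorted lists of event timestamps in seconds."""
    alarms = []
    for a in anomaly_events:
        if any(abs(a - c) <= window for c in crypto_events):
            alarms.append(a)   # anomaly coincides with a crypto execution
    return alarms

crypto_exec   = [1.000, 1.004, 2.500]   # signature detector: protected VM runs crypto
cache_anomaly = [1.003, 3.100]          # anomaly detector: co-located VM cache behavior
print(correlate(crypto_exec, cache_anomaly))   # -> [1.003]: likely side-channel activity
```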

Session 3: Measurement Studies

The Abuse Sharing Economy: Understanding the Limits of Threat Exchanges
Kurt Thomas (Google), Rony Amira (Google), Adi Ben-Yoash (Google), Ari Berger (Google), Ori Folger (Google), Amir Hardon (Google), Elie Bursztein (Google), Michael Bailey (University of Illinois at Urbana-Champaign)

The underground commoditization of compromised hosts suggests a tacit capability in which miscreants leverage the same machine—subscribed to by multiple criminal ventures—to simultaneously profit from spam, fake account registration, malicious hosting, and other forms of automated abuse. To expedite the detection of these commonly abusive hosts, there are now multiple industry-wide efforts that aggregate abuse reports into centralized threat exchanges. In this work, we investigate the potential benefit of global reputation tracking and the pitfalls therein. We develop our findings from a snapshot of 45 million IP addresses abusing six Google services, including Gmail, YouTube, and reCAPTCHA, between April 7 and April 21, 2015. We estimate the scale of end hosts controlled by attackers, expose underground biases that skew the abuse perspectives of individual web services, and examine how frequently criminals re-use the same infrastructure to attack multiple, heterogeneous services. Our results indicate that an average Google service can block 14% of abusive traffic based on threats aggregated from seemingly unrelated services, though we demonstrate that outright blacklisting incurs an untenable volume of false positives.
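
The cross-service blocking figure quoted above boils down to a set overlap; purely as a toy illustration (service names and IPs are made up), the Python snippet below computes, for each service, the share of its abusive IPs already reported by the others.

```python
# Hypothetical per-service reports of abusive IPs
abusive = {
    "mail":    {"10.0.0.1", "10.0.0.2", "10.0.0.3"},
    "video":   {"10.0.0.3", "10.0.0.4"},
    "captcha": {"10.0.0.2", "10.0.0.5", "10.0.0.6"},
}

def cross_service_hit_rate(service: str) -> float:
    """Share of this service's abusive IPs already reported by the other services."""
    own = abusive[service]
    others = set().union(*(ips for name, ips in abusive.items() if name != service))
    return len(own & others) / len(own)

for service in abusive:
    print(f"{service}: {cross_service_hit_rate(service):.0%} of abusive IPs seen elsewhere")
```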

SANDPRINT: Fingerprinting Malware Sandboxes to Provide Intelligence for Sandbox Evasion
Akira Yokoyama (Yokohama National University), Kou Ishii (Yokohama National University), Rui Tanabe (Yokohama National University), Yinmin Papa (Yokohama National University), Katsunari Yoshioka (Yokohama National University), Tsutomu Matsumoto (Yokohama National University), Takahiro Kasama (National Institute of Information and Communications Technology), Daisuke Inoue (National Institute of Information and Communications Technology), Michael Brengel (CISPA, Saarland University), Michael Backes (CISPA, Saarland University & MPI-SWS), Christian Rossow (CISPA, Saarland University)

To cope with the ever-increasing volume of malware samples, automated program analysis techniques are inevitable. Malware sandboxes in particular have become the de facto standard for extracting a program’s behavior. However, the strong need to automate program analysis also bears the risk that anyone who can submit programs can learn and leak the characteristics of a particular sandbox. In this paper, we introduce SandPrint, a program that measures and leaks characteristics of Windows-targeted sandboxes. We submit our tool to 20 malware analysis services and collect 2666 analysis reports that cluster into 76 sandboxes. We then systematically assess whether an attacker can find a subset of characteristics that is inherent to all sandboxes, and not just characteristic of a single sandbox. In fact, using supervised learning techniques, we show that adversaries can automatically generate a classifier that can reliably tell a sandbox and a real system apart. Finally, we show that we can use similar techniques to stealthily detect commercial malware security appliances from three popular vendors.
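
The classification step can be illustrated with a few lines of scikit-learn; the feature set below (core count, RAM, uptime, presence of user documents) and the training data are invented stand-ins for the fingerprint features the paper actually collects.

```python
from sklearn.ensemble import RandomForestClassifier   # third-party: scikit-learn

# Invented fingerprint features: [cpu_cores, ram_gb, uptime_minutes, has_user_documents]
X = [
    [1, 1,    3, 0],   # sandbox-like
    [2, 2,    7, 0],   # sandbox-like
    [4, 8,  900, 1],   # real system
    [8, 16, 300, 1],   # real system
]
y = [1, 1, 0, 0]       # 1 = sandbox, 0 = real system

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[1, 2, 5, 0]]))   # -> [1]: classified as a sandbox
```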

Enabling Network Security Through Active DNS Datasets
Athanasios Kountouras (Georgia Institute of Technology), Panagiotis Kintis (Georgia Institute of Technology), Chaz Lever (Georgia Institute of Technology), Yizheng Chen (Georgia Institute of Technology), Yacin Nadji (Netrisk), David Dagon (Georgia Institute of Technology), Manos Antonakakis (Georgia Institute of Technology), Rodney Joffe (Neustar)

Most modern cyber crime leverages the Domain Name System (DNS) to attain high levels of network agility and make detection of Internet abuse challenging. Most malware, a key component of illicit Internet operations, is programmed to locate the IP address of its command-and-control (C&C) server through DNS lookups. To make the malicious infrastructure both agile and resilient, malware authors often use sophisticated communication methods that utilize DNS (e.g., domain generation algorithms) for their campaigns. In general, Internet miscreants make extensive use of short-lived disposable domains to promote a large variety of threats and support their criminal network operations.
To effectively combat Internet abuse, the security community needs access to freely available and open datasets. Such datasets will enable the development of new algorithms for the early detection and tracking of modern Internet threats across their lifetime. To that end, we have created a system, Thales, that actively queries and collects records for massive numbers of domain names from various seeds. These seeds are collected from multiple public sources and are, therefore, free of privacy concerns. The results of this effort will be made freely available to the research community. With three case studies we demonstrate the detection merit of the collected active DNS datasets. We show that (i) more than 75% of the domain names in PBL appear in our datasets several weeks (and in some cases months) in advance, (ii) existing DNS research can be implemented using only active DNS, and (iii) malicious campaigns can be identified with the signal provided by active DNS.
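
At its core, active DNS collection is repeated resolution of a seed list with timestamps attached; the Python sketch below shows that core loop using the third-party dnspython package, with placeholder seed domains (Thales itself operates at a vastly larger scale).

```python
import datetime

import dns.exception   # third-party: pip install dnspython
import dns.resolver

SEEDS = ["example.com", "example.org"]   # placeholder seed domains

def collect(seeds):
    """Resolve each seed once and record (timestamp, name, rrtype, answer)."""
    records = []
    ts = datetime.datetime.utcnow().isoformat()
    for name in seeds:
        try:
            for rr in dns.resolver.resolve(name, "A"):
                records.append((ts, name, "A", rr.to_text()))
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.exception.Timeout):
            records.append((ts, name, "A", None))
    return records

for row in collect(SEEDS):
    print(row)
```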

Session 4: Malware Analysis

A Formal Framework for Environmentally Sensitive Malware
Jeremy Blackthorne (Rensselaer Polytechnic Institute), Benjamin Kaiser (Rensselaer Polytechnic Institute), Bulent Yener (Rensselaer Polytechnic Institute)

Theoretical investigations of obfuscation have been built around a model of a single Turing machine which interacts with a user. A drawback of this model is that it cannot account for the most common approach to obfuscation used by malware, the observer effect. The observer effect describes the situation in which the act of observing something changes it. Malware implements the observer effect by detecting and acting on changes in its environment caused by user observation. Malware that leverages the observer effect is called environmentally sensitive.
To account for environmental sensitivity, we initiate a theoretical study of obfuscation with regard to programs that interact with a user and an environment. We define the System-Interaction model to formally represent this additional dimension of interaction. We also define a semantically obfuscated program within our model as one that hides all semantic predicates from a computationally bounded adversary. This is possible while still remaining useful because semantically obfuscated programs can interact with an operating system while showing nothing to the user. We analyze the necessary and sufficient conditions for achieving this standard of obfuscation and experimentally demonstrate a candidate approach to achieving those conditions. Our approach utilizes asymmetric cryptography within a multi-threaded program with race conditions.

AVClass: A Tool for Massive Malware Labeling
Marcos Sebastian (IMDEA Software Institute), Richard Rivera (IMDEA Software Institute & Universidad Politécnica de Madrid), Platon Kotzias (IMDEA Software Institute & Universidad Politécnica de Madrid), Juan Caballero (IMDEA Software Institute)

Labeling a malicious executable as a variant of a known family is important for security applications such as triage and lineage, and for building reference datasets used, in turn, to evaluate malware clustering and train malware classification approaches. Oftentimes, such labeling is based on the labels output by antivirus engines. While AV labels are well known to be inconsistent, there is often no other information available for labeling, so security analysts keep relying on them. However, current approaches for extracting family information from AV labels are manual and inaccurate. In this work, we describe AVCLASS, an automatic labeling tool that, given the AV labels for a potentially massive number of samples, outputs the most likely family names for each sample. AVCLASS implements novel automatic techniques to address three key challenges: normalization, removal of generic tokens, and alias detection. We have evaluated AVCLASS on 10 datasets comprising 8.9 M samples, larger than any dataset used by malware clustering and classification works. AVCLASS leverages labels from any AV engine, e.g., all 99 AV engines seen in VirusTotal, the largest engine set in the literature. AVCLASS’s clustering achieves F1 measures of up to 93.9 on labeled datasets, and each cluster is labeled with a fine-grained family name commonly used by the AV vendors. Upon publication, we will release AVCLASS to the community.
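
A drastically simplified version of that pipeline (tokenize labels, drop generic tokens, map aliases, take a plurality vote) fits in a few lines of Python; the generic-token list and alias map below are tiny invented examples, not AVCLASS's curated ones.

```python
import re
from collections import Counter

GENERIC = {"trojan", "win32", "generic", "malware", "variant", "heur", "agent"}
ALIASES = {"zeus": "zbot"}   # tiny invented alias map

def family_from_labels(av_labels):
    """Normalize labels, drop generic tokens, map aliases, take a plurality vote."""
    tokens = []
    for label in av_labels:
        for tok in re.split(r"[^a-z0-9]+", label.lower()):
            if len(tok) > 3 and tok not in GENERIC and not tok.isdigit():
                tokens.append(ALIASES.get(tok, tok))
    counts = Counter(tokens)
    return counts.most_common(1)[0][0] if counts else None

labels = ["Trojan.Win32.Zbot.abcd", "Win32/Zeus.Generic!heur", "Zbot-Variant.1234"]
print(family_from_labels(labels))   # -> 'zbot'
```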

Semantics-Preserving Dissection of JavaScript Exploits via Dynamic JS-Binary Analysis
Xunchao Hu (Syracuse University), Aravind Prakash (Binghamton University), Jinghan Wang (Syracuse University), Rundong Zhou (Syracuse University), Yao Cheng (Syracuse University), Heng Yin (Syracuse University)

JavaScript exploits pose a severe threat to computer security. Once a zero-day exploit is captured, it is critical to quickly pinpoint the JavaScript statements that uniquely characterize the exploit and the location of the payload within it. However, current diagnosis techniques are inadequate because they approach the problem either from a JavaScript perspective, failing to account for “implicit” data flow invisible at the JavaScript level, or from a binary execution perspective, failing to present a JavaScript-level view of the exploit. In this paper, we propose JScalpel, a framework that automatically bridges the semantic gap between the JavaScript level and the binary level for dynamic JS-binary analysis. With this technique, JScalpel can automatically pinpoint the exploitation or payload-injection components of JavaScript exploits and generate minimized exploit code and a Proof-of-Vulnerability (PoV). Using JScalpel, we analyze 15 JavaScript exploits (9 recent memory corruption exploits from Metasploit, 4 exploits from 3 different exploit kits, and 2 wild exploits) and successfully recover the payload and a minimized exploit for each of them.

Session 5: Network Security

The Messenger Shoots Back: Network Operator Based IMSI Catcher Detection
Adrian Dabrowski (SBA Research), Georg Petzl (T-Mobile Austria), Edgar R. Weippl (SBA Research)

An IMSI Catcher, also known as a Stingray or rogue cell, is a device that can be used not only to locate cellular phones, but also to intercept communication content such as phone calls, SMS, or data transmissions, unbeknownst to the user. They are readily available as commercial products as well as do-it-yourself projects running open-source software, and are obtained and used by law enforcement agencies and criminals alike. Multiple countermeasures have been proposed recently to detect such devices from the user’s point of view, but they are limited to the immediate vicinity of the user.
In this paper we are the first to present and discuss multiple detection capabilities from the network operator’s point of view, and to evaluate them on a real-world cellular network in cooperation with a European mobile network operator with over four million subscribers. Moreover, we draw a comprehensive picture of current threats against mobile phone devices and networks, including 2G, 3G and 4G IMSI Catchers, and present detection and mitigation strategies under the unique large-scale circumstances of a real European carrier. One of the major challenges from the operator’s point of view is that cellular networks were specifically designed to reduce global signaling traffic and to manage as many transactions regionally as possible. Hence, contrary to popular belief, network operators by default do not have a global view of their network.
Our proposed solution can be readily added to existing network monitoring infrastructures and includes, among other things, plausibility checks of location update trails, monitoring of device-specific round-trip times, and an offline detection scheme for cipher downgrade attacks, as commonly used by commercial IMSI Catchers.
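
One of those checks, trail plausibility, amounts to asking whether consecutive location updates imply impossible travel; the Python toy below does exactly that with invented cell coordinates and an arbitrary speed threshold, whereas an operator-side deployment would of course use the real cell database.

```python
import math

CELLS = {"A": (48.20, 16.37), "B": (48.21, 16.39), "C": (47.07, 15.44)}  # lat, lon (invented)
MAX_KMH = 300   # arbitrary plausibility threshold

def distance_km(p, q):
    """Equirectangular approximation; good enough for a sketch."""
    lat = math.radians((p[0] + q[0]) / 2)
    dx = math.radians(q[1] - p[1]) * math.cos(lat)
    dy = math.radians(q[0] - p[0])
    return 6371 * math.hypot(dx, dy)

def implausible(trail):
    """trail: list of (timestamp_seconds, cell_id) location updates."""
    for (t1, c1), (t2, c2) in zip(trail, trail[1:]):
        hours = max((t2 - t1) / 3600, 1e-9)
        if distance_km(CELLS[c1], CELLS[c2]) / hours > MAX_KMH:
            return True   # impossible travel: the trail deserves a closer look
    return False

print(implausible([(0, "A"), (60, "B")]))   # False: neighbouring cells, plausible
print(implausible([(0, "A"), (60, "C")]))   # True: roughly 140 km in one minute
```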

On the Feasibility of TTL-based Filtering for DRDoS Mitigation
Michael Backes (CISPA, Saarland University & MPI-SWS), Thorsten Holz (Horst Görtz Institute for IT-Security, Ruhr University Bochum), Christian Rossow (CISPA, Saarland University), Teemu Rytilahti (Horst Görtz Institute for IT-Security, Ruhr University Bochum), Milivoj Simeonovski (CISPA, Saarland University), Ben Stock (CISPA, Saarland University)

One of the major disturbances for network providers in recent years has been Distributed Reflective Denial-of-Service (DRDoS) attacks. In such an attack, the attacker spoofs the IP address of a victim and sends a flood of tiny packets to vulnerable services, which then respond with much larger replies to the victim. Led by the idea that the attacker cannot fabricate the number of hops between the amplifier and the victim, Hop Count Filtering (HCF) mechanisms that analyze the Time to Live (TTL) of incoming packets have been proposed as a solution.
In this paper, we evaluate the feasibility of using Hop Count Filtering to mitigate DRDoS attacks. To that end, we detail how a server can use active probing to learn the TTLs of alleged packet senders. Based on data sets of benign and spoofed NTP requests, we find that a TTL-based defense could block over 75% of spoofed traffic, while allowing 85% of benign traffic to pass. To achieve this performance, however, such an approach must allow for a tolerance of +/-2 hops.
Motivated by this, we investigate the tacit assumption that an attacker cannot learn the correct TTL value. Using a combination of tracerouting and BGP data, we build statistical models that allow the attacker to estimate the TTL within that tolerance level. We observe that, by carefully choosing the amplifiers used, the attacker is able to circumvent such TTL-based defenses. Finally, we argue that any (current or future) defensive system based on TTL values can be bypassed in a similar fashion, and conclude that future research must be steered towards more fundamental solutions that thwart any kind of IP spoofing attack.
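
The defensive check itself is small; as a sketch of the HCF idea evaluated above (not the authors' exact implementation), the snippet below infers a hop count from the observed TTL by assuming one of the common initial TTL values and accepts a packet only if that hop count is within +/-2 of the value learned by active probing.

```python
INITIAL_TTLS = (32, 64, 128, 255)   # common initial TTL values of popular OSes

def hops_from_ttl(ttl: int) -> int:
    initial = min(t for t in INITIAL_TTLS if t >= ttl)
    return initial - ttl

def accept(observed_ttl: int, learned_hops: int, tolerance: int = 2) -> bool:
    """Accept a packet if its implied hop count matches the probed value within +/-tolerance."""
    return abs(hops_from_ttl(observed_ttl) - learned_hops) <= tolerance

print(accept(observed_ttl=53, learned_hops=11))    # True: 64 - 53 = 11 hops
print(accept(observed_ttl=120, learned_hops=11))   # False: 128 - 120 = 8 hops, off by 3
```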

Session 6: Systematization of Knowledge and Experience Reports

A Look into 30 Years of Malware Development from a Software Metrics Perspective
Alejandro Calleja (Universidad Carlos III de Madrid), Juan Tapiador (Universidad Carlos III de Madrid), Juan Caballero (IMDEA Software Institute)

During the last decades, the problem of malicious and unwanted software (malware) has surged in number and sophistication. Malware plays a key role in most of today’s cyber attacks and has consolidated into a commodity in the underground economy. In this work, we analyze the evolution of malware from the early 1980s to date from a software engineering perspective. We analyze the source code of 151 malware samples and obtain measures of their size, code quality, and estimates of the development costs (effort, time, and number of people). Our results suggest an exponential increase of nearly one order of magnitude per decade in aspects such as size and estimated effort, with code quality metrics similar to those of regular software. Overall, this supports otherwise confirmed claims about the increasing complexity of malware and its production progressively becoming an industry.
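
Development-cost estimates of this kind are commonly derived from code size with a parametric model; assuming a basic COCOMO organic-mode model (the abstract does not say which model the paper uses), the sketch below turns source line counts into effort, schedule, and team-size estimates.

```python
def cocomo_organic(sloc: int):
    """Basic COCOMO, organic mode: effort in person-months, schedule in months."""
    kloc = sloc / 1000
    effort = 2.4 * kloc ** 1.05
    schedule = 2.5 * effort ** 0.38
    return effort, schedule, effort / schedule   # average team size

for sloc in (1_000, 10_000, 100_000):   # roughly one order of magnitude apart
    effort, months, people = cocomo_organic(sloc)
    print(f"{sloc:>7} SLOC: {effort:6.1f} person-months, {months:4.1f} months, {people:4.1f} people")
```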

Small Changes, Big Changes: An Updated View on the Android Permission System
Yury Zhauniarovich (Qatar Computing Research Institute, HBKU), Olga Gadyatskaya (SnT, University of Luxembourg)

Since the appearance of Android, its permission system has been central to many studies of Android security. For a long time, the description of the architecture provided by Enck et al. was reused unchanged in various research papers. The introduction of the highly anticipated runtime permissions in Android 6.0 forced us to reconsider this model. To our surprise, the permission system has evolved with almost every release.
After analyzing 16 Android versions, we can confirm that the modifications, especially those introduced in Android 6.0, considerably affect the validity of earlier conclusions and tools for newer releases. For instance, in Android 6.0 some signature permissions can be granted to third-party apps even if they are signed with a different certificate, and many permissions previously considered highly threatening are now granted by default.
In this paper, we carefully examine the updated system, the introduced changes, and their security implications. We highlight some bizarre behaviors that may be of interest to developers and security researchers. We also found a number of bugs during our analysis and provided patches to AOSP where possible.

Who Gets the Boot? Analyzing Victimization by DDoS-as-a-Service
Arman Noroozian (Delft University of Technology, The Netherlands), Maciej Korczynski (Delft University of Technology, The Netherlands), Carlos Hernandez Ganan (Delft University of Technology, The Netherlands), Daisuke Makita (Yokohama National University, National Institute of Information and Communications Technology, Japan), Katsunari Yoshioka (Yokohama National University, Japan), Michel van Eeten (Delft University of Technology, The Netherlands)

Since the rise of amplification DDoS attacks, much research has been devoted to understanding the technical properties of such attacks and the emergence of the underground DDoS-as-a-service economy, especially the so-called booters. Much less is known about the consequences for victimization patterns. We profile victims via data from amplification DDoS honeypots. We develop victimization rates and present explanatory models capturing key determinants of these rates. Our analysis demonstrates that the bulk of the attacks are directed at users in access networks, not at hosting networks, and even less at enterprise networks. We find that victimization in broadband ISPs is highly proportional to the number of ISP subscribers, and that certain countries have significantly higher or lower victim rates which are only partially explained by institutional factors such as ICT development. We also find that the victimization rate in hosting networks is somewhat proportional to the number of hosted domains and routed IPs, and that content popularity somewhat explains variation in victimization rates. Finally, we reflect on the implications of these findings for the wider trend of commoditization in cybercrime.
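
The victimization rates above are, at their core, counts of attacked addresses normalized by the size of the network they sit in; the toy Python below computes such rates with invented numbers, normalizing by subscribers for ISPs and by hosted domains for hosting providers, as the abstract describes.

```python
networks = [
    {"name": "ISP-1",  "attacked": 1200, "size": 4_000_000},   # size = subscribers
    {"name": "ISP-2",  "attacked":  300, "size":   900_000},   # size = subscribers
    {"name": "Host-1", "attacked":   80, "size":   150_000},   # size = hosted domains
]

for n in networks:
    rate = n["attacked"] / n["size"] * 100_000
    print(f'{n["name"]}: {rate:.1f} victims per 100k units of network size')
```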

Session 7: Web & Mobile Security

Uses and Abuses of Server-Side Requests
Giancarlo Pellegrino (Saarland University), Onur Catakoglu (Eurecom), Davide Balzarotti (Eurecom), Christian Rossow (Saarland University)

More and more web applications rely on server-side requests (SSRs) to fetch resources (such as images or even entire webpages) from user-provided URLs. As with many other web-related technologies, developers were very quick to adopt SSRs, even before their consequences for security were fully understood. In fact, while SSRs are simple to add from an engineering point of view, in this paper we show that—if not properly implemented—this technology can have several subtle consequences for security, posing severe threats to service providers, their users, and the Internet community as a whole.
To shed some light on the risks of this communication pattern, we present the first extensive study of the security implications of SSRs. We propose a taxonomy and four new attack scenarios that describe different ways in which SSRs can be abused to perform malicious activities. We then present an automated scanner we developed to probe web applications and identify possible SSR misuses. Using our tool, we tested 68 popular web applications and found that the majority can be abused to perform malicious activities, ranging from server-side code execution to amplification DoS attacks. Finally, we distill our findings into eight pitfalls and mitigations to help developers implement SSRs in a more secure way.
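
The essence of such probing is to hand the application a URL the tester controls and watch for the resulting server-side fetch; the hedged Python sketch below shows that pattern with a made-up target endpoint, parameter name, and callback host (the paper's scanner is considerably more involved).

```python
import uuid

import requests   # third-party: pip install requests

CALLBACK_HOST = "https://probe.example.net"   # server under the tester's control
TARGET = "https://app.example.com/preview"    # hypothetical SSR-using endpoint

def probe(param: str = "url") -> str:
    """Submit a unique callback URL; the token is later grepped in the probe server's logs."""
    token = uuid.uuid4().hex
    requests.post(TARGET, data={param: f"{CALLBACK_HOST}/{token}"}, timeout=10)
    return token

if __name__ == "__main__":
    print("submitted probe token:", probe())
```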

Identifying Extension-based Ad Injection via Fine-grained Web Content Provenance
Sajjad Arshad (Northeastern University), Amin Kharraz (Northeastern University), William Robertson (Northeastern University)

Extensions provide useful additional functionality for web browsers, but are also an increasingly popular vector for attacks. Due to the high degree of privilege they can hold, extensions have been abused to inject advertisements into web pages, diverting revenue from content publishers and potentially exposing users to malware. Users are often unaware of such practices, believing the modifications to the page originate from publishers. Additionally, automated identification of unwanted third-party modifications is fundamentally difficult, as users are the ultimate arbiters of whether content is undesired in the absence of outright malice.
To resolve this dilemma, we present a fine-grained approach to tracking the provenance of web content at the level of individual DOM elements. In conjunction with visual indicators, provenance information can be used to reliably determine the source of content modifications, distinguishing publisher content from content that originates from third parties such as extensions. We describe a prototype implementation of the approach called OriginTracer for Chromium, and evaluate its effectiveness, usability, and performance overhead through a user study and automated experiments. The results demonstrate a statistically significant improvement in the ability of users to identify unwanted third-party content such as injected ads with modest performance overhead.

Trellis: Privilege Separation for Multi-User Applications Made Easy
Andrea Mambretti (Northeastern University), Kaan Onarlioglu (Northeastern University), Collin Mulliner (Northeastern University), William Robertson (Northeastern University), Engin Kirda (Northeastern University), Federico Maggi (Politecnico di Milano), Stefano Zanero (Politecnico di Milano)

Operating systems provide a wide variety of resource isolation and access control mechanisms, ranging from traditional user-based security models to fine-grained permission systems as found in modern mobile operating systems. However, comparatively little assistance is available for defining and enforcing access control policies within multi-user applications. These applications, often found in enterprise environments, allow multiple users to operate at different privilege levels in terms of exercising application functionality and accessing data. Developers of such applications bear a heavy burden in ensuring that security policies over code and data in this setting are properly expressed and enforced.
In this paper, we present Trellis, an approach for expressing hierarchical access control policies in applications and enforcing these policies during execution. The approach enhances the development toolchain to allow programmers to partially annotate code and data with simple privilege level tags, and uses a static analysis to infer suitable tags for the entire application. At runtime, policies are extracted from the resulting binaries and are enforced by a modified operating system kernel. Our evaluation demonstrates that this approach effectively supports the development of secure multi-user applications with modest runtime performance overhead.
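
As a toy illustration of the inference step only (the levels, functions, and propagation rule below are assumptions for this example, not Trellis's actual analysis), the Python snippet propagates partial privilege annotations over a call graph until a fixpoint, assigning each function the highest level required by anything it transitively calls.

```python
CALLS = {                      # caller -> callees (invented application)
    "handle_request": ["render", "update_salary"],
    "render": [],
    "update_salary": ["write_db"],
    "write_db": [],
}
ANNOTATED = {"write_db": 2, "render": 0}     # 0 = user, 1 = manager, 2 = admin

def infer_levels(calls, annotated):
    """Simple fixpoint iteration: a caller needs at least its callees' privilege."""
    levels = {f: annotated.get(f, 0) for f in calls}
    changed = True
    while changed:
        changed = False
        for caller, callees in calls.items():
            need = max([levels[caller]] + [levels[c] for c in callees])
            if need > levels[caller]:
                levels[caller], changed = need, True
    return levels

print(infer_levels(CALLS, ANNOTATED))
# -> {'handle_request': 2, 'render': 0, 'update_salary': 2, 'write_db': 2}
```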

Blender: Self-randomizing Address Space Layout for Android Apps
Mingshen Sun (The Chinese University of Hong Kong), John C.S. Lui (The Chinese University of Hong Kong), Yajin Zhou (Qihoo 360 Technology Co. Ltd.)

In this paper, we first demonstrate that the newly introduced Android Runtime (ART) in recent Android versions (Android 5.0 and above) exposes a new attack surface, namely the “return-to-art” (ret2art) attack. Unlike traditional return-to-library attacks, the ret2art attack abuses Android framework APIs (such as the API to send SMS) as payloads to conveniently perform malicious operations. This new attack surface, along with the weakened ASLR implementation in the Android system, makes successful exploitation of vulnerable apps much easier. To mitigate this threat and provide self-protection for Android apps, we propose a user-level solution called Blender, which is able to self-randomize the address space layout of apps. Specifically, for an app using our system, Blender randomly rearranges loaded libraries and Android runtime executable code in the app’s process, achieving much higher memory entropy compared with the vanilla app. Blender requires no changes to the Android framework or the underlying Linux kernel, and is thus a non-invasive and easy-to-deploy solution. Our evaluation shows that Blender incurs only around a 6 MB memory footprint increase for apps using our system and does not affect other apps. It adds about 0.3 seconds of app startup delay, and imposes negligible CPU and battery overheads.