On Lightweight Mobile Phone Application Certification (4)

4. KIRIN SECURITY RULES

The malware threats and the Android architecture introduced in the previous sections serve as the background for developing Kirin security rules to detect potentially dangerous application configurations. To ensure the security of a mobile phone, we need a clear definition of a secure phone. Specifically, we seek to define the conditions that an application must satisfy for a phone to be considered safe. To define this concept for Android, we turn to the field of security requirements engineering, which is an offshoot of requirements engineering and security engineering. The former is a well-known fundamental component of software engineering in which business goals are integrated with the design. The latter focuses on the threats facing a specific system.

Security requirements engineering is based upon three basic concepts. 1) Functional requirements define how a system is supposed to operate in its normal environment. For instance, when a web browser requests a page from a web server, the web server returns the data corresponding to that file. 2) Assets are “. . . entities that someone places value upon” [31]. The webpage is an asset in the previous example. 3) Security requirements are “. . . constraints on functional requirements to protect the assets from threats” [26]. For example, the webpage sent by the web server must be identical to the webpage received by the client (i.e., integrity).

The security requirements engineering process is generally systematic; however, it requires a certain level of human interaction. Many techniques have been proposed, including SQUARE [5, 34], SREP [35, 36], CLASP [40], misuse cases [33, 47], and security patterns [27, 45, 48]. Related implementations have seen great success in practice, e.g., Microsoft uses the Security Development Lifecycle (SDL) for the development of their software that must withstand attacks [32], and Oracle has developed OSSA for the secure software development of their products [41].

Commonly, security requirements engineering begins by creating functional requirements. This usually involves interviewing stakeholders [5]. Next, the functional requirements are translated into a visual representation to describe relationships between elements. Popular representations include use cases [47] and context diagrams using problem frames [37, 26]. Based on these requirements, assets are identified. Finally, each asset is considered with respect to high level security goals (e.g., confidentiality, integrity, and availability). The results are the security requirements.

Unfortunately, we cannot directly utilize these existing techniques because they are designed to supplement system and software development. Conversely, we wish to retrofit security requirements on an existing design. There is no clearly defined usage model or functional requirements specification associated with the Android platform or the applications. Hence, we provide an adapted procedure for identifying security requirements for Android. The resulting requirements directly serve as Kirin security rules.

4.1 Identifying Security Requirements

We use existing security requirements engineering techniques as a reference for identifying dangerous application configurations in Android. Figure 3 depicts our procedure, which consists of five main activities.

Step 1: Identify Assets.

Instead of identifying assets from functional requirements, we extract them from the features of the Android platform. Google has identified many assets already in the form of permission labels protecting resources. Moreover, as the broadcasted Intent messages (e.g., those sent by the system) impact both platform and application operation, they are assets. Lastly, all components (Activities, etc.) of system applications are assets. While they are not necessarily protected by permission labels, many applications call upon them to operate.

As an example, Android defines the RECORD_AUDIO permission to protect its audio recorder. Here, we consider the asset to be microphone input, as it records the user’s voice during phone conversations. Android also defines permissions for making phone calls and observing when the phone state changes. Hence, call activity is an asset.

Step 2: Identify Functional Requirements. Next, we carefully study each asset to specify corresponding functional descriptions. These descriptions indicate how the asset interacts with the rest of the phone and third-party applications. This step is vital to our design, because both assets and functional descriptions are necessary to investigate realistic threats.

Continuing with the assets identified above, when the user receives an incoming call, the system broadcasts an Intent to the PHONE_STATE action string. It also notifies any applications that have registered a PhoneStateListener with the system. The same notifications are sent for outgoing calls. Another Intent, to the NEW_OUTGOING_CALL action string, is also broadcast. Furthermore, this additional broadcast uses the “ordered” option, which serializes the broadcast and allows any recipient to cancel it. If this occurs, subsequent Broadcast Receivers will not receive the Intent message. This feature allows, for example, an application to redirect international calls to the number for a calling card. Finally, audio can be recorded using the MediaRecorder API.
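As a concrete illustration, these notifications map to specific entries in an application’s package manifest. The following hypothetical AndroidManifest.xml fragment (the receiver class name is invented) declares the permissions and Broadcast Receiver an application would need to observe outgoing calls and record audio:

```xml
<!-- Hypothetical fragment; the receiver class name is invented. -->
<uses-permission android:name="android.permission.PROCESS_OUTGOING_CALLS" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<application>
    <receiver android:name=".OutgoingCallReceiver">
        <intent-filter>
            <!-- NEW_OUTGOING_CALL is an ordered broadcast; any receiver
                 may cancel it or rewrite the destination number. -->
            <action android:name="android.intent.action.NEW_OUTGOING_CALL" />
        </intent-filter>
    </receiver>
</application>
```

Note that the manifest alone reveals both the permissions an application requests and the broadcasts it listens for; this observation underpins the install-time analysis developed below.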

Step 3: Determine Assets’ Security Goals and Threats.

In general, security requirements engineering considers high level security goals such as confidentiality, integrity, and availability. For each asset, we must determine which (if not all) goals are appropriate. Next, we consider how the functional requirements can be abused with respect to the remaining security goals. Abuse cases that violate the security goals provide threat descriptions. We use the malware motivations described in Section 3.1 to motivate our threats. Note that defining threat descriptions sometimes requires a level of creativity. However, trained security experts will find most threats straightforward after defining the functional requirements.

Continuing our example, we focus on the confidentiality of the microphone input and phone state notifications. These goals are abused if malware records audio during a voice call and transmits it over the Internet (i.e., premeditated spyware). The corresponding threat description becomes, “spyware can breach the user’s privacy by detecting phone call activity, recording the conversation, and sending it to the adversary via the Internet.”

Step 4: Develop Assets’ Security Requirements.

Next, we define security requirements from the threat descriptions. Recall from our earlier discussion that security requirements are constraints on functional requirements. That is, they specify who can exercise functionality, or the conditions under which functionality may occur. Frequently, this process consists of determining which sets of functionality are required to realize a threat. The requirement is the security rule that restricts the ability for this functionality to be exercised in concert.

We observe that the eavesdropper requires a) notification of an incoming or outgoing call, b) the ability to record audio, and c) access to the Internet. Therefore, our security requirement, which acts as a Kirin security rule, becomes, “an application must not be able to receive phone state, record audio, and access the Internet.”

Step 5: Determine Security Mechanism Limitations.

Our final step caters to the practical limitations of our intended enforcement mechanism. Our goal is to identify potentially dangerous configurations at install time. Therefore, we cannot ensure runtime support beyond what Android already provides. Additionally, we are limited to the information available in an application package manifest. For both of these reasons, we must refine our list of security requirements (i.e., Kirin security rules). Some rules may simply not be enforceable. For instance, we cannot ensure that only a fixed number of SMS messages are sent during some time period [30], because Android does not support history-based policies. Security rules must also be expressed in terms of the security configuration available in the package manifest. This usually consists of identifying the permission labels used to protect functionality. Finally, as shown in Figure 3, iteration between Steps 4 and 5 is required to adjust the rules to work within our limitations. Additionally, security rules can be subdivided to be more straightforward.

The permission labels corresponding to the restricted functionality in our running example include READ_PHONE_STATE, PROCESS_OUTGOING_CALLS, RECORD_AUDIO, and INTERNET. Furthermore, we subdivide our security rule to remove the disjunctive logic resulting from the multiple ways for the eavesdropper to be notified of voice call activity. Hence, we create the following adjusted security rules: a) “an application must not have the READ_PHONE_STATE, RECORD_AUDIO, and INTERNET permissions,” and the nearly identical b) “an application must not have the PROCESS_OUTGOING_CALLS, RECORD_AUDIO, and INTERNET permissions.”
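The adjusted rules amount to checking whether an application requests every permission in a forbidden set. The following is a minimal sketch of such an install-time check, written in Python for illustration rather than reflecting the authors’ actual implementation (the rule names and function are invented):

```python
# Illustrative encoding of the two eavesdropper rules derived above.
# Labels correspond to the standard android.permission.* names.
EAVESDROPPER_RULES = {
    "rule-2": {"READ_PHONE_STATE", "RECORD_AUDIO", "INTERNET"},
    "rule-3": {"PROCESS_OUTGOING_CALLS", "RECORD_AUDIO", "INTERNET"},
}

def violated_rules(requested_permissions):
    """Return the names of rules whose full permission set is requested.

    A rule fires only when the application requests *every* permission
    in the set; holding a subset is considered safe.
    """
    requested = set(requested_permissions)
    return [name for name, perms in EAVESDROPPER_RULES.items()
            if perms <= requested]

# An app requesting all three labels trips rule 2 but not rule 3.
print(violated_rules(["READ_PHONE_STATE", "RECORD_AUDIO", "INTERNET"]))
# A simple voice memo recorder passes both rules.
print(violated_rules(["RECORD_AUDIO"]))
```

Requiring the full set, rather than any single member, is what keeps legitimate single-purpose applications, such as a voice recorder, from being flagged.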

4.2 Sample Malware Mitigation Rules

The remainder of this section discusses Kirin security rules we developed following our 5-step methodology. For readability and ease of exposition, we have enumerated the precise security rules in Figure 4. We refer to the rules by the indicated numbers for the remainder of the paper. We loosely categorize Kirin security rules by their complexity.

4.2.1 Single Permission Security Rules

Recall that a number of Android’s “dangerous” permissions may be “too dangerous” for some production environments. We discovered several such permission labels. For instance, the SET_DEBUG_APP permission “. . . allows an application to turn on debugging for another application” (according to the available documentation). The corresponding API is “hidden” in the most recent SDK environment (at the time of writing, version 1.1r1). Hidden APIs are not accessible by third-party applications, only by system applications. However, hidden APIs are no substitute for security: a malware author can simply download Android’s source code and build an SDK that includes the API. The malware can then, for instance, disable anti-virus software. Rule 1 ensures third-party applications do not have the SET_DEBUG_APP permission. Similar rules can be made for other permission labels protecting hidden APIs (e.g., Bluetooth APIs not yet considered mature enough for general use).

4.2.2 Multiple Permission Security Rules

Voice and location eavesdropping malware requires permissions to record audio and access location information. However, legitimate applications use these permissions as well. Therefore, we must define rules with respect to multiple permissions. To do this, we consider the minimal set of functionality required to realize a threat. Rules 2 and 3 protect against the voice call eavesdropper used as a running example in Section 4.1. Similarly, Rules 4 and 5 protect against a location tracker. In this case, the malware starts executing on boot. In these security rules, we assume the malware starts on boot by defining a Broadcast Receiver to receive the BOOT_COMPLETED action string. Note that the RECEIVE_BOOT_COMPLETED permission label protecting this broadcast is a “normal” permission (and hence is always granted). However, the permission label provides valuable insight into the functional requirements of an application. In general, Kirin security rules become more expressive as the number of available permission labels increases.

Rules 6 and 7 consider malware’s interaction with SMS. Rule 6 protects against malware hiding or otherwise tampering with incoming SMS messages. For example, SMS can be used as a control channel for the malware. However, the malware author does not want to alert the user; therefore, immediately after an SMS is received from a specific sender, the SMS Content Provider is modified. In practice, we found that our sample malware could not remove the SMS notification from the phone’s status bar. However, we were able to modify the contents of the SMS message in the Content Provider. While we could not hide the control message completely, we were able to change the message to appear as spam. Alternatively, a similar attack could ensure the user never receives SMS messages from a specific sender, for instance PayPal or another financial institution. Such services often provide out-of-band transaction confirmations. Blocking an SMS message from this sender could hide other activity performed by the malware. While this attack is also limited by notifications in the status bar, again, the message contents can be transformed to appear as spam.

Rule 7 mitigates mobile bots sending SMS spam. Similar to Rule 6, this rule ensures the malware cannot remove traces of its activity. While Rule 7 does not prevent the SMS spam messages from being sent, it increases the probability that the user becomes aware of the activity.

Finally, Rule 8 makes use of the duality of some permission labels. Android defines separate permissions for installing and uninstalling shortcuts on the phone’s home screen. This rule ensures that a third-party application cannot have both. If an application has both, it can redirect the shortcuts for frequently used applications to a malicious one. For instance, the shortcut for the web browser could be redirected to an identically appearing application that harvests passwords.
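All of the multiple-permission rules in this subsection reduce to forbidden permission combinations. The sketch below encodes plausible sets for Rules 4 through 8 as inferred from the prose; the authoritative definitions appear in Figure 4, and some labels (e.g., the shortcut permissions) carry longer vendor-prefixed names in practice:

```python
# Permission combinations for Rules 4-8, inferred from the prose in
# Section 4.2.2 (Figure 4 is authoritative). Labels are shortened for
# readability; e.g., the shortcut permissions are actually prefixed
# with com.android.launcher.permission.
MULTI_PERMISSION_RULES = {
    4: {"ACCESS_FINE_LOCATION", "RECEIVE_BOOT_COMPLETED", "INTERNET"},
    5: {"ACCESS_COARSE_LOCATION", "RECEIVE_BOOT_COMPLETED", "INTERNET"},
    6: {"RECEIVE_SMS", "WRITE_SMS"},
    7: {"SEND_SMS", "WRITE_SMS"},
    8: {"INSTALL_SHORTCUT", "UNINSTALL_SHORTCUT"},
}

def first_violation(requested):
    """Return the lowest-numbered rule whose set is fully requested."""
    requested = set(requested)
    for rule in sorted(MULTI_PERMISSION_RULES):
        if MULTI_PERMISSION_RULES[rule] <= requested:
            return rule
    return None
```

As with the eavesdropper rules, each combination fires only when the application requests every label in the set, so an SMS client that only sends and receives messages is not flagged.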

4.2.3 Permission and Interface Security Rules

Permissions alone are not always enough to characterize malware behavior. Rule 9 provides an example of a rule considering both a permission and an action string. This specific rule prevents malware from replacing the default voice call dialer application without the user’s knowledge. Normally, if Android detects that two or more applications contain Activities to handle an Intent message, the user is prompted to choose which application to use. This interface also allows the user to set the current selection as the default. However, if an application has the SET_PREFERRED_APPLICATION permission label, it can set the default without the user’s knowledge. Google marks this permission as “dangerous”; however, users may not fully understand the security implications of granting it. Rule 9 combines this permission with the existence of an Intent filter receiving the CALL action string. Hence, we can allow a third-party application to obtain the permission as long as it does not also handle voice calls. Similar rules can be constructed for other action strings handled by the trusted computing base.
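Rule 9 thus combines a permission check with an inspection of the application’s declared Intent filters, both of which are visible in the package manifest at install time. A sketch follows; the function and parameter names are illustrative, not the authors’ implementation:

```python
# Sketch of Rule 9: flag an application that both holds the
# SET_PREFERRED_APPLICATION permission and declares an Intent filter
# for the CALL action string. A real checker would extract both
# pieces of information from AndroidManifest.xml.
def violates_rule_9(permissions, intent_filter_actions):
    has_set_preferred = "SET_PREFERRED_APPLICATION" in permissions
    handles_calls = "android.intent.action.CALL" in intent_filter_actions
    return has_set_preferred and handles_calls
```

Either capability alone is permitted; only the combination, which would let an application silently make itself the default dialer, is rejected.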