Proactive Measures in Healthcare Application Security



The Growing Need for Proactive Healthcare Application Security

Web application security is a far more complex issue than many other areas of security. Applications have a high potential to allow users to extract sensitive data, bypass authorization, and even execute commands on the underlying system. But even applications that prevent these direct attacks are often found vulnerable to other, less direct ones. As we see time and time again, criminal hackers will go to extreme lengths to profit in any way possible.

As many know, healthcare is an industry that has (understandably) lagged in security. It has had an urgent need for technology development, and it was never as attractive a target to criminals as banks were. Unfortunately, the rising black-market value of medical records is changing this, bringing more sophisticated attacks with it. And to see what is coming, we don't need to look far: we can take a lesson from the financial industry, which has been the target of cutting-edge attacks for the last two decades.

As application security improves and attackers find themselves unable to gain direct access to systems, they will look to the next best thing: getting users to do the dirty work. These types of issues are heavily exploited in the financial world, an industry which has poured millions into ensuring applications are not susceptible to such attacks. This class of attack is most commonly known as the "confused deputy problem", where a malicious actor tricks a victim into performing an action without their knowledge or against their will. Before we look at these attacks individually, it's important to understand that all of them require several prerequisites in order to pose actual risk:

  1. An application performing a sensitive operation.
  2. An attacker with some prior knowledge of the application.
  3. A user currently logged into the application.

Cross-site Request Forgery (CSRF)

Chances are at some point you've seen a URL that looks something like this (a hypothetical example):

https://healthapp.example.com/deleteRecord.aspx?id=12345

It performs a specific sensitive function and takes predictable parameters. While this probably doesn't seem like a big problem to most people, an attacker may exploit it to force a user to delete records within an application.

CSRF Illustrated

Let’s take a detailed look at the steps:

  1. The attacker sends an email or message to the victim, convincing them to click a link to a website controlled by the attacker.
  2. The user clicks the link, visiting the malicious website.
  3. The attacker takes the URL that will perform a sensitive operation and embeds that link on the malicious site.
  4. When the victim's browser loads the malicious page, it attempts to fetch the content embedded in it, causing the browser to request the sensitive URL. Because the user is already authenticated to the application, the operation is performed successfully without the user's knowledge.
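
The embedded content in step 3 can be as simple as an image tag pointing at the sensitive URL. A purely illustrative sketch of such a page (the domain and parameters are hypothetical):

```html
<!-- Malicious page controlled by the attacker. The browser tries to
     fetch the "image", sending the victim's session cookie along
     with the forged request. -->
<html>
  <body>
    <h1>You won a prize!</h1>
    <img src="https://healthapp.example.com/deleteRecord.aspx?id=12345"
         width="1" height="1" style="display:none">
  </body>
</html>
```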

You may be thinking, "How is this a vulnerability? This is the way the web works", and you're exactly right. For a long time this was simply how applications were designed, and that was that. The ultimate weakness here is in the HTTP protocol itself: it was never designed for the sensitive, even life-critical operations we perform over it today. But to face reality, we must build applications that compensate for the weaknesses of all components, even the HTTP protocol itself.

ClickJacking (UI Redress Attack)

Like CSRF, clickjacking tricks a user into performing an action against their will. In this scenario, the attacker creates a web page that embeds the target application within a frame, then adds a CSS overlay to mask the framed application. The mask may include a JavaScript game that encourages the victim to click on areas of the screen or enter data into form fields. What the victim doesn't know is that they are actually clicking and typing into an application they are already logged into.
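
A sketch of such a page (target URL and styling are hypothetical): the invisible iframe sits on top of the decoy content, so the victim's clicks land in the real application.

```html
<!-- Decoy page: the victim sees the game, but the transparent iframe
     of the target application sits on top and receives every click. -->
<html>
  <body>
    <div id="game">Click the button to win!</div>
    <iframe src="https://healthapp.example.com/settings"
            style="position:absolute; top:0; left:0;
                   width:100%; height:100%; opacity:0;">
    </iframe>
  </body>
</html>
```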

Clickjacking Illustrated

It’s important to note that the risk of CSRF and Clickjacking can vary wildly, and there are several factors which may increase or decrease the risk:

  • Sensitivity of operations – An attacker must be able to benefit from a user performing certain actions. Can the application transfer money? Send a medical record?
  • Ability to target users – An attacker must be able to persuade users into visiting a malicious website. A broad user base increases an application's risk.
  • Inside knowledge – An attacker must have some inside knowledge of the application. Open source and widely distributed products have a significantly elevated risk because of this.

The expanding health IT ecosystem drives all three of these factors higher. The more powerful patient-facing applications become, the more likely users are to face these types of attacks. The good news is that both of these issues have well-documented fixes which have been adopted into most development frameworks and web servers. Next week we will publish part 2 detailing mitigation strategies.

Defeating Android Emulator Detection

At some point while performing vulnerability assessments on Android applications, you will encounter apps that don't want to be run within an emulator. We can't blame application owners for wanting to ensure that the user interaction they see comes from genuine devices, but it doesn't help us do any security testing.

There are several ways to detect an emulator; however, this example covers only the most common one we see. In this application, a check is performed for an IMEI value of '000000000000000', the value used by the emulator that ships with the Android SDK.

The code segment below checks for this value and exits if true. While we could easily patch the value from within the application, it may be more efficient in the long run to simply change the IMEI value of our emulator. This way we don’t have to patch the next application that does this.

android emulator check

The IMEI is stored as a text string, so we will search for it accordingly. Open the binary with hexeditor, hit ^W, and search for the fifteen zeroes. Note that the binary we wish to open is not the "emulator" binary but the "emulator-arm" binary; if you are targeting a different architecture, use the mips or x86 binary instead.

cp emulator-arm emulator-arm.bak
hexeditor emulator-arm

hexedit search

Note once again that this is an ASCII string, so each zero is the byte 0x30.


In this case, we just replace four characters with 1234 by updating them to 0x31, 0x32, 0x33, and 0x34. Do not change the length of the data in this segment or overwrite bytes outside it, or you will corrupt the binary.
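
The same in-place patch can be scripted. Below is a minimal Python sketch of the idea (file handling shown only in the commented usage, since paths vary by SDK install):

```python
def patch_imei(data: bytes, new_imei: bytes) -> bytes:
    """Replace the emulator's default all-zero IMEI string in a
    binary image, preserving the overall length of the data."""
    old = b"000000000000000"       # fifteen ASCII zeroes (0x30)
    if len(new_imei) != len(old):
        raise ValueError("replacement must be exactly 15 bytes")
    if old not in data:
        raise ValueError("default IMEI string not found")
    return data.replace(old, new_imei, 1)

# Usage sketch: read emulator-arm, patch, write it back.
# with open("emulator-arm", "rb") as f:
#     patched = patch_imei(f.read(), b"123400000000000")
# with open("emulator-arm", "wb") as f:
#     f.write(patched)
```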


Just save and exit. Now our emulator will be using our new custom value.

Preventing Cross-site Scripting in PHP

Preventing Cross-site Scripting (XSS) vulnerabilities in any language requires two main considerations: the type of sanitization performed on input, and the location in which that input is inserted. It is important to remember that no matter how well input is filtered, there is no single sanitization method that can prevent all XSS; the filtering required is highly dependent on the context in which the data is inserted. Preventing XSS with data inserted between HTML elements is very straightforward. On the other hand, preventing XSS with data inserted directly into JavaScript code is considerably more difficult and sometimes impossible.

Input Sanitization

For the majority of PHP applications, htmlspecialchars() will be your best friend. Called with no extra arguments, htmlspecialchars() converts the following special characters to HTML entities:

'&' (ampersand) becomes '&amp;'
'"' (double quote) becomes '&quot;'
'<' (less than) becomes '&lt;'
'>' (greater than) becomes '&gt;'

Eagle-eyed readers may notice this does not include single quotes. For this reason we recommend that htmlspecialchars() always be used with the ENT_QUOTES flag to ensure single quotes are encoded as well. Below shows the single quote entity conversion:

"'" (single quote) becomes '&#039;' (or &apos;)
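
As a cross-language illustration of the same idea, Python's html.escape() performs an equivalent conversion, and it encodes both quote characters by default:

```python
from html import escape

# escape() converts &, <, > and, with quote=True (the default),
# both double and single quotes into HTML entities.
payload = '<script>alert("xss")</script>'
print(escape(payload))
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```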

htmlspecialchars() vs htmlentities()

Another function exists which is almost identical to htmlspecialchars(). htmlentities() performs the same sanitization of dangerous characters; however, it also encodes every character that has an entity equivalent. This may lead to excessive encoding and cause some content to display incorrectly if character sets change.


strip_tags() should NOT be used on its own for sanitizing data. strip_tags() removes HTML tags but cannot prevent XSS instances that exist within HTML element attributes. strip_tags() also does not filter or encode unpaired closing angle brackets, and an attacker may be able to combine this with other weaknesses to inject fully functional JavaScript into the page. We recommend that strip_tags() be used only for its intended purpose: removing HTML tags or content. In these situations, input should still be passed through htmlspecialchars() after strip_tags() is used.


addslashes() is often used to escape input before it is inserted into JavaScript variables. An example is shown below, where the user input te"st has been escaped:

 var data = "te\"st";   // after addslashes()

As we can see, addslashes() adds a slash in an attempt to prevent an attacker from terminating the variable assignment and appending executable code. This works, sort of, but has a critical flaw: the browser's HTML parser determines where a <script> block ends before the JavaScript engine ever parses the code within it, with no regard for the data between the two quotes. So to exploit this, we don't actually need to "bypass" addslashes(); we simply terminate the script tag.

 var data = "test1</script><script>alert(document.cookie);</script>";

As far as the browser is concerned, the code injected is an entire new code segment and contains valid JavaScript.
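
One common mitigation for the JavaScript-string context is to encode the sequence the HTML parser cares about, not just the quotes. A minimal Python sketch of the idea (json.dumps for quote and backslash escaping, plus escaping '</' so a literal closing script tag can never appear):

```python
import json

def js_string_literal(value: str) -> str:
    """Encode a value as a JavaScript string literal that is safer to
    embed inside an inline <script> block: JSON handles quote and
    backslash escaping, and '</' is escaped so the HTML parser never
    sees a literal closing </script> tag ('\\/' is still '/' in JS)."""
    return json.dumps(value).replace("</", "<\\/")

payload = 'test1</script><script>alert(document.cookie);</script>'
print(js_string_literal(payload))
```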

Where Entity Encoding Fails

We talked before about considerations for the location of data; below are some examples where entity encoding with htmlspecialchars() is not enough. One of the most common is when data is inserted directly into a tag or attribute of an element.

HTML Event Attributes: HTML has a number of elements with attributes that allow for JavaScript to be called after a particular event. For example, the onload attribute can execute JavaScript when an HTML object is loaded.

<body onload=alert(document.cookie);>

This is just one of several less common situations where extremely strict filtering is required. For an in-depth look at many injection scenarios and their prevention methods, take a look at the OWASP XSS Prevention Cheat Sheet.

Third Party PHP Libraries

Virtue Security makes no recommendation or provides any warranty for third party products or software; however, we are aware that several third party PHP libraries are commonly used to assist in XSS prevention. Below are projects that may assist developers building suitable whitelists:

  • HTML Purifier
  • PHP Anti-XSS
  • htmLawed

Other Things to Remember

A great rule of thumb is simply not to insert user-controlled data unless it's explicitly needed for the application to function. It's often surprising to see XSS vulnerabilities exist because parameters are inserted into HTML or JavaScript comments. Not only does this serve no functional purpose to the application, it can introduce serious security vulnerabilities.

Extortion is a Rising Motive in New Attacks

There is one trend that has remained consistent across the internet over the last twenty years: attacks become more sophisticated, more common, and more malicious every year. In 2013, the Cryptolocker virus became one of the first tools used by criminal organizations to extort money from victims on a mass scale. When the malware infected a machine, document files were encrypted with a unique public key; the private key was held on remote servers, leaving victims with no way to decrypt their data without paying the ransom.

Organized cybercrime is a massive industry of its own, with its own struggles of saturation, technical advancement, and economics like any other industry. As more career criminals enter it, criminal hackers must try harder to extract maximum profit from every attack. What we're seeing now is criminals looking to monetize breaches by extorting their victims.

Record Profits

To date, Cryptolocker has compromised almost a quarter of a million computers and has fetched over $27,000,000 in ransom payouts. Not only does this allow criminals to reinvest substantial funds into newer and more advanced attacks, it sets a precedent for other would-be criminals looking to profit.

Cryptolocker ransomware

Code Spaces

In June 2014, Code Spaces was notified by an unknown attacker that they had gained access to the company's Amazon EC2 admin tools. Along with the communication came a demand for a large sum of money. When Code Spaces did not deliver the payment, the attacker wiped all data backups, virtual servers, and live virtual machines. Code Spaces was forced to close its doors and cease all business operations.

code spaces

Domino’s Pizza

June 2014 also saw a compromise of Domino's systems in France and Belgium. The group Rex Mundi claimed responsibility and threatened to publish the stolen data of Domino's customers unless a ransom of $40,000 was paid. Domino's announced it had no intention of paying, and at the time of this writing there is no public resolution.


Looking Forward

We often hear IT staff dismiss potential threats because their data would not be useful to an attacker; people often ask, "Why would an attacker be after this information?" Extortion is becoming an increasingly popular way to monetize data regardless of its direct usefulness to the attacker. Chances are, if the data is valuable to you, it is now inherently valuable to the attacker as well.

5 Ways Healthcare Applications Leak ePHI

Protecting ePHI is one of the most important responsibilities assumed by all of us working in healthcare. Unfortunately, we frequently find that applications still leak critical ePHI, often in very simple and needless ways. Web applications that handle sensitive information need to do more than many people think to properly protect data. Although the issues mentioned here are not highly technical or even critical in nature, they have far bigger implications in healthcare applications than in most other industries.

Below are five of the most common vulnerabilities we see when conducting vulnerability assessments on applications handling ePHI:

1 – Protected Health Information in URLs

The majority of data handled by web applications is sent in one of two ways: a GET or a POST request. GET requests are commonly misused for handling sensitive information; in several circumstances they can easily allow information to be disclosed to unauthorized parties. Below is a simple example of an application passing parameters in a GET request:

GET /showrecord.aspx?id=12345&name=JOHN+DOE&dob=12/12/1965 HTTP/1.1

Using this method, there are several ways that the patient’s name and DOB may be leaked to unauthorized parties:

  • URLs are cached in web browser history logs. Anyone with physical access to the machine may obtain data passed in URLs.
  • Data passed in GET requests may be visible on screen for longer than necessary and susceptible to shoulder surfing.
  • URLs may be cached by intermediate web proxies and viewed by unauthorized parties.
  • URLs may be cut and pasted by users and sent to other users.

Below shows the same request made with the POST method, avoiding the scenarios listed above:

POST /showrecord.aspx HTTP/1.1

id=12345&name=JOHN+DOE&dob=12/12/1965

Any HTTP request containing sensitive parameters should use the HTTP POST method.
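
To make the difference concrete, here is a small Python sketch (the endpoint is hypothetical) showing that with urllib the sensitive fields travel in the request body rather than in the URL:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Sensitive fields go in the POST body, not the query string.
params = {"id": "12345", "name": "JOHN DOE", "dob": "12/12/1965"}
req = Request(
    "https://ehr.example.com/showrecord.aspx",   # hypothetical endpoint
    data=urlencode(params).encode("utf-8"),      # body, not URL
)

print(req.get_method())   # POST (implied by the presence of a body)
print(req.full_url)       # no patient data appears in the URL
```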

2 – Improper Cache Controls

Web servers typically respond to HTTP requests with a number of headers; they are not visible to users, but they instruct browsers how to handle the content being sent. Among these headers are optional directives for how long data is to be cached. If web browsers are not explicitly told not to cache data, the content will often be stored locally, creating local files with potentially sensitive information available to anyone with access to that computer.

There are three main cache control headers that can be used to instruct web browsers, as well as intermediate proxies, on how data should be stored. Below is an example of the three headers being used to prevent caching of ePHI:

Cache-Control: no-store
Pragma: no-cache
Expires: -1

These directives can also be issued via meta tags within the HTML source; however, HTTP headers are the more reliable and preferred delivery method.
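
As a minimal sketch (using Python's built-in WSGI interface purely for illustration), an application can attach these headers to every response:

```python
def app(environ, start_response):
    """Minimal WSGI app that forbids caching of its responses."""
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Cache-Control", "no-store"),   # never write the response to disk
        ("Pragma", "no-cache"),          # HTTP/1.0 compatibility
        ("Expires", "-1"),               # treat the content as already stale
    ]
    start_response("200 OK", headers)
    return [b"<html><body>record data</body></html>"]
```

Most frameworks expose an equivalent hook for setting default response headers; the point is that the anti-caching directives are sent on every response carrying ePHI.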

3 – Poor Enforcement of SSL

Most applications handling ePHI have the good sense to use SSL for secure communication. While a properly configured TLS/SSL connection can keep data secure from third parties, we still find applications that are also available over plain-text HTTP. If users access the application by typing the 'http://' protocol directly, they may end up sending their credentials or session token over a plain-text channel, allowing anyone with access to the network infrastructure between the user and the server to view or modify the data in transit. Because of this, applications should forcefully redirect all users to HTTPS if accessed over HTTP.

Below shows an appropriate response to any page requested with an 'http://' prefix. Note the new location is over HTTPS:

HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/
Content-Type: text/html; charset=UTF-8
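
The same redirect can be sketched in WSGI terms (hostname handling simplified for illustration):

```python
def force_https(environ, start_response):
    """Redirect any plain-HTTP request to its HTTPS equivalent."""
    if environ.get("wsgi.url_scheme") != "https":
        host = environ.get("HTTP_HOST", "www.example.com")
        path = environ.get("PATH_INFO", "/")
        start_response("301 Moved Permanently",
                       [("Location", f"https://{host}{path}")])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"secure content"]
```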

4 – Excessive Application Timeouts

Every secure application should expire user sessions after a certain length of inactivity. There's no one fixed length of time that can be deemed appropriate for all applications, but we can do our best to use reasonable timeout periods. Industry-standard timeouts range from 30 to 60 minutes, so unless there are specific usability concerns, it is strongly recommended that sessions expire within 60 minutes.

In addition to expiring sessions, we strongly recommend that the application redirect users to a login screen when the session times out. This ensures that ePHI is not left displayed on screen after the timeout.
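
The core inactivity check can be sketched in a few lines (the 60-minute threshold is the recommendation above; the storage of last-activity timestamps is left to the application):

```python
import time

SESSION_TIMEOUT = 60 * 60   # 60 minutes, in seconds

def is_session_valid(last_activity, now=None):
    """Return True while the session is within the inactivity window."""
    if now is None:
        now = time.time()
    return (now - last_activity) <= SESSION_TIMEOUT
```

When the check fails, the application should destroy the server-side session and redirect the browser to the login page.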

5 – Insufficient Access Controls

Improper validation of user privileges often results in one user's ability to read other users' data or even take complete control of the application. When applications perform authorization checks based on parameters supplied by the user, rather than on a secure session token, the server loses control over that operation. It is absolutely critical that every authorization check be performed against the privileges associated with the session token issued when the user's username and password were verified.

In the example below, a user makes a request to view their medical record. Their web browser passes four pieces of information to the application: a session token, a user ID, a name, and a date of birth. It's easy for developers to take the information they need from this request and display the medical record based on the user ID provided. Unfortunately, the user ID parameter is trivial for any moderately skilled attacker to modify, and it can also easily be "brute forced" to make the same request for the next 10,000 increments with just a few clicks.

POST /showrecord.aspx HTTP/1.1
Cookie: SessionID=f427e90cc3b78024ebbd99a731ca1b4f;

id=12345&name=JOHN+DOE&dob=12/12/1965

Before any operation is performed, the application must validate the SessionID token provided by the user and ensure the requested operation is allowed by that session's privilege role. While any bypass of user privileges is a high-risk issue, the severity of issues like this carries even more weight in healthcare applications. Patient portals and other patient-facing applications with these types of vulnerabilities can quickly turn into a nightmare.
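
A minimal sketch of the server-side rule (the session and record stores are hypothetical in-memory stand-ins): the record owner is derived from the session token, never from a client-supplied ID.

```python
# Hypothetical in-memory stand-ins for the server-side stores.
SESSIONS = {"f427e90cc3b78024ebbd99a731ca1b4f": {"user_id": "12345"}}
RECORDS = {"12345": "John Doe's record", "12346": "Jane Roe's record"}

def show_record(session_token, requested_id):
    """Serve a record only if it belongs to the authenticated user."""
    session = SESSIONS.get(session_token)
    if session is None:
        raise PermissionError("invalid or expired session")
    # Authorization comes from the session, not the request parameter.
    if requested_id != session["user_id"]:
        raise PermissionError("not authorized for this record")
    return RECORDS[requested_id]
```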


Technology in healthcare is advancing so rapidly that security is often an afterthought. Building security processes into application development is critical for building robust and sustainable technology. We must remember that the protocols on which most of the internet was built have no security built in; healthcare applications need to go far out of their way to compensate for that. Healthcare organizations go to great lengths to protect PHI, and we must ensure that applications have the same processes and checks to do their own due diligence.
