Preventing Cross-site Scripting in PHP




Preventing Cross-site Scripting (XSS) vulnerabilities in any language requires two main considerations: the type of sanitization performed on input, and the location in which that input is inserted. It is important to remember that no matter how well input is filtered, no single sanitization method can prevent all Cross-site Scripting (XSS). The filtering required depends heavily on the context in which the data is inserted. Preventing XSS with data inserted between HTML elements is very straightforward. On the other hand, preventing XSS with data inserted directly into JavaScript code is considerably more difficult and sometimes impossible.

Input Sanitization

For the majority of PHP applications, htmlspecialchars() will be your best friend. Called with its default flags, htmlspecialchars() converts special characters to HTML entities; below are the conversions performed:

'&' (ampersand) becomes '&amp;'
'"' (double quote) becomes '&quot;'
'<' (less than) becomes '&lt;'
'>' (greater than) becomes '&gt;'

Eagle-eyed readers may notice this list does not include single quotes. For this reason we recommend that htmlspecialchars() always be called with the ENT_QUOTES flag to ensure single quotes are encoded. Below is the single quote entity conversion:

"'" (single quote) becomes '&#039;' (or &apos;)  

htmlspecialchars() vs htmlentities()

Another function exists that is almost identical to htmlspecialchars(). htmlentities() performs the same sanitization of dangerous characters; however, it also encodes every character for which a named entity is available. This may lead to excessive encoding and cause some content to display incorrectly if character sets change.


strip_tags() should NOT be used on its own to sanitize data. strip_tags() removes HTML tags and cannot prevent XSS payloads that exist within HTML attributes. strip_tags() also does not filter or encode unpaired closing angle brackets. An attacker may be able to combine this with other weaknesses to inject fully functional JavaScript on the page. We recommend that strip_tags() only be used for its intended functional purpose: to remove HTML tags or content. In these situations, input should still be passed through htmlspecialchars() after strip_tags() is used.
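A short sketch of both points, using hypothetical attacker input: strip_tags() leaves an attribute-context payload untouched, so entity encoding is still required afterwards:

```php
<?php
// An attribute-context payload contains no tags, so strip_tags()
// returns it unchanged -- it cannot protect data placed in attributes.
$payload = '" onmouseover="alert(1)';
var_dump(strip_tags($payload) === $payload);  // bool(true)

// Intended use: strip markup first, then entity-encode what remains.
$input = '<b>bold</b> text';
$safe  = htmlspecialchars(strip_tags($input), ENT_QUOTES, 'UTF-8');
echo $safe;  // bold text
```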


addslashes() is often used to escape input when it is inserted into JavaScript variables. An example is shown below:

 var x = "te\"st";   // addslashes() has escaped the double quote

As we can see, addslashes() adds a slash in an attempt to prevent an attacker from terminating the variable assignment and appending executable code. This works, sort of, but has a critical flaw. The browser constructs code segments from opening and closing <script> tags before the JavaScript within them is ever parsed. This happens before the browser even cares about the data between the two quotes. So to exploit this, we don't actually need to "bypass" addslashes(); we simply terminate the script tag.

 var x = "test1</script><script>alert(document.cookie);</script>";

As far as the browser is concerned, the injected markup begins an entirely new code segment containing valid JavaScript.
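When data must be placed in a JavaScript context, a commonly used alternative to addslashes() is json_encode() with its hex flags, which encode angle brackets, quotes, and ampersands so a literal </script> can never appear in the output. A minimal sketch:

```php
<?php
$input = 'test1</script><script>alert(document.cookie);</script>';

// JSON_HEX_TAG turns < and > into \u003C and \u003E, so the HTML
// parser never sees a closing </script> inside the string literal.
$js = json_encode($input, JSON_HEX_TAG | JSON_HEX_APOS | JSON_HEX_QUOT | JSON_HEX_AMP);

echo "var x = $js;";
```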

Where Entity Encoding Fails

We talked earlier about considerations for the location of data; here we will go over some examples where entity encoding with htmlspecialchars() is not enough. One of the most common is when data is inserted within the actual tag or attribute of an element.

HTML Event Attributes: HTML has a number of elements with attributes that allow for JavaScript to be called after a particular event. For example, the onload attribute can execute JavaScript when an HTML object is loaded.

<body onload=alert(document.cookie);>

This is just one of many situations where extremely strict filtering is required. For an in-depth look at many injection scenarios and their prevention methods, take a look at the OWASP XSS Prevention Cheat Sheet.

Third Party PHP Libraries

Virtue Security makes no recommendation and provides no warranty for third-party products or software; however, we are aware that several third-party PHP libraries are commonly used to assist in XSS prevention. Below are projects that may assist developers building suitable whitelists:

HTML Purifier
PHP Anti-XSS
htmLawed

Other Things to Remember

A great rule of thumb is simply not to insert user-controlled data unless it's explicitly needed for the application to function. It's often surprising to see XSS vulnerabilities exist because parameters are inserted into HTML or JavaScript comments. Not only does this serve no functional purpose, it can introduce serious security vulnerabilities.

Extortion is a Rising Motive in New Attacks




There is one trend that has remained consistent across the internet over the last twenty years: attacks become more sophisticated, more common, and more malicious every year. In 2013, the CryptoLocker ransomware became one of the first tools used by criminal organizations to extort money from victims on a mass scale. When the malware infected a machine, document files were encrypted with a unique public key; the private key was held on remote servers, leaving victims with no way to decrypt their data without paying the ransom.

Organized cybercrime is a massive industry of its own, with its own struggles of saturation, technical advancement, and economic pressure like any other industry. As more career criminals enter the field, criminal hackers must try harder to extract maximum profit from every attack. What we're seeing now is criminals looking to monetize breaches by extorting their victims.

Record Profits

To date, CryptoLocker has compromised almost a quarter of a million computers and fetched over $27,000,000 in ransom payouts. Not only does this allow criminals to reinvest substantial funds into newer and more advanced attacks, it sets a precedent for would-be criminals looking to profit.


Code Spaces

In June 2014, Code Spaces was notified by an unknown attacker who had gained access to the company's Amazon EC2 admin tools. Along with the communication came a demand for a large sum of money. When Code Spaces did not deliver the payment, the attacker wiped all data backups, virtual servers, and live virtual machines. Code Spaces was forced to close its doors and cease all business operations.


Domino’s Pizza

June 2014 also saw a compromise of Domino's systems in France and Belgium. The group Rex Mundi claimed responsibility and threatened to publish stolen Domino's customer data unless a ransom of $40,000 was paid. Domino's announced it had no intention of paying a ransom, and at the time of this writing there is no public resolution available.


Looking Forward

We often hear IT staff dismiss potential threats because their data would supposedly not be useful to an attacker: "Why would an attacker be after this information?" Extortion is becoming a more popular way to monetize data regardless of its direct usefulness to the attacker. Chances are, if the data is valuable to you, it is now inherently valuable to the attacker as well.

5 Ways Healthcare Applications Leak ePHI



Protecting ePHI is one of the most important responsibilities assumed by all of us working in healthcare. Unfortunately, we frequently find that applications still leak critical ePHI, often in very simple and needless ways. Web applications that handle sensitive information need to do more than many people think to properly protect data. Although the issues mentioned here are not very technical or even critical in nature, they have far bigger implications in healthcare applications than in most other industries.

Below are five of the most common vulnerabilities we see when conducting vulnerability assessments on applications handling ePHI:

1 – Protected Health Information in URLs

The majority of data handled by web applications is sent in one of two ways: a GET or a POST request. GET requests are commonly misused for handling sensitive information, and in several circumstances they can easily allow information to be disclosed to unauthorized parties. Below is a simple example of an application passing parameters in a GET request:

GET /showrecord.aspx?id=12345&name=JOHN+DOE&dob=12/12/1965 HTTP/1.1

Using this method, there are several ways that the patient’s name and DOB may be leaked to unauthorized parties:

  • URLs are cached in web browser history logs. Anyone with physical access to the machine may obtain data passed in URLs.
  • Data passed in GET requests may be visible on screen for longer than necessary and susceptible to shoulder surfing.
  • URLs may be cached by intermediate web proxies and viewed by unauthorized parties.
  • URLs may be cut and pasted by users and sent to other users.

Below is the same request made with the POST method (avoiding the scenarios listed above):

POST /showrecord.aspx HTTP/1.1
Content-Type: application/x-www-form-urlencoded

id=12345&name=JOHN+DOE&dob=12/12/1965


Any HTTP request containing sensitive parameters should use the HTTP POST method.

2 – Improper Cache Controls

Web servers typically respond to HTTP requests with a number of headers; they are not visible to users, but they instruct browsers how to handle the content being sent. Among these headers are optional directives for how long data is to be cached. If web browsers are not explicitly told not to cache data, the content will often be stored locally, creating local files with potentially sensitive information available to anyone with access to that computer.

There are three main cache control headers that instruct web browsers, as well as intermediate proxies, how data should be stored. Below is an example of the three headers being used to prevent caching of ePHI (note that the Pragma header defines only the no-cache directive):

Cache-Control: no-store
Pragma: no-cache
Expires: -1

These directives can also be issued via <meta> tags within the HTML source; however, HTTP headers are the more reliable and preferred delivery method.
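In PHP these headers can be sent with header() before any page output is produced; a minimal sketch (the helper function is illustrative):

```php
<?php
// Illustrative helper returning the three anti-caching headers.
function noCacheHeaders(): array {
    return [
        'Cache-Control: no-store',
        'Pragma: no-cache',  // Pragma defines only the no-cache directive
        'Expires: -1',
    ];
}

// header() calls must come before any HTML output is sent.
foreach (noCacheHeaders() as $h) {
    header($h);
}
```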

3 – Poor Enforcement of SSL

Most applications handling ePHI have the good sense to use SSL for secure communication. While a properly configured TLS/SSL connection can keep data secure from third parties, we still find applications that are also available over plaintext HTTP. If users access the application by typing an 'http://' URL directly, they may end up sending their credentials or session token over a plaintext channel. This would allow anyone with access to the network path between the user and the server to view or modify the data in transit. Because of this, it's important to remember that applications should forcefully redirect all users to HTTPS pages if accessed over HTTP.

Below is an appropriate response to any page requested with an 'http://' prefix. Note the new location is served over HTTPS.

HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/
Content-Type: text/html; charset=UTF-8
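A minimal PHP sketch of this redirect, assuming the standard $_SERVER fields (the helper name is ours):

```php
<?php
// Returns the HTTPS URL to redirect to, or null if the request
// already arrived over HTTPS. $server mirrors PHP's $_SERVER array.
function httpsRedirectUrl(array $server): ?string {
    if (!empty($server['HTTPS']) && $server['HTTPS'] !== 'off') {
        return null; // already secure, no redirect needed
    }
    return 'https://' . $server['HTTP_HOST'] . $server['REQUEST_URI'];
}

// In a live request handler:
//   if (($target = httpsRedirectUrl($_SERVER)) !== null) {
//       header('Location: ' . $target, true, 301);
//       exit;
//   }

// Example: a plain-HTTP request to /login is redirected.
echo httpsRedirectUrl(['HTTP_HOST' => 'www.example.com', 'REQUEST_URI' => '/login']);
// https://www.example.com/login
```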

4 – Excessive Application Timeouts

Every secure application should expire user sessions after a certain period of inactivity. There is no single fixed length of time appropriate for all applications, but we can do our best to use reasonable timeout periods. Industry-standard timeouts range from 30 to 60 minutes, so unless there are specific usability concerns, it is strongly recommended that sessions expire within 60 minutes.

In addition to session timeouts, we strongly recommend that the application redirect users to a login screen when the session does time out. This ensures that ePHI is not left displayed on screen after the timeout.
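A minimal sketch of such an inactivity check, assuming PHP's native sessions (the timeout value and login path are illustrative):

```php
<?php
// True when the gap between the last activity and now exceeds the
// timeout (1800 seconds = 30 minutes, within the 30-60 minute range).
function sessionExpired(int $lastActivity, int $now, int $timeout = 1800): bool {
    return ($now - $lastActivity) > $timeout;
}

// In a live request handler:
//   session_start();
//   if (isset($_SESSION['last_activity'])
//           && sessionExpired($_SESSION['last_activity'], time())) {
//       session_unset();
//       session_destroy();
//       header('Location: /login.php'); // back to the login screen
//       exit;
//   }
//   $_SESSION['last_activity'] = time();
```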

5 – Insufficient Access Controls

Improper validation of user privileges often results in a user's ability to read other users' data or take complete control of the application. When applications perform authorization checks based on parameters supplied by users, rather than a secure session token, the server loses control over the operation. It is absolutely critical that every authorization check be performed against the privileges associated with the session token issued to the user when their username and password were provided.

In the example below, a user makes a request to view their medical record. Their web browser passes four pieces of information to the application: a session token, a user ID, a name, and a date of birth. It's easy for developers to take the information they need from this request and display the medical record based on the user ID provided. Unfortunately, the user ID parameter is trivial for even a moderately skilled attacker to modify. The ID can also easily be "brute forced" by repeating the same request for the next 10,000 increments with just a few clicks.

POST /showrecord.aspx HTTP/1.1
Cookie: SessionID=f427e90cc3b78024ebbd99a731ca1b4f;

id=12345&name=JOHN+DOE&dob=12/12/1965

Before any operation is performed, the application must validate the SessionID token provided by the user and ensure the requested operation is allowed by the user's privilege role. While any bypass of user privileges is a high-risk issue, in healthcare applications the severity carries even more weight. Patient portals and other patient-facing applications with these types of vulnerabilities can quickly turn into a nightmare.
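A sketch of an ownership check driven by the server-side session rather than a client-supplied user ID (the function name and database schema are hypothetical):

```php
<?php
// Authorize by looking up who owns the record and comparing against the
// user ID stored server-side in the session -- never against a user ID
// sent by the client.
function canViewRecord(PDO $db, int $sessionUserId, int $recordId): bool {
    $stmt = $db->prepare('SELECT patient_id FROM records WHERE id = ?');
    $stmt->execute([$recordId]);
    $ownerId = $stmt->fetchColumn();
    return $ownerId !== false && (int)$ownerId === $sessionUserId;
}

// In a live request handler the user ID comes from the session:
//   session_start();
//   if (!canViewRecord($db, $_SESSION['user_id'], (int)$_POST['id'])) {
//       http_response_code(403);
//       exit;
//   }
```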


Technology in healthcare is advancing so rapidly that security is often an afterthought. Building security processes into application development is critical for building robust and sustainable technology. We must remember that the protocols on which most of the internet was built have no security built in; healthcare applications need to go far out of their way to compensate for that. Healthcare organizations go to great lengths to protect PHI, and we must ensure that applications include the same processes and checks to do their own due diligence.

Vulnerability Assessments and HIPAA



HIPAA defines a broad set of rules to govern how health care providers and related covered entities share and protect patient data. While the scope of HIPAA is far broader than the concerns of an ethical hacking specialist, vulnerability assessments are often a vital component of HIPAA compliance. With that said, ethical hackers do not need to be experts on the entirety of the HIPAA regulations, but there are some things they should be aware of.


One of the primary concerns during a vulnerability assessment for a HIPAA covered entity is the transmission and storage of ePHI (electronic Protected Health Information). There are 18 specific identifiers that are considered ePHI:

  • Names
  • Geographic data
  • All elements of dates
  • Telephone numbers
  • FAX numbers
  • Email addresses
  • Social Security numbers
  • Medical record numbers
  • Health plan beneficiary numbers
  • Account numbers
  • Certificate/license numbers
  • Vehicle identifiers and serial numbers including license plates
  • Device identifiers and serial numbers
  • Web URLs
  • Internet protocol addresses
  • Biometric identifiers (e.g. retinal scans, fingerprints)
  • Full face photos and comparable images
  • Any unique identifying number, characteristic or code

Application vulnerability assessments evaluate sensitive information that may be disclosed to unauthorized parties; however, a good security consultant will always understand that "sensitive" is highly subjective. In the case of a HIPAA covered entity, all 18 items above should be considered sensitive. Leakage of any data on the list can have serious consequences for the covered entity in question. HIPAA violations can carry significant financial penalties, so never assume that something even slightly sensitive is OK to disclose. What would normally be considered a Low Risk information disclosure in another industry can be a High Risk issue for a HIPAA covered entity.

The Security Rule

HIPAA does not get into the specifics or technical nature of vulnerability assessments. The HIPAA Security Rule simply mandates that covered entities "[c]onduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the covered entity."

In other words, if you have an application handling ePHI, you need to perform a vulnerability assessment against it. This vagueness is both good and bad. Application vulnerability assessments are highly subjective, and risks depend greatly on the business logic involved. This is an important freedom that allows security experts to get the job done the right way: the tester can focus on actual risk rather than a checklist that may not reflect a real-life attack scenario.

On the other hand, it may allow misguided decisions to be made about how risk should be identified. Healthcare is an industry that is no stranger to products with exaggerated or false claims, and there is no shortage of software companies ready to sell something that may lead an organization to believe it is doing its due diligence in identifying risk.

EHR Data Sensitivity

As with any assessment, an ethical hacker is often the last line of defense against a breach, and a critical aspect of our job is to protect the people who use these applications. We must not forget the sensitivity of the data used by HIPAA covered entities and their applications. Special consideration should be given to the handling of EMR/EHR records, which contain detailed records of medical conditions and history.

Moving Forward

Successful business operation is the ultimate goal of any security assessment. While electronic health records are relatively new, healthcare technology has greatly improved the care individuals receive. Vulnerability assessments can be difficult, but security testing must coincide with application deployment. By keeping both of these requirements in mind we can stay secure and keep moving forward.

For more information on Virtue Security services please see our vulnerability assessment datasheet.

Getting the Most Out of Application Vulnerability Assessments




Application vulnerability assessments are a vital part of the security process for any organization deploying online applications. They identify critical threats, sensitive information leakage, and a wide array of other security issues. While the discovery of these vulnerabilities is the most immediate value returned by an assessment, several other benefits are commonly missed. Organizations performing multiple engagements have an even greater opportunity to build better long-term security practices. Below are several things to keep in mind after performing an assessment:

Build Fixes into the SDLC

As vulnerability assessments are conducted across multiple applications within an organization, a clear picture often emerges of the areas that need improvement. Vulnerabilities are often remediated with quick patches to insecure code, but putting slightly more effort into a long-term solution can yield a far larger benefit. This is a great opportunity to build the remediation into the development process: instead of creating many small code patches, build the fixes into trusted frameworks that can be reused by many applications.

If Cross-site Scripting (XSS) is a recurring problem, development teams should take the time to build proper input validation into request handling functions. This ensures input is validated consistently across the application, and that these controls are reused across the organization rather than in a few isolated areas.

A CISO looking at recurring XSS issues may wish to meet with lead developers and discuss long-term solutions to input validation coding practices. Remember that each issue has a cost, both in terms of a security tester finding and reporting it and of developers remediating it. Over multiple years, multiple applications, and multiple vulnerability assessments, these costs compound greatly.

Pay Attention to the Lows

Low Risk vulnerabilities often go unremediated when a significant cost is associated with the fix. While threat prioritization is a necessity, Virtue Security strongly recommends that Low Risk issues be remediated within 90 to 180 calendar days. While these issues often disclose only minimally sensitive information, it's important to keep in mind that this is still useful to an attacker. If an attacker were to discover every Low Risk item across an organization, chances are high that they would have an easier time carrying out other attacks against the organization.

Even small amounts of internal knowledge can be used as strong leverage when carrying out social engineering attacks. An attacker gains a lot of credibility by being able to ask specific questions about internal systems or software. Additionally, some Low Risk issues have compounding risk: their existence can increase the exploitability of higher-risk issues.

Think More Than Compliance

To some, vulnerability assessments may be like a trip to the dentist: not everyone wants to go, but they exist to help. They may be conducted as part of a PCI, HIPAA, or other regulatory audit, but ultimately they exist to benefit the organization. Performing assessments with the sole goal of maintaining compliance is not a sustainable practice. Compliance is only a short-term goal, and doing only the minimum to meet it is a recipe for disaster.

Working with vendors to properly define the scope of an assessment is also critical to a thorough result. Ensuring test dates are fair and reasonable and start on time will only benefit the organization. Attempting to squeeze assessments into shortened time frames should be avoided whenever possible.

Compare Vendors

Large organizations should use multiple vendors; a competitive atmosphere deters complacency and ensures engagements are performed with diligence. While some organizations run direct simultaneous comparative tests between two vendors, this can double the price of an assessment. Organizations wishing to save money while still validating another vendor's findings may wish to alternate applications between vendors on an annual or biannual basis. Over time, a weak security vendor may become apparent. Do consider that ethical hacking changes quickly, and additional issues should be expected to be discovered after a year's time.


Application vulnerability assessments are one of the most cost-effective ways to identify weaknesses in application business logic. By coordinating long-term plans with development teams, even greater value can be obtained. To keep value steady, work with vendors to make sure testing scope reflects real-world attack scenarios and ensure testers have reasonable timeframes to complete testing. Thinking in terms of long-term solutions often has far greater rewards than simply maintaining short-term compliance. Avoiding a breach is the ultimate goal of just about everything we do; if we keep this in mind and properly manage short-term goals, we will all be better off.

For more information on how you can get the most out of your next application vulnerability assessment, contact Virtue Security founder Elliott Frantz at
