AWS Penetration Testing Part 2 – S3, IAM, EC2


Penetration Testing AWS Services


S3 and IAM Policies

Unlike ACLs and bucket policies, IAM policies are targeted at IAM users/groups instead of S3 buckets and objects. Using an IAM policy, we can give an IAM user limited access to S3 resources (or any AWS service in general). The following is an example IAM policy:

{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Effect":"Allow",
            "Action": [ 
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource":"arn:aws:s3:::examplebucket/*"
        }
    ]
}

This gives the IAM user assigned that policy read access to any object stored in the “examplebucket” S3 bucket as well as the ability to create and delete objects.

Note: The same tests for bucket policies are applied to IAM policies by the AWS Extender Burp extension; however, IAM user credentials must be supplied in this case.
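What a given IAM policy actually permits can also be verified manually with the AWS CLI using that IAM user's credentials. The following is a minimal sketch; the profile, bucket, and object names are placeholders:

# Configure a named profile with the IAM user's access key and secret
aws configure --profile test-iam-user

# READ: attempt to fetch an object using that profile
aws s3api get-object --bucket examplebucket --key somefile.txt somefile.txt --profile test-iam-user

# WRITE/DELETE: attempt to create and then remove a harmless test object
aws s3 cp test.txt s3://examplebucket/test.txt --profile test-iam-user
aws s3 rm s3://examplebucket/test.txt --profile test-iam-user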

Pre-signed URLs

In addition to the access control mechanisms listed above, S3 can allow temporary read/write access to private objects hosted in buckets via pre-signed URLs. Applications that use S3 to host mildly sensitive content, such as avatar images, should use pre-signed URLs to ensure objects cannot be harvested by other users.

Pre-signed URLs typically look like the following:
https://s3.amazonaws.com/{S3_BUCKET}/{path}?AWSAccessKeyId={S3_ACCESS_KEY_ID}&Expires={expire_date}&Signature={signature}

Two checks are performed by the AWS Extender Burp extension:

  1. Whether or not authentication is enforced for objects referenced in pre-signed URLs.
  2. Whether the token is valid for an excessive amount of time.
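For reference, a pre-signed URL can be generated with the AWS CLI; the expiry window passed to --expires-in is what the second check above evaluates. A minimal sketch, with the bucket and object key as placeholders:

# Generate a pre-signed URL valid for one hour (3600 seconds)
aws s3 presign s3://examplebucket/private/avatar.png --expires-in 3600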

The Intersection of Access Control Mechanisms

When more than one access control mechanism is applied, Amazon decides what to allow based on the combination of all of them. For instance, if an IAM policy grants access to an object that a bucket policy denies, that object will not be accessible to the user, as an explicit “DENY” rule always takes precedence over an “ALLOW” rule. And while any operation that does not have an appropriate “ALLOW” rule set is rejected by default, many misconfiguration issues arise from mistakes and misunderstandings on the bucket owner’s part, sometimes leaking very sensitive data publicly [4].

It’s also worth noting that Amazon S3 does not have a concept of hidden or internal buckets. As you might imagine, this creates an inherent problem of bucket name enumeration, which is worth considering when choosing bucket names.

EC2 Metadata IP

AWS provides instance metadata for EC2 instances via a private HTTP interface only accessible to the virtual server itself. While this has no significance from an external perspective, it can be a valuable feature to leverage in SSRF-related attacks. The categories of metadata are exposed to all EC2 instances via the following URL:

http://169.254.169.254/latest/meta-data/

We commonly find that image and PDF rendering endpoints are susceptible to attacks such as this. If user-generated content is rendered with utilities such as wkhtmltopdf, these endpoints can be used as a vector to grab this data.
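Where such an SSRF vector exists, the metadata service can be queried directly. The requests below are a sketch of what an attacker would typically fetch; the instance role name at the end of the last path is a placeholder:

# List the available metadata categories
curl http://169.254.169.254/latest/meta-data/

# Enumerate and retrieve temporary credentials for any IAM role attached to the instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/example-instance-role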

Cognito Authentication

AWS provides the capability to fully manage authentication of application users via Cognito authentication. This can be integrated with large identity providers such as Google, Facebook, and Twitter, as well as custom identity interfaces. Cognito also supports anonymous access, where anyone can request an access token. Penetration testers should be aware of this behavior and be able to test for such cases.

Our plugin has the capability to test for unauthenticated access when an identity pool is discovered in proxy traffic; however, penetration testers should not rely on this scenario for test coverage. Identity pool IDs are often encoded in token requests sent to cognito-identity.amazonaws.com. Awareness of this behavior is a critical first step to verifying unauthenticated Cognito access. In such situations the plugin can be forced to run the test by placing the extracted pool ID in any request parameter so the plugin can detect it.
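Unauthenticated access to an identity pool can also be checked manually with the AWS CLI. A minimal sketch, assuming the pool ID and region below are placeholders extracted from proxy traffic:

# Request an identity ID from the pool without supplying any credentials
aws cognito-identity get-id --identity-pool-id us-east-1:00000000-0000-0000-0000-000000000000 --region us-east-1

# Exchange the returned identity ID for temporary AWS credentials
aws cognito-identity get-credentials-for-identity --identity-id <IdentityId-from-above> --region us-east-1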

Other Cloud Services

Much of the behavior described so far in this series applies to the other major cloud providers such as Google Cloud and Azure. In part 3 we will look at more examples of those and how they can be tested as well. Please check back shortly!

AWS Penetration Testing Part 1 – S3 Buckets


Penetration Testing AWS Services


Amazon Web Services (AWS) provides some of the most powerful and robust infrastructure for modern web applications. As with all new functionality on the web, new security considerations inevitably arise. For penetration testers, a number of AWS services can pose obscure challenges at times.

In this series of blog posts, we will discuss AWS services in detail, common vulnerabilities and misconfigurations associated with them, and how to conduct sufficient security tests for each service with the aid of automated tools. This article is intended to be used by penetration testers with our AWS BurpSuite extension to easily assess the security of AWS S3 buckets.

We’ve released our BurpSuite plugin AWS Extender, which can identify and assess buckets discovered in proxy traffic. It has also been extended to identify identity pools as well as Google Cloud and Microsoft Azure services.

Amazon Simple Storage Service (S3)

Launched in March 2006 and currently hosting trillions of objects, Amazon S3 is an extremely popular object storage service that provides scalable storage infrastructure. Although S3 can host static websites, it does not by itself support code execution or any other programmatic behavior. It only provides storage, through REST, SOAP, and BitTorrent interfaces, to read, upload, and delete static files.

Amazon provides several access control mechanisms for S3 buckets: access control lists (ACLs), bucket policies, and IAM policies. Upon creation, an S3 bucket is assigned a default ACL that grants the bucket owner full control over the bucket.

S3 Penetration Testing Basics

There are a few key concepts that any web application penetration tester should be aware of:

  • All S3 buckets share a global naming scheme; bucket enumeration is unavoidable.
  • All S3 buckets have a DNS entry: [bucketname].s3.amazonaws.com
  • It’s generally easiest to access a bucket over its HTTP interface (https://[bucketname].s3.amazonaws.com) or to use the more powerful AWS CLI (an unauthenticated variant is shown after this list):
    apt-get install awscli
    aws s3 ls s3://mybucket
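Anonymous access can be checked in much the same way by explicitly disabling request signing, or by requesting the bucket's HTTP interface directly. A quick sketch, with the bucket name as a placeholder:

# List the bucket without signing the request (anonymous access)
aws s3 ls s3://mybucket --no-sign-request

# Request the bucket listing over the HTTP interface
curl https://mybucket.s3.amazonaws.com/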

S3 Common Vulnerabilities

If you’re new to AWS or S3, there are a few common vulnerabilities you should be aware of:

  • Unauthenticated Bucket Access – As the name implies, an S3 bucket can be configured to allow anonymous users to list, read, and/or write to a bucket.
  • Semi-public Bucket Access – An S3 bucket is configured to allow access to “authenticated users”. This unfortunately means anyone authenticated to AWS. A valid AWS access key and secret is required to test for this condition.
  • Improper ACL Permissions – The ACL of the bucket has its own permissions, which are often found to be world readable. This does not necessarily imply a misconfiguration of the bucket itself; however, it may reveal which users have what type of access. Manual checks for each of these conditions are sketched below.
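The following AWS CLI commands are a rough sketch of how each condition might be checked manually; the bucket name and profile are placeholders, and the semi-public test requires a profile holding any valid AWS credentials:

# Unauthenticated bucket access: list the bucket without signing the request
aws s3 ls s3://examplebucket --no-sign-request

# Semi-public bucket access: list the bucket with any valid AWS credentials
aws s3 ls s3://examplebucket --profile any-aws-account

# Improper ACL permissions: attempt to read the bucket's ACL
aws s3api get-bucket-acl --bucket examplebucket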

Access Control Lists (ACLs)

S3 access control lists can be applied at the bucket level as well as at the object level. They generally support the following set of permissions:

  • READ
    At the bucket level, this allows the grantee to list the objects in a bucket. At the object level, this allows the grantee to read the contents as well as the metadata of an object.
  • WRITE
    At the bucket level, this allows the grantee to create, overwrite, and delete objects in a bucket.
  • READ_ACP
    At the bucket level, this allows the grantee to read the bucket’s access control list. At the object level, this allows the grantee to read the object’s access control list.
  • WRITE_ACP
    At the bucket level, this allows the grantee to set an ACL for a bucket. At the object level, this allows the grantee to set an ACL for an object.
  • FULL_CONTROL
    At the bucket level, this is equivalent to granting the “READ”, “WRITE”, “READ_ACP”, and “WRITE_ACP” permissions to a grantee. At the object level, this is equivalent to granting the “READ”, “READ_ACP”, and “WRITE_ACP” permissions to a grantee.

A grantee can be an individual AWS user, referenced by their canonical user ID or email address, or one of the following predefined groups:

  • The Authenticated Users Group
    Represents all AWS users and is referenced by the URI “http://acs.amazonaws.com/groups/global/AuthenticatedUsers”.
  • The All Users Group
    Represents all users (including anonymous ones) and is referenced by the URI “http://acs.amazonaws.com/groups/global/AllUsers”.
  • The Log Delivery Group
    Relevant only for access logging and is referenced by the URI “http://acs.amazonaws.com/groups/s3/LogDelivery”.

The following is a sample ACL:

<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>*** Owner-Canonical-User-ID ***</ID>
    <DisplayName>owner-display-name</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
               xsi:type="CanonicalUser">
        <ID>*** Owner-Canonical-User-ID ***</ID>
        <DisplayName>display-name</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy> 

All of the aforementioned permissions are currently covered by the AWS Extender Burp extension. Namely, the following tests are performed once an S3 bucket is identified:

  1. The extension attempts to list objects hosted in the bucket (READ).
  2. The extension attempts to upload a “test.txt” file to the bucket (WRITE).
  3. The extension attempts to retrieve the access control list of the bucket (READ_ACP).
  4. The extension attempts to set the access control list of the bucket (WRITE_ACP) without actually changing it.

Note: Similar tests are conducted for every identified S3 object.
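Each of these checks can also be reproduced manually with the AWS CLI. The following is a minimal sketch; the bucket name and file names are placeholders, acl.json would contain the access control policy to set, and the WRITE and WRITE_ACP checks modify the bucket if they succeed:

# READ: attempt to list objects in the bucket
aws s3 ls s3://examplebucket

# WRITE: attempt to upload a harmless test file
aws s3 cp test.txt s3://examplebucket/test.txt

# READ_ACP: attempt to retrieve the bucket's ACL
aws s3api get-bucket-acl --bucket examplebucket

# WRITE_ACP: attempt to set the bucket's ACL
aws s3api put-bucket-acl --bucket examplebucket --access-control-policy file://acl.json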

Bucket Policies

Using a bucket policy, a bucket owner can specify what a principal can perform on a specific resource. A principal can be any AWS user/group or all users including anonymous ones; an action can be any predefined permission supported by bucket policies; and a resource can be the entire bucket or a specific object. The following is a sample bucket policy expressed in JSON format:

{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Effect":"Allow",
            "Principal": "*",
            "Action":["s3:GetObject"],
            "Resource":["arn:aws:s3:::examplebucket/*"]
        }
    ]
}

This policy allows the “s3:GetObject” action on the resource “arn:aws:s3:::examplebucket/*” for a wildcard principal “*”. This is effectively equivalent to granting the “READ” permission to the All Users group on the “examplebucket” S3 bucket using an access control list (ACL).
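With a policy like this in place, anonymous object retrieval should succeed, which can be confirmed directly. A brief sketch; the bucket and object names are placeholders, and the second command only works if s3:GetBucketPolicy is also permitted:

# Anonymous object retrieval over the HTTP interface
curl https://examplebucket.s3.amazonaws.com/some-object.txt

# Retrieve the bucket policy itself
aws s3api get-bucket-policy --bucket examplebucket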

The following permissions are currently covered by the AWS Extender Burp extension (manual CLI equivalents for several of them are sketched after the list):

  • s3:ListBucket
  • s3:ListMultipartUploadParts
  • s3:GetBucketAcl
  • s3:PutBucketAcl
  • s3:PutObject
  • s3:GetBucketNotification
  • s3:PutBucketNotification
  • s3:GetBucketPolicy
  • s3:PutBucketPolicy
  • s3:GetBucketTagging
  • s3:PutBucketTagging
  • s3:GetBucketWebsite
  • s3:PutBucketWebsite
  • s3:GetBucketCORS
  • s3:PutBucketCORS
  • s3:GetLifecycleConfiguration
  • s3:PutLifecycleConfiguration
  • s3:PutBucketLogging

Part 2 of the AWS series will cover more on S3 permissions including IAM and access tokens, as well as considerations for EC2, Cognito authentication and more. Read part 2 here.

Understanding jQuery Security


Application Penetration Testing

The jQuery Security Model Explained

jQuery is a JavaScript UI framework that provides an abstraction layer over many DOM manipulation functions. It provides developers with a friendly interface to quickly and dynamically update the DOM without reloading the entire page. It’s a surprisingly simple concept, but it has given rise to a new model of web app development and paved the way for many more JavaScript frameworks.

The security implications of jQuery are not terribly exciting, but they are commonly misunderstood and can have a large impact. In many situations, we see this costing significant time and money for larger organizations; jQuery vulnerabilities are often raised with risk blown out of proportion and are a common source of disagreement between penetration testers and developers.

It’s worth noting that almost all jQuery security issues surround functions that were so commonly misused that the jQuery team modified their behavior to protect developers. Although the changes have been widely interpreted as bug fixes, it can easily be argued that the vulnerabilities attributed to jQuery are nothing more than developer error. It is our hope that this article can be used by organizations to better assess the risk of common jQuery security issues.

jQuery Basics – The $() Function

$() is identical to, and the most commonly written form of, the jQuery() function. It returns a jQuery object: essentially a chunk of content to be written to the DOM.

In most use cases, a jQuery function will take a selector, element, or object as a parameter. An ID selector, denoted by a hash (#), identifies existing HTML content in the current DOM. In the following example we will use the jQuery html() function to modify an element with the #myDivTag selector:

<script src="https://code.jquery.com/jquery-1.10.2.js"></script>
 
<div id="myDivTag">My old div tag text!</div>
 
<script>
$( "#myDivTag" ).html("<b>My new div tag text!</b>");
</script>

Notice the "My old div tag text!" does not show. jQuery modifies the DOM at runtime to replace the text of our div element.

This capability is not new. In the old world, the above code would be written similar to the example below:

<div id="myDivTag">My div tag text!</div>

<script>
document.getElementById("myDivTag").innerHTML = "My new div tag text!";
</script>

As you can see, the jQuery function is similar to the getElementById() function. But there is an important difference: jQuery accepts more than just a selector ID, including HTML and script content. The example below shows this:

$('<p>Hello world!</p>').appendTo('body');

Our ‘Hello world!’ is now bound to browser DOM and visible on screen.

jQuery and Application Pentesting

Application pentesters may already see the attack vectors in the examples above; however, there’s a point we cannot stress enough: jQuery() and some element-specific sub-functions are execution sinks. Penetration testers must evaluate the sources of data consumed by jQuery functions and determine if and how they are bound to the DOM. For many applications this can be an extremely time consuming manual process. Fortunately, there is some help from tools like Burp Suite Professional’s passive scanner, which will recognize simple occurrences of certain DOM properties placed within a jQuery function (example: $(location.hash)). The unfortunate part is that more complex instances of DOM XSS cannot be reliably detected with automated methods.

jQuery and many other modern frameworks introduce a new dynamic layer to the creation of the DOM in web applications. This creates a more complex model of context where XSS vulnerabilities can manifest. While there is no automated solution for this in the foreseeable future, it increases the demand for manual penetration testing. There has never been a better time for application pentesters to brush up on their JavaScript and jQuery skills.

jQuery’s "XSS Vulnerability"

If you arrived at this page today because a vulnerability titled “jQuery XSS Vulnerability” was raised on a pentest report, you’re not alone. At the time of this writing there are no known direct XSS vulnerabilities in the jQuery framework (not including jQuery plugins). Unfortunately, it is extremely common for jQuery’s behavior changes to be interpreted as bug fixes.

Most savvy jQuery developers are well aware of the danger of introducing untrusted content into a jQuery object. Similar to other functions that modify DOM (innerHTML, document.write(), etc.), the jQuery function must be used with appropriate care. It is no more or less dangerous than the native JavaScript functions we call execution sinks.

Let’s take a closer look at the behavior change that has caused so many headaches. Below is an example of the most common vulnerable code:

<html><body>
<script src="https://code.jquery.com/jquery-1.6.1.js"></script>

<script>
$(window.location.hash).appendTo("body");
</script>
</body></html>

With the page above we can introduce arbitrary script directly into the browser DOM; this even bypasses Chrome’s XSS Auditor.

This XSS vector is so common that jQuery eventually changed the selector handling characteristics to prevent such attacks. A change was soon put in place to block HTML strings starting with a ‘#’. This requirement defeats XSS vectors from the window.location.hash property as content will always start with a hash.


To simulate the XSS bug with jQuery 1.6.1, we pass script beginning with a # character, just as it would appear when consumed from the location.hash property.

The code successfully executes.

Next we upgrade jQuery to 1.6.3 and run the same code.
The code no longer runs because the string starts with a # character. Not long after this change, an additional behavior change was made to further fine-tune jQuery’s HTML detection. In version 1.9.0b1 it became mandatory for HTML content to start with a < character.

jQuery’s AJAX $.get() Response Handling Weakness

The jQuery AJAX $.get() function (not to be confused with the .get() function) is used to make, as you might have guessed, AJAX GET requests. It was found that versions prior to 1.12.0 would automatically evaluate response content, potentially executing any script contained in a response.

Unlike the selector handling issue described above, we believe this behavior should be considered dangerous and potentially unexpected even to savvy developers. The important caveat is that the scenarios in which this issue may manifest are far less likely than those of the previous issue.

This behavior may facilitate two potential vulnerabilities in an application.

  1. Applications making cross domain requests to untrusted domains may inadvertently execute script which may otherwise be perceived as safe content.
  2. Requests to trusted API endpoints may be leveraged in XSS attacks if script can be injected into data sources.

Conclusion

Like almost all modern software, jQuery aims to be powerful and versatile. There are countless safe and legitimate functions which can contribute to security vulnerabilities when misused. The jQuery issues described here were all a result of software which functioned as designed but was implemented improperly.


Wireless Penetration Testing Guide Part 1: Intro and Basics


Penetration Testing


 

Regardless of whether you work in security, compliance, IT, or management, there is a near 100% chance that you have encountered wireless networks in the enterprise before. Wireless networking has been around for quite some time and, in my experience, is given less consideration when it comes to configuration, deployment, and most importantly security. This is a problem, as a compromise of a company’s wireless network usually means direct access to the backbone of the organization’s internal network and resources, among other things. This guide will take you through the hows and whys of wireless, in addition to teaching all of the most common (and some lesser known) attack vectors. We will also be covering Bluetooth, NFC, and some hash cracking in order to obtain a broader understanding and mount more effective attacks against wireless systems.


Firstly, let’s start off strong with a brief overview of the fundamentals of wireless network communications. Now, I can understand that this part might be a bit dry, but it is definitely necessary for a full understanding of “the big picture”. In addition, I know that some of our readers might already know what is in this section, but a brief refresher never hurt anyone.
 
In general, there are two components to a basic wireless network: the access point (referred to as the AP) and the client. The client and the access point establish a connection and send each other wireless signals (most commonly over 2.4 and 5 GHz) that are then interpreted on each end. These signals encapsulate packets and have a fixed structure that depends on the protocols used for that specific type of communication. The AP then interprets these signals and (in most cases) converts them to regular network traffic, which is either routed to other wireless clients or back into the network that the AP is connected to.


Now, with all of this data going over the air, anyone in range would be able to view and modify this traffic. That’s why different encryption methods have been devised to protect traffic that is otherwise viewable by anyone with a good antenna and a bit of luck. A few of the most common encryption types are WPA, WPA2 (and variants), and WEP.

[Graphic: observed wireless network encryption types over time. Source: https://wigle.net/]
 

As you can see in the above graphic, there has been a shift over time in which protocols are used. In the infancy of wireless there were only open (unencrypted) and WEP-encrypted wireless networks, but as time went on, WPA and WPA2 gained popularity for security reasons. In an open wireless network, any client can connect to (also known as “associate” with) the wireless network as long as they are in range. In addition, even if a client is not associated with an open AP, they would still be able to see all traffic going over the air in essentially plain text (when using special hardware detailed in part 2).
 

In an encrypted wireless network, the connection between the client and the AP is secure in the sense that an outside onlooker who does not have access to the wireless network (usually granted through password-based authentication) would not be able to view and/or modify the traffic of that network’s clients. The first standard form of this type of encryption was called WEP (Wired Equivalent Privacy). For some time, WEP was the de facto method for wireless security, until tools, attacks, and methods were developed that essentially made this sort of authentication useless. Modern wireless networks use much more sophisticated WPA and WPA2 security to better protect these networks. Although these encryption protocols are strong, they all have weaknesses that can be exploited to gain access to the AP, the client, or the underlying network.
Now, it is not just the security of the wireless protocol itself that is important, but also what those wireless clients have access to. For example, in a typical home network the entire wireless and wired network is on the same subnet, and thus a client on the AP can have access to a smart TV hooked up by Ethernet. In addition, home networks usually use WPA/WPA2-PSK encryption, which only involves a password for authentication.

 

Conversely, in the enterprise it is best practice to isolate the wireless networks from the rest of the company’s internal network and only allow wireless clients to access parts of the network on a case-by-case basis. This is called “wireless isolation” and is commonplace in the modern enterprise. Although it is the better option, many company networks fall victim to negligence during wireless configuration, and thus a wireless client will have access to the internal network. This is fantastic from a would-be attacker’s standpoint, as the wireless network now becomes a more lucrative point of entry. Additionally, in an enterprise it is possible that the wireless networking system has added layers of security. An example of this would be MAC address whitelisting, where only a predetermined set of clients is allowed to connect to the AP. The MAC address is an identifier unique to the specific wireless adapter installed on a device (e.g. a laptop wireless card, a USB wireless card, or a cell phone’s internal wireless adapter). Another example would be username/password authentication, which could authenticate the user against a RADIUS, Microsoft Active Directory, or other server.

 

Now that we have a solid understanding of the basics of these types of networks, we can move on to learning about their vulnerabilities and how to exploit them, and the first thing we need to cover is hardware. Exactly what you are going to need to get going with wireless penetration testing, from a basic USB card to a 2-mile-plus packet cannon, will all be covered in part 2, coming shortly.

Stay tuned at https://www.linkedin.com/company/virtue-security

Mark Shasha is a penetration tester at Virtue Security in New York City – @bignosesecurity

Managing OpenSSH Patch Levels on Ubuntu


Vulnerability Remediation


Many vulnerability scanners will raise false positives regarding outdated installations of OpenSSH on Ubuntu; notably issues similar to:

  • OpenSSH < 7.0 Multiple Vulnerabilities
  • OpenSSH < 6.6 Multiple Vulnerabilities

A thorough penetration test should weed out false positives of these issues; however, they are a common occurrence in assessments relying on automated tools. In this example we will look at a fully patched Ubuntu 14.04 server with OpenSSH installed and show how to properly validate this issue. To start, let’s grab the banner of the host in question:

$ nc 10.0.1.35 22
SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.3

This banner reveals a number of configuration details about the server, but right now we’re only concerned with the very last field in this example. The ‘2.3’ on the right is the internal Ubuntu patch level, which we can use to verify what vulnerabilities have been patched. First we should find out what the latest patch level is; to do this we can reference the following URL:

http://packages.ubuntu.com/search?keywords=openssh-server


This will lead us to the following URL where we can look at the changelog: http://packages.ubuntu.com/trusty-updates/openssh-server


This then brings us to the changelog for the latest Ubuntu openssh-server patch:
http://changelogs.ubuntu.com/changelogs/pool/main/o/openssh/openssh_6.6p1-2ubuntu2.3/changelog

openssh (1:6.6p1-2ubuntu2.3) trusty-security; urgency=medium

  * SECURITY REGRESSION: random auth failures because of uninitialized
    struct field (LP: #1485719)
    - debian/patches/CVE-2015-5600-2.patch:

 -- Marc Deslauriers   Mon, 17 Aug 2015 21:52:52 -0400

openssh (1:6.6p1-2ubuntu2.2) trusty-security; urgency=medium

[..]

  * SECURITY UPDATE: X connections access restriction bypass
    - debian/patches/CVE-2015-5352.patch: refuse ForwardX11Trusted=no
      connections attempted after ForwardX11Timeout expires in channels.c,
      channels.h, clientloop.c.
    - CVE-2015-5352

Here we have verification that the 2.3 patch includes an improved fix for CVE-2015-5600, and that the 2.2 patch included an update for CVE-2015-5352 as well. This is a critical comparison that should be made against vulnerabilities raised by automated scanners. If no verification can be obtained from this method, the comparison should default to the OpenSSH version (6.6 in this case) and the CVE details.
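The installed package version can also be confirmed directly on the host, which avoids relying on the banner alone. A quick sketch, assuming shell access to the server in question:

$ dpkg -s openssh-server | grep Version
$ apt-cache policy openssh-server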

If the OpenSSH server is found to be out of date it can be easily upgraded with Ubuntu’s package management system.

$ sudo apt-get update
$ sudo apt-get upgrade
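If a full system upgrade is not desirable, the package can be upgraded on its own and the service restarted. A minimal sketch, assuming a standard Ubuntu 14.04 install where the SSH service is named “ssh”:

$ sudo apt-get install --only-upgrade openssh-server
$ sudo service ssh restart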
