Insecure Direct Object Reference

Abusing internal API to achieve IDOR in New Relic

I recently found a nice insecure direct object reference (IDOR) in New Relic which allowed me to pull data from other user accounts, and I thought it was worthy of writing up because it might make you think twice about the types (and the sheer number!) of API’s that are used in popular web services.

New Relic has a private bug bounty program (I was given permission to talk about it here), and I’ve been on their program for quite some time, so I’ve become very familiar with their overall setup and functionality of the application, but this bug took me a long time to find … and you’ll see why below.

Some background first: New Relic has a public REST API which can be used by anyone with a standard user account. The API operates by passing an X-api-key header along with your query. Here’s an example of a typical API call:
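The original request isn’t reproduced here, but a typical call has roughly this shape (the API key and application ID are placeholders, and the endpoint path is illustrative rather than copied from the article):

```python
import urllib.request

API_KEY = "YOUR_API_KEY"   # hypothetical placeholder
application_id = 123456    # hypothetical application ID

# Build (but don't send) a request in the shape the article describes:
# a REST endpoint keyed off {application_id}, authenticated solely by
# the X-Api-Key header.
req = urllib.request.Request(
    f"https://api.newrelic.com/v2/applications/{application_id}/metrics.json",
    headers={"X-Api-Key": API_KEY},
)
```

Swapping `{application_id}` for someone else’s ID is exactly the kind of substitution tested in the next paragraph.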

Pretty typical. I tried to poke at this a little by swapping the {application_id} with the {application_id} of another account belonging to me. I usually test for IDORs this way: one browser (usually Chrome) is set up as my “victim” account and another (usually Firefox) as the “attacker” account, with everything routed through Burp so I can check the responses after changing values here and there. It’s an old-school way to test for IDORs and permission-structure issues, and there is probably a much more effective way to automate it, but it works for me. Needless to say, this was a dead end and didn’t return anything fruitful.

I looked further and found that New Relic also implements an internal API, used by both their infrastructure product and their alerts product. They conveniently identify it through the /internal_api/ path (and reference the internal API in some of their .js files as well).

The two products operate on different subdomains. This is what it looks like in Burp, on the domain where the IDOR originally occurred.

The reason I bring up the two separate subdomains is that this bug sat there for an excessive amount of time because I didn’t bother checking both subdomains and their respective internal APIs. To make it even harder to find, there are multiple versions of the internal_api, and the bug only worked on version 1. Here’s what the vulnerable endpoint looked like:

The account number increments by 1 every time a new account is created, so I could have enumerated every single account pretty easily by running a Burp Intruder attack and increasing the value by one each time. The IDOR was possible because the application did not verify that the account number requested through the internal API GET request above matched the account number of the authenticated user.
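To make the flaw concrete, here is a hypothetical sketch of the missing ownership check (names and handler shape are mine, not New Relic’s actual code):

```python
# Vulnerable pattern: the handler trusts the account ID taken from the
# URL and never compares it to the authenticated session's account.
def get_account_events_vulnerable(session_account_id, requested_account_id, db):
    # No ownership check: any authenticated user can read any account.
    return db.get(requested_account_id)


# Fixed pattern: the requested resource must belong to the caller.
def get_account_events_fixed(session_account_id, requested_account_id, db):
    if requested_account_id != session_account_id:
        raise PermissionError("account mismatch")
    return db.get(requested_account_id)
```

With sequential account numbers, the vulnerable version turns into a one-line Intruder loop; the fixed version returns an error for every ID that isn’t yours.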

This IDOR allowed me to view the following from any New Relic account:

  • Account Events
  • Account Messages
  • Violations (Through NR Alerts)
  • Policy Summaries
  • Infrastructure events and filters
  • Account Settings

This bug has been resolved, and I was rewarded $1,000. I’d just like to point out that the New Relic engineering and development team was super quick to remediate this. Special thanks to the New Relic team for running one of the best, if not the best, bug bounty programs out there!

Follow me on Twitter to stay up to date with what I’m working on and security/bug bounties in general 🙂


Authentication Bypass

Inspect Element leads to Stripe Account Lockout Authentication Bypass

A common thing I see happening in many popular applications is a developer disabling an HTML element through the “class” attribute. It usually looks something like this:
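The original screenshot isn’t reproduced here, but a hypothetical example of the pattern looks like:

```html
<!-- Hypothetical markup: the button is only "disabled" by a CSS class
     that styles it as greyed out and unclickable. -->
<button class="btn invite-btn disabled">Invite user</button>
```

A class like this is purely cosmetic: the browser enforces nothing, so deleting the class (or the element’s styling) in the inspector makes the control usable again unless the server independently checks the action.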

This works well enough in some situations, but in others it can be manipulated to perform actions that really shouldn’t be available to an unauthenticated user. That’s exactly what happened in a bug I submitted to Stripe a few weeks ago.

When you are logged into your Stripe account, you are timed out after a certain amount of inactivity. Once you reach this timeout, you aren’t able to make any changes on the account or view other pages until you re-authenticate by entering your password. Herein lies the problem with using a “disabled” class: an attacker can simply manipulate the page through inspect element, delete the class, and view other pages, allowing them to send requests.

In the video below, you’ll see how I’m locked out of a Stripe account because of inactivity, but by navigating to the “invite user” section of the timeout page through inspect element, I am able to invite myself as an administrator on the timed-out account without authenticating first.

This, of course, requires a person to first be logged in to their Stripe account and leave their computer out in the open… but using this method you can render the entire lockout process completely useless on an account. It’s interesting nonetheless that the folks at Stripe made sure a malicious user couldn’t change the webhooks… yet inviting an administrator to the account was completely allowed.

Stripe followed up and clarified that simply dismissing the entire modal isn’t enough to bypass the authentication check – the check is enforced on the backend – but in this situation that backend check had been accidentally removed, which is what allowed me to invite another administrator.
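Stripe’s actual code obviously isn’t public, but the idea of the backend check that went missing can be sketched like this (all names hypothetical):

```python
# Sketch of a server-side re-authentication gate. The UI can grey out
# the "invite user" button all it wants; the real protection is a check
# like this on the request handler.
def invite_admin(session, email):
    # Reject the action unless the session recently re-authenticated.
    if session.get("reauth_ok") is not True:
        raise PermissionError("re-authentication required")
    return f"invited {email}"
```

Remove that `if` (as apparently happened here) and the client-side lockout is the only barrier left, which inspect element defeats trivially.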

Stripe security was very responsive, and the issue was fixed shortly after I reported it. I asked permission before publishing this article. Bounty: $500.

I have some more bounty writeups that are a bit more technical than this one coming soon, including a writeup on a CVE I discovered, so check back later for more updates. Additionally, you can follow me on Twitter to stay up to date with my bugs and what I’m doing, if you wish.

Cross Site Scripting (XSS)

Penetrating PornHub – XSS vulns galore (plus a cool shirt!)

When PornHub launched their public bug bounty program, I was pretty sure that most of the low-hanging fruit would already have been found and reported. Instead, I found my first vulnerability in less than 15 minutes of poking around, and the second a few minutes after that. I have never in my entire bug hunting career found bugs this quickly, so it was pretty exciting.

In return, I received two payments of $250 each and a really, really cool T-shirt plus stickers, which I posted on Reddit here:

When I posted this on Reddit I had no idea it would be so popular and raise so many questions. Most people asked “What was the hack?” followed by “Why would you hack PornHub?” and I couldn’t talk about it until…now. (These vulnerabilities have now been fixed.)

I found and reported two reflected cross-site scripting (XSS) vulnerabilities within 20 minutes of browsing the PornHub Premium website. Cross-site scripting, if you’re not familiar with it, is a type of vulnerability that enables an attacker to run malicious scripts on a website. OWASP sums it up pretty nicely here:

An attacker can use XSS to send a malicious script to an unsuspecting user. The end user’s browser has no way to know that the script should not be trusted, and will execute the script. Because it thinks the script came from a trusted source, the malicious script can access any cookies, session tokens, or other sensitive information retained by the browser and used with that site. These scripts can even rewrite the content of the HTML page.

The first one was found in the “redeem code” section of the site – the application didn’t check whether the text entered in the redeem-code input was actually a payload, so I was able to use the following payload to reflect the script on the page:

The first part of the payload “PAYLOAD STACK” ensures that the rest of the payload is sent through. If I entered:

Without the words in front of it, the application would reject it and nothing would appear on the page. Entering something non-malicious at the start tricked the validator and, in turn, allowed the payload to execute.
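I can only guess at the validator’s internals, but behavior like this usually comes from a check that inspects only the start of the input. A hypothetical sketch of such a filter:

```python
# Hypothetical reconstruction of a validator that a benign prefix can
# trick: it rejects input that *starts* with script markup, but reflects
# everything after a harmless lead-in unmodified.
def naive_validate(user_input):
    if user_input.lstrip().lower().startswith("<script"):
        return ""          # bare payloads are rejected...
    return user_input      # ...but prefixed ones pass straight through
```

A check like this blocks `<script>alert(1)</script>` on its own, yet happily reflects `FREEMONTH <script>alert(1)</script>` back into the page.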

The second vulnerability was also reflected XSS. This one was a bit simpler, and was found by entering a payload in a URL parameter that only appears once to new users… which is why I think it hadn’t been found until now. Most bug hunters like to get a feel for a website before they start poking around and trying to break things, but I usually take the opposite approach and use incognito windows so that the website thinks it’s my first visit. This is where the vulnerability existed.

I noticed that the PornHub Premium site was mostly off-limits unless you paid for access. Before you can even pay, a “pop-up” window warns the user that they are about to view pornography and asks them to either enter or exit by clicking a button. I also noticed that once you selected “enter”, the URL changed and gained a parameter. The vulnerable parameter was &compliancePop=no_gateway – this is where I was able to enter:

And I got a really nice “1” to appear on the screen – evidence of cross-site scripting. I reported both of these vulnerabilities to PornHub, and they were triaged within 24 hours.

I’d like to thank the folks at PornHub for running such a fair, quick-to-respond program and keeping their users safe. Also – thanks for the amazing T-shirt! Thanks as well to the folks at Reddit for being so interested in this that I had to send over 200 PMs to people who wanted to know what I did… hopefully this lives up to the promise that I would tell you about it, and sorry it took so long.

I have other bugs and vulnerabilities that are cooler and more intense than this one coming soon, so check back later and I’ll share them with you. Additionally, you can follow me on Twitter to stay up to date with my bugs and what I’m doing, if you wish.

Cross Site Scripting (XSS)

Discovering a stored XSS that affects over 900k websites (CVE-2016-9751)

In my free time when I’m not hunting for bugs in paid programs, I like to contribute a bit to the open-source community and check for vulnerabilities that might arise. In doing so, I found a stored cross-site scripting vulnerability that affected over 900,000 websites… yikes.

The vulnerable application is called Piwigo – an open-source image showcase/album that, according to Google, is active on over 900,000 web pages. The true number is probably higher than that; that’s just what the original search brings up. It’s commonly a one-click install on many web hosting platforms. Anyhow – on to the bug:

Piwigo has an option that allows for a “quick search” of the photo gallery. (Important to note: there are different “themes” a visitor can choose that change the way the pictures are displayed and the way the page looks. This will be important to remember later.)

When you enter a payload, the page displays the payload (sanitized properly) – and then saves the search as a number inside the URL. For example, my search URL is:

That number at the end can be changed, so you can see what keywords other people have searched on the site. I’m not sure whether that’s a good idea or a bad one, but that’s not the bug.

The bug is that when you enter a payload in this quick search area and have also selected the “elegant” theme, there is an option to open a “search criteria” page.

It just so happens that on this search criteria page… the keywords (or payload) you entered earlier are not sanitized. You end up getting this beautiful pop-up that all of us bug hunters love to see:

Side note: if you’re a bug bounty hunter, it’s always best to use alert(document.domain) instead of alert(1) – it shows whether the payload is actually firing on a domain that is in scope for the program.

Now here is where it gets bad: that URL above is permanently stored on the website, and I think the only way to remove it is to manually purge the search history from the administrator backend. Below is a picture of where you can perform that purge:

Why is this bad? An attacker can keep a set of payload URLs that still execute even after the website applies a patch – since the search was stored in the database before the patch went into place, all the attacker needs to do is direct a victim to the old URL, and the website owner can’t do much about it unless they have purged the search history.

Example of what I mean: at the time of this writing, if you visit the URL in that picture, the payload will still execute, even though the vulnerability was fixed a long time ago.

I was assigned (my first ever!) CVE-2016-9751 for this vulnerability. A fix was implemented after I reported it. Webmasters and gallery owners should update to Piwigo 2.9 in order to get the patch.

Payload used:


Other Bugs

Bypassing Apple’s iOS 10 Restrictions Settings – Twice

By default, Apple has a feature that allows all of their iOS devices to be assigned restrictions, so that children (and employees) cannot access naughty websites and other types of less-desirable content. You can enable these settings by visiting Settings > General > Restrictions on your iPhone or iPad.

Around the beginning of every year, I try to break Apple’s restriction settings for websites. It’s a pretty nerdy thing to do, and it’s not really classified as a “vulnerability” – but it’s a fun challenge and leads to some pretty interesting bugs, so I wanted to talk about a few of them here:

When I test the restriction settings, I turn restrictions on and then change the website settings to allow Safari, but only for the default list of specific websites (see screenshot below).

The first time I found out how to bypass the restrictions, I did it by accident. I noticed that certain pages I had open in Safari before turning restrictions on could still be reloaded, even though their domains were not on the list of approved websites. I realized that all of these pages had one thing in common: they were displaying PDFs. It turned out that simply appending .pdf to the end of the Safari URL made it possible to visit any website. An example is below:

Restricted URL: (left image)

Allowed URL: (right image)
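The post doesn’t show the filter’s internals, but one plausible reconstruction of the behavior described – a whitelist plus a blanket allowance for anything that looks like a PDF – is the following (purely speculative, and the function names are mine):

```python
# Speculative sketch: if the restriction engine special-cases PDFs and
# classifies them by how the URL string ends, appending ".pdf" to any
# URL bypasses the whitelist entirely.
def is_allowed(url, whitelist):
    if url.lower().endswith(".pdf"):  # "content type" guessed from the URL
        return True
    return url in whitelist
```

Under this model a restricted page is blocked as expected, but the same page with `.pdf` appended sails through, matching what the screenshots showed.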

That one was pretty interesting. I reported it to Apple through their bug tracker and it was marked as a duplicate – someone else had found it before me. I tried to see whether a bypass was possible through another method, and after a few hours I discovered another way:

(The following is my assumption as to how the website restrictions work behind the scenes.) When Apple checks a URL, they check the structure of the URL to see if it matches the list of whitelisted domains. What doesn’t happen is an additional check of where the hostname actually ends – or whether the URL merely contains a whitelisted domain as part of a longer name. This may be hard to explain, so I made a photo to demonstrate:

See what’s happening here? Only the URL up to “.com” is checked against the whitelist. The restriction settings do not check what comes after it… so I’m able to trick the filter into allowing a domain such as “” through. The actual domain name in this case is – which is definitely not on my approved list of domains.
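Assuming the prefix-matching behavior described above, here is a minimal sketch of the flawed check next to a hostname-based one (the whitelist contents are hypothetical):

```python
from urllib.parse import urlsplit

WHITELIST = {"apple.com"}  # hypothetical approved domain

def naive_check(url):
    # Flawed: matches a whitelisted name anywhere in the URL string,
    # so "apple.com.evil.example" slips through.
    return any(domain in url for domain in WHITELIST)

def proper_check(url):
    # Correct: parse the URL and compare the full hostname, allowing
    # only exact matches or genuine subdomains of a whitelisted domain.
    host = urlsplit(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in WHITELIST)
```

The naive version lets `http://apple.com.evil.example/` through because the string contains “apple.com”; the hostname-based version rejects it while still allowing `www.apple.com`.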

Restricted URL: (left image)

Allowed URL (but shouldn’t be!): (right image)

I also reported this to Apple about 7 months ago and it still isn’t fixed. I asked them for permission to share this article.

This is just an interesting bug that slipped through the cracks; I assume they will eventually ship a fix for both bugs. I still haven’t made it into Apple’s Security Hall of Fame, but it’s definitely a goal of mine for the year.
