For more than 15 years, tech leaders from around the globe have been lobbying governments not to regulate the technology industry. Over the last 12 months, that sentiment has all but reversed, with tech leaders now pleading with governments to regulate Artificial Intelligence (AI). It’s one indication that if we’re going to build AI as a human race, we need to get it right – and if we don’t, the consequences will be disastrous.
History serves as a guide for the future
For those who struggle to see the red flags, history serves as a guide: while social media companies were vigorously lobbying against regulation, their algorithms were having a deeply damaging effect on the wellbeing of millions of people around the world. Countless reports have shown children paying the price as algorithms served up content that drove them to eating disorders and even suicide, amongst other harms. Yet despite repeated reports of these ill effects around the globe – and even a coroner’s report in London which found that Instagram’s algorithm contributed materially to a death – social media companies insisted that regulation was unnecessary.
Enter ChatGPT, and a tidal wave of sentiment change swept across the tech industry almost overnight, with tech leaders deciding that regulation of AI was essential. There was a collective realization that this technology was about to become pervasive in our lives, far beyond what most people can readily grasp, and that without safety measures its impact might even outpace the trail of damage left by unsafe social media algorithms.
Over the last 5-10 years, we’ve seen social media companies compete in an arms race to build the most powerful social algorithms yet; these algorithms drive up engagement, which in turn drives up revenue. As these companies compete for the attention of users around the world, the ones that succeed in growing their revenue and business value are the ones that build the most effective algorithms. This has become a problem for some social media companies – after all, if kids are moving to other platforms whose algorithms are more effective at grabbing their time and attention, revenue takes a hit and the business begins to face an existential risk.
How organizations act when unsafe AI is more profitable
Creating AI that is safe by design doesn’t just happen – it takes time and investment. When revenue loss or business viability is a looming threat, the appetite to invest in safety by design decreases, resulting in technology that carries a higher risk of being unsafe and/or toxic. Frances Haugen – a former product manager at Meta and whistleblower – used the term “profit over safety” to describe this phenomenon inside Facebook. We saw an example of this when TikTok’s meteoric rise in monthly active users began drawing users away from Facebook: Zuckerberg called it out as a threat on an investor call and moved quickly to stay competitive by introducing an updated video algorithm that had a habit of elevating harmful content. Similarly, after facing a loss-making existential crisis, Musk cut back Twitter’s safety team (the team responsible for ensuring content is safe), and researchers subsequently identified a rise in unsafe content.
As these social media companies drive up profit and revenue and compete with aggressive algorithms, they are playing with the lives of children. It doesn’t take a coroner’s report or leaked research from inside Meta to see that. Beyond the fact that this is robbing a generation of its wellbeing and having an ever-growing harmful effect on society, it is a harbinger of something more dangerous: a taste of what is to come if AI risks aren’t properly understood and managed.
You don’t need to buy into the hype around Musk’s comments on the likelihood of AI annihilating humanity to know that we have a problem on our hands. Whether you believe Musk is arguably immaterial: other problems are fast emerging which, if AI design isn’t done properly from the outset, have the potential to create anything from vast systemic economic and social imbalances to physical safety and privacy harms in people’s daily lives.
Will a saturated AI market give rise to toxic business models?
Perhaps the most significant lesson to be learned from the history of social algorithms is that the pain they have wrought is not the work of one or two companies – the toxicity extends across an entire industry. These businesses live and die by how effective their algorithms are, and it is this investor-charged competition that has created an ‘algorithmic arms race’ playing out at the expense of children. Similarly, a time will come when the market is saturated with businesses that wield AI to great effect, and the question will no longer be “do you have AI?” but rather “how effective is your AI?”. When that happens, businesses and entire industries whose competitive advantage depends on how effective their AI is will be more likely to act in unethical or unsafe ways (i.e. developing unsafe AI) to maintain that advantage. At that point – as we saw with social algorithms – we could see large-scale development and use of AI that causes harm in ways people don’t expect.
How security leaders can make a positive impact right now:
#1 – Educate yourself on emerging frameworks & regulations.
It’s positive to see regulators and organizations moving quickly to introduce guidelines for the safe development of AI. Google, for example, has released a set of recommended practices for responsible AI, and these are excellent resources for CEOs, CTOs and CISOs to consume. If, as a security leader, you are not already reviewing these emerging regulations and frameworks and using them as a basis for discussion with stakeholders in your business and the broader industry, now is a great time to start.
#2 – Start thinking about how your vendors need to be more transparent on this.
When sourcing AI-based products, CISOs need to understand where and how AI is being used in their organizations: are vendors and providers being transparent about the use of AI in their products? What are vendors doing to ensure that equitability, interpretability, safety and privacy are baked into their AI by design? CISOs need to seek more transparency from vendors on these questions, and by demanding it they will send a signal that anything less is not commercially viable.
#3 – If your org builds AI, you need to have a meaningful ethics framework in place
At this point, the burden of responsibility sits with the software and tech companies actually building AI. Never before have safe-by-design principles been so sorely needed: as AI models are built, the core principles of equity, safety and privacy need to be seriously considered as part of their design, and AI and ML teams need to be trained on – and take ownership of – responsible AI principles and practice. Board members need to be asking their CEOs whether they have meaningful ethics frameworks that govern how AI operates within their technologies. Without a framework and a vision to guide the responsible development of AI, history will only repeat itself: lucrative – but toxic – business models will spawn, spurred on by investors, while in the background the lives and wellbeing of individuals are left by the wayside – just another job for the coroner to investigate.