I contributed to this article for the University of NSW (UNSW) BusinessThink journal: https://www.businessthink.unsw.edu.au/Pages/When-computer-hackers-turn-out-to-be-the-good-guys.aspx
The popular image of a computer hacker is a hoodie-wearing night owl, a ‘black hat’ who remotely breaks into an organisation’s systems, intent on mischief, financial gain, or political exposure.
But while wearing a hoodie and operating at night may still be de rigueur, recent years have seen the emergence of a new breed – ‘white hat’ hackers, who do what they do legally and with an organisation’s blessing, with some getting paid as much as $350,000 a year to do so.
Mortada Al-Banna, a doctoral researcher in the School of Computer Science and Engineering at UNSW, and his academic colleagues have investigated this phenomenon of crowdsourced vulnerability discovery, interviewing 36 key informants from various organisations about the challenges and benefits of inviting outsiders to test their computer systems in this way.
“I’m interested in how externally generated events affect the security posture of an organisation, and crowdsourcing security is one of these,” Al-Banna says.
While the first award of a ‘bug bounty’ (a payment for finding and reporting a bug) was by web browser company Netscape as far back as 1995, the wider industry remained sceptical.
But in 2016, this attitude was transformed in remarkable fashion when the US Department of Defense announced via the platform HackerOne that it wanted people to “hack the Pentagon”.
“This has motivated a lot of companies to get involved,” says Al-Banna. “The Department of Defense started small and then expanded, and the US government is currently considering expanding the program throughout all areas of their operation.”
‘Humans are actually better at this. They are more creative and look for the unexpected’
Test your system
Al-Banna’s research has revealed a number of challenges and reservations that organisations have about crowdsourced vulnerability discovery, including the lack of managerial expertise to run a successful bug bounty program, the possibility of low-quality submissions and cost escalations, and a general distrust of ‘white hat’ hackers.
“If companies want to run a bug bounty, but want to minimise the problems, there are techniques to help them do this,” says Al-Banna.
But while it’s possible to automate, say, the screening of bug hunters’ reports to exclude duplicates or out-of-scope issues, automating the hunt for the bugs themselves is far more difficult.
“The current automated tools for looking for vulnerabilities are actually more ‘noisy’ than the crowd,” says Al-Banna.
“Humans are actually better at this. They are more creative, and look for the unexpected.”
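The report-screening step described above can be automated along these lines. This is a minimal sketch in Python; the report fields (`host`, `endpoint`, `bug_class`) and the scope list are illustrative assumptions, not part of any real platform’s API:

```python
# Hypothetical sketch of automated bug-report triage: filtering out
# out-of-scope targets and near-duplicate submissions before a human
# reviews them. Field names and scope rules are illustrative only.

IN_SCOPE_DOMAINS = {"app.example.com", "api.example.com"}

def triage(reports):
    """Partition raw submissions into (actionable, rejected)."""
    seen = set()            # fingerprints of reports already accepted
    actionable, rejected = [], []
    for r in reports:
        # Reject anything targeting a host outside the programme's scope.
        if r["host"] not in IN_SCOPE_DOMAINS:
            rejected.append((r, "out of scope"))
            continue
        # A crude duplicate fingerprint: same host, endpoint and bug class.
        fingerprint = (r["host"], r["endpoint"], r["bug_class"])
        if fingerprint in seen:
            rejected.append((r, "duplicate"))
            continue
        seen.add(fingerprint)
        actionable.append(r)
    return actionable, rejected

reports = [
    {"host": "app.example.com", "endpoint": "/login", "bug_class": "xss"},
    {"host": "app.example.com", "endpoint": "/login", "bug_class": "xss"},
    {"host": "blog.example.com", "endpoint": "/", "bug_class": "sqli"},
]
actionable, rejected = triage(reports)
print(len(actionable), len(rejected))  # 1 actionable, 2 rejected
```

Real bounty platforms use far richer similarity matching, but even a crude filter like this shows why triage automates readily while the creative search for bugs does not.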
So how can organisations make use of this research? Al-Banna’s advice is that businesses need to do their homework first.
“Don’t just jump straight into a bug bounty. You need to test your system yourself with [network] availability tools – bug hunters will use these themselves – before leveraging the crowd for problems that require more creative input.
“In the first instance, limit the scope and only invite in a small number of bug hunters. But if organisations keep it this way forever, they will not reap the benefit of crowdsourcing,” says Al-Banna.
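The kind of basic availability check Al-Banna recommends running first can be sketched with nothing more than TCP connection attempts. The host and port list below are placeholders, and probing like this should only ever be run against systems you own or are authorised to test:

```python
# A minimal availability check of the kind bug hunters run first:
# probing which common service ports on a host accept TCP connections.
# The target host and port list here are placeholders.
import socket

COMMON_PORTS = [22, 80, 443, 8080]

def open_ports(host, ports, timeout=1.0):
    """Return the subset of `ports` that accept a TCP connection."""
    reachable = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return reachable

# Only scan systems you own or are authorised to test.
print(open_ports("127.0.0.1", COMMON_PORTS))
```

Dedicated scanners go much further, but sweeping your own perimeter first means the paid crowd is spent on the creative findings that tools miss, not on rediscovering the obvious.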
Despite being only 22 years of age, Shubham Shah is a veteran of the world of crowdsourced vulnerability discovery. His childhood interest in computer gaming and ‘game hacking’ (modifying games) soon escalated into the world of computer security. By the age of 13, he was hacking web applications.
Shah’s skills led him to work for professional services multinational EY, and then as a consultant for Bishop Fox, doing work for Fortune 500 companies. But he soon found he could make more money pursuing bug bounties, which he has done exclusively for the past year.
‘They can often show you where you are most vulnerable more effectively than your security team could’
“My first bug bounty was from PayPal. It took me eight hours to get into an internal network that they owned, and they paid me US$1500. If you’re good at it, the financial incentive is very high,” Shah says.
“When you find a big vulnerability in a big company, there’s an adrenaline rush. You feel you’ve achieved something big – like running a marathon. But you could spend many hours finding nothing, and there’s no model for predicting what money you’ll make.”
Shah envisages a wider move towards a crowdsourced economy, and not just in computer security – he cites the example of design marketplace 99designs, which has been operating a similar model in its industry.
“Traditional consulting, where companies charge even if they ultimately do nothing, involves a waste of resources,” he says. “It’s not based on results.”
During the next five to 10 years, Shah believes that low-level bug hunting will become automated – which will focus the attention of the crowd on being more creative, and searching for more serious vulnerabilities.
“We’re currently paying the crowd to do what is in effect manual labour. We’re encouraging ‘noise’, and it’s a significant effort for a company to run a bounty,” Shah says.
“The only way to reduce the noise is to automate what can be automated.”
Shannon Sedgwick, a senior manager for cyber risk at Deloitte Canberra, has experience of employing ‘white hat’ hackers and observing the benefits they can bring to an organisation.
“In my experience, the industry is quite open about engaging with ‘white hats’,” he says. “Google paid out US$3 million in bounties in 2017, and some individual bounties can be as much as $100,000.”
Sedgwick believes that, even with the large budgets available to companies such as Google or Apple, ‘white hat’ hackers can be more efficient and cost-effective than companies performing the same tasks with internal staff.
“They can often show you where you are most vulnerable more effectively than your security team could. A plan is only effective if you’ve tested that plan, and this is especially true for security systems.”
Another advantage for companies is that ‘white hat’ penetration testing typically occurs outside of business hours, thus minimising potential disruptions to their business operations.
If a company is considering offering bounties for the first time, Sedgwick suggests trialling the process internally first and then, when approaching the market, establishing strict NDAs [non-disclosure agreements] and clear parameters for what is under review and what cannot be exploited.
“Don’t release all of your applications and systems for testing at once, and engage an experienced specialist security company to oversee the process,” he says.
For Sedgwick, one of the challenges for companies engaging with ‘white hat’ hackers is the risk that some edge towards becoming ‘grey hats’: hackers who identify vulnerabilities but don’t report them, instead exploiting them for financial gain or selling them to interested parties on the dark web.
“If ‘white hats’ feel they’ve been treated poorly by a company – for example, being underpaid, or not appreciated – then they can cause problems.”
But importantly for Sedgwick, the boards of organisations have to understand that information security is a business risk, not just a technology risk.
“They need to identify their critical data and assets, and direct appropriate resources to those as a priority,” he says.
“You need to consider the big picture. You can patch vulnerabilities all day, but if a company’s governance and security strategy are not effective, then patching vulnerabilities is not going to do the trick.”