Ethics in Technology and Cyber Security

Global connectivity is on a meteoric rise. Increasingly we see everyday items connected to the internet — connected refrigerators, baby monitors, washing machines, vehicles, medical devices, and even fish tanks. As innovative technology proliferates and evolves, it becomes increasingly embedded in our personal and working lives. However, this increased connectivity brings increased risk for Australian citizens and businesses. It is no secret that cyber security is, and will continue to be, the hot topic of 2019, with global cyber security spending expected to reach USD 124 billion (Gartner, 2018). The recent and highly publicised cyber attacks against Toyota and Landmark White serve as a stark reminder of the pervasive threat of cyber criminals. The picture becomes rather dispiriting when you delve into the statistics of data breaches.

However, data breaches are not the only concern arising from the proliferation of technology. Ethical issues, particularly concerning automation, artificial intelligence, and robotics, are now front of mind for the public and the media. Recent incidents have raised questions about ethics and responsibility, such as the death in March 2018 caused by an Uber self-driving car. Who is ultimately responsible? The manufacturer? The driver? The software programmers?

There is always a trade-off in technology: between accessibility and security, functionality and compliance, and convenience and privacy. Achieving a balance between these competing themes is essential to establishing trust and minimising the potentially harmful effects of the loss, theft, or destruction of sensitive data.

As we create and adopt technology, there needs to be ethically sound standards and regulations that govern the use of artificial intelligence and automation. This piece examines emerging innovative technology, ethical issues for the cyber security industry, the efficacy of current regulations and guidelines, and the options available for organisations who aim to embed ethical decision-making into their culture.

Ethical decision-making is about making the “right choice” and the reasoning behind those choices. The standard of ethics in an organisation is a direct reflection of the purpose of the organisation. Ethics forms the basis of organisational purpose by asking “Why do we do what we do?”. Ethics in cyber security is about which decisions align with our values and what is morally acceptable to both the data owner and the organisation. Ethical standards should also describe how to implement processes that ensure ethical decision-making.

Ethical issues are a daily occurrence in cyber security. Every organisation that stores personal and sensitive data has a responsibility to ensure that ethics are interwoven throughout the company, from the boardroom to the interns and grads. Ethical decision-making promotes transparency and honesty, and as this piece concludes, the pursuit of such laudable values leads to both greater trust in the marketplace and greater profits.

The Australian public, consumers, and the media expect organisations to protect the data they store and use, and to have effective frameworks in place for guiding ethical decisions concerning the confidentiality, integrity, and availability of that data. They expect organisations to abide by legislation and regulations as a minimum, but as we have seen in recent times, “legally right” does not always equate to “morally right”. The oft-competing demands of legislation and morals mean that the decision to abide by one or the other must take into account the organisation’s corporate social responsibilities and what aligns with both organisational and personal moral values.

Emerging technology and risks

The IBM/Ponemon Cost of a Data Breach study concluded that the average cost of a data breach is $3.86 million, and the likelihood of a recurring breach in the following two years is 27.9%. A data breach of more than 1 million records will cost approximately $40 million, and a loss of more than 50 million records will cost a staggering $350 million.

Australian small to medium business (SMB) owners have long laboured under the shared delusion that they “fly under the radar” of cyber criminals because they deem themselves too small to be a target. Recent statistics from Verizon show that this is no longer the case, with 43% of data breaches involving small business victims. Unfortunately, over 500,000 Australian small businesses fell victim to cyber crime in 2017, and research shows that over 60% of SMBs go bankrupt within six months of a data breach. It is no longer an option for Australian businesses, regardless of size, to do nothing and hope for the best.

Emerging technology, such as the Internet of Things (IoT), is designed to solve problems that affect us as humans and to make our lives easier and more enjoyable. However, that same cutting-edge technology can be used against us. While the employment of IoT yields many benefits across a vast range of industries, it is not without risks, including privacy and security concerns, liability around automated equipment and self-driving cars, and a lack of global regulations and standards. There are numerous case studies of IoT use gone wrong, from hacked vehicles and baby monitors to the destruction of nuclear reactors and the shutdown of some of the largest websites in the world via a DDoS attack launched by the Mirai botnet.


Artificial Intelligence (AI) has been used by cyber criminals to create something called a “deepfake”. A deepfake is a fabricated video, image, or audio message that looks or sounds incredibly realistic and fools the recipient into believing they are seeing or hearing a real person. This malicious use of AI takes phishing to a whole new level of sophistication and can be used to trick people into handing over passwords and sensitive data, to pay fraudulent invoices, or possibly for “catfishing”. Malicious actors could also use deepfakes to manipulate elections by posting a fake video of a government leader discussing inflammatory topics or renouncing their campaign. This type of “fake news” could cause electoral disruption or conflict with foreign governments.


It has been argued that it is quantum computing, not AI, that will define our future. Classical computing systems are binary, which means they work on bits that exist as either 0 or 1. Quantum computers are not limited to binary bits; they use quantum bits, or “qubits”. Qubits can be realised with atoms, ions, electrons, or photons, together with control mechanisms, working collaboratively as both memory and processor. Because a quantum computer is not limited to binary processing, it can hold multiple states at the same time, which gives it the potential to be vastly more powerful than even the most advanced computing systems available today. Cyber criminals could possibly harness the processing power of quantum computing to break advanced encryption algorithms.
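To make “multiple states at the same time” concrete, here is a minimal sketch in standard-library Python. It is a classical simulation, not real quantum hardware: a qubit’s state is a pair of complex amplitudes, and measuring it collapses the state to 0 or 1 probabilistically.

```python
import random

# A (classically simulated) qubit: a pair of complex amplitudes (alpha, beta)
# for the basis states |0> and |1>, with |alpha|^2 + |beta|^2 = 1.

def measure(alpha: complex, beta: complex) -> int:
    """Measuring collapses the state: 0 with probability |alpha|^2, else 1."""
    return 0 if random.random() < abs(alpha) ** 2 else 1

# An equal superposition: the qubit is "both" 0 and 1 until it is measured.
alpha = beta = 1 / 2 ** 0.5
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(alpha, beta)] += 1
# Roughly half of the measurements come out 0 and half come out 1.
```

The power of a real quantum computer comes from the fact that n qubits hold 2^n amplitudes at once, something a classical simulation like this cannot scale to.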


Cloud computing is leading the transformation of where businesses and individuals store and use their data. As the volume of cloud usage grows, so does the amount of sensitive data stored in the cloud, which is potentially exposed to risk stemming from cloud-specific security issues:

  • Malware injections are malicious code that is injected into a cloud computing repository and enables malicious actors to gain access to any data that is uploaded to that repository. This type of malware is particularly challenging to identify without appropriate detection systems.
  • APIs (Application Programming Interfaces) assist organisations by enabling them to create customised cloud solutions that meet their data and operational requirements. Improperly secured APIs are a commonly-used entry point for cyber criminals, leading to lost or stolen data.
  • Just as with physical servers, accessing cloud databases requires login details, which makes usernames and passwords a valuable target for cyber criminals. Similar to “deepfakes”, phishing emails are a common method criminals use to gain access to cloud login credentials.
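One common mitigation for the API risk above is to require every request to carry an HMAC signature computed with a shared secret. The sketch below is illustrative (the scheme and names are assumptions, not any particular cloud provider’s API):

```python
import hashlib
import hmac
import time

# Hypothetical request-signing scheme: the client signs the method, path,
# timestamp, and body with a shared secret, so an unauthenticated caller
# cannot forge valid requests, and old requests cannot be replayed.

def sign(key: bytes, method: str, path: str, timestamp: str, body: bytes) -> str:
    msg = b"\n".join([method.encode(), path.encode(), timestamp.encode(), body])
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key: bytes, method: str, path: str, timestamp: str,
           body: bytes, signature: str, max_skew: int = 300) -> bool:
    if abs(time.time() - float(timestamp)) > max_skew:
        return False  # stale timestamp: reject possible replays
    expected = sign(key, method, path, timestamp, body)
    return hmac.compare_digest(expected, signature)  # constant-time comparison
```

Tampering with any signed field, or replaying a request outside the time window, causes verification to fail.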

Ethical issues and challenges for cyber security

The cyber landscape evolves continuously, as do the threats that organisations and governments face. This requires an evolving and equally agile workforce. However, there is a widening gap between the demand for and supply of qualified cyber security professionals. This often leads to the rushed recruitment and onboarding of new cyber security staff and, potentially, a lack of guidance for new recruits on ethical decision-making and expectations. When recruits are forced to rely on their own standards of morality, differing standards of right and wrong arise in the workplace, which ultimately leads to mistakes.

When an organisation sets and follows ethical standards, or an industry abides by regulation that enforces ethical behaviour, all relevant parties are held to the same standard and have a clear understanding of their ethical responsibilities. The C-suite and the board must be seen to be leading by example and engendering a culture of high standards of ethical decision-making.

If a company’s data is compromised, it may face lawsuits, reputational damage, and questions about its ethical standards. Delaying a public announcement can compound these consequences. Those responsible for overseeing information security practices within organisations, such as CISOs and supporting management, must ensure a fit-for-purpose communications policy is implemented to guide incident response procedures.

There are a number of ethical considerations regarding the impact of technology and cyber security. One is the privacy of a user’s data. Organisations need to consider whether they have appropriate controls and processes in place to safeguard the privacy of their customers and the integrity of their data. A key question to ask is: what would the consequences for the customer be if this information were compromised?

Another consideration is the customer’s right to their information. This is particularly important when deciding how long user data should be stored. Should it be deleted immediately after use? If it is kept, how will it be secured? An even thornier question is what happens to the data when the user dies. Should their family be able to gain access to it?

A customer consenting to the use of their data is a critical consideration. It is no longer sufficient to bury users’ rights and the company’s privacy policy in tiny script at the bottom of contracts and webpages. Informed consent requires easy-to-access, easy-to-read language, so that users can agree without needing a law degree.

Bias in algorithms and AI is increasingly a topic of consternation for developers. An algorithm used in American correctional facilities to determine the likelihood of recidivism, i.e. a prisoner’s likelihood of re-offending, has been used to decide the outcome of bail and release hearings. It was discovered that this algorithm, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), was built on biased data and was less likely to look favourably upon African Americans or people from low socioeconomic neighbourhoods.

An Australia-specific example of an ethical issue concerning cyber security is currently at play. The Assistance and Access Bill, passed in 2018, allows Australian law enforcement and intelligence agencies to demand that technology manufacturers and providers give access to encrypted communications. The law stipulates that a technology provider must create a “back door”, or access point, into their products so that government agencies can access encrypted communications. This forced creation of back doors into technology created by Australian organisations raises various ethical issues, not least the privacy of users’ data. Technology companies, especially those who invest heavily in encryption products, may be forced to move their manufacturing operations internationally. The legislatively mandated “weakness” will likely undermine users’ trust in their products, with a profound effect on local research and development initiatives and manufacturing through lost jobs and reduced revenue from the export of technology products.

Ethical case studies

Two case studies come to mind that reflect opposite ends of the spectrum of ethical decision-making in response to cyber security incidents, and the effects the wrong decision can have on an organisation.

Yahoo was in the middle of being acquired by Verizon in 2017 when it disclosed three data breaches, dating from 2013 and 2014, that affected over one billion users. Unfortunately, these data breaches were not disclosed until late 2016, after the original Verizon acquisition deal had been agreed to but not yet completed. The original deal between Verizon and Yahoo was worth USD 4.8 billion; after the data breaches were disclosed, the purchase price was slashed by an incredible USD 350 million. The Securities and Exchange Commission (SEC) also investigated Yahoo for waiting too long to notify victims of the data breaches, and for whether Yahoo violated securities legislation by not providing the SEC with documents related to the breaches. Yahoo continues to be liable for half of any debts incurred from third-party litigation and regulatory fines.

The Yahoo breaches, and the company’s lack of ethical behaviour in notifying victims and regulatory bodies, are an apt example of the damage that can occur when behaviour is not governed by ethical principles.

On the other end of the spectrum of ethical decision-making sits the Australian Red Cross. The Red Cross suffered a data breach of over 550,000 blood donors’ details, including name, address, date of birth, gender, and information regarding sexual history. The data was inadvertently published by a third-party contractor to an online public-facing application form.

The Red Cross immediately disclosed the data breach to affected donors and to the Australian Government’s CERT (Computer Emergency Response Team). Not only did the Red Cross avoid any fines for the data breach, but it also received an extraordinary commendation for its response efforts from the Australian Information Commissioner, Timothy Pilgrim. The assurance the Red Cross provided donors enhanced its reputation for transparency and trust within the Australian community.

Both of the above examples highlight the importance of adequate incident response procedures that align with the values of the organisation. All organisations should seek to establish trust between themselves and their customers.



An organisation should implement a decision-making framework that aligns with the values and purpose of the company. The framework should balance organisational risk and cyber security best practice in a well-defined and replicable manner that meets the needs of the business along with regulatory and legislative obligations, and should ensure that leaders have access to accurate information appropriate to ethical decision-making processes.

Ethics and cyber security go hand-in-hand. Organisations must establish their purpose and values and continuously monitor the behaviour of their staff against those values. Customers expect honesty and transparency, and as detailed in this piece, the results can be devastating when ethical behaviour is ignored. The protection of data and the prevention of harm should be the primary focus of all ethical and cyber decision-making.

The following steps should be established as a minimum standard:

  • Every organisation should consider the data and assets they own and identify what is critical to their business operations and their consumers/customers. It is impossible to protect everything at all times, and there is a limit to the capital available for cyber security budgets. The identification of your critical data and assets, your “crown jewels”, will enable you to implement appropriate security controls where it matters most.
  • Invest in cyber security awareness training for staff. The majority of data breaches occur due to human error, such as clicking on phishing emails or sending information to the wrong recipient. Promoting a risk-aware culture and ensuring your employees are capable of responding to cyber threats is a cost-effective method of reducing your risk.
  • The theft of credentials can compromise an entire organisation’s network. Multi-factor authentication requires the user to enter a password and then another form of credential, such as a PIN sent by text message, a fingerprint scan, or a Universal 2nd Factor (U2F) security key. When multi-factor authentication is implemented, it is substantially harder for a cyber criminal to gain access to credentials and networks, because they must also possess the second authentication factor.
  • Next, and with equal importance, back up your data. Ransomware is a type of malware that blocks access to your data or systems until a payment is made. Many organisations choose to pay the ransom because they do not have their data backed up; without backups, they must choose between making a payment with no guarantee their data will be returned and losing everything.
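To illustrate the multi-factor authentication point above: the one-time codes produced by common authenticator apps follow the TOTP scheme from RFC 6238. A minimal sketch in standard-library Python:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238), HMAC-SHA1 variant."""
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's authenticator app share `secret`; a criminal who
# steals only the password still cannot produce the current code.
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, a stolen password alone is not enough to log in.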

It is not all “doom and gloom”; there is an upside. Organisations that invest in cyber security and maintain high standards of ethical decision-making strengthen their resilience, decrease the likelihood of a successful attack, and subsequently enjoy a higher level of trust with their consumers. The focus on consumer trust is now de rigueur in Australia, particularly after the Hayne Royal Commission. Research shows that over 50% of customers will pay more for a company’s services and products if they trust it.

Essential to whether a consumer trusts an organisation is transparency about its cyber security and data use. Through the timely disclosure of data breaches, the design of fit-for-purpose security controls, and informed consent for the use of users’ data, organisations show they are transparent and thereby earn a greater level of trust. Australian companies need to make cyber security, ethical decision-making, and data privacy a priority, and demonstrate their commitment to the trust of their stakeholders, to remain competitive in the digital age.

Shannon Sedgwick GAICD

When computer hackers turn out to be the good guys – UNSW Business Think

I contributed to this article for the University of NSW (UNSW) Business Think journal.

The popular image of a computer hacker is a hoodie-wearing night owl, a ‘black hat’ who remotely breaks into an organisation’s systems, intent on mischief, financial gain, or political exposure.

But while wearing a hoodie and operating at night may still be de rigueur, recent years have seen the emergence of a new breed – ‘white hat’ hackers, who do what they do legally and with an organisation’s blessing, with some getting paid as much as $350,000 a year to do so.

Mortada Al-Banna, a doctoral researcher in the school of computer science and engineering at UNSW, and his academic colleagues have investigated this phenomenon of crowdsourced vulnerability discovery, interviewing 36 key informants from various organisations about the challenges and benefits of inviting outsiders to test their computer systems in this way.

“I’m interested in how externally generated events affect the security posture of an organisation, and crowdsourcing security is one of these,” Al-Banna says.

While the first award of a ‘bug bounty’ (a payment for finding and reporting a bug) was by web browser company Netscape as far back as 1995, the wider industry remained sceptical.

But in 2017, this attitude was transformed in remarkable fashion when the US Department of Defense announced via the website HackerOne that it wanted people to “hack the Pentagon”.

“This has motivated a lot of companies to get involved,” says Al-Banna. “The Department of Defense started small and then expanded, and the US government is currently considering expanding the program throughout all areas of their operation.”

‘Humans are actually better at this. They are more creative and look for the unexpected’

Test your system
Al-Banna’s research has revealed a number of challenges and reservations that organisations have about crowdsourced vulnerability discovery, including the lack of managerial expertise to run a successful bug bounty program, the possibility of low-quality submissions and cost escalations, and a general distrust of ‘white hat’ hackers.

“If companies want to run a bug bounty, but want to minimise the problems, there are techniques to help them do this,” says Al-Banna.

But while it’s possible to automate, say, the examining of reports from bug hunters to exclude duplication or out-of-scope issues, actually automating the process of looking for bugs is more difficult.

“The current automated tools for looking for vulnerabilities are actually more ‘noisy’ than the crowd,” says Al-Banna.

“Humans are actually better at this. They are more creative, and look for the unexpected.”
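The automatable part Al-Banna describes, filtering duplicate or out-of-scope submissions before a human looks at them, can be sketched in a few lines. This is a hypothetical helper; the field names are illustrative, not from any real bug bounty platform.

```python
import hashlib

def normalise(title: str) -> str:
    """Lower-case and collapse whitespace so near-identical titles match."""
    return " ".join(title.lower().split())

def triage(reports, in_scope_assets):
    """Drop out-of-scope reports and duplicates; keep the rest in order."""
    seen = set()
    accepted = []
    for report in reports:
        if report["asset"] not in in_scope_assets:
            continue  # out of scope for this bounty
        key = hashlib.sha256(normalise(report["title"]).encode()).hexdigest()
        if key in seen:
            continue  # duplicate of an earlier submission
        seen.add(key)
        accepted.append(report)
    return accepted

reports = [
    {"title": "SQL injection in /login", "asset": "webapp"},
    {"title": "  SQL Injection in /login ", "asset": "webapp"},  # duplicate
    {"title": "Open port on mail server", "asset": "mail"},      # out of scope
]
accepted = triage(reports, {"webapp"})  # only the first report survives
```

Judging severity and validity of the surviving reports is exactly the creative work Al-Banna argues still needs humans.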

So how can organisations make use of this research? Al-Banna’s advice is that businesses need to do their homework first.

“Don’t just jump straight into a bug bounty. You need to test your system yourself with [network] availability tools – bug hunters will use these themselves – before leveraging the crowd for problems that require more creative input.

“In the first instance, limit the scope and only invite in a small number of bug hunters. But if organisations keep it this way forever, they will not reap the benefit of crowdsourcing,” says Al-Banna.

Adrenaline rush
Despite being only 22 years of age, Shubham Shah is a veteran of the world of crowdsourced vulnerability discovery. His childhood interest in computer gaming and ‘game hacking’ (modifying games) soon escalated into the world of computer security. By the age of 13, he was hacking web applications.

Shah’s skills led him to work for professional services multinational EY, and then as a consultant for Bishop Fox, doing work for Fortune 500 companies. But he soon found he could make more money pursuing bug bounties, which he has done exclusively for the past year.

‘They can often show you where you are most vulnerable more effectively than your security team could identify’

“My first bug bounty was from PayPal. It took me eight hours to get into an internal network that they owned, and they paid me US$1500. If you’re good at it, the financial incentive is very high,” Shah says.

“When you find a big vulnerability in a big company, there’s an adrenaline rush. You feel you’ve achieved something big – like running a marathon. But you could spend many hours finding nothing, and there’s no model for predicting what money you’ll make.”

Shah envisages a wider move towards a crowdsourced economy, and not just in computer security – he cites the example of design consultancy 99 Designs, which has been operating a similar model in its industry.

“Traditional consulting, where companies charge even if they ultimately do nothing, involves a waste of resources,” he says. “It’s not based on results.”

During the next five to 10 years, Shah believes that low-level bug hunting will become automated – which will focus the attention of the crowd on being more creative, and searching for more serious vulnerabilities.

“We’re currently paying the crowd to do what is in effect manual labour. We’re encouraging ‘noise’, and it’s a significant effort for a company to run a bounty,” Shah says.

“The only way to reduce the noise is to automate what can be automated.”

Establishing parameters
Shannon Sedgwick, a senior manager for cyber risk at Deloitte Canberra, has experience of employing ‘white hat’ hackers and observing the benefits they can bring to an organisation.

“In my experience, the industry is quite open about engaging with ‘white hats’,” he says. “Google paid out US$3 million in bounties in 2017, and some individual bounties can be as much as $100,000.”

Sedgwick believes that, even with the large budgets available to companies such as Google or Apple, ‘white hat’ hackers can be more efficient and cost-effective than companies performing the same tasks with internal staff.

“They can often show you where you are most vulnerable more effectively than your security team could identify. A plan is only effective if you’ve tested that plan, and this is especially true for security systems.”

Another advantage for companies is that ‘white hat’ penetration testing typically occurs outside of business hours, thus minimising potential disruptions to their business operations.

If a company is considering offering bounties for the first time, Sedgwick suggests trialling the process internally first and then, when approaching the market, establishing strict NDAs [non-disclosure agreements] and parameters of what is under review and cannot be exploited.

“Don’t release all of your applications and systems for testing at once, and engage an experienced specialist security company to oversee the process,” he says.

For Sedgwick, one of the challenges for companies engaging with ‘white hat’ hackers is the risk that some can edge towards becoming ‘grey hats’, who identify vulnerabilities but don’t report them, going on to exploit the vulnerabilities for financial gain or selling them to interested parties on the dark web.

“If ‘white hats’ feel they’ve been treated poorly by a company – for example, being underpaid, or not appreciated – then they can cause problems.”

But importantly for Sedgwick, the boards of organisations have to understand that information security is a business risk, not just a technology risk.

“They need to identify their critical data and assets, and direct appropriate resources to those as a priority,” he says.

“You need to consider the big picture. You can patch vulnerabilities all day, but if a company’s governance and security strategy are not effective, then patching vulnerabilities is not going to do the trick.”