Global connectivity is on a meteoric rise. Increasingly, we see everyday items connected to the internet — connected refrigerators, baby monitors, washing machines, vehicles, medical devices, and even fish tanks. As innovative technology proliferates and evolves, it becomes increasingly embedded in our personal and working lives. However, this increased connectivity brings increased risk for Australian citizens and businesses. It is no secret that cyber security is, and will continue to be, the hot topic of 2019, with global cyber security spending expected to reach USD 124 billion (Gartner, 2018). The recent and highly publicised cyber attacks against Toyota and Landmark White serve as a stark reminder of the pervasive threat of cyber criminals. The picture becomes rather dispiriting when you delve into the statistics on data breaches.
However, data breaches are not the only concern arising from the proliferation of technology. Ethical issues, particularly concerning automation, artificial intelligence, and robotics, are now front of mind for the public and media. Recent incidents, such as the death of a pedestrian struck by an Uber self-driving car in March 2018, have raised questions of ethics and responsibility. Who is ultimately responsible? The manufacturer? The driver? The software programmers?
There is always a trade-off in technology. The trade-off lies in achieving a balance between accessibility and security, functionality and compliance, and convenience and privacy. Striking this balance is essential to establish trust and to minimise the potentially harmful effects of the loss, theft, or destruction of sensitive data.
As we create and adopt technology, there needs to be ethically sound standards and regulations that govern the use of artificial intelligence and automation. This piece examines emerging innovative technology, ethical issues for the cyber security industry, the efficacy of current regulations and guidelines, and the options available for organisations who aim to embed ethical decision-making into their culture.
Ethical decision-making is about making the “right choice” and the reasoning behind those choices. The standard of ethics in an organisation is a direct reflection of the purpose of the organisation. Ethics forms the basis of the organisational purpose by asking “Why do we do what we do?”. Ethics in cyber security is about which decisions align with our values and what is morally acceptable for both the data owner and the organisation. Ethical standards should also describe how to implement processes that ensure ethical decision-making.
Ethical issues are a daily occurrence in cyber security. Every organisation that stores personal and sensitive data has a responsibility to ensure that ethics are interwoven throughout the company, from the boardroom to the interns and grads. Ethical decision-making promotes transparency and honesty, and as this piece concludes, the pursuit of such laudable values leads to both greater trust in the marketplace and greater profits.
The Australian public, consumers, and the media expect organisations to protect the data they store and use, and to have effective frameworks in place for guiding ethical decisions concerning the confidentiality, integrity, and availability of that data. They expect organisations to abide by legislation and regulations as a minimum, but as we have seen in recent times, “legally right” does not always equate to “morally right”. Because legislation and morals are oft-competing values, the decision to abide by one or the other must take into account the organisation’s corporate social responsibilities and what aligns with both organisational and personal moral values.
Emerging technology and risks
The IBM/Ponemon Cost of a Data Breach study concluded that the average cost of a data breach is USD 3.86 million, and that the likelihood of a recurring breach in the following two years is 27.9%. A data breach of more than 1 million records will cost approximately USD 40 million, and a loss of more than 50 million records will cost a staggering USD 350 million.
Australian small to medium business (SMB) owners have long laboured under the shared delusion that they “fly under the radar” of cyber criminals because they deem themselves too small to be a target. Recent statistics from Verizon show that this is no longer the case, with 43% of data breaches involving small business victims. Unfortunately, over 500,000 Australian small businesses fell victim to cyber crime in 2017, and research shows that over 60% of SMBs go bankrupt within six months of a data breach. It is no longer an option for Australian businesses, regardless of size, to do nothing and hope for the best.
Emerging technology, such as the Internet of Things (IoT), is designed to solve problems that affect us as humans and to make our lives easier and more enjoyable. However, that same cutting-edge technology can be used against us. While the employment of IoT yields many benefits across a vast range of industries, it is not without risks, including privacy and security concerns, liability around automated equipment and self-driving cars, and a lack of global regulations and standards. There are numerous case studies of IoT use gone wrong, from hacked vehicles and baby monitors to the sabotage of nuclear centrifuges and the shutdown of some of the world’s largest websites via a DDoS attack launched by the Mirai botnet.
Artificial Intelligence (AI) has been used by cyber criminals to create something called a “deepfake”. A deepfake is a fake video, image, or audio message that looks incredibly realistic and fools the recipient into believing they are seeing or hearing a real person. This malicious use of AI takes phishing to a whole new level of sophistication and can be used to trick people into handing over passwords and sensitive data, to pay fraudulent invoices, or even for “catfishing”. Malicious actors could also use deepfakes to manipulate elections by posting a fake video of a government leader discussing inflammatory topics or renouncing their campaign. This type of “fake news” could cause electoral disruption or conflict with foreign governments.
It has been argued that it is quantum computing, not AI, that will define our future. Classical computing systems are binary, which means they work on bits that exist as either 0 or 1. Quantum computers are not limited to binary bits. They use something called quantum bits, or “qubits”. Qubits can be realised by atoms, ions, electrons, or photons, working together with control mechanisms as both memory and processor. Because a quantum computer is not limited to binary processing, it can hold multiple states at the same time, giving it the potential to be exponentially more powerful than even the most advanced classical systems available today. Cyber criminals could one day harness the processing power of quantum computing to break advanced encryption algorithms.
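To give a feel for that exponential scaling (a simplified illustration, not a model of real quantum hardware): describing a register of n qubits classically requires 2^n complex amplitudes, so the cost of simulating one doubles with every qubit added.

```python
# Illustrative only: a register of n qubits in superposition is described
# by 2**n complex amplitudes, so classical simulation cost doubles with
# every qubit added.
def state_vector_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits."""
    return 2 ** n_qubits

for n in (1, 10, 50):
    print(f"{n} qubits -> {state_vector_size(n):,} amplitudes")
```

At 50 qubits the state vector already exceeds 10^15 amplitudes, which is why even modest quantum machines are so difficult to simulate classically.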
Cloud computing is leading the transformation of where businesses and individuals store and use their data. As the volume of cloud usage grows, so does the amount of sensitive data stored in the cloud, which is potentially exposed to risk stemming from cloud-specific security issues:
- Malware injections are malicious code that is injected into a cloud computing repository and enables malicious actors to gain access to any data that is uploaded to that repository. This type of malware is particularly challenging to identify without appropriate detection systems.
- APIs (Application Programming Interfaces) assist organisations by enabling them to create customised cloud solutions that meet their data and operational requirements. Improperly secured APIs are a commonly-used entry point for cyber criminals, leading to lost or stolen data.
- Just as with physical servers, accessing cloud databases requires login details, which makes usernames and passwords a valuable target for cyber criminals. As with deepfakes, phishing emails are a common method criminals use to gain access to cloud login credentials.
Ethical issues and challenges for cyber security
The cyber landscape evolves continuously, as do the threats that organisations and governments face. This requires an evolving and equally agile workforce. However, there is a widening gap between the demand for and supply of qualified cyber security professionals. This often leads to the rushed recruitment and onboarding of new cyber security staff and, potentially, a lack of guidance for new recruits on ethical decision-making and expectations. When recruits are forced to rely on their own standards of morality, differing standards of right and wrong arise in the workplace, which ultimately leads to mistakes.
When an organisation sets and follows ethical standards, or an industry abides by regulation that enforces ethical behaviour, it ensures that all relevant parties are held to the same standard and have a clear understanding of their ethical responsibilities. The C-suite and the board must be seen to lead by example, engendering a culture of high standards of ethical decision-making.
If a company’s data is compromised, it may face lawsuits, reputational damage, and questions about its ethical standards. Delaying a public announcement can compound these consequences. Those responsible for overseeing information security practices within organisations, such as CISOs and supporting management, must ensure a fit-for-purpose communications policy is implemented to guide incident response procedures.
There are a number of ethical considerations regarding the impact of technology and cyber security. One is the privacy of a user’s data. Organisations need to consider whether they have appropriate controls and processes in place to safeguard the integrity and privacy of their customers and their data. A key question to ask would be: what would the result to the customer be if this information was compromised?
Another consideration is the customer’s right to their information. This is particularly important when considering how long user data should be stored. Should it be deleted immediately after its use? If it is kept, how will it be secured? An even thornier question is what happens to the data when the user dies? Should their family be able to gain access to it?
The consideration of bias in algorithms and AI is increasingly a topic of consternation for developers. Algorithms used in correctional facilities to determine the likelihood of recidivism, i.e. a prisoner’s likelihood to re-offend, have been used to decide the outcome of bail and release hearings in America. One such algorithm, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), was discovered to contain biased data and was less likely to look favourably upon African Americans or people from low socioeconomic neighbourhoods.
An Australia-specific example of an ethical issue concerning cyber security is currently at play. The Assistance and Access Bill, passed in 2018, allows Australian law enforcement and intelligence agencies to demand that technology manufacturers and providers give access to encrypted communications. The law stipulates that a technology provider must create a “back door”, or access point, into their products so that government agencies can access encrypted communications. This forced creation of back doors into technology created by Australian organisations raises various ethical issues, not least the privacy of users’ data. Technology companies, especially those who invest heavily in encryption products, may be forced to move their manufacturing operations internationally. The legislatively mandated “weakness” will likely undermine users’ trust in their products, with a profound effect on local research and development initiatives and manufacturing through a reduction in jobs and in revenue from the export of technology products.
Ethical case studies
Two case studies come to mind that reflect opposite ends of the spectrum of ethical decision-making in response to cyber security incidents, and the effects the wrong decision can have on an organisation.
Yahoo was in the process of being acquired by Verizon when it disclosed, in late 2016, data breaches that had occurred in 2013 and 2014 and affected over one billion users. Unfortunately, disclosure came only after the original acquisition deal had been agreed to, but not yet paid for. The original deal between Verizon and Yahoo was worth USD 4.8 billion, and after the data breaches were disclosed, the deal’s value was slashed by an incredible USD 352 million. The Securities and Exchange Commission (SEC) also investigated Yahoo for waiting too long to notify victims of the data breaches, and for whether Yahoo violated SEC securities legislation by not providing documents related to the breaches. Yahoo continues to be liable for half (50 percent) of any debts incurred from third-party litigation and regulatory fines.
Yahoo’s breaches, and the company’s lack of ethical behaviour concerning the notification of victims and regulatory bodies, are an apt example of the damage that can occur when behaviours are not governed by ethical principles.
On the other end of the spectrum of ethical decision-making sits the Australian Red Cross. The Red Cross suffered a data breach of over 550,000 blood donors’ details, including name, address, date of birth, gender, and information regarding sexual history. The data was inadvertently published by a third-party contractor to an online public-facing application form.
The Red Cross immediately disclosed the data breach to affected donors and to the Australian Government CERT (Computer Emergency Response Team). Not only did the Red Cross avoid any fines for the data breach, but it also received an extraordinary commendation for its response efforts from the Australian Information Commissioner, Timothy Pilgrim. The assurance that the Red Cross provided donors served to enhance its reputation for transparency and trust within the Australian community.
Both of the above examples highlight the importance of adequate incident response procedures that align with the values of the organisation. All organisations should seek to establish trust between themselves and their customers.
An organisation should implement a decision-making framework that aligns with the values and purpose of the company. The framework should balance organisational risk against best practice for cyber security in a well-defined and replicable manner that meets the needs of the business along with regulatory and legislative obligations, and should ensure that leaders have access to accurate information appropriate to ethical decision-making processes.
Ethics and cyber security go hand-in-hand. Organisations must establish their purpose and values and continuously monitor the behaviour of their staff in relation to those values. Customers expect honesty and transparency, and as detailed in this piece, the results can be devastating when ethical behaviour is ignored. The protection of data and prevention of harm should be the primary focus in all ethical and cyber decision-making.
The following steps should be established as a minimum standard:
- Every organisation should consider the data and assets they own and identify what is critical to their business operations and their consumers/customers. It is impossible to protect everything at all times, and there is a limit to the capital available for cyber security budgets. The identification of your critical data and assets, your “crown jewels”, will enable you to implement appropriate security controls where it matters most.
- Invest in cyber security awareness training for staff. The majority of data breaches occur due to human error, such as clicking on phishing emails or sending information to the wrong recipient. Promoting a risk-aware culture and ensuring your employees are capable of responding to cyber threats is a cost-effective method of reducing your risk.
- The theft of credentials can compromise an entire organisation’s network. Multi-factor authentication requires the user to enter a password and then another form of credential, such as a PIN sent by text to their phone, a fingerprint scan, or a Universal 2nd Factor (U2F) security key. When multi-factor authentication is implemented, it is substantially harder for a cyber criminal to gain access to accounts and networks, because they must also prove they control the second authentication factor.
- Next, and with equal importance, back up your data. Ransomware is a type of malware that blocks access to your data or systems until a financial payment is made. Many organisations choose to pay the ransom because they do not have their data backed up: to retrieve it, they must decide between making a payment with no guarantee their data will be returned or losing everything.
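The “PIN sent to your phone” style of second factor described above is commonly implemented as a time-based one-time password (TOTP, RFC 6238). A minimal sketch of the server-side code computation, using only the Python standard library (the shared secret and parameters here are illustrative):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1, 30 s step)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's authenticator app share the secret; a login is
# accepted only if the submitted code matches the one computed server-side.
```

Because the code changes every 30 seconds and is derived from a secret the attacker does not hold, a stolen password alone is no longer enough to gain access.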
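A backup need not be elaborate to be useful. A minimal sketch of a timestamped, compressed backup in Python (paths are illustrative; a real backup regime would also test restores and keep copies offsite or offline, out of ransomware’s reach):

```python
import tarfile
import time
from pathlib import Path

def backup(source_dir, dest_dir):
    """Write a timestamped, gzip-compressed archive of source_dir into dest_dir."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        # Store files under the source directory's own name inside the archive.
        tar.add(source_dir, arcname=Path(source_dir).name)
    return archive
```

Note that ransomware able to reach a network-attached backup can encrypt it too, which is why offline or versioned cloud copies matter as much as the backup itself.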
It is not all “doom and gloom”. There is an “egg in one’s beer” to cyber security. Organisations that invest in cyber security and maintain high standards of ethical decision-making strengthen their resilience, decrease the likelihood of a successful attack, and consequently enjoy a higher level of trust with their consumers. The focus on consumer trust is now de rigueur in Australia, particularly after the Hayne Royal Commission. Research shows that over 50% of customers will pay more for a company’s services and products if they trust it.
Essential to whether a consumer trusts an organisation is transparency about its cyber security and data use. Through the timely disclosure of data breaches, the design of fit-for-purpose security controls, and informed consent for the use of users’ data, organisations demonstrate transparency and therefore elicit a greater level of trust. Australian companies need to make cyber security, ethical decision-making, and data privacy a priority, and demonstrate their commitment to earning stakeholders’ trust, to remain competitive in the digital age.
Shannon Sedgwick GAICD