

5 Risks of Artificial Intelligence to Organizational Cybersecurity

Our article “Artificial Intelligence’s Influence on Accounting” explored the pros and cons of using Artificial Intelligence (AI) models in accounting. If you are unfamiliar with AI, that article offers a helpful introduction to how the technology might change the accounting landscape. This article continues that discussion and covers five risks AI poses to organizational cybersecurity, including deepfakes, hallucinations, and data exposure.

Risk 1: Improved Deepfakes

Deepfakes are AI-generated videos in which well-recognized and influential figures, such as Mark Zuckerberg, are digitally altered to appear to say or do things they never did, often to spread false information. As access to deepfake technology continues to grow and the technology improves, it will become increasingly difficult to distinguish real videos from deepfakes. Similar AI models can be easily trained to mimic specific human voices. These trends are particularly concerning for businesses because a deepfake of someone’s boss or a firm partner could be used to great effect against unsuspecting employees who have no reason to doubt communications from their “supervisor.”

Risk 2: Credible Phishing

Most people have likely received a phishing email before. The message might ask the recipient to fill out a survey or re-enter information to prevent an account from being deleted, all in an attempt to trick someone into handing over personal information to bad actors or downloading malicious software. AI tools like ChatGPT can generate personalized, convincing phishing emails in seconds. In the past, poor grammar and spelling errors often served as indicators that an email was fake. With the nearly flawless output of chatbots, though, the likelihood that an employee will fall victim to these phishing campaigns grows, opening systems up to further attacks.

Risk 3: Impromptu Hallucinations

One of the most interesting and unexpected behaviors of ChatGPT is hallucination, which occurs when the AI model confidently makes up a fictitious response. Every person makes mistakes, and some will report something false as true, but part of the goal of AI is to reduce that kind of error, not reproduce it. Often, these hallucinations involve links to websites that do not exist, giving attackers the chance to register those domains and populate them with malicious content. Any company or person who uses ChatGPT or other AI models must be aware of the potential for a hallucinated response.

Risk 4: Greater Data Exposure

The issues of data exposure and security were briefly touched on in the previous article. In many cases, AI services such as ChatGPT store the prompts they receive so the data can be used in future model training. This means that if a company wants to use a service like ChatGPT, it will need to limit the amount of sensitive data that passes through it. When asked what should be done to secure this kind of data, ChatGPT itself suggested encryption, secure transmission, an incident response plan, and employee training. These steps are crucial to mitigating the data risks posed by AI. Additionally, any company wanting to provide AI services needs to ensure that client data is not stored in a global repository without the client’s consent.
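For teams building internal tools on top of a service like ChatGPT, one practical way to limit the sensitive data that passes through is to screen prompts before they leave the organization. The short Python sketch below is only an illustration of that idea, assuming a simple pattern-based screen; the patterns and the redact_prompt helper are hypothetical, and a production setup would rely on a vetted data loss prevention tool rather than a few regular expressions.

import re

# Hypothetical patterns for values a firm may not want to send to an
# external AI service (illustration only, not a complete list).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "account": re.compile(r"\b\d{9,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely-sensitive values with placeholders before the
    prompt is sent outside the organization."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

# Example: the redacted prompt, not the original, would go to the AI service.
print(redact_prompt("Summarize the dispute for jane.doe@example.com, SSN 123-45-6789."))
# -> Summarize the dispute for [EMAIL REDACTED], SSN [SSN REDACTED].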

Risk 5: Added Vulnerabilities

As AI continues to make its way into businesses, it is crucial to ensure the AI itself is secure. Like any other piece of software, AI models are susceptible to faulty programming that can lead to vulnerabilities in the system. Many times, these vulnerabilities are simply unforeseen errors in the code that developers patch as soon as they are found. However, it is possible for a vulnerability to remain in a system, giving malicious actors a backdoor inside. Everyone is familiar with the adage, “the bigger they are, the harder they fall,” and the same principle applies to AI. The more heavily an organization relies on AI without proper controls, including routine patching and penetration testing, the more catastrophic an attack could be for the organization.

If you would like more information regarding how your organization and its cybersecurity may be affected by AI, McKonly & Asbury would be happy to help. Be sure to visit our System and Organization Controls (SOC) service page and don’t hesitate to contact us with any questions.

This article was written by SOC intern Carolina Hatch under supervision of Principal Lynnanne Bocchi during McKonly & Asbury’s 2023 Summer Internship Program.

About the Author

Lynnanne Bocchi

Lynnanne joined McKonly & Asbury in 2018 and is currently a Director with the firm. She is a key member of our firm’s System and Organization Controls (SOC) Practice, preparing SOC 1, SOC 2, and SOC 3 reports for our clients. She holds the…
