AI is finding a variety of uses across industries. From copywriters relying on it as a convenient and effective editor to software engineers using it to debug code at a rapid pace, artificial intelligence is settling in as a supplementary employee in the workplace. The advantages are apparent: automating simple tasks to save time, producing quick solutions without the need for additional resources, and suggesting creative workarounds are among the many perks of bringing large language models (LLMs) into work processes. There is a downside that isn't often considered, however. Use of generative AI can pose a significant security threat to the very organizations it was supposed to benefit.
In May 2023, Samsung experienced a leak of proprietary code. You can imagine the nightmare for a company known for staying ahead of the pack through cutting-edge innovation. Unbeknownst to its IT department, the code had been uploaded to ChatGPT by a Samsung employee to check it for bugs. The incident led to an immediate ban on generative AI tools within the organization. Data shared with these LLM services is stored on the providers' servers (typically Microsoft, Google, or OpenAI) and cannot be removed by the end user. More concerning still, that data can end up being served back to other users. It hardly needs explaining how this undermines every effort made to protect sensitive information in the first place.
Employee use of AI tools without the authorization and supervision of the organization's IT department is known as Shadow AI. The next time you think about asking ChatGPT to write an email to a customer that may include sensitive information or proprietary data, remember that you could be putting your organization in danger of exactly this kind of exposure.
For all the benefits that artificial intelligence provides corporations, it is becoming increasingly apparent that it can introduce unforeseen hazards if not used responsibly. For those who wish to take advantage of the tool without leaving themselves vulnerable, there are ways to mitigate the risk, starting with keeping AI use under the authorization and supervision of your IT department.
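As a concrete illustration, consider one common safeguard: routing prompts through a small internal gate that strips anything that looks like sensitive data before it ever reaches an external LLM service. The Python sketch below is a hypothetical, simplified example of that idea; the pattern names and rules are assumptions for demonstration only, not a complete data-loss-prevention solution.

import re

# A minimal sketch of a prompt-redaction gate, assuming an organization routes
# employee prompts through internal tooling before they reach an external LLM API.
# The patterns below are illustrative assumptions, not a full DLP policy.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything that looks sensitive before the prompt leaves the network."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    draft = "Draft a follow-up email to jane.doe@example.com about invoice 4421."
    print(redact(draft))  # Draft a follow-up email to [REDACTED EMAIL] about invoice 4421.

A gate like this is no substitute for policy and training, but it gives the IT department one supervised path for AI traffic instead of dozens of unsanctioned ones.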
To quote a great man whose name escapes me at the moment, "With great power comes great responsibility." AI and large language models are no exception. For all the benefits they bring to our jobs, we must take greater care going forward as we find new and effective ways to work. SCS can walk you and your organization through the process, providing valuable insight into how LLMs can benefit you while also offering the proper protections. We look forward to integrating your organization with the future.
Citations:
Maher S. What is shadow AI and what can IT do about it? Forbes. https://www.forbes.com/sites/delltechnologies/2023/10/31/what-is-shadow-ai-and-what-can-it-do-about-it/. Published October 31, 2023.
Ray S. Samsung bans ChatGPT among employees after sensitive code leak. Forbes. https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/. Published May 2, 2023.
Enterprise AI Implementation. Expedient. https://expedient.com/solutions/challenge/ai-implementation/.