
Looming in the Shadows: Dark Threats from the Use of AI

Written by Devon Edwards | January 15, 2025

AI is finding a variety of uses across industries. From copywriters using it as a convenient and effective editor to software engineers leaning on it to debug code at a rapid pace, artificial intelligence is settling in as an auxiliary employee in many workplaces. The advantages are apparent: automating simple tasks to save time, solving problems quickly without additional resources, and suggesting creative workarounds are among the many perks of bringing large language models (LLMs) into work processes. There is a downside that isn't often considered, however. Used carelessly, generative AI can pose a significant security threat to the very organizations it was supposed to benefit.

In May 2023, Samsung experienced a leak of proprietary code. You can imagine what a nightmare that is for a company known for staying ahead of the pack through cutting-edge innovation. Unbeknownst to its IT department, the code had been uploaded to ChatGPT by a Samsung employee to check for bugs, and the incident led to an immediate ban on generative AI tools within the organization. Data shared with these LLM services is stored on the servers of the providers behind them (typically Microsoft, Google, or OpenAI) and cannot be removed by the end user. More concerning still, that data could eventually surface in responses served to other users. It hardly needs explaining how this undermines any effort the organization has already made to protect its sensitive information.

Employee use of AI tools without the authorization and supervision of the organization's IT department is known as Shadow AI. The next time you think about asking ChatGPT to write an email to a customer that may include sensitive information or proprietary data, remember that you could be putting your organization at risk of the following:

  • Compliance Violations
    • You could be breaking the law! That's right. Many companies fall under regulations and frameworks such as HIPAA, GDPR, and CMMC. Each time an employee pastes sensitive information into an unprotected LLM, the company is exposed to financial penalties and even certification or license revocation.

  • Security Risks
    • Public AI tools typically store inputs to improve their models. Those inputs can become accessible not only to bad actors but also to competitors. The undisclosed product or proprietary research that gives you an edge in your industry could end up in the hands of the competition. So think twice!

For all the benefits that artificial intelligence provides organizations, it is becoming increasingly apparent that it can introduce unforeseen hazards if not used responsibly. For those who want to take advantage of the technology without leaving themselves vulnerable, here are some ways to mitigate the risks:

  • Develop policies and guidelines that define responsible AI use for each role in the organization. Every department and position should have clear specifications on what is and isn't allowed to be entered into an LLM.

  • Educate and train your team on the benefits and, more importantly, the risks of AI, as well as responsible usage. When users understand not only what they aren't permitted to do with AI but also why, they are more likely to stay mindful of how they use it.

  • Monitor employee usage, track effectiveness, and gather feedback from those who use AI within the defined guidelines. Make sure the IT department knows who is doing what and can trace past interactions with AI models across the organization.

  • Offer alternatives. Team up with Secure Compliance Solutions (SCS) and Expedient to introduce your organization to safe, private AI use via a secure AI gateway and private model hosting; a simple sketch of the kind of screening such a gateway can perform follows this list. Contact us to learn how to implement it today!
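To make the gateway idea concrete, here is a minimal illustrative sketch in Python of the kind of pre-flight check a gateway or internal tool might perform before a prompt ever leaves the organization. The function names, patterns, and logging setup are our own assumptions for the sake of illustration; they are not Expedient's or any vendor's actual implementation, and a real deployment would use the organization's own data-classification rules and approved model endpoints.

```python
import logging
import re

# Illustrative patterns only -- a real gateway would use the organization's
# own data-classification rules and dedicated DLP tooling.
SENSITIVE_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Internal project marker": re.compile(r"\bPROJECT-[A-Z0-9]{4,}\b"),
}

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ai-gateway")


def screen_prompt(user: str, prompt: str) -> bool:
    """Log the request and block it if it appears to contain sensitive data."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    if findings:
        log.warning("Blocked prompt from %s: matched %s", user, ", ".join(findings))
        return False
    log.info("Allowed prompt from %s (%d characters)", user, len(prompt))
    return True


def forward_to_approved_model(prompt: str) -> str:
    # Placeholder stub -- in practice this would route to an approved,
    # privately hosted model rather than a public AI service.
    return f"[model response to {len(prompt)}-character prompt]"


def submit_prompt(user: str, prompt: str) -> str:
    """Only forward prompts that pass screening; everything is logged either way."""
    if not screen_prompt(user, prompt):
        return "Request blocked: remove sensitive data or use the approved internal model."
    return forward_to_approved_model(prompt)


if __name__ == "__main__":
    print(submit_prompt("dedwards", "Summarize our Q3 customer feedback themes."))
    print(submit_prompt("dedwards", "Debug this: customer SSN 123-45-6789 fails validation."))
```

Even this toy example shows the core value of the gateway approach: every request is logged for IT visibility, and anything that looks sensitive is stopped before it reaches an external service. A production gateway layers on authentication, audit trails, and private model hosting.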

To quote a great man whose name escapes me at the moment, "With great power comes great responsibility." AI and large language models are no exception. For all the benefits they bring to our jobs, we must take greater care as we find new and effective ways to work. SCS can walk you and your organization through the process and provide valuable insight into how LLMs can benefit you while keeping the proper protections in place. We look forward to integrating your organization with the future.

Citations:

Maher S. What is shadow AI and what can IT do about it? Forbes. https://www.forbes.com/sites/delltechnologies/2023/10/31/what-is-shadow-ai-and-what-can-it-do-about-it/. Published October 31, 2023.

Ray S. Samsung bans ChatGPT among employees after sensitive code leak. Forbes. https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/. Published May 2, 2023.

Enterprise AI Implementation - Expedient. Expedient. https://expedient.com/solutions/challenge/ai-implementation/.