Use AI compliantly: Legal Challenges

Business secrets, copyright, privacy: the use of generative artificial intelligence harbors legal pitfalls that can have serious consequences if they are not addressed. Find out what you need to consider and how a digital adoption platform can significantly reduce the risk of running afoul of the law.
March 18, 2024
6 min

The United Kingdom and the United States have seen significant levels of AI adoption in the workplace. In the UK, companies are actively embracing AI, with a focus on large-scale, transformational initiatives to enhance productivity and competitiveness. Notably, UK companies rank second in having a comprehensive strategy for AI adoption, reflecting a strong commitment to ambitious AI projects amid global competition.

On the other hand, the United States has been a leader in both public and private AI research, with substantial investments in AI initiatives over the years. Venture capitalists have significantly funded AI projects in the US, leading to a transformation of organizations into sophisticated AI users. US enterprises manage numerous AI production systems, showcasing a high level of adoption and sophistication in leveraging AI technologies within their workforce. Both countries demonstrate a robust uptake of AI technologies in their work environments, emphasizing the importance of AI in driving innovation and efficiency across various industries.

1. Do not use business secrets in prompts

Various sources have reported that Samsung recently put its internal use of ChatGPT on hold. The reason: engineers had entered secret source code for debugging, as well as confidential meeting notes, into the chat window. They had not considered that OpenAI could use the disclosed information to train its models and thereby expose it to other users.

This case illustrates how quickly organizations can be accused of failing to take appropriate confidentiality measures. The terms of service of GenAI vendors typically exclude the vendors themselves from liability; instead, employees, managers, and directors are held responsible, even if they disclose trade secrets or confidential customer or personal information inadvertently. Depending on the extent of the damage and the motive for the offense, the consequences range from a warning to termination without notice, damages, and even imprisonment.

It is possible to ask the AI operator to delete the information afterwards. However, even with API-protected AI applications, the best protection is simply not to include confidential information in prompts. For sensitive information from customers, suppliers, and other stakeholders, contracts and nondisclosure agreements should specify what data, if any, may be used in GenAI tools.
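One practical safeguard is to screen prompts for known categories of sensitive data before they leave the company network. The following is a minimal sketch in Python; the patterns and category names are illustrative assumptions, not a substitute for the organization's own classification rules or a vetted data-loss-prevention tool.

```python
import re

# Hypothetical example patterns; a real deployment would use the
# organization's own classification rules and a vetted DLP product.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{8,}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace matches of known sensitive patterns with placeholders
    and report which categories were found, so a prompt can be
    blocked or sent for review before reaching a GenAI service."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, findings

redacted, found = redact_prompt(
    "Please debug this. Contact jane.doe@example.com, key sk-1234567890abcd."
)
```

A deny-and-review step like this does not replace employee training, but it catches the accidental disclosures that caused the incident described above.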

3. Protection of privacy and personal rights

The General Data Protection Regulation (GDPR) sets high standards for the handling of personal data. Accordingly, companies are also required to ensure that their use of AI systems complies with GDPR requirements. 

For example, care must be taken to ensure that the training data and AI algorithms used do not lead to discrimination or bias that unlawfully disadvantages employees, customers, or job applicants. If the countermeasures taken are inadequate, the company will have to defend itself in court in the event of a lawsuit and pay damages if it loses.

An additional risk is that the processing of personal data with GenAI applications may also take place outside the EU. If this process is not GDPR compliant, there is a risk of heavy fines.

Risk management should be based on the EU AI Act

With the EU AI Act, the European Union wants to steer the use of AI-based applications in a safe direction. At the end of 2023, the European Parliament and the EU member states agreed on a version that previews the law expected to take full effect in 2026. It divides AI applications into three risk categories:

Unacceptably high risk

AI applications that are considered a threat to the fundamental rights and security of EU citizens and are prohibited for businesses, such as surveillance systems or social scoring. 

High risk

AI applications that are not explicitly prohibited, but could lead to harms, such as AI applications to assist in the selection of job applicants. 

Low risk

All other AI systems, including chatbots or video games with AI. 

To avoid security risks and data breaches, companies should already base their internal assessments on the risk-based approach of the EU AI Act. AI applications that pose an unacceptably high risk should be disabled. For high-risk AI, such as people analytics or HR recruiting, particular attention must be paid to the upcoming requirements for use: organizations must ensure transparency and comply with technical documentation and event-logging obligations. They should also prepare for future requirements on the accuracy, robustness, and cybersecurity of these systems and adapt their processes to the regulation accordingly.

Framework for using AI in the enterprise

Regardless, it is advisable to develop a basic framework for the use of GenAI applications in the workplace that minimizes risks and prevents undesirable developments. In particular, this includes the following:

  • Conduct a comprehensive compliance risk management assessment and document the results.
  • Create a regulatory policy for the use of GenAI applications.
  • Establish an AI task force that embeds the policy into the corporate and work culture through communication and leadership.
  • Ensure consistent monitoring of operational processes and define the necessary reporting and escalation processes. 
  • Clarify who should have what access to the company's AI applications, and which employees' use must be restricted or prohibited to protect trade secrets. Adjust policies and work instructions accordingly.
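The last point, restricting which employees may use which GenAI tools, is simplest to enforce with a deny-by-default allowlist. A minimal sketch, with hypothetical roles and tool names chosen only for illustration:

```python
# Hypothetical role-to-tool allowlist; roles and tool names are
# invented for this sketch and would come from the company's policy.
ALLOWED_TOOLS = {
    "engineering": {"internal_copilot"},
    "marketing": {"internal_copilot", "image_generator"},
    "hr": set(),  # e.g. no GenAI access while handling applicant data
}

def may_use(role: str, tool: str) -> bool:
    """Deny by default: unknown roles and unlisted tools are blocked."""
    return tool in ALLOWED_TOOLS.get(role, set())
```

Denying by default means a newly introduced tool or role stays blocked until the policy explicitly permits it, which matches the spirit of the checklist above.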

Direct support in the flow of work is crucial

When companies and their employees use GenAI applications, there is clearly a great deal to consider. The risks are considerable and difficult to contain effectively. The legislator shares this view: for this reason, the EU AI Act will require companies to train their employees in the use of GenAI.

In view of the dynamic technological development, the far-reaching impact on work routines and the associated uncertainties, traditional training measures are likely to quickly reach their limits. Organizations should therefore consider a solution that supports their employees with questions about the correct and legally compliant use of GenAI applications and the corresponding processes, guidelines and procedures directly in the workflow.

A digital adoption platform such as the tts performance suite offers exactly this support: it provides holistic assistance directly in the workflow and ensures that the new technologies are used efficiently and compliantly. The unique side-by-side approach of the tts performance suite places a help window next to the AI application like a sidebar. Users can, for example, access information about approved AI tools and compliance regulations, as well as help with prompting and sample prompts, all without leaving the AI tool. This ensures that employees are always safe and confident when using GenAI programs in accordance with AI policies.
