Generative AI and DAP: Alternative or "Perfect Fit"?

Tools like ChatGPT have made generative AI accessible to everyone. Many companies are now exploring how AI can facilitate both day-to-day work and knowledge management. Some even hope to replace existing systems completely. However, a closer look reveals that generative AI alone is not enough: it offers enormous potential - but only if companies create the right framework to use it safely and productively.
December 15, 2023
6 min

The prospect sounds enticing: install ChatGPT, Copilot, or any other generative AI solution, feed the software with the company's know-how, and deploy it in every digital workplace - and you have a solution that, according to McKinsey, could add $2.6 to $4.4 trillion in annual productivity gains to the global economy.

The potential is certainly there. Generative AI (GenAI) can produce any type of content in seconds that would take a human hours or days. At the same time, however, the technology's ease of use presents companies with enormous challenges.

AI presents companies with three major challenges

1. Legal certainty

The use of generative AI raises a host of privacy, copyright, and personal rights issues that need to be addressed with legal certainty - because in the worst case scenario, the entire company is liable.

2. Process changes

Business processes will also need to be redesigned. AI can only reach its full potential if it is integrated in the right places.

3. Training

Employees must be trained in the use of GenAI. This is the only way to ensure that AI is used in accordance with the company's goals, values and policies.

AI simplifies knowledge management

Used well, generative AI can deliver significant benefits in a short time, especially in knowledge management. Since many companies still maintain their knowledge pools manually, generative AI offers enormous potential for improvement by

  • Assisting employees in correcting and updating outdated documents, enriching them with additional information, and significantly improving quality
  • Helping authors create articles from existing documentation, training materials, or chat histories of help desk requests
  • Personalizing content for roles, languages, and regions, and optimizing the user experience
  • Bringing together siloed knowledge, recombining it and providing input for new ideas

Extensive use of Copilot and similar tools in everyday work can help optimize efficiency and effectiveness - but it can also become an Achilles' heel for companies undergoing digital change.

Automatically produced is not automatically professional

The impressive results of generative AI are based on machine learning algorithms and large language models (LLMs), which, among other things, make the technology easy to use with natural language. Added to this are the huge amounts of data the AI has been trained on, which provide the raw material for generating text, images, programming code, or videos.

GPT-3, for example, has 175 billion parameters and was trained on hundreds of billions of tokens of text from the internet - the equivalent of several hundred billion English words. GPT-4 is widely reported to be considerably larger, although OpenAI has not disclosed exact figures. Even so, the newer model still produces inaccurate or incorrect information when used to search for information or generate text.
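Token counts are often translated into word counts using a rule of thumb of roughly 0.75 English words per token - an approximation that varies by text and tokenizer, not an exact constant. A minimal sketch of the conversion:

```python
def tokens_to_words(tokens: float, words_per_token: float = 0.75) -> float:
    """Approximate English word count for a given token count (rule-of-thumb ratio)."""
    return tokens * words_per_token

# 100 billion tokens correspond to roughly 75 billion English words
print(f"{tokens_to_words(100e9) / 1e9:.0f} billion words")  # -> 75 billion words
```

The same rule of thumb works in reverse (about 1.33 tokens per word) when estimating how much text a given token budget covers.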

Risk: AI does not provide reliable answers

Organizations, by contrast, typically have far less training data, and of varying quality. This quickly leads to problems, especially if the generative AI is used by employees as a point of contact for questions about new applications, processes, or policies:

  • Multiple sources of truth
    Generative AI does not understand text; it produces content based on statistical probabilities. Depending on the search query or prompt, it produces different answer variants that differ from one another. The company cannot guarantee that an AI's answer will always be correct.
  • Weakening of compliance
    Depending on the complexity of the question and the quantity and timeliness of the training data, the accuracy and quality of the answers can vary widely. However, even incorrect answers often seem so plausible that they go unquestioned, which can lead to compliance violations.
  • Occurrence of the "bias" effect
    Generative AI models can reinforce biases that are unconsciously embedded in their training data. Combined with imprecise instructions, this leads to biased output. Since it is impossible to predict when and why this effect will occur, organizations risk a gradual erosion of the quality of their data and processes - the exact opposite of what they are hoping for.
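The first point above can be made concrete: a language model chooses each next token by sampling from a probability distribution, so the same prompt can yield different answers on different runs. A minimal sketch with an invented toy vocabulary and probabilities (real models work over tens of thousands of tokens, but the mechanism is the same):

```python
import random

# Invented next-token distribution for a prompt like
# "Our refund policy allows returns within ... days"
next_token_probs = {"14": 0.5, "30": 0.3, "60": 0.15, "90": 0.05}

def sample_next_token(probs: dict, rng: random.Random) -> str:
    # Weighted random choice - the source of non-deterministic answers
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
answers = {sample_next_token(next_token_probs, rng) for _ in range(20)}
print(sorted(answers))  # usually more than one distinct "answer" appears
```

Temperature and other sampling settings shift these probabilities, but unless generation is made fully deterministic, variation between runs remains.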

Regardless of whether these problems occur individually or in combination in real-world operations, acceptance of generative AI will take a massive hit - and its potential will go largely or entirely unused. That would be a fatal development.

Two that complement each other: AI and Digital Adoption Platform

So what can an organization do to use generative AI safely and productively?

First and foremost, it must ensure that the new applications are used in a controlled and manageable way. This is where a Digital Adoption Platform (DAP) makes a valuable contribution.

A digital adoption platform provides targeted support in the work context. This helps employees to understand the new features and workflows more quickly and to use generative AI efficiently and in line with compliance requirements. Thanks to rapid learning successes, a DAP increases the necessary acceptance of new AI technologies.

Importantly, when implemented correctly, the interaction between AI and DAP creates an enterprise-wide single source of truth. Employees are not presented with alternative facts in different versions, but receive clear and reliable support that is 100% aligned with the company's goals.

Choosing the right digital adoption platform

Many digital adoption platforms offer technology guidance, i.e., specific application assistance such as on-the-fly creation. However, this covers only a small part of the challenges of using GenAI.

In practice, other questions quickly arise, such as:

  • Can I use trade secrets or personal data in prompts?
  • How can I assess the quality of the content generated by the AI?
  • What should be done if there is reasonable doubt about the AI's output?
  • Who is liable for breaches caused by incorrect information from the AI?

Technology guidance does not help with any of these questions. This is why tts performance suite, the digital adoption platform from tts, also offers business guidance. This includes direct access to process knowledge, regulations and company-specific expertise. As a result, employees develop a broader understanding of the new ways of working that come with the introduction of generative AI.

Reliable and efficient: AI and DAP are a "perfect fit"

Companies that want to deploy AI across the board need a digital adoption platform. Only the combination of technology and business guidance will provide employees with reliable support when working with AI.

In particular, the three major challenges of legal certainty, process changes, and training can be overcome: with a DAP, employees always receive legally compliant answers, they get immediate help with every process, and they quickly learn how to use AI applications safely.

AI and DAP are also a "perfect fit" in other respects: DAP providers are now integrating AI capabilities into their digital adoption platforms. For example, the new technologies increase efficiency in the creation of training materials, the generation of step lists, or the production of learning videos - provided that quality control is performed by subject matter experts before publication.

Corporate knowledge now reaches employees faster than ever before. At a time when the half-life of knowledge is getting shorter and shorter, this is essential.

