Podcast

LERNLUST #9 // Evaluation of learning – A mental backpack

Too many learning programs today are designed to deliver content to learners as quickly as possible, and evaluating their success is too often limited to feedback questionnaires sent to participants. Jen Shivel and Andreas Hohenstein advocate thinking about the evaluation of learning programs from the outset, drawing on various well-known evaluation models.

March 24, 2025
37 min
Susanne Dube, Learning Manager, tts

Shownotes

Host:
Susanne Dube, Teamlead Learning // LinkedIn

Guests:
Jen Shivel, Learning Consultant // LinkedIn
Andreas Hohenstein, Senior Learning Consultant // LinkedIn


Links:
Donald Kirkpatrick's evaluation model
Thalheimer's LTEM (Learning-Transfer Evaluation Model)
Kirkpatrick+ by Roger Kaufman


You can also find all episodes of our LERNLUST podcast at:

Apple Podcasts | Spotify | Google Podcasts | Amazon Music | Deezer

Transcript

[Claudia Schütze]
Lernlust, the podcast for everything related to corporate learning. 

[Susanne Dube]
We are Claudia Schütze and Susanne Dube and we are Learning Consultants at tts and we are the hosts of this podcast. 

[Claudia Schütze]
And here we will exchange ideas about topics related to our work, i.e. everything that affects learning in organizations today and in the future.

[Susanne Dube]
And from time to time, we will invite internal or external experts to join our group.

[Claudia Schütze]
And we would be delighted to have you there.

[Susanne Dube]
Hello dear listeners. It has been quite a while since Donald Kirkpatrick presented his four levels for evaluating learning measures. That was sometime around the 1960s. 

Since then, we have all been familiar with happy sheets, those nice feedback forms handed out after training measures. And some of us may also know the next level: knowledge tests after a training or learning measure. This topic is relevant to the works council and is therefore perhaps not implemented as often as one might think.

Nevertheless, the topic of evaluation has not lost its relevance. In the 2000s, Roger Kaufman developed his mega planning model, also known as Kirkpatrick+. And Thalheimer was still working on it in 2018 with his Learning-Transfer Evaluation Model (LTEM). 

In the meantime, driven by technology and the pressure of the VUCA world, the focus is more on creating learning quickly than on evaluating good learning. Yet my colleagues and I would like to keep the focus on the evaluation and improvement of learning measures and the options for implementing them. And that is precisely why two of my colleagues have taken this topic to heart.

Jennifer Shivel and Andreas Hohenstein are visiting me today. Jennifer has come a long way to become my colleague as a Learning Consultant: she joined us from the USA, studied teaching, but then decided on adult education. As my colleague she now develops very interesting concepts, and every now and then she still does training.

And I am very happy that she is coming to my kitchen-cum-living room today for the topic of evaluation. And as her partner for this discussion she has brought Andreas Hohenstein with her. He is a highly experienced man who came to tts for strategic consulting, systemic consulting, management training and much more, and has felt at home here for many years. Andreas and I are actually linked by the ACAP criteria catalog for the evaluation of e-learning.

He was involved in its creation and I used it for my diploma thesis, and that was a long time ago. Now you can guess how old I am. I am pleased that you both came to my coffee-kitchen sofa to discuss a learning topic with me, namely the evaluation of learning measures: a mental backpack.

Welcome to you both.

[Jen Shivel]
Hello Susanne.

[Andreas Hohenstein]
Yes, thank you for the invitation, Susanne.

[Susanne Dube]
Yes, always a pleasure. It's great to have you both here. Perhaps as an introduction: it is, of course, very, very exciting for me to learn how you actually came to the topic of learning evaluation, or rather, why you are dealing with it at tts.

[Jen Shivel]
Yes, I can start with that. There are many reasons to evaluate, but from my perspective as an online trainer and WBT developer, I mainly want to know from the evaluation whether what my company and I do is effective and whether we meet the learning needs of our participants. So getting feedback from participants is one of the reasons why I am driving this forward, and ultimately also to make the value of learning visible to others, that is, to all the various stakeholders, and precisely to ensure that it is taken forward.

[Susanne Dube]
I love the way you say that, because I think it's actually in the hearts of everyone who is somehow involved in learning. Andreas, was it the same reason or motivation for you to get involved in this topic, or where did you come from?

[Andreas Hohenstein]
So the value of learning, that's for sure, that has driven me too, ever since I started dealing with the topic. But even more so, from a customer perspective, seeing that the customer always has expectations of the learning. Why am I doing this? Why am I doing a learning project?

How can I change culture with learning in the coexistence of people? And I was always interested in what comes out at the end of the day? What do people show on the pitch?

How do they implement it? And what have we contributed to this from the personnel development in the company? That's one thing.

And the second thing is: how do we design this learning journey for our employees? And how do we draw impulses from the evaluation in order to improve and become more suitable and more goal-oriented from stage to stage of the learning journey? These are the topics that I associate with evaluation on the customer side.

And I believe that these are the things that belong together. Namely, we can also improve our service quality from this.

[Susanne Dube]
Yes, exactly. And it was a real stroke of luck that you two were brought together by us in the field of consulting and were then able to choose this topic in order to pursue it for us. Now, of course, my question is: you have joined forces with your two perspectives on what evaluation can be and what it can be used for.

What approach have you chosen for us, for our company, for tts or perhaps for our customers with regard to the topic of evaluation? And maybe you could describe a little of what you do or have in mind.

[Andreas Hohenstein]
Yes, gladly. In principle, evaluation in learning and education is not a new topic per se. However, it is a topic where there are well-established leading models, but relatively little practical experience in implementing and applying those models.

And that is, so to speak, what drives us: to build from well-founded models a pragmatically feasible solution that can be integrated very flexibly into the respective context. Because we are convinced that only an evaluation that is clearly integrated into the context and implemented in a very pragmatic way will be accepted by the customer. And that means we have only borrowed from leading, existing models; we do not use them one-to-one.

And that was something we spent a lot of time on in the last few months, with many colleagues from the company but also on the customer side: putting something together, setting up an architecture for evaluation. The main work was, so to speak, taking existing things and combining them in such a way that it can ultimately become our pragmatic tts approach. That was the idea.

[Susanne Dube]
And could you describe it briefly? What is our approach?

[Andreas Hohenstein]
Yes, well, on the one hand, we said it is important to look, at a first level, at what is often asked in companies today in evaluations: what is the learning environment in which I learn like? What are the learning conditions under which I learn? That is, to look at the learning process in the narrower sense:

How is it perceived? How is it experienced? And how can we optimally design this learning environment?
All of this doesn't yet have much to do with learning success itself. So, at a second level, we look at the learning objectives we associate with learning. These can be cognitive or affective learning objectives.

They can also be competence-oriented learning objectives; this can be very diverse. The next step is to ask how specifically we can describe these learning objectives.

And at the end of the day, to see if we have achieved learning success. And if it is a longer learning journey, which path might lead efficiently and effectively to learning success, perhaps even more efficiently and effectively than another? Or is it the mix of learning interventions that can be used to achieve optimal learning success?

In other words, in the first two stages, we actually do something that has become something of a tradition in learning: we look at the learning process itself. How does it feel? Did I enjoy it?

What was the environment like? And at the end of the day, what learning success do we have?

[Susanne Dube]
So basically just the classic Happy Sheets and tests? Or what should I imagine?

[Andreas Hohenstein]
Happy sheets and tests, yes, but not happy sheets without a specific dimensioning of what comes afterwards. Take the learning environment, for example. Suppose I have the expectation that self-learning competence will increase through the personnel development project.

That is my expectation. We would then check at a later point in time whether the self-learning culture in the company has changed. But for that, it is of course important to look first at whether we have created learning environments and conditions that promote a self-learning culture.

Or do we have a learning culture where we say: go to the seminar, knowledge will be poured into you with the proverbial Nuremberg funnel, you are trained, the food is delicious, the seminar rooms are great? Then we have not actually focused on whether the learning environment itself promotes self-learning skills and self-development. That's why it's not just a happy sheet, but something linked to the strategic expectations of the company.

And when a company says, I want to promote self-learning skills and a self-learning culture, then we have to highlight the things linked to this in the very first phase. And that's not what happy sheets typically do.

[Susanne Dube]
What do they do? I remember the happy sheets well. I gave them out ten years ago and always thought they were great.
But I am wondering how I would measure whether a self-learning culture has been achieved or self-learning skills have increased. Can I measure that with a questionnaire? That would be the happy sheet.

Or do I do it differently?

[Andreas Hohenstein]
Yes, well, a questionnaire in itself doesn't have to be a happy sheet. A questionnaire can also be structured in such a way that it is clearly derived from strategic expectation dimensions. Suppose I have a cultural development expectation: the company says, well, we would actually quite like to work in a more agile way, and for that we need a different kind of learning culture, with self-developing people who are willing to take responsibility for their learning.

And then we say: okay, come on, let's look at four levels. What learning conditions do we need on level one? What learning environment on level two?

What learning objectives are linked to the development of self-learning competence, of a self-learning culture? These can be affective learning objectives. You always have to design learning in such a way that there are prompts for self-motivation, that there are constructivist elements that let learners shape their own learning, because only then does their own motivation arise.

So that means: how do I design the path to the learning objective? And the third step is perhaps to say, well, now that I have learned, how do I show it on the pitch? That is, at which level of the learning objective taxonomy am I operating on the pitch when I implement what I have learned and develop it further?

You can do that through 360-degree feedback, employee appraisals, mentoring or tandem coaching, or peer-to-peer exchange. In other words, you exchange views on actions that have been taken, give feedback and say, are we actually implementing what we have learned? That is the minimum that a company expects.

A company doesn't just expect people to be smart and well educated, but rather that they bring what they know, what they can do, what skills they have and what their desires and motivations are into the workplace. So for us, a major objective is to look together with the company at what has been implemented from what it has invested in learning. And if self-learning competence is the goal, for example, to look: do formats such as peer counseling work on their own?

Do employees go into the knowledge management portals that companies have and extract the knowledge themselves? Do requests to the support center decrease as the online support formats that have been implemented are used? In other words, passive consumption of learning and knowledge changes into active engagement.

And that can be seen very nicely on the pitch. And at the end of the day, there are definitely also cultural radar instruments, from Denison or others, which can be used to see where cultural factors have changed. There we would of course start by considering with the customer: what are the cultural elements that, from your point of view, make up a self-learning culture?

You can then use this as a common thread throughout the entire journey: we would pack an evaluation backpack, unpack it again and again along the way, and at the end of the day use the radar to see whether anything has changed, one year after the intervention, for example. If you have a longer training process of nine months, at the end of which you want to see a different behavior, then you might look after six months to see whether the behavior is changing.

And then, after a year or a year and a half, you look at the culture again, because that is, of course, a delayed effect. A cultural change does not occur immediately after the training or the personnel development measures. It is a longer process.

And then you look again retrospectively, in summative terms; that is a form of summative evaluation. Then you see whether the culture has really changed at one point or another, without claiming that this is 100 percent causally related to the intervention.

But the intervention makes a contribution, because it is based on the dimensions and the spotlight criteria that we have previously worked out with the customer.

[Jen Shivel]
I just wanted to chip in and add the aspect of multi-perspectivity. I think multi-perspectivity is important here: you don't just look at these higher corporate goals. That's important too, of course.

You should also somehow link all the learning objectives with the company's strategic goals. And if you can't do that, then you should definitely take a step back and ask whether the learning program we have here is necessary at all. Should we rethink the whole thing?

This perspective is also important, but of course the learners' point of view is as well. You should always ask what their expectations of this learning project are. In other words, what do they want to learn from it?

And the same applies at the level of the project's developers and of the individual team leads. So you should bring everyone, or representatives of all interest groups, to the table and first clarify the goals and expectations. Then define good success criteria and make a plan for checking them. You can use different evaluation methods for this.

Andreas has already mentioned some of them. Interviews, questionnaires and not just happy sheets as questionnaires. You can develop and implement much better questionnaires.

You can then also, yes, simply use different methods.

[Susanne Dube]
Maybe I should briefly explain to the audience what I actually mean when I talk about a happy sheet, and why we talk about happy sheets in such a disrespectful way. Everyone who used to take part in classic face-to-face training probably knows the questionnaire handed out at the end that asked whether the windows were opened often enough, whether the teacher (no, not the teacher, the trainer) behaved well, was nice, was personable, knew the subject matter, and whether there was enough food and drink and it was warm enough in the room. You know it, right?

These are happy sheets, and you say your questionnaires are better. You talked about interviews, you talked about 360-degree feedback. These are much bigger topics, and hearing it for the first time, the way you approach the topic sounds very, very complex to me.

Could you maybe, just as Kirkpatrick had his four levels of evaluation, sort your approach into something like that? Or is there a reason why you don't want to do that right now? Can you briefly summarize our approach for me in five sentences and maybe also put it in relation to all the other approaches out there, like Kirkpatrick or Thalheimer?

Oh, you know a lot more than I do.

[Andreas Hohenstein]
Thank you for your question, Susanne, because the Kirkpatrick model was indeed a very clear point of reference for us in our conceptual work. Kirkpatrick has four levels in his core model. We have expanded the first level, reaction, to include the learning environment and the learning setting in which the reactions are embedded.

Here we look at the things that are important overall for achieving the strategic goals of the company. As I mentioned earlier with self-learning competence, it could be the topic of a constructivist learning environment: which elements does it work with? That is, we have expanded it, but in principle it is a similar level.

Kirkpatrick's second level is learning. We have adopted this too, but we have differentiated the topic of learning success a bit: there are cognitive learning successes, but there are also affective learning successes and competence-oriented learning successes, which, so to speak, are measured against a person's biography and are regarded as resources for future action.

And it was precisely this juxtaposition of the three types of learning success that was, so to speak, our conceptual expansion.

[Susanne Dube]
Wait, let me dive in again, because you're in full flow right now and I have to interrupt. When you say cognitive learning outcomes, I understand that, because with Kirkpatrick I had always understood it very clearly: I run a test afterwards and simply ask people whether they have understood it, how they would do A, B, C. But what do you mean by affective learning outcomes? And competence-based was the third?

[Andreas Hohenstein]
I think cognitive learning success is sufficiently well known. Affective learning success is particularly important to us when it really comes to accompanying continuous change processes. There are topics such as curiosity, how motivation develops from extrinsic to intrinsic, how to deal with that, how to reduce fear in transformation processes.

These are affective learning successes. Do I feel like it?

How do I, so to speak, transform reluctance into desire in my own learning process? And that is very, very important, because learning processes today are rarely there to learn and implement a pure fact. They usually take place in the context of some continuous change processes, of transformation processes.

And that is why these affective levels are so important, because they often also limit the cognitive learning success. When I am afraid, I also block myself from learning successfully cognitively. When I feel like it, I invest much more in the cognitive learning processes.

That is why it is important for us to supplement this level. With competence-based learning success, it is the case that there are requirements people must meet in order to fulfill their role. For example, it is extremely important for a sales representative to have strong communication skills.

I can train that at the knowledge level: communication models, theories. But the extent to which someone is willing to listen communicatively and to treat the other person with respect is not a question of knowledge. It is a question of attitude, of life experience and life models.

And that is what competency theories focus on: they ask, what makes you authentic as a person? What have you experienced so far? And how can you approach people with an open mind in your communication?

And how can you work on this? This is a completely different kind of learning success than cognitive knowledge, for example.

[Susanne Dube]
Okay, exciting. Now you have actually already contrasted the first two levels of Kirkpatrick with our model. Then we will continue with the last ones.

There are still two with Kirkpatrick.
How many with us?

[Andreas Hohenstein]
Actually, there are still three with Kirkpatrick and only two with us.

[Susanne Dube]
Okay.

[Andreas Hohenstein]
So at the behavioral level, we are in complete agreement with Kirkpatrick. Kirkpatrick asks: what do the learners show on the pitch? And we do exactly that.

The only difference is that we don't see ourselves as being in opposition to Kirkpatrick or as developing him further. We have simply considered a variety of methods for making it easy to reflect on behavior and action in the field: self-reflection, external reflection via feedback systems and, to some extent, measurement where key figures exist, even in the area of behavior, depending on what is at stake. That means we say there is not one tool, but a variety of possible tools.

We come with a toolbox and select the appropriate tools, and then they work. So it is very comparable. But we differ again at Kirkpatrick's fourth level, where, so to speak, he looks very closely at the results.

And then, in a further development of the Kirkpatrick model, there is also a level called Return on Investment. That means these fourth and fifth levels are very, very indicator-based. And you can see from the fifth level that it also gets very much into monetary values: we save a certain amount.

So if one euro goes into education at the top, two euros come out at the bottom. This is not something we want to neglect, but we have expanded it too. For example, we have combined this into one level, so our fourth level is called Return on Expectation.

And Return on Expectation really means that you can have very different expectations of learning. For example: we want to be more versatile as a company, and that is why we promote digital transformation processes.

That means, first of all, evaluating adaptability, which is completely different from evaluating IT fitness in digitization processes. Or saying: we have made processes more efficient through digitization, we have saved money, and a WBT has contributed this or that much; it may have cost 100,000 euros, but it saved 200,000 euros. We can do that too. But for us, it is important to also focus on the soft factors that are often linked to transformation processes. Then it may not be such hard key figures, but rather orientation values, qualitative statements that we can extract. So if you have a colorful bouquet of expectations for this learning project at the beginning of the journey, we are able to see at the end of the journey what this bouquet looks like.

Are they the flowers you wanted? Are they different ones? And derive hypotheses about what you could do to make it even better fit your actual objectives.

[Susanne Dube]
What a nice image, comparing the whole thing to a bouquet of flowers. And how wonderfully disrespectful that you said: we'll take Kirkpatrick, which is a good foundation, but then we'll build on it, and then almost ignore someone like Thalheimer or Kaufman, right? They weren't mentioned in your piece, at least not just now. Is there a reason for that?

[Jen Shivel]
Yes, we also looked at Thalheimer's model, and in our view Thalheimer also builds on Kirkpatrick to a certain extent. However, he has eight levels instead of five, and Thalheimer is a bit radical when he talks about his levels one and two. These two levels are not completely congruent with Kirkpatrick's levels, but they are definitely similar. In his level one, it's about the attendance and participation of the participants, and in his level two, it's about the learners' activities, that is, whether they were engaged in the matter.

And he says: we can collect data on these two levels, but ultimately it has no influence on learning in the end. And that's kind of the crux of it. He also talks about a third and a fourth level; level three for him is the perception of the learning experience from the learners' point of view.

So how did they feel about it? And at the fourth level, he is talking about knowledge. So did they learn anything at the end of this learning event?

So can they report facts at the end? On his levels three and four, you can also collect data in his view, but ultimately that too has only a minimal influence on actual learning. For him, it only really begins at levels five, six, seven and eight.

And his level five is about decision-making skills. Level six is about action skills. Level seven is about learning transfer, which is similar to Kirkpatrick's level three.

And with Thalheimer, the last level is the effects of learning transfer. This is also similar to Kirkpatrick's level four, only with Thalheimer it is a bit broader. For example, he also takes into account the impact on the environment when participants travel to the location, and whether this might have a negative impact on the environment.

So he also looks at things like that at level eight.

[Susanne Dube]
That's really exciting, especially in light of the current situation, where a lot of attention is being paid to carbon footprints and so on. I was at a really interesting session recently where he takes that into account. But we didn't want that, because we focused on something else.
Do I understand that right?

[Jen Shivel]
Right, so we looked at his model and we definitely see it critically when he says of levels one and two that you don't actually need to evaluate them; that attendance, participation and active engagement in the learning process are not important at all.

And that, for example, the perception of the learning experience is only of minimal importance. We are a bit critical of that, even if there are scientific studies that say similar things.

There was a study in 2011 by Gessler and Sebe-Opfermann. They examined the chain of effects between the different classic Kirkpatrick levels. So they did a correlation analysis, right?

And ultimately, the result was that Kirkpatrick's level one actually has little influence on level two, or rather on level three. So, in classic Kirkpatrick terms, the learners participated actively and were motivated.

Ultimately, this has little or very little influence on learning. But we still see the value in doing an analysis and collecting data at these lower levels.

[Susanne Dube]
Yes, I think it is still valuable for those who offer training. So especially for us it remains exciting, doesn't it?

[Andreas Hohenstein]
Exactly. And above all, we do it differently. So we really look at the learning environment and learning conditions.

[Susanne Dube]
Yes.

[Andreas Hohenstein]
And we take a closer look at the factors that influence the learning environment. So we don't just ask: did you like it, was the trainer good? We do that too, but it doesn't stop there. And we look at the other things as well.

That's why it's a bit different from the typical level one.

[Susanne Dube]
Okay, exciting. So actually we are somewhere between Kirkpatrick and Thalheimer, if I understand it correctly.

[Andreas Hohenstein]
Exactly. And also a bit of Kaufman. Kaufman has, at the highest level, a somewhat broader outcome, namely for the company as a whole, but even for society, that is, societal impact.

The societal aspect is a bit too much for us, but that's why we have a graphic for it, where you can see structures, processes and culture in the outer ring. It always refers to the company or the organization, but there it covers everything in the organization. And the expectations of what you want to change in the long term in the company with personnel development can be totally different.

And we are simply open to taking everything on board and seeing how we can best do it, without having a ready-made recipe for anything.

[Susanne Dube]
Okay. If we don't really have ready-made recipes, because we always want to be very customer-oriented: what is the benefit for the customer of implementing your evaluation approach? We are a management consultancy, we work for customers. So what does the customer get out of this new approach; not doing Kirkpatrick or following someone else like Thalheimer, but doing it with us?

Or what use might it be to us?

[Andreas Hohenstein]
Well, in principle, evaluation helps us to advise and support the customer strategically. For example, we currently have a project where the customer has formulated four strategic goals for the next three years.

A strategy has been formulated in advance and strategic goals have been derived from it. And then, of course, the question arises as to how education is structured in the company in order to promote these strategic goals. And at the same time, the company's academy, for example, has a kind of manifesto of what it understands by learning and what is important to them about learning.

And now, in the design of learning architecture, we are combining the two. That is, on the one hand, we say that no matter what your strategic goals and learning topics are, how can you align your architecture to help achieve those strategic goals? And how can you design it to fit your learning manifesto, your learning philosophy as an academy?

And from that, we then derive evaluation criteria and dimensions.

[Susanne Dube]
That means that basically, the big issue for us is that we don't think of learning concepts without evaluation, but that we think of the evaluation first and then, so to speak, put the learning concept on top of it.

[Andreas Hohenstein]
We think of them at the same time; not first, but at the same time. This means that if a company says, these are our four strategic goals and we want to align our educational processes more closely with them, then we operationalize these strategic goals. We then take this operationalization and look at the personnel development measures and interventions available today, or develop new ones together, which are then linked precisely to these strategic goals. It is, so to speak, an iterative process of developing both at the same time. At the end of the day, you have a concept and a portfolio for learning, and you also have a concept and a portfolio for evaluation.

And both are ready at the same time. That would be optimal.

[Susanne Dube]
And that actually has a knock-on effect on us as a company, because of course it also allows us to benchmark our own learning measures and all the topics we implement for customers. We can always check whether something fits the strategy, evaluate it again and again, and perhaps draw conclusions about our general offerings.

[Andreas Hohenstein]
Exactly, and that's why it's important, when we do learner journeys or digital fitness journeys for target groups, that we not only look at the beginning and the end, but also keep looking along the way, in loops, in iterative processes. Just as there are agile sprints in companies, we also do agile evaluation sprints. That's why our idea is really to pack an evaluation backpack, so to speak, with which you then go on a learning journey together with the customer. During the journey you have certain stops where you pause and ask: what has gone well so far, and what can we improve? We currently have a customer who does training in three waves. So we do an evaluation in the first wave, and we also look at certain things during the first wave. When the first wave is over, we look at how it went and what input we can take into the second wave; the second wave then starts at a different level.

[Susanne Dube]
I have the feeling that when you talk about your backpack, you always have a relatively precise idea of what you are talking about. Looking at it from the outside, I have the feeling that in the half hour we have been talking, we have gained a very small insight into it and, above all, an idea of where your thinking comes from. As for what the backpack is, we'll put a little of that in the show notes so you can take a deeper look. And I actually think that it's just as valuable for you as it is for me, curious as I am, to see what comments and questions we get from the listeners, and maybe what suggestions, offers and criticisms can still be brought to bear on our thinking, so that we can continue to challenge ourselves. With that, by presenting this, we have actually put the concept up for evaluation again, or am I wrong?

[Andreas Hohenstein]
We would be absolutely delighted to develop this concept further together with our customers, to challenge each other in order to promote what we said at the very beginning, the value of learning, and to have customers, cooperation partners, in other words everyone who is interested in participating, put the experiences they have gained on the table and develop it further. So we don't have any ready-made recipes, but we can bravely say that we have incorporated all of our experience into this.

We would be absolutely delighted to share them with others.

[Susanne Dube]
I think that's really great and I'd love to continue to support you in this. Tell me more about it in other episodes. Until then, I'll say, you have a warm invitation, dear listeners, to comment on this episode today.

Take advantage of it, I'm very, very excited about it and look forward to next time and thanks again to you both for being there.

[Andreas Hohenstein]
Thank you very much for the invitation.

[Jen Shivel]
Thank you Susanne.

[Susanne Dube]
By the way, have you subscribed to us yet? You can find us on iTunes, on Spotify, well, everywhere you want.

Just type “Lernlust Podcast” into Google. Don't forget to rate us there. And if you want to leave comments, just use LinkedIn or Twitter.

We're on there a lot. We look forward to seeing you.
