Preparing for the AI Revolution with Sultan Saidov, Co-Founder and President of Beamery and TalentGPT

“I think AI is no longer just about automation and efficiency. It’s about transparency, giving people an idea of choices.”
– Sultan Murad Saidov
Over the next 5 years, 6 out of 10 people will need to re-skill or up-skill. That’s the reality we’re faced with now as we stare down the AI revolution that’s unfolding and gaining momentum each day. On this episode of the On Work and Revolution podcast, Debbie interviews Sultan Murad Saidov, Co-Founder and President of Beamery and TalentGPT, the world’s first generative AI for HR. This conversation delves into how AI is transforming the recruitment process and provides insights into the ethical considerations for implementation.

Debbie & Sultan discuss:

✓ What’s shaking up at the intersection of HR and AI
✓ What CEOs and leadership teams need to be doing to prepare for the AI revolution
✓ The business risks associated with inaction and “waiting to see what will happen”

About our guest, Sultan Murad Saidov:

Sultan Murad Saidov is the Co-Founder and President of Beamery, the leading Talent Lifecycle Management Platform. Prior to Beamery, Sultan worked at Goldman Sachs. He studied Politics, Philosophy, and Economics at the University of Oxford. Sultan is a frequent speaker on AI, the Future of Work, and Talent Transformation, was listed in the Forbes 30 under 30 list, and is the host of the Talent Blueprint podcast.

 Sultan is now leading Beamery’s mission to create equal access to work for the world – helping every human to find the right job and achieve their career ambitions, while enabling businesses to create more human experiences for talent, unlocking the true potential of their workforce.

Helpful Links:

Follow Sultan on LinkedIn

Open for Full Episode Transcript

Debbie Goodman 00:49

Welcome to On Work and Revolution where we talk about what’s shaking up in the world of work.

I’m your host, Debbie Goodman, and today we have as our guest Sultan Saidov. So Sultan is co-founder and president of Beamery, which is a talent life cycle management platform.

Making waves with the launch of their new product, TalentGPT, one of the first generative AI tools in the HR tech space, which we're going to be talking about in a bit. Sultan has been involved in entrepreneurial ventures since his college days at the University of Oxford. Like me, he's an employee activist, and his work has focused on revolutionizing the way we approach talent acquisition and workforce development, leveraging technology and AI. He's a thought leader and a speaker. He shares his insights on the future of work and the impact of AI on talent transformation, which is an explosive topic right now. And today, we're gonna be talking to Sultan about TalentGPT, the ethics of deploying AI technologies, and most importantly, how best to prepare for what is really the AI revolution, which is here. So welcome, Sultan.

Sultan Saidov 01:21

Thanks for having me. I’m really excited for the conversation.

Debbie Goodman 01:30

Great, Okay, let’s start with TalentGPT. I saw this in a media release probably about a month or so ago, which I understand is a combination of your own proprietary AI, OpenAI’s GPT 4 and then some other leading large language models. And rather than go into the features of the product, which look amazing, I’ve had a little bit of a scan, and I mean, it’s super amazing. But my real question is, what is the problem that TalentGPT solves?

Sultan Saidov 02:54

Yeah. It’s the right question to start with, not just for TalentGPT, but I think for most new AI innovations. In our case, the problem that we’ve been solving as a company and the problem that TalentGPT is focusing on is skill-centricity. It’s very difficult for companies to give their employees career development opportunities based on what their potential is or what their interests are. It’s very difficult for companies to hire in a skills- and capability-centric way that’s fair. And one of the things that can make this easier, both for the companies that are making these decisions and for the employees and managers and candidates involved, is the ability to identify what the relevant skills for a job are. And before posting jobs, to decide: does this actually need to be a job? Could we create an internal project, or could we train somebody? TalentGPT is a new embedded assistant across a number of experiences that we power for managers, for recruiters, for HR teams, for learning teams, for candidates, that helps make those better skill-centric decisions, helping people decide: when can we train versus when do we need to hire? How do we best make those, you know, people choices? And the thing that really powers that skill-centricity, and where the AI comes in, is the ability to help identify skills. If you ask somebody what skills they have, it’s a really hard question to answer. Yeah. It might sound like the right thing to do. It’s great to focus on skills and capabilities, but how do you actually do it?
And it’s one of those areas where AI can add a lot of value by looking at billions of data points, looking at where do people work, what do they do, what kind of courses do people do, what kind of backgrounds do people have, and from that, you can infer what skills are relevant to what kind of roles, what kind of skills do you get from learning and then embed, you know, embed it into people’s day to day decisions when they try to open a job and when they’re looking at a course, create that kind of assistant of, hey, depending on what you wanna do, here’s some things that might be relevant and so on.

Debbie Goodman 04:53

Yeah. I mean, without going into too much detail around the product, the idea of the suggestions, the prompting questions that the technology offers the user, being the recruiter or the hiring manager, around a few steps to take and a few additional internal processes possibly to run before they even consider, you know, doing an external hire, for example. Those alone can really transform hiring processes that are often very inefficient, unwieldy and long-winded. We see it ourselves as executive search professionals: with internal processes, companies think they’ve done it, but they have no way of really determining, particularly in a large company, whether they have the skills internally and then how to efficiently deploy them, or even of suggesting individuals in the company who might be suited for a job. That’s a really hard task if you don’t have a lot of data or some kind of ATS or other system. So it’s a pretty transformational tool. More broadly, how do you see AI transforming the recruitment and talent acquisition process?

Sultan Saidov 06:26

It’s in some ways a new question, and in other ways a continuation of what’s already been happening since, you know, the beginning of the HR function. If you look at the very early days of how HR was formed, it was in response to the first wave of technology, the industrial revolution back in the 1700s, which suddenly led to people having to be trained and hired in a different way. And you sort of fast forward to the day of AI being born. And early AI began to transform the HR and recruiting function through more of the same mindset: more efficiency, more automation. What do you do with that? What does it mean if you automate payroll? How does that change people’s roles, and so on? But I think what is new in terms of what AI is doing is it’s no longer just about automation or efficiency. It’s about things like transparency, giving people an idea of choices. So for example, imagine some of the things AI did for our consumer experiences. If you use something like Google Maps, AI is something that can show you different routes you can take to a destination.

Do you wanna cycle? Are you mindful of the weather? And AI in that scenario is analyzing lots of information and saying, here’s some routes depending on your preferences. Now that component and that type of AI is what’s being very actively and deeply embedded in HR experiences.

Giving employees career choices and giving managers choices, to your point earlier, you know: do I train people? Do I hire people? How do I look at options? And I think the way this is, you know, impacting recruiting and HR is quite significant, because it’s not just a case of which jobs or people are going to be impacted by AI doing the work for you. It’s really a case of how the experiences we have in doing jobs, both as people working in HR or people who are hiring, how can those roles become more fulfilling or assisted? And at the same time, you know, there are new responsibilities. Most companies are having to figure out how to help people re-train or re-skill themselves. You know, there’s research. I was recently at the Future of Work event from the World Economic Forum, and their latest prediction is that over the next 5 years, 6 out of 10 people will need to re-skill or up-skill.



Debbie Goodman 08:17

Wow. And those are knowledge workers, right?

Sultan Saidov 08:59

Well, the estimate applies across the board, but knowledge workers are probably the most impacted by the latest kind of AI innovations. And this is the sort of cycle of how quickly people’s roles are changing and shrinking, you know, every few years. But the question is not just what that means, but how we can help navigate that change, you know? What information can you give to people to say, well, if you’re working in this role, let’s say customer support, here are some of the things that are central to your role, let’s say creating knowledge articles or interacting with customers, that may be relevant in other parts of where the organization is heading or in other roles. And here are some courses you can do or experiences you can have that can help you move in those directions. So I think there’s this new role of people, teams, and functions that isn’t just about hiring and training, but about providing that transparency and having the empathy to help people get the most out of this, while also being mindful of the fact that there are risks, you know, there are both opportunities and risks in these technologies creating this kind of change.

Debbie Goodman 09:39

Right. So firstly, I love the analogy around Google Maps, because, you know, you talk about AI-assisted suggestions, different pathways and routes, particularly in the careers landscape.

I mean, that’s a really great analogy that everybody can relate to. So thanks for that. Gonna use that going forward to explain to people, because I’m still encountering people who are going, AI, what? Believe it or not. And then let’s just pick up on the tail end of that, which is risks. So let’s talk a little bit about AI ethics, which is another very hot topic. We’ve seen some business leaders wanting to pause the development of AI whilst we consider what the impact is, and we know that that’s not gonna be happening anytime soon. The race is real, and it’s on. No stopping it now. But let’s talk about ethical considerations as we embark into this new field of AI-assisted technology, AI-augmented and, in some cases, AI replacement. I mean, let’s be real. The Goldman Sachs report, IBM saying that they’re, you know, freezing hires where they see roles that could be replaced by AI. I mean, that’s scary shit, right? So what needs to be taken into account when implementing AI technologies in the workforce? How can this be done responsibly? What do people need to be thinking about? Even those who are still at the very early stages of going, okay, we could use this to our advantage right away.

Sultan Saidov 11:37

Yeah. It’s naturally a very complicated field, AI ethics, and the choices involved. There’s even a question of: is it ethical to not act or not do something when these things are already happening and evolving? And how do you think about, you know, what the responsible thing to do is in a world where people are already using these technologies no matter what you do? I think there are a couple of frameworks that can be helpful. The first is: what can you do to provide people with more transparency? You know, rather than starting with what can we automate or what tool do we use, how do we, whatever our organization is, wherever our people are, give people insights into the trends that are happening and help them understand the potential risks and implications of those things? For instance, to your point around the Goldman Sachs report, there are clearly some people who have started being impacted in certain companies that have started making choices.

But we’re already seeing that some of the people who were told they were at risk are being the fastest to actually benefit. For instance, last year there was a lot of talk of how the creative industry was going to be entirely replaced, because a lot of it was generating art and images. And yet, this year, one of the fastest-rising sets of roles in this new sort of AI-enabled world, like prompt engineering, etcetera, is being filled faster than ever by exactly the creatives who previously might have had their jobs impacted. And providing transparency that there is this wave of new roles and these opportunities, and you might wanna think about them, and that there is this set of things happening, even if you don’t act on those choices, it gives people insight around: what can I do with my skills? Where am I heading? And I think, you know, in every business, this idea of how do we provide transparency, what do we let people try, can look a little bit different. We see some companies starting to just organize lunch-and-learns and sessions and debates around, like, this is the field and this is what’s emerging, because it’s not necessarily helpful to either ignore it or run at it, but at least evaluating, you know, this is what’s happening, can be helpful. But the second framework, I think, that’s helpful is looking at the types of impacts on people’s immediate lives that can be beneficial. For instance, I actually spoke to heads of AI for places like homeland security and kind of government organizations, and one of the findings from the early wave of this new kind of conversational AI is that we can clearly see areas where roles might be at risk and so forth, but also areas where people are getting a lot of personal value.
For instance, when it comes to mental health, there’s a lot of cases both for people who are retired, but also people who have discomfort speaking to mental health professionals, where new AI conversational technologies are really helping people engage in difficult conversations. And also, in some cases, deal with loneliness and many other things that have been previously not really relevant to the AI conversation.

Sultan Saidov 15:02

Recognizing that, you know, these positives are happening and that there is a way for people to try this out to see if it helps them, there is, you know, an ethical consideration for how we let people explore this and recognize where the benefits are. And then on the flip side, there are roles that are being impacted quite quickly. You know, I mentioned customer support as an example. Lots of organizations are looking at roles that were previously sort of in the domain of knowledge work, but are now considered to be more easily automatable by some of these AI tools. But there’s an ethical responsibility to recognize that the people who work in those functions can be valuable across the business in other areas, rather than a business saying, well, great, we’re gonna bring in this tool and automate X.

The first ethical responsibility there is to say: before we do that, how can we take care of our people? How can we think about what else they could do to bring value across the business?

And one of the challenges with decisions like that is time horizons. I mentioned the industrial revolution earlier. And when you look at it through the lens of hundreds of years later, people can say, well, great: you had all these people who were, let’s say, working on farms, and then you had tractors come in, and a lot of the people driving the tractors were previously the farmers, and unemployment seems to have gone down, so it kind of worked itself out. But that didn’t happen by itself. You know, people formed labor unions, child protection laws were created, and for long periods of time there were spikes in unemployment, and that would have been a lot of people with major anxieties. And so, you know, even if, let’s say, a few years from now, some people who lost their role today would find better roles, that’s still a few years of really heavily impacted lives. And I think that’s where there is an ethical responsibility to try and consider, you know, where are these options? How do we maybe slow some of these things down, which is one of the things that, you know, regulation is trying to do, and which people might debate: well, what’s good for innovation? What’s bad for innovation? But really, the question is what’s good for people’s well-being? And what’s the sort of route out of this that can help us all end up with more fulfilling jobs and, you know, a better society? And I think that remains to be seen, but we’re certainly right to be analytical and think about it.

Debbie Goodman 17:23

I mean, I think that does remain to be seen. For the most part, many companies, particularly large ones, have not had a great track record of considering employees first when there’s been new technology.

It’s often been: well, we could do away with X number of jobs as per the report, let’s go ahead and see what we can streamline here. And so I think that justifies absolutely thoughtful consideration, but also an expectation that there are likely to be some ebbs and flows, a bit of a roller coaster ride, as we transition with this new enabler that in some cases can be so amazing. I mean, we know that people are suffering from burnout, the overload of work. The fact that you could have an assistant that can reduce some of the work so that you could just get back to an even keel of some sort? Some people can’t even imagine it. It would be amazing. People are overworked pretty much everywhere. So I’m interested to see how companies over the next 3 to 6 months, particularly the large ones, start to react and respond, and how considerate and thoughtful they are. I read a survey by Baker McKenzie, done at the end of 2022, so it’s probably a little out of date. They did a North American AI survey which indicated that business leaders, most certainly at that point, were under-appreciating the potential AI-related risks to their organizations. At that time, only 4 percent of C-suite-level respondents said that they considered the risks associated with using AI to be significant. And under half said that they have any kind of AI expertise at board level. I mean, I can confirm that. Leaders at board level, at executive level, expect the people below them to get these AI skills, but they themselves are really not equipped at all. And granted, since the end of 2022 to now, in the 4 or 5 months of this year, there have been astronomical developments in AI adoption and awareness. So this is likely to have changed. But for CEOs and C-level leaders, there’s certainly a massive overwhelm: where to look, what to focus on, what AI products to consider. Yeah. And certainly, obviously, this big question around people and jobs.
So, you know, how would you guide a CEO or a leadership team on what they need to be doing to prepare for this AI transformation?

Sultan Saidov 19:42

I would start by focusing on one of the pieces you touched on, which is the people-first side of it.

You know, most businesses would agree that people are their greatest asset, and that that’s what drives how the business operates.

Debbie Goodman 19:45

Well, they say that, Sultan. They say that.

Sultan Saidov 20:00

That’s it. Most businesses agree that that’s the truth they want. The question is: to what degree is that real? And how can you use a moment like this to really make that real and lean into it? I think for many people, it’s scary. You’ve touched on the examples. And a responsibility for business leaders is to not only recognize that, but to help people see the opportunity, see the risks, and also, in terms of business goals and priorities, figure out what is most relevant to them. And, you know, for some businesses, it might depend on who their customers are and how this is impacting them.

For others, it might be based on what your competitors are doing. You know, every business is different. But what is universally true is that the way to leverage and deal with this situation and this evolution is through your people, whether it’s in your team or your department or your organization or your business as a whole. For example, you know, one of the companies we work with, which has, I think, done a really great job of this, is Salesforce, who have been thoughtful around how they’ve embedded AI into their products rather than rushing to make some very quick decisions.

They’ve worked with AI for many years, but with the latest advent of large language models, etcetera, they’ve leant into that. But they’ve also created AI ethics councils and AI groups that try to be really considerate around: how do we, within the business, create frameworks for people to think about what is ethical, how people can be involved and have a voice in it, and how we can hear, you know, where people’s anxieties or opportunities are, what people are excited about and what people are concerned about. Because this is a time of acceleration, you know, everything is moving faster in some way. And that’s the time where you have to be, as a business leader, really considerate of how to give your people the right opportunities and how to reframe the objectives you have in the business and the opportunities you create for people to help with those.

Debbie Goodman 22:01

Okay. So a lot of consultation and engagement and communication, as with any change. I mean, think of a few years ago with COVID, the pandemic. I mean, there was a lot of engagement. Do you think that level of engagement is happening now, or only in dribs and drabs? Sporadically?

Sultan Saidov 22:25

I think it’s already a very wide curve. I mean, look, it’s only been a few months since the latest kind of large language model announcements. And most larger companies, and even many small ones, at least work on quarterly goals. And so I think for many companies, this is yet to truly kick off. You know, it’s sort of this thing on the horizon. And perhaps the next couple of months is where it will really kick into action, as people start replanning the second halves of their year. Other companies might wait until next year because of the dynamics of how their business moves. And I think that can be risky, because for people themselves, this is moving much faster, even if the company is on an annual cycle. For people, this is real, and this is happening now. Other companies have moved very quickly already, including some of the world’s largest companies, certainly the ones in the technology space. Many have formed new units, new groups, new councils. And I think that’s creating a rapid advantage for many of those businesses. And sometimes it’s actually, you know, more of an advantage for larger businesses than small ones, because you can be much more sophisticated in how you build and use these models. Some of the work to do this is expensive. You know, many smaller companies can’t afford to build or train their own AI models, etcetera. So it all depends, you know, on your constraints and your opportunities what the best action is. But, certainly, it is important to start making some decisions quickly around what matters to us.

You know, do we have to think about new products? Do we have to think about helping train people, and who might be most impacted? I think inaction and waiting to see what happens as a business leader is very risky, given, you know, the pace of how quickly this is impacting people’s lives and how quickly it’s evolving company strategies.

Debbie Goodman 24:38

Yeah. I mean, I’m yet to see it. I know that on LinkedIn there has been a massive escalation in the number of jobs that require GPT or generative AI skills and capabilities at sort of individual contributor and technical expert level. At leadership level, I’m yet to see roles and job descriptions that have that as a requirement, whether it’s the ability to gain insight from the data analytics or the ability to oversee AI programs. And that could be, you know, a function of the size of the company.

And as you say, larger organizations have a little bit more budget to hire specific skills or get in the right consultants or add somebody to their leadership team who’s got the specific knowledge.

But even at a small business level, I think, because the technology is so accessible to the consumer. Certainly for my team, I have directed everybody to figure out how they can use ChatGPT, at the very least, in their work. Today, I had somebody say to me, oh, one of our clients is looking for a new interview guide. Does anybody have something? I was like, just ask ChatGPT. You’ll get it in 30 seconds. We’re not necessarily thinking like that automatically.

Sultan Saidov 25:31

I was at this generative AI conference and somebody made a point similar to what you’re saying. You know, back in the day, there was the story of, when you’re having a meeting with your team, having an empty chair for the customer. I think Jeff Bezos at Amazon probably started that. And somebody commented, well, for all these panels we have talking about AI, we should probably have a seat for ChatGPT. What does it think? And it’s a funny question. But it’s an interesting example of the point you’re raising, which is that this isn’t necessarily the time to just sit, you know, in a vacuum and say, what can we do? It’s a time to bring people into that question and give people the headspace to think about it. You know, we’ve been creating forums for conversation, because people will have different areas of expertise and therefore see different things that might be interesting or useful across the business. And we, as a company, you know, we run hackathons where people can come together and even try projects around it. And, you know, the fun thing is that in the past, a word like hackathon sounded like it has to be an engineer that does it. Whereas now, more than ever, you don’t have to be an engineer at all, you know? You can never have seen any code in your life and actually build websites and applications with this thing. And you don’t even need to do that to see how it could be useful, right? I think it’s an interesting time because it can give people an opportunity to be creative and try things in a way that otherwise might have been inaccessible. But it’s also really valuable to have people from different perspectives, you know, whether you’re on the sales side of a business or the customer side or whatever it might be. They will bring a different mindset, different experiences, different empathy to see how this could help you and help your team. And I think it’s important to give people that headspace.

Debbie Goodman 27:32

I am definitely seeing that there’s sort of a range in any organization. There are the natural early adopters, and then there are those who just don’t even wanna look, because it’s just overwhelming. It’s causing a lot of anxiety, particularly in clerical jobs. I mean, we’ve just seen the research that was put out by the World Economic Forum summit on the number of data and clerical jobs that are definitely going to be impacted. So what advice would you give to those people? People in, you know, daily jobs that are not at leadership level, but are potentially severely impacted in the upcoming time, who are anxious about what they hear, but nobody really knows yet how it’s gonna impact their work, their career, their livelihood. What do you say to these people?

Sultan Saidov 28:10

Look, I think of a couple of different messages, one of which is that not everyone’s in the same boat, so inherently it’s difficult to give a message or advice universally, right? I think it’s important to be empathetic and highlight, firstly, that people have to be empathetic to each other. This is similar to how, you know, when we had the pandemic and lockdowns, people ended up behind the same Zoom screens, but in different boats. But, you know, it’s easy when you go through these times of change to be unintentionally unempathetic about how your colleagues might be feeling and how to interact with folks. So I think, firstly, it’s a time not just for self-reflection and self-development, but also for empathy and thinking about, you know, being thoughtful and kind to others. But in terms of, you know, what people can do themselves, it’s a really exciting opportunity from the perspective of the options to learn and try things. I’m a fan of this educational website called Khan Academy, and the founder of it recently gave a TED talk where he talked about how quickly they’ve been able to improve people’s ability to get personal tutoring through technology on topics that they might find interesting. And for people who are curious and want to see how they can try and learn and say, well, what am I gonna do about this? How could I use this? There is a growing world of companies and products and educators that are packaging and bundling, you know, these new technologies into courses and ways of learning. And I think it’s a great time to explore and engage. Obviously, not everybody has the time to do these things, but it’s becoming something that takes less time.

It’s becoming easier and more digestible to try these things out and learn new things.

And I think to whatever degree, you know, people have time to try these things.

It’s really worth trying to engage in that and exploring. You know, we even have a tool we built that’s like a career discovery coach helping people ask questions like, but what does it mean for me? Well, based on your job title, here’s the kind of things you could think about. And there are lots of other technologies that are starting to do similar, you know, educational, conversational interactions.

And so I think it is, you know, a time to self reflect and potentially try these new technologies to learn new things and try to develop in whatever direction opens up as interesting.

Debbie Goodman 30:57

Yeah. Well, thanks for the reminder. Sal Khan’s TED talk has had, I don’t know, how many millions of views, but we’ll actually include that in the show notes, because I think it’s a really great example for anybody who’s feeling overwhelmed, or who hasn’t yet started to experiment, to see, first of all, the amazingness of the technology, but also, you know, the really positive spin on how this can improve our work and our lives. Just to end off, because, I mean, listen, we could be spending all day talking, but I try to keep this to 30 minutes or so. You’ve just returned from the World Economic Forum Growth Summit, where the big focus is on AI. What were your key takeouts from that? Aside from the reports that we can all read?

Sultan Saidov 31:54

Yeah. I think there were a number of really important themes, not just the future of work and the future of AI, but the future of people’s welfare, and some of the topics we’ve talked about. I think the biggest takeaway was actually around the different vantage points for looking at skill-centricity.

You know, there were people from education groups, there were people from government institutions, and this idea of how do we help people lean into learning and development, how do we help companies do that, how do we help educational institutions do that, how do we help people do that for themselves, was a really big theme with lots of different viewpoints. But there was also a theme around the opportunity for this to help equalize things across the world. You know, it’s certainly a development over the last couple of years, accelerated by this AI, how much easier it is to access opportunity and how much easier it is to teach yourself without necessarily having had the privilege of going to the best school or being in a certain country or environment with more access. And these things can take time to really manifest. But we are seeing the world open up in terms of access to education, opportunity, skills, and work, and a big topic was, well, how do we lean into that and really help that very positive, you know, movement? There was also talk about how the generation that is perhaps most impacted by this, you know, people in their twenties, are disproportionately in countries in Africa and in places in the world where this question of how you really help people lean into these opportunities is perhaps even more important than in countries with older populations. And I think these topics are important, and it’s not just a question of, you know, having immediate answers. It’s a question of what we can, you know, light up as opportunities to lean into this as a positive force while being mindful of where the risks are.

Debbie Goodman 33:28

Well, on that note, a perfect summary to this conversation. Thank you so much for sharing your insights. It has been fascinating. Thank you for some guidelines and advice. We’ll be sharing this with not just our listeners, but our entire network and it’s been a real pleasure. All the best for all the things AI related in your world and I look forward to engaging again sometime.

Sultan Saidov 33:39

Thank you very much. Been a real pleasure.

Debbie Goodman 33:39

Bye now.


