AI can now turn a whitepaper into a podcast voiced by your CEO. But should it? We explore how B2B brands can innovate using synthetic media without losing the trust that underpins their reputation.
At SE10, we had a moment recently that felt like a glimpse into the future. We fed a blog post into an AI tool and, a few minutes later, listened back to a relatively convincing, conversational podcast on the topic, complete with two distinct hosts.
This was the source blog post: Less paper, more purpose: Rethinking waste at industrial trade shows
And this is the resulting podcast:
The potential was immediately obvious. For our B2B clients in complex industrial sectors, this technology offers an interesting new way to make dense, technical information accessible, engaging, and digestible.
But as PR professionals, whose work is built on a foundation of authenticity and trust, we must approach this new frontier with a healthy dose of critical thinking. The question isn’t just can we do this, but should we? And if so, how?
This isn’t a distant, theoretical debate. In June this year, payments giant Klarna launched an AI-powered hotline enabling customers to have a one-to-one conversation with an AI avatar of their CEO, Sebastian Siemiatkowski, trained on his real voice and insights. Not to be outdone, Zoom’s CEO Eric Yuan also has an AI-powered digital twin. These brands are using AI to be in more places at once, handling tasks from earnings calls to gathering employee feedback.
It’s easy to imagine the next step: a CEO’s voice clone hosting a podcast series, turning a technical whitepaper into a fireside chat, all while the real executive is in a board meeting. But what are the implications for reputation?
The opportunity: Content creation on steroids
Let’s first acknowledge the positives, because they are significant. AI-generated audio can be a game-changer for B2B communication, with benefits including:
• Radical accessibility: Turning a detailed case study or whitepaper into an audio format makes that content easily accessible to everyone from auditory learners to busy commuters. It’s a genuine step forward for inclusivity.
• Simplifying complexity: A conversational podcast format, even with AI hosts, can break down jargon and complex ideas into a more natural, understandable dialogue for customers, journalists, and investors alike.
• Unlocking expertise: Your top engineers and subject matter experts are your company’s greatest assets, but their time is finite. AI enables their approved, scripted insights to be transformed into scalable content, freeing them up to focus on innovation.
• Maximising ROI: One piece of core research can be repurposed into a multi-episode podcast series, short audio briefings, and internal training materials, multiplying the return on that initial investment.
The risk: The deepfake in the room
The central conflict, as we see it, is a classic tension between efficiency and authenticity. If your audience listens to a podcast believing they are hearing directly from your lead engineer, only to discover it’s an AI clone, the reaction isn’t admiration for the tech. It’s a feeling of deception.
This is the ‘deepfake dilemma’ and it threatens to erode the very trust that good PR is meant to build. The risks include:
• The erosion of trust: A single instance of perceived deception can breed suspicion across all communications. Every subsequent piece of content, whether real or synthetic, will be viewed through a lens of scepticism.
• The ‘uncanny valley’: Today’s AI is good, but it often lacks the subtle imperfections of genuine human conversation – the slight hesitation, the spontaneous laugh, the authentic rapport. An overly polished podcast can feel sterile and hollow, undermining the goal of creating a human connection.
• Accountability and error: What happens if the AI mispronounces a critical technical term or presents data with a misleading tone? If it’s using a cloned voice, the reputational damage falls directly on the individual. A minor AI error becomes a major executive gaffe.
A framework for responsible innovation: Augment, don’t deceive
So, is this technology exciting or worrying? It depends entirely on the context and, crucially, on transparency. Audiences are surprisingly forgiving of new technology, but they are unforgiving of being misled.
At SE10, we believe the guiding philosophy should be for AI to augment human expertise, not to impersonate it.
Here’s how we’re advising our clients to think about it:
1. Low-risk / high-reward: Start by turning written content into audio using a designated, high-quality synthetic voice. Don’t clone an employee. Instead, brand the voice – give it a name like “The SE10 Digital Brief.” Be explicit in the podcast description that it’s an AI presenter. This delivers the accessibility benefits with zero risk of deception.
2. Medium-risk / engaging reward: To create a more dynamic, conversational podcast like the one we trialled, use two distinct AI personas. Create branded hosts and be clear that they are AI presenters. The key is that the script is still meticulously reviewed and approved by a human expert. The AI is the performer, not the source of the knowledge.
3. High-risk / the CEO clone: Using a cloned executive voice is dangerous territory. While Klarna is using its AI CEO for feedback channels, using one for external-facing thought leadership is a significant reputational gamble. If this path is chosen, it demands radical transparency. Every communication must carry a clear, upfront disclaimer: “You are hearing a message from our CEO, delivered using an authorised AI voice.” Anything less is courting a PR crisis.
The journey ahead
The pace of AI development is exhilarating and, at times, daunting. We are all on this learning journey together. As AI makes content creation easier and more efficient, the strategic and ethical counsel on how to deploy that content becomes the most valuable part of the equation.
The future of AI in PR isn’t about becoming prompt engineers; it’s about providing the critical, human-centric judgment that a machine cannot. It’s about building and protecting the one thing AI can’t generate: genuine trust.
Are you navigating the challenges and opportunities of AI in your communications strategy? Let’s talk about how to innovate responsibly and protect the trust you’ve built with your audience.
Hannah Kitchener
Associate Director
About the author
Hannah is an associate director in the UK, leading strategic campaigns for industrial clients across the EMEA region. A professionally qualified journalist (NCTJ), she combines specialist sectoral knowledge in construction, energy, and materials handling with a strong network of trade media contacts to secure valuable coverage. Her expertise in inter-cultural communication, honed by degrees in modern languages and translation, is key to executing campaigns that succeed across diverse European markets.