
At Digital Wonderlab, we were proud to take part in EntreConf 2025 with our panel, “AI: Opportunities Abound?”, where we explored the ways businesses are embracing AI, not just as a shiny new toy but as a powerful tool to increase speed, improve processes, and respond to the new challenges this technology itself is creating.
The conversation was lively, with a range of perspectives—from those building AI-first products, to those like us, using AI to create better solutions for clients and enhance how we work internally. Across the board, one thing was clear: AI is no longer a distant promise. It’s here, it’s being used, and it’s evolving fast. But have we reached the point of true transformation? Not quite.
Riding the hype wave
The consensus was that we’re still firmly in the hype phase of AI adoption. The technology is making headlines daily, sparking excitement, anxiety, and a lot of experimentation. But it hasn’t plateaued, nor has it fully disrupted most industries in the way many expect it eventually will.
For many of us, the big impact hasn’t landed yet. We’re seeing incremental gains: faster prototyping, smarter content generation, better process automation. But the societal and systemic side effects—the job shifts, ethical debates, and trust challenges—are still brewing under the surface.
Unlike any tech that came before?
One of the more interesting points of discussion was whether AI is truly different from technologies of the past. Unlike past innovations that were designed for a specific purpose (think: the calculator, the word processor, the database), AI is a shape-shifter. It can be applied to hundreds of problems across multiple domains, from writing and analysis to diagnostics and decision-making.
This flexibility is what makes it both exciting and difficult to regulate or fully understand. It’s also what makes it such a powerful companion in sectors like healthcare and environmental science, where time-critical insights and complex data need to be acted on quickly.
However, as AI becomes more embedded in how businesses and individuals operate, a new issue is emerging: access. Many of the most capable models today are locked behind subscription paywalls. While free tools are still available, there’s a real risk that access to the “best” AI becomes a matter of who can afford it. In this scenario, digital inclusion could suffer, and we may start to see a divide between those with access to enhanced productivity and insight and those without.
The problem of echo chambers and AI bias
One of the more thought-provoking conversations on the panel was around reliability. AI tools like ChatGPT are only as good as the data they’re trained on—and increasingly, they’re learning from the content we ask them to generate.
The risk? A closed feedback loop. Incorrect or biased information gets generated, then circulated, then regurgitated, slowly becoming the “truth” AI models learn from. As we move forward, we need to ask: how do we keep our inputs clean? How do we ensure that the data we rely on to train and inform AI tools remains diverse, accurate, and free from the distortion of repetition?
This challenge is further complicated if access to the most up-to-date or accurate tools is restricted to those who can pay for them. If communities or organisations are limited to older or less capable models, their ability to innovate—or even to question inaccurate AI-generated content—may be constrained.
Legal lag: Who owns what?
Another area that remains murky is ownership. Many AI tools are built on models developed by large organisations, and yet they’re used by businesses, freelancers, and teams like ours to create outputs.
So, who owns what? Is the code, content or artwork generated by an AI truly yours if the model itself belongs to someone else?
Right now, these questions are mostly philosophical or theoretical, but it’s likely they’ll come knocking sooner than we think. Legal frameworks are lagging behind the pace of AI development, and many businesses are building tools and products without considering the long-term implications.
Where do we go from here?
The opportunities with AI are abundant—but so are the unknowns. For us, it’s about using AI intentionally: not as a replacement for human creativity or ethical decision-making, but as an enhancer of what we already do well. We’re excited by the potential to solve bigger, more complex problems faster.
But we’re also mindful. Mindful of the data we feed in. Mindful of how we review and verify outputs. Mindful of the humans behind the tools—our team, our clients, and the communities they serve.
And, increasingly, we’re mindful of how access to AI could shape inclusion. As the best tools shift behind paywalls, we’ll need to work even harder to ensure that innovation stays open, accessible, and equitable for all.
We’ll continue asking the hard questions. Because while AI might be the tool of the moment, the responsibility for how we use it still sits firmly with us.