Issue 137: The Real Cost of Ignoring Digital Responsibility Ft. Charles Radclyffe, CEO @EA

November 30, 2025

In this edition of Coffee with Calyptus, we sit down with Charles Radclyffe, founder of Titan, former CEO of BIPB, ex Head of AI at Fidelity International, and now CEO of EA, the AI-driven autofill platform transforming corporate workflows. Charles shares hard-won lessons from bootstrapping a kitchen-table startup, scaling a global data consultancy, and navigating the complex realities of ethical AI inside a major financial institution. His journey offers a rare mix of sharp operator insight, humility, and bold clarity on where AI is heading.

Charles, you founded titan.co.uk in the early 2000s and later scaled BIPB to $9 million in revenue. How have these entrepreneurial experiences shaped your journey?

Well, they couldn’t be two more different experiences!

Titan was a true startup. It quite literally began on my parents’ kitchen table, and when they got fed up with me taking over the house, they forced my hand to go off and find proper business premises.

It was then a 5-year long rollercoaster of ups and downs, and the truth is I always hoped that it would fund my degree (which I was doing simultaneously). Actually, my parents hoped that it would fund my degree also, but they ended up funding both!

Looking back, it’s an experience I’m grateful for – it was like my personal MBA programme – and I learned a great deal about the difference between management and leadership, along with hands-on skills in every part of business operations, from PR to credit control. However, those were also the very days that some of the great internet companies were born, and while Geek Squad went on to dominate the space we were trailblazing, the greatest lesson I learned over that time was the need to focus! Doing two full-time jobs (as entrepreneur and student) was never going to result in the success of either… I guess that’s why some of the startup founder greats are university/college drop-outs!

As for BIPB, it was also a 5-year stint, but that’s where the similarities with Titan ended. For one thing, I joined as an employee, and for the first few years I helped the founder scale up the business from its roots before leading a management buy-out with fresh capital to grow, ahead of its eventual sale to Keyrus in 2014.

Whereas most of our business at Titan was B2C, BIPB was firmly a B2B proposition, given we were selling data analytics software and expertise to banks and other financial institutions. I had been fortunate before BIPB to have had a few years’ experience working in a couple of banks, so this wasn’t as daunting a jump as it would otherwise have been. That said, I definitely felt a high level of imposter syndrome the first day I rocked up for a sales meeting on Wall Street. Perhaps even more so a few years later when I wound up living on Wall Street!

The BIPB years were a lot of fun for me. Again, there were ups and downs on the journey – but mostly ups. I got the opportunity to travel to nearly 50 countries, sold millions of dollars’ worth of deals to complex organisations, and (mostly) delivered what we promised. Still, the headaches of a nearly 100-person organisation weighed heavily at times, and my exit from the business was truly bittersweet. Sweet because I was glad to be free from the trials and tribulations of running an under-capitalised business, and bitter because becoming a divisional manager for a French multinational consulting firm was never going to be a job I held down for long!

As Head of AI at Fidelity International, you championed ethical AI adoption. Could you share some actionable insights from that role for those building in AI who are inclined toward ethical practices?

While from the outside it might be perceived that I was championing ethical AI during my time with Fidelity, I think the reality was more that Fidelity was the sort of organisation which necessitated doing the right thing, and so it was intrinsically, and necessarily, a core part of my work there.

It’s like how I imagine everyone who works for Mercedes F1 to be inherently competitive and out to win, regardless of how nerdy their day-to-day engineering might be; similarly at Fidelity, I felt a strong drive through the culture of the firm to do the right thing – and this just happened to manifest in my work with a greater emphasis (and perhaps more early-adopter thinking) on ethical AI.

I was delighted to see, after I left Fidelity, that the firm went on to create a ‘digital ethics’ fund – essentially an investment thesis aimed at steering capital to the tech companies doing things right. However, I can also see that it’s been a real challenge for them to get investors to put their money where their mouth is, and to deliver investment returns that make the fund an attractive proposition!

I think this is the core problem we face as an AI industry today. Doing the right thing is probably not always compatible with delivering the best short-term investor returns – and if AI is the core technology driving the 4th industrial revolution, and data is the new oil, then we’re definitely living in ‘drill-baby-drill’ times.

For people reading this and seeking practical guidance, I’d point them back to the paper I co-authored during my time at Fidelity. It’s probably not the most riveting of reads, but I think it makes a couple of important contributions to the field, namely that AI governance is truly an ESG topic and should therefore be handled and implemented in the same way.

I personally find it surprising how few organisations, even today, point to digital responsibility in their ESG strategies, and yet sadly we see so many examples of firms whose failures in digital governance lead directly to revenue and capital value loss. I suspect that, £ for £, the risk-weighted return to investors of solid governance over unbridled enthusiasm for innovation is far greater. But we’ll perhaps have to wait a few more years before that data is in the public domain and sufficiently well understood for investors to truly demand change, and for the early work I did at Fidelity to become more mainstream.

Transitioning from EthicsGrade's AI-driven ESG datasets to CEO of EA, what challenges did you face in tackling the chaos of corporate forms with intelligent autofill?

I think the greatest challenge in transitioning from building EthicsGrade to our EasyAutofill platform at EA was not really one of business model (EthicsGrade being a market data/ratings agency and EA being a SaaS/agentic-AI platform) – but more the challenge of admitting to ourselves that we were causing a problem for our users, and that we should instead focus on fixing it!

Having built a ratings agency on leaving Fidelity, we learned first-hand all the issues such organisations face. How do you collect data? How do you engage with corporates? How do you refine, analyse and create value-add insights from the data to make it commercially viable?

Shamefully, it took us years to realise that we were causing pain for our users (and when I say “we”, I guess I really mean “me”)! The truth is, much as you and I are not the customers of Facebook – we are just users – we had the same approach to large corporates. We focussed on the “UI” of our platform without focussing on the “why” – and it was only when we learned that users didn’t really care about the UI at all, because they ultimately just spent their time dealing with the questions we posed offline via spreadsheets, that we had our eureka moment.

At this point, we went through a second learning, which was that while we had been focussed previously on the interaction between corporate and investor, there is a much bigger problem in terms of information flow between companies who sell B2B.

Whether it’s bids, tenders, or RFPs at the beginning of the commercial journey, or InfoSec questionnaires at the point of closing deals, or ESG surveys at the point where commercial relationships are matured and fall under the annual procurement and supply-chain reviews – each of these activities creates operational spreadsheet-driven workflows that traditional software platforms have no answer for.

I think it’s funny now to look back at all this, because if I were to summarise much of my career (certainly since my first banking roles, through BIPB and later), it’s been about eradicating spreadsheets. Now I’m embracing them by automating their production – but the value arrives much faster, and is arguably much greater.

While EthicsGrade was never the great commercial success we all hoped it would become (perhaps down to our over-emphasis on digital responsibility?), it certainly gave us a unique perspective when it came to building EA; a perspective that I think will be very helpful as we outcompete other startups in the agentic-AI space – as very few other founders will have been responsible for the very pain they are now solving!

Charles, how is the EA team internally using AI tools and what are the observed benefits or challenges?

As for our own use of AI, I think there are two areas where we are seeing great benefits – firstly in our own sales and marketing, and secondly in our software development.

The greatest challenge, of course, with the use of AI in sales and marketing is that while AI obviously brings significant efficiencies, you have to guard carefully against any loss of human touch.

Sales is all about building relationships, and the first whiff of automated messages to prospects really undermines trust. So you have to use AI really selectively. While I’m sure there are people out there writing fully AI-generated copy for positioning and messaging – perhaps even for DMing and email personalisation – I think the best use of AI is to arm the user with a ‘cheat sheet’ of everything you need to know about a prospect quickly, so you can write a truly personalised message. This might not scale so well in the short term, but in the long run – especially for a platform like ours which is also generating content – I’m sure it’s the right way to go.

As for software development – my worry is that vibe-coded operations just create inordinate levels of technical debt and unoptimised codebases. They’re more like the pipe-cleaner and Sellotape solutions I saw enough of in my banking days!

Where I think generative AI works really well in the SDLC, though, is in bug-fixing. Users are notoriously bad at explaining their bugs, and this is a nightmare for human developers trying to reproduce the issues. AI in this context is perfect: even the most detail-lacking bug report can be analysed, and multiple candidate problems can be quickly identified and rectified.

I still prefer humans for architecting solutions. While I imagine AI will eventually beat us at finding the right patterns to reproduce when constructing an overall workable (and scalable, resilient) architecture, much of the detail can already be driven by AI today – and that’s only likely to increase as a share of the overall codebase.

We hope you enjoyed this edition of Coffee with Calyptus. Stay curious, stay inspired, and keep building what matters. Explore more editions and insightful articles.