
Questions I Hear Most

Answers to the questions teams ask — or quietly wonder about — before we speak

Working With Me

I help organisations define which AI use cases are worth pursuing, build the governance and data infrastructure to support them, and deliver programmes to reach production — not just slide decks. The responsible part means safety, compliance and human oversight are built in from day one, not as an afterthought.

Yes — fractional CDO/CPO engagement is one of my most common arrangements. Typically 2–3 days per week over 6–15 months. You get senior AI and data leadership without the full-time cost or the risk of a permanent hire before you're ready.

No. I've worked with teams at every stage — from pre-data-function startups to enterprises with large data and engineering teams. The starting point is your challenge and your outcomes, not your current capability.

Most clients start with a 3–5 day Insight-to-Opportunity Workshop. I map your challenges, identify your highest-value AI use cases, and produce an Opportunity Pack you can share internally. From there, some move into a 12-week programme; others go straight to fractional leadership. We can brainstorm this together.

Workshop engagements can typically start within 2–3 weeks. Fractional arrangements usually begin within a month. Book a call and we'll scope it from there.

AI Readiness & The Framework

It's a four-pillar operational framework I've developed across 20 years of product, data and AI delivery. Every engagement I run — from a 3–5 day workshop to a 15-month Fractional CDO arrangement — is structured around whichever of these four areas is missing:

AI-READY STRATEGY (right use cases, right outcomes) - defining the right use cases before building anything, aligned to commercial and customer outcomes, not AI hype. This means knowing what problem AI is actually solving, what success looks like in measurable terms and what to build first.

AI-READY TEAMS (upskilled, aligned teams) - aligning and upskilling cross-functional teams so they have the literacy to make AI decisions, challenge outputs and own the results without needing 'the consultant' in the room. A strategy no one can execute is just a deck.

AI-READY PRODUCT (safety-by-design, data governance) - embedding Safety by Design, clean and governed data, and human oversight into how the product is built from day one, not as an afterthought after launch. This is now a requirement under the EU AI Act and UK Online Safety Act, and something I've been building since before those laws existed.

AI-READY PROCESS (human-in-the-loop workflow) - end-to-end workflows where humans and AI operate together with clear checkpoints, performance monitoring, feedback loops and the organisational change management needed to reduce risk and achieve measurable outcomes.

Most aren't — and that's not a problem, it's the starting point. The three signals I look for: do you have a clear business problem AI should solve, is your data accessible and roughly trustworthy, and does your leadership team have a shared view of success? If one of those is missing, that's where we start.

Strategy defines what to build, why, and in what order — aligned to your commercial and customer outcomes. Implementation is the execution. Most organisations rush to implementation before strategy is clear, which is why 80% of AI projects fail to deliver measurable results. I work at the strategy and governance layer; I don't write the code.

Regulation & Compliance

If your organisation operates in or sells into the EU and uses AI in any customer-facing or decision-making capacity, you need a compliance assessment before August 2026. High-risk applications — in health, finance, employment and critical infrastructure — face the strictest obligations.

I've been designing governance frameworks since before this legislation existed. I help organisations understand their readiness gaps and build the strategy and processes to address them. For formal legal compliance sign-off, I work alongside specialist data protection counsel.

It's an approach to product and system design where safety, privacy and ethical considerations are embedded into the architecture from the beginning — not retrofitted after build. I've been applying Safety by Design principles since my work in app store certification with Telefonica O2 and EE, through to adolescent wellbeing apps with the Wellcome Trust. It is now a requirement under several regulatory frameworks including the UK Online Safety Act and EU AI Act.

Aerospace (Rolls-Royce Data & AI Lab), healthtech (IESO Health therapeutics platform), edtech (Goozby adolescent wellbeing), telecoms (Telefonica O2, Reliance Infocomm, MEF), SaaS (Wazoku — spanning water infrastructure, energy and RAAC), and studied Sustainability Leadership at the University of Cambridge CISL.

This is an area I'm passionate about - helping organisations prepare the next generation for AI-enabled work. I design programmes that go beyond tool training and prompts - focusing on how AI-enabled systems operate, including decision making, governance, risk management and responsible use, and how to apply AI to real problems.

I’ve co-designed and delivered programmes with 2,000+ young people, families and educators, including a residency with KidZania London and work across London councils and schools.
My approach is real-world application: participants identify real problems, build prototype solutions, test ideas and present outcomes - mirroring how modern organisations operate - and build responsible, critical-thinking capability that translates directly into the workplace.

I work with employers, schools and public sector partners. Book a discovery call to learn more.

The shift from AI tools to AI agents — systems that act autonomously on behalf of users and organisations — is already underway.

"The most profound economic impact of generative AI lies not in productivity gains within existing workflows, but in reducing communication frictions between consumers and businesses entirely."
— The Agentic Economy, Rothschild, Mobius et al. 2026

For regulated sectors, this creates a specific and urgent challenge: agents make decisions, take actions, and interact with other agents at speed. Without the right governance infrastructure in place now, organisations face a gap between what their agents can do and what they can safely be permitted to do.

The four pillars of the Human-Centred AI Readiness Framework — strategy, teams, product and process — are exactly the readiness infrastructure organisations need before agentic systems can be deployed responsibly. The question is no longer whether to prepare. It's whether your governance, data and team capability can keep pace with what's coming.

EU AI Act: Full compliance deadline August 2026

If your organisation has EU exposure, it's important to understand your risk classification and get your governance frameworks in order before the deadline. I've been embedding safety and governance into product delivery since before this legislation existed.

I can help you identify your gaps, build the frameworks to address them, and connect you to the right legal specialists where needed.


Let's talk AI Act Readiness

Not sure where to start? Neither are most teams.

Book a free 30 minute chat to brainstorm how I can help.

Book a Call - No Prep Needed