Fail Smart: Building Responsible AI in Financial Services

In part 1 of this 4-part series, From Experimentation to Execution, I wrote about how financial institutions are moving from AI pilots to scalable, compliant foundations: systems that unite innovation with auditability and trust.

That shift was about architecture: building the foundation beneath responsible growth.

This next one is about behavior: how leaders, teams, and institutions innovate quickly without losing control.

AI is moving faster than any technology I’ve worked with, and I’ve spent years pushing innovation, developing new experiences, and leveraging technology to make financial services smarter and simpler. 

But lately, I’ve felt both awe and unease. 

AI is reshaping how we work, how we serve, and how we think. It’s also testing how ready we are to lead responsibly. 

As a former compliance officer turned innovation leader, I’ve seen both sides: the excitement of capability and the risks that come with it. AI is no longer the “next frontier” — it’s here, embedded in the way we serve customers, operate our businesses, and make decisions. 

I’ve also watched pilot fatigue creep in: projects stuck in testing, momentum draining from the culture, teams asking, “What happens when this goes to production?” That hesitation creates its own risk. People disengage. Progress slows. Competitors keep moving.

Recently, at the CDO Conference North America – Global AI Summit, I moderated a discussion with leaders across financial services and technology. The conversation mirrored what I’m seeing every day: ambition is high, but confidence lags. There was a simple insight that stayed with me — not a grand theme, just a useful reminder: when teams trust the design, they move faster and with better judgment. 

That aligns with a lesson I learned earlier in my career. When I led operations and compliance at a wealth management firm, I saw what happens when governance and infrastructure don’t evolve together. Our data lived in too many systems. Reports weren’t end-to-end. Routine regulatory requests meant days of stitching together evidence by hand. We weren’t doing anything wrong, but we were working within limits that made oversight harder than it needed to be. The issue wasn’t compliance itself; it was the design around it. 

The same dynamic applies to AI today. If systems, data, and accountability can’t keep pace with innovation, even good intentions introduce unnecessary risk. Responsible AI isn’t about avoiding risk; it’s about building the resilience to move quickly and withstand scrutiny when it comes. 


So, what does that look like in practice?

For me, it comes down to six commitments. They’re not slogans or boxes to check; they’re how we keep speed and stewardship in balance. 

1) Secure by design, compliance first.
Guardrails aren’t afterthoughts. Security, privacy, and oversight belong in the architecture from day one so teams can ship with confidence, not caution. 

2) High-quality data for high-quality outcomes.
AI reflects its inputs. Clean, connected, inclusive, and well-governed data is the quiet engine behind fair decisions, reliable performance, and fewer surprises. 

3) Power is in the people.
AI should amplify human judgment, not replace it. When we equip analysts, advisors, and risk teams to work with intelligent systems, adoption accelerates and outcomes improve. 

4) Increase productivity & accelerate growth — with intention.
Speed without structure is chaos. Clear governance and alignment turn experimentation into execution and keep scale from outpacing control. 

5) Empathize and personalize with purpose.
Personalization should make finance simpler and more equitable — not invasive. If an experience isn’t improving clarity, fairness, or access, we should reconsider why we’re doing it. 

6) Transparent and accountable by design.
If we can’t see it, we can’t steward it. Traceable models, explainable decisions, and auditable workflows convert innovation into integrity — and reduce fear of production. 

These commitments aren’t theoretical. They’re how institutions avoid “pilot purgatory,” how leaders replace hesitation with judgment, and how teams move fast without losing the plot. They also bridge a tension I hear constantly: How do we scale responsibly without slowing down? The answer is to make responsibility an enabler — a property of the system — not a meeting at the end. 


What Comes Next

This next chapter in financial services isn’t purely technical. It’s cultural. Every time we automate a decision, we take on new ethical weight. Every time we scale intelligence, we scale our values. The organizations that will lead are the ones that design for both: the capacity to learn quickly and be accountable for what they learn. 

We’ve proven that AI works. Now we need to prove that it can work responsibly — at the pace the market demands and the standard our customers deserve. 

The fastest organizations won’t be the ones taking the biggest risks. They’ll be the ones people trust to keep moving — safely, intelligently, and with purpose. 

That’s how we fail smart.
That’s how we move forward responsibly. 


Read part 1 in this 4-part series here. Stay tuned for the next blog in the series, Thinking Systematically: Turning Responsible Innovation into Repeatable Practice.

Taryn Balthazar

Industry Principal, Financial Services at Presidio