An hour into a long ideation workshop, I stopped taking notes and started building. The traditional next step, after a session like that, would have been to wrap up, spend a week and a half assembling PowerPoints, book a follow-up call to confirm we understood the requirements, and then — maybe — start building something. Instead, I piped the live transcript into Claude Code and asked it to scaffold a working prototype that reflected what the room was describing. By the time we hit the demo slot at the end of the call, I had a small working application running on my laptop, complete with simulated data, that let everyone see a rough version of the idea they had been discussing for three hours.
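Mechanically, there was no magic involved. Here's a minimal sketch of the kind of call I made, assuming a saved transcript file and Claude Code's non-interactive print mode; my real prompt was far longer, and your installed version's flags may differ:

```python
import pathlib
import subprocess

# Simplified sketch, not my exact setup: read the running transcript and hand
# it to Claude Code with a scaffolding prompt.
transcript = pathlib.Path("workshop-transcript.txt").read_text()

prompt = (
    "Below is a live transcript from an ideation workshop. Scaffold a small "
    "working prototype, with simulated data, that reflects what the room is "
    "describing.\n\n" + transcript
)

# `claude -p` runs a single prompt and exits instead of opening an interactive session.
subprocess.run(["claude", "-p", prompt], check=True)
```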
The reactions in the room were a study in contrast, and the technology was the least interesting part of what happened. That afternoon taught me more about enterprise AI adoption than any think-piece I’ve read.
The thesis: context is the product, not the model
I’ve spent the last five months building what I now call my agentic AI operating system — the set of tools, prompts, and shared-context patterns I use to run my day as a principal architect. I’m using our enterprise license of Claude, so that’s my daily driver, but that’s almost beside the point. The models matter less than people think. Tools will change every six months. What doesn’t change is the problem underneath: information is scattered across silos nobody reads, and every pursuit leaks context as it moves through the funnel.
Sales knowledge lives in Salesforce, SharePoint, Teams chats, email threads, and spreadsheets. Delivery knowledge lives in standards documents that, let’s be honest, nobody opens after onboarding. Discovery notes get lost between the first call and the statement of work. Compliance and business rules surface at the worst possible moment — usually right before signature, when a delivery lead raises a concern that could have been addressed four weeks earlier.
I didn’t want to build another silo. I wanted to build a layer that pulls from the silos we already have and keeps everyone on the same page — sales, delivery, leadership — without asking anyone to change where they work.
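To make the shape of that layer concrete, here's a hypothetical sketch. The connector names are stand-ins, not real integrations; each one is just a function that fetches text from a system people already use:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Connector:
    name: str
    fetch: Callable[[str], str]  # pursuit_id -> relevant text from that silo

def build_context(pursuit_id: str, connectors: list[Connector]) -> str:
    """Assemble one shared-context bundle for a pursuit, tagged by source."""
    sections = [f"## Source: {c.name}\n{c.fetch(pursuit_id)}" for c in connectors]
    return "\n\n".join(sections)

# Illustrative stubs: in practice each lambda would query Salesforce,
# SharePoint, Teams, and so on. The output is one document everyone reads.
connectors = [
    Connector("Salesforce notes", lambda pid: f"(CRM history for {pid})"),
    Connector("Teams threads", lambda pid: f"(chat excerpts mentioning {pid})"),
    Connector("Delivery standards", lambda pid: "(allocation and QA rules)"),
]
print(build_context("pursuit-042", connectors))
```

Whatever model reads that bundle next quarter, the bundle itself doesn't change. That's what makes context portable and tool choices disposable.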
Five principles that actually moved the needle
If you’re a director of engineering trying to scale AI inside your company, here’s what I’d tell you based on doing it, not reading about it.
- Stop chasing model benchmarks. Invest in shared context. Every time a new model drops, people re-litigate their tool choice. That’s the wrong axis. The teams that get real value from AI are the ones that figured out how to feed their existing institutional knowledge into whatever model they’re using this quarter. Context is portable. Tool choices aren’t.
- Dogfood before you sell. The fastest way to build conviction — yours and your customers’ — is to use the tools on your own workflow first. I can’t recommend an adoption pattern to a client I haven’t stress-tested on my own pursuits. Every blog about AI transformation would be better if the author had been forced to use the thing they’re describing for a month.
- Design for compliance at creation time, not review time. Most of the painful rework in any regulated workflow happens because rules surface late. An agentic system that reads your standards and flags violations as you author, not after, is worth more than any dashboard. For me that meant wiring in allocation rules, margin thresholds, QA ratios, and handoff requirements as pre-built skills, so the standards document becomes a living constraint instead of a PDF (a toy sketch of the pattern follows this list).
- Treat meetings as inputs, not artifacts. We record everything already. The waste isn't in the recording; it's in the fact that nothing downstream ever reads it. Routing transcripts into shared context changed how I prep, how I qualify, and how I hand off to delivery. The meeting becomes an input to the next five decisions instead of a 90-minute video nobody will rewatch.
- Measure the work you stop doing. The ROI question I get most often (“how do you prove this is working?”) is easier than people expect. Don't measure model accuracy. Measure the hours you used to spend on research, opportunity qualification, SOW drafting, and handoff prep, then measure them again after. On those pre-sales tasks I've seen roughly an order-of-magnitude productivity improvement; I'd conservatively call it 10x.
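Here's the toy sketch promised in the compliance bullet. The thresholds below are invented for illustration (they're not Presidio's actual rules); what matters is that the check runs while the author drafts, not at review:

```python
from dataclasses import dataclass

@dataclass
class SowDraft:
    margin_pct: float       # projected margin on the deal
    qa_ratio: float         # QA hours as a fraction of engineering hours
    has_handoff_plan: bool  # delivery handoff section present?

def check_draft(draft: SowDraft) -> list[str]:
    """Return human-readable flags the author sees while writing, not after."""
    flags = []
    if draft.margin_pct < 30.0:  # hypothetical margin floor
        flags.append(f"Margin {draft.margin_pct:.0f}% is below the 30% floor.")
    if draft.qa_ratio < 0.15:    # hypothetical QA staffing ratio
        flags.append("QA allocation is under the required 15% of eng hours.")
    if not draft.has_handoff_plan:
        flags.append("No delivery handoff section yet.")
    return flags

for flag in check_draft(SowDraft(margin_pct=24.0, qa_ratio=0.10, has_handoff_plan=False)):
    print("FLAG:", flag)
```

Wire checks like these into whatever surface the author already drafts in, and the standards document starts enforcing itself.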
What this means if you’re a Presidio customer
The implications for Presidio's clients are concrete. We're not a company telling you we can help you adopt AI because we read the same research reports you did. We're adopting it ourselves, on our own workflows, and the practices that fall out of that experience travel directly into our client engagements: faster discovery, faster SOWs, faster prototypes, and fewer handoff surprises. That conviction is something you either have or you don't, and you can't fake it with a slide.
If you’re earlier in this journey and trying to figure out where to start, we run a structured engagement called the AI Blueprint workshop. It’s the same framing I used on myself: inventory the silos, identify the context that’s trapped, pick the workflows where shared context will return value fastest, and build toward an adoption pattern that survives the next model release.
I'd rather you leave that workshop with three honest candidates for internal dogfooding than a deck full of aspirational use cases. The first becomes a real capability. The second becomes a memo.
Ready to map your own enterprise AI adoption path? Schedule an AI Blueprint workshop with Presidio.


