Renee Hennessee, partner and chief technology officer at Mission Wealth.
Mission Wealth’s Renee Hennessee explains how clear guardrails, practical pilots, and manager-led coaching can help advisors adopt AI without losing the human touch.
When wealth firms talk about artificial intelligence, the conversation tends to center on tools and features: copilots, bots, auto-summaries, predictive models.
But for Mission Wealth partner and chief technology officer Renee Hennessee, it’s also about something less flashy and much harder to execute – how to bring advisors along for the ride.
Hennessee built Mission Wealth’s technology function from a one-person CRM role in 2019 into a multi-person team responsible for much more than software.
“I think of our department as not only technology, it’s scaling and innovation and operations,” she said. “It’s everything that other departments can’t or just don’t do when they’re looking for something new or special or interesting to be done.”
After a near-doubling in headcount at Mission Wealth last year, that remit now includes guiding nearly two hundred people through a fast-moving AI landscape.
Weaving AI into the workflow
For Hennessee, meaningful AI success starts with friction – specifically, finding and addressing known pain points for advisors. One early win has been a back-office automation pilot around prospect document intake, targeted at teams doing heavy new-business work.
“We’re piloting a document intake system that’s AI-assisted for our associates for prospects, where the documents automatically go to a certain folder, [are] automatically processed to rename them to our naming convention and remove all the holdings and put them into a separate CSV that can be uploaded to our proposal tool,” she said.
In some cases, prospects arrive with “literally hundreds of holdings.” Previously, associates spent hours renaming files, extracting positions and massaging data into the firm’s proposal system. While the work isn’t cognitively demanding, Hennessee said it drained energy, time and attention that could otherwise go to higher-value tasks.
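As an illustration only, the intake steps Hennessee describes — standardizing filenames and splitting extracted holdings into a CSV for a proposal tool — might look something like the sketch below. Every function name, filename convention and column header here is hypothetical, not Mission Wealth’s actual system; the AI extraction step itself is assumed to happen upstream.

```python
# Hypothetical sketch of a document-intake step: rename incoming prospect
# files to a firm naming convention, and write holdings (extracted upstream,
# e.g. by an AI parsing step) to a separate CSV for a proposal tool.
import csv
import io


def intake_filename(prospect: str, doc_type: str, date: str) -> str:
    """Build an illustrative firm-standard filename for an intake document."""
    slug = prospect.strip().lower().replace(" ", "-")
    return f"{date}_{slug}_{doc_type}.pdf"


def holdings_to_csv(holdings: list[dict]) -> str:
    """Serialize extracted holdings to CSV text a proposal tool could ingest."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["ticker", "shares", "market_value"])
    writer.writeheader()
    for h in holdings:
        writer.writerow(h)
    return buf.getvalue()


# Example: a prospect statement with two holdings from an upstream parser.
name = intake_filename("Jane Prospect", "statement", "2025-01-15")
csv_text = holdings_to_csv([
    {"ticker": "VTI", "shares": 120, "market_value": 31000},
    {"ticker": "BND", "shares": 400, "market_value": 29500},
])
```

In practice, the “human in the loop” review Hennessee describes later would come after a step like this, before anything reaches a client.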
At Mission Wealth, she says the approach to AI adoption is highly intentional: roll out to a couple of tech-forward or heavily burdened teams first, prove the value, and then expand.
“We have this now on two different teams,” she said, adding that they use those early adopters as champions while they move the capability into additional regions.
Conscientious objectors to AI
Rather than finding use cases or hooking up models, Hennessee says the hard yards in AI adoption involve addressing the emotions that come with any change in how advisors work. And despite the saying about old dogs and new tricks, she said those harboring AI trust issues aren’t defined by a particular generation, but are cut from the same psychological cloth.
“The people who really need the most support are the conscientious people,” she said, pointing to advisors who “care really deeply about client relationships.”
From her perspective, the caution shows up as fear of getting something wrong with a client, or of losing the “human touch” that differentiates advisors from robots. To reduce people’s misgivings, she said Mission Wealth makes it a point to be explicit about the intent and goals around AI adoption.
“We want to make sure they have the tools they need to succeed in their career,” she said. “We’re not trying to reduce headcount. We’re trying to minimize future headcount, and to help people to be able to navigate the changes that are coming.”
Managers as the tip of the spear
Hennessee’s team thinks of AI rollouts as culture projects as much as technology ones. That means leaning heavily on managers, training them to own the conversation around what AI is, what it is not and how it can help.
“We want our managers to be the tip of the spear in talking to their teams and making sure they can talk openly about AI and how it’s even helping them to prepare or to take a moment before doing something,” she said.
Rather than issuing firmwide mandates or presenting tools as perfect fixes, she said her firm leans on small, low-risk pilots where associates and advisors can try capabilities in a contained environment, see what works, and bring back what does not.
“It’s about normalizing learning. We are not wanting there to be pressure of getting it right immediately,” Hennessee said, describing timelines as long as three months for the typical pilot. “We know there are going to be tests and failures.”
While having a “fast fail” philosophy is essential in any pilot project, she acknowledged that mindset won’t come easily for everyone, which means supporting people through the growing pains of tech adoption.
“You’ll have some people who say, ‘I tried it and it didn’t work. So I didn’t try it again,’” she said. “That coaching is needed there.”
Checkpoints and guardrails
One nonnegotiable in Mission Wealth’s approach is human review. Even in places where AI automates the bulk of a task, Hennessee said staff are trained to validate the output before anything goes to a client.
“As part of our acceptable use policy, you’re never going to take something from AI and just turn around and email,” Hennessee said. Documents and summaries “need to be cross-checked, and that’s the human in the loop, that makes it work.”
She also emphasized how associates are encouraged to flag errors for the tech team to trace what went wrong and make fixes as necessary. Over time, that turns frontline users into what Hennessee calls a “tool optimization force,” constantly refining how automation and AI are configured.
For other RIAs thinking about how to incorporate AI, Hennessee argues that clear policies and bright lines around usage matter just as much as the tools themselves.
“We need clear rules around the data, the confidentiality, and what tools are approved,” she said. “It’s just as important to be explicit about what it won’t be used for as what it will be used for.”
And even as firms build or buy new tools to place in the advisor toolkit, Hennessee stressed that the responsibility for client outcomes should fundamentally remain in human hands.
“People have to know that they are going to own the decisions that are made with it and the accountability from using it,” she said.