In our first article, Are You Ready for AI?, we explained how true AI readiness is less about chasing the latest tools and more about achieving operational maturity. Firms that succeed start with clean data, sound processes, and strong governance; the quality of that foundation determines whether AI delivers value or magnifies flaws.

This second article, Fluent, Not Smart: What Large Language Models Really Mean for Asset Management, puts that readiness to the test. Large language models (LLMs) are the most visible form of AI today, but they are not substitutes for expertise. Early experiments in asset management revealed a hard truth: if your foundation is shaky, LLMs will find the cracks and make them bigger. We share lessons from false starts and show how these tools accelerate trusted workflows. The journey to transform LLM fluency into lasting leverage begins with effective governance, process, and data management.

False Starts and Real Lessons

The first trial began with excitement. A team member prompted the latest LLM to prepare a draft Q1 fund commentary, and it delivered instantly. But the excitement was short-lived. The polished summary included references to a macro event the team had never discussed and to a position the fund did not hold. It read well. It was compelling. It was simply not true. 

The second trial produced similar results. When asked to generate a competitor comparison, the LLM misassigned the mandate and invented a strategy feature that the competitor did not offer. The output was articulate, plausible, yet completely incorrect.

These were internal experiments, not client-facing moments, but they exposed real risks. The issue wasn’t just hallucination; it was trust. We recognized an even bigger threat: if those prompts included confidential data such as fund metrics, strategy decks, or client identifiers, that information could sit in a public black box with no control over how it might be stored, used, or disseminated. 

The Turning Point

It became clear that LLMs are undeniably powerful, but also imprecise and risky. Public models couldn’t be trusted with sensitive data. However, private, retrieval-augmented generation (RAG) systems, built on protected and controlled inputs, offer a path to leverage LLM fluency without surrendering firm knowledge.
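To make the RAG idea concrete, here is a minimal sketch of the retrieval step. Everything in it is illustrative: the in-memory document store, the token-overlap scoring, and the helper names (`retrieve`, `build_grounded_prompt`) are assumptions for this example, not a production design. Real systems retrieve with embeddings over a secured index; the point is that the prompt is built only from firm-controlled context.

```python
from collections import Counter

# Toy in-memory corpus standing in for a firm's controlled document store.
DOCUMENTS = {
    "q1_commentary_notes": "Q1 performance was driven by sector allocation in industrials.",
    "fund_holdings": "The fund holds positions in industrials, utilities, and healthcare.",
    "compliance_policy": "All client-facing commentary requires compliance sign-off.",
}

def tokenize(text):
    return [t.strip(".,?!:;").lower() for t in text.split()]

def score(query_tokens, doc_tokens):
    # Simple token-overlap score; production systems use embedding similarity.
    doc_counts = Counter(doc_tokens)
    return sum(doc_counts[t] for t in set(query_tokens))

def retrieve(query, k=2):
    q = tokenize(query)
    ranked = sorted(
        DOCUMENTS.items(),
        key=lambda item: score(q, tokenize(item[1])),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query):
    # Constrain the model to retrieved, firm-controlled context only.
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("What drove Q1 performance?"))
```

Because the context is assembled from internal sources before anything reaches the model, no proprietary data needs to leave the firm's controlled environment, and every statement in the output can be traced to a named document.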

Many asset managers have now shifted toward small, well-structured use cases: drafting commentary with verified data, searching internal documents across decks, memos, and policies, and preparing for meetings from CRM records. All of this happens with human review, transparent sources, and no public or open-access LLMs when proprietary data is involved. These smaller deployments limit the scope for error, keep outputs easy to verify, and allow governance processes to mature before scaling. The results have been not only safer but also more useful.

Putting LLMs to Work

LLMs are the most prominent and widely applied form of AI today. But in asset management, their value comes not from clever turns of phrase or confident guesses, but from alignment with the firm's data and with the truth.

Our experience, and what we observe across the industry, shows that data, process, and governance drive value, not the model itself. LLMs excel at turning facts into compelling language, stitching context across sources, and surfacing patterns we might miss. An LLM's strength isn't human-style reasoning; it's fluency.

Used well, that fluency enables: 

  • Commentary drafting 
  • Executive summary generation from transcripts and notes 
  • Meeting briefs 
  • Internal knowledge search
  • CRM insight extraction 

In these areas, LLMs do not replace expertise. They scale it, distributing institutional knowledge more broadly, efficiently, and coherently. They act as accelerators, not decision-makers. 

Understanding the Limits

Despite their polish, LLMs have hard limitations: 

  • They do not reason or understand nuance. 
  • They will hallucinate with confidence when context is missing. 
  • They will amplify errors in your source materials. 
  • They don’t know your clients. 

An LLM might generate a well-written paragraph about a portfolio’s underperformance due to rising rates, even if the real driver was sector allocation. It might summarize a call but miss the tone of a volatile quarter. It might suggest steps that are procedurally invalid or legally risky. 

And it will not know the difference. 

Fluency without understanding can mislead. That is the risk, and it is why real, reliable, and structured context is essential.

Governance and Risk

Governance turns LLMs from a novelty into a dependable, effective tool. 

Firms must treat LLMs as part of the data supply chain: 

  • Use verified sources or human review to check for made-up facts. 
  • Establish clear audit and sign-off processes. 
  • Avoid data leakage by keeping models private and controlling what goes in. 
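The first two checkpoints above can be automated in part. The sketch below shows one hedged approach: comparing claims in a generated draft against a verified reference list and routing anything unsupported to human review. The sector list, the pattern, and the `audit_draft` helper are hypothetical stand-ins; in practice the verified data would come from the firm's book-of-record systems.

```python
import re

# Hypothetical set of verified fund exposures; in practice this would be
# sourced from the firm's book-of-record system, not hard-coded.
VERIFIED_POSITIONS = {"industrials", "utilities", "healthcare"}

SECTOR_PATTERN = re.compile(
    r"\b(industrials|utilities|healthcare|energy|financials|technology)\b",
    re.IGNORECASE,
)

def audit_draft(draft):
    """Flag sector references that the fund's verified data does not support."""
    mentioned = {m.lower() for m in SECTOR_PATTERN.findall(draft)}
    unverified = mentioned - VERIFIED_POSITIONS
    return {
        "approved": not unverified,
        "unverified_claims": sorted(unverified),  # routed to human review
    }

result = audit_draft("Performance was led by energy and industrials exposure.")
print(result)
```

A check like this does not replace human sign-off; it narrows what reviewers must examine and creates an audit trail of what was flagged and why.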

Treating LLMs like magic boxes invites exposure. Treating them like systems infrastructure invites leverage. This focus on oversight increasingly aligns with emerging regulatory expectations in both the U.S. and Europe.

Where to Start

Firms seeing early success use LLMs to speed up work they already understand, with structure, oversight, and control:

  • Drafting commentary from structured data. 
  • Summarizing CRM notes and prepping meeting briefs. 
  • Searching internal documents with plain-language queries. 
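The first use case above, drafting commentary from structured data, can be sketched as a simple pipeline step: the verified figures produce a factual skeleton, and the LLM's role is limited to polishing language around it. The field names and figures here are invented for illustration.

```python
# Hypothetical structured inputs; real figures would come from the
# performance measurement system, never from the model.
FUND_DATA = {
    "fund": "Example Core Equity Fund",
    "period": "Q1",
    "return_pct": 4.2,
    "benchmark_pct": 3.1,
    "top_contributor": "industrials",
}

def commentary_skeleton(d):
    # Compute relative performance from verified numbers.
    relative = d["return_pct"] - d["benchmark_pct"]
    direction = "outperformed" if relative >= 0 else "underperformed"
    return (
        f"{d['fund']} returned {d['return_pct']:.1f}% in {d['period']}, "
        f"{direction} its benchmark by {abs(relative):.1f} percentage points, "
        f"led by {d['top_contributor']}."
    )

print(commentary_skeleton(FUND_DATA))
```

Because every number in the skeleton is computed from verified inputs, the model can rephrase or expand the text without being the source of any fact, which is what makes this use case bounded and easy to review.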

According to Daizy, clients have cut production time for monthly and quarterly commentary by more than 70 percent through automation while maintaining compliance alignment. A separate Claude case study reports compressing underwriting review timelines more than fivefold while improving data accuracy from 75 percent to over 90 percent.

These use cases succeed because they are bounded. They do not require reasoning, only the fluent expression of what is already known.

Turning Fluency Into Leverage

Across the industry, the conversation is shifting from “What can they do?” to “How do we control them?” Early excitement is giving way to questions of governance, risk, and integration. Firms that answer those questions first will be in the best position to scale with confidence. 

LLMs excel at language tasks but lack real comprehension. In asset management, where the smallest misread of context can erode trust, that distinction is everything.

The firms making real progress are not just exploring these tools. They are building around them. They have laid the groundwork: clean data, sound processes, and clear oversight. 

LLMs are not silver bullets. They are a force multiplier for firms that know where they fit and how to make them fit well.

Why Meradia 

At Meradia, we work with asset managers and asset owners to ensure AI adoption is built on a solid operational foundation. Our experience spans data architecture, workflow design, and governance models that safeguard both accuracy and confidentiality. We help firms define the right use cases, structure their inputs, and implement oversight processes that turn LLM potential into measurable results. By aligning technology with operational maturity, we enable clients to scale innovation without sacrificing trust. 

Up Next… 

In our next article, we will explore how technology and operational expertise come together to maximize AI’s value. From data pipelines to model integration, we will examine how to design AI-enabled processes for real-world investment operations. Building on the readiness principles from Are You Ready for AI? and the practical lessons in Fluent, Not Smart, this piece will show how the right technology partner can help transform well-governed pilots into enterprise-wide capabilities. 
