At some point in the last two years, every investment committee meeting has included a version of the same question.
A board member — doing exactly what good board members do, staying current, pressing the team to have a view — looks up from the agenda and asks about AI. Sometimes it's framed broadly. Sometimes it's specific: are we using it? should we be? what are the managers we hire doing with it?
Most investment teams have learned to answer it without really answering it. Either enthusiasm that positions AI as a transformational opportunity the institution is already moving to capture — even if that mostly means the consultant now has a new slide deck. Or defensiveness that positions AI as a tool for technologists with limited relevance to how this institution actually invests.
Both answers are designed to end the conversation rather than have it. I want to try something different.
What systematic approaches actually do well
I come to this question from a specific direction. My career started on the CBOE trading floor, pricing equity options and managing derivatives risk — which gave me a practitioner's understanding of how quantitative models work from the inside, not just from the outside. I've spent the years since evaluating systematic and quantitative managers, constructing portfolios that included them, and thinking carefully about what they actually deliver.
So when I say that systematic and AI-driven approaches do certain things genuinely well, I'm not being diplomatic. Pattern recognition in large data sets — satellite imagery, credit card transactions, shipping data, earnings call transcripts — represents a real source of informational edge that didn't exist a decade ago. The computational infrastructure to process these signals at speed and scale is genuinely beyond what a discretionary research team can replicate. Risk management that operates on rule-based parameters across thousands of positions simultaneously can remove specific categories of emotional error from the decision process.
These capabilities are real. The best systematic managers have built something that took decades of work to assemble. When I evaluate a quantitative strategy, I'm not skeptical of the technology. I'm applying the same framework I apply to everything else: show me the edge, explain why it's durable, tell me what would end it.
What they can't do
In Discipline I of this series, I described a meeting with the manager of a London-based fund that had just won the top European alternatives award. His fund would be gone six months later. I asked what would happen if he was wrong. He looked at me like I'd asked something in a foreign language.
No algorithm would have caught that.
The thing visible in that meeting wasn't in any data set. It was in his face — the genuine confusion of a person who had built an elegant thesis and never seriously imagined its failure. You can't quantify that. You can't scrape it from a document or detect it in a transcript. It's a human skill: the ability to sit across a table and read whether someone has actually grappled with their own risk, or whether they are selling conviction that has never been tested.
In Discipline II, I described a different meeting — a fund manager who nodded to acknowledge that growing assets were affecting his performance, then denied it out loud. The nod was the honest answer. The verbal denial was the business answer. No model detects a nod-and-denial. That's not a data problem. That's a judgment problem, and judgment requires a person in the room who knows what they're looking for.
There's a third category, equally important: understanding what an organization actually is. In Discipline III, I described recognizing immediately that a mortgage fund was wrong for an insurance company that insures homes — because I understood the business deeply enough to see a risk correlation the OCIO had missed. No model makes that call without that context. The OCIO had sophisticated tools. They didn't have the organizational knowledge to use them correctly for this client.
The real risk isn't automation — it's abdication
The most dangerous misuse of AI in institutional investing isn't the scenario where an algorithm takes over the decision-making process. The scenario that actually concerns me is more subtle.
"The model said so" is becoming the new "we have high conviction." It has the same structure: a statement that sounds rigorous, that points to a process or a tool rather than to actual reasoning, and that is specifically designed to be difficult to challenge.
When a board member asks "why do we own this?" and the answer is "our quantitative model identified it as attractive," the conversation has been ended by a reference to a tool rather than advanced by an analysis. That is the same evasion as "we have high conviction" — it just sounds more technical and therefore harder to press on.
This matters because boards are already inclined toward deference on investment decisions. Most board members don't feel equipped to challenge their CIO on manager selection, and they feel even less equipped to challenge a quantitative framework they can't evaluate. AI, deployed carelessly, gives institutions cover for exactly the kind of unexamined conviction that Disciplines I and II of this series were about. The format changes. The problem doesn't.
The same questions apply
The discipline I described across this series — ask what would change your mind, identify the specific structural advantage, re-underwrite continuously, pre-specify exit conditions — doesn't disappear when the manager is systematic rather than discretionary. A quantitative strategy has an edge that can be identified. It has capacity constraints that matter. It has style drift that can be detected. It has an exit condition that can be pre-specified.
All the same questions apply. The mechanism of edge is different. The framework for evaluating it is identical.
Systematic strategies face the same crowding problem as discretionary managers. When enough capital chases the same signal — momentum, value, quality factors — the signal becomes the risk. The August 2007 quant deleveraging, when baskets of seemingly unrelated equities moved in synchronized, violent patterns for a week because multiple quantitative funds were liquidating similar factor exposures simultaneously, is the clearest modern example. The strategies that looked uncorrelated were uncorrelated only in calm markets. Under stress, they were running the same book. An algorithm can generate a signal. It cannot tell you whether that signal is crowded in ways that will produce correlated drawdowns the first time markets stress.
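To make that concrete, here is a minimal, purely illustrative Python sketch. The data are synthetic and the two strategies (`strat_a`, `strat_b`) are hypothetical, with arbitrary parameter values; nothing here models any real fund or the 2007 event itself. It shows how two return streams that share one hidden, crowded factor can look nearly uncorrelated over the full sample and highly correlated on the rare days when that factor moves violently.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic setup: two hypothetical strategies both load on one hidden,
# crowded factor. The factor is quiet on calm days and violent on rare
# stress days (all parameter values are illustrative, not calibrated).
n_days = 2520                               # ~10 years of daily returns
stress = rng.random(n_days) < 0.01          # ~1% of days are stress days
factor_vol = np.where(stress, 5.0, 0.2)     # factor volatility by regime
factor = rng.normal(0.0, 1.0, n_days) * factor_vol

# Each strategy = shared factor exposure + its own idiosyncratic noise.
strat_a = 0.6 * factor + rng.normal(0.0, 1.0, n_days)
strat_b = 0.6 * factor + rng.normal(0.0, 1.0, n_days)

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

print(f"full-sample correlation: {corr(strat_a, strat_b):+.2f}")  # low
print(f"calm-day correlation:    {corr(strat_a[~stress], strat_b[~stress]):+.2f}")
print(f"stress-day correlation:  {corr(strat_a[stress], strat_b[stress]):+.2f}")  # high
```

The full-sample number is the one a due-diligence deck would show; the stress-day number is the one the portfolio lives through in a deleveraging. And the sketch only reveals the crowding after you already know which days were stress days. Identifying it beforehand is not something the numbers alone give you.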
That evaluation requires a practitioner asking hard questions — the same practitioner, with the same framework, applying the same standard to a quant fund that they'd apply to a discretionary manager down the hall.
What endures
There is a version of this AI conversation that gets it right, and it sounds something like this: AI changes how we process information, how we manage positions, and how we identify signals. It does not change the fundamental questions we need to ask about any investment thesis. If anything, it raises the stakes on those questions — because the tools that give the illusion of rigor are now more widely available, and the difference between genuine discipline and the appearance of it is harder to detect.
The most valuable thing I took from the culture I described in Discipline I — where every decision had to earn its place, every cycle, without exception — was not any specific investment insight. It was the understanding that the quality of an investment program depends entirely on the quality of the questions it asks. Not the sophistication of its tools. Not the size of the team. The questions.
"What would change your mind?" works on a fund manager pitching structured credit in 2007. It works on a systematic manager pitching factor strategies in 2024. It works on the AI vendor pitching a portfolio construction tool to your board. It works on the analyst presenting a new manager recommendation. It works on the CIO who hasn't genuinely re-underwritten their highest-conviction position in eighteen months.
The algorithm doesn't replace that question. The algorithm makes it more important.
The boards and CIOs who figure this out — who deploy new tools without surrendering the discipline to evaluate them honestly — will build programs that endure. The ones who let AI end conversations that should be started will build programs that look sophisticated right up until the moment they don't.
That distinction will matter more in the next decade than any single allocation decision they make.
Questions to bring to your next board or investment committee meeting
- For any AI-driven or systematic strategy we hold: can our team articulate the specific edge being captured, the evidence it remains durable, and the conditions under which we'd exit — in the same terms we'd apply to a discretionary manager? Or are we holding it because the model says so?
- When a systematic strategy underperforms, do we have the internal capability to distinguish between a strategy that has stopped working and one experiencing a drawdown in a specific market regime? These require completely different responses. Do we know how we would tell them apart before the drawdown happens?
- Where in our investment process are we using AI — or AI-adjacent language — to end a conversation we should be having? What questions are we not asking because "the model handles it"?
- What would change our minds about our current approach to systematic and AI-driven investing? If the answer is "nothing — we're committed to this direction," go back and read Discipline I.
If you're running an investment program and want to talk culture, performance, or anything else, let's chat.