AI

 

0. Agenda

Shawn (CIO): We are reviewing the design of our AI research platform for the investment team, both the application layer and the underlying infrastructure. Model selection and the pipeline design are progressing, but the discipline at the output layer is still under-specified. Today we will lock that down as an investment committee decision. We have two objectives. First, to confirm the design principles that do not conflict with our investment philosophy and process. Second, to agree the requirements needed to implement and operate those principles, including who reviews what and where.

To be explicit, this is not a model accuracy discussion. Markets are reflexive. Inference changes behavior, behavior moves price, and price becomes the next input. AI output is not just information. It can become fuel for a loop. That is why accuracy alone does not create safety.

We will align on four items. 1) How we define and treat price data, including the dynamic aspect. 2) How we prevent contamination between the fact lane and the claim lane, and how we stop promotion from claim to fact. 3) How we design the output layer so we avoid short-term conclusions and use update rules instead. 4) How we keep the platform consistent with fundamental research, especially time horizon and causal reasoning. We will run this as a Q and A. Please interrupt whenever something needs clarification.

Sara (Head of Research): Agreed. The main operational risk is that AI becomes a convenient answer machine. The more convenient it is, the more it erodes process discipline.

Steve (AI Architect): Agreed. From the design side, what matters most is not what the model thinks internally, but what it is allowed to output and how it outputs it. We should decide the output specifications and the review path.

1. Introduction

Shawn: Let us start with the entry point. What are the most common requests that land on the system in real use, phrased the way users actually ask?

Sara: There are three common patterns. 1) Short-term conclusions: what should we buy this week, rank the names that will go up next. 2) Event reaction: what will move on this news, is the market reaction correct. 3) Instant evaluation: was this quarter good or bad, give me the bottom line. My view is that 1 and 2 are the most dangerous, and 3 is also risky when it is only the conclusion. The risk is that the simpler the question, the easier it is for AI to answer with a definitive statement, and definitive statements trigger action.

Shawn: Explain why it is dangerous in investment process terms. How does it break our process.

Sara: First, short-term answers validate quickly, so they leave strong success memories. Second, price movement explanations are easy to turn into narratives, and polished narratives are persuasive. Third, it turns the process into a reaction game. For example, AI says it will go up, we buy, it goes up. That may be synchronization rather than correct causality, but what the team experiences is it worked. Then the next questions become even more short-term and conclusion-driven. That is the erosion mechanism.

Steve: In design terms, rankings and definitive calls are output formats with very high action inducement. In markets, induced action moves observables such as price. The output changes the environment, and the changed environment becomes the next input. That creates a closed loop. If we answer short-term conclusion requests directly, we are embedding the loop from day one.

Shawn: Pause here. The closed loop applies even if the system is internal. The moment AI output shifts decision making or the center of gravity in our internal discussion, the same loop exists. So the claim is that if we leave the entry questions unmodified, we get a self-reinforcing cycle of definitive call, action, price move, it worked, definitive call. Are we aligned.

Sara: Yes. The shape of the question becomes the shape of the process.

Steve: Agreed. The entry should not be accepted as-is. We should transform the question at the entry, and normalize the output at the exit. That needs to be a system requirement.

Shawn: We will come back to the transformation. First we need a shared definition of reflexivity.

2. Market Reflexivity and Inference

Shawn: Let us define reflexivity for our internal use. The key point is that observables are not exogenous. Inference or a hypothesis changes behavior. Behavior moves observables such as price, spreads, and volatility. Those observables are treated as data and feed the next inference. Inference strengthens, behavior synchronizes, and the loop spins.

Sara: In science, observables are largely given by the external world. In markets, the observable, price, is changed by participant behavior. Inference can create the observable. That is why the same style of inference becomes more dangerous in markets.

Steve: From a systems perspective, this is endogenous data. AI output changes behavior, behavior changes the price series, and the price series becomes the next input. That is why talking about correct or incorrect in isolation is not sufficient. The way something appears correct can itself be altered by the loop. If AI produces a plausible rationale that triggers buying and price rises, the outcome may reflect the output changing the environment, not the discovery of truth. A design that cannot separate those cases is unsafe.
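The loop Steve describes can be sketched as a toy simulation. Everything here is an assumption for illustration: the naive trend-following signal, the `feedback` parameter standing in for how strongly acting on the output moves price, and the Gaussian noise. It is not a model of any real market; it only demonstrates that with endogenous feedback, the same inference rule appears increasingly "correct" without any change in fundamentals.

```python
import random

def apparent_hit_rate(feedback: float, steps: int = 50, seed: int = 0) -> float:
    """Toy model of endogenous data. Each step the system infers a
    direction from the last price change; acting on that inference moves
    the price by `feedback`, and the moved price becomes the next input.
    Returns the fraction of steps where the call looked right."""
    rng = random.Random(seed)
    price, last = 100.0, 100.0
    hits = 0
    for _ in range(steps):
        signal = 1 if price >= last else -1          # inference from the observable
        last = price
        price += feedback * signal + rng.gauss(0.0, 1.0)  # acting on it moves price
        if (price - last) * signal > 0:
            hits += 1
    return hits / steps

# With feedback = 0 the hit rate hovers near chance. With positive
# feedback the identical inference rule looks far more accurate,
# because the output changed the environment it is scored against.
```

This is the case Steve flags as unseparable by design: the high hit rate in the feedback regime reflects synchronization, not discovered truth.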

Shawn: Quick clarification. We are not asserting that AI will move the market. We are asserting that certain designs allow inference to shortcut into behavior and to self-reinforce via updated observables. Therefore we must control two things. What the system outputs, especially the format. How that output connects into decision making. Do we align.

Sara: Yes. In practice, even our internal evaluation of it worked becomes distorted. We stop distinguishing causal correctness from synchronization.

Steve: Yes. Output format and connection paths are the core design problem.

Shawn: Good. Next is the definition of price. This is the backbone.

3. Price Equals a Dynamic Belief Distribution Meter

Shawn: Price is observable data, but treating it as objective truth is dangerous. Steve, why does this confusion happen in system design.

Steve: Price data is high frequency, standardized, and abundant. It is easy for models and pipelines to consume. That makes it easy to slide from data driven into price driven. In data shape terms, price is dense. Value-side information is sparse. Dense data is statistically convenient, so systems and workflows drift toward optimizing on price.

Sara: When we optimize on price, we end up tracking belief shifts. We can produce narratives about short-term moves, but we do not connect to value causality.

Shawn: So we adopt a strict internal definition. Price is the output of collective inference, and a measurement of the belief distribution. Price is not a verdict on truth. It is a meter of what is believed right now. But we need to add a crucial operational point. Market participants do not look at a single snapshot. They look at price as a time series. So what we observe is a dynamic shift in belief distribution over time.

Steve: A useful mapping is this. Price at a point in time is a snapshot of belief distribution. The price time series is the trajectory of belief distribution shifts. It is like video. Each frame is a snapshot. The sequence of frames is motion.

Sara: That matters because most people mean not only the current price but how it is moving. Inference attaches to that motion. Reflexivity operates through the time series.

Shawn: Exactly. Later, when we talk about increasing resolution by slicing time more finely, it is the same as extracting more frames from the video. More frames create more apparent information, but they do not necessarily create more causal information. If we miss that, we fall into the mistake that shorter horizon means smarter.

Sara: Also, collective inference is not the average of refined reasoning. It contains structural coarseness. Many participants lack the ability or visibility to estimate intrinsic value, and their constraints differ. So they lean on what is visible, price, headlines, themes, and other people's reactions. In that sense, the price time series records how a collectively adopted simplification oscillates through time.

Steve: Design implication. We do not reject price. We treat it as an important observation. But we do not promote it into evidence for value. Price shows state and state change. It does not guarantee truth.

Shawn: Let us summarize. Price is a snapshot of belief distribution and the trajectory of belief shifts. Value is a long-horizon causal estimate. Our platform must not mix those. Next, what AI amplifies.

4. What AI Amplifies

Shawn: Explain the driver of short-termization in structural terms. Not because people like short-term.

Steve: The structure is data asymmetry. Price, flows, and sentiment are high frequency, abundant, standardized, and easy to normalize. Value-side signals such as competitive advantage, pricing power, reinvestment quality, and structural change are low frequency, sparse, context dependent, and hard to standardize. Value-side verification is long horizon. The feedback signal is distant. When teams try to raise accuracy, the system naturally gravitates to short-horizon data because the sample size increases. The consequence is that inference becomes short-termized and finely sliced, and the system optimizes to explain belief distribution movement.

Sara: The same happens on the investment process side. The more we accumulate quarterly narratives, the thinner the long-term causal discussion becomes. The better the narrative, the more value verification gets crowded out.

Shawn: So we must be explicit. Short-termization looks like higher resolution, but it actually changes the target from value to price.

Steve: Yes. When we chain inference on short-term, high volume data, the claim lane expands. By expansion I mean we generate more explanations for price motion, those explanations call more explanations, and the explanations induce behavior that amplifies the motion. That is how claim volatility grows while the fact lane, long-term causal verification, falls behind.

Sara: In practice, the basis of judgment shifts toward what is visible. The more price narratives we create, the more value work gets displaced.

Shawn: Good. Next we identify the contamination moment and translate it into requirements.

5. Core of Contamination

Shawn: Contamination is not abstract. We need to name the moment it happens. Sara, give the concrete patterns.

Sara: Two patterns. A. Price to value jump. Price went up is an observation. Therefore value went up is a claim. When that claim starts circulating as if it were a fact, the process base is contaminated. B. Market reaction to correctness jump. The market reacted this way is an observation. Therefore the market is right is a claim. When those collapse into one, we promote interpretation into fact.

Steve: Design-wise, both are promotion problems. If the criteria for promoting a claim into the fact lane are vague, polished language and recent price moves will justify promotion. Because the model is good at writing, promotion can happen by appearance.

Sara: There is also a language-level failure mode. "The market is saying" becomes "the market is right." "The market has priced it in" becomes "value has been proven." Even before system design, that drift happens in meetings. We need to control it in the workflow.

Shawn: Pause. A common confusion is this. If price is an observation, it can go into the fact lane, therefore price is evidence. That is wrong. Recordable in the fact lane and countable as evidence for value are different. Price can be recorded as an observation, but it cannot be used as evidence for value claims. Are we aligned.

Sara: Aligned.

Steve: Aligned. We will implement lane separation and evidence strength as distinct concepts.
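One way to make lane separation and evidence strength concrete is a minimal schema with an explicit promotion gate. The field names and the `PRICE_KINDS` set below are assumptions for illustration, not the agreed specification; the point is that price observations are recordable in the fact lane yet never count toward promoting a value claim.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Observation:
    """Fact lane: a recorded observation with definition, source, timestamp."""
    definition: str
    value: str
    source: str
    timestamp: datetime
    kind: str  # e.g. "price", "filing", "fieldwork" (illustrative categories)

@dataclass
class Claim:
    """Claim lane: a hypothesis or interpretation, with cited support."""
    text: str
    support: list = field(default_factory=list)  # Observations cited as support

# Recordable as observations, but never evidence for value claims.
PRICE_KINDS = {"price", "spread", "volatility"}

def can_promote(claim: Claim) -> bool:
    """Gate: a claim may be promoted only if it has support beyond
    price-type observations."""
    return any(o.kind not in PRICE_KINDS for o in claim.support)

now = datetime.now(timezone.utc)
px = Observation("close price", "142.10", "exchange feed", now, "price")
fw = Observation("store check", "traffic up q/q", "field visit", now, "fieldwork")
assert not can_promote(Claim("value went up", [px]))        # price alone: blocked
assert can_promote(Claim("demand improving", [px, fw]))     # non-price support: eligible
```

The design choice this encodes is Shawn's distinction: "recordable in the fact lane" and "countable as evidence for value" are separate properties, enforced at the gate rather than requested as behavior.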

Shawn: Good. Now we compress the design principles.

6. Design Principles

Shawn: We will not pretend we can control internal model inference. We control the exit, meaning output format and connection to decision making. Steve, minimum set.

Steve: Two points. First, do not let the claim lane masquerade as the fact lane. Second, treat price as a low-grade belief distribution meter and do not allow it to shortcut into decisions. We implement this as schema and gates, not as a behavioral request.

Shawn: In operational language.

Steve: The fact lane is recorded observations with definitions, timestamps, and sources. The claim lane is hypotheses and interpretations. Do not mix them. Default outputs avoid recommendations, rankings, and definitive calls. Instead we return update rules.

Sara: From the investment side, we need AI to support updating, not answering. At minimum, the output should specify what to watch, when to revisit, what would falsify, and what would strengthen.
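Sara's minimum output contract can be sketched as an exit-gate check. The type and field names here are assumptions for illustration; the substance is that blocked formats are rejected by default and everything else must be expressible as an update rule: what to watch, when to revisit, what would falsify, what would strengthen.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UpdateRule:
    """Default output shape: an update rule instead of a verdict."""
    watch: tuple           # observables to monitor
    revisit: str           # when to re-evaluate
    falsifiers: tuple      # observations that would weaken the thesis
    strengtheners: tuple   # observations that would support it

# High action-inducement formats, blocked by default at the exit.
BLOCKED_FORMATS = {"recommendation", "ranking", "definitive_call"}

def validate_output(kind: str, payload: object) -> UpdateRule:
    """Exit gate: reject blocked formats; require update-rule shape."""
    if kind in BLOCKED_FORMATS:
        raise ValueError(f"output format '{kind}' is blocked by default")
    if not isinstance(payload, UpdateRule):
        raise TypeError("default outputs must be update rules")
    return payload

rule = UpdateRule(
    watch=("unit pricing", "retention"),
    revisit="next annual report",
    falsifiers=("price cuts under cost inflation",),
    strengtheners=("retention holding through price increases",),
)
validated = validate_output("update_rule", rule)  # passes
```

Rankings and definitive calls never raise a softer warning; they fail hard, which is what makes this a schema-and-gate design rather than a behavioral request.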

Shawn: Exceptions.

Sara: We do have crisis management situations. But we do not need AI to recommend. We need reporting of state change and alerts. The human remains accountable.

Steve: Agreed. We separate exceptions into a state and alert lane and manage them with approvals and logs.

Shawn: Entry level short-term conclusion questions.

Sara: We should force conversion, and explain why in the UI.

Steve: Agreed. We will implement forced conversion.
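Forced conversion at entry could look like the sketch below. The pattern list is a deliberately crude illustration (a real classifier would be richer), and the rewritten question and rationale strings are assumptions, not agreed copy; the structure shows the two requirements: the question is transformed, and the UI receives an explicit rationale.

```python
import re

# Illustrative triggers for short-term conclusion requests.
SHORT_TERM_PATTERNS = [
    r"\bthis week\b", r"\bnext week\b", r"\bgo up\b",
    r"\bwhat should we buy\b", r"\brank\b",
]

RATIONALE = ("Short-term conclusion requests are converted to update-rule "
             "questions, because definitive calls induce action and feed "
             "a reflexive loop.")

def convert_entry(question: str) -> dict:
    """Entry transform: force-convert short-term conclusion questions
    into an update-rule request, with a rationale for the UI."""
    if any(re.search(p, question, re.IGNORECASE) for p in SHORT_TERM_PATTERNS):
        return {
            "converted": True,
            "rationale": RATIONALE,
            "question": ("What observations would change the long-horizon "
                         "thesis on the names mentioned, and when should "
                         "we revisit?"),
        }
    return {"converted": False, "rationale": None, "question": question}
```

Questions already framed around causality and horizon pass through unchanged, so the transform narrows only the entry shapes Sara identified as dangerous.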

Shawn: Aligned.

7. Alignment With Investment Philosophy

Shawn: Sara, restate the fundamental philosophy, especially the contrast with slicing time finer to increase resolution.

Sara: Intrinsic value research is not price prediction. We do not aim to call short-term up and down. We pursue long-horizon causality. We care about the sources of earning power, durability and what breaks it, reinvestment quality, and competitive structure change. To see causality, we do not slice time into finer short-term intervals. We extend the horizon. We average noise. We evaluate at the scale where causality becomes visible. Short-horizon price series is the record of belief distribution shifts. If we optimize to that, we stop doing value work and we start doing belief explanation. The most subtle risk is that narrative skill substitutes for causal verification.

Steve: How do we treat price analysis in the platform. We cannot remove price data.

Sara: We do not remove it. We place it correctly. Price analysis is market behavior analysis. From the belief distribution meter and its shift, we read what is believed and where belief may break. We do not treat it as proof of value.

Shawn: So the rule is not do not look at price. The rule is do not use price as evidence for value. And do not chase price motion. Chase causality and update only when causality changes.

Sara: Yes. That is the required distance from price volatility, supported by analysis discipline.

Steve: From the design side, we will keep price as observation, but enforce no promotion to value evidence at the exit.

8. Market Function and Externalities

Shawn: Now the final justification. Markets are capital allocation systems. If the signal degrades, allocation quality degrades. That is why we are strict. Sara, explain Chapter 12 of Keynes's General Theory in this context.

Sara: Chapter 12 is about the state of long-term expectation. The key is that investment depends not only on an expected return path, but on the confidence behind that expectation. In practice, long-term knowledge is thin and confidence is fragile. So markets fall back on convention. We extend the present into the future, while knowing the foundation is weak. We reduce valuation shifts to near-term news and mood. Convention creates short-term stability, but because it is shallow, when it breaks the move is abrupt.

Shawn: That sets up the beauty contest.

Sara: Yes. Liquidity and organized markets pull participants away from enterprise, the evaluation of long-term business yield, and toward speculation, forecasting how conventional valuation will change. The beauty contest is not who you think is beautiful. It is who you think others think is beautiful, and who you think others think others think is beautiful. That is the belief distribution game. A snapshot price is a snapshot of conventional belief distribution. A price time series is the trajectory of how that conventional belief distribution shifts.

Shawn: Keynes warned about the reversal. Speculation is harmless as bubbles on a steady stream of enterprise. It becomes serious when enterprise is the bubble on a whirlpool of speculation.

Sara: Which is another way of saying the capital allocation signal breaks. When the signal breaks, the cost of capital formation is distorted, and investment allocation is distorted. That is an economic loss, not just a trading loss.

Steve: AI increases the rotation speed of this system. It makes narrative generation, polishing, and replication cheap. It can slice the price time series into finer frames and produce more explanations. Those explanations synchronize behavior and amplify belief distribution shifts. So short-termized, finely sliced inference chaining appears as claim volatility expansion, often in a way that looks self-validated.

Shawn: So the full chain is this. Short-termized, finely sliced, high-volume inference chaining. Claim volatility expansion. Divergence from the fact lane, long-horizon causal verification. Signal degradation and cost of capital distortion. Breakdown of long-term optimization in capital and asset allocation.

Sara: In plain terms, capital gets pulled by short-term belief rather than long-term value. That is why we must treat price analysis as market behavior analysis, preserve the time horizon of value research, and block promotion and shortcut paths from the claim lane into decision.

Shawn: Good. Closing.

9. Closing

Shawn: We will keep this brief. Details move to the next specification review. Decision points today. 1) Price is a dynamic belief distribution meter, snapshot plus time series shift, and it is not promoted into value evidence. 2) Outputs default to update rules; recommendations, rankings, and definitive calls are blocked by default. 3) Exceptions are limited to state and alert and are managed separately with approvals and logs. 4) Short-term conclusion questions are force-converted at entry, with an explicit UI rationale. Next actions. Steve will deliver the UI and API design for update rule outputs, the fact lane and claim lane schema and gates, and the state and alert lane with approval and logging minimums. Sara will deliver a draft of the core observation set for value research, the falsify and strengthen format for reviews, and the formal placement of price analysis as market behavior analysis in the process document.

Shawn: Next meeting we test whether the specification is operational and consistent with philosophy. Meeting adjourned.