Let me be direct. The Securities and Exchange Commission is not producing new artificial intelligence-specific rules for investment advisors anytime soon. I made the full structural case for that in a recent newsletter article. The short version: the commission is shorthanded, the chair is deregulatory by conviction, and the Administrative Procedure Act sets a floor on rulemaking timelines that no amount of political will can compress below 18 months. You are looking at 2029 at the earliest.
That is not a reason to relax. It is a reason to pay attention to the risk you are already carrying under the frameworks that exist right now.
The SEC’s 2026 Examination Priorities name AI governance explicitly: AI governance policies, vendor oversight documentation, supervision procedures for AI-assisted recommendations, and staff training records. Examiners are walking into RIA offices today and asking for those things. Not because a new rule compels them to, but because the existing investment advisor oversight framework already covers it. The firms that don’t have documentation ready are handing examiners a finding before the conversation starts.
The Examination Is Already Happening
I want to be specific about what examiners are asking for, because the gap between what most RIAs have and what examiners expect is larger than most principals realize.
The first ask is a written AI Acceptable Use Policy: a document that defines what tools are permitted, what is prohibited, and what requires supervisory approval before use. It should specify which categories of client data may and may not be processed by AI systems, and it should address the specific risks AI introduces, such as fabricated output reaching clients and client data leaking into third-party models. Most firms do not have one. Of the firms that do, most have not updated theirs since the AI landscape looked entirely different 12 months ago.
The second ask is vendor oversight documentation for every third-party AI tool in the firm’s stack. Not a list of tools. Documentation that shows someone at the firm evaluated the tool before advisors started using it with client data. What does the vendor do with that data? Where is it stored? What are the retention policies? What security representations has the vendor made in writing? If you are using an AI meeting tool, an AI email assistant, an AI proposal generator or any vendor platform that has embedded AI into its workflow, you have vendor oversight obligations under the 2024 Regulation S-P amendments. The compliance deadline for smaller advisor firms is June 3, 2026. That deadline is not hypothetical. It is on the calendar.
The third ask is documentation showing that AI-assisted recommendations are subject to human supervisory review before they reach clients. This is what examiners and practitioners call keeping a human in the loop. Firms need evidence that the review is actually happening, in the form of records that survive an examination.
The Risk You Probably Haven’t Fully Mapped: Shadow AI
Of all the AI governance risks accumulating for RIAs right now, shadow AI is the least visible to principals and the fastest growing in terms of compliance exposure.
Your advisors are using AI tools the firm has never reviewed, approved or documented. I say this without qualification because I have not found an RIA where it is not true. Consumer AI tools are free, capable and available on every personal device. Advisors use them for meeting prep, client communication drafts, research summaries, portfolio commentary and a dozen other tasks that the firm has no visibility into. The data that flows into those tools (client names, account details, portfolio positions, full client addresses) leaves the firm without a trace.
The principals who believe this is not happening at their firm are misguided. I have had this conversation enough times to say that plainly. The question is not whether shadow AI exists inside your walls. The question is whether you are going to surface it and govern it before an examiner or a data breach surfaces it for you.
The practical implication: an AI tool inventory is not just a governance best practice. It is the foundational step without which none of the rest of the program works. You cannot write an Acceptable Use Policy for tools you do not know about. You cannot document supervisory review of AI-influenced recommendations if you do not know which recommendations were influenced by AI. You cannot train staff on policies that do not address the tools they are actually using.
Five Things to Build Before June 2026
1. AI tool inventory. Survey every tool in use across the firm, including tools used informally by staff on personal devices. Ask advisors and operations staff directly. The goal is a complete picture of what AI is touching client data, recommendation workflows, and client communications. This is the prerequisite for everything else.
2. Written AI acceptable use policy. Classify tools into approved, limited use, and prohibited categories. Specify what data types may and may not be processed by AI systems. Define what requires supervisory approval before use. Include a clear process for how staff request approval for new tools. Update it at least annually. Quarterly is more realistic given the pace of change.
3. Vendor due diligence documentation. For every third-party AI tool that touches client data, a documented review addressing the vendor’s data handling practices, storage locations, retention periods, security posture, and compliance representations. Map each vendor relationship against your Regulation S-P obligations. If the vendor cannot answer the due diligence questions, that itself is a finding you need to document and act on before the June 2026 compliance deadline.
4. Supervisory procedure update. Your Written Supervisory Procedures do not address AI. They need to. Specifically: how AI-assisted recommendations are reviewed before they reach clients, who is responsible for that review, and how the review is documented. A policy that says recommendations should be reviewed by a human is not sufficient. A procedure that produces a record that a human reviewed it is what survives an examination.
5. Staff training record. “The policy exists” is not a defensible answer. “The policy exists, was communicated to staff on a specific date, and training completion was documented” is. Build a training record that shows the program is operational, not theoretical. Include the date, the content covered, and the staff who completed it.
Why Building Now Is the Right Business Decision
There is a due diligence question that more institutional clients and sophisticated prospects are asking in 2026: how does your firm govern its AI use? The firms that can answer that question with a coherent, documented program are winning business that unprepared firms lose. It is not a compliance advantage. It is a trust advantage.
The Oasis Group’s AI Readiness Index is a free diagnostic tool built specifically for fiduciary wealth management firms. It evaluates governance and technology maturity across 70 questions and identifies exactly where the gaps are. You can access it at theoasisgrp.com. If you are unsure where your firm stands, that is the place to start.
This is the second article in a four-part series on the AI governance gap in wealth management. The anchor piece, “The SEC Isn’t Coming. That’s the Problem,” is available on the Oasis Group website.