AI Forecasts Push Securities Law Into Algorithmic Territory

Securities law still turns on an old question: what did people know, and when did they know it? But firms now make market-moving statements with the help of systems that no single person fully understands. Forecasting models, automated trading tools, and investor-facing analytics can generate numbers and narratives that look authoritative while hiding assumptions, gaps, and shortcuts in code and training data.

The Regulatory Premise: Old Rules, New Actors

Regulators have started to say out loud what the law already implied. Automated systems do not sit outside the intent and duty framework; they sit inside it. When a company leans on an opaque model to shape guidance, metrics, or trading behavior, the decision to rely on that model becomes the human act that courts and enforcement staff will examine. The question for issuers and their counsel is shifting from “what did you say” to “how did the algorithm you trusted produce those words and numbers.”

The core anti-fraud architecture is already in place. Rule 10b-5 under the Securities Exchange Act prohibits untrue statements of material fact or omissions that make statements misleading in connection with the purchase or sale of a security. Courts have layered on a scienter requirement, limiting liability to cases where issuers act with intent to deceive or with reckless disregard for the truth. Parallel doctrines address market manipulation, including schemes that create artificial prices or deceptive trading signals, even when no false sentence appears in a filing. The line between misleading statements and pure omissions was drawn sharply in the Supreme Court’s 2024 decision in Macquarie Infrastructure Corp. v. Moab Partners, which held that Rule 10b-5(b) reaches misleading statements and half-truths rather than pure omissions, a distinction that will matter when AI-driven forecasts leave known model limitations off the page.

The internal-controls side of the house also matters. Exchange Act provisions on books and records and internal accounting controls require public companies to maintain reasonable systems to ensure that figures in financial statements and disclosures reflect reality. Those obligations do not stop at the edge of a data center. A forecasting engine that feeds revenue guidance or a model that informs key performance indicators becomes part of the internal control environment, not a black box sitting in a separate technical silo.

Regulators are increasingly explicit that technology neutrality cuts both ways. The Securities and Exchange Commission, the Commodity Futures Trading Commission, and other authorities have stressed that existing rules apply regardless of whether a human analyst, a spreadsheet, or an artificial intelligence model produces the output. The CFTC’s December 2024 staff advisory on the use of artificial intelligence by registrants makes that point in derivatives markets, while speeches and statements from SEC leadership on AI in finance repeat the same message for securities. Automated tools are treated as part of the firm’s decision-making apparatus, not as separate actors.

Forecasting, Guidance, and the Edge of Material Misstatement

Corporate forecasting has always relied on models. What has changed is the ease with which complex, machine-learning systems can ingest vast data sets and produce investor-ready graphics and narrative summaries. Finance teams can now generate model-driven scenarios that feel empirical and precise, even when the underlying relationships are fragile or untested. When those AI-assisted forecasts shape earnings guidance or investor presentations, they become candidates for scrutiny under familiar misstatement and omission standards.

The enforcement record around financial models offers a roadmap. Over the past decade, the SEC has brought cases where companies misdescribed how non-GAAP metrics were calculated, relied on flawed valuation engines, or failed to understand the operational data feeding key user statistics. In several matters, the problem was not that a model existed, but that management did not grasp its mechanics, did not reconcile internal calculations with public statements, or allowed optimistic outputs to stand without challenge. AI forecasting tools that operate with limited explainability fit neatly into this pattern when their outputs are pulled directly into guidance.

Regulatory staff have already flagged AI-related forecasting risks in examinations and public remarks. The SEC’s enforcement and examinations programs have highlighted AI and predictive data analytics in multiple alerts and speeches, including actions against advisers that overstated their AI capabilities. The March 2024 enforcement sweep against two investment advisers for false claims about AI capabilities, announced in SEC Press Release 2024-36 and analyzed in detail by practitioners in “Decoding the SEC’s First AI-Washing Enforcement Actions”, shows how quickly AI-related forecasts and marketing statements can be reframed as misrepresentations.

For issuers, the internal-controls overlay is just as important as the disclosure language. Firms that allow AI-generated numbers to influence public statements will be expected to show how they validated the model, how they tested it against historical performance, and which human decision-makers approved its use in external messaging. Documentation of model governance, challenge processes, and override decisions is likely to matter as much as the slides themselves. A lack of explainability may not only fail to help the defense; it may also signal that the company did not exercise the level of care that securities law expects around material forward-looking information.
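One way to make that validation record concrete is a lightweight backtesting harness that compares a model’s historical forecasts against realized results before anyone approves its use in external messaging. The Python sketch below is purely illustrative: the function and field names, the error metric, and the ten percent tolerance are assumptions chosen for the example, not a regulatory or accounting standard.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class BacktestResult:
    """Record of how a forecasting model performed against known actuals."""
    model_version: str
    period_start: date
    period_end: date
    mape: float            # mean absolute percentage error over the window
    within_tolerance: bool
    reviewed_by: str       # human approver, named for the audit trail


def backtest_forecasts(model_version: str, forecasts: list[float],
                       actuals: list[float], period: tuple[date, date],
                       reviewed_by: str, tolerance: float = 0.10) -> BacktestResult:
    """Compare past forecasts with realized figures before the model is
    cleared for use in guidance. `tolerance` is an illustrative error budget."""
    if len(forecasts) != len(actuals) or not actuals:
        raise ValueError("forecasts and actuals must be aligned and non-empty")
    errors = [abs(f - a) / abs(a) for f, a in zip(forecasts, actuals) if a != 0]
    if not errors:
        raise ValueError("all actuals are zero; percentage error is undefined")
    mape = sum(errors) / len(errors)
    return BacktestResult(model_version, period[0], period[1],
                          mape, mape <= tolerance, reviewed_by)
```

The point is less the arithmetic than the artifact: a dated, attributed record showing that the comparison happened before the output reached investors.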

AI-Enhanced Trading and Market Manipulation

Securities and derivatives regulators have spent years developing theories of liability for algorithmic trading. High-frequency strategies, order anticipation, and spoofing schemes have all produced actions where agencies argued that code-driven behavior distorted markets in ways that met long-standing manipulation standards. The emergence of AI systems that can learn, adapt, and refine trading logic on their own raises the stakes. Reinforcement-learning tools and advanced predictive engines can, in principle, discover profitable strategies that regulators view as abusive, without anyone explicitly coding a manipulative plan.

Here, too, the legal building blocks are familiar. Manipulative intent can be inferred from the design of a strategy, its foreseeable effects, and the surrounding communications, not only from a single email or confession. If a trading algorithm is configured or allowed to operate in ways that routinely create false impressions of supply and demand, layer orders that are never intended to execute, or trigger price dislocations, authorities can argue that the firm acted with the requisite state of mind by deploying and failing to supervise that system. The fact that the algorithm tweaked its own parameters along the way does not sever that chain.

Recent global guidance makes this direction clear. The CFTC’s December 2024 advisory on AI in CFTC-regulated markets links AI use directly to existing system-safeguards, risk controls, and market-integrity duties. In Europe, the European Securities and Markets Authority issued its May 2024 public statement on the use of AI in investment services, emphasizing that MiFID II obligations apply fully when firms deploy AI in trading, advice, or customer interactions. The practical takeaway for counsel is simple: regulators will evaluate AI-driven trading systems by their design, oversight, and outcomes, not by whether a human pressed the button.

Firms that deploy AI-based trading or hedging tools should therefore treat model governance as an extension of their traditional market-abuse defenses. That means documented scenario testing, controls that prevent strategies from operating outside defined parameters, and escalation paths when surveillance systems flag unusual behavior. It also means understanding how third-party vendors develop and monitor any AI components embedded in their products. A firm that cannot explain why an algorithm traded in a particular pattern will struggle to persuade enforcers that it met its obligations to prevent and detect manipulative activity.
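As a rough illustration of what “defined parameters” can mean in practice, the sketch below shows a pre-trade guardrail that checks a model-generated order against configured limits and returns any violations for escalation. The limit values, class names, and thresholds are hypothetical; a production system would plug into the firm’s actual order-management and surveillance stack.

```python
from dataclasses import dataclass


@dataclass
class Order:
    symbol: str
    quantity: int
    limit_price: float
    reference_price: float  # most recent observed market price


@dataclass
class RiskLimits:
    """Illustrative per-order limits a firm might impose on an AI-driven strategy."""
    max_notional: float = 5_000_000.0   # hypothetical notional ceiling
    max_price_deviation: float = 0.02   # 2% collar around the reference price


def check_order(order: Order, limits: RiskLimits) -> list[str]:
    """Return any limit violations; an empty list means the order may proceed.
    Violations should be escalated for human review, not silently discarded."""
    violations = []
    notional = abs(order.quantity) * order.limit_price
    if notional > limits.max_notional:
        violations.append(f"notional {notional:,.0f} exceeds {limits.max_notional:,.0f}")
    deviation = abs(order.limit_price - order.reference_price) / order.reference_price
    if deviation > limits.max_price_deviation:
        violations.append(f"price deviates {deviation:.1%} from reference")
    return violations
```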

The New Face of Fraud: AI Washing and Capability Misrepresentation

The clearest bridge between AI and securities enforcement so far has been marketing, not modeling. In March 2024, the SEC brought its first settled cases against investment advisers that overstated their use of artificial intelligence and machine learning, accusing the firms of making false and misleading statements about supposed AI-driven strategies and data pipelines. That sweep is described in SEC Press Release 2024-36 and dissected for corporate governance audiences in the Harvard Law School Forum analysis.

The trend has already moved beyond asset managers. In January 2025, the SEC announced settled charges against restaurant-technology company Presto Automation for misstatements about the capabilities and deployment of its drive-through voice product, as described in the Presto Automation consent order. Commentators quickly cast the case as a reporting-company variant of false AI claims, an extension of the same basic principle that companies must tell the truth about how and where they use AI.

Chair Gary Gensler repeatedly compared exaggerated AI claims to greenwashing during his tenure, warning that companies cannot slap an artificial-intelligence label on conventional tools or aspirational road maps and expect to escape scrutiny. His February 2024 remarks at Yale Law School on AI, finance, and misleading AI claims, reinforced by follow-on comments from the SEC’s enforcement director, frame AI exaggeration as a straightforward securities-fraud problem. The novelty lies in the buzzword. The legal theory is the same: issuers that make specific claims about AI capabilities, proprietary algorithms, or model-driven performance must ensure those claims match reality.

This narrows the safe space for what companies sometimes describe as tech-forward optimism. Traditional puffery defenses argue that vague statements of ambition or corporate philosophy are immaterial. But when a company makes specific claims about its AI capabilities, the architecture of its data models, or the measurable effect of algorithms on performance, those statements begin to look less like general cheerleading and more like verifiable, quantitative assertions. Disclosures that attribute insights to an unnamed “AI engine” still require ordinary verification. A footnote stating that “results are generated by an AI model” does not excuse management from the duty to confirm that the model and its outputs are accurately described.

Building Disclosure Controls Around AI Systems

The firms best positioned to weather AI-related enforcement will be those that treated model risk as part of their disclosure controls from the start. That begins with mapping where AI systems touch information that may reach investors: revenue or demand forecasts, risk dashboards, customer or user metrics, operational data, and narrative summaries that appear in management commentary or investor-relations content. For each of those intersections, companies can build a documented review pipeline that brings together legal, finance, risk, data, and product teams before an AI-generated figure or statement appears in public.

Model validation should sit alongside traditional disclosure checklists. That includes confirming the provenance and quality of training and input data, assessing whether models perform differently across market conditions, and testing how sensitive outputs are to key assumptions. It also means creating audit trails that record when a model version changed, who approved that change, and whether downstream disclosures were updated to reflect any shift in methodology. International work such as the OECD’s 2021 report on artificial intelligence, machine learning, and big data in finance offers a useful vocabulary for thinking about these risks, even if it does not create binding rules.
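A minimal sensitivity exercise illustrates the kind of testing that can sit in that audit trail: perturb each key assumption, rerun the model, and record how far the headline output moves. The sketch below assumes a generic forecasting callable and toy assumption names; it shows the shape of the exercise rather than a validated methodology.

```python
from typing import Callable, Dict


def sensitivity_report(forecast_fn: Callable[[Dict[str, float]], float],
                       base_assumptions: Dict[str, float],
                       shock: float = 0.10) -> Dict[str, float]:
    """Shift each assumption by +/- `shock` (10% here, purely illustrative)
    and report the resulting swing relative to the baseline forecast.
    Large swings mark assumptions that deserve attention in disclosure review."""
    baseline = forecast_fn(base_assumptions)
    report: Dict[str, float] = {}
    for name, value in base_assumptions.items():
        upside = dict(base_assumptions, **{name: value * (1 + shock)})
        downside = dict(base_assumptions, **{name: value * (1 - shock)})
        swing = forecast_fn(upside) - forecast_fn(downside)
        report[name] = swing / baseline if baseline else float("nan")
    return report


def toy_revenue_model(a: Dict[str, float]) -> float:
    """Hypothetical stand-in for a far more complex forecasting engine."""
    return a["units"] * a["price"] * (1 - a["churn"])


print(sensitivity_report(toy_revenue_model,
                         {"units": 1_000_000, "price": 4.99, "churn": 0.08}))
```

Assumptions that move the output sharply are the ones most likely to need explicit treatment in cautionary language and disclosure review.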

At the board level, expectations for AI oversight are rising. According to the NACD’s 2025 Public Company Board Practices and Oversight Survey, more than 62 percent of director respondents are now setting aside agenda time for full-board AI discussions, reflecting boards’ transition from AI education to strategic AI governance. Audit committees in particular may be expected to ask how AI tools affect estimates, valuations, and controls over financial reporting, and whether internal audit plans reflect that impact.

External auditors face parallel responsibilities. As AI-driven systems increasingly influence the financial data that auditors examine, audit firms must evaluate whether management has adequate controls over AI models that affect financial reporting. This includes assessing data quality, model validation processes, and the competence of personnel overseeing AI systems. Auditors may need to engage specialists to test complex models and ensure that AI-generated estimates comply with accounting standards.

Investor-relations teams sit at the final gate. Firms can reduce risk by setting clear protocols for how AI appears in scripts, Q&A prep materials, investor decks, and website content. Those protocols might require that any reference to “AI-powered” results be tied to a defined, internally documented system, that quantitative claims derived from AI analysis be cross-checked against alternative methods, and that forward-looking statements drawing on model outputs include appropriate cautionary language. In practice, AI governance becomes an essential component of the existing alignment between public communications, risk factors, and internal records.

Litigation and Enforcement Outlook

AI-related securities litigation is no longer a theoretical possibility. Tracking services have documented a steady rise in filings where artificial intelligence features prominently in the allegations, whether through claims that companies overstated their AI capabilities, misrepresented AI-driven revenue potential, or failed to disclose reliance on fragile model-based forecasts. Commentary from securities litigators reviewing the Presto Automation settlement and related actions frames these suits as a new front in familiar disclosure wars rather than as entirely new categories of claim.

Those suits are already shaping how discovery may unfold when AI is at issue. Plaintiffs seek not only emails and slide decks, but also model documentation, version histories, and internal discussions about limitations or red flags. They may ask for prompt logs, validation reports, and vendor correspondence in order to test whether public statements about AI tools matched internal reality. Companies that cannot produce a coherent record of how models were selected, tested, and monitored will have a harder time persuading courts that any misstatements were accidental or immaterial.

Private litigation risk extends beyond securities class actions. Shareholders may bring derivative suits alleging breach of fiduciary duty where directors fail to oversee AI systems that cause material harm. Customers and business partners may pursue contract claims or fraud allegations based on AI capability misrepresentations. Directors and officers liability insurance policies may face coverage disputes over AI-related claims, particularly where policies exclude losses from technology failures or data breaches.

On the enforcement side, recent actions and public statements suggest three focal points. First, supervision of AI-assisted trading and risk tools, including whether firms updated policies, procedures, and controls in line with staff advisories on AI use. Second, accuracy of AI-related branding and claims, from “proprietary algorithms” to “AI-powered platforms,” particularly where those claims are paired with fundraising or public offerings. Third, alignment between AI-driven metrics and the broader internal-control framework.

As accounting and governance standards evolve, the old line about “trust but verify” is being updated for the model era: trust the algorithm only after someone has audited it. In August 2025, the SEC underscored that shift by creating an internal AI task force led by its new chief AI officer, Valerie Szczepanik, to centralize AI projects across the agency and help staff identify emerging issues in filings, examinations, and market data that may warrant rulemaking or enforcement attention.

The Private Securities Litigation Reform Act’s safe harbor provisions for forward-looking statements may offer limited protection for AI-generated forecasts. Courts will likely examine whether cautionary language adequately disclosed the limitations and uncertainties inherent in AI models, and whether companies had a reasonable basis for model-driven projections at the time they were made. Boilerplate warnings about model risk may prove insufficient if the company failed to implement meaningful validation processes.

How Lawyers Should Rethink Model Risk

Securities law is moving into algorithmic territory without waiting for a single, AI-specific statute. Instead, regulators and courts are applying familiar concepts of intent, materiality, controls, and market integrity to systems that generate numbers and language on a scale that would have been unthinkable a decade ago. For counsel, the practical question is no longer whether AI is inside the securities framework, but how to manage the fact that it is.

A basic playbook is starting to emerge. Inventory all AI systems that influence disclosures, metrics, or trading behavior, including tools provided by vendors. For each, document the purpose, data sources, validation approach, and escalation paths when outputs look wrong. Align model-governance documentation with disclosure controls and financial-reporting policies so that any number or narrative derived from an algorithm faces the same scrutiny as traditional information. Train executives and investor-relations staff on how to describe AI capabilities accurately, avoid exaggerated claims, and handle questions from analysts and journalists about model-driven insights.
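Even a simple structured record per system, kept under version control, turns that inventory into something counsel can actually produce in an examination or in discovery. The fields below are illustrative assumptions about what such a record might capture; they are not drawn from any rule or framework.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AISystemRecord:
    """One entry in a hypothetical inventory of AI systems that touch
    disclosures, metrics, or trading behavior."""
    name: str
    owner: str                           # accountable business or control owner
    purpose: str                         # e.g. "quarterly demand forecast"
    data_sources: List[str] = field(default_factory=list)
    vendor: Optional[str] = None         # third-party provider, if any
    validation_approach: str = ""        # backtesting, challenger models, etc.
    feeds_public_disclosure: bool = False
    escalation_contact: str = ""         # who reviews outputs that look wrong


inventory = [
    AISystemRecord(
        name="demand-forecaster-v3",     # illustrative system name
        owner="FP&A",
        purpose="Quarterly revenue guidance scenarios",
        data_sources=["ERP bookings", "web telemetry"],
        validation_approach="Rolling backtests against reported actuals",
        feeds_public_disclosure=True,
        escalation_contact="model-risk team",
    ),
]
```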

Finally, keep a close eye on evolving guidance from securities, derivatives, and corporate-governance authorities in multiple jurisdictions. The details differ, but the message is consistent: automated systems do not sit outside legal responsibility. They are part of it. The firms that fare best in this environment will be those that treat algorithms not as mysterious oracles, but as tools that must be understood, challenged, and documented with the same care as any other source of market-moving information.

This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
