Dentons Goes Direct to OpenAI as Big Law Splits on AI Strategy
The world’s largest law firm has chosen to sit as close to the AI engine as possible. Dentons’ new collaboration with OpenAI gives its UK, Ireland and Middle East division early access to the latest large language models, firmwide ChatGPT Enterprise, and direct engineering support on custom tools. For a profession still weighing whether to buy AI through intermediaries or plug into general-purpose platforms, the global giant has planted a flag on the “build on OpenAI” side of the line.
From Pilot Programs to Platform Bet
Legal IT Insider reports that the deal between Dentons UKIME (UK, Ireland and Middle East) and OpenAI follows a pilot that began in March, tested generative tools across 25 use cases, and culminated in an October rollout of ChatGPT Enterprise to all UKIME employees. The collaboration is non-exclusive and open-ended, which matters for clients who want assurance that the firm is not locked into one model family forever. The agreement combines early access to new models, application programming interfaces (APIs), and assistance from OpenAI engineers when Dentons builds firm-specific products on top.
Dentons UKIME CEO Paul Jarvis frames the move as a quality play, calling direct engineering access to the latest models and firmwide ChatGPT Enterprise “the most effective and direct strategy for delivering high-accuracy, high-impact legal work.” The firm stresses that OpenAI does not get client data and that lawyer oversight remains non-negotiable. Data science lead Bugra Ozer highlights speed and flexibility: in his account, new models turn up “the next day,” rather than after weeks of waiting for an intermediary to wrap and ship them.
The Dentons strategy extends beyond OpenAI. In June 2025, Dentons partnered with Legora to roll out collaborative legal AI tools across its European offices, underscoring the firm’s portfolio approach to AI platforms. This multi-vendor strategy lets Dentons adopt specialized tools where they fit best while keeping flexibility across model providers.
Two Paths Emerge for Firms
Until very recently, the default Big Law strategy was to buy AI as a platform. Tools like Harvey promised law-specific workflows, guardrails and analytics without firms having to manage raw models themselves. That approach is still expanding. On December 10, CMS announced an enterprise-wide rollout of Harvey to 7,000 lawyers and staff across more than 50 countries, following a trial that began in March 2024; the firm reports that 93 percent of users saw productivity gains.
Harvey’s own trajectory underlines how powerful that platform layer has become. A separate Legal IT Insider report notes that Harvey has raised $160 million at an $8 billion valuation in its latest round. The platform now works with firms including A&O Shearman, Ashurst, Latham & Watkins, Mayer Brown and Orrick. In August 2025, Latham & Watkins signed an enterprise license for firmwide rollout to its 3,600-plus attorneys globally. Harvey is also rolling out “Shared Spaces” to let firms and clients collaborate on workflows and playbooks in a shared environment.
Dentons is not rejecting that platform layer outright. Its OpenAI agreement is non-exclusive, and the firm uses a portfolio of tools including both OpenAI for UKIME and Legora across Europe. But the OpenAI collaboration formalizes a second path that many firms are now exploring: keeping the primary supplier relationship with the general-purpose model provider, and building toolchains, prompts and guardrails in-house.
A Third Path: Building to Monetize
A hybrid approach has also emerged. In April 2025, A&O Shearman announced a partnership with Harvey to create agentic AI workflow tools for antitrust filing analysis, cybersecurity, fund formation and loan review. The tools leverage senior lawyer expertise and will be rolled out internally and sold to clients and other law firms, with A&O Shearman sharing in the software revenue.
This revenue-sharing model represents neither pure platform adoption nor pure direct-model access. It sits between the two: using Harvey as the underlying platform while developing proprietary agents that become productized offerings. For firms with deep domain expertise and a history of knowledge productization, this path offers a way to offset AI investment through external licensing revenue.
Incumbents and Startups Race for Position
The Dentons announcement lands in a market where incumbents and startups are racing to define the “default” legal AI stack. Thomson Reuters has publicly set its sights on becoming “the AI platform for lawyers,” using its Westlaw content and products like CoCounsel to compete with general-purpose systems while still relying on large language models from players such as OpenAI under the hood.
LexisNexis has taken a partnership route. A June 2025 strategic alliance between LexisNexis and Harvey integrates LexisNexis’ proprietary models and U.S. primary law into the Harvey platform, with citation support powered by Shepard’s and LexisNexis knowledge graphs. Lawyers inside Harvey can ask questions of LexisNexis’ Protégé assistant and receive answers grounded in that corpus, without switching tools.
Meanwhile, general-purpose AI providers remain the gravitational center. OpenAI is already the competitor everyone benchmarks against, even when they are building “wrapper” products on top of its models. For many buyers, the question is no longer whether OpenAI will enter the legal market, but how deeply law-specific vendors will depend on OpenAI’s release cycle and performance curves.
Adoption Outpaces ROI Measurement Systems
The numbers suggest that Dentons is surfing, not starting, a wave. The Thomson Reuters Institute’s 2025 Generative AI in Professional Services Report finds that 26 percent of legal organizations are already actively using generative AI, up from 14 percent in 2024, and 95 percent of legal professionals expect it to become central to workflow within five years. Awareness and intent are near universal, and usage is no longer confined to a few early adopter firms.
At the same time, independent research shows that return-on-investment metrics are still immature. RSGI’s report, “Defining the Impact of Legal AI: How Harvey Customers Realise Value,” based on interviews with 40 law firms and in-house teams, describes tools that are embedded in day-to-day work and widely liked, but only selectively measured. Legal IT Insider’s deeper dive into the study notes that most respondents track adoption, usage and subjective satisfaction rather than hard financial savings, even as some power users report dramatic time gains.
Dentons’ OpenAI move sits squarely in that tension. Ozer points to usage spikes because employees already knew ChatGPT from personal life, and Jarvis emphasizes speed and bespoke workflows as the headline benefits. Those are rational goals, but they sharpen a question every law firm leader now faces: if AI becomes embedded infrastructure, how will the firm prove that a direct platform strategy outperforms a vendor-first approach on profitability, risk and client value?
Direct Access Brings Governance Responsibilities
Direct collaboration with a foundation model provider changes governance obligations. When a firm buys a point solution, it can outsource large parts of the safety case. When it builds internal tools on a general-purpose model, it inherits responsibilities around prompt design, retrieval architecture, evaluation, monitoring and control of training data.
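To make that concrete, the sketch below shows what even a minimal in-house evaluation harness might look like: model outputs scored against lawyer-approved reference answers, with anything below a threshold flagged for mandatory human review. It is illustrative only; the data structure, function names and threshold are hypothetical, the word-overlap score is deliberately crude, and a real firm would plug in its own metrics, logging and model-call layer.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    """One lawyer-reviewed test case for an internal AI workflow (hypothetical schema)."""
    prompt: str            # the question put to the model
    reference_answer: str  # answer approved by a supervising lawyer
    model_output: str      # what the firm's tool actually produced

def overlap_score(output: str, reference: str) -> float:
    """Crude word-overlap score in [0, 1]; a real harness would use stronger metrics."""
    out_words, ref_words = set(output.lower().split()), set(reference.lower().split())
    return len(out_words & ref_words) / max(len(ref_words), 1)

def run_evaluation(cases: list[EvalCase], review_threshold: float = 0.6) -> dict:
    """Score every case and flag low scorers for mandatory human review."""
    scores, flagged = [], []
    for case in cases:
        score = overlap_score(case.model_output, case.reference_answer)
        scores.append(score)
        if score < review_threshold:
            flagged.append(case.prompt)
    return {
        "mean_score": sum(scores) / max(len(scores), 1),
        "flagged_for_review": flagged,  # these outputs should not reach a client unreviewed
    }
```

The point of even a toy harness like this is that the firm, not the vendor, owns the pass/fail criteria and the record of what was checked.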
Dentons emphasizes that OpenAI has no access to client data and that lawyers remain in the loop, which aligns with bar guidance on AI use and confidentiality obligations. The firm also highlights plans for UK data residency, designed to satisfy clients who require that their data stay within specific borders. Those assurances are now table stakes for any firm proposing to run client work through bespoke AI workflows, whichever vendor stack sits underneath.
There is also a competition and concentration angle. If many large firms build directly on a single model provider, governance teams have to watch not just firm-level risk but dependency risk. A release that changes model behavior, pricing or acceptable use terms can ripple across matters and jurisdictions. The flip side is that non-exclusive access gives firms leverage to adopt multi-model strategies over time, but only if their internal tooling is designed to be portable.
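One way to preserve that portability is to put a thin, firm-owned interface between internal workflows and whichever model provider sits underneath, so prompts, guardrails and knowledge bases are not written against a single vendor’s SDK. The sketch below is a simplified illustration of that idea under stated assumptions: the class and method names are hypothetical, and the provider-specific adapters are stubbed out rather than wired to any real API.

```python
from typing import Protocol

class LegalModelClient(Protocol):
    """Firm-owned interface; internal tools code against this, not a vendor SDK."""
    def complete(self, system_prompt: str, user_prompt: str) -> str: ...

class PrimaryProviderAdapter:
    """Stub adapter: in practice this would call the provider's API inside the firm's controls."""
    def complete(self, system_prompt: str, user_prompt: str) -> str:
        raise NotImplementedError("wire to the provider SDK within the firm's environment")

class AlternativeProviderAdapter:
    """Switching providers means writing a new adapter, not rewriting every workflow."""
    def complete(self, system_prompt: str, user_prompt: str) -> str:
        raise NotImplementedError

def summarise_clause(client: LegalModelClient, clause_text: str) -> str:
    """Example workflow: depends only on the interface, so it survives a provider change."""
    return client.complete(
        system_prompt="You are assisting a qualified lawyer; flag any uncertainty.",
        user_prompt=f"Summarise the obligations in this clause:\n{clause_text}",
    )
```

Prompts, retrieval logic and client-specific configuration then live behind the firm’s own interface, which is what makes a non-exclusive agreement practically, and not just contractually, reversible.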
Six Questions for Outside Counsel
For in-house teams, the Dentons announcement is a useful prompt to revisit outside counsel questionnaires. When a firm pitches a direct OpenAI collaboration as a differentiator, law departments can press for specifics that go beyond promotional claims. Helpful questions include:
- Which workflows currently depend on OpenAI models, and are those workflows optional or default for my work?
- What data leaves the firm’s own environment when those tools run, and under which OpenAI terms and data residency arrangements?
- How are outputs validated, and what human oversight controls are documented for high-risk uses such as advice, advocacy and regulatory filings?
- How does the firm evaluate and benchmark different models over time, including non-OpenAI options, to avoid vendor lock-in?
- What happens to client-specific configuration, prompts or knowledge bases if the firm changes model provider?
- How is the firm training lawyers and staff to use these tools safely, and which misuse scenarios are explicitly off-limits?
Many of these questions apply equally to vendor platforms such as Harvey and other legal-specific tools. The Dentons deal simply makes them more visible by putting OpenAI on the face of the announcement instead of behind a product label.
Market Splits into Competing Architectures
From a market-structure perspective, Dentons’ choice confirms that legal AI will not be a one-layer game. General-purpose platforms, legal content incumbents and specialist startups are competing and partnering at the same time. Thomson Reuters’ CoCounsel, LexisNexis’ alliance with Harvey, and similar tie-ups all pull in the direction of vertically integrated platforms built around proprietary content and workflow.
Dentons’ move, by contrast, treats OpenAI as core infrastructure and reserves the last mile for the firm’s own innovation team. That puts more pressure on internal talent and governance capacity, but potentially yields more differentiated tools. This approach may also appeal to clients who want transparency on which model stack sits under the hood, rather than encountering it indirectly through a vendor.
For other firms, the practical takeaway is not that there is a single “right” architecture. The key point is that whatever the stack, the choices need to be explicit, documented and connected to measurable goals such as reduced cycle times, fewer write-offs, lower error rates or better access to multi-jurisdictional insight. RSGI’s research and Thomson Reuters’ surveys both suggest that adoption is outpacing measurement. That gap will not be sustainable as AI spending moves from experiment to infrastructure line item.
Strategic Choices for All Stakeholders
For law firms watching the Dentons announcement, the short-term question is simple: would a direct collaboration with a model provider strengthen or dilute your existing vendor portfolio? Firms with strong innovation and data teams may see an opportunity to follow Dentons toward more direct model access, particularly where they already have internal copilots or GPT-style tools. Others may decide that, for now, the governance and engineering lift is better outsourced to platforms that bundle models, content and guardrails.
For in-house clients, the signal is that AI strategy is becoming a visible part of firm identity in a way that will affect panel selection and matter staffing. When panel firms describe their AI programs, it will be increasingly important to distinguish between promotional claims and concrete, auditable practices around data handling, evaluation and model choice. Dentons’ OpenAI collaboration offers one blueprint. The broader ecosystem of Harvey deployments, LexisNexis alliances and incumbent platforms offers others.
The common denominator is that AI procurement and governance questions are now as central to law firm and vendor strategy as traditional debates about rates, leverage and geographic footprint. As more firms pick sides between direct model access and platform-first approaches, those choices will shape not only internal workflows but also the terms on which clients are comfortable letting AI anywhere near their matters.
Sources
- Artificial Lawyer: “CRS → Harvey; Dentons → Legora; DLA → Newcode” (July 1, 2025)
- LawNext: “Thomson Reuters Survey: Over 95% of Legal Professionals Expect Gen AI to Become Central to Workflow Within Five Years,” by Bob Ambrogi (April 15, 2025)
- LawNext: “Legal AI Platform Harvey To Get LexisNexis Content and Tech In New Partnership Between the Companies,” by Bob Ambrogi (June 18, 2025)
- Legal IT Insider: “A&O Shearman partners with Harvey to launch practice-based workflow tools” (April 7, 2025)
- Legal IT Insider: “Latham & Watkins to roll out Harvey globally” (Aug. 11, 2025)
- Legal IT Insider: “The Impact of Legal AI – A deeper dive into the RSGI/Harvey adoption report” (Dec. 2, 2025)
- Legal IT Insider: “Harvey raises $160m at $8bn valuation + launches Shared Spaces: Interview” (Dec. 4, 2025)
- Legal IT Insider: “CMS announces enterprise roll out of Harvey across 50+ countries” (Dec. 10, 2025)
- Legal IT Insider: “Dentons enters partnership with OpenAI – Insights from data science lead Bugra Ozer,” by Patrick Shortall, RSGI (Dec. 11, 2025)
- Legora: “Dentons partners with Legora to roll out collaborative, advanced legal AI across Europe” (June 2025)
- LexisNexis: “LexisNexis and Harvey Announce Strategic Alliance to Integrate Trusted, High-Quality AI Technology and Legal Content and Develop Advanced Workflows” (June 18, 2025)
- RSGI: “Defining the Impact of Legal AI: How Harvey Customers Realise Value,” by Reena SenGupta (Nov. 19, 2025)
- Thomson Reuters Institute: “2025 Generative AI in Professional Services Report” (April 7, 2025)
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, regulations, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: Courts Tighten Standards as AI Errors Threaten Judicial Integrity

Jon Dykstra, LL.B., MBA, is a legal AI strategist and founder of Jurvantis.ai. He is a former practicing attorney who specializes in researching and writing about AI in law and its implementation for law firms. He helps lawyers navigate the rapid evolution of artificial intelligence in legal practice through essays, tool evaluation, strategic consulting, and full-scale A-to-Z custom implementation.
