The Authorless Will: Proving Testamentary Intention in AI-Generated Instruments
An AI-drafted will creates unique problems—and could be declared invalid
As generative artificial intelligence becomes faster and more capable, a growing number of products are appearing that offer AI-generated will-drafting services. Some are marketed to lawyers as estate-planning tools, while others are aimed directly at clients as a cheaper alternative to professional estate-planning services. Beyond these commercial services, anyone can now prompt an AI chatbot to draft a will, producing a document that resembles (and may even be legally effective as) one.
Many legal commentators have identified the same potential issues with AI-generated wills that previously arose with “fill-in-the-blank” will kits. For one, such “DIY” wills often fail to address complex estate-planning considerations, including tax implications or ownership of assets held in complicated corporate structures. Accordingly, it is widely acknowledged that a poorly drafted DIY will (whether from a kit or AI-generated) can leave a testator worse off than having no will at all.
AI-generated wills, however, raise unique evidentiary challenges that did not arise with traditional fill-in-the-blank will kits, particularly in establishing the testator’s intentions and their knowledge and approval of the document.
As with will-kit documents, an AI-generated will is unlikely to be accompanied by the external evidence of testamentary intent on which courts typically rely, such as the drafting solicitor’s notes, correspondence with the testator, draft wills, or estate-planning questionnaires. With AI-generated wills, however, the evidentiary gap is even more pronounced: the court will likely have no handwritten annotations from the testator at all, only the final document produced by generative AI in response to an unseen prompt.
Accordingly, where a testator simply enters a generic AI prompt such as “draft a will” and fails to execute the resulting document in accordance with the required formalities, there may be insufficient evidence of the testator’s knowledge and approval for the will to be admitted to probate, even under the curative provisions of the Wills, Estates and Succession Act (“WESA”). This concern would not typically arise with holograph wills, where knowledge and approval can be inferred from the testator’s own handwriting.
Although no Canadian court has yet considered an AI-generated will, it is almost inevitable that wills generated by AI will begin to appear in estate litigation. For example, AI-drafted documents have already begun to crop up in the contractual context. In the recent case of Urban Roots Salon Spa Ltd., 2025 BCEST 103, the Director of Employment Standards reviewed an AI-generated employment contract. The Director found that the contract lacked the certainty required for enforceability, containing unfilled placeholders for the “[Date]” of the agreement, “[Notice Period],” the “[Duration]” of non-compete and non-solicitation clauses, and specifying it was to be “governed by and construed in accordance with the laws of the State of [State]” (para. 19).
When AI-generated wills are eventually scrutinized by the courts, the central concern will likely be the absence of reliable evidence of the testator’s testamentary intentions, both in assessing whether the document reflects the testator’s final wishes and in interpreting its terms once it is admitted to probate.
Drafting ambiguities of the kind seen in Urban Roots can impede a court’s ability to ascertain testamentary intent. For example, in the recent Ontario case Mansour v. Girgis, 2024 ONSC 1611, the will gifted a life estate to the testator’s sister, with the property to pass to his brother if she predeceased the testator or no longer wished to live there. The sister survived the testator but later became hospitalized and died without ever expressing an intention to leave the property. The court found the will ambiguous because it did not address a situation where the sister ceased residing in the property without expressing a desire to leave, creating uncertainty as to whether the property passed to the brother or fell into the residue.
I decided to conduct a brief experiment with different AI chatbots to see if they could identify and address the ambiguity that arose in Mansour. I used the prompt “draft a will that gives my real property to my sister during her lifetime. If my sister dies before me or no longer desires to live in the property, then the property will go to my brother.” While some chatbots addressed the ambiguity by including express language gifting the property to the brother upon the sister’s death, others failed the test.
For example, ChatGPT and Microsoft Copilot drafted the following clauses, respectively, which remain ambiguous as to what would happen to the property if the sister died before expressing a desire to leave it:
| ChatGPT | MS Copilot |
|---|---|
| 4. Termination of Sister’s Interest and Gift to Brother — If my sister: predeceases me; or ceases to reside in the Property; or no longer desires to live in the Property, then her interest in the Property shall immediately terminate, and I give the Property to my brother, [Brother’s Full Legal Name], absolutely. | 2. Gift of Real Property – Life Interest to My Sister — I give my real property located at [address or legal description] to my sister, [Sister’s Full Name], for her lifetime, for her use and enjoyment as long as she desires to live in the property. If my sister dies before me or no longer wishes to reside in the property, then her interest shall end, and the property shall pass to my brother, [Brother’s Full Name], absolutely. |
In Mansour, the court concluded that the ambiguity led to an absurd result inconsistent with the testator’s intentions and rectified the will to ensure that the property passed to the testator’s brother. The court relied on the drafting solicitor’s evidence that the will failed to implement the testator’s wish to gift the property to his brother and that the ambiguity was due to a drafting error. But what if the will in Mansour had been drafted by ChatGPT or MS Copilot?
As Urban Roots and the experiment described above illustrate, the use of AI to draft legal documents carries a risk of drafting errors. These errors may render a will, or a specific provision, ambiguous, requiring rectification to avoid being declared void for uncertainty. And, as always, whenever a layperson creates their own will, AI-generated or otherwise, there is an increased risk of formal deficiencies.
WESA contains provisions that may be capable of addressing some deficiencies in AI-drafted wills. Sections 58 and 59 empower the court to rectify errors (as in Mansour) or to “cure” a testamentary document that fails to meet formal validity requirements (i.e., one that is not signed or witnessed properly). However, both provisions require the court to first determine the will-maker’s testamentary intentions. In the context of an AI-drafted will, the central question therefore becomes: what evidence is available to establish those intentions?
Even if it is possible to determine which AI system was used to generate a will (or whether the document was AI-generated at all), it is unlikely that the testator’s original prompt could be recovered to shed light on what the testator intended the will to say. This evidentiary gap significantly increases the risk that a court will be unable to discern the testator’s intentions, resulting in the will, or specific provisions, being declared void for uncertainty and beyond rectification. Ultimately, while using AI to generate a will may seem enticing, such instruments raise novel challenges in establishing testamentary intent. Both practitioners and testators should be mindful that the risk of drafting errors and the absence of supporting evidence may substantially increase the likelihood of a will being challenged or declared invalid.