Navigating AI: Balancing Promise and Peril
Safeguarding privacy and human rights amid the rise of artificial intelligence

It’s no secret that the use of artificial intelligence has risen meteorically, with no signs of slowing. AI’s implications for individuals, professionals, and businesses, however, remain far less clear.
Two recent cases, McMaster University (Re), 2024 CanLII 17583 (ON IPC) [McMaster], and Zhang v Chen, 2024 BCSC 285 [Zhang], shed some light on how AI intersects with human rights, privacy concerns, and professional responsibilities.
Recommendations on the Proper Use of AI
In McMaster, the Office of the Information and Privacy Commissioner of Ontario (the Commissioner) investigated a complaint about McMaster University’s use of examination proctoring software under Ontario’s Freedom of Information and Protection of Privacy Act, RSO 1990, c F.31 [FIPPA]. FIPPA prohibits the unauthorized collection of personal information (B.C.’s privacy legislation contains a substantively similar provision). The Commissioner concluded that the collection was lawful, as the university collected only the information necessary to fulfill its functions.
However, the Commissioner noted the heightened risks associated with the use of AI and recommended additional guardrails to address privacy and human rights concerns. The Commissioner recommended that the university:
- Undertake an algorithmic impact assessment to evaluate the potential impacts of an automated decision system;1
- Meaningfully consult with affected parties and experts, prior to using AI and on an ongoing basis;
- Provide students with an opportunity to opt out;
- Retain human oversight over, and create policies for challenging, AI’s findings;
- Recognize that ultimate responsibility over the use of AI cannot be outsourced, even if vendors are engaged;
- Understand and be able to explain how the technology functions; and
- Prohibit the use of personal information for algorithmic training purposes unless meaningful consent is obtained.
AI May Be Biased
Notably, AI may have the unintended consequence of perpetuating discrimination. McMaster cited research suggesting that examination proctoring software may flag persons with darker skin tones or certain disabilities more often. Although the university provided accommodations for students with disabilities, none were provided for individuals potentially facing discrimination from AI on racial or ethnic grounds.
AI May Cost You — Literally
In Zhang, the BC Supreme Court ordered counsel to personally pay costs for filing an application that cited two non-existent cases, later discovered to have been generated by ChatGPT. Although the fake cases were identified before the hearing and were not relied upon, the Court found it appropriate to award costs under Rule 16-1(30)(b) and (c) of the Supreme Court Family Rules, BC Reg 169/2009. Such costs can be awarded where a lawyer has caused costs to be incurred or wasted through delay, neglect, or some other fault (an identical provision exists in the civil context). The Court accepted that the applicant’s initial reliance on the non-existent cases led to delay, additional expense, and effort in remedying the confusion they created.
The judge emphasized that citing fake cases in court materials “is an abuse of process and is tantamount to making a false statement to the court. Unchecked, it can lead to a miscarriage of justice.”
The Court further pointed to a study conducted in January 2024, which found that “legal hallucinations are alarmingly prevalent, occurring between 69% of the time with ChatGPT 3.5 and 88% with Llama 2,” and that “large language models (“LLMs”) often fail to correct a user’s incorrect legal assumptions in a contrafactual question setup, and that LLMs cannot always predict, or do not always know, when they are producing legal hallucinations.”
Going Forward
While AI holds tremendous promise for many aspects of society, its widespread use presents significant challenges, especially in the absence of legislation. Those deploying AI should do so with caution, keeping in mind not only its inherent biases and privacy risks, but also the security vulnerabilities and ethical dilemmas it can create.
1. The Commissioner recommended the Treasury Board of Canada’s Algorithmic Impact Assessment tool.