Seeing is Believing… Right?
How “AI washing” marketing practices harm consumers and investors

“Seeing is believing” is a centuries-old idiom that emphasizes the power of firsthand observation in forming beliefs. Relatedly, “I’ll believe it when I see it” is another idiom often associated with skepticism. In a recent paradigm shift — fueled in particular by the rapid proliferation of artificial intelligence in our digital world — these idioms are losing their resonance. Today’s skeptics are urging others not to believe something just because they see it.
This article will highlight how the widespread adoption and integration of AI into daily life have resulted in (i) bad actors leveraging deepfake technology to advance agendas of disinformation, fraud, and/or targeted harm, and (ii) businesses adopting misleading marketing practices that exaggerate AI functionality in pursuit of competitive advantages, to the detriment of consumers and investors.
In 2023, and seemingly overnight, AI became a global cultural phenomenon. Owing to rising awareness of and access to generative AI tools — bolstered by social media trends — public perception has largely shifted away from AI as a dystopian sci-fi threat toward a more nuanced appreciation of AI as a toolkit with endless possibilities for day-to-day integration and vast industry-disrupting potential.
As an intellectual property and technology lawyer, what an exciting time to be alive!
It is well-known that the emergence of cutting-edge or disruptive technologies often outpaces the evolution of the legal frameworks in which these technologies operate. With the recent surge of new AI tools on the market, lawmakers are grappling with numerous unprecedented and complex gaps, loopholes, and vulnerabilities within existing legal frameworks, the full implications of which continue to unfold in real time.
AI tools’ unparalleled ability to process and synthesize massive volumes of data at astonishing speeds has empowered public and private actors and individuals around the world to achieve remarkable outcomes by leveraging AI-powered insights.
Despite these positive impacts, history books — or perhaps now ChatGPT summaries of history books — would be quick to remind us that power does not always fall into the hands of those acting with pure intentions.
Bad actors have been quick to capitalize on the widespread availability of AI tools. One such example involves the deployment of deepfake technology. Merriam-Webster defines “deepfake” as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” While deepfakes can be productively used for educational or entertainment purposes, bad actors have leveraged deepfake technology to, for instance: (i) disseminate disinformation, (ii) deploy sophisticated fraudulent scams, and (iii) deliver targeted emotional and reputational harms. To illustrate the pervasiveness of harm when people today are too quick to believe what they see, consider the fictional vignettes below:
- Disinformation: On election day, Gail, an impressionable first-time voter, stumbles across a compelling video of her chosen candidate taking aggressive positions on several key issues, completely misaligned with her views. Grateful for seeing this in time, Gail casts her vote for someone else, who goes on to win by a narrow margin. Gail later learns the video that changed her vote was a deepfake intended to manipulate voting outcomes.
- Fraud: Stanley, a fraud prevention analyst, receives a call from an unknown number. His sister’s voice explains that her phone and wallet were stolen while traveling, and she needs to borrow money to get home. Without hesitation, Stanley e-transfers her a few thousand dollars. That evening, he receives another call, this time from his sister’s phone number. Stanley expresses his relief that she got her phone back. His sister has no idea what he is talking about. Stanley realizes he fell victim to a highly sophisticated phishing scam using deepfake technology to mimic his sister’s voice.
- Targeted Harm: Nora, an esteemed university professor, wakes up to missed calls and emails alerting her to a viral video of her making offensive comments. She never made these comments. However, the court of public opinion is quick to “cancel” Nora, and the university succumbs to demands for her suspension. Nora later discovers the video was a deepfake created by a disgruntled student.
If you read these and immediately thought, “that could happen to anyone!”, you can appreciate the far-reaching implications of mass AI adoption that lawmakers are tasked with addressing.
Individual bad actors are not the only ones taking swift action during the current AI frenzy. Businesses are feeling pressure and urgency to remain relevant and competitive by positioning themselves as early adopters and integrators of AI functionalities.
The hype and sensationalism surrounding AI’s limitless potential can cloud business judgment and lead decision-makers to rationalize “AI washing”: the “deliberate misrepresentation of AI capabilities for competitive advantage,” which misleads consumers and investors and “undermin[es] trust in legitimate AI-driven innovation” (Glorindal et al., 2025, p. 495).
AI washing is only the newest installment in a decades-long history of manipulative “washing” marketing practices that undermine the spirit of consumer and investor protection regimes. Earlier examples include:
- Greenwashing: “the act or practice of making a product, policy, activity, etc. appear to be more environmentally friendly or less environmentally damaging than it really is” (Merriam-Webster).
- Rainbow washing: “a strategy of instrumentalization that relegates LGBT+ inclusion as a handmaiden to organizational and societal ends in unsupportive and supportive contexts” (Erbil & Özbilgin, 2024, p. 140).
- Healthwashing: “the misuse of health to advance self-interests (e.g., of companies, governments, organisations) whilst actively contributing to poor health outcomes” (Ezzine et al., 2024, p. 1).
To illustrate the pervasiveness of consumer and investor harm when people today are too quick to believe what they see, consider the fictional vignettes below:
- Consumers: Riley, a sole proprietor, invested in AI-powered customer service software marketed as highly adaptive and capable of learning from and imitating her personal communication style to streamline customer interactions. Unimpressed by its impersonal, generic outputs, Riley discovered the technology was just a basic email template recommendation engine powered by keyword matching.
- Investors: Peter, a recently retired firefighter, invested much of his life savings in a mutual fund that claimed its revolutionary AI algorithms could predict and capitalize on stock market fluctuations. After experiencing modest returns followed by devastating losses, Peter learned the so-called AI algorithms were merely basic statistical methods with no advanced forecasting capabilities.
The ramifications of mass AI adoption continue to develop, and lawmakers continue to grapple with the limitations of existing legal frameworks to address the vast array of emerging AI issues, including harms from malicious deepfakes and AI washing. In the meantime, this article serves as a cautionary reminder to remain vigilant and to apply a critical lens to what one sees before one indeed believes.
References

“Deepfake.” Merriam-Webster.com. 2025. Accessed May 2, 2025.
Erbil, C., and Özbilgin, M.F. “Rainbow burning to rainbow washing: How (not) to manage LGBT+ inclusion.” In Elliot, C.J., Fox-Kirk, W., Gardiner, R.A., and Stead, V. (Eds.), Genderwashing in Leadership: Power, Policies and Politics (Transformative Women Leaders) (pp. 135-152). Emerald Publishing Limited, 2024, p. 140.
Ezzine, T., Gepp, S., Guinto, R.R., Parks, R.M., Singh, A., Thondoo, M., et al. “Reflections from COP28: Resisting healthwashing in climate change negotiations.” PLOS Global Public Health 4(3), edited by Robinson, J., 2024, para. 2.
Glorindal, G., Haridasan, A.C., Krishnan, M., and Xavier, N. “AI Washing in Marketing: Feasibility, Viability, and Ethical Constraints.” In Bolesnikov, M., Dulloo, R., Kurian, A., Mathiyazhagan, K., and Struweg, I. (Eds.), Strategic Blueprints for AI-Driven Marketing in the Digital Era (pp. 496-523). IGI Global Scientific Publishing, 2025, p. 495.
“Greenwashing.” Merriam-Webster.com. 2025. Accessed May 2, 2025.