What do you know about "sportswashing"? If you're like me, it's very little. That is, until I asked ChatGPT.
Sportswashing is a practice by which governments use sports to improve or launder a tarnished reputation. I needed to educate myself on the topic fairly quickly so that I could read my son's 10th-grade history research paper with enough background knowledge to critique it.
So, I activated ChatGPT's deep research tool in its new 4.1 model and asked the following: "Can you draft a detailed and persuasive memo on sportswashing in global football."
ChatGPT responded with some follow-up questions:
1. Who is the intended audience? (e.g., general public, a school project, a policy maker, a sports organization)
2. Do you want the memo to focus on specific countries or cases, like Russia 2018, Qatar 2022, or Trump's involvement in 2026?
3. Should the memo include proposed actions or solutions, or just explain the issue?
4. How long or in-depth would you like the memo to be?
I answered: University-level research paper; No; Propose solutions; 10 pages, double-spaced. And off it went. Fifteen minutes later, I had my results—a well-researched and detailed memo on the issue—which you can download and read here.
This experience highlights the powerful advantages of using generative AI tools like ChatGPT for legal and professional research. Still, I'd be remiss if I didn't point out their inherent risks.
On one hand, ChatGPT's speed, breadth of knowledge, and ability to produce clear, structured analysis can dramatically enhance efficiency—delivering well-informed content in minutes that might otherwise take hours, days, or even weeks. This democratizes access to complex topics and supports professionals in making faster, more informed decisions.
On the other hand, reliance on AI-generated content without critical review can be risky. ChatGPT, while capable, is not infallible and may present information that lacks nuance, context, or up-to-date accuracy.
For legal professionals in particular—where precision, source validation, and ethical responsibility are paramount—AI should be treated as a powerful assistant, not as a substitute for human expertise and judgment. Used thoughtfully, it can be an important tool; used carelessly, it may lead to oversights or misinformed conclusions. Blind reliance without verification is reckless and irresponsible. Nevertheless, I remain impressed by the work product that ChatGPT can produce, and can't wait to see how it continues to develop, evolve, and improve.
Here's what I read this week that you should read, too.
Incivility at Work Is a Culture Problem — via hr bartender
Two of my employees won't speak to each other — via Ask a Manager
The Workplace Social Contract Is Broken. Now What? — via Improve Your HR by Suzanne Lucas, the Evil HR Lady
Kristi Noem Thinks Habeas Corpus Is A Deportation Spell — via Above the Law
Trump Calls for Investigation into Springsteen, Other Musicians Who Supported Kamala Harris and Neil Young to Trump: "I'm Not Scared of You. Neither Are the Rest of Us." — via Consequence
Dads have workplace rights, too -- with a twist — via Employment & Labor Insider
Agentic AI Is Already Changing the Workforce — via Harvard Business Review
Sesame Street Avoids Cancellation, Finds New Home on Netflix — via TVLine
CHEERS Act reintroduced to support bars, restaurants and draft beer investments — via Craft Brewing Business
So You've Gotta File an EEO-1 Report. Now What? — via Eric Meyer's Employer Handbook Blog
Chicago Sun-Times publishes made-up books and fake experts in AI debacle — via The Verge
Google unveils 'AI Mode' in the next phase of its journey to change search — via The Guardian
This outlandish story about ChatGPT can teach us all a lesson about AI — via Boy Genius Report