Friday, May 23, 2025

WIRTW #760: the 'sportswashing' edition


What do you know about "sportswashing"? If you're like me, it's very little. That is, until I asked ChatGPT.

Sportswashing is a practice by which governments use sports to improve or launder a tarnished reputation. I needed to educate myself on this topic fairly quickly so that I could read my son's 10th grade history research paper with a base of knowledge upon which to critique it.

So, I activated ChatGPT's deep research tool in its new 4.1 model and asked the following: "Can you draft a detailed and persuasive memo on sportswashing in global football."

ChatGPT responded with some follow-up questions:

1. Who is the intended audience? (e.g., general public, a school project, a policy maker, a sports organization)
2. Do you want the memo to focus on specific countries or cases, like Russia 2018, Qatar 2022, or Trump's involvement in 2026?
3. Should the memo include proposed actions or solutions, or just explain the issue?
4. How long or in-depth would you like the memo to be?

I answered: University-level research paper; No; Propose solutions; 10 pages, double-spaced. And off it went. Fifteen minutes later, I had my results—a well-researched and detailed memo on the issue—which you can download and read here.

This experience highlights the powerful advantages of using generative AI tools like ChatGPT for legal and professional research. Still, I'd be remiss if I didn't also point out their inherent risks.

On one hand, ChatGPT's speed, breadth of knowledge, and ability to produce clear, structured analysis can dramatically enhance efficiency—delivering well-informed content in minutes that might otherwise take hours, days, or even weeks. This democratizes access to complex topics and supports professionals in making faster, more informed decisions.

On the other hand, reliance on AI-generated content without critical review can be risky. ChatGPT, while capable, is not infallible and may present information that lacks nuance, context, or up-to-date accuracy.

For legal professionals in particular—where precision, source validation, and ethical responsibility are paramount—AI should be treated as a powerful assistant, not as a substitute for human expertise and judgment. Used thoughtfully, it can be an important tool; used carelessly, it may lead to oversights or misinformed conclusions. Blind reliance without verification is reckless and irresponsible. Nevertheless, I remain impressed by the work product that ChatGPT can produce, and can't wait to see how it continues to develop, evolve, and improve.



Here's what I read this week that you should read, too.





The Workplace Social Contract Is Broken. Now What? — via Improve Your HR by Suzanne Lucas, the Evil HR Lady