Factual Dispatch #56 - ChatGPT is Zapp Brannigan
Skynet as branding, the WGA/SAG strike, and the Black & Brown women who called it.
I don’t know how else to say this: ChatGPT is Zapp Brannigan from Futurama. That would make Microsoft his Kiff, the dependable albeit boring second-in-command, but we need to steer clear of the Job Black Hole to find out. (Editor’s Note: Yes, we really should have written or otherwise explained the untimely two-year absence, but startup layoffs are wild. Read to the end for what happened & our mea culpa.)
If you’ve never watched Futurama, Zapp is a mediocre & smugly successful space navy officer. Billy West, the actor who voices him, said Zapp was inspired by William Shatner’s take on James T. Kirk. Think classic pulp action like Flash Gordon: dashing white men who take command & save the day with a smile. Zapp’s hubris creates or exacerbates the very problems he’s “solving,” making things so much worse for his troops, the Futurama cast, and the surrounding galaxy. But he blurts out an answer quickly and plagiarizes people’s work with a smile, so people listen to him and give him credit for the solutions Fry and the rest of the more diverse crew actually created. He’s started believing his own hype so much that, well, stop me if you’ve heard this one before.
“AI,” or Generative Large Language Models (LLMs), are the “new big thing,” replacing crypto and the “metaverse” as the thing people with too much money bore other people with at bars or parties. Why are we so impressed, so enamored with the proliferation of Generative AIs that spit out a reasonable facsimile of John Oliver’s cabbage wedding or legal briefs to use in court? The same reason Leela fell for Zapp once. Being with “someone” that is confidently right 60-90% of the time at predicting (and delivering) what others expect feels enchanting. Who cares if the facts are true, or if the joke is stolen? ChatGPT is impressive because it is built from millions of articles scraped from the best non-BS sources on the internet.
In the same way the “explosion of startup innovation” we’ve seen in the West relies on outsourced virtual-assistant & low-pay freelance labor from overseas, AI is reliant on minimum-wage labeling, annotation, and “Data Entry But Make It Fashion” work. Human workers maintain, clean, and refine the gigantic text and image bins that AI art generators & LLMs use to build incredible renderings of the Spanish Armada with Toy Story characters as sailors, or to spin up a business plan for your underwear-slingshot e-commerce business in 0.4 seconds. And now those workers are being fired, cast aside along with the junior & freelance developers whose work is now being “done” by GitHub Copilot.
Do LLMs really outproduce millions of humans? Probably not. LLMs are only one sub-type of AI; here are a few others you might already be using every day:
Reactive Machines (car auto braking)
Limited Memory (weather forecasts)
Theory of Mind (that chatbot you hate)
Narrow AI (e-comm products “for you”)
Supervised Learning (picking all the bridge pictures to sign in; toy sketch below)
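For the curious, here’s a minimal sketch of that last flavor, supervised learning, in Python. Everything here is invented for illustration: the “images” are just pairs of made-up feature numbers, and the “model” is a humble nearest-centroid rule, not anything Google actually runs behind its CAPTCHA grids.

```python
# Toy sketch of supervised learning, the flavor behind those CAPTCHA grids:
# humans label examples ("bridge" / "not bridge"), and the model learns to
# sort new ones. The features are fabricated for illustration.
from statistics import fmean

# (feature1, feature2) pairs a human has already labeled for us.
labeled = {
    "bridge":     [(0.9, 0.8), (0.8, 0.7), (1.0, 0.9)],
    "not_bridge": [(0.1, 0.2), (0.2, 0.1), (0.0, 0.3)],
}

# "Training": average each class into a centroid.
centroids = {
    label: (fmean(x for x, _ in pts), fmean(y for _, y in pts))
    for label, pts in labeled.items()
}

def classify(point):
    # Pick the class whose centroid is closest to the new example.
    return min(
        centroids,
        key=lambda label: (point[0] - centroids[label][0]) ** 2
                        + (point[1] - centroids[label][1]) ** 2,
    )

print(classify((0.85, 0.75)))  # -> "bridge"
```

The point isn’t the math; it’s that humans supply the labels first. Every “pick all the bridges” click is unpaid training data.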
This is impressive tech, but the gap between what it is and what it’s hyped to be is significant. Most don’t know Google Maps is AI, but most have been told ChatGPT is going to become the perfect synthetic assistant from the Avengers.
The reality is less legendary, with LLMs unable to answer simple questions like “how many e’s are in the word ‘ketchup’?” This is because LLMs aren’t “thinking” when they answer your question. There isn’t a virtual brain reviewing databases or “hacking the Gibson” to get your info; the model is just building a word string that it has high confidence answers the question/prompt you asked. Why is it confident? Because it consumed 10 billion question & answer pairs “similar” to the one you’re asking. But it isn’t looking information up in a database or “researching the answer,” which is also why it can pass the SAT without being able to “think.”
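To make that “word string” point concrete, here’s a minimal sketch of next-token generation in Python. A bigram counter is laughably cruder than a transformer (our simplification, toy corpus and all), but the generation loop has the same shape: pick a statistically likely next token, append it, repeat. Nothing in the loop looks up a fact or checks an answer.

```python
# Toy sketch of next-token prediction: a bigram "language model."
# Deliberately tiny and not how GPT is built, but the core loop
# ("pick a likely next token, append, repeat") is the same shape.
from collections import Counter, defaultdict

corpus = (
    "the ship is ready to launch . "
    "the ship is lost in space . "
    "the crew is ready to mutiny . "
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        # Always take the most frequent continuation: confident, not correct.
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the ship is ready to launch ."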
(Editor’s Insert: Academically, robots “passing” tests that shouldn’t exist anyway shouldn’t be impressive. The term AI is problematic enough, given its origins in racist intelligence science. Even the supposedly objective field of statistical analysis isn’t above reproach, given how some argue it was shaped by eugenics.)
Does that matter? It’s starting to, given how many humans misunderstand what these models can do:
Researchers are citing academic references that ChatGPT just made up, and two lawyers have been fined $5,000 for submitting legal citations that ChatGPT invented.
ChatGPT has gotten “dumber” or noticeably worse on certain test types since it shot to stardom a few months ago.
So many junior SEO analysts are using LLMs to “diagnose” SEO problems that a Google rep had to remind the industry it wouldn’t work.
Like Zapp Brannigan, LLMs have a horrible track record with Asian languages.
Interestingly, they aren’t bad at being a CEO. Max Read’s point that LLMs nail the “vibes” of an answer, and the one made by Kristopher Jansma using Corinthian Leather, hit parallel targets: feeling is what matters, and ChatGPT is both excellent and hilariously bad at it, depending on your magnification level.
Same with Zapp. He won’t sink the putt, but like a good consultant, he’ll get onto the green in one or two shots. Does that involve blowing up the course? Maybe. But then we get to the problem I’m calling the AI Data Ouroboros:
Of course, an LLM is only as good as its data. Remember those AI data workers? They’re now pulling from ChatGPT prompts and using AI to answer the very questions they’re paid to label, producing a feedback loop that further deforms LLMs that already hallucinate quite a bit. What these models aren’t going to do is become self-aware from Thrice-Boiling the Internet Ocean.
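Here’s a minimal sketch of that Ouroboros, with a toy Gaussian standing in for the LLM and a keep-only-the-likely-stuff truncation standing in for greedy decoding (both our simplifications; researchers studying the real phenomenon call it “model collapse”):

```python
# Toy sketch of the "Data Ouroboros": retrain a model on its own output
# and watch the diversity drain away. Here the "model" is just a normal
# distribution refit to samples drawn from the previous generation --
# a stand-in for an LLM trained on LLM-generated text.
import random
import statistics

random.seed(56)

# Generation 0: "human" data with real variety.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for generation in range(8):
    sigma = statistics.stdev(data)
    print(f"gen {generation}: spread = {sigma:.3f}")
    mu = statistics.fmean(data)
    # Refit, then sample a new corpus -- but like a model decoding greedily,
    # this one over-produces its most probable outputs. We mimic that by
    # keeping only the middle 80% of samples (our simplification).
    samples = sorted(random.gauss(mu, sigma) for _ in range(1000))
    data = samples[100:900]
```

Each generation the “training data” gets a little more average, and the spread (the diversity of the corpus) drains away. Feed an internet full of ChatGPT output back into ChatGPT and you’re boiling the same ocean, thrice.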
Alarmism about AI lets companies distract from real regulation and adversarial evaluation by framing these “magic black box” systems as too complex for even the biggest genius anywhere to make laws about. Even the most impressive phenomena we’ve seen are most likely mirages, something akin to an emerging idea called “neuroenchantment”: it is startlingly easy to produce an illusion that most people believe is “reading their mind,” especially with fMRI and neuroimaging in the mix. Just because it feels magical & visionary doesn’t mean it is.
Companies like OpenAI, Jasper, Imagica, and others claim they’ve built systems that are months or years away from becoming “aware,” then ride that hysteria to a large investment round while never building anything remotely capable of becoming Skynet or Ultron. This results in genuine confusion, perfectly illustrated when Snoop Dogg said:
“And I heard the dude, the old dude that created AI, saying, ‘This is not safe, ’cause the AIs got their own minds, and these motherf*ckers gonna start doing their own sh*t.’ I’m like, are we in a f*cking movie right now, or what? The f*ck, man? So do I need to invest in AI so I can have one with me? Or like, do y’all know? Sh*t, what the f*ck? I’m lost, I don’t know.”
Why do companies do this? Having raised billions in venture capital, they need to deliver on the promise that their LLMs are superior to the competition. Are they? Probably not, at least according to this leaked internal memo from Google. Open-source models look able to match or outperform many proprietary ones, which has led to Facebook open-sourcing their LLMs. Sometimes the more data you hoover up, the better your LLM can be, which is why OpenAI & other AI companies have been accused of copyright infringement, with cases ranging from hilarious to deadly serious. The companies pretend their tech is “Thirty Seconds to Skynet,” then benefit when regulators & pop culture use language that puts a halo around what data is being analyzed and where (see “the cloud”). If you’re dedicated to preventing an AI-driven nuclear war, you don’t have much time to address the massive risk of institutionalizing bias, predatory data collection practices, and prejudiced systemic thinking in the robot that makes fun Hamilton Klingon raps.
It’s not just companies clucking-for-bucks. AI experts are using Skynet fear to sell books, get interviews, or make grandiose predictions about the Laws of Robotics. Of course, others peddle balm-style language to treat AI-apocalypse anxiety. This is why it’s important not to believe the world is safe/ending based on the predictions of one man with a book to sell. In case you’re like me and couldn’t keep track, the Institute of Electrical and Electronics Engineers (IEEE) has a scorecard tracking prominent experts & their alarm regarding the threats AI poses. It’s like the Doomsday Clock, but somehow less useful.
Frustratingly, the Black & Brown women who warned this would happen were fired/laid off/otherwise sidelined, but they’ve been punching back. These PhD researchers, lawyers, policy heavyweights & master engineers called out the 31 flavors of Baskin-Robbins bullsh*t the industry is rife with. They were calling out algorithmic bias, institutional rot, “garbage in, garbage out,” the climate ramifications of big data, algorithmic decision-making, and AI rot before it was cool. And when the Skynet-fearing experts had the chance to back those Black & Brown women, did they? If you guessed not, you & Meredith Whittaker would be right. Whittaker correctly shows that the biggest risk isn’t LLMs or AI going “rogue,” it’s that we’ll let corporations control the development of this technology, potentially insulating them from culpability if they do blow up the world. Matteo Wong reinforces this in The Atlantic: Big Tech “has shown little regard for years of research demonstrating that AI’s harms are not speculative but material.” Edward Ongweso Jr. doubles down: “The real threat is the industry that controls our tech ecosystem.” And Mozilla’s Deborah Raji hits the grand slam, wondering why Big Tech has to Save The World, writing, “the need for this ‘saving the world’ narrative over a ‘let’s just help the next person’ kind of worldview truly puzzles me.” Given these voices are coming from the left, right, infuriating center, and the industry rags that cover the sector, maybe we should listen?
If you are looking for informed voices to follow on these topics, follow these brave souls into Mordor and defend them with your life:
Dr. Timnit Gebru - (PhD, Computer Vision, Stanford Univ.) Hired by Google to show bias issues and climate costs. Did exactly that, got fired.
Dr. Safiya U. Noble - (PhD, Library & Info Science, Univ. Illinois) Primary researcher illustrating how algorithmic bias is used to oppress
Ifeoma Ozoma - Whistleblower who inspired the Silenced No More Act in California, and the author of the Tech Worker Handbook
Dr. Ruha Benjamin - (PhD, Sociology, UC Berkeley) Her TEDx talk on the biases inherent in modern research is titled “From Park Bench to Lab Bench”
Dr. Joy Buolamwini - Her PhD thesis in Media Arts & Sciences at MIT uncovered racial & gender bias in the AI systems used at Microsoft, IBM, Amazon, and many others. Her TED Talk on algorithmic bias has been viewed 1M+ times.
This list is laughably non-comprehensive; we’re happy to add luminaries we missed who got fired by Google or Microsoft for <gestures nebulously> unspecified reasons related to “culture fit.” There’s also the Partnership on AI’s Shared Prosperity guidelines, built to push us towards AI that actually benefits all members of society, not just AI company workers & investors.
But why the tortured Zapp metaphor? Consultants like those at McKinsey, Bain, and BCG are brought in by executives to lend credence to the “unpopular but needed” decisions that inevitably lead to mass layoffs, or to questionable financialization schemes that “just make sense” to “all the smart people in the room.” This allows companies to ignore known harms by pointing to the “decision-maker” and saying they had to. This is even part of “AI Education” courses created by Microsoft & LinkedIn, as this question from Chapter 1 illustrates.
Two generations ago, Bain, McKinsey, and BCG consultants (like Mitt Romney) made the “smart calls” that closed mills and factories, shipped jobs overseas, and hollowed out our manufacturing sector. This time, Microsoft (a ChatGPT/OpenAI investor) is pretending AI could “Fix Work,” while McKinsey released a report on the “economic potential” of Generative AI & LLMs. Instead of bringing in a hotshot consultant to suggest the greedy thing, we can get ChatGPT and LLMs to do the “work” of creating TV & movies.
This is why it’s a Zapp Brannigan-level idea. The people who “know” a computer can write a script are the same people who “know” tax cuts create jobs and “know” you can take that hill if you just throw another wave of conscripts at it. From Ed Zitron’s Absentee Capitalism:
Executive excitement around generative AI is borne from these disconnected economics, because none of these people actually create anything. They are not writers, or producers, or even present on sets. They are not active participants in creating value of any kind, other than moving chunks of money around and making vague cuts to things — including by scrapping already-complete TV shows to lower tax liabilities, a move that disproportionately affects those creatives who rely on residuals for a living, particularly writers.
This is one of the reasons the WGA strike has gone on so long and why SAG has joined. But even more telling is the Deadline piece that came out quoting “anonymous” execs & producers. Literal starvation and desperation tactics aren’t a side effect; they’re the end goal of not negotiating.
“The endgame is to allow things to drag on until union members start losing their apartments and losing their houses.” -Anonymous Studio Executive, Deadline
Producers & studios really believe they can cut a writers’ room from 5-7 full-time staff to a pool of contract labor who would “touch up” the scripts, copy, advertising plans, and all the knowledge work required to run a show. (Great outline of why the strike is happening by Max Read, again.) And from Mike Drucker, a perfect summation:
What’s crazy is that, after realizing that throwing a 100-year-old business model out of an airlock wasn’t strategically sound, companies are returning to advertising-supported models. They started cracking down on password sharing. They started raising prices. Because a lot of this new business model was an illusion. Essentially, businesses are punishing both the audience and the people creating things for that audience. Maybe CEOs believe that if they can squeeze out just a little more, they’re going to finally be just as cool as an actually famous person. Perhaps kill one more comedy and they’ll be crowned the King of Cannes!
Just pay us, man.
Instead of ordering third seasons, paying residuals, or treating the talent like…talent, they are spending vault-loads perfecting the synthetic recreation of James Dean, probably because a Zapp-type consultant showed them how much they could “save.”
Shout out to Fran Drescher, Adam Conover, and the dozens of A, B, and C-list celebrities actually fighting against the Zapp-ification of art for the common artist here. To give Snoop Dogg the last word, bringing it almost full circle:
How does this play out? No idea. Generative AI & LLMs could be the Next Big Thing in the way the mobile internet was, or the Next Big Thing in the way NFTs & the metaverse were. Just because you’re in the lead doesn’t mean you will be forever. Ask MySpace, Cisco, IBM, Sharp, and Woolworths how that turned out.
Courage doesn't always roar. Sometimes courage is the little voice at the end of the day that says I'll try again tomorrow.
~Mary Anne Radmacher
Keep your head up,
t
P.S./Editor’s Mea Culpa: Faithful readers deserve more than essentially being ghosted after a New Year’s Eve party. Long story short, putting in overwhelming hours didn’t save our hero from being laid off twice in ~375 days, as the world of startups is…turbulent to say the least. It is truly an honor to do the Afternoon Tea, Factual Dispatch, and other formats, but the bandwidth simply isn’t there to get it done for free anymore. We’re unsure what form FD will take in the future, but if you want to help us figure it out, answering these six questions will help immensely. Oh, if you need SEO, digital marketing, or social media strategy, drop OSF Solutions a line.