Fox News Texts, and Thinking with ChatGPT
In this edition of the newsletter, I’m writing about the Fox News texts and musing about the potential of innovations like ChatGPT to affect how we think. For the latter theme, I’m starting from an article in The Atlantic about prompt-writing for AI programs and specifically the insight that “good prompts tend to reveal an awareness of the medium’s abilities that the user is trying to replicate.” But first, some comments on the new Fox News texts.
(Jason Koerner/Getty Images; Carolyn Kaster/AP; Alex Brandon/AP; Michael Brochstein/SOPA Images/LightRocket via Getty Images; Slaven Vlasic/Getty Images)
The Dominion lawsuit against Fox News has led to the unearthing of text messages from leading Fox pundits as well as senior executives — providing conclusive evidence that Fox News knew the “stolen election” narrative was a lie, but leaned into it to placate viewers, in part because of sharp competition from Newsmax and other alternative outlets that had already embraced the Big Lie narrative wholesale.
“It’s measurably hurting the company. The stock price is down. Not a joke,” Tucker Carlson texted Hannity after a Fox News journalist dared to fact-check Trump’s claims of voter fraud. In that exchange, both pundits agreed that the journalist needed to be fired to protect the brand. Earlier, after Fox was the first to call Arizona for Biden on election night, Hannity had texted Carlson and Ingraham that the decision to call had “destroyed a brand that took 25 years to build and the damage is incalculable.”
A couple years ago, I wrote a Substack post comparing Playboy magazine to legacy media: the tldr version of my argument is that structural forces have removed gatekeeping power and created a new business model that rewards extreme content:
Two primary factors here: 1) proliferation of news sources, made possible because the cost of production and distribution is much lower, and leading to a loss of cultural gatekeepers as well as a structural whirlwind toward more extreme content (amateur bloggers and amateur porn); 2) a new funding mechanism that cuts out the middleman, where the market responds directly to demand, and creates revenue based on views and clicks of increasingly extreme content (viral, in every sense). And to be clear, I am arguing both that extreme porn is bad for us (all porn, probably, but Playboy less than Pornhub) and also that extremely partisan journalism is bad for us - for reasons unique to each, but also for a lot of overlapping reasons (reinforcing our biases/tastes/preferences, creating a taste for violence and cruelty, exposing us to potent misogyny and racism, etc.).
In other words, I think the story of Fox News’ descent into promoting complete bullshit (that its executives and journalists know is bullshit) is a story about structural forces more than it is a story about individual bad actors. A former managing editor at Fox has reflected that “it’s remarkable how weak ratings make good journalists do bad things.” And yes, that’s certainly true of these pundits, who understand the factors that shape their paychecks. But the driver here is the incentive structure, and that means that if it wasn’t Tucker saying xyz untrue things, it would be whoever was brought on to replace him - because that’s what the underlying business model demands in order to retain viewers who want news to reflect what they already believe, and who are more than happy to go to whatever news source will give them what they want.
It’s hard to untangle agency here. On the one hand, Murdoch and Tucker are being held hostage by a mass of viewers who demand to be given what they want, or else. But on the other hand, this is a mass of viewers whose views, tastes, and preferences have been actively cultivated and shaped by Fox News for decades. And zooming out even further, just as one can say if not Tucker, then another host, one can also say if not Fox, then another competitor like Newsmax. That’s also a point I tried to make in my original post on this issue:
No one is really at fault for the growing polarization and partisanship, on both the supply (media) and demand (consumer) side. This isn’t a story about left-wing coastal elites or right-wing klansmen (though they can factor in, for sure.) I think it’s more apt to say that we are all suffering as victims of structural forces that we are only now beginning to grasp, and which lie outside our individual power to confront…This isn’t a story about abortion or gay marriage or climate change. This is a story about a technological revolution akin to the industrial revolution that has reverberated throughout our society and our world.
To wrap up this brief segment, let me say that I’m even more pessimistic now about solving what I call the Playboy problem than I was a couple of years ago. But I think at least being able to name the problem can be the starting point for thinking about structural solutions.
Thinking With ChatGPT
Okay, now for the main attraction. In recent months, we’ve seen rapid developments in the world of artificial intelligence. AI models like ChatGPT can generate or debug computer code; produce book summaries, metered poetry, and law review articles; pass law and business school exams; and provide [likely heretical] analogies for the Holy Trinity (“that’s modalism, Patrick!”). Meanwhile, art-generating AI like Midjourney is already creating award-winning artwork. If you haven’t already interacted with these technologies, I highly encourage you to spend some time doing so. You can use ChatGPT here, and Midjourney here.
It remains to be seen what the initial waves of commercialization will look like for these new AI programs, but the first major commercial application will likely be enhancing search engines. Google and Microsoft are currently in heated competition over who can more successfully integrate AI into their search engines, even as new startups like You.com have emerged as would-be competitors.
A recent article in The Atlantic by Charlie Warzel speculates that learning how to interact with AI by writing good prompts may become a vital skill of the upcoming century. Warzel notes that “In order to create [with these models], one must know how to guide the machines to a desired outcome. Asking ChatGPT to write a five-paragraph book report about Animal Farm will yield forgettable, even inaccurate results. But writing the introductory paragraph to the book report yourself and asking the tool to complete the essay will feed the machine valuable context.”
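To make Warzel’s point concrete, here’s a minimal sketch of the “write the intro yourself” approach using OpenAI’s Python library (the model name, parameters, and intro text are my own illustrative assumptions, not anything from the article):

```python
import openai

openai.api_key = "sk-..."  # placeholder; requires your own OpenAI API key

# Instead of a bare instruction ("write a book report"), supply your own
# opening paragraph so the model has real context to complete.
intro = (
    "Animal Farm is less a fable about animals than a study of how "
    "revolutions curdle into the tyrannies they replace. "
)

response = openai.Completion.create(
    model="text-davinci-003",  # assumption: any completion-style model would do
    prompt="A book report on Animal Farm:\n\n" + intro,
    max_tokens=400,
)
print(intro + response.choices[0].text)
```

The difference is entirely in the prompt: instead of a generic request, it carries the user’s own framing, which is exactly the “valuable context” Warzel describes.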
Rather than rendering human intelligence and creativity obsolete, AI will instead require applying intelligence and creativity, based on “a deeper understanding of the model you are trying to manipulate.” Warzel explains that “one way to think of prompt trial and error is as an attempt to glean what information the model is pulling from and how the AI organizes and indexes the information at its disposal. It’s informed guesswork…” Here’s more insight along these same lines, from Warzel’s interview with the AI artist Conley:
Most important, she told me, is knowing the model you’re speaking to. Each tool is built and trained differently, giving it unique aesthetics and vernacular—like how people who share a language have regional dialects and cultural quirks. “In the way that prose writing differs from technical or academic writing, there are different ways of marshaling the language depending on your audience,” she told me. “I’ve seen people who are really good at DALL-E 2, which seems to reward an ability to draw on references and high- and low-culture mash-ups. But the way I conceptualize the world is more along the lines of how Midjourney’s model works,” she said.
Over time, Conley has familiarized herself with the model’s order of operations. “Something I’ve learned is the importance of the weight of a prompt,” she told me. “In Midjourney, if you type the word girl before the adjective red, it’ll focus on the girl more than the color red. With longer prompts, it’s like a puzzle, and you learn which terms to give more weight.”
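For what it’s worth, Midjourney also lets you set these weights explicitly rather than relying on word order alone, through its `::` multi-prompt syntax. A rough illustration (the specific weights are arbitrary):

```
/imagine prompt: girl red
/imagine prompt: red::2 girl::1
```

In the first prompt, “girl” dominates by position, as Conley describes; in the second, the explicit weights shift the emphasis toward the color.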
The Atlantic article is mostly focused on understanding the models in order to control their outputs: I have some sense in my imagination of what image I want produced, and I need to figure out how to translate that into a prompt that can generate what I’m looking for. But I’m also interested in understanding the models in order to strengthen the work of interpreting those outputs. Specifically, I’m hoping that future iterations of ChatGPT and other such tools will allow us to see into the model’s processes as a way of evaluating the veracity or trustworthiness of the output. At the very least, I’d like to see the AI programmed to provide a list of citations to bolster scientific or medical claims. Beyond that, other meta attributes could include confidence intervals or estimate ranges that help the reader know how much weight to ascribe to a stated conclusion: “the sky is blue” hardly needs any accompanying throat-clearing, whereas answering “how likely is it that my covid infection will cause Long Covid?” needs a lot.
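Nothing like this exists in ChatGPT today, so what follows is purely a hypothetical sketch of what a “show your work” response object might look like, written in Python (every field name and value here is my own invention):

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    source: str  # e.g., a study, dataset, or article the claim draws on
    url: str

@dataclass
class ModelAnswer:
    claim: str                # the model's stated conclusion
    confidence: float         # 0.0 to 1.0: how much weight to give the claim
    citations: list[Citation] = field(default_factory=list)
    reasoning: str = ""       # the intermediate steps behind the claim

# "The sky is blue" needs little throat-clearing...
trivial = ModelAnswer(claim="The sky is blue.", confidence=0.99)

# ...while a medical question warrants citations, hedging, and visible reasoning.
# (The claim and citation below are placeholders, not real findings.)
medical = ModelAnswer(
    claim="Estimates of how often a covid infection leads to Long Covid vary widely.",
    confidence=0.5,
    citations=[Citation(source="hypothetical cohort study", url="https://example.org")],
    reasoning="Studies differ in definitions, follow-up windows, and populations.",
)
```

The point isn’t the particular schema; it’s that the answer arrives bundled with the metadata a reader needs to decide how much weight to give it.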
This dimension of AI bots as truth-telling or not truth-telling, and as potential agents of programmers with political motives, has already been picked up by conservatives who flock to Twitter to lament that ChatGPT won’t say the n-word or claim that Trump won the 2020 election. Presumably, this reflects explicit overrides built into the program, reflecting the prerogatives of the programmers. And to be clear, I’m fine with those specific decisions. But there’s a deeper question here about the epistemic certainty of claims on the one hand, and about the authority undergirding those claims when I as an end user am not in a position to verify them, given the limits of my knowledge and experience. And there’s an interconnected question of whether the ability to shape those outputs constitutes a new form of political power that may have substantial implications down the road.
For now, I think the main way to address both issues is to make the underlying models as transparent as possible. I don’t necessarily mean that the underlying code needs to be open source. Opening the code is an outcome I’d love to see, but I recognize that it goes against the commercialization of the technology, and the massive amount of spending in AI right now is taking place precisely because of the expected return through commercialization. But even assuming the code itself is a black box, I think there are ways of designing models to show their work.
This theme of “showing your work” is central to what I’m trying to do here at Thinking Aloud and in all my published writings. Like a student showing each step of their calculations for a math problem, I want to show the underlying steps in my thinking that lead from assumptions, observations, first principles, or hypotheses all the way through to specific conclusions. (You can read more about how I think about thinking here.) Inasmuch as we can program AI models to do the same, rather than just producing a static answer to a static question, we can produce models that are more trustworthy and that actually help us to think better.
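In prompting terms, you can already nudge today’s models in this direction by asking for the steps, not just the answer. A minimal sketch, under the same illustrative assumptions as the earlier snippet:

```python
import openai

openai.api_key = "sk-..."  # placeholder

# Ask for the chain of reasoning explicitly, not just a static answer.
prompt = (
    "How likely is it that a covid infection leads to Long Covid? "
    "Before answering, list your assumptions, the sources of evidence you are "
    "drawing on, and your confidence in each step."
)

response = openai.Completion.create(
    model="text-davinci-003",  # same illustrative assumption as before
    prompt=prompt,
    max_tokens=500,
)
print(response.choices[0].text)
```

This is a workaround, not a real fix (the model can still confabulate its “reasoning”), but it gestures at what a built-in show-your-work mode could offer.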
In a way, this discussion runs parallel to the Fox News conversation: the Fox conversation is about what happens when the black box pushes toward one set of outcomes (shaped by market incentives) that run contrary to the stated motive of reporting truthfully. The Dominion lawsuit is, in effect, exposing the source code, and what it demonstrates is that the model is not trustworthy. (Is there a path for AI models to help us evaluate news claims by aggregating across media sources, social media posts, and other potential sources of information? Is there a path for such models to do so in a credible and transparent way? Who knows?! Worth asking, though!) Ultimately, whether it’s journalism or search engine responses, the more that we can “show our work,” the more we can weed out deception, misinformation, or false consensus. And if we can create feedback loops that teach end users that such “showing our work” frameworks are good, and that therefore shape demand for more models along those lines, then I think we may be in a position to harness AI for good while mitigating some of the ways it could create epistemic harm.
Finally, one other thought I want to mention. AI is not sentient, nor does it have personal attributes like will, desire, and intellect. (It’s of course debated whether AI could ever become sentient.) Instead, these models are built on pattern recognition and train on preexisting images and text. For now, those images and text are overwhelmingly human-generated. In the future, AI models might train predominantly on AI-generated images and text — but even in that scenario, the genealogy will always trace back to human thought, however remote. I do wonder, though, whether AI training on AI could undermine everything I just said in the previous paragraph, amplifying epistemic harm rather than mitigating or eliminating it.
What I’m Reading Elsewhere:
Here’s a great example of what not to do with ChatGPT: “Peabody EDI Office responds to MSU shooting with email written using ChatGPT.” “There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself,” Kayat said. “[Administrators] only care about perception and their institutional politics of saving face.”
Politico used a FOIA request to analyze applications for student debt cancellation: “Student loan borrowers living in lower-income areas applied for the program at a higher rate compared to those who live in wealthier neighborhoods, the analysis also found. Most applications came from places where the per-capita income is under $35,000.”
Professor Vincent Lloyd's sobering account here in Compact Magazine about a seminar being hijacked by anti-racist activists is about more than just the usual tendencies of the left to eat its own. It's also a clear and urgent warning to all of us devoted to liberal arts education that "antiracism" workshops and seminar education are not compatible.
Federal Reserve official Christopher J. Waller recently gave a great speech on the crypto-ecosystem that is well worth reading: “To me, a crypto-asset is nothing more than a speculative asset, like a baseball card. If people believe others will buy it from them in the future at a positive price, then it will trade at a positive price today. If not, its price will go to zero. If people want to hold such an asset, then go for it. I wouldn't do it, but I don't collect baseball cards, either. However, if you buy crypto-assets and the price goes to zero at some point, please don't be surprised and don't expect taxpayers to socialize your losses.”