
This Place Is All Fucked Up


It doesn't appear to have been close. Georgia flipped for Trump. Pennsylvania and Wisconsin, too. The Blue Wall crumbled like Jericho's. Just about everywhere voted redder than in 2020. Young people went right. New York City went right. The polls said what they said, but then Americans went to the voting booth and said what they said, which is that they want four more years of Donald Trump. This feels worse than 2016. That was a squeaker, and you could convince yourself it was a fluke. The Comey letter. No Midwest Dem ground game. A uniquely unpopular candidate. Belief that maybe Trump wasn't being totally serious about the things he said, or might surround himself with smarter, better people. Maybe it was possible people were voting for Trump just to seize on an intrusive thought, a call of the void. You can't tell yourself these stories this time. Trump is a known quantity. We know what he would do as President, and we know all the things he'd promised to do as President a second time, and millions of Americans took all that into consideration and said, Yes, I want that.

What do you do in a place like this, with an electorate like that? You survive it, mostly, unless you don't. You try to figure out how it got there, got so angry and stupid and cruel, so that you can nudge it somewhere else next time. You do this by seeing what went wrong this time. This seems like an impossible task this morning, because everything went wrong.




The End of Francis Fukuyama


From 11:09 a.m. to 11:14 a.m. yesterday, I thought Francis Fukuyama had died. When an X account that seemed connected with Stanford University announced the legendary political scientist’s passing, many people were fooled. Much to my chagrin, I was among them. And then the account declared itself to be a hoax by Tommaso Debenedetti, an Italian prankster. Minutes later, Fukuyama himself posted on X, “Last time I checked, I’m still alive.”

Debenedetti, whom I could not immediately reach for comment, has previously issued many fake death announcements, including for the economist Amartya Sen (still alive), the pseudonymous writer Elena Ferrante (still alive), and the Cuban leader Fidel Castro (dead as of 2016). In 2012, Debenedetti told The Guardian that his purpose was to reveal how poorly the media do their job, arguing that “the Italian press never checks anything, especially if it is close to their political line.” But fooling people undercuts the idea of shared truth—a cornerstone of liberal democracy itself.

That the hoax was targeting Fukuyama, one of liberal democracy’s greatest defenders, made the situation all the more striking. In 1989, as communism was on the verge of collapse, Fukuyama published an essay called “The End of History?,” which argued that modern liberal democracy had outcompeted every viable alternative political system. Humanity, he argued, had reached “the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government.” (He later expanded the essay into a book, The End of History and the Last Man.)

[Francis Fukuyama: More proof that this really is the end of history]

But how durable is liberal democracy? Although Americans are experiencing far greater material prosperity than their forebears, fears of political violence are growing, and the Republican presidential candidate, Donald Trump, is using authoritarian language. Fukuyama foresaw the potential for trouble in 1989. “The end of history will be a very sad time,” he wrote back then. “The struggle for recognition, the willingness to risk one’s life for a purely abstract goal, the worldwide ideological struggle that called forth daring, courage, imagination, and idealism, will be replaced by economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands … Perhaps this very prospect of centuries of boredom at the end of history will serve to get history started once again.”

Wondering what Fukuyama thought of yesterday’s hoax—and our current political moment—I requested an interview. The transcript below has been condensed and edited for clarity.

Jerusalem Demsas: It’s great to find you alive and well. How are you feeling?

Francis Fukuyama: Yeah, that was an unusual event.

Demsas: How did you learn about your “death”?

Fukuyama: One of my former students, I guess, tweeted that this had happened and that it was a hoax. And then I went back and looked at the original tweet, and then it just went viral, and everybody was tweeting about it, so I decided I should actually assert that I was still alive. So it got a lot of attention.

Demsas: What was your reaction when you saw it?

Fukuyama: I couldn’t figure out what the motive was, and I also couldn’t figure out why anyone would take the time to produce a tweet like that. It was a pointless exercise. I guess the other reaction is that X, or Twitter, has become a cesspool of misinformation, and so it seemed it was a perfect thing to happen on X that might not happen on other platforms.

Demsas: Do you know who Tommaso Debenedetti is?

Fukuyama: No.

Demsas: He is an Italian who has claimed responsibility for a series of hoaxes, including the fake announced death of Amartya Sen. He told The Guardian years ago that the Italian press never checks anything. This seems like a part of his broader strategy to, I guess, reveal the problems with fact-checking in the media. What do you make of this strategy?

Fukuyama: Well, first of all, it wasn’t very successful. The fact that you can propagate something like this on Twitter doesn’t necessarily tell you much about the media. People debunked it within, I would say, seconds of this having been posted, so I’m not quite sure what kind of a weak link this exposes.

Demsas: This sort of informational ecosystem seriously weakens liberal democracy, right? If there cease to be shared facts, if it becomes difficult for voters to transmit their feelings about the world, culture, the economy to elected officials, it weakens the legitimacy of democratic signals.

Fukuyama: When I wrote my book Trust back in the mid-1990s, I described the United States as a high-trust society. That’s just completely wrong right now. And a lot of that really is due to the internet or to social media. This is a symptom of a much broader crisis, and it’s really hard to know how we’re going to ever get back to where we were 30 years ago.

Demsas: Does it say anything about the strength of liberal democracy that the democratization of media erodes trust?

Fukuyama: The classic theorists of democracy said that just formal institutions and popular participation weren’t enough, and that you had to have a certain amount of virtue among citizens for the system to work. And that continues to be true. One of the virtues that is not being cultivated right now is a willingness to check sources and not pass on rumors. I’ve caught myself doing that—where you see something that, if it fits your prior desires, then you’re very likely to just send it on and not worry about the consequences.

Demsas: Next week we have the election between Trump and Kamala Harris, and there are a great many normal policy distinctions between the two candidates. And when you look at why people are making their decisions, they often will point to things like inflation or immigration or abortion. But there’s also a distinction on this question of democracy, right? Why does it feel like there’s this yearning for a more authoritarian leader within a democracy like the United States?

Fukuyama: What’s really infuriating about the current election is that so many Americans think this is a normal election over policy issues, and they don’t pay attention to underlying institutions, because that really is what’s at stake. It’s this erosion of those institutions that is really the most damaging thing. In a way, it doesn’t matter who wins the election, because the damage has already been done. You had a spontaneous degree of trust among Americans in earlier decades, and that has been steadily eroded. Even if Harris wins the election, that’s still going to be a burden on society. And so the stakes in this thing are much, much higher than just the question of partisan policies. And I guess the most disappointing thing is that 50 percent of Americans don’t see it that way. We just don’t see the deeper institutional issues at stake.

Demsas: We’re in a time of great affluence—tons of consumer choice, access to goods and services, bigger houses, bigger cars. George Orwell once wrote, in his 1940 review of Mein Kampf, that people have a desire to struggle over something greater than just these small policy details. [“Whereas Socialism, and even capitalism in a more grudging way, have said to people ‘I offer you a good time,’ Hitler has said to them ‘I offer you struggle, danger and death,’ and as a result a whole nation flings itself at his feet,” Orwell observed.] Does that desire create a problem for democracies?

Fukuyama: There’s actually a line in one of the last chapters of The End of History where I said almost exactly something like if people can’t struggle on behalf of peace and democracy, then they’re going to want to struggle against peace and democracy, because what they want to do is struggle, and they can’t recognize themselves as full human beings unless they’re engaged in the struggle.

Demsas: In The End of History, you wrote that “men have proven themselves able to endure the most extreme material hardships in the name of ideas that exist in the realm of the spirit alone, be it the divinity of cows or the nature of the Holy Trinity.” And I worry that liberal democracy is unable to provide the sorts of ideas that make people want to struggle or fight for it. Does it feel to you like it’s doomed?

Fukuyama: Well, I don’t think anything is doomed. This is the problem with peace and prosperity. It just makes people take [things] for granted. We’ve gone through periods of complacency, punctuated by big crises. And then in some of these prior cases, those crises were severe enough to actually remind people about why a liberal order is a good thing, and then they go back to that. But then time goes on, so you repeat the cycle, with people forgetting and then remembering why liberal institutions are good.

Demsas: After Trump beat Hillary Clinton in 2016, I had friends say, do you think your entire view of the American public would change if 120,000 people in Wisconsin, Michigan, and Pennsylvania had voted differently? And I wonder if that’s a question to ask ourselves now, if Trump wins again. Does it really say that much about people’s views on democracy?

Fukuyama: It has much deeper implications. The first time he won, he didn’t get a popular-vote majority. You could write it off as a blip. But everybody in the country has lots of information now about who he is and what he represents. So the second time around, it’s going to be a much more serious indictment of the American electorate.


The Deep Psychological Reason We Are Stuck in This Feedback Loop with Donald Trump

Even after almost a decade of Trump, we still can’t quite see it.


Six principles for thinking about AI risk


When OpenAI released GPT-4 in March 2023, its surprising capabilities triggered a groundswell of support for AI safety regulation. Dozens of prominent scientists and business leaders signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. When OpenAI CEO Sam Altman called for a new government agency to license AI models at a Congressional hearing in May 2023, both Democratic and Republican senators seemed to take the idea seriously.

It took longer for skeptics of existential risk to find their footing. This might be because few people outside the tight-knit AI safety community were paying attention to the issue prior to the release of ChatGPT. But in recent months, the intellectual climate has changed significantly, with skeptical arguments gaining more traction.

Last month a pair of Princeton computer scientists published a new book that includes my favorite case for skepticism about existential risks from AI. In AI Snake Oil, Arvind Narayanan and Sayash Kapoor write about AI capabilities in a wide range of settings, from criminal sentencing to moderating social media. My favorite part of the book is Chapter 5, which takes the arguments of AI doomers head-on.

Arvind Narayanan is a computer science professor and Sayash Kapoor is a graduate student in computer science at Princeton. (Photos courtesy of Princeton University Press)

Some skeptics of AI doom are skeptics of AI in general. They downplay the importance of generative AI and question whether it will ever be useful. At the opposite extreme are transhumanists who argue it would actually be a good thing if powerful AI systems superseded human beings.

Narayanan and Kapoor stake out a sensible position between these extremes. They write that generative AI has many “beneficial applications,” adding that “we are excited about them and about the potential of generative AI in general.” But they don’t believe LLMs will become so powerful that they’re able to take over the world.

Some of Narayanan and Kapoor’s arguments are similar to points I’ve made in my newsletter over the last 18 months. So as an alum of the Princeton computer science program where Narayanan teaches and Kapoor is studying, I’m going to label their perspective the Princeton School of AI Safety.

The Princeton School emphasizes continuity between past and future progress in computer technology. It predicts that improvements in AI capabilities will often require a lot of real-world data—data that can only be gathered through slow and costly interactions in the real world. This makes a “fast takeoff” in AI capabilities very unlikely.

The Princeton School is skeptical that future AI systems will have either the capacity or the motivation to gain power in the physical world. They urge policymakers to focus on specific threats, such as cyberattacks or the creation of synthetic viruses. Often the best way to do this is by beefing up security in the physical world—for example by regulating labs that synthesize viruses or requiring that power plants be “air gapped” from the Internet—rather than trying to limit the capabilities of AI models.

Here are six principles for thinking about existential risk articulated by Narayanan and Kapoor in AI Snake Oil.


1. Generality is a ladder

Two of today’s leading AI labs—OpenAI and Google’s DeepMind—were explicitly founded to build artificial general intelligence. A third, Anthropic, was founded in 2021 by OpenAI veterans worried OpenAI wasn’t taking the safety risks from AGI seriously enough.

In a recent essay, Anthropic CEO Dario Amodei predicted that AGI (he prefers the term “powerful AI”) will dramatically accelerate scientific progress.

Views like this are common in the technology industry. A widely read June essay by former OpenAI employee Leopold Aschenbrenner predicted that leading labs would create AGI before the end of the decade, and that AGI would provide a “decisive economic and military advantage” to whichever country gets it first.

Narayanan and Kapoor see things differently.

“We don’t think AI can be separated into ‘general’ and ‘not general,’” they write. “Instead, the history of AI reveals a gradual increase in generality.”

The earliest computing devices were designed for one specific task, like tabulating census results. In the middle of the 20th Century, people started building general-purpose computers that could run a variety of programs. In AI Snake Oil, the Princeton authors argue that machine learning represented another step toward generality.

“The general-purpose computer eliminated the need to build a new physical device every time we need to perform a new computational task; we only need to write software. Machine learning eliminated the need to write new software; we only need to assemble a dataset and devise a learning algorithm suited to that data.”

Pretrained language models like GPT-4 represent yet another step toward generality. Users don’t even need to gather training data. Instead, they can simply describe a task in plain English.
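To make the ladder concrete, here is a minimal sketch of three rungs using a toy spam-detection task. It's my own illustration of the book's point, not code from AI Snake Oil, and the library choices (scikit-learn for the learned rung, a hosted language model for the last rung) are simply stand-ins.

```python
# Three rungs on the "ladder of generality," illustrated with a toy
# spam-detection task. (My own sketch, not code from AI Snake Oil.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Rung 1: a purpose-built program -- the logic is written entirely by hand.
def is_spam_rule_based(message: str) -> bool:
    return "free money" in message.lower() or "act now" in message.lower()

# Rung 2: machine learning -- supply labeled examples and a learning
# algorithm instead of hand-written rules.
train_texts = ["free money now", "lunch at noon?", "act now to win", "meeting notes attached"]
train_labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(train_texts), train_labels)
print(classifier.predict(vectorizer.transform(["win free money"])))  # -> [1]

# Rung 3: a pretrained language model -- no dataset and no task-specific
# code; just describe the job in plain English and send it to a hosted model.
prompt = "Is the following message spam? Answer yes or no.\n\nMessage: win free money, act now!"
```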

Many companies today are trying to build even more powerful AI systems, including models with the capacity to make long-term plans and reason about complex problems.

“Will any of the thousands of innovations being currently produced lead to the next step on the ladder of generality? We don’t know,” Narayanan and Kapoor write. “Nor do we know how many more steps on the ladder there are.”

These authors view AGI as a “serious long-term possibility.” But they believe that “we’re not very high up on the ladder yet.”

2. Recursive self-improvement isn’t new

One reason many people view AGI as an important threshold is that it could enable a process called recursive self-improvement. Once we have an AI model that is as intelligent as a human AI researcher, we can make thousands of copies of that model and put them to work creating even more powerful AI models.

Narayanan and Kapoor don’t deny that this could happen. Rather, they point out that programmers have been doing this kind of thing for decades.

At the dawn of the software industry, programmers had to write software in binary code, a tedious and error-prone process that made it difficult to write complex programs. Later people created software called compilers to automate much of this tedium. Programmers could write programs in higher-level languages like COBOL or Fortran and a computer would automatically translate those programs into the ones and zeros of machine code.

Over the decades, programmers have created increasingly powerful tools to automate the software development process. For example, cloud computing platforms like Amazon Web Services allow a programmer to set up a new server—a process that used to take hours—with a few clicks.
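To make that kind of automation concrete, here is a minimal sketch of launching a server with a single API call using AWS's boto3 library. It assumes credentials are already configured locally, and the AMI ID is a placeholder rather than a real machine image.

```python
# A minimal sketch of automated infrastructure provisioning with boto3.
# Assumes AWS credentials are configured; the AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched server {instance_id} with one API call.")
```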

“There is no way we could have gotten to the current stage in the history of AI if our development pipelines weren’t already heavily automated,” the Princeton pair write. “Generative AI pushes this one step further, translating programmers’ ideas from English (or another human language) to computer code, albeit imperfectly.”

In August, I pointed out another example of a company using AI to create better AI: programmers at Meta used older Llama models to generate data they used to train the Llama 3.1 herd of models.
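The general pattern is easy to sketch, though what follows is my own illustration rather than Meta's actual pipeline: have an existing model generate labeled examples and save them as training data for the next model. The OpenAI Python client and the gpt-4o-mini model name are stand-ins for "an older model."

```python
# A minimal sketch of model-generated training data: an existing model
# produces question/answer pairs that are saved for training a newer
# model. Illustrative only -- not Meta's actual Llama 3.1 pipeline.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_example(topic: str) -> dict:
    """Ask the older model for one question/answer pair about a topic."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in for "an older model"
        messages=[{
            "role": "user",
            "content": f"Write one question and a correct answer about {topic}. "
                       "Return JSON with keys 'question' and 'answer'.",
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Build a small synthetic dataset that a newer model could be trained on.
with open("synthetic_train.jsonl", "w") as f:
    for topic in ["compilers", "cloud computing", "software testing"]:
        f.write(json.dumps(generate_example(topic)) + "\n")
```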

We should absolutely expect this AI-improving-AI process to continue, and even accelerate, in the coming years. But there’s no reason to expect a discontinuity at the moment a frontier AI lab “achieves AGI.” Rather, we should expect smooth acceleration as AI systems become more powerful and the AI development process becomes more automated. By the time we reach AGI, the process may already be so thoroughly automated that there’s not much room for AGI to further accelerate the process.

3. Real-world experience is essential for new capabilities

At this point I expect some readers will object that I—and the authors of AI Snake Oil—are not taking exponential growth seriously. In February 2020, many people dismissed COVID because it seemed that only a handful of people were getting infected per day. But thanks to the power of compounding growth, thousands of people were getting infected daily by the end of March.

So maybe there won’t be a sudden change at the precise moment when an AI system “achieves AGI.” But you might still expect exponential increases in computing power to produce AI systems that are far more capable than humans.

That might be a reasonable assumption if better algorithms and more computing power were the only things required to make AI systems more capable. But data is the third essential ingredient for any AI system. And unlike computing power, data is not fungible. If you want an AI model to design rockets, you need training data about rockets. Data about French literature or entomology isn’t going to help.

Over the last 15 years, AI companies have enjoyed a massive infusion of data scraped from the Internet. That enabled the creation of broad and capable models like GPT-4o and Claude 3.5 Sonnet. But more progress is needed to reach human-level capabilities on a wide range of tasks. And that isn’t just going to take more data—it’s going to require different kinds of data than AI companies have used in the past.

In a piece last year, I argued that real-world experience is essential to mastering many difficult tasks:

There’s a famous military saying that “no plan survives contact with the enemy.” The world is complex, and military planners are invariably working with incomplete and inaccurate information. When a battle begins, they inevitably discover that some of their assumptions were wrong and the battle plays out in ways they didn’t anticipate.

Thomas Edison had a saying that expresses a similar idea: “genius is one percent inspiration and 99 percent perspiration.” Edison experimented with 1,600 different materials to find a good material for the filament in his most famous invention, the electric light bulb.

“I never had an idea in my life,” Edison once said. “My so-called inventions already existed in the environment—I took them out. I’ve created nothing. Nobody does. There’s no such thing as an idea being brain-born; everything comes from the outside.”

In other words, raw intelligence isn’t a substitute for interacting with the physical world. And that limits how rapidly AI systems can gain capabilities.

Narayanan and Kapoor have a similar view.

“Most human knowledge is tacit and cannot be codified,” they write. “Beyond a point, capability improvements will require prolonged periods of learning from actual interactions with people. Much of the most prized and valuable human knowledge comes from performing experiments on people, ranging from drug testing to tax policy.”

The pair point to self-driving cars as an example. In the early years, these vehicles seemed to make rapid progress. By the late 2010s, many companies had built prototype vehicles with basic self-driving abilities.

But the authors write that more recently, progress “has been far slower than experts originally anticipated because they underestimated the difficulty of collecting and learning from real-world interaction data.” Self-driving cars have to deal with a long list of “edge cases” that must be discovered through trial and error on real public streets. It took more than a decade for Waymo to make enough progress to launch its first driverless taxi service in Phoenix. And even Waymo’s cars still rely on occasional assistance from remote operators.

And “unlike self-driving cars, AGI will have to navigate not just the physical world but also the social world. This means that the views of tech experts who are notorious for misunderstanding the complexity of social situations should receive no special credence.”


4. The superintelligence is us

Maybe it will take a while to invent AI systems with superhuman intelligence, but doomers still insist that these systems could be extremely dangerous once they’re invented. Just as human intelligence gives us power over chimpanzees and mice, so the extreme intelligence of future AI systems could give them power over us.

But Narayanan and Kapoor argue that this misunderstands the source of our power over the natural world.

“Humans are powerful not primarily because of our brains but because of our technology,” they write. “Prehistoric humans were only slightly more capable at shaping the environment than animals were.”

So how did humans get so powerful? Here’s how I described the process in an essay I wrote last year:

Humanity’s intelligence gave us power mainly because it enabled us to create progressively larger and more complex societies. A few thousand years ago, some human civilizations grew large enough to support people who specialized in mining and metalworking. That allowed them to build better tools and weapons, giving them an edge over neighboring civilizations.

Specialization has continued to increase, century by century, until the present day. Modern societies have thousands of people working on highly specialized tasks from building aircraft carriers to developing AI software to sending satellites into space. It’s that extreme specialization that gives us almost godlike powers over the natural world.

Doomers envision a conflict where most of the AI systems are on one side and most of the human beings are on the other. But there’s little reason to expect things to work out that way. Even if some AI systems eventually “go rogue,” humans are likely to have AI systems they can use for self-defense.

“The crux of the matter is that AI has already been making us more powerful and this will continue as AI capabilities improve,” Narayanan and Kapoor write. “We are the ‘superintelligent’ beings that the bugbear of humanity-ending superintelligence evokes. There is no reason to think that AI acting alone—or in defiance of its creators—will in the future be more capable than people acting with the help of AI.”

5. Powerful AI can be used for both offense and defense

This isn’t to say we shouldn’t worry about possible harms from new AI. Most new technologies enable new harms and AI isn’t an exception. For example, we’ve already started to see people commit fraud using deepfakes created using generative AI.

AI systems have the potential to be very powerful—and hence to enable even more significant harms in the future. So maybe it would be wise to press pause?

However, AI systems could also have tremendous benefits. And the benefits of a new technology are often closely connected to the harms.

Take cybersecurity as an example. There is little doubt that foundation models will enable the creation of powerful tools that hackers could use to identify and exploit vulnerabilities in computer systems. But Narayanan and Kapoor argue that this isn’t new:

Hackers have long had bug-finding AI tools that are much faster and easier to use than manually searching for bugs in software code.

And yet the world hasn’t ended. Why is that? For the simple reason that the defenders have access to the same tools. Most critical software is extensively tested for vulnerabilities by developers and researchers before it is deployed. In fact, the development of bug-finding tools is primarily carried out not by hackers, but by a multibillion-dollar information security industry. On balance, the availability of AI for finding software flaws has improved security, not worsened it.

We have every reason to expect that defenders will continue to have the advantage over attackers even as automated bug-detection methods continue to improve.
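To make "bug-finding tools" concrete, here is a minimal sketch of the oldest version of the idea: a naive random fuzzer that hammers a parser with junk input and records any unexpected crashes. This is my own illustration; real tools are coverage-guided and increasingly AI-assisted, but the key point stands either way: defenders and attackers can point the same harness at the same code.

```python
# A minimal random fuzzer: feed malformed inputs to a target function
# and record crashes that aren't ordinary input-validation errors.
import json
import random
import string

def random_input(max_len: int = 40) -> str:
    length = random.randint(0, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def fuzz(target, trials: int = 10_000) -> list[str]:
    crashes = []
    for _ in range(trials):
        data = random_input()
        try:
            target(data)
        except ValueError:
            pass  # expected rejection of bad input -- not a bug
        except Exception:
            crashes.append(data)  # unexpected failure -- worth investigating
    return crashes

if __name__ == "__main__":
    print(f"Found {len(fuzz(json.loads))} unexpected crashes in json.loads")
```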

6. Safety regulations should focus on specific threats

Many AI safety experts advocate legislation that focuses on the safety of AI models. California’s failed SB 1047, for example, would have required AI companies to certify that their models were safe before releasing them to the public.

But Narayanan and Kapoor question whether this approach will work.

“The alignment research we can do now with regard to a hypothetical future superintelligent agent is inherently limited,” they write. “We can currently only speculate about what alignment techniques might prevent future superintelligent AI from going rogue. Until such AI is actually built, we just can’t know for sure.”

Instead, they advocate a focus on specific real-world risks. In other words, we should lock down the physical world rather than AI models.

Take biological risks as an example.

“It’s possible that in the future, AI might make it easier to develop pandemic-causing viruses in the lab,” they write. “But it’s already possible to create such viruses in the lab. Based on the available evidence, the lab-leak theory of COVID remains plausible.”

“We need to improve security to diminish the risk of lab leaks. The steps we take will also be a defense against AI-aided pandemics. Further, we should (continue to) regulate the lab components needed to engineer viruses.”

I advocated a similar approach in an essay I wrote last year:

It would be a good idea to make sure that computers controlling physical infrastructure like power plants and pipelines are not directly connected to the Internet. [Matt Mittelsteadt, a scholar at the Mercatus Center] argues that safety-critical systems should be “air gapped”: made to run on a physically separate network under the control of human workers located on site.

This principle is particularly important for military hardware. One of the most plausible existential risks from AI is a literal Skynet scenario where we create increasingly automated drones or other killer robots and the control systems for these eventually go rogue or get hacked. Militaries should take precautions to make sure that human operators maintain control over drones and other military assets.

These precautions won’t just protect us from AI systems that “go rogue,” they will also provide an extra layer of protection against terrorists, foreign governments, and other human attackers—whether or not they use AI in their attacks.

If you enjoyed this article, I encourage you to subscribe to Narayanan and Kapoor’s excellent newsletter, which is also called AI Snake Oil.


The Slop Candidate


For me, it’s the amber glow of the fry machine gently illuminating the exhausted 45th president of the United States of America. The glare of the potato-warming apparatus casts a shadow on the left side of Donald Trump’s face as he works at a McDonald’s in Bucks County, Pennsylvania. This man, who held the nuclear codes just 1,369 days ago, is now wearing an apron and doling out fast food.

The images of Trump’s McDonald’s stunt—in which he jiggled the fryer and handed burgers out of a window yesterday—are uncanny. There’s Trump, face contorted in the appearance of deep concentration, tilting a fry basket to the heavens; Trump hanging two-thirds of the way out a drive-through window, waving like a beleaguered Norman Rockwell character; Trump, mouth agape, appearing to yell into the middle distance of a fast-food parking lot. The shadows of the McDonald’s kitchen, the interplay between the sheen of the stainless steel and the cast of the nugget-warming lights, give the very real photos a distinct Midjourney aesthetic. These pictures immediately reminded me of the viral, glossy AI-generated images of Trump being arrested and thrown in jail that started circulating in the spring of 2023.

Perhaps it’s because my feeds have been simultaneously clogged with election-season garbage and AI-generated slop, but the McDonald’s photoshoot struck me as a moment of strange synthesis, where reality and tech-enabled fiction felt somehow mashed together by the internet’s cultural particle accelerator. Trump proffering Dollar Menu items isn’t AI, but it is still slop in all the ways that matter: a hastily staged depiction of a fairly stupid, though entertaining, fantasy, meant to delight, troll, and, most important, emphasize a false impression of the candidate.

[Read: The most revealing moment of a Trump rally]

This is clarifying, insofar as it demonstrates that Trump’s primary output is always a kind of slop. Slop, as it relates to AI, is loosely defined as spammy, cheap blocks of text, video, or images, quickly generated by computer programs for mass distribution. But nonsynthetic slop is everywhere too. What is a Trump rally but a teleprompter reading of stump-speech slop, interspersed with inexplicable lorem ipsum about Hannibal Lecter and wind turbines spun up by the unknowable language model in Trump’s own head? What are Trump’s tweets and Truth Social shitposts if not slop morsels, hurled into the internet’s ether for the rest of us to react to? And what is the Trump campaign producing if not fantastical propaganda intended to conjure a false image of Joe Biden’s America as a dark, dangerous place on the verge of destruction, besieged by immigrants, and savable only by one heroic man? (For instance, earlier today, Trump posted an AI-generated picture of himself as a buff Pittsburgh Steelers lineman.) The McDonald’s photo op was barely real: The restaurant was closed to the public during Trump’s visit. He ignored a question about the minimum wage. Only prescreened customers were allowed in the drive-through, and those customers were not able to place orders—they just took whatever Trump handed to them. Like any good AI slop, the op illustrated a fantasy—in this case, that Trump, a man who has long lived in a gilded penthouse, is a working-class man.

In August, I wrote that AI slop is now the aesthetic of the far-right and MAGA coalition, in part because it allows hyper-partisans to illustrate the fictional universe they’ve been peddling and living in for the past decade-plus. But MAGA world has always trafficked in slop. Old memes depicted “God Emperor” Trump. Right-wing artists including Ben Garrison and Jon McNaughton have long illustrated Trump in an absurd light—hulking and hypermasculine or holding a lantern on a boat, like George Washington crossing the Delaware. This was proto-slop, for a simpler, more analog time.

Slop isn’t necessarily a commentary on quality so much as on how it is meant to be consumed: fleetingly, and with little or no thought beyond the initial limbic-system response. The main characteristic of slop is that there is an endless supply of it. And so it makes sense that campaigns—not just Trump’s—tend to traffic in it. Campaigns are nothing if not aggressive, often-desperate content farms hoping to get attention. In service of that mission, they meme, pander, email, and text, frequently in cringeworthy fashion. Not unlike the fast food that Trump was hawking, slop is sometimes delicious, but it is never nutrient dense.

[Read: The MAGA aesthetic is AI slop]

AI slop has clogged the internet with synthetic ephemera, but it has also given a name to the human-made attentional grist that’s all around us—the slop that exists in real life, in meatspace. Trump was really at that Bucks County McDonald’s, debasing himself for swing-state votes in the same way that candidates have for generations (see: Rick Perry eating a corn dog in 2011). Presidential campaigning has long offered an unreal portrait of American life—it’s just been made more peculiar by the presence of Trump.

If AI slop can teach us something about a man like Trump, it seems that the opposite is also true. In the lead-up to the candidate’s fast-food stop, various news outlets, fans, and even T-shirt sellers used generative-AI tools to mock up what the visit might look like. The photos aren’t terribly far off (a few of them accurately placed Trump in an apron), but all of them seem to be trying too hard. In some, Trump’s clothing is too garish; in others, he’s toting a comically large amount of food. None capture the awkward banality of the candidate’s actual campaign stop. In his own way, Trump has shown us all the limits of artificial intelligence. Computers, at least for now, cannot quite capture the crushing surreality and maddening absurdity of modern electoral politics.


Former Pitchfork writers launch new music website, Hearing Things


A new era for digital media continues to evolve with the launch of Hearing Things, a new "worker-owned music and culture platform" in the vein of Defector and Aftermath. This new digital magazine was founded by Andy Cush, Ryan Dombal, Julianne Escobedo Shepherd, Dylan Green, and Jill Mapes, five former staffers and writers from Pitchfork who came together to form something new after mass layoffs and restructuring at Condé Nast earlier this year. Hearing Things was created to be "a bulwark against the bullshit" and give its contributors "the freedom and expertise to delve into the art we love (or absolutely do not love!) in a way that furthers the conversation instead of just regurgitating it," according to the site's mission statement.

Hearing Things will have album and song reviews in addition to investigative reporting or more "second day pieces" rather than a traditional news section, Dombal tells The A.V. Club. Readers can also expect more live event reviews, because "that's where a lot of music fans are really spending their money now. Probably way too much money in some places," he says, pointing to a piece by Cush about the ongoing Ticketmaster issues. "[There's] not a ton of… critical writing about live shows," he adds. "So we felt like that was a nice place to kind of step in and be like: Is this concert enjoyable, is it worth the money?"

The founders are focused on building a community around Hearing Things. One of the early subscription perks is commenting, which is an attempt to "bring it back to basics" so that the writers can "talk to people who really care about what we're doing." Proposed future perks include virtual listening parties and even in-person events in New York like Q&As with artists and music writers. "[It] seems like a kind of a quaint idea, but I feel like that intentionality is really missing for music," Dombal explains. "And we want to bring that back, and meet like-minded people at the same time."

Moving forward, the founders of Hearing Things envision publishing digital cover stories and perhaps even putting their work in print: "some sort of zine-y element that we could take some of the best stuff from the site and put out in a physical form," Dombal shares. But for now, they're focused on "trying to build up this audience of like minded folks that… see us as people and not necessarily just like this institution from on high. We'll be doing the writing on the site, but we also wanna be approachable," Dombal says, "and have people feel like they can get to know us, get to know our tastes and our style also." You can check out the new site here.


