Lately, I have been experimenting with artificial intelligence image-generating programs. Their power is astonishing. They’re not just good at the technical aspect of creating “art” but appear to be genuinely creative, producing works that not only match the competence of a highly skilled artist but that can be surprising, interesting, and even beautiful.
Of course, everything still depends on the human who is entering the prompt to tell the AI what to do. But nobody who sees these things in action can doubt that we’re entering a very strange new era in which a great deal that could be done by hand will soon be automated.
Many artists are up in arms about the new AI programs — and with good reason. Some are furious that their works have been used as training data without their permission. Others fear that corporate clients will simply turn to machines to do the work that used to be done by human hands. AI is causing the cost of image generation to plummet. In a capitalist economy, where everyone depends for survival on the value of their labor in the marketplace, a massive drop in the value of a skill will cause widespread suffering.
Art is far from the only domain about to be transformed by generative AI. Paralegals, programmers, market researchers, customer-service agents, financial analysts, and many other professions are at risk of seeing much of their work automated in the near future. OpenAI’s hugely successful ChatGPT not only writes bad jokes and slightly better poetry but is even helping to write published research papers. The models are only getting better, too. The newly released GPT-4 is already being used to write whole books.
The powers of this new technology are frightening. Scammers are already using the ability to generate realistic “deepfakes” to fool people into thinking their relatives are asking them for money. Credible-looking misinformation can now be produced at lightning speed, an especially unfortunate development at a time when we lack trusted media institutions. The problem is that new generative AI is being introduced into a capitalist society that is ill-equipped to handle it.
While Noam Chomsky, Gary Marcus, Erik J. Larson, and others have produced convincing arguments that the fear of artificial “superintelligence” coming in the near term is overstated, there are all kinds of ways in which the technology as it already exists can wreak havoc on society.
People are right to be terrified of the disruptions that AI might cause within our lifetimes. But when we think about what those disruptions actually are, it’s clear that the main problem is not actually the development of the technology itself. Introduced under a different economic and political system, few of the risks would be so grave.
Artists, for example, are mostly not afraid of AI because they fear having a machine be better at art. Chess players didn’t stop playing chess when IBM’s Deep Blue beat world champion Garry Kasparov. And if art is made for pleasure and self-expression, it doesn’t matter what anyone else can do.
The problem is that, in our world, artists have to make a living through their art by selling it, and so they have to think about its market value. We’re introducing a technology that can utterly wreck people’s livelihoods, and in a free-market economic system, if your skills decline in value, you’re screwed.
AI generated illustration for the front cover of a sci-fi book. (Courtesy of Nathan Robinson)
It’s interesting that we talk about jobs being “at risk” of being automated. Under a socialist economic system, automating many jobs would be a good thing: another step down the road to a world in which robots do the hard work and everyone enjoys abundance. We should be able to be excited if legal documents can be written by a computer. Who wants to spend all day writing legal documents? But we can’t be excited about it, because we live under capitalism, and we know that if paralegal work is automated, that’s over three hundred thousand people who face the prospect of trying to find work knowing their years of experience and training are economically useless.
Luddism is a rational approach to automation in a capitalist society. If machines threaten your job, fight the machines. Even a reactionary like Tucker Carlson has said that politicians should intervene to stop automation, for example, by banning self-driving trucks, because having millions of people thrown out of work would cause too much social disruption. But that solution is absurd: Why would we have people do needless labor that could be done by robots? Truck drivers have their health destroyed and don’t get to see their families for long stretches of time. Even when a machine could do the hard work instead, we’re going to make people do it?
We can expand our imagination far further than Carlson’s. What if finding out that your job could be automated was a thrill? What if it meant that a worker could be paid while the robot did the work? How about this: once the job that you train for is automated, you get an automation pension and get to relax for the rest of your life. Everyone will be praying their job is next on the list to go.
We shouldn’t have to fear AI. Frankly, I’d love it if a machine could edit magazine articles for me and I could sit on the beach. But I’m afraid of it, because I make a living editing magazine articles and need to keep a roof over my head. If someone could make and sell an equally good rival magazine for close to free, I wouldn’t be able to support myself through what I do. The same is true of everyone who works for a living in the present economic system. They have to be terrified by automation, because the value of labor matters a lot, and huge fluctuations in its value put all of one’s hopes and dreams in peril.
Most of the other problems AI could cause really boil down to problems of the way power and wealth are apportioned in our existing society. Because the world is organized into militarized nation-states, we have to worry that AI technology will be used in terrifying new superweapons. Because we let scams and fraud flourish in our Wild West economy, we will see a lot of people get rich using AI to prey on hapless consumers. The profit motive is already socially destructive, but AI will make it much worse, because it will allow companies to figure out how to more efficiently trick and exploit people. The new technologies are being developed by private corporations that have no incentive to ensure that a benefit to them is a benefit to all.
We need to be clear on the source of the problems with AI. They are real and will accelerate the crisis that socialists are devoted to helping humanity solve. But the problem is not technology itself. Technology should be a tool for liberation. Unless we transform the economic system, however, it will be a tool for ever greater exploitation and predation.