👉 How AI will upend labor, trust and what it means to be human over the coming decade
The philosophy of GPT-3, cyborgs & the red pilling of America
Yo! ✌️ I’m Brett! I am a Product Manager and former Cognitive Science researcher. Social Studies is a semi-weekly newsletter for people building great products for humans. It includes recaps of what happened on Tech Twitter every week plus deep analysis using frameworks from Psychology, Economics, and the other Social Sciences.
🤖 GPT-3 Armageddon - OpenAI’s new AI model is so powerful, it’s likely to completely remake the labor market over the next several years.
🔮 This has happened before and will happen again - The printing press, spreadsheet software, and AWS all had the same effect. Job loss, but also incredible expansion.
👽 Trust will be destroyed… on purpose? - Misinformation is already a huge problem, and GPT-3 could make it 100x worse. *Puts tin foil hat on*. Its release along with one of the biggest Twitter security breaches in history happening around the same time may not have been a coincidence. Some believe this was a coordinated campaign to instill a culture of distrust on the internet as a nuclear option to combat misinformation ahead of the US presidential election.
🤯 Humans won’t seem very human anymore - As AI like GPT-3 continues to progress, we will undoubtedly approach a time when AI is more convincingly human than humans themselves. This is called “hyperreality.”
🤔 What is “human” anyway? - We’ve always struggled with this definition and every new technological advancement seems to change our minds. Author Brian Christian identified some strategies for seeming human in chat in 2009 but AI can imitate them now.
🤖 We’re all cyborgs - Our brains treat tools as literal extensions of our body… and minds. Modern humanity means coming to terms with the fact that AI and humans aren’t distinct, but rather two connected computing systems.
🤖 GPT-3 Armageddon
GPT-3 is a truly groundbreaking technology OpenAI recently made available to developers.
Founders Fund investor Delian Asparouhov explains what it is:
GPT-3 is essentially a context-based generative AI. What this means is that when the AI is given some sort of context, it then tries to fill in the rest. If you give it the first half of a script, for example, it will continue the script. Give it the first half of an essay, it will generate the rest of the essay.
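GPT-3’s transformer architecture is far beyond a toy sketch, but the “given some context, fill in the rest” idea can be illustrated with a drastically simplified stand-in: a bigram model that always picks the word it has most often seen follow the current one. (Everything below is a hypothetical illustration, not OpenAI’s API or GPT-3’s actual mechanics.)

```python
from collections import Counter, defaultdict

# A tiny training corpus. GPT-3 instead trains on hundreds of
# billions of tokens scraped from the internet.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which -- a bigram model, the crudest
# possible stand-in for GPT-3's learned next-token distribution.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt, n_words=4):
    """Given some context, greedily 'fill in the rest'."""
    words = prompt.split()
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:  # never seen this word end a bigram
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(complete("the cat"))
```

The difference between this toy and GPT-3 is scale and context: GPT-3 conditions on thousands of tokens of context (not just the last word) and samples from a learned distribution, which is why it can continue a half-finished script or essay rather than just echo common word pairs.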
The societal impact of this technology cannot be overstated.
As I mentioned in Metaphors We Build By, it’s fundamentally impossible to predict the future more than a couple of iterations from the present. No one could have predicted TikTok in 2005, let alone 1995. We’re in 1995 for GPT-3.
After looking at the demos, it’s easy to conclude GPT-3 (or more confidently, GPT-4, 5, or 6) is going to have catastrophic effects on the labor market in the near term.
Non-technical people will soon be able to build completely functional applications with plain English as the input. Entry-level software engineering will be hit hard.
The same will be true for design. The field could be impacted much sooner given how much less technically complex design workflows are compared to engineering.
Many have been predicting an oversupply of machine learning engineers for a few years, but GPT-3 radically expedites things. Many companies will opt to use GPT-3 rather than hire expensive engineers to train their own (less powerful) models.
Data scientists, customer support agents, legal assistants, and many more jobs are at serious risk.
GPT-3 is even coming for Lana Del Rey.
🔮 This has happened before and will happen again
The history of technology is the history of booms and busts in labor markets. Every technological revolution destroyed jobs before creating many, many more by enabling single individuals to do the work of many.
The printing press eradicated scribes but kicked off religious, scientific, and cultural revolutions.
Spreadsheet software like Excel decimated traditional accountants but drove incredible efficiency gains for the financial services industry.
AWS eliminated the need for roles supporting on-premises servers but dramatically reduced the cost and complexity of launching internet startups, which has driven much of the growth in the sector we see today.
GPT-3 may be the beginning of the next major cycle remaking labor markets around the world. But this is only part of the story.
GPT-3 will fundamentally change our concept of what’s true and who is human.
👽 Trust will be destroyed… on purpose?
There have been ongoing predictions that the upcoming US presidential election will be overrun with fake news, bots, and other doctored content. With GPT-3, this seems like it will be inevitable and on a scale even larger than previously imagined.
Unfortunately, solutions that could prevent the spread of disinformation, like cryptographic signatures, may be years away even if they’re viable. Knowing this, why would OpenAI release GPT-3 on the world?
With no clear way of stopping misinformation dissemination, the best solution may be to burn everything to the ground.
Perhaps arming everyone with the tools to flood the internet with misinformation ahead of the election could make distrust and skepticism even more commonplace in America. This would in turn make the public more resistant to hostile misinformation campaigns seeking to influence the presidential election.
Twitter saw what could have been a trial balloon for one such misinformation campaign last week. Hackers executed one of the largest security breaches in Twitter history, gaining access to hundreds of verified accounts, from Jeff Bezos to Kanye West.
With this access, the hackers could have done incredible damage - think about all the blackmail material they could have pulled from DMs. Instead, they chose to steal only a modest amount of Bitcoin.
Why? Cyan Banister has a theory (click to read the whole thread):
Discrediting Twitter would further the same end as GPT-3 - the inauguration of a post-truth internet.
The line between what’s real and what’s fake, and what’s shoe and what’s cake, is blurring.
🤯 Humans won’t seem very human anymore
In 1983, French philosopher Jean Baudrillard introduced the concept of “hyperreality.” It says (paraphrased):
Simulations that are highly realistic (and cannot be identified as simulations) end up being treated as reality
As an example, let’s take two people: Daniel and Jessica.
Daniel doesn’t have COVID-19 but is pretending to have symptoms. And he’s REALLY good at pretending.
Jessica has COVID-19 but doesn’t have many symptoms.
If you had to guess which one had COVID-19, you might choose Daniel, not Jessica. The mere act of Daniel trying to imitate a COVID-19 patient may actually make him a more convincing COVID-19 patient.
Now let’s take two more people: GPT-3 and Andrew.
GPT-3 is texting with you and is pretending to be human. She pulls out all the stops to appear human and says something like “heyyyy how’s it goin??”
She’s convincing today and only getting better.
Andrew is texting with you and not necessarily trying to be more human because, well… he is human. He says “Hey. How are you?”
If you had to guess which one is human, you might choose GPT-3, not Andrew. Again, the fact that GPT-3 is continuously improving its ability to pretend that it’s human (thanks to the millions of dollars and thousands of engineering hours poured into it) will eventually mean it becomes a more convincing human.
As AI continues to progress, we will undoubtedly approach a time when humans seem less human than AI
🤔 What is “human” anyway?
Simulating humans actually shouldn’t be too difficult. Oxford philosopher John Lucas remarked that this will come to be “not because machines are so intelligent, but because humans, many of them at least, are so wooden.”
Brian Christian, author of The Most Human Human, explores what it would mean for a human to try to be human. In the book, he documents his participation in a Turing Test competition in 2009 where he competed against AI and other humans to convince judges over chat that he is human.
He ends up winning “The Most Human Human” award. Here are a few strategies he used to seem more human - people on dating apps take note:
Being emotional - being highly moody, irritable, and obnoxious or witty, friendly and playful
Keeping state - reference things brought up before
Generally talking a lot - using gaps in conversation to elaborate on previous points, increasing back and forth between speakers, sending multiple messages in a row
In training for the competition, Christian digs deep into what it actually means to be human. He shared some of his findings with The Atlantic:
We once thought humans were unique for using language, but this seems less certain each year; we once thought humans were unique for using tools, but this claim also erodes with ongoing animal-behavior research; we once thought humans were unique for being able to do mathematics, and now we can barely imagine being able to do what our calculators can.
Douglas Hofstadter, a Pulitzer Prize–winning cognitive scientist, follows up:
As though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.
GPT-3 pushes our definition of “human” further into the corner. With more powerful AI projects being unleashed every year, our conception of what it means to be human may completely implode.
🤖 We’re all cyborgs
So what are we then? Feminist theorist Donna Haraway explains with Cyborg Theory:
We are all chimeras, theorized and fabricated hybrids of machine and organism…condensed images of both imagination and reality.
We are cyborgs. We have merged with machines, both physically and mentally.
Humans have been using tools to extend physical capabilities for millennia - rocks, swords, bicycles, etc. Studies have shown that the brain represents these tools as parts of the body.
Humans have also used tools to extend mental capabilities - currencies, writing, clocks, computers, and now AI. The brain also represents these tools as parts of the mind.
For example, the Getting Things Done productivity framework emphasizes writing tasks down so they don’t stay in your head. Here, writing is a means of transferring data from one storage device (your brain) to another (a note taking app). Once done, you can add new data to your brain.
Elon Musk likes to make the point that we have already merged with computers, but that the latency is just really high - think about the upload speed to your brain for this newsletter. His company, Neuralink, seeks to reduce this latency with a direct brain-machine interface.
Brian Christian, author of The Most Human Human remarked in 2011:
The story of the 21st century will be, in part, the story of the drawing and redrawing of these battle lines, the story of Homo sapiens trying to stake a claim on shifting ground, flanked by beast and machine, pinned between meat and math.
I believe that by the beginning of the 22nd century, peace will be made when we fully embrace the idea that we have been cyborgs all along.
GPT-3 is part of us. 🤖
By the way, the Turing Test competition I mentioned earlier was financed by Hugh Loebner, who made his riches selling portable disco dance floors in the 70s and authored “The Magna Carta for Sex Work.” Yea... I’m just as confused as you are. 😜