Artificial Intelligence 101, but in 20 years
2041 will be a very different year from 2021. A lot can (and will) change in 20 years.
20 years ago, the world (disproportionately the US) was in the aftermath of the dot com crash. At this time, few could predict the profound impact the internet would have on our lives. Many businesses and products we love and use today did not exist 20 years ago.
YouTube was founded in 2005, and Spotify was founded the following year. You could only stream and binge a show on Netflix from your device starting in 2007. Amazon Web Services (AWS) launched in 2006. And more recently, Uber, DoorDash, Lyft, Instacart, and Gopuff were all founded after the 2008 financial crisis.
20 years is a long time!
So how will AI shape the very fabric of our society in 20 years? That’s the question Kai-Fu Lee and Chen Qiufan set out to answer in AI 2041: Ten Visions for Our Future. It is a dense read (over 400 pages). About 70% is fiction rooted in the authors’ prediction of AI evolution. The other 30% explains how existing AI capabilities and future breakthroughs will enable the fictional stories to become a reality by 2041.
The book contains ten fictional stories, each focusing on a specific technology. I found the following three most interesting.
Twin Sparrows - Language Processing and AI Education
Twin Sparrows is the story of a set of twins at Fountainhead Academy, a sort of foster home. Despite being twins, Golden and Silver Sparrow are polar opposites and don’t get along for most of their story. The only thing more surprising than their animosity is the ‘AI partner’ that each twin has.
These AI partners are responsible for much of the children’s development. They shape the twins’ educational journeys, help Golden Sparrow with pranks, and support Silver Sparrow in creating magnificent art.
Artificial Intelligence is not yet sufficiently developed for ‘AI partners’ to exist. The closest technology today is GPT-3, released in 2020 by OpenAI, a research laboratory co-founded by Elon Musk.
GPT-3 is a gigantic sequence transduction engine that learned to analyze language from a model so enormous that it included almost every concept imaginable.
GPT-3 takes in a sequence of words (a prompt) and transforms it into another sequence of words (the answer, or response).
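To make that "sequence in, sequence out" interface concrete, here is a toy sketch. This is emphatically not GPT-3: it is a tiny bigram model that learns word-to-word transitions from a handful of words, whereas GPT-3 uses billions of parameters trained on a large chunk of the internet. But the shape of the interaction is the same: a prompt goes in, a continuation comes out.

```python
# Toy illustration only (NOT GPT-3): a bigram model that learns which word
# tends to follow which, then extends a prompt into a continuation.
import random
from collections import defaultdict

# A tiny "training corpus" invented for this sketch.
corpus = "ai will change how we learn and how we work and how we play".split()

# Learn the word-to-word transitions.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def respond(prompt: str, length: int = 5, seed: int = 0) -> str:
    """Extend the prompt one word at a time using the learned transitions."""
    random.seed(seed)  # fixed seed so the toy output is reproducible
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:  # dead end: no word ever followed this one
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(respond("how we"))
```

Scaled up by many orders of magnitude, and with attention-based transformers instead of a lookup table, this prompt-to-response pattern is essentially how you interact with GPT-3.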
But GPT-3 and other existing AI cannot have seamless conversations with people, let alone take responsibility for a child’s education. Though AI has the advantages of 24/7 availability, low cost, and proficiency with large volumes of data, it is weak at “causal reasoning, abstract thinking, explanatory statements, and common sense”. In fact, one critic writes:
“They will never have a sense of humor. They will never be able to appreciate art, or beauty, or love. They will never feel lonely. They will never have empathy for other people, for animals, or the environment. They will never enjoy music or fall in love, or cry at the drop of a hat”
Sounds convincing, right? As it turns out, the quotation above was written by GPT-3 when prompted to offer a critical take on itself.
So while the concept of an ‘AI partner’ that can tend to all our needs seems far-fetched today, we might be much closer than we think.
Also, the applications and implications of AI for education are significant. Teachers get immediate first-order benefits: AI can automate processes such as correcting students’ errors, answering common questions, assigning homework, and grading them. This frees teachers to focus on stimulating the students’ critical thinking, creativity, empathy, and teamwork.
In my modest experience teaching in the past couple of years, I can attest that the least rewarding aspects of teaching are the administrative and routine tasks. Technology always promises to improve our lives by automating the least rewarding/engaging uses of our time. AI will be no different.
My Haunting Idol - XR Technologies, Meta, and the Metaverse
On October 28, 2021, Facebook, Inc. announced it was changing its name to Meta Platforms, Inc.: a bold, controversial, and decisive bet on XR technologies. XR technologies are the focus of My Haunting Idol.
“XR is a term encompassing three types of technologies: VR, AR, and MR. Virtual Reality (VR) renders a fully synthesized virtual environment in which the user is immersed… By contrast, Augmented Reality (AR) is based on the world that the user is physically in, capturing it through a camera, and then superimposing another layer on top of it.”
Mixed Reality combines the best of VR and AR by laying virtual content on our physical world (like AR), and enabling us to interact with the virtual objects (like VR).
Zuckerberg’s bet on the metaverse makes a lot of sense once you imagine the possibilities. XR can (and will) completely change how we work and play, not unlike in Ready Player One, which is set in 2045.
In the metaverse, we will make friends, do work, exercise, play games, and much more.
XR holds a lot of promise, but I am slightly worried about Meta leading the pack. Facebook struggles with content regulation, fake news, privacy issues, and data security today. About a month ago, a whistleblower uncovered evidence of Facebook’s knowledge of Instagram’s harmful effects on its users, including teens.
If we’re still discovering the negative externalities of Instagram, should we be more cautious about a future where we spend most of our waking hours in a virtual world?
If Meta can’t moderate content on Facebook’s main “blue” app, how will it moderate the metaverse?
The Holy Driver - Levels of AV, Smart Cities, Remote Drivers
Will future highways simply be streams of autonomously driving Teslas?
An Autonomous Vehicle (AV) or self-driving car is a computer-controlled vehicle that drives itself. The Society of Automotive Engineers defines 6 levels of automation to make the AV conversation more concrete. I have listed them here:
L0 (no automation) - The human does all the driving, while AI watches the road and alerts the driver when deemed appropriate.
L1 (“hands on”) - AI can do one specific task only if the human driver turns it on, such as just steering.
L2 (“hands off”) - AI can do multiple tasks (such as steering, braking, and acceleration) but still expects the human to supervise and take over as needed.
L3 (“eyes off”) - AI can take over driving but will need the human to be ready to take over upon request by AI. (There are skeptics who wonder whether an abrupt handover would exacerbate danger, rather than mitigate it)
L4 (“mind off”) - AI can take over driving completely, but only on roads and in environments that AI understands, like city streets and highways that have been mapped in high-definition.
L5 (“steering wheel optional”) - No human is required at all, on any road and in any environment.
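The six levels above form a simple ordered classification, which can be captured in a few lines of code. The sketch below (my own shorthand, not from the book or the SAE standard) encodes the levels as an enum, with one illustrative rule: through L3, a human must remain ready to drive; at L4 and L5, they need not.

```python
# The six SAE automation levels as an ordered enum, annotated with the
# shorthand labels used above. The helper below is an illustrative rule
# of thumb, not part of the SAE J3016 standard.
from enum import IntEnum

class SAELevel(IntEnum):
    L0 = 0  # no automation: human drives, AI only watches and warns
    L1 = 1  # "hands on": AI handles one task, e.g. just steering
    L2 = 2  # "hands off": AI handles several tasks, human supervises
    L3 = 3  # "eyes off": AI drives, human must take over on request
    L4 = 4  # "mind off": AI drives fully, but only in mapped environments
    L5 = 5  # "steering wheel optional": no human required anywhere

def human_attention_required(level: SAELevel) -> bool:
    # Up to and including L3, a human must stay ready to take the wheel.
    return level <= SAELevel.L3

print(human_attention_required(SAELevel.L2))  # -> True
print(human_attention_required(SAELevel.L5))  # -> False
```

The jump that matters most is from L3 to L4: it is the point where responsibility shifts entirely from the human to the machine.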
L0-L3 are currently available in cars like Tesla, but L5 feels much more elusive. A car with no steering wheel that can drive itself in any environment sounds like pure science fiction. The authors suggest two exciting developments that can make L5 realistic.
First, the authors assume that by 2041 we will have adapted our cities to AVs by building smart cities. This means that most roads have been scanned, AVs are familiar with the environment, and cars seamlessly communicate with one another and accurately perceive their surroundings through sensors on the roads. But importantly, humans would be unable to drive on most roads. It would be illegal!
Humans will not be able to keep up with the blazing-fast communication between the many AVs on a highway moving at breakneck speeds. At that point, the error rate of AVs would be far lower than human error rates, so the cars could move faster. If an ambulance needs passage, the cars make way and close back up as if by magic.
I think a lot of this is wishful thinking for 20 years, but Bill Gates put it nicely:
“Most people overestimate what they can do in one year and underestimate what they can do in ten years.”
Second, there will inevitably be situations where an AV faces an unexpected environment, such as a natural disaster or an act of terrorism. The solution presented in The Holy Driver is that an expert human driver takes control of the car remotely through a simulator. Driving will become a high-skill job reserved for expert drivers who remotely control self-driving cars in emergencies.
“By that time, people who love driving will do what equestrians do today -- go to private areas designated for entertainment or sports”
Conclusion
The next 20 years hold promise for AI. Many people, including myself, believe that AI is the next digital revolution after the Internet (to say nothing of Web 3.0). Google has committed to an ‘AI first’ strategy, infusing AI into all its products.
I am most excited about AI applications in Language Processing, Virtual Reality, and Autonomous Vehicles. However, Kai-Fu Lee and Chen Qiufan discuss other pertinent AI applications in AI 2041 -- Finance applications, Biometrics and Deepfakes, AI in Healthcare, and Autonomous weapons.
Do you want to learn more about AI and its future applications? Are you unafraid of reading a 400-page book?
If your answer is yes to both, then get a copy of AI 2041.
But if you haven’t read a dense book in a while and still want to learn more about AI, just subscribe. I will be revisiting this book, and AI in general, soon.
Note: All unattributed quotations are from AI 2041: Ten Visions for Our Future