Opinions expressed by Entrepreneur contributors are their own.
No college campus is complete without intense competition between STEM and humanities students — and it’s fair to say the scientists have been winning for a long time. Artists and thinkers may have dominated the Renaissance, but ever since the Industrial Revolution, it has been the age of the tech worker. Apple’s market cap alone is greater than the economies of 96% of the world’s countries, and digitally transformed businesses now account for close to half of global GDP.
But as technology hits more milestones and reaches a critical mass, I think the humanities are about to make a long-awaited comeback. Technological innovations, especially artificial intelligence, will urgently require us to confront key questions about human nature. Let’s take a look at some of the biggest debates and how disciplines such as philosophy, history, law and politics can help us answer them.
Related: Here’s What You Should Know About the Coming Power of AI
The rise of sentient artificial intelligence
The potentially ominous or disruptive consequences of artificial intelligence have been the subject of countless books, movies and TV shows. For a while, this seemed like little more than fear-mongering speculation — but as the technology continues to advance, the ethical debate is becoming more relevant.
As AI becomes able to replace more and more occupations and many people lose their jobs, all kinds of ethical dilemmas arise. Is it the government’s job to provide a universal basic income and completely restructure our society, or should it leave people to fend for themselves and call it survival of the fittest?
Then there is the ethical issue of using AI to enhance human performance and compensate for human failings. Where do we draw the line between “person” and “machine”? If the lines blur, do robots need the same rights as humans? The decisions we make will ultimately shape the future of humanity and may make us stronger or weaker (or render us completely obsolete).
Human or Machine?
One of the most remarkable AI advancements is Google’s Language Model for Dialogue Applications (LaMDA). The system was originally introduced as a way to connect different Google services together, but it eventually sparked a heated debate about whether LaMDA was actually sentient — as Google engineer Blake Lemoine claimed after witnessing the authenticity of its conversations.
Ultimately, the general consensus is that Lemoine’s argument doesn’t hold up. LaMDA simply uses predictive statistical techniques to hold a conversation — just because its algorithms are sophisticated enough to appear conversational doesn’t mean LaMDA is sentient. It does, however, raise an important question: What happens if a theoretical AI system could do everything humans can, including having original thoughts and feelings? Should it have the same rights as humans?
Related: What are some ethical issues with artificial intelligence?
Debates over what exactly makes us human are nothing new. Back in 1950, Alan Turing created the Turing test to determine whether a machine is intelligent enough, and similar enough to humans, for us to consider it to have some degree of “consciousness.”
However, not everyone agrees. Philosopher John Searle proposed a thought experiment known as the “Chinese Room”: a program for conversing in Chinese is handed, in the form of instruction cards, to a person in a room who speaks no Chinese. By following the instructions on the cards, that person can pass messages through a slot in the door and trick people outside into thinking they speak Chinese — but obviously, that’s not the case.
According to Lemoine, Google was reluctant to run a Turing test on LaMDA, so Searle doesn’t seem to be alone in his skepticism. But who will resolve these questions?
As AI enriches our lives, more of these questions will arise. 80,000 Hours, a non-profit run by Oxford University academics focused on how individuals can make the greatest impact in their careers, has highlighted positively shaping the development of AI as one of the most pressing issues facing the world today.
While some of this work may focus on technical solutions for programming artificial intelligence in ways aligned with human values, policy and ethics research will also play a huge role. We need people who can resolve debates such as which tasks humans derive fundamental value from performing, which tasks should be handed over to machines, and how humans and machines can work together as a human-machine team (HMT).
Then there are all the legal and political implications of an AI-filled society. For example, if the AI engine running a self-driving car goes wrong, who is responsible? A case could be made that the company that designed the model, the person the model learned to drive from, or the AI itself is at fault.
Questions like that last one require lawyers and policymakers to analyze the problem at hand and advise the government on how to respond. Their efforts will be complemented by historians and philosophers, who can look back at where we’ve been, identify what has kept us going as humans and consider how AI might fit into that. Anthropologists, too, will have a lot to offer, drawing on their research into human societies and civilizations.
Related: Why AI and humans are stronger together than apart
Exponential growth in artificial intelligence requires a revitalization of the humanities
The rise of artificial intelligence may come sooner than anyone expected. Metcalfe’s Law says that the value of a network grows with the square of its size, so each new member adds more potential value than the last — meaning a network gets disproportionately stronger with each addition. We’ve seen this happen as social networks spread, but when we’re talking about the rapid rise of artificial intelligence, the law can be a dire prospect. To work through the issues outlined in this article, we need thinkers from all disciplines. Yet the number of humanities degrees awarded in the U.S. fell by 25% between 2012 and 2020.
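To see why Metcalfe’s Law implies such rapid growth, here is a minimal sketch (the function name is mine, not from the article) counting the possible pairwise connections in a network of n members — the quantity the law treats as a proxy for the network’s potential value:

```python
def metcalfe_value(n: int) -> int:
    """Number of possible pairwise connections among n members: n*(n-1)/2.

    Metcalfe's Law takes this roughly-n-squared quantity as a proxy for a
    network's potential value, so value grows much faster than membership.
    """
    return n * (n - 1) // 2


if __name__ == "__main__":
    # Membership grows 100x, but potential value grows ~10,000x.
    for n in (10, 100, 1000):
        print(f"{n:>5} members -> {metcalfe_value(n):>7} possible connections")
```

A tenfold increase in members yields roughly a hundredfold increase in connections, which is why networked technologies such as AI can scale in impact far faster than intuition suggests.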
As AI becomes an essential part of our daily lives and the technology continues to advance, no one would deny that we need great algorithm developers, AI researchers and engineers. But we also need philosophers, policymakers, anthropologists and other thinkers to guide AI, set its limits and step in when it fails us. That requires people with a deep understanding of the world.
At a time when the humanities are largely dismissed as “meaningless degrees” and young people are discouraged from studying them, I think there is a unique opportunity to revitalize them as disciplines more relevant than ever before — but this requires building complex collaborations between technologists and non-technical fields. Either way, these fields will inevitably play a role; how well they play it will depend on our ability to develop future professionals with multidisciplinary and interdisciplinary perspectives grounded in the humanities.