Knowledge and access to information are two very different things. The mere possession of a dictionary does not make anyone a master of the English language, any more than access to an AI chatbot makes anyone a genius. Like a dictionary, machine learning and other technologies can serve as resources. However, availability cannot replace true ownership of skills, knowledge, or understanding.
Do not misunderstand my position: we must invest in the future of Artificial Intelligence as a nation. If we don’t, our foreign adversaries will. We cannot afford to lose the technological arms race of our lifetime. Thankfully, President Trump has been a leader in ensuring our nation is prepared and equipped to be the world’s frontrunner in Artificial Intelligence.
However, as we build this new AI-linked economy, I believe we must be deliberate in our consideration of the human condition. After all, it is easy to lose sight of personal proficiency when the world is focused on total efficiency.
The broad picture painted by experts is that our lives will benefit from the use of AI. In fact, industry leaders like Bill Gates constantly remind us that AI will make most of our lives “easier.” He is probably accurate in one sense. Yet does “easier” necessarily mean better? That is the question of the hour.
Some commentators might contend that innovation is almost always met with skepticism. While that is largely true, the argument leaves out an important fact: history tells us that sometimes the concerns have been warranted.
There is a long history in our country of lasting societal acceptance before dangers are revealed years or even decades later, after it is too late for a generation of Americans. For example, “forever chemicals” have been a constant part of our daily lives over the last half century. These chemicals once coated the nonstick cookware favored in kitchens for its convenience and easy cleanup. However, it is now widely accepted that they are linked to cancer and other diseases.
Another shocking example comes from the 1800s, when a concoction of morphine and alcohol was marketed across the country as a treatment for children’s “crankiness” and other so-called ailments. The company behind the brand shipped millions of bottles annually, recommending three or four doses a day for children, with every single dose delivering many times the maximum recommended amount of morphine. The drug was hailed as a miracle cure.
My point is not about regulation; my point is about realism. In a free nation, you must be able to make your own decisions, but you must also be willing to accept the possibility of personal consequences.
As for this new era of increased human reliance on Artificial Intelligence, we simply don’t know the long-term impact on human cognition. What we do know is that peer-reviewed academic research has already found that the use of AI can lead to a significant loss of decision-making ability in humans, as well as increased laziness. Yet, despite these findings, we don’t know the true extent because the technology has not been widespread long enough for generational study. However, science has long understood the principle by which our mental capacity operates: you either use it or you lose it. Put simply, a brain cell that goes unused generally loses its function.
If this principle carries over into the AI age, we could witness a decline in human mental capability that we are not currently prepared to face. Sadly, according to some researchers, evidence already points to declining human intelligence due, at least in part, to the entrenchment of technology and Artificial Intelligence in our daily lives. Creativity and decision-making are the areas where declines have been most clearly linked to AI. In the short term, these capacities are indispensable to the modern economy. In the long term, they are a big part of what makes us human.
As I have noted in prior writings, the economist Thomas Sowell has pointed out that there are generally no solutions to societal problems, “only trade-offs.” We should be wary of those who promise a drug with no side effects, a utopian society free from consequence, or a government that should be completely trusted. That may be what we want to believe, but it does not comport with the reality of nature.
In the service-oriented economy in which we live today, value is often brought to market through expertise. Professionals should therefore be skeptical of trading personal competence and learned knowledge for the sake of efficiency and ease. Competence cannot exist without some level of personal proficiency and subject mastery.
Despite all the good that comes from these tools, we must tread carefully in building a future society where very little human knowledge exists apart from them. Self-reliance and self-determination can survive only if we, as citizens, retain the ability to think critically and deeply, independent of anyone or anything else.
For this reason, the coming age of AI requires not only thoughtful discussion among leaders in government and business about its use, its ethics, and its place in society, but also an individual evaluation of the values and beliefs that can help each of us navigate this new era of technological advancement.