Are We Going To Be Outsmarted By AI In 2024?
"eLaborate" is not a word. eLearning is. But, if it were, it would describe the digital, online effort of developing or presenting an idea or question in detail. In biology, elaboration also means producing (a substance) from its elements or simpler constituents.
Are AI-Driven Sophisticated Digital Tools Going To Make Us Stupid In 2024?
Hold on to your "yes" or "no" answer. We'll get back to it. First, let's eLaborate on this question: is AI making us stupid?
The Naked Architect In The Desert Syndrome
If you've read my book, you may know the story of the naked architect in the desert. Let me eLaborate for those who have not yet had the chance to be exposed to it (I mean to the book, not the naked architect). Ages ago, I spent three weeks at college learning to be an architect. One of the reasons I left after three weeks was the naked architect in the desert syndrome.
I still remember this professor starting the semester by declaring that we were architects through our heads and hands, not through our tools. He also vowed to make sure we either ran away from the school or memorized and learned everything, so we would be fully equipped to design anything, even naked in the middle of a desert.
Be careful! Once you visualize a naked architect in the middle of the desert, you can't unsee it. But it made me wonder that day... let's say I stuck with the program and became an architect, what then? How often would I have to design something naked in the middle of a desert?
It's Not Literal, Stupid!
I got it! The message was that we must not rely on tools. We needed to follow the same traditional path that experts of previous generations did. Learn, drill, memorize, repeat... But it also made me wonder about flexibility in an evolving world. Let me eLaborate.
If I memorize everything and don't rely on tools, wouldn't I be designing the same way for the rest of my life? Designing the same things? Without evolving? Without being more efficient and effective in the changing world? So, I left. Never looked back at the desert with the naked architect.
Are AI-Driven Digital Tools Making Us Stupid?
You may not remember those times, but people were fearful of electric typewriters, calculators, and computers as well. If tools are doing our job, would we just become tools ourselves? If I can't do math in my head, would I be cheated at the register all the time? Should I memorize all the phone numbers in case I lose my phone? Is AI making us stupid?
At another college (which I did complete, for my computer science degree), there was a professor who did all his exams open-book style. We were allowed to use any resources we wanted in the room. He said he wasn't interested in our memorization capabilities, but in our problem-solving skills using limited information and no clear, correct answers.
The challenges he gave us made us think. Think a lot. Think differently, with tools. I learned from that experience that I needed the foundations to think beyond the basic level. While it was an open-book exam, there was no time to look up everything, let alone to explore creative ways of solving the problem. If I asked stupid questions, the tools gave me stupid answers. That one class alone made college worth attending.
Two professors with completely different ideologies. They were both extremely smart and skilled, by the way, and both became experts the traditional way. Yet they had different views on how to help novices learn the foundations. If using smart tools makes us stupid, how do we help novices learn when the tools keep getting smarter and easier to use?
The very same question is being asked today about AI in Engines of Engagement: A Curious Book About Generative AI:
How do we help novices mature their knowledge and skills when the foundational components on which they’re built can be accomplished effortlessly by digital tools?
If the AI-powered digital tools we have today can help us accomplish our tasks effortlessly, do we need to follow the learning paths of the previous generations? Do we need to learn anything at all?
The World Is An Open-Book Exam
What if we think of today's world as an open-book exam, rather than measuring smartness by testing architects naked in the desert?
- You can use all the tools and resources available
- You have limited time to solve complex problems
- You don't have all the data to make fully informed decisions
- You're measured by the impact of your output, and not the effort of your input
Doesn't it sound like reality? Up until now, I think I've been inclined to say "no" to the original question of whether AI is making us stupid.
Let's eLaborate On What Can Go Wrong!
What are some dangers of relying on digital tools to do our job?
Coding, For Example
If you're using AI to help you code, what happens when you're offline? What if you're naked in the middle of the desert and you're asked to iterate over an array using forEach in JavaScript? Would you remember the exact syntax? Or how to combine map and filter in Python?
I always forget the exact syntax since languages implement these very differently. But is this the biggest obstacle I have? No. I understand the "why" and the concept. I've already got the foundations down. I just need to know the "how." And so, this is less about memorizing everything or nothing. We can all decide individually how much reliance we should have on specific tools. It is not a binary decision once you've got the foundations down.
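To make that concrete, here is a minimal Python sketch of the "why" (keep the even numbers, then square them) expressed two ways; the JavaScript equivalent appears only in a comment. The task and numbers are invented for illustration; the point is that the concept is the same everywhere, while the "how" differs by language.

```python
numbers = [1, 2, 3, 4, 5, 6]

# Functional style: combine filter and map.
# (JavaScript would chain the same idea as:
#   numbers.filter(n => n % 2 === 0).map(n => n ** 2))
evens_squared = list(map(lambda n: n ** 2, filter(lambda n: n % 2 == 0, numbers)))

# Idiomatic Python usually prefers a list comprehension for the same result.
evens_squared_alt = [n ** 2 for n in numbers if n % 2 == 0]

print(evens_squared)      # [4, 16, 36]
print(evens_squared_alt)  # [4, 16, 36]
```

Whether you remember the lambda-and-filter spelling or the comprehension spelling matters far less than knowing that filtering and mapping are the tools you need; the exact syntax is a quick lookup away.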
What About Spelling?
Are you using autocorrect on your phone? Or Grammarly in your writing? Would you be able to spell everything correctly and use perfect grammar naked in the desert? If you're a spelling bee champion, you love to memorize spelling. For the rest of us, getting close enough is enough. Again, the question is not a choice between spelling everything and spelling nothing. But we do need to understand the foundations of language and grammar. If we treat the world as a series of binary choices and let the smart tools outsmart us, the end game will more likely be our doom, with AI making us stupid.
With machines doing all our daily mental tasks for us, our brains will become literally thoughtless, our minds a haven for endless daydreaming. We will become spiritually moribund.
The World Is Not Binary
The world is not binary. It's not "working naked in the desert" or "completely surrendering to tools." It is a scale. It is less a yes/no question on a test and more a sliding scale of "how much," especially when we do not have all the information to make a data-informed decision.
Why do humans love to simplify the world into binary boxes, then? Is it because it takes less energy to pick between two options than to understand the consequences and nuances of everything in between? Or maybe it is simply based on how the data is presented to us. Apparently, we're doomed. Binary bias is real, and it is pervasive:
The fact that the bias is so pervasive suggests that it is not due to a specific feature of data visualization or statistical information but is instead a general cognitive illusion.
The binary bias distorts belief formation—such that when people aggregate conflicting scientific reports, they attend to valence and inaccurately weight the extremity of the evidence. The same effect occurs when people interpret popular forms of data visualization, and it cannot be explained by other statistical features of the stimuli. This effect is not confined to explicit statistical estimates; it also influences how people use data to make health, financial, and public-policy decisions.
Being a "data guy," for me this is alarming. Our binary bias distorts the view of continuous data into arbitrary buckets of in or out. Naked in the desert or stupid at work.
Participants seemed to collapse data into two categories, whether they were evaluating menu prices or determining which factories had higher carbon dioxide output.
Okay, maybe if we use a different type of chart to convey the insights? Other studies showed that the type of data visualization did not matter. Bias is bias.
Further evidence for the impact of imbalance score on participants' estimates emerged in two additional online studies, in which people saw data presented in various forms, including vertical and horizontal bar charts, pie charts, verbal descriptions with or without percentages, and dot plots.
Research also shows that we are skewed toward the first evidence we encounter, as we build up an imbalance score between weak and strong positive or negative indicators.
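To make the distortion concrete, here is a minimal, purely illustrative Python sketch (the numbers are invented): a set of barely positive reports and a set of strongly positive reports collapse into the very same binary summary once we only count which side of a threshold each one falls on.

```python
# Illustrative numbers only: three barely positive vs. three strongly positive
# reports on a 0-to-1 scale.
weak_positive = [0.55, 0.52, 0.58]
strong_positive = [0.95, 0.97, 0.92]

def binary_summary(scores, threshold=0.5):
    """Count how many reports land in the 'positive' bucket."""
    return sum(score > threshold for score in scores)

# Both sets collapse into the same binary summary: 3 positive, 0 negative...
print(binary_summary(weak_positive), binary_summary(strong_positive))  # 3 3

# ...even though the strength of the evidence is very different.
print(sum(weak_positive) / len(weak_positive))      # ~0.55
print(sum(strong_positive) / len(strong_positive))  # ~0.95
```

The binary view reports both cases identically; the continuous view shows how much stronger one body of evidence is than the other.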
Fine. We're Doomed. What's The Harm?
What's the danger of binary bias? It can be used to drive an agenda and manipulate the audience. Next time you see survey data comparing "gamified vs. non-gamified" courses or "microlearning vs. traditional learning" approaches, think about the intent and the methodology used. Is it really a binary comparison, or is it serving a purpose? Are they really comparing two apples, or are they putting all the rotten fruit in one bucket and contrasting it with the fresh, shiny apple on the organic tree? (And coincidentally, they own an orchard.) After this long eLaboration, let's get back to our original question: are AI-driven sophisticated digital tools going to make us stupid in 2024?
Maybe there's no correct yes or no answer to the question "Is AI making us stupid?" Once we stop treating it as a binary yes-or-no problem, we may find some practical answers. Reliance on digital tools to help us do our jobs is certainly making us less capable of being productive naked in the desert. But it is not an either-or question.
The question is how much reliance is good enough for you. You decide what is worth learning and what is not. Don't let others spell out all the words of your story for you. Write your own narrative! Think big, start small, and iterate. In a less doomsday vision, algorithms can clear our brains of unnecessary information, giving us more space to think and more space to eLaborate.
Which is why our modern minds, once they have been purged of all that today’s algorithms might now deem unnecessary information, will be as ready as theirs were to think, to inquire, to wonder, to contemplate, to imagine, to create.
How Do We Help The Workforce Navigate The Journey Towards Using AI The Smart Way?
Asking "stupid" questions will produce stupid answers by smart tools. To mitigate that, we need to have the foundations down before we accelerate and scale our output using AI. How do we help novices in the workforce learn the foundations on the scale of "naked architect in the desert" at one end and "let smart tools completely outsmart us" at the other? Here are some ideas:
1. Understanding The Foundations
Workers must understand the fundamentals of their field, even if AI can handle many tasks. This might involve learning the principles and theories behind what the AI is doing. For instance, a novice graphic designer should understand color theory and composition, even if AI can create the designs. This is why data literacy is critical. Lately, more examples of AI literacy (which must include data literacy) have been appearing online.
2. Critical Thinking And Problem-Solving Skills
Focus on developing critical thinking and problem-solving skills. AI can provide solutions, but understanding why a solution works or how to adapt it to a new problem is key. Encouraging workers to analyze AI-generated solutions and explore alternatives can be beneficial. It used to take a long time to create viable alternative solutions. Now, workers can still come up with their best solution but also ask AI to provide alternatives simultaneously.
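As one possible way to put this into practice, here is a minimal sketch assuming the OpenAI Python client (any other provider, or an internal assistant, would work the same way); the model name, prompt, and sample solution are placeholders, not a recommendation.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# The worker's own best solution, produced first, without AI.
my_solution = "Cache the weekly report data overnight so the dashboard loads instantly."

# Ask the model to act as a constructive challenger and propose alternatives.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a constructive challenger."},
        {
            "role": "user",
            "content": (
                "Here is my proposed solution:\n"
                f"{my_solution}\n\n"
                "Suggest three alternative approaches and name one risk for each."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The worker still does the thinking first; the tool's job is to widen the set of options to analyze, not to replace the analysis.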
3. Ethical And Responsible Use Of AI
Teach the ethical implications and responsible use of AI. Understanding the limitations and biases of AI tools is essential. Workers should know where AI can go wrong and how to mitigate these issues.
4. Hands-On Experience
Provide opportunities for hands-on experience where workers must accomplish fundamental tasks without relying solely on AI. This could involve setting projects or challenges that require a mix of AI and human input.
5. Learning To Collaborate With AI
Teach how to incorporate AI tools into the process effectively. This involves understanding the capabilities and limitations of AI and knowing how to integrate its assistance into a broader workflow.
6. Continual Learning And Adaptability
Emphasize the importance of continual learning and adaptability. AI constantly evolves, so staying updated with the latest developments and understanding how they impact the field is crucial.
7. Creativity And Innovation
Use AI to accelerate and scale creativity and innovation. AI can handle many tasks, but unique, creative ideas often come from humans or the dialogue between humans and AI. AI can play the role of trusted advisor, but also that of the constructive challenger. For example, workers can use AI to poke holes and identify dependencies in an idea, product, or process.
8. Interpersonal Skills
Focus on interpersonal skills, stakeholder management, leadership, collaboration, and teamwork. These are crucial in most fields and can complement the technical skills where AI is used. AI can also simulate specific roles for practice at scale.
9. Mentorship, Shadowing, Apprenticeship
Provide opportunities to learn skills from others through examples. Experienced professionals can offer insights and guidance, helping novices understand the nuances of their field.
10. Balancing AI Use With Traditional Methods
Teach a balanced approach to using AI and traditional methods. Understanding when to use AI and when to rely on traditional techniques is a valuable skill. Think big, start small, and iterate.
References:
- Association for Psychological Science. "Binary bias distorts how we integrate information." ScienceDaily, October 25, 2018.
- Banaji, M.R. and L. Heiphetz. 2010. "Attitudes." In Handbook of Social Psychology. Fifth Ed., edited by S. T. Fiske, D. T. Gilbert, and G. Lindzey, Vol. 1, 353-93. Hoboken, NJ: John Wiley.
- Fisher, M., and F. C. Keil. 2018. "The Binary Bias: A Systematic Distortion in the Integration of Information." Psychological Science 29 (11): 1846-58.
- Merriam-Webster Dictionary, s.v. "elaborate."
- Winchester, Simon. "The big idea: will AI make us stupid?" The Guardian, June 19, 2023.
Originally published at www.linkedin.com.