MAPping the Future
Column in INQUIRER

AI Misuse is Making Our Youth Ignorant and Deceitful

written by Dr. BENITO “Ben” L. TEEHANKEE - June 17, 2024

During the last academic term, after over 40 years of teaching, I noticed something remarkable: all my students’ term papers were written in impeccable English. This contrasted starkly with the usual mix of clarity and confusion I had come to expect.
And yet, while the flawless language was commendable, it raised a significant concern. As an educator, my role extends beyond assessing language proficiency; I must also evaluate critical thinking.
To probe their understanding, I asked my strategic management students questions about the concepts and arguments in their papers: “What did you mean by product differentiation on page 10?” “Why do you think buying shares of other companies would help achieve financial goals?” and “What evidence supports your recommendation to expand in China?” These questions are essential for developing critical thinking.
To my dismay, some students responded with vague explanations and blank stares. Further probing revealed that much of their work was generated by artificial intelligence (AI), not through their own efforts or understanding. This reliance on AI led to what I term the Stupefying Effect of AI. AI’s focus on fluency over factuality tempts students to substitute their own thinking with AI-generated content, often resulting in nonsense.
Worse still, some students were not honest about their use of AI, a phenomenon I call the Corrupting Effect of AI. While people openly use tools such as Waze or Grammarly, students often hide their reliance on AI.
This issue is not confined to students; professionals also fall into this trap. A lawyer in Colorado was suspended for using AI to generate fake case citations and then lying about it. In Australia, professors accused consultancy firms of misconduct based on false allegations generated by an AI chatbot.
This degradation of cognitive abilities and academic integrity is particularly concerning, given the current state of Philippine education. In 2019, Philippine Grade 4 students scored the lowest among 58 countries in the Trends in International Mathematics and Science Study.
In 2022, we didn’t do that well in the Programme for International Student Assessment either, with Filipino 15-year-old students ranking near the bottom among all participating countries in mathematics, reading and science. A senior education program specialist at the Department of Education observed that Filipino students trail behind their global counterparts by an average of five to six years’ worth of schooling.
Isn’t AI supposed to help us enhance our thinking abilities? Why is it diminishing my students’ thinking capacity and academic integrity? The answer lies in the design of popular chatbots, like ChatGPT, which prioritize language fluency and plausibility over factual accuracy. Academic study, however, is about the pursuit of truth. I fear that we are graduating a generation of chatbot-dependent nonsense spouters.
The Management Association of the Philippines’ Covenant for Shared Prosperity aims to enable all Filipinos to work together for an inclusive economy facilitated by enlightened businesses. Businesses cannot afford to have a generation of graduates who have been incapacitated by AI misuse. So, what can we do as teachers?
1. Educate on AI’s nature and proper use. We can teach our students how AI works. A deep understanding of large language models will dispel unrealistic expectations about AI-generated knowledge claims. AI developers must also disclose the reliability of their chatbots for knowledge use.
The most important point is that large language models are not knowledge bases; rather, they are statistical models of knowledge bases. This means that they are prone to random error. Unfortunately, AI developers do not disclose how much random error (known as hallucinations) their models produce when generating knowledge claims. At best, AI should be used to enhance student thinking, never to replace it.
2. Strengthen understanding of the nature of reality and knowledge. We must enhance our students’ understanding of the nature of reality, how knowledge is acquired, and the limitations of words in representing knowledge. This will help them differentiate among reality, perception and words. Large language models are trained mainly through words. They are not grounded in truth.
3. Clarify school AI use policy. We should explain our AI use policy to our students. In our department, AI use is allowed only if explicitly stated by the teacher and for specific purposes. Students must cite their AI use and include the AI-generated dialogue as part of their submission. This ensures transparency and focuses on the student’s thought process.
In 2016, Oxford Dictionaries named “post-truth” the Word of the Year. Post-truth describes circumstances in which objective facts are less influential than appeals to emotion, which can lead to widespread misinformation. Last year, Geoffrey Hinton, one of AI’s “godfathers,” warned that we might soon struggle to discern truth from falsehood. His prediction has come true. AI misuse has accelerated the post-truth era by adding tremendous quantities of fluent nonsense to the misinformation mix.
De La Salle University (DLSU) is developing a trans-disciplinary university degree program focusing on the applied philosophy of AI. The program is a collaboration of the computer science, philosophy and management departments. Graduates will be trained to serve as AI ethics and quality assurance evaluators, and AI governance advocates within organizations and in the public domain.
The path ahead is challenging for educators and all who value the truth, but we must persevere in nurturing our young people’s critical minds and moral compass. Our nation’s development depends on it.
This article reflects the personal opinion of the author and does not reflect the official stand of the Management Association of the Philippines or MAP. The author is chair of the MAP Shared Prosperity Committee and a Full Professor at DLSU.