Discussion Requirements: AI and Education
Review the following resources:
- Columbia perspectives on ChatGPT
- Is ChatGPT a threat to education?
- The future of artificial intelligence requires sociology
Discussion Questions:
- What are the limitations of using AI like ChatGPT, and how might these impact education and society?
- Why do some social scientists think AI is prone to systemic racism and social bias?
- If you were to write a classroom policy on the use of ChatGPT, what ethical considerations would you include to promote learning?
Learning in the Age of AI: It’s Still Up to You
By Edmond Leaveck - Posted Aug 22, 2024
"The machines are rising, but don’t worry—they still can't fold laundry." - Unknown
The largest limitation of large language models (LLMs) such as OpenAI's ChatGPT, in my opinion, is the current lack of controls on accuracy. This concern is echoed by some of the researchers and professors interviewed at Columbia University (Choi, 2023). However, simply saying "lack of accuracy" doesn't fully capture the range of issues. First, what is commonly referred to as a "hallucination" is a significant problem. Hallucinations occur when the model works through a logical problem incorrectly or references something that doesn't exist. For example, a framework developer on the platform X.com described an instance in which ChatGPT confidently recommended to an end user a library that doesn't actually exist.
The second major concern is the dissemination of information on contested subjects. Because LLMs are trained on large datasets scraped from the internet, any viewpoint that is sufficiently discussed can drown out other viewpoints that don't receive the same level of attention. This issue is not a hallucination; the information is correct for that particular viewpoint. Take economics as an example: most modern economics courses heavily favor a Keynesian approach. An LLM treating that as its default knowledge set for what is correct in economics isn't wrong in the sense of stating false information, but it is problematic, because other economic theories make different prescriptions or predictions while being equally grounded in fact. Both may be technically defensible, yet only one is the prevailing, politically accepted approach. I'm torn on which of these, hallucinations or the tendency to treat the most-discussed view as the most true, I consider the larger issue.
The impact of these falsehoods, misrepresentations, or hallucinations on education is significant. The surface-level appeal of an LLM as a 'personalized' instructor is alluring, but without cross-referencing materials, it can become a net negative to the learning experience. For instance, an LLM can be helpful when learning a programming language, but without frequent cross-referencing against the official documentation, a student could end up learning syntax or an implementation that doesn't exist or doesn't work in the framework or language being studied.
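That cross-referencing habit can even be partly automated. As a minimal sketch in Python (the helper name and the hallucinated library name below are my own inventions for illustration), one can at least confirm that a module and function an LLM suggests actually exist before building on them:

```python
import importlib

def suggestion_exists(module_name, attr=None):
    """Return True if the suggested module (and optional attribute) really exists."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return attr is None or hasattr(module, attr)

# A real library and function pass the check...
print(suggestion_exists("json", "dumps"))        # True
# ...while a hallucinated library ("quantum_json_turbo" is made up) fails it.
print(suggestion_exists("quantum_json_turbo"))   # False
```

A check like this only catches the crudest hallucinations, of course; whether the suggested code is idiomatic or correct still requires reading the documentation.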
Despite my skepticism about the usefulness of current LLM models to adequately support quality educational learning, I believe that LLMs can be beneficial to society as a whole. My reasoning is that individuals who are inclined to deep dive into subjects and become experts will continue to work through the learning process in the same manner, with an LLM serving as just another tool. Meanwhile, those who rely solely on an LLM—whether as a sole source or to produce their work—will likely have the same educational outcome they would have had otherwise; the tool used to shortcut the process simply changes. As Pittalwala (2023) discusses, academic dishonesty is a concern, but plagiarism isn't a new phenomenon with LLMs—the control measures may just need updating.
In a more positive light, LLMs can serve as a great starting point for those venturing into new subjects. For example, when I used an LLM to start creating a game in the Unity game engine, it helped break down a daunting task into more manageable steps. Recognizing the limitations of the LLM, I deliberately worked to build my own knowledge base, effectively “working the LLM out of a job.” It provided a useful starting point and accelerated my learning, which I estimate saved me months of struggle.
While Stortz (2021) discusses the problem of LLMs learning from datasets or algorithms that may contain embedded biases, I disagree with the blanket critique that all biases, regardless of their nature, are negative and need to be removed. For instance, one example cited is that ZIP code data might inadvertently encode information about race, potentially leading to biased outcomes. That critique, however, rests on the premise that decisions ought never to vary based on characteristics that certain ideological viewpoints argue shouldn't be used.
To illustrate my disagreement, consider the ZIP code example. If an AI analyzes massive datasets and finds that a certain ZIP code has lower overall wealth and a particular racial makeup, and that this combination is associated with a specific health outcome, it might focus health-related education more intently on that ZIP code. This is indeed a bias, but I don't believe it is a negative one; it is a positive action that directs resources where they are most needed.
Pittalwala (2023) suggests implementing clear usage guidelines, while researchers at Columbia University (Choi, 2023) recommend critically evaluating AI-generated content. Stortz (2021) also emphasizes the importance of education and guidelines on ethical usage. While these considerations are useful, they fall short. If we replace ChatGPT/LLM/AI with the more general term "plagiarism," we see that these considerations are already in place.
My reactive answer to the problem is easy to say but difficult to implement: design assignments and requirements so that even if AI tools are completely allowed, the answer cannot be derived simply by asking an LLM to do it for you. My experience with a calculus course illustrates this point. Even before LLMs became widely known, online services like Mathway or Wolfram Alpha could solve complex formulas step by step. The professor assumed students would use such tools and designed problems that required an understanding of the material to determine which formula to use and when.

Solving a complex calculus problem often involves multiple steps, such as deciding whether to apply differentiation, integration, or polynomial expansion. Simply using an online tool or calculator might allow a student to compute the derivative or integral once the function is entered, but the real challenge lies in setting up the problem correctly. The student must first understand the nature of the problem: is it asking for the area under a curve, which would require integration, or for the rate of change at a specific point, which would involve differentiation? Recognizing these nuances and knowing which calculus technique to apply is crucial, and this understanding cannot be bypassed by relying on a tool to do the calculation.
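The calculus point can be made concrete in code. In this pure-Python sketch (the helper functions are my own, using textbook central-difference and trapezoidal approximations), the tool happily produces either answer on demand; deciding whether the question calls for a rate of change or an area under a curve is still the student's work:

```python
def derivative(f, x, h=1e-6):
    # Rate of change at a point: central-difference approximation.
    return (f(x + h) - f(x - h)) / (2 * h)

def integral(f, a, b, n=10_000):
    # Area under the curve: trapezoidal rule with n slices.
    w = (b - a) / n
    return sum((f(a + i * w) + f(a + (i + 1) * w)) * w / 2 for i in range(n))

f = lambda x: x**2

# The tool computes either answer on demand...
print(round(derivative(f, 3), 4))   # ≈ 6.0, the rate of change at x = 3
print(round(integral(f, 0, 3), 4))  # ≈ 9.0, the area under the curve from 0 to 3
```

The computation is trivial for the machine; knowing which of the two calls answers the question being asked is exactly the understanding the professor's problem design was testing.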
In this class, however, where we review literature and write about what we have learned or synthesize the ideas, LLMs can easily provide a straightforward solution with minimal student effort. I find myself struggling to identify a solution outside of personal discipline to get the most out of the educational process.
I recognize this paragraph may lean toward cynicism, and perhaps defeatism, about solving the educational problem of AI usage. If LLMs are here to stay, just as paper was once seen as an atrocity against learning to properly use a chalkboard a hundred years ago, and given the shift toward work that requires a computer, I don't see an immediate issue with assuming some coursework will simply require access to an LLM, much like this course requires the student to have a computer, electricity, and internet access. This does create the potential for degrees earned online, versus in person, to carry an asterisk that hiring processes must contend with; handwriting this very essay under supervision, for example, is something only in-person attendance could facilitate. Should society accept that, regardless of student effort, a degree earned through online-only classes like this one will always be viewed with more skepticism than an in-person degree? I don't see this as an emergency, and it may even be acceptable. In the tech world, job searches already include in-person or screen-share video interviews with spot checks on knowledge. As a final note, I think the first part of each class should simply remind students that if they choose to skate by on an LLM, their future employment could suffer under updated hiring processes, and they can choose to act accordingly. Personal agency has always been, and will continue to be, the crucial criterion in life, not only in academia.
- EJ
References
- Choi, C. Q. (2023, February 9). Columbia perspectives on ChatGPT. Columbia University Data Science Institute. https://datascience.columbia.edu/news/2023/columbia-perspectives-on-chatgpt/
- Pittalwala, I. (2023, January 24). Is ChatGPT a threat to education? UC Riverside News. https://news.ucr.edu/articles/2023/01/24/chatgpt-threat-education
- Stortz, E. (2021, April 13). The future of artificial intelligence requires the guidance of sociology. Drexel News. https://drexel.edu/news/archive/2021/april/the-future-of-artificial-intelligence-requires-sociology