Relentless advancement to produce new gen of seppos
I asked Wendy if I could read the paper she turned in, and when I opened the document, I was surprised to see the topic: critical pedagogy, the philosophy of education pioneered by Paulo Freire. The philosophy examines the influence of social and political forces on learning and classroom dynamics. Her opening line: “To what extent is schooling hindering students’ cognitive ability to think critically?” Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.”
Eh, I dunno about that. The plagiarism machine would probably also be good at, say, undergrad math or compsci, because the work is usually fairly simple but necessary to get into the deeper stuff. This kind of cheating on an industrial scale seems dangerous.
I think it’s probably pretty bad at both of those things when it comes to what should actually be evaluated, which is students’ understanding of concepts, not just recall. If you ask for the complexity of some algorithm, an LLM will try to find a pattern that matches the kinds of answers it has already seen. It might get the answer right because it has digested a hundred examples like it before and matched the input to them. But if you ask students to actually explain their reasoning and walk through it step by step, and throw in a modification to the algorithm that changes the answer, the LLM is likely to fail in some way.
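To make the "modification that changes the answer" concrete, here's a toy pair I made up (the names and setup are mine, not from any real exam): two searches over a sorted list with nearly identical loop structure, where one small change turns O(log n) into O(n). Pattern-matching on the shape of the code gets you the wrong answer; counting the steps doesn't.

```python
def binary_search(a, x):
    """Standard halving search: the range shrinks by half each
    iteration, so the step count grows like log2(n)."""
    lo, hi, steps = 0, len(a), 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if a[mid] < x:
            lo = mid + 1
        else:
            hi = mid
    return lo, steps

def linear_probe(a, x):
    """Same loop shape, but the range shrinks by one slot per
    iteration instead of halving, so step count grows like n."""
    lo, hi, steps = 0, len(a), 0
    while lo < hi:
        steps += 1
        if a[lo] < x:
            lo += 1
        else:
            hi = lo
    return lo, steps

if __name__ == "__main__":
    a = list(range(1_000_000))
    idx, steps = binary_search(a, 999_999)
    print(idx, steps)   # a couple dozen steps
    idx, steps = linear_probe(a, 999_999)
    print(idx, steps)   # on the order of a million steps
```

Both functions return the same index; only the iteration counts diverge, which is exactly the kind of step-by-step reasoning a student can show and a pattern-matcher can fake.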
Though really, what should be graded are proctored evaluations like tests. Homework should be for learning and practice, not a grade.