Wikipedia never told me to eat rocks
Why would we accept answers from ChatGPT but not Wikipedia?
Hello, dear readers. It’s been a while, much longer than I intended. Between a new job, some writer’s block, and gestures at the state of the world, I’ve had a harder time than usual finding my focus. And then I got embarrassed by how long a break it had been, which, surprisingly, does not get better the longer I take to write. So in an attempt to get out of my head and writing again, I’m going to do my best to jump back in by writing about something that annoyed me today.
I attended a professional development workshop today which, unexpectedly, spent a lot of time talking about generative AI. If you know me, you know I have A Lot of Opinions about generative AI, but writing about them would take A Lot of Time, so I’m going to save that for another day.
One of the presenters began by saying he wanted to first define artificial intelligence, and in order to do so he had asked ChatGPT for a definition. Which… is a different choice than I would have made. He then mentioned that ChatGPT had given him a source for the definition, “but it was Wikipedia.” And many people in the room laughed, mostly at the idea that you could use Wikipedia as a source for anything.
I have a few problems with this approach to definitions and source evaluation. First, any student I’ve taught to evaluate sources will tell you that you can’t trust a source to tell you about itself. We go outside the source to verify information and reputation, for the same reason we check references when hiring or ask for letters of recommendation when applying to schools: we want to know what other people think about a source. Asking ChatGPT to define artificial intelligence is functionally the same as going to a website’s About Us page and expecting it to share unbiased opinions about its own reliability.
But, fine. I can maybe get past it as an attempt at a hook for a presentation. The definition itself wasn’t great, but it wasn’t the worst I’ve seen. The discounting of Wikipedia as a source of reliable information is what really got my hackles up. I need academics to update their understanding of Wikipedia and the way it works, because it’s a lot different than it was 20 years ago.
I teach students to use Wikipedia as a “stepping stone” source to build background knowledge, and I show them how to look at the Talk page to find out more about who’s writing the article and what they’re debating. Plenty of researchers in different fields have examined Wikipedia for accuracy and, by and large, they’ve found it to be a solid source of information. Is it perfect? No. Does it have a better track record than generative AI? It sure does.
Wikipedia also does what no generative AI tool does: it offers transparency about how it’s being written and edited, it cites sources, and it flags information that is in need of a citation. ChatGPT’s record on citing its sources is not great. I’ve worked with several students who come to the library looking for help locating a source that simply does not exist, and they’re not getting those fabricated citations from the reference sections on Wikipedia.
I have significant concerns about the largely uncritical adoption of generative AI tools in education, especially given the emerging research on AI tools, cognitive offloading, and critical thinking. I’d much rather have students reading Wikipedia and using the information there to build their own understanding of the world.
Thanks for reading! I miss writing as a way of working out my thoughts, so I’m hoping this has helped me get over my mental block and I’ll be sharing more soon.