When Wikipedia rose to prominence as an online encyclopedia, teachers everywhere began admonishing students not to use it as a primary source of information.
Some have gone so far as to ban students from even consulting Wikipedia as part of their research.
At best, that's a heavy-handed tactic to help their students build good research habits. But at worst, it prevents them from honing their B.S. detectors through interaction with a tool that can be very useful for research if you know how to use it correctly.
The cardinal rule you need to employ when using Wikipedia for research is to treat it as a way to find promising sources, rather than as a source itself. Or, to put it another way, Wikipedia is a lead-generation engine for reputable sources, but you've got to follow those leads.
The information presented in the entry itself should, according to Wikipedia's own standards, be backed up with citations, which are helpfully listed at the bottom of each page.
True, it takes some additional discernment to know whether a cited source is reputable, but with practice, you can cultivate that skill.
Over the past few months, I've enjoyed exploring ChatGPT's capabilities. But the more I've encountered its failings, particularly when it comes to factual assertions, the more I've come to realize how little tolerance we can afford to have for incorrect information if we want to do anything truly useful with the tool.
At first, I approached it the same way I approached Wikipedia: with a grain of salt.
I understood that because of the way LLMs (large language models) work, I had to assume that everything it said, however matter-of-factly, had the potential to be utter nonsense.
"Okay, fine, I can live with that," I thought, "just as I've lived with that limitation of Wikipedia for 20 years."
But then I asked ChatGPT to start citing its sources.
It's worth noting that by default, ChatGPT provides information in a conversational way, without attributing it, but you can add "Cite your sources" at the end of practically any prompt, and its answer will include its purported references.
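If you interact with the model through the API rather than the chat window, the same trick applies: just append the instruction to the prompt text. Here's a minimal sketch in Python, assuming the official OpenAI client is installed and an API key is configured; the model name and the question are placeholders. The reply will include the same kind of purported references you'd get in the chat interface.

```python
# Minimal sketch: ask ChatGPT a question and append "Cite your sources" to the prompt.
# Assumes the openai Python package (v1 or later) is installed and the
# OPENAI_API_KEY environment variable is set. The model name and the
# question below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

question = "When was the first transatlantic telegraph cable completed?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": question + " Cite your sources."}],
)

print(response.choices[0].message.content)
```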
I say "purported references" because I am not convinced that these sources are, in all cases, authentic. Some of the sources it has cited look like they could be legitimate URLs or legitimate research paper titles just as its answers look like they could be factual but when I examine those sources, they appear to be made up. I'm not just talking about reaching dead links, as one occasionally does with Wikipedia sources, but links to URLs that have never been cached or linked anywhere else.
I can't rule out the possibility that ChatGPT is merely composing what it thinks an authoritative URL might look like. And that's a huge problem, even if, as far as credibility goes, you're only treating ChatGPT the way you might treat Wikipedia.
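A quick way to spot-check this yourself is to take the URLs ChatGPT hands you and see whether they resolve at all. A dead link doesn't prove fabrication, and a live one doesn't prove the page actually supports the claim, but a batch of links that never existed is a strong warning sign. Here's a rough sketch using Python's third-party requests library; the URLs in the list are stand-ins for whatever citations you've been given.

```python
# Rough sketch: check whether a batch of cited URLs resolve at all.
# Assumes the third-party "requests" package is installed. The URLs below
# are placeholders; substitute the links ChatGPT actually gave you.
import requests

cited_urls = [
    "https://example.com/cited-article",         # placeholder
    "https://example.org/journal/vol12/issue3",  # placeholder
]

for url in cited_urls:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        result = f"HTTP {resp.status_code}"
    except requests.RequestException as exc:
        result = f"unreachable ({exc.__class__.__name__})"
    print(f"{url} -> {result}")
```

Even then, a link that resolves is only the first hurdle; you still have to read the source and confirm it says what ChatGPT claims it says.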
We are living in the Information Age, with incredible storehouses of information just a click or a tap away.
This is a treasure, but one that requires a considerable amount of upkeep.
One of the chief threats to this treasure is misinformation, which, left uncountered, will grow like weeds and choke out the garden.
And misinformation comes in many forms, with varying degrees of intentionality.
My worry is that ChatGPT has some "dark patterns," including the lack of attribution, that make it harder for us to pick up on its limitations and lead us to give it more credence than it deserves.