Approached cautiously, AI can help us teach and learn.
Our understanding of AI is strongly shaped by the recent success of ChatGPT and other chatbots. They mold a language model (which in principle could imitate any number of genres) into the familiar form of a question-answering utility like Siri or Google Search: ask a short question and get a short answer. An all-purpose question answerer is certainly easy to use. But professors want students to acquire habits of skeptical, dialogic inquiry, not a habit of relying on an oracle.

In an academic context, we should approach language models as engines for provisional reasoning: "calculators for words," as the British programmer Simon Willison calls them. Instead of assuming that the model already has an answer to every question in memory, this approach provides, in the prompt, any special assumptions or background knowledge the model will be expected to use. A student might hand a model a scientific article and ask for an explication of a difficult passage. A researcher might provide a thousand documents, one at a time, and ask the model to answer the same questions about each one. A writer might provide a draft essay and ask for advice about its structure. Used in this epistemically cautious way, AI has a valid, important role to play in education.

But if a cautious approach to AI is advisable, I don't mean to imply that it is easy to achieve. Students will be tempted instead to rely on models as oracles. And it will take work to define an appropriate level of caution. What counts as an assumption that needs to be offered provisionally in a prompt, and what counts as linguistic common sense that can be hard-coded in a model? The question is contestable. In fact, we will need to teach students to contest it. Students in every major will need to know how to challenge or defend the appropriateness of a given model for a given question.
To teach them how to do that, we don't need to hastily construct a new field called "critical AI studies." The intellectual resources students need are already present in history and philosophy of science courses, along with the disciplines of statistics and machine learning themselves, which are deeply self-conscious about their own epistemic procedures. Readings from all of those fields belong in a 21st-century core curriculum.
Ted Underwood is a professor of information science and English at the University of Illinois at Urbana-Champaign.