Read a thought-provoking interview about AI in education with Jon Ippolito, keynote speaker at the 2025 Teaching and Technology Summit.

Researcher and author Jon Ippolito will open Ball State’s Teaching and Technology Summit, March 20–21, with the keynote address, “Thinking or Shrinking? AI and Post-Citation Scholarship.”

Jon Ippolito is an artist, writer, and curator who teaches New Media and Digital Curation at the University of Maine. Winner of Tiffany, Lannan, American Foundation, and Thoma awards, Ippolito is co-founder of the Variable Media Network for preserving new media art, UMaine’s Digital Curation and Just-in-Time Learning programs, and Learning With AI, a toolkit for educators and students that makes it easy to filter for AI assignments and resources by discipline or purpose. Ippolito has given over 200 presentations, co-authored the books At the Edge of Art and Re-collection: Art, New Media, and Social Memory, and published 80 chapters and articles in periodicals from Artforum to the Washington Post. His AI work focuses on creators—writers, programmers, and media makers—and on how the technical, aesthetic, and legal ramifications of generative AI both empower and frustrate them.

I had a chance to connect with Jon ahead of the Teaching and Technology Summit, and in the interview below he shares his thoughts on AI in education. For additional context, some of my questions reference a Teaching in Higher Ed podcast episode; consider listening to it ahead of the Summit.

In the Teaching in Higher Ed podcast, you noted that “intelligence is broken” because of the rise of artificial intelligence. Could you tell us more?

I mean to say that advances in generative AI have broken the single concept of intelligence into separate shards, each with its own features and capabilities. When I was growing up, mastery of chess was a marker for supreme intellect, but after Deep Blue beat Garry Kasparov, the ability to outmaneuver an opponent in a game with predefined rules seemed a lot less universal as a measure of intelligence.

In the classroom, meanwhile, the gold standard for intelligence was having a way with words. If you could speak or write articulately about something you’d read—whether assessed in a middle school book report, college application, or PhD dissertation—you were intelligent. Now, we’re at a point where linguistic fluency is ubiquitous, and it’s not coming from people. 

So, the concept of intelligence has splintered again to include faculties beyond rhetorical skill. These include emotional intelligence—the ability to “read the room” and respond to emotive cues—and what my UMaine colleague Joline Blais calls maker intelligence—the ability to enter into a revelatory interaction with a creative medium. Machines still struggle with both—for now.

A type of intelligence that particularly interests me is creativity, which may be harder to define than the other variants. Is a toddler who’s picked up a crayon for the first time and touched it to paper creative? What about an office worker who designs an unconventional spreadsheet? Are spider webs and bird nests creative? Does creativity require intention? Can it be genetic, or is it a skill that has to be learned? 

Generative AI comes with a laundry list of downsides, which I’ve tried to inventory with the IMPACT RISK framework. But one of the upsides is that its arrival forces us to examine many of our preconceived notions about how intelligence might depend on things like consciousness, social interaction, or authenticity.

How can faculty “playfully tinker” with AI? In other words, what practical ways can instructors explore and evaluate the technology?

For a technology only a few years old, generative AI has already been remarkably polarizing. The backlash has been especially vehement among artists and their representatives, who have been suing AI companies to halt what they see as an unauthorized reuse of their work that is destined to replace them. AI seems to present creators with a Manichean choice: resist an AI takeover of creativity or jump on the AI bandwagon. Is there a third path? 

Some artists and designers are crossing their fingers and hoping the courts ban AI’s encroachment on their creativity and livelihoods. Others have submitted to AI’s inevitable encroachment into nearly all creative processes, trading Procreate for Leonardo and Pro Tools for Suno.

Artist Eryk Salvaggio and some of his peers offer a third path: misusing AI creatively to discover its potential and weaknesses. This strategy has a long history among artists working with new media. Charlie Chaplin created cinematic effects by cranking film the wrong way through the camera; Nam June Paik created dazzling geometries by lugging a magnet onto a TV set. Salvaggio’s creative misuse includes “red teaming” AI to find its weaknesses and perversely asking diffusion models to generate pure noise. We won’t find out what generative AI is capable of by following the owner’s manual. The most interesting AI artists today are bending models until they break, illuminating what’s inside the black box.

Of course, not every instructor has the creativity or confidence required for Salvaggio’s style of hacktivism, but we can all try some playful tinkering. One of my favorite exercises is to ask students to explore a sophisticated image generator like Leonardo.ai—even in writing classes. Rather than presenting an oracular answer to a question typed into a single prompt box, media-generating interfaces often let users tweak a variety of settings, such as temperature and model, and generate a variety of outputs by default. It’s also much easier to spot stereotypes and fabrications in images than in text; students may not know whether Napoleon won the Battle of Austerlitz, but they know he probably didn’t have six fingers.
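Instructors comfortable with a little scripting can run a version of this exercise programmatically. Below is a minimal sketch using the OpenAI Python SDK to pull several variants of a single prompt for side-by-side comparison; the model, prompt, and parameters are illustrative assumptions on my part, and Leonardo.ai’s own interface exposes different settings.

```python
# Minimal sketch: request several variants of one prompt so students can
# compare outputs for stereotypes and fabrications. Assumes the OpenAI
# Python SDK and an OPENAI_API_KEY in the environment; other providers
# expose different knobs (model choice, guidance scale, etc.).
from openai import OpenAI

client = OpenAI()

prompt = "Napoleon rallying troops at the Battle of Austerlitz, oil painting"

# dall-e-2 accepts n > 1, so one call returns multiple candidates.
result = client.images.generate(model="dall-e-2", prompt=prompt, n=4, size="512x512")

for i, image in enumerate(result.data, start=1):
    print(f"Variant {i}: {image.url}")
```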

People often try to categorize AI into dichotomies, such as good/bad or high-stakes/low-stakes uses. Is there a right approach or dichotomy for framing ethical uses of AI?

Yeah, I don’t buy the “high versus low stakes” test for whether to use AI. There are too many high-stakes situations where AI could be helpful (such as identifying potential drugs) and too many low-stakes situations where it is hurtful. Recent cases of the latter include Google’s retraction of its “Dear Sydney” Olympics commercial, in which a father encourages his daughter to use AI to write a fan letter, and the backlash against AI note-takers from Zoom participants who feel their words could be taken out of context. AI threatens to fray the social connections that might otherwise be forged in those moments of human contact.

The best way to navigate this ethical minefield, I suggest, is to distinguish between opportunistic and prescriptive tasks. Generative AI excels at opportunistic problems—those where trial and error can lead to success without major consequences for mistakes. These problems have many viable solutions, which suits AI’s probabilistic approach. In contrast, prescriptive problems require a single correct answer or a small set of them, and errors carry a high cost.  

The distinction plays out in security: a hacker only needs one weak spot to break into a system, while defenders must secure every entry point. It’s also evident in medicine, where AI can outperform human radiologists at spotting tumors. A few false positives may lead to unnecessary tests, but the chance of catching undetected cancer justifies the risk. However, when AI shifts from discovery to implementation—like determining a drug’s dosage—a miscalculation could be fatal. 

Education presents similar contrasts. Using AI as a Socratic tutor or for essay critique is an opportunistic task—it may generate insightful questions, and students can simply discard irrelevant ones. But grading is prescriptive. ChatGPT assigns grades probabilistically, in effect interpolating: if two students earned a C, then work that falls between theirs must also deserve a C. That logic erases divergent achievement and could penalize students at the margins of a population. When failure is cheap, AI’s exploratory nature is an asset. But when the stakes demand exactness, probability alone isn’t good enough.

You explain that AI has the potential to degrade human relationships—like the example of one of your students using AI to leave feedback on their peers’ work. Are there any uses of AI that promote connection in the classroom?

I think there’s potential here, but so far, I’ve personally been unable to realize it. 

The main arena in which I’ve attempted such experiments is my Introduction to New Media class at the University of Maine. In a multi-year collaboration with computer science faculty members Greg Nelson and Troy Schotter, I asked 100 students to complete eight tasks, first with conventional tools and then with AI. For example, after writing an essay in Week 1, students revisit the assignment but lean on GPT-4o for help brainstorming, drafting, and polishing a similar essay. Students then write a detailed comparison of the two drafts. The class repeats this process for designing logos, coding game avatars, creating illustrations, writing stories, recording soundscapes, and even grading homework.
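As a concrete illustration of what the AI-assisted pass involves, a minimal sketch of the brainstorming step might look like the following; the prompt wording is my assumption, not the actual course material.

```python
# Minimal sketch of the Week 1 AI-assisted revision: ask GPT-4o to help
# brainstorm angles for the same essay students first wrote unaided.
# Prompt text is illustrative, not the actual assignment.
from openai import OpenAI

client = OpenAI()

topic = "How generative AI is changing what counts as intelligence"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a brainstorming partner for a first-year essay."},
        {"role": "user", "content": f"Suggest five angles, each with a one-sentence rationale, for an essay on: {topic}"},
    ],
)

print(response.choices[0].message.content)
```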

One of the classroom tasks I’ve struggled with over the years is dividing students into effective teams. For this class, I created a survey including practical questions like “When can you meet outside of class?” and subjective questions like “What is the quality you value most in a friend?” Sadly, when I asked GPT-4o to create teams of three based on these affinities, it simplified the problem too much to be useful. It’s possible that some of the more recent chain-of-thought models might do better, but I think it’s important to document our failures as well as our successes. 
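For readers curious what such a request looks like in practice, here is a hypothetical sketch of handing survey responses to GPT-4o and asking for teams of three; the survey fields and prompt are my reconstruction, not the actual assignment, and per the interview this is exactly the kind of request the model tended to oversimplify.

```python
# Hypothetical sketch: pass survey responses to GPT-4o and ask for teams
# of three balanced across availability and affinities. Fields and prompt
# are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()

survey = [
    {"name": "Student A", "meets": "Tue/Thu evenings", "values": "honesty"},
    {"name": "Student B", "meets": "weekends", "values": "humor"},
    # ... one entry per student
]

prompt = (
    "Divide these students into teams of three. Every team needs an "
    "overlapping meeting time; beyond that, maximize diversity in the "
    "qualities they value. Explain each grouping.\n\n" + json.dumps(survey, indent=2)
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```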

Nevertheless, as I suggested before, introducing generative AI to students may be helpful as a pedagogical tool outside of its practical function. Those additional purposes might include inspiring a discussion about how AI impacts cheating or preparation for future careers. One of my favorite classes in the Intro to New Media course used a democratic process to distill AI ethics policies. Based on a “think-pair-merge” process suggested by Greg Nelson and Rotem Landesman, 50 students brainstormed AI policies individually, then paired up to debate, merge, or select one policy to advance. These pairs then joined larger groups to further refine policies, identifying overlaps and negotiating priorities. I was happy to sit back as groups of students held lively debates about which ethical guidelines were most important to promote to the next level.

We can also test the application of AI to student life outside of class. For the 2024 US election, I used a framework developed by John Swope to build an AI Microapp that draws a decision tree to help students choose when and where to vote.
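To give a flavor of what a microapp like this encodes, here is a toy decision tree in Python; the questions and advice are placeholder assumptions, not John Swope’s framework or the app’s real content.

```python
# Toy sketch of a voting decision tree like the one the microapp draws.
# Questions and advice are placeholders, not the app's actual logic.
tree = {
    "question": "Are you registered to vote?",
    "yes": {
        "question": "Will you be in town on election day?",
        "yes": "Vote in person at your assigned polling place.",
        "no": "Request an absentee or mail-in ballot before your state's deadline.",
    },
    "no": "Check your state's registration deadline; some states allow same-day registration.",
}

def walk(node):
    """Recursively ask questions until reaching a leaf recommendation."""
    if isinstance(node, str):
        print(node)
        return
    answer = input(node["question"] + " (yes/no) ").strip().lower()
    # Re-ask the same question on unrecognized input rather than crashing.
    walk(node[answer] if answer in ("yes", "no") else node)

walk(tree)
```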

Learn More from Jon Ippolito at the Summit

To engage with more of Ippolito’s thoughts on AI in education and see him speak, register for the virtual Teaching and Technology Summit on March 20 and 21, where he will deliver the opening keynote.

While you’re at it, check out the summit agenda and preview the other great presentations lined up for this year. We hope to see you there!

  • Cheri Madewell

    Cheri is the director of instructional consultation on the Teaching Innovation Team. Prior to joining the Division of Online and Strategic Learning, she was a faculty member in the Ball State Women’s and Gender Studies Program. Cheri’s background is in instructional design and technology, as well as leading international gender and LGBTQ grant projects.
