Evan Weinberg’s and Jay Goodman’s companion articles in this issue, and posted on LinkedIn, explore the tension between AI and human capacity and the implications for Problem-Based Learning (PBL) instruction. In the authors’ own words: Our process was simple: we knew we had some things to say about AI and PBL, and so we texted and talked and recorded everything. Then we went our separate ways to write without input from one another. Evan used AI as a co-author and collaborator to synthesize the conversations and pull through threads that may have been invisible during the conversation. Jay did not rely on AI, instead focusing on the parts of the conversation that were interesting, engaging, or personally challenging.
Both of these are very human approaches. Evan’s demonstrates the exact kind of approach Jay advocates for in his article. Jay’s represents a desire to hold onto the difficult parts of thinking at every level of the process. We’re not sure whether these articles will talk to each other or be so different as to feel grounded in completely different conversations. But we do know that if we, as educators, are not having the conversation and pushing how we think about the capabilities of AI, then we’re going to end up with the basest use cases showing up in our classrooms, and if that is where we end up, we have failed as educators. We’re both more hopeful than that. We believe we can leverage AI to explore ideas and solve problems that remind us of the joys and challenges of being human.
-Jay and Evan
The Artificial Intelligence (AI) in education opinion market is crowded. The argument I’m hearing and reading daily goes something like this: “if we believe thinking skills are important, we can’t let students off-load that work to a machine.” The next step in this argument usually leads to banning it, regulating it, or teaching it. The deep fear seems to be about protecting what it is to be human. “Being human,” with all the messiness of navigating conflicting approaches, working with people different from yourself, expressing creative ideas, and solving tough problems, is why I’ve been so drawn to Problem-Based Learning (PBL) for the last decade. The proliferation of AI has made me question the very purpose of education, and I’ve come to believe that we have an opportunity to push our thinking in a way that demands more of our humanity, not less.
Sometimes, in the middle of solving a problem, you realize you need some guidance that isn’t readily available. Maybe you’re replacing a car headlight, and there isn’t a mechanic around. Or doing your taxes, and don’t have an accountant in the house. Or, as I did a few months ago, you’re winterizing your water system but just can’t find a free plumber to come hang out with you all day.
In our problem-based learning project cycles, students encountered this all the time. They hit points where they needed someone, but that someone wasn’t available. This led to an interesting question: How can we use AI to humanize the PBL experience?
Some of you were already taking your knives out when I used “AI” and “humanize” in the same sentence. Probably fair. I do share some sense of dread around the sudden ubiquity of AI, particularly in education, where students don’t always have the tools to discern truth from fiction or reasoned opinion from sycophantic validation. But when I say humanize, I actually mean the opposite of what AI often does, which is create a frictionless experience that “solves” problems and regurgitates the simplest opinions. What I’m suggesting is using AI to introduce the very human quality of friction: to drive discussion, challenge ideas, and synthesize conversations in a way that pushes learning rather than short-circuiting it.
When properly prompted, AI is an excellent role-player. Our PBL teams have asked it to play a few roles for students:
Expert: Even in the best-developed projects under the best-case scenario, students rarely have access to more than one “expert in the field” for their project. But big ideas require expertise across disciplines (in fact, on both PBL teams that I’ve worked on, we explicitly designed problems that required transdisciplinary thinking). So what happens if you’ve made a connection with a video game developer who has given great guidance and feedback on game design, but now need to understand how to market the game and assess the economics of launching it on the App Store? This is where AI can become your marketing mentor, stepping in to provide support, challenge thinking, look for tension points in the process, and ask the sorts of questions that require a rich evaluation of the project.
Facilitator: AI is an unbiased listener and an expert synthesizer. This is the role of a facilitator in most meetings. But student groups don’t have facilitators, because good facilitation requires someone who can stand outside the conversation without inserting themselves. With AI, students can record a conversation, upload the transcript, and ask for a synthesis of the discussion, the big questions, the decisions that need to be made, and an agenda for the next meeting. Newer AI models even allow this to happen live, with a voice that can be actively engaged in the conversation if needed. This doesn’t offload student thinking; rather, it pushes the thinking they’re doing by asking nuanced questions they otherwise would not have considered.
Project Manager: The bigger and more ambitious a project becomes, the more management it requires. A trained bot can play this role. It can handle timelines and roles and divide complex tasks into smaller, manageable parts. It can check in on what work has been completed and rethink the approach to the problem. It can make suggestions about next steps. This can take the generic project approach that teachers have to use and amplify it, personalizing it for each group and letting students see their own progress and the impact of their work.
I’m calling these roles “humanizing” because they force students to do deeper, richer, very human work. They provide a support system for that work and guide the thinking without circumventing it. To say I’m completely comfortable with this, however, would be an overstatement. I worry about AI every day, in a deep existential way. I worry about job loss and what that will do to families. I worry about young people neglecting difficult thinking before their brains are fully developed. I worry about its impact on art and creativity and relationships and all the things that make us human. I worry about the motivations of the people running these companies. But I also worry that if we don’t find ways to let it support our most human capacities, it will start to cannibalize them, with weaker, easy ideas devouring complex, nuanced ones.
There’s an opportunity here to leverage a new and powerful technology to push student thinking. If we can get this right, it will be to the great benefit of our students. I tread cautiously into this, but I remain optimistic that, when AI is intentionally used as a support in problem-based learning, we will find ways to truly enrich our students’ experience.
Jay is an advocate for problem-based and community-based learning. He has 20 years of classroom experience, with 7 in PBL-specific contexts. He was a team member of the award-winning Innovation Institute at Shanghai American School (China), a founder of the Changemakers program at Nido de Aguilas (Chile), and holds an Ed.D. (Western University) in organizational change, with a focus on the development of PBL programming. He currently works as a consultant, collaborating with schools on the design and implementation of experiential learning.
Email: jay@goodmanlearning.com | Website: www.goodmanlearning.com | LinkedIn: www.linkedin.com/in/jay-goodman-edd