AI Can Build Your PBL Program. It Can't Tell You Why You Need One
Evan Weinberg and Jay Goodman have written companion articles addressing the tension between AI and human capacity and the implications for Project/Problem Based Learning (PBL) instruction. Jay Goodman's article is published on the TRC website, which you can find at this link. In their own words: The process was simple: we knew we had some things to say about AI and PBL, so we texted and talked and recorded everything. Then we went our separate ways to write without input from one another. Evan used AI as a co-author and collaborator to synthesize the conversations and pull through threads that may have been invisible during the conversation. Jay did not rely on AI, instead focusing on the parts of the conversation that were interesting, engaging, or personally challenging.
Both of these are very human approaches. Evan's demonstrates the exact kind of approach Jay advocates for in his article. Jay's represents a desire to hold onto the difficult parts of thinking at every level of the process. We're not sure if these articles will talk to each other, or be so different as to feel grounded in completely different conversations. But we do know that if we, as educators, are not having the conversation and pushing how we think about the capabilities of AI, then we're going to end up with the basest use cases showing up in our classrooms, and if that is where we end up, we have failed as educators. We're both more hopeful than that. We believe we can leverage AI to explore ideas and solve problems that remind us of the joys and challenges of being human.
I experiment with Artificial Intelligence (AI) agents. I define custom instructions, context documents, and workflows that let large language models do substantial work. Drafting curriculum, generating student materials, prototyping entire units. Tools like Claude and ChatGPT with project contexts and automation workflows can now do in minutes what used to take weeks.
I can generate a full scope and sequence for a problem-based learning unit with the click of a button. Day-by-day lesson plans, proficiency scales, student-facing materials, assessment rubrics…all of it. The tools are there, and the quality is genuinely good. It's only going to get better. An automated AI workflow with a good context can generate a draft program structure, a sample unit, a set of assessment criteria, all in a very short time. Not perfect, but concrete enough to put in front of your administration and watch them react.
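To make that concrete, here is a minimal sketch of what one step in such a workflow can look like, assuming the Anthropic Python SDK. The file name, model string, and prompt wording are placeholders for illustration, not a recipe:

```python
# A minimal sketch of one generation step, not a production workflow.
# Assumes the Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment. File name, model string, and
# prompt text are all illustrative placeholders.
import anthropic

# The context document is the hard-won part: mission, constraints,
# non-negotiables, written down in plain language before any generation.
with open("school_context.md") as f:
    school_context = f.read()

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whatever model you have
    max_tokens=4096,
    system=school_context,      # clarity in here determines quality out
    messages=[{
        "role": "user",
        "content": (
            "Draft a scope and sequence for a six-week, problem-based "
            "tenth-grade science unit: day-by-day lesson plans, "
            "proficiency scales, and assessment rubrics."
        ),
    }],
)

print(response.content[0].text)  # a prototype to put in front of admin
```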
That’s the value: generating fast prototypes that force the real conversations to happen sooner. It means accelerating the path to "no." Because getting to "no" is useful. "We can't take tenth graders off campus" is useful. "Who's funding this?" is useful. Every objection you surface before launch is one less crisis during implementation.
But the quality of what comes out is entirely dependent on the clarity I bring to what goes in.
Clarity is the actual skill. What I've gotten good at is defining what I'm looking for, which constraints matter, and what the end product should accomplish. The prompting is easy. Defining the context is hard.
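What does a context look like when it's actually written down? Here is a hypothetical sketch, as plain data; every field and value is invented to show the shape of the thing, not to prescribe a schema:

```python
# A hypothetical sketch of what a written-down school context might contain.
# Every field name and value is illustrative; there is no standard schema.
SCHOOL_CONTEXT = {
    "mission": "Students learn by solving real problems for real audiences.",
    "non_negotiables": [
        "Every unit ends in a public product reviewed by an outside audience.",
        "Assessment uses the existing proficiency scales, not new ones.",
    ],
    "constraints": {
        "schedule": "75-minute blocks on an A/B rotation",
        "student_travel": "Off-campus work requires consent two weeks ahead",
        "funding": "No per-unit budget beyond the department's existing one",
    },
}
```

Notice that nothing in it is hard to type. It's hard to agree on.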
Schools don't often have this clarity. Most can't articulate their non-negotiables in a way that would help a new teacher navigate their first week. If you can't onboard a human, you definitely can't onboard an AI agent.
The uncomfortable truth is that schools are really bad at deciding what they stand for. They have a mission statement, sure. But can they articulate the concrete points that come from that mission, the ones that help when decisions get tough? They're bad at identifying the internal conflicts between what the English department thinks matters and what the science department thinks matters. They're bad at articulating why they want a PBL program in the first place. And they're especially bad at naming their non-negotiables among the many variables that sit on a school leader's desk. These are the things that, if violated, would make any program a failure regardless of how innovative it looks on paper.
AI can't fix this. Neither can a purchased curriculum. The foundational documents that would make an AI agent effective (the mission alignment, the scheduling constraints, the policies on student travel, the funding model) are the same documents most schools have never written down for anyone, human or otherwise.
My co-writer Jay told me about a contract he was working on that got pushed back. Why? Because the head of school told the person championing the initiative that they needed to "do the foundational work" before bringing in outside help.
On the surface, that might sound reasonable.
In reality, this head of school asked the principal to figure out the school's mission alignment, scheduling constraints, parent communication strategy, funding model, off-campus policies, and cross-departmental buy-in, all by themselves. These are things that require facilitated conversations with stakeholders who don't agree with each other, and some may have been kicked down the road for years because they're hard and politically uncomfortable.
That principal is going to fail. And when they do, the school will blame PBL, or blame AI, or blame the consultant they eventually hire, rather than the dysfunction that made implementation impossible from the start.
You want a PBL program? Great. Generate a prototype. Show it to your administration. The objections were always going to come up. AI just got you there faster. And faster matters, because every month you spend designing in a vacuum is a month you're not surfacing the conflicts that will kill your program when it actually launches.
The irony is that the schools most excited about AI are often the ones least prepared to use it. They want the tool to do the thinking for them. They want a curriculum that just works, that slots into their existing structures without requiring anyone to make hard choices.
That curriculum doesn't exist. Not from AI, not from a textbook company, not from the most expensive consultant in the world.
What exists is a process. It starts with a room full of people who don't entirely agree with each other, facilitated by someone who knows how to surface those disagreements productively. It continues with decisions. Actual, documented, "we're doing this and not that" decisions about what the school values and what it's willing to sacrifice. And it ends with implementation, which is where AI and all the other tools finally become useful.
Skip to the end, and you get beautiful documents that no one will use.
If you're thinking about launching a PBL program, ask yourself this: Can you articulate, in one sentence, why your school wants this? Not "because it's innovative" or "because colleges like it." Why your school, with your students, in your context?
If you can't answer that question, don't open ChatGPT. Don't buy a curriculum.
If you're going to hire a consultant, do so to help with this one critical step:
Get your people in a room. Have the conversation you've been avoiding. Make some decisions.
Then, and only then, will the tools be ready to help.

Evan Weinberg currently teaches interdisciplinary courses at International School Nido de Aguilas in Santiago, Chile. This is the latest chapter in a teaching career of more than twenty years spanning technology, computer science, mathematics, physics, and engineering in New York City, China, and Vietnam. Evan uses AI to rapidly prototype curriculum and software solutions, generate student materials, and accelerate school design decisions. He values thoughtful integration of AI and technology into education. At the same time, he makes space for analog, hands-on approaches that get students talking, building, and getting messy ideas out of their heads.
Email: evan@elmhurstgroup.us | Website: www.evanweinberg.com | LinkedIn: https://www.linkedin.com/in/evan-weinberg-b89a858