Macro Buddy: A New Tutor for Reasoning
A pair of economics professors at the University of Wisconsin–La Crosse unveiled Macro Buddy, a chatbot engineered to steer students toward explaining their reasoning rather than simply delivering answers. The goal is simple but ambitious: use AI to strengthen critical thinking in macroeconomics without turning the tool into a shortcut that bypasses learning.
Crucially, the project runs in a controlled classroom setting with the chatbot’s web access intentionally disabled. The researchers say this design preserves academic integrity while letting AI function as a facilitator of thought, not a calculator for the test.
The UW-La Crosse Trial
In the spring 2025 term, 140 undergraduate students—mostly first- and second-years—enrolled across four sections of a macroeconomics course. All four sections used identical talking points, problem sets, assignments, and in-person exams, so that any performance differences could be attributed to study format rather than content.
After the first major assessment, the four sections were randomly assigned to distinct study formats. One group studied in isolation, another used Macro Buddy to prompt reasoning, a third engaged in structured peer discussion, and a fourth combined Macro Buddy with peer dialogue. In all cases, students were not allowed to reference notes or external sources during the in-person exams.
What Macro Buddy Does
Macro Buddy operates as a guided reasoning companion. It prompts students to articulate each step of their thought process, asks clarifying questions, and helps them connect macroeconomic concepts to real-world scenarios. The tool is designed as a thinking partner rather than a quick-answer factory, with the web disabled to maintain exam integrity.
What the Data Show
- Second-exam outcomes: Students using Macro Buddy alongside peer discussion posted the strongest gains, improving their average score by roughly 6–7 percentage points compared with peers studying alone.
- Group dynamics mattered: the AI-plus-discussion format beat both AI-only and discussion-only study, suggesting that AI prompts and human interaction amplify one another rather than substitute for each other.
- Consistency across sections: Despite the four different study environments, the materials and exams were identical, strengthening the case that the study format, not content, drove the differences in performance.
In numeric terms, average second-exam scores by format (out of 100) were roughly: solo study 78, AI-guided reasoning alone 80, peer discussion alone 81, and AI plus peer discussion 85. While the study was limited to a single course at one university, the trend aligns with broader expectations that structured reasoning support boosts retention and transfer of macro concepts.
Voices From the Class
“Macro Buddy forced me to explain each step,” said a student in the AI-assisted plus peer group. “When I could verbalize why a curve shifted or why inflation expectations change, I learned to spot gaps in my argument.”

Professor Maria Chen, who co-authored the study, added, “The aim was not to replace teachers but to provide a tool that nudges students to reason. We’re hoping this kind of approach scales to other introductory courses.”
Asked about the underlying philosophy, the research team framed the project as a collaboration between machine guidance and human discourse. In their words, the venture embodies a meta-principle of learning: students reason aloud with an AI partner, then refine ideas through peer critique.
Why This Matters For Higher Education
The trial arrives as colleges navigate an era of rising AI adoption. A recent nationwide survey suggests a large share of students are turning to generative AI for everything from drafting essays to clarifying difficult concepts. Yet research on learning is mixed: AI may speed up task completion, but the benefits depend on how the tool is used.

Advocates say tools like Macro Buddy could help instructors scale personalized reasoning prompts that were previously too resource-intensive. Skeptics caution that AI-assisted reasoning could still become a crutch if not paired with active peer engagement and clear assessment goals.
The Ethos Behind the Project
Beyond the immediate findings, the project tackles the broader question of how economists should interact with AI in the classroom. In a nod to their approach, the research team sums up the effort with a memorable line: "We're economists; we designed the chatbot." The phrase reflects a belief that AI can be wired with economic reasoning tools—models, graphs, and critical questioning—that students can reason through, with the human element providing context and accountability.
During interviews, the team reiterated that the objective is to cultivate long-term understanding. “We want students to own their reasoning,” one co-author explained. “If the AI can reveal gaps in logic and then help fill them, that’s a win for durable learning.”
Implications For The Market And Policy
Educational technology vendors are racing to bring reasoning-centric AI tools to market, and universities are assessing how to integrate them into established curricula. The Wisconsin trial provides a framework for evaluating AI tools not as shortcuts, but as scaffolds for deeper understanding. If similar outcomes emerge across majors and institutions, expect more pilots that combine AI tutors with collaborative learning models.
Policy implications are also on the docket. Institutions may settle on licensing arrangements, usage guidelines, and assessment standards that emphasize students’ ability to articulate reasoning. Some universities are already revising honor-code language to reflect AI-assisted study while affirming a commitment to independent thinking on exams.
Looking Ahead
The UW-La Crosse study is a first step. Researchers plan to expand to additional macroeconomics courses and potentially other fields that rely on step-by-step problem solving, such as econometrics and introductory finance. They will also track long-run retention of macro concepts and translation into course performance across a broader student population.
Industry observers say the timing is right. As the AI education market grows, tools that specifically target reasoning and argumentation could find a large audience among students who want to learn deeply rather than merely complete assignments. If Macro Buddy proves scalable, it could become a blueprint for how educators deploy AI tools to foster thinking, not just faster answers.
Bottom Line
The spring 2025 trial at UW-La Crosse offers a compelling, data-driven argument for pairing AI tutors with peer discussion to improve learning outcomes in macroeconomics. The observed gains—especially when AI prompts are combined with human dialogue—signal a promising path for higher education in an AI-enabled era. And as the field evolves, the idea of an economist-designed chatbot, one built to teach students to reason, could reshape how instructors frame assignments, feedback, and exams in the years ahead.