"Which tool should I use?"

I must have heard this question from faculty a thousand times. Should they use PlayPosit for their videos? Perusall for readings? CidiLabs for course design? The Canvas ecosystem offers incredible tools, but the abundance of choices was overwhelming faculty—and overwhelming our support team.

This is the story of how I turned that recurring pain point into an interactive solution that's now used hundreds of times per semester.

The Problem: Too Many Choices, Not Enough Guidance

Let me paint the picture: It's week two of the semester. A new instructor emails asking about video tools. I spend 20 minutes explaining the differences between Canvas Studio, PlayPosit, Kaltura, and YouTube embedding—only to realize halfway through the conversation that what they actually needed was a simple page with embedded videos, not an interactive tool at all.

This happened constantly. Faculty would:

  • Choose tools based on what their colleagues used, not what fit their needs
  • Get excited about a tool's features without considering if it aligned with their learning objectives
  • Request implementations that required extensive support when simpler solutions existed
  • Feel paralyzed by choice and delay launching their courses

Our team was spending hours each week answering the same questions. More importantly, faculty weren't making pedagogically sound decisions—they were making popular decisions.

The Real Issue

The problem wasn't that faculty didn't know Canvas tools existed. The problem was they didn't know which tool to use when, and why one tool might be better than another for their specific teaching context.

The "Aha" Moment: Reframing the Problem

During a particularly frustrating day of back-to-back consultations, I had a realization: I was essentially acting as a human decision tree. Faculty would tell me their needs, I'd ask clarifying questions, and based on their answers, I'd recommend specific tools.

The process was actually quite systematic:

  1. What type of content? (Video, reading, assessment, design)
  2. What's your learning objective? (Knowledge recall, analysis, engagement)
  3. Do you need analytics? (Basic views vs. detailed engagement data)
  4. What level of interaction? (Passive viewing vs. active participation)

If this process could be mapped, it could be automated: not to replace the human consultation, but to help faculty self-serve for straightforward decisions and come to consultations better prepared for complex ones.
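
Mapped out in code, each of those questions becomes a node whose answers point either to a follow-up question or to a tool recommendation. Here's a simplified sketch of the idea in plain JavaScript (the wording and branches are illustrative, not the production content):

    // Illustrative node: each answer leads either to another question
    // or to a leaf carrying a recommendation.
    const interactionQuestion = {
      question: "What level of interaction do you need?",
      options: {
        "Passive viewing":      { recommendation: "Embed the video on a Canvas page" },
        "Active participation": { recommendation: "PlayPosit" }
      }
    };

    // Walking the tree is just one lookup per answer.
    function nextStep(node, answer) {
      return node.options[answer];
    }

Once the whole consultation is expressed this way, "automating" it is mostly a matter of presenting each node and following the user's answers.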

"The best support system is one that empowers users to solve their own problems—but knows when to hand off to human help."

Designing the Solution: Principles First

Before writing a single line of code, I established core design principles:

1. Start With Pedagogy, Not Technology

Instead of asking "Which tool do you want to learn about?" I designed the tree to start with "What are you trying to accomplish?" This forced faculty to think about learning objectives before tool features.

2. Progressive Disclosure

Rather than overwhelming users with all options at once, the tool reveals information progressively. You see four broad categories first, then drill down into specifics. This reduced cognitive load dramatically.
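
Jumping ahead to the build for a moment, progressive disclosure mostly comes down to rendering only the current node. Here's a simplified sketch, reusing the node shape from the earlier example and assuming a container element with the id decision-tree (both illustrative, not the production code):

    // Show only the current node: its question and its immediate options.
    // Deeper levels stay hidden until the user picks a branch.
    function renderNode(node) {
      const container = document.getElementById("decision-tree");
      container.innerHTML = "";

      if (node.recommendation) {
        container.textContent = node.recommendation;
        return;
      }

      const heading = document.createElement("h2");
      heading.textContent = node.question;
      container.appendChild(heading);

      for (const [label, child] of Object.entries(node.options)) {
        const button = document.createElement("button");
        button.textContent = label;
        button.addEventListener("click", () => renderNode(child));
        container.appendChild(button);
      }
    }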

3. Explain the "Why," Not Just the "What"

Each tool recommendation includes not just what the tool does, but why it's being recommended for their specific use case. This educates faculty while solving their immediate problem.

4. Make Backtracking Easy

Faculty needed to explore multiple paths without getting lost. I added clear navigation, a "Start Over" option, and visual breadcrumbs showing their current location in the decision tree.
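
In the sketch above, that navigation boils down to keeping a simple history of the nodes visited so far; the same list doubles as the breadcrumb trail. Again, this is an illustration rather than the production code: it assumes a rootNode (the top of the tree) and a breadcrumbs element, and in practice each option button would call choose() instead of renderNode() directly.

    // The path taken so far doubles as the breadcrumb trail.
    const trail = [{ label: "Start", node: rootNode }];

    function choose(label, node) {
      trail.push({ label, node });
      render();
    }

    function goBack() {
      if (trail.length > 1) trail.pop();
      render();
    }

    function startOver() {
      trail.length = 1;              // keep only the starting node
      render();
    }

    function render() {
      const current = trail[trail.length - 1].node;
      renderNode(current);           // rendering sketch shown earlier
      document.getElementById("breadcrumbs").textContent =
        trail.map(step => step.label).join(" > ");
    }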

Design Insight

The most powerful decision I made? Adding a "Why this recommendation?" section to every tool. Faculty told me later this was the most valuable part—it taught them to think through tool selection for future decisions, not just the current one.

The Build: Balancing Complexity and Usability

I built the Tool Matcher as a standalone HTML/CSS/JavaScript tool. Why not use a fancy framework or external service? Three reasons:

  • Maintenance: Plain HTML/CSS/JavaScript means anyone on our team can update it when tools change
  • Speed: No external dependencies means instant load times and no broken integrations
  • Accessibility: Full control over semantic HTML and ARIA labels

The Structure

The decision tree works in three levels:

  1. Level 1: Broad categories (Video, Reading, Assessment, Design)
  2. Level 2: Specific needs within that category
  3. Level 3: Tool recommendations with use cases and pro tips

For example, selecting "Video" → "I need students to interact with the video" leads to PlayPosit recommendations with specific implementation advice.
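
Here's a simplified slice of what that structure looks like as data. The field names and wording are illustrative, but keeping everything in a plain object like this is exactly what makes it easy for anyone on the team to edit:

    // One branch: category, then specific need, then recommendation with its "why".
    const videoBranch = {
      category: "Video",
      question: "What do you need students to do with video?",
      options: {
        "Just watch it": {
          recommendation: "Embed the video on a Canvas page",
          why: "Plain viewing doesn't need an interactive tool or extra setup.",
          nextSteps: ["Canvas page embedding tutorial"]
        },
        "I need students to interact with the video": {
          recommendation: "PlayPosit",
          why: "Lets students respond at specific moments and gives you detailed engagement data.",
          nextSteps: ["PlayPosit setup guide", "Still need help? Schedule a consultation"]
        }
      }
    };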

The Details That Mattered

Small touches made a big difference:

  • Visual hierarchy: Used card-based design with clear CTAs
  • Icons: Added visual cues to help users scan quickly
  • Real examples: Included screenshots and concrete use cases
  • Common mistakes: Added "Watch out for..." warnings based on actual faculty errors
  • Mobile-friendly: Designed for faculty using phones during consultations

Quick Win

I added an "I'm not sure" option at key decision points. This loops users back with more context and prevents them from guessing incorrectly. Faculty appreciated having an "escape hatch" when they felt uncertain.
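
Structurally, the escape hatch is just another branch: instead of a recommendation, "I'm not sure" points to a clarifying question. The example wording below is illustrative:

    // "I'm not sure" routes to a clarifying question instead of forcing a guess.
    const levelOfInteraction = {
      question: "What level of interaction do you need?",
      options: {
        "Passive viewing":      { recommendation: "Embed the video on a Canvas page" },
        "Active participation": { recommendation: "PlayPosit" },
        "I'm not sure": {
          question: "Do you need to see how individual students engaged with the video?",
          options: {
            "Yes": { recommendation: "PlayPosit" },
            "No":  { recommendation: "Embed the video on a Canvas page" }
          }
        }
      }
    };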

Testing and Iteration: What I Got Wrong

My first version was... not great. Here's what I learned through pilot testing:

Mistake #1: Too Much Text

What I did wrong: Wrote detailed explanations for every tool, thinking more information was better.

What I learned: Faculty wanted quick answers, not dissertations. I cut text by 60% and moved detailed information to expandable sections.

Mistake #2: Assuming Everyone Knows Learning Objectives

What I did wrong: Asked "What's your learning objective?" assuming faculty would know Bloom's Taxonomy terms.

What I learned: Reframed questions in plain language: "Do you want students to watch and remember, or watch and do something?" Much better results.

Mistake #3: Not Including an Exit Strategy

What I did wrong: Focused only on getting faculty TO a recommendation, not what happens AFTER.

What I learned: Added "Next Steps" to every recommendation—links to tutorials, setup guides, and a "Still need help? Schedule a consultation" button.

The Results: Beyond My Expectations

Six months after launch, the data told a compelling story:

  • 1,200+ uses in the first semester
  • 30% reduction in basic tool selection support requests
  • 87% of faculty said it helped them make better decisions
  • Average session: 3.5 minutes (fast enough to be useful, long enough to be thorough)

But the numbers don't tell the whole story. The qualitative feedback was even more rewarding:

"I used to just pick whatever my colleague recommended. Now I understand WHY I'm choosing a tool and can explain it to my students."
"This saved me from making a huge mistake—I was about to implement a complex tool when a simple Canvas page would've worked better."

Perhaps most satisfying: faculty started using the language from the Tool Matcher in their consultation requests. Instead of "I need help with videos," they'd email "I want students to respond to specific moments in videos—I think I need PlayPosit?" That made consultations so much more productive.

Unexpected Win

Other departments requested their own versions! IT created a "Software Selector," and the Library made a "Research Database Matcher." The framework proved more versatile than I expected.

What I'd Do Differently

If I were building this today, here's what I'd change:

Add Analytics Earlier

I waited months before adding tracking to see which paths faculty took most often. Should've done that from day one—the data revealed patterns I never would have guessed.
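
Tracking doesn't require a heavy analytics platform. A minimal sketch, assuming a simple logging endpoint exists (the /api/tool-matcher-log path below is hypothetical):

    // Record each choice, then send the completed path once the user
    // reaches a recommendation.
    const chosenPath = [];

    function recordChoice(label) {
      chosenPath.push(label);
    }

    function logCompletedPath(recommendation) {
      // sendBeacon keeps working even if the user immediately leaves the page.
      navigator.sendBeacon(
        "/api/tool-matcher-log",
        JSON.stringify({ path: chosenPath, recommendation, loggedAt: Date.now() })
      );
    }

Even just counting which branches get clicked most often would have surfaced those patterns much sooner.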

Create a "Recently Used" Section

Faculty often return to explore similar tools. A history feature would save time and encourage exploration.

Build in Content Versioning

Canvas tools change frequently. I should have built a system to flag outdated information automatically rather than manually reviewing quarterly.
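
A lightweight version of that, sketched under the assumption that each recommendation carries a lastReviewed date:

    // Flag any recommendation that hasn't been reviewed within the window.
    const STALE_AFTER_DAYS = 120;            // illustrative review window

    function isStale(entry) {
      const msPerDay = 24 * 60 * 60 * 1000;
      const ageInDays = (Date.now() - new Date(entry.lastReviewed).getTime()) / msPerDay;
      return ageInDays > STALE_AFTER_DAYS;
    }

    // Anything this returns goes onto an internal "needs review" list
    // instead of waiting for the next quarterly pass.
    function findStale(recommendations) {
      return recommendations.filter(isStale);
    }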

Key Takeaways for Instructional Designers

If you're considering building something similar, here's what matters most:

  • Map your actual workflow first: The best tools automate what experts already do naturally
  • Design for the decision, not the tool: Help users think through problems, not just find answers
  • Test early and often: Your assumptions about how people will use your tool are probably wrong
  • Keep it simple: Complex doesn't mean better—clarity and speed matter more
  • Make it educational: The best job aids teach users to fish, not just give them fish

Try It Yourself

Want to see the Canvas Tool Matcher in action? Check it out in my Resources section. I'd love to hear your feedback or ideas for improvement!

The Bigger Picture

Building the Canvas Tool Matcher taught me that the best instructional design often happens for instructors, not just students. When we empower faculty to make better decisions independently, we create a ripple effect that benefits everyone.

It also reinforced something I've learned over seven years in this field: the most impactful innovations often solve the small, recurring frustrations. You don't always need a revolutionary new approach—sometimes you just need to take a common problem and create a systematic, user-friendly solution.

The Tool Matcher is now a core part of our faculty onboarding and is referenced in almost every consultation. It's proof that thoughtful design, grounded in real user needs, can transform a pain point into a permanent solution.

What About You?

What recurring questions do you hear from faculty or students? Could any of them be systematized into a decision tree or job aid? I'd love to hear about the pain points you're tackling in your instructional design work.

Connect with me on LinkedIn or drop me an email—I'm always excited to talk shop with fellow instructional designers!