Baselines & Benchmarks: How Raw Data Becomes Actionable Insights

[Image: two small figures, a man and a woman in gray and white business attire, standing on a multicolored display of charts and graphs]

By Robyn Defelice

They say the definition of insanity is doing the same thing over and over and expecting different results. In the learning function, we just call that Tuesday.

We take on the same types of projects, run into the same bottlenecks, make the same promises about timelines, and wonder why we’re still stretched thin at the end of every quarter. It’s not because we’re bad at what we do but because there’s never enough time to stop and figure out what’s actually happening in our own work.

If you have taken stock of your data literacy (the mindset, skillset, and toolset to work with data), you have started identifying what data you’re touching and whether it’s telling you anything useful. Next, it’s time to do something with what you found. (If you’re unfamiliar with the data terms in this article, a quick review of Taking Stock of Data Literacy for L&D Professionals might help.)

That “something” starts with two concepts that sound more technical than they are: baselines and benchmarks. They often get used interchangeably, and in some cases they can converge, but they do serve different purposes. Getting clear on the distinction matters because each one drives different decisions about how your learning function operates, whether it’s a team of one (I see you!) or a department of 30.

What’s the Difference?

A baseline is where your L&D function is right now: its current state, measured. Not assumed, not estimated, not what someone told a stakeholder in a meeting last month. Measured:

  • How long does the design-to-delivery cycle actually take?
  • How is time allocated across projects?
  • What types of requests are coming in and from where?

If you’re a team of one, those questions apply to you and your work. If you’re leading or part of a larger team, they apply across everyone’s work. Either way, when you measure those things, you have a baseline.

A benchmark is what you’re measuring against. That could be a target the learning function sets for itself or a previous baseline it has improved on. It answers a different question: not “where are we?” but “where should we be?” or “how does this compare?”

Sometimes they’re the same number. For example, your function improves its onboarding time for new hires from six weeks to four. Four becomes the new target. In that moment it’s both where you are (baseline) and what you’re holding yourselves to (benchmark). Then you either keep it as the bar or set a new target, and the cycle continues.

So where do you start?

Where to Start Looking

Your learning function is generating data from learning platforms (LMS, LXP), content management systems, communication channels (Teams, Slack, email), shared drives, project management tools, and so on. This data includes usage patterns, request volume, turnaround times, revision cycles, completion trends, and support tickets related to training. All of that data is sitting there—waiting to become information.

Don’t try to baseline everything at once; focus on one or two initiatives. Keeping the scope small lets you get familiar with the constraints and opportunities of the data you own without creating too many combinations to manage.

One of the most practical starting points is something deceptively simple: an intake form for project requests.

If your function uses an intake form, you’re sitting on more than a decision tool. You’re sitting on an amazing data source that, with analysis, becomes information you can act on. If you don’t have one, that’s a starting point in itself. Even a simple one designed with baseline data in mind from the start gives you something to measure against a few months from now.
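If you are building (or tightening) an intake form with baselines in mind, it helps to think of each submission as a structured record. Here is a minimal sketch of what one record might capture; the field names are hypothetical and simply mirror the data points discussed in this article, not any particular tool’s schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical intake-form record. Field names are illustrative,
# chosen to match the baselines an intake form can generate.
@dataclass
class IntakeRequest:
    submitted: date               # when the request came in
    department: str               # source of the request
    request_type: str             # "new build", "update", "consultation"
    strategic_priority: str       # stated org/department priority it supports
    requested_delivery: date      # stakeholder's desired delivery date
    kickoff: Optional[date] = None    # when work actually started
    delivered: Optional[date] = None  # when the final product shipped
```

Even a handful of fields like these, filled in consistently, is enough to produce the volume, work-mix, and timing baselines below.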

Here’s what a single intake form can tell you over time—and this is just one example!

| Intake Form Data Point | Baseline It Creates | Decision It Informs |
| --- | --- | --- |
| Number of requests per quarter | Volume baseline: Are you seeing consistent demand or spikes? | Capacity planning: Can your function sustain this pace? |
| Source of requests (which departments) | Demand distribution: Who’s driving your workload? | Prioritization: Are you serving the loudest voices or the most strategic needs? |
| Type of request (new build, update, consultation) | Work mix baseline: What kinds of work dominate? | Capability planning: Does your team have the skills the demand requires? |
| Alignment to organizational or departmental strategy | Strategic alignment baseline: How much of your work connects to stated priorities? | Credibility: Can you show that your function’s work supports organizational goals? |
| Requested timeline vs. actual kickoff | Decision-to-kickoff baseline: How long does it take for a project to go from request to active? | Process improvement: Where is the hidden time between “yes” and “go”? |
| Scope at request vs. scope at delivery | Scope accuracy baseline: How often does the final product match what was originally asked for? | Stakeholder management: Where does scope creep enter and why? |

That’s six baselines from one tool. Each one tells you something different about how your function operates. Some of that data is quantitative, like request volume and turnaround time. Some of it is qualitative, like why scope changed or what drove a request. Each one, over time, can become a benchmark; a combination of these can give you meaningful, actionable insights.
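To make this concrete, here is a sketch of how two of those baselines (request volume and decision-to-kickoff time) fall out of the same intake records. The records and dates are invented for illustration; the point is only that simple date arithmetic over what you already collect produces a measurable baseline.

```python
from datetime import date
from statistics import median

# Illustrative intake records: (submitted, kickoff, department, request_type).
# All values are hypothetical.
requests = [
    (date(2024, 1, 8),  date(2024, 1, 22), "Sales", "update"),
    (date(2024, 1, 15), date(2024, 2, 12), "Sales", "new build"),
    (date(2024, 2, 3),  date(2024, 2, 10), "HR",    "consultation"),
    (date(2024, 3, 1),  date(2024, 4, 2),  "Sales", "update"),
]

# Volume baseline: how many requests came in this quarter.
volume = len(requests)

# Decision-to-kickoff baseline: days between "yes" and "go" for each request.
lags = [(kickoff - submitted).days for submitted, kickoff, *_ in requests]

print(f"Quarterly volume: {volume} requests")
print(f"Median request-to-kickoff: {median(lags)} days")
```

A median is used rather than an average so one stalled project does not distort the picture of typical turnaround.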

One thing to watch for: It’s tempting to look at the totals and stop there. Fifty requests this quarter sounds like a useful number. But who’s asking and for what? Break down aggregate data and you might find 80% of requests come from one department, or that most are updates, not new builds. That changes what you prioritize and where you focus.
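The breakdown itself is a one-liner once the records exist. This sketch uses invented numbers that mirror the example above (50 requests, 80% from one department) to show how quickly a total decomposes into who is asking and for what.

```python
from collections import Counter

# Hypothetical quarter of intake requests: (department, request_type).
requests = (
    [("Sales", "update")] * 40
    + [("HR", "new build")] * 6
    + [("Finance", "update")] * 4
)

total = len(requests)
by_dept = Counter(dept for dept, _ in requests)   # who's asking
by_type = Counter(kind for _, kind in requests)   # for what

for dept, n in by_dept.most_common():
    print(f"{dept}: {n} requests ({n / total:.0%})")
print("Updates vs. new builds:", by_type["update"], "vs.", by_type["new build"])
```

In this toy data, the aggregate “50 requests” hides that 80% come from one department and most are updates, not new builds: exactly the pattern that changes what you prioritize.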

Think in Quarters; Build Over Time

How long you collect data depends on what you’re measuring. The general principle is to capture at least one full cycle of whatever you’re looking at. The point isn’t identifying a magic number. The point is to collect long enough to see what’s actually happening versus catching a one-time anomaly. That’s empirical data in practice. You’re gathering it through direct observation and experience, not assumptions.

Here’s one way to think about it: Set up one baseline that needs longitudinal time to mature, something like turnaround from request to delivery or usage patterns across your learning platforms. Let that run in the background. Then identify three to four shorter-term baselines, one per quarter, where you take data you already have and build a picture from it.

| Quarter | Baseline Focus | What You’re Learning |
| --- | --- | --- |
| Q1 | Intake volume and source. Count your requests: who’s asking, how often, what kind of work. | Is your function demand-driven by one area or spread across the org? Does it align to strategic priorities? |
| Q2 | Decision to launch a project. Measure the time between “someone said we need this” and the team actually starting. | Where is the hidden time? What’s eating capacity before a project is even active? If nothing turns around in a quarter, what small wins or pilots could deliver value faster? |
| Q3 | Capability discovery. Use the intake form to invite requests, evaluate what’s feasible, and baseline what capabilities your function needs for different types of work. | What can you do well now? What types of requests expose gaps? If someone asks for microlearning conversion and you’ve never done it, you just learned something about your function. |
| Q4 | Operations baseline. Track how your function’s time is allocated against predefined categories: capacity, process efficiency, where effort goes versus where the organization says its priorities are. | Does how you spend your time match what matters? Where are assumptions about workload not matching reality? This is ongoing, not a one-quarter snapshot. |

By year’s end, you will have four pictures, plus whatever longitudinal data has been building. Each quarter taught you something. Each one built your function’s data literacy muscle. And each one gave you something concrete to bring to a conversation with a stakeholder (hello, trusted advisor).

Let AI Help You Think It Through

If you’re staring at all of this and thinking “where do I even begin?” you have a thought partner available to you right now. AI tools can help you figure out what to baseline and how to approach it, but they work better when you give them context.

Before you prompt, think about what you’re trying to get better at. What are you trying to learn? What decisions are you trying to make or advise on? What do you want to be more informed about? Give the AI that context. Let it ask you questions back. Use it as a coaching conversation, not a search engine.

For example: “I’m an L&D professional responsible for [describe your role and scope]. I’m trying to understand [what you want to learn or improve]. Here’s the data I currently have access to [list it]. Help me think through which of these could give me a useful baseline within a quarter and which ones need longer to show a pattern. Ask me questions to help me figure out what makes sense for my situation.”

That kind of exchange helps you think through your starting points, test new approaches, explore a capability in a tool you haven’t fully used, or figure out how to make sense of data you’ve been sitting on. And if you just built an intake form, ask AI to help you think through what baselines it could generate and what questions to ask of the data as it builds.

The Data You Don’t Own

You are likely to bump into this reality: Some of the most valuable data your learning function could use doesn’t live with your department. Platform usage data might sit with IT. Performance data sits with HR. Business outcomes live with operations or finance.

You can start without it. What you can baseline with your own data gives you a starting picture. It might be tighter in scope, and that’s fine. As you work with what you have, you’ll naturally start asking: What do I still not know? Is my data too general? What would help me understand this more completely? Those questions should lead you to the people who own the other data streams.

When you have those conversations, come with your story. Not a request for access. A perspective: “Here’s what I’m seeing from my data. Here’s what I’m trying to understand. Do you have data that could help inform this?” That’s a different conversation than “I need access to your system.”

Find out what they’re already collecting and for how long. If they only have three months of data, you can still work with it, but you’re seeing a snapshot, not a pattern.

Don’t start by asking them to start collecting something new. Build from what they already have. Some of these conversations may naturally lead to questions about how data is governed, who has access, and what policies exist around sharing it. That’s a good thing. Let those conversations happen.

Your Next Moves

Pick one: one area, one tool, one data source.

  • Determine how long you need to collect to see something real
  • Communicate what you’re doing and why, to your team, your manager, or just to yourself
  • Start small
  • Be aware of your own biases when you start reading baseline data

What you assume about the data is just as important as the data itself, and a bad assumption can lead you in the wrong direction. If completion is low, resist jumping to “learners aren’t motivated” without looking further. Learners may not realize the training is incomplete, or managers may not be giving employees time to finish it (or a manager may even say, “Ah, you don’t have to take it.” It happens!).

A messy baseline built from real data is infinitely more useful than a polished product based on a gut feel or a confident estimate with nothing behind it.

Baselines and benchmarks give you something to stand on. They’re how raw data becomes actionable insights and findings clear enough to drive a decision, not just a report.

Once you’ve got your starting point, the next question is: Am I collecting the right data? In the next article, we’ll dig into three questions that should come before any data collection effort, because collecting everything is just as risky as collecting nothing.

Image credit: sankai
