Guides & Tutorials

Maintenance KPIs That Actually Matter: A Facility Manager's Guide to Metrics

Stop drowning in data and start measuring what matters. Learn which maintenance KPIs drive real decisions, how to calculate them, and what targets to set for your facility.

Judy Kang

Solutions Manager

November 15, 2022 · 14 min read

Key Takeaways

  • Focus on 5-7 core KPIs that drive decisions—more than that creates dashboard overload without improving outcomes
  • PM completion rate is your foundation metric—if preventive maintenance isn't happening, nothing else matters
  • Planned vs. reactive maintenance ratio reveals operational maturity; target 70%+ planned work
  • Context makes metrics meaningful—track cost per square foot, not just total cost, to enable valid comparisons

I once worked with a facility director who had a dashboard with 47 metrics. Forty-seven. He could tell you the average time to complete a plumbing work order on the third Tuesday of months with a full moon. What he couldn’t tell you was whether his maintenance operation was actually performing well.

Data isn’t insight. Metrics aren’t intelligence. The purpose of tracking KPIs is to make better decisions—and you can’t make better decisions when you’re drowning in numbers that don’t connect to actions you can take.

This guide focuses on the maintenance metrics that actually matter: ones that answer real questions, drive real decisions, and improve real outcomes. We’ll cover what to measure, how to calculate it, what targets to set, and most importantly, what to do when the numbers aren’t where you want them.

The Problem With Most Maintenance Metrics

Before diving into specific KPIs, let’s talk about why most maintenance reporting fails.

Too many metrics, too little focus. When everything is measured, nothing is prioritized. Teams that track 30+ KPIs rarely act on any of them. The cognitive load of monitoring that many numbers exceeds human capacity.

Metrics without context. “We completed 847 work orders last month.” Is that good? Bad? It depends entirely on context: how many work orders came in, what types they were, how large your facility is, and how many technicians you have. Raw numbers without context are meaningless.

Lagging indicators only. Many teams only measure what already happened. Last month’s PM completion rate tells you that you missed preventive maintenance—it doesn’t help you prevent missing it this month. Balance lagging indicators (what happened) with leading indicators (what’s likely to happen).

Vanity metrics. Some numbers look impressive but don’t connect to outcomes. “Our technicians logged 4,000 hours last month!” Great, but were those hours productive? Did they move the operation forward? Vanity metrics feel good but don’t drive improvement.

Measurement without action. The point of measuring is to make decisions. If a metric has been in the red for six months and nothing has changed, you’re not really using that metric—you’re just reporting it. Every KPI should have a defined response when it’s off-target.

Effective maintenance measurement is disciplined. Fewer metrics, tracked consistently, with clear thresholds that trigger specific actions. That’s what actually improves operations.

The Core Five: Essential Maintenance KPIs

If you measure nothing else, measure these five. They provide a balanced view of your maintenance operation across compliance, efficiency, service quality, and cost.

1. PM Completion Rate

What it measures: The percentage of scheduled preventive maintenance work orders completed on time.

How to calculate:

(PM Work Orders Completed On Time ÷ PM Work Orders Due) × 100

Example: 180 PMs completed on time ÷ 200 PMs due = 90% PM completion rate
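
In a spreadsheet or CMMS export this is a single division. A minimal sketch in Python (function and variable names are illustrative, not tied to any particular system):

```python
def pm_completion_rate(completed_on_time: int, pms_due: int) -> float:
    """Percentage of scheduled PM work orders completed on time."""
    if pms_due == 0:
        return 0.0
    return completed_on_time / pms_due * 100

# Example from above: 180 of 200 scheduled PMs completed on time
print(f"{pm_completion_rate(180, 200):.0f}%")  # 90%
```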

Why it matters: Preventive maintenance is the foundation of proactive operations. When PM completion drops, you’re deferring maintenance that will return as emergency repairs—usually at worse times and higher costs. PM completion is a leading indicator of future reactive work.

Target: 90% or higher

What to do if it’s low:

Below 80%: You have a systemic problem. Common causes:

  • Insufficient technician capacity for the PM load
  • PM schedules too aggressive (too many PMs, not enough value)
  • Competing priorities pulling technicians to reactive work
  • Poor scheduling or work order routing

80-90%: Acceptable but watch the trend. Investigate the misses—is it random or are certain equipment types, locations, or technicians consistently behind?

Above 95%: Verify you’re not gaming the metric. Are technicians completing PMs without actually doing the work? Are you counting rescheduled PMs as “on time”? Also examine whether some scheduled PMs might be unnecessary.

2. Planned vs. Reactive Work Ratio

What it measures: The balance between proactive (planned) work and reactive (emergency/demand) work.

How to calculate:

(Planned Work Orders ÷ Total Work Orders) × 100

Planned work includes: preventive maintenance, scheduled repairs, planned projects
Reactive work includes: emergency repairs, unscheduled breakdowns, demand maintenance

Example: 700 planned work orders ÷ 1000 total work orders = 70% planned
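
In practice the counts usually come from work order records rather than pre-tallied totals. A hedged sketch, assuming each work order carries a type field; map your own CMMS categories into the planned set:

```python
# Which types count as "planned" is an assumption; adjust to your CMMS categories.
PLANNED_TYPES = {"preventive", "scheduled_repair", "planned_project"}

def planned_work_ratio(work_orders: list[dict]) -> float:
    """Percentage of work orders that were planned rather than reactive."""
    if not work_orders:
        return 0.0
    planned = sum(1 for wo in work_orders if wo["type"] in PLANNED_TYPES)
    return planned / len(work_orders) * 100

orders = [{"type": "preventive"}] * 700 + [{"type": "emergency"}] * 300
print(f"{planned_work_ratio(orders):.0f}% planned")  # 70% planned
```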

Why it matters: Reactive maintenance is expensive, disruptive, and stressful. A high reactive ratio indicates a “firefighting” culture where teams spend their time responding to emergencies rather than preventing them. Mature maintenance operations are predominantly planned.

Target: 70% planned or higher; world-class operations achieve 80%+

What to do if you’re too reactive:

Below 50%: Your operation is in firefighting mode. This is unsustainable. Focus on:

  • Identifying equipment that repeatedly fails and prioritize PM for those assets
  • Building PM schedules for critical equipment that currently has none
  • Protecting PM time—don’t let it be sacrificed for reactive work
  • Addressing root causes rather than just fixing symptoms

50-70%: Moving in the right direction. Continue building PM coverage and investigating repeat failures.

Above 80%: Excellent. Monitor for equipment that could benefit from more attention and consider whether any PM tasks are performed too frequently.

3. Average Response Time

What it measures: The average time from work order creation to technician arrival/acknowledgment.

How to calculate:

Sum of (First Response Timestamp - Work Order Created Timestamp) ÷ Number of Work Orders

Example: Total response time of 50 hours across 100 work orders = 30-minute average response time
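
Computed from raw timestamps, the calculation looks like the sketch below (field names are placeholders for whatever your work order export provides):

```python
from datetime import datetime

def average_response_minutes(work_orders: list[dict]) -> float:
    """Average minutes from work order creation to first technician response."""
    deltas = [
        (wo["first_response_at"] - wo["created_at"]).total_seconds() / 60
        for wo in work_orders
        if wo.get("first_response_at")  # skip orders with no response yet
    ]
    return sum(deltas) / len(deltas) if deltas else 0.0

orders = [
    {"created_at": datetime(2022, 11, 1, 9, 0),
     "first_response_at": datetime(2022, 11, 1, 9, 20)},   # 20 minutes
    {"created_at": datetime(2022, 11, 1, 13, 0),
     "first_response_at": datetime(2022, 11, 1, 13, 40)},  # 40 minutes
]
print(f"{average_response_minutes(orders):.0f} minutes")   # 30 minutes
```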

Why it matters: Response time reflects how quickly your team reacts to reported issues. For many facilities—particularly those with tenants or customers—response time directly impacts satisfaction and perception of service quality.

Target: Varies by priority level and facility type. Examples:

  • Emergency: 15-30 minutes
  • High priority: 1-2 hours
  • Standard: 4-8 hours
  • Low priority: 24-48 hours

What to do if response time is too long:

Examine by priority level first—are emergencies being treated urgently? A slow average might be acceptable if high-priority items are handled quickly.

Common causes of slow response:

  • Notification routing problems (technicians don’t receive alerts promptly)
  • Unclear priority classification (everything marked “normal” when some items are urgent)
  • Understaffing or scheduling gaps
  • Geographic challenges (large campus, travel time between buildings)

Quick wins:

  • Enable push notifications and SMS alerts for high-priority work
  • Review priority definitions—ensure they match actual urgency
  • Consider on-call coverage for after-hours emergencies

4. Average Resolution Time

What it measures: The average time from work order creation to completion.

How to calculate:

Sum of (Completion Timestamp - Work Order Created Timestamp) ÷ Number of Work Orders

Example: Total resolution time of 2,400 hours across 100 work orders = 24-hour average resolution
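
Because targets differ by priority, a single facility-wide average can hide problems. One way to break it out, sketched with illustrative field names:

```python
from collections import defaultdict
from datetime import datetime

def resolution_hours_by_priority(work_orders: list[dict]) -> dict[str, float]:
    """Average hours from creation to completion, grouped by priority."""
    buckets: dict[str, list[float]] = defaultdict(list)
    for wo in work_orders:
        hours = (wo["completed_at"] - wo["created_at"]).total_seconds() / 3600
        buckets[wo["priority"]].append(hours)
    return {priority: sum(h) / len(h) for priority, h in buckets.items()}

orders = [
    {"priority": "emergency", "created_at": datetime(2022, 11, 1, 8, 0),
     "completed_at": datetime(2022, 11, 1, 11, 0)},   # 3 hours
    {"priority": "standard", "created_at": datetime(2022, 11, 1, 8, 0),
     "completed_at": datetime(2022, 11, 3, 8, 0)},    # 48 hours
]
print(resolution_hours_by_priority(orders))  # {'emergency': 3.0, 'standard': 48.0}
```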

Why it matters: Resolution time measures how long issues remain unresolved. It captures the complete customer experience—not just when you showed up, but when the problem was actually fixed.

Target: Again, varies by priority and type:

  • Emergency: 2-4 hours
  • High priority: 24 hours
  • Standard: 48-72 hours
  • Low priority: 1-2 weeks

What to do if resolution time is too long:

Distinguish between response delay and resolution delay. If response is fast but resolution is slow, the problem is during the work itself. If both are slow, start with response time.

Common causes of slow resolution:

  • Parts availability (waiting for parts extends resolution time significantly)
  • Complex issues requiring diagnosis
  • Multiple visits needed to complete work
  • Scheduling constraints (access limitations, coordination with occupants)
  • Unclear completion criteria (when is it “done”?)

Track first-time fix rate alongside resolution time. If technicians need multiple visits to resolve issues, that drives up average resolution time and indicates potential training, parts availability, or diagnostic process problems.

5. Maintenance Cost Per Square Foot

What it measures: Total maintenance spending normalized by facility size.

How to calculate:

Total Maintenance Cost ÷ Total Square Footage

Include: labor (internal and contract), parts and materials, vendor services, equipment
Exclude: capital projects (major renovations, new equipment installation)

Example: $500,000 annual maintenance cost ÷ 200,000 square feet = $2.50 per square foot
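
If cost records are tagged by category, the include/exclude rule above can be applied directly. A minimal sketch, assuming hypothetical category labels:

```python
# Which categories to include is an assumption; capital projects are excluded.
INCLUDED_CATEGORIES = {"labor", "contract_labor", "parts", "vendor_services", "equipment"}

def cost_per_square_foot(cost_records: list[dict], square_feet: float) -> float:
    """Annual maintenance cost per square foot, excluding capital projects."""
    total = sum(c["amount"] for c in cost_records if c["category"] in INCLUDED_CATEGORIES)
    return total / square_feet

records = [
    {"category": "labor", "amount": 300_000},
    {"category": "parts", "amount": 200_000},
    {"category": "capital_project", "amount": 1_000_000},  # excluded
]
print(f"${cost_per_square_foot(records, 200_000):.2f}/sf")  # $2.50/sf
```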

Why it matters: Raw cost numbers are meaningless without context. Spending $1 million on maintenance sounds like a lot—until you learn it’s for a 500,000 square foot hospital, where that’s actually quite efficient. Cost per square foot enables comparison across facilities, against industry benchmarks, and over time.

Target: Highly variable by facility type. Rough benchmarks:

  • Office buildings: $2-4/sf
  • Retail: $2-5/sf
  • Industrial/warehouse: $1-3/sf
  • Healthcare: $8-15/sf
  • Education: $3-6/sf

Compare against industry peers and your own historical trend rather than absolute numbers.

What to do if costs are too high:

First, verify you’re comparing apples to apples. Different cost accounting methods produce different numbers. Ensure your benchmark comparisons use consistent definitions.

If costs are genuinely high:

  • Analyze by category—is one area (HVAC, electrical, plumbing) driving the total?
  • Review vendor contracts—are you getting competitive rates?
  • Examine reactive vs. planned ratio—reactive work typically costs 3-5x more than planned
  • Evaluate equipment age—very old equipment costs more to maintain; replacement may be justified
  • Check for deferred maintenance catching up—costs spike when historically deferred work finally happens

Quick Reference: KPI Target Benchmarks

Here’s a comprehensive overview of target values for core and supporting maintenance KPIs, organized by performance maturity level:

| KPI Metric | Developing (Needs Improvement) | Target (Industry Standard) | World-Class (Best-in-Class) |
|---|---|---|---|
| PM Completion Rate | < 80% | 90-95% | 95%+ |
| Planned vs. Reactive Work | < 50% planned | 70-80% planned | 80%+ planned |
| Emergency Response Time | > 60 minutes | 15-30 minutes | < 15 minutes |
| Standard Response Time | > 24 hours | 4-8 hours | < 4 hours |
| Average Resolution Time | > 5 days | 1-3 days | < 24 hours |
| Cost per Square Foot (Office) | > $5.00 | $2.00-4.00 | $1.50-2.50 |
| First-Time Fix Rate | < 70% | 80-85% | 90%+ |
| SLA Compliance Rate | < 75% | 85-90% | 95%+ |
| Requester Satisfaction Score | < 3.0 / 5.0 | 3.5-4.0 / 5.0 | 4.5+ / 5.0 |
| Work Order Backlog Trend | Growing monthly | Stable | Declining or stable low |

Important Context: These benchmarks represent typical ranges across facility types and industries. Your specific targets should account for facility type, age of equipment, organizational priorities, and resource availability. Use these as directional guides rather than absolute requirements. The key is establishing baselines for your operation and tracking improvement trends over time.


Supporting Metrics: The Second Tier

Once your core five are solid, consider adding these supporting metrics based on your specific needs.

Work Order Backlog

What it measures: Total open work orders at any point in time.

Why it matters: Growing backlog indicates capacity problems—more work coming in than going out. Shrinking backlog indicates the opposite. Stable backlog is healthy.

Target: Backlog should remain stable or decline. A specific number depends on your operation’s normal velocity. Track the trend, not the absolute value.

Mean Time Between Failures (MTBF)

What it measures: Average time between equipment breakdowns for a specific asset or asset class.

Why it matters: MTBF indicates equipment reliability. Declining MTBF for an asset suggests it’s wearing out and may need replacement. MTBF helps prioritize capital planning.

How to calculate:

Total Operating Time ÷ Number of Failures

Example: 8,760 operating hours (one year) ÷ 4 failures = 2,190 hours MTBF

Mean Time To Repair (MTTR)

What it measures: Average time to restore equipment to operation after failure.

Why it matters: MTTR indicates repair efficiency. High MTTR for specific equipment may indicate parts availability problems, technician skill gaps, or inherently complex repairs.

How to calculate:

Total Repair Time ÷ Number of Repairs
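
Both metrics come from the same failure history, so they are often computed together. A sketch assuming you have total operating hours and a list of repair durations for an asset:

```python
def mtbf_hours(operating_hours: float, failure_count: int) -> float:
    """Mean time between failures: operating time divided by number of failures."""
    return operating_hours / failure_count if failure_count else float("inf")

def mttr_hours(repair_durations: list[float]) -> float:
    """Mean time to repair: average duration of completed repairs, in hours."""
    return sum(repair_durations) / len(repair_durations) if repair_durations else 0.0

# One year of operation (8,760 hours), four failures, repair durations in hours
print(mtbf_hours(8_760, 4))              # 2190.0
print(mttr_hours([2.5, 4.0, 3.0, 6.5]))  # 4.0
```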

First-Time Fix Rate

What it measures: Percentage of work orders completed on the first visit.

Why it matters: Multiple visits to resolve one issue frustrates requesters, costs more, and reduces technician productivity. Low first-time fix rate often indicates parts availability problems or diagnostic challenges.

Target: 80%+ for routine maintenance

How to calculate:

(Work Orders Completed in One Visit ÷ Total Work Orders) × 100

SLA Compliance Rate

What it measures: Percentage of work orders meeting their service level agreement targets (response time, resolution time, or both).

Why it matters: SLAs represent commitments to your organization, tenants, or customers. Compliance rate shows whether you’re keeping those commitments.

Target: 90%+ compliance; below 80% indicates SLAs may be unrealistic or performance genuinely needs improvement

Requester Satisfaction

What it measures: Direct feedback from people who submitted maintenance requests.

Why it matters: Operational metrics don’t capture everything. Satisfaction surveys reveal whether the experience felt good to the person on the receiving end. A work order can be completed “on time” yet leave the requester unhappy due to communication gaps, partial fixes, or technician behavior.

How to measure: Post-completion surveys (keep them short—1-3 questions maximum). Response rate matters; low response rates make data unreliable.

Building Your Dashboard

Don’t display every metric on one screen. Different audiences need different views.

Executive Dashboard

Leadership wants the big picture: Are we in good shape or not?

Include:

  • PM completion rate (trend over past 6 months)
  • Planned vs. reactive ratio (trend)
  • Cost per square foot (trend and vs. budget)
  • Major open issues (critical equipment down, significant backlogs)

One page maximum. No more than 5-6 metrics. Green/yellow/red indicators for quick assessment.

Operations Dashboard

Maintenance managers need more detail for day-to-day decisions.

Include:

  • PM completion rate (current month and YTD)
  • Response time and resolution time by priority
  • Work order volume (incoming vs. completed)
  • Backlog status
  • SLA compliance
  • Top 10 open work orders by age

Technician Dashboard

Individual contributors need to know what’s on their plate and how they’re doing.

Include:

  • Assigned work orders
  • Personal completion metrics
  • PM due this week
  • Overdue items requiring attention

Keep it simple and actionable. Technicians don’t need portfolio-level cost analysis.

Setting Targets That Make Sense

Arbitrary targets (“Let’s aim for 95%!”) don’t work. Effective targets are:

Based on baseline. Know where you are before deciding where you want to be. If PM completion is currently 65%, targeting 95% next month is fantasy. Target 75%, then 85%, then 95%.

Informed by benchmarks. What do similar facilities achieve? Industry benchmarks provide external reference points. Your target should be competitive but realistic.

Connected to business impact. Why does this metric matter to the organization? A target connected to real outcomes (tenant satisfaction, cost control, safety) carries more weight than an arbitrary number.

Achievable with effort. Targets should stretch the team but remain achievable. Impossible targets demotivate. Too-easy targets don’t drive improvement.

Time-bound. When should you hit the target? “Improve PM completion” is vague. “Achieve 90% PM completion by Q3” is actionable.

When Metrics Go Wrong

Sometimes the numbers move in unexpected directions. Here’s how to diagnose common issues.

PM Completion Dropped Suddenly

Investigate:

  • Did PM volume increase? (New schedules added)
  • Did technician capacity decrease? (Turnover, absences, reassignments)
  • Was there a major reactive event that consumed resources?
  • Is this seasonal? (Some facilities have seasonal PM spikes)

Response Time Spiked

Investigate:

  • Volume spike? (More requests than normal)
  • Staffing gap? (On-call not covered, vacations)
  • Notification problem? (Alerts not reaching technicians)
  • Mis-prioritization? (Genuinely urgent items not marked as such)

Costs Increased Without More Work

Investigate:

  • Labor costs increased? (Overtime, rate increases, additional headcount)
  • Parts costs increased? (Supplier pricing, emergency parts purchases)
  • Vendor costs increased? (Contract renewals, scope expansion)
  • One-time expenses? (Major repair that won’t recur)

All Metrics Look Good But Complaints Continue

Investigate:

  • Are metrics capturing reality? (Gaming, incomplete data)
  • Are the right things being measured? (Metrics missing the real problems)
  • Communication gaps? (Work completed but requesters not informed)
  • Perception issues? (Expectations exceed what metrics target)

Moving From Measurement to Action

Metrics exist to drive action. Every KPI should have:

Thresholds: What’s acceptable, what’s concerning, what’s critical?

Ownership: Who’s responsible for this metric and its improvement?

Review rhythm: How often is this metric reviewed and by whom?

Action triggers: What happens when the metric crosses a threshold? What decisions does it inform?

Without these elements, you’re just reporting numbers. With them, you’re managing performance.

Example: PM Completion Rate Action Framework

| Status | Threshold | Action |
|---|---|---|
| Green | ≥ 90% | Continue current operations; look for optimization opportunities |
| Yellow | 80-89% | Investigate root causes; present findings to operations review |
| Red | < 80% | Escalate to management; reallocate resources; defer non-critical work |

Review: Weekly by operations manager, monthly by director
Owner: Maintenance manager
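
Encoding the framework makes the thresholds explicit and easy to wire into a report or alert. A sketch of the PM completion example above, with threshold values taken from the table and everything else illustrative:

```python
def pm_completion_status(rate_percent: float) -> str:
    """Map PM completion rate to the green/yellow/red action framework."""
    if rate_percent >= 90:
        return "green: continue current operations; look for optimizations"
    if rate_percent >= 80:
        return "yellow: investigate root causes; present to operations review"
    return "red: escalate to management; reallocate resources; defer non-critical work"

print(pm_completion_status(86))  # yellow: investigate root causes; ...
```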

The Bottom Line

The best maintenance operations don’t track the most metrics—they track the right metrics, consistently, and actually use them to make decisions.

Start with the core five: PM completion rate, planned vs. reactive ratio, response time, resolution time, and cost per square foot. Get those solid before adding complexity. Build dashboards appropriate to each audience. Set targets based on baselines and benchmarks. And most importantly, turn measurements into actions.

The goal isn’t a perfect dashboard. The goal is a better-maintained facility. Metrics are just a tool to get there.


Want to track maintenance KPIs without building spreadsheet reports? See how Infodeck provides real-time dashboards, automated calculations, and configurable alerts that turn maintenance data into actionable insights.

Frequently Asked Questions

What are the most important maintenance KPIs to track?
Start with these five: PM (Preventive Maintenance) completion rate, planned vs. reactive work ratio, average response time, average resolution time, and maintenance cost per square foot. These provide a balanced view of compliance, efficiency, service quality, and cost control. Add specialized metrics later based on your industry and specific challenges.
What is a good PM completion rate?
Target 90% or higher PM completion rate. Below 80% indicates systemic scheduling or resource problems. Between 80-90% is acceptable but leaves room for improvement. Above 95% may warrant examining whether you're over-maintaining or have scheduled work that isn't actually necessary. Perfect 100% completion is unusual and worth investigating.
How do you calculate planned vs. reactive maintenance ratio?
Divide planned work orders (preventive maintenance plus scheduled repairs) by total work orders. Multiply by 100 for percentage. For example: 700 planned work orders ÷ 1000 total work orders = 70% planned. World-class maintenance operations target 80%+ planned work. Below 50% indicates a reactive, firefighting culture.
What should maintenance cost per square foot be?
Benchmarks vary dramatically by facility type. General office buildings typically run $2-4 per square foot annually. Healthcare facilities range $8-15+. Industrial manufacturing varies by complexity. Compare against industry benchmarks and your own historical trends rather than absolute numbers. Decreasing cost with stable or improving service quality indicates operational improvement.
Tags: maintenance KPIs, facility metrics, CMMS reporting, performance measurement, maintenance management
Written by

Judy Kang

Solutions Manager

Ready to Transform Your Maintenance Operations?

Join facilities teams achieving 75% less unplanned downtime. Start your free trial today.