Transit KPIs for Small Agencies: What to Track | SHARE

Written by SHARE Mobility Team | Nov 4, 2025 3:15:00 PM

Most small transit agencies know they should be tracking performance data. Fewer are confident they are tracking the right things, calculating them consistently, or using the numbers in a way that actually improves how the program runs.

The gap is not usually a data problem. It is a focus problem. Small agency programs often collect more raw information than they use, while the specific transit KPIs that would tell them whether the program is working get buried in spreadsheets or go unmeasured entirely.

This post covers the transit KPIs that matter most for small demand-response and paratransit programs: what each metric measures, how to calculate it, and why it matters for both day-to-day operations and the grant reporting and city council accountability that come with running a public program. The examples throughout draw from real municipal transit programs that have used data to demonstrate program impact and drive operational improvement over time.

Why Transit KPIs Matter Beyond Internal Operations

For a private company, performance metrics are largely internal tools. For a small public transit agency, they serve a second audience: the people and institutions that fund and oversee the program.

City councils want to know whether the investment is justified. Grant agencies require specific outcome metrics as a condition of funding. Riders and the public expect accountability. When a program cannot produce clean, consistent performance data, it creates problems that go beyond day-to-day operations. Programs that cannot demonstrate outcomes are harder to defend at budget time and harder to grow when opportunity arises.

The transit KPIs below are the ones that appear most consistently in grant reporting requirements, city council presentations, and operational review cycles for small demand-response and paratransit programs. Tracking them consistently turns a good program into a defensible one.

On-Time Performance

On-time performance (OTP) measures the percentage of trips completed within a defined window of the scheduled pickup or drop-off time. It is the most commonly tracked transit KPI and the one most directly tied to rider trust.

How to calculate it: Divide the number of on-time trips by the total number of completed trips, then multiply by 100. Most programs define "on-time" as arriving within a window of the scheduled time, commonly five minutes early to ten minutes late for demand-response service. Define your window before measuring, and apply it consistently.
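For programs that want to automate this, the calculation is straightforward. The sketch below assumes a five-minutes-early to ten-minutes-late window and simple (scheduled, actual) timestamp pairs; both the window values and the data shape are illustrative, not a standard.

```python
from datetime import datetime, timedelta

# Illustrative on-time window for demand-response service:
# up to 5 minutes early, up to 10 minutes late.
EARLY = timedelta(minutes=5)
LATE = timedelta(minutes=10)

def is_on_time(scheduled: datetime, actual: datetime) -> bool:
    """True when the actual pickup falls inside the defined window."""
    return scheduled - EARLY <= actual <= scheduled + LATE

def on_time_performance(trips: list[tuple[datetime, datetime]]) -> float:
    """Percent of completed trips that met the window."""
    on_time = sum(is_on_time(s, a) for s, a in trips)
    return round(100.0 * on_time / len(trips), 1)

# (scheduled, actual) pairs for three completed trips
sample = [
    (datetime(2025, 1, 6, 9, 0),  datetime(2025, 1, 6, 9, 4)),   # 4 min late
    (datetime(2025, 1, 6, 10, 0), datetime(2025, 1, 6, 10, 12)), # 12 min late
    (datetime(2025, 1, 6, 11, 0), datetime(2025, 1, 6, 10, 56)), # 4 min early
]
```

With this sample, two of three trips land inside the window, so OTP is 66.7 percent. Whatever window your program chooses, the key is that it lives in one place in the calculation, so every report uses the same definition.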

Why it matters for operations: OTP is a diagnostic metric. When it drops, it signals something specific: routes are overloaded, scheduling windows are too tight, drivers are starting late, or traffic patterns have shifted. Each root cause has a different fix. Tracking OTP over time tells you when a problem is developing before it becomes a rider complaint pattern.

Why it matters for accountability: Federal transit grant programs and state funding agencies frequently require OTP data as part of outcome reporting. City councils care about it because it is easy to understand and directly reflects service quality. A program that can report consistent 90-plus percent on-time performance has a strong foundation for funding conversations.

The Dublin Connector, an on-demand transit program for seniors and residents with disabilities in Dublin, Ohio, maintains an overall rider satisfaction rating of 4.95 out of 5. That satisfaction level is inseparable from the program's on-time reliability. Riders who cannot consistently predict when their vehicle will arrive stop using the service. Programs that track and manage OTP build the reliability that sustains ridership over time.

Cost Per Trip

Cost per trip measures the average fully loaded cost of delivering one completed ride. It is the most important financial transit KPI for small agency programs because it connects operational decisions directly to budget impact.

How to calculate it: Add up all program costs for a defined period: driver wages, vehicle costs (fuel, maintenance, depreciation or lease), software, administrative overhead, and any contracted services. Divide by the total number of completed trips in that period.
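As a quick sketch, the fully loaded calculation can be expressed as a sum over cost categories divided by completed trips. The category names and dollar figures below are made up for illustration; use whatever line items your budget actually tracks.

```python
def cost_per_trip(costs: dict[str, float], completed_trips: int) -> float:
    """Fully loaded cost per completed trip for one reporting period."""
    if completed_trips == 0:
        raise ValueError("no completed trips in the period")
    return sum(costs.values()) / completed_trips

# Illustrative monthly cost breakdown, in dollars
monthly_costs = {
    "driver_wages": 22_000,
    "fuel": 3_000,
    "maintenance": 1_500,
    "vehicle_lease": 4_000,
    "software": 1_200,
    "admin_overhead": 2_300,
}
```

At 2,000 completed trips, this example works out to $17.00 per trip. Keeping the cost categories explicit in the data structure makes it easy to see which line item is driving a change when the number moves.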

Why it matters for operations: Cost per trip makes efficiency gains visible. If you add a vehicle and cost per trip drops, the vehicle is paying for itself in utilization. If cost per trip is rising while trip volume is flat, something in the cost structure needs attention. The metric also lets you benchmark: if your program costs $18 per trip and a comparable program in a neighboring city costs $12, you have a reason to look at route efficiency and scheduling practices.

Why it matters for accountability: Grant agencies and city councils want to know what they are buying. Cost per trip translates program spending into a unit that is easy to evaluate and compare. A program that can say "we deliver a trip for $X, serving seniors and residents with disabilities who have no other transportation option" has a cleaner budget justification than one that can only report a total expenditure figure.

Software-driven programs generally produce lower cost per trip over time because route optimization reduces deadhead miles and scheduling efficiency allows each vehicle to complete more trips per shift. Hilliard Express, a door-to-door program for older adults and residents with disabilities in Hilliard, Ohio, saw trips per driver increase 48 percent year over year as scheduling efficiency improved. That efficiency gain translates directly into lower cost per trip: the same driver resources produce significantly more rides.

Trips Per Vehicle

Trips per vehicle measures how many completed trips each vehicle in the fleet delivers over a given period, typically daily, weekly, or monthly. It is a utilization metric that tells you whether your fleet is appropriately sized for your trip volume and how efficiently your scheduling is using the capacity you have.

How to calculate it: Divide total completed trips by the number of active vehicles in the fleet for the period. Use active vehicles, not total fleet size, to avoid skewing the number with vehicles that were out of service.
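A minimal sketch of that calculation, with the active-vehicle filter built in so out-of-service vehicles never dilute the figure (the fleet records here are invented for illustration):

```python
def trips_per_vehicle(completed_trips: int, fleet: list[dict]) -> float:
    """Completed trips per active vehicle for the period.
    Vehicles out of service are excluded from the denominator."""
    active = [v for v in fleet if v["in_service"]]
    if not active:
        raise ValueError("no active vehicles in the period")
    return completed_trips / len(active)

# Five-vehicle fleet, one down for maintenance
fleet = [
    {"id": "V1", "in_service": True},
    {"id": "V2", "in_service": True},
    {"id": "V3", "in_service": False},  # out of service this period
    {"id": "V4", "in_service": True},
    {"id": "V5", "in_service": True},
]
```

With 480 completed trips and four active vehicles, the metric is 120 trips per vehicle. Dividing by the total fleet of five would have understated utilization at 96, which is exactly the skew the active-vehicle rule prevents.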

Why it matters for operations: Low trips per vehicle typically indicates one of three things: routes are not optimized, there is more capacity than demand requires, or scheduling is leaving gaps in vehicle utilization. High trips per vehicle signals the opposite: the fleet may be strained, and scheduling changes or an additional vehicle may be needed to maintain on-time performance.

Why it matters for accountability: Trips per vehicle is a fleet efficiency argument. When a program can show that its vehicles are running at high utilization, it demonstrates that existing resources are being used well before asking for more. When trips per vehicle is growing over time, it shows that the program is becoming more efficient, which is exactly the kind of trend that supports continued or expanded funding.

The 48 percent year-over-year increase in trips per driver for Hilliard Express is a direct measure of this efficiency improving over time. It reflects better scheduling, better route construction, and a program that learned to do more with the vehicles and drivers it already had.

No-Show Rate

No-show rate measures the percentage of scheduled trips where the rider was not present at pickup. It is one of the most operational transit KPIs on this list because no-shows have a direct cost: a vehicle and driver travel to a pickup location and return without completing a trip, consuming time and fuel that could have served another rider.

How to calculate it: Divide the number of no-show trips by the total number of scheduled trips in the period, then multiply by 100. Track no-shows separately from cancellations. A cancellation made with adequate notice allows the schedule to adjust. A no-show does not.
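One way to keep that separation honest is to count trips by status rather than by a single "missed" bucket. The status labels below are illustrative; the point is that cancellations with notice are tallied but never counted as no-shows.

```python
from collections import Counter

def no_show_rate(statuses: list[str]) -> float:
    """No-shows as a percent of all scheduled trips.
    Assumed statuses: 'completed', 'no_show', 'cancelled' (with notice)."""
    counts = Counter(statuses)
    scheduled = sum(counts.values())
    return round(100.0 * counts["no_show"] / scheduled, 1)

# Illustrative month: 90 completed, 5 no-shows, 5 cancellations with notice
month = ["completed"] * 90 + ["no_show"] * 5 + ["cancelled"] * 5
```

This example yields a 5 percent no-show rate. Folding the cancellations into the no-show count would have doubled the reported rate, which is why the two must be tracked separately.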

Why it matters for operations: Programs with high no-show rates typically have one or more of the following issues: inadequate rider notification systems, riders who are booking trips they do not actually need, or scheduling windows that are too long. Each is addressable. Automated trip reminders reduce no-shows significantly for programs that have not had them. Cancellation policies with a defined notice window give riders a clear path to cancel instead of simply not showing up.

Why it matters for accountability: No-show rate appears in federal transit reporting requirements. It is also a program efficiency argument: a high no-show rate is money spent delivering zero service. Demonstrating a low and declining no-show rate shows that the program is running efficiently and that riders are engaged with the service.

Platforms with automated rider notifications, including trip confirmation, vehicle ETA, and day-of reminders, consistently produce lower no-show rates than programs that rely on phone-based scheduling with no automated communication. When riders know exactly when their vehicle is arriving, they are present. When they are uncertain, they sometimes are not.

Rider Satisfaction

Rider satisfaction measures how riders rate the quality of their experience. It is typically collected through post-trip surveys, star ratings in a rider app, or periodic feedback mechanisms.

How to calculate it: Most programs use an average rating on a 1-5 or 1-10 scale. Collect ratings systematically, either through automated post-trip prompts or periodic surveys, and track the average over time. Segment by route or driver when the data volume supports it.
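The per-driver breakdown is the part worth automating. A minimal sketch, assuming ratings arrive as (driver_id, rating) pairs on a 1-5 scale (both the data shape and the driver IDs are illustrative):

```python
from collections import defaultdict

def satisfaction_by_driver(records: list[tuple[str, int]]) -> dict[str, float]:
    """Average post-trip rating per driver, from (driver_id, rating) pairs."""
    buckets: dict[str, list[int]] = defaultdict(list)
    for driver, rating in records:
        buckets[driver].append(rating)
    return {d: round(sum(r) / len(r), 2) for d, r in buckets.items()}

# Illustrative post-trip ratings
ratings = [("D1", 5), ("D1", 5), ("D1", 4), ("D2", 3), ("D2", 4)]
```

Here driver D1 averages 4.67 and D2 averages 3.5, a gap the fleet-wide average of 4.2 would hide. That gap, not the overall number, is what tells you where to coach.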

Why it matters for operations: Rider satisfaction is a leading indicator of ridership trends. Satisfaction that is declining before ridership drops gives programs time to identify and fix the underlying issues. Satisfaction data broken down by driver or route identifies where service quality is inconsistent, which is the most actionable form of this metric.

Why it matters for accountability: Rider satisfaction data is among the most persuasive evidence a small agency can present to city council or grant agencies. It puts a human number on program quality in a way that operational metrics alone do not. A program serving seniors and residents with disabilities that maintains a 4.95 out of 5 rider satisfaction rating, as the Dublin Connector does, has a straightforward case for its value to the community.

Approximately 80 percent of Dublin Connector ridership comes from seniors and residents with disabilities. For this population, consistent service quality is not a preference. It is a dependency. The satisfaction rating reflects a program that has earned the trust of riders who have no alternative if it fails.

Ridership and Trip Volume Trends

Total trips completed and month-over-month or year-over-year ridership trends are the most foundational transit KPIs. They answer the most basic accountability question: is the program being used, and is usage growing?

How to calculate it: Count completed trips per period. Track the trend over time. Segment by rider type (seniors, residents with disabilities, general public) if your program serves multiple populations with different reporting requirements.
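Turning monthly counts into a trend line is a one-liner. The sketch below computes percent change between consecutive months from an ordered list of trip totals; the sample numbers are invented.

```python
def month_over_month(counts: list[int]) -> list[float]:
    """Percent change between consecutive monthly trip totals."""
    return [round(100.0 * (b - a) / a, 1) for a, b in zip(counts, counts[1:])]

# Illustrative monthly trip totals: Jan, Feb, Mar
monthly_trips = [1000, 1100, 1045]
```

This example produces changes of +10.0 percent and then -5.0 percent. A single down month is noise; the same calculation run over twelve months is the trend line a council presentation needs.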

Why it matters for operations: Ridership trends tell you whether demand is being met or whether there is unmet need the program is not capturing. Flat ridership in a growing service area may indicate a marketing or awareness problem. Consistently growing ridership signals that the program is working and that capacity planning decisions are coming.

Why it matters for accountability: Trip volume is the headline number for any program report. The Dublin Connector has completed more than 29,900 rides since launch. Hilliard Express has delivered over 11,000 rides. These are the numbers that lead a city council presentation because they are concrete, easy to understand, and directly answer the question: "Is this program serving people?"

For grant reporting, trip volume is typically a required metric. For city council audiences, it translates the program's existence into a tangible count of community members served. Both audiences need it, and both respond to it.

Accessibility and ADA Trip Metrics

For programs serving seniors and riders with disabilities, tracking accessibility-specific metrics is both an operational priority and a compliance requirement. The most relevant are the share of completed trips that are wheelchair trips, and the breakdown of ridership by priority population served.

How to calculate them: Wheelchair trip percentage is total wheelchair trips divided by total completed trips. Priority ridership percentage is riders meeting program eligibility criteria (seniors, riders with disabilities) divided by total unique riders.
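Both are simple ratios, and keeping them as named calculations avoids the common mistake of mixing the denominators (trips for one, unique riders for the other). A minimal sketch with illustrative numbers:

```python
def wheelchair_trip_pct(wheelchair_trips: int, total_trips: int) -> float:
    """Share of completed trips that were wheelchair trips."""
    return 100.0 * wheelchair_trips / total_trips

def priority_ridership_pct(priority_riders: int, unique_riders: int) -> float:
    """Share of unique riders who meet program eligibility criteria."""
    return 100.0 * priority_riders / unique_riders
```

With 300 wheelchair trips out of 1,000 completed trips, the wheelchair trip rate is 30 percent; with 80 eligible riders out of 100 unique riders, priority ridership is 80 percent. Note that one metric is trip-based and the other rider-based, so the two cannot be added or averaged together.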

Why they matter for operations: Wheelchair trips require accessible-vehicle assignment. Tracking the percentage ensures that vehicle scheduling is appropriately matching accessible vehicle supply to demand. A program where wheelchair trips are growing needs to confirm that vehicle assignments and scheduling rules are keeping pace.

Why they matter for accountability: Section 5310 and other federal grant programs specifically fund transportation for seniors and individuals with disabilities. Reporting the percentage of trips serving those populations demonstrates that the program is fulfilling its grant purpose. Dublin Connector's 80 percent priority ridership figure and Hilliard Express's 30 percent wheelchair trip rate are both proof points that speak directly to this accountability requirement.

Putting Transit KPIs to Work

Tracking these metrics is only useful if they drive decisions. The programs that get the most out of performance data use it in three ways: as an operational monitoring tool reviewed regularly by the team running the program, as a reporting input for grants and city council presentations, and as a year-over-year trend line that demonstrates program improvement over time.

The challenge for most small agencies is that producing this data manually is expensive. Compiling OTP from driver call-in logs, calculating cost per trip from budget exports, and assembling no-show data from spreadsheet entries takes staff time that small programs do not have in abundance.

Purpose-built transit software changes that equation. SHARE's reporting module tracks on-time performance, trip volume, no-show rates, rider satisfaction, and accessibility metrics automatically from the same operational data that runs the program. There is no separate reporting process. The data that drives the dispatch dashboard is the same data that produces the performance reports.

That is why programs that run on software-driven platforms tend to show stronger metric trends over time: they are measuring consistently, they can see problems as they develop, and they have the data to show improvement when it happens.

If your agency is looking for a cleaner path to the transit KPIs that matter most for municipal transit programs, and to the reporting that grant agencies and city councils require, explore how SHARE's scheduling and reporting tools work for programs like yours.