CASE STUDY · LABOR OPTIMIZATION

Employee Scheduling Optimization

Using statistical modeling in R to build demand-driven staffing grids from historical POS data — eliminating gut-feel scheduling, cutting overstaffed hours by 30+/week, and reducing labor cost as a percentage of revenue by 8–12%.

RStudio · tidyverse · forecast (ARIMA) · ggplot2 · Excel · POS Integration

Introduction

Most restaurant schedules are built the same way they were built ten years ago: the manager opens last week’s schedule, makes a few tweaks, and posts it. The result is a staffing plan that reflects habit, not demand. Slow Tuesday dinners get the same headcount as peak Friday nights. Overtime accumulates because no one modeled the threshold. And the P&L tells you what went wrong a month after it happened.

This project applies time-series forecasting and regression modeling in RStudio to build staffing grids that match labor to actual demand by hour and station. The model ingests POS transaction data, historical covers, and existing schedules, then outputs an optimized staffing plan with projected cost savings — before the week starts.

The methodology is now deployed as the TableStandards Labor & Scheduling Optimization service for independent restaurant operators.

Problems Identified

1. Flat Scheduling on Uneven Demand

A full-service restaurant running 150–400 covers per day had the same base staffing template across all weekdays. POS data showed Tuesday dinner averaged 165 covers while Friday hit 380 — yet both had 8 FOH staff scheduled. The model quantified 34.5 excess hours per week from this mismatch alone.
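As a rough illustration of the mismatch, demand-justified labor hours for a daypart can be derived from covers and a target covers-per-labor-hour. The sketch below reuses the cover counts from this case and the model's CPLH target of 22; it is back-of-envelope arithmetic, not the model itself:

```r
# Labor hours justified by demand at a target CPLH.
# Cover counts are from the case study; the calculation is illustrative.
target_cplh       <- 22
tue_dinner_covers <- 165
fri_dinner_covers <- 380

ceiling(tue_dinner_covers / target_cplh)   # 8 justified labor hours
ceiling(fri_dinner_covers / target_cplh)   # 18 justified labor hours
```

Friday dinner justifies more than twice the labor of Tuesday, yet both dayparts were staffed from the same template.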

2. No Covers-per-Labor-Hour Benchmark

The operation had no metric for labor efficiency by station or daypart. “Was lunch profitable?” was answered with a feeling, not a number. Without CPLH benchmarking, there was no way to identify which shifts were burning payroll and which were understaffed enough to hurt service.
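For context, CPLH is simple arithmetic: covers served divided by scheduled labor hours for the same station and daypart. A minimal sketch with hypothetical figures:

```r
# Covers-per-labor-hour: covers served / labor hours scheduled for the
# same station and daypart. Figures below are hypothetical.
cplh <- function(covers, labor_hours) covers / labor_hours

cplh(440, 20)   # 22, inside the 22-26 full-service FOH target band
cplh(440, 29)   # ~15.2: same demand, overstaffed
```

The metric only becomes actionable when it is computed per station and per daypart, which is what the model automates.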

3. Reactive Overtime Management

Overtime was discovered on payday, not during scheduling. No one was modeling hours against OT thresholds before the schedule was posted. The result: 10+ overtime hours per week that could have been redistributed with a single schedule adjustment.
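The fix is mechanical: total each employee's scheduled hours and flag anything over the OT threshold before the schedule posts. A base-R sketch (the roster and the 40-hour threshold are illustrative assumptions):

```r
# Flag overtime risk at scheduling time: any employee whose scheduled
# weekly hours exceed the threshold. Roster data is illustrative.
flag_ot <- function(schedule, threshold = 40) {
  totals <- aggregate(hours ~ employee, data = schedule, FUN = sum)
  totals$ot_hours <- pmax(totals$hours - threshold, 0)
  totals[totals$ot_hours > 0, ]
}

schedule <- data.frame(
  employee = c("A", "A", "B", "B", "C"),
  hours    = c(32, 12, 25, 10, 38)
)
flag_ot(schedule)
# Employee A: 44 scheduled hours, 4 OT hours to redistribute
```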

Methodology

01
Data Ingestion

Pull & Clean

Ingest 4–6 weeks of POS transactions, labor schedules, covers by hour, and revenue by daypart. Clean and normalize in R. Join event calendars and holiday flags for seasonal adjustment.
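A condensed sketch of this step; the column names, timestamps, and the event table are illustrative stand-ins for a real POS export (which would arrive via `read_csv()` or similar):

```r
library(tidyverse)
library(lubridate)

# In production these rows come from a POS export; columns are illustrative.
pos_data <- tibble(
  transaction_time = c("2026-01-20 18:15:00", "2026-01-20 19:05:00",
                       "2026-01-23 19:30:00"),
  covers = c(4, 2, 6)
) %>%
  mutate(
    ts          = ymd_hms(transaction_time),
    date        = as_date(ts),
    hour        = hour(ts),
    day_of_week = wday(ts, label = TRUE)
  ) %>%
  filter(covers > 0)                     # drop voided / zero-cover checks

# Join an event calendar so holidays and local events can be
# controlled for in the seasonal adjustment
events <- tibble(date = as_date("2026-01-23"), holiday_flag = 1)

pos_data <- pos_data %>%
  left_join(events, by = "date") %>%
  replace_na(list(holiday_flag = 0))
```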

02
Statistical Modeling

Forecast & Benchmark

Forecast covers per hour using ARIMA and STL seasonal decomposition. Calculate CPLH by station and daypart. Run regression models tuned to the operation’s specific demand patterns.
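A minimal sketch of the forecasting step using the forecast package, with a simulated hourly covers series standing in for real POS history (one "season" here is a 12-hour service day, 11 AM–10 PM):

```r
library(forecast)
set.seed(42)

# Simulated 4 weeks of hourly covers with a daily demand curve
# (a stand-in for the real POS series)
daily_curve <- c(10, 25, 30, 15, 8, 6, 12, 35, 55, 60, 40, 18)
covers_hist <- ts(rep(daily_curve, 28) + rnorm(12 * 28, sd = 4),
                  frequency = 12)

# STL seasonal decomposition + ARIMA on the seasonally adjusted series,
# then a one-day-ahead (12 service hours) forecast
fit <- stlm(covers_hist, s.window = "periodic", method = "arima")
fc  <- forecast(fit, h = 12)
round(fc$mean, 1)   # forecast covers for each service hour tomorrow
```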

03
Schedule Output

Optimize & Monitor

Generate optimal staffing grids with labor cost projections. Flag OT threshold risks. Deliver weekly variance reports comparing scheduled vs. actual via automated R Markdown.
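The variance arithmetic itself is straightforward. A sketch using the scheduled and optimal hours from the weekly report in this case study, with an assumed (not stated in the source) fully loaded labor rate of $36/hr:

```r
# Weekly variance: scheduled vs. model-optimal hours.
# The $36/hr loaded labor rate is an illustrative assumption.
scheduled_hours <- 412.5
optimal_hours   <- 378.0
loaded_rate     <- 36

variance_hrs <- scheduled_hours - optimal_hours    # 34.5
est_savings  <- variance_hrs * loaded_rate         # 1242
cat(sprintf("Variance: +%.1f hrs | Est. savings: $%.0f/wk\n",
            variance_hrs, est_savings))
```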

The R Model

# —— TableStandards Labor Model ——
library(tidyverse)
library(forecast)
library(lubridate)

# Average and 90th-percentile covers for each day-of-week / hour cell
covers <- pos_data %>%
  group_by(day_of_week, hour) %>%
  summarise(
    avg_covers = mean(covers),
    p90_covers = quantile(covers, 0.9),
    .groups = "drop"
  )

# Staffing grid at the target covers-per-labor-hour
optimal_staff <- predict_staffing(
  covers, labor_model,
  target_cplh = 22,
  service_level = "full"
)

# Projected weekly savings: $1,840

Analysis: What the Model Found

Staffing Heatmap — Overstaffed Dayparts

The model’s ggplot2 heatmap immediately surfaced the problem zones. Blue cells mark staffing justified by covers; orange and red cells mark staff on the clock without the demand to justify them. The Tuesday–Wednesday 2–5 PM block was the single largest source of wasted labor dollars.

[Heatmap: days of week (Mon–Sun) × service hours (11 AM–10 PM)]

Blue = demand-justified staffing  ·  Orange/red = overstaffed (labor exceeds demand)
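A minimal ggplot2 sketch of how a heatmap like this is built; the flat 8-person template matches the case, but the demand rule inside `justified` is a toy stand-in for the model's real per-cell output:

```r
library(tidyverse)

# Scheduled vs. demand-justified staff for every day/hour cell.
# The 'justified' rule is a toy stand-in for the model's output.
days <- c("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun")
grid <- expand_grid(
  day  = factor(days, levels = days),
  hour = 11:22
) %>%
  mutate(
    scheduled = 8,                                            # flat template
    justified = ifelse(day %in% c("Fri", "Sat") | hour >= 18, 8, 5),
    excess    = scheduled - justified                         # > 0 = overstaffed
  )

p <- ggplot(grid, aes(hour, day, fill = excess)) +
  geom_tile(color = "white") +
  scale_fill_gradient2(low = "steelblue", mid = "white",
                       high = "orangered", midpoint = 0) +
  labs(title = "Overstaffed dayparts (scheduled minus justified staff)",
       x = "Hour of day", y = NULL, fill = "Excess staff")
p
```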

Model Output — Weekly Variance

# Weekly Labor Optimization Report
# Period: Jan 20 – Jan 26, 2026

Scheduled Hours:  412.5
Optimal Hours:    378.0
Variance:        +34.5 hrs
Est. Savings:    $1,242/wk

# Top overstaffed dayparts:
Tue 2–4 PM  +2.0 staff
Wed 3–5 PM  +1.5 staff
Sun 11 AM–1 PM +1.0 staff

CPLH Benchmarking

Station   Before CPLH   After CPLH   Change   Industry Target
FOH            20.1         24.6      +22%        22–26
BOH            15.8         18.4      +16%        16–20
Bar            18.5         22.1      +19%        20–25

Results

  • 8–12% labor cost reduction (% of revenue)
  • 30+ weekly overstaffed hours cut
  • 60% overtime hours reduced
  • 18% labor efficiency improvement

Conclusion

Schedule to the forecast, not to last week

The single highest-value output of the model isn’t the heatmap or the CPLH benchmark — it’s the weekly staffing grid generated before the schedule is posted. Shifting from reactive (“what happened”) to predictive (“what will happen”) scheduling is the behavioral change that delivers the savings.

CPLH is the metric that matters

Labor cost as a percentage of revenue is a lagging indicator. Covers-per-labor-hour is a leading one. When managers can see CPLH by station in real time, they make different cut decisions, different call-in decisions, and different scheduling decisions. The model makes CPLH visible at the daypart level.

Overtime is a scheduling problem, not a payroll problem

Every dollar of overtime was foreseeable at the time the schedule was written. The model flags OT threshold risks before the schedule posts, allowing redistribution across staff before it becomes a payroll surprise.

Next Steps

This model is available as a service

  • The TableStandards Labor & Scheduling Optimization service packages this analysis for independent operators
  • Send 4 weeks of POS data and your current schedule — first actionable model delivered within 3 weeks
  • All models built in RStudio and delivered as reproducible R Markdown reports
  • Weekly automated variance reports comparing scheduled vs. actual labor
  • Cost scenario modeling: wage increases, cross-training impacts, reduced hours — see P&L effects before making changes