
# Flagging locations that fall behind on training

Locations fall behind. The question is whether HQ finds out on Monday or six weeks later when an auditor flags it. The flagging system is what closes that gap. It needs three things to work: thresholds calibrated to your network's normal range, alerts that go to the right person (not the entire L&D mailing list), and an escalation path that turns a flag into an intervention without three layers of approval.

## Setting thresholds

The wrong move is picking a round number. "Flag any location below 90%" sounds tidy but ignores your network's actual distribution. If your network sits at 75% completion across the board, flagging at 90% means everything is flagged and nothing is. Calibrate to your network.

The right approach is relative, not absolute. Flag locations that are:

1. **More than one standard deviation below the network average.** Statistical, not arbitrary. If your network averages 87% with a standard deviation of 6, anything below 81% gets flagged.
2. **Trending down for two consecutive weeks.** This catches sites that are sliding before they cross the first threshold.
3. **Carrying any overdue compliance items.** Compliance overdue is a different category: flag any non-zero count immediately, regardless of overall completion.

Three thresholds, three different signals. A single threshold misses cases two and three.

## Who gets the alert

The alert routing rule: the closest accountable person, plus their manager, never the entire mailing list. For an underperforming location, that is the district manager (closest accountable) and the ops director (their manager). The store manager should also see it on their own dashboard, but a separate alert to them is usually unnecessary; they already know.

For a compliance-overdue flag, the routing escalates: the district manager, the ops director, and the compliance/audit lead at HQ. Compliance overdue is regulatory exposure; it gets visibility.
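The three rules above can be sketched in a few lines. This is an illustrative sketch, not Aristotl's implementation: the function name, the data shapes (a weekly completion history per site, a count of overdue compliance items), and the sample numbers are all assumptions.

```python
# Sketch of the three flagging rules: one-standard-deviation relative
# threshold, two-week decline, and any overdue compliance items.
from statistics import mean, stdev

def flag_location(history, overdue_compliance, network_scores):
    """Return the list of flags raised for one location.

    history            -- that location's weekly completion %, oldest first
    overdue_compliance -- count of overdue compliance items
    network_scores     -- current completion % for every location in the network
    """
    flags = []

    # Rule 1: more than one standard deviation below the network average.
    avg, sd = mean(network_scores), stdev(network_scores)
    if history[-1] < avg - sd:
        flags.append("below_threshold")

    # Rule 2: trending down for two consecutive weeks.
    if len(history) >= 3 and history[-1] < history[-2] < history[-3]:
        flags.append("two_week_decline")

    # Rule 3: any overdue compliance items, regardless of completion.
    if overdue_compliance > 0:
        flags.append("compliance_overdue")

    return flags

network = [90, 88, 85, 87, 92, 81, 89, 84]   # hypothetical network completion %
print(flag_location([90, 84, 72], overdue_compliance=1, network_scores=network))
```

Note that a site can carry multiple flags at once; each one carries a different signal, so they should not be collapsed into a single boolean.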
The wrong routing, "send all training alerts to the L&D team," generates noise that gets filtered into a folder nobody opens. Alerts go to people who can act on them.

## The alert format

A useful alert has four lines:

1. The flag type: "Location below threshold" or "Compliance overdue".
2. The location and metric: "Site 47, 72% completion (network average 87%)".
3. The cause, if known: "Three new hires not yet enrolled in onboarding."
4. The action: "District manager to confirm enrollment by EOD Wednesday."

Four lines. The alert is the action item; the dashboard has the detail.

## The escalation path

When a location is flagged, what happens?

**Week 1.** The district manager owns the intervention. They contact the store manager, identify the cause (new hires not enrolled, a manager change, a system access issue), and resolve it.

**Week 2.** If the location is still flagged, the ops director joins. This usually means the cause is structural: a manager who is not engaged, a location with unusually high turnover, a broken tablet.

**Week 4.** If the location is still flagged, it goes on the franchise-relations agenda. At this point, training non-completion is a symptom of a deeper operational issue.

This cadence (week 1 district, week 2 ops, week 4 franchise relations) keeps the chain accountable without burning through the ops director's attention on every micro-flag.

## What flagging tells you about the system

If 30% of your locations get flagged every week, either the threshold is wrong or the system itself is broken. A healthy network has 5 to 10% of locations flagged in any given week, with the specific locations rotating. If the same three sites get flagged every week, the system has identified the right problem; the intervention is what is failing.

## Automation

Flags should fire automatically.
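Automatic firing can be as simple as a scheduled job that renders the four-line alert and routes it by flag type. A minimal sketch, assuming hypothetical role names and a hand-rolled routing table; none of this is Aristotl's actual API.

```python
# Route each flag type to the recipients named above: district manager
# plus ops director for underperformance, plus the compliance lead for
# compliance-overdue flags.
ROUTING = {
    "below_threshold":    ["district_manager", "ops_director"],
    "compliance_overdue": ["district_manager", "ops_director", "compliance_lead"],
}

def build_alert(flag_type, location, metric, cause, action):
    """Render the four-line alert: flag type, location/metric, cause, action."""
    return "\n".join([
        f"Flag: {flag_type}",
        f"Location: {location}, {metric}",
        f"Cause: {cause or 'unknown'}",
        f"Action: {action}",
    ])

def fire(flag_type, location, metric, cause, action):
    """Return (recipient roles, alert text) for one flagged location."""
    alert = build_alert(flag_type, location, metric, cause, action)
    return ROUTING[flag_type], alert

recipients, alert = fire(
    "below_threshold", "Site 47", "72% completion (network average 87%)",
    "Three new hires not yet enrolled in onboarding",
    "District manager to confirm enrollment by EOD Wednesday",
)
print(recipients)
print(alert)
```

The routing table is the whole design: adding a new flag type means adding one entry, not touching the alert code.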
Aristotl's dashboard supports threshold-based alerts out of the box: set the percentile and standard-deviation rules once, and the system fires alerts to the configured recipients on a daily or weekly cadence. No L&D coordinator manually scanning the dashboard for laggards. No spreadsheet of "sites I should chase this week." The system does the scanning; the L&D coordinator does the intervention.

## What this catches that manual review misses

Manual review at scale catches the obvious. The site at 30% completion is going to get noticed eventually. What manual review misses is the site that was at 90% last quarter and is now at 78%, two weeks into a quiet decline. That is the case the relative threshold catches first. By the time it shows up on a manual eyeball-the-dashboard scan, the decline has been going on for a month.
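The quiet-decline case is easy to demonstrate with made-up numbers: a site that slid from 90% to 78% clears a round-number 75% bar but not the relative one.

```python
# Hypothetical network: six healthy sites plus one ("Site 9") that has
# quietly declined to 78%. A fixed 75% threshold misses it; the relative
# threshold (network average minus one standard deviation) flags it.
from statistics import mean, stdev

network = [90, 91, 88, 87, 92, 89, 78]   # last entry is Site 9 this week
site_9 = network[-1]

absolute_flag = site_9 < 75                              # round-number rule
relative_flag = site_9 < mean(network) - stdev(network)  # relative rule

print(absolute_flag, relative_flag)
```

With these numbers the network averages about 88% with a standard deviation near 5, so the relative bar sits around 83%: high enough to catch the slide weeks before the absolute bar would.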

Ready to put this into practice?

Book a demo