Mining Customer Error Patterns

How I used AI to analyze thousands of municipal review comments and surface actionable insights for customer conversations

TL;DR

We were hitting delays from customer errors—missing documents, incorrect plot plans, and repeated mistakes. Municipal reviewer comments were siloed per-project with no way to spot patterns across thousands of applications. I pulled comment data from our database, wrote a custom GPT prompt to categorize trends, and iterated with ops leads until the output matched their intuition.

The prototype validated the concept before any engineering work and became immediately useful to ops teams for customer calls. The trend reports enabled data-driven conversations with customers about their error patterns, turning insights into actionable improvement plans.

The Problem

Customer Errors Were Hidden in Per-Project Comments

PermitFlow could optimize internal operations perfectly and still hit delays from customer errors—missing documents, incorrect plot plans, repeated mistakes. Municipal reviewer comments contained valuable signals about these patterns, but they were siloed per-project with no way to aggregate insights across thousands of applications.

Municipal reviewer comments held patterns about customer errors, but were trapped in individual project records

Ops teams knew the patterns existed, but identifying them and sharing them with customers required significant manual work.

"We've already got a thousand projects there in two months. Would love to see a trend for [customer]. That would be valuable as hell. Imagine if I take that and go to the customer—that's like a gold mine to them."

Ops lead

Building the AI Analysis Workflow

I built a custom dashboard in Omni (our internal database tool) and wrote a custom GPT to identify and categorize trends by issue type—setback violations, tree protection requirements, missing documentation. I iterated with ops leads, refining the categorization until the AI output matched their intuition about what mattered. The prototype workflow was simple: CSV export from Omni → custom GPT → formatted trend report.
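The glue around that workflow is thin. Here is a minimal sketch of its two ends: turning the Omni CSV export into a categorization prompt, and parsing the model's reply into ranked trends. The column name, prompt wording, and JSON response schema are all hypothetical stand-ins for what the actual custom GPT used.

```python
import csv
import io
import json

# Hypothetical prompt, mirroring the issue types the ops leads converged on.
PROMPT_TEMPLATE = """You are analyzing municipal plan-review comments.
Group the comments below by issue type (e.g. setback violations,
tree protection, missing documentation). Reply with JSON:
{{"trends": [{{"issue": str, "count": int, "examples": [str]}}]}}

Comments:
{comments}"""


def build_prompt(csv_text, comment_col="reviewer_comment"):
    """Turn an Omni CSV export into a single categorization prompt."""
    rows = csv.DictReader(io.StringIO(csv_text))
    comments = [row[comment_col].strip() for row in rows if row[comment_col].strip()]
    return PROMPT_TEMPLATE.format(comments="\n".join(f"- {c}" for c in comments))


def parse_trend_report(model_output):
    """Parse the model's JSON reply and rank trends by frequency."""
    trends = json.loads(model_output)["trends"]
    return sorted(trends, key=lambda t: t["count"], reverse=True)
```

In practice the prompt also carried the iteration with ops leads: each refinement round adjusted the issue-type examples until the categories matched their intuition.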

Example Trend Report Output

The AI analyzes municipal reviewer comments and groups them by issue type, showing frequency counts and representative examples. This Charlotte-Mecklenburg workspace report reveals that siting/dimensional compliance issues (28 comments) are the top source of delays, followed by tree protection requirements (26 comments) and missing documentation (17 comments).

Workspace Trends Report: Charlotte-Mecklenburg

1. Siting / Dimensional Compliance (28 comments)

Reviewers frequently flagged noncompliance with zoning setbacks, easement encroachments, and pool placement requirements. Comments often cite property line distances and failure to show required setback measurements.

"All swimming pools shall be located a minimum of ten (10) feet from any property line measured to the water's edge."
"The survey shows pool encroaching into the SDE. No encroachment into the easement will be allowed."
2. Tree Protection / Heritage Trees (26 comments)

Urban Forestry compliance continues to generate high comment volume. Reviewers emphasized tree protection fencing, critical root zones, and heritage tree identification.

"Show the tree protection fence around the heritage tree. Please keep in mind the room needed for construction."
3. Documentation / Missing Plan Details (17 comments)

Recurring pattern of incomplete plan sets — missing updates, outdated plats, absence of required documents like tree save areas or erosion control details.

Secondary issues: Encroachment into Easements (6), Fence Not Shown on Plan (5), Critical Root Zone Violations (4)
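The report layout itself (numbered top issues with counts, then a one-line roll-up of secondary issues) is mechanical once the categorized counts exist. A minimal rendering sketch, with the cutoff of three top issues as an assumption:

```python
def format_report(workspace, issue_counts, top_n=3):
    """Render categorized counts into the trend-report layout:
    numbered top issues, then a one-line list of secondary issues."""
    ranked = sorted(issue_counts.items(), key=lambda kv: kv[1], reverse=True)
    lines = [f"Workspace Trends Report: {workspace}", ""]
    for i, (issue, n) in enumerate(ranked[:top_n], 1):
        lines.append(f"{i}. {issue} ({n} comments)")
    secondary = ", ".join(f"{issue} ({n})" for issue, n in ranked[top_n:])
    if secondary:
        lines.append(f"Secondary issues: {secondary}")
    return "\n".join(lines)
```

Keeping the formatting outside the model means the GPT only has to get the categorization right; layout stays deterministic.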

From Prototype to Product

The prototype validated the concept before any engineering work began. Once ops teams started using the reports in customer calls and saw the value, I wrote a PRD to productize it with automatic processing and weekly refreshes via a workspace insights page.

The production version expanded on the prototype with three key capabilities: cross-sectioning trends by municipality versus by customer (to identify whether issues were jurisdiction-specific or customer-specific), time-based analysis to surface short-term versus long-term patterns (catching emerging issues early), and automated weekly refreshes to keep insights current without manual exports.
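The cross-sectioning idea can be sketched in a few lines, assuming each categorized comment carries a record with `municipality`, `customer`, `issue`, and `date` fields (a hypothetical schema). Pivoting the same records along a different key, or restricting to a recent window, gives the jurisdiction-vs-customer and short-vs-long-term views:

```python
from collections import Counter
from datetime import date


def cross_section(records, key, since_days=None, today=None):
    """Count issues along one dimension ('municipality' or 'customer'),
    optionally restricted to a recent window to catch emerging spikes."""
    today = today or date.today()
    counts = Counter()
    for r in records:
        if since_days is not None and (today - r["date"]).days > since_days:
            continue  # outside the short-term window
        counts[(r[key], r["issue"])] += 1
    return counts
```

If the same issue dominates across every customer in one municipality, it is jurisdiction-specific; if it follows one customer across municipalities, it is customer-specific.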

Reflections

1. Building the prototype first with real data made the PRD way easier to sell internally. Instead of a pitch, I could show actual outputs that ops were already using in customer calls.

2. This shaped product direction in unexpected ways: surfacing issues internally is useful, but sharing them with customers is where the real value is. A customer who can see that 30% of their rejections come from missing plot plan details can actually fix that. Nobody else gives them that data.