Building a Manufacturing Dashboard That People Actually Use


Most manufacturing dashboards fail for a simple reason: they are built to impress executives in a boardroom, not to help the people who actually make decisions on the floor. I built the MQTS dashboard in Power BI to fix that problem, and the results were measurable — a 20% increase in OEE and a 30% reduction in unplanned downtime.

Here is how I approached it and what I learned along the way.

The Problem with Gut-Feeling Decisions

Before the dashboard existed, shift supervisors were making production decisions based on experience and intuition. That works when things are running normally. It fails when you need to diagnose why a line has been underperforming for three weeks or why scrap rates spiked on second shift last month.

The data existed in the ERP system. Nobody was using it because pulling a report meant exporting to Excel, filtering manually, and building a pivot table. By the time the analysis was done, the information was stale and the decision had already been made on gut feeling.

What OEE Actually Measures

Overall Equipment Effectiveness is the product of three factors: availability, performance, and quality. Availability measures how much of the scheduled production time the equipment actually ran. Performance measures whether it ran at the expected speed while it was running. Quality measures how many good parts came out versus total parts produced.
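
As a rough sketch of how that translates into Power BI measures, the three factors and their product can be expressed in a few lines of DAX. The base measures named here are illustrative assumptions, not the actual MQTS model:

    // Hypothetical base measures are assumed: [Run Time Minutes],
    // [Planned Production Minutes], [Total Parts], [Good Parts],
    // and [Ideal Cycle Time Minutes]. DIVIDE avoids errors on idle lines.
    Availability = DIVIDE ( [Run Time Minutes], [Planned Production Minutes] )

    Performance =
        DIVIDE ( [Total Parts] * [Ideal Cycle Time Minutes], [Run Time Minutes] )

    Quality = DIVIDE ( [Good Parts], [Total Parts] )

    // OEE is the product of the three factors.
    OEE = [Availability] * [Performance] * [Quality]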

The power of OEE is that it gives you a single number to track, but the real value is in the breakdown. A machine with 85% OEE could be losing that 15% to changeover time, to running below rated speed, or to scrap. Each of those root causes requires a different response. The dashboard had to make that decomposition obvious at a glance.

Designing for Two Audiences

The biggest design decision was building role-based views. A shift supervisor looking at the dashboard on a tablet near the production line needs different information than a plant manager reviewing performance in a weekly meeting.

For the shop floor view, I focused on real-time and shift-level data: current line status, parts produced versus target, active downtime reasons, and top scrap codes for the current run. Large fonts, simple color coding — green, yellow, red — and no more than four metrics visible at once.
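
One detail worth showing: the traffic-light coding can be driven by a measure rather than hard-coded into each visual, so every page stays consistent. A minimal sketch, with illustrative thresholds rather than the ones we actually used:

    // SWITCH(TRUE(), ...) returns the first color whose condition matches.
    // Thresholds are illustrative -- tune them to the plant's own targets.
    Status Color =
        SWITCH (
            TRUE (),
            [OEE] >= 0.80, "Green",
            [OEE] >= 0.65, "Yellow",
            "Red"
        )

Binding a measure like this to a visual's conditional formatting by field value keeps the color logic in one place.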

For the executive view, I built trend analysis across weeks and months, comparative views across lines and shifts, and Pareto charts showing the top contributors to lost time. This layer connected to Power Apps for drill-through into specific incidents and for supervisors to log downtime reasons directly from their mobile devices.
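
A Pareto view like this needs a cumulative-percentage measure behind it. A minimal sketch in the standard DAX pattern, with a hypothetical 'DowntimeReason'[Reason] column and a [Downtime Minutes] base measure standing in for the real model:

    // Cumulative share of downtime, counting every reason whose total is at
    // least as large as the current one -- the usual DAX Pareto pattern.
    Downtime Pareto % =
        VAR CurrentReasonMinutes = [Downtime Minutes]
        VAR RunningTotal =
            SUMX (
                FILTER (
                    ALLSELECTED ( 'DowntimeReason'[Reason] ),
                    [Downtime Minutes] >= CurrentReasonMinutes
                ),
                [Downtime Minutes]
            )
        VAR GrandTotal =
            CALCULATE ( [Downtime Minutes], ALLSELECTED ( 'DowntimeReason'[Reason] ) )
        RETURN
            DIVIDE ( RunningTotal, GrandTotal )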

The role-based filtering was handled through Power BI row-level security tied to Active Directory groups, so each user saw the view relevant to their role without needing to navigate between reports.
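
The data-side half of that setup is a short DAX filter expression on the security role. This is a generic sketch with a hypothetical UserRoles mapping table; the group membership itself lives in Active Directory and is assigned to the role in the Power BI service:

    // Filter expression for a Power BI security role, applied to a
    // hypothetical UserRoles table mapping each user to their lines.
    [UserPrincipalName] = USERPRINCIPALNAME()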

Common Dashboard Mistakes

After building this and seeing other attempts at manufacturing analytics, a few patterns stand out in dashboards that fail to get adoption.

Too many metrics on one screen. If a dashboard requires scrolling, it will not be used on the floor. Prioritize ruthlessly.

No context for the numbers. Showing that OEE is 72% means nothing without a target line or a comparison to last week. Every metric needs a benchmark; a minimal comparison measure is sketched at the end of this section.

Stale data. If the dashboard refreshes once a day, supervisors will not trust it for shift-level decisions. I set up incremental refresh so the shop floor view updated every 15 minutes.

No feedback loop. The dashboard should not be read-only. Supervisors needed a way to annotate downtime events and log root causes. That is where the Power Apps integration became critical — it turned a reporting tool into an operational tool.
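
On the benchmark point above, the comparison does not need to be elaborate. A minimal week-over-week delta in DAX, assuming a contiguous 'Date' table marked as a date table:

    // Week-over-week OEE delta. Assumes a standard 'Date' dimension;
    // measure and table names are illustrative.
    OEE vs Last Week =
        VAR CurrentOEE = [OEE]
        VAR PriorOEE =
            CALCULATE ( [OEE], DATEADD ( 'Date'[Date], -7, DAY ) )
        RETURN
            CurrentOEE - PriorOEE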

The Impact

Within three months of deployment, OEE across the tracked lines increased from the low 60s to consistently above 75%, a roughly 20% improvement. Unplanned downtime dropped by 30% because the data made recurring failure patterns visible before they became crises.

The less measurable but equally important outcome was cultural. Production meetings shifted from debating what happened to discussing what to do about it, because everyone was looking at the same data. That shift in conversation quality is what separates a dashboard that gets used from one that gets ignored.