You’re staring at a report that should be simple. A list of assets. Locations. Status. Instead you get mismatched locations, half-filled histories, and duplicates that look almost the same but not quite. An asset shows up in two places at once. Another one just disappears. It’s in the system, technically, but no one can find it on the floor.
You stop trusting the report. Then you stop trusting the asset records behind it. And that’s when things get expensive.
The argument usually starts there. Is the tool broken? Did someone configure it badly? Or are people just not using it the way they’re supposed to? Maybe the real problem is no one owns the data, so everyone edits it and no one feels responsible.
Blame feels productive for about five minutes. Then you’re back where you started.
What you actually need is a way to trace where the errors enter the lifecycle. Creation. Update. Transfer. Retirement. Not a lecture about data quality. Not a heroic cleanup project that makes the numbers look good for a quarter and then slowly drifts back to the same mess.
If you can name the type of error and connect it to the step where it was introduced, the system vs. process question stops being abstract. It gets specific.
What “bad data” looks like when you’re the one dealing with it
Bad data isn’t theoretical. It shows up when someone says, “I can’t find the asset,” and they’re not being dramatic. The tag says one location. The system says another. The preventive maintenance schedule pulls a serial number that doesn’t match what’s actually bolted to the floor.
Audit findings start stacking up. An asset listed as active was scrapped last year. A critical piece of equipment has no maintenance history attached to it. The cost center is wrong, so Finance keeps asking questions no one wants to answer.
Duplicates are the quiet killers. Two asset IDs for the same machine. Same serial number, slightly different description. One record has the maintenance history. The other has the correct location. Neither is fully right. People pick one at random just to get the work order closed.
Location drift happens slowly. A unit gets moved for a short-term project. No one updates the record because it’s “temporary.” Three months later it’s still there. Someone swaps tags during a repair and means to fix it later. Later never comes. Temporary storage becomes permanent because the system never got the memo.
Maintenance history goes missing in less obvious ways. Work orders closed without linking to the right asset. Meter readings entered under the wrong record. You don’t notice until you try to prove compliance or schedule the next PM and realize the history doesn’t add up.
Then the downstream damage starts to show. Planning and budgeting based on fiction. Teams arguing about whose numbers are right. People wasting an hour chasing a record that should take thirty seconds to pull up. None of this feels dramatic in the moment. It just feels slow. Annoying. Friction that shouldn’t be there.
This isn’t theory. It’s what breaks down on a Tuesday afternoon when someone needs the data to actually mean something.
Three buckets of causes: system rules, process handoffs, and governance ownership
Once you stop arguing about whose fault it is, most issues fall into three buckets.
A system problem is the simplest to spot. The software allows something it shouldn’t. No required fields on creation. No validation rules on serial numbers. The system happily accepts two records with the same tag because nothing tells it not to. Integrations push data in from procurement or mobile apps without checking whether the location code even exists. If the guardrails aren’t there, people will drive straight off the road.
A process problem feels different. The rules might exist, but the handoffs are messy. Receiving logs an asset, but tagging happens days later. Maintenance closes a work order but forgets to update the meter reading. A move happens after hours and the record gets updated “tomorrow.” Tomorrow turns into next week. The steps aren’t clearly owned, or they rely on someone remembering to do one extra thing at the end of a long shift.
Governance problems sit above both. Who owns the location field? Who decides what “retired” actually means? Is there a naming standard, or does everyone improvise? When no one is clearly accountable for a field or a definition, the data drifts. Not because anyone is careless. Because no one feels final responsibility.
Blaming the software too early usually backfires. You tighten a few settings, maybe add a required field, and for a month things look better. Then the same behaviors work around the new rules. Someone keeps a spreadsheet on the side. Someone creates a placeholder asset just to get a work order moving. The mess recreates itself in a slightly different shape.
These buckets bleed into each other. A weak process gets exposed because the system is too permissive. A strict system frustrates people, so they invent side paths that become process problems. Governance gaps make both harder to fix because no one can say what “correct” even is.
This isn’t about labeling something as system or process and moving on. It’s about narrowing where to look first.
Start with error types, not opinions: a quick classification that narrows the cause
Rather than debating whether it's a system issue or a process issue, start with the error in front of you.
Missing data is obvious. Blank required info. No attached documents. Maintenance events that should exist but don’t. That often points to creation or update controls. Either the system doesn’t require the field, or the process lets people skip it when they’re in a hurry.
Duplicate data has its own pattern. Multiple asset records for one physical unit. Repeated serial numbers. Child components created twice because no one realized the parent record already existed. Duplicates usually say something about creation controls. Either there’s no duplicate prevention, or people can’t easily find existing records and just create a new one.
Outdated data shows up when locations, owners, or statuses lag behind reality. Retired assets still marked active. Meter readings that haven’t changed in six months even though the equipment runs daily. That tends to point toward transfer and update discipline. Moves are happening. The record just isn’t keeping up.
Wrong data is trickier. Values entered incorrectly. Units swapped. Hierarchies assigned to the wrong parent. That can be a training issue. It can also be a design issue if the allowed values are confusing or too open.
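The four error types can be expressed as simple checks over an asset record. A minimal sketch, assuming a hypothetical schema (`serial`, `location`, `cost_center`, `status`, `last_updated`) and that `existing_serials` includes this record's own serial. "Wrong" data is deliberately left out: catching a swapped unit or a bad parent needs domain knowledge, not a generic rule.

```python
from datetime import date, timedelta

REQUIRED_FIELDS = ("serial", "location", "cost_center")  # assumed minimum set

def classify_errors(asset, existing_serials, today=None):
    """Return the error types present on one asset record (hypothetical schema)."""
    today = today or date.today()
    errors = []
    # Missing: required info left blank at creation or update.
    if any(not asset.get(f) for f in REQUIRED_FIELDS):
        errors.append("missing")
    # Duplicate: the serial appears on more than one record.
    if asset.get("serial") and existing_serials.count(asset["serial"]) > 1:
        errors.append("duplicate")
    # Outdated: marked active but untouched for six months.
    last_update = asset.get("last_updated")
    if (asset.get("status") == "active" and last_update
            and today - last_update > timedelta(days=180)):
        errors.append("outdated")
    return errors
```

Run over a full export, a function like this turns "the data is bad" into counts per error type, which is exactly what the next step (looking for patterns) needs.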
Then look at the pattern. Is this a one-off mistake from someone new? Or does the same error show up across departments, across months? Repetition often points to a broken step rather than a careless person.
Once you classify the error, the conversation gets quieter. You’re not arguing about philosophy anymore. You’re looking at mechanics.
Trace the error to the moment it was introduced in the asset lifecycle
Every bad record entered the system at some stage. It wasn’t born wrong. It became wrong.
Start at creation. New assets come in through receiving. They get tagged. Someone enters the initial master data. If duplicates show up right after onboarding, look there. Are people checking for existing records before creating a new one? Are required fields actually required?
Move to updates. Maintenance work orders. Meter updates. Condition notes. Parts swaps. If maintenance history is missing or meters are stale, the update step is suspect. Maybe closing a work order doesn’t force the right linkage. Maybe the person doing the work doesn’t even have access to the asset record.
Transfers are where drift often starts. Equipment moves between buildings. Loaners get reassigned. Project teams shuffle assets around. If locations don’t match reality, trace the transfer step. Who is responsible for updating the record when the move happens? Is it clear? Is it timed correctly?
Retirement creates its own confusion. Disposal, sale, write-offs. If assets remain active long after they’re gone, the retirement process may exist on paper but not in practice. Maybe Finance marks it disposed, but operations never changes the status in the asset system.
Think of it as a timeline. What was true when the asset first entered the system? What changed after that? Who touched it? What triggered each change? You’re not running a generic audit. You’re looking for the step where the record and reality parted ways.
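If your system keeps a change history, the timeline question can be asked mechanically. A sketch, assuming hypothetical change-log entries shaped like `{"field", "value", "user", "step", "when"}`: walk one field's history and return the event that set the value the system now holds, which is the moment the record and the floor may have parted ways.

```python
def find_divergence(change_log, field, observed_value):
    """Given a field's change history (oldest first) and the value observed
    on the floor, return the event that introduced the current system value,
    or None if record and reality agree."""
    events = [e for e in change_log if e["field"] == field]
    if not events:
        return None
    current = events[-1]
    if current["value"] == observed_value:
        return None  # record matches reality; nothing to trace
    return current  # who set it, at which lifecycle step, and when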
With that moment identified, the system vs. process question usually narrows down to something you can actually fix.
Check the system next: what your tool is allowing that it shouldn’t
After you’ve traced the break in the timeline, look at the tool with fresh eyes.
Look at it mechanically, not dramatically.
Are required fields actually required? Or can someone create an asset without a location, without a serial number, without a cost center, because the system doesn’t block it? If allowed values are wide open, people will improvise. If there’s no format check on serial numbers, you’ll end up with three variations of the same thing.
Duplicate prevention is another tell. Can someone create a new asset with a serial number that already exists? Is there matching logic in place, or is it left to memory and goodwill? Barcode enforcement helps, but only if it’s tied to something unique in the system. Otherwise you’re just scanning chaos faster.
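Both checks, the format check and the duplicate check, come down to normalizing before comparing. A sketch of the idea, with the normalization rules (strip separators, uppercase) as assumptions you'd tune to your own serial formats:

```python
import re

def normalize_serial(raw):
    """Collapse the cosmetic variations that create three versions of the
    same serial: case, whitespace, and separator characters."""
    return re.sub(r"[\s\-_/]", "", raw).upper()

def is_duplicate(new_serial, existing_serials):
    """Block creation when the normalized serial already exists."""
    canon = normalize_serial(new_serial)
    return any(normalize_serial(s) == canon for s in existing_serials)
```

The point is that the comparison happens on the canonical form, not on whatever the person typed, so "sn 1001-a" and "SN1001A" collide instead of coexisting.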
Look at workflow gates. Can anyone change an owner, a status, a location, or retire an asset without any review? Sometimes that freedom feels efficient. Until someone flips an asset to “retired” to clean up a report and no one notices for six months.
Integrations deserve a hard look too. Data flowing in from ERP, procurement systems, mobile apps, even bulk imports from spreadsheets. Each one is a door. If those doors bypass validation rules, they become quiet back channels where bad entries slip in cleanly.
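The fix for those back channels is architectural: one validation gate that every door passes through, whether the record came from the UI, an ERP feed, or a bulk import. A sketch under assumptions (hypothetical field names, a made-up location taxonomy):

```python
VALID_LOCATIONS = {"BLDG-A", "BLDG-B", "YARD"}  # assumed location taxonomy

def validate_inbound(record, source):
    """Run the same checks no matter which door the record came through.
    Returns a list of problems; empty means accept."""
    problems = []
    if not record.get("serial"):
        problems.append(f"{source}: serial is required")
    if record.get("location") not in VALID_LOCATIONS:
        problems.append(f"{source}: unknown location {record.get('location')!r}")
    return problems

def import_record(record, source, accepted, exceptions):
    """Route records through the gate instead of around it: bad entries
    land in an exception list, not in the asset register."""
    problems = validate_inbound(record, source)
    (exceptions if problems else accepted).append((record, problems))
```

The design choice is that `validate_inbound` doesn't know or care which integration called it; that's what closes the back channels.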
Permissions matter more than most teams admit. Who can create? Who can edit? Who can retire? If permissions don’t match accountability, you get edits without ownership. And when something goes wrong, no one can trace who made the change.
Even exception handling tells a story. When a bad entry is flagged, where does it go? Is there a queue? A review step? Or does it quietly become the new “source of truth” because nothing forces a correction?
The system doesn’t need to be perfect. It needs guardrails that fit the risks you’re actually dealing with.
Then map the process: where handoffs and incentives quietly create bad data
If the system looks reasonable, turn to the steps people actually follow.
Receiving logs assets. Operations uses them. Maintenance updates them. Finance cares about cost centers. IT might manage the platform. At each step, someone is supposed to do something small but important. The question is whether that responsibility is clear or just assumed.
Handoffs are where cracks show up. Receiving creates the record, but tagging happens later. Maintenance completes a job, but no one checks that the asset record reflects what changed. Project teams move equipment for a few weeks and don’t think to notify anyone outside the project.
Offline workarounds creep in fast. A paper note in a toolbox. A spreadsheet tracking temporary moves. A sticky note with a new serial number. People do this to get work done. Not to sabotage the system. But when the official record depends on someone remembering to circle back, it eventually drifts.
Timing problems are quieter but just as damaging. Moves after hours. Work orders closed days later. Audits run long after the physical reality has shifted. The record lags behind the floor, and once that gap exists, trust erodes.
Most teams optimize for speed and getting the job done. Perfect records rarely win that tradeoff in the moment. If the process relies on perfect behavior every time, it’s going to break.
You don’t need to document every edge case. You need to identify the few steps that must be consistent and make those painfully clear.
Governance is the glue: define ownership and standards so fixes don't fade
Even with solid controls and cleaner processes, things drift if no one owns the meaning behind the fields.
Who decides what “correct” means for location? Or status? If one team uses “inactive” to mean stored and another uses it to mean scrapped, reports will never line up. The system can’t solve a definition problem.
Field ownership matters more than most teams expect. Not in theory. In practice. If no one is accountable for the hierarchy structure or criticality rating, edits become casual. And casual edits compound.
Simple standards do a lot of work. Naming conventions. A clear location taxonomy. Agreed definitions for asset statuses. A minimum set of required metadata before an asset is considered “live.” These aren’t glamorous. They just reduce interpretation.
Change control is where governance shows up under pressure. Major edits to ownership, status, or retirement shouldn’t feel like flipping a light switch. They should leave a trace. An approval. A reason code. Something that slows down the casual change.
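That trace can be enforced in a few lines. A sketch, assuming a hypothetical set of gated fields and a reason-code convention; the specifics would come from your own governance rules:

```python
class ChangeControlError(Exception):
    """Raised when a gated edit arrives without a reason code."""

GATED_FIELDS = {"owner", "status", "retired"}  # edits here should leave a trace

def apply_change(asset, field, value, user, reason_code=None, audit_log=None):
    """Allow the edit, but refuse gated changes that carry no reason code,
    and record who changed what and why."""
    if field in GATED_FIELDS and not reason_code:
        raise ChangeControlError(f"{field!r} change needs a reason code")
    asset[field] = value
    if audit_log is not None:
        audit_log.append({"field": field, "value": value,
                          "user": user, "reason": reason_code})
    return asset
```

Note what this doesn't do: it doesn't block the change. It just makes the casual version impossible, which is the whole point of the gate.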
Routine checks keep it from slipping back. Exception queues reviewed on a cadence. Spot audits. Reconciliations between the asset system and ERP. Not to punish people. To catch drift early.
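A reconciliation run is mostly set arithmetic. A minimal sketch, keyed on serial number with a couple of assumed fields to compare; in practice you'd pick the fields Finance actually argues about:

```python
def reconcile(asset_records, erp_records, key="serial",
              fields=("status", "cost_center")):
    """Compare two systems record by record and report drift: serials
    present on only one side, and fields that disagree on shared serials."""
    assets = {r[key]: r for r in asset_records}
    erp = {r[key]: r for r in erp_records}
    report = {
        "only_in_assets": sorted(assets.keys() - erp.keys()),
        "only_in_erp": sorted(erp.keys() - assets.keys()),
        "mismatches": [],
    }
    for serial in assets.keys() & erp.keys():
        for f in fields:
            if assets[serial].get(f) != erp[serial].get(f):
                report["mismatches"].append(
                    (serial, f, assets[serial].get(f), erp[serial].get(f)))
    return report
```

Run on a cadence, the interesting output isn't any single mismatch; it's which field keeps showing up in the mismatch list.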
If repeated issues keep showing up in the same field or step, that’s not a training problem. It’s a policy or ownership problem. And until that’s addressed, the fixes won’t stick.
What actually works (and what just sounds good) when you try to fix it
With the break identified, fixes get less dramatic.
If duplicates are entering at creation, tighten creation. Add required fields that actually block submission. Constrain picklists so people can’t invent their own location codes. Enforce unique IDs or serial numbers in a way that the system actually checks them, not just stores them.
If location drift starts at transfer, focus there. Make location updates part of the move step, not an afterthought. Add a simple gate so a move isn’t considered complete until the record reflects it. Small friction in the right place is better than constant cleanup later.
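That gate is one refusal in code. A sketch, with hypothetical move and asset shapes: closing a move fails unless the record already shows the destination, and the happy path makes the record update part of the move itself.

```python
class IncompleteMoveError(Exception):
    """Raised when someone tries to close a move the record doesn't reflect."""

def close_move(move, asset):
    """Gate: a move isn't complete until the asset record shows the new
    location. Refusing to close is the small friction in the right place."""
    if asset.get("location") != move["to_location"]:
        raise IncompleteMoveError("update the asset location before closing the move")
    move["status"] = "complete"
    return move

def move_asset(move, asset):
    """Happy path: updating the record is part of the move step itself,
    so the record can't lag behind the floor."""
    asset["location"] = move["to_location"]
    return close_move(move, asset)
```

The drift described earlier ("temporary" moves that never get recorded) can't survive this shape, because there is no way to finish the move without touching the record.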
Retirement problems often need one thing: a clear gate. You shouldn’t be able to mark something disposed without a reason code, maybe even a reference to documentation. Not to slow people down for fun, but to make casual changes harder than thoughtful ones.
Big data cleanup projects sound impressive. You pull a team together, scrub thousands of records, reconcile with Finance, fix hierarchies. For a while, reports look clean. Then the same weak controls and loose steps feed new errors back in. The surface improves, but the engine stays the same.
“More training” gets suggested a lot. Sometimes it’s valid. If people genuinely don’t know the expected step, show them. But if the process is awkward or the system allows shortcuts, training becomes a band-aid. You’re asking people to compensate for design flaws with discipline.
An exception workflow is underrated. Instead of pretending errors won’t happen, expect them. Flag duplicates. Flag missing fields. Route them to someone who owns correction. Track patterns. If the same exception repeats, fix the upstream control. That loop matters more than a one-time purge.
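The loop described above can be sketched in a few lines: flag, route to an owner, count patterns. The type-to-owner mapping and the repeat threshold are assumptions you'd set yourself.

```python
from collections import Counter

class ExceptionQueue:
    """Expect errors: flag them, route them to an owner, and count patterns
    so a repeating exception points back at the upstream control."""

    def __init__(self, owners):
        self.owners = owners   # exception type -> who fixes it (assumed mapping)
        self.open = []
        self.pattern = Counter()

    def flag(self, asset_id, exc_type, detail=""):
        item = {"asset_id": asset_id, "type": exc_type,
                "owner": self.owners.get(exc_type, "data-steward"),
                "detail": detail}
        self.open.append(item)
        self.pattern[exc_type] += 1
        return item

    def repeat_offenders(self, threshold=3):
        """Exception types that keep coming back: fix the control, not the record."""
        return [t for t, n in self.pattern.items() if n >= threshold]
```

`repeat_offenders` is the part that matters: it's the mechanical version of "if the same exception repeats, fix the upstream control."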
Track things you actually feel on the floor. Fewer duplicate exceptions month over month. Faster time to locate an asset. Fewer audit findings tied to record accuracy. Not just “percentage of fields filled in.” A record can be complete and still be wrong.
A fix doesn’t mean much if the same issue shows up again six months later.
A practical decision rule: when it's mainly system vs. mainly process (and what to do next)
In some cases, it’s pretty clear where the problem leans.
If the system allows easy duplicates, broad permissions, and integrations that bypass checks, it’s leaning system. You don’t need a cultural overhaul. You need tighter configuration. Start there.
If the same mistakes cluster around handoffs, timing gaps, or offline workarounds, it’s leaning process. Tightening fields won’t fix a step that no one truly owns. Clarify responsibility. Simplify the step. Make the expected action hard to miss.
If teams disagree on what “correct” even means, or fields get edited casually because no one is accountable, that’s governance. You can configure and retrain all day, but without clear ownership and standards, drift comes back.
Pick the dominant bucket and make one or two moves that hit that bucket directly. Not ten initiatives. Not a replatforming project unless the current tool truly can’t enforce basic controls. Tool changes can turn into expensive distractions if the underlying behavior stays the same.
Keep the focus on the step and the control, not the person who made the last mistake. That shift alone changes how teams respond.
At that point, the question isn’t “system or process?” It’s “where does this specific failure live, and what’s the smallest change that stops it from happening again?”