Core tool · governance assessment

UTM Governance Assessment

Score naming, controlled vocabulary, QA enforcement, ownership, and reporting validation. See the weakest control first, then route straight into the fix.

Use this when tracking is messy but nobody agrees where the failure actually starts. The assessment turns that debate into five scored layers so you can stop arguing about symptoms and fix the first broken control.

5 scored layers

Review the whole operating stack instead of blaming one symptom like broken reports or inconsistent names.

30-point maturity score

Each question is scored 0, 1, or 2 so the result stays simple enough to use during a real review call.

Fastest next step picked for you

Once you score it, the page points to the weakest layer first so the follow-up work is obvious.

Score and action

Find the weakest control first

Score each question 0, 1, or 2. The lowest layer is where you get the fastest ROI because everything built above a weak control inherits the same drift.
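The scoring model above can be sketched in a few lines: five layers, three questions each, every answer worth 0, 1, or 2, and the lowest-scoring layer picked as the starting point. The example answers below are illustrative inputs, not a real result.

```python
# Sketch of the 30-point scoring model: 5 layers x 3 questions,
# each question scored 0, 1, or 2 (6 points per layer, 30 total).
LAYERS = [
    "Naming contract",
    "Controlled vocabulary",
    "QA enforcement",
    "Governance ownership",
    "Reporting validation",
]

def score_assessment(answers):
    """answers: dict mapping layer name -> list of three 0/1/2 scores."""
    layer_scores = {layer: sum(answers[layer]) for layer in LAYERS}
    total = sum(layer_scores.values())
    # The lowest-scoring layer is the recommended place to start.
    weakest = min(LAYERS, key=lambda layer: layer_scores[layer])
    return total, layer_scores, weakest

# Example answers only -- score your own system honestly.
example = {
    "Naming contract": [2, 1, 1],
    "Controlled vocabulary": [1, 0, 1],
    "QA enforcement": [2, 2, 1],
    "Governance ownership": [0, 1, 0],
    "Reporting validation": [1, 1, 2],
}
total, per_layer, weakest = score_assessment(example)
print(f"Overall maturity: {total}/30")  # Overall maturity: 16/30
print(f"Weakest layer: {weakest}")      # Weakest layer: Governance ownership
```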

Overall maturity

0/30

Band:

Live result
Layer 1 — Naming contract · Fix layer
0/6
Layer 2 — Controlled vocabulary · Fix layer
0/6
Layer 3 — QA enforcement · Fix layer
0/6
Layer 4 — Governance ownership · Fix layer
0/6
Layer 5 — Reporting validation · Fix layer
0/6

Fastest next step

Answer the questions to see your weakest layer.

Assessment layers

Score the five controls that hold the system together

These questions are written to expose operational weakness, not to make the score look good. Be strict. A clean result is only useful if the scoring is honest.

Layer 1 — Naming contract

If this layer is weak, campaign names drift fast and reporting becomes impossible to trust.

Naming rules

This layer checks whether campaign names follow one documented contract. Low scores here usually create duplicate campaigns, unreadable reports, and “close enough” naming that slowly corrupts your data.

Q1. Do you have a documented naming standard that everyone can access?

Every question uses the same scale: 0 = no · 1 = partial · 2 = yes.

Q2. Is there a single campaign naming formula people follow?

Q3. Are parameter rules defined clearly with examples?
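A documented naming formula is strongest when it is machine-checkable. Here is a minimal sketch of that idea; the formula itself (channel_region_objective_yyyymm, lowercase, underscore-separated) and the allowed channel and region values are hypothetical examples, so substitute your own documented standard.

```python
import re

# Hypothetical naming contract: channel_region_objective_yyyymm.
# Replace the allowed values and shape with your documented standard.
NAME_PATTERN = re.compile(
    r"^(paid-social|paid-search|email|display)"  # channel
    r"_(emea|amer|apac)"                         # region
    r"_[a-z0-9-]+"                               # objective / description
    r"_\d{6}$"                                   # launch month, yyyymm
)

def validate_campaign_name(name):
    """Return True if the name follows the documented contract."""
    return bool(NAME_PATTERN.match(name))

print(validate_campaign_name("paid-social_emea_spring-launch_202406"))  # True
print(validate_campaign_name("Spring Launch EMEA"))                     # False
```

Once the contract is a pattern instead of a paragraph, "close enough" naming fails a check instead of quietly passing review.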

Layer 2 — Controlled vocabulary

If this layer is weak, approved values fragment into duplicates, regional drift, and messy exports.

Taxonomy design

This layer checks whether your allowed values are controlled. Low scores here usually show up as near-duplicate source or medium values, regional drift, and messy exports that nobody trusts.

Q1. Do you maintain an approved source/medium list?

Keep values controlled before you build at scale.

Q2. Are brand/region rules documented?

Q3. Do exports show consistent values (not dozens of near-duplicates)?
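A controlled vocabulary is easiest to enforce as an explicit allow-list checked against exports. This is a minimal sketch under that assumption; the approved source/medium pairs are examples, not a recommendation.

```python
# Example approved vocabulary -- replace with your own list.
APPROVED = {
    ("facebook", "paid-social"),
    ("google", "cpc"),
    ("newsletter", "email"),
}

def check_values(rows):
    """rows: iterable of (source, medium) strings as exported.
    Returns the rows that fall outside the approved vocabulary."""
    violations = []
    for source, medium in rows:
        # Normalise case and whitespace drift before comparing.
        normalised = (source.strip().lower(), medium.strip().lower())
        if normalised not in APPROVED:
            violations.append((source, medium))
    return violations

export = [
    ("Facebook", "paid-social"),   # case drift -- normalises cleanly
    ("facebook ", "paid-social"),  # trailing space -- normalises cleanly
    ("fb", "social"),              # near-duplicate: outside the vocabulary
]
print(check_values(export))  # [('fb', 'social')]
```

Run a check like this against a regular export and near-duplicates show up as a short violations list instead of a reporting argument.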

Layer 3 — QA enforcement

If this layer is weak, good intentions disappear right before launch and problems go live.

QA checklist

This layer checks whether good intentions survive the moment before launch. Low scores here usually mean missing UTMs, broken redirects, or inconsistent links are only discovered after traffic is live.

Q1. Do major launches include a tracking QA step?

This is where checklists, QA tools, and redirect checks belong.

Q2. Do you use a checklist or tool before go-live?

Q3. Do you rarely discover missing UTMs after launch?

Layer 4 — Governance ownership

If this layer is weak, people invent their own rules because nobody owns the operating model.

Governance policy

This layer checks whether the system has a real owner, onboarding process, and change-control rhythm. Low scores here usually appear when agencies, regions, or new team members invent their own rules.

Q1. Is there a clearly named UTM owner?

Ownership, onboarding, and versioning should feel operational, not optional.

Q2. Do agencies get your rules during onboarding?

Q3. Are changes versioned and communicated?

Layer 5 — Reporting validation

If this layer is weak, clean launches still end in mystery buckets and reporting arguments.

Reporting validation

This layer checks whether the output is validated in reporting, not just built correctly at launch. Low scores here usually show up as mystery buckets, unexplained drops, or reports full of “direct / unassigned”.

Q1. Do you regularly check how UTMs appear in GA4?

The build is only finished when reporting output is readable.

Q2. Can you segment performance without mystery buckets?

Q3. Have you audited your reports for UTM mistakes?
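Reporting validation can start as a simple pass over an export that measures how much traffic lands in lost-attribution buckets. A sketch under stated assumptions: the bucket labels mimic GA4-style reporting values and the report rows are invented examples.

```python
# Example lost-attribution labels -- match these to your reporting tool.
MYSTERY_BUCKETS = {"(direct) / (none)", "unassigned", "(not set)"}

def mystery_share(rows):
    """rows: list of (source_medium, sessions) from a report export.
    Returns the fraction of sessions in mystery buckets."""
    total = sum(sessions for _, sessions in rows)
    lost = sum(s for bucket, s in rows if bucket in MYSTERY_BUCKETS)
    return lost / total if total else 0.0

report = [
    ("newsletter / email", 4200),
    ("google / cpc", 3100),
    ("(direct) / (none)", 1800),
    ("unassigned", 900),
]
share = mystery_share(report)
print(f"{share:.1%} of sessions land in mystery buckets")  # 27.0% ...
```

Tracking that percentage over time tells you whether upstream fixes are actually reaching the reports.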

Use this before a governance rebuild

If multiple teams are already shipping links, score the system first. That keeps the cleanup grounded in the weakest control instead of opinions about what “feels messy”.

Use this with launch and onboarding reviews

The assessment is useful when agencies change, new markets open, or a reporting cleanup is about to start. It gives the meeting a shared scoring language.

Use the result to prioritise work

Low scores on Layers 1 or 2 usually mean you should rebuild the inputs first. Low scores on Layers 3, 4, or 5 usually mean you should tighten release checks, ownership, or validation before scaling further.

Share the result

Copy the live score into the next review

Use this during a launch review, quarterly governance check, or agency onboarding so the discussion stays tied to the weakest layer instead of opinions.

UTM governance maturity: 0/30 (—)
Layer 1 (Naming contract): 0/6 —
Layer 2 (Controlled vocabulary): 0/6 —
Layer 3 (QA enforcement): 0/6 —
Layer 4 (Governance ownership): 0/6 —
Layer 5 (Reporting validation): 0/6 —

Next focus: —
Assessment: https://shortlinkfix.com/utm-governance-assessment/
Framework: https://shortlinkfix.com/utm-governance-framework/
Open the Starter Kit
30-day plan

Turn the score into a controlled rollout

Once you know the weakest control, start there. The goal is not to rewrite everything. It is to lock one unstable layer so the rest of the system has something stable to sit on.

Week 1 — Fix the weakest layer

Open the linked fix page, rewrite the rule or process, and agree what “done” looks like before moving on.

Week 2 — Push it into the tools

Use the Naming Generator, UTM Builder, or Bulk UTMs so the cleaner rule becomes real operating behaviour.

Week 3 — Enforce the release gate

Route launches through the UTM QA Checker, QA Checklist, and Redirect Checker so the same mistake does not come back.

Week 4 — Validate the output

Check how the cleaned system appears in reporting, then log the live state in your operating docs and share the new score.

FAQ

Questions teams ask before they score the system

Who should complete this assessment?

Use one score agreed by the people who actually ship links, review QA, and read reports. A shared score is more useful than one person guessing alone.

Does a high score mean the system never needs QA?

No. Strong governance still needs release checks. High scores mean the controls are stable, not that you can skip QA before launch.

What if multiple layers score badly?

Start with the lowest layer first. If naming or taxonomy is weak, fixing reporting first usually creates extra work because the inputs are still drifting.

What is the best page to open after scoring?

Use the layer-specific fix link the assessment gives you. That is faster than opening every governance page and trying to repair all five controls at once.

Start with the UTM Governance Starter Kit if the weakest layers are operational. If the result points to system design problems, step back into the UTM Governance Framework first.