Before QA
Creation should already have produced controlled source, medium, campaign, content, and destination values using approved dictionaries and protected formulas.
Go back to creation standard

A bulk publish gate decides whether a batch goes back upstream for rebuild, forward into logging for launch, or pauses under a documented exception.
This is the middle control layer between creation and logging. Good QA does not invent campaign logic after the fact. It validates whether the batch is standard, safe, and traceable enough to ship.
If creation produced uncontrolled values, duplicates, or malformed destinations, QA should block the batch and send it back upstream. A publish gate only works when it stops launch-ready rows and broken rows from being treated as the same thing.
Creation standardises the batch. QA validates the batch. Logging preserves the approved truth after launch. When those jobs blur together, teams patch problems downstream and lose evidence fast.
During QA

Validate field presence, naming consistency, duplicates, redirect survival, and exception ownership. This is where FAIL, WARN, and PASS are assigned.

Run the QA checker

After QA

Only approved publish URLs move into the live tracker, launch pack, and post-launch sample check. Unapproved rows do not quietly slip into logging.

Move approved links into logging

Most tracking failures happen because teams publish before the batch is standardised. The publish gate is the simple control that stops that. Every row exits with a status and a next action.
FAIL. Action: block the batch, assign an owner, and rebuild upstream. Do not patch live.
WARN. Action: hold the batch; record the rationale, owner, and expiry date, then attach evidence.
PASS. Action: export the publish pack, move approved rows into logging, and validate a live sample after launch.
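The row-level exit statuses above can be sketched as a small gate function. This is a minimal illustration, not the standard's actual checker: the required-field list, the normalisation rule, and the function names are assumptions made for the example.

```python
from urllib.parse import urlsplit, parse_qs

REQUIRED = ("utm_source", "utm_medium", "utm_campaign", "utm_content")

def gate_row(url: str, batch_campaigns: set[str]) -> str:
    """Return FAIL, WARN, or PASS for one built publish URL."""
    parts = urlsplit(url)
    params = parse_qs(parts.query)
    # FAIL: malformed destination or a required field missing entirely.
    if parts.scheme not in ("http", "https") or not parts.netloc:
        return "FAIL"
    if any(key not in params for key in REQUIRED):
        return "FAIL"
    # WARN: the campaign value collides with a differently spelled
    # variant elsewhere in the same batch (e.g. hyphen vs underscore).
    campaign = params["utm_campaign"][0]
    normalised = campaign.replace("-", "_").lower()
    variants = {c for c in batch_campaigns
                if c.replace("-", "_").lower() == normalised}
    if len(variants) > 1:
        return "WARN"
    return "PASS"
```

A row with all four parameters and a unique campaign spelling passes; the same row warns once a sibling row introduces `spring-sale-2026` alongside `spring_sale_2026`.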
A good handoff is visible at row level. The point is not just that the URL works. The point is that the team can explain why the row was approved and where the evidence lives.
| Stage | What the row looks like | Decision |
|---|---|---|
| Creation input | Destination: https://example.com/sale; Source: instagram; Medium: paid_social; Campaign: spring_sale_2026; Content: carousel_a | Built in the creation sheet using approved values only. |
| Built output | https://example.com/sale?utm_source=instagram&utm_medium=paid_social&utm_campaign=spring_sale_2026&utm_content=carousel_a | Send the batch into QA Checker. |
| QA result | One row warns because another row uses spring-sale-2026. | Resolve the naming conflict before launch. The batch stays WARN until the family matches. |
| Approved publish row | Canonical value standardised to spring_sale_2026 across the batch. | Move to PASS only after duplicate logic is resolved and any redirect evidence is attached. |
| Logging | Log publish URL, owner, launch date, placement, and proof in the link tracker or live inventory. | Now the row is launch-ready and traceable after release. |
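The "built output" row in the table can be produced mechanically from the creation inputs. A minimal sketch, assuming the canonical form is lowercase with underscores; the function name and normalisation rule are illustrative, not part of the published standard.

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def build_publish_url(destination: str, source: str, medium: str,
                      campaign: str, content: str) -> str:
    """Append UTM parameters to a destination URL, keeping any existing query."""
    # Canonicalise the campaign family: lowercase, underscores only.
    campaign = campaign.strip().lower().replace("-", "_").replace(" ", "_")
    scheme, netloc, path, query, fragment = urlsplit(destination)
    utm = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
        "utm_content": content,
    })
    query = f"{query}&{utm}" if query else utm
    return urlunsplit((scheme, netloc, path, query, fragment))
```

Feeding in the table's creation inputs, including the deviant `spring-sale-2026` spelling, yields the standardised built output shown above.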
QA is not there to create strategy after the fact. It should validate that the batch it received is standard, safe, and publishable with no cleanup left for launch day.
A good QA workflow does not stop at labels. It tells the team what happens next, who owns the action, and what proof is required before launch.
FAIL: send the batch back to creation. The issue is upstream, so the fix belongs upstream too. Record what failed, who owns the rebuild, and what must be resubmitted before QA can reopen the batch.
WARN: pause for review. A warning can still become a launch problem if the exception is undocumented. Record the exception, the owner, the rationale, the attached evidence, and the review date.
PASS: export the approved publish URLs, attach redirect proof where relevant, and push the final batch into logging. After launch, validate one or two live rows in GA4 so the pass status reflects reality.
The key control rule: QA should never silently repair a weak batch and then mark it as safe. If the row needed rebuilding, that belongs in creation.
QA is stronger when it produces proof, not just verdicts. The evidence pack is what lets someone explain later why a row was approved and where the route was validated.
If QA is doing its job, these issues should be blocked, documented, or routed correctly before launch — not discovered weeks later in reporting.
Campaign data splits across variants in reporting: usually caused by inconsistent naming or duplicates inside a link family. QA should catch this before launch, not after the report is already fragmented.
UTMs disappear between click and landing page: that usually means the URLs are fine but the route is not. Check hop depth and parameter survival with redirect chain guidance and UTM loss diagnosis.
Approved rows never reach the tracker: that is a logging failure, not a QA success. Close the loop by pushing approved rows into the logging workflow immediately after PASS.
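Once a redirect chain has been captured (for example, from an HTTP client's redirect history), the route check reduces to two questions: is the chain too deep, and do the UTM parameters survive to the final URL? A minimal sketch over an already-captured chain; the 3-hop limit is an illustrative threshold, not a rule from this standard.

```python
from urllib.parse import urlsplit, parse_qs

def route_survives(chain: list[str], max_hops: int = 3) -> tuple[bool, str]:
    """Check hop depth and UTM survival across a captured redirect chain.

    `chain` is the ordered list of URLs from the first request
    through every redirect to the final landing page.
    """
    hops = len(chain) - 1
    if hops > max_hops:
        return False, f"redirect chain is {hops} hops deep"
    first = parse_qs(urlsplit(chain[0]).query)
    last = parse_qs(urlsplit(chain[-1]).query)
    # Any utm_* parameter present at the start must arrive unchanged.
    lost = [k for k in first if k.startswith("utm_") and first[k] != last.get(k)]
    if lost:
        return False, "parameters lost in transit: " + ", ".join(lost)
    return True, "route ok"
```

The failure message doubles as evidence for the QA pack: it names the hop depth or the exact parameters that were dropped.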
The simplest way to make the workflow stick is to keep the sequence visible. This is what the full handoff should look like in practice.
1. Creation: build the batch with approved values only using the creation workflow. Do not ask QA to invent standards the batch never had.
2. QA: audit the batch, resolve FAIL/WARN issues, and prove redirect behaviour where routes include shorteners, affiliate hops, or bio layers.
3. Logging: export approved publish URLs, push them into the logging standard, and validate one live sample in GA4 so launch evidence survives.
Creation builds the batch. QA decides if it is safe. Logging preserves the approved truth. When those layers stay separate, launch gets faster and attribution stays cleaner after release.
QA is one layer in the release stack. Use the deeper page that matches the actual failure point instead of forcing every issue into this workflow.
Fix the upstream build layer first: Automate UTM creation
Use the core checker or the checklist directly: UTM QA Checker
That is a redirect or link-repair problem: Fix broken links

Common questions about governed UTM QA workflows.
**Should QA rewrite non-standard values on the fly?** No. QA should flag or block inconsistent values, not silently rewrite strategy live. If the batch needs rebuilding, that belongs in creation.
**What is the difference between WARN and FAIL?** A WARN row is technically valid but needs documented review, evidence, or owner sign-off. A FAIL row is not safe to publish as-is.
**Does every row need redirect evidence?** No. Direct destination URLs usually do not. Evidence matters most for shortened, affiliate, creator-bio, or otherwise layered routes.
**When should approved rows move into logging?** Immediately after PASS. Waiting until after launch creates a gap between what shipped and what the source of truth records.
**What happens after a batch passes QA?** Export the approved publish pack, log the final URLs, and validate a small live sample in GA4 so the QA decision stays grounded in reality.