Commerce Analytics | Product Intelligence 2026-01-29 CASE FILE // LOG-07

Digital Commerce Product Intelligence

Commerce analytics case study turning synthetic e-commerce events, orders, and returns into practical product, conversion, and returns-risk intelligence.

#CommerceAnalytics #ProductAnalytics #RetailIntelligence #Conversion #ReturnsRisk
Problem Product and trading teams can mistake traffic or revenue for healthy product performance when conversion friction and returns risk are hidden.
Focus I structured synthetic e-commerce events, orders, and returns into a practical intelligence pack across funnel performance, post-launch product health, and false-winner detection.
Outcome Stakeholders can quickly see what to promote, what PDPs to fix, what launches to deprioritise, and which apparent winners are commercially risky.
// analyst.signals
Funnel interpretation Product prioritisation Risk-aware merchandising insight

Overview

Digital Commerce Product Intelligence translates synthetic South African retail-inspired e-commerce events, orders, and returns into a practical product intelligence pack. The work focuses on where the funnel loses momentum, which launches underperform early, and which apparent revenue winners become risky once returns and margin are included.

Hero

Commerce analytics case study focused on product performance, conversion friction, and risk-adjusted trading decisions.

Intelligence Layer

Trading and merchandising teams can see traffic, orders, and revenue every day, but those signals do not automatically reveal where product-page friction is building, which launches need fixing, or which strong sellers are eroding value through returns. This project structures those signals into a sharper operating view.

Problem

High visibility and high revenue can both be misleading. Teams need a way to distinguish PDP friction from checkout friction, weak launches from fixable launches, and strong sellers from false winners that create returns or margin risk.

Data / Signals

Analyst Objective

Build a concise commerce intelligence pack that helps teams:

  • read the journey from sessions to purchase,
  • compare early product performance against category baselines,
  • and flag revenue-leading products that may still be commercially risky.

Stakeholders

  • Digital commerce or trading teams deciding what to promote, fix, or deprioritise.
  • Merchandising and product owners reviewing launch health and PDP effectiveness.
  • Category or operations leads needing clearer visibility into returns-driven risk.

Key Questions

  • Where is the main friction in the commerce journey, and does it differ by device?
  • Which newly launched products had visibility but failed to convert in the first 14 days?
  • Which revenue-leading products may be misleading because returns or weak margin undermine their apparent success?
  • Which actions belong in PDP improvement, quality investigation, pricing response, or deprioritisation?

KPI Framework

  • Funnel: sessions, PDP sessions, add-to-cart sessions, checkout sessions, purchase sessions.
  • Launch health: 14-day PDP visibility, ATC rate, purchase rate, and category-relative baselines.
  • Commercial risk: net revenue, estimated margin, return rate, and top return reason.

Insight

  • Structured the repo outputs into a decision-led trading narrative rather than a generic performance summary.
  • Used the project’s slow-starter logic to separate fixable launch friction from genuine low-demand cases.
  • Paired revenue with returns and estimated margin so commercially risky products did not look healthy on topline alone.
  • Kept the analysis recruiter-readable while preserving the actual logic from the source project.

Implication

  • Traffic concentration alone is not enough: mobile drives most sessions but converts less efficiently through later funnel stages.
  • The slow-starter queue is mostly fix-oriented rather than promo-first, which changes what teams should do next.
  • Revenue leaders in Women Shoes still need scrutiny because returns risk can distort what looks like strong trading performance.

Embedded Insights Report

Executive Snapshot

Journey Conversion
145,609 -> 2,431

The generated funnel moves from 145,609 sessions to 2,431 purchases, for a 1.67% session-to-purchase rate.

Device Gap
2.10% vs 1.50%

Desktop converts 2.10% of sessions into purchase, while mobile converts 1.50% despite driving 71.9% of all sessions.

Slow-Starter Queue
20 products

The prioritised list contains 20 slow starters: 12 PDP-content fixes, 6 quality-risk cases, and 2 clear deprioritisation calls.

Revenue vs Margin
ZAR 1.63m / ZAR 617k

Across 18 trading weeks, synthetic revenue totals ZAR 1.63m and estimated margin totals ZAR 617k, with average margin share at 37.9%.

False Winners
7 flagged

The false-winner logic surfaces 7 products, all in Women Shoes, with size and quality issues dominating the return narrative.

Funnel Insight

The funnel in funnel_metrics.csv tracks the journey as Sessions -> PDP -> Add-to-cart -> Checkout -> Purchase. Overall, 54.5% of sessions reach a PDP, only 11.4% of PDP sessions reach cart, 66.5% of carts reach checkout, and 40.5% of checkouts end in purchase.

The device split matters. Mobile generates 71.9% of all sessions but only 64.6% of purchases, while desktop has the stronger end-to-end conversion path. That points to two different decisions: improve mobile checkout flow and reduce PDP friction before expecting more traffic to lift sales.
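As a minimal sketch, the stage-to-stage arithmetic behind those percentages looks like this. The counts below are illustrative stand-ins chosen to reproduce the published rates; the actual values live in funnel_metrics.csv:

```python
# Illustrative stage-to-stage conversion for a commerce funnel.
# Counts are hypothetical stand-ins, not the real funnel_metrics.csv values.
funnel = [
    ("sessions", 145_609),
    ("pdp", 79_357),
    ("add_to_cart", 9_025),
    ("checkout", 6_003),
    ("purchase", 2_431),
]

def stage_rates(stages):
    """Return (from_stage, to_stage, rate) for each adjacent pair of stages."""
    return [
        (prev_name, name, n / prev_n)
        for (prev_name, prev_n), (name, n) in zip(stages, stages[1:])
    ]

for frm, to, rate in stage_rates(funnel):
    print(f"{frm} -> {to}: {rate:.1%}")

# End-to-end conversion: last stage over first stage.
overall = funnel[-1][1] / funnel[0][1]
print(f"session-to-purchase: {overall:.2%}")  # prints 1.67%
```

The same per-device calculation (one funnel per device segment) is what produces the 2.10% desktop versus 1.50% mobile gap quoted above.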

Figure: Journey funnel by device for digital commerce product intelligence.

Desktop turns 2.10% of sessions into purchase versus 1.50% on mobile, even though mobile carries most traffic volume.

Revenue vs Margin Context

This project also keeps revenue and estimated margin together at weekly level, because topline growth can look healthy while commercial quality deteriorates underneath. The generated series peaks in the week of 2026-02-09 at roughly ZAR 120.8k revenue and ZAR 46.0k estimated margin.

Figure: Weekly revenue and estimated margin trend for digital commerce product intelligence.

Weekly revenue and estimated margin generally move together, but the margin gap reinforces why product ranking cannot rely on revenue alone.
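A weekly revenue-plus-margin roll-up of that kind can be sketched in a few lines. The order rows, dates, and amounts below are assumptions for illustration only, not the project's actual schema or figures:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical order lines: (order_date, net_revenue, est_margin) in ZAR.
orders = [
    (date(2026, 2, 9), 1200.0, 460.0),
    (date(2026, 2, 11), 800.0, 300.0),
    (date(2026, 2, 16), 950.0, 340.0),
]

def week_start(d):
    """Monday of the ISO week containing d, used as the weekly bucket key."""
    return d - timedelta(days=d.weekday())

# Accumulate [revenue, margin] per trading week.
weekly = defaultdict(lambda: [0.0, 0.0])
for d, revenue, margin in orders:
    bucket = weekly[week_start(d)]
    bucket[0] += revenue
    bucket[1] += margin

for wk in sorted(weekly):
    rev, mar = weekly[wk]
    print(f"{wk}  revenue={rev:,.0f}  margin={mar:,.0f}  margin_share={mar / rev:.1%}")
```

Keeping margin share (margin over revenue) on the same weekly grain is what lets the report quote an average margin share of 37.9% alongside the topline trend.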

Slow Starters Analysis

The slow_starters_top20.csv output uses the repo’s 14-day post-launch rule: above-category-median visibility combined with below-category-median ATC and/or purchase performance. The resulting queue is mostly operational rather than promotional. Twelve of the twenty prioritised products point to PDP improvement, six point to quality risk, and only two land in pure deprioritisation.

Home contributes 6 of the 20 slow starters, while Women Shoes contributes 5. That suggests launch underperformance is clustered in specific categories rather than evenly distributed across the catalogue.
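The 14-day rule itself reduces to two category-relative comparisons. A minimal sketch, with entirely hypothetical launch stats (the real inputs come from the repo's feature build):

```python
from statistics import median

# Hypothetical 14-day post-launch stats per product:
# (product_id, category, pdp_views_14d, atc_rate, purchase_rate)
launches = [
    ("P00089", "Accessories", 72, 0.125, 0.014),
    ("P00195", "Home", 60, 0.050, 0.017),
    ("P00140", "Home", 55, 0.055, 0.018),
    ("P00201", "Home", 30, 0.120, 0.040),
    ("P00202", "Accessories", 20, 0.150, 0.050),
]

def category_medians(rows):
    """Median views, ATC rate, and purchase rate per category."""
    by_cat = {}
    for _, cat, views, atc, pur in rows:
        cols = by_cat.setdefault(cat, ([], [], []))
        cols[0].append(views)
        cols[1].append(atc)
        cols[2].append(pur)
    return {cat: tuple(median(xs) for xs in cols) for cat, cols in by_cat.items()}

def slow_starters(rows):
    """Above-median visibility combined with below-median ATC and/or purchase rate."""
    meds = category_medians(rows)
    flagged = []
    for pid, cat, views, atc, pur in rows:
        med_views, med_atc, med_pur = meds[cat]
        if views > med_views and (atc < med_atc or pur < med_pur):
            flagged.append(pid)
    return flagged

print(slow_starters(launches))  # products seen widely but converting below category norm
```

The point of the rule is the visibility condition: a product that nobody saw is a demand question, while a product that many saw and few bought is a fix candidate.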

Product  Category     14d views  ATC rate  Purchase rate  Return rate  Recommended action
P00089   Accessories     72       12.5%        1.4%          0.0%      Low demand (deprioritise)
P00091   Home            61        6.6%        1.6%         16.7%      Quality risk (high returns)
P00195   Home            60        5.0%        1.7%          0.0%      Improve PDP content
P00140   Home            55        5.5%        1.8%          0.0%      Improve PDP content
P00186   Women Shoes     60        8.3%        1.7%          0.0%      Improve PDP content
P00145   Men Tees        57        8.8%        3.5%          9.1%      Improve PDP content
P00044   Beauty          54        7.4%        1.9%         33.3%      Quality risk (high returns)
P00042   Men Tees        50        0.0%        0.0%         33.3%      Quality risk (high returns)

Top 8 entries shown from the prioritised `slow_starters_top20.csv` export.

Returns Risk / False Winner Analysis

The returns_risk_products.csv export applies the project’s false-winner rule: top-revenue-quartile products that also have return rate at or above 18% or estimated margin in the bottom quartile. In this run, the rule flags 7 products, and every flagged product sits in Women Shoes.

That concentration is the real point of the logic. High revenue alone would push these products toward promotion, but the return story changes the decision. Four of the seven false winners are dominated by size-related returns, with the rest driven by quality and damage signals. The flagged set still contributes roughly 8.9% of catalogue revenue, which is enough to matter in weekly trading review.
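The rule can be sketched as a rank-plus-threshold filter. The product rows below are hypothetical, and the top quartile is taken by revenue rank for simplicity; only the 18% return-rate threshold comes from the project:

```python
from statistics import quantiles

# Hypothetical per-product stats: (product_id, revenue_zar, est_margin_zar, return_rate)
products = [
    ("P00102", 30_800, 12_900, 0.235),  # strong seller, heavy size-related returns
    ("P00050", 30_000, 13_500, 0.050),  # strong seller, healthy returns profile
    ("P00012", 9_000, 2_000, 0.357),
    ("P00300", 7_500, 3_200, 0.020),
    ("P00301", 6_000, 2_500, 0.100),
    ("P00302", 4_000, 1_500, 0.040),
    ("P00303", 3_500, 1_300, 0.080),
    ("P00304", 3_000, 1_100, 0.030),
]

RETURN_RATE_THRESHOLD = 0.18  # from the project's false-winner rule

def false_winners(rows):
    """Top-revenue-quartile products whose returns or margin undermine them."""
    n = len(rows)
    top_quartile = sorted(rows, key=lambda r: r[1], reverse=True)[: max(1, n // 4)]
    margin_q1 = quantiles([m for _, _, m, _ in rows], n=4)[0]  # bottom-quartile cut-off
    return [
        pid
        for pid, _, margin, ret in top_quartile
        if ret >= RETURN_RATE_THRESHOLD or margin <= margin_q1
    ]

print(false_winners(products))  # P00102 is flagged; P00050 sells as much but stays clean
```

Note that a high-return product outside the top revenue quartile is not a false winner under this rule; it is simply a weak product, which is a different trading decision.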

Figure: Returns risk matrix highlighting false winners for digital commerce product intelligence.

Highlighted points show false winners: high-revenue products with return rates above the project risk threshold or weak relative margin quality.

Product  Brand        Revenue    Est. margin  Return rate  Top reason
P00102   Moya         ZAR 30.8k  ZAR 12.9k       23.5%     size
P00012   LunaRidge    ZAR 28.5k  ZAR  7.8k       35.7%     damaged
P00004   LunaRidge    ZAR 22.4k  ZAR 11.3k       25.0%     quality
P00109   Aurelia      ZAR 20.2k  ZAR  6.1k       20.0%     size
P00125   Aurelia      ZAR 16.1k  ZAR  6.3k       18.8%     size
P00066   Stone&Salt   ZAR 13.8k  ZAR  4.4k       28.6%     size
P00155   Rivermark    ZAR 12.8k  ZAR  4.2k       20.0%     quality

Flagged items rendered from the `returns_risk_products.csv` export.

  • Fix PDP content first on high-visibility, low-conversion launches rather than treating all slow starters as demand problems.
  • Separate mobile conversion work from desktop because later-stage mobile performance is materially weaker despite heavier traffic share.
  • Escalate sizing, quality, and expectation-setting issues on Women Shoes before increasing promotion on high-revenue items.
  • Keep weekly trade review anchored on revenue plus estimated margin, not revenue alone.
  • Deprioritise launches that remain weak after the first 14-day visibility window and do not show compensating commercial upside.

Closing

Deliverables

  • Reproducible synthetic e-commerce data generation and feature build pipeline.
  • Funnel metrics across overall traffic and device split.
  • slow_starters_top20.csv for 14-day post-launch prioritisation.
  • returns_risk_products.csv for false-winner detection.
  • Native portfolio report visuals built from the project outputs.

Outcome

The project demonstrates how product and trading analytics can move beyond traffic and revenue reporting into sharper decision support. The result is a more operational view of what to fix, what to watch, and what not to over-celebrate.

SYSTEM ONLINE // KM.III.IV 00:00:00 SAST CASE FILE // ACTIVE