Uno Data Hub

📊 Growth Analytics Strategy

Experiment strategy, AARRR diagnostics, and implementation tracking in one view.

💡
Additional Queries Available: For ad-hoc analysis and diagnostic queries, visit the Growth Metrics Query Explorer.


🚀 Growth Hacks Implementation & Tracking

Growth hacks are strategic product changes designed to drive specific AARRR metrics. Each hack has clear success criteria, measurement methodology, and expected impact on key growth metrics.

Growth Hack Effectiveness: Loading outcome, leading indicators, and guardrail checks...

🎯 Growth Hack #1: Auto-Launch Hot Design on First Use

📋 Overview

Objective: Reduce time-to-first-value by automatically launching Hot Design when a user first opens Studio.

Target Metric: Discovery and successful usage (per Feature Usage Definition)

Expected Impact: Increase users who discover Hot Design and then reach successful usage.

Version: Introduced in v6.5.64 (released 2026-02-03 per NuGet.org)

📅 Release Date Source: All queries use the official package publish timestamp from NuGet.org metadata (2026-02-03T00:10:21.73Z) to accurately segment Pre-6.5 vs Post-6.5 telemetry data. Metrics compare baseline behavior (before v6.5.64) against auto-launch behavior (v6.5.64 and later).

Status: ✅ IMPLEMENTED

Metrics 1.1-1.5 below focus on objective-aligned indicators: time to interaction, onboarding completion, and quality guardrails (return sessions, duration quality, and drop-off rate). The effectiveness strip above tracks discovery and successful usage directly from canonical feature usage telemetry.

โฑ๏ธ 1.1: Time to First Interaction

Measures: Time between first session start and first meaningful interaction (AI Agent vs Manual).
Target: Reduce median time from 15+ minutes to under 5 minutes.
AI Agent: Chat interactions (ChatGenerateStarted, ChatXamlPreviewApplied).
Manual: Toolbox, property, or element interactions.

๐Ÿ“ View KQL Query โ–ผ

Canonical query definitions are maintained in Query-Catalog.generated.md and sourced from www/config/queries.json. Embedded examples were removed to avoid drift from canonical identity, CI exclusion, and timeframe rules.
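Separately from the canonical KQL, the metric's logic can be sketched in plain Python. This is an illustrative, non-canonical model: the `(user, event_name, minutes)` tuple shape and the `time_to_first_interaction` helper are assumptions, while the event names come from the Telemetry Events listed in the Implementation Notes.

```python
from statistics import median

# AI Agent vs Manual split per metric 1.1; names from the telemetry notes.
AI_EVENTS = {"ChatGenerateStarted", "ChatXamlPreviewApplied"}
MANUAL_EVENTS = {"ToolboxDragToCanvas", "ToolboxDragToElements",
                 "ToolboxDoubleClickToAdd", "PropertyChanged", "AddElement"}

def time_to_first_interaction(events):
    """Median minutes from a user's first session start to their first
    meaningful interaction. `events` is (user, event_name, minutes) tuples."""
    first_start, first_interaction = {}, {}
    for user, name, ts in sorted(events, key=lambda e: e[2]):
        if name == "DebugSessionStarted":
            first_start.setdefault(user, ts)
        elif name in AI_EVENTS | MANUAL_EVENTS:
            first_interaction.setdefault(user, ts)
    deltas = [first_interaction[u] - first_start[u]
              for u in first_start
              if u in first_interaction and first_interaction[u] >= first_start[u]]
    return median(deltas) if deltas else None
```

Against the target above, a healthy cohort returns a median under 5.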

✅ 1.2: Onboarding Completion Rate

Measures: Percentage of users whose first session completes (a session-ended event arrives within 2 hours of session start).
Target: 70%+ of users should complete their first Hot Design session.

๐Ÿ“ View KQL Query โ–ผ

Canonical query definitions are maintained in Query-Catalog.generated.md and sourced from www/config/queries.json. Embedded examples were removed to avoid drift from canonical identity, CI exclusion, and timeframe rules.
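The completion rule (session-ended within 2 hours) reduces to a small check; the sketch below is illustrative and non-canonical, assuming first sessions keyed by user with epoch-second timestamps:

```python
def onboarding_completion_rate(first_sessions, max_hours=2.0):
    """Share of users whose first session completed: a session-ended event
    arrived within `max_hours` of session start. `first_sessions` maps
    user -> (start_epoch_s, end_epoch_s or None if no end event was seen)."""
    if not first_sessions:
        return 0.0
    completed = sum(
        1 for start, end in first_sessions.values()
        if end is not None and (end - start) <= max_hours * 3600
    )
    return completed / len(first_sessions)
```

The 70% target above corresponds to a returned value of 0.70 or higher.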

๐Ÿ›ก๏ธ Guard Metrics (Prevent Negative Side Effects)

These metrics ensure auto-launch improves engagement rather than forcing it. All should remain stable or improve post-v6.5.

🔄 1.3: Return Session Rate (Guard Metric)

Measures: Percentage of users who voluntarily return for a second session (2-7 days after first session).
Guard Threshold: >50% return rate. Ensures users come back because they want to, not just because of auto-launch.
Compares: Pre-6.5 vs Post-6.5 return rates to ensure the feature doesn't harm voluntary engagement.

๐Ÿ“ View KQL Query โ–ผ

Canonical query definitions are maintained in Query-Catalog.generated.md and sourced from www/config/queries.json. Embedded examples were removed to avoid drift from canonical identity, CI exclusion, and timeframe rules.
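The 2-7 day return window can be modeled directly. This is a non-canonical sketch; the day-index representation and function name are assumptions:

```python
def return_session_rate(first_session_day, sessions, lo_days=2, hi_days=7):
    """Share of users with a voluntary return session 2-7 days after their
    first. `first_session_day` maps user -> day index of first session;
    `sessions` is (user, day) pairs for all observed sessions."""
    if not first_session_day:
        return 0.0
    returned = {
        user for user, day in sessions
        if user in first_session_day
        and lo_days <= day - first_session_day[user] <= hi_days
    }
    return len(returned) / len(first_session_day)
```

The guard passes while the returned share stays above 0.50 in both the Pre-6.5 and Post-6.5 cohorts.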

โฐ 1.4: Session Duration Quality (Guard Metric)

Measures: Median session duration (minutes) with P25/P75 percentiles before and after v6.5.
Guard Threshold: Duration should remain stable or increase. Prevents optimizing for quantity over quality.
Compares: Pre-6.5 baseline vs Post-6.5 to ensure auto-launch maintains quality engagement.

๐Ÿ“ View KQL Query โ–ผ

Canonical query definitions are maintained in Query-Catalog.generated.md and sourced from www/config/queries.json. Embedded examples were removed to avoid drift from canonical identity, CI exclusion, and timeframe rules.
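One subtlety when comparing percentiles pre/post release is that percentile definitions differ slightly between engines, so both sides of the comparison should use the same one. As a non-canonical illustration, this sketch uses linear interpolation between closest ranks (numpy's default definition):

```python
def duration_quality(durations_min):
    """P25 / median / P75 of session durations in minutes, using linear
    interpolation between closest ranks."""
    xs = sorted(durations_min)
    if not xs:
        return (None, None, None)
    def pct(p):
        k = (len(xs) - 1) * p
        lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)
    return pct(0.25), pct(0.50), pct(0.75)
```

The guard passes while the Post-6.5 median is at or above the Pre-6.5 median.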

โš ๏ธ 1.5: Early Drop-off Rate (Guard Metric)

Measures: Percentage of users who close Hot Design within 30 seconds of session start.
Guard Threshold: <30% early drop-offs. High rate indicates auto-launch may be intrusive or poorly timed.
Time Between: Measures session-started to session-ended duration for all sessions.
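The drop-off rule is a simple threshold over session durations; a minimal non-canonical sketch, assuming durations in seconds:

```python
def early_dropoff_rate(durations_sec, threshold_sec=30):
    """Share of sessions whose session-started to session-ended duration
    is under `threshold_sec` (immediate close)."""
    if not durations_sec:
        return 0.0
    return sum(1 for d in durations_sec if d < threshold_sec) / len(durations_sec)
```

The guard passes while the returned share stays below 0.30.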

📊 Success Criteria & Monitoring

All metrics compare Pre-v6.5 (baseline) vs Post-v6.5 (with auto-launch)

  • Primary Success Metrics (Discovery + Successful Usage):
    • Discovery users for Hot Design trend upward over baseline weeks
    • Successful usage after discovery remains above target threshold and improves over baseline
  • Secondary Metrics (Engagement):
    • Median time-to-first-interaction drops from 15+ minutes to under 5 minutes
    • 70%+ of new users complete first Hot Design session
    • Weekly IDE completion cohort trend remains positive
  • Guard Metrics (Quality/Recurring Use):
    • ✅ Recurring use: User-initiated return sessions remain above 50% (voluntary engagement)
    • ✅ Session duration: Median/average duration stable or increases (quality maintained)
    • ✅ Early drop-offs: <30% close within 30 seconds (no intrusion signal)
    • ✅ Time between events: Track session-started to session-ended for quality analysis

🔧 Implementation Notes

Version Tracking:

  • Release Date: Version 6.5.64 NuGet publish timestamp 2026-02-03T00:10:21.73Z
  • Pre-6.5 (Baseline): All sessions were manually initiated by users
  • Post-6.5 (Treatment): First session auto-launches; subsequent sessions are user-initiated
  • Measurement Window: Compare cohorts before/after release date for true impact assessment

Telemetry Events Used:

  • uno/hot-design/usage with UsageEvent = "DebugSessionStarted" - Tracks each Hot Design launch
    • Pre-6.5: All occurrences are manual launches (baseline behavior)
    • Post-6.5 first occurrence per user: Auto-launch (growth hack)
    • Post-6.5 second+ occurrence per user: User-initiated launch (voluntary return)
  • uno/hot-design/session-end with SessionDuration measurement - Tracks session completion and duration
    • Duration: Time between session-started and session-ended events
    • Early drop-off: Duration < 30 seconds indicates immediate close
  • uno/hot-design/usage with interaction events - Tracks meaningful actions:
    • ToolboxDragToCanvas, ToolboxDragToElements, ToolboxDoubleClickToAdd
    • PropertyChanged, AddElement
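The per-user segmentation rule above (pre-release launches are manual; the first post-release launch per user is the auto-launch; later ones are voluntary returns) can be expressed directly. This sketch mirrors that rule for illustration only; the label strings are assumptions:

```python
from datetime import datetime, timezone

# NuGet publish timestamp for v6.5.64 (from the Release Date Source note).
RELEASE = datetime(2026, 2, 3, 0, 10, 21, tzinfo=timezone.utc)

def classify_launches(launches):
    """`launches` is time-ordered (user, timestamp) DebugSessionStarted
    events. Returns one label per event."""
    seen_post = set()
    labels = []
    for user, ts in launches:
        if ts < RELEASE:
            labels.append("manual-baseline")    # Pre-6.5: every launch is manual
        elif user not in seen_post:
            seen_post.add(user)
            labels.append("auto-launch")        # Post-6.5 first launch per user
        else:
            labels.append("voluntary-return")   # Post-6.5 subsequent launches
    return labels
```

Note the rule labels a user's first post-6.5 launch as the auto-launch even for pre-existing users; any stricter attribution would need additional telemetry.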

Natural A/B Test Design:

  • Historical Control: Pre-6.5 users (all manual launches)
  • Treatment Group: Post-6.5 first sessions (auto-launch)
  • Internal Control: Post-6.5 second sessions (user-initiated, same users)

This three-way comparison allows us to measure both the absolute impact vs historical baseline AND the incremental effect of auto-launch vs manual within the same user cohort.

โฑ๏ธ Growth Hack #2: Trial Period Reduction (30โ†’15 Days)

Release Date: TBD | Hypothesis: Shorter trial periods create urgency, leading to more trial extensions and faster purchase decisions.
Key Questions: (1) Does reducing trials from 30 to 15 days increase extension requests? (2) Do we see higher purchase conversion rates?

🎯 Experiment Overview

Current State: 30-day trial period for Hot Design features.

Proposed Change: Reduce trial period to 15 days.

Rationale:

  • Urgency Creation: Shorter trials force users to evaluate value faster
  • Faster Decisions: Reduces "I'll decide later" procrastination
  • Active Engagement: Encourages users to engage with the product sooner
  • Extension Opportunities: Users who need more time will explicitly request it (signal of interest)

Expected Outcomes:

  • ↑ Trial extension request rate (users actively seeking more time = strong engagement signal)
  • ↑ Purchase conversion rate (urgency drives decision-making)
  • ↓ Time to purchase decision (faster conversion cycles)
  • ↑ Trial usage intensity (users engage more actively in shorter window)

📊 2.1: Trial Period Comparison Summary

Measures: Key metrics comparing 30-day vs 15-day trial periods.
Success Indicators: Higher extension rate + higher conversion rate = experiment success.
Data: Extension requests, purchases, session counts, time to purchase.

| Trial Period | Total Trials | Extension Rate (%) | Conversion Rate (%) | Avg Sessions | Median Days to Purchase |
|---|---|---|---|---|---|
▼ Show Query

Canonical query definition maintained in Query-Catalog.generated.md (sourced from www/config/queries.json).

📈 2.2: Trial Urgency Timeline

Measures: Daily trend of new trials, extension requests, and purchases by trial period.
Purpose: Identify if 15-day trials show faster conversion velocity.
Interpretation: Steeper slopes for 15-day cohort = successful urgency creation.

🔄 2.3: Trial Lifecycle Funnel

Measures: Conversion funnel from trial start → active usage → extension → purchase.
Active Users: 5+ sessions during trial period (meaningful engagement).
Success Pattern: Higher funnel efficiency for 15-day trials = better urgency.

| Trial Period | Trial Starts | Active Users | Active % | Extensions | Extension % | Purchases | Purchase % | Overall Conv % |
|---|---|---|---|---|---|---|---|---|
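The funnel's stage logic can be sketched per trial-period cohort; the dict shape and `trial_funnel` name are illustrative assumptions, with the 5+ sessions threshold taken from the Active Users definition above:

```python
def trial_funnel(trials):
    """`trials` is one cohort's records: dicts with 'sessions' (count),
    'extended' (bool), 'purchased' (bool). Returns stage counts and
    percentages relative to trial starts."""
    starts = len(trials)
    active = sum(1 for t in trials if t["sessions"] >= 5)   # 5+ sessions = active
    extended = sum(1 for t in trials if t["extended"])
    purchased = sum(1 for t in trials if t["purchased"])
    pct = lambda n: round(100 * n / starts, 1) if starts else 0.0
    return {
        "starts": starts,
        "active": active, "active_pct": pct(active),
        "extensions": extended, "extension_pct": pct(extended),
        "purchases": purchased, "purchase_pct": pct(purchased),
    }
```

Running it once per cohort (30-day vs 15-day) yields the rows of the table above.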

✅ Success Criteria

Experiment succeeds if 15-day trials show:

  • Primary Metric (Extension Rate):
    • Extension request rate increases by 30%+ vs 30-day trials
    • Example: 30-day baseline = 10% โ†’ 15-day target = 13%+
  • Primary Metric (Purchase Conversion):
    • Purchase conversion rate increases by 15%+ vs 30-day trials
    • Example: 30-day baseline = 5% โ†’ 15-day target = 5.75%+
  • Secondary Metrics:
    • Time to purchase decision decreases by 25%+
    • Trial usage intensity (sessions per user) remains stable or increases
  • Guard Metrics:
    • ✅ Overall activation rate (trial starts) does not decrease
    • ✅ Active user rate (5+ sessions) remains above 40%
    • ✅ Purchase-to-extension ratio improves (more purchases per extension request)

🔧 Implementation Notes

Telemetry Events Used:

  • uno/licensing/license-status with LicenseName = "Trial" and TrialDaysRemaining - Tracks trial periods
  • uno/licensing/nav-to-trial-extension - User requests trial extension (strong interest signal)
  • uno/licensing/nav-to-purchase-now - User navigates to purchase (conversion intent)
  • uno/hot-design/session-started - Hot Design usage during trial (engagement)

Measurement Approach:

  • Cohort Comparison: Pre-change (30-day) vs Post-change (15-day)
  • Minimum Sample: 100+ trials per cohort for statistical significance
  • Measurement Window: 30 days post-trial start (captures full lifecycle)
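The dashboard does not prescribe a significance test; one common choice for this kind of cohort comparison is a pooled two-proportion z-test, sketched here for illustration:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a conversion-rate difference between cohorts
    A (e.g. 30-day trials) and B (e.g. 15-day trials)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)          # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))       # pooled standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value
```

For example, 5/100 conversions pre-change vs 12/100 post-change gives z ≈ 1.77, which does not reach significance at α = 0.05, so the 100-trial minimum should be treated as a floor rather than a guarantee of detectable effects.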

Key Assumptions:

  • Trial period is tracked via TrialDaysRemaining dimension
  • Extension requests are explicit user actions (not automatic)
  • Purchase events are accurately captured via navigation or license status change
  • Session counts reflect actual product usage during trial

Configuration:

  • trialReductionDate: Set to implementation date (currently TBD)
  • All queries support dynamic timeframe via {timeFrameDays} parameter
  • Chart rendering uses pre-configured visualization settings

โš ๏ธ Risks & Mitigation

Potential Risks:

  • Insufficient Evaluation Time: Users may feel rushed and abandon without trying
    • Mitigation: Monitor early drop-off rates; easy extension process reduces friction
  • Negative Sentiment: Users may perceive shorter trials as less generous
    • Mitigation: Communicate value clearly; track extension request volume
  • Lower Activation: Urgency could discourage trial starts
    • Mitigation: Monitor trial start rate closely; revert if activation drops >10%

Rollback Plan: If guard metrics fail (activation drops, active user rate falls below 35%), revert to 30-day trials within 2 weeks.

🔮 Upcoming Growth Hacks

Future growth hacks in the planning and development pipeline.

📋
Growth Hack Pipeline

Additional growth hacks will be added here as they are planned and implemented.

Suggested next hacks: Progressive onboarding tooltips, In-app tutorial, Feature discovery prompts

๐Ÿ“ Growth Hacking Experiments โ€“ TODO

Planned growth hacking experiments and KPI tracking requirements. Items are organized by experiment pipeline and by the KPI framework ownership matrix.

🧪 Planned Growth Hack Experiments

Pipeline Status

These experiments are queued for implementation. Each will follow the same structured approach as Growth Hack #1 (Auto-Launch Hot Design): define hypothesis, instrument telemetry, set success criteria, and measure impact.

P1 = Next Sprint (highest impact) · P2 = Backlog (after P1 validation)

| # | Experiment | AARRR Stage | Hypothesis | Target Metric | Guard Metric | Priority | Status | Query Coverage |
|---|---|---|---|---|---|---|---|---|
| 1 | Auto-Launch Hot Design | Activation | Auto-launching Hot Design on first use reduces TTFV | Activation rate 35% → 60% | Early drop-off rate within first 30 seconds | P1 | ✅ LIVE | ✅ Available: gh1-auto-launch-adoption-rate, gh1-time-to-first-interaction, gh1-activation-comparison, gh1-onboarding-completion-rate, gh1-version-impact-summary, gh1-return-session-quality, gh1-session-duration-guard, gh1-early-dropoff-rate |
| 2 | Trial Period Reduction | Revenue | Shorter trial period increases urgency and conversion | Trial-to-paid conversion rate | 7-day retention and NPS remain stable | P1 | 📊 TRACKING | ✅ Available: trial-period-conversion |
| 3 | In-Product Referral Loop | Referral | Prompting users to share after successful outcomes increases qualified referrals | Referral invite rate and referred-user activation rate | Session satisfaction and completion rate do not decline | P1 | 🔜 PLANNED | ❌ Missing: no dedicated referral invite/share/referred-activation queries found |
| 4 | Win-Back Nudges for At-Risk Users | Retention | Contextual re-engagement nudges recover dormant users before churn | Reactivation rate after 7-day inactivity and Day-30 retention | Unsubscribe rate and negative feedback volume stay low | P1 | 🔜 PLANNED | 🟡 Partial: retention-curve-d1-d7-d14-d30 exists; no nudge-trigger or win-back attribution queries |
| 5 | Template Marketplace Spotlight | Activation / Retention | Promoting high-quality starter templates increases early success and repeat usage | Time to first meaningful action and feature depth per session | Template abandonment rate does not increase | P1 | 🔜 PLANNED | ❌ Missing: no template spotlight/template outcome queries found |
| 6 | Usage-Based Upgrade Moments | Revenue | Showing upgrade CTAs at value moments improves conversion versus generic prompts | CTA→trial conversion and trial→paid conversion | Activation and retention of free cohort do not regress | P1 | 🔜 PLANNED | 🟡 Partial: 8-user-navigation-to-purchase-funnel and trial-period-conversion exist; no value-moment CTA attribution query |
| 7 | Community Proof Surfaces | Acquisition / Activation | Surfacing social proof and examples increases confidence to start | Install→Project Created conversion rate | Bounce rate and onboarding completion remain stable | P2 | 🔜 PLANNED | ❌ Missing: no query for community-proof exposure or proof→activation conversion |
| 8 | Friction Kill List Sprint | Activation | Weekly removal of top onboarding frictions reduces early-session drop-offs | Drop-off within first 10 minutes and median TTFV | Regression rate and support ticket spikes remain low | P2 | 🔜 PLANNED | 🟡 Partial: gh1-early-dropoff-rate and operation-failure exist; no top-friction-priority query |

📊 KPI Framework – Ownership & Reporting Matrix

KPI Tracking Requirements

This matrix defines which KPIs each team owns, and whether they are reported internally only or also to the board. Items marked โœ… are actively tracked; items without a check are candidates for future instrumentation.

| KPI Team | Reporting Team | KPI | Internal | External (Board) |
|---|---|---|---|---|
| Marketing | Marketing | Operational KPIs (time to resolution per OSS insights, release cadence, etc.) | ✅ | |
| | | NPS (Net Promoter Score) | ✅ | |
| | | Projects Created | ✅ | ✅ |
| | | Community Growth (Newsletter / Discord / Socials) | ✅ | |
| | | (Future) Marketing Qualified Leads | ✅ | |
| Product Management | Marketing | NEW Monthly Active Users – Uno Platform | ✅ | |
| | | NEW Monthly Active Users – Uno Platform Studio | ✅ | ✅ |
| | | NEW Uno Platform Studio Community Licenses / Trials | ✅ | ✅ |
| | | Churn MAU Uno Platform | ✅ | ✅ |
| | | Churn MAU Uno Platform Studio | ✅ | |
| Product Management | Engineering | Operational KPIs (time to resolution per OSS insights, release cadence, etc.) | ✅ | |
| | | NPS (Net Promoter Score) | ✅ | |
| | | Projects Created | ✅ | ✅ |
| | | Monthly Active Users – Uno Platform (MAU) | ✅ | |
| | | Monthly Active Users – Uno Platform Studio (MAU) | ✅ | |
| | | NuGet Package Downloads | ✅ | ✅ |
| | | Issues Closed | ✅ | |
| Sales | Marketing | New Licenses Sold | ✅ | ✅ |
| | | Active License Holders | ✅ | ✅ |
| | | Retention Churn | ✅ | |
| | | (Future) Sales Qualified Leads | ✅ | |
| Product Management | Engineering | Licensed User Active Usage | ✅ | |

🔮 Future / Unassigned KPIs

Candidates for Future Tracking

These KPIs have been identified as valuable but do not yet have assigned ownership or instrumentation.

| KPI | Notes |
|---|---|
| Community Growth – Contributions | PRs, issues filed, community plugins |
| GitHub Stars | Vanity metric but useful for awareness tracking |
| Average Time to Close | Issue/PR resolution velocity |
| MRR (Monthly Recurring Revenue) | Revenue tracking – requires billing integration |
| Support Pipeline | Ticket volume, resolution time, escalation rate |
