🚀 Growth Hacks Implementation & Tracking
Growth hacks are strategic product changes designed to drive specific AARRR metrics. Each hack has clear success criteria, measurement methodology, and expected impact on key growth metrics.
🎯 Growth Hack #1: Auto-Launch Hot Design on First Use
📋 Overview
Objective: Reduce time-to-first-value by automatically launching Hot Design when a user first opens Studio.
Target Metric: Discovery and successful usage (per Feature Usage Definition)
Expected Impact: Increase users who discover Hot Design and then reach successful usage.
Version: Introduced in v6.5.64 (released 2026-02-03 per NuGet.org)
📅 Release Date Source: All queries use the official package publish timestamp from NuGet.org metadata (2026-02-03T00:10:21.73Z) to accurately segment Pre-6.5 vs Post-6.5 telemetry data. Metrics compare baseline behavior (before v6.5.64) against auto-launch behavior (v6.5.64 and later).
Status: ✅ IMPLEMENTED
Metrics 1.1-1.5 below focus on objective-aligned indicators: time to interaction, onboarding completion, and quality guardrails (return sessions, duration quality, and drop-off rate). The effectiveness strip above tracks discovery and successful usage directly from canonical feature usage telemetry.
⏱️ 1.1: Time to First Interaction
Measures: Time between first session start and first meaningful interaction (AI Agent vs Manual).
Target: Reduce median time from 15+ minutes to under 5 minutes.
AI Agent: Chat interactions (ChatGenerateStarted, ChatXamlPreviewApplied).
Manual: Toolbox, property, or element interactions.
Canonical query definitions are maintained in Query-Catalog.generated.md and sourced from www/config/queries.json. Embedded examples were removed to avoid drift from canonical identity, CI exclusion, and timeframe rules.
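Separately from the canonical queries, the metric definition itself is simple: per user, take the gap between the first session start and the first meaningful interaction, then take the median across users. A minimal Python sketch (purely illustrative, not a canonical query; the user IDs and timestamps below are invented):

```python
from datetime import datetime
from statistics import median

# Hypothetical per-user timestamps exported from telemetry; the canonical
# definitions live in Query-Catalog.generated.md / www/config/queries.json.
first_events = {
    "user-a": {"first_session": datetime(2026, 2, 10, 9, 0),
               "first_interaction": datetime(2026, 2, 10, 9, 3)},
    "user-b": {"first_session": datetime(2026, 2, 11, 14, 0),
               "first_interaction": datetime(2026, 2, 11, 14, 18)},
}

def minutes_to_first_interaction(per_user):
    """Median minutes from first session start to first meaningful interaction."""
    deltas = [
        (u["first_interaction"] - u["first_session"]).total_seconds() / 60
        for u in per_user.values()
        if u["first_interaction"] is not None
    ]
    return median(deltas) if deltas else None

print(minutes_to_first_interaction(first_events))  # -> 10.5
```

Users with no meaningful interaction are excluded from the median rather than counted as zero.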
✅ 1.2: Onboarding Completion Rate
Measures: Percentage of users who complete first session (session-ended event within 2 hours).
Target: 70%+ of users should complete their first Hot Design session.
Canonical query definitions are maintained in Query-Catalog.generated.md and sourced from www/config/queries.json. Embedded examples were removed to avoid drift from canonical identity, CI exclusion, and timeframe rules.
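The completion definition above (a session-ended event within 2 hours of the first session start) can be sketched as follows; the window matches this section, while the records themselves are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical first-session records: (session-started, session-ended or None).
first_sessions = [
    (datetime(2026, 2, 10, 9, 0), datetime(2026, 2, 10, 9, 40)),  # completed
    (datetime(2026, 2, 11, 14, 0), None),                         # no end event
    (datetime(2026, 2, 12, 8, 0), datetime(2026, 2, 12, 11, 0)),  # ended after 2h
]

def onboarding_completion_rate(sessions, window=timedelta(hours=2)):
    """Share of first sessions ended within `window` of their start."""
    completed = sum(
        1 for start, end in sessions if end is not None and end - start <= window
    )
    return completed / len(sessions)

print(f"{onboarding_completion_rate(first_sessions):.0%}")  # -> 33%
```

A missing end event and an end outside the window both count as non-completion.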
🛡️ Guard Metrics (Prevent Negative Side Effects)
These metrics ensure auto-launch improves, not forces, engagement. All should remain stable or improve post-v6.5.
🔁 1.3: Return Session Rate (Guard Metric)
Measures: Percentage of users who voluntarily return for a second session (2-7 days after first session).
Guard Threshold: >50% return rate. Ensures users come back because they want to, not just because of auto-launch.
Compares: Pre-6.5 vs Post-6.5 return rates to ensure feature doesn't harm voluntary engagement.
Canonical query definitions are maintained in Query-Catalog.generated.md and sourced from www/config/queries.json. Embedded examples were removed to avoid drift from canonical identity, CI exclusion, and timeframe rules.
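For this guard, only a second session landing 2-7 days after the first counts as a voluntary return. An illustrative sketch (session dates are made up):

```python
from datetime import date

# Hypothetical per-user session dates; the first entry is the first session.
user_sessions = {
    "user-a": [date(2026, 2, 10), date(2026, 2, 13)],  # returns on day 3
    "user-b": [date(2026, 2, 10)],                      # never returns
    "user-c": [date(2026, 2, 11), date(2026, 2, 12)],  # day 1: outside 2-7 window
}

def return_session_rate(sessions_by_user, lo=2, hi=7):
    """Share of users with a second session 2-7 days after their first."""
    returned = sum(
        1 for dates in sessions_by_user.values()
        if any(lo <= (d - dates[0]).days <= hi for d in dates[1:])
    )
    return returned / len(sessions_by_user)

print(f"{return_session_rate(user_sessions):.0%}")  # -> 33%
```

Note that user-c's day-1 return falls outside the window, so only one of the three users counts.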
⏰ 1.4: Session Duration Quality (Guard Metric)
Measures: Median session duration (minutes) with P25/P75 percentiles before and after v6.5.
Guard Threshold: Duration should remain stable or increase. Prevents optimizing for quantity over quality.
Compares: Pre-6.5 baseline vs Post-6.5 to ensure auto-launch maintains quality engagement.
Canonical query definitions are maintained in Query-Catalog.generated.md and sourced from www/config/queries.json. Embedded examples were removed to avoid drift from canonical identity, CI exclusion, and timeframe rules.
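The median with P25/P75 spread can be computed directly with the standard library (the durations below are hypothetical):

```python
from statistics import median, quantiles

# Hypothetical session durations in minutes; real values come from the
# SessionDuration measurement on the session-end event.
durations_min = [4.0, 7.5, 12.0, 20.0, 35.0]

def duration_quality(durations):
    """Median session duration with P25/P75 spread, per the guard metric."""
    p25, _, p75 = quantiles(durations, n=4)  # quartiles, exclusive method by default
    return {"p25": p25, "median": median(durations), "p75": p75}

print(duration_quality(durations_min))
```

`statistics.quantiles` uses the exclusive method by default; pick a method deliberately if results must match another tool's percentiles.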
⚠️ 1.5: Early Drop-off Rate (Guard Metric)
Measures: Percentage of users who close Hot Design within 30 seconds of session start.
Guard Threshold: <30% early drop-offs. High rate indicates auto-launch may be intrusive or poorly timed.
Time Between: Measures session-started to session-ended duration for all sessions.
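A sketch of the drop-off calculation from session durations in seconds (values invented):

```python
# Hypothetical session durations in seconds, derived from session-started
# and session-ended timestamps; two of the five close almost immediately.
durations_sec = [5, 12, 240, 900, 1800]

def early_dropoff_rate(durations, threshold_sec=30):
    """Share of sessions closed within `threshold_sec` of session start."""
    return sum(1 for d in durations if d < threshold_sec) / len(durations)

print(early_dropoff_rate(durations_sec))  # -> 0.4
```

The 0.4 result in this made-up sample would breach the <30% guard threshold and signal an intrusive auto-launch.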
📊 Success Criteria & Monitoring
All metrics compare Pre-v6.5 (baseline) vs Post-v6.5 (with auto-launch)
- Primary Success Metrics (Discovery + Successful Usage):
- Discovery users for Hot Design trend upward over baseline weeks
- Successful usage after discovery remains above target threshold and improves over baseline
- Secondary Metrics (Engagement):
- Median time-to-first-interaction drops from 15+ minutes to under 5 minutes
- 70%+ of new users complete first Hot Design session
- Weekly IDE completion cohort trend remains positive
- Guard Metrics (Quality/Recurring Use):
- ✅ Recurring use: User-initiated return sessions remain above 50% (voluntary engagement)
- ✅ Session duration: Median/average duration stable or increases (quality maintained)
- ✅ Early drop-offs: <30% close within 30 seconds (no intrusion signal)
- ✅ Time between events: Track session-started to session-ended for quality analysis
🔧 Implementation Notes
Version Tracking:
- Release Date: Version 6.5.64 NuGet publish timestamp 2026-02-03T00:10:21.73Z
- Pre-6.5 (Baseline): All sessions were manually initiated by users
- Post-6.5 (Treatment): First session auto-launches; subsequent sessions are user-initiated
- Measurement Window: Compare cohorts before/after release date for true impact assessment
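Cohort segmentation is a pure timestamp comparison against the NuGet publish time quoted above; for example:

```python
from datetime import datetime, timezone

# Official v6.5.64 publish timestamp from NuGet.org, as quoted in this document.
RELEASE_TS = datetime(2026, 2, 3, 0, 10, 21, 730000, tzinfo=timezone.utc)

def cohort(event_ts: datetime) -> str:
    """Assign a telemetry event to the baseline or treatment cohort."""
    return "post-6.5" if event_ts >= RELEASE_TS else "pre-6.5"

print(cohort(datetime(2026, 2, 2, 23, 59, tzinfo=timezone.utc)))  # -> pre-6.5
print(cohort(datetime(2026, 2, 3, 8, 0, tzinfo=timezone.utc)))    # -> post-6.5
```

Comparing timezone-aware datetimes avoids off-by-hours errors when telemetry timestamps arrive in UTC.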
Telemetry Events Used:
- `uno/hot-design/usage` with `UsageEvent = "DebugSessionStarted"`: Tracks each Hot Design launch
  - Pre-6.5: All occurrences are manual launches (baseline behavior)
  - Post-6.5 first occurrence per user: Auto-launch (growth hack)
  - Post-6.5 second+ occurrence per user: User-initiated launch (voluntary return)
- `uno/hot-design/session-end` with `SessionDuration` measurement: Tracks session completion and duration
  - Duration: Time between session-started and session-ended events
  - Early drop-off: Duration < 30 seconds indicates immediate close
- `uno/hot-design/usage` with interaction events: Tracks meaningful actions: `ToolboxDragToCanvas`, `ToolboxDragToElements`, `ToolboxDoubleClickToAdd`, `PropertyChanged`, `AddElement`
Natural A/B Test Design:
- Historical Control: Pre-6.5 users (all manual launches)
- Treatment Group: Post-6.5 first sessions (auto-launch)
- Internal Control: Post-6.5 second sessions (user-initiated, same users)
This three-way comparison lets us measure both the absolute impact against the historical baseline and the incremental effect of auto-launch versus manual launch within the same user cohort.
⏱️ Growth Hack #2: Trial Period Reduction (30→15 Days)
Release Date: TBD | Hypothesis: Shorter trial periods create urgency, leading to more trial extensions and faster purchase decisions.
Key Questions: (1) Does reducing trials from 30 to 15 days increase extension requests? (2) Do we see higher purchase conversion rates?
🎯 Experiment Overview
Current State: 30-day trial period for Hot Design features.
Proposed Change: Reduce trial period to 15 days.
Rationale:
- Urgency Creation: Shorter trials force users to evaluate value faster
- Faster Decisions: Reduces "I'll decide later" procrastination
- Active Engagement: Encourages users to engage with the product sooner
- Extension Opportunities: Users who need more time will explicitly request it (signal of interest)
Expected Outcomes:
- ↑ Trial extension request rate (users actively seeking more time = strong engagement signal)
- ↑ Purchase conversion rate (urgency drives decision-making)
- ↓ Time to purchase decision (faster conversion cycles)
- ↑ Trial usage intensity (users engage more actively in shorter window)
📊 2.1: Trial Period Comparison Summary
Measures: Key metrics comparing 30-day vs 15-day trial periods.
Success Indicators: Higher extension rate + higher conversion rate = experiment success.
Data: Extension requests, purchases, session counts, time to purchase.
| Trial Period | Total Trials | Extension Rate (%) | Conversion Rate (%) | Avg Sessions | Median Days to Purchase |
|---|---|---|---|---|---|
| Loading data... | | | | | |
Canonical query definitions are maintained in Query-Catalog.generated.md and sourced from www/config/queries.json. Embedded examples were removed to avoid drift from canonical identity, CI exclusion, and timeframe rules.
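The summary columns reduce to simple per-cohort rates. A sketch over hypothetical trial records (the real numbers come from the canonical trial-period-conversion query):

```python
# Hypothetical per-trial records: (cohort, requested_extension, purchased, sessions).
trials = [
    ("30-day", False, False, 3),
    ("30-day", True,  True,  9),
    ("15-day", True,  False, 6),
    ("15-day", True,  True,  8),
]

def trial_summary(records):
    """Per-cohort extension rate, conversion rate, and average session count."""
    summary = {}
    for cohort in sorted({r[0] for r in records}):
        rows = [r for r in records if r[0] == cohort]
        n = len(rows)
        summary[cohort] = {
            "trials": n,
            "extension_rate": sum(r[1] for r in rows) / n,
            "conversion_rate": sum(r[2] for r in rows) / n,
            "avg_sessions": sum(r[3] for r in rows) / n,
        }
    return summary

print(trial_summary(trials))
```

Median days to purchase would need per-trial purchase timestamps, which are omitted here for brevity.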
📈 2.2: Trial Urgency Timeline
Measures: Daily trend of new trials, extension requests, and purchases by trial period.
Purpose: Identify if 15-day trials show faster conversion velocity.
Interpretation: Steeper slopes for 15-day cohort = successful urgency creation.
🔻 2.3: Trial Lifecycle Funnel
Measures: Conversion funnel from trial start → active usage → extension → purchase.
Active Users: 5+ sessions during trial period (meaningful engagement).
Success Pattern: Higher funnel efficiency for 15-day trials = better urgency.
| Trial Period | Trial Starts | Active Users | Active % | Extensions | Extension % | Purchases | Purchase % | Overall Conv % |
|---|---|---|---|---|---|---|---|---|
| Loading data... | | | | | | | | |
✅ Success Criteria
Experiment succeeds if 15-day trials show:
- Primary Metric (Extension Rate):
- Extension request rate increases by 30%+ vs 30-day trials
- Example: 30-day baseline = 10% → 15-day target = 13%+
- Primary Metric (Purchase Conversion):
- Purchase conversion rate increases by 15%+ vs 30-day trials
- Example: 30-day baseline = 5% → 15-day target = 5.75%+
- Secondary Metrics:
- Time to purchase decision decreases by 25%+
- Trial usage intensity (sessions per user) remains stable or increases
- Guard Metrics:
- ✅ Overall activation rate (trial starts) does not decrease
- ✅ Active user rate (5+ sessions) remains above 40%
- ✅ Purchase-to-extension ratio improves (more purchases per extension request)
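The relative-uplift thresholds above can be checked mechanically. A sketch, with a small tolerance so borderline float products (e.g. 0.10 × 1.3) don't spuriously fail:

```python
def meets_uplift(baseline, treatment, min_relative_uplift, tol=1e-9):
    """True when treatment beats baseline by the required relative uplift.

    A small tolerance avoids spurious failures from float rounding,
    e.g. 0.10 * 1.3 evaluating to slightly more than 0.13.
    """
    return treatment >= baseline * (1 + min_relative_uplift) - tol

print(meets_uplift(0.10, 0.13, 0.30))    # extension rate: 10% -> 13% (+30%) -> True
print(meets_uplift(0.05, 0.0575, 0.15))  # conversion: 5% -> 5.75% (+15%) -> True
```

The example values match the worked thresholds in the success criteria above.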
🔧 Implementation Notes
Telemetry Events Used:
- `uno/licensing/license-status` with `LicenseName = "Trial"` and `TrialDaysRemaining`: Tracks trial periods
- `uno/licensing/nav-to-trial-extension`: User requests a trial extension (strong interest signal)
- `uno/licensing/nav-to-purchase-now`: User navigates to purchase (conversion intent)
- `uno/hot-design/session-started`: Hot Design usage during trial (engagement)
Measurement Approach:
- Cohort Comparison: Pre-change (30-day) vs Post-change (15-day)
- Minimum Sample: 100+ trials per cohort for adequate statistical power
- Measurement Window: 30 days post-trial start (captures full lifecycle)
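For judging significance once both cohorts pass the 100-trial minimum, a standard two-proportion z-test is one option. This is a sketch, not part of the canonical tooling; the 5% vs 12% rates below are invented:

```python
from math import erf, sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two conversion rates."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical cohorts at the 100+ trials-per-cohort minimum noted above.
z, p = two_proportion_z(p1=0.05, n1=150, p2=0.12, n2=150)
print(round(z, 2), p < 0.05)  # -> 2.17 True
```

With smaller rate differences, 100-150 trials per cohort may not reach significance, so treat the minimum as a floor rather than a guarantee.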
Key Assumptions:
- Trial period is tracked via the `TrialDaysRemaining` dimension
- Extension requests are explicit user actions (not automatic)
- Purchase events are accurately captured via navigation or license status change
- Session counts reflect actual product usage during trial
Configuration:
- `trialReductionDate`: Set to implementation date (currently TBD)
- All queries support a dynamic timeframe via the `{timeFrameDays}` parameter
- Chart rendering uses pre-configured visualization settings
⚠️ Risks & Mitigation
Potential Risks:
- Insufficient Evaluation Time: Users may feel rushed and abandon without trying
- Mitigation: Monitor early drop-off rates; easy extension process reduces friction
- Negative Sentiment: Users may perceive shorter trials as less generous
- Mitigation: Communicate value clearly; track extension request volume
- Lower Activation: Urgency could discourage trial starts
- Mitigation: Monitor trial start rate closely; revert if activation drops >10%
Rollback Plan: If guard metrics fail (activation drops, active user rate falls below 35%), revert to 30-day trials within 2 weeks.
🔮 Upcoming Growth Hacks
Future growth hacks in the planning and development pipeline.
Additional growth hacks will be added here as they are planned and implemented.
Suggested next hacks: Progressive onboarding tooltips, In-app tutorial, Feature discovery prompts
📋 Growth Hacking Experiments – TODO
Planned growth hacking experiments and KPI tracking requirements. Items are organized by experiment pipeline and by the KPI framework ownership matrix.
🧪 Planned Growth Hack Experiments
Pipeline Status
These experiments are queued for implementation. Each will follow the same structured approach as Growth Hack #1 (Auto-Launch Hot Design): define hypothesis, instrument telemetry, set success criteria, and measure impact.
| # | Experiment | AARRR Stage | Hypothesis | Target Metric | Guard Metric | Priority | Status | Query Coverage |
|---|---|---|---|---|---|---|---|---|
| 1 | Auto-Launch Hot Design | Activation | Auto-launching Hot Design on first use reduces TTFV | Activation rate 35% → 60% | Early drop-off rate within first 30 seconds | P1 | ✅ LIVE | ✅ Available: gh1-auto-launch-adoption-rate, gh1-time-to-first-interaction, gh1-activation-comparison, gh1-onboarding-completion-rate, gh1-version-impact-summary, gh1-return-session-quality, gh1-session-duration-guard, gh1-early-dropoff-rate |
| 2 | Trial Period Reduction | Revenue | Shorter trial period increases urgency and conversion | Trial-to-paid conversion rate | 7-day retention and NPS remain stable | P1 | 📊 TRACKING | ✅ Available: trial-period-conversion |
| 3 | Referral Loop In-Product | Referral | Prompting users to share after successful outcomes increases qualified referrals | Referral invite rate and referred-user activation rate | Session satisfaction and completion rate do not decline | P1 | 📋 PLANNED | ❌ Missing: no dedicated referral invite/share/referred-activation queries found |
| 4 | Win-Back Nudges for At-Risk Users | Retention | Contextual re-engagement nudges recover dormant users before churn | Reactivation rate after 7-day inactivity and Day-30 retention | Unsubscribe rate and negative feedback volume stay low | P1 | 📋 PLANNED | 🟡 Partial: retention-curve-d1-d7-d14-d30 exists; no nudge-trigger or win-back attribution queries |
| 5 | Template Marketplace Spotlight | Activation / Retention | Promoting high-quality starter templates increases early success and repeat usage | Time to first meaningful action and feature depth per session | Template abandonment rate does not increase | P1 | 📋 PLANNED | ❌ Missing: no template spotlight/template outcome queries found |
| 6 | Usage-Based Upgrade Moments | Revenue | Showing upgrade CTAs at value moments improves conversion versus generic prompts | CTA→trial conversion and trial→paid conversion | Activation and retention of free cohort do not regress | P1 | 📋 PLANNED | 🟡 Partial: 8-user-navigation-to-purchase-funnel and trial-period-conversion exist; no value-moment CTA attribution query |
| 7 | Community Proof Surfaces | Acquisition / Activation | Surfacing social proof and examples increases confidence to start | Install→Project Created conversion rate | Bounce rate and onboarding completion remain stable | P2 | 📋 PLANNED | ❌ Missing: no query for community-proof exposure or proof→activation conversion |
| 8 | Friction Kill List Sprint | Activation | Weekly removal of top onboarding frictions reduces early-session drop-offs | Drop-off within first 10 minutes and median TTFV | Regression rate and support ticket spikes remain low | P2 | 📋 PLANNED | 🟡 Partial: gh1-early-dropoff-rate and operation-failure exist; no top-friction-priority query |
📋 KPI Framework – Ownership & Reporting Matrix
KPI Tracking Requirements
This matrix defines which KPIs each team owns, and whether they are reported internally only or also to the board. Items marked ✅ are actively tracked; items without a check are candidates for future instrumentation.
| KPI Team | Reporting Team | KPI | Internal | External (Board) |
|---|---|---|---|---|
| Marketing | Marketing | Operational KPIs (time to resolution per OSS insights, release cadence, etc.) | ✅ | |
| | | NPS (Net Promoter Score) | ✅ | |
| | | Projects Created | ✅ | ✅ |
| | | Community Growth (Newsletter / Discord / Socials) | ✅ | |
| | | (Future) Marketing Qualified Leads | ✅ | |
| Product Management | Marketing | NEW Monthly Active Users – Uno Platform | ✅ | |
| | | NEW Monthly Active Users – Uno Platform Studio | ✅ | ✅ |
| | | NEW Uno Platform Studio Community Licenses / Trials | ✅ | ✅ |
| | | Churn MAU Uno Platform | ✅ | ✅ |
| | | Churn MAU Uno Platform Studio | ✅ | |
| Product Management | Engineering | Operational KPIs (time to resolution per OSS insights, release cadence, etc.) | ✅ | |
| | | NPS (Net Promoter Score) | ✅ | |
| | | Projects Created | ✅ | ✅ |
| | | Monthly Active Users – Uno Platform (MAU) | ✅ | |
| | | Monthly Active Users – Uno Platform Studio (MAU) | ✅ | |
| | | NuGet Package Downloads | ✅ | ✅ |
| | | Issues Closed | ✅ | |
| Sales | Marketing | New Licenses Sold | ✅ | ✅ |
| | | Active License Holders | ✅ | ✅ |
| | | Retention Churn | ✅ | |
| | | (Future) Sales Qualified Leads | ✅ | |
| Product Management | Engineering | Licensed User Active Usage | ✅ | |
🔮 Future / Unassigned KPIs
Candidates for Future Tracking
These KPIs have been identified as valuable but do not yet have assigned ownership or instrumentation.
| KPI | Notes |
|---|---|
| Community Growth โ Contributions | PRs, issues filed, community plugins |
| GitHub Stars | Vanity metric but useful for awareness tracking |
| Average Time to Close | Issue/PR resolution velocity |
| MRR (Monthly Recurring Revenue) | Revenue tracking โ requires billing integration |
| Support Pipeline | Ticket volume, resolution time, escalation rate |