Dashboard Sections Guide

This guide provides detailed explanations of each dashboard section, the metrics they display, and how to interpret the data to make informed decisions.

Table of Contents

  • Overview Dashboard
  • Impact Metrics
  • Velocity & Visibility
  • Workload Balance
  • Developer Insights
  • Pull Requests
  • Settings
  • Profile
  • Best Practices Across Sections
  • Getting Help

Overview Dashboard

The Overview dashboard provides a high-level view of your engineering team's performance and health.

Key Components

Health Status Overview

  • Overall Health Score - Aggregated health indicator across all metrics
  • Status Breakdown - Count of metrics in good/warning/critical states
  • Trend Indicators - Whether metrics are improving or declining

Risky Pull Requests

  • Large PRs with Few Reviewers - PRs >500 lines with <2 reviewers
  • Very Large PRs - PRs >1000 lines with <3 reviewers
  • Stale PRs - PRs open >7 days (configurable threshold)
  • Action Required - Click to view detailed list and take action
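
To make these rules concrete, the sketch below applies them to a single pull request. The function and field names are illustrative only and do not reflect the dashboard's internal implementation.

```python
from datetime import datetime, timezone, timedelta

def classify_pr(lines_changed, reviewer_count, opened_at, stale_days=7):
    """Apply the Overview's risk rules above to one PR; names are illustrative."""
    labels = []
    if lines_changed > 1000 and reviewer_count < 3:
        labels.append("very large, few reviewers")
    elif lines_changed > 500 and reviewer_count < 2:
        labels.append("large, few reviewers")
    if datetime.now(timezone.utc) - opened_at > timedelta(days=stale_days):
        labels.append("stale")
    return labels

# A 1,200-line PR with two reviewers, opened ten days ago:
print(classify_pr(1200, 2, datetime.now(timezone.utc) - timedelta(days=10)))
# -> ['very large, few reviewers', 'stale']
```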

Stale Pull Requests

  • Open Duration - How long PRs have been open
  • Last Activity - When PRs were last updated
  • Repository Filter - Filter by specific repositories
  • Quick Actions - Links to review or close PRs

Actionable Metrics Cards

Cycle Time

  • Average time from first commit to PR merge
  • Status: Good (<3 days), Warning (3-7 days), Critical (>7 days)
  • Trend indicator showing improvement or decline

Review Time

  • Average time from PR creation to first review
  • Status: Good (<24 hours), Warning (24-48 hours), Critical (>48 hours)
  • Helps identify review bottlenecks

Rework Rate

  • Percentage of commits made after first review
  • Status: Good (<20%), Warning (20-30%), Critical (>30%)
  • Indicates code quality and review effectiveness

Bus Factor

  • Minimum number of contributors who together account for 50% of the codebase's contributions
  • Status: Good (>3), Warning (2-3), Critical (<2)
  • Measures knowledge distribution and risk
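
A minimal sketch of how a bus factor like this can be computed, assuming contribution is measured as lines owned per contributor (the dashboard may weight contributions differently):

```python
def bus_factor(lines_by_contributor, threshold=0.5):
    """Smallest number of contributors who together account for
    `threshold` (default 50%) of all analyzed lines."""
    total = sum(lines_by_contributor.values())
    covered, count = 0, 0
    for lines in sorted(lines_by_contributor.values(), reverse=True):
        covered += lines
        count += 1
        if covered >= threshold * total:
            return count
    return count

# Example: one dominant contributor -> bus factor of 1 (critical).
print(bus_factor({"alice": 800, "bob": 150, "carol": 50}))  # -> 1
```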

Performance Charts

Velocity Trends

  • Line chart showing cycle time and review time over time
  • Compare different timeframes
  • Identify seasonal patterns or process changes

PR Classification

  • Breakdown of PRs by size and review status
  • Categories: Small, Medium, Large, Very Large
  • Review status: Approved, Needs Review, Stale

Bottleneck Heatmap

  • Visual representation of review bottlenecks
  • Shows which reviewers are overloaded
  • Identifies time periods with high load

Flow Efficiency Waterfall

  • Breakdown of PR lifecycle stages
  • Shows time spent in each stage (draft, review, merge)
  • Helps identify process improvements

Recommendations

  • AI-powered suggestions based on your metrics
  • Prioritized by impact and urgency
  • Actionable steps to improve team performance

How to Use the Overview

  1. Start Here - Check Overview first for a quick health check
  2. Identify Issues - Look for red indicators and risky PRs
  3. Drill Down - Click on metrics to see detailed breakdowns
  4. Take Action - Follow recommendations to address issues
  5. Track Progress - Monitor trends to see improvements

Impact Metrics

The Impact section measures the significance and value of contributions beyond simple commit counts.

Key Metrics

Overall Impact Score

  • Range: 0-100 (higher is better)
  • Calculation: Weighted combination of:
    • PR size (additions + deletions)
    • Review engagement (number of reviewers, comments)
    • Merge frequency and speed
    • Collaboration patterns
  • Interpretation:
    • 70-100: High impact contributions
    • 40-69: Moderate impact
    • 0-39: Lower impact (may need attention)

High-Impact PRs

  • PRs with impact scores above a threshold
  • Filtered list showing:
    • PR title and number
    • Impact score
    • Repository
    • Contributors involved
    • Link to view in provider

Contributor Impact Rankings

  • Top Contributors - Ranked by total impact score
  • Impact Distribution - Bar chart showing impact across team
  • Individual Scores - Per-contributor impact metrics

Impact Score Calculation

The impact score considers multiple factors:

  1. PR Size (30% weight)
    • Lines added + deleted
    • Larger PRs generally have more impact
  2. Review Engagement (25% weight)
    • Number of reviewers
    • Number of review comments
    • Review depth and quality
  3. Merge Frequency (20% weight)
    • How often PRs are merged
    • Merge speed (faster = higher impact)
  4. Collaboration (15% weight)
    • Cross-repository contributions
    • Team collaboration patterns
  5. Code Quality (10% weight)
    • Rework rate (lower is better)
    • Review approval patterns
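
To make the weighting concrete, here is a minimal sketch that combines the five factors, assuming each factor has already been normalized to a 0-1 range (the normalization itself is not documented here and is an assumption of this sketch):

```python
WEIGHTS = {
    "size": 0.30,           # lines added + deleted
    "review": 0.25,         # reviewers, comments, review depth
    "merge": 0.20,          # merge frequency and speed
    "collaboration": 0.15,  # cross-repo and team collaboration
    "quality": 0.10,        # rework rate (lower is better)
}

def impact_score(factors):
    """Weighted 0-100 score from factors pre-normalized to 0-1."""
    raw = sum(WEIGHTS[name] * value for name, value in factors.items())
    return round(100 * raw, 1)

# Example: strong review engagement, moderate size.
print(impact_score({"size": 0.5, "review": 0.9, "merge": 0.7,
                    "collaboration": 0.4, "quality": 0.8}))  # -> 65.5
```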

How to Use Impact Metrics

  1. Recognize High Performers - Identify contributors making significant impact
  2. Balance Workload - Ensure high-impact work is distributed
  3. Set Goals - Use impact scores to guide contribution goals
  4. Track Trends - Monitor how impact scores change over time
  5. Improve Processes - Identify what makes PRs high-impact

Interpreting Impact Data

High Impact Score + High Volume

  • Contributor is highly productive and impactful
  • May indicate expertise or ownership

High Impact Score + Low Volume

  • Quality over quantity approach
  • May indicate senior contributor or complex work

Low Impact Score + High Volume

  • Many small contributions
  • May indicate maintenance work or bug fixes

Low Impact Score + Low Volume

  • Contributor may need support or different assignments

Velocity & Visibility

The Velocity & Visibility section tracks delivery speed, review efficiency, and team activity patterns.

Velocity Metrics

Cycle Time

  • Definition: Time from first commit to PR merge
  • Components:
    • Development time (first commit to PR creation)
    • Review time (PR creation to first review)
    • Revision time (first review to merge)
  • Targets:
    • Good: <3 days
    • Warning: 3-7 days
    • Critical: >7 days
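
The breakdown can be illustrated with a small sketch that splits cycle time into the three components from the timestamps involved; the function and argument names are illustrative, not the dashboard's API:

```python
from datetime import datetime

def cycle_time_breakdown(first_commit, pr_created, first_review, merged):
    """Split cycle time into the components listed above (datetime inputs)."""
    return {
        "development": pr_created - first_commit,
        "review": first_review - pr_created,
        "revision": merged - first_review,
        "total": merged - first_commit,
    }

parts = cycle_time_breakdown(
    datetime(2024, 1, 1, 9), datetime(2024, 1, 2, 14),
    datetime(2024, 1, 3, 10), datetime(2024, 1, 4, 16))
print(parts["total"])  # 3 days, 7:00:00 -> falls in the Warning band (3-7 days)
```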

Review Time

  • Definition: Time from PR creation to first review
  • Measures: How quickly PRs get initial attention
  • Targets:
    • Good: <24 hours
    • Warning: 24-48 hours
    • Critical: >48 hours

Rework Rate

  • Definition: Percentage of commits made after first review
  • Formula: (Commits after first review / Total commits) × 100
  • Interpretation:
    • Low (<20%): Good initial quality
    • Medium (20-30%): Normal iteration
    • High (>30%): May indicate unclear requirements or heavy review-driven changes
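
A minimal sketch of the formula and status bands above (the handling of values exactly at 20% and 30% is an assumption):

```python
def rework_rate(commits_after_first_review, total_commits):
    """Rework rate as defined above: post-review commits / total commits, as a percentage."""
    if total_commits == 0:
        return 0.0
    return 100 * commits_after_first_review / total_commits

def rework_status(rate):
    # Bands from the interpretation above.
    if rate < 20:
        return "good"
    if rate <= 30:
        return "warning"
    return "critical"

print(rework_status(rework_rate(4, 12)))  # 33.3% -> "critical"
```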

Bottleneck Detection

  • Stuck PRs: PRs waiting for review >48 hours
  • Overloaded Reviewers: Reviewers with too many PRs in queue
  • Time Periods: Identify when bottlenecks occur

Visibility Metrics

Cross-Repository Activity

  • Definition: Work distribution across multiple repositories
  • Shows: Which contributors work across repos
  • Benefits: Identifies knowledge sharing and collaboration

Active PR Tracking

  • Open PRs: Currently open pull requests
  • By Repository: Breakdown by repository
  • By Contributor: Who has open PRs
  • Age Distribution: How long PRs have been open

Review Participation

  • Reviewer Engagement: Who reviews PRs
  • Review Frequency: How often contributors review
  • Review Distribution: Fairness of review workload

Activity Heatmaps

  • Time-based: Activity by day of week or time of day
  • Contributor-based: Activity by team member
  • Repository-based: Activity by repository

Charts and Visualizations

Velocity Trends

  • Line chart showing cycle time and review time over time
  • Compare different periods
  • Identify process improvements or regressions

Flow Efficiency Waterfall

  • Breakdown of time spent in each PR stage
  • Shows bottlenecks in the process
  • Helps identify optimization opportunities

Bottleneck Heatmap

  • Visual representation of review bottlenecks
  • Shows which reviewers are overloaded
  • Identifies time periods with high load

How to Use Velocity & Visibility

  1. Identify Bottlenecks - Find where PRs get stuck
  2. Optimize Review Process - Improve review time and distribution
  3. Track Improvements - Monitor velocity trends over time
  4. Balance Workload - Ensure fair distribution of reviews
  5. Improve Cycle Time - Reduce time from commit to merge

Action Items

If Cycle Time is High:

  • Review development practices
  • Reduce PR size (smaller PRs get reviewed faster)
  • Streamline review process

If Review Time is High:

  • Increase reviewer availability
  • Set review expectations (SLA)
  • Use review rotation

If Rework Rate is High:

  • Improve initial code quality
  • Clarify requirements before coding
  • Enhance review feedback quality

Workload Balance

The Workload section helps identify burnout risks and ensure fair distribution of work across your team.

Key Metrics

Burnout Risk Assessment

  • Individual Risk Scores: Per-contributor risk assessment
  • Risk Levels:
    • Low: Healthy workload, good balance
    • Medium: Approaching limits, monitor closely
    • High: At risk, needs immediate attention
  • Factors Considered:
    • PR volume (authored + reviewed)
    • Work intensity (commits per day)
    • Work distribution (across repos)
    • Review load (number of reviews)
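
The exact risk model is not documented here, but an illustrative heuristic along these lines compares each factor to the team median; the field names and the 1.5x cutoff below are assumptions, not the dashboard's actual formula:

```python
def burnout_risk(contributor, team_median):
    """Illustrative heuristic: count workload factors well above the team
    median and map that count to a risk level."""
    factors = ["prs_authored", "prs_reviewed", "commits_per_day", "repos_touched"]
    elevated = sum(
        1 for f in factors
        if contributor.get(f, 0) > 1.5 * team_median.get(f, 0)
    )
    if elevated >= 3:
        return "high"
    if elevated >= 1:
        return "medium"
    return "low"

print(burnout_risk(
    {"prs_authored": 12, "prs_reviewed": 20, "commits_per_day": 6, "repos_touched": 4},
    {"prs_authored": 5, "prs_reviewed": 6, "commits_per_day": 2, "repos_touched": 2},
))  # -> "high"
```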

Workload Distribution

  • Gini Coefficient: Measures fairness of work distribution (0 = perfect equality, 1 = perfect inequality)
  • Target: <0.3 (relatively fair distribution)
  • Interpretation:
    • Low (<0.3): Fair distribution
    • Medium (0.3-0.5): Some imbalance
    • High (>0.5): Significant imbalance, needs attention
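
For reference, a standard Gini calculation over per-contributor workloads (for example, PR counts) looks like this:

```python
def gini(workloads):
    """Gini coefficient of per-contributor workloads:
    0 = perfectly even, 1 = one person does everything."""
    values = sorted(workloads)
    n = len(values)
    total = sum(values)
    if n == 0 or total == 0:
        return 0.0
    # Rank-weighted sum of the sorted values (standard discrete formula).
    weighted = sum((i + 1) * v for i, v in enumerate(values))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(round(gini([10, 10, 10, 10]), 2))  # 0.0  -> fair distribution
print(round(gini([1, 1, 1, 37]), 2))     # 0.68 -> significant imbalance
```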

PR Volume Tracking

  • Authored PRs: Number of PRs created per contributor
  • Reviewed PRs: Number of PRs reviewed per contributor
  • Ratio: Balance between authoring and reviewing
  • Target: Balanced ratio (not all authoring or all reviewing)

Top Contributors

  • High-Load Contributors: Contributors with highest workload
  • Workload Breakdown: Authored vs reviewed PRs
  • Risk Indicators: Contributors approaching burnout

Charts and Visualizations

Workload Distribution Chart

  • Bar chart showing PR volume per contributor
  • Color-coded by risk level
  • Shows authored vs reviewed breakdown

Risk Level Pie Chart

  • Distribution of team members by risk level
  • Quick view of team health
  • Identifies how many need attention

Contributor Workload Comparison

  • Side-by-side comparison of contributors
  • Helps identify imbalances
  • Supports workload rebalancing decisions

How to Use Workload Metrics

  1. Identify At-Risk Contributors - Find team members with high burnout risk
  2. Balance Workload - Redistribute work to achieve fairness
  3. Monitor Trends - Track workload changes over time
  4. Prevent Burnout - Take action before issues escalate
  5. Improve Team Health - Create sustainable work patterns

Action Items

If Burnout Risk is High:

  • Reduce workload for at-risk contributors
  • Redistribute work to other team members
  • Provide additional support or resources
  • Review and adjust expectations

If Workload Distribution is Uneven:

  • Identify contributors with low workload
  • Redistribute work to balance the team
  • Consider cross-training to increase capacity
  • Review assignment processes

If PR Volume is Imbalanced:

  • Balance authoring and reviewing responsibilities
  • Rotate review assignments
  • Set expectations for review participation
  • Recognize reviewers for their contributions

Developer Insights

The Developer Insights section provides detailed metrics and analysis for individual team members.

Key Features

Individual Developer Profiles

  • Overview Metrics: Impact score, PR volume, review participation
  • Activity Timeline: Recent contributions and activity
  • Repository Distribution: Work across different repositories
  • Performance Trends: Changes over time

Contributor Rankings

  • By Impact: Ranked by impact score
  • By Volume: Ranked by PR count
  • By Review Participation: Ranked by review activity
  • Multi-dimensional: Compare across different metrics

Detailed Contributor Lists

  • Search and Filter: Find specific contributors
  • Sort Options: Sort by various metrics
  • Detailed Views: Click to see individual profiles
  • Export Options: Copy data for reports

Developer Profile Page

Access individual developer profiles by clicking on a contributor name.

Profile Overview

  • Basic Information: Name, avatar, repositories
  • Key Metrics: Impact score, PR count, review count
  • Status Indicators: Health and risk assessments

Activity Breakdown

  • PRs Authored: List of PRs created
  • PRs Reviewed: List of PRs reviewed
  • Commits: Commit activity and patterns
  • Repositories: Work across repositories
  • Impact Score Over Time: How impact changes
  • Activity Patterns: When contributor is most active
  • Workload Trends: Changes in workload over time

Recommendations

  • Personalized Suggestions: Based on individual metrics
  • Growth Opportunities: Areas for improvement
  • Recognition: Celebrate achievements

How to Use Developer Insights

  1. Performance Reviews - Use metrics for objective performance data
  2. Career Development - Identify growth opportunities
  3. Team Planning - Understand team capabilities and capacity
  4. Recognition - Celebrate high performers
  5. Support - Identify team members who need help

Interpreting Developer Data

High Impact + High Volume

  • Highly productive contributor
  • May be a senior team member or expert
  • Consider for leadership or mentoring roles

High Impact + Low Volume

  • Quality-focused contributor
  • May work on complex or critical features
  • Valuable for code quality and architecture

Low Impact + High Volume

  • Many small contributions
  • May indicate maintenance work
  • Could benefit from more challenging assignments

Low Impact + Low Volume

  • Contributor may need support
  • Check for blockers or unclear expectations
  • Consider training or mentoring

Pull Requests

The Pull Requests section provides a comprehensive view of all pull requests with filtering and sorting capabilities.

Key Features

PR List View

  • Comprehensive List: All PRs in selected repositories and timeframe
  • Sorting Options: Sort by date, size, status, impact
  • Filtering: Filter by repository, status, size, contributor
  • Search: Search by PR title, number, or contributor

PR Details

  • Basic Information: Title, number, repository, status
  • Metrics: Impact score, cycle time, review time
  • Contributors: Author and reviewers
  • Timeline: Creation, first review, merge dates
  • Links: Direct links to view in provider

Status Indicators

  • Open: Currently open PRs
  • Merged: Successfully merged PRs
  • Closed: Closed without merging
  • Stale: Open longer than the stale PR threshold
  • Risky: Large PRs with few reviewers

Filtering Options

By Repository

  • Filter to specific repositories
  • Compare PRs across repositories
  • Focus on problematic repositories

By Status

  • Open PRs only
  • Merged PRs only
  • Stale PRs only
  • Risky PRs only

By Size

  • Small (<100 lines)
  • Medium (100-500 lines)
  • Large (500-1000 lines)
  • Very Large (>1000 lines)
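
A small sketch of this bucketing; treating exactly 500 or 1000 changed lines as the lower bucket is an assumption, since the ranges above share their boundaries:

```python
def size_category(lines_changed):
    """Map a PR's total changed lines to the size buckets above."""
    if lines_changed < 100:
        return "Small"
    if lines_changed <= 500:
        return "Medium"
    if lines_changed <= 1000:
        return "Large"
    return "Very Large"

print(size_category(650))  # -> "Large"
```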

By Contributor

  • Filter by PR author
  • Filter by reviewer
  • See individual contributor's PRs

PR Metrics Display

Each PR shows:

  • Impact Score: Contribution significance
  • Cycle Time: Time to merge
  • Review Time: Time to first review
  • Size: Lines added/deleted
  • Reviewers: Number of reviewers
  • Status: Current state

How to Use the PR Section

  1. Review Open PRs - See what needs attention
  2. Identify Patterns - Find common issues across PRs
  3. Track Progress - Monitor PR lifecycle
  4. Take Action - Review, comment, or merge PRs
  5. Analyze Trends - Understand PR patterns over time

Action Items

For Stale PRs:

  • Review and provide feedback
  • Close if no longer needed
  • Assign reviewers if missing
  • Break into smaller PRs if too large

For Risky PRs:

  • Add more reviewers
  • Request smaller PRs
  • Provide detailed feedback
  • Schedule review sessions

For Large PRs:

  • Consider breaking into smaller PRs
  • Add more reviewers
  • Extend review time
  • Provide detailed documentation

Settings

The Settings page allows you to manage your repositories, preferences, and configuration.

Repository Management

Add Repositories

  • GitHub/GitLab: Add repositories manually
  • Azure DevOps: Sync repositories through discovery
  • Validation: Repositories are validated before adding

Remove Repositories

  • Delete: Remove repositories from tracking
  • Confirmation: Confirm before deletion
  • Note: Historical data is preserved

Repository Selection

  • Select/Deselect: Choose which repositories to analyze
  • Bulk Actions: Select all or deselect all
  • Search: Find specific repositories

User Preferences

Timeframe Default

  • Set default timeframe for dashboard
  • Options: 7 days, 30 days, 90 days, custom

Display Preferences

  • Theme: Light/dark mode (if available)
  • Metrics: Choose which metrics to display
  • Charts: Customize chart types and colors

Notification Settings

  • Email Notifications: Enable/disable email alerts
  • Alert Thresholds: Set thresholds for notifications
  • Frequency: How often to receive updates

Time Thresholds

Configure custom thresholds for metrics:

Stale PR Threshold

  • Default: 7 days
  • Custom: Set your own threshold
  • Impact: Affects stale PR detection

Review Time Thresholds

  • Good: <24 hours (default)
  • Warning: 24-48 hours (default)
  • Critical: >48 hours (default)

Cycle Time Thresholds

  • Good: <3 days (default)
  • Warning: 3-7 days (default)
  • Critical: >7 days (default)
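
If you want to keep these defaults alongside your own configuration, a shape like the following works; it is illustrative only and not the dashboard's actual settings schema:

```python
from dataclasses import dataclass

@dataclass
class Thresholds:
    """Illustrative container mirroring the defaults above."""
    stale_pr_days: int = 7
    review_time_good_hours: int = 24      # Good: < 24h
    review_time_warning_hours: int = 48   # Warning: 24-48h; Critical above
    cycle_time_good_days: int = 3         # Good: < 3 days
    cycle_time_warning_days: int = 7      # Warning: 3-7 days; Critical above

# Example: a team that tolerates longer-lived PRs raises the stale threshold.
custom = Thresholds(stale_pr_days=10)
```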

Provider Configuration

GitHub

  • Connected: Status of GitHub connection
  • Repositories: List of connected repositories
  • Permissions: Review granted permissions

GitLab

  • Connected: Status of GitLab connection
  • Instance: GitLab.com or self-hosted URL
  • Repositories: List of connected repositories

Azure DevOps

  • Organizations: List of connected organizations
  • Connection Status: Active or expired
  • PAT Management: Update or rotate Personal Access Tokens (PATs)

Data Management

Export Data

  • Metrics Export: Download metrics as CSV
  • PR Export: Export PR data
  • Report Generation: Create custom reports

Data Retention

  • Historical Data: How long data is retained
  • Cleanup Options: Remove old data if needed

How to Use Settings

  1. Manage Repositories - Add or remove repositories
  2. Customize Experience - Set preferences and defaults
  3. Configure Thresholds - Adjust metric thresholds
  4. Manage Providers - Update provider connections
  5. Export Data - Download data for analysis

Profile

The Profile page shows your account information and allows you to manage your Git Wizard account.

Account Information

User Profile

  • Name: Display name
  • Email: Account email address
  • Avatar: Profile picture
  • Provider: Authentication provider (GitHub, GitLab, Azure DevOps)

Account Details

  • User ID: Unique identifier
  • Account Created: Registration date
  • Last Active: Last login time
  • Provider ID: Provider-specific identifier

Connected Providers

Active Providers

  • GitHub: Connection status and repositories
  • GitLab: Connection status and repositories
  • Azure DevOps: Connection status and organizations

Provider Actions

  • Reconnect: Refresh provider connection
  • Disconnect: Remove provider connection
  • Update Permissions: Modify granted permissions

Account Settings

Display Preferences

  • Name: Update display name
  • Avatar: Change profile picture
  • Theme: Set theme preferences (if available)

Security

  • Change Password: Update account password (if applicable)
  • Two-Factor Authentication: Enable 2FA (if available)
  • Active Sessions: View and manage active sessions

Activity Summary

Recent Activity

  • Last Login: When you last accessed Git Wizard
  • Repositories Added: Recent repository additions
  • Dashboard Views: Recent section visits

Usage Statistics

  • Total Repositories: Number of tracked repositories
  • Total PRs Analyzed: Count of analyzed PRs
  • Metrics Calculated: Number of metric calculations

How to Use Profile

  1. Review Account - Check your account information
  2. Update Details - Change name, email, or preferences
  3. Manage Providers - Connect or disconnect providers
  4. View Activity - See your usage and activity
  5. Security - Manage security settings

Best Practices Across Sections

Regular Review Schedule

  • Daily: Check Overview for critical issues
  • Weekly: Review Velocity and Workload
  • Monthly: Deep dive into Impact and Developer Insights
  • Quarterly: Comprehensive analysis across all sections

Data Interpretation

  • Context Matters: Consider team size, project phase, and goals
  • Trends Over Absolute Values: Focus on direction of change rather than raw numbers
  • Compare Appropriately: Compare similar repositories or timeframes
  • Look for Patterns: Identify recurring issues or successes

Action Planning

  1. Identify Issues - Use metrics to find problems
  2. Prioritize - Focus on high-impact improvements
  3. Take Action - Implement changes based on data
  4. Measure Impact - Track improvements over time
  5. Iterate - Continuously improve based on results

Team Communication

  • Share Insights - Make metrics visible to the team
  • Discuss in Retrospectives - Use data to guide discussions
  • Set Goals Together - Align on what metrics matter
  • Celebrate Wins - Acknowledge improvements

Getting Help

  • In-App Documentation: Click "Docs" in the sidebar
  • Tooltips: Hover over metrics for explanations
  • Status Indicators: Use color coding to understand health
  • Recommendations: Follow AI-generated suggestions

For more detailed information, refer to the Usage Guide and Auth & Setup Guide.