
📋 Overview

Even the best algorithms need robust handling of edge cases. Any algorithm dealing with real-world data and human input must account for rare scenarios and potential manipulation; this page explains the systematic approaches we've built to handle these challenges and prevent gaming attempts.

🔍 Small Sample Sizes

Small Sample Size Handling
When only a few employees from a company participate, statistical reliability becomes a concern:
We require at least 3 eligible employees from a company to include its bullishness score with full confidence. Companies with 1-2 participants can still be included but receive special handling with appropriate markers of uncertainty.
For companies with few participants, we display confidence ranges around their scores (see the lookup sketched after this list):
  • 10+ employees: ±5 points
  • 5-9 employees: ±10 points
  • 3-4 employees: ±15 points
  • 1-2 employees: ±25 points
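As a minimal illustration of this band lookup (the function name and structure are ours, not the production system's):

```python
def confidence_band(n_employees: int) -> int:
    """Return the +/- band (in points) for a company's bullishness
    score, based on how many employees participated."""
    if n_employees >= 10:
        return 5
    if n_employees >= 5:
        return 10
    if n_employees >= 3:
        return 15
    return 25  # 1-2 participants: widest uncertainty band
```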
We apply Bayesian smoothing to prevent outliers from dominating small samples:

$$\text{Smoothed\_Score} = \frac{w_1 \times \text{Raw\_Score} + w_2 \times \text{Prior}}{w_1 + w_2}$$

where:
  • $w_1$ = number of employee ratings
  • $w_2$ = smoothing factor (typically 2)
  • Prior = sector average score
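A minimal sketch of this smoothing step (the function name is ours; it assumes the raw score and sector prior are on the same scale):

```python
def smoothed_score(raw_score: float, n_ratings: int,
                   sector_prior: float, smoothing: float = 2.0) -> float:
    """Shrink a small-sample raw score toward the sector average,
    per the formula above: (w1*raw + w2*prior) / (w1 + w2)."""
    w1, w2 = n_ratings, smoothing
    return (w1 * raw_score + w2 * sector_prior) / (w1 + w2)

# One extreme rating barely moves the result:
# smoothed_score(95.0, n_ratings=1, sector_prior=60.0) -> ~71.7
```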

🔄 Companies with No Outgoing Admiration

Some companies might have employees who submit bullishness ratings but don't specify admired companies. When a company's employees don't provide admiration votes, it reduces the richness of the network data; our approach balances maintaining network integrity with encouraging proper participation:

Default Distribution

Distribute missing votes evenly across the sector (see the sketch after these items)

Penalty Factor

Small reduction in influence for incomplete data

Communication

Reminders to encourage complete participation

Tracking

Monitor participation rates across cycles
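A minimal sketch of the default-distribution and penalty steps, assuming votes are stored as per-target weights (the function name and the 0.9 penalty value are illustrative, not the production settings):

```python
def default_distribution(sector_peers: list[str],
                         penalty: float = 0.9) -> dict[str, float]:
    """Spread a penalized unit of admiration evenly across sector peers
    when a company's employees name no admired companies."""
    share = penalty / len(sector_peers)  # penalty < 1 trims influence
    return {peer: share for peer in sector_peers}

# In a 4-peer sector, a non-voting company contributes 0.225 per peer
# instead of the 0.25 it would carry with complete data.
```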

📉 Companies with No Incoming Admiration

Companies that aren’t admired by any others might get artificially low scores:
1. Apply Minimum Network Presence: Create a minimum synthetic admiration level (0.05) distributed from all companies
2. Adjust Damping Factor: For these companies, reduce δ by 0.1 to increase the weight of their internal bullishness
3. Flag for Review: Automatically flag these companies for human review to determine whether there are systematic reasons for their isolation
4. Track Across Cycles: Monitor these companies across multiple cycles to detect emerging patterns (steps 1-3 are sketched in code below)
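A minimal sketch of steps 1-3, assuming each company carries a total incoming-admiration weight and a per-company damping factor δ (the function and field names are ours):

```python
MIN_ADMIRATION = 0.05  # synthetic admiration floor from step 1

def handle_isolated_company(name: str, incoming_weight: float,
                            delta: float, review_queue: list[str]):
    """Apply the minimum-presence, damping, and review-flag steps to a
    company that receives no incoming admiration."""
    if incoming_weight == 0.0:
        incoming_weight = MIN_ADMIRATION  # step 1: synthetic presence
        delta -= 0.1                      # step 2: favor internal bullishness
        review_queue.append(name)         # step 3: flag for human review
    return incoming_weight, delta
```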

⚠️ Outlier Detection

We detect outliers with three complementary approaches: the Inter-Quartile Range (IQR) method, the Z-score method, and pattern detection. The IQR method proceeds as follows:
1. Calculate Quartiles: Determine the first ($Q_1$, 25th percentile) and third ($Q_3$, 75th percentile) quartiles of the ratings
2. Find IQR: Compute the interquartile range as $\text{IQR} = Q_3 - Q_1$
3. Define Boundaries: Set the lower bound at $Q_1 - 1.5 \times \text{IQR}$ and the upper bound at $Q_3 + 1.5 \times \text{IQR}$
4. Weight Adjustment: Reduce an outlier's weight proportionally to its distance beyond the boundaries, rather than removing it completely (see the sketch below)
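A minimal sketch of this soft down-weighting (the `falloff` parameter and exact weight curve are illustrative, not the production values):

```python
import numpy as np

def iqr_weights(ratings: np.ndarray, k: float = 1.5,
                falloff: float = 0.5) -> np.ndarray:
    """Down-weight outliers in proportion to how far they fall beyond
    the IQR fences, rather than dropping them outright."""
    q1, q3 = np.percentile(ratings, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    # Distance beyond the nearest fence, in IQR units (0 if inside).
    excess = np.maximum(0, np.maximum(lower - ratings, ratings - upper))
    excess = excess / max(iqr, 1e-9)
    return 1.0 / (1.0 + falloff * excess)

# Ratings inside the fences keep weight 1.0; a rating 2*IQR beyond a
# fence keeps weight 1 / (1 + 0.5*2) = 0.5.
```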

🤝 Tie-Breaking

When companies converge to nearly identical scores, we break ties using:

Secondary Metrics

Additional data points for ranking differentiation

Confidence Analysis

Statistical confidence as a deciding factor

Temporal Stability

Preference for consistent scores over time

Random Component

Small random factor for perfect ties

Tie-Breaking Process

When two companies $A$ and $B$ have scores within 0.01 of each other:
  1. Compare stability: lower variance across the last 3 cycles gets preference
  2. Compare confidence: the narrower confidence interval gets preference
  3. Compare trend: a positive trend gets preference
  4. If still tied, add a tiny random factor $\varepsilon_r \sim U(-0.005, 0.005)$ to each score (the full cascade is sketched in code below)
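A minimal sketch of this cascade (the dict fields and in-place perturbation are illustrative assumptions, not the production data model):

```python
import random

def break_tie(a: dict, b: dict, tol: float = 0.01) -> dict:
    """Return the preferred company when scores are within `tol`.

    Each dict is assumed (illustratively) to hold 'score', 'variance'
    (over the last 3 cycles), 'ci_width', and 'trend'.
    """
    if abs(a["score"] - b["score"]) >= tol:
        return max(a, b, key=lambda c: c["score"])  # no tie to break
    if a["variance"] != b["variance"]:              # 1. lower variance wins
        return min(a, b, key=lambda c: c["variance"])
    if a["ci_width"] != b["ci_width"]:              # 2. narrower CI wins
        return min(a, b, key=lambda c: c["ci_width"])
    if a["trend"] != b["trend"]:                    # 3. positive trend wins
        return max(a, b, key=lambda c: c["trend"])
    for company in (a, b):                          # 4. perfect tie:
        company["score"] += random.uniform(-0.005, 0.005)
    return max(a, b, key=lambda c: c["score"])
```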

⏱️ Time Decay

Time Decay Factor
Older data may be less relevant to current company status:
We apply an exponential decay factor to older survey data, ensuring recent information carries more weight while still preserving long-term signals.
We set the half-life to two quarters (see the sketch after this list), meaning:
  • Current quarter data: 100% weight
  • 2 quarters ago: 50% weight
  • 4 quarters ago: 25% weight
  • 6 quarters ago: 12.5% weight
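A minimal sketch of this decay schedule (the function name is ours):

```python
def decay_weight(age_in_quarters: float, half_life: float = 2.0) -> float:
    """Exponential decay weight with a two-quarter half-life."""
    return 0.5 ** (age_in_quarters / half_life)

# decay_weight(0) -> 1.0, decay_weight(2) -> 0.5,
# decay_weight(4) -> 0.25, decay_weight(6) -> 0.125
```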
We also track the derivative of ratings over time (a simple classifier is sketched after this list), identifying companies with:
  • Sustained positive momentum
  • Recent reversals in trends
  • Cyclical patterns
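As an illustrative sketch only (the thresholds and labels are ours), momentum and reversal detection could start from per-cycle differences:

```python
import numpy as np

def classify_trend(cycle_scores: list[float]) -> str:
    """Classify a company's trajectory from its per-cycle scores."""
    diffs = np.diff(cycle_scores)  # discrete "derivative" per cycle
    if len(diffs) >= 2 and np.all(diffs > 0):
        return "sustained positive momentum"
    if len(diffs) >= 2 and diffs[-1] * diffs[-2] < 0:
        return "recent reversal"
    return "no clear pattern"  # cyclical detection needs more history
```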

🛡️ Manipulation Attempts

We counter manipulation on several fronts: coordinated voting detection, self-inflation countermeasures, strategic omission detection, and anonymous voting. Coordinated voting detection proceeds as follows:
1. Pattern Analysis: Track unusual patterns of similar votes across employees
2. Historical Comparison: Compare current voting patterns to historical patterns
3. Statistical Testing: Flag statistically significant deviations
4. Response Mechanism: Apply diminishing weights to suspected coordinated clusters (see the sketch below)
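A minimal sketch of the pattern-analysis and response steps, assuming ballots are rows of an employees × companies vote matrix (the similarity threshold and weight floor are illustrative, not the production values):

```python
import numpy as np

def coordination_weights(votes: np.ndarray, threshold: float = 0.95,
                         min_weight: float = 0.25) -> np.ndarray:
    """Down-weight near-duplicate ballots instead of discarding them."""
    norms = np.linalg.norm(votes, axis=1, keepdims=True)
    unit = votes / np.maximum(norms, 1e-9)  # normalize each ballot
    sim = unit @ unit.T                     # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)              # ignore self-similarity
    dupes = (sim > threshold).sum(axis=1)   # near-duplicates per ballot
    # Diminishing weight: 1.0 for unique ballots, floored for clusters.
    return np.maximum(min_weight, 1.0 / (1.0 + dupes))
```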

📉 Partial Company Participation

Non-Participating Company Handling
Not all companies in a sector will participate:
We include non-participating companies if they receive significant admiration from participating companies, allowing them to exist as nodes in the network even without providing bullishness data.
For important non-participating companies, we may incorporate publicly available metrics as a proxy for bullishness:
  • Recent funding rounds
  • News sentiment analysis
  • Growth metrics
  • Industry analyst ratings
All companies with proxy data or incomplete participation are clearly labeled in visualizations and reports, with appropriate confidence intervals reflecting the limited data quality.

🔧 Algorithm Tuning

The damping factor and other parameters may need adjustment for different sectors. Our algorithm isn't static: it evolves based on performance data and sector-specific characteristics, and the governance committee regularly reviews parameter settings and may approve targeted adjustments to improve accuracy.

Sectoral Calibration

Parameters adjusted based on sector characteristics

Sensitivity Analysis

Simulations with varying parameters to ensure stability (see the sketch below)

Performance Metrics

Tracking how well rankings predict outcomes

Quarterly Review

Regular committee evaluation of algorithm performance
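A minimal sketch of such a sensitivity check (the `rank_fn` callable and perturbation grid are hypothetical; assumes SciPy is available):

```python
from scipy.stats import spearmanr

def sensitivity_check(rank_fn, base_delta: float,
                      perturbations=(-0.05, 0.05)) -> float:
    """Re-rank with perturbed damping factors and return the worst-case
    Spearman correlation against the baseline ranking."""
    baseline = rank_fn(base_delta)  # scores at the current delta
    worst = 1.0
    for p in perturbations:
        corr, _ = spearmanr(baseline, rank_fn(base_delta + p))
        worst = min(worst, corr)
    return worst  # values near 1.0 mean rankings are stable under tuning
```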
By addressing these edge cases systematically, we ensure our algorithm produces fair, robust, and manipulation-resistant rankings across diverse scenarios and participant behaviors.