
Trust Layer

The trust layer evaluates the historical accuracy of FYOS predictions, creating accountability for displayed metrics.

Purpose

The trust layer answers: "How reliable are FYOS predictions?"

By comparing historical predictions against realized outcomes, we:

  1. Grade reliability per opportunity
  2. Identify systematic prediction errors
  3. Adjust confidence in current predictions

Components

1. Realized Outcome Evaluator

Captures actual funding outcomes at the end of holding periods:

  • Actual funding received vs. predicted
  • Actual fees paid vs. assumed
  • Actual capacity effects vs. modeled
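The captured values can be sketched as a simple record. Field names here are illustrative assumptions, not an actual FYOS schema:

```python
from dataclasses import dataclass

@dataclass
class RealizedOutcome:
    """Predicted vs. realized values captured at the end of a holding period.
    Field names are illustrative, not an actual FYOS schema."""
    predicted_funding: float
    realized_funding: float
    assumed_fees: float
    realized_fees: float
    modeled_capacity_effect: float
    realized_capacity_effect: float
```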

2. Prediction Error Profiler

Measures systematic errors:

$$Error_{pct} = \frac{Predicted - Realized}{Predicted} \times 100$$

Tracks error distributions per:

  • Opportunity
  • Time horizon
  • Market condition
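The error formula above can be sketched in Python. The sign convention follows the formula: a positive error means the prediction was too high.

```python
def prediction_error_pct(predicted: float, realized: float) -> float:
    """Signed percentage error of a prediction; positive = over-predicted."""
    if predicted == 0:
        raise ValueError("predicted must be non-zero")
    return (predicted - realized) / predicted * 100
```

For example, a predicted funding of 100 against a realized 90 gives a 10% error.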

3. Reliability Grading Engine

Assigns reliability grades based on historical accuracy:

| Grade | Error Range | Interpretation               |
|-------|-------------|------------------------------|
| A     | < 10%       | Highly reliable predictions  |
| B     | 10-20%      | Good reliability             |
| C     | 20-35%      | Moderate reliability         |
| D     | 35-50%      | Low reliability              |
| F     | > 50%       | Unreliable predictions       |
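A minimal sketch of the grading bands, using absolute error. Which band the exact cutoff values fall into is not specified, so the boundary handling below is an assumption:

```python
def reliability_grade(error_pct: float) -> str:
    """Map absolute historical prediction error (percent) to a letter grade.
    Band boundaries follow the grading table; cutoff membership is assumed."""
    e = abs(error_pct)
    if e < 10:
        return "A"
    if e < 20:
        return "B"
    if e < 35:
        return "C"
    if e <= 50:
        return "D"
    return "F"
```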

Edge Reliability Score

The Edge Reliability Score (0-100) synthesizes:

  1. Historical prediction accuracy
  2. Sample size (more history = more confidence)
  3. Recent vs. historical performance
  4. Consistency across market conditions

$$Rel = f(Error_{avg}, n_{samples}, Recency, Consistency)$$
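The exact form of f is not published. A toy synthesis, under assumed weights (error-dominated base score, sample bonus that saturates, recency and consistency as 0-1 factors), might look like:

```python
def reliability_score(avg_error_pct: float, n_samples: int,
                      recency: float, consistency: float) -> float:
    """Toy 0-100 Edge Reliability Score. The real FYOS function f is
    unspecified; all weights below are assumptions for illustration.
    `recency` and `consistency` are assumed to be factors in [0, 1]."""
    accuracy = max(0.0, 100.0 - abs(avg_error_pct))   # lower error -> higher base score
    sample_weight = min(1.0, n_samples / 30)          # confidence saturates at ~30 samples
    score = accuracy * sample_weight * recency * consistency
    return max(0.0, min(100.0, score))
```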

Confidence Levels

Based on reliability, we assign confidence:

| Reliability Score | Confidence Level               |
|-------------------|--------------------------------|
| > 80              | High confidence                |
| 60-80             | Medium confidence              |
| 40-60             | Low confidence                 |
| < 40              | Insufficient data / unreliable |
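The banding can be sketched as follows; which label the exact values 60 and 40 receive is not specified, so the boundary handling is an assumption:

```python
def confidence_level(rel: float) -> str:
    """Map a 0-100 reliability score to a confidence label."""
    if rel > 80:
        return "High confidence"
    if rel >= 60:
        return "Medium confidence"
    if rel >= 40:
        return "Low confidence"
    return "Insufficient data / unreliable"
```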

Trust-Adjusted Metrics

In some views, metrics are adjusted by reliability:

$$APR_{trust} = APR_{surv} \times \frac{Rel}{100}$$

This provides a more conservative estimate that accounts for prediction uncertainty.
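The adjustment is a straight scaling of the survival-adjusted APR by the reliability score:

```python
def trust_adjusted_apr(apr_surv: float, rel: float) -> float:
    """Scale a survival-adjusted APR (percent) by the 0-100 reliability score."""
    return apr_surv * rel / 100.0
```

For example, a 12% survival-adjusted APR with a reliability score of 75 is displayed as a 9% trust-adjusted APR.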

Display in UI

Opportunity Detail

  • Reliability Score — 0-100 grade
  • Prediction Error — Historical error percentage
  • Sample Size — Number of evaluated predictions
  • Confidence Badge — High/Medium/Low/Insufficient

Screener

  • Reliability can be used as a filter criterion
  • Low-reliability opportunities can be flagged or hidden

Building Trust Over Time

The trust layer improves with:

  1. More predictions — Larger sample sizes increase confidence
  2. Model updates — Systematic errors inform model improvements
  3. Transparency — Published error metrics build user trust

Limitations

The trust layer cannot account for:

  • Future market regime changes
  • Black swan events
  • Exchange-specific anomalies not in historical data

Users should treat reliability scores as relative guidance, not absolute guarantees.
