Advanced Nerd Content — You Asked For This

The Math Behind the Magic

So you want to know how the sausage is made. Respect. This is where we explain exactly how answers get scored: question types, confidence weighting, tie-breaking, and why this is provably a game of skill, not luck. Grab a calculator. Or don't; we'll do the math for you.


The Three Question Types

Not all questions are born equal. Contests can mix three types, each scored differently. Knowing how they work is your first unfair advantage.

🔘 Multiple Choice
Pick one answer from a list of options. You also set a confidence level: how sure you are. That confidence feeds straight into your score. Right answer + high confidence = big points. Wrong answer + high confidence = big ouch.
e.g. "Who will win the Super Bowl?" → Chiefs / Eagles / Cowboys / Bills
Yes / No
Binary prediction — will something happen or not? Like Multiple Choice, you also set a confidence level. It works exactly the same way: your conviction is part of your score. Wishy-washy gets you wishy-washy points.
e.g. "Will the Fed raise interest rates this quarter?" → Yes / No
🔢 Numeric
Enter a specific number. Scored by how close you are to the actual answer — the closer, the better. No confidence slider here; your precision speaks for itself. Nail it exactly and you score a perfect 1.0.
e.g. "How many points will LeBron score?" → Enter: 28

Confidence & Correctness

For Choice and Yes/No questions, your score on each question is dead simple:

The Core Formula (Brier Scoring)
If Correct: Score = 1 − (1 − confidence)²
If Incorrect: Score = 1 − confidence²
Confidence is your slider value as a decimal (0.25 to 1.00). Notice: wrong answers are not zero. A maximally humble wrong answer scores 0.9375, nearly as much as a perfect correct one. Being wrong at 100% confidence is the only way to truly crater your score. Your total contest score is the sum of all question scores.
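If you'd rather read Python than algebra, here's the whole rule in a few lines. This is a minimal sketch for illustration; the function name and shape are ours, not necessarily how the platform implements it:

    def choice_score(confidence, correct):
        # confidence: your slider value as a decimal, 0.25 to 1.00
        if correct:
            return 1 - (1 - confidence) ** 2  # right: lose points for leftover doubt
        return 1 - confidence ** 2            # wrong: lose points for conviction

    choice_score(0.25, False)  # 0.9375 -- humbly wrong, nearly perfect
    choice_score(1.00, False)  # 0.0    -- confidently wrong, the crater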

The confidence slider runs from 25% (a long shot) to 100% (dead certain). Here's what each level actually scores — and yes, the "if wrong" column will surprise you:

Confidence   If correct   If wrong
25%          +0.4375      +0.9375 😮
70%          +0.91        +0.51
100%         +1.00        +0.00 💀
🧠 Wait — wrong answers still score points?

Yes. This is the Brier scoring system, and it is brilliant. Wrong answers are not worth zero — they are worth whatever your humility deserved. Being wrong at 25% confidence scores 0.9375 (nearly perfect!), because you correctly signaled uncertainty. Being wrong at 100% confidence scores 0.00 — the worst possible outcome.

The mirror of this: being right at 25% confidence only scores 0.4375. You got lucky and the system knows it. This means you can win a contest without getting every answer right — and you can lose one despite being mostly correct if your confidence was reckless. Calibration is the actual skill.

🏆 The "Least Wrong" Win — Yes, It Is Real

If nobody picks the correct answer on a Choice question, the person who was wrong with the most appropriate humility wins that question. Confidently wrong players score near zero. Humbly wrong players score near 1.0. In a tough contest full of genuinely hard questions, the winner might not have gotten a single answer definitively "right" — they just knew what they did not know, better than everyone else.

This is not a loophole. It is the point. Real forecasting skill includes knowing the limits of your own knowledge.
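In code, using the choice_score sketch from above, here's a question where everybody whiffed:

    # Nobody picked the right answer; humility decides the question:
    choice_score(0.25, False)  # 0.9375 -- least wrong, wins the question
    choice_score(0.70, False)  # 0.51
    choice_score(1.00, False)  # 0.00   -- most wrong, dead last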

Scoring Numeric Questions

Numeric questions are scored on proximity — how far off you are relative to the plausible range of answers. The closer you are, the higher your score, up to a perfect 1.0 for an exact hit.

Numeric Scoring Formula
Score = max(0, 1 − |Your Answer − Actual| ÷ Range)
Range is the MaxValue − MinValue set by the contest admin (the plausible span of answers). The further you are from the truth, the more your score decays — hitting zero when you're off by a full range width. Exact match = 1.0. Completely off = 0.0. Everything in between scales linearly.

Example: "How many touchdowns will be scored?" — Range set to 0–14. Actual answer: 7.

Guess   Score
0       0.50
5       0.86
7       1.00 ✓ exact hit
9       0.86
14      0.50
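And as a minimal Python sketch (again, our illustration, mirroring the formula above), checked against the touchdown example:

    def numeric_score(answer, actual, min_value, max_value):
        # Linear decay with distance from the truth, floored at zero
        value_range = max_value - min_value
        return max(0.0, 1 - abs(answer - actual) / value_range)

    numeric_score(0, 7, 0, 14)   # 0.50
    numeric_score(5, 7, 0, 14)   # ~0.86
    numeric_score(7, 7, 0, 14)   # 1.00 -- exact hit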
📐 Why this rewards genuine knowledge

Anyone can guess randomly on a Yes/No. Numeric questions require actual domain knowledge — knowing that a team averages 28 points per game, or that GDP growth is typically 2–3%. Random guessing on numeric questions produces wildly varied and usually bad scores. Domain experts consistently outperform random guessers. That's skill.

Worked Scoring Examples

Same contest, three different players. Watch how confidence calibration and answer accuracy combine to separate the Jedi from the Padawans.

😤 Gary — The Overconfident Wreck · Last Place

Question                               Gary's Answer   Actual   Conf.   Points
Who wins the championship?             Cowboys         Chiefs   100%    0.00
Will the MVP be an offensive player?   Yes ✓           Yes      100%    1.00
Total points scored (range: 20–80)     71              47       n/a     0.60
Will there be overtime?                Yes             No       90%     0.19
Gary's Total Score: 1.79 / 4.00
🤔 Alex — The Calibrated Thinker · 2nd Place

Question                               Alex's Answer   Actual   Conf.   Points
Who wins the championship?             Chiefs ✓        Chiefs   70%     0.91
Will the MVP be an offensive player?   Yes ✓           Yes      80%     0.96
Total points scored (range: 20–80)     44              47       n/a     0.95
Will there be overtime?                No ✓            No       60%     0.84
Alex's Total Score: 3.66 / 4.00
🧠 Sam — The Jedi Sharp · 🏆 Winner

Question                               Sam's Answer    Actual   Conf.   Points
Who wins the championship?             Chiefs ✓        Chiefs   90%     0.99
Will the MVP be an offensive player?   Yes ✓           Yes      100%    1.00
Total points scored (range: 20–80)     48              47       n/a     0.98
Will there be overtime?                No ✓            No       85%     0.98
Sam's Total Score: 3.95 / 4.00
🎓 What this shows

Gary nailed the MVP question at 100% just like Sam, but one reckless 100% call on a genuine toss-up cost him a full point, and his 90% overtime miss nearly did too. Alex was correct on everything but undersold his conviction. Sam was right and appropriately confident: that's the full skill stack.

Being right matters. Knowing how right you are matters just as much.
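Every number in those scorecards falls out of the two sketch functions defined earlier:

    gary = (choice_score(1.00, False)        # championship: 0.00
            + choice_score(1.00, True)       # MVP: 1.00
            + numeric_score(71, 47, 20, 80)  # points total: 0.60
            + choice_score(0.90, False))     # overtime: 0.19
    alex = (choice_score(0.70, True)         # 0.91
            + choice_score(0.80, True)       # 0.96
            + numeric_score(44, 47, 20, 80)  # 0.95
            + choice_score(0.60, True))      # 0.84
    sam = (choice_score(0.90, True)          # 0.99
           + choice_score(1.00, True)        # 1.00
           + numeric_score(48, 47, 20, 80)   # ~0.98
           + choice_score(0.85, True))       # ~0.98
    # gary == 1.79, alex == 3.66, sam ~= 3.95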

How Ties Are Broken

True ties, meaning identical scores to four decimal places, are extremely rare, since confidence scoring produces a near-continuous range of possible totals. But if they do occur, here's the resolution order:

Player A: 2.7500 · answered 48 (actual was 47) → higher numeric precision
⚔️
Player B: 2.7500 · answered 51 (actual was 47) → lower numeric precision
⚖️ Tie-Breaking Hierarchy

1. Total score — highest score wins outright. If still tied:

2. Numeric question closeness — the player whose numeric answers were collectively closer to the actual values wins. This is the most common tiebreaker and rewards precision knowledge.

3. Higher average confidence on correct answers — the bolder (and correct) predictor wins. This rewards conviction.

4. Entry timestamp — if all else is equal, earlier submission wins. First mover advantage. So don't wait until the last second.
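As a sketch, the whole hierarchy collapses into one Python sort key. The record layout here is hypothetical, invented for illustration:

    # Hypothetical entry tuples: (name, total_score,
    #   collective_numeric_error, avg_conf_on_correct, submitted_at)
    entries = [
        ("Player B", 2.7500, 4.0, 0.78, "2025-01-15T09:40:00"),
        ("Player A", 2.7500, 1.0, 0.78, "2025-01-15T10:02:00"),
    ]

    ranked = sorted(entries, key=lambda e: (
        -e[1],  # 1. highest total score first
        e[2],   # 2. then smallest collective numeric error
        -e[3],  # 3. then highest avg confidence on correct answers
        e[4],   # 4. then earliest submission (ISO timestamps sort correctly)
    ))
    # ranked[0] is Player A: same score, but 48 was closer to 47 than 51 was.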

This Is a Skill Game. Here's the Receipts.

Prediction contests that use this scoring system are recognized as skill-based competitions under the legal frameworks that distinguish games of skill from games of chance. Here's why the math backs that up:

📊 Confidence Calibration Is Learnable
Knowing not just what will happen but how certain you should be is a trainable, measurable skill. Experts consistently outperform novices on calibration tasks.
🔢 Numeric Questions Can't Be Gamed
Random guessing on numeric questions produces scattered, mostly poor scores. Domain experts who actually know the relevant statistics score dramatically higher, across any sample of questions.
🏆 Multi-Question Contests Kill Luck
A lucky guess on one question can win a single-question contest. But across 5, 10, or 20 questions, skill dominates. The law of large numbers works against pure-luck players, reliably, every time (see the simulation sketch after these cards).
📈 Winners Repeat
In a luck-only game, past winners have no edge in future contests. In a skill game, sharp players win disproportionately often. Our Hall of Fame tracks this. Watch the leaderboard over time.
⚖️ Scoring Is Transparent & Deterministic
The formula is public (you're reading it right now). No hidden randomness, no house manipulation. Given the same answers and the same outcomes, the score is always identical. Every time.
🧪 The Math Is Academically Grounded
The confidence-weighted formula above is a form of Brier scoring, a proper scoring rule used in meteorology, finance, and academic forecasting research to reward accurate probabilistic predictions.
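To see the multi-question point in action, here's a toy Monte Carlo simulation. It's our model, not platform data, and it reuses the choice_score sketch from earlier: a calibrated sharp who is right 70% of the time and honestly says 70%, against a gambler who flips coins at 100% confidence.

    import random

    def sharp_beats_gambler(n_questions):
        # Sharp: right 70% of the time, honestly claims 70% confidence.
        sharp = sum(choice_score(0.70, random.random() < 0.70)
                    for _ in range(n_questions))
        # Gambler: right half the time, always slams 100%.
        gambler = sum(choice_score(1.00, random.random() < 0.50)
                      for _ in range(n_questions))
        return sharp > gambler

    trials = 10_000
    print(sum(sharp_beats_gambler(1) for _ in range(trials)) / trials)
    # ~0.50 -- a one-question contest is a coin flip
    print(sum(sharp_beats_gambler(20) for _ in range(trials)) / trials)
    # ~0.99 -- over 20 questions, the gambler has essentially no chance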
Now Go Use That Brain

You understand the scoring. You know the system. You have absolutely no more excuses for losing to Gary.