Nonrival is back! After talking to many of you about your experience with the newsletter—and taking some vacation time—Nonrival is coming back new-and-improved for Season 2. Expect an email this Sunday with a new forecast, and in the meantime here’s a rundown of all the improvements I’ve been working on this summer.
What’s new:
Simpler forecasts
A new scoring and points system
Shorter emails, with clearer subject lines
A slightly adjusted email cadence
A new logo and design
Trivia
What’s new, in more detail
Simpler forecasts:
Click a link in the email and your forecast is recorded. You have five choices:
Very likely (about a 90% chance it happens)
Likely (~70% chance)
Uncertain (~50% chance)
Unlikely (~30% chance)
Very unlikely (~10% chance)
After you click, you have the option to provide a rationale (I recommend it!), but it’s not required. And to keep things as simple as possible, I’ve removed both the option to pick an exact forecast value (58%, for example) and the option to guess what the crowd thinks.
A new scoring and points system:
For every forecast you make, you’ll get up to 100 points depending on your accuracy. Someone who said 90% for an event that eventually happens gets more points than someone who said 50%. (Read more about the points system here.)
Each time you get a score, you’ll also see your Total Points for Season 2.
You can never lose points, so it’s always a good idea to make a forecast.
Shorter emails, with clearer subject lines:
I’m setting myself a word limit, so emails will be short enough to read in two minutes. Subject lines will signal whether it’s a “Forecast” email asking you to make a prediction, a “Results” email showing what readers predicted, or a “Scores” email explaining how things turned out and scoring readers’ accuracy.
A slightly adjusted email cadence:
You can still expect emails on Sundays and (most) Wednesdays. Sundays will alternate between new forecast questions and scores from forecasts that have been resolved. On new forecasts, you’ll have until Tuesday morning to make a prediction.
In weeks with new forecasts, the Wednesday email will still be “Results,” showing what readers predicted. In weeks where Sunday is a “Scores” email, Wednesday will be flexible: some weeks it might be a bonus post, some weeks it might be an additional “Scores” email if multiple questions have closed, and some Wednesdays I might skip entirely.
A new logo and design:
Bárbara Abbês, my former Quartz colleague and the head of Something Something studio, designed the new logo and color scheme. I’m really excited about it.
Trivia!
OK, so I simplified the forecasting flow a lot, but I did add one new thing at the end. Each week there’s a trivia question that’s only available if you make a forecast. After you click to make a prediction, you’ll see the option to provide a rationale. After that, you’ll see a trivia question related to the forecast. Take a guess and you’ll see how you did right away. It’s totally optional and not part of the scoring. Enjoy!
New scoring system deep dive
Nonrival is coming back for “Season 2” with a new scoring system. If you’re new to forecasting or to the newsletter, no need to worry about the details. All you need to know:
Nonrival asks you to say how likely it is that a given event will happen. For example, will the Writers’ Strike end before October? Your prediction might be that there’s a 90% chance it does. If the event happens (the strike ends before October), then higher forecasts score better: your 90% prediction scores very well, and someone who said 10% scores less well. If the event doesn’t happen, it’s the reverse: lower forecasts score better because they’re “closer” to the actual outcome.
For each question you forecast, you can get up to 100 points.
You can never lose points. So you’re always better off making a prediction, even if you aren’t sure: The worst you can score with any forecast is +20 points, and the best you can score is +100 points.
That’s all you need to know to enjoy Nonrival.
But if you’re a forecasting nerd or just want a full explanation of the new scoring system, read on.
The requirements
Last season, Nonrival used percentiles—a user might be told they were in the 85th percentile on a given question, meaning their forecast was closer to the actual outcome than 85% of people who participated.
This year, I wanted something different. I wanted a simple, easy-to-understand, points-based system that didn’t rank users against each other. If readers’ predictions are all really accurate, they should all score well—whereas with percentile rankings there’s a zero-sum nature to the scoring.
Also, I wanted to reward participation, so I didn’t want any negative points. I wanted readers to always have an incentive to make a prediction.
And I wanted it to be a good, rigorous scoring rule. If an event happens, higher forecasts should intuitively score better. But forecasters should also always have an incentive to express their true beliefs: if I told you I was flipping a coin and asked you to forecast the likelihood of Heads, you should say 50%. A good scoring rule should reward someone who said 50% for a coin flip, giving them a higher expected score than someone who says 90%, for example.
Finally, because Nonrival uses a really simple forecasting system—you can only say something is 90% likely, 70%, 50%, 30%, or 10%—I just needed a system that could work at those thresholds.
So: A simple, points-based system that always rewards participation and accurate forecasting.
The new scoring system
The new scoring system is built off the classic Brier score, which in its simplest form is the difference between the outcome and the forecast, squared. So if an event happens (outcome = 1) then a 90% forecast is scored as (1-0.9)^2=0.01. If the event doesn’t happen (outcome = 0) then 90% is scored as (0-0.9)^2=0.81. Brier scores range from 0 to 1 and lower scores are better.
To get nice positive round numbers, Nonrival is using:
(1 - Brier Score)*100 … rounded to the nearest 5 points
The (1 - Brier) flips the Brier around so higher is better: 1 is now the best score and 0 is the worst. Multiplying by 100 and rounding to the nearest 5 gets us nice round numbers without decimals.
So, if an event occurs, here’s how many points different forecasts get:
90% = 100 points
70% = 90 points
50% = 75 points
30% = 50 points
10% = 20 points
If an event doesn’t occur, the points are the same but in reverse:
10% = 100 points
30% = 90 points
50% = 75 points
70% = 50 points
90% = 20 points
The end result is a scoring system where forecasters can get up to 100 points for each question, and a minimum of 20 points. Someone who always just says 50% for every question can lock in 75 points each time.
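If you want to check the arithmetic yourself, here’s a minimal sketch in Python (my own illustration, not Nonrival’s actual code) of the formula above. It reproduces both tables.

```python
def points(forecast: float, happened: bool) -> int:
    """Brier-based points: (1 - Brier score) * 100, rounded to the nearest 5."""
    outcome = 1.0 if happened else 0.0
    brier = (outcome - forecast) ** 2       # 0 is best, 1 is worst
    raw = (1 - brier) * 100                 # flip it so higher is better
    return int(5 * round(raw / 5))          # round to the nearest 5 points

# Reproduce the tables above for the five allowed forecasts
for p in (0.9, 0.7, 0.5, 0.3, 0.1):
    print(f"{int(p * 100)}%: {points(p, True)} points if it happens, "
          f"{points(p, False)} if it doesn't")
```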
This hits all my requirements: it’s simple and rewards participation. And you’re always best off giving the probability you actually believe (a coin flip would reward 50%, and so on).
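And here’s a quick check of that coin-flip claim, reusing the points function from the sketch above (again, just an illustration): for a true probability of 50%, the honest 50% forecast has the highest expected points of the five options.

```python
def expected_points(forecast: float, true_prob: float) -> float:
    # Weight the score for each possible outcome by how likely it really is.
    return (true_prob * points(forecast, True)
            + (1 - true_prob) * points(forecast, False))

for p in (0.9, 0.7, 0.5, 0.3, 0.1):
    print(f"forecast {int(p * 100)}%: expected {expected_points(p, 0.5):.0f} points")
# forecast 90%: expected 60 points
# forecast 70%: expected 70 points
# forecast 50%: expected 75 points   (honesty wins)
# forecast 30%: expected 70 points
# forecast 10%: expected 60 points
```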
A special thanks to the reader who helped me land on this (you know who you are).