Why CFB Ranks Exists
The story of two engineers who got tired of unexplainable rankings
It Started with Frustration
Like a lot of college football fans, we've spent years yelling at the TV about rankings. Why is that team still in the top four after that loss? Why did this team drop six spots while that team barely moved? How is a three-loss team in the playoff conversation while a one-loss team is out?
The answers never made sense because we couldn't see how the decisions were being made.
This year was the breaking point. We watched teams surge at the end of the season and get no credit for it. We watched the first round of the playoff produce blowout after blowout - proof that the seeding was wrong. And we watched a 13-person committee, meeting behind closed doors, make decisions that affected programs, players, and fans with zero obligation to explain themselves.
We'd had enough.
The Problem Isn't Human Judgment
Here's what we realized: the problem isn't that humans are involved in rankings. Of course humans should be involved. The "eye test" matters. Context matters. Football isn't played on spreadsheets.
The problem is that the current system hides its human judgment instead of owning it.
The CFP committee doesn't publish meaningful criteria - and the criteria they do publish, they don't follow. They don't show their work. They don't explain why one team jumped another. When pressed, they give vague answers about "body of work" and "game control" that could mean anything.
That's not a ranking system. That's a black box.
What We Built
CFB Ranks is built on a simple idea: you should be able to see exactly why every team is ranked where they are.
Click on any team. See every game. See the points earned or lost, the opponent strength, the location modifier, the margin impact. Nothing hidden. If you disagree with where a team landed, you can trace exactly what put them there.
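To make that breakdown concrete, here's a minimal sketch of how one game's contribution to a team's score could be assembled from those pieces. The field names, weights, and caps below are illustrative assumptions for the sake of example, not the exact CFB Ranks formula.

```python
# Illustrative sketch only: field names, weights, and caps are assumptions,
# not the exact CFB Ranks formula.
from dataclasses import dataclass

@dataclass
class Game:
    won: bool
    opponent_strength: float  # assumed scale: 0.0 (weak) to 1.0 (elite)
    location: str             # "home", "away", or "neutral"
    margin: int               # point differential (positive if you won)

LOCATION_BONUS = {"home": 0.0, "neutral": 0.5, "away": 1.5}  # assumed road-win bonus
MARGIN_CAP = 21  # assumed cap so running up the score stops paying off

def game_points(g: Game) -> float:
    """Points a single game contributes to a team's season total."""
    margin_term = min(abs(g.margin), MARGIN_CAP) / MARGIN_CAP * 3.0  # capped margin impact
    if g.won:
        quality = g.opponent_strength * 8.0      # beating a strong opponent earns more
        return 10.0 + quality + LOCATION_BONUS[g.location] + margin_term
    forgiveness = g.opponent_strength * 4.0      # losing to a strong opponent costs less
    return -10.0 + forgiveness - margin_term     # losing big costs more
```

In a model like this, a team's season total is just the sum of its per-game values, which is why every point on a team's page can be traced back to a specific game.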
But we didn't stop at transparency.
We added What If analysis, which lets you change the outcome of any game on a team's schedule and see how it would affect their ranking. Flip a loss to a win, adjust the score, and watch the ripple effects across the entire standings.
We built the Sandbox so you can change the parameters yourself. Think road wins should matter more? Adjust it. Think margin of victory is overweighted? Dial it back. Hit calculate and see how the rankings shift. Test your theory against real data.
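For a sense of what "hit calculate and see how the rankings shift" looks like under the hood, here's a sketch that builds on the Game and game_points example above. The function names are hypothetical; the real Sandbox exposes these knobs as controls in the interface, and flipping a result there also updates the opponent's record, which this simplified sketch does not.

```python
# Illustrative sketch, reusing Game and game_points from the example above.
from dataclasses import replace
from typing import Dict, List

def rank_teams(schedules: Dict[str, List[Game]]) -> List[str]:
    """Order teams by their summed game points, best first."""
    totals = {team: sum(game_points(g) for g in games)
              for team, games in schedules.items()}
    return sorted(totals, key=totals.get, reverse=True)

def what_if(schedules: Dict[str, List[Game]], team: str, game_index: int) -> List[str]:
    """Flip one game's outcome for one team and recompute the full standings."""
    edited = dict(schedules)
    games = list(edited[team])
    g = games[game_index]
    games[game_index] = replace(g, won=not g.won, margin=-g.margin)
    edited[team] = games
    return rank_teams(edited)
```

Changing a parameter in the Sandbox works the same way in spirit: tweak a weight like the assumed road-win bonus or margin cap, rerun the ranking over the same schedules, and compare the two orderings.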
This isn't just "trust us" - it's "check our work."
How It Got Built
We're passionate fans who are also product builders with decades of technical experience between us.
My colleague Arek and I talk almost every morning for work. During football season, that includes many hours a week just on college football. Arek lives in Poland, where college football barely registers. He doesn't know anyone else there who follows it.
One morning I showed him a prototype that laid the groundwork for what we have now. He took it and used our existing tech stack to build a visually appealing, modern interface with far more data than I'd started with. We knew we had something. And then it started evolving.
We built it together. I'd work on it, he'd pick it up. He'd push changes, I'd refine the design and user experience. Back and forth, fully collaborative. Every conversation became "what if we added this?" and "can we show that?" We'd debate how to weight margin of victory, whether road wins were undervalued, how to handle conference strength. Then we'd build it, test it, and discuss it some more.
That's how CFB Ranks was made. Two people who care about getting it right, building in the open. Collaboration is in the DNA of this project - which is exactly why we want to extend that model to the broader community.
Where This Is Going
Maybe the CFP committee is outdated. Maybe their role changes. Maybe everything stays as is. We don't know yet. But we're trying to prove there's a better way. You can even see in our historical data that a team outside our top four sometimes went on to win it all - so the committee's bias isn't always wrong.
Our algorithm is backtested against nine prior seasons plus this year. You can see what we would have selected versus what the committee actually selected, and who went on to win. In only one year - 2016 - would our algorithm have left out the eventual champion: we had Clemson at #5, just outside the four-team field. Every other year, the winner was among our selections. No system is perfect, but ours has a strong track record.
But the algorithm isn't finished. It shouldn't be. That's why we're building a consensus model - a way for coaches, analysts, athletic programs, and media to contribute to how teams are evaluated. Not one committee deciding for everyone. A community refining the system together, in the open.
Rankings should reflect merit. Merit should be transparent. And the people who define merit should include more than 13 people in a room.
That's why CFB Ranks exists.
Kim Filiatrault and Arkadiusz (Arek) Frankowski