As of Week 9, I have made a significant change in the model.
My model requires that the scores of game performances be adjusted for the number of games each team has played. Otherwise, the matrix operation that is the cornerstone of the rankings gives an advantage to teams that have played more games.
My mistake was that scores were previously adjusted for the total number of games a team had played, when they should have been adjusted for the number of Division I games only, since my model includes only Division I games. This disadvantaged teams that had played opponents outside Division I, which in turn caused collateral damage across the board.
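The fix amounts to dividing each team's accumulated score by its Division I game count rather than its total game count before the matrix step. Here is a minimal sketch of that adjustment; the function and variable names are hypothetical, and the actual matrix operation the model uses is not shown.

```python
def adjusted_scores(raw_scores, d1_games_played):
    """Divide each team's accumulated performance score by the number of
    Division I games it has played, so that teams with more games gain
    no built-in advantage in the subsequent matrix operation.

    Hypothetical illustration -- not the model's actual code."""
    return {
        team: total / d1_games_played[team]
        for team, total in raw_scores.items()
    }

# Example: Team A has played 10 D-I games, Team B only 8.
raw = {"Team A": 25.0, "Team B": 22.0}
games = {"Team A": 10, "Team B": 8}
print(adjusted_scores(raw, games))
# Team B's per-game score (2.75) now edges out Team A's (2.50),
# even though Team A's raw total is higher.
```

Using the D-I game count in the denominator (instead of total games) is what removes the penalty on teams with several non-Division I games on their schedules.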
Teams that particularly benefited from this change include Furman (17), Georgia (77), Winthrop (82), Kansas (1), and Duke (2).
What anybody who has followed these rankings will probably notice most is that Maryland no longer dominates, while Kansas and Duke now hold the top two spots by a significant margin.
My rankings are also tracked by Kenneth Massey at Massey Ratings, where they are compared to a composite of many different systems. Before the change, my correlation to the composite was r ≈ .90; it has since improved significantly to r ≈ .965. Correlation to the composite doesn't necessarily indicate better accuracy or predictive power, but at a minimum it proves that I'm not crazy.
Going forward there should be far fewer anomalies in my rankings (though there are always some… I'm looking at you, Furman and Baylor). With this new model, I hope to soon publish game predictions, add rankings based on expected margin of victory (à la Sagarin), and have a strong predictive model in time for March Madness.