RPI is based only on won-lost records: a team's own record, its opponents' records, and its opponents' opponents' records. It goes no deeper than that, and it (per NCAA dictum) absolutely does not consider margin of victory.
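(For reference, the standard formula is RPI = 0.25*WP + 0.50*OWP + 0.25*OOWP. Here's a bare-bones sketch in Python; the real thing has wrinkles I'm skipping, like weighting home and road wins differently and excluding a team's own games from its opponents' percentages.)

def rpi(wp, owp, oowp):
    # Standard NCAA weights: 25% own winning percentage,
    # 50% opponents', 25% opponents' opponents'.
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Example: a .750 team whose opponents play .600 ball and whose
# opponents' opponents play .550 ball.
print(rpi(0.750, 0.600, 0.550))  # 0.625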
Most "name" formulas, like Massey and KenPom, either directly or indirectly dig deeper than just the two levels that RPI goes. Typical is the Elo formula (Sagarin), which gives every team a rating and with each game the winner "takes" rating points from the loser, in proportion to the difference in rating (much like a wager). For example, an Elo formula might set it up so that if two evenly-matched teams play, the winner takes 16 points from the loser. But in a UConn-Gallaudet type mismatch, UConn might take 1 point from Galluadet if they win, but lose 32 points to them if they lose. Over the course of the season, teams that play strong opponents will be able to gain more points than those who play weak opponents. You have to beat Gallaudet 16 times to get the same effect as playing Baylor. (The numbers I used here are typical, but not precise.)
But here's the upshot. Whether it's Elo, ISR, or any other computer rating system, it takes on the order of two dozen games for the signal to overcome the noise. No computer rating means much of anything before February, and they only really gain validity in March. Massey tries to get around this by folding some of last year's games into the mix; without them, those ratings would look as stupid as the current RPI lists.
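(You can see the two-dozen-games figure in a toy Monte Carlo. Everything below is made up for illustration: 60 teams with known "true" strengths play one random opponent per round, and we check how often the Elo ordering of a pair of teams matches the true ordering.)

import random

random.seed(1)
N, K = 60, 32.0
true = [random.gauss(1500, 150) for _ in range(N)]
elo = [1500.0] * N

def win_prob(a, b):
    return 1.0 / (1.0 + 10.0 ** ((b - a) / 400.0))

def pairs_ordered_correctly():
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    return sum((true[i] > true[j]) == (elo[i] > elo[j]) for i, j in pairs) / len(pairs)

for rnd in range(1, 25):
    order = random.sample(range(N), N)  # random pairings this round
    for a, b in zip(order[::2], order[1::2]):
        w, l = (a, b) if random.random() < win_prob(true[a], true[b]) else (b, a)
        delta = K * (1.0 - win_prob(elo[w], elo[l]))
        elo[w] += delta
        elo[l] -= delta
    if rnd in (6, 12, 24):
        print(f"{rnd} games per team: {pairs_ordered_correctly():.0%} of pairs in the right order")

The exact percentages depend on the seed; the point is the trend, with the ordering tightening steadily as the games pile up.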
The one area where we can legitimately look to the computers for support TODAY is conference-vs-conference comparisons, because far more games go into each calculation: a twelve-team league playing a dozen non-conference games apiece feeds roughly 144 results into one comparison, versus a dozen for any single team. The conference ratings would become legitimate roughly 6 games into the season if every team played a "decent" schedule. As things stand, with all the patsies being scheduled in November, it's probably closer to 10-12 games in before the conference ratings are legit.
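(The pooling is easy to see in a game-by-game tally of non-conference results, which gives each pair of conferences a real sample. A sketch; the teams and results here are placeholders.)

from collections import defaultdict

# (conf, conf) -> [wins for the alphabetically-first conference, losses]
record = defaultdict(lambda: [0, 0])

games = [
    ("Baylor", "Big 12", "UConn", "Big East", 1),  # 1 = first-listed team won
    ("Kansas", "Big 12", "Duke", "ACC", 0),
    # ... feed in every non-conference game to date
]

for team_a, conf_a, team_b, conf_b, a_won in games:
    if conf_a == conf_b:
        continue  # intra-conference games say nothing about conf vs. conf
    key = tuple(sorted((conf_a, conf_b)))
    first_won = a_won if key[0] == conf_a else 1 - a_won
    record[key][0 if first_won else 1] += 1

for (c1, c2), (w, l) in sorted(record.items()):
    print(f"{c1} vs {c2}: {w}-{l}")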