CIS 4930 Top 10 Algorithms
Chris Lacher
Google and PageRank

Background

See the original paper, S. Brin and L. Page, "The Anatomy of a Large-Scale Hypertextual Web Search Engine", Computer Networks and ISDN Systems 30 (1998) 107-117 (referred to below as the Brin/Page paper), and the Cornell notes on the mathematics of PageRank (referred to below as the Cornell wording).

PageRank

The PageRank idea begins by asking: what is the probability that a page is reached by a surfer starting from a random location in the WWW? The outgoing links on a page are assumed to be equally likely to be selected, so for example if page A has 5 outgoing links then there is probability 1/5 = 0.20 of choosing any particular one of these 5 links. Note that this is a conditional probability: given that we are at page A, the probability that we navigate to one particular page linked from A is 1 divided by the number of outgoing links, 0.20 in this example.

Recall that conditional probabilities multiply when navigating a path from page A to, say, page C: the probability of taking that path from A to C is the product of the conditional probabilities along the path.

Similarly, to get the total probability of navigating from A to C we would add the path probabilities, since the distinct paths from A to C are mutually exclusive ways to make the navigation.
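
For a concrete, hypothetical illustration (ours, not in the original notes): suppose page A has 5 outgoing links, one of them to page B and one directly to page C, and page B has 2 outgoing links, one of them to page C. Then

P(A -> B -> C)      = (1/5)(1/2) = 0.10
P(A -> C directly)  = 1/5        = 0.20

and since the surfer at A chooses one link or the other, the two routes are mutually exclusive, so the total probability of reaching C from A along these routes is 0.10 + 0.20 = 0.30.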

If we put all the conditional probabilities into a matrix M = (m_ij) where

m_ij = 1/(number of outgoing links of page j),  if page j links to page i
m_ij = 0,                                       if page j does not link to page i

we have a matrix whose (i,j) entry gives the probability of navigating directly to page i, given that the surfer is currently at page j. This is called the transition matrix. (Indexing by destination row and source column makes each column of M sum to 1, which is what the iteration below requires.)
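
As a small, hypothetical illustration (ours, not in the original notes): suppose the web consists of just 3 pages, where page 1 links to pages 2 and 3, page 2 links to page 3, and page 3 links to page 1. Then the transition matrix is

            from 1   from 2   from 3
   to 1       0        0        1
   to 2      1/2       0        0
   to 3      1/2       1        0

Each column sums to 1: from any page, the surfer follows some outgoing link with probability 1.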

The rank of page i for the WWW is defined as the probability that a surfer who starts at a randomly selected page will navigate to page i. As explained in the background references, this can be phrased as a random walk problem in the directed graph whose transition matrix is M, and the rank can be calculated by starting with a probability vector v with equal components and iterating the product:

v, Mv, M(Mv) = M^2 v, M(M^2 v) = M^3 v, M(M^3 v) = M^4 v, ...

which, for a well-behaved web graph, converges to the principal eigenvector w of M, normalized so that its components sum to 1:

Mw = w
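
For the 3-page example above (our illustration), the vector w = (0.4, 0.2, 0.4) is such an eigenvector, as a direct check shows:

(Mw)_1 = 1 * 0.4                 = 0.4 = w_1
(Mw)_2 = (1/2) * 0.4             = 0.2 = w_2
(Mw)_3 = (1/2) * 0.4 + 1 * 0.2   = 0.4 = w_3

So in this tiny web, pages 1 and 3 each have rank 0.4, while page 2, whose only incoming link carries just half of page 1's weight, ranks lowest with 0.2.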

Here is an intuition for why we expect the successive products to converge to a principal eigenvector: suppose the iterates settle down, so that for large k the vector M^k v is (essentially) no longer changed by further multiplication by M. Writing w for this limiting vector,

w = lim M^k v

and applying M once more cannot change the limit, so

Mw = M(lim M^k v) = lim M^(k+1) v = w

The argument can be made rigorous by examining limits.
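
For the 3-page example, the iterates can be computed by hand. Starting from the uniform vector v = (1/3, 1/3, 1/3):

Mv    = (1/3, 1/6, 1/2)   ≈ (0.33, 0.17, 0.50)
M^2 v = (1/2, 1/6, 1/3)   ≈ (0.50, 0.17, 0.33)
M^3 v = (1/3, 1/4, 5/12)  ≈ (0.33, 0.25, 0.42)

and continuing the multiplication, the iterates settle down to w = (0.4, 0.2, 0.4) found above.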

This eigenvector w is the page rank vector, well, almost. The calculation is modified by taking into account the possibility of dead-ends (sinks in the web digraph) and unreachable pages (no directed path exists from A to C). The fix by Brin & Page is straightforward: modify the transition matrix to

W = (1 - q)M + (q/n){1}

where {1} is the matrix with all entries equal to 1 and n is the total number of pages, so the added term distributes the "jump" probability q equally over all n pages. (Note this is similar to the Cornell wording, and the reverse of the Brin/Page wording; however, the latter seem to have meant it this way, judging from their Section 2.1.2.) Here, q is the probability that a random web surfer decides to quit following the links described by the transition matrix and start a new search at a page chosen at random. This possibility is real, and it explains how surfers get out of dead-ends as well as how they jump from one connected component to another. q = 0.15 is the value suggested by Brin/Page.
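
Written entry by entry (just a restatement of the formula above), the modification reads

w_ij = (1 - q) m_ij + q/n

so every entry of W is at least q/n > 0: from any page, every other page can now be reached in a single step with at least this small probability, which is exactly what lets a surfer escape dead-ends and jump between otherwise disconnected parts of the web.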

W satisfies the same requirements as M (indeed, every entry of W is now positive), which is enough to conclude that

v, Wv, W^2 v, W^3 v, W^4 v, ...

converges to the unique principal eigenvector p of W, normalized so that its components sum to 1:

Wp = p

p_i is the rank of page i. The problem is to actually calculate page rank.


Calculating Page Rank

Note that the product Mx of an n x n matrix M and an n-dimensional vector x is the vector y given by:

y[i] = Σ_j m[i][j] * x[j]

(using bracket notation instead of subscripts). This requires Θ(n^2) storage and Θ(n^2) time to calculate y.
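
In code, the straightforward dense version looks like this (a sketch we add for comparison, assuming the matrix is stored as a full n x n array of doubles); both the storage for m and the doubly nested loop are Θ(n^2):

#include <cstddef>
#include <vector>

// Dense product y = M*x: the full matrix takes Theta(n^2) storage,
// and the doubly nested loop takes Theta(n^2) time.
std::vector<double> DenseProduct (const std::vector< std::vector<double> >& m,
                                  const std::vector<double>& x)
{
  std::size_t n = x.size();
  std::vector<double> y (n, 0.0);
  for (std::size_t i = 0; i < n; ++i)
    for (std::size_t j = 0; j < n; ++j)
      y[i] += m[i][j] * x[j];
  return y;
}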

At the time Brin & Page wrote their paper cited above, the web size was approaching 10^9 (1 billion) documents. That number has increased to 10^11. Thus the transition matrix has between 10^18 and 10^22 entries - much too large for even today's computer systems to manipulate. So a big question is: how can the transition matrix be represented in such a way that the page rank can be calculated?
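
To get a feel for the scale, assume (our figure, not the notes') 8 bytes per stored entry; then the larger estimate works out to

10^22 entries x 8 bytes/entry = 8 x 10^22 bytes ≈ 10^23 bytes

which is far more than the primary and secondary storage of any existing computer system.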

The key observation for getting this done is that most web pages have only a few outgoing links, and for the few pages that have a great many, we can ignore all but the first 10 or so, because users will do the same thing. Therefore the transition matrix is sparse: each page contributes only about 10 non-zero entries (one for each outgoing link that we keep), so M has roughly 10n non-zero entries in total rather than n^2. We can make use of hash table technology to store and manipulate matrices of gigantic size, as long as they are sparse. The following is pseudo code for the sparse product y = Mx.

typedef HashTable < size_t, double >       SparseVector; 
typedef HashTable < size_t, SparseVector > SparseMatrix;

SparseMatrix m;    // m = WWW transition matrix
SparseVector x,y;  // x = given vector, y = result of product m*x
SparseMatrix::Iterator iter;
SparseVector::Iterator jter;

size_t i;
size_t j;

for (iter = m.Begin(); iter != m.End(); ++iter)
{
  i = (*iter).key_;
  y[i] = 0;  // this is an insert operation initializing the ith component of y to zero
  for (jter = m[i].Begin(); jter != m[i].End(); ++jter)
  {
    j = (*jter).key_;
    y[i] += m[i][j] * x[j]; // these are retrieval operations
  }
}
// the call to x[j] can be prevented from being an insert operation by
// substituting the value 0 whenever the key j is not in the table x

The basic matrix-vector multiplication is the same, but we only have to visit the places where the matrix has non-zero entries, which, for sparse matrices, reduces the problem to one of manageable size, both in storage (we never need the full n x n matrix in primary memory, only its non-zero entries) and in time.
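
To make the idea concrete, here is a runnable sketch (ours, using std::unordered_map in place of the course HashTable class; the names SparseProduct and PageRankStep are our own). It computes the sparse product y = Mx exactly as in the pseudo code above, treating missing keys of x as zero, and then applies one step of the damped iteration with q = 0.15 as suggested by Brin/Page:

#include <cstddef>
#include <unordered_map>

typedef std::unordered_map<std::size_t, double>       SparseVector;
typedef std::unordered_map<std::size_t, SparseVector> SparseMatrix;

// y = M*x, touching only the stored (non-zero) entries of M.
// Keys missing from x are treated as 0, so no accidental inserts happen.
SparseVector SparseProduct (const SparseMatrix& m, const SparseVector& x)
{
  SparseVector y;
  for (const auto& row : m)                 // row.first = i, row.second = row i of M
  {
    double sum = 0.0;
    for (const auto& entry : row.second)    // entry.first = j, entry.second = m[i][j]
    {
      auto xj = x.find(entry.first);
      if (xj != x.end())
        sum += entry.second * xj->second;
    }
    y[row.first] = sum;
  }
  return y;
}

// One step of the damped iteration v <- (1-q)Mv + (q/n){1}v.
// Assumes the components of v sum to 1, so the teleport term adds q/n to every page.
SparseVector PageRankStep (const SparseMatrix& m, const SparseVector& v,
                           std::size_t n, double q = 0.15)
{
  SparseVector mv = SparseProduct(m, v);
  SparseVector w;
  for (std::size_t i = 0; i < n; ++i)       // every page gets the q/n teleport term
  {
    auto it = mv.find(i);
    double link_part = (it != mv.end()) ? it->second : 0.0;
    w[i] = (1.0 - q) * link_part + q / n;
  }
  return w;
}

Iterating PageRankStep until successive vectors differ by less than some tolerance produces the rank vector p.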


Indexing the Web

The Google enterprise is a tour-de-force of data structures, algorithms, and operating system optimization. When a user enters a keyword, Google must:

  1. Find all web pages with that word
  2. Rank these pages
  3. Sort the pages by rank
  4. Present the user with the top so-many pages

all within a time that does not bore the user into starting a different activity. Finding the web pages requires that the pages be stored in such a way that they can be searched quickly for matching strings. That process itself requires a sophisticated index and inverted (reverse) index of the pages, and a sort of the content that permits binary search. Ranking the pages uses the PageRank algorithm, plus other information that may relate to known user preferences, previous search behavior of that user, and other proprietary (and dynamic) inputs. Sorting by rank requires, obviously, a sort.
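
As a toy illustration of steps 1 - 4 (a sketch with made-up names such as InvertedIndex and TopPages; Google's real data structures are of course far more elaborate), an inverted index maps each word to the list of pages containing it, and the hits are then ordered by their precomputed rank:

#include <algorithm>
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

typedef std::size_t PageID;

// Inverted index: word -> list of pages containing that word.
typedef std::unordered_map<std::string, std::vector<PageID>> InvertedIndex;

// Steps 1-4 for a single keyword: find the matching pages, order them by
// decreasing precomputed PageRank, and keep the top so-many.
// Assumes every page id appearing in the index has an entry in 'rank'.
std::vector<PageID> TopPages (const InvertedIndex& index,
                              const std::unordered_map<PageID, double>& rank,
                              const std::string& word,
                              std::size_t how_many)
{
  std::vector<PageID> result;
  auto hits = index.find(word);                 // step 1: pages containing the word
  if (hits == index.end()) return result;
  result = hits->second;
  std::sort(result.begin(), result.end(),       // steps 2-3: rank and sort
            [&rank](PageID a, PageID b)
            { return rank.at(a) > rank.at(b); });
  if (result.size() > how_many)                 // step 4: present only the top pages
    result.resize(how_many);
  return result;
}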

The WWW transition matrix must be continuously updated. This is done with a system of "web crawlers", processes that wander the web and send link information back to Google. (Google is not the only organization operating web crawlers, of course.) Because the web evolves more or less continuously, with new pages and links added virtually every second of every day, the transition matrix, and with it the page rank vector, must be recomputed frequently, at least daily, using the calculation methodology described above.

Here is a remarkable thing: virtually all of the technology used by Google is covered, in one form or another, in our curriculum: Data Structures, Algorithms, and Operating Systems.
