It is a well-established fact that, for classical random graphs such as (variants of) Gn,p or random regular graphs, spectral methods yield efficient algorithms for clustering problems (e.g. colouring or bisection). The recently emerging theory of large networks provides convincing evidence that such networks, albeit random-looking in some sense, cannot sensibly be described by classical random graphs.
A variety of new types of random graphs have been introduced. One of these types is characterized by a fixed expected degree sequence: for each vertex, its expected degree is prescribed.
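As a concrete illustration of such a model, the following sketch samples a graph with a given expected degree sequence. The abstract does not fix a particular construction; the edge probabilities w[u]*w[v]/sum(w) used here are an assumption, corresponding to one standard instantiation of the fixed-expected-degree model.

```python
import numpy as np

def sample_expected_degree_graph(w, rng=None):
    """Sample a random graph in which vertex u has expected degree roughly w[u].

    Assumed instantiation: include edge {u, v} (u != v) independently with
    probability w[u] * w[v] / sum(w), which requires max(w)^2 <= sum(w) for
    the probabilities to stay at most 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(w, dtype=float)
    p = np.minimum(np.outer(w, w) / w.sum(), 1.0)
    # Sample only the strict upper triangle (no self-loops), then symmetrize.
    upper = np.triu(rng.random(p.shape) < p, k=1)
    return upper | upper.T
```

With all weights equal to a constant c, this reduces (up to the diagonal) to a graph closely resembling sparse Gn,p with p about c/n.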
Recent theoretical work confirms that spectral methods can be successfully applied to clustering problems for such random graphs, too, provided that the expected degrees are not too small (in fact, at least log^6 n).
In this case, however, the degree of each vertex is concentrated around its expectation. We show how to remove this restriction and apply spectral methods when the expected degrees are bounded below just by a suitable constant. Our results rely on the observation that techniques developed for the classical sparse Gn,p random graph (that is, p = c/n) can be transferred to the present situation if we consider a suitably normalized adjacency matrix: we divide each entry of the adjacency matrix by the product of the expected degrees of the incident vertices.
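The normalization described above can be sketched in a few lines. This is only an illustration of the rescaling itself, not of the paper's full algorithm; the function name and the eigen-decomposition step are ours, and a common variant in the literature divides by the square root of the product instead, whereas here we follow the abstract and divide by the product itself.

```python
import numpy as np

def normalized_adjacency(A, w):
    """Rescale adjacency matrix A by the expected degrees w.

    The (u, v) entry of the result is A[u, v] / (w[u] * w[v]), i.e. each
    entry is divided by the product of the expected degrees of the two
    incident vertices, as in the abstract.
    """
    w = np.asarray(w, dtype=float)
    return np.asarray(A, dtype=float) / np.outer(w, w)

def spectrum(M):
    """Eigenvalues of the (symmetric) normalized matrix, sorted descending.

    Spectral clustering methods would then work with the top eigenvectors
    of this matrix rather than of the raw adjacency matrix.
    """
    return np.sort(np.linalg.eigvalsh(M))[::-1]
```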
Given the host of spectral techniques developed for Gn,p, this observation should be of independent interest.