Measuring the properties of a large, unstructured network can be difficult: One may not have full knowledge of the network topology, and detailed global measurements may be infeasible. A valuable approach to such problems is to take measurements from selected locations within the network and then aggregate them to infer large-scale properties. One sees this notion applied in settings that range from Internet topology discovery tools to remote software agents that estimate the download times of popular web pages. Some of the most basic questions about this type of approach, however, are largely unresolved at an analytical level. How reliable are the results? How much does the choice of measurement locations affect the aggregate information one infers about the network?
We describe algorithms that yield provable guarantees for a particular problem of this type: detecting a network failure. Suppose we want to detect events of the following form in an $n$-node network: An adversary destroys up to $k$ nodes or edges, after which two subsets of the nodes, each of size at least $\epsilon n$, are disconnected from one another. We call such an event an $(\epsilon,k)$-partition. One method for detecting such events would be to place ``agents'' at a set $D$ of nodes, and record a fault whenever two of them become separated from each other. To be a good detection set, $D$ should become disconnected whenever there is an $(\epsilon,k)$-partition; in this way, it ``witnesses'' all such events.
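As a concrete illustration of the detection mechanism, the sketch below (in Python) checks whether a set $D$ of agent locations witnesses a given failure event. The adjacency-list graph representation, the restriction to node failures, and the function names are assumptions made for this example, not part of the formal model.

\begin{verbatim}
from collections import deque

def components_after_failures(adj, failed_nodes):
    """Map each surviving node to a connected-component id, computed by
    BFS on the graph with the failed nodes removed.  `adj` is assumed to
    be a dict mapping each node to an iterable of its neighbors."""
    failed = set(failed_nodes)
    comp = {}
    cid = 0
    for start in adj:
        if start in failed or start in comp:
            continue
        comp[start] = cid
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in failed and v not in comp:
                    comp[v] = cid
                    queue.append(v)
        cid += 1
    return comp

def detection_set_separated(adj, failed_nodes, D):
    """True if at least two surviving agents in D lie in different
    components, i.e. the detection set records ("witnesses") the fault."""
    comp = components_after_failures(adj, failed_nodes)
    agent_components = {comp[d] for d in D if d in comp}
    return len(agent_components) > 1
\end{verbatim}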
We show that every graph has a detection set of size polynomial in $k$ and $\epsilon^{-1}$, and independent of the size of the graph itself. Moreover, random sampling provides an effective way to construct such a set. Our analysis establishes a connection between graph separators and the notion of VC-dimension, using techniques based on matchings and disjoint paths.
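A minimal sketch of the random-sampling construction, under the same assumptions as the previous example: the parameter \texttt{sample\_size} is a hypothetical stand-in for the polynomial bound in $k$ and $\epsilon^{-1}$ established by the analysis, and is not the bound itself.

\begin{verbatim}
import random

def sample_detection_set(nodes, sample_size, seed=None):
    """Choose a candidate detection set uniformly at random.
    `sample_size` stands in for the poly(k, 1/epsilon) bound; the
    correct value comes from the paper's analysis, not from this code."""
    rng = random.Random(seed)
    nodes = list(nodes)
    return set(rng.sample(nodes, min(sample_size, len(nodes))))
\end{verbatim}

For any fixed failure event, one can then call \texttt{detection\_set\_separated} from the earlier sketch on the sampled set to check whether it witnesses that particular $(\epsilon,k)$-partition.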