Probability and algorithms enjoy an almost boisterous interaction that has led to an active, extensive literature that touches fields as diverse as number theory and the design of computer hardware. This article offers a gentle introduction to the simplest, most basic ideas that underlie this development.
Simulated annealing is a probabilistic method proposed in Kirkpatrick, Gelatt and Vecchi (1983) and Cerny (1985) for finding the global minimum of a cost function that may possess several local minima. It works by emulating the physical process whereby a solid is slowly cooled so that when eventually its structure is "frozen," this happens at a minimum energy configuration. We restrict ourselves to the case of a cost function defined on a finite set. Extensions of simulated annealing to the case of functions defined on continuous sets have also been introduced in the literature (e.g., Geman and Hwang, 1986; Gidas, 1985a; Holley, Kusuoka and Stroock, 1989; Jeng and Woods, 1990; Kushner, 1985). Our goal in this review is to describe the method, its convergence and its behavior in applications.
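For concreteness, here is a minimal sketch of the method in Python; the geometric cooling schedule, neighborhood structure and toy cost function are illustrative assumptions, not the article's (the classical convergence results concern slower schedules on the order of c / log k).

```python
import math
import random

def simulated_annealing(x0, cost, neighbors, steps=100_000,
                        t0=1.0, alpha=0.9999, rng=None):
    """Minimize `cost` over a finite set reachable from x0 via `neighbors`.

    Geometric cooling is used here for simplicity; convergence theory
    typically requires temperatures decreasing like c / log(k)."""
    rng = rng or random.Random(0)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    T = t0
    for _ in range(steps):
        y = rng.choice(neighbors(x))
        fy = cost(y)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha
    return best, fbest

# Toy usage: minimize a bumpy function on {0, ..., 99} with nearest-integer moves.
f = lambda i: (i - 70) ** 2 + 300 * math.cos(i / 3.0)
nbrs = lambda i: [j for j in (i - 1, i + 1) if 0 <= j <= 99]
print(simulated_annealing(50, f, nbrs))
```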
For a large finite set whose size admits no explicit formula, one can often devise a randomized algorithm that approximately counts the set by simulating Markov chains on the set itself and on recursively defined subsets.
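A minimal sketch of one such scheme, using the standard product-estimator setup: to count the independent sets of a graph, write the count as a telescoping product of ratios over subsets obtained by adding edges one at a time, and estimate each ratio by sampling from a Markov chain (Glauber dynamics) on the smaller edge set. The graph, chain and sample sizes below are illustrative choices, not taken from the article.

```python
import random

def glauber_step(S, adj, n, rng):
    """Heat-bath step on independent sets: stationary distribution is uniform."""
    v = rng.randrange(n)
    if any(u in S for u in adj[v]):
        S.discard(v)            # v is blocked by a neighbor: it must be out
    elif rng.random() < 0.5:
        S.add(v)
    else:
        S.discard(v)

def count_independent_sets(n, edges, samples=4000, burn=1000, rng=None):
    """Product estimator: |S_m| = 2^n * prod_i |S_i| / |S_{i-1}|, where S_i is
    the family of independent sets using only the first i edges.  Each ratio
    is estimated by Markov chain samples from S_{i-1}."""
    rng = rng or random.Random(0)
    est = float(2 ** n)                 # no edges: every subset is independent
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        S, hits = set(), 0
        for t in range(burn + samples):
            glauber_step(S, adj, n, rng)
            if t >= burn and not (a in S and b in S):
                hits += 1               # sample survives adding edge (a, b)
        est *= hits / samples
        adj[a].add(b)
        adj[b].add(a)
    return est

# 4-cycle: the true count is 7 (empty set, 4 singletons, 2 opposite pairs).
print(count_independent_sets(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```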
Randomization appears to be an essential ingredient in algorithms for maintaining some form of privacy. This article discusses probabilistic algorithms for authenticating a user and for allowing the private use of shared resources.
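As one classical illustration of randomness in authentication (a Fiat-Shamir-style identification round, chosen here by way of example; the article's own protocols may differ), the prover's fresh randomness hides the secret while the verifier's random challenge keeps an impostor from succeeding more than half the time per round:

```python
import random

# Toy Fiat-Shamir-style identification round.  The prover convinces the
# verifier that it knows s with v = s^2 mod N, without revealing s.
# The parameters are far too small for real security; illustrative only.
N = 10007 * 10039        # in practice N = p*q with the factorization secret
s = 123456               # prover's secret
v = pow(s, 2, N)         # public key

def one_round(rng):
    r = rng.randrange(1, N)            # prover's fresh randomness hides s
    x = pow(r, 2, N)                   # commitment sent first
    b = rng.randrange(2)               # verifier's unpredictable challenge
    y = (r * pow(s, b, N)) % N         # response: r or r*s mod N
    return pow(y, 2, N) == (x * pow(v, b, N)) % N   # verifier's check

rng = random.Random(0)
# An honest prover passes every round; an impostor without s passes a round
# with probability at most 1/2, so k rounds give soundness error 2**-k.
print(all(one_round(rng) for _ in range(20)))
```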
This article surveys the problem of generating pseudorandom numbers and lists many of the known constructions of pseudorandom bits. It outlines the subject of computational information theory, in which the fundamental object is a secure pseudorandom bit generator. Such generators have not been proved to exist, although functions are known that appear to possess the required properties. In any case, pseudorandom number generators are known that work reasonably well in practice.
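One well-known construction of this kind is the Blum-Blum-Shub generator, whose apparent security rests on the assumed hardness of factoring; a toy-sized sketch (the primes below are far too small for real use):

```python
# Minimal Blum-Blum-Shub generator: output the low-order bit of successive
# squarings modulo N = p*q.  Unpredictability is believed to follow from
# the hardness of factoring N; the parameters here are purely illustrative.
p, q = 10007, 10039          # both congruent to 3 mod 4, as BBS requires
N = p * q

def bbs_bits(seed, nbits):
    x = pow(seed, 2, N)      # square once so x is a quadratic residue
    out = []
    for _ in range(nbits):
        x = pow(x, 2, N)
        out.append(x & 1)    # emit the least significant bit
    return out

print(bbs_bits(seed=2024, nbits=32))
```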
In the last 10 years, there have been major advances in the average-case analysis of bin packing, scheduling and similar partitioning problems in one and two dimensions. These problems are drawn from important applications throughout industry, often under the name of stock cutting. This article briefly surveys many of the basic results, as well as the probabilistic methods used to obtain them. The impact of the research discussed here has been twofold. First, analysis has shown that heuristic solutions often perform extremely well on average and hence can be recommended in practice, even though worst-case behavior can be quite poor. Second, the techniques of applied probability that have been developed for the analysis of bin packing have found application in completely different arenas, such as statistics and stochastic models.
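As a small illustration of the flavor of these results (the heuristic and the experiment are a sketch, not drawn from the article), first-fit decreasing packs i.i.d. uniform item sizes with waste that grows much more slowly than the number of items:

```python
import random

def first_fit_decreasing(sizes, capacity=1.0):
    """Pack each item into the first bin that fits, largest items first."""
    bins = []
    for s in sorted(sizes, reverse=True):
        for i, load in enumerate(bins):
            if load + s <= capacity:
                bins[i] = load + s
                break
        else:
            bins.append(s)    # no bin fits: open a new one
    return len(bins)

rng = random.Random(0)
sizes = [rng.random() for _ in range(2000)]     # i.i.d. U(0,1) item sizes
used = first_fit_decreasing(sizes)
print(used, "bins; trivial lower bound:", sum(sizes))
# The waste (bins used minus total size) grows far more slowly than n,
# which is the kind of statement the average-case theory makes precise.
```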
This article summarizes the current status of several streams of research that deal with the probability theory of problems of combinatorial optimization. There is a particular emphasis on functionals of finite point sets. The most famous example of such functionals is the length associated with the Euclidean traveling salesman problem (TSP), but closely related problems include the minimal spanning tree problem, minimal matching problems and others. Progress is also surveyed on (1) the approximation and determination of constants whose existence is known by subadditive methods, (2) the central limit problems for several functionals closely related to Euclidean functionals, and (3) analogies in the asymptotic behavior between worst-case and expected-case behavior of Euclidean problems. No attempt has been made in this survey to cover the many important applications of probability to linear programming, arrangement searching or other problems that focus on lines or planes.
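As a sketch of the square-root asymptotics behind such functionals (for the TSP, the Beardwood-Halton-Hammersley theorem says the optimal tour through n uniform points in the unit square grows like a constant times the square root of n), one can watch a cheap heuristic tour length stabilize when divided by sqrt(n). The nearest-neighbor heuristic below is an illustrative stand-in for the optimal tour, assuming Python 3.8+ for math.dist:

```python
import math
import random

def nn_tour_length(pts):
    """Greedy nearest-neighbor tour: a cheap upper-bound proxy for the TSP."""
    unvisited = list(pts[1:])
    cur, total = pts[0], 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda p: math.dist(cur, p))
        total += math.dist(cur, nxt)
        unvisited.remove(nxt)
        cur = nxt
    return total + math.dist(cur, pts[0])       # close the tour

rng = random.Random(0)
for n in (100, 400, 1600):
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    print(n, nn_tour_length(pts) / math.sqrt(n))   # ratio roughly stabilizes
```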
A randomized algorithm is one that uses random numbers or bits during its execution. Such algorithms, when properly designed, can ensure a correct solution on every input with high probability. For many problems, randomized algorithms have been designed that are simpler or more efficient than the best known deterministic algorithms. In this article, we define a natural randomized parallel complexity class, RNC, and give a survey of randomized algorithms for problems in this class.
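A classic example of such an algorithm is Freivalds' randomized check of a matrix product, which is also easy to parallelize; the sketch below is illustrative and not drawn from the article:

```python
import random

def freivalds_check(A, B, C, trials=20, rng=None):
    """Test whether A @ B == C for n x n matrices given as lists of rows.

    Each trial costs O(n^2): multiply by a random 0/1 vector instead of
    forming A @ B.  If A @ B != C, a trial detects the error with
    probability >= 1/2, so the check errs with probability <= 2**-trials."""
    rng = rng or random.Random()
    n = len(A)
    for _ in range(trials):
        r = [rng.randrange(2) for _ in range(n)]
        Br = [sum(row[j] * r[j] for j in range(n)) for row in B]
        ABr = [sum(row[j] * Br[j] for j in range(n)) for row in A]
        Cr = [sum(row[j] * r[j] for j in range(n)) for row in C]
        if ABr != Cr:
            return False        # certainly a wrong product
    return True                 # correct with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds_check(A, B, [[19, 22], [43, 50]]))   # True: correct product
print(freivalds_check(A, B, [[19, 22], [43, 51]]))   # almost surely False
```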
Randomly wired multistage networks have recently been shown to outperform traditional multistage networks in three respects. First, they have fast deterministic packet-switching and circuit-switching algorithms for routing permutations. Second, they are nonblocking, and there are on-line algorithms for establishing new connections in them, even if many requests for connections are made simultaneously. Finally, and perhaps most importantly, they are highly fault tolerant.