Abstract
We introduce a simple but powerful strategy to study processes driven by two or more reinforcement mechanisms in competition. We apply our method to two types of models: nonconservative zero range processes on finite graphs, and multi-particle random walks with positive and negative reinforcement on the edges. The results hold for a broad class of reinforcement functions, including those with superlinear growth. Our strategy consists in comparing the original processes with suitable reference models. To implement the comparison we estimate an object reminiscent of the Radon–Nikodym derivative on a carefully chosen set of trajectories. Our results describe the almost sure long-time behaviour of the processes. We also prove a phase transition depending on the strength of the reinforcement functions.
Funding Statement
D. E. gratefully acknowledges financial support from the National Council for Scientific and Technological Development—CNPq via the Universal grant 409259/2018-7, the Universal grant 406001/2021-9, the Bolsa de Produtividade 303520/2019-1, and the Bolsa de Produtividade 303348/2022-4. D. E. moreover acknowledges support by the Serrapilheira Institute which supported this work (grant number Serra—R-2011-37582).
G. R. was partially supported by a Capes/PNPD fellowship 888887.313738/2019-00 while he was a postdoc at the Federal University of Bahia (UFBA). G. R. was also partially supported by his temporary contract with the Technical University of Munich (TUM). G. R. is supported by a PCI scholarship from the project 444350/2018-7—Programa de Capacitação Institucional—Matemática e suas Aplicações.
Acknowledgments
The authors are grateful to T. Franco and A. Teixeira for fruitful discussions about the topic of the paper. The authors are also grateful to the anonymous referees for their valuable input.
Citation
Dirk Erhard. Guilherme Reis. "Stochastic processes with competing reinforcements." Ann. Appl. Probab. 34 (5) 4513 - 4553, October 2024. https://doi.org/10.1214/24-AAP2073