Maximum gaps in one-dimensional hard-core models

We study the distribution of the maximum gap size in one-dimensional hard-core models. First, we randomly sequentially pack rods of length $2$ onto an interval of length $L$, subject to the hard-core constraint that rods do not overlap. We find that in a saturated packing, with high probability there is no gap of size $2 - o(1/L)$ between adjacent rods, but there are gaps of size at least $2 - 1/L^{1-\epsilon}$ for all $\epsilon>0$. We subsequently study a variant of the hard-core process, the one-dimensional ghost hard-core model introduced by Torquato and Stillinger. In this model, we randomly sequentially pack rods of length $2$ onto an interval of length $L$, such that a newly placed rod overlaps neither previously placed rods nor previously considered candidate rods. We find that in the infinite time limit, with high probability the maximum gap between adjacent rods is smaller than $\log L$ but at least $(\log L)^{1-\epsilon}$ for all $\epsilon>0.$


Introduction
The Rényi parking problem is a classical combinatorial question that gives a simple example of a random sequential addition (RSA) process; it is a specific instantiation of a one-dimensional hard-core model of much interest in statistical mechanics.
The setup for the parking problem proceeds as follows. Consider a closed interval [0, L] for L > 2, into which rods of length 2 sequentially arrive at integer times. When each rod arrives, we attempt to place it uniformly at random in the interval, subject to the hard-core condition that rods cannot overlap with each other. In 1958, Rényi proved the following well-known result.
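For intuition, saturated configurations of this process can be sampled without simulating rejections: in a saturated packing, the left endpoint of the first accepted rod is uniform on the feasible region, after which the two sides of that rod fill in independently. A minimal Python sketch of this standard recursive construction (function names are ours):

```python
import random

def park(length):
    """Number of rods of length 2 in a saturated packing of an interval
    of the given length, sampled via the recursive construction."""
    if length < 2:
        return 0
    x = random.uniform(0, length - 2)          # left endpoint of first rod
    return 1 + park(x) + park(length - 2 - x)  # two sides fill independently

random.seed(0)
L, trials = 1000, 200
density = sum(2 * park(L) for _ in range(trials)) / (trials * L)
print(round(density, 3))  # empirically close to the parking constant 0.7476
```

Averaging the packed fraction 2N(L)/L over independent trials recovers Rényi's constant to two or three digits even at this modest scale.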
Theorem 1.1 (Rényi [16]). In the above setup, let N(L) be the random variable representing the number of rods placed in a saturated packing of [0, L] (when no more rods can fit without violating the hard-core constraint). Then
$$\lim_{L \to \infty} \frac{2\,\mathbb{E}[N(L)]}{L} = \alpha,$$
where α is the Rényi parking constant
$$\alpha = \int_0^\infty \exp\left(-2 \int_0^x \frac{1 - e^{-y}}{y}\, dy\right) dx \approx 0.7475979202.$$
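The constant can be checked numerically; the sketch below evaluates the double integral defining α with a simple trapezoidal rule, truncating the outer integral at X = 1000 (which introduces a small truncation error, roughly a few units in the fourth decimal).

```python
import math

def f(y):
    """Inner integrand (1 - e^{-y})/y, with its y -> 0 limit of 1."""
    return 1.0 if y == 0 else (1.0 - math.exp(-y)) / y

h, X = 0.005, 1000.0
n = int(X / h)
inner = 0.0            # running value of \int_0^x (1 - e^{-y})/y dy
alpha = 0.0            # running value of the outer integral
prev_f, prev_g = f(0.0), 1.0
for i in range(1, n + 1):
    fx = f(i * h)
    inner += 0.5 * h * (prev_f + fx)     # trapezoid step, inner integral
    g = math.exp(-2.0 * inner)           # outer integrand at x = i*h
    alpha += 0.5 * h * (prev_g + g)      # trapezoid step, outer integral
    prev_f, prev_g = fx, g

print(round(alpha, 4))  # just below 0.7476 (the tail beyond X is dropped)
```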
In this work, we study the distribution of gaps between adjacent rods in the saturated state, focusing on the upper extreme. In particular, we seek to understand the following:

Question 1.2. What can we say about the largest gap that arises in a saturated packing of an interval of length L by rods of length 2, subject to the hard-core constraint?
Itoh [10] studied a delay integral equation that characterizes the distribution of the minimum gap sizes in a saturated configuration, following methods of Dvoretzky and Robbins [7]. The distribution of gap sizes was also examined in the study of the nearest neighbors problem in one-dimensional random sequential adsorption [17]. In [10], Itoh observed that the expected minimum gap size in a saturated configuration on an interval of length L is smaller than any constant ε > 0 in the large L limit. This work was subsequently extended to give approximations of the upper tail of the distribution of minimum gap sizes in [14]. These works, as noted in §4 of [6], imply an analogous integral recurrence for the CDF of the maximum gap size in a saturated packing. In [6], the authors provided some preliminary observations and noted that further study of the maximum gap size was of substantial interest.
In this work, we give a threshold for the maximum gap size in a saturated configuration of the hard-core model, observing that with high probability, a saturated one-dimensional hard-core packing on an interval of length L has no gap of size 2 − o(1/L) but does have gaps of size 2 − 1/L^{1−ε} for all ε > 0. More precisely, we prove the following result.
Theorem 1.3. The following holds in the saturated configuration of a one-dimensional hard-core process on an interval of length L packed by rods of length 2, for L sufficiently large:
• with high probability, there are no gaps of size 2 − o(1/L);
• for all a > 0, with positive probability, there exists a gap of size at least 2 − a/L;
• for all ε > 0, with high probability, there exists a gap of size at least 2 − 1/L^{1−ε}.
In the classical one-dimensional hard-core model described above, cars that fail to park have no effect on future parking attempts. We will be interested in a variant of the hard-core model, motivated by the ghost RSA process introduced in work of [19] studying sphere packings. Unlike the classical random sequential addition process, where much is unknown even in 2 dimensions, the authors of [19] are able to analytically derive the n-point correlation functions and limiting densities, exactly solving the ghost sphere packing model in arbitrary dimension.
We study the one-dimensional ghost hard-core model, akin to the hard-core model above, focusing on properties of this process in the infinite time limit. We give a precise definition below:

Definition 1.4. We attempt to place rods of length 2 on an interval of length L, as follows. At each integer time, a candidate rod arrives at a position chosen uniformly at random in the interval:
• if the candidate rod overlaps no previously placed rod and no previously considered candidate rod, we place it, and continue;
• otherwise, we discard the candidate rod (it remains as a "ghost"), and continue.
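This process is straightforward to simulate, since a rejected candidate only ever needs to be remembered as a blocking center. A Python sketch (our own naming; we stop after 10L candidates, by which point every position is blocked with overwhelming probability), whose covered fraction is consistent with the known limiting ghost RSA density 2^{−d} in dimension d = 1:

```python
import bisect
import random

def ghost_pack(L, steps):
    """1D ghost process: a candidate rod (length 2, center c) is placed
    only if its center is at distance >= 2 from EVERY earlier candidate
    center, placed or ghost; every candidate is remembered either way."""
    centers, placed = [], []
    for _ in range(steps):
        c = random.uniform(1, L - 1)
        i = bisect.bisect_left(centers, c)
        ok = (i == 0 or c - centers[i - 1] >= 2) and \
             (i == len(centers) or centers[i] - c >= 2)
        if ok:
            placed.append(c)
        bisect.insort(centers, c)
    return sorted(placed)

random.seed(1)
L = 500
rods = ghost_pack(L, steps=10 * L)   # by now the interval is fully blocked
density = 2 * len(rods) / L          # covered fraction; about 1/2 in 1D
max_gap = max(b - a - 2 for a, b in zip(rods, rods[1:]))
print(round(density, 2), round(max_gap, 2))
```

Note that the maximum interior gap in such runs is on the order of log L, in line with Theorem 1.5.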
Several further observations about the occupancy probabilities and the pair correlation function associated to this process can be found in Appendix A.
In the infinite time limit of the one-dimensional ghost hard-core process (on an interval of length L), with high probability there is no gap of size log L, but there are gaps of size at least (log L)^{1−ε} for arbitrarily small ε > 0. More precisely, we have the following:

Theorem 1.5. The following holds in the infinite time limit of a one-dimensional ghost hard-core process on an interval of length L packed by rods of length 2, for L sufficiently large:
• with high probability, all gaps are smaller than log L;
• for all ε > 0, with high probability, there exists a gap of size at least (log L)^{1−ε}.
Overview of article. We begin in Section 2 by reviewing the classical one-dimensional hard-core model and introducing the ghost RSA process of [19]. In Section 3 we prove Theorem 1.3. We prove Theorem 1.5 in Section 4. The ghost hard-core process is very different from the classical hard-core process; we illustrate some differences to give context in Appendix A. One of our primary motivations for studying large gaps in these hard-core processes is to provide a glimpse into what gaps might look like in a random sequential addition process in higher dimensions. Theorem 1.5 hints that in higher dimensions, in the infinite time limit, a ghost packing may still have room for many more spheres/cubes to be packed without overlap; we discuss this further in Section 5.
Acknowledgements. We would like to thank Henry Cohn and Salvatore Torquato for helpful discussions, ideas for writing improvements, and several useful reference suggestions for background. NM was supported by the Hertz Graduate Fellowship and by the NSF GRFP #2141064.

Preliminaries
2.1. Notation. Throughout this article, we consider packing rods of length 2 onto an interval of length L, which we model by the closed interval [0, L] ⊆ R. Unless stated otherwise, we study configurations in the infinite time limit of the two processes, the 1D classical hard-core model and the 1D ghost hard-core model.
We sometimes refer to the infinite time limit of the 1D classical hard-core model as saturation, since at this limit, no more rods can be packed without violating the hard-core constraint. We sometimes omit the modifier hard-core when describing models, as all models considered in this article are subject to the hard-core constraint. We also employ the following notation conventions.
• Let N(L) denote the number of rods in the classical model at saturation and Ñ(L) denote the number of rods in the ghost model in the infinite time limit.
• Let G(L, r) denote the number of gaps of length at least r in the classical model at saturation, and G̃(L, r) denote the number of gaps of length at least r in the ghost model in the infinite time limit.
• For points x_1, . . ., x_n on the interval, let π(x_1, . . ., x_n; L) be the n-point correlation function in the classical hard-core model, the probability that all of x_1, . . ., x_n are occupied by rods at saturation; let π̃(x_1, . . ., x_n; L) be the corresponding n-point correlation function of the ghost model in the infinite time limit.
• For quantities depending on L, we use f = o(g), f ≪ g, and g ≫ f interchangeably to denote that lim_{L→∞} f/g = 0; we use f = O(g) to denote that there exists a constant C ≥ 0 such that f ≤ Cg for sufficiently large L; we use f ∼ g to denote that lim_{L→∞} f/g = 1.

2.2. Classical 1D hard-core model at saturation. Consider the classical 1D hard-core model, where we place rods of length 2 on an interval of length L. It is easy to check the following recurrence relation on E[N(L)]:
$$\mathbb{E}[N(L)] = 1 + \frac{2}{L - 2} \int_0^{L-2} \mathbb{E}[N(x)]\, dx \quad \text{for } L > 2.$$
As noted in the introduction, in [16], Rényi established that this mean density of rods converges to the Rényi parking constant α ≈ 0.748. Dvoretzky and Robbins [7] gave a more refined estimate of the rate of convergence. In particular, they proved that E[N(L)] = αL/2 + (α − 1) + O(L^{−n}) for every fixed n ≥ 1, indicating very fast convergence of the expected parking density 2E[N(L)]/L to the approximate density α + (2α − 2)/L. Understanding the n-point correlation functions is a primary motivating question when analyzing statistical mechanics models. The occupancy probability π(x, L) is the chance that point x is covered by a rod at saturation; it has the basic symmetry property π(x, L) = π(L − x, L) for all x ∈ [0, L] and was studied along with other statistics of the correlation function in the one-dimensional hard-core model in [5].
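The mean-count recurrence can be iterated numerically. The sketch below assumes the standard form E[N(L)] = 1 + (2/(L−2)) ∫₀^{L−2} E[N(x)] dx for L > 2 (with E[N(L)] = 0 for L < 2), discretizes L with step h, and maintains a running trapezoidal integral:

```python
# Iterate E[N(L)] = 1 + (2/(L-2)) * \int_0^{L-2} E[N(x)] dx on a grid.
h, L_max = 0.01, 200.0
n = int(round(L_max / h))
shift = int(round(2.0 / h))      # number of grid steps spanning a rod
EN = [0.0] * (n + 1)             # EN[i] ~ E[N(i*h)]
cum = [0.0] * (n + 1)            # cum[i] ~ \int_0^{i*h} E[N(x)] dx
for i in range(1, n + 1):
    Li = i * h
    if i == shift:
        EN[i] = 0.5              # midpoint convention at the jump at L = 2
    elif i > shift:
        EN[i] = 1.0 + (2.0 / (Li - 2.0)) * cum[i - shift]
    cum[i] = cum[i - 1] + 0.5 * h * (EN[i - 1] + EN[i])

density = 2.0 * EN[n] / L_max
print(round(density, 3))  # near alpha + (2*alpha - 2)/L_max, about 0.745
```

Already at L = 200, the computed density matches the first-order approximation α + (2α − 2)/L to roughly three decimal places, illustrating the fast convergence noted above.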
One similar observation that arises from such analysis concerns the pair correlation function π(x_1, x_2; L), the probability that both x_1, x_2 are occupied at saturation. It will be convenient to think about this correlation when both x_1, x_2 are far away from the boundary, and thus we may imagine packing rods of length 2 on a circle of length L. We observe that this pair correlation function is identical to the cover probability on an interval of length L − 2, by cutting one of the rods in half and unwinding.
2.3. Ghost RSA. We also consider a finite analogue of the ghost RSA process of [19], which arises from the following Poisson point process that generalizes random sequential adsorption. We consider a process in R^d where the centers of candidate spheres of radius 1/2 arrive continuously for time t ∈ R_{≥0} according to a translationally invariant Poisson point process of density η = 1 per unit time (in other words, we expect to see one newly arrived sphere per unit volume and unit time).
Unfortunately, this process does not create a packing, as spheres can overlap. Thus, we must thin out the candidate spheres so that the remainder is a packing.

Definition 2.1. For κ ∈ [0, 1], the κ-packing of R^d is achieved by running the above described Poisson process, in which we only retain a candidate sphere at position r and time t if no other candidate sphere was within a unit distance of r in the time interval [(1 − κ)t, t]. In particular, κ = 0 corresponds to the random sequential addition (RSA) process.
For a packing of R d , the amount of space occupied by spheres can be quantified by the notions of packing density and number density.
Definition 2.3. Given a packing of R^d by spheres of radius 1/2, the packing density φ(t) is the fraction of space in R^d covered by the spheres. The number density is ρ(t) = φ(t)/V_d, where V_d is the volume of the unit sphere in R^d.
One of the beautiful properties of the ghost RSA process is that, in sharp contrast to most sphere packing problems (including classical RSA), it is possible to compute the packing density of this process in all dimensions d. This packing density is also relatively close to the best generic lower bound on the density of the densest sphere packings.
Theorem 2.4 (Torquato–Stillinger [19]). The ghost RSA process and the associated underlying Poisson point process enjoy the following properties:
• The expected number of candidate centers in a region of volume Ω at time t is Ωt.
• The probability that a region of volume Ω is empty of candidate centers is exp(−Ωt).
• In the infinite time limit, the packing density converges to φ(∞) = 2^{−d}.

The maximum gap size in the hard-core model
In this section, we prove Theorem 1.3. Consider the classical 1D hard-core model, where we randomly place rods of length 2 onto an interval of length L until we no longer can. Recall that G(L, r) denotes the number of gaps of length at least r at saturation. We seek a threshold r = r(L) such that as L → ∞, we are likely to find gaps smaller than r and unlikely to find any gap of size much greater than r. Towards this goal, we prove Theorem 1.3.
We first fix r and show that as L → ∞, E[G(L, r)] converges to a linear function c_r(L + 2), by studying a recurrence relation that E[G(L, r)] satisfies. Since G(L, r) is weakly decreasing with respect to r, so must be c_r as a function of r. By quantifying the rate of convergence, we obtain a lower bound on c_{2−δ} by a linear function of δ. This implies that E[G(L, r)] changes from o(1) to Ω(1) as 2 − r does. We obtain the desired concentration around this expected value via the second moment method.
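As an empirical illustration of this regime, the sketch below samples a saturated configuration via the standard recursive description (the first accepted rod's left endpoint is uniform on the feasible region; helper names are ours) and records the gap sizes:

```python
import random

def saturation_gaps(length, out):
    """Append the gap sizes of a saturated packing of rods of length 2."""
    if length < 2:
        if length > 0:
            out.append(length)   # this piece can never fit a rod: a gap
        return
    x = random.uniform(0, length - 2)
    saturation_gaps(x, out)
    saturation_gaps(length - 2 - x, out)

random.seed(2)
gaps = []
saturation_gaps(2000.0, gaps)
max_gap = max(gaps)
print(round(max_gap, 3))  # always below 2, and typically very close to 2
```

Even at L = 2000, runs like this routinely exhibit gaps within a few hundredths of the maximum possible value 2, as the second bullet of Theorem 1.3 predicts.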
We now prove that g_r converges to some c_r as L → ∞ by controlling the derivative g_r′(L).

Lemma 3.5. For every m ∈ N, there exists some N_m ∈ N such that for all r ∈ (0, 2), whenever L > 2m + 1 and g_r is differentiable at L, we have |g_r′(L)| ≤ N_m/L^m.

Proof. For all L at which g_r is differentiable, Observation 3.3 expresses g_r′(L) in terms of g_r. By Observation 3.4, 0 ≤ g_r(L) ≤ 1 for all L ≥ 0; therefore, for all L ≥ 3, we have |g_r′(L)| ≤ N_1/L for some N_1 > 0. Substituting this inequality again into the expression from Observation 3.3, we see that for all L ≥ 5, we have |g_r′(L)| ≤ N_2/L^2 for some N_2 > 0. Iterating gives the desired result.

Lemma 3.6. For every r ∈ (0, 2), there exists c_r ≥ 0 such that lim_{L→∞} g_r(L) = c_r. Moreover, the convergence is uniform in r.
Proof. Since g_r is differentiable almost everywhere, for all L_2 > L_1 > 5, we have |g_r(L_2) − g_r(L_1)| ≤ ∫_{L_1}^{L_2} |g_r′(L)| dL ≤ N_2/L_1. Consequently, for all ε > 0, there exists L_0 such that |g_r(L_2) − g_r(L_1)| < ε for all L_2 > L_1 > L_0. This implies that lim_{L→∞} g_r(L) = c_r for some c_r ≥ 0. Since N_2 does not depend on r, the convergence is uniform in r.
Lemma 3.5 actually implies a stronger result, since it holds for arbitrarily large m ∈ N; we have the following:

Corollary 3.7. For all m ∈ N, we have g_r(L) = c_r + O(1/L^m), where the convergence is uniform in r.
Given the limiting coefficient c_r, we wish to understand its magnitude as a function of r. To this end, we define the following auxiliary function.

Definition 3.8. For every r ∈ (0, 2], define the function h_r(L) with domain (2, ∞) as follows:

For all L > 4, f_r(L) is continuously differentiable with respect to r on (0, 2), with ∂f_r(L)/∂r = −h_r(L). Moreover, the left derivative of f_r(L) with respect to r at 2 equals −h_2(L). An analysis similar to Lemma 3.5 gives the following asymptotic expression for h_r.

Lemma 3.9. For every r ∈ (0, 2], there exists λ_r > 0 such that lim_{L→∞} h_r(L)/(L + 2) = λ_r, where the convergence is uniform in r. Moreover, if r_1 < r_2, then λ_{r_1} ≥ λ_{r_2}.
Since h_r(L) = −∂f_r(L)/∂r for all L > 4, one might imagine that −λ_r is the derivative of c_r with respect to r. We make this notion precise below.

Lemma 3.10. ∂c_r/∂r = −λ_r for all r ∈ (0, 2), and the left derivative of c_r at 2 equals −λ_2.
Proof. Consider {f_r(L)/(L + 2) : L > 4} and {−h_r(L)/(L + 2) : L > 4} as families of functions of r. Recall that for all L > 4, f_r(L) is continuously differentiable with respect to r on (0, 2), and that −h_r(L)/(L + 2) converges to −λ_r uniformly in r. Applying the differentiable limit theorem, we find that c_r is differentiable on (0, 2), with ∂c_r/∂r = −λ_r. Extending to r = 2 gives that the left derivative of c_r at 2 equals −λ_2.
The above will be enough to understand at what r the expected number of gaps of size r drops from Ω(1) to o(1).

Corollary 3.11. For every r ∈ [1, 2], we have c_r ≥ λ_2(2 − r). Moreover, if the image of γ lies in [1, 2], then we also have c_{γ(L)} ≥ λ_2(2 − γ(L)).

Proof. Since c_2 = 0, Lemma 3.10 implies that for all r ∈ (0, 2), we have c_r = ∫_r^2 λ_s ds. The desired result then follows by noting that λ_r is decreasing with respect to r.
By applying Markov's inequality, we obtain one side of the threshold from the above. To show the other direction, we will need a second moment result.

Definition 3.12. For r ∈ (0, 2) and L > 0, let V_r(L) := Var[G(L, r)] be the variance of the number of gaps of length at least r on an interval of length L in a uniformly random saturated configuration arising from the hard-core process.
Observe that V_r(L) ≤ E[G(L, r)^2]. For 0 ≤ L < r, V_r(L) = 0, and for r ≤ L < 2, E[G(L, r)^2] = 1. For L > 2, we have the following recursive inequality.

Observation 3.13. For all r ∈ (0, 2) and L > 2, V_r(L) satisfies a recurrence inequality obtained by conditioning on the first placed rod.

Proof. For L > 2, let X denote the left endpoint of the first placed rod, so that X follows the uniform distribution on [0, L − 2]. Let G_1(L, r) denote the number of gaps of size at least r to the left of the first placed rod, and G_2(L, r) denote the number of those to the right. Notice that G(L, r) = G_1(L, r) + G_2(L, r). Further, G_1(L, r) and G_2(L, r) are conditionally independent given X. Conditioning on X and expanding E[G(L, r)^2] = E[(G_1(L, r) + G_2(L, r))^2] yields the claimed inequality.

Repeating the previous argument on E[G(L, r)^2] instead of E[G(L, r)], we obtain the following result analogous to Corollary 3.11.

Lemma 3.14. There exists a constant µ_1 > 0 such that for any function γ : (0, ∞) → [1, 2], we have E[G(L, γ(L))^2] ≤ µ_1(2 − γ(L))(L + 2) + o(1).

Proof of Theorem 1.3. Let f_r(L) = E[G(L, r)], denoting the expected number of gaps of size at least r in a random saturated configuration. By Corollary 3.7, we have f_r(L) = c_r(L + 2) + o(1). Consequently, if r = 2 − o(1/L), then c_r = ∫_r^2 λ_s ds ≤ λ_1(2 − r) = o(1/L), so the expected number of gaps of size at least r is o(1), and thus by Markov's inequality, with high probability there are no gaps of size at least r.
Next, fix some a > 0. By Lemma 3.14, there exists ε > 0 (e.g. ε = 2aµ_1) such that V_{2−a/L}(L) ≤ E[G(L, 2 − a/L)^2] ≤ ε for L sufficiently large. Meanwhile, by Corollary 3.11, we have c_{2−a/L}(L + 2) ≥ λ_2 a(L + 2)/L ≥ λ_2 a, and thus for L sufficiently large, we have f_{2−a/L}(L) ≥ δ for some δ = δ(a) > 0. By the second moment method, we see that P(G(L, 2 − a/L) ≥ 1) ≥ f_{2−a/L}(L)^2 / E[G(L, 2 − a/L)^2] ≥ δ^2/ε > 0. Consequently, with positive probability, we have a gap of size at least 2 − a/L on an interval of length L.
Next, take ℓ = ℓ(L) = o(L) such that ℓ = ω(1). Consider some saturated configuration of a length L interval. For each i ∈ [ℓ − 1], we must have some rod with left endpoint x_i ∈ [iL/ℓ − 2, iL/ℓ]. Every possible choice of (x_1, . . ., x_{ℓ−1}) yields a division of the interval into subintervals of length between L/ℓ − 4 and L/ℓ, whose numbers of gaps of size at least r are mutually independent.
By the previous argument, we can choose constants a, δ = δ(a) > 0 such that for sufficiently large L, with probability at least δ, a saturated interval of length greater than L/(2ℓ) has a gap of size at least 2 − aℓ/L. Consequently, the probability that our length L interval has a gap of size at least 2 − aℓ/L is at least 1 − (1 − δ)^{ℓ−1}, which tends to 1 since ℓ → ∞ with L. By choosing ℓ = log L, we find that with high probability, an interval of length L has a gap of size at least 2 − a log L/L. For all ε > 0 and L sufficiently large, 2 − a log L/L > 2 − 1/L^{1−ε}, giving the desired result.

Maximum gaps in the one-dimensional ghost hard-core model
Here, we prove Theorem 1.5, giving a threshold for the maximum gap size in the infinite time limit of the ghost hard-core model.
Consider some iteration of the ghost hard-core model on a length L interval. Imagine that we pause at some time t ∈ N, and for some choice of ℓ = ℓ(L), consider the collection of gaps of length Θ(ℓ). For each such gap of size Θ(ℓ), we will attempt to compute the probability that this gap is retained as t → ∞, by an inductive argument on the lengths of the segments (including ghosts) that are adjacent to the gap. We make this idea more precise below.

Definition 4.1. For ℓ > 0 and k_1, k_2 ≥ 0 such that k_1 + k_2 ≤ ℓ, let P^{(ℓ)}(k_1, k_2) be the probability that a gap of the form [x, x + ℓ], in which [x, x + k_1] and [x + ℓ − k_2, x + ℓ] are already occupied by ghosts, is eventually retained.
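This retention probability can be estimated by Monte Carlo. The sketch below is our own concrete reading of the dynamics, not the paper's exact recurrence: a candidate rod landing entirely in the uncovered middle is accepted and destroys the gap; a candidate overlapping the ghost-covered flanks is rejected and extends the coverage; and the gap is retained once the uncovered middle has length at most 2.

```python
import random

def retained(ell, k1=0.0, k2=0.0):
    """One trial: does the gap [0, ell] survive? Candidate rod centers
    land uniformly in (-1, ell + 1); a rod [c - 1, c + 1] clear of both
    ghost-covered flanks is accepted and destroys the gap; otherwise it
    is rejected and its footprint joins the ghost coverage."""
    while ell - k1 - k2 > 2:          # a rod can still fit in the middle
        c = random.uniform(-1, ell + 1)
        if c - 1 >= k1 and c + 1 <= ell - k2:
            return False              # accepted candidate: gap destroyed
        if c - 1 < k1:                # overlaps left coverage: extend it
            k1 = max(k1, min(c + 1, ell))
        if c + 1 > ell - k2:          # overlaps right coverage: extend it
            k2 = max(k2, min(ell - (c - 1), ell))
    return True                       # middle is too small for any rod

random.seed(3)
trials = 20000
est = sum(retained(4.0) for _ in range(trials)) / trials
print(round(est, 3))   # an estimate of P^{(4)}(4) under this reading
```

Under this interpretation the estimate is bounded well away from 0 and 1, consistent with the lower bound of Lemma 4.3 below.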

Observation 4.2. We have a recurrence for P^{(ℓ)}(k_1, k_2), which shows in particular that this probability only depends on the sum k_1 + k_2; we therefore take s := ℓ − (k_1 + k_2) and define P^{(ℓ)}(s) := P^{(ℓ)}(k_1, k_2).

We first prove the second half of Theorem 1.5, namely that for all ε > 0, with high probability a gap of size at least (log L)^{1−ε} is retained in the ghost hard-core process. To do so, we first give a lower bound on P^{(ℓ)}(s).

Lemma 4.3. For all s ≥ 2, P^{(ℓ)}(s) ≥ s^{−s}.
Proof. The claim is immediate for s = 2, can be checked by hand for 2 ≤ s ≤ 4, and is verified by induction for s ≥ 4, using the recurrence of Observation 4.2 together with the convexity of x^{−x} for x > 1.

We imagine that L → ∞ and choose a parameter ℓ = ℓ(L) such that ℓ = ω(1) but ℓ^{ℓ+1} = o(L), so that in particular (1 − ℓ^{−ℓ})^{L/ℓ} = o(1). We will show that with positive probability, for some t ∈ N, there are Ω(L/ℓ) disjoint gaps of size at least ℓ at time t in the ghost hard-core process, and that as t → ∞, at least one of these is retained if ℓ is sufficiently small. To show the first claim, we apply the following theorem about a randomly broken interval.

Theorem 4.4 (Theorem 2.2 [9]). Suppose an interval of length 1 is broken uniformly at random into n subintervals with lengths S_1 ≤ · · · ≤ S_n. Then for every i ∈ [n] and r ∈ N, the moment E[S_i^r] admits an explicit expression in terms of X_1, . . ., X_n, independent exponential random variables with mean 1.
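Theorem 4.4 controls the order statistics of a uniformly broken interval via exponential random variables. As a quick illustration of the scale involved, recall the classical consequence that the expected maximum spacing among n uniform pieces is exactly H_n/n ≈ (log n)/n, which a short Monte Carlo check confirms:

```python
import random

def max_spacing(n):
    """Break [0, 1] at n - 1 uniform points; return the largest piece."""
    cuts = sorted(random.random() for _ in range(n - 1))
    pts = [0.0] + cuts + [1.0]
    return max(b - a for a, b in zip(pts, pts[1:]))

random.seed(4)
n, trials = 1000, 500
mean_max = sum(max_spacing(n) for _ in range(trials)) / trials
harmonic = sum(1.0 / j for j in range(1, n + 1)) / n   # H_n / n
print(round(mean_max / harmonic, 2))  # should be close to 1
```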
To simplify the calculations below, we will henceforth assume (without loss of generality) that parameters describing a number of rods placed or a time step of the hard-core process are integers.

Lemma 4.5. Fix arbitrary c ∈ (0, 1/e) and γ ∈ (0, 1). Suppose ℓ = ℓ(L) is a function of L such that ℓ^{ℓ+1} = o(L). For L sufficiently large, when L/ℓ candidate rods have been placed, with probability at least a constant p(c, γ) ∈ (0, 1) (which can be made arbitrarily close to 1 by appropriate choice of c and γ), there are at least cL/ℓ pairs of adjacent rod centers having distance at least ℓ.

Proof. We rescale and consider placing rods of length 2/L on an interval of length 1. In the rescaled setting, we count pairs of adjacent rod centers whose distance is at least ℓ/L.
Let M = L/ℓ. At the point where M candidate rods have been placed on the unit interval, let S_{M−γM} denote the γM-th largest distance between adjacent pairs of rod centers. By Theorem 4.4 and, for M sufficiently large, the Paley–Zygmund inequality, the desired bound follows.

Consider attempting to place a new rod at time t. This new rod can reduce the size of at most one existing gap. This implies that given two distinct, disjoint gaps (separated by at least one placed rod), the events that each of these gaps is retained as t → ∞ are independent. We are now ready to conclude the second half of Theorem 1.5.

Lemma 4.6. If ℓ = (log L)^{1−ε} for ε ∈ (0, 1), then with high probability, in the infinite time limit of the ghost hard-core model, there is at least one gap of size at least ℓ (when packing on an interval of length L, for L sufficiently large).
Proof. For any δ ∈ (0, 1), we can choose c ∈ (0, 1/e) and γ ∈ (0, 1) via Lemma 4.5 such that for L sufficiently large, with probability at least 1 − δ/2, there are at least cL/ℓ gaps of size at least ℓ at some point in the ghost hard-core process. Consequently, there are at least cL/(2ℓ) gaps of size within [ℓ, 2ℓ/c]. By Lemma 4.3, the probability that none of these gaps is retained is at most (1 − (2ℓ/c)^{−2ℓ/c})^{cL/(2ℓ)} = o(1). Hence with probability at least 1 − δ, there exists a gap of size at least ℓ in the infinite time limit. Sending δ → 0 gives the result.
To show the first half of Theorem 1.5, we give an upper bound on P^{(ℓ)}(s).

Lemma 4.7. There exists a constant C > 0 such that P^{(ℓ)}(s) ≤ Cs^{−s/3} for all s ≥ 0.
Proof. There exists a constant M > 2 such that the inductive step below applies for all s > M. Take some C > 0 such that P^{(ℓ)}(s) ≤ Cs^{−s/3} for all 0 ≤ s ≤ M. Then by induction, the bound extends to all s > M.

Lemma 4.8. For sufficiently large L, with high probability, the largest gap that remains as t → ∞ in the ghost hard-core process on an interval of length L has size less than log L.

Proof. Let ℓ = log L. At any time, there are at most L/ℓ distinct gaps of length at least ℓ. Applying Lemma 4.7 and a union bound, the probability that at least one of them is retained is at most (L/ℓ) · Cℓ^{−ℓ/3} = o(1).

Theorem 1.5 then follows by combining Lemmas 4.6 and 4.8.

Further directions
The higher dimensional analogues of RSA and parking are of particular importance. A primary motivating question is to understand the maximum density of a sphere packing, i.e., of a collection of congruent radius-one spheres in R^d that do not overlap. Determining the densest packings in arbitrary dimension is one of the most longstanding open problems in discrete geometry, recently resolved in R^8 and R^24 in the breakthrough works of [4, 20]. The only other dimensions in which optimal sphere packings are known are dimensions 1, 2, and 3. One can derive lower bounds on the density of an optimal sphere packing by studying packing procedures, such as the random sequential addition process of hard spheres in R^d. Further, RSA in more than one dimension is in and of itself a process of much physical interest, as in [2, 3, 6, 8, 11, 13, 15, 18].
While our methods for establishing the extreme values of gaps do not generalize to more than one dimension, Theorem 1.3 offers a tantalizing glimpse into the existence of relatively large gaps in saturated hard-core packings, with potential implications for the study of sphere packing densities. Further, the extremely fast convergence of the maximum gap size to a roughly logarithmic scale (Fig. 2) provides some evidence for the utility of relatively small-scale simulations.
Our work leaves several questions open; perhaps the most fundamental open problem is extending the results in this work to higher dimensions.

Question 5.1. Given a random packing that results from packing spheres of radius 1 in R^d via the d-dimensional ghost RSA process, how much more dense (on average) is the saturated packing that results from adding spheres to this existing packing via the traditional RSA process (i.e., ignoring the ghost constraint)?
It is also natural to wonder about other generalizations. For example, the following question concerning packing width-2 axis-aligned squares into an L × L square is a natural first extension (see Fig. 3 for a sample simulation).

Question 5.2. Given a random packing of width-2 axis-aligned squares into an L × L square via the hard-core process, what is the expected maximum width of an axis-aligned square that could be added to this packing without violating the hard-core constraint?

Unlike in one dimension, where the notion of a maximum gap is unambiguous, one can define a "gap" in many ways in higher dimensions. Focusing on the largest spheres that can fit into the negative space as our notion of gaps, the following question about gaps in the 2D ghost process is very natural. Two-dimensional packings have been extensively studied; physical theories such as the Asakura–Oosawa depletion interaction seek to explain the "effective interaction" between hard-sphere particles and highlight the rich behavior of particles in higher dimensions [1]. Most of the work in more than one dimension, however, remains at the heuristic level, lacking rigorous mathematical results about distributions of gaps or sizes (or even asymptotic packing densities and pair correlations) [12]. For example, the following question is open.

Question 5.3. Given a random packing of radius-1 spheres into an L × L square via the ghost hard-core process, what is the expected maximum radius of a sphere that could be added to this packing without violating the hard-core constraint?

Proof. This follows by a direct calculation; we sum the above to obtain the occupancy probabilities in the infinite time limit.
Remark A.6. The above holds symmetrically for x > L/2 by replacing x with L − x in the above expressions. We plot π(x, L) for small values of x in Figure 4, noting the boundary effect and the lack of L dependence (provided L ≥ 10). The probability of x being covered in the limit for given σ < 1 is given by a piecewise function, which we plot for x ∈ [0, 10] in Figure 5.

Figure 1. Rényi's parking constant. The blue curve denotes the expected parking density 2E[N(L)]/L, the orange curve denotes the Rényi parking constant α, and the green curve (depicted on the right) denotes the approximation α + (2α − 2)/L.

Figure 2. We simulate running the 1D ghost hard-core process to completion, packing rods of width 2 onto an interval of length L for L ranging from 50 to 1000. We estimate the maximum gap size empirically by running 200 trials at each length L. In (a), we plot the average maximum gap vs. L; in (b), we plot the average maximum gap vs. log(L) and include a best fit line that highlights the roughly logarithmic nature of the maximum gap, even at small scale.

Figure 3. Pictured are three "infinite time" instantiations of packing axis-aligned squares of side length 2 inside a larger square of side length L = 20. In (a), we simulate the classical 2D hard-core model; this packing has 54 squares. In (b), we add the additional ghost constraint that squares cannot overlap with any previous candidate square; this packing has 23 squares. In (c), we begin with a random ghost packing generated via the same process as (b); after reaching the infinite time limit, we extend it randomly to a classically saturated packing by removing the ghost constraint. This example placed 27 squares during the ghost process, and 49 total squares.