Self-overlap correction simplifies the Parisi formula for vector spins

We propose a simpler approach to identifying the limit of free energy in a vector spin glass model by adding a self-overlap correction to the Hamiltonian. This avoids constraining the self-overlap and allows us to identify the limit with the classical Parisi formula, similar to the proof for scalar models with Ising spins. For the upper bound, the correction cancels self-overlap terms in Guerra's interpolation. For the lower bound, we add an extra perturbation term to make the self-overlap concentrate, a technique already used in [Probab. Math. Phys., 2(2):281-339, Ann. Inst. Henri Poincaré Probab. Stat., 59(3):1143-1182] to ensure the Ghirlanda-Guerra identities. We then remove the correction using a Hamilton-Jacobi equation technique, which yields a formula similar to that in [Ann. Probab., 46(2):865-896]. Additionally, we sketch a direct proof of the main result in [Electron. J. Probab. 25(23):1-17].


Introduction
In [27,28], the limit of the free energy of mean-field vector spin glasses has been identified. One key insight is to consider the free energy with constrained self-overlap. Drawing inspiration from the Hamilton-Jacobi equation approach to spin glasses [19,21,18,20], we present a simpler alternative approach. Specifically, we introduce a self-overlap correction to the Hamiltonian to simplify the analysis.
First, we employ the same argument used in [24] for the scalar model with Ising spins, along with Panchenko's synchronization technique, to identify the limit of the corrected free energy with the classical form of the Parisi formula (see Theorem 1.1). Then, we remove the correction using a simple Hamilton-Jacobi equation technique (see Theorem 1.2). Our approach also simplifies the analysis of the scalar model with soft spins [22] as a special case.
Setting and main results. Fix D ∈ ℕ and let P_1 be a probability measure on R^D. We assume that P_1 is supported on the unit ball {v ∈ R^D : |v| ⩽ 1}. For each N ∈ ℕ, we define P_N = (P_1)^⊗N and denote the R^{D×N}-valued spin configuration sampled from P_N by σ = (σ_{i,j})_{i∈{1,...,D}, j∈{1,...,N}}.
Throughout, we denote by a^⊺ the transpose of a matrix or vector a. For two matrices or vectors a and b of the same size, we write a • b = Σ_{i,j} a_{i,j} b_{i,j} = tr(ab^⊺) for the Frobenius inner product between them, which naturally induces a norm |a| = √(a • a). We denote by S^D ⊆ R^{D×D} the set of symmetric matrices and by S^D_+ ⊆ S^D the set of symmetric positive semi-definite matrices. We assume that, for each N, there exists a real-valued centered Gaussian process (H_N(σ))_{σ∈R^{D×N}} with covariance E[H_N(σ)H_N(σ')] = N ξ(σσ'^⊺/N) for a locally Lipschitz function ξ : R^{D×D} → R. Examples of ξ and H_N(σ) satisfying the above requirements are presented in [20, Section 6].
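The identity a • b = tr(ab^⊺) and the induced Frobenius norm can be checked directly; a minimal pure-Python sketch (the helper names are illustrative, not from the paper):

```python
# Frobenius inner product a . b = sum_{i,j} a_ij b_ij = tr(a b^T), pure-Python sketch.
def frob(a, b):
    return sum(a[i][j] * b[i][j] for i in range(len(a)) for j in range(len(a[0])))

def transpose(m):
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
# entrywise sum: 1*5 + 2*6 + 3*7 + 4*8 = 70
assert frob(a, b) == 70.0
# agrees with tr(a b^T)
assert trace(matmul(a, transpose(b))) == frob(a, b)
```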
In particular, the mixed p-spin model with vector spins considered in [28] is covered.
The limit of the free energy as N → ∞ has been identified with a variational formula known as the Parisi formula in many settings. We focus on the modified free energy F_N with self-overlap correction defined in (1.2). The additional term (1/2)N ξ(σσ^⊺/N) is one half of the variance of H_N(σ). We view this as a correction of the self-overlap (1/N)σσ^⊺ applied to the Hamiltonian. In Guerra's replica symmetry breaking interpolation, this correction term leads to the cancellation of terms involving the self-overlap. This correction term already appeared in Mourrat's work [19,18,20].
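In view of the description above (the correction is exactly half the variance of H_N(σ)), the corrected free energy (1.2) takes, up to conventions, the form:

```latex
F_N = \frac{1}{N}\,\mathbb{E}\log \int_{\mathbb{R}^{D\times N}}
\exp\!\Big( H_N(\sigma) - \frac{N}{2}\,\xi\Big(\frac{\sigma\sigma^\intercal}{N}\Big) \Big)\,\mathrm{d}P_N(\sigma).
```

Subtracting exactly half the variance is what makes the self-overlap terms cancel in Guerra's interpolation in Section 2.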
We describe the Parisi functional. For convenience, we use continuous versions of the Ruelle probability cascade (RPC) [31]. Let R be the RPC with overlap distributed uniformly on [0, 1] (see [24, Theorem 2.17]). More precisely, R is a random probability measure on the unit sphere of a separable Hilbert space, with inner product denoted by ∧, such that α¹ ∧ α² is distributed uniformly on [0, 1] under ER^⊗2 and (α^l ∧ α^{l'})_{l,l'∈ℕ} satisfies the Ghirlanda-Guerra identities, where (α^l)_{l∈ℕ} are i.i.d. samples from R^⊗∞. Let Π be the collection of left-continuous increasing paths π : [0, 1] → S^D_+. For each π ∈ Π, one associates an R^D-valued Gaussian process (w^π(α))_α defined conditionally on R; we refer to [8, Section 4 and Remark 4.9] for the construction and measurability of this process. Notice that ξ(a) = ξ(a^⊺) implies that ∇ξ(a) ∈ S^D at a ∈ S^D, and (1.1) (together with (2.1)) implies that ∇ξ(a) ∈ S^D_+ at a ∈ S^D_+. Hence, ∇ξ ∘ π ∈ Π for every π ∈ Π. We also define θ : R^{D×D} → R in (1.4). Then, we define P : Π × S^D → R in (1.5). Notice the correction (1/2)∇ξ(π(1)) • σσ^⊺, which is exactly half of the variance of w^{∇ξ∘π}(α) • σ. We set P(π) = P(π, 0), which has the form of the classical Parisi functional.
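The definition of θ in (1.4) follows the standard convention of Guerra-type interpolations (as in the scalar case [24]); up to conventions it reads:

```latex
\theta(a) \;=\; a \cdot \nabla\xi(a) - \xi(a), \qquad a \in \mathbb{R}^{D\times D}.
```

With this choice, when ξ is convex, the first-order condition ξ(a) ⩾ ξ(b) + ∇ξ(b) • (a − b) rearranges to ξ(a) − ∇ξ(b) • a + θ(b) ⩾ 0, which is the inequality driving the upper bound in Section 2.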
We can remove the correction in F_N and obtain the limit of (1.2) following the procedure in [21, Section 5]. Let ξ* be the convex conjugate of ξ on S^D_+, defined in (1.6) by ξ*(y) = sup_{a∈S^D_+} (a • y − ξ(a)). The same argument as for Theorem 1.1 can be used to treat models enriched by an external field given by an RPC. We describe the enriched model and sketch the proof of the corresponding result, Theorem 6.1, in Section 6.
The convexity of ξ is used in the proof of the upper bound via Guerra's interpolation. To weaken this assumption to convexity over S^D_+, one needs Talagrand's positivity principle, which is not available in general vector spin models. Alternatively, an upper bound can be obtained through the Hamilton-Jacobi equation approach [18,20,9,10]. Based on this, the statements in Theorems 1.1 and 1.2 are proved in [8, Corollary 8.3 and Proposition 8.4] under the weaker assumption that ξ is convex over S^D_+.
Remark 1.3. It will be evident from the proof of the lower bound in Section 4 that there exists a minimizer π of inf_{π∈Π} P(π) satisfying π(1) ∈ K = conv{ττ^⊺ : τ ∈ supp P_1}. Hence, we can replace inf_{π∈Π} in both Theorems 1.1 and 1.2 by inf_{π∈Π: π(1)∈K}.
Related works. The classical Parisi formula for the limit of the free energy in the Sherrington-Kirkpatrick (SK) model (D = 1, P_1 uniform on {±1}, and ξ quadratic) was proposed by Parisi in [29,30] and later proven by Talagrand in [33], building on the upper bound by Guerra in [16]. Panchenko later extended the formula to SK models with soft spins [22], scalar mixed p-spin models with Ising spins [25,24], multi-species models [26], and mixed p-spin models with vector spins [27,28]. Recently, [5] simplified the Parisi formula for the balanced Potts spin glass. After Mourrat's interpretation of the Parisi formula as the Hopf-Lax formula for a Hamilton-Jacobi equation [19], the formula was extended to enriched models [21]. For spherical spins, the Parisi formula was proven for the SK model [32], the mixed p-spin model [12], and the multi-species model [4]. Since we have assumed that P_N is a product measure, the most relevant works are [22,27,28,21].
Let us explain the effect of the self-overlap correction. In Guerra's interpolation for the upper bound, there are terms involving the self-overlap, which have the wrong sign. If the self-overlap is constant (as in the Ising case), these terms cancel each other. Otherwise, to tackle this issue, [27,28] considered the free energy with the self-overlap constrained to a small ball, and controlled the original free energy by these constrained versions with varying constraints. In Section 2, we demonstrate that, after correcting F_N, the self-overlap terms in the interpolation computation are eliminated (as shown in (2.4)). Consequently, we can establish the upper bound in the same way as in the Ising case.
In the cavity computation in [27,28] for the lower bound, the constraint on the self-overlap disrupts the product structure of P_N but enables the derivation of the Ghirlanda-Guerra identities in the limit. Here, avoiding the constraint, we preserve the product measure structure in the cavity computation and proceed in the same way as in the Ising case (Section 4). The only issue is that the usual perturbation term added to the Hamiltonian does not ensure the Ghirlanda-Guerra identities, since it only controls non-self-overlaps. To resolve this, we include an additional perturbation (the second sum in (3.2)) that forces the self-overlap to concentrate. This technique has been previously utilized in [18,20].
Since the overlap is matrix-valued, in the proof of the lower bound, we also need the synchronization technique developed by Panchenko based on the ultrametricity of the overlap proved in [23].
To recover the limit of the original free energy given by (1.2), we add an external field parameterized by (t, x) ∈ [0, ∞) × S^D to F_N, which yields F_N(t, x) (see equation (5.1)). The external field takes the form tNξ(σσ^⊺/N) + x • σσ^⊺. In Section 5, we will show that F_N(t, x) asymptotically satisfies a simple Hamilton-Jacobi equation. Here, F_N(1/2, 0) corresponds to the original free energy, and the limit of F_N(0, x) is given by Theorem 1.1, with P_N tilted by e^{x•σσ^⊺}. Therefore, we can interpolate along the equation to deduce Theorem 1.2 from Theorem 1.1.
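Writing R = σσ^⊺/N, the enriched free energy and the limiting equation can be sketched as follows (a reconstruction consistent with the description above; the sign conventions are fixed by requiring that the correction and the external field cancel at t = 1/2, x = 0, recovering the original free energy):

```latex
F_N(t,x) = \frac{1}{N}\,\mathbb{E}\log \int
\exp\!\Big( H_N(\sigma) + \big(t - \tfrac12\big) N\,\xi(R) + x \cdot \sigma\sigma^\intercal \Big)\,\mathrm{d}P_N(\sigma),
\qquad
\partial_t f - \xi(\nabla f) = 0 \quad \text{on } (0,\infty)\times S^D .
```

Heuristically, ∂_t F_N = E⟨ξ(R)⟩ and ∇F_N = E⟨R⟩, so the equation holds asymptotically once R concentrates, since then E⟨ξ(R)⟩ ≈ ξ(E⟨R⟩).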
It should be noted that the Hamilton-Jacobi equation mentioned here is a finite-dimensional one, rather than the infinite-dimensional one in [19,18,20,9] associated with the enriched model discussed in Section 6. The former is similar to the one related to the Curie-Weiss model described in [17]. This idea for removing the correction first appeared in [21, Section 5].
In Section 6, we consider the enriched model that was previously investigated in [21]. In that work, the limit of the free energy was determined by first establishing a Parisi formula similar to the ones in [27,28] without correction. The formula was then transformed into the predicted form in [19]. Here, we will sketch a direct proof using the argument outlined above.
Our simpler approach is not limited to product measures P_N, making it useful in more general cases. However, our assumption on P_N simplifies the computation and enables a clearer presentation of the argument.
Comments on variational formulae. The formula presented in Theorem 1.1 has the classical form of the Parisi formula for Ising spins, as seen in [24, Section 3.1]. However, the functional P(π) differs from the classical Parisi formula by a constant, which is a consequence of σσ^⊺ = N in the Ising case.
In Theorem 1.2, the two representations are obtained by solving the Hamilton-Jacobi equation mentioned earlier. The first representation is derived using the Hopf-Lax formula, which is possible due to the convexity of ξ. The second representation comes from the Hopf formula, taking advantage of the convexity of y ↦ inf_{π∈Π} P(π, y). One can verify the equivalence between the two using the convexity of the two functions and the Fenchel-Moreau theorem.
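For orientation, on the cone S^D_+ these two representations take the following shape (a sketch following the Hopf-Lax and Hopf formulas for equations on cones in [10]; the precise statements, with their domains and constants, are those of Theorem 1.2):

```latex
f(t,x) = \sup_{y \in S^D_+} \big( \psi(x + t\,y) - t\,\xi^*(y) \big),
\qquad
f(t,x) = \sup_{z \in S^D_+} \inf_{y \in S^D_+} \big( \psi(y) + z \cdot (x - y) + t\,\xi(z) \big),
```

where ψ is the initial condition. The equivalence of the two is a Fenchel-Moreau duality statement, as noted above.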
The first representation in Theorem 1.2 is a generalization of [21, Corollary 1.3] to D ⩾ 1. The second representation is very close to the formula obtained by Panchenko in [28]. The functional P in [28, (31)] can be rewritten in our notation subject to the restriction π(1) = z (here y, z correspond to λ, D in [28]). We set Π(z) = {π ∈ Π : π(1) = z} and let D be the convex hull of {σσ^⊺ : σ ∈ supp P_1}. Notice that D ⊆ S^D_+. By [28, Theorem 1], the left-hand side of (1.7) admits a representation similar to our second one in (1.7). The equivalence of the two formulae will be directly verified in [7], where more information on the self-overlap is needed.
Organization of the paper. We establish Theorem 1.1 by dividing the proof into three parts: the upper bound (Section 2), the perturbation term (Section 3), and the lower bound (Section 4). Theorem 1.2, which removes the correction term, is proved in Section 5. We also provide a brief overview of the enriched model and sketch the proof of the corresponding result in Section 6.
Acknowledgements. The author would like to thank Jean-Christophe Mourrat for helpful discussions. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 757296).

Upper bound via Guerra's interpolation
We prove the upper bound in Theorem 1.1 using Guerra's interpolation in [16], where the convexity of ξ is needed. We often need the basic matrix fact recorded in (2.1).
Proof. Recall the definition of θ in (1.4). Notice that (1.1) and (2.1) imply the monotonicity of θ for a, b ∈ S^D_+ satisfying a ⪰ b. Also, due to ξ(0) = 0, we have θ(0) = 0. Hence, fixing any π ∈ Π, we have that θ ∘ π ⩾ 0. We write σ = (σ_1, ..., σ_N) as a tuple of its column vectors. For r ∈ [0, 1], we define the interpolating Hamiltonian H^r_N(σ, α) and the associated interpolating free energy φ(r). Denoting the Gibbs measure with Hamiltonian H^r_N(σ, α) by ⟨•⟩_r, we can compute d/dr φ(r) using the Gaussian integration by parts (see [24]). The self-overlap corrections have canceled all terms involving R_{1,1} and Q_{1,1}, which is the key to the proof. The convexity of ξ ensures ξ(a) − ∇ξ(b) • a + θ(b) ⩾ 0 for any a, b ∈ R^{D×D}. Therefore, we have d/dr φ(r) ⩽ 0 and thus φ(1) ⩽ φ(0).
Let us rewrite the right-hand side of (1.5) as P_0(π) + P_1(π). We have φ(0) = P_0(π). For φ(1), the interpolation at r = 1 produces F_N together with a second term, which we want to identify with −P_1(π). Assume first that π is a step function. Using the standard computation of the RPC in the proof of [24, Lemma 3.1], we can verify this identification; hence, we have φ(1) = F_N − P_1(π). The case where π is continuous can be treated by the standard approximation. In conclusion, φ(1) ⩽ φ(0) implies that F_N ⩽ P_0(π) + P_1(π) = P(π). Since π is arbitrary, we obtain the desired upper bound. □

Perturbation and the Ghirlanda-Guerra identities
We directly work with the Hamiltonian appearing in the cavity computation in Section 4. For each N, let (H̃_N(σ))_{σ∈R^{D×N}} be a centered real-valued Gaussian process with the covariance given in (3.1). We describe the perturbation term that will ensure the Ghirlanda-Guerra identities on average in the limit. Let (a_n)_{n∈ℕ} be an enumeration of {q ∈ S^D_+ : |q| ⩽ 1} ∩ Q^{D×D}. For each N ∈ ℕ and h ∈ ℕ³, let (H^h_N(σ))_{σ∈R^{D×N}} be an independent Gaussian process with the covariance built from the a_n, where ⊙ is the Schur product of matrices (i.e. (a ⊙ b)_{i,j} = a_{i,j} b_{i,j}). The construction of H^h_N(σ) is described at the beginning of [28, Section 5] and omitted here. Note that H^h_N(σ) has the same order as H̃_N(σ). For each h, fix a constant c_h > 0 satisfying the required normalization. Let {e_i} be an orthonormal basis of S^D and set g_i(σ) = e_i • σσ^⊺ for i ∈ {1, ..., D(D+1)/2}. For a perturbation parameter x, we define the perturbation Hamiltonian in (3.2). The second sum in (3.2) will ensure the concentration of the self-overlap, allowing us to obtain the Ghirlanda-Guerra identities. Let E_x be the expectation under which x is an i.i.d. sequence of uniform random variables over [1, 2]. Let ⟨•⟩_x be the Gibbs measure of σ with the Hamiltonian in (3.3). In the following, E only integrates the Gaussian randomness in H̃_N(σ) and H^{pert,x}_N(σ).
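The Schur product ⊙ is the entrywise product of matrices; by the classical Schur product theorem, the entrywise product of positive semi-definite matrices is again positive semi-definite, which is consistent with expressions built from ⊙ serving as valid covariances. A minimal pure-Python illustration in the 2×2 case (helper names are illustrative):

```python
# Schur (Hadamard) product of matrices, with a 2x2 check of the Schur
# product theorem: the entrywise product of PSD matrices is PSD.
def schur(a, b):
    return [[a[i][j] * b[i][j] for j in range(len(a[0]))] for i in range(len(a))]

def is_psd_2x2(m):
    # a symmetric 2x2 matrix is PSD iff both diagonal entries and the determinant are >= 0
    return m[0][0] >= 0 and m[1][1] >= 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] >= 0

a = [[2.0, 1.0], [1.0, 2.0]]    # PSD: eigenvalues 1 and 3
b = [[3.0, -1.0], [-1.0, 1.0]]  # PSD: determinant 2, trace 4
assert is_psd_2x2(a) and is_psd_2x2(b)
assert schur(a, b) == [[6.0, -1.0], [-1.0, 2.0]]
assert is_psd_2x2(schur(a, b))
```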
Proposition 3.1. The following holds: (1) lim_{N→∞} E_x E⟨|R_{1,1} − E⟨R_{1,1}⟩_x|⟩_x = 0; (2) lim_{N→∞} E_x ∆_x(f, n, h) = 0 for every integer n ⩾ 2, every h ∈ ℕ³, and every bounded measurable function f of R^{⩽n}, where ∆_x(f, n, h) is defined in (3.4). The technique of adding g_i(σ) to enforce the concentration of the self-overlap already appeared in [18,20].
We first prove the concentration of the self-overlap and then the second part. In the first part, i, j are always indices in {1, ..., D(D+1)/2}.
Lemma 3.2. There is a constant C > 0 such that, for every i, the Gibbs fluctuations of g_i(σ) are controlled uniformly in (x_h)_{h∈ℕ³} and (x_j)_{j≠i}.
Due to our assumption on the support of P_N, we have |g_i(σ)| ⩽ N.
By the standard concentration argument ([18, (4.15)]), we have a concentration bound with some constant C. It is clear from our computation that φ and ϕ are convex. By basic properties of convex functions stated in [24, Lemma 3.2], together with (3.5) and the mean value theorem, we can compare the derivatives φ′ and ϕ′. Inserting the formulae of φ′ and ϕ′ and setting y_i = N^{−7/32}, we get the desired result. □
Proof of (1). The above lemma implies that, uniformly in (x_h)_{h∈ℕ³} and (x_j)_{j≠i}, the fluctuation of g_i is controlled for every i. Since {e_i} is an orthonormal basis, we can deduce Proposition 3.1 (1). □
Proof of (2). We can proceed in the standard way as in [24, Theorem 3.3] (with the parameters h, N^{7/16}, and N playing the corresponding roles). Let f = f(R^{⩽n}) be bounded and measurable. Without loss of generality, we assume ∥f∥_∞ ⩽ 1.
The Gaussian integration by parts gives the required identity, and the above three displays can be combined. Since |R_{1,1}| ⩽ 1 holds due to our assumption, there is a constant controlling the resulting error term. By Proposition 3.1 (1), the last term averaged by E_x vanishes as N → ∞. Inserting this into the previous display, we get Proposition 3.1 (2), which completes the proof. □

Lower bound via the Aizenman-Sims-Starr scheme
We show the lower bound in Theorem 1.1 using the Aizenman-Sims-Starr scheme from [1]. This part does not need the convexity of ξ. Recall the perturbation defined in (3.2). For each perturbation parameter x and each N, we define the perturbed version of F_N. Then, we describe the Gibbs measure that will appear in the cavity computation. Recall the process H̃_N(σ) in (3.1) and the Gibbs measure ⟨•⟩_x with the Hamiltonian in (3.3). We also need two more independent Gaussian processes. Let (Z(σ))_{σ∈R^{D×N}} and (Y(σ))_{σ∈R^{D×N}} be centered R^D-valued and real-valued Gaussian processes with covariances EZ(σ)Z(σ')^⊺ = ∇ξ(σσ'^⊺/N) and EY(σ)Y(σ') = θ(σσ'^⊺/N). We write the spins in column coordinates and, using the standard computation described in the proof of [24, Theorem 3.6], we can obtain the Aizenman-Sims-Starr representation stated below.
Lemma 4.2. Uniformly in x, the free energy increment from N spins to N + 1 spins admits the Aizenman-Sims-Starr representation.
We also need a result on approximating the Parisi-type functional using finitely many entries of the overlap array. The following is a straightforward adaptation of [24, Theorem 1.3].
Lemma 4.3. We consider the following setting:
• Let Γ be a probability measure on a separable Hilbert space H and let R : H × H → R^{D×D} be a measurable function satisfying |R| ⩽ 1;
• Assume that there are centered R^D-valued and real-valued Gaussian processes (Z(ρ))_ρ and (Y(ρ))_ρ, with covariances given in terms of ∇ξ and θ evaluated at R, and define the associated functional, where E integrates the Gaussian randomness in Z(ρ) and Y(ρ).
Then, for every ε > 0, there is a bounded continuous function F_ε : (R^{D×D})^{n×n} → R for some n ∈ ℕ such that the approximation holds uniformly for all (Γ, R) as described, where (ρ^l)_{l∈ℕ} is an i.i.d. sequence under ⟨•⟩_Γ.
The pair (Γ, R) defines an abstract overlap structure that is relevant here.
It is straightforward to check the validity of the first identity. To see the second, one can use the computation in (2.5) with N = 1 to identify the second term in EP(R, R) with the second term in P(π).
For a sequence of random arrays (A_n)_{n∈ℕ}, where A_n = (A_n^{l,l'})_{l,l'∈ℕ}, we say that A_n converges weakly to some random array A = (A^{l,l'})_{l,l'∈ℕ}, and write A_n → A weakly, if, for every k ∈ ℕ, A_n^{⩽k} converges in distribution to A^{⩽k} as n → ∞. For any real-valued random variable X, its quantile function is the left-continuous increasing function q satisfying E g(X) = ∫_0^1 g(q(u)) du for every bounded measurable function g. The quantile function can be obtained by taking the left-continuous inverse of the probability distribution function, and vice versa.
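The left-continuous inverse described here can be sketched concretely for an empirical distribution (an illustrative helper, not from the paper; the paper applies the notion to overlap distributions):

```python
# Left-continuous quantile function q(u) = inf{t : F(t) >= u} of the
# empirical distribution of a finite sample.
import math

def quantile(samples, u):
    xs = sorted(samples)
    n = len(xs)
    assert 0.0 < u <= 1.0
    # ceil(u*n) - 1 gives the smallest order statistic whose CDF value is >= u
    return xs[math.ceil(u * n) - 1]

xs = [4.0, 1.0, 3.0, 2.0]
assert quantile(xs, 0.25) == 1.0        # F(1) = 1/4 >= 1/4
assert quantile(xs, 0.5) == 2.0
assert quantile(xs, 0.5 + 1e-9) == 3.0  # left-continuous: the jump occurs just after 1/2
assert quantile(xs, 1.0) == 4.0
```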
Proof of Proposition 4.1. Due to the presence of the factor N^{−1/16}, we can verify that the perturbation does not affect the limit of the free energy. Provided the needed convergence holds for a sequence of real numbers (r_N)_{N∈ℕ}, we can use Lemma 4.2 to bound the liminf of the free energy from below. We want to choose a sequence of perturbation parameters. Let ((f_j, n_j, h_j))_{j∈ℕ} be an enumeration of the triples indexing the perturbation, and we define ∆_N(x) accordingly, where ∆_x(f, n, h) is defined in (3.4). By Proposition 3.1, we have that lim_{N→∞} E_x ∆_N(x) = 0. Using the same argument as in the proof of [24, Lemma 3.3], we can find a sequence (x_N)_{N∈ℕ} such that lim_{N→∞} ∆_N(x_N) = 0. Hence, it suffices to evaluate lim inf_{N→∞} A_N(x_N).
Choose an increasing sequence of integers along which the quantities of interest converge. Let us make the dependence of R on N explicit by writing R_N. Passing to a subsequence, we may assume that R_N converges weakly to some array R^∞. As a result of (4.2), R^∞ satisfies the Ghirlanda-Guerra identities. By the synchronization result [28, Theorem 4], there is a Lipschitz function Ψ : [0, ∞) → S^D_+, satisfying Ψ(s) ⪰ Ψ(s') for all s ⩾ s', that synchronizes the entries of R^∞ with their traces, where (α^l)_{l∈ℕ} is sampled from R^⊗∞. Then, we want to obtain a representation of the entire array R^∞. First, we look for an upper bound for ζ. Note that (4.6) implies ζ(1) ⩽ tr(q) a.s. Setting r = tr(q), we get ζ(1) ⩽ r. Then, (4.6) together with (4.5) also implies Ψ(r) = q (4.8). Hence, we can represent R^∞ via R: the distribution of the random array R^∞ is induced by E⟨•⟩_R, where ⟨•⟩_R = R^⊗∞ and E integrates the randomness in R.
Due to the possibility that r > ζ(1), in general, R^∞ is not a pure RPC. Hence, we introduce an approximation of R^∞ by RPCs. Choose a sequence (ζ_m)_{m∈ℕ} of left-continuous increasing step functions from [0, 1] to [0, r] that converges to ζ as m → ∞ in L¹([0, 1]). For each m, allowed by (4.7), we can modify ζ_m to ensure that ζ_m(s) = r for s in a small neighborhood of 1.
For each m, define the corresponding approximating array, where (α^l)_{l∈ℕ} is again sampled from R^⊗∞. By the convergence of ζ_m and the continuity of RPCs with respect to the overlap distribution ([24, Theorem 2.17]), we can deduce the convergence of these approximations. Using these, (4.4), and (4.10), we can find sufficiently large k and m such that the resulting error is at most ε, and we obtain from the above display a lower bound on lim inf_{N→∞} A_N(x_N) up to an error ε. The desired lower bound follows by sending ε → 0. □

Removing the correction term
We remove the correction and prove Theorem 1.2 using the Hamilton-Jacobi technique in [21, Section 5], which was set in the case D = 1. For D ⩾ 1, we consider the equation on the cone S^D_+ of positive semi-definite matrices and thus need results from [10].
Recall that we have endowed S^D with the Frobenius inner product, which induces the natural topology on S^D. For N ∈ ℕ and (t, x) ∈ [0, ∞) × S^D, we define F_N(t, x) as in (5.1). Since the computations in this section only involve the self-overlap, we set R = (1/N)σσ^⊺. We denote the derivatives in t and x by ∂_t and ∇ respectively. Here, ∇ is defined with respect to the Frobenius inner product on S^D. Recall that we have chosen an orthonormal basis {e_i}_{i=1}^{D(D+1)/2} of S^D, and we define the Laplace operator ∆ = Σ_{i=1}^{D(D+1)/2} (e_i • ∇)². We often write F_N = F_N(t, x) for simplicity.
Lemma 5.1. Assume that ξ is convex on S^D_+. The following holds:
• for each N, F_N is Lipschitz, convex, and increasing in the sense that F_N(t, x) ⩽ F_N(t', x') whenever t ⩽ t' and x ⪯ x';
• there is a constant C > 0 such that the two-sided bound (5.2) holds everywhere on (0, ∞) × S^D and for every N.
The convexity of ξ is only needed for the lower bound in (5.2). Notice that ξ is only required to be convex on S^D_+ instead of on the entire space R^{D×D}.
Proof. For (t, x) ∈ (0, ∞) × S^D, we can compute ∂_t F_N in (5.3) and, for any (s, y), the directional derivatives of F_N in (5.4). Since |R| ⩽ 1 a.s. and ξ is locally Lipschitz, we have that F_N is Lipschitz. Since R ∈ S^D_+, F_N is increasing. Recognizing that the second-order derivative is a variance, we deduce the convexity of F_N. Setting s = 0 and y = e_i for each i in (5.4) and summing, we obtain a control of ∆F_N, which together with (5.3) and the local Lipschitzness of ξ implies the upper bound in (5.2). The lower bound in (5.2) follows from (5.3), the convexity of ξ on S^D_+, the observation R ∈ S^D_+, and Jensen's inequality. □
From (5.2), F_N is expected to be a viscous approximation of the solution f to the equation (5.5). We make sense of solutions to (5.5) in the viscosity sense: f is a viscosity subsolution (respectively, supersolution) if, whenever there is a smooth ϕ : (0, ∞) × O → R such that f − ϕ achieves a local maximum (respectively, minimum) at some (t, x) ∈ (0, ∞) × O, we have (∂_t ϕ − ξ(∇ϕ))(t, x) ⩽ 0 (respectively, ⩾ 0). If f is both a viscosity subsolution and supersolution, we call f a viscosity solution.
Since S^D is isometric to R^{D(D+1)/2} via the orthonormal basis {e_i}, the classical theory of viscosity solutions is available for (5.5). For instance, due to the assumption that ξ is locally Lipschitz, [13, Theorem 1 in Section 10.2] ensures the uniqueness of the solution to (5.5) given an initial condition.
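The identification of S^D with R^{D(D+1)/2} can be made concrete for D = 2 (an illustrative sketch; this particular orthonormal basis is one of many valid choices):

```python
# S^2 (symmetric 2x2 matrices) is isometric to R^3 via an orthonormal basis
# for the Frobenius inner product; pure-Python sketch with D = 2.
import math

s = 1.0 / math.sqrt(2.0)
basis = [
    [[1.0, 0.0], [0.0, 0.0]],
    [[0.0, 0.0], [0.0, 1.0]],
    [[0.0, s], [s, 0.0]],  # off-diagonal direction, normalized so |e_3| = 1
]

def frob(a, b):
    return sum(a[i][j] * b[i][j] for i in range(2) for j in range(2))

def coords(a):
    # coordinates of a symmetric matrix in the orthonormal basis
    return [frob(a, e) for e in basis]

a = [[2.0, 3.0], [3.0, 5.0]]
b = [[1.0, -1.0], [-1.0, 4.0]]
# the basis is orthonormal for the Frobenius inner product
for i in range(3):
    for j in range(3):
        assert abs(frob(basis[i], basis[j]) - (1.0 if i == j else 0.0)) < 1e-12
# the coordinate map preserves the Frobenius inner product (an isometry)
assert abs(frob(a, b) - sum(x * y for x, y in zip(coords(a), coords(b)))) < 1e-12
```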
Recall P(π, x) defined in (1.5) and P(π) = P(π, 0). We denote by ψ the pointwise limit of F_N(0, •) (if it exists). Applying Theorem 1.1 with P_1 replaced by the normalized measure e^{x•σσ^⊺} dP_1(σ), we can get (5.6). By the Lipschitzness of F_N uniform in N, as stated in Lemma 5.1, if F_N converges pointwise on a dense set, we can upgrade this to convergence in the local uniform topology, namely, uniform convergence on every compact subset of [0, ∞) × S^D. Hence, this is the notion of convergence we consider.
Proposition 5.2. Assume that ξ is convex on S^D_+. As N → ∞, F_N converges in the local uniform topology to the unique viscosity solution f of (5.5) with initial condition f(0, •) = ψ given in (5.6).
Proof. Since F_N is Lipschitz uniformly in N, the Arzelà-Ascoli theorem implies that any subsequence of (F_N)_{N∈ℕ} has a further subsequence that converges in the local uniform topology to some f. It suffices to show that the subsequential limit f is the viscosity solution. For lighter notation, we assume that the entire sequence F_N converges to f. We divide the proof into two parts, verifying that f is a subsolution in the first part and a supersolution in the second part. It is easy to see that replacing "local extremum" by "strict local extremum" in the definition of viscosity solutions yields an equivalent definition.
Part 1. Let (t, x) ∈ (0, ∞) × S^D and smooth ϕ satisfy that f − ϕ has a strict local maximum at (t, x). The goal is to show (5.7), namely (∂_t ϕ − ξ(∇ϕ))(t, x) ⩽ 0. By the local uniform convergence, there exists (t_N, x_N) ∈ (0, ∞) × S^D such that F_N − ϕ has a local maximum at (t_N, x_N), and lim_{N→∞} (t_N, x_N) = (t, x). Notice that the first-order derivatives of F_N and ϕ coincide at (t_N, x_N), which is recorded in (5.8). Throughout this proof, we denote by C < ∞ a constant that may vary from one occurrence to the next and is allowed to depend on (t, x) and ϕ.
We want to show that, for every y ∈ S^D with |y| ⩽ C^{−1}, the two-sided bound (5.9) holds. The convexity of F_N gives the first inequality in (5.9). To derive the other, we start from Taylor's expansion (5.10); the same expansion holds with F_N replaced by ϕ. By the local maximality of F_N − ϕ at (t_N, x_N), the comparison F_N(t_N, x_N + y) − F_N(t_N, x_N) ⩽ ϕ(t_N, x_N + y) − ϕ(t_N, x_N) holds for every |y| ⩽ C^{−1}. The above two displays along with (5.8) imply a bound whose right side, since ϕ is smooth, is bounded by C|y|². Using (5.10) once more, we obtain (5.9).

Next, using the convexity of F_N in (5.4) combined with (5.9), we obtain a second-order bound for every |y| ⩽ C^{−1}. For some deterministic λ ∈ [0, C^{−1}] to be determined, we fix a suitable random matrix y. By the standard concentration result (e.g. [24, Theorem 1.2]) and an ε-net to cover B, we can see lim_{N→∞} δ_N = 0. Taking the expectation in the above display and choosing λ suitably in terms of N, we obtain (5.11). Since (5.9) implies that |∆F_N(t_N, x_N)| ⩽ C, using (5.2), (5.8), and (5.11), we arrive at the desired inequality at (t_N, x_N). Sending N → ∞ and using the convergence of (t_N, x_N) to (t, x), we get (5.7).
Part 2. Let (t, x) ∈ (0, ∞) × S^D and smooth ϕ satisfy that f − ϕ has a strict local minimum at (t, x). Since F_N converges locally uniformly to f, there is a sequence ((t_N, x_N))_{N∈ℕ} such that lim_{N→∞} (t_N, x_N) = (t, x) and F_N − ϕ has a local minimum at (t_N, x_N). In particular, (5.8) still holds. Using these and the lower bound in (5.2), after sending N → ∞, we obtain (∂_t ϕ − ξ(∇ϕ))(t, x) ⩾ 0, which verifies that f is a supersolution and completes the proof. □
Next, we want to restrict the equation (5.5) to a smaller set so that the variational formula for the solution optimizes over the smaller set. We denote by S^D_++ the set of positive definite matrices, which is the interior of the closed set S^D_+. We consider the equation (5.12) posed on [0, ∞) × S^D_+. We state the well-posedness of this equation and variational representations below. Notice that we do not impose any boundary condition on ∂S^D_+. This is possible by only considering increasing solutions.
Proposition 5.3. For every Lipschitz ψ : S^D_+ → R that is increasing in the sense that ψ(x) ⩽ ψ(x') if x ⪯ x', there is a viscosity solution f : [0, ∞) × S^D_+ → R to (5.12) satisfying f(0, •) = ψ, which is unique in the class of increasing Lipschitz functions. Moreover, if ξ is convex on S^D_+, then f admits the Hopf-Lax representation (5.13). In (5.13), ξ* is the convex conjugate described in (1.6).
Proof. This is an extraction of results listed in [10, Theorem 1.2]. Relevant function classes therein are defined in the beginning two paragraphs of [10, Section 1.1.3]. By (1.1), ξ is increasing on S^D_+, hence satisfying the condition on the nonlinearity of the equation (the condition H|_C ∈ Γ^↗_locLip there; H and C there correspond to ξ and S^D_+ here). The existence and uniqueness are in [10, Theorem 1.2 (2)] (C corresponds to S^D_++ here; uniqueness actually holds in a slightly larger class M ∩ L_Lip). The Lipschitzness of f is in (2a), and the monotonicity follows from (2b) and the main statement of (2) (f ∈ M, which is the class of functions increasing in x for each fixed t).
Finally, since S^D_+ is a closed convex cone satisfying the Fenchel-Moreau property described in [10, Definition 6.1], which was proved in [11, Proposition B.1], the two representations of the solution are available due to [10, Theorem 1.2 (2d)]. □
Proof of Theorem 1.2. Let f be given by Proposition 5.2. By Lemma 5.1, f is also Lipschitz, convex, and increasing, and so is ψ = f(0, •) given in (5.6). Since f is the viscosity solution of (5.5), it follows from the definition that f is a viscosity solution of (5.12). The uniqueness of f follows from Proposition 5.3. Notice that F_N(1/2, 0) = (1/N) E log ∫ exp H_N(σ) dP_N(σ). Hence, due to the convexity of ξ and ψ, the limit of the original free energy is given by the Hopf-Lax and Hopf representations evaluated at (t, x) = (1/2, 0). □

Enriched models
Recently, Mourrat initiated a PDE approach to spin glasses [17,19,21,18,20] (similar considerations also appeared in the physics literature [15,14,3,2]). The free energy enriched by an RPC as an additional field is recast as the solution to a Hamilton-Jacobi equation. In this section, we prove the Hopf-Lax representation of the limit free energy for vector spins, Theorem 6.1, which extends the results in [19,21].
To describe the limit, we set the initial condition ψ(µ), which is equal to F_N(0, µ) for every N. Recall ξ* defined in (1.6).
The proof of Theorem 6.1 will be explained below, using the concentration of the self-overlap implied by (4.2) and the choice of N^{−1/10}.