A note on $\alpha$-permanent and loop soup

In this paper, it is shown that the $\alpha$-permanent in algebra is closely related to the loop soup in probability. We give explicit expansions of the $\alpha$-permanents of the block matrices obtained from matrices associated to $*$-forests, a special class of matrices containing tridiagonal matrices. The result is proved in two ways: one is a direct combinatorial proof, and the other is a probabilistic proof via the loop soup.

1. Introduction.

The $\alpha$-permanent of a $d \times d$ matrix $A = (A_{ij})$ is defined by
$$\mathrm{per}_\alpha(A) = \sum_{\pi \in S_d} \alpha^{\#(\pi)} \prod_{i=1}^d A_{i\pi(i)},$$
where $S_d$ is the set of all permutations of $\{1, \cdots, d\}$ and $\#(\pi)$ denotes the number of disjoint cycles in $\pi$. The interest of such matrix functions, introduced by Vere-Jones [8, 9], derives from their occurrence as the coefficients in the multivariable Taylor series expansion of the determinantal form $\det(I - ZA)^{-\alpha}$ with $Z = \mathrm{diag}(z_1, \cdots, z_d)$:
$$\det(I - ZA)^{-\alpha} = \sum_{q \in \mathbb{N}^d} \mathrm{per}_\alpha(A[q]) \prod_{i=1}^d \frac{z_i^{q_i}}{q_i!}. \tag{1.1}$$
Here $\mathbb{N} = \{0, 1, \cdots\}$, $q = (q_1, \cdots, q_d)$, and $A[q]$ denotes the square block matrix of order $\sum_{i=1}^d q_i$ obtained from $A$ by repeating the index $i$ $q_i$ times. The equality (1.1) is in fact a reformulation of the $\alpha$-extension of MacMahon's master theorem.
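Since the definition above involves only a sum over permutations, it can be checked by direct enumeration. The following Python sketch (ours, for illustration; the function names are hypothetical) computes $\mathrm{per}_\alpha(A)$ and the block matrix $A[q]$ by brute force, which is feasible only for small orders. As sanity checks, $\mathrm{per}_1$ is the ordinary permanent and $\det(A) = \mathrm{per}_{-1}(-A)$, two classical identities.

```python
from itertools import permutations

def num_cycles(pi):
    """Number of disjoint cycles of a permutation pi given as a tuple (0-indexed)."""
    seen, count = set(), 0
    for start in range(len(pi)):
        if start not in seen:
            count += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = pi[j]
    return count

def per_alpha(A, alpha):
    """alpha-permanent: sum over all permutations pi of alpha^{#(pi)} * prod_i A[i][pi[i]]."""
    d = len(A)
    total = 0.0
    for pi in permutations(range(d)):
        weight = 1.0
        for i in range(d):
            weight *= A[i][pi[i]]
        total += alpha ** num_cycles(pi) * weight
    return total

def block_matrix(A, q):
    """A[q]: the block matrix obtained from A by repeating the index i q[i] times."""
    idx = [i for i, qi in enumerate(q) for _ in range(qi)]
    return [[A[i][j] for j in idx] for i in idx]
```

The enumeration costs $O(d! \cdot d)$, so this is a verification device only, not a computational method.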
The basic problem in the study of the $\alpha$-permanent is to provide an explicit expansion of $\mathrm{per}_\alpha(A[q])$. Precisely, the expansion of $\mathrm{per}_\alpha(A[q])$ is a linear combination of monomials of the form $\prod_{i,j=1}^d A_{ij}^{n_{ij}}$ for some $(n_{ij}) \in \mathbb{N}^{d \times d}$, and it is of interest to determine the coefficients of the monomials in this expansion. Rubak, Møller, and McCullagh [7] provided explicit expansions for block matrices of some simple types. In this paper, we use two different methods to give explicit expansions of the $\alpha$-permanents of the block matrices obtained from matrices associated to $*$-forests (defined in Section 2), which are a special class of matrices containing tridiagonal matrices.

THEOREM 1.1. Let $A[q]$ be the block matrix obtained from a $d \times d$ matrix $A = (A_{ij})$ associated to a $*$-forest with block sizes $q = (q_1, \cdots, q_d)$. Then
$$\mathrm{per}_\alpha(A[q]) = \sum_{n \in T_q} \prod_{i=1}^d \frac{q_i!\, \Gamma(q_i + \alpha)}{\Gamma(\alpha)} \prod_{\{i,j\} \in E} \frac{1}{n_{ij}!} \prod_{\{i,j\} \in E:\, i \neq j} \frac{\Gamma(\alpha)}{\Gamma(n_{ij} + \alpha)} \prod_{i,j=1}^d A_{ij}^{n_{ij}},$$
where $E = E(A)$ and $T_q$ are defined in Section 2.

Another focus of the paper is the loop soup, which provides an interpretation of the $\alpha$-permanent in probabilistic language. Roughly speaking, the loop soup is a Poisson ensemble of Markov loops. It has been studied intensively in the past twenty years due to its vigorous interaction with the Gaussian free field, the conformal loop ensemble, uniform spanning trees, perturbed Brownian motion, etc. See [10, 1, 5] for more information on the loop soup.
It was indicated by Le Jan [3] that the loop soup with intensity $\alpha\ (> 0)$ is closely related to the $\alpha$-permanent. Let $V$ be a finite set and $P = (P_{xy})_{x,y \in V}$ a sub-Markovian transition matrix. Assuming that the corresponding discrete-time Markov chain is transient, denote by $\mathcal{L}_\alpha$ its associated unrooted oriented loop soup with intensity $\alpha$ (cf. Section 3 for details). Let $\theta = (\theta_x)_{x \in V}$ be the occupation time field on vertices of $\mathcal{L}_\alpha$; namely, $\theta_x$ is the sum of the numbers of visits at $x$ of the loops in $\mathcal{L}_\alpha$. The law of $\theta$ is given by the permanental random field defined as follows. For more general definitions of permanental random fields, we refer the reader to [7].

THEOREM 1.2. $\theta$ is a permanental random field with parameter $(\alpha^{-1}, P)$. Namely, for any $q \in \mathbb{N}^V$,
$$\mathbb{P}(\theta = q) = \det(I - P)^{\alpha}\, \frac{\mathrm{per}_\alpha(P[q])}{\prod_{x \in V} q_x!}. \tag{1.2}$$

REMARK. Let $N = (N_{xy})_{x,y \in V}$ be the occupation time field on edges of $\mathcal{L}_\alpha$; that is, $N_{xy}$ is the sum of the numbers of crossings from $x$ to $y$ of the loops in $\mathcal{L}_\alpha$. It holds that $\theta_x = \sum_{y \in V} N_{xy} = \sum_{y \in V} N_{yx}$ for any $x \in V$. The law of $N$ is given by the following formula ([3, Proposition 4.1]): for any $n = (n_{xy})_{x,y \in V}$ satisfying $\sum_{y \in V} n_{xy} = \sum_{y \in V} n_{yx}\ (=: q_x)$ for all $x \in V$,
$$\mathbb{P}(N = n) = \det(I - P)^{\alpha}\, \frac{R(n) \prod_{x,y \in V} P_{xy}^{n_{xy}}}{\prod_{x \in V} q_x!}, \tag{1.3}$$
where $R(n)$ is the coefficient of $\prod_{x,y \in V} P_{xy}^{n_{xy}}$ in $\mathrm{per}_\alpha(P[q])$ with $q = (q_x)_{x \in V}$.
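To make the coefficients $R(n)$ concrete, one can enumerate the permutations of the block index set of $P[q]$ and group them by their crossing matrix $n$; the cycle counts then give $R(n)$ as a polynomial in $\alpha$. A brute-force Python sketch (ours; the function name is hypothetical, and the enumeration is feasible for very small $q$ only):

```python
from itertools import permutations
from collections import defaultdict

def expansion(q):
    """Enumerate the permutations of the block index set of P[q], grouping them by
    their crossing matrix n = (n_xy), where n_xy counts copies of x mapped to copies
    of y.  Then per_alpha(P[q]) = sum_n R(n) * prod_{x,y} P_xy^{n_xy} with
    R(n) = sum_c count[n][c] * alpha^c, and count[n][c] is returned below."""
    d = len(q)
    idx = [i for i, qi in enumerate(q) for _ in range(qi)]  # block index of each copy
    out = defaultdict(lambda: defaultdict(int))
    for pi in permutations(range(len(idx))):
        n = [[0] * d for _ in range(d)]
        for r in range(len(idx)):
            n[idx[r]][idx[pi[r]]] += 1
        # count the disjoint cycles of pi
        seen, cycles = set(), 0
        for start in range(len(idx)):
            if start not in seen:
                cycles += 1
                j = start
                while j not in seen:
                    seen.add(j)
                    j = pi[j]
        out[tuple(map(tuple, n))][cycles] += 1
    return {n: dict(cs) for n, cs in out.items()}
```

For instance, for $q = (2, 2)$ and the crossing matrix $n$ with $n_{12} = n_{21} = 2$ (and zero diagonal), the sketch returns the cycle-count statistics $\{1: 2,\ 2: 2\}$, i.e. $R(n) = 2\alpha + 2\alpha^2$.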
Theorem 1.2 follows immediately from (1.3). In this paper, we provide another proof of Theorem 1.2, which explains the appearance of the $\alpha$-permanent in the occupation law. In the case where $P$ is associated to a $*$-forest, the laws of $N$ and $\theta$ have the following simple expressions.

THEOREM 1.3. Suppose $P$ is associated to a $*$-forest. Let $E = E(P) := \{\{x, y\} : x, y \in V,\ P_{xy} \neq 0 \text{ or } P_{yx} \neq 0\}$ (with $\{x, x\} \in E$ if $P_{xx} \neq 0$). Then for any $q \in \mathbb{N}^V$ and $n \in T_q(P)$,
$$\mathbb{P}(N = n) = \det(I - P)^{\alpha} \prod_{x \in V} \frac{\Gamma(q_x + \alpha)}{\Gamma(\alpha)} \prod_{\{x,y\} \in E} \frac{1}{n_{xy}!} \prod_{\{x,y\} \in E:\, x \neq y} \frac{\Gamma(\alpha)}{\Gamma(n_{xy} + \alpha)} \prod_{x,y \in V} P_{xy}^{n_{xy}}, \tag{1.4}$$
and consequently,
$$\mathbb{P}(\theta = q) = \det(I - P)^{\alpha} \sum_{n \in T_q(P)} \prod_{x \in V} \frac{\Gamma(q_x + \alpha)}{\Gamma(\alpha)} \prod_{\{x,y\} \in E} \frac{1}{n_{xy}!} \prod_{\{x,y\} \in E:\, x \neq y} \frac{\Gamma(\alpha)}{\Gamma(n_{xy} + \alpha)} \prod_{x,y \in V} P_{xy}^{n_{xy}}. \tag{1.5}$$
In particular, if $P$ is associated to a forest (defined in Section 2), then for any $q \in \mathbb{N}^V$ there is exactly one element, say $n^q$, in $T_q(P)$, and hence the sum in (1.5) reduces to the single term $n = n^q$.

REMARK. The proof of Theorem 1.3 relies on the special structure of $*$-forests. In fact, (1.4) and (1.5) cannot be generalized to general graphs except when $\alpha = 1$ (one can, for example, check the case of the triangle graph). In [2], Le Jan showed that (1.4) is still true for general graphs when $\alpha = 1$; this is due to the particularity of oriented loop soups with intensity $1$. In the same paper, an explicit expression for the law of unoriented crossings was also given in the case $\alpha = 1/2$, which is special to unoriented loop soups.
Theorem 1.3 is also closely related to [4, Proposition 3.2], which gives a conditional occupation law of the continuous-time loop soup on trees.
In fact, the probabilistic proof of Theorem 1.1 follows directly from Theorem 1.2 and Theorem 1.3, by observing that the expansion of $\mathrm{per}_\alpha(A[q])$ is a polynomial in $\alpha$ and in the entries of $A$.
The paper is organized as follows. In Section 2, we prove Theorem 1.1 in a combinatorial way. In Section 3, we focus on the relation between the $\alpha$-permanent and the loop soup and prove Theorem 1.2 and Theorem 1.3.

2. A combinatorial proof of Theorem 1.1.
A graph is called a $*$-forest if it has no cycles other than self-loops. We call a matrix $A$ associated to a $*$-forest (resp. forest) if $G(A)$ is a $*$-forest (resp. forest), where $G(A)$ denotes the graph on the vertex set $V(A) = \{1, \cdots, d\}$ whose edge set $E(A)$ consists of the pairs $\{i, j\}$ with $A_{ij} \neq 0$ or $A_{ji} \neq 0$ (with a self-loop at $i$ if $A_{ii} \neq 0$). In particular, any tridiagonal matrix is associated to a $*$-forest.
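The defining property of a $*$-forest (no cycles other than self-loops) is easy to test with a union-find scan over the non-self-loop edges of $G(A)$. A small Python sketch (ours; the function name is hypothetical):

```python
def is_star_forest(A):
    """Check whether G(A) -- vertices 0..d-1, an undirected edge {i, j} whenever
    A[i][j] != 0 or A[j][i] != 0 -- has no cycles other than self-loops."""
    d = len(A)
    parent = list(range(d))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(d):
        for j in range(i + 1, d):  # self-loops (j == i) are allowed, so skip them
            if A[i][j] != 0 or A[j][i] != 0:
                ri, rj = find(i), find(j)
                if ri == rj:
                    return False  # the edge {i, j} would close a non-trivial cycle
                parent[ri] = rj
    return True
```

A tridiagonal matrix (with arbitrary diagonal) passes the check, while any matrix whose graph contains a triangle fails it.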
To simplify notation, we omit '(A)' in G(A), V (A), and E(A) and write ij = ji for an edge {i, j} ∈ E.
A COMBINATORIAL PROOF OF THEOREM 1.1. For a $d \times d$ matrix $A$ associated to a $*$-forest and $q \in \mathbb{N}^d$, we consider every vertex $i \in V$ to have $q_i$ copies labelled by $i_1, \cdots, i_{q_i}$, and focus on the permutations of $V[q]$, the set of all such copies. For $\pi \in S(V[q])$, the crossing of $\pi$ is the element $n \in \mathbb{N}^{V \times V}$ in which $n_{ij}$ is the number of copies of $i$ mapped by $\pi$ to copies of $j$. With the above notation, we can write $\mathrm{per}_\alpha(A[q])$ as a double sum, over $n$ and then over the permutations with crossing $n$, where the second equality follows from the simple fact that the second sum on the right-hand side is non-zero only when $n \in T_q$. Now it suffices to show that for any $d \times d$ matrix $A$ associated to a $*$-forest, $q \in \mathbb{N}^d$, and $n \in T_q$, the identity (2.1) holds.

FIG 1. Suppose $z$ is the parent of $x$ and the black arrows represent a section of the orbit of some $\pi \in S_\sigma$. Then the red dashed arrows represent the corresponding section of the orbit of $\tilde\pi$.
Note that the above statement depends on $A$ only via $G$ ($T_q$ depends on $G$). First, we will prove it under the extra condition that $G$ is a forest. The proof goes by induction on $d$. In the case where $G$ contains no edges, the only possible choice for $q$ and $n$ is $q_i = n_{ij} = 0$ for all $1 \le i, j \le d$, and (2.1) holds; in particular, this covers the case $d = 1$. Assuming that (2.1) holds for any forest $G$ with vertices $\{1, 2, \cdots, d - 1\}$, $q \in \mathbb{N}^{d-1}$, and $n \in T_q$, we will prove it for $G$ with vertices $\{1, 2, \cdots, d\}$, $q \in \mathbb{N}^d$, and $n \in T_q$. We exclude the trivial case where $G$ contains no edges. Then there exists at least one vertex with exactly one neighbour in $G$. Fix such a vertex $y$ and denote by $x$ its unique neighbour. It holds that $q_y = n_{xy} \le q_x$. Let $\tilde G = (\tilde V, \tilde E)$ be the graph obtained by removing $y$ and the edge $xy$ from $G$. Define $\tilde q = (\tilde q_i)_{i \in \tilde V}$ and $\tilde n = (\tilde n_{ij})_{i,j \in \tilde V}$ as follows: $\tilde q_x = q_x - q_y$, $\tilde q_i = q_i$ for $i \in \tilde V \setminus \{x\}$, and $\tilde n_{ij} = n_{ij}$ for $i, j \in \tilde V$. Then $\tilde G$ is also a forest and $\tilde n \in T_{\tilde q}(\tilde G)$. We shall introduce some notation. Set $Y := \{y_1, \cdots, y_{q_y}\}$ and $X := \{x_1, \cdots, x_{q_x}\}$. Below, we use $S$ and $\tilde S$ to denote $S(V[q])$ and $S(\tilde V[\tilde q])$ respectively. Let $S_Y := \{\sigma = (\sigma_-, \sigma_+) : \sigma_\pm : Y \to X \text{ are both injective}\}$, and for any $\sigma = (\sigma_-, \sigma_+) \in S_Y$, let $S_\sigma$ be the set of $\pi \in S$ with $\pi|_Y = \sigma_+$ and $\pi^{-1}|_Y = \sigma_-$. In the following, we will show that for any $\sigma \in S_Y$, there is a one-to-one correspondence between $S_\sigma$ and $\tilde S$. To this end, we relabel $X$ such that $\sigma_+(Y) = \{x_m : m \in [q_x - q_y + 1, q_x]\}$. For $\pi \in S_\sigma$, define a map $\tilde\pi \in \tilde S$ via $\tilde\pi(u) := \pi^{k(u)}(u)$, where $\pi^k$ is the $k$-fold composition of $\pi$ with itself and $k(u)$ is the smallest positive integer such that $\pi^{k(u)}(u) \in \tilde V[\tilde q]$. The idea is that the orbits induced by $\sigma$ consist of cycles and bridges¹, which are also sections of the orbits of $\pi$, for any $\pi \in S_\sigma$. $\tilde\pi$ is obtained from $\pi$ by removing all these cycles and bridges and identifying the two endpoints of each bridge. See Figure 1 for an illustration. We can readily check that $\pi \mapsto \tilde\pi$ is a bijection from $S_\sigma$ to $\tilde S$. Furthermore, it holds that $\#(\pi) = \#(\tilde\pi) + \#(\sigma)$, where $\#(\sigma)$ is the number of cycles in the orbits of $\sigma$.
¹The orbits of $\sigma$ consist of the self-avoiding paths of the following two kinds: (1) the cycles, which are closed, and (2) the bridges, which are not closed.

Following this, we have the decomposition (2.2). By the induction hypothesis, (2.3) holds. On the other hand, we claim (2.4). We can similarly define the orbits of $\sigma^m \in S^m_Y$. Denote by $\#(\sigma^m)$ the number of cycles in the orbits of $\sigma^m$. Observe that, given $\sigma^{m-1} \in S^{m-1}_Y$, $y_m$ can be traced, along the orbit of $\sigma^{m-1}$, back to a unique element $x^*_m \in X \setminus \sigma^{m-1}(Y_{m-1})$. So if we consider the collection $\{\sigma^m \in S^m_Y : \sigma^m \text{ is an extension of } \sigma^{m-1}\}$ obtained by further choosing the image of $y_m$, then there are exactly $q_x - m + 1$ choices (since $\sigma^m(y_m)$ can be any element of $X \setminus \sigma^{m-1}(Y_{m-1})$), and among them the choice $\sigma^m(y_m) = x^*_m$ makes the number of cycles increase by $1$, while for the other choices the number remains the same. Thus, summing over the extensions of any given $\sigma^{m-1} \in S^{m-1}_Y$ yields a recursion in $m$, which immediately leads to (2.4). Substituting (2.3) and (2.4) into (2.2), we reach (2.1).
For a general * -forest G, we consider a new graph G * obtained from G by changing every self-loop at x ∈ V to an edge from x to a newly-created copy of x.Then G * is a forest.It is simple to deduce (2.1) for G from that for G * .
3. Link with loop soup.

3.1. Proof of Theorem 1.2.
Recall that $V$ is a finite set and $P = (P_{xy})_{x,y \in V}$ is a sub-Markovian transition matrix. Denote $P_{x\Delta} = 1 - \sum_{y \in V} P_{xy}$ for any $x \in V$. Consider the discrete-time Markov chain (DTMC) $X = ((X_n)_{0 \le n < \zeta}, (\mathbb{P}_x)_{x \in V})$ on $G$ which, being at $x$, jumps to $y$ with probability $P_{xy}$ and is killed with probability $P_{x\Delta}$. It is assumed that $X$ is transient, so its Green's function $G(x, y) := \mathbb{E}_x\big[\sum_{n=0}^{\zeta-1} 1_{\{X_n = y\}}\big]$ is finite for all $x, y \in V$. Denote $G = (G(x, y))_{x,y \in V}$. Then it holds that $G(I - P) = (I - P)G = I$.
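The relation $G(I - P) = (I - P)G = I$, together with the interpretation of $G(x, y)$ as the expected number of visits, can be checked numerically. A pure-Python sketch (ours; the function names are hypothetical), comparing $(I - P)^{-1}$ with a truncated Neumann series $\sum_k P^k$:

```python
def inverse(M):
    """Invert a matrix by Gauss-Jordan elimination (pure Python, small examples only)."""
    d = len(M)
    A = [list(row) + [float(i == j) for j in range(d)] for i, row in enumerate(M)]
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        for r in range(d):
            if r != col:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [row[d:] for row in A]

def greens_function(P):
    """G = (I - P)^{-1}: G(x, y) is the expected number of visits to y starting from x."""
    d = len(P)
    return inverse([[(1.0 if i == j else 0.0) - P[i][j] for j in range(d)] for i in range(d)])

def neumann(P, kmax=200):
    """Truncated Neumann series sum_{k=0}^{kmax} P^k, which converges to G for transient P."""
    d = len(P)
    S = [[float(i == j) for j in range(d)] for i in range(d)]
    Pk = [[float(i == j) for j in range(d)] for i in range(d)]
    for _ in range(kmax):
        Pk = [[sum(Pk[i][k] * P[k][j] for k in range(d)) for j in range(d)] for i in range(d)]
        S = [[S[i][j] + Pk[i][j] for j in range(d)] for i in range(d)]
    return S

# Example: a 2x2 sub-Markovian matrix with positive killing probability at both states.
P = [[0.0, 0.5], [0.25, 0.0]]
G = greens_function(P)
```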
In the following, $G$ always refers to $G(P)$. We give a brief introduction to the (discrete-time) oriented loop soup associated to $X$. Any discrete-time path on $G$ with the same starting and terminal points is a rooted loop on $G$. Forgetting the starting point of a rooted loop, it defines an unrooted loop on $G$. For an unrooted loop $\gamma$, its multiplicity $J(\gamma)$ is the maximal integer $J$ such that $\gamma$ can be written as the concatenation of $J$ identical unrooted loops. The loop measure associated to $X$ is defined to be the measure $\mu$ on the space of unrooted loops with $\mu(\gamma) = \mu(\{\gamma\})$ equal to the product of the transition probabilities of the edges crossed by $\gamma$ divided by $J(\gamma)$, for each loop $\gamma$. The oriented loop soup associated to $X$ with intensity $\alpha\ (> 0)$ is by definition a Poisson point process on the space of unrooted loops on $G$ with intensity $\alpha\mu$. We use $\mathcal{L}_\alpha$ to denote the loop soup associated to $X$ with intensity $\alpha$. It is well-known that the total mass of $\mu$ is $-\log \det(I - P)$. Thus, for a loop configuration $L$ with $k$ different loops $\gamma_1, \cdots, \gamma_k$, each loop $\gamma_i$ repeated $r_i$ times, it holds that
$$\mathbb{P}(\mathcal{L}_\alpha = L) = \det(I - P)^{\alpha} \prod_{i=1}^k \frac{(\alpha\mu(\gamma_i))^{r_i}}{r_i!}. \tag{3.1}$$
Before the proof of Theorem 1.2, we introduce the (vertex-)extended graph of $G$.
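The stated total mass comes from the identity $-\log \det(I - P) = \sum_{k \ge 1} \operatorname{tr}(P^k)/k$, valid when the spectral radius of $P$ is less than $1$; the term $\operatorname{tr}(P^k)/k$ accounts for the rooted loops of length $k$, the factor $1/k$ compensating for the choice of root (and the division by $J$). A quick numerical check (ours) on a $2 \times 2$ sub-Markovian matrix:

```python
from math import log

def matmul(A, B):
    d = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)] for i in range(d)]

# A 2x2 sub-Markovian example: each row sums to less than 1 (positive killing).
P = [[0.1, 0.4], [0.3, 0.2]]

# Truncation of sum_{k>=1} tr(P^k)/k; the spectral radius here is 0.5, so
# the tail beyond k = 400 is negligible.
mass, Pk = 0.0, [row[:] for row in P]
for k in range(1, 400):
    mass += (Pk[0][0] + Pk[1][1]) / k
    Pk = matmul(Pk, P)

# -log det(I - P), computed directly for the 2x2 case.
target = -log((1 - P[0][0]) * (1 - P[1][1]) - P[0][1] * P[1][0])
```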
Extended graph. Let $K$ be a large integer and $\mathbf{K} = (K_x)_{x \in V}$ with $K_x = K$ for all $x \in V$. The extended graph $G_K = (V_K, E_K)$ is defined as follows. We set $V_K = V[\mathbf{K}]$ and declare $x_i$ and $y_j$ to be neighbours in $G_K$ if $x$ and $y$ are neighbours in $G$. Let $X_K$ be the DTMC on $G_K$ with transition matrix $P^K_{x_i, y_j} := P_{xy}/K$, and let $\mathcal{L}^K_\alpha$ be the oriented loop soup with intensity $\alpha$ associated to $X_K$. Since $X$ has the law of the projection of $X_K$ on $G$, the projection of $\mathcal{L}^K_\alpha$ on $G$ has the same law as $\mathcal{L}_\alpha$.

PROOF OF THEOREM 1.2. Note that the probability that $\mathcal{L}^K_\alpha$ visits every vertex of $G_K$ at most once goes to $1$ as $K \to \infty$. Therefore, we can focus on the event that $\mathcal{L}^K_\alpha \in C^*$, where $C^*$ is the collection of the loop configurations on $G_K$ that visit every vertex at most once. By the projection relation between $\mathcal{L}_\alpha$ and $\mathcal{L}^K_\alpha$, for any $n \in T_q$ we arrive at the identity (3.2), where $\#L$ is the number of loops in $L$, $o_K(1)$ is a term that goes to $0$ as $K \to \infty$, and $\theta_G(L)$ (resp. $N_G(L)$) is the projection on $G$ of the occupation time field on vertices (resp. edges) of $L$. The second equality in (3.2) follows from (3.1) and the fact that the multiplicities of the loops in $L \in C^*$ are all $1$. Now it remains to deal with the remaining term in (3.2), which can be computed as follows: (a) first choose $q_x$ different vertices from $\{x_1, \cdots, x_K\}$ for each $x$; (b) sum the weights of all the loop configurations $L$ that satisfy $\theta_G(L) = q$ and visit exactly the chosen vertices, each carrying the factor $\prod_{x,y \in V} P_{xy}^{n_{xy}}$ with $n = N_G(L)$.
The summation in (b) equals $\mathrm{per}_\alpha(P[q])$. Therefore, as $K \to \infty$, the limit of the right-hand side of (3.2) is $\det(I - P)^{\alpha}\, \mathrm{per}_\alpha(P[q]) / \prod_{x \in V} q_x!$, which completes the proof.

3.2. Proof of Theorem 1.3.

Now let us turn to the proof of Theorem 1.3. Since (1.5) follows directly from (1.4), it suffices to prove (1.4). Without loss of generality, we further assume $G(P)$ is connected (otherwise we consider its connected components separately). Then $G$ is a $*$-tree (i.e. a connected $*$-forest). First, we impose the extra conditions that (a) $G$ is a tree, and (b) $X$ can only be killed at some vertex $x_0$.
Let us show (1.4) under (a) and (b). We view $x_0$ as the root of the tree $G$ henceforth. Let $C_x$ be the set of children of $x$ for $x \in V$, and $p_x$ the parent of $x$ for $x \in V \setminus \{x_0\}$. Denote by $\mathrm{NB}(r, p)$, $\mathrm{Multi}(m, p)$, and $\mathrm{NM}(r, p)$ the negative binomial distribution with parameter $(r, p)$, the multinomial distribution with parameter $(m, p)$, and the negative multinomial distribution with parameter $(r, p)$, respectively². We shall prove a preliminary lemma.

REMARK. The conclusion (i) can be generalized to any transition matrix $P$, while (ii) relies on the tree structure of $G$.
PROOF. In this proof, we will frequently use the following simple fact.
FACT (NB-Multi mixture = NM). If $Y$ is a $\mathrm{NB}(r, p)$ random variable and, conditionally on $Y$, $X$ follows the $\mathrm{Multi}(Y, \tilde p)$ distribution, then the unconditional distribution of $X$ is $\mathrm{NM}(r, p\tilde p)$, where $p\tilde p = (p\tilde p_1, \cdots, p\tilde p_d)$.
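The Fact can be verified exactly at the level of probability mass functions, since $\frac{\Gamma(r+n)}{\Gamma(r)\,n!} \cdot \frac{n!}{\prod_i x_i!} = \frac{\Gamma(r + \sum_i x_i)}{\Gamma(r) \prod_i x_i!}$ when $n = \sum_i x_i$. A Python sketch (ours; the parametrization follows the paper's footnote 2, and the function names are hypothetical):

```python
from math import gamma, factorial, prod

def nb_pmf(n, r, p):
    """NB(r, p) pmf: Gamma(r+n)/(Gamma(r) n!) * (1-p)^r * p^n."""
    return gamma(r + n) / (gamma(r) * factorial(n)) * (1 - p) ** r * p ** n

def multi_pmf(x, m, probs):
    """Multinomial(m, probs) pmf, with sum(probs) == 1; zero unless sum(x) == m."""
    if sum(x) != m:
        return 0.0
    coeff = factorial(m) / prod(factorial(k) for k in x)
    return coeff * prod(pr ** k for pr, k in zip(probs, x))

def nm_pmf(x, r, ps):
    """NM(r, ps) pmf: Gamma(r+|x|)/(Gamma(r) prod x_i!) * (1-|ps|)^r * prod ps_i^{x_i}."""
    s = sum(x)
    coeff = gamma(r + s) / (gamma(r) * prod(factorial(k) for k in x))
    return coeff * (1 - sum(ps)) ** r * prod(pr ** k for pr, k in zip(ps, x))

def mixture_pmf(x, r, p, probs):
    """P(X = x) when Y ~ NB(r, p) and X | Y ~ Multi(Y, probs); Multi forces Y = sum(x)."""
    return nb_pmf(sum(x), r, p) * multi_pmf(x, sum(x), probs)

# Compare the mixture pmf with the NM(r, p * probs) pmf at one point.
x, r, p, probs = (1, 2, 0), 2.5, 0.3, (0.2, 0.5, 0.3)
lhs = mixture_pmf(x, r, p, probs)
rhs = nm_pmf(x, r, tuple(p * t for t in probs))
```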
For (i), the law of $\theta_{x_0}$ follows from the standard result on the marginal distribution of a permanental random field (cf. [7, §3.1]). It can be easily deduced that the paths between consecutive visits of $x_0$ in the loops in $\mathcal{L}_\alpha$ are i.i.d., each distributed as an excursion of $X$ at $x_0$. In other words, conditionally on $\theta_{x_0}$, if we cut off the loops visiting $x_0$ at every visit at $x_0$, then what we get is just $\theta_{x_0}$ independent excursions of $X$ at $x_0$. So the conditional law of $(N_{x_0 x} : x \in C_{x_0})$ is $\mathrm{Multi}\big(\theta_{x_0}, (1 - P_{x_0\Delta})^{-1}(P_{x_0 x})_{x \in C_{x_0}}\big)$. Their unconditional law follows from Fact 3.2.
For (ii), for any $y \in C_x$, we divide $N_{xy}$ into two parts: $N_{xy} = N^{(1)}_{xy} + N^{(2)}_{xy}$, where $N^{(1)}_{xy}$ (resp. $N^{(2)}_{xy}$) is the number of crossings from $x$ to $y$ by the loops in $\mathcal{L}_\alpha$ visiting $p_x$ (resp. not visiting $p_x$). For the first part, conditionally on $N_{x p_x} = m$, the trace of the loops visiting $p_x$ on the branch³ at $p_x$ containing $x$ consists of exactly $m$ excursions at $p_x$. As in the previous arguments, these excursions are i.i.d. and each of them has the same law as an excursion of $X$ at $p_x$ conditioned to hit $x$. So the number of crossings from $x$ to its children by each excursion follows a $\mathrm{NB}(1, 1 - P_{x p_x})$ distribution (i.e. a geometric distribution with success probability $P_{x p_x}$). The sum of them, i.e. $\sum_{y \in C_x} N^{(1)}_{xy}$, is a $\mathrm{NB}(m, 1 - P_{x p_x})$ random variable. For the second part, it is easy to deduce that the loops not visiting $p_x$ form a loop soup associated to $X$ killed at $p_x$. Hence it follows from (i) that $\sum_{y \in C_x} N^{(2)}_{xy}$ has the $\mathrm{NB}(\alpha, 1 - P_{x p_x})$ distribution. The independence of $(N^{(1)}_{xy} : y \in C_x)$ and $(N^{(2)}_{xy} : y \in C_x)$ yields that the conditional distribution of $\sum_{y \in C_x} N_{xy}$ is $\mathrm{NB}(m + \alpha, 1 - P_{x p_x})$. Moreover, we can readily see from the above arguments that, conditionally on $\sum_{y \in C_x} N_{xy}$, $(N_{xy} : y \in C_x)$ follows a $\mathrm{Multi}\big(\sum_{y \in C_x} N_{xy}, (1 - P_{x p_x})^{-1}(P_{xy})_{y \in C_x}\big)$ distribution. Thus, Fact 3.2 implies (ii).

²For $r > 0$, $d \ge 1$, and $p = (p_1, \cdots, p_d) \in [0, 1)^d$ with $p_1 + \cdots + p_d < 1$, $\mathrm{NM}(r, p)$ is the distribution on $\mathbb{N}^d$ with probability mass function
$$\frac{\Gamma(r + \sum_{i=1}^d n_i)}{\Gamma(r) \prod_{i=1}^d n_i!} \Big(1 - \sum_{i=1}^d p_i\Big)^r \prod_{i=1}^d p_i^{n_i}, \quad n \in \mathbb{N}^d.$$
For $r > 0$ and $0 \le p < 1$, $\mathrm{NB}(r, p)$ is just $\mathrm{NM}(r, p)$ with $p = (p)$.

³A branch at $x$ is defined as a connected component of the tree $G$ when the vertex $x$ is removed, to which we add back $x$.

Iteratively using (i) and (ii), we get (3.3), where in the second equality we use a well-known formula for the determinant of the Green's function (see for example [10, Proposition 1.31]). Now we proceed to the general cases. First, we remove condition (b). Still fix some vertex $x_0 \in V$. We consider the $h$-transform $X^h$ of $X$ using the excessive function $h(x) = \mathbb{P}_x(T_{x_0} < \infty)$, $x \in V$, where $T_{x_0} = \inf\{n \ge 0 : X_n = x_0\}$. Since the law of the loop soup is invariant under this transform (because the loop measure is), and $X^h$ is an irreducible transient DTMC that can only be killed at $x_0$, it boils down to the previous case, and (1.4) holds with $\det(I - P)$ replaced by $\det(I - P^h)$, where $P^h_{xy} = P_{xy} h(y)/h(x)$ is the transition matrix of $X^h$. Further using that the diagonal entries of the Green's function (seen as a matrix) are invariant under the transform, we can readily deduce from (3.3) that $\det(I - P) = \det(I - P^h)$. That gives rise to (1.4).
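The equality $\det(I - P) = \det(I - P^h)$ is also transparent from the identity $I - P^h = D_h^{-1}(I - P) D_h$ with $D_h = \operatorname{diag}(h)$: a similarity transform leaves the determinant unchanged for any positive $h$, whether or not $h$ is excessive. A numerical illustration (ours, with an arbitrary positive $h$):

```python
def det(M):
    """Determinant by Gaussian elimination with partial pivoting (pure Python)."""
    A = [row[:] for row in M]
    d, sign, val = len(A), 1.0, 1.0
    for c in range(d):
        piv = max(range(c, d), key=lambda r: abs(A[r][c]))
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            sign = -sign
        val *= A[c][c]
        for r in range(c + 1, d):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return sign * val

def i_minus(M):
    d = len(M)
    return [[(1.0 if i == j else 0.0) - M[i][j] for j in range(d)] for i in range(d)]

# A sub-Markovian example and an arbitrary positive function h.
P = [[0.0, 0.5, 0.2], [0.3, 0.0, 0.1], [0.2, 0.4, 0.0]]
h = [1.0, 0.7, 1.3]
Ph = [[P[i][j] * h[j] / h[i] for j in range(3)] for i in range(3)]
```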
Next, we further remove condition (a). Similar to the combinatorial proof of Theorem 1.1, we add to the vertex set a copy of every vertex and define a DTMC $X^*$ on the extended vertex set via its transition matrix
$$P^*_{xy} = \begin{cases} P_{xy}, & \text{if } x, y \in V,\ x \neq y; \\ P_{xx}, & \text{if } x \in V \text{ and } y = x^*; \\ 1, & \text{if } y \in V \text{ and } x = y^*; \\ 0, & \text{otherwise}, \end{cases}$$
where $x^*$ is the copy of $x$ in the extended vertex set. Then $G(P^*)$ is a tree and the projection of $X^*$ on $V$ has the same law as $X$, where the projection maps every path $(x, x^*, x)$ to the path $(x, x)$ for any $x \in V$. It follows that the loop soup associated to $X$ has the same law as the projection of the loop soup associated to $X^*$. Again, it reduces to the previous case, and (1.4) holds with $\det(I - P)$ replaced by $\det(I - P^*)$. By (3.3), it is easily seen that $\det(I - P) = \det(I - P^*)$, which leads to (1.4). That completes the proof.
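The doubling construction and the equality $\det(I - P) = \det(I - P^*)$ can likewise be checked directly: a Schur-complement computation on the block structure of $I - P^*$ gives back $\det(I - P)$. A Python sketch (ours; it adopts the convention $P^*_{xy} = P_{xy}$ for distinct $x, y \in V$, and the function names are hypothetical):

```python
def det(M):
    """Determinant by Gaussian elimination with partial pivoting (pure Python)."""
    A = [row[:] for row in M]
    d, sign, val = len(A), 1.0, 1.0
    for c in range(d):
        piv = max(range(c, d), key=lambda r: abs(A[r][c]))
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            sign = -sign
        val *= A[c][c]
        for r in range(c + 1, d):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return sign * val

def i_minus(M):
    d = len(M)
    return [[(1.0 if i == j else 0.0) - M[i][j] for j in range(d)] for i in range(d)]

def double_self_loops(P):
    """Build P* on V + V*: keep off-diagonal entries, and reroute each self-loop
    x -> x through the copy x* (P*[x][x*] = P[x][x], P*[x*][x] = 1)."""
    d = len(P)
    S = [[0.0] * (2 * d) for _ in range(2 * d)]
    for x in range(d):
        for y in range(d):
            if x != y:
                S[x][y] = P[x][y]
        S[x][d + x] = P[x][x]   # x -> x*
        S[d + x][x] = 1.0       # x* -> x
    return S

# A 2x2 example with self-loops at both states; det(I - P) = 0.55.
P = [[0.3, 0.4], [0.2, 0.1]]
Pstar = double_self_loops(P)
```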
Funding.The first author is supported by the Fundamental Research Funds for the Central Universities.The second author is partially supported by NSFC, China (No. 11871162).
