Simulation reduction of the Ising model to general matchings

A distribution is tractable if it is possible to approximately sample from it in polynomial time. Here the ferromagnetic Ising model with unidirectional magnetic field is shown to be reducible to a standard distribution on matchings that is tractable. This provides an alternative to the original Jerrum and Sinclair approach for showing that the Ising distribution itself is tractable. Previous reductions of the Ising model to perfect matchings on different graphs exist, but those older distributions are not tractable. Also, the older reductions did not consider an external magnetic field, while the new reduction explicitly includes such a field. The new reduction also helps to explain why the idea of canonical paths is so useful in approximately sampling from both problems. In addition, the reduction allows any algorithm for matchings to be applied immediately to the Ising model. For instance, this immediately yields a fully polynomial time approximation scheme for the Ising model on a bounded degree graph with magnetization bounded away from 0, merely by invoking an existing algorithm for matchings.


Introduction
All problems in NP are reducible in polynomial time to an NP-complete problem, illustrating the difficulty of the NP-complete problem. In a similar fashion, it is often possible to reduce the problem of sampling from one distribution to sampling from a different distribution.
Given a set of problem instances Σ*, a simulation problem is a map from a ∈ Σ* to a finite state space Ω_a and probability measure π_a, where all subsets of Ω_a are measurable. A simulation algorithm is a randomized algorithm G that takes instances a ∈ Σ* and has output X ∼ π_a.
Informally, a simulation reduction is an algorithm that takes a draw for one simulation problem, and uses it to construct a draw for a different simulation problem.

Reducing Ising to general matchings
To be more precise, consider two simulation problems, f with instance space Σ*_f, and g with instance space Σ*_g. Then a simulation reduction from f to g is a map A that takes a problem instance b ∈ Σ*_f and returns a ∈ Σ*_g together with φ : Ω_a → Ω_b with the property that if Y ∼ π_a, then φ(Y) ∼ π_b. Write f ≤_SR g if such a simulation reduction exists. Of course, such a simulation reduction is only useful if φ is a quickly computable function, and if it is easy to simulate Y from π_a.
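In code, a simulation reduction is just composition: map the instance, sample from the target problem, and push the draw through φ. The sketch below is our own illustration (the toy reduction and sampler are invented for the example, not taken from the paper).

```python
import random

random.seed(0)

def reduce_and_sample(instance_b, reduction, sampler):
    """Draw from pi_b: `reduction` maps instance_b to (instance_a, phi),
    `sampler` draws Y ~ pi_{instance_a}; then phi(Y) ~ pi_b."""
    instance_a, phi = reduction(instance_b)
    y = sampler(instance_a)
    return phi(y)

# Toy example: pi_b is uniform on {0, 1}.  Reduce to pi_a uniform on
# {0, 1, 2, 3} with phi = parity, which preserves uniformity.
reduction = lambda b: (4, lambda y: y % 2)
sampler = lambda a: random.randrange(a)
draws = [reduce_and_sample(None, reduction, sampler) for _ in range(10000)]
```

The point of the total variation lemma discussed next is that this composition is robust: an approximate `sampler` yields output approximate at the same level.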
In many cases it is not possible to draw Y exactly from π_a; often only an approximate sampling method is available. This is the case when dealing with Markov chain Monte Carlo algorithms. One way of measuring how close the distribution of Y is to π_a (and the method used throughout this paper) is by total variation distance, defined as follows for finite Ω:

(1.1)  dist_TV(ν, π) := max_{A ⊆ Ω} |ν(A) − π(A)| = (1/2) Σ_{x∈Ω} |ν(x) − π(x)|.

The next lemma states that a simulation reduction applied to an approximate sample (as measured by total variation distance) returns samples that approximate the target distribution at the same level of approximation. The main result of this paper states that the subgraphs version of the Ising model is simulation reducible to the simulation problem of drawing matchings in a graph, in such a way as to give a new polynomial time algorithm for approximately sampling from the Ising model. This links work of Jerrum and Sinclair on Markov chains for matchings [6] and the Ising model [7]. The problems considered here are defined as follows.

Name. SPINS (Spins version of Ising model.)
Instance. A graph G = (V, E) together with weights V : E → (−∞, ∞) and B : V → (−∞, ∞).
Distribution. The state space is {−1, 1}^V with probability distribution

π(x) ∝ exp( Σ_{{i,j}∈E} V({i, j}) x(i) x(j) + Σ_{i∈V} B(i) x(i) ).

The V(i, j) measure the strength of interaction between nodes, while the B(i) measure the external magnetic field of the problem. When B(i) ≥ 0 for all i, say that the magnetic field is unidirectional. When V(i, j) ≥ 0 for all {i, j}, say that the model is ferromagnetic. The problem of a ferromagnetic unidirectional Ising model shall be denoted FUIS.

The high temperature expansion of the ferromagnetic unidirectional Ising model was originally suggested by van der Waerden [16]. The state space of this problem is {0, 1}^E, so each edge is assigned either a 1 or a 0 in a configuration. The following notation will be useful:

deg(i) := #{j : {i, j} ∈ E},  deg_x(i) := Σ_{j : {i,j}∈E} x({i, j}),

so deg(i) is the number of edges adjacent to i in the graph, and for a configuration x, deg_x(i) is the number of edges adjacent to i that are assigned the value 1 in the configuration. The form given here for the high temperature expansion follows [7]:

Name. HTEIS (High temperature expansion of ferromagnetic Ising model with unidirectional magnetic field.)

Instance.
A graph G = (V, E) together with edge weights λ : E → [0, 1] and node weights µ : V → [0, ∞).
Distribution. The state space is {0, 1}^E with probability distribution

π(x) ∝ Π_{e∈E} λ(e)^{x(e)} Π_{i : deg_x(i) odd} µ(i).

Several subproblems of HTEIS will be given their own names. In the special case that the input graph G = (V, E) has maximum degree 3, refer to the problem as HTEISDEG3.
In the case that µ(i) = 0 for all i (no external magnetic field) the problem is HTEISNOMAG. And if both the maximum degree of the graph is 3 and there is no magnetic field, call the problem HTEISDEG3NOMAG.
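The subgraphs-world weight behind this distribution is easy to compute directly. A minimal sketch (our own helper, taking the weight to be the product of λ(e) over on edges times µ(i) over odd-degree nodes, as above):

```python
def hteis_weight(nodes, lam, mu, x):
    """Unnormalized HTEIS weight of a configuration x (dict: edge -> 0/1):
    the product of lam[e] over on edges, times mu[i] over nodes i whose
    on-degree deg_x(i) is odd."""
    w = 1.0
    deg_x = {i: 0 for i in nodes}
    for e, val in x.items():
        if val:
            w *= lam[e]
            for i in e:
                deg_x[i] += 1
    for i in nodes:
        if deg_x[i] % 2 == 1:
            w *= mu[i]
    return w

# Path 1-2-3: turning on only edge {1,2} leaves nodes 1 and 2 with odd degree.
a, b = frozenset({1, 2}), frozenset({2, 3})
w_on = hteis_weight({1, 2, 3}, {a: 0.5, b: 0.5},
                    {1: 0.25, 2: 0.25, 3: 0.25}, {a: 1, b: 0})
```

Note that with µ identically 0, any configuration with an odd-degree node gets weight 0, which is why HTEISNOMAG concentrates on even subgraphs.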
Suppose that HTEIS has as input the same graph as FUIS, with λ(e) = tanh V(e) for all e ∈ E, and µ(i) = tanh B(i) for all i ∈ V. Then Newell and Montroll [13] showed that the two partition functions are related by

(1.3)  Z_spins = 2^{#V} ( Π_{e∈E} cosh V(e) ) ( Π_{i∈V} cosh B(i) ) Z_hteis.

The high temperature expansion was the form of the Ising model used by Jerrum and Sinclair [7] in their approximation algorithm for Z_spins. In [5], simulation reductions from HTEIS to FUIS and from FUIS to HTEIS are presented. These are linear time reductions.
The state space {0, 1}^E can be viewed as encoding a subset of edges of the graph, where x(e) = 1 denotes membership in the subset and x(e) = 0 means it is out. An edge e with x(e) = 1 will be referred to as on, while an edge e with x(e) = 0 is off. The next problem of perfect matchings is similar in that the state space is essentially a subset of edges of the graph, here chosen so that every node is adjacent to exactly one edge of the subset. That is, every node is adjacent to exactly one on edge. As with HTEIS, the weight of a set of edges is the product of the weights of the individual edges in the set.

Name. PMATCH (Perfect matchings of a graph.)
Instance. A graph G = (V, E) together with edge weights λ : E → [0, ∞) where there exists E' ⊆ E such that Π_{e∈E'} λ(e) > 0 and every node in V is adjacent to exactly one edge of E'.
Distribution. The state space is {0, 1}^E with probability distribution

π(x) ∝ 1(∀i ∈ V : deg_x(i) = 1) Π_{e : x(e)=1} λ(e),

where 1(expression) is the indicator function that evaluates to 1 if the expression is true and evaluates to 0 otherwise. As usual, the product over an empty set is taken to be 1.
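The PMATCH weight is equally mechanical to evaluate: the indicator checks that every node has exactly one on edge. A small sketch (our own helper and example graph):

```python
def pmatch_weight(nodes, lam, x):
    """Unnormalized PMATCH weight of x (dict: edge -> 0/1): the product of
    lam[e] over on edges when every node has exactly one on edge, else 0."""
    deg = {i: 0 for i in nodes}
    w = 1.0
    for e, val in x.items():
        if val:
            w *= lam[e]
            for i in e:
                deg[i] += 1
    return w if all(d == 1 for d in deg.values()) else 0.0

# 4-cycle 1-2-3-4: its two perfect matchings are {12, 34} and {23, 41}.
E = [frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (4, 1)]]
lam = dict(zip(E, [0.5, 0.25, 0.5, 0.25]))
w_pm = pmatch_weight({1, 2, 3, 4}, lam, {E[0]: 1, E[1]: 0, E[2]: 1, E[3]: 0})
w_bad = pmatch_weight({1, 2, 3, 4}, lam, {e: 1 for e in E})
```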
Given a nonnegative symmetric matrix A with an even number of rows, the value of Z_pmatch for the graph with A as its adjacency matrix is called the hafnian of A, a term introduced by Caianiello [2]. PMATCH is known in the physics literature as the dimer problem. In the monomer-dimer problem, the subset of edges has at most one edge adjacent to each node, yielding the following distribution on matchings.

Name. MATCH (Matchings of a graph.)
Instance. A graph G = (V, E) together with edge weights λ : E → [0, ∞) where there exists E' ⊆ E such that Π_{e∈E'} λ(e) > 0 and every node in V is adjacent to at most one edge of E'.
Distribution. The state space is {0, 1}^E with probability distribution

π(x) ∝ 1(∀i ∈ V : deg_x(i) ≤ 1) Π_{e : x(e)=1} λ(e).

Earlier work
A reduction from HTEIS to PMATCH in the special case where µ(i) = 0 for all i appears in [3]. This reduction built upon an earlier reduction in [12] (pp. 125-147), and shows how the Ising model with no external magnetic field can be reduced to sampling from perfect matchings on a graph linear in the size of the original. Unfortunately, the edge weights for the new distribution could be very large, and it is unknown how to sample effectively from PMATCH for such weights. The goal of the work in [3] was not a simulation reduction, but rather to use knowledge of the combinatorics of perfect matchings to better understand the Ising model on planar graphs. Therefore, the key issue in [3] was that the new graph be planar whenever the original was. The issue for the new work presented here is how quickly the new distribution can be used to generate samples from the old.
In [7], Jerrum and Sinclair showed that a particular Markov chain on HTEIS was rapidly mixing, thereby yielding the first method for approximately sampling from this distribution in polynomial time. However, to this date no one has discovered a polynomial time method for sampling from the perfect matchings distribution on general graphs with large weights. Therefore, in an algorithmic sense the reduction in [3] is unhelpful, as the problem the Ising model was reduced to is more difficult.
In this work we present algorithms that reduce HTEIS to PMATCH and HTEIS to MATCH. The HTEIS to MATCH reduction takes advantage of the unidirectional external magnetic field, and the weights associated with edges are polynomial in the weights assigned to the original problem. The result is that the new distribution can be approximately sampled from in polynomial time; that is, it provides an alternate means of showing that the ferromagnetic Ising model is tractable.
The chain of Jerrum and Sinclair was the first method for showing the Ising model is tractable [7], and they also presented a chain for the matchings problem that shows it is tractable for small edge weights [6]. Both analyses relied on using the idea of conductance to find the mixing time. The reductions presented here give some idea of why conductance should work well for both the matchings problem and the Ising model, since the high temperature Ising expansion can be effectively viewed as a special instance of the matchings distribution.
Our main result reduces the ferromagnetic Ising model with unidirectional external magnetic field simulation problem to the matchings simulation problem. That is:

Theorem 1.2. HTEIS ≤_SR MATCH.

Using algorithms for approximate sampling from MATCH together with the reduction gives an algorithm for sampling within total variation distance ε from HTEIS.
EJP 17 (2012), paper 33.
The proof is given at the end of Section 4. An algorithm that generates output within total variation distance ε of the target in time polynomial in the problem instance size and ln(ε^{-1}) is called a fully polynomial approximate sampling scheme, or FPASS [10].
The reduction presented here gives an alternate FPASS for HTEIS than the original given by Jerrum and Sinclair in [7].
For families of distributions of the form π(x) = w(x)/Z, where the w(x) are easily computable functions and the normalizing constant Z is often difficult to find exactly, it is well known that the ability to approximately sample from a family of distributions that are self-reducible gives a means to approximate Z (see [9, 15]). On the other hand, the ability to approximate Z across problem instances also gives the ability to approximately sample from π. The result is that a simulation reduction such as given above not only links the ability to generate samples, but also links the ability to approximate the partition function for HTEIS with that of MATCH.
The remainder of the paper is organized as follows. In the next section, results on the time to stationarity for Markov chain approaches to approximately sampling from HTEIS, PMATCH, and MATCH are discussed. Section 3 shows how HTEIS can be reduced to HTEISDEG3. Section 4 then gives our reduction from HTEISDEG3NOMAG to PMATCH and from HTEISDEG3 to MATCH. Section 5 then shows some applications of the reductions.

Approximately sampling HTEIS and MATCH with Markov chains
In this section, work on approximately sampling from HTEIS and MATCH using Markov chains is presented. Defining the total variation distance between two distributions as in equation (1.1), the mixing time of a finite state space ergodic Markov chain can be defined as, starting in state x, the number of steps necessary for the total variation distance from the stationary distribution to be at most ε. That is, for ergodic Markov chain M with state space Ω and stationary distribution π, the mixing time starting from state x is

τ_M(x, ε) := min{t : dist_TV(L(X_t | X_0 = x), π) ≤ ε},

where L(X_t | X_0 = x) is the distribution of X_t given that the chain began in state x.
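The total variation distance of (1.1) is straightforward to compute for small finite distributions; a sketch (our own helper):

```python
def tv_distance(p, q):
    """Total variation distance between distributions given as dicts
    mapping state -> probability: half the l1 distance, equivalently
    the maximum discrepancy |p(A) - q(A)| over events A."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in support)

d = tv_distance({'a': 1.0}, {'a': 0.5, 'b': 0.5})
```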
In [8] it is shown that a Markov chain for MATCH satisfies the mixing time bound (2.1). [Equation (2.1) is a slight modification of Proposition 12.4 of [8]. The λ has been replaced with λ^2, since the argument in [8] which reduced the λ^2 in [6] to λ is flawed.] In fact, by choosing the starting state to be x*, the matching of maximum weight (accomplished in O(#V^3) time with Hungarian algorithm variants [14]), the mixing time can be slightly improved to (2.2). In [7], Jerrum and Sinclair created a Markov chain for HTEIS and bounded the mixing time analytically when µ > 0. Using results from the proof of Theorem 7 of [7] together with Proposition 12.1 of [8], the mixing time for the Jerrum-Sinclair Markov chain for HTEIS starting at x*, the state where every edge is 0, satisfies the bound (2.3).

Reducing HTEIS to maximum degree 3
All of the reductions presented here have the same form. Given distribution π on Ω and π' on Ω', present a function φ (not necessarily 1-1) such that X' ∼ π' implies φ(X') ∼ π. The following easy lemma gives a simple condition on φ that guarantees this result.

Lemma 3.1. Suppose that π(x) = w(x)/Z is a distribution over Ω while π'(x') = w'(x')/Z' is a distribution over Ω'. Here Z = Σ_{x∈Ω} w(x) and Z' = Σ_{x'∈Ω'} w'(x') are the appropriate normalizing constants. Suppose φ : Ω' → Ω is a function that satisfies

(3.1)  Σ_{x' : φ(x')=x} w'(x') = C w(x) for all x ∈ Ω,

where C is a constant. Then if X' ∼ π' and X = φ(X'), then X ∼ π.
Proof. Fix x ∈ Ω, and note

P(X = x) = Σ_{x' : φ(x')=x} w'(x')/Z' = C w(x)/Z'.

Summing over x ∈ Ω gives 1 = CZ/Z', so P(X = x) = w(x)/Z = π(x).

The important feature of the condition in (3.1) is that it only relies on the weight functions (which are typically easy to compute) and not on the partition functions (which are usually difficult to compute). Throughout the rest of this work, assume that the graph G is connected: otherwise the weight functions for all our problems factor into the weights over connected components. This means that each connected component could be treated separately.
Reduction for subgraphs to maximum degree 3 with no magnetic field

As an example of how to apply this lemma, consider a reduction from [3] where it is shown how to reduce subgraph Ising from a graph containing nodes of degree greater than 3 to a graph where every degree is at most 3, when the µ(i) are identically 0.

Lemma 3.2. HTEISNOMAG ≤_SR HTEISDEG3NOMAG.
Proof. The construction is as follows. Consider a graph G = (V, E) with node i adjacent to j_1, ..., j_k, where k = deg(i) > 3. Build a new graph G' = (V', E') with the same nodes and edges as the original, except the node i is split into 2 nodes i_1 and i_2. For all a ∈ {1, ..., k − 2}, connect i_1 to j_a, and set λ({i_1, j_a}) = λ({i, j_a}). Then connect i_2 to j_{k−1} and j_k, and set λ({i_2, j_a}) = λ({i, j_a}) for a ∈ {k − 1, k}.
Finally, add edge {i_1, i_2} with edge weight 1. See Figure 1 for an illustration. Then φ maps as follows. For any edge e that is in both G and G', let [φ(x')](e) = x'(e). Otherwise, let [φ(x')]({i, j_a}) = x'({i_b, j_a}), where b is either 1 or 2 as appropriate. Now to check the conditions of Lemma 3.1. Fix x ∈ {0, 1}^E, and consider which x' ∈ {0, 1}^{E'} map into x under φ. Consider x where deg_x(i) is even (since µ ≡ 0, all other x have weight 0). Then either both i_1 and i_2 have even degree or both have odd degree, before edge {i_1, i_2} is considered. If both i_1 and i_2 have even degree then x'({i_1, i_2}) must be 0, and if both have odd degree then x'({i_1, i_2}) must be 1. So for any such x, there is exactly one x' of positive weight such that φ(x') = x.
The weights of all the edges of E are the same in E', and the weight of the extra edge in E' is 1, so w'(x') = w(φ(x')) = w(x), and Lemma 3.1 is satisfied with C = 1. Note that the degree of i_1 is k − 1, and the degree of i_2 is 3. The node i_1 can be split over and over again, continuing until the maximum degree of the graph is 3.

Now look at how many nodes and edges are in the new graph G'. In one splitting operation we move from G to a graph G_1 which has exactly one more node and edge than G, and the quantity Σ_i max{deg(i) − 3, 0} decreases by one. Repeating this step Σ_{i∈G} max{deg(i) − 3, 0} times yields a graph G' with Σ_{i∈G'} max{deg(i) − 3, 0} = 0, so every node must have degree at most 3. This sum can be bounded as:

Σ_{i∈V} max{deg(i) − 3, 0} ≤ Σ_{i∈V} (deg(i) − 1) = 2#E − #V.

Hence the number of nodes in G' is at most #V + 2#E − #V = 2#E, and the number of edges is at most #E + 2#E − #V ≤ 3#E.

Reduction for subgraphs to maximum degree 3 with magnetic field

Now we extend the reduction in [3] to nodes with an external magnetic field.
Lemma 3.3. HTEIS ≤_SR HTEISDEG3.

Proof. For graph G = (V, E), split node i of degree k > 3 into two nodes i_1 and i_2 as in the proof that HTEISNOMAG ≤_SR HTEISDEG3NOMAG, and consider the same map φ.
Let x be a configuration in G. Then there are two configurations x' that map to x under φ: one with x'({i_1, i_2}) = 1, and one with x'({i_1, i_2}) = 0. To check Lemma 3.1, it is necessary to consider different cases, based on the number of edges adjacent to i_1 and i_2.
To deal with these cases, observe that if deg_x(i) is odd, then one choice of x'({i_1, i_2}) makes deg_{x'}(i_1) odd and deg_{x'}(i_2) even, and the other choice of x'({i_1, i_2}) makes deg_{x'}(i_1) even and deg_{x'}(i_2) odd. Hence

Σ_{x' : φ(x')=x} w'(x') = [w(x)/µ(i)] (µ(i_1) + µ(i_2)).

If deg_x(i) is even, then one value of x'({i_1, i_2}) makes both deg_{x'}(i_1) and deg_{x'}(i_2) odd, and the other value makes both deg_{x'}(i_1) and deg_{x'}(i_2) even. In this case

Σ_{x' : φ(x')=x} w'(x') = w(x) (1 + µ(i_1)µ(i_2)).

To apply Lemma 3.1, these must equal the same constant times w(x), that is:

(µ(i_1) + µ(i_2)) / (1 + µ(i_1)µ(i_2)) = µ(i).

There are an infinite number of solutions to this equation for nonzero µ(i). Since µ(i) = tanh B(i), and tanh obeys the addition rule tanh(a + b) = (tanh a + tanh b)/(1 + tanh a tanh b), taking µ(i_1) = tanh(α_1 B(i)) and µ(i_2) = tanh(α_2 B(i)) with α_1 + α_2 = 1 gives a solution. Now repeat the process for i_2, breaking it into i_2 with degree 3 and i_3. Continue until you have nodes i_1, ..., i_{deg(i)−2}. Then to have a valid set of µ(i_ℓ), use µ(i_ℓ) = tanh(α_ℓ B(i)), where Σ_ℓ α_ℓ = 1.
The simplest choice is to make α_ℓ = (deg(i) − 2)^{-1} for all ℓ. The bound on the number of nodes and edges in the new graph is the same as in the no magnetic field case.
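The splitting step itself is pure graph surgery and is easy to sanity check in code. The sketch below (our own representation: a graph as an edge list, with weights omitted for brevity) repeatedly splits any node of degree above 3 as in the construction, after which the bounds of at most 2#E nodes and 3#E edges can be verified.

```python
from collections import defaultdict
import itertools

def split_high_degree(edge_list):
    """Repeatedly split a node of degree > 3 into two nodes joined by a
    new edge (the new edge would carry weight 1 in the reduction)."""
    edges = list(edge_list)
    fresh = itertools.count()
    while True:
        incident = defaultdict(list)
        for idx, (u, v) in enumerate(edges):
            incident[u].append(idx)
            incident[v].append(idx)
        node = next((n for n in incident if len(incident[n]) > 3), None)
        if node is None:
            return edges
        i2 = ('split', next(fresh))
        for idx in incident[node][-2:]:     # move two incident edges to i2
            u, v = edges[idx]
            edges[idx] = (i2, v) if u == node else (u, i2)
        edges.append((node, i2))            # the extra weight-1 edge

# Star with center 0 and five leaves: degree 5 requires two splits.
new_edges = split_high_degree([(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)])
deg = defaultdict(int)
for u, v in new_edges:
    deg[u] += 1
    deg[v] += 1
max_deg = max(deg.values())
```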
The downside of this construction is that the magnetic field is smaller at each of the duplicated nodes (unless µ(i) was 0 or 1 to begin with). The next lemma shows that the new µ vector is not too much smaller than the old.

Lemma 3.4. In the construction for an HTEISDEG3 problem given in the proof of the previous lemma, the magnetic field of a new node i' in G' coming from node i in G is bounded below by µ(i)/(deg(i) − 2).
Proof. The new node i' has magnetic field tanh(B(i)/(deg(i) − 2)), where µ(i) = tanh(B(i)). Therefore, to show the result it suffices to show that

(3.3)  tanh(γ/α) ≥ tanh(γ)/α for all α ≥ 1 and γ ≥ 0,

an unsurprising property to have since, writing tanh(γ) = f(e^{2γ}) with f(b) = (b − 1)/(b + 1), 2f(b) is a good approximation to ln(b) near b = 1.
Since f(1) = 0, property (3.3) holds at γ = 0. So by the Fundamental Theorem of Calculus, for all γ ≥ 0 and α ≥ 1, the difference in (3.3) is the integral of its derivative in γ. Both factors in the right hand side of the last inequality are nonnegative for α ≥ 1 and γ ∈ [0, 1], which completes the proof.

This means that the µ(i) have not been reduced too much. Furthermore, if µ(i) becomes too small by falling below (#V)^{-1}, it can be replaced by (#V)^{-1} without drastically changing the distribution. The following procedure makes this notion precise.
For all i ∈ V, set µ'(i) = max{µ(i), (#V)^{-1}}. Let X be a draw from HTEIS with external magnetic field µ'. Then with probability Π_{i : deg_X(i) odd} (µ(i)/µ'(i)), accept X as a draw from HTEIS with input µ. Otherwise, reject and draw X again. Repeat until acceptance occurs.
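As a sketch, the procedure looks as follows in code. The black-box sampler for the lifted field is an assumption here (in practice it would be a Markov chain run long enough), and the stub used in the demonstration is our own.

```python
import random

random.seed(1)

def draw_with_field(nodes, mu, draw_hteis):
    """Acceptance/rejection from the procedure above.  Lift the field to
    mu'(i) = max(mu(i), 1/#V), draw X under mu' via the assumed black-box
    sampler `draw_hteis` (returning a configuration and the set of its
    odd-degree nodes), and accept with probability equal to the product
    of mu(i)/mu'(i) over the odd-degree nodes."""
    n = len(nodes)
    mu_p = {i: max(mu[i], 1.0 / n) for i in nodes}
    while True:
        x, odd_nodes = draw_hteis(mu_p)
        p = 1.0
        for i in odd_nodes:
            p *= mu[i] / mu_p[i]
        if random.random() < p:
            return x

# Shape check with a stub sampler that always returns the empty
# configuration (no odd-degree nodes, so acceptance is immediate).
stub = lambda mu_p: ({}, set())
x = draw_with_field([1, 2, 3], {1: 0.0, 2: 0.1, 3: 1.0}, stub)
```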
The expected number of draws needed by the algorithm will be Z_spins(µ')/Z_spins(µ) (see for instance [4]), which is bounded by the following lemma.

Lemma 3.5. For #V ≥ 2, the above acceptance rejection algorithm accepts with probability at least 1/4.
Proof. Recall that µ(i) = tanh B(i), and define B'(i) so that tanh B'(i) = µ'(i). Then the relation (1.3) applies to both Z_spins(B) and Z_spins(B'). Now cosh(B(i)) ≥ 1, and each term in Z_spins(B') is larger than the corresponding term in Z_spins(B) by a factor of at most Π_{i∈V} exp(B'(i) − B(i)). Hence the ratio Z_spins(B')/Z_spins(B) is bounded, and an upper bound on the expected number of draws needed from HTEIS with external magnetic field µ' to obtain one draw from HTEIS with µ is 4.

Reduction of Ising to matchings and perfect matchings
We begin by reviewing the reduction of Fisher [3] showing HTEISNOMAG ≤_SR PMATCH. Since this reduction is for no external magnetic field, nodes of degree 1 must have their adjacent edge off, and so can be removed. Each remaining edge {i, j} in the original graph is turned into a pair of nodes {i_j, j_i} in the new graph, with λ'({i_j, j_i}) = 1/λ({i, j}).
For nodes i of degree 3 with neighbors j, k and ℓ, add edges {i_j, i_k}, {i_k, i_ℓ} and {i_ℓ, i_j}, each with weight 1. For nodes i of degree 2 with neighbors j and k, add edge {i_j, i_k} with weight 1. Figure 2 illustrates this procedure on a six node example. Consider a perfect matching x' in the new graph and the map [φ_1(x')]({i, j}) = x'({i_j, j_i}). It is straightforward to verify that since every node is adjacent to an edge in the perfect matching, under φ_1(x'), nodes i with deg(i) = 3 in G have either 1 or 3 on edges. Therefore the result is not an even subgraphs configuration. To ensure each node has even degree, Fisher used the map [φ_2(x')]({i, j}) = 1 − x'({i_j, j_i}).
Now fix a subgraphs state x. There is exactly one state x' with φ_2(x') = x. Moreover, since the exterior edge {i_j, j_i} satisfies x'({i_j, j_i}) = 1 − x({i, j}) and has weight 1/λ({i, j}), while all interior edges have weight 1,

w'(x') = Π_{e∈E} λ(e)^{x(e)−1} = ( Π_{e∈E} λ(e)^{-1} ) w(x).

Therefore Lemma 3.1 is satisfied with C = Π_{e∈E} λ(e)^{-1}, and the reduction is valid.
Reduction to perfect matchings for no external magnetic field

Here we show that HTEISDEG3NOMAG ≤_SR PMATCH using a new reduction that keeps edge weights smaller than Fisher's. This appeared in its general form as part of the second author's thesis [11]. As in the previous section, with magnetic field 0, nodes of degree 1 can be removed without changing the distribution.
As in [3], each edge {i, j} in the original graph is changed to two new nodes i_j and j_i, but now set λ'({i_j, j_i}) = λ({i, j}) so that the new weights stay in [0, 1].
A node i of degree two with neighbors j and k is split into two nodes i_j and i_k. Connect these two nodes with an edge {i_j, i_k} of weight 1.
A node i of degree three with neighbors j, k, and ℓ is split into four nodes: i_j, i_k, i_ℓ, and a center node i_0. Connect each pair of {i_j, i_k, i_ℓ} with an edge of weight 1/3. Connect i_0 to each of i_j, i_k and i_ℓ by an edge of weight 1. Figure 3 illustrates this procedure. Call edges of the form {i_j, j_i} in G', where {i, j} is an edge in G, exterior edges. Say that {i, j} in G corresponds to edge {i_j, j_i} in G'. All other edges of G' are interior edges. When deg(i) = 3, break the interior edges into two sets. Call {i_0, i_j}, {i_0, i_k}, and {i_0, i_ℓ} spoke edges and {i_j, i_k}, {i_k, i_ℓ} and {i_ℓ, i_j} wheel edges. All the spoke edges receive weight 1, while all the wheel edges receive weight 1/3.
Let x' be a perfect matching in the new graph. Then [φ(x')]({i, j}) = x'({i_j, j_i}). That is, an exterior edge gives the corresponding edge in the original graph the same value, while the values of interior edges are ignored. Note that at most one wheel edge and one spoke edge can be part of a perfect matching on the interior edges of a node.
Since i_0 must be matched to one of {i_j, i_k, i_ℓ}, the maximum degree of i under φ(x') is 2. Also, the remaining two of {i_j, i_k, i_ℓ} are either matched to each other (making the degree of i under φ(x') equal to 0) or they are not (making the degree of i under φ(x') equal to 2). Hence for any perfect matching x' in G' and degree 3 node i, deg_{φ(x')}(i) must be even. A similar argument shows that nodes with degree 2 in G also have even degree under φ(x').
The next lemma shows that the conditions of Lemma 3.1 are satisfied.

Lemma 4.1. Under the reduction above of subgraphs Ising with maximum degree 3 and no external magnetic field to perfect matchings, Σ_{x' : φ(x')=x} w'(x') = w(x).

Proof. Let x be a subgraphs configuration. Unlike Fisher's reduction, there may be more than one x' such that φ(x') = x. Consider a node i such that deg(i) = 3. Let the neighbors of i be j, k, and ℓ. Since deg_x(i) must be even, begin by supposing deg_x(i) = 2. Without loss of generality, x({i, j}) = x({i, k}) = 1 and x({i, ℓ}) = 0. The only way that x' maps to x while being a perfect matching is if the spoke edge {i_0, i_ℓ} is on, which contributes a factor of 1. Now suppose that deg_x(i) = 0. Then there are three x' such that φ(x') = x: the center i_0 is matched by a spoke edge to one of i_j, i_k, i_ℓ, and the remaining two of these nodes are matched to each other by a wheel edge. These three configurations each contribute an edge of weight 1 and one of weight 1/3 to the weight of x'. So if n_3(x) is the number of degree 3 nodes i with deg_x(i) = 0, there are 3^{n_3(x)} different x' that map to x, but each has edge weight factors of (1/3)^{n_3(x)}. Degree 2 nodes are easier: given x, there is exactly one choice for the interior edges in x' that maps to x, and this configuration always contributes a factor of 1.
Combining the degree 2 and degree 3 nodes then gives

Σ_{x' : φ(x')=x} w'(x') = 3^{n_3(x)} (1/3)^{n_3(x)} Π_{e∈E} λ(e)^{x(e)} = w(x).

Now consider the size of the new graph. Each node splits into at most 4 new nodes and creates at most 6 new edges. The previous reduction to maximum degree 3 yielded a graph with at most 2#E nodes and at most 3#E edges. Therefore after splitting, the result has at most 8#E nodes and at most 3#E + 6(2#E) = 15#E edges.
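Weight preservation in the style of Lemma 3.1 can be checked by brute force on a small example. The sketch below uses our own node naming, and a triangle as the original graph (degree 2 gadgets only, so the gadget graph is a weighted 6-cycle): it enumerates the perfect matchings of the gadget graph and confirms that their total weight, grouped by the subgraphs state they map to, matches the subgraphs weights with C = 1.

```python
from itertools import product
from collections import defaultdict

# Original graph G: a triangle on {0, 1, 2}; every node has degree 2.
lam = {frozenset({0, 1}): 0.5, frozenset({1, 2}): 0.7, frozenset({2, 0}): 0.9}

gadget = {}     # gadget edge -> weight
exterior = {}   # gadget edge -> corresponding edge of G
for e in lam:
    i, j = sorted(e)
    ge = frozenset({(i, j), (j, i)})
    gadget[ge] = lam[e]        # exterior edge keeps the original weight
    exterior[ge] = e
for i in range(3):             # interior edge joining the two copies of i
    j, k = sorted({0, 1, 2} - {i})
    gadget[frozenset({(i, j), (i, k)})] = 1.0

nodes = {(i, j) for i in range(3) for j in range(3) if i != j}
edges = list(gadget)

pm_weight = defaultdict(float)  # subgraphs state -> total matching weight
for bits in product([0, 1], repeat=len(edges)):
    on = [e for bit, e in zip(bits, edges) if bit]
    deg = defaultdict(int)
    w = 1.0
    for e in on:
        w *= gadget[e]
        for v in e:
            deg[v] += 1
    if all(deg[v] == 1 for v in nodes):      # perfect matching of G'
        x = frozenset(exterior[e] for e in on if e in exterior)
        pm_weight[x] += w
```

The only even subgraphs of a triangle are the empty set (weight 1) and the full triangle (weight the product of the three edge weights), and the enumeration recovers exactly those two totals.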
Reduction to matchings for nonzero magnetic field

Here it is shown that HTEISDEG3 ≤_SR MATCH. Begin by setting µ'(i) = max{µ(i), (#V)^{-1}} for each i ∈ V, so that µ' > 0. (Note we can use acceptance rejection as before to use samples from HTEISDEG3 with input µ' to obtain samples from HTEISDEG3 with input µ.) For degree 3 nodes, the splitting of the node into 4 nodes and 6 edges proceeds as in the previous section. For degree 2 nodes, the split is into 2 nodes with one extra edge, and degree 1 nodes stay as one node with no extra edges. The edge weights are as follows.
• Node i of degree 1: no interior edges are added, and α(i) = µ(i).
• Node i of degree 2: the single interior edge {i_j, i_k} receives weight µ(i)^{-2} − 1, and α(i) = µ(i)^{-1}.
• Node i of degree 3: Let r(i) be the smallest nonnegative solution to the cubic equation

(4.1)  µ(i)^2 (1 + r(i))^3 = 1 + 3r(i) + 3r(i)^2.

The spoke edges all receive weight r(i), the wheel edges receive weight r(i)^2/(1 + r(i)), and α(i) = µ(i)(1 + r(i)). (When r(i) = 0 the left hand side of (4.1) is at most the right hand side, but as r(i) goes to infinity the left hand side grows faster than the right since µ(i) > 0. Hence (4.1) has a smallest nonnegative solution.)

As in the no magnetic field case, the map is just [φ(x')]({i, j}) = x'({i_j, j_i}).

Lemma 4.2. Under the reduction above of subgraphs Ising with maximum degree 3 and positive external magnetic field to matchings, the conditions of Lemma 3.1 hold.

Proof. Suppose x' maps to x, so that all the exterior edges receive the same value. Let E_int(i) denote the interior edges associated with node i. Then w'(x') factors into a product over the exterior edges times, for each node i, the factor Π_{e∈E_int(i)} λ'(e)^{x'(e)}.
Since the exterior contribution is determined by x, it remains to control the interior factors. Given x, let A(i) denote all the possible choices of x'(E_int(i)) that do not violate the matching constraint on the interior edges for i. With this notation, the sum over preimages factors over nodes as

Σ_{x' : φ(x')=x} w'(x') = (exterior factor) Π_{i∈V} Σ_{x'(E_int(i))∈A(i)} Π_{e∈E_int(i)} λ'(e)^{x'(e)}.
So to finish the proof, it is necessary to show that for each node i, the quantity

(4.2)  α(i)^{deg_x(i)} [ Σ_{x'(E_int(i))∈A(i)} Π_{e∈E_int(i)} λ'(e)^{x'(e)} ] / µ(i)^{1(deg_x(i) odd)}

is a constant independent of x, where r(i) is chosen to satisfy (4.1).
To show (4.2), begin by letting i be a node with deg(i) = 2. That means α(i) = µ(i)^{-1} and the weight of the single interior edge is µ(i)^{-2} − 1. When deg_x(i) ∈ {1, 2}, the interior edge for i must have value 0, and if deg_x(i) = 0, then there are two choices for the interior edge. That gives two terms in the sum. The following table summarizes the possibilities.

deg_x(i) | α(i)^{deg_x(i)} / µ(i)^{1(deg_x(i) odd)} | Σ over A(i)
0        | 1                                        | 1 + (µ(i)^{-2} − 1) = µ(i)^{-2}
1        | µ(i)^{-1}/µ(i) = µ(i)^{-2}               | 1
2        | µ(i)^{-2}                                | 1

In all cases, the product of the last two columns is µ(i)^{-2}, showing (4.2) for degree 2 nodes.
Now let i be a node with deg(i) = 3. When deg_x(i) = 3, all interior edges must be off, so the sum over A(i) is 1. When deg_x(i) = 2, say with x({i, j}) = x({i, k}) = 1, the only choices are every interior edge off or the single spoke edge {i_0, i_ℓ} on, so the sum is 1 + r(i) in this case.

Continuing, consider when deg_x(i) = 1. Let j be the neighbor of i with x({i, j}) = 1, and k and ℓ be the neighbors of i with x({i, k}) = x({i, ℓ}) = 0. For the set A(i), either no interior edges are on or exactly one interior edge is on. (If two interior edges are on, the result cannot be a matching.) If one interior edge is on, either x'({i_0, i_k}) = 1, or x'({i_0, i_ℓ}) = 1, or x'({i_k, i_ℓ}) = 1. Therefore there are four configurations in A(i) (see Figure 4: node i of degree 3 in G, degree 1 in configuration x). From the weights on the spoke and wheel edges, this gives four terms in the sum:

1 + 2r(i) + r(i)^2/(1 + r(i)) = (1 + 3r(i) + 3r(i)^2)/(1 + r(i)) = µ(i)^2 (1 + r(i))^2,

where the last equality follows from our choice of r(i) in (4.1). When deg_x(i) = 0, there are ten different choices for the interior edges. One choice is for every interior edge to be off. Three more choices set a single spoke edge to be on, with all other edges off. Three more have a single wheel edge on, with all others off. The last three have exactly one spoke and one wheel edge on, and the rest off. Since any matching in the interior edges can have at most 1 spoke and 1 wheel edge on, this exhausts A(i). Spoke edges have weight r(i), while wheel edges have weight r(i)^2/(1 + r(i)), so the sum is

1 + 3r(i) + 3r(i)^2/(1 + r(i)) + 3r(i)^3/(1 + r(i)) = 1 + 3r(i) + 3r(i)^2 = µ(i)^2 (1 + r(i))^3.

These results are summarized in the following table:

deg_x(i) | α(i)^{deg_x(i)} / µ(i)^{1(deg_x(i) odd)} | Σ over A(i)
0        | 1                                        | µ(i)^2 (1 + r(i))^3
1        | 1 + r(i)                                 | µ(i)^2 (1 + r(i))^2
2        | µ(i)^2 (1 + r(i))^2                      | 1 + r(i)
3        | µ(i)^2 (1 + r(i))^3                      | 1

In all cases, the product of the second and third columns is µ(i)^2 (1 + r(i))^3, which completes the proof.
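The algebra in the two displayed sums can be checked numerically: solve (4.1) for r(i) by bisection and confirm that the interior sums collapse as claimed. (The solver below is our own illustration, not part of the reduction.)

```python
def solve_r(mu, hi=100.0, iters=200):
    """Smallest nonnegative root of mu^2 (1+r)^3 = 1 + 3r + 3r^2 (eq. 4.1),
    found by bisection: for mu in (0, 1) the cubic side starts below the
    quadratic side and eventually overtakes it."""
    f = lambda r: 1 + 3 * r + 3 * r * r - mu * mu * (1 + r) ** 3
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

mu = 0.5
r = solve_r(mu)
# deg_x(i) = 1: the four interior matchings (empty, two spokes, one wheel)
s1 = 1 + 2 * r + r * r / (1 + r)
# deg_x(i) = 0: the ten interior matchings
s0 = 1 + 3 * r + 3 * (r * r / (1 + r)) + 3 * (r * r * r / (1 + r))
```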
The next lemma shows the new edge weights are bounded by the product of the old weight and the inverses of the magnetic fields, up to a constant factor.

Lemma 4.3. Let {i, j} ∈ E. Then λ'({i_j, j_i}) ≤ 16 µ(i)^{-1} µ(j)^{-1} λ({i, j}).
All added edges in E' associated with node i have weight at most 3µ(i)^{-2}.
The main result can now be shown.
Proof of Theorem 1.2. Begin by modifying G = (V, E) to G' = (V', E') by splitting nodes until the maximum degree is 3. Then #V' and #E' are both Θ(#E). The original magnetic field at node i was µ(i); after splitting, the field at each resulting node is at least µ(i)/(deg(i) − 2).

Consequences of the Ising reduction
The purpose of any reduction is that existing methods for one problem can be immediately applied to the other. Recall the Jerrum and Sinclair chain for HTEIS on graph G = (V, E) has mixing time upper bounded by 2(#E)^2 (#V)^4 [(ln 2)#E + ln ε^{-1}], using the fact that any µ(i) < (#V)^{-1} can be replaced by (#V)^{-1} at little cost.
Then Lemma 4.3 gives an upper bound on the edge weights for the matching problem of 16(8#E)^2. Putting µ(i) ≥ 1/(8#E), #V' ≤ 8#E and #E' ≤ 15#E into (2.2) means the mixing time for the matchings distribution on the new graph is at most 7.6 · 10^9 (#E)^6 [15(ln 2)#E + ln ε^{-1}]. Naturally this is not intended to be a substitute for the Ising Markov chain, as direct analysis is almost always tighter than the analysis of a reduction. However, it is notable that the order of the running time using the reduction to matchings is the same as that of a direct analysis of Ising for bounded degree graphs.
With the reduction, any improvements or new algorithms for simulating matchings will automatically translate into new algorithms for the subgraphs world.The reduction also provides insight into why methods such as canonical paths work for both problems: because in the reduction sense they are the same problem.
Bayati et al. [1] have shown that for matchings in graphs of bounded degree and bounded edge weights it is possible to construct a deterministic fully polynomial time approximation scheme for computing the partition function for the set of matchings. To be precise, their result states:

Figure 3: New graph for degree 3 nodes