Distribution-Free Conditional Median Inference

We consider the problem of constructing confidence intervals for the median of a response $Y \in \mathbb{R}$ conditional on features $X \in \mathbb{R}^d$ in a situation where we are not willing to make any assumption whatsoever on the underlying distribution of the data $(X,Y)$. We propose a method based upon ideas from conformal prediction and establish a theoretical guarantee of coverage while also going over particular distributions where its performance is sharp. Additionally, we prove an equivalence between confidence intervals for the conditional median and confidence intervals for the response variable, resulting in a lower bound on the length of any possible conditional median confidence interval. This lower bound is independent of sample size and holds for all distributions with no point masses.


Introduction
Consider a dataset (X 1 , Y 1 ), . . . , (X n , Y n ) ⊆ R d × R and a test point (X n+1 , Y n+1 ), with all datapoints being drawn i.i.d. from the same distribution P . Given our training data, can we provide a confidence interval for the expected value µ(X n+1 ) = E[Y n+1 |X n+1 ]?
Methods for inferring the conditional mean are certainly not in short supply. In fact, most existing methods predict not just the conditional mean at a datapoint E[Y_{n+1} | X_{n+1}], but the full conditional mean function E[Y | X = x] for all x ∈ R^d. To the best of our knowledge, however, each approach relies on some assumptions in order to guarantee coverage. For instance, the classical linear regression model is widely used but is only accurate if Y | X is normal with mean µ(x) = E[Y | X = x] affine in x and standard deviation independent of x. Nonparametric regressions cannot estimate the conditional mean without imposing smoothness conditions and assuming that the conditional distribution is sufficiently light-tailed. Since reliable conditional mean inference is a very common problem, these methods are nevertheless used all the time, e.g. in predicting disease survival times, classifying spam emails, pricing financial assets, and more. The issue is that the assumptions these methods make rarely hold in practice. Thus, the question remains: is it possible to estimate the conditional mean at a datapoint in a distribution-free setting, with no assumptions on P?
It turns out that it is not only impossible to get a nontrivial confidence interval for the conditional mean E[Y |X = X n+1 ], but it is actually impossible to get a confidence interval for E [Y ] itself. This result originates in Bahadur and Savage [1956], where the authors show that any parameter sensitive to tails of a distribution cannot be estimated when no restrictions exist on the distribution class; an example of a distribution with a non-estimable mean is given in Appendix B.3.
Thus, within the distribution-free setting, making progress on inferring the conditional mean requires a modification to the problem statement. One strategy is to restrict the range of Y . An example of this is in Barber [2020], which introduces an algorithm that calculates a confidence interval for the conditional mean in the case where Y ∈ {0, 1}. However, even with this restriction, Barber shows that there exists a fundamental bound limiting how small any such confidence interval can be.
The other strategy is to modify the measure of central tendency that we study. Bahadur and Savage's result suggests that the best parameters to study are robust to distribution outliers; this observation motivates our investigation of the conditional median.
In a nutshell, the conditional median is possible to infer because of the strong and quantifiable relationship between any particular sampled datapoint (X i , Y i ) and Median(Y |X = X i ). Its robustness to outliers means that even within the distribution-free setting, there is no need to worry about 'hidden' parts of the distribution. Additionally, there already exists a well-known algorithm for estimating Median(Y ) given a finite number n of i.i.d. samples. Explored in Noether [1972] and covered in Appendix B.4, this algorithm produces intervals with guaranteed rates of coverage and widths going to zero as the sample size goes to infinity, suggesting that an algorithm for the conditional median might also perform well. We note that there already exists literature dealing with estimating the conditional quantile through quantile regression; Koenker [2005] provides an introduction to different methods. The difference between quantile regression and our work here is that quantile regression requires continuity conditions and only provides theoretical guarantees at the asymptotic level, as seen in Belloni et al. [2019], Takeuchi et al. [2006], and Oberhofer and Haupt [2016]; the methods described here provide coverage guarantees on finite sample sizes and require no assumptions.
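To make the classical marginal-median construction concrete, the following is a minimal sketch in the spirit of the order-statistic interval discussed in Noether [1972]; the function name `median_ci` and the symmetric choice of order statistics are our own illustration, not the paper's. The only fact used is that the number of samples falling below the median is Binomial(n, 1/2).

```python
import math

def median_ci(samples, alpha=0.1):
    """Distribution-free confidence interval for Median(Y).

    Pick the largest a with P(Bin(n, 1/2) <= a - 1) <= alpha / 2 and
    return [Y_(a), Y_(n+1-a)] (order statistics).  The interval misses
    the median only if fewer than a samples fall below it, or fewer
    than a fall above it; each event has probability at most alpha / 2.
    """
    n = len(samples)
    ys = sorted(samples)

    def binom_cdf(j):  # P(Bin(n, 1/2) <= j), computed exactly
        return sum(math.comb(n, i) for i in range(j + 1)) * 0.5 ** n

    a = 0
    while a < (n + 1) // 2 and binom_cdf(a) <= alpha / 2:
        a += 1
    if a == 0:  # n too small to certify 1 - alpha coverage
        return (-math.inf, math.inf)
    return (ys[a - 1], ys[n - a])
```

For n = 100 and α = 0.1 this selects the 42nd and 59th order statistics; the interval width shrinks to zero as n grows, which is the behavior that motivates hoping for a conditional analogue.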
Our goal in this paper is to combine ideas from regular median inference with procedures from distribution-free inference in order to understand how well an algorithm can cover the conditional median and, more generally, conditional quantiles. In particular, we want to see if the properties of the median and quantiles lead to a valid inference method while also examining the limits of this inference.
It is important to note that the quantity we are attempting to study is Median(Y |X = X n+1 ), not Median(Y |X = x). The first term is a random variable dependent on X n+1 , whereas the second is fixed and exists for all x in the support of P . We focus on the first quantity because we are in the distribution-free setting; as we know nothing about the class of distributions that P belongs to, it is more tractable to make inferences about datapoints as opposed to pointwise across the full distribution.
Another way to frame this is to think of our goal as to cover the value of the conditional median function when weighting by the marginal distribution P X , as opposed to covering the full conditional median function over all x ∈ R d . Because we are predicting the conditional median at an unknown value X n+1 , our success is not measured by whether or not we predict the conditional median correctly at all possible x ∈ R d , but by how often we predict the conditional median correctly across P X .
The methods used in this paper are similar to those from distribution-free predictive inference, which focuses on predicting Y n+1 from a finite training set. The field of conformal predictive inference began with Vovk et al. [2005] and was built up by works such as Shafer and Vovk [2008] and Vovk et al. [2009]; it has been generating interest recently due to its versatility and lack of assumptions. Applications of conformal predictive inference range from criminal justice to drug discovery, as seen in Romano et al. [2020] and Cortés-Ciriano and Bender [2019] respectively.
While this paper relies on techniques from predictive inference, our focus is on parameter inference, which is quite different from prediction because it focuses on predicting a function of the conditional distribution Y n+1 |X n+1 as opposed to the datapoint Y n+1 . For example, using sample datapoints from an unknown normal distribution to estimate the true mean of the distribution would constitute parameter inference; using the estimated quantiles of the datapoints to create a predictive confidence interval for the next datapoint would constitute predictive inference. Whereas predictive inference exploits the fact that (X n+1 , Y n+1 ) is exchangeable with the sample datapoints, parameter inference requires another layer of analysis, as (X n+1 , Median(Y n+1 |X n+1 )) is not exchangeable with (X i , Y i ). This additional complexity demands modifying approaches from predictive inference to produce valid parameter inference.

Terminology
We begin by setting up definitions to formalize the concepts above. Throughout this paper, we assume that any distribution (X, Y ) ∼ P is over R d × R unless explicitly stated otherwise. Each result in this paper holds true for all values of d and n.
Given a feature vector X_{n+1}, we let Ĉ_n(X_{n+1}) ⊆ R denote a confidence interval for some functional of the conditional distribution Y_{n+1} | X_{n+1}. Note that we use the phrase confidence interval for convenience; in its most general form, Ĉ_n(X_{n+1}) is a subset of R. This interval is a function of the point X_{n+1} at which we seek inference as well as of our training data D = {(X_1, Y_1), . . . , (X_n, Y_n)}. We write Ĉ_n to refer to the general algorithm that maps D to the resulting confidence intervals Ĉ_n(x) for each x ∈ R^d.
In order for Ĉ_n to be useful, we want it to capture, or contain, the parameter we care about with high probability. We formalize this as follows:
Definition 1. We say that Ĉ_n satisfies distribution-free median coverage at level 1 − α, denoted by (1 − α)-Median, if for all distributions P,
P{Median(Y_{n+1} | X_{n+1}) ∈ Ĉ_n(X_{n+1})} ≥ 1 − α.
Definition 2. For 0 < q < 1, let Quantile_q(Y_{n+1} | X_{n+1}) refer to the qth quantile of the conditional distribution Y_{n+1} | X_{n+1}. We say that Ĉ_n satisfies distribution-free quantile coverage for the qth quantile at level 1 − α, denoted by (1 − α, q)-Quantile, if for all distributions P,
P{Quantile_q(Y_{n+1} | X_{n+1}) ∈ Ĉ_n(X_{n+1})} ≥ 1 − α.
The probabilities in Definitions 1 and 2 are both taken over the training data D = {(X_1, Y_1), . . . , (X_n, Y_n)} and test point X_{n+1}. Note that satisfying (1 − α)-Median is equivalent to satisfying (1 − α, 0.5)-Quantile.
These concepts are similar to predictive coverage, with the key difference being that our goal is now to predict a function of X n+1 rather than a new datapoint Y n+1 .
Definition 3. We say that Ĉ_n satisfies distribution-free predictive coverage at level 1 − α, denoted by (1 − α)-Predictive, if for all distributions P,
P{Y_{n+1} ∈ Ĉ_n(X_{n+1})} ≥ 1 − α,
where this probability is taken over the training data D = {(X_1, Y_1), . . . , (X_n, Y_n)} and test point (X_{n+1}, Y_{n+1}).
Finally, we define the type of conformity scores that we will use in our general quantile inference algorithm.
Definition 4. We say that a function f : R^d × R → R is a locally nondecreasing conformity score if, for all x ∈ R^d and y, y′ ∈ R with y ≤ y′, we have f(x, y) ≤ f(x, y′).

Summary of Results
We find that there exists a distribution-free predictive inference algorithm that satisfies both (1 − α/2)-Predictive and (1 − α)-Median. Moreover, an improved version of this algorithm also satisfies (1 − α, q)-Quantile. Together, these results prove that there exist nontrivial algorithms Ĉ_n that satisfy (1 − α)-Median and (1 − α, q)-Quantile for all 0 < q < 1. We go on to show that conditional median inference and predictive inference are nearly equivalent problems. Specifically, we show that any algorithm that contains Median(Y_{n+1} | X_{n+1}) with probability 1 − α must also contain Y_{n+1} with probability at least 1 − α, and that any algorithm that contains Y_{n+1} with probability at least 1 − α/2 must contain Median(Y_{n+1} | X_{n+1}) with probability 1 − α.
Taken together, these results give us somewhat conflicting perspectives. On the one hand, there exist distribution-free algorithms that capture the conditional median and conditional quantile with high likelihood; on the other hand, any such algorithm will also capture a large proportion of the distribution itself, putting a hard limit on how well such algorithms can ever perform.

Confidence Intervals for the Conditional Median
This section proves the existence of algorithms obeying distribution-free median and quantile coverage. We then focus on situations where these algorithms are sharp.

Basic Conditional Median Inference
Algorithm 1 below operates by taking the training dataset and separating it into two halves of sizes n_1 + n_2 = n. Next, a regression algorithm μ̂ is trained on D_1 = {(X_1, Y_1), . . . , (X_{n_1}, Y_{n_1})}. The residuals Y_i − μ̂(X_i) are calculated for n_1 < i ≤ n, and the 1 − α/2 quantile of the absolute values of these residuals is then used to create a confidence band around the prediction μ̂(X_{n+1}). The expert will recognize that this is identical to a well-known algorithm from predictive inference, as explained later.
The proof of Theorem 1 is covered in Appendix A.1.
Remark 2.1. Algorithm 1 works independently of how μ̂ is trained; this means that any regression function may be used, from simple linear regression to more complicated machine learning algorithms. It is important to note that μ̂(x) does not need to be an estimate of the true conditional mean µ(x); while one option is to train it to predict the conditional mean, it can be fit to predict the conditional median, conditional quantile, or any other measure of central tendency. The best choice of target for μ̂ may depend on one's underlying belief about the distribution.
We return at last to the connection with predictive inference. Introduced in Papadopoulos et al. [2002] and Vovk et al. [2005] and studied in Lei et al. [2013], Barber et al. [2021b], Romano et al. [2020], and several other papers, the split conformal method was initially created to achieve distribution-free predictive coverage guarantees. In particular, Vovk et al. [2005] shows that Algorithm 1 satisfies (1 − α/2)-Predictive, implying that in order to capture Median(Y n+1 |X n+1 ) with probability 1 − α, our algorithm produces a wider confidence interval than an algorithm trying to capture Y n+1 with the same probability.

General Conditional Quantile Inference
Algorithm 1 is a good first step towards a usable method for conditional median inference; however, it may be too rudimentary to be used in practice. Algorithm 2 is a more general version of Algorithm 1 that results in conditional quantile coverage and better empirical performance. This provides a better understanding of how diverse parameter inference algorithms can be.
Algorithm 2 fits two conformity scores f_lo and f_hi, and differs from Algorithm 1 in two ways. First, we use the rq quantile of the lower scores to create the confidence interval's lower bound, and the 1 − s(1 − q) quantile of the upper scores (corresponding to the top s(1 − q) fraction of the score distribution) for the upper bound. Second, the functions we fit are no longer regression functions, but instead locally nondecreasing conformity scores. These scores are described in Definition 4; see Remark 2.3 for examples.
The proof of Theorem 2 is covered in Appendix A.2 and is similar to that of Theorem 1; the main modifications come from the changes described above. Regarding the first change, the asymmetrical quantiles on the lower and upper end of the E i 's balance the fact that datapoints have asymmetrical probabilities of being on either side of the conditional quantile. Regarding the second change, because the conformity scores E i still preserve relative ordering, they do not affect the relationship between datapoints and the conditional quantile.
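To make the asymmetric calibration concrete, here is a hedged sketch specialized to shift-type scores f_lo(x, y) = y − g_lo(x) and f_hi(x, y) = y − g_hi(x), for which inverting the score is a simple addition; the function name `split_conformal_quantile` and the quantile index conventions are our own assumptions, not the paper's.

```python
import math

def split_conformal_quantile(data, x_new, fit, q=0.5, r=0.05, s=0.05):
    """Sketch of the general quantile procedure for shift-type scores.

    `fit` maps the first half of the data to a pair (g_lo, g_hi); the
    calibration scores are Y_i - g_lo(X_i) (lower) and Y_i - g_hi(X_i)
    (upper).  The lower bound uses the r*q empirical quantile of the
    lower scores; the upper bound uses the 1 - s*(1-q) quantile of the
    upper scores, mirroring the asymmetric tail masses in the text.
    """
    n = len(data)
    d1, d2 = data[: n // 2], data[n // 2 :]
    g_lo, g_hi = fit(d1)
    e_lo = sorted(y - g_lo(x) for x, y in d2)
    e_hi = sorted(y - g_hi(x) for x, y in d2)
    n2 = len(d2)
    k_lo = max(0, math.floor(r * q * (n2 + 1)) - 1)
    k_hi = min(n2 - 1, math.ceil((1 - s * (1 - q)) * (n2 + 1)) - 1)
    return (g_lo(x_new) + e_lo[k_lo], g_hi(x_new) + e_hi[k_hi])
```

With q = 0.5 and r = s = α/2 this reduces to an asymmetric variant of Algorithm 1; for q near 1 the lower calibration quantile sits much deeper in the score distribution than the upper one, reflecting the unequal chances of a datapoint landing on either side of the conditional quantile.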
Remark 2.2. One possible choice for r and s is r = s = α/2. This is motivated by the logic that r and s decide the probabilities of failure at the lower bound and the upper bound, respectively; if we want the bound to be equally accurate on both ends, it makes sense to set r and s equal. Another choice is r = (1 − q)α and s = qα; this makes the lower tail mass rq and the upper tail mass s(1 − q) equal, with the algorithm taking the q(1 − q)α quantile of the scores at the lower end and the 1 − q(1 − q)α quantile at the upper end.
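The arithmetic behind the second choice is easy to verify numerically; the snippet below is our own check, not part of the paper.

```python
# With r = (1 - q) * alpha and s = q * alpha, the lower quantile level
# r*q and the upper tail mass s*(1 - q) coincide at q*(1 - q)*alpha.
alpha = 0.1
for q in (0.1, 0.25, 0.5, 0.8):
    r, s = (1 - q) * alpha, q * alpha
    assert abs(r * q - s * (1 - q)) < 1e-12
    assert abs(r * q - q * (1 - q) * alpha) < 1e-12
```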
Remark 2.3. The versatility of the conformity scores f lo and f hi is what differentiates Algorithm 2 from Algorithm 1 and makes it a viable option for conditional quantile inference. Below are a few examples of possible scores and the style of intervals they produce.
1. f_lo(x, y) = f_hi(x, y) = y − μ̂(x), where μ̂ is trained on {(X_i, Y_i) : i ∈ I_1} as a central tendency estimator. This is the conformity score used in Algorithm 1 and Vovk et al. [2005], resulting in a confidence interval of the form [μ̂(X_{n+1}) + c_lo, μ̂(X_{n+1}) + c_hi] for some c_lo, c_hi ∈ R. This score is best when the conditional distribution Y | X = x is similar for all x and either the mean or median can be estimated with reasonable accuracy.
2. f_lo(x, y) = f_hi(x, y) = (y − μ̂(x))/σ̂(x), where μ̂ and σ̂ are trained on {(X_i, Y_i) : i ∈ I_1} as a central tendency estimator and conditional absolute deviation estimator, respectively. This score results in a confidence interval of the form [μ̂(X_{n+1}) + c_lo σ̂(X_{n+1}), μ̂(X_{n+1}) + c_hi σ̂(X_{n+1})] for some c_lo, c_hi ∈ R. Unlike the previous example, this score no longer results in a fixed-length confidence interval; it is best used when there is high heteroskedasticity in the underlying distribution. This is the conformity score used to create adaptive predictive intervals in Lei et al. [2018]. Note that a normalization constant γ > 0 can be added to the denominator σ̂(X_i) to create stable confidence intervals.
3. f_lo(x, y) = y − Q̂_lo(x) and f_hi(x, y) = y − Q̂_hi(x), where Q̂_lo and Q̂_hi are trained on {(X_i, Y_i) : i ∈ I_1} as estimators of a lower and an upper conditional quantile, respectively. These scores are best when one can estimate the conditional quantiles reasonably well and the conditional distribution Y | X = x is heteroskedastic. Note that if Q̂_lo and Q̂_hi are trained well, then the resulting confidence interval will be approximately [Q̂_lo(X_{n+1}), Q̂_hi(X_{n+1})]. These are the scores used to create the predictive intervals seen in Romano et al. [2019] and Sesia and Candès [2020].
4. f_lo(x, y) = f_hi(x, y) = F̂_{Y|X=x}(y), where F̂_{Y|X=x} is trained on {(X_i, Y_i) : i ∈ I_1} to be the estimated cumulative distribution function of the conditional distribution Y | X. Using this score results in a confidence interval whose endpoints are estimated conditional quantiles, similar to the predictive intervals in Kivaranovic et al. [2020]. This can be a good approach when the conditional distribution Y | X is particularly complex.
5. f_lo(x, y) = f_hi(x, y) = log y − μ̂(x), where μ̂ is trained on {(X_i, log Y_i) : i ∈ I_1} as a log central tendency estimator. This results in a confidence interval of the form [c_lo exp(μ̂(X_{n+1})), c_hi exp(μ̂(X_{n+1}))] for some c_lo, c_hi ∈ R_+. This score works well when Y is known to be positive and one wants to minimize the approximation ratio; it is equivalent to taking a log transformation of the data.
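As a concrete instance of the scaled-residual idea, the hedged sketch below (the name `scaled_residual_interval` and index conventions are our own) calibrates the score (y − μ̂(x))/(σ̂(x) + γ) on a held-out set and inverts it, so the interval width adapts to the local spread estimate σ̂:

```python
import math

def scaled_residual_interval(cal, x_new, mu_hat, sigma_hat, alpha=0.1, gamma=1e-3):
    """Calibrate the scaled-residual score on `cal`, a held-out list of
    (x, y) pairs, then invert it at x_new.

    The interval is mu_hat(x_new) + [c_lo, c_hi] * (sigma_hat(x_new) + gamma),
    so its width scales with sigma_hat(x_new); gamma > 0 keeps the score
    stable where sigma_hat is tiny.
    """
    scores = sorted((y - mu_hat(x)) / (sigma_hat(x) + gamma) for x, y in cal)
    m = len(scores)
    k_lo = max(0, math.floor((alpha / 2) * (m + 1)) - 1)
    k_hi = min(m - 1, math.ceil((1 - alpha / 2) * (m + 1)) - 1)
    w = sigma_hat(x_new) + gamma
    return (mu_hat(x_new) + scores[k_lo] * w, mu_hat(x_new) + scores[k_hi] * w)
```

Doubling σ̂ at a point doubles the interval width there, which is exactly the adaptive behavior that distinguishes this score from the fixed-width residual score.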
In general, a good choice for f lo (X i , Y i ) and f hi (X i , Y i ) depends on one's underlying belief about the distribution as well as on the sample size n, though some scores perform better in practice. The choice of n 1 and n 2 represents a balance between model training and interval tightness; increasing n 1 increases the amount of data for f lo and f hi , and increasing n 2 results in a better quantile for the predictive interval. Sesia and Candès [2020] contains more information on the effect of the conformity score on the size of predictive intervals as well as the impact of the ratio n 1 /n on interval width and coverage. We also simulate the impact of different scores on conditional quantile intervals in Section 4.
Remark 2.4. Algorithm 2 can be generalized to use more than one split, resulting in multiple confidence intervals that can then be combined into one output. For more information, Vovk et al. [2018] describes how to apply kfold cross validation to split conformal inference, and Barber et al. [2021a] describes the special case of having n different splits using the jackknife+ procedure.

Algorithm Sharpness
Now that we have seen that Algorithms 1 and 2 achieve coverage, an important question to ask is whether or not the terms for the error quantile can be improved. Do our methods consistently overcover the conditional median, and if so, is it possible to take a lower quantile of the error terms and still have Theorems 1 and 2 hold? In this section, we prove that this is impossible by going over a particular distribution P_δ for which the 1 − α/2 term in Algorithm 1 is necessary. Additionally, we go over a choice μ̂_c with the property that Algorithm 1 always results in 1 − α/2 coverage when run with input μ̂_c; this implies that there does not exist a distribution with the property that Algorithm 1 will always provide a sharp confidence interval for the conditional median regardless of the regression algorithm. For 0 < δ < 0.5, define the distribution P_δ over R × R by X ∼ Unif[−0.5, 0.5] and Y = BX, where B ∈ {0, 1} is an independent Bernoulli variable with P(B = 1) = 0.5 + δ. That is, 0.5 + δ of the distribution is on the line segment Y = X from (−0.5, −0.5) to (0.5, 0.5), and 0.5 − δ of the distribution is on the line segment Y = 0 from (−0.5, 0) to (0.5, 0). Thus, Median(Y | X = x) = x. A visualization of P_δ is shown in Figure 1. We know that Algorithm 1 is accurate for all distributions P and all algorithms μ̂. Consider the regression algorithm μ̂ : R → R such that μ̂(x) = 0 for all x ∈ R; in other words, μ̂ predicts Y_i = 0 for all X_i. We show that Algorithm 1 returns a coverage almost exactly equal to 1 − α.
Theorem 3. For all ε > 0, there exist N and δ > 0 such that if we sample n > N datapoints from the distribution P_δ and use Algorithm 1 with μ̂ = 0 as defined above and n_1 = n_2 = n/2 to get a confidence interval for the conditional median, then the probability that this interval contains Median(Y_{n+1} | X_{n+1}) is at most 1 − α + ε. The proof is in Appendix A.3. Theorem 3 does not directly prove that the 1 − α/2 term in Algorithm 1 is sharp. However, we can see that if Algorithm 1 used the 1 − α′/2 quantile of the residuals with α′ > α, then by Theorem 3 there would exist a choice of δ and n where the probability of conditional median coverage would be less than 1 − α. Therefore, the 1 − α/2 term is required for the probability of coverage to always be at least 1 − α.
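The mechanism behind Theorem 3 can be checked empirically. The sketch below is our own simulation, assuming the description of P_δ above (X uniform on [−0.5, 0.5]; Y = X with probability 0.5 + δ, else Y = 0, so Median(Y | X = x) = x); the function name and quantile index convention are our own.

```python
import math
import random

def pdelta_coverage(n=4000, n_test=4000, alpha=0.1, delta=0.01, seed=0):
    """Monte Carlo coverage of Algorithm 1 on P_delta with mu_hat = 0.

    With mu_hat = 0 the calibration residuals are |Y_i|, and the output
    interval is [-q, q] for q the 1 - alpha/2 residual quantile; the
    conditional median x is covered exactly when |X_{n+1}| <= q, so the
    coverage rate is P(|X| <= q), which sits just above 1 - alpha.
    """
    rng = random.Random(seed)

    def draw():
        x = rng.uniform(-0.5, 0.5)
        y = x if rng.random() < 0.5 + delta else 0.0
        return x, y

    resid = sorted(abs(y) for _, y in (draw() for _ in range(n)))
    k = min(n - 1, math.ceil((1 - alpha / 2) * (n + 1)) - 1)
    q = resid[k]
    hits = sum(abs(draw()[0]) <= q for _ in range(n_test))
    return hits / n_test
```

As δ → 0 the coverage approaches 1 − α = 0.9 from above, which is the content of Theorem 3: on P_δ the 1 − α/2 calibration quantile yields essentially no slack.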
Remark 2.5. It is possible to generalize Theorem 3 to Algorithm 2 as well: modifying the distribution P_δ accordingly, it can be shown for large n and small δ that Algorithm 2 returns a confidence interval that has a conditional quantile coverage of at most 1 − α + ε, meaning that the rq and 1 − s(1 − q) terms in the quantiles for the error scores are sharp.
These results may seem somewhat pedantic because we are restricting μ̂(x) to be the zero function and f_lo(X_i, Y_i) and f_hi(X_i, Y_i) to be Y_i; this simplification is done to better illustrate our point. Even when f_lo(X_i, Y_i) and f_hi(X_i, Y_i) are trained using more complicated approaches, there still exist distributions that result in only 1 − α coverage for Algorithm 2. For an example of a distribution where Algorithm 2 only achieves 1 − α coverage for standard conformity scores f_lo(X_i, Y_i) and f_hi(X_i, Y_i), refer to P_3 in Section 4. The existence of P_δ and similarly 'confusing' distributions helps to show why capturing the conditional median can be tricky in a distribution-free setting.
At the same time, there exist conformity scores for which Algorithms 1 and 2 have rates of coverage that are always near 1 − α/2. For c > 0, define the randomized regression function μ̂_c as follows: set M = max_{i∈I_1} |Y_i| and take μ̂_c(x) ≡ ξ for a single independent draw ξ ∼ N(0, (cM)²). We prove the following: Theorem 4. For all ε > 0, there exist c and N such that for all n > N, there is a split n_1 + n_2 = n such that when Algorithm 1 is run using the regression function μ̂_c on n datapoints with I_1 of size n_1 and I_2 of size n_2, the resulting interval will be finite and will contain Median(Y_{n+1} | X_{n+1}) with probability at least 1 − α/2 − ε. The proof for this theorem is in Appendix A.4.
Remark 2.6. Theorem 4 can be extended to Algorithm 2 by taking f_lo and f_hi to be analogously randomized conformity scores. The corresponding result shows that given a large enough number of datapoints and a particular data split, there exist conformity scores that result in Algorithm 2 capturing the conditional quantile nontrivially with probability at least 1 − rq − s(1 − q) − ε. Due to the definition of μ̂_c, the resulting confidence intervals will be near-useless; the predictions will be so far off that the intervals will have width several times the range of the (slightly clipped) marginal distribution P_Y. However, they will still be finite, and will still achieve predictive inference at a rate roughly equal to 1 − α/2. The existence of scores that always result in higher-than-needed rates of coverage means that any result like Theorem 3 that provides a nontrivial upper bound for either Algorithm 1 or Algorithm 2's coverage of the conditional median on a specific distribution will have to restrict the class of regression functions and/or conformity scores.

Median Intervals and Predictive Intervals are Equivalent
Up until this point, we have looked at the existence and accuracy of algorithms for estimating the conditional median. This section shows that any algorithm for the conditional median is also a predictive algorithm, and vice versa. As a consequence, this means there exists a strong lower bound on the size of any conditional median confidence interval.
Proof. The proof uses the same approach as the proof of Theorem 1 in Barber [2020]. Consider an arbitrary Ĉ_n that satisfies (1 − α)-Median, and let P be any distribution over R^d × R for which P_X is nonatomic.
Fix a large integer M and let L be a set of M datapoints drawn i.i.d. from P. We define two different ways of sampling our data from L.
First, fix L and pick (X_1, Y_1), . . . , (X_{n+1}, Y_{n+1}) uniformly without replacement from L; call this method of sampling Q_1. It is clear that after marginalizing over L, the (X_i, Y_i)'s are effectively drawn i.i.d. from P. Alternatively, fix L and pick (X_1, Y_1), . . . , (X_{n+1}, Y_{n+1}) uniformly with replacement from L, and call this method of sampling Q_2. Note that because P_X is nonatomic, the X_j's of the points in L are distinct with probability 1, which means that Median_{Q_2}(Y | X = X_j) = Y_j. Then, as the guarantee for Ĉ_n applies to all distributions, it applies to the uniform distribution over L; thus, we have that for all L,
P_{Q_2}{Y_{n+1} ∈ Ĉ_n(X_{n+1})} = P_{Q_2}{Median(Y_{n+1} | X_{n+1}) ∈ Ĉ_n(X_{n+1})} ≥ 1 − α.
Let R be the event that two of the sampled datapoints coincide, i.e. R = {(X_a, Y_a) = (X_b, Y_b) for some a < b}. Under Q_2, for any 1 ≤ a < b ≤ n + 1, the probability of (X_a, Y_a) and (X_b, Y_b) being equal is 1/M. By the union bound, P_{Q_2}{R} ≤ n²/M, where the last step uses the fact that the number of possible pairs (a, b) is bounded above by n². Meanwhile, we know that P_{Q_1}{R} = 0 by the definition of Q_1.
We can use this to bound the total variation distance between Q_1 and Q_2: conditional on R not occurring, the two sampling schemes are identical, so for any fixed L and any event E, |P_{Q_1}{E} − P_{Q_2}{E}| ≤ P_{Q_2}{R} ≤ n²/M. Therefore, for any fixed L, the total variation distance between the distributions Q_1 and Q_2 is at most n²/M, implying that
P_{Q_1}{Y_{n+1} ∈ Ĉ_n(X_{n+1})} ≥ 1 − α − n²/M.
Marginalizing over L, the left-hand side is the probability that Ĉ_n captures Y_{n+1} when the datapoints are drawn i.i.d. from P. Taking the limit as M goes to infinity gives the result.
Remark 3.1. Theorem 5 also applies to all algorithms that satisfy (1 − α, q)-Quantile; for our uniform distribution over L, Quantile_q(Y | X = X_j) = Y_j, so the proof translates exactly. As a result, all algorithms that satisfy (1 − α, q)-Quantile also satisfy (1 − α)-Predictive for all nonatomic distributions P. Remark 3.2. The approach taken in the proof of Theorem 5 is similar to those used to show the limits of distribution-free inference in other settings. As mentioned earlier, Barber [2020] shows that in the setting of distributions P over R^d × {0, 1}, any confidence interval Ĉ_n(X_{n+1}) for E[Y | X = X_{n+1}] with coverage 1 − α must contain Y_{n+1} with probability 1 − α for all nonatomic distributions P, and goes on to provide a lower bound for the length of the confidence interval. Additionally, Barber et al. [2021b] proves a similar theorem about predictive algorithms Ĉ_n(X_{n+1}) for Y_{n+1} that are required to have a weak form of conditional coverage. The proof for the result from Barber [2020] involves the same idea of marginalizing over a large finite sampled subset L in order to apply Ĉ_n to the distribution over L; the proof for the result from Barber et al. [2021b] focuses on sampling a large number of datapoints conditioned on whether or not they belong to a specific subset B ⊆ R^d × R. In both cases, studying two sampling distributions and measuring the total variation distance between them is crucial. Thus, this strategy may have further use in the future when studying confidence intervals for other parameters or data in a distribution-free setting.
We now prove a similar result in the opposite direction.
Proof. Consider any distribution P. For all x ∈ R^d in the support of P, set m(x) = Median(Y | X = x). We know that under the distribution P, P{Y_{n+1} ∈ Ĉ_n(X_{n+1})} ≥ 1 − α/2. Then, we can condition on whether or not the conditional median is contained in Ĉ_n as follows:
1 − α/2 ≤ P{Y_{n+1} ∈ Ĉ_n(X_{n+1})}
≤ P{m(X_{n+1}) ∈ Ĉ_n(X_{n+1})} + P{Y_{n+1} ∈ Ĉ_n(X_{n+1}) | m(X_{n+1}) ∉ Ĉ_n(X_{n+1})} · P{m(X_{n+1}) ∉ Ĉ_n(X_{n+1})}
≤ P{m(X_{n+1}) ∈ Ĉ_n(X_{n+1})} + (1/2)(1 − P{m(X_{n+1}) ∈ Ĉ_n(X_{n+1})}).
The key insight here is that because Ĉ_n only outputs confidence intervals, Ĉ_n(X_{n+1}) can cover at most half of the conditional distribution of Y | X = X_{n+1} if m(X_{n+1}) is not contained in the confidence interval. Shifting the constant over and multiplying by 2 tells us that P{m(X_{n+1}) ∈ Ĉ_n(X_{n+1})} ≥ 1 − α, as desired.
Theorem 5 tells us that conditional median inference is at least as imprecise as predictive inference. As a result, because all predictive intervals have nonvanishing widths (assuming nonzero conditional variance) no matter the sample size n, it is not possible to write down conditional median algorithms with widths converging to 0. Thus, it may be better to study other distribution parameters similar to the conditional median if we are looking for better empirical performance. For discussion on related distribution parameters that are worth studying and may result in stronger inference, refer to Section 5.2.
Theorem 6 tells us that one way to approach conditional median inference is to apply strong predictive algorithms. It also suggests that improvements in predictive inference may translate to conditional median inference.
Lastly, we know that Algorithm 1 captures Y n+1 with probability 1 − α/2. Because there is space between the bounds of Theorems 5 and 6, there may exist a better conditional median algorithm that only captures Y n+1 with probability 1 − α. Based on our result from Section 2.3, any such algorithm will likely follow a format different than the split conformal approach. Studying this problem in more detail, particularly on difficult distributions P , might lead to more accurate conditional median algorithms.

Simulations
In this section, we analyze the impact of different conformity scores on the outcome of Algorithm 2. Specifically, we look at the four following conformity scores:
1. f_lo(x, y) = f_hi(x, y) = y − μ̂(x). We train μ̂ to predict the conditional mean using quantile regression forests on the dataset {(X_i, Y_i) : i ∈ I_1}.
2. f_lo(x, y) = f_hi(x, y) = (y − μ̂(x))/σ̂(x). We train μ̂ and σ̂ jointly using random forests on the dataset {(X_i, Y_i) : i ∈ I_1}.
3. f_lo(x, y) = y − Q̂_lo(x) and f_hi(x, y) = y − Q̂_hi(x). We train Q̂_lo and Q̂_hi to predict the conditional α/2 quantile and 1 − α/2 quantile, respectively, using quantile regression forests on the dataset {(X_i, Y_i) : i ∈ I_1}.
4. f_lo(x, y) = f_hi(x, y) = F̂_{Y|X=x}(y). We create F̂_{Y|X=X_i}(Y_i) by using 101 quantile regression forests trained on {(X_i, Y_i) : i ∈ I_1} to estimate the conditional qth quantile for q ∈ {0, 0.01, . . . , 0.99, 1} and use linear interpolation between quantiles to estimate the conditional CDF. Our training method ensures that quantile predictions never cross.
To compare the performance of Algorithm 2 against a method that does not have a distribution-free guarantee, we also consider a nonconformalized algorithm that outputs Ĉ_n(X_{n+1}) = [Q̂_lo(X_{n+1}), Q̂_hi(X_{n+1})], where Q̂_lo and Q̂_hi are trained on D to predict the conditional α/2 quantile and 1 − α/2 quantile, respectively, using quantile regression forests. We refer to this algorithm as QRF.
In order to test the conditional median coverage rate, we must look at distributions for which the conditional median is known and, therefore, focus on simulated datasets. We consider the performance of Algorithm 2 and QRF on these three distributions:
Distribution 1: We draw (X, Y) ∼ P_1 from R^d × R, where d = 10. Here, X = (X_1, . . . , X_d) is an equicorrelated multivariate Gaussian vector with mean zero and Var(X_i) = 1, Cov(X_i, X_j) = 0.25 for i ≠ j. We set Y = (X_1 + X_2)² − X_3 + σ(X)ε, where ε ∼ N(0, 1) is independent of X and σ(x) = 0.1 + 0.25‖x‖₂² for all x ∈ R^d.
Distribution 2: We draw (X, Y) ∼ P_2 from R × R. Draw X ∼ Unif[−4π, 4π] and Y = U^{1/4} f(X), where U ∼ Unif[0, 1] is independent of X and f(x) = 1 + |x| sin²(x) for all x ∈ R.
Distribution 3: We draw (X, Y) ∼ P_3 from R × R. Note that {r} denotes the fractional part of r. We set δ = 0.0001 and M = 1/γ = 25.
For each distribution, we run Algorithm 2 using each conformity score, as well as QRF, to get a confidence interval for the conditional median. We run 500 trials; in each trial, we set n = 5,000 with n_1 = n_2 = n/2 and α = 0.1 with r = s = α/2. For a separate study on the impact of n on the confidence interval width and coverage, we refer the reader to Sesia and Candès [2020]. We test coverage on 5,000 datapoints for each trial. The average coverage rate, average interval width, and other statistics for each distribution and conformity score are shown in Figure 3. An example of the resulting confidence intervals for a single trial on Distribution 2 is displayed in Figure 4. Comparing the QRF algorithm with Algorithm 2, we see that while QRF's widths are significantly lower for all three distributions, its coverage on Distribution 3 is much less than 1 − α. In particular, comparing it against Conformity Score 3, which has the same framework but includes a constant buffer on both ends due to the calibration set, we see how much extra width Algorithm 2 adds in the calibration step.
Looking first at coverage rates, we see that all scores have coverage much greater than $1-\alpha$ on Distributions 1 and 2. Distribution 3, however, is a case where all scores have near-identical coverage rates of almost exactly $1-\alpha$. Further investigation into the confidence intervals produced for Distribution 3 suggests that the algorithms often fail to capture the conditional median when its absolute value is very close to 1.
The minimum conditional coverage on Distributions 1 and 3 is near 0 for each conformity score. Interestingly, the scores with the worst minimum conditional coverage on Distribution 1 have the best on Distribution 3, and vice versa. Scores 1, 3, and 4 have minimum conditional coverage greater than $1-\alpha$ on Distribution 2, implying that these scores achieve pointwise conditional coverage there.
Regarding interval width, Score 1 performs significantly worse on Distributions 1 and 2 than all other scores, while the three other scores have roughly equal average widths. On Distribution 3, Scores 3 and 4 produce significantly narrower intervals than Scores 1 and 2.
Overall, we see that Score 1 is significantly worse than the other scores on distributions with a wide range of conditional variance; Scores 3 and 4 behave very similarly on all distributions and perform slightly better than Score 2 on Distribution 3.
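The coverage and width statistics discussed above reduce to a simple per-trial computation: check whether the known conditional median lands inside each interval and average. This helper is our own illustrative sketch, not the authors' evaluation code.

```python
# Empirical coverage rate and average interval width over test points.
import numpy as np

def coverage_and_width(lo, hi, cond_median):
    """lo, hi: interval endpoints; cond_median: true Median(Y|X=x)."""
    lo, hi, m = (np.asarray(a, dtype=float) for a in (lo, hi, cond_median))
    covered = (lo <= m) & (m <= hi)
    return covered.mean(), (hi - lo).mean()

# Two test points: the first interval covers its median, the second does not.
cov, width = coverage_and_width([0.0, 0.0], [2.0, 1.0], [1.5, 1.2])
# cov = 0.5, width = 1.5
```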

Discussion
This paper introduced two algorithms for capturing the conditional median of a datapoint in the distribution-free setting, as well as a particular distribution on which the performance of these algorithms is sharp. Our lower bounds prove that in the distribution-free setting, conditional median inference is fundamentally as difficult as prediction itself, thereby setting a concrete limit on how well any median inference algorithm can ever perform. We also showed that any predictive algorithm can be used as a median algorithm at a different coverage level, suggesting that the two problems are near-equivalent.

Takeaways
A few observations may prove useful. For one, distributions such as $P_\delta$ from Section 2.3 and $P_3$ from Section 4 will likely show up again. Because each is a mixture of two disjoint distributions with roughly equal weights, it is hard to identify which half contains the median. Similar distributions will likely appear as the performance-limiting cases for distribution-free parameter inference. Further, the proof technique of sampling a large finite number of datapoints and then marginalizing (Section 3) is similar to those in Barber [2020] and Barber et al. [2021b], pointing to possible future use. Lastly, our results and those of Barber [2020] indicate that the values of conditional parameters cannot be known with higher accuracy than the values of future samples.

Further Work
We hope that this paper motivates further work on conditional parameter inference. We see three immediate potential avenues:
• One direction is to extend our methods to other conditional parameters similar to the conditional median. For example, the smoothed conditional median, equal to the conditional median convolved with a kernel, may be easier to infer than the conditional median itself. This parameter would allow for smarter inference on smooth distributions without making smoothness a requirement for inference. Similarly, the truncated mean and other measures of central tendency may be amenable to model-free inference and analysis, as may the conditional interquartile range and other robust measures of scale.
• Another direction is to get tighter bands by imposing mild shape constraints on the conditional median function. For instance, if we know that Median(Y |X = x) is convex, then the results from Section 3 no longer apply. Similarly, assuming that Median(Y |X = x) is decreasing in x or Lipschitz would yield intervals with vanishing widths in the limit of large samples. For instance, when predicting economic damages caused by tornadoes using wind speed as a covariate, one may assume that the median damage is nondecreasing as wind speed increases.
• A third subject of study is creating full conformal inference methods based on our split conformal algorithms. Unlike split conformal inference, the full conformal method does not rely on splitting the dataset into a fitting half and a ranking half; instead, it calculates the conformity of a potential datapoint $(X_{n+1}, y)$ to the full dataset $D$ and includes $y$ in its confidence region only if $(X_{n+1}, y)$ is similar enough to the observed datapoints. The study of full conformal inference has grown alongside that of split conformal inference; the method can be seen in Vovk et al. [2005], Shafer and Vovk [2008], and Lei et al. [2018]. Standard full conformal algorithms do not guarantee coverage of the conditional median; however, there may exist modifications similar to locally nondecreasing conformity scores that result in a full conformal algorithm that captures the conditional median.
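The full conformal recipe described in the last bullet can be sketched in a few lines. This is the standard predictive loop of Vovk et al. [2005], not the conjectured median-capturing modification; the conformity score here (absolute deviation from the augmented-data mean) is a placeholder of our own choosing, and for simplicity the feature $X$ is ignored.

```python
# Illustrative 1-D full conformal loop: for each candidate y on a grid,
# score the augmented dataset D u {y} and keep y if its conformity
# p-value exceeds alpha.
import numpy as np

def full_conformal_region(y_obs, y_grid, alpha=0.1):
    """Return the candidate y values accepted into the confidence region."""
    y_obs = np.asarray(y_obs, dtype=float)
    accepted = []
    for y in y_grid:
        aug = np.append(y_obs, y)            # augmented dataset
        scores = np.abs(aug - aug.mean())    # placeholder conformity score
        # p-value: fraction of points at least as nonconforming as y
        p = np.mean(scores >= scores[-1])
        if p > alpha:
            accepted.append(y)
    return accepted

region = full_conformal_region([0.1, -0.2, 0.05, 0.3], np.linspace(-1, 1, 201))
```

Note that the scores are recomputed for every candidate $y$, which is what makes full conformal expensive relative to the split approach used in Algorithms 1 and 2.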

A.3 Proof of Theorem 3
We show that given $\epsilon > 0$, there exist $\delta$ and $N$ such that for all $n > N$, running Algorithm 1 on $P_\delta$ with our chosen $\hat\mu$ results in a confidence interval that contains the conditional median with probability at most $1 - \alpha + \epsilon$. Our approach is similar to that in Appendix A.1; however, we apply the inequalities in the opposite directions and use some additional analysis in order to obtain an upper bound rather than a lower bound.
We now apply our two lemmas: the second equality follows from Lemma A.1, and the last from Lemma A.2. Applying the same chain of logic as used in Appendices A.1 and A.2, the last equality follows from evaluating the generating function $G\left(\binom{t}{k}; z\right) = \sum_{t=0}^{\infty} \binom{t}{k} z^t$ at $z = 0.5 - \delta$. Finally, in order to bring this into a coherent bound, we expand the expression to bring out the $1-\alpha$ term and isolate the remainder, which we then show goes to 0; here the inequality arises from upper bounding the tail sum $\sum_{j=n_2+1}^{\infty} \binom{j}{k} (0.5-\delta)^j$ by a geometric series whose ratio $\frac{n_2+1}{n_2+1-k}(0.5-\delta)$ is the maximum ratio of consecutive terms. Applying $m \leq (n_2+1)\alpha/2$ twice gives the stated bound. Because $\alpha < 1$, the remainder vanishes: this follows from the fact that $\sum_{k=0}^{n_2+1} \binom{n_2+1}{k} 2^{-(n_2+1)} = 1$ and that the standard deviation of $\mathrm{Binom}(n_2, 0.5)$ is $O(\sqrt{n_2})$, meaning that $\sum_{k=\lfloor (n_2+1)\alpha/2 \rfloor + 1}^{\,n_2+1-\lfloor (n_2+1)\alpha/2 \rfloor} \binom{n_2+1}{k} 2^{-(n_2+1)} \to 1$.
Furthermore, as $\mathrm{Binom}(n_2, 0.5)$ approaches a normal distribution as $n_2 \to \infty$ and $\Phi(-c\sqrt{n_2})$ is $O(d^{-n_2})$ for some $d > 1$, the remainder term vanishes for small enough $\delta$. Thus, we can pick $D$ and $N$ such that the bound holds for all $\delta < D$ and $n \geq N$, noting that $n_2 = n/2$. Then, setting $\delta = \min\{\frac{\epsilon}{4\alpha - 2\epsilon}, D\}$, so that $\frac{2\alpha\delta}{1+2\delta} \leq \epsilon/2$, yields the result. This says that the probability $P\{M_E \geq m\}$ of the confidence interval containing the conditional median is at most $1 - \alpha + \epsilon$.

A.4 Proof of Theorem 4
We show that given $\epsilon > 0$, there exist $c$ and $N$ such that for all $n > N$, running Algorithm 1 on $n$ datapoints from an arbitrary distribution $P$ with regression function $\hat\mu_c$ and split sizes $n_1 + n_2 = n$ results in a finite confidence interval that contains the conditional median with probability at least $1 - \alpha/2 - \epsilon$.
For each $x$ in the support of $P$, define $m(x) = \mathrm{Median}(Y|X=x)$ and recall that $M = \max_{i \in I_1} |Y_i|$. We begin with two lemmas.
This results from the fact that $|Y_i|$ is exchangeable with $|Y_j|$ for all $j \in I_1$; thus, the probability that $|Y_i|$ is the unique maximum of the set $\{|Y_j| : j \in I_1 \cup \{i\}\}$ is bounded above by $\frac{1}{n_1+1}$. Taking the complement yields the desired result.
Lemma A.4. For all $i \in I_2 \cup \{n+1\}$, $P\{|m(X_i)| \leq M\} \geq 1 - \frac{2}{n_1+1}$. Proof. Note that $|m(X_i)|$ is exchangeable with $|m(X_j)|$ for all $j \in I_1$. Letting $M_R = \#\{j \in I_1 : |m(X_j)| \geq |m(X_i)|\}$, exchangeability gives that the CDF of $M_R$ is bounded below by the CDF of the uniform distribution over $\{0, 1, \dots, n_1\}$. For each $j \in I_1$, the event $\{|Y_j| \geq |m(X_j)|\}$ occurs with probability at least $1/2$ by the definition of the median; moreover, these events are mutually independent. We therefore condition on $M_R$ and combine the pieces. Taking the complement yields the desired result.
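Lemma A.4 can be sanity-checked by Monte Carlo under one concrete model of our own choosing ($X \sim N(0,1)$ and $Y|X \sim N(m(X), 1)$ with $m(x) = x$); the observed frequency of $\{|m(X_i)| \leq M\}$ should comfortably exceed the $1 - 2/(n_1+1)$ bound.

```python
# Monte Carlo check of Lemma A.4 under an illustrative model.
import numpy as np

def lemma_a4_check(n1=50, trials=4000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        # Y_j = m(X_j) + noise over the fitting set I1, with m(x) = x.
        Y_fit = rng.normal(size=n1) + rng.normal(size=n1)
        M = np.abs(Y_fit).max()          # M = max_{j in I1} |Y_j|
        hits += abs(rng.normal()) <= M   # event {|m(X_i)| <= M}, fresh X_i
    return hits / trials

freq = lemma_a4_check()
bound = 1 - 2 / 51   # Lemma A.4 with n1 = 50
```

Under this light-tailed model the empirical frequency is far above the bound, which is expected: the lemma holds uniformly over all distributions, so it is loose for any particular well-behaved one.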
Proof. Notice that on the event $A \cap B$, the two intervals both have length $2M$, but their centers are at distance greater than $2M$ on the event $B$, meaning that the intervals are disjoint. Therefore, $|m(X_i) - \hat\mu_c(X_i)| \geq |m(X_{n+1}) - \hat\mu_c(X_{n+1})|$ implies that all elements of the first interval are greater than all elements of the second. Looking at Algorithm 1, because $1/n_2 < \alpha/2$, $Q_{1-\alpha/2}(E)$ is finite and thus the confidence interval is bounded. By Lemma A.5, on the event $A \cap B \cap C$ we have $m(X_{n+1}) \in \hat{C}_n(X_{n+1})$. Additionally, because the elements of $\{|m(X_i) - \hat\mu_c(X_i)| : i \in I_2 \cup \{n+1\}\}$ are exchangeable, we have that $P\{C\} \geq 1 - \alpha/2$. The union bound then proves the desired result.
Consider a distribution $P$, and for all $x \in \mathbb{R}^d$ in the support of $P$, let $q(x)$ denote $\mathrm{Quantile}_q(Y|X=x)$. We prove the first result in two parts.
Conditioning on whether or not $q(X_{n+1})$ is greater than or equal to $\hat{L}_n(X_{n+1})$, we use the fact that if $q(X_{n+1}) < \hat{L}_n(X_{n+1})$, at most $1-q$ of the conditional distribution of $Y|X = X_{n+1}$ can be at least $\hat{L}_n(X_{n+1})$. Subtracting $1-q$ from both sides and dividing by $q$ tells us that $1 - r \leq P\{q(X_{n+1}) \geq \hat{L}_n(X_{n+1})\}$. Similarly, subtracting $q$ from both sides and dividing by $1-q$ tells us that $1 - s \leq P\{q(X_{n+1}) \leq \hat{H}_n(X_{n+1})\}$. The union bound then combines the two parts. We can prove the second result in exactly the same fashion as the proof of Theorem 6: subtracting $\max\{q, 1-q\}$ and dividing by $\min\{q, 1-q\}$ yields the desired result.
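The conditioning step for the lower endpoint can be written out explicitly. This reconstruction assumes the one-sided marginal guarantee $P\{Y_{n+1} \geq \hat{L}_n(X_{n+1})\} \geq 1 - rq$ from Algorithm 2:

```latex
\begin{align*}
1 - rq &\le P\{Y_{n+1} \ge \hat{L}_n(X_{n+1})\} \\
  &\le P\{q(X_{n+1}) \ge \hat{L}_n(X_{n+1})\}
     + (1 - q)\, P\{q(X_{n+1}) < \hat{L}_n(X_{n+1})\} \\
  &= (1 - q) + q\, P\{q(X_{n+1}) \ge \hat{L}_n(X_{n+1})\},
\end{align*}
% Subtracting 1 - q and dividing by q gives
% P{q(X_{n+1}) >= \hat{L}_n(X_{n+1})} >= 1 - r.
```

The second inequality uses the fact cited in the text: on the event $\{q(X_{n+1}) < \hat{L}_n(X_{n+1})\}$, at most $1-q$ of the conditional distribution of $Y$ lies at or above $\hat{L}_n(X_{n+1})$.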

B.1 Alternate Proof of Theorem 1
The proof of the theorem relies on two lemmas: the first establishes a connection between $\mathrm{Median}(Y_{n+1}|X_{n+1})$ and the $\mathrm{Median}(Y|X=X_i)$'s using exchangeability, and the second gives us a relationship between the $\mathrm{Median}(Y|X=X_i)$'s and the $Y_i$'s. We begin with some notation. For all $x \in \mathbb{R}^d$ in the support of $P$, set $m(x) = \mathrm{Median}(Y|X=x)$. Also, for $1 \leq i \leq n+1$, let $R(X_i) = |m(X_i) - \hat\mu(X_i)|$, and for $1 \leq i \leq n$ let $E(X_i) = |Y_i - \hat\mu(X_i)|$. Finally, put $M_R = \#\{i \in I_2 : R(X_i) \geq R(X_{n+1})\}$, the number of $i \in I_2$ for which $R(X_i) \geq R(X_{n+1})$, and $M_E = \#\{i \in I_2 : E(X_i) \geq R(X_{n+1})\}$, the number of $i \in I_2$ for which $E(X_i) \geq R(X_{n+1})$.
The first lemma relates R(X n+1 ) to the other R(X i )'s.
Proof. Our statement follows from the fact that our samples are i.i.d. Because $\hat\mu$ is independent of $(X_i, Y_i)$ for $i \in I_2$ and independent of $X_{n+1}$, the values in $\{R(X_i) : i \in I_2\} \cup \{R(X_{n+1})\}$ are i.i.d. as well. Then, the probability that fewer than $m$ values in $\{R(X_i) : i \in I_2\}$ are at least $R(X_{n+1})$ is bounded above by $\frac{m}{|I_2|+1}$, as the ordering of these values is uniformly random. Taking the complement of both sides gives the result.
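The rank bound in Lemma B.1 can be checked by simulation (our own sketch): with $n_2 + 1$ i.i.d. continuous values, $M_R$ is uniform on $\{0, \dots, n_2\}$, so $P\{M_R \geq m\} = 1 - m/(n_2+1)$ exactly and the bound is tight; we therefore allow a small Monte Carlo slack in the check.

```python
# Monte Carlo check of the exchangeability rank bound of Lemma B.1.
import numpy as np

def rank_bound_check(n2=20, m=5, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    vals = rng.normal(size=(trials, n2 + 1))
    # M_R: number of the first n2 values that are >= the last value.
    M_R = (vals[:, :n2] >= vals[:, [n2]]).sum(axis=1)
    return (M_R >= m).mean()

freq = rank_bound_check()
bound = 1 - 5 / 21   # 1 - m/(n2+1), tight for continuous i.i.d. values
```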
The second lemma gives a direct relationship between $E(X_i)$ and $R(X_i)$. (Note that the events $\{E(X_i) \geq R(X_i)\}$ below are mutually independent.) Lemma B.2. For all $i \in I_2$, $P\{E(X_i) \geq R(X_i)\} \geq 1/2$.
Proof. We see that $P\{Y_i \geq m(X_i)\} \geq 1/2$ and $P\{Y_i \leq m(X_i)\} \geq 1/2$ by the definition of the conditional median. Furthermore, the events $\{Y_i \geq m(X_i)\}$ and $\{Y_i \leq m(X_i)\}$ are independent of the events $\{m(X_i) \geq \hat\mu(X_i)\}$ and $\{m(X_i) \leq \hat\mu(X_i)\}$ given $X_i$, as $\hat\mu$ is a function of $\{(X_i, Y_i) : i \in I_1\}$ and the datapoints are i.i.d. Then, conditioned on $X_i$, if $m(X_i) \geq \hat\mu(X_i)$, with probability $1/2$ we have $Y_i \geq m(X_i) \geq \hat\mu(X_i)$ and hence $E(X_i) \geq R(X_i)$; the case $m(X_i) \leq \hat\mu(X_i)$ is symmetric. The conclusion holds conditionally in both cases; marginalizing out $X_i$ yields the desired result. We now study the number of datapoints obeying $E(X_i) \geq R(X_{n+1})$ by combining these lemmas. Consider any $0 \leq m \leq n_2$, and condition on $M_R$. The first inequality holds by Lemma B.2, which implies that $P\{M_E \geq m \mid M_R = j\}$ can be bounded below by the probability that $M \geq m$ for $M \sim \mathrm{Binom}(j, 0.5)$. The second inequality is due to Lemma B.1. We know that $\sum_{k=m}^{j} \binom{j}{k} 2^{-j}$ is a nondecreasing function of $j$; by Lemma B.1, the CDF of $M_R$ is lower bounded by the CDF of the uniform distribution over $\{0, 1, \dots, n_2\}$. Solving the resulting summation $\frac{1}{n_2+1} \sum_{j=0}^{n_2} \sum_{k=m}^{j} \binom{j}{k} 2^{-j}$, the second-to-last equality comes from evaluating the generating function $G\left(\binom{t}{k}; z\right) = \sum_{t=0}^{\infty} \binom{t}{k} z^t$ at $z = 0.5$. Putting it together, in Algorithm 1, $Q_{1-\alpha/2}(E)$ is set to be the $(1-\alpha/2)(1+1/n_2)$-th quantile of $\{E_i : i \in I_2\}$. This is equal to the $\lceil n_2(1-\alpha/2)(1+1/n_2) \rceil = \lceil (1-\alpha/2)(n_2+1) \rceil$-th smallest value of $\{E_i : i \in I_2\}$. This means that if $M_E \geq n_2 - \lceil (1-\alpha/2)(n_2+1) \rceil + 1$, then $R(X_{n+1})$ will be at most the $(n_2 - \lceil (1-\alpha/2)(n_2+1) \rceil + 1)$-th largest value of $\{E_i : i \in I_2\}$, which is equal to the $\lceil (1-\alpha/2)(n_2+1) \rceil$-th smallest value, i.e., $Q_{1-\alpha/2}(E)$.
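The generating-function evaluation used here (and again in Appendix B.2) rests on the closed form $\sum_{t=0}^{\infty} \binom{t}{k} z^t = \frac{z^k}{(1-z)^{k+1}}$ for $|z| < 1$. A quick numeric check at $z = 0.5$, where the closed form equals 2 for every $k$:

```python
# Numeric check of the generating-function identity
#   sum_{t>=0} C(t, k) z^t = z^k / (1 - z)^(k+1),  |z| < 1.
from math import comb

def gen_fun_partial(k, z, terms=200):
    """Partial sum of the generating function; comb(t, k) = 0 for t < k."""
    return sum(comb(t, k) * z**t for t in range(terms))

for k in range(6):
    closed = 0.5**k / (1 - 0.5)**(k + 1)   # equals 2 for every k at z = 0.5
    assert abs(gen_fun_partial(k, 0.5) - closed) < 1e-9
```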

B.2 Alternate Proof of Theorem 2
Our approach is similar to that in Appendix A.1. The main difference in the proof arises from the fact that Algorithm 2 no longer uses the absolute value and uses two separately fitted functions, meaning that it is important to bound the probability of the confidence interval covering the desired value from both sides. We begin with some definitions. For all $x \in \mathbb{R}^d$ in the support of $P$, let $q(x)$ denote $\mathrm{Quantile}_q(Y|X=x)$. For all $1 \leq i \leq n+1$, define $E^{\mathrm{lo}}(X_i) = f^{\mathrm{lo}}(X_i, Y_i)$, $E^{\mathrm{hi}}(X_i) = f^{\mathrm{hi}}(X_i, Y_i)$, $R^{\mathrm{lo}}(X_i) = f^{\mathrm{lo}}(X_i, q(X_i))$, and $R^{\mathrm{hi}}(X_i) = f^{\mathrm{hi}}(X_i, q(X_i))$. As such, we proceed to develop two lemmas which extend those from Section A.1: the first helps us understand the distributions of $M_R^{\mathrm{lo}}$ and $M_R^{\mathrm{hi}}$, and the second studies the individual relationships between $E^{\mathrm{lo}}(X_i)$ and $R^{\mathrm{lo}}(X_i)$ and between $E^{\mathrm{hi}}(X_i)$ and $R^{\mathrm{hi}}(X_i)$. With both of these lemmas, we are able to bound the probability of each event $\{R^{\mathrm{lo}}(X_{n+1}) \geq Q^{\mathrm{lo}}_{rq}\}$ and $\{R^{\mathrm{hi}}(X_{n+1}) \leq Q^{\mathrm{hi}}_{1-s(1-q)}\}$. Lemma B.3. For all $0 \leq m \leq n_2$, $P\{M_R^{\mathrm{lo}} \geq m\} \geq 1 - \frac{m}{n_2+1}$ and $P\{M_R^{\mathrm{hi}} \geq m\} \geq 1 - \frac{m}{n_2+1}$.
Proof. We prove the result for $M_R^{\mathrm{hi}}$; the same approach holds for $M_R^{\mathrm{lo}}$. Because $f^{\mathrm{hi}}$ is independent of $(X_i, Y_i)$ for $i \in I_2$ and independent of $X_{n+1}$, the values in $\{R^{\mathrm{hi}}(X_i) : i \in I_2\} \cup \{R^{\mathrm{hi}}(X_{n+1})\}$ are i.i.d. Thus, the probability that fewer than $m$ values in $\{R^{\mathrm{hi}}(X_i) : i \in I_2\}$ are at least $R^{\mathrm{hi}}(X_{n+1})$ is bounded above by $\frac{m}{|I_2|+1}$, as the ordering of these values is uniformly random. Taking the complement of both sides establishes the claim.
Lemma B.4. For all $i \in I_2$, $P\{E^{\mathrm{lo}}(X_i) \leq R^{\mathrm{lo}}(X_i)\} \geq q$ and $P\{E^{\mathrm{hi}}(X_i) \geq R^{\mathrm{hi}}(X_i)\} \geq 1-q$.
Proof. We see that $P\{Y_i \leq q(X_i)\} \geq q$ and $P\{Y_i \geq q(X_i)\} \geq 1-q$ by the definition of the conditional quantile. Then, with probability at least $q$ we have that $E^{\mathrm{lo}}(X_i) = f^{\mathrm{lo}}(X_i, Y_i) \leq f^{\mathrm{lo}}(X_i, q(X_i)) = R^{\mathrm{lo}}(X_i)$ by the definition of a locally nondecreasing conformity score. Similarly, with probability at least $1-q$ we have that $E^{\mathrm{hi}}(X_i) = f^{\mathrm{hi}}(X_i, Y_i) \geq f^{\mathrm{hi}}(X_i, q(X_i)) = R^{\mathrm{hi}}(X_i)$, thereby concluding the proof.
We now study the number of $i \in I_2$ such that $E^{\mathrm{hi}}(X_i) \geq R^{\mathrm{hi}}(X_{n+1})$ by combining these two lemmas. Consider any $0 \leq m \leq n_2$, and note that by conditioning on $M_R^{\mathrm{hi}}$, we obtain a mixture of binomial tail probabilities involving $\sum_{k=m}^{j} \binom{j}{k} (1-q)^k q^{j-k}$.
The first inequality holds because $P\{M_E^{\mathrm{hi}} \geq m \mid M_R^{\mathrm{hi}} = j\}$ can be bounded below by the probability that $M \geq m$ for $M \sim \mathrm{Binom}(j, 1-q)$ by Lemma B.4. The second inequality holds since the CDF of $M_R^{\mathrm{hi}}$ is greater than or equal to the CDF of the uniform distribution over $\{0, 1, \dots, n_2\}$ by Lemma B.3, and $\sum_{k=m}^{j} \binom{j}{k} (1-q)^k q^{j-k}$ is a nondecreasing function of $j$. We now solve the summation $\frac{1}{n_2+1} \sum_{j=0}^{n_2} \sum_{k=m}^{j} \binom{j}{k} (1-q)^k q^{j-k}$, where the second-to-last equality is from evaluating the generating function $G\left(\binom{t}{k}; z\right) = \sum_{t=0}^{\infty} \binom{t}{k} z^t$ at $z = q$. Note that this same calculation works for counting the $i \in I_2$ with $E^{\mathrm{lo}}(X_i) \leq R^{\mathrm{lo}}(X_{n+1})$ using the same lemmas, substituting $M_E^{\mathrm{lo}}$ for $M_E^{\mathrm{hi}}$, $M_R^{\mathrm{lo}}$ for $M_R^{\mathrm{hi}}$, and $q$ for $1-q$ within the calculation. This gives $P\{M_E^{\mathrm{lo}} \geq m\} \geq 1 - \frac{1}{q} \cdot \frac{m}{n_2+1}$. Now, in Algorithm 2, $Q^{\mathrm{hi}}_{1-s(1-q)}(E)$ is defined as the $(1-s(1-q))(1+1/n_2)$-th quantile of $\{E^{\mathrm{hi}}_i : i \in I_2\}$. This is equal to the $\lceil n_2(1-s(1-q))(1+1/n_2) \rceil = \lceil (1-s(1-q))(n_2+1) \rceil$-th smallest value of $\{E^{\mathrm{hi}}_i : i \in I_2\}$. Thus, if $M_E^{\mathrm{hi}} \geq n_2 - \lceil (1-s(1-q))(n_2+1) \rceil + 1$, then $R^{\mathrm{hi}}(X_{n+1})$ will be at most the $(n_2 - \lceil (1-s(1-q))(n_2+1) \rceil + 1)$-th largest value of $\{E^{\mathrm{hi}}_i : i \in I_2\}$, which is equal to the $\lceil (1-s(1-q))(n_2+1) \rceil$-th smallest value, i.e., $Q^{\mathrm{hi}}_{1-s(1-q)}(E)$.
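The binomial-mixture bound can be verified exactly by direct computation. By symmetry with the stated $P\{M_E^{\mathrm{lo}} \geq m\} \geq 1 - \frac{1}{q}\cdot\frac{m}{n_2+1}$, the hi version is $\frac{1}{n_2+1}\sum_{j=0}^{n_2} P\{\mathrm{Binom}(j, 1-q) \geq m\} \geq 1 - \frac{1}{1-q}\cdot\frac{m}{n_2+1}$; the following check is our own.

```python
# Exact numeric check of the uniform-mixture-of-binomials lower bound.
from math import comb

def mixture_tail(n2, m, p):
    """(1/(n2+1)) * sum_{j=0}^{n2} P{Binom(j, p) >= m}."""
    total = 0.0
    for j in range(n2 + 1):
        total += sum(comb(j, k) * p**k * (1 - p)**(j - k)
                     for k in range(m, j + 1))
    return total / (n2 + 1)

# Verify the bound over a grid of q, n2, and m.
for q in (0.25, 0.5, 0.75):
    for n2 in (10, 50):
        for m in range(n2 + 1):
            lhs = mixture_tail(n2, m, 1 - q)
            rhs = 1 - m / ((1 - q) * (n2 + 1))
            assert lhs >= rhs - 1e-12
```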

B.3 Impossibility of Capturing the Distribution Mean
Instead of proving the impossibility of capturing the conditional mean of a distribution, we prove a more general result: we show that there does not exist an algorithm that captures the mean of a distribution $Y \sim P$ given no assumptions about $P$. This is a more general form of our result because if we set $X \perp\!\!\!\perp Y$ in $(X, Y) \sim P$, then $E[Y|X] = E[Y]$, meaning that the impossibility of capturing the mean implies that the conditional mean is impossible to capture as well.