Testing axial symmetry by means of directional regression quantiles

The article describes how directional quantiles can be useful for testing the null hypothesis that a multivariate distribution is symmetric around a line in a given direction. It also generalizes the proposed tests to residual distributions in a linear regression setup, discusses their use for statistical inference regarding equality of distributions, equality of scale, or exchangeability, and illustrates the achievements with carefully designed pictures and examples.

MSC2020 subject classifications: Primary 62H15; secondary 62J99.


Introduction
Symmetry not only makes the world beautiful and interesting, but also plays a key role in mathematical statistics where it simplifies and reduces statistical models via sufficiency. Statisticians already recognize several kinds of multivariate symmetry; see, e.g., [22] for a survey. Nevertheless, there seem to be no available results on testing axial symmetry of a given multivariate distribution beyond dimension two except for the very recent permutation test in [12]. A random vector Y with E‖Y‖ < ∞ is here formally defined to be axially symmetric around an axis with direction u when L{Y − E(Y)} = L{R_u(Y − E(Y))} for the orthogonal matrix R_u = 2uu⊤ − I, satisfying R_u u = u and R_u v = −v for all vectors v orthogonal to u.
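As a quick numerical sanity check of this definition, the reflection matrix R_u can be sketched as follows (an illustrative Python fragment, not part of the article's methodology; the helper name reflection_matrix is our own):

```python
import numpy as np

def reflection_matrix(u):
    """Reflection R_u = 2 u u^T - I about the axis spanned by the unit vector u."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)              # normalize, just in case
    return 2.0 * np.outer(u, u) - np.eye(u.size)

u = np.array([1.0, 2.0, 2.0]) / 3.0        # a unit direction in R^3
R = reflection_matrix(u)

v = np.array([2.0, -1.0, 0.0])             # orthogonal to u since u.v = 0
assert np.allclose(R @ u, u)               # R_u fixes u
assert np.allclose(R @ v, -v)              # R_u negates vectors orthogonal to u
assert np.allclose(R @ R.T, np.eye(3))     # R_u is orthogonal (and symmetric)
```

Axial symmetry of L(Y) around direction u then means that the centered vector Y − E(Y) and its reflection R_u(Y − E(Y)) share the same distribution.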
Admittedly, a somewhat similar test for the first eigenvector of a covariance or scatter matrix is often used in principal component analysis; see, e.g., [6] and the references given there. In the bivariate case, a simple nonparametric test of symmetry around a given line has been proposed only in [20] and applied to testing exchangeability and symmetry around a coordinate axis. Some tests of the latter hypotheses have already been discussed in the nonparametric statistical literature; see, e.g., [10] and [18] for bivariate tests and [23] for a multivariate test of conditional symmetry. Related symmetries have also been studied for directional data [4].
Nevertheless, the problem of testing axial symmetry (about an arbitrary line) becomes really interesting only in spaces beyond dimension two, where axial symmetry does not coincide with hyperplane symmetry. Furthermore, the assumption of a particular axis of symmetry is often too restrictive. It would often be more convenient to assume only its direction, as a distribution can never have two or more parallel axes of symmetry. The tests presented below thus nicely fill in the gap: they work for general distributions in any dimension and assume only the axial direction under the null hypothesis. They can also test conditional axial symmetry in a regression context with a few regressors and responses, which also distinguishes them from possible competitors. That is to say that a follow-up article [11] will describe various tests of symmetry around a general subspace based on a completely different principle, only in the multivariate case (with no regressors), and with more stringent assumptions.
The presented tests originate from the directional quantile regression, introduced in [7] and further elucidated in [19], that appears very useful even for a single direction, namely for the direction of the assumed axis of symmetry. Then it more or less resembles the ordinary quantile regression of [15] and [13], but with stochastic regressors and applied to certain projections. The regression framework behind the tests makes their regression extensions very intuitive and straightforward.
Unfortunately, it turns out that the presence of response-dependent and stochastic regressors makes the traditional inference about the regression quantile process invalid except in some very special cases. Consequently, the tests presented here had to be derived anew with the response-dependent and stochastic regressors taken into consideration. Therefore, the results probably contribute something original even to the theory of ordinary quantile regression with this type of regressors.
The tests are likely to become useful even in full generality because axial symmetry plays a key role in molecular symmetry (influencing the chemical properties of matter) and because it naturally occurs whenever mirrors or reflections are employed, e.g., in optics, acoustics, particle physics, astronomy, and crystallography. For example, the applications might result from the law of reflection, the axial rotation of heavenly bodies, the natural axial symmetry of simple living organisms, the axial symmetry of the electrostatic potential of symmetric molecules, or from radars rotating around an axis. The need for testing axial symmetry might also arise in the same situations where rotational symmetry is investigated for directional data; see [4] and the references therein. Furthermore, the conditional axial symmetry is also closely linked to the directional predictability of linear models with vector responses.
In any case, the test of (conditional) axial symmetry may be used to check other common statistical hypotheses such as (conditional) symmetry about a particular coordinate axis, (conditional) exchangeability, and equality of (conditional) distributions or their scales, all possibly up to a suitable shift. The applications are further elaborated and illustrated in the text.
The rest of this article is organized as follows. Section 2 introduces the necessary minimum regarding the directional regression quantiles of interest, Section 3 provides some motivation for the tests and for their use in various situations, Section 4 mainly derives and discusses the tests for the null hypothesis of axial symmetry about a line with a given direction in the location case, Section 5 extends the tests to a linear regression context, and concluding Section 6 illustrates the achievements with a few representative examples and comparisons. The Appendix collects most of the technical remarks and the proofs of all the assertions.

Directional quantiles
Suppose that Y = (Y^(1), . . . , Y^(m))⊤ ∈ R^m and X = (1, X^(2), . . . , X^(p))⊤ = (1, Z⊤)⊤ ∈ R^p stand for a random vector of responses and a random vector of regressors, respectively, and that the following Assumption 1 holds in the whole article.

Assumption 1. The joint probability distribution L of (Y⊤, Z⊤)⊤ is absolutely continuous with finite expectation, cumulative distribution function F, and probability density function f that is continuous, bounded, and positive in the interior of a connected support.
In general, directional multivariate quantiles extend univariate quantiles to multivariate spaces directionwise. In [7], the directional (regression) τ-quantile of the m-dimensional response Y corresponding to the p-dimensional covariate X is defined for any direction u ∈ S^(m−1) := {v ∈ R^m : ‖v‖ = 1} and for any quantile level τ ∈ (0, 1) as the unique hyperplane π_τu, where Γ_u is an m × (m − 1) matrix complementing u to an orthogonal matrix (u|Γ_u) and (c⊤_τu, a⊤_τu)⊤ ∈ R^(m+p−1) solves the unconstrained (standard quantile regression) minimization problem with one scalar response u⊤Y and p + m − 1 scalar regressors grouped in (Y⊤Γ_u, X⊤)⊤. (It is basically a standard quantile regression problem where the basis of the response space is changed from the canonical one to (u|Γ_u).) Then b_τu = u − Γ_u c_τu, c_τu = −Γ⊤_u b_τu, and the choice of Γ_u does not impact π_τu. The (regression) (τu)-quantile hyperplane π_τu divides the space into the upper and lower (τu)-quantile (regression) halfspaces H⁺_τu and H⁻_τu. In the location (multivariate) case with p = 1, both Z and z simply disappear from all the definitions.
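The change of basis behind this definition can be sketched in Python as follows (a minimal illustration under our own helper names; Γ_u is built here via a QR decomposition, which is one of many valid choices, and check_loss is the usual quantile regression loss ρ_τ):

```python
import numpy as np

def complement(u):
    """Gamma_u: m x (m-1) orthonormal columns completing u to (u | Gamma_u)."""
    u = np.asarray(u, dtype=float).reshape(-1, 1)
    q, _ = np.linalg.qr(u, mode="complete")   # first column spans u up to sign
    if float(q[:, 0] @ u.ravel()) < 0.0:
        q = -q                                # fix the sign so q[:, 0] = u
    return q[:, 1:]

def check_loss(r, tau):
    """rho_tau: the standard quantile regression loss, summed over residuals."""
    return np.sum(r * (tau - (r < 0.0)))

rng = np.random.default_rng(1)
m, n = 3, 200
Y = rng.normal(size=(n, m))                   # responses only (location case)
u = np.array([0.0, 0.0, 1.0])
G = complement(u)                             # 3 x 2, orthogonal to u

# Empirical objective at candidate coefficients (c, a); the directional
# tau-quantile coefficients minimize this over c in R^(m-1) and a in R.
c, a, tau = np.zeros(m - 1), 0.0, 0.5
objective = check_loss(Y @ u - Y @ G @ c - a, tau)
```

Any other orthonormal completion of u would lead to the same hyperplane, in line with the invariance of π_τu with respect to the choice of Γ_u.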
The probabilistic interpretation of the directional regression quantiles follows from (2.3) thanks to the first unit coordinate of X. Consequently, (D⊤_τu, 0⊤)⊤ can be interpreted as the vector linking the mass centers μ(H⁻_τu) and μ(H⁺_τu) of H⁻_τu and H⁺_τu through the overall center of mass and pointing to H⁺_τu if f is interpreted as the mass density.
Loosely speaking, the (regression) (τu)-quantile hyperplane π_τu cuts off the probability mass equal to τ and splits the space into two halfspaces with their own probability mass centers whose mutual position is determined by the direction u up to a certain scalar multiplier.

Motivation
For simplicity, first assume the purely multivariate case when L = L(Y) is axially symmetric around an axis with direction u. Then the uniquely defined π_τu, cutting off the right amount of probability mass, must be orthogonal to u for any τ ∈ (0, 1) because μ(H⁺_τu) − μ(H⁻_τu) is then parallel to u and the necessary and sufficient conditions (2.3) to (2.5) are then satisfied for b_τu = u, i.e., for c_τu = 0. In other words, axial symmetry around an axis with direction u implies c_τu = 0 for any τ ∈ (0, 1), which makes testing by means of c_τu and its scalar functions very promising, especially for elliptical distributions. The impact of such axial symmetry tests far exceeds their primary purpose because they can also be employed

1. for testing the hypothesis of axial symmetry about a particular line in direction u. That is to say that if the population distribution of Y is symmetric about an axis in direction u, then the particular axis of symmetry is of the form E(Y) + tu, t ∈ R. Testing the particular axis should thus involve a test of the mean vector after the test of the axial direction. After a suitable affine transformation (turning the particular line of interest into the last coordinate axis), one would only have to test that the first (m − 1) coordinates of the mean vector are zero.
2. for testing exchangeability after a suitable shift. If a multivariate distribution is exchangeable after a suitable shift, then it is symmetric around an axis with direction u = (1, 1, . . . , 1)⊤/√m. The converse is true only in R².
3. for testing equality of independent univariate distributions up to their location. Their joint distribution would then be exchangeable after a suitable shift. Conversely, if the joint distribution of independent univariate distributions is exchangeable, then the univariate distributions are the same.
4. for testing that independent univariate distributions are equally scaled, on condition that they can differ only in their location and scale parameters.
5. for testing equality of independent multivariate distributions (after a suitable shift) because then their univariate marginal distributions corresponding to any particular dimension must be independent and one could test it with a compound test.
Of course, one can combine 1. with 2., 3., 4., and 5. if particular knowledge about the location is available.
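Item 2 above has a simple deterministic core in R²: for u = (1, 1)⊤/√2, the reflection R_u defined in the Introduction is exactly the coordinate swap, so bivariate axial symmetry in that direction coincides with exchangeability of the centered pair. A short Python check:

```python
import numpy as np

u = np.ones(2) / np.sqrt(2.0)          # the diagonal direction in R^2
R = 2.0 * np.outer(u, u) - np.eye(2)   # R_u = 2 u u^T - I

# R_u equals the permutation matrix [[0, 1], [1, 0]]: applying it to a
# centered pair (Y1, Y2) just swaps the two coordinates.
swap = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(R, swap)
```

For m > 2, R_u with u = (1, . . . , 1)⊤/√m is no longer a permutation matrix, which is consistent with the converse implication holding only in R².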
In the regression case, the null hypothesis H^S_0(u) of axial symmetry around an axis with direction u pertains to the conditional distribution L(Y|X = x) almost surely; in the location case, it reduces to the axial symmetry of L(Y). However, the following text focuses only on the null hypothesis H_0(u) of (3.1), stating that c_τu = 0 for every τ ∈ (0, 1), because H^S_0(u) always implies H_0(u) in the location case and because the two hypotheses are equivalent for elliptical distributions; see Proposition 1.
Furthermore, all that easily extends to certain linear regression location-scale models because Proposition 1 can easily be rephrased for conditional distributions L(Y|X = x) and because the conditional gradient conditions turn into the unconditional ones if all the τ-quantiles of u⊤Y given X are linear in X. In particular, the theory can be applied even to the following linear regression common-scale model (3.2) with parametric matrix B, parametric vector d = 0, and a centered absolutely continuous error term ε ∈ R^m, E‖ε‖ < ∞, independent of the absolutely continuous regressor vector X, E‖X‖ < ∞. Then H^S_0(u) implies H_0(u) for any u, ‖u‖ = 1, and the reverse is true for elliptically distributed ε. If ε is elliptically distributed, but not around an axis with direction u, then c_τu ≠ 0 is a constant vector independent of τ ∈ (0, 1).
In fact, H^S_0(u) implies H_0(u) for a given directional vector u even in a more general model (3.3), where g_1, . . . , g_m are almost surely non-zero scalar functions. To sum up, H^S_0(u) can be tested by means of H_0(u) not only in the location case, but also in certain linear regression location-scale models such as (3.2) and (3.3) described above. Consequently, the rest of the article deals only with the problem of testing H_0(u), which may be of independent interest. The presented theory for the regression case also covers models (3.2) and (3.3) thanks to Remark 4 after Proposition 3, stated in the Appendix.
Of course, the proposed asymptotic nonparametric tests are the most useful when they have no simple parametric competitors, e.g., when the specification of the underlying probability distribution is unknown or unrelated to the axial symmetry considered. In fact, they may sometimes be the only reasonable options available in the multidimensional and regression cases.
In what follows, sample variants will usually be denoted with hats, and the tests will be called after their test statistics.

Location case
The asymptotic representation and distribution of ĉ_τu in the i.i.d. case is known from [7] for any fixed τ ∈ (0, 1). Under different assumptions, H_0(u) of (3.1) could be tested, for example, by means of the rank score statistic of [16] known from ordinary quantile regression; see also Section 3.7.3 of [13]. The same test statistic is also adopted here, but its asymptotic distribution becomes complicated by the presence of stochastic and response-dependent regressors and does not follow directly from any known theory. The τ-indexed process S_n(τ) of (4.1) is the cornerstone on which the tests will be built.
Then, for any τ ∈ (0, 1), S_n(τ) is asymptotically multivariate normal with mean zero and a block covariance matrix. Simplification of Proposition 2 in certain cases, weakening of its assumptions, and good invariance properties of T_n(τ) of (4.6) are discussed in Remarks 1 to 3 in the Appendix. In particular, T_n(τ), τ ∈ (0, 1), is invariant with respect to the choice of Γ_u, to shifts, to rotations (if (u, Γ_u) is rotated accordingly), and to scale transformations preserving axial symmetry. The same invariance then holds even for T_C of (4.5).
As H_0(u) implies the weaker null hypothesis that c_τᵢu = 0 for fixed quantile levels 0 < τ_1 < · · · < τ_k < 1, the testing can also be based on Proposition 2(2), which may be beneficial at least in the family of elliptical distributions in view of Proposition 1. This is also confirmed empirically in Section 6.
To sum up, H_0(u) can be tested in the location case by means of the test statistics T_D and T_C of (4.4) and (4.5) and their asymptotic distributions under H_0(u). Of course, test statistics asymptotically equivalent to T_C or T_D under H_0(u) can be used as well, provided the corresponding assumptions are fulfilled.

Regression case
Consider H_0(u) of (3.1), define the centered regression rank score vector b̂ of (5.1), and focus again on the τ-indexed process

Š. Hudecová and M. Šiman
where M_X stands for the projection matrix M_X = X(X⊤X)⁻¹X⊤.

and Ŵ is a consistent estimator of W.
Note that there is a natural consistent estimator Ŵ_0 of W and that the condition of Proposition 3(3) is satisfied if (Y⊤, Z⊤)⊤ comes from a multivariate normal distribution. Remarks 4 and 5 in the Appendix further show that the assumption of Proposition 3(3) can be weakened and that T_n(τ) of (5.4) inherits all the good invariance properties from the location case and adds to them a certain regression invariance, provided a reasonable estimator of W such as Ŵ_0 is employed.
To sum up, if the assumptions are satisfied, then H 0 (u) can be tested in the regression case by means of the test statistics T D and T C of (5.2) and (5.3) using their asymptotic distributions under H 0 (u) stated ibidem. In principle, one could then use any statistics asymptotically equivalent to T D or T C under H 0 (u).

Demonstrative examples
This section illustrates the new testing possibilities with a few carefully designed and representative examples involving the test T_C based on the whole quantile process (4.5), (5.3) and its χ² modification T_D based on a few particular quantiles (4.4), (5.2), in both the location and regression cases, for both elliptical and non-elliptical distributions (with light and heavy tails), for both large (n = 5 000) and not too large (n = 100) data samples, with response dimensions m = 2, . . . , 20 and regressor dimensions p = 1, . . . , 20.
The results have been obtained by means of the packages quantreg [14] and ks [2] for R [24] where the latter package was employed only for the computation of conditional densities. The reported p-values regarding the process-based test T C have been calculated thanks to the algorithm for computing the tail probabilities of restricted suprema of the squared standardized tied-down Bessel process of any order [3].
For simplicity, the centered regression rank scores have been produced by the function ranks of quantreg with score equal to tau. They slightly differ from those of (5.1) for observations with zero residuals due to the weighting performed by the function, but this does not affect the asymptotic behavior of the tests.

Fig. 1 shows the process {ĉ_τu}_τ obtained from a random sample of size n = 5 000 from the bivariate normal distribution N(0, 1) × N(0, 4) for two axial directions where only one of them corresponds to an axis of symmetry. The left picture strongly speaks for axial symmetry while the other vehemently denies it. The pictures are in harmony with Proposition 1 and confirm that the tests may be sensitive even to small departures from the null hypothesis of axial symmetry.
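For intuition only, here is a crude Python stand-in for the centered rank scores in the simplest location case with a univariate response (this indicator version is our own simplification, not the quantreg routine, and, as noted above, the treatment of observations with zero residuals differs from the weighting performed there):

```python
import numpy as np

def centered_rank_scores(y, tau):
    """a_i(tau) - (1 - tau), with a_i(tau) = 1 above the sample tau-quantile
    and 0 below it; under this convention the scores sum to roughly zero."""
    q = np.quantile(y, tau)
    a = (y > q).astype(float)
    return a - (1.0 - tau)

y = np.arange(1.0, 11.0)               # ten artificial observations
scores = centered_rank_scores(y, 0.3)
```

In the general regression case, the rank scores come from the dual of the quantile regression program rather than from a simple sample quantile, which is why a dedicated routine such as ranks of quantreg is used in practice.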

Simulated data
Figs. 2 to 4 use the process-based test T_C with the sample variance-covariance estimator and approximate the supremum over τ ∈ [0.05, 0.95] with the maximum over a finite grid of quantile levels. The observed average p-values produced by the tests T_C are slightly higher than 0.55 under the null hypothesis in both cases, although they are always based on 1 000 replications. This discrepancy diminishes with growing sample sizes and occurs mainly because the data samples are too small for the asymptotic approximation to hold perfectly, although the use of the discrete approximation and (upper) p-value estimates may also play a marginal role. Apart from the small issue with the seemingly conservative size, the tests T_C behave as expected, that is, in line with Propositions 1-3.

Fig. 4 presents empirical power for comparison with average p-values. It shows the empirical power of the test T_C obtained from 1 000 random samples of size n = 5 000 coming from a twenty-dimensional distribution with independent marginals for twenty-dimensional directions u = (cos(α), sin(α), 0, . . . , 0)⊤ with α ∈ [−π/60, π/60]. Two centered distributions were used: (heavy-tailed) t_3 × 2t_3 × · · · × 20t_3 and (light-tailed) L(1) × 2L(1) × · · · × 20L(1), where L denotes the Laplace distribution. The critical values for testing levels 0.01 (black), 0.05 (dark gray), and 0.10 (light gray) come from the tables published in [3] (that produce similar results as those of [1]).
The remaining figures illustrate the χ²-tests T_D based on the sample variance-covariance matrix estimators. They use (m + p − 1)-dimensional data samples of size n obtained from the multivariate uniform distribution on [−2, 2]^(m+p−1) (U), the multivariate standard normal distribution (N), and/or the multivariate standard t distribution with df degrees of freedom (t_df). The first p − 1 dimensions are taken for the stochastic regressor vectors Z_i, the next m dimensions are considered as the error vectors ε_i, and the response vectors then follow the considered model for i = 1, . . . , n, where Δ = 0 for each null hypothesis and Δ > 0 for each alternative considered. The tests themselves are based on n_τ quantile levels τ equidistantly distributed in the interval [ε_τ, 1 − ε_τ] including its end points (and τ = 0.5 for n_τ = 1). They check the null hypothesis H_0(u), u = (1, 0⊤)⊤, of axial symmetry of the (conditional) response distribution around an axis parallel with the first coordinate axis in the response space.
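The quantile-level grid just described can be sketched as follows (a trivial Python helper of our own, mirroring the stated rule):

```python
import numpy as np

def tau_grid(n_tau, eps_tau):
    """n_tau quantile levels spread equidistantly over [eps_tau, 1 - eps_tau],
    including both end points; the single level 0.5 is used when n_tau = 1."""
    if n_tau == 1:
        return np.array([0.5])
    return np.linspace(eps_tau, 1.0 - eps_tau, n_tau)

print(tau_grid(1, 0.2))   # [0.5]
print(tau_grid(3, 0.2))   # [0.2 0.5 0.8]
```

For example, n_τ = 5 and ε_τ = 0.1 yield the levels 0.1, 0.3, 0.5, 0.7, and 0.9.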
The figures illustrate the dependence of the tests T_D on ε_τ (Fig. 5), n_τ (Fig. 6), m (Fig. 7), p (Fig. 8), n (Fig. 9), Δ (Fig. 10), or on the data distribution (Fig. 11). Solid lines generally correspond to the test behavior under the null hypothesis, and dashed lines generally correspond to the test behavior under the shift alternative with Δ = 0.5. The dimension of responses (m), the dimension of regressor vectors X_i = (1, Z⊤_i)⊤ including the first unit coordinate (p), and the underlying distributions L of (Z⊤_i, ε⊤_i)⊤, i = 1, . . . , n, are indicated below each picture when it makes sense; usually L ∈ {N} or L ∈ {U, N, t_7}. The sample size is always n = 1 000 except in Fig. 9, which shows how the tests depend on the number of observations n. If m = 4 and p = 2 or vice versa, then the regression variant of the tests uses the computationally demanding estimation of conditional densities with the normal kernel and the bandwidth optimal for normal densities (for the sake of quick computation). The reported average p-values are based on N = 1 000 replications in such cases and on N = 10 000 replications in the others.

Fig. 5 uses the tests T_D for two values of τ, τ = ε_τ and τ = 1 − ε_τ, and shows their dependence on ε_τ, ε_τ = 0.05, 0.10, . . . , 0.45. Accordingly, it seems quite prudent to choose τ = 0.2 and τ = 0.8 for the two values of τ as a reasonable compromise, at least for the alternatives considered here. This is why this couple of quantile levels is used in the other settings whenever two values of τ are employed, i.e., in Figs. 7 and 8.

Fig. 6 investigates the dependence of the tests T_D on the number of quantile levels considered. It uses τ = 0.5 for n_τ = 1, and n_τ equidistant values of τ from the interval [0.2, 0.8] including its end points for n_τ = 3, 5, . . . , 15. The results are probably highly dependent on the choice of alternatives, but they nevertheless indicate that n_τ should not be chosen needlessly high; see also Proposition 1.
Therefore, the remaining pictures employ only n_τ = 1 and τ = 0.5 (Figs. 9, 10, and 11) or n_τ = 2 and τ = 0.2 and 0.8 (Figs. 7 and 8). The power of the test grows with m under the considered alternative, while it seems to be unaffected by changes in p. Fig. 9 shows how the tests T_D depend on the number of observations n = 100, . . . , 1 000, while Fig. 10 illustrates the dependence of the tests on the shift Δ in the data generating model. The observed dependences are as expected. Finally, Fig. 11 illustrates the test performance for the multivariate t_df distribution.

To sum up the simulation results, the χ²-tests T_D behave as expected and in harmony with the theory. They seem to be correctly sized at least for response dimensions as high as m = 10 and regressor dimensions as high as p = 10 when the conditional density estimation is not needed, n ≥ 20(m + p), and n_τ ≤ 5 or so, which covers the most typical situations. In general, it can also be recommended not to choose n_τ needlessly high or ε_τ needlessly small. One should be more careful, though, if the density estimation is required, i.e., in the non-normal regression case. Nevertheless, the tests may then behave reasonably even for m + p − 1 = 4 if n is large enough, such as n = 1 000, at least in the cases considered here.

Test comparison
The general asymptotic tests presented here have no close competitors, perhaps except for the follow-up tests for elliptical distributions; see [11] for the tests and the comparison. Consequently, the comparison with other available tests is possible only in very special cases.
The only known test working with general axes of symmetry is that of [20]. Therefore, it is used here as a benchmark with the two recommended values of the auxiliary binning parameter k (namely k = 2 and k = 3). However, the test is only bivariate and works with fixed axes instead of axial directions. It is applied to the problem of testing symmetry of a bivariate normal distribution (with zero mean, unit marginal variances, and correlation ρ) around the x-axis and compared with the process-based test T_C presented here for testing level α = 0.05 in terms of empirical size (ρ = 0) and power (ρ > 0) obtained by means of 10 000 simulated independent random samples with n = 60, 75, 100 or 150 observations. The benchmark empirical powers and sizes were copied from Table 3 of [20] for maximum reliability. See Table 1 for the results.
The test T_C appears conservative for the small sample sizes considered, which is in line with Figs. 2 and 3. In spite of that, it still beats the benchmark considerably in terms of empirical test power.

Real data
This section applies the process-based test T C to three real data sets for the sake of illustration.
First, consider the famous Fisher's Iris (flower) data set as included in R. It consists of 50 samples from each of the three Iris species considered. Each sample contains measurements regarding the length and the width of the sepals and petals (in centimeters). Assume the null hypothesis that the probability distribution of petal length is the same for all the three species up to a location shift. According to Section 3, it can be tested by means of the presented axial symmetry tests, direction u = (1, 1, 1)⊤/√3, and the combined sample of X_i = (X_i^(1), X_i^(2), X_i^(3))⊤, where X_i^(j) is the ith observation (of petal length) from the jth sample. If the process-based test T_C is applied, then the null hypothesis is rejected with a p-value of about 0.0008. Similar results can be obtained even for the petal width.
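The data preparation for this example can be sketched in Python as follows (using scikit-learn's copy of the iris data instead of R's; the test T_C itself is not reproduced here, and the row-wise pairing of the three independent samples is arbitrary):

```python
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
petal_length = iris.data[:, 2]          # third feature column: petal length (cm)
species = iris.target                   # 0, 1, 2 for the three species

# Stack the 50 petal-length measurements of each species side by side into
# one trivariate sample to be tested for axial symmetry around an axis
# with direction u = (1, 1, 1)^T / sqrt(3).
sample = np.column_stack([petal_length[species == k] for k in range(3)])
u = np.ones(3) / np.sqrt(3.0)
```

The resulting array has 50 rows and 3 columns, one column per species.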
The axial symmetry of certain economic or financial data distributions may have some meaningful economic interpretations; see, e.g., [21] for an application to exchange rates. For example, consider 1146 log-returns of ten Forex 1M exchange rates (AUD/CHF, AUD/JPY, EUR/CAD, EUR/CHF, GBP/CAD, GBP/USD, NZD/CHF, USD/CHF, USD/NOK, XAU/AUD) from 13/11/2014 19:11 to 14/11/2014 14:16, combined into one ten-dimensional sample in the same order. Then the null hypothesis assuming both serial independence and exchangeability (up to a shift) would be rejected (for u = (1, 1, . . . , 1)⊤/√10) by T_C with a p-value much less than 0.01. The same null hypothesis applied only to the bivariate sample corresponding to the first two exchange rates would be rejected (for u = (1, 1)⊤/√2) with a p-value less than 0.0005. The same conclusion would be obtained for the bivariate sample of GBP/CAD and GBP/USD. The null hypothesis assuming both serial independence and symmetry around the last (and possibly shifted) coordinate axis (u = (0, . . . , 0, 1)⊤) would also be rejected in the ten-variate sample as well as in the two bivariate ones (all p-values less than 0.0005).

Table 1. The table relates to the problem of testing symmetry of a bivariate normal distribution (with zero mean, unit marginal variances, and correlation ρ) around the x-axis. It uses the test of [20] with the two recommended values of the auxiliary binning parameter k (namely k = 2 and k = 3) as a benchmark, denoted B(k = 2) and B(k = 3). The benchmark is compared with the process-based test T_C in terms of empirical power (for ρ > 0) or size (for ρ = 0) for testing level α = 0.05, based on 10 000 simulations of independent samples with n = 60, 75, 100 or 150 observations. The benchmark empirical sizes and powers were only copied from the original Table 3 of [20] to minimize the chance of an error. The critical values for T_C were obtained from [1].

If 626 (virtually serially uncorrelated) log-returns of four daily exchange rates (AUD/CZK, CAD/CZK, EUR/CZK, USD/CZK) from 2/5/2017 to 30/10/2019 were considered as a four-variate i.i.d. sample, then the null hypothesis of exchangeability would still be rejected with p-value less than 0.001. Similarly clear rejection would be obtained also for the null hypothesis assuming axial symmetry around the last coordinate axis or for the bivariate sample consisting only of EUR/CZK and USD/CZK.
As the third possible application, consider the Australian athletes data set ais contained in the R package DAAG [17]. Its subsets are used in both [12] and [8] for testing certain symmetry hypotheses including axial symmetries. In particular, the latter article tested (and rejected) the spherical symmetry of the joint distribution of the logarithms of the red blood cell count, white blood cell count, and hemoglobin concentration, where all the three characteristics were obtained for 202 athletes. It is no wonder that the spherical symmetry was rejected because the process-based test T_C rejects the axial symmetry for the (coordinate) axial directions u = (0, 0, 1)⊤ and u = (1, 0, 0)⊤, always with a p-value less than 0.0005. The results are in line with those regarding the spherical symmetry of all the three bivariate marginals, reported in [8].

Appendix
This section contains the proofs of Propositions 1 to 3 as well as the technical remarks commenting on them. The first three pertain to Proposition 2 while the remaining two comment on Proposition 3.
Proof of Proposition 1. Obviously, (1) follows from the text preceding the proposition. Claims (2) and (3) are proved here.
Transformation W := VY leads to the spherically distributed random vector W and to the transformed gradient conditions (2.3) and (2.4) that turn into τ = E I(b⊤_τu V⁻¹W − a_τu < 0). Then T_n(τ) is invariant with respect to the transformations X → A⊤X because the centered rank scores do not change under such transformations. Note also that the sample variance-covariance matrix estimator Ŵ_0(Y, X) exhibits all the equivariance and invariance properties mentioned in this remark.