Kendall transformation brings a robust categorical representation of ordinal data

Information theory [1] is a powerful framework utilised in many branches of statistics and machine learning. It is used, among others, for association testing [2], feature selection [3], network reconstruction [4] and clustering [5]. The known deficiency of the theory, though, is that it is well defined for discrete probability distributions [6], yet there is no unequivocally proper generalisation over continuous distributions which would retain important properties without causing substantial theoretical or practical issues.
A common approach here is to simply quantise data before information-theoretic analysis, that is, to approximate it using a small set of discrete values and treat it as categorical afterwards. This is often done through binning, which involves splitting the range of the original feature into consecutive intervals (bins), and then mapping each original value to a discrete value common to the corresponding bin. In the special case of dichotomisation only two bins are used, defined by a single threshold value, which yields a binary outcome. This approach is popular in biomedical applications, as it generates straightforward, comprehensible results [7]. On the other hand, quantisation is a lossy procedure, as it wipes out all intra-bin interactions; consequently it also exaggerates inter-bin differences, which may lead to spurious effects like banding or tautological conclusions. The most substantial problem in the context of this work, however, is that quantisation can be done in numerous ways, bringing an additional burden of heuristics and hyper-parameters which can critically influence outcomes, especially in the small data limit [8].
The other substantial approach is differential entropy, which is a straightforward generalisation over continuous probability distributions. Unfortunately, while very useful on its own, it violates many useful properties of the discrete Shannon entropy and related quantities [9], seriously impacting the applicability of more sophisticated methods developed for discrete cases. Moreover, on the practical side, this approach requires modelling and estimation of the distributions underlying the analysed data, which may easily become a highly non-trivial task relying on cumbersome heuristics, no different from quantisation in this regard. Hence, it is desirable to look for more robust and generic approaches.
Following the idea behind the Kendall correlation [10,11], a continuous variable can be represented as a graph of order relations between each pair of its values, and the association of variables can be measured in terms of their similarity in such a representation. Still, the said graph can be thought of not just as an intermediate artefact, but as a useful concept on its own. In particular, the list of its edges, when ordered in some arbitrary yet common way, can be written as a discrete variable with at most three states: less, greater and equal. I will refer to the conversion to such a form as Kendall transformation. This approach involves neither strong assumptions, parameters nor optimisation, and its output can be directly fed into any method expecting categorical input without any adaptations. Although it only preserves the ranking of the original variable, the success of rank tests [12] suggests it is an effective trade-off in many practical applications.
In this paper I will analyse the properties of Kendall transformation and argue that it is a sufficient and reliable representation for many information-based inquiries into the data, and consequently a robust and effective alternative to both binning and elaborate continuous entropy estimators.
Figure 1. Kendall transformation of a toy information system with four observations (left). Three continuous features, a, b and c, are converted into matrices of relations between values (centre). Matrices are flattened in an arbitrary but consistent manner and collected into a transformed system, which consists of twelve observations, but only has categorical values (right).
Let us assume some way of ordering all \(m=n(n-1)\) possible pairs from the set \(\{1..n\}\), with \((a_j, b_j \ne a_j)\) denoting the j-th such pair. Without loss of generality, I will assume it to be the one corresponding to a column-wise traversal of an \(n \times n\) matrix. Furthermore, let us assume X to be a totally ordered set, that is, one equipped with a greater-or-equal relation \(\ge\) that is reflexive, transitive, antisymmetric and strongly connected.
For an arbitrary n-element vector \(x \in X^n\), Kendall transformation is a function \(\mathsf{K}: X^n \rightarrow \{\bigtriangleup, \bigtriangledown, \square\}^m\) defined as
\[
\mathsf{K}(x)_j =
\begin{cases}
\bigtriangleup & \text{if } x_{a_j} > x_{b_j},\\
\bigtriangledown & \text{if } x_{a_j} < x_{b_j},\\
\square & \text{if } x_{a_j} = x_{b_j}.
\end{cases}
\]
By extension, Kendall transformation of an information system exclusively composed of totally ordered features is an information system in which every feature has been Kendall-transformed. Figure 1 presents an illustrative example of the transformation of a small, toy system. It is trivial to note that \(\mathsf{K}(x)=\mathsf{K}(f(x))\) for any strictly increasing f, as \(f(a)\ge f(b) \rightarrow a\ge b\) in this case. Hence, the proposed transformation is lossy, but exactly preserves the observation ranking.
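As an illustration, below is a minimal Python sketch of the transformation of a single numeric vector; this is my own toy implementation, not the author's reference code, and the state labels and the exact pair order are assumptions that only need to be applied consistently.

```python
# A toy Kendall transformation of one vector: every ordered pair of distinct
# observations is mapped to one of three categorical states.
import numpy as np

def kendall_transform(x):
    """Map an n-element vector to m = n*(n-1) categorical values:
    'up' if x[i] > x[j], 'down' if x[i] < x[j], 'tie' otherwise."""
    x = np.asarray(x)
    n = len(x)
    out = []
    # column-wise traversal of the n x n matrix of pairs, skipping the diagonal
    for j in range(n):
        for i in range(n):
            if i != j:
                out.append("up" if x[i] > x[j] else "down" if x[i] < x[j] else "tie")
    return np.array(out)

# Four observations, as in the toy system of Figure 1: 12 categorical values result
print(kendall_transform([0.3, 1.2, 0.7, 0.7]))
```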
Let us further assume for a moment that there are no ties between values, hence that the equal state does not occur. With two features x and y, we say that pair j is concordant if \(\mathsf{K}(x)_j=\mathsf{K}(y)_j\) and discordant otherwise. The Kendall correlation coefficient \(\tau\) is then the difference between the number of concordant and discordant pairs, normalised by m (one should note that \(\tau\) is quantised into \(m/2+1\) states; in particular, \(\tau=0\) is only possible for n or \(n-1\) divisible by 4). Furthermore, the entropy of a Kendall-transformed vector is \(\log(2)\), and the mutual information (MI) between two transformed variables, \(I^{\mathsf{K}}\), is a function of the Kendall correlation \(\tau\) between them, namely
\[
I^{\mathsf{K}}(\tau) = \frac{1+\tau}{2}\log(1+\tau) + \frac{1-\tau}{2}\log(1-\tau).
\]
It is an even function, strictly increasing for \(\tau>0\). Moreover, it can also be considered as an extension of a simple, commonly used formula connecting MI and the correlation coefficient \(\rho\),
\[
I^{G}(\rho) = -\log\sqrt{1-\rho^{2}}.
\]
It is derived by subtracting the differential entropy of a bivariate normal distribution from the sum of the differential entropies of its marginals, with a constant factor \(\log\sqrt{2\pi e}\) omitted, and, although valid only for the Pearson correlation, it is often used with other coefficients, usually Spearman's [13]. Both functions behave similarly for small absolute values of \(\tau\) and \(\rho\). On the other hand, when the respective correlation coefficient approaches 1 or −1, \(I^{\mathsf{K}}\) achieves its maximum, while \(I^{G}\) diverges, causing problems in certain use cases, especially when highly correlated features are of interest.
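To make the two expressions concrete, here is a short Python sketch of both curves, in nats; the function names are mine and the implementation is only an illustration of the formulas given above.

```python
# I^K as a function of the Kendall tau, and the Gaussian formula I^G as a
# function of the Pearson rho (both in nats).
import math

def mi_kendall(tau):
    """I^K(tau): even in tau, 0 at tau = 0, bounded by log(2) as |tau| -> 1."""
    def xlogx(t):
        return t * math.log(t) if t > 0 else 0.0
    return xlogx(1 + tau) / 2 + xlogx(1 - tau) / 2

def mi_gaussian(rho):
    """I^G(rho) = -log(sqrt(1 - rho^2)): diverges as |rho| -> 1."""
    return -0.5 * math.log(1.0 - rho ** 2)

print(mi_kendall(0.0), mi_kendall(1.0))    # 0.0 and log(2) ~ 0.693
print(mi_kendall(0.5), mi_gaussian(0.5))   # ~0.131 vs ~0.144: similar for moderate values
```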
As with the Pearson or Spearman [11,14] correlation, on Kendall-transformed information systems we can only detect monotonic relationships; if, say, \(x\sim\mathscr{U}(-1,1)\), the relation between x and \(x^2\) will be lost. I will argue that this is a well-justified constraint in small-n problems, however. Under the null hypothesis of interaction testing, i.e., independence between variables, the probability of each joint state is a product of its marginal probabilities. It is very unlikely to obtain a small sample exhibiting such symmetry, though, hence agreement with the null may easily become less likely than with the alternative hypothesis. Obviously, we may apply corrections for this phenomenon, but at the cost of severely hindered sensitivity. On the other hand, even among short sequences, the probability of randomly getting a sorted one is minimal (1/n! without ties), hence the null of non-monotonicity is much more robust.
Figure 2. Mutual information between two variables drawn from a bivariate normal distribution with correlation r, calculated with different methods. The bands show the 5th, 25th, 50th, 75th and 95th percentiles over 100 realisations, while the red lines show the theoretical value, or its estimate for 5000 samples.
Let us empirically investigate the properties of \(I^{\mathsf{K}}\) on synthetic data drawn from a bivariate normal distribution; we set the marginal distributions to \(\mathscr{N}(0,1)\) and the covariance to r. For comparison, we will use five other methods of estimating MI: quantisation into three or five bins of equal width, the Pearson correlation coefficient plugged into the formula for \(I^{G}\), and finally two versions of the k-nearest neighbour estimator, namely variant 2 of the Kraskov estimator [15] with \(k=5\) (KSG) and its extension corrected for local non-uniformity [16] with \(k=10\) and \(\alpha=0.65\) (LNC). When not noted otherwise, the maximal likelihood estimator of entropy is used. The results of this experiment, repeated for four values of r, various sample sizes n and over 100 random replicates, are collected in Fig. 2.
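A rough sketch of a single replicate of such a comparison is given below, covering only the plug-in estimate on Kendall-transformed data and on three equal-width bins (the KSG and LNC estimators are not reproduced); the sample size, correlation and seed are arbitrary choices of mine.

```python
# One replicate: plug-in MI on Kendall-transformed data versus on equal-width
# binned data, for a bivariate normal sample with correlation r.
import numpy as np

rng = np.random.default_rng(0)

def ktrans(x):
    n = len(x)
    return ["u" if x[i] > x[j] else "d" if x[i] < x[j] else "t"
            for j in range(n) for i in range(n) if i != j]

def plugin_mi(x, y):
    """Maximum-likelihood MI (nats) between two discrete sequences."""
    n = len(x)
    joint, px, py = {}, {}, {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    return sum(c / n * np.log(c * n / (px[a] * py[b])) for (a, b), c in joint.items())

def bin3(x):
    edges = np.linspace(x.min(), x.max(), 4)[1:-1]   # three equal-width bins
    return np.digitize(x, edges)

r, n = 0.8, 30
x, y = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=n).T
print("Kendall :", plugin_mi(ktrans(x), ktrans(y)))
print("3 bins  :", plugin_mi(bin3(x), bin3(y)))
print("I^G(r)  :", -0.5 * np.log(1 - r ** 2))   # differential-MI reference
```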
We can see that quantisation leads to very poor convergence, which makes such estimators unsuitable for low-n cases; this is especially pronounced for \(r=0\), which in practice would lead to many false positives in association sweeps. In contrast, \(I^{\mathsf{K}}\) provides nearly unbiased results even for the smallest values of n, as well as a relatively small variance.
The remaining methods estimate differential MI, that is \(-\log\sqrt{1-r^2}\). The one based on the Pearson correlation behaves very similarly to \(I^{\mathsf{K}}\), yet with a smaller variance. Still, this is due to the fact that we have fully satisfied the strong assumptions this estimator relies on; it won't be nearly as effective in the general case. The KSG estimator has a very low variance, and provides a bell-shaped distribution of estimates for \(r=0\), which is handy for independence testing. Still, in highly correlated cases it exhibits its known deficiency of a very slow convergence. The LNC estimator, on the other hand, converges fast for any r; it is visibly biased in the independence case, however, which is likely to hurt specificity.
Clearly, Kendall transformation is the most versatile and robust among the investigated solutions; it works reliably over the entire range of analysed cases, takes no parameters and is never substantially inferior to the best method.
As mentioned earlier, the entropy of a Kendall-transformed variable is \(\log(2)\) if there are no ties, regardless of the distribution of the original. This becomes intuitive given that this transformation, similarly to ranking, wipes out scale information and retains only order; hence, it effectively converts any input distribution into a uniform one. Tied values cannot be separated by such a process, hence the resulting effective distribution in the general case is a mixture of a uniform distribution and Dirac deltas located over the tied values, which is a richer, more complex structure. In a similar fashion, the introduction of ties generates \(\square\) states in the transformed variable, which first increases its entropy, up to \(\log(3)\), when the complexity contributions of the uniform and discrete components become balanced. Additional ties effectively convert the distribution into a discrete one, decreasing the entropy of the transformation. Finally, due to the coalescence of values, only one state remains both in the original variable and its Kendall transformation, and the entropies of both become 0.
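The behaviour described above can be verified numerically; the following sketch (my own illustration, with arbitrary toy vectors) computes the plug-in entropy of Kendall-transformed variables with no ties, with two tied groups, and with all values coalesced.

```python
# Entropy of a Kendall-transformed variable: log(2) without ties, above log(2)
# (but below log(3)) for a moderate tie structure, and 0 for a constant vector.
import numpy as np
from collections import Counter

def ktrans(x):
    n = len(x)
    return ["u" if x[i] > x[j] else "d" if x[i] < x[j] else "t"
            for j in range(n) for i in range(n) if i != j]

def entropy(states):
    counts = Counter(states)
    m = len(states)
    return -sum(c / m * np.log(c / m) for c in counts.values())

examples = {
    "no ties":    list(range(12)),      # twelve distinct values
    "two groups": [0] * 6 + [1] * 6,    # two tied groups of equal size
    "constant":   [0] * 12,             # a single value
}
for name, x in examples.items():
    print(name, round(entropy(ktrans(x)), 3))
# roughly 0.693 (= log 2), 1.067 (between log 2 and log 3 ~ 1.099), and 0.0
```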
Naturally, the correspondence between the actual entropy of a discrete variable and the entropy of a Kendall transformation of its numerical encoding holds only in the constant and binary cases; otherwise the order in which states are encoded becomes important. Hence, Kendall transformation is directly applicable to numeric, ordinal and binary features, and can provide a viable representation of ties when they are not numerous enough to dominate the ordinal nature of a feature. The proper handling of arbitrary categorical data in such a framework is a subject for further research; though, by a simple extension, we may analyse such features by breaking them into a set of category-vs-other indicator features.
One should note, though, that the above reasoning applies to actual ties, i.e., pairs of values that are indistinguishable, like two days without rain; as opposed to two days with such a similar amount of precipitation that the resolution of the sensor is insufficient to differentiate them. In the latter case it is better to break such artificial ties using random jitter, or to treat comparisons between them within the Kendall-transformed variable as missing observations.
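A minimal sketch of the first of these options, breaking artificial ties with a tiny random jitter before the transformation, is given below; the jitter scale and the data are arbitrary assumptions of mine, and the alternative of marking such comparisons as missing is not shown.

```python
# Break resolution-induced ties with a jitter much smaller than the measurement
# scale, so that the order of genuinely different values is preserved.
import numpy as np

rng = np.random.default_rng(0)

def break_artificial_ties(x, scale=1e-9):
    x = np.asarray(x, dtype=float)
    return x + rng.uniform(-scale, scale, size=x.shape)

readings = np.array([1.3, 1.3, 2.7, 0.4, 2.7])   # ties caused by limited sensor resolution
print(break_artificial_ties(readings))           # ties broken at random, other comparisons intact
```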
Interestingly, when x is ordered and contains no ties and y is binary, \(I^{\mathsf{K}}\) also corresponds to a popular measure of association. Namely, it is a function of \(A\), defined as the area under the receiver operating characteristics curve [17] (AUROC), normalised by the sizes of both classes, a and \(b=n-a\). This way, Kendall transformation is also connected with the Mann-Whitney-Wilcoxon test [18], as its statistic \(U=ab(1-A)\).
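The stated relation \(U=ab(1-A)\) can be checked directly on a small example; the data below are illustrative, not from the paper, and the U statistic is computed here for the negative class (the usual convention determines which of the two complementary U values is reported).

```python
# For an untied numeric x and a binary y with class sizes a and b = n - a:
# A is the fraction of (positive, negative) pairs won by the positive value,
# and the rank-sum statistic counted for the negative class equals a*b*(1 - A).
import numpy as np

x = np.array([0.1, 0.9, 0.4, 0.35, 0.2, 0.7])
y = np.array([0,   1,   0,   1,    0,   1  ])

pos, neg = x[y == 1], x[y == 0]
a, b = len(pos), len(neg)

A = sum(1 for p in pos for q in neg if p > q) / (a * b)   # AUROC, assuming no ties in x
U = sum(1 for p in pos for q in neg if q > p)             # pairs won by the negative class
print(A, U, a * b * (1 - A))   # A ~ 0.889, and U = 1 = a*b*(1 - A)
```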
Figure 3. Values of certain information scores for a Kendall-transformed system of three features a, b and y engaged in a simple interaction, \(y=a\lambda+b(1-\lambda)\) (left) or \(y=\max\{a\lambda, b(1-\lambda)\}\) (right), for a range of realisations with a different \(\lambda\) parameter.
The important gain of Kendall transformation is that transformed features can be used in more complex considerations than just bivariate ones. In particular, we can calculate joint, conditional or multivariate mutual information and use it to investigate relationships between features.
As an example of such an analysis, let us consider an information system composed of three independent, random features \(a,b,c\sim U(0,1)\) and a decision which is either a linear \(y=a\lambda+b(1-\lambda)\) or an example non-linear \(y=\max(a\lambda, b(1-\lambda))\) function of a and b. It is worth noticing that although the linear relation seems pretty basic, Kendall transformation makes it invariant to monotonic transformations of a, b and y, hence this case covers more complex functions, for instance \(y=\sinh^{\mu}(a)\cdot(b-b_0)^{\nu}\). Let us also denote the Kendall-transformed features with upper-case letters, i.e., \(A=\mathsf{K}(a)\) and so on. Figure 3 contains the maximum likelihood estimates of certain information scores in such a system, for \(n=200\) and a selection of realisations for a range of \(\lambda\) values.
We can see that for most realisations the joint mutual information I(A, B; Y) is larger than either marginal mutual information, signifying that considering both features allows for a better prediction of Y. Their interaction can be directly measured with the conditional mutual information I(A; B|Y); we can also compare it to a baseline of I(A; C|Y), which is asymptotically zero, as C is irrelevant. This score substantially dominates the baseline for almost the full range of \(\lambda\) in the linear case, and for at least half of it in the much harder non-linear case, confirming the previous conclusion of the presence of an interaction. Moreover, it reaches its maximum for \(\lambda=1/2\), i.e., balanced impacts of a and b, and decreases as either of them dominates, which is a reasonable outcome.
In yet another view, we can analyse the three-feature mutual information I(A; B; Y). Because of the independence of A and B, it is approximately equal to \(-I(A;B|Y)\), hence it has a negative minimum for \(\lambda=1/2\), which also signifies synergy between the three features. Similarly, the marginal MIs I(A; Y) and I(B; Y) behave as a reasonable measure of relative impact, akin to weights in linear regression, yet generalisable over many non-linear relationships.
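The scores used above can be computed with a naive plug-in estimator on the transformed features; the sketch below follows the linear toy decision from the text, while the helper functions, the reduced sample size and the seed are assumptions of mine.

```python
# Plug-in estimates of marginal, joint and conditional MI on Kendall-transformed
# features A, B, C and Y, for the linear decision y = lambda*a + (1 - lambda)*b.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def ktrans(x):
    n = len(x)
    return tuple("u" if x[i] > x[j] else "d" if x[i] < x[j] else "t"
                 for j in range(n) for i in range(n) if i != j)

def H(*cols):
    """Plug-in joint entropy (nats) of the given discrete columns."""
    counts = Counter(zip(*cols))
    m = sum(counts.values())
    return -sum(c / m * np.log(c / m) for c in counts.values())

def mi(x, y):       # I(x; y)
    return H(x) + H(y) - H(x, y)

def cmi(x, y, z):   # I(x; y | z)
    return H(x, z) + H(y, z) - H(x, y, z) - H(z)

n, lam = 50, 0.5
a, b, c = (rng.uniform(size=n) for _ in range(3))
y = lam * a + (1 - lam) * b
A, B, C, Y = map(ktrans, (a, b, c, y))
AB = list(zip(A, B))                       # (A, B) treated as a single joint variable

print("I(A;Y)   =", mi(A, Y))
print("I(B;Y)   =", mi(B, Y))
print("I(A,B;Y) =", mi(AB, Y))             # exceeds either marginal MI
print("I(A;B|Y) =", cmi(A, B, Y))          # interaction score, versus the baseline:
print("I(A;C|Y) =", cmi(A, C, Y))
```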
The other important aspect of using Kendall transformation for multivariate analysis is how well we can estimate the joint distribution, which generally depends on how many states there are and how many observations of each we can count. The number of states increases exponentially with the dimension of the analysis, yet the base of the said exponent is usually 2 when using Kendall transformation and 5-10 in the case of a typical quantisation. Also, due to the nature of the Kendall transformation, the effective number of observations is squared and the marginal distributions are well balanced, reducing the noise in counts and the risk of spuriously unobserved states.
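A back-of-the-envelope illustration of this argument, with arbitrary numbers of my choosing (and bearing in mind that the pairs are not independent observations, as noted in the next paragraph):

```python
# Joint states to estimate versus observations available, for a 3-dimensional
# analysis of n = 30 objects: Kendall transformation (2 states per feature,
# ignoring ties) against 5-bin quantisation.
n, d = 30, 3
kendall_states = 2 ** d          # 8 joint states
binned_states  = 5 ** d          # 125 joint states
kendall_obs    = n * (n - 1)     # 870 pairs act as observations
binned_obs     = n               # 30 observations
print(kendall_obs / kendall_states)   # ~109 observations per state
print(binned_obs / binned_states)     # ~0.24 observations per state
```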
Kendall-transformed information systems can also be used as an input for sophisticated machine learning methods. Some approaches can operate in a straightforward way, like many feature selection methods, which may be used for explanatory analysis; others may require appropriate adaptations. The particular caveat is that the transformation generates an artificial correlation structure between pairs, hence methods which rely on them being independent may become biased.
A Kendall-transformed feature can be transformed back using the following method. First, for each object i we extract the values of pairs \((i,\cdot)\), and score each \(\bigtriangleup\) with 1 and each \(\bigtriangledown\) with \(-1\). Then, we order objects according to the decreasing score, which recovers the ranking. This algorithm is equivalent to Copeland's election method [19], and it is trivial to see that it always correctly reverses a valid result of a Kendall transformation.
Kendall transformation is non-surjective; hence, when operating on transformed data, we can stumble upon invalid sequences of \(\bigtriangleup\), \(\bigtriangledown\) and \(\square\). For instance, let us consider the feature \((\bigtriangleup,\bigtriangledown,\bigtriangledown,\bigtriangleup,\bigtriangleup,\bigtriangledown)\): it corresponds to a cycle of three observations, and cannot be generated by transforming any ranking. While there is no universal solution covering such cases, Copeland's method can be easily adapted and used as one of the possible heuristic solutions; in this particular case, it maps this feature into all observations having the same rank, which intuitively seems a reasonable solution.
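A minimal sketch of this Copeland-style inversion follows; it assumes the same column-wise pair order as the earlier transformation sketch, and the function name and state labels are mine.

```python
# Copeland scoring for the inverse transformation: each object gains +1 for every
# 'up' and -1 for every 'down' among its pairs (i, .); ordering by decreasing
# score recovers the ranking, and invalid (cyclic) inputs collapse into tied scores.
import numpy as np

def copeland_scores(pairs, n):
    """pairs: n*(n-1) states in column-wise pair order (i, j), i != j, j outer."""
    idx = [(i, j) for j in range(n) for i in range(n) if i != j]
    score = np.zeros(n)
    for (i, j), state in zip(idx, pairs):
        if state == "up":      # the i-th value is the greater one
            score[i] += 1
        elif state == "down":  # the i-th value is the smaller one
            score[i] -= 1
    return score

# A valid transformation of x = [0.3, 1.2, 0.7] recovers its ranking ...
print(copeland_scores(["up", "up", "down", "down", "down", "up"], 3))   # [-2.  2.  0.]
# ... while the cyclic example from the text yields a three-way tie (under this pair order)
print(copeland_scores(["up", "down", "down", "up", "up", "down"], 3))   # [0. 0. 0.]
```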
Furthermore, sometimes we can also expect missing values or fuzzy predictions, i.e., probabilities or weights of each state. In such cases, we can reach deeper into social choice theory for a more sophisticated approach, like Schulze's [20] or Tideman's [21] method. They will produce a consensus graph of ordering relations, from which we can recover complex tie structures and disconnected subsets of observations.
One of the common problems in molecular biology is the lack of consistent calibration of certain experimental procedures, especially the high-throughput ones. From a mathematical point of view, this phenomenon can be well modelled with the assumption that the results for each batch of samples are filtered by an arbitrary monotonic function, specific to a given batch. This way, explanatory outcomes from different batches are mostly consistent; yet, raw datasets cannot be naively combined, for instance to look for more subtle aspects in a meta-analysis. This is because the batch effect can be very substantial, even to the point of overshadowing actual interactions, heavily biasing the results.
Kendall transformation can be used as another approach to tackle this problem. Independently transformed data sets from different batches do not retain the strictly increasing dis-calibration effects, hence they can be safely combined without the risk of introducing bias. The merged data will not represent relations in cross-dataset pairs, but we can treat these rows as missing and still conduct tasks like feature selection or model training. Only for the prediction do the sets have to be separated to perform the inverse transformation; the cross-set relations will be left unknown, but so they were in the first place.
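A toy sketch of this idea is given below (my own illustration): each batch is transformed on its own and the resulting pair-rows are concatenated, so a strictly increasing per-batch distortion has no effect on the merged representation.

```python
# Merging two batches after independent Kendall transformation: only intra-batch
# pairs are formed, so tripling the values of one batch (a monotonic batch effect)
# leaves the merged categorical data unchanged.
import numpy as np

def ktrans(x):
    n = len(x)
    return ["u" if x[i] > x[j] else "d" if x[i] < x[j] else "t"
            for j in range(n) for i in range(n) if i != j]

batch1 = np.array([0.2, 0.5, 0.9, 0.4])
batch2 = np.array([0.1, 0.6, 0.3])
distorted_batch2 = 3.0 * batch2                 # an uncalibrated measurement of batch 2

merged_clean     = ktrans(batch1) + ktrans(batch2)
merged_distorted = ktrans(batch1) + ktrans(distorted_batch2)
print(merged_clean == merged_distorted)         # True: the batch effect is removed
```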
For an example analysis using the Kendall transformation, let us consider the morphine withdrawal data set from [22]. It collects concentrations of 15 neurotransmitters in 6 brain structures, measured in four groups of rats: subject to either a morphine or a saline treatment, and measured either directly after treatment or after a 14-day withdrawal period and re-exposure to the administration context. Additionally, the ultrasonic vocalisation (USV) intensity of the rats was quantified as the number of episodes during a 20-minute recording session. In contrast to the original work, in the data used here one record has been rejected due to missing data.
Overall, the set is composed of 37 objects and 90 continuous features, the distributions of which are predominantly not normal, as well as three separate decision features: a continuous USV episode count, and two categorical ones, morphine treatment and withdrawal period. Such a structure is typical of many biomedical studies.
In the original paper, a standard, bivariate non-parametric statistical analysis was used to identify compounds significantly connected with each of the decision features. Namely, categorical decisions were analysed with the Mann-Whitney-Wilcoxon U test [18], while the continuous one with a Spearman \(\rho\) test [23]. Such an analysis, combined with a multiple comparisons correction, yields 5 significant features for the episode count problem, 1 for morphine and 13 for withdrawal. Let us compare this outcome with the mutual information rankings of features obtained on either Kendall-transformed or binned data, as well as with the Random Forest importance [24] applied directly (using the randomForest R package [25], in its default set-up). Their agreement can be quantified by the maximal value of the Jaccard index over all possible cut-offs in the respective ranking. Precisely, the Jaccard index [26,27] measures the similarity of two sets, A and B, and is defined as
\[
J(A,B) = \frac{|A \cap B|}{|A \cup B|}.
\]
Here, we have the set of all features, \(\Omega\), a golden standard set \(G\subset\Omega\) of features selected by the standard analysis, and a ranking or scoring of features, which can be commonly expressed as a function \(s:\Omega\rightarrow\mathbb{R}\) having greater values for more important features. Then, the final score can be formally defined as
\[
\max_{t\in\mathbb{R}} J\left(G, \{\omega\in\Omega : s(\omega)\ge t\}\right).
\]
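A short sketch of this agreement score, assuming the scoring is given as a dictionary; the feature names, scores and golden-standard set below are illustrative, not the paper's results.

```python
# Maximal Jaccard index between the golden-standard set and the top-k sets
# induced by a feature scoring, taken over all possible cut-offs k.
def max_jaccard(scores, gold):
    """scores: dict feature -> importance; gold: set of golden-standard features."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    best = 0.0
    for k in range(1, len(ranked) + 1):
        top = set(ranked[:k])
        best = max(best, len(top & gold) / len(top | gold))
    return best

scores = {"a": 0.9, "b": 0.7, "c": 0.3, "d": 0.1}
print(max_jaccard(scores, gold={"a", "b", "d"}))   # 0.75, attained at the top-4 cut-off
```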
Table 1 collects the results of the said experiment; we see a perfect agreement in the case of Kendall transformation, which is unsurprising given the aforementioned equivalence relations, as well as the fact that the Spearman and Kendall correlation coefficients are usually highly correlated. The rankings on quantised data are substantially influenced by the binning method; even though all predictors are the same, a different method is optimal for each decision. Perfect agreement with the baseline is achieved only once, and the agreement is rather poor on average. The much more elaborate approach, RF importance, achieves a relatively high average agreement of 0.89, given that it also considers multivariate relations. Overall, the results support the notion that simple binning is susceptible to exaggerating spurious interactions inherent to small-sample data. Kendall transformation not only helps to avoid this phenomenon, but also requires no hyper-parameters, which have a critical impact on binning.
In order to investigate Kendall transformation in the context of machine learning, let us now compare the accuracy of the Random Forest method applied to the morphine data directly and after Kendall transformation. Accuracy is estimated using a bootstrap train-test split, hence with roughly 63% of observations used to build a model predicting the remaining data; in both cases the split is done before transformation. For the modelling after Kendall transformation, predictions for pairs are translated into a ranking of the original objects using the Copeland method proposed in the Inverse transformation section, considering the fraction of votes cast for the \(\bigtriangleup\) state. Similarly, for the baseline, the vote fraction is used as the outcome for classification problems (Morphine and Withdrawal), while the regression mode is used for the USV count prediction. Finally, the accuracy is quantified with the Spearman correlation for regression, or with AUROC for classification. As in the previous example, the randomForest [25] implementation is used.
The results of this analysis, repeated over 100 random replications, are reported in Table 2. We see that the application of Kendall transformation, despite its lossy nature, does not substantially hinder the accuracy of the prediction; the loss is statistically significant only in the case of the Morphine problem, and even there it amounts to about a quarter of the interquartile range. Thus, we can conclude that Kendall-transformed data is a viable input for a machine learning method.
For the sake of this example, let us simulate the acquisition of data from two incoherent measurements. To this end, the morphine set is randomly split in half, and the values of the features in one part are tripled. Next, we either naively merge the halves back and apply Kendall transformation to the fused set, or the other way round, merge the independently transformed parts. Finally, we compare the agreement of the mutual information-based feature rankings obtained after each processing with the one calculated on the golden standard: the Kendall-transformed original, unperturbed data.
The results, quantified by the Spearman correlation coefficient and averaged over 100 realisations, are presented in Table 3. We see that the simulated measurement artefact has a substantial, negative impact on the accuracy and stability of the ranking in the naive scenario, while the merge of Kendall-transformed parts reliably achieves over 90% agreement. This is expected due to the fact that the applied disruption is a strictly increasing function, to which Kendall transformation is invariant. The agreement is not perfect, however, because the set merged after transformation lacks almost 50% of the information, namely all the pairs representing intra-half relationships; in light of this fact, the observed loss is rather limited.
Kendall transformation is a novel way to represent ordinal data in a categorical form, and thus to apply discrete methods and approaches to such data. While standard quantisation procedures sacrifice precision, Kendall transformation precisely preserves the ranking, sacrificing the original distribution instead. Such an approach is common to non-parametric statistical methods, and it has proved effective in many use-cases, in particular in small sample size conditions. In fact, I show that Kendall transformation is tightly connected with certain popular methods of this class on a theoretical level.
Moreover, Kendall transformation is reversible into a ranking, has no parameters and imposes no restrictions on the input, as well as offering consistent behaviour regardless of its characteristics. The method is also very versatile, as its output can be used both to calculate a simple coefficient and to serve as part of an elaborate algorithm.