From Geometric Mean to KL Divergence to Noisy Labels

If you have ever participated in a machine learning competition (like those held by Kaggle), you may be familiar with the geometric mean as a simple method that combines the output of multiple models. In this post, I will review the geometric mean and its relation to the Kullback–Leibler (KL) divergence. Based on this relation, we will see why the geometric mean might be a good approach for model combination. Then, I will show how you can build upon this relation to develop a simple framework for cleaning noisy labels or training machine learning models from noisy labeled data.

Geometric Mean for Classifier Fusion

Imagine you would like to build an image classification model to categorize cats, dogs, and people in your personal images. One strategy to achieve the best classification accuracy is to fuse the output of multiple models trained on your dataset. Let’s say you have trained three convolutional neural networks (CNNs) to classify images into these three categories. The CNNs may predict the following probability values over the categories for an image:

        cat     dog     people
CNN1    0.950   0.045   0.005
CNN2    0.300   0.300   0.400
CNN3    0.290   0.010   0.700

But how can we aggregate the output of the CNNs to produce a single prediction for the image? For this example, you might use majority voting, which would assign the image to the people category, as 2 out of 3 CNNs predicted this category. However, the majority vote does not consider the confidence of each classifier: it ignores that CNN2 is only slightly leaning towards people while CNN1 is highly confident against this category.

In practice, a better approach is to use the geometric mean of the predicted values to construct a distribution over the categories. We will represent this using q(y) \propto \left[p_1(y) p_2(y) p_3(y)\right]^{\frac{1}{3}} where \propto indicates that the distribution q(y) is proportional to the geometric mean \left[p_1(y) p_2(y) p_3(y)\right]^{\frac{1}{3}}. Let’s see how we can calculate this:

    \begin{align*}     q(y = \text{cat}) &\propto \sqrt[3]{0.950 \times 0.3 \times 0.29} = 0.44 \\     q(y = \text{dog}) &\propto \sqrt[3]{0.045 \times 0.3 \times 0.01} = 0.05 \\     q(y = \text{people}) &\propto \sqrt[3]{0.005 \times 0.4 \times 0.70} = 0.11, \end{align*}

where after normalization (dividing by 0.44 + 0.05 + 0.11 = 0.60) we have q(y = \text{cat}) \approx 0.73, q(y = \text{dog}) \approx 0.08, and q(y = \text{people}) \approx 0.18.

As you can see, the geometric mean chooses the cat category instead of people, as CNN1 is highly confident in this category while the other CNNs assign it a moderate probability.
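For concreteness, the fusion above can be reproduced in a few lines of NumPy (a sketch using the probability values from the example):

```python
import numpy as np

# Predicted distributions over (cat, dog, people) for the example image.
p1 = np.array([0.950, 0.045, 0.005])  # CNN1
p2 = np.array([0.300, 0.300, 0.400])  # CNN2
p3 = np.array([0.290, 0.010, 0.700])  # CNN3

# Element-wise geometric mean, then normalization to obtain q(y).
g = (p1 * p2 * p3) ** (1.0 / 3.0)
q = g / g.sum()

print(np.round(g, 2))  # [0.44 0.05 0.11], matching the calculation above
print(q.argmax())      # 0, i.e., the cat category
```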

We can replace the geometric mean with the weighted geometric mean, represented by q(y) \propto \left[p_1^\alpha(y) p_2^\beta(y) p_3^\gamma(y)\right]^{\frac{1}{\alpha + \beta + \gamma}}, where \alpha, \beta, and \gamma are positive scalars controlling the impact of each distribution on the final distribution. This is useful, for example, when you know CNN1 is more accurate than CNN2 and CNN3. By setting \alpha greater than \beta and \gamma, you can force q to be closer to p_1.
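The weighted variant is a small generalization of the same computation. A sketch (the weights 3, 1, 1 are illustrative, not from the post):

```python
import numpy as np

# Predicted distributions over (cat, dog, people) from the example above.
p1 = np.array([0.950, 0.045, 0.005])
p2 = np.array([0.300, 0.300, 0.400])
p3 = np.array([0.290, 0.010, 0.700])

def weighted_geo_mean(ps, ws):
    """q(y) proportional to [prod_i p_i(y)^{w_i}]^{1 / sum_i w_i}, normalized."""
    ws = np.asarray(ws, dtype=float)
    g = np.prod([p ** w for p, w in zip(ps, ws)], axis=0) ** (1.0 / ws.sum())
    return g / g.sum()

# With equal weights this reduces to the plain geometric mean.
q_equal = weighted_geo_mean([p1, p2, p3], [1.0, 1.0, 1.0])
# Trusting CNN1 three times as much pulls q further towards p1.
q_skewed = weighted_geo_mean([p1, p2, p3], [3.0, 1.0, 1.0])
```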

Geometric Mean and KL Divergence

The KL divergence is a measure commonly used to characterize how one probability distribution differs from another. Instead of aggregating the classification predictions using the geometric mean, we could use the KL divergence to find a distribution q(y) that is closest to p_1(y), p_2(y), and p_3(y). If such a q(y) is found, we can use it for making the final prediction, as it is close to all the distributions p_1(y), p_2(y), and p_3(y). That is, we solve:

(1)   \begin{align*}     \min_{q} \KL(q(y)||p_1(y)) + \KL(q(y)||p_2(y)) + \KL(q(y)||p_3(y)) \end{align*}

Using the definition of the KL divergence, we can expand the above objective to:

(2)   \begin{align*} &\KL(q(y)||p_1(y)) + \KL(q(y)||p_2(y)) + \KL(q(y)||p_3(y)) \\ &\quad = \sum_y q(y) \log \frac{q(y)}{p_1(y)} + \sum_y q(y) \log \frac{q(y)}{p_2(y)} + \sum_y q(y) \log \frac{q(y)}{p_3(y)} \nonumber \\ &\quad = \sum_y q(y) \log \frac{q^3(y)}{p_1(y) p_2(y) p_3(y)} = 3 \sum_y q(y) \log \frac{q(y)}{\left[p_1(y) p_2(y) p_3(y)\right]^\frac{1}{3}} \nonumber \\ &\quad = 3 \KL(q(y) || \frac{\left[p_1(y) p_2(y) p_3(y)\right]^\frac{1}{3}}{Z}) - 3 \log Z, \end{align*}

where Z=\sum_y \left[p_1(y) p_2(y) p_3(y)\right]^\frac{1}{3} is the normalization constant that makes the geometric mean distribution \frac{\left[p_1(y) p_2(y) p_3(y)\right]^\frac{1}{3}}{Z} a valid distribution. Above, we have converted the KL summation in (1) to a single KL in (2). It is easy to see that the KL in (2) is minimized if q(y) is proportional to \left[p_1(y) p_2(y) p_3(y)\right]^{\frac{1}{3}}. This is an interesting result. The geometric mean of p_1(y), p_2(y) and p_3(y) is a distribution close to all three based on the KL objective in (1). This may explain why the geometric mean is a good approach to classifier fusion. It simply finds the closest distribution to all the predictions.
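As a quick numerical sanity check of this result (a sketch, not part of the original derivation), we can evaluate the objective in (1) at the normalized geometric mean and at many randomly sampled distributions, and confirm that the geometric mean attains the smallest value:

```python
import numpy as np

p1 = np.array([0.950, 0.045, 0.005])
p2 = np.array([0.300, 0.300, 0.400])
p3 = np.array([0.290, 0.010, 0.700])

def kl(q, p):
    """KL(q || p) for discrete distributions with positive entries."""
    return np.sum(q * np.log(q / p))

def objective(q):
    """The sum of KL divergences from (1)."""
    return kl(q, p1) + kl(q, p2) + kl(q, p3)

# Candidate minimizer: the normalized geometric mean.
g = (p1 * p2 * p3) ** (1.0 / 3.0)
q_star = g / g.sum()

# No random distribution on the simplex should beat it.
rng = np.random.default_rng(0)
for _ in range(1000):
    q = rng.dirichlet(np.ones(3))
    assert objective(q_star) <= objective(q) + 1e-12
```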

Similarly, we can show that q(y) \propto \left[p_1^\alpha(y) p_2^\beta(y) p_3^\gamma(y)\right]^{\frac{1}{\alpha + \beta + \gamma}}, the weighted geometric mean, is the solution to the weighted KL objective:

    \begin{align*}     \min_{q} \ \alpha\KL(q(y)||p_1(y)) + \beta\KL(q(y)||p_2(y)) + \gamma\KL(q(y)||p_3(y)). \end{align*}

From KL Divergence to Noisy Labels

Let’s now consider the problem of training an image classification model from noisy labeled images. Again, the categories that we are considering are cat, dog, and people. But, our training annotations are noisy, and we would like to develop a simple approach to clean the training annotations and use them for training. Here, we will see how we can use the geometric mean and equivalently the KL divergence to develop such an approach.

Let’s assume we can estimate the transition probabilities from noisy to clean labels. These can be denoted by:

This matrix represents the conditional distribution over the clean label (y) given a noisy label (y'). For example, the top-left entry indicates that p(y=\text{cat}|y'=\text{cat}) = 0.6, i.e., if an image is labeled as cat, the true label is also cat with 60% probability.

The transition probabilities p(y|y'), provided by the matrix, form a distribution over true labels for each noisy labeled instance. Instead of using the noisy annotations, we can use the probabilistic labels provided by p(y|y') to train the image classification model. For example, if an image is annotated as cat, we can assume that the true label is cat with probability 0.6, dog with probability 0.3, and people with probability 0.1. Although this approach provides some degree of robustness to label noise, it does so by considering only the dependency between noisy and clean labels, independent of the image content, i.e., it does not infer the true labels specifically for each image. This type of noise correction is known as the class-conditional noise model (see this paper for a theoretical discussion).
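Concretely, turning noisy annotations into probabilistic (soft) labels is just a row lookup in the transition matrix. In the sketch below, only the first row (0.6, 0.3, 0.1) comes from the post; the other two rows are made-up values for illustration:

```python
import numpy as np

# Transition matrix: row = noisy label y', column = clean label y,
# ordered as (cat, dog, people). Row 0 is from the post; rows 1 and 2
# are assumed values for illustration only.
T = np.array([
    [0.6, 0.3, 0.1],   # p(y | y' = cat)
    [0.2, 0.7, 0.1],   # p(y | y' = dog)    -- assumed
    [0.1, 0.1, 0.8],   # p(y | y' = people) -- assumed
])

# Three images annotated (noisily) as cat, people, dog.
noisy_labels = np.array([0, 2, 1])
soft_labels = T[noisy_labels]  # one distribution over true labels per image
```

These soft labels can then replace one-hot targets in a standard cross-entropy loss.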

When we are training an image classification model, the model itself may be able to successfully predict the true labels (or at least a reasonable distribution over true labels) given a training image. In this case, we can use the model to infer a distribution over true labels. Let’s denote the classification model that is being trained by p_{CNN}(y|x) which predicts a distribution over true labels given the image x. We can use the KL divergence below to find the closest distribution to both p_{CNN}(y|x) and p(y|y'):

(3)   \begin{align*}     \min_{q} \ \KL(q(y)||p_{CNN}(y|x)) + \alpha \KL(q(y)||p(y|y')), \end{align*}

where \alpha \ge 0 is a scalar. At early training stages, p_{CNN}(y|x) cannot predict the true labels correctly, so we can set \alpha to a large value such that q stays close to p(y|y'). As the classification model improves, we can decrease \alpha to let q be close to both distributions.

There are two advantages to the KL minimization in (3): i) The global solution can be obtained for q using the geometric mean (as shown above). Thus, q can be inferred efficiently at each training iteration given the current model p_{CNN}(y|x) and p(y|y'). ii) As p_{CNN}(y|x) predicts the labels given the image content, q can be considered as an image-dependent noise correction model. In fact, q can be used to infer the true labels for each noisy-labeled instance.
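By the weighted geometric mean result above (with weights 1 and \alpha), the minimizer of (3) is q(y) \propto \left[p_{CNN}(y|x)\, p^\alpha(y|y')\right]^{\frac{1}{1 + \alpha}}. A minimal sketch, with illustrative probability values (not taken from the post):

```python
import numpy as np

def infer_q(p_cnn, p_trans, alpha):
    """Closed-form minimizer of KL(q||p_cnn) + alpha * KL(q||p_trans):
    q proportional to (p_cnn * p_trans**alpha) ** (1 / (1 + alpha))."""
    g = (p_cnn * p_trans ** alpha) ** (1.0 / (1.0 + alpha))
    return g / g.sum()

# Illustrative values: the model leans towards cat, while the
# noisy-label transition distribution leans towards dog.
p_cnn = np.array([0.80, 0.15, 0.05])
p_trans = np.array([0.30, 0.60, 0.10])

q_early = infer_q(p_cnn, p_trans, alpha=10.0)  # large alpha: q tracks p(y|y')
q_late = infer_q(p_cnn, p_trans, alpha=0.1)    # small alpha: q tracks p_CNN(y|x)
```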

Further Reading

The idea of combining an auxiliary source of information (such as p(y|y')) and the underlying classification model p_{CNN}(y|x) was first introduced in this paper, where I showed that the KL minimization in (3) is the natural result of a regularized EM algorithm.

The original paper only considered binary class labels. Here, we extended this model to continuous labels (i.e., object location) and we showed that robust object detection models can be developed using this simple idea. Also, we showed that more sophisticated auxiliary sources of information can be formed for the object detection problem using image classification models.

Last but not least, the geometric mean expression used for inferring q can be considered as a linear function applied to \log p_{CNN}(y|x) and \log p(y|y'). Here, we show that better inference models can be designed by training a CNN to represent q instead of a fixed linear function. We show that this model is effective in the image segmentation problem.


2 thoughts on “From Geometric Mean to KL Divergence to Noisy Labels”

  1. While you expanded the objective using the KL definition, I was trying to reformulate and redo it from scratch, but I stumbled between step 2 and step 3, where you included the Z term, and could not reach your result. It seems that there is a missing q(y) term: … − q(y) × 3 \log Z.
    I did not get how you got rid of q(y), because it seems to be a common multiplier before transforming the division inside the log into a subtraction.

    1. You are on the right track. The reason I can get rid of q(y) in “\sum_y q(y) 3\log Z” is that “\log Z” is a constant that depends on p_1, p_2, and p_3. It’s not a function of y. So, \log Z comes out of the summation and we know \sum_y q(y) = 1 for any distribution.
