# Gradient descent with constant learning rate for a logistic log-loss function of one variable

## Setup

This page includes a detailed discussion of gradient descent with constant learning rate for the logistic log-loss function of one variable.

### Function

Explicitly, the function is:

$$f(x) := -p \ln(\sigma(x)) - (1 - p) \ln(1 - \sigma(x))$$

where $\sigma$ is the logistic function, $p \in (0,1)$ is a constant, and $\ln$ denotes the natural logarithm. Explicitly, $\sigma(x) = \frac{1}{1 + e^{-x}} = \frac{e^x}{1 + e^x}$.

Note that $1 - \sigma(x) = \sigma(-x)$, so the above can be written as:

$$f(x) = -p \ln(\sigma(x)) - (1 - p) \ln(\sigma(-x))$$

(We avoid the extreme values $p = 0$ and $p = 1$ because in those cases the optimum is at infinity.)

More explicitly, $f$ is the function:

$$f(x) = p \ln(1 + e^{-x}) + (1 - p) \ln(1 + e^x)$$

Since $f'(x) = \sigma(x) - p$, the optimal value that we want to converge to is:

$$x^* = \ln\left(\frac{p}{1 - p}\right)$$
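To make these formulas concrete, here is a small Python sketch of $f$, its derivative, and the optimum; the choice $p = 0.8$ is an arbitrary illustration, not a value from the text:

```python
import math

def sigma(x):
    """Logistic function: sigma(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def f(x, p):
    """Logistic log-loss: f(x) = p ln(1 + e^(-x)) + (1 - p) ln(1 + e^x)."""
    return p * math.log(1.0 + math.exp(-x)) + (1.0 - p) * math.log(1.0 + math.exp(x))

def f_prime(x, p):
    """Derivative: f'(x) = sigma(x) - p."""
    return sigma(x) - p

p = 0.8                              # illustrative value of p in (0, 1)
x_star = math.log(p / (1.0 - p))     # optimum x* = ln(p / (1 - p))
print(x_star)                        # ln 4, approximately 1.386
print(f_prime(x_star, p))            # approximately 0
```

The derivative vanishes at $x^*$ and the function value increases away from it, confirming that $x^* = \ln(p/(1-p))$ is the minimum.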

### Learning algorithm

Suppose $\alpha$ is a positive real number. Gradient descent with constant learning rate $\alpha$ is an iterative algorithm that aims to find the point of local minimum for $f$. The algorithm starts with a guess $x^{(0)}$ and updates according to the rule:

$$x^{(t+1)} = x^{(t)} - \alpha f'(x^{(t)})$$

Concretely, since $f'(x) = \sigma(x) - p$, this is:

$$x^{(t+1)} = x^{(t)} - \alpha \left(\sigma(x^{(t)}) - p\right)$$

Note that we use parenthesized superscripts to denote the iterates, and these should not be confused with exponents. The reason for using superscripts instead of subscripts is to keep notation consistent with the case of functions of multiple variables, where subscripts are used for coordinates and superscripts for iterates.
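A minimal sketch of this update rule in Python; the values $p = 0.8$, $\alpha = 4$, the starting guess $0$, and the step count are all illustrative assumptions:

```python
import math

def sigma(x):
    """Logistic function."""
    return 1.0 / (1.0 + math.exp(-x))

def gradient_descent(p, alpha, x0, num_steps):
    """Constant-learning-rate gradient descent on the logistic log-loss:
    iterate x <- x - alpha * (sigma(x) - p)."""
    x = x0
    for _ in range(num_steps):
        x = x - alpha * (sigma(x) - p)
    return x

p = 0.8
x_star = math.log(p / (1.0 - p))                        # optimum, ~ 1.386
x_final = gradient_descent(p, alpha=4.0, x0=0.0, num_steps=100)
print(abs(x_final - x_star))                            # very small
```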

## Convergence properties

To guarantee *good* global convergence, we need to choose a learning rate that takes into account the global upper bound on the second derivative. The second derivative is:

$$f''(x) = \sigma(x)(1 - \sigma(x)) = \sigma'(x)$$

This is maximized at $x = 0$, with value $1/4$. Therefore, the global bound on the second derivative, as discussed on the page gradient descent with constant learning rate for a convex function of one variable, is $1/4$. What we are guaranteed is that any learning rate in the interval $(0, 8)$ will work globally, and a learning rate in $(0, 4]$ will work globally with monotone convergence towards the optimum.

For *local* convergence, since $f''(x^*) = p(1 - p)$, it suffices to choose a learning rate that is in the interval:

$$\left(0, \frac{2}{p(1 - p)}\right)$$

with the best convergence (superlinear convergence) occurring if the learning rate chosen is $\frac{1}{p(1 - p)}$.
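The fast local behavior at the rate $1/(p(1-p))$ can be checked numerically; in this sketch, $p = 0.9$ and the starting offset $0.1$ are illustrative assumptions:

```python
import math

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

p = 0.9
x_star = math.log(p / (1.0 - p))        # optimum, ln 9 ~ 2.197
alpha_local = 1.0 / (p * (1.0 - p))     # locally optimal rate, ~ 11.1

x = x_star + 0.1                        # start close to the optimum
errors = []
for _ in range(5):
    x = x - alpha_local * (sigma(x) - p)
    errors.append(abs(x - x_star))
print(errors)  # errors shrink superlinearly, roughly squaring each step
```

Note that here $1/(p(1-p)) \approx 11.1$ lies outside the globally safe interval $(0, 8)$, which is exactly the tension explored in the rest of this section.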

Now, it turns out that **even using a learning rate in this interval will still guarantee global convergence**. But the first few steps could be *really* bad: even if we start at a point close to the optimum, we may end up moving quite far from it before converging back.

### General case

For a learning rate $\alpha \in \left(0, \frac{2}{p(1 - p)}\right)$, we obtain linear convergence with convergence rate:

$$|1 - \alpha p(1 - p)|$$

Note that, for fixed $p$, this convergence rate goes to 1 (i.e., convergence becomes very slow) as $\alpha \to 0^+$ and also as $\alpha \to \left(\frac{2}{p(1 - p)}\right)^-$.

Why do we have convergence? As discussed here, if $f''$ is quasi-concave, the locally optimal learning rate (or anything smaller) also works globally. In this case, $f''(x) = \sigma(x)(1 - \sigma(x))$ is continuous and attains its unique local maximum at $x = 0$, so it is quasi-concave.

In the case that we choose the safe learning rate $\alpha = 4$ (the reciprocal of the global bound $1/4$ on the second derivative), we get a convergence rate:

$$|1 - 4p(1 - p)| = (1 - 2p)^2$$
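As a numerical check of this linear rate, the following sketch measures the ratio of successive errors under the safe rate $\alpha = 4$; the value $p = 0.8$ (predicted rate $(1-2p)^2 = 0.36$) and the starting offset are illustrative assumptions:

```python
import math

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

p = 0.8
x_star = math.log(p / (1.0 - p))
alpha = 4.0                                        # the safe learning rate
predicted_rate = abs(1.0 - alpha * p * (1.0 - p))  # = (1 - 2p)^2 = 0.36

x = x_star + 0.5
prev_err = abs(x - x_star)
ratios = []
for _ in range(20):
    x = x - alpha * (sigma(x) - p)
    err = abs(x - x_star)
    ratios.append(err / prev_err)                  # ratio of successive errors
    prev_err = err
print(predicted_rate, ratios[-1])  # the measured ratio approaches 0.36
```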

### Special case of optimal learning rate, probability not equal to 1/2

If we choose $\alpha$ as the optimal learning rate, namely $\alpha = \frac{1}{p(1 - p)}$ (the reciprocal of the second derivative at the point of minimum), then, as discussed at the page gradient descent with optimal constant learning rate converges quadratically from sufficiently close to a minimum of multiplicity one, we get quadratic convergence, with convergence rate:

$$\frac{|f'''(x^*)|}{2 f''(x^*)} = \frac{|1 - 2p|}{2}$$
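This quadratic behavior can be verified numerically. The sketch below estimates the coefficient $|x^{(t+1)} - x^*| / |x^{(t)} - x^*|^2$; the value $p = 0.8$ (predicted coefficient $|1-2p|/2 = 0.3$) and the starting offset are illustrative assumptions:

```python
import math

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

p = 0.8
x_star = math.log(p / (1.0 - p))
alpha = 1.0 / (p * (1.0 - p))            # optimal rate: 1 / f''(x*)
predicted = abs(1.0 - 2.0 * p) / 2.0     # quadratic coefficient, 0.3 here

x = x_star + 0.01
coeffs = []
for _ in range(2):
    err = abs(x - x_star)
    x = x - alpha * (sigma(x) - p)
    coeffs.append(abs(x - x_star) / err ** 2)  # |e_{t+1}| / |e_t|^2
print(predicted, coeffs)  # measured coefficients are close to 0.3
```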

### Special case of optimal learning rate, probability equal to 1/2

Suppose $p = 1/2$, so that $x^* = 0$ and the optimal learning rate is $\alpha = 4$. In this case, the above convergence rate works out to 0, suggesting that the convergence is faster than quadratic. This is indeed the case. As described at the page gradient descent with optimal constant learning rate converges quadratically from sufficiently close to a minimum of multiplicity one, we get cubic convergence, with convergence rate:

$$\frac{1}{12}$$

That is, the error satisfies $x^{(t+1)} - x^* \approx \frac{\left(x^{(t)} - x^*\right)^3}{12}$.
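A numerical check of the cubic behavior (the starting offset $0.1$ is an illustrative assumption):

```python
import math

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

p = 0.5          # optimum x* = 0; f''(0) = 1/4, so the optimal rate is 4
alpha = 4.0

x = 0.1
coeffs = []
for _ in range(2):
    err = abs(x)
    x = x - alpha * (sigma(x) - p)
    coeffs.append(abs(x) / err ** 3)  # |e_{t+1}| / |e_t|^3
print(coeffs)  # measured cubic coefficients are close to 1/12 ~ 0.0833
```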

### Example to illustrate extremely bad initial steps and extremely slow convergence if we use a learning rate based on the second derivative

Suppose we choose $p = \frac{1}{1 + e^{100}}$, so the optimal value is $x^* = \ln\left(\frac{p}{1 - p}\right) = -100$. Note that $p(1 - p) \approx e^{-100}$. Suppose we choose the optimal learning rate for local convergence. The locally optimal learning rate for convergence is:

$$\frac{1}{p(1 - p)} \approx e^{100}$$

For simplicity, we will take $\alpha = e^{100}$ rather than the exact optimum -- it does not affect anything material, but it simplifies our calculations.

Suppose $x^{(0)} = 0$ (a neutral initial starting point). Then, the first iteration gives:

$$x^{(1)} = x^{(0)} - \alpha\left(\sigma(x^{(0)}) - p\right) = -e^{100}\left(\frac{1}{2} - p\right) \approx -\frac{e^{100}}{2}$$

Note that even though $x^{(0)} = 0$ was quite close to the optimal point $x^* = -100$, the first iterate $x^{(1)} \approx -e^{100}/2$ is very far from it. We did move in the correct direction, but overshot by a *very* large margin.

However, things improve from this point, because in every further iteration, we move closer to the optimum by about 1 unit:

$$x^{(t+1)} = x^{(t)} - e^{100}\left(\sigma(x^{(t)}) - p\right) \approx x^{(t)} + 1 - e^{100 + x^{(t)}}$$

at least when $x^{(t)}$ is negative of large magnitude compared with 100 (so that $\sigma(x^{(t)}) \approx e^{x^{(t)}}$). So in each step we come closer to the optimum by about 1 (and, more importantly, by *at least* some number that's slightly less than 1). We will therefore eventually converge. Note, however, that we'd need somewhere in the ballpark of $e^{100}/2$ iterations to come approximately as close to the optimum as we were at the outset. Thus, although we *technically* didn't diverge, for all practical purposes, we did.
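This pathology is easy to reproduce numerically. The sketch below scales the exponent down from 100 to 30 (an assumption made so that $e^{30} \approx 1.07 \times 10^{13}$ stays within floating-point range); it also uses a numerically stable form of the logistic function, since the naive formula overflows for very negative arguments:

```python
import math

def sigma(x):
    """Numerically stable logistic function."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)          # underflows harmlessly to 0.0 for very negative x
    return z / (1.0 + z)

p = sigma(-30.0)             # p ~ e^(-30); the optimum is x* = -30
alpha = math.exp(30.0)       # roughly the locally optimal rate 1/(p(1-p))

x = 0.0                      # neutral starting point, close to x* = -30
x = x - alpha * (sigma(x) - p)
print(x)                     # about -e^30 / 2 ~ -5.3e12: a gigantic overshoot

step = alpha * (p - sigma(x))
print(step)                  # the next step moves back by only about +1 unit
```

From $x^{(1)} \approx -e^{30}/2$, recovering at about one unit per iteration would take on the order of $10^{12}$ iterations, the scaled-down analogue of the $e^{100}/2$ figure above.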