# Gradient descent with constant learning rate

## Definition

Gradient descent with constant learning rate is a first-order iterative optimization method and is the simplest, most standard implementation of gradient descent. In this method, the real number by which the gradient vector is multiplied to determine the step size is the same across all iterations. This constant is termed the learning rate and we will customarily denote it as $\alpha$. Explicitly, to minimize a differentiable function $f$, the update rule is $x_{k+1} = x_k - \alpha \nabla f(x_k)$.
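The update rule above can be sketched in a few lines of code. This is a minimal illustration, not a production implementation; the function names `gradient_descent` and `grad_f`, and the choice of test function, are our own for this example.

```python
import numpy as np

def gradient_descent(grad, x0, alpha, num_iters):
    """Gradient descent with constant learning rate.

    grad      -- function returning the gradient of f at a point
    x0        -- starting point (array-like)
    alpha     -- the constant learning rate
    num_iters -- number of iterations to run
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        # The step size multiplier alpha never changes across iterations.
        x = x - alpha * grad(x)
    return x

# Example: minimize f(x, y) = x^2 + 3y^2, whose gradient is (2x, 6y)
# and whose unique minimum is at the origin.
grad_f = lambda x: np.array([2.0 * x[0], 6.0 * x[1]])
x_min = gradient_descent(grad_f, x0=[4.0, -2.0], alpha=0.1, num_iters=200)
```

Note that the only tunable quantity is the single scalar $\alpha$; there is no line search or schedule, which is precisely what distinguishes this variant from adaptive step-size methods.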

## Qualitative behavior based on type of function

Gradient descent with constant learning rate, although easy to implement, can converge painfully slowly for various types of problems. The analysis becomes progressively more involved as the function class becomes more complicated. For more, see:

The behavior is organized along the following axes: the function type, gradient descent in one variable for that type, the corresponding function type in multiple variables, and gradient descent in multiple variables. For each, the analysis proceeds by cases on the learning rate $\alpha$, describing what happens in each case.
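The case analysis on $\alpha$ is easiest to see for the one-variable quadratic $f(x) = x^2$, where the update $x \leftarrow x - \alpha \cdot 2x = (1 - 2\alpha)x$ is a fixed linear contraction or expansion. The following sketch (our own example, not from the source) shows three qualitative regimes:

```python
def descend(alpha, x0=1.0, steps=50):
    """Run gradient descent on f(x) = x^2 with constant learning rate alpha.

    The gradient is f'(x) = 2x, so each iteration computes
    x <- x - alpha * 2x = (1 - 2*alpha) * x.
    Convergence therefore depends only on whether |1 - 2*alpha| < 1.
    """
    x = x0
    for _ in range(steps):
        x = x - alpha * 2.0 * x
    return x

small = descend(alpha=0.1)   # |1 - 0.2| = 0.8 < 1: converges, but slowly
good  = descend(alpha=0.4)   # |1 - 0.8| = 0.2 < 1: converges quickly
large = descend(alpha=1.5)   # |1 - 3.0| = 2.0 > 1: diverges
```

Here the iterate is multiplied by the same factor $1 - 2\alpha$ at every step, so a too-small $\alpha$ gives slow geometric convergence, a well-chosen $\alpha$ converges fast, and an $\alpha$ past the stability threshold makes the iterates blow up geometrically.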