Definition
Consider variables $x_1, x_2, \ldots, x_n$. A quadratic function of the variables is a function of the form:

$$f(x_1, x_2, \ldots, x_n) = \sum_{i=1}^n \sum_{j=1}^n a_{ij}x_ix_j + \sum_{i=1}^n b_ix_i + c$$

In vector form, if we denote by $x$ the column vector with coordinates $x_1, x_2, \ldots, x_n$, then we can write the function as:

$$f(x) = x^\top Ax + b^\top x + c$$

where $A$ is an $n \times n$ matrix with entries $a_{ij}$ and $b$ is the column vector with entries $b_i$.
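As a quick sanity check, here is a minimal numpy sketch (with arbitrary illustrative values for $A$, $b$, and $c$) confirming that the coordinate form and the vector form give the same value:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))   # arbitrary n x n matrix of coefficients a_ij
b = rng.standard_normal(n)        # coefficients b_i of the linear part
c = 1.5                           # constant term
x = rng.standard_normal(n)

# Coordinate form: sum_{i,j} a_ij x_i x_j + sum_i b_i x_i + c
coord = sum(A[i, j] * x[i] * x[j] for i in range(n) for j in range(n)) \
        + sum(b[i] * x[i] for i in range(n)) + c

# Vector form: x^T A x + b^T x + c
vec = x @ A @ x + b @ x + c

assert np.isclose(coord, vec)
```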
Note that the matrix $A$ is non-unique: if $i \neq j$, then we could replace $a_{ij}$ and $a_{ji}$ by any two numbers with the same sum without changing the function. Therefore, we could choose to replace $A$ by the matrix $\frac{1}{2}(A + A^\top)$ and have the advantage of working with a symmetric matrix.
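A short numpy sketch (again with arbitrary illustrative values) verifying that replacing $A$ by $\frac{1}{2}(A + A^\top)$ leaves the function unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))      # generally nonsymmetric
A_sym = (A + A.T) / 2                # symmetrized version
b = rng.standard_normal(4)
c = -0.7

for _ in range(5):                   # spot-check at several random points
    x = rng.standard_normal(4)
    assert np.isclose(x @ A @ x + b @ x + c,
                      x @ A_sym @ x + b @ x + c)
```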
Key data
For the discussion here, assume that $A$ has been made a symmetric matrix.
Item | Value | Consistency with the case $n = 1$, i.e., $f(x) = ax^2 + bx + c$, where $A = (a)$ (a $1 \times 1$ matrix) and $b = (b)$ (a 1-dimensional vector)
--- | --- | ---
default domain | the whole of $\mathbb{R}^n$ | the whole of $\mathbb{R}$
range | If the matrix $A$ is not positive semidefinite or negative semidefinite, the range is all of $\mathbb{R}$. If $A$ is positive definite, or if $A$ is positive semidefinite and $b$ is in its image, the range is $[m, \infty)$ where $m$ is the minimum value. If $A$ is negative definite, or if $A$ is negative semidefinite and $b$ is in its image, the range is $(-\infty, M]$ where $M$ is the maximum value. | The case of "not positive semidefinite or negative semidefinite" does not arise for $n = 1$. Moreover, all the semidefinite cases must be definite (a $1 \times 1$ matrix $(a)$ with $a \neq 0$ is either positive or negative definite), so we only have to consider the positive definite case and the negative definite case. The positive definite case corresponds to $a > 0$, with range $[c - \frac{b^2}{4a}, \infty)$. The negative definite case corresponds to $a < 0$, with range $(-\infty, c - \frac{b^2}{4a}]$.
local minimum value and points of attainment | If the matrix $A$ is positive definite, then the local minimum value is $c - \frac{1}{4}b^\top A^{-1}b$, attained at $x = -\frac{1}{2}A^{-1}b$. If $A$ is positive semidefinite but not positive definite, it depends on whether $b$ is in the image of $A$. If yes, replace $A^{-1}b$ with any solution $v$ of $Av = b$, so we get a local minimum value of $c - \frac{1}{4}b^\top v$, attained at $x = -\frac{1}{2}v$. If $A$ is not positive semidefinite or if $b$ is not in the image of $A$, there is no local minimum value. | The positive definite case corresponds to $a > 0$: here, the local minimum value $c - \frac{b^2}{4a}$ is attained at $x = -\frac{b}{2a}$ (consistent with the matrix formulation). The negative definite case corresponds to $a < 0$, and there is no minimum in this case.
local maximum value and points of attainment | If the matrix $A$ is negative definite, then the local maximum value is $c - \frac{1}{4}b^\top A^{-1}b$, attained at $x = -\frac{1}{2}A^{-1}b$. If $A$ is negative semidefinite but not negative definite, it depends on whether $b$ is in the image of $A$. If yes, replace $A^{-1}b$ with any solution $v$ of $Av = b$, so we get a local maximum value of $c - \frac{1}{4}b^\top v$, attained at $x = -\frac{1}{2}v$. If $A$ is not negative semidefinite or if $b$ is not in the image of $A$, there is no local maximum value. | The negative definite case corresponds to $a < 0$: here, the local maximum value $c - \frac{b^2}{4a}$ is attained at $x = -\frac{b}{2a}$ (consistent with the matrix formulation). The positive definite case corresponds to $a > 0$, and there is no maximum in this case.
gradient vector function (analogous to the derivative) | $x \mapsto 2Ax + b$ | the derivative is $2ax + b$ (consistent with the matrix formulation)
Hessian matrix (analogous to the second derivative) | $2A$ (constant matrix-valued function) | the second derivative is the constant function $2a$ (consistent with the matrix formulation)
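To illustrate the positive definite rows of the table, the following numpy sketch (with an arbitrarily constructed positive definite $A$) checks the claimed minimum value $c - \frac{1}{4}b^\top A^{-1}b$ and point of attainment $-\frac{1}{2}A^{-1}b$ against direct evaluation:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
A = M.T @ M + 3 * np.eye(3)          # symmetric positive definite by construction
b = rng.standard_normal(3)
c = 2.0

f = lambda x: x @ A @ x + b @ x + c

x_min = -0.5 * np.linalg.solve(A, b)         # claimed minimizer -(1/2) A^{-1} b
m = c - 0.25 * b @ np.linalg.solve(A, b)     # claimed minimum value

assert np.isclose(f(x_min), m)
# f should exceed m at every perturbed point
assert all(f(x_min + rng.standard_normal(3)) > m for _ in range(100))
```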
Differentiation
Partial derivatives and gradient vector
Case of general matrix
The partial derivative with respect to the variable $x_i$, and therefore also the $i^{\text{th}}$ coordinate of the gradient vector, is given by:

$$\frac{\partial f}{\partial x_i} = \sum_{j=1}^n (a_{ij} + a_{ji})x_j + b_i$$

In terms of the matrix and vector notation, the gradient vector, expressed as a column vector, is:

$$\nabla f(x) = (A + A^\top)x + b$$
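The formula can be checked numerically with central finite differences; a minimal numpy sketch, using an arbitrary (deliberately nonsymmetric) $A$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))      # deliberately nonsymmetric
b = rng.standard_normal(n)
c = 0.3
f = lambda x: x @ A @ x + b @ x + c

x = rng.standard_normal(n)
h = 1e-6
# Central finite-difference approximation of each partial derivative
num_grad = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                     for e in np.eye(n)])

assert np.allclose(num_grad, (A + A.T) @ x + b, atol=1e-5)
```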
Case of symmetric matrix
In the case that $A$ is a symmetric matrix, the above expressions simplify as follows. Since $a_{ij} = a_{ji}$ for all $i, j$, the expression for the partial derivative becomes:

$$\frac{\partial f}{\partial x_i} = 2\sum_{j=1}^n a_{ij}x_j + b_i$$

The expression for the gradient vector becomes:

$$\nabla f(x) = 2Ax + b$$
Case $n = 1$
A sanity check for the above expressions is that in the case $n = 1$, where $f(x) = ax^2 + bx + c$, we get the same answers as for the quadratic function of one variable.

This is indeed the case. The only partial derivative here is the ordinary derivative, and this also is the gradient vector, and has expression:

$$f'(x) = 2ax + b$$

This agrees with both the expression $(A + A^\top)x + b = (a + a)x + b = 2ax + b$ for a general matrix and the expression $2Ax + b = 2ax + b$ for a symmetric matrix.
Second-order partial derivatives and Hessian matrix
Case of general matrix
Recall that we had obtained (we replace the dummy variable $j$ by $k$ to facilitate differentiation with respect to $x_j$ in the next step):

$$\frac{\partial f}{\partial x_i} = \sum_{k=1}^n (a_{ik} + a_{ki})x_k + b_i$$

Differentiating both sides with respect to $x_j$ (note that $j$ may be equal to $i$ or different from $i$), we find that the only term with a nonzero derivative is the term where $k = j$. In this case, the derivative is the coefficient of $x_j$. Therefore, we obtain:

$$\frac{\partial^2 f}{\partial x_j \, \partial x_i} = a_{ij} + a_{ji}$$

Thus, the Hessian matrix of the quadratic function is given as:

$$H(f)(x) = A + A^\top$$
Note that this is independent of the choice of $x$. This fact is true only because of the nature of the function: for more general functional forms, the Hessian matrix varies with the choice of input vector.

We can also see this in matrix form directly. The gradient function is:

$$\nabla f(x) = (A + A^\top)x + b$$

This is an affine transformation (a linear transformation plus a constant vector), and its Jacobian matrix computes the Hessian that we want. We can use the well-known fact that the Jacobian matrix of such a transformation coincides with the matrix describing its linear part, and therefore the Hessian is:

$$H(f)(x) = A + A^\top$$
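A numerical check of this, differentiating the gradient map by central finite differences at two different points to also confirm the independence from $x$ (arbitrary illustrative $A$ and $b$):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
grad = lambda x: (A + A.T) @ x + b    # gradient of the quadratic function

def hessian_at(x, h=1e-6):
    # Jacobian of the gradient = Hessian; column j differentiates w.r.t. x_j
    return np.column_stack([(grad(x + h * e) - grad(x - h * e)) / (2 * h)
                            for e in np.eye(n)])

# Same constant answer A + A^T at two different input points
assert np.allclose(hessian_at(rng.standard_normal(n)), A + A.T, atol=1e-5)
assert np.allclose(hessian_at(rng.standard_normal(n)), A + A.T, atol=1e-5)
```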
Case of symmetric matrix
We can either plug into the formulas for the general case or perform similar calculations to get the formulas in the case that $A$ is a symmetric matrix:

$$\frac{\partial^2 f}{\partial x_j \, \partial x_i} = 2a_{ij}, \qquad H(f)(x) = 2A$$
Case $n = 1$
A sanity check for the above expressions is that in the case $n = 1$, where $f(x) = ax^2 + bx + c$, we get the same answers as for the quadratic function of one variable.

This is indeed the case. The only second-order partial derivative is:

$$f''(x) = 2a$$

This agrees both with the formula $a_{ij} + a_{ji} = a + a = 2a$ for the second-order partial derivative and with the formula $H(f)(x) = 2A = (2a)$ for the Hessian matrix.
Higher derivatives
All the higher derivative tensors (third order and above) are zero, since the Hessian is a constant function.
Cases
For the discussion of cases, assume that $A$ is a symmetric matrix. If $A$ is not symmetric, replace it by the symmetric matrix $\frac{1}{2}(A + A^\top)$.
Positive definite case
First, we consider the case where $A$ is a symmetric positive definite matrix. In other words, we can write $A$ in the form:

$$A = W^\top W$$

where $W$ is an $n \times n$ invertible matrix.
We can "complete the square" for this function:
In other words:
This is minimized when the expression whose norm we are measuring is zero, so that it is minimized when we have:
Simplifying, we obtain that we minimum occurs at:
Moreover, the value of the minimum is:
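The factorization $A = W^\top W$ and the resulting formulas can be realized concretely via a Cholesky factorization; a minimal numpy sketch with an arbitrarily constructed positive definite $A$:

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((3, 3))
A = M.T @ M + np.eye(3)               # symmetric positive definite
b = rng.standard_normal(3)
c = 1.0
f = lambda x: x @ A @ x + b @ x + c

L = np.linalg.cholesky(A)             # A = L L^T, so W = L^T is invertible
W = L.T

# Completed square: f(x) = ||W x + (1/2)(W^T)^{-1} b||^2 + c - (1/4) b^T A^{-1} b
shift = 0.5 * np.linalg.solve(W.T, b)
m = c - 0.25 * b @ np.linalg.solve(A, b)

x = rng.standard_normal(3)
assert np.isclose(f(x), np.linalg.norm(W @ x + shift) ** 2 + m)

# Minimum at x = -(1/2) A^{-1} b, where the squared norm vanishes
x_min = -0.5 * np.linalg.solve(A, b)
assert np.allclose(W @ x_min + shift, 0)
assert np.isclose(f(x_min), m)
```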