# Sine-squared function

This article is about a particular function from a subset of the real numbers to the real numbers. Information about the function, including its domain, range, and key data relating to graphing, differentiation, and integration, is presented in the article.

## Definition

This function, denoted $\sin^2$, is defined as the composite of the square function and the sine function. Explicitly, it is the map:

$$x \mapsto (\sin x)^2$$

For brevity, we write $(\sin x)^2$ as $\sin^2 x$.

## Key data

Item | Value |
---|---|
Default domain | all real numbers, i.e., all of $\mathbb{R}$ |
range | $[0,1]$, i.e., absolute maximum value: 1, absolute minimum value: 0 |
period | $\pi$, i.e., $180^\circ$ |
local maximum value and points of attainment | All local maximum values are equal to 1, and are attained at odd integer multiples of $\pi/2$. |
local minimum value and points of attainment | All local minimum values are equal to 0, and are attained at integer multiples of $\pi$. |
points of inflection (both coordinates) | $x$-coordinates at odd multiples of $\pi/4$, with value $1/2$ at each point. |
derivative | $2 \sin x \cos x = \sin(2x)$, i.e., the double-angle sine function. |
second derivative | $2\cos(2x)$ |
$n^{th}$ derivative | $2^{n-1}$ times an expression that is $\pm\sin$ or $\pm\cos$ of $2x$, depending on the remainder of $n$ mod 4 |
antiderivative | $\dfrac{x}{2} - \dfrac{\sin(2x)}{4} + C$ |
mean value over a period | $1/2$ |
expression as a sinusoidal function plus a constant function | $\dfrac{1}{2} - \dfrac{\cos(2x)}{2}$ |
important symmetries | even function (follows from: a composite of an even function with an odd function is even, the square function being even and the sine function being odd); more generally, mirror symmetry about any vertical line of the form $x = n\pi/2$, $n$ an integer. Also, half-turn symmetry about all points of the form $((2n+1)\pi/4, 1/2)$. |
interval description based on increase/decrease and concave up/down | For each integer $n$, the interval from $n\pi$ to $(n+1)\pi$ is subdivided into four pieces: $(n\pi, n\pi + \pi/4)$: increasing and concave up; $(n\pi + \pi/4, n\pi + \pi/2)$: increasing and concave down; $(n\pi + \pi/2, n\pi + 3\pi/4)$: decreasing and concave down; $(n\pi + 3\pi/4, (n+1)\pi)$: decreasing and concave up |
power series and Taylor series | The power series about 0 (which is hence also the Taylor series) is $\sum_{k=1}^{\infty} \frac{(-1)^{k+1} 2^{2k-1} x^{2k}}{(2k)!} = x^2 - \frac{x^4}{3} + \frac{2x^6}{45} - \dots$ It is a globally convergent power series. |
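Several entries in the key data can be sanity-checked numerically. The sketch below (using only Python's standard `math` module; the sample point `x0` and step sizes are arbitrary choices) checks the derivative, the period, and the mean value over a period:

```python
import math

f = lambda x: math.sin(x) ** 2  # the sine-squared function

# Derivative should be sin(2x): compare against a central difference.
x0, h = 0.7, 1e-6
num_deriv = (f(x0 + h) - f(x0 - h)) / (2 * h)
assert abs(num_deriv - math.sin(2 * x0)) < 1e-7

# Period pi: shifting the argument by pi leaves the value unchanged.
period_gap = abs(f(x0 + math.pi) - f(x0))
assert period_gap < 1e-12

# Mean value over one period is 1/2 (midpoint rule on [0, pi]).
n = 100_000
mean = sum(f(math.pi * (k + 0.5) / n) for k in range(n)) / n
assert abs(mean - 0.5) < 1e-6
```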

## Identities

We have the following important identities involving $\sin^2$:

- $\sin^2 x = 1 - \cos^2 x$, relating it to the cosine-squared function.
- $\sin^2 x = \dfrac{1 - \cos(2x)}{2}$, or equivalently, $\cos(2x) = 1 - 2\sin^2 x$.
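Both identities are easy to verify numerically; a minimal sketch, with the sample points chosen arbitrarily:

```python
import math

for x in (0.0, 0.3, 1.0, 2.5, -1.7):
    s2 = math.sin(x) ** 2
    assert abs(s2 - (1 - math.cos(x) ** 2)) < 1e-12     # sin^2 x = 1 - cos^2 x
    assert abs(s2 - (1 - math.cos(2 * x)) / 2) < 1e-12  # power-reduction identity
    assert abs(math.cos(2 * x) - (1 - 2 * s2)) < 1e-12  # equivalent double-angle form
```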

## Graph

Here is the graph, drawn to scale:

Here is a close-up view of the graph over a shorter interval. The dashed horizontal line indicates the mean value of $1/2$:

The red dotted points indicate the points of inflection and the black dotted points indicate local extreme values.

Here is a picture showing the function $\sin^2$ (blue) and the cosine-squared function $\cos^2$ (purple), with the dashed line being $y = 1/2$. The picture illustrates that $\sin^2 x + \cos^2 x = 1$:

## Integration

### First antiderivative: using double angle formula

We use the identity:

$$\sin^2 x = \frac{1 - \cos(2x)}{2}$$

We can now do the integration:

$$\int \sin^2 x \, dx = \int \frac{1 - \cos(2x)}{2} \, dx = \frac{x}{2} - \frac{1}{2} \int \cos(2x) \, dx$$

To integrate $\cos(2x)$, we use the method of integration of a linear transform of a function to get $\frac{\sin(2x)}{2}$. Plugging that in, we get:

$$\int \sin^2 x \, dx = \frac{x}{2} - \frac{\sin(2x)}{4} + C$$
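As a check on the computation, differentiating the antiderivative should return $\sin^2 x$; a quick numerical sketch using a central difference (sample points arbitrary):

```python
import math

F = lambda x: x / 2 - math.sin(2 * x) / 4  # the antiderivative, with C = 0
h = 1e-6
for x in (0.1, 1.0, 2.0, -0.7):
    num_deriv = (F(x + h) - F(x - h)) / (2 * h)  # central difference
    assert abs(num_deriv - math.sin(x) ** 2) < 1e-7
```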

### First antiderivative: using integration by parts

We rewrite $\sin^2 x = (\sin x)(\sin x)$ and use integration by parts in its recursive version:

$$\int (\sin x)(\sin x) \, dx = -\sin x \cos x + \int \cos^2 x \, dx$$

We now rewrite $\cos^2 x = 1 - \sin^2 x$ and obtain:

$$\int \sin^2 x \, dx = -\sin x \cos x + \int (1 - \sin^2 x) \, dx = x - \sin x \cos x - \int \sin^2 x \, dx$$

Setting $F$ to be a choice of antiderivative so that the above holds without any freely floating constants, we get:

$$F(x) = x - \sin x \cos x - F(x)$$

Rearranging, we get:

$$2F(x) = x - \sin x \cos x$$

This gives:

$$F(x) = \frac{x}{2} - \frac{\sin x \cos x}{2}$$

So the general antiderivative is:

$$\int \sin^2 x \, dx = \frac{x}{2} - \frac{\sin x \cos x}{2} + C$$

Using the double angle sine formula $\sin(2x) = 2 \sin x \cos x$, we can verify that this matches the preceding answer.

For a given continuous function on a connected set, antiderivatives obtained by different methods must differ by a constant. In some cases, the antiderivatives may be exactly equal, but this is not necessary in general.
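In this particular case the two antiderivatives agree exactly, since $\sin(2x)/4 = (\sin x \cos x)/2$; a small numerical check (sample points arbitrary):

```python
import math

F1 = lambda x: x / 2 - math.sin(2 * x) / 4            # from the double angle formula
F2 = lambda x: x / 2 - math.sin(x) * math.cos(x) / 2  # from integration by parts
for x in (0.0, 0.5, 1.3, -2.0):
    # The difference here is the constant 0, by the double angle sine formula.
    assert abs(F1(x) - F2(x)) < 1e-12
```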

See zero derivative implies locally constant

### Graph of function with antiderivative

In the picture below, we depict $\sin^2$ (blue) and the function $\dfrac{x}{2} - \dfrac{\sin(2x)}{4}$ (purple). This is the unique antiderivative that takes the value 0 at 0. The other antiderivatives can be obtained by vertically shifting the purple graph:

The black dots correspond to local extreme values for $\sin^2$, and the red dots correspond to points of inflection for the antiderivative. Each black dot is in the same vertical line as a red dot, as we should expect, because points of inflection for the antiderivative correspond to local extreme values for the original function. Further:

- The antiderivative is increasing everywhere, because $\sin^2$ is everywhere nonnegative and is zero only at isolated points.
- The antiderivative is concave up on those intervals where $\sin^2$ is increasing, i.e., intervals of the form $(n\pi, n\pi + \pi/2)$ as $n$ varies over the integers.
- The antiderivative is concave down on those intervals where $\sin^2$ is decreasing, i.e., intervals of the form $(n\pi + \pi/2, (n+1)\pi)$ as $n$ varies over the integers.

### Definite integrals

The $x/2$ part in the antiderivative signifies that the *linear* part of the antiderivative of $\sin^2$ has slope $1/2$, and this is related to the fact that $\sin^2$ has a mean value of $1/2$ on any interval of length equal to the period. It is in fact clear that the antiderivative $\dfrac{x}{2} - \dfrac{\sin(2x)}{4}$ is a sinusoidal function about the line $y = x/2$.

Thus, we have:

$$\int_a^{a + n\pi} \sin^2 x \, dx = \frac{n\pi}{2}$$

for any real number $a$, where $n$ is an integer.
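A numerical sketch of this fact, using a composite trapezoid rule (the helper `integral_sin2` and its step count are illustrative choices, not part of the article):

```python
import math

def integral_sin2(a, b, n=2000):
    """Composite trapezoid rule for sin^2 on [a, b] (n is an illustrative step count)."""
    h = (b - a) / n
    s = (math.sin(a) ** 2 + math.sin(b) ** 2) / 2
    s += sum(math.sin(a + k * h) ** 2 for k in range(1, n))
    return s * h

# Over any interval of length n*pi, the integral is n*pi/2, regardless of a.
for a in (0.0, 0.7, -1.2):
    for m in (1, 2, 3):
        assert abs(integral_sin2(a, a + m * math.pi) - m * math.pi / 2) < 1e-6
```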

### Transformed versions

Based on the integration of $\sin^2$, we can also integrate the square of any sinusoidal function:

$$\int (A \sin(mx + \varphi))^2 \, dx = \frac{A^2 x}{2} - \frac{A^2 \sin(2(mx + \varphi))}{4m} + C$$
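A numerical check of the transformed antiderivative, with arbitrarily chosen amplitude, frequency, and phase (`A`, `m`, `phi` are hypothetical sample parameters):

```python
import math

# Hypothetical sample parameters for the sinusoid A*sin(m*x + phi).
A, m, phi = 3.0, 2.0, 0.4
g = lambda x: (A * math.sin(m * x + phi)) ** 2
F = lambda x: A ** 2 * x / 2 - A ** 2 * math.sin(2 * (m * x + phi)) / (4 * m)

# The derivative of F (central difference) should recover g.
h = 1e-6
for x in (0.2, 1.1, -0.8):
    assert abs((F(x + h) - F(x - h)) / (2 * h) - g(x)) < 1e-5
```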

### Higher antiderivatives

It is possible to antidifferentiate more than once. The $k^{th}$ antiderivative is the sum of a polynomial of degree $k$ and a trigonometric function with a period of $\pi$.

## Power series and Taylor series

### Computation of power series

We can use the identity:

$$\sin^2 x = \frac{1 - \cos(2x)}{2}$$

along with the power series for the cosine function, to find the power series for $\sin^2$.

The power series for the cosine function converges to the function everywhere, and is:

$$\cos x = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{(2k)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dots$$

The power series for $\cos(2x)$ is:

$$\cos(2x) = \sum_{k=0}^{\infty} \frac{(-1)^k 2^{2k} x^{2k}}{(2k)!}$$

The power series for $1 - \cos(2x)$ is:

$$1 - \cos(2x) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} 2^{2k} x^{2k}}{(2k)!}$$

Dividing by 2, we get the power series for $\sin^2$:

$$\sin^2 x = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} 2^{2k-1} x^{2k}}{(2k)!}$$

Here's another formulation with the first few terms written more explicitly:

$$\sin^2 x = x^2 - \frac{x^4}{3} + \frac{2x^6}{45} - \frac{x^8}{315} + \dots$$
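The general term can be checked against direct evaluation; a sketch comparing partial sums of the series with $\sin^2 x$ (the truncation at 20 terms is an arbitrary choice):

```python
import math

def sin2_series(x, terms=20):
    """Partial sum of sum_{k>=1} (-1)^(k+1) * 2^(2k-1) * x^(2k) / (2k)!."""
    return sum((-1) ** (k + 1) * 2 ** (2 * k - 1) * x ** (2 * k) / math.factorial(2 * k)
               for k in range(1, terms + 1))

# The series converges globally; 20 terms already match sin^2 closely here.
for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(sin2_series(x) - math.sin(x) ** 2) < 1e-10
```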

### Taylor polynomials as approximations

Note that since $\sin^2$ is an even function, all its Taylor polynomials are also even polynomials. In the figure below, we consider the graphs of $\sin^2$ and its second, fourth, and sixth Taylor approximations.

- The second Taylor polynomial, $P_2$, which equals the third Taylor polynomial, $P_3$, is $x^2$.
- The fourth Taylor polynomial, $P_4$, which equals the fifth Taylor polynomial, $P_5$, is $x^2 - \frac{x^4}{3}$.
- The sixth Taylor polynomial, $P_6$, which equals the seventh Taylor polynomial, $P_7$, is $x^2 - \frac{x^4}{3} + \frac{2x^6}{45}$.
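Near 0, each successive Taylor polynomial should approximate $\sin^2$ better than the last; a small numerical illustration at the arbitrary sample point $x = 0.3$:

```python
import math

P2 = lambda x: x ** 2
P4 = lambda x: x ** 2 - x ** 4 / 3
P6 = lambda x: x ** 2 - x ** 4 / 3 + 2 * x ** 6 / 45

x = 0.3  # arbitrary sample point near 0
f = math.sin(x) ** 2
e2, e4, e6 = abs(P2(x) - f), abs(P4(x) - f), abs(P6(x) - f)
assert e2 > e4 > e6  # higher-degree approximations are better near 0
assert e6 < 1e-6     # the sixth Taylor polynomial is already very accurate here
```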

## Limit computations

### Order of zero

We get the following limit from the power series:

$$\lim_{x \to 0} \frac{\sin^2 x}{x^2} = 1$$

Thus, the order of the zero of $\sin^2$ at zero is 2 and the residue is 1.

This limit can be computed in many ways:

Name of method for computing the limit | Details |
---|---|
Simple manipulation, using $\lim_{x \to 0} \frac{\sin x}{x} = 1$ | $\lim_{x \to 0} \frac{\sin^2 x}{x^2} = \left( \lim_{x \to 0} \frac{\sin x}{x} \right)^2 = 1^2 = 1$ |
Using the L'Hôpital rule | $\lim_{x \to 0} \frac{\sin^2 x}{x^2} = \lim_{x \to 0} \frac{2 \sin x \cos x}{2x} = \lim_{x \to 0} \frac{\sin(2x)}{2x} = 1$ |
Using the power series | We have $\sin^2 x = x^2 - \frac{x^4}{3} + \dots$, so we get $\frac{\sin^2 x}{x^2} = 1 - \frac{x^2}{3} + \dots$. Taking the limit as $x \to 0$ gives 1. |
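The limit can also be observed numerically: the ratio approaches 1, with the error shrinking like $x^2/3$ (sample points arbitrary):

```python
import math

for x in (0.1, 0.01, 0.001):
    ratio = math.sin(x) ** 2 / x ** 2
    # The error behaves like x^2/3, so it shrinks rapidly as x -> 0.
    assert abs(ratio - 1) < x ** 2
```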

### Higher order limits

We have the limit:

$$\lim_{x \to 0} \frac{x^2 - \sin^2 x}{x^4} = \frac{1}{3}$$

This limit can be computed in many ways:

Name of method for computing the limit | Details |
---|---|
Using $\lim_{x \to 0} \frac{x - \sin x}{x^3} = \frac{1}{6}$ and $\lim_{x \to 0} \frac{x + \sin x}{x} = 2$ | We have $\frac{x^2 - \sin^2 x}{x^4} = \frac{x - \sin x}{x^3} \cdot \frac{x + \sin x}{x}$. The first limit is $1/6$ and the second limit is 2 from the given data. We thus get $(1/6)(2) = 1/3$. |
Using the L'Hôpital rule | $\lim_{x \to 0} \frac{x^2 - \sin^2 x}{x^4} = \lim_{x \to 0} \frac{2x - \sin(2x)}{4x^3} = \lim_{x \to 0} \frac{2 - 2\cos(2x)}{12x^2} = \lim_{x \to 0} \frac{4\sin(2x)}{24x} = \frac{1}{3}$ |
Using the power series | We have $\sin^2 x = x^2 - \frac{x^4}{3} + \frac{2x^6}{45} - \dots$, so $\frac{x^2 - \sin^2 x}{x^4} = \frac{1}{3} - \frac{2x^2}{45} + \dots$, so the limit as $x \to 0$ is $1/3$. |
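A numerical illustration of this limit (sample points chosen to avoid severe floating-point cancellation in the numerator):

```python
import math

for x in (0.2, 0.1, 0.05):
    val = (x ** 2 - math.sin(x) ** 2) / x ** 4
    # The next term in the expansion is -2x^2/45, so the error stays below x^2.
    assert abs(val - 1 / 3) < x ** 2
```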