Limit

This page defines a core term of calculus. The term is used widely, and a thorough understanding of its definition is critical.

Motivation

Quick summary

The term "limit" in mathematics is closely related to one of the many senses in which the term "limit" is used in day-to-day English. In day-to-day English, there are two uses of the term "limit":

  • Limit as something that one approaches, or is headed toward
  • Limit as a boundary or cap that cannot be crossed or exceeded

The mathematical term "limit" refers to the first of these two meanings. In other words, the mathematical concept of limit is a formalization of the intuitive concept of limit as something that one approaches or is headed toward.

For a function f, the notation:

\lim_{x \to c} f(x)

is meant to say "the limit, as x approaches c, of the function value f(x)" and thus, the mathematical equality:

\lim_{x \to c} f(x) = L

is meant to say "the limit, as x approaches c, of the function value f(x), is L." In a rough sense, what this means is that as x gets closer and closer to c, f(x) eventually comes, and stays, close enough to L.

Graphical interpretation

The graphical interpretation of "\lim_{x \to c} f(x) = L" is that, if we move along the graph y = f(x) of the function f in the plane, then the graph approaches the point (c,L) whether we make x approach c from the left or the right. However, this interpretation works well only if f is continuous on the immediate left and immediate right of c.

This interpretation is sometimes termed the "two finger test" where one finger is used to follow the graph for x slightly less than c and the other finger is used to follow the graph for x slightly greater than c.

The interpretation is problematic in that it is not really a definition, and fails to have computational utility for wildly oscillatory functions or functions with other forms of weird behavior.

Two key ideas

The concept of limit involves two key ideas, both of which help explain why the definition is structured the way it is:

  • Arbitrarily close: The limit depends on how things behave arbitrarily close to the point involved. The notion of "arbitrarily close" is difficult to quantify non-mathematically, but what it means is that any fixed distance is too much. For instance, when evaluating \lim_{x \to 2} f(x), we can consider points close to 2 such as 2.1, 2.01, 2.001, 2.0001, 2.0000001, 2.000000000000001. Any of these points, viewed in and of itself, is too far from 2 to offer any meaningful information. It is only the behavior in the limit, as we get arbitrarily close, that matters.
  • Trapping of the function close by: For a function to have a certain limit at a point, it is not sufficient for the function value to merely come close to that limiting value. Rather, for \lim_{x \to c} f(x) = L to hold, it is necessary that for x very close to c, the function value f(x) is trapped close to L. It is not enough for it to keep oscillating between being close to L and being far from L. (A numerical sketch of this trapping idea follows the list.)
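
To make the "trapping" idea concrete, here is a minimal numerical sketch (an illustration of our own, not part of the formal development): near x = 2, the values of x^2 settle ever closer to 4, whereas near x = 0 the sampled values of \sin(1/x) never settle near any single number.

```python
import math

# Sample points approaching the point of interest from the right.
offsets = [10**(-k) for k in range(1, 8)]   # 0.1, 0.01, ..., 1e-7

# x^2 near x = 2: the values get trapped ever closer to 4.
for h in offsets:
    x = 2 + h
    print(f"x = {x:<18.10f}  x^2 = {x*x:.10f}")

# sin(1/x) near x = 0: the sampled values keep jumping around inside [-1, 1]
# and do not settle near any single value L.
for h in offsets:
    print(f"x = {h:<12g}  sin(1/x) = {math.sin(1/h):+.6f}")
```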

Checkpoint questions:

  • To figure out the limit of a function at 2, does the value of the function at 2.1 matter? Does the value of the function at 2.01 matter? 2.001? How close is close enough?
  • What is the limit \lim_{x \to 0} \sin(1/x)? What's the intuitive idea behind the reasoning? More formal versions of this reasoning will be introduced after we have seen the \varepsilon-\delta definition.

Definition for finite limit for function of one variable

Two-sided limit

Suppose f is a function of one variable and c \in \R is a point such that f is defined to the immediate left and immediate right of c (note that f may or may not be defined at c). In other words, there exists some value t  > 0 such that f is defined on (c-t,c+t) \setminus \{ c \} = (c-t,c) \cup (c,c + t).

For a given value L \in \R, we say that:

\lim_{x \to c} f(x) = L

if the following holds:

For every \varepsilon > 0, there exists \delta > 0 such that for all x \in \R satisfying 0 < |x - c| < \delta, we have |f(x) - L| < \varepsilon.

The definition is broken down into its four clauses below:

  • For every \varepsilon > 0: in interval terms, \varepsilon \in (0,\infty). The symbol \varepsilon is a Greek lowercase letter pronounced "epsilon". Although the definition customarily uses the letter \varepsilon, it can be replaced by any other letter, as long as that letter is different from the other letters in use; the reason for sticking to a standard letter choice is that it reduces cognitive overload.
  • there exists \delta > 0 such that: in interval terms, \delta \in (0,\infty). The symbol \delta is a Greek lowercase letter pronounced "delta". As with \varepsilon, the letter \delta is a customary choice and could be replaced by any other letter not already in use.
  • for all x \in \R satisfying 0 < |x - c| < \delta: in interval terms, x \in (c - \delta,c) \cup (c, c+ \delta) = (c - \delta, c + \delta) \setminus \{ c \}. The symbol | \ \ | stands for the absolute value function. \in stands for "is in the set", so the statement should be read as saying that x is in the set described in the two equivalent ways. \cup stands for union, so x \in (c - \delta,c) \cup (c, c+ \delta) should be parsed as saying that x \in (c - \delta,c) or x \in (c, c + \delta). \setminus stands for set difference, so x \in (c - \delta, c + \delta) \setminus \{ c \} should be parsed as saying that x could be any value in (c - \delta, c + \delta) except c. The point c is excluded because we do not want the value of f at c to affect the limit notion.
  • we have |f(x) - L| < \varepsilon: in interval terms, f(x) \in (L - \varepsilon,L + \varepsilon). The statement should be read as saying that f(x) is in the interval (L - \varepsilon, L + \varepsilon).

The limit (also called the two-sided limit) \lim_{x \to c} f(x) is defined as a value L \in \R such that \lim_{x \to c} f(x) = L. By the uniqueness theorem for limits, there is at most one value of L \in \R for which \lim_{x \to c} f(x) = L. Hence, it makes sense to talk of the limit when it exists.
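
As an illustration of how the definition is used in practice (a worked sketch; the specific function and the choice of \delta are our own example rather than part of the formal development), consider the claim \lim_{x \to 3} (2x + 1) = 7. Given \varepsilon > 0, choose \delta = \varepsilon/2. Then for every x satisfying 0 < |x - 3| < \delta, we have |(2x + 1) - 7| = 2|x - 3| < 2\delta = \varepsilon, so the condition in the definition is met, and the choice of \delta depended only on \varepsilon.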


Checkpoint questions:

  • In order to make sense of \lim_{x \to c} f(x) = L, where must the function f be defined? Must f be defined at c? If f(c) exists, what can we say about its value?
  • What's the formal definition of limit, i.e., what does \lim_{x \to c} f(x) = L mean?
  • How would you write the formal definition of limit using intervals rather than absolute value inequalities to describe where x and f(x) should be?
  • Why is there a "0 < " in the inequality 0 < |x - c| < \delta in the \varepsilon-\delta definition? Why doesn't a 0 < appear in the |f(x) - L| < \varepsilon part of the definition?
  • In order to be able to talk of the limit \lim_{x \to c} f(x), what additional fact do we need beyond the definition of what \lim_{x \to c} f(x) = L means?

Left hand limit

Suppose f is a function of one variable and c \in \R is a point such that f is defined on the immediate left of c (note that f may or may not be defined at c). In other words, there exists some value t  > 0 such that f is defined on (c-t,c).

For a given value L \in \R, we say that:

\lim_{x \to c^-} f(x) = L

if the following holds:

For every \varepsilon > 0, there exists \delta > 0 such that for all x \in \R satisfying 0 < c - x < \delta, we have |f(x) - L| < \varepsilon.

The definition is broken down into its four clauses below:

  • For every \varepsilon > 0: in interval terms, \varepsilon \in (0,\infty). As before, \varepsilon is a Greek lowercase letter pronounced "epsilon", and the standard letter choice is customary rather than essential.
  • there exists \delta > 0 such that: in interval terms, \delta \in (0,\infty). As before, \delta is a Greek lowercase letter pronounced "delta".
  • for all x \in \R satisfying 0 < c - x < \delta: in interval terms, x \in (c - \delta,c), the immediate \delta-left of c. Note that no absolute value is needed here, since only points to the left of c are considered.
  • we have |f(x) - L| < \varepsilon: in interval terms, f(x) \in (L - \varepsilon,L + \varepsilon), i.e., f(x) is within \varepsilon of L.

The left hand limit (acronym LHL) \lim_{x \to c^-} f(x) is defined as a value L \in \R such that \lim_{x \to c^-} f(x) = L. By the uniqueness theorem for limits (one-sided version), there is at most one value of L \in \R for which \lim_{x \to c^-} f(x) = L. Hence, it makes sense to talk of the left hand limit when it exists.

Right hand limit

Suppose f is a function of one variable and c \in \R is a point such that f is defined on the immediate right of c (note that f may or may not be defined at c). In other words, there exists some value t  > 0 such that f is defined on (c,c+t).

For a given value L \in \R, we say that:

\lim_{x \to c^+} f(x) = L

if the following holds:

For every \varepsilon > 0, there exists \delta > 0 such that for all x \in \R satisfying 0 < x - c < \delta, we have |f(x) - L| < \varepsilon.

The definition is broken down into its four clauses below:

  • For every \varepsilon > 0: in interval terms, \varepsilon \in (0,\infty). As before, \varepsilon is a Greek lowercase letter pronounced "epsilon", and the standard letter choice is customary rather than essential.
  • there exists \delta > 0 such that: in interval terms, \delta \in (0,\infty). As before, \delta is a Greek lowercase letter pronounced "delta".
  • for all x \in \R satisfying 0 < x - c < \delta: in interval terms, x \in (c,c + \delta), the immediate \delta-right of c. Note that no absolute value is needed here, since only points to the right of c are considered.
  • we have |f(x) - L| < \varepsilon: in interval terms, f(x) \in (L - \varepsilon,L + \varepsilon), i.e., f(x) is within \varepsilon of L.

The right hand limit (acronym RHL) \lim_{x \to c^+} f(x) is defined as a value L \in \R such that \lim_{x \to c^+} f(x) = L. By the uniqueness theorem for limits (one-sided version), there is at most one value of L \in \R for which \lim_{x \to c^+} f(x) = L. Hence, it makes sense to talk of the right hand limit when it exists.

Side-by-side comparison of the definitions

The three definitions proceed clause by clause as follows:

  • First clause: "For every \varepsilon > 0" -- identical in the two-sided, left hand, and right hand definitions.
  • Second clause: "there exists \delta > 0 such that" -- still identical in all three definitions.
  • Third clause: this is the part that differs; it specifies the direction of approach in the domain. Two-sided limit: for all x \in \R satisfying 0 < |x - c| < \delta, i.e., x \in (c - \delta,c) \cup (c,c + \delta). Left hand limit: for all x \in \R satisfying 0 < c - x < \delta, i.e., x \in (c - \delta,c). Right hand limit: for all x \in \R satisfying 0 < x - c < \delta, i.e., x \in (c,c + \delta).
  • Fourth clause: "we have |f(x) - L| < \varepsilon, i.e., f(x) \in (L - \varepsilon,L + \varepsilon)" -- again identical in all three. Note that left versus right refers only to the direction of approach in the domain, not to the direction of approach of the function value.

Checkpoint questions:

  • In order to make sense of \lim_{x \to c^-} f(x) = L, where must the function f be defined? Must f be defined at c? If f(c) exists, what can we say about its value?
  • The definitions of left hand limit, right hand limit and ordinary (two-sided) limit are pretty similar. There is only one clause that differs across the three definitions. What clause is this, and how does it differ across the definitions? Explain both in inequality notation and in interval notation.
  • Why should we be careful when dealing with one-sided limits in the context of function compositions?

Relation between the limit notions

The two-sided limit exists if and only if both the left hand limit and the right hand limit exist and they are equal to each other.

Explicitly, \lim_{x \to c} f(x) exists if all three of these conditions hold:

  • \lim_{x \to c^-} f(x) exists.
  • \lim_{x \to c^+} f(x) exists.
  • \lim_{x \to c^-} f(x) = \lim_{x \to c^+} f(x).

Moreover, in the event that both one-sided limits exist and are equal, the two-sided limit is equal to both of them.

Further, a particular value of \delta > 0 works for a particular value of \varepsilon > 0 in the two-sided limit definition if and only if it works in both the left hand limit definition and the right hand limit definition.
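
For example (an illustration of our own): consider f(x) = |x|/x, defined for all x \ne 0, so that f(x) = -1 for x < 0 and f(x) = 1 for x > 0. Then \lim_{x \to 0^-} f(x) = -1 and \lim_{x \to 0^+} f(x) = 1. Both one-sided limits exist, but they are not equal, so the two-sided limit \lim_{x \to 0} f(x) does not exist.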

Definition of finite limit for function of one variable in terms of a game

The formal definitions of limit, as well as of one-sided limit, can be reframed in terms of a game. This is a special instance of an approach that turns any statement with existential and universal quantifiers into a game.

Two-sided limit

Consider the limit statement, with specified numerical values of c and L and a specified function f:

\lim_{x \to c} f(x) = L

Note that there is one trivial sense in which the above statement can be false, or rather, meaningless, namely, that f is not defined on the immediate left or immediate right of c. In that case, the limit statement above is false, but moreover, it is meaningless to even consider the notion of limit. We therefore omit this sense from consideration and consider instead only the situation where f is defined on the immediate left and immediate right of c.

The game is between two players, a Prover whose goal is to prove that the limit statement is true, and a Skeptic (sometimes also called a Disprover) whose goal is to show that the statement is false. The game has three moves:

  1. First, the skeptic chooses \varepsilon > 0, or equivalently, chooses the target interval (L - \varepsilon,L + \varepsilon) in which the skeptic is challenging the prover to trap the function.
  2. Then, the prover chooses \delta > 0, or equivalently, chooses the interval (c - \delta, c + \delta) \setminus \{ c \}.
  3. Then, the skeptic chooses a value x satisfying 0 < |x - c| < \delta, or equivalently, x \in (c - \delta, c + \delta) \setminus \{ c \}, which is the same as (c - \delta,c) \cup (c,c + \delta).

Now, if |f(x) - L| < \varepsilon (i.e., f(x) \in (L - \varepsilon,L + \varepsilon)), the prover wins. Otherwise, the skeptic wins.

We say that the limit statement

\lim_{x \to c} f(x) = L

is true if the prover has a winning strategy for this game. The winning strategy for the prover basically constitutes a strategy to choose an appropriate \delta in terms of the \varepsilon chosen by the skeptic. Thus, it is an expression of \delta as a function of \varepsilon. Verbally, the goal of the prover is to choose a value of \delta so that when the input is restricted to being within \delta distance of c, the output is trapped to within \varepsilon distance of the claimed limit L.
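
The following minimal Python sketch (an illustration of our own; the specific function, limit claim, strategy, and all names are hypothetical choices made for this example) plays one round of the game for the claim \lim_{x \to 2} x^2 = 4, with the prover's strategy written as \delta expressed in terms of \varepsilon alone.

```python
def f(x):
    return x * x          # the function whose limit at c = 2 is claimed to be L = 4

C, L = 2.0, 4.0

def prover_delta(eps):
    # Prover's strategy: delta as a function of epsilon alone.
    # If |x - 2| < 1 then |x + 2| < 5, so |x^2 - 4| = |x - 2| * |x + 2| < 5 * delta <= eps.
    return min(1.0, eps / 5.0)

def play_round(eps, skeptic_pick_x):
    delta = prover_delta(eps)        # move 2: prover, knowing only eps
    x = skeptic_pick_x(delta)        # move 3: skeptic, knowing delta
    assert 0 < abs(x - C) < delta, "skeptic's x must satisfy 0 < |x - c| < delta"
    return abs(f(x) - L) < eps       # the judge's verdict: True means the prover wins

# A skeptic that picks x as close to the edge of the allowed interval as it can.
adversarial_skeptic = lambda delta: C + 0.999 * delta

for eps in (1.0, 0.1, 0.001):        # move 1: skeptic picks epsilon
    print(f"eps = {eps}: prover wins -> {play_round(eps, adversarial_skeptic)}")
```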

We say that the limit statement

\lim_{x \to c} f(x) = L

is false if the skeptic has a winning strategy for this game. The winning strategy for the skeptic involves a choice of \varepsilon, and a strategy that chooses a value of x (constrained in the specified interval) based on the prover's choice of \delta.

Let's review the definition in conjunction with the game along with a deeper semantic understanding of the steps:

  1. Clause: "For every \varepsilon > 0". The skeptic moves, choosing \varepsilon, which must be positive. The "for every" corresponds to the idea that the move is made by the side that does not have a winning strategy, because we want to argue that the side that does have a winning strategy can win no matter what.
  2. Clause: "there exists \delta > 0 such that". The prover moves, choosing \delta, which must be positive. The "there exists" corresponds to the idea that the move is made by the side that has a winning strategy, because that side gets to choose a favorable value of the variable (in this case \delta).
  3. Clause: "for all x satisfying 0 < |x - c| < \delta". The skeptic moves, choosing x, which must lie in the interval (c - \delta,c) \cup (c,c + \delta). As in step 1, the "for all" corresponds to a move made by the side that does not have a winning strategy.
  4. Clause: "we have |f(x) - L| < \varepsilon". Neither side moves; it is time for the judge to decide: if f(x) \in (L - \varepsilon,L + \varepsilon) (the condition that we desire), the prover wins; otherwise, the skeptic wins.

[Figure: illustration of the \varepsilon-\delta game]

Slight subtlety regarding domain of definition: f is only guaranteed to be defined on (c - t, c + t) \setminus \{ c \} for some t > 0, so for the final check to make sense for every allowed x, the prover should choose \delta \le t. This is always possible, since making \delta smaller never hurts the prover.

Negation of limit statement and non-existence of limit

We now consider the explicit description of the definition for the case that the skeptic has a winning strategy for the limit game for \lim_{x \to c} f(x) = L, i.e., for the limit statement being false.

In words, the definition is:

There exists \varepsilon > 0 such that for every \delta > 0, there exists x satisfying 0  < |x - c| < \delta and |f(x) - L| \ge \varepsilon.

Let's review the definition in conjunction with the game along with a deeper semantic understanding of the steps:

  1. Original clause (prover has a winning strategy): "For every \varepsilon > 0". Negated clause (skeptic has a winning strategy): "There exists \varepsilon > 0 such that". The skeptic moves, choosing a positive \varepsilon. Whether we use "for every" or "there exists" depends on who we're rooting for.
  2. Original clause: "there exists \delta > 0 such that". Negated clause: "for every \delta > 0". The prover moves, choosing a positive \delta. Again, the choice of quantifier depends on who we're rooting for.
  3. Original clause: "for all x \in \R satisfying 0 < |x - c| < \delta". Negated clause: "there exists x \in \R satisfying 0 < |x - c| < \delta and". The skeptic moves, choosing x within the interval (c - \delta,c) \cup (c,c + \delta). Again, the choice of quantifier depends on who we're rooting for.
  4. Original clause: "we have |f(x) - L| < \varepsilon". Negated clause: "|f(x) - L| \ge \varepsilon". Neither side moves; it is time for the judge to decide: if f(x) \in (L - \varepsilon,L + \varepsilon), the prover wins; otherwise, the skeptic wins. The two conditions are negations of one another.

Non-existence of limit

The statement \lim_{x \to c} f(x) does not exist could mean one of two things:

  1. f is not defined around c, i.e., there is no t > 0 for which f is defined on (c - t, c + t) \setminus \{ c \}. In this case, it does not even make sense to try taking a limit.
  2. f is defined around c, except possibly at c, i.e., there is t > 0 for which f is defined on (c - t, c + t) \setminus \{ c \}. So, it does make sense to try taking a limit. However, the limit still does not exist.

The formulation of the latter case is as follows:

For every L \in \R, there exists \varepsilon > 0 such that for every \delta > 0, there exists x satisfying 0 < |x - c| < \delta and such that |f(x) - L| \ge \varepsilon.

We can think of this in terms of a slight modification of the limit game, where, in our modification, there is an extra initial move by the prover to propose a value L for the limit. The limit does not exist if the skeptic has a winning strategy for this modified game.

An example of a function that does not have a limit at a specific point is the sine of reciprocal function. Explicitly, the limit:

\lim_{x \to 0} \sin\left(\frac{1}{x}\right)

does not exist. The skeptic's winning strategy is as follows: regardless of the L chosen by the prover, pick a fixed \varepsilon < 1 (independent of L, so \varepsilon can be decided in advance of the game -- note that the skeptic could even pick \varepsilon = 1 and the strategy would still work, since an open interval of width 2 still cannot contain the closed interval [-1,1]). After the prover has chosen a value \delta, find a value x \in (0 - \delta,0 + \delta) \setminus \{ 0 \} such that the value \sin(1/x) lies outside (L - \varepsilon,L + \varepsilon). This is possible because the interval (L - \varepsilon,L + \varepsilon) has width 2 \varepsilon, hence cannot cover the entire interval [-1,1], which has width 2. Moreover, the range of the \sin(1/x) function on (0 - \delta,0 + \delta) \setminus \{ 0 \} is all of [-1,1].

Crucially, the inability of the prover to trap the function value close to any point as x \to 0 is the reason the limit fails to exist.
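
The skeptic's strategy above can be made completely explicit. The following Python sketch (our own illustration, with all names hypothetical) produces, for any proposed L and any \delta, a point x \in (0, \delta) at which \sin(1/x) lies at distance at least 1 from L.

```python
import math

def skeptic_x(L, delta):
    # Inside (0, delta) there are points where sin(1/x) = +1 and points where sin(1/x) = -1.
    # Return one whose function value is at distance >= 1 from the proposed limit L.
    n = int(1 / (2 * math.pi * delta)) + 1             # large enough that both points lie in (0, delta)
    x_plus = 1 / (math.pi / 2 + 2 * math.pi * n)       # sin(1/x_plus)  = +1
    x_minus = 1 / (3 * math.pi / 2 + 2 * math.pi * n)  # sin(1/x_minus) = -1
    return x_plus if L <= 0 else x_minus

# Whatever L the prover proposes and whatever delta it then picks, the returned x
# satisfies 0 < x < delta and |sin(1/x) - L| >= 1 (up to floating-point rounding),
# so the skeptic wins whenever epsilon < 1.
for L in (0.0, 1.0, -0.7):
    for delta in (0.5, 1e-3, 1e-9):
        x = skeptic_x(L, delta)
        print(L, delta, 0 < x < delta, abs(math.sin(1 / x) - L))
```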

[Figure: graph of \sin(1/x) near x = 0]

[Figure: graph of \sin(1/x) near x = 0, zoomed in]


Strategic aspects

The strategy of small

In the game formulation of the limit, the following loose statements are true:

  • "Smaller is smarter" for the skeptic, i.e., the smaller the choice of \varepsilon, the better the outlook is for the skeptic to win.
  • "Smaller is smarter" for the prover, i.e., the smaller the choice of \delta, the better the outlook is for the prover to win.

In other words, each side benefits by making the crucial move of that side as small as possible. However, there does not exist any single arbitrarily small number -- this is related to the observation in the motivation section that there is no such thing as a single arbitrarily close number. Thus, saying "choose as small a value as possible" is not a coherent strategy. What we can say is the following:

  • If a value of \delta > 0 works for a given value of \varepsilon > 0, the same value of \delta > 0 also works for larger choices of \varepsilon.
  • If a value of \delta > 0 works for a given value of \varepsilon > 0, smaller values of \delta > 0 also work for the same choice of \varepsilon.

Prover's strategy revisited

The prover, in choosing a winning strategy, must specify a rule that can determine a value of \delta that works in terms of the value of \varepsilon specified by the skeptic. In other words, the prover must have a way of specifying \delta as a function of \varepsilon.

The skeptic also chooses x in the next move. However, the prover has no way of knowing the value of x that the skeptic plans to pick. Thus, in order for the prover to have a winning strategy, the prover's choice of \delta should be such that no matter what x the skeptic picks, the prover wins.

Skeptic's strategy revisited

The skeptic, in choosing a winning strategy, must specify the value of \varepsilon and then specify how to pick a value of x that works. When picking the value of \varepsilon, the skeptic does not know what \delta the prover will pick. Thus, the skeptic's choice of \varepsilon cannot be dependent on the prover's subsequent choice of \delta.

However, when picking the value of x, the skeptic is aware of (and constrained by) the prover's choice of \delta.

Misconceptions

Most misconceptions associated with the formal \varepsilon-\delta definition of limit have to do with the ordering of the moves in the game, who's in charge of what move, and what information each person has at the time of making the move. We describe some common misconceptions below.


Strongly telepathic prover

Spot the error in this:

Consider the limit problem \displaystyle \lim_{x \to 2} x^2 = 4. The \varepsilon-\delta proof corresponding to this problem would involve a game between a prover and a skeptic. To show that the limit statement is true, it suffices to exhibit a winning strategy for the prover for the game. The strategy is as follows. Pick \delta = \frac{\varepsilon}{|x + 2|}. Let's prove that this works.

Specific claim: For any skeptic-picked \varepsilon > 0, if the prover picks \delta > 0 such that \delta = \varepsilon/|x + 2|, then regardless of the x that the skeptic picks with 0 < |x - 2| < \delta, we have |x^2 - 4| < \varepsilon.

Proof of claim: We have:
|x^2 - 4| = |x - 2||x + 2| < \delta|x + 2| = \frac{\varepsilon}{|x + 2|} |x + 2| = \varepsilon
The error is as follows: the prover's proposed choice \delta = \varepsilon/|x + 2| depends on x. But x is chosen by the skeptic after the prover has committed to \delta, so the prover cannot know x at the time of choosing \delta (hence "telepathic"). A legitimate strategy must express \delta in terms of \varepsilon alone.
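
A corrected, non-telepathic strategy for the same limit (a sketch of our own): take \delta = \min(1, \varepsilon/5), which depends only on \varepsilon. If 0 < |x - 2| < \delta, then |x - 2| < 1, so |x + 2| = |(x - 2) + 4| < 5, and therefore |x^2 - 4| = |x - 2| \cdot |x + 2| < 5\delta \le \varepsilon.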

Mildly telepathic prover

Spot the error in this:

Consider the limit problem:
g(x) = \left \lbrace \begin{array}{ll} x, & x \text{ rational } \\ 0, & x \text{ irrational }\\\end{array}\right.
We want to show that \displaystyle \lim_{x \to 0} g(x) = 0
For this game, we need to exhibit a winning strategy for the prover. The winning strategy is as follows. The skeptic first chooses \varepsilon > 0. The prover now makes two cases. If the skeptic is planning to pick a rational value of x, then the prover chooses the strategy \delta = \varepsilon. If the skeptic is planning to choose an irrational value of x, then the prover can pick any \delta.
Clearly, the prover's strategy works in both cases, so we have a winning strategy.
The error is as follows: at the time of choosing \delta, the prover does not know whether the skeptic will later pick a rational or an irrational value of x, so the prover's strategy cannot branch on that information. (The conclusion happens to be salvageable, since the choice \delta = \varepsilon works for all x, rational or irrational, which is why this prover is only "mildly" telepathic.)

You say you want a replay?

Spot the error in this:

Consider the limit problem \displaystyle \lim_{x \to 1} 2x = 2. Let's think of this in terms of an \varepsilon-\delta game. The skeptic begins by picking \varepsilon = 0.1. The prover chooses \delta = 0.05. The skeptic now chooses x = 0.97. This value of x is within the \delta-distance of 1. It's now checked that 2x = 1.94 is within \varepsilon-distance of the claimed limit 2. The prover has thus won the game, and we have established the truth of the limit statement.
The error is as follows: a single play of the game in which the prover happens to win does not establish the limit statement. The statement is true only if the prover has a winning strategy, i.e., a rule for choosing \delta that wins against every \varepsilon > 0 the skeptic might pick and every allowed x the skeptic might then pick. (For this limit, the strategy \delta = \varepsilon/2 does the job.)

Playing to lose

Spot the error in this:

Here's an easy proof that \lim_{x \to 0} \sin(1/x) = 0. We need to show that the prover has a winning strategy for the game. Let's say the skeptic starts out by picking \varepsilon = 2. The prover then picks \delta = 1/\pi. It can now easily be verified that for 0 < |x| < \delta, |\sin(1/x) - 0| < 2, because the \sin function is trapped within [-1,1]. Thus, the prover has succeeded in trapping the function within the \varepsilon-interval specified by the skeptic, and hence won the game. The limit statement is therefore true.
The error is as follows: to establish the limit statement, the prover must have a strategy that wins against every possible choice of \varepsilon by the skeptic, not just against one badly played \varepsilon. Here the skeptic played to lose: with \varepsilon = 2, any function trapped in [-1,1] automatically stays within the target interval. Against a skeptic who picks, say, \varepsilon = 1/2, no choice of \delta can work, which is why (as shown earlier) the limit does not exist.

Conceptual definition and various cases

Formulation of conceptual definition

Below is the conceptual definition of limit. Suppose f is a function defined in a neighborhood of the point c, except possibly at the point c itself. We say that:

\lim_{x \to c} f(x) = L

if:

  • For every choice of neighborhood of L (where the term neighborhood is suitably defined)
  • there exists a choice of neighborhood of c (where the term neighborhood is suitably defined) such that
  • for all x \ne c that are in the chosen neighborhood of c
  • f(x) is in the chosen neighborhood of L.

Functions of one variable case

The following definitions of neighborhood are good enough to define limits.

  • For points in the interior of the domain, for functions of one variable: We can take an open interval centered at the point. For a point c, such an open interval is of the form (c - t, c + t), t > 0. Note that if we exclude the point c itself, we get (c - t,c) \cup (c,c + t).
  • For the point +\infty, for functions of one variable: We take intervals of the form (a,\infty), where a \in \R.
  • For the point -\infty, for functions of one variable: We can take intervals of the form (-\infty,a), where a \in \R.

We can now list the nine cases of limits, combining finite and infinite possibilities:

  • \lim_{x \to c} f(x) = L: For every \varepsilon > 0, there exists \delta > 0 such that for all x satisfying 0 < |x - c| < \delta (i.e., x \in (c - \delta,c) \cup (c,c + \delta)), we have |f(x) - L| < \varepsilon (i.e., f(x) \in (L - \varepsilon,L +\varepsilon)).
  • \lim_{x \to c} f(x) = -\infty: For every a \in \R, there exists \delta > 0 such that for all x satisfying 0 < |x - c| < \delta (i.e., x \in (c - \delta,c) \cup (c,c + \delta)), we have f(x) < a (i.e., f(x) \in (-\infty,a)).
  • \lim_{x \to c} f(x) = \infty: For every a \in \R, there exists \delta > 0 such that for all x satisfying 0 < |x - c| < \delta (i.e., x \in (c - \delta,c) \cup (c,c + \delta)), we have f(x) > a (i.e., f(x) \in (a,\infty)).
  • \lim_{x \to -\infty} f(x) = L: For every \varepsilon > 0, there exists a \in \R such that for all x satisfying x < a (i.e., x \in (-\infty,a)), we have |f(x) - L| < \varepsilon (i.e., f(x) \in (L - \varepsilon,L +\varepsilon)).
  • \lim_{x \to -\infty} f(x) = -\infty: For every b \in \R, there exists a \in \R such that for all x satisfying x < a (i.e., x \in (-\infty,a)), we have f(x) < b (i.e., f(x) \in (-\infty,b)).
  • \lim_{x \to -\infty} f(x) = \infty: For every b \in \R, there exists a \in \R such that for all x satisfying x < a (i.e., x \in (-\infty,a)), we have f(x) > b (i.e., f(x) \in (b,\infty)).
  • \lim_{x \to \infty} f(x) = L: For every \varepsilon > 0, there exists a \in \R such that for all x satisfying x > a (i.e., x \in (a,\infty)), we have |f(x) - L| < \varepsilon (i.e., f(x) \in (L - \varepsilon,L +\varepsilon)).
  • \lim_{x \to \infty} f(x) = -\infty: For every b \in \R, there exists a \in \R such that for all x satisfying x > a (i.e., x \in (a,\infty)), we have f(x) < b (i.e., f(x) \in (-\infty,b)).
  • \lim_{x \to \infty} f(x) = \infty: For every b \in \R, there exists a \in \R such that for all x satisfying x > a (i.e., x \in (a,\infty)), we have f(x) > b (i.e., f(x) \in (b,\infty)).
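
As a sample use of one of these definitions (an illustration of our own): \lim_{x \to 0} 1/x^2 = \infty. Given a \in \R, if a \le 0 then any \delta > 0 works, since 1/x^2 > 0 \ge a for all x \ne 0. If a > 0, take \delta = 1/\sqrt{a}; then 0 < |x - 0| < \delta gives 0 < x^2 < 1/a, so f(x) = 1/x^2 > a, as required.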

Limit of sequence versus real-sense limit

Recall that the limit of a real-valued function to infinity is defined as follows:

\lim_{x \to \infty} f(x) = L means that:

  • For every \varepsilon > 0
  • there exists a \in \R (we're thinking of the neighborhood (a,\infty)) such that
  • for all x > a (i.e. x \in (a,\infty))
  • we have |f(x) - L| < \varepsilon (i.e., f(x) \in (L - \varepsilon,L + \varepsilon)).

Suppose now instead that f is a function restricted to the natural numbers. We can think of f as a sequence, namely the sequence f(1), f(2), \dots. In that case:

\lim_{n \to \infty, n \in \mathbb{N}} f(n) = L (in words, the sequence converges to L) means that:

  • For every \varepsilon > 0
  • there exists n_0 \in \mathbb{N} such that
  • for all n \in \mathbb{N} satisfying n > n_0,
  • we have |f(n) - L| < \varepsilon (i.e., f(n) \in (L - \varepsilon, L + \varepsilon)).

The definitions differ in both their second and third lines. However, the difference in the second line (the use of a real number versus a natural number to specify the threshold for the trapping interval) is not important, i.e., we could swap these lines between the definitions without changing the sense of either definition. The key difference lies in the third lines: the real-sense limit definition requires trapping of the function value close to the claimed limit for all sufficiently large reals, whereas the sequence limit definition requires trapping only for all sufficiently large natural numbers.

To understand this distinction, consider the following: if f is defined on reals, and it has a real-sense limit, i.e., \lim_{x \to \infty} f(x) = L for some L \in \mathbb{R}, then it must also be true that \lim_{n \to \infty, n \in \mathbb{N}} f(n) = L. However, it is possible for f to have a sequence limit but not have a real-sense limit. For instance, the function f(x) := \sin(\pi x) has \lim_{x \to \infty} f(x) undefined but \lim_{n \to \infty, n \in \mathbb{N}} f(n) is zero, because f takes the value 0 at all integers.
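
A quick numerical check of this example (our own sketch): evaluating \sin(\pi x) at integer inputs gives values that are (numerically) zero, while evaluating it halfway between consecutive integers gives values \pm 1, so the function cannot be approaching any single value along the reals.

```python
import math

f = lambda x: math.sin(math.pi * x)   # zero at every integer, but oscillating along the reals

print([round(f(n), 12) for n in range(1, 6)])        # at integers: 0.0, 0.0, 0.0, 0.0, 0.0
print([round(f(n + 0.5), 12) for n in range(1, 6)])  # between integers: -1.0, 1.0, -1.0, 1.0, -1.0
```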


Real-valued functions of multiple variables case

We consider the multiple input variables as a vector input variable, as the definition is easier to frame from this perspective.

The correct notion of neighborhood is as follows: for a point \overline{c}, we define the neighborhood parametrized by a positive real number r as the open ball of radius r centered at \overline{c}, i.e., the set of all points \overline{x} such that the distance from \overline{x} to \overline{c} is less than r. This distance is the same as the norm of the difference vector \overline{x} - \overline{c}. The norm is sometimes denoted |\overline{x} - \overline{c}|. This open ball is sometimes denoted B_r(\overline{c}).

Suppose f is a real-valued (i.e., scalar) function of a vector variable \overline{x}. Suppose \overline{c} is a point such that f is defined "around" \overline{c}, except possibly at \overline{c}. In other words, there is an open ball centered at \overline{c} such that f is defined everywhere on that open ball, except possibly at \overline{c}.

With these preliminaries out of the way, we can define the notion of limit. We say that:

\lim_{\overline{x} \to \overline{c}} f(\overline{x}) = L

if the following holds:

  • For every \varepsilon > 0
  • there exists \delta > 0 such that
  • for all \overline{x} satisfying 0 < |\overline{x} - \overline{c}| < \delta (i.e., \overline{x} is in a ball of radius \delta centered at \overline{c} but not the point \overline{c} itself -- note that the | \cdot | notation is for the norm, or length, of a vector)
  • we have |f(\overline{x}) - L| < \varepsilon. Note that f(\overline{x}) and L are both scalars, so the | \cdot | here is the usual absolute value function.
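
As a concrete illustration of this definition (an example of our own): let f(x, y) = \frac{x^2 y}{x^2 + y^2} for (x, y) \ne (0, 0). Then \lim_{(x,y) \to (0,0)} f(x, y) = 0. Given \varepsilon > 0, take \delta = \varepsilon. If 0 < |(x, y)| < \delta (where |(x, y)| = \sqrt{x^2 + y^2} is the norm), then |f(x, y)| = \frac{x^2}{x^2 + y^2} \cdot |y| \le |y| \le |(x, y)| < \varepsilon, so the function value is trapped within \varepsilon of 0.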