Multidimensional Spaces
In this chapter, we introduce multidimensional spaces, laying the foundation for the core themes of Analysis II.
The Euclidean Structure
In this section, we discuss spaces endowed with a Euclidean structure. We begin by defining the standard vector space \(\mathbb{R}^n\) as the set of ordered \(n\)-tuples of real numbers: \[\begin{equation} \mathbb{R}^n = \{x = (x_1, \dots, x_n) \mid x_i \in \mathbb{R}\} \end{equation}\] where \(n \in \mathbb{N}\) represents the dimension. The space \(\mathbb{R}^n\) is a linear space equipped with the following operations:
Vector Addition: For all \(x, y \in \mathbb{R}^n\), \[\begin{equation} x + y = (x_1+y_1, \dots, x_n + y_n) \end{equation}\]
Scalar Multiplication: For all \(\lambda \in \mathbb{R}\) and \(x \in \mathbb{R}^n\), \[\begin{equation} \lambda x = (\lambda x_1, \dots, \lambda x_n) \end{equation}\]
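As a quick illustration, the two operations can be sketched in Python (the helper names `vec_add` and `scal_mul` are ours, chosen for this example):

```python
# Componentwise operations on R^n, represented here as tuples of floats.

def vec_add(x, y):
    """Vector addition: (x + y)_i = x_i + y_i."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def scal_mul(lam, x):
    """Scalar multiplication: (lam * x)_i = lam * x_i."""
    return tuple(lam * xi for xi in x)

# Example in R^3:
x = (1.0, 2.0, 3.0)
y = (4.0, 5.0, 6.0)
print(vec_add(x, y))      # (5.0, 7.0, 9.0)
print(scal_mul(2.0, x))   # (2.0, 4.0, 6.0)
```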
Inner Product, Norm, and Distance
The geometric structure of \(\mathbb{R}^n\) arises from the introduction of the standard scalar product (or dot product), defined as:
\[\begin{equation} x \cdot y = \langle x, y \rangle := \sum_{i=1}^n x_i y_i \end{equation}\]
The scalar product induces the Euclidean norm (the length of a vector):
\[\begin{equation} \lVert x \rVert := \sqrt{\langle x, x \rangle} = \sqrt{\sum_{i=1}^n x_i^2} \end{equation}\]
Finally, we define the Euclidean distance between two points \(x\) and \(y\):
\[\begin{equation} d(x,y) := \lVert y-x \rVert \end{equation}\]
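The three definitions translate directly into code; here is a minimal Python sketch (the helper names `dot`, `norm`, and `dist` are our choices):

```python
import math

def dot(x, y):
    # Standard scalar product: sum of componentwise products.
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):
    # Euclidean norm, induced by the scalar product.
    return math.sqrt(dot(x, x))

def dist(x, y):
    # Euclidean distance: norm of the difference y - x.
    return norm(tuple(yi - xi for xi, yi in zip(x, y)))

print(dot((1, 2), (3, 4)))   # 11
print(norm((3, 4)))          # 5.0
print(dist((0, 0), (3, 4)))  # 5.0
```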
For all \(x, y \in \mathbb{R}^n\), the Cauchy-Schwarz inequality holds: \[\begin{equation} |\langle x, y \rangle| \leq \lVert x \rVert \cdot \lVert y \rVert \end{equation}\]
Proof. If either \(x = 0\) or \(y = 0\), the inequality holds trivially (\(0 \leq 0\)). Assume therefore that \(x \neq 0\) and \(y \neq 0\). We start from the elementary inequality valid for all real numbers \(a, b \in \mathbb{R}\): \[\begin{equation} 2ab \leq a^2 + b^2 \end{equation}\] Let \(\lambda > 0\) be an arbitrary scalar. Applying the inequality above with \(a = \lambda x_i\) and \(b = \frac{y_i}{\lambda}\), we have for each component: \[\begin{equation} 2 x_i y_i = 2 (\lambda x_i) \left( \frac{y_i}{\lambda} \right) \leq \lambda^2 x_i^2 + \frac{y_i^2}{\lambda^2} \end{equation}\] Summing over \(i = 1, \dots, n\): \[\begin{equation} 2 \sum_{i=1}^n x_i y_i \leq \lambda^2 \sum_{i=1}^n x_i^2 + \frac{1}{\lambda^2} \sum_{i=1}^n y_i^2 = \lambda^2 \lVert x \rVert^2 + \frac{1}{\lambda^2} \lVert y \rVert^2 \end{equation}\] To obtain the tightest bound, we choose \(\lambda^2 = \frac{\lVert y \rVert}{\lVert x \rVert}\), which is possible since \(x, y \neq 0\). Substituting this back: \[\begin{equation} 2 \langle x, y \rangle \leq \left( \frac{\lVert y \rVert}{\lVert x \rVert} \right) \lVert x \rVert^2 + \left( \frac{\lVert x \rVert}{\lVert y \rVert} \right) \lVert y \rVert^2 = \lVert y \rVert \lVert x \rVert + \lVert x \rVert \lVert y \rVert = 2 \lVert x \rVert \lVert y \rVert \end{equation}\] Dividing by 2 gives \(\langle x, y \rangle \leq \lVert x \rVert \lVert y \rVert\). Applying the same argument to \(-x\) and \(y\) gives \(-\langle x, y \rangle \leq \lVert x \rVert \lVert y \rVert\), which proves the bound on the absolute value. ◻
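The inequality can be sanity-checked numerically. The sketch below (with illustrative helper names) also shows that the bound is attained for parallel vectors:

```python
import math
import random

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

# Randomized check of |<x, y>| <= ||x|| * ||y|| on vectors in R^4:
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(4)]
    y = [random.uniform(-10, 10) for _ in range(4)]
    assert abs(dot(x, y)) <= norm(x) * norm(y) + 1e-9

# Equality is attained when the vectors are parallel, e.g. y = 3x:
x = [1.0, 2.0, 0.0, -1.0]
y = [3.0 * a for a in x]
print(abs(dot(x, y)), norm(x) * norm(y))  # the two values coincide (up to rounding)
```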
For all \(x, y \in \mathbb{R}^n\): \[\begin{equation} \lVert x + y \rVert \leq \lVert x \rVert + \lVert y \rVert \end{equation}\] This implies the metric triangle inequality \(d(x, z) \leq d(x, y) + d(y, z)\), since \(z - x = (z - y) + (y - x)\).
Proof. We proceed by squaring the norm: \[\begin{align} \lVert x+y \rVert^2 &= \langle x+y, x+y \rangle \\ &= \langle x, x \rangle + 2\langle x, y \rangle + \langle y, y \rangle \\ &= \lVert x \rVert^2 + 2\langle x, y \rangle + \lVert y \rVert^2 \end{align}\] By the Cauchy-Schwarz inequality, we know that \(\langle x, y \rangle \leq \lVert x \rVert \lVert y \rVert\). Therefore: \[\begin{align} \lVert x+y \rVert^2 &\leq \lVert x \rVert^2 + 2\lVert x \rVert \lVert y \rVert + \lVert y \rVert^2 \\ &= (\lVert x \rVert + \lVert y \rVert)^2 \end{align}\] Taking the square root of both sides yields the result. ◻
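A randomized numerical check of the triangle inequality (an illustrative sketch, not part of the proof):

```python
import math
import random

def norm(x):
    return math.sqrt(sum(a * a for a in x))

# Check ||x + y|| <= ||x|| + ||y|| on random vectors in R^3:
random.seed(1)
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(3)]
    y = [random.uniform(-5, 5) for _ in range(3)]
    s = [a + b for a, b in zip(x, y)]
    assert norm(s) <= norm(x) + norm(y) + 1e-12
print("triangle inequality held in all trials")
```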
A metric space is a set endowed with a distance function: the pair \[\begin{equation} (X, d) \end{equation}\] is a metric space when \(X\) is a non-empty set and \(d: X \times X \rightarrow [0, \infty)\) is a distance function.
The distance function must satisfy three axioms:
\(\forall x, y \in X, \quad d(x,y) = 0 \iff x = y\) (definiteness)
\(\forall x,y \in X, \quad d(x,y) = d(y,x)\) (symmetry)
\(\forall x,y,z \in X, \quad d(x,z) \leq d(x,y) + d(y,z)\) (triangle inequality)
Some examples of metric spaces are:
\((\mathbb{R}^n, d_{\text{Euclidean}})\)
\((\mathbb{R}^2, d_{\text{NY}})\) where \(d_{\text{NY}}(x,y) = |x_1-y_1| + |x_2-y_2|\) is called the New York (or Manhattan) distance.
Note that for a metric space \((X, d)\), if \(Y \subset X\), then \((Y, d|_{Y \times Y})\) is also a metric space.
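A small Python comparison of the two distances on \(\mathbb{R}^2\) (helper names are ours):

```python
import math

def d_euclid(x, y):
    # Straight-line (Euclidean) distance in R^2.
    return math.sqrt((x[0] - y[0])**2 + (x[1] - y[1])**2)

def d_ny(x, y):
    # "New York" (Manhattan) distance: travel along a rectangular grid.
    return abs(x[0] - y[0]) + abs(x[1] - y[1])

p, q = (0.0, 0.0), (3.0, 4.0)
print(d_euclid(p, q))  # 5.0  (straight line)
print(d_ny(p, q))      # 7.0  (grid path; always >= the Euclidean distance)
```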
The space of continuous functions on a real interval \([a,b]\) with \(a < b\), defined as \(X = \{ f: [a,b] \rightarrow \mathbb{R} \mid f \text{ is continuous} \}\). We can endow this set with several metrics, for instance: \[\begin{equation} d_{\infty}(f,g) = \max_{x \in [a,b]} |f(x)-g(x)| \end{equation}\] and the integral metric: \[\begin{equation} d_2(f,g) = \left( \int_a^b (f(x)-g(x))^2 \, dx \right)^{\frac{1}{2}} \end{equation}\]
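The two function-space metrics can be approximated numerically by sampling on a uniform grid. This is only a discretized sketch: the names `d_inf` and `d_2`, the grid resolution, and the Riemann-sum approximation of the integral are our choices:

```python
import math

def d_inf(f, g, a, b, samples=10001):
    # Approximate max_{x in [a,b]} |f(x) - g(x)| on a uniform grid.
    h = (b - a) / (samples - 1)
    return max(abs(f(a + k*h) - g(a + k*h)) for k in range(samples))

def d_2(f, g, a, b, samples=10001):
    # Riemann-sum approximation of the integral metric.
    h = (b - a) / (samples - 1)
    s = sum((f(a + k*h) - g(a + k*h))**2 for k in range(samples - 1)) * h
    return math.sqrt(s)

f = math.sin
g = lambda x: 0.0
print(round(d_inf(f, g, 0.0, math.pi), 4))  # ~1.0, the max of |sin| on [0, pi]
print(round(d_2(f, g, 0.0, math.pi), 4))    # ~sqrt(pi/2), since the integral of sin^2 is pi/2
```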
We can define sequences in multidimensional spaces similarly to how we have defined sequences in \(\mathbb{R}\).
Let \(X\) be a set. We call a sequence in \(X\) a map \(x: \mathbb{N} \rightarrow X\), and write \(x_n\) to indicate the \(n\)-th element of the sequence.
There are many notations; the most common are: \((x_n)_{n\geq 0}\), \((x_n)_{n \in \mathbb{N}}\), and \((x_n)_{n = 0}^{\infty}\).
Let \((X,d)\) be a metric space. We say that a sequence \((x_n)_{n\geq 0}\) has a limit \(x \in X\) if and only if \(d(x_n, x) \rightarrow 0\) as a sequence of real numbers.
Equivalently, \(\forall \varepsilon > 0\), \(\exists N > 0\) such that \(d(x_n, x) < \varepsilon\) for all \(n \geq N\).
We will use this notation: \[\begin{equation} \lim_{n\rightarrow \infty} x_n = x \quad \text{or} \quad x_n \rightarrow x \end{equation}\]
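As a concrete check of the definition, the sequence \(x_m = (1/m, 2^{-m})\) in \(\mathbb{R}^2\) should satisfy \(d(x_m, 0) \rightarrow 0\); a short sketch:

```python
import math

def dist(x, y):
    # Euclidean distance in R^n.
    return math.sqrt(sum((a - b)**2 for a, b in zip(x, y)))

# The sequence x_m = (1/m, 2**-m) converges to (0, 0): the distances shrink.
limit = (0.0, 0.0)
for m in [1, 10, 100, 1000]:
    print(m, dist((1/m, 2.0**-m), limit))
```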
Let \((X, d)\) be a metric space, and \((x_n)_{n \geq 0}\) a sequence in \(X\). Assume \(x_n \rightarrow x\) and \(x_n \rightarrow y\) with \(x,y \in X\). Then, \(x = y\).
Proof. Assume by contradiction that \(x \neq y\). Then \(d(x,y) > 0\). Let \(\varepsilon = \frac{d(x,y)}{3} > 0\).
By the definition of a limit, since \(x_n \rightarrow x\), we have: \[\begin{equation} \exists N_x > 0 \quad \text{s.t.} \quad d(x_n, x) < \varepsilon \quad \forall n \geq N_x \end{equation}\] And since \(x_n \rightarrow y\): \[\begin{equation} \exists N_y > 0 \quad \text{s.t.} \quad d(x_n, y) < \varepsilon \quad \forall n \geq N_y \end{equation}\]
Now take \(N = \max(N_x, N_y)\). Then for all \(n \geq N\), applying the triangle inequality yields: \[\begin{equation} 3\varepsilon = d(x,y) \leq d(x, x_n) + d(x_n, y) = d(x_n, x) + d(x_n, y) < \varepsilon + \varepsilon = 2\varepsilon \end{equation}\] This implies \(3\varepsilon < 2\varepsilon\), which is a contradiction since \(\varepsilon > 0\). Therefore, \(x = y\). ◻
Let \((x_n)_{n \geq 0}\) be a sequence in \(X\). We define a subsequence as any sequence of the form \((x_{f(k)})_{k \geq 0}\), where \(f: \mathbb{N} \rightarrow \mathbb{N}\) is a strictly increasing function.
Let \((X,d)\) be a metric space.
Given a subset \(Y \subset X\), we say that \(y \in X\) is an accumulation point (or limit point) of \(Y\) if and only if there exists a sequence \((y_n)_{n \geq 0} \subset Y \setminus \{y\}\) such that \(y_n \rightarrow y\).
Similarly, given a sequence \((x_n)_{n \geq 0}\) in \(X\), we say that \(x\) is an accumulation point of the sequence if and only if there exists a strictly increasing function \(f: \mathbb{N} \rightarrow \mathbb{N}\) such that the subsequence \(x_{f(k)} \rightarrow x\).
Let \((X,d)\) be a metric space and \((x_n)_{n \geq 0} \subset X\) a sequence. Then \((x_n)_{n \geq 0}\) converges to some \(x \in X \iff \forall\) subsequence \((x_{n_k})_{k \geq 0}\) we have \(x_{n_k} \rightarrow x\).
Proof. Recall that if \(x_n \rightarrow x\) and \(x_{n_k} \rightarrow y\), then \(x = y\) (a subsequence limit must match the sequence limit if the sequence converges). We will prove both implications:
(\(\Leftarrow\)) Take as a subsequence the sequence itself (where \(n_k = k\)). Then \((x_{n_k})_{k \geq 0} = (x_n)_{n \geq 0}\), which by assumption implies \(x_n \rightarrow x\).
(\(\Rightarrow\)) Let \(f: \mathbb{N} \rightarrow \mathbb{N}\) be a strictly increasing function, so \(n_k = f(k)\). The goal is to show that \(x_{f(k)} \rightarrow x\).
By definition: \[\begin{equation} x_n \rightarrow x \iff \forall \varepsilon > 0, \exists N > 0 \quad \text{s.t.} \quad d(x_n, x) < \varepsilon \quad \forall n \geq N \end{equation}\] Since \(f\) is strictly increasing, we have \(f(k) \geq k\) for all \(k \in \mathbb{N}\). Therefore, for all \(k \geq N\), it follows that \(f(k) \geq N\). This implies: \[\begin{equation} d(x_{f(k)}, x) < \varepsilon \quad \forall k \geq N \end{equation}\] which means \(x_{f(k)} \rightarrow x\).
◻
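A small sketch of the proposition: extracting a subsequence of a convergent sequence via a strictly increasing \(f\) (here \(f(k) = 2^k\), our choice) preserves the limit:

```python
# The sequence x_n = 1/(n+1) converges to 0; so does any subsequence.

def x(n):
    return 1.0 / (n + 1)

f = lambda k: 2**k          # strictly increasing f: N -> N
sub = [x(f(k)) for k in range(10)]
print(sub[-1])              # x_{512} = 1/513, already close to the limit 0
```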
A sequence \((x_n)_{n \geq 0}\) is a Cauchy sequence if and only if \(\forall \varepsilon > 0, \exists N > 0\) such that \(d(x_n, x_m) < \varepsilon\) for all \(n,m \geq N\).
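The Cauchy condition can be checked empirically for the partial sums of a geometric series (an illustrative sketch in \(\mathbb{R}\) with the absolute-value metric):

```python
# Partial sums s_n = sum_{k=1}^{n} 2**-k form a Cauchy sequence
# (they converge to 1), so tail gaps |s_n - s_m| become arbitrarily small.

def s(n):
    return sum(2.0**-k for k in range(1, n + 1))

# Since |s_n - s_m| <= 2**-min(n, m), beyond N = 20 all gaps are below 1e-6:
N = 20
gaps = [abs(s(n) - s(m)) for n in range(N, N + 10) for m in range(N, N + 10)]
print(max(gaps))
```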
A metric space \((X,d)\) is complete if and only if every Cauchy sequence in \(X\) converges to some limit in \(X\).
Let \((x_m)_{m \geq 0} \subset \mathbb{R}^n\). Then \(x_m \rightarrow x \in \mathbb{R}^n \iff x_{m,i} \rightarrow x_i \quad \forall i \in \{1, \dots , n\}\).
Here, an element of the sequence is written in components as \[\begin{equation} x_m = (x_{m,1}, \dots , x_{m,n}) \end{equation}\] and the limit as \[\begin{equation} x = (x_1, \dots , x_n) \end{equation}\]
Proof.
(\(\Rightarrow\)) Assume \(x_m \rightarrow x\). By definition, \(\forall \varepsilon > 0, \exists N > 0\) such that \(\lVert x_m - x \rVert < \varepsilon\) for all \(m \geq N\). For any component \(i\), this implies: \[\begin{equation} |x_{m,i} - x_i| = \sqrt{|x_{m,i} - x_i|^2} \leq \sqrt{\sum_{j = 1}^n |x_{m,j} - x_j|^2} = \lVert x_m - x \rVert < \varepsilon \end{equation}\] Thus, \(x_{m,i} \rightarrow x_i\).
(\(\Leftarrow\)) Assume \(x_{m,i} \rightarrow x_i\) for all \(i = 1, \dots, n\). Let \(\varepsilon > 0\). For each \(i\), there exists \(N_i > 0\) such that \(|x_{m,i} - x_i| < \frac{\varepsilon}{\sqrt{n}}\) for all \(m \geq N_i\). Let \(N = \max(N_1, \dots, N_n)\). Then for all \(m \geq N\), we have: \[\begin{equation} \lVert x_m - x \rVert = \sqrt{\sum_{i = 1}^n |x_{m,i} - x_i|^2} < \sqrt{\sum_{i=1}^n \left(\frac{\varepsilon}{\sqrt{n}}\right)^2} = \sqrt{n \frac{\varepsilon^2}{n}} = \varepsilon \end{equation}\] Thus, \(x_m \rightarrow x\).
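The equivalence can be observed numerically for a concrete sequence (our example is \(x_m = (1/m, m/(m+1))\), which converges componentwise to \((0, 1)\)):

```python
import math

def euclid_dist(x, y):
    return math.sqrt(sum((a - b)**2 for a, b in zip(x, y)))

# Componentwise convergence (1/m -> 0, m/(m+1) -> 1) and convergence
# in the Euclidean metric go to zero together:
x = (0.0, 1.0)
for m in [10, 100, 1000]:
    xm = (1/m, m/(m + 1))
    print(m, abs(xm[0] - x[0]), abs(xm[1] - x[1]), euclid_dist(xm, x))
```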
◻
The metric space \((\mathbb{R}^n, d_{\text{Euclidean}})\) is complete.
Proof. Given a Cauchy sequence \((x_m)_{m \geq 0}\) in \(\mathbb{R}^n\), we must show that there exists an \(x \in \mathbb{R}^n\) such that \(x_m \rightarrow x\).
Since \((x_m)_{m \geq 0}\) is a Cauchy sequence with respect to the Euclidean norm, and \(|x_{k,i} - x_{m,i}| \leq \lVert x_k - x_m \rVert\) for every component \(i\), each component sequence \((x_{m,i})_{m \geq 0}\) is a Cauchy sequence in \(\mathbb{R}\) for every \(i \in \{1, \dots, n\}\).
Since the field of real numbers \(\mathbb{R}\) is complete, every Cauchy sequence in \(\mathbb{R}\) converges. Therefore, for each \(i\), there exists a real number \(x_i \in \mathbb{R}\) such that \(x_{m,i} \rightarrow x_i\) as \(m \rightarrow \infty\).
Let \(x = (x_1, \dots, x_n) \in \mathbb{R}^n\). By the previous lemma, convergence component by component implies convergence in the Euclidean metric. Thus, \(x_m \rightarrow x\). This proves that every Cauchy sequence in \(\mathbb{R}^n\) converges to a limit in \(\mathbb{R}^n\), meaning the space is complete. ◻