# Norm (mathematics)

In mathematics, a norm is a function from a vector space over the real or complex numbers to the nonnegative real numbers that satisfies certain properties pertaining to scalability and additivity, and takes the value zero only if the input vector is zero. A pseudonorm or seminorm satisfies the same properties, except that it may have a zero value for some nonzero vectors.[1]

The Euclidean norm or 2-norm is a specific norm on a Euclidean vector space that is closely related to the Euclidean distance, and equals the square root of the inner product of a vector with itself.

A vector space on which a norm is defined is called a normed vector space. Similarly, a vector space with a seminorm is called a seminormed vector space.

## Definition

Given a vector space V over a field F of the real numbers ${\displaystyle \mathbb {R} }$ or complex numbers ${\displaystyle \mathbb {C} }$, a norm on V is a nonnegative-valued function p : V${\displaystyle \to \mathbb {R} }$ with the following properties:[2]

For all a ∈ F and all u, v ∈ V,

1. p(u + v) ≤ p(u) + p(v) (being subadditive or satisfying the triangle inequality).
2. p(av) = |a| p(v) (being absolutely homogeneous or absolutely scalable).
3. If p(v) = 0 then v = 0 is the zero vector (being positive definite or being point-separating).
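These three properties can be checked numerically. The following sketch (Python; the sample vectors and the helper name `p` are arbitrary choices for illustration) verifies them for the Euclidean norm on ${\displaystyle \mathbb {R} ^{3}}$:

```python
import math
import random

def p(v):
    """Euclidean norm, used here to illustrate the three norm axioms."""
    return math.sqrt(sum(c * c for c in v))

random.seed(0)
for _ in range(100):
    u = [random.uniform(-5, 5) for _ in range(3)]
    v = [random.uniform(-5, 5) for _ in range(3)]
    a = random.uniform(-5, 5)
    # 1. triangle inequality: p(u + v) <= p(u) + p(v)
    assert p([x + y for x, y in zip(u, v)]) <= p(u) + p(v) + 1e-12
    # 2. absolute homogeneity: p(a v) = |a| p(v)
    assert math.isclose(p([a * x for x in v]), abs(a) * p(v))

# 3. positive definiteness: p(v) = 0 only for the zero vector
assert p([0.0, 0.0, 0.0]) == 0.0
```

A finite random sample cannot prove the axioms, but it catches a function that is not a norm at all.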

A seminorm on V is a function p : V${\displaystyle \to \mathbb {R} }$ with the properties 1 and 2 above.[3] An ultraseminorm or a non-Archimedean seminorm is a seminorm p with the additional property that p(x + y) ≤ max { p(x), p(y) } for all x, y ∈ V.[4]

Every vector space V with seminorm p induces a normed space V/W, called the quotient space, where W is the subspace of V consisting of all vectors v in V with p(v) = 0. The induced norm on V/W is defined by:

p(W + v) = p(v).

Two norms (or seminorms) p and q on a vector space V are equivalent if there exist two real constants c and C, with c > 0, such that

for every vector v in V, one has that: c q(v) ≤ p(v) ≤ C q(v).

## Basic properties

Let X be a vector space over ${\displaystyle \mathbb {F} }$ where ${\displaystyle \mathbb {F} }$ is either the real or complex numbers. Let Br denote the open ball of radius r > 0 in ${\displaystyle \mathbb {F} }$ centered at the origin. Let p : X → [0, ∞) be a seminorm on X. Then,

1. (the second triangle inequality) |p(x) − p(y)| ≤ p(x − y) for all x, y ∈ X.
2. ${\displaystyle p^{-1}\left(B_{r}\right)}$ is an absolutely convex and absorbing set.[5]
3. Every non-negative real scalar multiple of p is a seminorm.[5]
4. If q is another seminorm on X then p + q is a seminorm as is ${\displaystyle x\mapsto \max \left\{p(x),q(x)\right\}}$.[5]
5. For any r > 0, ${\displaystyle r\left(p^{-1}\left(B_{1}\right)\right)=\left\{x\in X:p(x)<r\right\}=p^{-1}\left(B_{r}\right)}$.[5]
6. For any x ∈ X and r > 0, ${\displaystyle x+p^{-1}\left(B_{r}\right)=\left\{y\in X:p(x-y)<r\right\}}$.[6]
7. If X is a TVS and p is a continuous seminorm on X, then the closure of ${\displaystyle p^{-1}\left(B_{r}\right)}$ in X is equal to ${\displaystyle \left\{x\in X:p(x)\leq r\right\}}$.[5]

If p is a seminorm on a topological vector space X, then the following are equivalent:[7]

1. p is continuous.
2. p is continuous at 0;[5]
3. ${\displaystyle \{x\in X:p(x)<1\}}$ is open in X;[5]
4. ${\displaystyle \{x\in X:p(x)\leq 1\}}$ is a closed neighborhood of 0 in X;[5]
5. p is uniformly continuous on X;[5]
6. There exists a continuous seminorm q on X such that p ≤ q.[5]

In particular, if (X, p) is a semi-normed space then a seminorm q on X is continuous if and only if q is dominated by a positive scalar multiple of p.[5] If p and q are seminorms on X, then p ≤ q if and only if q(x) ≤ 1 implies p(x) ≤ 1.[4]

Extending seminorms: If M is a vector subspace of X, p is a seminorm on M, and r is a seminorm on X such that p${\displaystyle \leq r{\big \vert }_{M}}$, then there exists a seminorm q on X such that ${\displaystyle q{\big \vert }_{M}=p}$ and q ≤ r.[4]

## Normability

A topological vector space (TVS) is called normable (seminormable) if the topology of the space can be induced by a norm (seminorm). Normability of topological vector spaces is characterized by Kolmogorov's normability criterion.

If X is a Hausdorff locally convex TVS then the following are equivalent:

1. X is normable.
2. X has a bounded neighborhood of the origin.
3. the strong dual ${\displaystyle X_{b}^{\prime }}$ of X is normable.[8]
4. the strong dual ${\displaystyle X_{b}^{\prime }}$ of X is metrizable.[8]

Furthermore, X is finite dimensional if and only if ${\displaystyle X_{\sigma }^{\prime }}$ is normable (here ${\displaystyle X_{\sigma }^{\prime }}$ denotes ${\displaystyle X^{\prime }}$ endowed with the weak-* topology).

The product of infinitely many seminormable spaces is again seminormable if and only if all but finitely many of these spaces are trivial (i.e. 0-dimensional).[9]

## Notation

If a norm p : V${\displaystyle \to \mathbb {R} }$ is given on a vector space V then the norm of a vector v ∈ V is usually denoted by enclosing it within double vertical lines: ‖v‖ = p(v). Such notation is also sometimes used if p is only a seminorm.

For the length of a vector in Euclidean space (which is an example of a norm, as explained below), the notation |v| with single vertical lines is also widespread.

In Unicode, the code point of the "double vertical line" character ‖ is U+2016. The double vertical line should not be confused with the "parallel to" symbol, Unicode U+2225 ( ∥ ), which is used in geometry to signify parallel lines and in network theory, various fields of engineering and applied electronics as parallel addition operator. This is usually not a problem because the former is used in parenthesis-like fashion, whereas the latter is used as an infix operator. The double vertical line used here should also not be confused with the symbol used to denote lateral clicks in linguistics, Unicode U+01C1 ( ǁ ). The single vertical line | is called "vertical line" in Unicode and its code point is U+007C.

In LaTeX and related markup languages, the macro \| is often used to denote a norm.

## Examples

• All norms are seminorms.
• The trivial seminorm has p(x) = 0 for all x in V.
• Every linear form f on a vector space defines a seminorm by x → |f(x)|.
• If s is a real-valued sublinear function on X, then the map ${\displaystyle p(x):=\max\{s(x),s(-x)\}}$ defines a seminorm on X called the seminorm associated with s.[10]
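The third example above can be made concrete: the map x ↦ |f(x)| induced by a linear form f is subadditive and absolutely homogeneous, but vanishes on the kernel of f, so it is a seminorm that is not a norm. A short Python sketch (the particular linear form is a hypothetical choice for illustration):

```python
def f(x):
    # a hypothetical linear form on R^2: f(x1, x2) = x1 - 2*x2
    return x[0] - 2 * x[1]

def p(x):
    # seminorm induced by the linear form f
    return abs(f(x))

# p is not a norm: it vanishes on the nonzero kernel vector (2, 1)
assert p((2.0, 1.0)) == 0.0
# but p is nonzero off the kernel
assert p((1.0, 0.0)) == 1.0
```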

### Absolute-value norm

${\displaystyle \left\|x\right\|=\left|x\right|}$

is a norm on the one-dimensional vector spaces formed by the real or complex numbers.

Any norm p on a one-dimensional vector space V is equivalent (up to scaling) to the absolute value norm, meaning that there is a norm-preserving isomorphism of vector spaces ${\displaystyle f\colon K{\overset {\sim }{\to }}V,}$ where K is either ${\displaystyle \mathbb {R} }$ or ${\displaystyle \mathbb {C} ,}$ and norm-preserving means that ${\displaystyle \left|x\right|=p(f(x))}$. This isomorphism is given by sending ${\displaystyle 1\in K}$ to a vector of norm 1, which exists since such a vector is obtained by multiplying any nonzero vector by the inverse of its norm.

### Euclidean norm

A Euclidean vector space E is an inner product space of finite dimension n over the reals. The square root of the inner product of a vector with itself is a norm, called the Euclidean norm:

${\displaystyle \left\|{\boldsymbol {x}}\right\|:={\sqrt {{\boldsymbol {x}}\cdot {\boldsymbol {x}}}}.}$

The choice of an orthonormal basis allows identifying E with ${\displaystyle \mathbb {R} ^{n}}$ by mapping vectors to their coordinate vectors. Under this identification, the norm of a vector x = (x1, x2, ..., xn) is

${\displaystyle \left\|{\boldsymbol {x}}\right\|:={\sqrt {x_{1}^{2}+\cdots +x_{n}^{2}}}.}$

It is a consequence of the Pythagorean theorem that this norm is the Euclidean distance from the origin to the point x.

The Euclidean norm is by far the most commonly used norm on ${\displaystyle \mathbb {R} ^{n},}$ and is the L2 norm of this vector space (the p-norm with p = 2). There are other norms on this vector space, as shown below. However, all these norms are equivalent in the sense that they all define the same topology.
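The coordinate formula is immediate to compute; the following minimal sketch (Python, with an arbitrary sample vector chosen so the result is an integer) illustrates it:

```python
import math

# Euclidean norm of x = (3, 4, 12) via the coordinate formula
x = [3.0, 4.0, 12.0]
norm = math.sqrt(sum(c * c for c in x))

# sqrt(9 + 16 + 144) = sqrt(169) = 13
assert norm == 13.0
```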

#### Euclidean norm of complex numbers and vectors

The Euclidean norm of a complex number is its absolute value (also called the modulus), provided the complex plane is identified with the Euclidean plane ${\displaystyle \mathbb {R} ^{2}}$. Identifying the complex number x + i y with the vector (x, y) in the Euclidean plane makes the quantity ${\displaystyle {\sqrt {x^{2}+y^{2}}}}$ (as first suggested by Euler) the Euclidean norm associated with the complex number.

On an n-dimensional complex space ${\displaystyle \mathbb {C} ^{n}}$ the most common norm is

${\displaystyle \left\|{\boldsymbol {z}}\right\|:={\sqrt {\left|z_{1}\right|^{2}+\cdots +\left|z_{n}\right|^{2}}}={\sqrt {z_{1}{\bar {z}}_{1}+\cdots +z_{n}{\bar {z}}_{n}}}.}$

In this case the norm can be expressed as the square root of the inner product of the vector and itself:

${\displaystyle \left\|{\boldsymbol {x}}\right\|:={\sqrt {{\boldsymbol {x}}^{H}~{\boldsymbol {x}}}},}$

where ${\displaystyle {\boldsymbol {x}}}$ is represented as a column vector ([x1; x2; ...; xn]), and ${\displaystyle {\boldsymbol {x}}^{H}}$ denotes its conjugate transpose.

This formula is valid for any inner product space, including Euclidean and complex spaces. For complex spaces, the inner product is equivalent to the complex dot product. Hence, in this case the formula can be also written with the following notation:

${\displaystyle \left\|{\boldsymbol {x}}\right\|:={\sqrt {{\boldsymbol {x}}\cdot {\boldsymbol {x}}}}.}$

### Taxicab norm or Manhattan norm

${\displaystyle \left\|{\boldsymbol {x}}\right\|_{1}:=\sum _{i=1}^{n}\left|x_{i}\right|.}$

The name relates to the distance a taxi has to drive in a rectangular street grid to get from the origin to the point x.

The set of vectors whose 1-norm is a given constant forms the surface of a cross polytope of dimension equal to the dimension of the space minus 1. The Taxicab norm is also called the ${\displaystyle \ell }$1 norm. The distance derived from this norm is called the Manhattan distance or ${\displaystyle \ell }$1 distance.

The 1-norm is simply the sum of the absolute values of the components of the vector.

In contrast,

${\displaystyle \sum _{i=1}^{n}x_{i}}$

is not a norm because it may yield negative results.
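The contrast between the two sums can be seen directly (Python sketch with arbitrary sample data):

```python
x = [1.0, -2.0, 3.0]

taxicab = sum(abs(c) for c in x)  # 1 + 2 + 3 = 6, the 1-norm
signed = sum(x)                   # 1 - 2 + 3 = 2, without absolute values

assert taxicab == 6.0
# the signed sum is not a norm: it can be negative
assert sum([-1.0, -2.0]) < 0
```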

### p-norm

Let p ≥ 1 be a real number. The ${\displaystyle p}$-norm (also called ${\displaystyle \ell _{p}}$-norm) of vector ${\displaystyle \mathbf {x} =(x_{1},\ldots ,x_{n})}$ is

${\displaystyle \left\|\mathbf {x} \right\|_{p}:={\bigg (}\sum _{i=1}^{n}\left|x_{i}\right|^{p}{\bigg )}^{1/p}.}$

For p = 1 we get the taxicab norm, for p = 2 we get the Euclidean norm, and as p approaches ${\displaystyle \infty }$ the p-norm approaches the infinity norm or maximum norm:

${\displaystyle \left\|\mathbf {x} \right\|_{\infty }:=\max _{i}\left|x_{i}\right|.}$

The p-norm is related to the generalized mean or power mean.
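A small Python sketch illustrates the special cases p = 1 and p = 2 and the convergence to the maximum norm as p grows (the sample vector is an arbitrary choice):

```python
def p_norm(x, p):
    """The p-norm of a vector x for a real p >= 1."""
    return sum(abs(c) ** p for c in x) ** (1.0 / p)

x = [3.0, -4.0]
assert p_norm(x, 1) == 7.0                 # taxicab norm
assert abs(p_norm(x, 2) - 5.0) < 1e-12     # Euclidean norm
# as p grows, the p-norm approaches max_i |x_i| = 4
assert abs(p_norm(x, 100) - 4.0) < 0.1
```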

This definition is still of some interest for 0 < p < 1, but the resulting function does not define a norm,[11] because it violates the triangle inequality. What is true for this case of 0 < p < 1, even in the measurable analog, is that the corresponding Lp class is a vector space, and it is also true that the function

${\displaystyle \int _{X}\left|f(x)-g(x)\right|^{p}~\mathrm {d} \mu }$

(without pth root) defines a distance that makes Lp(X) into a complete metric topological vector space. These spaces are of great interest in functional analysis, probability theory, and harmonic analysis. However, outside trivial cases, this topological vector space is not locally convex and has no continuous nonzero linear forms. Thus the topological dual space contains only the zero functional.

The partial derivative of the p-norm is given by

${\displaystyle {\frac {\partial }{\partial x_{k}}}\left\|\mathbf {x} \right\|_{p}={\frac {x_{k}\left|x_{k}\right|^{p-2}}{\left\|\mathbf {x} \right\|_{p}^{p-1}}}.}$

The derivative with respect to x, therefore, is

${\displaystyle {\frac {\partial \|\mathbf {x} \|_{p}}{\partial \mathbf {x} }}={\frac {\mathbf {x} \circ |\mathbf {x} |^{p-2}}{\|\mathbf {x} \|_{p}^{p-1}}}.}$

where ${\displaystyle \circ }$ denotes Hadamard product and ${\displaystyle |\cdot |}$ is used for absolute value of each component of the vector.

For the special case of p = 2, this becomes

${\displaystyle {\frac {\partial }{\partial x_{k}}}\left\|\mathbf {x} \right\|_{2}={\frac {x_{k}}{\left\|\mathbf {x} \right\|_{2}}},}$

or

${\displaystyle {\frac {\partial }{\partial \mathbf {x} }}\left\|\mathbf {x} \right\|_{2}={\frac {\mathbf {x} }{\left\|\mathbf {x} \right\|_{2}}}.}$
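The p = 2 derivative formula can be checked against a central finite difference; the following sketch (Python, arbitrary sample point and step size) does so componentwise:

```python
import math

def norm2(x):
    return math.sqrt(sum(c * c for c in x))

def grad_norm2(x):
    # d||x||_2 / dx_k = x_k / ||x||_2, valid for x != 0
    n = norm2(x)
    return [c / n for c in x]

x = [1.0, 2.0, -2.0]
h = 1e-6
for k in range(3):
    xp = list(x); xp[k] += h
    xm = list(x); xm[k] -= h
    fd = (norm2(xp) - norm2(xm)) / (2 * h)  # central difference
    assert abs(fd - grad_norm2(x)[k]) < 1e-6
```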

### Maximum norm (special case of: infinity norm, uniform norm, or supremum norm)


If ${\displaystyle \mathbf {x} =(x_{1},x_{2},\ldots ,x_{n})}$ is a vector, then:

${\displaystyle \left\|\mathbf {x} \right\|_{\infty }:=\max \left(\left|x_{1}\right|,\ldots ,\left|x_{n}\right|\right).}$

The set of vectors whose infinity norm is a given constant, c, forms the surface of a hypercube with edge length 2c.

### Zero norm

In probability and functional analysis, the zero norm induces a complete metric topology for the space of measurable functions and for the F-space of sequences with F-norm ${\displaystyle (x_{n})\mapsto \sum _{n}{2^{-n}|x_{n}|/(1+|x_{n}|)}}$.[12] Here we mean by F-norm some real-valued function ${\displaystyle \lVert \ \cdot \ \rVert }$ on an F-space with distance d, such that ${\displaystyle \lVert x\rVert =d(x,0)}$. The F-norm described above is not a norm in the usual sense because it lacks the required homogeneity property.

#### Hamming distance of a vector from zero

In metric geometry, the discrete metric takes the value one for distinct points and zero otherwise. When applied coordinate-wise to the elements of a vector space, the discrete distance defines the Hamming distance, which is important in coding and information theory. In the field of real or complex numbers, the distance of the discrete metric from zero is not homogeneous in the non-zero point; indeed, the distance from zero remains one as its non-zero argument approaches zero. However, the discrete distance of a number from zero does satisfy the other properties of a norm, namely the triangle inequality and positive definiteness. When applied component-wise to vectors, the discrete distance from zero behaves like a non-homogeneous "norm", which counts the number of non-zero components in its vector argument; again, this non-homogeneous "norm" is discontinuous.

In signal processing and statistics, David Donoho referred to the zero "norm" with quotation marks. Following Donoho's notation, the zero "norm" of x is simply the number of non-zero coordinates of x, or the Hamming distance of the vector from zero. When this "norm" is localized to a bounded set, it is the limit of p-norms as p approaches 0. Of course, the zero "norm" is not truly a norm, because it is not positive homogeneous. Indeed, it is not even an F-norm in the sense described above, since it is discontinuous, jointly and severally, with respect to the scalar argument in scalar–vector multiplication and with respect to its vector argument. Abusing terminology, some engineers omit Donoho's quotation marks and inappropriately call the number-of-nonzeros function the L0 norm, echoing the notation for the Lebesgue space of measurable functions.
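The counting behavior and the failure of homogeneity are easy to exhibit (Python sketch; the helper name `zero_norm` is a label chosen here, not standard API):

```python
def zero_norm(x):
    """Donoho's zero "norm": the number of nonzero components.
    Not a true norm (see text)."""
    return sum(1 for c in x if c != 0)

x = [0.0, 3.0, 0.0, -1.0]
assert zero_norm(x) == 2

# fails absolute homogeneity: scaling by 5 should multiply a norm by 5,
# but the count of nonzero entries is unchanged
assert zero_norm([5 * c for c in x]) == zero_norm(x)
```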

### Other norms

Other norms on ${\displaystyle \mathbb {R} ^{n}}$ can be constructed by combining the above; for example

${\displaystyle \left\|x\right\|:=2\left|x_{1}\right|+{\sqrt {3\left|x_{2}\right|^{2}+\max(\left|x_{3}\right|,2\left|x_{4}\right|)^{2}}}}$

is a norm on ${\displaystyle \mathbb {R} ^{4}}$.
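One can spot-check the triangle inequality for this combined norm on random vectors (Python sketch; a finite sample is only a sanity check, not a proof):

```python
import math
import random

def combined(x):
    # the combined norm on R^4 from the text:
    # 2|x1| + sqrt(3|x2|^2 + max(|x3|, 2|x4|)^2)
    return 2 * abs(x[0]) + math.sqrt(
        3 * x[1] ** 2 + max(abs(x[2]), 2 * abs(x[3])) ** 2
    )

random.seed(1)
for _ in range(1000):
    u = [random.uniform(-1, 1) for _ in range(4)]
    v = [random.uniform(-1, 1) for _ in range(4)]
    s = [a + b for a, b in zip(u, v)]
    # triangle inequality, with a tiny slack for floating point
    assert combined(s) <= combined(u) + combined(v) + 1e-12
```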

For any norm and any injective linear transformation A we can define a new norm of x, equal to

${\displaystyle \left\|Ax\right\|.}$

In 2D, with A a rotation by 45° and a suitable scaling, this changes the taxicab norm into the maximum norm. In 2D, each A applied to the taxicab norm, up to inversion and interchanging of axes, gives a different unit ball: a parallelogram of a particular shape, size, and orientation. In 3D this is similar but different for the 1-norm (octahedrons) and the maximum norm (prisms with parallelogram base).
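The 2D claim can be verified concretely: taking A to be a rotation by 45° scaled by 1/√2 (one concrete choice consistent with the text), the taxicab norm of Ax equals the maximum norm of x, by the identity |a − b| + |a + b| = 2 max(|a|, |b|):

```python
import math
import random

def taxicab(x):
    return abs(x[0]) + abs(x[1])

def max_norm(x):
    return max(abs(x[0]), abs(x[1]))

def A(x):
    # rotation by 45 degrees composed with scaling by 1/sqrt(2):
    # ((x1 - x2)/2, (x1 + x2)/2)
    return ((x[0] - x[1]) / 2.0, (x[0] + x[1]) / 2.0)

random.seed(2)
for _ in range(1000):
    x = (random.uniform(-3, 3), random.uniform(-3, 3))
    assert math.isclose(taxicab(A(x)), max_norm(x), abs_tol=1e-12)
```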

There are examples of norms that are not defined by "entrywise" formulas. For instance, the Minkowski functional of a centrally symmetric convex body in ${\displaystyle \mathbb {R} ^{n}}$ (centered at zero) defines a norm on ${\displaystyle \mathbb {R} ^{n}}$.

All the above formulas also yield norms on ${\displaystyle \mathbb {C} ^{n}}$ without modification.

There are also norms on spaces of matrices (with real or complex entries), the so-called matrix norms.

### Infinite-dimensional case

The generalization of the above norms to an infinite number of components leads to the ${\displaystyle \ell ^{p}}$ and ${\displaystyle L^{p}}$ spaces, with norms

${\displaystyle \left\|x\right\|_{p}={\bigg (}\sum _{i\in \mathbb {N} }\left|x_{i}\right|^{p}{\bigg )}^{1/p}{\text{ and }}\ \left\|f\right\|_{p,X}={\bigg (}\int _{X}\left|f(x)\right|^{p}~\mathrm {d} x{\bigg )}^{1/p}}$

for complex-valued sequences and functions on ${\displaystyle X\subset \mathbb {R} }$ respectively, which can be further generalized (see Haar measure).

Any inner product induces in a natural way the norm ${\displaystyle \left\|x\right\|:={\sqrt {\langle x,x\rangle }}.}$

Other examples of infinite-dimensional normed vector spaces can be found in the Banach space article.

## Properties

Illustrations of unit circles in different norms.

The concept of unit circle (the set of all vectors of norm 1) is different in different norms: for the 1-norm the unit circle is a square, for the 2-norm (Euclidean norm) it is the well-known unit circle, while for the infinity norm it is a different square. For any p-norm it is a superellipse (with congruent axes). See the accompanying illustration. Due to the definition of the norm, the unit circle must be convex and centrally symmetric (therefore, for example, the unit ball may be a rectangle but cannot be a triangle, and ${\displaystyle p\geq 1}$ for a p-norm).

In terms of the vector space, the seminorm defines a topology on the space, and this is a Hausdorff topology precisely when the seminorm can distinguish between distinct vectors, which is again equivalent to the seminorm being a norm. The topology thus defined (by either a norm or a seminorm) can be understood either in terms of sequences or open sets. A sequence of vectors ${\displaystyle \{v_{n}\}}$ is said to converge in norm to ${\displaystyle v}$ if ${\displaystyle \left\|v_{n}-v\right\|\rightarrow 0}$ as ${\displaystyle n\to \infty }$. Equivalently, the topology consists of all sets that can be represented as a union of open balls.

Two norms ‖•‖α and ‖•‖β on a vector space V are called equivalent if there exist positive real numbers C and D such that for all x in V

${\displaystyle C\left\|x\right\|_{\alpha }\leq \left\|x\right\|_{\beta }\leq D\left\|x\right\|_{\alpha }.}$

For instance, on ${\displaystyle \mathbf {C} ^{n}}$, if p > r > 0, then

${\displaystyle \left\|x\right\|_{p}\leq \left\|x\right\|_{r}\leq n^{(1/r-1/p)}\left\|x\right\|_{p}.}$

In particular,

${\displaystyle \left\|x\right\|_{2}\leq \left\|x\right\|_{1}\leq {\sqrt {n}}\left\|x\right\|_{2}}$
${\displaystyle \left\|x\right\|_{\infty }\leq \left\|x\right\|_{2}\leq {\sqrt {n}}\left\|x\right\|_{\infty }}$
${\displaystyle \left\|x\right\|_{\infty }\leq \left\|x\right\|_{1}\leq n\left\|x\right\|_{\infty },}$

i.e.,

${\displaystyle \left\|x\right\|_{\infty }\leq \left\|x\right\|_{2}\leq \left\|x\right\|_{1}\leq {\sqrt {n}}\left\|x\right\|_{2}\leq n\left\|x\right\|_{\infty }.}$
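This chain of inequalities can be spot-checked numerically (Python sketch with random sample vectors in n = 5 dimensions; a finite sample illustrates, but does not prove, the inequalities):

```python
import math
import random

def norms(x):
    """Return the 1-, 2-, and infinity norms of x."""
    n1 = sum(abs(c) for c in x)
    n2 = math.sqrt(sum(c * c for c in x))
    ninf = max(abs(c) for c in x)
    return n1, n2, ninf

random.seed(3)
n = 5
eps = 1e-9  # slack for floating-point rounding
for _ in range(1000):
    x = [random.uniform(-2, 2) for _ in range(n)]
    n1, n2, ninf = norms(x)
    assert ninf <= n2 + eps                      # ||x||_inf <= ||x||_2
    assert n2 <= n1 + eps                        # ||x||_2  <= ||x||_1
    assert n1 <= math.sqrt(n) * n2 + eps         # ||x||_1  <= sqrt(n)||x||_2
    assert math.sqrt(n) * n2 <= n * ninf + eps   # sqrt(n)||x||_2 <= n||x||_inf
```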

If the vector space is a finite-dimensional real or complex one, all norms are equivalent. On the other hand, in the case of infinite-dimensional vector spaces, not all norms are equivalent.

Equivalent norms define the same notions of continuity and convergence and for many purposes do not need to be distinguished. To be more precise, the uniform structures defined by two equivalent norms on a vector space are uniformly isomorphic.

Every (semi)-norm is a sublinear function, which implies that every norm is a convex function. As a result, finding a global optimum of a norm-based objective function is often tractable.

Given a finite family of seminorms pi on a vector space the sum

${\displaystyle p(x):=\sum _{i=0}^{n}p_{i}(x)}$

is again a seminorm.

For any norm p on a vector space V, we have that for all u, v ∈ V:

p(u ± v) ≥ |p(u) − p(v)|.

Proof: Applying the triangle inequality to both ${\displaystyle p(u-0)}$ and ${\displaystyle p(v-0)}$:

${\displaystyle p(u-0)\leq p(u-v)+p(v-0)\Rightarrow p(u-v)\geq p(u)-p(v)}$
${\displaystyle p(u-0)\leq p(u+v)+p(0-v)\Rightarrow p(u+v)\geq p(u)-p(v)}$
${\displaystyle p(v-0)\leq p(u-v)+p(u-0)\Rightarrow p(u-v)\geq p(v)-p(u)}$
${\displaystyle p(v-0)\leq p(u+v)+p(0-u)\Rightarrow p(u+v)\geq p(v)-p(u)}$

Thus, p(u ± v) ≥ |p(u) − p(v)|.
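The reverse triangle inequality just proved can be spot-checked numerically for the Euclidean norm (Python sketch with random sample vectors):

```python
import math
import random

def p(v):
    """Euclidean norm, standing in for a general norm."""
    return math.sqrt(sum(c * c for c in v))

random.seed(4)
for _ in range(1000):
    u = [random.uniform(-5, 5) for _ in range(3)]
    v = [random.uniform(-5, 5) for _ in range(3)]
    diff = [a - b for a, b in zip(u, v)]
    summ = [a + b for a, b in zip(u, v)]
    bound = abs(p(u) - p(v))
    # p(u - v) >= |p(u) - p(v)| and p(u + v) >= |p(u) - p(v)|
    assert p(diff) >= bound - 1e-12
    assert p(summ) >= bound - 1e-12
```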

If ${\displaystyle X}$ and ${\displaystyle Y}$ are normed spaces and ${\displaystyle u:X\to Y}$ is a continuous linear map, then the norm of ${\displaystyle u}$ and the norm of the transpose of ${\displaystyle u}$ are equal.[13]

For the Lp norms, we have Hölder's inequality[14]

${\displaystyle \left|\langle x,y\rangle \right|\leq \left\|x\right\|_{p}\left\|y\right\|_{q}\qquad {\frac {1}{p}}+{\frac {1}{q}}=1.}$

A special case of this is the Cauchy–Schwarz inequality:[14]

${\displaystyle \left|\langle x,y\rangle \right|\leq \left\|x\right\|_{2}\left\|y\right\|_{2}.}$
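Both inequalities can be spot-checked on random vectors; the sketch below (Python) uses the conjugate pair p = 3, q = 3/2, for which 1/p + 1/q = 1, along with the Cauchy–Schwarz case p = q = 2:

```python
import random

def p_norm(x, p):
    return sum(abs(c) ** p for c in x) ** (1.0 / p)

random.seed(5)
for _ in range(1000):
    x = [random.uniform(-2, 2) for _ in range(4)]
    y = [random.uniform(-2, 2) for _ in range(4)]
    dot = abs(sum(a * b for a, b in zip(x, y)))
    # Hölder's inequality with p = 3, q = 3/2 (1/3 + 2/3 = 1)
    assert dot <= p_norm(x, 3) * p_norm(y, 1.5) + 1e-9
    # Cauchy–Schwarz inequality (p = q = 2)
    assert dot <= p_norm(x, 2) * p_norm(y, 2) + 1e-9
```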

## Classification of seminorms: absolutely convex absorbing sets

All seminorms on a vector space V can be classified in terms of absolutely convex absorbing subsets A of V. To each such subset corresponds a seminorm pA called the gauge of A, defined as

pA(x) := inf{α : α > 0, xαA}

with the property that

{x : pA(x) < 1} ⊆ A ⊆ {x : pA(x) ≤ 1}.
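The gauge can be computed numerically by bisection, since for an absolutely convex absorbing A, membership of x/α in A is monotone in α. The sketch below (Python; the helper names `gauge` and `ball` are choices made here for illustration) recovers the Euclidean norm as the gauge of the Euclidean unit ball:

```python
def gauge(contains, x):
    """Minkowski gauge p_A(x) = inf{a > 0 : x in a*A}, found by bisection.
    `contains` is the membership test of an absolutely convex absorbing set A."""
    lo, hi = 0.0, 1e6  # assumes p_A(x) < 1e6 for the sample input
    for _ in range(200):
        mid = (lo + hi) / 2
        if contains([c / mid for c in x]):
            hi = mid  # x/mid in A, so p_A(x) <= mid
        else:
            lo = mid
    return hi

# A = closed Euclidean unit ball; its gauge is the Euclidean norm
ball = lambda v: sum(c * c for c in v) <= 1.0
assert abs(gauge(ball, [3.0, 4.0]) - 5.0) < 1e-6
```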

Conversely:

Any locally convex topological vector space has a local basis consisting of absolutely convex sets. A common method to construct such a basis is to use a family (p) of seminorms p that separates points: the collection of all finite intersections of sets {p < 1/n} turns the space into a locally convex topological vector space so that every p is continuous.

Such a method is used to design weak and weak* topologies.

Norm case: suppose now that the family (p) contains a single seminorm p. Since (p) is separating, p is a norm, and A = {p < 1} is its open unit ball. Then A is an absolutely convex bounded neighbourhood of 0, and p = pA is continuous.

The converse is due to Andrey Kolmogorov: any locally convex and locally bounded topological vector space is normable. Precisely: if V is an absolutely convex bounded neighbourhood of 0, the gauge gV (so that V = {gV < 1}) is a norm.

## Generalizations

There are several generalizations of norms and semi-norms. If p satisfies absolute homogeneity but, in place of subadditivity, we require only that

 2′. there is a ${\displaystyle b\geq 1}$ such that ${\displaystyle p(u+v)\leq b(p(u)+p(v))}$ for all ${\displaystyle u,v\in V}$,

then p is called a quasi-seminorm, and the smallest value of b for which this holds is called the multiplier of p; if in addition p separates points, then it is called a quasi-norm.

On the other hand, if p satisfies the triangle inequality but in place of absolute homogeneity we require that

 1′. there exists a k such that ${\displaystyle 0<k\leq 1}$ and, for all ${\displaystyle v\in V}$ and scalars ${\displaystyle \lambda }$: ${\displaystyle p(\lambda v)=\left|\lambda \right|^{k}p(v)}$,

then p is called a k-seminorm.

We have the following relationship between quasi-seminorms and k-seminorms:

Suppose that q is a quasi-seminorm on a vector space X with multiplier b. If ${\displaystyle 0<k\leq \left(\log _{2}2b\right)^{-1}}$, then there exists a k-seminorm p on X equivalent to q.

The concept of norm in composition algebras does not share the usual properties of a norm. A composition algebra (A, *, N) consists of an algebra over a field A, an involution *, and a quadratic form N, which is called the "norm". In several cases N is an isotropic quadratic form so that A has at least one null vector, contrary to the separation of points required for the usual norm discussed in this article.

## Notes

1. ^ Knapp, A.W. (2005). Basic Real Analysis. Birkhäuser. p. [1]. ISBN 978-0-817-63250-2.
2. ^ Pugh, C.C. (2015). Real Mathematical Analysis. Springer. p. 28. ISBN 978-3-319-17770-0; Prugovečki, E. (1981). Quantum Mechanics in Hilbert Space. p. 20.
3. ^ Rudin, W. (1991). Functional Analysis. p. 25.
4. ^ a b c Narici 2011, pp. 149–153.
5. ^ Narici 2011, pp. 116–128.
6. ^ Narici 2011, pp. 116–128.
7. ^ Schaefer 1999, p. 40.
8. ^ a b Treves 2006, pp. 136–149, 195–201, 240–252, 335–390, 420–433.
9. ^ Narici 2011, pp. 156–175.
10. ^ Narici 2011, pp. 120–121.
11. ^ Except in ${\displaystyle \mathbb {R} ^{1}}$, where it coincides with the Euclidean norm, and ${\displaystyle \mathbb {R} ^{0}}$, where it is trivial.
12. ^ Rolewicz, Stefan (1987), Functional analysis and control theory: Linear systems, Mathematics and its Applications (East European Series), 29 (Translated from the Polish by Ewa Bednarczuk ed.), Dordrecht; Warsaw: D. Reidel Publishing Co.; PWN—Polish Scientific Publishers, pp. xvi, 524, doi:10.1007/978-94-015-7758-8, ISBN 90-277-2186-6, MR 0920371, OCLC 13064804
13. ^ Treves 2006, pp. 242–243.
14. ^ a b Golub, Gene; Van Loan, Charles F. (1996). Matrix Computations (Third ed.). Baltimore: The Johns Hopkins University Press. p. 53. ISBN 0-8018-5413-X.