Vectors are everywhere and play a fundamental role in many branches of science. Though a vector has a formal definition — it is an element of a vector space — many still regard a vector as an array of numbers with a certain magnitude and direction. This classical view, on one hand, doesn't give the full picture of a **true** vector; on the other hand, it attributes to a vector more than it should have. Sometimes intuition makes a concept easier to understand, and sometimes it makes different concepts look the same. In this post, we will discuss a few sorts of "vectors" and show that, although they all satisfy the definition of a vector — which is why many don't distinguish them — they behave so differently under a change of basis that they deserve different names.

# 1. What is vector

When an object is to be defined, it is usually necessary to give a background in which the object is unambiguous, meaningful and well-defined. Just as in biology a species is defined according to its domain, kingdom, phylum, class, order, family and genus, in a programming language a variable is defined according to its environment, scope and type. The background confines the object to a proper extension, so that it is neither too narrow to contain enough information nor too general to admit the required properties. Here, a vector is nothing but an object in its background, called a vector space.

A **vector space** $V$ over a field $F$ is a specific algebraic structure characterized by eight axioms. Elements of $V$, called vectors, are denoted by $u, v, w$, etc., and elements of the field $F$, called scalars (or numbers), are denoted by $a, b, c$, etc. Among the axioms:

- $\exists\, 0 \in V$ s.t. $v + 0 = v$ for each $v \in V$
- For each $v \in V$, $\exists\, {-v} \in V$ s.t. $v + (-v) = 0$
- $1v = v$, where $1$ is the multiplicative identity of $F$

From the perspective of universal algebra, a vector space has one 0-ary operation (the constant $0$), one unary operation (additive inverse) and two binary operations (vector addition, scalar multiplication), together with several equations such as the commutative, associative and distributive laws. The first four axioms state nothing more than that $(V, +)$ is essentially an Abelian group. The fifth axiom, usually ignored by beginners, is necessary to bring the external scalar field $F$ into the collection of vectors. Together with the remaining axioms guaranteeing the compatibility of scalar operations (addition, multiplication) with vector operations (vector addition, scalar multiplication of a vector), a vector space furnishes the structure of a left $F$-module. Note that I include a new rule in the definition to display a very implicit yet natural identification of scalar multiplication from the left with scalar multiplication from the right, namely $av = va$: we don't distinguish a left $F$-module from a right $F$-module. The feasibility of this identification depends on the commutativity of multiplication in the field, since otherwise the chain $v(ab) = (ab)v = a(bv) = a(vb) = (vb)a = v(ba)$, with $ab \neq ba$, constructs a contradiction. As we have seen, even the most natural identification may fail when a simple condition doesn't hold. We should be careful with every intuition before it is rigorously verified from the axioms and established theorems. We will encounter more natural identifications in the following sections, through which readers may gradually feel how unreliable intuition can be.
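The axioms can be spot-checked mechanically. Below is a minimal sketch (my own illustration, not from the original text) modeling $\mathbb{R}^2$ as a vector space over the reals and verifying a few of the eight axioms on sample vectors:

```python
# R^2 as a vector space over the reals; vectors are pairs of floats.

def add(u, v):
    """Vector addition, componentwise."""
    return (u[0] + v[0], u[1] + v[1])

def smul(a, v):
    """Scalar multiplication from the left."""
    return (a * v[0], a * v[1])

u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
zero = (0.0, 0.0)

assert add(u, v) == add(v, u)                     # commutativity
assert add(add(u, v), w) == add(u, add(v, w))     # associativity
assert add(u, zero) == u                          # additive identity
assert add(u, smul(-1.0, u)) == zero              # additive inverse
assert smul(1.0, u) == u                          # 1v = v
assert smul(2.0, add(u, v)) == add(smul(2.0, u), smul(2.0, v))  # distributivity
```

Of course, passing on sample vectors does not prove the axioms hold universally; it only illustrates what the equations assert.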

The abstract definition of a vector space is consistent with the classical view. If we see vectors as arrows in Euclidean (flat) space (flatness is necessary for parallel transport to preserve a vector), then the sum $u + v$ can be viewed as the arrow starting from the initial point of $u$ and terminating at the end point of $v$, when the end point of $u$ coincides with the initial point of $v$. Scalar multiplication can be viewed as a stretch (prolongation or contraction) of the vector, possibly reversing the direction of the arrow. Under this view, the associative law states that the resultant of vector addition depends only on the initial point of the first vector and the end point of the last. The commutative law is simply the parallelogram law. There is a special arrow, the zero vector $0$, which starts and terminates at the same point. For each arrow, there is an opposite arrow which simply exchanges the initial and end points.

However, readers should be reminded that, in contrast with the classical view, a vector need not be associated with a magnitude until a norm is defined, nor with a direction relative to a fixed vector until a non-degenerate inner product is defined. Moreover, a vector is not necessarily an array of numbers, called coordinates. For instance, it is readily verified that the set of all single-variable polynomials (each of finite degree) forms a vector space, in which vector addition is polynomial addition (note that it would not be legal to adopt polynomial multiplication as vector addition, since most polynomials lack multiplicative inverses). In this vector space, polynomials of arbitrarily high degree cannot all be represented by arrays of one fixed finite size. In spite of this, we can resolve the vector space into a direct sum of countably many finite-dimensional subspaces, each of which admits coordinate representations. To consider another example, the collection of all smooth functions on the real line forms a vector space, in which vector addition is function addition. Now a real number, in place of an index from a subset of the natural numbers, is used to index the "components". We call this kind of vector space (uncountably) infinite dimensional. When the smooth functions are restricted to analytic ones, we can represent them using power series, and the vector space appears countably infinite (enumerable) dimensional. [*Q: What exactly is the dimension of the vector space of analytic functions on the reals?*] We have seen that coordinates are not essential to a vector, yet they remain a very convenient way to express one.
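The polynomial example can be made concrete. In this hedged sketch (the representation and function names are my own choices), a polynomial is a coefficient list of whatever length its degree requires, and the vector-space operations act componentwise after padding:

```python
from itertools import zip_longest

def poly_add(p, q):
    """Vector addition: add polynomials given as coefficient lists [a0, a1, ...]."""
    return [a + b for a, b in zip_longest(p, q, fillvalue=0.0)]

def poly_scale(c, p):
    """Scalar multiplication: scale every coefficient."""
    return [c * a for a in p]

p = [1.0, 0.0, 2.0]   # 1 + 2x^2
q = [0.0, 3.0]        # 3x
assert poly_add(p, q) == [1.0, 3.0, 2.0]   # 1 + 3x + 2x^2
assert poly_scale(2.0, q) == [0.0, 6.0]    # 6x
```

Note the lists have no common fixed size: this is the sense in which no single finite array of coordinates serves every polynomial at once.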

Now we come to the premise of a vector having coordinates: a coordinate system. A basis, as a coordinate system, is a linearly independent set of vectors that spans the vector space; equivalently, each vector is a unique (finite) linear combination of basis vectors. We spend a little time here showing that a basis always exists, even for an infinite dimensional space.

**Theorem**: *Every vector space has a basis, assuming Zorn's lemma.*

**Proof**: Let $\mathcal{A}$ be the collection of linearly independent subsets of the vector space. We furnish $\mathcal{A}$ with the partial order induced by inclusion. For each chain $\mathcal{C} \subseteq \mathcal{A}$, the union $U = \bigcup \mathcal{C}$ is apparently an upper bound of $\mathcal{C}$. To apply Zorn's lemma we must have $U \in \mathcal{A}$, that is, the vectors in $U$ are linearly independent. To this end, observe that for any finite subset of vectors $\{v_1, \dots, v_n\} \subseteq U$, there is an element $C$ of the chain such that $v_i \in C$ for all $i$. But $C$ is a linearly independent set, so is $\{v_1, \dots, v_n\}$. By Zorn's lemma, there exists (at least) a maximal element $B \in \mathcal{A}$. We claim $B$ spans the vector space. Suppose it doesn't; then take $v \notin \operatorname{span}(B)$. The set $B \cup \{v\}$ is linearly independent and hence belongs to $\mathcal{A}$, contradicting the maximality of $B$. $\blacksquare$

The converse of the theorem — that the existence of a basis for every vector space implies the Axiom of Choice (AC) — was proved by Andreas Blass in 1984. Thus AC is equivalent to the existence of bases.

Now we restrict to the finite dimensional case. By the theorem, with each vector $v$ we can associate a unique coordinate tuple $(v^1, \dots, v^n)$ as an identifier under a specific set of basis vectors $\{e_1, \dots, e_n\}$, and write $v = v^i e_i$ (here we adopt the Einstein summation convention). Obviously, the coordinates depend on the basis: one vector may have different coordinates under different bases. So let's see what happens to the coordinates under a change of basis.

# 2. Dual vector: Contravariance vs. Covariance

Suppose we need to express the vector $v$ with respect to a new set of basis vectors $\{\tilde e_i\}$, where $A$ is the transformation matrix, namely $\tilde e_i = A_i^{\ j} e_j$. In such a basis, $v = \tilde v^i \tilde e_i$. Since the vector itself doesn't change with the basis (a change of basis is essentially a passive transformation), we should have

$$v = \tilde v^i \tilde e_i = \tilde v^i A_i^{\ j} e_j = v^j e_j,$$

and hence $v^j = \tilde v^i A_i^{\ j}$, or

$$\tilde v^i = v^j (A^{-1})_j^{\ i},$$

where $A^{-1}$ is the inverse of $A$. We find that the coordinates of a vector change inversely to the transformation law of the change of basis. So in general, when we speak of a vector we refer to a contravariant vector.
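The contravariant law can be checked numerically. In this minimal sketch (the matrix $A$ and the coordinates are my own choices), the old basis is the standard basis of $\mathbb{R}^2$, so the new basis vectors $\tilde e_i = A_i^{\ j} e_j$ have components given by the rows of $A$; transforming the coordinates with $A^{-1}$ leaves the geometric vector unchanged:

```python
def inv2(A):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def row_mat(v, M):
    """Row vector times matrix: (vM)^i = v^j M_j^i."""
    return [sum(v[j] * M[j][i] for j in range(2)) for i in range(2)]

A = [[2.0, 1.0], [1.0, 1.0]]       # rows are A_i^j
e_new = A                          # components of the new basis vectors

v_old = [3.0, 5.0]                 # coordinates v^j in the old (standard) basis
v_new = row_mat(v_old, inv2(A))    # contravariant law: v~^i = v^j (A^{-1})_j^i

# Reassembling v^i e_i from either description yields the same geometric vector.
geom_old = v_old                   # old basis is standard
geom_new = [sum(v_new[i] * e_new[i][k] for i in range(2)) for k in range(2)]
assert all(abs(x - y) < 1e-12 for x, y in zip(geom_old, geom_new))
```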

Now we consider dual vectors. Define the coordinate functionals $e^i: v \mapsto v^i$ for each vector $v = v^j e_j$. Obviously, the linear combinations of such functionals form a vector space $V^*$, called the dual space of $V$, in which a basis is the set of covectors $\{e^i\}$ satisfying $e^i(e_j) = \delta^i_j$ — namely, the rows of the inverse of the basis matrix. A dual vector then can be written as $f = f_i e^i$. If we change the basis from $e_i$ to $\tilde e_i = A_i^{\ j} e_j$, letting $\tilde e^i = B^i_{\ j} e^j$, we still have

$$\delta^i_j = \tilde e^i(\tilde e_j) = B^i_{\ k} A_j^{\ l}\, e^k(e_l) = B^i_{\ k} A_j^{\ k}.$$

So $B$ is the inverse of the change-of-basis matrix, $B = A^{-1}$, and $\tilde e^i = (A^{-1})^i_{\ j} e^j$. Then,

$$f = f_i e^i = \tilde f_i \tilde e^i = \tilde f_i (A^{-1})^i_{\ j} e^j.$$

That is, $f_j = \tilde f_i (A^{-1})^i_{\ j}$, equivalently,

$$\tilde f_i = A_i^{\ j} f_j.$$
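A quick numerical sanity check (again with components of my own choosing): covariant components transform with $A$ itself while contravariant components transform with $A^{-1}$, so the scalar pairing $f(v) = f_i v^i$ is basis independent:

```python
def inv2(A):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [1.0, 1.0]]   # change of basis, rows are A_i^j
v_old = [3.0, 5.0]             # contravariant components v^j
f_old = [1.0, 2.0]             # covariant components f_j

Ainv = inv2(A)
v_new = [sum(v_old[j] * Ainv[j][i] for j in range(2)) for i in range(2)]  # v~ = v A^{-1}
f_new = [sum(A[i][j] * f_old[j] for j in range(2)) for i in range(2)]     # f~_i = A_i^j f_j

pair_old = sum(f_old[i] * v_old[i] for i in range(2))
pair_new = sum(f_new[i] * v_new[i] for i in range(2))
assert abs(pair_old - pair_new) < 1e-12   # the pairing f_i v^i is invariant
```

The two opposite transformation laws cancel in the contraction, which is exactly why the pairing of a vector with a dual vector is a basis-free number.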

As we have seen, the coordinates of a dual vector co-vary with the change of basis, which is why we call it a covariant vector. Common examples of dual vectors are row vectors and differential 1-forms. Many think row vectors are just transposes of column vectors and naturally identify the two. Differential 1-forms can also be identified with vectors through the musical isomorphisms. I will exemplify this natural identification with the gradient. Let $U \subseteq \mathbb{R}^n$ be an open set, on which a vector field $X$ is defined. Suppose $f: U \to \mathbb{R}$ is a differentiable function with differential $\mathrm{d}f$; then we have

$$\langle \mathrm{d}f, X \rangle = X^i\, \partial_i f = \langle \nabla f, X \rangle.$$

Note that the bracket on the left denotes the pairing of a vector with a dual vector, while the bracket on the right denotes the inner product of two vectors. We see that the differential of $f$ is identified with the gradient of $f$. In mathematical language,

$$\mathrm{d}f = (\nabla f)^\flat, \qquad \nabla f = (\mathrm{d}f)^\sharp.$$

# 3. Pseudovector: Hodge dual of vector

Think about this: a ball revolves counterclockwise around a fixed point on a table. If one sets a mirror vertically on the table, the virtual image of the ball in the mirror revolves clockwise. The angular momentum of the real ball, according to the right-hand rule, points upward; hence it should still point upward for the image, since the mirror doesn't flip up and down. However, applying the right-hand rule directly to the virtual ball, we find its angular momentum pointing downward, which is unreasonable. It seems that angular momentum is neither a contravariant nor a covariant vector. If we are careful enough, it is not hard to notice how slightly weird it is to use a vector for a planar motion — why not consider a direct representation of the plane in which the ball moves? If an ant happened to walk around on the table, it would find it crazy to express angular momentum by an external vector it can neither touch nor see. The same thought would occur to a creature living in a higher dimensional space: how could it possibly use a single vector to indicate angular momentum? The fact is, we feel it so natural to implicitly identify a vector with the area element perpendicular to it only because we live in three dimensions!

This identification establishes another sort of dual concept, the Hodge dual. Consider two vectors $u = u^i e_i$ and $v = v^j e_j$, and multiply them formally, denoted by

$$u \wedge v = u^i v^j\, e_i \wedge e_j.$$

If we require $e_i \wedge e_j = -\,e_j \wedge e_i$ (so in particular $e_i \wedge e_i = 0$), then the expression reduces to

$$u \wedge v = \sum_{i < j} \left(u^i v^j - u^j v^i\right) e_i \wedge e_j.$$

We call this kind of quantity a **bivector**. Compared with the cross product $u \times v$ in three dimensions, there is a natural identification relation,

$$e_1 \wedge e_2 \leftrightarrow e_3, \qquad e_2 \wedge e_3 \leftrightarrow e_1, \qquad e_3 \wedge e_1 \leftrightarrow e_2.$$

Note that the order does matter. Let $\star$ be the Hodge star operator mapping between the vector and pseudovector (bivector, for $n = 3$) spaces; then

$$\alpha \wedge \star\beta = \langle \alpha, \beta \rangle\, \omega,$$

where $\omega = \sqrt{|g|}\; e^1 \wedge \cdots \wedge e^n$ is the volume form and $g$ is the determinant of the metric matrix. We can use this identity to compute any Hodge dual, even for each basis vector. For instance, suppose $n = 3$ with the Euclidean metric, and take $\alpha = \beta = e^1$; then

$$e^1 \wedge \star e^1 = \langle e^1, e^1 \rangle\, \omega = e^1 \wedge e^2 \wedge e^3.$$

Identifying the two results, we have

$$\star e^1 = e^2 \wedge e^3.$$
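In components, the identification of the cross product with the Hodge dual of the wedge product can be verified directly. The following sketch (my own check, with arbitrary sample vectors) computes $(u \times v)^k = \epsilon_{ijk}\, u^i v^j$ from the Levi-Civita symbol and compares it with the bivector components $u^i v^j - u^j v^i$:

```python
def eps(i, j, k):
    """Levi-Civita symbol on indices 0, 1, 2."""
    return (i - j) * (j - k) * (k - i) // 2

def cross(u, v):
    """Cross product via (u x v)^k = eps_{ijk} u^i v^j."""
    return [sum(eps(i, j, k) * u[i] * v[j]
                for i in range(3) for j in range(3))
            for k in range(3)]

u, v = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
wedge = [[u[i] * v[j] - u[j] * v[i] for j in range(3)] for i in range(3)]

assert cross(u, v)[0] == wedge[1][2]   # (u x v)^1 = (u ^ v)^{23}
assert cross(u, v)[1] == wedge[2][0]   # (u x v)^2 = (u ^ v)^{31}
assert cross(u, v)[2] == wedge[0][1]   # (u x v)^3 = (u ^ v)^{12}
```

Each cross-product component equals the bivector component on the plane perpendicular to it, exactly the identification $e_i \wedge e_j \leftrightarrow e_k$ above.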

Remark that the Hodge dual is not an involution in general, so taking the Hodge dual twice doesn't have to return to the original element. To see this, let $\alpha$ be a $k$-vector in $n$ dimensions; then

$$\star\star\alpha = s\,(-1)^{k(n-k)}\,\alpha,$$

where $s$ is the signature of the metric. This identification by Hodge dual, though it looks complicated to compute, is as natural as turning a ladder upside down: the first rung becomes the last and vice versa. Because the Levi-Civita symbol in the Hodge dual is held fixed in every basis while an improper rotation reverses orientation, pseudovectors gain a minus sign under every improper rotation. That's why a bivector, hence the angular momentum, is neither a contravariant nor a covariant vector!
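The mirror story from the beginning of this section can be replayed numerically. In this sketch (my own check, with a mirror across the $xz$-plane written as $M = \operatorname{diag}(1, -1, 1)$), reflecting both factors of a cross product produces an extra sign relative to reflecting the result, i.e. $(Mu) \times (Mv) = \det(M)\, M(u \times v)$:

```python
def cross(u, v):
    """Ordinary cross product in R^3."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def mirror(w):
    """Improper rotation M = diag(1, -1, 1): reflection across the xz-plane."""
    return [w[0], -w[1], w[2]]

u, v = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
lhs = cross(mirror(u), mirror(v))          # transform the factors, then cross
rhs = [-c for c in mirror(cross(u, v))]    # det(M) = -1 times the mirrored cross
assert lhs == rhs   # the pseudovector picks up the extra det(M) sign
```

A true (contravariant) vector would satisfy $M(u + v) = Mu + Mv$ with no extra sign; the $\det(M)$ factor is precisely the pseudovector behavior that confused the right-hand rule in the mirror.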

We have seen three kinds of vectors with different transformation laws under a change of basis. As vectors, they all satisfy the eight axioms of a vector space; but as concrete objects in a particular basis, they behave differently. Readers should recapitulate the three natural identifications and appreciate whoever tells them apart.