Mathematicians study a variety of different mathematical structures, but perhaps the structures that are most commonly associated with mathematics are the number systems, such as the integers ${{\bf Z}}$ or the real numbers ${{\bf R}}$. Indeed, the use of number systems is so closely identified with the practice of mathematics that one sometimes forgets that it is possible to do mathematics without explicit reference to any concept of number. For instance, the ancient Greeks were able to prove many theorems in Euclidean geometry, well before the development of Cartesian coordinates and analytic geometry in the seventeenth century, or the formal constructions or axiomatisations of the real number system that emerged in the nineteenth century (not to mention precursor concepts such as zero or negative numbers, whose very existence was highly controversial, if entertained at all, to the ancient Greeks). To do this, the Greeks used geometric operations as substitutes for the arithmetic operations that would be more familiar to modern mathematicians. For instance, concatenation of line segments or planar regions serves as a substitute for addition; the operation of forming a rectangle out of two line segments would serve as a substitute for multiplication; the concept of similarity can be used as a substitute for ratios or division; and so forth.

A similar situation exists in modern physics. Physical quantities such as length, mass, momentum, charge, and so forth are routinely measured and manipulated using the real number system ${{\bf R}}$ (or related systems, such as ${{\bf R}^3}$ if one wishes to measure a vector-valued physical quantity such as velocity). Much as analytic geometry allows one to use the laws of algebra and trigonometry to calculate and prove theorems in geometry, the identification of physical quantities with numbers allows one to express physical laws and relationships (such as Einstein’s famous mass-energy equivalence ${E=mc^2}$) as algebraic (or differential) equations, which can then be solved and otherwise manipulated through the extensive mathematical toolbox that has been developed over the centuries to deal with such equations.

However, as any student of physics is aware, most physical quantities are not represented purely by one or more numbers, but instead by a combination of a number and some sort of unit. For instance, it would be a category error to assert that the length of some object was a number such as ${10}$; instead, one has to say something like “the length of this object is ${10}$ yards”, combining both a number ${10}$ and a unit (in this case, the yard). Changing the unit leads to a change in the numerical value assigned to this physical quantity, even though no physical change to the object being measured has occurred. For instance, if one decides to use feet as the unit of length instead of yards, then the length of the object is now ${30}$ feet; if one instead uses metres, the length is now ${9.144}$ metres; and so forth. But nothing physical has changed when performing this change of units, and these lengths are all considered equal to each other:

$\displaystyle 10 \hbox{ yards } = 30 \hbox{ feet } = 9.144 \hbox{ metres}.$

It is then common to declare that while physical quantities and units are not, strictly speaking, numbers, they should be manipulated using the laws of algebra as if they were numerical quantities. For instance, if an object travels ${10}$ metres in ${5}$ seconds, then its speed should be

$\displaystyle (10 m) / (5 s) = 2 ms^{-1}$

where we use the usual abbreviations of ${m}$ and ${s}$ for metres and seconds respectively. Similarly, if the speed of light ${c}$ is ${c=299 792 458 ms^{-1}}$ and an object has mass ${10 kg}$, then Einstein’s mass-energy equivalence ${E=mc^2}$ tells us that the energy content of this object is

$\displaystyle (10 kg) (299 792 458 ms^{-1})^2 \approx 8.99 \times 10^{17} kg m^2 s^{-2}.$

Note that the symbols ${kg, m, s}$ are being manipulated algebraically as if they were mathematical variables such as ${x}$ and ${y}$. By collecting all these units together, we see that every physical quantity gets assigned a unit of a certain dimension: for instance, we see here that the energy ${E}$ of an object can be given the unit of ${kg m^2 s^{-2}}$ (more commonly known as a Joule), which has the dimension of ${M L^2 T^{-2}}$ where ${M, L, T}$ are the dimensions of mass, length, and time respectively.
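This bookkeeping of exponents can be sketched in a few lines of code (an illustrative Python sketch; the tuple representation and the names `dim_mul`, `dim_pow` are my own, not anything from the text): a dimension is just a tuple of exponents for ${(M, L, T)}$, and multiplying quantities adds the exponents.

```python
# A dimension as a tuple of exponents for (M, L, T); e.g. energy, with
# dimension M L^2 T^-2, is represented by (1, 2, -2).  (Illustrative
# sketch only; the representation is invented for this example.)

def dim_mul(d1, d2):
    """Dimension of a product: exponents add componentwise."""
    return tuple(a + b for a, b in zip(d1, d2))

def dim_pow(d, n):
    """Dimension of an n-th power: exponents are multiplied by n."""
    return tuple(a * n for a in d)

MASS, LENGTH, TIME = (1, 0, 0), (0, 1, 0), (0, 0, 1)

speed  = dim_mul(LENGTH, dim_pow(TIME, -1))   # L T^-1
energy = dim_mul(MASS, dim_pow(speed, 2))     # M L^2 T^-2, the joule
```

The additive exponent arithmetic is exactly the manipulation of ${kg, m, s}$ as formal variables described above.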

There is however one important limitation to the ability to manipulate “dimensionful” quantities as if they were numbers: one is not supposed to add, subtract, or compare two physical quantities if they have different dimensions, although it is acceptable to multiply or divide two such quantities. For instance, if ${m}$ is a mass (having the units ${M}$) and ${v}$ is a speed (having the units ${LT^{-1}}$), then it is physically “legitimate” to form an expression such as ${\frac{1}{2} mv^2}$, but not an expression such as ${m+v}$ or ${m-v}$; in a similar spirit, statements such as ${m=v}$ or ${m\geq v}$ are physically meaningless. This combines well with the mathematical distinction between vector, scalar, and matrix quantities, which among other things prohibits one from adding together two such quantities if their vector or matrix type are different (e.g. one cannot add a scalar to a vector, or a vector to a matrix), and also places limitations on when two such quantities can be multiplied together. A related limitation, which is not always made explicit in physics texts, is that transcendental mathematical functions such as ${\sin}$ or ${\exp}$ should only be applied to arguments that are dimensionless; thus, for instance, if ${v}$ is a speed, then ${\hbox{arctanh}(v)}$ is not physically meaningful, but ${\hbox{arctanh}(v/c)}$ is (this particular quantity is known as the rapidity associated to this speed).
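These rules are mechanical enough to enforce in code. A minimal sketch (the helper names `check_add` and `check_transcendental` are invented for illustration): addition, subtraction, and comparison demand equal exponent tuples, while transcendental functions demand the zero tuple.

```python
DIMENSIONLESS = (0, 0, 0)

def check_add(d1, d2):
    """Addition, subtraction, and comparison require matching dimensions."""
    if d1 != d2:
        raise ValueError(f"dimensionally inconsistent: {d1} vs {d2}")
    return d1

def check_transcendental(d):
    """sin, exp, arctanh, ... may only see dimensionless arguments."""
    if d != DIMENSIONLESS:
        raise ValueError(f"transcendental function applied to dimension {d}")
    return DIMENSIONLESS

mass, speed = (1, 0, 0), (0, 1, -1)
v_over_c = tuple(a - b for a, b in zip(speed, speed))  # dim of v/c: dimensionless

try:
    check_add(mass, speed)          # m + v: rejected as a category error
    mass_plus_speed_ok = True
except ValueError:
    mass_plus_speed_ok = False
```

Thus ${\hbox{arctanh}(v)}$ is rejected while ${\hbox{arctanh}(v/c)}$ passes, matching the rapidity example.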

These limitations may seem like a weakness in the mathematical modeling of physical quantities; one may think that one could get a more “powerful” mathematical framework if one were allowed to perform dimensionally inconsistent operations, such as add together a mass and a velocity, add together a vector and a scalar, exponentiate a length, etc. Certainly there is some precedent for this in mathematics; for instance, the formalism of Clifford algebras does in fact allow one to (among other things) add vectors with scalars, and in differential geometry it is quite common to formally apply transcendental functions (such as the exponential function) to a differential form (for instance, the Liouville measure ${\frac{1}{n!} \omega^n}$ of a symplectic manifold can be usefully thought of as a component of the exponential ${\exp(\omega)}$ of the symplectic form ${\omega}$).

However, there are several reasons why it is advantageous to retain the limitation to only perform dimensionally consistent operations. One is that of error correction: one can often catch (and correct for) errors in one’s calculations by discovering a dimensional inconsistency, and tracing it back to the first step where it occurs. Also, by performing dimensional analysis, one can often identify the form of a physical law before one has fully derived it. For instance, if one postulates the existence of a mass-energy relationship involving only the mass of an object ${m}$, the energy content ${E}$, and the speed of light ${c}$, dimensional analysis is already sufficient to deduce that the relationship must be of the form ${E = \alpha mc^2}$ for some dimensionless absolute constant ${\alpha}$; the only remaining task is then to work out the constant of proportionality ${\alpha}$, which requires physical arguments beyond that provided by dimensional analysis. (This is a simple instance of a more general application of dimensional analysis known as the Buckingham ${\pi}$ theorem.)
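As a toy version of this argument (a sketch, with the linear-algebra reading made explicit): seeking exponents ${a, b}$ with ${\dim(m^a c^b) = \dim(E)}$ gives one linear equation per base dimension, and the system pins down ${a=1}$, ${b=2}$.

```python
# dim(m) = (1, 0, 0), dim(c) = (0, 1, -1), dim(E) = (1, 2, -2).
# Matching exponents of m^a c^b against E gives one equation per dimension:
#   mass:    a  = 1
#   length:  b  = 2
#   time:   -b  = -2   (consistent with the length equation)
a = 1
b = 2
dim_m_a_c_b = (a * 1 + b * 0, a * 0 + b * 1, a * 0 + b * (-1))
# The dimensions match, so any law built from m and c alone must take the
# form E = alpha * m * c**2 for some dimensionless constant alpha.
```

Dimensional analysis alone cannot determine ${\alpha}$; that requires the physical input mentioned above.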

The use of units and dimensional analysis has certainly proven to be a very effective tool in physics. But one can ask whether it has a properly grounded mathematical foundation, in order to settle any lingering unease about using such tools in physics, and also in order to rigorously develop such tools for purely mathematical purposes (such as analysing identities and inequalities in such fields of mathematics as harmonic analysis or partial differential equations).

The example of Euclidean geometry mentioned previously offers one possible approach to formalising the use of dimensions. For instance, one could model the length of a line segment not by a number, but rather by the equivalence class of all line segments congruent to the original line segment (cf. the Frege-Russell definition of a number). Similarly, the area of a planar region can be modeled not by a number, but by the equivalence class of all regions that are equidecomposable with the original region (one can, if one wishes, restrict attention here to measurable sets in order to avoid Banach-Tarski-type paradoxes, though that particular paradox actually only arises in three and higher dimensions). As mentioned before, it is then geometrically natural to multiply two lengths to form an area, by taking a rectangle whose line segments have the stated lengths, and using the area of that rectangle as a product. This geometric picture works well for units such as length and volume that have a spatial geometric interpretation, but it is less clear how to apply it for more general units. For instance, it does not seem geometrically natural (or, for that matter, conceptually helpful) to envision the equation ${E=mc^2}$ as the assertion that the energy ${E}$ is the volume of a rectangular box whose height is the mass ${m}$ and whose length and width are given by the speed of light ${c}$.

But there are at least two other ways to formalise dimensionful quantities in mathematics, which I will discuss below the fold. The first is a “parametric” model in which dimensionful objects are modeled as numbers (or vectors, matrices, etc.) depending on some base dimensional parameters (such as units of length, mass, and time, or perhaps a coordinate system for space or spacetime), and transforming according to some representation of a structure group that encodes the range of these parameters; this type of “coordinate-heavy” model is often used (either implicitly or explicitly) by physicists in order to efficiently perform calculations, particularly when manipulating vector or tensor-valued quantities. The second is an “abstract” model in which dimensionful objects now live in an abstract mathematical space (e.g. an abstract vector space), in which only a subset of the operations available to general-purpose number systems such as ${{\bf R}}$ or ${{\bf R}^3}$ are available, namely those operations which are “dimensionally consistent” or invariant (or more precisely, equivariant) with respect to the action of the underlying structure group. This sort of “coordinate-free” approach tends to be the one which is preferred by pure mathematicians, particularly in the various branches of modern geometry, in part because it can lead to greater conceptual clarity, as well as results of great generality; it is also close to the more informal practice of treating mathematical manipulations that do not preserve dimensional consistency as being physically meaningless.

— 1. The parametric approach —

In the parametric approach to formalising units and dimension, we postulate the existence of one or more dimensional parameters; for sake of discussion, let us initially use ${M, L, T}$ (representing the mass unit, length unit, and time unit respectively) for the dimensional parameters, though later in this discussion we will consider other sets of dimensional parameters. We will allow these parameters ${M,L,T}$ to range freely and independently among the positive real numbers ${{\bf R}^+}$, thus the parameter space (or structure group) here is given by the multiplicative group ${({\bf R}^+)^3}$. Later on, we will consider more general situations in which the parameter space is given by more general structure groups. (Actually, it would be slightly more natural to use a parameter space which was a torsor of the structure group, rather than the structure group itself; we discuss this at the very end of the post.)

We then distinguish two types of mathematical object in the “dimensionful universe”:

1. Dimensionless objects ${x}$, which do not depend on the dimensional parameters ${M,L,T}$;
2. Dimensionful objects ${x = x_{M,L,T}}$, which depend on the dimensional parameters ${M,L,T}$.

Similarly with “object” replaced by “number”, “vector”, or any other mathematical object. (Strictly speaking, with this convention, a dimensionless object is a degenerate special case of a dimensionful object; one could, if one wished, talk about strictly dimensionful objects in which the dependence of ${x_{M,L,T}}$ on ${M,L,T}$ is non-constant.) Our conventions will be slightly different when we turn to dimensionful sets rather than dimensionful objects, but we postpone discussion of this subtlety until later.

The distinction between dimensionless and dimensionful objects is analogous to the distinction between standard and (cheap) nonstandard objects in (cheap) nonstandard analysis. However, whereas in nonstandard analysis the underlying parameter ${\mathbf{n}}$ is usually thought of as an infinitely large parameter, in dimensional analysis the dimensional parameters ${M,L,T}$ are usually thought of as being neither infinitesimally small nor infinitely large, but rather as medium-sized quantities taking values comparable to those encountered in the physical system being studied.

A typical example of a dimensionful quantity is the numerical length ${l = l_{M,L,T}}$ of a physical rod ${AB}$, in terms of a length unit which is ${L}$ yards (say) long. For instance, if ${AB}$ is ten yards long, then ${l_1 = 10}$. Furthermore, ${l_{1/3}=30}$, since when ${L=1/3}$, the length unit is now ${1/3}$ yards, i.e. a foot, and ${AB}$ is thirty feet long. More generally, we see that

$\displaystyle l_{M,L,T} = 10 L^{-1}.$

Thus, a quantity which measures the numerical length of an object is a dimensionful quantity that behaves inversely to the size of the length unit. More generally, let us say that a dimensionful numerical quantity ${x = x_{M,L,T}}$ has dimension ${M^a L^b T^c}$ for some (dimensionless) exponents ${a,b,c}$ if one has a proportionality relationship of the form

$\displaystyle x_{M,L,T} = \tilde x M^{-a} L^{-b} T^{-c} \ \ \ \ \ (1)$

for some number ${\tilde x}$. For instance, the speed of an object, measured in length units per time unit, where the length unit is ${L}$ yards long and the time unit is ${T}$ seconds in duration, is a dimensionful quantity of dimension ${LT^{-1}}$. The presence of the negative signs in (1) may seem surprising at first, but this is due to the fact that (1) is describing the effect of a passive change of units rather than an active change of the object ${x}$.
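In code, a dimensionful quantity in this parametric model is literally a function of the unit parameters (a minimal Python sketch; the function name `l` mirrors the notation above):

```python
# The numerical length of the ten-yard rod, when the length unit is L yards,
# depends inversely on the unit: l_{M,L,T} = 10 * L**-1, as in the text.
def l(M, L, T):
    return 10 / L     # dimension M^0 L^1 T^0 in the sense of (1)

# Evaluations at particular choices of units:
yards_reading = l(1, 1, 1)      # length unit = 1 yard
feet_reading  = l(1, 1 / 3, 1)  # length unit = 1/3 yard, i.e. a foot
```

The passive nature of the rescaling is visible here: shrinking the unit by a factor of three inflates the numerical reading by the same factor.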

(Note here one slight defect of this approach in modeling physical quantities, in that one has to select a preferred system of units (in this case, yards, seconds, and some unspecified mass unit) in order to interpret the parameters ${M,L,T}$ numerically. As mentioned above, one can avoid this by viewing the parameters as torsors rather than numbers; we will discuss this briefly at the end of this post.)

In the language of representation theory, the collection of dimensionful quantities of dimension ${M^a L^b T^c}$ is a weight space of the structure group ${({\bf R}^+)^3 = \{ (M,L,T): M,L,T \in {\bf R}^+ \}}$ of weight ${(a,b,c)}$. One can indeed view dimensional analysis as being the representation theory of groups such as ${({\bf R}^+)^3}$; this viewpoint will become more prominent when we consider more general structure groups than ${({\bf R}^+)^3}$ later in this section.

Note that with this definition, it is possible that some dimensionful quantities do not have any specific dimension ${M^a L^b T^c}$, due to the dependence on ${M,L,T}$ being more complicated than a simple power law relationship. To give a (contrived) example, the dimensionful quantity ${L^{\sin(M+T)}}$ does not have any specific dimension attached to it.

We can manipulate dimensionful quantities mathematically by applying any given mathematical operation pointwise for each choice of dimensional parameters ${M,L,T}$. For instance, if ${x = x_{M,L,T}}$ and ${y = y_{M,L,T}}$ are two dimensionful numerical quantities, we can form their sum ${x+y = (x+y)_{M,L,T}}$ by the formula

$\displaystyle (x+y)_{M,L,T} = x_{M,L,T} + y_{M,L,T}.$

Similarly one can define ${x-y}$, ${xy}$, ${x/y}$ (if ${y}$ is never vanishing), ${x^y}$, ${\sin(x)}$, etc. We also declare ${x=y}$ if one has ${x_{M,L,T} = y_{M,L,T}}$ for all ${M,L,T}$, and similarly declare ${x \leq y}$ if one has ${x_{M,L,T} \leq y_{M,L,T}}$ for all ${M,L,T}$, and so forth. Note that any law of algebra that is expressible as a universal identity will continue to hold for dimensionful quantities; for instance, the distributive law ${x(y+z)=xy+xz}$ holds for ordinary real numbers, and hence clearly also holds for dimensionful real numbers.

With these conventions, we now see a difference between dimensionally consistent and dimensionally inconsistent operations. If ${x}$ and ${y}$ both have units ${M^a L^b T^c}$, then their sum ${x+y}$ and difference ${x-y}$ also have units ${M^a L^b T^c}$; but if ${x}$ has units ${M^a L^b T^c}$ and ${y}$ has units ${M^{a'} L^{b'} T^{c'}}$ for some ${(a',b',c') \neq (a,b,c)}$, then the sum ${x+y}$ or difference ${x-y}$, while still defined as a dimensionful quantity, no longer has any single dimension. For instance, if one adds a length ${l = l_{M,L,T} = 10 L^{-1}}$ to a speed ${v = v_{M,L,T} = 30 L^{-1} T}$, one obtains a hybrid dimensionful quantity ${10L^{-1} + 30 L^{-1} T}$ which is not of the form (1). Similarly, applying a transcendental function ${\sin}$ to a dimensionful quantity ${x}$ will almost certainly generate a quantity with no specific dimension, unless the quantity ${x}$ was actually dimensionless (in which case ${\sin(x)}$ will be dimensionless too, of course). On the other hand, dimensions interact very well with products and quotients: the product of a quantity of dimension ${M^a L^b T^c}$ with a quantity of dimension ${M^{a'} L^{b'} T^{c'}}$ is a quantity of dimension ${M^{a+a'} L^{b+b'} T^{c+c'}}$, and similarly for quotients.
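One can see the hybrid phenomenon concretely (an illustrative Python sketch using the length ${l}$ and speed ${v}$ from this example): the sum ${l+v}$ fails to obey any single power law in the time unit ${T}$, so no exponent ${c}$ can be assigned to it.

```python
def l(M, L, T): return 10 / L            # dimension L
def v(M, L, T): return 30 * T / L        # dimension L T^-1
def s(M, L, T): return l(M, L, T) + v(M, L, T)   # the hybrid sum l + v

# A quantity of pure dimension M^a L^b T^c would scale as lam**-c under
# T -> lam*T, with the same exponent c at every base point.  The hybrid
# sum exhibits a different scaling factor at T = 1 and T = 2:
lam = 2.0
ratio_at_T1 = s(1, 1, lam * 1) / s(1, 1, 1)   # (10 + 60) / (10 + 30)
ratio_at_T2 = s(1, 1, lam * 2) / s(1, 1, 2)   # (10 + 120) / (10 + 60)
```

By contrast, the pure quantity ${v}$ scales by exactly ${\lambda}$ under ${T \mapsto \lambda T}$ at every base point.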

Now we turn to equality. If two quantities ${x,y}$ have the same units ${M^a L^b T^c}$, then we see that in order to test the equality ${x=y}$ of the two objects, it suffices to do so for a single choice of dimensional parameters ${M,L,T}$: if ${x_{M_0,L_0,T_0} = y_{M_0,L_0,T_0}}$ for a single tuple ${(M_0,L_0,T_0)}$, then one has ${x_{M,L,T} = y_{M,L,T}}$ for all ${M,L,T}$. Similarly for order relations such as ${x \geq y}$ or ${x < y}$. In particular, if two quantities ${x, y}$ have the same units, then we have the usual order trichotomy: exactly one of ${x=y}$, ${x<y}$, ${x>y}$ holds. In contrast, it is possible for two dimensionful quantities to be incomparable: for instance, using the quantities ${l = 10 L^{-1}}$ and ${v = 30 L^{-1} T}$ from before, we see that none of the three statements ${l=v}$, ${l>v}$, ${l<v}$ are true (which, in the dimensionful universe, means that they are valid for all choices of dimension parameters ${M,L,T}$): instead, we have ${l_{M,L,T} > v_{M,L,T}}$ for ${T < 1/3}$ and ${l_{M,L,T} < v_{M,L,T}}$ for ${T > 1/3}$. Indeed, if ${x}$ and ${y}$ have different dimensions, we see that the equation ${x=y}$ cannot hold at all, unless ${x}$ and ${y}$ both vanish. Thus we see that any non-trivial dimensionally inconsistent identity (in which the left and right-hand sides have different dimensions) can be automatically ruled out as being false.

A similar situation holds for inequality: if ${x, y}$ are strictly positive dimensionful quantities with different dimensions, then none of the statements ${x<y}$, ${x>y}$, ${x \leq y}$, or ${x \geq y}$ can hold. (On the other hand, with our conventions, a strictly positive quantity is always greater than a strictly negative quantity, even when the dimensions do not match.) The situation gets more complicated though when dealing with quantities of hybrid dimension. For instance, the arithmetic mean-geometric mean inequality tells us that

$\displaystyle l v \leq \frac{1}{2} l^2 + \frac{1}{2} v^2 \ \ \ \ \ (2)$

for any two strictly positive dimensionful quantities ${l, v}$, even if these quantities have different dimensions. For instance, if ${l}$ has dimension ${L}$ and ${v}$ has dimension ${LT^{-1}}$, then the left-hand side ${lv}$ has dimension ${L^2 T^{-1}}$, but the two terms ${\frac{1}{2}l^2}$, ${\frac{1}{2} v^2}$ on the right-hand side have dimensions ${L^2}$ and ${L^2 T^{-2}}$ respectively. But this inequality can still be viewed as dimensionally consistent, if one broadens the notion of dimensional consistency sufficiently. For instance, if ${x}$ is the sum of ${n}$ strictly positive quantities of dimension ${M^{a_i} L^{b_i} T^{c_i}}$ for ${i=1,2,\ldots,n}$ and some ${(a_i,b_i,c_i) \in {\bf R}^3}$, and ${y}$ is similarly the sum of ${m}$ strictly positive quantities of dimension ${M^{a'_j} L^{b'_j} T^{c'_j}}$ for ${j=1,\ldots,m}$ and some ${(a'_j,b'_j,c'_j) \in {\bf R}^3}$, it is an instructive exercise to show that an inequality of the form ${x < y}$ or ${x \leq y}$ can only hold if the convex hull of the ${(a_i,b_i,c_i)}$ is contained in the convex hull of the ${(a'_j,b'_j,c'_j)}$ (and that equality ${x=y}$ can only hold if the two sets ${\{ (a_i,b_i,c_i): 1 \le i \leq n\}}$ and ${\{ (a'_j,b'_j,c'_j): 1 \le j \le m \}}$ agree). Thus, for instance, (2) is dimensionally consistent in this generalised sense, because the exponent triple ${(0,2,-1)}$ associated to the left-hand side ${lv}$ lies in the convex hull of the exponent triples ${(0,2,0), (0,2,-2)}$ associated to the right-hand side ${\frac{1}{2} l^2 + \frac{1}{2} v^2}$. On the other hand, this analysis helps explain why we almost never see such hybrid dimensional quantities appear in a physical problem, because while one can bound a positive quantity with a single dimension by a combination of positive quantities of different dimensions (as in (2)), the converse is not possible: one cannot bound a positive quantity of hybrid dimension by a quantity with a single dimension.
As a consequence, if one is trying to establish an inequality ${x \leq y}$ between two positive quantities of the same dimension by writing down a chain ${x \leq z_1 \leq \ldots\leq z_n \leq y}$ of intermediate inequalities, one cannot have any of the intermediate quantities ${z_j}$ be of hybrid dimension, as this will necessarily cause one of the inequalities in the chain to fail as soon as one attempts to bound a hybrid quantity by a non-hybrid quantity. Similarly if one wishes to prove an equality ${x=y}$ instead of an inequality.
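The dimensionful validity of the hybrid inequality (2) can be spot-checked over many random choices of units (a numerical sketch; the magnitudes ${10}$ and ${30}$ are carried over from the earlier length and speed example):

```python
import random

random.seed(0)  # deterministic sampling of unit choices

def l(M, L, T): return 10 / L        # dimension L
def v(M, L, T): return 30 * T / L    # dimension L T^-1

def amgm_holds(M, L, T):
    """l*v <= l^2/2 + v^2/2 at one instantiation of the parameters,
    with a small relative slack for floating-point rounding."""
    lv = l(M, L, T) * v(M, L, T)
    rhs = 0.5 * l(M, L, T) ** 2 + 0.5 * v(M, L, T) ** 2
    return lv <= rhs * (1 + 1e-9)

# The inequality is of hybrid dimension on the right, yet it holds for
# every choice of units, as a valid dimensionful inequality must:
checks = all(amgm_holds(*[random.uniform(0.1, 10) for _ in range(3)])
             for _ in range(1000))
```

Of course, sampling does not prove (2); it merely illustrates the pointwise-in-parameters meaning of a dimensionful inequality.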

We have already observed that to verify a dimensionally consistent statement between dimensionful quantities, it suffices to do so for a single choice of the dimension parameters ${M,L,T}$; one can view this as being analogous to the transfer principle in nonstandard analysis, relating dimensionful mathematics with dimensionless mathematics. Thus, for instance, if ${E, m, c}$ have the units of ${M L^2 T^{-2}}$, ${M}$, and ${L T^{-1}}$ respectively, then to verify the dimensionally consistent identity ${E=mc^2}$, it suffices to do so for a single choice of units ${M,L,T}$. For instance, one can choose a set of units (such as Planck units) for which the speed of light ${c}$ becomes ${1}$, in which case the dimensionally consistent identity ${E=mc^2}$ simplifies to the dimensionally inconsistent identity ${E=m}$. Note that once we sacrifice dimensional consistency, though, we cannot then transfer back to the dimensionful setting; the identity ${E=m}$ does not hold for all choices of units, only the special choice of units for which ${c=1}$. So we see a tradeoff between the freedom to vary units, and the freedom to work with dimensionally inconsistent equations; one can spend one freedom for another, but one cannot have both at the same time. (This is closely related to the concept of spending symmetry, which I discuss for instance in this post (or in Section 2.1 of this book).)
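The trade-off can be made concrete (an illustrative sketch; the numbers instantiate the mass and speed of light from earlier, and `E` is defined to satisfy ${E=mc^2}$): the consistent identity ${E=mc^2}$ holds for every choice of units, while the inconsistent ${E=m}$ holds only in the special units where ${c=1}$.

```python
def c(M, L, T): return 299792458 * T / L   # speed of light, dimension L T^-1
def m(M, L, T): return 10 / M              # mass, dimension M
def E(M, L, T): return m(M, L, T) * c(M, L, T) ** 2   # E = m c^2 by definition

# Spend the scaling freedom: take the length unit to be one light-second
# (299792458 metres), so that c instantiates to 1 and E = m c^2 collapses
# to the dimensionally inconsistent E = m.
L0 = 299792458
```

Once the units are moved away from this special choice, the instantiation of ${E}$ no longer agrees with that of ${m}$, which is the "cannot transfer back" phenomenon described above.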

Thus far, we have only considered scalar dimensionful quantities: quantities which, for each choice of dimensional parameters ${M,L,T}$, take values in a number system such as ${{\bf R}}$. One can similarly consider vector-valued or matrix-valued dimensionful quantities, with only minor changes (though see below, when we consider coordinate systems to themselves be a dimensional parameter). We remark though that one could consider vectors in which different components have different dimensions. For instance, one could consider a four-dimensional vector ${(x,y,z,t)}$ in which the first three components ${x,y,z}$ have the dimension of length ${L}$, while the final component ${t}$ has the dimension of time ${T}$.

Now we turn to the notion of a dimensionful set, which requires some care. We will define a dimensionful set to be a set ${\Omega}$ of dimensionful objects ${x = x_{M,L,T}}$. Thus, for instance, the collection ${{\bf R}_{M^a L^b T^c}}$ of all dimensionful numbers of dimension ${M^a L^b T^c}$ would be a dimensionful set; it is isomorphic to the reals ${{\bf R}}$, because a dimensionful real number of a given dimension is entirely determined by its value for a single choice of parameters. But one should view the dimensionful sets ${{\bf R}_{M^a L^b T^c}}$ as ${a,b,c}$ vary as being distinct copies of ${{\bf R}}$. We say that a dimensionful set ${\Omega}$ of reals has dimension ${M^a L^b T^c}$ if each element of ${\Omega}$ has this dimension.

Given a dimensionful set ${\Omega}$, one can evaluate it at any given choice ${M_0,L_0,T_0}$ of parameters, by evaluating each point of ${\Omega}$ at this choice:

$\displaystyle \Omega_{M_0,L_0,T_0} := \{ x_{M_0,L_0,T_0}: x \in \Omega \}.$

However, in contrast with the situation with dimensionful objects, a dimensionful set ${\Omega}$ is not completely characterised by its evaluations ${\Omega_{M_0,L_0,T_0}}$ at each choice of parameters ${M_0,L_0,T_0}$. For instance, if one evaluates the dimensionful set ${{\bf R}_{M^a L^b T^c}}$ at any given ${M_0,L_0,T_0}$, one just gets the ordinary real numbers:

$\displaystyle ({\bf R}_{M^a L^b T^c})_{M_0,L_0,T_0} = {\bf R}.$

However, the sets ${{\bf R}_{M^a L^b T^c}}$ and ${{\bf R}_{M^{a'} L^{b'} T^{c'}}}$ are still distinct (indeed, they only intersect at the origin). The point is that membership of a dimensionful point ${x}$ in a dimensionful set ${\Omega}$ is a global property rather than a local one; in order for ${x}$ to lie in ${\Omega}$, it is necessary that ${x_{M,L,T} \in \Omega_{M,L,T}}$ for all ${M,L,T}$, but this condition is not sufficient (unless ${x}$ and ${\Omega}$ have the same dimension, in which case it suffices to have ${x_{M,L,T} \in \Omega_{M,L,T}}$ for just a single ${M,L,T}$).

Given two dimensionful sets ${\Omega, \Omega'}$, we define a dimensionful function ${f: \Omega \rightarrow \Omega'}$ from ${\Omega}$ to ${\Omega'}$ to be a family ${f_{M,L,T}: \Omega_{M,L,T} \rightarrow \Omega'_{M,L,T}}$ of functions which maps points in ${\Omega}$ to points in ${\Omega'}$; thus, if ${x = x_{M,L,T}}$ is a point in ${\Omega}$, then the dimensionful object ${f(x)}$ defined by pointwise evaluation

$\displaystyle f(x)_{M,L,T} := f_{M,L,T}(x_{M,L,T})$

is a point in ${\Omega'}$. Thus, for instance, the squaring function ${x \mapsto x^2}$ can be viewed both as a dimensionless function, and also as a function from ${{\bf R}_{M^a L^b T^c}}$ to ${{\bf R}_{M^{2a} L^{2b} T^{2c}}}$ for any ${a,b,c}$. (Thus, when describing a dimensionful function ${f: \Omega \rightarrow \Omega'}$, it is not quite enough to specify the specific instantiations ${f_{M,L,T}}$ of that function; one must also specify the dimensionful domain ${\Omega}$ and range ${\Omega'}$.) As another example, if ${A}$ is a dimensionful quantity of some units ${M^a L^b T^c}$ (representing amplitude) and ${\omega}$ is a dimensionful quantity of units ${T^{-1}}$ (representing a time frequency), then the function ${f(t) := A \sin(\omega t)}$ (thus ${f(t)_{M,L,T} = A_{M,L,T} \sin(\omega_{M,L,T} t_{M,L,T})}$ for all ${M,L,T}$) is a dimensionful function from ${{\bf R}_T}$ to ${{\bf R}_{M^a L^b T^c}}$.

A dimensionful function ${f: {\bf R}_{M^a L^b T^c} \rightarrow {\bf R}_{M^{a'} L^{b'} T^{c'}}}$ has instantiations ${f_{M,L,T}: {\bf R} \rightarrow {\bf R}}$ that scale according to the rule

$\displaystyle f_{M,L,T}( x ) = \tilde f( M^a L^b T^c x ) M^{-a'} L^{-b'} T^{-c'} \ \ \ \ \ (3)$

for any ${M,L,T,x}$ and some dimensionless function ${\tilde f: {\bf R} \rightarrow {\bf R}}$; conversely, every dimensionless function ${\tilde f}$ creates a dimensionful function ${f}$ in this fashion. As such, one can again transfer between the dimensionful and dimensionless settings when manipulating functions and objects, provided as before that all statements involved are dimensionally consistent.
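The rule (3) can be checked numerically for a concrete profile (a sketch; the profile ${\tilde f(x) = 5\sin(2x)}$ is an arbitrary choice of mine): for a function from ${{\bf R}_T}$ to ${{\bf R}_L}$, rule (3) gives the instantiations ${f_{M,L,T}(x) = \tilde f(Tx) L^{-1}}$, and evaluating the function at the same physical time in two different unit systems yields the same physical value.

```python
import math

# Dimensionless profile and its dimensionful instantiations per rule (3):
# domain dimension T (exponents a = b = 0, c = 1), range dimension L (b' = 1).
def f_tilde(x):
    return 5 * math.sin(2 * x)

def f(M, L, T, x):
    return f_tilde(T * x) / L

# A time with dimensionless coordinate 3 reads as 3/T when the time unit
# is T seconds.  The physical value of f at that time should not depend
# on the units; multiplying by L undoes the L**-1 of the range dimension.
t_tilde = 3.0
val_units_1 = f(1, 1, 1, t_tilde / 1) * 1
val_units_2 = f(1, 2, 5, t_tilde / 5) * 2
```

The agreement of the two evaluations is exactly the transfer principle for dimensionally consistent statements about functions.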

An important additional operation available to dimensionful functions that is not available (in any non-trivial sense) to dimensionful scalars is that of integration. Given a dimensionful function ${f: \Omega \rightarrow {\bf R}_{M^{a'} L^{b'} T^{c'}}}$ on some one-dimensional dimensionful set ${\Omega \subset {\bf R}_{M^a L^b T^c}}$, one can form the integral ${\int_\Omega f(x)\ dx \in {\bf R}_{M^{a+a'} L^{b+b'} T^{c+c'}}}$ (assuming sufficient regularity and decay conditions on ${f}$ and ${\Omega}$, which we will not dwell on here) by the formula

$\displaystyle (\int_\Omega f(x)\ dx)_{M,L,T} := \int_{\Omega_{M,L,T}} f_{M,L,T}(x)\ dx.$

One can verify that this integral ${\int_\Omega f(x)\ dx}$ is indeed a dimensionful quantity of dimension ${M^{a+a'} L^{b+b'} T^{c+c'}}$. (One way to see this is to first verify the analogous claim for Riemann sums, and then to observe that the property of being a given dimension is a closed condition in the sense that it is preserved under limits.) In the opposite direction, the derivative ${\frac{d}{dx}f(x)}$ of this function, defined in the obvious fashion as

$\displaystyle (\frac{d}{dx} f(x))_{M,L,T} := \frac{d}{dx_{M,L,T}} f_{M,L,T}(x_{M,L,T}),$

can be easily verified to be a dimensionful quantity of dimension ${M^{a'-a} L^{b'-b} T^{c'-c}}$. (As before, this can be seen by first considering Newton quotients and then taking limits.)
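The claim for the integral can be spot-checked numerically (an illustrative sketch using a single length parameter ${L}$, with the interval endpoint ${2}$ chosen arbitrarily): integrating ${f(x)=x}$ (dimension ${L}$) over an interval of dimension ${L}$ produces instantiations scaling as ${L^{-2}}$, i.e. a quantity of dimension ${L^2}$.

```python
# Integrate f(x) = x over the dimensionful interval [0, r] whose length
# instantiates as r_L = 2 / L.  The exact value is r**2 / 2, so the
# instantiations should scale as L**-2: the integral has dimension L^2.
def integral(L, n=100000):
    r = 2 / L
    h = r / n
    # midpoint-rule Riemann sum of x dx, exact for linear integrands
    # up to floating-point rounding
    return sum((i + 0.5) * h * h for i in range(n))
```

Halving the numerical reading of a length (by doubling ${L}$) quarters the numerical reading of the integral, as the exponent arithmetic predicts.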

With this formalism, one can now use dimensional analysis to help test the truth of various estimates in harmonic analysis. Consider for instance the homogeneous Sobolev inequality

$\displaystyle (\int_{{\bf R}^d} |f(x)|^q\ dx)^{1/q} \leq C_{p,q,d} (\int_{{\bf R}^d} |\nabla f(x)|^p\ dx)^{1/p} \ \ \ \ \ (4)$

for all sufficiently nice functions ${f: {\bf R}^d \rightarrow {\bf R}}$ (again, we will not dwell on the precise regularity needed here, as it is not the main focus of this post), for certain choices of exponents ${1 \leq p,q < \infty}$ and ${d \geq 1}$, and a constant ${C_{p,q,d}}$ that is independent of ${f}$. To dimensionally analyse this inequality, we introduce two dimensional parameters – a length unit ${L}$ and an amplitude unit ${A}$ – and view ${f}$ as a function from ${{\bf R}_L^d}$ to ${{\bf R}_A}$ rather than from ${{\bf R}^d}$ to ${{\bf R}}$; thus, by (3) (now using parameters ${L,A}$ instead of ${M,L,T}$), we have

$\displaystyle f_{L,A}( x ) = A^{-1} \tilde f(Lx).$

(As before, the exponents seem reversed from the more familiar rescaling ${A f(x/L)}$, due to the fact that we are measuring change with respect to passive rescaling of units rather than an active rescaling of the function ${f}$.) We can then verify that ${f(x)}$ has dimension ${A}$, ${|f(x)|^q}$ has dimension ${A^q}$, ${\int_{{\bf R}^d} |f(x)|^q\ dx}$ has dimension ${A^q L^d}$, and so the left-hand side ${(\int_{{\bf R}^d} |f(x)|^q\ dx)^{1/q}}$ of (4) has dimension ${A L^{d/q}}$. A similar calculation (treating ${C_{p,q,d}}$ as dimensionless) shows that the right-hand side of (4) has dimension ${A L^{-1} L^{d/p}}$. If (4) holds for dimensionless functions, it holds for dimensionful functions as well (by applying the inequality to each instantiation of the dimensionful function); as the quantities in (4) are positive for non-trivial ${f}$, we conclude that (4) can only hold if we have the dimensional consistency relation

$\displaystyle \frac{d}{q} = \frac{d}{p} - 1.$

In fact, this condition turns out to be sufficient as well as necessary, although this is a non-trivial fact that cannot be proven purely by dimensional analysis; see e.g. these notes.
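This necessity argument is easy to mirror in a few lines of code: working with the equivalent active rescaling ${f_\lambda(x) := f(x/\lambda)}$, the two sides of (4) scale as ${\lambda^{d/q}}$ and ${\lambda^{d/p-1}}$ respectively, so the inequality can only hold with a ${\lambda}$-independent constant when these exponents agree. A minimal Python sketch (the helper names here are ad hoc, chosen for illustration):

```python
from fractions import Fraction

def lhs_exponent(d, q):
    # under the active rescaling f_lam(x) = f(x/lam), the left-hand side
    # of (4) scales as lam**(d/q)
    return Fraction(d, q)

def rhs_exponent(d, p):
    # the right-hand side of (4) scales as lam**(d/p - 1): the gradient
    # costs one power of lam, and the measure dx contributes d/p
    return Fraction(d, p) - 1

# the inequality can hold uniformly in lam only when the exponents
# match, i.e. d/q = d/p - 1
for d, p, q in [(3, 2, 6), (4, 2, 4), (3, 2, 4)]:
    print(d, p, q, lhs_exponent(d, q) == rhs_exponent(d, p))
```

The first two triples satisfy the consistency relation (for instance, the classical Sobolev exponent ${q=6}$ for ${d=3, p=2}$); the third does not.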

In a similar vein, one can dimensionally analyse the inhomogeneous Sobolev inequality

$\displaystyle (\int_{{\bf R}^d} |f(x)|^q\ dx)^{1/q} \leq C_{p,q,d} [(\int_{{\bf R}^d} |f(x)|^p\ dx)^{1/p} + (\int_{{\bf R}^d} |\nabla f(x)|^p\ dx)^{1/p}]. \ \ \ \ \ (5)$

Using the same units as before, the left-hand side has dimension ${A L^{d/q}}$, and the right-hand side is a hybrid of the dimensions ${A L^{d/p}}$ and ${A L^{-1} L^{d/p}}$, leading to the dimensional consistency relation

$\displaystyle \frac{d}{p} - 1 \leq \frac{d}{q} \leq \frac{d}{p}$

for this inequality, as a necessary condition for (5). (See also this blog post for an equivalent way to establish these conditions, using rescaled test functions instead of dimensional analysis; as mentioned earlier, the relation between these two approaches is essentially the difference between active and passive transformations.)

We saw earlier that hybrid inequalities (in which the right-hand side contains terms of different dimension) are not as useful or “efficient” as dimensionally pure inequalities (in which both sides have the same, single dimension). But it is often possible to amplify a hybrid inequality into a dimensionally pure one by optimising over all rescalings; see this previous blog post for a discussion of this trick (which, among other things, amplifies the inhomogeneous Sobolev inequality into the Gagliardo-Nirenberg inequality).
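This amplification trick can be illustrated concretely: a hybrid bound of the form ${X \leq \lambda^\alpha P + \lambda^{-\beta} Q}$, valid for all rescalings ${\lambda > 0}$, optimises to the dimensionally pure bound ${X \leq C_{\alpha,\beta} P^{\beta/(\alpha+\beta)} Q^{\alpha/(\alpha+\beta)}}$. A quick numerical check in Python (with ad hoc helper names), brute-forcing the optimal ${\lambda}$ for ${\alpha=\beta=1}$:

```python
def hybrid_min(P, Q, alpha, beta, n=20001):
    # brute-force minimum of lam**alpha * P + lam**(-beta) * Q
    # over a logarithmic grid of lam in [1e-4, 1e4]
    best = float("inf")
    for i in range(n):
        lam = 10.0 ** (-4 + 8 * i / (n - 1))
        best = min(best, lam**alpha * P + lam**(-beta) * Q)
    return best

def pure_bound(P, Q, alpha, beta):
    # the dimensionally pure quantity P^{beta/(alpha+beta)} * Q^{alpha/(alpha+beta)}
    t = beta / (alpha + beta)
    return P**t * Q**(1.0 - t)

# for alpha = beta = 1 the optimised hybrid bound is 2*sqrt(P*Q),
# i.e. twice the pure bound, uniformly in P and Q
for P, Q in [(1.0, 1.0), (5.0, 0.01), (0.2, 300.0)]:
    print(round(hybrid_min(P, Q, 1, 1) / pure_bound(P, Q, 1, 1), 3))
```

The ratio comes out the same for every choice of ${P,Q}$, which is exactly the statement that the optimised bound is dimensionally pure.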

In all the above discussion, the dimensional parameters used (such as ${M,L,T}$ or ${A,L}$) were scalar quantities, taking values in the multiplicative group ${{\bf R}^+}$, and representing units in one-dimensional spaces. But, when dealing with vector quantities, one can perform a more powerful form of dimensional analysis in which the dimensional parameters lie in a more general group (which we call the structure group of the dimensionful universe being analysed). Suppose for instance one wishes to represent a vector ${v}$ in a three-dimensional vector space ${V}$. One could designate some basis ${e_1,e_2,e_3}$ of this space ${V}$ as a reference basis, so that ${v}$ is expressible as some linear combination ${v = x_1 e_1 + x_2 e_2 + x_3 e_3}$, in which case one could identify ${v}$ with the row vector ${\tilde v = (x_1,x_2,x_3)}$, and identify ${V}$ with ${{\bf R}^3}$. But one could instead represent ${v}$ in some different basis ${Le_1, Le_2, Le_3}$, where ${L = \begin{pmatrix} L_{11} & L_{12} & L_{13} \\ L_{21} & L_{22} & L_{23} \\ L_{31} & L_{32} & L_{33} \end{pmatrix}}$ is a ${3 \times 3}$ matrix (where, by an abuse of notation, we use ${Le_i}$ as shorthand for ${L_{1i} e_1 + L_{2i} e_2 + L_{3i}e_3}$), in which case one obtains a new decomposition

$\displaystyle v = x_{1,L} Le_1 + x_{2,L} Le_2 + x_{3,L} L e_3$

where the row vector ${v_L := (x_{1,L}, x_{2,L}, x_{3,L})}$ is related to the original row vector ${\tilde v}$ by the formula

$\displaystyle v_L = \tilde v (L^t)^{-1}, \ \ \ \ \ (6)$

where ${L^t}$ is the transpose of ${L}$. Motivated by this, we may take the matrix ${L}$ as the dimensional parameter (taking values in the structure group ${GL_3({\bf R})}$ of invertible ${3 \times 3}$ matrices), and define a polar vector or type ${(1,0)}$ tensor to be a dimensionful vector ${v = v_L}$ taking values in the space ${{\bf R}^3}$ of three-dimensional row vectors and transforming according to the law (6). This is the three-dimensional analogue of a scalar quantity of dimension ${L}$ (which, in the one-dimensional setting, is just a scalar parameter in ${{\bf R}^+}$ rather than an element of ${GL_3({\bf R})}$). Indeed, one can view units as being simply the one-dimensional special case of coordinate systems. (As with previous transformation laws, the presence of the transpose inverse in (6) comes from the use of passive transformations rather than active ones.)

Now suppose we do not wish to model a vector ${v}$ in ${V}$, but rather a linear functional ${w: V \rightarrow {\bf R}}$ on ${V}$. Using the standard basis ${e_1,e_2,e_3}$, one can identify ${w}$ with the row vector ${\tilde w := (w(e_1), w(e_2), w(e_3))}$. Replacing that basis with ${L e_1, Le_2, Le_3}$, we obtain a new row vector ${w_L := (w(Le_1), w(Le_2), w(Le_3))}$, which is related to the original vector by the formula

$\displaystyle w_L = \tilde w L. \ \ \ \ \ (7)$

We will call a dimensionful vector-valued quantity ${w=w_L}$ of the form (7) a covector or type ${(0,1)}$ tensor. Thus we see that while polar vectors and covectors can both be expressed (for each choice of ${L}$) as an element of ${{\bf R}^3}$, they transform differently with respect to coordinate change (coming from the two different right-actions of the structure group ${GL_3({\bf R})}$ on ${{\bf R}^3}$ given by (6), (7)), and so it is not possible for a vector and covector to be equal as dimensionful quantities (unless they are both zero). If one tries to add a non-zero vector to a non-zero covector, one still obtains a dimensionful quantity taking values in ${{\bf R}^3}$, but it no longer transforms according to a single group action such as (6) or (7), instead following a more complicated hybrid of the two actions. On the other hand, the dot product of a vector and a covector becomes a dimensionless scalar, whereas the dot product of a vector with another vector, or a covector with another covector, does not transform according to any simple rule. This makes the distinction between vectors and covectors well suited to problems in affine geometry, which by their nature transform well with respect to the action of ${GL_3({\bf R})}$. (However, if the geometric problem involves concepts such as length and angles, which do not transform easily with respect to ${GL_3({\bf R})}$ actions, then the vector/covector distinction is much less useful.)
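The invariance of the vector–covector pairing, and the non-invariance of the naive vector–vector dot product, can be checked directly. The following Python/NumPy sketch (with a change-of-basis matrix chosen arbitrarily for illustration) implements the transformation laws (6) and (7):

```python
import numpy as np

# a generic invertible change-of-basis matrix L (chosen by hand; det L = 6)
L = np.array([[2., 1., 0.], [0., 1., 0.], [1., 0., 3.]])
v = np.array([1., 2., 3.])    # a polar vector, in the reference basis
w = np.array([4., -1., 2.])   # a covector, in the reference basis

v_L = v @ np.linalg.inv(L.T)  # transformation law (6)
w_L = w @ L                   # transformation law (7)

print(np.isclose(v_L @ w_L, v @ w))  # vector . covector is invariant: True
print(np.isclose(v_L @ v_L, v @ v))  # vector . vector is not: False
```

The first pairing is dimensionless (the factors ${(L^t)^{-1}}$ and ${L}$ cancel), while the second depends on the choice of basis.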

One can also assign dimensions to higher rank tensors, such as matrices, ${k}$-vectors, or ${k}$-forms; the notation here becomes rather complicated, but is perhaps best described using abstract index notation. The dimensional consistency of equations involving such tensors then becomes the requirement in abstract index notation that the subscripts and superscripts on the left-hand side of a tensor equation must match those on the right-hand side, after “canceling” any index that appears as both a subscript and a superscript on one side. (Abstract index notation is discussed further in this previous blog post.)

Another type of dimensional analysis arises when one takes the structure group to be a Euclidean group such as ${E(2)}$ or ${E(3)}$, rather than ${GL_3({\bf R})}$ or (some power of) ${{\bf R}^+}$. For instance, one might be studying Euclidean geometry in the Euclidean plane ${P}$, which one might identify with the Cartesian plane ${{\bf R}^2}$ by some reference isomorphism ${\iota: {\bf R}^2 \rightarrow P}$ (which can be viewed as a choice of origin, coordinate axes, orientation, and unit length on ${P}$), so that a given point ${p}$ in the Euclidean plane ${P}$ is associated to a Cartesian point ${\tilde p := \iota^{-1}(p)}$ in ${{\bf R}^2}$. If we let ${L \in E(2)}$ denote a rigid motion of ${{\bf R}^2}$ (some combination of a translation, rotation, and/or reflection on ${{\bf R}^2}$), this gives rise to a new isomorphism ${\iota \circ L: {\bf R}^2 \rightarrow P}$, and a new Cartesian point

$\displaystyle p_L = (\iota \circ L)^{-1} p = L^{-1} \tilde p.$

If we view ${L}$ as a dimensional parameter, we can then define a position vector to be a dimensionful quantity ${p = p_L}$ taking values in ${{\bf R}^2}$ that transforms according to the law

$\displaystyle p_L = L^{-1} \tilde p \ \ \ \ \ (8)$

for some ${\tilde p \in {\bf R}^2}$. If, instead of considering the position of a single point ${p}$ in the Euclidean plane, one considers the displacement ${v = q-p}$ between two points ${p,q}$ in that plane, a similar line of reasoning leads to the concept of a displacement vector, which would be a dimensionful quantity ${v = v_L}$ taking values in ${{\bf R}^2}$ that transforms according to the law

$\displaystyle v_L = \dot L^{-1} \tilde v \ \ \ \ \ (9)$

where ${\dot L \in O(2)}$ is the homogeneous portion of ${L}$ (thus ${\dot L}$ is an orthogonal linear transformation on ${{\bf R}^2}$, and ${L(x) = \dot L(x) + L(0)}$ for all ${x \in {\bf R}^2}$). Thus we see a rigorous distinction between the concepts of position vector and displacement vector that one sometimes sees in introductory linear algebra or physics courses. Note for instance that one can add a position vector to a displacement vector to obtain another position vector, or a displacement vector to a displacement vector to obtain a further displacement vector, but when adding a position vector to another position vector, one obtains a new type of vector which is neither a position vector nor a displacement vector. (On the other hand, convex combinations of position vectors still give a position vector as output.) The dimensional analysis distinction between position and displacement vectors can be useful in situations in which the ambient plane ${P}$ has no preferred origin, allowing for a clean action by the Euclidean group ${E(2)}$ (or at least the translation subgroup ${{\bf R}^2}$).
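These transformation rules are easy to test numerically. The following Python sketch (with an arbitrary rigid motion chosen for illustration) implements the laws (8) and (9) and checks which combinations of position vectors remain well-behaved:

```python
import numpy as np

theta, t = 0.7, np.array([2., -1.])   # an arbitrary rotation angle and translation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def as_position(x):      # passive law (8): p_L = L^{-1} p~, with L(x) = R x + t
    return R.T @ (x - t)

def as_displacement(x):  # law (9): only the homogeneous part of L acts
    return R.T @ x

p, q = np.array([1., 3.]), np.array([-2., 5.])

# the difference of two positions transforms as a displacement ...
print(np.allclose(as_position(q) - as_position(p), as_displacement(q - p)))  # True
# ... a convex combination of positions transforms as a position ...
m = 0.25 * p + 0.75 * q
print(np.allclose(0.25 * as_position(p) + 0.75 * as_position(q), as_position(m)))  # True
# ... but the plain sum of two positions does not
print(np.allclose(as_position(p) + as_position(q), as_position(p + q)))  # False
```

The failure in the last line comes from the translation part of the rigid motion, which cancels in differences and convex combinations but not in plain sums.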

A similar discussion can also be given in three-dimensions for the Euclidean group ${E(3)}$ (or the orthogonal subgroup ${O(3)}$); in addition to the position/displacement vector distinction, one can also distinguish polar vectors from axial vectors, leading in particular to the conclusion that the cross product of two polar vectors is an axial vector rather than a polar vector. Among other things, this helps explain why, in any identity involving cross products, such as the Hodge decomposition

$\displaystyle \Delta f = \nabla (\nabla \cdot f) - \nabla \times (\nabla \times f)$

of the three-dimensional Laplacian ${\Delta}$, either all terms have an even number of cross products, or all terms have an odd number of cross products.
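The underlying algebraic fact here is that for an orthogonal matrix ${R}$ one has ${(Rv) \times (Rw) = \det(R)\, R(v \times w)}$: the cross product of two polar vectors picks up the extra ${\det(R)}$ factor characteristic of an axial vector, which is invisible for rotations but flips sign under reflections. A quick NumPy check with a reflection:

```python
import numpy as np

v, w = np.array([1., 2., 3.]), np.array([0., -1., 4.])
R = np.diag([-1., 1., 1.])   # a reflection in O(3), with det R = -1

# the cross product of two polar vectors transforms with an extra
# factor of det(R): the hallmark of an axial vector
lhs = np.cross(R @ v, R @ w)
print(np.allclose(lhs, np.linalg.det(R) * (R @ np.cross(v, w))))  # True
print(np.allclose(lhs, R @ np.cross(v, w)))                       # False
```

Running the same check with a rotation (so ${\det R = 1}$) makes both lines agree, which is why the polar/axial distinction only becomes visible once reflections are admitted into the structure group.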

In a similar vein, one can dimensionally analyse physical quantities in spacetime using the Poincaré group ${SO(3,1) \ltimes {\bf R}^{3+1}}$ as the structure group, in which case the principle of special relativity can be interpreted as the assertion that all physical quantities transform cleanly with respect to this group action (as opposed to having a hybrid transformation law, or one which privileges a specific reference frame). Similarly, if one enlarges the structure group to the diffeomorphism group of spacetime, one recovers the principle of general relativity (though an important caveat here is that if the spacetime is topologically non-trivial, one needs to work with local coordinate charts (or atlases of such charts, and the structure group needs to be relaxed to a looser concept, such as that of a pseudogroup) rather than a single global coordinate chart, which greatly complicates the parametrisation space). Using a group of gauge transformations instead as the symmetry group leads to a mathematical framework suitable for gauge theory; and so forth.

In the preceding examples, the structure groups were all examples of continuous Lie groups: ${({\bf R}^+)^3}$, ${GL_3({\bf R})}$, ${E(2)}$, and so forth. But even a discrete structure group can lead to a non-trivial capability for dimensional analysis. Perhaps the most familiar example is the use of the structure group ${\{-1,+1\}}$ as the range of the dimensional parameter ${L}$, leading to two types of scalars: symmetric scalars ${x = x_L}$, which are dimensionless (so ${x_{-1}=x_{+1}}$), and antisymmetric scalars ${x = x_L}$, which transform according to the law ${x_{-1} = -x_{+1}}$. A function ${f: {\bf R} \rightarrow {\bf R}}$ then transforms antisymmetric scalars to symmetric ones if the function is even, or to antisymmetric ones if the function is odd; this leads to the usual parity rules regarding combinations of even and odd functions, explaining for instance why in any trigonometric identity such as

$\displaystyle \sin(x+y) = \sin(x) \cos(y) + \cos(x) \sin(y),$

the number of odd functions (sine, tangent, cotangent, and their inverses) in each term has the same parity. (Incidentally, this particular discrete structure group ${\{-1,+1\}}$ can be combined with continuous structure groups, giving rise to various mathematical structures used in the theory of supersymmetry in physics.)
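One can spot-check this parity rule numerically: each term of the addition formula above is an odd function of the pair ${(x,y)}$ under the simultaneous sign flip ${(x,y) \mapsto (-x,-y)}$, since each term contains an odd number of odd factors. In Python:

```python
import math

# under the structure group {-1,+1} acting by x -> eps*x, sin is
# antisymmetric (odd) and cos is symmetric (even); each term of
# sin(x+y) = sin(x)cos(y) + cos(x)sin(y) has one odd factor in total
x, y = 0.37, 1.12   # arbitrary test values
for term in (lambda s: math.sin(s * x + s * y),
             lambda s: math.sin(s * x) * math.cos(s * y),
             lambda s: math.cos(s * x) * math.sin(s * y)):
    print(math.isclose(term(-1), -term(1)))  # each term is odd: True
```

An identity mixing terms of different parity would fail such a check at a generic point, which is the numerical shadow of its dimensional inconsistency with respect to this discrete structure group.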

— 2. The abstract approach —

The parametric approach described above is essentially the one emphasised in physics courses, in which any mathematical representation of a physical object includes a description of how that representation transforms with respect to elements of the relevant structure group, so that in effect all coordinatisations of the object are considered simultaneously. However, in pure mathematics it is often preferable to take a more abstract, “coordinate-free” approach, in which explicit use of coordinates is avoided whenever possible, in order to keep the mathematical framework as close as possible to the underlying geometry or physics of the situation, and also in order to allow for easy generalisation to other contexts of mathematical interest in which coordinate systems become inconvenient to use (as we already saw in the case of general relativity in topologically non-trivial situations, in which global coordinate systems had to be replaced by local ones), or completely unavailable (e.g. in studying more general topological spaces than manifolds). With this abstract, minimalistic approach, one models objects as elements of abstract structures in which only the bare minimum of operations (namely, the “geometric”, “physical”, or “dimensionally consistent” operations) are permitted, viewing any excess structure (such as coordinates) as superfluous ontological baggage to be discarded if possible.

We have already implicitly seen this philosophy when discussing the example of different coordinatisations of a three-dimensional vector space ${V}$. One can simply forget about coordinatisations altogether, and view ${V}$ as an abstract vector space ${V = (V,0,+,\cdot)}$, with the only operations initially available on ${V}$ being that of addition and scalar multiplication, which obey a standard set of compatibility axioms which we omit here. (Thus, for instance, the dot product of two vectors in ${V}$ would be undefined, due to the lack of a canonical inner product structure on this space.) The dual space ${V^*}$ of covectors is also a three-dimensional vector space, but because there is no canonical identification between ${V}$ and ${V^*}$, there is no way (with only these minimal structural operations) to equate a vector to a covector, or to add a vector to a covector. Thus, in this framework, dimensionally inconsistent operations are not just inconvenient to use; they are impossible to write down in the first place (unless one introduces some non-canonical choices, such as an identification of ${V}$ with ${{\bf R}^3}$).

One can add structures to, or remove structures from, this vector space ${V}$, depending on the situation. For instance, if one wants to remove the preferred origin from the vector space, one can work instead with the slightly weaker structure of an affine space, in which no preferred origin is present. In order to recover the operations of Euclidean geometry, one can then place a Euclidean metric on the affine space; in some situations, one may also wish to add an orientation or a Haar measure to the list of structures. By adding or deleting such structures, one changes the group of isomorphisms of the space (or the groupoid of isomorphisms between different spaces in the same category), which serves as the analogue of the structure group in the parametric viewpoint. For instance, for a three-dimensional vector space with no additional structure, the group of isomorphisms is ${GL_3({\bf R})}$ (or more precisely, the group is isomorphic to this matrix group). Adding a Euclidean metric on this space reduces the group to ${O_3({\bf R})}$, but then deleting the origin increases it again to ${E_3({\bf R})}$, and so forth. In the Kleinian approach to geometry as described by the Erlangen program, the group of isomorphisms plays a primary role; indeed, one can view any given type of geometry (e.g. Euclidean geometry, affine geometry, projective geometry, spherical geometry, etc.) as the study of all structures on a given homogeneous space that are invariant with respect to the action of the group of isomorphisms.

Many foundational mathematical structures (e.g. vector spaces, groups, rings, fields, topological spaces, measure spaces, etc.) are routinely presented in this sort of abstract, axiomatic framework, without reference to explicit coordinates or number systems. (But there are some exceptions; for instance, the standard definition of a smooth manifold (or a complex manifold, etc.) makes reference to an atlas of smooth coordinate charts, the standard definition of an algebraic variety or scheme makes reference to affine charts, and so forth.) One can apply the same abstract perspective to scalars, such as the length or mass of an object, by viewing these quantities as lying in an abstract one-dimensional real vector space, rather than in a copy of ${{\bf R}}$.

For instance, to continue the example of the ${M,L,T}$ system of dimensions from the previous section, we can postulate the existence of three one-dimensional real vector spaces ${V^M, V^L, V^T}$ (which are supposed to represent the vector space of possible masses, lengths, and times, where we permit for now the possibility of negative values for these units). As it is physically natural to distinguish between positive and negative masses, lengths, or times, we endow these one-dimensional spaces with a total ordering (obeying the obvious compatibility conditions with the vector space structure), so that these spaces are ordered one-dimensional real vector spaces. However, we do not designate a preferred unit in these spaces (which would identify each of them with ${{\bf R}}$).

We can then use basic algebraic operations such as tensor product to create further one-dimensional real vector spaces, without ever needing to explicitly invoke a coordinate system (except perhaps in the proofs of some foundational lemmas, though not in the statements of those lemmas). For instance, one can define ${V^{ML}}$ to be the tensor product ${V^M \otimes V^L}$ of ${V^M}$ and ${V^L}$, which can be defined categorically as the universal vector space with a bilinear product ${\cdot: V^M \times V^L \rightarrow V^{ML}}$ (thus any other bilinear product ${\tilde \cdot: V^M \times V^L \rightarrow W}$ must factor uniquely through the universal bilinear product). Note that we can canonically place an ordering on this tensor product ${V^{ML}}$ by declaring the tensor product of two positive quantities to again be positive. Similarly, one can define ${V^{T^{-1}}}$ as the dual space to ${V^T}$ (with a linear functional on ${V^T}$ being positive if it is positive on positive values of ${V^T}$), define ${V^{LT^{-1}}}$ as the tensor product of ${V^L}$ and ${V^{T^{-1}}}$, and so forth. This leads to a definition of ${V^{M^a L^b T^c}}$ for any integers ${a,b,c}$ (actually, to be pedantic, it leads to multiple definitions of ${V^{M^a L^b T^c}}$ for each ${a,b,c}$, but these definitions are canonically and naturally isomorphic to each other, and so by abuse of notation one can safely treat them as being a single definition). With a bit of additional effort (and taking full advantage of the one-dimensionality of the vector spaces), one can also define spaces with fractional exponents; for instance, one can define ${V^{L^{1/2}}}$ as the space of formal signed square roots ${\pm l^{1/2}}$ of non-negative elements ${l}$ in ${V^L}$, with a rather complicated but explicitly definable rule for addition and scalar multiplication.
(Such formal square roots do occasionally appear naturally in mathematical applications; for instance, half-densities (formal square roots of measures) arise naturally in the theory of Fourier integral operators. However, when working with vector-valued quantities in two and higher dimensions, there are representation-theoretic obstructions to taking arbitrary fractional powers of units (though the double cover of orthogonal groups by spin groups does allow for spinor-valued quantities whose “dimension” is in some sense the square root of that of a vector).)

If one views elements of ${V^{M^a L^b T^c}}$ as having dimension ${M^a L^b T^c}$, then operations which are dimensionally consistent are well-defined, but operations which are dimensionally inconsistent are not. For instance, one can multiply an element in ${V^{M^a L^b T^c}}$ with an element in ${V^{M^{a'} L^{b'} T^{c'}}}$ to obtain a product in ${V^{M^{a+a'} L^{b+b'}T^{c+c'}}}$, but one cannot canonically place this product in any other space ${V^{M^{\tilde a} L^{\tilde b} T^{\tilde c}}}$. One can compare two quantities ${x}$ and ${y}$ (i.e. decide if ${x=y}$, ${x<y}$, or ${x>y}$) if they lie in the same space ${V^{M^a L^b T^c}}$, but not if they lie in different spaces (particularly if one is careful to keep the origins ${0_{V^{M^a L^b T^c}}}$ of each of these vector spaces ${V^{M^a L^b T^c}}$ distinct from each other). And so forth. Subject to the requirement of dimensional consistency, all the usual laws of algebra continue to hold; for instance, the distributive law ${x(y+z)=xy+xz}$ continues to hold as long as ${y,z}$ have the same dimension. (The situation here is similar to that of a graded algebra, except that one does not permit addition of objects of different dimensions or gradings.) Thus, one expects many proofs of results that work in a dimensionless context to translate over to this abstract dimensionful setting. However, some results will not translate into this framework due to their dimensional inconsistency. For instance, the inequality (2) does not make sense if ${l}$ lies in ${V^L}$ and ${v}$ lies in ${V^{LT^{-1}}}$, and similarly the inequality (5) does not make sense for ${f}$ a nice function from ${V^L}$ to ${V^A}$. (Note that the usual definitions of integral and derivative (as limits of Riemann sums and Newton quotients respectively) can be extended to this abstract dimensionful setting, so long as one keeps track of the dimensionality of all objects involved.)
But if a statement is dimensionally consistent and can be proven in a dimensionless framework, then it should be provable in the abstract dimensionful setting as well (indeed, if all else fails, one can simply pick a set of coordinates to express all the abstract quantities here numerically, and then run the dimensionless argument). So, while the abstract framework is apparently less powerful than the parameterised framework due to the restricted number of operations available, in practice it can be a more useful framework to work in, as the operations that remain in the framework tend to be precisely the ones that one actually needs to solve problems (provided, of course, that one has chosen the right abstract formalism that is adapted to the symmetries of the situation).
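The abstract rules above are easy to emulate in software: the following toy Python class (an illustration only, not a serious units library) tracks the exponents ${(a,b,c)}$ of ${M^a L^b T^c}$ as rational numbers, so that multiplication adds exponents as in the tensor product construction, while dimensionally inconsistent addition is simply undefined:

```python
from fractions import Fraction

class Quantity:
    """A toy dimensionful scalar: a magnitude together with rational
    exponents (a, b, c) for the dimension M^a L^b T^c."""
    def __init__(self, value, a=0, b=0, c=0):
        self.value = value
        self.dim = (Fraction(a), Fraction(b), Fraction(c))

    def __add__(self, other):
        # addition is only defined within a single space V^{M^a L^b T^c}
        if self.dim != other.dim:
            raise TypeError("cannot add quantities of different dimension")
        return Quantity(self.value + other.value, *self.dim)

    def __mul__(self, other):
        # multiplication is always defined: exponents add, as in the
        # tensor product of the corresponding one-dimensional spaces
        return Quantity(self.value * other.value,
                        *(d + e for d, e in zip(self.dim, other.dim)))

mass = Quantity(2.0, a=1)      # an element of V^M
length = Quantity(3.0, b=1)    # an element of V^L
product = mass * length        # lands in V^{ML}
total = length + length        # still in V^L
# mass + length                # would raise TypeError: dimensionally inconsistent
```

Using `Fraction` for the exponents also accommodates fractional dimensions such as ${L^{1/2}}$ without floating-point ambiguity.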

It is possible to convert the abstract framework into the parametric one by making some non-canonical choices of a reference unit system. For instance, in the abstract ${M,L,T}$ dimensional system, after selecting a reference system of units ${M_0 \in V^M}$, ${L_0 \in V^L}$, ${T_0 \in V^T}$, one can then identify ${V^{M^a L^b T^c}}$ with ${{\bf R}}$ by identifying ${M_0^a L_0^b T_0^c}$ with ${1}$, so that any ${x \in V^{M^a L^b T^c}}$ gets identified with some real number ${\tilde x \in {\bf R}}$. For any ${M,L,T \in {\bf R}^+}$, one can then replace the units ${M_0,L_0,T_0}$ with rescaled units ${MM_0, LL_0, TT_0}$, which changes the identification of ${V^{M^a L^b T^c}}$ with ${{\bf R}}$, so that an element ${x \in V^{M^aL^bT^c}}$ is now identified with the real number

$\displaystyle x_{M,L,T} := \tilde x M^{-a} L^{-b} T^{-c}$

which is of course just (1). Thus we see that after selecting a reference unit system, one can convert an object ${x}$ which has dimension ${M^a L^b T^c}$ in the abstract framework into an object ${x_{M,L,T}}$ which has dimension ${M^a L^b T^c}$ in the parametric framework; conversely, every object ${x_{M,L,T}}$ that has dimension ${M^a L^b T^c}$ in the parametric framework arises from a unique object ${x}$ in the abstract framework (if one keeps the reference units ${M_0,L_0,T_0}$ fixed). Similarly for sets of objects, or functions between such sets. Note though that objects in the parametric framework that do not have a single dimension, but rather some hybrid of various dimensions, do not correspond to any particular object in the abstract setting, unless one performs some additional algebraic constructions in the latter setting, such as taking formal sums of spaces of different dimensionalities.
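This dictionary can be illustrated with formula (1) itself: after fixing reference units, a passive rescaling of those units simply multiplies the numerical value of each quantity by the appropriate powers of the rescaling factors. A small Python sketch (the unit choices below are illustrative assumptions, not part of the original formalism):

```python
def numeric_value(x_ref, a, b, c, M=1.0, L=1.0, T=1.0):
    # formula (1): the numerical value of a quantity of dimension
    # M^a L^b T^c after replacing the reference units (M0, L0, T0)
    # by the rescaled units (M*M0, L*L0, T*T0)
    return x_ref * M**(-a) * L**(-b) * T**(-c)

# a speed (dimension L T^{-1}) of 340 in metre/second-type reference
# units, re-expressed after enlarging the length unit by 1000 and the
# time unit by 3600 (i.e. kilometre/hour-type units)
print(numeric_value(340.0, 0, 1, -1, L=1000.0, T=3600.0))
```

Note that the exponents enter with a minus sign, reflecting the passive nature of the rescaling: enlarging the length unit shrinks the numerical value of a length.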

The need to select some arbitrary reference units ${M_0,L_0,T_0}$ in order to connect the abstract and parametric frameworks is a bit inelegant. One way to avoid this (which was alluded to previously) is to interpret ${M,L,T}$ not as scalars in ${{\bf R}}$, but rather as elements of the ${{\bf R}}$-torsors ${V^M}$, ${V^L}$, ${V^T}$ respectively. With this modification to the parametric framework, the reference units ${M_0,L_0,T_0}$ can now be omitted. On the other hand, by turning the parameters ${M,L,T}$ into abstractly dimensionful quantities instead of dimensionless scalars, one loses some of the power of the parametric model, namely the power to perform numerical operations even if they are dimensionally inconsistent, and so one may as well work entirely in the abstract setting instead.