
Mathematicians study a variety of different mathematical structures, but perhaps the structures that are most commonly associated with mathematics are the number systems, such as the integers ${{\bf Z}}$ or the real numbers ${{\bf R}}$. Indeed, the use of number systems is so closely identified with the practice of mathematics that one sometimes forgets that it is possible to do mathematics without explicit reference to any concept of number. For instance, the ancient Greeks were able to prove many theorems in Euclidean geometry, well before the development of Cartesian coordinates and analytic geometry in the seventeenth century, or the formal constructions or axiomatisations of the real number system that emerged in the nineteenth century (not to mention precursor concepts such as zero or negative numbers, whose very existence was highly controversial, if entertained at all, to the ancient Greeks). To do this, the Greeks used geometric operations as substitutes for the arithmetic operations that would be more familiar to modern mathematicians. For instance, concatenation of line segments or planar regions serves as a substitute for addition; the operation of forming a rectangle out of two line segments would serve as a substitute for multiplication; the concept of similarity can be used as a substitute for ratios or division; and so forth.

A similar situation exists in modern physics. Physical quantities such as length, mass, momentum, charge, and so forth are routinely measured and manipulated using the real number system ${{\bf R}}$ (or related systems, such as ${{\bf R}^3}$ if one wishes to measure a vector-valued physical quantity such as velocity). Much as analytic geometry allows one to use the laws of algebra and trigonometry to calculate and prove theorems in geometry, the identification of physical quantities with numbers allows one to express physical laws and relationships (such as Einstein’s famous mass-energy equivalence ${E=mc^2}$) as algebraic (or differential) equations, which can then be solved and otherwise manipulated through the extensive mathematical toolbox that has been developed over the centuries to deal with such equations.

However, as any student of physics is aware, most physical quantities are not represented purely by one or more numbers, but instead by a combination of a number and some sort of unit. For instance, it would be a category error to assert that the length of some object was a number such as ${10}$; instead, one has to say something like “the length of this object is ${10}$ yards”, combining both a number ${10}$ and a unit (in this case, the yard). Changing the unit leads to a change in the numerical value assigned to this physical quantity, even though no physical change to the object being measured has occurred. For instance, if one decides to use feet as the unit of length instead of yards, then the length of the object is now ${30}$ feet; if one instead uses metres, the length is now ${9.144}$ metres; and so forth. But nothing physical has changed when performing this change of units, and all of these lengths are considered equal to each other:

$\displaystyle 10 \hbox{ yards } = 30 \hbox{ feet } = 9.144 \hbox{ metres}.$
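The change-of-units computation above can be sketched in code (this is my own illustration, not part of the original discussion; the conversion table and function name are made up for this example): each unit is recorded as a fixed multiple of a base unit, and converting merely rescales the numerical value.

```python
# Conversion factors to a base unit (metres); 1 yard is exactly 0.9144 m.
# This table and the helper below are illustrative, not a real library API.
TO_METRES = {"yard": 0.9144, "foot": 0.3048, "metre": 1.0}

def convert(value, from_unit, to_unit):
    """Convert a length; the numerical value changes, the physical length does not."""
    return value * TO_METRES[from_unit] / TO_METRES[to_unit]

print(convert(10, "yard", "foot"))   # ~ 30 feet
print(convert(10, "yard", "metre"))  # 9.144 metres
```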

It is then common to declare that while physical quantities and units are not, strictly speaking, numbers, they should be manipulated using the laws of algebra as if they were numerical quantities. For instance, if an object travels ${10}$ metres in ${5}$ seconds, then its speed should be

$\displaystyle (10 m) / (5 s) = 2 ms^{-1}$

where we use the usual abbreviations of ${m}$ and ${s}$ for metres and seconds respectively. Similarly, if the speed of light ${c}$ is ${c=299 792 458 ms^{-1}}$ and an object has mass ${10 kg}$, then Einstein’s mass-energy equivalence ${E=mc^2}$ tells us that the energy content of this object is

$\displaystyle (10 kg) (299 792 458 ms^{-1})^2 \approx 8.99 \times 10^{17} kg m^2 s^{-2}.$
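The arithmetic here is easy to check numerically; the following sketch (variable names are mine) just reproduces the computation in SI units:

```python
c = 299_792_458   # speed of light in m s^-1 (exact, by definition of the metre)
m = 10            # mass in kg
E = m * c**2      # energy in kg m^2 s^-2, i.e. joules
print(f"E = {E:.3e} J")   # ~ 8.99e17 J, matching the figure above
```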

Note that the symbols ${kg, m, s}$ are being manipulated algebraically as if they were mathematical variables such as ${x}$ and ${y}$. By collecting all these units together, we see that every physical quantity gets assigned a unit of a certain dimension: for instance, we see here that the energy ${E}$ of an object can be given the unit of ${kg m^2 s^{-2}}$ (more commonly known as a Joule), which has the dimension of ${M L^2 T^{-2}}$ where ${M, L, T}$ are the dimensions of mass, length, and time respectively.

There is however one important limitation to the ability to manipulate “dimensionful” quantities as if they were numbers: one is not supposed to add, subtract, or compare two physical quantities if they have different dimensions, although it is acceptable to multiply or divide two such quantities. For instance, if ${m}$ is a mass (having the units ${M}$) and ${v}$ is a speed (having the units ${LT^{-1}}$), then it is physically “legitimate” to form an expression such as ${\frac{1}{2} mv^2}$, but not an expression such as ${m+v}$ or ${m-v}$; in a similar spirit, statements such as ${m=v}$ or ${m\geq v}$ are physically meaningless. This combines well with the mathematical distinction between vector, scalar, and matrix quantities, which among other things prohibits one from adding together two such quantities if their vector or matrix type are different (e.g. one cannot add a scalar to a vector, or a vector to a matrix), and also places limitations on when two such quantities can be multiplied together. A related limitation, which is not always made explicit in physics texts, is that transcendental mathematical functions such as ${\sin}$ or ${\exp}$ should only be applied to arguments that are dimensionless; thus, for instance, if ${v}$ is a speed, then ${\hbox{arctanh}(v)}$ is not physically meaningful, but ${\hbox{arctanh}(v/c)}$ is (this particular quantity is known as the rapidity associated to this speed).
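The bookkeeping described above can be made mechanical; here is a minimal sketch (my own illustration, not a standard library) in which a quantity carries a vector of exponents of ${M, L, T}$, addition demands matching dimensions, and multiplication adds exponents:

```python
class Quantity:
    """A number tagged with (mass, length, time) dimension exponents."""

    def __init__(self, value, dims):
        self.value = value
        self.dims = dims  # tuple of exponents, e.g. (0, 1, -1) for a speed

    def __add__(self, other):
        # Addition is only dimensionally consistent for matching dimensions.
        if self.dims != other.dims:
            raise ValueError("cannot add quantities of different dimensions")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        # Multiplication is always allowed; dimension exponents add.
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

m = Quantity(10, (1, 0, 0))   # a mass: dimension M
v = Quantity(3, (0, 1, -1))   # a speed: dimension L T^-1
p = m * v                     # momentum, dimension M L T^-1: legitimate
# m + v would raise ValueError: dimensionally inconsistent, as discussed above
```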

These limitations may seem like a weakness in the mathematical modeling of physical quantities; one may think that one could get a more “powerful” mathematical framework if one were allowed to perform dimensionally inconsistent operations, such as add together a mass and a velocity, add together a vector and a scalar, exponentiate a length, etc. Certainly there is some precedent for this in mathematics; for instance, the formalism of Clifford algebras does in fact allow one to (among other things) add vectors with scalars, and in differential geometry it is quite common to formally apply transcendental functions (such as the exponential function) to a differential form (for instance, the Liouville measure ${\frac{1}{n!} \omega^n}$ of a symplectic manifold can be usefully thought of as a component of the exponential ${\exp(\omega)}$ of the symplectic form ${\omega}$).

However, there are several reasons why it is advantageous to retain the limitation to only perform dimensionally consistent operations. One is that of error correction: one can often catch (and correct for) errors in one’s calculations by discovering a dimensional inconsistency, and tracing it back to the first step where it occurs. Also, by performing dimensional analysis, one can often identify the form of a physical law before one has fully derived it. For instance, if one postulates the existence of a mass-energy relationship involving only the mass of an object ${m}$, the energy content ${E}$, and the speed of light ${c}$, dimensional analysis is already sufficient to deduce that the relationship must be of the form ${E = \alpha mc^2}$ for some dimensionless absolute constant ${\alpha}$; the only remaining task is then to work out the constant of proportionality ${\alpha}$, which requires physical arguments beyond that provided by dimensional analysis. (This is a simple instance of a more general application of dimensional analysis known as the Buckingham ${\pi}$ theorem.)
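The dimensional-analysis step in the ${E = \alpha m c^2}$ example can be written out explicitly (my own toy illustration): seek exponents ${a, b}$ with ${[m^a c^b] = [E]}$, treating each quantity as a vector of ${(M, L, T)}$ exponents, so that the condition becomes a small (over-determined) linear system.

```python
mass   = (1, 0, 0)    # [m] = M
speed  = (0, 1, -1)   # [c] = L T^-1
energy = (1, 2, -2)   # [E] = M L^2 T^-2

# Matching the M component forces a, matching the L component forces b;
# the T component is then a consistency check (the toy case of the
# over-determined system behind the Buckingham pi theorem).
a = energy[0]   # only mass carries M, so a = 1
b = energy[1]   # only speed carries L, so b = 2
assert all(a * mi + b * si == ei for mi, si, ei in zip(mass, speed, energy))
print(f"E = alpha * m^{a} * c^{b}")
```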

Units and dimensional analysis have certainly proven to be very effective tools in physics. But one can ask whether these tools have a properly grounded mathematical foundation, both in order to settle any lingering unease about using them in physics, and in order to rigorously develop such tools for purely mathematical purposes (such as analysing identities and inequalities in such fields of mathematics as harmonic analysis or partial differential equations).

The example of Euclidean geometry mentioned previously offers one possible approach to formalising the use of dimensions. For instance, one could model the length of a line segment not by a number, but rather by the equivalence class of all line segments congruent to the original line segment (cf. the Frege-Russell definition of a number). Similarly, the area of a planar region can be modeled not by a number, but by the equivalence class of all regions that are equidecomposable with the original region (one can, if one wishes, restrict attention here to measurable sets in order to avoid Banach-Tarski-type paradoxes, though that particular paradox actually only arises in three and higher dimensions). As mentioned before, it is then geometrically natural to multiply two lengths to form an area, by taking a rectangle whose sides have the stated lengths, and using the area of that rectangle as the product. This geometric picture works well for units such as length and volume that have a spatial geometric interpretation, but it is less clear how to apply it to more general units. For instance, it does not seem geometrically natural (or, for that matter, conceptually helpful) to envision the equation ${E=mc^2}$ as the assertion that the energy ${E}$ is the volume of a rectangular box whose height is the mass ${m}$ and whose length and width are given by the speed of light ${c}$.

But there are at least two other ways to formalise dimensionful quantities in mathematics, which I will discuss below the fold. The first is a “parametric” model in which dimensionful objects are modeled as numbers (or vectors, matrices, etc.) depending on some base dimensional parameters (such as units of length, mass, and time, or perhaps a coordinate system for space or spacetime), and transforming according to some representation of a structure group that encodes the range of these parameters; this type of “coordinate-heavy” model is often used (either implicitly or explicitly) by physicists in order to efficiently perform calculations, particularly when manipulating vector or tensor-valued quantities. The second is an “abstract” model in which dimensionful objects now live in an abstract mathematical space (e.g. an abstract vector space), in which only a subset of the operations available to general-purpose number systems such as ${{\bf R}}$ or ${{\bf R}^3}$ are available, namely those operations which are “dimensionally consistent” or invariant (or more precisely, equivariant) with respect to the action of the underlying structure group. This sort of “coordinate-free” approach tends to be the one which is preferred by pure mathematicians, particularly in the various branches of modern geometry, in part because it can lead to greater conceptual clarity, as well as results of great generality; it is also close to the more informal practice of treating mathematical manipulations that do not preserve dimensional consistency as being physically meaningless.

In this post I would like to make some technical notes on a standard reduction used in the (Euclidean, maximal) Kakeya problem, known as the two ends reduction. This reduction (which takes advantage of the approximate scale-invariance of the Kakeya problem) was introduced by Wolff, and has since been used many times, both for the Kakeya problem and in other similar problems (e.g. by Jim Wright and myself to study curved Radon-like transforms). I was asked about it recently, so I thought I would describe the trick here. As an application I give a proof of the ${d=\frac{n+1}{2}}$ case of the Kakeya maximal conjecture.

In a recent Cabinet meeting, President Obama called for a $100 million spending cut in 90 days from the various federal departments as a sign of budget discipline. While this is nominally quite a large number, many people correctly pointed out that it is in fact a negligible fraction of the total federal budget; for instance, Greg Mankiw noted that the cut was comparable to a family with annual spending of $100,000 and a deficit of $34,000 deciding on a spending cut of $3.  (Of course, this is by no means the only budgetary initiative being proposed by the administration; just today, for instance, a change in the taxation law for offshore income was proposed which could potentially raise about $210 billion over the next ten years, or about $630 a year with the above scaling, though it is not yet clear how feasible or effective this change would be.)

I thought that this sort of rescaling (converting $100 million to $3) was actually a rather good way of comprehending the vast amounts of money in the federal budget: we are not so adept at distinguishing easily between $1 million, $1 billion, and $1 trillion, but we are fairly good at grasping the distinction between $0.03, $30, and $30,000.  So I decided to rescale (selected items in) the federal budget, together with some related numbers for comparison, by this ratio of 100 million to 3, to put various figures in perspective.

This is certainly not an advanced application of mathematics by any means, but I still found the results to be instructive.  The same rescaling puts the entire population of the US at about nine – the size of a (large) family – which is of course consistent with the goal of putting the federal budget roughly on the scale of a family budget (bearing in mind, of course, that the federal government is only about a fifth of the entire US economy, so one might perhaps view the government as being roughly analogous to the “heads of household” of this large family).  The implied (horizontal) length rescaling of $\sqrt{100 \hbox{ million}/3} \approx 5770$ is roughly comparable to the scaling used in Dubai’s “The World” (which is not a coincidence, if you think about it; the purpose of both rescalings is to map global scales to human scales).  Perhaps readers may wish to contribute additional rescaled statistics of interest to those given below.

One caveat: the rescaling used here does create some noticeable distortions in other dimensional quantities.  For example, if one rescales length by the implied factor of $\approx 5770$, but leaves time unrescaled (so that a fiscal year remains a fiscal year), then this will rescale all speeds by a factor of $\approx 5770$ also.  For instance, the circumference of the Earth has been rescaled to a manageable-looking 6.9 kilometres (4.3 miles), but the speed of, say, a commercial airliner (typically about 900 km/hr, or 550 mi/hr) is rescaled also, to a mere 150 metres (or 160 yards) or so per hour, i.e. two or three metres or yards per minute – about the speed of a tortoise.
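The two rescalings described above (money by the ratio 100 million : 3, horizontal lengths by the square root of that ratio) can be sketched as follows; the helper constants and the figure used for the Earth's circumference are my own illustrative choices:

```python
import math

MONEY_SCALE = 3 / 100e6                   # dollars: $100 million -> $3
LENGTH_SCALE = 1 / math.sqrt(100e6 / 3)   # horizontal lengths: ~ 1/5770

print(100e6 * MONEY_SCALE)        # the $100 million cut becomes $3
print(40075 * LENGTH_SCALE)       # Earth's circumference in km -> ~ 6.9 km
print(900 * LENGTH_SCALE * 1000)  # airliner speed in km/hr -> ~ 156 m/hr
```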

All amounts here are rounded to three significant figures (and in some cases, the precision may be worse than this).   I am using here the FY2008 budget instead of the 2009 or 2010 one, as the data is more finalised; as such, the numbers here are slightly different from those of Mankiw.  (For instance, the 2010 budget includes the expenditures for Iraq & Afghanistan, whereas in previous budgets these were appropriated separately.)  I have tried to locate the most official and accurate statistics whenever possible, but corrections and better links are of course welcome.

FY 2008 budget:

• Total revenue: $75,700
  • Individual income taxes: $34,400
  • Social security & other payroll taxes: $27,000
• Total spending: $89,500
  • Net mandatory spending: $48,000
    • Medicare, Medicaid, and SCHIP: $20,500
    • Social Security: $18,400
    • Net interest: $7,470
  • Net discretionary spending: $34,000
    • Department of Defense: $14,300
      • DARPA: $89
    • Global War on Terror: $4,350
    • Department of Education: $1,680
    • Department of Energy: $729
    • NASA: $519
    • Net earmarks: $495
    • NSF: $180
      • Maths & Physical Sciences: $37.50
• Budget deficit: $13,800
• Additional appropriations (not included in regular budget)
  • Iraq & Afghanistan: $5,640
• Spending cuts within 90 days of Apr 20, 2009: $3
• Air force NY flyover “photo shoot”, Apr 27, 2009: $0.01
• Additional spending cuts for FY2010, proposed May 7, 2009: $510
• Projected annual revenue from proposed offshore tax code change: $630

Other figures (for comparison)

• National debt held by public, 2008: $174,000
  • Held by foreign/international owners, 2008: $85,900
• National debt held by government agencies, 2008: $126,000
  • Held by OASDI (aka the (cumulative) “Social Security surplus”): $64,500
• National GDP, 2008: $427,000
• National population, 2008: 9
  • GDP per capita, 2008: $47,000
  • Land mass: 0.27 sq km (0.1 sq mi, or 68 acres)
• World GDP, 2008: $1,680,000
• World population, 2008: 204
  • GDP per capita, 2008 (PPP): $10,400
  • Land mass: 4.47 sq km (1.73 sq mi)
• World’s richest (non-rescaled) person: Bill Gates (net worth $1,200, March 2009)
• 2008/2009 bailout package (TARP): $21,000 (maximum)
  • Amount spent by Dec 2008: $7,410
  • AIG bailout from TARP: $1,200
    • AIG Federal Reserve credit line: $4,320
    • AIG bonuses in 2009 Q1: $4.95
  • GM & Chrysler loans: $552
• 2009/2010 stimulus package (ARRA): $23,600
  • Tax cuts (spread out over 10 years): $8,640
  • State and local fiscal relief: $4,320
  • Education: $3,000
    • “Race to the top” education fund: $150
  • Investments in scientific research: $645
  • Pandemic flu preparedness: $1.50 (was initially $27, before being dropped from the FY2008 and FY2009 budgets)
    • Additional request after the A(H1N1) (“swine flu”) outbreak, Apr 28: $45
  • Volcano monitoring: $0.46 (erroneously reported as $5.20)
  • Salt marsh mouse preservation (aka “Pelosi’s mouse“): $0.00 (erroneously reported as $0.90)
• Market capitalization of NYSE
  • May 2008 (peak): $506,000
  • March 2009: $258,000
  • Largest company by market cap: Exxon Mobil (approx. $10,000, Apr 2009)
• Value of US housing stock (2007): $545,760
  • Total value of outstanding mortgages (2008): $330,000
    • Total value of sub-prime mortgages outstanding (2007 est.): $39,000
  • Total value of mortgage-backed securities (2008): $267,000
• Credit default swap contracts, total notional value:
• US trade balance (2007)