I have recently finished a draft version of my blog book “Poincaré’s legacies: pages from year two of a mathematical blog“, which covers all the mathematical posts from my blog in 2008, excluding those posts which primarily originated from other authors or speakers.
The draft is much longer – 694 pages – than the analogous draft from 2007 (which was 374 pages using the same style files). This is largely because of the two series of course lecture notes which dominate the book (and inspired its title), namely on ergodic theory and on the Poincaré conjecture. I am talking with the AMS staff about the possibility of splitting the book into two volumes, one focusing on ergodic theory, number theory, and combinatorics, and the other focusing on geometry, topology, and PDE (though there will certainly be miscellaneous sections that will be divided more or less arbitrarily between the two volumes).
The draft probably also needs an index, which I will attend to at some point before publication.
As in the previous book, those comments and corrections from readers which were of a substantive and mathematical nature have been acknowledged in the text. In many cases, I was only able to refer to commenters by their internet handles; please email me if you wish to be attributed differently (or not to be attributed at all).
Any other suggestions, corrections, etc. are, of course, welcome.
I learned some technical tricks for HTML to LaTeX conversion which made the process significantly faster than last year’s, though still rather tedious and time-consuming; I thought I would share them below, as they may be of use to anyone else contemplating a similar conversion.
Last year, I converted each post to LaTeX separately, which resulted in a lot of redundant search-and-replaces. This time around, I first dumped all the posts (in HTML format) into a single massive text file (1.7 MB!). Then I moved linearly through the file, and every time I saw a conversion that was needed, I executed a global search-and-replace (if the conversion could be automated), or at least a global search (if it required manual editing). (In the most obvious such searches, e.g. changing <li> to \item, there were literally thousands of such replacements; each one thus saved me many tedious minutes of work.)
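The automatable replacements above can be collected into a single script run over the whole dump. The following is a minimal sketch (not the actual workflow, which used an editor’s search-and-replace); the particular HTML-to-LaTeX rules shown are illustrative examples, with only the <li> → \item rule taken from the text.

```python
import re

# Illustrative global HTML-to-LaTeX replacements, applied in order to
# the entire dump of posts at once.  (Only the <li> rule is from the
# post; the others are plausible companions.)
REPLACEMENTS = [
    (r"<li>", r"\\item "),               # list items
    (r"</li>", ""),                      # closing tags become nothing
    (r"<ul>", r"\\begin{itemize}"),
    (r"</ul>", r"\\end{itemize}"),
    (r"<em>(.*?)</em>", r"\\emph{\1}"),  # emphasis
]

def convert(text: str) -> str:
    """Apply each automated replacement globally, in order."""
    for pattern, repl in REPLACEMENTS:
        text = re.sub(pattern, repl, text)
    return text
```

Running each rule globally over one big file, rather than per post, is what saves the thousands of individual replacements mentioned above.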
In some cases I had to do several passes to get the conversion done correctly. For instance, in the HTML many short mathematical expressions (e.g. a single symbol such as f or g) were often not marked up, so I would do a global search-and-replace to change, e.g., ” f ” to ” $f$ “. But this would occasionally mess up existing LaTeX expressions, such as $B( f , g )$, and I would then have to do a second pass, e.g. converting ” $f$ ,” back to ” f ,”. My rule of thumb was that such “false positives” were acceptable as long as their rate of occurrence was significantly less than 50%, so that the end result of the search-and-replace was closer to my goal than the start.
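The two-pass idea above can be sketched in a few lines. This is a simplified illustration of the principle, not the actual procedure: pass 1 wraps the bare symbol in math mode, and pass 2 undoes the most common false positive, here the one inside an expression like $B( f , g )$.

```python
def mark_up(text: str) -> str:
    # Pass 1: wrap a bare, space-delimited symbol in math mode,
    # turning " f " into " $f$ " everywhere.
    text = text.replace(" f ", " $f$ ")
    # Pass 2: undo a common false positive, where pass 1 has fired
    # inside an existing LaTeX expression such as $B( f , g )$,
    # by converting " $f$ ," back to " f ,".
    text = text.replace(" $f$ ,", " f ,")
    return text
```

As long as pass 2’s own false positives are rarer than the errors it fixes, each pass moves the text strictly closer to the goal, which is the rule of thumb described above.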
To mitigate the false-positive problem, I went through the text linearly; each section, once cleaned up and fully converted, was moved into a separate text file, so that further global search-and-replaces could not disturb the portions of the text that had already been dealt with.
Perhaps the two most difficult conversion tasks, which could not be automated much at all, were the conversion of hypertext links into more traditional bibliographical citations, and the handling of Wikipedia links. For the former, I found that if a paper was cited multiple times, then a search-and-replace did cut down on the work required. Wikipedia links, unfortunately, had to be largely abandoned, although I did adopt a convention that any technical term introduced with only a cursory definition would be italicised, to indicate that more information about that term could be obtained from an external source, such as Wikipedia.