On limits
One thing to keep in mind when using mathematics to describe physical reality is that you have to be very careful about taking limits. In pure mathematics it's generally acceptable to send a variable off to infinity: m → ∞. But if you are using mathematics to describe physical reality, then m might have 'dimensions' (aka 'units'), and sending a dimensionful number off to dimensionless infinity (or zero) can give you weird results.
Generally, if, say, m is a mass (with units of kilograms) the only way you can send it off to infinity or zero is to have another scale (say, another mass M with units of kilograms) to compare it to. You can send m/M → ∞ or m/M → 0 ... or better yet, as physicists tend to put it: m/M >> 1 or m/M << 1 (i.e. m >> M or m << M).
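To make the bookkeeping concrete, here's a minimal sketch in Python using the pint units library (the numbers are made up; the point is what the units allow):

```python
# A minimal sketch (my example): units make it obvious why "m -> infinity"
# is ill-posed for a dimensionful quantity while a ratio m/M is fine.
import pint

ureg = pint.UnitRegistry()

m = 0.005 * ureg.kilogram   # some small mass
M = 1000.0 * ureg.kilogram  # a comparison scale

ratio = m / M
print(ratio.dimensionless)   # True -- safe to compare to 1
print(ratio.magnitude < 1)   # True -- i.e. m << M

# "m -> infinity" on its own is meaningless: the number attached to m
# depends entirely on the units you picked.
print(m.to(ureg.gram))       # 5.0 gram -- same mass, different number
```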
I do my best to be very careful about this (though I've probably made some mistakes). However, I don't think economists care about this at all. For example, Paul Romer and Robert Lucas both take limits where time T and a time scale (1/β, where β is a rate) go off to infinity simultaneously. In pure math this raises an issue of almost uniform convergence: the answer depends on the order in which the two limits are taken. But if you're using math to describe physical reality, it is nonsense to send these two dimensionful scales off to dimensionless infinity simultaneously in the first place. An econ student (possibly, since it was EJR) also had no idea about this, which makes me think this attitude is very widespread. The only sensible things you can look at are T β >> 1 or T β << 1. Other limits are nonsense.
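To see the problem in miniature, take a generic stand-in (my toy example, not the actual Lucas or Romer expressions): the discounted integral ∫₀ᵀ e^(−βt) dt = (1 − e^(−βT))/β. It has units of time, so it can always be written as T·g(βT) for a dimensionless function g, and the 'simultaneous limit' gives whatever answer the path through (T, β) happens to pick:

```python
# Toy stand-in (not the actual Lucas/Romer expressions):
# f(T, beta) = integral of exp(-beta*t) from 0 to T
#            = (1 - exp(-beta*T)) / beta = T * g(beta*T).
# Send T -> infinity and 1/beta -> infinity "simultaneously" and the
# answer depends entirely on the dimensionless product beta*T.
import numpy as np

def f(T, beta):
    return (1.0 - np.exp(-beta * T)) / beta

for T in [1e2, 1e4, 1e6]:
    # Path 1: hold beta*T = 0.01 (T*beta << 1) -> f ~ T
    # Path 2: hold beta*T = 100  (T*beta >> 1) -> f ~ 1/beta = T/100
    print(f(T, 0.01 / T) / T, f(T, 100.0 / T) / T)
    # prints ~0.995 and ~0.01 on every pass: same "limit", different answers
```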
Nick Rowe doesn't seem to have a problem with sending dimensionful numbers to dimensionless limits either (which I wrote about yesterday): he sends the time step dT to zero when he should really be looking at its relationship to the other physical scale in his theory -- the delay dt before the end of fiscal stimulus sets in. This carelessness creates nonsense results.
Noah Smith, John Cochrane and Michael Woodford (see Noah for one-stop shopping, and see slide 30 of Woodford's presentation [pdf] [1]) all make this same error when talking about neo-Fisherism in terms of permanent rate pegs and expectations infinitely far in the future. And if Woodford is considered the Ed Witten of economics (per Noah in the link above), that doesn't bode well for any economist knowing how to use math to describe reality. It also makes me think the whole neo-Fisherite view may just be an artefact of poor dimensional analysis (something that I will be looking into ... )
The problem is that economists don't see this as an issue. And they don't seem to take kindly to physicists trying to tell them what to do. When Chris House says "[Physicists'] mathematical abilities are actually not that much better than most economists (if they are better at all)", my spidey sense starts suggesting the reason is probably Dunning-Kruger [2]. The second paper on the list on his website ("Layoffs, Lemons and Temps") has individual firms with "production function[s] with the usual properties", which in a footnote contains two nonsense limits where the number of workers n goes to zero and to infinity ... treated as if they're everyday assumptions (even relegated to a footnote). They are in fact everyday assumptions in economics (called the Inada conditions)! The only sensible limits in that case would be (for instance) n/N >> 1 or n/N << 1, where N is the number of firms (in English: a few firms have a large number of workers, versus many firms have a small number of workers). Another way would be to compare to the population size (does everyone work for a few companies, n/P ~ 1, or do very few people work for the same company, n/P << 1?). I don't think this impacts the results of House's paper, but it is careless mathematics. At least the Inada conditions are described in terms of a pure mathematical function with dimensionless inputs.
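For concreteness (the paper only says "the usual properties", so f(n) = √n is my stand-in), here's what those footnote limits look like. As limits of a pure mathematical function they're fine; the trouble only starts when n is a dimensionful number of workers with no comparison scale:

```python
# Stand-in production function f(n) = sqrt(n) (my choice; the paper
# only says "the usual properties"). The Inada conditions are limits
# on a pure function:
from sympy import Rational, diff, limit, oo, symbols

n, N = symbols('n N', positive=True)
f = n ** Rational(1, 2)

print(limit(diff(f, n), n, 0))    # oo : f'(n) -> infinity as n -> 0
print(limit(diff(f, n), n, oo))   # 0  : f'(n) -> 0 as n -> infinity

# Dimensionally sensible version: make the argument the ratio n/N
# (workers per firm scale), so "n -> 0" actually means n/N << 1.
g = (n / N) ** Rational(1, 2)
print(limit(diff(g, n), n, 0))    # still oo, but now the limit is a
                                  # statement about n relative to N
```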
Update 11/9/2015:
Even if the reason for the finite number of belief updates in footnote [1] is that they cost some amount of money dm (or that people are just n-smart), you still can't send dimensionful time T to dimensionless infinity when things happen in your model that take a finite amount of time (or have some finite timescale, such as the decaying functions on Woodford's slide 26). The version where revisions take a finite time dt is just one possible way to make the limit make sense -- not necessarily the only way. It's sort of like the example above with Chris House's paper, where I used N and P. The issue is that there has to be some scale that the number of workers n is large or small compared to, be it the number of firms or the total population, and there has to be some timescale that the time T is long compared to.
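To illustrate the point in footnote [1] (this is my toy construction, not Woodford's actual model): suppose each belief revision closes half the remaining gap to the pegged-rate expectation and takes a time dt, so only the revisions that fit inside the peg horizon T actually happen. The outcome is controlled by the dimensionless ratio n dt/T, not by n or T separately:

```python
# Toy construction (not Woodford's model): each revision closes half
# the remaining gap and takes dt, so at most T/dt revisions fit inside
# the peg horizon T. The result depends only on n*dt/T.
def final_gap(n, dt, T, step=0.5):
    completed = min(n, int(T / dt))  # revisions that finish before the peg ends
    gap = 1.0                        # initial distance from fully revised beliefs
    for _ in range(completed):
        gap *= (1.0 - step)
    return gap

# n*dt/T >> 1: the peg ends first; beliefs are barely revised
print(final_gap(n=1000, dt=1.0, T=3.0))     # 0.125

# n*dt/T << 1: all revisions finish inside the peg; beliefs converge
print(final_gap(n=10, dt=1.0, T=1000.0))    # ~0.001
```

Taking n → ∞ and T → ∞ together just hands the answer to whichever path through (n, T) you happened to choose.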
Footnotes:
[1] Woodford takes two limits, n → ∞ and T → ∞, where n is the number of 'belief revisions'; these revisions obviously take some finite time dt (or else you could do an infinite number of revisions instantaneously), so the only sensible limits (in the sense of using math to describe reality) are n dt/T >> 1 or n dt/T << 1. The first says that belief revisions take longer than the time horizon of the interest rate peg (interest rates stop being pegged before you fully revise your beliefs -- which doesn't seem like a very long peg); the second says you revise your beliefs to a high order before the interest rate peg ends (which actually makes more sense). The limits that Woodford takes (n dt → ∞ and T → ∞, whether simultaneously or in either order) don't make any sense.
[2] I've sort of come through the rabbit hole on this one. In my first forays into econ, I basically thought economists were just fine with math. I tended to defend econ from usurping physicists (here and here). But these limit problems (in a paper by the preeminent economist Michael Woodford, no less), coupled with Paul Romer's diatribe against mathiness and Nick Rowe on the RCK model, make me think that maybe economists don't really know what they are talking about.