The St. Petersburg Paradox came up more than a few times over the past several days and I think it is the perfect place to talk about what we really mean by infinity. The “paradox” itself is the result of a thought experiment using a gambling game with an infinite expected payout. You flip a coin and the pot doubles each time heads comes up. As soon as tails comes up, you get what's in the pot. The question is: how much should you pay to play this game? The naïve calculation of the expected payout E is infinite:
E = (1/2) · 2 + (1/4) · 4 + (1/8) · 8 + ...
E = 1 + 1 + 1 + ...
E = ∞
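If you'd rather see the divergence than compute it, here is a minimal simulation sketch (assuming, to match the terms above, that the pot starts at $2, so a first tail on flip k pays out $2^k):

```python
import random

def play_once():
    """One game: the pot starts at $2 and doubles on each head; the first tail pays it out."""
    pot = 2
    while random.random() < 0.5:  # heads
        pot *= 2
    return pot

# The sample average never settles down to a finite value; it creeps upward
# (roughly like log2 of the number of games) as rare long runs of heads show up.
random.seed(0)
for n in (1_000, 100_000, 1_000_000):
    print(n, sum(play_once() for _ in range(n)) / n)
```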
Rationally you should be willing to pay any amount to play because the expected payoff is infinite. However, people don't, and there have been several “answers” to this problem over the few hundred years it's been around in this form. This is where I usually inject my joke:
E = (1/2) · 2 + (1/4) · 4 + (1/8) · 8 + ...
E = 1 + 1 + 1 + ...
E = ζ(0)
E = −1/2
You should expect to lose $0.50 (assuming the units are dollars). The only way to win is not to play.
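(If you want to check the joke numerically, here is a quick sketch using mpmath, which implements the analytically continued zeta function; any library with an analytic continuation of ζ(s) would do.)

```python
from mpmath import zeta

# The divergent series 1 + 1 + 1 + ... is formally sum_{n >= 1} n**(-s) at s = 0.
# That series only converges for Re(s) > 1, but zeta(s) analytically continues it.
print(zeta(0))  # -0.5, i.e. "expect to lose $0.50"

# The naive partial sums, by contrast, just grow without bound:
print(sum((0.5**k) * 2**k for k in range(1, 21)))  # 20.0 after 20 terms
```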
The typical (boring) resolutions are either 1. logarithmic utility/Kelly criterion-esque methods that render the sum finite, or 2. meta-arguments against the game itself (“no one would be able to credibly offer a game with an infinite payout”). The former is a thing we often do in particle physics called regularization — changing something in the theory to make the expected values of observables finite. Put a pin in that as I want to come back to it in a minute. First, it’s important to note that we humans came up with the game; we humans came up with the infinity; and we humans came up with how to deal with the infinity. We have something very much like the “stick in your own bike spokes” meme.
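Before coming back to that pin, here is resolution 1 made concrete as a sketch (assuming Bernoulli-style log utility and ignoring the player's initial wealth): the expected log payout converges to 2·ln 2, which corresponds to a certainty equivalent of about $4.

```python
import math

# Resolution 1, sketched: weight each payout 2**k by its probability 2**-k,
# but value it with log utility, log(2**k) = k*log(2). The sum now converges.
expected_log_payout = sum((0.5**k) * k * math.log(2) for k in range(1, 60))
print(expected_log_payout)            # ~1.386, i.e. 2*ln(2)
print(math.exp(expected_log_payout))  # certainty equivalent of about $4
```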
Infinity is not a real thing we encounter in Nature — it is a mathematical tool we invented, not Nature, and we are the ones who come up with the fixes when the tool goes awry, not Nature. That is to say, it is a model element like a parameter or an HP filter. In much the same way that a modeler sets up a system of equations in order to match the way they see the data, we are modelers setting up an agent when we select one of the various methods of dealing with the infinity in the St. Petersburg paradox.
Back to that pin — in physics, regulating the infinity is typically parameterizing our ignorance. We don’t know what the quantum theory of gravity looks like at the Planck scale, so we have good reason to cut off infinite sums[1] at some high energy/short wavelength scale. The theory is an effective theory at lower energies, and our cutoff sets the scale of our theory and acts as a firewall that allows new ideas to proliferate. Newtonian mechanics is a useful effective theory for action (energy × time) S >> ℏ — the scale set by quantum mechanics.
However, the issue in social sciences like economics is that regulating the infinity is often parameterizing how humans think. How many moves ahead do people look? What time horizon[2] do people consider? In markets, it’s often the tails of distributions that matter most. Figuring out how to take out the infinity (that you put in the problem) is exactly the problem you are trying to understand. It’s also the entire content of the St. Petersburg paradox.
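Here is a sketch of what that looks like for the St. Petersburg game itself: truncate it after N rounds (because the bank runs out of money, or because people simply don't think further ahead than that) and the expected value is about N dollars. The "fair price" is almost entirely the cutoff you chose, which is to say, your model of how people think.

```python
def expected_value_with_cutoff(n_rounds):
    """Expected payout if the game is cut off after n_rounds flips.

    Convention (an assumption): if heads comes up n_rounds times in a row,
    the player just takes the capped pot of 2**n_rounds dollars.
    """
    ev = sum((0.5**k) * 2**k for k in range(1, n_rounds + 1))  # each term is $1
    ev += (0.5**n_rounds) * 2**n_rounds                        # the capped tail
    return ev

# The answer is set by where you put the cutoff, not by the game itself:
for n in (10, 20, 30, 40):  # a cap of 2**30 is roughly a billion-dollar bank
    print(n, expected_value_with_cutoff(n))  # comes out to n + 1 dollars
```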
Infinity and how you deal with it is always a modeling choice, but it is more so in social sciences because people are people at t = 1, t = 100, and t = 10,000[3]. There are no new emergent galaxy brain takes at t >> T with new degrees of freedom. Those galaxy brain takes are (unfortunately) here with us today[4]. Therefore, you have to be extra careful not to just assume it away. You also have to be extra careful not to get too high on your own supply — believing that your model of how humans think, based on your own experience with your own brain[5], is the “correct” solution. In my opinion, it's also unlikely there is any single solution[6] — for example, humans can be more shortsighted when panicking. The most important things to remember when you encounter infinity in a problem in social science are: you put it there, you’re the one who is taking it out, and how you do it is a model (and only a model) of how humans think.
As always, there’s a lot of nuance here that makes up a huge chunk of quantum field theory class in grad school. The way you regulate infinity should not, e.g., break symmetries in your system, and naïve momentum cutoffs aren’t Lorentz invariant. So you do weird stuff like zeta function regularization (my joke at the top of this post), dimensional regularization (we live in 4 + ε dimensions), fake particles, etc.
Wrote a post on this on the old blog that I regret not titling “angels dancing at the end of time”. In it, I show how the choice of what’s happening infinitely far in the future has an outsized impact on the present.
Following econ tradition, time steps do not have units. Cringe.
It would be funny to write an econ paper that asserts a temporal cutoff based on the end of capitalism.
Or your own field of expertise. It’s wild how often a proposed solution to things in econ is based in biology when a biologist is talking about it, physics when a physicist is talking about it, or engineering when an engineer is talking about it (sarcasm).
For what it’s worth, information equilibrium does not make any claims about what agents think the eventual payoff is and instead would treat the bet as just another option in their multi-dimensional consumption basket constrained by their budget hyperplane. Since there are hundreds of items in each agent’s consumption basket, the average agent would spend some small fraction of their available budget on it — just as they would on blueberries, lottery tickets, or beer. This is generally consistent with asking people what they would spend (e.g. here).