The Efficiency of Random Parking


   At dinner recently, a friend mentioned the following problem:

    There is a street of length x (not necessarily an integer) on which cars of length 1 wish to park.  However, instead of parking in a nice organized way, they park at random, each picking uniformly from the available positions (they are apparently jerks).  Assuming no cars ever leave, and cars continue to arrive until no more can fit, what is the expected number of cars that will fit?

     For instance, if the street is length 2, then the first car will almost always park so that no other can fit, and so the expected number of cars is 1.  If the street is length 3, the first car can never prevent a second from fitting, but a third car will almost never fit, and so the expected value is 2.  The trend of simple answers breaks at street length 4, where the expected number of cars is \frac{11}{3}-\frac{4}{3}\ln(2).  This came up as a fun, back-of-the-envelope type problem, but the answer is actually an iterated integral that becomes prohibitively difficult after a few steps.  Perhaps an intrepid reader can solve more steps than I was able to.
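These small cases can be sanity-checked by direct simulation. Here is a quick sketch (the function `simulate`, its trial count, and the gap-tracking scheme are mine, not part of the problem statement): park unit cars at positions chosen uniformly from all still-available spots, and average the final count.

```python
import random

def simulate(x, trials=20000):
    """Monte Carlo estimate of E(x): unit cars park at positions chosen
    uniformly from all spots still available, until none remain."""
    total = 0
    for _ in range(trials):
        gaps = [x] if x >= 1 else []   # empty stretches that can still fit a car
        cars = 0
        while gaps:
            # choose a gap with probability proportional to the room in it,
            # which makes the car's position uniform over all feasible spots
            weights = [g - 1 for g in gaps]
            r = random.uniform(0, sum(weights))
            for i, w in enumerate(weights):
                if r <= w:
                    break
                r -= w
            g = gaps.pop(i)
            pos = random.uniform(0, g - 1)    # left end of the new car in this gap
            cars += 1
            for piece in (pos, g - 1 - pos):  # leftover stretches on either side
                if piece >= 1:
                    gaps.append(piece)
        total += cars
    return total / trials

print(simulate(4))   # should hover near 11/3 - (4/3) ln 2 ≈ 2.742
```

On streets of length 2 and 3 the simulated averages sit at 1 and 2 as claimed, and length 4 lands near 2.742, up to Monte Carlo noise.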

    The name of the game is finding the function E(x), the expected value of the number of cars for a street of length x.  We can find a relation this function has to satisfy in the following way.   

     Let's say the first car parks with its left end at x_1 (so it occupies the interval from x_1 to x_1+1), dividing the previously empty street into two pieces of length x_1 and x-1-x_1.  Hence, the expected number of cars that park, given the exact place the first car parks, is 1+E(x_1)+E(x-1-x_1).  We can get the overall expected value (that is, without a fixed first-car location) by averaging over all possible starting locations:

E(x)=\frac{1}{x-1}\int_0^{x-1}\left(1+E(x_1)+E(x-1-x_1)\right)dx_1

Of course, the integral splits apart into three pieces:

E(x)=1+\frac{1}{x-1}\int_0^{x-1}E(x_1)\,dx_1+\frac{1}{x-1}\int_0^{x-1}E(x-1-x_1)\,dx_1

Notice that the second integral becomes the first after the change of variables x_1\rightarrow x-1-x_1, and so we get:

(*) E(x)=1+\frac{2}{x-1}\int_0^{x-1}E(x_1)dx_1

We should note that this formula is only valid for x\geq 1, since we implicitly assumed the first car could park.

    This equation is useful because of its inductive flavor.  To compute E(x) for some fixed x requires only knowledge of E(x_1) for x_1\leq x-1, and we can start things off with the simple observation that E(x)=0 for x<1.
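This inductive structure also makes E easy to tabulate numerically. A minimal sketch (the grid scheme, the trapezoid rule, and the step size are my choices, not part of the argument): march up a fine grid, applying equation (*) at each point using the already-computed values below x-1.

```python
def tabulate_E(x_max, h=1e-3):
    """Tabulate E(k*h) for k = 0..x_max/h straight from equation (*),
    with E(x) = 0 for x < 1 as the base case and the trapezoid rule
    for the integral.  Needs 1/h to be an integer so x-1 stays on grid."""
    spu = int(round(1 / h))                 # grid steps per unit length
    n = int(round(x_max / h))
    E = [0.0] * (n + 1)                     # E[k] ~ E(k*h); zero below x = 1
    I = [0.0] * (n + 1)                     # I[k] ~ integral of E over [0, k*h]
    for k in range(1, n + 1):
        if k > spu:                         # x > 1: apply equation (*)
            E[k] = 1 + 2 / (k * h - 1) * I[k - spu]
        elif k == spu:                      # x = 1: a single car fits exactly
            E[k] = 1.0
        I[k] = I[k - 1] + 0.5 * h * (E[k - 1] + E[k])
    return E

E = tabulate_E(4.0)
print(E[2000], E[3000], E[4000])   # E(2) ≈ 1, E(3) ≈ 2, E(4) ≈ 2.742
```

The printed values reproduce E(2)=1, E(3)=2, and E(4)=\frac{11}{3}-\frac{4}{3}\ln(2)\approx 2.742 to within the grid error.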

     Computationally, this amounts to plugging equation (*) into itself over and over again.  Let's see how this works for one step.

E(x)=1+\frac{2}{x-1}\int_1^{x-1}\left(1+\frac{2}{x_1-1}\int_0^{x_1-1}E(x_2)\,dx_2\right)dx_1

Note that the lower limit of integration becomes 1, because equation (*) was only valid for x\geq 1; similarly, the above equation is only valid for x\geq 2.

    We can repeat this process as many times as we like (always being careful with the lower limit of integration and the range of validity in x).  After every step, the only term with a reference to E will be the last one, a big nested integral of E(x_n).  Ah, but x_n must be at least 1 less than x_{n-1}, and so on, so x_n is at least n less than x.  Therefore, if n>x-1, then x_n<1, and so the big nested integral will be zero.  Hence, we can get an expression for E(x) which involves no references to itself (as long as we know which integers x is between).

    Here's the result (for x>1):

    E(x) =1+\sum_{i=1}^{[x]}\int_i^{x-1}\int_{i-1}^{x_1-1}\cdots\int_1^{x_{i-1}-1}\frac{2^i\,dx_i\cdots dx_1}{(x-1)(x_1-1)\cdots(x_{i-1}-1)}

Now all the trickiness is hidden in evaluating the integral.  It's easy for the first few steps:

\int_1^{x-1}\frac{2\,dx_1}{x-1}=\frac{2(x-2)}{x-1}

\int_2^{x-1}\int_1^{x_1-1}\frac{4\,dx_2\,dx_1}{(x-1)(x_1-1)}=\frac{4}{x-1}\int_2^{x-1}\frac{x_1-2}{x_1-1}\,dx_1=\frac{4\big((x-3)-\ln(x-2)\big)}{x-1}

Thus, we can see that for x=4,

E(4)=1+\frac{2\cdot 2}{3}+\frac{4(1-\ln 2)}{3}=\frac{11}{3}-\frac{4}{3}\ln(2)

    However, the next term of E(x) has resisted all of my attempts to integrate, and I haven't found it in any of the integration tables I looked up.  It's possible it isn't solvable in closed form, which would be a sad end to an otherwise interesting problem.

    A related question is the behavior of E(x) for really large x.  It seems intuitively likely that if the street is very long, the complicated boundary effects that determine the precise expected value will stay relatively small, and E(x) will look like a simple linear function \lambda x for some number \lambda.  This number \lambda would measure the macroscopic efficiency of this random parking method.  It would be a fun number to know, if it exists, but I can't see how to find it.
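One can at least estimate \lambda numerically. A sketch under the assumptions above (the grid tabulation of equation (*) and the step size are mine; I also use the fact, easy to check, that \lambda x+\lambda-1 solves (*) exactly, which suggests comparing E(x)+1 against \lambda(x+1)):

```python
def tabulate_E(x_max, h=1e-3):
    """Grid tabulation of E from equation (*), E(x) = 0 for x < 1,
    using the trapezoid rule; requires 1/h to be an integer."""
    spu = int(round(1 / h))              # grid steps per unit length
    n = int(round(x_max / h))
    E = [0.0] * (n + 1)                  # E[k] approximates E(k*h)
    I = [0.0] * (n + 1)                  # I[k] approximates integral_0^{k*h} E
    for k in range(1, n + 1):
        if k > spu:
            E[k] = 1 + 2 / (k * h - 1) * I[k - spu]
        elif k == spu:                   # x = 1: a single car fits exactly
            E[k] = 1.0
        I[k] = I[k - 1] + 0.5 * h * (E[k - 1] + E[k])   # trapezoid rule
    return E

# E(x) ~ lambda*x + (lambda - 1) for large x, so (E(x)+1)/(x+1) -> lambda
h = 1e-3
E = tabulate_E(50, h)
lam = (E[-1] + 1) / (50 + 1)
print(lam)   # ≈ 0.7476
```

For what it's worth, the estimate lands near 0.7476, which matches Rényi's parking constant.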


10 Responses to “The Efficiency of Random Parking”

  1. Isabel Says:

    Maple can do the next integral, but it involves the dilogarithm. But it can’t do the one after that, so exact answers hardly seem the way to go.

    And I agree that E(x) ~ λ x for large x. Letting E(x) = λx (which isn’t true) the first equation with the integral is satisfied. Unfortunately it’s satisfied for any λ, so this gives no clue to the value of λ. I suspect the value of λ depends on the behavior of E(x) for small x — precisely the ones for which any sort of asymptotic result isn’t true.

    One thing that’s clear is that λ ≥ 1/2 if it exists. If cars take up less than half the street, then there is more empty space than space taken up by cars, and the number of gaps between cars is the same as the number of cars (plus or minus one). So there’s some gap that’s big enough to fit a car.

  2. Terence Tao Says:

    One can show without much trouble that \lambda exists, but I suspect that its value is not any particularly interesting number.

    It’s slightly more convenient to look at the waste F(x) := x - E(x), which obeys the delayed integral equation

    F(x) = \frac{2}{x-1} \int_0^{x-1} F(y)\ dy

    for large x, or (equivalently) the delayed differential equation

    \frac{d}{dx} ( \frac{x-1}{2} F )(x) = F(x-1).

    (Note that F becomes increasingly smooth as x increases, by the delayed integral equation, so all the formal computations here become valid for sufficiently large x, e.g. x > 3 is probably enough.)

    One can observe that we have the explicit solutions F(x) = \alpha \times (x+1) for any constant \alpha, confirming the intuition of asymptotic linear behaviour of F and E. To exploit this, we use the variation of parameters method, selecting the ansatz F(x) = \alpha(x) (x+1), where \alpha(x) is now a varying function of x. Some computation eventually gives the delayed differential equation

    \alpha'(x) = \frac{2x}{x^2-1} ( \alpha(x-1) - \alpha(x) )

    and hence by the mean-value theorem

    \alpha'(x) = -\frac{2x}{x^2-1} \alpha'(x-\theta(x))

    for some 0 < \theta(x) < 1. From this one easily shows that \alpha'(x) converges to zero very fast (super-exponentially fast, in fact), from which we see that E(x) converges quite rapidly to \lambda x for some \lambda, equal to 1 minus the limiting value of \alpha. Because of this fast convergence, it would be possible to compute many digits of \lambda very quickly (and rigorously) simply by computing E numerically for a reasonable range of x. But I suspect that it will just converge to some generic transcendental number.

  3. Terence Tao Says:

    A small correction: E(x) should converge super-exponentially fast to \lambda x + \lambda - 1 rather than \lambda x. (Note that \lambda x + \lambda - 1 solves the delayed integral equation for E.)

  4. matt.hastings Says:

    This problem is called random sequential adsorption in physics, representing the adsorption of molecules onto surfaces. In one-dimension, the problem is exactly solvable for \lambda. See, for example, Krapivsky, J Stat Phys, vol 69, page 135, 1992. Look just below Eq. 12 for the parking coverage.

    Actually, in physics one often considers a slightly more general problem: the objects that are added have some fixed integer length, and they can only be added with their left end at some randomly chosen integer. This mimics adding objects onto some underlying crystal lattice. The same technique as in Krapivsky’s paper also works here (don’t know a reference). There are also natural higher dimensional generalizations, but no exact solutions there except on lattices which are locally tree-like.

  5. Greg Muller Says:

    I’m not at work, so I can’t see the paper without paying for it. Is the solution something relatively nice? I also would like someone to include a decimal approximation of it to several places, since I am curious as to what its actual value is.

  6. matt.hastings Says:

    The answer is given as a double integral. So, no, not especially nice. Not at work now either, but I recall it being around 0.75, with 6 decimal places or so given in the paper. So, if you just guessed something halfway between the lower bound of 0.5 given above, and the upper bound of 1.0, you’d actually be very close.

  7. matt.hastings Says:

    Btw, the basic technique used is to compute the survival probability of an empty interval, as a function of the length of the interval and time.

  8. dan Says:

    It’s actually a classical problem solved by Renyi – have a look at the MathWorld page on Rényi’s parking constants
    for the value of the limit and references.
    As far as an exact solution for E(x) goes, it looks like it could be possible to rewrite the iterated integral as a sum of special values of multiple polylogs…

  9. matt.hastings Says:

    I didn’t read the references in Krapivsky, I think he cites Renyi. How did Renyi solve it? The survival probability of an empty interval is a nice approach.

  10. dan Says:

    I don’t have access to the Renyi paper (plus it’s in Hungarian!) and the Math Review doesn’t indicate his method, but looking at the answer given on the Mathworld page, the inner integrand is one of the terms you get when you take a Laplace transform of a shift by 1, so I’d suspect that if you take Laplace transforms of the delayed differential equation Terry gives, the final value theorem will give the limit of \alpha.
