We don't know if the distribution of pDif fluctuation is linear (i.e., flat--every value between min and max equally likely). It could also be parabolic:
http://geoff82.files.wordpress.com/2.../parabola1.gif
Take this image for example. Say that the min pDif value sits at the far left edge of the parabola, and the max at the far right. Let the y axis represent relative probability: the higher up on the y axis, the higher the chance that a value would occur, and the lower down, the lower the chance. Now look at the shape of the parabola. Values closer to the max and the min would have a higher chance of being chosen as a random pDif output, and values closer to the median would have a lower and lower chance the closer you get to it. Even if pDif values were chosen this way, the average output value over time would still equal the value at the median (or, again, the average of the max and min values), because the curve is symmetric about that midpoint.
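To illustrate, here's a quick simulation of that U-shaped "parabolic" case. The actual pDif bounds aren't known here, so P_MIN and P_MAX are placeholder numbers, and the rejection-sampling scheme is just one hypothetical way such a curve could be generated--the point is only that a symmetric curve averages out to the midpoint:

```python
import random

# Placeholder pDif bounds -- NOT the real SE values, just an example range.
P_MIN, P_MAX = 1.0, 3.0

def sample_u_shaped():
    """Sample from a symmetric U-shaped ('parabolic') distribution:
    values near the min and max are the most likely, values near the
    midpoint the least likely. Implemented by rejection sampling
    against a parabola whose vertex sits at the midpoint."""
    mid = (P_MIN + P_MAX) / 2
    half = (P_MAX - P_MIN) / 2
    while True:
        x = random.uniform(P_MIN, P_MAX)
        # Acceptance weight is highest at the edges, zero at the midpoint.
        weight = ((x - mid) / half) ** 2
        if random.random() < weight:
            return x

samples = [sample_u_shaped() for _ in range(200_000)]
avg = sum(samples) / len(samples)
print(avg)  # ≈ 2.0, the midpoint of P_MIN and P_MAX
```

Even though individual draws cluster at the extremes, the long-run average lands on the midpoint, exactly as with a flat distribution.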
But pDif distribution might not be "linear" or "parabolic" in the ways I mentioned. It might be exponential, curving toward either the max or the min; it might show a linear progression of probability rising from max to min or vice versa (using "linear" in a different sense); there might be gaps between the min and max that could never be chosen; the curve could be completely arbitrary and irregular; or any number of other scenarios.
However the curve is distributed, there will be an average value. It might not be the same as the average of the max and min (the median or midpoint)--though for that to happen, the curve would have to be asymmetrical with respect to the midpoint. Why would SE do that? ...though they could. Even if we can't be sure what this average is, it'd be nice to have an estimate of it.
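And that's the one thing we can always get empirically: whatever the curve looks like, enough samples pin down its average. Here's a sketch using a made-up asymmetric distribution (the max of two uniform draws, which skews toward the high end)--again with placeholder bounds, purely to show that a skewed curve's average drifts away from the midpoint but is still a perfectly stable number:

```python
import random

# Placeholder pDif bounds -- example numbers only.
P_MIN, P_MAX = 1.0, 3.0

def sample_skewed():
    """Sample from a hypothetical asymmetric distribution: the max of
    two uniform draws, which biases results toward the high end."""
    return max(random.uniform(P_MIN, P_MAX), random.uniform(P_MIN, P_MAX))

samples = [sample_skewed() for _ in range(200_000)]
avg = sum(samples) / len(samples)
midpoint = (P_MIN + P_MAX) / 2
print(avg, midpoint)  # avg ≈ 2.33, noticeably above the 2.0 midpoint
```

In practice this is what parsing damage logs would give us: we don't need to know the shape of SE's curve to measure its average, just a large enough sample of outputs.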