No.

Originally Posted by Raelia
Neutral is 1.0. The original value does not change.
Push is anything greater than 1.0. The original value increases.
Pull is anything less than 1.0. The original value decreases.
(This is for multiplication; neutral would be 0 if it were addition, obviously.)
What you're defining is a greater push versus a lesser push, i.e. the upper half of the secondary modifier's values versus the lower half. Or perhaps push and pull relative to the average secondary value, when you actually stated that it was relative to the primary value.
Again, no.

Originally Posted by Raelia
My posts are detailed for a reason: if I make an error or reach an invalid conclusion, people who notice, or who simply disagree with me, can go through it step by step to see how I got there, and point out where I missed a step, made a mistake, or made an erroneous assumption. I can then go back, fix things, and improve my own understanding as well.
A lack of explicitly detailed math is far more an indication that the writer doesn't want to have their work scrutinized, and makes any assertions or conclusions far more suspicious because they are neither verifiable nor repeatable.
- 148/59/2.52, but the margin is so small (148/59/2.52; SqRt that for what percentile each randomizer needs to roll, then subtract from 1 again and square that for two randomizers, then take the reciprocal for odds) this would require 190,800 hits to occur, statistically.
148/59 is the absolute minimum pDif necessary to reach 148. 2.52 is the maximum pDif in your model. This division gives the ratio between that minimum and maximum; however, it has no relation to the probability of that result occurring. That would only be the case if pDif could cover the full range of 0 to 2.52, which it clearly does not.
- square that for two randomizers
Incorrect. That only applies if the two randomizers are equivalent. In this case, they are not. You need a specific portion of the original randomizer combined with a specific portion of the secondary randomizer, and those portions cannot be assumed to be equal.
Your illustration with the 80-sided die and the 20-sided die was closer to correct. The probability of rolling 100 is the product of the respective probabilities of rolling an 80 and a 20. However, you can't simply take the probability of rolling an 80 and square it to get the overall probability; that's nonsensical, since the probability of rolling an 80 on the first die has nothing to do with the probability of rolling a 20 on the other die.
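The dice version of this is easy to verify exhaustively. This snippet (my own illustration, not from either post) enumerates every outcome of an 80-sided die paired with a 20-sided die, and shows that the chance of the specific pair (80, 20) is (1/80) * (1/20) = 1/1600, not (1/80)^2 = 1/6400:

```python
# Enumerate all outcomes of two *different* dice: an 80-sided and a 20-sided.
d1, d2 = 80, 20

# Count the outcomes where the first die shows 80 AND the second shows 20.
hits = sum(1 for a in range(1, d1 + 1)
             for b in range(1, d2 + 1)
             if a == d1 and b == d2)

p_actual = hits / (d1 * d2)   # 1/1600: product of the two individual probabilities
p_squared = (1 / d1) ** 2     # 1/6400: the (wrong) "square one die" figure

print(p_actual, p_squared)
```

The two numbers differ by a factor of four here precisely because the dice are not equivalent, which is the whole point.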
For this case, you need the probability of a primary pDif result that has -any- possibility of reaching 148 when combined with the secondary randomizer. The lowest primary pDif is clearly the value that, if you got exactly a 1.05 on the secondary roll, would give you a 148.
148 / 59 / 1.05 = 2.38902
Since it has to be equal to or greater than that value, and I'd prefer to work with no more than 3 decimal places, I'll use 2.390.
The maximum value that the primary pDif can be in your model is 2.40, so the range of potential primary pDif values that could possibly generate a 148 would be 2.400 - 2.390 = 0.01.
The full quantity of values that the primary pDif can encompass is +0.4 - (-0.5) = 0.9. Therefore the probability that the primary pDif can have any chance whatsoever of resulting in a final value of 148 is 0.01 / 0.90 = 1.111%. (Note: This uses the full spread rather than the slightly shortened spread I used in the original post; that is, it doesn't account for the lower pDif limit being of questionable validity.)
The probability of actually reaching 148 then depends on the secondary roll. This ranges from near 0 (getting exactly 1.05 on the multiplier; this may be around 2% if it's 1 chance out of 50, using 50/1024 or thereabouts, but I will treat it as 0 for the combined probabilities) up to whatever the chance is when combined with the maximum pDif.
At maximum primary pDif, the secondary modifier would need to be at least (148/59) / 2.4 = 1.0452. The percentage of the time that could occur would be (1.05 - 1.0452) / (1.05 - 1.00) = 9.6%.
The overall probability area of those combined values is a triangle, so the total probability is (1/2) * A * B, or (1.11% * 9.6%) / 2 = 0.0533%. In terms of frequency, invert that value for a chance of 1 in 1874, two orders of magnitude more common than your assertion.
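As a sanity check on that triangle estimate, here's a quick Monte Carlo sketch of the same model (assuming, as above, a uniform continuous primary pDif on [1.5, 2.4], a uniform secondary multiplier on [1.00, 1.05], base damage 59, and a floored final value). The exact region is bounded by the curve pDif * secondary = 148/59 rather than a straight line, and the hand calculation rounded 2.38902 up to 2.390, so the simulation lands slightly above the 1-in-1874 figure, but in the same order of magnitude and nowhere near 1 in 190,800:

```python
import random

random.seed(1)

BASE = 59
N = 2_000_000

hits = 0
for _ in range(N):
    pdif = random.uniform(1.5, 2.4)    # primary randomizer: cRatio 2.0, -0.5 to +0.4
    sec = random.uniform(1.00, 1.05)   # secondary multiplier
    if int(BASE * pdif * sec) == 148:  # floor of the final damage value
        hits += 1

p = hits / N
print(f"P(148) ~= {p:.6f}  (about 1 in {1 / p:.0f})")
```

Whether the true frequency is 1 in 1700 or 1 in 1900, the conclusion is the same: this is roughly a hundred times more common than claimed.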
With 1850 sample points in Masa's data, there's a 37.25% chance of a 148 not showing up if it were in fact possible, and likewise a 62.75% chance that it -would- show up if it were possible. You'd need about 5600 sample points to have a 95% chance of the value showing up if it were possible.
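The sample-size arithmetic in that step is just repeated independent trials; a short sketch, assuming the ~1 in 1874 per-hit figure derived above:

```python
import math

p = 1 / 1874   # estimated per-hit chance of a 148 (triangle estimate above)
n = 1850       # sample points in Masa's data

p_miss = (1 - p) ** n   # chance a 148 never shows up despite being possible
p_hit = 1 - p_miss      # chance it shows up at least once

# Samples needed for a 95% chance of at least one occurrence:
n_95 = math.ceil(math.log(0.05) / math.log(1 - p))

print(f"miss: {p_miss:.4f}, hit: {p_hit:.4f}, samples for 95%: {n_95}")
```

This reproduces the ~37%/63% split and the roughly 5600-sample requirement quoted above.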
Originally Posted by Raelia
Using the same procedure as detailed above:
Primary pDif range that can generate a 91: from the minimum (1.55) up to 91.9999/59, a span of 0.00932 out of the full range of 0.85 = 1.096%
Maximum secondary multiplier given primary pDif of +0.0: 91.9999/59 / 1.55 = 1.00601; the span from 1.00 to 1.00601 is 12.03% of the 1.00 to 1.05 range
Overall probability: 1.096% * 12.03% / 2 = 0.0660%, or 1 in 1516.
70.5% chance of seeing such a result in 1850 samples.
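Those three lines, restated in code (same uniform model, minimum pDif 1.55 and primary range 0.85 for the -0.45 case):

```python
BASE = 59
PDIF_MIN, PDIF_RANGE = 1.55, 0.85   # cRatio 2.0 with a -0.45 lower limit
SEC_MIN, SEC_RANGE = 1.00, 0.05     # secondary multiplier 1.00 to 1.05

top = 91.9999 / BASE                # highest pDif*secondary product that still floors to 91

p_primary = (top - PDIF_MIN) / PDIF_RANGE             # ~1.096% of primary rolls qualify
p_secondary = (top / PDIF_MIN - SEC_MIN) / SEC_RANGE  # ~12.03% of secondary rolls at min pDif

p_91 = p_primary * p_secondary / 2  # triangle: the valid secondary band shrinks linearly to zero
print(f"P(91) ~= {p_91:.6f}  (about 1 in {1 / p_91:.0f})")
```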
Probability of a 92 showing up, given a -0.45 LIA:
Primary pDif range that can generate a 92:
1) The above 0.00932 (1.096%) chance of something between 91.45 and 91.9999 when combined with secondary modifiers between 1.00601 and 1.0169 at the low end and 1.0 to 1.01087 at the high end.
Secondary range at low end: 0.01089
Secondary range at high end: 0.01087
Overall, the range of valid secondary values seems to remain consistent (as should be expected, really) across the full extent of the primary values. I'll use 0.01088 out of 0.05 for 21.76%.
Total probability for when primary is 91.xxx (no triangle in this case): 1.096% * 21.76% = 0.2385%
2) (92/59 - 1.55) to (92.9999/59 - 1.55) = 0.01695 out of the full range of 0.85 = 1.994%, combined with a secondary modifier between 0 and 1.01087 at the low end, down to 0 at the high end.
Total probability for when primary is 92.xxx: 1.994% * 21.76% / 2 = 0.2169%
Overall probability: 0.2385% + 0.2169% = 0.4554%, or 1 in 220
99.98% chance of seeing such a result in 1850 samples. On average, expect 8.4 such results. Total actually seen: 10
If the minimum value were exactly 92.0, the average would be 1 in 460, or about 4 expected over 1850 results. Given the significantly higher numbers seen, that would seem to support the idea of a minimum somewhat below 92.
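Both the 91 and 92 frequencies for the -0.45 case can be cross-checked by simulation (same assumptions as before: uniform continuous rolls, floored result):

```python
import random

random.seed(2)

BASE = 59
N = 2_000_000

count = {91: 0, 92: 0}
for _ in range(N):
    pdif = random.uniform(1.55, 2.4)   # primary: cRatio 2.0 with a -0.45 lower limit
    sec = random.uniform(1.00, 1.05)   # secondary multiplier
    dmg = int(BASE * pdif * sec)       # floored final value
    if dmg in count:
        count[dmg] += 1

p91 = count[91] / N
p92 = count[92] / N
print(f"P(91) ~= {p91:.6f} (1 in {1 / p91:.0f}), P(92) ~= {p92:.6f} (1 in {1 / p92:.0f})")
print(f"expected 92s in 1850 samples: {1850 * p92:.1f}")
```

The simulated expected count of 92s over 1850 samples comes out around 8 to 8.5, in line with the hand-derived 8.4 and the 10 actually observed.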
The chance of a 91 occurring here is worked out the same way as the chance of a 92 occurring in the above example.

Originally Posted by Raelia
Min pDif: 1.525
Max secondary multiplier on top of min pDif results in a value above 91, so we can treat everything from min up to 91.0 as a square segment.
pDif for 91.0: 1.5424
Primary range (square): 1.5424 - 1.525 = 0.0174 out of 0.875 = 1.989%
Secondary range at the minimum pDif (1.525): (91.9999/59 / 1.525) - (91.0/59 / 1.525) = 1.02251 - 1.01139 = 0.01112
0.01112 out of 0.05 = 22.24%
Chance (square range): 1.989% * 22.24% = 0.4424% (1 in 226)
Chance (triangle range):
Primary pDif from 91/59 to 91.9999/59 = 1.5593 - 1.5424 = 0.0169 out of 0.875 = 1.931%
Secondary range at 91.0: 91.9999/91 = 1.01099 out of 1.05 = 21.98%
Chance (triangle range): 1.931% * 21.98% / 2 = 0.2122%
Total probability: 0.4424% + 0.2122% = 0.6546% (1 in 153)
5.6e-4% chance (about 1 in 180,000) of -not- seeing it if it were possible, in 1850 samples.
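And the corresponding simulation for the -0.475 case (uniform continuous rolls on [1.525, 2.4] and [1.00, 1.05], floored result), printing both the per-hit frequency of a 91 and the chance of seeing zero 91s across 1850 samples:

```python
import random

random.seed(3)

BASE = 59
N = 2_000_000

hits = 0
for _ in range(N):
    pdif = random.uniform(1.525, 2.4)  # primary: cRatio 2.0 with a -0.475 lower limit
    sec = random.uniform(1.00, 1.05)   # secondary multiplier
    if int(BASE * pdif * sec) == 91:   # floored final value
        hits += 1

p91 = hits / N
p_miss = (1 - p91) ** 1850             # chance of zero 91s in 1850 samples
print(f"P(91) ~= {p91:.6f} (1 in {1 / p91:.0f}); miss chance over 1850 samples: {p_miss:.2e}")
```

However the hand-drawn areas are rounded, the simulation agrees that at -0.475 a 91 is all but guaranteed to appear in 1850 samples, so its absence is what rules -0.475 out.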
~~~~~~~~~~~~~~~
So -0.45 is somewhat possible, as there's still a 30% chance that a 91 wouldn't show up, but -0.475 is just flat out not believable.
Given the probabilities for a 92 occurring, I'd say that there's a strong argument for -0.45 (at least as an approximation) for the minimum pDif at 2.0 cRatio. Whether that's a fixed value or the result of a different calculation formula is another matter.