Thread: Cast and Recast

  1. #141
    Chram
    Join Date
    Sep 2007
    Posts
    2,526
    BG Level
    7
    FFXI Server
    Fenrir

    Ok, doing a comparison of your suggestion vs the formula already worked out. They're extremely close; there weren't many combinations of haste and fast cast that gave different results. You need a substantial amount of haste or a long initial recast to start seeing differences.


    This is the excel formula I have for yours:

    Code:
    =TRUNC(TRUNC(($B5*4)*(100-$D$4)*(1024-F$3)/(1024*100))/4)
    Where $B5 is the nominal in-game recast (eg: 60 seconds for Reraise), $D$4 is the Fast Cast recast value (eg: 1 for Loquacious Earring), and F$3 is the haste value (eg: 51 for Goading Belt).

    Please verify if it matches your intent.

    This is based on:

    The values for r are simply the base recast listed in the game multiplied by four.
    and
    R = [[(r*(100-[F/2])*(1024-h))/(1024*100)]/4]
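
    A minimal Python sketch of that model, for anyone who wants to check values without Excel (the function and variable names are mine; r4 is the in-game recast multiplied by four, fc the Fast Cast recast value, h the haste value in /1024):

```python
def recast_model_a(r4, fc, h):
    """Integer model: truncate to 0.25 s units, then truncate to whole seconds."""
    quarter_seconds = r4 * (100 - fc) * (1024 - h) // (1024 * 100)
    return quarter_seconds // 4

# Reraise (60 s base -> r4 = 240) with Loquacious Earring (fc = 1)
# and Goading Belt (h = 51):
print(recast_model_a(240, 1, 51))   # -> 56
```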

    The formula I'd developed based on discussions with Mougu:

    Code:
    =TRUNC(FLOOR($B5*(1024-F$3)/1024, 1/POWER(2,$D$1)) * (100-$C$4)/100)
    Where $B5 is the nominal in-game recast (eg: 60 seconds for Reraise), $C$4 is the Fast Cast recast value (eg: 1 for Loquacious Earring) (the same as $D$4 in yours), and F$3 is the haste value (eg: 51 for Goading Belt). $D$1 is the number of fractional bits kept (it had been determined that 7 fractional bits are kept, so that's the value being used).
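
    The same model sketched in Python, using exact rational arithmetic so float64 noise doesn't creep into the fixed-point floor (names are mine; it mirrors the Excel formula above):

```python
from fractions import Fraction

def recast_model_b(r, fc, h, frac_bits=7):
    """Haste applied first, floored to a multiple of 2**-frac_bits,
    then Fast Cast applied and the result truncated to whole seconds."""
    step = Fraction(1, 2 ** frac_bits)
    hasted = Fraction(r * (1024 - h), 1024)
    hasted = (hasted // step) * step            # FLOOR(x, 1/2^frac_bits)
    return int(hasted * Fraction(100 - fc, 100))

# Reraise (60 s) with Loquacious Earring (fc = 1) and Goading Belt (h = 51):
print(recast_model_b(60, 1, 51))   # -> 56
```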


    I find a difference in recast time for Cure III with 162/1024 haste and 1% Fast Cast. Under my model, the recast is 4 seconds; under your model, it's 5 seconds.

    162/1024 haste on whm... Goading Belt (51), Blessed Mitts (51), Goliard Saio (40), Blessed Pumps (20)
    1% Fast Cast recast: Loquacious Earring.

    Observed recast: 4 seconds


    Same haste and fast cast amount for Cura. Mine predicts 24 second recast, yours predicts 25 second recast.

    Observed recast: 25 seconds


    Same haste and fast cast amount for Protectra IV. Mine predicts 14 second recast, yours predicts 15 second recast.

    Observed recast: 14 seconds


    172 haste (change Blessed Pumps to Blessed Trousers) and 1% fast cast amount for Protectra III. Mine predicts 13 second recast, yours predicts 14 second recast.

    Observed recast: 13 seconds


    So Cure III, Protectra III and Protectra IV indicate my model is correct, while Cura indicates your model is correct.


    Overall neither model is perfectly correct, but mine/Mougu's seems to be a little closer.

  2. #142
    BG Content
    Join Date
    Jul 2007
    Posts
    21,105
    BG Level
    10
    FFXI Server
    Lakshmi
    Blog Entries
    1

    When you guys use Haste in these tests, how do you avoid making a logical loop?

    For instance:
    "This item or spell has XX/1024 Haste, because it changes Recast of this spell from Y to Z"
    "This recast model is invalid, because I have XX/1024 Haste from this item or spell and it changes the recast of this spell from Y to Z"

    Are you absolutely sure that you know how much Haste you have? What if Haste is represented differently than you think?

  3. #143
    Chram
    Join Date
    Sep 2007
    Posts
    2,526
    BG Level
    7
    FFXI Server
    Fenrir

    There is no logical loop. Haste values have been tested in isolation. Haste values are known to be represented in fractions of /1024. This has been tested a number of times. EG:

    1) Back at post 79 in this thread I noted that Walahra Turban cannot be either 50/1000 or 51/1024 (should have also noted that it can't be 49/1024). The only possible value it can be, to the limits of the precision we know to be in use, is 50/1024.

    2) Somewhere in here I tested Blessed Hands as being 51/1024 (and thus the denominator cannot be less than 1024, such as /512 or /256); same for Goading Belt.

    3) There are only two possible numeric models that we expect to work with: Either the value is kept as a decimal fraction, such as how Fast Cast is stored (ie: 7% Fast Cast is exactly 70/1000), or the value is kept as a fraction of a power of two (eg: haste in /1024, MDT in /256, etc). No other value has been proposed, and no other values that I can think of make sense in terms of game design. If you have any particular ideas, please state them.

    As such, haste is an exactly known quantity. Fast cast has likewise been tested to an exact amount. The only question is how the two (along with other factors) interact to yield a final result. If a proposed model of interaction does not yield results matching observed values, then the interaction itself (ie: the recast model) must be flawed. That doesn't imply any logical loop, though.

  4. #144
    Groinlonger
    Join Date
    Oct 2006
    Posts
    2,964
    BG Level
    7
    FFXI Server
    Fenrir

    Haste is no doubt represented in some /1024 integer system; this was evidenced flawlessly by Kirschy's various testing before. The exact values for gear can only be found with certainty by advancing through march tiers and checking recast times, or by combining pieces of gear with known haste values determined from that method. That's typically not practical these days, because everybody gets capped marches. I don't know how the numbers for the pieces of gear Motenten used were established; information like that is spread out in odd places. However, in the first three tests, if the value of haste was off by 1 point (i.e. 163 rather than 162), then the results would have been predicted accurately by the integer model. The same holds true for the last test, in which a haste value of 173 would have yielded accurate results.

    I'll try to verify the haste on all of those pieces. I can use my integer system with some Fast Cast to find exact values of haste on those gears. If the integer model is flawed, the results are not completely reliable, but they would be mostly reliable, because these discrepancies only occur in extreme situations. I was able to determine the haste cap of 448 points and the exact haste on Zelus Tiara (81, which matched a listing on the JP wiki, the only other place I could find it). Additionally, using haste amounts of 81 (Zelus Tiara), 150 (Haste spell), or 231 (both), I got the following results.

    Code:
    f	h	Spell		Observed	A	B
    2	231	Stoneskin	23		23	22
    2	231	Reraise		46		46	45
    8	150	Freeze		34		34	33
    22	150	Blizzard II	15		15	14
    30	81	Blizzard	8		9	8
    Where A is the integer model and B is the previous model. I tested all of the spells that yielded differing results with Fast Cast values between 2 and 30, provided I had access to the spell and to that amount of Fast Cast on a job with the spell. The integer model predicts most of these results accurately while the previous one does not, although there is one exception, so it's still not 100% correct.

    I think then, based on the ways we've observed thus far, that the following are reasonable assumptions.

    *There is a conversion to integers representing 0.25 seconds after all recast reduction calculations have been performed.
    *There is a final truncation to whole seconds after this step.

    They may seem like they could be combined, but that would only be true if you assumed truncation is how the value is converted to an integer in the first step. It depends on how you implement it programming-wise, but truncation seems a good assumption, because the integer-only model only ever over-approximates the observed results, it seems.

    Either way, a method using only integers would have to rely on modular arithmetic, at least it seems so at this point, and I think that's unlikely. The interim value is probably converted to a floating point number at some step. I was hoping this would never occur, because it greatly increases the number of ways the system can be modeled and turns the process of finding that system into an arduous task that I have little interest in. I still posit that, whatever it is, it uses the IEEE 754 standard for floating point arithmetic; it's extremely unlikely to be done any other way.

    Excel uses double precision IEEE 754 for calculations. I would assume that the game uses single or half precision IEEE 754, because 64-bit integers would have worked using just integers without overflowing, and would have also been much, much faster. I don't really know how to emulate this in a practical manner.

    If nobody has an issue with it, I'll continue working in terms of the integer model, and put the final result on the wiki with a disclaimer that it may be inaccurate in extreme cases. My reason for doing this is mostly that I think it's far more useful to most people looking for information than something that relies on a custom bit precision.

  5. #145
    Groinlonger
    Join Date
    Oct 2006
    Posts
    2,964
    BG Level
    7
    FFXI Server
    Fenrir

    I went and tried some implementations of IEEE 754 single precision calculations and didn't have any luck getting that Blizzard example to truncate to 8 seconds. Matlab has native support for these, so it was quite easy. I now think it's probably IEEE 754 half precision for all of the floating point calculations. Also, I've been reviewing what you posted earlier.

    Quote Originally Posted by Motenten View Post
    Code:
    =TRUNC(FLOOR($B5*(1024-F$3)/1024, 1/POWER(2,$D$1)) * (100-$C$4)/100)
    As far as I can tell, this is going to truncate the result to a multiple of the machine epsilon of the system. This isn't how floating point numbers work: the spacing between two adjacent numbers isn't constant, although it is always a multiple of that epsilon. Truncating a 64-bit floating point number (which is what Excel produces) to the value it would have if fewer precision bits were used is complicated. I've been trying to do it for 16 bit but can't find a way. The best approach would be to implement it on a system with native 16-bit float support for calculations, but no luck there either; Matlab doesn't support that, nor does anything else I know of.
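
    For what it's worth, Python's standard struct module has native IEEE 754 binary16 support (the 'e' format, since Python 3.6), which round-trips a float64 through a true half-precision value using round-to-nearest, ties to even (a sketch; the helper name is mine):

```python
import struct

def to_f16(x):
    """Round a float to the nearest IEEE 754 binary16 value (ties to even)."""
    return struct.unpack('e', struct.pack('e', x))[0]

print(to_f16(0.24))   # 24% recast as a half float -> 0.239990234375
print(to_f16(0.01))   # 1% recast -> 0.01000213623046875 in this rounding mode
```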

  6. #146
    Groinlonger
    Join Date
    Oct 2006
    Posts
    2,964
    BG Level
    7
    FFXI Server
    Fenrir

    My final contribution to this for now is to consider the following.

    If all of the haste calculations are done using the concepts agreed upon by all (bases being 1024 for haste, 100 for fast cast, the result a product of the two) and carrying out all the math in 64 bits results in the following.

    Code:
    f	h	Spell		R64			n16	n32	
    30	81	Blizzard	0.0017822265625		3.65	29900.8
    2	162	Cure III	0.0002734375		0.56	4587.52
    2	162	Cura		0.0013671875		2.8	22937.6
    2	162	Protectra IV	0.0008203125		1.68	13762.56
    2	172	Protectra III	0.0030859375		6.32	51773.44
    Those are the cases that threw an error in one of our models. R64 is the distance between the 64-bit result and the closest integer below it; I think 64 bits is enough to make these calculations exact, or very close to it. The n16 and n32 columns are the ratio of that distance to machine epsilon for 16- and 32-bit systems. Basically, it's approximately how many machine epsilons away a 16- or 32-bit system would be from truncating that answer to the next lowest integer. It doesn't necessarily validate the 16-bit floating point assumption, because the actual result that gets truncated likely undergoes more than one floating point operation, but it's at least some kind of assessment of the errors being observed.
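
    The R64 column can be reproduced exactly with rational arithmetic (a sketch; the function name is mine, and the Fast Cast argument here is the already-halved value, e.g. 15 for Blizzard's 30):

```python
from fractions import Fraction

def distance_to_floor(r4, fc, h):
    """Exact recast value minus its integer part, computed with no rounding."""
    exact = Fraction(r4 * (100 - fc) * (1024 - h), 4 * 100 * 1024)
    return float(exact - (exact.numerator // exact.denominator))

# Blizzard: r4 = 46, fc = 15 (30 Fast Cast halved), h = 81
print(distance_to_floor(46, 15, 81))   # -> 0.0017822265625
```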

  7. #147
    Groinlonger
    Join Date
    Oct 2006
    Posts
    2,964
    BG Level
    7
    FFXI Server
    Fenrir

    So, I've found a still somewhat exhaustive, but at least repeatable, way to simulate 16 bit floating point operations.

    http://www.2shared.com/document/CNKq...alf_table.html

    or

    http://pastebin.com/ahNJc6w5

    I recommend downloading the first link. It's basically a table of all possible 16 bit floating point values. Values below 0.00781250000000000 may suffer from precision issues because Matlab is truncating their value to some character length. I don't know how to fix this.

    There are different rounding modes the system may be using.

    The IEEE standard has four different rounding modes; the first is the default; the others are called directed roundings.

    *Round to Nearest – rounds to the nearest value; if the number falls midway it is rounded to the nearest value with an even (zero) least significant bit, which occurs 50% of the time (in IEEE 754-2008 this mode is called roundTiesToEven to distinguish it from another round-to-nearest mode)
    *Round toward 0 – directed rounding towards zero
    *Round toward +∞ – directed rounding towards positive infinity
    *Round toward −∞ – directed rounding towards negative infinity
    The proper way to implement this is to round during every arithmetic operation. I've tried a few ways for the Blizzard 8-second case using round to nearest, and the result was still 9 seconds. However, I found many ways to reach 8 seconds if directed rounding toward zero was used.

  8. #148
    Groinlonger
    Join Date
    Oct 2006
    Posts
    2,964
    BG Level
    7
    FFXI Server
    Fenrir

    One such model.

    Code:
    (46/4) * (943/1024) * (85/100)
    
    (11.5) * (0.920898437500000) * (0.849609375000000)
    
    (10.5859375000000) * (0.849609375000000)
    
    (8.99218750000000)
    Not saying it has to be directed rounding; just showing that a 16-bit calculation could certainly end up truncating that to 8 seconds if that rounding mode was specified. It may still be possible to end up with 8 seconds using round to nearest; it would depend on the order the operations are performed in.
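
    That worked example can be reproduced by emulating binary16 with directed rounding toward zero (a sketch; the helper name is mine, it truncates the significand to float16's 11 bits, and subnormals/overflow are ignored since they don't arise in this range):

```python
import math

def f16_trunc(x):
    """Truncate x toward zero onto the IEEE 754 binary16 grid
    (11 significand bits; subnormals/overflow ignored)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(abs(x))          # abs(x) = m * 2**e with 0.5 <= m < 1
    ulp = math.ldexp(1.0, e - 11)      # spacing between binary16 values here
    return math.copysign(math.floor(abs(x) / ulp) * ulp, x)

acc = f16_trunc(46 / 4)                       # 11.5 (exact)
acc = f16_trunc(acc * f16_trunc(943 / 1024))  # 10.5859375
acc = f16_trunc(acc * f16_trunc(85 / 100))    # 8.9921875
print(math.floor(acc))                        # -> 8
```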

  9. #149
    Chram
    Join Date
    Sep 2007
    Posts
    2,526
    BG Level
    7
    FFXI Server
    Fenrir

    As far as I can tell, this is going to truncate the result to a multiple of the machine epsilon of the system. This isn't how floating point numbers work.
    Actually, I simplified it to not use floating point values, instead using a fixed-point integer notation, since it didn't appear that the floating point method was actually needed. It appears I was wrong to go that route.



    Ok, for values below 1, float16 can represent them with precision 1/2048 (0.5/1024) or better. For values between 1 and 2, float16 can represent them with precision 1/1024.

    Given haste/slow can only fall between 0 and 2 (ie: 100% haste up to 100% slow), it's completely representable in units of /1024 across that entire spectrum using float16.


    Fast Cast appears to be an exact decimal representation; however, using float16, that actually isn't the case. 1% recast (Loquacious) would be either 0.00999450683593750 or 0.0100021362304688 (probably the former), which falls at a precision of 1/131072.

    The former Fast Cast cap of 50% would allow for 25% recast, which would be represented exactly. 1% less than that -- 24% -- would be represented by 0.239990234375000. That's accurate to 1/4096 (0.25/1024), and would make it very hard to spot discrepancies.

    With the current cap pretty much unrestricted, we're getting into the range where the representative quantum starts to be slightly noticeable, with resolution of 1/2048 up to 1.0.


    I do wonder if the apparent data resolution of various bits in the game (eg: fTP, MDT, etc) is actually an artifact of the resolution limit of their float16 representation. For example, if fTP can go up to 10, its innate resolution at that point would be 8/1024, or 1/128, which happens to be the unit I use when trying to isolate fTP values... Interesting.



    And it seems Byrth was right to question whether we were using correct values in our representations of haste and fast cast.


    Spending some time to see if I can jerry-rig float16 math in Excel...

  10. #150
    Groinlonger
    Join Date
    Oct 2006
    Posts
    2,964
    BG Level
    7
    FFXI Server
    Fenrir

    I think you may be a bit confused about how float16 can be implemented. It's perfectly possible to represent integers and floats at the same time, and even if you chose to represent integers with floats, you can exactly represent integers up to 2^11 with float16. If you're questioning that we were somehow wrong with storing Fast Cast and Haste as integer values, then you're assuming they must be floats. Storing information with floats when it's possible to store them with integers is highly illogical given how they're implemented (Fast Cast/Haste values are added up, etc.)

    I spent some time thinking about this, and I think we may ultimately be able to say with certainty how it's being implemented, if some assumptions are made. Before I state what those are, consider the following properties about error I concluded today. They're horribly stated, but I'm doing my best.

    1. If you have two numbers exactly represented by a floating point system, and you take the quotient or product of those two numbers, the maximum amount of error for that calculation is bounded by the product of machine epsilon and the exponent term of the result.

    2. Dividing any floating point number by the base of the exponential term will not generate any additional error unless you end up rounding to zero.

    3. If a number carrying an error term is multiplied or divided by another number, the existing error term is multiplied or divided by that same number. Any newly generated error, if truncation occurs, is then added on top of this.

    Error generated by following these rules can then be compared to the true results of the integer system for some analysis, which is something like the following with an unknown order of calculation.
    Code:
    R=(r)*(100-[f/2])*(1024-h)/(4*100*1024)
    Now, I think it's safe to make the following assumptions.

    *Fast Cast numbers are stored as integers

    If they were stored as floats, then dividing them by two would not result in the truncation that we observe. The only other conceivable way for this behavior to occur would be that there are two separate values stored for Fast Cast, one for Recast and one for Casting Time, but I think it's much more logical to assume they are integers (imagine all the weirdness that comes along with storing values for gear like Vivid Strap) and that integer math is truncating off that odd number. This doesn't restrict Loquacious Earring observed values to exactly 1% cast time reduction always. Parsing the quotient of the integers 1/100 (or even floating point integers, since these can be represented exactly too) as a float16 calculation would result in the truncation that you stated, 0.00999450683593750 or 0.0100021362304688.

    *Haste values are stored as integers

    For similar reasons as above. It just makes sense. However, this is less important, for reasons below.

    The last thing before the conclusion that you must remember is that successive floating point calculations will accumulate error, and that the total accumulated error can be different depending on the order of operations. For instance, consider the discretization in integer space.
    Code:
    ((5/3)*7)/3=3.8888888888888888888888888888889 true
    ((5/3)*7)/3=2
    ((5*7)/3)/3=3
    Basically, just restating that the result can be different depending on whether haste is calculated first or fast cast, and things of that nature.
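
    The same order-dependence shows up with any truncating integer division, e.g. Python's floor division:

```python
# Truncating integer division makes the result depend on operation order:
print((5 // 3 * 7) // 3)   # divide first: (1*7)//3 = 2
print((5 * 7 // 3) // 3)   # divide last: (35//3)//3 = 11//3 = 3
```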

    Now, if you accept that fast cast and haste values are stored as integers, or even just that fast cast values are stored as integers, and that the addition term is parsed as an integer calculation at least before being parsed as a floating point calculation, then the problem can be reduced by quite a lot.

    That is, the error produced in R, depending on the order of operations of the system below,

    Code:
    R=(r)*(100-[f/2])*(1024-h)/(4*100*1024)
    will reduce to the following

    Code:
    let	F = (100-[f/2])
    	H = (1024-h)
    
    R = r*H*F/100
    H and F are exact, there is no error there, and the numbers 4 and 1024 in the denominator will have no effect on the error in any way they are parsed because they are 2 to the power of an integer.

    I would also assert that it's logical to retain the appropriate base with its numerator. That is, you wouldn't program it to divide by 100, then multiply by H, then multiply by F.

    Also, to prevent overflow, it's necessary to pair the 4 and 1024 with their bases when exploring all of the possible solutions.

    So the problem can at least be represented by the product of the following terms in which the order of the float terms is unknown, but restricted to six possibilities.

    Code:
    R=int(float(r/4)*float(int(100-int(f/2))/int(100))*float(int(1024-h)/int(1024)))
    I've also concluded, from comparing float64 and float16, that float16 can be emulated in Excel, although I would need to relearn VBA. I will try to work on that sometime soon; or, if someone knows VBA, PM me and I'll give you some pseudocode you can implement.

    I'll continue with this error analysis later but I need to work on other things currently. I'm pretty sure that I can show with certainty that if these calculations are being run in IEEE 754 format, that it's impossible for it to be 32 bit floating point. It may also be possible to conclude with certainty on which rounding method is being used.

  11. #151
    Groinlonger
    Join Date
    Oct 2006
    Posts
    2,964
    BG Level
    7
    FFXI Server
    Fenrir

    Alright, so an error analysis on one possible routine, that is the following.

    R=int(float(r/4)*float(int(100-int(f/2))/int(100))*float(int(1024-h)/int(1024)))

    If operations were followed in this strict order, then the maximum error that could accumulate for 16 and 32 bit systems is as follows (this is for the Blizzard data.)

    Δmax = (943/1024)*((46/4)*(ε*2^0)+ε*2^3)+ε*2^3

    ε = 2^-11 in 16 bit
    ε = 2^-24 in 32 bit

    Calculating these yields the following.

    Code:
    Δtrue  = 0.0017822265625
    Δmax16 = 0.0126745700836181640625
    Δmax32 = 0.00000154718873091042041778564453125
    Δtrue is the true error (9 + Δtrue is the exact answer if discretization does not occur). Δmax > Δtrue would need to hold in order for the system to have the possibility of truncating the answer to 8. It's clear that for the 32-bit system this is impossible if this were the strict order of operations, and I don't feel the need to do such an analysis for the other five possible orderings.

    In 16 bits, in order for round towards the nearest to be explicitly ruled out, the following would need to occur.

    Δtrue+εlocal/2>Δmax16

    Basically, if the span of Δmax16 was less than the space required to reach the point where it would have rounded up, then it can be explicitly ruled out.

    Δtrue+εlocal/2=0.0017822265625+0.00781250000000/2=0.0056884765625

    0.0056884765625<0.0126745700836181640625

    So it can't easily be ruled out for this example; Δtrue+εlocal/2 is only 44% as large as it would need to be in this case to rule it out, so the ruling-out could work for some other case and model. However, an easier way may be to simulate all six possible models using float16 and see if any can produce the result using round to nearest.

  12. #152
    Chram
    Join Date
    Sep 2007
    Posts
    2,526
    BG Level
    7
    FFXI Server
    Fenrir

    I think you may be a bit confused about how float16 can be implemented. It's perfectly possible to represent integers and floats at the same time, and even if you chose to represent integers with floats, you can exactly represent integers up to 2^11 with float16. If you're questioning that we were somehow wrong with storing Fast Cast and Haste as integer values, then you're assuming they must be floats. Storing information with floats when it's possible to store them with integers is highly illogical given how they're implemented (Fast Cast/Haste values are added up, etc.)
    I may have misworded my thoughts. Storing them as integers isn't the issue; before we can make any analysis, though, they have to go through the calculations that we are currently assuming are processed as float16's. Therefore in order to figure out their effects, we have to treat them as effectively floats.

    For example, if the float16 value corresponding to the actual haste value has a resolution of 1/1024, it doesn't matter if haste itself is stored accurately to 1/4096; the float16 resolution is the bounding value of its accuracy. Likewise fTP may be stored to 1/1024, but if the float16 representation of it is limited to 1/128, that's the best effective value we can see.

    Essentially, the float16 representation becomes a filter on the original value that leads to the emergent phenomena of how it appears the values are represented.

    *Fast Cast numbers are stored as integers

    If they were stored as floats, then dividing them by two would not result in the truncation that we observe. The only other conceivable way for this behavior to occur would be that there are two separate values stored for Fast Cast, one for Recast and one for Casting Time, but I think it's much more logical to assume they are integers (imagine all the weirdness that comes along with storing values for gear like Vivid Strap) and that integer math is truncating off that odd number.
    Agreed, though I think this is the only case we can really be sure of, since it's the only case where some other operation affects the original value itself before it's passed through the float16 filter.

    Code:
    let	F = (100-[f/2])
    	H = (1024-h)
    
    R = r*H*F/100
    H and F are exact, there is no error there, and the numbers 4 and 1024 in the denominator will have no effect on the error in any way they are parsed because they are 2 to the power of an integer.
    Going to go on a slight tangent here.

    Unfortunately we do start running into errors here, though I suppose they start to go away if you use the tricksy storage representation of cast times. In simplistic form, suppose we have a 180 second recast spell (eg: Klimaform), and we use the 4x recast value. No haste, no fast cast. Now multiply through the stages.

    r = 180 * 4 = 720
    H = 1024
    F = 100

    r * H = 720*1024 = 737,280 --- we're already above the maximum float representation we're allowed, 65504. Can't treat it as pure integer math.

    Even if using the floating point representation, and ignoring the bottom 2 bits:
    180 * 1024 = 184,320; still too large

    Of course that leads us back to the fact that spells with a recast over 60 seconds don't get fully represented. The maximum complete stored bit value is 240 (1111 0000b) for a 60.00 recast spell (resolution of 0.25 seconds). A 180 second recast spell is stored as 208 (1101 0000b), which seems to be a 2x left bit shift of 180 (1011 0100b), and the two most significant bits discarded.

    Or, more generally, it's 180.00 (with 0.25 second resolution) with the top two bits discarded. 180*4 = 720, which is (10 1101 0000b) as binary, stored as (1101 0000b). 90 seconds is 104 (0110 1000b) instead of 90.00 (01 0110 1000b), and 120 seconds is 224 (1110 0000b) instead of 120.00 (01 1110 0000b).
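
    That bit-trimming is just keeping the low byte of the quarter-second value; a quick sketch (the function name is mine):

```python
def stored_recast_byte(seconds):
    """Low 8 bits of the recast in quarter-second units, as described above."""
    return (seconds * 4) & 0xFF

print(stored_recast_byte(60))    # -> 240
print(stored_recast_byte(90))    # -> 104
print(stored_recast_byte(120))   # -> 224
print(stored_recast_byte(180))   # -> 208
```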


    So how do all those values interact with the float16 system? Well, if we consider that the actual recasts are already in floating point form, we can look at the 'values' they're given:

    90 seconds => 90.00 seconds (01 0110 1000b) trimmed => 0110 1000b == 26.00
    120 seconds => 120.00 seconds (01 1110 0000b) trimmed => 1110 0000b == 56.00
    180 seconds => 180.00 seconds (10 1101 0000b) trimmed => 1101 0000b == 52.00

    We can see that the recast value that presumably we are putting into the float16 is never higher than 60.00. Multiplying 60 by 1024 gives 61440, which is within the valid range for the float16.

    * Note: still no idea how they then compensate for (or rather, know when to compensate for) the recasts over 60 seconds, as no extra flag has been identified in the dats.


    So in isolation, we can get that to work. However what about when combining with other effects? +50% recast with Hasso (if it was calculated before Haste; don't remember offhand) would be easy enough on its own, but would push a 60 second recast to 90 seconds, and 90*1024 generates an overflow.

    All in all, it would be far easier to deal with these issues by converting the haste/etc effects to floats themselves before doing the multiplication. We know they can be represented to /1024 accuracy as float values, so there's no loss in accuracy if they are indeed stored in that manner as ints.

  13. #153
    Groinlonger
    Join Date
    Oct 2006
    Posts
    2,964
    BG Level
    7
    FFXI Server
    Fenrir

    I should have been a bit more specific with that simplification (R = r*H*F/100). As long as overflow isn't created by ignoring the bases, then it can be simplified that way. We've never observed any weird overflow effects other than capping at 255 seconds. That cap would only be the result of sending the information out as a single integer byte, which is what I think it is. Pairing the values with their bases and treating everything as integers until the quotient is taken will work and has already been shown that it can work, there is no need to complicate the system beyond that. I think we might actually agree on this, if I'm reading what you're saying right.

    If the system is constrained to just fast cast and haste for the time being, then the only six possible ways to calculate this (that would produce unique error) are as follows.

    Code:
    R=int(float(r/4)*float(int(100-int(f/2))/int(100))*float(int(1024-h)/int(1024)))
    R=int(float(r/4)*float(int(1024-h)/int(1024))*float(int(100-int(f/2))/int(100)))
    R=int(float(int(100-int(f/2))/int(100))*float(r/4)*float(int(1024-h)/int(1024)))
    R=int(float(int(100-int(f/2))/int(100))*float(int(1024-h)/int(1024))*float(r/4))
    R=int(float(int(1024-h)/int(1024))*float(r/4)*float(int(100-int(f/2))/int(100)))
    R=int(float(int(1024-h)/int(1024))*float(int(100-int(f/2))/int(100))*float(r/4))
    All other methods, assuming overflow is avoided (which I think it is, because it's never been observed and would be a huge bug if it occurred), would produce error identical to one of those methods.
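
    Those six orderings can be simulated with round-to-nearest binary16 via Python's stdlib struct module (a sketch; helper and function names are mine). For the Blizzard case (r×4 = 46, f = 30, h = 81), every ordering still truncates to 9 under round to nearest, consistent with the earlier observation that only directed rounding toward zero reaches 8:

```python
import itertools
import math
import struct

def f16(x):
    """Nearest IEEE 754 binary16 value (ties to even)."""
    return struct.unpack('e', struct.pack('e', x))[0]

def recast(r4, f, h, order):
    terms = {
        'r': f16(r4 / 4),
        'f': f16((100 - f // 2) / 100),
        'h': f16((1024 - h) / 1024),
    }
    acc = terms[order[0]]
    for key in order[1:]:              # round after every multiplication
        acc = f16(acc * terms[key])
    return math.floor(acc)

results = {''.join(p): recast(46, 30, 81, p)
           for p in itertools.permutations('rfh')}
print(results)   # every ordering gives 9 here
```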

    The next thing to do would be to implement these somehow, hopefully in Excel so others can use it, and then determine which rounding method could be in use (although this could also be done by hand). After that, we can try to find cases that narrow it down to a single model (although, if round toward zero ends up being the implemented rounding mode, I'm not sure such a case even exists).

  14. #154
    Groinlonger
    Join Date
    Oct 2006
    Posts
    2,964
    BG Level
    7
    FFXI Server
    Fenrir

    Tried some stuff with h = 704, f = 80 today, trying to repeat the 11s Reraise recast Byrnoth observed, but no luck.

    Reraise - 12s
    Stoneskin - 6s
    Stoneskin w/ Accession (sub) = 16s
    Stoneskin w/ Celerity - 6s

    I forgot to try Stoneskin w/ Accession and Celerity, but I suspect it would have been 12s.

    Either way, it looks like there is a total cap of 80% recast reduction that is enforced, and that SCH buffs are applied before it's enforced. So if there was ever a time with no cap, either it's been ninja patched or the test server gives different results than the real servers.

  15. #155
    Groinlonger
    Join Date
    Oct 2006
    Posts
    2,964
    BG Level
    7
    FFXI Server
    Fenrir

    Some pseudo-code for float16 calculations.

    First, calculate the result of something in 64 bit. The result is called n. n gets sent to a VBA macro or Matlab function with the following pseudocode.

    Code:
    c=trunc(log2(n));			% Integer value in exponential term
    Nmin=floor(n/(ε*2^c))*(ε*2^c);	% Minimum discretized value (floor to grid)
    Nmax=Nmin+ε*2^c;			% Maximum discretized value
    The returned values, Nmin and Nmax, are the discretized values that the true result will fall between in a floating point system with machine epsilon ε. What is returned will be either Nmin or Nmax depending on the rounding logic you choose (not programmed here.) This only works for positive floating point values, but it can easily be modified to handle negatives as well. Rounding logic is simple. If round towards zero is the mode, Nmin is always returned. If round towards positive infinity is the mode, Nmax is returned. If round to the nearest value is chosen, you'd have to do the following.

    Code:
    if (n - Nmin) > (Nmax - n)
    
    	N = Nmax
    
    elseif ((n - Nmin) < (Nmax - n))
    
    	N = Nmin
    
    elseif	(mod(Nmin/(2^c*ε), 2) == 0)	% tie: round half to even (assumed)
    
    	N = Nmin
    
    else
    
    	N = Nmax
    
    end
    It's actually simple enough to just put right into some calculation, but you'd have to do this every time you wanted to convert the result of a 64 bit calculation to a 16 bit. That would become laborious, which is why it may be advantageous to use VBA and make a macro you can just call on and specify the rounding mode.
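For anyone without VBA or MATLAB handy, here's a pure-Python sketch of the same bracketing for positive n. It uses math.frexp to get the exponent exactly rather than a log2 call, and assumes float16's 11-bit significand (ε = 2^-10).

```python
import math

def float16_bracket(n):
    """Return (Nmin, Nmax), the adjacent float16-representable values
    around a positive number n, assuming an 11-bit significand (eps = 2**-10)."""
    m, e = math.frexp(n)            # n = m * 2**e with 0.5 <= m < 1
    c = e - 1                       # integer exponent: 2**c <= n < 2**(c+1)
    spacing = 2.0 ** (c - 10)       # eps * 2**c, the local grid spacing
    nmin = math.floor(n / spacing) * spacing
    return nmin, nmin + spacing

# Example value from later in the thread: 0.93 * 10.75 in exact arithmetic.
print(float16_bracket(9.994140625))   # -> (9.9921875, 10.0)
```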

  16. #156
    Chram
    Join Date
    Sep 2007
    Posts
    2,526
    BG Level
    7
    FFXI Server
    Fenrir

    We've never observed any weird overflow effects other than capping at 255 seconds. That cap would only be the result of sending the information out as a single integer byte, which is what I think it is.
    Agreed.

    Pairing the values with their bases and treating everything as integers until the quotient is taken will work and has already been shown that it can work, there is no need to complicate the system beyond that.
    While I disagree in terms of the technical aspects, this will probably be sufficient for most of our purposes. Tentative agreement.

    assuming overflow is avoided (which I think it is because it's never been observed and would be a huge bug if it was,)
    Well, certainly overflow is avoided. The question is -how- is it avoided? We see part of it in the manner in which recast times are stored. However if it's ever possible to increase recast above ~64 seconds before you reach the haste portion of the equation, multiplying by 1024 (0% haste) will always generate overflow.

    This can be avoided if all other terms occur after haste, and none of them are stored in /1024. However it seems easier to convert all the haste values to floats, add those up, and then multiply the floats (haste and recast time) together.

    Ultimately probably doesn't matter, though.



    I wrote out some comparisons using binary math, but then fiddled with some possibilities in excel, and think I found something (correction: was an error due to Spellcast equipping my Loquacious Earring... However it's still useful in disproving certain approaches).

    Haste can be considered in one of three ways:

    1) n/1024 (standard current usage). 2% haste = 20/1024. This leads to our current issue about value representation, so won't go any further.

    2) Accumulated units of /1024 fractions. If haste is stored as n/1024, and we know that float16 can represent a /1024 exactly, we can take, say, 2% and simply convert it into the closest increment of /1024. This is mostly identical to #1, but will vary depending on the order in which things are added.

    3) A simple fractional representation stored as a float16. Of particular interest is that float16 can represent values to the nearest /2048 for fractions below 1.0 (not bothering to work out how far down that goes, but at least a moderate bit is sufficient).

    So, I set up a check on recasts using three values:

    Set a percent target (2%, 3%, etc)

    Haste1: 1 - n/1024 [n is chosen to match current accepted values, such as 20 for 2%]
    Haste2: 1 - floor(target, 1/1024)
    Haste3: 1 - floor(target, 1/2048)

    Most of the time it seems that Haste2 and Haste3 result in the same value, so their results are identical. However at 7% I find a discrepancy: Haste1 and Haste2 predict a 10 second recast for Curaga IV (using either 70/1024 or 71/1024 for Haste1), while Haste3 predicts a 9 second recast.

    (corrected) Observed recast: 10 seconds.

    Conclusion: the haste resolution must be determined before converting to float16 (thereby: /1024 resolution, not x% haste that may end up with /2048 resolution).
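That Curaga IV discrepancy can be reproduced numerically. The 10.75 effective base product is an assumption borrowed from the arithmetic that appears later in the thread; the point is only that flooring 7% to /1024 versus /2048 resolution lands on opposite sides of the 10-second boundary.

```python
import math

def floor_to(x, step):
    # Floor x down to a multiple of step (Excel-style FLOOR(x, step)).
    return math.floor(x / step) * step

base = 10.75   # assumed effective base product for Curaga IV
for denom in (1024, 2048):
    haste_mult = 1 - floor_to(0.07, 1 / denom)   # 7% haste at this resolution
    print(denom, int(base * haste_mult))
```

At /1024 resolution the multiplier is 0.9306640625 (10.0046... -> 10 s); at /2048 it is 0.93017578125 (9.9993... -> 9 s).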


    Secondary confirmation of (presumably known) restriction:

    9% haste would be 92/1024. If haste were the percent values added together before converting to /1024, Goading Belt + Goliard Saio would be 92/1024; otherwise 91/1024.

    91/1024 recast of Aero III: 23 seconds
    92/1024 recast of Aero III: 22 seconds

    Observed: 23 seconds

    So the haste values must already be in /1024 format before being added together, rather than being added as their original integer percentages and then converted to float. This allows for the possibility that they are already stored as float16's.




    Rewrote things a few times, ending up with this in the spreadsheet:

    Code:
    BaseProduct = 4 * recast * (1024 - haste)
    Where recast is base recast, and haste is haste as /1024 value (eg: 150 for the spell Haste)

    Code:
    Calculated recast = GetFloat16(BaseProduct)/(4*1024)
    The extra multiplication/division by 4 was to ensure integral values for the bit manipulations.

    GetFloat16 VBA:

    Code:
    Public Function GetRoundFloat16(x As Long)
    
    Let exponent = 0
    If Int(((Log(x) / Log(2)) + 1) - 11) > 0 Then
        exponent = Int(((Log(x) / Log(2)) + 1.00000000000001) - 11)
    End If
        
    Let lowerBits = Int(2047 / 2 ^ (11 - exponent))
    Let andLowerBits = BITAND(x, (lowerBits))
    
    Let roundBit = 0
    If andLowerBits >= ((lowerBits + 1) / 2) Then
        roundBit = 1
    End If
    
    Let mainBits = 2047 * 2 ^ exponent
    Let andMainBits = BITAND(x, (mainBits)) + roundBit * (lowerBits + 1)
    
    GetRoundFloat16 = andMainBits
    
    End Function
    Which depends on another VBA function, BITAND:

    Code:
    Public Function BITAND(x As Long, y As Long)
    BITAND = x And y
    End Function
    GetRoundFloat16 uses simple rounding. It can be adjusted if/when we determine the actual rounding method in use.

    There's another version, GetTruncFloat16, which doesn't add the roundBit value, for comparison purposes, though I'm hard pressed to find anything where any difference shows up at all.

    Also note that trying to do the rounding this way can break the function when combining terms (eg: haste + fast cast) due to excessive bit shifting. The truncation version doesn't have that problem.
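For cross-checking outside Excel, the truncation variant reduces to keeping the top 11 significant bits of a positive integer. A Python sketch of that (my port, not the VBA itself):

```python
def trunc_float16(x):
    """Truncate a positive integer to its 11 most significant bits,
    mimicking float16's significand (a sketch of GetTruncFloat16)."""
    extra = max(x.bit_length() - 11, 0)   # number of low-order bits to drop
    return (x >> extra) << extra

# 180 * 4 * 75 = 54000 = 1101001011110000b; the lowest significant set bit
# falls below the 11-bit window and gets dropped:
print(trunc_float16(54000))   # -> 53984
```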




    Testing Haste first vs Fast Cast first, using these formulas:

    Haste first:
    Code:
    Haste: GetTruncFloat16(4*recast*(1024-hasteVal))/(4*1024)
    FC: GetTruncFloat16(GetTruncFloat16(2048*hastedRecast*(100-fcVal))/100)/2048
    FC first:
    Code:
    FC: GetTruncFloat16(GetTruncFloat16(4*128*recast*(100-fcVal))/100)/(4*128)
    Haste: GetTruncFloat16(4096*fcRecast*(1024-hasteVal))/(4096*1024)
    Extra multipliers are in there to ensure the fractional portion is available for bit manipulation.

    Several tests quickly showed haste to be calculated first, which matches current understanding.

    Edit: Updated the Float16 formula; exponent needs a tiny addition to prevent issues with rounding of the 64-bit double value.

  17. #157
    Chram
    Join Date
    Sep 2007
    Posts
    2,526
    BG Level
    7
    FFXI Server
    Fenrir

    Confirm FC as /100

    Aero III, 1% fast cast recast (Loq Earring), observed recast 24 s
    If it was /1024 or /2048, it would be 25 s. Would need to go up to /16368 before that would generate a 24 s recast.

    Difference between exactly /100, and converted to Float16 value

    Klimaform with 25% Fast Cast recast (so 50% Fast Cast) will be 135 if it's exact (at least for double-precision calculation), or 134 if it's half-precision.

    Recast: 2:15 == 135 s

    180 * 4 * 75 = 1101 0010 1111 0000b
    Which has 12 significant bits, which means the bottom one gets truncated using the VBA function.

    However 180 s spells get stored differently. Actual storage in dats is: 1101 0000b (208 decimal)

    If we use 52.00, the basic output is 10 0111b (39)

    If we then do a second calculation with the deducted bits of 10 0000 0000b (512 decimal == 128.00), we get 96

    39 + 96 = 135

    So we can't actually determine anything there. Like haste, it's impossible to distinguish between double precision and half precision using fast cast alone, as there are simply no instances of integer overlap (I guess this should probably be obvious, since the integer portion of the calculation comes from the most significant bits).






    Going back to the sample values we checked for (FC values are recast percents).

    Code:
    Spell         Haste     FC    Observed     Float16(haste first)   Float16(FC first)   Mote-orig 
    Cure III        162      1           4                       5                   4            4
    Cura            162      1          25                      24                  24           24 
    Protectra IV    162      1          14                      14                  14           14
    Protectra III   172      1          13                      13                  14           13
    Stoneskin       231      1          23                      22                  22           23
    Reraise         231      1          46                      45                  45           46 
    Freeze          150      4          34                      33                  33           33 
    Blizzard II     150     11          15                      15                  15           15 
    Blizzard         81     15           8                       8                   9            8
    Regardless of order chosen, there are several failures in the predictions. This doesn't look good. Possibly it's an error in my implementation.

    I also went back to double-check that everything was still in sync with the original comparisons. For the values you said failed under the 'original model', the results you reported aren't the same as the results I generate. It actually gets 4/5 correct, rather than the 1/5 shown in your chart. It still fails one, though, along with the one from my spell list.

    I decided to try one more tweak: change the fractional bit resolution on the original model from 7 bit to 8 bit. That made Freeze correct (34 seconds), but made Blizzard and Reraise incorrect, and didn't fix Cura. So that's not the direction to go.

  18. #158
    Groinlonger
    Join Date
    Oct 2006
    Posts
    2,964
    BG Level
    7
    FFXI Server
    Fenrir

    You are making many illogical assumptions about how float16 would/needs to be implemented in any system. Building on those fallacies has led you to some false conclusions. I'm not trying to be an asshole; I'm just asking you to rethink things from a programming perspective. Storing this type of data as floating points and then immediately adding them together before some other calculation is far-fetched and costly performance-wise. It has already been validated that the base for Fast Cast is 100 and that the numerator is an integer; otherwise there is no simple way to implement any system that produces the error that it does. Also, on a somewhat random note, that Aero III Loq Earring test is identical to the one I did that led me to initially abandon this effort.

    Now, I'm going to suggest something that may seem illogical based on what we've observed, but I think it may actually be true. The base for haste might not be 1024. I think it could be 1000, or maybe even 100 depending on how things were implemented. Observations regarding truncations to 1/1024 may have simply been coincidence because that amount can exactly be represented by float16 in most of the discretized values, and that an increase in this amount may have been much more likely (and perhaps required) to produce a change in the again discretized output.

    Consider the Victory March tiers Kirschy did a long time ago.

    http://trutlels.com/omenfiles/vicmarchchart.JPG

    The tiers are non-linear. This seems more likely the result of discretization error than design.

    The testing you did two posts ago can evidence this possibility. The real number 0.97 will need to be discretized to either 0.969726562500000 or 0.970214843750000 depending on the rounding method employed. Round to the nearest would result in the latter value, and this is said to be the default method for IEEE 754 floating point.
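This can be checked directly: the two float16 grid points around 0.97 (grid spacing 2^-11 for values just below 1.0) are exactly the values quoted, and round-to-nearest picks the upper one.

```python
import math

# Float16 grid spacing for values in [0.5, 1.0) is eps * 2**-1 = 2**-11.
n = 0.97
spacing = 2.0 ** -11
nmin = math.floor(n / spacing) * spacing   # round-toward-zero result
nmax = nmin + spacing                      # round-toward-+inf result
nearest = nmax if (n - nmin) > (nmax - n) else nmin
print(nmin, nmax, nearest)
```

This prints 0.9697265625 and 0.97021484375 as the bracketing values, with round-to-nearest selecting 0.97021484375.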

    We should definitely take a look into Haste again before trying to analyze the product of Haste and Fast Cast.

  19. #159
    Chram
    Join Date
    Sep 2007
    Posts
    2,526
    BG Level
    7
    FFXI Server
    Fenrir

    You are making many illogical assumptions about how float16 would/needs to be implemented in any system.
    I already noted that it's possible my implementation may be flawed. I provided the exact code that I'm using, which is about the fourth iteration of the math in Excel (and dealing with restrictions on what I can do in Excel). Feel free to review and critique it, and suggest a better approach.

    Storing this type of data as floating points and then immediately adding them together before some other calculation is far fetched and costly performance wise.
    That's part of what I was trying to get at with the 9% comparison.

    The base for haste might not be 1024. I think it could be 1000, or maybe even 100 depending on how things were implemented.
    Addressing this as well

    1) If haste is stored as /100 and base stored values are added together before calculating (ie: 5/100 + 4/100 = 9/100), Aero III recast would be 22 seconds.
    2) If haste is stored as /100 and base stored values are converted to float16 before being added together, Aero III recast would be 23 seconds.
    3) If haste is stored as /1000 and base stored values are added together before calculating (ie: 50/1000 + 40/1000 = 90/1000), Aero III recast would be 22 seconds.
    4) If haste is stored as /1000 and base stored values are converted to float16 before being added together, Aero III recast would be 23 seconds.
    5) If haste is stored as /1024 and base stored values are added together before calculating (ie: 51/1024 + 40/1024 = 91/1024), Aero III recast would be 23 seconds.
    6) If haste is stored as /1024 and base stored values are converted to float16 before being added together, Aero III recast would be 23 seconds.

    Observed recast: 23 seconds.

    So if base stored values are added together before converting to floats, only base /1024 generates a valid result. If base stored values are converted to float16 before adding, all of them produce correct results.

    Then there's the /2048 test I did with Curaga IV. At 7% haste, if the original value is 7/100 (or 70/1000), converting that to float16 lands it on a /2048 resolution point. However there's further options there depending on rounding, so looking at the possibilities:

    Options:
    1) Convert 7% to float16, then subtract from 1
    2) Subtract 7% from 1 (ie: 93/100 or whatever), then convert to float16

    Possible 7% values:
    a) 0.0699462890625000 [round-zero, truncate]
    b) 0.0700073242187500 [round-nearest]

    Possible 93% values:
    0.929687500000000 [round-zero, truncate]
    0.930175781250000 [round-nearest]
    0.930664062500000 [can't reach]

    Possible 1-7% values:
    0.929687500000000 [round-zero for both 7% and this]
    0.930175781250000 [round-nearest for both 7% and this]
    0.930664062500000 [can't reach]

    71/1024:
    0.0693359375000000 [exact]

    1 - 71/1024 value:
    0.930664062500000 [exact]

    Recasts:
    0.929687500000000 [9 seconds]
    0.930175781250000 [9 seconds]
    0.930664062500000 [10 seconds]

    Observed recast: 10 seconds

    So the only valid result is (1 - 71/1024).


    This doesn't really answer whether or not the values are converted to float before adding them together (and I think that's impossible to know, now, since both ways will produce identical results if haste is /1024), but it does pretty much require that haste be stored originally in units of /1024.


    Re: Marches
    The tiers are non-linear. This seems more likely the result of discretization error than design.
    Actually, it looks like a pattern I've seen in the cure tiers. Basically, they pick a starting point and an ending point, and work out how much gain there is between those two points, and that generates the slope. Usually it's a nice even number, but occasionally you get strange ratios.

    In this case, 27 skill gains 4 points of haste. So I'd guess the full range is 540 skill (60 to 600), with a gain of 80 haste across that range. Victory March caps at 96/1024 haste at 600 total skill; 80/1024 from skill would put the starting value at 16/1024, which also matches the additional amount of haste provided per +1 instrument. As for why it would have a minimum skill of 60, I don't know; it's just a guess on my part.
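That guess can be written out as a hypothetical linear ramp with floored output: 16/1024 haste at 60 skill, gaining 80/1024 over 540 skill (i.e., 4 haste per 27 skill). This whole model is speculative, per the above; the code just shows the guess is internally consistent with the 96/1024 cap at 600 skill.

```python
# Hypothetical Victory March haste model from the guess above:
# 16/1024 base at 60 skill, +80/1024 gained linearly over 540 skill,
# result floored to integer /1024 units.
def march_haste(skill):
    skill = min(max(skill, 60), 600)
    return 16 + (skill - 60) * 80 // 540

print(march_haste(600))   # -> 96 (the observed cap)
print(march_haste(60))    # -> 16
```

The floor division naturally produces the non-linear-looking tier boundaries: each tier is 27/4 skill wide on average, so tier widths alternate between 6 and 7 skill.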

  20. #160
    Groinlonger
    Join Date
    Oct 2006
    Posts
    2,964
    BG Level
    7
    FFXI Server
    Fenrir

    Actually, it looks like a pattern I've seen in the cure tiers. Basically, they pick a starting point and an ending point, and work out how much gain there is between those two points, and that generates the slope. Usually it's a nice even number, but occasionally you get strange ratios.
    Could also be discretization error in the case of Cures.

    In those Aero III cases you listed, where are you getting the data from? Also, for the Curaga IV cases, you have come to an erroneous conclusion.

    93/100 (or 930/1000) = 0.93 => 0.929687500000000[round towards zero] or 0.930175781250000[round towards nearest]

    0.929687500000000*10.75 = 9.994140625 => 9.99218750000000[round towards zero] 10[round towards the nearest]

    or
    0.930175781250000*10.75 = 9.9993896484375 = > 9.99218750000000[round towards zero] 10[round towards the nearest]

    So if you assume round towards zero is the rounding method, then it doesn't work, but if you choose round to the nearest it still produces 10. Numbers cannot arbitrarily be truncated in float16; the result of any calculation needs to be completed by rounding with the method you've decided upon (round towards zero, round to the nearest, etc.), and then truncated to seconds.
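    Worked through in code, with round-to-nearest applied after each float16 step and truncation to whole seconds only at the end (the helper name is mine, not anything from the game):

```python
import math

def f16_round_nearest(n):
    # Round positive n to the nearest float16 grid point (11-bit significand).
    # Ties round up here for simplicity, rather than half-to-even.
    m, e = math.frexp(n)                 # n = m * 2**e with 0.5 <= m < 1
    spacing = 2.0 ** (e - 1 - 10)        # local grid spacing, eps * 2**c
    nmin = math.floor(n / spacing) * spacing
    nmax = nmin + spacing
    return nmax if (n - nmin) >= (nmax - n) else nmin

step1 = f16_round_nearest(0.93)          # discretize the 0.93 multiplier
step2 = f16_round_nearest(step1 * 10.75) # multiply, then discretize again
print(step1, step2, int(step2))
```

step1 comes out as 0.930175781250000 and step2 as exactly 10.0, so truncating to seconds gives the observed 10, matching the round-to-nearest branch above.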
