There is one case which I think is internally inconsistent. Going by the
algorithm definition, [a:z:b] should have 'a' as its first element unless
the range is nonsensical, in which case the result is the empty set.
Therefore, the ## above should have -0.000000 as the first element. The
last element of [a:z:b] is not necessarily 'b'; it is whatever the
computation turns out to be, e.g., 6 + 2 * -3, which is 0, not -0.
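The IEEE 754 behavior underlying that example can be checked outside Octave. A minimal Python sketch (standing in for the range computation, not Octave's actual code):

```python
import math

# Under IEEE 754 round-to-nearest, adding values of equal magnitude and
# opposite sign yields *positive* zero, so a computed range endpoint like
# 6 + 2*(-3) is +0, never -0.
last = 6.0 + 2 * -3.0
print(math.copysign(1.0, last))   # 1.0: the result is +0

# A direct copy of the base value, by contrast, preserves the sign bit.
base = -0.0
print(math.copysign(1.0, base))   # -1.0: the sign bit survives the copy
print(base == 0.0)                # True: comparison cannot tell them apart
```

This is why the sign bit question only matters for the first element: only there is the value copied rather than computed.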
I agree with you. Unless it is a null range, the first element should be a
direct copy of the base value of the range, including the sign bit.
Notice that the *** above is internally the same scenario as the incorrect
result ##. Rik, could you please think this over and check the pertinent
case in the code for that scenario? Perhaps there is a bug; if you agree
there is, then go ahead and file a bug report.
I think this is an inconsistency, although I'm not sure we want to fix it.
The display routine makes a guess at whether something is an integer, even
if it happens to be stored in a floating-point format.
For example, the following shows how Octave displays a floating-point
number whose fractional part exceeds the current display precision.
x = [0 2 4.0000001]
x =
0.00000 2.00000 4.00000
You can tell that these are not integers, even though they appear to be,
because Octave has put in the decimal point and is showing extra zeros of
precision.
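The rounding that hides the fractional part can be reproduced with C-style formatting; here Python's "%" operator stands in for Octave's display routine (an assumption about the mechanism, not a quote of the actual code):

```python
# Octave's default short format shows five decimal places; a fractional
# part beyond that precision rounds away to trailing zeros.
print("%.5f" % 4.0000001)   # 4.00000 -- looks integral, but isn't
print("%.5f" % 2.0)         # 2.00000 -- indistinguishable at this precision
```

At this precision, 4.0000001 and 4.0 print identically; only the presence of the decimal point signals "this is floating point", not "this is an exact integer".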
By contrast, true integers don't have any decimal portion and can be
identified by their lack of a decimal point:
x = [0 2 4]
x =
0 2 4
I will have to check, but I don't think we are using the integer format
code '%d' when we print integers. Instead, I think we are printing them as
floats with the precision of the decimal portion set to zero.
For example,
x = [-0 2 4];
sprintf ("%4.0f ", x)
ans =   -0    2    4
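That would explain the "-0" in the output: a float conversion with zero decimal places preserves the sign bit of -0.0, while a true integer conversion cannot, because integer zero has no sign. A small Python check of the same C-style conversions:

```python
# Float format with zero decimal places keeps the sign of negative zero...
print("%4.0f" % -0.0)   # "  -0"
# ...while the integer format cannot: integer zero carries no sign bit.
print("%4d" % 0)        # "   0"
```

So switching the integer-looking case to '%d' would silently drop the -0, which may or may not be what we want.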