From: Jaroslav Hajek
Subject: Re: unary mapper system redesigned + a few questions
Date: Wed, 18 Nov 2009 11:23:42 +0100
On Wed, Nov 18, 2009 at 9:13 AM, John W. Eaton <address@hidden> wrote:

| On 18-Nov-2009, Jaroslav Hajek wrote:
|
| | On Tue, Nov 17, 2009 at 9:40 PM, John W. Eaton <address@hidden> wrote:
| |
| | > On 17-Nov-2009, Jaroslav Hajek wrote:
| | >
| | > | No. Quoting the C++ standard:
| | > |
| | > |   template<class T> complex<T> log(const complex<T>& x);
| | > |
| | > |   Notes: the branch cuts are along the negative real axis.
| | > |
| | > |   Returns: the complex natural (base e) logarithm of x, in the range
| | > |   of a strip mathematically unbounded along the real axis and in the
| | > |   interval [-i times pi, i times pi] along the imaginary axis.  When
| | > |   x is a negative real number, imag(log(x)) is pi.
| | > |
| | > | ...end of story.
| | >
| | > Sorry, but I don't see why
| | >
| | >   (0, -1) / (large representable number)
| | >
| | > should be considered to have a complex imaginary part yet
| | >
| | >   (0, -1) / inf
| | >
| | > should not.
| |
| | This is what Octave does for (0, 1), so substitute in (0, 1) and use
| | the same reasoning.
|
| I've thought some more about the log example I gave, and I can also now
| see some justification for the result that the math library produces.
I think this is general: you will often want a different result from some operation because it is employed in a larger computation that puts certain corner cases into a different perspective, and you then need to figure out a way to force your logic upon the result. I think the discussion about 0*NaN that we had earlier this year was similar.
| I also agree that we will have a lot of potential confusion and
| trouble if we try to do this operation in a way that is different from
| what the implementation languages (C++, C, Fortran) compute.  That's
| also a reason for not trying to introduce a pure imaginary type in
| Octave without having it in the implementation languages, unless we
| want to rewrite those too (I don't)...
|
| Maybe these issues are also arguments against any automatic narrowing
| from complex to real, since that is also different from what the
| implementation languages do.  At this point I'm not convinced that
| automatic narrowing from complex to real is a good thing to do.
It may be confusing and cause problems in some cases, but it has its practical merits. In the Matlab/Octave language, automatic data-driven conversions are simply a fact of life.
| But I don't see that we have much choice about that if we want
| compatibility with Matlab.  I'm certain that people would notice if we
| never narrowed complex to real.
Most likely, yes. The question is whether the increased consistency would outweigh the added complexity for user scripts. My personal guess is no, but that's just my feeling.
| For real values, we should still preserve -0 and print it, correct?
| And we can still produce complex values with -0 imaginary part, using
| the complex function (though I guess it will easily be lost on
| subsequent operations)?
Yes, I think so. Extra information (almost) never hurts, and printing the sign of zero is also what C's printf and C++'s ostreams do by default. I think we can regard the fact that Matlab hides the sign of zero (while still computing with it) as a (minor) design defect that we should not reproduce.
Note that Matlab also displays complex (0, -0) as "0", even though 1 / imag (complex (0, -0)) is -Inf, hiding the potentially useful information about the complex nature of the expression from the user. I think Octave does the better thing here.
| OK, then I suppose you should check in the change.