lilypond-devel

Issue 4934: analysis indicates new issue


From: ArnoldTheresius
Subject: Issue 4934: analysis indicates new issue
Date: Wed, 30 Oct 2019 05:49:24 -0700 (MST)

>Related to issue 4934:

Hello,
as I got this assertion failure on multiple scores on Win7 (scores which
I created in sequence), I tried to analyse the problem.

All my affected scores had one system per page; on the last page the
system is compressed.
If I apply tiny changes to the global-staff-size, there is roughly a 50 %
chance that the LilyPond run will succeed; otherwise it terminates with
the assertion from function
Page_breaking::min_page_count() in file lily/page-breaking.cc.
This problem occurs on Windows (MinGW compilation), but not in
Linux compilations (executed in a LILYDEV VM on my Windows computer).

Tracing this function ‘Page_breaking::min_page_count()’ with additional
text output in the LILYDEV VM showed that the critical code must be in
line 1178. The comparison there is sensitive to rounding differences
between 80-bit and 64-bit floating-point variables, a typical problem in
IA32 compilations for Windows, where function return values are passed
back in the 80-bit floating-point register of the x87 arithmetic
coprocessor.
If the last page contains a single compressed system, the stored (double)
value ‘cur_rod_height’ and the function return value (in an x87 register
on IA32) of ‘cached_line_details_.back().full_height()’ are equal in the
Linux version.
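
To illustrate the mechanism, here is a small standalone test program
(a sketch, not LilyPond code; the names and compile flags are mine).
Built as 32-bit with x87 math, e.g. ‘g++ -m32 -mfpmath=387 -O0’, the
freshly returned value may still carry the 80-bit mantissa while the
stored double has already been rounded, so the two can compare unequal;
built with ‘-msse2 -mfpmath=sse’ (or on Linux/x86-64) they compare equal:

 #include <iostream>

 static double
 tenth ()
 {
   // volatile keeps the compiler from folding the division at compile time
   volatile double one = 1.0, ten = 10.0;
   return one / ten;   // on x87 this result may stay at 80-bit precision
 }

 int
 main ()
 {
   double stored = tenth ();   // rounded to 64 bits when written to memory
   if (stored != tenth ())     // the fresh return value may still be 80-bit
     std::cout << "x87 excess precision: stored and returned values differ\n";
   else
     std::cout << "no visible excess precision (e.g. SSE2 math in use)\n";
   return 0;
 }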

Together with ‘harm6’ from the German LilyPond forum I got a MinGW
compilation for Windows with tracing text output. It proved what I
expected:

(1)     The assertion is thrown because the function return value of
‘cached_line_details_.back().full_height()’ is ‘more exact’ (has a longer
mantissa) than the double value stored in cur_rod_height. The function
return value is an 80-bit floating-point value and is not rounded to the
mantissa length of the 64-bit floating-point value it is compared with.

(2)     The check in line 1178 of lily/page-breaking.cc, which is:
&& cur_rod_height > cached_line_details_.back ().full_height ())
can be replaced with:
&& page_starter + 1 != cached_line_details_.size())
which no longer compares (inexact) floating-point numbers to check
whether there is only one system on the last page, and thereby solves
issue 4934 (see the sketch below).
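
To spell out the intent, here is a minimal sketch with hypothetical
helper names (not the real LilyPond code), assuming page_starter is the
index of the first system placed on the current page and
cached_line_details_ holds one entry per system:

 #include <cstddef>

 // Old form of the condition: compares two heights that are supposed to
 // be equal when a single system sits on the last page; fragile as soon
 // as one side carries x87 excess precision, as described in (1).
 static bool
 old_check (double cur_rod_height, double last_full_height)
 {
   return cur_rod_height > last_full_height;
 }

 // Proposed form: "is there more than one system left for the last page?"
 // decided purely by index arithmetic, so no floating-point comparison.
 static bool
 new_check (std::size_t page_starter, std::size_t line_count)
 {
   return page_starter + 1 != line_count;
 }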

Besides that, I found:

(3)     In the past, rounding to 64-bit floating-point numbers in the
MinGW compilation was forced by _FPU_SETCW() in function
configure_fpu() from file lily/main.cc.
In this MinGW test compilation that code was not reached!
As this can cause many other peculiar problems, not as simple to
detect as this issue 4934, a new issue should be opened:

/++++++++++++++++++
Repair the broken support of configure_fpu() in the
mingw installation of GUB
/++++++++++++++++++

For an easier check during GUB compilation I suggest adding
a conditional compile-time error (or warning, if you prefer) to the
empty dummy function configure_fpu() in lily/main.cc:

... starting at line 202 of lily/main.cc ...
 #include <fpu_control.h>
 static void
 configure_fpu ()
 {
   fpu_control_t fpu_control = 0x027f;
   _FPU_SETCW (fpu_control);
 }
 
 #else
 
 static void
 configure_fpu ()
 {
+/* throw a compile-time error if this is 32-bit MinGW with x87 math */
+#if defined (__MINGW32__) && defined (__code_model_32__) && !defined (__SSE2_MATH__)
+#pragma GCC error "FPU control word setup required in this MINGW compilation, but not found"
+#endif
 }
 
 #endif /* defined(__x86__) || defined(__i386__) */
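
To try the new condition outside of a full GUB build, one could compile
a tiny standalone probe like the following (again a sketch; I use
‘!defined (__x86_64__)’ instead of ‘__code_model_32__’ as a more widely
available guard, the rest of the macro set is the same as in the patch
above):

 /* fpu_probe.cc -- standalone sketch, not part of LilyPond.
    Expected behaviour on a 32-bit MinGW toolchain:
      i686-w64-mingw32-g++ -c fpu_probe.cc                       -> hits the error
      i686-w64-mingw32-g++ -msse2 -mfpmath=sse -c fpu_probe.cc   -> compiles
    On Linux or 64-bit Windows targets it should always compile.  */

 #if defined (__MINGW32__) && !defined (__x86_64__) && !defined (__SSE2_MATH__)
 #pragma GCC error "x87 math on MinGW without an FPU control word setup"
 #endif

 int
 main ()
 {
   return 0;
 }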


Furthermore, there are also command line options reported for MinGW
to set the floating-point precision.
But notice that ‘-mpc64’ did not help for a simple test program in my
MinGW compilation in the LILYDEV VM; that might have the same origin as
the broken configure_fpu() support in GUB.
‘-march=pentium4 -mfpmath=sse -msse2’ did work instead, but it drops
support for older CPU types.
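
As a runtime cross-check for whichever flags end up in the build, a
small probe can read the x87 control word directly (a sketch assuming an
x86 target and GCC-style inline assembly; the value 0x027f set by
configure_fpu() has this precision field at 2, i.e. 53-bit double
precision):

 #include <cstdio>

 int
 main ()
 {
   // Store the x87 control word; bits 8-9 are the precision-control field:
   // 0 = 24-bit (single), 2 = 53-bit (double), 3 = 64-bit (extended).
   unsigned short cw = 0;
   __asm__ __volatile__ ("fnstcw %0" : "=m" (cw));
   unsigned pc = (cw >> 8) & 0x3u;
   std::printf ("x87 control word 0x%04x, precision field %u (%s)\n",
                cw, pc, pc == 2 ? "53-bit" : pc == 3 ? "64-bit" : "other");
   return 0;
 }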


ArnoldTheresius





--
Sent from: http://lilypond.1069038.n5.nabble.com/Dev-f88644.html


