[bug#53121] [PATCH] gnu: ceres: Update to 2.0.0.
From: Ludovic Courtès
Subject: [bug#53121] [PATCH] gnu: ceres: Update to 2.0.0.
Date: Wed, 19 Jan 2022 11:26:54 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/27.2 (gnu/linux)
Hi,
Felix Gruber <felgru@posteo.net> skribis:
> Unfortunately, I'm getting mixed results for the benchmarks. In most
> cases, I got slight (<10%) improvements in runtime, but there are also
> some benchmarks that were worse with the --tune flag. I'm wondering
> whether the compiler flags set by the --tune option are correctly used
> by the custom 'build phase of the ceres-solver-benchmarks package. I
> didn't have the time to look closer into it as I'm currently in the
> middle of moving to another country.
OK.
> Anyways, I've attached the results of benchmark runs that I've
> generated using guix commit 7f779286df7e8636d901f4734501902cc934a72f
> once untuned and once tuned for broadwell CPUs.
> My laptop on which I ran the tests has a Quad Core AMD Ryzen 7 PRO
> 2700U CPU with 2200 MHz.
Could it be that ‘znver3’ or something works better on those CPUs?
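(For reference, a quick way to try other micro-architectures is to rebuild the benchmarks in tuned and untuned shells, as the attached script does. A minimal sketch, assuming the ‘ceres-solver-benchmarks’ package name used in this thread; note the Ryzen 7 PRO 2700U is a Zen 1 core, for which ‘znver1’ is the matching target, while ‘znver3’ targets Zen 3 parts:)

```shell
# Tuned shell for a given micro-architecture (znver1 matches Zen 1 cores):
guix shell --tune=znver1 ceres-solver-benchmarks

# Untuned baseline for comparison:
guix shell ceres-solver-benchmarks
```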
> In the attachments you find
> * a script run_benchmarks.sh used to run the benchmarks in tuned and
> untuned guix shells,
> * text files ending in `-tuned` or `-untuned` which contain the
> results of those benchmark runs,
> * a script compare.sh which calls a Python script compare-results.py
> to generate files ending in `-diff` that contain the relative change
> between untuned and tuned benchmarks (negative time and CPU
> percentages mean the tuned benchmark was faster, while for the number
> of iterations, positive percentages mean the tuned benchmark had run
> more iterations).
Interesting, thanks for taking the time to run these benchmarks.
It’s hard to draw conclusions. I wonder how noisy these measurements
are and whether the differences we’re seeing are significant. Food for
thought!
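(One way to gauge the noise question: repeat each benchmark several times and compute the run-to-run coefficient of variation; a tuned-vs-untuned delta smaller than that spread is not distinguishable from noise. A minimal sketch, with illustrative numbers:)

```python
import statistics

def noise_pct(samples):
    """Run-to-run coefficient of variation (%) of repeated timings
    of one benchmark."""
    return 100.0 * statistics.stdev(samples) / statistics.mean(samples)

# If repeated runs of a benchmark spread by ~5%, a 3% tuned-vs-untuned
# difference cannot be called significant.
print(noise_pct([48.0, 50.0, 52.0]))
```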
Ludo’.