guix-devel

Re: How long does it take to run the full rustc bootstrap chain?


From: Bengt Richter
Subject: Re: How long does it take to run the full rustc bootstrap chain?
Date: Mon, 31 Oct 2022 20:02:50 +0100
User-agent: Mutt/1.10.1 (2018-07-13)

Hi again, thanks for your reply...

On +2022-10-27 10:35:02 -0400, Maxim Cournoyer wrote:
> Hi,
> 

(Oops, pasting back the alternative I thought would be faster)
> > So above combo command line now gives me
> > --8<---------------cut here---------------start------------->8---
> > SIZE MODEL                          TYPE  TRAN   VENDOR   NAME
> > 465.8G Samsung SSD 970 EVO Plus 500GB disk  nvme            nvme0n1
> > 
> > 01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981
> > $ 
> > --8<---------------cut here---------------end--------------->8---

[...]

--8<---------------cut here---------------start------------->8---
> $ lsblk -o size,model,type,tran,vendor,name|grep -Ei 'ssd|model';echo;lspci |grep -i nvme
>   SIZE MODEL                      TYPE  TRAN   VENDOR   NAME
> 465.8G Samsung SSD 860 EVO 500GB  disk  sata   ATA      sda
> 931.5G Samsung SSD 840 EVO 1TB    disk  sata   ATA      sdc
> --8<---------------cut here---------------end--------------->8---
> 
> Building Rust is mostly CPU-dependent; I think fast single-thread
> performance is key, as not that much happens in parallel, IIRC.  The
> 3900X is a 12-core (24 logical) beast.
>

Hm, just TRAN sata, no nvme, so disk access is going to be slower
than NVMe, but what was the effect on what you timed?

Is there an easy way to get a measure of how many GB went
through those SATA channels during what you timed? That
would give an idea of what faster physical disk access
would do for you. If many people are waiting longer
than they like, maybe they would chip in to fund an upgrade,
to feed that 12(24)-core "beast" :-)
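
FWIW, one rough way to get that number (just a sketch -- I'm
assuming the build hits sda, field positions as in the kernel's
Documentation/admin-guide/iostats.rst, and note this counts *all*
traffic on that disk during the run, not only the build's):

--8<---------------cut here---------------start------------->8---
# sectors read (field 6) and written (field 10), 512-byte units
awk '$3=="sda"{print $6, $10}' /proc/diskstats > /tmp/io.before
#   ... run the timed rust build here ...
awk '$3=="sda"{print $6, $10}' /proc/diskstats > /tmp/io.after
# GB moved through the disk while the build ran
paste /tmp/io.before /tmp/io.after |
  awk '{printf "read %.1f GB, wrote %.1f GB\n",
        ($3-$1)*512/1e9, ($4-$2)*512/1e9}'
--8<---------------cut here---------------end--------------->8---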

I'd bet it spends a lot of time waiting, if not more than computing :)
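
That's easy enough to check, at least roughly (again a sketch,
assuming procps' vmstat and sysstat's iostat are installed):

--8<---------------cut here---------------start------------->8---
# sample the CPU breakdown every 5 seconds while the build runs;
# a big "wa" (iowait) column means cores stalled on the disk,
# high "us" means they really are computing
vmstat 5
# or watch the disk itself: %util near 100 means the
# device, not the CPU, is the bottleneck
iostat -dx sda 5
--8<---------------cut here---------------end--------------->8---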
--
Regards,
Bengt Richter
