From: Viktors Berstis
Subject: bug#35531: problem with ls in coreutils
Date: Thu, 2 May 2019 17:41:45 -0700
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:52.0) Gecko/20100101 Firefox/52.0 SeaMonkey/2.49.4
The November 10, 1999 build of the "ls" command (version 3.16) is lightning fast on Windows, even on the large directory, but unfortunately stops at 32K files. The newer version of "ls" built for Windows has the slowness problem. By "newer" version, I mean the 64-bit Windows build dated 4/20/2005 at 11:41 AM, exe size 180736 bytes, md5sum 47ba770d80382cbd66ddba13924c1417, version 5.3.0. I didn't see a place to download a newer binary version to try.
BTW, booting the machine with Ubuntu, ls on that same large directory is very fast.
- Viktors Berstis

Paul Eggert wrote:
It's probably something inside the kernel (e.g., filesystem code). What does the shell command 'strace -o /tmp/tr -s 128 -T ls -U -1 dirname | wc' say? You can see which system calls are taking the most time by then running 'sort -t"<" -k2n /tmp/tr'. On my platform (Fedora 29 x86-64 ext4, an older desktop with only disk drives), the hoggiest syscalls are getdents64, which are as much as 24 ms per call when the data are not cached, and more like 0.7 ms per call when the data are cached (each such call retrieves about 1000 directory entries). What do you see?
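As a rough sketch of that workflow (assuming strace's `-T` output format, where each line ends with the elapsed seconds in angle brackets; the trace file here is a made-up sample, not real output from the reporter's machine), the total time per syscall can be summarized like this:

```shell
# Hypothetical trace sample in the shape produced by: strace -o /tmp/tr -T ls -U -1 dirname
cat > /tmp/tr.sample <<'EOF'
openat(AT_FDCWD, ".", O_RDONLY|O_DIRECTORY) = 3 <0.000020>
getdents64(3, 0x55e1, 32768) = 32744 <0.024113>
getdents64(3, 0x55e1, 32768) = 32712 <0.000712>
getdents64(3, 0x55e1, 32768) = 0 <0.000004>
close(3) = 0 <0.000008>
EOF

# Split each line on '(', '<', and '>'; field 1 is the syscall name and
# the next-to-last field is the elapsed time. Sum per syscall, then
# print the hoggiest first.
awk -F'[(<>]' '/</ { total[$1] += $(NF-1) }
  END { for (s in total) printf "%s %.6f\n", s, total[s] }' /tmp/tr.sample |
  sort -k2 -rn
```

On a trace like the sample above, getdents64 dominates, matching the observation that each uncached getdents64 call can cost tens of milliseconds while retrieving on the order of a thousand directory entries.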