
Re: [gcmd-usr] A rather strange g-c experience


From: Michael
Subject: Re: [gcmd-usr] A rather strange g-c experience
Date: Tue, 8 Jan 2019 16:10:23 +0100

Hi Ken,

I've never had a comparable issue with g-c, but I have often had such issues with 
USB in general.

From your report I conclude that you are probably copying at USB 2.0 speed 
(and not even fully), so I assume your motherboard is old. Mine is too, and by 
now the USB ports are flaky as well. So it happens that an inserted USB drive 
'dies'. 

Anything related to drive I/O can zombie a Linux OS. Just some 15 minutes ago I 
accidentally issued an hdparm sleep command to a SATA drive which was already in 
sleep mode, and the process immediately went zombie (it could not be killed even 
with -9) and prevented even a regular shutdown.
I think Linux I/O drivers have serious weaknesses here.
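For reference, the hdparm power commands I mean look roughly like this (a 
sketch; /dev/sdX is a placeholder for the actual drive):

    sudo hdparm -y /dev/sdX    # put the drive into standby (spin down)
    sudo hdparm -Y /dev/sdX    # put the drive into the deeper sleep mode
    sudo hdparm -C /dev/sdX    # query the current power state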

You should also take into account that flash drives have an internal cache. If 
you transmit a huge amount of data, the file manager will signal 'done' but the 
drive is still busy writing for another 2-3 minutes. If the flash controller 
has to TRIM the already written storage blocks (which happens all along during 
normal operation), it might even be something like 5 minutes in the worst case. 
I can see this with my old USB 2.0 sticks all the time. (You need a drive with 
an LED to be able to recognize its activity clearly.) If you remove the drive 
before the internal cache and trim operations are finished, the file system is 
immediately broken and data is probably lost, even if fsck can fix it. 
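What helps (just a sketch; device and mount point are placeholders) is to flush 
and detach explicitly before pulling the stick, and to wait until the commands 
return and the LED stops blinking:

    sync                               # flush the kernel's write cache to the device
    udisksctl unmount -b /dev/sdX1     # or: sudo umount /media/usb
    udisksctl power-off -b /dev/sdX    # tell the drive it is safe to remove

Note that sync only flushes the OS side; the drive's own controller can keep 
writing for a while afterwards, which is exactly the LED activity described 
above.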

Once a FAT32 drive is not cleanly unmounted (for whatever reason), the dirty bit 
is set and it refuses to mount again. Run fsck (or dosfsck) to evaluate.
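With dosfstools installed, that check looks roughly like this (/dev/sdX1 is a 
placeholder):

    sudo fsck.vfat -n /dev/sdX1    # check only, change nothing
    sudo fsck.vfat -a /dev/sdX1    # repair automatically, clears the dirty bit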

So, my best guess is that g-c was requesting some disk info (maybe just the 
directory listing) which terminal or caja don't request, but the drive was not 
ready. Or else, the mounting itself was not working (fast enough?) for g-c, 
which might be a bug; but it then worked from the terminal because the g-c 
attempt had already changed things. It would be tedious to reproduce with your 
complex setup; it should be done with a simple test environment. But honestly, 
since USB (and especially 2.0) is already flaky kernel-wise, I would not want to 
do it; there are definitely more serious problems in the pipeline.

As a rule of thumb, g-c is not the ideal tool to access problem drives or for 
copying huge amounts of data, since it gives too little feedback. You should 
always test new drives from a terminal, always run an initial fsck, and if it's 
about huge amounts of data, I recommend midnight commander (a terminal app), 
which has nice indicators and its own copy modes (like copying the contents of 
directories one after the other, as opposed to 'cp', which copies based on file 
size).
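If you prefer plain commands over mc, something like the following gives 
comparable feedback; this is my own sketch (rsync instead of mc, paths are 
placeholders), not what g-c or mc do internally:

    sudo fsck -f /dev/sdX1                         # full check before first use
    rsync -rt --progress ~/data/ /media/usb/data/  # per-file progress while copying
    sync                                           # wait until everything is written out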

As for the slow USB (if my guess was right): it really pays off to copy on a 
fast USB 3 machine. If you don't have one, consider buying a PCI Express <-> 
USB adapter card, which will give you USB 3 ports at PCIe speed. (But if your 
USB drive, or even just the cable to a USB hard drive, is itself only specified 
for 2.0, you will only get USB 2 speed at most.)
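You can check which speed a device actually negotiated with lsusb; 480M means 
USB 2.0, 5000M means USB 3.0:

    lsusb -t    # the last column shows the negotiated speed per device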


>  Naturally the journal feature of ext4 took up just enough space that the 
> files would not fit.

LOL

There is exFAT, which solves many FAT32 problems (especially the file size and 
naming limits) while still leaving you maximal usable space. With Linux 
desktops, you need to install the drivers manually, depending on the OS (yes, 
with Debian), and have to mount with sudo. 
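On Debian and derivatives that used to mean roughly the following (package 
names depend on the release, and newer kernels ship a native exfat driver, so 
treat this as a sketch):

    sudo apt install exfat-fuse exfat-utils    # FUSE driver plus mkfs/fsck tools
    sudo mount -t exfat /dev/sdX1 /media/usb   # device and mount point are placeholders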

You may save some EXT4 space with the mke2fs options marked 'XXX' below (see 
the example command after the list): 

-L ......... label
-n ......... simulate only (dry run)
-c -c ...... slow read/write bad-blocks test (needs an awful lot of time!)
-t ext4 .... filesystem type
-j ......... create journal
-m 0 ....... 0% of blocks reserved for the superuser       XXX
-O ......... feature options: large_file,dir_index,sparse_super       XXX

             Note: extent,resize_inode are default for ext4
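
Put together, a space-saving invocation might look like this (a sketch 
mirroring the XXX options above; the device and label are placeholders, and -n 
only simulates):

    sudo mke2fs -t ext4 -L BACKUP -m 0 -O sparse_super,large_file,dir_index -n /dev/sdX1
    # drop -n once the simulated output looks right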





