blandoon Tux's lil' helper

Joined: 05 Apr 2004 Posts: 136 Location: Oregon, USA
Posted: Tue Aug 24, 2010 9:23 pm Post subject: Can't delete ".Trash-1000"
I have a partition on my Gentoo server that is exported as a samba share, and it has a folder called ".Trash-1000" which is taking up a lot of space. I assume this got there when people who had the drive mounted via samba deleted files into the trash - but now I just want to get rid of them because they are causing problems with using dar to back up the whole partition.
If I try to delete the folder from the command line using rm, the rm command runs away with all of my memory and swap, and just hangs until I kill it. Granted, this server is not a powerful box, because it is in a very small and quiet enclosure (1GHz C7 processor and 512 MB RAM).
This isn't a disk hardware problem, because I have relocated this partition to a new drive (using lvm), and fsck says the file system is perfectly clean... do I just have so many files in here that my lightweight server can't handle it? If that's the case, do I have any other options to get rid of this thing? _________________ "Give a man a fire and he's warm for one night, but set fire to him and he's warm for the rest of his life..." |
rh1 Guru


Joined: 10 Apr 2010 Posts: 501
Posted: Tue Aug 24, 2010 9:27 pm Post subject:
What if you go into the directory and try deleting smaller sections instead of all at once? Does it still hang?
blandoon Tux's lil' helper

Joined: 05 Apr 2004 Posts: 136 Location: Oregon, USA
Posted: Tue Aug 24, 2010 9:57 pm Post subject:
There seem to be three subfolders - two smaller ones which I was able to delete, and a third one which is pretty big, and which always hangs when I try to delete it (or even to do an ls on it). _________________ "Give a man a fire and he's warm for one night, but set fire to him and he's warm for the rest of his life..." |
Hu Administrator

Joined: 06 Mar 2007 Posts: 23381
Posted: Tue Aug 24, 2010 10:19 pm Post subject:
Does it run away even if you do ls -f |head? By disabling sorting, you might reduce memory requirements for the listing. Also, it would be interesting to see the output of ls -ld on the offending directory, to give us an idea of how many files might be in it.
If ls -f works, try find -print |wc -l to get a count of its contents, including subdirectories. |
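Put concretely, the sequence might look like this (just a sketch; /mnt/share is a stand-in for wherever the partition is actually mounted):
Code: |
cd /mnt/share/.Trash-1000       # hypothetical mount point
ls -ld .                        # the directory's own size hints at how many entries it holds
ls -f | head                    # -f disables sorting, so this should return quickly
find . -print | wc -l           # count everything, including subdirectories
|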
blandoon Tux's lil' helper

Joined: 05 Apr 2004 Posts: 136 Location: Oregon, USA
Posted: Wed Aug 25, 2010 10:41 pm Post subject:
Thanks very much for that. ls -f worked, so I tried the following (took quite a while to come back):
Code: | # find -print |wc -l
7671674
|
I took a stab at trying to delete a few of the files using wildcards, but the rm command doesn't seem to be able to handle that either... any other suggestions? _________________ "Give a man a fire and he's warm for one night, but set fire to him and he's warm for the rest of his life..." |
krinn Watchman


Joined: 02 May 2003 Posts: 7471
Posted: Wed Aug 25, 2010 11:10 pm Post subject:
Well, if find can handle it but rm can't, xargs the rm:
find -print0 | xargs --null rm
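Spelled out a bit more (a sketch only; the -type f restriction, rm -f, and the empty-directory cleanup are additions of mine, not part of krinn's one-liner), run from inside .Trash-1000:
Code: |
find . -type f -print0 | xargs --null rm -f    # unlink the regular files in batches
find . -mindepth 1 -type d -empty -delete      # then remove the now-empty subdirectories
|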
Hu Administrator

Joined: 06 Mar 2007 Posts: 23381
Posted: Thu Aug 26, 2010 12:06 am Post subject:
blandoon wrote: | Code: | # find -print |wc -l
7671674 | |
Wow.
blandoon wrote: | I took a stab at trying to delete a few of the files using wildcards, but the rm command doesn't seem to be able to handle that either... any other suggestions? |
Not that it helps you much, but technically, the problem was that your shell was unable to construct a command line that both fit within the available argument-length limit and contained all the files named by your wildcard.
The suggestion from krinn should get around that problem, since the names are passed over a pipe to xargs, and xargs is designed specifically for dealing with inputs that exceed normal command-line length limits. If you have GNU find (which you almost certainly do), you could use a variant of krinn's command: find . -delete, which lets the deletion be done by the find process instead of requiring any rm process at all.
Either way, you will be limited primarily by the speed with which the filesystem can service your unlink requests. Thinking about it, krinn's method might be faster, since it can enumerate in one process and unlink in another. Another approach would be to go to the parent directory and rm -r .Trash-1000, so that rm performs the enumeration internally. This is probably not much different from find -delete.
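Side by side, the variants discussed above might look like this (a sketch; /mnt/share is a hypothetical mount point for the exported partition):
Code: |
cd /mnt/share
find .Trash-1000 -delete            # GNU find unlinks entries itself, depth-first
# or let rm do the enumeration, from the parent directory:
rm -r .Trash-1000
# or krinn's two-process pipeline (leaves the empty directories behind):
find .Trash-1000 -print0 | xargs --null rm -f
|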
cwr Veteran

Joined: 17 Dec 2005 Posts: 1969
Posted: Thu Aug 26, 2010 7:44 am Post subject:
Or as a very last resort, brute-force it; if ls can list the files, write a script to remove them individually. There are perverse ways of getting filenames via a binary editor and the directory itself, but on the whole, don't go there.
Will
blandoon Tux's lil' helper

Joined: 05 Apr 2004 Posts: 136 Location: Oregon, USA
Posted: Sat Aug 28, 2010 7:11 pm Post subject:
Thanks for all the suggestions so far - this has been an interesting exercise to say the least.
It seems like find can't handle listing the files (at least not in a reasonable amount of time)... the only thing that can so far is ls -f, and even trying to do that directly into an xargs seems to take up an excessive amount of memory. I started playing around with a script to delete the files in small blocks at a time so as not to make the server unusable (while it runs in the background for days and days).
Here's what I've got so far - I added a bunch of safety catches and tweakable parameters so that I can tune the load on the server to a reasonable level (and because I'm a lousy programmer and I want to keep from shooting myself in the foot):
Code: |
#!/bin/bash
# How many files to delete in each block?
atonce=10
# How many seconds to pause between blocks?
sleeptime=0
# How many blocks go by before displaying status?
statcheck=100
# How much extra time to pause on status check?
extrapause=10
# Name of safety-check file (stops executing if it exists)?
stopfile=/var/tmp/stopdeleting.now

count=0
fullcount=0
confirm=no

read -e -p "Path to delete files: " delpath
if [ ! -d "$delpath" ]; then
    echo "Invalid path. Exiting."
    exit 1
fi

echo "WARNING! This will delete all files from $delpath without prompting."
read -e -p "If you are SURE you want to do this, type yes: " confirm
if [ ! "$confirm" = "yes" ]; then
    echo "Aborting."
    exit 1
fi

cd "$delpath" || exit 1

while [ ! -e "$stopfile" ]; do
    # Grab the next small batch of names; -f skips sorting so ls stays cheap.
    todelete=(`ls -f | head -$atonce`)
    # Bash arrays are zero-indexed, so walk 0 .. atonce-1.
    for a in `seq 0 $((atonce-1))`; do
        # Only plain files; ls -f also returns . , .. and subdirectories.
        if [ -f "${todelete[$a]}" ]; then
            rm -f "${todelete[$a]}" && fullcount=$((fullcount+1))
        fi
    done
    sleep $sleeptime
    count=$((count+1))
    if [ "$count" -eq "$statcheck" ]; then
        # If ls can no longer fill a whole batch, only stragglers are left.
        if [ `ls -f | head -$atonce | wc -l` -lt "$atonce" ]; then
            echo "Last pass..."
            for files in `ls -A`; do
                rm -f "$files" > /dev/null 2>&1
            done
            echo "Looks like we're all done!"
            exit 0
        fi
        echo "Files deleted (best guess): $fullcount"
        echo "Sleeping $extrapause seconds - touch $stopfile in another terminal to abort."
        sleep $extrapause
        count=0
    fi
done
|
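One way to leave it running for days without tying up a terminal (a sketch, assuming the script above is saved as slow-delete.sh and that app-misc/screen is installed; both names are mine, not from the post):
Code: |
screen -S trash-cleanup      # start a detachable session
./slow-delete.sh             # answer the two prompts, then detach with Ctrl-a d
# later, from any shell, stop the script cleanly:
touch /var/tmp/stopdeleting.now
|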
_________________ "Give a man a fire and he's warm for one night, but set fire to him and he's warm for the rest of his life..." |
Hu Administrator

Joined: 06 Mar 2007 Posts: 23381
Posted: Sat Aug 28, 2010 10:05 pm Post subject:
Since xargs is designed to batch up the input and kick off a large command, it could require substantial memory in some cases. You might be able to influence this with the --max-lines and/or --max-args options.
Use nice and/or ionice to reduce the priority of the cleanup process, so that it has less impact on the server.
You should add a set -e near the top of the script, so that any unhandled errors cause the script to exit instead of proceeding on in a potentially dangerous fashion. |
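Combined, those suggestions might look something like this (a sketch of mine, not a command from the thread; run from inside .Trash-1000, and the batch size of 500 is an arbitrary choice):
Code: |
# idle I/O class and lowest CPU priority on both sides of the pipe,
# with xargs capped at 500 names per rm invocation
nice -n 19 ionice -c 3 find . -type f -print0 | nice -n 19 ionice -c 3 xargs --null --max-args=500 rm -f
|
(The set -e advice applies to the script in the previous post rather than to this one-liner.)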
krinn Watchman


Joined: 02 May 2003 Posts: 7471
Posted: Sat Aug 28, 2010 11:36 pm Post subject:
You could also lower the number of files in that directory without removing them.
Using mv from a partition to the same partition doesn't really move the file, just its directory entry -> faster than anything else.
So, as an example:
mkdir -p /todelete/a
mkdir /todelete/b ... you get the point
then mv .Trash-1000/a* /todelete/a
You will end up with several lighter directories instead of one big one, at lightning speed, since the file data is never touched.
Look:
Code: | ls -l pak000.pk4   # a data file from etqw, because it's a big one for the example
-rw-r--r-- 1 root root 268735253 24 Jun 2008 pak000.pk4
time mv pak000.pk4 /
real 0m0.001s
user 0m0.000s
sys 0m0.000s
time cp /pak000.pk4 .
real 0m2.379s
user 0m0.002s
sys 0m0.383s
|
Hu Administrator

Joined: 06 Mar 2007 Posts: 23381
Posted: Sun Aug 29, 2010 1:14 am Post subject:
krinn wrote: | then mv .Trash-1000/a* /todelete/a | Assuming he can find a glob that matches a useful number of files without matching so many that the glob expansion fails.  |
blandoon Tux's lil' helper

Joined: 05 Apr 2004 Posts: 136 Location: Oregon, USA
Posted: Mon Aug 30, 2010 5:20 pm Post subject:
I did try a few times to find a wildcard string that would match enough files to be useful, but few enough not to hang - I never had any luck. I also tried using xargs with the --max-args argument and couldn't get that to work either, but I may not have been doing it correctly... I'll look into that some more.
There are two major inefficiencies with the script above: One is that ls -f also returns directory entries which rm cannot delete, so I added a bunch of extra logic to try to deal with those. The other (related) problem is trying to determine when you are done, taking into account that there may be subdirectories, which is the clunkiest part of the script.
I think trying to move the files a few thousand at a time is a good idea; I'm going to give that a shot too.
Thanks, all of this is very useful. _________________ "Give a man a fire and he's warm for one night, but set fire to him and he's warm for the rest of his life..." |