Gentoo Forums

Can't delete ".Trash-1000"

blandoon
Tux's lil' helper

Joined: 05 Apr 2004
Posts: 136
Location: Oregon, USA

PostPosted: Tue Aug 24, 2010 9:23 pm    Post subject: Can't delete ".Trash-1000"

I have a partition on my Gentoo server that is exported as a Samba share, and it has a folder called ".Trash-1000" which is taking up a lot of space. I assume it got there when people who had the drive mounted via Samba deleted files into the trash - but now I just want to get rid of it, because it is causing problems with using dar to back up the whole partition.

If I try to delete the folder from the command line using rm, the rm command runs away with all of my memory and swap, and just hangs until I kill it. Granted, this server is not a powerful box, because it is in a very small and quiet enclosure (1GHz C7 processor and 512 MB RAM).

This isn't a disk hardware problem, because I have relocated this partition to a new drive (using lvm), and fsck says the file system is perfectly clean... do I just have so many files in here that my lightweight server can't handle it? If that's the case, do I have any other options to get rid of this thing?
_________________
"Give a man a fire and he's warm for one night, but set fire to him and he's warm for the rest of his life..."

rh1
Guru

Joined: 10 Apr 2010
Posts: 501

PostPosted: Tue Aug 24, 2010 9:27 pm

What if you go into the directory and try deleting smaller sections instead of all at once? Does it still hang?

blandoon
Tux's lil' helper

Joined: 05 Apr 2004
Posts: 136
Location: Oregon, USA

PostPosted: Tue Aug 24, 2010 9:57 pm

There seem to be three subfolders - two smaller ones which I was able to delete, and a third one which is pretty big, and which always hangs when I try to delete it (or even to do an ls on it).

Hu
Administrator

Joined: 06 Mar 2007
Posts: 23381

PostPosted: Tue Aug 24, 2010 10:19 pm

Does it run away even if you do ls -f | head? By disabling sorting, you might reduce the memory requirements for the listing. Also, it would be interesting to see the output of ls -ld on the offending directory, to give us an idea of how many files might be in it.

If ls -f works, try find -print | wc -l to get a count of its contents, including subdirectories.
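Something like this, run from the parent of the trash directory (the .Trash-1000 name is taken from your first post):

Code:
# unsorted listing of the first few entries - skips the sort that eats memory
ls -f .Trash-1000 | head

# the size of the directory file itself hints at how many entries it holds
ls -ld .Trash-1000

# full count of contents, including subdirectories (may take a while)
cd .Trash-1000 && find -print | wc -l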

blandoon
Tux's lil' helper

Joined: 05 Apr 2004
Posts: 136
Location: Oregon, USA

PostPosted: Wed Aug 25, 2010 10:41 pm

Thanks very much for that. ls -f worked, so I tried the following (took quite a while to come back):

Code:
# find -print |wc -l
7671674


I took a stab at trying to delete a few of the files using wildcards, but the rm command doesn't seem to be able to handle that either... any other suggestions?

krinn
Watchman

Joined: 02 May 2003
Posts: 7471

PostPosted: Wed Aug 25, 2010 11:10 pm

Well, if find can handle it but rm can't, xargs the rm:
Code:
find -print0 | xargs --null rm

Hu
Administrator

Joined: 06 Mar 2007
Posts: 23381

PostPosted: Thu Aug 26, 2010 12:06 am

blandoon wrote:
Code:
# find -print |wc -l
7671674
Wow.
blandoon wrote:
I took a stab at trying to delete a few of the files using wildcards, but the rm command doesn't seem to be able to handle that either... any other suggestions?
Not that it helps you much, but technically, the problem was that your shell was unable to construct a command line that both fit in the available argument length limits and contained all the files named by your wildcard.

The suggestion from krinn should get around that problem, since the names will be passed over a pipe to xargs, and xargs is designed specifically for dealing with inputs that exceed normal command line length limits.

If you have GNU find (which you almost certainly do), you could use a variant of krinn's command: find . -delete, which allows the deletion to be done by the find process instead of requiring any rm process at all. Either way, you will be limited primarily by the speed with which the filesystem can service your unlink requests. Thinking about it, krinn's method might be faster, since it can enumerate in one process and unlink in another.

Another approach might be to go to the parent directory and rm -r .Trash-1000, so that rm performs the enumeration internally. This is probably not that different from find -delete.
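For instance (assuming GNU find; /mnt/share is a stand-in for wherever the partition is mounted):

Code:
# find unlinks each entry itself, depth-first - no rm, no argument limits
cd /mnt/share
find .Trash-1000 -delete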

cwr
Veteran

Joined: 17 Dec 2005
Posts: 1969

PostPosted: Thu Aug 26, 2010 7:44 am

Or, as a very last resort, brute-force it: if ls can list the files, write a script to remove them individually. There are perverse ways of getting filenames via a binary editor and the directory itself, but on the whole, don't go there.

Will

blandoon
Tux's lil' helper

Joined: 05 Apr 2004
Posts: 136
Location: Oregon, USA

PostPosted: Sat Aug 28, 2010 7:11 pm

Thanks for all the suggestions so far - this has been an interesting exercise, to say the least.

It seems like find can't handle listing the files (at least not in a reasonable amount of time)... the only thing that can so far is ls -f, and even piping that directly into xargs seems to take up an excessive amount of memory. I started playing around with a script to delete the files a small block at a time, so as not to make the server unusable while it runs in the background for days and days.

Here's what I've got so far - I added a bunch of safety catches and tweakable parameters so that I can tune the load on the server to a reasonable level (and because I'm a lousy programmer and want to keep from shooting myself in the foot):

Code:

#!/bin/bash

# How many files to delete in each block?
atonce=10

# How many seconds to pause between blocks?
sleeptime=0

# How many blocks go by before displaying status?
statcheck=100

# How much extra time to pause on status check?
extrapause=10

# Name of safety-check file (stops executing if it exists)?
stopfile=/var/tmp/stopdeleting.now

count=0
fullcount=0
confirm=no

read -e -p "Path to delete files: " delpath
if [ ! -d "$delpath" ]; then
  echo "Invalid path. Exiting."
  exit 1
fi

echo "WARNING! This will delete all files from $delpath without prompting."
read -e -p "If you are SURE you want to do this, type yes: " confirm
if [ "$confirm" != "yes" ]; then
  echo "Aborting."
  exit 1
fi

cd "$delpath" || exit 1
while [ ! -e "$stopfile" ]; do
  # ls -f skips sorting, so this stays cheap even in a huge directory.
  # Note: filenames containing whitespace are not handled safely here.
  todelete=( $(ls -f | head -n "$atonce") )
  for a in "${todelete[@]}"; do
    if [ -f "$a" ]; then
      rm -f "$a" && fullcount=$((fullcount+1))
    fi
  done
  sleep "$sleeptime"
  count=$((count+1))
  if [ "$count" -eq "$statcheck" ]; then
    if [ "$(ls -f | head -n "$atonce" | wc -l)" -lt "$atonce" ]; then
      echo "Last pass..."
      for files in $(ls -A); do
        rm -f "$files" > /dev/null 2>&1
      done
      echo "Looks like we're all done!"
      exit 0
    fi
    echo "Files deleted (best guess): $fullcount"
    echo "Sleeping $extrapause seconds - touch $stopfile in another terminal to abort."
    sleep "$extrapause"
    count=0
  fi
done

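To abort it cleanly from another terminal, create the safety-check file:

Code:
touch /var/tmp/stopdeleting.now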

Hu
Administrator

Joined: 06 Mar 2007
Posts: 23381

PostPosted: Sat Aug 28, 2010 10:05 pm

Since xargs is designed to batch up the input and kick off a large command, it could require substantial memory in some cases. You might be able to influence this with the --max-lines and/or --max-args options.

Use nice and/or ionice to reduce the priority of the cleanup process, so that it has less impact on the server.

You should add a set -e near the top of the script, so that any unhandled errors cause the script to exit instead of proceeding on in a potentially dangerous fashion.
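Putting those together, a rough sketch of what the whole cleanup might look like (the batch size of 500 is an arbitrary starting point; ionice -c3 is the idle I/O class):

Code:
# enumerate with find, delete in fixed-size batches, all at minimal priority
ionice -c3 nice -n 19 sh -c \
  'find .Trash-1000 -type f -print0 | xargs --null --max-args=500 rm -f'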

krinn
Watchman

Joined: 02 May 2003
Posts: 7471

PostPosted: Sat Aug 28, 2010 11:36 pm

You could also lower the number of files in that directory without removing them.

Using mv within the same partition doesn't really move the file contents, just the file's location in the directory tree, so it's faster than anything else. So, for example:
mkdir -p /todelete/a
mkdir /todelete/b ... you get the point
then mv .Trash-1000/a* /todelete/a

You will end up with several lighter directories instead of one big one, at lightning speed, since the file data is never accessed. Look:
Code:
ls -l pak000.pk4 (a data file from etqw, because it's a big one for the example)
-rw-r--r-- 1 root root 268735253 24 juin   2008 pak000.pk4

time mv pak000.pk4 /
real   0m0.001s
user   0m0.000s
sys   0m0.000s

time cp /pak000.pk4 .
real   0m2.379s
user   0m0.002s
sys   0m0.383s

Hu
Administrator

Joined: 06 Mar 2007
Posts: 23381

PostPosted: Sun Aug 29, 2010 1:14 am

krinn wrote:
then mv .Trash-1000/a* /todelete/a
Assuming he can find a glob that matches a useful number of files without matching so many that the glob expansion fails. ;)
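One way around the glob limit is to let find and xargs do the batching instead (a sketch, assuming GNU tools; "bigsubdir" stands for whichever subfolder is the huge one, and /todelete must be on the same filesystem for the rename trick to apply):

Code:
# move the top-level files out, 1000 per mv invocation, no shell globbing
find .Trash-1000/bigsubdir -maxdepth 1 -type f -print0 | xargs --null --max-args=1000 mv -t /todelete/a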

blandoon
Tux's lil' helper

Joined: 05 Apr 2004
Posts: 136
Location: Oregon, USA

PostPosted: Mon Aug 30, 2010 5:20 pm

I did try a few times to find a wildcard string that would match enough files to be useful, but few enough not to hang - I never had any luck. I also tried using xargs with the --max-args argument and couldn't get that to work either, but I may not have been doing it correctly... I'll look into that some more.

There are two major inefficiencies in the script above: one is that ls -f also returns directory entries, which rm cannot delete, so I added a bunch of extra logic to try to deal with those. The other (related) problem is determining when you are done, taking into account that there may be subdirectories - that's the clunkiest part of the script.

I think trying to move the files a few thousand at a time is a good idea; I'm going to give that a shot too.

Thanks, all of this is very useful.