Some ext3 Filesystem Tips

Unofficial documentation for various parts of Gentoo Linux. Note: This is not a support forum.
362 posts • Page 14 of 15

Post by likewhoa » Sun Jul 29, 2007 5:33 am

devsk wrote:
Maf wrote:But I kinda' can't believe the "journal-mode" is on ;) And unfortunately tune2fs -l doesn't contain this kind of information. Is there any other way?

Code:

dmesg | grep "EXT3-fs: mounted filesystem"
A more detailed view of any ext filesystem is available with dumpe2fs; here's how to show only the superblock info.

Code:

dumpe2fs -h /dev/sda1
enjoy. 8)

Post by codergeek42 » Sun Jul 29, 2007 7:26 am

Nifty tip! Thanks, likewhoa. :)
~~ Peter: Programmer, Mathematician, STEM & Free Software Advocate, Enlightened Agent, Transhumanist, Fedora contributor
Who am I? :: EFF & FSF

Post by XenoTerraCide » Sun Jul 29, 2007 2:35 pm

Although I'm not saying dumpe2fs isn't useful, what does dumpe2fs -h show that tune2fs -l doesn't? They seem to show the same thing.
I don't hang out here anymore, try asking on http://unix.stackexchange.com/ if you want my help.

Post by likewhoa » Sun Jul 29, 2007 6:31 pm

XenoTerraCide wrote:Although I'm not saying dumpe2fs isn't useful, what does dumpe2fs -h show that tune2fs -l doesn't? They seem to show the same thing.
They show almost the same output, except that dumpe2fs also shows the journal size.
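To see the difference concretely, you could filter out just the journal-related fields. A minimal sketch, where SAMPLE is an invented fragment in the dumpe2fs -h style (not captured output) used only to illustrate the filtering:

```shell
# Sketch: keep only the journal-related fields, which are roughly what
# dumpe2fs -h shows and tune2fs -l does not.
# SAMPLE below is invented example text, not real dumpe2fs output.
SAMPLE='Filesystem volume name:   <none>
Filesystem features:      has_journal dir_index filetype
Block size:               4096
Journal inode:            8
Journal size:             128M'

journal_fields=$(printf '%s\n' "$SAMPLE" | grep -i '^journal')
printf '%s\n' "$journal_fields"
```

Against a real device the equivalent would be something like `dumpe2fs -h /dev/sda1 | grep -i '^journal'`.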

Post by satanskin » Tue Jul 31, 2007 12:54 am

How might one recover from running the following:

# tune2fs -O dir_index /dev/hdXY

# e2fsck -D /dev/hdXY

It seems to have pretty much fucked most things up, especially Portage and Python.

Post by i92guboj » Tue Jul 31, 2007 1:12 am

satanskin wrote: # e2fsck -D /dev/hdXY
You didn't run fsck while the filesystem was mounted, did you?

If so, never do that again: unmount it, then run fsck on it again to fix it. If some files have been damaged, there is not much you can do. Fsck can seriously corrupt things if you run it on a mounted filesystem; you should never do that.

If the filesystem wasn't mounted, then forget about my post.

Post by satanskin » Tue Jul 31, 2007 1:22 am

i92guboj wrote:
satanskin wrote: # e2fsck -D /dev/hdXY
You didn't run fsck while the filesystem was mounted, did you?

If so, never do that again: unmount it, then run fsck on it again to fix it. If some files have been damaged, there is not much you can do. Fsck can seriously corrupt things if you run it on a mounted filesystem; you should never do that.

If the filesystem wasn't mounted, then forget about my post.
It was indeed mounted. I will surely give your suggestion a shot. Thank you.

Post by i92guboj » Tue Jul 31, 2007 1:52 am

satanskin wrote:
It was indeed mounted. I will surely give your suggestion a shot. Thank you.
Then I hope that it did not damage something critical. You might need to emerge some packages if something fails. Python might be a problem if it is broken enough that portage can't work. In that case, you will have to rescue your Gentoo system using a prebuilt python package.

Post by padoor » Tue Jul 31, 2007 7:26 am

I am happy to see this thread contains a lot of helpful information.
reach out a little bit more to catch it (DON'T BELIEVE the advocate part under my user name)

Post by purpler » Wed Aug 01, 2007 8:35 pm

Somebody has to mention that enabling the noatime (no last-access-time updates) option in fstab can noticeably improve filesystem responsiveness too.
Code:

daemon% cat /etc/fstab | grep noatime
/dev/hdb1 / ext3 noatime,data=journal 0 1
I converted from XFS too, and can't say anything except: excellent :)
Our system, who art on raised tile,hallowed be thy OS..
Gentoo.

Post by azp » Thu Aug 09, 2007 12:53 pm

Maybe it's time to add this guide to the gentoo-wiki? It's a bit hard to read through 14 pages of answers to find out what changes and tips have been reported. I just managed to read through the first two pages, and the first post seems to be updated according to the reported errors!

Good guide to have, I was looking for the dir_index when I found it =)
Weeks of coding can save you hours of planning.

Post by steveL » Sun Oct 14, 2007 2:20 am

One minor point: tune2fs -O has_journal is not needed if the filesystem is made with mke2fs -j as outlined in the handbook.
Also, resize_inode (which is a default) is not necessary if it's a fixed-size partition (which is handy for some purposes).

Code:

mke2fs -O dir_index,^resize_inode -j /dev/blah
tune2fs -o journal_data /dev/blah
tune2fs -l showed it as has_journal correctly on my box.

Thanks for an excellent HowTo :-)

I was thinking (after looking at man mke2fs.conf) that it'd be nice to have some defaults specifically for Gentoo purposes, i.e. /usr/portage (which might be reiser), /usr/portage/distfiles, and /var/tmp/portage. These could be set as portage, distfiles or tmp, so that we could run mke2fs -T distfiles, for example. Any suggestions on what those defaults could entail?

Personally I'm thinking of setting a similar one for home, which I imagine would be similar to distfiles, but it'd be cool to have, say video, audio and multimedia (for generic use) settings as well as usr, var (and tmp).
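For reference, such defaults could live in /etc/mke2fs.conf as extra fs_types stanzas. A hypothetical sketch, following the stanza format described in man mke2fs.conf; the stanza names and the particular values are invented for illustration, not tested recommendations:

```ini
# Hypothetical /etc/mke2fs.conf additions, so that e.g.
# `mke2fs -T distfiles /dev/blah` would pick these defaults.
[fs_types]
	portage = {
		# many small files: small blocks, dense inodes
		blocksize = 1024
		inode_ratio = 4096
	}
	distfiles = {
		# fewer, larger files: big blocks, sparse inodes
		blocksize = 4096
		inode_ratio = 1048576
	}
```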

Post by azp » Wed Oct 24, 2007 9:25 am

I don't think you want /usr/portage as ReiserFS. I once thought I wanted it, until I learned that ReiserFS fragments like a sonofabitch, and the only way to defrag it is to tar everything up, delete it all, and untar it back. Sure, you could just wipe your whole portage partition every once in a while; it takes about a year of rather high I/O for ReiserFS to become unusable.

Post by steveL » Mon Oct 29, 2007 3:52 am

Reiser has excellent performance, and even better space usage with small files. It's just not 100% reliable, IMO. As such it's perfect for the portage tree, since all files have md5 sums and can easily be re-synced. If you do this, it's advisable to keep distfiles separate; although those files are also checksummed, they tend to be larger (a pain to download again) and less frequently accessed, so reiser doesn't gain much. This also helps with the defrag issue, since the filetypes and access patterns are so different.

I guess I should reformat the portage partition at some point, and see if it speeds up though :)

Post by likewhoa » Mon Oct 29, 2007 9:48 pm

steveL wrote: I was thinking (after looking at man mke2fs.conf) that it'd be nice to have some defaults specifically for Gentoo purposes: ie /usr/portage (which might be reiser) /usr/portage/distfiles and /var/tmp/portage. These could be set as portage, distfiles or tmp so that we run mke2fs -T distfiles for example. Any suggestions on what those defaults could entail?
For me, I break everything up into several partitions under LVM2, e.g.:

/usr/portage <- mke2fs -b 1024 -N 200000
/usr/portage/distfiles <- mke2fs -b 4096 -T largefile

I used to keep /usr/portage on ReiserFS, but that eventually became slow; I get better performance over time with extfs.
The best thing would be to have /usr/portage on a raid0 array of 2x CF cards; read times on that would be insane.
That's the plan for me.

Post by nowshining » Sun Nov 25, 2007 4:38 pm

I did it all from the live Gutsy CD (I have Feisty upgraded to Gutsy, by the way), and it did make things a bit faster; at the least, a noticeable difference. Thanks :) And NO, I am NOT a Gentoo user...

Edit: if anyone is wondering, yes, I made the changes to my MAIN hard drive (the boot drive) from the LiveCD via sudo in the terminal. :)
7.10 user 2.6.25 user & KDE 3.5.9 user.

Post by likewhoa » Wed Nov 28, 2007 7:16 pm

For those of you currently using RAID: you can optimize ext2/3 for RAID use with the extended stride option. The way I calculate this value is simple: multiply the number of drives in the array by the array's chunk value, then divide by the block size of your ext3 filesystem.
Remember that stride values are only useful with raid0/5/6 and above; they are not needed with raid1.

for example:

You set up an array of 4 drives like so:

Code:

# mdadm --create /dev/md0 /dev/sd[abcd]1 --level=0 --chunk=256 --raid-devices=4
Note the chunk value; you will need it to calculate the final stride value.

Now that we have created our raid0 array, it's time to plug the chunk value and the block size of our ext2/3 filesystem into the formula. For this example we will use a 4096-byte block size, which is the default.

4 = number of drives
256 = chunk value (KiB)
4 = block size (KiB)
result = stride value

Code:

# a=$((4*256/4)); echo $a
OK, our stride value is 256, so now let's create the filesystem.

Code:

# mke2fs -b 4096 -E stride=256 /dev/md0
that's all folks.
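The arithmetic can be sketched both ways: the formula used above, and the more common one (without the drive count) that other replies in this thread advocate. Which is actually optimal is exactly what's disputed here; the numbers below just reproduce the example (4 drives, 256 KiB chunk, 4 KiB block):

```shell
# Sketch comparing the two stride formulas discussed in this thread,
# using the example numbers above.
drives=4
chunk_kib=256
block_kib=4

stride_common=$((chunk_kib / block_kib))            # chunk / blocksize
stride_thread=$((drives * chunk_kib / block_kib))   # drives * chunk / blocksize

echo "chunk/block:        $stride_common"
echo "drives*chunk/block: $stride_thread"
```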

Post by neuron » Wed Nov 28, 2007 7:55 pm

likewhoa wrote:For those of you currently using RAID: you can optimize ext2/3 for RAID use with the extended stride option. The way I calculate this value is simple: multiply the number of drives in the array by the array's chunk value, then divide by the block size of your ext3 filesystem.
That's not the information I've found about raid5/ext3 chunk size. The algorithm I found everywhere was simply:
stride = chunk / blocksize
So 256k/4k in your case = 64.

Post by Cyker » Wed Nov 28, 2007 8:10 pm

Are you sure?

The calculation I got was [Optimum Stride]=[Array Stripe Chunk Size] / [FS Inode Blocksize]

There wasn't anything about factoring in the number of drives...!

If this is the case, then the stride I set for my RAID array should be 64, and not 16 as I have set it!


But to my knowledge, the RAID chunk size is not dependent on the no. of disks in the system, and neither is the inode blocksize or stride.

My understanding of the stride, is this:

The RAID chunk/stripe size is how many bytes of continuous data will be on one disk (the default being 64k). No matter how many disks there are, each one still gets this same-sized chunk in each stripe.

The stride value tells mkfs how many inode blocks (def. 4k) will fit into one of those chunks - In my case, you can fit 16 4k inode blocks into a single RAID chunk (64/4=16)

Going on this, the stride value in your example should be 64, and not 256...

Edit: NB: I was replying to likewhoa, but I was checking Google/TLDP/genwiki to make sure I wasn't being stupid, and that cunning knave neuron cut in! :P

Post by likewhoa » Thu Nov 29, 2007 12:43 am

Well, I'm in the process of doing some benchmarks to show the difference between various chunk and stride values with RAID. My first understanding was that the calculation "chunk/block size" alone gives the stride size, but given the number of drives in a given RAID array, it seemed to make sense to put the drive count into the equation. The only way to find out is by running benchmarks, which I should be doing at benchmarks.gentooexperimental.org soon. :)

So far I'm getting good results with the "raid drives*chunk/block size" equation, but I can't draw a conclusion until the benchmarks are fully done.

Post by Cyker » Thu Nov 29, 2007 8:09 am

Well, using the common calculation, the size of 1 stride = size of 1 stripe.

In yours, the stride would be spread over all 4 disks, so 1 stride = 4 stripes...

I must admit, I do hope yours is not the optimal one - it would mean you can't add another disk to the array, because that would change the 'optimal' stride, and you can't change the stride size without reformatting the array.


The problem with this sort of thing is it tends to be dependent on access patterns and file size/spread.

For large contiguous files, setting the array chunk, inode and stride sizes to numbers much bigger than the norm (e.g. 512 vs 64, 32 vs 4 and 512 vs 16) would give very good performance, but as soon as random-access, fragmentation and small files are thrown in, the performance drops like a lead balloon.

But I look forward to seeing the benchmarks; I've not found any decent ones so far so it'd be interesting to see what results you get! :)

Post by likewhoa » Thu Nov 29, 2007 9:08 am

Yeah, I agree that this method is only optimal until the number of drives increases, and this is the first time I have come across such a method. Anyway, the performance can be seen in the numbers: I managed to put some time into running a few benchmarks on an 8-drive raid0 array, 3.6TB in size, to see what they say. I myself still used the old "chunk/block size" method on all previous arrays, and have only been trying this one because it seemed to give good performance on large arrays. Anyway, here are some dd, mount, and mke2fs benchmarks in raw form. The full benchmarks will be run on smaller arrays (50, 75, 125, 150GB) using various chunk values from the default 64k up to 8096, and will include numbers from bonnie++, tiobench, dd, and hdparm. P.S. The benchmarks below were run with 8GB of RAM.

/dev/md0 512 chunk & ext3fs stride=512 (8*512/4) 8 being number of drives in array.
filesystem mounted with journal data ordered.

Code:


# time mke2fs -j -b 4096 -E stride=512 /dev/md0

real    7m34.245s
user    0m0.530s
sys     0m45.233s

# time mount /dev/md0 /mnt/gentoo/;time sync

real    0m0.914s
user    0m0.000s
sys     0m0.009s

real    0m0.001s
user    0m0.000s
sys     0m0.001s

# time dd if=/dev/zero of=/mnt/gentoo/1g bs=1024k count=1k;time sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.04643 s, 525 MB/s

real    0m2.781s
user    0m0.003s
sys     0m1.548s

real    0m2.604s
user    0m0.001s
sys     0m0.000s

# time dd if=/dev/zero of=/mnt/gentoo/4g bs=1024k count=4k;time sync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 13.0373 s, 329 MB/s

real    0m13.039s
user    0m0.002s
sys     0m6.794s

real    0m4.153s
user    0m0.000s
sys     0m0.345s

# hdparm -Tt /dev/md0

/dev/md0:
 Timing cached reads:   8514 MB in  1.99 seconds = 4268.86 MB/sec
 Timing buffered disk reads:  1188 MB in  3.01 seconds = 395.32 MB/sec

# time umount /mnt/gentoo

real    0m4.785s
user    0m0.001s
sys     0m0.734s
/dev/md0 512 chunk & ext3fs stride=128 (512/4) old method.

Code:


# time mke2fs -j -b 4096 -E stride=128 /dev/md0

real    6m28.650s
user    0m0.540s
sys     0m45.361s

# time mount /dev/md0 /mnt/gentoo/;time sync

real    0m0.216s
user    0m0.001s
sys     0m0.010s

real    0m0.001s
user    0m0.001s
sys     0m0.000s

# time dd if=/dev/zero of=/mnt/gentoo/1g bs=1024k count=1k;time sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.91196 s, 562 MB/s

real    0m3.597s
user    0m0.000s
sys     0m1.637s

real    0m2.761s
user    0m0.001s
sys     0m0.075s


# time dd if=/dev/zero of=/mnt/gentoo/4g bs=1024k count=4k;time sync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 13.7545 s, 312 MB/s

real    0m20.004s
user    0m0.000s
sys     0m7.374s

real    0m3.352s
user    0m0.000s
sys     0m0.167s

# hdparm -Tt /dev/md0

/dev/md0:
 Timing cached reads:   8630 MB in  1.99 seconds = 4327.93 MB/sec
 Timing buffered disk reads:  1100 MB in  3.01 seconds = 364.93 MB/sec

# time umount /mnt/gentoo

real    0m4.775s
user    0m0.000s
sys     0m0.735s
I still believe that "chunk/block size" is optimal; let's just hope the other benchmarks go in its favor.

EDIT: Did some early runs today; here are the results.

Code:


Raid Level : raid0
Array Size : 3125665792 (2980.87 GiB 3200.68 GB)
Raid Devices : 8
Chunk Size : 1024K

# time mkfs.ext3 -b 4096 -j -E stride=2048 /dev/md0;time sync
mke2fs 1.40.2 (12-Jul-2007)

real    12m0.932s
user    0m0.485s
sys     0m39.654s

real    0m4.624s
user    0m0.001s
sys     0m0.005s

# time mount /dev/md0 /mnt/gentoo/;time sync

real    0m0.127s
user    0m0.002s
sys     0m0.006s

real    0m0.001s
user    0m0.000s
sys     0m0.001s

# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync

16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 65.4599 s, 262 MB/s

real    1m10.561s
user    0m0.009s
sys     0m31.235s

real    0m4.611s
user    0m0.000s
sys     0m0.053s

# time umount /mnt/gentoo/;time sync

real    0m4.656s
user    0m0.000s
sys     0m0.587s

real    0m0.001s
user    0m0.002s
sys     0m0.000s

.: Stride Value Set To 256 :.

# time mkfs.ext3 -b 4096 -j -E stride=256 /dev/md0;time sync
mke2fs 1.40.2 (12-Jul-2007)

real    11m45.907s
user    0m0.495s
sys     0m39.765s

real    0m4.065s
user    0m0.000s
sys     0m0.001s

# time mount /dev/md0 /mnt/gentoo/

real    0m0.594s
user    0m0.001s
sys     0m0.005s

# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync

16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 64.8825 s, 265 MB/s

real    1m5.374s
user    0m0.013s
sys     0m30.310s

real    0m4.733s
user    0m0.000s
sys     0m0.055s

# time umount /mnt/gentoo/;time sync

real    0m4.661s
user    0m0.000s
sys     0m0.594s

real    0m0.001s
user    0m0.001s
sys     0m0.000s

-- Reiser3.6 --

# time mkfs.reiserfs -q /dev/md0
mkfs.reiserfs 3.6.19 (2003 www.namesys.com)

real    1m43.844s
user    0m0.074s
sys     0m0.250s

# time mount /dev/md0 /mnt/gentoo

real    0m4.939s
user    0m0.001s
sys     0m0.027s

# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync

16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 63.5959 s, 270 MB/s

real    1m4.085s
user    0m0.021s
sys     0m19.515s

real    0m5.133s
user    0m0.000s
sys     0m0.066s

# time umount /mnt/gentoo

real    0m5.938s
user    0m0.000s
sys     0m0.674s

-- JFS --

# time mkfs.jfs -q /dev/md0
mkfs.jfs version 1.1.12, 24-Aug-2007

real    0m1.791s
user    0m0.022s
sys     0m0.346s

# time mount /dev/md0 /mnt/gentoo

real    0m4.633s
user    0m0.000s
sys     0m0.001s

# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync

16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 63.1783 s, 272 MB/s

real    1m3.876s
user    0m0.006s
sys     0m15.049s

real    0m5.094s
user    0m0.000s
sys     0m0.002s

# time umount /mnt/gentoo/

real    0m5.199s
user    0m0.000s
sys     0m0.296s

-- XFS --

# time mkfs.xfs -q -f /dev/md0

real    0m1.496s
user    0m0.003s
sys     0m0.033s

# time mount /dev/md0 /mnt/gentoo

real    0m4.451s
user    0m0.000s
sys     0m0.004s

# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync

16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 65.2181 s, 263 MB/s

real    1m5.710s
user    0m0.007s
sys     0m20.209s

real    0m5.064s
user    0m0.000s
sys     0m0.008s

# time umount /mnt/gentoo/

real    0m5.458s
user    0m0.000s
sys     0m0.583s


Post by StarDragon » Thu Dec 20, 2007 4:25 pm

I implemented this method on my laptop, and it worked like a charm. I have an older model and it usually chugs when doing a lot of tasks at once. But now it seems to hum along just fine. :)
"After all, a person's a person, no matter how small."--Horton Hears A Who!

Post by Schizoid » Sun Jan 13, 2008 7:48 pm

I have switched a few of my partitions from XFS to ext3. I was wondering: is it safe to delete the lost+found directories it creates on every partition? I would think that if it found some lost data, it would recreate that directory as needed?

Post by i92guboj » Sun Jan 13, 2008 8:00 pm

Schizoid wrote:I have switched a few of my partitions from XFS to ext3. I was wondering if it is safe to delete the lost+found directories that it creates in every partition? I would think that if there was some lost data it found that it would recreate that directory as needed?
I am not 100% sure, but I think that directory is re-created anyway each time you run fsck on the partition. In other words, if this is true, the directory would be recreated each time the partition is checked, which might be on each startup, after a number of mounts, or after a given amount of time; it all depends on how you formatted your partition. tune2fs can be used to change those parameters without a reformat.

EDIT: you can also create it by hand using "mklost+found". I don't know why anyone would want to delete that directory, though...
© 2001–2026 Gentoo Foundation, Inc.

Powered by phpBB® Forum Software © phpBB Limited

Privacy Policy