Gentoo Forums

Some ext3 Filesystem Tips
likewhoa
l33t

Joined: 04 Oct 2006
Posts: 778
Location: Brooklyn, New York

PostPosted: Sun Jul 29, 2007 5:33 am    Post subject:

devsk wrote:
Maf wrote:
But I kinda can't believe the "journal-mode" is on ;) And unfortunately tune2fs -l doesn't contain this kind of information. Is there any other way?

Code:
dmesg | grep "EXT3-fs: mounted filesystem"


A more detailed view of any extfs is available with dumpe2fs; here's how to show only the superblock info.
Code:

dumpe2fs -h /dev/sda1


enjoy. 8)
codergeek42
Bodhisattva

Joined: 05 Apr 2004
Posts: 5142
Location: Anaheim, CA (USA)

PostPosted: Sun Jul 29, 2007 7:26 am    Post subject:

Nifty tip! Thanks, likewhoa. :)
_________________
~~ Peter: Programmer, Mathematician, STEM & Free Software Advocate, Enlightened Agent, Transhumanist, Fedora contributor
Who am I? :: EFF & FSF
XenoTerraCide
Veteran

Joined: 18 Jan 2004
Posts: 1418
Location: MI, USA

PostPosted: Sun Jul 29, 2007 2:35 pm    Post subject:

Although I'm not saying that dumpe2fs isn't useful, what does dumpe2fs -h show that tune2fs -l doesn't? It seems to me they show the same thing.
_________________
I don't hang out here anymore, try asking on http://unix.stackexchange.com/ if you want my help.
likewhoa
l33t

Joined: 04 Oct 2006
Posts: 778
Location: Brooklyn, New York

PostPosted: Sun Jul 29, 2007 6:31 pm    Post subject:

XenoTerraCide wrote:
Although I'm not saying that dumpe2fs isn't useful, what does dumpe2fs -h show that tune2fs -l doesn't? It seems to me they show the same thing.


They show almost the same output, except that dumpe2fs also shows the journal size.
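
If you want to see the exact differences on your own box, a quick way (just a sketch using bash process substitution; substitute your own device) would be:
Code:

diff <(tune2fs -l /dev/sda1) <(dumpe2fs -h /dev/sda1 2>/dev/null)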
satanskin
Guru

Joined: 25 Apr 2005
Posts: 353

PostPosted: Tue Jul 31, 2007 12:54 am    Post subject:

How might one recover from running the following:

# tune2fs -O dir_index /dev/hdXY

# e2fsck -D /dev/hdXY

It seems to have pretty much fucked most things up, especially portage and python
i92guboj
Bodhisattva

Joined: 30 Nov 2004
Posts: 10315
Location: Córdoba (Spain)

PostPosted: Tue Jul 31, 2007 1:12 am    Post subject:

satanskin wrote:

# e2fsck -D /dev/hdXY


You didn't run fsck while the filesystem was mounted, did you?

If so, remember never to do that again: unmount it, and then run fsck on it again to fix it. If some files have been damaged there is not much you can do. fsck can seriously screw things up if you run it on mounted filesystems; you should never do that.
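
For example, from a LiveCD (or after otherwise making sure the partition is not mounted), something along these lines should do it (sketch only; adjust the device):
Code:

# umount /dev/hdXY
# e2fsck -f /dev/hdXY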

If the filesystem wasn't mounted, then forget about my post.
satanskin
Guru

Joined: 25 Apr 2005
Posts: 353

PostPosted: Tue Jul 31, 2007 1:22 am    Post subject:

i92guboj wrote:
satanskin wrote:

# e2fsck -D /dev/hdXY


You didn't run fsck while the filesystem was mounted, did you?

If so, remember never to do that again: unmount it, and then run fsck on it again to fix it. If some files have been damaged there is not much you can do. fsck can seriously screw things up if you run it on mounted filesystems; you should never do that.

If the filesystem wasn't mounted, then forget about my post.


It was indeed mounted. I will surely give your suggestion a shot. Thank you.
i92guboj
Bodhisattva

Joined: 30 Nov 2004
Posts: 10315
Location: Córdoba (Spain)

PostPosted: Tue Jul 31, 2007 1:52 am    Post subject:

satanskin wrote:


It was indeed mounted. I will surely give your suggestion a shot. Thank you.


Then I hope that it did not damage something critical. You might need to emerge some packages if something fails. Python might be a problem if it is broken enough that portage can't work. In that case, you will have to rescue your Gentoo system using a prebuilt python package.
padoor
Advocate

Joined: 30 Dec 2005
Posts: 4185
Location: india

PostPosted: Tue Jul 31, 2007 7:26 am    Post subject:

I am happy to see this thread contains a lot of helpful information.
_________________
reach out a little bit more to catch it (DON'T BELIEVE the advocate part under my user name)
purpler
n00b

Joined: 11 Jan 2007
Posts: 38
Location: /v*/l*/p*/world

PostPosted: Wed Aug 01, 2007 8:35 pm    Post subject:

Somebody has to say that enabling the noatime (last access time) option in fstab can noticeably improve filesystem responsiveness too:
Quote:
daemon% cat /etc/fstab|grep noatime
/dev/hdb1 / ext3 noatime,data=journal 0 1

I converted from XFS too and can't say anything except: excellent :)
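
If you don't want to reboot (or unmount and remount by hand) to pick up the new option, a remount should apply it straight away (sketch; adjust the mount point):
Code:

# mount -o remount,noatime /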
_________________
Our system, who art on raised tile,hallowed be thy OS..
Gentoo.
azp
Guru

Joined: 16 Nov 2003
Posts: 456
Location: Sweden

PostPosted: Thu Aug 09, 2007 12:53 pm    Post subject:

Maybe it's time to add this guide to the gentoo-wiki? It's a bit hard to read through 14 pages of answers to find out what changes and tips have been reported. I just managed to read through the first two pages, and the first post seems to be updated according to the reported errors!

Good guide to have, I was looking for the dir_index when I found it =)
_________________
Weeks of coding can save you hours of planning.
steveL
Watchman

Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Sun Oct 14, 2007 2:20 am    Post subject:

One minor point: tune2fs -O has_journal is not needed if the filesystem is made with mke2fs -j as outlined in the handbook.
Also, resize_inode (which is a default) is not necessary if it's a fixed-size partition (which is handy for some purposes).
Code:
mke2fs -O dir_index,^resize_inode -j /dev/blah
tune2fs -o journal_data /dev/blah

tune2fs -l showed it as has_journal correctly on my box.

Thanks for an excellent HowTo :-)

I was thinking (after looking at man mke2fs.conf) that it'd be nice to have some defaults specifically for Gentoo purposes: i.e. /usr/portage (which might be reiser), /usr/portage/distfiles and /var/tmp/portage. These could be set up as portage, distfiles or tmp types, so that we run mke2fs -T distfiles, for example. Any suggestions on what those defaults could entail?

Personally I'm thinking of setting a similar one for home, which I imagine would be similar to distfiles, but it'd be cool to have, say video, audio and multimedia (for generic use) settings as well as usr, var (and tmp).
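
As a rough illustration of what such entries might look like in /etc/mke2fs.conf (the type names and numbers below are only made-up examples to show the syntax, not tested recommendations):
Code:

[fs_types]
        portage = {
                blocksize = 1024
                inode_ratio = 4096
        }
        distfiles = {
                blocksize = 4096
                inode_ratio = 1048576
        }

You would then format with something like mke2fs -T distfiles /dev/whatever.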
azp
Guru

Joined: 16 Nov 2003
Posts: 456
Location: Sweden

PostPosted: Wed Oct 24, 2007 9:25 am    Post subject:

I don't think you want /usr/portage as ReiserFS. I once thought I wanted it, until I learned that ReiserFS fragments like a sonofabitch, and the only way to defrag it is to tar everything up, delete it all, and untar it back. Sure, you could just delete your whole portage partition every once in a while; it takes about a year for ReiserFS to become unusable on a filesystem with rather high I/O.
_________________
Weeks of coding can save you hours of planning.
steveL
Watchman

Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Mon Oct 29, 2007 3:52 am    Post subject:

Reiser has excellent performance, and even better space usage with small files. It's just not 100% reliable IMO. As such it's perfect for the Portage tree, since all files have md5 sums and can easily be re-synced. If you do this, it's advisable to keep distfiles separate; although those files are also checksummed, they tend to be larger (a pain to download again) and less frequently accessed, so reiser doesn't gain much. This also helps with the defrag issue, since the file types and access patterns are so different.

I guess I should reformat the portage partition at some point, and see if it speeds up though :)
likewhoa
l33t

Joined: 04 Oct 2006
Posts: 778
Location: Brooklyn, New York

PostPosted: Mon Oct 29, 2007 9:48 pm    Post subject:

steveL wrote:

I was thinking (after looking at man mke2fs.conf) that it'd be nice to have some defaults specifically for Gentoo purposes: i.e. /usr/portage (which might be reiser), /usr/portage/distfiles and /var/tmp/portage. These could be set up as portage, distfiles or tmp types, so that we run mke2fs -T distfiles, for example. Any suggestions on what those defaults could entail?


I break everything up into several partitions under lvm2, e.g.:

/usr/portage <- mke2fs -b 1024 -N 200000
/usr/portage/distfiles <- mke2fs -b 4096 -T largefile
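
Spelled out as full commands, that would look something like this (the LV paths are just placeholders for whatever you actually use):
Code:

# mke2fs -b 1024 -N 200000 /dev/vg0/portage
# mke2fs -b 4096 -T largefile /dev/vg0/distfiles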

I used to keep /usr/portage on reiserfs but that eventually became slow; I get better performance over time with extfs.
The best thing would be to have /usr/portage on a raid0 2x CF array; read times on that would be insane.
That's the plan for me.
nowshining
n00b

Joined: 25 Nov 2007
Posts: 3

PostPosted: Sun Nov 25, 2007 4:38 pm    Post subject: thanks

Did it all from the live Gutsy CD - I have Feisty updated to Gutsy, by the way - and it did make things a bit faster; at the very least a noticeable difference. Thanks :) And no, I am NOT a Gentoo user...

edit: if anyone is wondering, yes, I made the changes to my MAIN hard drive / boot drive from the LiveCD via sudo in the terminal. :)
_________________
7.10 user 2.6.25 user & KDE 3.5.9 user.
likewhoa
l33t

Joined: 04 Oct 2006
Posts: 778
Location: Brooklyn, New York

PostPosted: Wed Nov 28, 2007 7:16 pm    Post subject:

For those of you currently using RAID, you can optimize ext2/3 for use on RAID with the extended stride option. The way to calculate this value is simple: first you multiply the number of drives in the array by the chunk size of the array, then you divide that by the block size of your ext3 filesystem.
Remember that stride values are only useful with raid0, 5, 6 and above; they are not needed with raid1.

for example:

You set up an array out of 4 drives like so.
Code:
# mdadm --create /dev/md0 /dev/sd[abcd]1 --level=0 --chunk=256 --raid-devices=4


Note the chunk value; you will need it to calculate the final stride value.

Now that we have created our raid0 array, it's time to plug the values into the equation, using the block size we will use for the ext2/3 filesystem; for this example we will use 4096, which is the default.

4 = number of drives
256 = chunk size (KiB)
4 = block size (KiB)
result = stride value

Code:
# a=$((4*256/4)); echo $a


OK, our stride value is 256, so now let's create the filesystem.

Code:
# mke2fs -b 4096 -E stride=256 /dev/md0


that's all folks.
neuron
Advocate

Joined: 28 May 2002
Posts: 2371

PostPosted: Wed Nov 28, 2007 7:55 pm    Post subject:

likewhoa wrote:
For those of you currently using RAID, you can optimize ext2/3 for use on RAID with the extended stride option. The way to calculate this value is simple: first you multiply the number of drives in the array by the chunk size of the array, then you divide that by the block size of your ext3 filesystem.


That's not the information I've found about raid5/ext3 chunk size. The algorithm I found everywhere was simply:
stride = chunk / blocksize
So 256k/4k in your case = 64.
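
i.e. with the chunk size and the ext3 block size both expressed in KiB, a quick shell check of that arithmetic:
Code:

# chunk=256; blocksize=4
# echo $((chunk / blocksize))
64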
Cyker
Veteran

Joined: 15 Jun 2006
Posts: 1746

PostPosted: Wed Nov 28, 2007 8:10 pm    Post subject:

Are you sure?

The calculation I got was [Optimum Stride]=[Array Stripe Chunk Size] / [FS Inode Blocksize]

There wasn't anything about factoring in the no' of drives...!

If this is the case, then the stride I set for my RAID array should be 64, and not 16 as I have set it!


But to my knowledge, the RAID chunk size is not dependent on the no. of disks in the system, and neither is the inode blocksize or stride.

My understanding of the stride is this:

The RAID chunk/stripe size is how many bytes of contiguous data will be on one disk (the default being 64k). No matter how many disks there are, each one will still get this same-sized chunk for each stripe.

The stride value tells mkfs how many inode blocks (def. 4k) will fit into one of those chunks - in my case, you can fit 16 4k inode blocks into a single RAID chunk (64/4=16).

Going on this, the stride value in your example should be 64, and not 256...

Edit: NB: Was replying to likewhoa, but was checking Google/TLDP/genwiki to make sure I wasn't being stupid, and that cunning knave neuron cut in! :P
likewhoa
l33t

Joined: 04 Oct 2006
Posts: 778
Location: Brooklyn, New York

PostPosted: Thu Nov 29, 2007 12:43 am    Post subject:

Well, I'm in the process of doing some benchmarks to show the difference between various chunk values and stride values with RAID. My first understanding was that just doing the calculation "chunk/block size" would give the stride size, but given the number of drives in a given RAID array, it makes more sense to put the number of drives into the equation. The only way to find this out is by running benchmarks, which I should be doing at benchmarks.gentooexperimental.org soon. :)

So far I'm getting good results with the "raid drives*chunk/block size" equation, but I can't give a conclusion until the benchmarks are fully done.
Cyker
Veteran

Joined: 15 Jun 2006
Posts: 1746

PostPosted: Thu Nov 29, 2007 8:09 am    Post subject:

Well, using the common calculation, the size of 1 stride = size of 1 stripe.

In yours, the stride would be spread over all 4 disks, so 1 stride = 4 stripes...

I must admit, I do hope yours is not the optimal one - it would mean you couldn't add another disk to the array, because that would change the 'optimal' stride, and you can't change the stride size without reformatting the array.


The problem with this sort of thing is it tends to be dependent on access patterns and file size/spread.

For large contiguous files, setting the array chunk, inode and stride sizes to numbers much bigger than the norm (e.g. 512 vs 64, 32 vs 4 and 512 vs 16) would give very good performance, but as soon as random-access, fragmentation and small files are thrown in, the performance drops like a lead balloon.

But I look forward to seeing the benchmarks; I've not found any decent ones so far so it'd be interesting to see what results you get! :)
likewhoa
l33t

Joined: 04 Oct 2006
Posts: 778
Location: Brooklyn, New York

PostPosted: Thu Nov 29, 2007 9:08 am    Post subject:

Yeah, I agree with you that this method is optimal until the number of drives increases, and this is the first time I have come across such a method. Anyway, the performance can be seen in the numbers: I managed to put some time into running a few benchmarks on an 8-drive raid0 array, 3.6TB in size, to see what the numbers say. I myself still use the old "chunk/block size" method on all previous arrays, and I have only been trying this method because it seemed to give good performance on large arrays. Anyway, here are some dd, mount and mke2fs benchmarks in raw form. The full benchmarks will be done on smaller arrays (50, 75, 125 and 150GB) with various chunk values, starting from the default 64k up to 8096, and will include results from bonnie++, tiobench, dd & hdparm. P.S. the benchmarks below were run with 8GB of RAM.

/dev/md0, 512 chunk & ext3 stride=512 (8*512/4, 8 being the number of drives in the array).
Filesystem mounted with data=ordered journalling.


Code:


# time mke2fs -j -b 4096 -E stride=512 /dev/md0

real    7m34.245s
user    0m0.530s
sys     0m45.233s

# time mount /dev/md0 /mnt/gentoo/;time sync

real    0m0.914s
user    0m0.000s
sys     0m0.009s

real    0m0.001s
user    0m0.000s
sys     0m0.001s

# time dd if=/dev/zero of=/mnt/gentoo/1g bs=1024k count=1k;time sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.04643 s, 525 MB/s

real    0m2.781s
user    0m0.003s
sys     0m1.548s

real    0m2.604s
user    0m0.001s
sys     0m0.000s

# time dd if=/dev/zero of=/mnt/gentoo/4g bs=1024k count=4k;time sync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 13.0373 s, 329 MB/s

real    0m13.039s
user    0m0.002s
sys     0m6.794s

real    0m4.153s
user    0m0.000s
sys     0m0.345s

# hdparm -Tt /dev/md0

/dev/md0:
 Timing cached reads:   8514 MB in  1.99 seconds = 4268.86 MB/sec
 Timing buffered disk reads:  1188 MB in  3.01 seconds = 395.32 MB/sec

# time umount /mnt/gentoo

real    0m4.785s
user    0m0.001s
sys     0m0.734s


/dev/md0, 512 chunk & ext3 stride=128 (512/4, the old method).

Code:


# time mke2fs -j -b 4096 -E stride=128 /dev/md0

real    6m28.650s
user    0m0.540s
sys     0m45.361s

# time mount /dev/md0 /mnt/gentoo/;time sync

real    0m0.216s
user    0m0.001s
sys     0m0.010s

real    0m0.001s
user    0m0.001s
sys     0m0.000s

# time dd if=/dev/zero of=/mnt/gentoo/1g bs=1024k count=1k;time sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.91196 s, 562 MB/s

real    0m3.597s
user    0m0.000s
sys     0m1.637s

real    0m2.761s
user    0m0.001s
sys     0m0.075s


# time dd if=/dev/zero of=/mnt/gentoo/4g bs=1024k count=4k;time sync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 13.7545 s, 312 MB/s

real    0m20.004s
user    0m0.000s
sys     0m7.374s

real    0m3.352s
user    0m0.000s
sys     0m0.167s

# hdparm -Tt /dev/md0

/dev/md0:
 Timing cached reads:   8630 MB in  1.99 seconds = 4327.93 MB/sec
 Timing buffered disk reads:  1100 MB in  3.01 seconds = 364.93 MB/sec

# time umount /mnt/gentoo

real    0m4.775s
user    0m0.000s
sys     0m0.735s


I still believe that "chunk/block size" is optimal; let's just hope the other benchmarks go in its favor.

EDIT: did some early runs today; here are the results.

Code:


Raid Level : raid0
Array Size : 3125665792 (2980.87 GiB 3200.68 GB)
Raid Devices : 8
Chunk Size : 1024K

# time mkfs.ext3 -b 4096 -j -E stride=2048 /dev/md0;time sync
mke2fs 1.40.2 (12-Jul-2007)

real    12m0.932s
user    0m0.485s
sys     0m39.654s

real    0m4.624s
user    0m0.001s
sys     0m0.005s

# time mount /dev/md0 /mnt/gentoo/;time sync

real    0m0.127s
user    0m0.002s
sys     0m0.006s

real    0m0.001s
user    0m0.000s
sys     0m0.001s

# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync

16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 65.4599 s, 262 MB/s

real    1m10.561s
user    0m0.009s
sys     0m31.235s

real    0m4.611s
user    0m0.000s
sys     0m0.053s

# time umount /mnt/gentoo/;time sync

real    0m4.656s
user    0m0.000s
sys     0m0.587s

real    0m0.001s
user    0m0.002s
sys     0m0.000s

.: Stride Value Set To 256 :.

# time mkfs.ext3 -b 4096 -j -E stride=256 /dev/md0;time sync
mke2fs 1.40.2 (12-Jul-2007)

real    11m45.907s
user    0m0.495s
sys     0m39.765s

real    0m4.065s
user    0m0.000s
sys     0m0.001s

# time mount /dev/md0 /mnt/gentoo/

real    0m0.594s
user    0m0.001s
sys     0m0.005s

# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync

16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 64.8825 s, 265 MB/s

real    1m5.374s
user    0m0.013s
sys     0m30.310s

real    0m4.733s
user    0m0.000s
sys     0m0.055s

# time umount /mnt/gentoo/;time sync

real    0m4.661s
user    0m0.000s
sys     0m0.594s

real    0m0.001s
user    0m0.001s
sys     0m0.000s

-- Reiser3.6 --

# time mkfs.reiserfs -q /dev/md0
mkfs.reiserfs 3.6.19 (2003 www.namesys.com)

real    1m43.844s
user    0m0.074s
sys     0m0.250s

# time mount /dev/md0 /mnt/gentoo

real    0m4.939s
user    0m0.001s
sys     0m0.027s

# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync

16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 63.5959 s, 270 MB/s

real    1m4.085s
user    0m0.021s
sys     0m19.515s

real    0m5.133s
user    0m0.000s
sys     0m0.066s

# time umount /mnt/gentoo

real    0m5.938s
user    0m0.000s
sys     0m0.674s

-- JFS --

# time mkfs.jfs -q /dev/md0
mkfs.jfs version 1.1.12, 24-Aug-2007

real    0m1.791s
user    0m0.022s
sys     0m0.346s

# time mount /dev/md0 /mnt/gentoo

real    0m4.633s
user    0m0.000s
sys     0m0.001s

# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync

16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 63.1783 s, 272 MB/s

real    1m3.876s
user    0m0.006s
sys     0m15.049s

real    0m5.094s
user    0m0.000s
sys     0m0.002s

# time umount /mnt/gentoo/

real    0m5.199s
user    0m0.000s
sys     0m0.296s

-- XFS --

# time mkfs.xfs -q -f /dev/md0

real    0m1.496s
user    0m0.003s
sys     0m0.033s

# time mount /dev/md0 /mnt/gentoo

real    0m4.451s
user    0m0.000s
sys     0m0.004s

# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync

16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 65.2181 s, 263 MB/s

real    1m5.710s
user    0m0.007s
sys     0m20.209s

real    0m5.064s
user    0m0.000s
sys     0m0.008s

# time umount /mnt/gentoo/

real    0m5.458s
user    0m0.000s
sys     0m0.583s

StarDragon
Guru

Joined: 19 Jun 2005
Posts: 390
Location: tEXas

PostPosted: Thu Dec 20, 2007 4:25 pm    Post subject:

I implemented this method on my laptop, and it worked like a charm. I have an older model and it usually chugs when doing a lot of tasks at once, but now it seems to hum along just fine. :)
_________________
"After all, a person's a person, no matter how small."--Horton Hears A Who!
Schizoid
Apprentice

Joined: 11 Apr 2003
Posts: 267

PostPosted: Sun Jan 13, 2008 7:48 pm    Post subject:

I have switched a few of my partitions from XFS to ext3. I was wondering: is it safe to delete the lost+found directories that it creates on every partition? I would think that if it found some lost data it would recreate that directory as needed?
i92guboj
Bodhisattva

Joined: 30 Nov 2004
Posts: 10315
Location: Córdoba (Spain)

PostPosted: Sun Jan 13, 2008 8:00 pm    Post subject:

Schizoid wrote:
I have switched a few of my partitions from XFS to ext3. I was wondering: is it safe to delete the lost+found directories that it creates on every partition? I would think that if it found some lost data it would recreate that directory as needed?


I am not 100% sure, but I think that directory is re-created anyway each time you run fsck on the partition. In other words, if that is true, the directory will be recreated each time the partition is checked. This might be at each startup, after a number of mounts, or after a given amount of time; it all depends on how you formatted your partition. tune2fs can be used to change those parameters without a reformat.

EDIT: you can also create it by hand using "mklost+found". I don't know why anyone would want to delete that directory, though...
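
For the record, recreating it by hand is just a matter of running mklost+found from the top-level directory of the filesystem in question (the mount point below is only an example):
Code:

# cd /mnt/yourfs
# mklost+found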