First, thank you everyone for the responses.
I did performance testing on Fedora 15 before I decided on XFS. Btrfs doesn't seem to be a good option, and ext4 was only going to raise the limit to 64k, which wasn't going to work since there will be millions of these data images.
XFS works perfectly fine, but at the time the system was running on RHEL 5, which had the performance issue for one and wasn't going to be fixed until 6.2, and there was the extra cost on top, so it was moved to Fedora on XFS. During the migration I noticed that rsync would lock up the HighPoint RAID cards every time it synced a directory with more than 32k files...and exactly every directory that exceeded that number. I then started researching the maximum limits, which led me to XFS.
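For anyone hitting the same thing, a quick way to spot the problem directories before a sync is to count direct entries per directory. This is just an illustrative sketch (the `big_dirs` name, the `/data` path in the commented call, and the 32768 threshold matching the lockups above are my own choices, not anything from the original setup):

```shell
#!/bin/sh
# Sketch: print every directory under $1 whose direct entry count
# exceeds the threshold $2. Slow on huge trees, but portable.
big_dirs() {
  top=$1; limit=$2
  find "$top" -type d | while read -r dir; do
    # Count only the directory's own entries, not the whole subtree.
    n=$(find "$dir" -mindepth 1 -maxdepth 1 | wc -l)
    if [ "$n" -gt "$limit" ]; then
      printf '%s %d\n' "$dir" "$n"
    fi
  done
}
# Example (hypothetical path): big_dirs /data 32768
```

Running that before the migration would have flagged exactly the directories that hung the cards.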
From: email@example.com [mailto:firstname.lastname@example.org] On Behalf Of Michael Cronenworth
Sent: Tuesday, October 18, 2011 8:52 AM
Subject: Re: Ext3 file count limits
> How about ext4 ?
> From what I remembered from last FOSDEM, it was supposed to have much wider limits and be way faster....
ext4 has a 64k sub-directory limit.
XFS and btrfs do not have a (reachable) limit.
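Worth noting: the 64k limit is on sub-directories per directory, and a common workaround on any filesystem is to fan files out into hashed sub-directories so no single directory accumulates millions of entries. A minimal sketch (the `shard_path` name and the two-level md5 layout are illustrative, not something from this thread):

```shell
#!/bin/sh
# Sketch: map a file name to a two-level hashed sub-directory,
# e.g. "ab/3f/image000123.png", so entries spread across at most
# 256*256 directories instead of piling into one.
shard_path() {
  name=$1
  h=$(printf '%s' "$name" | md5sum | cut -c1-4)
  d1=$(printf '%s' "$h" | cut -c1-2)
  d2=$(printf '%s' "$h" | cut -c3-4)
  printf '%s/%s/%s\n' "$d1" "$d2" "$name"
}
# Example: mkdir -p "$(dirname "$(shard_path image000123.png)")"
```

The same name always hashes to the same path, so lookups stay a pure function of the file name.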
P.S. Since btrfs has been brought up, I would highly recommend *not*
using it until there is an fsck tool.
--
users mailing list
email@example.com
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines