My first experiment was simply setting up a distributed volume, which requires each of the "bricks" (as they call them) to be a reliable member.
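
For reference, creating a plain distributed volume goes something like this (untested sketch; hostnames and brick paths are made up):

    gluster peer probe node2
    gluster peer probe node3
    gluster volume create distvol transport tcp \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1
    gluster volume start distvol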

However, you can configure it to mirror and stripe across the "bricks" so they don't have to be.  In fact you can set up striped mirrors, etc.
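
For example (made-up hosts again; a striped mirror needs stripe x replica bricks, hence a hypothetical fourth node):

    # 3-way mirror across the nodes
    gluster volume create mirrvol replica 3 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1

    # striped mirror: stripe 2 x replica 2 needs 4 bricks
    gluster volume create smvol stripe 2 replica 2 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 node4:/bricks/b1

And if memory serves, newer gluster also does erasure-coded "dispersed" volumes, which is the closest thing to RAID5/6:

    # 3 bricks, survives the loss of any 1
    gluster volume create ecvol disperse 3 redundancy 1 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1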

I've now got a number of what I call storage nodes.  Each generally consists of a 1U box, stuffed full of RAM, connected to a JBOD chassis full of disks.  I then use ZFS to create storage pools.
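
On each node the pool setup is something like (disk names hypothetical):

    # raidz2 across the JBOD disks, with a filesystem on top
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg
    zfs create tank/data
    zfs set compression=lz4 tank/data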

The problem with this is that your scale is limited to the size of a single "node", and while you can play games with autofs, it's not a cohesive filesystem.
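
(The autofs games being indirect maps along these lines; hostnames and exports made up:

    # /etc/auto.master
    /storage  /etc/auto.storage

    # /etc/auto.storage: one entry per node, assuming NFS exports
    node1  -fstype=nfs  node1:/tank/data
    node2  -fstype=nfs  node2:/tank/data

which gives you /storage/node1, /storage/node2, and so on, but each is still its own filesystem.)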

Gluster solves that problem.  My only complaint is that it's FUSE-based.
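
Clients mount a volume through the FUSE client, e.g. (names made up):

    mount -t glusterfs node1:/distvol /mnt/gluster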

Lustre is a kernel-based filesystem that'll do the same thing as gluster.  However, it doesn't have any of the redundancy stuff; it simply assumes your underlying storage is reliable.
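
For comparison, a minimal Lustre setup runs roughly like this (untested; device and host names made up):

    # combined MGS/MDT on one server
    mkfs.lustre --fsname=lfs0 --mgs --mdt --index=0 /dev/sdb
    mount -t lustre /dev/sdb /mnt/mdt

    # one OST per storage server
    mkfs.lustre --fsname=lfs0 --ost --index=0 --mgsnode=mds1@tcp /dev/sdb
    mount -t lustre /dev/sdb /mnt/ost0

    # clients see it all as one filesystem
    mount -t lustre mds1@tcp:/lfs0 /mnt/lustre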

Tim.

On Tue, Mar 28, 2017 at 11:08 AM, John Stoffel <john@stoffel.org> wrote:

Tim> The subject for that was supposed to be "Fun with glusterfs"
Tim> On Tue, Mar 28, 2017 at 10:51 AM, Tim Keller <turbofx@gmail.com> wrote:

Tim>     I've been messing around with "glusterfs" and it's pretty cool.

Tim>     I built 3 machines running RHEL7.2, each with an extra 250GB disk.

Tim>     I then set up a filesystem that stripes across all three machines.

Tim>     So it looks like a single 750GB filesystem.  Over gigE I'm seeing okay
Tim>     performance, ~50MB/s.

Tim>     Cool stuff.  Next on my list of filesystems to play with is Lustre.

Sounds neat.  Does glusterfs support RAID5/6, or does it rely on the
underlying nodes being reliable?

--
I am leery of the allegiances of any politician who refers to their constituents as "consumers".