Charter Telephone reviews, Boylston?
by John Stoffel
Hi,
Anyone on here using Charter Telephone VOIP service? I've currently
got Verizon, but the wife hates it because our phone lines get flaky
all the time, especially when it rains.
So I'm thinking I'll save money and combine all my stuff onto Charter.
I've already got High Speed Internet and regular old cable. Not wild
about Digital Cable since I'm happy with my Tivo and I don't want yet
another set-top box to deal with...
So, any horror stories about Charter Phone VOIP quality and service?
Thanks,
John
Re: [Wlug] (no subject)
by Tim Keller
Lustre solves the problem by getting out of the way. The metadata server
simply tells the client which of the nodes the pieces of the file
reside on. From reading the documentation, split brain can't happen with
Lustre because the moment any of the metadata controllers goes offline,
the whole thing stops.
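For anyone who wants to kick the tires, a minimal setup is roughly the
following. This is a sketch only -- node names, devices, and mount points
are invented, and I've combined the MGS and MDT on one box for brevity:

  # metadata node (combined MGS + MDT):
  mkfs.lustre --fsname=testfs --mgs --mdt --index=0 /dev/sdb
  mount -t lustre /dev/sdb /mnt/mdt

  # each object storage node (bump --index per OST):
  mkfs.lustre --fsname=testfs --ost --index=0 --mgsnode=mds1@tcp /dev/sdc
  mount -t lustre /dev/sdc /mnt/ost0

  # clients:
  mount -t lustre mds1@tcp:/testfs /mnt/testfs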
Definitely: with ZFS, if you don't start with a plan you're going to have a
bad day.
The current design for a node is a 1U HP DL360 G7 server with ~144 or 288GB
of RAM.
Internal storage is handled by the Smart Array controller. The next one I
build, I think I'm going to buy a 9200-8i and route the cables so that all
the internal and external storage is JBOD.
My current HBA card du jour for external storage is the LSI 9200-8e.
As for external JBOD enclosures, there are LOTS of choices. Generally I've
stuck with the Promise J610s. It's essentially the "expansion" cabinet for
a "smart" Promise array. Its beauty is in its abject stupidity.
As for drives, currently all my stuff is running 4TB Seagate SAS drives.
For "mainline" nodes, I configure them as raid10 pools. This gets me ~29TB
of storage per pool
For "backup" nodes, I configure them as raidz2 pools. All my backup nodes
are 32 disk boxes so the pools are 116TB.
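To make those two layouts concrete, pool creation is roughly the following.
A sketch only -- the sdX names are placeholders (in practice you'd use
/dev/disk/by-id paths), and the exact vdev split on the backup boxes is a
guess:

  # "mainline" node: striped mirrors (the raid10 layout);
  # 16 x 4TB disks as 8 mirror pairs is ~29TB usable
  zpool create tank \
    mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh \
    mirror sdi sdj mirror sdk sdl mirror sdm sdn mirror sdo sdp

  # "backup" node: one way to carve up a 32-disk box is two
  # 16-wide raidz2 vdevs in a single pool
  zpool create backup \
    raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo sdp \
    raidz2 sdq sdr sds sdt sdu sdv sdw sdx sdy sdz sdaa sdab sdac sdad sdae sdaf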
Obviously this design is a balance between cost and performance.
If I need more slots I'll use a DL380 instead of a DL360, and I'll stick in
an Intel X540-T2 10GigE card.
It's fun to imagine what a filesystem would look like if you scaled up the
disks and the interconnects.
Tim.
On Tue, Mar 28, 2017 at 8:45 PM, John Stoffel <john(a)stoffel.org> wrote:
>
> Tim> My first experiment was simply setting up a distributed volume
> Tim> that needs to have each of the "bricks" as they refer to them as
> Tim> being reliable members.
>
> Nice.
>
> Tim> However you can configure it to mirror and strip across the
> Tim> "bricks" so they don't have to be. In fact you can setup
> Tim> stripped mirrors, etc.
>
> That would be safer for sure. I can just see a network problem taking
> down a bunch of bricks and leading to a split-brain situation if
> you're not careful.
>
> Tim> I've now got a number of these storage nodes as I call them. It
> Tim> generally consists of a 1U box, stuffed full of ram, connected to
> Tim> a JBOD chassis full of disks. I then use ZFS to create storage
> Tim> pools.
>
> So I used to like ZFS and think it was "da bomb" but after using it
> for several years, and esp after using the ZFS appliances from
> Sun/Oracle, I'm not really enamored of the entire design any more.
>
> I think they have some great ideas in terms of checksumming all the
> files and metadata. But the layering (or lack of) disks/devices
> inside zpools just drives me nuts. It's just really inflexible and
> can get you in trouble if you're not careful.
>
> Tim> The problem with this is that your scale is limited to the size
> Tim> of a single "node" and while you can play games with autofs it's
> Tim> not a cohesive filesystem.
>
> Can you give some more details of your setup?
>
> Tim> This solves that problem. My only complaint is that it's fuse based.
>
> Yeah, not really high performance at the end of the day.
>
> Tim> Lustre is a kernel loaded filesystem that'll do the same thing as
> Tim> gluster. However, it doesn't have any of the redunancy stuff,
> Tim> it simply assumes your underlying storage is reliable.
>
> This is the key/kicker for all these systems.
>
> I'm waiting for someone to come up with an opensource sharding system
> where you use erasure codes for low-level block storage so that you
> have alot of the RAID6 advantages, but even more reliability AND
> speed. But it's also a hard hard problem space to get right.
>
> Thanks for sharing!
> John
>
--
I am leery of the allegiances of any politician who refers to their
constituents as "consumers".
Re: [Wlug] (no subject)
by John Stoffel
Tim> My first experiment was simply setting up a distributed volume
Tim> that needs to have each of the "bricks" as they refer to them as
Tim> being reliable members.
Nice.
Tim> However you can configure it to mirror and strip across the
Tim> "bricks" so they don't have to be. In fact you can setup
Tim> stripped mirrors, etc.
That would be safer for sure. I can just see a network problem taking
down a bunch of bricks and leading to a split-brain situation if
you're not careful.
Tim> I've now got a number of these storage nodes as I call them. It
Tim> generally consists of a 1U box, stuffed full of ram, connected to
Tim> a JBOD chassis full of disks. I then use ZFS to create storage
Tim> pools.
So I used to like ZFS and thought it was "da bomb", but after using it
for several years, and especially after using the ZFS appliances from
Sun/Oracle, I'm not really enamored of the entire design any more.
I think they have some great ideas in terms of checksumming all the
files and metadata. But the layering (or lack thereof) of disks/devices
inside zpools just drives me nuts. It's just really inflexible and
can get you in trouble if you're not careful.
Tim> The problem with this is that your scale is limited to the size
Tim> of a single "node" and while you can play games with autofs it's
Tim> not a cohesive filesystem.
Can you give some more details of your setup?
Tim> This solves that problem. My only complaint is that it's fuse based.
Yeah, not really high performance at the end of the day.
Tim> Lustre is a kernel loaded filesystem that'll do the same thing as
Tim> gluster. However, it doesn't have any of the redunancy stuff,
Tim> it simply assumes your underlying storage is reliable.
This is the key/kicker for all these systems.
I'm waiting for someone to come up with an open-source sharding system
where you use erasure codes for low-level block storage, so that you
have a lot of the RAID6 advantages but even more reliability AND
speed. But it's also a hard, hard problem space to get right.
Thanks for sharing!
John
Re: [Wlug] (no subject)
by Tim Keller
My first experiment was simply setting up a distributed volume, which needs
each of the "bricks" (as they refer to them) to be reliable members.
However you can configure it to mirror and stripe across the "bricks" so
they don't have to be. In fact you can set up striped mirrors, etc.
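For example, a mirrored-and-striped (distributed-replicate) volume over
four bricks is roughly this -- hostnames and brick paths invented, and
bricks pair up into replica sets in the order given:

  gluster volume create mirrorvol replica 2 transport tcp \
    node1:/bricks/b1 node2:/bricks/b1 \
    node3:/bricks/b2 node4:/bricks/b2
  gluster volume start mirrorvol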
I've now got a number of these storage nodes, as I call them. Each
generally consists of a 1U box, stuffed full of RAM, connected to a JBOD
chassis full of disks. I then use ZFS to create storage pools.
The problem with this is that your scale is limited to the size of a single
"node", and while you can play games with autofs, it's not a cohesive
filesystem.
Gluster solves that problem. My only complaint is that it's FUSE based.
Lustre is a kernel-based filesystem that'll do the same thing as gluster.
However, it doesn't have any of the redundancy stuff; it simply assumes
your underlying storage is reliable.
Tim.
On Tue, Mar 28, 2017 at 11:08 AM, John Stoffel <john(a)stoffel.org> wrote:
>
> Tim> The subject for that was supposed to be "Fun with glusterfs"
> Tim> On Tue, Mar 28, 2017 at 10:51 AM, Tim Keller <turbofx(a)gmail.com>
> wrote:
>
> Tim> I've been messing around with "glusterfs" and it's pretty cool.
>
> Tim> I build 3 machine running RHEL7.2 each with an extra 250GB disk.
>
> Tim> I then setup a filesystem that stripes across all three machines.
>
> Tim> So it looks like a single 750GB filesystem. Over gigE I'm seeing
> okay
> Tim> performance ~50MB/s.
>
> Tim> Cool stuff. Next on my list of filesystems to play with it
> Lustre.
>
> Sounds neat. Does glusterfs support RAID5/6 or does it depend on the
> underlying nodes to be reliable?
>
--
I am leery of the allegiances of any politician who refers to their
constituents as "consumers".
Re: [Wlug] (no subject)
by John Stoffel
Tim> The subject for that was supposed to be "Fun with glusterfs"
Tim> On Tue, Mar 28, 2017 at 10:51 AM, Tim Keller <turbofx(a)gmail.com> wrote:
Tim> I've been messing around with "glusterfs" and it's pretty cool.
Tim> I build 3 machine running RHEL7.2 each with an extra 250GB disk.
Tim> I then setup a filesystem that stripes across all three machines.
Tim> So it looks like a single 750GB filesystem. Over gigE I'm seeing okay
Tim> performance ~50MB/s.
Tim> Cool stuff. Next on my list of filesystems to play with it Lustre.
Sounds neat. Does glusterfs support RAID5/6, or does it depend on the
underlying nodes being reliable?
(no subject)
by Tim Keller
I've been messing around with "glusterfs" and it's pretty cool.
I built 3 machines running RHEL 7.2, each with an extra 250GB disk.
I then set up a filesystem that stripes across all three machines.
So it looks like a single 750GB filesystem. Over GigE I'm seeing okay
performance, ~50MB/s.
Cool stuff. Next on my list of filesystems to play with is Lustre.
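The setup was basically the following -- from memory, so treat it as a
sketch; hostnames and brick paths are made up, and I may actually have
used the (since-deprecated) stripe translator, though a plain distributed
volume gives the same aggregate 750GB:

  # after 'gluster peer probe'-ing the other two nodes, on any one of them:
  gluster volume create testvol transport tcp \
    rhel1:/export/brick rhel2:/export/brick rhel3:/export/brick
  gluster volume start testvol

  # then on a client:
  mount -t glusterfs rhel1:/testvol /mnt/testvol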
Tim.
--
I am leery of the allegiances of any politician who refers to their
constituents as "consumers".
Re: [Wlug] awk help
by Mike Peckar
Nathan's solution worked, thanks to all for the replies. I do feel better that this was not a solution that could fit into a tweet. Theo's is elegant, but there are spaces in the data, and John's would have worked, but I can't use perl in the environment I'm working in.
Mike
awk -F, '{linecount[$3]++;tmp=dup[$3];if(length(tmp)==0){dup[$3]=$0}else{dup[$3]=dup[$3]"\n"$0}} END {for (count in linecount){if(linecount[count]>1){print dup[count]}}}'
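For anyone trying to read that, here's the same program laid out long-form
with comments (functionally identical, just reformatted; the input file
name is assumed):

  awk -F, '
    {
      linecount[$3]++              # count occurrences of column 3
      if (length(dup[$3]) == 0)
        dup[$3] = $0               # first line seen for this key
      else
        dup[$3] = dup[$3] "\n" $0  # append later lines, newline-separated
    }
    END {
      for (key in linecount)
        if (linecount[key] > 1)    # keys that occurred more than once
          print dup[key]           # print the original and its duplicates
    }
  ' file.csv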
Hopefully this is not a homework problem.
On Mon, Mar 13, 2017 at 7:20 PM, Mike Peckar <fog(a)fognet.com> wrote:
Hey now, folks,
This seemed like it should be simple, but I'm at wit's end. I simply want to find duplicates in the third column of a csv file, and output the duplicate line _and_ the original line that matched it. There are a million examples out there that will output just the duplicate but not both.
In the data below, I’m looking for lines that match in the 3rd column…
Normal,Server,xldspntc02,,10.33.52.185,
Normal,Server,xldspntc02,,10.33.52.186,
Normal,Server,xldspntc04,,10.33.52.187,
Normal,Server,xldspntcs01,10.33.16.198,
Normal,Server,xldspntcs01,,10.33.16.199,
Normal,Server,xldsps01,10.33.16.162,
Normal,Server,xldsps02,10.33.16.163,
My desired output would be:
Normal,Server,xldspntc02,,10.33.52.185,
Normal,Server,xldspntc02,,10.33.52.186,
Normal,Server,xldspntcs01,10.33.16.198,
Normal,Server,xldspntcs01,,10.33.16.199,
$ awk -F, 'dup[$3]++' file.csv
I played around with the prev variable, but could not plumb it out fully, e.g. { print prev }
Mike
--
Nathan Panike
Re: [Wlug] awk help
by John Stoffel
>>>>> "Theo" == Theo Van Dinter <felicity(a)kluge.net> writes:
Theo> Assuming there's no spaces in the csv, you could do:
Theo> awk -F, '{print $0,$3}' | sort -k2 | uniq -D -f 1 | awk '{print $1}'
Nice use of 'uniq -f 1' to re-read the file and do the de-duping.
Theo> $ cat 1
Theo> Normal,Server,xldspntcs01,10.33.16.198,
Theo> Normal,Server,xldspntc02,,10.33.52.185,
Theo> Normal,Server,xldsps01,10.33.16.162,
Theo> Normal,Server,xldspntc04,,10.33.52.187,
Theo> Normal,Server,xldspntcs01,,10.33.16.199,
Theo> Normal,Server,xldsps02,10.33.16.163,
Theo> Normal,Server,xldspntc02,,10.33.52.186,
Theo> $ cat 1 | awk -F, '{print $0,$3}' | sort -k2 | uniq -D -f 1 | awk '{print $1}'
Theo> Normal,Server,xldspntc02,,10.33.52.185,
Theo> Normal,Server,xldspntc02,,10.33.52.186,
Theo> Normal,Server,xldspntcs01,,10.33.16.199,
Theo> Normal,Server,xldspntcs01,10.33.16.198,
Theo> On Mon, Mar 13, 2017 at 7:20 PM, Mike Peckar <fog(a)fognet.com> wrote:
Theo> Hey now, folks,
Theo>
Theo> This seemed like it should be simple, but I’m at wits end. I simply
Theo> want to find duplicates in the third column of a csv file, and output
Theo> the duplicate line _and_ the original line that matched it. There’s a
Theo> million examples out there that will output just the duplicate but not
Theo> both.
Theo>
Theo> In the data below, I’m looking for lines that match in the 3rd column…
Theo>
Theo> Normal,Server,xldspntc02,,10.33.52.185,
Theo> Normal,Server,xldspntc02,,10.33.52.186,
Theo> Normal,Server,xldspntc04,,10.33.52.187,
Theo> Normal,Server,xldspntcs01,10.33.16.198,
Theo> Normal,Server,xldspntcs01,,10.33.16.199,
Theo> Normal,Server,xldsps01,10.33.16.162,
Theo> Normal,Server,xldsps02,10.33.16.163,
Theo>
Theo> My desired output would be:
Theo>
Theo> Normal,Server,xldspntc02,,10.33.52.185,
Theo> Normal,Server,xldspntc02,,10.33.52.186,
Theo> Normal,Server,xldspntcs01,10.33.16.198,
Theo> Normal,Server,xldspntcs01,,10.33.16.199,
Theo>
Theo> $ awk -F, 'dup[$3]++' file.csv
Theo>
Theo> I played around with the prev variable, but could not pumb it out
Theo> fully, e.g { print prev }
Theo>
Theo> Mike
Re: [Wlug] awk help
by Theo Van Dinter
Assuming there are no spaces in the csv, you could do:
awk -F, '{print $0,$3}' | sort -k2 | uniq -D -f 1 | awk '{print $1}'
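Piece by piece, that's (same pipeline, annotated, with an input file name
assumed):

  awk -F, '{print $0, $3}' file.csv |  # append column 3 as a space-separated key
    sort -k2 |                         # sort on that appended key
    uniq -D -f 1 |                     # -D prints all repeated lines; -f 1 skips the
                                       # first space-separated field (the csv line
                                       # itself) when comparing, so only the key counts
    awk '{print $1}'                   # strip the key, leaving the original line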
eg:
$ cat 1
Normal,Server,xldspntcs01,10.33.16.198,
Normal,Server,xldspntc02,,10.33.52.185,
Normal,Server,xldsps01,10.33.16.162,
Normal,Server,xldspntc04,,10.33.52.187,
Normal,Server,xldspntcs01,,10.33.16.199,
Normal,Server,xldsps02,10.33.16.163,
Normal,Server,xldspntc02,,10.33.52.186,
$ cat 1 | awk -F, '{print $0,$3}' | sort -k2 | uniq -D -f 1 | awk '{print
$1}'
Normal,Server,xldspntc02,,10.33.52.185,
Normal,Server,xldspntc02,,10.33.52.186,
Normal,Server,xldspntcs01,,10.33.16.199,
Normal,Server,xldspntcs01,10.33.16.198,
On Mon, Mar 13, 2017 at 7:20 PM, Mike Peckar <fog(a)fognet.com> wrote:
> Hey now, folks,
>
>
>
> This seemed like it should be simple, but I’m at wits end. I simply want
> to find duplicates in the third column of a csv file, and output the
> duplicate line _and_ the original line that matched it. There’s a
> million examples out there that will output just the duplicate but not both.
>
>
>
> In the data below, I’m looking for lines that match in the 3rd column…
>
>
>
> Normal,Server,xldspntc02,,10.33.52.185,
>
> Normal,Server,xldspntc02,,10.33.52.186,
>
> Normal,Server,xldspntc04,,10.33.52.187,
>
> Normal,Server,xldspntcs01,10.33.16.198,
>
> Normal,Server,xldspntcs01,,10.33.16.199,
>
> Normal,Server,xldsps01,10.33.16.162,
>
> Normal,Server,xldsps02,10.33.16.163,
>
>
>
> My desired output would be:
>
>
>
> Normal,Server,xldspntc02,,10.33.52.185,
>
> Normal,Server,xldspntc02,,10.33.52.186,
>
> Normal,Server,xldspntcs01,10.33.16.198,
>
> Normal,Server,xldspntcs01,,10.33.16.199,
>
>
>
> $ awk -F, 'dup[$3]++' file.csv
>
>
>
> I played around with the prev variable, but could not pumb it out fully,
> e.g { print prev }
>
>
>
> Mike
>
>
Re: [Wlug] awk help
by John Stoffel
>>>>> "Mike" == Mike Peckar <fog(a)fognet.com> writes:
Mike> This seemed like it should be simple, but I’m at wits end. I
Mike> simply want to find duplicates in the third column of a csv
Mike> file, and output the duplicate line _and_ the original line
Mike> that matched it. There’s a million examples out there that
Mike> will output just the duplicate but not both.
Mike> In the data below, I’m looking for lines that match in the 3rd column…
The sorting part is easy...
sort -k 3 -t "," <file>
Now to find the duplicates... I'd probably jump to a perl script:
perl -e 'while (<>) { @t = split(","); push @{$t{$t[2]}}, $_; }
foreach (sort keys %t) { print @{$t{$_}} if $#{$t{$_}} > 0; }'
Should also do the right thing. First it splits each line on commas and
stuffs the line into an assoc array of arrays, keyed on the third column.
Then it sorts the keys and prints out those with more than one entry.
Admittedly done off the top of my head, without any actual testing. :-)
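Spelled out long-form, the idea is one bucket of lines per key, then print
the buckets with more than one line (still untested beyond eyeballing; the
file name is assumed):

  perl -e '
    while (<>) {                # read the files named on the command line
      my @f = split(",", $_);   # split the csv line on commas
      push @{ $h{$f[2]} }, $_;  # bucket the whole line under its column-3 key
    }
    for my $key (sort keys %h) {
      print @{ $h{$key} } if @{ $h{$key} } > 1;   # only duplicated keys
    }
  ' file.csv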
Mike> Normal,Server,xldspntc02,,10.33.52.185,
Mike> Normal,Server,xldspntc02,,10.33.52.186,
Mike> Normal,Server,xldspntc04,,10.33.52.187,
Mike> Normal,Server,xldspntcs01,10.33.16.198,
Mike> Normal,Server,xldspntcs01,,10.33.16.199,
Mike> Normal,Server,xldsps01,10.33.16.162,
Mike> Normal,Server,xldsps02,10.33.16.163,
Mike> My desired output would be:
Mike> Normal,Server,xldspntc02,,10.33.52.185,
Mike> Normal,Server,xldspntc02,,10.33.52.186,
Mike> Normal,Server,xldspntcs01,10.33.16.198,
Mike> Normal,Server,xldspntcs01,,10.33.16.199,
Mike> $ awk -F, 'dup[$3]++' file.csv
Mike> I played around with the prev variable, but could not pumb it out fully,
Mike> e.g { print prev }
Mike> Mike