RE: [Wlug] NFS question
I'm not syncing data; all three servers are writing logs to the share constantly (the applcsf share). We didn't buy Veritas Volume Manager, and we are not running Red Hat 3.0 yet, so I had to go with NFS for a shared file system. Believe me, I wish I didn't have to use NFS.

I am looking into buying a NAS device to store the shares on; that way any of the servers can go down and the others won't be affected. But the NAS will still be a single point of failure.

-----Original Message-----
From: wlug-bounces@mail.wlug.org [mailto:wlug-bounces@mail.wlug.org] On Behalf Of Charles R. Anderson
Sent: Tuesday, October 12, 2004 10:22 AM
To: wlug@mail.wlug.org
Subject: Re: [Wlug] NFS question

On Tue, Oct 12, 2004 at 11:15:11AM -0400, Don Peterson wrote:
> I mount a share:
>
>   shoprod1:/applcsf  /applcsf  nfs  soft,rsize=8192,wsize=8192,retrans=6,timeo=14,intr
>
> Can you see anything wrong with this line, or can anyone suggest
> something else to try? I can't even forcibly unmount the shares with
> an "umount -f", and if we try to kill any processes accessing the
> shares, the kill command just hangs (even with a "kill -9"). Any
> suggestions?
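One escape hatch worth knowing when a mount wedges like this: Linux (kernel 2.4.11 and later, via util-linux) supports a lazy unmount, which detaches the mount point from the filesystem tree immediately even while it is busy. A minimal sketch:

    # Detach the hung mount right away; the kernel completes the
    # unmount once nothing references it anymore. Note this does not
    # unstick processes already blocked in NFS I/O.
    umount -l /applcsf
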
I don't think there is any good solution. Using the "soft" and "intr" mount options is an extremely bad idea; do it only if you don't care about losing or corrupting your data. Important data should always be mounted "hard,nointr".

Personally, I avoid NFS like the plague. I only use it for read-only mounts (where soft,intr is less of a problem). Instead, I've standardized on rsync over ssh with RSA keys to get data between servers that need to be synchronized.
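A minimal sketch of both suggestions, reusing the host and share from Don's fstab line (the key path below is illustrative):

    # /etc/fstab: mount the share hard and non-interruptible, as advised
    shoprod1:/applcsf  /applcsf  nfs  hard,nointr,rsize=8192,wsize=8192  0 0

    # Or skip NFS and pull the data over ssh instead, authenticating
    # with an RSA key (generated with "ssh-keygen -t rsa" and added to
    # ~/.ssh/authorized_keys on shoprod1)
    rsync -az -e "ssh -i /root/.ssh/id_rsa" shoprod1:/applcsf/ /applcsf/
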
Hello Don,

True, but if you must use NAS for this, a redundant NAS device takes care of the single-point-of-failure issue. Look at NAS solutions that provide clustered heads attached with redundant paths to redundant disks. Clustered Linux with SAN storage would work, or perhaps a black-box solution like a Network Appliance filer cluster ($$$!), depending on your budget and time constraints.

Moving the NFS shares off the first database server is important because, in order to have redundancy in the cluster, each member must provide the same function. Right now the first database server provides an additional unique function, and nothing can take over for it, as you have seen.

-Adam

On Oct 12, 2004, at 11:48 AM, Don Peterson wrote:
> I'm not syncing data; all three servers are writing logs to the share
> constantly (the applcsf share). We didn't buy Veritas Volume Manager,
> and we are not running Red Hat 3.0 yet, so I had to go with NFS for a
> shared file system. Believe me, I wish I didn't have to use NFS.
>
> I am looking into buying a NAS device to store the shares on; that way
> any of the servers can go down and the others won't be affected. But
> the NAS will still be a single point of failure.
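For illustration, the point of the clustered setup Adam describes is that clients mount a floating service address rather than either NAS head directly; the hostname here is hypothetical:

    # /etc/fstab on each server: "nas-vip" is a hypothetical virtual IP
    # that fails over between the two NAS heads, so either head can go
    # down without the clients losing the share
    nas-vip:/applcsf  /applcsf  nfs  hard,nointr,rsize=8192,wsize=8192  0 0
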