Hello Don,

True, but if you must use NAS for this, a redundant NAS device takes care of the single-point-of-failure issue. Look at NAS solutions that provide clustered heads attached by redundant paths to redundant disks. A clustered Linux w/ SAN solution would work, or perhaps a black box like a Network Appliance filer cluster ($$$!), depending on your budget and time constraints.

Moving the NFS shares off the first database server is important because, to have redundancy in the cluster, the members must each have the same function. Right now the first database server provides an additional, unique function, and nothing can take over for it, as you have seen.

-Adam

On Oct 12, 2004, at 11:48 AM, Don Peterson wrote:
I'm not syncing data; all three servers are constantly writing logs to the share (the applcsf share). We didn't buy Veritas Volume Manager, and we aren't running Red Hat 3.0 yet, so I had to go with NFS for a shared file system. Believe me, I wish I didn't have to use NFS. I am looking into buying a NAS device to store the shares on; that way any of the servers can go down without affecting the others. But the NAS will still be a single point of failure.
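For what it's worth, while the share does stay on NFS, the client mount options matter a lot for how the servers behave when the share disappears. A minimal sketch of an /etc/fstab entry (the hostname nas01 and export path are hypothetical; check the options against your distribution's nfs(5) man page):

```
# Hard-mount so writes block and retry rather than returning I/O
# errors if the NFS server goes away; intr lets a hung process be
# killed; tcp is more robust than udp on a busy network.
nas01:/export/applcsf  /u01/applcsf  nfs  hard,intr,tcp,rsize=32768,wsize=32768  0 0
```

With hard mounts, the processes writing logs to the share hang until the server comes back instead of failing, which is usually what you want with several servers writing to the same directory.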