Archive of UserLand's first discussion group, started October 5, 1998.

Re: Frontier on Wolfpack

Author:Dave Kopper
Posted:5/20/1999; 2:27:42 PM
Topic:Frontier on Wolfpack
Msg #:6522 (In response to 6513)
Prev/Next:6521 / 6523

I haven't tried Frontier under Wolfpack yet (soon I hope), but with a reasonable setup it shouldn't be too hard.

Let's do a quick run-through of how cluster servers are set up - which will probably answer the question about Frontier on Wolfpack (actually, it's MSCS, or Microsoft Cluster Server).

Start with two servers, each with at least two lan cards. The first lan is for client access; the second is for the cluster interconnect (it can be a cross-over lan cable). Both machines will need to be in the same NT Domain. You'll also need a disk (or more than one) physically connected between the two servers (it'll probably be a SCSI disk; it'd be nicer if it was a hardware RAID solution, but it's up to you - software RAID DOESN'T work!). Be careful about the disk controller setup and the SCSI bus termination (we either use jumpers on Adaptec controllers or Y cables with a differential hardware RAID disk array).

Set up the first server (you'll need the Enterprise version of NT), and make sure to get both lans set up (the interconnect lan can use private IP #'s like 192.168.1.X - they'll never get routed anywhere). You'll need to set up the physically shared disk from the first side - make sure to use only NTFS.

Once you get that far, you'll want to install MSCS (second CD of the Enterprise edition of NT). It'll ask reasonable questions - new cluster for the first side, give the cluster a name, it'll ask about the physically shared disks (which can't be on the same SCSI bus as the system disk), a resource disk (it'll need a little disk space to remember stuff), and a cluster IP # (an IP # that will migrate between the servers on the client lan). Once it's done, setup will force a reboot of the first system - once it's back up you'll find a new program in the administrative programs (Cluster Administrator). Start that program and type in the name of the first computer to see what the cluster is doing.

On to the other side - this one is easier: set up MSCS there and join an existing cluster (make sure to type in the same cluster name as before). It'll go around the block too. You'll see the second system in Cluster Administrator near the end of setup.

Ok - that gets you to a basic, working cluster. Now you'll want to install Frontier onto a disk that is physically shared between the systems (I'd recommend a different physical disk than the resource disk, but that's up to you). Start with the first system and get Frontier working just the way you want - I've never tried to run Frontier as a service, but it should work as an application just fine. Once you get it working on the first system, we get into some cluster details that you'll need to know...

MSCS manages lots of neat stuff - resources are the primitives it likes (things like disks, IP#s, and virtual IIS servers). Those resources are grouped into resource groups; a resource group is intended to be a collection of everything needed for its set of resources to work together. You can set up dependencies between resources within a resource group. A good example is a file share: you need a resource group that has a physically shared disk (NOT a partition, but a physical disk or LUN), another client lan IP#, a network name (kind of like a virtual computer name), and finally the file share resource itself. Dependencies work like you would expect: for the file share to work correctly, the IP# has to be online before the network name, and the file share needs both the network name and the physical disk online before it can work. So the dependencies would be: the network name depends on the IP#, and the file share depends on the network name and the physical disk. Ok - example out of the way... :-)
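If you'd rather script it than click through Cluster Administrator, the file share example could be sketched roughly like this with the cluster.exe command-line tool that ships with MSCS (all the resource names here are made up, and you'd still need to fill in each resource's private properties - drive letter, IP address, share path - before bringing anything online):

```shell
REM Sketch only - hypothetical names; run from a node with MSCS installed.
REM Create a group to hold the file share and its supporting resources.
cluster group "Share Group" /create

REM The physical disk, client-lan IP#, network name, and the share itself.
cluster resource "Share Disk" /create /group:"Share Group" /type:"Physical Disk"
cluster resource "Share IP"   /create /group:"Share Group" /type:"IP Address"
cluster resource "Share Name" /create /group:"Share Group" /type:"Network Name"
cluster resource "Share"      /create /group:"Share Group" /type:"File Share"

REM Dependencies, exactly as described above: the network name depends on
REM the IP#, and the file share depends on the name and the physical disk.
cluster resource "Share Name" /adddep:"Share IP"
cluster resource "Share"      /adddep:"Share Name"
cluster resource "Share"      /adddep:"Share Disk"
```

Either way - GUI or command line - the dependency tree is the thing to get right; MSCS uses it to decide the order resources come online and go offline.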

Back to getting Frontier to work: now that it works on the first system, stop Frontier there and use Cluster Administrator to move the resource group to the other server. Then work on the other server to get Frontier working correctly from the physically shared disk...

I'd probably suggest renaming one of the resource groups to Frontier Group, and in that group adding an IP # for the client lan (that'll be the IP # Frontier can use, which will move between the two servers). Once you have Frontier working on both systems (one at a time), you'll probably want to add either a service or an application resource to the group (depends on how Frontier starts for you). You'll probably want this service/application to depend on the physical disk and the IP# (services can have some of the registry replicated between the two systems). Then you should finally be able to bring the Frontier Group online. Frontier should start up and generally be pretty happy. You'll probably want to experiment with moving the resource group between the two machines for a while.
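As a rough cluster.exe sketch of the above (the group and resource names are my inventions, and I'm assuming Frontier runs as a Generic Application resource - if you get it going as a service, you'd use Generic Service instead):

```shell
REM Hypothetical sketch of the Frontier Group - adjust names to taste.
cluster group "Frontier Group" /create
cluster resource "Frontier Disk" /create /group:"Frontier Group" /type:"Physical Disk"
cluster resource "Frontier IP"   /create /group:"Frontier Group" /type:"IP Address"
cluster resource "Frontier App"  /create /group:"Frontier Group" /type:"Generic Application"

REM Frontier shouldn't start until its disk and IP# are online.
cluster resource "Frontier App" /adddep:"Frontier Disk"
cluster resource "Frontier App" /adddep:"Frontier IP"

REM Bring the whole group online, then practice failing it over by hand.
cluster group "Frontier Group" /online
cluster group "Frontier Group" /moveto:OTHERNODE
```

A plain `cluster group` with no switches lists the groups and which node currently owns each - handy while you're experimenting with moves.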

The above setup is a simple failover-type system (one system running Frontier while the other twiddles its thumbs waiting for something to do). If Frontier could be started a second time on the same system, then you might want to set up two different resource groups with different physical disks. That would let both systems run Frontier, and if one fails, the other picks up the resource group that failed (i.e. a single system would end up with two copies of Frontier running).

You might also want to look into details like a UPS and check out the preferred ownership of resource groups.

Hope this helps some...




This page was archived on 6/13/2001; 4:50:19 PM.

© Copyright 1998-2001 UserLand Software, Inc.