Chat log from Silas and Liz - 2007-02-13 21:43 PST

(08:17:56 PM) silas_bennett: Did you get my emails?
(08:18:11 PM) Liz Fong: the most recent?
(08:18:11 PM) Liz Fong: yeah
(08:18:27 PM) silas_bennett: What do you think of the fileserver?
(08:20:38 PM) Liz Fong: josh goldstein is not a UGCS admin any more by fiat of UASH.
(08:20:39 PM) Liz Fong: fyi
(08:20:44 PM) Liz Fong: I'm looking at the info right now
(08:20:53 PM) silas_bennett: Oh, okay.
(08:21:21 PM) Liz Fong: question.
(08:21:27 PM) Liz Fong: why are we buying new machines for fileserving?
(08:21:28 PM) silas_bennett: answer.
(08:21:38 PM) Liz Fong: we have three or four machines that are 2u and very suitable to do storage
(08:21:45 PM) Liz Fong: they can take 6 disks each, from what I understand
(08:22:02 PM) Liz Fong: we should carefully inventory those systems and see if they will fit our needs
(08:22:04 PM) silas_bennett: Oh, I thought that they were only 2 disk machines.
(08:22:11 PM) Liz Fong: before we look at buying outside hardware
(08:22:20 PM) silas_bennett: Gotcha.
(08:22:24 PM) Liz Fong: no, Evan explicitly mentioned he thought they'd be good fileservers due to the capacity
(08:22:34 PM) Liz Fong: I mean, I'd like to get as much bang for the $15k as possible
(08:22:46 PM) Liz Fong: now...
(08:22:49 PM) silas_bennett: That makes sense. What kind of disks do they take?
(08:22:53 PM) Liz Fong: if they offered a SCSI model of that server...
(08:23:08 PM) Liz Fong: then that would make sense as the single "fast" data server
(08:23:09 PM) silas_bennett: They do, I asked for SATA because the drives are cheaper.
(08:23:41 PM) Liz Fong: ah.
(08:23:43 PM) Liz Fong: *nods*
(08:24:01 PM) Liz Fong: but anyhow, yeah, inventorying is what we should do first thing to determine our needs
(08:24:04 PM) silas_bennett: SCSI isn't that much faster than SATAII.
(08:24:08 PM) Liz Fong: because ITS left us some nice things
(08:24:14 PM) silas_bennett: Makes sense.
(08:24:17 PM) Liz Fong: yeah, but you can kick up the disk RPM
(08:24:31 PM) Liz Fong: on SCSI far above what SATAII does, or is that out of date?
(08:24:35 PM) silas_bennett: You can get 10K RPM SATA drives as well.
(08:25:35 PM) silas_bennett: Again, if you want really fast storage, the best thing you can do is max out the RAM and tell the kernel to pre-cache aggressively.
(08:25:51 PM) Liz Fong: True.
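A minimal sketch of the aggressive pre-caching Silas describes, assuming a Linux fileserver; the device name is illustrative and the values are starting points, not tested recommendations:

    import subprocess

    # Crank up readahead on the data device so the kernel prefetches far
    # beyond the (historical) default of 256 sectors. Device name is made up.
    subprocess.run(["blockdev", "--setra", "8192", "/dev/md0"], check=True)

    # Keep dentry/inode caches around longer than the default (requires root).
    with open("/proc/sys/vm/vfs_cache_pressure", "w") as f:
        f.write("50")

Past that, "max out the RAM" does the rest: Linux uses otherwise-idle memory as page cache automatically.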
(08:26:37 PM) silas_bennett: Well, if there are suitable storage boxes, then we shouldn't buy hardware for it. I just saw them at SCALE and they had some nice products so I enquired.
(08:26:45 PM) Liz Fong: thanks :D
(08:27:54 PM) silas_bennett: I still don't like the root over AFS idea. It would be best to throw a ton of disks in one of those servers, RAID them, and add a Gigabit NIC for serving the files out. Oh, and fill it with RAM too.
(08:28:27 PM) Liz Fong: well, thing is...
(08:28:34 PM) Liz Fong: I want new machine installation to be plug and play
(08:28:49 PM) Liz Fong: and I want updates to existing machines to be doable in a single step
(08:28:51 PM) silas_bennett: Then you can do NFSv4 (for security), or any other NFS through an ssh tunnel, or use FUSE to ssh-mount.
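For the FUSE option, a quick sketch of what the ssh-mount amounts to, assuming sshfs is installed; the host and paths are illustrative:

    import subprocess

    # Mount the fileserver's /export/home on the client over ssh via FUSE.
    # "fileserver" and both paths are hypothetical; -o reconnect lets the
    # mount survive brief network drops.
    subprocess.run(["sshfs", "fileserver:/export/home", "/home",
                    "-o", "reconnect"], check=True)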
(08:29:07 PM) silas_bennett: Did you see my other email about debian-live?
(08:29:31 PM) silas_bennett: use the make-live tool on the fileserver, and netboot the client machines.
(08:29:35 PM) Liz Fong: eww, fuse
(08:29:40 PM) Liz Fong: *balks*
(08:29:48 PM) Liz Fong: that's not terribly stable for mounting a root filesystem.
(08:30:03 PM) Liz Fong: I dunno about NFS...
(08:30:04 PM) silas_bennett: I was thinking fuse for /home
(08:30:07 PM) Liz Fong: it's caused all kinds of problems.
(08:30:15 PM) Liz Fong: maybe NFS4 is better
(08:30:19 PM) Liz Fong: but I just want out...
(08:30:38 PM) silas_bennett: There are ways to do NFS right. And there are other protocols, such as iSCSI or AoE.
(08:31:52 PM) silas_bennett: you can export a partition using iSCSI or AoE, marking it readonly, and then the client just uses UnionFS to make changes. That way if you suspect that a box has been compromised all you have to do is reboot it.
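A minimal sketch of that read-only-root-plus-UnionFS arrangement, assuming a Linux client with the period's unionfs module loaded; the AoE device follows the aoe driver's /dev/etherd naming, but the shelf/slot and mount points are made up:

    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Mount the exported root read-only, put a throwaway tmpfs over it,
    # and union the two; all writes land in the tmpfs layer.
    run(["mount", "-o", "ro", "/dev/etherd/e0.1", "/mnt/ro"])
    run(["mount", "-t", "tmpfs", "tmpfs", "/mnt/rw"])
    run(["mount", "-t", "unionfs",
         "-o", "dirs=/mnt/rw=rw:/mnt/ro=ro", "unionfs", "/mnt/root"])

Since the writable layer is tmpfs, a reboot discards it, which is exactly the clean-a-compromised-box property described above.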
(08:31:54 PM) Liz Fong: is AFS particularly worse than NFS?
(08:32:22 PM) silas_bennett: Only in the fact that you are assuming that the other machines will always be up.
(08:32:47 PM) silas_bennett: A single machine with RAID is much more reliable. Especially if that machine has dual power supplies.
(08:33:37 PM) silas_bennett: Back to debian-live: This centralizes the upgrades.
(08:33:39 PM) Liz Fong: why would I assume the other machines will always be up?
(08:34:27 PM) silas_bennett: When you want to upgrade the machines, you just chroot into the debian-live image and upgrade it. The changes then get pushed out to the clients.
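A hedged sketch of that centralized upgrade step; the image location under /srv is an assumption, not an actual UGCS path:

    import subprocess

    CHROOT = "/srv/debian-live/chroot"   # hypothetical location of the live image tree

    # Upgrade the shared image in place; every netbooted client sees the
    # result, so one step updates the whole fleet.
    for cmd in (["chroot", CHROOT, "apt-get", "update"],
                ["chroot", CHROOT, "apt-get", "-y", "dist-upgrade"]):
        subprocess.run(cmd, check=True)

Presumably the image is then rebuilt or re-exported so clients pick up the changes on their next boot.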
(08:35:26 PM) Liz Fong: clever.
(08:35:35 PM) Liz Fong: does it provide a mechanism, though, for caching the data on the client side?
(08:35:49 PM) silas_bennett: Yes.
(08:36:32 PM) silas_bennett: The machines come up running on the client side (not thin, not thick): software still loads from the server, but executes on the client.
(08:37:01 PM) silas_bennett: So you dedicate the disk on the client as swap, and tell it to cache.
(08:37:02 PM) Liz Fong: right, in ram of the client?
(08:37:06 PM) Liz Fong: gotcha.
(08:37:08 PM) Liz Fong: hmm
(08:37:17 PM) Liz Fong: this does sound more tenable than AFS in terms of being very standard
(08:37:24 PM) silas_bennett: Of course the caching will prefer RAM, then swap
(08:37:43 PM) silas_bennett: Yes, and it is more scalable too.
(08:37:54 PM) Liz Fong: on the other hand
(08:37:58 PM) Liz Fong: it focuses load on a single server.
(08:38:05 PM) silas_bennett: And it won't add to network congestion.
(08:38:05 PM) Liz Fong: with no way to distribute that load
(08:38:15 PM) silas_bennett: It isn't a heavy load though.
(08:38:16 PM) Liz Fong: and no ability to deal with a downed machine
(08:38:35 PM) Liz Fong: what happens if the server providing the image dies?
(08:38:56 PM) Liz Fong: (sorry, I always devil's advocate)
(08:39:34 PM) silas_bennett: You have 1 reliable box with high redundancy (i.e. the server I showed you) or you have 2 or 3 (non-redundant) servers set up to do heartbeat.
(08:39:49 PM) Liz Fong: does NIS do heartbeat?
(08:39:51 PM) Liz Fong: err
(08:39:52 PM) silas_bennett: If the primary dies, the secondary updates the DNS entries and becomes the primary.
(08:39:52 PM) Liz Fong: NFS
(08:39:58 PM) Liz Fong: /SAMBA
(08:40:02 PM) Liz Fong: or whatever it's using
(08:40:08 PM) silas_bennett: heartbeat is not done at the protocol level.
(08:40:21 PM) silas_bennett: It is implemented at the network layer.
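Not heartbeat itself, just a toy sketch of the failover logic it automates at the network layer: watch the primary, and on sustained silence claim the floating service address locally. The hostname, address, and interface are all hypothetical:

    import subprocess, time

    PRIMARY = "fs1.ugcs.caltech.edu"   # hypothetical primary fileserver
    SERVICE_IP = "192.168.1.10/24"     # hypothetical floating service address

    def primary_alive():
        return subprocess.run(["ping", "-c", "1", "-W", "2", PRIMARY],
                              stdout=subprocess.DEVNULL).returncode == 0

    misses = 0
    while misses <= 3:                 # tolerate brief blips
        misses = 0 if primary_alive() else misses + 1
        time.sleep(5)

    # Take over: claim the floating IP so clients keep talking to "the" server.
    subprocess.run(["ip", "addr", "add", SERVICE_IP, "dev", "eth0"], check=True)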
(08:40:24 PM) Liz Fong: okay, but the clients can't access their root fs for a period of time
(08:40:36 PM) Liz Fong: this seems like a Bad Thing, even if downtime is <0.5s
(08:40:36 PM) silas_bennett: No, it is immediate
(08:40:59 PM) silas_bennett: Well, pseudo-immediate.
(08:41:04 PM) Liz Fong: I'm confused how exactly this would work - client has a TCP connection open to the old server
(08:41:10 PM) Liz Fong: server goes dead, TCP link flaps
(08:41:25 PM) Liz Fong: what if the client doesn't happen to have the right data cached?
(08:41:40 PM) Liz Fong: if it's UDP, it might be slightly better
(08:41:41 PM) Liz Fong: but still
(08:42:15 PM) Liz Fong: stale nfs file handles, etc.
(08:42:17 PM) silas_bennett: The TCP should not be an issue, heartbeat configures the secondary to pick up the primary's sessions.
(08:42:36 PM) Liz Fong: ah, so the connection doesn't actually flap
(08:42:38 PM) Liz Fong: got it.
(08:42:52 PM) silas_bennett: recent data: I am not sure about this, I will look into it.
(08:43:06 PM) Liz Fong: this still seems like a lot of custom jiggling, when AFS is designed out of the box to support redundancy
(08:43:14 PM) Liz Fong: but we need to give everything fair evaluation
(08:43:25 PM) silas_bennett: Yeah but what happens when 2 machines die?
(08:43:26 PM) Liz Fong: because I do like the debian live idea, and it seems easy to maintain
(08:43:41 PM) Liz Fong: assuming you didn't replicate the data to 3 places, then you're hosed
(08:44:00 PM) Liz Fong: but it's the same no matter what system you set up
(08:44:06 PM) silas_bennett: Which is why I like a single highly redundant server for critical applications.
(08:44:13 PM) Liz Fong: you decide how much redundancy you want, and then you carry it out
(08:44:40 PM) Liz Fong: I guess we're of conflicting schools of thought, I prefer having the ability to take an arbitrary machine down, fiddle with it, and bring it back up
(08:44:55 PM) silas_bennett: I have dealt with high-availability servers quite a bit. The most common failures on computers are, in this order: hard disks, power supplies, fans.
(08:45:30 PM) silas_bennett: Having those things hot swappable goes a long way for reliability.
(08:46:03 PM) silas_bennett: Well here is an idea:
(08:46:45 PM) Liz Fong: right, having things hot-swappable guards against hardware failure quite nicely
(08:47:19 PM) Liz Fong: but I'm foreseeing that we'll want to regularly do software tweaks from time to time, upgrade things, etc.
(08:47:27 PM) silas_bennett: Bring up the 3 file servers, and have them use AFS with each other over a separate network. Don't export AFS to the clients, give them some other access mechanism. Heartbeat will take care of the TCP sessions, while AFS should track the data on the server side.
(08:48:00 PM) Liz Fong: what's the fear of exporting AFS to clients?
(08:48:03 PM) silas_bennett: That way AFS doesn't congest the network.
(08:48:15 PM) Liz Fong: ah.
(08:48:22 PM) silas_bennett: And you don't want to rely on the hard disks of the clients.
(08:48:49 PM) silas_bennett: The servers are running RAID, so a machine would actually have to die to lose the AFS parity.
(08:48:57 PM) Liz Fong: so Keegan says that a DoE lab was using AFS for a fairly broad installation without problems of congestion or performance
(08:49:06 PM) Liz Fong: how would I be relying on client hard disks?
(08:49:11 PM) silas_bennett: If you have parity info on the clients, you lose AFS parity with a dead disk on the client.
(08:49:16 PM) Liz Fong: Correct.
(08:49:20 PM) Liz Fong: but clients aren't storing parity data :)
(08:49:36 PM) Liz Fong: they're just using their memory (and disks) as cache for themselves
(08:49:52 PM) silas_bennett: Oh, I thought you were making them part of the distributed FS.
(08:49:55 PM) Liz Fong: mounting an AFS volume doesn't make you an authoritative parity source
(08:49:57 PM) Liz Fong: no :)
(08:50:10 PM) Liz Fong: *laughs*
(08:50:36 PM) silas_bennett: Well, that is okay then.
(08:51:33 PM) silas_bennett: The only other thing that is bothersome about AFS is that it is relatively new. But initial testing with it will convince us one way or the other.
(08:51:55 PM) Liz Fong: CMU has apparently been using it for ages
(08:52:13 PM) Liz Fong: AFS is certainly newer than NFS, but definitely more stable than its forks (Coda, etc.)
(08:53:58 PM) silas_bennett: The other thing to point out is that you don't actually need AFS for the root FS, as the root is actually exported read-only to the clients. The clients just use UnionFS to make changes; this is how KNOPPIX allows you to make changes to the OS running on a CDROM.
(08:54:13 PM) silas_bennett: You would definitely want to use AFS for /home though.
(08:54:24 PM) Liz Fong: yeah. the cool thing about afs
(08:54:32 PM) Liz Fong: is that it has independent addressing of volumes
(08:54:58 PM) Liz Fong: so /afs/ugcs.caltech.edu/home/efong could be on file1 and backed up on backup1
(08:55:16 PM) silas_bennett: please explain that in more depth.
(08:55:35 PM) Liz Fong: it lets me transparently reallocate volumes among members of the fileserver cluster
(08:55:48 PM) Liz Fong: so if fileserver1 is running out of space
(08:55:59 PM) Liz Fong: I can migrate /afs/ugcs.caltech.edu/home/efong to a different server
(08:56:13 PM) Liz Fong: as the primary read-write server for the volume
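A hedged sketch of the transparent migration Liz describes, using OpenAFS's vos tool (run with admin tokens); the volume, server, and partition names are made up, though /vicepX is the standard AFS partition convention:

    import subprocess

    # Move the read-write volume from fileserver1's /vicepa to
    # fileserver2's /vicepb; clients keep using the same /afs path
    # throughout, since the volume location database redirects them.
    subprocess.run(["vos", "move", "user.efong",
                    "fileserver1", "/vicepa",
                    "fileserver2", "/vicepb"], check=True)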
(08:56:15 PM) silas_bennett: Don't they run on all of the servers?
(08:56:37 PM) silas_bennett: I thought the whole point was that it was a distributed striped FS.
(08:57:09 PM) Liz Fong: no, AFS stores things in fixed locations, but you can clone things to different servers
(08:57:20 PM) Liz Fong: no, it's not parity-protected or striped
(08:57:43 PM) Liz Fong: although a filesystem that did that would be pretty cool.
(08:58:28 PM) silas_bennett: Does the cloned data update automatically?
(08:58:38 PM) Liz Fong: yes.
(08:58:38 PM) silas_bennett: And if so, on what time scale?
(08:59:19 PM) Liz Fong: I believe fairly immediately after the current read-write server for a volume receives a change, it's propagated to all copies of that volume on all servers in the AFS cluster
(08:59:38 PM) Liz Fong: not sure how many ms, etc., but it's not as if there's a deliberate delay
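For reference, the OpenAFS command that pushes a read-write volume's current contents out to its read-only clone sites is vos release; whether that happens automatically or has to be scheduled is worth verifying during the evaluation. The volume name here is hypothetical:

    import subprocess

    # Propagate the RW volume to all of its RO replica sites.
    subprocess.run(["vos", "release", "user.efong"], check=True)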
(08:59:44 PM) silas_bennett: Here is a clever idea:
(08:59:54 PM) silas_bennett: Are you familiar with AoE?
(09:00:33 PM) Liz Fong: vaguely.
(09:01:07 PM) silas_bennett: ATA over Ethernet is just what it sounds like. There is no TCP overhead involved, just straight ATA protocol over an Ethernet cable.
(09:02:31 PM) silas_bennett: Take 3 servers with hardware RAID, i.e. the boxes available to us, and export the RAID disks using AoE.
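A minimal sketch of that export, assuming the stock vblade userland exporter from the AoE tools; the shelf/slot numbers, interface, and device are illustrative:

    import subprocess

    # vblade serves the block device in the foreground, so run it as a
    # long-lived child: shelf 0, slot 1, on eth0, exporting the RAID device.
    exporter = subprocess.Popen(["vblade", "0", "1", "eth0", "/dev/sda4"])

Clients with the aoe driver loaded would then see the disk as /dev/etherd/e0.1.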
(09:02:45 PM) Liz Fong: uhh...
(09:03:24 PM) Liz Fong: that won't work.
(09:03:30 PM) silas_bennett: Why not?
(09:03:30 PM) Liz Fong: AoE doesn't support any kind of routing
(09:03:38 PM) silas_bennett: Doesn't need to.
(09:03:39 PM) Liz Fong: you can't pipe AoE to switches.
(09:03:46 PM) Liz Fong: and from switches to clients
(09:03:52 PM) Liz Fong: it's point to point
(09:04:03 PM) silas_bennett: AoE will be transparent to switches, but will not traverse routers. This is a Security Feature.
(09:04:48 PM) Liz Fong: or am I misreading how AoE works?
(09:05:29 PM) silas_bennett: You partition the disks as such: 100MB for /boot : ~20GB for /root : ~1GB SWAP : The rest for AoE.
(09:05:54 PM) silas_bennett: You misread AoE. Google uses it over switches just fine.
(09:06:45 PM) silas_bennett: True, it is not routed, but a switch doesn't route; it forwards a frame to the port whose MAC address is in its forwarding table.
(09:08:11 PM) silas_bennett: Well anyway, you use software raid5 on the exported AoE drives and store /home on them.
(09:08:11 PM) Liz Fong: ah, I see what it's doing
(09:08:29 PM) Liz Fong: wait a minute.
(09:08:35 PM) silas_bennett: in /home you have a debian-live folder. ;)
(09:08:37 PM) Liz Fong: you can't have multiple machines trying to run RAID
(09:08:40 PM) Liz Fong: on the same set of disks
(09:08:47 PM) silas_bennett: No you don't need to.
(09:08:56 PM) silas_bennett: Just the primary server.
(09:08:58 PM) Liz Fong: you'd need to have one machine being arbitrator of the parity data, etc.
(09:09:15 PM) Liz Fong: we're running in circles.
(09:10:03 PM) silas_bennett: The second and third servers have the raid configured, but are not running it. If one machine dies, not the primary, then the primary just sees a degraded raid5.
(09:10:59 PM) silas_bennett: If the primary dies, then the secondary kicks in with heartbeat, and among the procedures it runs when activating the heartbeat is to load the raid array sans the primary server's disk.
(09:11:07 PM) silas_bennett: Still degraded.
(09:11:34 PM) silas_bennett: If the secondary then dies, then you have lost the raid.
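A hedged sketch of the assembly step on whichever server currently owns the array, assuming each fileserver exports one AoE target; the shelf numbers are made up, but the /dev/etherd names follow the aoe driver's convention:

    import subprocess

    # Build a software RAID5 across the three AoE-exported disks. Only one
    # server may run the array at a time, per the scheme described above.
    subprocess.run(["mdadm", "--create", "/dev/md0",
                    "--level=5", "--raid-devices=3",
                    "/dev/etherd/e0.1",    # this server's export
                    "/dev/etherd/e1.1",    # second server's export
                    "/dev/etherd/e2.1"],   # third server's export
                   check=True)

On failover, the secondary would assemble the same array minus the dead member (mdadm --assemble with --run) and operate it degraded.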
(09:11:57 PM) silas_bennett: I would have to proof of concept this of course.
(09:12:14 PM) Liz Fong: has anyone else done this before?
(09:12:28 PM) Liz Fong: this is the same thing again with tested vs. untested solutions
(09:12:42 PM) silas_bennett: But it would immediately give you a striped, parity-protected, distributed disk.
(09:12:47 PM) Liz Fong: we know AFS works because it's off-the-shelf and a lot of people use it.
(09:12:52 PM) Liz Fong: true.
(09:13:17 PM) silas_bennett: I am not suggesting that we do this, I was only suggesting it as a clever idea that might warrant looking into.
(09:13:24 PM) Liz Fong: but yeah, this is just another reason I need to go ahead and buy machines :)
(09:13:30 PM) Liz Fong: so we can do this proof of concepting
(09:13:42 PM) Liz Fong: I'll probably schedule the machine buy for next week or so
(09:14:00 PM) silas_bennett: What boxes are you looking at? and what arch?
(09:17:08 PM) Liz Fong: I haven't even had time to look, is the problem.
(09:17:45 PM) Liz Fong: well, I'm thinking i386, at least to start, although x86_64/amd64 may be a promising upgrade path in the future when we can afford to purge the pukes
(09:18:06 PM) Liz Fong: so get amd64-supporting hardware
(09:18:10 PM) Liz Fong: but run it in 32-bit mode
(09:18:28 PM) Liz Fong: we need one or two specimen client machines. this we do not have on hand.
(09:19:00 PM) Liz Fong: we need to evaluate the fileservers
(09:21:18 PM) silas_bennett: Well it sounds like sticking a KNOPPIX disk in the 3 fileservers would be the first step. As far as servers go, do you want rackmount, or workstation form factor?
(09:22:00 PM) Liz Fong: I'm thinking there will be 6+ workstation form factor machines in the finished UGCS
(09:22:02 PM) silas_bennett: The cheapest solution obviously is to build them in workstation form factor from the ground up. That way we can also get the exact hardware we want.
(09:22:04 PM) Liz Fong: running X
(09:22:19 PM) Liz Fong: and that the collection of remote login machines will be rackmounted for density
(09:22:31 PM) Liz Fong: but that's flexible
(09:22:40 PM) Liz Fong: I mean, I'm budgeting $500-$1000 per client machine
(09:22:51 PM) Liz Fong: and $3000 or so for each server we buy
(09:23:01 PM) silas_bennett: I would say the best plan would be to build the boxes using:
(09:23:11 PM) silas_bennett: TYAN dual-proc motherboards
(09:23:40 PM) silas_bennett: AMD64 or Opteron (possibly dual core)
(09:23:40 PM) Liz Fong: can you write this stuff on the wiki?
(09:23:45 PM) Liz Fong: I have a ton of work due tomorrow
(09:23:54 PM) silas_bennett: Cheap ass video card.
(09:23:58 PM) Liz Fong: and the other admins should really be seeing the result of these conversations as well
(09:24:03 PM) silas_bennett: Okay I will write on wiki.
(09:24:09 PM) Liz Fong: cheap ass video on the login clients
(09:24:18 PM) silas_bennett: I will post this whole thread.
(09:24:18 PM) Liz Fong: might be nice to put nice cards on the X servers
(09:24:29 PM) silas_bennett: Well, if you think people need that.
(09:25:01 PM) silas_bennett: I didn't think people would need 3D acceleration on cluster machines.
(09:25:30 PM) silas_bennett: Can we scrounge up used hard disks?
(09:25:32 PM) Liz Fong: might be a nice feature to provide :)
(09:25:45 PM) Liz Fong: Not ones large enough for our needs, at least for fileservers
(09:25:50 PM) Liz Fong: for client machines... dunno
(09:25:51 PM) silas_bennett: Yeah, but that adds ~$200 to the price.
(09:26:05 PM) Liz Fong: that's not a big deal for ~6 machines.
(09:26:23 PM) silas_bennett: Spare disks: I was thinking for the clients, even ~10GB disks would be more than enough.
(09:26:30 PM) Liz Fong: agree.
(09:26:46 PM) Liz Fong: on the other hand, it would be nice being able to impose a uniform partitioning scheme
(09:26:55 PM) silas_bennett: Okay. I can go online and price components, and put the results on the wiki.
(09:27:00 PM) Liz Fong: well, the netboot could check to see if the partition scheme matches
(09:27:02 PM) Liz Fong: excellent, thanks!
(09:27:17 PM) Liz Fong: and if it doesn't, it fries the data on the machine after a confirmation and partitions it the way it wants it
(09:27:32 PM) Liz Fong: (1gb swap, remainder as AFS cache)
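A toy sketch of that netboot-time check: compare the disk's partition table against a canonical sfdisk dump and rewrite it on mismatch (the confirmation prompt is elided). The disk and layout-file paths are hypothetical:

    import subprocess

    DISK = "/dev/sda"
    CANONICAL = "/srv/netboot/layout.sfdisk"   # saved output of `sfdisk -d`

    current = subprocess.run(["sfdisk", "-d", DISK],
                             capture_output=True, text=True).stdout
    with open(CANONICAL) as f:
        wanted = f.read()

    if current != wanted:
        # Mismatch: fry the disk and impose the uniform scheme.
        subprocess.run(["sfdisk", DISK], input=wanted, text=True, check=True)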
(09:27:32 PM) silas_bennett: The partition scheme I had for the clients was: the entire disk is swap.
(09:27:48 PM) Liz Fong: AFS does stuff even nicer than swap
(09:27:53 PM) Liz Fong: it persists its cache between reboots
(09:27:58 PM) silas_bennett: Well then that would be worthwhile.
(09:28:41 PM) silas_bennett: Remember that if you RAM-cache aggressively on the server, then file access over the network is faster than the local disk.
(09:29:16 PM) silas_bennett: Okay, get to work!!!  ;)
(09:29:19 PM) Liz Fong: oh?
(09:29:23 PM) Liz Fong: I don't believe it.
(09:29:33 PM) Liz Fong: but I suppose you could be right.
(09:29:47 PM) Liz Fong: I guess it's a tradeoff between server load and client load
(09:29:53 PM) silas_bennett: Why not? Can you get 1Gb/sec throughput from a local disk?
(09:30:16 PM) Liz Fong: you're not getting 1Gb/sec though
(09:30:22 PM) Liz Fong: you're getting at most 100Mb/sec
(09:30:32 PM) Liz Fong: and that's assuming no overhead, and server providing perfect performance
(09:30:41 PM) silas_bennett: 100Mb/sec?
(09:30:45 PM) Liz Fong: because our big server machines will communicate using gigabit
(09:30:50 PM) Liz Fong: but the clients will be on smaller pipes
(09:31:06 PM) silas_bennett: All of the new clients you get will have 1Gb cards in them.
(09:31:08 PM) Liz Fong: has to be that way, we only have 1 gigabit cisco switch
(09:31:15 PM) Liz Fong: with 24 ports
(09:31:19 PM) Liz Fong: 2 to IMSS
(09:31:24 PM) Liz Fong: 4 for the routing machine
(09:31:42 PM) Liz Fong: two for uplink to the 10/100 switches on the gigabit uplink ports
(09:31:59 PM) Liz Fong: so only 16 usable ports.
(09:32:04 PM) silas_bennett: 8 so far. 3 more for the fileservers
(09:32:04 PM) silas_bennett: 11
(09:32:24 PM) silas_bennett: 1 lenin: 12
(09:32:58 PM) Liz Fong: mailserver, netboot server, build server, log server, database/svn server
(09:33:29 PM) Liz Fong: some of these may want multiple gigabit ports and aggregate them (I'm thinking the fileservers will do this)
(09:33:33 PM) silas_bennett: Mail, DNS, Kerberos, & LDAP can live on 100Mb; you would definitely want the database on Gb
(09:33:51 PM) Liz Fong: Mail cannot live on 100Mb.
(09:33:59 PM) Liz Fong: are you familiar with our mail system?
(09:34:04 PM) Liz Fong: it processes a tremendous quantity of mail.
(09:34:10 PM) silas_bennett: Okay.
(09:34:26 PM) Liz Fong: backup fileservers as well
(09:34:37 PM) Liz Fong: the point is, the majority of the clients will be connected to the satellite switches
(09:35:26 PM) silas_bennett: Well, again, if you are willing to spend $200 on video cards for 6 workstations, you could put a $20 or free video card in one of those, and buy a 48-port Gb switch.
(09:35:54 PM) Liz Fong: I doubt it'll work terribly well with the cisco switch we currently have...
(09:36:00 PM) Liz Fong: with respect to handling the vlan separation
(09:36:09 PM) silas_bennett: It will. I have used just such a combo.
(09:36:38 PM) Liz Fong: does it do dynamic VLANs
(09:36:43 PM) Liz Fong: by MAC address?
(09:36:53 PM) silas_bennett: GE had Catalyst switches running 3 VLANs, and we were uplinking to one of those VLANs using the Linksys switches.
(09:37:07 PM) silas_bennett: Yes. Dynamic, by MAC address.
(09:37:14 PM) Liz Fong: hmm.
(09:37:23 PM) silas_bennett: It is a Cisco product!
(09:37:51 PM) silas_bennett: The only thing it won't do is VLAN trunking, i.e. VTP.
(09:40:00 PM) Liz Fong: it shouldn't be a problem given that we'd only have ~4 switches running at a time.
(09:40:00 PM) Liz Fong: so we could manually reconfigure when needed
(09:40:01 PM) silas_bennett: You could even throw the 48-port switch in place of the 24-port switch, and use the 24-port switch specifically for the AFS network. That has the added benefit of physically segregating the file access. (More security.)
(09:40:29 PM) Liz Fong: mmm.
(09:40:35 PM) Liz Fong: well, we should think on it at any rate
(09:40:37 PM) Liz Fong: and I should work.
(09:40:43 PM) silas_bennett: Get to work!!! ;)