Ok, what if we start by plugging raw drives into the network, fronting them as iSCSI targets (possibly using embedded NetBSD)...
Then the Solaris dom0 assembles these into a RAID-Z pool.
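The pool setup might look something like this (device names are hypothetical; they'd be whatever the iSCSI LUNs show up as on the dom0):

```shell
# Build a RAID-Z pool out of the network drives (names hypothetical)
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Check that all devices are online
zpool status tank
```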
Solaris then exports an NFS root for each domain. In fact, we could also export CDROM images and such over NFS so that the various OSes could have access to them.
At this point we would either have a small grub floppy image that would do network boot, or we have Xen boot the domU directly over the network. In either case the plan is the same: have the OS run 'diskless' over NFS (with rw permissions)... It is probably better if they have access to change which kernel grub points to... so what if we have a small local grub image that chains to a grub config stored on that domain's own NFS filesystem... then they could change which kernel image boots, but not where it lives...
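The chaining idea could be sketched as a tiny GRUB (legacy) menu.lst on the local boot image; everything here is hypothetical, and it assumes a GRUB build with network/NFS support so that `configfile` can load the domain-local config:

```
# menu.lst on the small local grub image (paths hypothetical)
default 0
timeout 3

title Chain to domain-local grub config
    root (nd)
    configfile /boot/grub/menu.lst
```

The domain's own /boot/grub/menu.lst (on its NFS root) would then list the kernels; the admin could edit that file freely, but the local image controls where it is loaded from.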
Anyways, then, if we want to increase storage capacity, we add a drive to the network (iSCSI) and tell ZFS to add it to the pool... voila, all the domains have more space.
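A sketch of the expansion step, with hypothetical IQN, address, and device names. One caveat I'm fairly sure of: an existing raidz vdev can't be grown by a single disk; you add another vdev (e.g. another raidz group) to the pool instead, so growing might mean adding drives in sets:

```shell
# Make the Solaris iSCSI initiator see the new target (IQN/address hypothetical)
iscsiadm add static-config iqn.2006-01.org.example:disk5,192.168.1.50:3260

# Add capacity as another raidz vdev (a single bare disk would add an
# unprotected stripe, which zpool will warn about)
zpool add tank raidz c2t4d0 c2t5d0 c2t6d0
```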
And if we want to back up a domain, we can take a snapshot and clone, etc.... I see us taking a snapshot at say 12:01am on Monday, then an incremental snapshot every day; thus we can roll back up to a week... although we probably need to maintain 2 weeks if we want to restore 1 -- otherwise every Monday when we kill the primary full snapshot, all the incrementals become useless...
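The weekly rotation could be sketched like this (dataset and file names hypothetical):

```shell
# Monday full snapshot, then dailies
zfs snapshot tank/domains/web@mon
zfs snapshot tank/domains/web@tue   # ...and so on each day

# If shipping backups off-pool: full stream Monday, incrementals after
zfs send tank/domains/web@mon > /backup/web-mon.zfs
zfs send -i tank/domains/web@mon tank/domains/web@tue > /backup/web-tue.zfs
```

One note: the full-vs-incremental dependency only applies to `zfs send` streams stored elsewhere. Snapshots kept in the pool itself are independent -- destroying Monday's snapshot does not invalidate Tuesday's -- so the two-week retention concern mainly matters for the off-pool send streams.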
And if there is a hardware failure, RAID-Z should recover automatically and give me the means to fix it.
And if we need a test domain, we should be able to snapshot/clone (although booting with the same network parameters could be a problem).
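The test-domain case is just a clone off a snapshot (names hypothetical):

```shell
# Snapshot the live domain, then clone it as a writable test copy
zfs snapshot tank/domains/web@test
zfs clone tank/domains/web@test tank/domains/web-test
```

The clone shares unchanged blocks with the origin, so it costs almost nothing until the test domain starts diverging -- though, as noted, the cloned OS would come up with the original's network configuration.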
And we should be able to write a tool (web based) that they can run in their own domain to request that we revert back to a previous snapshot...
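Under the hood that tool would presumably wrap something like (dataset/snapshot names hypothetical):

```shell
# Revert the domain's filesystem to an earlier snapshot; -r destroys any
# snapshots newer than the target, which rollback otherwise refuses to cross
zfs rollback -r tank/domains/web@mon
```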
And if we need to, we can specify quotas or reservations per domain, or even per directory...
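For example (dataset names and sizes hypothetical); the per-directory case works by making the directory its own dataset:

```shell
# Cap a domain at 10G, and guarantee it at least 2G
zfs set quota=10G tank/domains/web
zfs set reservation=2G tank/domains/web

# Per-directory: create a child dataset and cap it separately
zfs create tank/domains/web/logs
zfs set quota=1G tank/domains/web/logs
```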
And although NFS should cause some overhead, since it is on the same physical server, I doubt it will even talk to the network...
And if we ever need to offload one of the domUs to a different server, it should be extremely simple since we'd only need to move the Xen configuration and the tiny grub floppy image.
Overall I am liking this idea. Not sure how well it will work, but I guess I should start looking at hardware for it.
BTW: The Solaris dom0 needs redundancy too. I figure we would have a small mirrored root to boot Solaris -- but the dom0 needs to be the one in charge of the RAID-Z, so...
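Assuming a Solaris build that supports booting from ZFS, the small mirrored root could itself be a two-disk ZFS mirror on local disks (device names hypothetical); on older builds it would have to be a UFS root mirrored with SVM instead:

```shell
# Local two-way mirror for the dom0 root, separate from the iSCSI data pool
zpool create rpool mirror c0t0d0s0 c0t1d0s0
```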
Hmm, how would things like chmod be affected by this? I mean, if someone marks a directory as being owned by personA, but the ZFS server hosting the NFS doesn't have a personA... I need to figure out how this plays out.
Swap partitions probably need to be done differently.
Regarding the ACLs... maybe that is what Zones are for.
Quote from here: Yes, and that was where I was going when asking for the share options.
I'm looking in the ZFS Administration Guide, page 71.
zfs set sharenfs=rw,root=host users/home/ormandj
where host is a ':'-separated list of hosts.
You could also do:
zfs set sharenfs=rw,anon=0 users/home/ormandj
to allow root access to all hosts.
Look at share_nfs(1M) for more help on what you can pass in here. Note
that sharenfs accepts whatever access modifiers share_nfs can accept.
That quote appears to be from the ZFS admin guide, and the guide refers to share_nfs... it appears to say that we could set rw=serveris.eoti.org as an ACL of sorts.
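If share_nfs's rw=access_list option really does pass through sharenfs as the quote suggests, that would look something like (dataset name hypothetical; the hostname is from the quote above):

```shell
# Limit read-write access to a single client host
zfs set sharenfs=rw=serveris.eoti.org tank/domains/web

# And, combining with the root= option from the quoted example
zfs set sharenfs=rw=serveris.eoti.org,root=serveris.eoti.org tank/domains/web
```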