24 March 2007
Brett came by to help me get the server built yesterday. I have been waiting for Solaris 10 b60 (which has AMD-V support), but I am starting to seriously consider using b59 to install and then upgrading.
The only real problem with the build was that the Armor Extreme case had drive rails that got in the way of the Addonics drive cages... Brett managed to Dremel them out of the way.
On the left you will see 4 pictures... Yes, I resized it (Brett hates those 13MB photos), but you can still click on it to get a slightly larger photo.
The left images are with the doors open. The right images are with the doors closed.
The top images are with a flash (see detail) and the bottom without (see lights)...
Now to choose an installation medium :)
21 February 2007
Unfortunately, it doesn't appear to work. Both 2x7-segment LEDs say "88" and the rest of the lights are randomly flashing...
I would look up the error code, but the manual has nothing, the website has nothing, and I am not finding anything on Google. Guess I will have to talk with Belkin support tomorrow.
Good thing I have the laptop.
16 February 2007
14 February 2007
09 February 2007
05 February 2007
Asus L1N64-SLI WS
2x AMD FX-70
Thermaltake HardCano 13
2x Kingston KVR667D2E5K2/2G
Thermaltake 850Watt W0131
going to use an existing spare nVidia 6600GT
2x Hitachi Deskstar T7K500 250GB Serial ATA II (mirrored boot)
5x Hitachi Deskstar T7K500 250GB Serial ATA II (raidz2 data)
2x Addonics AE5RCS35NSA
NEC Black Floppy (memtest86, hitachi drive tool, etc etc)
going to use an existing dual layer burner
Belkin 1500VA/1000 joules UPS
We're also upgrading the KVM:
3x Belkin F1D9400-06 (for machines on the rack)
1x Belkin F1D9400-25 (for docking station on the desk)
Still need to grab a Cat6 cable from PCHCables when we get a chance to run out to Hillsboro.
Plan is to run OpenSolaris as Xen dom0, then host all Xen domU via locally shared ZFS NFS directory...
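In zpool terms, the plan for those drives would look something like this (the device names are placeholders -- the real c#t#d# names depend on how Solaris enumerates the controllers, and the bootable mirror assumes ZFS root actually works by the time I install):

# zpool create rpool mirror c0t0d0s0 c0t1d0s0
# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
# zfs create tank/domains
# zfs set sharenfs=rw tank/domains

The mirror covers the 2 Hitachi boot drives, raidz2 the other 5, and tank/domains is where the domU roots would live.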
01 February 2007
31 January 2007
- Installation was easy, but the first time through, it did not make the system bootable. I had to format all the drives to get it to quit presenting FreeBSD as a boot option as well.
- The first install, it tried to use c2d0, but the second install it used c4d0. Shouldn't this be consistently the same boot device?
- Even though the router has the NIC set to a specific IP, it defaulted to a different MAC address and thus a different IP. I can only assume that it is a virtual NIC.
- When I switch back and forth on the Belkin KVM, the mouse freaks out in Solaris -- even though it works in Linux and FreeBSD.
- When activating the remote desktop, via GNOME, it sets it to localhost and doesn't allow me to change it to use the server name or IP -- thus, can't avoid #4 that way. Might have to go in and edit the files by hand, but unlike Linux/FreeBSD, I am not too familiar with Solaris yet.
- When I went to do the Update via the GUI, it asked for my Sun account info (which I provided) then just said there was an error. Guess that won't fix #4 or #5.
- ZPOOL seemed to work fine -- but creating iSCSI, SMB and NFS targets (via CLI or GUI) only seems to work at the server end. No one else on the network can access it.
- Contrary to what most people have said, the GUI doesn't look so bad.
- I enabled Telnet in services, and that didn't seem to work either.
30 January 2007
Which is weird, since cases are usually around $20, the drives (250GB Samsung) were only $52, and the OS (NetBSD or Linux) is free...
I was looking at possibly making my own via mini-itx or something. Found iSCSI-SCSI bridges (only $1200)...
maybe it is time to make my own...? well, I would, but I really don't have the time to experiment right now... so I guess I once again have to get a larger server box up and running; then take my time designing my iSCSI solution (rotozip must like the enclosures)... And THEN design my nice small cool (as in temp) server....
In the meantime, got to figure out what to do... build new AMD-V machine (which isn't on Solaris HCL) or convert the last machine I bought (that has memory controller bugs and Asus lies all over it)
29 January 2007
25 January 2007
Then the Solaris dom0 sets these up as Raid-Z.
Solaris then exports a NFS root for each domain. In fact, we could also export CDROM images and such over NFS so that the various OSes could have access to them.
At this point we would either have a small grub floppy image that would do a network boot, or have Xen boot the domU directly over the network. In either case the plan is the same: have the OS run 'diskless' over NFS (with rw permissions)... It would probably be better if they had access to change which kernel grub points to... so what if we had a small local grub image that points to a remote NFS grub image that is local to that filesystem? Then they could change which kernel image, but not where it is...
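For the domU side, a minimal Xen config along these lines should do it (the filename, paths, and NFS server address here are all made up for illustration):

/etc/xen/example.cfg:
kernel = "/xen/kernels/example-vmlinuz"
memory = 256
name   = "example"
vif    = [ '' ]
root   = "/dev/nfs"
extra  = "nfsroot=10.0.0.1:/tank/domains/example ip=dhcp rw"

The nfsroot/ip options are the Linux-style NFS-root boot parameters; other guest OSes would need their own equivalent.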
Anyways, then, if we want to increase storage capacity, we add a drive to the network (iSCSI) and tell ZFS to add it to the pool... voila, all the domains have more space.
And if we want to back up a domain, we can take a snapshot and clone, etc.... I see us taking a full snapshot at say 12:01am on Monday, then an incremental snapshot every day; thus we can roll back up to a week... although we'd probably need to maintain 2 weeks if we want to restore 1 -- otherwise every Monday when we kill the primary full snapshot, all the incrementals become useless...
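One thing worth noting: ZFS snapshots are all cheap copy-on-write points, so the full-vs-incremental distinction only really matters if we ship them off-box with 'zfs send -i'. On-box, the rotation would just be (snapshot names are my own convention):

# zfs snapshot tank/domains@mon-full
# zfs snapshot tank/domains@tue
# zfs rollback -r tank/domains@tue

The -r on rollback destroys any snapshots newer than the one being rolled back to.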
And if there is a hardware failure, Raid-Z should autorecover and provide means for me to fix it.
And if we need a test domain, we should be able to snapshot/clone (although booting with the same network parameters could be a problem).
And we should be able to write a tool (web based) that they can run in their own domain to request that we revert back to a previous snapshot...
And if we need to, we can specify quotas or reservations per domain or even per directory...
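That part at least is a one-liner each (the domain name here is hypothetical):

# zfs set quota=50G tank/domains/example
# zfs set reservation=10G tank/domains/example

quota caps how big the domain can grow; reservation guarantees it space out of the pool even if a neighbor goes nuts.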
And although NFS should cause some overhead, since it is on the same physical server, I doubt it will even talk to the network...
And if we ever need to offload one of the domU to a different server, it should be extremely simple since we'd only need to move the Xen configuration and the tiny grub floppy image.
Overall I am liking this idea. Not sure how well it will work, but I guess I should start looking at hardware for it.
BTW: The Solaris dom0 needs redundancy too. I figure that what we would do is have a small mirrored root to boot Solaris -- but the dom0 needs to be the one in charge of the Raid-Z, so...
This link was showing how to remote boot FreeBSD from PXE/NFS... I *think* but am not sure that if we boot/run FreeBSD from NFS, it won't care if that filesystem gets larger... right?
24 January 2007
23 January 2007
* We do not want a repeat of the data wipeout we got a year ago... As such, if one domain gets hacked, it cannot take down the rest of the domains.
- I had been planning on using Jail to alleviate this problem, but at this point I think our best bet is Xen
- While many OSes support Xen domU, far fewer support Xen dom0.
* Also, if a domain gets hacked and wiped out, we need a backup to restore from.
- ZFS Snapshots (maybe full weekly and incremental daily) seems like a good solution for this
- SVN might be a good solution to this
- It might be good if there was a way to automate this (ie: allow the root user to do it instead of emailing me)
* We want to get disk redundancy, while keeping cost and maintenance low.
- RAID-1 gives us the redundancy, but at a fairly high cost. The current 1TB gives us less than 500GB this way.
- RAID-5 could be acceptable, as long as we could fix the system if the root partition were the one to screw up
- ZFS and Raid-Z seem like excellent options to provide the redundancy and automatic corrections
* Each domain should be able to have its own root user and OS.
- Jail could have provided for this, but seems very buggy ('man man' in a jail crashed the entire OS)
- Xen should allow for this fairly easily. As such, the base OS really doesn't do anything other than launch the domUs.
* If multiple OSs share a base OS, it would be nice (though not required) if we were able to not waste a lot of duplicated space
- Unionfs is ideal for this. Not sure how easy it is to use with Xen
- At some point, it has to be better to have a full copy (like if they upgraded every single app)
* It would be preferable if all OSes could run unmodified, thus allowing us greater flexibility and choices
- Intel VT or AMD-V technology. If we are planning on doing any RSA work on any domain, it can't be the Intel option. So probably an Opteron 2xxx (or 2).
* If we run low on drive space, it should be fairly simple to add more
- ZFS is supposed to allow for this with 'zpool add'
- We *could* do separate drives for each domain, but that would waste a lot of drive space
- iSCSI might be a better option than internal drives for this. Instead of running out of hard drive bays, we'd just have to worry about running out of ethernet ports -- which realistically, is much easier to come up with more of. ** SEE BELOW
** iSCSI Thoughts:
- IDEALLY, I would have a set of cheap 250GB or 500GB drives that host themselves as iSCSI Targets. Then, adding new drives is literally adding another drive to the network.
- We could, in theory, run Solaris or NetBSD to easily provide iSCSI Targets to the network - but then we have to wonder why we are creating a separate box to host Xen
- If we have multiple iSCSI Targets, which our Solaris(?) box were to RAID-Z together, we could also have extra iSCSI Targets that VMWare and/or other hardware could use directly (even Windows)
- If we have multiple iSCSI Targets, which our Solaris(?) box were to RAID-Z together, the Solaris box could provide the pool as an iSCSI Target to other devices/OSes
- If we were to have Solaris running RAID-Z directly on the iSCSI Target (ie: providing ZFS pools as iSCSI Targets), then separate Xen domUs could point to these iSCSI Targets for their primary data storage and POTENTIALLY get the ZFS benefits regardless of their OS (snapshots, error-correction [at least from hardware errors], etc)
- In theory, if we could come up with mini embeddable systems that could boot from iSCSI Targets, then we could do away with Xen entirely and just have a little device on the network for each domain... this might be overkill
- Note: Can Solaris boot off iSCSI?
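If it pans out, the Solaris side of hosting a target should just be a zvol plus the shareiscsi property that showed up in recent builds (sizes and names here are invented):

# zfs create -V 100g tank/vols/example
# zfs set shareiscsi=on tank/vols/example
# iscsitadm list target

iscsitadm should then show the target IQN for initiators to point at. Whether Solaris itself can boot off iSCSI is still the open question.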
22 January 2007
21 January 2007
20 January 2007
At first, I used 3 SCSI hard drives, but could not figure out for the life of me what the names were... Kept getting I/O errors when I ran tools to find out.
Redid it as 3 IDE drives (1GB each), cuz the CDRom took one of the slots.
AVAILABLE DISK SELECTIONS:
# zpool create mypool raidz c0d0 c0d1 c1d1
# zpool list
NAME     SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
mypool   2.95G  157K   2.95G    0%   ONLINE   -
So far so good. It doesn't have Xen support yet, and I don't really feel like trying it right now since it is a LiveCD and all changes are lost when I reboot the vm -- maybe later.
18 January 2007
This blog has some various experiences and links of its own. Actually, the next two links here were obtained there.
ZFS Best Practices Guide
This page suggests using raidz with 3 disks and raidz2 with 5 disks.
ZFS Management and Troubleshooting
Importantly, per the top of that page, only Solaris 10 releases from 6/2006 onward have ZFS. Also, they recommend 3-9 disks for Raid-Z pools. There is also info there on how to replace a drive. It says that Root pools (bootable) are not yet available as of 6/2006 -- but that once they are, they should be kept in a separate pool from the other data.
That last link also provided a link to:
Solaris ZFS Administration Guide
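For my own notes, the drive-replacement procedure from that guide boils down to (device names are placeholders):

# zpool replace mypool c1t2d0 c1t5d0
# zpool status -x

status -x reports only pools with problems, so "all pools are healthy" means the resilver finished.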
16 January 2007
Hi Malachi,
Thank you for visiting the web chat forum on the Sun website yesterday, when you expressed interest in the Try and Buy promotion being currently run by Sun.
You mentioned that you are deciding which machine you will trial on the promotion - excellent! You will see the full list of products on the promotion at www.sun.com/tryandbuy
You can email me with any questions that you may have, or you can visit our team at the web chat forum. Here's some additional information on the Try and Buy promotion, which you may find useful:
When you receive the Server on the Try and Buy promotion you will also receive a Welcome Pack.
Inside the Welcome Pack you will find-
1. A Quick Guide to Installation.
2. Access to Tuning and Optimization documentation.
3. Access to an online portal and forum for system and application tuning hints and tips.
4. Access to Sun's performance engineers.
5. All of the latest patches and configuration files for the system on the portal.
Additional options for purchase are available for system ready and business ready services.
The Solaris 10 Operating System and other software provided with the system is covered under the Warranty support during the Try and Buy period.
Tel: + 353 (0)599136768
My response:
I am debating the Sun Fire X4200 or the Sun Ultra 40 Workstation. Perhaps it would be better for me to explain the intended usage and get your feedback.
First, a little background. We had a FreeBSD box running for about 4 years without reboot. As is probably obvious from that statement, we were behind on the security updates. While we were hosting multiple domains, they were not in Xen or Jail or anything. On Christmas eve a year ago, a hacker from a Polish cable ISP hacked in through one of the user accounts and wiped out the entire system. Because of this, I decided this time that I wanted to ensure that the various domains stay secure even if one is compromised.
We bought the latest top-of-the-line Asus, AMD, etc. Unfortunately, Asus lied about the capabilities of the board. They said it was capable of RAID-5 on SATAII, but in actuality it is capable of RAID-5 OR SATAII. Then we installed FreeBSD-CURRENT, and found out it doesn't support RAID-5 yet. Needless to say, it has been a hassle.
So we are debating buying a Sun server and running Solaris on it. My expectation is that there should not be any driver compatibility problems at that point.
So long story not so short -- we are looking at trying to run a bootable RAID-Z (yes, I know that requires some tweaking, but I want to ensure that the boot is also recoverable) with a Xen Dom0. On top of that, we want to allow each domain to run its own OS (whichever they choose, thus prefer AMD-V) as a DomU. Based on this, I think that each DomU would get the advantage of Raid-Z on the underlying filesystem, even if they didn't know about it directly, and regardless of which OS they are running.
Do you have any thoughts, concerns, or questions?
First of all, sorry to hear about the experiences you've had with the FreeBSD and Asus! Not good.
Secondly, one of the reasons for the Try and Buy promotion is that you can test the capabilities of each machine prior to actually buying it. I'm sure that you will have a positive experience and benefit from this.
I would recommend that we assign a server specialist to you, that can be available prior to your choice of machine, and also be available for you right through the trial process to assist you in your testing, additional components, adjustments in configurations, etc. Would you have a contact number that we can call you on?
So the rest will most likely be via phone.
15 January 2007
You have been connected to Brenda Byrne.
Brenda Byrne: Welcome to Sun Microsystems. How may I help you?
Malachi de AElfweald: Try 2.
Brenda Byrne: Sorry about you getting pushed off a moment ago
Malachi de AElfweald: Hi Brenda. I have a question about Try and Buy products.
Brenda Byrne: Sure - go ahead!
Malachi de AElfweald: I am looking at replacing my server with one to run Solaris (or OpenSolaris), Raid-Z and Xen
Brenda Byrne: ok
Malachi de AElfweald: I assume that the Try and Buy products would all be 100% compatible with Solaris and OpenSolaris, correct?
Brenda Byrne: yes, of course
Malachi de AElfweald: How does the Try and Buy program work?
Brenda Byrne: You can simply receive one of the listed Sun machines on trial, no questions asked, for 60 days, with shipping costs provided by Sun, and at the end of the 60 days, or prior to then, make a choice to keep the machine....or buy it!
Brenda Byrne: Would you be trialing a machine on behalf of a company, or in a personal capacity?
Malachi de AElfweald: Company
Malachi de AElfweald: And do they come with the OS preloaded, or do I install it?
Malachi de AElfweald: Also, is there a subscription fee, or just a one-time cost to purchase?
Brenda Byrne: OS is preloaded
Brenda Byrne: there is a one-time cost to purchase the machines
Brenda Byrne: if I understand you correctly
Malachi de AElfweald: so, for example, it would come with... Solaris10? no subscription/support costs for it?
Brenda Byrne: Yes - they come with Solaris 10 - which is a free operating system anyway.
Brenda Byrne: But no, Solaris 10 customer support does not come provided with the machine
Malachi de AElfweald: available but not required?
Brenda Byrne: There is a fee for signing up to customer support for Solaris 10 (which is not required)
Brenda Byrne: Solaris 10 customer support fees start at $10 per month
Brenda Byrne: What kind of a machine are you thinking of trialing?
Malachi de AElfweald: kk. and is there a way to specify how we want it configured? ie: bootable RAID-Z
Brenda Byrne: yes
Malachi de AElfweald: ok. it all sounds pretty good. I guess at this point I just need to figure out which machine I want.
Malachi de AElfweald: is there anything else I should know?
Malachi de AElfweald: any requirements to qualify or anything?
Brenda Byrne: Nope...except that it is on behalf of your company, and is delivered to your company address
Brenda Byrne: What kind of applications would you test on it?
Malachi de AElfweald: That's fine.
Malachi de AElfweald: A year ago, a Polish hacker wiped out our FreeBSD box. everything. multiple domains. I told the FBI, but they didn't even ask what the IP address was.
Malachi de AElfweald: So, I am looking at replacing that server with a new one.
Malachi de AElfweald: Expected usage is that it is going to be a Xen Dom0 and run each domain as a DomU.
Malachi de AElfweald: Primarily, I am a java developer - but that won't really matter since each domain will have its own OS
Malachi de AElfweald: Ok, I think I have all the information I need. Just need to try to decide which machine to go with.
Malachi de AElfweald: Thank you for your assistance.
Brenda Byrne: no problem.
Your session has ended. You may now close this window.