Specifically, I reran webminsetup and chose all the defaults, then:

# su - root
# cd /etc/webmin
# vi miniserv.users
    (copied the first bit of my entry from the password file, i.e. 'malachi:x:101', to the end of the file)
# vi webmin.acl
    (in vi: yyp to duplicate the root line, then change the second 'root' to 'malachi')
# exit
# svcadm restart webmin
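For reference, the end state of the two files is roughly this (my username; the webmin.acl module list is whatever the root line already had, shown here as a placeholder):

/etc/webmin/miniserv.users ends with:
    malachi:x:101
/etc/webmin/webmin.acl gains a line like:
    malachi: <same module list as the root line>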
I had installed the packages but could not get a ZFS folder to share via SMB. The service was in maintenance mode, and it kept complaining that the driver was not loaded into the kernel. I tried add_drv, but that complained that the driver was already added.
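For anyone hitting the same wall, a couple of sanity checks I could have started with (illustrative; the dataset name is a placeholder, and I'm assuming the standard smb/server FMRI):

# modinfo | grep smbsrv
    (shows whether the CIFS server module is actually loaded)
# svcs -xv smb/server
    (shows why the service is in maintenance mode)
# zfs set sharesmb=on rpool/export/myshare
    (this is all it should take to share a dataset once the service is healthy)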
Rebooting managed to enable the service. Next, when I ran the cifs-chkcfg script, it said I needed to run this (as root):

# echo other password required pam_smb_passwd.so.1 nowarn >> /etc/pam.conf
Running it again, it said:
/var/smb/smbpasswd does not exist or it is empty
passwd must be used to create CIFS-style password for local users
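The fix for that is to reset each local user's password after the pam_smb_passwd line is in place, which (re)generates the CIFS-style hash in /var/smb/smbpasswd:

# passwd malachi
    (re-enter the same password; the SMB hash gets written as a side effect)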
Hmm, but I don't really want to be in workgroup mode anyway. Let's follow these instructions, change that, and join the domain....
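The join itself should just be a one-liner with smbadm (the domain and user below are placeholders, not what I actually typed):

# smbadm join -u Administrator example.com

Every variation I tried ended the same way: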
'failed to find any domain controllers for __insert_anything_I_tried_here__'
It was a long and tedious process, but I got it working (on my work machine)... well, actually, it wasn't so bad once this was resolved [which took about 2-3 days to figure out]. While working on that, I also installed SUNWxvmhvm, as it appeared to be the only xVM package not yet installed according to the Package Manager.
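From the command line that would be (assuming the IPS pkg client; I actually used the GUI Package Manager):

# pkg install SUNWxvmhvm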
The first step was to download the ISOs. I ended up downloading both sol-10-u6-companion-ga.iso and sol-10-u6-ga1-x86-dvd.iso, though I haven't gotten around to using (or even looking at) the companion CD.
I played around a few times until I figured out how I wanted this done. I decided the original domain was going to be used as a starting point to clone from, and not as a domain to run; so some of the names seem a little overboard early on. I used this page for a lot of the details of what to do.
The next step was to create a volume:

root@eris:/rpool/vm/iso# zfs create -V 16G rpool/vm/sol-10-u6-ga1-x86.zvol

This creates /dev/zvol/dsk/rpool/vm/sol-10-u6-ga1-x86.zvol.
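A quick sanity check that the device node showed up (illustrative):

root@eris# ls -l /dev/zvol/dsk/rpool/vm/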
Using virt-manager to create the domain:

    name: sol_10_u6_ga1_x86
    virtualization: full
    iso: /rpool/vm/iso/sol-10-u6-ga1-x86-dvd.iso
    os type: Solaris
    os variant: Sun OpenSolaris
    disk: /dev/zvol/dsk/rpool/vm/sol-10-u6-ga1-x86.zvol
    nic: shared e1000g0
    mem (min): 1024
    mem (max): 1024
    VCPUs: 1
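For reference, the rough command-line equivalent with virt-install would look something like this (a sketch built from the settings above, untested; I did it through the GUI):

# virt-install --name sol_10_u6_ga1_x86 --hvm --ram 1024 --vcpus 1 \
    --cdrom /rpool/vm/iso/sol-10-u6-ga1-x86-dvd.iso \
    --file /dev/zvol/dsk/rpool/vm/sol-10-u6-ga1-x86.zvol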
During the installation, I chose Solaris and Option 4 (to allow for a ZFS root). Once it was installed, I rebooted and verified that it worked, then shut it down.

root@eris# zfs snapshot rpool/vm/sol-10-u6-ga1-x86.zvol@FreshInstall
root@eris# zfs list -t snapshot -r rpool/vm/sol-10-u6-ga1-x86.zvol | grep -v auto-snap

(the second command is just to verify that the snapshot was taken)
Logged into the domain (using virt-manager):

root@sol_10_u6_ga1_x86# sys-unconfig

This halts the domain, but through trial and error I found that I needed to hit a key to get it to start rebooting and THEN hit the shutdown button in xVM so that it is NOT running.

root@eris# zfs snapshot rpool/vm/sol-10-u6-ga1-x86.zvol@Unconfigured
root@eris# zfs clone rpool/vm/sol-10-u6-ga1-x86.zvol@Unconfigured rpool/vm/eris-vm1.zvol
root@eris# zfs snapshot rpool/vm/eris-vm1.zvol@Unconfigured
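The payoff of this layout is that every additional guest is just another cheap clone of the same @Unconfigured snapshot; a hypothetical second VM would be:

root@eris# zfs clone rpool/vm/sol-10-u6-ga1-x86.zvol@Unconfigured rpool/vm/eris-vm2.zvol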
The work machine is a Dell OptiPlex 755 with an E8400 CPU and BIOS A11. I couldn't figure out why both 'xm info' and 'virt-install' showed it as not having virtualization support. I tried turning it off and back on (in the BIOS), etc.
Turns out (thanks to this thread) it just required turning OFF 'Trusted Execution' in the BIOS.
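A quick way to tell whether the hypervisor can actually see HVM support is to look for hvm entries in the capabilities line (output shape is illustrative, from memory):

# xm info | grep xen_caps
    xen_caps : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64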
Doing 'svcs -xv' I saw that svc:/network/device-discovery/printers:snmp was in maintenance mode. I tried clearing/enabling it, but it just wasn't cooperating. Something about:
[ Nov 13 13:13:10 Executing start method ("/lib/svc/method/svc-network-discovery start snmp"). ]
/usr/bin/dbus-send --system --print-reply --dest=org.freedesktop.Hal --type=method_call /org/freedesktop/Hal/devices/network_attached org.freedesktop.Hal.Device.NetworkDiscovery.EnablePrinterScanningViaSNMP int32:60 string:public string:0.0.0.0
Error org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
So first I installed 'SUNWsmmgr' via the package manager, then:

root@eris:~# svcadm restart svc:/system/hal:default
root@eris:~# svcadm clear svc:/network/device-discovery/printers:snmp
root@eris:~# svcadm enable svc:/network/device-discovery/printers:snmp
That seemed to fix it.
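To confirm, checking the service state again should show it online (illustrative output):

root@eris:~# svcs svc:/network/device-discovery/printers:snmp
    STATE          STIME    FMRI
    online         13:20:05 svc:/network/device-discovery/printers:snmp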
As far as setting the printers up, it went much quicker using socket, the printer's IP, and the default port (as opposed to my last attempt using Samba and the printer's name)....
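With CUPS that boils down to something like this (printer name and IP are placeholders; 9100 is the usual JetDirect/socket port):

# lpadmin -p laserjet -E -v socket://192.168.1.50:9100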
Checking for non-recursive missed // snapshots
Checking for recursive missed // snapshots rpool
Last snapshot for svc:/system/filesystem/zfs/auto-snapshot:frequent taken on Thu Nov 13 11:30 2008
which was greater than the 15 minutes schedule. Taking snapshot now.
cannot create snapshot 'rpool/ROOT/opensolaris@zfs-auto-snap:frequent-2008-11-13-12:51': dataset is busy
no snapshots were created
Error: Unable to take recursive snapshots of rpool@zfs-auto-snap:frequent-2008-11-13-12:51.
Moving service svc:/system/filesystem/zfs/auto-snapshot:frequent to maintenance mode.
This problem is caused by the old (i.e. non-active) boot environments not being mounted when the service tries to snapshot them. You can't just 'svcadm clear' or 'svcadm enable' the service, because the snapshots will still fail.
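You can see the mismatch directly; the old boot environments show up unmounted (illustrative output):

# zfs list -o name,mounted -r rpool/ROOT
    NAME                       MOUNTED
    rpool/ROOT                 no
    rpool/ROOT/opensolaris     no
    rpool/ROOT/opensolaris-1   no
    rpool/ROOT/opensolaris-2   yes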
Based on suggestions I found everywhere (most recently this one), I did this:

# mkdir /BE
# zfs list
    (shows rpool/ROOT/opensolaris, rpool/ROOT/opensolaris-1, and the current rpool/ROOT/opensolaris-2)
# zfs set mountpoint=/BE/opensolaris rpool/ROOT/opensolaris
# zfs mount rpool/ROOT/opensolaris
# zfs unmount rpool/ROOT/opensolaris

(repeat the last three steps for opensolaris-1)
Now you can run 'svcs -xv' to get the list of services and restart them... In my case:

# svcadm clear svc:/system/filesystem/zfs/auto-snapshot:frequent
# svcadm enable svc:/system/filesystem/zfs/auto-snapshot:frequent
# svcadm clear svc:/system/filesystem/zfs/auto-snapshot:hourly
# svcadm enable svc:/system/filesystem/zfs/auto-snapshot:hourly
Still have to test whether it works post-reboot without doing this every time.