17 December 2011

Canon PIXMA MX860 and Ubuntu 11.10

When I upgraded the laptop to Ubuntu 11.10, the Canon inkjet quit working.  I tried reinstalling the drivers, but dpkg refused to install them (even with --force-architecture).  I then spent a few hours trying to build the drivers myself, but it was just one problem after another.

Eventually, I found this:
sudo add-apt-repository ppa:michael-gruz/canon
sudo apt-get update
sudo apt-get install cnijfilter-mx860series
sudo apt-get install scangearmp-mx860series
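To confirm CUPS picked up the printer before sending a real job (a sketch; the queue name is whatever lpstat reports, not necessarily this one):
lpstat -p
lpr -P MX860 test.txt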

The good news: it prints again. The bad news: no landscape or double-sided printing.

25 August 2011

Share your git hooks

At work, we are using git for one of our projects.  One problem we ran into is that people (myself included) regularly forget to put the branch name in their commit messages.  Why is that a big deal?  Once merged back to master, you see a lot of messages that give no indication of what they were for (for example, 'added .gitignore').  I'd like to simplify things so that no one has to remember.  No need to reject the commit - just fix the message...

These steps gloss over ACLs, which server is which, and so on - treat them as pseudocode...

Step 1: Setup a repo to hold the new hooks
server:/srv $ git init --bare githooks.git
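(An aside: if you'd rather not touch the system template directory used in Step 2 below, recent versions of git let each user point at their own template path via init.templatedir. A sketch - the path is my own choice, and 'server:' assumes the hooks repo is reachable over ssh:)

client:~ $ git config --global init.templatedir ~/.git-templates
client:~ $ git clone server:/srv/githooks.git ~/.git-templates/hooks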

Step 2: Add a new hook
server:/usr/share/git-core/templates $ rm -rf hooks
server:/usr/share/git-core/templates $ git clone /srv/githooks.git hooks
server:/usr/share/git-core/templates $ cd hooks
server:/usr/share/git-core/templates/hooks $ nano commit-msg

#!/bin/bash
# commit-msg hook: prepend the current branch name to the commit message.
# $1 is the path of the file that holds the proposed message.

branch_name=$(git symbolic-ref -q HEAD)
branch_name=${branch_name##refs/heads/}
branch_name=${branch_name:-HEAD}    # detached HEAD? fall back to "HEAD"

OLD=$(cat "$1")
NEW="[$branch_name] $OLD"
echo "$NEW" > "$1"
exit 0

server:/usr/share/git-core/templates/hooks $ chmod a+x commit-msg
server:/usr/share/git-core/templates/hooks $ git add commit-msg
server:/usr/share/git-core/templates/hooks $ git commit -a -s -m "added commit-msg"
server:/usr/share/git-core/templates/hooks $ git push origin master
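One gotcha: templates are only copied when a repository is created, so existing clones won't pick up the hook automatically. For those, re-cloning the hooks into place should work (a sketch; again assuming the hooks repo is reachable from the client):

client:~/work/oldrepo $ rm -rf .git/hooks
client:~/work/oldrepo $ git clone server:/srv/githooks.git .git/hooks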

Step 3: User clones a new repo....
client:~/work $ git clone git://somerepo.git repo
client:~/work $ cd repo
client:~/work/repo $ echo "something" > test.txt
client:~/work/repo $ git add test.txt
client:~/work/repo $ git commit -a -s -m "added something"

Step 4: Checking that it worked...
client:~/work/repo $ git log

commit 07ed146319a2e45ea97b2ecaa4a1ea8d365b6b01
Author: Malachi de AElfweald <malachid@gmail.com>
Date:   Thu Aug 25 12:59:58 2011 -0700

    [master] added something
   
    Signed-off-by: Malachi de AElfweald <malachid@gmail.com>

Step 5: Updating the hook "server side"

server:/usr/share/git-core/templates/hooks $ nano commit-msg

#!/bin/bash
# commit-msg hook, take two: use a "branch:" prefix instead of "[branch]".

branch_name=$(git symbolic-ref -q HEAD)
branch_name=${branch_name##refs/heads/}
branch_name=${branch_name:-HEAD}    # detached HEAD? fall back to "HEAD"

OLD=$(cat "$1")
NEW="$branch_name: $OLD"
echo "$NEW" > "$1"
exit 0

server:/usr/share/git-core/templates/hooks $ git commit -a -s -m "changed commit-msg"
server:/usr/share/git-core/templates/hooks $ git push

Step 6: Updating the hook "client side"

Because the template copy included the hooks clone's .git directory, each repo's .git/hooks is itself a clone of githooks.git, so updating is just a pull:

client:~/work/repo $ cd .git/hooks
client:~/work/repo/.git/hooks $ git pull

Step 7: Retest it...

client:~/work/repo/.git/hooks $ cd ../..
client:~/work/repo $ echo "something else" > test.txt
client:~/work/repo $ git commit -a -s -m "changed something else"
client:~/work/repo $ git log

commit 7f381725dd2663728f11f08d08ed0c1d83608047
Author: Malachi de AElfweald <malachid@gmail.com>
Date:   Thu Aug 25 13:33:43 2011 -0700

    master: changed something else
   
    Signed-off-by: Malachi de AElfweald <malachid@gmail.com>



13 August 2011

Native ZFS for Linux

I miss OpenSolaris.  At work, the only options are Windows and Linux.  The other day, I was rm -rf'ing about 10 different copies of the Android tree, and it reminded me how much quicker it was to wipe out a ZFS filesystem than to rm -rf one "little" directory.  After many hours of waiting to get my system back, I decided to check on the state of ZFS on Linux again.

Last year, I had tried the FUSE port.  This time I decided to try zfsonlinux.  This comment on their page had me excited pretty early:
There is a lot of very good ZFS documentation already out there. Almost all of it should be applicable to this implementation because we have tried to conform to the Solaris behavior as much as possible.

For installation, I followed the steps listed here:
sudo add-apt-repository ppa:dajhorn/zfs
sudo apt-get update
sudo apt-get install ubuntu-zfs

Installation went pretty smoothly.  Using it afterwards was a bit trickier.  The module wasn't loaded, so I tried to follow the instructions to do an insmod:

malachi@onyx:~$ sudo insmod /lib/modules/2.6.38-10-generic/updates/dkms/zfs.ko
insmod: error inserting '/lib/modules/2.6.38-10-generic/updates/dkms/zfs.ko': -1 Unknown symbol in module
Well, that's not good.  Luckily, there is a really simple fix: modprobe resolves module dependencies (the zfs module depends on spl), which insmod does not:
malachi@onyx:~$ sudo modprobe zfs
And voila.  It was loaded until the next reboot.  I didn't have a spare drive in the laptop to convert to a root ZFS drive, so I didn't worry about doing a grub setup... but I would like the module to auto-load on boot.  It turns out that's pretty simple: add a line reading 'zfs' to /etc/modules, and while you are at it, modify /etc/default/zfs to enable automatic (un)mounting.
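For the /etc/modules half, something like this works (a sketch; tee -a just appends the line):

malachi@onyx:~$ echo zfs | sudo tee -a /etc/modules

And here is /etc/default/zfs after editing: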

malachi@onyx:~$ cat /etc/default/zfs
# Automatically run `zfs mount -a` at system startup if set non-empty.
ZFS_MOUNT='yes'
#ZFS_MOUNT=''

# Automatically run `zfs unmount -a` at system shutdown if set non-empty.
ZFS_UNMOUNT='yes'
#ZFS_UNMOUNT=''
Currently, /dev/zfs is owned by root:root with permissions set to 600, which means that even running 'zfs list' or 'zpool status' requires root.  While I understand the logic there, I find it extremely annoying.  It may not go anywhere, but I submitted an enhancement request to allow the admin group read/write access.  I realize that there are ACL mechanisms, but I do not believe those apply to the /dev/zfs special character device.


So, with no spare disk, how would I test it?  While I knew performance would suffer, I decided to do my testing with file-backed ZFS.  I didn't want to keep creating the backing files by hand, so I wrote a script:

malachi@onyx:~$ cat /mnt/zfs-disks/createZFSFile
#!/bin/bash
# Create a file to serve as a ZFS backing store.
# Usage: createZFSFile <name> <size-in-GiB>

FILE=$1
GIGS=$2
dd if=/dev/zero of="/mnt/zfs-disks/$FILE.zfs" bs=1G count="$GIGS"
ls -lh "/mnt/zfs-disks/$FILE.zfs"
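As an aside, if the blocks don't need to be preallocated, a sparse file is near-instant to create and ZFS will happily use it (a sketch; note the pool can then outgrow the underlying disk, so keep an eye on free space):

malachi@onyx:~$ truncate -s 10G /mnt/zfs-disks/disk5.zfs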
Then created a few backing disks....

malachi@onyx:~$ /mnt/zfs-disks/createZFSFile disk1 10
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 114.092 s, 94.1 MB/s
-rw-r--r-- 1 malachi malachi 10G 2011-08-12 15:41 /mnt/zfs-disks/disk1.zfs
Repeat that for disk2, 3, and 4 (or loop it, as sketched below)...  For this testing, especially since it was file-backed, I decided not to use raidz2 like I normally would, so that I could easily add more space to the pool if necessary.
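The loop, for reference (assuming the script path from above):

malachi@onyx:~$ for i in 2 3 4; do /mnt/zfs-disks/createZFSFile disk$i 10; done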

malachi@onyx:~$ sudo zpool create pool /mnt/zfs-disks/disk1.zfs
I then created a filesystem in the pool to work from:

malachi@onyx:~$ sudo zfs create pool/android
malachi@onyx:~$ sudo zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
pool           128K  9.78G    30K  /pool
pool/android    30K  9.78G    30K  /pool/android
Now to test adding some disks to the pool...

malachi@onyx:~$ sudo zpool add pool /mnt/zfs-disks/disk2.zfs
malachi@onyx:~$ sudo zpool add pool /mnt/zfs-disks/disk3.zfs
malachi@onyx:~$ sudo zpool add pool /mnt/zfs-disks/disk4.zfs

malachi@onyx:~$ sudo zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
pool           140K  39.1G    31K  /pool
pool/android    30K  39.1G    30K  /pool/android

malachi@onyx:~$ sudo zpool status
  pool: pool
 state: ONLINE
 scan: none requested
config:

    NAME                        STATE     READ WRITE CKSUM
    pool                        ONLINE       0     0     0
      /mnt/zfs-disks/disk1.zfs  ONLINE       0     0     0
      /mnt/zfs-disks/disk2.zfs  ONLINE       0     0     0
      /mnt/zfs-disks/disk3.zfs  ONLINE       0     0     0
      /mnt/zfs-disks/disk4.zfs  ONLINE       0     0     0

errors: No known data errors
That all looks good.  I downloaded the Android tree, built it, and then tried some timings...

root@onyx:~# annotate-output zfs snapshot pool/android@freshbuild
21:55:14 I: Started zfs snapshot pool/android@freshbuild
21:55:14 I: Finished with exitcode 0

root@onyx:~# zfs list -t all
NAME                      USED  AVAIL  REFER  MOUNTPOINT
pool                     10.1G  29.1G    31K  /pool
pool/android             10.1G  29.1G  10.1G  /pool/android
pool/android@freshbuild      0      -  10.1G  -

root@onyx:~# annotate-output zfs destroy pool/android@freshbuild
21:56:00 I: Started zfs destroy pool/android@freshbuild
21:56:00 I: Finished with exitcode 0

root@onyx:~# zfs list -t all
NAME           USED  AVAIL  REFER  MOUNTPOINT
pool          10.1G  29.1G    31K  /pool
pool/android  10.1G  29.1G  10.1G  /pool/android


root@onyx:~# annotate-output zfs destroy pool/android
21:59:54 I: Started zfs destroy pool/android
22:00:28 I: Finished with exitcode 0

root@onyx:~# zfs list -t all
NAME   USED  AVAIL  REFER  MOUNTPOINT
pool  1.52M  39.1G    30K  /pool


root@onyx:~# annotate-output zfs create pool/android
22:01:05 I: Started zfs create pool/android
22:01:06 I: Finished with exitcode 0

root@onyx:~# zfs list -t all
NAME           USED  AVAIL  REFER  MOUNTPOINT
pool          1.55M  39.1G    30K  /pool
pool/android    30K  39.1G    30K  /pool/android
Overall, I am really happy with these timings, especially since everything was running on file-backed ZFS.

During all of my testing (I tried it 3-4 times) there was one gotcha.  During my first test, git locked up while downloading Android.  I'm not sure what happened, but I couldn't even kill -9 it.  Even rebooting wouldn't work, since shutdown hung waiting for that process to die.  Since I had to hard power-cycle the machine, I am not surprised that the ZFS state got corrupted: when I rebooted, 'zfs list' and 'zpool status' locked up.  I was able to kill -9 the zfs list, but not the zpool status.  After I deleted /etc/zfs/zpool.cache and rebooted, the system no longer knew about the pool, but it also no longer locked up.  I was able to recreate everything and did not see the error happen again.  Maybe there was a better way to fix the problem.

I think for the next Linux system I build, I may try a root zfs partition using zfsonlinux.  Even with the one (serious) issue, I was pretty happy with the results of the testing.




Update: Thanks to aarcane for finding a solution that lets you give users access to the zfs/zpool commands.  Recapped here:

root@onyx:~# nano /etc/udev/rules.d/91-zfs-permissions.rules

#Use this to add a group and more permissive permissions for zfs,
#so that you don't always need to run it as root. Beware: non-root users
#can do nearly EVERYTHING, including (but not limited to) destroying
#volumes and deleting datasets. They CANNOT mount datasets, create new
#volumes, export datasets via NFS, or do other things that require root
#permissions outside of ZFS.
ACTION=="add", KERNEL=="zfs", MODE="0660", GROUP="zfs"
 
root@onyx:~# groupadd zfs
root@onyx:~# gpasswd -a username zfs 

Now, reboot.  Upon login, the specified user should be able to run zfs list and zpool status without using sudo.
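To double-check that the rule applied, the device node should now show group zfs and mode crw-rw----:

malachi@onyx:~$ ls -l /dev/zfs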

22 March 2011

Open Indiana

Saw this... maybe I'll do this to the home server.