====== Ubuntu 9.04 (Jaunty Jackalope) and 9.10 (Karmic Koala) ======
(this howto is based on the Debian Etch iSCSI howto with some major changes)
===== Introduction =====
This howto shows how to make an existing installation of a recent *buntu (Ubuntu, Kubuntu, Edubuntu, Xubuntu and so on) bootable over the network using iSCSI.

I will be using the great gPXE to boot the OS over the network later, and a DHCP (v3) server to easily propagate settings to the gPXE client.
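As a rough sketch of that DHCP side (the host name, MAC address and client IP below are made up; the target values are the ones from the GRUB example further down), a dhcpd host entry that hands the iSCSI target to gPXE via the root-path option could look like this:

<code>
host iscsi-client {
  # made-up MAC and address of the booting box -- replace with your own
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.234.10;
  # gPXE reads the target from root-path: iscsi:<server>:<protocol>:<port>:<LUN>:<targetname>
  option root-path "iscsi:192.168.234.1::::iqn.2009-04.Arbeitsgruppe.Commander1024:general.ubuntu";
}
</code>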

If you are experienced with Ubuntu and initramfs / initrd creation you can install Ubuntu directly onto the iSCSI target using the desktop (i386 or amd64) install CD (more on that later). The alternate install disk is not capable of installing additional packages at runtime because of the limitations of the busybox environment.
  
  * there is a working DHCP server (although you will only need it for comfort; configuring gPXE manually also works)
  
===== Preparing the initrd =====
The Debian howto recommended compiling open-iscsi yourself, as iscsistart and fwparam_ibft were not available in the Debian packages at that time. This is not necessary for Ubuntu, for several reasons:
  
  * fwparam_ibft is no longer a (fully featured) component of either the open-iscsi semi-stable release (which does not even support kernel 2.6.28+) or the development git source tree. It has been merged into the ''iscsistart'' tool in the same package, and its feature set has been reduced.
  * iscsistart is available via [[http://packages.ubuntu.com/jaunty/open-iscsi|open-iscsi]] from the Ubuntu repositories.
  * the initramfs scripts are even prepared for iSCSI usage in some way
  
   sudo aptitude install initramfs-tools open-iscsi sysv-rc-conf

add iscsi to the list of modules to include in the boot image:

   echo "iscsi" | sudo tee -a /etc/initramfs-tools/modules
  
change the initiatorname (of the client) to something unique if you are planning to set up many boxes
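A minimal sketch of what that means on Ubuntu (assuming the stock open-iscsi file layout; the IQN shown is just the example initiator name used in the GRUB entry further down):

   sudo joe /etc/iscsi/initiatorname.iscsi
   and set it to something like
      InitiatorName=iqn.2009-04.arbeitsgruppe:general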
   sudo touch /etc/iscsi/iscsi.initramfs
  
If you are using Ubuntu 9.04 you have to work around a bug in the jaunty stable tree. The bug is in /usr/share/initramfs-tools/hooks/iscsi; if you are using Ubuntu 9.10 you don't have to edit this file.

Edit /usr/share/initramfs-tools/hooks/iscsi with your favourite editor:
  
   sudo joe /usr/share/initramfs-tools/hooks/iscsi
   to
      copy_exec /sbin/iscsistart /sbin

Even if the modules for the network cards are loaded early enough, udev still initializes too late to activate the NICs, which will cause the iscsi script to fail.
As long as there is no smarter solution, you will be fine copying the udev script from /scripts/nfs-top to /scripts/local-top/early_udev **FIXME Perhaps it is smarter to copy it to /etc/initramfs-tools/scripts/local-top/**

   sudo cp /usr/share/initramfs-tools/scripts/nfs-top/udev /usr/share/initramfs-tools/scripts/local-top/early_udev
   ls -lha /usr/share/initramfs-tools/scripts/local-top/
   total 12K
   drwxr-xr-x  2 root root   35 2009-04-28 23:12 .
   drwxr-xr-x 10 root root 4.0K 2009-04-28 22:59 ..
   -rwxr-xr-x  1 root root  430 2009-04-28 23:00 early_udev
   -rwxr-xr-x  1 root root 1.9K 2009-04-28 22:32 iscsi
  
generate the new initrd(s)
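A typical invocation (assuming you simply want to rebuild the initrds for every installed kernel; ''-u'' updates the existing images) is:

   sudo update-initramfs -u -k all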
   update-initramfs: Generating /boot/initrd.img-2.6.28-11-generic
  
===== Preparing the system =====
Now we have to make sure that neither /etc/init.d/networking nor the [[http://projects.gnome.org/NetworkManager/|Network Manager]] reconfigures the interface again during the init process or X11 startup, so edit /etc/network/interfaces:
  
   sudo joe /etc/network/interfaces
   and change the appropriate entry for your primary NIC from:
      auto eth0
      iface eth0 inet dhcp
   to
      auto eth0
      iface eth0 inet manual
  
   title Ubuntu 9.04, kernel 2.6.28-11-generic iSCSI
   uuid f859db1e-39f8-4539-9f0e-8292f8f93cca
   kernel /vmlinuz-2.6.28-11-generic ip=dhcp ISCSI_INITIATOR=iqn.2009-04.arbeitsgruppe:general ISCSI_TARGET_NAME=iqn.2009-04.Arbeitsgruppe.Commander1024:general.ubuntu ISCSI_TARGET_IP=192.168.234.1 ISCSI_TARGET_PORT=3260 root=UUID=021ebcfc-188a-41ba-9864-ee18cbf7af9f ro
   initrd /initrd.img-2.6.28-11-generic
   quiet
  
WARNING: Make a backup of your grub.cfg! When you upgrade your system the Ubuntu installer will overwrite it. If that happens you have to add your iSCSI options again! After upgrading, always check whether your grub.cfg is still intact.
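For example (a trivial sketch; use /boot/grub/menu.lst instead if your release still uses GRUB legacy):

   sudo cp /boot/grub/grub.cfg /boot/grub/grub.cfg.backup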
  
===== Transfer your image to the iSCSI Target server =====
[[sanboot:transfer|transfer the disk image to your iSCSI target]]

[[sanboot:iscsi_boot|Boot from your iSCSI target image]]
===== Installing ubuntu directly onto the iSCSI SAN =====
Installing Ubuntu directly onto the SAN is basically the same as the HDD -> SAN method and takes advantage of the possibility to install packages in the running live CD system. But that means you cannot use the Alternate install disc out-of-the-box.
  
Boot your favourite *buntu variant using the desktop CD.
Install the open-iscsi package in the live system:

   sudo aptitude install open-iscsi
   sudo iscsi_discovery <SAN-IP>
      Set target iqn.2009-04.apobyte.fattony:eee-box.ubuntu to automatic login over tcp to portal 192.168.111.212:3260
      Logging out of session [sid: 3, target: iqn.2009-04.apobyte.fattony:eee-box.ubuntu, portal: 192.168.111.212,3260]
      Logout of [sid: 3, target: iqn.2009-04.apobyte.fattony:eee-box.ubuntu, portal: 192.168.111.212,3260]: successful
      discovered 1 targets at 192.168.111.212
   sudo iscsiadm -m node -p <SAN-IP> -T iqn.2009-04.apobyte.fattony:eee-box.ubuntu -l

Now you should be logged into your SAN target (if it does not require authentication). If the target does require authentication, configure it now by editing the config file:

   sudo joe /etc/iscsi/nodes/iqn.2009-04.apobyte.fattony\:eee-box.ubuntu/192.168.111.212\,3260\,1/default

and add the following to the config file:

    node.session.auth.username = <username>
    node.session.auth.password = <password>
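Depending on how the target is configured, you may also have to enable CHAP explicitly; the following line is an assumption and not part of the original howto:

    node.session.auth.authmethod = CHAP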

Now the login command will finally work, which can be verified with the "dmesg" program. This will give you a hint which disk you will have to partition and install Ubuntu to:

   [   62.491211] scsi8 : iSCSI Initiator over TCP/IP
   [   63.540037] scsi 8:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
   [   63.541815] sd 8:0:0:0: [sdg] 50331648 512-byte hardware sectors: (25.7 GB/24.0 GiB)
   [   63.548975] sd 8:0:0:0: [sdg] Write Protect is off
   [   63.548979] sd 8:0:0:0: [sdg] Mode Sense: 77 00 00 08
   [   63.550486] sd 8:0:0:0: [sdg] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
   [   63.556047] sd 8:0:0:0: [sdg] 50331648 512-byte hardware sectors: (25.7 GB/24.0 GiB)
   [   63.560353] sd 8:0:0:0: [sdg] Write Protect is off
   [   63.560357] sd 8:0:0:0: [sdg] Mode Sense: 77 00 00 08
   [   63.565124] sd 8:0:0:0: [sdg] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA

Now start the install program and install Ubuntu exactly the same way you would on a local hard drive. There is no difference at all; even GRUB goes onto this drive by default, so there is nothing to take care of, except that you should not reboot your box right after the installation procedure.

Now mount your newly created root device (the partition you mount as "/"). If you are unsure, you can use "fdisk -l /dev/sdX" to call your partition structure back to mind. Then create a sane environment so that all the scripts you will use will be satisfied. I always use a separate /boot partition, so we have to keep that in mind.

   sudo mkdir /mnt/sdc3
   sudo mount /dev/sdc3 /mnt/sdc3
   sudo mount /dev/sdc1 /mnt/sdc3/boot/
   sudo chroot /mnt/sdc3
   sudo su -
   mount -t proc none /proc
   hostname -F /etc/hostname
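If update-initramfs or grub later complains about missing device nodes inside the chroot, it may help to bind-mount /dev and /sys from the live system before the chroot step (a sketch, not part of the original howto):

   sudo mount --bind /dev /mnt/sdc3/dev
   sudo mount --bind /sys /mnt/sdc3/sys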
  
Now jump back to "Preparing the initrd" and walk down the howto from there on.

===== Troubleshooting =====
=== Emergency Busybox Shell on Boot ===
The following procedure will allow you to get your system up with 2 short commands, if something went wrong and you are dropped to a Busybox Shell:
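A hedged sketch of manually starting the iSCSI session from that shell (flag names as documented for iscsistart, values taken from the GRUB example above -- adapt them to your target), followed by the switch-root call shown below:

   iscsistart -i iqn.2009-04.arbeitsgruppe:general -t iqn.2009-04.Arbeitsgruppe.Commander1024:general.ubuntu -g 1 -a 192.168.234.1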
  
   exec switch-root (to continue the boot process)
  
=== Crash on Shutdown ===
If Ubuntu keeps crashing on system shutdown, try to disable the networking and open-iscsi scripts for shutdown, because shutting down the network will cause the loss of the root device:
   sudo sysv-rc-conf
  
   │                        space: toggle service on / off                        │
   └──────────────────────────────────────────────────────────────────────────────┘
===== FIXME: Notes from contributors =====
  
==== Ubuntu 10.04 LTS / Lucid ====
  
Disclaimer: use everything in this section at your own risk! Think for yourself.
  
With Lucid, it is basically the same as described above, except for:
  
  * copying early_udev and bugfixing scripts is not necessary
  * make sure the OS image contains its target definition under /etc/iscsi/nodes/... -- otherwise the iscsid start will kill the connection upon boot. You get that target definition by mounting the iscsi target once; iscsiadm seems to create it (see the example below).
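For example, a sendtargets discovery followed by a login (standard iscsiadm modes; the placeholders match the ones used further down) creates that node record:
<code>
iscsiadm -m discovery -t sendtargets -p <yourTargetIp>
iscsiadm -m node -T <yourTargetName> -p <yourTargetIp> -l
</code>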
  * use sysv-rc-conf to remove "networking" and "open-iscsi" from *all* run levels.
  * make sure that *all* open-iscsi sysv links are gone. My experience shows that using sysv-rc-conf is *not* enough:
<code>
find /etc -name 'K*open-iscsi'
find /etc -name 'S*open-iscsi'
</code>
  * One may have to repeat the previous step when upgrading or reinstalling open-iscsi.
  * create /etc/init/open-iscsi.conf:
<code>
description "Open-iSCSI"

start on filesystem
# no stopping!
exec /etc/init.d/open-iscsi start
</code>
  * Lucid detects on its own if the rootfs is an iSCSI target. It will prevent the network from being shut down during system shutdown, and the read-only remount of the rootfs should make sure that data gets flushed to disk.
  * For your own convenience, edit the following line in /etc/default/grub as shown below:
<code>
GRUB_CMDLINE_LINUX_DEFAULT="ip=dhcp ISCSI_INITIATOR=<yourInitiatorName> ISCSI_TARGET_NAME=<yourTargetName> ISCSI_TARGET_IP=<yourTargetIp> ISCSI_TARGET_PORT=3260 quiet splash"
</code>

=== Stabilization/Paranoia/Warnings ===

In order to prevent data loss (for example, a corrupted dpkg db), one may do the following:

  * Enable data journaling. That will cut your maximum write performance in half.
  * One may increase node.session.timeo.replacement_timeout in iscsid.conf to avoid escalating temporary connection problems to the upper layers (a sketch follows after this list). In *my* situation it (almost) *never* makes sense to tell the ext4 driver about I/O problems because there is no fallback; it would just give me an unclean filesystem (not unmounted properly, like after a hard reset). However(!!), an unclean filesystem may be better than a totally screwed one: imagine your iSCSI target host crashes, loses some data that has not yet been written to disk, and the client continues to use that target without doing a journal replay first... personally, I try to avoid that situation by never starting the iSCSI target automatically (removed from init scripts, starting it manually after killing the client machines...). IMHO the iSCSI protocol is a bit dumb here: the target should detect a crash and refuse session continuation after a restart... again, I'm not liable for *any* sort of data loss if you act according to these explanations. You *will* screw your data. Sooner or later.
  * Additional warning: SAN boot using gPXE seems not to allow a MaxSessions=1/MaxConnections=1 setting for the target, which would prevent concurrent accesses -- a very basic need for data integrity purposes.
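Two hedged sketches for the points above (the timeout value and the device name are placeholders, not recommendations from the original howto):
<code>
# /etc/iscsi/iscsid.conf -- raise the replacement timeout (the stock default is much lower)
node.session.timeo.replacement_timeout = 86400

# make data journaling a default mount option of the ext4 root filesystem (takes effect on the next mount)
tune2fs -o journal_data /dev/<yourRootDevice>
</code>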
