Following Boyd’s presentation on LiveUpgrade at MSOSUG, I used LiveUpgrade to upgrade a scratch machine to Solaris 10, Update 4. Here’s a walkthrough:
Our machine, salsa, is running Solaris 10u2 on an x86 host with root disks mirrored with SVM.
# grep Solaris /etc/release
                       Solaris 10 6/06 s10x_u2wos_09a X86
# uname -a
SunOS salsa 5.10 Generic_125101-10 i86pc i386 i86pc
# df -kh / /opt
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d20        7.9G   4.5G   3.3G    59%    /
/dev/md/dsk/d23        7.9G   369M   7.4G     5%    /opt
# metastat -p d20
d20 -m d10 d30 1
d10 1 1 c0d0s0
d30 1 1 c1d0s0
# metastat -p d23
d23 -m d13 d33 1
d13 1 1 c0d0s3
d33 1 1 c1d0s3
As you can see, this is a pretty common desktop-type deployment. /opt is on a different filesystem and we’ll need to upgrade some packages that live there, so we’ll have to take care of /opt as well. If the machine had a separate /var filesystem (as all production systems should), we’d take care of that too.
You may wish to install the latest Live Upgrade packages at this point, and check for any critical Live Upgrade patches. Patch Check Advanced makes the search for patches easier, and a recent set of LU packages will be on your Solaris media.
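As a sketch, refreshing the LU packages from the new media looks something like this. The package names follow the standard Solaris 10 media layout (SUNWlur and SUNWluu); verify them against your own media before running anything:

```shell
# Sketch: refresh the Live Upgrade packages from the target media before
# running lucreate. Package names assume the standard Solaris 10 media
# layout; check your media first.
MEDIA=/mnt/Solaris10u4    # where the u4 media gets mounted later in this walkthrough

lu_refresh_cmds() {
  # Emit the commands rather than running them, so they can be reviewed
  # (and answered interactively via pkgrm/pkgadd prompts) before anything changes.
  echo "pkgrm SUNWluu SUNWlur"
  echo "pkgadd -d $MEDIA/Solaris_10/Product SUNWlur SUNWluu"
}
lu_refresh_cmds
```

Removing the old packages before adding the ones from the new media avoids version-mismatch surprises during lucreate.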
Before commencing the LiveUpgrade process on an x86/x64 machine, you’ll want to make sure /sbin/biosdev returns the correct information for your disks. It often doesn’t, so this is an important first step. (If you have a SPARC machine, you can cheerfully ignore this.) It should return one line per disk with the BIOS disk number and the full device path. If it doesn’t, consult Boyd’s slides, page 49, for a hackjob; this is apparently being fixed. The BIOS disk numbering traditionally runs like this:
| BIOS ID | Description | DOS drive name |
| 0x80 | Primary Master | C: |
| 0x81 | Primary Slave | D: |
| 0x82 | Secondary Master | E: |
| 0x83 | Secondary Slave | F: |
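If you want to sanity-check biosdev output against that table, a tiny helper is enough (the mapping is just the traditional convention above, nothing more):

```shell
# Map a traditional BIOS disk ID to its description, per the table above.
bios_desc() {
  case "$1" in
    0x80) echo "Primary Master" ;;
    0x81) echo "Primary Slave" ;;
    0x82) echo "Secondary Master" ;;
    0x83) echo "Secondary Slave" ;;
    *)    echo "unknown" ;;
  esac
}

bios_desc 0x80    # prints "Primary Master"
```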
To get started, we’ll use “lucreate” to break the mirrors and create a new boot environment (BE). I chose d40 for the new mirror name, but you can pick whatever you like that isn’t already in use.
# lucreate -c u2 -n u4 \
    -m /:/dev/md/dsk/d40:ufs,mirror \
    -m /:/dev/dsk/c0d0s0:detach,attach,preserve \
    -m /opt:/dev/md/dsk/d43:ufs,mirror \
    -m /opt:/dev/dsk/c0d0s3:detach,attach,preserve
This tells LiveUpgrade to break the existing mirrors and create new mirrors on d40 and d43 with the selected submirror partitions, in this case c0d0s0 and c0d0s3.
You will see a lot of noise at this point, including some errors. Do not worry. As long as the process completes successfully, you should be fine. The output will look something like this:
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <u2> file systems with the file system(s) you specified for the new boot environment.
Determining which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
Updating system configuration files.
The device is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <u4>.
Source boot environment is <u2>.
Creating boot environment <u4>.
Checking for GRUB menu on boot environment <u4>.
Saving GRUB menu on boot environment <u4>.
Creating file systems on boot environment <u4>.
Preserving <ufs> file system for </> on .
Preserving <ufs> file system for on .
Mounting file systems for boot environment <u4>.
Calculating required sizes of file systems for boot environment <u4>.
Populating file systems on boot environment <u4>.
Checking selection integrity.
Integrity check OK.
Preserving contents of mount point .
Preserving contents of mount point .
Copying file systems that have not been preserved.
Creating shared file system mount points.
Creating compare databases for boot environment <u4>.
Creating compare database for file system .
Creating compare database for file system .
Updating compare databases on boot environment <u4>.
Making boot environment <u4> bootable.
Updating bootenv.rc on ABE <u4>.
Generating partition and slice information for ABE <u4>
Setting root slice to Solaris Volume Manager metadevice .
Restoring GRUB menu.
The GRUB menu has been restored on device .
Population of boot environment <u4> successful. Creation of boot environment <u4> successful.
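At this point it’s worth confirming that both environments exist. Here is a small helper that pulls the BE name and “Is Complete” column out of lustatus output (the three-line header layout matches the lustatus listing shown later in this post):

```shell
# Print "name is_complete" for each BE from `lustatus` output on stdin.
# Skips the three header lines of the standard lustatus layout.
list_bes() {
  awk 'NR > 3 { print $1, $2 }'
}

# On the live system (guarded so this sketch is harmless elsewhere):
if command -v lustatus >/dev/null 2>&1; then
  lustatus | list_bes
fi
```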
Again, you will see some errors and noise here. I saw messages like:
invalid option 'r'
usage: mount [-o opts] <path>
ERROR: Mount failed for:
They didn’t interfere with the process at all.
Now, mount your media for the upgrade. Either pop the DVD in the drive or mount using the loopback interface. Since this little machine had no optical drive, I mounted the media like this:
# lofiadm -a /share/solaris-10-update-4-x86.iso
# mkdir /mnt/Solaris10u4
# mount -F hsfs -o ro /dev/lofi/1 /mnt/Solaris10u4
If you already have some lofi devices in use, the device to mount will be different. lofiadm -a prints the device it just created, and running lofiadm with no arguments lists the existing mappings.
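If you script this step, you can capture whatever device name lofiadm assigns rather than assuming /dev/lofi/1. A sketch using the paths from this walkthrough (the guard just keeps it from erroring on a non-Solaris box):

```shell
# Mount the ISO via a lofi device, capturing the device name lofiadm
# hands back. Paths are the ones from this walkthrough; adjust to taste.
ISO=/share/solaris-10-update-4-x86.iso
MNT=/mnt/Solaris10u4

if command -v lofiadm >/dev/null 2>&1; then
  dev=$(lofiadm -a "$ISO")          # lofiadm -a prints the device it creates
  mkdir -p "$MNT"
  mount -F hsfs -o ro "$dev" "$MNT"
fi
```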
Next, to upgrade the new BE:
# luupgrade -u -n u4 -s /mnt/Solaris10u4
The new BE will be upgraded. This takes quite some time, and again, you will see noise and the occasional error. As long as the job finishes, you should be fine. Example output:
Copying failsafe multiboot from media.
Uncompressing miniroot
Creating miniroot device
miniroot filesystem is <ufs>
Mounting miniroot at </mnt/Solaris10u4//Solaris_10/Tools/Boot>
Validating the contents of the media </mnt/Solaris10u4>
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris;> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <u4>.
Checking for GRUB menu on ABE <u4>.
Saving GRUB menu on ABE <u4>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <u4>.
Performing the operating system upgrade of the BE <u4>.
CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.
Upgrading Solaris: xx%
Installation of the packages from this media is complete.
Restoring GRUB menu on ABE <u4>.
Updating package information on boot environment <u4>.
Package information successfully updated on boot environment <u4>.
Adding operating system patches to the BE <u4>.
The operating system patch installation is complete.
ABE boot partition backing deleted.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file on boot environment <u4> contains a log of the upgrade operation.
INFORMATION: The file on boot environment <u4> contains a log of cleanup operations required.
INFORMATION: Review the files listed above.
Remember that all of the files are located on boot environment <u4>.
Before you activate boot environment <u4>, determine if any additional system maintenance is required or if additional media of the software distribution must be installed.
The Solaris upgrade of the boot environment <u4> is complete.
Installing failsafe
Failsafe install is complete.
Okay, your new BE is upgraded. You can use the lumount command to take a look around the environment and make any changes if required. Once you’re ready to roll, activate the new BE with:
# luactivate u4
The output will look like:
Generating partition and slice information for ABE <u4>
Boot menu exists.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Do *not* change *hard* disk order in the BIOS.
2. Boot from the Solaris Install CD or Network and bring the system to
Single User mode.
3. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:
mount -Fufs /dev/dsk/c1d0s0 /mnt
4. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
/mnt/sbin/luactivate
5. luactivate, activates the previous working boot environment and
indicates the result.
6. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
GRUB menu is on device: </dev/md/dsk/d40>.
Filesystem type for menu device: <ufs>.
Activation of boot environment <u4> successful.
And the message isn’t kidding about making sure you shut down properly: the last, critical steps of the activation are performed during system shutdown, so make sure you use something like init 6 rather than halt or reboot. Remember, halt and reboot don’t shut the box down properly; they simply unmount the filesystems and reboot. Baaad. Never use them. Shut the box down like this:
# init 6
updating /platform/i86pc/boot_archive...this may take a minute
and when it comes back up you’ll be running an upgraded Solaris. If you’re upgrading an x86 box to update 4, it may prompt you to set a keyboard locale on the way up (especially if you have a PS/2 keyboard, or none plugged in at all). The host will not complete booting until it gets this manual intervention, so be careful when upgrading a host you have no console access to.
You’ll want to do your usual testing, then use ludelete to drop the old BE, and then you can remirror your disks. So easy:
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
u2 yes no no yes -
u4 yes yes yes no -
# ludelete u2
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Updating GRUB menu on device </dev/md/dsk/d40>
Boot environment <u2> deleted.
# metaclear d20 ; metattach d40 d30
d20: Mirror is cleared
d40: submirror d30 is attached
# metaclear d23 ; metattach d43 d33
d23: Mirror is cleared
d43: submirror d33 is attached
# metastat -c d40
d40 m 8.0GB d10 d30 (resync-12%)
d10 s 8.0GB c0d0s0
d30 s 8.0GB c1d0s0
Now, your mirrors will resync and you’re back in business with mirrored disks.
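If you want to wait for the resync to finish before walking away, metastat -c shows a “(resync-NN%)” marker while a mirror is still syncing, as in the output above. A tiny predicate over that output, plus how you’d use it on the live box (device names are the new mirrors from this walkthrough):

```shell
# True while any mirror in the `metastat -c` output (on stdin) is resyncing.
in_resync() { grep -q 'resync'; }

# On the live system (guarded so the sketch is harmless elsewhere):
if command -v metastat >/dev/null 2>&1; then
  while metastat -c d40 d43 | in_resync; do
    sleep 60
  done
  echo "mirrors are in sync"
fi
```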
Now, how easy was that? In my next LU walkthrough, I’ll cover using LiveUpgrade to apply patches to a host.
September 28th, 2007 at 6:23 am
I found your posting to be of great help during my first experience with LU.. Thanks!
October 31st, 2007 at 11:50 pm
I have a question of clarification. In the example, you put /dev/md/dsk/d40 in the lucreate command, but you don’t say how you arrived at the d40 disk name. It’s not shown in the metastat commands, so I was wondering if you a) created the disk device beforehand, or b) simply specified a device name (choosing the next incremental number) and let the lucreate command create the device using that name? And thanks for putting this together. The Sun instructions are a bit vague for something that one does on a live production system.
October 31st, 2007 at 11:54 pm
Sure, questions are good.
I chose d40 because it sounded like a good number – we’re choosing a *new* metadevice to create the configuration on, so it could be anything. So, option B in this case.
Glad you enjoyed my article.
November 6th, 2007 at 1:03 am
Good article, really easy to follow.
Compares favourably (highly!) with the official Sun recommendation on how to upgrade with LU:
Procedure to upgrade using Solaris[TM] Live Upgrade on host with root drive encapsulated in
Solstice[TM] Disksuite or logical volume manager
http://sunsolve.sun.com/search/document.do?assetkey=1-9-90559-1
November 28th, 2007 at 1:25 am
Did you do this with the LU packages from the sol10u4 media? When I tried I ran into this (snippet from Sun support case):
There’s a known issue with the S10U4 LU packages: a change in /usr/lib/lu/lulib causes this problem.
Any last filesystems you have in your /etc/vfstab file will fail whether it is currently mounted or not.
There’s already a bug filed, and an escalation has been filed on this one. Here’s the bug information:
Bug id# 6606935
Synopsis: S10U4 /usr/lib/lu/lulib script with “X” under $lcpm_vfstab entry causing lucreate to fail
—–
Sun advised me to use the LU packages from the sol10u3 (11/06) media instead, which contradicts everything they say elsewhere.
I also had the problem that ludelete wouldn’t remove the original BE after booting the new one, as the GRUB foo was still on there. One can relocate GRUB manually but you seem to have not had that problem at all.
December 18th, 2007 at 12:08 pm
How well does LU deal with non-standard filesystems that may contain parts of the OS. Specifically : zone local roots.
With Sol 10 u4 we can do LU with zones but if we have zone roots on / we want to ensure they are copied to the alt root location as well.
Making the scenario even worse, can we copy such non-standard zone roots if they exist on non-Sun external storage such as a SAN array – without any SLVM mirroring. I would expect the requirements for LU would be met as long as there was some spare disk space somewhere that could be redeployed for LU use. We then have the subsequent problem of migrating the upgraded zone roots back to the SAN but that is just a copy exercise.
May 8th, 2008 at 3:38 pm
FYI S10U5 bug
http://www2.purplecow.org/?p=129