Tuesday, April 23, 2013

IPS repository update

In previous posts I've shown how to create and access IPS repositories.
Now I'll cover how to update them with fixes and enhancements.
 
Of course, a paid support plan is required.
But let me say, it's well worth it for serious business.
Solaris is much, much better than its would-be competitors.
It's about well-invested money, not bells and whistles and appearance.
By the way, Solaris is not a Pandora's box like others.

Coming back to this post, assume the following starting point
(as previously mentioned, I prefer that the depot be a ZFS pool of its own):

# zfs list -t all -r -o name,used,mountpoint depot/solaris
NAME                    USED  MOUNTPOINT
depot/solaris          6.07G  /depot/solaris
depot/solaris@release      0  -

Note that repository update operations are heavily I/O bound.
I'd recommend placing repositories on pools backed by SSDs and not under RAID-Zn.
Most of the timings below were taken on an Intel Core 2 Quad Q6600 2.40 GHz with a slow disk.
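
For reference, a starting point like the one above could be seeded as follows; a minimal sketch, where the pool devices (c0t2d0, c0t3d0) and the full-repository ISO name are hypothetical:

# zpool create depot mirror c0t2d0 c0t3d0
# zfs create depot/solaris
# pkgrepo create /depot/solaris
# lofiadm -a /export/archive/sol-11_1-repo-full.iso
/dev/lofi/1
# mount -F hsfs /dev/lofi/1 /mnt
# pkgrecv -s /mnt/repo -d /depot/solaris '*'
# pkgrepo refresh -s /depot/solaris
# umount /mnt
# lofiadm -d /dev/lofi/1
# zfs snapshot depot/solaris@release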
 
Find out which update to obtain, not necessarily the latest one.
Check My Oracle Support (MOS) or https://pkg.oracle.com/solaris/support.
Suppose that Oracle Solaris 11.1.6.0.4.0 (SRU 6.4) is found and selected.
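
Once the corresponding incremental ISO is mounted (as done further below), the update level it delivers can be double-checked by listing the entire incorporation. A quick sketch; the version string is the one I'd expect for SRU 6.4 and is shown only as an illustration:

# pkgrepo list -s /mnt/repo entire
PUBLISHER NAME    O VERSION
solaris   entire    0.5.11,5.11-0.175.1.6.0.4.0:<timestamp>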

Now there's an important decision point.
How to apply 11.1.6.0.4.0?
  1. Merge everything onto a single repository?
  2. Apply it to a selected intermediate clone of the initial release?
  3. Apply it to the most recently updated clone, such as 11.1.5.0.5.0?
The answer probably is:
it mostly depends on flexibility rather than storage constraints.
  
Rationale:
  1. This is the worst case, as it may be more difficult to select a specific version.
     It is impossible to get rid of older "intermediate" versions.

  2. For keeping just a few update levels, this may be the better option.
     That's because no outdated packages are kept at each repository level.
     The disadvantage is that each repository level probably contains duplicates.

  3. For keeping all update levels, this seems the better option.
     That's because ZFS cloning saves our bacon, saving both space and time.
     There are no duplicates across the repository levels.
As an exercise, I'll just pursue paths (2) and (3), comparing a few results.
Instead of incremental ISOs, https://pkg.oracle.com/solaris/support could be used.
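
Whatever the path, part of the flexibility is that clients can be pointed at a specific repository level. A minimal sketch, assuming clients reach the depot through a file:// origin (an HTTP origin served by pkg/server works the same way):

# pkg set-publisher -G '*' -g file:///depot/solaris/6_4_0/ solaris
# pkg update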

OPTION 2

Clone the initial release repository and simply apply SRU-6.4.
This will pull in 197 updated packages and grow storage usage by 1.88 GB.
The initial release packages take 6.07 GB, so the total used space is 7.95 GB.
It takes a long time, as there may be many accumulated changes since the initial release.
 
# zfs clone depot/solaris@release depot/solaris/6_4_0

# lofiadm -a /export/archive/sol-11_1_6_4_0-incr-repo.iso
/dev/lofi/1
  
# mount -F hsfs /dev/lofi/1 /mnt
# ll /mnt
total 21
drwxr-xr-x   3 root     root        2.0K Apr  1 15:10 repo
-rw-r--r--   1 root     root        3.0K Apr  1 15:43 README
-rwxr-xr-x   1 root     root        1.3K Apr  1 15:43 NOTICES
-rw-r--r--   1 root     root        3.2K Apr  1 15:43 COPYRIGHT

# time pkgrecv -s /mnt/repo -d /depot/solaris/6_4_0 '*'
Processing packages for publisher solaris ...
Retrieving and evaluating 197 package(s)...
PROCESS                          ITEMS    GET (MB)   SEND (MB)
Completed                      197/197   2133/2133   4601/4601

real    621m20.861s (VirtualBox)
user    8m51.073s
sys     1m24.112s


# time pkgrepo refresh -s /depot/solaris/6_4_0

(timing info pending)

# umount /mnt
# lofiadm -d /dev/lofi/1

# zfs snapshot depot/solaris/6_4_0@sru

# zfs list -r -t all depot/solaris
NAME                     USED AVAIL  REFER  MOUNTPOINT
depot/solaris           7.95G  176G  6.07G  /depot/solaris
depot/solaris@release   2.26M     -  6.07G  -
depot/solaris/6_4_0     1.88G  176G  7.87G  /depot/solaris/6_4_0
depot/solaris/6_4_0@sru     0     -  7.87G  -
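
The clone's space accounting can be confirmed directly with zfs get; used reflects only the divergence from the origin snapshot:

# zfs get used,referenced,origin depot/solaris/6_4_0
NAME                 PROPERTY    VALUE                  SOURCE
depot/solaris/6_4_0  used        1.88G                  -
depot/solaris/6_4_0  referenced  7.87G                  -
depot/solaris/6_4_0  origin      depot/solaris@release  -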
  
OPTION 3

If SRU-5.5 hadn't been prepared yet, clone the initial release repository and apply it.
This will pull in 167 updated packages and grow storage usage by 1.27 GB.
With the initial release taking 6.07 GB, this amounts to 7.33 GB up to this step.
It takes a long time, as there may be many accumulated changes since the initial release.

# zfs clone depot/solaris@release depot/solaris/5_5_0
 
# lofiadm -a /export/archive/sol-11_1-sru5-05-incr-repo.iso
/dev/lofi/1
  
# mount -F hsfs /dev/lofi/1 /mnt
# ll /mnt
total 20
drwxr-xr-x   3 root     root        2.0K Mar  6 17:23 repo
-rw-r--r--   1 root     root        3.0K Mar  6 18:13 README
-rwxr-xr-x   1 root     root        1.3K Mar  6 18:13 NOTICES
-rw-r--r--   1 root     root        3.2K Mar  6 18:13 COPYRIGHT
 
# time pkgrecv -s /mnt/repo -d /depot/solaris/5_5_0 '*'
Processing packages for publisher solaris ...
Retrieving and evaluating 167 package(s)...
PROCESS                          ITEMS    GET (MB)   SEND (MB)
Completed                      167/167   1340/1340   3321/3321

real    29m16.753s
user    22m36.367s
sys     1m32.850s

# time pkgrepo refresh -s /depot/solaris/5_5_0
Initiating repository refresh.

real    1m58.325s
user    1m55.420s
sys     0m2.762s

# umount /mnt
# lofiadm -d /dev/lofi/1

# zfs snapshot depot/solaris/5_5_0@sru 
 
# zfs list -r -t all depot/solaris
NAME                     USED AVAIL  REFER  MOUNTPOINT
depot/solaris           7.33G  175G  6.07G  /depot/solaris
depot/solaris@release   2.26M     -  6.07G  -
depot/solaris/5_5_0     1.27G  175G  7.21G  /depot/solaris/5_5_0
depot/solaris/5_5_0@sru     0     -  7.21G  -
   
Clone the SRU-5.5 repository and apply SRU-6.4.
Although 197 packages are evaluated, only 60 are actually downloaded.
The extra amount of storage is 1.02 GB, in contrast to the other option's 1.88 GB.
Some small overhead is also added to the origin of the clone, from 1.27 GB to 1.31 GB.
In the end, the total amount of storage is 8.39 GB, against option 2's 7.95 GB; the difference buys an extra readily available update level.
  
# zfs clone depot/solaris/5_5_0@sru depot/solaris/6_4_0
 
# lofiadm -a /export/archive/sol-11_1_6_4_0-incr-repo.iso
/dev/lofi/1
  
# mount -F hsfs /dev/lofi/1 /mnt
# ll /mnt
total 21
drwxr-xr-x   3 root     root        2.0K Apr  1 15:10 repo
-rw-r--r--   1 root     root        3.0K Apr  1 15:43 README
-rwxr-xr-x   1 root     root        1.3K Apr  1 15:43 NOTICES
-rw-r--r--   1 root     root        3.2K Apr  1 15:43 COPYRIGHT
 
# time pkgrecv -s /mnt/repo -d /depot/solaris/6_4_0 '*'
Processing packages for publisher solaris ...
Retrieving and evaluating 197 package(s)...
PROCESS                          ITEMS    GET (MB)   SEND (MB)
Completed                        60/60   1063/1063   1929/1929

real    16m54.721s
user    11m51.007s
sys     0m54.853s


# time pkgrepo refresh -s /depot/solaris/6_4_0
Initiating repository refresh.

real    1m54.976s
user    1m51.583s
sys     0m2.805s

# umount /mnt
# lofiadm -d /dev/lofi/1
 
# zfs snapshot depot/solaris/6_4_0@sru

# zfs list -r -t all depot/solaris
NAME                     USED AVAIL  REFER  MOUNTPOINT
depot/solaris           8.39G  174G  6.07G  /depot/solaris
depot/solaris@release   2.26M     -  6.07G  -
depot/solaris/5_5_0     1.31G  174G  7.22G  /depot/solaris/5_5_0
depot/solaris/5_5_0@sru 11.6M     -  7.21G  -
depot/solaris/6_4_0     1.02G  174G  8.13G  /depot/solaris/6_4_0
depot/solaris/6_4_0@sru     0     -  8.13G  -
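
To serve a chosen level over HTTP, a dedicated pkg/server instance can be bound to each repository dataset. A sketch, where the instance name (sru64) and port (10064) are arbitrary choices of mine:

# svccfg -s pkg/server add sru64
# svccfg -s pkg/server:sru64 addpg pkg application
# svccfg -s pkg/server:sru64 setprop pkg/inst_root = astring: /depot/solaris/6_4_0
# svccfg -s pkg/server:sru64 setprop pkg/port = count: 10064
# svcadm refresh pkg/server:sru64
# svcadm enable pkg/server:sru64

Clients would then add http://<host>:10064/ as an origin for the solaris publisher.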



NOTE
At least for Solaris 11 Express, depending on how far an SRU is from the initial release, it seems safer not to issue pkgrepo refresh right after pkgrecv. Instead, prefer a more conservative approach, such as:

# svcadm disable -t pkg/server   (if previously enabled)
# pkg update --be-name solaris-sru-<N>-pkg pkg
...

The previous command will update just the most critical parts, including the kernel and IPS itself. After any housekeeping eventually needed, reboot into the newly created BE:

# init 6
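
Before that reboot, it may be worth confirming that the new BE exists and is flagged for activation; a quick check along these lines:

# beadm list                            (the new BE should show "R", active on reboot)
# beadm activate solaris-sru-<N>-pkg    (only if it isn't already flagged)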
NOTE
Arguably, the ZFS hierarchy suggested above could be improved; for instance:

# zfs list -o name -r -t all depot/solaris
NAME
depot/solaris
...
depot/solaris/11.1/release
depot/solaris/11.1/release@<date-0>
depot/solaris/11.1/sru-5.5.0
depot/solaris/11.1/sru-5.5.0@<date-1>
depot/solaris/11.1/sru-6.4.0
depot/solaris/11.1/sru-6.4.0@<date-2>
...
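
If starting from scratch, that layout could be seeded along these lines; a sketch only, where the <date-…> placeholders stand for real snapshot dates:

# zfs create -p depot/solaris/11.1/release
# pkgrepo create /depot/solaris/11.1/release
# pkgrecv -s /mnt/repo -d /depot/solaris/11.1/release '*'   (full release ISO on /mnt)
# zfs snapshot depot/solaris/11.1/release@<date-0>
# zfs clone depot/solaris/11.1/release@<date-0> depot/solaris/11.1/sru-5.5.0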


A few conclusions, confirming what may sound obvious:
  • Storage savings might get close to 50% per update level, but are still not a decisive factor.
  • Having multiple incremental versions does save time and provides flexibility.
  • Getting and mounting incremental ISOs seems the most efficient way to go.