Nowadays I'd say it's hard to believe that anything is really secure.
Nevertheless, one can keep adding more and more barriers.
The idea is not to be selected as an easier path to attack.
But then again, if someone is determined enough, who can tell...
Despite this grave introduction, my goal is to repeat one small, well-known tip:
Help prevent unauthorized GRUB configuration changes by adding a password.
The method below isn't for GRUB2 (the next generation), but for the older version.
Locate the GRUB menu file where the password will be configured:
# bootadm list-menu
the location ... is: /rpool/boot/grub/menu.lst
default 4
timeout 15
...
Invoke the grub binary to create the password.
Take note of the resulting encrypted hash.
# /boot/grub/bin/grub
GNU GRUB version 0.97 (640K lower / 65536K upper memory)
[ ...
...
... ]
grub> md5crypt
Password: ***************
Encrypted: $1$...
grub> quit
Edit the grub menu file and include the generated password hash as shown below:
# head -7 /rpool/boot/grub/menu.lst
splashimage /boot/grub/splash.xpm.gz
foreground 343434
background F7FbFF
default 4
timeout 15
password --md5 $1$...
#---------- ADDED BY BOOTADM - DO NOT EDIT ----------
...
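As a side note not covered in the steps above, GRUB legacy also supports locking individual menu entries once a password line is present; a hypothetical Solaris entry protected this way might look like:
title Solaris 11.1 (maintenance)
# 'lock' refuses to boot this entry until the password is supplied (press 'p' at the menu)
lock
bootfs rpool/ROOT/solaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -s
module$ /platform/i86pc/$ISADIR/boot_archive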
That's all that's needed for GRUB 1.
For GRUB 2, I'm still learning how to do it.
Wednesday, April 24, 2013
The /export file system
Beyond historical facts around /export, I'm concerned with ZFS issues here.
I'll start this post with the global zone; later I hope to touch on non-global zones.
By default, ZFS creates /export below rpool as rpool/export.
rpool/export in turn has the following hierarchy:
- rpool/export/home
- rpool/export/home/<user>
The problem is:
I don't like this default, because I prefer maximal decoupling from rpool.
It's typically a good idea for NFS servers to set /export on a larger device.
NOTE
I haven't heard about any issues caused by deviating from the default.
Don't forget to update the automounter info as well, as needed.
That may typically include /etc/auto_home, NIS or LDAP.
Enter the single-user state (SINGLE USER MODE) by booting with the -s option.
Note that init S or s isn't enough to unmount potentially busy file systems.
Enter single-user credentials (usually root or, preferably, an RBAC-configured login).
This will let us work on rpool/export without any hassle.
rpool/VARSHARE is a new Solaris 11.1 feature to save space in boot environments.
<user> is a user typically configured with access to the root role.
Accomplishing the mission:
# zfs list -r -d 3 -o name,mountpoint,mounted rpool
NAME MOUNTPOINT MOUNTED
rpool /rpool no
rpool/ROOT legacy no
rpool/ROOT/solaris / yes
rpool/ROOT/solaris-bk / no
rpool/ROOT/solaris-bk/var /var no
rpool/ROOT/solaris/var /var yes
rpool/VARSHARE /var/share yes
rpool/dump - -
rpool/export /export no
rpool/export/home /export/home no
rpool/export/home/<user> /export/home/<user> no
rpool/swap - -
Create and configure a new pool and name it export.
You may decide to enable compression, encryption, deduplication, and so on...
I'll omit many details on this step and assume the existence of such a new pool.
(# zpool create -O compression=on -O dedup=on export <device>)
For each hierarchy within the current /export subtree (for instance home):
Recursively snapshot it:
# zfs snapshot -r rpool/export/home@migrate
Replicate it via send/receive to the new pool:
# zfs send -R rpool/export/home@migrate \
| zfs recv -d -x mountpoint export
Delete it or at least disable it from automatically mounting:
# zfs set -r canmount=noauto rpool/export
Clean up snapshots used in the migration:
# zfs destroy -r rpool/export/home@migrate
# zfs destroy -r export/home@migrate
NOTE
The above snapshot-related commands could be simplified
(the -F option overwrites anything in the new export pool, which is "empty" anyway):
# zfs snapshot -r rpool/export@migrate
# zfs send -R rpool/export@migrate | zfs recv -F export
# zfs destroy -r rpool/export
# zfs destroy -r export@migrate
Finally, our result is as follows:
# zfs list -r -o name,mountpoint,mounted export
NAME MOUNTPOINT MOUNTED
export /export no
export/home /export/home no
export/home/<user> /export/home/<user> no
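Since NFS serving was part of the motivation, here's a hedged example, not part of the original steps (Solaris 11.1 syntax), of sharing the relocated home hierarchy:
# zfs set share.nfs=on export/home
# share
The share command with no arguments simply lists the active shares for verification.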
For non-global zones, things vary depending on the system version.
/export is only present by default from Solaris 11 onwards.
In fact on newer systems, the default file systems are:
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 446M 252G 31K /rpool
rpool/ROOT 446M 252G 31K legacy
rpool/ROOT/solaris 446M 252G 414M /
rpool/ROOT/solaris/var 29.7M 252G 29.0M /var
rpool/VARSHARE 39K 252G 39K /var/share
rpool/export 96.5K 252G 32K /export
rpool/export/home 64.5K 252G 32K /export/home
rpool/export/home/<user> 32.5K 252G 32.5K /export/home/<user>
This is the view from within a non-global zone <ngz>.
That's quite cool as it closely resembles a physical system.
As such, rpool/swap and rpool/dump aren't present, of course.
Assume a dedicated pool named zone for the zones.
From the global zone perspective, the non-global zone file system hierarchy is:
# zfs list -r -o name,mountpoint zone
NAME MOUNTPOINT
zone /zone
zone/<ngz> /zone/<ngz>
zone/<ngz>/rpool /rpool
zone/<ngz>/rpool/ROOT legacy
zone/<ngz>/rpool/ROOT/solaris /zone/<ngz>/root
zone/<ngz>/rpool/ROOT/solaris/var /zone/<ngz>/root/var
zone/<ngz>/rpool/VARSHARE /zone/<ngz>/root/var/share
zone/<ngz>/rpool/export /export
zone/<ngz>/rpool/export/home /export/home
zone/<ngz>/rpool/export/home/<user> /export/home/<user>
Decoupling /export as was proposed for the global zone is just slightly different.
NOTE
It's crucial that the new dataset be independent of the zone's dataset hierarchy.
I mean that it shouldn't be a descendant of zone/<ngz> as per the above example, otherwise the non-global zone machinery may break.
There are many possibilities for where to locate the non-global zone's export dataset.
For instance, among others, we could choose one of the following options:
- Use new individual pools for each non-global zones' export dataset
- Use a new single pool for all non-global zones' export dataset
- Use an already existent export pool for the global zone
- Use an already existent zone pool for the zones
Which approach to follow depends on a variety of factors.
If a zone makes heavy usage of /export, then a dedicated pool might be better.
Otherwise, it could certainly share a pool with other such zones.
But note that too many pools may lead to inefficient storage utilization.
From Solaris 11 onwards, things are even easier thanks to dataset aliasing.
Let's see each of these possibilities in turn, starting with Solaris 11.
SOLARIS 11
Just as an example, assume that <ngz> lives in the zone pool, that is, zone/<ngz>.
The scenario is exactly the one described above for a non-global zone.
Here's a strategy:
- Create an appropriate dataset to host the new /export hierarchy
- Assign it to the zone configuration, aliasing it to export.
- Boot in single-user mode and migrate data
# zfs create -o zoned=on -o mountpoint=/export zone/<ngz>-export
# zonecfg -z <ngz>
zonecfg:<ngz>> add dataset
zonecfg:<ngz>:dataset> set name=zone/<ngz>-export
zonecfg:<ngz>:dataset> set alias=export
zonecfg:<ngz>:dataset> end
zonecfg:<ngz>> verify
zonecfg:<ngz>> commit
zonecfg:<ngz>> exit
# zoneadm -z <ngz> boot -s
# zlogin -C <ngz>
[NOTICE: Zone booting up with arguments: -s]
SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle ... All rights reserved.
Booting to milestone "milestone/single-user:default".
Hostname: <ngz>
Requesting System Maintenance Mode
SINGLE USER MODE
Enter user name for system maintenance (...): root
Enter root password (...): ************
Apr 29 13:05:07 su: 'su root' succeed for root on /dev/console
root@<ngz>:~# zfs list -r -o name,mountpoint,mounted,canmount
NAME MOUNTPOINT MOUNTED CANMOUNT
export /export no on
rpool /rpool no on
rpool/ROOT legacy no off
rpool/ROOT/solaris / yes noauto
rpool/ROOT/solaris/var /var yes noauto
rpool/VARSHARE /var/share yes noauto
rpool/export /export no on
rpool/export/home /export/home no on
rpool/export/home/<user> /export/home/<user> no on
root@<ngz>:~# zfs snapshot -r rpool/export/home@migrate
root@<ngz>:~# zfs send -R rpool/export/home@migrate |
zfs recv -e -x mountpoint export
root@<ngz>:~# zfs list -r -o name,mountpoint,mounted,canmount
NAME MOUNTPOINT MOUNTED CANMOUNT
export /export yes on
export/home /export/home yes on
export/home/<user> /export/home/<user> yes on
rpool /rpool no on
rpool/ROOT legacy no off
rpool/ROOT/solaris / yes noauto
rpool/ROOT/solaris/var /var yes noauto
rpool/VARSHARE /var/share yes noauto
rpool/export /export no on
rpool/export/home /export/home no on
rpool/export/home/<user> /export/home/<user> no on
root@<ngz>:~# zfs set -r canmount=noauto rpool/export
root@<ngz>:~# zfs destroy -r rpool/export/home@migrate
root@<ngz>:~# zfs destroy -r export/home@migrate
root@<ngz>:~# reboot
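After the reboot, a quick sanity check could confirm that the aliased export hierarchy mounts while the old rpool/export stays unmounted (a hedged suggestion, not part of the original session):
root@<ngz>:~# zfs list -r -o name,mounted,canmount export rpool/export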
LEGACY
In legacy systems, such as Solaris 11 Express, dataset aliasing isn't available.
In this case, I'd propose using an export pool for more consistent naming.
Most steps are the same, so I won't repeat them; only the outline is shown.
From the global zone, it would be seen as in the following arrangement:
# zfs list -r -o name,mountpoint export
NAME MOUNTPOINT
export /export
export/home /export/home
export/home/<user> /export/home/<user>
export/<ngz> /export/<ngz>
export/<ngz>/home /export/<ngz>/home
export/<ngz>/home/<user> /export/<ngz>/home/<user>
# zonecfg -z <ngz> info dataset
dataset:
name: export/<ngz>
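For reference, a hedged sketch of how that delegation might have been configured from the global zone, mirroring the Solaris 11 example minus the alias property (which legacy releases lack):
# zfs create export/<ngz>
# zfs create export/<ngz>/home
# zonecfg -z <ngz>
zonecfg:<ngz>> add dataset
zonecfg:<ngz>:dataset> set name=export/<ngz>
zonecfg:<ngz>:dataset> end
zonecfg:<ngz>> commit
zonecfg:<ngz>> exit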
From a legacy non-global zone (without aliasing) the arrangement would be seen as:
# zlogin -l <user> <ngz>
[Connected to zone '<ngz>' pts/2]
<user>@<ngz>:~# zfs list -o name,mountpoint
NAME MOUNTPOINT
export /export
export/<ngz> /export/<ngz>
export/<ngz>/home /export/<ngz>/home
export/<ngz>/home/<user> /export/<ngz>/home/<user>
zone /zone
zone/<ngz> /zone/<ngz>
zone/<ngz>/ROOT legacy
zone/<ngz>/ROOT/zbe legacy
<user>@<ngz>:~# tail /etc/auto_home
...
#
+auto_home
#
* localhost:/export/<ngz>/home/&
Naturally, this implies that the user's info must be updated accordingly.
# usermod -d /home/<user> <user>
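A hedged extra step, not shown in the original: if the automounter maps were edited, the autofs service can be restarted (or the maps re-read) so the change takes effect:
# svcadm restart system/filesystem/autofs
# automount -v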
Tuesday, April 23, 2013
IPS repository update
On previous posts I've shown how to create and access IPS repositories.
Now I comment on how to get fixes and enhancements.
Of course, a paid support plan is required.
But let me say, it's more than worth it for serious business.
Solaris is much, much better than its presumed competitors.
It's about well-invested money, not bells, whistles, and appearances.
By the way, Solaris is not Pandora's box like others.
Coming back to this post, assume the following starting point:
(as previously mentioned, I prefer that depot be a ZFS pool on its own)
# zfs list -t all -r -o name,used,mountpoint depot/solaris
NAME USED MOUNTPOINT
depot/solaris 6.07G /depot/solaris
depot/solaris@release 0 -
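For context, here's a hedged sketch (device name and options are illustrative; the real setup was covered in the earlier posts) of how such a dedicated depot pool and release repository dataset might be laid out:
# zpool create depot <device>
# zfs create -o compression=on depot/solaris
# (populate /depot/solaris from the 11.1 release repository image)
# zfs snapshot depot/solaris@release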
Note that repository update operations are heavily I/O bound.
I'd recommend repositories on pools backed by SSD disks and not under RAID-Zn.
Most timings were on an Intel Core 2 Quad Q6600 2.40 GHz with a slow disk.
Find out which update to obtain, not necessarily the latest.
Check My Oracle Support (MOS) or https://pkg.oracle.com/solaris/support.
Suppose that Oracle Solaris 11.1.6.0.4.0 is found and selected.
Now there's an important decision point.
How to apply 11.1.6.0.4.0?
- Merge everything onto a single repository?
- On a selected intermediate updated clone from the initial release?
- On the most recently updated clone, such as 11.1.5.0.5.0?
It mostly depends on flexibility rather than storage constraints.
Rationale:
- This is the worst case, as it may be more difficult to select a specific version.
It is impossible to get rid of older "intermediate" versions.
- For keeping just a few update levels, this may be the better option.
That's because we keep no outdated packages on each repository level.
The disadvantage is that each repository level probably contains duplicates.
- For keeping all update levels, this seems the better option.
That's because ZFS cloning saves our bacon, saving space and time.
There are no duplicates throughout each repository level.
Instead of incremental ISO, https://pkg.oracle.com/solaris/support could be used.
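For instance, a hedged sketch of pulling straight from the support repository instead of an ISO (it assumes the support key and certificate from pkg-register.oracle.com are already installed; the file names below are illustrative):
# time pkgrecv -s https://pkg.oracle.com/solaris/support/ \
    --key /var/pkg/ssl/Oracle_Solaris_11_Support.key.pem \
    --cert /var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem \
    -d /depot/solaris/6_4_0 '*'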
OPTION 2
Clone the initial release repository and simply apply SRU-6.4.
This will pull in 197 updated packages and grow storage usage by 1.88 GB.
The initial release packages take 6.07 GB, so the total used space is 7.95 GB.
It takes a long time as there may be many changes since the initial release.
# zfs clone depot/solaris@release depot/solaris/6_4_0
# lofiadm -a /export/archive/sol-11_1_6_4_0-incr-repo.iso
/dev/lofi/1
# mount -F hsfs /dev/lofi/1 /mnt
# ll /mnt
total 21
drwxr-xr-x 3 root root 2.0K Apr 1 15:10 repo
-rw-r--r-- 1 root root 3.0K Apr 1 15:43 README
-rwxr-xr-x 1 root root 1.3K Apr 1 15:43 NOTICES
-rw-r--r-- 1 root root 3.2K Apr 1 15:43 COPYRIGHT
# time pkgrecv -s /mnt/repo -d /depot/solaris/6_4_0 '*'
Processing packages for publisher solaris ...
Retrieving and evaluating 197 package(s)...
PROCESS ITEMS GET (MB) SEND (MB)
Completed 197/197 2133/2133 4601/4601
real 621m20.861s (VirtualBox)
user 8m51.073s
sys 1m24.112s
# time pkgrepo refresh -s /depot/solaris/6_4_0
timing info is pending
# umount /mnt
# lofiadm -d /dev/lofi/1
# zfs snapshot depot/solaris/6_4_0@sru
# zfs list -r -t all depot/solaris
NAME USED AVAIL REFER MOUNTPOINT
depot/solaris 7.95G 176G 6.07G /depot/solaris
depot/solaris@release 2.26M - 6.07G -
depot/solaris/6_4_0 1.88G 176G 7.87G /depot/solaris/6_4_0
depot/solaris/6_4_0@sru 0 - 7.87G -
OPTION 3
If an SRU 5.5 repository hasn't been prepared yet, clone the initial release repository and apply SRU 5.5 to it.
This will pull in 167 updated packages and grow storage usage by 1.27 GB.
With the initial release taking 6.07 GB, this amounts to 7.33 GB up to this step.
It takes a long time as there may be many changes since the initial release.
# zfs clone depot/solaris@release depot/solaris/5_5_0
# lofiadm -a /export/archive/sol-11_1-sru5-05-incr-repo.iso
/dev/lofi/1
# mount -F hsfs /dev/lofi/1 /mnt
# ll /mnt
total 20
drwxr-xr-x 3 root root 2.0K Mar 6 17:23 repo
-rw-r--r-- 1 root root 3.0K Mar 6 18:13 README
-rwxr-xr-x 1 root root 1.3K Mar 6 18:13 NOTICES
-rw-r--r-- 1 root root 3.2K Mar 6 18:13 COPYRIGHT
# time pkgrecv -s /mnt/repo -d /depot/solaris/5_5_0 '*'
Processing packages for publisher solaris ...
Retrieving and evaluating 167 package(s)...
PROCESS ITEMS GET (MB) SEND (MB)
Completed 167/167 1340/1340 3321/3321
real 29m16.753s
user 22m36.367s
sys 1m32.850s
# time pkgrepo refresh -s /depot/solaris/5_5_0
Initiating repository refresh.
real 1m58.325s
user 1m55.420s
sys 0m2.762s
# umount /mnt
# lofiadm -d /dev/lofi/1
# zfs snapshot depot/solaris/5_5_0@sru
# zfs list -r -t all depot/solaris
NAME USED AVAIL REFER MOUNTPOINT
depot/solaris 7.33G 175G 6.07G /depot/solaris
depot/solaris@release 2.26M - 6.07G -
depot/solaris/5_5_0 1.27G 175G 7.21G /depot/solaris/5_5_0
depot/solaris/5_5_0@sru 0 - 7.21G -
Clone the SRU 5.5 repository and apply SRU-6.4.
Although 197 packages are evaluated, only 60 are actually downloaded.
The extra amount of storage is 1.02 GB, in contrast to the other option's 1.88 GB.
Some small overhead is added on the origin of the clone, from 1.27 GB to 1.31 GB.
In the end, the total amount of storage is 8.39 GB, in contrast to the other option's 7.95 GB.
# zfs clone depot/solaris/5_5_0@sru depot/solaris/6_4_0
# lofiadm -a /export/archive/sol-11_1_6_4_0-incr-repo.iso
/dev/lofi/1
# mount -F hsfs /dev/lofi/1 /mnt
# ll /mnt
total 21
drwxr-xr-x 3 root root 2.0K Apr 1 15:10 repo
-rw-r--r-- 1 root root 3.0K Apr 1 15:43 README
-rwxr-xr-x 1 root root 1.3K Apr 1 15:43 NOTICES
-rw-r--r-- 1 root root 3.2K Apr 1 15:43 COPYRIGHT
# time pkgrecv -s /mnt/repo -d /depot/solaris/6_4_0 '*'
Processing packages for publisher solaris ...
Retrieving and evaluating 197 package(s)...
PROCESS ITEMS GET (MB) SEND (MB)
Completed 60/60 1063/1063 1929/1929
real 16m54.721s
user 11m51.007s
sys 0m54.853s
# time pkgrepo refresh -s /depot/solaris/6_4_0
Initiating repository refresh.
real 1m54.976s
user 1m51.583s
sys 0m2.805s
# umount /mnt
# lofiadm -d /dev/lofi/1
# zfs snapshot depot/solaris/6_4_0@sru
# zfs list -r -t all depot/solaris
NAME USED AVAIL REFER MOUNTPOINT
depot/solaris 8.39G 174G 6.07G /depot/solaris
depot/solaris@release 2.26M - 6.07G -
depot/solaris/5_5_0 1.31G 174G 7.22G /depot/solaris/5_5_0
depot/solaris/5_5_0@sru 11.6M - 7.21G -
depot/solaris/6_4_0 1.02G 174G 8.13G /depot/solaris/6_4_0
depot/solaris/6_4_0@sru 0 - 8.13G -
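Once a given repository level is ready, a client (or the local machine) could be pointed at it and updated; a hedged example, assuming the repository is accessed directly via a file URI:
# pkg set-publisher -G '*' -g file:///depot/solaris/6_4_0 solaris
# pkg update --be-name solaris-11.1-sru-6.4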
NOTE
At least for Solaris 11 Express, depending on how far an SRU is from the initial release, it seems safer not to issue pkgrepo refresh right after pkgrecv. Instead, prefer a more conservative approach, such as:
# svcadm disable -t pkg/server (if previously enabled)
# pkg update --be-name solaris-sru-<N>-pkg pkg
...
The previous command will update just the most critical parts, including the kernel and IPS itself. After any needed housekeeping, reboot into the newly created BE:
# init 6
NOTE
Arguably the above ZFS hierarchy suggestion could be better; for instance:
# zfs list -o name -r -t all depot/solaris
NAME
depot/solaris
...
depot/solaris/11.1/release
depot/solaris/11.1/release@<date-0>
depot/solaris/11.1/sru-5.5.0
depot/solaris/11.1/sru-5.5.0@<date-1>
depot/solaris/11.1/sru-6.4.0
depot/solaris/11.1/sru-6.4.0@<date-2>
...
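A hedged sketch of how the existing clones could be rearranged into such a naming scheme (dataset and snapshot names are illustrative only):
# zfs create depot/solaris/11.1
# zfs rename depot/solaris/6_4_0 depot/solaris/11.1/sru-6.4.0
# zfs rename depot/solaris/11.1/sru-6.4.0@sru depot/solaris/11.1/sru-6.4.0@<date-2>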
A few conclusions, confirming what may sound obvious:
- Storage savings might get close to 50%, but in the end they're not that significant.
- Having multiple incremental versions does save time and provide flexibility.
- Getting and mounting incremental ISOs seems the most efficient way out.
Monday, April 22, 2013
Solaris 11 Express
I'd like to take a break for an honorable mention of Solaris 11 Express.
Although it isn't as good as Solaris 11, it still can be a quite good 32-bit SOHO platform.
It runs on old hardware that can't run Solaris 11.
It has even been used with Engineered Systems.
It offers many application packages, efficient virtualization and, of course, ZFS!
It's a pity that the last useful support repository update was SRU 13.
But you may even live well without it and still get many benefits.
The major annoyance is due to Solaris 11's new syntax adopted in many commands.
Unfortunately in these cases we have no option but to cope with them.
In spite of that I think it's still better than FreeBSD or OpenIndiana.
Not to mention that you're on a better path to Solaris 11.
Shell initialization files
So, after a while I'm back writing a few posts on this blog.
This time I'm struggling to endure against the shadows of the evil.
They want to destroy everything that's fair, good and honest.
I'm thirsty for the Divine Justice to fix what's wrong.
Sorry, I needed to get that off my chest.
Now let's stick to our humble attempt to build what's good.
I'd say the chief way to administer Solaris is at the CLI (command line interface).
In Unix this is also traditionally known as the shell, which in Solaris 11 is bash by default.
Nowadays there's that beast called Ops Center which I dislike due to:
- It's an extremely heavyweight and resource-hungry piece of software.
- Its operations are mostly, if not completely, black-box style.
- It's suited to the least common denominator, to lazy sysadmins.
I prefer to count on my own to carry out administration.
Hence the importance of being comfortable at the shell level.
As each person is unique, it's clear that so is customization.
My personal approach, as much as possible, is:
- Stick to the defaults
- Strive for simplicity
- Always keep clarity
What I find absolutely essential in customizing the shell follows:
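Purely as my own illustration of the stick-to-defaults spirit above (an assumption, not the author's actual list), a minimal ~/.profile fragment might look like:
# Keep the default PATH, just append a personal bin directory.
export PATH=$PATH:$HOME/bin
# A simple, informative prompt: user@host:cwd
export PS1='\u@\h:\w\$ '
# Preferred editor for crontab -e, svccfg, etc.
export EDITOR=vi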