Wednesday, July 18, 2012

AI derived manifest sample

I'm assuming the setup previously described.
A derived manifest example is probably the best way to explain it.
The example below combines:
 
      • official documentation showing how to mirror the rpool during the installation;
      • what's not shown, or what I have changed, based on the official documentation;
      • official documentation showing the structure of a derived manifest.
 
$ cat /export/auto_install/files/derived.sh
#!/bin/bash -
 
SCRIPT_SUCCESS=0
SCRIPT_FAILURE=1
 
function error_handler
{
    exit $SCRIPT_FAILURE
}
 
trap error_handler ERR
 
# Define the location of the custom base manifest.
AI_SERVER=192.168.0.50:5555
AI_PATH=export/auto_install/$SI_INSTALL_SERVICE/auto_install
AI_BASE_MANIFEST=base.xml
 
# Aliases are not expanded in non-interactive bash unless explicitly enabled.
shopt -s expand_aliases
alias wget=/usr/bin/wget
alias aimanifest=/usr/bin/aimanifest
 
# Load a base (customized) manifest to adjust dynamically.
wget -P /tmp http://$AI_SERVER/$AI_PATH/$AI_BASE_MANIFEST
aimanifest load /tmp/$AI_BASE_MANIFEST
 

# Use the default if there is only one disk.
if [[ $SI_NUMDISKS -ge 2 ]]
then
    # Turn mirroring on.
 
    # Assumes a root zpool is already set up.
    vdev=$(aimanifest add -r target/logical/zpool[@name=rpool]/vdev@name mirror-0)
    aimanifest set ${vdev}@redundancy mirror
 
    # A 2-way mirror is enough.
    typeset -i disk_num
    for ((disk_num = 1; disk_num <= 2; disk_num++))
    do
        eval curr_disk="$"SI_DISKNAME_${disk_num}
 
        disk=$(aimanifest add -r target/disk@in_vdev mirror-0)
        aimanifest set ${disk}@in_zpool rpool
        aimanifest set ${disk}@whole_disk true
 
        disk_name=$(aimanifest add -r ${disk}/disk_name@name $curr_disk)
        aimanifest set ${disk_name}@name_type ctd
    done
fi
 
exit $SCRIPT_SUCCESS
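 
The script can be exercised outside an actual installation, which helps catch manifest path errors early. A minimal sketch, assuming the AI server above is reachable and using hypothetical disk names (AIM_MANIFEST and AIM_LOGFILE are the environment variables honored by aimanifest; I believe aimanifest also offers a validate subcommand):
 
$ export AIM_MANIFEST=/tmp/derived.xml       # manifest file aimanifest will build
$ export AIM_LOGFILE=/tmp/aimanifest.log     # optional trace of aimanifest calls
$ export SI_INSTALL_SERVICE=solaris11-i386
$ export SI_NUMDISKS=2 SI_DISKNAME_1=c7t0d0 SI_DISKNAME_2=c7t1d0
$ /export/auto_install/files/derived.sh && echo OK
$ aimanifest validate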
   

Tuesday, July 17, 2012

AI derived manifests

Beyond the default AI setup are custom AI installations.
For custom AI installations, one of the most difficult parts is dealing with manifests.

Derived manifests take into account client hardware specifics.
Essentially they are scripts that dynamically generate a "real" manifest.
They start by loading a base (static) manifest, a default or a customized one.
According to client-specific environment variables, other clauses are dynamically inserted.
An example of the overall strategy is as follows:
   
Copy an existing manifest as a starting point:
 
# cd /export/auto_install/solaris11-i386/auto_install
# cp -p manifest/{default,base}.xml
# ln -s {manifest/,}base.xml

     
Manually customize as much as needed and as much as possible:
 
# vi base.xml
# egrep 'ai_instance name|origin' base.xml

  <ai_instance name="base" auto_reboot="true">
          <origin name="http://s11-depot-01"/>
  
Within a derived manifest, address what couldn't be set by the previous steps:
 
# cd /export/auto_install/files
# vi derived.sh
# chmod a+x derived.sh

   
Associate the derived manifest with an installation service:
 
# installadm create-manifest \
  -n solaris11-i386 \
  -m derived \
  -f /export/auto_install/files/derived.sh \
  -d
   

Verify:
 
# installadm list -m
Service Name    Manifest      Status
------------    --------      ------
default-i386    orig_default  Default
solaris11-i386  derived       Default
 
# installadm list -m -n solaris11-i386
Manifest      Status    Criteria
--------      ------    --------
derived       Default   None
orig_default  Inactive  None
       

    Associating a profile to an AI service

    Beyond the default AI setup are custom AI installations. 
    Probably the next task is to assign profiles to an installation service.
    Profiles (system configuration profiles) avoid any interaction during the installation.
    Profiles perform the same function as the sysidcfg files in Solaris 10.

    So far, a 1:1 relationship between clients and profiles is what makes sense to me.

    The following command will create a system configuration profile (XML file) at the indicated location, containing all the responses to the common installation prompts:

    # sysconfig create-profile \
      -o /export/auto_install/files/ai_client-1.xml

    Note that due to a bug in Solaris 11 GA (general availability) / FCS (first customer shipment), upon profile creation it's not possible to specify a user account already present on the system from which the command is run. To circumvent this bug, simply specify another user account and, if desired, manually adjust the XML file afterwards. Other manual adjustments are also possible, although error-prone.

    Connect the ai_client-1.xml profile to the solaris11-i386 installation service.
    At the same time, select the (unique) intended client (client-1) by its MAC address.

    # installadm create-profile \
      -n solaris11-i386 \
      -p ai_client-1 \
      -f /export/auto_install/files/ai_client-1.xml \
      -c mac=68:b5:99:c1:e6:58
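 
    To double-check the association (a hedged sketch; I believe installadm also provides a validate subcommand for profiles):
 
    # installadm list -p -n solaris11-i386
    # installadm validate -n solaris11-i386 -p ai_client-1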
         

    Associating a client to an AI service

    Beyond the default AI setup are custom AI installations. 
    For custom AI installations, the first task is to assign a client to an installation service.
    It's necessary to identify the MAC address used by the client for network booting.
    On X86 that's the MAC address of a connected interface associated with PXE.
    On SPARC any connected interface can be used for network booting.

    For example, a client is associated to the solaris11-i386 installation service as follows:

    # installadm create-client -n solaris11-i386 -e 68:b5:99:c1:e6:58

    AI may not be able to adjust ISC-DHCP at all, or not exactly as needed.
    In any case, check the ISC-DHCP configuration and adjust it if necessary.
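 
    For instance, a per-client entry in /etc/inet/dhcpd4.conf could look like the one below.
    (A sketch modeled on the ISC-DHCP AI configuration sample further below; the fixed address is just an example.)
 
    host client-1 {
      hardware ethernet 68:b5:99:c1:e6:58;
      filename "0168B599C1E658";
      fixed-address 192.168.0.210;
    }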

    To list assigned clients for all (custom) services:

    # installadm list -c 
    Service Name   Client Address  Arch  Image Path
    ------------   --------------  ----  ----------
    solaris11-i386 08:00:27:AF:4C:5B   i386  /export/auto_install/solaris11-i386
                   68:B5:99:C1:E6:58   i386  /export/auto_install/solaris11-i386
          
    At least on X86 it's necessary to adjust the default GRUB boot entry:

    # ll /etc/netboot/
    total 10
    lrwxrwxrwx   1 root root  ...  0168B599C1E658 
                                   -> ./solaris11-i386/boot/grub/pxegrub
    drwxr-xr-x  19 root root  ...  default-i386
    -rw-r--r--   1 root root  ...  menu.lst.0168B599C1E658
    drwxr-xr-x  19 root root  ...  solaris11-i386

    # grep default /etc/netboot/menu.lst.0168B599C1E658
    default=1
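 
    In practice, "adjust" means pointing default= at the Automated Install entry of the client-specific menu.lst.
    A small sketch (GRUB entries are numbered from 0 and the right entry number may differ):
 
    # grep -n ^title /etc/netboot/menu.lst.0168B599C1E658
    # vi /etc/netboot/menu.lst.0168B599C1E658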
          

    The initial default AI setup

    A default installation of AI on X86 (and similarly on SPARC) ends up as follows:

    # installadm list

    Service Name   Alias Of       Status  Arch   Image Path
    ------------   --------       ------  ----   ----------
    default-i386   solaris11-i386 on      x86    /export/auto_install/solaris11-i386
    solaris11-i386 -              on      x86    /export/auto_install/solaris11-i386

    # installadm list -m


    Service Name    Manifest      Status
    ------------    --------      ------
    default-i386    orig_default  Default
    solaris11-i386  orig_default  Default

    # installadm list -m -n default-i386

    Manifest      Status    Criteria
    --------      ------    --------
    orig_default  Default   None

    # installadm list -m -n solaris11-i386


    Manifest      Status    Criteria
    --------      ------    --------
    orig_default  Default   None

    # installadm list -c
    There are no clients configured for local services.

    # installadm list -p
    There are no profiles configured for local services.

    The default setup is minimally useful, basically exempting us from optical media.
    To fully leverage all its potential, custom AI installations are required.
      
        

    Monday, July 16, 2012

    Solaris 10 update cache cleanup

    To clean up the update cache of patchsvr and/or smpatch, perform the following:
        
    • Stop the patch server:

      # patchsvr stop
          
    • On the patch server:

      # cd /var/sadm/spool
      # rm patchsvr/*
      # rm cache/xml/*
      # rm cache/updatemanager/analysis.results/*
      # rm cache/entitlement/*
      # rm cache/Database/*
      # rm cache/collection/*
      # rm cache/category/*
      # rm cache/*detectors*

          
    • On the patch client:

      # cd /var/sadm/spool
      # rm cache/xml/*
      # rm cache/updatemanager/analysis.results/*
      # rm cache/entitlement/*
      # rm cache/Database/*
      # rm cache/*detectors*

        
    • Start the patch server:

      # patchsvr start
          
    • Rebuild server cache first, then the client's:
      (run the following command on the server, then on the client)

      # smpatch analyze 
       

    Solaris 10 registration

    Registration with an adequate MOS credential is important to get patches.
    Particularly, smpatch and patchsvr won't work if the system is not registered.
    Make sure to have a properly configured Internet connection and DNS resolution.
    There are many hard to pinpoint possibilities that may cause a registration attempt to fail.
    For the latest details check My Oracle Support website and consider the following hints:
         
    • Make sure to have at least the following patches:
      (with something such as $ showrev -p | grep 99999 | cut -c 1-17)
      SPARC: 121118-19, 123005-09, 124171-08, 123630-04, 123893-25
      X86:   121119-19, 123006-09, 124187-08, 123631-04, 123896-25

           
    • Install required patches, beginning with the Common Agent Container:

      # cacaoadm stop
      # patchadd 123896-XX
      # cacaoadm start
      # patchadd 121119-XX
      # patchadd 123006-XX
      ...

            
    • Make sure the registration database is clean:

      # cacaoadm stop
      # rm /var/scn/persistence/SCN*
      # /usr/lib/cc-ccr/bin/eraseCCRRepository
      # cacaoadm start

          
    • Edit registration profile, setting MOS credentials and proxy information only:

      # REG_FILE=RegistrationProfile.properties
      # cp /usr/lib/breg/data/$REG_FILE /tmp

      # vi /tmp/$REG_FILE
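 
      The relevant keys are roughly the following (a sketch from memory; key names may vary slightly between releases, and the values are placeholders):
 
      userName=my_mos_user@example.com
      password=my_mos_password
      proxyHostName=proxy-1
      proxyPort=8080
      proxyUserName=
      proxyPassword=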

         
    • Perform the registration:

      # sconadm proxy -r /tmp/$REG_FILE
      # sconadm register -a -r /tmp/$REG_FILE
        
    • Adjust the patch source (and proxy, if needed):

      # smpatch set patchpro.patch.source=https://updates.oracle.com/
      # smpatch set patchpro.proxy.host=proxy-1
      # smpatch set patchpro.proxy.port=8080
      # smpatch set patchpro.proxy.user
      # smpatch set patchpro.proxy.passwd

         
    • For testing, attempt an update operation.
       
    IMPORTANT
    As of Solaris 10 Update 11 (aka 1/13), sconadm has been removed.
    To register such new systems, a cool hack has recently been disclosed.
    Perhaps (I haven't tested it) it may help register other boxes too (U9 and U10).

    s10-u10# cd /var/cc-ccr
    s10-u10# tar cvf /tmp/cc-ccr.tar .
    s10-u10# scp /tmp/cc-ccr.tar s10-u11:/tmp
    s10-u10# rm -r /tmp/cc-ccr.tar

    s10-u11# cd /var/cc-ccr
    s10-u11# tar xvf /tmp/cc-ccr.tar
    s10-u11# rm -r /tmp/cc-ccr.tar

    Note: Unfortunately patchsvr may not work either due to latest U10 Tomcat updates.
         

    Saturday, July 14, 2012

    Solaris 10 patchsvr configuration

    The Sun Update Connection Proxy is commonly called "Patch Server" or LPS.
    To take advantage of patchsvr make sure you have at least patch 119788-11 / 119789-11.

    IMPORTANT 
    Don't ever install patch 148150-03 or newer on the patchsvr host!
    The patch will remove Tomcat 4 which will break patchsvr.
     
    Optionally, point the repository cache to a different location:

    # zfs create -o mountpoint=/patchsvr data/patchsvr
    # patchsvr setup -c /patchsvr
     
    Make sure the patch source URL is up-to-date:

    # patchsvr setup -p https://updates.oracle.com/
        
    If behind a proxy/firewall:

    # patchsvr setup -x proxy-1:8080
    # (umask 77; vi /tmp/proxy_password)
    # patchsvr setup -u proxy_user -s /tmp/proxy_password
    # rm /tmp/proxy_password
       
    Check configurations:

    # patchsvr setup -l
      
    Start and enable boot-time startup:

    # patchsvr start
    # patchsvr enable
      
    Check that only one instance is running, otherwise kill them all and start over.

    # /usr/ucb/ps -auxwww | grep java | grep ccr
    ...
       
    On the patchsvr machine, also configure smpatch to point directly to Oracle.
    That is, use patchpro.patch.source=https://updates.oracle.com/
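 
    On the clients, instead, point smpatch at the local patch server.
    A sketch, assuming patchsvr listens on its default port 3816 (patchsvr-host is a placeholder):
 
    client# smpatch set patchpro.patch.source=http://patchsvr-host:3816/solaris/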

    If you ever need to clear the cache due to client reconfiguration:
     

    # patchsvr stop
    # cd /var/sadm/spool

    # rm patchsvr/*        (or the actual path to repository cache, if changed)
    # rm cache/xml/*
    # rm cache/updatemanager/analysis.results/*
    # rm cache/entitlement/*
    # rm cache/Database/*
    # rm cache/collection/*
    # rm cache/category/*
    # rm cache/*detectors*

      

    Friday, July 13, 2012

    Solaris 10 patchsvr

    The patchsvr is an important complement to smpatch.
    It's not a repository of patches, but a centralized update proxy and site local cache.
    The benefits are:
      
    • Patches are downloaded and cached so later requests are served much faster.
    • The cache also preserves the Internet uplink (no redundant downloads).
    • Internal servers never access the Internet (they entirely rely on patchsvr).
    • It's built-in, with few requirements other than disk space for the downloaded patches.
    • It can serve both SPARC and X86 architectures.
       
    In large companies, a chain of patchsvr servers can be set up for distribution.
    There's no control of multiple patch versions or point-in-time patches.
       
    IMPORTANT
    Unfortunately, patchsvr has been broken since one of the latest Tomcat updates in Solaris 10 U10.
    As of the release of Solaris 10 U11, no solution is available.
       

    Tuesday, July 10, 2012

    Swap analysis

    This is admittedly a rather confusing post but I don't feel completely guilty.
    There are many official papers, but each only partially describes the Solaris VMM.
    Furthermore, there are many CLI tools whose output is rather difficult to correlate.
    Fortunately, Solaris 11.1 promises VMM 2.0 at some point, which I hope will be better.

    Traditionally, swap used to mean only the "disk portion" of the virtual memory.
    At some point Solaris has changed that to: swap = "traditional swap" + "unused RAM".
    And by doing this, it turned out that swap somehow became the virtual memory itself.
    It's also called virtual swap, causing a great deal of confusion and obscurity.
    I believe that the "overloaded" meaning of the word swap is a big source of confusion.
     
    Options -l and -s of the swap command are complementary to each other.
    Both options are crucial to virtual swap (virtual memory) administration.

    Consider the previously given swap example on the related status and summary posts.
    The same information processed with the -h option may help the eyes:

    $ swap -lh; swap -sh
    swapfile                    dev  swaplo  blocks  free
    /dev/zvol/dsk/rpool/swap  195,2      4K    8.0G  8.0G
    total: 856M allocated + 162M reserved = 1016M used, 12G available

    Most people normally have other things to do than analyze swap!
    Nevertheless, once in a while it may be relevant to do so.
    Facts to consider for memory resource planning include:

    • allocated is a concrete value: it has already been written to by processes. It doesn't include text or static data from the running executables. It corresponds to memory pages either in RAM or on disk. If blocks = free then the pages are all in RAM; otherwise some (blocks - free) have reached the disk-based portion of the virtual swap space.
       
    • reserved is an "abstract" value that may need to be honored, eventually.
      Keeping free (or the sum of all free values, in case there are many disk-based devices) above reserved is the only way to ensure reserved can be honored without sacrificing physical memory from the RAM-based virtual swap space or running out of virtual memory.
        
    • available is normally equal to or bigger than the sum of all blocks. That's because some unused RAM (physical memory) may be used as swap space as well. Rationale: on systems with a huge amount of spare RAM, disk-based virtual swap space can be greatly reduced.

      On the given example:
      The system has 8 GB of RAM of which, so far, 4 GB is for (virtual) swap.
       
    • Physical memory (prtconf | grep Mem) is a limited resource at a premium. Everybody knows that overcommitting it causes swapping and, in general, degrades system performance.

      Useful figures (in 4 KB page units; see the conversion sketch right after this list) can be obtained as follows:

      $ kstat -p -n system_pages |
        egrep 'availrmem|physmem|freemem|locked|pp_kernel'

       
    • ISM (Intimate Shared Memory) is a completely different story. Its pages are "born" on disk.
      When such pages get used, they make it to physical memory and are locked there.
      This suggests that other anonymous pages may be swapped out in their place.
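 
      A small conversion sketch for the kstat figures above (assuming 4 KB pages, i.e. X86; nawk is used so printf is available):
 
      $ kstat -p -n system_pages | nawk '
          /:freemem|:availrmem/ { n = split($1, a, ":"); printf "%-10s %8.1f MB\n", a[n], $2 * 4096 / 1048576 }'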
         
    A few safe conclusions are:
       
    1. Monitor the desired relationship reserved <  free.
      Everything that is reserved must be guaranteed to be honored.
       
    2. Do provide disk space, otherwise physical memory will be sacrificed.
      Furthermore, allocation will be limited to available physical memory only.
      Note that disk space alone doesn't help dealing with ISM which locks pages.
       
    Notes:

    • The swap concept continues to be a backing store for anonymous (anon) memory pages from the stack, heap, COW and shared segments. Pages of text and static data segments are simply backed by executable files on the ordinary file system (executable files have names, so text and data pages aren't anonymous).
        
    • After shrinking or expanding swap space, the above behavior and numbers "normalize" only after the next reboot. During the transition of a swap space shrink or expansion, the behavior and numbers seem to vary in unusual ways.
        
    • It seems difficult to correlate all these values with those from prstat and zonestat, perhaps due to rounding or truncation in each tool, or maybe just dynamics.
            

    Swap summary

    Listing swap information is easy, but interpreting it is difficult.
    So far I believe the tools aren't precise and just give rough approximations.
     
    Traditionally, swap used to mean only the "disk portion" of the virtual memory.
    At some point Solaris has changed that to: swap = "traditional swap" + "unused RAM".
    And by doing this, it turned out that swap somehow became the virtual memory itself.
    It's also called virtual swap causing a great deal of confusion and obscurity.

    I apologize in advance if I incidentally get caught by what I just stated above.

    The general virtual swap usage is reported by option -s (summary).
    For example:

    $ swap -s
    total: 872608k bytes allocated + 165044k reserved = 1037652k used, 12709624k available

    As immediately perceived from the k suffix, all output is in units of 1024 bytes.
    Of course the important fields are:

    • allocated: amount of "asked" virtual swap (virtual memory) already written to.
      Note that it doesn't mean that any portion of it has to be necessarily on disk.
      Essentially, it corresponds to memory pages effectively used by processes.
       
      On the given example:
      872608k = 0.832 GB of virtual memory are already in use.
        
    • reserved: amount of "asked" virtual swap (virtual memory) not yet written to.
      This consumes absolutely no space (no memory pages are set aside).

      On the given example:
      165044k = 0.157 GB of virtual memory is yet to be used.
        
    • available: virtual swap (virtual memory) that can be provided to processes.
      On a fresh system this figure is generally almost equal to free from swap -l.
      Disk-based virtual swap space, if present, contributes to it, of course.
      RAM-based virtual swap space may contribute to this figure
      in case swap -l itself shows reserved > blocks.
       
      On the given example:
      12709624k = 12.120 GB of virtual memory is available.
        
    The /usr/bin/swap binary from Solaris 11 and its -sh option work fine on Solaris 10.
    Of course the binary needs to be of the same architecture (X86 or SPARC).
    That's great because -h is very useful.
      

    Monday, July 9, 2012

    Listing swap status

    Listing swap information is easy, but interpreting it is difficult.
    So far I believe the tools aren't precise and just give rough approximations.

    Traditionally, swap used to mean only the "disk portion" of the virtual memory.
    At some point Solaris has changed that to: swap = "traditional swap" + "unused RAM".
    And by doing this, it turned out that swap somehow became the virtual memory itself.
    It's also known as virtual swap causing a great deal of confusion and obscurity.

    I apologize in advance if I incidentally get caught by what I just stated above.

    The "disk space" portion of virtual swap is reported by option -l (list status).
    For example:
      
    $ swap -l
    swapfile                    dev  swaplo    blocks      free
    /dev/zvol/dsk/rpool/swap  195,2       8  16777208  16765736
     
    On the above output, the ZFS volume rpool/swap is the only disk-based swap device.
    The last three fields in the output are all in terms of 512 bytes units:
     
    • swaplo : Starting page offset on the device.
       
      On the given example:
      8 x 512 = 4096 = 4 KB, because it's an X86 system.
      On X86 the page size is 4 KB, while on SPARC it's typically 8 KB.
       
    • blocks: number of 512-byte blocks on the disk-based swap device.
       
      On the given example:
      16777208 x 512 = 8589930496 = 8388604 KB, approximately 7.999 GB.
      As rpool/swap is 8 GB, the difference can be ZFS metadata overhead (?).
       
    • free: number of free 512-byte blocks on the disk-based swap device.
      Two fundamental observations for subsequent conclusions are:

      1. Is it equal to the blocks field or not?
          
        On the given example:
        (16777208 - 16765736) x 512 = 11472 x 512 = 5873664 = 5736 KB.
        Approximately 0.005 GB disk-based swap is being used.
        If it's equal, of course, then no disk-based swap is being used.
         
      2. Is it less than reserved from swap -s or not?
          
        On the given example:
        16765736 x 512 = 8584056832 = 8382868  KB, around 7.994 GB.
         
    The /usr/bin/swap binary from Solaris 11 and its -lh option work fine on Solaris 10.
    Of course the binary needs to be of the same architecture (X86 or SPARC).
    That's great because -h is very useful.
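 
    As a convenience, the 512-byte block arithmetic above can be scripted.
    A small sketch (the nawk field numbers assume the exact swap -l layout shown above; the output line corresponds to the example):
 
    $ swap -l | nawk 'NR > 1 { printf "%s: %.2f GB total, %.2f GB free\n", $1, $4 * 512 / 1073741824, $5 * 512 / 1073741824 }'
    /dev/zvol/dsk/rpool/swap: 8.00 GB total, 7.99 GB free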
       

    Expanding swap

    Things change or underestimation may happen.
    To increase the swap space backed by disk:
     
    • Verify that disk-based swap is not in use (blocks == free):
       
      # swap -l
      swapfile                   dev  swaplo   blocks    free
      /dev/zvol/dsk/rpool/swap 256,1      16  8388592 8388592
       
    • If not in use, simply resize the ZFS volume:
       
      # zfs set volsize=30G rpool/swap
       
    • Or, if in use, create another swap device and add it to the VMM:
      (don't forget to adjust /etc/vfstab for persistence upon next reboots).
       
      # zfs create -V 26G -b $(pagesize) rpool/swap-x1
      # zfs set primarycache=metadata rpool/swap-x1

      # swap -a /dev/zvol/dsk/rpool/swap-x1

       
    • Verify the new disk-based swap info:
       
      # swap -l
      swapfile                   dev  swaplo   blocks     free
      /dev/zvol/dsk/rpool/swap 256,1      16 62914544 62914544
       
       
    Don't forget to adjust changes accordingly in /etc/vfstab.
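 
    For an added device, the corresponding /etc/vfstab line would look roughly as follows.
    (A sketch; the columns are device to swap, device to fsck, mount point, FS type, fsck pass, mount at boot, mount options.)
 
      # grep swap-x1 /etc/vfstab
      /dev/zvol/dsk/rpool/swap-x1   -   -   swap   -   no   -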
      

    Shrinking swap

    Things change or an overestimation may happen.
    To reduce the swap space portion backed by disk:
     
    Verify that disk-based swap is not in use (blocks == free):
      
    # swap -l
    swapfile                   dev  swaplo   blocks     free
    /dev/zvol/dsk/rpool/swap 256,1      16 62914544 62914544
     
    Remove the swap device from the VMM:
      
    # swap -d  /dev/zvol/dsk/rpool/swap
     
    Confirm the old device is gone:
      
    # swap -l
    No swap devices configured
     
    Reduce the size or create a smaller ZFS volume for the new swap device:
      
    # zfs destroy rpool/swap
    # zfs create -V 4G -b $(pagesize) rpool/swap

    # zfs set primarycache=metadata rpool/swap
     
    Add the swap device to the VMM:
      
    # swap -a /dev/zvol/dsk/rpool/swap
     
    Verify the new disk-based swap info:
      
    # swap -l
    swapfile                   dev  swaplo   blocks    free
    /dev/zvol/dsk/rpool/swap 256,1      16  8388592 8388592  

    Note that we may have to invert the order of operations.
    That is, begin by adding a new (smaller) device and then remove the old (larger) device.
    In any case, don't forget to adjust changes accordingly in /etc/vfstab.
          

    Friday, July 6, 2012

    AI - Framework installation


    Beyond the AI server itself, an ISC-DHCP server and an IPS repository server are required.
    Assume 192.168.0.11 is the IP address of the AI server being set up.
         
    Take a snapshot of the system (just in case):
      
    # zfs snapshot -r rpool@backup
    # zfs destroy rpool/swap@backup
    # zfs destroy rpool/dump@backup
     
    To install the AI server and tools, simply add the installadm package.
    For a "dry-run" (preview of the installation changes) add the -nv  option. 
      
    # pkg install installadm

    At first, there should be nothing configured:
      
    # installadm list
    There are no services configured on this server.

    Perhaps it's a good idea to pre-create a ZFS dataset for it:
      
    # zfs create -o compression=on rpool/export/auto_install
    # zfs create rpool/export/auto_install/files

    Enable the mDNS (multicast DNS):
      
    # svcadm enable dns/multicast

    Then create the default service and let most things setup automatically:
      
    # installadm create-service

    Creating service from: pkg:/install-image/solaris-auto-install
    OK to use subdir of /export/auto_install/ to store image? [y/N]: y
    Download: install-image/solaris-auto-install ...  Done
    Install Phase ...  Done
    Package State Update Phase ...  Done
    Image State Update Phase ...  Done
    Reading Existing Index ...  Done
    Indexing Packages ...  Done

    Creating service: solaris11-i386

    Image path: /export/auto_install/solaris11-i386

    Refreshing install services

    Creating default-i386 alias.

    No local DHCP configuration found. 
    This service is the default alias for all PXE clients. 
    If not already in place, the following should be added to the DHCP configuration:
            Boot server IP       : 192.168.0.11
            Boot file            : default-i386/boot/grub/pxegrub

    Refreshing install services

    The above message "No local DHCP configuration found..." can be ignored.
    One must set /etc/inet/dhcpd4.conf as in the ISC-DHCP AI configuration sample.
    As per the warning message, the key clauses of the "PXEBoot" class must be set:
    • "filename" (Boot file)
    • "next-server" (Boot server IP)
     
    Watch out for multi-homed AI server hosts as they need adjustment.
    In such cases it's necessary to define which networks AI is to serve.
    The default is all networks (0.0.0.0), which may cause trouble.
      
    $ svcprop -p all_services/networks install/server:default
    0.0.0.0/0
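 
    A hedged sketch of restricting the service to a single network (the property name comes from the svcprop output above; the exact setprop syntax is my assumption):
 
    # svccfg -s install/server:default setprop all_services/networks = 192.168.0.0/24
    # svcadm refresh install/server:default
    # svcprop -p all_services/networks install/server:default
    192.168.0.0/24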
        

    ISC-DHCP AI configuration

    ISC-DHCP is an important part of an AI (Automated Installer) infrastructure.
    Complementing the ISC-DHCP baseline configuration, here's a sample for X86 clients:
     
    ...

    group {
      
      host XXXXXXXXXXXX { 
        hardware ethernet XX:XX:XX:XX:XX:XX;
        filename "01XXXXXXXXXXXX";
        fixed-address 192.168.0.210;
      }

      host YYYYYYYYYYYY {
        hardware ethernet YY:YY:YY:YY:YY:YY;
        filename "01YYYYYYYYYYYY";
        fixed-address 192.168.0.211;
      }

      # The unspecified X86 clients use the defaults (catch-all)
      class "PXEBoot" {
        match if
          (substring(option vendor-class-identifier, 0, 9) = "PXEClient");
        filename "default-i386/boot/grub/pxegrub";
      }

      # The AI server from where to retrieve filename
      next-server 192.168.0.11;

    }
       
    Note: The only difference for SPARC would be the "catch-all" class.
        

    ISC-DHCP baseline configuration

    There's a good and well explained sample at /etc/inet/dhcpd4.conf.example.
    Nevertheless, here's a base configuration which I believe is useful:

    authoritative;

    option domain-name "example.local";
    option domain-name-servers ns1.example.local, ns2.example.local;

    default-lease-time 600;
    max-lease-time 7200;

    log-facility local7;

    class "Solaris" {
      match pick-first-value
       (option dhcp-client-identifier, hardware);
    }

    # Clients by MAC
    subclass "Solaris" 1:XX:XX:XX:XX:XX:XX;
    subclass "Solaris" 1:YY:YY:YY:YY:YY:YY;

    subnet 192.168.0.0 netmask 255.255.255.0 {

      pool {
        allow members of "Solaris";
        range 192.168.0.100 192.168.0.150;
      }

      option broadcast-address 192.168.0.255;
      option routers 192.168.0.254;

    }

    # Group clients by specifics (not shown)
    group {

      ...

      host XXXXXXXXXXXX {
        hardware ethernet XX:XX:XX:XX:XX:XX;
        fixed-address 192.168.0.210;
      }

      host YYYYYYYYYYYY {
        hardware ethernet YY:YY:YY:YY:YY:YY;
        fixed-address 192.168.0.211;
      }

    }

    Note: In this example I assume the server participates in subnet 192.168.0.0.
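 
    Once /etc/inet/dhcpd4.conf is in place, don't forget to enable the server itself.
    (A sketch; on Solaris 11 the ISC daemon is delivered as the SMF instance below.)
 
    # svcadm enable svc:/network/dhcp/server:ipv4
    # svcs dhcp/server:ipv4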
        

    ISC-DHCP installation

    I assume Solaris 11 here, letting the past go and looking ahead.

    If installation mode was server:
      
    # pkg info -r system/solaris-large-server
              Name: group/system/solaris-large-server
           Summary: Oracle Solaris Large Server
       Description: Provides an Oracle Solaris large server environment
          Category: Meta Packages/Group Packages
             State: Installed
         Publisher: solaris
           Version: 0.5.11
     Build Release: 5.11
            Branch: 0.175.0.0.0.2.2576
    Packaging Date: October 20, 2011 06:36:10 AM
              Size: 5.45 kB
              FMRI: pkg://solaris/group/system/solaris-large-server@0.5.11,5.11-0.175.0.0.0.2.2576:20111020T063610Z

    Then ISC-DHCP is already installed:
      
    # pkg info -r isc-dhcp
              Name: service/network/dhcp/isc-dhcp
           Summary: ISC DHCP Server and Relay Agent.
       Description: ISC DHCP is open source software that implements the Dynamic
                    Host Configuration Protocols for connection to a local network.
                    This package includes the ISC DHCP server, relay agent and the
                    omshell tool.
          Category: System/Services
             State: Installed
         Publisher: solaris
           Version: 4.1.0.4
     Build Release: 5.11
            Branch: 0.175.0.6.0.2.0
    Packaging Date: March 17, 2012 01:04:34 AM
              Size: 7.51 MB
              FMRI: pkg://solaris/service/network/dhcp/isc-dhcp@4.1.0.4,5.11-0.175.0.6.0.2.0:20120317T010434Z

    But if it was uninstalled or installation mode was desktop (on X86), then:
      
    # pkg install isc-dhcp
      

    Solaris 11 release & update

    Here's how to check Solaris 11 release and update:

    # cat /etc/release
                         Oracle Solaris 11 11/11 X86
      Copyright (c) 1983, 2011, Oracle and/or its affiliates.  All rights reserved.
                          Assembled 18 October 2011

    # pkg info entire | grep Summary | sed 's/.*[\(]\(.*\)[\)].*/\1/'
    Oracle Solaris 11 11/11 SRU 8.5

    Or, alternatively:

    # pkg info entire | grep Version | sed 's/.*[\(]\(.*\)[\)].*/\1/'
    Oracle Solaris 11.3.1.5.2

        

    Tuesday, July 3, 2012

    VirtualBox VM startup

    A fully configured VM is expected to be ready to launch.
    I like running virtualized Solaris guests from a Solaris 11 host with no desktop (GUI) installed.

    For RDP access, so the guest console is accessible:

    $ VBoxHeadless --startvm vm1 &

    For no console access (generally not a good idea, unless security constraints mandate it):
     
    $ VBoxHeadless --startvm vm1 --vrde off &

    Alternatively, on more recent versions of VirtualBox you may be more attracted to the VirtualBox VM autostart feature, which saves us from typing the above commands for the needed VMs every time the host system is booted.
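 
    A rough sketch of the per-VM part of it (the host-wide autostart database setup is omitted; these VBoxManage flags exist as of VirtualBox 4.2):
 
    $ VBoxManage modifyvm vm1 --autostart-enabled on
    $ VBoxManage modifyvm vm1 --autostart-delay 30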
      

    VirtualBox setup information

    This post is just to wrap up the basic toolset for operating VirtualBox.
    Listing available VMs and their detailed configuration:
     
    $ VBoxManage list vms
    $ VBoxManage showvminfo vm1 --details | less


    Other useful options can be obtained from:
     
    $ VBoxManage list