Saturday, December 28, 2013

The 2013 retrospective

We are finally coming to the end of 2013 according to the Gregorian Calendar.
The improvements in Solaris 11.1 and other Oracle/Sun products have continued.
Not only did the platform and infrastructure evolve; applications did too, notably Oracle Database 12c.
There was an amazing wealth of datacenter evolution around virtualization, especially at the following layers:

  • Operating System
  • Networking
  • Storage
 
Solaris, of course, continues to be the #1 enterprise cloud-ready operating system.
There's absolutely no way to contest this fact.

This year we saw the introduction of the massive scale-up M6-32.
It is impressively capable of:
 
  • 32 TB of RAM.
    An unprecedented amount of memory.
    Large pages and the Solaris VMM 2.0 can handle that.
     
  • 3072 processing threads.
    For a single process!
    Yes, the Solaris schedulers can handle that almost linearly.
     
  • 3 TB/s interconnect.
    More than 100x faster than InfiniBand.
    That's because it's on silicon: nanoseconds instead of microseconds.
     
  • 1 TB/s I/O.
    Faster than many current parallel file systems.
  
This redefined the conventional wisdom of deploying a multitude of commodity systems for integer computations.
The result is far lower TCO and better ROI despite the higher TCA, all key IT indicators.
In the cloud era, on-premises solutions ought to have the Solaris edge for IT to survive.
 
For the next year, as demand explodes, I am confident that Solaris will keep up.
But I have to say that the would-be competitors will be put through their paces.
Of course, those of us who survive until then will find out.
I hope to be there and to see you there.
Take care and happy new year!
 

Friday, December 27, 2013

Zone cloning

In this post I intend to exemplify cloning a non-global zone (NGZ).
By the end it should be quite obvious why cloning is so powerful and desirable.
In this context I understand cloning as a duplication within the same host.
An identical NGZ on another host is another topic related to migration.
The underlying support for cloning is ultimately provided by ZFS.

I make the following assumptions:
  • The system runs Solaris 11 or higher.
  • There is a dedicated ZFS pool for NGZ paths.
  • There is an accessible IPS local repository.
  • There's no DNS service implemented yet.
  • There is an available (unused) network interface.

$ pkg info entire | grep Version
       Version: 0.5.11 (Oracle Solaris 11.1.13.6.0)


$ zpool list zone
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zone   15.9G   622M  15.3G   3%  1.00x  ONLINE  -


$ zfs list -r -d 1 zone
NAME             USED  AVAIL  REFER  MOUNTPOINT
zone             622M  15.0G    35K  /zone
zone/server-1a   479M  15.0G    33K  /zone/server-1a
zone/server-1b  70.8M  15.0G    34K  /zone/server-1b
zone/server-1c  70.7M  15.0G    34K  /zone/server-1c


$ pkg publisher
PUBLISHER        TYPE     STATUS P LOCATION
solaris          origin   online F http://192.168.0.100/


$ svcs '*dns*'
STATE          STIME    FMRI
disabled        9:17:59 svc:/network/dns/client:default
disabled        9:18:02 svc:/network/dns/multicast:default
disabled        9:18:10 svc:/network/dns/server:default


# dladm show-phys -o link,state,speed,duplex,device
LINK              STATE      SPEED  DUPLEX    DEVICE
net0              up         1000   full      e1000g4
net3              up         1000   full      e1000g7
server-1c/net3    up         1000   full      e1000g7
net2              up         1000   full      e1000g6
server-1b/net2    up         1000   full      e1000g6
net1              up         1000   full      e1000g5
server-1a/net1    up         1000   full      e1000g5
net4              unknown    0      unknown   e1000g8
net7              unknown    0      unknown   e1000g11
net6              unknown    0      unknown   e1000g10
net5              unknown    0      unknown   e1000g9

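For scripting, candidate interfaces can be picked out of that listing: the links whose STATE is unknown and that aren't zone-assigned (no slash in the LINK name). A minimal sketch, parsing a captured excerpt of the output above (the field positions are an assumption based on this listing; on a live system you would pipe the `dladm show-phys` output instead):

```shell
# Candidate (unused) interfaces: STATE is "unknown" and the LINK name has
# no "/" (zone-assigned links show up as zonename/netN). Parsed here from
# a captured excerpt; a live system would pipe dladm output instead.
dladm_out='net0            up       1000  full     e1000g4
server-1a/net1  up       1000  full     e1000g5
net4            unknown  0     unknown  e1000g8
net7            unknown  0     unknown  e1000g11'

unused=$(printf '%s\n' "$dladm_out" |
         awk '$2 == "unknown" && $1 !~ /\// { print $1 }')
printf '%s\n' "$unused"   # → net4 and net7, one per line
```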

Let's create another NGZ (server-1d) as a clone of server-1a.
Note from the previous output that server-1b and server-1c are clones.
More clearly:

$ zfs list -t all -r -d 2 -o name,used zone/server-1a
NAME                                   USED
zone/server-1a                         479M
zone/server-1a/rpool                   479M
zone/server-1a/rpool@server-1c_snap00     0
zone/server-1a/rpool@server-1b_snap00     0
zone/server-1a/rpool/ROOT              479M
zone/server-1a/rpool/VARSHARE           39K
zone/server-1a/rpool/export            134K


$ zfs get -o value origin zone/server-{1b,1c}/rpool
VALUE
zone/server-1a/rpool@server-1b_snap00
zone/server-1a/rpool@server-1c_snap00

Extract the source NGZ (server-1a) configuration:

# zonecfg -z server-1a export -f /tmp/server-1a.cfg

# cat /tmp/server-1a.cfg
create -b
set brand=solaris
set zonepath=/zone/server-1a
set autoboot=true
set ip-type=exclusive
add net
set allowed-address=192.168.0.11/24
set configure-allowed-address=true
set physical=net1
end

add attr
set name=description
set type=string
set value=Template
end


Edit the target NGZ (server-1d) configuration accordingly:
(note: if net4 is already a VNIC, then use net instead of anet)

# cp /tmp/server-{1a,1d}.cfg

# cat /tmp/server-1d.cfg
create -b
set brand=solaris
set zonepath=/zone/server-1d
set autoboot=true
set ip-type=exclusive
add net
set allowed-address=192.168.0.14/24
set configure-allowed-address=true
set physical=net4
end

add attr
set name=description
set type=string
set value="NIS server"
end

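The copy-and-edit step can be scripted so that future clones only need a parameter change. A hedged sketch using sed, with the substitutions taken from the example above (the source file is recreated here so the snippet is self-contained; on the real system /tmp/server-1a.cfg would come from the zonecfg export):

```shell
# Derive the target zone configuration from the source one by rewriting
# only the fields that differ: zonepath, allowed-address, physical NIC
# and description. Values are the ones used in this post's example.
cat > /tmp/server-1a.cfg <<'EOF'
create -b
set brand=solaris
set zonepath=/zone/server-1a
set autoboot=true
set ip-type=exclusive
add net
set allowed-address=192.168.0.11/24
set configure-allowed-address=true
set physical=net1
end
add attr
set name=description
set type=string
set value=Template
end
EOF

sed -e 's|zonepath=/zone/server-1a|zonepath=/zone/server-1d|' \
    -e 's|allowed-address=192.168.0.11/24|allowed-address=192.168.0.14/24|' \
    -e 's|physical=net1|physical=net4|' \
    -e 's|value=Template|value="NIS server"|' \
    /tmp/server-1a.cfg > /tmp/server-1d.cfg

grep -Ec 'server-1d|192\.168\.0\.14|net4' /tmp/server-1d.cfg   # → 3
```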

Import the target NGZ (server-1d) configuration:

# zonecfg -z server-1d -f /tmp/server-1d.cfg

# zonecfg -z server-1d info
zonename: server-1d
zonepath: /zone/server-1d
brand: solaris
autoboot: true
bootargs:
file-mac-profile:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
fs-allowed:
net:
    address not specified
    allowed-address: 192.168.0.14/24
    configure-allowed-address: true
    physical: net4
    defrouter not specified

attr:
    name: description
    type: string
    value: "NIS server"


# zoneadm list -cv
  ID NAME      STATUS     PATH             BRAND    IP   
   0 global    running    /                solaris  shared
   1 server-1c running    /zone/server-1c  solaris  excl 
   2 server-1b running    /zone/server-1b  solaris  excl 
   3 server-1a running    /zone/server-1a  solaris  excl 
   - server-1d configured /zone/server-1d  solaris  excl


Create a configuration profile to help streamline this and future cloning operations.

NOTE
During the creation of the configuration profile, selecting None for the networking configuration may avoid mistakes, but it's probably better to specify the correct settings. It doesn't seem a good idea to include the name services configuration while running the sysconfig create-profile utility: the results seem rather terse and minimalist. I would rather manually edit the configuration profile afterwards (using SMF information extracted from other golden or template systems), as exemplified later for enabling NIS services right from the start. Furthermore, there may be complaints about IPv6, hence I prefer to edit out its default configuration. If using the anet zone configuration, net0 is probably the correct choice; but if a physical net interface is referenced in the zone configuration, then choose the corresponding interface.

An interesting alternative is to copy from a configuration profile template initially generated by sysconfig create-profile and then adjust it manually.
 
In other words, my advice is:
  • Specify the correct network settings, using net0 for VNICs (anet) and the matching physical interface otherwise. The IP address must respect any allowed-address clause in the zone configuration. Example: Configuration profile - NIS client
  • Do not specify any name services configuration when initially generating the profile via sysconfig create-profile. Manually edit the generated profile to add name services and anything else that suits the particular purpose. Example: Configuration profile - NIS client
  • Remove the IPv6 configuration section altogether if you'll use just IPv4. That is, remove the following lines from the configuration profile:
     
    <property_group type="application" name="install_ipv6_interface">
      <propval type="astring" name="stateful" value="yes"/>
      <propval type="astring" name="address_type" value="addrconf"/>
      <propval type="astring" name="name" value="net10/v6"/>
    </property_group>
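
Removing that section by hand is error-prone, so it can be scripted. A sketch using sed to delete everything from the line containing the install_ipv6_interface opening tag through the next closing property_group tag (this assumes the opening tag sits on a single line, as sysconfig-generated profiles emit it); demonstrated on a reduced sample:

```shell
# Strip the install_ipv6_interface property group from a profile.
# Demonstrated on a reduced sample; the real file would be the
# sysconfig-generated SC profile (e.g. /tmp/server-1d.xml).
cat > /tmp/profile-sample.xml <<'EOF'
<service name="network/install" version="1" type="service">
  <property_group type="application" name="install_ipv6_interface">
    <propval type="astring" name="stateful" value="yes"/>
    <propval type="astring" name="address_type" value="addrconf"/>
    <propval type="astring" name="name" value="net10/v6"/>
  </property_group>
  <property_group type="application" name="install_ipv4_interface">
    <propval type="astring" name="name" value="net10/v4"/>
  </property_group>
</service>
EOF

sed '/name="install_ipv6_interface"/,/<\/property_group>/d' \
    /tmp/profile-sample.xml > /tmp/profile-ipv4.xml

grep -c 'property_group' /tmp/profile-ipv4.xml   # → 2 (IPv4 open + close)
```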
Taking the above advice into consideration, create the very first (initial) configuration profile, to be customized and subsequently used as a baseline for similar installations:

# sysconfig create-profile -o /tmp/server-1d.xml
SC profile successfully generated.
Exiting System Configuration Tool. Log is available at:
/system/volatile/sysconfig/sysconfig.log.6643


If a baseline configuration profile already exists, then just adjust it accordingly. In general, the following fields will be updated (besides the deletion of the aforementioned IPv6 section). Here's an unrelated, independent example:

# diff /tmp/dns-1.xml /tmp/dns-2.xml
40c40
<         <propval type="astring" name="nodename" value="dns-1"/>
---
>         <propval type="astring" name="nodename" value="dns-2"/>
69,70c69,70
<         <propval type="net_address_v4" name="static_address" value="192.168.0.84/24"/>
<         <propval type="astring" name="name" value="net9/v4"/>
---
>         <propval type="net_address_v4" name="static_address" value="192.168.0.87/24"/>
>         <propval type="astring" name="name" value="net10/v4"/>

Shut down the source NGZ (server-1a) before performing the cloning.
In general, there should be a golden template NGZ ready to be cloned.

# zoneadm -z server-1a shutdown

# zoneadm list -cv
  ID NAME      STATUS     PATH             BRAND    IP   
   0 global    running    /                solaris  shared
   1 server-1c running    /zone/server-1c  solaris  excl 
   2 server-1b running    /zone/server-1b  solaris  excl 
   - server-1a installed  /zone/server-1a  solaris  excl 
   - server-1d configured /zone/server-1d  solaris  excl 


# zoneadm -z server-1d clone -c /tmp/server-1d.xml server-1a
The following ZFS file system(s) have been created:
    zone/server-1d
Progress being logged to ...
Log saved in non-global zone as ...


# zoneadm list -cv
  ID NAME      STATUS     PATH             BRAND    IP   
   0 global    running    /                solaris  shared
   1 server-1c running    /zone/server-1c  solaris  excl 
   2 server-1b running    /zone/server-1b  solaris  excl 
   - server-1a installed  /zone/server-1a  solaris  excl 
   - server-1d installed  /zone/server-1d  solaris  excl
 


Resume the source NGZ (server-1a) to its fully operational state.
As previously noted, this isn't needed if a golden template is being used.

# zoneadm -z server-1a boot

Before booting the cloned NGZ (server-1d) for the first time, make minor adjustments such as manually editing /zone/server-1d/root/etc/hosts. If much more elaborate measures are needed, then there's a chance that cloning may not be the best solution. Of course, it all depends on a case-by-case analysis.

# cat /zone/server-1d/root/etc/hosts
#
# Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1             localhost
127.0.0.1       localhost                loghost
#
192.168.0.14    server-1d.business.corp  server-1d


The above /etc/hosts example may not be adequate for NIS services, unless the even less secure dynamic discovery on the local network is used. For NIS in direct mode, it's typically also required to add at least two NIS servers, such as:

# cat /zone/server-1d/root/etc/hosts
...

192.168.0.14    server-1d.business.corp  server-1d
#
192.168.0.202       nis-2.business.corp  nis-2
192.168.0.203       nis-3.business.corp  nis-3
   
For NIS services, the relevant part of the configuration profile changes from:
  
<service version="1" type="service" name="system/name-service/switch">
  <property_group type="application" name="config">
    <propval type="astring" name="default" value="files"/>
    <propval type="astring" name="printer" value="user files"/>
  </property_group>
  <instance enabled="true" name="default"/>
</service>

To:

<service version="1" type="service" name="system/name-service/switch">
  <property_group type="application" name="config">
    <propval type="astring" name="default" value="files nis"/>
    <propval type="astring" name="printer" value="user files nis"/>
    <propval type="astring" name="netgroup" value="nis"/>
  </property_group>
  <instance enabled="true" name="default"/>
</service>

<service version="1" type="service" name="network/nis/domain">
  <property_group type="application" name="config">
    <propval type="hostname" name="domainname" value="business.corp"/>
    <property type="host" name="ypservers">
      <host_list>
        <value_node value="nis-2"/>
        <value_node value="nis-3"/>
      </host_list>
    </property>
  </property_group>
  <instance enabled="true" name="default"/>
</service>

<service version="1" type="service" name="network/nis/client">
  <instance enabled="true" name="default"/>
</service>

One might well wonder how I found out what to substitute in the above XML excerpt. For more detail on how to obtain the above changes, please read my other posts about SMF info extraction and NIS & NSS. Of course, I found out which services to inspect based on the online manuals and references.

For the final step it's advisable to use two terminals: one for monitoring the console during the first boot, the other for issuing the zone boot command. Depending on the existing configuration of the source NGZ, it will take a little while for the system to apply the inherent changes to the newly cloned NGZ.

# zlogin -C server-1d
[Connected to zone 'server-1d' console]
 
# zoneadm -z server-1d boot  (from another terminal)
[NOTICE: Zone booting up]

SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle ... All rights reserved.
Hostname: unknown
Hostname: server-1d

server-1d console login:


Type ~. to detach from the console (or ~~. if nested twice, and so on) and watch the results:

# zfs list -r -t all -d 1 zone
NAME             USED  AVAIL  REFER  MOUNTPOINT
zone             669M  15.0G    36K  /zone
zone/server-1a   487M  15.0G    33K  /zone/server-1a
zone/server-1b  70.9M  15.0G    34K  /zone/server-1b
zone/server-1c  70.8M  15.0G    34K  /zone/server-1c
zone/server-1d  38.1M  15.0G    34K  /zone/server-1d


Thanks to ZFS, cloning is naturally fast and extremely space-efficient.
We were able to quickly get a new, fully functional OS instance for just around 40 MB! In addition to the near-zero virtualization overhead, this is a unique advantage of Solaris.
   
There is one caveat when it comes to updating a system with multiple cloned zones: as updates are applied, they are duplicated on each and every cloned zone, thus lessening the space-savings benefit (zone server-1f was cloned from server-1a after an update).

# zfs list -r -d 1 zone
NAME             USED  AVAIL  REFER  MOUNTPOINT
zone            1.85G  13.8G    38K  /zone
zone/server-1a   187M  13.8G    33K  /zone/server-1a
zone/server-1b   304M  13.8G    34K  /zone/server-1b
zone/server-1c   301M  13.8G    34K  /zone/server-1c
zone/server-1d   301M  13.8G    34K  /zone/server-1d
zone/server-1e   739M  13.8G    35K  /zone/server-1e
zone/server-1f  59.7M  13.8G    34K  /zone/server-1f

 
To mitigate the problem, the update plan must take into consideration the redeployment of cloned zones from updated golden templates. This implies a best practice:
Keep the actual configuration and installation scripts synchronized.
One might wonder whether deduplication would be effective here; I'm not convinced.
  

Wednesday, December 4, 2013

GNOME desktop

This post started just as a way to share part of my desktop theme with you.
Actually, what matters most for Unix administration is just a terminal window :-)
It took a while, but nowadays I'm happy to have gotten rid of GUI administration long ago.
 
By the way, to get a GNOME desktop out of a text-only installation, IPS is there to help. In my experience, any warnings or error messages can be safely ignored; just be sure to reboot the system after the additional software gets installed.

# pkg install solaris-desktop

But the desktop installation is not quite as simple as that if your intention is to have a result that exactly matches a desktop installed right from the start.