Saturday, December 28, 2013

The 2013 retrospective

We are finally coming to the end of 2013 according to the Gregorian Calendar.
The improvements in Solaris 11.1 and other Oracle/Sun products have continued.
Not only did the platform and infrastructure evolve, but applications did too, notably Oracle Database 12c.
It was an amazing wealth of datacenter evolution around virtualization, especially at the:

  • Operating System
  • Networking
  • Storage
 
Solaris, of course, continues to be the #1 enterprise cloud-ready operating system.
There's absolutely no way to contest this fact.

This year we saw the introduction of the massive scale-up M6-32.
It is impressively capable of:
 
  • 32 TB of RAM.
    An unprecedented amount of memory.
    Large pages and the Solaris VMM 2.0 can handle that.
     
  • 3072 processing threads.
    For a single process!
    Yes, the Solaris schedulers can handle that almost linearly.
     
  • 3 TB/s interconnect.
    More than 100x faster than InfiniBand.
    That's because it's on silicon: nanosecond rather than microsecond latencies.
     
  • 1 TB/s I/O.
    Faster than many current parallel file systems.
  
This redefined the conventional wisdom of using a multitude of commodity systems for integer computations.
It's a far lower TCO and a better ROI against the higher TCA, all key IT indicators.
In the cloud era, on-premises solutions ought to have the Solaris edge for IT to survive.
 
For the next year, as demand explodes, I have confidence that Solaris will keep up.
But I have to say that the deemed competitors shall be put through their paces.
Of course, those of us who survive until then will find out.
I hope to be there and to see you there.
Take care and happy new year!
 

Friday, December 27, 2013

Zone cloning

In this post I intend to exemplify cloning a non-global zone (NGZ).
In the end it shall be quite obvious why cloning is so powerful and desirable.
In this context I understand cloning as a duplication within the same host.
An identical NGZ on another host is another topic related to migration.
The underlying support for cloning is ultimately provided by ZFS.

I make the following assumptions:
  • The system runs Solaris 11 or higher.
  • There is a dedicated ZFS pool for the NGZs' zone paths.
  • There is an accessible IPS local repository.
  • There's no DNS service implemented yet.
  • There is an available (unused) network interface.

$ pkg info entire | grep Version
       Version: 0.5.11 (Oracle Solaris 11.1.13.6.0)


$ zpool list zone
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zone   15.9G   622M  15.3G   3%  1.00x  ONLINE  -


$ zfs list -r -d 1 zone
NAME             USED  AVAIL  REFER  MOUNTPOINT
zone             622M  15.0G    35K  /zone
zone/server-1a   479M  15.0G    33K  /zone/server-1a
zone/server-1b  70.8M  15.0G    34K  /zone/server-1b
zone/server-1c  70.7M  15.0G    34K  /zone/server-1c


$ pkg publisher
PUBLISHER        TYPE     STATUS P LOCATION
solaris          origin   online F http://192.168.0.100/


$ svcs '*dns*'
STATE          STIME    FMRI
disabled        9:17:59 svc:/network/dns/client:default
disabled        9:18:02 svc:/network/dns/multicast:default
disabled        9:18:10 svc:/network/dns/server:default


# dladm show-phys -o link,state,speed,duplex,device
LINK              STATE      SPEED  DUPLEX    DEVICE
net0              up         1000   full      e1000g4
net3              up         1000   full      e1000g7
server-1c/net3    up         1000   full      e1000g7
net2              up         1000   full      e1000g6
server-1b/net2    up         1000   full      e1000g6
net1              up         1000   full      e1000g5
server-1a/net1    up         1000   full      e1000g5
net4              unknown    0      unknown   e1000g8
net7              unknown    0      unknown   e1000g11
net6              unknown    0      unknown   e1000g10
net5              unknown    0      unknown   e1000g9


Let's create another NGZ (server-1d) as a clone of server-1a.
Note from the previous output that server-1b and server-1c are clones.
More clearly:

$ zfs list -t all -r -d 2 -o name,used zone/server-1a
NAME                                   USED
zone/server-1a                         479M
zone/server-1a/rpool                   479M
zone/server-1a/rpool@server-1c_snap00     0
zone/server-1a/rpool@server-1b_snap00     0
zone/server-1a/rpool/ROOT              479M
zone/server-1a/rpool/VARSHARE           39K
zone/server-1a/rpool/export            134K


$ zfs get -o value origin zone/server-{1b,1c}/rpool
VALUE
zone/server-1a/rpool@server-1b_snap00
zone/server-1a/rpool@server-1c_snap00
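
In other words, the cloning ultimately boils down to ZFS snapshots and clones. A simplified sketch of what happens under the hood (the actual snapshot names and per-dataset operations are chosen and managed by zoneadm itself):

$ zfs snapshot -r zone/server-1a/rpool@server-1b_snap00
$ zfs clone zone/server-1a/rpool@server-1b_snap00 zone/server-1b/rpool

That's why a fresh clone consumes almost no space: it shares every block with its origin snapshot until it starts to diverge.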

Extract the source NGZ (server-1a) configuration:

# zonecfg -z server-1a export -f /tmp/server-1a.cfg

# cat /tmp/server-1a.cfg
create -b
set brand=solaris
set zonepath=/zone/server-1a
set autoboot=true
set ip-type=exclusive
add net
set allowed-address=192.168.0.11/24
set configure-allowed-address=true
set physical=net1
end

add attr
set name=description
set type=string
set value=Template
end


Edit the target NGZ (server-1d) configuration accordingly
(attention: if net4 is already a VNIC, then use net instead of anet):

# cp /tmp/server-{1a,1d}.cfg

# cat /tmp/server-1d.cfg
create -b
set brand=solaris
set zonepath=/zone/server-1d
set autoboot=true
set ip-type=exclusive
add net
set allowed-address=192.168.0.14/24
set configure-allowed-address=true
set physical=net4
end

add attr
set name=description
set type=string
set value="NIS server"
end
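
For reference, if the zone were to create its own VNIC instead of taking over a physical link, the net block above could be replaced by an anet block along these lines (a sketch; the linkname and lower-link values are assumptions to adapt):

add anet
set linkname=net0
set lower-link=net4
set allowed-address=192.168.0.14/24
set configure-allowed-address=true
end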


Import the target NGZ (server-1d) configuration:

# zonecfg -z server-1d -f /tmp/server-1d.cfg

# zonecfg -z server-1d info
zonename: server-1d
zonepath: /zone/server-1d
brand: solaris
autoboot: true
bootargs:
file-mac-profile:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
fs-allowed:
net:
    address not specified
    allowed-address: 192.168.0.14/24
    configure-allowed-address: true
    physical: net4
    defrouter not specified

attr:
    name: description
    type: string
    value: "NIS server"


# zoneadm list -cv
  ID NAME      STATUS     PATH             BRAND    IP   
   0 global    running    /                solaris  shared
   1 server-1c running    /zone/server-1c  solaris  excl 
   2 server-1b running    /zone/server-1b  solaris  excl 
   3 server-1a running    /zone/server-1a  solaris  excl 
   - server-1d configured /zone/server-1d  solaris  excl


Create a configuration profile to help streamline this and future cloning.

NOTE
During the creation of the configuration profile, selecting None for the networking configuration may avoid mistakes, but it's probably better to specify the correct settings. It doesn't seem a good idea to include the name services configuration while operating the sysconfig create-profile utility; the results seem rather terse or minimalist. I would rather manually edit the configuration profile afterwards (using SMF info extraction from other golden or template systems), as later exemplified for the case of enabling NIS services right from the start. Furthermore, there may be complaints about IPv6, hence I prefer to edit out its default configuration. If using the anet zone configuration, net0 is probably the correct choice; but if a net physical interface is being referenced in the zone configuration, then choose the corresponding interface.

An interesting alternative is to copy from a configuration profile template initially generated by sysconfig create-profile and then manually adjust it accordingly.
 
In other words my advice is:
  • Specify the correct network settings, using net0 for VNICs (anet) and the matching physical interface in the zone configuration. The IP address must respect any allowed-address clause in the zone configuration. Example: Configuration profile - NIS client
  • Do not specify any name services configurations when initially generating the profile via sysconfig create-profile. Manually edit the initially generated profile and add name services and anything else that makes sense for a particular purpose. Example: Configuration profile - NIS client
  • Remove the IPv6 configuration section altogether if you'll use just IPv4. That is, remove the following lines from the configuration profile:
     
    <property_group type="application" name="install_ipv6_interface">
      <propval type="astring" name="stateful" value="yes"/>
      <propval type="astring" name="address_type" value="addrconf"/>
      <propval type="astring" name="name" value="net10/v6"/>
    </property_group>
Taking into consideration the above advice, create the very first (initial) configuration profile to be customized and subsequently used as a baseline for similar installations:

# sysconfig create-profile -o /tmp/server-1d.xml
SC profile successfully generated.
Exiting System Configuration Tool. Log is available at:
/system/volatile/sysconfig/sysconfig.log.6643


If a baseline configuration profile already exists, then adjust it accordingly. In general, the following fields will be updated (beyond the deletion of the aforementioned IPv6 section). Here's an unrelated/independent example:

# diff /tmp/dns-1.xml /tmp/dns-2.xml
40c40
<         <propval type="astring" name="nodename" value="dns-1"/>
---
>         <propval type="astring" name="nodename" value="dns-2"/>
69,70c69,70
<         <propval type="net_address_v4" name="static_address" value="192.168.0.84/24"/>
<         <propval type="astring" name="name" value="net9/v4"/>
---
>         <propval type="net_address_v4" name="static_address" value="192.168.0.87/24"/>
>         <propval type="astring" name="name" value="net10/v4"/>

Shut down the source NGZ (server-1a) before performing the cloning.
In general, there should be a golden template NGZ ready to be cloned.

# zoneadm -z server-1a shutdown

# zoneadm list -cv
  ID NAME      STATUS     PATH             BRAND    IP   
   0 global    running    /                solaris  shared
   1 server-1c running    /zone/server-1c  solaris  excl 
   2 server-1b running    /zone/server-1b  solaris  excl 
   - server-1a installed  /zone/server-1a  solaris  excl 
   - server-1d configured /zone/server-1d  solaris  excl 


# zoneadm -z server-1d clone -c /tmp/server-1d.xml server-1a
The following ZFS file system(s) have been created:
    zone/server-1d
Progress being logged to ...
Log saved in non-global zone as ...


# zoneadm list -cv
  ID NAME      STATUS     PATH             BRAND    IP   
   0 global    running    /                solaris  shared
   1 server-1c running    /zone/server-1c  solaris  excl 
   2 server-1b running    /zone/server-1b  solaris  excl 
   - server-1a installed  /zone/server-1a  solaris  excl 
   - server-1d installed  /zone/server-1d  solaris  excl
 


Resume the source NGZ (server-1a) to its fully operational state.
As previously noted, this isn't needed in case a golden template is being used.

# zoneadm -z server-1a boot

Before booting the cloned NGZ (server-1d) for the first time, make minor adjustments such as manually editing /zone/server-1d/root/etc/hosts. If much more elaborate measures are needed, then there's a chance that cloning may not be the best solution. Of course, it all depends on a case-by-case analysis.

# cat /zone/server-1d/root/etc/hosts
#
# Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1             localhost
127.0.0.1       localhost                loghost
#
192.168.0.14    server-1d.business.corp  server-1d


The above /etc/hosts example may not be adequate for NIS services, unless the even more insecure local-network dynamic discovery (broadcast) is used. For NIS direct binding, it's typically also required to add at least two NIS servers, such as:

# cat /zone/server-1d/root/etc/hosts
...

192.168.0.14    server-1d.business.corp  server-1d
#
192.168.0.202       nis-2.business.corp  nis-2
192.168.0.203       nis-3.business.corp  nis-3
   
For NIS services, the relevant part of the configuration profile changes from:
  
<service version="1" type="service" name="system/name-service/switch">
  <property_group type="application" name="config">
    <propval type="astring" name="default" value="files"/>
    <propval type="astring" name="printer" value="user files"/>
  </property_group>
  <instance enabled="true" name="default"/>
</service>

To:

<service version="1" type="service" name="system/name-service/switch">
  <property_group type="application" name="config">
    <propval type="astring" name="default" value="files nis"/>
    <propval type="astring" name="printer" value="user files nis"/>
    <propval type="astring" name="netgroup" value="nis"/>
  </property_group>
  <instance enabled="true" name="default"/>
</service>

<service version="1" type="service" name="network/nis/domain">
  <property_group type="application" name="config">
    <propval type="hostname" name="domainname" value="business.corp"/>
    <property type="host" name="ypservers">
      <host_list>
        <value_node value="nis-2"/>
        <value_node value="nis-3"/>
      </host_list>
    </property>
  </property_group>
  <instance enabled="true" name="default"/>
</service>

<service version="1" type="service" name="network/nis/client">
  <instance enabled="true" name="default"/>
</service>

One might well be wondering how I found out what to substitute in the above XML excerpt. For more detail on how to obtain the above changes, please read my other posts about SMF info extraction and NIS & NSS. Of course, I found out which services to inspect based on the online manuals and references.
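
As a quick sketch of that extraction technique, on an already configured (golden) system the relevant property values to transplant into the profile can be inspected with, for instance:

# svcprop -p config svc:/system/name-service/switch:default
# svcprop -p config svc:/network/nis/domain:default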

For the final step it's advisable to use two terminals: one to monitor the console during the first boot, the other to issue the zone boot command. Depending on the existing configuration in the source NGZ, it will take a little while for the system to apply the inherent changes to the newly cloned NGZ.

# zlogin -C server-1d
[Connected to zone 'server-1d' console]
 
# zoneadm -z server-1d boot  (from another terminal)
[NOTICE: Zone booting up]

SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle ... All rights reserved.
Hostname: unknown
Hostname: server-1d

server-1d console login:


Hit ~. to disconnect from the console (or ~~. if nested twice, and so on) and watch the results:

# zfs list -r -t all -d 1 zone
NAME             USED  AVAIL  REFER  MOUNTPOINT
zone             669M  15.0G    36K  /zone
zone/server-1a   487M  15.0G    33K  /zone/server-1a
zone/server-1b  70.9M  15.0G    34K  /zone/server-1b
zone/server-1c  70.8M  15.0G    34K  /zone/server-1c
zone/server-1d  38.1M  15.0G    34K  /zone/server-1d


Thanks to ZFS, the cloning is naturally fast and extremely space efficient.
We were able to quickly get a new, fully functional OS instance for just around 40 MB! In addition to the near-zero virtualization overhead, this is a unique advantage of Solaris.
   
There is one caveat when it comes to updating a system with multiple cloned zones. As updates are applied, they will be duplicated on each and every cloned zone, thus lessening the space savings benefits (zone server-1f was cloned from server-1a after an update process).

# zfs list -r -d 1 zone
NAME             USED  AVAIL  REFER  MOUNTPOINT
zone            1.85G  13.8G    38K  /zone
zone/server-1a   187M  13.8G    33K  /zone/server-1a
zone/server-1b   304M  13.8G    34K  /zone/server-1b
zone/server-1c   301M  13.8G    34K  /zone/server-1c
zone/server-1d   301M  13.8G    34K  /zone/server-1d
zone/server-1e   739M  13.8G    35K  /zone/server-1e
zone/server-1f  59.7M  13.8G    34K  /zone/server-1f

 
To mitigate the problem, the update plan must take into consideration the redeployment of cloned zones from updated golden templates. This implies a best practice:
Keep the actual configuration and installation scripts synchronized.
I wonder if deduplication would be effective, but I'm not convinced.
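
If one were to test that hypothesis, the sketch below could be a starting point (untested; note that deduplication only affects data written after it's enabled and is notoriously RAM-hungry):

# zfs set dedup=on zone
# zpool get dedupratio zone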
  

Wednesday, December 4, 2013

GNOME desktop

This post started out just to share part of my desktop theme with you.
Actually, what matters most for Unix administration is just a terminal window :-)
It took a while, but nowadays I'm happy to have gotten rid of GUI administration.
 
By the way, to get a GNOME desktop out of a text-only installation, IPS is there to help. From my experience, any warning or error messages can be safely ignored; just be sure to reboot the system after the additional software gets installed.

# pkg install solaris-desktop

But the desktop installation is not quite so simple if your intention is to have a result that exactly matches a desktop installation performed right from the start.

Monday, October 28, 2013

NIS & Automounter

The Automounter technology leverages NFS and is essential to most Unix networking environments. It seems that most, if not all, Linux implementations are more or less broken, but fortunately in Solaris Autofs land we have nothing but blue sky.

Even better is the integration of NIS services with the Automounter, adding flexibility and central administration with the elimination of client-by-client local files administration. That's exceptional for medium-to-large networks.

The integration is extremely simple and straightforward, requiring only the enabling of the corresponding (automount) Name Service Switch (NSS) database and the crafting of appropriate built-in (auto.master and auto.home) NIS maps, alongside additional auto_* custom NIS maps for extended flexibility.
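
On the client side, enabling the automount NSS database may be as simple as the following sketch (the "files nis" value is an assumption to adapt):

# svccfg -s system/name-service/switch 'setprop config/automount = astring: "files nis"'
# svcadm refresh system/name-service/switch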

It's critical to avoid as much as possible any hard-coded dependencies in the /etc/auto_master of each client, in order to avoid expensive post-installation management in case locations must be substituted. It's enough to consider the importance of this on a network of hundreds or thousands of computers. As a consequence, it's important to carefully choose stable names for the topmost mount points for the data at rest, such as /data, /public, /download, /business.corp, and so on.

Once all the requirements have been fulfilled, typical usages are:

  1. The in-line (a.k.a. no absolute path prefixes) syntax.
     
    The referenced NIS map contents are interpreted relative to either: i) a mountpoint prefix (to the left) of the map's name in the current Automounter configuration file; or ii) a nested map (multiple indirection) within another referenced NIS map.
     
  2. The inclusion (+map_name) syntax.
     
    The map_name NIS map contents are inserted at the referenced line position of the current Automounter configuration file (a.k.a. the local map). This has the main disadvantage of not integrating with variable substitution as previously described, so except for special cases I won't consider this option due to its inherent limitation.
     
    Again, a couple of examples should help clarify matters.
    I'll start with the simplest and progress to more elaborate examples.

    Example 1: 

    Requirement: standards-based central management of on-demand mounting of assorted NFS shares by several hosts under /data.

    Solution: Configure the automounter of each host to use a specific NIS map describing what's to be mounted as needed. Assuming a host called app-server-1, this is as simple as modifying its /etc/auto_master as follows:

    app-server-1:~# cat /etc/auto_master 
    ...
    /data        auto_data_${HOST}       -nobrowse


    Note:
    I can make use of variable substitution as long as it happens on the value (not the key) field of the NIS maps themselves. In addition, it can't be used in included NIS maps. This last limitation is precisely what prevents me from taking advantage of the auto.master NIS map (which is referenced in the standard /etc/auto_master simply as +auto_master, at the position where I inserted the ellipses).

    app-server-1:~# ypcat -k auto_data_app-server-1
    area-4 file-server-1:/export/data/tank-4
    area-3 file-server-1:/export/data/tank-3
    area-2 file-server-1:/export/data/tank-2
    area-1 file-server-1:/export/data/tank-1


    In this example, areas area-1 through area-4 are to be on-demand mapped to app-server-1 under /data. Later, if any modifications are needed, it's just a matter of updating the corresponding (auto_data_app-server-1) NIS map. Given the normal automounter timeouts (for indirect NIS maps; direct ones require an automounter restart), nothing else is necessary for the updates to be reflected on app-server-1.
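
    For the record, after editing the map source on the NIS master, propagating an update boils down to something like the following (a sketch; the make target assumes the Makefile changes shown next):

    nis-master:~# cd /var/yp && make auto_data_app-server-1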

    I'd like to show the corresponding NIS server-side changes needed to support the new custom map. But at the same time, I'd like to show the include directive, which is a more manageable approach than simply forever adding stuff to /var/yp/Makefile to the point where it can turn cumbersome or virtually unmanageable. As such, in /var/yp/Makefile just add the 2 following changes for each new custom NIS map:

    # cat /var/yp/Makefile
    ... 

    all: passwd ageing group netid \
            project netgroup aliases publickey \
            hosts ipnodes ethers networks netmasks \
            rpc services protocols \
            auto.master auto.home \
            auth.attr exec.attr prof.attr user.attr \
            auto_data_app-server-1

    ...

    include target.auto_data_app-server-1

    ...

    The file now being included is as follows:

    # cat /var/yp/target.auto_data_app-server-1
    auto_data_app-server-1.time: $(DIR)/auto_data_app-server-1
        -@if [ -f $(DIR)/auto_data_app-server-1 ]; then\
            # Join any continuation lines.;\
            (\
            while read L;\
            do\
                echo "$$L";\
            done\
            < $(DIR)/auto_data_app-server-1\
            $(CHKPIPE)) |\
            #;\
            # Normalize the input to makedbm.;\
            # Strip out comments,;\
            # then delete blank lines.;\
            (sed -e "s/[`echo '\t'` ]*#.*$$//" -e "/^ *$$/d"\
            $(CHKPIPE)) |\
            #;\
            # Build the updated map.;\
            $(MAKEDBM) - $(YPDBDIR)/$(DOM)/auto_data_app-server-1;\
            #;\
            # Finishing house-keeping.;\
            touch auto_data_app-server-1.time;\
            echo "updated auto_data_app-server-1";\
            #;\
            # Push the updated map to slaves?;\
            if [ ! $(NOPUSH) ]; then\
                $(YPPUSH) auto_data_app-server-1;\
                echo "pushed auto_data_app-server-1";\
            fi\
        else\
            echo "couldn't find $(DIR)/auto_data_app-server-1";\
        fi

    auto_data_app-server-1: auto_data_app-server-1.time


    But repeating all this stuff every time is boring, inefficient and error-prone. A more intelligent approach is required. I can say I know the basics of the make utility, but until now I hadn't faced the need to go beyond them. Well, that's one of those moments. After a couple of days thinking over the problem, I was finally inspired to the following solution:

    Instead of the aforementioned auto_data_app-server-1 include file, use the following alternative include file, call it target.template-1 if you will, which adds 2 more maps — not shown on the original all make target — just to illustrate the better scalability and manageability of the new approach:

    BUILD_TEMPLATE_1 = -@if [ -f $(DIR)/$(CUSTOM_MAP) ];\
    then\
        (\
            while read L;\
            do\
                echo "$$L";\
            done\
            < $(DIR)/$(CUSTOM_MAP)\
            $(CHKPIPE)\
        )\
        |\
        (\
            sed -e "s/[`echo '\t'` ]*\#.*$$//" -e "/^ *$$/d"\
            $(CHKPIPE)\
        )\
        |\
        $(MAKEDBM) - $(YPDBDIR)/$(DOM)/$(CUSTOM_MAP);\
        :;\
        touch $(CUSTOM_MAP).time;\
        echo "updated $(CUSTOM_MAP)";\
        :;\
        if [ ! $(NOPUSH) ];\
        then\
            $(YPPUSH) $(CUSTOM_MAP);\
            echo "pushed $(CUSTOM_MAP)";\
        fi\
    else\
        echo "couldn't find $(DIR)\$(CUSTOM_MAP)";\
    fi

    #---------------------------------------------------------

    auto_data_app-server-1.time := CUSTOM_MAP = $(@:%.time=%)
    auto_data_app-server-1.time : $(DIR)/auto_data_app-server-1
            $(BUILD_TEMPLATE_1)

    auto_data_app-server-1: auto_data_app-server-1.time

    #---------------------------------------------------------
    auto_data_group-1.time := CUSTOM_MAP = $(@:%.time=%)
    auto_data_group-1.time : $(DIR)/auto_data_group-1
            $(BUILD_TEMPLATE_1)

    auto_data_group-1: auto_data_group-1.time

    #---------------------------------------------------------
    auto_data_group-2.time := CUSTOM_MAP = $(@:%.time=%)
    auto_data_group-2.time : $(DIR)/auto_data_group-2
            $(BUILD_TEMPLATE_1)

    auto_data_group-2: auto_data_group-2.time

    As seen, I make use of the following new knowledge:
    • macros ( = );
    • conditional macros ( := );
    • pattern replacement macro reference ( : %=% ).
      
    I also had to adjust what's passed to BUILD_TEMPLATE_1, taking into consideration that now I'm putting it all into a make macro.
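
    To see those constructs in isolation, here's a tiny self-contained demo in the same Solaris make dialect (hypothetical file and target names):

    # cat demo.mk
    FOO.time := CUSTOM_MAP = $(@:%.time=%)
    FOO.time:
            @echo $(CUSTOM_MAP)

    # make -f demo.mk FOO.time
    FOO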

    Example 2:

    Consider a slightly more complex variation of example 1, where multiple indirection is used to factor out commonalities. This greatly improves manageability and flexibility. Note that the reference in /etc/auto_master doesn't change, but the contents of the auto_data_${HOST} map change as follows:

    app-server-1:~# ypcat -k auto_data_app-server-1
    group-2 -fstype=autofs,nobrowse        auto_data_&
    group-1 -fstype=autofs,nobrowse        auto_data_&


    And the 2 NIS maps being referenced with the aid of key substitution are as follows:

    app-server-1:~# ypcat -k auto_data_group-1
    area-2 -fstype=autofs,nobrowse file-server-1:/export/data/tank2
    area-1 -fstype=autofs,nobrowse file-server-1:/export/data/tank1


    app-server-1:~# ypcat -k auto_data_group-2
    area-4 -fstype=autofs,nobrowse file-server-1:/export/data/tank4
    area-3 -fstype=autofs,nobrowse file-server-1:/export/data/tank3


    In this example, areas area-1 and area-2 were grouped into group-1, and areas area-3 and area-4 were grouped into group-2. In order to prevent spurious directories under /data (note that /etc/auto_master is the same as in example 1), I had to explicitly declare group-1 and group-2 in auto_data_app-server-1, where I used key substitution to somewhat enhance manageability.

    Example 3:

    This example demonstrates the extremely flexible automounter executable maps. Executable maps are local files that are run whenever indirect mounting requests take place. Perhaps their greatest advantage is precisely the freedom to dynamically correlate several pieces of information when building the mounting string. For instance, one can query and correlate multiple NIS maps. Whatever it performs, it must be efficient. If the executable map happens to be a shell script, an obvious requirement is setting the execution bit. In addition, it may be advisable to set the file mode to 0550. Furthermore, the expected behavior is to accept a lookup key as its $1 parameter and, if successful, to return the contents of a respective automounter map entry to standard output, otherwise nothing.

    The local file that is run can be of any type as long as it exhibits the expected aforementioned behavior. So, for instance, let me present a source-code boilerplate for a binary implementation based on previous NIS programming examples:

    #include <cstdlib>
    #include <cstring>   // ::strlen
    #include <iostream>

    #include <rpcsvc/ypclnt.h>

    #include "pointer.hxx"

    inline void std_c_free( void * p ) throw()
    {
        ::free( p );
    }

    int main( int argc, char * argv[] )
    {
        // No input, treat as invalid key.
        // No output to the standard output.
        if ( argc != 2 )
            ::exit( YPERR_KEY );

        // The lookup logic could be rather involved.
        // It could:
        //     - Query systems
        //     - Query databases
        //     - Trigger special actions
        //
        // For better startup performance,
        // a companion custom multi-threaded SMF service
        // to which most tasks were to be delegated could help.
        //
        // Here, as an example, it's just a simple NIS lookup.

        // Consider hard-coded values.
        // Trade-offs: software engineering x performance.

        int code;
        char * domain;

        if ( ( code = ::yp_get_default_domain( & domain ) ) == 0 )
        {
            char map[] = "auto_query_001";
            char * key = argv[ 1 ];

            char * value;
            int length;

            if
            (
                (
                    code = ::yp_match
                    (
                        domain,
                        map,
                        key,
                        ::strlen( key ),
                        & value,
                        & length
                    )
                )
                == 0
            )
            {
                // Lookup success.
                // Send (including the \n) to the standard output.
                pointer< char, std_c_free > p_value( value );
                std::cout << p_value;
            }
            else
                // Lookup error.
                // No output to the standard output.
                ::exit( code );
        }
        else
            // Lookup error.
            // No output to the standard output.
            ::exit( code );

        return EXIT_SUCCESS;
    }


    The simplest compilation line for the above code could be:

    # CC -m64 -lsocket -lnsl \
         -o auto_x_query_001 auto_x_query_001.cxx

    In this particular example, the equivalent shell script could be as simple as:

    # ll /etc/auto_x_query_001
    -r-xr-x---  1 root bin ... /etc/auto_x_query_001

    # cat /etc/auto_x_query_001
    #!/bin/sh -
    /usr/bin/ypmatch "$1" auto_query_001
     
    In general, the main advantage of adopting an indirect executable map is the possibility of adding a dynamic touch or override, thereby changing the otherwise deterministic value statically associated with the lookup key.

    For instance, consider the following sample map to be overridden:

    # ypcat -k auto_query_001
    area-1 file-server-1:/export/data/tank-1
    area-2 file-server-1:/export/data/tank-2

    To enforce the nosuid mount option I could have:

    # cat /etc/auto_x_query_001
    #!/bin/sh -
    /usr/bin/ypmatch "$1" auto_query_001 2> /dev/null |
    /usr/bin/sed -e 's/.*/-nosuid &/'
      
    The typical output of the previous script is:

    # /etc/auto_x_query_001 area-1
    -nosuid file-server-1:/export/data/tank-1

    By referring to auto_x_query_001 instead of auto_query_001 in /etc/auto_master, all the respective mountings will get the nosuid mount flag enforced.
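
    For instance (a sketch, assuming /query as the mount point; an executable map is referenced by its full local path):

    # cat /etc/auto_master
    ...
    /query    /etc/auto_x_query_001    -nobrowse
    ...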

    Example 4:

    This sort of multi-example attempts to show the extra flexibility provided by combining variable substitution (built-in and custom) with hierarchical mounts.

    For instance, let's define a custom variable depicting the RD (Research & Development) class of a certain workstation.

    Note: Save yourself some trouble by not using shell metacharacters in the value.

    workstation-1:~# sharectl set -p environment=CLASS=RD autofs
    workstation-1:~# sharectl get -p environment autofs
    environment=CLASS=RD
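
    A running automounter may only pick up the new environment variable after a service restart (an assumption worth verifying on your release):

    workstation-1:~# svcadm restart system/filesystem/autofs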

    Let's say we want to give each client equipment a specific view of /business.corp depending on which class it belongs to (as described above, by means of a custom variable called CLASS). Their /etc/auto_master could include something similar to:

    # cat /etc/auto_master
    ...
    /business.corp   auto_business.corp   -nobrowse
    ...

    Let's say that the auto_business.corp and the auto_projects_RD custom NIS maps are somewhat as follows:

    # ypcat -k auto_business.corp
    ...
    projects    -fstype=autofs,nobrowse   auto_projects_${CLASS}
    ...
    standards   file-server-1:/export/standards
    templates   file-server-2:/export/templates
    ...

    # ypcat -k auto_projects_RD
    project_C   file-server-4:/export/projects/rd/&
    project_B   file-server-5:/export/projects/rd/&
    project_A   file-server-6:/export/projects/rd/&

     
    The following logical structure will result:

    # tree -d /business.corp
    /business.corp
    ├── projects
    │   ├── project_A
    │   ├── project_B
    │   └── project_C
    ├── standards
    └── templates


    Solaris has the following built-in variables:
    (examples are shown for Solaris 11.1 on a typical Intel microprocessor)
    ARCH      arch                 i86pc
    KARCH     arch -k / uname -m   i86pc
    CPU       uname -p             i386
    HOST      uname -n             ...
    OSNAME    uname -s             SunOS
    OSREL     uname -r             5.11
    OSVERS    uname -v             11.1
    PLATFORM  uname -i             i86pc
    NATISA    isainfo -n           amd64
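
    As a closing illustration, these variables can be composed in map values; for instance, to serve per-platform binaries (hypothetical map name and paths):

    # ypcat -k auto_tools
    bin    file-server-1:/export/tools/${OSNAME}-${OSREL}/${NATISA}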