Monday, October 28, 2013

NIS & Automounter

The NFS Automounter technology leverages NFS and is essential to most Unix networking environments. It seems that most, if not all, Linux implementations are more or less broken, but fortunately in Solaris Autofs land we have nothing but blue sky.

Even better is the integration of NIS services with the Automounter, adding flexibility and central administration while eliminating client-by-client local file administration. That's exceptional for medium-to-large networks.

The integration is extremely simple and straightforward: it just requires enabling the corresponding (automount) Name Service Switch (NSS) database and crafting appropriate built-in (auto.master and auto.home) NIS maps, alongside additional auto_* custom NIS maps for extended flexibility.
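Enabling the database amounts to listing NIS as a source for automount. A minimal sketch, assuming a Solaris 11 style SMF-managed switch (the property name follows the usual config/* pattern; verify it on your release, and on older releases just edit /etc/nsswitch.conf directly):

```
# Legacy /etc/nsswitch.conf form of the same setting:
#     automount: files nis

svccfg -s svc:/system/name-service/switch \
    setprop config/automount = astring: '"files nis"'
svcadm refresh svc:/system/name-service/switch
```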

It's critical to keep hard-coded dependencies out of each client's /etc/auto_master, in order to avoid expensive post-installation management in case locations must be substituted. It's enough to consider the importance of this on a network of hundreds or thousands of computers. As a consequence, it's important to carefully choose stable names for the topmost mount points for the data at rest, such as /data, /public, /download, /business.corp, and so on.

Once all the requirements have been fulfilled, typical usages are:

  1. The in-line (a.k.a. no absolute path prefixes) syntax.
     
    The referenced NIS map's contents are interpreted relative to either: i) a mountpoint prefix (to the left) of the map's name in the current Automounter configuration file; or ii) a nested map (multiple indirection) within another referenced NIS map.
     
  2. The inclusion (+map_name) syntax.
     
    The map_name NIS map contents are inserted into the current Automounter configuration file (a.k.a. local map) at the position of the referencing line. This has the main disadvantage of not integrating with variable substitution, described further below. So, except for special cases, I won't consider this option due to its inherent limitation.
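    For completeness, a sketch of the inclusion syntax as it appears in a local map; this mirrors the standard Solaris /etc/auto_master (the /net entry shown is the usual default):

```
# /etc/auto_master (local map)
+auto_master                          # auto.master NIS map inserted here
/net        -hosts      -nosuid,nobrowse
```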
     
    Again, a couple of examples should help clarify matters.
    I'll start with the simplest and progress to more elaborate examples.

    Example 1: 

    Requirement: standards-based central management of on-demand mounting of assorted NFS shares by several hosts under /data.

    Solution: Configure the automounter of each host to use a specific NIS map describing what's to be mounted as needed. Assuming a host called app-server-1, this is as simple as modifying its /etc/auto_master as follows:

    app-server-1:~# cat /etc/auto_master 
    ...
    /data        auto_data_${HOST}       -nobrowse


    Note:
    I can make use of variable substitution as long as it happens in the value (not the key) field of the NIS maps themselves. In addition, it can't be used in included NIS maps. This last limitation is precisely what prevents me from taking advantage of the auto.master NIS map (which is referenced in the standard /etc/auto_master simply as +auto_master, at the position where I inserted the ellipses).
    app-server-1:~# ypcat -k auto_data_app-server-1
    area-4 file-server-1:/export/data/tank-4
    area-3 file-server-1:/export/data/tank-3
    area-2 file-server-1:/export/data/tank-2
    area-1 file-server-1:/export/data/tank-1


    In this example, areas area-1 through area-4 are to be on-demand mapped to app-server-1 under /data. Later, if any modifications are needed, it's just a matter of updating the corresponding (auto_data_app-server-1) NIS map. Given the normal automounter timeouts (for indirect NIS maps; direct ones require an automounter restart), nothing else is necessary for the updates to be reflected on app-server-1.

    I'd like to show the corresponding NIS server side changes in order to support the new custom map. But at the same time, I'd like to show the include directive, which is a more manageable approach than simply adding ever more stuff to /var/yp/Makefile to the point where it becomes cumbersome or virtually unmanageable. As such, in /var/yp/Makefile just add the following 2 changes for each new custom NIS map:

    # cat /var/yp/Makefile
    ... 

    all: passwd ageing group netid \
            project netgroup aliases publickey \
            hosts ipnodes ethers networks netmasks \
            rpc services protocols \
            auto.master auto.home \
            auth.attr exec.attr prof.attr user.attr \
            auto_data_app-server-1

    ...

    include target.auto_data_app-server-1

    ...

    The file now being included is as follows:

    # cat /var/yp/target.auto_data_app-server-1
    auto_data_app-server-1.time: $(DIR)/auto_data_app-server-1
        -@if [ -f $(DIR)/auto_data_app-server-1 ]; then\
            # Join any continuation lines.;\
            (\
            while read L;\
            do\
                echo "$$L";\
            done\
            < $(DIR)/auto_data_app-server-1\
            $(CHKPIPE)) |\
            #;\
            # Normalize the input to makedbm.;\
            # Strip out comments,;\
            # then delete blank lines.;\
            (sed -e "s/[`echo '\t'` ]*#.*$$//" -e "/^ *$$/d"\
            $(CHKPIPE)) |\
            #;\
            # Build the updated map.;\
            $(MAKEDBM) - $(YPDBDIR)/$(DOM)/auto_data_app-server-1;\
            #;\
            # Finishing house-keeping.;\
            touch auto_data_app-server-1.time;\
            echo "updated auto_data_app-server-1";\
            #;\
            # Push the updated map to slaves?;\
            if [ ! $(NOPUSH) ]; then\
                $(YPPUSH) auto_data_app-server-1;\
                echo "pushed auto_data_app-server-1";\
            fi\
        else\
            echo "couldn't find $(DIR)/auto_data_app-server-1";\
        fi

    auto_data_app-server-1: auto_data_app-server-1.time
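    By the way, the normalization part of the recipe is plain sed and can be tried outside make. A quick sketch with made-up map contents (printf '\t' stands in for the embedded tab produced by `echo '\t'` under the Solaris shell, and the doubled $$ of make is reduced back to $):

```shell
# Sample map source with a full-line comment, a trailing comment
# and a blank line.
cat > /tmp/map.src <<'EOF'
# a full-line comment
area-1 file-server-1:/export/data/tank-1    # a trailing comment

area-2 file-server-1:/export/data/tank-2
EOF

# Strip out comments, then delete blank lines (same sed as the recipe).
TAB=$(printf '\t')
sed -e "s/[$TAB ]*#.*$//" -e "/^ *$/d" /tmp/map.src
# prints:
#   area-1 file-server-1:/export/data/tank-1
#   area-2 file-server-1:/export/data/tank-2
```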


    But repeating all this stuff every time is boring, inefficient and error-prone. A more intelligent approach is required. I can say I know the basics of the make utility, but until now I hadn't faced the need to go beyond them. Well, that's one of those moments. After a couple of days thinking over the problem, I finally arrived at the following solution:

    Instead of the aforementioned include file, use the following alternative include file, call it target.template-1 if you will, which adds 2 more maps (not shown in the original all make target) just to illustrate the better scalability and manageability of the new approach:

    BUILD_TEMPLATE_1 = -@if [ -f $(DIR)/$(CUSTOM_MAP) ];\
    then\
        (\
            while read L;\
            do\
                echo "$$L";\
            done\
            < $(DIR)/$(CUSTOM_MAP)\
            $(CHKPIPE)\
        )\
        |\
        (\
            sed -e "s/[`echo '\t'` ]*\#.*$$//" -e "/^ *$$/d"\
            $(CHKPIPE)\
        )\
        |\
        $(MAKEDBM) - $(YPDBDIR)/$(DOM)/$(CUSTOM_MAP);\
        :;\
        touch $(CUSTOM_MAP).time;\
        echo "updated $(CUSTOM_MAP)";\
        :;\
        if [ ! $(NOPUSH) ];\
        then\
            $(YPPUSH) $(CUSTOM_MAP);\
            echo "pushed $(CUSTOM_MAP)";\
        fi\
    else\
        echo "couldn't find $(DIR)/$(CUSTOM_MAP)";\
    fi

    #---------------------------------------------------------

    auto_data_app-server-1.time := CUSTOM_MAP = $(@:%.time=%)
    auto_data_app-server-1.time : $(DIR)/auto_data_app-server-1
            $(BUILD_TEMPLATE_1)
     
       
    auto_data_app-server-1: auto_data_app-server-1.time

    #---------------------------------------------------------
    auto_data_group-1.time := CUSTOM_MAP = $(@:%.time=%)
    auto_data_group-1.time : $(DIR)/auto_data_group-1
            $(BUILD_TEMPLATE_1)


    auto_data_group-1: auto_data_group-1.time

    #---------------------------------------------------------
    auto_data_group-2.time := CUSTOM_MAP = $(@:%.time=%)
    auto_data_group-2.time : $(DIR)/auto_data_group-2
            $(BUILD_TEMPLATE_1)
     


    auto_data_group-2: auto_data_group-2.time

    As seen, I make use of the following new knowledge:
    • macros ( = );
    • conditional macros ( := );
    • pattern replacement macro reference ( : %=% ).
      
    I also had to adjust what's passed to BUILD_TEMPLATE_1, taking into consideration that it's now all placed inside a make macro.
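    Abstracted, the per-map stanza boils down to the same three lines, where only the map name varies (shown with a placeholder name, and using Solaris make syntax for the conditional macro):

```
# CUSTOM_MAP is derived from the target name via the pattern
# replacement $(@:%.time=%).
my_custom_map.time := CUSTOM_MAP = $(@:%.time=%)
my_custom_map.time : $(DIR)/my_custom_map
        $(BUILD_TEMPLATE_1)

my_custom_map: my_custom_map.time
```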

    Example 2:

    Consider a slightly more complex variation of example 1, where multiple indirection is used to factor out commonalities. This greatly improves manageability and flexibility. Note that the reference in /etc/auto_master doesn't change, but the contents of the auto_data_${HOST} map change as follows:

    app-server-1:~# ypcat -k auto_data_app-server-1
    group-2 -fstype=autofs,nobrowse        auto_data_&
    group-1 -fstype=autofs,nobrowse        auto_data_&


    And the 2 NIS maps being referenced with the aid of key substitution are as follows:

    app-server-1:~# ypcat -k auto_data_group-1
    area-2 -fstype=autofs,nobrowse file-server-1:/export/data/tank2
    area-1 -fstype=autofs,nobrowse file-server-1:/export/data/tank1


    app-server-1:~# ypcat -k auto_data_group-2
    area-4 -fstype=autofs,nobrowse file-server-1:/export/data/tank4
    area-3 -fstype=autofs,nobrowse file-server-1:/export/data/tank3


    In this example, areas area-1 and area-2 were grouped into group-1 and areas area-3 and area-4 were grouped into group-2. In order to prevent spurious directories under /data (note that /etc/auto_master is the same from example 1), I had to explicitly declare group-1 and group-2 in auto_data_app-server-1 where I used key substitution to somewhat enhance manageability.

    Example 3:

    This example demonstrates the automounter's extremely flexible executable maps. Executable maps are local files that are run whenever indirect mounting requests take place. Perhaps their greatest advantage is precisely the freedom to dynamically correlate several pieces of information when building the mounting string. For instance, they can query and correlate multiple NIS maps. Whatever an executable map performs, it must be efficient. If the executable map happens to be a shell script, an obvious requirement is setting the execution bit. In addition, it may be advisable to set the file mode to 0550. Furthermore, the expected behavior is to accept a lookup key as its $1 parameter and, if successful, to write the contents of the respective automounter map entry to the standard output; otherwise, nothing.
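    The contract is easy to prototype. A minimal sketch, using a flat file (made-up contents) in place of a NIS map just so it runs anywhere:

```shell
# A flat file standing in for a NIS map (made-up contents).
cat > /tmp/auto_demo_map <<'EOF'
area-1 file-server-1:/export/data/tank-1
area-2 file-server-1:/export/data/tank-2
EOF

# Executable-map contract: take the lookup key as $1; on success write
# the corresponding map entry to the standard output, otherwise nothing.
lookup()
{
    awk -v k="$1" '$1 == k { sub( /^[^ ]+ +/, "" ); print; found = 1 }
                   END { exit found ? 0 : 1 }' /tmp/auto_demo_map
}

lookup area-1            # prints: file-server-1:/export/data/tank-1
lookup area-9 || true    # prints nothing; non-zero exit status
```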

    The local file that is run can be of any type as long as it exhibits the aforementioned expected behavior. So, for instance, let me present a source code boilerplate for a binary implementation based on previous NIS programming examples:

    #include <cstdlib>
    #include <cstring>    // ::strlen()
    #include <iostream>

    #include <rpcsvc/ypclnt.h>

    #include "pointer.hxx"

    inline void std_c_free( void * p ) throw()
    {
        ::free( p );
    }

    int main( int argc, char * argv[] )
    {

        // No input, treat as invalid key.
        // No output to the standard output.
        if ( argc != 2 )
            ::exit( YPERR_KEY );


        // The lookup logic could be rather involved.
        // It could:
        //     - Query systems
        //     - Query databases
        //     - Trigger special actions
        //
        // For better startup performance,
        // a companion custom multi-threaded SMF service
        // to which most tasks were to be delegated could help.
        //
        // Here, as an example, it's just a simple NIS lookup.

        // Consider hard-coded values.
        // Trade-offs: software engineering x performance.

        int code;
        char * domain;

        if ( ( code = ::yp_get_default_domain( & domain ) ) == 0 )
        {  

            char map[] = "auto_query_001";
            char * key = argv[ 1 ];

            char * value;
            int length;

            if
            (
                (
                    code = ::yp_match
                    (
                        domain,
                        map,
                        key,
                        ::strlen( key ),
                        & value,
                        & length
                    )
                )
                == 0
            )
            {

                // Lookup success.
                // Send (including the \n) to the standard output.
                pointer< char, std_c_free > p_value( value );
                std::cout << p_value;
            }
            else

                // Lookup error.
                // No output to the standard output.
                ::exit( code );
        }
        else

            // Lookup error.
            // No output to the standard output.
            ::exit( code );

        return EXIT_SUCCESS;
    }


    The simplest compilation line for the above code could be:

    # CC -m64 -lsocket -lnsl \
         -o auto_x_query_001 auto_x_query_001.cxx

    In this particular example, the equivalent shell script could be as simple as:

    # ll /etc/auto_x_query_001
    -r-xr-x---  1 root bin ... /etc/auto_x_query_001

    # cat /etc/auto_x_query_001
    #!/bin/sh -
    /usr/bin/ypmatch "$1" auto_query_001
     
    In general, the main advantage of adopting an indirect executable map is the possibility of adding a dynamic touch or override, thereby changing the otherwise deterministic value statically associated with the lookup key.

    For instance, consider the following sample of overriding:

    # ypcat -k auto_query_001
    area-1 file-server-1:/export/data/tank-1
    area-2 file-server-1:/export/data/tank-2

    To enforce the nosuid mount option I could have:

    # cat /etc/auto_x_query_001
    #!/bin/sh -
    /usr/bin/ypmatch "$1" auto_query_001 2> /dev/null |
    /usr/bin/sed -e 's/.*/-nosuid &/'
      
    The typical output of the previous script is:

    # /etc/auto_x_query_001 area-1
    -nosuid file-server-1:/export/data/tank-1

    By referring to auto_x_query_001 instead of auto_query_001 in /etc/auto_master, all the respective mountings will get the nosuid mount flag enforced.
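    Since the override itself is plain sed, independent of NIS, it can be sanity-checked anywhere by faking the ypmatch output (the value below is the sample one from above):

```shell
# Fake the ypmatch output and apply the same override.
echo "file-server-1:/export/data/tank-1" |
sed -e 's/.*/-nosuid &/'
# prints: -nosuid file-server-1:/export/data/tank-1
```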

    Example 4:

    This sort of multi-example attempts to show the extra flexibility provided by combining variable substitution (built-in and custom) and hierarchical mounts.

    For instance, let's define a custom variable depicting the RD (Research & Development) class of a certain workstation.

    Note: Save yourself some trouble by not using shell special characters.

    workstation-1:~# sharectl set -p environment=CLASS=RD autofs
    workstation-1:~# sharectl get -p environment autofs
    environment=CLASS=RD

    Let's say we want to give each client machine a specific view of /business.corp depending on which class it belongs to (as described above, by means of a custom variable called CLASS). Their /etc/auto_master could include something similar to:

    # cat /etc/auto_master
    ...
    /business.corp   auto_business.corp   -nobrowse
    ...

    Let's say that the auto_business.corp and auto_projects_RD custom NIS maps are something like the following:

    # ypcat -k auto_business.corp
    ...
    projects    -fstype=autofs,nobrowse   auto_projects_${CLASS}
    ...
    standards   file-server-1:/export/standards
    templates   file-server-2:/export/templates
    ...

    # ypcat -k auto_projects_RD
    project_C   file-server-4:/export/projects/rd/&
    project_B   file-server-5:/export/projects/rd/&
    project_A   file-server-6:/export/projects/rd/&

     
    The following logical structure will result:

    # tree -d /business.corp
    /business.corp
    ├── projects
    │   ├── project_A
    │   ├── project_B
    │   └── project_C
    ├── standards
    └── templates


    Solaris has the following built-in variables
    (examples are shown for Solaris 11.1 on a typical Intel microprocessor):

    ARCH       arch                 i86pc
    KARCH      arch -k / uname -m   i86pc
    CPU        uname -p             i386
    HOST       uname -n             ...
    OSNAME     uname -s             SunOS
    OSREL      uname -r             5.11
    OSVERS     uname -v             11.1
    PLATFORM   uname -i             i86pc
    NATISA     isainfo -n           amd64
         

    NIS & NFS

    The integration of NIS services and NFS is more than convenient; it's vital to NFS. Not that NFS couldn't use another directory service, notably LDAP; it's just that NIS and NFS together account for most of the UNIX glory.

    Fortunately ZFS simplifies the intricacies of the NFS part, automating most of the tasks and providing a reasonably hierarchical view of the NFS options associated with an exported file system.

    By the way, when talking about NFS I'm assuming NFSv4, which has been the default on Solaris for a few releases already. In addition, I'm not assuming any DNS installation, just NIS, but that shouldn't be an issue in any case.

    I'd say that one of the very first steps is to double-check the proper settings of NFSv4 user and group id mapping. For all the details, check the nfsmapid(1M) man page. For the simple heavy-handed override, just set the desired value of the nfsmapid_domain property of a location profile with sharectl(1M). In fact, this should already be OK with the assumed setup of NIS services.

    # sharectl get -p nfsmapid_domain nfs
    nfsmapid_domain=business.corp

    I'll lazily (in a real scenario, consider placing /export on a dedicated device outside rpool) create a ZFS file system, set the ACL as per a previous example, and finally share it via NFS to illustrate the whole idea:

    # zfs create rpool/export/data
    # zfs set quota=5G rpool/export/data
     
    # zfs get share.all rpool/export/data | grep nfs
    rpool/export/data  share.nfs        off            default
    rpool/export/data  share.nfs.*      ...            default


    # ll -dV /export/data
    drwxr-xr-x   2 root     root   2 Oct 29 10:28 /export/data

                     owner@:rwxp-DaARWcCos:-------:allow
                     group@:r-x---a-R-c--s:-------:allow
                  everyone@:r-x---a-R-c--s:-------:allow


    The following will allow user1 to create (but not delete) any number of subdirectories directly under /export/data. The contents of each subdirectory will be fully manageable, even for wiping them out.

    # chmod A+user:user1:rxpaRcs:allow /export/data
    # ll -dV /export/data
    drwxr-xr-x   2 root     root   2 Oct 29 10:28 /export/data

                 user:user1:r-xp--a-R-c--s:-------:allow
                     owner@:rwxp-DaARWcCos:-------:allow
                     group@:r-x---a-R-c--s:-------:allow
                  everyone@:r-x---a-R-c--s:-------:allow


    Prepare all the necessary NFS options. For the sake of simplicity and minimality, but not giving up some access control, I'll just use the rw= NFS option referencing a proper netgroup.

    # zfs set share.nfs.sec.sys.rw=desktops rpool/export/data
    # zfs get share.nfs.sec.sys.rw rpool/export/data
    NAME                PROPERTY              VALUE     SOURCE
    rpool/export/data   share.nfs.sec.sys.rw  desktops  local


    Finally, as the last step after double-checking everything, open the gate by activating NFS sharing, switching on a simple ZFS file system property.

    # zfs set share.nfs=on rpool/export/data
    # zfs get share.all rpool/export/data | grep nfs
    rpool/export/data  share.nfs        on             local
    rpool/export/data  share.nfs.*      ...            local
    rpool/export/data  share.protocols  nfs            local
       

    # zfs get share | grep data
    ...,path=/export/data,prot=nfs,sec=sys,rw=desktops ...

    # share | grep data
    rpool_export_data  /export/data  nfs  sec=sys,rw=desktops


    # showmount -e | grep data
    /export/data  desktops


    And that's all folks!

    dt-10:~# mount nfs-1:/export/data /mnt
    dt-10:~# su - user1
    user1@dt-10:~$ mkdir /mnt/test
    user1@dt-10:~$ ll -dV /mnt/test
     drwxr-xr-x   2 user1    staff 3 Oct 29 11:32 /mnt/test
                     owner@:rwxp-DaARWcCos:-------:allow
                     group@:r-x---a-R-c--s:-------:allow
                  everyone@:r-x---a-R-c--s:-------:allow

     
    It may be worth adding a few notes:

    • If the underlying group id happens to change, then it may be necessary to restart the nfs/mapid service on the NFS client in order for the new group id to become visible.
         
    • If the access list on the NFS server gets changed, then it's necessary to repeat the share command "over the previous one" in order for the new access list to be published to NFS clients. The simplest way to achieve this is to refresh the SMF service as follows:

      # svcadm refresh nfs/server
       

    pam_list

    The pam_list PAM account management module for UNIX, as described in pam_list(5), is a modern, more manageable and scalable version of the traditional way of restricting users' access to a system. In fact, it's superior to the traditional additions-to-/etc/passwd method, as PAM centralizes all authentication and authorization operations of a standard system.

    Recent Solaris versions, at this time Solaris 11.1 SRU 12.5, even support the newer and more manageable /etc/pam.d structure as an alternative to the traditional monolithic /etc/pam.conf.

    My favorite use case is the following entries in /etc/pam.d/other:

        account requisite    pam_roles.so.1
        account required     pam_unix_account.so.1
        account required     pam_list.so.1 allow=/etc/users.allow


    Where /etc/users.allow contains:

        root
        local_login
        remote_login
        @netgroup


    The pam_list(5) man page describes more options, including the possibility of considering roles (which are ignored by default) in addition to logins. In particular, I see the possibility of referencing a netgroup as a very flexible and powerful feature.
      
    NOTE
    After saving changes to /etc/users.allow, it may take a little while for the module to reflect the changes; that is, the effect is not immediate.
       

    NIS netgroups

    A netgroup map is a special built-in NIS map whose flexibility, composition and broad applicability make it very, very useful. But in essence it's ridiculously easy; absolutely no mystery.

    The main role of a netgroup is to collect a group of users or a group of hosts under a common name. In doing so, it's possible, and usually a best practice, to structure a composition (by nesting) of netgroups. That is, a netgroup contains a list of users, a list of hosts, or a list of other netgroups. It should be clear that there's no point in creating self-references or cyclic references.

    Being special, a netgroup map has a particular syntax for its value fields, in the form of a triple whose first two components respectively refer to a host and a user. Counter-intuitively, there's absolutely no relationship between the first two components of the triple. Although practically unimportant, in my opinion this is unfortunate, because a relationship could enrich the possibilities, whereas the lack of one makes the syntax contradict the very natural meaning of a tuple.

    By the way, the last field of the triple depicts the RPC (NIS) domain under consideration, and seems especially important when there's no DNS implemented for the network, particularly when dealing with NFSv4 permission options such as rw= referencing a netgroup.

    Also, to more clearly decouple the first and second components, it's recommended to use the "no-value" indicator (a dash, "-") for the unused component. Nevertheless, using it for the last component wouldn't be recommended, as it would mean that the triple doesn't apply to any domain, which probably isn't what's needed in general.

    Example:

    # ypcat -k netgroup | grep desktop
    desktops-10 (dt-10,-,business.corp) (dt-11,-,business.corp) ...
    desktops-20 (dt-20,-,business.corp) (dt-21,-,business.corp) ...
    desktops dt-10 dt-11 ...

    As we can see, desktops is a netgroup composed of two other nested netgroups, desktops-10 and desktops-20. This way it's possible to break down, preferably by some meaningful grouping, what could otherwise be a very large list. Furthermore, not adopting this technique would imply very long lines, which are likely to hit some line-length limit, leading to subtle errors at least when attempting to build the map, so avoid them by all means.

    As one final comment, each triple (including its parentheses) is delimited by one or more spaces or a comma, which might be useful, perhaps, depending on how a particular entry in the netgroup is expected to be parsed by its consumer. In fact, I wonder if what's outside the parentheses is of any relevance at all, but a single space is more than enough for me.
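    Just to illustrate parsing, extracting the host components out of such triples is a quick text-processing exercise (using the sample desktops-10 line from above):

```shell
line='desktops-10 (dt-10,-,business.corp) (dt-11,-,business.corp)'

# One triple per line, then keep the first (host) component of each.
echo "$line" |
tr ' ' '\n' |
sed -n 's/^(\([^,]*\),.*/\1/p'
# prints:
#   dt-10
#   dt-11
```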
     
    A few useful applications of netgroups are:
       
    1. In the aforementioned NFS permissions, by means of the ro=, rw= and root= NFS export options. But as an exception, unfortunately, netgroups can only be used with the root= option when the NFS security mode is sys. The nfssec(5) man page doesn't say that netgroups can't be used with ro= and rw= when the NFS security modes are Kerberos-related, such as krb5, krb5i and krb5p, among others.
       
      Note: When a netgroup is updated, dependent NFS options aren't immediately refreshed, if ever. In order to force the refresh it's necessary to repeat the sharing command ("over" the previous one), a kind of overwrite. If many NFS clients depend on the share, then a mounting storm may happen.
         
    2.  In the pam_list PAM account management module.
          
    3. In the default sudo security policy module (SUDOERS(1m)).
      The file location is /etc/sudoers edited by visudo (VISUDO(1m)).
      I avoid sudo as much as possible, as I consider RBAC far superior.
        

    Thursday, October 17, 2013

    NIS programming

    Working with NIS services at the system administration level is fine. In addition to all the built-in NIS maps and the flexibility of custom NIS maps, the standard arsenal of interface commands (ypwhich(1), ypmatch(1), ypcat(1), ypset(1M)) is pretty much sufficient for most daily tasks.

    Nevertheless, there may be cases where interfacing with NIS services at programming level is a great and useful complement, easily incorporating some non-standard and centralized information into distributed and/or replicated applications.

    Let's demonstrate with a couple of examples.
    That should suffice to convey the main ideas.

    Example 1:

    This example mimics the functionality of the ypmatch built-in command.

    #include <cstdlib>
    #include <iostream>

    #include <rpcsvc/ypclnt.h>

    #include "pointer.hxx"

    void error_msg( char const * context, int code )
    {
        std::cout
            << "Context: "
            << context
            << std::endl;

        std::cout
            << "Message: "
            << ::yperr_string( code )
            << "."
            << std::endl;
    }

    inline void std_c_free( void * p ) throw()
    {
        ::free( p );
    }

    int main()
    {
        int code = 0;
        char * domain = "";

        if ( ( code = ::yp_get_default_domain( & domain ) ) == 0 )
        {

            // Consider the previous custom NIS map example.
            // Hard-coded key for demonstration purposes.
            char map[] = "phonebook";
            char key[] = "user1";

     
            char * value = "";
            int length = 0;


            // This call is decoupled from the following if
            // just for the readability of this example.
            code = ::yp_match
            (
                domain, 

                map,
                key,

                sizeof(key) - 1,    // '\0' trim.
                & value, 

                & length
            );


            if ( code == 0 )
            {

                // Use a simple smart pointer
                // for automatic custom destruction.
                pointer< char, std_c_free > v_ptr( value );

            v_ptr[ length ] = '.';   // '\n' override.

            std::cout
                << "Domain " << domain
                << ", map " << map
                << ", key " << key
                << ", value " << v_ptr
                << std::endl;
            }
            else
                error_msg( "yp_match()", code );
        }
        else
            error_msg( "yp_get_default_domain()", code );

        return EXIT_SUCCESS;
    }


    Example 2:

    This example mimics the functionality of the ypcat built-in command.
    As depicted in the man pages, these calls use the UDP protocol.
    This implies that the results may not always be accurate.

    #include <cstdlib>
    #include <iostream>

    #include <rpcsvc/ypclnt.h>

    #include "pointer.hxx"


    // Omitted stuff common to example 1...

    int main()
    {
        int code = 0;
        char * domain = "";

        if ( ( code = ::yp_get_default_domain( & domain ) ) == 0 )
        {


            // Consider the previous custom NIS map example.
            char map[] = "phonebook";

            char * key = "";
            int k_length = 0;

            char * value = "";
            int v_length = 0;


            // This call is decoupled from the following if         
            // just for the readability of this example. 
            code = ::yp_first
            (
                domain,
                map,
                & key,
                & k_length,
                & value,
                & v_length
            );
           
            if ( code == 0 )
            {
                do
                {

                    // Use a simple smart pointer
                    // for automatic custom destruction.
                    pointer< char, std_c_free > k_ptr( key );
                    pointer< char, std_c_free > v_ptr( value );

                    // Trim standard '\n' "garbage".
                    k_ptr[ k_length ] = '\0';
                    v_ptr[ v_length ] = '\0';

                    std::cout
                        << k_ptr
                        << ": "
                        << v_ptr
                        << std::endl;

                    // This call is decoupled from the following if
                    // just for the readability of this example. 
                    code = ::yp_next
                    (
                        domain,
                        map,
                        key,
                        k_length,
                        & key,
                        & k_length,
                        & value,
                        & v_length
                    );

                    if ( code != 0 )
                    {
                        if ( code != YPERR_NOMORE )
                            error_msg( "yp_next()", code );

                        break;
                    }
                }
                while ( true );
            }
            else
                error_msg( "yp_first()", code );
        }
        else
            error_msg( "yp_get_default_domain()", code );

        return EXIT_SUCCESS;
    }



    Example 3:

    This example also mimics the functionality of the ypcat built-in command.
    As depicted in the man pages, these calls use the TCP protocol.
    This implies that the results should always be accurate.

    #include <cstdlib>
    #include <iostream>

    #include <rpcsvc/ypclnt.h>
    #include <rpcsvc/yp_prot.h>

    #include "pointer.hxx"


    // Omitted stuff common to example 1...

    // NIS callback support
     
    extern "C" typedef int ( *callback ) ();

    extern "C"
    int pair
    (
        int status,
        char const * const k, int k_length,
        char const * const v, int v_length,
        void *
    )
    {
        if ( status == YP_TRUE )
        {

            //
            // k and v are private to yp_all().
            // Copy as needed, for instance, with:
            //
            //   std::string key( k, k_length );
            //   std::string value( v, v_length );
            //

            std::cout.write( k, k_length );
            std::cout << ' ';
           
            std::cout.write( v, v_length );       
            std::cout << std::endl;
           
            return 0;
        }
       
        int code = 0;
       
        if ( ( code = ::ypprot_err( status ) ) !=  YPERR_NOMORE )
            error_msg( "ypall_callback()", code );
       
        return EXIT_FAILURE;
    }

    int main()
    {
        int code = 0;
        char * domain = "";

        if ( ( code = ::yp_get_default_domain( & domain ) ) == 0 )
        {

             // Consider the previous custom NIS map example.
             char map[] = "phonebook";
           
            ::ypall_callback dumper = { (callback) pair, 0 };
           
            if ( ( code = ::yp_all( domain, map, & dumper ) ) != 0 )
                error_msg( "yp_all()", code );
        }
        else
            error_msg( "yp_get_default_domain()", code );

        return EXIT_SUCCESS;
    }