Resource control historically appeared as a way of limiting the system
resources that a process and its children could consume, but nowadays
in Solaris the concept has been extended to other collections of
processes as well: tasks, projects and zones.
The best practice is to carefully assess (for instance, using extended accounting) the resource consumption of the workloads on the system before applying any fine-grained resource control, so as to prevent over-consumption. And, of course, above all, the system must meet or exceed the combined resource requirements of all the workloads it's supposed to host.
This topic is vast: there are many resources (resource-controls(5)), ranging from the most "elementary" to the most complex; there are 3 control levels (basic, privileged and system); there are 4 containment levels (process, task, project and zone); there are 2 types of actions and flags (local and global); and there is more than one interface for managing part (ulimit(1) and getrlimit(2)) or all (rctladm(1M), prctl(1), setrctl(2), the projects database and the zone configuration) of this stuff.
All the man pages provide extensive information I won't discuss, at least for now. In addition there are some other lengthy references, such as chapter 5 of the Resource Management and Oracle® Solaris Zones Developer's Guide, which is a kind of revamp of the original chapter 5 of the (partially) archived Solaris Containers: Resource Management and Solaris Zones Developer's Guide.
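As a small taste of one of those interfaces, here is a hedged sketch of inspecting and setting a basic, per-process resource control with prctl(1) on a Solaris shell; the control name is real, but the limit value is purely illustrative:

```
$ prctl -n process.max-file-descriptor $$      # show this shell's control
$ prctl -n process.max-file-descriptor \
        -t basic -v 1024 -e deny -s $$         # set a basic limit of 1024, deny on excess
```

A basic control like this can be set by the owner on its own processes; privileged and system levels follow the rules laid out in resource-controls(5).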
Personal notes and recipes, views and opinions.
If it must run, it runs on Solaris!
Monday, April 24, 2017
Monday, April 10, 2017
The physical memory
The physical memory is a crucial and precious resource: it is commonly one of the major system bottlenecks, as well as one of the components that most inflates a system's price.
Knowing and, better yet, determining at runtime the amount of physical memory that is physically installed on a host and that is actually available to the system is important to many deployment and administration strategies.
Without resorting to programming at the system API level, it is possible to easily determine such figures, as shown below.
$ prtconf | grep Mem
Memory size: 8192 Megabytes
# echo ::memstat | mdb -k | grep Total
Total 2096958 7.9G
$ kstat -p -n system_pages | egrep 'avail|physmem|locked|total'
unix:0:system_pages:availrmem 930155
unix:0:system_pages:pageslocked 1162706
unix:0:system_pages:pagestotal 2092861
unix:0:system_pages:physmem 2092861
Note that pagestotal = availrmem + pageslocked and, interestingly, that pagestotal = physmem, all counted in pages.
$ pagesize
4096
We can now convert and compare these figures to better grasp the reality:
$ echo "(2092861 * `pagesize`) / 1024 ^ 2" | bc
8175
$ echo "(2096958 * `pagesize`) / 1024 ^ 2" | bc
8191
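The same arithmetic can be reproduced with plain shell integer math, using the sample figures captured above from this host:

```shell
# Page counts from the kstat and ::memstat output above (page size 4096 bytes).
pagesize=4096
availrmem=930155
pageslocked=1162706
pagestotal=2092861
memstat_total=2096958

# pagestotal is indeed availrmem + pageslocked:
echo $(( availrmem + pageslocked ))              # 2092861

# Both totals converted to whole megabytes:
echo $(( pagestotal * pagesize / 1048576 ))      # 8175
echo $(( memstat_total * pagesize / 1048576 ))   # 8191
```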
To me, the roughly 16 MB difference between 8191 and 8175 (4,097 pages between the ::memstat and kstat totals) seems to be fixed (non-pageable) and remains a mystery, a matter open to investigation; perhaps it's some part of the kernel known only to the internal staff.
That is, at its best the system actually sees 8191 MB, 1 MB less than what's physically installed on the host, and it's not hard to guess why (perhaps it's set aside for the on-board video or some such). Using figures closer to reality ought to provide more exact results for planning and assessments.
Kernel zones & ZFS ARC
Assuming your system meets the kernel zones support requirements, one important tuning is the adjustment of the ZFS ARC maximum size (the well-known zfs_arc_max in /etc/system). I had done a somewhat similar tuning a couple of years ago, as a best practice, right after installing VirtualBox. For kernel zones it may not be just a case of simple best practice, but more likely one of be advised, or neglect it at your own risk!
By the way, according to more recent Solaris public documentation, the host system sees kernel zones just as another application. The required tuning on the host system should take into account all the kernel zones and processes that are anticipated to run on the system.
In the past, for figuring out the current zfs_arc_max, I just relied on the c_max bytes from kstat -n arcstats. But more recently the Solaris 11.2 documentation refers to ::memstat from mdb -k. So let's put the two in perspective (remembering that other figures from arcstats, not considered below, may play a role):
# kstat -n arcstats | grep c_max
c_max 7498616832
# echo ::memstat | mdb -k
Page Summary Pages Bytes %Tot
----------------- ---------------- ---------------- ----
Kernel 293573 1.1G 14%
ZFS Metadata 28199 110.1M 1%
ZFS File Data 517332 1.9G 25%
Anon 269994 1.0G 13%
Exec and libs 6008 23.4M 0%
Page cache 328957 1.2G 16%
Free (cachelist) 3779 14.7M 0%
Free (freelist) 628887 2.3G 30%
Total 2096958 7.9G
# pagesize
4096
To quote the Solaris 11.2 documentation topic:
The suggested value is one-half of what you would like the host ZFS resources to use. For example, if you want ZFS to use less than 2 GB of memory, set the ARC cache to 1 GB, or 0x40000000.
Furthermore, the Solaris 11.2 documentation on zfs_arc_max says its default maximum is:
- 75% of memory on systems with less than 4 GB of memory.
- physmem minus 1 GB on systems with greater than 4 GB of memory.
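For example, to cap the ARC at 1 GB as in the quoted guidance, one would add a line like the following to /etc/system on the host and reboot (a sketch; double-check the syntax against your Solaris release before applying):

```
* Cap the ZFS ARC at 1 GB (0x40000000 bytes).
set zfs:zfs_arc_max=0x40000000
```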
If a future memory requirement is significantly large and well defined, you might consider reducing the value of this parameter to cap the ARC so that it does not compete with the memory requirement. For example, if you know that a future workload requires 20% of memory, it makes sense to cap the ARC such that it does not consume more than the remaining 80% of memory.
On user_reserve_hint_pct, in turn, the documentation says that it informs the system about how much memory is reserved for application use, and therefore limits how much memory can be used by the ZFS ARC cache as the cache increases over time.
By means of this parameter, administrators can maintain a large reserve of available free memory for future application demands. The user_reserve_hint_pct parameter is intended to be used in place of the zfs_arc_max parameter to restrict the growth of the ZFS ARC cache.
If a dedicated system is used to run a set of applications with a known memory footprint, set the parameter to the value of that footprint.
For upward adjustments, increase the value if the initial value is determined to be insufficient over time for application requirements, or if application demand increases on the system. Perform this adjustment only within a scheduled system maintenance window. After you have changed the value, reboot the system.
For downward adjustments, decrease the value if allowed by application requirements. Make sure to decrease the value only in small amounts, no greater than 5% at a time.
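Persisting such a reservation might look like this in /etc/system (a sketch assuming, say, applications with a known footprint of 80% of memory; Oracle also documents procedures for adjusting this parameter dynamically, so verify the recommended method for your release):

```
* Reserve 80% of memory for applications; the ZFS ARC yields accordingly.
set user_reserve_hint_pct=80
```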
Sunday, April 9, 2017
Kernel zones support
The advent of kernel zones in Solaris 11.2 is another great improvement to Solaris. But it may not be supported on aging hardware, as I may have just found out. I happen to use a box more than 5 years old, from late 2009, which seems not to support all the virtualization technology required for kernel zones. But I'm still pending confirmation on whether my issue is simply that having VirtualBox installed on my x86-64 box poses some sort of conflict with kernel zones availability, in terms of virtualization resources already allocated to VirtualBox.
So if you're planning to set aside some "cool hardware" for your Solaris 11.3 kernel zones, I suggest you learn from this experience of mine beforehand in order to make sure your "cool hardware" and system setup meet all the requirements.
You may start by checking the man page solaris-kz(5):
The solaris-kz brand uses certain hardware features which may not be available in older systems, or in virtualized environments. To detect whether a system supports the solaris-kz brand, install the brand-solaris-kz package and then run the virtinfo command.
# virtinfo -c supported list kernel-zone
If kernel-zone is not shown in the supported list, you can see syslog for more information. Messages pertaining to kernel zones will contain the string kernel-zone.
...
In my case, in general, I've got:
$ virtinfo -c supported list kernel-zone
kernel-zone: no such supported virtual environment found
$ virtinfo
NAME CLASS
non-global-zone supported
And under VirtualBox 5.1.18 r114002, I've got:
$ virtinfo
NAME CLASS
virtualbox current
non-global-zone supported
Well, it's true that in the logs you'll have to look for the string kernel-zone, but you'll have to do so in /var/adm/messages rather than via syslog.
So I set out to further investigate what was missing.
For my physical box I've got:
$ grep kernel-zone /var/adm/messages | cut -d: -f5,6 | sort -u
... environment not supported: VMX already in use
... unsupported Intel model 15
And under VirtualBox (on that same physical box), I've got:
$ grep kernel-zone /var/adm/messages | cut -d: -f5,6 | sort -u
... environment not supported: CPU doesn't have VMX
According to a Wikipedia article on x86 virtualization, VMX happens to be the designation of the CPU flag related to VT-x support. What caught my attention was the single message VMX already in use: it appeared just once, and it's true I have enabled virtualization support in my physical box's BIOS. That makes me wonder whether the situation would change in favor of kernel zones meeting all their requirements if I completely uninstalled VirtualBox. I haven't tried it yet, nor am I willing to do it right now, as I make heavy use of VirtualBox. But depending on the scenario, the trade-off could certainly pay off.
By the way, I did try something less drastic than uninstalling VirtualBox: disabling (setting to off) VirtualBox's hwvirtexclusive property. But that didn't make any difference in solving the problem (at least as of version 5.1.18). Later I found a forum entry claiming this had worked, but that was for earlier versions.
Tuesday, July 29, 2014
Configuration profile - DNS
This is an example of a DNS client configuration profile.
This is useful to streamline installations:
Assume all DNS services prerequisites and assumptions stay the same.
Also check the on-line documentation Managing DNS (Tasks) for details.
The following are the necessary customizations:
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
<service version="1" type="service" name="system/config-user">
<instance enabled="true" name="default">
<property_group type="application" name="root_account">
<propval type="astring" name="login" value="root"/>
<propval type="astring" name="password" value="$5$..."/>
<propval type="astring" name="type" value="role"/>
</property_group>
<property_group type="application" name="user_account">
<propval type="astring" name="login" value="..."/>
<propval type="astring" name="password" value="$5$..."/>
<propval type="astring" name="type" value="normal"/>
<propval type="astring" name="description" value="Primary Administrator"/>
<propval type="count" name="gid" value="10"/>
<propval type="astring" name="shell" value="/usr/bin/bash"/>
<propval type="astring" name="roles" value="root"/>
<propval type="astring" name="profiles" value="System Administrator"/>
<propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/timezone">
<instance enabled="true" name="default">
<property_group type="application" name="timezone">
<propval type="astring" name="localtime" value="..."/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/environment">
<instance enabled="true" name="init">
<property_group type="application" name="environment">
<propval type="astring" name="LANG" value="en_US.UTF-8"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/identity">
<instance enabled="true" name="node">
<property_group type="application" name="config">
<propval type="astring" name="nodename" value="zone-1"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/keymap">
<instance enabled="true" name="default">
<property_group type="system" name="keymap">
<propval type="astring" name="layout" value="US-English"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/console-login">
<instance enabled="true" name="default">
<property_group type="application" name="ttymon">
<propval type="astring" name="terminal_type" value="sun-color"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="network/physical">
<instance enabled="true" name="default">
<property_group type="application" name="netcfg">
<propval type="astring" name="active_ncp" value="DefaultFixed"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="network/install">
<instance enabled="true" name="default">
<property_group type="application" name="install_ipv4_interface">
<propval type="astring" name="address_type" value="static"/>
<propval type="net_address_v4" name="static_address" value="192.168.0.91/24"/>
<propval type="astring" name="name" value="net11/v4"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/name-service/switch">
<property_group type="application" name="config">
<propval type="astring" name="default" value="files"/>
<propval type="astring" name="host" value="files dns"/>
<propval type="astring" name="printer" value="user files"/>
</property_group>
<instance enabled="true" name="default"/>
</service>
<service version="1" type="service" name="system/name-service/cache">
<instance enabled="true" name="default"/>
</service>
<service version="1" type="service" name="network/dns/client">
<property_group type="application" name="config">
<property type="net_address" name="nameserver">
<net_address_list>
<value_node value="10.0.1.10"/>
<value_node value="10.0.1.20"/>
<value_node value="10.0.1.30"/>
</net_address_list>
</property>
<property type="astring" name="search">
<astring_list>
<value_node value="business.corp"/>
</astring_list>
</property>
</property_group>
<instance enabled="true" name="default"/>
</service>
<service version="1" type="service" name="system/ocm">
<instance enabled="true" name="default">
<property_group type="application" name="reg">
<propval type="astring" name="user" value=""/>
<propval type="astring" name="password" value=""/>
<propval type="astring" name="key" value=""/>
<propval type="astring" name="cipher" value=""/>
<propval type="astring" name="proxy_host" value=""/>
<propval type="astring" name="proxy_user" value=""/>
<propval type="astring" name="proxy_password" value=""/>
<propval type="astring" name="config_hub" value=""/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/fm/asr-notify">
<instance enabled="true" name="default">
<property_group type="application" name="autoreg">
<propval type="astring" name="user" value=""/>
<propval type="astring" name="password" value=""/>
<propval type="astring" name="index" value=""/>
<propval type="astring" name="private-key" value=""/>
<propval type="astring" name="public-key" value=""/>
<propval type="astring" name="client-id" value=""/>
<propval type="astring" name="timestamp" value=""/>
<propval type="astring" name="proxy-host" value=""/>
<propval type="astring" name="proxy-user" value=""/>
<propval type="astring" name="proxy-password" value=""/>
<propval type="astring" name="hub-endpoint" value=""/>
</property_group>
</instance>
</service>
</service_bundle>
The trailing notices for Configuration profile - NIS still apply.
Of course, there's no need to declare the DNS servers on /etc/hosts.
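For reference, a profile like the one above, saved to a file (the path below is hypothetical), can be handed to the zone installer so the zone comes up already configured:

```
# zoneadm -z zone-1 install -c /path/to/dns-profile.xml
```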
Configuration profile - NIS
This is an example of a NIS client configuration profile.
This is useful to streamline installations:
Assume all initial prerequisites stay the same.
The following are the necessary customizations.
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
<service version="1" type="service" name="system/config-user">
<instance enabled="true" name="default">
<property_group type="application" name="root_account">
<propval type="astring" name="login" value="root"/>
<propval type="astring" name="password" value="$5$..."/>
<propval type="astring" name="type" value="role"/>
</property_group>
<property_group type="application" name="user_account">
<propval type="astring" name="login" value="..."/>
<propval type="astring" name="password" value="$5$..."/>
<propval type="astring" name="type" value="normal"/>
<propval type="astring" name="description" value="Primary Administrator"/>
<propval type="count" name="gid" value="10"/>
<propval type="astring" name="shell" value="/usr/bin/bash"/>
<propval type="astring" name="roles" value="root"/>
<propval type="astring" name="profiles" value="System Administrator"/>
<propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/timezone">
<instance enabled="true" name="default">
<property_group type="application" name="timezone">
<propval type="astring" name="localtime" value="..."/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/environment">
<instance enabled="true" name="init">
<property_group type="application" name="environment">
<propval type="astring" name="LANG" value="en_US.UTF-8"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/identity">
<instance enabled="true" name="node">
<property_group type="application" name="config">
<propval type="astring" name="nodename" value="zone-1"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/keymap">
<instance enabled="true" name="default">
<property_group type="system" name="keymap">
<propval type="astring" name="layout" value="US-English"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/console-login">
<instance enabled="true" name="default">
<property_group type="application" name="ttymon">
<propval type="astring" name="terminal_type" value="sun-color"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="network/physical">
<instance enabled="true" name="default">
<property_group type="application" name="netcfg">
<propval type="astring" name="active_ncp" value="DefaultFixed"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="network/install">
<instance enabled="true" name="default">
<property_group type="application" name="install_ipv4_interface">
<propval type="astring" name="address_type" value="static"/>
<propval type="net_address_v4" name="static_address" value="192.168.0.84/24"/>
<propval type="astring" name="name" value="net9/v4"/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/name-service/switch">
<property_group type="application" name="config">
<propval type="astring" name="default" value="files nis"/>
<propval type="astring" name="printers" value="user files nis"/>
<propval type="astring" name="netgroup" value="nis"/>
</property_group>
<instance enabled="true" name="default"/>
</service>
<service version="1" type="service" name="network/nis/domain">
<property_group type="application" name="config">
<propval type="hostname" name="domainname" value="business.corp"/>
<property type="host" name="ypservers">
<host_list>
<value_node value="nis-2"/>
<value_node value="nis-3"/>
</host_list>
</property>
</property_group>
<instance enabled="true" name="default"/>
</service>
<service version="1" type="service" name="network/nis/client">
<instance enabled="true" name="default"/>
</service>
<service version="1" type="service" name="system/name-service/cache">
<instance enabled="true" name="default"/>
</service>
<service version="1" type="service" name="network/dns/client">
<instance enabled="false" name="default"/>
</service>
<service version="1" type="service" name="system/ocm">
<instance enabled="true" name="default">
<property_group type="application" name="reg">
<propval type="astring" name="user" value=""/>
<propval type="astring" name="password" value=""/>
<propval type="astring" name="key" value=""/>
<propval type="astring" name="cipher" value=""/>
<propval type="astring" name="proxy_host" value=""/>
<propval type="astring" name="proxy_user" value=""/>
<propval type="astring" name="proxy_password" value=""/>
<propval type="astring" name="config_hub" value=""/>
</property_group>
</instance>
</service>
<service version="1" type="service" name="system/fm/asr-notify">
<instance enabled="true" name="default">
<property_group type="application" name="autoreg">
<propval type="astring" name="user" value=""/>
<propval type="astring" name="password" value=""/>
<propval type="astring" name="index" value=""/>
<propval type="astring" name="private-key" value=""/>
<propval type="astring" name="public-key" value=""/>
<propval type="astring" name="client-id" value=""/>
<propval type="astring" name="timestamp" value=""/>
<propval type="astring" name="proxy-host" value=""/>
<propval type="astring" name="proxy-user" value=""/>
<propval type="astring" name="proxy-password" value=""/>
<propval type="astring" name="hub-endpoint" value=""/>
</property_group>
</instance>
</service>
</service_bundle>
Note that, as the zone configuration (shown below) uses a net resource, the network/install service must refer to the corresponding name (net9); otherwise error or warning messages will appear during installation. The same goes for the IP address, which must respect the value of allowed-address.
# zonecfg -z zone-1 info
zonename: zone-1
zonepath: /zone/zone-1
brand: solaris
autoboot: false
bootargs:
file-mac-profile: fixed-configuration
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
fs-allowed:
net:
address not specified
allowed-address: 192.168.0.84/24
configure-allowed-address: true
physical: net9
defrouter not specified
attr:
name: description
type: string
value: "zone-1"
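That name-and-address consistency can be checked mechanically before installation. Below is a minimal Python sketch (my own, not an official tool) that cross-checks the network/install excerpt of a profile against the values reported by zonecfg; the profile excerpt and the zonecfg values are taken from the listings above:

```python
import xml.etree.ElementTree as ET

# Excerpt of the profile above; only the network/install service matters here.
PROFILE = """<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="network/install">
    <instance enabled="true" name="default">
      <property_group type="application" name="install_ipv4_interface">
        <propval type="astring" name="address_type" value="static"/>
        <propval type="net_address_v4" name="static_address" value="192.168.0.84/24"/>
        <propval type="astring" name="name" value="net9/v4"/>
      </property_group>
    </instance>
  </service>
</service_bundle>"""

# Values as printed by `zonecfg -z zone-1 info` above.
ZONECFG = {"physical": "net9", "allowed-address": "192.168.0.84/24"}

def check(profile_xml, zonecfg):
    """Return a list of mismatches between the profile and the zone config."""
    root = ET.fromstring(profile_xml)
    svc = root.find(".//service[@name='network/install']")
    pg = svc.find(".//property_group[@name='install_ipv4_interface']")
    vals = {p.get("name"): p.get("value") for p in pg.findall("propval")}
    problems = []
    # The interface name must be "<physical link>/v4" to match the net resource.
    if vals.get("name") != zonecfg["physical"] + "/v4":
        problems.append("interface name mismatch")
    # The static address must respect allowed-address.
    if vals.get("static_address") != zonecfg["allowed-address"]:
        problems.append("address mismatch")
    return problems

print(check(PROFILE, ZONECFG))  # []
```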
Before the 1st boot it's recommended to update the zone's /etc/hosts.
In fact, for NIS services this is a critical step:
# cat /zone/zone-1/root/etc/hosts
#
# Copyright 2009 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost loghost
#
192.168.0.33 zone-1.business.corp zone-1
#
192.168.0.202 nis-2.business.corp nis-2
192.168.0.203 nis-3.business.corp nis-3
Note that this is an immutable zone.
An immutable zone installation behavior has been already documented.
Configuration profile
A system configuration profile is meant to avoid interactions during installations.
solaris(5) describes its usage as the -c option to subcommands.
They are roughly equivalent to Solaris 10 sysidcfg files.
The main benefits are:
- Consistency;
- Simplicity;
- Speed;
They can be used during bare-metal system installations but also during zone installations, and even a combination of both. In any case, the benefits are immense and it's worthwhile to take some time to learn how to deal with system configuration profiles.
A system configuration profile is a somewhat complex XML file.
Instead of building it from scratch, the following approach seems best:
- Generate a baseline by using sysconfig create-profile;
- Manually edit the baseline accordingly.
The 1st step is rather easy.
Simply do:
$ sysconfig create-profile -o <output_xml_file>
The 2nd step may be much harder at first, while you research which excerpts have to be inserted.
The ultimate help comes from the on-line manuals and some SMF info extraction.
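Once the changing fields are known, the 2nd step can even be scripted. A hedged sketch (the retarget helper and its arguments are my own invention for illustration; the field names follow the profile excerpts shown on this blog):

```python
import xml.etree.ElementTree as ET

# A trimmed baseline profile, as generated by `sysconfig create-profile`.
BASELINE = """<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/identity">
    <instance enabled="true" name="node">
      <property_group type="application" name="config">
        <propval type="astring" name="nodename" value="zone-1"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="network/install">
    <instance enabled="true" name="default">
      <property_group type="application" name="install_ipv4_interface">
        <propval type="net_address_v4" name="static_address" value="192.168.0.84/24"/>
        <propval type="astring" name="name" value="net9/v4"/>
      </property_group>
    </instance>
  </service>
</service_bundle>"""

def retarget(xml_text, nodename, address, link):
    """Rewrite the per-instance fields of a baseline profile."""
    root = ET.fromstring(xml_text)
    for pv in root.iter("propval"):
        if pv.get("name") == "nodename":
            pv.set("value", nodename)
        elif pv.get("name") == "static_address":
            pv.set("value", address)
        elif pv.get("name") == "name" and pv.get("value", "").endswith("/v4"):
            pv.set("value", link + "/v4")
    return ET.tostring(root, encoding="unicode")

out = retarget(BASELINE, "zone-2", "192.168.0.85/24", "net10")
```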
NOTE
A configuration profile is focused on client-side configuration.
It can't configure, for instance, a DNS server; that's another story.
See sysconfig(1M).
I have already given examples of applying a system configuration profile.
Please, refer to the following other posts:
Examples of system configuration profiles:
Monday, July 28, 2014
Immutable zone installation
This post is a kind of wrap-up of a few others, such as:
I will just show how an immutable zone gets installed.
In this example the zone won't have any specific services.
Well, at a minimum, for convenience, I chose to make it a NIS client.
In a more realistic scenario, I would further refine the configuration profile.
For instance, I could add other pre-configured SMF services.
I assume all the premises of the aforementioned posts.
The immutable zone configuration and configuration profile are ready.
In fact, there is more than one installation method.
It can happen through:
- Automated Installer (AI), not shown in this post;
- From scratch;
- Cloning.
There's nothing really special about installing from scratch:
# zoneadm -z zone-1 install -c /tmp/zone-1.xml
...
I like the cloning method because it's faster and tends to save space:
# zoneadm -z zone-1 clone -c /tmp/zone-1.xml template-zone
...
NOTE
The argument to the -c option must be an absolute path.
template-zone must not be an immutable zone already.
Here's the zone-1 zone's console on the 1st boot:
# zlogin -C zone-1
[Connected to zone 'zone-1' console]
From another terminal just boot the zone:
# zoneadm -z zone-1 boot
Now go back to the zone's console and watch:
[NOTICE: Read-only zone booting up read-write]
SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle and/or its affiliates...
Hostname: unknown
Hostname: zone-1
[NOTICE: This read-only system transiently booted read/write]
[NOTICE: Now that self assembly has been completed, the system is rebooting]
[NOTICE: Zone rebooting]
SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle and/or its affiliates...
Hostname: zone-1
zone-1 console login:
It's amazing how the system detects that I'm installing an immutable zone: it boots the zone in read-write mode during installation and, once installation finishes, automatically reboots it to assume its immutable state. This saves administrators some work and makes sure no interactions are required.
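For scripted deployments it can be handy to wait for that final reboot before proceeding. A small sketch that scans a captured console transcript for the NOTICE markers shown above (the helper name is mine; treat it as illustrative):

```python
# Markers taken from the console transcript above.
TRANSIENT = "[NOTICE: This read-only system transiently booted read/write]"
REBOOT = "[NOTICE: Zone rebooting]"

def self_assembly_done(console_text):
    """True once the transient read-write boot has completed and the
    zone has rebooted into its immutable state (login prompt seen)."""
    lines = console_text.splitlines()
    if TRANSIENT not in lines or REBOOT not in lines:
        return False
    # The final (immutable) boot starts after the reboot notice.
    return any("console login:" in l for l in lines[lines.index(REBOOT):])

sample = """[NOTICE: Read-only zone booting up read-write]
SunOS Release 5.11 Version 11.1 64-bit
Hostname: zone-1
[NOTICE: This read-only system transiently booted read/write]
[NOTICE: Zone rebooting]
SunOS Release 5.11 Version 11.1 64-bit
Hostname: zone-1
zone-1 console login:"""
print(self_assembly_done(sample))  # True
```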
Friday, April 11, 2014
NIS & logins - troubleshooting #1
Despite all the quality and stability of Solaris 11, sometimes things go wrong.
Before pinning anything down it's important to assess all the variables.
For instance, upon forcibly rebooting an x86 host with a few Solaris 11 zones, one of them acting as an NFS server within a NIS services infrastructure, something odd happened; the following values simply vanished:
- The config/* of the name-service/switch SMF service.
- The config/domainname of the nis/domain SMF service.
- The sharectl nfsmapid_domain value.
The resolution was simply reentering the correct values, of course.
But until narrowing down what was wrong, the symptoms were diverse:
- The NIS users were not being able to log in.
- The NFS server was not resolving any uid and gid.
- The NIS clients were listing nobody for user and group.
Nevertheless, it's most important to determine the cause.
In my specific case I suspect the well-known issue described in:
Using ZFS Storage Pools in VirtualBox
Yes, I'm using VirtualBox for composing many posts.
Yes, due to a video adapter failure the host system had to be halted for repair.
And, yes, I haven't enabled the cache flushing on the VMs.
Wednesday, January 8, 2014
DNS server installation
DNS server installation in itself is a rather ordinary sysadmin task.
Nevertheless, some simple but important measures are frequently disregarded.
The problem is that these oversights or omissions lead to security issues.
As a consequence, everything that relies on DNS is affected as well.
As such, it's not difficult to see that the impact can be disastrous.
As a good practice:
Start right from the very beginning.
As the British say: Don't make a rod for your own back.
So here's a few important measures to running a DNS server:
- Keep the software as up-to-date as possible;
- Consider a robust networking scheme, such as IPMP or DLMP;
- Run the daemon on an immutable non-global zone (NGZ);
- Run the daemon under a non-root user account;
- Countermeasure MAC-spoof and IP-spoof;
- Consider IPsec where feasible.
Assume a NGZ, not yet (of course) immutable.
Suppose the following interface configuration is present:
dns-1# dladm show-link
LINK CLASS MTU STATE OVER
net7 phys 1500 up --
dns-1# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net7 Ethernet up 1000 full e1000g11
dns-1# dladm show-phys -m
LINK SLOT ADDRESS INUSE CLIENT
net7 primary 8:0:27:ad:65:e yes net7
dns-1# dladm show-linkprop -p allowed-ips,protection
LINK PROPERTY PERM VALUE DEFAULT POSSIBLE
net7 allowed-ips rw 192.168.0.17 -- --
net7 protection rw ip-nospoof -- mac-nospoof,
restricted,
ip-nospoof,
dhcp-nospoof
NOTE
For the examples I'm using VirtualBox 4.3.6 on a Solaris 11 host.
On the host, there are several vnics over a single etherstub.
Such vnics are being provided to VirtualBox guests.
Guests' non-global zones can't use the anet resource.
The only choice in this particular case is the net resource.
In this scenario it seems impossible to set the mac-nospoof.
On a real world scenario anet resources would fill the gap.
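The dladm output above can also be consumed from a script, for instance to audit that ip-nospoof is really in place. A sketch that parses `dladm show-linkprop`-style output (the sample text is the transcript above; continuation lines of the POSSIBLE column are skipped):

```python
def linkprops(output):
    """Parse `dladm show-linkprop -p ...` output into a
    {(link, property): value} map.  Wrapped continuation lines in the
    POSSIBLE column are ignored."""
    props = {}
    for line in output.splitlines()[1:]:          # skip the header line
        fields = line.split()
        if len(fields) >= 4 and fields[2] == "rw":
            props[(fields[0], fields[1])] = fields[3]
    return props

# The transcript shown above.
SAMPLE = """LINK     PROPERTY     PERM VALUE        DEFAULT   POSSIBLE
net7     allowed-ips  rw   192.168.0.17 --        --
net7     protection   rw   ip-nospoof   --        mac-nospoof,
                           restricted,
                           ip-nospoof,
                           dhcp-nospoof"""

props = linkprops(SAMPLE)
print(props[("net7", "protection")])    # ip-nospoof
print(props[("net7", "allowed-ips")])   # 192.168.0.17
```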
Check for a reasonably up-to-date software.
If available, update the IPS repository to the latest SRU.
# pkg info -r service/network/dns/bind | egrep '(State|Ver)'
State: Not installed
Version: 9.6.3.8.0 (9.6-ESV-R8)
Check the respective ISC-BIND resources on the Internet:
For instance, for BIND 9.6-ESV-R8 there exists vulnerability #56.
| 56 | 2013-6320 | A Winsock API Bug can cause a side-effect affecting BIND ACLs |
After assessing all the information, the conclusion is that it's safe to proceed as the environment is comprised only of Unix hosts which aren't affected.
# pkg install -nv service/network/dns/bind
Packages to install: 1
Estimated space available: 13.98 GB
Estimated space to be consumed: 19.20 MB
Create boot environment: No
Create backup boot environment: No
Services to change: 1
Rebuild boot archive: No
Changed packages:
solaris
service/network/dns/bind
None -> 9.6.3.8.0,...
Services:
restart_fmri:
svc:/system/manifest-import:default
# svcs '*dns*'
STATE STIME FMRI
disabled 14:01:15 svc:/network/dns/client:default
disabled 14:01:18 svc:/network/dns/multicast:default
# pkg install service/network/dns/bind
Packages to install: 1
Create boot environment: No
Create backup boot environment: No
Services to change: 1
DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 1/1 14/14 0.4/0.4 778k/s
PHASE ITEMS
Installing new actions 44/44
Updating package state database Done
Updating image state Done
Creating fast lookup database Done
# pkg info service/network/dns/bind | egrep '(State|Ver)'
State: Installed
Version: 9.6.3.8.0 (9.6-ESV-R8)
# svcs '*dns*'
STATE STIME FMRI
disabled 14:01:15 svc:/network/dns/client:default
disabled 14:01:18 svc:/network/dns/multicast:default
disabled 15:04:12 svc:/network/dns/server:default
The next step is to perform the DNS server configuration.
Tuesday, January 7, 2014
Zones L2 & L3 protection
L2 (layer 2) and L3 (layer 3) refer, respectively, to MAC and IP addresses.
Zones are particularly attractive for securing those two networking entities.
This is because zones are fundamental to the IaaS cloud service model.
This helps assure zones won't be a vector for common L2/L3 attacks.
Zones can use two different kinds of protected networking resources:
net
Refers to either a physical NIC or a predefined virtual NIC (vnic).
The protection is given by the allowed-address and the configure-allowed-address parameters. This will automatically set ip-nospoof for the underlying link, but currently it makes it impossible to set mac-nospoof (a bug or limitation of sorts).
anet
Automatically creates a vnic according to parameters.
Naturally, the underlying link (lower-link) can't be another vnic.
The protection is given by the link-protection parameter, but
its setting is somewhat automatic along with the setting of the allowed-address and configure-allowed-address parameters. I say somewhat because they set ip-nospoof, but it may be good to add mac-nospoof as well.
Hence, at a minimum, no matter if net or anet is used, what's important is to set the allowed-address and configure-allowed-address parameters. But if using anet, then also set the link-protection to mac-nospoof.
For instance:
# zonecfg -z server-1b info net
net:
address not specified
allowed-address: 192.168.0.12/24
configure-allowed-address: true
physical: net2
defrouter not specified
But the protection technology isn't restricted to non-global zones (NGZ).
In fact, this networking protection technology is totally independent.
As such, it can be used in global zones (GZ) as well.
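As a quick audit of the minimum settings just discussed, a zonecfg export can be scanned for them. A hedged sketch (the function and its checks are illustrative, not an official validation; the export format follows the examples on this blog):

```python
def protection_gaps(zonecfg_export):
    """Scan a `zonecfg export` script for the minimum L2/L3
    protection settings discussed above."""
    text = zonecfg_export
    gaps = []
    if "set allowed-address=" not in text:
        gaps.append("allowed-address not set")
    if "set configure-allowed-address=true" not in text:
        gaps.append("configure-allowed-address not set")
    # For anet resources, mac-nospoof should be added explicitly.
    if "add anet" in text and "mac-nospoof" not in text:
        gaps.append("anet without mac-nospoof")
    return gaps

# A net-resource export like the server-1b example above.
EXPORT = """create -b
set zonepath=/zone/server-1b
add net
set allowed-address=192.168.0.12/24
set configure-allowed-address=true
set physical=net2
end"""
print(protection_gaps(EXPORT))  # []
```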
Immutable zones
The advent of immutable zones is a great improvement to Solaris.
Now it's possible to set portions of the root file system as read-only.
The improvement is twofold:
Security
If the zone virtual environment somehow gets compromised, then the read-only root file system will be a tough barrier helping to limit the exposed surface.
For instance, a DNS service running on a dedicated immutable zone doesn't require the associated SMF service tuning in order to run the service under a non-root account.
Management
This is easier to understand. Once perfectly set up, it's assured that the configuration won't be changed by accident or even by tampering. It's known that many problems arise from poor change management. Now the operating system supports and enforces the expected behavior. Great!
There are 3 degrees of protection:
Complete
This is given by the zone property file-mac-profile=strict.
Nothing can be changed and data can only be logged remotely.
Fixed
This is given by file-mac-profile=fixed-configuration.
Logging can be local and portions of /var are writable.
For instance, NIS services seem to work fine.
Flexible
This is given by file-mac-profile=flexible-configuration.
This differs from Fixed by allowing a writable /etc.
To check if a zone is configured as immutable:
# zonecfg -z server-1b info file-mac-profile
file-mac-profile: fixed-configuration
To check if a zone is running as immutable:
# zoneadm list -p | grep server-1b | cut -d: -f8,9
R:fixed-configuration
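The same check can be done from a script by splitting the colon-separated `zoneadm list -p` output, exactly as the cut(1) pipeline does. A sketch (the sample line's field layout is assumed from the -f8,9 example above; the uuid field is made up):

```python
def running_profile(zoneadm_p_line):
    """Extract the (read-only flag, file-mac-profile) pair from one
    line of `zoneadm list -p` output -- fields 8 and 9, matching the
    `cut -d: -f8,9` pipeline above."""
    fields = zoneadm_p_line.split(":")
    return fields[7], fields[8]

# A plausible -p line for the zone above (layout assumed, uuid illustrative).
line = "1:server-1b:running:/zone/server-1b:0000-0000:solaris:excl:R:fixed-configuration"
print(running_profile(line))  # ('R', 'fixed-configuration')
```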
NOTE
Immutable zones don't protect non-root file systems.
Thus other forms of protection and recovery must be devised.
NOTE
To manage an immutable zone, it's necessary to temporarily remove the immutability / read-only enforcements:
# zoneadm -z <zonename> boot -w
If the zone is already running immutable you don't need to halt or shutdown and then perform the above command; simply use:
# zoneadm -z <zonename> reboot -w
In the latter case (a reboot for management) the message [NOTICE: Read-only zone rebooting read-write] will follow on the zone's console.
After the management work, to reenter the immutable state simply use init 6 or shutdown -r for an orderly shutdown as usual.
Friday, December 27, 2013
Zone cloning
In this post I intend to exemplify cloning a non-global zone (NGZ).
By the end it should be quite obvious why cloning is so powerful and desirable.
In this context I understand cloning as a duplication within the same host.
An identical NGZ on another host is a different topic, related to migration.
The underlying support for cloning is ultimately provided by ZFS.
I make the following assumptions:
$ pkg info entire | grep Version
Version: 0.5.11 (Oracle Solaris 11.1.13.6.0)
$ zpool list zone
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
zone 15.9G 622M 15.3G 3% 1.00x ONLINE -
$ zfs list -r -d 1 zone
NAME USED AVAIL REFER MOUNTPOINT
zone 622M 15.0G 35K /zone
zone/server-1a 479M 15.0G 33K /zone/server-1a
zone/server-1b 70.8M 15.0G 34K /zone/server-1b
zone/server-1c 70.7M 15.0G 34K /zone/server-1c
$ pkg publisher
PUBLISHER TYPE STATUS P LOCATION
solaris origin online F http://192.168.0.100/
$ svcs '*dns*'
STATE STIME FMRI
disabled 9:17:59 svc:/network/dns/client:default
disabled 9:18:02 svc:/network/dns/multicast:default
disabled 9:18:10 svc:/network/dns/server:default
# dladm show-phys -o link,state,speed,duplex,device
LINK STATE SPEED DUPLEX DEVICE
net0 up 1000 full e1000g4
net3 up 1000 full e1000g7
server-1c/net3 up 1000 full e1000g7
net2 up 1000 full e1000g6
server-1b/net2 up 1000 full e1000g6
net1 up 1000 full e1000g5
server-1a/net1 up 1000 full e1000g5
net4 unknown 0 unknown e1000g8
net7 unknown 0 unknown e1000g11
net6 unknown 0 unknown e1000g10
net5 unknown 0 unknown e1000g9
Let's create another NGZ (server-1d) as a clone of server-1a.
Note from the previous output that server-1b and server-1c are clones.
More clearly:
$ zfs list -t all -r -d 2 -o name,used zone/server-1a
NAME USED
zone/server-1a 479M
zone/server-1a/rpool 479M
zone/server-1a/rpool@server-1c_snap00 0
zone/server-1a/rpool@server-1b_snap00 0
zone/server-1a/rpool/ROOT 479M
zone/server-1a/rpool/VARSHARE 39K
zone/server-1a/rpool/export 134K
$ zfs get -o value origin zone/server-{1b,1c}/rpool
VALUE
zone/server-1a/rpool@server-1b_snap00
zone/server-1a/rpool@server-1c_snap00
Extract the source NGZ (server-1a) configuration:
# zonecfg -z server-1a export -f /tmp/server-1a.cfg
# cat /tmp/server-1a.cfg
create -b
set brand=solaris
set zonepath=/zone/server-1a
set autoboot=true
set ip-type=exclusive
add net
set allowed-address=192.168.0.11/24
set configure-allowed-address=true
set physical=net1
end
add attr
set name=description
set type=string
set value=Template
end
Edit the target NGZ (server-1d) configuration accordingly:
(attention: if net4 is already a vnic, then use net instead of anet)
# cp /tmp/server-{1a,1d}.cfg
# cat /tmp/server-1d.cfg
create -b
set brand=solaris
set zonepath=/zone/server-1d
set autoboot=true
set ip-type=exclusive
add net
set allowed-address=192.168.0.14/24
set configure-allowed-address=true
set physical=net4
end
add attr
set name=description
set type=string
set value="NIS server"
end
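Since the edit from server-1a.cfg to server-1d.cfg is purely mechanical, it can be scripted; a small sketch performing exactly the substitutions shown above:

```python
# The exported source configuration, as shown above.
SOURCE_CFG = """create -b
set brand=solaris
set zonepath=/zone/server-1a
set autoboot=true
set ip-type=exclusive
add net
set allowed-address=192.168.0.11/24
set configure-allowed-address=true
set physical=net1
end
add attr
set name=description
set type=string
set value=Template
end"""

# The substitutions performed manually above.
EDITS = {
    "/zone/server-1a": "/zone/server-1d",
    "192.168.0.11/24": "192.168.0.14/24",
    "set physical=net1": "set physical=net4",
    "set value=Template": 'set value="NIS server"',
}

def retarget(cfg, edits):
    """Apply literal text substitutions to a zonecfg export script."""
    for old, new in edits.items():
        cfg = cfg.replace(old, new)
    return cfg

target = retarget(SOURCE_CFG, EDITS)
```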
Import the target NGZ (server-1d) configuration:
# zonecfg -z server-1d -f /tmp/server-1d.cfg
# zonecfg -z server-1d info
zonename: server-1d
zonepath: /zone/server-1d
brand: solaris
autoboot: true
bootargs:
file-mac-profile:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
fs-allowed:
net:
address not specified
allowed-address: 192.168.0.14/24
configure-allowed-address: true
physical: net4
defrouter not specified
attr:
name: description
type: string
value: "NIS server"
# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
1 server-1c running /zone/server-1c solaris excl
2 server-1b running /zone/server-1b solaris excl
3 server-1a running /zone/server-1a solaris excl
- server-1d configured /zone/server-1d solaris excl
Create a configuration profile to help streamline this and future cloning.
NOTE
# sysconfig create-profile -o /tmp/server-1d.xml
SC profile successfully generated.
Exiting System Configuration Tool. Log is available at:
/system/volatile/sysconfig/sysconfig.log.6643
If a baseline configuration profile already exists, then adjust it accordingly. In general, the following fields will be updated (beyond the deletion of the aforementioned IPv6 section). Here's an unrelated/independent example:
# diff /tmp/dns-1.xml /tmp/dns-2.xml
40c40
< <propval type="astring" name="nodename" value="dns-1"/>
---
> <propval type="astring" name="nodename" value="dns-2"/>
69,70c69,70
< <propval type="net_address_v4" name="static_address" value="192.168.0.84/24"/>
< <propval type="astring" name="name" value="net9/v4"/>
---
> <propval type="net_address_v4" name="static_address" value="192.168.0.87/24"/>
> <propval type="astring" name="name" value="net10/v4"/>
Shutdown the source NGZ (server-1a) for performing the cloning.
In general, there should be a golden template NGZ ready to be cloned.
# zoneadm -z server-1a shutdown
# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
1 server-1c running /zone/server-1c solaris excl
2 server-1b running /zone/server-1b solaris excl
- server-1a installed /zone/server-1a solaris excl
- server-1d configured /zone/server-1d solaris excl
# zoneadm -z server-1d clone -c /tmp/server-1d.xml server-1a
The following ZFS file system(s) have been created:
zone/server-1d
Progress being logged to ...
Log saved in non-global zone as ...
# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
1 server-1c running /zone/server-1c solaris excl
2 server-1b running /zone/server-1b solaris excl
- server-1a installed /zone/server-1a solaris excl
- server-1d installed /zone/server-1d solaris excl
Resume the source NGZ (server-1a) to its fully operational state.
As previously noted, this isn't needed in case a golden template is being used.
# zoneadm -z server-1a boot
Before booting the cloned NGZ (server-1d) for the 1st time, do minor adjustments such as manually editing /zone/server-1d/root/etc/hosts. If much more elaborated measures are needed them there's a chance that cloning may not be the best solution. Of course, it all depends on a case by case analysis.
# cat /zone/server-1d/root/etc/hosts
#
# Copyright 2009 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost loghost
#
192.168.0.14 server-1d.business.corp server-1d
The above /etc/hosts example may not be adequate to NIS services, unless the even more insecure local network dynamic discovery is used. For NIS services direct mode, typically and in addition, it's also required to add at least two NIS servers, such as:
# cat /zone/server-1d/root/etc/hosts
...
192.168.0.14 server-1d.business.corp server-1d
#
192.168.0.202 nis-2.business.corp nis-2
192.168.0.203 nis-3.business.corp nis-3
For NIS services, the relevant part of the configuration profile changes from:
<service version="1" type="service" name="system/name-service/switch">
<property_group type="application" name="config">
<propval type="astring" name="default" value="files"/>
<propval type="astring" name="printer" value="user files"/>
</property_group>
<instance enabled="true" name="default"/>
</service>
To:
<service version="1" type="service" name="system/name-service/switch">
<property_group type="application" name="config">
<propval type="astring" name="default" value="files nis"/>
<propval type="astring" name="printers" value="user files nis"/>
<propval type="astring" name="netgroup" value="nis"/>
</property_group>
<instance enabled="true" name="default"/>
</service>
<service version="1" type="service" name="network/nis/domain">
<property_group type="application" name="config">
<propval type="hostname" name="domainname" value="business.corp"/>
<property type="host" name="ypservers">
<host_list>
<value_node value="nis-2"/>
<value_node value="nis-3"/>
</host_list>
</property>
</property_group>
<instance enabled="true" name="default"/>
</service>
<service version="1" type="service" name="network/nis/client">
<instance enabled="true" name="default"/>
</service>
One might well be wondering how did I find out what to substitute for in the above XML excerpt. For more detail on how to obtain to obtain the above changes, please, read my other posts about SMF info extraction and NIS & NSS. Of course, I found out about which services to inspect based on the on-line manuals and references.
For the final step it's advisable to use two terminals. One for the console monitoring of the 1st boot. Other for issuing the zone boot command. Depending on the existing configuration in the source NGZ, it will take a little while for the system to realize the inherent changes to be applied to the newly cloned NGZ.
# zlogin -C server-1d
[Connected to zone 'server-1d' console]
SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle ... All rights reserved.
Hostname: unknown
Hostname: server-1d
server-1d console login:
Hit ~. (or ~~. if nested twice, and so on...) and watch the results:
# zfs list -r -t all -d 1 zone
NAME USED AVAIL REFER MOUNTPOINT
zone 669M 15.0G 36K /zone
zone/server-1a 487M 15.0G 33K /zone/server-1a
zone/server-1b 70.9M 15.0G 34K /zone/server-1b
zone/server-1c 70.8M 15.0G 34K /zone/server-1c
zone/server-1d 38.1M 15.0G 34K /zone/server-1d
Thanks to ZFS the cloning is naturally fast and extremely space efficient.
We were able to quickly get a new fully functional OS instance with just around 40 MB! In addition to the near zero virtualization overhead, this is a unique advantage of Solaris.
There is one caveat when it comes to updating a system with multiple cloned zones. As updates are applied, they will be duplicated on each and every cloned zone, thus lessening the space savings benefits (zone server-1f was cloned from server-1a after an update process).
# zfs list -r -d 1 zone
NAME USED AVAIL REFER MOUNTPOINT
zone 1.85G 13.8G 38K /zone
zone/server-1a 187M 13.8G 33K /zone/server-1a
zone/server-1b 304M 13.8G 34K /zone/server-1b
zone/server-1c 301M 13.8G 34K /zone/server-1c
zone/server-1d 301M 13.8G 34K /zone/server-1d
zone/server-1e 739M 13.8G 35K /zone/server-1e
zone/server-1f 59.7M 13.8G 34K /zone/server-1f
To mitigate the problem, the update plan must take into consideration the redeployment of cloned zones from updated golden templates. This implies a best practice:
Of course, I'm not convinced.
By the end it should be quite obvious why cloning is so powerful and desirable.
In this context I understand cloning as a duplication within the same host.
Creating an identical NGZ on another host is a different topic, related to migration.
The underlying support for cloning is ultimately provided by ZFS.
I make the following assumptions:
- The system runs Solaris 11 or higher.
- There is a dedicated ZFS pool for NGZ zone paths.
- There is an accessible IPS local repository.
- There's no DNS service implemented yet.
- There is an available (unused) network interface.
$ pkg info entire | grep Version
Version: 0.5.11 (Oracle Solaris 11.1.13.6.0)
$ zpool list zone
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
zone 15.9G 622M 15.3G 3% 1.00x ONLINE -
$ zfs list -r -d 1 zone
NAME USED AVAIL REFER MOUNTPOINT
zone 622M 15.0G 35K /zone
zone/server-1a 479M 15.0G 33K /zone/server-1a
zone/server-1b 70.8M 15.0G 34K /zone/server-1b
zone/server-1c 70.7M 15.0G 34K /zone/server-1c
$ pkg publisher
PUBLISHER TYPE STATUS P LOCATION
solaris origin online F http://192.168.0.100/
$ svcs '*dns*'
STATE STIME FMRI
disabled 9:17:59 svc:/network/dns/client:default
disabled 9:18:02 svc:/network/dns/multicast:default
disabled 9:18:10 svc:/network/dns/server:default
# dladm show-phys -o link,state,speed,duplex,device
LINK STATE SPEED DUPLEX DEVICE
net0 up 1000 full e1000g4
net3 up 1000 full e1000g7
server-1c/net3 up 1000 full e1000g7
net2 up 1000 full e1000g6
server-1b/net2 up 1000 full e1000g6
net1 up 1000 full e1000g5
server-1a/net1 up 1000 full e1000g5
net4 unknown 0 unknown e1000g8
net7 unknown 0 unknown e1000g11
net6 unknown 0 unknown e1000g10
net5 unknown 0 unknown e1000g9
Let's create another NGZ (server-1d) as a clone of server-1a.
Note from the previous output that server-1b and server-1c are clones.
More clearly:
$ zfs list -t all -r -d 2 -o name,used zone/server-1a
NAME USED
zone/server-1a 479M
zone/server-1a/rpool 479M
zone/server-1a/rpool@server-1c_snap00 0
zone/server-1a/rpool@server-1b_snap00 0
zone/server-1a/rpool/ROOT 479M
zone/server-1a/rpool/VARSHARE 39K
zone/server-1a/rpool/export 134K
$ zfs get -o value origin zone/server-{1b,1c}/rpool
VALUE
zone/server-1a/rpool@server-1b_snap00
zone/server-1a/rpool@server-1c_snap00
Extract the source NGZ (server-1a) configuration:
# zonecfg -z server-1a export -f /tmp/server-1a.cfg
# cat /tmp/server-1a.cfg
create -b
set brand=solaris
set zonepath=/zone/server-1a
set autoboot=true
set ip-type=exclusive
add net
set allowed-address=192.168.0.11/24
set configure-allowed-address=true
set physical=net1
end
add attr
set name=description
set type=string
set value=Template
end
Edit the target NGZ (server-1d) configuration accordingly:
(attention: if net4 is already a vnic, then use net instead of anet)
# cp /tmp/server-{1a,1d}.cfg
# cat /tmp/server-1d.cfg
create -b
set brand=solaris
set zonepath=/zone/server-1d
set autoboot=true
set ip-type=exclusive
add net
set allowed-address=192.168.0.14/24
set configure-allowed-address=true
set physical=net4
end
add attr
set name=description
set type=string
set value="NIS server"
end
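For repeatable deployments, the copy-and-edit step above can also be scripted. A minimal sed sketch (the substitutions mirror the listings above; the stub heredoc merely stands in for the real exported file so the snippet runs stand-alone):

```shell
# /tmp/server-1a.cfg normally comes from "zonecfg -z server-1a export";
# create a stub matching the listing above only if it's missing.
[ -f /tmp/server-1a.cfg ] || cat > /tmp/server-1a.cfg <<'EOF'
create -b
set brand=solaris
set zonepath=/zone/server-1a
set autoboot=true
set ip-type=exclusive
add net
set allowed-address=192.168.0.11/24
set configure-allowed-address=true
set physical=net1
end
add attr
set name=description
set type=string
set value=Template
end
EOF
# Derive the server-1d configuration from the server-1a one.
sed -e 's|server-1a|server-1d|' \
    -e 's|192.168.0.11/24|192.168.0.14/24|' \
    -e 's|=net1$|=net4|' \
    -e 's|=Template$|="NIS server"|' \
    /tmp/server-1a.cfg > /tmp/server-1d.cfg
```

Review the result with cat before importing it; note that the global s|server-1a|server-1d| also rewrites the zonepath, which is intended here.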
Import the target NGZ (server-1d) configuration:
# zonecfg -z server-1d -f /tmp/server-1d.cfg
# zonecfg -z server-1d info
zonename: server-1d
zonepath: /zone/server-1d
brand: solaris
autoboot: true
bootargs:
file-mac-profile:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
fs-allowed:
net:
address not specified
allowed-address: 192.168.0.14/24
configure-allowed-address: true
physical: net4
defrouter not specified
attr:
name: description
type: string
value: "NIS server"
# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
1 server-1c running /zone/server-1c solaris excl
2 server-1b running /zone/server-1b solaris excl
3 server-1a running /zone/server-1a solaris excl
- server-1d configured /zone/server-1d solaris excl
Create a configuration profile to help streamline this and future cloning.
NOTE
During the creation of the configuration profile, selecting None for the networking configuration may avoid mistakes, but it's probably better to specify the correct settings. It doesn't seem a good idea to include the name services configuration while operating the sysconfig create-profile utility: the results seem rather terse or minimalist. I would rather manually edit the configuration profile afterwards (using SMF info extraction from other golden or template systems), as later exemplified for the case of enabling NIS services right from the start. Furthermore, there may be complaints about IPv6, hence I prefer to edit out its default configuration. If using the anet zone configuration, net0 is probably the correct choice; but if a net physical interface is being referenced in the zone configuration, then choose the corresponding interface.
An interesting alternative is to copy from a configuration profile template initially generated by sysconfig create-profile and then manually adjust it accordingly.
In other words, my advice is:
- Specify the correct network settings, using net0 for VNICs (anets) or the matching physical interface from the zone configuration. The IP address must respect any allowed-address clause in the zone configuration. Example: Configuration profile - NIS client
- Do not specify any name services configuration when initially generating the profile via sysconfig create-profile. Manually edit the initially generated profile and add name services and anything else that suits the particular purpose. Example: Configuration profile - NIS client
Taking the above advice into consideration, create the very first (initial) configuration profile, to be customized and subsequently used as a baseline for similar installations:
- Remove the IPv6 configuration section altogether if you'll use just IPv4. That is, remove the following lines from the configuration profile:
<property_group type="application" name="install_ipv6_interface">
<propval type="astring" name="stateful" value="yes"/>
<propval type="astring" name="address_type" value="addrconf"/>
<propval type="astring" name="name" value="net10/v6"/>
</property_group>
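The removal can be scripted with a sed range delete. Below is a stand-alone sketch on a stub fragment; on a real profile you'd run the same sed against the generated XML (e.g. the hypothetical /tmp/server-1d.xml):

```shell
# Stub profile fragment, created only so the sketch runs stand-alone;
# a real SC profile comes from sysconfig create-profile.
cat > /tmp/sc-sample.xml <<'EOF'
<property_group type="application" name="install_ipv4_interface">
  <propval type="astring" name="name" value="net10/v4"/>
</property_group>
<property_group type="application" name="install_ipv6_interface">
  <propval type="astring" name="stateful" value="yes"/>
  <propval type="astring" name="address_type" value="addrconf"/>
  <propval type="astring" name="name" value="net10/v6"/>
</property_group>
EOF
# Delete everything from the IPv6 group's opening tag to its closing tag.
sed '/name="install_ipv6_interface"/,/<\/property_group>/d' /tmp/sc-sample.xml
```

Redirecting the output to a new file (and inspecting it) is safer than editing in place; the IPv4 group is left untouched.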
# sysconfig create-profile -o /tmp/server-1d.xml
SC profile successfully generated.
Exiting System Configuration Tool. Log is available at:
/system/volatile/sysconfig/sysconfig.log.6643
If a baseline configuration profile already exists, then adjust it accordingly. In general, the following fields will be updated (beyond the deletion of the aforementioned IPv6 section). Here's an unrelated/independent example:
# diff /tmp/dns-1.xml /tmp/dns-2.xml
40c40
< <propval type="astring" name="nodename" value="dns-1"/>
---
> <propval type="astring" name="nodename" value="dns-2"/>
69,70c69,70
< <propval type="net_address_v4" name="static_address" value="192.168.0.84/24"/>
< <propval type="astring" name="name" value="net9/v4"/>
---
> <propval type="net_address_v4" name="static_address" value="192.168.0.87/24"/>
> <propval type="astring" name="name" value="net10/v4"/>
Shut down the source NGZ (server-1a) before performing the cloning.
In general, there should be a golden template NGZ ready to be cloned.
# zoneadm -z server-1a shutdown
# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
1 server-1c running /zone/server-1c solaris excl
2 server-1b running /zone/server-1b solaris excl
- server-1a installed /zone/server-1a solaris excl
- server-1d configured /zone/server-1d solaris excl
# zoneadm -z server-1d clone -c /tmp/server-1d.xml server-1a
The following ZFS file system(s) have been created:
zone/server-1d
Progress being logged to ...
Log saved in non-global zone as ...
# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
1 server-1c running /zone/server-1c solaris excl
2 server-1b running /zone/server-1b solaris excl
- server-1a installed /zone/server-1a solaris excl
- server-1d installed /zone/server-1d solaris excl
Resume the source NGZ (server-1a) to its fully operational state.
As previously noted, this isn't needed if a golden template is being used.
# zoneadm -z server-1a boot
Before booting the cloned NGZ (server-1d) for the 1st time, make minor adjustments such as manually editing /zone/server-1d/root/etc/hosts. If much more elaborate measures are needed, then there's a chance that cloning may not be the best solution. Of course, it all depends on a case-by-case analysis.
# cat /zone/server-1d/root/etc/hosts
#
# Copyright 2009 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost loghost
#
192.168.0.14 server-1d.business.corp server-1d
The above /etc/hosts example may not be adequate for NIS services, unless the even more insecure local network dynamic discovery is used. For NIS direct mode, it's typically also required to add at least two NIS servers, such as:
# cat /zone/server-1d/root/etc/hosts
...
192.168.0.14 server-1d.business.corp server-1d
#
192.168.0.202 nis-2.business.corp nis-2
192.168.0.203 nis-3.business.corp nis-3
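Such an edit is easy to script from the global zone before the 1st boot. A small sketch (the zone path and NIS entries are the example's; the fallback path is only there so the sketch also runs outside a real global zone):

```shell
# Append the NIS servers to the cloned zone's hosts file before its 1st boot.
HOSTS=/zone/server-1d/root/etc/hosts
# Fall back to a local file when the zone path isn't present/writable.
[ -w /zone/server-1d/root/etc ] || HOSTS=./hosts.server-1d
cat >> "$HOSTS" <<'EOF'
#
192.168.0.202	nis-2.business.corp	nis-2
192.168.0.203	nis-3.business.corp	nis-3
EOF
```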
For NIS services, the relevant part of the configuration profile changes from:
<service version="1" type="service" name="system/name-service/switch">
<property_group type="application" name="config">
<propval type="astring" name="default" value="files"/>
<propval type="astring" name="printer" value="user files"/>
</property_group>
<instance enabled="true" name="default"/>
</service>
To:
<service version="1" type="service" name="system/name-service/switch">
<property_group type="application" name="config">
<propval type="astring" name="default" value="files nis"/>
<propval type="astring" name="printers" value="user files nis"/>
<propval type="astring" name="netgroup" value="nis"/>
</property_group>
<instance enabled="true" name="default"/>
</service>
<service version="1" type="service" name="network/nis/domain">
<property_group type="application" name="config">
<propval type="hostname" name="domainname" value="business.corp"/>
<property type="host" name="ypservers">
<host_list>
<value_node value="nis-2"/>
<value_node value="nis-3"/>
</host_list>
</property>
</property_group>
<instance enabled="true" name="default"/>
</service>
<service version="1" type="service" name="network/nis/client">
<instance enabled="true" name="default"/>
</service>
One might well be wondering how I found out what to substitute in the above XML excerpt. For more details on how to obtain these changes, please read my other posts about SMF info extraction and NIS & NSS. Of course, I found out which services to inspect from the on-line manuals and references.
For the final step it's advisable to use two terminals: one for monitoring the console during the 1st boot, the other for issuing the zone boot command. Depending on the existing configuration of the source NGZ, it will take a little while for the system to apply the inherent changes to the newly cloned NGZ.
# zlogin -C server-1d
[Connected to zone 'server-1d' console]
# zoneadm -z server-1d boot        (from another terminal)
[NOTICE: Zone booting up]
SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle ... All rights reserved.
Hostname: unknown
Hostname: server-1d
server-1d console login:
Hit ~. (or ~~. if nested twice, and so on...) to detach from the console and watch the results:
# zfs list -r -t all -d 1 zone
NAME USED AVAIL REFER MOUNTPOINT
zone 669M 15.0G 36K /zone
zone/server-1a 487M 15.0G 33K /zone/server-1a
zone/server-1b 70.9M 15.0G 34K /zone/server-1b
zone/server-1c 70.8M 15.0G 34K /zone/server-1c
zone/server-1d 38.1M 15.0G 34K /zone/server-1d
Thanks to ZFS, the cloning is naturally fast and extremely space-efficient.
We were able to quickly get a new, fully functional OS instance in just around 40 MB! In addition to the near-zero virtualization overhead, this is a unique advantage of Solaris.
There is one caveat when it comes to updating a system with multiple cloned zones: as updates are applied, they are duplicated on each and every cloned zone, thus lessening the space-savings benefit (zone server-1f was cloned from server-1a after an update).
# zfs list -r -d 1 zone
NAME USED AVAIL REFER MOUNTPOINT
zone 1.85G 13.8G 38K /zone
zone/server-1a 187M 13.8G 33K /zone/server-1a
zone/server-1b 304M 13.8G 34K /zone/server-1b
zone/server-1c 301M 13.8G 34K /zone/server-1c
zone/server-1d 301M 13.8G 34K /zone/server-1d
zone/server-1e 739M 13.8G 35K /zone/server-1e
zone/server-1f 59.7M 13.8G 34K /zone/server-1f
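To quantify the duplication, the USED column can be totaled. A small awk sketch over captured output (the sample rows are taken from the listing above; if your zfs supports the parseable `-p` flag, sizes come out as plain bytes and no suffix conversion is needed):

```shell
# Convert human-readable USED sizes (K/M/G) to bytes and total them.
awk '
function bytes(s,  u, m) {
    u = substr(s, length(s))
    m = (u == "K" ? 1024 : u == "M" ? 1048576 : u == "G" ? 1073741824 : 1)
    return (s + 0) * m
}
{ b = bytes($2); total += b; printf "%-16s %12d\n", $1, b }
END { printf "%-16s %12d\n", "TOTAL", total }
' <<'EOF'
zone/server-1a 187M
zone/server-1b 304M
zone/server-1c 301M
zone/server-1d 301M
zone/server-1e 739M
zone/server-1f 59.7M
EOF
```

In practice you would pipe `zfs list -r -d 1 -o name,used zone` (minus the header) into the same script.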
To mitigate the problem, the update plan must take into consideration the redeployment of cloned zones from updated golden templates. This implies a best practice:
Keep the actual configuration and installation scripts synchronized.
I wonder if deduplication would be effective. Of course, I'm not convinced.