Showing posts with label Solaris 11 Express. Show all posts

Wednesday, May 30, 2018

Installing Docutils 0.14

I've recently built and installed Python 2.7.15 on Solaris 11 Express. Soon after, I tried to install the latest version of Mercurial (4.6 at the time of this writing), but at the last moment I learned of a required dependency on Docutils, a Python utility which describes itself as:
"Docutils is an open-source text processing system for processing plaintext documentation into useful formats, such as HTML, LaTeX, man-pages, open-document or XML. It includes reStructuredText, the easy to read, easy to use, what-you-see-is-what-you-get plaintext markup language."
People seem to prefer, for obvious reasons I presume, the abbreviation ReST over the rather long name reStructuredText. I visited the ReST Primer in order to learn a little something about it and, well, OK, thank you. Anyway:
"This primer introduces the most common features of reStructuredText, but there are a lot more to explore. The Quick reStructuredText user reference is a good place to go next. For complete details, the reStructuredText Markup Specification is the place to go [1]."
To install it:
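A minimal sketch of a from-source installation follows; the download URL, tarball name and availability of wget are assumptions for illustration, so adjust them for your environment:

```shell
# Fetch and unpack the Docutils 0.14 source tarball
# (URL is an assumption; check the official Docutils download page)
wget https://pypi.python.org/packages/source/d/docutils/docutils-0.14.tar.gz
gzip -dc docutils-0.14.tar.gz | tar xf -
cd docutils-0.14

# Install into the Python 2.7.15 built earlier (run as root,
# or pass --prefix for a user-writable location)
python setup.py install

# Quick smoke test
python -c "import docutils; print docutils.__version__"
```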

Tuesday, May 29, 2018

Building Python 2.7.15

In this post I'll document how to build the (as of this writing) latest version of the Python 2 series. But instead of doing it under a 64-bit Solaris 11.3 GA, this time I'll do it under a 32-bit Solaris 11 Express SRU-14. I'll do so because Python has become an important technology upon which many others depend, and because running 32-bit software is still important in the 3rd world, where most people remain in a deprived situation and there isn't enough discarded 64-bit hardware for them to reuse. Running a 32-bit Solaris on this legacy hardware is pure gold to them: deprived people get a chance of running the latest Python on legacy hardware they can afford. It's true one could argue, for instance, that running FreeBSD or even Linux could be more appealing in this scenario, but that's not Solaris, which IMO is still overall better than both of them.
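The build itself can be sketched as follows. This is a typical from-source procedure, not necessarily the exact invocation behind the session below; the installation prefix and configure flags are assumptions:

```shell
# Unpack the Python 2.7.15 source
gzip -dc Python-2.7.15.tgz | tar xf -
cd Python-2.7.15

# Configure the build; /usr/local is an assumed prefix
./configure --prefix=/usr/local --enable-unicode=ucs4

# Solaris typically ships GNU make as gmake
gmake
gmake install
```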

$ python
Python 2.7.15 (default, May 27 2018, 21:38:19)
[GCC 3.4.3 (csl-sol210-3_4-20050802)] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>>
 


$ cat /etc/release
            Oracle Solaris 11 Express snv_151a X86
     Copyright (c) 2010, Oracle ...  All rights reserved.
                  Assembled 04 November 2010


$ pkg info entire |grep Summary |sed 's/.*[\(]\(.*\)[\)].*/\1/'
Oracle Solaris 11 Express 2010.11 SRU 14


$ isainfo -v
32-bit i386 applications
        ssse3 ahf sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
 


Friday, May 25, 2018

Updating Java ad hoc

If one somehow depends on Java, and it's not the case of a legacy dependency insulated on an island system or a very restricted subnetwork, then Java will certainly need to be updated from time to time, because of security fixes and required new functionality and support. In addition, if the system isn't a more recent Solaris 11, or one can't or doesn't want IPS taking care of the update, then it will be necessary to perform an ad hoc Java update. That's probably the case with a Solaris 11 Express system or even Solaris 10, although I do highlight that on Solaris 10 there's fortunately an additional option: using the old SVr4 package system to get the update in place in a presumably easier way.
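Such an ad hoc update can be sketched roughly as below. The tarball name, JDK version and the /usr/jdk layout with a "latest" symlink are assumptions for illustration; verify them against the actual system:

```shell
# Unpack the newly downloaded JDK tarball (name is an assumption)
# alongside the existing releases
cd /usr/jdk
gzip -dc /tmp/jdk-7u80-solaris-i586.tar.gz | tar xf -

# Repoint the "latest" symlink at the new release
rm -f latest
ln -s jdk1.7.0_80 latest

# Verify the version now in effect
/usr/jdk/latest/bin/java -version
```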

Saturday, November 18, 2017

The Firefox issue

The Firefox versions that were shipped on Solaris 11 Express (even across all of its SRUs) and on Solaris 11.3 GA (and probably across all of its SRUs as well) are, sadly, rather outdated (for Solaris 11 Express that's even worse). For Solaris 11.3 the version is 31.8.0:


Wednesday, October 25, 2017

Solaris 11 Express awesome update

Solaris 11 Express was a transitional version from Solaris 10 to Solaris 11, post OpenSolaris 2009.06. Since it became publicly available, everybody could already see how promising Solaris 11 would be, in many ways, partly because of the OpenSolaris community. Nowadays one can better recognize what has been achieved up to Solaris 11.3, which, at the time of this writing, has been around for about 3 years. There's also an early announcement that Solaris 11.4 will be available by mid-2018, and according to the planned EoLN (end-of-life notices) chances are that many desktop features will be further trimmed out, although GCC should finally be upgraded, hopefully to a version greater than 7; let's see.

NOTE
Although by now Solaris 11 Express is officially obsolete and OpenSolaris has been left behind by Oracle, there are still community efforts to reestablish open-source variants, such as OpenIndiana and SmartOS, both running over a lagging-behind kernel called Illumos, based on the original open-source version of SunOS 5.11. But unfortunately, that kernel isn't nearly as modernized and optimized as Oracle's current closed-source product.
But in spite of all that, Solaris 11 Express' major benefit was to early incorporate many Solaris advancements over Solaris 10 and still run on a legacy 32-bit platform! Yes! This is key, because despite the official business strategies and propaganda focusing on 64-bit mid- and high-end big-iron, the truth is that a lot of legacy hardware can still be put to good service for the crowds in the 3rd world who are striving to evolve and do not count on a lot of money or other powerful, current resources.

The initial GA release of Solaris 11 Express didn't perform well, perhaps due to a lot of debugging hooks (code assertions) and conservative strategies; after all, it was a key transitional milestone of Solaris. But the fact is that those who had paid for a support contract could benefit from regular updates, called SRUs (service release updates), which fixed many issues and greatly improved system performance, including booting speed. By the last general SRU, SRU-13, things were noticeably better.

For instance, Solaris 11 Express SRU-13 rivals the speed of Solaris 11.3 GA and certainly runs faster than OpenIndiana 2017.04. In my opinion, a relative comparison among "recent" Solaris distros could be depicted by the following table:


But wait! Things could become even better, because Engineered Systems for high-performance grid-computing started to see the light of day, and some of them were to be driven by Solaris 11 Express! This transitional version of Solaris then became so acclaimed and accredited that it deserved an additional and special SRU update targeted at Exalogic, SRU-14. SRU-14 could not be applied to ordinary systems because it had a special dependency associated with the Exalogic Engineered System. Of course, there should be good reasons for such a constraint. But the fact is, or at least seems to be, that in general it runs amazingly well on ordinary systems too!
To enjoy all the power of SRU-14 on ordinary systems, some homework is necessary in order to lift the impeding constraint embedded in the update.

NOTE
I'll assume that a support repository has already been made available by means described on procedures I've visited on the past, such as, the IPS repository update post. 
For instance, consider the following local support repository:

# zfs list -o mountpoint -H -r /depot
/depot
/depot/solaris
/depot/solaris/11e
/depot/solaris/11e/release
/depot/solaris/11e/sru-13
/depot/solaris/11e/sru-14


At first, a routine update attempt from SRU-13 to SRU-14 fails:

# pkg update --be-name solaris-11e-sru-14
Creating Plan ...
pkg update: No solution was found to satisfy constraints
Plan Creation: Package solver has not found a solution
               to update to latest available versions.
               This may indicate an overly constrained
               set of packages are installed.

latest incorporations:

  pkg://solaris/consolidation/gnome/gnome-incorporation@...151.0.1.14...
  pkg://solaris/consolidation/sfw/sfw-incorporation@...151.0.1.14...
  pkg://solaris/consolidation/osnet/osnet-incorporation@...151.0.1.14...
  pkg://solaris/entire@...151.0.1.14...

The following indicates why the system cannot update to the latest version:

    Reject:  pkg://solaris/entire@...151.0.1.14...
    Reason:  A version for 'require-any' dependency on
             pkg:/system/platform/exalogic/firstrun cannot be found

From the diagnostic messages above it's possible to realize that SRU-14 was crafted to be applied as part of an automated installation of Solaris 11 Express targeted at the Exalogic Engineered System. The only constraint was a missing IPS package delivering a one-time-run SMF service performing initial configurations for Exalogic:
pkg:/system/platform/exalogic/firstrun.
NOTE
It's noticeable that in more recent releases the documentation of the technique for tailoring one-time-run IPS packages has evolved, while it was completely lacking on Solaris 11 Express. Nevertheless, the simpler and more straightforward instructions found in the Solaris 11/11 Information Library work perfectly under Solaris 11 Express. Despite the evolution in documentation, it unfortunately still lacks a lot of clarity by sticking to recipes instead of building knowledge. For this post I'll stay as much as possible with the clearer and simpler procedures, which seem to still work equally well in terms of backward compatibility.
The major steps in creating the missing package are:
  1. Creating a SMF service manifest for a dummy service.
  2. Deploying the special IPS package unlocking SRU-14.
The above steps do nothing more than complete the set of requirements that unlock the SRU-14 installation. I'm not sure if I could create an "empty" package; that is, maybe the dummy SMF service is unneeded after all. Anyway, the difficulties lie just in the intrinsics of these steps themselves, not in the big picture. Let's visit each of the above steps in more detail:

1. Creating a SMF service manifest for a dummy service.

In recent versions of Solaris, this has been somewhat simplified by svcbundle(1M), but I won't rely on it at this moment. I prefer to know all the details and stay in control as much as possible.
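For reference only, on more recent Solaris releases a roughly equivalent transient dummy service manifest could be generated by svcbundle(1M); the property names below follow my reading of the Solaris 11 documentation and are not available on Solaris 11 Express, so treat this as an untested sketch:

```shell
# Generate a transient dummy service manifest (Solaris 11.1+ only;
# property names per svcbundle(1M), not verified on every release)
svcbundle -o sru-14-unlock.xml \
  -s service-name=sru-14-unlock \
  -s model=transient \
  -s start-method=":true"
```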

# mkdir /tmp/sru-14-unlock
# cd !!$

# cat sru-14-unlock.xml
...
<?xml version="1.0" ?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">

<service_bundle name="sru-14-unlock" type="manifest">

  <service name="sru-14-unlock" type="service" version="1">

    <create_default_instance enabled="false"/>
    <single_instance/>

    <dependency name="multi_user" type="service" grouping="require_all" restart_on="none">
      <service_fmri value="svc:/milestone/multi-user:default"/>
    </dependency>

    <exec_method name="start"   type="method" exec=":true" timeout_seconds="60"/>
    <exec_method name="stop"    type="method" exec=":true" timeout_seconds="60"/>
    <exec_method name="refresh" type="method" exec=":true" timeout_seconds="60"/>


    <!-- must be defined at this exact place -->
    <property_group name="startd" type="framework">
      <propval name="duration" type="astring" value="transient"/>
    </property_group>

  </service>


</service_bundle>

# svccfg validate !!$
...

2. Deploying the special IPS package unlocking SRU-14.

This amounts to the creation and installation of the missing package.
What matters most is the package name, highlighted below.

# pwd
/tmp/sru-14-unlock

# mkdir -p ./prototype/lib/svc/manifest/site
# cp ./sru-14-unlock.xml !!$
...

# cat sru-14-unlock.p5m
set \
  name=pkg.fmri \
  value=system/platform/exalogic/firstrun@1.0,5.11

set \
  name=pkg.summary \
  value="SRU-14 unlock."

set \
  name=pkg.description \
  value="Dummy package to unlock SRU-14 installation."

set \
  name=org.opensolaris.smf.fmri \
  value=svc:/sru-14-unlock

set \
  name=org.opensolaris.consolidation \
  value=userland

set \
  name=info.classification \
  value="org.opensolaris.category.2008:System/Packaging"

file \
  path=lib/svc/manifest/site/sru-14-unlock.xml \
  mode=0444 owner=root group=sys

NOTE
The extension .p5m most probably means package v5 manifest.
# pkglint !!$
Lint engine setup...
Starting lint run...
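As an aside, the file action of a manifest like this need not be written by hand; pkgsend(1) can emit actions from the prototype directory, which can then be pasted into the .p5m and adjusted:

```shell
# Emit dir/file actions for everything under ./prototype;
# review and merge the output into the .p5m, fixing mode/owner/group
pkgsend generate ./prototype
```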

 
This package must be placed into a package repository, from which it can subsequently be installed. If the installation is to be part of an automated install (AI), then the repository must be created at a location accessible to AI clients during the first boot. In this post I'm not using AI, so I'll just create the repository under /tmp, which suffices for a one-time interactive install.

# pwd
/tmp/sru-14-unlock

# pkgrepo create ./repo
# pkgrepo add-publisher -s ./repo solaris

# pkgsend publish -d ./prototype -s ./repo sru-14-unlock.p5m
pkg://solaris/system/platform/exalogic/firstrun@1.0,5.11:...Z
PUBLISHED


# pkg list -af -g ./repo
NAME (PUBLISHER)                     VERSION    IFO
system/platform/exalogic/firstrun    1.0        ---


# pkg info -g ./repo firstrun
       Name: system/platform/exalogic/firstrun
    Summary: SRU-14 unlock.
Description: This dummy package...
      State: Not installed
  Publisher: solaris
    Version: 1.0
... Release: 5.11
     Branch: None
   ... Date: ...
       Size: 928.00 B
       FMRI: pkg://solaris/system/platform/exalogic/firstrun...


# pkg install -g ./repo -nv firstrun
           Packages to install:        1
     Estimated space available:   ... GB
Estimated space to be consumed: 14.31 MB
       Create boot environment:       No
Create backup boot environment:       No
          Rebuild boot archive:       No

Changed packages:
solaris
  system/platform/exalogic/firstrun
    None -> 1.0,5.11:...


# pkg install -g ./repo -v firstrun 
...
DOWNLOAD               PKGS       FILES    XFER (MB)
Completed               1/1         1/1      0.0/0.0

PHASE                                        ACTIONS
Install Phase                                    7/7

PHASE                                          ITEMS
Package State Update Phase                       1/1
Image State Update Phase                         2/2

PHASE                                          ITEMS
Reading Existing Index                           8/8
Indexing Packages                                1/1


# pkg info firstrun
       Name: system/platform/exalogic/firstrun
    Summary: SRU-14 unlock.
Description: Dummy package to unlock SRU-14 installation.
   Category: System/Packaging
      State: Installed
  Publisher: solaris
    Version: 1.0
...


# svcadm restart manifest-import

On the console one sees:
Loading smf(5) service descriptions: 1/1

# svcs -a |grep sru-14
disabled       19:36:52 svc:/sru-14-unlock:default


And voilà!

For a default text-installation of Solaris 11 Express with SRU-13 one gets:

# pkg update -nv --be-name solaris-11e-sru-14
            Packages to update:        15 
     Estimated space available:    ... GB
Estimated space to be consumed: 366.68 MB
       Create boot environment:       Yes
     Activate boot environment:       Yes
Create backup boot environment:        No 
          Rebuild boot archive:       Yes

Changed packages:

solaris
  SUNWcs
    0.5.11,5.11-0.151.0.1.13:... -> 0.5.11,5.11-0.151.0.1.14:...
  consolidation/gnome/gnome-incorporation
    0.5.11,5.11-0.151.0.1.13:... -> 0.5.11,5.11-0.151.0.1.14:...
  consolidation/osnet/osnet-incorporation
    0.5.11,5.11-0.151.0.1.13:... -> 0.5.11,5.11-0.151.0.1.14:...
  consolidation/sfw/sfw-incorporation
    0.5.11,5.11-0.151.0.1.13:... -> 0.5.11,5.11-0.151.0.1.14:...
  database/sqlite-3
    3.6.23,5.11-0.151.0.1.4:...  ->  3.7.5,5.11-0.151.0.1.14:...
  entire
    0.5.11,5.11-0.151.0.1.13:... -> 0.5.11,5.11-0.151.0.1.14:...
  image/library/libpng
    0.5.11,5.11-0.151.0.1:...    -> 0.5.11,5.11-0.151.0.1.14:...
  library/desktop/gtk2
    0.5.11,5.11-0.151.0.1:...    -> 0.5.11,5.11-0.151.0.1.14:...
  library/libtasn1
    0.5.11,5.11-0.151.0.1:...    -> 0.5.11,5.11-0.151.0.1.14:...
  runtime/python-26
    2.6.4,5.11-0.151.0.1:...     ->  2.6.4,5.11-0.151.0.1.14:...
  system/file-system/zfs
    0.5.11,5.11-0.151.0.1.11:... -> 0.5.11,5.11-0.151.0.1.14:...
  system/kernel
    0.5.11,5.11-0.151.0.1.13:... -> 0.5.11,5.11-0.151.0.1.14:...
  system/kernel/platform
    0.5.11,5.11-0.151.0.1.12:... -> 0.5.11,5.11-0.151.0.1.14:...
  system/library
    0.5.11,5.11-0.151.0.1.13:... -> 0.5.11,5.11-0.151.0.1.14:...
  system/network/nis
    0.5.11,5.11-0.151.0.1.8:...  -> 0.5.11,5.11-0.151.0.1.14:...


# pkg update --be-name solaris-11e-sru-14
            Packages to update:  15
       Create boot environment: Yes
Create backup boot environment:  No

DOWNLOAD               PKGS       FILES    XFER (MB)
Completed             15/15   1743/1743    41.8/41.8

PHASE                                        ACTIONS
Removal Phase                                  58/58
Install Phase                                  52/52
Update Phase                               4258/4258

PHASE                                          ITEMS
Package State Update Phase                     30/30
Package Cache Update Phase                     15/15
Image State Update Phase                         2/2

PHASE                                          ITEMS
Reading Existing Index                           8/8
Indexing Packages                              15/15

A clone of ... exists and has been updated and activated.
On the next boot the Boot Environment solaris-11e-sru-14 will be
mounted on '/'.  Reboot when ready to switch to this updated BE.

----------------------------------------------------------
NOTE: Please review release notes posted at:

http://www.oracle.com/pls/topic/lookup?ctx=E23824&id=SERNS
----------------------------------------------------------


# init 6

And this concludes this post.
 

Thursday, October 5, 2017

ZFS basic mirroring

Mirroring is a traditional strategy for providing fault-tolerance which became popular for secondary storage systems, typically hard-disks. ZFS improves the strategy by introducing checksums, preventing eventually corrupted data (due to bit-rot or some other component malfunction) on one side of the mirror from being replicated to the other, healthy side of the mirror. This ZFS enhancement has been unique, since in general it doesn't seem viable to implement it solely at the physical layer (controller), as it may have dependencies at the logical layer (file-systems). ZFS achieves its goal by abstracting the physical layer into storage pools over which logical datasets (file-systems and raw volumes) are managed.

Establishing mirrors within storage pools is a relatively simple task, especially in more recent versions of Solaris such as Solaris 11.x. But in late Solaris 10 U1x, as well as in Solaris 11 Express, some initial disk preparation was required. In addition, for root pools under these older systems it was necessary to manually install (via installboot(1M) or installgrub(1M)) the boot-loader on new disks just integrated into a mirror. On more recent versions of Solaris the boot-loader management for mirrored root pools was automated, yet it can still eventually be managed manually via the install-bootloader sub-command of bootadm(1M).

Another usual difference contrasting older systems (Solaris 10 U1x and Solaris 11 Express) with newer Solaris 11.x is how the underlying disks comprising storage pools are seen with respect to disk-labeling: SMI (VTOC) for older systems and disks, and EFI (GPT) for newer ones. The most important implications of these two types of label are that SMI labels impose a limit of 2 TB of usable storage even on larger disks, and that they are the only supported label for root pools under older systems. Typically, referring to whole-disks (which are preferred for ZFS over legacy slices/partitions), when an SMI label is used disk names take the form c?[t?]d?s0; otherwise they lack the trailing s0.

Here are some examples of how to successfully, and hassle-free, establish a basic mirror on storage pools initially consisting of a single disk:


1) When SMI-labeled disks are required for a pool:

This is typical for older systems in general, for some SPARC systems or for not-so-old systems that don't yet support EFI devices on the root pool.

I assume that the disks were already appropriately prepared.
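That preparation classically amounts to replicating the SMI label (partition table) of the original disk onto the new one; a sketch, assuming the same device names as in the transcript below:

```shell
# Copy the VTOC of the existing root-pool disk to the new mirror candidate
# (s2 is the traditional whole-disk slice on SMI-labeled disks)
prtvtoc /dev/rdsk/c8t0d0s2 | fmthard -s - /dev/rdsk/c8t1d0s2
```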

# zpool status
  pool: rpool
 state: ONLINE
 scan: ...
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          c8t0d0s0    ONLINE       0     0     0

errors: No known data errors


# zpool attach -f rpool c8t0d0s0 c8t1d0s0
Make sure to wait until resilver is done before rebooting.


# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.
        The pool will continue to function,
        possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since ...
    1.50G scanned out of 3.78G at ...M/s, 0h2m to go
    1.50G resilvered, 39.70% done
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c8t0d0s0  ONLINE       0     0     0
            c8t1d0s0  ONLINE       0     0     0  (resilvering)

errors: No known data errors


As this is a root pool, when the resilver is complete one can optionally make sure the boot-loader is properly installed on the newly attached disk as well. But according to the official documentation, this extra step is only mandatory when a zpool replace command is issued on the root pool. For an i86pc system, if one decides so, the command would be similar to:

# installgrub \
  /boot/grub/stage1 /boot/grub/stage2 \
  /dev/rdsk/c8t1d0s0

or the newer and far superior:

# bootadm install-bootloader


2) Systems supporting EFI-labeled disks for any kind of pool:

This is good news, as no tedious disk preparation is required beforehand; moreover, it would be totally useless, since during an attachment the disk is automatically formatted and labeled as necessary.

Therefore, the attachment procedure is as simple as:

# zpool attach rpool c1t0d0 c1t1d0

NOTE
It's possible to have N disks in a mirror, which means the mirror will withstand as many as N-1 members failing at a given time. This may seem highly exaggerated at first, but it may make sense in some scenarios.

Let me exclude, as an insane case, a 3-way mirror for a root pool with over-2-TB disks: a root pool should really never require that much space, so nothing justifies a 3rd member to prevent a double-fault while resilvering from a single-fault.

For instance, an N-way mirror (N>3) for a non-root critical pool may make sense and be a straightforward solution if one intends to keep critical data replicated at N-2 remote locations. The mirrored devices (not disks) forming this pool could be iSCSI LUNs from separate remote storage facilities (preferably also backed by ZFS), as long as each LUN isn't comprised of many individual disks and as long as the pool also keeps local log and cache devices, indispensable for better equalizing disparate remote storage performance and link latencies.
NOTE
Mirrors can be created or added right from the start with a single command, such as:

# zpool create hq \
  mirror c0t0d0 c1t0d0 \
  mirror c0t1d0 c2t0d0

(each mirror above will resist a single disk and controller failure)
(it's similar to RAID-10, but RAID-Z(1) could rival it if the I/O block is over 128KB)

# zpool add hq \
  mirror c1t1d0 c2t1d0

(the hq pool above is now striping over 3 2-way mirrors)
(a better solution could be a RAID-Z2 scheme, depending on block size)
 
Each mirror on the example above is known as a vdev.
Not surprisingly, ZFS stripes I/O along the top-level vdevs.
By the way, root pools support just 1 mirror vdev.

To remove a device from a mirror:
# zpool detach rpool c1t1d0

To replace a device in a mirror:
# zpool replace rpool c1t1d0 c1t2d0

And that seems the pretty much basics.
  

Tuesday, October 3, 2017

IPMP basics

IPMP is an acronym for IP multi-path, which roughly means resilience in terms of IP connectivity by means of redundancy provided by multiple paths of communication. This resilience is also commonly referred to as fault-tolerance. In fact, multi-path is a general strategy used for resilience in critical subsystems. Another example could be MPIO, which stands for multi-path I/O, but that's another story.

A welcome consequence of multiple paths is that performance can be enhanced as well, since streaming data can flow through multiple paths in parallel. But, due to the connection-oriented nature of TCP/IP, this performance enhancement frequently narrows down to outbound traffic, that is, traffic flowing out of the IPMP system to remote clients.

IPMP has been available since older Solaris releases, and I would say it has become progressively better and simpler to configure since its inception. In line with another post of mine, Legacy & Future, I'll be focusing on Solaris 11 as my discussion cut-point. Things started to get significantly simpler and better with Solaris 11 Express, and really top-notch onward with Solaris 11.x.

I could talk only about Solaris 11.x, but I'll also address Solaris 11 Express because it's still a nice back-end system capable of running on the x86 (32-bit) platform. As everybody knows, beyond mid-range and high-end big-iron SPARC systems, Solaris 11.x only runs on x86-64 (64-bit) platforms. Oracle has completely dropped support for Solaris 11 Express, as it was marketed as a short-term transition from Solaris 10 to Solaris 11; the last update was SRU-13 (or SRU-14, focused on some Engineered Systems). But the truth is that Solaris 11 Express is an awesome system for near-zero or very small IT budget business models based on legacy x86 hardware, and it still rivals much more recent Linux and BSD alternatives because it embeds very advanced key technologies such as ZFS and BEs (boot-environments), beyond, of course, other high-end technologies such as IPMP. So if you still have this piece of software, consider using it, especially because it's quite possible to independently update some of its crucial components and applications based on open software.

Back to IPMP: the central idea is to group a given number of network interfaces and associate the group to a pool of new (data) addresses by which the group will be publicly accessible. The group is materialized as a new network interface in the system, whose operation and availability are provided by the collaboration of the underlying group members. In general the number of member network interfaces should be greater than the number of data addresses, and some member network interfaces can each be set as a hot stand-by for the group. When stand-by network interfaces are present the IPMP group is said to be of an active-standby type; otherwise it's an active-active type. Unless you really have lots of network interfaces to spare, an active-standby IPMP group would waste a precious network resource, so otherwise prefer an active-active IPMP group.

NOTE
Sometimes there's some confusion, argumentation and comparison with another technology known as Link-Aggregation, but they are quite different beasts, although both contribute to resilience and performance. One advantage of IPMP is that it operates at layer-3, thus having none of the special layer-2 driver and hardware requirements that Link-Aggregation does. They are not mutually exclusive and can even be combined, but perhaps each one is better suited to a specific scenario or requirement. For instance, a back-to-back connection between two servers is better implemented via Link-Aggregation, while outbound traffic load spreading may be better deployed via IPMP.
Let's go straight to a minimal practical example, first on Solaris 11 Express and then on Solaris 11.3. Don't be fooled by the simplicity, because the solution is still quite powerful and significant to many application infrastructure models, and not easily attained, if at all, by more modern competitor systems. By the way, I will assume that some techniques and technologies (NCP, routes and name resolution) described for manual wired connections will be implicitly used as needed.

EXAMPLE:

Setting up an active-active IPMP group from interfaces net2 and net3, whose link names have been respectively renamed from the e1000g2 and e1000g3 originally available on the system.

# dladm show-phys
LINK      MEDIA         STATE      SPEED  DUPLEX    DEVICE
...
net2      Ethernet      unknown    0      half      e1000g2
net3      Ethernet      unknown    0      half      e1000g3

...
 
The newly generated network interface representing the new IPMP group will stop working only if both net2 and net3 fail simultaneously. As long as both underlying interfaces remain operational, up to 2 Gbps of overall outbound bandwidth will be available for multiple TCP connections, while still no more than 1 Gbps of inbound bandwidth is available per single TCP connection.

NOTE
It may still not be crystal clear, but having N underlying 1 Gbps interfaces in a given IPMP group will generally provide an overall outbound bandwidth of N Gbps for that IPMP group. The inbound bandwidth is a different story: if M < N data addresses are configured for an IPMP group of 1 Gbps underlying interfaces, then inbound traffic will still be limited to 1 Gbps per TCP session, even though it may be possible to simultaneously have M such sessions.

On Solaris 11 Express:

Under Solaris 11 Express the update of the IPMP management interface was still transitional, and its crucial parts must still be managed via the old ifconfig command. Do not attempt to manage the underlying interfaces net2 and net3 via the new ipadm command for anything related to IPMP.

Configure the IPMP group and its data-address.
The group will subsequently receive the underlying member interfaces:
# ifconfig ipmp0 ipmp 192.168.1.230/24 up
Configure the underlying interfaces:
# ifconfig net2 plumb group ipmp0 up
# ifconfig net3 plumb group ipmp0 up
 
NOTE
In the case of an active-standby configuration it would be necessary to choose one of the underlying interfaces as a stand-by interface by simply inserting the standby keyword just before the up keyword.
Verify the configuration:
# ifconfig -a |ggrep -A2 'ipmp0:'
ipmp0: flags=8001000842<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> ...
        inet 192.168.1.230 netmask ffffff00 broadcast 192.168.1.255
        groupname ipmp0


# ifconfig -a |ggrep -A3 -E 'net(2|3):'
net2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> ...
      inet 0.0.0.0 netmask ff000000
      groupname ipmp0
      ether 8:0:27:fe:f6:44
net3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> ...
      inet 0.0.0.0 netmask ff000000
      groupname ipmp0
      ether 8:0:27:c3:94:2


# ipadm show-if |ggrep -E 'ipmp0|net2|net3'
ipmp0  ok   bm--I-----4- ---
net2   ok   bm--------4- ---
net3   ok   bm--------4- ---


# ipadm show-addr 'ipmp0/'
ADDROBJ    TYPE     STATE  ADDR
ipmp0/?    static   ok     192.168.1.230/24


# ipmpstat -g
GROUP   GROUPNAME  STATE  FDT  INTERFACES
ipmp0   ipmp0      ok     --   net3 net2


# ipmpstat -i
INTERFACE ACTIVE GROUP  FLAGS   LINK PROBE    STATE
net3      yes    ipmp0  ------- up   disabled ok
net2      yes    ipmp0  --mb--- up   disabled ok
Make the configuration persistent across reboots:
(the order of parameters below is important in obtaining the exact results)
# cat /etc/hostname.ipmp0
ipmp 192.168.1.230/24 up

# cat /etc/hostname.net2
group ipmp0 up

# cat /etc/hostname.net3
group ipmp0 up
To eventually disable and clean up the IPMP group:
# rm /etc/hostname.net3
# rm /etc/hostname.net2
# rm /etc/hostname.ipmp0
 
# ifconfig ipmp0 down
# ifconfig net2 down
# ifconfig net3 down
 
# ifconfig net2 group ""
# ifconfig net3 group ""

# ifconfig net2 unplumb
# ifconfig net3 unplumb
# ifconfig ipmp0 unplumb

On Solaris 11.3:

Under Solaris 11.3 things are somewhat easier. IPMP management has been fully integrated into the ipadm command, and persistency across reboots is on by default, requiring no additional actions.

Configure the underlying interfaces:
# ipadm create-ip net2
# ipadm create-ip net3
Configure the IPMP group:
# ipadm create-ipmp -i net2,net3 ipmp0
Set the data address for the IPMP group:
# ipadm create-addr -T static -a 192.168.1.230/24 ipmp0
ipmp0/v4


NOTE

Unfortunately, perhaps due to some subtle bug in the GA release of Solaris 11.3, it seems safer to only set the IPMP group data-address after the underlying interfaces have been added to the IPMP group.
To eventually disable and clean up the IPMP group:
(the order is important, again, due to some subtle bug)
# ipadm delete-addr ipmp0/v4
# ipadm remove-ipmp -i net2,net3 ipmp0
# ipadm delete-ipmp ipmp0
# ipadm delete-ip net2
# ipadm delete-ip net3
In the rare case where a standby underlying interface is still desired, for instance net4, it suffices to perform the following commands:
# ipadm create-ip net4
# ipadm set-ifprop -p standby=on -m ip net4
# ipadm add-ipmp -i net4 ipmp0
 
# ipadm show-if
IFNAME   CLASS    STATE   ACTIVE OVER
lo0      loopback ok      yes    --
ipmp0    ipmp     ok      yes    net2 net3 net4
net2     ip       ok      yes    --
net3     ip       ok      yes    --
net4     ip       ok      no     --


# ipmpstat -g

GROUP    GROUPNAME  STATE  FDT  INTERFACES
ipmp0    ipmp0      ok     --   net3 net2 (net4)

That's all very powerful and not that difficult to set up.
For sure one more cool technology available in Solaris!
   

Monday, April 22, 2013

Solaris 11 Express

I'd like to take a break for an honorable mention to Solaris 11 Express.
Although it isn't as good as Solaris 11, it still can be a quite good 32-bit SOHO platform.
It runs on old hardware that can't run Solaris 11.
It has even been used with Engineered Systems.
It offers many application packages, efficient virtualization and, of course, ZFS!
It's a pity that the last useful support repository update was SRU 13.
But you may even live well without it and still get many benefits.

The major annoyance is due to the new syntax Solaris 11 adopted in many commands.
Unfortunately in these cases we have no option but to cope with it.
In spite of that, I think it's still better than FreeBSD or OpenIndiana.
Not to mention that you're on a better path to Solaris 11.