Friday, June 28, 2013

Profile shells

Profile shells, described in pfexec(1), are part of the rbac(5) implementation.
They are great but have to be used with caution, following best practices.

One important point to highlight, possibly as a best practice, is that the built-in rights profiles, prof_attr(4), which are located in /etc/security/prof_attr.d and also viewable with profiles(1), may exhibit unexpected behavior when combined with profile shells.
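
For instance, the rights profiles assigned to a user can be listed with profiles(1) (hypothetical output for the user of the example below):

$ profiles prime
System Administrator
Basic Solaris User
All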

Example:

$ who am i
prime      pts/3        Jun 24 11:26    (:0.0)

$ whoami
prime

$ mkdir test

$ ll -d test
drwxr-xr-x   2 root     staff        117 Jun 24 11:28 test

$ ll -d /tmp
drwxrwxrwt  18 root     sys         2.2K Jun 24 11:28 /tmp

$ pkg info entire | grep Version
       Version: 0.5.11 (Oracle Solaris 11.1.8.4.0)


$ id -a
uid=60004(prime) gid=10(staff) groups=10(staff)

$ getent passwd prime
prime:x:60004:10:Prime Admin:/home/prime:/usr/bin/pfbash

$ getent passwd 60004
prime:x:60004:10:Prime Admin:/home/prime:/usr/bin/pfbash


$ getent user_attr prime
prime::::type=normal;...;profiles=System Administrator;roles=...


As seen above, unexpected side-effects, such as persistent file system attributes (in this case just the ownership), take place: if the same user (in this case prime) subsequently tries to put something in /tmp/test, it won't succeed, because under the System Administrator rights profile cp(1) and mv(1) don't get the same privilege elevation (uid=0) as mkdir(1).
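
For instance, continuing the session above, the attempt would be expected to fail along these lines (hypothetical output, with a made-up file name):

$ cp data.txt /tmp/test
cp: cannot create /tmp/test/data.txt: Permission denied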

Still in other words:
Profile shells provide automatic temporary privilege elevation.
The automation is handy, unless there are persistent side-effects.
I'd say profile shells are convenient and recommended only when it can be assured that the side-effects aren't persistent or harmful. As it's impossible to control this for the built-in rights profiles, it's better to avoid combining them.
 
Hence it's still better to explicitly use the traditional pfexec(1) than to set a profile shell as the login shell (with the -e option of passwd(1)). Indeed, pfexec(1) is still more convenient and safer than the ubiquitous sudo(1M).
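
As a minimal sketch of that explicit style, assuming a regular login shell (say, /usr/bin/bash) instead of /usr/bin/pfbash, elevation happens only when deliberately requested (hypothetical output):

$ mkdir /tmp/test2
$ ll -d /tmp/test2
drwxr-xr-x   2 prime    staff    ... /tmp/test2

$ pfexec mkdir /tmp/test3
$ ll -d /tmp/test3
drwxr-xr-x   2 root     staff    ... /tmp/test3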
  
Nevertheless, in this particular example, I shall investigate whether the authorization solaris.file.owner from /etc/security/auth_attr.d/core-os somehow mitigates the issue.
 

Thursday, June 20, 2013

A custom C++ allocator

Custom standard C++ allocators can be very useful.
Unfortunately, C++ allocators may be a lesser-known or misunderstood feature.


In particular, there may be excellent libraries out there, for instance Boost, but at the same time I think those libraries are too complex. Not that an industrial approach isn't a must, but I think a negative aspect of them is the considerably verbose coding imposed by their artifacts, beyond the burden of headers and libraries, not counting their build process.

I believe in a simpler interface surface with an industrial implementation, especially in terms of algorithms and strategies, but not so much in terms of multi-platform specific detail, which is often conveniently complemented by vendors.

I'll now start an "ambitious attempt" (at least to me) to implement some C++ artifacts around memory allocation for leveraging Solaris (D)ISM. It's not my intention to reinvent the wheel, but simply to provide an implementation that can be used on Solaris.

While I was delving into the challenge, inspired by Bjarne Stroustrup's example in Section 19.4 of The C++ Programming Language and browsing Solaris man pages, I soon realized that devising memory alignment artifacts is a must. I've gladly found other useful examples by Tomasz Müldner & Peter W. Steele in sections 7.3.2.1 (page 149), 9.2 (page 277) and 13.3 (page 390) of C as a Second Language for Native Speakers of Pascal.
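
As a flavor of what I mean, here is a minimal alignment helper of my own (a sketch, not taken from the books above):

#include <cstddef>

// Round n up to the next multiple of alignment,
// which must be a power of two.
inline std::size_t align_up( std::size_t n, std::size_t alignment )
{
    return ( n + alignment - 1 ) & ~( alignment - 1 );
}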

As one thing leads to another, I also realized that memory page size could play a role in the implementation with respect to the memory chunks of an allocation pool. To further complicate matters, Solaris, at least, may support several memory page sizes beyond the base page size indicated by getpagesize(3C). According to getpagesizes(3C), "not all processors support all page sizes or combinations of page sizes with equal efficiency." Fortunately, by means of memcntl(2) we can advise the system, MC_HAT_ADVICE (Hardware Address Translation), to set the page size of a segment "based on the size and alignment of the memory region, type of processor, and other considerations". By using meminfo(2) we can obtain the page size, MEMINFO_VPAGESIZE, backing a segment.
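
A minimal sketch of these calls on Solaris 11 (error handling trimmed; the meminfo(2) validity-bit check follows my reading of the man page):

#include <iostream>
#include <cstdlib>
#include <cstdio>

#include <sys/types.h>
#include <sys/mman.h>

int main()
{
    // How many page sizes does the processor support?
    int n = ::getpagesizes( 0, 0 );
    if ( n < 1 ) { ::perror( "getpagesizes" ); ::exit( EXIT_FAILURE ); }

    size_t * sizes = new size_t[ n ];
    ::getpagesizes( sizes, n );

    for ( int i = 0; i < n; i++ )
        std::cout << "supported page size: " << sizes[ i ] << std::endl;

    // Advise the HAT to choose a page size for the heap (BSS/brk);
    // mha_pagesize = 0 lets the system decide. For MHA_MAPSIZE_BSSBRK,
    // addr and len must be 0.
    memcntl_mha mha = { };
    mha.mha_cmd = MHA_MAPSIZE_BSSBRK;
    mha.mha_pagesize = 0;

    if ( ::memcntl( 0, 0, MC_HAT_ADVICE,
                    reinterpret_cast< caddr_t >( &mha ), 0, 0 ) != 0 )
        ::perror( "memcntl" );

    // Which page size actually backs a given (heap) address?
    uint64_t vaddr = reinterpret_cast< uintptr_t >( sizes );
    uint_t req = MEMINFO_VPAGESIZE;
    uint64_t out = 0;
    uint_t validity = 0;

    if ( ::meminfo( &vaddr, 1, &req, 1, &out, &validity ) == 0
         && ( validity & 2 ) )  // bit 1: first info item is valid
        std::cout << "backing page size: " << out << std::endl;

    delete [] sizes;
    return EXIT_SUCCESS;
}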

Smart pointers seem very useful for implementing the underlying storage pools, in order to support efficient allocator copy and assignment while maintaining the required semantics.

With respect to rebinding semantics, another standard requirement, I tend to follow the most basic concept of SLAB allocation: a per-object cache. Of course I'm not going mad about it, but if you want to know about SLAB, see Jeff Bonwick's classic paper, The Slab Allocator: An Object-Caching Kernel Memory Allocator (USENIX Summer 1994).
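
Putting the last two ideas together, here is a minimal C++11 allocator skeleton. The Pool class is a hypothetical stand-in for the future (D)ISM-backed pool; copies and rebound instantiations share the pool through a smart pointer, which is the simplest way to keep copies equal as the standard requires:

#include <memory>
#include <cstddef>
#include <vector>

// Hypothetical storage pool; the real one would sit on (D)ISM.
struct Pool
{
    void * get( std::size_t bytes ) { return ::operator new( bytes ); }
    void put( void * p, std::size_t ) { ::operator delete( p ); }
};

template< typename T >
class pool_allocator
{
public:
    typedef T value_type;

    pool_allocator() : pool_( std::make_shared< Pool >() ) { }

    // Rebinding copy: shares the pool, so copies compare equal.
    template< typename U >
    pool_allocator( pool_allocator< U > const & other )
        : pool_( other.pool_ ) { }

    T * allocate( std::size_t n )
    { return static_cast< T * >( pool_->get( n * sizeof( T ) ) ); }

    void deallocate( T * p, std::size_t n )
    { pool_->put( p, n * sizeof( T ) ); }

    template< typename U >
    struct rebind { typedef pool_allocator< U > other; };

    std::shared_ptr< Pool > pool_;
};

template< typename T, typename U >
bool operator==( pool_allocator< T > const & a, pool_allocator< U > const & b )
{ return a.pool_ == b.pool_; }

template< typename T, typename U >
bool operator!=( pool_allocator< T > const & a, pool_allocator< U > const & b )
{ return !( a == b ); }

int main()
{
    std::vector< int, pool_allocator< int > > v;
    v.push_back( 42 );
    return 0;
}

A SLAB-style refinement would keep one cache per object type instead of sharing a single pool across rebound instantiations.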

Wednesday, June 19, 2013

Official host name and nicknames

This post is related to /etc/hosts and the official host name.
The official host name is the 2nd field of an /etc/hosts entry.
I assume that /etc/nsswitch.conf specifies files as the 1st host database.

According to hosts(4), names in /etc/hosts can take a few forms.
I'm interested in just two kinds:
  
  • Host (the simple host name)
     
  • Domain (the fully qualified host name)

The problem is:
Which kind of name to use as the official host name?
 
  1. Nowadays I'd say that the only reason to use just the host name as the official host name is if you are not using DNS at all, which is rather unlikely.
     
  2. If using DNS, then prefer the domain as the official host name.
     
Option 1 is not recommended with DNS (and for the services and applications that rely on it) because just the host name (not the domain) is returned by both getipnodebyaddr(3SOCKET) and getipnodebyname(3SOCKET), even if the latter is explicitly queried with a domain name.

Here's the proof:

Considering the official host name as a host name:

#
# Copyright 2009 Sun Microsystems, Inc.  ...
# Use is subject to license terms.
#
# Internet host table
#
::1               server   localhost
127.0.0.1         server   localhost           loghost
#
10.0.0.1          server   server.domain.com

Here's the output:

server $ ./getipnode_byname server
10.0.0.1 server

server $ ./getipnode_byname server.domain.com
10.0.0.1 server

server $ ./getipnode_byaddr 10.0.0.1
10.0.0.1 server

Considering the official host name as a domain name:

#
# Copyright 2009 Sun Microsystems, Inc.  ...
# Use is subject to license terms.
#
# Internet host table
#
::1               server              localhost
127.0.0.1         server              localhost   loghost
#
10.0.0.1          server.domain.com   server

Here's the output:

server $ ./getipnode_byname server
10.0.0.1 server

server $ ./getipnode_byname server.domain.com
10.0.0.1 server.domain.com

server $ ./getipnode_byaddr 10.0.0.1
10.0.0.1 server.domain.com
  

getipnodebyaddr(3SOCKET)

This routine supersedes the traditional gethostbyaddr(3NSL).
The following sample code demonstrates its usage:

#include <iostream>
#include <cstdlib>
#include <cstdio>

#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main( int argc, char * argv[] )
{
    if ( argc != 2 )
    {
        std::cout << argv[0] << " <IPv4 address>" << std::endl;
        ::exit( EXIT_FAILURE );
    }

    // inet_pton() writes the binary (network order) address.
    in_addr address;

    switch ( ::inet_pton( AF_INET, argv[1], &address ) )
    {
        case 1:
            // OK
            break;

        case 0:
            std::cout << "Invalid IP address." << std::endl;
            ::exit( EXIT_FAILURE );
            break;

        case -1:
            ::perror( 0 );
            ::exit( EXIT_FAILURE );
            break;

        default:
            std::cout << "Unknown error." << std::endl;
            ::exit( EXIT_FAILURE );
    }

    int error;

    // The length is that of the binary address:
    // sizeof (in_addr), that is, 4 bytes for AF_INET.
    if ( hostent * hp = ::getipnodebyaddr( &address, sizeof( address ),
                                           AF_INET, &error ) )
    {
        for ( char ** p = hp->h_addr_list; *p; p++ )
        {
            std::cout << ::inet_ntoa( *reinterpret_cast< in_addr * >( *p ) )
                      << " " << hp->h_name << std::endl;
        }

        ::freehostent( hp );
    }
    else
    {
        switch ( error )
        {
            case HOST_NOT_FOUND:
                std::cout << "Host unknown.";
                break;

            case NO_DATA:
                std::cout << "No address is available.";
                break;

            case NO_RECOVERY:
                std::cout << "Unexpected server failure. Unrecoverable.";
                break;

            case TRY_AGAIN:
                std::cout << "No response from authoritative server. Retry later.";
                break;

            default:
                std::cout << "Unknown error.";
        }

        std::cout << std::endl;
    }

    return 0;
}
  

getipnodebyname(3SOCKET)

This routine supersedes the traditional gethostbyname(3NSL).
The following sample code demonstrates its usage:

#include <iostream>
#include <cstdlib>

#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main( int argc, char * argv[] )
{
    if ( argc != 2 )
    {
        std::cout << argv[0] << " <hostname>" << std::endl;
        ::exit( EXIT_FAILURE );
    }

    int error;

    if ( hostent * hp = ::getipnodebyname( argv[1], AF_INET, 0, &error ) )
    {
        for ( char ** p = hp->h_addr_list; *p; p++ )
        {
            std::cout << ::inet_ntoa( *reinterpret_cast< in_addr * >( *p ) )
                      << " " << hp->h_name << std::endl;
        }

        ::freehostent( hp );
    }
    else
    {
        switch ( error )
        {
            case HOST_NOT_FOUND:
                std::cout << "Host unknown.";
                break;

            case NO_DATA:
                std::cout << "No address is available.";
                break;

            case NO_RECOVERY:
                std::cout << "Unexpected server failure. Unrecoverable.";
                break;

            case TRY_AGAIN:
                std::cout << "No response from authoritative server. Retry later.";
                break;

            default:
                std::cout << "Unknown error.";
        }

        std::cout << std::endl;
    }

    return 0;
}
 

Tuesday, June 18, 2013

DISM run sample 3.0

This is a sample run for DISM of the shared memory code sample 3.0:

In an attempt to better demonstrate the presumable advantage of DISM, please consider the following changes to the original sample source code 3.0:
  
037:     std::size_t const size = 12UL * 1024UL * 1024UL * 1024UL;
 
051:         void * p = ::shmat( id, 0, SHM_PAGEABLE );
 
063:             if ( ::mlock( p, size / 4 ) == 0 )
 
069:                 ::memset( p, '*', size / 2 );
 
075:                 switch ( ::munlock( p, size / 4 ) )
  

$ getent user_attr prime
...::::defaultpriv=basic,proc_lock_memory;...
  
$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 500M allocated + 178M reserved = 676M used, 18G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH       SEGSZ CPID ... ISMATTCH     PROJECT
Shared Memory:

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY  Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.32 1.37% 8355M 34.0% 10.0G 35.9%  396K 0.01%
 [system]  0.32 1.36% 8274M 33.6%  9.9G 35.5%     -     -
      ...  0.00 0.00% 81.0M 0.33%  116M 0.40%   228 0.00%

Getting shared memory.
Press <ENTER> to continue...
Success!


$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 500M allocated + 12G reserved = 13G used, 5.9G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH       SEGSZ CPID ... ISMATTCH     PROJECT
Shared Memory:
m ... 0x1 ...      0 12884901888 3832 ...        0 group.staff

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY     Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.26 1.11% 8378M 34.1% 22.0G 78.8%  711K 0.02%
 [system]  0.26 1.10% 8296M 33.7%  9.9G 35.5%     -     -
      ...  0.00 0.00% 82.5M 0.33% 12.1G 43.2%   228 0.00%

$ prstat -c -p 3832 1 1
Please wait...
 PID USERNAME  SIZE   RSS ...     
3832 ...      5140K 2176K ...
...

Attaching to shared memory.
Press <ENTER> to continue...
Success!


$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 492M allocated + 12G reserved = 13G used, 5.9G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH       SEGSZ CPID ... ISMATTCH     PROJECT
Shared Memory:
m ... 0x1 ...      1 12884901888 3832 ...        1 group.staff

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY  Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.26 1.10% 8391M 34.1% 22.0G 78.8%   534 0.00%
 [system]  0.26 1.09% 8308M 33.8%  9.9G 35.5%     -     -
      ...  0.00 0.00% 82.6M 0.33% 12.1G 43.3%   432 0.00%

$ prstat -c -p 3832 1 1
Please wait...
 PID USERNAME  SIZE   RSS ...     
3832 ...        12G 2176K ...
...

$ pmap -x 3832 | head
3832:    ./shm_v04
         Address   Kbytes RSS ... Locked Mode  Mapped File
0000000000400000        8   8 ...      - r-x-- shm_v04
0000000000411000        8   8 ...      - rw--- shm_v04
0000000000413000      160  68 ...      - rw---  [ heap ]
FFFF80FC80000000 12582912   - ...      - rwxs-  [ dism ... ]

...

Locking shared memory.
Press <ENTER> to continue...
Success!


$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 3.5G allocated + 9.2G reserved = 13G used, 2.9G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH       SEGSZ CPID ...  ISMATTCH     PROJECT
Shared Memory:
m ... 0x1 ...      1 12884901888 3832 ...         1 group.staff

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY  Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.29 1.23% 11.1G 46.6% 25.0G 89.5%  704K 0.00%
 [system]  0.29 1.22% 8308M 33.8% 12.9G 46.2%     -     -
      ...  0.00 0.00% 3154M 12.8% 12.1G 43.3%  1347 0.00%

$ prstat -c -p 3832 1 1
Please wait...
 PID USERNAME  SIZE   RSS ...     
3832 ...        12G 3074M ...
...

$ pmap -x 3832 | head
3832:    ./shm_v04
         Address   Kbytes     RSS ...  Locked Mode  Mapped File
0000000000400000        8       8 ...       - r-x-- shm_v04
0000000000411000        8       8 ...       - rw--- shm_v04
0000000000413000      160      68 ...       - rw---  [ heap ]
FFFF80FC80000000 12582912 3145728 ... 3145728 rwxs-  [ dism ... ]
...

Using shared memory.
Press <ENTER> to continue...
Success!


$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 6.5G allocated + 6.2G reserved = 13G used, 2.9G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH       SEGSZ CPID ... ISMATTCH     PROJECT
Shared Memory:
m ... 0x1 ...      1 12884901888 3832 ...        1 group.staff

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY  Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.27 1.13% 14.1G 59.1% 25.0G 89.5%   228 0.00%
 [system]  0.27 1.12% 8309M 33.8% 12.9G 46.2%     -     -
      ...  0.00 0.00% 6225M 25.3% 12.1G 43.3%   228 0.00%

$ prstat -c -p 3832 1 1
Please wait...
 PID USERNAME  SIZE   RSS ...     
3832 ...        12G 6146M ...
...

$ pmap -x 3832 | head
3832:    ./shm_v04
         Address   Kbytes     RSS ...  Locked Mode  Mapped File
0000000000400000        8       8 ...       - r-x-- shm_v04
0000000000411000        8       8 ...       - rw--- shm_v04
0000000000413000      160      68 ...       - rw---  [ heap ]
FFFF80FC80000000 12582912 6291456 ... 3145728 rwxs-  [ dism ... ]
...

Unlocking shared memory.
Press <ENTER> to continue...
Success!


$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 6.5G allocated + 6.2G reserved = 13G used, 5.9G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH       SEGSZ CPID ... ISMATTCH     PROJECT
Shared Memory:
m ... 0x1 ...      1 12884901888 3832 ...        1 group.staff

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY  Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.30 1.27% 14.1G 59.1% 22.0G 78.8%  181K 0.00%
 [system]  0.30 1.27% 8309M 33.8%  9.9G 35.5%     -     -
      ...  0.00 0.00% 6225M 25.3% 12.1G 43.2%   432 0.00%

$ prstat -c -p 3832 1 1
Please wait...
 PID USERNAME  SIZE   RSS ...     
3832 ...        12G 6146M ...
...

$ pmap -x 3832 | head
3832:    ./shm_v04
         Address   Kbytes     RSS ... Locked Mode  Mapped File
0000000000400000        8       8 ...      - r-x-- shm_v04
0000000000411000        8       8 ...      - rw--- shm_v04
0000000000413000      160      68 ...      - rw---  [ heap ]
FFFF80FC80000000 12582912 6291456 ...      - rwxs-  [ dism ... ]
...

Detaching shared memory.
Press <ENTER> to continue...
Success!


$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 6.5G allocated + 6.2G reserved = 13G used, 5.9G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH       SEGSZ CPID ... ISMATTCH     PROJECT
Shared Memory:
m ... 0x1 ...      0 12884901888 3832 ...        0 group.staff

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY  Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.29 1.22% 14.1G 59.1% 22.0G 78.8%   228 0.00%
 [system]  0.29 1.22% 14.1G 58.8%  9.9G 35.5%     -     -
      ...  0.00 0.00% 81.5M 0.33% 12.1G 43.2%   228 0.00%

$ prstat -c -p 3832 1 1
Please wait...
 PID USERNAME  SIZE   RSS ...     
3832 ...      5140K 2180K ...
...

$ pmap -x 3832 | head
3832:    ./shm_v04
         Address Kbytes  RSS ... Locked Mode  Mapped File
0000000000400000      8    8 ...      - r-x-- shm_v04
0000000000411000      8    8 ...      - rw--- shm_v04
0000000000413000    160   68 ...      - rw---  [ heap ]
FFFF80FFB8F00000   1620 1024 ...      - r-x-- libCstd.so.1
...

Removing shared memory.
Press <ENTER> to continue...
Success!


$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 500M allocated + 171M reserved = 672M used, 18G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH      SEGSZ CPID ... ISMATTCH     PROJECT
Shared Memory:

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY  Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.26 1.11% 8385M 34.1% 10.0G 35.9%   228 0.00%
 [system]  0.26 1.11% 8303M 33.8%  9.9G 35.5%     -     -
      ...  0.00 0.00% 81.5M 0.33%  123M 0.43%   228 0.00%

$ prstat -c -p 3832 1 1
Please wait...
 PID USERNAME  SIZE   RSS ...     
3832 ...      5140K 2180K ...
...

Exiting program.
Press <ENTER> to continue...


When Oracle Database fellows argues that "DISM can be dynamically resized depending on the application demand" they induce sysadmins to confusion or error.

As seen above in the ipcs -mA output of every "output group", the shared segment size never changes. What may change is the amount of memory the application chooses to lock, in contrast to ISM, where the kernel always keeps the whole segment locked.

Furthermore, they also believe that "Solaris does not allocate or reserve memory of size SGA_MAX_SIZE at instance startup". Again, to me, this is a misconception: I'm convinced that Oracle Database does reserve one or more large DISM segments. That may not be apparent to a DBA, but the above output shows that Solaris does reserve them.

The behavior of ::shmget() is the same for ISM and DISM. The differences start with how the segment is attached to the process. Before that, no system resources are actually used; they are just accounted for in the kernel (as a "promise" to the process that called ::shmget()).
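
To make the distinction concrete, here is a minimal sketch based on the calls used in sample 3.0 (size and error handling trimmed):

#include <cstddef>
#include <cstdlib>
#include <cstdio>

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main()
{
    std::size_t const size = 1024UL * 1024UL * 1024UL;

    // Identical for ISM and DISM: just a kernel-side reservation.
    int id = ::shmget( IPC_PRIVATE, size, IPC_CREAT | 0600 );
    if ( id == -1 ) { ::perror( "shmget" ); return EXIT_FAILURE; }

    // The flavor is chosen at attach time:
    //   SHM_SHARE_MMU -> ISM  (kernel keeps the whole segment locked)
    //   SHM_PAGEABLE  -> DISM (the application locks what it needs)
    void * p = ::shmat( id, 0, SHM_PAGEABLE );
    if ( p == reinterpret_cast< void * >( -1 ) )
        ::perror( "shmat" );
    else
        ::shmdt( p );

    // Only this actually releases the kernel resources.
    ::shmctl( id, IPC_RMID, 0 );

    return EXIT_SUCCESS;
}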

Attaching DISM to a process also just accounts pageable virtual memory (as seen in the pmap -x Mode flags) to the process, but doesn't yet consume any actual resources.

Locking a portion of (or the whole) DISM segment causes adjustments to the virtual memory figures (allocated vs. reserved) because actual resources are allocated. Memory overhead is credited into the kernel figures, and the process resident set size (RSS) also grows.

In the 5th "output group" we see that touching more memory than is locked (as this sample code does) causes even more pages to be allocated. In the example, we touch twice as much as is currently locked, just to make it more eye-catching.

Unlocking DISM relieves the virtual memory accounting but doesn't necessarily free physical memory (no pageouts), as there may be no memory pressure (as in this example).

And as with ISM, detaching shared memory just unmaps it from the process, not from the kernel. To free the kernel resources, ::shmctl() must be issued with IPC_RMID.
 

ISM run sample 3.0

This is a sample run for ISM of the shared memory code sample 3.0:

$ getent user_attr prime
...::::defaultpriv=basic,proc_lock_memory;...

$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 496M allocated + 170M reserved = 668M used, 18G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH      SEGSZ CPID ... ISMATTCH     PROJECT
Shared Memory:

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY   Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.26 1.09% 7719M 31.4% 10.0G 35.8%  540K 0.00%
 [system]  0.26 1.09% 7632M 31.0%  9.9G 35.4%     -     -
      ...  0.00 0.00% 86.9M 0.35%  117M 0.40%  1280 0.00%

Getting shared memory.
Press <ENTER> to continue...
Success!


$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 496M allocated + 1.2G reserved = 1.7G used, 17G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH      SEGSZ CPID ... ISMATTCH     PROJECT
Shared Memory:
m ... 0x1 ...      0 1073741824 3253 ...        0 group.staff

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY  Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.25 1.05% 7720M 31.4% 11.0G 39.4%   317 0.00%
 [system]  0.25 1.05% 7633M 31.0%  9.9G 35.4%     -     -
      ...  0.00 0.00% 86.9M 0.35% 1144M 3.99%   317 0.00%

$ prstat -c -p 3253 1 1
Please wait...
   PID USERNAME  SIZE   RSS ... 
  3253 ...      3832K 1720K 
...

Attaching to shared memory.
Press <ENTER> to continue...
Success!


$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 1.5G allocated + 182M reserved = 1.7G used, 17G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH      SEGSZ CPID ... ISMATTCH     PROJECT
Shared Memory:
m ... 0x1 ...      1 1073741824 3253 ...        1 group.staff

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY  Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.25 1.07% 8741M 35.5% 11.0G 39.4%  1347 0.00%
 [system]  0.25 1.06% 7631M 31.0%  9.9G 35.4%     -     -
      ...  0.00 0.00% 1109M 4.51% 1148M 4.00%  1347 0.00%

$ prstat -c -p 3253 1 1
Please wait...
   PID USERNAME  SIZE   RSS ...     
  3253 ...      1028M 1026M ...
...

$ pmap -x 3253 | head
3253:    ./shm_v04
 Address  Kbytes     RSS ...  Locked Mode  Mapped File
08050000       8       8 ...       - r-x-- shm_v04
08061000       4       4 ...       - rwx-- shm_v04
08062000     128      44 ...       - rwx--   [ heap ]
80000000 1048576 1048576 ... 1048576 rwxsR   [ ism shmid=... ]
...

Locking shared memory.
Press <ENTER> to continue...
Success!


    not relevant for ISM

Using shared memory.
Press <ENTER> to continue...
Success!


    not relevant for ISM

Unlocking shared memory.
Press <ENTER> to continue...
Success!


    not relevant for ISM

Detaching shared memory.
Press <ENTER> to continue...
Success!


$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 1.5G allocated + 182M reserved = 1.7G used, 17G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH      SEGSZ CPID ... ISMATTCH     PROJECT
Shared Memory:
m ... 0x1 ...      0 1073741824 3253 ...        0 group.staff

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY  Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.24 1.03% 8742M 35.5% 11.0G 39.4%   228 0.00%
 [system]  0.24 1.03% 8656M 35.2%  9.9G 35.4%     -     -
      ...  0.00 0.00% 85.9M 0.34% 1143M 3.99%   228 0.00%

$ prstat -c -p 3253 1 1
Please wait...
   PID USERNAME  SIZE   RSS ...     
  3253 ...      3832K 1720K ...
...

$ pmap -x 3253 | head
3253:    ./shm_v04
 Address  Kbytes     RSS ... Locked Mode   Mapped File
08050000       8       8 ...      - r-x--  shm_v04
08061000       4       4 ...      - rwx--  shm_v04
08062000     128      44 ...      - rwx--    [ heap ]
FE380000      24      12 ...      - rwx--    [ anon ]
...

Removing shared memory.
Press <ENTER> to continue...
Success!


$ swap -lh; swap -sh
swapfile             dev    swaplo   blocks     free
/dev/swap             -         4K     4.0G     3.9G
total: 496M allocated + 174M reserved = 668M used, 18G available

$ ipcs -mA
IPC status ...
T ... KEY ... NATTCH      SEGSZ CPID ... ISMATTCH     PROJECT
Shared Memory:

$ zonestat 1 1
Collecting data for first interval...
Interval: 1, Duration: 0:00:01
SUMMARY  Cpus/Online: 24/24   PhysMem: 23.9G  VirtMem: 27.9G
          ---CPU----  --PhysMem-- --VirtMem-- --PhysNet--
     ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE
  [total]  0.30 1.25% 7718M 31.4% 10.0G 35.8%  9852 0.00%
 [system]  0.30 1.25% 7632M 31.0%  9.9G 35.4%     -     -
      ...  0.00 0.00% 85.9M 0.34%  120M 0.41%   228 0.00%

$ prstat -c -p 3253 1 1
Please wait...
   PID USERNAME  SIZE   RSS ...     
  3253 ...      3832K 1720K ...
...

Exiting program.
Press <ENTER> to continue...


When Oracle Database fellows argues that "::shmget() requires swap reservation" they induce sysadmins to confusion or error.

As seen above (in the 2nd "output group"), what happens is simply virtual memory reservation within the kernel. Note that there's no actual swap maneuvering, no shuffling between disk and physical memory, and so on. The process that makes the call doesn't even exhibit different figures.

With ISM, physical memory actually gets allocated only when the reserved shared segment is mapped into the calling process via ::shmat(). This is clearly seen in the 3rd "output group" above.

We also see that ::shmdt() doesn't really release any allocated memory, as seen in the 4th "output group": the shared memory segment is unmapped from the process but continues to exist within the kernel.

It's only when ::shmctl() is called with IPC_RMID that the memory is actually freed (5th "output group").