Now and then since last year I've been trying to improve my skill at building more up-to-date 64-bit software artifacts to use on my preferred OS (currently, Solaris 11.3 GA), optimized for my particular CPU (an old Intel Core 2). I've been somewhat radical in experimenting with this for a few applications, programming languages and GNU tools and utilities. The learning curve is slow and rather steep, with the added difficulty that a Solaris system isn't a true GNU/Linux, despite the honorable efforts to make it compatible enough to run some popular GNU software and perform standard GNU builds of open-source software.
Personal notes and recipes, views and opinions.
If it must run, it runs on Solaris!
Showing posts with label Solaris Studio. Show all posts
Wednesday, March 14, 2018
Thursday, July 6, 2017
Building Qt 4.8.7
Many have probably heard about Qt by now, so I will be terse in saying how cool it is, and yet maybe too overwhelming for anything but big and complex applications. What matters most in this post is how to use it on a Solaris 11.3 (GA / Release) desktop. There are no pre-built binaries, except on some well-known (honorable) Solaris sites which attempt to address the gaps in software availability for Solaris. The problem with those sites is that they somewhat lag behind or require too much setup for their distribution infrastructure. Given that, I always prefer to build the software myself, under a controlled environment that I know better and that is 100% free of security issues.
I will build it with GNU tools and compilers, targeting both 32 and 64 bits. The reason for not using Developer Studio at first is that software out there is more frequently distributed with the GNU build-automation tools in mind, so the probability of a successful build is higher on a first attempt. Most of the GNU tools furnished with Solaris 11.3 (GA / Release) seem fairly up to date, except for the GNU build-automation tools themselves, which I recommend updating beforehand. Furthermore, the GCC packaged with Solaris is version 4.8.2, which means it only fully supports C89 and C++03, so all bets are off for anything newer than that. Fortunately, this legacy version 4.8.7 of Qt doesn't require anything newer. I know that C++03 is already history, but nevertheless it is there in case you need it. By the way, C is also history and yet it is still widely prevalent in so many areas.
I learned the hard way that the following Solaris 11.3 packages must be installed in order to maximize features and gracefully build Qt:
- unicode
- developer/icu
- library/icu
- image/graphviz
- x11/library/xtrans
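All of these are delivered through IPS, so a single pkg install can pull them in. A minimal sketch (the package names come from the list above; composing them into one invocation, to be run with sufficient privileges on the Solaris host, is my assumption):

```shell
# Sketch: compose the IPS install command for the packages listed above.
# Run it (with sufficient privileges) on the Solaris 11.3 host.
PKGS="unicode developer/icu library/icu image/graphviz x11/library/xtrans"
CMD="pkg install $PKGS"
echo "$CMD"
```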
I thought that building Qt shouldn't be difficult (and in the end I confirmed that), but it isn't easy unless you have all the necessary information at your fingertips, which, at least in the case of Solaris, you won't. Despite the information available on its website, the task wasn't easy at first and I had to go through a series of trials and errors until I got there. As I believe no one deserves that, as usual, I share my knowledge with the rest of the world in the belief that the contribution will make more people happier and make them think about doing the same more often.
In building Qt you'll have the ability to enable some features deemed plugins, but some of them must already be installed on the Solaris system, either pre-packaged or manually built, similarly to this task. Some of the features already packaged in Solaris will work with no issues; others won't, because they are outdated or because of the way they were built and/or made available in the system. One example is SQLite version 3. The package available for Solaris is the 32-bit version 3.8.8.1, with certain features disabled or unavailable. For Qt that doesn't seem to be much of a problem, but for Firefox it certainly is. Thus, as SQLite is so prevalent, this is a prerequisite that's better addressed right away to avoid subsequent limitations or unexpected shortcomings; that is, update / install SQLite (by manually building both the 32- and 64-bit versions with all features enabled) right away! The build work-flow is similar to the one on this post!
At the time of this writing, first download the source-code tarball for version 4.8.7. You should look for the 230 MB file named qt-everywhere-opensource-src-4.8.7.tar.gz (using git is not a good choice, as the Solaris version is outdated and in the end you'll only get more trouble, unless you have already taken care of manually updating it). Now follow my recommendation on GNU - Build preparation in order to prepare a sane environment for the task. After that step, fine-tune the generated setenv executable script in the root of the corresponding version subtree to account for an updated SQLite (as well as any other prerequisite software) installation, for instance:
...
# Insert below, other PATH and PKG_CONFIG_PATH settings.
# Follow my PATH building suggestion.
PKG=/opt/sqlite-3.19.3/gnu$BITS
if [[ -d "$PKG" ]] ;
then
CFG=$PKG/bin
! [[ "$PATH" =~ "$PKG" ]] && \
export PATH=$CFG:$PATH
CFG=$PKG/lib/pkgconfig
PCP=${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}
! [[ "$PCP" =~ "$PKG" ]] && \
export PKG_CONFIG_PATH=$CFG$PCP
fi
...
In fact, the above will be part of a more complete environment setup (the setenv script) which I reference in some of the previous links. The relevant adjustments to the script are:
#
# Other PATH and PKG_CONFIG_PATH settings.
# Put in reverse order of dependency.
#
extend-env /opt/sqlite-3.19.3
extend-env /opt/tcl-8.5.19
extend-env /opt/automake-1.15
extend-env /opt/autoconf-2.69
extend-env /opt/m4-1.4.18
extend-env /opt/libtool-2.4.6
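The extend-env helper itself lives in the setenv script referenced earlier and isn't reproduced in this post. A minimal sketch of what it might look like, consistent with the PATH / PKG_CONFIG_PATH fragment above (the reconstruction, and the demonstration tree, are my assumptions; $BITS must already be set by the surrounding script):

```shell
# Minimal sketch of an extend-env helper, consistent with the PATH /
# PKG_CONFIG_PATH fragment shown earlier.  The real helper lives in the
# setenv script referenced above; this reconstruction is an assumption.
# $BITS (32 or 64) must already be set by the surrounding script.
extend-env ()
{
    typeset PKG="$1/gnu$BITS" CFG PCP
    [[ -d "$PKG" ]] || return 0              # silently skip absent packages
    CFG=$PKG/bin
    ! [[ "$PATH" =~ "$PKG" ]] && export PATH=$CFG:$PATH
    CFG=$PKG/lib/pkgconfig
    PCP=${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}
    ! [[ "$PCP" =~ "$PKG" ]] && export PKG_CONFIG_PATH=$CFG$PCP
    return 0
}

# Demonstration against a throw-away package tree:
BITS=64
mkdir -p /tmp/demo-pkg/gnu64/bin /tmp/demo-pkg/gnu64/lib/pkgconfig
extend-env /tmp/demo-pkg
```

Each package prepends its bin directory to PATH and its pkgconfig directory to PKG_CONFIG_PATH, but only once, which is why the list above is given in reverse order of dependency.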
From the ZFS perspective, the "sane" environment is:
$ DS=...
$ zfs list -o name -t all -r $DS/software/Qt | sed "s,$DS,...,"
NAME
.../software/Qt
.../software/Qt/qt-4.8.7
.../software/Qt/qt-4.8.7@source
.../software/Qt/qt-4.8.7-gnu32
.../software/Qt/qt-4.8.7-gnu32@start
.../software/Qt/qt-4.8.7-gnu32@config
.../software/Qt/qt-4.8.7-gnu32@build
.../software/Qt/qt-4.8.7-gnu64
.../software/Qt/qt-4.8.7-gnu64@start
.../software/Qt/qt-4.8.7-gnu64@config
.../software/Qt/qt-4.8.7-gnu64@build
The qt-4.8.7 is the ZFS dataset where the tarball was extracted.
Right after extraction the @source ZFS snapshot is taken.
The qt-4.8.7 dataset is made readonly, hum..., just in case.
The qt-4.8.7-gnu32 and qt-4.8.7-gnu64 are ZFS clones of @source.
Before the @start snapshots, certain "adjustments" are required.
First, the files qmake.conf and qplatformdefs.h.
They are at mkspecs/build-type/ subdirectories of the clones' mountpoints.
You may note in mkspecs/ a symbolic link called default, but ignore it.
As I'm building for GNU, build-type will be solaris-g++ and solaris-g++64.
The adjustments are:
In .../qt-4.8.7-gnu32/mkspecs/solaris-g++/qmake.conf:
(core2 is a particular optimization, not a requirement)
22: QMAKE_CFLAGS = -march=core2 -std=gnu89
34: QMAKE_CXXFLAGS = -march=core2 -std=gnu++03
45: QMAKE_INCDIR = /usr/include
46: QMAKE_LIBDIR = /usr/lib
In .../qt-4.8.7-gnu32/mkspecs/solaris-g++/qplatformdefs.h:
124: // typedef unsigned int useconds_t;
125: // extern "C" int usleep(useconds_t);
126: // extern "C" int gethostname(char *, int);
In .../qt-4.8.7-gnu64/mkspecs/solaris-g++-64/qmake.conf:
QMAKE_CFLAGS = -m64 -march=core2 -std=gnu89 ...keep rest...
QMAKE_CXXFLAGS = -m64 -march=core2 -std=gnu++03 ...keep rest...
QMAKE_INCDIR = /usr/include
QMAKE_LIBDIR = /usr/lib/64
Second, the file config.tests/x11/xinput/xinput.cpp. The adjustment is:
42 // #ifdef Q_OS_SOLARIS
43 // #error "Not supported."
44 // #else
...
59 // #endif
Now take the @start snapshots and configure each build-type.
NOTE
The next configure commands' arguments are non-negotiable:
(otherwise the build will result unsuccessful)
-prefix (must be present, but its value can vary)
-no-webkit (must be present; Solaris lacks pthread_getattr_np())
-qt-libpng (must be present; Solaris headers and libs are misplaced)
-R (must be present; for each extra-system feature)
Be aware of the -qtnamespace setting. If you set it (its value cannot be Qt), it will probably impact any source code, code generators and other libraries that don't count on such a namespace value. Hence, it seems better to leave it off.
Furthermore, certain arguments that are not listed must not be listed! For instance, -xvideo must not be given, due to a bug in configure. Similarly, the declarative and script arguments must not be given. In general, for the build to succeed it's preferable not to rely on system options, that is, prefer the -qt-... alternatives. Finally, make sure you exclude the portions of the machine instruction set not supported by your particular hardware, with: -no-3dnow, -no-sse4.1 and so on...
For GNU32:
$ ./configure -prefix /opt/qt-4.8.7/gnu32 -opensource -qt-sql-sqlite -xmlpatterns -no-webkit -multimedia -audio-backend -phonon -phonon-backend -svg -no-sse4.1 -no-sse4.2 -no-avx -no-neon -no-3dnow -qt-zlib -qt-libtiff -qt-libmng -qt-libpng -qt-libjpeg -openssl -make libs -make tools -make examples -make demos -make docs -R/opt/sqlite-3.19.3/gnu32/lib -nis -cups -iconv -pch -dbus -gtkstyle -no-nas-sound -opengl -sm -xshape -xsync -xinerama -xcursor -xfixes -xrandr -xrender -mitshm -fontconfig -xinput -xkb -glib -platform solaris-g++
For GNU64:
$ ./configure -prefix /opt/qt-4.8.7/gnu64 -opensource -qt-sql-sqlite -xmlpatterns -no-webkit -multimedia -audio-backend -phonon -phonon-backend -svg -no-sse4.1 -no-sse4.2 -no-avx -no-neon -no-3dnow -qt-zlib -qt-libtiff -qt-libmng -qt-libpng -qt-libjpeg -openssl -make libs -make tools -make examples -make demos -make docs -R/opt/sqlite-3.19.3/gnu64/lib -nis -cups -iconv -pch -dbus -gtkstyle -no-nas-sound -opengl -sm -xshape -xsync -xinerama -xcursor -xfixes -xrandr -xrender -mitshm -fontconfig -xinput -xkb -glib -platform solaris-g++-64
The licensing terms are presented first:
(accept it by entering yes)
This is the Open Source Edition.
You are licensed to use this software under the terms of
the Lesser GNU General Public License (LGPL) versions 2.1.
You are also licensed to use this software under the terms of
the GNU General Public License (GPL) versions 3.
Type '3' to view the GNU General Public License version 3.
Type 'L' to view the Lesser GNU General Public License version 2.1.
Type 'yes' to accept this license offer.
Type 'no' to decline this license offer.
Do you accept the terms of either license?
Each build target announces itself with a corresponding header:
This target is using ... (solaris-g++).
Build type: solaris-g++
Architecture: i386
or:
This target is using ... (solaris-g++-64).
Build type: solaris-g++-64
Architecture: x86_64
But both will continue their output as follows:
Debug .................. no
Qt 3 compatibility ..... yes
QtDBus module .......... yes (run-time)
QtConcurrent code ...... yes
QtGui module ........... yes
QtScript module ........ yes
QtScriptTools module ... yes
QtXmlPatterns module ... yes
Phonon module .......... yes
Multimedia module ...... yes
SVG module ............. yes
WebKit module .......... no
JavaScriptCore JIT ..... To be decided by JavaScriptCore
Declarative module ..... yes
Declarative debugging ...yes
Support for S60 ........ no
Symbian DEF files ...... no
STL support ............ yes
PCH support ............ yes
MMX/3DNOW/SSE/SSE2/SSE3. yes/no/yes/yes/yes
SSSE3/SSE4.1/SSE4.2..... yes/no/no
AVX..................... no
Graphics System ........ default
IPv6 support ........... yes
IPv6 ifname support .... yes
getaddrinfo support .... yes
getifaddrs support ..... yes
Accessibility .......... yes
NIS support ............ yes
CUPS support ........... yes
Iconv support .......... sun
Glib support ........... yes
GStreamer support ...... yes
PulseAudio support ..... yes
Large File support ..... yes
GIF support ............ plugin
TIFF support ........... plugin (qt)
JPEG support ........... plugin (qt)
PNG support ............ yes (qt)
MNG support ............ plugin (qt)
zlib support ........... yes
Session management ..... yes
OpenGL support ......... yes (Desktop OpenGL)
OpenVG support ......... no
NAS sound support ...... no
XShape support ......... yes
XVideo support ......... yes
XSync support .......... yes
Xinerama support ....... yes
Xcursor support ........ yes
Xfixes support ......... yes
Xrandr support ......... yes
Xrender support ........ yes
Xi support ............. yes
MIT-SHM support ........ yes
FontConfig support ..... yes
XKB Support ............ yes
immodule support ....... yes
GTK theme support ...... yes
SQLite support ......... qt (qt)
OpenSSL support ........ yes (run-time)
Alsa support ........... no
ICD support ............ no
libICU support ......... yes
Use system proxies ..... no
After a while (around 15 min on my slow machine) one sees:
(it's safe to ignore most of configure's closing instructions;
with ZFS snapshots one can simply roll back, dispensing with reconfigure and confclean)
Qt is now configured for building. Just run 'gmake'.
Once everything is built, you must run 'gmake install'.
Qt will be installed into /opt/qt-4.8.7/...
To reconfigure, run 'gmake confclean' and 'configure'.
Now take the @config snapshots and build each build-type:
(this takes time on my slow machine; enough for a good nap, at least!)
$ gmake
Now take the @build snapshots and install each build-type:
(assuming that the right ZFS datasets were already mapped under /opt)
$ sudo gmake install
Now take the @release snapshots of each build-type installed.
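The snapshot choreography above can be sketched as a tiny loop, one snapshot per build-type and stage. The source-clone prefix is elided in this post, so "tank" below is a hypothetical pool name, and the DRY_RUN guard (which only prints the commands) is my own addition:

```shell
# Sketch of the snapshot workflow described above, one line per
# build-type and stage.  "tank" is a hypothetical pool name; adapt
# SRC and OPT to your own layout.
SRC=tank/software/Qt/qt-4.8.7        # where the source clones live
OPT=rpool/VARSHARE/qt-4.8.7          # install datasets (mapped under /opt)
DRY_RUN=1                            # print only; unset on the build host

run () { if [ -n "$DRY_RUN" ]; then echo "$@"; else sudo "$@"; fi; }

for B in gnu32 gnu64
do
    run zfs snapshot "$SRC-$B@config"     # right after ./configure succeeds
    run zfs snapshot "$SRC-$B@build"      # right after gmake succeeds
    run zfs snapshot "$OPT/$B@release"    # right after gmake install succeeds
done
```

Each snapshot marks a known-good stage, so a failed step can be rolled back instead of cleaned and reconfigured.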
The final results on the building machine are as follows:
$ DS=rpool/VARSHARE/qt-4.8.7
$ zfs list -o name -t all -r $DS
NAME
rpool/VARSHARE/qt-4.8.7
rpool/VARSHARE/qt-4.8.7/gnu32
rpool/VARSHARE/qt-4.8.7/gnu32@release
rpool/VARSHARE/qt-4.8.7/gnu64
rpool/VARSHARE/qt-4.8.7/gnu64@release
Now, if you plan to distribute the binaries, you have to package them or create a tarball for each build-type. In general, if nothing beyond the sub-structure under /opt is needed (which is the case), then the simplicity of a tarball is much more appealing.
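For the tarball route, a sketch follows. The archive names and the throw-away staging tree are my own illustration; on the real host the -C argument would be /opt itself:

```shell
# Sketch: one relocatable tarball per build-type, rooted so that it
# unpacks straight under /opt on the target host.
# gtar is GNU tar on Solaris; fall back to plain tar elsewhere.
TAR=$(command -v gtar || command -v tar)

# Demonstration against a throw-away staging tree standing in for /opt
# (hypothetical paths, for illustration only):
STAGE=/tmp/stage
mkdir -p "$STAGE/qt-4.8.7/gnu32/bin" "$STAGE/qt-4.8.7/gnu64/bin"

for B in gnu32 gnu64
do
    "$TAR" -czf "/tmp/qt-4.8.7-$B.tar.gz" -C "$STAGE" "qt-4.8.7/$B"
done
# On the real host:   gtar -czf qt-4.8.7-gnu64.tar.gz -C /opt qt-4.8.7/gnu64
# On the target host: gtar -xzf qt-4.8.7-gnu64.tar.gz -C /opt
```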
Once the binaries are in place (integrated into /opt), it's just a matter of adjusting the PATH environment variable as usual:
$ export PATH=/opt/qt-4.8.7/gnu64/bin:$PATH
$ cd /opt/qt-4.8.7/gnu64/bin
$ ls -1
assistant
designer
lconvert
linguist
lrelease
lupdate
moc
pixeltool
qcollectiongenerator
qdbus
qdbuscpp2xml
qdbusviewer
qdbusxml2cpp
qdoc3
qhelpconverter
qhelpgenerator
qmake
qmlplugindump
qmlviewer
qt3to4
qtconfig
qtdemo
qttracereplay
rcc
uic
uic3
xmlpatterns
xmlpatternsvalidator
$ cd /opt/qt-4.8.7/gnu64/lib
$ ls -1 *.so.?.?.?
libphonon.so.4.4.0
libQt3Support.so.4.8.7
libQtCLucene.so.4.8.7
libQtCore.so.4.8.7
libQtDBus.so.4.8.7
libQtDeclarative.so.4.8.7
libQtDesigner.so.4.8.7
libQtDesignerComponents.so.4.8.7
libQtGui.so.4.8.7
libQtHelp.so.4.8.7
libQtMultimedia.so.4.8.7
libQtNetwork.so.4.8.7
libQtOpenGL.so.4.8.7
libQtScript.so.4.8.7
libQtScriptTools.so.4.8.7
libQtSql.so.4.8.7
libQtSvg.so.4.8.7
libQtTest.so.4.8.7
libQtXml.so.4.8.7
libQtXmlPatterns.so.4.8.7
You should make sure that the GUI Style is GTK+.
$ qtconfig
Et voilà!
$ qtdemo
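Beyond qtdemo, a quick smoke test of the new toolchain is a minimal Qt 4 program driven by qmake. The paths and file names below are hypothetical scratch material, and the build step is guarded so the script degrades gracefully where qmake isn't on PATH:

```shell
# Minimal Qt 4 smoke test: generate a one-file project and build it
# with the freshly installed qmake.  Paths are hypothetical.
mkdir -p /tmp/qt-smoke && cd /tmp/qt-smoke

cat > main.cpp <<'EOF'
// Trivial Qt 4 program: show a label, confirming QtGui links and runs.
#include <QApplication>
#include <QLabel>
int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    QLabel label("Qt 4.8.7 on Solaris 11.3");
    label.show();
    return app.exec();
}
EOF

cat > smoke.pro <<'EOF'
TEMPLATE = app
TARGET   = smoke
SOURCES += main.cpp
EOF

# Only attempt the build where the freshly installed qmake is on PATH.
if command -v qmake >/dev/null 2>&1
then
    qmake smoke.pro && gmake
else
    echo "qmake not on PATH; project files generated only"
fi
```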
Now it's possible to follow the baby steps of:
How to Develop Qt Applications in the Oracle Developer Studio IDE
NOTE
Fortunately, you can also use NetBeans 8.2 hassle-free.
Version 8.1 doesn't integrate well, perhaps requiring some tricks...
Wednesday, January 22, 2014
Mercurial help
After Mercurial installation one may need to browse some help.
But, above all, it's paramount to read:
- Mercurial: The Definitive Guide (HTML)
- Mercurial: The Definitive Guide (Documentation)
The CLI consists of a long list of hg subcommands:
# hg -v help
Mercurial Distributed SCM
list of commands:
add:
add the specified files on the next commit
addremove:
add all new files, delete all missing files
annotate, blame:
show changeset information by line for each file
archive:
create an unversioned archive of a repository revision
backout:
reverse effect of earlier changeset
bisect:
subdivision search of changesets
bookmarks:
track a line of development with movable markers
branch:
set or show the current branch name
branches:
list repository named branches
bundle:
create a changegroup file
cat:
output the current or given revision of files
clone:
make a copy of an existing repository
commit, ci:
commit the specified files or all outstanding changes
copy, cp:
mark files as copied for the next commit
diff:
diff repository (or selected files)
export:
dump the header and diffs for one or more changesets
forget:
forget the specified files on the next commit
graft:
copy changes from other branches onto the current branch
grep:
search for a pattern in specified files and revisions
heads:
show current repository heads or show branch heads
help:
show help for a given topic or a help overview
identify, id:
identify the working copy or specified revision
import, patch:
import an ordered set of patches
incoming, in:
show new changesets found in source
init:
create a new repository in the given directory
locate:
locate files matching specific patterns
log, history:
show revision history of entire repository or files
manifest:
output the current or given revision of the project manifest
merge:
merge working directory with another revision
outgoing, out:
show changesets not found in the destination
parents:
show the parents of the working directory or revision
paths:
show aliases for remote repositories
phase:
set or show the current phase name
pull:
pull changes from the specified source
push:
push changes to the specified destination
recover:
roll back an interrupted transaction
remove, rm:
remove the specified files on the next commit
rename, move, mv:
rename files; equivalent of copy + remove
resolve:
redo merges or set/view the merge status of files
revert:
restore files to their checkout state
rollback:
roll back the last transaction (dangerous)
root:
print the root (top) of the current working directory
serve:
start stand-alone webserver
showconfig, debugconfig:
show combined config settings from all hgrc files
status, st:
show changed files in the working directory
summary, sum:
summarize working directory state
tag:
add one or more tags for the current or given revision
tags:
list repository tags
tip:
show the tip revision
unbundle:
apply one or more changegroup files
update, up, checkout, co:
update working directory (or switch revisions)
verify:
verify the integrity of the repository
version:
output version and copyright information
additional help topics:
config Configuration Files
dates Date Formats
diffs Diff Formats
environment Environment Variables
extensions Using Additional Features
filesets Specifying File Sets
glossary Glossary
hgignore Syntax for Mercurial Ignore Files
hgweb Configuring hgweb
merge-tools Merge Tools
multirevs Specifying Multiple Revisions
patterns File Name Patterns
phases Working with Phases
revisions Specifying Single Revisions
revsets Specifying Revision Sets
subrepos Subrepositories
templating Template Usage
urls URL Paths
global options:
-R --repository REPO repository root directory or
name of overlay bundle file
--cwd DIR change working directory
-y --noninteractive do not prompt, automatically
pick the first choice for
all prompts
-q --quiet suppress output
-v --verbose enable additional output
--config CONFIG [+] set/override config option
(use 'section.name=value')
--debug enable debugging output
--debugger start debugger
--encoding ENCODE set the charset encoding
(default: UTF-8)
--encodingmode MODE set the charset encoding mode
(default: strict)
--traceback always print a traceback on exception
--time time how long the command takes
--profile print command execution profile
--version output version information and exit
-h --help display help and exit
[+] marked option can be specified multiple times
Of course, man pages are available; see HG(1).
For instance:
$ man hg
Mercurial Manual HG(1)
NAME
hg - Mercurial source code management system
SYNOPSIS
hg command [option]... [argument]...
DESCRIPTION
The hg command provides a command line
interface to the Mercurial system.
COMMAND ELEMENTS
...
OPTIONS
...
In addition there's a quick and cool HTTP option.
All that's required is to create an empty repository and start the server.
# cd /var/tmp
# hg init sample
# hg serve -d -p 8000 -R sample -A /tmp/access -E /tmp/error
Assume that the previous commands were given on the mercurial host.
Just point a web browser to http://mercurial:8000/help to get started.
The access log will immediately start tracking activity.
# cat /tmp/access
192.168.0.100 ... "GET /help HTTP/1.1" 200 -
192.168.0.100 ... "GET /static/mercurial.js HTTP/1.1" 304 -
192.168.0.100 ... "GET /static/style-paper.css HTTP/1.1" 304 -
192.168.0.100 ... "GET /static/hgicon.png HTTP/1.1" 304 -
192.168.0.100 ... "GET /static/hglogo.png HTTP/1.1" 304 -
To stop the web server:
# pgrep -f 'hg serve -d -p 8000'
16737
# pkill -f 'hg serve -d -p 8000'
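The serve/stop pair above can be wrapped in two small helpers. The function names and defaults are my own; the hg and pkill invocations are exactly those shown above:

```shell
# Sketch: wrap the hg serve start/stop commands shown above.
# Function names are assumptions; repository name and port default to
# the values used in this post.
hg_serve_start ()
{
    typeset REPO=${1:-sample} PORT=${2:-8000}
    hg serve -d -p "$PORT" -R "$REPO" -A /tmp/access -E /tmp/error
}

hg_serve_stop ()
{
    typeset PORT=${1:-8000}
    # Matches the exact command line used by hg_serve_start.
    pkill -f "hg serve -d -p $PORT"
}
```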
But, above all, it's paramount to read:
- Mercurial: The Definitive Guide (HTML)
- Mercurial: The Definitive Guide (Documentation)
The CLI consists on a long list of hg subcommands:
# hg -v help
Mercurial Distributed SCM
list of commands:
add:
add the specified files on the next commit
addremove:
add all new files, delete all missing files
annotate, blame:
show changeset information by line for each file
archive:
create an unversioned archive of a repository revision
backout:
reverse effect of earlier changeset
bisect:
subdivision search of changesets
bookmarks:
track a line of development with movable markers
branch:
set or show the current branch name
branches:
list repository named branches
bundle:
create a changegroup file
cat:
output the current or given revision of files
clone:
make a copy of an existing repository
commit, ci:
commit the specified files or all outstanding changes
copy, cp:
mark files as copied for the next commit
diff:
diff repository (or selected files)
export:
dump the header and diffs for one or more changesets
forget:
forget the specified files on the next commit
graft:
copy changes from other branches onto the current branch
grep:
search for a pattern in specified files and revisions
heads:
show current repository heads or show branch heads
help:
show help for a given topic or a help overview
identify, id:
identify the working copy or specified revision
import, patch:
import an ordered set of patches
incoming, in:
show new changesets found in source
init:
create a new repository in the given directory
locate:
locate files matching specific patterns
log, history:
show revision history of entire repository or files
manifest:
output the current or given revision of the project manifest
merge:
merge working directory with another revision
outgoing, out:
show changesets not found in the destination
parents:
show the parents of the working directory or revision
paths:
show aliases for remote repositories
phase:
set or show the current phase name
pull:
pull changes from the specified source
push:
push changes to the specified destination
recover:
roll back an interrupted transaction
remove, rm:
remove the specified files on the next commit
rename, move, mv:
rename files; equivalent of copy + remove
resolve:
redo merges or set/view the merge status of files
revert:
restore files to their checkout state
rollback:
roll back the last transaction (dangerous)
root:
print the root (top) of the current working directory
serve:
start stand-alone webserver
showconfig, debugconfig:
show combined config settings from all hgrc files
status, st:
show changed files in the working directory
summary, sum:
summarize working directory state
tag:
add one or more tags for the current or given revision
tags:
list repository tags
tip:
show the tip revision
unbundle:
apply one or more changegroup files
update, up, checkout, co:
update working directory (or switch revisions)
verify:
verify the integrity of the repository
version:
output version and copyright information
additional help topics:
config Configuration Files
dates Date Formats
diffs Diff Formats
environment Environment Variables
extensions Using Additional Features
filesets Specifying File Sets
glossary Glossary
hgignore Syntax for Mercurial Ignore Files
hgweb Configuring hgweb
merge-tools Merge Tools
multirevs Specifying Multiple Revisions
patterns File Name Patterns
phases Working with Phases
revisions Specifying Single Revisions
revsets Specifying Revision Sets
subrepos Subrepositories
templating Template Usage
urls URL Paths
global options:
-R --repository REPO repository root directory or
name of overlay bundle file
--cwd DIR change working directory
-y --noninteractive do not prompt, automatically
pick the first choice for
all prompts
-q --quiet suppress output
-v --verbose enable additional output
--config CONFIG [+] set/override config option
(use 'section.name=value')
--debug enable debugging output
--debugger start debugger
--encoding ENCODE set the charset encoding
(default: UTF-8)
--encodingmode MODE set the charset encoding mode
(default: strict)
--traceback always print a traceback on exception
--time time how long the command takes
--profile print command execution profile
--version output version information and exit
-h --help display help and exit
[+] marked option can be specified multiple times
Of course, the man page is available as hg(1).
For instance:
$ man hg
Mercurial Manual HG(1)
NAME
hg - Mercurial source code management system
SYNOPSIS
hg command [option]... [argument]...
DESCRIPTION
The hg command provides a command line
interface to the Mercurial system.
COMMAND ELEMENTS
...
OPTIONS
...
In addition, there's a quick and cool HTTP option.
All that's required is to create an empty repository and start the server.
# cd /var/tmp
# hg init sample
# hg serve -d -p 8000 -R sample -A /tmp/access -E /tmp/error
Assume that the previous commands were issued on the host named mercurial.
Just point a web browser to http://mercurial:8000/help to get started.
The access log will immediately start tracking activity.
# cat /tmp/access
192.168.0.100 ... "GET /help HTTP/1.1" 200 -
192.168.0.100 ... "GET /static/mercurial.js HTTP/1.1" 304 -
192.168.0.100 ... "GET /static/style-paper.css HTTP/1.1" 304 -
192.168.0.100 ... "GET /static/hgicon.png HTTP/1.1" 304 -
192.168.0.100 ... "GET /static/hglogo.png HTTP/1.1" 304 -
To stop the web server:
# pgrep -f 'hg serve -d -p 8000'
16737
# pkill -f 'hg serve -d -p 8000'
Tuesday, January 21, 2014
Mercurial installation
Mercurial is a Revision Control System.
Its CLI isn't installed by default.
$ pkg info -r mercurial
Name: developer/versioning/mercurial
Summary: The Mercurial Source Control Management System
Description: A fast, lightweight source control ...
Category: Development/Source Code Management
State: Not installed
Publisher: solaris
Version: 2.2.1
Build Release: 5.11
Branch: 0.175.1.0.0.24.0
Packaging Date: September 4, 2012 05:17:40 PM
Size: 713.77 kB
FMRI: pkg://...
Solaris Studio 12.3 comes "half-way" configured for Mercurial.
This is verified by accessing the Tools | Plugins menu.
Any attempt to use the Team menu gives the following diagnostic:
To remedy both issues it's necessary to install the required IPS package.
The installation is rather simple.
# pkg install mercurial
Packages to install: 2
Create boot environment: No
Create backup boot environment: No
DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 2/2 531/531 2.7/2.7 813k/s
PHASE ITEMS
Installing new actions 599/599
Updating package state database Done
Updating image state Done
Creating fast lookup database Done
# hg --version
Mercurial Distributed SCM (version 2.2.1)
(see http://mercurial.selenic.com for more information)
Copyright (C) 2005-2012 Matt Mackall and others
This is free software; ...
There is NO warranty; ...
As an alternative, you may try to "externally" install from the source:
https://www.mercurial-scm.org/downloads
If you're going to use it under Solaris Studio 12.3, restart the IDE afterwards.
For remote use, install it on both endpoints.
It may be worth checking the help options as a quick reference.
But Mercurial: The Definitive Guide by Bryan O'Sullivan is the book:
- Mercurial: The Definitive Guide (HTML)
- Mercurial: The Definitive Guide (Documentation)
I've also found a few introductory videos by Brian Will quite worthwhile:
- Version Control with Mercurial (part 1 of 5)
- Version Control with Mercurial (part 2 of 5)
- Version Control with Mercurial (part 3 of 5)
- Version Control with Mercurial (part 4 of 5)
- Version Control with Mercurial (part 5 of 5)
I would also recommend the following:
Friday, January 17, 2014
Revision Control System
A Revision Control System (RCS) is at a premium.
Information and complexity are ever growing problems.
A single person can generate and use tons of information a day.
The need to manage changes is absolutely intrinsic to evolution.
Fortunately, there are tools to help with the challenge.
The two most traditional and famous tools are:
There are others as well, such as:
Fortunately, both Solaris 11 and Oracle Solaris Studio support Mercurial.
Actually, they support the more traditional CVS and SVN too.
I have started with Mercurial by listening to others' experience.
Then I've managed to learn a little more from a couple of main references:
- Mercurial: The Definitive Guide, by Bryan O'Sullivan
- Version Control with Mercurial, by Brian Will
Most think that a Revision Control System (RCS) is just for developers.
System administrators usually don't take advantage of it themselves.
Traditionally, this results in a nightmare of multiple copies of a file.
Even with a strict change management discipline errors can happen.
ZFS snapshots certainly help, but they aren't fine grained enough.
Perhaps one difficulty is the burden to cope with such a tool.
But the great news is that Mercurial is quite simple and effective.
A Revision Control System (RCS) is also known as a:
- Source/Software Configuration/Control Management (SCM)
- Version Control System (VCS)
Saturday, September 21, 2013
Advanced C++ smart pointers
All the development for the simple C++ smart pointers is good for many scenarios, but so far I haven't taken advantage of so-called reference counting. That's the main reason why I had to use the helper relay objects across function calls and why I couldn't consider the STL containers, not to mention multi-threading. The helper relay objects' critical role may change to passing the pointers across multiple threads (of the same process, of course!).
In what follows, I intend to attempt an implementation because it's quite useful in spite of the added overhead. Furthermore, I believe that most implementations, such as Boost's shared_ptr (whose name I don't like for being misleading: what's shared is the pointed-to object; not even the ownership), aren't appropriately implemented for Solaris.
For the dynamically allocated multi-threaded reference counter I'll use my specialized memory pool which I believe is flexible and efficient enough to get the job done.
...
Tuesday, September 10, 2013
A 1st specialized C++ memory pool
Specialized C++ memory pools have many important applications. They are not C++ memory allocators, but they can be used to implement them. Solaris provides quite good and rich support for this, which I'll try to take advantage of.
Perhaps the most obvious application is for the nodes of various data structures. Nevertheless, as a particular example of how useful a specialized memory pool is, consider the strategy of reference counting, which is especially useful to smart pointers. The fact is that the counters must be shared by the set of smart pointers pointing to the same object. As I said in another post, this has induced the misnaming of Boost's shared_ptr. But back to the subject, the counter requirement implies that it must be dynamically allocated. The known problem is that using the standard operators ::new and ::delete is quite inefficient, especially for an intended industrial-strength version. What's needed is a replacement, such as the slab allocator by Jeff Bonwick or Bjarne Stroustrup's user-defined allocator (The C++ Programming Language, Third/Special Edition, section 19.4.2), both based on the idea of caches for efficient constant-time O(1) operations.
Furthermore, it should be thread-safe, preferably with non-locking atomic arithmetic. I'll see if it's possible to avoid mutexes; I intend to use the atomic operations provided by the Solaris Standard C Library's atomic_ops(3C) collection of functions, as indicated by Darryl Gove in his book Solaris Application Programming, section 12.7.4, Using Atomic Operations. In fact, in Multicore Application Programming: For Windows, Linux, and Oracle® Solaris, chapter 8, listings 8.4 and 8.5, also by Darryl Gove, I may have the solution: instead of mutexes, use a lock-free variant that loops on the CAS of a local variable.
So, inspired by the above references, I'll start my implementation of a thread-safe, always-expanding specialized pool. Internally, it will be comprised of several chunks. My intention is to make the size of each chunk fit a certain page size, which ultimately will imply how many object slots each chunk will be able to cache. The first hurdle is that this is dynamic, depending on the hardware and its multiple supported page sizes among which to choose. As such, internally, I can't declare an array of objects (whose size must be known at compile time), nor can I declare a pointer to objects (as this would decouple the list of chunks from the chunks themselves, incurring separate dynamic memory management: one for the list and another for the chunks).
The initial (probably incomplete) version of my constant-time, O(1), thread-safe, always-expanding pool has two fundamental high-level operations, request() and recycle(), in order to make it publicly useful.
Next are a few raw examples just for illustration. More realistically, the pool would be internal to some other object such as an advanced C++ smart pointer in order to provide more efficient and multi-threaded allocations and deallocations.
Example 1:
template< typename T >
struct pointer
{
...
// All instances must share the same pool.
static memory::pool< std::size_t > pool;
...
};
// The shared pool.
template< typename T >
memory::pool< std::size_t > pointer< T >::pool;
Example 2:
void f()
{
// An ISM cache.
static memory::pool
<
std::size_t,
memory::policy::shared< memory::policy::ism >
>
pool;
...
}
Example 3:
// On Intel x86-64, sizeof( S ) = 8 = sizeof( void * )
// So, there's no internal fragmentation.
struct S
{
char c;
int i;
};
void f()
{
memory::pool
<
S,
memory::policy::c< memory::policy::locked >
>
pool( memory::largest_page() );
try
{
// Manual pre-expand (just for illustration).
pool.expand();
S * p = ( S * ) pool.request();
p->c = 'S';
p->i = 11;
pool.recycle( p );
}
catch ( ... )
{
...
}
}
Example 4:
#ifndef S_HXX
#define S_HXX
#include <string.h>
#include "memory.hxx"
struct S : memory::operations< S >
{
int code;
char text[ 10 ];
S( int c = 0, char const * t = "" ) : code( c )
{
::strlcpy( text, t, sizeof( text ) );
}
};
#endif /* S_HXX */
#include "s.hxx"
void f()
{
S * s = new S;
...
delete s;
}
#include "s.hxx"
struct derived_S : S
{
...
};
extern void f();
void g()
{
S * s = new S;
...
delete s;
...
derived_S * ds = new derived_S;
...
delete ds;
...
f();
}
The implementation idea is somewhat simple: maintain a free list of data slots over the unused payload areas of the buffers comprising the cache. But achieving this at the code level isn't as simple because of the performance, space and concurrency constraints. One notable trade-off is due to the free-list pointers sharing the space of unused data slots, which implies that the minimum size of a data slot is the size of a pointer (currently 8 bytes on 64-bit Intel platforms). Thus, for instance, if the data type is int (currently 4 bytes on 64-bit Intel platforms), then 50% of the space will be wasted. When there's waste, the situation is known as internal fragmentation. Hence, in terms of space efficiency:
It's best to have the size of the main (pointed-to) data type (T) as a multiple of the platform's pointer (void *) size.
Here's my first implementation attempt:
#include <stdexcept>
#include <cstdlib>
#include <cerrno>
#include <alloca.h>
#include <atomic.h>
#include <unistd.h>
#include <sys/shm.h>
#include <sys/mman.h>
#include <sys/types.h>
namespace memory
{
struct bad_alloc
{
bad_alloc( int const error ) throw() :
error( error )
{
}
int const error;
};
namespace policy
{
enum { locked, ism, pageable, dism };
namespace internal
{
// Base class for policy classes of memory management.
struct control
{
control( std::size_t const page_size ) throw() :
page_size( page_size )
{
}
void hat_advise_va( void const * p ) const throw()
{
::memcntl_mha mha;
mha.mha_cmd = MHA_MAPSIZE_VA;
mha.mha_flags = 0;
mha.mha_pagesize = page_size;
if
(
::memcntl
(
static_cast< caddr_t >
( const_cast< void * >( p ) ),
page_size,
MC_HAT_ADVISE,
reinterpret_cast< caddr_t >( & mha ),
0,
0
)
!= 0
)
{
// Log the error.
}
}
void lock( void const * p ) const throw()
{
if ( ::mlock( p, page_size ) != 0 )
{
// Log the error.
}
}
void unlock( void const * p ) const throw()
{
if ( ::munlock( p, page_size ) != 0 )
{
// Log the error.
}
}
// The runtime size of buffers.
std::size_t const page_size;
};
// Policy class for low-level shared memory management.
// Use non-type template parameters for additional data.
// Template functions to typecast data members.
template< int F >
struct shared : control
{
shared( std::size_t const page_size ) throw() :
control( page_size )
{
}
template< typename B >
void * request() const throw( std::bad_alloc )
{
int const handle =
::shmget( IPC_PRIVATE, page_size, SHM_R | SHM_W );
if ( handle == -1 )
{
// Log the error.
throw std::bad_alloc();
}
void * p = ::shmat( handle, 0, F );
if ( p == ( void * ) -1 )
{
// Log the error.
if ( ::shmctl( handle, IPC_RMID, 0 ) != 0 )
{
// Log the error.
}
throw std::bad_alloc();
}
* const_cast< int * >
( & reinterpret_cast< B * >( p )->handle ) =
handle;
return p;
}
template< typename B >
void recycle( B const * const p ) const throw()
{
int const handle = p->handle;
if ( ::shmdt( ( void * ) p ) != 0 )
{
// Log the error.
}
if ( ::shmctl( handle, IPC_RMID, 0 ) != 0 )
{
// Log the error.
}
}
};
} // namespace internal
// Policy class for low-level C++ memory management.
struct cxx : internal::control
{
cxx( std::size_t const page_size ) throw() :
internal::control( page_size )
{
}
template< typename >
void * request() const throw( std::bad_alloc )
{
// Unfortunately, in general, not aligned!
// No point for locking or setting page size.
// Unfortunately, all bets are off!
return ::operator new ( page_size );
}
template< typename B >
void recycle( B const * const p ) const throw()
{
::operator delete ( ( void * ) p );
}
};
// Template policy class for low-level C memory management.
template< int >
struct c;
template<>
struct c< pageable > : internal::control
{
c( std::size_t const page_size ) throw() :
internal::control( page_size )
{
}
template< typename >
void * request() const throw( std::bad_alloc )
{
// Solaris Standard C library to the rescue!
void * p = ::memalign( page_size, page_size );
if ( ! p )
throw std::bad_alloc();
// Advise HAT to adopt a corresponding page size.
hat_advise_va( p );
return p;
}
template< typename B >
void recycle( B const * const p ) const throw()
{
::free( ( void * ) p );
}
};
template<>
struct c< locked > : c< pageable >
{
c( std::size_t const page_size ) throw() :
c< pageable >( page_size )
{
}
template< typename B >
void * request() const throw( std::bad_alloc )
{
void * p = c< pageable >::request< B >();
lock( p );
return p;
}
template< typename B >
void recycle( B const * const p ) const throw()
{
unlock( ( void * ) p );
c< pageable >::recycle< B >( p );
}
};
template< int >
struct shared;
template<>
struct shared< ism > : internal::shared< SHM_SHARE_MMU >
{
shared( std::size_t const page_size ) throw() :
internal::shared< SHM_SHARE_MMU >( page_size )
{
}
};
template<>
struct shared< dism > : internal::shared< SHM_PAGEABLE >
{
shared( std::size_t const page_size ) throw() :
internal::shared< SHM_PAGEABLE >( page_size )
{
}
};
template<>
struct shared< pageable > : internal::shared< SHM_RND >
{
shared( std::size_t const page_size ) throw() :
internal::shared< SHM_RND >( page_size )
{
}
template< typename B >
void * request() const throw( std::bad_alloc )
{
void * p = internal::shared< SHM_RND >::request< B >();
// Advise HAT to adopt a corresponding page size.
hat_advise_va( p );
return p;
}
};
template<>
struct shared< locked > : shared< pageable >
{
shared( std::size_t const page_size ) throw() :
shared< pageable >( page_size )
{
}
template< typename B >
void * request() const throw( std::bad_alloc )
{
void * p = shared< pageable >::request< B >();
lock( p );
return p;
}
template< typename B >
void recycle( B const * const p ) const throw()
{
unlock( ( void * ) p );
shared< pageable >::recycle< B >( p );
}
};
} // namespace policy
namespace internal
{
// Template for basic-memory based buffers.
template< typename T, typename >
struct buffer
{
buffer( buffer const * const p ) throw() :
next( p )
{
}
union
{
// The next buffer on the list.
buffer const * const next;
// Alignment enforcement for the payload.
T * align;
};
// The cached objects reside beyond this offset.
// A trick to keep everything within the same chunk.
private:
// Just allow placement new and explicit destruction.
static void * operator new ( std::size_t ) throw();
static void operator delete ( void * ) throw();
static void * operator new [] ( std::size_t ) throw();
static void operator delete [] ( void * ) throw();
buffer( buffer const & );
buffer & operator = ( buffer const & );
};
// Partial specialization for shared-memory based buffers.
template< typename T, int S >
struct buffer< T, policy::shared< S > >
{
buffer( buffer const * const p ) throw() :
handle( handle ), next( p )
{
}
// The shared memory associated handle.
//
// WATCH OUT!
// This will be set in placement new
// even before the constructor is called!
//
int const handle;
union
{
// The next buffer on the list.
buffer const * const next;
// Alignment enforcement for the payload.
T * align;
};
// The cached objects reside beyond this offset.
// A trick to keep everything within the same chunk.
private:
// Just allow placement new and explicit destruction.
static void * operator new ( std::size_t ) throw();
static void operator delete ( void * ) throw();
static void * operator new [] ( std::size_t ) throw();
static void operator delete [] ( void * ) throw();
buffer( buffer const & );
buffer & operator = ( buffer const & );
};
} // namespace internal
template< typename >
struct strategy
{
enum { shared = false };
};
template< int S >
struct strategy< policy::shared< S > >
{
enum { shared = true };
};
// alloca() memory vanishes as soon as the allocating frame returns,
// so it must be called directly in the function that uses it,
// never wrapped in a helper that returns the pointer.
inline std::size_t largest_page() throw()
{
std::size_t largest = ::sysconf( _SC_PAGESIZE );
int n = ::getpagesizes( NULL, 0 );
if ( n > 0 )
{
std::size_t * const size =
static_cast< std::size_t * >
( ::alloca( n * sizeof( std::size_t ) ) );
if ( ::getpagesizes( size, n ) != -1 )
while ( --n >= 0 )
if ( size[ n ] > largest )
largest = size[ n ];
}
return largest;
}
// The specialized memory pool of T objects.
template
<
typename T,
typename A = policy::c< policy::pageable >
>
struct pool
{
// Do not pre-allocate anything.
// This provides very fast construction.
pool
(
std::size_t const page_size =
strategy< A >::shared
? largest_page()
: ::sysconf( _SC_PAGESIZE )
)
throw() :
allocator( page_size ),
segment( 0 ),
expanding( 0 ),
available( 0 )
{
}
~pool() throw()
{
// An iterative instead of a recursive deleter.
// This assures no stack overflow will ever happen here.
while ( segment )
{
buffer const * const p = segment;
segment = segment->next;
p->~buffer();
allocator.recycle( p );
}
}
// The function expand() can be delayed as much as desired.
// It will be automatically called if absolutely necessary.
// One thread will do the expand and others will wait.
void expand() throw( bad_alloc )
{
// Serialize and minimize concurrent expansions.
if
(
::atomic_cas_ptr( & expanding, 0, ( void * ) 1 )
==
0
)
{
// The modifying thread attempts the expansion.
// Blocked threads will get notified at end.
try
{
allocate();
// Release other threads.
::atomic_swap_ptr( & expanding, 0 );
}
catch ( std::bad_alloc const & )
{
// Release other threads.
::atomic_swap_ptr( & expanding, 0 );
// Re-throw as this component's bad_alloc.
throw bad_alloc( ENOMEM );
}
}
else
// Wait on loop before resuming.
// Better than throwing exceptions.
while
(
::atomic_cas_ptr( & expanding, 0, 0 )
==
( void * ) 1
)
;
}
void * request() throw( bad_alloc )
{
start:
try
{
slot * a;
do
{
if ( ! ( a = available ) )
throw bad_alloc( ENOMEM );
}
while
(
::atomic_cas_ptr( & available, a, a->next ) != a
);
return a;
}
catch ( bad_alloc const & )
{
try
{
// Race for expansion.
expand();
}
catch ( ... )
{
// Out of memory.
throw;
}
}
// The previous expansion succeeded.
// Try again to fulfill the request for a slot.
goto start;
}
void recycle( void * p ) throw()
{
slot * a;
do
{
a = available;
reinterpret_cast< slot * >( p )->next = a;
}
while ( ::atomic_cas_ptr( & available, a, p ) != a );
}
private:
void allocate() throw( bad_alloc )
{
segment =
::new ( allocator.request< buffer >() )
buffer( segment );
// Skip the buffer's prefix.
slot * const p =
reinterpret_cast< slot * >
(
reinterpret_cast< intptr_t >( segment )
+
sizeof( buffer )
);
// Add new slots from the new buffer's payload.
// Is it worthy to unroll (parallelize) the loop?
slot const * const limit =
p
+
( allocator.page_size - sizeof( buffer ) )
/
sizeof( slot );
slot * tail = p;
slot * tracker = tail++;
while ( tail < limit )
{
tracker->next = tail;
tracker = tail++;
}
(--tail)->next = 0;
// Prepend the new slots.
slot * a;
do
{
a = available;
tail->next = a;
}
while ( ::atomic_cas_ptr( & available, a, p ) != a );
}
private:
// The low-level (OS) memory allocator.
A const allocator;
// Convenience.
typedef internal::buffer< T, A > buffer;
// The list of buffers.
// Each node contains the slots of data.
buffer const * segment;
// Expansion serialization control.
// Must be a pointer-sized word: atomic_cas_ptr() operates on
// sizeof( void * ) bytes, so an int would be too narrow on LP64.
void * volatile expanding;
// The slots of data (T objects)
// and list of available (free) slots.
// Reusing free slots for the list's pointers.
union slot
{
// Just the T size is needed (for space reservation).
// Avoid further dependencies around the T type.
unsigned char data[ sizeof( T ) ];
slot * next;
}
* volatile available;
private:
pool( pool const & );
pool & operator = ( pool const & );
};
// A base class for a very convenient integration
// with the standard C++ memory management operators.
template
<
typename T,
typename A = policy::c< policy::pageable >
>
struct operations
{
static void * operator new ( std::size_t ) throw()
{
return pool.request();
}
static void operator delete ( void * p ) throw()
{
pool.recycle( p );
}
// The pool must be common (static).
static memory::pool< T, A > pool;
};
// The multiple translation unit template merging
// avoids manually defining their static declarations.
template< typename T, typename A >
pool< T, A > operations< T, A >::pool;
} // namespace memory
}
Next are a few raw examples, just for illustration. More realistically, the pool would be internal to some other object, such as an advanced C++ smart pointer, providing more efficient multi-threaded allocation and deallocation.
Example 1:
template< typename T >
struct pointer
{
...
// All instances must share the same pool.
static memory::pool< std::size_t > pool;
...
};
// The shared pool.
template< typename T >
memory::pool< std::size_t > pointer< T >::pool;
Example 2:
void f()
{
// An ISM cache.
static memory::pool
<
std::size_t,
memory::policy::shared< memory::policy::ism >
>
pool;
...
}
Example 3:
// On Intel x86-64, sizeof( S ) = 8 = sizeof( void * )
// So, there's no internal fragmentation.
struct S
{
char c;
int i;
};
void f()
{
memory::pool
<
S,
memory::policy::c< memory::policy::locked >
>
pool( memory::largest_page() );
try
{
// Manual pre-expand (just for illustration).
pool.expand();
S * p = ( S * ) pool.request();
p->c = 'S';
p->i = 11;
pool.recycle( p );
}
catch ( ... )
{
...
}
}
Example 4:
#ifndef S_HXX
#define S_HXX
#include "memory.hxx"
struct S : memory::operations< S >
{
int code;
char text[ 10 ];
S( int c = 0, char const * t = "" ) : code( c )
{
// strlcpy() takes the full destination size and always NUL-terminates.
::strlcpy( text, t, sizeof( text ) );
}
};
#endif /* S_HXX */
#include "s.hxx"
void f()
{
S * s = new S;
...
delete s;
}
#include "s.hxx"
struct derived_S : S
{
...
};
extern void f();
void g()
{
S * s = new S;
...
delete s;
...
derived_S * ds = new derived_S;
...
delete ds;
...
f();
}