Sunday, March 25, 2018

Building GNU automake 1.16.1

I start with the same considerations from the post Building GNU autoconf 2.69. The GNU automake tool is sometimes needed for GNU builds, so I add it to my crazy list of GNU software to build manually.

NOTE
In this post I'm using the latest GNU automake version at the time of this writing. But sometimes a particular version simply doesn't work: for instance, version 1.16 fails to build, while version 1.16.1 succeeds.

The basic building strategy and general assumptions have been detailed in a previous post, Staged Building, so I'll (hopefully) get straight to the point:

$ pwd
/stage/build

$ ./gnu-build-preparation ../source/.../automake-1.16.1.tar.xz
...

$ cd automake/automake-1.16.1-64

$ source ../setenv 64

CONFIG_SHELL=/usr/bin/bash

CC=/usr/bin/gcc CFLAGS=-m64 -march=core2 -std=gnu89

CXX=/usr/bin/g++ CXXFLAGS=-m64 -march=core2 -std=gnu++03

LD=/usr/bin/ld LDFLAGS=-m64 -march=core2

PATH=/opt/gnu/bin:/usr/gnu/bin:/usr/bin:/usr/sbin

PKG_CONFIG_PATH=

Suggested build sequence:

Fine-tune/fix config.h.in, Makefile.in and others...

$ ./configure \
  --build=x86_64-pc-solaris2.11 \
  --prefix=/opt/... \
  ...

$ gmake -j3

For IPS package:


$ sudo gmake DESTDIR=/stage/prototype/automake/1.16.1/64 install
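
The actual IPS packaging steps are beyond this post, but just to give an idea of where this prototype area fits in, a first manifest draft could be generated from it roughly like this (a sketch only; pkgmogrify, pkglint and pkgsend publish would follow):

$ pkgsend generate /stage/prototype/automake/1.16.1/64 \
  | pkgfmt > automake-1.16.1.p5m.1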

For immediate use:

$ sudo gmake install
$ sudo zfs snapshot -r .../opt/...@automake-1.16.1


Monday, March 19, 2018

Building GNU autoconf 2.69

I start with the same considerations from the post Building GNU m4 1.4.18. The GNU autoconf tool is sometimes needed for GNU builds, so I add it to my crazy list of GNU software to build manually.

The basic building strategy and general assumptions have been detailed in a previous post, Staged Building, so I'll (hopefully) get straight to the point:

$ pwd
/stage/build

$ ./gnu-build-preparation ../source/.../autoconf-2.69.tar.xz
...

$ cd autoconf/autoconf-2.69-64

$ source ../setenv 64

CONFIG_SHELL=/usr/bin/bash

CC=/usr/bin/gcc CFLAGS=-m64 -march=core2 -std=gnu89

CXX=/usr/bin/g++ CXXFLAGS=-m64 -march=core2 -std=gnu++03

LD=/usr/bin/ld LDFLAGS=-m64 -march=core2

PATH=/opt/gnu/bin:/usr/gnu/bin:/usr/bin:/usr/sbin

PKG_CONFIG_PATH=

Suggested build sequence:

Fine-tune/fix config.h.in, Makefile.in and others...

$ ./configure \
  --build=x86_64-pc-solaris2.11 \
  --prefix=/opt/... \
  ...

$ gmake -j4


For IPS package:


$ sudo gmake DESTDIR=/stage/prototype/autoconf/2.69/64 install

For immediate use:

$ sudo gmake install
$ sudo zfs snapshot -r .../opt/...@autoconf-2.69
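
Since the setenv script above already puts /opt/gnu/bin ahead of /usr/bin on the PATH, a quick sanity check that the freshly installed autoconf is the one being picked up is simply (this assumes the elided --prefix resolves under /opt/gnu; adjust to your own layout):

$ command -v autoconf
$ autoconf --version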

Building GNU m4 1.4.18

I start with the same considerations from the post Building GNU libtool 2.4.6. The GNU m4 macro processor is sometimes needed for GNU builds, so I add it to my crazy list of GNU software to build manually.

The basic building strategy and general assumptions have been detailed in a previous post, Staged Building, so I'll (hopefully) get straight to the point:

$ pwd
/stage/build

$ ./gnu-build-preparation ../source/.../m4-1.4.18.tar.xz
...

$ cd m4/m4-1.4.18-64

$ source ../setenv 64

CONFIG_SHELL=/usr/bin/bash

CC=/usr/bin/gcc CFLAGS=-m64 -march=core2 -std=gnu89

CXX=/usr/bin/g++ CXXFLAGS=-m64 -march=core2 -std=gnu++03

LD=/usr/bin/ld LDFLAGS=-m64 -march=core2

PATH=/opt/gnu/bin:/usr/gnu/bin:/usr/bin:/usr/sbin

PKG_CONFIG_PATH=

Suggested build sequence:

Fine-tune/fix config.h.in, Makefile.in and others...

$ ./configure \
  --build=x86_64-pc-solaris2.11 \
  --prefix=/opt/... \
  ...

$ gmake -j3

For IPS package:


$ sudo gmake DESTDIR=/stage/prototype/m4/1.4.18/64 install

For immediate use:

$ sudo gmake install
$ sudo zfs snapshot -r .../opt/...@m4-1.4.18
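
As noted in the libsigsegv post further down, this m4 build can also be pointed at the freshly built libsigsegv through the --with-libsigsegv-prefix configure option. Just as a sketch (the /opt/... placeholders stand for whatever prefixes are actually in use):

$ ./configure \
  --build=x86_64-pc-solaris2.11 \
  --prefix=/opt/... \
  --with-libsigsegv-prefix=/opt/... \
  ...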

Saturday, March 17, 2018

Building GNU libtool 2.4.6

I start with the same considerations from the post Building GNU libsigsegv 2.12. GNU libtool is sometimes needed for GNU builds, so I add it to my crazy list of GNU software to build manually.

The basic building strategy and general assumptions have been detailed in a previous post, Staged Building, so I'll (hopefully) get straight to the point:

$ pwd
/stage/build

$ ./gnu-build-preparation ../source/.../libtool-2.4.6.tar.xz

Processing /stage/source/.../libtool-2.4.6.tar.xz

------------------------------------
App: libtool
Ver: 2.4.6
------------------------------------

In the process, the following ZFS datasets will be created:

  rpool/stage/build/libtool
  rpool/stage/build/libtool/libtool-2.4.6
  rpool/stage/build/libtool/libtool-2.4.6-32
  rpool/stage/build/libtool/libtool-2.4.6-64

  rpool/stage/prototype/libtool
  rpool/stage/prototype/libtool/libtool-2.4.6
  rpool/stage/prototype/libtool/libtool-2.4.6/32
  rpool/stage/prototype/libtool/libtool-2.4.6/64

Enter "y" to proceed:
y

Creating rpool/stage/build/libtool...
Creating rpool/stage/build/libtool/libtool-2.4.6...

rpool/stage/build/libtool
rpool/stage/build/libtool/libtool-2.4.6
rpool/stage/build/libtool/libtool-2.4.6@source
rpool/stage/build/libtool/libtool-2.4.6-32
rpool/stage/build/libtool/libtool-2.4.6-32@start
rpool/stage/build/libtool/libtool-2.4.6-64
rpool/stage/build/libtool/libtool-2.4.6-64@start

Creating rpool/stage/prototype/libtool subtree.

rpool/stage/prototype/libtool
rpool/stage/prototype/libtool/2.4.6
rpool/stage/prototype/libtool/2.4.6/32
rpool/stage/prototype/libtool/2.4.6/64

Creating pre-configuration script.


$ cd libtool/libtool-2.4.6-64

$ source ../setenv 64
...
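
Incidentally, the @start snapshots created above come in handy when a configure or build attempt goes wrong: the 64-bit build area can be thrown back to its pristine state with a single command (no root privileges needed if ZFS delegations like the ones shown in the Ruby post below are in place for this tree):

$ zfs rollback rpool/stage/build/libtool/libtool-2.4.6-64@start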

Friday, March 16, 2018

Building GNU libsigsegv 2.12

In a crazy effort to gather an ever-growing set of up-to-date GNU tools and utilities built by myself, I proceed little by little with each piece of GNU software, getting its source code and attempting the build. At first I'm concerned only with successfully building the software, but I'll also try to figure out how to reference the new artifacts in a sort of side-by-side installation that can co-exist with the standard Solaris packages without mixing things up.

The GNU libsigsegv library is sometimes used in GNU builds, so I add it to my crazy list of GNU software to build manually. For now, it seems that later on I'll have to adjust my GNU m4 build to take this new libsigsegv version into account. Fortunately, when I build an updated version of GNU m4 I'll have the option (--with-libsigsegv-prefix) to do that automatically.

The basic building strategy and general assumptions have been detailed in a previous post, Staged Building, so I'll (hopefully) get straight to the point:

$ pwd
/stage/build

$ ./gnu-build-preparation ../source/.../libsigsegv-2.12.tar.gz

Processing /stage/source/.../libsigsegv-2.12.tar.gz

------------------------------------
App: libsigsegv
Ver: 2.12
------------------------------------

In the process, the following ZFS datasets will be created:

  rpool/stage/build/libsigsegv
  rpool/stage/build/libsigsegv/libsigsegv-2.12
  rpool/stage/build/libsigsegv/libsigsegv-2.12-32
  rpool/stage/build/libsigsegv/libsigsegv-2.12-64

  rpool/stage/prototype/libsigsegv
  rpool/stage/prototype/libsigsegv/libsigsegv-2.12
  rpool/stage/prototype/libsigsegv/libsigsegv-2.12/32
  rpool/stage/prototype/libsigsegv/libsigsegv-2.12/64

Enter "y" to proceed:
y
 

Creating rpool/stage/build/libsigsegv...
Creating rpool/stage/build/libsigsegv/libsigsegv-2.12...

rpool/stage/build/libsigsegv
rpool/stage/build/libsigsegv/libsigsegv-2.12
rpool/stage/build/libsigsegv/libsigsegv-2.12@source
rpool/stage/build/libsigsegv/libsigsegv-2.12-32
rpool/stage/build/libsigsegv/libsigsegv-2.12-32@start
rpool/stage/build/libsigsegv/libsigsegv-2.12-64
rpool/stage/build/libsigsegv/libsigsegv-2.12-64@start

Creating rpool/stage/prototype/libsigsegv subtree.

rpool/stage/prototype/libsigsegv
rpool/stage/prototype/libsigsegv/2.12
rpool/stage/prototype/libsigsegv/2.12/32
rpool/stage/prototype/libsigsegv/2.12/64

Creating pre-configuration script.


$ cd libsigsegv/libsigsegv-2.12-64

$ source ../setenv 64

CONFIG_SHELL=

CC=/usr/bin/gcc CFLAGS=-m64 -march=core2 -std=gnu89

CXX=/usr/bin/g++ CXXFLAGS=-m64 -march=core2 -std=gnu++03

LD=/usr/bin/ld LDFLAGS=-m64 -march=core2

PATH=/usr/gnu/bin:/usr/bin:/usr/sbin

PKG_CONFIG_PATH=

Suggested build sequence:

Fine-tune/fix config.h.in, Makefile.in and others...

$ ./configure \
  --build=x86_64-pc-solaris2.11 \
  --prefix=/opt/... \
  ...


$ gmake -j4

For IPS package:


$ sudo gmake DESTDIR=/stage/prototype/libsigsegv/2.12/64 install

For immediate use:

$ sudo gmake install

$ zfs snapshot -r .../opt/...@libsigsegv-2.12
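
A quick sanity check that 64-bit objects were really produced (the /opt/... placeholder stands for the elided prefix above, and the exact library file name may vary slightly):

$ file /opt/.../lib/libsigsegv.so

The output should report a 64-bit ELF dynamic library.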

Thursday, March 15, 2018

Building GNU Screen 4.6.2

Working on the command-line interface is a typical UNIX administrator routine. GNU Screen offers an interesting way of working with multiple terminals within a single text-mode display or X (or other GUI environment) window, but its full set of functionality is far more extensive. GNU Screen delves into a lot of terminal complexities which can become quite difficult to master, especially nowadays, when one usually surfs many layers above the bare-bones foundations.

Of course there are alternatives to GNU Screen. Some are roughly equivalent, such as tmux, while others, such as Terminator (blog), just specialize in splitting one X window into multiple terminals, each running an independent shell session. What seems most interesting is combining the facilities of Terminator and Screen.

Solaris 11.3 GA offers a rather outdated version, 4.0.3, while Solaris 11.4 Beta packages version 4.6.1. Perhaps that will improve by Solaris 11.4's official launch; we'll see. But for now I'm going to build version 4.6.2, the latest, myself. By doing this I hope to become able to build later versions as I please.

The basic building strategy and general assumptions have been detailed in a previous post, Staged Building, so I'll (hopefully) get straight to the point:

$ pwd
/stage/build

$ ./gnu-build-preparation ../source/.../screen-4.6.2.tar.gz
...

$ cd screen/screen-4.6.2-64

$ source ../setenv 64

CONFIG_SHELL=

CC=/usr/bin/gcc CFLAGS=-m64 -march=core2 -std=gnu89

CXX=/usr/bin/g++ CXXFLAGS=-m64 -march=core2 -std=gnu++03

LD=/usr/bin/ld LDFLAGS=-m64 -march=core2

PATH=/usr/gnu/bin:/usr/bin:/usr/sbin

PKG_CONFIG_PATH=

Suggested build sequence:

$ ./configure \
  --build=x86_64-pc-solaris2.11 \
  --prefix=/opt/... \
  --enable-pam \
  --enable-colors256

$ gmake -j3
 
For IPS package:

$ sudo gmake DESTDIR=/stage/prototype/screen/4.6.2/64 install

For immediate use:

$ sudo gmake install
$ sudo zfs snapshot -r .../opt/...@screen-4.6.2
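
Once installed, the multiple-terminals workflow mentioned at the top of this post boils down to a few commands plus the default Ctrl-a prefix key (just a minimal reminder, not a tutorial):

$ screen -S admin      # start a new session named admin
  (work, then press Ctrl-a d to detach)
$ screen -ls           # list existing sessions
$ screen -r admin      # reattach to it later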

Wednesday, March 14, 2018

Building Node.js 8.10.0

This post is part of a continuing effort to learn how to build more up-to-date artifacts myself, this time Node.js 8.10.0 (LTS), for my preferred software platform: Solaris 11. In particular, I'm still targeting Solaris 11.3 GA, which at the time of this writing is about 3 years old and ships GCC 4.8.2. Solaris 11.4 GA is expected to arrive late this year (2018).

To partially quote Node.js:
 
Node.js® is a JavaScript runtime built on Chrome's V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. ... .

Node.js is a typical fit for a Solaris back-end system, whose cloud-ready infrastructure is very well suited to the task: SMF, ZFS, zones, high thread counts and advanced networking capabilities.

The version used in this post, Node.js 8.10.0 (LTS), is a March 2018 security update. I've chosen the LTS 8.x series because it's the latest branch that best aligns with the Solaris way of things. It's expected to live until December 2019 and not beyond, because it follows the life-cycle of OpenSSL 1.0.2, on which it depends.

This (at the time of this writing) up-to-date custom build is relevant not only for the security fixes, but also because the software isn't available in the official repositories and, even if it were one day, it would be uncertain whether the update pace of the support repository would be acceptable. Furthermore, by performing a custom build instead of just downloading pre-built binaries, one gets exactly what's needed, optimized for a particular machine through specific compiler (CPU) options and source-code module selection.

Right from the start I was able to build a 64-bit Node.js 8.10.0 (LTS) on Solaris 11.4 Beta (already available) with GCC 5.5.0, but at first I wasn't able to repeat the task under Solaris 11.3 GA, and I had somewhat settled for that, since the build instructions stated that a more up-to-date GCC (4.9.4, from Aug 3, 2016) was required and Solaris 11.3 only gave me 4.8.2 (from Oct 16, 2013). Although a newer GCC is generally better, by inspecting the initial build failures on Solaris 11.3 GA I noticed that there weren't that many changes that could render the task impossible. This motivated me to take some time to investigate a way of accomplishing it.
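
For context, Node.js doesn't use the GNU autotools: it ships its own Python-based configure script driving gyp, so the general shape of a 64-bit build is something like the following (a sketch only, not the exact invocation used here):

$ ./configure --prefix=/opt/... --dest-cpu=x64
$ gmake -j3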

Staged Building

OUT OF DATE
Now and then since last year I've been trying to improve my skill at building more up-to-date 64-bit software artifacts myself, to use on my preferred OS (currently Solaris 11.3 GA) and optimized for my particular CPU (an old Intel Core 2). I've been somewhat radical in experimenting with this for a few applications, programming languages and GNU tools and utilities. The learning curve is slow and rather steep, with the added difficulty that a Solaris system isn't a true GNU/Linux, despite the honorable efforts to make it compatible enough to run some popular GNU software and perform standard GNU builds of open-source software.

Tuesday, March 13, 2018

Building Ruby 2.5.0

This post is part of a continuing effort to learn how to build more up-to-date artifacts myself, this time Ruby 2.5.0, for my preferred software platform: Solaris 11.

In particular, I'm still targeting Solaris 11.3 GA, which at the time of this writing is about 3 years old and ships GCC 4.8.2. Solaris 11.4 GA is expected to arrive late this year (2018), but curiously I haven't yet succeeded in building Ruby 2.5.0 on Solaris 11.4 Beta (already available) with GCC 5.5.0.

This latest stable Ruby, version 2.5.0, isn't yet available in the package repositories of any Solaris version, and that's why I'm doing all this. For instance, on Solaris 11.3 GA:

$ pkg search -H -o pkg.shortfmri *:set:pkg.fmri:runtime/ruby* \
  |sort -u
pkg:/runtime/ruby-18@1.8.7.374-0.175.3.0.0.24.0
pkg:/runtime/ruby-19@1.9.3.551-0.175.3.0.0.30.0
pkg:/runtime/ruby-19/ruby-tk@1.9.3.551-0.175.3.0.0.30.0
pkg:/runtime/ruby-21@2.1.6-0.175.3.0.0.30.0
pkg:/runtime/ruby-21/ruby-tk@2.1.6-0.175.3.0.0.30.0
pkg:/runtime/ruby@1.9-0.175.3.0.0.30.0


In my previous learning path towards building a few open-source packages for Solaris 11.3 myself, I ended up with a helper script presented in another post, GNU build preparation, which was the result of a few early experiments (which I hope to review at some point in the near future).


I assume the developer-gnu IPS package has been installed and that the following ZFS delegations are already in place:

$ zfs allow rpool/software/build
---- Permissions on rpool/software/build ------------------
Permission sets:
    @descendent clone,compression,destroy,promote,quota,
                readonly,rename,reservation,share,sharenfs
    @generic create,diff,hold,mount,receive,
             release,rollback,send,snapshot,userprop
Descendent permissions:
    user user1 @descendent
Local+Descendent permissions:
    user user1 @generic


$ zfs allow rpool/software/prototype
---- Permissions on rpool/software/prototype --------------
Permission sets:
    @descendent clone,compression,destroy,promote,quota,
                readonly,rename,reservation,share,sharenfs
    @generic create,diff,hold,mount,receive,
             release,rollback,send,snapshot,userprop
Descendent permissions:
    user user1 @descendent
Local+Descendent permissions:
    user user1 @generic
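
For reference, delegations like these can be set up (as root) roughly as follows; this is just a sketch of the idea, with the permission lists mirroring the output above (see zfs(1M) for the exact syntax), and the same commands would be repeated for rpool/software/prototype:

$ sudo zfs allow -s @descendent clone,compression,destroy,promote,quota,readonly,rename,reservation,share,sharenfs rpool/software/build
$ sudo zfs allow -s @generic create,diff,hold,mount,receive,release,rollback,send,snapshot,userprop rpool/software/build
$ sudo zfs allow -d user1 @descendent rpool/software/build
$ sudo zfs allow user1 @generic rpool/software/build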


After downloading and verifying the checksum of a compressed source-code tarball and running the gnu-build-preparation script from the GNU build preparation post, I end up with the following build structure mounted at /software (the details of running the GNU build preparation script are presented in the last example of that post):
  
rpool/software/build/ruby
rpool/software/build/ruby/ruby-2.5.0
rpool/software/build/ruby/ruby-2.5.0@source 
rpool/software/build/ruby/ruby-2.5.0-gnu32 
rpool/software/build/ruby/ruby-2.5.0-gnu32@start
rpool/software/build/ruby/ruby-2.5.0-gnu64 
rpool/software/build/ruby/ruby-2.5.0-gnu64@start
 
rpool/software/prototype/ruby 
rpool/software/prototype/ruby/ruby-2.5.0
rpool/software/prototype/ruby/ruby-2.5.0/gnu32
rpool/software/prototype/ruby/ruby-2.5.0/gnu64

NOTE
Since the original writing of this post I have revised some important assumptions and recommendations, which would require an extensive rewrite. Instead of redoing it all, I kindly ask that you pay attention to the following changes:
  1. I've adjusted all the ZFS tree datasets' names
    (the development tree is now rooted at /stage)
     
  2. I've started using DESTDIR and a different --prefix
    (DESTDIR=/stage/prototype/ruby/2.5.0/64)
    (--prefix=/opt/sfw/ruby/2.5.0)
     
  3. I'm taking a final ZFS snapshot as the deployment closing.
    (zfs snapshot -r .../opt/sfw/ruby/2.5.0@release)
Hence, in what follows just keep an eye on adjusting accordingly (a short recap of the adjusted closing commands appears right after this note).
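
Pulling those adjustments together, the closing steps of the build end up looking like this (just a recap of the values from the note above):

$ ./configure --build=x86_64-pc-solaris2.11 --prefix=/opt/sfw/ruby/2.5.0 ...
$ gmake -j3
$ sudo gmake DESTDIR=/stage/prototype/ruby/2.5.0/64 install
$ zfs snapshot -r .../opt/sfw/ruby/2.5.0@release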

The next step is to edit the configuration script (setenv) delivered in the previous step, in order to adjust/tune some environment variables that influence the build process. Then, before starting the build, extend the environment accordingly. For instance, to prepare a 64-bit build on my particular machine I do as follows:

$ cd ruby/ruby-2.5.0-64

$ source ../setenv 64

--prefix=/software/prototype/.../gnu64
--build=x86_64-pc-solaris2.11

CONFIG_SHELL=

CC=/usr/bin/gcc CFLAGS=-m64 -march=core2 -std=gnu99

CXX=/usr/bin/g++ CXXFLAGS=-m64 -march=core2 -std=gnu++11

LD=/usr/bin/ld LDFLAGS=-m64 -march=core2

PATH=/usr/gnu/bin:/usr/bin:/usr/sbin

PKG_CONFIG_PATH=
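
For reference, the setenv script itself is nothing fancy; conceptually it boils down to something like the sketch below. This is an approximation of mine, not the actual script: the compiler flags, -std= levels and PATH vary per package and per post, and the real script also prints the suggested configure options.

# Meant to be sourced, e.g. ". ../setenv 64" or ". ../setenv 32"
bits=${1:-64}

export CONFIG_SHELL=/usr/bin/bash
export CC=/usr/bin/gcc  CFLAGS="-m$bits -march=core2 -std=gnu99"
export CXX=/usr/bin/g++ CXXFLAGS="-m$bits -march=core2 -std=gnu++11"
export LD=/usr/bin/ld   LDFLAGS="-m$bits -march=core2"
export PATH=/usr/gnu/bin:/usr/bin:/usr/sbin
export PKG_CONFIG_PATH=

# Echo the resulting settings, as seen in the transcripts above.
echo "CONFIG_SHELL=$CONFIG_SHELL"
echo "CC=$CC CFLAGS=$CFLAGS"
echo "CXX=$CXX CXXFLAGS=$CXXFLAGS"
echo "LD=$LD LDFLAGS=$LDFLAGS"
echo "PATH=$PATH"
echo "PKG_CONFIG_PATH=$PKG_CONFIG_PATH"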