Tuesday, December 6 2016

Release of OASIS 0.4.8

I am happy to announce the release of OASIS v0.4.8.

OASIS is a tool that helps OCaml developers integrate configure, build and install systems into their projects. It helps create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is loosely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

The pull request for inclusion in OPAM is pending.

Here is a quick summary of the important changes:

  • Fix various parsing problems present in OASIS 0.4.7 (extraneous whitespace, handling of ocamlbuild arguments, ...).
  • Enable the creation of OASIS plugins and OASIS command-line plugins.
  • Various fixes for the "omake" plugin.
  • Create two branches to pin OASIS with OPAM, making it easier for contributors to test the development version.

Thanks to Edwin Török, Yuri D. Lensky and Gerd Stolpmann for their contributions.

Monday, August 22 2016

Release of OASIS 0.4.7

I am happy to announce the release of OASIS v0.4.7.

OASIS is a tool that helps OCaml developers integrate configure, build and install systems into their projects. It helps create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is loosely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

The pull request for inclusion in OPAM is pending.

Here is a quick summary of the important changes:

  • Drop support for OASISFormat 0.2 and 0.1.
  • New plugin "omake" to support the build, doc and install actions.
  • Improve automated tests (Travis CI and AppVeyor).
  • Trim down the dependencies (removed ocaml-gettext, camlp4, ocaml-data-notation).

Features:

  • findlib_directory (beta): install libraries in findlib sub-directories (see the _oasis sketch after this list).
  • findlib_extra_files (beta): install extra files with ocamlfind.
  • source_patterns (alpha): provide a module-to-source-file mapping.
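
For example, a minimal _oasis sketch using one of these features; the project and library names are illustrative, and the FindlibDirectory field name is an assumption based on the feature name, so check the manual for the exact spelling:

 OASISFormat: 0.4
 Name: myproject
 Version: 0.1.0
 BetaFeatures: findlib_directory

 Library mylib
   Path: src
   Modules: MyLib
   FindlibDirectory: internal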

This version contains a lot of changes and is the result of a huge amount of work. The addition of the OMake plugin is major progress. The overall work has been aimed at making OASIS more library-like. This is still a work in progress, but we made some clear improvements by getting rid of various side effects (like the requirement of using "chdir" to handle the "-C" option, which led to propagating ~ctxt everywhere and to the design of OASISFileSystem).

I would like to thank again the contributors to this release: Spiros Eliopoulos, Paul Snively, Jeremie Dimino, Christopher Zimmermann, Christophe Troestler, Max Mouratov, Jacques-Pascal Deplaix, Geoff Shannon, Simon Cruanes, Vladimir Brankov, Gabriel Radanne, Evgenii Lepikhin, Petter Urkedal, Gerd Stolpmann and Anton Bachin.

Friday, April 29 2016

Release of OASIS 0.4.6

I am happy to announce the release of OASIS v0.4.6.

OASIS is a tool that helps OCaml developers integrate configure, build and install systems into their projects. It helps create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is loosely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

The main purpose of this release is to make it possible to install OASIS with OPAM on OCaml 4.03.0. In order to do so, I had to disable some tests and use the new set of String.*_ascii functions. The OPAM release is pending upload and should soon be available.

Thursday, October 23 2014

Release of OASIS 0.4.5

On behalf of Jacques-Pascal Deplaix

I am happy to announce the release of OASIS v0.4.5.

OASIS is a tool that helps OCaml developers integrate configure, build and install systems into their projects. It helps create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is loosely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Here is a quick summary of the important changes:

  • Build and install annotation files.
  • Use the builtin bin_annot and annot tags (see the _tags sketch after this list).
  • Tag .mly files on the same basis as .ml and .mli files (required by menhir).
  • Remove the 'program' constraint from C-dependencies. Previously, when a library had C sources and e.g. an executable depended on that library, changing the C sources and running '-build' did not trigger a rebuild of the library. By adding these dependencies (or rather, by removing the constraint), it seems to work fine.
  • Some bug fixes.
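
For instance, with the builtin ocamlbuild tags, annotation output can be requested in a project's _tags file; the path pattern here is illustrative:

 <src/**/*.ml>: bin_annot, annot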

Features:

  • no_automatic_syntax (alpha): disable the automatic inclusion of -syntax camlp4o for packages that match the internal heuristic (a dependency ends with .syntax or is a well-known syntax).
  • compiled_setup_ml (alpha): fix a bug when using multiple arguments to the configure script.

This new version is a small release to catch up with all the fixes/pull requests present in the VCS that had not yet been published. This should make the life of my dear contributors easier -- thanks again for being patient.

I would like to thank again the contributors to this release: Christopher Zimmermann, Jerome Vouillon, Tomohiro Matsuyama and Christoph Höger. Their help is greatly appreciated.

Tuesday, March 25 2014

Release of OASIS 0.4.3

I am happy to announce the release of OASIS v0.4.3.

OASIS is a tool that helps OCaml developers integrate configure, build and install systems into their projects. It helps create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is loosely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Here is a quick summary of the important changes:

  • Added a -remove switch to the setup-clean subcommand, designed to remove unaltered generated files completely rather than simply emptying their OASIS section.
  • Translate the path of ocamlfind on Windows to be bash/win32 friendly.
  • The Description field is now parsed into more structured text (paragraphs/verbatim).

Features:

  • stdfiles_markdown (alpha): set the default extension of StdFiles (AUTHORS, INSTALL, README) to '.md'. Use markdown syntax for standard files. Use comments that hide the OASIS section and digest. This feature should help direct publishing on GitHub.
  • disable_oasis_section (alpha): allows DisableOASISSection to be specified in the package, with a list of expandable filenames given. Any generated file in this list doesn't get an OASIS section digest or comment headers and footers, and is therefore regenerated each time `oasis setup` is run (and any changes made are lost). This feature is mainly intended for use with StdFiles so that, for example, INSTALL.txt and AUTHORS.txt (which often won't be modified) can have the extra comment lines removed.
  • compiled_setup_ml (alpha): allow precompiling setup.ml to speed things up.

This new version closes 4 bugs, mostly related to the parsing of _oasis. It also includes a lot of refactoring to improve the overall quality of the OASIS code base.

The big project for the next release will be to set up a Windows host for regular builds and tests on this platform. I plan to use WODI for this setup.

I would like to thank again the contributors to this release: David Allsopp, Martin Keegan and Jacques-Pascal Deplaix. Their help is greatly appreciated.

Sunday, February 23 2014

Release of OASIS 0.4.2

I am happy to announce the release of OASIS v0.4.2.

OASIS is a tool that helps OCaml developers integrate configure, build and install systems into their projects. It helps create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is loosely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Here is a quick summary of the important changes:

  • Change BSD3 and BSD4 to BSD-3-clause and BSD-4-clause to comply with DEP5, add BSD-2-clause.

BSD3 and BSD4 are still valid but marked as deprecated.

  • Enhance .cmxs support through the generation of .mldylib files.

When one of the modules of a library has the name of the library, ocamlbuild tends to just turn that single module into the .cmxs. The use of a .mldylib file fixes that problem, and the .cmxs really contains all the modules of the library.
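
A .mldylib file is simply a list of the library's modules, one per line, in the same format as the .mllib files ocamlbuild already uses (module names here are illustrative):

 Foo
 Foo_bar
 Foo_baz

ocamlbuild then builds the .cmxs from that list instead of deriving it from the single module that shares the library's name.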

  • Refactor oasis.cli to be able to create subcommand plugins.
    • Exported modules now start with CLI.
    • Display plugins in the manual.
    • Design so that it is possible to be thread-safe.
    • Try to minimize the number of functions.
    • Make better choices of names and API.
    • A subcommand plugin 'dist' to create tarballs is in preparation, as a separate project.
  • Remove the plugin-list subcommand; it was limited and probably unused. A better alternative will appear in the next version.
  • The setup-dev subcommand is now hidden and will soon be removed.

I published a quick intermediate version, 0.4.1, a few days after the previous release; it was a bug fix related to threads. I also decided to skip last month's release. I was in the US at the time and didn't have enough time to work on OASIS (Christmas vacation and a trip to the US). This month I am back on track.

This new version doesn't feature a lot of visible changes. I mostly worked on the command line interface code, in order to be able to create external plugins. A first external plugin is almost ready, but needs some more polishing before release. This first plugin project is a port of a script that I have used for a long time and that was present in the source code of oasis (oasis-dist.ml). It will be a project of its own, with a different release cycle. The point of this plugin is to create a .tar.gz out of an OASIS-enabled project.

I also must admit that I am very happy to see contributors sending me pull requests through GitHub. It helps me a lot, and I also realize that the learning curve to get into the OASIS code is steep. This last point is something I will try to improve.

Friday, December 13 2013

Release of OASIS 0.4.0

I am happy to announce that OASIS v0.4.0 has just been released.

OASIS is a tool that helps OCaml developers integrate configure, build and install systems into their projects. It helps create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is loosely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

I have recently resumed my work on OASIS, and this will hopefully be the version that leads to quicker iteration in the development of OASIS. The development process was slowed down by the fact that I feared introducing new fields in _oasis, or regressions. This was a pain, and I decided to change my development model.

Features

The most important step is the introduction of the AlphaFeatures and BetaFeatures fields. They make it possible to introduce pieces of code that are only activated if certain features are listed in those fields. This should help the project stay always ready to release.

The features also cover other aspects, like flag_tests and flag_docs, which were introduced in OASIS v0.3.0. In fact, the features API is now used to introduce all enhancements while keeping backward compatibility with regard to OASISFormat. Rather than defining ~since_version:0.3 for fields, we use a feature that handles the maturity level of the change. When I feel a specific feature is ready to ship, I just change InDev Alpha to InDev Beta, and then to SinceVersion 0.4. In the long term, when we no longer support any version of OASIS that predates the SinceVersion, the feature will always be true and I will fully integrate it into the code.

The only constraint around features is: if you use the AlphaFeatures or BetaFeatures fields, you must use the latest OASISFormat...

Features section in the manual.

Example of features available:

  • section_object: allows creating objects (.cmo/.cmx) in _oasis (see the sketch after this list)
  • pure_interface: an OCamlbuild feature that allows handling a .mli without a .ml file
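
For example, a minimal _oasis sketch enabling an alpha feature; all names other than the feature fields themselves are illustrative:

 OASISFormat: 0.4
 Name: myproject
 Version: 0.1.0
 AlphaFeatures: section_object

 Object myobj
   Path: src
   Modules: MyObj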

Automate

Another topic is the automation of release testing. For OASIS v0.3.0, I ran tests on all platforms manually, late in the development of v0.3.0, and it was painful to fix. So I have decided to set up a Jenkins instance that automates testing on Linux. In the long term, I plan to also set up a Mac OS X builder and to start looking at Windows as well. This should help me catch errors early and fix them quickly.

However, for v0.4.0 I have decided to just release what I have, which has mainly been tested on Linux. The point here is to release and iterate quickly, rather than wait for perfection. Hopefully end-user testing will quickly uncover new bugs.

Time boxed release

In the coming months, I will try to do time-boxed releases. I will try to release a version of OASIS on the 15th of every month. The point here is to iterate faster and avoid long delays between releases.

See you in 1 month for the next release.

Sunday, November 10 2013

opam2debian, a tool to create Debian binary package out of OPAM

One week ago, a thread started on the opam-devel mailing list about the possibility of creating binary snapshots to distribute OPAM packages. Distributing binary packages with everything already compiled is pretty useful when you want to make sure that everyone has the same version of the packages installed and you don't want to spend time configuring all of your colleagues' computers.

As a matter of fact, I was also interested in seeing that happen. I have several computers where I want to install a set of packages, and I want to snapshot OPAM archives when I am ready for an upgrade. I have long tested another source distribution: GODI. I even wrote a puppet module to drive it. This was a fun experience, but as with any source-only distribution, there are some drawbacks. In particular, even if it is fully automated, you get a lot of errors and timeouts when trying to build automatically from source.

This is how opam2debian started. I wanted a replacement for puppet-godi that takes advantage of OPAM and makes it possible to distribute my set of packages everywhere quickly.

The goals of this project are:

  1. generate a Debian binary package with all the dependencies set on external packages (like libpcre)
  2. use OPAM to build everything
  3. use the standard Debian process to build the package
  4. create a snapshot of the OPAM repository that allows rebuilding exactly the same thing on different arches

The real challenge behind opam2debian was to be able to use the standard Debian process to create the package, in order to get all the power of the dependency computation provided by the Debian maintainer scripts. My great achievement on this topic was to find a tool called proot that allows you to bind-mount a directory as a normal user. This is an achievement because it allows creating a package using a directory where you are not allowed to write (i.e. the /opt/opam2debian directory, which is owned by root). This makes it possible to build on any Jenkins builder without root access, using the standard Debian way to build packages.
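
As an illustration of the proot trick (the paths here are hypothetical), one can bind-mount a user-writable build directory over the root-owned prefix and run the standard build inside it:

 $> proot -b $PWD/build:/opt/opam2debian /bin/sh -c 'dpkg-buildpackage -us -uc'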

Usage example

I need to compile a set of packages to install on all my Jenkins builders. Here is the list of packages I need, and I also want to use the latest OCaml version. I want to build a Debian package for Debian Wheezy, which only ships OCaml 3.12.1.

Building the package:

$ opam2debian create --build --name opam2debian-test2 --compiler 4.01.0 \
 ocamlfind fileutils ocaml-data-notation ocaml-expect ounit \
 ocamlmod ocamlify oasis yojson sexplib extlib pcre-ocaml \
 calendar ocaml-inifiles ocamlnet ocurl gettext inotify ocaml-sqlexpr \
 ocamlrss ocaml-xdg-basedir

Getting the package list right can be tricky, because the process stops at the first error -- and if the failure comes from the last package, that can take long. I have implemented a --keep-build-dir option in case you want to tune the package list live.

The program will build everything, and that can take quite a while. At the end you get a file opam2debian-test2_20131105_amd64.deb, which has a reasonable size of 232MB. That is big, but it looks like the biggest directory is $OPAMROOT/4.01.0/build; I am not sure whether I can remove it, but we may save some space there (the build directory represents 50% of the package size).

Then you can just do a standard Debian installation:

$ sudo dpkg -i opam2debian-test2_20131105_amd64.deb

And you can use it:

$ eval $(opam config env --root /opt/opam2debian/opam2debian-test2/)
$ which ocamlfind
 /opt/opam2debian/opam2debian-test2/4.01.0/bin/ocamlfind
$ ocamlfind list
 [...]
 stdlib              (version: [distributed with Ocaml])
 str                 (version: [distributed with Ocaml])
 threads             (version: [distributed with Ocaml])
 threads.posix       (version: [internal])
 threads.vm          (version: [internal])
 type_conv           (version: 109.41.00)
 unidiff             (version: 0.0.2)
 unix                (version: [distributed with Ocaml])
 userconf            (version: 0.3.1)
 xdg-basedir         (version: 0.0.3)
 xmlm                (version: 1.1.1)
 yojson              (version: 1.1.5)

A nice thing to note is that it is a standard OPAM install. You can install, update and upgrade with OPAM from there, as root. However, I recommend not doing so, and just rebuilding a newer Debian package to upgrade.

Install

You will need the opam and proot Debian packages, available in Debian jessie and sid. You will also need various OCaml libraries (cmdliner, calendar, fileutils and jingoo).

Download the opam2debian tarball on the forge, build and install it.

The project is hosted on GitHub.

Open issues

Of the initial list of project goals, not everything is completed. I still have several open issues.

In particular, goal 4 (create a snapshot of the OPAM repository) was blocked by a bug in opam-mk-repo that prevents snapshots (see my pull request to solve it).

Another issue is the licenses of the included files. I should list them all, and I need to figure out a way to extract every license.

Submit bugs directly on GitHub.

Sunday, September 29 2013

OUnit 2.0, official release

After 1.5 months of work, I am proud to officially release OUnit 2.0.0. This is a major rewrite of OUnit that includes various features I think were missing from OUnit 1. The very good news is that the port of the OASIS test suite has proven that this new version of OUnit can drastically improve the running time of a test suite.

OUnit is a unit test framework for OCaml. It allows one to easily create unit-tests for OCaml code. It is based on HUnit, a unit testing framework for Haskell. It is similar to JUnit, and other XUnit testing frameworks.

Download OUnit v2.0.0

Documentation of v2.0.0

Website

The basic features:

  • better configuration setup:
    • environment variables
    • command line options
    • configuration files
  • improved output of the tests:
    • allow vim quickfix to jump to the place in the log file where the error happened
    • output an HTML report
    • output a JUnit report
    • systematic logging (verbose always on), with the log output to a file
  • choose how to run your tests (see the invocation sketch after this list):
    • run tests in parallel using processes (auto-detects the number of CPUs and runs as many worker processes)
    • run tests concurrently using threads
    • use the old sequential runner
  • choose which tests to run with a chooser that can do smart selection of tests:
    • simple: just run tests in sequence
    • failfirst: run the tests that failed in the last run first, and skip the previously successful ones if the failures persist
  • some refactoring:
    • bracket: brackets now register their tear down in the test context, which makes them easier to use
    • remove all useless functions from the OUnit2 interface
  • non-fatal sections: allow failures inside a non-fatal section without immediately aborting the whole test
  • allow using OUnit1 tests inside OUnit2 (to smooth the transition)
  • a timer that makes tests fail if they take too long, only when using the processes runner (I was not able to do it cleanly with the threads and sequential runners)
  • allow parametrizing filenames, so that you can use OUNIT_OUTPUT_FILE=ounit-$(suite_name).log and have $(suite_name) replaced by the test suite name
  • create locks to avoid concurrent access to the same resources within a single process or across the whole application (typically to avoid one test doing a chdir while another thread is doing a chdir elsewhere)
  • create an in_testdata_dir function to locate test data, if any
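
For example, assuming a test executable built against OUnit2, the runner and output can be selected from the command line or the environment; the option names below follow the configuration mechanism described above, so check ./test_suite -help on your version:

 $> OUNIT_OUTPUT_FILE='ounit-$(suite_name).log' ./test_suite -runner processes -shards 8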

Migration path

OUnit 2.0.0 still provides the OUnit module, which is exactly the same as in the last OUnit 1.x version. This way, you are not forced to migrate. However, this means that you gain no advantage from the new release, and you even get some slowdown due to the increased complexity of the code. I therefore strongly recommend upgrading to OUnit2.

Here is a checklist to do the migration:

  • replace all open OUnit by open OUnit2
  • the test function now takes a test_ctxt argument, so replace all fun () -> ... by fun test_ctxt -> ... (see the sketch after this list)
  • brackets are now inlined, so bracket setUp f tearDown becomes let x = bracket setUp tearDown test_ctxt in ...
  • make sure that you don't change global process state (like chdir or Unix.putenv) and that you don't rely on one test setting something up for the next test
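
To make the first two points concrete, here is a minimal before/after sketch (the test itself is illustrative):

 (* OUnit1 style: *)
 open OUnit
 let test_add () = assert_equal 4 (2 + 2)
 (* OUnit1's run_test_tt_main returns the list of results. *)
 let _ = run_test_tt_main ("suite" >::: ["add" >:: test_add])

 (* The same test migrated to OUnit2: the test function now receives a test_ctxt. *)
 open OUnit2
 let test_add test_ctxt = assert_equal 4 (2 + 2)
 let () = run_test_tt_main ("suite" >::: ["add" >:: test_add])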

The OASIS test suite migration

In order to check that everything was working correctly, I have migrated the OASIS test suite to OUnit2. This is a big test suite (210 test cases) and it includes quite long sequences of tests (end-to-end tests, from calling oasis setup to compiling and installing the results). This was really time consuming, and I hoped to see a significant speedup of the tests with OUnit2.

You can see the result of the full migration, in terms of code, here.

Here are the results on my Intel Core i7 920/SSD:

  • Pristine test suite (210 tests):
    • oUnit v1: 52.36s (i.e. latest OUnit v1.x, reference time)
    • oUnit1 over oUnit2: 60.39s (OUnit v2.0.0 using the OUnit v1 layer)
  • Migration to OUnit2 (166 tests):
    • processes (8 shards): 10.12s
    • processes (autodetect, 4 shards): 12.99s
    • sequential: 58.77s

The migration was quite heavy because this test suite had a big design problem: it used in-place modification of the test data. I think I picked this design because I thought it was a good way to decrease the running time. As a matter of fact, it was a huge mistake that kept producing failed test cases because one of the previous tests had failed. I have refactored all this, and now we start by copying the test data into a temporary directory, which ensures that every test always starts from pristine test data.
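
A minimal sketch of this pattern, assuming the fileutils library and a test-data directory (all names are illustrative):

 open OUnit2

 (* Copy the pristine test data into a fresh temporary directory; the
    directory is registered in the context and removed automatically
    when the test finishes. *)
 let in_pristine_data test_ctxt f =
   let tmpdir = bracket_tmpdir test_ctxt in
   FileUtil.cp ~recurse:true ["test-data"] tmpdir;
   f tmpdir

 let test_setup test_ctxt =
   in_pristine_data test_ctxt
     (fun dir -> assert_bool "data dir exists" (Sys.is_directory dir))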

During the redesign, I decided to reduce the number of tests by merging some of them. This should have no big impact on the running time, but it means this is not a pure 1:1 comparison with OUnit v1, although it still tests exactly the same things. This explains the loss of ~50 tests, which have in fact been merged into other tests.

The overall speedup is 4x compared to OUnit v1 when using processes. However, there is a 12% slowdown compared to OUnit v1 for the sequential runner, and a 15% slowdown when using the OUnit v1 compatibility layer. While this is not a very good score, I hope it is small enough to be outweighed by the huge win of being able to run tests in parallel with processes.

And now the magic!

At this point, if you read the numbers carefully, you will have noticed that there is a 4.5x speedup between the sequential runner and 4 processes for OUnit2. Since we are only actively testing in 4 shards, this looks strange: I don't expect a super-linear speedup from the use of processes alone. I have checked that every test was indeed running and found no solution to this mystery. Right now, I think it is due to the fact that each process runs fewer tests, which lightens the load on the GC (which may not trigger at all). I am not sure about this explanation, and I welcome any bug report showing a problem in the implementation of either the sequential or the processes runner. Still, this is great.

Help still wanted

If you find any bugs with OUnit v2, this is the time to submit a report: OUnit BTS

If you want to try to fix bugs by yourself, please checkout the latest version of OUnit:

 $> darcs get http://forge.ocamlcore.org/anonscm/darcs/ounit/ounit

Patches always welcome.

Wednesday, September 25 2013

OUnit 2.0 progress, September 2013

Continuing last month's progress report on OUnit2. The release is just a few days away; I am testing real-life applications, and the core of the work is already in the VCS.

The basic features:

  • better configuration setup:
    • environment variables
    • command line options
    • configuration files
  • improved output of the tests:
    • allow vim quickfix to jump to the place in the log file where the error happened
    • output an HTML report
    • output a JUnit report
    • systematic logging (verbose always on), with the log output to a file
  • choose how to run your tests:
    • run tests in parallel using processes (auto-detects the number of CPUs and runs as many worker processes)
    • run tests concurrently using threads
    • use the old sequential runner
  • choose which tests to run with a chooser that can do smart selection of tests:
    • simple: just run tests in sequence
    • failfirst: run the tests that failed in the last run first, and skip the previously successful ones if the failures persist
  • some refactoring:
    • bracket: brackets now register their tear down in the test context, which makes them easier to use
    • remove all useless functions from the OUnit2 interface
  • non-fatal sections: allow failures inside a non-fatal section without immediately aborting the whole test
  • allow using OUnit1 tests inside OUnit2 (to smooth the transition)
  • a timer that makes tests fail if they take too long, only when using the processes runner (I was not able to do it cleanly with the threads and sequential runners)
  • allow parametrizing filenames, so that you can use OUNIT_OUTPUT_FILE=ounit-$(suite_name).log and have $(suite_name) replaced by the test suite name
  • create locks to avoid concurrent access to the same resources within a single process or across the whole application (typically to avoid one test doing a chdir while another thread is doing a chdir elsewhere)
  • create an in_testdata_dir function to locate test data, if any

Still remaining to do, but quite straightforward:

  • sys admin (website, release process)
  • update the whole documentation

Some things that I decided not to do for OUnit 2.0 release:

  • introduce a 'cached' state to avoid rerunning a test if you can programmatically determine that the result will be the same.

The main development is now done, but before releasing I decided to test it first on a real-scale application. The first big migration to OUnit2 will be the OASIS test suite. This is a pretty big test suite (100+ tests) that takes a fair amount of time to run. I hope that during the next week I will be able to port the whole test suite and come back with some timing results.

You can follow my progress on porting OASIS to OUnit 2.0 on GitHub.

Help wanted

If you have a long-standing issue with OUnit, this is the time to submit a bug: OUnit BTS

If you want to try the dev version of OUnit:

 $> darcs get http://forge.ocamlcore.org/anonscm/darcs/ounit/ounit

Patches always welcome.

You can read the documentation of the devel version on the website.

Friday, September 6 2013

OUnit 2.0 progress, August 2013

After a long pause, I have resumed my work on OUnit2. It is going quite well.

The basic features:

  • better configuration setup:
    • environment variables
    • command line options
    • configuration files
  • systematic logging (verbose always on), with the log output to a file
  • allow vim quickfix to jump to the place in the log file where the error happened
  • output an HTML report
  • output a JUnit report
  • choose how to run your tests:
    • run tests in parallel using processes (auto-detects the number of CPUs and runs as many worker processes)
    • run tests concurrently using threads
    • use the old sequential runner
  • refactoring of the bracket, now easier to use
  • refactoring of the OUnit2 interface (removal of all useless functions)
  • non-fatal sections: allow failures inside a non-fatal section without immediately aborting the whole test
  • allow using OUnit1 tests inside OUnit2 (to smooth the transition)

I still need to do the following:

  • a test chooser that does smart selection of tests:
    • run the ones that failed in the last run first
    • before re-running the ones that were OK, check that all previously failing tests now pass; otherwise skip the already-passing tests.
  • a timer that makes tests fail if they take too long
  • allow parametrizing the output filename, so that you can use OUNIT_OUTPUT_FILE=ounit-$(name).log and have $(name) replaced by the test suite name
  • create locks to avoid concurrent access to the same resources within a single process or across the whole application (typically to avoid one test doing a chdir while another thread is doing a chdir elsewhere)
  • better logging when using multiple workers
  • add more tests for the new runners
  • introduce a 'cached' state to avoid rerunning a test if you can programmatically determine that the result will be the same.
  • create an in_testdata_dir function to locate test data, if any
  • sys admin (website, release process)
  • update the whole documentation

There is still a lot of work, but the current results are already quite good. The speed improvement of the processes runner is a good way to shorten test times (HINT: testers needed!).

Focus on: the new bracket

In OUnit 1, a bracket was very functional:

 bracket
    (fun () -> "foo")  (* setup *)
    (fun foo -> ())    (* the test itself *)
    (fun foo -> ())    (* tear down *)

So for a common bracket, like bracket_tmpfile:

   bracket_tmpfile
       (fun (fn, chn) ->
          (* Do something with fn and chn. *)
          ...)

The problem is that if you were using 2 or 3 temporary files, the level of indentation got high. I have decided to switch to a more imperative approach, registering the tear down function inside the test context:

   let (fn1, chn1) = bracket_tmpfile ctxt in
   let (fn2, chn2) = bracket_tmpfile ctxt in
      ....

This is shorter and clearer (albeit less functional).

Focus on: non-fatal sections

Sometimes you want to verify a set of properties, but to have a clear vision of what is going wrong, you need to do more than one assert.

In OUnit1, you can do:

 assert_equal exp1 v1;
 assert_equal exp2 v2

But if exp1 <> v1, you quit immediately and you'll never know about exp2 and v2.

In OUnit2, you can do:

 non_fatal ctxt (fun ctxt -> assert_equal exp1 v1);
 non_fatal ctxt (fun ctxt -> assert_equal exp2 v2)

In this new version, both equalities are tested, and the result of the test is the worst failure you get (or success if both succeed).

Help wanted

If you have a long-standing issue with OUnit, this is the time to submit a bug: OUnit BTS

If you want to try the dev version of OUnit:

 $> darcs get http://forge.ocamlcore.org/anonscm/darcs/ounit/ounit

Patches always welcome.

Special thanks to Thomas Wickham, who entirely wrote OUnitRunnerThreads and kickstarted the processes runner.

Saturday, August 17 2013

Augeas tips and tricks for Puppet users: edit a complex node.

I have a recurring problem when trying to use augeas on a complex node: editing a specific entry in a list which is uniquely defined by many attributes.

You probably don't know that you have this problem, but it is easy to spot in your augeas/puppet resources.

Here are some symptoms of this problem:

  • you need to use onlyif with multiple constraints on the selection
  • you use last() and last() + 1
 
 augeas {
   "setup-shorewall":
     changes =>
       [
         "set entry[last() + 1]/source 'all'",
         "set entry[last()]/dest 'all'",
         "set entry[last()]/policy 'REJECT'",
         "set entry[last()]/log_level 'info'",
       ],
     onlyif => "match entry[source = 'all'][dest = 'all'][policy = 'REJECT'] size == 0";
 }

For a long time, I thought it was the only solution. But last week, I read again the documentation and found another solution.

My main concerns are the onlyif and last() parts; they don't look clean to me. The problem is that I cannot define the entry all at once, and if I match on a value that will only be set later, the node cannot be targeted in the meantime.

The clean way to do this is to first define the attribute you use to target the node. Typically, in augeas changes:

 set spec[user = '$name']/user '$name'

This way if the node doesn't exist it is created and you can then use it directly:

 set spec[user = '$name']/host_group/host 'ALL'
 set spec[user = '$name']/host_group/command1 'ALL'
 set spec[user = '$name']/host_group/command1/tag 'PASSWD'

But sometimes it is not possible to set the attribute directly -- typically when you need to use multiple attributes. The solution in this case is to use defnode:

 defnode target entry[#comment = 'puppet: <%= name %>']/ "<%= name %>"
 set $target/action '<%= action %>'
 set $target/source '<%= source %>'
 set $target/#comment 'puppet: <%= name %>'
 clear $target

The big trick here is that defnode needs a value, but most of the time you cannot set a value for the node -- because it has none. To solve this, you set a value with defnode, proceed with your changes, and clear the node's value at the end.

This recent discovery has simplified a lot of the augeas changes I use.

Feel free to leave a comment about your personal techniques for dealing with augeas and puppet.

Friday, August 16 2013

OASIS website updated

The OASIS website had not been updated in a while, so I decided to take a shot at making it more up to date. This blog post is about the pipeline I have put in place to automatically update the website. It is the first end-to-end 'continuous deployment' project I have achieved.

Among the user visible changes:

  • an invitation to circle the OASIS G+ page, which is now the official channel for small updates about OASIS.
  • an invitation to fork the project on GitHub, since it is now the official repository for OASIS.
  • some links to the documentation for the bleeding-edge version of the OASIS manual and API.

The OASIS website repository is also on GitHub. Feel free to fork it and send me pull requests if you see any mistakes.

The website is still using a lot of markdown processed by pandoc. But there are some new technical features:

  • no more index.php; we use a templating system to point to the latest version
  • we use a Jenkins job to generate the website daily, or after any update of OASIS itself.

Since I have started using Python quite a lot, I decided to use it for this project. It has a lot of nice libraries, and it helped me get something done quickly for the website (and it provides plenty of ideas for equivalent tools in OCaml).

The daily generation: Jenkins

I have a Jenkins instance running, so I decided to use it to compile the new website, with updated documentation and links, once a day. This Jenkins instance also monitors changes to the OASIS source code, so I can do something even more precise: regenerate the website after every OASIS change.

I also use the Jenkins instance to generate a documentation tarball for the OASIS manual and API. This makes it possible to display the latest manual and API documentation. This way I can quickly browse the documentation and spot errors early.

Another good point about Jenkins is that it can store SSH credentials. So I created a build user with its own SSH key on the OCaml Forge, and I use it to publish the website at the end of the build.

Right now Jenkins does the following:

  • trigger a build of the OASIS website:
    • every day (cron)
    • when a push to the OASIS website repository is detected
    • when a successful build of OASIS is achieved
  • get the documentation artifacts from the latest successful build of OASIS
  • build the website
  • publish it

Data gathering

To build the website I need some data:

  • documentation tarballs containing the API (HTML from OCamldoc) and the manual (Markdown)
  • the list of published OASIS versions
  • links to each tarball (documentation and source)

The OCaml Forge has a nice SOAP API, but one needs to be logged in to access it. This is unfortunate, because I just want to access public data. The only way I found to gather my data was to scrape the OCaml Forge.

Python has a very nice scraping library for that: beautifulsoup.

I use beautifulsoup to parse the HTML downloaded from the Files tab of the OASIS project and extract all the relevant information. I use curl to download the documentation tarballs, for released versions and for the latest development version.

Code

Template

Python also has a very nice template-processing library: mako.

I feed the data I have gathered to mako and process all the .tmpl files in the repository to create the matching files.

Among the things I have transformed into templates:

  • index.php has been transformed into index.mkd.tmpl; it used to be a hackish PHP script scraping the RSS feed of updates, and it is now a clean template.
  • robots.txt.tmpl, see the following section for an explanation
  • documentation.mkd.tmpl, in order to list all versions of the documentation.

Fix documentation and indexing

One of the problems of providing access to all versions of the documentation is that people can end up reading an old version. In order to prevent that, I use two different techniques:

  • prevent search engines from indexing old versions.
  • warn the user that they are reading an old version.

To prevent search engines from indexing old files, I have created a robots.txt that lists the URLs of all the old documentation. This should be enough to keep search engines away from the wrong pages.
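
The generated file is plain robots.txt syntax; a hypothetical excerpt, with illustrative paths:

 User-agent: *
 Disallow: /oasis/documentation/0.2.0/
 Disallow: /oasis/documentation/0.3.0/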

To warn the user that they are reading the wrong version, I have added a "you are not viewing the latest version" box. This part was tricky, but beautifulsoup v4 provides a nice API to edit HTML in place. I just had to find the right CSS selector to define the position where I wanted to insert my warning box.

Code

Publish

The ultimate goal of the project is 'continuous deployment'. Rather than picking which version to deploy and doing the process by hand, I let Jenkins deploy every version.

Deploying the website used to be a simple rsync, but for this project I decided to use a fancier method. I spent a few hours deciding which framework was best for automatic deployment. There are two main frameworks around: capistrano (Ruby) and fabric (Python).

Fabric is written in Python, so I picked it because it was a good fit for the project. Fabric's biggest feature is that it is an SSH wrapper.

The fabric script is quite simple; to understand it, you just have to know that local runs a local command and run runs a command on the target host.

The fabfile.py script does the following:

  • create a local tarball using the OASIS website html/ directory
  • upload the tarball to ssh.ocamlcore.org
  • uncompress it and replace the htdocs/ directory of the oasis project
  • remove the oldest tarballs; we keep a few versions to be able to perform a rollback.

Given this new process, the website is automatically updated within 3 minutes of a successful build of OASIS.

Tuesday, July 23 2013

Migrating a puppet maintained computer from Squeeze to Wheezy

This blog post is a little recipe to do a Debian migration for a node using Puppet and some other good practices.

We run all the following commands as root; this is one of the exceptional situations where you should have a real root session (login through the console, or su -).

I tend to avoid using the X server while doing an upgrade, so my 'best' setup is a laptop to take notes and check things on the internet, and a session on the computer to upgrade (ssh + su -, or login as root on the console). In both cases, I use screen during the upgrade so that I can handle disconnections.

Create or update your admin-syscheck script

First of all, a good practice is to have a script that runs various tests on the system and checks that everything is OK. This is not only for upgrades but in general; in the case of an upgrade, though, it can be particularly useful. I call this script admin-syscheck. It is a simple bash script.

This script checks various aspects of the system and serves as my external checker for the most advanced knowledge I have gathered about setting up a service. For example, I know that having *.bak or *.dpkg-dist files in /etc/ means that something needs to be merged and a file should be deleted. Another example is setting up the right aliases for 127.0.0.1 and ::1 (which you can differentiate using getent ahostsv4 and getent ahostsv6).

I have packaged this script and distribute it using a specific apt-get repository; you can also just distribute it using puppet. I recommend running it daily, to track changes (e.g. after an apt-get dist-upgrade) and to check that the setup is aligned with my most up-to-date knowledge about setting up a service (i.e. it is my external checker).

In our case we are interested in checking for the presence of old and new configuration files, before and after upgrading. Here is the relevant section of my script:

if ! $IS_ROOT; then
  warning "Not trying to detect dpkg leftover file in /etc/."
else
  LEFTOVER_FILES=( $(find /etc/ \
      -name "*.dpkg-dist" -o \
      -name "*.dpkg-old" -o \
      -name "*.ucf-old" -o \
      -name "*.ucf-dist" -o \
      -name "*.bak") )
  for i in "${LEFTOVER_FILES[@]}"; do
    if [ "$i" = "/etc/hosts.deny.purge.bak" ]; then
      continue
    fi
    if $fix; then
      BASE=${i%.*}
      cond_exec vim -d $BASE $i
      read -p "Delete $i (y/N)? " ans
      if [ "$ans" = "y" ]; then
        cond_exec rm $i
      fi
    else
      report_error "dpkg leftover file: '$i'."
    fi
  done
fi

(cond_exec allows doing a dry run; you can just remove it. A sketch of such a helper is below.)
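
cond_exec itself is not shown above; here is a minimal sketch of such a helper, assuming a $dry_run variable set elsewhere in the script:

 # Hypothetical helper: print the command in dry-run mode, run it otherwise.
 cond_exec () {
   if $dry_run; then
     echo "would run: $*"
   else
     "$@"
   fi
 }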

Setting $fix to true will spawn a vim -d old new command, where you can edit and then delete the leftover file. This is extremely handy.

Upgrading to Wheezy

I strongly recommend reading the upgrade chapter of the release notes first. It gives you a more complete overview of the upgrade procedure. I just go through the basic steps here.

1. Update everything on the system:

$> apt-get update 
$> apt-get dist-upgrade

2. Check that the current configuration apply cleanly:

$> puppet agent --test

3. Run admin-syscheck:

$> admin-syscheck

And fix all the problems.

4. Disable puppet:

I use a cronjob to run puppet, so I just comment out the line for the job (/etc/cron.d/puppet-custom). You should disable puppet by stopping the daemon and preventing it from running, by editing /etc/default/puppet and setting START=no.

5. Fix your sources and pinning:

Change squeeze to wheezy in /etc/apt/sources.list and remove useless files in /etc/apt/sources.list.d/. (You may keep certain sources that refer to stable, like google-chrome.list.)

$> rm /etc/apt/sources.list.d/* # Check if this ok to do this with your system.

I actually tend to fully purge /etc/apt/sources.list except for the main line (removing backports and security is fine for a short time). The first run of puppet after the upgrade will reset this file anyway.

$> rm /etc/apt/preferences.d/* # (at least the ones that do pin some version)

You can also remove all pinning from /etc/apt/preferences.

6. Now you start the real upgrade:

$> apt-get update 
$> apt-get dist-upgrade

7. During the upgrade, you will be asked whether you want to keep old configuration files or install the newer ones from the maintainer.

I have always wondered what to answer here. After a few major upgrades, here is the answer: always install the configuration files from the maintainer, as long as the service has no ultra-specific settings that could break during the upgrade.

The only file that I should not upgrade on my system is /etc/sudoers. In this very specific case, you need to make sure before the upgrade that the old and new configurations can coexist. In the squeeze-to-wheezy case, I just set up a few extra augeas rules to set secure_path before the upgrade, and it was fine. This is typically the kind of situation where you are thankful to have a real root session.

8. The upgrade can be long and require various fixes, removing/re-adding packages to circumvent problems. At the end you will have a set of *.dpkg-old and *.ucf-old files (and some *.dpkg-dist and *.ucf-dist). The *-old files are your old versions of the files, while the corresponding live files are the maintainer versions. The *-dist files are the maintainer versions, while the corresponding live files are your old versions.

Starting from here you have 2 options:

  • This is one of the first computer you upgrade, go to 'first upgrade'.
  • Your puppet configuration for wheezy is already bullet proof, go to 'further upgrade'.

First upgrade

This is the tricky part, you'll have to spend a little time on it:

1. Go over all *.{ucf,dpkg}-{old,dist} files and merge them with the corresponding configuration files. Use admin-syscheck with fix=true.

2. Make a copy of your /etc directory into /etc.new:

$> cp -rp /etc /etc.new

3. Run puppet again:

$> puppet agent --test

4. Disable the automatic run of puppet again, if the previous command re-enabled it.

5. Make a diff between /etc and /etc.new. Since you have just run puppet, you can tell what has changed and what should not have changed.

$> diff -Nurd /etc.new /etc

Every time you find files that don't match your expectations for the upgrade, change the corresponding puppet manifest to produce what you expect.

For example, for a file resource:

if ($lsbdistcodename == 'wheezy') {
  file {
    "/etc/foo":
      source => "puppet:///files/foo.wheezy"
  }
} else {
  file {
    "/etc/foo":
      source => "puppet:///files/foo.squeeze"
  }
}

People working with augeas and puppet will appreciate that they probably have zero changes to make for this to work (since augeas only does a few replacements in configuration files).

6. Once you are happy with the changes, copy /etc.new back to /etc and go back to step 3, until the difference is almost zero.

7. Re-enable automatic run of puppet.

Do this procedure for at least one computer of each category you have (e.g. Desktop and Server nodes). Once you are fully confident that your new puppet setup works, you will be able to use 'further upgrade' for the other nodes.

Further upgrade

This one is super easy compared to a first upgrade:

1. Re-enable puppet and have it run at least once:

$> puppet agent --test

2. Merge *.{dpkg,ucf}-{dist,old} files with the corresponding files (you can run admin-syscheck with fix=true). This is mostly a sanity check, since you should have already solved most problems with the 'first upgrade' procedure.

That's it.

Enjoy your upgrade to Wheezy with puppet.

Thursday, April 4 2013

Sekred, a password helper for puppet.

Puppet is a nice tool but it has a significant problem with passwords:

  • it is recommended to store puppet manifests (*.pp) and related files in a VCS (e.g. git)
  • it is not recommended to store passwords in a VCS

This leads to complex situations and various workarounds that more or less work:

  • serve passwords from a separate file/DB or do an extlookup on the master (pre-set passwords)
  • store passwords on the server and get them through a generate function (random passwords, but on the master)

Most of these workarounds are complex, don't let you easily share the passwords you have set, and most of the time the passwords are stored somewhere other than the target node.

So I have decided to create my own solution: sekred (LGPL-2.1).

The idea of sekred is to generate the password on the target node and make it available to the user who needs it. The user then just has to ssh into the host to get the password.

Pro:

  • the password is generated and stored on the node
  • no VCS commit of your password
  • no DB storage of your password besides the local filesystem of the host
  • no need to use a common pre-set password for all your hosts; the password is randomly generated for a single host
  • to steal the password you need to crack the host first, but if you already have root access on the host, stealing a randomly generated password is pointless

Cons:

  • the password is stored in clear text
  • the password is only protected by filesystem ACLs

Let's see some concrete examples.

Setting mysql root password

This is a very simple problem: when you first install mysql on Debian Squeeze, the root password is not set. That's bad. Let's set it using sekred and puppet.

node "mysqlserver" {

  package {
    ["mysql-server",
     "mysql-client",
     "sekred"]:
      ensure => installed;
  }

  service {
    "mysqld":
      name => "mysql",
      ensure => running,
      hasrestart => true,
      hasstatus => true;
  }

  exec {
    "mysql-set-root-password":
      command => "mysqladmin -u root password $(sekred get root@mysql)",
      onlyif => "mysql -u root",  # Trigger only if password-less root account.
      require => [Service["mysqld"], Package["mysql-client", "sekred"]];
  }
}

And to get the root password for mysql, just log in to the node "mysqlserver":

$> sekred get root@mysql
Cie8ieza

Setting password for SSH-only user

This example is quite typical of the broken fully-automated scenario with passwords:

  • you set up a remote host only accessible through SSH
  • you create a user and set their SSH public key to authorize access
  • your user cannot access their account, because SSH prevents logins to password-less accounts!

In other words, you need to log in to the node, set a password for the user and mail it back to them... That somewhat defeats the "automation" provided by puppet.

Here is what I do with sekred:

define user::template () {
  user {
    $name:
      ensure => present,
      membership => minimum,
      shell => "/bin/bash",
      ....
  }
  include "ssh_keys::$name"

  # Check password less account and set one, if required.
  $user_passwd="$(sekred get --uid $name $name@login)"
  exec {
    "user-set-default-password-$name":
      command => "echo $name:$user_passwd | chpasswd",
      onlyif => "test \"$(getent shadow $name | cut -f2 -d:)\" = \"!\"",
      require => [User[$name], Package["sekred"]];
  }
}

So the command "test \"$(getent shadow $name | cut -f2 -d:)\" = \"!\"" tests for a password-less account. If that is the case, we create a password using sekred get --uid $name $name@login and set it through chpasswd.

Note that $user_passwd uses a shell expansion that is only evaluated when the command runs, on the host. The --uid flag of sekred assigns ownership of the password to the given user id.

So now the user (foo) can log in to the node and retrieve their password using sekred get foo@login.

Try it!

Sekred was a very short project, but I am pretty happy with it. It solves a long-standing problem and covers an extra mile of automation when setting up new nodes.

The homepage is here and you can download it here. Feel free to send patches, bugs and feature requests (here, login required).

Saturday, March 2 2013

Always test your HD first

I just received a WD 500GB Blue to replace the hard drive of my wife's computer. First thing after unpacking: start testing. And I was right to do it... there are bad blocks on it.

The story: I tend to do proactive replacement of hard drives, to avoid losing too much data or being in a hurry when the current one fails. There are two ways to monitor a hard drive: logcheck and smartmontools.

Logcheck is pretty straightforward to install. It scans your logs every two hours and sends you a report of what is happening. It is not a very precise tool and you have to tune it a little bit so that it only sends you what is relevant (install extra rules to ignore what you know is not of interest). Whenever you start to see log entries like these:

Jan 24 18:16:08 foo kernel: [ 1965.343980] ata5.00: exception Emask 0x50 SAct 0x39 SErr 0x800 action 0x6 frozen
Jan 24 18:16:08 foo kernel: [ 1965.343991] ata5.00: irq_stat 0x08000000, interface fatal error
Jan 24 18:16:08 foo kernel: [ 1965.344001] ata5: SError: { HostInt }
Jan 24 18:16:08 foo kernel: [ 1965.344036] ata5.00: failed command: READ FPDMA QUEUED
Jan 24 18:16:08 foo kernel: [ 1965.344055] ata5.00: cmd 60/08:00:18:31:44/00:00:1a:00:00/40 tag 0 ncq 4096 in
Jan 24 18:16:08 foo kernel: [ 1965.344059]          res 40/00:2c:e6:c2:fd/00:00:26:00:00/40 Emask 0x50 (ATA bus error)
Jan 24 18:16:08 foo kernel: [ 1965.344071] ata5.00: status: { DRDY }
Jan 24 18:16:08 foo kernel: [ 1965.344081] ata5.00: failed command: WRITE FPDMA QUEUED

It is a good time to think about changing your hard drive -- but it may already be too late.

Smartmontools (aka smartd) is a dedicated tool to monitor hard drives, and it does a good job. I think it is not installed by default in Debian, but it should be. It scans your hard drives for SMART capabilities and monitors the health of the HD using the drive's internal counters. In the case of bad blocks, you will start to see entries like these:

Feb 17 14:05:17 bar smartd[1268]: Device: /dev/sda [SAT], 1 Currently unreadable (pending) sectors
Feb 17 14:35:17 bar smartd[1268]: Device: /dev/sda [SAT], 1 Currently unreadable (pending) sectors

Obviously when you see this log, it is also a good time to change your hard drive.

In the case of my wife's HD, I only got reports from logcheck. It means that the error is not that important (a transient failure: something is wrong, but the HD can cope with it). But I still decided to get a new drive for my wife.

Whenever I receive a new drive, the first thing I do is check it for errors. You can do that using the program badblocks in write mode. It takes ages to test (count on up to 1 day for 1TB over USB), but at the end you know that you have a good candidate -- one where it is worth installing your data.

You just have to follow this procedure:

  1. dmesg | grep sd
  2. in the output of step 1, identify which drive is the one you want to test
  3. cfdisk /dev/sdX, sdX being the drive you want to test
  4. check that what you see in cfdisk is what you expect to test: the right name and capacity for the drive
  5. sudo badblocks -wvs /dev/sdX
  6. run sudo tail -f /var/log/syslog in parallel, just in case
  7. wait

If errors appear in /var/log/syslog, you know something bad is happening. Even if you only have a single failing block, don't think it is OK. It is NOT OK for a HD to start its life with failing blocks. In this case, repack the drive and send it back for replacement ASAP.

In the case of my new hard drive, here is the smartd mail:

The following warning/error was logged by the smartd daemon:

Device: /dev/sda [SAT], 15 Currently unreadable (pending) sectors

syslog entries:

Mar  2 20:52:57 foo kernel: [ 8317.419715] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
Mar  2 20:52:57 foo kernel: [ 8317.419724] ata4.00: irq_stat 0x40000008
Mar  2 20:52:57 foo kernel: [ 8317.419732] ata4.00: failed command: READ FPDMA QUEUED
Mar  2 20:52:57 foo kernel: [ 8317.419747] ata4.00: cmd 60/80:00:00:af:08/00:00:28:00:00/40 tag 0 ncq 65536 in
Mar  2 20:52:57 foo kernel: [ 8317.419750]          res 41/40:00:65:af:08/00:00:28:00:00/40 Emask 0x409 (media error) <F>
Mar  2 20:52:57 foo kernel: [ 8317.419758] ata4.00: status: { DRDY ERR }
Mar  2 20:52:57 foo kernel: [ 8317.419764] ata4.00: error: { UNC }
Mar  2 20:52:57 foo kernel: [ 8317.423959] ata4.00: configured for UDMA/133

I am not blaming any particular brand (like Western Digital); all computer parts I have ever bought go through the same procedure, and it is a known fact that computer parts have a non-zero chance of being DOA (dead on arrival) or of failing after a few weeks. But as a consumer you should be aware of that and take action, to avoid spending 10 hours configuring your computer only to see it fail after a week... The time spent testing is a win in the long term.

Wednesday, February 27 2013

planet.ocaml.org spring cleaning

Hi planet.ocaml.org.

Just a quick post to thank Marek Kubica for his help with the planet.ocaml.org spring cleaning.

Here are the feeds that have been removed:

  • Red Lizard Software

http://redlizards.com/blog/feed/?tag=ocaml

  • Alp Mestan

http://blog.mestan.fr/feed/?cat=16

  • Arlen Cuss

http://www.sairyx.org/tag/ocaml/feed/

  • Daniel Patterson

http://blog.dbpatterson.com/rss

  • Victor Nicollet, cannot find any feeds on the new blog

http://www.nicollet.net/toroidal/ocaml/feed/

  • OCaml Hackers

http://ocamlhackers.ning.com/profiles/blog/feed?tag=ocaml&xn_auth=no

  • Mauricio Fernandez, offline

http://eigenclass.org/R2/feeds/rss2/all

  • Christopher Conway, no OCaml-related posts since 2008

http://procrastiblog.com/category/ocaml/feed

  • Liquidsoap, cannot find the HTML blog on the website

http://savonet.sourceforge.net/liquidsoap.rss

Here is the feed that has been added:

  • Marc Simpson

http://newblog.0branch.com/rss.xml

Here are the feeds that have been updated:

  • Jane Street Capital now points to

https://ocaml.janestreet.com/?q=rss.xml

  • Jamie Brandon now points to

http://scattered-thoughts.net/atom.xml

  • Mihamina Rakotomandimby now points to

http://www.rktmb.org/feed/tag/ocaml/atom

  • Dario Teixeira now points to

http://nleyten.com/feed/tag/ocaml/atom

  • Erik de Castro Lopo now points to the feed below, but FP-Sydney has been removed

http://www.mega-nerd.com/erikd/Blog/index.rss20

  • Y-Node now points to

http://y-node.com/blog/feeds/latest/

If you want us to add your blog back, please follow the howto to add your feed to the planet. We didn't remove any feed out of ill will; it was just a way to get rid of a lot of 404s.

And don't forget, planet.ocamlcore.org is now served by planet.ocaml.org! Update your feed reader.

Saturday, February 23 2013

OUnit 2.0 progress

I have recently started to work on OUnit 2.0. The point of this new version is to improve OUnit's speed and its compatibility with third-party systems:

  • better configuration setup (through environment variables, command-line options and configuration files)
  • systematic logging (verbose always on), with the log written to a file
  • let vim's quickfix jump to the place in the log file where an error happened
  • HTML report output
  • JUnit report output
  • running tests in parallel
  • automatic test selection (run the tests that failed in the last run first, before re-running the ones that were OK)

Only the last 3 points remain to be completed.

Here is a screenshot of Jenkins reading the JUnit output of an OUnit test run: Jenkins, OUnit test results

So if you have a long-standing issue with OUnit, now is the time to submit a bug to the OUnit BTS.

Friday, October 19 2012

Configuration management: Puppet is worth it.

Replying to an old blog post by Martin F. Krafft, Configuration management, I want to give my point of view.

The problems madduck lists are quite common with Puppet, but I think Puppet is still worth it, mostly because all of these problems can be solved.

Let me give you my opinion on each item of his list:

  • Non-Unix approach to everything (own transport, self-made PKI, non-intuitive configuration language, a faint attempt at versioning (filebucket), and much much more…)

True. I think the approach of Puppet is not really UNIXish, probably on purpose. The biggest issue is the PKI: it breaks frequently, for unclear reasons. The "non-intuitive configuration language" is probably a matter of taste. I find the language strange and not very well designed, but I can cope with it. The "faint attempt at versioning" -- if I understand correctly what it means -- refers to the fact that when Puppet replaces a file, it moves the old one to a bucket. This is not great, but you can set "backup => '.puppet-bak'" and get almost the same behavior as dpkg's ".dpkg-old" files.
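For instance, a minimal sketch of a file resource using that parameter (the path and source are placeholders):

    file { '/etc/motd':
      source => 'puppet:///modules/base/motd',
      # keep the previous version next to the new one,
      # much like dpkg's .dpkg-old files
      backup => '.puppet-bak',
    }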

  • Ruby

False debate. We could argue for hours about Ruby, PHP, Java or whatever pet language people have invented. I am not a fan of Ruby, but it is fine as a general-purpose language, and to my mind it is still a better choice than bash for writing a daemon.

  • Abysmal slowness

False debate.

   info: Caching catalog for centi.....
   info: Applying configuration version '1350597216'
   notice: Finished catalog run in 3.08 seconds

The config of this node is not complex, but 3 s is not bad at all for something that runs every 30 min. If you need sub-second speed for this kind of task, you are probably not looking for this kind of tool. Is 144 s of server time per day (3 s × 48 runs) a big deal?

With a much more complex setup, a run can take 30 s, but at that point I am managing a lot of things with it.

  • Lack of basic functionality (e.g. replace a line of text)

False and true. Augeas lets you replace a single value (even more precise than a whole line). Just have a look at the augeas type. It is pretty nice, and lets you do things like replacing "Defaults env_reset" with "Defaults env_reset, !tty_tickets" in 4 lines of code. So it is not exactly "replace a line of text", but there are other ways to get the same result.
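As an illustration, here is what a minimal augeas resource looks like (shown on sshd_config, whose lens path is simpler than the sudoers one; the resource title is made up):

    augeas { 'sshd-no-root-login':
      context => '/files/etc/ssh/sshd_config',
      # change one value, leave the rest of the file untouched
      changes => 'set PermitRootLogin no',
    }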

  • Host management and configuration programming intertwined, lack of a high-level approach to defining functionality

False. If you organize your code with manifests/site.pp and manifests/classes/*.pp, there is a clear separation between the two. On top of that, you can use inheritance and define to build specific high-level features.
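A minimal sketch of that layout (file names and node name are hypothetical):

    # manifests/classes/motd.pp -- reusable functionality
    class motd {
      file { '/etc/motd':
        content => "Managed by Puppet.\n",
      }
    }

    # manifests/site.pp -- host management only
    node 'web01.example.org' {
      include motd
    }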

  • Horrific error messages

False-ish. Hey, at least there are error messages ;-) Granted, most of the errors related to the configuration language are useless (at least as cryptic as a C++ error message). But that is par for the course for error messages in programming languages.

  • Catastrophic upgrade paths

True. Installing multiple versions side by side is horrible, and you have to fix a lot of things to keep a sane overall configuration.

  • Lack of IPv6 support

I am not sure I understand this point; I use Puppet over IPv6...

To whoever is considering Puppet: it is worth a try. It is a nice system that really helps maintain a consistent configuration across nodes.

Friday, October 5 2012

Book review: Puppet 2.7 Cookbook

I recently read this book and it was enlightening -- that is the least I can say. It is organized as a set of recipes addressing the most common problems you run into with Puppet. The whole point is to be practical: it gives you various ways to achieve the same goal, depending on whether you want a quick and dirty hack or a long-term solution, and it achieves this goal pretty well. However, it is not for beginners. Puppet itself is not for beginners, and you should already have written manifests and run into problems to really take advantage of this book.

I read the book in 5 days (only while commuting), and on every page I thought: "OMG, so that is how they do it" or "of course, you should use git + that", then waited impatiently to get back home and test all this stuff. The book is really about technical details and about how to organize yourself to write nice Puppet classes. I had been an "irregular" user of Puppet for 4 years, mainly running it to distribute files to all my computers. For the last 3 weeks I have been applying the recipes found in this book, and it feels like discovering that you own something worth a million euros.

Let's look at some examples:

  • Setting a value in a configuration file:

Before: Replace the whole file.

Now: Use Augeas to change just that value without touching the rest of the file. The book made me discover Augeas, and it rocks!!!

  • VCS repository and deployment:

Before: A lousy darcs repository inside /etc/, with edits made directly on the master.

Now: A shiny git repository in my home directory, with a Makefile to validate the Puppet syntax, simulate a run, and actually apply it to the current node before deploying through git push.

  • Installing exim4:

Before: Nothing because I didn't think it was possible.

Now: Generate a configuration file using a template, install it, run the exim4 update script and restart the service.

  • Installing sshd:

Before: Copy a file and restart the service.

Now: Use Augeas and a template with a loop to change what is needed on each node, depending on which users I want to allow to connect to that node. Use a different SSH port for every vserver sharing the same IP, and distribute an /etc/ssh_config that propagates the port settings for all the vservers... (just being able to do that is worth the price of the book). A sketch of such a template follows the list.

  • Have a problem and try to solve it:

Before: Complain, whine and give up.

Now: Go to Puppet Forge, take inspiration from similar modules, re-read parts of the book, find an idea and apply it.
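As promised above, here is the kind of template the sshd recipe relies on, as a minimal sketch (module path and variable names are assumptions; the per-node user list is flattened with join rather than an explicit loop):

    # modules/ssh/templates/sshd_config.erb
    Port <%= port %>
    PermitRootLogin no
    AllowUsers <%= allowed_users.join(' ') %>

    # and the resource that installs it:
    file { '/etc/ssh/sshd_config':
      content => template('ssh/sshd_config.erb'),
    }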

And so on and so forth. This book made me love Puppet again. I warmly recommend it to you.
