Saturday, August 17 2013

Augeas tips and tricks for Puppet users: editing a complex node

I have a recurring problem when trying to use Augeas on a complex node: editing a specific entry in a list which is uniquely defined by several attributes.

You probably don't know that you have this problem, but it is easy to spot in your augeas/puppet resources.

Here are some symptoms of this problem:

  • you need to use onlyif with multiple constraints on the selection
  • you use last() and last() + 1
 augeas { "shorewall_policy":  # The resource title here is hypothetical.
     changes => [
         "set entry[last() + 1]/source 'all'",
         "set entry[last()]/dest 'all'",
         "set entry[last()]/policy 'REJECT'",
         "set entry[last()]/log_level 'info'",
     ],
     onlyif  => "match entry[source = 'all'][dest = 'all'][policy = 'REJECT'] size == 0";
 }

For a long time, I thought it was the only solution. But last week, I read the documentation again and found another one.

My main concerns are the onlyif and last() parts: they don't look clean to me. The problem is that I cannot define the entry all at once, and if I use a value that will only be set later, the node cannot be targeted in the meantime.

The clean way to do this is to first define the target attribute. Typically, in augeas changes:

 set spec[user = '$name']/user '$name'

This way, if the node doesn't exist it is created, and you can then use it directly:

 set spec[user = '$name']/host_group/host 'ALL'
 set spec[user = '$name']/host_group/command1 'ALL'
 set spec[user = '$name']/host_group/command1/tag 'PASSWD'

But sometimes it is not possible to set the attribute directly -- typically when you need to use multiple attributes. The solution in this case is to use defnode:

 defnode target entry[#comment = 'puppet: <%= name %>']/ "<%= name %>"
 set $target/action '<%= action %>'
 set $target/source '<%= source %>'
 set $target/#comment 'puppet: <%= name %>'
 clear $target

The big trick here is that defnode needs a value, but most of the time you cannot set a value for the node -- because it has none. To solve this, you set a value with defnode, proceed with your changes, and clear the node at the end.

This recent discovery has greatly simplified some of the augeas changes I use.

Feel free to leave a comment about your personal techniques for dealing with augeas and puppet.

Tuesday, July 23 2013

Migrating a puppet maintained computer from Squeeze to Wheezy

This blog post is a little recipe to do a Debian migration for a node using Puppet and some other good practices.

We do all the following commands as root, which is one of the exceptional situations where you should have a real root session (login through the console, or su -).

I tend to avoid using the X server while doing an upgrade, so my 'best' setup is to have a laptop to take notes and check things on the internet, and a session on the computer to upgrade (ssh + su -, or login as root on the console). In both cases, I use screen during the upgrade so that I can handle disconnections.

Create or update your admin-syscheck script

First of all, a good practice is to have a script that runs various tests on the system and checks that everything is OK. This is useful not only for upgrades but in general; in the case of an upgrade, though, it is particularly useful. I call this script admin-syscheck. It is a simple bash script.

This script checks various aspects of the system and serves me as an external worker to check the most advanced knowledge I have gathered about setting up a service. For example, I know that having *.bak or *.dpkg-dist files in /etc/ means that something needs to be merged and a file should be deleted. Another example is setting up the right aliases for 127.0.0.1 and ::1 (which you can differentiate using getent ahostsv4 and getent ahostsv6).

I have packaged this script and distribute it using a specific apt-get repository; you can also distribute it using puppet. I recommend running it daily to track changes (e.g. after an apt-get dist-upgrade) and to check that my setup is aligned with my most up-to-date knowledge about setting up a service (i.e. this is my external worker).

In our case we are interested in checking the presence of old and new configuration files, before and after upgrading. Here is the relevant section of my script:

if ! $IS_ROOT; then
  warning "Not trying to detect dpkg leftover files in /etc/."
else
  LEFTOVER_FILES=( $(find /etc/ \
      -name "*.dpkg-dist" -o \
      -name "*.dpkg-old" -o \
      -name "*.ucf-old" -o \
      -name "*.ucf-dist" -o \
      -name "*.bak") )
  for i in "${LEFTOVER_FILES[@]}"; do
    if [ "$i" = "/etc/hosts.deny.purge.bak" ]; then
      continue  # Known false positive on this system, skip it.
    fi
    if $fix; then
      BASE="${i%.*}"  # Corresponding configuration file, without the suffix.
      cond_exec vim -d "$BASE" "$i"
      read -p "Delete $i (y/N)? " ans
      if [ "$ans" = "y" ]; then
        cond_exec rm "$i"
      fi
    else
      report_error "dpkg leftover file: '$i'."
    fi
  done
fi

(cond_exec allows a dry run; you can just remove it.)

Setting $fix to true will spawn a vim -d old new command where you can merge the changes and then delete the leftover file. This is extremely handy.
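For illustration, the pairing between a leftover file and its base configuration file can be sketched like this (the path is hypothetical):

```shell
# Strip the dpkg leftover suffix to find the configuration file to merge into.
leftover="/etc/ssh/sshd_config.dpkg-dist"
base="${leftover%.dpkg-dist}"
echo "merge $leftover into $base"
# → merge /etc/ssh/sshd_config.dpkg-dist into /etc/ssh/sshd_config
```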

Upgrading to Wheezy

I strongly recommend reading the upgrade chapter of the release notes first. It gives a more complete overview of the upgrade procedure; I only go through the basic steps here.

1. Update everything on the system:

$> apt-get update 
$> apt-get dist-upgrade

2. Check that the current configuration applies cleanly:

$> puppet agent --test

3. Run admin-syscheck:

$> admin-syscheck

And fix all the problems.

4. Disable puppet:

I use a cronjob to run puppet, so I just comment out the line for the job (/etc/cron.d/puppet-custom). Otherwise, you should disable puppet by stopping the daemon and preventing it from running: edit /etc/default/puppet and set START=no.
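As a sketch, the /etc/default/puppet edit is a one-line substitution; here it runs on a temporary copy instead of the real file:

```shell
# Flip START=yes to START=no, as you would in /etc/default/puppet.
tmp=$(mktemp)
echo 'START=yes' > "$tmp"
sed -i 's/^START=yes$/START=no/' "$tmp"
cat "$tmp"   # START=no
rm "$tmp"
```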

5. Fix your sources and pinning:

Change squeeze to wheezy in /etc/apt/sources.list and remove useless files in /etc/apt/sources.list.d/. (You may keep certain sources that refer to stable, like google-chrome.list.)

$> rm /etc/apt/sources.list.d/* # Check that it is OK to do this on your system.
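The squeeze-to-wheezy edit itself is a simple substitution; a sketch on a sample file rather than the real /etc/apt/sources.list:

```shell
# Rewrite every occurrence of squeeze to wheezy in a sample sources file.
tmp=$(mktemp)
echo 'deb http://ftp.debian.org/debian squeeze main' > "$tmp"
sed -i 's/squeeze/wheezy/g' "$tmp"
cat "$tmp"   # deb http://ftp.debian.org/debian wheezy main
rm "$tmp"
```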

I also tend to fully purge /etc/apt/sources.list except for the main line (removing backports and security is fine for a short time). The first run of puppet after the upgrade will reset this file anyway.

$> rm /etc/apt/preferences.d/* # (at least the ones that do pin some version)

You can also remove all pinning from /etc/apt/preferences.

6. Now you can start the real upgrade:

$> apt-get update 
$> apt-get dist-upgrade

7. During the upgrade you will be asked whether you want to keep old configuration files or install the newer ones from the maintainer.

I have always wondered what to answer here. After a few major upgrades, here is the answer: always install the configuration files from the maintainer, unless the service has ultra-specific settings that could break during the upgrade.

The only file I should not upgrade on my system is /etc/sudoers. In this very specific case, you need to make sure before the upgrade that the old and new configurations can coexist. In the squeeze-to-wheezy case, I just set up a few extra augeas rules to set the secure_path before the upgrade and it was fine. This is typically the kind of situation where you are thankful to have a real root session.

8. The upgrade can be long and require various fixes (removing/re-adding packages to circumvent problems). At the end you will have a set of *.dpkg-old and *.ucf-old files (and some *.dpkg-dist and *.ucf-dist). The *-old files are your old versions of the files, and the corresponding files without the suffix are the maintainer versions. The *-dist files are the maintainer versions, and the corresponding files without the suffix are your old versions.

Starting from here you have 2 options:

  • If this is one of the first computers you upgrade, go to 'first upgrade'.
  • If your puppet configuration for wheezy is already bulletproof, go to 'further upgrade'.

First upgrade

This is the tricky part; you'll have to spend a little time on it:

1. Go over all *.{ucf,dpkg}-{old,dist} files and merge them with the corresponding configuration files. Use admin-syscheck with fix=true.

2. Make a copy of your /etc directory into /etc.new:

$> cp -rp /etc /etc.new

3. Run puppet again:

$> puppet agent --test

4. Disable the automatic run of puppet again, if the previous command re-enabled it.

5. Make a diff between /etc and /etc.new. Since you have done a run of puppet, you know what has changed and what should not have changed.

$> diff -Nurd /etc.new /etc

Every time you find files that don't match your expectations for the upgrade with puppet, change the corresponding puppet manifest until you get what you expect.

For example, if the difference comes from a file distributed by puppet:

if ($lsbdistcodename == 'wheezy') {
  file { "/etc/foo.conf":  # The path is hypothetical.
    source => "puppet:///files/foo.wheezy",
  }
} else {
  file { "/etc/foo.conf":
    source => "puppet:///files/foo.squeeze",
  }
}
People working with augeas and puppet will appreciate the fact that they probably have zero changes to make for this to work (since augeas only does a few replacements in configuration files).

6. Once you are happy with the changes, copy /etc.new back to /etc and go back to step 3, until the difference is almost zero.

7. Re-enable automatic run of puppet.

Do this procedure for at least one computer of each category you have (e.g. Desktop and Server nodes). Once you are fully confident that your new puppet setup works, you can use 'further upgrade' for the other nodes.

Further upgrade.

This one is super easy compared to a first upgrade:

1. Re-enable puppet and have it run at least once:

$> puppet agent --test

2. Merge *.{dpkg,ucf}-{dist,old} files with the corresponding files (you can run admin-syscheck with fix=true). This is mostly a sanity check, since you should have already solved most problems with the 'first upgrade' procedure.

That's it.

Enjoy your upgrade to Wheezy with puppet.

Thursday, April 4 2013

Sekred, a password helper for Puppet

Puppet is a nice tool but it has a significant problem with passwords:

  • it is recommended to store puppet manifests (*.pp) and related files in a VCS (e.g. git)
  • it is not recommended to store passwords in a VCS

This leads to complex situations and various workarounds that more or less work:

  • serve passwords from a separate file/DB or do an extlookup on the master (pre-set passwords)
  • store passwords on the server and get them through a generate function (random passwords, but generated on the master)

Most of these workarounds are complex, don't let you easily share the passwords you have set, and most of the time store them somewhere other than the target node.

So I have decided to create my own solution: sekred (LGPL-2.1).

The idea of sekred is to generate the password on the target node and make it available to the user who needs it. The user then just has to ssh into the host and get the password.

Pros:
  • the password is generated and stored on the node
  • no VCS commit of your password
  • no DB storage of your password beside the local filesystem of the host
  • no need to use a common pre-set password for all your hosts; the password is randomly generated for a single host
  • to steal the password you need to crack the host first, but if you have root access on the host, accessing a randomly generated password is pointless

Cons:
  • the password is stored in clear text
  • the password is only protected by the filesystem ACL

Let's see some concrete examples.

Setting the MySQL root password

This is a very simple problem. When you first install mysql on Debian Squeeze, the root password is not set. That's bad. Let's set it using sekred and puppet.

node "mysqlserver" {

  # The package list is reconstructed from the require below.
  package { ["mysql-server", "mysql-client", "sekred"]:
      ensure => installed;
  }

  service { "mysql":
      ensure     => running,
      hasrestart => true,
      hasstatus  => true;
  }

  exec { "set mysql root password":  # The resource title is hypothetical.
      command => "mysqladmin -u root password $(sekred get root@mysql)",
      onlyif  => "mysql -u root",  # Trigger only if password-less root account.
      require => [Service["mysql"], Package["mysql-client", "sekred"]];
  }
}
And to get the mysql root password, just log into the node "mysqlserver":

$> sekred get root@mysql

Setting password for SSH-only user

This example is quite typical of the broken fully-automated scenario with passwords:

  • you set up a remote host only accessible through SSH
  • you create a user and set its SSH public key to authorize access
  • your user cannot access the account, because SSH prevents login to password-less accounts!

In other words, you need to log into the node, set a password for the user and mail it back to him... That defeats a little bit the "automation" provided by puppet.

Here is what I do with sekred:

define user::template () {
  user { $name:
      ensure     => present,
      membership => minimum,
      shell      => "/bin/bash";
  }
  include "ssh_keys::$name"

  # Check for a password-less account and set one, if required.
  $user_passwd = "$(sekred get --uid $name $name@login)"
  exec { "set password for $name":  # The resource title is hypothetical.
      command => "echo $name:$user_passwd | chpasswd",
      onlyif  => "test \"$(getent shadow $name | cut -f2 -d:)\" = \"!\"",
      require => [User[$name], Package["sekred"]];
  }
}
So the command "test \"$(getent shadow $name | cut -f2 -d:)\" = \"!\"" tests for a password-less account. If this is the case, the exec creates a password using sekred get --uid $name $name@login and sets it through chpasswd.
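The shadow-field test can be reproduced on a sample line (the user "foo" and the line itself are made up):

```shell
# Second field of a shadow entry: "!" means no password is set.
line='foo:!:15000:0:99999:7:::'
if [ "$(echo "$line" | cut -f2 -d:)" = "!" ]; then
  echo "password-less account"
fi
# → password-less account
```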

Note that $user_passwd uses a shell expansion that is only evaluated when the command runs, on the host. The --uid flag of sekred assigns ownership of the password to the given user id.

So now the user (foo) can log into the node and retrieve his password using sekred get foo@login.

Try it!

Sekred was a very short project but I am pretty happy with it. It solves a long-standing problem and helps cover an extra mile of automation when setting up new nodes.

The homepage is here and you can download it here. Feel free to send patches, bugs and feature requests (here, login required).

Saturday, February 26 2011

OCaml Debian News

... or don't shoot yourself in the foot.

This is not a big secret: Debian Squeeze has been released. Right after this event, the OCaml Debian task force was back in action -- with Stéphane in the leading role. He has planned the transition to OCaml 3.12.0. We will proceed in two steps: a small transition of a reduced set of packages that can be transitioned before 3.12, and then the big transition.

The reason for the small transition is to avoid having to dep-wait (wait for dependencies) on package uploads done by humans. In a -- not so distant -- past, the OCaml Debian task force members uploaded packages by hand and waited for a full rebuild to go to the next step. This was long and cumbersome. We now use binNMUs: binary-only uploads -- with no source changes -- processed automatically by the release team and its infrastructure. This is far more effective and helps us reduce the duration of the transition...

The small transition is happening now!!! Don't update/upgrade your critical Debian installations with OCaml packages; you'll get a lot of removals if you do so. N.B. these removals are part of the famous {{Enforcing type-safe linking using package dependencies}} paper.

As a side note, I am happy to announce that a full round of new OCaml packages has landed in Debian unstable:

People aware of my current work should notice that all the dependencies of OASIS are now in Debian unstable: ocaml-data-notation, ocamlify, ocaml-expect. This is a hint about the next OCaml Debian package I will upload. You can also have a look at OASIS-enabled packages (all the OASIS dependencies, ocaml-sqlexpr and ocaml-extunix). These packages have been generated using oasis2debian, a tool to convert _oasis into debian/ packaging files.

After these transitions, we will continue with standard upgrade work (e.g. camomile to 0.8.1).

Sylvain Le Gall is an OCaml consultant working for OCamlCore SARL

Wednesday, October 20 2010

Unison on windows tips

The big advantage of Unison on Windows is that it makes it quite easy to synchronize between Windows and Linux. For those who need to work on Windows with the same set of files as on Linux, this is a big plus. Other tools do it as well, but the 2-way sync of Unison is quite nice. When you need to compile a piece of software on both Linux and Windows, you can modify both sides at the same time and (almost) never have problems.

On Windows, the .unison directory and unison.log are located in your %HOMEPATH%, which is the parent directory of the classic Documents folder. In the .unison directory, you will find the .prf files that describe your unison profiles. As usual, default.prf in this directory is the default profile.


The basic tips are:

  • use fastcheck = true in your default.prf
  • disable directory indexing
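For illustration, a minimal default.prf combining these tips might look like this (the roots are hypothetical):

```
# Two replicas: a local directory and a remote one over ssh.
root = c:\work
root = ssh://myserver//home/user/work
# Use file modification times and sizes to detect changes quickly.
fastcheck = true
```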

Disable directory indexing

You can also disable live virus scanning -- if you think it is safe!


Using ssh under Windows is always a challenge. As a matter of fact, this tool doesn't match the Windows context and is not as well integrated as on Linux/BSD.

Putty can help you. It has good support for remote shells, but it is not very easy to set up with Unison. Putty and OpenSSH don't have exactly the same set of options, and Unison relies on some that are not available in Putty. There is a script called ssh2plink.bat that can help you use Putty's plink with Unison. I used it for a while, but it didn't give the expected throughput.

The best option is to use the ssh command provided by Cygwin. In this case you get both good throughput and unison integration. I explain here how to configure your Cygwin ssh to use an SSH key.

You can skip the following steps if you wish to use a password or if you have already set up your ssh to connect to the target computer.

Launch Cygwin's setup.exe and select openssh for installation.

To add a SSH key, launch the cygwin shell:

$ ssh-keygen -t rsa
Generating public/private rsa key pair.

Copy the file .ssh/id_rsa.pub to your target computer's .ssh/authorized_keys. Be aware that the file can be in Windows EOL style (in this case use dos2unix to convert it), and if you copy/paste from a DOS box, some end-of-line characters may be added; remove them from authorized_keys so that the key is on a single line.
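A hedged sketch of that cleanup: strip the carriage returns and join the wrapped lines of a pasted key (the key material here is fake):

```shell
# A key pasted from a DOS box: Windows EOLs plus an extra line break in the middle.
printf 'ssh-rsa AAAA\r\nBBBB user@host\r\n' | tr -d '\r\n'
# → ssh-rsa AAAABBBB user@host
```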

Once you have installed your SSH key on the target computer, try to connect directly from the Cygwin shell:

$ ssh XXX

Now you can add sshcmd = c:\cygwin\bin\ssh.exe to your default.prf.

Using Cygwin's ssh allows you to get ~2MB/s (or more), where you only get ~100KB/s using ssh2plink.bat.

If you have any other tips to improve Unison on Windows, I will be happy to test them and post them here.

Friday, September 10 2010

Dirty fix for omlet vim extension

omlet (or here) is a vim extension for writing OCaml code.

In my opinion, it has better indentation than the standard OCaml vim support. Unfortunately, this has a cost: the indentation vim code is more complex. And it has a few bugs :-(

The main bug is that it doesn't like unbalanced comment opening "(*" and closing "*)" tags. From time to time, it enters an infinite (or very long) loop when such a tag is left in your file. It can be very far away from the point you are editing.

It shouldn't be too problematic, because unbalanced tags are a syntax error. But the problem is that it also matches these tags inside strings. So whenever you start using a regular expression like "(.*)", the whole indentation fails.

But there is a very ugly solution to this problem!

Problematic code:

 let parse_rgxp =
   Pcre.regexp ~flags:[`CASELESS] 
      ( *with *(?<exception>.*) *exception)?$"

The solution is to add an ignore "(*"; statement:

 let parse_rgxp =
   ignore "(*";
   Pcre.regexp ~flags:[`CASELESS] 
      ( *with *(?<exception>.*) *exception)?$"

Very very ugly code: you balance comment tags in dead code -- very very bad ;-)

P.S.: another solution when the plugin enters the infinite loop is to hit Ctrl-C. This will stop it and let you do your own indentation.

Wednesday, September 1 2010

OCaml 3.12 with Debian Sid right now!

Some careful readers of Planet OCamlCore may wonder why the OCaml packages in Debian have not yet been upgraded to 3.12.0. For the Planet Debian readers: this is the latest version of the Objective Caml programming language.

The answer is simple: Debian Squeeze froze on 6th August. This means that Debian folks focus on fixing release-critical bugs and avoid doing big transitions in unstable (Sid). In particular, the Debian OCaml maintainers have decided to keep OCaml 3.11.2 for Squeeze, because the delay was really too short: OCaml 3.12 was released on 2nd August.

Great work has already been done by S. Glondu and the rest of the Debian OCaml maintainers to spot possible problems. The result was a series of bugs submitted to the Debian BTS. This effort started quite early and has been updated with the various OCaml release candidates.

S. Glondu has also built an unofficial Debian repository of OCaml 3.12.0 packages here.

Let's use it to experiment with OCaml 3.12.0.

schroot setup

Following my last post about schroot and CentOS, we will use a schroot to isolate our installation of the unofficial OCaml 3.12.0 packages.


approx is a caching proxy server for Debian archive files. It is very effective and simple to set up. It is already on my server (Debian Lenny, approx v3.3.0). I just have to add a single line to create a proxy for the OCaml 3.12 packages:

 $ echo "ocaml-312   http://ocaml.debian.net/debian/ocaml-3.12.0" >> /etc/approx/approx.conf
 $ invoke-rc.d approx restart

approx is written in OCaml, if you wonder how I came to it.

debootstrap and schroot

We create a chroot environment with Debian Sid:

# PROXY = host where approx is installed, debian/ points to official Debian repository of 
# your choice. 
$ debootstrap sid sid-amd64-ocaml312 http://PROXY:9999/debian

We create a section for sid-amd64-ocaml312 in /etc/schroot/schroot.conf (Debian Lenny):

# All keys except description are reconstructed; adjust to your setup.
[sid-amd64-ocaml312]
type=directory
description=Debian sid/amd64 with OCaml 3.12.0
location=/srv/chroot/sid-amd64-ocaml312
users=XXX
root-users=XXX

Replace XXX by your login.

And we install additional software:

 $ schroot -c sid-amd64-ocaml312 apt-get update
 $ schroot -c sid-amd64-ocaml312 apt-get install vim-nox sudo

OCaml 3.12 packages

Now we can start the setup to access OCaml 3.12.0 packages.

The repository is signed by S. Glondu's GPG key (see here). We need to get it and add it to apt:

$ gpg --recv-key 49881AD3 
gpg: requesting key 49881AD3 from hkp server keys.gnupg.net
gpg: key 49881AD3: "Stéphane Glondu <steph@glondu.net>" not changed
gpg: Total number processed: 1
gpg:               unchanged: 1
$ gpg -a --export 49881AD3 > glondu.gpg
$ schroot -c sid-amd64-ocaml312 apt-key add glondu.gpg

The following part is done in the schroot:

$ schroot -c sid-amd64-ocaml312
# PROXY = host where approx is installed
(sid-amd64-ocaml312)$ echo "deb http://PROXY:9999/ocaml-312 sid main" >> /etc/apt/sources.list
(sid-amd64-ocaml312)$ cat <<EOF >> /etc/apt/preferences
Package: *
Pin: release l=ocaml
Pin-Priority: 1001
EOF
(sid-amd64-ocaml312)$ apt-get update 
(sid-amd64-ocaml312)$ apt-cache policy ocaml
  Installed: (none)
  Candidate: 3.12.0-1~38
  Version table:
     3.12.0-1~38 0
       1001 http://atto/ocaml-312/ sid/main amd64 Packages
     3.11.2-1 0
        500 http://atto/debian/ sid/main amd64 Packages
(sid-amd64-ocaml312)$ apt-get install ocaml-nox libtype-conv-camlp4-dev libounit-ocaml-dev...

That's it. The apt-cache policy command shows that OCaml 3.12 from the ocaml-312 repository has a higher priority for installation.

Good luck playing with OCaml 3.12.0.

Thursday, August 26 2010

CentOS 5 chroot with schroot

OCaml compiles native executables in static mode. This allows a minimal set of dependencies when delivering an executable. It also has disadvantages, like the size of the executable and problems when libraries are updated -- but this is another topic. There is still one strong dependency that you should not forget when you want to deliver a product for most Linux distributions: the dependency on the glibc version.

Trying to run OASIS compiled with Debian Lenny, on CentOS 5.5:

.../OASIS: /lib64/libc.so.6: version `GLIBC_2.7' not found (required by .../OASIS)

So when compiling for delivery, one should choose the oldest distribution one targets. In my case, I chose CentOS 5, which comes with glibc v2.5. I usually choose Debian stable, at the moment of writing Debian Lenny. But for now, Debian Lenny's glibc (v2.7) is newer than the one coming with the CentOS 5.5 stable release. CentOS is a Red Hat-like Linux distribution.
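To see which glibc a given build host provides (the version your binaries will end up requiring), one can ask getconf:

```shell
# Print the glibc version of the current system.
getconf GNU_LIBC_VERSION
```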

I use a Debian Lenny amd64 host system and I decided to setup a chroot of CentOS 5 i386 and amd64. I also setup schroot to use my CentOS chroot.

CentOS 5 amd64 setup

First of all we use rinse, which can set up an RPM-based distribution in a chroot. The version v1.3 shipped with Debian Lenny has some bugs: it doesn't install nss and other mandatory packages. So I downloaded v1.7 directly from Debian Sid. There are no dependency problems and the package is arch:all, so it is straightforward to install:

$ wget http://ftp.de.debian.org/debian/pool/main/r/rinse/rinse_1.7-1_all.deb # Replace ftp.de.debian.org by your preferred Debian mirror
$ dpkg -i rinse_1.7-1_all.deb

Then I create the chroot directory and launch rinse:

$ mkdir /srv/chroot/centos5-amd64
$ rinse --arch amd64 --distribution centos-5 --directory /srv/chroot/centos5-amd64 # N.B. you must use --arch, the default is i386

Once the installation is complete, you can add an entry for this distribution in /etc/schroot/schroot.conf:

# All keys except description are reconstructed; adjust to your setup.
[centos5-amd64]
type=directory
description=Centos 5 (amd64)
location=/srv/chroot/centos5-amd64
users=XXX
root-users=XXX

Replace XXX by your login.

If you try to log in directly, you will get warnings:

$ schroot -c centos5-i386
I: [chroot centos5-i386-a952de23-7f4b-4bae-a9b9-752ecee4a185] Running initial shell: "/bin/bash"
-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied

This is a bit misleading, because the real problem is that nothing has been created in /dev/. CentOS delegates creating char/block devices to udev. You have two solutions to this issue:

  • login and call MAKEDEV to create missing devices:
$ MAKEDEV random
$ MAKEDEV console
$ MAKEDEV zero
$ MAKEDEV null
$ MAKEDEV stdout
$ MAKEDEV stdin
$ MAKEDEV stderr
  • use an already setup Debian chroot to copy the missing devices:
$ rsync -av /srv/chroot/lenny-amd64/dev/* /srv/chroot/centos5-amd64/dev/

That's it, you now have a functional chrooted CentOS 5 environment:

$ schroot -c centos5-amd64 cat /etc/redhat-release
I: [chroot centos5-amd64-b9bae264-285b-4d17-a046-13386736cecd] Running command: "cat /etc/redhat-release"
CentOS release 5.5 (Final)

CentOS 5 i386 setup

To set up an i386 environment, we follow almost the same scheme, except that we need to fix a bug in rinse v1.7: we need to call linux32 before executing chroot. The problem is that the first-stage installation of rinse installs an i386/i686 environment, but as soon as it calls chroot yum install ..., yum will guess that the system is amd64 and install the missing packages for that architecture. See the Debian bug report and the example patch attached to it to correct this behavior.

WARNING: this patch is just an example. You can apply it for creating a CentOS i386 chroot on a Lenny amd64 host, but you should remove it as soon as the installation is complete.

$ mkdir /srv/chroot/centos5-i386/
$ rinse --arch i386 --distribution centos-5 --directory /srv/chroot/centos5-i386 # With /usr/lib/rinse/centos-5/post-install.sh patched 
$ rsync -av /srv/chroot/lenny-i386/dev/* /srv/chroot/centos5-i386/dev/

Add this distribution to /etc/schroot/schroot.conf:

# All keys except description are reconstructed; adjust to your setup.
[centos5-i386]
type=directory
description=Centos 5 (i386)
location=/srv/chroot/centos5-i386
users=XXX
root-users=XXX

You now have a schroot of CentOS 5 i386:

$ schroot -c centos5-i386 cat /etc/redhat-release
I: [chroot centos5-i386-9acafa91-9862-4488-aaef-4ab2a482771e] Running command: "cat /etc/redhat-release"
CentOS release 5.5 (Final)

Happy schroot hacking!

Thursday, June 10 2010

Waiting for her in ~1month

My wife is pregnant and we are expecting our second baby's arrival in about a month. Last time, she came back from her preparation lessons with an advertisement "baby shower" gift pack.

One of them caught my attention:

English translation: be prepared to offer him the best...

English translation: because Debian guarantees the quality, and offering quality is a proof of love

OK, the swirl goes the other way and I made a 5-minute GIMP modification to cut-and-paste the Debian logo. But the message is there!

Remember me:

For those interested, the real thing is a soap called NUK(R).

Wednesday, March 10 2010

LLVM, OCaml and Debian

I hope some people from the OCaml community will enjoy this changelog extract from llvm 2.6-7, which has just been uploaded:

  [ Arthur Loiret ]

  [ Sylvain Le Gall ]
  * Build a libllvm-ocaml-dev package, which contains the OCaml binding:
    Closes: #568556.
    - debian/debhelper.in/libllvm-ocaml-dev.{dirs,doc-base,install,META}: Add.
    - debian/control.in/source: Build-Depends on ocaml-nox (>= 3.11.2),
      ocaml-best-compilers | ocaml-nox, dh-ocaml (>= 0.9.1).
    - debian/packages.d/llvm.mk:
      + (llvm_packages): Add libllvm-ocaml-dev.
      + (libllvm-ocaml-dev_extra_binary): Define, install META file.
    - debian/rules.d/binary.mk: Add dh_installdirs and dh_ocaml.
    - debian/rules.d/vars.mk:
      + include /usr/share/ocaml/ocamlvars.mk.
      + Configure with --with-ocaml-libdir=$(OCAML_STDLIB_DIR)/llvm.
  * debian/rules.d/build.mk: Fix symlinks pointing to the $DESTDIR.

In other words: LLVM is now built with its OCaml bindings and a META file for findlib. It will take some days before it reaches every architecture, but hopefully it will be in Squeeze (the next Debian stable release).

Thanks to Arthur Loiret for the quick upload.

Thursday, February 4 2010


Last year, I was not able to attend FOSDEM due to last-minute problems. However, this year I will be there and even attend the Debian booth for some periods (http://wiki.debian.org/DebianEvents/FOSDEM/2010).

I will bring my Openbrick NG with a standard Debian Lenny and probably a Babelbox installation. The Openbrick is a VIA C3 fanless computer. It is not very exotic, but it is quite interesting to see this kind of hardware. For years, I have tried to build/use fanless computers. This is not a very popular topic, but it raises the problems of heat and noise at a higher level.

I am still setting up the Babelbox, which should have been a Debian Lenny RC1. I will try to upgrade it to Debian Lenny 5.0.3.

See you at FOSDEM 2010.

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

Tuesday, August 4 2009

Result of Debcamp at Debconf9 for OCaml task force

I am back from Debconf 9 in Cáceres. As usual it was a great time to meet other Debian contributors and to exchange knowledge.

This year we came with several new and old pieces of OCaml code, which attracted unexpected people to OCaml:

  • the Debian release team asked us for a demonstration of the language, in order to see if the transition monitor is usable for general transition tracking
  • Joachim Breitner is now using edos-debcheck to enhance tests before starting builds (it should save time if some packages are uninstallable)

Git repository of dh-ocaml

During Debcamp we, the OCaml Debian task force, did a lot of work on dh_ocaml and related tools:

  • dh_ocaml can now compute Depends/Provides and the .md5sums file for every OCaml package
  • dh_ocamldoc can produce documentation and is a replacement for the CDBS ocamldoc targets
  • dh_ocamlinit does replacements in debian/*.in files, just as ocamlinit.mk did before
  • lintian tests (bug submitted to lintian for integration)
  • policy review (still in progress)

dh_ocaml works as follows:

  • it distinguishes between 3 kinds of packages: dev, runtime, and (standard) binary
    • dev and runtime packages are always associated, since a runtime package just provides the non-development objects of a dev package
    • binary packages are those that are neither dev nor runtime packages
  • dev and runtime packages are scanned for objects containing assumptions (*.cmx, *.cmi, *.cmo...)
  • all packages are scanned for bytecode executables, which also contain assumptions
  • assumptions are extracted and sorted into 2 categories: defined and imported
  • defined assumptions are used to compute the interfaces provided by the package
  • imported assumptions are used to compute the interfaces on which the package Depends
  • defined assumptions are also used to generate the .md5sums file that is distributed in the dev package and used to compute Depends for other packages

Provides and Depends use virtual packages built from checksums of the defined assumptions. E.g. dh_ocaml uses libzip-ocaml-dev-12z3a to state a dependency on libzip-ocaml-dev; 12z3a is the base-36 encoding of the first 5 hex digits of the MD5 of the defined interfaces. This way, if some interfaces change in camlzip, the checksum also changes and makes packages that depend on libzip-ocaml-dev-12z3a uninstallable. This should prevent users from installing packages that make wrong assumptions about libzip-ocaml-dev.
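To make the scheme concrete, here is a hedged sketch of how such a virtual package suffix could be derived. This is not the actual dh_ocaml code, the interface dump is made up, and the real tool may differ in detail; it only illustrates "base-36 encoding of the first 5 hex digits of the MD5".

```shell
# Made-up dump of defined interfaces (module name + interface checksum)
interfaces='Zip 1b2c3d4e5f60718293a4b5c6d7e8f901
Zlib 0123456789abcdef0123456789abcdef'

# First 5 hex digits of the MD5 of the dump
hex=$(printf '%s' "$interfaces" | md5sum | cut -c1-5)

# Re-encode that value in base 36
n=$((16#$hex))
digits=0123456789abcdefghijklmnopqrstuvwxyz
suffix=''
while [ "$n" -gt 0 ]; do
  suffix="${digits:$((n % 36)):1}$suffix"
  n=$((n / 36))
done
[ -n "$suffix" ] || suffix=0

echo "libzip-ocaml-dev-$suffix"
```

If any defined interface changes, the MD5 changes, so the suffix and therefore the virtual package name change too, which is exactly what breaks installability of dependent packages.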

The package "ocaml" (resp. "ocaml-nox") is a dev package; its runtime is "ocaml-base" (resp. "ocaml-base-nox"). In the case of these OCaml base packages, we use the upstream version rather than the checksum in the Provides/Depends. The provided packages are therefore "ocaml-3.11.1" (runtime "ocaml-base-3.11.1") and "ocaml-nox-3.11.1" (runtime "ocaml-base-nox-3.11.1"). In fact, with this scheme we just fall back to the earlier way of making Depends in OCaml Debian packages, except that it is now computed automatically and generalized.

Depends are computed using the type of the packages: runtime and binary packages Depend on other runtime packages, and dev packages Depend on other dev packages. E.g. headache is a bytecode executable, so this package is classified as a binary package. dh_ocaml automatically makes it Depend on "ocaml-base-nox-3.11.1", because it contains bytecode objects that use it. Another example: libzip-ocaml-dev is a dev package, and dh_ocaml makes it Depend on "ocaml-nox-3.11.1".

We have checked this Depends/Provides computation on various packages and it seems to work quite well. We have found unexpected dependencies (ocaml-findlib depending on "ocaml-3.11.1" because it contains a wizard using Tk). We have also found an unexpected defined interface (ocaml-fileutils defines the module Unix due to a "-linkall" flag used when building the .cma).

What remains to do:

  • dh_ocaml needs to distinguish between architecture-independent and architecture-dependent provides
  • dh_ocaml needs to generate binNMU-safe versions between internal packages (i.e. the ones being built)
  • some more tools: dh_ocamlmeta to install META files (with correct versions), dh_ocamldirs to create standard directories, dh_ocamlinstall to move objects into the dev and runtime packages from the debian/tmp build directory.

All these new tools should help track dependencies between packages automatically and ease the work of the OCaml Debian task force. Debcamp was a great time to brainstorm about Debian OCaml package management. The overall quality of OCaml in Debian should increase.

Wednesday, July 8 2009

Debcamp, Debconf 9 and OCaml

This year I will spend 2 weeks at Debcamp and Debconf9. Together with some other members of the Debian OCaml Task Force, I will try to improve dh-ocaml.

In particular, we will try to implement a proposal by Stefano Zacchiroli for OCaml link-time compatibility and Debian dependencies. This proposal, called "approximated ABI" in the paper, should make it possible to reflect in Debian dependencies which OCaml libraries are compatible. For now, relationships between OCaml packages are quite naive, since we only use version operators like >=, >>, <=... These operators are not precise enough with regard to OCaml assumptions about modules. The proposal will improve dependencies by using virtual packages that summarize OCaml assumptions.

If time and manpower allow, it could also be a good occasion to refresh our policy a little and to look at implementing lintian checks for it. But these are only "bonus objectives" for this year's Debcamp/Debconf.

Last time, Debconf7 was a great place to exchange ideas, delve into Debian subjects and discover totally amazing things. The rest of the year, I am always busy with real life and urgent Debian work. Debcamp and Debconf will be a great opportunity to stay focused on one subject for a long period, trying to find the best solution.

Friday, December 19 2008

OCaml meeting 2009 in Grenoble, progress

This morning, Alan sent me the link to the registration form. We are now ready to accept participants for OCaml Meeting 2009.

Last week, I was at the annual CAML consortium meeting. We talked about OCaml Meeting with M. Leroy and the other participants. Most of us think this meeting is a great way to get OCaml users to meet. It can be the start of great projects, just because people realize that a real OCaml community exists. INRIA seems really enthusiastic about the meeting; maybe an INRIA team will be directly involved in the organization in 2010.

I hope the 2009 meeting will be as good as the previous one. At least this year we are doing things well in advance, and the whole organization will be a little more "professional".

OCaml Meeting webpage

Monday, September 15 2008

Distributing OCaml libraries: common problems for packager and how upstream author can help solving it

Having already done some Debian packages of OCaml libraries, I wish to share my experience of the most common problems. The intent is to help people releasing OCaml libraries to interact well with (Debian) packagers. Most of the tips should apply to other distributions as well. I consider the use of findlib mandatory for this task, as described in the Debian OCaml packaging policy.

The problems listed here are common, and I have encountered them with my own libraries... In fact, most of the time, when I finish a personal OCaml library, I package it for myself to test that everything is fine. Being a packager and being an upstream author are really different jobs.

"Upstream author" is used below for people developing and releasing OCaml libraries, and "packager" for people making packages for a distribution.

Here is the list of the most common problems and their solutions:

  • Missing the "static linking exception" in COPYING/LICENSE

Libraries are linked statically in OCaml. This is both an advantage and a problem. Upstream authors should remember that the LGPL doesn't cover static linking well: if you link an executable statically with an LGPL library, the executable is contaminated by the license. This error is probably the most problematic one, because fixing it requires going through the whole project updating license headers to state that there is an exception to the LGPL.

There is no way for a packager to fix this problem: packagers are not allowed to change the license. A library having this problem can still be packaged, but it complicates the legal aspects of building applications on top of it.

A good example of this can be found in the OCaml LICENSE itself:

As a special exception to the GNU Library General Public License, you
may link, statically or dynamically, a "work that uses the Library"
with a publicly distributed version of the Library to produce an
executable file containing portions of the Library, and distribute
that executable file under terms of your choice, without any of the
additional requirements listed in clause 6 of the GNU Library General
Public License.  By "a publicly distributed version of the Library",
we mean either the unmodified Library as distributed by INRIA, or a
modified version of the Library that is distributed under the
conditions defined in clause 2 of the GNU Library General Public
License.  This exception does not however invalidate any other reasons
why the executable file might be covered by the GNU Library General
Public License.

Followed by the text of the LGPL.

Each file header looks like:

This file is distributed under the terms of the GNU Library General
Public License, with the special exception on linking described in 
file ../LICENSE.
  • Forgetting that some architectures don't have an ocamlopt compiler

By far, this is the most common error I have seen. Upstream authors assume that you are able to build both a native and a bytecode version of their library. Unfortunately, this is not the case on every architecture. This error is pretty simple to fix by separating the byte and opt targets in the Makefile. The ultimate test is to rename /usr/bin/ocamlopt and /usr/bin/ocamlopt.opt to something different and try to rebuild the library from scratch.
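A minimal sketch of such a Makefile split (library name "toto" and the file lists are hypothetical): the byte target is always built, while the opt target is only added when ocamlopt is available.

```makefile
# Build bytecode unconditionally; build native code only if ocamlopt exists.
all: byte $(shell command -v ocamlopt >/dev/null 2>&1 && echo opt)

byte: toto.cma
opt: toto.cmxa

toto.cma: toto.cmo
	ocamlc -a -o $@ $^
toto.cmxa: toto.cmx
	ocamlopt -a -o $@ $^
```

A packager can then simply call `make byte` on bytecode-only architectures and `make byte opt` elsewhere.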

  • Missing or incomplete META file

The META file is the base of everything in findlib. It is very important to provide a good META file, because it helps other people easily build things on top of your OCaml library. Each library should provide its own META file. Until now, Debian packagers used to write the missing META file and contribute it to the upstream author. Now that findlib has become a standard in OCaml development, upstream should take care of this file.
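For reference, a minimal META file for a hypothetical library "toto" might look like this (all field values are made up):

```
description = "hypothetical toto library"
version = "0.1"
requires = "unix"
archive(byte) = "toto.cma"
archive(native) = "toto.cmxa"
```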

A good test for a META file is to install it and run:

~$ ocaml
        Objective Caml version 3.10.1

# #use "topfind";;
- : unit = ()
Findlib has been successfully loaded. Additional directives:
  #require "package";;      to load a package
  #list;;                   to list the available packages
  #camlp4o;;                to load camlp4 (standard syntax)
  #camlp4r;;                to load camlp4 (revised syntax)
  #predicates "p,q,...";;   to set these predicates
  Topfind.reset();;         to force that packages will be reloaded
  #thread;;                 to enable threads

- : unit = ()
# #require "ZZZ";;

If you are able to load your library in the OCaml toplevel, that is a good sign.

  • Not using ocamlfind to install the library

ocamlfind can install libraries, and using it is the best way to install your OCaml library. Moreover, you should use the simplest possible invocation.

For a pure OCaml library this is straightforward, and it allows the Debian packager to override the destination directory through the OCAMLFIND_DESTDIR environment variable.

      ocamlfind install toto META toto.mli toto.cmi toto.cmxa toto.cma toto.cmx

For a non-pure library (one with C stubs), you could add an OCAMLFINDFLAGS variable so that the Debian packager can pass "-ldconf ignore":

      ocamlfind install $(OCAMLFINDFLAGS) toto META toto.mli toto.cmi toto.cmxa toto.cma toto.cmx
  • Not distributing .mli files

".mli" files are the human-readable interface to the library. Even if most OCaml programmers use the ocamldoc HTML API of the library, it is best to distribute these files. In particular, Debian packagers use them to generate up-to-date HTML API documentation automatically.

  • Not distributing .cmx files

".cmx" files contain more information than the ".cmxa". Distributing them allows applications based on the library to be better optimized (e.g. through cross-module inlining).

  • Forgetting about files for other OSes

This one comes from my "Windows experiment". Most of the time, upstream authors remember "*.a" files but forget "*.lib" (yes, this is the Windows counterpart of the ".a" file).

  • Using a bad wildcard for the files to distribute

This one is not very OCaml-specific. Makefiles and sh use the same wildcard syntax, e.g. "*.mli". But if sh cannot expand the wildcard (because no file matches), it stays as-is, and the called executable will look for a file literally named "*.mli". Using the Makefile wildcard function is the best option here:

     ocamlfind install $(OCAMLFINDFLAGS) toto META $(wildcard *.mli *.cmi toto.cmxa toto.cma *.cmx)

This way, the wildcard expands to existing files only.

  • Using a custom build system

This is more or less a problem. If everything is fine, there is no problem and you can forget about this point. But if the custom build system is complex and there is an error in it, it becomes a big problem.

I must confess that I have made this kind of mistake myself many times. But looking back, I think that most of the time I should simply have used OCamlMakefile. Relying on ocamlfind for library installation, plus a simple Makefile, OCamlMakefile, OMake or ocamlbuild, is just enough.

  • Missing BTS

This point is about communication between upstream authors and packagers (from Debian and other distributions). A bug tracking system is a way to publish, comment on, work on and find solutions to bugs. Even for a single problem in a library, having the bug public helps people show you what is wrong.

Of course, Debian (and Fedora, FreeBSD...) has its own bug tracking system, and upstream authors can use it directly. This is less work for upstream, but it tends to make bugs "distribution-specific". Having an upstream BTS helps track a bug across distributions.

If upstream authors don't want to maintain their own BTS, they should consider hosting part of the project on a forge like OCamlCore.org. Even if they don't host the source code there, it is still a good idea.

That's all. If you have other common problems, feel free to comment and I will add them to the list.

Tuesday, April 15 2008

Debian accounts and keyring

Even if I don't like talking about this in public, Lucas Nussbaum's blog entry convinced me to write this, because I too agree that the Debian accounts and keyring situation is severely hurting Debian, and that a solution needs to be found RSN.

I wish to be clearer about some of my positions on this. First of all, I am currently hit by a "GPG key expired" status that prevents me from voting and uploading packages. It is partly my fault, because 1) I set an expiration date on my GPG key and 2) I only updated this expiration date on keyring.debian.org less than a month before it expired -- but that was more than 3 months ago... I have pinged various people since then to get the keyring updated, without success. I have re-uploaded my key without an expiration date, and now I know that things can take a long time inside Debian (second only to waiting for my own Debian account creation).

Another point concerns DM (Debian Maintainer). I was against the proposal when it was voted, because I thought it would create a "sub" status of Debian Developer (i.e. people who are in between Debian users and Debian developers). I have changed my mind on this point: the DM process is lightweight and everything seems to be faster with it. This is a really good point for DM (and jetring). I am even considering sponsoring myself to apply to DM, in order to have a non-expired GPG key somewhere.

Last but not least, I met several French NMs at DebConf 7 (Kibi, Goneri...). At that time, I didn't know them. Since DC7, I have seen them working on different parts of Debian. I think it is a shame to make them wait so long; it is really the best way to make their motivation disappear. Looking back at my own situation, I realize that I was more active before my account creation than during the wait, probably because the process took so long that I lost hope of ever getting an account. I am now more active in Debian again, but still not doing as much as I want to.

I don't know the best solution to this problem. I tend to think that some key people should delegate their work to other DDs (within a team?). Last year at DC7, Sam Hocevar was already discussing this problem with the people involved in account creation. I don't know what the result of this discussion was...

I hope this post helps to show that this problem is important.

Wednesday, March 12 2008

Linux 2.6.24 and Debian Etch on a Thinkpad x60s

I just installed a new kernel on my Thinkpad. The migration was from a 2.6.23 kernel, and 2.6.24 brings a new 802.11g driver: iwl3945.

This new driver is fully included in the kernel and only needs additional firmware, which can be found in the firmware-iwlwifi Debian package. It replaces ipw3945, which required a daemon in user space. It works pretty well and seems to connect faster, but the blinking WiFi LED on my laptop doesn't work anymore.

The installation was not straightforward, since ipw3945 and the udev persistent net rules prevented it. The solution is to remove the lines concerning the net interface using ipw3945 (the line "# PCI device 0x8086:0x4227 (ipw3945)" and the one after it) from the file /etc/udev/rules.d/z25_persistent-net.rules. After this, you must run:

modprobe -r iwl3945
modprobe iwl3945

The file /etc/udev/rules.d/z25_persistent-net.rules now contains a line for the iwl3945 driver.
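The removal step can also be scripted. Here is a hedged sketch using GNU sed, demonstrated on a scratch copy with made-up rule contents (the real file is /etc/udev/rules.d/z25_persistent-net.rules):

```shell
# Scratch copy of the rules file; the MAC addresses and second device
# are invented for the demo.
cat > z25_persistent-net.rules <<'EOF'
# PCI device 0x8086:0x4227 (ipw3945)
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", NAME="eth1"
# PCI device 0x8086:0x109a (e1000e)
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="66:77:88:99:aa:bb", NAME="eth0"
EOF

# Delete the ipw3945 comment line and the rule line right after it
# (addr,+N is a GNU sed extension).
sed -i '/(ipw3945)/,+1d' z25_persistent-net.rules
cat z25_persistent-net.rules
```

Only the two e1000e lines remain afterwards; the same command pointed at the real file performs the edit described above.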

Installing the driver on Etch requires updating the wireless-tools package from its Etch backport, and downloading/installing the firmware-iwlwifi package from unstable.

Wednesday, January 30 2008

OCamlMeeting in Paris -- Debian summary

On January 26th 2008, the OCaml community had its first meeting in Europe. There were 5 different talks, including one from me. The meeting was pretty constructive, and I think some great things should come out of it. In this post, I will focus on the Debian maintainer's point of view.

The meeting was organized by me, with the help of Gabriel Kerneis and Vincent Balat. I wish to thank Mme Turgis from ENST/ParisTech, who made all this possible.


I announced the creation of OCamlCore, a free software services company. It will provide commercial support for OCaml and related products. It will also provide the community with some resources (including a GForge, a planet, some SCM repositories...).

For now, I will open administration to Debian people who wish to help (for root rights), and to the whole community for other tasks (GForge and general administration). For now, I alone will choose who can administer and who cannot... This is not a very open position, but this server also belongs to my company.

Xavier Leroy gave a nice talk that showed us some future directions:

  • OCaml 3.11 will have native dynlink
  • there is hope to integrate a GHC-like type system extension (GADTs: Generalized Algebraic Data Types)
  • there is a need to work on parallelism, following different paradigms

But above all, Xavier talked about the lack of manpower at INRIA, and invited the community to build tools and other things (including a CPAN-like distribution system) to make the language more widespread. He also stated that a lot of things will stay as they are, in the name of backward compatibility.


Vincent Balat gave us a nice talk about OCsigen, a new framework for programming web applications. This program is already in Debian, and it is at the root of the Debian discussion about moving .cm[ao] files into libxxx-ocaml packages. The framework allows programming web applications using continuations, which should ease many aspects. It also features static type checking, including static verification of the generated HTML pages.


I was impressed to see G. Stolpmann at the meeting. Unfortunately, I could not follow the whole talk, since I had a phone call to make in order to book the restaurant. From what I have seen, GODI is growing in terms of manpower. They have some issues with OCaml 3.10, because of camlp4 and some packages. At the end of his talk, he pushed for a more general use of GODIVA, which requires a very uniform way of building software and could be the foundation of a build standard for OCaml software. In my humble opinion, I don't agree on this point: the proposal is based on configure/make all, which is too C-style a way of doing things. It is not bad, but it excludes ocamlbuild-enabled software.


Nicolas Pouillard gave an introduction on how to use ocamlbuild. I discovered how it works, since I hadn't had time to look into this topic. It seems a good thing, and I hope to be able to try it at some point. The conclusion was about modifying ocamlbuild to allow the use of multiple plugins. For now, the plugin is a separate file which is compiled first and then loaded, as a special case. Nicolas is looking for a pure OCaml way to handle multiple plugins.

OCaml in Debian

I spoke about different aspects of Debian packaging. After a short introduction and some history of OCaml in Debian, I came to some conclusions/problems:

  • we have an early integration, which lets Debian handle a lot of packages, but this is due to a long history
  • we are still not able to cope with library incompatibilities when rebuilding; we have only addressed the ocaml package ABI problem
  • the Debian package management system does not really fit the OCaml way of creating libraries, which is our main problem
  • native compilation is broken on 3 arches (Xavier Leroy told us that this should be solved for ARM)
  • the use of Alioth has made us more efficient at integrating packages
  • the Debian team works in bursts: from time to time we start integrating a lot of packages, mostly because some maintainer wishes to package a piece of software with a lot of dependencies

Talking with Xavier gave me some additional information:

  • ARM is fixed; it should compile on Debian stable on ARM. He explained that the problem is related to the multiple ABI flavors of ARM
  • he said that we should not have any problem concerning stripping of bytecode executables; if some problems remain, we should find the OCaml library that triggers this behavior
  • I realized that this behavior can be triggered by OCamlMakefile, which still has a "-custom" option
  • he promised to send me a simple test to detect custom bytecode applications (a combination of "file X" and a specific string at the end of the file)
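A hedged sketch of what such a test might look like. The trailer magic "Caml1999X" is my assumption for OCaml executables of that era (the exact string varies with the compiler version), and this is my guess at the test Xavier described, not his actual script:

```shell
# A program compiled with -custom looks like a native executable to
# file(1) -- it reports ELF -- but it still embeds the OCaml bytecode
# trailer magic near the end of the file. A plain bytecode program,
# by contrast, is reported as a script.
is_custom_bytecode() {
  file -b "$1" 2>/dev/null | grep -q 'ELF' &&
    tail -c 16 "$1" | grep -q 'Caml1999X'
}

echo 'hello world' > plain.txt   # obviously not a bytecode executable
if is_custom_bytecode plain.txt; then echo custom; else echo "not custom"; fi
```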

At the end of the presentation, there were some questions. I discovered that many people were unaware of the ABI problem with OCaml and its libraries. They seemed to discover that it requires regular rebuilds of our packages.

I also had a discussion about the fact that the Debian binary distribution doesn't fit developers' needs. It seems that developers prefer to have the source of the libraries they compile, and also the most recent release. But at the same time, people avoid using GODI, which also has the problem of waiting for a particular library to be released... After some discussion, I pointed out that some companies prefer to live in a frozen world, which Debian stable can provide. Living in a frozen world gives company developers the same environment from the beginning to the end of a project - which can last years. It also helps avoid build environment de-synchronization between company developers.

There was also a question about interaction between installed OCaml packages and GODI software. I could not answer it, except with a "patches are welcome".

OCaml on a JVM using OCaml-Java

Xavier Clerc explained to us how he achieved running a full OCaml environment on a JVM. This work was pretty impressive: he was able to compile the OCaml toplevel and provide it as an applet, which is amazing (and working). He also stated that this should allow bindings to Java graphical user interfaces from OCaml. I was pretty busy with a phone call (to a restaurant for the dinner), so I did not attend the whole talk. Anyway, the project is promising, but it needs to recompile every library with its compiler. I think it is not suited for Debian for now.


The workshop was shorter than it should have been, due to accumulated delays during the day. However, it was pretty interesting, because everybody could talk almost freely.

The initial discussion was about the "Unicode situation". It was a question about Unicode in Unison, but there is no real solution to it. For the reader's information, Unison has problems with filenames containing UTF-8 characters, in particular when syncing two directories that live on file systems with different encodings. Even if a filename can be represented on both file systems, the comparison (between UTF-8 and ISO8859-15) fails, because it is a byte-to-byte comparison.
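A small sketch to illustrate why the byte-to-byte comparison fails (assuming the od tool from coreutils): the same accented character has entirely different byte sequences in the two encodings.

```shell
# "é" encoded in UTF-8 (two bytes) vs ISO8859-15 (one byte)
utf8=$(printf '\303\251' | od -An -tx1 | tr -d ' \n')
latin1=$(printf '\351' | od -An -tx1 | tr -d ' \n')
echo "UTF-8: $utf8  ISO8859-15: $latin1"
[ "$utf8" = "$latin1" ] || echo "bytes differ: byte-to-byte comparison fails"
```

Comparing c3 a9 against e9 byte by byte can never match, even though both represent the same character.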

We discussed further the idea of OSRs (OCaml Standard Recommendations), which had been briefly discussed in the morning. I think David Teller (who also did a great part of the IRC transcription) was very keen on this idea.

I invited Nicolas Pouillard to discuss the following three subjects.

About camlp4 documentation, Nicolas told people about what he has set up. About ocamlfind and ocamlbuild, he went back to the multiple-plugin problem. About ocamlbuild and GODI, Gerd, Nicolas and Xavier tried to build a list of the metadata that can be shared. This includes the list of files to be installed (PLIST in GODI).

Concerning the "feedback from the Gallium", Xavier explained that he had already given it in the morning (which is true). I think Xavier implicitly agreed that if this kind of event happens again, he could give another talk about the future of OCaml.

Concerning the Summer of Code, I just reminded people that they can write descriptions of Google Summer of Code projects in the cocan wiki. Someone else gave a reminder about the Jane St Capital summer of code.

The points concerning common interfaces brought the discussion back to OSRs.

The organization of next year's event didn't raise particular interest. I think people would be interested in having one next year, if something comes out of this meeting.

Wednesday, October 3 2007

Package for coThread

I am working on packaging coThread. A good part of the initial packaging was done by Erik de Castro Lopo, with my help.

The package is ready, but since a big part of it concerns threads, I am scratching my head over how to write a correct and useful META file.

Any ideas?

OCaml 3.10.0 transition is still ongoing

One month later... the OCaml transition is in good shape.

In fact, the transition was almost finished after 15 days of work by the OCaml task force (i.e. around September 20th). We are now waiting for the packages to enter testing. The transition is blocked, because an ongoing GTK transition crosses our path.

In order to enter testing, some packages will be removed: ocamldbi, regexp-pp... Zack ran a small poll to see if there were any reasons to keep them, since they don't compile with 3.10.0. After a week, we decided to go ahead and remove them.

With the transition, there will be some small changes:

  • cameleon has been upgraded to 1.9.18 (+ some svn fixes)
  • camomile is now at version 0.6.0
  • for OCaml developers, we now try whenever possible to ship ocamldoc-generated documentation with every XXX-dev package (referenced as XXX-ocamldoc-apiref in doc-base)
  • arm and ia64 are buggy and prevent some packages from building (felix, camomile); these architectures won't be native anymore (OCaml will be shipped without ocamlopt on them)
