Wednesday, October 17 2007

svndump-utils progress

My project svndump-utils is going really well and will hopefully be finished by next week.

The initial goal described on the wiki specification page has been fulfilled "in spirit". The real goal is to be able to extract Subversion projects and feed them to tools like git-svnimport (or a future svn2darcs script). Of course, this transformation must be done without losing the change history (otherwise there would be no point in doing it).

Svndump-utils is built around a few strong ideas:

  • history: extraction of liveness (add/remove) and copy (copyfrom) information for every node (file/dir). This helps to understand what is alive at which revision. It also defines entry points, e.g. extract the project which is under project1@32, meaning project1/ at revision 32. This is processed as a standard graph.
  • filter: a stacked iterator over the svn dump record stream. This structure represents a basic operation that can be performed on an svn dump file. It should provide a next_record function and the computed history of the stream. In most cases, you compute a new history by applying some graph processing to a clone of the previous filter's history and provide it to the next filter. Based on this history, a filter just has to remove nodes which are not alive in its own history. For now, the provided filters are:
    • Load: read an svn dump file (first filter)
    • Save: write an svn dump file (last filter)
    • Include: include a specific node and everything connected to it (copy, children, parent)
    • Exclude: exclude a specific node and everything connected to it (copy, children)
    • DropEmptyRev: remove empty revisions from the stream
    • Reparent: given a specific node, make all connected nodes live under that node

A classical filter configuration:

Load(svndump.file) -> Include(project1@32) -> Exclude(project1/test@34) -> DropEmptyRev -> Save(svndump-clear.file)
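To make the stacked-filter idea concrete, here is a hypothetical Python sketch (names like Record fields, next_record and history are illustrative only; the real svndump-utils API surely differs). Each filter wraps the previous one, clones its history, prunes that clone, and then drops records that are not alive in its own history:

```python
# Hypothetical sketch of the stacked-filter idea, not the real svndump-utils code.

class Filter:
    """Base filter: wraps a previous filter and recomputes its history."""
    def __init__(self, prev=None):
        self.prev = prev
        # Clone the predecessor's history so graph processing on it
        # does not disturb the upstream filter.
        self.history = dict(prev.history) if prev else {}

    def __iter__(self):
        return self

    def __next__(self):
        return self.next_record()

    def next_record(self):
        rec = self.prev.next_record()
        # Drop nodes which are not alive in this filter's own history.
        while rec["path"] not in self.history:
            rec = self.prev.next_record()
        return rec

class Load(Filter):
    """First filter: iterate over the records of a parsed dump (here, a list)."""
    def __init__(self, records):
        super().__init__()
        self.records = iter(records)
        self.history = {r["path"]: True for r in records}

    def next_record(self):
        return next(self.records)

class Exclude(Filter):
    """Remove a node and everything under it (children) from the history."""
    def __init__(self, prev, path):
        super().__init__(prev)
        for p in list(self.history):
            if p == path or p.startswith(path + "/"):
                del self.history[p]

# Usage, mimicking the pipeline above (minus Save and DropEmptyRev):
dump = [{"path": "project1"}, {"path": "project1/test"}, {"path": "project1/src"}]
cleaned = [r["path"] for r in Exclude(Load(dump), "project1/test")]
```

The point of cloning the history in the base class is that each filter can prune its own copy freely; the record stream itself is only filtered lazily, one record at a time.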

At the beginning, I did not have a strong feeling about my ability to compute the history to provide the next filter with. In fact, after having written these utils, the task turned out to be quite simple. It is only a matter of iterating through nodes... When classical algorithms can be applied, computer science gets much simpler!

Wednesday, October 3 2007

Package for coThread

I am working on packaging coThread. A good part of the initial packaging has been done by Erik de Castro Lopo, with my help.

The package is ready, but since a big part of it concerns threads, I am scratching my head over how to write a META file that is both correct and useful.

Any ideas?
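To make the question concrete, here is the kind of META file I have in mind -- only a sketch, with guessed archive names, relying on findlib's mt predicate to select a thread-aware build:

```
# Hypothetical META sketch for coThread; the archive names are guesses.
description = "coThread library"
version = "0.1"
requires = "unix,threads"
archive(byte) = "coThread.cma"
archive(native) = "coThread.cmxa"
# Thread-aware variants, selected when the mt predicate is active:
archive(byte,mt) = "coThread_mt.cma"
archive(native,mt) = "coThread_mt.cmxa"
```

Whether shipping separate mt archives is the right split for coThread is exactly the part I am unsure about.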

OCaml 3.10.0 transition is still ongoing

One month later... the OCaml transition is in good shape.

In fact, the transition was almost finished after 15 days of work by the OCaml task force (i.e. around September 20th). The OCaml task force is now waiting for packages to enter testing. The transition is blocked because there is also an ongoing GTK transition that crosses our path.

In order to enter testing, some packages will be removed: ocamldbi, regexp-pp... Zack ran a small poll to see if there were any reasons to keep them, since they don't compile with 3.10.0. After a week, we decided to go ahead and remove these packages.

With the transition there will be some small changes:

  • cameleon has been upgraded to 1.9.18 (+ some svn corrections)
  • camomile is now at version 0.6.0
  • for OCaml developers, we are now trying to ship, as much as possible, ocamldoc-generated documentation with every XXX-dev package (referenced as XXX-ocamldoc-apiref in doc-base)
  • arm and ia64 are buggy and prevent some packages (felix, camomile) from building; these architectures won't be native anymore (OCaml will be shipped without ocamlopt on these arches)

Thursday, September 6 2007

Captcha and Spamtimeout

After some holidays, I was really frightened by the amount of spam I got on this blog.

As usual, I needed to take "not good" decisions to stop it. So I have activated the captcha and spamtimeout plugins for Dotclear. I hope that this time it will work -- and that spam will stop filling this blog.

OCaml 3.10.0 transition is ongoing

After this long summer without posting (holiday time)... I am back to work.

The Debian OCaml Task Force has decided to migrate to OCaml 3.10.0. We did a full round of tests in the experimental repository before deciding to submit a request to the Debian release managers. After waiting more than one week for the release managers to give us permission to initiate the transition, we took the decision to do it!

Now people can follow our progress using this web page or this one. On the status page, only packages in unstable should be taken into account (packages in experimental were the first round of tests).

During this transition, Zack has also decided to create a common scheme for ocamldoc-generated API documentation. For now, every library package that I upload contains this html/api location. This will allow building an API bookshelf for OCaml libraries (more information on the policy page).

Since this morning, OCaml 3.10.0 has been built and installed on every arch... More packages to come.

Thursday, June 28 2007

Debconf 7 - That was one week ago

The return from Debconf 7 with Lunar was at least as interesting as the whole conference. I really enjoyed this week. The only problem was that my luggage got lost at Edinburgh or Paris airport... It took two days to get it back.

The things I remember most clearly from the end of last week are several talks/BOFs about Debian debtags and general archive testing. There are really great QA ideas there. I decided to give sbuild a try, to set up some automated testing of OCaml packages.

Another point is that I realized how many different people are involved in Debian. The idea of "friends of Debian" (or debian-community.org) could really be a great thing to give some official status to all these people involved in Debian.

Tuesday, June 19 2007

Comments are open again

Trying spamplemousse, a plugin for Dotclear... Hope this works.

Debconf 7 - Already 3 days

A lot of talks so far:

  • Welcome talk
  • Bits from the DPL
  • SE Linux for dummies
  • Data mining popcon
  • Debian installer an update
  • Dependency based boot sequence
  • Rewriting the Policy to be machine interpretable
  • Debian Release Management
  • Debian Live
  • Resurrecting "cruft"
  • OpenStreetMap
  • Popcon BOF

I really appreciated the ones about popcon. I think that today, searching for packages inside Debian is a pain. One of the ways to make Debian more sexy is to show what is best in Debian: a huge number of (more or less) well-maintained packages... This is a real advantage over other distributions (think Red Hat or Mandriva), which have a lot of scattered package sources that are not bound together and are most of the time uninstallable.

The idea behind using popcon data to propose more accurate results is great. I think that this idea, combined with debtags, should enable our users to get really good results when searching for a particular package. Another idea would also be interesting: adding hardware information to popcon. This would enable even more precise queries (e.g. which TV viewer should I use, considering that my card is XXX). This could save a lot of time for many people who do not know exactly which program to use with their hardware. But coupling package and hardware data in popcon could lead to a privacy problem...

Anyway, I think that searching a 15000-package database is not efficient enough in Debian today (but I must agree that using debtags/ara is already a good way to search).

Saturday, June 16 2007

Debconf 7 - First impression

Today I attended two talks: HP's relationship with Debian, and DAK future directions.

The talk about DAK led me to meet some well-known people in a technical setting. Many ideas were exchanged, in particular one about the possibility of creating staging areas in experimental! I think this would be really great and would enable Debian to work in a more team-synchronized manner.

Going back to my own experience with OCaml packaging, I think this would really help OCaml people to do softer transitions. The big problem of team work is that the playground is unstable. There would be no problem with that -- except when you need to do a big transition that will break most of the team-maintained packages. For example, uploading a new OCaml package will break almost all OCaml library packages. You must at least binNMU all packages, and most of the time you have to patch/upload new upstream versions. Such a transition makes all the OCaml packages uninstallable/unusable for a time. This can be quite long when there is a big problem with a particular package (thinking of coq and mldonkey).

People should also consider that this kind of staging area must:

  • be easy to set up (just give a name and a list of GPG keys of the people who are allowed to upload)
  • configure a list of arches to build (for team-maintained packages, chances are that you need at least i386, amd64, ppc)
  • define a shorter delay for rebuilding packages/updating the ftp area
  • simulate an upload to unstable by building on all arches in sequence
  • upload packages to unstable, waiting for each to be built on every arch before uploading the next one.

As usual, this kind of thing would be great, but the main problem is the lack of manpower to create it...

Experimenting with Linux-Vserver

After my disappointing Xen experiments, I tried Linux-Vserver. The way it handles virtualization is just a lot simpler than Xen. The drawback is that you only get Linux installations. Everything runs on the same kernel; every guest is separated using security contexts. It has an advantage over a chrooted env (I have also tried this, but it is not worth a blog entry): it can natively use several IPs/hostnames.

It is a lot less fun than Xen: you cannot run Windows concurrently with Linux at native speed. But it is a lot more stable. I did some tests, and they showed me that it has more or less the same stability as the Linux kernel. For now, my server has been running for 9 days. I think this is stable.

Concerning my other needs:

  • I reuse the same X/XDMCP scheme as with Xen
  • I set up a framebuffer (no problem)
  • I can share my sound card (I just have to copy the sound card's dev entries into the "/dev/" of the vserver)
  • the network is stable -- but there are some problems.

Just to give a quick summary of my network problems:

  • some firewall rules are strange to write, because there is no originating NIC device for them
  • you have to limit every duplicated daemon (host/guests) to listen only on its host's IP address
  • eth2 and my USB printer seem to conflict; eth0 and my sound card seem to conflict

I have concerns about the last item! I think it is related to a NIC driver that may not be good enough. I need to investigate this point (i.e. move to Linux 2.6.21).

Monday, May 28 2007

Experimenting with Xen -- End

I spent the last three weeks doing tests concerning the stability and features of Xen. Well, I must say that I am not convinced, as some people are, that Xen virtualization is ready for a stable server.

Just as a reminder, from my previous post: there is no easy way to activate a real framebuffer with Xen. I tried vesafb and intelfb. The first one doesn't work at all; the second one causes a kernel oops (not right at the beginning, you must wait a little for it). So I stuck to the standard console. I had to remove fbgetty because it uses the framebuffer and crashes (oops) with Xen (not at the beginning...).

I continued using my X configuration, except that I moved it to an xdmcp-chooser init script. This helped me "stop" it when my domU was not started. I still have one problem: after a while, if XDMCP fails, it restarts, switching to vt7 at the same time.

Now my real problem: the stability of Xen when playing around with PCI peripherals.

I tried to hide my sound card in dom0 and unhide it in a domU. This sounds pretty nice... But the computer kept segfaulting after 24 hours. So I switched back to the standard, non-Xen configuration. It worked for a week (at least). I also tried upgrading my BIOS and disabling ACPI, APIC et al. -- but that didn't help. Conclusion: my sound card was the problem. In fact, the real problem comes from the fact that the sound card shares its IRQ with the NIC and the IDE controller. When running non-Xen, the kernel sees the conflict and rearranges the IRQs. With Xen and the sound card hidden, no conflict is seen, and it ends in an oops.

Another, not so real, problem: performance.

I run a Courier-based IMAP server, including sqwebmail, a webmail CGI. Running under the Xen configuration, it was almost as slow as on my previous computer (VIA C3 1GHz / 512MB / USB 2.0 HD; my current computer is a Core 2 Duo T7600 / 2GB / SATA drive). When I tested it with the non-Xen configuration, it was twice as fast. The main reason: Courier uses Maildir, which contains a lot of small files. This implies a lot of IO, where Xen is not very efficient.

Conclusion: Xen is not ready for my "production" environment. I think it is a good product for testing things and for consolidating servers which are not bound to hardware components (sound card, NIC). I don't think Xen is a good solution for "hardware" isolation.

As usual, I will use my favorite development scheme, KISS (Keep It Simple and Stupid): build chrooted environments.

Tuesday, May 1 2007

Experimenting with Xen

As a personal project, I decided to set up a fanless PC which will host a Debian GNU/Linux Etch server and a Sid desktop. Well, to be honest, there is no real reason, apart from the fact that I want to test Xen, just to see if I can do something with it...

First problem: Xen doesn't allow easy access to the graphics adapter in guest domains (anything != dom0). This is a real problem. I want dom0 to be set up with a minimal Etch environment; I don't want something that will need a lot of updates. If, as I have decided, one of the domUs is my Etch server, I cannot make it rely on something less stable. Most of the time, the solution is to run a desktop in dom0, so there is no problem accessing the graphics adapter. Since I chose a minimal Etch, I really don't think it would make a good desktop environment. Another solution is to run an X server and use VNC (through Xvnc) in the domU. You can also use SDL, and even the framebuffer (but this option is not in the Etch Xen release). I decided to do something different ;-) The big picture of my solution: dom0 is an X terminal and connects through XDMCP to the domU. It is pretty straightforward, and works well:

  • write a good xorg.conf file (I use vesa as the driver, for now) in dom0
  • install gdm in the domU
  • activate XDMCP in gdmsetup (I do an "ssh -X domu.home.org" from a running computer with X, but this can also be done in gdm's configuration file)
  • add "x:23:respawn:/usr/bin/X vt7 :0 -dpi 100 -indirect domu.home.org" to dom0's /etc/inittab
  • reboot and it works (or restart gdm in the domU and "init 4 && init 2" in dom0)

Next step: activate the framebuffer and delegate most of the hardware to the domU.

Thursday, April 19 2007

They found me... again! Comments are closed...

I am amazed: this blog was spammed. I had heard that it was possible, but I don't understand why they did it to my blog. Anyway, for now comments are closed. Mail me if you are not a spammer!

Tuesday, April 17 2007

From Sarge to Etch

I spent a whole evening last week doing this awaited transition. Of course, I only use stable for my "server". This server is an OpenBrick NG, using a VIA C3 Nehemiah with 2 Fast Ethernet ports, 1 Gigabit Ethernet port and 2 USB 2.0 hard disks. This server is the sensitive part of my network. I spent a lot of time configuring all the applications on this server and I didn't want to lose all that.

It took me 4 hours to complete the task... The main problem was not with the upgrade itself -- which was smooth between sarge and etch: I just had to replace sarge with etch in my /etc/apt/sources.list. It was with the configuration merge after the dist-upgrade, and some problems related to udev.

I decided to do a "smart" merge between the new configuration files and my old ones. By smart, I mean losing as few as possible of the "new options" used in the maintainer configuration files while trying to keep things working as before. For most of the applications it was pretty straightforward.

I only encountered one big problem, with CUPS. /etc/cups/cupsd.conf has changed a lot, so I needed to update big parts of it in order to make it work just as before. The next problem with CUPS was when I wanted it to use my printer... It kept telling me that my printer didn't exist. I tried to understand for a while, in particular thinking it had something to do with the fact that USB printers are now addressed directly by the printer's USB id (which is more powerful and much more flexible). But the problem was not there. CUPS was unable to find any USB printer because no device node existed for it, which is a problem of udev running on a too-old kernel. Udev was the center of the problem! I hadn't noticed that during the upgrade it had issued a warning message about my kernel version... It was too old; I needed to reboot the computer with a newer kernel... For this once, I was forced to reboot my server. But after that, everything worked well.

The final step of this migration was to get rid of every package that came from backports.org or that I built myself from unstable packages. I began by removing every unwanted source from /etc/apt/sources.list. Then I used "apt-cache dump" and matched every package which is only referenced from /var/lib/dpkg/available. At the end of the process, I got a list of packages which came from outside Debian etch.
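A little helper in the spirit of that last step -- a hedged sketch, not the exact commands I used: it asks "apt-cache policy" about each installed package and flags those whose version table mentions no repository URL, i.e. no configured source provides them:

```python
# Sketch only: a package available from a configured archive shows at least
# one URL line in its "apt-cache policy" version table; a purely local
# package shows only /var/lib/dpkg/status.
import subprocess

def only_local(policy_output):
    """Return True if the policy output references no repository URL."""
    return not any("://" in line for line in policy_output.splitlines())

def foreign_packages(packages):
    """List the given installed packages that no configured source provides."""
    result = []
    for pkg in packages:
        out = subprocess.run(["apt-cache", "policy", pkg],
                             capture_output=True, text=True).stdout
        if only_local(out):
            result.append(pkg)
    return result
```

Feeding it the output of "dpkg-query -W -f '${Package}\n'" would give roughly the same list as the available-file matching described above.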

Monday, March 12 2007

wiclear and GNU arch

I spent my whole Sunday playing with GNU arch (tla). While I was looking at distributed SCMs, I had a look at it... At the time, I found it to be a bunch of very funny commands. I was right ;-)

OK, I am not being fair to tla. I think it has some very powerful features. The main problem is the "learning curve" -- to borrow darcs terminology. The problem has to do with the complex set of commands which are available. I think you need a lot of training before you begin to understand how tla works...

Another problem: tla remembers what I have registered... I need to remove files in ~/.arch-params every time I want to do crappy things. Not easy for testing purposes ;-) But anyway, after one day, I got a copy of the wiclear tla repository and created my own branch for development.

I submitted a patch against wiclear to include some of my changes. After this, I will maybe begin to create an OpenID authentication mechanism.

Saturday, March 10 2007

wiclear 0.11.1

Busy weeks, but I have found enough time to upgrade wiclear to version 0.11.1.

Now I have finished merging all the things I had modified in version 0.10.1. I am beginning to clean things up so I can submit a real patch to the upstream author.

He changed one thing which gave me a lot of work: the history diff! Now it uses a PEAR module to do the diff. It is better ;-)

Wednesday, February 7 2007

ocaml-dbus, ocaml-inotify and ocamlp3l

Reading the OCaml mailing list is a really great source of good ideas.

I had been thinking for some time about having a binding to DBUS in OCaml. This can give easy access to HAL, which is a good tool to detect most of the interesting hardware on a computer. I think this is a good way to write efficient scripts (as I am experimenting with Perl to detect my DVD writer). I think this deserves a Debian package ;-)

While browsing for ocaml-dbus, I saw that the upstream author also released a binding to inotify... This is also something I am interested in. I was thinking of building an inotify daemon to launch commands when files appear in a directory. Inotify can detect this kind of event.

And yesterday, while reading ocaml-beginners, someone talked about ocamlp3l. I had already seen this software, but at the time I had no interest in it. As of today, though, I think it can be a great way to unravel the power of a dual-core computer! OCamlp3l helps people build parallel applications. I want to give it a try one day (when I have finished packaging everything).

For now, there is an RFS for ocamlp3l, but all these packages need to be authorized by their upstream authors.

My next package:

SCM? What about darcs?

How I Learned to Stop Worrying and Love the SCM.


Monday, February 5 2007

Solution Linux 2007

Last week I was at Solution Linux 2007. I met some other Debian developers, and it is good to meet members of the community. I am a little bit disappointed by the attendance at this event. It was not as crowded as it used to be. I think Linux is no longer the top "hype" thing of the year (but I heard "web 2.0" at least twice during the show... the "hype" was there).

During a discussion afterwards, I realized that some people think the town of associations ("village des assoces" in French) was not open enough to non-technical users. This guy thinks that open-source people don't know how to sell OSS.

I am wondering if the spirit of OSS is to "sell" anything to the public. Most of the people working on it are technical people; there is no real open-source "marketing" staff among them. I think that the core is essentially non-marketing people (I really do think no one will ever succeed in selling "cron" or "at" to any Windows user -- and they are essential components of a GNU/Linux distribution).

After all, what does OSS have to gain by doing "marketing" things? More users? To my mind it is nonsense; users coming to GNU/Linux, for example, must have personal motivation. It is not like selling a box with a "Windows Vista compatible" sticker on it. They will lose a lot of applications, habits and eye-candy. They need a stronger reason to do it. This must be a real "motivation". I won't try to convince someone that there is no problem using OSS -- because nothing is perfect.

In conclusion, I think that the town of associations was what it should be: a place where OSS developers can meet.

Not building website (day 36)

One week and no real great changes on the website. I think I am coming to a sort of pause, to see what is really needed! Maybe I won't change the face of every single external application to fit my needs.

Violaine is putting our photos online. I am using the wiki to enter some personal information. We are beginning to use the website. I think it will stay as it is for a while, until I again have the time to change things.
