Mildred's Website



Articles from 61 to 66

Thu 02 Aug 2012, 11:15 AM by Mildred Ki'Lya comp dev en git

I was looking at Git to see which features may land in the next few releases, and I found the following things:

Git-SVN will be completely redesigned

If you have worked with git-svn, you probably know that the git-svn workflow has nothing to do with git's. Basically, you just get the svn history and have to use git-svn to push changes back to the Subversion repository. You can't use git push, and that's really annoying.

Recently, the git-remote-helpers feature was added. It allows git to interact with any kind of remote URL, using a specific git-remote-* command. For example, you can already use Mercurial this way (according to git-remote-hg):

Git allows pluggable remote repository protocols via helper scripts. If you have a script named "git-remote-XXX" then git will use it to interact with remote repositories whose URLs are of the form XXX::some-url-here. So you can imagine what a script named git-remote-hg will do.

Yes, this script provides a remote repository implementation that communicates with mercurial. Install it and you can do:

$ git clone hg::
$ cd some-mercurial-repo
$ # hackety hackety hack
$ git commit -a
$ git push
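To see how small such a helper can be: git runs `git-remote-XXX <remote> <url>` for a `XXX::` URL and talks to the helper over a line protocol on stdin/stdout. Here is a sketch of that handshake; the helper name and the single advertised capability are illustrative only, not a working backend:

```shell
# Sketch of a remote helper's main loop (hypothetical helper "demo",
# i.e. what a git-remote-demo executable would do for demo:: URLs).
remote_helper() {
  while read -r cmd; do
    case "$cmd" in
      capabilities)
        # Advertise what we support; a blank line terminates the reply.
        printf 'import\n\n'
        ;;
      '')
        # A blank line from git ends the session.
        return 0
        ;;
    esac
  done
}

# Simulate git's side of the conversation:
printf 'capabilities\n\n' | remote_helper
```

A real helper would also have to answer commands such as `list` and `import` or `fetch`; git-remote-hg is built on exactly this protocol.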

The plan is to do the same with subversion. You could just do:

 $ git clone svn::

Branches might be tricky to implement, so they might not be there at first. But you will get what git-svn already offers, with a much better UI and far more possibilities for the future.


Submodules will be improved a lot

I wish it was already there. From the wiki page, the improvements will be:

As Dscho put it, submodules are the “neglected ugly duckling” of git. Time to change that …

Issues still to be tackled in this repo:

  • Let am, bisect, checkout, checkout-index, cherry-pick, merge, pull, read-tree, rebase, reset & stash work recursively on submodules (in progress)
  • Teach grep the --recursive option
  • Add means to specify which submodules shall be populated on clone
  • Showing that a submodule has a HEAD not on any branch in “git status”
  • gitk: Add popup menu for submodules to see the detailed history of changes
  • Teach “git prune” the “--recurse-submodules” option (and maybe honour the same default and options “git fetch” uses)
  • Better support for displaying merge conflicts of submodules
  • git gui: Add submodule menu for adding and fetching submodules
  • git status should call “git diff --submodule --ignore-submodules=dirty” instead of “git submodule summary” for providing a submodule summary when configured to do so.
  • Add an “always-tip” mode
  • Other commands that could benefit from a “--recurse-submodules” option: archive, branch, clean, commit, revert, tag.
  • In the long run should be converted to a rather simple wrapper script around core git functionality as more and more of that is implemented in the git core.

Submodule related bugs to fix

  • Cherry picking across submodule creation fails even if the cherry pick doesn’t touch any file in the submodules path
  • git submodule add doesn’t record the url in .git/config when the submodule path doesn’t exist.
  • git rebase --continue won’t work if the commit only contains submodule changes.

Issues already solved and merged into Junio’s Repo:

  • Since git 1.6.6:
    New --submodule option to “git diff” (many thanks to Dscho for writing the core part!)
    Display of submodule summaries instead of plain hashes in git gui and gitk
  • Since git 1.7.0:
    “git status” and “git diff*” show submodules with untracked or modified files in their work tree as “dirty”
    git gui: New popup menu for submodule diffs
  • Since git 1.7.1:
    Show the reason why working directories of submodules are dirty (untracked content and/or modified content) in superproject
  • Since git 1.7.2:
    Add parameters to the “--ignore-submodules” option for “git diff” and “git status” to control when a submodule is considered dirty
  • Since git 1.7.3:
    Add the “ignore” config option for the default behaviour of “git diff” and “git status”. Both .git/config and .gitmodules are parsed for this option; the value set in .git/config will override that from .gitmodules
    Add a global config option to control when a submodule is considered dirty (written by Dscho)
    Better support for merging of submodules (thanks to Heiko Voigt for writing that)
  • Since git 1.7.4:
    Recursive fetching of submodules can be enabled via command line option or configuration.
  • Since git 1.7.5:
    fetch runs recursively on submodules by default when new commits have been recorded for them in the superproject
  • Since git 1.7.7:
    git push learned the --recurse-submodules=check option which errors out when trying to push a superproject commit where the submodule changes are not pushed (part of Frederik Gustafsson’s 2011 GSoC project)
  • Since git 1.7.8:
    The “update” option learned the value “none” which disables “submodule init” and “submodule update”
    The git directory of a newly cloned submodule is stored in the .git directory of the superproject, the submodules work tree contains only a gitfile. This is the first step towards recursive checkout, as it enables us to remove a submodule directory (part of Frederik Gustafsson’s 2011 GSoC project)
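Several of the features listed above are easy to observe in a throwaway sandbox. The script below (paths and identities are placeholders) builds a superproject with one submodule and shows the "dirty (untracked content)" reporting added in 1.7.1; the `protocol.file.allow` override is only needed on recent git versions that restrict file-based submodule clones:

```shell
# Throwaway sandbox: a superproject with one submodule, to observe how
# "git status" reports submodule dirtiness (identities are placeholders).
set -e
tmp=$(mktemp -d)

# Create a submodule repository with a single empty commit.
git init -q "$tmp/sub"
git -C "$tmp/sub" -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m 'initial commit'

# Create the superproject and register the submodule.
git init -q "$tmp/super"
cd "$tmp/super"
git -c protocol.file.allow=always submodule add -q "$tmp/sub" sub
git -c user.email=me@example.com -c user.name=me commit -q -m 'add submodule'

# Untracked content inside the submodule makes it "dirty" in the superproject.
touch sub/scratch
git status
```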

And the GSoC page:

The submodule system of git is very powerful, yet not that easy to work with. This proposed work will strengthen the submodule system even more and improve the user experience when working with submodules.

Git repository:

Midterm evaluation: passed

Progress report / status:

  • [GSoC 11] submodule improvements at git mailing list
  • [GSoC 11 submodule] Status update at git mailing list
  • [RFC PATCH] Move git-dir for submodules at git mailing list

Tue 11 Sep 2012, 11:33 PM by Mildred Ki'Lya

This is a test for character encoding (2)

Thu 13 Sep 2012, 03:30 PM by Mildred Ki'Lya comp dev en idea wwwgen

I started writing wwwgen: a website generator that uses redo for its dependency tracking. Unfortunately, I might not have taken the correct approach to the problem. I reuse the webgen concepts, and that might be a bad idea.

Specifically, webgen (and my first version of wwwgen) are bottom-up systems (you take the sources and build everything they can generate). The problem is that redo itself is top-down (you take the target and build it, building sources as they are needed), and I tried to make the two match. It's very difficult, and impossible to do with a clean design.

What I'd like to have is a simple wwwgen binary that generates a HTML file from a source file. Let's imagine how it could work:

  • If the source file is a simple page with no templates, just generate the HTML and this is it

  • If the source file is a simple page with a template, redo-ifchange the template source and build the HTML combining the two

  • If the source file is an index, we have a problem because multiple outputs are generated. Redo doesn't support this case and we must find a way to make it work.

So, we have a problem here ...

Now, we have another problem. Specifically, my source file is called and I want my output file to be title/index.html. In webgen, this is implemented by a configuration telling it to build in title/index.html.

There is a solution to solve both problems at once: the wwwgen command creates an archive (the format needs to be defined; it could be tar, or different YAML documents in the same file, for example). Then, the build process would be:

 # Top-level build script (presumably all.do): build every *.gen target,
 # then unpack the generated archives
 find src -name "*.src" \
   | sed 's/src$/gen/' \
   | xargs -d '\n' redo-ifchange
 find src -name "*.gen" \
   | xargs -d '\n' wwwgen unpack

 # Per-target script (presumably default.gen.do): in redo, $2 is the target
 # name without its extension and $3 is the temporary output file
 redo-ifchange "$2.src"
 wwwgen --redo-dependencies -o "$3" generate "$2.src"

wwwgen generate would parse the source file and generate an archive that will be unpacked later by wwwgen unpack. Let's see how it can work:

  • The source file can choose where it unpacks, relatively to the directory where the source file is

  • If the source file is an index, it will redo-ifchange the other source files for the index and redo-ifchange the template, generate multiple pages packed together.

  • If the source file is a tag tree (a special source that doesn't output anything on its own but create index files dynamically), then it parses every child to find a canonical list of tags and the paths they refer to. Then, it creates the index files. Unfortunately, those index files will never be compiled until next build.
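The archive format is deliberately left open above. One possible convention (purely hypothetical, not wwwgen's actual format) is a plain-text pack where a `=== path` line starts each output file, which makes the unpack step a one-liner:

```shell
# Toy pack/unpack sketch: "=== <path>" starts a new output file, every
# other line is content for it (the format is an assumption, not wwwgen's).
set -e
tmp=$(mktemp -d); cd "$tmp"

# A generated archive holding two output files:
cat > page.gen <<'EOF'
=== title/index.html
<h1>Hello</h1>
=== title/feed.xml
<feed/>
EOF

mkdir -p title   # awk will not create directories for us
awk '/^=== /{f=$2; next} {print > f}' page.gen
```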

How can we improve the design to be able to create source files dynamically?

There are different views to the problem:

  • pages, index and tags should all generate all the output files they are related to. It means that index files should be able to generate pages, and tags should be able to generate indexes and pages.

  • pages should generate the output file, index should generate pages and feeds and tags should generate index.

  • mixed solution (the one described): pages generate their output file, indexes generate output files as well, and tags generate indexes.

How can we generate source files on the fly:

  • have a predefined compilation order: first tags, then index and lastly feeds and pages.

  • rebuild everything until no more source files are generated. We might build unnecessary things.

I prefer the second solution, which is more flexible, but we need a way to avoid building things twice. For example, it's not necessary to build a page if its source is going to be regenerated in the next phase.

Very simply, the generated page can contain a link to the index source file that generated it, and when generating the page, redo-ifchange is run on the index file.

Next question: what if a tag is deleted? The corresponding index page is going to stay around until the next clean. The tag page should keep a list of the index files it generated and delete them when a tag is no longer detected. And deleting an index should not be done using rm, because the index will need to delete the pages it generated. The best solution would be to integrate with redo to detect these files.
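One way to sketch that bookkeeping without touching redo internals: each build writes the list of files it generated to a manifest, and the next build deletes whatever dropped out of it. Everything below (file names, layout) is hypothetical:

```shell
# Manifest-based cleanup sketch: files listed in the previous build's
# manifest but not in the current one are stale and get removed.
set -e
tmp=$(mktemp -d); cd "$tmp"

# Pretend the previous build generated two files...
mkdir -p title tag/foo
touch title/index.html tag/foo/index.html
printf '%s\n' tag/foo/index.html title/index.html | sort > manifest.old

# ...and the current build only regenerated one (the tag was deleted).
printf '%s\n' title/index.html | sort > manifest.new

# Lines only in the old manifest are stale outputs.
comm -23 manifest.old manifest.new | xargs -d '\n' rm -f
```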

The build scripts now are:

 srclist="$(find src -name "*.src")"
 while [ "$oldsrclist" != "$srclist" ]; do
   oldsrclist="$srclist"
   echo "$srclist" \
     | sed 's/src$/gen/' \
     | xargs -d '\n' redo-ifchange
   srclist="$(find src -name "*.src")"
 done

 find src -name "*.gen" \
   | xargs -d '\n' wwwgen unpack

 redo-ifchange "$2.src"
 wwwgen --redo-dependencies -o "$3" generate "$2.src"

Fri 19 Oct 2012, 09:09 PM by Mildred Ki'Lya cobra fr



21 October 2012

It is time to act! It is time to take back control of our destiny! Therefore, we will gather in groups large or small, individually or in couples, at the moment of the Day of Decision at the start of this second window of opportunity:

 ________ 21-12-2012 - Day of Contact
 \      /
  \    /
   \  /
    \/    22-11-2012 - 11th bridge 11:11
   /  \
  /    \
 /______\ 21-10-2012 - Day of Decision

The people will gather to take a decision for the liberation of our planet from the tyranny of the forces of darkness, so that, for the first time in our history, we will have the chance to create our own destiny as free citizens of the Earth.

Our mass effort on this day will be the spark that activates the Plan so that it can come to fruition. Our activation on this day is our declaration of independence and freedom. Share this text around the whole world! Post it on your websites and blogs. If you know of an alternative media outlet, you can send it to them. You can create Facebook groups for your local community doing this in your own part of the world. You can make a video about it and put it on YouTube.

The critical mass for this activation to have the intended effects is roughly 118,000 people worldwide. Considering that human powers of concentration are not perfect, we need 144,000 people as the critical mass.

But reading about this and taking the decision for yourself are completely different things. I would like to encourage as many people as possible to actually take part, even though it may be early morning in your part of the world.

The cabal understands very well the power of free will and of decisions. They have used their own free will to take decisions that are negative for humanity. That is how they have been able to keep us under their control for so long.

Now humanity needs to take that power back. We need to use our own free will to take decisions that are positive for ourselves. When the critical mass of people doing this is reached, the change will happen.

We will do this activation at the same moment on 21 or 22 October. The exact time for the different time zones is as follows:

  • 3:30 pm HAST October 21st (Hawaii)
  • 5:30 pm AKDT October 21st (Alaska)
  • 6:30 pm PDT October 21st (Los Angeles)
  • 7:30 pm MDT October 21st (Denver)
  • 8:30 pm CDT October 21st (Houston)
  • 9:30 pm EDT October 21st (New York)
  • 10:30 pm BRT October 21st (Rio de Janeiro)
  • 2:30 am BST October 22nd (London)
  • 3:30 am CEST October 22nd (Paris)
  • 3:30 am SAST October 22nd (South Africa)
  • 4:30 am EEST October 22nd (Bulgaria)
  • 5:30 am MSK October 22nd (Moscow)
  • 7:00 am IST October 22nd (India)
  • 9:30 am CST October 22nd (Beijing)
  • 10:30 am JST October 22nd (Tokyo)
  • 11:30 am AEST October 22nd (Sydney)

If you are not in the list, you can find a map of time zones at the following address:

Instructions:

  1. Relax your mind and body by following your breath for a few minutes.

  2. Visualize a pillar of electric blue light emanating from the Galactic Central Sun, passing through your body toward the centre of the Earth. Keep this pillar of light active for a few minutes. Then take an unconditional decision that our planet WILL be liberated and that its people WILL become free.

  3. Visualize how YOU can contribute to this liberation process. Discover your talents and take a decision on how you will use them for the purpose of liberating the people of the planet. Decide that you will support the other people dedicated to the same goal, so that together we are strong.

After the Day of Decision, you will have many opportunities to apply your decisions in your daily life. You will be guided by your inner voice on how to go about it. Know that your commitment will be tested in real conditions. External circumstances or negative non-physical entities may try to make you waver. The key is simply to hold to your decision, whatever external circumstances present themselves. Every person who takes a positive decision for the liberation of our planet also makes an invaluable contribution toward the victory of the light.

Disclaimer: the activation day (21 October) is most probably NOT the day the Event will occur.

More information on this second window of opportunity:

Information on the Day of Decision:

Translated from the English by Mildred. Source:

Sun 28 Oct 2012, 02:48 PM by Mildred Ki'Lya test

This is a bold test to see if I can post e-mails using the HTML markup from my mail user agent, and to see if attachments are included in the post.

Attached file fffd87df01a63284ff7eb197cb8e3035.png

Wed 28 Nov 2012, 02:49 PM by Mildred Ki'Lya comp dev en lisaac lysaac

The overlooked problem

Let me explain why I think Lisaac is broken and why I stopped working on it. The fundamental concept behind it is completely broken, and I never could solve the problem. First, let me explain a few things about Lisaac:

  • It is statically typed (like Eiffel)
  • It is prototype based (like Self)

It may not seem like it, but those two things are mutually exclusive unless something is done to reconcile them. Being statically typed means you know the type (or a parent of the real type) at compile time. Being prototype-based means that the type can change completely.

Imagine you have a CUCUMBER that inherits from VEGETABLE. If at some point during run time the cucumber inherits from SOFTWARE (yes, Cucumber is the name of a software project as well), you have a problem. Yet this is not the problem I want to talk about.

Lisaac solves this problem by only letting you assign a child of the declared parent type as a parent. In the following object:

 Section Header
   + name := CUCUMBER;
 Section Inherit
   + parent :VEGETABLE := ...;

So, you're forbidden to assign a SOFTWARE in the parent slot because SOFTWARE is not a child of VEGETABLE. But you're allowed to assign a GREEN_VEGETABLE. So now it becomes:

 Section Header
   + name := CUCUMBER;
 Section Inherit
   + parent :VEGETABLE := GREEN_VEGETABLE;

Now, let's say you send a message to your cucumber, and through the magic of inheritance, you end up in a slot of the parent. That is, you are executing code that is located in GREEN_VEGETABLE. Then, you have a problem.

 Section Header
   + name := GREEN_VEGETABLE;
 Section Public
   // inherited from VEGETABLE
   - do_something <- paint_it_green;
 Section Private
   // specific to GREEN_VEGETABLE
   - paint_it_green <- ( "green".println; );

In the code above, if you are executing the do_something slot and have SELF = CUCUMBER, you can't possibly call the paint_it_green slot because it does not exist in CUCUMBER, and the Lisaac compiler will refuse to compile your code. To explain it another way, the type of Self is not compatible with GREEN_VEGETABLE, and this is a problem for this is the code we are executing. You can't easily solve this problem.

In the Self language, a similar situation would work because Self is not statically typed. It would have failed to find paint_it_green in the cucumber and called the parent, which would have been a green vegetable. Somehow, the type of the cucumber would have changed at runtime, which is incompatible with a type system that can only compute types at compile time (and keeps them immutable at run time).

So, I stopped working on Lisaac

I tried to start a new project, Lysaac, but ran into the same problem once the compiler was mature enough to start getting into inheritance. So I stopped again. Life getting in the way didn't help.

A new hope

I tried to look into virtual machines, convincing myself that compiled languages couldn't possibly satisfy such a need for dynamism in the language. I looked at Self for the first time. I tried to build a virtual machine that was fully concurrent. I looked at the Go language because I knew it has nice concurrency primitives ... and I found illumination.

The Go language feels like a dynamic language, but it is compiled. I always wondered how it could implement duck typing, and I found out. I also realized that the difference between a virtual machine with a JIT compiler and a runtime for a compiled language that includes a compiler is very narrow. Basically, nothing prevents any language from being compiled ahead of time instead of being JIT-compiled. You just have to include a compiler in the standard library.

Ironically, that's what Common Lisp does.

A new language

I like how Go implements interfaces, and I want to do the same. In this new language, an object is composed of :

  • an opaque data word (pointer-sized)
  • a pointer to an interface

An interface is a collection of pointers to functions. There is always a pointer for the fallback implementation and a pointer for the cast operation (to cast an object to a new interface).

I want to use a Smalltalk/Io-style syntax. So the following will represent an object with one variable and one slot:

 obj := {
   var := obj slot;
   slot <- { ... };
 };

The data word of obj would be a pointer to a struct containing the variable "var". The interface of obj would be:

  • a function pointer for the fallback implementation that throws an error
  • a function pointer for the "cast" operation
  • a function pointer for the "slot" slot

I haven't figured out yet how to pass parameters to functions and return objects. This kind of thing will require complying with predefined interfaces (we need a way to tell which object type is returned by a slot, for example). Interfaces would probably be defined at compile time and available as constants in code at run time (to be given as a parameter to the cast slot).

If I want to be able to compile code on the fly, I need to include the LLVM library. So I need to implement the language in a language that is compiled using LLVM (so bootstrapping will be easier).