Planet Lanedo

October 14, 2015

Tim Janik

DevLog: Setting up Continuous Integration


I’ve spent the last week setting up Rapicorn and Beast with Travis CI, a free continuous integration service for GitHub. Since Travis CI is only available for GitHub, the Beast Git repository (hosted elsewhere so far) had to be moved (cloned) to GitHub.

Luckily, Git allows pushing to multiple remotes:

git remote add all
git remote set-url --add --push all
git remote set-url --add --push all ssh://
git remote show all
* remote all
  Fetch URL:
  Push  URL:
  Push  URL: ssh://
  HEAD branch: master
  Remote branch:
    master new (next fetch will store in remotes/all)
  Local ref configured for 'git push':
    master pushes to master (up to date)

Now the following push will update both repositories:

git push all master

Also, ‘git push’ can be configured to push to ‘all’ instead of ‘origin’ by default:

git checkout master && git branch -u all/master
git push 
  038d442..22c807a master -> master
 To ssh://
  038d442..22c807a master -> master

The repos now contain a file .travis.yml with the complete build instructions; these need to be kept up to date whenever any of the build dependencies change.

By default, travis-ci sets up Ubuntu 12.04 boxes for the continuous builds, but that’s way too old for most dependencies. Luckily there’s a beta program available to use Ubuntu 14.04 ‘trusty’, that can be selected with “dist: trusty”. The g++-4.8 compiler on trusty is still too old to build Beast, so the CI setup currently installs g++-5 from ppa:ubuntu-toolchain-r/test.
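A minimal .travis.yml along these lines might look as follows. This is an illustrative sketch, not the actual Rapicorn/Beast configuration; the `script` step in particular is an assumption, while `dist: trusty` and the toolchain PPA match what is described above:

```yaml
language: cpp
dist: trusty                      # opt in to the Ubuntu 14.04 'trusty' beta environment
addons:
  apt:
    sources:
      - ubuntu-toolchain-r-test   # ppa:ubuntu-toolchain-r/test, provides g++-5
    packages:
      - g++-5
script:                           # hypothetical build steps
  - make && make check
```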

As a result, we now have automated test builds running on travis for the Rapicorn and Beast repositories that are triggered on each push command. After each build, the build bot reports success to the #beast IRC channel, and the current status can also be found via the “Build Status” buttons on github: Rapicorn Beast.


by Tim Janik at October 14, 2015 08:11 AM

July 02, 2015

Tim Janik

DevLog: Rapicorn’s IDL moving into Beast

Rapicorn 'visitor' branch

Trying to keep it up, here’s an update on recent developments in Rapicorn and Beast.

Git Branches

For now, Rapicorn and Beast are using Git branches the following way:

  • Topic branches are created for each change. Where possible, commits should compile and pass all tests (i.e. pass make check installcheck).
  • Once completed, topic branches are merged into the master branch. For intermediate merges of huge branches, I’ve recently been adding [ongoing] to the merge commit message. As an aside, branch merges should probably be more elaborate in the future to make devlog articles easier to write and potentially more accurate.
  • The master branch must always compile and pass all tests.
  • OpenHub: The OpenHub repo links have been adjusted to point at Rapicorn’s and Beast’s master branches. Because of problems with spammers and a corresponding reimplementation, code statistics updates on the OpenHub platform are currently stalled, however.

Hello and goodbye clang++

Rapicorn C++11 code currently compiles with g++-4.7 and upwards. An initial attempt was made at making the C++11 code compile with clang++-3.4 but the incompatibilities are currently too numerous. A few good fixes have come out of this and are merged into master now, but further work on this branch probably has to wait for a newer clang++ version.

New Widgets

Rapicorn is growing more widgets that implement state rendering via SVG element matching. Recent additions are:

  • LayerPainter – A container that allows rendering widgets on top of each other.
  • ElementPainter – A container that displays state dependent SVG image elements.
  • FocusPainter – An ElementPainter that decorates its child according to focus changes.

IDL Improvements

Several changes around Rapicorn’s IDL compiler and support code made it into master recently:

  • The IDL layer got bind() and connect() methods (on the ObjectBroker interface). This models the IDL setup phase after the zeromq API. Beast makes use of this when setting up IDL interface layers in the UI and in BSE.
  • The Python binding was rewritten using Cython. Instead of invoking a heap of generated Python glue code and talking to the message passing interfaces directly, the Python binding now sits on top of the C++ binding. This makes the end result much faster, less complex to maintain and more functional with regard to the offered Python API. As an added bonus, it also eases testing of the C++ binding.
  • And just to prove the previous point, the new Cython port uncovered a major issue lurking in the C++ IDL handling of objects in records and sequences. At least since the introduction of remote reference counting, client side object handles and server side object references are implemented and treated in fundamentally different ways. This requires records (struct) and sequences (std::vector) to have separate implementation types on the client and server sides. Thus, the client and server types are now prefixed with ClnT_ and SrvT_ respectively. Newly generated typedef aliases are hiding the prefixes from user code.
  • IDL files don’t need ‘ = 0 ‘ postfixes for methods any more. After all, generating non-virtual methods wasn’t really used anyway.
  • The Enum introspection facilities got rewritten so things like the Enum name are also accessible now. This area probably isn’t fully finished yet; for future Any integration, a more versatile API is still needed.
  • Auxiliary information for properties is now accessible through an __aida_aux_data__() method on generated interfaces.
  • Generated records now provide a template method __accept__<>(Visitor) to visit all record fields by value reference and name string. Exemplary visitor implementations are provided to serialize/deserialize records to XML and INI file formats.

BEAST Developments

For the most part, changes in Beast are driving or chasing Rapicorn at the moment. This means that often the tip of Rapicorn master is required to build Beast’s master branch. Here is why:

  • Beast now uses RAPIDRES(1) to embed compressed files. Rapicorn::Blob and Rapicorn::Res make these accessible.
  • Beast now makes use of Rapicorn’s IDL compiler to generate beastrc config structures and to add a new ‘Bse‘ IDL layer into libbse that allows the UI code to interface with Bse objects via C++ interfaces. Of course, lots of additional porting work is needed to complete this.
  • Beast procedures (a kind of ‘remote method’ implemented in C with lots of boilerplate code) are now migrated to C++ methods one by one, which greatly simplifies the code base, but also causes lots of laborious adaptations on the call sites, the UI and the undo system. An excursion into the changes this brings for the undo implementation is provided in DevLog: A day of templates.
  • The GParamSpec introspection objects for properties that Beast uses for GUI generation can now be constructed from __aida_aux_data__()  strings, which enabled the beastrc config structure migration.
  • An explanatory file was added which describes the ongoing migration efforts and provides help in accessing the object types involved.

What’s next?

For the moment, porting the object system in Beast from GObject to IDL based C++11 interfaces and related procedure, signal and property migrations is keeping me more than busy. I’ll try to focus on completing the majority of work in this area first. But for outlooks, adding a Python REPL might make a good followup step. 😉


by Tim Janik at July 02, 2015 10:07 AM

June 27, 2015

Tim Janik

DevLog: A day of templates

Yesterday I spent some 14+ hours on getting a templated undo method wrapper going.
Just to throw it all away this morning.

Here’s what I was trying to achieve. The C version of BEAST implements undo as follows:

// bse_track_remove_tick():
BseTrack *track;
uint tick;
BsePart *part;
bse_item_push_undo_proc (track, "insert-part", tick, part);

That is, it queues an undo step, that if executed, will call the “insert-part” procedure
on a BseTrack object that inserts a BsePart object at a ‘tick’.
This all happens through a varargs interface with lots of magic behind the scenes. In
particular the reference to ‘part’ is tricky. Future modifications to the BseTrack (or
project) may cause the removal and destruction of the BsePart object involved here.
While the execution of future undo steps will re-create a BsePart to be inserted here
before the step at hand is executed, the ‘part’ object pointer will have to be changed
to the re-created one instead of the destroyed one.
To achieve this, bse_item_push_undo_proc() internally converts the ‘part’ pointer into
a serializable descriptor string that allows re-identifying the BsePart object, and the
undo machinery resolves that descriptor before “insert-part” is called.

Now on to C++. I wanted the new pendant in the C++ version of Beast to look like:

// TrackImpl::remove_tick():
TrackImpl *this;
const uint tick;
PartImpl &part;
push_undo ("Remove Tick", *this, &TrackImpl::insert_part, tick, part);


Under the hood that means push_undo() (which is a template method on ItemImpl, a base type of TrackImpl) needs to process its variable argument list to:

  • A) Put each argument into a wrapper structure and store away the argument list (i.e. std::tuple<Wrapper<Args>…>).
  • B) Special case the wrapper structure for objects to store a descriptor internally (i.e. template specialisation on Wrapper<Arg> for Arg=ItemImpl& or derived).
  • C) Copy the wrapped argument list into a closure to be called when the undo step is executed.
  • D) When the closure is called, “unwrap” each of the wrapped arguments to yield its original type (i.e. construct a std::tuple<Args…> from std::tuple<Wrapper<Args>…>).
  • E) When unwrapping an object, resolve the descriptor stored internally (i.e. put more magic into Wrapper<Arg> to yield a valid Arg& object).
  • F) Construct a variable argument call to &TrackImpl::insert_part(…) (i.e. apply a C++ argument pack).

In short, I got A, B, C, D, F working after significant efforts.
A is somewhat straightforward with C++11 variadic template arguments. C can be accomplished with a C++11 lambda capture list, and F involves copying over std::integer_sequence from the C++14 proposals and hacking its std::apply() template to support instance + method calls. Last, D can be implemented in a fashion related to F.
What’s left is B and E, i.e. writing a wrapper that will store and yield ordinary arguments such as int or std::string and convert ItemImpl& derived types back and forth between a string representation.
Probably laborious but doable — or so I thought.

It turns out that because of all the argument and tuple packing hassle (template recursion, integer sequencing and more) involved in implementing A, D, F, it would be hard to pass needed serialization context into Wrapper<>. And what’s much worse is that g++-4.9 started to choke on template errors during the Wrapper<> development, aborting with “confused by earlier errors” after pages and pages of template error messages. clang++-3.4 isn’t yet capable of processing the C++11 used by Rapicorn, so it wasn’t of help here either (I plan on another attempt at porting my C++11 code to be clang++ compatible once I get my hands on a newer clang++ version).
That is, in the end I gave up after an overlong day in the middle of E, everything else having been accomplished. g++-4.9 choking was a major letdown, but probably even more important: I happened to have the necessary state and mood to process multiple pages of template error messages yesterday, yet the same cannot be expected of every future push_undo() user whenever a push_undo() argument mismatches.

This morning, I threw away yesterday’s templating excess and within an hour got an alternative interface to work:

// undoing part removal needs an undo_descriptor because future
// deletions may invalidate and recreate the part object
TrackImpl *this;
const uint tick;
PartImpl &part;
UndoDescriptor<PartImpl> part_descriptor = undo_descriptor (part);
auto lambda = [tick, part_descriptor] (TrackImpl &self) {
  PartImpl &part = self.undo_resolve (part_descriptor);
  self.insert_part (tick, part);
};
push_undo ("Remove Tick", *this, lambda);

That is, this interface is fully type-safe, but the ‘part’ wrapping has to be done manually, which involves writing a small lambda around TrackImpl::insert_part(). If any argument of the lambda or push_undo() calls is erroneous, the compiler will point at a single failing variable assignment in the implementation of push_undo<>() and list the mismatching arguments.
That is much more digestible than multiple template recursion error pages, so it’s a plus on the side of future maintenance.

The short version of push_undo<>() that takes a method pointer instead of a lambda is still available for implementing undo steps that don’t involve object references, incidentally covering the majority of uses.


by Tim Janik at June 27, 2015 01:34 AM

May 26, 2015

Tim Janik

Thread-Local-Storage Benchmark

A good while ago at a conference, I got into a debate over the usefulness of TLS (thread-local storage of variables) in performance-critical code. Allegedly, TLS was too slow for practical use, especially in shared libraries.

TLS can be quite useful for context sensitive APIs, here’s a simple example:

push_default_background (COLOR_BLUE);
auto w = create_colorful_widget(); // gets blue background

For a single threaded program, the above push/pop functions can keep the default background color for widget creation in a static variable. But to allow concurrent widget creation from multiple threads, that variable will have to be managed per-thread, so it needs to become a thread local variable.

Another example is GSlice, a memory allocator that keeps per-thread allocation caches (magazines) for fast successive allocation and deallocation of equally sized memory chunks. While operating within the cache size, only thread-local data needs to be accessed to release and reallocate memory chunks, so no synchronization with other threads is needed that could degrade performance.

GCC (I’m using 4.9.1 here), glibc (2.19) et al. have seen a lot of improvements since, so I thought I’d dig out an old benchmark and evaluate how TLS does nowadays. To test the shared library case in particular, I’ve written the benchmark as a patch against Rapicorn and posted it here: thread-local-storage-benchmark.diff.

The following table lists the best results from multiple benchmark runs. The numbers shown are the times for 2 million function calls to fetch a (TLS) pointer of each kind (plus some benchmarking overhead), on a Core-i7 CPU @ 2.80GHz in 64bit mode:

Local pointer access (no TLS):                0.003351 seconds
Shared library TLS pointer access:            0.003741 seconds
Static pointer access (no TLS):               0.004450 seconds
Executable global TLS pointer access:         0.004735 seconds
Executable function-local TLS pointer access: 0.004828 seconds

The greatest timing variation in these numbers is within thirty percent (30.6%). In realistic scenarios, the time needed for pointer accesses is influenced by a lot of other more dominant factors, like code locality and data cache faults.

So while it might have been true that TLS had some performance impacts in its infancy, with a modern tool chain on AMD64 Linux, performance is definitely not an issue with the use of thread-local variables.

Here is the breakdown in nanoseconds per pointer access call:

TLS Benchmark

Let me know if there are other platforms that don’t perform as well.


by Tim Janik at May 26, 2015 04:13 PM

May 05, 2015

Tim Janik

DevLog: shared_ptr, resources, eval syntax and more

Giving in to persistent nagging from Stephen and Stefan about progress updates (thanks guys), I’ll cherry pick some of the branches recently merged into Rapicorn devel for this post. We’ll see if I can keep posting updates more regularly in the future… 😉

Interactive Examples

Following an idea Pippin showed me for his FOSDEM talk, I’ve implemented a very small script (merged with the ‘interactive-examples’ branch) to restart an example program whenever any file in a directory hierarchy changes. This allows “live” demonstrations of widget tree modifications in source code, e.g.:

cd rapicorn/
misc/ python ./docs/tutorial/ &
emacs ./docs/tutorial/
# modify and save

Every time a modification is saved, the example program is restarted, so the test window it displays “appears” to update itself.

Shared_ptr widgets

Last weekend, I also pushed the make_shared_widgets branch to Rapicorn.

Some while ago, we started to use std::shared_ptr<> to maintain widget reference counts instead of the hand-crafted ref/unref functions that used atomic operations. After several cleanups, we can now also use std::make_shared() to allocate the same memory block for storing the reference count and widget data. Here is an image (originals by Herb Sutter) demonstrating it:


The hand-optimized atomic operations we used previously had some speed advantages, but using shared_ptr was needed to properly implement remote reference counting.


Compiled-in resources

Since 2003 or so, Beast and later Rapicorn have had the ability to turn any resource file, e.g. PNG icons, into a stream of C char data to be compiled into a program data section for runtime access. The process was rather unordered and ad hoc though, i.e. any source file could include char data generated that way, but each case needed its own make rules and support code to access/uncompress and use that data. Lately I did a survey across other projects on how they go about integrating resource files and simplified matters in Rapicorn based on the inspirations I got.
With the merge of the ‘Res’ branch, resource files like icons and XML files have now all been moved under the res/ directory. All files under this subdir are automatically compressed and compiled into the Rapicorn shared library and are accessible through the ‘Res’ resource class. Example:

Blob data = Res ("@res icons/example.png");

Blob objects can be constructed from resources or memory-mapped files; they provide size() and data() methods and are automatically memory managed.

New eval syntax

In the recently merged ‘factory-eval-syntax’ branch, we’ve changed the expression evaluation syntax for UI XML files to the following:

<label markup-text="@eval label_variable"></label>

Starting attribute values with ‘@’ has precedent on other platforms and is also useful in other contexts like resources, which allows us to reduce the number of syntax special cases for XML notations.

Additionally, the XML files now support property element syntax, e.g. to set the ‘markup_text’ property of a Label:

    <Label.markup-text> Multiline <b>Text</b>... </Label.markup-text>

This markup is much more natural for complex property values and also has precedent on other platforms.

What’s next

I’m currently knee deep in the guts of new theming code, the majority of which has just started to work but some important bits still need finishing. This also brings some interesting renovation of widget states, which I hope to cover here soon. As always, the Rapicorn Task List contains the most important things to be worked on next. Feedback on missing tasks or opinions on what to prioritize are always appreciated.


by Tim Janik at May 05, 2015 01:16 PM

December 31, 2014

Tim Janik

Is SSH Insecure?

In the true tradition of previous years, this year’s 31C3 in Hamburg revealed another bummer about surveillance capacities:

The brief summary is that viable attacks are available to surveillance agencies for PPTP, IPSEC, SSL/TLS and SSH.
New papers reveal that as of 2012, OTR and PGP seem to have resisted decryption attempts.

A related “Spiegel” article provides more details and the leaked papers that contain this information: Inside the NSA’s War on Internet Security.

Several vulnerabilities regarding SSL/TLS have been discovered and fixed in the years since these papers were created. But at the very least, state agencies retain the possibility to decrypt individual connections with fake certificates via man-in-the-middle attacks.

The claimed decryption of SSH caught me by surprise though; it’s a tool deeply ingrained into my daily workflow.

At the conference, I got a chance to discuss this with Jacob after studying some of the Spiegel revelations, and since I’ve been asked about this so much, I’ll wrap it up here:

  • The cited papers put an emphasis on breaking other crypto protocols like PPTP and IPSEC. Those, and even SSL, enjoy much more focus than possible attacks on SSH.
  • Clearly, good attacks are possible against password protected sessions, given lots of computation power or (targeted) password collection databases.
  • Also, 768-bit RSA keys are probably breakable by surveillance agencies nowadays, and 1024-bit keys could be within reach based on revelations about their processing capacities.
  • Even 2048-bit keys could become approachable given future advances in mathematical attacks, or weak random number generators used for key generation, as was the case in Debian in 2008 (CVE-2008-0166).
  • Additionally, there always remains the possibility of an undiscovered SSH implementation bug or protocol flaw that’s exploitable for agencies.

Fact is, we don’t yet know enough details about all the possible attack surfaces against SSH available to the agencies, and we badly need more information to know which infrastructure components remain safe and reliable for our day-to-day work. However, we do have an idea about the weak spots that should be avoided.

My personal take away is this:

  • Never allow password-based SSH authentication, ever:
    /etc/ssh/sshd_config: PasswordAuthentication no
  • Use only 4096-bit keys for SSH authentication; I have been doing this for more than 5 years and performance has not been a problem:
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_HOSTNAME -C account@HOSTNAME
  • Turn to PGP and OTR for useful encryption.

Have a happy new year everyone…


by Tim Janik at December 31, 2014 03:37 AM

October 28, 2014

Martyn Russell

What’s new in Tracker 1.2?


Reblogged from the Lanedo GmbH blog

Every 6 months or so we produce a new stable release, and for Tracker 1.2 we had some exciting new work being introduced. For those that don’t know of Tracker, it is a semantic data storage and search engine for desktop and mobile devices. Tracker is a central repository of user information that provides two big benefits for the user: shared data between applications, and information which is relational to other information (for example: mixing contacts with files, locations, activities, etc.).

Providing your own data

Earlier in the year a client came to Lanedo and to the community asking for help on integrating Tracker into their embedded platforms. What did they want? Well, they wanted to take full advantage of the Tracker project’s feature set, but they also wanted to be able to use it on a bigger scale, not just for local files or content on removable USB keys. They wanted to be able to seamlessly query across all devices on a LAN and cloud content that was plugged into Tracker. This is not too dissimilar to the gnome-online-miners project, which has similar goals.

The problem

Before Tracker 1.2.0, files and folders came by way of a GFile and GFileInfo which were found using the GFileEnumerator API that GLib offers. Underneath all of this the GFile* relates to GLocalFile* classes which do the system calls (like lstat()) to crawl the file system.

Why do we need this? Well, on top of TrackerCrawler (which calls the GLib API) are TrackerFileNotifier and TrackerFileSystem; these essentially report content up the stack (and ignore other content depending on rules). The rules come from a TrackerIndexingTree class which knows what to blacklist and what to whitelist. On top of all of this is TrackerMinerFS, which (by now inaccurately named) handles queues and processing of ALL content. For example, DELETED event queues are handled before CREATED event queues. It also gives status updates, handles INSERT retries when the system is busy, and so on.

To make sure that we take advantage of existing technology and process information correctly, we have to plug in at the level of the TrackerCrawler class.

The solution

Essentially, we have a simple interface for handling open and close cases when iterating a container (or directory), called TrackerDataProvider (with a TrackerFileDataProvider implementation for the default, existing local file system case).

That is followed up with an interface for enumerating that container (or directory), called TrackerEnumerator, and of course there is a TrackerFileEnumerator class implementing the previously existing functionality.

So why not just implement our own GFile backend and make use of existing interfaces in GLib? Actually, I did look into this but the work involved seemed much larger and I was conscious of breaking existing use cases of GFile in other classes in libtracker-miner.

How do I use it?

So now it’s possible to provide your own data provider implementation for a cloud-based solution to feed Tracker. But what are the minimum requirements? Well, Tracker requires a few things to function; those include providing a real GFile and GFileInfo with an adequate name and mtime. The libtracker-miner framework requires the mtime for checking if there have been updates compared to the database. The TrackerDataProvider based implementation is given as an argument to the TrackerMiner object creation and called by the TrackerCrawler class when indexing starts. The locations that will be indexed by the TrackerDataProvider are given to the TrackerIndexingTree, and you can use TRACKER_DIRECTORY_FLAG_NO_STAT for non-local content.

Crash aware Extractor

In Tracker 1.0.0, the Extractor (the ‘tracker-extract’ process) used to extract metadata from files was upgraded to be passive. Passive meaning that the Extractor only extracted content from files already added to the database. Before that, content was concatenated from the Extractor to the file system miner and inserted into the database collectively.

Sadly, with 1.0.0, any files that caused crashes or serious system harm resulting in the termination of ‘tracker-extract’ were subsequently retried on each restart of the Extractor. In 1.2.0 these failures are noted and the files are not retried.

New extractors?

Thanks to work from Bastien Hadess, there have been a number of extractors added for electronic book and comic books. If your format isn’t supported yet, let us know!

Updated Preferences Dialog

Often we get questions like:

  • Can Tracker index numbers?
  • How can I disable indexing file content?

To address these, the preferences dialog has been updated with another tab called “Control”, which allows users to change options that have existed previously but were not presented in a user interface.


In addition to this, changing an option that requires a reindex or restart of Tracker will prompt the user upon clicking Apply.

What else changed?

Of course there were many other fixes and improvements besides the things mentioned here. For a full list, see the announcement.

Looking for professional services?

If you or someone you know is looking to make use of Open Source technology and wants professional services to assist in that, get in touch with us at Lanedo to see how we can help!

by mr at October 28, 2014 04:48 PM

Lanedo Blog

What’s new in Tracker 1.2?

Every 6 months or so we produce a new stable release and for Tracker 1.2 we had some new exciting work being introduced. For those that don’t know of Tracker, it is a semantic data storage and search engine for

by Martyn Russell at October 28, 2014 04:42 PM

October 14, 2014

Martyn Russell

Tracker – What do we do now we’re stable?


Over the past month or two, I’ve spent time working on various feature branches for Tracker. This comes after the 1.2 stable release and the new feature set that was added in it.

So a lot has been going on with Tracker internally. I’ve been relatively quiet on my blog of late and I thought it would be a good idea to run a series of blogs relating to what is going on within the project.

Among my blogs, I will be covering:

  • What features did we add in Tracker 1.2 – how can they benefit you?
  • The difference between URIs, URNs, URLs and IRIs – dispelling any confusion; for the bugs we’ve had reported
  • Making Tracker more Git-like – we’re moving towards a new ‘git’ style command line with some new features on the way
  • Preparing for the divorce – is it time to finally split tracker-store, the ontologies and the data-miners?
  • Making Tracker even more idle – using cgroups and perhaps keyboard/mouse idle notifications

If anyone has any questions or concerns they would like to see answered in articles around these subjects, please comment below and I will do my best to address them! :)

by mr at October 14, 2014 06:55 PM

September 22, 2014

Lanedo Blog

WHITE PAPER: Qualcomm Gobi devices in Linux based systems

Over the past few years, Aleksander Morgado has written about some of the improvements happening in the Linux world for networking devices, including Improving ModemManager for 3GPP2 Gobi 2k3k devices, Workarounds for QMI modems using LTE and other modem advances

by Martyn Russell at September 22, 2014 02:17 PM

December 18, 2013

Lanedo Blog

A quest for speed in compiling

Ever spent time looking at a scrolling console, wishing that compilation took less time? In this post we’re going to explore various ways to speed up the build of $YOUR_SOFTWARE_PROJECT. Let’s start with the simpler test case, which will act as

by Pierre-Eric Pelloux-Prayer at December 18, 2013 11:09 AM

December 10, 2013

Lanedo Blog

The Main Loop: The Engine of a GUI Library

In this blog post, we will have a look at the “main loop”, which is the engine of most Graphical User Interface (GUI) libraries. Different GUI libraries implement their own main loop. When porting a GUI library to another platform,

by Kristian Rietveld at December 10, 2013 06:48 PM

December 04, 2013

Lanedo Blog

Exploring the LibreOffice code base

Opening LibreOffice’s source code for the first time, the amount of code that a new developer has to sift through can be intimidating and off-putting. So, here are some useful locations within the LibreOffice source directory structure that should help

by Eilidh McAdam at December 04, 2013 11:09 AM

November 26, 2013

Lanedo Blog

Filesystem monitoring in the Linux kernel

At Lanedo we’ve been working on file system monitoring in many contexts, like Gvfs/GIO development or Tracker, and usually we get asked which interfaces are available in the Linux kernel… The history behind filesystem monitoring interfaces in Linux

by Aleksander Morgado at November 26, 2013 04:37 PM

July 23, 2013

Lionel Dricot

The Last GUADEC?

Last year, during GUADEC, there was that running joke amongst some participants that this was the last GUADEC. It was, of course, a joke. Everybody was expecting to see each other in Brno, in 2013.

One year later, most of those who were joking are not coming to GUADEC. For them, the joke became a reality.

I believe the root cause is that GNOME has never been able to clearly offer an answer to one very simple question: what is GNOME? (baby don’t hurt me, don’t hurt me, no more)

People are increasingly leaving the desktop computer to use phones, tablets and services in the cloud. ChromeOS has successfully filled the gap between desktop and mobile devices and is becoming the dominant OS. Most people don’t need more than a Chromebook. In fact, it’s way easier to fill most current needs with a Chromebook.

One could say that the professional world is not following, that GNOME is targeting businesses or those who can’t work on a simple notebook/tablet. But we know that this is only a matter of time, that enterprises are simply lagging, on purpose or not. After all, some are still using Windows NT. And what was impossible to do in the cloud one year ago is already becoming standard, like basic photo editing or video conferencing.

Of course, Android and Chrome OS are not free. Worse, the recent PRISM scandal has highlighted the true importance of free software and transparent web services. Thousands of people understood the problem and decided to download the most popular free operating system of our time: CyanogenMod, the free version of Android, which reacted to PRISM by offering an incognito mode.

The switch is deeper and quicker than anything we imagined. Take a look at the screens during a free software hackers conference. Yes, that’s it: Unity. Besides some MacBooks and some Chromebooks, it’s Unity everywhere. Unity, which abandoned GTK+ to switch to Qt, renaming Qt Creator to Ubuntu SDK. Even Subsurface, Linus Torvalds’ pet project, is switching from GTK+ to Qt. If you spot a GNOME desktop at a conference, chances are you are dealing with a Red Hat employee. That’s it. According to Google Trends, interest in GNOME and GTK+ will soon be extinct.

For years, I’ve been a proud GNOME supporter. I’ve been increasingly interested in the usability of the desktop and in the innovation of GNOME 3. But, today, who really cares about Unity/GNOME/KDE or GTK+/Qt when all you need to do is launch a browser full screen? All I need, all I want, are web-based versions of the free software I use. Not a WinXP replacement.

Only a few years ago, GNOME was at the centre of the creative world. Remember Maemo and the N770? This was ground-breaking. A full-fledged mobile OS was the future. Multiple companies emerged from the chaos to provide support, expertise, innovation. But the last remaining bastard child of this era, Tizen, was definitively buried only a couple of weeks ago.

I can’t accept that all we will keep from this wonderful story is a bunch of coloured t-shirts. We are multiple companies that were created during the GNOME golden era. We are a family of hackers, volunteers and friends. We are a community. We share a lot of experience, we share values. The free software ecosystem has produced hugely successful products which are still unmatched in the cloud offering: Gimp, Inkscape, LibreOffice, Blender to name a few. Even my own pet project, Getting Things GNOME, has no satisfactory web equivalent. And when a web solution exists, it is often a proprietary, centralized, privacy crushing one. There’s surely room for free solutions. That’s why LibreOffice is already investigating the web/mobile space.

For all those reasons, I would like us to take the time to sit down together during a BOF and discuss the future of GNOME and of free software businesses. How can we evolve? Can we move the GNOME spirit into a web browser? How can we make use of our history and bring freedom to users instead of becoming just another web dev consultancy?

How can we ensure, together, that this will not be the last GUADEC?


Picture by Ana Rey

flattr this!

by Lionel Dricot at July 23, 2013 09:28 AM

June 24, 2013

Lionel Dricot

Flattr’s biggest problem

And how I work around it

A few months ago, I tried to convince you to spend 2€ per month to reward the content you like.

Lots of people are enthusiastic about Flattr. But there’s a recurring complaint that there’s not enough content accepting Flattr.

This is a real concern. Flattr was built around a model where more or less every creator has a personal blog or website. Unfortunately, there’s a clear trend of creators increasingly regrouping on a few centralized platforms. Creators don’t control the platform and, as such, cannot add a Flattr button.

This raises a lot of questions about centralization and gives the feeling that, besides a few blogs like mine, there’s nothing to Flattr on the web. It looks like you are standing alone on a desert island with your money.

Enter the unclaimed Flattr

Did you know that you can flattr someone who is not on Flattr yet? It is called an “unclaimed Flattr”. The money remains on your account until the creator signs in. So don’t hesitate to use this feature.

What is great with unclaimed Flattrs is that you can flattr nearly everything on the web without thinking about it. If, for a given month, you flattr only unclaimed things, it will cost you nothing for that particular month.

To flattr something, even if it has no Flattr button, install the Flattr browser extension for Firefox or for Chrome. When the content you are currently viewing can be flattered, a little Flattr icon will be shown in the address bar. Click on it and confirm your flattr.

Unfortunately, not every web page can be flattered. So let me explain how I managed to give 62 flattrs this month.

Flattering blogs, articles, comics and social network messages

The web extension allows me to see immediately if a blog or a website has a Flattr account. I can even flattr each Wikipedia page!

If there’s no Flattr icon, I use this little trick: I flattr the Twitter account of the blog. If I liked the article Foo from blogger Bar, I simply find the tweet from @Bar that announced the article Foo. I then click on the tweeting date to open the tweet full page.

If you do that, you will see the Flattr icon appear in your address bar. Indeed, you can flattr individual tweets. That makes one more pending flattr for this creator.


Of course, I also sometimes flattr individual tweets that I particularly enjoy. The same can be done with pictures on Instagram and even messages on but I have not found anyone using the latter. Maybe it would be nice to be able to flattr a Tumblr post?

Flattering videos, music and podcasts

Every video on YouTube or Vimeo can be flattered if you have the browser extension installed. The same goes for any audio track or podcast on SoundCloud. If I appreciate content on these platforms, I flattr it without a second thought. Unfortunately, Dailymotion support is missing.

Grooveshark also has a Flattr setting that automatically flattrs any artist you listen to. Most artists are not registered on Flattr yet, but I flattr them anyway. If an artist I like ever complains about piracy, I will happily point to the money waiting on Flattr.

Flattering pictures

To get illustrations for this blog, I look on Flickr or 500px for Creative Commons pictures. Any picture on those platforms can be flattered, and I make sure to do it for each picture I re-use. Sometimes the author is even already on Flattr, such as this one, used in an article in French.

I still wish I could flattr pictures on DeviantArt but, on 500px and Flickr, I have no hesitation in flattering any nice picture that randomly appears in front of my eyes.


Flattering software

There’s a growing list of software accepting Flattr donations. I try to regularly flattr software I use. If your favourite piece of code is not on Flattr, you can still flattr any tweet from the official account.

Also, any GitHub repository and any commit can be flattered. I tend to flattr external contributions to my own projects or commits fixing an annoying bug.

For example, I started flattering commits to the repository of the WordPress theme I use for this blog, even though the author was not on Flattr (he joined recently). If you like this blog’s theme, don’t hesitate to give a little flattr to the GitHub repository.

Automatic flattering

Best of all, you can make it automatic. In your Flattr preferences, you can link your accounts so that, for example, each time you like a YouTube video or an Instagram picture, it receives a flattr.

FlattrStar is a third-party service which extends this functionality. It allows you to flattr each favourite tweet, favourite artists on and many more.


Not everything is flattrable and this is a problem. Each time you interact with a creator, don’t hesitate to talk about Flattr. She might not be immediately interested, but it may ring a bell if multiple fans start asking for a Flattr button. Don’t hesitate to suggest ideas. What about a 9gag Flattr integration? Or a Reddit integration? (EDIT: Reddit integration is possible through Fleddit)

In the meantime, there’s already a huge amount of content that can receive flattrs. If this is not enough for you, keep your Flattr fee at the minimal 2€ per month and, like me, continue making unclaimed flattrs. You can also subscribe to a few charities. Charities don’t pay the 10% Flattr fee: 100% of your money goes directly to them.

In the worst case, you will spend 2€ per month to help charities and creators. In the best case, you will have fuelled a cultural revolution.


Picture from Daniel Colquitt


by Lionel Dricot at June 24, 2013 06:28 PM

May 27, 2013

Lionel Dricot

Become a Patron of Arts and Letters

With the Internet, artists face the challenge that people no longer need to buy a material support to enjoy their work. I believe this is a very good thing, as it allows any piece of work to be sold for a free price while enjoying the freedom of the web. The next technical challenge is thus to make it as easy as possible to pay a free price for anything you like. I’ve already told you at length about Flattr, which allows you to “like with money” anything on the Internet.

But what if you really like an artist, a blogger, a filmmaker? What if you want to encourage a creator to do more or to keep going? Here comes Patreon.

The principle of Patreon is very simple: for every piece of work by a given creator, you pledge a given amount. The more she/he releases, the more you spend (but you can set a monthly limit). And, as with Kickstarter, you can get some extras with your pledge. Just see my page for an example.
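As a back-of-the-envelope sketch of that pledge model (the function name and the numbers are mine for illustration, not anything from Patreon), the monthly charge is simply the per-work pledge times the number of releases, clamped to the optional monthly limit:

```python
# Toy sketch of the pledge model described above; monthly_charge and the
# example numbers are illustrative, not part of Patreon's actual service.
def monthly_charge(pledge_per_work, works_released, monthly_cap=None):
    total = pledge_per_work * works_released
    if monthly_cap is not None:
        total = min(total, monthly_cap)  # the cap protects you in a prolific month
    return total

print(monthly_charge(1.0, 5, monthly_cap=3.0))  # charged 3.0, not 5.0
```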

The idea is so simple that, unlike Flattr, I don’t see how I will be able to make awfully long blog posts about the subject for months.

Of course, Patreon is not perfect. A given creator cannot have multiple projects (what if you are both a blogger and a video maker? What if you have two blogs?). A credit card is required (Bitcoin support would be awesome). I will probably find more flaws, but the idea is really nice and complementary to Flattr.

I don’t really expect to attract patrons but, being curious, I had to give it a try. If you like the idea too, don’t hesitate to test it and become my patron.


Picture by Martin Beek


by Lionel Dricot at May 27, 2013 12:28 PM

May 22, 2013

Lionel Dricot

How I Learned to Stop Worrying and Love the Web

Ce billet est disponible en français.

When I started producing content for the web, as a hobbyist film maker, I was very enthusiastic about Creative Commons licenses. But not for my videos. Someone could use them in a bad way. I didn’t want that. The “bad way” was never clearly defined, something about nazis or paedophiles, but I feared it nonetheless.

Then I created a blog. I decided to publish my posts under a CC By license, moving a step towards openness. Except for some more important texts, which were under the CC By-ND license. Because I didn’t want them to be modified. You know, those texts were “important”; I was an artist; I had to keep full power over my creations.

Comments were a metric for my success. The more the comments, the better. When comments started to fade out, replaced by social networks, my new metric for success was the number of visitors per day. I could spend hours watching my statistics, exploring the sources of visitors.

I gradually switched everything to a CC By license, realizing that my fear of misuse was too abstract a reason for not giving freedom to my readers. But I still asked for a link to my blog whenever I could, in order to attract visitors, to see them in my statistics. I rarely posted on other blogs. My creations had to stay centralized.

Like moths drawn to a light, bloggers are attracted to statistics. Google Analytics, PageRank, Twitter followers, Klout, Ebuzzing. It is addictive, time consuming and useless. I decided to quit.

I started to cross-post my content to multiple places where I don’t have full control, such as Medium. I removed everything but the Flattr button. Yes, everything, including the G+/Facebook buttons and the Piwik/Google Analytics plugins. I no longer know how many people are reading me or how many reshares I have. I don’t care. I want to be free and, to achieve that, I had to free my creations first.

It took me ten years to overcome my irrational fears of the web. Today, I feel like I’m just discovering a new world. I’m a newborn. I’m not a creator asking to be admired by the non-creator mass. I’m someone contributing and dropping some little creation into a huge creative chaos where everybody is, in a way or another, a creator. Which is awesome.

If you like something, copy it, modify it, share it, re-create it. A text lives only when someone is reading it. A creation needs an audience.

Thanks for caring, thanks for sharing.


Picture by Epoxides


by Lionel Dricot at May 22, 2013 05:28 PM

May 17, 2013

Lionel Dricot

The Fight for E-Clothing

I meet Karl Isrich in a small restaurant. You may have heard about the company he founded, MyVirtualTaylor, a pioneer of e-clothing. You would probably imagine Karl as one of those twenty-something golden boys. Instead, I face an average, anxious guy, approximately forty years old with greyish hair.

He asked me to go to this cheap restaurant because he could not afford a more expensive dinner. Lawyers, he said. When we sat down, he gave me a business card that must have been shiny six months ago. It simply says “MyVirtualTaylor, Isrich CEO”.

Hello Karl, thanks for meeting me. MyVirtualTaylor is an e-clothing company. But what is e-clothing, exactly?

Simply put, it’s 3D printing for clothes. We have developed a clothing printer that we sell and which is the size of a washing machine. Not being bigger than a washing machine was one of our top requirements before the launch.

The clothing printer has a tank of polymer, which you need to refill regularly, and seven dye tanks. We discovered that seven primary colors were enough to reproduce most colors.

Over wifi, you send a .clo file to the printer, then wait between ten minutes and one hour, depending on the size and complexity of the model. Everything is automatic; you can even print a bunch of .clo files in a row.

How do you get a .clo file?

We have an online editor on our website that allows you to design your own clothes. We have also some standard templates: shirts, ties, stuff like that.

In fact, when we launched, we didn’t really think about that. We thought there would be a new market for clothes creators. That’s why we wanted the .clo format to be open and documented. We sell the hardware, but we didn’t want to enter the clothing market.

Can you really print anything? What are the limitations?

Currently, there are some constraints on size. We have prototypes that can print as big as a king-size bed sheet. But, of course, you can only print clothes made of polymer. No silk or fabric.

Isn’t that a big limitation? After all, most of our clothes are made of fabric.

It should be noted that a lot of progress has been made with polymers. We can weave the polymer in a lot of different ways in order to get the properties we want.

But, most importantly, clothing material has always been about finding a compromise between style, comfort and durability. Durability is the critical point for quality clothes: they have to go through hundreds of washing cycles. Our solution was to remove durability from the equation.

Do you mean that printed clothes are not durable?

No, they aren’t. But that is not the goal. Instead of cleaning them, you put them in the clothing printer and the polymer is cleaned, melted and made ready to print new clothes.

Unfortunately, we still cannot extract the colors, so the recycled polymer is not perfect. We store it in a separate tank. When you print, you can allow the use of recycled polymer or not. It is good enough for everyday wear, but if you want a perfect white shirt for a wedding, you probably want the unused polymer.

The worn-out part of the polymer goes out with the waste to the sewers.

It sounds like an ecological disaster.

That’s exactly the rumor spread by our opponents.

But, while it is not perfect, you have to compare it with the traditional clothing industry. Clothes are usually made in huge factories in China, using harmful chemicals. Then you have to take into account the transport, the storage, the shop. Not to mention the gas needed to get to the shopping mall. To that, add the water and soap used to wash the clothes. By contrast, we basically use electricity and release very little polymer. With time, we hope to be able to recycle more and more.

Did you talk about opponents?

You know, I’m an engineer. I never really cared about anything but the technological aspects. When the first clothing printers were sold, people immediately started to exchange .clo files. They took their own clothes and made .clo files to be able to reproduce them.

One day, I received a letter from lawyers of the FCIAA, the Fashion & Clothing Industry Association of America. I had never heard of them before but, basically, they wanted me to shut down my company because I was threatening their business.

I thought it was a joke. Really. At first I was like: “Funny. It’s like the candle industry suing Edison for inventing the lightbulb.” But it’s not funny any more.

I can talk about this for hours. They are bad. Really bad. They are trying to destroy my life.

Can’t you let the lawyers handle that?

For the lawsuit, of course. But there’s a lot more. I’ve been contacted by politicians. They say that I’m destroying the economy. If my product works, there will be no clothing shops, hence no jobs. They asked me: “Do you know how many Americans work in clothing shops?” I was accused of being unpatriotic. Out of nowhere, new laws appeared saying that clothes should be certified in order to save children from accidental suffocation.

From that point, it became immoral to print clothes. Last year, nobody had ever thought about printing clothes and now it is worse than eating babies alive. There are even webshops where you can order “Not Printed” labelled t-shirts. I’ve been attacked personally, investors have turned their backs on me and, at the same time, I still need to pay expensive legal fees.

Isn’t it true that it’s a threat to the economy?

It is a tool for making life easier. Any invention which frees people from unnecessary labor seems to be a threat to the economy. But if our economy is threatened by inventions that make life better for everyone, it’s the economy we need to change, not the inventions.

What will you do next?

I feel bitter. I’m an engineer with a new, useful idea and everyone turns against me: big corporations, lawyers, politicians. Even random people in the street think “it’s the guy destroying jobs and suffocating babies”. I never signed up for that. I’ve never been into politics or anything like that. Now I’m thinking about settling somewhere in Europe, but I’m afraid the hand of the FCIAA will follow me there.

Thanks Karl, I wish you the best.

Although, as a journalist, I know I should remain objective, I can’t help but feel empathy for the guy. As I’m packing up, I notice his clothes for the first time. “So, are those printed?” “Of course.” “Very nice. It’s impressive.” He sighs, then tries to smile at me: “Thanks. If you are interested, you will find the .clo on the Pirate Bay.” His smile feels sad, desperate. We shake hands and he slowly walks away while I stay there, helpless.


This post is part of the Letters from the Future collection and is dedicated to Brokep for announcing his political involvement during the writing of this text. Picture by Anna Banana.


by Lionel Dricot at May 17, 2013 04:28 PM

May 08, 2013

Lionel Dricot

The Cost of Being Convinced

When debating, we usually assume that opinions merely result from being exposed to logical arguments, and from understanding them. If the arguments are logical and understood, people will change their minds.

Anybody who has been on the internet long enough knows that this never happens. Everybody sticks to their own position. But why?

The reason is simple: changing opinion has a cost. A cost that we usually ignore. A good exercise is to try to evaluate this cost before any debate, for yourself and for your counterpart.

Let’s take a music fan who is convinced that piracy hurts artists. Convincing him that this is not the case, and that piracy is not immoral, means to him that, firstly, he was dumb enough to be brainwashed by major companies and that, secondly, the money he spent on CDs was a complete waste.

Each time you tell him “piracy is not hurting artists and is not immoral”, he will hear “you are stupid and you have wasted money for years”.

This is quite a high cost but not impossible to overcome. It means that arguments should not only convince him, but also overcome that cost.

Worse: intuitively, we take the symmetry of costs for granted.

Let’s take the good old god debate.

For the atheist, the cost of being convinced is usually admitting being wrong. This is a non-negligible cost but sometimes possible. Most non-hardcore atheists are thus quite ready to be convinced. They enter any religious debate expecting the same mindset from the opponents.

But the opposite is not true. For a religious person, believing in god is often a very important part of her life. In most cases, it is something inherited from her parents. Some life choices have been made because of her belief. The person is often engaged in activities and societies related to her belief. It can go as far as being the core foundation of her social circles.

When you say “God doesn’t exist”, the religious person will hear “you are stupid, your parents were liars, you wrecked your life and you have no reason to see your friends anymore”.

It looks like a joke, right? It isn’t. But, subconsciously, it is exactly what people feel and understand. No wonder that religious debates are so emotional.

Why do you think that some religious communities fight any individual atheist? Why do you think that every religion always tries to get money or personal involvement from you? Because they want to increase the cost of not believing in them. Scammers understand this very well: they will ask you for more and more money, to increase the cost of you realizing it’s a scam.

Before any argument, any debate, ask everyone to answer sincerely the questions: “What will happen if I’m convinced? What will I do? What will change in my life?”

More often than not, changing opinion is simply not an option. Which settles any debate before it even starts.

And you? Which of your opinions are too costly to be changed? And what can you do to improve the situation?


Picture by r.nial.bradshaw


by Lionel Dricot at May 08, 2013 09:42 AM

February 15, 2013

Martyn Russell

tracker-search gets colour & snippets!

Recently Carlos added FTS4 and snippet support to Tracker. We merged that to master after doing some tests, and it has also reduced the database size on disk. I released 0.15.2 yesterday with the FTS4 work, and today I decided to add a richer experience to tracker-search.
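For readers unfamiliar with what FTS4 snippet support gives you, here is a minimal sketch using SQLite directly from Python. The table and column names are mine, this is not Tracker's actual schema, and it assumes your SQLite build has FTS4 enabled:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE docs USING fts4(body)")
con.executemany("INSERT INTO docs VALUES (?)", [
    ("please renew your passport before travelling",),
    ("notes from the meeting with sue",),
])
# snippet() returns each hit marked up inside its surrounding context,
# roughly the kind of output tracker-search can now colourize.
row = con.execute(
    "SELECT snippet(docs, '[', ']') FROM docs WHERE docs MATCH 'passport'"
).fetchone()
print(row[0])
```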

Below you can see me searching for passport and sue, found in some of the documents indexed on my machine. The colour there is quite nice for separating hits from the snippets/contexts where the terms were found. Without any arguments, this search really will search ALL resources in the database:


This second screenshot shows searching for love within music in particular, so you can use this for all areas of tracker-search:


With any luck, we will release 0.16.0 in time for the next GNOME release, with all of this available!

by mr at February 15, 2013 05:11 PM

September 26, 2012

Eilidh McAdam

Importing OOXML Ink annotations into LibreOffice

So, I’ve been having fun traversing the LibreOffice .docx and .rtf import filters while trying to implement Ink annotations in LibreOffice Writer. As it turns out, I don’t strictly agree with the ISO press release that the Office Open XML file format is “intended to be implemented by multiple applications on multiple platforms”. However, despite having spent way too much time wading through a ~24k line XML file defining the document model in the importer, I’ve been very grateful that the format is XML-based and therefore human readable.

I’ve included some useful resources at the end for any intrepid programmers who wish to help with tackling the importer beast.

DOCX import

Ink annotation
Drawn Ink annotations

Ink allows you to annotate using a stylus on a tablet PC using Microsoft Word so that you can doodle over your documents as you see fit. Technical details ahead, so feel free to skip to the results.

Ink strokes are saved in docx documents as bezier curves expressed through VML paths (these are pretty similar to SVG paths, with commands and co-ordinates). I had quite a bit of fun hacking a parser together – here’s the patch; with a few tweaks, it could be generally useful. It produces a list of all the sub-paths in a path, each sub-path consisting of a list of co-ordinates and co-ordinate flags indicating normal or control points.
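As a rough illustration of what such a parser produces, here is my own much-simplified sketch in Python. It is not the actual LibreOffice patch: it only handles absolute integer co-ordinates and the m(ove)/l(ine)/c(urve)/x(close)/e(nd) commands:

```python
import re

def parse_vml_path(path):
    """Split a VML-style path (e.g. 'm 0,0 c 10,0,20,10,20,20 x e') into
    sub-paths of (x, y, is_control) tuples. Deliberately minimal sketch."""
    tokens = re.findall(r'[mlcxe]|-?\d+', path)
    subpaths, current = [], []
    i = 0
    while i < len(tokens):
        t = tokens[i]
        if t in ('m', 'l'):          # move / line: pairs of x,y
            i += 1
            while i < len(tokens) and tokens[i] not in 'mlcxe':
                current.append((int(tokens[i]), int(tokens[i + 1]), False))
                i += 2
        elif t == 'c':               # cubic bezier: 6 numbers per segment
            i += 1
            while i < len(tokens) and tokens[i] not in 'mlcxe':
                nums = [int(n) for n in tokens[i:i + 6]]
                current.append((nums[0], nums[1], True))   # control point 1
                current.append((nums[2], nums[3], True))   # control point 2
                current.append((nums[4], nums[5], False))  # end point
                i += 6
        else:                        # 'x' closes a sub-path, 'e' ends the path
            if current:
                subpaths.append(current)
                current = []
            i += 1
    if current:
        subpaths.append(current)
    return subpaths

print(parse_vml_path('m 0,0 l 10,0 c 10,5,5,10,0,10 x e'))
```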

Word’s storage of Ink annotations does highlight some of the problems with implementing Word-compatible OOXML. They’re represented something like this:

<v:shape path="[VML path]" ...>
<o:ink i="[base64 binary data]" annotation="t"/>

Now, [VML path] is really the important part as it contains the Ink shape geometry. But the [base64 binary data]? What’s stored in there is anybody’s guess – I’ve certainly not found any documentation on its contents. Anyone who has a tablet version of Word should feel free to take a crack at reverse engineering it ;)

It turns out that paths for Ink annotations consist of bezier curves. Beziers weren’t supported in the importer, so the path attribute, as well as any <v:curve> elements (which use control1, control2, to and from attributes), got ignored. So I added the support by getting the path and control1/control2 attributes and passing off the parsed result to LibreOffice using the UNO API.

RTF import

Word allows you to export a document with Ink to RTF. Most of the code for importing the RTF equivalent was already there; it just needed some adapting. I found something interesting that I haven’t seen documented elsewhere, furthering Miklos Vajna’s work (see README) on understanding the RTF spec. The geometry of the Ink shapes is described using the pVerticies [sic] and pSegmentInfo keywords. The pSegmentInfo section is a list of commands indicating what the points listed in pVerticies mean (move to, curve, end sub-path and so on).

Segment indicator Description Vertices associated
0x0001 Line to point 1 (x, y)
0x2001 Bezier curve with two control points and end point 3 (cx1, cy1, cx2, cy2, x, y)
0x4000 Move to point 1 (x, y)
0x6001 Close path 0
0x8000 End path 0

The plot thickens…

So, when importing a Word-generated .rtf with Ink annotations, why was I seeing segment indicators like 0x200A? Apparently, the low-order byte of certain segment indicators specifies the number of point sets the command applies to: for example, four curves in a row with three points each in pVerticies can be specified using the low-order byte of the segment indicator, resulting in 0x2004 (encompassing 12 points in total). This may also apply to other relevant line segment types, but this is as yet untested. You can easily extract the number of segments indicated using basic bitwise operators:

unsigned int segment = 0x200A; // Example segment indicator
unsigned int points = segment & 0x00FF; // Low-order byte holds the point-set count
segment &= 0xFF00; // Discard the count, leaving the bare segment indicator
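Following the same masking approach, here is a small sketch of my own (in Python rather than C, and not the importer’s actual code) that expands a pSegmentInfo list into one operation per segment, honouring the low-order-byte repeat count:

```python
SEG_MASK = 0xFF00  # high-order byte: the segment indicator itself

def expand_segments(segment_info):
    """Expand raw pSegmentInfo words into one masked opcode per segment,
    treating the low-order byte as a repeat count (0 means once).
    Note the mask also strips the trailing 1 of e.g. 0x2001 -> 0x2000."""
    ops = []
    for raw in segment_info:
        op = raw & SEG_MASK
        count = raw & 0x00FF
        ops.extend([op] * max(count, 1))
    return ops

# 0x200A: ten bezier segments in a row, between a move-to and an end-path
print(expand_segments([0x4000, 0x200A, 0x8000]))
```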

Woo ink in LibreOffice!!

Ink annotation in LibreOffice
Drawn Ink annotations in LibreOffice

LibreOffice now correctly displays not only Ink but (in theory) any curves and shapes with paths when importing from .docx or .rtf. A minor bug with RTF image wrapping, which caused the shape to be inline with the text instead of over it, was also fixed (the property was just being ignored), so better imports all round!

Next step – correct export of bezier shapes to docx and rtf (no, I’m still not sure whether that blob of binary in the o:ink element is of any importance whatsoever, but this should be one way to find out).


v:shape schema information – this website is great for making sense of the OOXML standard, particularly if used alongside this:
ISO IEC 29500 – ISO standard document for OOXML (warning, big pdf in a zip).
RTF spec – only somewhat useful
UNO API reference – useful if used with the search function
writerfilter and oox – LibreOffice modules of interest for importing OOXML documents (cgit links for browsing the source/READMEs)

by Eilidh at September 26, 2012 05:23 PM

July 05, 2012

Eilidh McAdam

Tech update: LibreOffice cross compile MSI installer generation


I’m working on allowing a Windows Installer (.msi) for LibreOffice to be built when cross compiling under Linux. So far, it has been a broad spanning project and has covered:

  • Windows, MSI and Cabinet APIs (C, SQL, Wine, winegcc)
  • LibreOffice build system (Perl, autotools)

Project status as of posting:

  • Developing on openSUSE 12.1 (x86_64) to target Windows (i686).
  • .msi files can be created and taken apart with the cross MSI tools (cgit) msidb and msiinfo.
  • Cabinet files can be extracted but not created. However, parsing the Diamond Directive File (.ddf) format is supported through makecab (this is required when the LO build system creates a cabinet).
  • Remaining: hook up MSI transforms and patches (msitran and msimsp); fit the tools into the build system; clean up and maintenance.

The MSDN documentation for the win32 native tools has been linked to where appropriate.

1.1 Cross compiling LibreOffice

Luckily, LibreOffice cross compile support is already very good. README.cross in the LibreOffice root directory has far more information. Assuming you have checked out LibreOffice and have all the MinGW dependencies, cross compiling can be as simple as changing <lo_root>/autogen.lastrun to read:

CC=ccache i686-w64-mingw32-gcc
CXX=ccache i686-w64-mingw32-g++
CC_FOR_BUILD=ccache gcc
CXX_FOR_BUILD=ccache g++

This references <lo_root>/distro-configs/LibreOfficeMinGW.conf. This folder contains various configurations for compiling under different circumstances. I also found it helpful to add a line to LibreOfficeMinGW.conf to make life easier.

1.2 Building the installer

The installer build logic can be found in <lo_root>/solenv/bin/modules/installer/windows. It makes use of several Microsoft utilities to eventually output an MSI file. Some of these utilities are already distributed by Wine.
Provided by Wine:
expand.exe – Used to unpack cabinet files.
cscript.exe – Command line script host.
Also expected:
msidb.exe – Manipulates installer database tables and streams.
msiinfo.exe – Manipulates installer meta data (summary information).
makecab.exe – Compresses files into cabinets.
msimsp.exe – Creates patch packages.
msitran.exe – Generates and applies database transforms.

Wine already exposes most of the required functionality via the API exposed by msi.dll (MSDN, Wine) and cabinet.dll (MSDN, Wine). My work has been focussed on writing command line utilities that support the interface expected by the LibreOffice build scripts.

  • solenv/bin/ is a very large Perl script that connects up the Perl modules which build the installer. The .pm files relevant to cross MSI building are listed below.
  • solenv/bin/modules/installer/ performs “nativeness” logic such as checking if the environment is Cygwin and whether the required utilities are in the system path.
  • solenv/bin/modules/installer/windows/ (expand.exe*, msidb.exe, msiinfo.exe)
  • solenv/bin/modules/installer/windows/ (expand.exe*, msidb.exe)
  • solenv/bin/modules/installer/windows/ (msidb.exe, msiinfo.exe, cscript.exe*, msitran.exe**, makecab.exe**)
  • solenv/bin/modules/installer/windows/ (msidb.exe, msimsp.exe**)
  • solenv/bin/modules/installer/windows/ (msidb.exe)
  • * Distributed by Wine
    ** In progress

1.3 Cross MSI tool development

The code for these tools can be found in the feature/crossmsi branch of LibreOffice. It currently resides in setup_native/source/win32/wintools in the tree.

To test the tools individually, grab the dev Makefile and run make from the tool’s directory. You can then pass the -? or /? flag for usage information. I would suggest disabling Wine’s debug logs unless you specifically need them:

$ export WINEDEBUG=-all

  • msidb (MSDN msidb, LibreOffice msidb, dev Makefile)

    Usage: msidb [options] [tables]

    -d <path> Fully qualified path to MSI database file
    -f <wdir> Path to the text archive folder
    -c Create or overwrite with new database and import tables
    -i <tables> Import tables from text archive files – use * for all
    -e <tables> Export tables to text archive files in <wdir> – use * for all
    -x <stream> Saves stream as <stream>.idb in <wdir>
    -a <file> Adds stream from file to database
    -r <storage> Adds storage to database as substorage

  • msiinfo (MSDN msiinfo, LibreOffice msiinfo, dev Makefile)

    Usage: msiinfo {database} [[-b]-d] {options} {data}

    -c <cp> Specify codepage
    -t <title> Specify title
    -j <subject> Specify subject
    -a <author> Specify author
    -k <keywords> Specify keywords
    -o <comment> Specify comments
    -p <template> Specify template
    -l <author> Specify last author
    -v <revno> Specify revision number
    -s <date> Specify last printed date
    -r <date> Specify creation date
    -q <date> Specify date of last save
    -g <pages> Specify page count
    -w <words> Specify word count
    -h <chars> Specify character count
    -n <appname> Specify application which created the database
    -u <security> Specify security (0: none, 2: read only, 3: read only (enforced))

  • makecab (MSDN makecab, LibreOffice makecab, dev Makefile)

    Usage: makecab [/V[n]] /F directive_file

    /F directives – A file with MakeCAB directives.
    /V[n] – Verbosity level (1..3)
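The tables these tools import and export travel as Windows Installer “text archives”: IDT files, which are tab-separated with a three-line header (column names, then column type codes such as s72 or L255, then the table name and its key columns). As a minimal illustration of that layout, here is a sketch in C that checks the first two header lines agree on the column count; the helper names are invented for this example and are not part of the LibreOffice tooling:

```c
/* Sketch: sanity-check the header of an MSI IDT text archive.
 * IDT files are tab-separated; the first line holds column names,
 * the second the column type codes.  (Illustrative helper names;
 * the third header line, table name and keys, is ignored here.) */

/* Count tab-separated fields in one header line. */
static int idt_count_columns (const char *line)
{
    int columns = 1;
    for (; *line != '\0' && *line != '\n'; line++)
        if (*line == '\t')
            columns++;
    return columns;
}

/* Return the column count if both header lines agree, else -1. */
static int idt_check_header (const char *names, const char *types)
{
    int n = idt_count_columns (names);
    return (n == idt_count_columns (types)) ? n : -1;
}
```

A table exported with -e can be re-imported with -i as long as this header stays consistent.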

by Eilidh at July 05, 2012 06:26 PM

June 21, 2012

Lanedo GitHub

June 08, 2012

Lanedo GitHub

June 06, 2012

Lanedo GitHub

April 17, 2012

Michael Natterer

Goat Invasion in GIMP

Once upon a time, like 5 weeks ago, there used to be the longstanding plan to, at some point in the future, port GIMP to GEGL.

We have done a lot of refactoring in GIMP over the last ten years, but its innermost pixel manipulating core was still basically unchanged since GIMP 1.2 days. We didn’t bother to do anything about it, because the long term goal was to do all this stuff with GEGL, when GEGL was ready. Now GEGL has been ready for quite a while, and the GEGL porting got assigned a milestone. Was it 2.10, 3.0, 3.2, I don’t remember. We thought it would take us forever until it’s done, because nobody really had that kind of time.

About 5 weeks ago, I happened to pick up Øyvind Kolås, aka Pippin the Goatkeeper, to stay at my place for about a week and do some hacking. After one day, without intending it, we started to do some small GEGL hacking in GIMP, just in order to verify an approach that seemed a good migration strategy for the future porting.

The Problem: All the GimpImage’s pixels are stored in legacy data structures called TileManagers, which are kept by high level objects called GimpDrawables. Each layer, channel, mask in GIMP is a GimpDrawable.

A typical way to do things is:

TileManager *tiles = gimp_drawable_get_tiles (drawable);
PixelRegion region;

pixel_region_init (&region, tiles, x, y, w, h, TRUE);

/* do legacy stuff on the pixel region in order to change pixels */

After the GEGL porting, things would look like this:

GeglBuffer *buffer = gimp_drawable_get_buffer (drawable);

/* do GEGL stuff on the buffer, like running it through a graph in order to change pixels */

Just, how would we get there? Replacing the drawable’s tile manager by a buffer, breaking all of GIMP at the same time while we move on porting things to buffers instead of tile managers? No way!

The Solution: A GeglBuffer’s tiles are stored in a GeglTileBackend, and it’s possible to write tile backends for arbitrary pixel storage, so why not write a tile backend that uses a legacy GIMP TileManager as storage?
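This is a classic adapter: the buffer only talks to an abstract tile-backend interface, so a backend can delegate tile access to whatever legacy storage sits behind it. A rough sketch of the pattern in plain C; all type and function names here are invented for illustration and are not the real GEGL or GIMP API:

```c
/* A minimal vtable-based adapter, mimicking how a tile backend can
 * delegate storage to a legacy structure.  All names are invented
 * for illustration; the real GEGL API differs. */
typedef struct TileBackend TileBackend;

struct TileBackend {
    /* fetch a tile's pixel data for (col, row) */
    const unsigned char *(*get_tile) (TileBackend *self, int col, int row);
    void                 *storage;      /* backend-specific payload */
};

/* "Legacy" storage: one byte per tile, standing in for a TileManager. */
typedef struct {
    unsigned char tiles[4][4];
} LegacyStore;

static const unsigned char *
legacy_get_tile (TileBackend *self, int col, int row)
{
    LegacyStore *store = self->storage;
    return &store->tiles[row][col];
}

/* New code only ever sees the TileBackend interface. */
static unsigned char
read_pixel (TileBackend *backend, int col, int row)
{
    return backend->get_tile (backend, col, row)[0];
}
```

Code written against the interface never learns whether tiles come from the legacy TileManager or from native storage, which is what makes an incremental port possible.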

After a few hours of hacking, Pippin had the GimpTileBackendTileManager working, and I went ahead replacing some legacy code with GEGL code, using the new backend. And it simply worked!

The next important step was to make GimpDrawable keep around a GeglBuffer on top of its TileManager all the time, and to add gimp_drawable_get_buffer(). And things just kept working, and getting easier and easier the more legacy code got replaced by GEGL code, the more GeglBuffers were being passed around instead of TileManagers and PixelRegions.

What was planned as a one week visit turned into 3 weeks of GEGL porting madness. At the time this article is written, about 90% of the GIMP application’s core is ported to GEGL, and the only thing really missing is GeglOperations for all the layer modes.

As a totally unexpected extra bonus, there is now even a GEGL buffer tile backend in libgimp, for plug-ins to use, so also plug-ins can simply say gimp_drawable_get_buffer(drawable_ID), and use all of GEGL to do their stuff, instead of using the legacy pixel region API that also exists on the plug-in side.

GIMP 2.10’s core will be 100% ported to GEGL, and all of the legacy pixel fiddling API for plug-ins is going to be deprecated. Once the core is completely ported, it will be a minor effort to simply “switch on” high bit depths and whatever color models we’d like to see. Oh, and already now, instead of removing indexed mode (as originally planned), we accidentally promoted indexed images to first class citizens that can be painted on, and even color corrected, just like any other image. The code doing so doesn’t even notice because GEGL and Babl transparently handle the pixel conversion magic.

The port lives in the goat-invasion branch in GIT. That branch will become master once GIMP 2.8 is released, so the first GIMP 2.9 developer release will already contain the port in progress.

If you want to discuss GIMP and GEGL things with us face to face, join us at this year’s Libre Graphics Meeting in Vienna two weeks from now; a lot of GIMP people will be there. Or simply check out the goat-invasion branch and see the goats yourself.

If you have some Euros to spare, consider donating them to Libre Graphics Meeting, it’s one of the few occasions for GIMP developers, and the people hacking on other projects, to meet in person; and such meetings are always a great boost for development.

During the 3 crazy weeks, quite some work time hours were spent on the port, thanks to my employer Lanedo for sponsoring this via “Labs time”.

by Mitch at April 17, 2012 12:28 PM

March 19, 2012

Eilidh McAdam

Get into open source with GSoC 2012

Student applications for Google Summer of Code 2012 will be open very soon. After an extremely enjoyable and rewarding experience with the program last year, I feel it’s my duty to student programmers to get the word out. So, here’s why you should apply.

You get paid to work on open source software. I became a long time user, first time contributor early last year. Looking to give something back, I attempted a LibreOffice Easy Hack. In a case of fantastic timing, they announced their involvement in GSoC a week or so later and I got in touch. The end result was a whole new open source library. I had an amazing experience working with LibreOffice but it’s ideal to choose a project that’s personally useful. GSoC doesn’t require that you’re an open source evangelist but if you are, it’s a strong argument for applying.

It’s fantastic experience working on a large project. I feel I learned more during those three months than during my undergraduate degree course. I have to say that I never particularly enjoyed groupwork at university but it’s completely different if you’re working with smart, motivated individuals who’re there either because they want to be or because they’re paid to be (quite often both). As a nice bonus, it’s great work experience and has essentially led me to my dream job. I’m not sure if that’s a typical result, but it certainly wouldn’t hurt to have it on your CV or resume.

You meet some of the smartest, most awesome people (not all of them programmers). I think this is my favourite outcome. I’ve met people from all over the world with an assortment of beliefs, opinions and backgrounds. My experience was that some of the best hackers and coolest people (no, seriously!) hang around open source communities.

Applying isn’t difficult, just choose a participating open source organisation or two and do a little research into the suggested projects before getting in touch with them. Good luck!

by Eilidh at March 19, 2012 08:56 PM

March 05, 2012

Lanedo GitHub

November 15, 2011

Martyn Russell

Lanedo is hiring

We’re currently looking for anyone who has LibreOffice experience and is interested in working on the project. If that sounds like something you would like to do, get in touch with us.

Additionally, if you or anyone you know has experience running an open source business, please get in touch. We’re looking for someone who could fill a CEO-type position.

by mr at November 15, 2011 04:28 PM

October 23, 2011

Eilidh McAdam

LibreOffice Conference 2011

I’ve been home a week from the LibreOffice Conference in Paris and from a personal point of view, it was a huge success.

First of all, here are my slides from the short talk I gave about what we achieved with libvisio over the duration of Google Summer of Code. There is still work to be done but once end-user feedback starts coming in, we can sand down any rough edges.

The conference was a lot of fun, particularly the company. I had the pleasure of meeting the rest of the libvisio team, Fridrich Strba and Valek Filippov, who looked out for me the whole time I was there. I’m sure the Paris pickpockets are still cursing their names.

I also have to admit to being a little starstruck at meeting all the fantastic hackers whose work I have made so much use of. The LibreOffice team were a diverse, interesting and kind bunch who put up with my incessant (well-meaning) questions with good grace and gave me plenty to think about on coding, the universe and everything.

It was wonderful to be surrounded by programmers and Linux users without the geekier-than-thou attitude. Despite being younger (and greener) than most and female unlike many (with a few notable exceptions), I chatted away to my fellow hackers without once feeling patronised.

Finally, I’m staying out of the whole political situation – I started coding with LibreOffice for pragmatic reasons (I could get the code easily, Easy Hacks make getting to know the project simpler and LibreOffice was part of GSoC ’11). However, I think the conference really confirmed for me that as important as the code base is, the community that surrounds a project this size is as vital. Without their helpful, inclusive approach, I’d have found contributing to an open source project of that magnitude an insurmountable task.

So here’s to another year!

by Eilidh at October 23, 2011 01:49 PM

October 06, 2011

Martyn Russell

Tracker Needle with improved tagging

Given there have been a number of improvements to tracker-needle recently, I thought I would make a video to highlight some of them. A quick summary:

  • Searching for “foo” now finds files tagged with “foo”
  • Searches are limited to 500 items per category/query (to avoid abusing the GtkTreeView mainly)
  • A tag list is now available to show all hits by tags
  • Tags can be edited by the context menu per item (planned to be improved later)

Really nice to have tagging supported properly in tracker-needle now.

by mr at October 06, 2011 08:15 PM

September 16, 2011

Martyn Russell

Improved Tracker Preferences for Indexed Locations

Something I have been meaning to do for a long time, is to update the preferences dialog for Tracker to easily add locations which are special user directories (as per the GUserDirectory locations).

I wanted to do this in such a way that:

  • It was really easy to toggle locations as recursive or not
  • The file chooser was only necessary for non-standard locations
  • Better use of the space was made by integrating the two previous lists for single-directory and recursive-directory indexing
  • I could fix a few issues which had been reported when it came to saving using the special symbols (e.g. &DESKTOP for G_USER_DIRECTORY_DESKTOP, etc.) when one or more user directories evaluated to the same location

The result is this (now in master and 0.12.2 when it is released):

by mr at September 16, 2011 11:20 PM

August 29, 2011

Christian Kellner

Apple Filing Protocol (AFP) support for GVfs

Last Thursday I merged the Apple Filing Protocol (AFP) backend for GVfs; so we finally have support for Apple shares as well. It was written by Carl-Anton Ingmarsson as his Summer of Code 2011 project. It is on the master branch and thus will be in the next unstable release. Please test it and report bugs against the "afp backend" component.

Carl-Anton did quite an impressive job - probably best depicted by the diffstat of the merge:

 client/            |    1 
 client/afpuri.c               |  269 ++
 client/gdaemonvfs.c           |    2 
                  |   31 
 daemon/            |   45 
 daemon/    |    8 
 daemon/           |    5 
 daemon/gvfsafpconnection.c    | 1651 ++++++++++++++++
 daemon/gvfsafpconnection.h    |  420 ++++
 daemon/gvfsafpserver.c        | 1033 ++++++++++
 daemon/gvfsafpserver.h        |   85 
 daemon/gvfsbackendafp.c       | 4292 +++++++++++++++++++++++++++++++++++++++++-
 daemon/gvfsbackendafp.h       |   23 
 daemon/gvfsbackendafpbrowse.c |  608 +++++
 daemon/gvfsbackendafpbrowse.h |   47 
 daemon/gvfsbackenddnssd.c     |    6 
 daemon/gvfsjobsetattribute.h  |    1 
 17 files changed, 8491 insertions(+), 36 deletions(-)

by gicmo at August 29, 2011 11:00 AM

June 22, 2011

Eilidh McAdam

Progress with gradient fills

So, I have finally made progress that isn’t so ground-breaking that my mentor wants to write about it but is big enough that certain people will stop making fun of my empty blog. So, frob (his wonderfully useful work can be found here), I hope you’re happy.

I’ve been working on shapes, lines and their properties, most recently on fills. Here’s how it’s going so far (Visio document on top, my output below).

Thanks to frob for the image, plus animated gif.

A few technical details for those who care: Visio draws shapes (including rectangles) as individual lines, so before they can be filled we have to manually detect whether or not they form a closed polygon. At the moment, we simply take the first point, compare it to the last point, and make sure there are no gaps in between. It works for most simple cases, but since when are things ever truly simple when reverse engineering?
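That endpoint comparison can be sketched in a few lines of C; the Segment type and function below are illustrative, not libvisio’s actual code:

```c
typedef struct { double x1, y1, x2, y2; } Segment;

/* Returns 1 if the segments form a closed polygon: each segment starts
 * where the previous one ended, and the last one ends back at the
 * first one's starting point. */
static int
is_closed_polygon (const Segment *segments, int count)
{
    int i;
    if (count < 3)
        return 0;
    for (i = 1; i < count; i++)
        if (segments[i].x1 != segments[i - 1].x2 ||
            segments[i].y1 != segments[i - 1].y2)
            return 0;                     /* gap between segments */
    return segments[count - 1].x2 == segments[0].x1 &&
           segments[count - 1].y2 == segments[0].y1;
}
```

In practice a small tolerance would replace the exact floating-point comparisons.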

You may also notice a difference between how gradients 31-34 are drawn in Visio vs my output. There’s no direct equivalent of this type of square gradient that I know of in the SVG or ODG specifications, so we’re approximating it. I have a whole new appreciation of slight imperfections when porting documents to different formats.

In the time it has taken to write this, I’ve already found that some of what I’ve written about will change. This is why I’m a programmer not a blogger ;)

by Eilidh at June 22, 2011 07:27 PM

March 29, 2011

Christian Kellner

Google Summer of Code 2011


Just a quick reminder: The student application period for the Google Summer of Code 2011 has opened as of yesterday (Monday, the 28th of March). Apply now! The starting point for Gnome is here; it has all the relevant information.

In addition to that, if you happen to be interested in Neuroscience and Informatics, the International Neuroinformatics Coordinating Facility (INCF) also got accepted as an organization (thanks Raphael!). Among other very interesting project ideas there are also two proposals that are Gnome related (as in pygtk based applications). If you have a cool Neuroinformatics+Gnome based idea, be sure to apply at the INCF. The starting point is here.

by gicmo at March 29, 2011 11:47 AM

October 23, 2010

Michael Natterer

GIMP on GTK+ 3.0

At the GTK+ Hackfest in A Coruña I managed to get GIMP almost completely (minus one dialog and most plug-ins) running on GTK+ 3.0.

This turned out to be a great tool for finding bugs in the new GTK+. In fact, I found quite a few of them while still completing the port. Some bugs I fixed right away, others were fixed by fellow Hackfest hackers. Even while writing this post (the image was of course cropped with the ported GIMP), two more popped up and will eventually be fixed.

by Mitch at October 23, 2010 11:07 PM

July 17, 2010

Christian Kellner

Back online

After being down for a while the blog is now back online. The server moved - as I did - to Munich. EOM for now - more real news later ...

by gicmo at July 17, 2010 10:00 PM

May 12, 2009

Christian Kellner

The failure of the government ...

... continued:

"Not only is the tax burden in Germany as high as in hardly any other industrialized country, taxes and levies are also distributed particularly unfairly. [...] The DGB demands that the economically capable be asked to pay more through a raised top tax rate and the reintroduction of the wealth tax. [...] The Union rejects these demands. 'For me there is no need for action,' said CDU social policy expert Ralf Brauksiepe." (ZEIT Online)

Tax the wealthy? Apparently that is completely out of the question. But wait: the C in CDU still stands for Christian, doesn't it? Give your possessions to the poor - wasn't there something about that, Mr. Social Politician? Oh, nonsense, everything is fair just the way it is; including dual earners paying higher contributions. Right? "Of course," says Mr. Brauksiepe (ibid.).
"Economists do not share this view." - Pah. Experts. What do they know. And anyway: the Family Ministry doesn't look after them either. - "The German social systems are still geared towards the standard family of four, with two children and a single earner." (ibid.) Aha! Who would have thought. Anyone who doesn't fit the dusty world view is to blame anyway and therefore pays more; and is probably an atheist, an agnostic or something worse. Being Christian only counts among your own kind, it seems. Double standards have never been a problem.
Besides, the government has much, much more important things to do at the moment; namely censoring media and banning people from shooting paintballs around. Much, much more important.

by gicmo at May 12, 2009 06:06 PM

Lobbying with Children

The Deutsche Kinderhilfe is planning an offline signature campaign in favor of online censorship. Yes, we remember: exactly the organization with "unclear financial structures" and "close ties to a company". WELT Online reported on it. In stark contrast, abuse victims themselves apparently tend to oppose it: "Missbrauchsopfer gegen Internetsperren" (MOGIS) and Trotz Allem e.V. (open letter). The suspicion strongly suggests itself that the Deutsche Kinderhilfe is not doing lobbying for children, but with them.

Treating symptoms instead of causes, driven by fear of anything new. Preferably without any expertise. And every second law requires amending the constitution.

by gicmo at May 12, 2009 03:50 PM

April 26, 2009

Christian Kellner

Pirates and Structural Change

"The prices of the electronic books currently available give cause for skepticism. They are only slightly below the retail price of printed volumes, by one or two euros for hardcovers. Yet today's 20-euro books would be well paid at ten euros in digital form, even if the authors (and their agencies) were left with considerably more than the two to two and a half euros customary today. But although all printing and distribution costs are eliminated, including packaging, transport and the wages and revenues contained in that chain, and on top of that the bookseller's discount of 40 to 45 percent, the creators do not receive a cent of this hefty gain." (ZEIT Online - Es war einmal)

In my opinion, *this* is exactly the root of why the big entertainment, oh, I beg your pardon, culture industries have become so implausible, and why people are not willing to spend €12.90 on a (copy-protected) "download album" that differs from the "real" CD in practically nothing, i.e. cost(!) or scope of delivery. Or $25 for a scientific article in PDF form? Not to mention films for which people have already paid €10 at the cinema, and which have therefore grossed millions anyway; just take a quick look at the prices in the new Apple Store Movies. And we are not supposed to call that outdated and disproportionate greed?
At least the copy-protection part is slowly changing (for music). Even if not entirely voluntarily.

Given all this, it seems all the more ridiculous when the protectionism of the old and the fear of the new are painted as the decline of culture. For example here, by Ms. Gaschke. I recommend the comments on that article. They are far better than the article itself (somehow fitting).

And above all, it is about content. And of course those actually creating the art should be fairly compensated for their work; but certainly not in the same way as before the "digital age". Well, times change, and whoever refuses to adapt eventually ends up among the dinosaurs ... hopefully. Salus populi est suprema lex:

"Free reading as part of the fundamental right to education - and as a success model of modern knowledge societies. Open Access would not be the downfall of the Western world. Quite the opposite." (ZEIT Online - Es war einmal)

by gicmo at April 26, 2009 09:34 AM

August 30, 2007

Michael Natterer

Using the Mac OS X Menubar

Finally, after quite some debugging (of the very same bug for months), I committed preliminary support for the global Mac OS X menubar to GIMP trunk.

It’s the result of a project I’ve been involved with at Imendio. Check the project page.

For seeing the coolness without compiling yourself, check the video.

by Mitch at August 30, 2007 01:36 PM

October 31, 2006

Michael Natterer

October 18, 2006

Michael Natterer

GIMP instead of Photoshop

Now it's official! After all, it was in SPIEGEL ;-)

A picture made with GIMP won SPIEGEL's image editing contest. Congratulations to the successful manipulator!

by Mitch at October 18, 2006 02:00 PM

May 02, 2006

Michael Natterer

PDB and Plug-In Refactoring

One of the last remaining bits of antique code in the GIMP are the parts that talk to plug-ins, namely the plug-in handling code itself, and the PDB (procedural database) which provides a way for plug-ins to call GIMP’s internal functionality and which keeps all procedures provided by plug-ins.

Everything was organized around some crufty C-structs, lived in files without proper namespace, and was basically untouched by all the refactoring that was happening during the last few years. To illustrate the evilness: the PDB did not even really know which of its procedures were dynamically allocated, and which were constant structs that are defined at compile time (not to speak of the part where perl code generated perl code that generated perl code that…). This had to stop.

Right after LGM, I entered refactoring mode:

  • All the perl-that-generates-perl stuff had to die. Every procedure definition in the PDB files (tools/pdbgen/pdb/*.pdb, they are still perl) now looks the same: Inargs, Outargs, Code.
  • The homemade system to specify a procedure’s arguments and return values doesn’t exist any more. Everything is based on GParamSpec now. A lot of new GIMP-specific param specs were added in app/core/gimpparamspecs.c.
  • The procedure’s arguments from the same prehistoric era had to go too. Procedures now take and return everything as GValue, organized as GValueArray.
  • Procedures are GObjects now (app/pdb/gimpprocedure.c). Their memory-management was modernized a bit (they have memory management now). Plug-In procedures are a proper GimpProcedure subclass now (app/pdb/gimppluginprocedure.c), which made lots of code much more straightforward (they were separate structs before which had a pointer to the procedure they implement).
  • The PDB is now an object itself (app/pdb/gimppdb.c), instead of a bunch of global variables with some API around. The PDB instance emits signals when procedures are added or removed, so the GUI can for example create menu items for new plug-in procedures. No more direct calls from the core to the GUI via the ugly GUI-vtable.

The plug-in system is a similar mess, closely related to the PDB, but even worse. It has seen some refactoring, but just to the point where it was unavoidable to fix misbehavior or to get rid of calling deprecated GLib APIs. While the PDB cleanup has come a long way, I’m still in the middle of chopping and re-assembling the plug-in stuff:

  • Lots and lots of global variables have been moved to a new object, the GimpPlugInManager (app/plug-in/gimppluginmanager.c). Well they were not really global variables before, but members of the global Gimp instance, which is supposed to be the only “global” variable in the GIMP, but that doesn’t make much difference here.
  • Lots of functions are now methods of the GimpPlugInManager, which greatly helps finding them. Before, it was mostly unclear which function belonged to the plug-in instances themselves, and which to the infrastructure around them that keeps it all together.

That’s where I am today, but there are still quite some hacks ahead before the stuff can be called “finished”:

The PlugIn struct (app/plug-in/plug-in.c) has to become an object, and this object needs some signals. Some code needs to listen to these signals, so cross-calling between unrelated scopes doesn’t happen any more. At some point people will even be able to understand how the plug-ins’ memory management is supposed to work ;-) Currently the calls to plug_in_ref() and plug_in_unref() are not really in places where one would expect them. I bet there is more ugliness that will go away as soon as I find it.

Now what are the benefits from all this work? Well, refactored code looks soooo much nicer :-)

But seriously:

  • The refactored code does look nicer, is easier to read and understand, is easier to change and fix.
  • The PDB can check the passed arguments much better now. Thanks to GParamSpec GIMP can tell a plug-in/script developer which of the passed arguments was wrong in which way.
  • Every argument has a default value now. After GIMP 2.4 this will allow us to change the plug-in side of calling PDB procedures to something that has named parameters with default values. No more breaking scripts just because somebody added an optional argument.
  • (actually, optional arguments were impossible before).
  • Now that it’s all cleaned up, people != hardcore_longtime_developers can understand and change it.
  • And many other benefits that usually show up after the refactored code is in use for some time.
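The first of those benefits, per-argument checking, can be sketched in plain C. The types below are invented stand-ins for this illustration; the real code builds on GLib’s GParamSpec and GValue:

```c
/* Invented stand-ins for GParamSpec/GValue, just to illustrate the
 * validation step the PDB can now perform. */
typedef enum { ARG_INT, ARG_STRING } ArgType;

typedef struct {
    const char *name;
    ArgType     type;
} ParamSpec;

typedef struct {
    ArgType     type;
    int         int_val;
    const char *str_val;
} Value;

/* Returns -1 if all arguments match their specs, otherwise the index
 * of the first mismatching argument, so the caller can name it. */
static int
validate_args (const ParamSpec *specs, const Value *args, int n_args)
{
    int i;
    for (i = 0; i < n_args; i++)
        if (args[i].type != specs[i].type)
            return i;
    return -1;
}
```

Because the check reports which argument is wrong, the PDB can name it in the error message instead of failing opaquely.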

And now, please get GIMP from CVS and test it until it breaks. Then report the bug so the new PDB will be as solid as the old one.

by Mitch at May 02, 2006 03:43 PM