Planet Lanedo

October 14, 2014

Martyn Russell

Tracker – What do we do now we’re stable?


Over the past month or two, I’ve spent time working on various feature branches for Tracker, following the stable 1.2 release and the new feature set that came with it.

So a lot has been going on with Tracker internally. I’ve been relatively quiet on my blog of late and I thought it would be a good idea to run a series of blog posts about what is going on within the project.

Among my blogs, I will be covering:

  • What features did we add in Tracker 1.2 – how can they benefit you?
  • The difference between URIs, URNs, URLs and IRIs – dispelling the confusion behind some of the bugs we’ve had reported
  • Making Tracker more Git-like – we’re moving towards a new ‘git’ style command line with some new features on the way
  • Preparing for the divorce – is it time to finally split tracker-store, the ontologies and the data-miners?
  • Making Tracker even more idle – using cgroups and perhaps keyboard/mouse idle notifications

If anyone has any questions or concerns they would like to see answered in articles around these subjects, please comment below and I will do my best to address them! :)

by mr at October 14, 2014 06:55 PM

September 22, 2014

Lanedo Blog

WHITE PAPER: Qualcomm Gobi devices in Linux based systems

Over the past few years, Aleksander Morgado has written about some of the improvements happening in the Linux world for networking devices, including Improving ModemManager for 3GPP2 Gobi 2k3k devices, Workarounds for QMI modems using LTE, and other modem advances.

by Martyn Russell at September 22, 2014 01:56 PM

August 05, 2014

Tim Janik

C++ Month Name Hashing

In a time-critical section of a recent project, I needed to optimize the conversion of three-letter US month abbreviations (as commonly found in log files) to integers in C++. That is, for “Jan” yield 1, for “Feb” yield 2, etc., for “Dec” yield 12. In C++ the simplest implementation probably looks [...]
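
The excerpt ends before the code, but the problem itself is fully specified, so here is a minimal sketch of one obvious approach (my illustration, not necessarily the article’s solution): pack the three letters into one integer and let the compiler turn the mapping into a jump table.

#include <stdint.h>

/* Sketch only: returns 1..12 for "Jan".."Dec", 0 for anything else. */
static int
month_from_abbrev (const char *m)
{
  const uint32_t key = ((uint32_t) m[0] << 16) | ((uint32_t) m[1] << 8) | (uint32_t) m[2];

  switch (key)
    {
    case ('J' << 16) | ('a' << 8) | 'n': return 1;
    case ('F' << 16) | ('e' << 8) | 'b': return 2;
    case ('M' << 16) | ('a' << 8) | 'r': return 3;
    case ('A' << 16) | ('p' << 8) | 'r': return 4;
    case ('M' << 16) | ('a' << 8) | 'y': return 5;
    case ('J' << 16) | ('u' << 8) | 'n': return 6;
    case ('J' << 16) | ('u' << 8) | 'l': return 7;
    case ('A' << 16) | ('u' << 8) | 'g': return 8;
    case ('S' << 16) | ('e' << 8) | 'p': return 9;
    case ('O' << 16) | ('c' << 8) | 't': return 10;
    case ('N' << 16) | ('o' << 8) | 'v': return 11;
    case ('D' << 16) | ('e' << 8) | 'c': return 12;
    default: return 0;
    }
}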

by timj at August 05, 2014 03:35 PM

April 15, 2014

Tim Janik

Forward Secrecy Encryption for Apache

The basic need to encrypt digital communication seems to be becoming common sense lately. It probably results from increased public awareness about the number of parties involved in providing the systems required (ISPs, backbone providers, carriers, sysadmins) and the number of parties these days taking an interest in digital communications and activities (advertisers, criminals, state authorities, [...]

by timj at April 15, 2014 11:56 AM

December 18, 2013

Lanedo Blog

A quest for speed in compiling

Ever spent time watching a scrolling console, wishing that compilation took less time? In this post we’re going to explore various ways to speed up the build of $YOUR_SOFTWARE_PROJECT. Let’s start with a simple test case which will act as [...]

by Pierre-Eric Pelloux-Prayer at December 18, 2013 11:09 AM

December 10, 2013

Lanedo Blog

The Main Loop: The Engine of a GUI Library

In this blog post, we will have a look at the “main loop”, which is the engine of most Graphical User Interface (GUI) libraries. Different GUI libraries implement their own main loop. When porting a GUI library to another platform, [...]
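
As a rough illustration of the pattern (a hypothetical sketch, not the API of any particular toolkit; dispatch_event is an assumed helper, and GLib, for example, wraps all of this in GMainLoop and GSource), a poll()-based main loop skeleton in C might look like this:

#include <poll.h>
#include <stdbool.h>

/* Hypothetical per-source handler, for illustration only. */
extern void dispatch_event (int fd);

/* Block until an event source becomes ready, dispatch its handlers, repeat. */
void
main_loop_run (struct pollfd *sources, int n_sources, bool *quit)
{
  while (!*quit)
    {
      poll (sources, (nfds_t) n_sources, -1);  /* sleep until something happens */

      for (int i = 0; i < n_sources; i++)
        if (sources[i].revents & POLLIN)
          dispatch_event (sources[i].fd);      /* run the handlers for that source */
    }
}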

by Kristian Rietveld at December 10, 2013 06:48 PM

December 04, 2013

Lanedo Blog

Exploring the LibreOffice code base

Opening LibreOffice’s source code for the first time, the amount of code that a new developer has to sift through can be intimidating and off-putting. So, here are some useful locations within the LibreOffice source directory structure that should help [...]

by Eilidh McAdam at December 04, 2013 11:09 AM

November 26, 2013

Lanedo Blog

Filesystem monitoring in the Linux kernel

At Lanedo we’ve been working on filesystem monitoring in many contexts, like Gvfs/GIO development or Tracker, and we are often asked which interfaces are available in the Linux kernel… The history behind filesystem monitoring interfaces in Linux [...]
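
For a taste of the topic, inotify is presumably among the kernel interfaces the full post covers; a minimal usage sketch (watching /tmp, with error handling omitted) looks like this:

#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int
main (void)
{
  char buf[4096];
  int fd = inotify_init ();

  inotify_add_watch (fd, "/tmp", IN_CREATE | IN_DELETE);

  for (;;)
    {
      ssize_t len = read (fd, buf, sizeof buf);   /* blocks until events arrive */
      if (len <= 0)
        break;

      /* the buffer holds one or more variable-length inotify_event records */
      for (char *p = buf; p < buf + len; )
        {
          struct inotify_event *ev = (struct inotify_event *) p;

          if (ev->len > 0)
            printf ("%s: %s\n",
                    (ev->mask & IN_CREATE) ? "created" : "deleted", ev->name);

          p += sizeof (struct inotify_event) + ev->len;
        }
    }
  return 0;
}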

by Aleksander Morgado at November 26, 2013 04:37 PM

November 19, 2013

Lanedo Blog

Cross-Compiling DirectFB for an Embedded Device

At Lanedo, we quite often deal with embedded hardware and need to compile packages for it. Most of the time, this is not done on the embedded hardware itself but on our more powerful desktop machines. And most of the [...]

by Michael Natterer at November 19, 2013 06:26 PM

November 13, 2013

Tim Janik

Tobin – Statistics from Webserver Logs

Reblogged from the Lanedo GmbH blog: During recent weeks, I’ve started to create a new tool, “Tobin”, to generate website statistics for a number of sites I’m administering or helping with. I’ve used programs like Webalizer, Visitors, Google Analytics and others for a long time, but there are some correlations and relationships hidden in web server [...]

by timj at November 13, 2013 12:42 AM

September 02, 2013

Tim Janik

Open Source In Business at Campus Party Europe

Next Friday I’ll be giving a talk on Open Source In Business at the Campus Party Europe conference in the O2 arena, London. The talk is part of the Free Software Track at 14:00 on the Archimedes stage. I’m there the entire week and will be happy to meet up, so feel free to drop me a line in [...]

by timj at September 02, 2013 12:24 AM

August 06, 2013

Tim Janik

Should We Include Documentation Builds In Tarballs?

Reblogged from the Lanedo GmbH blog: Would you want to invest hours or days into automake logic without a use case? For two of the last software releases I did, I was facing this question. Let me give a bit of background. Recently the documentation generation for Beast and Rapicorn fully switched over to Doxygen. [...]

by timj at August 06, 2013 10:24 PM

July 25, 2013

Tim Janik

Tor exit node for less than a week

During a conference some while ago, Jacob Appelbaum gave a talk on the usefulness of the Tor project, allowing you to browse anonymously, liberating speech online, enabling web access in censored countries, etc. Jacob described how the anonymizing Tor network consists of many machines world wide that use encryption and run the Tor software, which are [...]

by timj at July 25, 2013 09:44 AM

July 23, 2013

Lionel Dricot

The Last GUADEC?

Last year, during GUADEC, there was that running joke amongst some participants that this was the last GUADEC. It was, of course, a joke. Everybody was expecting to see each other in Brno, in 2013.

One year later, most of those who were joking are not coming to GUADEC. For them, the joke became a reality.

I believe the root cause is that GNOME has never been able to clearly offer an answer to one very simple question: what is GNOME? (baby don’t hurt me, don’t hurt me, no more)

People are increasingly leaving the desktop computer to use phones, tablets and services in the cloud. ChromeOS has successfully filled the gap between desktop and mobile devices and is becoming the dominant OS. Most people don’t need more than a Chromebook. In fact, it’s way easier to fill most current needs with a Chromebook.

One could say that the professional world is not following, that GNOME is targeting businesses or those who can’t work on a simple notebook/tablet. But we know that this is only a matter of time, that enterprises are simply lagging, on purpose or not. After all, some are still using Windows NT. And what was impossible to do in the cloud one year ago is already becoming the standard, like basic photo editing or video conference.

Of course, Android and Chrome OS are not free. Worse, the recent PRISM scandal has highlighted the true importance of free software and transparent web services. Thousands of people understood the problem and decided to download the most popular free operating system of our time: CyanogenMod, the free version of Android, which reacted to PRISM by offering an incognito mode.

The switch is deeper and quicker than anything we imagined. Take a look at the screens during a free software hackers’ conference. Yes, that’s it: Unity. Besides some MacBooks and Chromebooks, it’s Unity everywhere. Unity, which abandoned GTK+ to switch to Qt, renaming Qt Creator to Ubuntu SDK. Even Subsurface, Linus Torvalds’ pet project, is switching from GTK+ to Qt. If you spot a GNOME desktop at a conference, chances are that you are dealing with a Red Hat employee. That’s it. According to Google Trends, interest in GNOME and GTK+ is soon to be extinct.

For years, I’ve been a proud GNOME supporter. I’ve been increasingly interested in the usability of the desktop, in the innovation of GNOME 3. But, today, who really cares about Unity/GNOME/KDE or GTK+/Qt when all you need to do is launch a browser full screen? All I need, all I want, are web-based versions of the free software I use. Not a WinXP replacement.

Only a few years ago, GNOME was at the centre of the creative world. Remember Maemo and the Nokia 770? This was ground-breaking. A full-fledged mobile OS was the future. Multiple companies emerged from the chaos to provide support, expertise, innovation. But the last remaining bastard child of this era, Tizen, was definitively buried only a couple of weeks ago.

I can’t accept that all we will keep from this wonderful story is a bunch of coloured t-shirts. We are multiple companies that were created during the GNOME golden era. We are a family of hackers, volunteers and friends. We are a community. We share a lot of experience, we share values. The free software ecosystem has produced hugely successful products which are still unmatched by any cloud offering: Gimp, Inkscape, LibreOffice, Blender, to name a few. Even my own pet project, Getting Things GNOME, has no satisfactory web equivalent. And when a web solution exists, it is often a proprietary, centralized, privacy-crushing one. There’s surely room for free solutions. That’s why LibreOffice is already investigating the web/mobile space.

For all those reasons, I would like to take the time to sit down together and discuss the future of GNOME and the free software business during a BOF. How can we evolve? Can we move the GNOME spirit into a web browser? How can we make use of our history and bring freedom to users instead of becoming just another web dev consultancy?

How can we ensure, together, that this will not be the last GUADEC?

 

Picture by Ana Rey


by Lionel Dricot at July 23, 2013 09:28 AM

June 24, 2013

Lionel Dricot

Flattr’s biggest problem

And how I work around it

A few months ago, I tried to convince you to spend 2€ per month to reward the content you like.

Lots of people are enthusiastic about Flattr. But there’s a recurring complaint that there’s not enough content accepting Flattr.

This is a real concern. Flattr was built around a model where more or less every creator has a personal blog or website. Unfortunately, there’s a clear trend that creators are now increasingly regrouping on a few centralized platforms. Creators don’t control the platform and, as such, cannot add a Flattr button.

This raises a lot of questions about centralization and gives the feeling that, besides a few blogs like mine, there’s nothing to Flattr on the web. It looks like you are standing alone on a desert island with your money.

Enter the unclaimed Flattr

Did you know that you can flattr someone who is not on Flattr yet? It is called an “unclaimed Flattr”. The money remains on your account until the creator signs up. So don’t hesitate to use this feature.

What is great about unclaimed Flattrs is that you can flattr nearly everything on the web without thinking about it. If, for a given month, you flattr only unclaimed things, it will cost you nothing for that particular month.

To flattr something, even if it has no Flattr button, install the Flattr browser extension for Firefox or for Chrome. When the content you are currently viewing can be flattred, a little Flattr icon will be shown in the address bar. Click on it and confirm your flattr.

Unfortunately, not every web page can be flattred. So let me explain how I managed to give 62 flattrs this month.

Flattring blogs, articles, comics and social network messages

The web extension allows me to see immediately if a blog or a website has a Flattr account. I can even flattr each Wikipedia page!

If there’s no Flattr icon, I use this little trick: I flattr the Twitter account of the blog. If I liked the article Foo from blogger Bar, I simply find the tweet from @Bar that announced the article Foo. I then click on the tweet’s date to open the tweet as a full page.

If you do that, you will see that the Flattr icon appears in your address bar. Indeed, you can flattr individual tweets. That makes one more pending flattr for this creator.

[Screenshot: flattring an individual tweet]

Of course, I also sometimes flattr individual tweets that I particularly enjoy. The same can be done with pictures on Instagram and even messages on App.net, but I haven’t found anyone using the latter. Maybe it would be nice to be able to flattr a Tumblr post?

Flattring videos, music and podcasts

Every video on YouTube or Vimeo can be flattred if you have the browser extension installed. The same goes for any audio track or podcast on SoundCloud. If I appreciate content on these platforms, I flattr it without any second thought. Unfortunately, Dailymotion support is missing.

Grooveshark also has a Flattr setting that lets you automatically flattr any artist you listen to. Most artists are not registered on Flattr yet but I flattr them anyway. If any artist I like ever complains about piracy, I will happily point to the money waiting on Flattr.

Flattring pictures

To get illustrations for this blog, I look on Flickr or 500px for Creative Commons pictures. Any picture on those platforms can be flattred and I make sure to do it for each picture I re-use. Sometimes the author is even already on Flattr, such as this one, used in an article in French.

I wish I could also flattr pictures on DeviantArt but, on 500px and Flickr, I have no hesitation in flattring any nice picture that randomly appears in front of my eyes.

Software

There’s a growing list of software accepting Flattr donations. I try to regularly flattr software I use. If your favourite piece of code is not on Flattr, you can still flattr any tweet from the official account.

Also, any GitHub repository and any commit can be flattred. I tend to flattr external contributions to my own projects, or commits fixing an annoying bug.

For example, I started flattring commits to the repository of the WordPress theme I use for this blog, even though the author was not on Flattr (he joined recently). If you like the theme of this blog, don’t hesitate to give a little flattr to the GitHub repository.

Automatic flattring

Best of all, you can make it automatic. In your Flattr preferences, you can link your accounts so that, for example, each time you like a YouTube video or an Instagram picture, it receives a flattr.

FlattrStar is a third-party service which extends this functionality. It allows you to flattr every favourited tweet, your favourite artists on Last.fm and much more.

Conclusion

Not everything is flattrable and this is a problem. Each time you interact with a creator, don’t hesitate to talk about Flattr. She might not be immediately interested but it may ring a bell if multiple fans start to ask for a Flattr button. Don’t hesitate to suggest ideas. What about a 9gag Flattr integration? Or a Reddit integration? (EDIT: Reddit integration is possible through Fleddit)

In the meantime, there’s already a huge amount of content that can receive flattrs. If this is not enough for you, keep your Flattr fee at the minimum of 2€ per month and, like me, continue to make unclaimed flattrs. You can also subscribe to a few charities. Charities don’t pay the 10% Flattr fee: 100% of your money goes directly to them.

In the worst case, you will spend 2€ per month to help charities and creators. In the best case, you will have fuelled a cultural revolution.

 

Picture from Daniel Colquitt


by Lionel Dricot at June 24, 2013 06:28 PM

May 27, 2013

Lionel Dricot

Become a Patron of Arts and Letters

With the Internet, artists are facing the challenge that people no longer need to buy physical media to enjoy their work. I believe that this is a very good thing, as it allows any piece of work to be sold for a Free Price while enjoying the freedom of the web. The next technical challenge, then, is to make it as easy as possible to pay a free price for anything you like. I’ve already told you at length about Flattr, which allows you to “like with money” anything on the Internet.

But what if you really like an artist, a blogger, a filmmaker? What if you want to encourage a creator to do more or to keep going? Here comes Patreon.

The principle of Patreon is very simple: for every piece of work by a given creator, you pledge a given amount. The more she/he releases, the more you spend (but you can set a monthly limit). And, as with Kickstarter, you can get some extras with your pledge. Just see my page for an example.

The idea is so simple that, unlike Flattr, I don’t see how I will be able to make awfully long blog posts about the subject for months.

Of course, Patreon is not perfect. A given creator cannot have multiple projects (what if you are a blogger and a video maker? Or what if you have two blogs?). A credit card is required (Bitcoin support would be awesome). I will probably find more flaws, but the idea is really nice and complementary to Flattr.

I don’t really expect to attract patrons but, being curious, I had to give it a try. If you like the idea too, don’t hesitate to test it and become my patron.

 

Picture by Martin Beek


by Lionel Dricot at May 27, 2013 12:28 PM

May 22, 2013

Lionel Dricot

How I Learned to Stop Worrying and Love the Web

Ce billet est disponible en français.

When I started producing content for the web, as a hobbyist filmmaker, I was very enthusiastic about Creative Commons licenses. But not for my videos. Someone could use them in a bad way. I didn’t want that. The “bad way” was not clearly defined, something about nazis or paedophiles, but I was nonetheless fearing it.

Then I created a blog. I decided to publish my posts under a CC By license, moving a step towards openness. Except for some more important texts, which were under the CC By-ND license. Because I didn’t want them to be modified. You know, those texts were “important”; I was an artist, I had to keep full power over my creations.

Comments were a metric for my success. The more the comments, the better. When comments started to fade out, replaced by social networks, my new metric for success was the number of visitors per day. I could spend hours watching my statistics, exploring the sources of visitors.

I gradually switched everything to a CC By license, realizing that my fear of being misused was too abstract to justify not giving freedom to my readers. But I still asked for a link to my blog whenever I could, in order to attract visitors, to see them in my statistics. I rarely posted on other blogs. My creations had to stay centralized.

Like moths around a sparkling light, bloggers are attracted by statistics. Google Analytics, PageRank, Twitter followers, Klout, Ebuzzing. It is addictive, time consuming and useless. I decided to quit.

I started to cross-post my content over multiple places where I don’t have full control, such as Medium. I removed everything but the Flattr button. Yes, everything, including the G+/Facebook buttons and the Piwik/Google Analytics plugins. I don’t know any more how many people are reading me, how many reshares I have. I don’t care. I want to be free and, in order to achieve that, I had to free my creations first.

It took me ten years to overcome my irrational fears of the web. Today, I feel like I’m just discovering a new world. I’m a newborn. I’m not a creator asking to be admired by the non-creator mass. I’m someone contributing and dropping some little creation into a huge creative chaos where everybody is, in a way or another, a creator. Which is awesome.

If you like something, copy it, modify it, share it, re-create it. A text lives only when someone is reading it. A creation needs an audience.

Thanks for caring, thanks for sharing.

 

Picture by Epoxides


by Lionel Dricot at May 22, 2013 05:28 PM

May 17, 2013

Lionel Dricot

The Fight for E-Clothing

I meet Karl Isrich in a small restaurant. You may have heard about the company he founded, MyVirtualTaylor, a pioneer of e-clothing. You would probably imagine Karl as one of those twenty-something golden boys. Instead, I face an average, anxious guy, approximately forty years old with greyish hair.

He asked me to go to this cheap restaurant because he could not afford a more expensive dinner. Lawyers, he said. When we sat down, he gave me a business card that used to be shiny six months ago. It simply says “MyVirtualTaylor, Isrich CEO”.

Hello Karl, thanks for the meeting. MyVirtualTaylor is an e-clothing company. But what is e-clothing exactly?

Simply put, it’s 3D printing for clothes. We have developed a clothing printer that we sell and which is the size of a washing machine. Not being bigger than a washing machine was one of our top requirements before the launch.

The clothing printer has a tank of polymer, which you need to refill regularly, and seven dye tanks. We discovered that having seven primary colors was a good compromise for reproducing most colors.

Over Wi-Fi, you send a .clo file to the printer, then wait between ten minutes and one hour, depending on the size and the complexity of the model. Everything is automatic; you can even print a bunch of .clo files in a row.

How do you get a .clo file?

We have an online editor on our website that allows you to design your own clothes. We have also some standard templates: shirts, ties, stuff like that.

In fact, when we launched, we didn’t really think about that. We thought that there would be a new market for clothes creators. That’s why we wanted the .clo format to be open and documented. We sell the hardware but we didn’t want to enter the clothing market.

Can you really print anything? What are the limitations?

Currently, there are some constraints on size. We have prototypes that can print something as big as a king-size bed sheet. But, of course, you can only print clothes made of polymer. No silk or fabric.

Isn’t that a big limitation? After all, most of our clothes are made of fabric.

It should be noted that a lot of progress has been made with polymers. We can weave the polymer in a lot of different ways in order to get the properties we want.

But, most importantly, clothing material has always been about finding a compromise between style, comfort and durability. Durability is the critical point for quality clothes: they have to go through hundreds of washing cycles. Our solution was to remove durability from the equation.

Do you mean that printed clothes are not durable?

No, they aren’t. But that is not the goal. Instead of cleaning them, you put them in the clothing printer and the polymer is cleaned, melted and ready to print new clothes.

Unfortunately, we still cannot extract the colors. The polymer is thus not perfect. We store the recycled polymer in a separate tank. When you print, you can allow the use of recycled polymer or not. It is good enough for every day but if you want a perfect white shirt for a wedding, you probably want the unused polymer.

The part of the polymer which is worn out goes with the waste to the sewers.

It sounds like an ecological disaster.

That’s exactly the rumor spread by our opponents.

But, while it is not perfect, you have to compare it with the traditional clothing industry. Clothes are usually made in huge factories in China, using harmful chemicals. Then you have to take into account the transport, the storage, the shop. Not to mention the gas needed to get to the shopping mall. To that, add the water and the soap used to wash the clothes. By contrast, we basically use electricity and release very little polymer. With time, we hope to be able to recycle more and more.

Did you talk about opponents?

You know, I’m an engineer. I never really cared about anything but the technological aspects. When the first clothing printers were sold, people immediately started to exchange .clo files. They took their own clothes and made .clo files to be able to reproduce them.

One day, I received a letter from the lawyers of the FCIAA, the Fashion & Clothing Industry Association of America. I had never heard of them before but, basically, they wanted me to shut down my company because I was threatening their business.

I thought it was a joke. Really. At first I was like: “Funny. It’s like the candle industry suing Edison for inventing the lightbulb.” But it’s not funny any more.

I can talk about this for hours. They are bad. Really bad. They are trying to destroy my life.

Can’t you let the lawyers handle that?

For the lawsuit, of course. But there’s a lot more. I’ve been contacted by politicians. They say that I’m destroying the economy. If my product works, there will be no clothing shops, hence no jobs. They asked me: “Do you know how many Americans are working in clothing shops?”. I was accused of being unpatriotic. From nowhere, new laws appeared saying that clothes need a certification in order to save children from accidental suffocation.

From that point, it became immoral to print clothes. Last year, nobody had ever thought about printing clothes and, now, it is worse than eating babies alive. There are even webshops where you can order “Not Printed” labelled t-shirts. I’ve been attacked personally, investors have turned their backs on me and, at the same time, I still need to pay expensive legal fees.

Isn’t it true that it’s a threat to the economy?

It is a tool for making life easier. Any invention which frees people from unnecessary labor seems to be a threat to the economy. But if our economy is threatened by inventions that make life better for everyone, it’s the economy we need to change, not the inventions.

What will you do next?

I feel bitter. I’m an engineer with a new, useful idea and everyone turns against me: big corporations, lawyers, politicians. Even random people in the street think that “it’s the guy destroying jobs and suffocating babies”. I never signed up for that. I’ve never been into politics or anything like that. Now, I’m thinking about settling somewhere in Europe but I’m afraid that the hand of the FCIAA will follow me there.

Thanks Karl, I wish you the best.

Although, as a journalist, I know I should remain objective, I can’t help feeling empathy for the guy. As I’m packing up, I notice his clothes for the first time. “So, are those printed?” “Of course.” “Very nice. It’s impressive.” He sighs, then tries to smile at me: “Thanks. If you are interested, you will find the .clo on the Pirate Bay.” His smile feels sad, despairing. We shake hands and he slowly walks away while I stay there, helpless.

 

This post is part of the Letters from the Future collection and is dedicated to Brokep for announcing his political involvement during the writing of this text. Picture by Anna Banana.


by Lionel Dricot at May 17, 2013 04:28 PM

May 08, 2013

Lionel Dricot

The Cost of Being Convinced

When debating, we usually assume that opinions merely result from being exposed to logical arguments. And from understanding them. If the arguments are logical and understood, people will change their minds.

Anybody who has been on the internet long enough knows that this never happens. Everybody keeps their own position. But why?

The reason is simple: changing opinion has a cost. A cost that we usually ignore. A good exercise is to try to evaluate this cost before any debate. For yourself and for your counterpart.

Let’s take a music fan who is convinced that piracy hurts artists. Convincing him that this is not the case and that piracy is not immoral means to him that, firstly, he was dumb enough to be brainwashed by major companies and that, secondly, the money he spent on CDs was a complete waste.

Each time you tell him “Piracy is not hurting artists and is not immoral”, he will hear “You are stupid and you wasted money for years”.

This is quite a high cost but not impossible to overcome. It means that arguments should not only convince him, but also overcome that cost.

Worse: intuitively, we take the symmetry of costs for granted.

Let’s take the good old god debate.

For the atheist, the cost of being convinced is usually admitting to being wrong. This is a non-negligible cost but one that can sometimes be paid. Most non-hardcore atheists are thus quite ready to be convinced. They enter any religious debate expecting the same mindset from their opponents.

But the opposite is not true. For a religious person, believing in god is often a very important part of her life. In most cases, it is something inherited from her parents. Some life choices have been made because of her belief. The person is often engaged in activities and societies related to her belief. It could go as far as being the core foundation of her social circles.

When you say “God doesn’t exist”, the religious person will hear “You are stupid, your parents were liars, you wrecked your life and you have no reason to see your friends anymore”.

It looks like a joke, right? It isn’t. But, subconsciously, it is exactly what people feel and understand. No wonder that religious debates are so emotional.

Why do you think that some religious communities fight any individual atheist? Why do you think that every religion always tries to get money or personal involvement from you? Because they want to increase the cost of not believing in them. Scammers understand that very well: they will ask you for more and more money to increase the cost of realizing it’s a scam.

Before any argument, any debate, ask everyone to answer sincerely the question: “What will happen if I’m convinced? What will I do? What will change in my life?”

More often than not, changing opinion is simply not an option. Which settles any debate before it starts.

And you? Which of your opinions are too costly to be changed? And what can you do to improve the situation?

 

Picture by r.nial.bradshaw


by Lionel Dricot at May 08, 2013 09:42 AM

February 15, 2013

Martyn Russell

tracker-search gets colour & snippets!

Recently Carlos added FTS4 and snippet support to Tracker. We merged that to master after doing some tests, and it has also reduced the database size on disk. I released 0.15.2 yesterday with the FTS4 work, and today I decided to add a richer experience to tracker-search.
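
For context, this is roughly what FTS4 snippets look like at the SQLite level (a standalone illustration, not Tracker’s actual schema or queries):

#include <stdio.h>
#include <sqlite3.h>

int
main (void)
{
  sqlite3 *db;
  sqlite3_stmt *stmt;

  sqlite3_open (":memory:", &db);
  sqlite3_exec (db, "CREATE VIRTUAL TABLE docs USING fts4(body);", 0, 0, 0);
  sqlite3_exec (db, "INSERT INTO docs VALUES"
                    " ('Sue mailed a copy of her passport to the embassy');", 0, 0, 0);

  /* snippet() returns the matched terms in context, here wrapped in [..] */
  sqlite3_prepare_v2 (db, "SELECT snippet(docs, '[', ']', '...')"
                          " FROM docs WHERE body MATCH 'passport';", -1, &stmt, 0);
  while (sqlite3_step (stmt) == SQLITE_ROW)
    printf ("%s\n", (const char *) sqlite3_column_text (stmt, 0));

  sqlite3_finalize (stmt);
  sqlite3_close (db);
  return 0;
}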

Below you can see me searching for “passport” and “sue”, found in some of the documents indexed on my machine. The colour is quite nice for separating hits from the snippets/contexts where the terms were found. A search without any arguments really will search ALL resources in the database:

[Screenshot: tracker-search with coloured hits and snippets]

This second screenshot shows a search for “love” across all music in particular. So you can use this for all areas of tracker-search:

[Screenshot: tracker-search snippets limited to music]

With any luck, we will be releasing 0.16.0 in time for the next GNOME release with all of this available!

by mr at February 15, 2013 05:11 PM

September 26, 2012

Eilidh McAdam

Importing OOXML Ink annotations into LibreOffice

So, I’ve been having fun traversing the LibreOffice .docx and .rtf import filters while trying to implement Ink annotations in LibreOffice Writer. As it turns out, I don’t strictly agree with the ISO press release that the Office Open XML file format is “intended to be implemented by multiple applications on multiple platforms”. However, despite having spent way too much time wading through a ~24k-line XML file defining the document model in the importer, I’ve been very grateful that the format is XML-based and therefore human-readable.

I’ve included some useful resources at the end for any intrepid programmers who wish to help with tackling the importer beast.

DOCX import

[Image: drawn Ink annotations]

Ink allows you to annotate using a stylus on a tablet PC using Microsoft Word so that you can doodle over your documents as you see fit. Technical details ahead, so feel free to skip to the results.

Ink strokes are saved in docx documents as bezier curves expressed through VML paths (these are pretty similar to SVG paths, with commands and co-ordinates). I had quite a bit of fun hacking a parser together – here’s the patch; with a few tweaks, it could be generally useful. It produces a list of all the sub-paths in a path, each sub-path consisting of a list of co-ordinates and co-ordinate flags indicating normal or control points.
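
In other words, the parser output has roughly this shape (a hypothetical illustration; the names are mine, not the patch’s):

/* Each sub-path is a run of points, each flagged as a normal point or a
 * bezier control point; a path is a list of such sub-paths. */
typedef struct { long x, y; int is_control; } PathPoint;
typedef struct { PathPoint *points; int n_points; } SubPath;
typedef struct { SubPath *subpaths; int n_subpaths; } PolyPath;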

Word’s storage of Ink annotations does highlight some of the problems with implementing Word-compatible OOXML. They’re represented something like this:

<v:shape path="[VML path]" ...>
...
<o:ink i="[base64 binary data]" annotation="t"/>
</v:shape>

Now, [VML path] is really the important part as it contains the Ink shape geometry. But the [base64 binary data]? What’s stored in there is anybody’s guess – I’ve certainly not found any documentation on its contents. Anyone who has a tablet version of Word should feel free to take a crack at reverse engineering it ;)

It turns out that paths for Ink annotations consist of bezier curves. Beziers weren’t supported in the importer, so the path attribute, as well as any <v:curve> elements (which use control1, control2, to and from attributes), got ignored. So I added the support by getting the path and control1/control2 attributes and passing off the parsed result to LibreOffice using the UNO API.

RTF import

Word allows you to export a document with Ink to RTF. Most of the code for importing the RTF equivalent was already there, it just needed some adapting. I found something interesting that I haven’t seen documented elsewhere, furthering Miklos Vajna’s work (see README) on understanding the RTF spec. The geometry of the Ink shapes is described using the pVerticies [sic] and pSegmentInfo keywords. The pSegmentInfo section is a list of commands indicating what the points listed in pVerticies mean (move to, curve, end sub-path and so on).

Segment indicator | Description | Vertices associated
0x0001 | Line to point | 1 (x, y)
0x2001 | Bezier curve with two control points and end point | 3 (cx1, cy1, cx2, cy2, x, y)
0x4000 | Move to point | 1 (x, y)
0x6001 | Close path | 0
0x8000 | End path | 0

The plot thickens…

So, when importing a Word-generated .rtf with Ink annotations, why was I seeing segment indicators like 0x200A? Apparently, the low-order byte of certain segment indicators holds the number of point sets the segment type applies to – for example, four curves in a row with three points each in pVerticies can be specified using the low-order byte of the segment indicator, resulting in 0x2004 (encompassing 12 points in total). This may also apply to other relevant line segment types, but that is as yet untested. You can easily extract the number of segments indicated using basic bitwise operators:

unsigned int segment = 0x200A; // Example segment indicator: bezier curves, repeated 10 times
unsigned int count = segment & 0x00FF; // Low-order byte carries the number of segments
segment &= 0xFF00; // Discard the count; keep the segment type

Woo ink in LibreOffice!!

[Image: drawn Ink annotations in LibreOffice]

LibreOffice now correctly displays not only Ink but (in theory) any curves and shapes with paths when importing from .docx or .rtf. A minor bug with RTF image wrapping, which caused the shape to be inline with the text instead of over it, was also fixed (the property was just being ignored), so better imports all round!

Next step – correct export of bezier shapes to docx and rtf (no, I’m still not sure whether that blob of binary in the o:ink element is of any importance whatsoever, but this should be one way to find out).

Resources

v:shape schema information – this website is great for making sense of the OOXML standard, particularly if used alongside this:
ISO IEC 29500 – ISO standard document for OOXML (warning, big pdf in a zip).
RTF spec – only somewhat useful
UNO API reference – useful if used with the search function
writerfilter and oox – LibreOffice modules of interest for importing OOXML documents (cgit links for browsing the source/READMEs)

by Eilidh at September 26, 2012 05:23 PM

July 05, 2012

Eilidh McAdam

Tech update: LibreOffice cross compile MSI installer generation


I’m working on allowing a Windows Installer (.msi) package for LibreOffice to be built when cross compiling under Linux. So far, it has been a broad-ranging project and has covered:

  • Windows, MSI and Cabinet APIs (C, SQL, Wine, winegcc)
  • LibreOffice build system (Perl, autotools)

Project status as of posting:

  • Developing on openSUSE 12.1 (x86_64) to target Windows (i686).
  • .msi files can be created and taken apart with the cross MSI tools (cgit) msidb and msiinfo.
  • Cabinet files can be extracted but not created. However, parsing Diamond Directive file (.ddf) format is supported through makecab (this is required when the LO build system creates a cabinet).
  • Remaining: hook up MSI transforms and patches (msitran and msimsp); fit the tools into the build system; clean up and maintenance.

The MSDN documentation for the win32 native tools has been linked to where appropriate.

1.1 Cross compiling LibreOffice

Luckily, LibreOffice cross compile support is already very good. README.cross in the LibreOffice root directory has far more information. Assuming you have checked out LibreOffice and have all the MinGW dependencies, cross compiling can be as simple as changing <lo_root>/autogen.lastrun to read:

CC=ccache i686-w64-mingw32-gcc
CXX=ccache i686-w64-mingw32-g++
CC_FOR_BUILD=ccache gcc
CXX_FOR_BUILD=ccache g++
--with-distro=LibreOfficeMinGW

This references <lo_root>/distro-configs/LibreOfficeMinGW.conf. This folder
contains various configurations for compiling under different circumstances. I
also found it helpful to add this line to LibreOfficeMinGW.conf to make life
simpler:

--without-java

1.2 Building the installer

The installer build logic can be found in <lo_root>/solenv/bin/modules/installer/windows. It makes use of several Microsoft utilities to eventually output an MSI file. Some of these utilities are already distributed by Wine.
Provided by Wine:
expand.exe – Used to unpack cabinet files.
cscript.exe – Command line script host.
Also expected:
msidb.exe – Manipulates installer database tables and streams.
msiinfo.exe – Manipulates installer meta data (summary information).
makecab.exe – Compresses files into cabinets.
msimsp.exe – Creates patch packages.
msitran.exe – Generates and applies database transforms.

Wine already exposes most of the required functionality via the API exposed by msi.dll (MSDN, Wine) and cabinet.dll (MSDN, Wine). My work has been focussed on writing command line utilities that support the interface expected by the LibreOffice build scripts.
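
For a flavour of what such a wrapper boils down to (an illustrative sketch, not the actual msidb sources; it uses only documented msi.dll calls and would be built with winegcc against Wine’s msi library), exporting one table to a text archive file looks roughly like this:

#include <windows.h>
#include <msi.h>

/* Export one table from an .msi database as an .idt text archive file
 * in out_dir, via the msi.dll API that Wine provides. */
int
export_table (const char *msi_path, const char *table, const char *out_dir)
{
  MSIHANDLE db;

  if (MsiOpenDatabaseA (msi_path, (LPCSTR) MSIDBOPEN_READONLY, &db) != ERROR_SUCCESS)
    return 1;

  UINT r = MsiDatabaseExportA (db, table, out_dir, "table.idt");

  MsiCloseHandle (db);
  return r == ERROR_SUCCESS ? 0 : 1;
}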

  • solenv/bin/make_installer.pl is a very large Perl script that connects up the Perl modules which build the installer. The .pm files relevant to cross MSI building are listed below.
  • solenv/bin/modules/installer/control.pm performs “nativeness” logic such as checking if the environment is Cygwin and whether the required utilities are in the system path.
  • solenv/bin/modules/installer/windows/admin.pm (expand.exe*, msidb.exe, msiinfo.exe)
  • solenv/bin/modules/installer/windows/mergemodule.pm (expand.exe*, msidb.exe)
  • solenv/bin/modules/installer/windows/msiglobal.pm (msidb.exe, msiinfo.exe, cscript.exe*, msitran.exe**, makecab.exe**)
  • solenv/bin/modules/installer/windows/msp.pm (msidb.exe, msimsp.exe**)
  • solenv/bin/modules/installer/windows/update.pm (msidb.exe)
  • * Distributed by Wine
    ** In progress

1.3 Cross MSI tool development

The code for these tools can be found in the feature/crossmsi branch of LibreOffice. It currently resides in setup_native/source/win32/wintools in the tree.

To test the tools individually, grab the dev Makefile and run make from the tool’s directory. You can then pass -? or /? for usage information. I would suggest disabling Wine’s debug logs unless you specifically need them:

$ export WINEDEBUG=-all

  • msidb (MSDN msidb, LibreOffice msidb, dev Makefile)

    Usage: msidb [options] [tables]

    Options:
    -d <path> Fully qualified path to MSI database file
    -f <wdir> Path to the text archive folder
    -c Create or overwrite with new database and import tables
    -i <tables> Import tables from text archive files – use * for all
    -e <tables> Export tables to text archive files in <wdir> – use * for all
    -x <stream> Saves stream as <stream>.idb in <wdir>
    -a <file> Adds stream from file to database
    -r <storage> Adds storage to database as substorage

  • msiinfo (MSDN msiinfo, LibreOffice msiinfo, dev Makefile)

    Usage: msiinfo {database} [[-b]-d] {options} {data}

    Options:
    -c <cp> Specify codepage
    -t <title> Specify title
    -j <subject> Specify subject
    -a <author> Specify author
    -k <keywords> Specify keywords
    -o <comment> Specify comments
    -p <template> Specify template
    -l <author> Specify last author
    -v <revno> Specify revision number
    -s <date> Specify last printed date
    -r <date> Specify creation date
    -q <date> Specify date of last save
    -g <pages> Specify page count
    -w <words> Specify word count
    -h <chars> Specify character count
    -n <appname> Specify application which created the database
    -u <security> Specify security (0: none, 2: read only, 3: read only (enforced))

  • makecab (MSDN makecab, LibreOffice makecab, dev Makefile)

    Usage: makecab [/V[n]] /F directive_file

    Options:
    /F directives – A file with MakeCAB directives.
    /V[n] – Verbosity level (1..3)

by Eilidh at July 05, 2012 06:26 PM


April 17, 2012

Michael Natterer

Goat Invasion in GIMP

Once upon a time, like 5 weeks ago, there used to be the longstanding plan to, at some point in the future, port GIMP to GEGL.

We have done a lot of refactoring in GIMP over the last ten years, but its innermost pixel manipulating core was still basically unchanged since GIMP 1.2 days. We didn’t bother to do anything about it, because the long term goal was to do all this stuff with GEGL, when GEGL was ready. Now GEGL has been ready for quite a while, and the GEGL porting got assigned a milestone. Was it 2.10, 3.0, 3.2, I don’t remember. We thought it would take us forever until it’s done, because nobody really had that kind of time.

About 5 weeks ago, I happened to pick up Øyvind Kolås, aka Pippin the Goatkeeper, to stay at my place for about a week and do some hacking. After one day, without intending it, we started to do some small GEGL hacking in GIMP, just in order to verify an approach that seemed a good migration strategy for the future porting.

The Problem: All the GimpImage’s pixels are stored in legacy data structures called TileManagers, which are kept by high level objects called GimpDrawables. Each layer, channel, mask in GIMP is a GimpDrawable.

A typical way to do things is:

TileManager *tiles = gimp_drawable_get_tiles (drawable);
PixelRegion region;

pixel_region_init (&region, tiles, x, y, w, h, TRUE);

/* do legacy stuff on the pixel region in order to change pixels */

After the GEGL porting, things would look like this:

GeglBuffer *buffer = gimp_drawable_get_buffer (drawable);

/* do GEGL stuff on the buffer, like running it through a graph in order to change pixels */

Just, how would we get there? Replacing the drawable’s tile manager by a buffer, breaking all of GIMP at the same time while we move on porting things to buffers instead of tile managers? No way!

The Solution: A GeglBuffer’s tiles are stored in a GeglTileBackend, and it’s possible to write tile backends for arbitrary pixel storage, so why not write a tile backend that uses a legacy GIMP TileManager as storage?

After a few hours of hacking, Pippin had the GimpTileBackendTileManager working, and I went ahead replacing some legacy code with GEGL code, using the new backend. And it simply worked!

The next important step was to make GimpDrawable keep around a GeglBuffer on top of its TileManager all the time, and to add gimp_drawable_get_buffer(). And things just kept working, and getting easier and easier the more legacy code got replaced by GEGL code, the more GeglBuffers were being passed around instead of TileManagers and PixelRegions.

What was planned as a one-week visit turned into 3 weeks of GEGL porting madness. At the time this article is written, about 90% of the GIMP application’s core is ported to GEGL, and the only thing really missing is GeglOperations for all layer modes.

As a totally unexpected extra bonus, there is now even a GEGL buffer tile backend in libgimp, for plug-ins to use, so also plug-ins can simply say gimp_drawable_get_buffer(drawable_ID), and use all of GEGL to do their stuff, instead of using the legacy pixel region API that also exists on the plug-in side.
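
For illustration, the plug-in-side pattern could look something like this (a sketch against the GIMP 2.9 development API as described here; names and details may change before 2.10):

#include <libgimp/gimp.h>
#include <gegl.h>

/* Get a GeglBuffer for a drawable and read a region of it as 8-bit RGBA. */
static void
read_region (gint32 drawable_ID, const GeglRectangle *roi, guchar *rgba_out)
{
  GeglBuffer *buffer = gimp_drawable_get_buffer (drawable_ID);

  gegl_buffer_get (buffer, roi, 1.0, babl_format ("R'G'B'A u8"),
                   rgba_out, GEGL_AUTO_ROWSTRIDE, GEGL_ABYSS_NONE);

  g_object_unref (buffer);
}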

GIMP 2.10’s core will be 100% ported to GEGL, and all of the legacy pixel fiddling API for plug-ins is going to be deprecated. Once the core is completely ported, it will be a minor effort to simply “switch on” high bit depths and whatever color models we’d like to see. Oh, and already now, instead of removing indexed mode (as originally planned), we accidentally promoted indexed images to first class citizens that can be painted on, and even color corrected, just like any other image. The code doing so doesn’t even notice because GEGL and Babl transparently handle the pixel conversion magic.

The port lives in the goat-invasion branch in Git. That branch will become master once GIMP 2.8 is released, so the first GIMP 2.9 developer release will already contain the port in progress.

If you want to discuss GIMP and GEGL things with us face to face, join us at this year’s Libre Graphics Meeting in Vienna two weeks from now; a lot of GIMP people will be there. Or simply check out the goat-invasion branch and see the goats yourself.

If you have some Euros to spare, consider donating them to Libre Graphics Meeting, it’s one of the few occasions for GIMP developers, and the people hacking on other projects, to meet in person; and such meetings are always a great boost for development.

During the 3 crazy weeks, quite a few working hours were spent on the port; thanks to my employer Lanedo for sponsoring this via “Labs time”.

by Mitch at April 17, 2012 12:28 PM

March 19, 2012

Eilidh McAdam

Get into open source with GSoC 2012

Student applications for Google Summer of Code 2012 will be open very soon. After an extremely enjoyable and rewarding experience with the program last year, I feel it’s my duty to student programmers to get the word out. So, here’s why you should apply.

You get paid to work on open source software. I became a long-time user, first-time contributor early last year. Looking to give something back, I attempted a LibreOffice Easy Hack. In a case of fantastic timing, they announced their involvement in GSoC a week or so later and I got in touch. The end result was a whole new open source library. I had an amazing experience working with LibreOffice but it’s ideal to choose a project that’s personally useful. GSoC doesn’t require that you’re an open source evangelist but if you are, it’s a strong argument for applying.

It’s fantastic experience working on a large project. I feel I learned more during those three months than during my undergraduate degree course. I have to say that I never particularly enjoyed groupwork at university but it’s completely different if you’re working with smart, motivated individuals who’re there either because they want to be or because they’re paid to be (quite often both). As a nice bonus, it’s great work experience and has essentially led me to my dream job. I’m not sure if that’s a typical result, but it certainly wouldn’t hurt to have it on your CV or resume.

You meet some of the smartest, most awesome people (not all of them programmers). I think this is my favourite outcome. I’ve met people from all over the world with an assortment of beliefs, opinions and backgrounds. My experience was that some of the best hackers and coolest people (no, seriously!) hang around open source communities.

Applying isn’t difficult, just choose a participating open source organisation or two and do a little research into the suggested projects before getting in touch with them. Good luck!

by Eilidh at March 19, 2012 08:56 PM


November 15, 2011

Martyn Russell

Lanedo is hiring

We’re currently looking for anyone who has LibreOffice experience and is interested in working on the project. If that sounds like something you would like to do, get in touch with us.

Additionally, if you or anyone you know has experience running an open source business, please get in touch. We’re looking for someone who could fill a CEO-type position.

by mr at November 15, 2011 04:28 PM

October 23, 2011

Eilidh McAdam

LibreOffice Conference 2011

I’ve been home a week from the LibreOffice Conference in Paris and from a personal point of view, it was a huge success.

First of all, here are my slides from the short talk I gave about what we achieved with libvisio over the duration of Google Summer of Code. There is still work to be done but once end-user feedback starts coming in, we can sand down any rough edges.

The conference was a lot of fun, particularly the company. I had the pleasure of meeting the rest of the libvisio team, Fridrich Strba and Valek Filippov, who looked out for me the whole time I was there. I’m sure the Paris pickpockets are still cursing their names.

I also have to admit to being a little starstruck at meeting all the fantastic hackers whose work I have made so much use of. The LibreOffice team were a diverse, interesting and kind bunch who put up with my incessant (well-meaning) questions with good grace and gave me plenty to think about on coding, the universe and everything.

It was wonderful to be surrounded by programmers and Linux users without the geekier-than-thou attitude. Despite being younger (and greener) than most and female unlike many (with a few notable exceptions), I chatted away to my fellow hackers without once feeling patronised.

Finally, I’m staying out of the whole political situation – I started coding with LibreOffice for pragmatic reasons (I could get the code easily, Easy Hacks make getting to know the project simpler and LibreOffice was part of GSoC ’11). However, I think the conference really confirmed for me that as important as the code base is, the community that surrounds a project this size is as vital. Without their helpful, inclusive approach, I’d have found contributing to an open source project of that magnitude an insurmountable task.

So here’s to another year!

by Eilidh at October 23, 2011 01:49 PM

October 06, 2011

Martyn Russell

Tracker Needle with improved tagging

Given there have been a number of improvements to tracker-needle recently, I thought I would make a video to highlight some of them. A quick summary:

  • Searching for “foo” now finds files tagged with “foo”
  • Searches are limited to 500 items per category/query (to avoid abusing the GtkTreeView mainly)
  • A tag list is now available to show all hits by tags
  • Tags can be edited by the context menu per item (planned to be improved later)

Really nice to have tagging supported properly in tracker-needle now.

by mr at October 06, 2011 08:15 PM

September 16, 2011

Martyn Russell

Improved Tracker Preferences for Indexed Locations

Something I have been meaning to do for a long time is to update the preferences dialog for Tracker to make it easy to add locations which are special user directories (as per the GUserDirectory locations).

I wanted to do this in such a way that:

  • It was really easy to toggle locations as recursive or not
  • The file chooser was only necessary for non-standard locations
  • Better use of the space was made by integrating the two previous lists for single-directory and recursive-directory indexing
  • I could fix a few issues which had been reported when it came to saving using the special symbols (e.g. &DESKTOP for G_USER_DIRECTORY_DESKTOP, etc.) when one or more user directories evaluated to the same location

The result is this (now in master, and in 0.12.2 when it is released):

[Screenshot: the updated Tracker preferences dialog for indexed locations]

by mr at September 16, 2011 11:20 PM

September 09, 2011

Martyn Russell

Tracker 0.12.0 Released!

Given we (the Tracker team) want to fit into the GNOME schedule for 3.2, we decided to bring the release of 0.12.0 forward. The roadmap is mostly complete anyway.

The official announcement can be seen here.

Thank you to everyone involved!

Recently I also updated the GtkSearchEngineTracker implementation to not use hacky dlopen() calls and to use D-Bus instead. This also avoids having to update the code for each new version of Tracker that comes along. The patch attached to the bug (658272) should be applied soon (given Matthias was pushing for this sooner rather than later). So, we’re all on track!

by mr at September 09, 2011 10:10 AM

August 29, 2011

Christian Kellner

Apple Filing Protocol (AFP) support for GVfs

Last Thursday I merged the Apple Filing Protocol (AFP) backend for GVfs; so we finally have support for Apple shares now too. It was written by Carl-Anton Ingmarsson as his Summer of Code 2011 project. It is on the master branch and thus will be in the next unstable release. Please test it and report bugs against the "afp backend" component.

Carl-Anton did quite an impressive job - probably best depicted by the diffstat of the merge:

 client/Makefile.am            |    1 
 client/afpuri.c               |  269 ++
 client/gdaemonvfs.c           |    2 
 configure.ac                  |   31 
 daemon/Makefile.am            |   45 
 daemon/afp-browse.mount.in    |    8 
 daemon/afp.mount.in           |    5 
 daemon/gvfsafpconnection.c    | 1651 ++++++++++++++++
 daemon/gvfsafpconnection.h    |  420 ++++
 daemon/gvfsafpserver.c        | 1033 ++++++++++
 daemon/gvfsafpserver.h        |   85 
 daemon/gvfsbackendafp.c       | 4292 +++++++++++++++++++++++++++++++++++++++++-
 daemon/gvfsbackendafp.h       |   23 
 daemon/gvfsbackendafpbrowse.c |  608 +++++
 daemon/gvfsbackendafpbrowse.h |   47 
 daemon/gvfsbackenddnssd.c     |    6 
 daemon/gvfsjobsetattribute.h  |    1 
 17 files changed, 8491 insertions(+), 36 deletions(-)

by gicmo at August 29, 2011 11:00 AM

June 22, 2011

Eilidh McAdam

Progress with gradient fills

So, I have finally made progress that isn’t so ground-breaking that my mentor wants to write about it but is big enough that certain people will stop making fun of my empty blog. So, frob (his wonderfully useful work can be found here), I hope you’re happy.

I’ve been working on shapes, lines and their properties, most recently on fills. Here’s how it’s going so far (Visio document on top, my output below).

Thanks to frob for the image, plus animated gif.

A few technical details for those who care: Visio draws shapes (including rectangles) as individual lines, so before they can be filled we have to manually detect whether or not we have a closed polygon. At the moment, we simply take the first point, compare it to the last point and make sure there are no gaps in between. It works for most simple cases, but since when are things ever truly simple when reverse engineering?

You may also notice a difference between how gradients 31-34 are drawn in Visio vs my output. There’s no direct equivalent of this type of square gradient that I know of in the SVG or ODG specifications, so we’re approximating it. I have a whole new appreciation of slight imperfections when porting documents to different formats.

In the time it has taken to write this, I’ve already found that some of what I’ve written about will change. This is why I’m a programmer not a blogger ;)

by Eilidh at June 22, 2011 07:27 PM

March 29, 2011

Christian Kellner

Google Summer of Code 2011


Just a quick reminder: The student application period for the Google Summer of Code 2011 has opened as of yesterday (Monday, the 28th of March). Apply now! The starting point for Gnome is here; it has all the relevant information.

In addition to that, if you happen to be interested in neuroscience and informatics, the International Neuroinformatics Coordinating Facility (INCF) also got accepted as an organization (thanks Raphael!). Among other very interesting project ideas there are also two proposals that are Gnome-related (as in pygtk-based applications). If you have a cool Neuroinformatics+Gnome idea, be sure to apply at the INCF. The starting point is here.

by gicmo at March 29, 2011 11:47 AM

October 23, 2010

Michael Natterer

GIMP on GTK+ 3.0

At the GTK+ Hackfest in A Coruña I managed to get GIMP almost completely (minus one dialog and most plug-ins) running on GTK+ 3.0.

This turned out to be a great tool for finding bugs in the new GTK+. In fact, I found quite a few of them while still completing the port. Some bugs I fixed right away, others were fixed by fellow Hackfest hackers. Even while writing this post (the image was of course cropped with the ported GIMP), two more bugs popped up and will eventually be fixed.

by Mitch at October 23, 2010 11:07 PM

July 17, 2010

Christian Kellner

Back online

After being down for a while, the blog is now back online. The server moved - as I did - to Munich. EOM for now - more real news later ...

by gicmo at July 17, 2010 10:00 PM

May 12, 2009

Christian Kellner

The failure of the government ...

... continued:

"In Germany, not only is the tax burden as high as in hardly any other industrialized country, taxes and levies are also distributed in a particularly unfair way. [...] The DGB demands that the economically capable be asked to pay more, by raising the top income tax rate and reintroducing the wealth tax. [...] The Union rejects these demands. 'As far as I'm concerned, there is no need for action,' said CDU social policy spokesman Ralf Brauksiepe." (ZEIT Online)

Tax the wealthy? Apparently that is completely out of the question. But wait: the C in CDU still stands for Christian, doesn't it? Giving one's possessions to the poor - wasn't there something about that, Mr. Social Policy Spokesman? Oh, nonsense, everything is fair just as it is; including dual-earner couples paying higher levies. Right? "Of course," says Mr. Brauksiepe (ibid.).
"Economists do not share this view." - Pah. Experts. What do they know. And anyway, the people at the Family Ministry don't pay attention to them either. - "The German social security systems are still geared towards the standard family of four, with two children and a single earner." (ibid.) Aha! Who would have thought. Whoever doesn't fit into this dusty worldview has only themselves to blame, pays more because of it, and is probably an atheist, an agnostic or something worse. Being Christian is apparently reserved for one's own kind. Double standards have never been a problem, after all.
Besides, the government has much, much more important things to do right now; namely censoring media and banning people from shooting paintballs around. Much, much more important.

by gicmo at May 12, 2009 06:06 PM

Lobby work with children

The Deutsche Kinderhilfe is planning an offline petition drive in favour of online censorship. Yes, we remember - exactly the organization with "unclear financial structures" and "close ties to a company". WELT Online reported on it. In stark contrast, abuse victims themselves are apparently rather against it: "Missbrauchsopfer gegen Internetsperren" ("abuse victims against internet blocking", MOGIS) and Trotz Allem e.V. (open letter). The suspicion strongly suggests itself that the Deutsche Kinderhilfe is doing lobby work not for children, but with them.

Treating symptoms instead of causes, driven by fear of the new - preferably without any expertise. And every second law seems to require amending the Grundgesetz.

by gicmo at May 12, 2009 03:50 PM

April 26, 2009

Christian Kellner

Pirates and structural change

"The prices of currently available electronic books give cause for scepticism. They are only barely - for hardcovers, one or two euros - below the retail price of printed volumes. Yet today's 20-euro books, in digital form, would be well paid at ten euros, even if the authors (and their agencies) were left with considerably more than the currently customary two to two and a half euros. But although all printing and distribution costs - including packaging, transport and the wages and revenues contained in that chain - and, on top of that, the bookseller's discount of 40 to 45 percent no longer apply, the creators receive not a cent of the hefty gain." (ZEIT online - Es war einmal)

Exactly *this*, in my opinion, is the root of why the big entertainment - oh, I beg your pardon, culture - industries have become so implausible, and why people are not willing to spend €12.90 on a (copy-protected) "download album" that differs from the "real" CD in practically nothing - i.e. in cost(!) or in what you get. Or $25 for a scientific article in PDF form? Not to mention films for which people have already paid €10 at the cinema, and which have therefore grossed millions in profit anyway; just take a quick look at the prices in the new Apple Store Movies. And we are not supposed to call that outdated and disproportionate greed?
At least the copy protection situation is slowly changing (for music). Even if not entirely voluntarily.

All of this makes it look even more ridiculous when the protectionism of the old guard and the fear of the new are then painted as the downfall of culture - for example here, by Ms Gaschke. I recommend the comments on that article; they are far better than the article itself (somehow fitting).

And above all, it is about content. Of course the actual creators should be fairly compensated for their work - but certainly not in the same way as before the "digital age". Well, times change, and whoever refuses to adapt will eventually end up among the dinosaurs ... hopefully. Salus populi est suprema lex:

"Free reading as part of the fundamental right to education - and as a success model of modern knowledge societies. Open Access would not be the downfall of the Occident. On the contrary." (ZEIT online - Es war einmal)

by gicmo at April 26, 2009 09:34 AM

August 30, 2007

Michael Natterer

Using the Mac OS X Menubar

Finally, after quite some debugging (of the very same bug for months), I committed preliminary support for the global Mac OS X menubar to GIMP trunk.

It's the result of a project I've been involved in at Imendio. Check the project page.

To see the coolness without compiling it yourself, check the video.

by Mitch at August 30, 2007 01:36 PM

October 18, 2006

Michael Natterer

Gimp instead of Photoshop

Now it's official! - after all, it was in SPIEGEL ;-)

A picture made with GIMP has won SPIEGEL's image-editing contest. Congratulations to the successful manipulator!

by Mitch at October 18, 2006 02:00 PM

May 02, 2006

Michael Natterer

PDB and Plug-In Refactoring

One of the last remaining bits of antique code in the GIMP is the code that talks to plug-ins: the plug-in handling code itself, and the PDB (procedural database), which provides a way for plug-ins to call the GIMP's internal functionality and which keeps track of all procedures provided by plug-ins.

Everything was organized around some crufty C structs, lived in files without a proper namespace, and was basically untouched by all the refactoring that has happened during the last few years. To illustrate the evilness: the PDB did not even really know which of its procedures were dynamically allocated and which were constant structs defined at compile time (not to speak of the part where perl code generated perl code that generated perl code that…). This had to stop.

Right after LGM, I entered refactoring mode:

  • All the perl-that-generates-perl stuff had to die. Every procedure definition in the PDB files (tools/pdbgen/pdb/*.pdb; they are still Perl) now looks the same: Inargs, Outargs, Code.
  • The homemade system for specifying a procedure's arguments and return values doesn't exist any more. Everything is based on GParamSpec now, and a lot of new GIMP-specific param specs were added in app/core/gimpparamspecs.c (see the sketch after this list for the general idea).
  • The procedure argument structs from the same prehistoric era had to go too. Procedures now take and return everything as GValues, organized in a GValueArray.
  • Procedures are GObjects now (app/pdb/gimpprocedure.c). Their memory management was modernized a bit (they have memory management now). Plug-in procedures are a proper GimpProcedure subclass now (app/pdb/gimppluginprocedure.c), which made lots of code much more straightforward (before, they were separate structs with a pointer to the procedure they implemented).
  • The PDB is now an object itself (app/pdb/gimppdb.c), instead of a bunch of global variables with some API around them. The PDB instance emits signals when procedures are added or removed, so the GUI can for example create menu items for new plug-in procedures. No more direct calls from the core to the GUI via the ugly GUI-vtable.
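
To make the GParamSpec/GValue part a bit more concrete, here is a rough, hypothetical sketch of the idea using plain GLib - all names are invented, and this is nothing like the real code in app/pdb/gimpprocedure.c:

    #include <glib-object.h>

    /* hypothetical: a procedure keeps one GParamSpec per input argument */
    typedef struct
    {
      GPtrArray *arg_specs;
    } ToyProcedure;

    static void
    toy_procedure_add_argument (ToyProcedure *proc, GParamSpec *pspec)
    {
      g_ptr_array_add (proc->arg_specs, g_param_spec_ref_sink (pspec));
    }

    /* Type and range checking falls out of GParamSpec almost for free,
     * so a wrong argument can be reported precisely. */
    static gboolean
    toy_procedure_check_arg (ToyProcedure *proc, guint i, const GValue *value)
    {
      GParamSpec *pspec = g_ptr_array_index (proc->arg_specs, i);
      GValue      copy  = { 0, };
      gboolean    valid;

      if (!g_type_is_a (G_VALUE_TYPE (value), G_PARAM_SPEC_VALUE_TYPE (pspec)))
        return FALSE;  /* wrong type */

      g_value_init (&copy, G_VALUE_TYPE (value));
      g_value_copy (value, &copy);
      /* g_param_value_validate() returns TRUE iff it had to clamp */
      valid = !g_param_value_validate (pspec, &copy);
      g_value_unset (&copy);

      return valid;
    }

    int
    main (void)
    {
      ToyProcedure proc;
      GValue       value = { 0, };

      g_type_init ();
      proc.arg_specs = g_ptr_array_new ();

      /* an int argument with range 1..1000 and default value 5 */
      toy_procedure_add_argument (&proc,
          g_param_spec_int ("radius", "Radius", "Blur radius in pixels",
                            1, 1000, 5, G_PARAM_READWRITE));

      g_value_init (&value, G_TYPE_INT);
      g_value_set_int (&value, 2000);  /* out of range -> rejected */
      g_print ("valid: %d\n", toy_procedure_check_arg (&proc, 0, &value));

      return 0;
    }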

The plug-in system is a similar mess, closely related to the PDB, but even worse. It has seen some refactoring, but only to the point where it was unavoidable in order to fix misbehavior or to get rid of calls to deprecated GLib APIs. While the PDB cleanup has come a long way, I'm still in the middle of chopping up and re-assembling the plug-in stuff:

  • Lots and lots of global variables have been moved to a new object, the GimpPlugInManager (app/plug-in/gimppluginmanager.c). Well, they were not really global variables before, but members of the global Gimp instance, which is supposed to be the only "global" variable in the GIMP - but that doesn't make much difference here.
  • Lots of functions are now methods of the GimpPlugInManager, which greatly helps in finding them. Before, it was mostly unclear which functions belonged to the plug-in instances themselves, and which to the surrounding infrastructure that keeps it all together.

That's where I am today, but there is still quite some hacking ahead before the stuff can be called "finished":

The PlugIn struct (app/plug-in/plug-in.c) has to become an object, and this object needs some signals. Some code needs to listen to these signals, so that cross-calling between unrelated scopes doesn't happen any more. At some point people will even be able to understand how the plug-ins' memory management is supposed to work ;-) Currently the calls to plug_in_ref() and plug_in_unref() are not really in places where one would expect them. I bet there is more ugliness that will go away as soon as I find it.
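
The signal part of that plan is the stock GObject pattern already used for the PDB above: the core merely announces what happened, and whoever is interested (the GUI, say) connects to the signal. A toy illustration, with all names made up - this is not GIMP code:

    #include <glib-object.h>

    typedef struct { GObject parent; } ToyPdb;
    typedef struct { GObjectClass parent; } ToyPdbClass;

    G_DEFINE_TYPE (ToyPdb, toy_pdb, G_TYPE_OBJECT)

    enum { PROCEDURE_ADDED, LAST_SIGNAL };
    static guint signals[LAST_SIGNAL] = { 0 };

    static void
    toy_pdb_init (ToyPdb *pdb)
    {
    }

    static void
    toy_pdb_class_init (ToyPdbClass *klass)
    {
      signals[PROCEDURE_ADDED] =
        g_signal_new ("procedure-added",
                      G_TYPE_FROM_CLASS (klass),
                      G_SIGNAL_RUN_FIRST, 0, NULL, NULL,
                      g_cclosure_marshal_VOID__STRING,
                      G_TYPE_NONE, 1, G_TYPE_STRING);
    }

    /* core side: just announce the event, don't call the GUI */
    static void
    toy_pdb_register (ToyPdb *pdb, const gchar *proc_name)
    {
      /* ... real code would store the procedure here ... */
      g_signal_emit (pdb, signals[PROCEDURE_ADDED], 0, proc_name);
    }

    /* GUI side: listen and react */
    static void
    on_procedure_added (ToyPdb *pdb, const gchar *name, gpointer data)
    {
      g_print ("would create a menu item for %s\n", name);
    }

    int
    main (void)
    {
      ToyPdb *pdb;

      g_type_init ();  /* needed with the GLib of that era */
      pdb = g_object_new (toy_pdb_get_type (), NULL);
      g_signal_connect (pdb, "procedure-added",
                        G_CALLBACK (on_procedure_added), NULL);
      toy_pdb_register (pdb, "plug-in-example");
      g_object_unref (pdb);
      return 0;
    }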

Now what are the benefits from all this work? Well, refactored code looks soooo much nicer :-)

But seriously:

  • The refactored code does look nicer, and it is easier to read, understand, change and fix.
  • The PDB can check the passed arguments much better now. Thanks to GParamSpec, GIMP can tell a plug-in/script developer which of the passed arguments was wrong, and in which way.
  • Every argument has a default value now. After GIMP 2.4 this will allow us to change the plug-in side of calling PDB procedures to something that has named parameters with default values. No more breaking scripts just because somebody added an optional argument
  • (actually, optional arguments were impossible before).
  • Now that it's all cleaned up, people != hardcore_longtime_developers can understand and change it.
  • And many other benefits that usually show up after refactored code has been in use for some time.

And now, please get GIMP from CVS and test it until it breaks. Then report the bug so the new PDB will be as solid as the old one.

by Mitch at May 02, 2006 03:43 PM