That today’s gonna be a good day

Today is the day the world is witnessing the most significant military leak in the history of mankind, so I have a feeling that today’s gonna be a good day.

To all the people at Wikileaks, and to all whistleblowers past, present and future: you are heroes. Your ideas will be with us for centuries to come. You’ll be remembered in the history books. Let’s make sure of it.

Why make things complicated?

There are no open source companies. There are companies and there are open source projects.

Some companies work on open source projects, some parent open source projects of their own, and some don’t.

Some of those companies are good at fostering a community that contributes to these open source projects. Others are unwilling, and some don’t yet understand the process. Still others have many open source projects run by teams that do get it, and at the same time other projects run by teams that don’t. That last, dual situation is actually the most common among the large companies. You know, the ones that often sponsor your community’s main conference and employ your heroes.

If you do a quick reality check, you’ll conclude there are no black-and-white companies. Actually, nothing in life or in ethics is black and white. Nothing at all.

What you do have is a small group of amazingly disturbing purists who do zero coding themselves (that is, near zero) but do think in black and white, and consequently write a lot of absurd nonsense in blog comments (on Slashdot in particular), forums and mailing lists. These people are reason numéro uno why many companies quit trying to understand open source.

It’s sad that the actual (open source) developers have to waste time explaining to the companies they consult for that these people can be ignored. It’s also sad that these purists have become so vocal, even violent, that they often can’t really be ignored anymore: people’s employers have been harassed.

“You have to fire somebody because he’s being unethical by disagreeing with my religious belief system that Microsoft is evil!” Maybe it’s just me who’s behind on ethics in this world? Well, those people can still get lost, because when it comes to ethics I disagree with them.

Now, let’s get back to the projects and away from the open source vs. open core debates. We have a lot of work to do. And a lot of companies to convince to open up their projects.

Open source developers succeeded in (for example) getting some software onto phones. The people who did that aren’t the religious black-and-white people. Maybe the media around open source should track down the people who did, and write quite a bit more about their work, ideas and passion?

Finally, the best companies are driven by the ideas and passions of their best employees. Those are the people you should admire, not their company’s open-core PR.

Wrapping up 4.57 billion years

In 4.57 billion years our solar system went from creating simple bacteria to a large group of species, several of which are highly capable of making fairly intelligent decisions, and one of which has the indulgence of believing that it can think. That’s us.

The sun has an estimated 5 billion years to go before it turns into a red giant which, in its very early stages, will wipe out truly every single idea that exists within at least our own solar system.

Unless the radio waves our planet has been emitting since we invented radio are received and understood (which requires a recipient in the first place), that will be the ultimate end of all of our ideas and culture. Unless, that is, we figure out a way to let our ideas take root outside of our solar system. Just the ideas alone would already be an insane achievement.

But imagine going from bacteria to beings, colonized by bacteria, that think that they can think, in far less time than the current age of our sun. Unless, of course, bacteria somehow arrived in our solar system from outside (unlikely, but perhaps about as unlikely as us ever exporting our ideas and culture to another solar system).

Imagine what could happen in the next 5 billion years …

Domain indexes finished, technical conclusions

Support for domain-specific indexes is finished and awaiting review, although we can optimize it further now. More on that later in this post. Imagine that you have this ontology:

nie:InformationElement a rdfs:Class .

nie:title a rdf:Property ;
  nrl:maxCardinality 1 ;
  rdfs:domain nie:InformationElement ;
  rdfs:range xsd:string .

nmm:MusicPiece a rdfs:Class ;
  rdfs:subClassOf nie:InformationElement .

nmm:beatsPerMinute a rdf:Property ;
  nrl:maxCardinality 1 ;
  rdfs:domain nmm:MusicPiece ;
  rdfs:range xsd:integer .

With that ontology there are three tables called “Resource”, “nmm:MusicPiece” and “nie:InformationElement” in SQLite’s schema:

  • The “Resource” table has ID and the subject string
  • The “nie:InformationElement” table has ID and “nie:title”
  • The “nmm:MusicPiece” table has ID and “nmm:beatsPerMinute”

That’s fairly simple, right? The problem is that when you ORDER BY “nie:title” you cause a full table scan on “nie:InformationElement”. That’s not good, because there are fewer “nmm:MusicPiece” records than “nie:InformationElement” ones, so we end up scanning far more rows than the query actually needs.

Imagine that we do this SPARQL query:

SELECT ?title WHERE {
   ?resource a nmm:MusicPiece ;
             nie:title ?title
} ORDER BY ?title

We translate that, for you, to this SQL on our schema:

SELECT   "title_u" FROM (
  SELECT "nmm:MusicPiece1"."ID" AS "resource_u",
         "nie:InformationElement2"."nie:title" AS "title_u"
  FROM   "nmm:MusicPiece" AS "nmm:MusicPiece1",
         "nie:InformationElement" AS "nie:InformationElement2"
  WHERE  "nmm:MusicPiece1"."ID" = "nie:InformationElement2"."ID"
  AND    "title_u" IS NOT NULL
) ORDER BY "title_u"

OK, so with support for domain indexes we change the ontology like this:

nmm:MusicPiece a rdfs:Class ;
  rdfs:subClassOf nie:InformationElement ;
  tracker:domainIndex nie:title .

We still have the three tables called “Resource”, “nmm:MusicPiece” and “nie:InformationElement” in SQLite’s schema. But now they look like this:

  • The “Resource” table has ID and the subject string
  • The “nie:InformationElement” table has ID and “nie:title”
  • The “nmm:MusicPiece” table now has three columns called ID, “nmm:beatsPerMinute” and “nie:title”

The same data, for the titles of music pieces, will be in both “nie:InformationElement” and “nmm:MusicPiece”. We copy the values into the mirror column while coping with the ontology change, and whenever new inserts happen.
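
To illustrate, here is a minimal sketch of what that copy amounts to at the SQLite level, assuming the table and column names from the example above (the function name is made up; the actual migration code in Tracker is more involved):

#include <sqlite3.h>

static int
copy_title_to_domain_index (sqlite3 *db)
{
  /* Populate the new mirror column "nie:title" in "nmm:MusicPiece"
   * from the existing values in "nie:InformationElement", joined on ID. */
  const char *sql =
    "UPDATE \"nmm:MusicPiece\" SET \"nie:title\" = "
    "  (SELECT \"nie:title\" FROM \"nie:InformationElement\" "
    "   WHERE \"nie:InformationElement\".\"ID\" = \"nmm:MusicPiece\".\"ID\")";

  return sqlite3_exec (db, sql, NULL, NULL, NULL);
}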

When the rdf:type is known in the SPARQL query to be nmm:MusicPiece, as in the query mentioned earlier, we know that we can take “nie:title” from the “nmm:MusicPiece” table in SQLite. That allows us to generate this SQL query for you:

SELECT   "title_u" FROM (
  SELECT "nmm:MusicPiece1"."ID" AS "resource_u",
         "nmm:MusicPiece1"."nie:title" AS "title_u"
  FROM   "nmm:MusicPiece" AS "nmm:MusicPiece1"
  WHERE  "title_u" IS NOT NULL
) ORDER BY "title_u"

A remaining optimization is the case where you also request an rdf:type of which nmm:MusicPiece is a subclass, like this:

SELECT ?title WHERE {
  ?resource a nmm:MusicPiece, nie:InformationElement ;
            nie:title ?title
} ORDER BY ?title

It’s still not as bad as before, because “nie:title” is still taken from the “nmm:MusicPiece” table. But the join with “nie:InformationElement” is still needlessly there (we could just generate the earlier SQL query in this case):

SELECT   "title_u" FROM (
  SELECT "nmm:MusicPiece1"."ID" AS "resource_u",
         "nmm:MusicPiece1"."nie:title" AS "title_u"
  FROM   "nmm:MusicPiece" AS "nmm:MusicPiece1",
         "nie:InformationElement" AS "nie:InformationElement2"
  WHERE  "nmm:MusicPiece1"."ID" = "nie:InformationElement2"."ID"
  AND    "title_u" IS NOT NULL
) ORDER BY "title_u"

We will probably optimize this specific use-case further later this week.

Smile or Die

As a follow-up to the RSA animation videos, here’s the original talk by Barbara Ehrenreich, titled Smile or Die.

I think part of GNOME’s crisis is caused by the same atmosphere of “go with the program, don’t complain, or you’re out”. I wrote about this before:

It’s not popular to be critical of (the leader of) a popular idea. This is illustrated by the intellectually absurd criticisms David Schlesinger receives.

Yet it is the critic who monitors the organs of a society who is key to such an organ either producing for its stakeholders, or failing and dragging the entire society it serves down with it.

Acknowledging the problem and changing course is what I seek in a candidate this year.

OK, two is enough. Back to technical articles.

Friday’s performance improvements in Tracker

The crawler’s modification time queries

Yesterday we optimized the crawler’s query that gets the modification time of files. We use this timestamp to know whether or not a file must be reindexed.

Originally, we used a custom SQLite function called tracker:uri-is-parent() in SPARQL. This, however, caused a full table scan. As long as your SQL table for nfo:FileDataObject resources wasn’t too large, that wasn’t a huge problem. But it didn’t scale: the cost grew with the size of the table. I started by optimizing the function itself. It was using a strlen(), so I replaced that with sqlite3_value_bytes(). We only store UTF-8, so that worked fine. It gained me about 10%; not enough.
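
As an illustration of the kind of change this was, here is a hedged sketch of such a custom function; the body is a simplified stand-in for what tracker:uri-is-parent() checks, not the actual Tracker code:

#include <string.h>
#include <sqlite3.h>

/* Simplified stand-in: is the URI in argv[1] located directly inside
 * the parent URI in argv[0]? */
static void
example_uri_is_parent (sqlite3_context *context,
                       int              argc,
                       sqlite3_value  **argv)
{
  const char *parent = (const char *) sqlite3_value_text (argv[0]);
  const char *uri    = (const char *) sqlite3_value_text (argv[1]);
  /* Before: strlen (parent).  After: ask SQLite for the byte count it
   * already knows.  That is safe here because we only store UTF-8. */
  int parent_len = sqlite3_value_bytes (argv[0]);

  (void) argc;

  if (parent && uri &&
      strncmp (uri, parent, parent_len) == 0 &&
      uri[parent_len] == '/' &&
      strchr (uri + parent_len + 1, '/') == NULL)
    sqlite3_result_int (context, 1);
  else
    sqlite3_result_int (context, 0);
}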

So this commit was the better improvement. First, it makes nfo:belongsToContainer an indexed property. For file resources, x nfo:belongsToContainer p means that x is in the directory p. The commit then changes the query to use this property, which is now indexed.
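
At the SQLite level, making a property indexed roughly boils down to an ordinary index on that property’s column. A hedged sketch, with assumed table, column and index names (the exact DDL Tracker generates may differ):

#include <sqlite3.h>

static int
add_belongs_to_container_index (sqlite3 *db)
{
  /* Assumed naming, following the class-per-table layout described
   * in the domain-index post above. */
  return sqlite3_exec (db,
                       "CREATE INDEX IF NOT EXISTS "
                       "\"nfo:FileDataObject_nfo:belongsToContainer\" "
                       "ON \"nfo:FileDataObject\" (\"nfo:belongsToContainer\")",
                       NULL, NULL, NULL);
}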

Before we started with this optimization, the original query took 1.090s when you had ~300,000 nfo:FileDataObject resources. The new query takes about 0.090s. It’s of course an unfair comparison, because now we use an indexed property. But adding the index only took a total of 10s for a table of ~300,000 rows, and the table is being queried while we index it (while we insert into it). Do the math: it’s a huge win in all situations. For the SQLite freaks: the SQLite database grew by 4 MB, with all items in the table indexed.

PDF extractor

Another optimization I did earlier was in the PDF extractor. Originally, we used the poppler-glib library. This library doesn’t allow us to set the OutputDev at runtime. If compiled with Cairo, the OutputDev is in some versions a CairoOutputDev, and we don’t want all the images in the PDF to be rendered to a Cairo surface. So I ported this back to C++ and made it always use a TextOutputDev instead. In poppler-glib master this appears to have improved (in git master, poppler_page_get_text_page always uses a TextOutputDev).

Another major problem with poppler-glib is the huge amount of string copying on the heap. Extracting metadata and content text from a 70-page PDF document without any images went from 1.050s to 0.550s. A lot of the original cost was caused by copying strings and by GValue boxing due to GObject properties.

Table locked problem

Last week I improved D-Bus marshaling by using a database cursor. I forgot to handle SQLITE_LOCKED just as Jürg and Carlos were introducing multithreaded SELECT support. Not good. I fixed this; it was causing random “Table locked” errors.

RDF propaganda, time for change

I’m not supposed to be, but I’m proud. And it’s not only me who’s doing it.

Adrien is one of the new guys on the block. He’s working on integration between Tracker’s RDF service and various web services like Flickr, Facebook, Twitter, Picasa Web Albums and RSS. This is the kind of guy several companies should be afraid of. His work is competing with what they are trying to do: integrating the social web with mobile.

Oh come on, Steve, stop pretending that you aren’t. And you’d better come up with something good, because we are.

Not only that, Adrien is implementing so-called writeback. It means that when you change a local resource’s properties, this integration will update Flickr, Facebook, Picasa Web Albums and Twitter accordingly.

You change a piece of info about a photo on your phone, and it’ll be replicated to Flickr. It’ll also be synchronized onto your phone as soon as somebody else makes a change.

This is the future of computing and information technology. Integration with social networking and the phone is what people want. Dear Mark, it’s unstoppable. You’d better keep your eyes open, because we are going fast. Faster than your business.

I’m not somebody trying to guess how technology will look in a few years. I try to be in the middle of the technical challenge of actually doing it. Talking about it is telling history before your lips’ muscles have even moved.

At the Tracker project we are building a SPARQL endpoint that uses D-Bus as its IPC. This is ideal on Nokia’s MeeGo. It’ll be a centerpiece for information gathering. On MeeGo you won’t ask the filesystem; instead, you’ll ask Tracker, using SPARQL and RDF.

To be challenged is likely the most beautiful state of mind.

I invite everybody to watch this demo by Adrien. It’s just the beginning. It’s going to get better.

Tracker writeback & web service integration demo / MeegoTouch UI from Adrien Bustany on Vimeo.

I tagged this as ‘extremely controversial’. That’s fine, Adrien told me that “people are used to me anyway”.

Performance of the D-Bus handling of query results in Tracker’s RDF service

Before

For returning the results of a SPARQL SELECT query we used to have a callback like this. I removed the error handling; you can find the original here.

We had to marshal the database result_set to a GPtrArray, because that’s what dbus-glib wants. That means a lot of boxing of strings into GValue and GStrv, and therefore allocations: not good.

static void
query_callback (TrackerDBResultSet *result_set, GError *error, gpointer user_data)
{
  TrackerDBusMethodInfo *info = user_data;
  GPtrArray *values = tracker_dbus_query_result_to_ptr_array (result_set);
  dbus_g_method_return (info->context, values);
  tracker_dbus_results_ptr_array_free (&values);
}

void
tracker_resources_sparql_query (TrackerResources *self, const gchar *query,
                                DBusGMethodInvocation *context, GError **error)
{
  TrackerDBusMethodInfo *info = ...; guint request_id;
  TrackerResourcesPrivate *priv = ...; gchar *sender;
  info->context = context;
  tracker_store_sparql_query (query, TRACKER_STORE_PRIORITY_HIGH,
                              query_callback, ...,
                              info, destroy_method_info);
}

After

Last week I changed the asynchronous callback to return a database cursor. In SQLite terms that means stepping through the statement with sqlite3_step(); SQLite returns const pointers to the data in each cell through its sqlite3_column_* APIs.
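
Underneath the TrackerDBCursor wrapper, that iteration looks roughly like this (a sketch, not the actual Tracker code):

#include <sqlite3.h>

static void
walk_statement (sqlite3_stmt *stmt)
{
  /* The returned pointers are owned by SQLite and stay valid only
   * until the next sqlite3_step() or sqlite3_reset(). */
  while (sqlite3_step (stmt) == SQLITE_ROW) {
    int i;

    for (i = 0; i < sqlite3_column_count (stmt); i++) {
      const unsigned char *cell = sqlite3_column_text (stmt, i);
      (void) cell; /* used as a const string, never copied */
    }
  }
}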

This means that now we’re not even copying the strings out of SQLite. Instead, we’re using them as const to fill in a raw DBusMessage:

static void
query_callback (TrackerDBCursor *cursor, GError *error, gpointer user_data)
{
  TrackerDBusMethodInfo *info = user_data;
  DBusMessage *reply; DBusMessageIter iter, rows_iter;
  guint cols; guint length = 0;
  reply = dbus_g_method_get_reply (info->context);
  dbus_message_iter_init_append (reply, &iter);
  cols = tracker_db_cursor_get_n_columns (cursor);
  dbus_message_iter_open_container (&iter, DBUS_TYPE_ARRAY,
                                    "as", &rows_iter);
  while (tracker_db_cursor_iter_next (cursor, NULL)) {
    DBusMessageIter cols_iter; guint i;
    dbus_message_iter_open_container (&rows_iter, DBUS_TYPE_ARRAY,
                                      "s", &cols_iter);
    for (i = 0; i < cols; i++, length++) {
      const gchar *result_str = tracker_db_cursor_get_string (cursor, i);
      dbus_message_iter_append_basic (&cols_iter,
                                      DBUS_TYPE_STRING,
                                      &result_str);
    }
    dbus_message_iter_close_container (&rows_iter, &cols_iter);
  }
  dbus_message_iter_close_container (&iter, &rows_iter);
  dbus_g_method_send_reply (info->context, reply);
}

Results

The test is a query on 13,500 resources where we ask for two strings, repeated eleven times. I removed the first repeat from each round, because on the first run the sqlite3_stmt still has to be created, which would add a few extra milliseconds to the measurement. I also redirected standard output to /dev/null to avoid the overhead created by the terminal. The results you see below are the values for “real”.

There is of course an overhead created by the “tracker-sparql” program itself. It does demarshaling using normal dbus-glib. If your application uses DBusMessage directly, it can avoid that overhead. But since I used the same “tracker-sparql” for both rounds, it doesn’t matter for the measurement.

$ time tracker-sparql -q "SELECT ?u  ?m { ?u a rdfs:Resource ;
          tracker:modified ?m }" > /dev/null

Without the optimization:

0.361s, 0.399s, 0.327s, 0.355s, 0.340s, 0.377s, 0.346s, 0.380s, 0.381s, 0.393s, 0.345s

With the optimization:

0.279s, 0.271s, 0.305s, 0.296s, 0.295s, 0.294s, 0.295s, 0.244s, 0.289s, 0.237s, 0.307s

The improvement ranges between 7% and 40%, with an average improvement of 22%.

Focus on query performance

Every (good) developer knows that copying memory and boxing, especially when dealing with a large number of items like the members of collections or the cells of a table, are bad for performance.

More experienced developers also know that novice developers tend to focus on just their algorithms to improve performance, while often the single biggest bottleneck is needless boxing and allocating. Experienced developers come up with algorithms that avoid boxing and copying; they master clever, pragmatic engineering and know how to improve algorithms. A lot of newcomers use virtual machines and scripting languages that are terrible at giving you the tools to control this, and then they start endless religious debates about how great their programming language is (as if it matters). (Anti-.NET people, don’t get on your high horses too soon: if you know what you are doing, C# is actually quite good here.)

We were of course doing some silly copying ourselves. Apparently it had a significant impact on performance.

Once Jürg and Carlos have finished the work on parallelizing SELECT queries, we plan to let the code that walks the SQLite statement fill in the DBusMessage directly, without any memory copying or boxing (for marshalling to D-Bus). We found the get_reply and send_reply functions; they sound useful for this purpose.

I still don’t really like D-Bus as the IPC for transferring the query results of Tracker’s RDF store. Personally I think I would go for a custom Unix socket here, but Jürg so far isn’t convinced. Admittedly he’s probably right; he’s always right. Still, D-Bus doesn’t feel to me like a good IPC for this kind of data transfer.

We know about the requests to have direct access to the SQLite database from your own process. I explained in the bug that SQLite isn’t MVCC, and that this means your process would often get blocked on our transaction for a long time: longer than any IPC overhead takes.

Supporting ontology changes in Tracker

It used to be that in Tracker you couldn’t just change the ontology. When you did, you had to reset the database, which meant losing all the non-embedded data: for example your tags, or other such information that is stored only in Tracker’s RDF store.

This was of course utterly unacceptable, and it was among the reasons why we held back the 0.8 release for so long: we were afraid we would need to make ontology changes during the 0.8 series.

So during 0.7 I added support for what I call modest ontology changes: adding a class or adding a property, but just that, not changing an existing property. This was sufficient for 0.8, because now we could at least make changes like adding a property to a class or adding a new class. You know, making it possible to implement the standard feature requests.

These last two weeks I worked on supporting more intrusive ontology changes. The branch I’m working on currently supports changing tracker:notify (for the signals-on-changes feature), tracker:writeback (for the writeback feature) and tracker:indexed (which controls the indexes in the SQLite tables).

Certain range changes are also supported: for example integer to string, double and boolean; string to integer, double and boolean; and double to integer, string and boolean. Range changes will of course sometimes mean data loss.
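
As an illustration (with made-up class and property names), a string-to-integer range change essentially boils down to rewriting the stored values at the SQLite level, which is also where the possible data loss comes from:

#include <sqlite3.h>

static int
migrate_range_string_to_integer (sqlite3 *db)
{
  /* Values that don't convert cleanly are lost: in SQLite,
   * CAST of a non-numeric string yields 0. */
  return sqlite3_exec (db,
                       "UPDATE \"example:Class\" SET \"example:prop\" = "
                       "CAST(\"example:prop\" AS INTEGER)",
                       NULL, NULL, NULL);
}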

Plenty of code was also added to detect an unsupported ontology change and to ensure that we just abort the process and don’t do any changes in that case.

It’s all quite complex so it might take a while before the other team members have tested and reviewed all this. It should probably take even longer before it hits the stable 0.8 branch.

We won’t yet open the doors to custom ontologies, for several reasons:

  • We want more testing of the support for ontology changes. We know that once we open the doors to custom ontologies, we’ll see this being used sooner rather than later.
  • We don’t yet support removing properties and classes. This would be easy (drop the table and columns, and log the event in the journal), but it’s not yet supported, mostly because we don’t need it ourselves (which is a good reason).
  • We don’t want you to meddle with the standard ontologies (we’ll do that, don’t worry). So we need a bit of ontology-management code to also look in other directories, etc.
  • The error handling for unsupported ontology changes shouldn’t abort the process as mentioned above. Another piece of software shouldn’t be able to make Tracker unusable just because it installs junk ontologies.
  • We actually want to start using OSCAF’s ontology format. Perhaps it’s better to wait for this instead of later asking everybody to convert their custom ontologies?
  • We’re a bunch of pussies who are afraid of the can of worms that you guys’ custom ontologies will open.

But yes, you could say that the basics are being put in place as we speak.

Wikileaks

MSNBC: You have more tapes like this?
Julian Assange: Yes we do.
Assange: I won’t go into the precise number. But there was a rumor that the tape we were about to release was about a similar incident in Afghanistan, where 97 people were bombed in May last year. We, euhm, have that video.
MSNBC: Do you intend to release that video as well?
Assange: Yes, as soon as we have finished our analysis, we will release it.

Thank you, Wikileaks. Thank you, Julian Assange. You are bringing Wikileaks’ perspective calmly and clearly to the media. You’re an example to all whistleblowers. Julian, you’re doing a great job.

I understand more people are involved in this leak; thanks, everybody. You have our respect.

Information technology is all about information. Information for humanity.

Don’t you guys stop believing in this! We now believe in you. Many people like me are highly focused, and if intelligence services want a battle, we’ll listen. People like me are prepared to act.

I understand you guys like Belgium’s law that protects journalists’ sources. As the owner of a Belgian Ltd., maybe I can help?

I’m not often proud of my country. Last week I told my Swiss friends here in Zürich that I have about 3000 reasons to leave Belgium and a thousand reasons to come to Switzerland. I wasn’t exaggerating.

I’m a guy with principles and ethics. So thank you.

Zürichsee

Today, after I brought Tinne to the airport, I drove around the Zürichsee. She can’t stay in Switzerland the entire month; she has to go back to school on Monday.

While driving on the Seestrasse I started counting luxury cars. After I reached two for Lamborghini and three for Ferrari, I started thinking: Zimmerberg, Sihltal and Pfannenstiel must be expensive districts too. And yes, they are.

I was lucky that the weather was nice today. But wow, what a view of the mountain tops when you look south over the Zürichsee. People from Zürich, you guys are so lucky! Such an immensely calming feeling that view gives me! For me, it beats the sauna. And I’m a real sauna fan.

I’m thinking of checking out the area south of Zürich, but not the canton itself. I think house prices are just exaggeratedly high in the canton of Zürich. I was thinking of Sankt Gallen, Toggenburg. I’ve never been there; I’ll check it out tomorrow.

Hmmr, MeteoSwiss forecasts rain for tomorrow. Doesn’t matter.

Actually, when I came back from the airport the first thing I really did was fix the code that copes with property changes in ontologies for Tracker. Yesterday wasn’t my day, I think. I couldn’t find this damn problem in my code! And in the evening I lost three chess games in a row against Tinne. That’s a really bad score for me. Maybe, after two weeks of playing chess almost every evening, she has gotten better than me? Hmmrr, that’s a troubling idea.

Anyway, when I got back from the airport I couldn’t resist having another go at the code problem that I couldn’t find on Friday. I found it! It works!

I guess I’m both a dreamer and a realist programmer. But don’t tell my customers that I’m such a dreamer.

Bern, an idyllic capital city

Today Tinne and I visited Switzerland’s capital, Bern.

We were really surprised; we’d never imagined that a capital city could offer so much peace and calm. It felt good to be there.

The fountains, the old houses, the river and the snowy mountain peaks give the city an idyllic image.

Standing on the bridge, you see the roofs of all these lovely small houses.

The bear is the symbol of Bern. Near the House of Parliament there was this statue of a bear. Tinne just couldn’t resist giving it a hug. Bern has also got real bears. Unfortunately, Tinne was not allowed to cuddle those.

The House of Parliament is a truly impressive building. It looks over the snowy mountains, its people and its treasury, the National Bank of Switzerland.


As you can imagine, the National Bank building is a masterpiece as well. And even more impressive: it issues a world-leading currency.

On the market square in Oerlikon we had earlier seen a chessboard on the street: black and white stones and giant chess pieces. In Bern there was also a giant chessboard, in the backyard of the House of Parliament. Tinne couldn’t resist challenging me to a game of chess. (*edit*: Armin noted in a comment that the initial positions of the knight and bishop are swapped. And OMG, he’s right!)

And she won!

At the House of Parliament you get a stunning, idyllic view of the mountains of Switzerland.


Confoederatio Helvetica

It’s crossing my mind to move here in about two years.

Today we visited Zug; it has a Ferrari shop.

Zug, where an apartment costs far more than a villa in Belgium. Roughly a million euros.

It also comforts me. I could be here. Zug has an aviary with exotic birds, and a lake.

When Tinne and I were driving back to Oerlikon, we listened to Karoliina’s Symphonic Dream.

The music: a canvas for the paint, Switzerland.

Die Lichter auf dem Berg. Die sind alle Seelen. (The lights on the mountain. They are all souls.)

From grey mouse to putschist. That was quick.

Congratulations to Mr. Van Rompuy for helping the EU powers to find a compromise.

Diplomats credit him with a shrewd sense of deal-making and a determination that is belied by his quiet anti-charisma, and he has already begun to win plaudits from Paris, Berlin and other capitals.

Financial Times, Saturday Mar 27 2010 (alt. link)

Finally a politician to be proud of as a Belgian!

The mouse is dull grey
It steps into the sunshine
The mouse is snow white

Working hard at the Tracker project

Today we improved journal replaying from 1050s to 58s, for my test of 25,249 resources.

Journal replaying happens when your cache database gets corrupted, and also when you restore a backup: restore uses the same code as journal replaying, since a backup is just a copy of your journal.

During these performance improvements we of course found other areas related to data entry that can be improved. It looks like we’re entering a period of focus on performance, as we already have a few interesting ideas for next week. Those ideas will focus on the performance of some SPARQL functions, like regex.

Meanwhile, Michele Tameni and Roberto Guido are working on an RSS miner for Tracker, and Adrien Bustany has been working on other web miners, for services like Flickr, GData, Twitter and Facebook.

I think the first pieces of the RSS and other web miners will start becoming available in this week’s unstable 0.7 release. Martyn is still reviewing the guys’ branches, but we’re very lucky to have such good software developers as contributors. Very nice work, Michele, Roberto and Adrien!

The future of the European community: a European Monetary Fund

I’m worried about the euro’s M3 if a European version of the IMF (an EMF) is to be set up.

Nonetheless, I think the European community should do it, just to strengthen Europe’s economy. I’m not satisfied with Europe’s economic strength: I want it to be undefeatable.

We must not let the IMF solve our problems. Europe might be a political dwarf, but we Europeans should show that we will solve our own problems. We’re a mature composition of cultures with vast amounts of experience. We know how to solve any imaginable problem. And let’s not, in our defeatism, pretend we don’t.

An EMF is a commitment to future member states: Europe often asks fundamental changes of them; economic strength is what Europe offers in return. This needs to come at the highest price: Greece will have to fix its deficit problem, even if its entire population goes on strike. Greece will be an example for countries like my own: Belgium has to fix a serious deficit problem, too.

An EMF comes at an equally high price, and that frightens me a bit: I don’t want the ECB to go as ballistic on money creation as the Fed has over the last two years. I want the euro to be the strongest relevant currency mankind has ever created. No matter how insane the rest of the world thinks that ambition is, I believe that keeping the euro’s M3 in check is a key to creating a wealthy society in Europe.

Politically, I want European nations to negotiate more and more often. The European Union is a political dwarf only because finding agreement is hard. But in the long run our solution will be the most negotiated, most tested one on this planet.

Together we can deal with anything. That doesn’t mean it’ll be easy; it has never been easy: just seventy years ago we were still killing each other. We’re all guilty of that one way or another. And before that it wasn’t any better. Today, not that many people still care: “it wasn’t me”, right? So stop being a bitch about it, then.

It’s time to let it be. It’s time to start a new European century that will be better. With respect for all European cultures, languages, nations, nationalities, values, borders and interests.

But also a European century with economic responsibilities for each member. It’s our strength: we figured out how to keep our population wealthy, so let’s continue doing so in the future.

The Euroskeptics and pro-Europeans are finally united in an opinion!

We both agree that Nigel Farage is a complete moron.

Perhaps we should put a damp rag like the one he mentions in his mouth next time he opens it?

Nigel Farage, you’re a disgrace to yourself. The European Parliament is no place for personal attacks, and you aren’t fit to carry the title of Member of the European Parliament. Please do the honourable thing and resign.

Every sensible person outside of the U.K. thinks you should. Even the Euroskeptics do. You’re an embarrassment to your country and its culture, so I hope, for the people of the U.K., that they’ll kick you out of politics.

I fear you’re just playing the populist card, and that you’ll even get votes for this from other morons.

Please don’t rewrite software (that is) written in .NET

This (super) cool .NET developer and good friend came up to me at the FOSDEM bar to tell me he was confused about why, during the Tracker presentation, I was asking people to replace F-Spot and Banshee.

I hope I didn’t say it like that; I would never intend to say that. But I’ll review the video of the presentation as soon as Rob publishes it.

Anyway, to ensure everybody correctly understands what I did want to say (whether or not I actually said it that way is another question):

The call was to inspire people to reimplement, or to provide different implementations of, F-Spot’s and Banshee’s data backends, so that they would use an RDF store like tracker-store instead of each app having its own metadata database.

I think I also mentioned Rhythmbox in the same sentence, because the last thing I would want is to turn this into a .NET vs. anti-.NET debate. It just happens to be that the best GNOME software for photo and music management is written in .NET (and there’s a good reason for that).

People who know me also know that I think those anti-.NET people are disruptive, ignorable people. I actively and willingly ignore them (and they should know this). I’m actually a big fan of the Mono platform.

I’ll try to ensure that I don’t create this confusion during presentations anymore.

The role of media in the USA

Two posts ago I wrote that something like The Real News is quite unique in the U.S.’s completely broken media landscape.

Today I found an interesting double interview on Al Jazeera English by Riz Khan, titled “Has the mainstream media in the US replaced serious coverage with ‘junk news’ and tabloidism?”