Performance of DBus handling of query results in Tracker’s RDF service

Before

For returning the results of a SPARQL SELECT query we used to have a callback like this (I removed the error handling; you can find the original here).

We need to marshal the database result_set to a GPtrArray because that’s what dbus-glib wants. That means a lot of boxing of strings into GValue and GStrv, and therefore a lot of allocations. Not good. (A sketch of what that boxing roughly looks like follows the code below.)

static void
query_callback (TrackerDBResultSet *result_set, GError *error, gpointer user_data)
{
  TrackerDBusMethodInfo *info = user_data;
  GPtrArray *values = tracker_dbus_query_result_to_ptr_array (result_set);
  dbus_g_method_return (info->context, values);
  tracker_dbus_results_ptr_array_free (&values);
}

void
tracker_resources_sparql_query (TrackerResources *self, const gchar *query,
                                DBusGMethodInvocation *context, GError **error)
{
  TrackerDBusMethodInfo *info = ...; guint request_id;
  TrackerResourcesPrivate *priv = ...; gchar *sender;
  info->context = context;
  tracker_store_sparql_query (query, TRACKER_STORE_PRIORITY_HIGH,
                              query_callback, ...,
                              info, destroy_method_info);
}
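
For illustration, the kind of boxing that tracker_dbus_query_result_to_ptr_array has to do looks roughly like this. This is a simplified sketch, not the actual Tracker code, and the result_set_* accessors are hypothetical stand-ins for the real TrackerDBResultSet API:

/* Simplified sketch, not the actual Tracker code: every cell gets copied
 * into a freshly allocated GStrv and every row into a GPtrArray, just so
 * that dbus-glib can marshal an "aas" reply. The result_set_n_columns(),
 * result_set_get_string() and result_set_iter_next() accessors are
 * hypothetical stand-ins for the real TrackerDBResultSet API. */
static GPtrArray *
result_set_to_ptr_array (TrackerDBResultSet *result_set)
{
  GPtrArray *rows = g_ptr_array_new ();
  guint n_cols = result_set_n_columns (result_set);

  do {
    gchar **row = g_new0 (gchar *, n_cols + 1);
    guint i;

    for (i = 0; i < n_cols; i++) {
      /* One allocation and one copy per cell... */
      row[i] = g_strdup (result_set_get_string (result_set, i));
    }

    /* ...plus one allocation per row. */
    g_ptr_array_add (rows, row);
  } while (result_set_iter_next (result_set));

  return rows;
}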

After

Last week I changed the asynchronous callback to hand over a database cursor instead. In SQLite terms that means stepping with sqlite3_step(); SQLite returns const pointers to the data in each cell through its sqlite3_column_* APIs.
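
For reference, this is what that looks like at the raw SQLite level. A minimal sketch, not Tracker code; the point is that the const pointer returned by sqlite3_column_text() is owned by SQLite and stays valid until the next step or finalize, so it can be used without copying:

/* Minimal sketch of SQLite's cursor-style API (not Tracker code). */
#include <sqlite3.h>
#include <stdio.h>

static void
dump_first_column (sqlite3 *db, const char *sql)
{
  sqlite3_stmt *stmt;

  if (sqlite3_prepare_v2 (db, sql, -1, &stmt, NULL) != SQLITE_OK)
    return;

  /* Each sqlite3_step() advances the cursor by one row; the const
   * pointer from sqlite3_column_text() points into SQLite's own
   * memory and is valid until the next step/finalize. */
  while (sqlite3_step (stmt) == SQLITE_ROW) {
    const unsigned char *cell = sqlite3_column_text (stmt, 0);
    printf ("%s\n", cell ? (const char *) cell : "(null)");
  }

  sqlite3_finalize (stmt);
}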

This means that now we’re not even copying the strings out of SQLite. Instead, we’re using them as const to fill in a raw DBusMessage:

static void
query_callback (TrackerDBCursor *cursor, GError *error, gpointer user_data)
{
  TrackerDBusMethodInfo *info = user_data;
  DBusMessage *reply; DBusMessageIter iter, rows_iter;
  guint cols; guint length = 0;
  reply = dbus_g_method_get_reply (info->context);
  dbus_message_iter_init_append (reply, &iter);
  cols = tracker_db_cursor_get_n_columns (cursor);
  dbus_message_iter_open_container (&iter, DBUS_TYPE_ARRAY,
                                    "as", &rows_iter);
  while (tracker_db_cursor_iter_next (cursor, NULL)) {
    DBusMessageIter cols_iter; guint i;
    dbus_message_iter_open_container (&rows_iter, DBUS_TYPE_ARRAY,
                                      "s", &cols_iter);
    for (i = 0; i < cols; i++, length++) {
      const gchar *result_str = tracker_db_cursor_get_string (cursor, i);
      dbus_message_iter_append_basic (&cols_iter,
                                      DBUS_TYPE_STRING,
                                      &result_str);
    }
    dbus_message_iter_close_container (&rows_iter, &cols_iter);
  }
  dbus_message_iter_close_container (&iter, &rows_iter);
  dbus_g_method_send_reply (info->context, reply);
}

Results

The test is a query over 13,500 resources that asks for two strings per resource, repeated eleven times. I dropped the first run of each round, because the first time the sqlite3_stmt still has to be created and that would add a few extra milliseconds to the measurement. I also redirected standard output to /dev/null to avoid the overhead created by the terminal. The results below are the “real” values reported by time.

There is of course some overhead from the “tracker-sparql” program itself: it demarshals the reply using normal dbus-glib. An application that uses DBusMessage directly can avoid that overhead. But since I used the same “tracker-sparql” for both rounds, it doesn’t affect the comparison.

$ time tracker-sparql -q "SELECT ?u  ?m { ?u a rdfs:Resource ;
          tracker:modified ?m }" > /dev/null

Without the optimization:

0.361s, 0.399s, 0.327s, 0.355s, 0.340s, 0.377s, 0.346s, 0.380s, 0.381s, 0.393s, 0.345s

With the optimization:

0.279s, 0.271s, 0.305s, 0.296s, 0.295s, 0.294s, 0.295s, 0.244s, 0.289s, 0.237s, 0.307s

The improvement ranges between 7% and 40%, with an average improvement of 22%.

Focus on query performance

Every (good) developer knows that copying memory and boxing, especially with large numbers of items such as the members of a collection or the cells of a table, are bad for performance.

More experienced developers also know that novices tend to focus only on their algorithms to improve performance, while the single biggest bottleneck is often needless boxing and allocating. Experienced developers come up with algorithms that avoid boxing and copying; they master pragmatic engineering as well as knowing how to improve algorithms. A lot of newcomers use virtual machines and scripting languages that are terrible at giving you the tools to control this, and then they start endless religious debates about how great their programming language is (as if it matters). (Anti-.NET people, don’t get on your high horses too soon: if you know what you are doing, C# is actually quite good at this.)

We were of course doing some silly copying ourselves. Apparently it had a significant impact on performance.

Once Jürg and Carlos have finished the work on parallelizing SELECT queries we plan to let the code that walks the SQLite statement fill in the DBusMessage directly without any memory copying or boxing (for marshalling to DBus). We found the get_reply and send_reply functions; they sound useful for this purpose.

I still don’t really like DBus as the IPC for transferring the query results of Tracker’s RDF store. Personally I would go for a custom Unix socket here, but Jürg isn’t convinced so far. Admittedly he’s probably right; he’s always right. Still, DBus doesn’t feel to me like the right IPC for this data transfer.

We know about the requests for direct access to the SQLite database from your own process. I explained in the bug that SQLite isn’t MVCC, which means your process would often get blocked for a long time on our transactions; longer than any IPC overhead takes.

Supporting ontology changes in Tracker

It used to be that in Tracker you couldn’t just change the ontology. When you did, you had to rebuild the database, which means losing all the non-embedded data: for example your tags, or other information that is stored only in Tracker’s RDF store.

This was of course utterly unacceptable, and it was among the reasons why we held back the 0.8 release for so long: we were afraid we would need to make ontology changes during the 0.8 series.

So during 0.7 I added support for what I call modest ontology changes: adding a class, adding a property, but just that, not changing an existing property. This was sufficient for 0.8 because now we could at least do changes like adding a property to a class or adding a new class. You know, making it possible to implement the standard feature requests.

The last two weeks I worked on supporting more intrusive ontology changes. The branch I’m working on currently supports changing tracker:notify for the signals-on-changes feature, tracker:writeback for the writeback feature, and tracker:indexed, which controls the indexes in the SQLite tables.

Certain range changes are also supported: for example integer to string, double and boolean; string to integer, double and boolean; and double to integer, string and boolean. Range changes will of course sometimes mean data loss.

Plenty of code was also added to detect unsupported ontology changes and to ensure that in that case we simply abort the process without making any changes.

It’s all quite complex so it might take a while before the other team members have tested and reviewed all this. It should probably take even longer before it hits the stable 0.8 branch.

We won’t yet open the doors to custom ontologies, for several reasons:

  • We want more testing of the support for ontology changes. We know that once we open the doors to custom ontologies, we’ll see them used sooner rather than later.
  • We don’t yet support removing properties and classes. It would be easy (drop the table or columns and log the event in the journal), but it’s not done yet, mostly because we don’t need it ourselves (which is a good reason).
  • We don’t want you to meddle with the standard ontologies (we’ll do that, don’t worry). So we need a bit of ontology management code to also look in other directories, etc.
  • The error handling for unsupported ontology changes shouldn’t abort as mentioned above: another piece of software shouldn’t be able to make Tracker unusable just because it installs junk ontologies.
  • We actually want to start using OSCAF‘s ontology format. Perhaps it’s better that we wait for this instead of later asking everybody to convert their custom ontologies?
  • We’re a bunch of pussies who are afraid of the can of worms that you guys’ custom ontologies will open.

But yes, you could say that the basics are being put in place as we speak.

Wikileaks

MSNBC: You have more tapes like this?
Julian Assange: Yes we do.
Assange: I won’t go into the precise number. But there was a rumor that the tape that we were about to release was about a similar incident in Afghanistan, where 97 people were bombed in May last year. We euhm, have that video.
MSNBC: Do you intend to release that video as well?
Assange: Yes, as soon as we have finished our analysis, we will release it.

Thank you Wikileaks. Thank you Julian Assange. You are bringing Wikileaks’ perspective to the media calmly and clearly. You’re an example to all whistleblowers. Julian, you’re doing a great job.

I understand more people are involved in this leak; thanks, everybody. You are respected.

Information technology is all about information. Information for humanity.

Don’t you guys stop believing in this! We now believe in you. Many people like me are highly focused and when intelligence services want a battle: we’ll listen. People like me are prepared to act.

I understand you guys like Belgium’s law that protects journalists’ sources. As the owner of a Belgian Ltd., maybe I can help?

I’m not often proud of my country. Last week I told my Swiss friends here in Zürich that I have about 3000 reasons to leave Belgium and 1000 reasons to come to Switzerland. I wasn’t exaggerating.

I’m a guy with principles and ethics. So thank you.

Zürichsee

Today after I brought Tinne to the airport I drove around Zürichsee. She can’t stay in Switzerland the entire month; she has to go back to school on Monday.

While driving on the Seestrasse I started counting luxury cars. After I reached two for Lamborghini and three for Ferrari I started thinking: Zimmerberg, Sihltal and Pfannenstiel must be expensive districts too. And yes, they are.

I was lucky that the weather was nice today. But wow, what a view of the mountain tops when you look south over Zürichsee. People from Zürich, you guys are so lucky! What an immensely calming feeling that view gives me! For me, it beats sauna. And I’m a real sauna fan.

I’m thinking of checking out the area south of Zürich, but not the canton itself: house prices in the canton of Zürich are just exaggeratedly high. I was thinking Sankt Gallen, Toggenburg. I’ve never been there; I’ll check it out tomorrow.

Hmmr, MeteoSwiss forecasts rain for tomorrow. Doesn’t matter.

Actually, when I came back from the airport the first thing I did was fix the handling of property changes in ontologies for Tracker. Yesterday wasn’t my day, I think: I couldn’t find this damn problem in my code! And in the evening I lost three chess games in a row against Tinne. That’s a really bad score for me. Maybe, after two weeks of playing chess almost every evening, she has become better than me? Hmmrr, that’s a troubling idea.

Anyway, when I got back from the airport I couldn’t resist going after the code problem that I couldn’t find on Friday. I found it! It works!

I guess I’m both a dreamer and a realist programmer. But don’t tell my customers that I’m such a dreamer.

Bern, an idyllic capital city

Today Tinne and I visited Switzerland’s capital, Bern.

We were really surprised; we’d never imagined that a capital city could offer so much peace and calm. It felt good to be there.

The fountains, the old houses, the river and the snowy mountain peaks give the city an idyllic image.

Standing on the bridge, you see the roofs of all these lovely small houses.

The bear is the symbol of Bern. Near the House of Parliament there was a statue of a bear, and Tinne just couldn’t resist giving it a hug. Bern has also got real bears. Unfortunately, Tinne was not allowed to cuddle those.

The House of Parliament is a truly impressive building. It looks over the snowy mountains, its people and its treasury, the National Bank of Switzerland.


As you can imagine, the National Bank building is a masterpiece as well. And even more impressive: it issues a world-leading currency.

On the market square in Oerlikon we had first seen a chess board on the street: black and white stones and giant chess pieces. In Bern there was also a giant chess board, in the backyard of the House of Parliament. Tinne couldn’t resist challenging me to a game of chess. (*edit*: Armin noted in a comment that the initial positions of knight and bishop are swapped. And OMG, he’s right!)

And she won!

At the House of Parliament you get a stunning, idyllic view on the mountains of Switzerland.


Confoederatio Helvetica

It’s crossing my mind to move here in about two years.

Today we visited Zug; it has a Ferrari shop.

Zug, where an apartment costs far more than a villa in Belgium. In short: a million euros.

It also comforts me; I could be here. Zug has an aviary with exotic birds, and a lake.

When Tinne and I were driving back to Oerlikon, we listened to Karoliina’s Symphonic dream.

The music; a canvas for the paint, Switzerland.

The lights on the mountain. They are all souls. (Die Lichter auf dem Berg. Die sind alle Seelen.)

From grey mouse to putschist. That was quick.

Congratulations to Mr. Van Rompuy for helping the EU powers to find a compromise.

Diplomats credit him with a shrewd sense of deal-making and a determination that is belied by his quiet anti-charisma, and he has already begun to win plaudits from Paris, Berlin and other capitals.

Financial Times, Saturday Mar 27 2010 (alt. link)

Finally a politician to be proud of as a Belgian!

The mouse is dull grey
It steps into the sunshine
The mouse is snow white

Working hard at the Tracker project

Today we improved journal replaying from 1050s to 58s for my test of 25,249 resources.

Journal replaying happens when your cache database gets corrupted, and also when you restore a backup: restore uses the same code as journal replaying, and backup simply makes a copy of your journal.

During the performance improvements we of course found other areas related to data entry that can be improved. It looks like we’re entering a period of focus on performance; we already have a few interesting ideas for next week, which will focus on the performance of some SPARQL functions like regex.

Meanwhile, Michele Tameni and Roberto Guido are working on an RSS miner for Tracker, and Adrien Bustany has been working on other web miners for Flickr, GData, Twitter and Facebook.

I think the first pieces of the RSS and other web miners will start becoming available in this week’s unstable 0.7 release. Martyn is still reviewing their branches, but we’re very lucky to have such good software developers as contributors. Very nice work, Michele, Roberto and Adrien!

The future of the European community: a European Monetary Fund

I’m worried about the EURO’s M3 if a European version of the IMF (an EMF) is to be established.

Nonetheless, I think the European community should do it just to strengthen Europe’s economy. I’m not satisfied by Europe’s economic strength: I want it to be undefeatable.

We must not let the IMF solve our problems. Europe might be a political dwarf, but we Europeans should show that we will solve our own problems. We’re an adult composition of cultures with vast amounts of experience. We know how to solve any imaginable problem. And let’s not, in our defeatism, pretend we don’t.

An EMF is a commitment to future member states: Europe often asks fundamental changes of them, and economic strength is what Europe offers in return. This needs to come at a high price: Greece will have to fix its deficit problem, even if its entire population goes on strike. Greece will be an example for countries like my own: Belgium has a serious deficit problem to fix, too.

An EMF comes at an equally high price, and that frightens me a bit: I don’t want the ECB to go as ballistic on money creation as the Fed has these last two years. I want the EURO to be the strongest relevant currency mankind has ever created. No matter how insane the rest of the world thinks that ambition is, I believe that keeping the EURO’s M3 in check is key to creating a wealthy society in Europe.

Politically I want European nations to negotiate more and more often. The European Union is a political dwarf only because finding agreement is hard. But in the long run our solution will be the most negotiated and most tested one on this planet.

Together we can deal with anything. That doesn’t mean it’ll be easy; it has never been easy: just seventy years ago we were still killing each other. We’re all guilty of that one way or another. And before that it wasn’t any better. Today, not that many people still care: “it wasn’t me”, right? So stop being a bitch about it, then.

It’s time to let it be. It’s time to start a new European century that will be better. With respect for all European cultures, languages, nations, nationalities, values, borders and interests.

But also a European century with economic responsibilities for each member. It’s our strength: we figured out how to keep our population wealthy: let’s continue doing so in the future.

The Euro-skeptics and the pro-Europeans are finally united in one opinion!

We both agree that Nigel Farage is a complete moron.

Perhaps we should put a damp rag like the one he mentions in his mouth next time he opens it?

Nigel Farage, you’re a disgrace to yourself. The European Parliament is no place for personal attacks, and you aren’t fit to carry the title of Member of the European Parliament. Please do the honourable thing and resign.

Every sensible person outside of the U.K. thinks you should; even the Euro-skeptics do. You’re an embarrassment to your country and its culture, so I hope, for the people of the U.K., that they’ll kick you out of politics.

I fear you’re just playing the populist card, and that you’ll even get votes for this from other morons.

Please don’t rewrite software written in .NET

This (super) cool .NET developer and good friend came to me at the FOSDEM bar to tell me he was confused about why during the Tracker presentation I was asking people to replace F-Spot and Banshee.

I hope I didn’t say it like that; I would never intend to say that. But I’ll review the video of the presentation as soon as Rob publishes it.

Anyway, to ensure everybody correctly understood what I did want to say (whether or not I actually said it that way is another question):

The call was to inspire people to reimplement, or to provide different implementations of, F-Spot’s and Banshee’s data backends, so that they would use an RDF store like tracker-store instead of each app having its own metadata database.

I think I also mentioned Rhythmbox in the same sentence, because the last thing I would want is to turn this into a .NET vs. anti-.NET debate. It just happens to be that the best GNOME software for photo and music management is written in .NET (and there’s a good reason for that).

People who know me also know that I think those anti-.NET people are disruptive, ignorable people. I actively and willingly ignore them (and they should know this). I’m actually a big fan of the Mono platform.

I’ll try to ensure that I don’t create this confusion during presentations anymore.

The role of media in the USA

Two posts ago I wrote that something like The Real News is quite unique in the U.S.’s completely broken media landscape.

Today I found an interesting double interview on AlJazeeraEnglish by Riz Khan titled Has the mainstream media in the US replaced serious coverage with “junk news” and tabloidism?

Handling triples arriving in tracker-store, CouchDB integration as a use case

At GCDS Jamie told us that he wants to make a plugin for tracker-store that writes all the triples to a CouchDB instance.

Letting a CouchDB be a sort of offline backup isn’t very interesting. You want triples to go into the CouchDB at the moment of guaranteed storage: at commit time.

For the purpose of developing this we provide the following internal API.

typedef void (*TrackerStatementCallback) (const gchar *graph,
                                          const gchar *subject,
                                          const gchar *predicate,
                                          const gchar *object,
                                          GPtrArray   *rdf_types,
                                          gpointer     user_data);
typedef void (*TrackerCommitCallback)    (gpointer     user_data);

void tracker_data_add_insert_statement_callback (TrackerStatementCallback callback,
                                                 gpointer                 user_data);
void tracker_data_add_delete_statement_callback (TrackerStatementCallback callback,
                                                 gpointer                 user_data);
void tracker_data_add_commit_statement_callback (TrackerCommitCallback callback,
                                                 gpointer              user_data);

You’ll need to make a plugin for tracker-store and register the hooks at your plugin’s initialization; a sketch follows below.

The current behaviour is that when graph is NULL, the default graph is being used. If it’s not NULL, you probably don’t want the data in CouchDB: it’s data coming from a miner. You probably only want to store data that comes from the user. The user’s applications won’t use FROM and INTO in their SPARQL Update queries, meaning that graph will be NULL.

It’s very important that your callback handler works with bottom halves: put the expensive task on a queue and handle the queued items somewhere else. You can for example use a GThreadPool, or a GQueue plus a g_idle_add_full with a G_PRIORITY_LOW callback picking items one by one on the mainloop. You should never have a TrackerStatementCallback or a TrackerCommitCallback that blocks. Not even a tiny, tiny bit of blocking: it’ll bring everything in tracker-store to its knees. It’s why we aren’t giving you a public plugin API with a way to install your own plugins outside of the Tracker project.
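
To make that concrete, here is a minimal sketch of such a plugin hook, assuming the internal API listed above. It is not the actual CouchDB plugin, and the bottom half shown here is the simplest variant of what the previous paragraph describes: each copied statement is scheduled as a low-priority idle callback on the mainloop (a GThreadPool or a shared GQueue would work just as well). The my_couchdb_plugin_init() entry point is hypothetical.

/* Minimal sketch of a tracker-store plugin hook (not the actual CouchDB
 * plugin), assuming the internal API listed above. The statement callback
 * never blocks: it only copies the strings and defers the real work to a
 * low-priority idle callback on the mainloop. */
#include <glib.h>

typedef struct {
  gchar *subject;
  gchar *predicate;
  gchar *object;
} DeferredStatement;

static gboolean
deferred_statement_handler (gpointer user_data)
{
  DeferredStatement *s = user_data;

  /* The expensive part (e.g. pushing the triple to CouchDB, or queueing
   * it for a batched push at commit time) belongs here, safely outside
   * the statement callback. */

  g_free (s->subject);
  g_free (s->predicate);
  g_free (s->object);
  g_slice_free (DeferredStatement, s);

  return FALSE; /* one-shot idle callback */
}

static void
on_insert_statement (const gchar *graph,
                     const gchar *subject,
                     const gchar *predicate,
                     const gchar *object,
                     GPtrArray   *rdf_types,
                     gpointer     user_data)
{
  DeferredStatement *s;

  /* A non-NULL graph usually means miner data; we only care about
   * user-provided data in the default graph. */
  if (graph != NULL)
    return;

  s = g_slice_new (DeferredStatement);
  s->subject = g_strdup (subject);
  s->predicate = g_strdup (predicate);
  s->object = g_strdup (object);

  /* Never block here: hand the copied data over to the mainloop. */
  g_idle_add_full (G_PRIORITY_LOW, deferred_statement_handler, s, NULL);
}

static gboolean
deferred_commit_handler (gpointer user_data)
{
  /* E.g. flush the batched triples to CouchDB here. */
  return FALSE;
}

static void
on_commit (gpointer user_data)
{
  g_idle_add_full (G_PRIORITY_LOW, deferred_commit_handler, NULL, NULL);
}

/* Hypothetical plugin entry point: register the hooks at initialization. */
void
my_couchdb_plugin_init (void)
{
  tracker_data_add_insert_statement_callback (on_insert_statement, NULL);
  tracker_data_add_commit_statement_callback (on_commit, NULL);
}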

By the way: we want to see code instead of talk before we further optimize things for this purpose.

Who the fuck is this guy?!

While you guys are all wondering who he is, we in Belgium are wondering who’s going to replace Herman Van Rompuy as our prime minister.

He’s the only prime minister who managed to give Belgium non-chaotic federal politics, for a few months.

I fear that Belgium will now plunge into a new political crisis. Not because the former prime minister, Yves Leterme, is a bad one, but because the Walloons simply don’t want him. We know they’ll do everything in their power to discredit Yves, especially their media. Le Soir has already publicly said that it will “veto” Yves Leterme as prime minister. As if a newspaper elects ministers. Arrogance.

Anyway.

If the price for delivering the first president of Europe is a new political crisis, I guess we are so used to political crises that it’s okay. We’ll survive. You guys can have him.

He’s quite intelligent. He’s not a media guy. We don’t know much more about him ourselves. Use Wikipedia.

The really bad thing about Herman is that in the past he let religion influence his politics: he was, for example, against abortion laws, and he is against Turkey joining the union because of religious differences.

However. For the people from the United Kingdom: fuck your conservative tabloid magazines. To the idiot editors of those tabloids: discrediting Van Rompuy was easy, still you guys screwed up with retarded articles about Belgium.

ps. I don’t care that you don’t want politics on planet.gnome. It pulls from my blog, so ask the administrators of planet.gnome to pick the right categories. I say this because I know that people will otherwise comment about it. I want them to know that I don’t care.

Writeback, writing metadata back into your files

Today I feel like exposing you to some bleeding-edge development going on in the Tracker team as we speak. I know you’re scared of that, and that’s precisely why I want to expose you to it! Hah.

We are prototyping writeback support for Tracker.

By writeback we mean writing the metadata that the user passes to us via SPARQL UPDATE back into the file that he’s describing.

This means that the update must be about a thing that is stored as a file, that it must touch a property that we want to write back, and that we need to support the file’s format.

OK, that’s three requirements before we write anything back. Let’s explain how this stuff works in the prototype!

In our prototype you mark the properties that are eligible for being written into files with tracker:writeback.

It goes like this:

nie:title a rdf:Property ;
   rdfs:label "Title" ;
   rdfs:comment "The title of the document" ;
   rdfs:subPropertyOf dc:title ;
   nrl:maxCardinality 1 ;
   rdfs:domain nie:InformationElement ;
   rdfs:range xsd:string ;
   tracker:fulltextIndexed true ;
   tracker:weight 10 ;
   tracker:writeback true .

Next you need a writeback module for tracker-writeback. We implemented a prototype one that can only write the title of MP3 files. It uses ID3lib‘s C API.

When the user is describing a file, the resource must have nie:isStoredAs. The property being changed must have tracker:writeback set to true. And we want the value of the property too. That’s simple in SPARQL, right? Sure it is!

SELECT ?url ?predicate ?object {
    <$subject> ?predicate ?object ;
               nie:isStoredAs ?url .
    ?predicate tracker:writeback true
 }

You’ll find this query in the code, go look!

Now it’s simple: using ID3lib we map Nepomuk to ID3 and write it.
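
For the curious, writing that title with id3lib’s C API looks roughly like this. This is a sketch from memory, not the actual tracker-writeback module, and it leaves out all error handling:

/* Rough sketch of writing a title with id3lib's C API (not the actual
 * tracker-writeback module); error handling omitted. */
#include <id3.h>

static void
write_mp3_title (const char *filename, const char *title)
{
  ID3Tag   *tag = ID3Tag_New ();
  ID3Frame *frame;

  ID3Tag_Link (tag, filename);

  frame = ID3Tag_FindFrameWithID (tag, ID3FID_TITLE);
  if (frame == NULL) {
    frame = ID3Frame_NewID (ID3FID_TITLE);
    ID3Tag_AttachFrame (tag, frame);
  }

  ID3Field_SetASCII (ID3Frame_GetField (frame, ID3FN_TEXT), title);

  ID3Tag_Update (tag);
  ID3Tag_Delete (tag);
}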

No, don’t be afraid: we’re not going to write back metadata that we found ourselves. We’ll only write back data that the user provided in the form of a SPARQL Update on the default graph. No panic. Besides, using tracker-writeback is going to be completely optional (just don’t run it).

This is a prototype; I repeat, this is a prototype. No expectations yet, please. Just feel exposed to scary stuff, get overly excited, and then join us by contributing. Everything we’re doing is public, in the ‘writeback’ branch.

ps. Whether this will be Maemo’s future metadata-write stuff? Hmm, I don’t know. Do you know? ;-)

Keeping the autotools guys happy with qmake

I’m still figuring out how to do the same thing with cmake, but various bloggers and comments appear to be promising that it’ll be even easier.

But this is a message for probably all Nokia teams who are making Qt-based libraries:

First open your src/src.pro file and add this stuff:

CONFIG += create_pc create_prl
QMAKE_PKGCONFIG_REQUIRES = QtGui
QMAKE_PKGCONFIG_DESTDIR = pkgconfig
INSTALLS += target headers

Now open your debian/$package-dev.install file and add this line:

usr/lib/pkgconfig

You’ll be doing all the autotools people a tremendous favor.

Next, open the README file and document that you need to use qmake-qt4 on Debian or make either qmake-qt3 or qmake-qt4 work flawlessly with your build environment. Perhaps also mention how to set the install prefix, how to make qmake find and install .pc files in another location, stuff like that. I find that this is lacking for almost every Qt-based library.

You’ll be doing everybody who wants to use your software a tremendous favor.

Indentation

People,

Let’s all stop doing this:

static void
my_calling_function_wrong (void)
{
[tab]MyItem1 *item1;
[tab]MyItem2 *item2;
[tab]MyItem3 *item3;

[tab]my_long_function (item1,
[tab][tab][tab][tab]..item2,
[tab][tab][tab][tab]..item3);
}

And start doing this:

static void
my_calling_function_right (void)
{
[tab]MyItem1 *item1;
[tab]MyItem2 *item2;
[tab]MyItem3 *item3;

[tab]my_long_function (item1,
[tab]..................item2,
[tab]..................item3);
}

The former doesn’t make sense unless each and every code-viewing text display understands the mode line’s tab-width property. The latter just always works, with every normal text editor.

ps. The super cool guys at Anjuta have already fixed this for me. I’m sure the even more cool Emacsers and the über cool Vimmers can also fix their text editors?

Unnecessary note: [tab] is a tab and . is a space in the examples.

Database cursors used in Tracker

A database cursor on a query is a finger pointing to the current row. Most databases implement this without pulling the entire result set into memory. It’s indeed much like a C/C++ pointer, except that a pointer can only point to memory in your process’s virtual address space; a cursor is a bit more abstract.

POSIX developers can compare a cursor with a pointer into an mmap’ed region. For people who don’t know about mmap: mmap can be used to map a file into your process’s memory. You get a C/C++ pointer back, from which you can read the data as if it were in memory. When you cause a page fault, the kernel pulls the needed pages into your memory (from the file, or whichever resource is behind the mapping).
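
If you have never used it, a minimal mmap example (nothing Tracker-specific) looks like this:

/* Minimal mmap example (nothing Tracker-specific): map a file and read
 * a byte through the returned pointer, as if the file were in memory. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int
main (void)
{
  int fd = open ("/etc/hostname", O_RDONLY);
  struct stat st;
  char *data;

  if (fd < 0 || fstat (fd, &st) < 0 || st.st_size == 0)
    return 1;

  data = mmap (NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  if (data == MAP_FAILED)
    return 1;

  /* No read() call and no buffer of our own: touching the mapping makes
   * the kernel page the file contents in on demand. */
  printf ("first byte: %c\n", data[0]);

  munmap (data, st.st_size);
  close (fd);
  return 0;
}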

In Tracker all database operations used to work much like g_file_get_contents: you read the entire thing into memory and then operate on that memory. Internally it used the database’s cursor API too, of course; sqlite3_step() is SQLite’s cursor API.

First the database filled up its page cache with the data, then you copied it into your application’s memory, then you used it, then you freed it.

That’s kinda silly! Why not use it straight from the database’s caches instead? That’s what you use a DB cursor for.

The result is less copying of memory. This means less memory fragmentation and fewer memory operations to perform (which should result in a small performance improvement).

This effort is ongoing, but a lot of Tracker’s internal loops over result sets now use a cursor instead of an in-memory result set.

As it should be

Last week I wrote down why I believe the model shouldn’t know anything about columns. In .NET many people have only ever used DataTables as their models, and because of that they often believe that in .NET the model must contain the columns.

They forget that DataGridView’s DataSource doesn’t require a DataTable at all; DataTable just happens to implement what DataSource needs: IList. It’s true that if the model has all the information, .NET’s many databinding components will get everything they need out of your model. But it ain’t true that this is the only way, nor that it’s the by-design approach in .NET. The by-design in .NET is that the view has all this, and the model *can* pass it if it has it, but it doesn’t have to.

In this example I illustrate that in .NET you can do databinding with a simple .NET array; simple arrays implement IList. When a column of the DataGridView isn’t ReadOnly, the property setter of the instance in the array will be called after the user edits the cell. I illustrate this with the dataGridView1_CellEndEdit method: the setter of the Age property of the Person instance being edited will be called, and as a result of the Refresh the view will fetch the model’s new values. The Changed property will be rendered as True for the Person that got changed.

People with VS.NET can drag a DataGridView and a Button onto a Windows Form and copy-paste the Person class and the button1_Click and dataGridView1_CellEndEdit code over. It’ll work.

// No DataTable, I'm not even importing System.Data, IList is fine
using System;
using System.Windows.Forms;

public partial class Form1 : Form
{
    public Form1() { InitializeComponent(); }

    private void button1_Click(object sender, EventArgs e) {
        dataGridView1.AutoGenerateColumns = false;

        DataGridViewColumn column;

        column = new DataGridViewTextBoxColumn();

        column.DataPropertyName = "Name";
        column.HeaderText = "The name of the person";
        column.Width = 180;

        // As you can see we are not doing anything on the model
        // to tell the view what the columns are.

        dataGridView1.Columns.Add(column);

        column = new DataGridViewTextBoxColumn();
        column.DataPropertyName = "Age";
        column.HeaderText = "The age";
        column.Width = 70;

        // Let's make this one editable
        column.ReadOnly = false;

        // We're just telling the view about the properties it
        // needs to bind, using the DataPropertyName member of
        // a DataGridViewColumn

        dataGridView1.Columns.Add(column);

        // Let's add a column that will show us that the view
        // will fetch property values at refresh

        column = new DataGridViewTextBoxColumn();
        column.DataPropertyName = "Changed";
        column.HeaderText = "?";
        column.Width = 45;

        dataGridView1.Columns.Add(column);

        // This is a normal array in .NET: it implements IList.
        // An IList is a collection with a known order.

        Person[] people = new Person[2];

        // Let's create two people in this array

        people[0] = new Person();
        people[0].Name = "Jos";
        people[0].Age = 30;
        people[0].Changed = false;

        people[1] = new Person();
        people[1].Name = "Jan";
        people[1].Age = 25;
        people[1].Changed = false;

        // And let's set the model of the view to be that array

        dataGridView1.DataSource = people;
        dataGridView1.CellEndEdit += new DataGridViewCellEventHandler(dataGridView1_CellEndEdit);
    }

    void dataGridView1_CellEndEdit(object sender, DataGridViewCellEventArgs e)
    {
        // This makes the view refresh its currently visible values, by reading
        // them from the model again. This callback happens after the user is
        // done editing a cell.

        dataGridView1.Refresh();
    }
}

public class Person {
    private string name, city;
    private uint age;
    private bool changed;
    public string Name {
        get { return name; }
        set { name = value; }
    }
    public bool Changed {
        get { return changed; }
        set { changed = value; }
    }
    public uint Age {
        get { return age; }
        set {
            age = value;
            Changed = true;
        }
    }
    public string City {
        get { return city; }
        set { city = value; }
    }
}

You can compare a GtkTreeModel with a DataTable in .NET: it’s a model that has its own memory storage and contains both rows and columns. This means that GtkTreeModel isn’t a generic model the way IList in .NET is: with GtkTreeModel you must always represent your data as rows and columns, even if the data ain’t rows and columns.

I indeed believe that Microsoft got databinding right in their .NET platform, and that Gtk+’s GtkTreeView and GtkTreeModel got it wrong.

Also, feel free to use a huge array of Person instances: the view will only read the property values of the visible ones (plus a few more; it shouldn’t be much). Fun tip: write something to the console in the property getters of the Person class and start scrolling. Now you can easily discover for yourself how to do lazy-loading tricks with MVC in .NET and make things scale.