https://www.youtube.com/watch?v=lIWy8taP1rE
Because it states precisely how the animals are doing.
Now for the rest of the country.
When Isabel Albers writes something, I pay attention: we must invest in infrastructure.
This creates prosperity, distributes money efficiently and invests in our children's future: something that is needed, and something we stand for.
We can limit the austerity efforts when it comes to infrastructure investments; we must push them through all the more when it comes to other government spending.
Maybe we should launch a few scud missiles? A scud missile aimed at the size of government would do no harm.
Going through a week of media storm about how hard we are cutting in certain government sectors: let's do it hard and thoroughly.
Let's invest in Belgian infrastructure at the same time. Let's invest a lot.
Had to adapt dnsmasq’s code today. My god, that tiny application has shitty code quality. I now feel bad knowing that so many systems depend on this stuff.
Current upstream situation
In Tracker’s RDF store we journal all inserts and deletes. When we replay the journal, we replay every event that ever happened, so you end up in precisely the same situation as when the last journal entry was appended. We also use the journal for making backups: at restore time we remove the SQLite database, put your backup file where the journal belongs, and replay it.
We also use the journal to cope with ontology changes. When an ontology change takes place that we can’t support using SQLite’s limited ALTER, we replay the journal over a new SQLite database schema. While we replay we ignore errors; some ontology changes can cause loss of data (e.g. removal of a property or class).
This journal has a few problems: it keeps growing forever, which eats storage space, and because deleted data stays in the journal it is also a privacy issue.
This was indeed not acceptable for Nokia’s N9. We decided to come up with an ad-hoc solution which we plan to someday replace with a permanent solution. I’ll discuss the permanent solution last.
The ad-hoc solution for the N9
For the N9 we decided to add a compile option to disable our own journal and instead use SQLite’s synchronous journaling. In this mode SQLite guarantees safe writes using fsync.
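For illustration, this is roughly what letting SQLite handle durability itself looks like at the SQLite level; these are standard SQLite pragmas, but the exact settings Tracker uses live in its database layer and may differ from this sketch.

/* Sketch: let SQLite handle durability itself (WAL plus synchronous writes).
 * Standard SQLite pragmas; Tracker's real setup may differ. */
#include <sqlite3.h>

static int
open_store (const char *path, sqlite3 **db)
{
        int rc = sqlite3_open (path, db);
        if (rc != SQLITE_OK)
                return rc;

        /* Write-ahead logging instead of the default rollback journal */
        sqlite3_exec (*db, "PRAGMA journal_mode = WAL;", NULL, NULL, NULL);
        /* fsync on every commit so committed transactions survive power loss;
         * NORMAL would only sync at checkpoint time */
        sqlite3_exec (*db, "PRAGMA synchronous = FULL;", NULL, NULL, NULL);

        return SQLITE_OK;
}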
Before, we didn’t use SQLite’s synchronous journaling; we had replaced it with our own journal, partly for the earlier features (backup, coping with ontology changes) but also, more importantly, because the N9’s storage hardware has a high latency on fsync: we wanted to take full control by using our own journal. Also because at first we were told it wouldn’t be possible to force-shutdown the device, and then this suddenly became possible again in some ways: we needed high performance, plus we don’t want to lose your data, ever.
The storage space issue was less severe: the device’s storage capacity is huge compared to the significance of that problem. However, we did not want the privacy issue, so I made sure this problem got the right priority before any launch of the N9.
The performance was significantly worse with SQLite’s synchronous journaling, so we implemented manual checkpointing in a background thread for our usage of SQLite. With this we have more control over when fsync happens on SQLite’s WAL journal. After some tuning we got comparable performance figures, even with our high-latency storage hardware.
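As a rough sketch of the idea (not Tracker’s actual code): with automatic checkpointing disabled, a low-priority background thread decides when the WAL is checkpointed, and thus when the expensive fsync happens. In practice you would use a dedicated connection or proper locking instead of sharing one handle; the interval is made up.

/* Sketch only: periodic WAL checkpointing from a background thread,
 * assuming the connection was opened in WAL mode. */
#include <glib.h>
#include <sqlite3.h>

static gpointer
checkpoint_thread (gpointer data)
{
        sqlite3 *db = data;

        /* Don't let SQLite checkpoint on its own; we decide when. */
        sqlite3_wal_autocheckpoint (db, 0);

        for (;;) {
                g_usleep (10 * G_USEC_PER_SEC);  /* made-up interval */
                /* This is where the fsync cost is paid, off the main thread. */
                sqlite3_wal_checkpoint (db, NULL);
        }

        return NULL;
}

/* Spawn it with e.g.: g_thread_new ("wal-checkpoint", checkpoint_thread, db); */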
We of course replaced backup / restore with simply copying the SQLite database using SQLite’s backup API.
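The SQLite backup API referred to here looks roughly like this in use (a minimal sketch, error handling mostly omitted):

#include <sqlite3.h>

/* Copy the whole "main" database of src into dest using SQLite's online backup API. */
static int
backup_database (sqlite3 *src, sqlite3 *dest)
{
        sqlite3_backup *backup;

        backup = sqlite3_backup_init (dest, "main", src, "main");
        if (backup == NULL)
                return sqlite3_errcode (dest);

        sqlite3_backup_step (backup, -1);   /* -1: copy all remaining pages in one go */
        sqlite3_backup_finish (backup);

        return sqlite3_errcode (dest);
}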
The above solution means that we lost an important feature: coping with certain ontology changes. It’s true that the N9 will not cope with just any ontology change, whereas upstream Tracker copes with more kinds of ontology changes.
The solution for the N9 will be pragmatic: on any future release that is to be deployed on the phone, we won’t make ontology changes that we can’t cope with, unless the new ontology ships alongside a new release of Tracker that is specifically adapted and tested to cope with that ontology change.
Planned permanent solution for upstream
The permanent solution will probably be one where the custom journal isn’t disabled but periodically gets truncated, so that its first transaction contains an entire copy of the SQLite database. This doesn’t completely solve the privacy issue, but we can provide an API to make the truncation happen at a specific time, wiping deleted information from the journal.
Damned guys, we’re too shy about what we delivered. When the N900 was made public we flooded the planets with our blogs about it. And now?
I’m proud of the software on this device. It’s good. Look at what Engadget is writing about it! Amazing. We should all be proud! And yes, I know about the turbulence in Nokia-land. Deal with it, it’s part of our job. Para-commandos don’t complain that they might get shot. They just know. It’s called research and development! (I know, bad metaphor)
I don’t remember that many good reviews about even the N900, and that phone was seen by many of its owners as among the best they’ve ever owned. Now is the time to support Harmattan the same way we passionately worked on the N900 and its predecessor tablets (N810, N800 and 770). Even if the N9’s future is uncertain: who cares? It’s mostly open source! And not open source in the ‘Android way’. You know what I mean.
The N9 will be a good phone. The Harmattan software is awesome. Note that Tracker and QSparql are being used by many of its standard applications. We have always been allowed to develop Tracker the way it’s supposed to be done. Like many other similar projects: in upstream.
As for the short-term future, I can announce that we’re going to make Michael Meeks happy by finally solving the ever-growing journal problem. Michael repeatedly and rightfully complained about this to us at conferences. Thanks, Michael. I’ll write about how we’ll do it soon. We have some ideas.
We have many other plans for the long term. But let’s work step by step for now. Our software, at least what goes to Harmattan, must be rock solid and very stable from now on. Introducing a serious regression would be a catastrophe.
I’m happy because, with that growing-journal problem, I can finally focus on a tough coding problem again. I don’t like bugfixing-only periods. But yeah, I have enough experience to realize that sometimes this is needed.
And now, now we’re going to fight.
A few months ago we added the implicit tracker:modified property to all resources. This property is an auto-incrementing value. It used to be incremented on roughly every SQL update query that happened. The value is stored per resource.
We are now changing this to be per transaction. A transaction in Tracker is one set of SPARQL Update INSERT or DELETE queries; you can insert and delete data about multiple resources in one such sentence (a sentence can contain multiple space-delimited Update queries). The exception is everything related to ontology changes: these get the first increment as their tracker:modified value, including ontology changes that happen after the initial ontology transaction (which is made at the first start). The exception is there to support future ontology changes and the data conversions they might need.
The per-resource tracker:modified value is useful for applications’ synchronization purposes: you can compare the tracker:modified value your application stored against Tracker’s always increasing one (with an exception at integer overflow) to know whether or not your version is older.
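A minimal sketch of that comparison, assuming the libtracker-sparql API of that era (tracker_sparql_connection_get and friends); <urn:example:resource> is a made-up URI and error handling is mostly omitted:

#include <libtracker-sparql/tracker-sparql.h>

/* Returns TRUE if our locally stored copy of the resource is out of date. */
static gboolean
is_our_copy_stale (gint64 stored_modified)
{
        TrackerSparqlConnection *conn;
        TrackerSparqlCursor *cursor;
        gint64 current = 0;

        conn = tracker_sparql_connection_get (NULL, NULL);
        cursor = tracker_sparql_connection_query (conn,
                "SELECT ?m { <urn:example:resource> tracker:modified ?m }",
                NULL, NULL);

        if (cursor && tracker_sparql_cursor_next (cursor, NULL, NULL))
                current = tracker_sparql_cursor_get_integer (cursor, 0);

        if (cursor)
                g_object_unref (cursor);
        g_object_unref (conn);

        return current > stored_modified;
}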
The reason why we are changing this to per-transaction is that this way we can guarantee that the value will be restored after a journal replay and/or a backup’s restore, without having to store it in either the journal or the backup, and thus without having to change either format.
Because we have a persistent journal, we can actually deliver you a backup by simply making a fast file copy of the journal. But let this little secret be known only to the people who care about the implementation. Sssht!
We’re already rotating the journal and compressing the rotated chunks to reduce its size. This week we’re working on not journaling data that is embedded in local files: a re-index of such a local file will re-insert the data anyway. This will significantly reduce the size of the journal too.
Although with SQLite WAL we have direct-access now, we don’t support direct-access for insert and delete SPARQL queries. Those queries, when made using libtracker-sparql, still go over D-Bus, using Adrien’s FD-passing D-Bus IPC technique. The library does that for you.
After investigating a performance analysis by somebody from Intel, we learned that there is still a significant overhead per IPC call. In the analysis, the person made miner-fs combine multiple insert transactions and send them over as one single big transaction. This was noticeably faster than making many individual IPC requests.
The problem with this is that if one of the many insert queries fails, they all fail: not good.
We’re now experimenting with a private API that allows you to pass n individual insert transactions, and get n errors back, using one IPC call.
The numbers are promising even on Desktop D-Bus (the test):
$ cd tests/functional-tests/
$ ./update-array-performance-test
First run (first update then array)
Array: 0.103675, Update: 0.139094
Reversing run (first array then update)
Array: 0.290607, Update: 0.161749
$ ./update-array-performance-test
First run (first update then array)
Array: 0.105920, Update: 0.137554
Reversing run (first array then update)
Array: 0.118785, Update: 0.130630
$ ./update-array-performance-test
First run (first update then array)
Array: 0.108501, Update: 0.136524
Reversing run (first array then update)
Array: 0.117308, Update: 0.151192
$
We’re now deciding whether or not the API will become public; returning arrays of errors isn’t exactly ‘nice’ or ‘standard’.
While trying to handle a bug with a description like “if I do this, tracker-store’s memory grows to 80MB and my device starts swapping”, we were surprised to learn that a sqlite3_stmt consumes about 5 kB of heap. Ouch.
Before, we didn’t think those prepared statements were very large, so we threw all of them into a hashtable in case the query was run again later. However, if you collect thousands of such statements, memory consumption obviously grows.
We decided to implement an LRU cache for these prepared statements. For clients that access the database using direct-access the cache will be smaller, so that maximum consumption is only a few megabytes. Because our INSERT and DELETE queries are more reusable than SELECT queries, we split it into two separate caches per thread.
The implementation is done with a simple intrusive linked ring list. We’re still testing it a little bit to get good cache-size numbers. I guess it’ll go into master soon. For your testing pleasure you can find the branch here.
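To make the idea concrete, here is a minimal sketch of an LRU cache for prepared statements using GLib (a GHashTable for lookup plus a GQueue for recency). This is an illustration only, not the intrusive ring list the branch actually uses, and the size limit is made up.

#include <glib.h>
#include <sqlite3.h>

#define MAX_CACHED_STMTS 100   /* made-up limit for the example */

typedef struct {
        GHashTable *by_sql;    /* sql string -> GList node in the queue */
        GQueue      lru;       /* most recently used at the head */
} StmtCache;

typedef struct {
        gchar        *sql;
        sqlite3_stmt *stmt;
} StmtEntry;

static void
stmt_cache_init (StmtCache *cache)
{
        cache->by_sql = g_hash_table_new (g_str_hash, g_str_equal);
        g_queue_init (&cache->lru);
}

static sqlite3_stmt *
stmt_cache_get (StmtCache *cache, sqlite3 *db, const gchar *sql)
{
        GList *node = g_hash_table_lookup (cache->by_sql, sql);
        StmtEntry *entry;

        if (node) {
                /* Cache hit: move the node to the head and reuse the statement. */
                g_queue_unlink (&cache->lru, node);
                g_queue_push_head_link (&cache->lru, node);
                entry = node->data;
                sqlite3_reset (entry->stmt);
                return entry->stmt;
        }

        /* Miss: prepare a new statement and insert it at the head. */
        entry = g_new0 (StmtEntry, 1);
        entry->sql = g_strdup (sql);
        sqlite3_prepare_v2 (db, sql, -1, &entry->stmt, NULL);
        g_queue_push_head (&cache->lru, entry);
        g_hash_table_insert (cache->by_sql, entry->sql, cache->lru.head);

        /* Evict the least recently used statement when over the limit. */
        if (cache->lru.length > MAX_CACHED_STMTS) {
                StmtEntry *old = g_queue_pop_tail (&cache->lru);
                g_hash_table_remove (cache->by_sql, old->sql);
                sqlite3_finalize (old->stmt);
                g_free (old->sql);
                g_free (old);
        }

        return entry->stmt;
}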
Tracker 0.8’s situation
In Tracker 0.8 we have a signal system that causes quite a bit of overhead. The overhead comes from a list of things I label A to G.
Not all aggregators show that list properly; sorry for that. I’ll nevertheless refer to the items as such later in this article.
Consumers’ problems with Tracker 0.8’s signal
The solution that we’re developing for Tracker 0.9
Direct access
With direct-access we remove most of the round-trip cost of a query made by a consumer that wants a literal object involved in a changeset: by using the TrackerSparqlCursor API with direct-access enabled, you end up doing sqlite3_step() in your own process, directly on meta.db.
For the consumers of the signal, this removes 3.
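For illustration, reading query results through such a cursor with libtracker-sparql looks roughly like this (a sketch; the query is just an example and error handling is mostly omitted):

#include <libtracker-sparql/tracker-sparql.h>

static void
print_titles (void)
{
        TrackerSparqlConnection *conn;
        TrackerSparqlCursor *cursor;

        conn = tracker_sparql_connection_get (NULL, NULL);
        if (!conn)
                return;

        cursor = tracker_sparql_connection_query (conn,
                "SELECT ?title { ?r a nmm:MusicPiece ; nie:title ?title }",
                NULL, NULL);

        if (cursor) {
                /* With the direct-access backend each _next() is an sqlite3_step()
                 * in this process; no D-Bus round trip involved. */
                while (tracker_sparql_cursor_next (cursor, NULL, NULL))
                        g_print ("%s\n", tracker_sparql_cursor_get_string (cursor, 0, NULL));
                g_object_unref (cursor);
        }

        g_object_unref (conn);
}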
Sending integer IDs instead of string URIs
A while ago we introduced the SPARQL function tracker:id(resource uri), which gives you the unique number that Tracker’s RDF store uses internally for a resource.
Each resource, each class and each predicate (the latter two are resources like any other) has such a unique internal ID.
Given that Tracker’s class signal system is specific anyway, we decided not to give you subject URI strings. Instead, we’ll give you the integer IDs.
The Writeback signal also got changed to do this, for the same reasons. But this API is entirely internal and shouldn’t be used outside of the project.
This for us removes A, B, C, D and E. For the consumers of the signal, this removes 1.
Merge added, changed and removed into one signal
We give you two arrays in one signal: inserts and deletes.
For consumers of the signal, this removes 4.
Add the class name to the signal
This allows you to use a string filter on your signal subscription in D-Bus.
For us this removes G. For consumers of the signal, this removes 5.
Pass the object-id for resource objects
You’ll get a third number in the inserts and deletes arrays: the object-id. We don’t send object literals, although for integral objects we’re still discussing this. But for resource objects we can give you the object-id without much extra cost.
For consumers of the signal, this removes 2.
SPARQL IN, tracker:id(resource uri) and tracker:uri(int id)
We recently added support for SPARQL IN; we already had tracker:id(resource uri), and I implemented tracker:uri(int id).
This makes things like this possible:
SELECT ?t { ?r nie:title ?t . FILTER (tracker:id(?r) IN (800, 801, 802, 807)) }
Here 800, 801, 802 and 807 would be the IDs that you receive in the class signal. And with tracker:uri(int id) it goes like this:
SELECT tracker:uri (800) tracker:uri (801) tracker:uri (802) tracker:uri (807) { }
For consumers this removes most of the burden introduced by the IDs.
Context switching of processes
What is left is F: context switching between tracker-store and dbus-daemon. This is mostly important for mobile targets (ARM hardware). We reduce the switches by grouping transactions together and then bursting larger sets. The emission is both timeout-based and data-size-based (we emit after either a certain amount of time or a certain memory limit). We’re still testing what the ideal timeouts and sizes are on target hardware.
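A sketch of the kind of batching meant here (illustrative only; the limits are made up and emit_class_signal is a placeholder for the actual D-Bus emission): buffer events, and flush either when a timeout fires or when the buffer gets too big.

#include <glib.h>

#define FLUSH_TIMEOUT_MS  1000   /* made-up values; the real ones are being tuned */
#define FLUSH_MAX_EVENTS  500

static GPtrArray *pending_events = NULL;
static guint      flush_source = 0;

static void emit_class_signal (GPtrArray *events);   /* placeholder: the actual D-Bus emission */

static gboolean
flush_cb (gpointer data)
{
        if (pending_events->len > 0) {
                emit_class_signal (pending_events);
                g_ptr_array_set_size (pending_events, 0);
        }
        flush_source = 0;
        return FALSE;   /* one-shot timeout */
}

static void
queue_event (gpointer event)
{
        if (pending_events == NULL)
                pending_events = g_ptr_array_new ();

        g_ptr_array_add (pending_events, event);

        /* Flush immediately when the buffer is large, otherwise after a timeout. */
        if (pending_events->len >= FLUSH_MAX_EVENTS) {
                if (flush_source != 0) {
                        g_source_remove (flush_source);
                        flush_source = 0;
                }
                emit_class_signal (pending_events);
                g_ptr_array_set_size (pending_events, 0);
        } else if (flush_source == 0) {
                flush_source = g_timeout_add (FLUSH_TIMEOUT_MS, flush_cb, NULL);
        }
}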
Where is the stuff?
The work isn’t reviewed nor thoroughly tested yet. That will happen over the next few days and weeks.
Anyway, here’s the branch, documentation, example in Plain C, example in Vala
I made some documentation about our SPARQL IN feature that we recently added. I added some interesting use-cases, like doing an insert and a delete based on IN values.
For the new class signal API that we’re developing this week and next, we’ll probably emit the IDs that tracker:id() would give you if you used it on a resource. This means that IN is very useful for getting the metadata of the resources whose IDs you just received from the class signal.
We never documented tracker:id() very much, as it’s not an RDF standard; it’s something Tracker-specific. But neither are the class signals an RDF standard; they are Tracker-specific too. I guess that makes them usable in combination, and renders the ‘internal API’ status irrelevant.
We’re right now prototyping the new class signals API. It’ll probably be a “sa(iii)a(iii)”:
That’s the class name and two arrays of (subject-id, predicate-id, object-id). The class name is there to allow D-Bus filtering. The first array holds the deletes and the second the inserts. We’ll only give you object-ids for non-literal objects (literal objects have no internal object-id). This means that we don’t send literals to you in the signal; you need to make a query to get them, and we’ll send 0 in the signal instead.
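For illustration, a consumer could unpack such an sa(iii)a(iii) signal with libdbus roughly like this (a sketch; only the signature comes from the post, everything around it is assumed):

#include <dbus/dbus.h>

/* Unpack one sa(iii)a(iii) class signal: class name, deletes, inserts. */
static void
handle_class_signal (DBusMessage *message)
{
        DBusMessageIter iter, array, triple;
        const char *class_name;

        dbus_message_iter_init (message, &iter);

        dbus_message_iter_get_basic (&iter, &class_name);   /* the "s" part */
        dbus_message_iter_next (&iter);

        /* Two arrays follow: first the deletes, then the inserts. */
        while (dbus_message_iter_get_arg_type (&iter) == DBUS_TYPE_ARRAY) {
                dbus_message_iter_recurse (&iter, &array);

                while (dbus_message_iter_get_arg_type (&array) == DBUS_TYPE_STRUCT) {
                        dbus_int32_t subject_id, predicate_id, object_id;

                        dbus_message_iter_recurse (&array, &triple);
                        dbus_message_iter_get_basic (&triple, &subject_id);
                        dbus_message_iter_next (&triple);
                        dbus_message_iter_get_basic (&triple, &predicate_id);
                        dbus_message_iter_next (&triple);
                        dbus_message_iter_get_basic (&triple, &object_id);

                        /* object_id is 0 for literal objects */

                        dbus_message_iter_next (&array);
                }

                dbus_message_iter_next (&iter);
        }
}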
We give you the object-ids because of a use-case that we didn’t cover yet:
Given the triple <a> nie:isLogicalPartOf <b>: when <a> is deleted, how do you know <b> at the time of the signal? The feature request was to effectively get the result of select ?b { <a> nie:isLogicalPartOf ?b } when <a> is deleted, even though the client can no longer run that query (the triple is already gone).
With the new signal we’ll give you the ID of <b> when <a> is deleted. We’ll also implement a tracker:uri(integer id) allowing you to get <b> out of that ID. It’ll do something like this, but much faster: select ?subject { ?subject a rdfs:Resource . FILTER (tracker:id(?subject) IN (%d)) }
I know there will be people screaming for all objects, also literals, in the signals, but we don’t want to flood your D-Bus daemon with all that data. Scream all you want. Really, we don’t. Just do a roundtrip query.
The support for domain-specific indexes is finished and awaiting review. We can further optimize it now, though; more on that later in this post. Imagine that you have this ontology:
nie:InformationElement a rdfs:Class .

nie:title a rdf:Property ;
	nrl:maxCardinality 1 ;
	rdfs:domain nie:InformationElement ;
	rdfs:range xsd:string .

nmm:MusicPiece a rdfs:Class ;
	rdfs:subClassOf nie:InformationElement .

nmm:beatsPerMinute a rdf:Property ;
	nrl:maxCardinality 1 ;
	rdfs:domain nmm:MusicPiece ;
	rdfs:range xsd:integer .
With that ontology there are three tables in SQLite’s schema, called “Resource”, “nmm:MusicPiece” and “nie:InformationElement”: “Resource” holds the IDs, “nie:InformationElement” has a “nie:title” column, and “nmm:MusicPiece” has a “nmm:beatsPerMinute” column.
That’s fairly simple, right? The problem is that when you ORDER BY “nie:title” you cause a full table scan on “nie:InformationElement”. That’s not good, because there are fewer “nmm:MusicPiece” records than “nie:InformationElement” ones.
Imagine that we do this SPARQL query:
SELECT ?title WHERE { ?resource a nmm:MusicPiece ; nie:title ?title } ORDER BY ?title
We translate that, for you, to this SQL on our schema:
SELECT "title_u" FROM ( SELECT "nmm:MusicPiece1"."ID" AS "resource_u", "nie:InformationElement2"."nie:title" AS "title_u" FROM "nmm:MusicPiece" AS "nmm:MusicPiece1", "nie:InformationElement" AS "nie:InformationElement2" WHERE "nmm:MusicPiece1"."ID" = "nie:InformationElement2"."ID" AND "title_u" IS NOT NULL ) ORDER BY "title_u"
OK, so with support for domain indexes we change the ontology like this:
nmm:MusicPiece a rdfs:Class ;
	rdfs:subClassOf nie:InformationElement ;
	tracker:domainIndex nie:title .
Now we’ll still have the same three tables, “Resource”, “nmm:MusicPiece” and “nie:InformationElement”, in SQLite’s schema. But “nmm:MusicPiece” will now also carry its own “nie:title” column.
The same data, for titles of music pieces, will be in both “nie:InformationElement” and “nmm:MusicPiece”. We copy the value into the mirror column while coping with the ontology change, and whenever new inserts happen.
Now, when the rdf:type in the SPARQL query is known to be nmm:MusicPiece, as in the query mentioned earlier, we know that we can take “nie:title” from the “nmm:MusicPiece” table in SQLite. That allows us to generate this SQL query for you:
SELECT "title_u" FROM ( SELECT "nmm:MusicPiece1"."ID" AS "resource_u", "nmm:MusicPiece1"."nie:title" AS "title_u" FROM "nmm:MusicPiece" AS "nmm:MusicPiece1" WHERE "title_u" IS NOT NULL ) ORDER BY "title_u"
A remaining optimization is for when you also explicitly request an rdf:type of which nmm:MusicPiece is a subclass (here nie:InformationElement), like this:
SELECT ?title WHERE { ?resource a nmm:MusicPiece, nie:InformationElement ; nie:title ?title } ORDER BY ?title
It’s still not as bad as before, because “nie:title” is still taken from the “nmm:MusicPiece” table. But the join with “nie:InformationElement” is still needlessly there (we could just generate the earlier SQL query in this case):
SELECT "title_u" FROM ( SELECT "nmm:MusicPiece1"."ID" AS "resource_u", "nmm:MusicPiece1"."nie:title" AS "title_u" FROM "nmm:MusicPiece" AS "nmm:MusicPiece1", "nie:InformationElement" AS "nie:InformationElement2" WHERE "nmm:MusicPiece1"."ID" = "nie:InformationElement2"."ID" AND "title_u" IS NOT NULL ) ORDER BY "title_u"
We will probably optimize this specific use-case further later this week.
The crawler’s modification time queries
Yesterday we optimized the crawler’s query that gets the modification time of files. We use this timestamp to know whether or not a file must be reindexed.
Originally, we used a custom SQLite function called tracker:uri-is-parent() in SPARQL. This, however, caused a full table scan. As long as your SQL table for nfo:FileDataObjects wasn’t too large, that wasn’t a huge problem. But it didn’t scale linearly. I started with optimizing the function itself. It was using a strlen(), so I replaced that with sqlite3_value_bytes(). We only store UTF-8, so that worked fine. It gained me ~ 10%; not enough.
So this commit was a better improvement. First, it makes nfo:belongsToContainer an indexed property. A triple x nfo:belongsToContainer p means that file resource x is in directory p. The commit then changes the query to use that now-indexed property.
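The query now has roughly this shape (illustrative only; nie:url and nfo:fileLastModified are properties from the standard Tracker ontologies, but the exact query in the commit may differ, and the directory URI is a made-up example):

/* Roughly the shape of the crawler's modification-time query after the change:
 * fetch url plus mtime for everything directly inside a given directory. */
static const gchar *CHILDREN_MTIME_QUERY =
        "SELECT ?url ?mtime {"
        "  ?file nfo:belongsToContainer ?dir ;"
        "        nie:url ?url ;"
        "        nfo:fileLastModified ?mtime ."
        "  ?dir nie:url \"file:///home/user/Documents\""   /* example directory */
        "}";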
The original query, before we started with this optimization, took 1.090s when you had ~ 300,000 nfo:FileDataObject resources. The new query takes about 0.090s. It’s of course an unfair comparison, because now we use an indexed property. But adding the index only took a total of 10s for a table of ~ 300,000 rows, and the table can still be queried while we index it (while we insert into it). Do the math: it’s a huge win in all situations. For the SQLite freaks: the SQLite database grew by 4 MB, with all items in the table indexed.
PDF extractor
Another optimization I did earlier was the PDF extractor. Originally, we used the poppler-glib library. This library doesn’t allow us to set the OutputDev at runtime. If compiled with Cairo, the OutputDev is in some versions a CairoOutputDev. We don’t want all images in the PDF to be rendered to a Cairo surface. So I ported this back to C++ and made it always use a TextOutputDev instead. In poppler-glib master this appears to have improved (in git master poppler_page_get_text_page is always using a TextOutputDev).
Another major problem with poppler-glib is the huge amount of string copying on the heap. The time to extract metadata and content text from a 70-page PDF document without any images went from 1.050s to 0.550s. A lot of the original cost was caused by copying strings and by GValue boxing due to GObject properties.
Table locked problem
Last week I improved D-Bus marshaling by using a database cursor. I forgot to handle SQLITE_LOCKED while Jürg and Carlos had been introducing multithreaded SELECT support. Not good. I fixed it; it was causing random “Table locked” errors.
Before
For returning the results of a SPARQL SELECT query we used to have a callback like this. I removed error handling; you can find the original here.
We needed to marshal a database result_set to a GPtrArray because dbus-glib fancies that. That is a lot of boxing of strings into GValue and GStrv, and it does allocations: not good.
static void
query_callback (TrackerDBResultSet *result_set,
                GError             *error,
                gpointer            user_data)
{
	TrackerDBusMethodInfo *info = user_data;
	GPtrArray *values = tracker_dbus_query_result_to_ptr_array (result_set);

	dbus_g_method_return (info->context, values);
	tracker_dbus_results_ptr_array_free (&values);
}

void
tracker_resources_sparql_query (TrackerResources      *self,
                                const gchar           *query,
                                DBusGMethodInvocation *context,
                                GError               **error)
{
	TrackerDBusMethodInfo *info = ...;
	guint request_id;
	TrackerResourcesPrivate *priv = ...;
	gchar *sender;

	info->context = context;

	tracker_store_sparql_query (query, TRACKER_STORE_PRIORITY_HIGH,
	                            query_callback, ..., info,
	                            destroy_method_info);
}
After
Last week I changed the asynchronous callback to return a database cursor. In SQLite that means an sqlite3_step(). SQLite returns const pointers to the data in the cell with its sqlite3_column_* APIs.
This means that now we’re not even copying the strings out of SQLite. Instead, we’re using them as const to fill in a raw DBusMessage:
static void
query_callback (TrackerDBCursor *cursor,
                GError          *error,
                gpointer         user_data)
{
	TrackerDBusMethodInfo *info = user_data;
	DBusMessage *reply;
	DBusMessageIter iter, rows_iter;
	guint cols;
	guint length = 0;

	reply = dbus_g_method_get_reply (info->context);
	dbus_message_iter_init_append (reply, &iter);
	cols = tracker_db_cursor_get_n_columns (cursor);
	dbus_message_iter_open_container (&iter, DBUS_TYPE_ARRAY, "as", &rows_iter);

	while (tracker_db_cursor_iter_next (cursor, NULL)) {
		DBusMessageIter cols_iter;
		guint i;

		dbus_message_iter_open_container (&rows_iter, DBUS_TYPE_ARRAY, "s", &cols_iter);

		for (i = 0; i < cols; i++, length++) {
			const gchar *result_str = tracker_db_cursor_get_string (cursor, i);

			dbus_message_iter_append_basic (&cols_iter, DBUS_TYPE_STRING, &result_str);
		}

		dbus_message_iter_close_container (&rows_iter, &cols_iter);
	}

	dbus_message_iter_close_container (&iter, &rows_iter);
	dbus_g_method_send_reply (info->context, reply);
}
Results
The test is a query on 13,500 resources where we ask for two strings, repeated eleven times. I removed the first repeat from each round, because the first time the sqlite3_stmt still has to be created, which would add a few milliseconds to the measurement. I also redirected standard output to /dev/null to avoid the overhead created by the terminal. The results you see below are the values for “real”.
There is of course an overhead created by the “tracker-sparql” program itself: it does demarshaling using normal dbus-glib. If your application uses DBusMessage directly, it can avoid that overhead. But since I used the same “tracker-sparql” for both rounds, it doesn’t matter for the measurement.
$ time tracker-sparql -q "SELECT ?u ?m { ?u a rdfs:Resource ; tracker:modified ?m }" > /dev/null
Without the optimization:
0.361s, 0.399s, 0.327s, 0.355s, 0.340s, 0.377s, 0.346s, 0.380s, 0.381s, 0.393s, 0.345s
With the optimization:
0.279s, 0.271s, 0.305s, 0.296s, 0.295s, 0.294s, 0.295s, 0.244s, 0.289s, 0.237s, 0.307s
The improvement ranges between 7% and 40% with average improvement of 22%.
Every (good) developer knows that copying memory and boxing, especially when dealing with a large number of items like the members of collections or the cells of a table, are bad for performance.
More experienced developers also know that novice developers tend to focus on just their algorithms to improve performance, while often the single biggest bottleneck is needless boxing and allocating. Experienced developers come up with algorithms that avoid boxing and copying; they master clever, pragmatic engineering and know how to improve algorithms. A lot of newcomers use virtual machines and scripting languages that are terrible at giving you the tools to control this, and then they start endless religious debates about how great their programming language is (as if it matters). (Anti-.NET people, don’t get on your horses too soon: if you know what you are doing, C# is actually quite good here.)
We were of course doing some silly copying ourselves. Apparently it had a significant impact on performance.
Once Jürg and Carlos have finished the work on parallelizing SELECT queries we plan to let the code that walks the SQLite statement fill in the DBusMessage directly without any memory copying or boxing (for marshalling to DBus). We found the get_reply and send_reply functions; they sound useful for this purpose.
I still don’t really like D-Bus as IPC for transferring the query results of Tracker’s RDF store. Personally I think I would go for a custom Unix socket here. But Jürg so far isn’t convinced. Admittedly he’s probably right; he’s always right. Still, D-Bus doesn’t feel like a good IPC for this data transfer to me.
We know about the requests to have direct access to the SQLite database from your own process. I explained in the bug that SQLite3 isn’t MVCC, and that this means your process would often get blocked for a long time by our transactions; a longer time than any IPC overhead takes.
It used to be that in Tracker you couldn’t just change the ontology. When you did, you had to reboot the database, which means losing all the non-embedded data: for example your tags, or other information that’s uniquely stored in Tracker’s RDF store.
This was of course utterly unacceptable and this was among the reasons why we kept 0.8 from being released for so long: we were afraid that we would need to make ontology changes during the 0.8 series.
So during 0.7 I added support for what I call modest ontology changes: adding a class, adding a property. But just that; not changing an existing property. This was sufficient for 0.8, because now we could at least do some changes like adding a property to a class, or adding a new class. You know, making it possible to implement the standard feature requests.
These last two weeks I worked on supporting more intrusive ontology changes. The branch that I’m working on currently supports changing tracker:notify for the signals-on-changes feature, tracker:writeback for the writeback feature, and tracker:indexed, which controls the indexes in the SQLite tables.
Certain range changes are also supported: for example integer to string, double or boolean; string to integer, double or boolean; double to integer, string or boolean. Range changes will of course sometimes mean data loss.
Plenty of code was also added to detect an unsupported ontology change and to ensure that we just abort the process and don’t do any changes in that case.
It’s all quite complex so it might take a while before the other team members have tested and reviewed all this. It should probably take even longer before it hits the stable 0.8 branch.
We won’t yet open the doors to custom ontologies; there are several reasons for that.
But yes, you could say that the basics are being put in place as we speak.
Today after I brought Tinne to the airport I drove around Zürichsee. She can’t stay in Switzerland the entire month; she has to go back to school on Monday.
While driving on the Seestrasse I started counting luxury cars. After I reached two for Lamborghini and three for Ferrari, I started thinking: Zimmerberg, Sihltal and Pfannenstiel must be expensive districts too… And yes, they are.
I was lucky that it was nice weather today. But wow, what a nice view of the mountain tops when you look south over Zürichsee. People from Zürich, you guys are so lucky! What an immensely calming feeling that view gives me! For me, it beats sauna. And I’m a real sauna fan.
I’m thinking of checking out the area south of Zürich, but not the canton itself. I think house prices in the canton of Zürich are just exaggeratedly high. I was thinking of Sankt Gallen, Toggenburg. I’ve never been there; I’ll check it out tomorrow.
Hmmr, meteoswiss forecasts rain for tomorrow. Doesn’t matter.
Actually, when I came back from the airport the first thing I did was fix coping with property changes in ontologies for Tracker. Yesterday wasn’t my day, I think. I couldn’t find this damn problem in my code! And in the evening I lost three chess games in a row against Tinne. That’s a really bad score for me. Maybe after two weeks of playing chess almost every evening, she has gotten better than me? Hmmrr, that’s a troubling idea.
Anyway, when I got back from the airport I couldn’t resist beating the code problem that I couldn’t find on Friday. I found it! It works!
I guess I’m both a dreamer and a realist programmer. But don’t tell my customers that I’m such a dreamer.
Today Tinne and I visited Switzerland’s capital, Bern.
We were really surprised; we’d never imagined that a capital city could offer so much peace and calm. It felt good to be there.
The fountains, the old houses, the river and the snowy mountain peaks give the city an idyllic image.
Standing on the bridge, you see the roofs of all these lovely small houses.
The bear is the symbol of Bern. Near the House of Parliament there was a statue of a bear, and Tinne just couldn’t resist giving it a hug. Bern also has real bears. Unfortunately, Tinne was not allowed to cuddle those.
The House of Parliament is a truly impressive building. It looks out over the snowy mountains, its people and its treasury, the National Bank of Switzerland.
As you can imagine, the National Bank building is a masterpiece as well. And even more impressive: it issues a world-leading currency.
On the market square in Oerlikon we first saw this chess board on the street: black and white stones and giant chess pieces. In Bern there was also a giant chess board, in the backyard of the House of Parliament. Tinne couldn’t resist challenging me to a game of chess. (Edit: Armin noted in a comment that the initial positions of the knight and bishop are swapped. And OMG, he’s right!)
And she won!
At the House of Parliament you get a stunning, idyllic view of the mountains of Switzerland.
At GCDS Jamie told us that he wants to make a plugin for tracker-store that writes all the triples to a CouchDB instance.
Letting a CouchDB be a sort of offline backup isn’t very interesting. You want triples to go into the CouchDB at the moment of guaranteed storage: at commit time.
For the purpose of developing this we provide the following internal API.
typedef void (*TrackerStatementCallback) (const gchar *graph,
                                          const gchar *subject,
                                          const gchar *predicate,
                                          const gchar *object,
                                          GPtrArray   *rdf_types,
                                          gpointer     user_data);

typedef void (*TrackerCommitCallback)    (gpointer     user_data);

void tracker_data_add_insert_statement_callback (TrackerStatementCallback callback,
                                                 gpointer                 user_data);
void tracker_data_add_delete_statement_callback (TrackerStatementCallback callback,
                                                 gpointer                 user_data);
void tracker_data_add_commit_statement_callback (TrackerCommitCallback    callback,
                                                 gpointer                 user_data);
You’ll need to make a plugin for tracker-store and make the hook at the initialization of your plugin.
The current behaviour is that when graph is NULL, the default graph is being used. If it’s not NULL, it means that you probably don’t want the data in CouchDB: it’s data that’s coming from a miner. You probably only want to store data that is coming from the user. The user’s applications won’t use FROM and INTO in their SPARQL Update queries, meaning that graph will be NULL.
It is very important that your callback handler works with bottom halves: put your expensive task on a queue and handle the queued item somewhere else. You can for example use a GThreadPool, or a GQueue plus a g_idle_add_full with a G_PRIORITY_LOW callback picking items one by one on the mainloop. You should never have a TrackerStatementCallback or a TrackerCommitCallback that blocks. Not even a tiny, tiny bit of blocking: it’ll bring everything in tracker-store to its knees. It’s why we aren’t giving you a public plugin API with a way to install your own plugins outside of the Tracker project.
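A minimal sketch of that pattern, using the hooks above and a GThreadPool; couchdb_push is a hypothetical placeholder for whatever actually talks to CouchDB:

#include <glib.h>

typedef struct {
        gchar *subject;
        gchar *predicate;
        gchar *object;
} QueuedStatement;

static GThreadPool *pool = NULL;

static void couchdb_push (const gchar *s, const gchar *p, const gchar *o);   /* hypothetical helper */

/* Runs in a worker thread: this is where the expensive CouchDB work may block. */
static void
worker (gpointer data, gpointer user_data)
{
        QueuedStatement *qs = data;

        couchdb_push (qs->subject, qs->predicate, qs->object);

        g_free (qs->subject);
        g_free (qs->predicate);
        g_free (qs->object);
        g_free (qs);
}

/* Runs in tracker-store's context: copy the strings and get out fast. */
static void
on_insert_statement (const gchar *graph, const gchar *subject,
                     const gchar *predicate, const gchar *object,
                     GPtrArray *rdf_types, gpointer user_data)
{
        QueuedStatement *qs;

        if (graph != NULL)
                return;   /* miner data; we only care about user data here */

        qs = g_new0 (QueuedStatement, 1);
        qs->subject = g_strdup (subject);
        qs->predicate = g_strdup (predicate);
        qs->object = g_strdup (object);

        g_thread_pool_push (pool, qs, NULL);
}

/* In the plugin's init:
 *   pool = g_thread_pool_new (worker, NULL, 1, FALSE, NULL);
 *   tracker_data_add_insert_statement_callback (on_insert_statement, NULL);
 */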
By the way: we want to see code instead of talk before we further optimize things for this purpose.