Apparently this hasn’t been echoed enough times. A lot of teams are still wondering what they should use if they want to store RDF metadata in Nepomuk and how to query it.
What happened before
We have been refactoring Tracker’s codebase to bring it into a better state. This work is being released as the Tracker 0.6.9x series. One sentence really isn’t enough to describe the changes, but we can’t keep talking about the past forever. Sorry, guys.
We have introduced support for SPARQL and Nepomuk in Tracker. We also added the class-signals feature, Turtle import & export, and many other features like SPARQL UPDATE support, making the storage engine effectively a generic Nepomuk RDF store that can be used to store and query RDF data.
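To give a flavour of what that makes possible, here is a minimal sketch of a SPARQL query against the store. The nfo:FileDataObject class and the nie:title / nfo:fileName properties come from the standard Nepomuk ontologies; treat the exact terms as illustrative, since the shipped ontology set can vary between releases:

```sparql
PREFIX nie: <http://www.semanticdesktop.org/ontologies/2007/01/19/nie#>
PREFIX nfo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#>

# File names and titles of every indexed file that has a title
SELECT ?name ?title
WHERE {
  ?file a nfo:FileDataObject ;
        nfo:fileName ?name ;
        nie:title ?title .
}
```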
What will happen
We are at this moment planning to rearchitect Tracker a little bit.
Among our plans, we want to make the RDF metadata store standalone. The store keeps your metadata using Nepomuk as the ontology and lets application developers query it in SPARQL. This means it’ll be possible to use the storage service without the indexer even being installed. That is already possible today, but right now the crawling and monitoring happen inside the storage service.
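Because the store also speaks SPARQL UPDATE, applications can push their own metadata into it as well as query it. A minimal sketch, using the Nepomuk Annotation Ontology (nao); the subject URI is invented for the example:

```sparql
PREFIX nao: <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#>

# Create a tag resource; the urn:example: URI is made up for this sketch
INSERT {
  <urn:example:tag:holiday> a nao:Tag ;
      nao:prefLabel "Holiday" .
}
```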
We plan to move the crawling and monitoring to the indexer. One idea is that the indexer will instruct the extractor to do an analysis, and the extractor will then push the extracted metadata to the RDF storage service. That makes the indexer and extractor a provider & consumer like any other, and makes them optional and separately packageable.
This is because we get requests from other teams who don’t want the indexing. Modularizing is usually a good thing anyway, so we now have plans to make this possible as a feature.
Other plans
Other plans that we haven’t thoroughly planned yet include support for custom ontologies. We have a good idea for this, though we want to wait until after the rearchitecting. Support for custom ontologies will include installing ontologies, removing them, and asking for a backup that contains the metadata specific to an installed ontology.
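When that support lands, installing a custom ontology should amount to little more than shipping a Turtle file. Everything in the sketch below (the ex: prefix, the class, and the property) is invented for illustration, not a real ontology:

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix nie:  <http://www.semanticdesktop.org/ontologies/2007/01/19/nie#> .
@prefix ex:   <http://example.org/ontologies/recipe#> .

# A hypothetical domain class, hooked under the Nepomuk core ontology
ex:Recipe a rdfs:Class ;
    rdfs:subClassOf nie:InformationElement .

# Cooking time in minutes for a recipe
ex:cookingTime a rdf:Property ;
    rdfs:domain ex:Recipe ;
    rdfs:range xsd:integer .
```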
Support for custom ontologies doesn’t mean that application developers should all rush off and start making ontologies. I know you guys! Don’t do it! We want applications to reuse as much of the Nepomuk set as possible. The more Nepomuk gets reused, the more interoperability between apps is possible.
Can you clarify how GNOME 3 fits into your plans? I have only had a brief opportunity to skim the various indexing/tagging apps and frameworks out there. They seem like a great idea, but when GNOME 3/Zeitgeist appears it seems to be re-inventing the wheel (as I don’t know which pieces are solving which part of the puzzle). Can you clarify the world of indexing/tagging at a high level?
Great that you’re making Tracker even better. Are there plans to integrate Tracker more deeply into GNOME? E.g. replacing the Search Files app, or integrating it into Nautilus, Rhythmbox, and Evolution?
That way people really start to use Tracker.
yes!!!! XD
Add me too..
I would like to see Tracker with plugins that automatically update the libraries of Amarok, Rhythmbox, or F-Spot. These functions make intensive use of the disk, and it’s a lot of duplicated work..
It would be good if only Tracker did the indexing!!
http://jamiemcc.livejournal.com/10814.html
Any progress on this? It’s of _great_ importance to get tracker/beagle/whatever into a more usable state. Just imagine getting rid of updatedb, all the slow search facilities and replacing them with a fast system-wide indexed search engine. Think in terms of Apple’s Spotlight…
When can we expect a SPARQL-supporting release?
Great work on getting RDF / SPARQL support in.
@Pavel: Pretty soon from now. The master branch of Tracker’s git repository is at this moment the Nepomuk + SPARQL version, which means that we have brought the experimental stuff into the mainline development of Tracker.
@Anders Feder: Thanks. Feel free to join and help us implement your SemanticDesktop ideas, by the way (yes, we’ve read your wiki pages on live. A lot of the ideas are indeed on Tracker’s agenda and plans-to-implement).
@matias: Some of the developers of some of those projects have already contacted us in relation to their storage backends (to replace them with an RDF store like the one we are making). With SPARQL most (or all) of them have a flexible enough query language too. Feel free to contribute to those projects and help with the integration. I’m sure they can use your help, and we will help you perform the task too.
@Jaap: Yes. But our team itself is heavily occupied with making Tracker itself rock solid, fast, lean, etc. Although we will eventually aid the many community projects with integration, we can’t do it for all of them (physically impossible for us; we can’t type code fast enough to please all the wolves who want this). Which means we honestly need *your* help.
@Craig: Zeitgeist’s maintainer is in contact with us. He’s very much interested in integrating with Tracker for his RDF/metadata and querying needs. I’m sure he’s interested in your contributions for this. Talk with them if you are interested in contributing this integration yourself a.s.a.p.
@skp +1 I think this is a really important thing for desktop indexing. This crawling approach feels like an unclean solution to a solvable problem… OS X does this nicely, I believe… Pressure on the kernel devs! If they refuse to do something like this because it can have an impact on server performance, then implement it in a way so it doesn’t do anything unless someone is listening for events. Or server folks can just uncheck the feature when compiling their kernels if they feel it’s necessary.
Hello and thank you for this great software!
Is there any conversation with the Strigi developers (to cooperate or join efforts)?
http://strigi.sourceforge.net/
Greetings
@Javier: Yes, we have integration with strigi’s streamanalyzer library put in place.