What I don’t like about it is that it more or less requires each client of the service to keep a local copy of the calendar or contact data in its own memory, or at least all the data that matched the search query. This data is not small per item; in fact, it’s quite large. It never really surprised me that both the calendar component of Evolution’s shell AND the Evolution Data Server consume quite a lot of memory.
This is a little bit silly if you think about it. You give the reason yourself, Ross: it’s a local daemon. Does it really make sense to keep a copy of each item in every client when the daemon that already holds a copy is guaranteed to be local, and therefore quickly accessible?
Does it really make sense on a mobile device with many applications that want to integrate tightly with calendaring and contact information? Your messenger, your E-mail client, your TODO-list app, your desktop search tool and probably your menu too will each be duplicating this data in their own address space.
What I would want is a cursor-like API (like a database cursor). We all know that certain people have been warning us about the extraordinarily, insanely, unimaginably huge roundtrip cost of D-BUS, and that therefore this is absolutely, definitely not a good idea. My opinion is that such a black-and-white point of view is quite … wrong.
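To make that concrete, here is roughly the shape of cursor API I have in mind, sketched as C prototypes. This is just a sketch: every name is hypothetical, and the idea is that each function maps onto a single D-BUS call to the local daemon.

#include <glib.h>

typedef struct _ItemCursor ItemCursor;

/* Run the query at the service; only a small cursor handle crosses the wire. */
ItemCursor  *item_cursor_new     (const gchar *query);

/* Move the finger to the next position in the (service-side) result set. */
gboolean     item_cursor_next    (ItemCursor *cursor);

/* Fetch a small window of items around the finger, in one roundtrip. */
GPtrArray   *item_cursor_fetch   (ItemCursor *cursor, guint count);

/* Tell the service it can drop this session’s state. */
void         item_cursor_dispose (ItemCursor *cursor);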
I remember you being enthusiastic about Polymer, so I’ll use your favourite E-mail client as an example:
Mulberry and Polymer use that incredibly complicated IPC over, for example, GPRS: a network with roundtrips measured in whole seconds. Let me illustrate: sending a query and waiting for its answer sometimes takes longer than standing up from your chair, running a quick lap around it, and sitting back down.
Now that’s a real roundtrip!
Nonetheless, E-mail clients like Mulberry and Polymer use a concept that is similar to a database cursor. The reason they don’t suffer much from roundtrip problems is that they make sure their queries return a carefully measured number of items. On top of that, Polymer pipelines its queries (I’m not sure about Mulberry’s pipelining).
Nonetheless, the concept is very similar, if not identical, to cursor-based data access: you get a finger that points at your current location (I usually call it my iterator). Around that pointer you fetch the things you need “right now”: the things that are visible, for example. Each time you scroll, you do a few iterator.Next(), iterator.Next(), iterator.Next() calls and render what you just got. With GtkTreeModel you can actually do this. If you have a local daemon, and especially if on top of that you fetch a small group of items per roundtrip and do some micro caching, this will be fast enough. Just make a proxy and let your proxy instances keep the real item for a few seconds before disposing of it. Let the tree model hint the proxies, in unref_node, about disposing of the real item.
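Here is a minimal sketch of what such a proxy could look like, in C with GLib. Everything in it is hypothetical: remote_fetch() stands in for the single D-BUS roundtrip (think cursor.Fetch(offset, count)), and a real proxy would keep evicted items around for a few seconds instead of dropping them immediately, as this sketch does for brevity.

#include <glib.h>

#define BATCH_SIZE 20 /* items fetched per roundtrip; tune for your transport */

typedef struct {
  GHashTable *cache; /* index -> item; the micro cache of “real” items */
} CursorProxy;

/* Stub standing in for one D-BUS roundtrip to the local daemon, e.g. a
 * hypothetical cursor.Fetch(offset, count) method returning count items. */
static GPtrArray *
remote_fetch (guint offset, guint count)
{
  GPtrArray *batch = g_ptr_array_new ();
  guint i;
  for (i = 0; i < count; i++)
    g_ptr_array_add (batch, g_strdup_printf ("item-%u", offset + i));
  return batch;
}

/* Return the item at index, fetching a whole window in one roundtrip on a
 * cache miss: scrolling costs one roundtrip per BATCH_SIZE rows, not per row. */
static const gchar *
cursor_proxy_get (CursorProxy *proxy, guint index)
{
  const gchar *item = g_hash_table_lookup (proxy->cache, GUINT_TO_POINTER (index));
  if (item == NULL)
    {
      GPtrArray *batch = remote_fetch (index, BATCH_SIZE);
      guint i;
      for (i = 0; i < batch->len; i++)
        g_hash_table_insert (proxy->cache, GUINT_TO_POINTER (index + i),
                             g_ptr_array_index (batch, i));
      g_ptr_array_free (batch, FALSE); /* items are now owned by the cache */
      item = g_hash_table_lookup (proxy->cache, GUINT_TO_POINTER (index));
    }
  return item;
}

/* Hinted from the tree model’s unref_node: the row scrolled out of view,
 * so the cached copy of the real item can be disposed of. */
static void
cursor_proxy_release (CursorProxy *proxy, guint index)
{
  g_hash_table_remove (proxy->cache, GUINT_TO_POINTER (index));
}

int
main (void)
{
  CursorProxy proxy;
  proxy.cache = g_hash_table_new_full (g_direct_hash, g_direct_equal,
                                       NULL, g_free);
  g_print ("%s\n", cursor_proxy_get (&proxy, 42)); /* one roundtrip */
  g_print ("%s\n", cursor_proxy_get (&proxy, 43)); /* cache hit, no roundtrip */
  cursor_proxy_release (&proxy, 42);
  g_hash_table_destroy (proxy.cache);
  return 0;
}

The batch size is the knob here: bigger batches mean fewer roundtrips, but more memory held at the client.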
Net result: only the visible ones and the cached ones will be in the memory of the client.
This does mean doing the sorting at the service, and therefore keeping a session at the service, too. If you were to write a database backend, you could actually integrate this cursor-over-IPC with a cursor on the SQLite table. Your database engine just needs to support multiple simultaneous cursors.
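As an illustration of the service side, here is what that could look like with SQLite; the contacts.db file and its schema are invented for the example. A prepared statement is effectively a cursor: sqlite3_step() walks the sorted result set one row at a time, and you can keep several statements open on one connection, one per client session.

#include <stdio.h>
#include <sqlite3.h>

int
main (void)
{
  sqlite3 *db;
  sqlite3_stmt *cursor;

  if (sqlite3_open ("contacts.db", &db) != SQLITE_OK)
    return 1;

  /* The sorting happens here, at the service, once per session; the
   * prepared statement is the cursor for that session. */
  if (sqlite3_prepare_v2 (db,
        "SELECT uid, full_name FROM contacts ORDER BY full_name",
        -1, &cursor, NULL) != SQLITE_OK)
    return 1;

  /* A client’s Fetch(offset, count) maps onto `count' steps; SQLite only
   * materialises the row the cursor is currently standing on. */
  while (sqlite3_step (cursor) == SQLITE_ROW)
    printf ("%s\n", (const char *) sqlite3_column_text (cursor, 1));

  sqlite3_finalize (cursor);
  sqlite3_close (db);
  return 0;
}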
Net result: only the visible ones and the cached ones will be in the memory of the client, and the service will be using a cursor too, which has memory usage similar to mmap: only the data you are using right now is actually loaded. These database engines are usually really good at making sure you still get that data really fast.
ps. It’s less important to push updates to the clients when events happen, as long as the cursor at the service gets updated. The next scroll event will do fresh Next(), Next(), Next() calls, and those will already return the updated data. You can of course still provide an “I’ll call you” mechanism to make the view update itself even faster.
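If you do want that mechanism, the client side could be as small as this. I’m using GLib’s GDBus API here for brevity, and the bus name, interface, object path and signal name are all made up.

#include <gio/gio.h>

/* Reacting to the optional “I’ll call you” signal: throw away any cached
 * rows and redraw; the next Fetch() will return the updated items. */
static void
changed_cb (GDBusConnection *connection, const gchar *sender_name,
            const gchar *object_path, const gchar *interface_name,
            const gchar *signal_name, GVariant *parameters,
            gpointer user_data)
{
  g_print ("cursor changed at the service; invalidate cache, redraw view\n");
}

int
main (void)
{
  GDBusConnection *conn = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, NULL);
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);

  if (conn == NULL)
    return 1;

  /* All names below are hypothetical. */
  g_dbus_connection_signal_subscribe (conn,
      "org.example.CalendarDaemon",   /* sender */
      "org.example.Cursor",           /* interface */
      "Changed",                      /* signal */
      "/org/example/cursor/0",        /* object path */
      NULL, G_DBUS_SIGNAL_FLAGS_NONE,
      changed_cb, NULL, NULL);

  g_main_loop_run (loop);
  return 0;
}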
Whoa, this is probably going to unleash some sort of huge flamewar. Let’s try to keep it polite in the comments, okay?