Cheerleading Anjuta 2.2.0

I would, out of the blue, like to highlight the work that the people behind the Anjuta 2.x series have done. I have been working with whatever is in Subversion (and before that, CVS) for multiple years now. It has had its weeks of total instability, and weeks of total stability too. But on features, especially on features, it has seen vast improvements. It’s a totally different experience when compared to the 1.x series of Anjuta.

pvanhoof@schtrumpf:~/repos/gnome/anjuta$ sudo make install
...
 /usr/bin/install -c -m 644 'anjuta.desktop' '/opt/anjuta//share/applications/anjuta.desktop'
make[2]: Leaving directory `/home/pvanhoof/repos/gnome/anjuta'
make[1]: Leaving directory `/home/pvanhoof/repos/gnome/anjuta'
pvanhoof@schtrumpf:~/repos/gnome/anjuta$ 

They are now working on integrating some bits and pieces with Scratchbox. The integration with autotools (which utilizes gnome-build) is unimaginably good. I’ve become a total addict of features like the symbol browser and the symbol searcher. I’m sure I’m missing quite a lot of other features that could potentially glue and duct-tape me forever to Anjuta’s 2.x. A requirement for any distribution that I’ll use has been, for more than a year now, that I can easily build whatever I find in Anjuta Subversion -> “apt-get install gnome-devel”. Thank you, Ubuntu and Debian, for that.

I believe we need more of our developers to use this project. That way we can get this project to become a product, too, because frustrated developers will fix Anjuta’s code. In contrast, I haven’t seen a lot of GNOME developers’ frustration get converted into code for the Eclipse CDT project. I’m confident that, since Anjuta is purely GNOME/Gtk+-style development, things would be different with Anjuta.

I often got so pissed at whoever broke the search dialog of Anjuta’s Subversion this time, forcing me to go back to ‘grep’ and ‘find’ to find things. Slowing me down. But then again, I should just help them instead. Right? Instead of remaining pissed, I’ll truly apologize for being a user of their software yet not having contributed a lot to their fabulous 2.x series. On the bright side, I did do some work on the file dialog of the 1.x series of Anjuta. That code has now vanished and been replaced by superior things, of course.

I’ll put an icon on my desktop that throws Anjuta’s process into gdb. As a developer, you’d be surprised how often you click it and then actually fix that annoying bug instead of just restarting the software. I used to do this with Evolution too.

Stories from the land of a Tinymail release

We’re getting there, although I’m thinking in months, not in weeks. If you’re working on a feature for Tinymail and you want it in the first release, you’d better start hurrying up.

It’s becoming a product that just works. A lot of the release work will be low-hanging fruit like getting gtk-doc in perfect shape, making sure ‘make distcheck’ does the right thing, killing a few more major defects and writing some more documentation.

When those items are finished, the project can be delivered as a product. Above all, it must earn the status “product”. That’s the point where I’ll release a first version.

Stories from the land of Codethink

Rob (Taylor) used to be one of the lead developers of the Telepathy project; he co-founded Collabora and is now starting up Codethink Ltd. He has worked on HAL, D-BUS, Telepathy and various other bits and bytes of the many GMAE and GNOME components.

Although he is not the only participant in this, I believe he played an important role in encouraging a number of companies to work with us, the GNOME Free Software community. He combined his technical expertise with business expertise.

It’s therefore my opinion that Rob has become one of our most skilled and needed people. At LinuxTag we agreed to start working together under Codethink’s flag. I will be the first contractor to come on board under the new Codethink plan.

With our expertise we are planning to change the mobile landscape. We’re both passionate people whose goal is to make a difference, which is exactly what we’ll do.

Today’s new Evolution release

The Evolution maintainer announced a new release. This time the release fixes a significant security problem.

The problem is remotely exploitable. I strongly suggest that everybody update his or her Evolution setup, even if your Evolution package is incredibly old. I think nearly all versions of Evolution are affected.

Evolution-data-server:
=====================
...
#447414: Security Fix - negative index of an array (Philip Van Hoof)

Because competing is necessary and fun

At LinuxTag Rob and I met and talked quite a lot with some Qt and KDE guys. We somehow came to the conclusion that a combined conference for our communities would be something positive. We concluded that the majority of people in both communities who truly matter (the people who do things, not the people who only talk about things) enjoy the competition between GNOME and KDE. That they love the other guys. That competing with each other gives both projects’ members a reason to keep innovating their own project. Yet the same competition should focus on being the best at what we do, and it should not make life difficult for the many companies and users that use our infrastructure.

Therefore we agreed that as much as possible of the D-BUS API should be shared between the KDE and the GNOME projects.

Our conclusion was that we need more social networking for this synergy to happen. We concluded that both Akademy and GUADEC share the same conference atmosphere and goals: they are both heavily oriented towards meeting each other.

I would therefore, hereby, like to propose using the FOSDEM conference to pioneer the idea of perhaps, in the future, having the GUADEC and Akademy conferences take place at the same time and location. Note that in this idea neither GUADEC nor Akademy would change names, nor would they merge into one big conference. Note also that it’s only an idea and that I think, for it to succeed, it needs a pilot event. FOSDEM can be used for this.

We could play with the idea of having a few shared social events, like a game of soccer. Maybe making music together (as that was one of the greater ideas at GUADEC last year)? Social events with an emphasis on cooperating yet competing? Friendly competition!

I’m planning to ask the FOSDEM organisation to, instead of two rooms, have one big room for both GNOME and KDE people. Or two rooms, with the presentations mixed equally across both.

I hope I will be flooded by opinions coming from both sides, as this is probably a hard decision. Though I believe that both communities need to do this for our users and for the companies who have to deal with our differences.

I would like to state, clearly, that as a GNOME community member: I love KDE, I love the people who do KDE, I clearly disagree with a lot of their strategies and architecture ideas, and I totally love the competition that got created by this disagreement. Let’s strengthen both and get them to share as much D-BUS API as humanly possible. Yet let’s keep competing with each other by trying to provide the best implementation behind those APIs.

Let’s ignore the people who think that we are duplicating efforts, as we know we are not.

Let’s be friends who meet each other often and who have respect for each other. Let’s agree to disagree, but let’s agree on the things that we can easily agree on: D-BUS API.

Finally and most importantly, let’s not just talk and have good intentions but let’s also make it happen. Let’s compete in style.

Adorable!

Look at those cute little devices! All doing Push E-mail with a Tinymail-based E-mail client. All at the same time! A Nokia N800, a Nokia 770, my Feisty desktop and an OLPC. Next on the list … the iLiad and Openmoko?

Video demo, Youtube version

Fixed all default platform builds

Just like with the OLPC build, I have fixed the builds for GPE and the Maemo builds for older distributions like Mistral and Gregale (the rootstraps in Scratchbox that’ll compile for your Nokia 770). The build for Maemo’s Bora rootstrap (for the Nokia N800) and the one for GNOME desktops were of course already functional.

On all these devices and platforms (N800, 770, OLPC, GPE) you can now compile yourself, out of the box, a Tinymail demo user interface with the latest features. I will bring a few such devices with me to LinuxTag. Although very unfinished, I can also demo a recent development version of Modest running on my N800, which, by the way, is becoming more and more useful as fine dog food. The GPE and GPEPhone folks told me they will bring some GPE devices too (it won’t take long before we’ll have the Tinymail demo ui running on all of them).

:-(

~# apt-cache search empathy
~#

Fixed the OLPC build of Tinymail

While preparing my OLPC laptop for LinuxTag, I noticed some glitches in the OLPC build of Tinymail. These have now all been fixed and the demo-ui will run and work as-is. You will of course need to edit your $HOME/.xinitrc to override the “exec sugar” with an “exec xterm”, then start matchbox-windowmanager and the Tinymail demoui binary.

You can also try to use the development console, which is accessible by typing the [alt]+[=] key combination. But I found it quite hard to make the application usable this way (it gets pushed to the back by the Sugar window manager or something).

The demoui is indeed not suitable as a typical Sugar application, nor has it been made suitable as one yet. I would of course welcome attempts at this.

To compile do this on any typical x86 computer:

./autogen.sh --with-platform=olpc --prefix=/opt/tinymail-olpc
make && sudo make install

Now tar /opt/tinymail-olpc and untar it into the same directory on your device’s root filesystem.

After deployment, follow these instructions for creating an account, and in the xterm just launch /opt/tinymail-olpc/bin/demoui.

It’ll ask you for the password each time. I haven’t yet found out which password storage API is going to be used on the laptop, so the platform-specific implementation simply pops up a GtkDialog that asks you for the password each time.
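To give an idea of how small that is, here is a minimal sketch of such a dialog in plain GTK+. The function name and the surrounding main() are made up for illustration; this is not the actual platform code, just its shape:

#include <gtk/gtk.h>

/* Hypothetical helper: pops up a modal dialog with a hidden-text entry and
 * returns the password, or NULL on cancel. Nothing is ever stored. */
static gchar *
ask_password (const gchar *prompt)
{
	GtkWidget *dialog, *entry;
	gchar *password = NULL;

	dialog = gtk_dialog_new_with_buttons (prompt, NULL, GTK_DIALOG_MODAL,
	                                      GTK_STOCK_CANCEL, GTK_RESPONSE_REJECT,
	                                      GTK_STOCK_OK, GTK_RESPONSE_ACCEPT,
	                                      NULL);
	entry = gtk_entry_new ();
	gtk_entry_set_visibility (GTK_ENTRY (entry), FALSE);  /* hide the typed text */
	gtk_box_pack_start (GTK_BOX (GTK_DIALOG (dialog)->vbox), entry, TRUE, TRUE, 6);
	gtk_widget_show_all (dialog);

	if (gtk_dialog_run (GTK_DIALOG (dialog)) == GTK_RESPONSE_ACCEPT)
		password = g_strdup (gtk_entry_get_text (GTK_ENTRY (entry)));

	gtk_widget_destroy (dialog);

	return password;  /* caller g_free()s it */
}

int
main (int argc, char **argv)
{
	gchar *password;

	gtk_init (&argc, &argv);

	password = ask_password ("IMAP password");
	g_print ("got %s\n", password ? "a password" : "nothing");
	g_free (password);

	return 0;
}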

Time for the GPE team to seek legal counsel?

Although the GPE project authors and contributors recently moved their hosting needs to linuxtogo, some handhelds.org administrators believe that they personally own the projects that are or were hosted there. A person who goes by the name France George decided to trademark GPE, Opie and IPKG.

Unluckily, Mr. France has already started acting on these trademarks, even though they have not been assigned yet:

  • The OpieII project had to change its name.
  • Contributors were threatened and urged to give up the name GPE.
  • The GPE IRC channel (#gpe) at freenode.net was hijacked.
  • Freenode staff members were threatened when they decided to give #gpe back.

I hope free software supporters with legal knowledge from all over the world will offer their skills to the people who worked on the many excellent GPE components. Let’s not allow people to steal project names.

Florian Boor, one of the GPE developers, blogged in “Threatened, how do we protect our projects” the open question to all of you: How do we get the affected projects out of this situation? Or, maybe even more importantly: how can we reduce the risk of something like this happening again?

Distros, indeed …

Hey Benjamin, good that you mention “creativity” and Mozilla in the same blog post.

I’m sure the people at Mozilla and the distributions can come up with a slightly better solution to this current situation, where you have to do funny things in your configure.ac to figure absurd things out. Things like having to search for which of the Mozilla development packages are available: a developer can choose between nss.pc, mozilla-nss.pc, firefox-nss.pc and xulrunner-nss.pc. I’m sure there are a few more too. Like nspr.pc, mozilla-nspr.pc, firefox-nspr.pc and xulrunner-nspr.pc. And like xpcom.pc, mozilla-xpcom.pc, firefox-xpcom.pc and xulrunner-xpcom.pc. And of course also gtkmozembed.pc, mozilla-gtkmozembed.pc, firefox-gtkmozembed.pc and xulrunner-gtkmozembed.pc. Just to make sure our configure.acs will be bloated like crap just for figuring out what the system actually has installed for those libraries.

I wonder what the use of pkg-config is if everybody starts doing that, Mozilla team?

For some fun reason they also decided to put -rpath in some of their “Libs:” lines. Making it totally fun for the maintainer to figure out why certain symbols can’t be found in a lot of circumstances (for example, when a function that used to be static becomes non-static, those rpath tricks will trick your linker into using the installed libraries located in the prefix location if you once did a “make install”).

Awesome things if you don’t know that and expect sanity from pkg-config configuration files.

Porting existing Evolution Camel providers to Tinymail

Tinymail’s camel-lite is still being developed extensively. While making changes, though, I made sure that the API that once got implemented by some existing custom Camel providers stays more or less the same. Myself, I added support for a bunch of Lemonade and related features and greatly reduced the bandwidth consumption of the IMAP provider, while adding support for summaries in the POP provider. These changes included a huge amount of bug fixing and other improvements too, of course, especially on memory consumption. The majority of the changes, though, happened in the implementation, not so much in Camel’s API.

Although an existing Camel provider usually needs a lot of bandwidth improvements before being acceptable for Tinymail’s target audience (I looked at a lot of them; they do), you can very quickly make any such existing provider work with Tinymail. It comes down to rewriting one method: CamelFolderSummary::message_info_load. Note that even CamelFolderSummary::message_info_save stays the same. The load one needs to be rewritten because it’s in this method that, instead of read()-ing the data from the FILE*, you’ll have to get the pointers from the mmap(). At entry of the method you’ll get the offset where the data that you wrote to the file using CamelFolderSummary::message_info_save starts. You are responsible for moving that offset further (for example, += it).

The file I/O API of Camel has been modified to always do data padding, which is necessary for some architectures with mmap(), and to always end strings with a ‘\0’, which is of course important when pointing to offsets and using the pointer as a C string. You’ll always do it right if you use the standard Camel file I/O API.
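To make that concrete, here is a small standalone sketch of the pattern. The names are made up: a real message_info_load gets its mmap() pointer and offset from Camel and reads exactly the fields that message_info_save wrote, in the same order:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Read one '\0'-terminated string at *offset and move the offset past it.
 * This is what a ported message_info_load does per field, instead of
 * read()-ing the field from a FILE*. */
static const char *
load_string (const char *map, size_t *offset)
{
	const char *str = map + *offset;
	*offset += strlen (str) + 1;  /* skip the terminating '\0' too */
	return str;
}

int
main (int argc, char **argv)
{
	struct stat st;
	const char *map;
	size_t offset = 0;
	int fd;

	if (argc < 2)
		return 1;

	fd = open (argv[1], O_RDONLY);
	fstat (fd, &st);
	map = mmap (NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);

	/* Pretend the file contains "uid\0subject\0", as message_info_save
	 * would have written it through Camel's file I/O API. */
	printf ("uid: %s\n", load_string (map, &offset));
	printf ("subject: %s\n", load_string (map, &offset));

	munmap ((void *) map, st.st_size);
	close (fd);

	return 0;
}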

I created a trac wiki page in the Tinymail documentation describing how to do this. It has some examples too. On bandwidth it’s of course vital to understand that a lot of Tinymail-based E-mail clients will be used on flaky networks like GPRS. Dealing with being offline (being very good at synchronization, while trusting that the connection will die at totally unexpected locations in your code) and dealing well with timeout situations is of course very important. Compression and SSL are vital too.

Let’s start the next big thing in Tinymail, shall we?

Yes me shall.

Most E-mail clients want some sort of queuing mechanism. Not surprisingly, every developer has the desire to implement his own queue thing. By itself that’s not a big problem, as long as they create a wrapper for their queue that implements my interface.

After that, I have some guarantees about that implementation that I can use to develop higher-level functionality. Guarantees like the availability of a specific API. I need to fine-tune the Design By Contract clauses for this a little bit more, but these queue interfaces are well documented and defined by contract (and are going to become even stricter).

A few months ago I announced this AsyncWorker thing. A normal person would maybe simply require that library and assume that all developers will bend over and let me put my AsyncWorker … where the sun doesn’t shine? That’s not Tinymail’s concept though: flexibility and being adaptive to Change is.

Although I added a libtinymail-asyncworker library that provides a default queue implementation, you can implement your own. I believe a lot of E-mail clients will actually want to do that: maybe they want very tight integration with, and a lot of control over, their queue? I can imagine that.

The trick, though, is to build a bunch of standard queue functionalities yet make it possible for them all to share the same top-level queue. For that, I used the decorator design pattern. All the default queues, except the asyncworker one, implement the queue interface and aggregate one too. Usually that will be a base queue implementation, like the asyncworker one, but it can also be any of the other queues (nested situations, why not).
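Here is a tiny sketch of that decorator shape in plain C, with made-up names (this is not Tinymail’s actual TnyQueue API, just the structure of it):

#include <stdio.h>

typedef struct _Queue Queue;
struct _Queue {
	void (*add) (Queue *self, const char *task, int priority);
};

/* Base queue: in Tinymail this role is played by the asyncworker-backed
 * implementation; here it just prints what it would enqueue. */
typedef struct { Queue iface; } BaseQueue;

static void
base_queue_add (Queue *self, const char *task, int priority)
{
	(void) self;
	printf ("enqueue \"%s\" (priority %d)\n", task, priority);
}

/* Decorator: it implements the Queue interface itself, but forwards to the
 * queue it aggregates, bumping the priority because "act" tasks must beat
 * the background get-message tasks sharing the same top-level queue. */
typedef struct { Queue iface; Queue *decorated; } ActQueue;

static void
act_queue_add (Queue *self, const char *task, int priority)
{
	ActQueue *act = (ActQueue *) self;
	act->decorated->add (act->decorated, task, priority + 10);
}

int
main (void)
{
	BaseQueue base = { { base_queue_add } };
	ActQueue  act  = { { act_queue_add }, &base.iface };

	base.iface.add (&base.iface, "get full message in the background", 0);
	act.iface.add  (&act.iface,  "show the clicked message in the view", 0);

	return 0;
}

The point is that the act queue never needs to know how the base queue works internally; it only needs the shared interface, which is why nesting decorated queues comes for free.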

I’m starting with a queue that gets full messages in the background, one that acts right after a full message is received, and one for sending messages.

The one that acts will for example (depending on your MUA’s policy) share a top-level queue with the one that gets full messages in the background. It will have tasks with a higher priority, though: acting can for example mean “display that message in the message view”. While perhaps messages are being queued for retrieval? While Push E-mail events cause get-message tasks to get added to the queue?

The thing is, Push E-mail right now will only get the summary info of a message. But if that pushed E-mail contains a calendar item, you’ll need to fetch it if your application wants to do something interesting with the calendar content. Like playing a different ring-tone if it’s a calendar item coming from your boss? Your application can in that case use a queue that acts on the message right after the queued task has received it in full.

You guys remember that Push E-mail events can be caught using a TnyFolderObserver, right? An implementation of your folder observer’s update method could be “put a task on the act queue that will add the calendar item in the E-mail to the device’s calendar” once the task has finished receiving the message. That is already possible with Tinymail today, yes.
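In plain C and with made-up names, that flow looks roughly like this (the real thing would be a TnyFolderObserver implementation; the act queue is reduced to a single stub here):

#include <stdio.h>
#include <string.h>

typedef struct {
	const char *uid;
	const char *content_type;   /* of the interesting MIME part */
} HeaderInfo;

/* Stub standing in for "put a task on the act queue": the real task would
 * first fetch the full message and only then run its action. */
static void
act_queue_add_fetch_then (const char *uid, void (*action) (const char *uid))
{
	printf ("queued: fetch %s in full, then act on it\n", uid);
	(void) action;
}

/* The action: add the calendar item found in the E-mail to the device's
 * calendar (and maybe play that special ring-tone). */
static void
add_calendar_item (const char *uid)
{
	printf ("adding calendar item from %s to the device calendar\n", uid);
}

/* What the observer's update method would do for each header that the
 * Push E-mail event added to the folder. */
static void
on_header_added (const HeaderInfo *header)
{
	if (strcmp (header->content_type, "text/calendar") == 0)
		act_queue_add_fetch_then (header->uid, add_calendar_item);
}

int
main (void)
{
	HeaderInfo header = { "INBOX/1234", "text/calendar" };
	on_header_added (&header);
	return 0;
}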

I’m writing documentation right now. A bunch of it is already available, but the best page to start is this one: TnyQueue and TnyQueueTask. I need to adapt my gtk-doc scripts to enable adding the API to the API reference manual of Tinymail. All of the API is already documented fully, though. The script just isn’t yet adding it to the gtk-doc build.

Note that the fact that some other E-mail clients don’t have this amount of flexibility for queuing just means that they are too limited for mobile applications. Mobile applications need to raise events that wake the person up. I believe that Push E-mail is just the beginning of all this. The future will tell, of course.

And Tinymail will be ready for that.

Maemo roadmap

It has a very interesting item in the Applications section!

The project’s status on docs

I have written this a few times already, but documentation is something that needs to be repeated over and over again. Until it bores you so much that you will do it just so that the idiot who keeps going on about it will stop. Exactly.

For that reason I keep repeating it so often to myself that I will get bored of hearing myself repeat that it’s extremely important. In the end it’s all about self-induced psychological tricks, indeed. Forcing yourself, forcing myself.

Today I started cross-referencing Tinymail’s manuals and documentation. I for example created links to the API reference manual and started pages with some minimal documentation about types being mentioned in a manual.

My opinion is that any software is pure vaporware until it’s fully documented for at least its target audience. This means that Tinymail will be vaporware until its documentation is more than perfect (as my goal for the project is to get it better than the current best). Undocumented software by definition sucks in my opinion. Doing free software isn’t a valid reason for having no documentation.

Unit testing, though, has been slipping a little bit (luckily not completely). I’m hoping to see GNOME Buildbot become ready and integrated with GMAE, so that it will psychologically force me and other contributors into making sure that it never slips again.

Status information while retrieving stuff with a Tinymail E-mail client

These last few days I have been working together with Murray on, among other things, the status infrastructure in the Tinymail framework.

We simplified how status callbacks are dealt with internally, and we made sure that they happen in the right order and won’t happen after the final callback has fired.
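To illustrate what the client side of this tends to look like, here is a sketch of a status callback driving a GtkProgressBar. The callback signature is an assumption made up for this example (check Tinymail’s headers for the real types), and I’m assuming the callbacks arrive in the GTK+ main loop:

#include <gtk/gtk.h>

typedef struct {
	GtkProgressBar *bar;
} StatusUi;

/* Illustrative status callback: the signature is made up for this sketch,
 * not copied from Tinymail's headers. Assuming status callbacks are
 * delivered in the GTK+ main loop, the widget can be updated directly. */
static void
on_status (GObject *sender, gdouble fraction, const gchar *message,
           gpointer user_data)
{
	StatusUi *ui = user_data;

	(void) sender;
	gtk_progress_bar_set_fraction (ui->bar, fraction);
	gtk_progress_bar_set_text (ui->bar, message);
}

int
main (int argc, char **argv)
{
	GtkWidget *window;
	StatusUi ui;

	gtk_init (&argc, &argv);

	window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
	ui.bar = GTK_PROGRESS_BAR (gtk_progress_bar_new ());
	gtk_container_add (GTK_CONTAINER (window), GTK_WIDGET (ui.bar));
	g_signal_connect (window, "destroy", G_CALLBACK (gtk_main_quit), NULL);
	gtk_widget_show_all (window);

	/* Fake one status update, the way the framework would emit many. */
	on_status (NULL, 0.4, "Retrieving summary", &ui);

	gtk_main ();

	return 0;
}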

I created two video demos this time. The first one is for most people. It shows you how much status you can get out of the framework in case you create a client on top of Tinymail. The video shows you the demo user interface with its ‘glamorous’ :-) progress bar (which shows a lot more status than “some other” E-mail client’s progress user interface).

User video demo, Rescaled one on youtube

The second is a technical video demo for people who want to learn how Tinymail works. Although it’s mentioned, it doesn’t really try to illustrate how to use the API to, for example, update a progress bar and show a status bar. The technical video demo shows you how Tinymail deals with launching status callbacks internally.

Technical video demo

I made the technical one with a normal video camera, which is why the quality isn’t ideal. After doing that one, and not being satisfied, I discovered xvidcap and used it for the user video demo. Its quality is excellent. Therefore I recommend xvidcap for screencasting, and I’m probably going to use this tool more often indeed. Finally, I used Cinelerra to glue audio to it and to cut the “eums”, “swallows” and “grrmms” away.

Streaming APIs

It looks like Alexander is, correctly, making it possible in GVFS to have a GInputStream, a GOutputStream and a GSeekable that have nothing to do with a GFile. It’s therefore possible that, even though it would be a major API and ABI breakage, I will remove TnyStream from Tinymail’s API and adopt GVFS’s types for this in a second Tinymail version. I’m indeed already planning those API breakages ahead of time.

I’ll clarify why I think this is important.

In Tinymail a stream can have any source and any destination. For example, a TCP stream that represents a connection with an IMAP server. A converting stream that gets the encoded MIME part of a message as input and outputs it decoded. A file stream that gets, for example, a decoded MIME part as input and a file as output.

Making assumptions about streams, like that a stream always either originates from or results in a file, would make any of the above situations impossible. The API for a ‘stream’ type therefore shouldn’t assume much about the rest of the virtual file system layer, or assume the availability of, for example, a file descriptor (although I’m perfectly okay with ‘the VFS layer’ as the location for a stream API). In the end I hope to share that stream API with, for example, the implementers of future HTML render components. Then I can ‘stream’ the decoded MIME part to that HTML renderer, and the HTML renderer component can ask for ‘stream instances’ for the inline embedded images (in case the HTML E-mail has attached images being used in the document).

The thing is that efficient mobile E-mail clients will not first download the entire E-mail to, for example, /tmp. Instead they will stream the MIME part on demand straight from the IMAP server to a converting stream that decodes it, and then stream that output straight to the HTML renderer. In the process it’s possible that I’ll put a ‘locally caching stream’ somewhere in between, yes.
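Here is a plain-C sketch of that pipeline with made-up names (Tinymail’s real type is TnyStream; the ‘decoding’ is an identity transform just to keep the example short):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>

typedef struct _Stream Stream;
struct _Stream {
	/* read up to len bytes into buf, return how many were produced */
	ssize_t (*read) (Stream *self, char *buf, size_t len);
};

/* A converting stream: it pulls encoded data from the stream it wraps
 * (here: the "IMAP connection") and hands decoded data to whoever reads
 * from it (here: the "HTML renderer"). A real one would do base64 or
 * quoted-printable; this one just passes the bytes through. */
typedef struct { Stream iface; Stream *source; } DecoderStream;

static ssize_t
decoder_read (Stream *self, char *buf, size_t len)
{
	DecoderStream *dec = (DecoderStream *) self;
	ssize_t got = dec->source->read (dec->source, buf, len);
	/* ... decode buf[0..got) in place here ... */
	return got;
}

/* Stand-in for the IMAP connection: a stream over an in-memory buffer.
 * Note that nothing in the Stream interface knows about files. */
typedef struct { Stream iface; const char *data; size_t pos, len; } MemStream;

static ssize_t
mem_read (Stream *self, char *buf, size_t len)
{
	MemStream *m = (MemStream *) self;
	size_t left = m->len - m->pos;
	size_t n = left < len ? left : len;

	memcpy (buf, m->data + m->pos, n);
	m->pos += n;

	return (ssize_t) n;
}

int
main (void)
{
	MemStream imap = { { mem_read }, "encoded MIME part", 0, 17 };
	DecoderStream decoder = { { decoder_read }, &imap.iface };
	char buf[64];
	ssize_t n;

	/* The "HTML renderer" reads decoded data without ever seeing a file. */
	n = decoder.iface.read (&decoder.iface, buf, sizeof (buf));
	fwrite (buf, 1, (size_t) n, stdout);
	putchar ('\n');

	return 0;
}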

Streaming like that makes it possible not to cache attachments on a device with few disk resources, yet still make them available when the user is online, and to keep it perfectly transparent whether caching attachments locally is to be done and whether it was done in the past. ‘Streaming’ doesn’t necessarily have anything to do with files, yet often does (like when caching certain streamable things, like files, locally).

I hope this is being kept in mind while designing GVFS, though it looks like the current API indeed allows this (so that’s excellent!). I’m definitely looking forward to having GVFS. It’s great to finally see streaming APIs like Java’s and .NET’s being put in place. Very exciting to have this soon.

API docs at 100%

Today Tinymail’s API reference manual covers 100% of the API. This means that there are no undocumented functions being exported by the Tinymail libraries. None.

On desktop data services

A few days ago a few students asked me to help them with designing a better EDS. I know some people are going to hate me for this blog item (because, well, I’m pointing at some flaws in the architecture of some components, and people don’t always want you to identify flaws).

Although I’m not very focused on calendaring and the other things EDS offers, here’s my take on the subject:

Evolution-data-server will, via the notify_matched_object_cb in the ECalData lib, issue a notifyObjectsAdded for all matched objects that your query (ECalView) wants. It seems it’s not doing that for just the ones that recently became visible.

+ e_cal_view_start
|- CalView_start (goes over the IPC)
| - impl_EDataCalView_start
|  + foreach: notify_matched_object_cb
|  |- notifyObjectsAdded (goes over the IPC)
|  | - impl_notifyObjectsAdded
|  |  - g_signal_emit "objects-added"
|  + end foreach
+ end e_cal_view_start

After that you will also receive notifications when an item that matches your query gets removed, changed, added, yadi yada.

Although that sounds reasonable on a desktop, there’s a problem with this on mobiles: unless you limit the query to exactly what you will see in the view, you’ll needlessly transfer a lot of iCal data over the IPC and, worse, you’ll need to store it in the memory of the user interface (the client).

Depending on the backend implementation of EDS, this means that it’s in memory twice. Given that EDS is a locally running service, that’s a little bit stupid (if it were a service running over a slow GPRS connection, I would better understand the need to fully cache everything in the client).

Another reason why you want to keep the memory at the service is that the service is the centralized infrastructure. All clients using it share the same memory. If your clients always need their own copies, you are effectively doubling all memory consumption for calendaring.

Although the user will do queries that are much larger (like: give me all items of this month), a mobile device’s view will most often display only a few calendar items. Which of course makes a good developer think about the other ones: do you really need them in memory at all times? Or is a proxy at the client good enough? A proxy that will get the real thing from the service, by asking a factory, at the time the user starts using it.

Wouldn’t, for example, a cursor-style remote API therefore be much better? The model of the view would get the currently visible item by simply iterating to it, and only then fetching it.

A cursor is quite simple and looks a lot like a C++, Java or .NET iterator indeed:

c = create_cursor (expression)
c.move_next, c.move_prev
c.get_current

The view would get a model that utilizes that cursor efficiently. For example: the view asks the model for the 100th item while the current position of the model is 80, so the model will do c.move_next 20 times and then give the view c.get_current. Finally, the view unreferences the instance as soon as the item is not visible anymore.

That iterator doesn’t have to be implemented using only remote calls. It can be emulated by storing the query result in the service for as long as the cursor is kept alive (or letting it die on a timeout or something), and implementing a get_current that takes a query id and an “nth” parameter. The move_next and move_prev are implemented locally (just keeping a “current nth” or position status as an integer).
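A minimal sketch of that emulation, with made-up names and the remote call reduced to a stub:

#include <stdio.h>

typedef struct {
	int query_id;   /* handle to the result set kept alive in the service */
	int position;   /* the "current nth", purely client-side state        */
} Cursor;

/* Stands in for the remote (D-BUS) call: give me item nth of query query_id. */
static const char *
service_get_nth (int query_id, int nth)
{
	static char item[64];
	snprintf (item, sizeof (item), "calendar item %d of query %d", nth, query_id);
	return item;
}

/* move_next and move_prev never leave the client */
static void cursor_move_next (Cursor *c) { c->position++; }
static void cursor_move_prev (Cursor *c) { if (c->position > 0) c->position--; }

/* get_current is the only call that actually goes to the service */
static const char *
cursor_get_current (Cursor *c)
{
	return service_get_nth (c->query_id, c->position);
}

int
main (void)
{
	Cursor c = { 1, 80 };   /* the model is currently at item 80 */
	int i;

	/* The view asks for item 100: move 20 times locally, fetch once. */
	for (i = 0; i < 20; i++)
		cursor_move_next (&c);

	printf ("%s\n", cursor_get_current (&c));

	(void) cursor_move_prev;
	return 0;
}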

Is this slow? The experience for the user will probably be a lot faster than having to download the entire result set of a query up front. It’s true that performance would suffer when a lot of items are visible, because a lot of c.get_current calls would happen. But then again, most mobile devices have small screens and therefore can’t display a lot of calendar items in a meaningful way to the user anyway.

Also, as a solution for that, you can make a proxy that caches the first 10 characters of the description of each item it has received before. The model can now, instead of returning c.get_current, return a proxy. The view can clear the real item from the proxy once it becomes invisible. If the view is set to display a lot of items, it would only ask for those first 10 characters: only the first time would the proxy need to fetch the real item to fulfil that API. Zooming in, though, would make the view ask the proxy for information that it doesn’t necessarily have (any more), so the proxy would ask the model (or a factory) to do a c.get_current (getting the real item) to fulfil the interface the view expects.

But really. Instead of an implementation like EDS, KDE and GNOME experts should put their heads together and create a D-BUS specification for this. Perhaps one that incorporates that cursor idea?

Clearly, both teams are most likely not going to agree on sharing one implementation soon.

I see frightened people screaming and yelling after I just said that. That’s not necessary. See, guys, dear users, we developers do talk with each other at conferences. We love each other! We love competing! Competing makes both sides better and sharper. Don’t you sometimes do friendly competition with your partner?

With a good specification, we could (and eventually would too) compete on implementation. It’s like agreeing on the rules of a game of Pool with your partner. Or bowling.

That is why I told those students to focus on a very good D-BUS spec. Perhaps do an initial proof-of-concept implementation to test your new D-BUS protocol?

Doing things in parallel, downloading messages while getting summary

A little bit more technical … some people like that, others don’t.

Today I did a cute hack on Tinymail’s embedded Camel, camel-lite: I altered the camel_folder_get_message implementation in such a way that it creates a new CamelImapStore instance.

CamelImapStore is a type that derives from CamelService and holds the connection lock. It also has the pointers to the two CamelStreams that represent access to the socket file descriptor. That is your connection with the IMAP server. The CamelStream abstracts away SSL and yadi yada, but the principle is the same: it’s the store that can only perform one procedure at a time in Camel (and therefore also in Evolution).

In Camel this meant a lot of locking. Regretfully, the IMAP implementation isn’t very fine-grained in its locking (and actually, it sucks a little bit). Nor does it do pipelining or any other such neat tricks. It’s a simple “lock, send query and fetch result, unlock” concept put into practice. I have broken up some procedures, like getting the summary, into smaller queries, by looping until I have all of the summary. During that loop, the locks get released between iterations. A get-message would therefore, in theory, get a chance to occur while the loop is happening in another thread.
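Reduced to its essence, the pattern looks like this (hypothetical names, not camel-lite code): fetch the summary in chunks and drop the connection lock between chunks, so that a get-message running in another thread gets a chance to grab it:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t connection_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fetch the summary in chunks; between two chunks the lock is free, so a
 * get-message running in another thread can take the connection. */
static void
fetch_summary_in_chunks (int total, int chunk)
{
	int start;

	for (start = 0; start < total; start += chunk) {
		pthread_mutex_lock (&connection_lock);
		printf ("fetching summary items %d..%d\n", start + 1, start + chunk);
		pthread_mutex_unlock (&connection_lock);
		/* here another thread can lock the connection and issue its
		 * own command, e.g. downloading a message the user clicked */
	}
}

int
main (void)
{
	fetch_summary_in_chunks (1000, 250);
	return 0;
}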

That theory actually does work in practice. However, it was a little bit difficult to get it to behave absolutely correctly. On top of that, Camel’s “design” is far from perfect. Therefore, instead of endlessly trying to get it right, I decided to do a proof of concept that basically creates a new connection each time you download a message and store it locally in the cache.

The final idea for all this is to have a flexible queue mechanism that, for E-mail clients that want this functionality, will download (new) messages in the background while getting the summary in parallel. And if the user clicks a message, while the summary is being received or afterwards, the queue will get a high-priority item added that will first download the clicked message and then display it in the message view component.

I know that this is the core of a lot of E-mail clients. It’s exactly what I want Tinymail to provide within the framework, as yet another component.

Next to that I will also implement a folder observer that acts on Push E-mail events by putting the request for getting the new E-mail on the queue. All of this will of course be optional behavior: on a GPRS network you specifically don’t want to retrieve all (new) messages. That would consume shitloads of bandwidth and cost you a lot of money. But before going offline, you might want to ask your E-mail client to indeed get all the messages and put them in the offline cache. While it’s doing this, you still want to work normally. And why not? Exactly. That’s why the second-connection proof of concept was done.