Brainstorm

Tighter application integration through dynamic content dispatch and sourcing

Posted on 2009-09-30 16:05 UTC by Tim Edmonds. Status: Under consideration, Categories: User Experience.

Maemo is great at allowing many applications to run concurrently, but those applications do not integrate well with each other. Fundamentally, integration between applications is very rigid, and new applications added to the mix are not recognised by the existing ones. Ideally, such integration would be enabled by dynamic discovery.

As an example, imagine I wanted to write an application that added speech bubbles to photos. For this I would need to use the following interfaces:

  • Sourcing:
    • File Chooser (hildonFM) - to select photo from file
    • Image Browser (does this API exist?) - to select photo graphically
    • Camera (dbus? or gstreamer?) - to take a photo directly
  • Disposing:
    • File Chooser (hildonFM) - to save the photo
    • Email (modest dbus) - to send the photo
    • Share (sharing API) - to upload the photo
    • Desktop (gconf??) - to set my image as desktop wallpaper
    • Contacts (addressbook API) - to add the photo to a contact

All of this is doable, but it is a lot of work when I really just want two operations: "source me a photo" and "dispose of this photo". What is worse, this will not integrate with other new apps (MMS, photoshop?) without changes to my app, nor will any pre-existing app know about my speech bubble app - my app would be poorly integrated. A sketch of the glue this implies today follows below.
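To make the scale of that glue concrete, here is a minimal sketch in Python. None of the helper names below are real Maemo APIs; each one is a hypothetical stand-in for the corresponding hildonFM, camera, modest, sharing, gconf or addressbook call.

    # Hypothetical sketch: one hand-written branch per source and per
    # sink, none of which is discoverable by other applications.

    def _todo(name):
        # Stand-in for the real hildonFM / camera / modest / sharing /
        # gconf / addressbook call that would go here.
        raise NotImplementedError(name)

    SOURCES = {
        "file":    lambda: _todo("hildonFM file chooser (open)"),
        "browser": lambda: _todo("image browser, if such an API exists"),
        "camera":  lambda: _todo("camera via D-Bus or GStreamer"),
    }

    SINKS = {
        "file":      lambda photo: _todo("hildonFM file chooser (save)"),
        "email":     lambda photo: _todo("modest D-Bus send"),
        "share":     lambda photo: _todo("sharing API upload"),
        "wallpaper": lambda photo: _todo("gconf wallpaper key"),
        "contact":   lambda photo: _todo("addressbook API attach"),
    }

    # Every new consumer (MMS, an image editor, ...) means another
    # entry here, and no existing app learns about this app in return.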

We can already see new applications running into this. In one case, THP has introduced "feedhandler", which performs this kind of dynamic disposal for the special case of RSS feeds from the browser.


Solutions for this brainstorm


Solution #1: New dynamic content source and disposal service

Posted on 2009-10-16 10:14 UTC by Tim Edmonds.

What is proposed is a Content Service providing a richer system for sourcing and disposing of content (eg: photos, vcards, mp3s etc), thus allowing an application to leverage the other applications around it based on the content being dealt with.

Servant apps can register with the Content Service, describing the type of content they can source or dispose of using suitable descriptors (including model, UI string etc).

Client apps then make requests to the Content Service to source or dispose of content. These requests may be parameterised with constraints such as mime-type, size, model (eg: edit, view, send, exec), interactiveness etc.

The Content Service sits between client and servant apps and mediates the exchanges based on some Policy, perhaps popping up a suitable UI for user selection as appropriate.  Of course an app can be both a Client and a Servant at the same time.
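Here is a minimal sketch of what that register/request contract could look like, written as an in-process Python mock. None of this is an existing Maemo API; the class name, descriptor fields and matching rules are illustrative only, and a real service would live behind D-Bus and consult the Policy (or the user) rather than simply returning every match.

    import fnmatch

    def _mime_match(a, b):
        # Either side may use a wildcard ("image/*", "*/*"); crude but
        # sufficient for a sketch.
        return fnmatch.fnmatch(a, b) or fnmatch.fnmatch(b, a)

    class ContentService(object):
        def __init__(self):
            self._servants = []   # registered descriptors

        def register(self, app, direction, mime, model, ui_string):
            # A servant declares what it "provides" (can source) or
            # "accepts" (can dispose of), plus a descriptor for the UI.
            self._servants.append(dict(app=app, direction=direction,
                                       mime=mime, model=model,
                                       ui_string=ui_string))

        def request(self, direction, mime, model="ANY", exclude=None):
            # A client either asks for content ("get") or hands content
            # off ("put").  Constraints such as size or interactiveness
            # are omitted here for brevity.
            want = "provides" if direction == "get" else "accepts"
            matches = []
            for s in self._servants:
                if s["direction"] != want or s["app"] == exclude:
                    continue
                if not _mime_match(mime, s["mime"]):
                    continue
                if model != "ANY" and s["model"] not in ("ANY", model):
                    continue
                matches.append(s)
            return matches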

In a way this is similar to part of Android's Intent system.

For the Speech Bubble App example from above, this app would (sketched in code after the list):

  • register "accepts image/* model EDIT" so other apps can see this
  • request "get image/* model ANY" to get a photo from somewhere
  • request "put image/jpeg model ANY exclude ME"  to dispose the image once done

Camera app would:

  • register "provides image/jpeg model CREATE" so other apps can see this
  • request "put image/jpeg model SAVE interactive NO" to save the image
  • request "put image/jpeg model ANY" to use the captured photo

File Manager app would:

  • register "accepts */* model SAVE" so other apps can see this
  • register "provides */* model LOAD" so other apps can see this
  • request "put mime/type model ANY-SAVE" when user selects a file

And so on...
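Finishing the same illustrative sketch: once the Camera and File Manager registrations are added, the service can match requests across apps that know nothing about each other. (The File Manager's "ANY-SAVE" constraint, meaning any model except SAVE, would need a slightly richer matcher than the one sketched, and a real service would let the Policy or a picker dialog choose among the matches.)

    service.register(app="camera", direction="provides",
                     mime="image/jpeg", model="CREATE",
                     ui_string="Take a photo")
    service.register(app="file-manager", direction="accepts",
                     mime="*/*", model="SAVE", ui_string="Save to file")
    service.register(app="file-manager", direction="provides",
                     mime="*/*", model="LOAD", ui_string="Open from file")

    # The speech bubble app's "get image/* model ANY" now finds both
    # the camera and the file manager without either side knowing the
    # other exists; a real service would apply Policy or show a picker.
    for s in service.request("get", mime="image/*", model="ANY"):
        print("%s: %s" % (s["app"], s["ui_string"]))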

