Planet maemo: category "feed:43af5b2374081abdd0dbc4ba26a0b54c"

Philip Van Hoof

While trying to handle a bug with a description like “if I do this, tracker-store’s memory grows to 80 MB and my device starts swapping”, we were surprised to learn that a sqlite3_stmt consumes about 5 kB of heap. Ouch.

Before, we didn’t think those prepared statements were very large, so we threw all of them into a hashtable in case the query was run again later. However, if you collect thousands of such statements, memory consumption obviously grows.

We decided to implement an LRU cache for these prepared statements. For clients that access the database using direct-access, the cache will be smaller, so that maximum consumption stays at a few megabytes. Because our INSERT and DELETE queries are more reusable than SELECT queries, we split it into two different caches per thread.

The implementation is done with a simple intrusive linked ring list. We’re still testing it a bit to get good cache-size numbers. I guess it’ll go into master soon. For your testing pleasure you can find the branch here.
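
To make that concrete, here’s a minimal sketch of such a cache in plain C with GLib and the SQLite API. This is not our actual implementation (the names, the structure and the cache size are illustrative, and the real code keeps two caches per thread), just the general shape of an LRU of prepared statements:

#include <glib.h>
#include <sqlite3.h>

typedef struct {
    char         *sql;   /* owned copy of the query text (hash key) */
    sqlite3_stmt *stmt;  /* the prepared statement (~5 kB of heap) */
} CacheEntry;

typedef struct {
    GHashTable *index;     /* sql -> GList* node inside the queue */
    GQueue     *lru;       /* most recently used entry at the head */
    guint       max_size;  /* smaller for direct-access clients */
} StmtCache;

static void
cache_entry_free (CacheEntry *entry)
{
    sqlite3_finalize (entry->stmt);
    g_free (entry->sql);
    g_slice_free (CacheEntry, entry);
}

static StmtCache *
stmt_cache_new (guint max_size)
{
    StmtCache *cache = g_slice_new (StmtCache);

    cache->index = g_hash_table_new (g_str_hash, g_str_equal);
    cache->lru = g_queue_new ();
    cache->max_size = max_size;

    return cache;
}

static sqlite3_stmt *
stmt_cache_get (StmtCache *cache, sqlite3 *db, const char *sql)
{
    GList *node = g_hash_table_lookup (cache->index, sql);
    CacheEntry *entry;

    if (node != NULL) {
        /* Hit: move the entry to the front, hand back a reset statement */
        entry = node->data;
        g_queue_unlink (cache->lru, node);
        g_queue_push_head_link (cache->lru, node);
        sqlite3_reset (entry->stmt);
        return entry->stmt;
    }

    /* Miss: prepare a fresh statement and put it at the front */
    entry = g_slice_new (CacheEntry);
    entry->sql = g_strdup (sql);
    if (sqlite3_prepare_v2 (db, sql, -1, &entry->stmt, NULL) != SQLITE_OK) {
        g_free (entry->sql);
        g_slice_free (CacheEntry, entry);
        return NULL;
    }
    g_queue_push_head (cache->lru, entry);
    g_hash_table_insert (cache->index, entry->sql, cache->lru->head);

    /* Over capacity: finalize the least recently used statement */
    if (g_queue_get_length (cache->lru) > cache->max_size) {
        CacheEntry *victim = g_queue_pop_tail (cache->lru);

        g_hash_table_remove (cache->index, victim->sql);
        cache_entry_free (victim);
    }

    return entry->stmt;
}

On a hit the statement is reset and moved to the front; when the cache overflows, the least recently used statement is finalized, which gives those ~5 kB back.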

Categories: condescending
Philip Van Hoof

We have a feature request to support return types and to give back variable names. We currently return an array (of arrays) of just strings, with no typing. This doesn’t work very well for knowing whether a cell is, for example, unbound: an empty string isn’t the same as unbound. So what can you do?

With direct-access the implementation is easy: we’ll just read it from the SPARQL engine. We have all this info already anyway. For file-descriptor passing over D-Bus we need to marshal it over the protocol.

Although we might come back to this decision in the short term, we won’t yet do it for our “normal” (non-FD-passing) D-Bus query method. SPARQL’s type system is different from D-Bus’s, so we shouldn’t try to match them somehow. Any custom format we’d come up with would be arbitrary.

Maybe someday we’ll add another “normal” D-Bus method that gives you back a big string with SPARQL Query Results in JSON or SPARQL Query Results in XML. Right now this has no priority for us, and it would be a lot slower due to serialization. Post-0.9 everybody should be using libtracker-sparql, which will select either FD passing or direct-access.

Anyway, this will likely be the API for Sparql.Cursor. The methods get_value_type and get_variable_name got added.

public enum Tracker.Sparql.ValueType {
	UNBOUND, URI, STRING, INTEGER,
	DOUBLE, DATETIME, BLANK_NODE
}

public abstract class Tracker.Sparql.Cursor : Object {
	public Connection connection { get; }
	public abstract int n_columns { get; }
	public abstract ValueType get_value_type (int column);
	public abstract unowned string? get_variable_name (int column);
	public abstract unowned string? get_string (int column, out long length = null);
	public abstract bool next (Cancellable? cancellable = null) throws GLib.Error;
	public async abstract bool next_async (Cancellable? cancellable = null) throws GLib.Error;
	public abstract void rewind ();
}
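
From C this would be consumed roughly as below. Consider it a hedged sketch: it assumes the C symbols that Vala generates for the API above (tracker_sparql_cursor_get_value_type() and friends) and a TrackerSparqlConnection from libtracker-sparql; check the generated headers for the exact names.

#include <libtracker-sparql/tracker-sparql.h>

static void
print_results (TrackerSparqlConnection *connection)
{
    GError *error = NULL;
    TrackerSparqlCursor *cursor;
    gint i, n_columns;

    /* OPTIONAL gives us possibly unbound cells, the case the new
     * get_value_type() method is meant to disambiguate */
    cursor = tracker_sparql_connection_query (connection,
                                              "SELECT ?urn ?added WHERE { "
                                              "  ?urn a nie:InformationElement . "
                                              "  OPTIONAL { ?urn nie:contentCreated ?added } "
                                              "}",
                                              NULL, &error);
    if (error != NULL) {
        g_printerr ("Query failed: %s\n", error->message);
        g_error_free (error);
        return;
    }

    n_columns = tracker_sparql_cursor_get_n_columns (cursor);

    while (tracker_sparql_cursor_next (cursor, NULL, NULL)) {
        for (i = 0; i < n_columns; i++) {
            const gchar *name = tracker_sparql_cursor_get_variable_name (cursor, i);

            if (tracker_sparql_cursor_get_value_type (cursor, i) ==
                TRACKER_SPARQL_VALUE_TYPE_UNBOUND) {
                /* An empty string is not the same as unbound */
                g_print ("%s: (unbound)\n", name);
            } else {
                g_print ("%s: %s\n", name,
                         tracker_sparql_cursor_get_string (cursor, i, NULL));
            }
        }
    }

    g_object_unref (cursor);
}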

I usually post about work in progress, not about something that is done. Same this time, of course. You can find the branch where we’re working on this here.

Categories: english
Philip Van Hoof

Tracker 0.8’s situation

Click to read 1678 more words
Categories: condescending
Philip Van Hoof

I made some documentation about our SPARQL IN feature that we recently added. I added some interesting use-cases, like doing an insert and a delete based on IN values.

For the new class signal API that we’re developing this week and next, we’ll probably emit the IDs that tracker:id() would give you if you used it on a resource. This means that IN is very useful for getting the metadata of resources that are in the list of IDs you just received from the class signal.

We never documented tracker:id() very much, as it’s not an RDF standard; rather, it’s something Tracker-specific. But neither are the class signals an RDF standard; they are Tracker-specific too. I guess that makes them usable in combination, and it makes the ‘internal API’ status irrelevant.

We’re prototyping the new class signal API right now. It’ll probably be a “sa(iii)a(iii)”:

That’s the class name and two arrays of (subject-id, predicate-id, object-id). The class name is there to allow D-Bus filtering. The first array contains the deletes and the second the inserts. We’ll only give you object-ids for non-literal objects (literal objects have no internal object-id). This means that we don’t send literals to you in the signal; you need to make a query to get them, and we’ll send 0 in the signal instead.
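
To illustrate what that payload means on the receiving side, unpacking a “sa(iii)a(iii)” signal with GDBus would look roughly like this. The bus, interface and signal names below are placeholders, not the final API:

#include <gio/gio.h>

static void
on_class_signal (GDBusConnection *connection,
                 const gchar     *sender_name,
                 const gchar     *object_path,
                 const gchar     *interface_name,
                 const gchar     *signal_name,
                 GVariant        *parameters,
                 gpointer         user_data)
{
    const gchar *class_name;
    GVariantIter *deletes, *inserts;
    gint subject_id, predicate_id, object_id;

    /* "sa(iii)a(iii)": class name, deleted triples, inserted triples */
    g_variant_get (parameters, "(&sa(iii)a(iii))",
                   &class_name, &deletes, &inserts);

    while (g_variant_iter_next (deletes, "(iii)",
                                &subject_id, &predicate_id, &object_id)) {
        /* object_id is 0 for literal objects; query for the value */
        g_print ("%s delete: %d %d %d\n",
                 class_name, subject_id, predicate_id, object_id);
    }

    while (g_variant_iter_next (inserts, "(iii)",
                                &subject_id, &predicate_id, &object_id)) {
        g_print ("%s insert: %d %d %d\n",
                 class_name, subject_id, predicate_id, object_id);
    }

    g_variant_iter_free (deletes);
    g_variant_iter_free (inserts);
}

static guint
subscribe_class_signal (GDBusConnection *connection, const gchar *class_name)
{
    /* The class name being the first argument is what makes D-Bus
     * filtering (arg0 matching) possible; names here are made up. */
    return g_dbus_connection_signal_subscribe (connection,
                                               "org.freedesktop.Tracker1",
                                               "org.freedesktop.Tracker1.Resources",
                                               "ClassSignal",
                                               NULL,
                                               class_name,
                                               G_DBUS_SIGNAL_FLAGS_NONE,
                                               on_class_signal,
                                               NULL, NULL);
}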

We give you the object-ids because of a use-case that we didn’t cover yet:

Given the triple <a> nie:isLogicalPartOf <b>: when <a> is deleted, how do you know <b> at the time of the signal? The feature request was to be able to do a SELECT ?b { <a> nie:isLogicalPartOf ?b } when <a> is deleted, which is exactly when the client can no longer run that query.

With the new signal we’ll give you the ID of <b> when <a> is deleted. We’ll also implement a tracker:uri(integer id) allowing you to get <b> out of that ID. It’ll do something like this, but much faster: SELECT ?subject { ?subject a rdfs:Resource . FILTER (tracker:id(?subject) IN (%d)) }

I know there will be people screaming for all objects, literals included, in the signals, but we don’t want to flood your D-Bus daemon with all that data. Scream all you want. Really, we don’t. Just do a roundtrip query.

Categories: condescending
Philip Van Hoof

Busy handling

Click to read 1320 more words
Categories: condescending
Philip Van Hoof

Neelie Kroes on open source

2010-07-15 10:18 UTC  by  Philip Van Hoof

Video link

Categories: english
Philip Van Hoof

The support for domain specific indexes is awaiting review / finished, although we can further optimize it now. More on that later in this post. Imagine that you have this ontology:

Click to read 1164 more words
Categories: condescending
Philip Van Hoof

SQLite’s WAL

SQLite is working on WAL, which stands for Write Ahead Logging.

The new logging technique means that we can probably keep read statements open for multiple processes. It’s not full MVCC yet, as simultaneous writes are still not possible. But in our use-case, reading from multiple processes is vastly more important anyway.

We’re investigating SQLite’s WAL mode thoroughly over the next few days. Jürg is working on this the most at the moment. If WAL is fit for our purpose, then we’ll probably also start developing a direct-access library that will allow your process to connect directly to our SQLite database, avoiding any form of IPC.
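
If you want to experiment with it yourself, switching a database to WAL is a single pragma against the SQLite C API (nothing Tracker-specific here, it just needs an SQLite new enough to know the pragma):

#include <stdio.h>
#include <sqlite3.h>

int
main (void)
{
    sqlite3 *db;
    char *error = NULL;

    if (sqlite3_open ("test.db", &db) != SQLITE_OK) {
        fprintf (stderr, "Could not open database\n");
        return 1;
    }

    /* Switch the database to write-ahead logging; readers no longer
     * block the writer (and vice versa), which is the interesting
     * part for multi-process read access. */
    if (sqlite3_exec (db, "PRAGMA journal_mode=WAL;",
                      NULL, NULL, &error) != SQLITE_OK) {
        fprintf (stderr, "Could not enable WAL: %s\n", error);
        sqlite3_free (error);
    }

    sqlite3_close (db);

    return 0;
}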

Adrien’s FD-passing is in master, though. And it’s performing quite well!

We’re thrilled that SQLite’s team is taking this direction with WAL. Very awesome guys!

Domain specific indexes

Yesterday I worked on support for deleting a domain specific index from the ontology. Because SQLite’s ALTER TABLE doesn’t support dropping a column, I had to do it by renaming the original table, recreating the table without the mirror column, copying the data over from the renamed table, and finally dropping the renamed table. It’s nasty, but it works. I think SQLite should just add DROP COLUMN to ALTER TABLE. Why is this so hard to add?
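
In SQL the workaround boils down to the usual rename, recreate, copy and drop dance. A simplified sketch, with MusicPiece and nie:title purely as example names rather than what the code literally generates:

#include <sqlite3.h>

/* Drop the "nie:title" mirror column from "nmm:MusicPiece" by
 * rebuilding the table; illustrative names and columns only. */
static int
drop_mirror_column (sqlite3 *db)
{
    const char *sql =
        "BEGIN TRANSACTION;"
        "ALTER TABLE \"nmm:MusicPiece\" RENAME TO \"nmm:MusicPiece_TEMP\";"
        "CREATE TABLE \"nmm:MusicPiece\" ("
        "  ID INTEGER PRIMARY KEY,"
        "  \"nmm:performer\" INTEGER,"
        "  \"nfo:duration\" INTEGER);"
        "INSERT INTO \"nmm:MusicPiece\" (ID, \"nmm:performer\", \"nfo:duration\") "
        "  SELECT ID, \"nmm:performer\", \"nfo:duration\" "
        "  FROM \"nmm:MusicPiece_TEMP\";"
        "DROP TABLE \"nmm:MusicPiece_TEMP\";"
        "COMMIT;";

    return sqlite3_exec (db, sql, NULL, NULL, NULL);
}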

I finally got it working; now it must of course be tested, and then tested again.

Next for this feature is adapting the SPARQL engine to start using the indexed mirror column and produce better-performing SQL queries.

Categories: english
Philip Van Hoof

Working on domain specific indexes

2010-07-01 15:01 UTC  by  Philip Van Hoof

So… what is involved in a “simple change” like the one I wrote about yesterday?

First you add support for annotating the domain specific index in the ontology files. This is straightforward, as we of course have a generic Turtle parser; it’s just a matter of adding properties to certain classes and filling in the values from the ontology in the instances of our in-memory representation of the ontology. You of course also need to change the CREATE TABLE statements. Trivial.

Then you implement detecting changes in the ontology and, more complex, coping with those changes. This means doing ALTER on the SQL tables. You also need to copy from the InformationElement table to the MusicPiece table (I’m using MusicPiece to clarify; it’s of course generic) when such a domain specific index is added during an ontology change, and put an implicit index on the column. After all, that index is why we’re doing this.
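
Sketched in SQL (run from C here, because that is where it happens), the copy-plus-implicit-index step for a newly added domain specific index looks roughly like this; MusicPiece and nie:title are again only the clarifying example:

#include <sqlite3.h>

/* Add the mirror column for a new domain specific index, copy the
 * existing values over from the decomposed InformationElement table,
 * and create the index. Rough sketch with illustrative names. */
static int
add_domain_specific_index (sqlite3 *db)
{
    const char *sql =
        "ALTER TABLE \"nmm:MusicPiece\" ADD COLUMN \"nie:title\" TEXT;"
        "UPDATE \"nmm:MusicPiece\" SET \"nie:title\" = "
        "  (SELECT \"nie:title\" FROM \"nie:InformationElement\" "
        "   WHERE \"nie:InformationElement\".ID = \"nmm:MusicPiece\".ID);"
        "CREATE INDEX \"nmm:MusicPiece_nie:title\" "
        "  ON \"nmm:MusicPiece\" (\"nie:title\");";

    return sqlite3_exec (db, sql, NULL, NULL, NULL);
}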

I finished those two yesterday. I have not finished detecting a deletion of a domain specific index yet. That will have to ALTER the table with a DROP of the column. The most difficult part here is detecting the deletion itself. We don’t yet have any code to diff multi-value properties in the ontology (the ontology is a collection of RDF statements like everything else, describing itself).

Today I finished writing the code that copies values to the MusicPiece table’s mirror column.

The next few days will be about adapting the SPARQL engine and of course coping with deletion of a domain specific index. And then testing, and testing again. Mind that this has to work from a journal replay situation too, in which case no ontology files are involved (everything is stored in the history of the persistent journal).

Where’s my Redbull? Ah, waiting for me in the fridge. Good!

Categories: english
Philip Van Hoof

Domain specific indexes

2010-06-30 13:51 UTC  by  Philip Van Hoof

We store our data in a decomposed way. For single-value properties we create a table per class, with a column per property. Multi-value properties go in a separate table. For now I’ll focus on the single-value properties.

Imagine you have a MusicPiece. In Nepomuk that’s a subclass of InformationElement. InformationElement adds properties like title and subject. MusicPiece has performer, which is a Contact, and duration, an integer. A Contact has a fullname.

Alright, that looks like this in our internal storage.
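
Roughly, it is this shape (a simplified sketch with illustrative names and types; the real schema is generated from the ontology and carries more bookkeeping):

#include <sqlite3.h>

/* One table per class, one column per single-value property; the ID
 * column is the resource's internal ID and is shared across the class
 * tables of the same resource. Simplified, illustrative names only. */
static int
create_example_schema (sqlite3 *db)
{
    const char *sql =
        "CREATE TABLE \"nie:InformationElement\" ("
        "  ID INTEGER PRIMARY KEY,"
        "  \"nie:title\" TEXT,"
        "  \"nie:subject\" TEXT);"
        "CREATE TABLE \"nmm:MusicPiece\" ("
        "  ID INTEGER PRIMARY KEY,"      /* same ID as in nie:InformationElement */
        "  \"nmm:performer\" INTEGER,"   /* internal ID of the nco:Contact */
        "  \"nfo:duration\" INTEGER);"
        "CREATE TABLE \"nco:Contact\" ("
        "  ID INTEGER PRIMARY KEY,"
        "  \"nco:fullname\" TEXT);";

    return sqlite3_exec (db, sql, NULL, NULL, NULL);
}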

Querying that in SPARQL goes like this. I’ll add the Nepomuk prefixes.

SELECT ?musicpiece ?title ?subject ?performer {
   ?musicpiece a nmm:MusicPiece ;
               nmm:performer ?p ;
               nie:title ?title ;
               nie:subject ?subject .
   ?p nco:fullname ?performer .
} ORDER BY ?title

A problem with ORDER BY on the title field is that Tracker needs to do a join and a full table scan on that InformationElement table.

So we’re working on what we call domain specific indexes. It means that for certain properties we’ll have a redundant mirror column, on which we place the index. The native SQL query will be generated to use that mirror column instead. A good example is nie:title for nmm:MusicPiece.
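
Purely as an illustration of the effect, and not the literal queries our engine generates, the SQL behind that SPARQL changes shape more or less like this:

/* Without the domain specific index: ordering by the title forces a
 * join with "nie:InformationElement" and a scan over its title column. */
static const char *order_by_without_mirror =
    "SELECT m.ID, ie.\"nie:title\" "
    "FROM \"nmm:MusicPiece\" AS m "
    "JOIN \"nie:InformationElement\" AS ie ON ie.ID = m.ID "
    "ORDER BY ie.\"nie:title\";";

/* With the redundant, indexed mirror column on "nmm:MusicPiece" the
 * ORDER BY can be satisfied from the index on that table. */
static const char *order_by_with_mirror =
    "SELECT m.ID, m.\"nie:title\" "
    "FROM \"nmm:MusicPiece\" AS m "
    "ORDER BY m.\"nie:title\";";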

ps. A normal triple store has instead a huge table with just three columns: subject, predicate and object. That wouldn’t help you much with optimizing of course.

Categories: Informatics and programming
Philip Van Hoof

IPC performance, the report

2010-05-13 19:58 UTC  by  Philip Van Hoof

The Tracker team will be doing a codecamp this month. Among the subjects we will address is the IPC overhead of tracker-store, our RDF query service.

Click to read 3214 more words
Categories: Informatics and programming
Philip Van Hoof

The crawler’s modification time queries

Yesterday we optimized the crawler’s query that gets the modification time of files. We use this timestamp to know whether or not a file must be reindexed.

Originally, we used a custom SQLite function called tracker:uri-is-parent() in SPARQL. This, however, caused a full table scan. As long as your SQL table for nfo:FileDataObject resources wasn’t too large, that wasn’t a huge problem. But it didn’t scale linearly. I started by optimizing the function itself. It was using a strlen(), so I replaced that with sqlite3_value_bytes(). We only store UTF-8, so that worked fine. It gained me about 10%; not enough.
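
For context, such a function is registered with SQLite as a custom scalar function. The sketch below is not our actual code, but it shows the shape of a uri-is-parent style function and where sqlite3_value_bytes() takes over from strlen():

#include <string.h>
#include <sqlite3.h>

/* Sketch of a uri-is-parent style SQL function: returns 1 when the
 * first argument is the direct parent directory of the second. */
static void
uri_is_parent (sqlite3_context *context, int argc, sqlite3_value **argv)
{
    const char *parent = (const char *) sqlite3_value_text (argv[0]);
    const char *uri = (const char *) sqlite3_value_text (argv[1]);
    /* sqlite3_value_bytes() avoids a strlen() over UTF-8 text that
     * SQLite already knows the length of. */
    int parent_len = sqlite3_value_bytes (argv[0]);
    int uri_len = sqlite3_value_bytes (argv[1]);

    if (parent == NULL || uri == NULL) {
        sqlite3_result_int (context, 0);
        return;
    }

    /* uri must start with parent, followed by exactly one path element */
    if (uri_len > parent_len + 1 &&
        strncmp (uri, parent, parent_len) == 0 &&
        uri[parent_len] == '/' &&
        memchr (uri + parent_len + 1, '/', uri_len - parent_len - 1) == NULL) {
        sqlite3_result_int (context, 1);
    } else {
        sqlite3_result_int (context, 0);
    }
}

static int
register_functions (sqlite3 *db)
{
    /* SQLITE_UTF8 because the store only contains UTF-8; the SQL-level
     * name here is made up for the example. */
    return sqlite3_create_function (db, "tracker_uri_is_parent", 2,
                                    SQLITE_UTF8, NULL,
                                    uri_is_parent, NULL, NULL);
}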

So this commit was a better improvement. First, it makes nfo:belongsToContainer an indexed property. For file resources, x nfo:belongsToContainer p means that x is in the directory p. The commit changes the query to use the property that is now indexed.

Before we started with this optimization, the original query took 1.090 s when you had ~300,000 nfo:FileDataObject resources. The new query takes about 0.090 s. It’s of course an unfair comparison, because now we use an indexed property. Adding the index took only about 10 s in total for a table of ~300,000 rows, and the table can be queried while we index it (while we insert into it). Do the math: it’s a huge win in all situations. For the SQLite freaks: the SQLite database grew by 4 MB, with all items in the table indexed.

PDF extractor

Another optimization I did earlier was in the PDF extractor. Originally we used the poppler-glib library, which doesn’t allow us to set the OutputDev at runtime. If poppler is compiled with Cairo, the OutputDev is in some versions a CairoOutputDev, and we don’t want all images in the PDF to be rendered to a Cairo surface. So I ported the extractor back to C++ and made it always use a TextOutputDev instead. In poppler-glib master this appears to have improved (in git master, poppler_page_get_text_page always uses a TextOutputDev).

Another major problem with poppler-glib is the huge amount of string copying on the heap. The time to extract metadata and content text from a 70-page PDF document without any images went from 1.050 s to 0.550 s. A lot of it was caused by copying strings and by GValue boxing due to GObject properties.

Table locked problem

Last week I improved D-Bus marshaling by using a database cursor. I forgot to handle SQLITE_LOCKED while Jürg and Carlos were introducing multithreaded SELECT support. Not good. I fixed this; it was causing random “Table locked” errors.
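
The simplest way to cope with it looks roughly like the sketch below (the real fix may do it differently, for example with sqlite3_unlock_notify()): treat SQLITE_LOCKED and SQLITE_BUSY from sqlite3_step() as “reset and try again” rather than as fatal errors.

#include <glib.h>
#include <sqlite3.h>

/* Step a statement, retrying a few times when another thread holds a
 * conflicting lock. Note that after sqlite3_reset() the statement
 * starts over from its first row, so this suits statements whose
 * results are consumed from the beginning each time. */
static int
step_with_retry (sqlite3_stmt *stmt)
{
    int result;
    int attempts = 0;

    while (TRUE) {
        result = sqlite3_step (stmt);

        if (result != SQLITE_LOCKED && result != SQLITE_BUSY)
            break;

        if (++attempts > 100)
            break;

        sqlite3_reset (stmt);   /* allow the statement to be stepped again */
        g_usleep (10000);       /* back off for 10 ms before retrying */
    }

    return result;
}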

Categories: Informatics and programming