From: Dmitry Gutov
Subject: Re: sqlite3
Date: Tue, 14 Dec 2021 19:31:58 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.13.0
On 14.12.2021 11:57, Lars Ingebrigtsen wrote:
> As for the sqlite part of this: My initial benchmarking of this was wrong. I thought sqlite3 was going to be a real advantage for this thing, since I'd benchmarked excellent performance (more than 50K updates per second, for instance). But that's only when not committing after every transaction, which we want to do here, really.
I'm guessing just the ability to avoid committing after every transaction (do it on a timer instead, perhaps) might become an advantage at some point.
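The gap between per-write commits and batched commits is easy to reproduce outside Emacs. Below is a minimal sketch in Python (not multisession.el's actual code; the `kv` table name and schema are invented for illustration) showing the two commit strategies side by side:

```python
# Sketch: per-write commits vs. one batched commit, using Python's
# sqlite3 module. Each commit forces a journal sync, which is why
# committing after every write is dramatically slower on real disks.
import sqlite3
import time

def write_values(db_path, n, commit_every_write):
    """Insert n key/value pairs; return elapsed seconds."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")
    start = time.perf_counter()
    for i in range(n):
        conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)",
                     (f"k{i}", f"v{i}"))
        if commit_every_write:
            conn.commit()   # sync to disk after every single write
    conn.commit()           # otherwise: one commit for the whole batch
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed
```

A timer-based variant would simply call `conn.commit()` from a periodic callback instead of inside the loop, trading durability (writes since the last tick can be lost) for write throughput.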
But a "proper" database might give other advantages like a faster search in the loaded data (unless it's already "indexed" by using hash tables everywhere where they could be used).
Or being able to read the data without loading the whole file into memory. Which, for certain scenarios and data sets, might be a bigger advantage than faster writes.
> But it turns out that sqlite3 is actually slower for this particular use case than just writing the data to a file (i.e., using the file system as the database; one file per value). So multisession.el now offers two backends (`files' and `sqlite'), and defaults to `files'.
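The file-per-value idea can be sketched in a few lines. This is an illustration of the general technique, not multisession.el's actual implementation (which stores Lisp data in an Emacs-specific layout); the function names here are invented:

```python
# Sketch: using the file system as a key/value store, one file per
# value. Writes go through a temp file plus an atomic rename, so a
# reader never observes a partially written value.
import os
import tempfile

def set_value(directory, key, value):
    """Store value under key, one file per key, written atomically."""
    path = os.path.join(directory, key)
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w") as f:
        f.write(value)
    os.replace(tmp, path)   # atomic on POSIX: rename over the old file

def get_value(directory, key):
    """Read the value stored under key."""
    with open(os.path.join(directory, key)) as f:
        return f.read()
```

Each `set_value` costs one file write plus one rename, which is the unit of work the question below is comparing against the number of `COMMIT`s in the sqlite backend.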
Does the latter scenario perform as many file writes as there are 'COMMIT's in the former scenario?