SQLite works great as an in-app database but can require some configuration to achieve optimal performance. This post covers 10 SQLite optimizations that can help maintain blazingly fast performance.
SQLite is small, fast, has a lot of built-in functionality, and works great as an in-app database. However, it is designed for many different use cases, and requires some configuration for optimal performance when used as an in-app database. And being a C library designed to run on practically any platform, it leaves a lot up to higher-level libraries to implement.
To start with, it helps to have a good idea of expected baseline performance. If an app's performance is far from this, there is likely room for optimization.
Here are 10 optimizations that can collectively result in significant SQLite performance gains in your application.
The default in SQLite is to use a rollback journal. This involves at least two disk writes for every transaction, and waiting for a filesystem sync.
WAL mode instead writes changes to a sequential write-ahead log, and then later synchronizes them back to the main database. Additionally, it allows safely using synchronous = normal, which avoids waiting for a filesystem sync (fsync) operation in most transactions.
These settings are good for most applications:
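```sql
-- Use a write-ahead log instead of a rollback journal
-- (persistent: stored in the database file once set)
PRAGMA journal_mode = WAL;
-- Only require a full fsync at checkpoints, not on every commit
-- (applies per connection, so set it on every new connection)
PRAGMA synchronous = NORMAL;
```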
The only durability caveat is that on a system crash (not application crash), the last transaction(s) may be rolled back. This is fine for most applications — if durability is that important, you should probably rely on more than just a phone's flash storage.
SQLite itself is fast, and can execute millions of individual statements per second. The official documentation mentions there is no problem with using lots of individual queries, unlike with typical client/server databases.
However, the higher-level library / SQLite bindings often add a lot of overhead to each statement, which can be an order of magnitude or two more than the underlying SQLite operation. This is especially true for (1) higher-level languages, (2) bindings that provide asynchronous operations, or (3) operations that pass through multiple layers of bindings. For example, if a call goes from React Native → Java → JNI → SQLite and back, that adds a lot of overhead.
The only options here are to reduce the per-statement overhead (for example by using faster bindings), or to reduce the number of statements executed, both of which are covered below.
Android
Avoid the built-in Android SQLite implementation, which has severe limitations: the bundled SQLite version depends on the Android OS version (and is often outdated, missing newer features), and the Java API adds significant per-statement overhead.
If you use bindings on another platform (e.g. Flutter or React Native), make sure that they don't use the built-in Android SQLite under the hood. Most libraries provide an option to bundle SQLite instead of using the built-in version, if that isn't already the default.
By default, every write to the database is effectively its own transaction. The overhead per transaction is reduced in WAL mode, but not eliminated. For high write throughput, wrap multiple writes in the same transaction.
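For example (using a hypothetical users table):

```sql
-- Without an explicit transaction, each INSERT commits (and syncs) on its own.
-- Grouping the writes commits them all at once:
BEGIN;
INSERT INTO users (id, name) VALUES (1, 'alice');
INSERT INTO users (id, name) VALUES (2, 'bob');
-- ... many more writes ...
COMMIT;
```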
Even read performance can be improved by using transactions, since they reduce the number of lock operations required. However, when WAL mode is used, this effect is small.
Running operations in the background is important to avoid the application UI stuttering. For many operations, SQLite is fast enough that this is not a problem, even when running on the main thread. However, there may be an occasional query that takes longer, especially when working with large sets of data.
For this reason it's usually best to run operations on a background thread. The SQLite bindings may do this automatically (e.g. bindings that provide async methods for Dart), or may require manual work. This often adds some overhead to each operation, but the result is usually still fast enough not to be an issue, and is worth the trade-off of not blocking the UI.
Even when running operations in the background, operations may still block each other. If the bindings run in single-threaded mode, one large transaction (e.g. downloading data in the background) would block all other queries.
SQLite natively supports concurrent transactions. However, each separate "connection" is effectively single-threaded. So to run multiple concurrent transactions, separate connections are needed on different threads.
In the default rollback journal mode, SQLite can run any number of concurrent read transactions, but when a transaction writes to the database, it locks it in exclusive mode (no other readers or writers can run at that point).
In WAL mode, readers can continue reading from the database while another connection is writing to it. The only limitation is that only a single transaction can write at any point.
If multiple transactions attempt to write at the same time, one of them will fail with SQLITE_BUSY and needs to be retried. When only a single process uses the database at a time, it is often better to use an application-level write queue rather than relying on SQLITE_BUSY. This can provide fairer writes, avoiding issues such as writer starvation.
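If you do rely on SQLite's retry behavior rather than an application-level queue, a busy timeout makes SQLite wait and retry internally for a while before returning SQLITE_BUSY:

```sql
-- Wait up to 5 seconds for a competing write lock before failing with SQLITE_BUSY
PRAGMA busy_timeout = 5000;
```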
SQLite also has an experimental mode called begin concurrent, allowing concurrent write transactions, as long as they don't read or write the same pages of data. This is not part of the main build yet, and requires a custom build from a separate branch.
When running the same small statement many times over, the overhead of parsing the SQL could add up.
For example, instead of parsing and executing the full statement on every iteration, prepare the statement once, then bind new parameter values and execute it for each iteration.
Bindings often do not expose the low-level prepare() call directly. And when the bindings are asynchronous, their per-call overhead can outweigh the gain from reusing a prepared statement.
However, the bindings can still take advantage of prepared statements internally, for example by caching and reusing prepared statements for frequently-executed queries, or by providing a batch API that runs the same statement with many sets of parameters.
In recent SQLite versions, the JSON1 extension is enabled by default. Even when not storing JSON data, this makes it easy to pass in large sets of data to a query.
This is one great way to avoid per-statement overhead, regardless of the bindings used.
Example 1: Look up multiple rows in a single statement
Typical example for a single-row lookup (the examples below use a hypothetical users table with an integer id primary key and a text name column):
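```sql
-- Single-row lookup by primary key
SELECT * FROM users WHERE id = ?;
```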
To query multiple rows in a single statement, use the IN operator:
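```sql
-- One bound parameter per id; the SQL has to be generated to match the number of ids
SELECT * FROM users WHERE id IN (?, ?, ?);
```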
However, the above typically involves generating SQL programmatically, which is error-prone, and could run into the limit on the maximum number of parameters per statement (999 or 32766 by default, depending on the SQLite version used).
Using JSON1 functions avoids that complexity, in addition to typically being faster:
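```sql
-- Bind a single JSON array of ids, e.g. '[1, 2, 3]'
SELECT users.*
FROM users
WHERE users.id IN (SELECT value FROM json_each(?));
```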
Bulk insert
JSON1 can also be used to bulk insert data:
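```sql
-- Bind a single JSON array of objects,
-- e.g. '[{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]'
INSERT INTO users (id, name)
SELECT json_extract(value, '$.id'), json_extract(value, '$.name')
FROM json_each(?);
```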
When the query gets large, use a CTE (Common Table Expression, i.e. a WITH clause) to split the JSON handling out from the rest of the query:
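```sql
-- Same insert, with the JSON unpacking split into a CTE
WITH input_rows AS (
  SELECT
    json_extract(value, '$.id') AS id,
    json_extract(value, '$.name') AS name
  FROM json_each(?)
)
INSERT INTO users (id, name)
SELECT id, name FROM input_rows;
```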
Bulk update
Bulk update using UPDATE FROM:
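```sql
-- UPDATE ... FROM requires SQLite 3.33 or later.
-- Bind a JSON array of objects, e.g. '[{"id": 1, "name": "alice2"}]'
UPDATE users
SET name = json_extract(input.value, '$.name')
FROM json_each(?) AS input
WHERE users.id = json_extract(input.value, '$.id');
```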
Bulk delete
Bulk delete using IN:
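```sql
-- Bind a JSON array of ids, e.g. '[1, 2, 3]'
DELETE FROM users
WHERE id IN (SELECT value FROM json_each(?));
```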
Indexes can increase the performance of read queries by orders of magnitude. There are many great guides on indexing, so the details won't be repeated here.
SQLite has a great guide on all the different optimizations the query planner can perform using indexes: https://www.sqlite.org/optoverview.html
Some tips:
Note that by default, SQLite does not know how many unique values are in each column, which may cause it to choose a suboptimal index. The default recommendation is to use ANALYZE (or PRAGMA optimize) to gather statistics, but this may cause different indexes to be used for different users, resulting in difficult-to-debug performance issues.
Instead, use one of the other options, such as shipping pre-computed statistics in the sqlite_stat1 table, or forcing a specific index with an INDEXED BY clause.
SQLite has support for indexes on expressions and partial indexes, which may be very useful in some cases. For example, an index can be created on a field inside a JSON document (the sketch below uses a hypothetical documents table with a JSON data column):
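```sql
-- Index on a field extracted from a JSON document
CREATE INDEX idx_documents_email ON documents (json_extract(data, '$.email'));

-- Queries that filter on the same expression can then use the index:
SELECT * FROM documents WHERE json_extract(data, '$.email') = ?;
```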
WAL checkpoints are where data is moved from the write-ahead log back into the main database file. This is one place where SQLite does wait synchronously for a filesystem sync.
By default, this occurs once the WAL is greater than 1,000 pages, as part of a COMMIT statement. This means that every now and then, a transaction may be slightly slower to complete.
To avoid this overhead, the WAL checkpoint can be run in a separate thread on a separate database connection. This requires some low-level operations, described here.
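A minimal sketch of that approach using standard SQLite pragmas: disable automatic checkpoints on the connections performing writes, and periodically run a checkpoint from a dedicated background connection.

```sql
-- On the connection(s) used for writes: disable automatic checkpoints
PRAGMA wal_autocheckpoint = 0;

-- On a dedicated background connection, run periodically:
PRAGMA wal_checkpoint(PASSIVE);
```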
By default, the database does not decrease in size after deleting data. There are two files that can grow indefinitely and do not free up space automatically: the main database file, and the write-ahead log (the -wal file).
While SQLite does support an incremental vacuum mode for the main database, it can increase fragmentation and decrease performance. Instead, manually run VACUUM to reclaim space after a lot of data has been deleted.
To estimate the amount of space that can be freed, use this query:
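```sql
-- Estimated reclaimable space in bytes: free pages * page size
-- (pragma functions require SQLite 3.16+; alternatively run
--  PRAGMA freelist_count; and PRAGMA page_size; separately)
SELECT
  (SELECT * FROM pragma_freelist_count()) *
  (SELECT * FROM pragma_page_size()) AS freeable_bytes;
```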
The actual space that can be freed may be more when many pages are only partially filled.
The WAL size is mostly managed automatically by the journal_size_limit pragma. It can still be useful to directly truncate the WAL after a VACUUM command. There is currently no built-in SQLite method to get the WAL size, but this can be done by checking the size of the -wal file using standard filesystem methods.
If there are many free pages (e.g. more than 10MB), run the following to reclaim the space:
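```sql
-- Rebuild the main database file, dropping free pages
VACUUM;
-- Move remaining WAL content into the main database and truncate the -wal file
PRAGMA wal_checkpoint(TRUNCATE);
```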
Note that even though this will free up space, it actually requires more storage while running. This means it can't be used to free up space when the device storage is already full.
Both these statements require a write lock, and may block writing to the database for a long time if the database is large. It is best to run them while the device is idle, to avoid slowing down or blocking other operations.
On iOS, this can be scheduled as a background processing task (for example using BGProcessingTaskRequest from the BackgroundTasks framework).
On Android, use WorkManager to schedule the operation for when the device is idle (for example with the setRequiresDeviceIdle(true) constraint).
Application developers should not have to care about most of this complexity. To assist with this in Flutter applications, we’ve created a library, sqlite_async, that has many of the above optimizations built in.
Support for additional optimizations is planned.
If you prefer using an ORM and typed queries, look at the great drift library instead. While it doesn’t have all the same optimizations out of the box, most of them can be enabled with some configuration.
PowerSync is a cloud service that keeps in-app SQLite databases in sync with your server-side PostgreSQL database, giving you the full power of SQLite for an offline-first app. Request developer preview access here.