WEBVTT

00:00.000 --> 00:15.000
Now, let me welcome Kevin, no, yes, Kevin, sorry, who will talk about Electric SQL,

00:15.000 --> 00:22.000
query-driven sync in TanStack DB. Let's welcome him with a round of applause.

00:22.000 --> 00:29.960
Hello everyone, I'm Kevin, I'm a founding engineer at Electric SQL,

00:29.960 --> 00:33.960
and a part-time postdoctoral researcher at the Vrije Universiteit Brussel.

00:33.960 --> 00:38.960
Today I will be talking about query-driven sync in TanStack DB.

00:38.960 --> 00:42.960
So many applications still use a traditional web architecture,

00:42.960 --> 00:46.960
so you have the application, the API and database,

00:46.960 --> 00:49.960
and the API and the database live on the backend,

00:49.960 --> 00:53.960
and the application makes HTTP requests to the API,

00:53.960 --> 00:55.960
in order to read and write data,

00:55.960 --> 00:59.960
the API then reads and writes to the database,

00:59.960 --> 01:02.960
the database returns some data to the API,

01:02.960 --> 01:06.960
and the API returns the data to the application.

01:06.960 --> 01:12.960
Now, the frontend of the application typically uses a reactive UI library,

01:12.960 --> 01:15.960
such that anytime the local state changes,

01:15.960 --> 01:18.960
this is automatically reflected in the UI.

01:18.960 --> 01:23.960
The problem is that this reactivity stops at the network boundary,

01:23.960 --> 01:30.960
so whenever the local state changes the UI is automatically updated,

01:30.960 --> 01:37.960
but the local state does not automatically remain in sync with the backend state.

01:37.960 --> 01:41.960
So if you want the local state to remain in sync with the backend state,

01:41.960 --> 01:45.960
you typically end up implementing some long-polling mechanism,

01:45.960 --> 01:50.960
or a custom mechanism on top of WebSockets or SSE.

01:50.960 --> 01:54.960
So many people in this room are local-first experts,

01:54.960 --> 01:57.960
and probably have a better architecture in mind,

01:57.960 --> 02:01.960
so you're probably thinking about the local-first architecture, of course.

02:01.960 --> 02:05.960
So let's copy all the data to the front end.

02:05.960 --> 02:09.960
So we still have our application and a database,

02:09.960 --> 02:12.960
but now we also have a sync engine.

02:12.960 --> 02:17.960
So the idea is that the application will keep a local cache of the data,

02:17.960 --> 02:22.960
and initially of course that local cache will be empty.

02:22.960 --> 02:26.960
So we need to populate that local cache,

02:26.960 --> 02:30.960
and so we will start out by doing an initial sync phase.

02:30.960 --> 02:36.960
So we do an initial sync and the sync engine will then query the database

02:36.960 --> 02:40.960
to load the data, the database will return a partial data set,

02:40.960 --> 02:44.960
and the sync engine will then return that data to the frontend,

02:44.960 --> 02:48.960
where it will be used to populate the cache.

02:48.960 --> 02:55.960
And so once we have loaded all the data in the local cache,

02:55.960 --> 02:59.960
we also have a background sync mechanism,

02:59.960 --> 03:04.960
which ensures that the local cache remains in sync with the backend.

03:04.960 --> 03:07.960
And so now that we have our local cache,

03:07.960 --> 03:11.960
the application always reads and writes from and to the cache,

03:11.960 --> 03:14.960
so all the interactions feel instant.

03:14.960 --> 03:17.960
So this leads to great user experience.

03:17.960 --> 03:22.960
Now this local-first architecture also has a couple of downsides.

03:22.960 --> 03:25.960
The first one is that we now have a sync engine,

03:25.960 --> 03:30.960
so we have to rewrite the frontend and the backend around this sync engine.

03:30.960 --> 03:35.960
As I explained, we need an initial sync phase to populate the cache,

03:35.960 --> 03:40.960
and this initial sync can be quite slow if our data sets are big.

03:41.960 --> 03:45.960
Also, if the local cache grows and becomes very big,

03:45.960 --> 03:48.960
then querying that local cache can become slow,

03:48.960 --> 03:50.960
and the UI becomes laggy.

03:50.960 --> 03:52.960
So if we want to avoid that,

03:52.960 --> 03:54.960
we need to index the local cache,

03:54.960 --> 03:58.960
we need to do some optimized querying and so on.

03:58.960 --> 04:03.960
So we end up basically building our own client-side database,

04:03.960 --> 04:06.960
and this is truly a rabbit hole.

04:06.960 --> 04:09.960
So what is it that we actually want?

04:09.960 --> 04:13.960
We want our application to have great UX and great DX.

04:13.960 --> 04:17.960
So great UX means that the application needs to be fast,

04:17.960 --> 04:20.960
there should be no loading spinners and so on.

04:20.960 --> 04:23.960
So we already know that in order to achieve great UX,

04:23.960 --> 04:26.960
the data must be locally available.

04:26.960 --> 04:29.960
On the other hand, we also want to achieve great DX

04:29.960 --> 04:32.960
such that the developer simply needs to declare

04:32.960 --> 04:35.960
which data they need for their view,

04:35.960 --> 04:37.960
rather than how to fetch that data.

04:37.960 --> 04:41.960
So we don't want developers to have to write networking code,

04:41.960 --> 04:43.960
caching code, syncing code,

04:43.960 --> 04:46.960
conflict resolution code, and so on.

04:46.960 --> 04:51.960
So the main challenge here is how we are going to sync data

04:51.960 --> 04:55.960
just in time, so it's available when the user needs it,

04:55.960 --> 04:59.960
without requiring the developer to be a magician.

04:59.960 --> 05:04.960
And this is exactly what led us to develop TanStack DB.

05:04.960 --> 05:07.960
So TanStack DB is a reactive client-side store.

05:07.960 --> 05:09.960
It has backend-agnostic collections

05:09.960 --> 05:12.960
and incremental queries over those collections.

05:12.960 --> 05:16.960
So you can think of TanStack DB as a client-side database.

05:16.960 --> 05:19.960
So it's a reactive client-side store,

05:19.960 --> 05:21.960
meaning that whenever the data in the store changes,

05:21.960 --> 05:23.960
your UI automatically reflects it,

05:23.960 --> 05:27.960
so it's integrated with Vue, React, and so on.

05:27.960 --> 05:29.960
It has backend-agnostic collections,

05:29.960 --> 05:31.960
so the collections hold your data locally,

05:31.960 --> 05:35.960
and it's backend-agnostic, meaning you can populate it from any data source.

05:35.960 --> 05:38.960
So you can even populate it from your existing REST API,

05:38.960 --> 05:41.960
a GraphQL API, sync engines, whatever.

05:41.960 --> 05:44.960
It has incremental queries over those collections,

05:44.960 --> 05:48.960
so you write queries that are very SQL-like.

05:48.960 --> 05:50.960
And whenever the results of the query change,

05:50.960 --> 05:54.960
it's incrementally computed, so it uses IVM,

05:54.960 --> 05:57.960
so incremental view maintenance, to do that.

05:57.960 --> 05:59.960
So let's take a look at an example app,

05:59.960 --> 06:01.960
so we all know this example app,

06:01.960 --> 06:04.960
it's an issue tracker like GitHub or Linear,

06:04.960 --> 06:07.960
the typical example app, so we have projects,

06:07.960 --> 06:09.960
projects can have issues,

06:09.960 --> 06:11.960
and issues can have comments.

06:11.960 --> 06:14.960
So if we already have such an app,

06:14.960 --> 06:17.960
we probably have a frontend that looks like this,

06:17.960 --> 06:19.960
so we can have functions getProjects,

06:19.960 --> 06:21.960
getIssues or getComments,

06:21.960 --> 06:23.960
which basically just makes a get request

06:23.960 --> 06:27.960
to our existing REST API, right?

06:27.960 --> 06:31.960
In this case, we are just loading all the projects, issues,

06:31.960 --> 06:33.960
or comments, because as we saw before,

06:33.960 --> 06:35.960
in a local first world,

06:35.960 --> 06:38.960
it's nice to have all of that data locally.

06:38.960 --> 06:40.960
So we also want to be able to,

06:40.960 --> 06:43.960
of course, create projects issues and so on,

06:43.960 --> 06:45.960
so this is just a createProject, very easy,

06:45.960 --> 06:48.960
we'd make a post request to our existing API,

06:48.960 --> 06:51.960
and the project is now created.

06:51.960 --> 06:54.960
So, up till here, there's nothing special,

06:54.960 --> 06:57.960
but like 99% of all apps do this kind of thing,

06:57.960 --> 07:00.960
you already have this code.

07:00.960 --> 07:03.960
So if you want to incrementally start adopting TanStack DB,

07:03.960 --> 07:06.960
the first thing you're going to do is to define

07:06.960 --> 07:08.960
some data collections.

07:08.960 --> 07:11.960
So we can define a collection with createCollection here,

07:11.960 --> 07:15.960
and it's going to hold all our projects.

07:15.960 --> 07:18.960
And we already have an existing REST API,

07:18.960 --> 07:20.960
so we want to basically populate the collection

07:20.960 --> 07:22.960
from that REST API.

07:22.960 --> 07:24.960
And to do that, we use this query collection option,

07:24.960 --> 07:26.960
so it's going to be a query collection,

07:26.960 --> 07:30.960
and the query collection is basically using TanStack Query,

07:30.960 --> 07:32.960
so I'm sure many of you know that,

07:32.960 --> 07:35.960
it's just a way to query your existing REST API.

07:35.960 --> 07:38.960
So you have to provide a query function here,

07:38.960 --> 07:41.960
which is the getProjects function from before,

07:41.960 --> 07:43.960
so it's just a regular get request,

07:43.960 --> 07:46.960
and we have to provide these three handlers.

07:46.960 --> 07:48.960
So onInsert, onUpdate and onDelete,

07:48.960 --> 07:51.960
and basically whenever somebody locally inserts

07:51.960 --> 07:54.960
or updates or deletes a record in a collection,

07:54.960 --> 07:56.960
it's going to call this handler.

07:56.960 --> 07:58.960
And so this handler is just going to call

07:58.960 --> 08:00.960
createProject here,

08:00.960 --> 08:02.960
which makes a post request to the backend.

08:02.960 --> 08:04.960
So this query collection is going to load

08:04.960 --> 08:06.960
the data from our existing REST API,

08:06.960 --> 08:08.960
and it's going to periodically poll it

08:08.960 --> 08:12.960
to keep the collection in sync with the backend.

08:12.960 --> 08:14.960
So you can define some more collections,

08:14.960 --> 08:17.960
the issues collection, and the comments collection,

08:17.960 --> 08:21.960
and basically what I want to show you here,

08:21.960 --> 08:24.960
is that the collections wrap your existing API,

08:24.960 --> 08:28.960
nothing special, no backend changes needed, very easy.
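
The collection setup he describes, a collection populated by a query function with write-through handlers, can be sketched as follows. This is an illustrative stand-in, not the real TanStack DB API; `queryFn`, `onInsert`, and the `Collection` class are assumed names.

```typescript
// Illustrative stand-in for a collection that wraps an existing REST API.
// Not the real TanStack DB API: `queryFn` stands in for the GET endpoint
// and `onInsert` for the POST endpoint described in the talk.
type Project = { id: string; name: string };

interface CollectionConfig<T> {
  queryFn: () => Promise<T[]>;          // e.g. wraps getProjects()
  onInsert: (item: T) => Promise<void>; // e.g. wraps createProject()
}

class Collection<T extends { id: string }> {
  private items = new Map<string, T>();
  constructor(private config: CollectionConfig<T>) {}

  // Initial population from the existing API.
  async sync(): Promise<void> {
    for (const item of await this.config.queryFn()) {
      this.items.set(item.id, item);
    }
  }

  // Local insert: applied immediately, then propagated via the handler.
  async insert(item: T): Promise<void> {
    this.items.set(item.id, item);
    await this.config.onInsert(item);
  }

  all(): T[] {
    return [...this.items.values()];
  }
}
```

A periodic re-run of `sync()` would give the polling behavior mentioned next.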

08:28.960 --> 08:30.960
Now that we have these collections,

08:30.960 --> 08:33.960
we can define some views over them.

08:33.960 --> 08:36.960
So imagine we want a view listing all the issues

08:36.960 --> 08:38.960
in a certain project.

08:38.960 --> 08:41.960
Well, we can write a live query,

08:41.960 --> 08:44.960
so the query basically reads from the issues collection.

08:44.960 --> 08:47.960
It filters them, because it only wants the issues

08:47.960 --> 08:51.960
on a certain project, and only in a certain status, for instance,

08:51.960 --> 08:54.960
and it orders them in a reverse chronological order.

08:54.960 --> 08:57.960
We define two dependencies here,

08:57.960 --> 08:59.960
so whenever the project ID or status changes,

08:59.960 --> 09:02.960
this query is going to be recomputed.

09:02.960 --> 09:04.960
And finally, we have the query results,

09:04.960 --> 09:06.960
it's bound to this live variable,

09:06.960 --> 09:08.960
and so whenever the query results change,

09:08.960 --> 09:11.960
the view is re-rendered.
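
The live query described above computes something like the following. This plain function only shows the semantics, with assumed field names, while TanStack DB would maintain the result incrementally.

```typescript
// Plain-function stand-in for the live query described above: filter the
// issues collection to one project and status, newest first. The field
// names (projectId, status, createdAt) are assumptions, not the talk's schema.
type Issue = { id: string; projectId: string; status: string; createdAt: number };

function issuesForProject(issues: Issue[], projectId: string, status: string): Issue[] {
  return issues
    .filter((i) => i.projectId === projectId && i.status === status)
    .sort((a, b) => b.createdAt - a.createdAt); // reverse chronological order
}
```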

09:11.960 --> 09:15.960
So you can also write an issue detail view,

09:15.960 --> 09:17.960
so here we show the issue details,

09:17.960 --> 09:19.960
and the comments on it.

09:19.960 --> 09:22.960
Again, we write a live query,

09:22.960 --> 09:24.960
we read from the issues collection,

09:24.960 --> 09:27.960
we filter to only the issue we're interested in,

09:27.960 --> 09:29.960
and here interestingly, we join it,

09:29.960 --> 09:32.960
we left join it with the comments,

09:32.960 --> 09:35.960
but only the comments on this issue, right?

09:35.960 --> 09:39.960
And then we order them in chronological order.

09:39.960 --> 09:41.960
So again, anytime the issue ID changes,

09:41.960 --> 09:43.960
the query is going to be recomputed.

09:43.960 --> 09:45.960
So what I want you to understand here,

09:45.960 --> 09:49.960
is that we simply write a SQL-like declarative query,

09:49.960 --> 09:51.960
and we can do quite complex stuff,

09:51.960 --> 09:54.960
like filters, joins, ordering,

09:54.960 --> 09:57.960
and all of this is done efficiently for you.

09:57.960 --> 10:00.960
So internally, TanStack DB maintains indexes,

10:00.960 --> 10:02.960
optimizes your queries, and so on.

10:02.960 --> 10:05.960
So you don't have to worry about that.

10:05.960 --> 10:09.960
So one more thing, we may want to write,

10:09.960 --> 10:11.960
for instance, a comment on an issue,

10:11.960 --> 10:13.960
so how do we do that?

10:13.960 --> 10:15.960
We have an addComment function here,

10:15.960 --> 10:18.960
which is just going to call insert on the collection,

10:18.960 --> 10:21.960
and insert that record.

10:21.960 --> 10:23.960
As I said before,

10:23.960 --> 10:26.960
each collection defines these handlers.

10:26.960 --> 10:28.960
So when we insert here locally,

10:28.960 --> 10:30.960
it's going to, in the background,

10:30.960 --> 10:32.960
asynchronously call this handler,

10:32.960 --> 10:35.960
which is going to create it on the backend.

10:35.960 --> 10:37.960
So this diagram,

10:37.960 --> 10:41.960
exemplifies this: your writes go to the local cache,

10:41.960 --> 10:43.960
your UI reflects it immediately.

10:43.960 --> 10:45.960
In the background, we make an API call,

10:45.960 --> 10:47.960
which is going to write into the database,

10:47.960 --> 10:50.960
you get a response, and the write is confirmed.

10:50.960 --> 10:52.960
At that point in time, the frontend knows

10:52.960 --> 10:55.960
that the write went through, everything's fine, it's confirmed.
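
The optimistic write path in that diagram can be sketched like this. All names are illustrative, and the rollback branch is an assumption about how a failed write would be handled.

```typescript
// Sketch of the optimistic write path: apply the write to the local cache
// immediately, make the API call in the background, then mark the write
// confirmed (or roll it back) when the response arrives.
type IssueComment = { id: string; body: string };

class OptimisticStore {
  cache = new Map<string, IssueComment>();
  confirmed = new Set<string>();

  async insert(
    c: IssueComment,
    apiCall: (c: IssueComment) => Promise<boolean>
  ): Promise<void> {
    this.cache.set(c.id, c);      // UI reflects this immediately
    const ok = await apiCall(c);  // background API call
    if (ok) {
      this.confirmed.add(c.id);   // write confirmed by the backend
    } else {
      this.cache.delete(c.id);    // roll back the optimistic state
    }
  }
}
```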

10:55.960 --> 10:58.960
So we saw these kind of queries,

10:58.960 --> 11:02.960
which are quite complex,

11:02.960 --> 11:04.960
and I told you that you don't have to worry about it,

11:04.960 --> 11:06.960
and they will be efficient.

11:06.960 --> 11:08.960
So how can those be efficient, right?

11:08.960 --> 11:12.960
So we certainly don't want to recompute the query

11:12.960 --> 11:14.960
every time the underlying collection changes,

11:14.960 --> 11:17.960
because computing a query with joins,

11:17.960 --> 11:19.960
etc., is very expensive.

11:19.960 --> 11:21.960
So instead, we use a technique called

11:21.960 --> 11:23.960
incremental view maintenance.

11:23.960 --> 11:25.960
So instead of recomputing,

11:25.960 --> 11:29.960
we basically keep track of the previous query results,

11:29.960 --> 11:31.960
and we also know the delta,

11:31.960 --> 11:33.960
so the change for instance,

11:33.960 --> 11:35.960
an insert or delete or whatever.

11:35.960 --> 11:37.960
Based on that, we can incrementally compute

11:37.960 --> 11:40.960
the changes to the result of the query.

11:40.960 --> 11:43.960
So to give you an intuition here,

11:43.960 --> 11:45.960
imagine you have an order query like this one,

11:45.960 --> 11:47.960
and you have the previous result,

11:47.960 --> 11:49.960
then you already know they are ordered, right?

11:49.960 --> 11:51.960
And then if you have a change,

11:51.960 --> 11:55.960
like insert a comment in this list of order comments,

11:55.960 --> 11:57.960
well, you can do it very efficiently,

11:57.960 --> 11:59.960
because you can use binary search to figure out

11:59.960 --> 12:01.960
where to put it inside,

12:01.960 --> 12:03.960
and that's basically in logarithmic time,

12:03.960 --> 12:05.960
and that's much more efficient than sorting,

12:05.960 --> 12:07.960
which is O(n log n).
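
That ordered-insert intuition, as code: placing one new value into an already-sorted result with binary search takes O(log n) comparisons instead of re-sorting the whole result.

```typescript
// Binary-search insertion into an already-sorted array: find the position
// in O(log n) comparisons, then splice the value in. The array stays sorted.
function insertSorted(sorted: number[], value: number): number[] {
  let lo = 0;
  let hi = sorted.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (sorted[mid] < value) lo = mid + 1;
    else hi = mid;
  }
  return [...sorted.slice(0, lo), value, ...sorted.slice(lo)];
}
```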

12:07.960 --> 12:10.960
So the key here is that

12:10.960 --> 12:12.960
incremental view maintenance allows

12:12.960 --> 12:16.960
for sub-millisecond query execution

12:16.960 --> 12:19.960
even on large datasets.

12:19.960 --> 12:22.960
Then there is one final problem here,

12:22.960 --> 12:24.960
that remains,

12:24.960 --> 12:27.960
and that's the initial load times.

12:27.960 --> 12:30.960
So we have these collections from before,

12:30.960 --> 12:32.960
and as I told you,

12:32.960 --> 12:35.960
we need to populate them with initial data.

12:35.960 --> 12:37.960
The data can be big,

12:37.960 --> 12:40.960
and so the user is stuck on the loading screen

12:40.960 --> 12:42.960
while the data is loading.

12:42.960 --> 12:44.960
But maybe the user just opens the app

12:44.960 --> 12:46.960
to skim a few issues, right?

12:46.960 --> 12:49.960
So it's a bit silly that we are waiting to load all the data,

12:49.960 --> 12:51.960
if they just want to see a few issues.

12:51.960 --> 12:53.960
So in some cases,

12:53.960 --> 12:55.960
you don't want to load everything up front,

12:55.960 --> 12:58.960
and you just want to load it lazily when the user needs it.

12:58.960 --> 13:02.960
So TanStack DB has three different sync modes

13:02.960 --> 13:04.960
for the collections,

13:04.960 --> 13:07.960
eager mode, on-demand, and progressive.

13:07.960 --> 13:09.960
So eager mode is the default,

13:09.960 --> 13:10.960
if you don't specify any,

13:10.960 --> 13:11.960
it's going to be eager,

13:11.960 --> 13:13.960
and it loads all data up front.

13:13.960 --> 13:16.960
So that's best used for small datasets.

13:16.960 --> 13:20.960
The on-demand mode is best for large datasets

13:20.960 --> 13:22.960
because it's going to load the data

13:22.960 --> 13:24.960
only when it's needed by a query.

13:24.960 --> 13:26.960
So if a query needs data,

13:26.960 --> 13:27.960
it's going to check what data it needs,

13:27.960 --> 13:30.960
and it's going to load only that data.

13:30.960 --> 13:34.960
And then progressive mode is kind of a combination of both.

13:34.960 --> 13:37.960
So it starts out in the on-demand mode,

13:37.960 --> 13:40.960
such that you are not stuck on a loading screen,

13:40.960 --> 13:42.960
and then in the background,

13:42.960 --> 13:45.960
it does a full initial sync.

13:45.960 --> 13:48.960
So when the initial sync is completed,

13:48.960 --> 13:50.960
it switches into eager mode.

13:50.960 --> 13:52.960
So that's best for medium datasets,

13:52.960 --> 13:56.960
where basically you don't want to wait

13:56.960 --> 13:58.960
for all the data to load because it's too big,

13:58.960 --> 14:00.960
but you can still fit it into memory,

14:00.960 --> 14:02.960
so you do it in the background.

14:02.960 --> 14:06.960
So on-demand mode is basically what we call

14:06.960 --> 14:07.960
query-driven sync,

14:07.960 --> 14:10.960
because the query expresses what data is needed,

14:10.960 --> 14:13.960
and the system fetches it when the query needs it.

14:13.960 --> 14:17.960
So where would you use query-driven sync?

14:17.960 --> 14:20.960
Well imagine you have a list of issues,

14:20.960 --> 14:21.960
which is very big, right?

14:21.960 --> 14:25.960
So typically you're going to paginate that list of issues.

14:25.960 --> 14:29.960
So you can just change your query a bit to be paginated.

14:29.960 --> 14:34.960
So you say, okay, I want only a limited number of results,

14:34.960 --> 14:36.960
namely the size of the page,

14:36.960 --> 14:39.960
and I want to start reading from the offset

14:39.960 --> 14:41.960
where the page starts.
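
The paginated query he describes boils down to a limit/offset window over the results; a plain-function sketch, with illustrative names:

```typescript
// Limit/offset pagination: `pageSize` is the limit, and the offset is
// where the requested page starts in the full result set.
function page<T>(rows: T[], pageIndex: number, pageSize: number): T[] {
  const offset = pageIndex * pageSize;          // offset where the page starts
  return rows.slice(offset, offset + pageSize); // limit = size of the page
}
```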

14:41.960 --> 14:45.960
So that's basically the only change you need to the query,

14:45.960 --> 14:47.960
and then in the collection,

14:47.960 --> 14:50.960
you must not forget to put it in on-demand mode.

14:50.960 --> 14:53.960
So because the collection is in on-demand mode,

14:53.960 --> 14:56.960
it starts out empty with no data,

14:56.960 --> 14:59.960
and then when the user opens the app and goes to the first page,

14:59.960 --> 15:02.960
it will at runtime load that page.

15:02.960 --> 15:04.960
Then the user can scroll down,

15:04.960 --> 15:06.960
click through to the second page,

15:06.960 --> 15:08.960
it's going to load the second page and so on.

15:08.960 --> 15:11.960
So I hear you thinking, okay,

15:11.960 --> 15:14.960
but now we are going to load data at runtime,

15:14.960 --> 15:17.960
so we are going to be again stuck on a loading screen, right?

15:17.960 --> 15:22.960
But you can avoid it if you are smart about when to preload the data,

15:22.960 --> 15:26.960
So if you use, for instance, the Amazon app and you're browsing for items,

15:26.960 --> 15:29.960
it looks like an infinite feed of data.

15:29.960 --> 15:33.960
You can keep scrolling and you never see a loading screen.

15:33.960 --> 15:37.960
And so of course, Amazon doesn't load everything at once,

15:37.960 --> 15:40.960
I mean, it just loads the first page,

15:40.960 --> 15:43.960
and when you're almost at the end of the page,

15:43.960 --> 15:45.960
it's going to pre-fetch the next page, right?

15:45.960 --> 15:48.960
So by the time you're at the end of the first page,

15:48.960 --> 15:50.960
the next page is already loaded.
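
That prefetch trick can be sketched as a simple threshold check; the function and its parameters are illustrative assumptions, not a TanStack DB API:

```typescript
// Prefetch heuristic: once the user is within `threshold` items of the end
// of the loaded data, start loading the next page so it is ready before
// they reach it.
function shouldPrefetch(visibleIndex: number, loadedCount: number, threshold: number): boolean {
  return loadedCount - visibleIndex <= threshold;
}
```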

15:50.960 --> 15:52.960
And you can also do that with TanStack DB,

15:52.960 --> 15:57.960
so you can basically preload a live query

15:57.960 --> 16:02.960
so that you can get this infinite scroll experience without spinners.

16:02.960 --> 16:09.960
One technical detail is that I told you that TanStack DB is backend-agnostic,

16:09.960 --> 16:12.960
so how can TanStack DB then load data from your backend

16:12.960 --> 16:15.960
if it doesn't know what your backend looks like, right?

16:15.960 --> 16:20.960
So every back-end integration needs to implement a certain interface,

16:20.960 --> 16:24.960
and the key here is this loadSubset function that needs to be implemented.

16:24.960 --> 16:28.960
So this loadSubset function, TanStack DB can call it

16:28.960 --> 16:32.960
with an optional where clause, order-by clause, and limit.

16:32.960 --> 16:36.960
And it's basically TanStack DB saying, hey, please load this data

16:36.960 --> 16:38.960
for me, so nothing special.
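
A hedged sketch of that interface is below; the option shapes paraphrase the talk (a where clause, an optional order-by, and a limit) and are not the real TanStack DB type definitions.

```typescript
// Assumed shape of a backend integration: TanStack DB asks it to load a
// subset of the data via `loadSubset`. Predicates and comparators stand
// in for the where and order-by clauses mentioned in the talk.
type Row = Record<string, unknown>;

interface LoadSubsetOptions {
  where?: (row: Row) => boolean;        // stands in for a where clause
  orderBy?: (a: Row, b: Row) => number; // stands in for an order-by clause
  limit?: number;
}

interface BackendIntegration {
  loadSubset(options: LoadSubsetOptions): Promise<Row[]>;
}

// In-memory stand-in for a backend, to show how an integration could
// answer "please load this data for me".
function memoryBackend(rows: Row[]): BackendIntegration {
  return {
    async loadSubset({ where, orderBy, limit }) {
      let out = where ? rows.filter(where) : [...rows];
      if (orderBy) out = out.sort(orderBy);
      return limit !== undefined ? out.slice(0, limit) : out;
    },
  };
}
```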

16:39.960 --> 16:44.960
Up till now, we've been using our existing REST APIs,

16:44.960 --> 16:47.960
through the TanStack Query integration.

16:47.960 --> 16:50.960
And so the TanStack Query integration looks like this,

16:50.960 --> 16:53.960
it's highly simplified, but the gist of it is here.

16:53.960 --> 16:58.960
So loadSubset is going to fetch the query function from the collection.

16:58.960 --> 17:01.960
It's going to call it with the options,

17:01.960 --> 17:05.960
so a where clause, potentially an order-by clause, and a limit.

17:05.960 --> 17:09.960
And then it's going to wait for the query function to load the data,

17:09.960 --> 17:13.960
and when it has the new state, it processes it into the collection,

17:13.960 --> 17:15.960
so it updates the collection.

17:15.960 --> 17:19.960
So we are basically passing these options through to the query function,

17:19.960 --> 17:25.960
because TanStack Query itself is agnostic to the API of your application.

17:25.960 --> 17:27.960
So what does that mean as a developer?

17:27.960 --> 17:30.960
Well, you wrote this live query in on-demand mode,

17:30.960 --> 17:33.960
and you have this where clause and this limit,

17:33.960 --> 17:36.960
now inside your collection, inside your query function,

17:36.960 --> 17:39.960
you have to handle the where clause and the limit here.

17:39.960 --> 17:42.960
So you use the where clause to compute the project ID,

17:42.960 --> 17:44.960
you use the limit to compute the page number,

17:44.960 --> 17:47.960
and you make the right API call.

17:47.960 --> 17:52.960
And so of course, this starts to become a bit annoying,

17:52.960 --> 17:55.960
if you have to translate these complex queries that you write.

17:55.960 --> 18:01.960
So this may be a good time to get rid of your traditional REST API,

18:01.960 --> 18:03.960
and use a sync engine instead.

18:03.960 --> 18:11.960
So what we envision here is that your application is still going to do local writes,

18:11.960 --> 18:15.960
and in the background propagate them through your existing REST API,

18:15.960 --> 18:18.960
so the write path still goes through your API,

18:18.960 --> 18:20.960
but the read path goes through the sync engine,

18:20.960 --> 18:23.960
so the data loading goes through the sync engine.

18:23.960 --> 18:25.960
So once you have the sync engine,

18:25.960 --> 18:30.960
the only change you need to do is to swap your query collection

18:30.960 --> 18:33.960
for an Electric collection.

18:33.960 --> 18:35.960
And that's much easier because you just say,

18:35.960 --> 18:39.960
hey, here's the URL of the Electric API,

18:39.960 --> 18:42.960
and I'm interested in the issues table.

18:42.960 --> 18:48.960
So no more translating queries into REST API calls.

18:48.960 --> 18:50.960
Okay.

18:50.960 --> 18:53.960
Now it also becomes really real-time,

18:53.960 --> 18:56.960
so before, we were using TanStack Query,

18:56.960 --> 18:58.960
so it was polling your backend.

18:58.960 --> 19:03.960
Now you're using electric, so updates flow in real time to the front end,

19:03.960 --> 19:08.960
and no other changes are needed to the UI components or the live queries.

19:08.960 --> 19:10.960
I don't know how much time I've left.

19:10.960 --> 19:12.960
One minute, perfect.

19:12.960 --> 19:16.960
So a colleague of mine, Samuel, some of you may know him,

19:16.960 --> 19:19.960
has developed his great demo app,

19:19.960 --> 19:22.960
so it's a collaborative CAD app.

19:22.960 --> 19:27.960
So on the left, you have projects, and projects have files,

19:28.960 --> 19:30.960
and files can have parts.

19:30.960 --> 19:34.960
There's an AI chatbot, and he asks the chatbot to build a new part.

19:34.960 --> 19:36.960
Now he's updating that part,

19:36.960 --> 19:39.960
and you see that the user on the right is basically

19:39.960 --> 19:41.960
following the changes in real time,

19:41.960 --> 19:43.960
so everything syncs automatically.

19:43.960 --> 19:46.960
He's asking the AI to make an extrude here.

19:46.960 --> 19:49.960
There's just that.

19:49.960 --> 19:51.960
And you see that there's even a presence marker

19:51.960 --> 19:54.960
that shows what Samuel is doing.

19:54.960 --> 19:56.960
So all of this is built on top of TanStack DB,

19:56.960 --> 19:58.960
Electric SQL, and Yjs.

19:58.960 --> 20:00.960
So to conclude,

20:00.960 --> 20:02.960
So TanStack DB enables fast local-first apps.

20:02.960 --> 20:04.960
You have great UX and DX.

20:04.960 --> 20:08.960
So great UX is achieved by the query-driven sync mechanism,

20:08.960 --> 20:10.960
In combination with some smart preloading,

20:10.960 --> 20:14.960
meaning that the data is ready when the user needs it.

20:14.960 --> 20:18.960
Our IVM engine internally enables the queries

20:18.960 --> 20:22.960
to be very efficient, so your apps are fast and snappy.

20:22.960 --> 20:25.960
It also achieves great DX, so as a developer,

20:25.960 --> 20:27.960
you just write declarative queries,

20:27.960 --> 20:32.960
no networking, caching, or syncing code needed. Good.

20:32.960 --> 20:36.960
And most importantly, it integrates with your existing stack,

20:36.960 --> 20:38.960
because we have modular backend integration.

20:38.960 --> 20:42.960
So you just pick the right integration for your stack.

20:42.960 --> 20:45.960
So yeah, this concludes my presentation.

20:45.960 --> 20:47.960
Thank you for your attention.

20:47.960 --> 20:49.960
I will take any questions.

20:50.960 --> 20:52.960
Thank you for your presentation.

20:52.960 --> 20:53.960
Very interesting.

20:53.960 --> 20:55.960
We have five minutes for questions.

20:55.960 --> 20:56.960
Please raise your hand.

20:56.960 --> 21:01.960
Do we have a question on the internet?

21:01.960 --> 21:04.960
So, a question there.

21:12.960 --> 21:15.960
Thanks for your great talk.

21:15.960 --> 21:18.960
Do you envision that it's possible to have the sync server

21:18.960 --> 21:22.960
be blind, that is, end-to-end encrypted over the sync server,

21:22.960 --> 21:26.960
but the database and clients can decrypt of course?

21:26.960 --> 21:30.960
Yeah, I mean, the sync server here,

21:30.960 --> 21:34.960
just as many tools before, it just propagates updates.

21:34.960 --> 21:38.960
So I mean, it's not doing anything fancy,

21:38.960 --> 21:42.960
like conflict resolution where you need to know the data.

21:42.960 --> 21:45.960
So it can be agnostic to what the data actually is.

21:45.960 --> 21:49.960
It just needs to know whom to sync it to.

21:49.960 --> 21:51.960
Yeah.

21:51.960 --> 21:53.960
Hello. Can you hear me?

21:53.960 --> 21:55.960
Yeah.

21:55.960 --> 21:59.960
Yeah. So how fancy can TanStack DB queries be?

21:59.960 --> 22:03.960
And did you have to deal with the problem of join ordering,

22:03.960 --> 22:06.960
where sometimes, if you make the wrong join decision,

22:06.960 --> 22:07.960
it's slow.

22:07.960 --> 22:08.960
Yeah.

22:08.960 --> 22:11.960
So your first question was how fancy can these queries be?

22:11.960 --> 22:12.960
How complex?

22:12.960 --> 22:15.960
Well, it's SQL-like, right?

22:15.960 --> 22:20.960
So you can have filters, joins, aggregation, ordering.

22:20.960 --> 22:23.960
It's quite complete.

22:23.960 --> 22:26.960
And your second question was about the joins, right?

22:26.960 --> 22:27.960
Yeah.

22:27.960 --> 22:30.960
So we try to optimize the joins.

22:30.960 --> 22:33.960
We compute them incrementally,

22:33.960 --> 22:37.960
based on these incremental view maintenance algorithms.

22:37.960 --> 22:40.960
So they are extremely fast, but of course,

22:40.960 --> 22:43.960
if you write a very bad join that cannot be optimized,

22:43.960 --> 22:46.960
then we cannot do anything about it.

22:46.960 --> 22:49.960
Good answer.

22:49.960 --> 22:51.960
Thank you.

22:51.960 --> 22:52.960
Yeah.

22:52.960 --> 22:56.960
You showed the example with the REST API.

22:56.960 --> 23:02.960
I wanted to know more regarding integrating with GraphQL.

23:02.960 --> 23:03.960
Yeah.

23:03.960 --> 23:04.960
Yeah.

23:04.960 --> 23:07.960
So as I showed around the end,

23:07.960 --> 23:12.960
TanStack DB is modular, and it's also open source, of course.

23:12.960 --> 23:15.960
So you can write your own integration.

23:15.960 --> 23:19.960
So currently, we have the TanStack Query integration.

23:19.960 --> 23:21.960
We have the Electric integration.

23:21.960 --> 23:23.960
We have several other sync engines.

23:23.960 --> 23:27.960
We don't have a GraphQL integration yet.

23:27.960 --> 23:30.960
So you could write your own integration if you want,

23:30.960 --> 23:32.960
you just need to implement the right interface.

23:32.960 --> 23:34.960
So the one I showed was a bit simplified.

23:34.960 --> 23:38.960
There are some other methods you need to implement, but it's very,

23:38.960 --> 23:39.960
very achievable.

23:39.960 --> 23:42.960
We have had many people just contributing their own integration,

23:42.960 --> 23:46.960
so it's definitely feasible.

23:46.960 --> 23:49.960
I'm curious, what is the relationship between Electric SQL and

23:49.960 --> 23:53.960
TanStack DB, both on a technical and an organizational basis?

23:53.960 --> 23:54.960
Yeah.

23:54.960 --> 23:59.960
So on a technical basis, Electric SQL is just a sync engine

23:59.960 --> 24:02.960
backend for TanStack DB.

24:02.960 --> 24:05.960
On a more philosophical and business aspect,

24:05.960 --> 24:08.960
so first on the philosophical aspect.

24:08.960 --> 24:12.960
I mean, we just wanted to build this software,

24:12.960 --> 24:15.960
because we believe all the sync engines and all the applications

24:15.960 --> 24:18.960
that want to have sync needs something like this,

24:18.960 --> 24:22.960
and we think like if you want to push the standards forward,

24:22.960 --> 24:25.960
you need a common base where everyone can contribute to,

24:25.960 --> 24:28.960
and at once, yeah.

24:28.960 --> 24:31.960
And then on a business side, it's like,

24:31.960 --> 24:35.960
well, let's do our best and hope to be the sync engine people

24:35.960 --> 24:40.960
to use and listen to people, and we actually are the main

24:40.960 --> 24:42.960
maintainers of TanStack DB.

24:42.960 --> 24:46.960
Like, I'm almost full-time just working on TanStack DB.

24:46.960 --> 24:47.960
Yeah.

24:47.960 --> 24:50.960
So does TanStack DB have any server-side components, then, or no?

24:50.960 --> 24:53.960
We are working on server-side components, yes.

24:53.960 --> 24:56.960
But there are some technical complexities around it,

24:56.960 --> 24:59.960
so it's taking a bit longer than expected.

24:59.960 --> 25:02.960
Great Q&A session.

25:02.960 --> 25:05.960
Let's thank Kevin again.

25:05.960 --> 25:07.960
Thank you.

