WEBVTT

00:00.000 --> 00:10.000
Thank you, my name is Kegan. I'm one of the core developers who helped create Matrix.

00:10.000 --> 00:18.000
I'd like to spend some time talking to you about federation. The aim is to give you a deeper understanding of how Matrix

00:18.000 --> 00:27.000
federation actually works. It isn't necessarily going to help you write a home server, but there will be some tools that you may be able to use that will actually help you.

00:28.000 --> 00:35.000
So, first of all, what is the Federation API? Hopefully this is all kind of familiar, and most people have used the client-server API.

00:35.000 --> 00:43.000
The Federation API is the way that two home servers talk to each other. That's a bit high level, so we can go a bit deeper than that.

00:43.000 --> 00:52.000
So, here's the Federation API with all the HTTP endpoints, all the JSON objects, and that basically tells you everything you need to know. So, that's it. Thank you very much.

00:52.000 --> 01:06.000
So, what is the Federation API really, and what makes it hard to understand? Because it is a set of HTTP APIs and a way to exchange JSON.

01:06.000 --> 01:10.000
It is also a pub/sub mechanism, for device updates, for example.

01:10.000 --> 01:16.000
But the main thing that makes it hard is that last one, the way to synchronize data structures. In this case, it's the rooms.

01:16.000 --> 01:22.000
How do you synchronize rooms between servers? And that's what I'd like to talk to you about.

01:22.000 --> 01:30.000
A few of the constraints that Matrix has: we want these kinds of properties. We want it to be open, so anyone who has a server

01:30.000 --> 01:39.000
should be able to participate. There shouldn't be any single points of failure, and no central server that, if you can't talk to it or it goes down, means you can't communicate.

01:39.000 --> 01:47.000
Particularly communicate with other users who are on your own server. The fact that you can't talk to some other server shouldn't matter.

01:47.000 --> 01:55.000
Likewise, it works during network partitions. It shouldn't matter: your users on your server should still always be able to communicate.

01:55.000 --> 02:06.000
And it needs to be Byzantine fault-tolerant because, you know, the internet is a scary place, and people get up to all sorts of mischief, so you need to have a protocol that's going to tolerate that.

02:07.000 --> 02:16.000
So, if we had more time, I would explain how you build up to CRDTs, but we don't have time, so you're just going to have to go with me for now.

02:16.000 --> 02:25.000
But if you don't know CRDTs, this is the Wikipedia definition, along with some bold and underlined bits, which detail the relevance to Matrix here.

02:25.000 --> 02:33.000
So, the first one is essentially "works during network partitions". The second one indicates there's some sort of merge algorithm; for us, this is the state resolution algorithm,

02:33.000 --> 02:40.000
and then the last point there is basically saying that if you've seen the same events, then you are in the same state, your room state, for example.

02:40.000 --> 02:51.000
And the term "conflict-free" is a little bit misleading, because in general CRDTs can't prevent conflicts, but they do give you a deterministic way to resolve those conflicts.

02:51.000 --> 03:01.000
If it helps, you can think of it as kind of like git, but with automatic merge conflict resolution. That's one way of thinking about this.

03:02.000 --> 03:10.000
The thing with CRDTs is they're quite special, because they prioritise the A and P of the CAP theorem: availability and partition tolerance,

03:10.000 --> 03:18.000
at the cost of consistency. But the consistency guarantees they can give you are still quite strong; in fact, it's called strong eventual consistency in the literature,

03:18.000 --> 03:30.000
which basically consists of the two points mentioned: eventual delivery and convergence. Eventual delivery means that if one correct server sees some event, then every other correct server is eventually going to see the same event

03:30.000 --> 03:38.000
and apply that event; they will see it. And then convergence: once you've applied all the same events, you will be in the same state.

03:38.000 --> 03:58.000
The key thing that makes this different from most other forms of distributed systems is that there's no consensus mechanism. If you're familiar with blockchains and stuff, you have the idea of a 51% attack, where if you have more than 51% of the hash rate, then you can kind of control the direction of where the chain goes. You can't do that with this.

03:58.000 --> 04:11.000
Likewise, things like Raft and Paxos are CP-style systems, so the moment you don't have a majority in a network partition, your system doesn't work.

04:11.000 --> 04:19.000
So, we're going to be talking about delta state CRDTs. Now, if you know CRDTs, that's fine; if you don't, just know there are a few different types of CRDTs.

04:19.000 --> 04:28.000
I'm talking about delta state CRDTs because that's effectively what Matrix is. Also, if you look at the CRDT literature, most of it talks about things in a peer-to-peer-like manner.

04:28.000 --> 04:35.000
I'm not going to do that; I'm going to talk about it in a federation-like manner, which again mirrors what Matrix actually does.

04:35.000 --> 04:45.000
So, you have A and B, who are going to be exchanging events, and we're going to be implementing a set as opposed to exchanging messages, but you'll see it's all very similar.

04:45.000 --> 05:00.000
In the middle here, you're going to see the operations that are going to be performed. Some of those operations are done concurrently, which is why the graph splits, and each one of these servers is going to walk along this graph in a slightly different way.

05:00.000 --> 05:15.000
So, first of all, red adds pizza to the set. You can then see the red line appear at the top there, showing that server has seen up to the pizza event, and then that event gets propagated over to the blue server, who then also applies it.

05:15.000 --> 05:25.000
Then the blue guy adds beer to the set, which then gets applied, and then concurrently red does the same thing, because who doesn't like beer?

05:26.000 --> 05:32.000
And of course, now you've got two beer adds in existence, but it's a set, so we actually only care about it once.

05:32.000 --> 05:43.000
And then the operations get exchanged between the two servers, and then we have applied the same updates, and therefore we're in the same state, which is that there is pizza and beer in the set.

05:43.000 --> 05:53.000
Now what happens is a merge operation, because the next event that happens is someone decides they don't like beer anymore, and they remove beer from the set.

05:53.000 --> 06:02.000
Now, what happens here is application-defined. Because we are implementing a set, we probably want to remove beer from the set entirely.

06:02.000 --> 06:09.000
If this were a counter, in which case we've added the beer thing twice, then we might just want to decrement by one.

06:09.000 --> 06:15.000
But because we're implementing a set, we're going to remove beer from the set entirely, and then propagate that over.

06:15.000 --> 06:23.000
And you can see, from how the two servers have explored this graph, that they have eventually seen the same events, so they are in the same state.

06:23.000 --> 06:30.000
So it's worth noting that this could have gone slightly differently.

06:31.000 --> 06:44.000
So for example, if we concurrently add those beers again: before A and B could communicate that they'd both added a beer, perhaps there was a network partition or something, and they couldn't exchange updates.

06:44.000 --> 06:48.000
And now the blue guy is going to come along and delete the beer as before.

06:48.000 --> 06:51.000
So in this case, the graph shape is different, right?

06:51.000 --> 06:57.000
Because the delete is not pointing to both of the adds, it's only pointing to one of them, because it hasn't seen the other one.

06:57.000 --> 07:11.000
But the state on this one is the same as it was before, whereas on this one it's going to be different, because what the blue server is now going to say is: I'm going to add beer, and I'm going to immediately remove the beer that I just added, which effectively is a no-op.

07:11.000 --> 07:21.000
And then at some point, the red server is going to go and send over the beer that they did add, and then you'll end up converging on the same state again.

07:21.000 --> 07:25.000
Now what happens here is application defined as well.

07:25.000 --> 07:39.000
So concurrently, you've got beer added and beer removed, but because this is a set, the fact that there were two beers added and only one removed means there's still beer in the set.

07:39.000 --> 07:54.000
However, if the delete here was a ban, and the add was, say, the person who was being banned changing the room name, then you're going to want that delete to overwrite what happened.

07:54.000 --> 08:03.000
So it's application-defined exactly what the semantics are here, and we're going to keep coming back to this, as you'll see in a moment.
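As an aside for readers: the beer-and-pizza walkthrough above maps closely onto what the CRDT literature calls an observed-remove set (OR-set), where each add carries a unique tag and a remove only tombstones the tags it has actually seen, so a concurrent add survives the merge. Here is a minimal Python sketch of that scenario; the class and the operation format are illustrative, not Matrix's wire format.

```python
import uuid

class ORSet:
    """Observed-remove set: each add is tagged with a unique ID, and a
    remove only deletes the tags it has actually seen, so an add that
    happened concurrently with the remove survives the merge."""
    def __init__(self):
        self.tags = {}          # element -> set of unique add-tags
        self.removed = set()    # tombstoned tags

    def add(self, elem):
        op = ("add", elem, uuid.uuid4().hex)
        self.apply(op)
        return op

    def remove(self, elem):
        # tombstone every tag this replica has observed for elem
        ops = [("rm", elem, t) for t in self.tags.get(elem, set())]
        for op in ops:
            self.apply(op)
        return ops

    def apply(self, op):
        kind, elem, tag = op
        if kind == "add" and tag not in self.removed:
            self.tags.setdefault(elem, set()).add(tag)
        else:
            self.removed.add(tag)
            self.tags.get(elem, set()).discard(tag)

    def value(self):
        return {e for e, tags in self.tags.items() if tags}

# Red (a) and blue (b) servers, as in the talk's walkthrough.
a, b = ORSet(), ORSet()
b.apply(a.add("pizza"))        # pizza propagates to blue
beer_a = a.add("beer")         # concurrent adds of beer on both sides...
beer_b = b.add("beer")
rm = b.remove("beer")          # ...then blue removes the beer it has seen
b.apply(beer_a)                # updates are finally exchanged
a.apply(beer_b)
for op in rm:
    a.apply(op)
assert a.value() == b.value() == {"pizza", "beer"}  # beer survives
```

The design point is the one the talk makes: the remove's semantics are application-defined, and here "remove only what you have observed" is what lets the concurrent add win.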

08:03.000 --> 08:12.000
Everything I've described up until now is not Byzantine fault-tolerant, because you can choose what those IDs are; it's very sensitive to which IDs you choose.

08:12.000 --> 08:23.000
So one of the key things that you can do is have a hash-linked DAG, which is what Matrix does, where the ID is the hash of the event JSON itself.

08:23.000 --> 08:32.000
And then in that event JSON, you include the previous event IDs, so that's forming a DAG structure, and they're all hash-linked together.

08:32.000 --> 08:43.000
This is mostly to make it so you can't create an event that collides with an already existing event, because if you could do that, you could forge causality.

08:43.000 --> 08:50.000
Senders can be spoofed, so naturally you should sign your events; that's hopefully obvious, lots of systems do this.
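A tiny sketch of the hash linking just described: each event's ID is a hash of its own JSON, and events name their predecessors by ID, forming a DAG. The `prev_events` field name mirrors Matrix, but this omits the canonical-JSON redaction rules and the signing step, so treat it as the idea rather than the real algorithm.

```python
import hashlib
import json

def event_id(event):
    """Content-addressed ID: hash of the event's canonicalised JSON.
    (Real Matrix hashes a redacted canonical-JSON form; this shows the idea.)"""
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return "$" + hashlib.sha256(canonical.encode()).hexdigest()

# Each event names its parents by ID, forming a hash-linked DAG.
e1 = {"type": "m.room.create", "prev_events": []}
e2 = {"type": "set.add", "content": "pizza", "prev_events": [event_id(e1)]}
e3 = {"type": "set.add", "content": "beer", "prev_events": [event_id(e2)]}

# Tampering with e2 changes its ID, so e3's prev_events link no longer
# resolves: you cannot quietly rewrite history or forge causality.
tampered = dict(e2, content="burger")
assert event_id(tampered) != event_id(e2)
assert event_id(e2) in e3["prev_events"]
```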

08:50.000 --> 08:56.000
And the third one is interesting as well: updates can be dropped.

08:56.000 --> 09:04.000
There are malicious servers, and if you are only going to talk to certain malicious servers, they can just drop all your updates, in which case you're not going to converge: you're not going to see the same updates, and that's not good.

09:04.000 --> 09:09.000
So Matrix has a point-to-point architecture at the moment, which means that we get around this problem.

09:09.000 --> 09:15.000
It obviously is not very performant; we'd like to fix that, maybe with something like random gossiping, but at the moment we don't.

09:15.000 --> 09:18.000
But that is partly for the security reasons.

09:18.000 --> 09:32.000
But if you combine these things, you end up with a Byzantine fault-tolerant system that can cope with an arbitrary number of bad actors, so it's immune to Sybil attacks, which is what I was alluding to earlier with the 51% attack.

09:32.000 --> 09:45.000
Hopefully, this is all looking very familiar at this point, because the hash linking, the signatures, all the little graphs with the DAG forming: that's literally what Matrix is at the federation level.

09:45.000 --> 09:57.000
And we wrote a tool (well, actually Mackey wrote a tool back in 2020) called TARDIS, which is a way of effectively visualizing this graph in Matrix rooms, and it looks a bit like that.

09:57.000 --> 10:11.000
And all it does is take a JSON file and render the events that are in that JSON file. The key thing, which I added a couple of months ago, was the ability to then perform state resolution on those events.

10:11.000 --> 10:18.000
So I think, hopefully, I can go and give a quick demo of this.

10:18.000 --> 10:22.000
So this is TARDIS.

10:22.000 --> 10:27.000
No, this is nothing. That's awkward.

10:27.000 --> 10:39.000
Screen mirroring, currently extending change to entire screen, this one.

10:40.000 --> 10:45.000
Okay, cool. It's a bit weird, but okay, fine.

10:45.000 --> 10:50.000
And now, if I do this, so you can see here's a graph of a room.

10:50.000 --> 11:01.000
These rooms are fake; they're not from a real room, and there's no real Alice and Bob here. But TARDIS allows you to inject fake events, and it will calculate event IDs and do all sorts of stuff for you.

11:01.000 --> 11:06.000
But you can see here that there are concurrent operations being done, and that's why the graph forks.

11:06.000 --> 11:20.000
And you can see that, concurrently, what's happened is Alice has removed Bob's permissions, and on another branch, so concurrently, Bob, not knowing this, made Charlie a mod, and Charlie went and set the room name.

11:20.000 --> 11:25.000
So what we want, semantically, is that we prefer safety over liveness: we don't want bad things to happen.

11:25.000 --> 11:32.000
So we're going to stop Bob making Charlie an admin, and we're going to stop Charlie from setting the room name.

11:32.000 --> 11:47.000
So if you hit resolve state here, which actually may not work, because I'm not sure if I'm running the... yep, that's fine.

11:47.000 --> 11:52.000
So if I hit resolve state here, you can see that some of these nodes have gone green.

11:52.000 --> 11:59.000
So this is the result of state resolution. What this means is that at this event, this end event here, this is what the room state is.

11:59.000 --> 12:08.000
So you can see in the room state, once these two forks have been merged, Charlie is not a moderator, and the room name has not been set by Charlie either.

12:08.000 --> 12:22.000
But what you can do is go back in time and see, you know, back here, before these operations were done, the actions had been applied, right, because it's optimistically applying these updates.

12:23.000 --> 12:41.000
This is why, sometimes, if you have concurrent behaviour and then your servers can talk to each other once again, you might see state get rolled back: because if something like this has happened, a concurrent revocation that you weren't aware of, then, you know, we have to apply it.

12:41.000 --> 12:48.000
So we have to decide what's going to happen in those circumstances, and what we decide is that you're not going to be able to do it.

12:48.000 --> 12:59.000
We prefer whatever the admins want to do; it's actually sorted by power, but I will get on to that. Right, now if I go back to here.

12:59.000 --> 13:15.000
Okay, cool. So, state resolution. Hopefully I've convinced you that the state resolution algorithm and CRDT merge algorithms are basically the same thing, and we just need to decide what the rules are for concurrent behaviour.

13:15.000 --> 13:23.000
And that's in fact what the state resolution algorithm does. Like I said before, it's application-defined, and we prefer safety over liveness.

13:23.000 --> 13:29.000
So I'm going to go through the state resolution algorithm at like a medium-ish level.

13:29.000 --> 13:40.000
I'm lying in a few places, but in essence, I'm mostly not lying. It's only small technicalities, in my defence, and it makes it easier to explain.

13:40.000 --> 13:51.000
Here you've got two forks you want to merge, and they have some sort of shared state where they originate from, which is the stuff in yellow here.

13:51.000 --> 13:59.000
The stuff in yellow is what's called the unconflicted state: it's not conflicted between the two forks.

13:59.000 --> 14:15.000
The blue and red stuff is conflicted, but it may contain events that, say, if one of those was setting the room name, that event itself may not be conflicted, but the event making the person a moderator is conflicted.

14:15.000 --> 14:28.000
So you need to include all the authorisation events in this conflicted set. That's basically where we pull in: okay, you're changing the room name, fine, but you also now need to include the power level event that made you a moderator.

14:28.000 --> 14:30.000
That sort of thing.

14:30.000 --> 14:38.000
Once you've done that, there are two stages to the state resolution algorithm. The first one only deals with a subset of the room state called power events.

14:38.000 --> 14:45.000
Power events are power levels and things that can revoke the ability for people to do certain actions.

14:45.000 --> 14:57.000
We're going to take the events and we're going to order them. We're going to split them based on the full conflicted set, and we're going to order them based on what the specification calls

14:57.000 --> 15:07.000
the reverse topological power ordering, which is a set of ordering rules which initially honours the causal relationships; then, where there is no causal relationship, because it's concurrent,

15:07.000 --> 15:12.000
we prefer power levels and then tie-break on timestamps and event IDs.

15:12.000 --> 15:17.000
Once we've done that, we're going to go and apply the iterative auth checks.

15:17.000 --> 15:28.000
So we're going to use the unconflicted set as the base state, and then we're going to go through each event in turn and see if the event passes the iterative auth checks.

15:28.000 --> 15:32.000
If they don't pass, then we won't include them in the final set.

15:32.000 --> 15:39.000
So once we've done this, we take the events that did pass, and that is called the partially resolved state.

15:39.000 --> 15:44.000
It's partially resolved because we're only dealing with a subset of the events here.

15:44.000 --> 15:52.000
And then the second step is to handle all the other remaining events that we didn't handle before: things like setting a room name, setting a room topic, that sort of thing.

15:52.000 --> 16:00.000
And again, similarly, we need to order these events. So we're going to compare each event to where it sits on the power level mainline.

16:00.000 --> 16:07.000
Every single state event points to a power level event, which gives the authorisation for that action to have taken place.

16:07.000 --> 16:12.000
And if you use that, you can then order them; you basically use that as the primary ordering mechanism.

16:12.000 --> 16:19.000
Now, some of those power level events may have been rejected by the iterative auth checks that were applied earlier, like in the scenario that we're about to see.

16:19.000 --> 16:24.000
If that happens, then you just walk back to the earlier power level event.

16:24.000 --> 16:31.000
But if you keep doing this, then you'll end up with an ordered set of remaining state events.

16:32.000 --> 16:34.000
Wait for the animation. There we go.

16:34.000 --> 16:38.000
That's called the mainline ordering, because it's based on the power level mainline.

16:38.000 --> 16:44.000
And then again, we're going to do a very similar thing. We're going to use the partially resolved state this time.

16:44.000 --> 16:49.000
And again, apply the iterative auth checks, and likewise, the auth checks may reject certain events or not.

16:49.000 --> 16:53.000
And then the result of that becomes the final resolved state.

16:53.000 --> 16:58.000
And that's mostly how this algorithm works.
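For readers, here is a toy Python sketch of that two-stage shape. It is deliberately simplified: the real reverse topological power ordering and mainline ordering are collapsed into plain sort keys, and the auth rule is invented for this example, so this shows the structure of the algorithm, not the specified behaviour.

```python
def resolve(unconflicted, conflicted, passes_auth):
    """Toy two-stage state resolution. Stage 1 orders and auth-checks the
    power events; stage 2 does the same for the remaining state events,
    starting from the partially resolved state stage 1 produced."""
    power = sorted((e for e in conflicted if e["power"]),
                   key=lambda e: (-e["sender_pl"], e["ts"], e["id"]))
    others = sorted((e for e in conflicted if not e["power"]),
                    key=lambda e: (e["ts"], e["id"]))
    state = dict(unconflicted)
    for e in power + others:        # stage 1, then stage 2
        if passes_auth(e, state):   # iterative auth checks
            state[e["key"]] = e     # accepted into the resolved state
        # rejected events are simply dropped: safety over liveness
    return state

# Alice (pl 100) demotes Bob; concurrently Bob (pl 50) sets the room name.
demote = {"id": "A", "power": True, "sender": "alice", "sender_pl": 100,
          "ts": 1, "key": ("m.room.power_levels", "")}
name   = {"id": "B", "power": False, "sender": "bob", "sender_pl": 50,
          "ts": 2, "key": ("m.room.name", "")}

def passes_auth(event, state):
    # invented rule: once the demotion is in the state, bob's level is 0
    pls = state.get(("m.room.power_levels", ""))
    pl = 0 if (pls and event["sender"] == "bob") else event["sender_pl"]
    return event["power"] or pl >= 50   # setting the name needs pl >= 50

resolved = resolve({}, [demote, name], passes_auth)
print(("m.room.name", "") in resolved)  # False: Bob's rename was rejected
```

Note how the rename is rejected even though it happened "later": once the demotion is ordered first and applied, the iterative auth checks drop Bob's event, which is the safety-over-liveness preference in action.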

16:59.000 --> 17:06.000
A few terms, most of which I've touched upon. So yeah, the unconflicted set is the things that both forks agree on.

17:06.000 --> 17:12.000
The power events are things that may remove the ability for users to do certain actions.

17:12.000 --> 17:19.000
The power level mainline: the reason why this exists is because we want to split room state up.

17:19.000 --> 17:27.960
Yeah, we want to split room state into epochs, effectively. Like, if there's concurrent behaviour, you need to try to order the events in a way that isn't

17:27.960 --> 17:34.960
just using the timestamp, which can be gamed. So we use the power level mainline to order things roughly causally.

17:34.960 --> 17:40.960
So the state resolution algorithm is not infallible, okay?

17:40.960 --> 17:45.960
It isn't perfect. There are cases where it behaves in ways we do not expect it to behave.

17:45.960 --> 17:51.960
And part of this is, well, most of this is down to the fact that servers are walking this graph in different ways.

17:51.960 --> 17:54.960
And therefore processing events in different orders.

17:54.960 --> 18:01.960
And this is predominantly due to connectivity issues. And, you know, if we know of certain failure modes, that's great.

18:01.960 --> 18:06.960
We can look at them in TARDIS and we can resolve them and make sure everything works or doesn't work.

18:06.960 --> 18:13.960
But it would be nice if we could discover new failure modes and new ways that things can break.

18:13.960 --> 18:18.960
So this is introducing a new tool, which has only recently been released, called Chaos.

18:18.960 --> 18:22.960
Chaos uses similar techniques to what I've done before with Complement Crypto.

18:22.960 --> 18:29.960
It runs home servers in Docker containers, and it uses a man-in-the-middle proxy to sniff the federation traffic.

18:29.960 --> 18:35.960
And it can also modify the federation traffic, or just reject the traffic entirely to cause partition faults.

18:35.960 --> 18:45.960
Likewise, we can also introduce crash faults by SIGKILLing home servers at random, or at determined times.

18:45.960 --> 18:52.960
So the best way to do this will be to demo this one as well.

18:52.960 --> 18:59.960
Okay, so if I, so I did manage, oh, yep, it's fine.

18:59.960 --> 19:03.960
So can I do that? Okay, thank you.

19:03.960 --> 19:11.960
So here is Chaos. Chaos has got two clients on different home servers.

19:11.960 --> 19:18.960
These are real home servers, by the way. And you can hit start, and what's going to happen is it's going to send lots of client-server API traffic,

19:18.960 --> 19:23.960
and then lots of federation traffic, which you're going to see at the bottom, and you can see everything's happy.

19:23.960 --> 19:30.960
You can then hit the netsplit button, which will then block the traffic, so every time they try to send a request it will say blocked.

19:30.960 --> 19:33.960
And then at some point you can hit the netsplit button again.

19:33.960 --> 19:40.960
And then what's going to happen is it's going to reconcile the state and you'll see a bunch of extra requests being made.

19:40.960 --> 19:46.960
There you go; it's kind of re-synchronizing what the room state is, effectively.

19:46.960 --> 19:58.960
And then you can hit test, and what test is going to do is ensure that the room membership list on the client on home server one and on home server two matches, is in agreement.

19:58.960 --> 20:05.960
It also matches what Chaos believes the room state is, because all three of those should match, and Chaos is like the golden state of what should be.

20:05.960 --> 20:09.960
And you can see here it said success because the room state did match.

20:09.960 --> 20:26.960
And so using this as a tool, you know, we can introduce netsplits, we can crash the home servers. The idea behind this is to have one of these running until something bad happens, and then we can see what went wrong.

20:26.960 --> 20:38.960
So I will just close that down and then go to here. And the other thing is, obviously, you can run this without the web UI, in which case you just get more of a log-like output.

20:38.960 --> 20:43.960
So you can see the federation requests going through there; each tick is the number of client-server API requests.

20:43.960 --> 20:49.960
And then you'll see at some point there's the netsplit, and it will block some traffic. There you go.

20:49.960 --> 20:57.960
So that's been a really useful tool for us to investigate improvements to the federation API protocol.

20:58.960 --> 21:07.960
So yeah, the question is: why make these tools at all in the first place? Mostly it's to

21:07.960 --> 21:16.960
help develop and make some changes to the Federation API, to make it more secure and to make it more performant as well.

21:16.960 --> 21:29.960
So we've been working quite closely with academia, particularly Florian Jacob, and we're going to be proposing some protocol changes to the Federation API, just to make it a bit more robust to

21:29.960 --> 21:35.960
various faults. And if you want to read more, feel free to take a picture of this. These are

21:35.960 --> 21:43.960
some academic papers which talk about CRDTs in more detail, and particularly in relation to Matrix as well.

21:43.960 --> 21:46.960
So I would highly recommend going through those if you're interested.

21:46.960 --> 21:48.960
And thank you very much.

21:48.960 --> 22:03.960
Yeah, how do you differentiate failure modes from...

22:04.960 --> 22:13.960
Yeah, how do you differentiate failure modes from malicious attacks? Ultimately you can't, right, because a malicious actor could just be partitioning off the network.

22:13.960 --> 22:21.960
Something like Chaos only does non-malicious attacks, so we're talking crash and partition faults.

22:21.960 --> 22:37.960
It's very hard to automatically model malicious attacks, because that may be altering certain fields in the event, or altering signatures, or something like that, so we can't automate testing for them.

22:37.960 --> 22:45.960
If we see stuff in the wild, then obviously we can go and see what went wrong, but there are limits to what you can do

22:45.960 --> 22:47.960
in an automated fashion.

22:47.960 --> 22:53.960
Can you use this for deterministic fuzzing and non-deterministic fuzzing?

22:53.960 --> 22:55.960
Can you use it for fuzzing?

22:55.960 --> 22:57.960
Am I going to make it fuzz?

22:57.960 --> 23:06.960
Well, effectively it kind of does already, because it will run until there is a failure, either on the client-server API or on the Federation API.

23:06.960 --> 23:10.960
So in that way, it is already a fuzzing tool.

23:11.960 --> 23:20.960
I would like to make it deterministic. There was a talk yesterday in the testing room, which I went to, about deterministic simulation testing.

23:20.960 --> 23:27.960
And if you've ever seen the TigerBeetle presentation, that's the sort of stuff I'm talking about, and that was the inspiration for Chaos.

23:27.960 --> 23:39.960
I would love to have that, but to do that you need to effectively re-engineer your home server so you can mock out things like your network I/O, your disk I/O, and clocks.

23:39.960 --> 23:45.960
So that's quite an undertaking, and it's something which we don't really have time to do right now.

23:45.960 --> 23:47.960
So maybe in the future.

23:47.960 --> 23:48.960
Yes.

23:48.960 --> 23:56.960
You were talking about the application being able, or the application needing, to decide how to resolve the conflict.

23:56.960 --> 24:05.960
Could we then maybe see, in the major Matrix implementations or apps like that, that you can then maybe, as a sort of super moderator, choose...

24:06.960 --> 24:10.960
So yeah.

24:27.960 --> 24:30.960
So the question is basically: can you,

24:30.960 --> 24:37.960
at a per-room level, define what the application-specific merge algorithm is going to be?

24:37.960 --> 24:46.960
And, you know, leave it to be more of a social problem and say, you know, this moderator is going to say these are the rules? Ultimately, no, because all servers need to come to an agreement.

24:46.960 --> 24:54.960
Just because one guy says this is how you should merge things, the other servers all have to agree, and why should they agree with what your server says?

24:54.960 --> 24:57.960
So it has to be baked into the protocol.

24:58.960 --> 24:59.960
Yeah.

24:59.960 --> 25:00.960
Yeah.

25:00.960 --> 25:03.960
This kind of goes in the same direction. I'm wondering:

25:03.960 --> 25:18.960
Matthew talks a lot about Matrix in other contexts, like, I don't know, IoT or whatever use case. Might that in the future require changes to the state resolution algorithm, where these kinds of application-specific conflict solutions are not...

25:18.960 --> 25:20.960
So is it.

25:20.960 --> 25:21.960
Yeah.

25:21.960 --> 25:31.960
Matthew wanting to use Matrix for non-messaging, IoT or things like that: does it require changes to the application-defined merge algorithms?

25:31.960 --> 25:32.960
Yes.

25:32.960 --> 25:34.960
I think so.

25:34.960 --> 25:41.960
I think at some point we may want to have the ability to put additional CRDTs on top of Matrix.

25:41.960 --> 25:44.960
I mean, that's just what I think.

25:44.960 --> 25:56.960
But I agree, at some point you're going to want more complex structures, which then need to have additional rules for how you handle merges.

25:56.960 --> 25:57.960
You have a question.

25:57.960 --> 25:58.960
Hello.

25:58.960 --> 26:12.960
Are you looking mostly only at the state resolution, or are you also looking at how other layers of Matrix, possibly the server key infrastructure and stuff, then affect the state resolution?

26:12.960 --> 26:13.960
Yes.

26:13.960 --> 26:14.960
I mean.

26:14.960 --> 26:15.960
Sorry.

26:15.960 --> 26:20.960
So, are we looking only at state resolution, or are we looking at other parts of the Federation API as well?

26:20.960 --> 26:22.960
We are looking at other parts of the Federation API.

26:22.960 --> 26:24.960
We are looking at server keys.

26:24.960 --> 26:32.960
We're also looking at the way that we reconcile state for partial synchronization.

26:32.960 --> 26:40.960
So if you've been offline for a very long time and you need to catch up, then, you know, what we currently do versus what we could do, that kind of thing.

26:40.960 --> 26:41.960
We are looking at that.

26:41.960 --> 26:42.960
Yeah.

26:42.960 --> 26:48.960
And chaos looks into the internal state of some home server, right?

26:48.960 --> 26:49.960
No.

26:49.960 --> 26:52.960
Chaos does not look into the internal state of the home server.

26:52.960 --> 26:54.960
We have another implementation.

26:54.960 --> 26:55.960
Yes.

26:55.960 --> 26:56.960
Yes.

26:56.960 --> 26:59.960
So all it needs is the client-server API and the Federation API.

26:59.960 --> 27:05.960
And in fact, the main thing it does to intercept it all is: you literally just set the HTTP_PROXY environment variable.

27:05.960 --> 27:09.960
You set that to the man-in-the-middle proxy, and that's basically all you need to do.

27:09.960 --> 27:13.960
So it's really easy to get it working with your Conduits and your Dendrites of the world.

27:13.960 --> 27:15.960
So thank you again.

27:15.960 --> 27:24.960
I'm really impressed by the visuals you used in the demonstration, and also by the tools, and the naming of the tools.

27:24.960 --> 27:25.960
Thank you very much.

27:25.960 --> 27:26.960
Thank you for explaining.

27:26.960 --> 27:27.960
Thank you.

