WEBVTT

00:00.000 --> 00:13.000
How do you do this?

00:13.000 --> 00:16.000
Sorry, I just-

00:16.000 --> 00:20.000
Hello, hello, everyone can hear me?

00:20.000 --> 00:23.000
Good.

00:23.000 --> 00:25.000
All right, listen up.

00:25.000 --> 00:27.000
We've got Zeeshan Ali Khan,

00:27.000 --> 00:32.000
who's going to tell us about how he's optimized zbus by 95%.

00:32.000 --> 00:35.000
Hello.

00:35.000 --> 00:39.000
Hello, everyone.

00:39.000 --> 00:42.000
Thank you.

00:42.000 --> 00:47.000
I think you should wait and make sure you're not disappointed before you applaud.

00:47.000 --> 00:54.000
So I have a very catchy title, and I'm guessing that's how you came here.

00:54.000 --> 00:58.000
But let's see if I can actually entertain you with this talk.

00:58.000 --> 01:02.000
So first of all, about myself.

01:02.000 --> 01:08.000
I've generally just been an open-source Linux guy for a long time.

01:08.000 --> 01:12.000
I mean, doing Linux related stuff,

01:12.000 --> 01:16.000
systems stuff for 25 years now, almost.

01:16.000 --> 01:20.000
Yeah, love open source, and that's why I'm here.

01:21.000 --> 01:23.000
And I'm from different countries.

01:23.000 --> 01:24.000
I'm originally from Pakistan.

01:24.000 --> 01:26.000
I also have Finnish citizenship.

01:26.000 --> 01:28.000
Yeah, I lived in many countries.

01:28.000 --> 01:31.000
These days I'm in Berlin settled down quite a bit.

01:31.000 --> 01:33.000
So probably not moving.

01:33.000 --> 01:34.000
Anytime soon.

01:34.000 --> 01:36.000
But you never know.

01:36.000 --> 01:39.000
So, and for last many years, people who know me,

01:39.000 --> 01:43.000
they know that I'm very much obsessed with this crab thing,

01:43.000 --> 01:46.000
that you're probably also into.

01:47.000 --> 01:52.000
And I work for a company called JUCR, and it's a startup in Berlin,

01:52.000 --> 01:55.000
and we do end-to-end charging solutions.

01:55.000 --> 02:00.000
And my team works on the charging station itself, all the software,

02:00.000 --> 02:02.000
and it's all written in Rust.

02:02.000 --> 02:04.000
Cool.

02:04.000 --> 02:07.000
So, really cool stuff.

02:07.000 --> 02:10.000
But let's get to the talk.

02:10.000 --> 02:14.000
And the first question is, like, what the heck is zbus?

02:14.000 --> 02:19.000
Well, it's a pure Rust D-Bus library.

02:19.000 --> 02:23.000
And I'm guessing a lot of people don't know what that means.

02:23.000 --> 02:26.000
Or what this whole D-Bus thing even is.

02:26.000 --> 02:30.000
And I don't blame you because it's a very specific technology

02:30.000 --> 02:34.000
that is used solely in a specific domain.

02:34.000 --> 02:37.000
And so if you don't know it, that just means you're not,

02:37.000 --> 02:42.000
blessed or cursed, or however you want to put it.

02:43.000 --> 02:50.000
But it's designed to be a very efficient binary IPC protocol.

02:50.000 --> 02:56.000
And it's like all the systems stuff that currently is there,

02:56.000 --> 03:00.000
like all the systemd services, all the GNOME,

03:00.000 --> 03:06.000
KDE services and applications, they use this protocol heavily to this day.

03:06.000 --> 03:11.000
It started more than 20 years ago, and it's still being heavily used.

03:11.000 --> 03:15.000
And I needed that, so I created something for that.

03:15.000 --> 03:19.000
And that's zbus, it's a D-Bus library, as I said.

03:19.000 --> 03:23.000
And before zbus, there was already another library, called dbus.

03:23.000 --> 03:27.000
Well, to put it in slightly unkind terms, they have squatted the name dbus.

03:27.000 --> 03:31.000
And so actually a lot of people end up using it just because of that,

03:31.000 --> 03:34.000
because that's the first hit when you search for dbus.

03:34.000 --> 03:37.000
So that was already there for years.

03:37.000 --> 03:42.000
And it was like a wrapper around the notoriously horrible C library,

03:42.000 --> 03:45.000
the libdbus C API, that nobody really uses even in the C world.

03:45.000 --> 03:48.000
So when that was wrapped in Rust, it wasn't that great either.

03:48.000 --> 03:53.000
And it had multiple issues, not only because it was a wrapper around C,

03:53.000 --> 03:56.000
but also the API wasn't that great, and many things like that.

03:56.000 --> 04:03.000
So that's why I created zbus, and we also have a really cool illustration by the

04:03.000 --> 04:06.000
GNOME designer, Jakub Steiner.

04:06.000 --> 04:11.000
He makes really cool illustrations, and he made one for us, which is cool.

04:11.000 --> 04:13.000
Yeah, I'm supposed to be the driver.

04:13.000 --> 04:18.000
I used to have a beard; I got it cut this morning, actually.

04:18.000 --> 04:24.000
And the thing is that over the years, it has become the go-to D-Bus crate.

04:24.000 --> 04:29.000
As I mentioned, the only reason that I know people still end up using dbus,

04:29.000 --> 04:31.000
because of the name squatting.

04:31.000 --> 04:36.000
But otherwise, if they really pay attention, then they always choose zibas,

04:36.000 --> 04:39.000
because its API is much more ergonomic, and it's pure Rust.

04:39.000 --> 04:43.000
So porting is also easy to different architectures and stuff.

04:43.000 --> 04:48.000
So yeah, it has won, basically.

04:48.000 --> 04:52.000
And also like there's a lot of praises from people who when they use it,

04:52.000 --> 04:55.000
they come and say, okay, this is awesome.

04:55.000 --> 05:01.000
They might find a bug or two here and there, but usually they're super happy about this crate.

05:01.000 --> 05:04.000
So what is the problem then?

05:04.000 --> 05:05.000
Everything is done, right?

05:05.000 --> 05:09.000
Like, it works, everyone loves it, so what's the problem then?

05:09.000 --> 05:15.000
But before we go there, I have to tell you in a bit more detail how zbus works under the hood.

05:15.000 --> 05:21.000
So before I made zbus, I made the lower layer of it, called zvariant,

05:21.000 --> 05:25.000
which does the encoding and decoding, or as we in the Rust world

05:25.000 --> 05:32.000
like to call it, the serialization and deserialization, of the wire format.

05:32.000 --> 05:36.000
And zvariant is responsible just for that.

05:36.000 --> 05:43.000
And the first version wasn't serde-based, but the first issue someone filed was:

05:43.000 --> 05:47.000
why are you not providing a serde interface? Because, as you probably know,

05:47.000 --> 05:51.000
in the Rust world, for serialization and deserialization,

05:51.000 --> 05:55.000
serde is the king; everyone just uses it, it's taken for granted.

05:55.000 --> 05:59.000
Maybe at some point it will even be part of the standard library.

05:59.000 --> 06:06.000
But in 2.0, the main change was that I made it serde-based,

06:06.000 --> 06:10.000
and it had a much nicer API as well because of that.

06:11.000 --> 06:17.000
But I also noticed, while working on this serde API,

06:17.000 --> 06:22.000
that there are some very fundamental incompatibilities between D-Bus

06:22.000 --> 06:25.000
and the serde data model.

06:25.000 --> 06:31.000
And also certain data types in Rust generally, like, for example, Option.

06:31.000 --> 06:36.000
So option is a type that is used a lot in the rust world for very good reasons,

06:36.000 --> 06:41.000
because sometimes you need nullability, and this is a very safe way of representing that.

06:41.000 --> 06:45.000
But there's another issue: the D-Bus wire format

06:45.000 --> 06:49.000
actually requires things to be aligned properly,

06:49.000 --> 06:51.000
specific types in a specific way.

06:51.000 --> 06:56.000
And the way you do alignment is that you add zero bytes in serialization,

06:56.000 --> 07:01.000
and in deserialization you skip those zero bytes.

07:01.000 --> 07:06.000
But this alignment, this padding that you're adding and skipping,

07:06.000 --> 07:08.000
is dependent on the data type.

07:08.000 --> 07:12.000
So, for example, a 32-bit integer will require four-byte alignment,

07:12.000 --> 07:16.000
a 64-bit one would require eight bytes, and stuff like that.

07:16.000 --> 07:21.000
So it depends a lot on the type, and that's where the problem comes up.
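
NOTE
The padding rule described here can be sketched like this (editor's illustrative code; the function name is made up and this is not the actual zvariant API):
```rust
// Editor's sketch of D-Bus-style alignment padding: serialization
// inserts this many zero bytes before a value, and deserialization
// skips the same amount.
fn padding_for(pos: usize, alignment: usize) -> usize {
    // Zero bytes needed so the next value starts at a multiple of `alignment`.
    (alignment - (pos % alignment)) % alignment
}
```
For example, a value with four-byte alignment being written at offset 5 would first get 3 zero bytes, so it starts at offset 8.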

07:21.000 --> 07:28.000
Because in dbus spec, if you have an empty array,

07:28.000 --> 07:31.000
you are supposed to add padding.

07:31.000 --> 07:36.000
You are supposed to align for the first element, even if there is no first element.

07:36.000 --> 07:41.000
So if it's an empty array, you are supposed to align the first element.

07:41.000 --> 07:46.000
And that's, in my opinion, quite stupid, but that's what the spec says,

07:46.000 --> 07:49.000
and it hasn't changed for 20 years, and it can't be changed now.

07:49.000 --> 07:51.000
It's like it will break the whole world.

07:51.000 --> 07:56.000
So that does not work well with serde, because in serde

07:56.000 --> 08:02.000
you have, for example, this very simple serializer, right?

08:02.000 --> 08:07.000
This is the trait that you implement if you are implementing a serializer,

08:07.000 --> 08:10.000
which is, obviously, called Serializer.

08:10.000 --> 08:16.000
And in this one, when you are serializing a sequence,

08:16.000 --> 08:21.000
which is the serde data type that arrays, for example, map to,

08:21.000 --> 08:28.000
you first get serialize_seq called on you.

08:28.000 --> 08:35.000
And this one, as you can see, does not have any information about the type that will be encoded,

08:35.000 --> 08:37.000
not even in a generic way.

08:37.000 --> 08:41.000
So the only thing you know here is that

08:41.000 --> 08:44.000
some array is about to be serialized, what type of array?

08:44.000 --> 08:45.000
No idea.

08:45.000 --> 08:48.000
So if you have an empty array, when you get this call,

08:48.000 --> 08:53.000
you don't know how much padding to add, or, in the case of the deserializer, how much to skip.

08:53.000 --> 08:56.000
And that's what makes it completely incompatible.
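
NOTE
The shape being described can be restated with a trimmed-down sketch (the mini trait below is made up for illustration; the real serde Serializer has many more methods and different signatures):
```rust
// Editor's sketch: serialize_seq receives only an optional length,
// never the element type.
trait MiniSerializer {
    type SeqHandle;
    fn serialize_seq(self, len: Option<usize>) -> Self::SeqHandle;
}
struct Recorder;
impl MiniSerializer for Recorder {
    type SeqHandle = Option<usize>;
    fn serialize_seq(self, len: Option<usize>) -> Option<usize> {
        // An empty Vec<u32> and an empty Vec<u64> both arrive here as
        // Some(0): no way to tell what padding the element type needs.
        len
    }
}
```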

08:56.000 --> 09:03.000
So I was banging my head against this for a long time, because every encoding API that you will see with serde

09:03.000 --> 09:08.000
just takes the data type that you have, your data.

09:08.000 --> 09:14.000
And it only needs to implement two traits from serde, Serialize and Deserialize,

09:14.000 --> 09:17.000
for serialization and deserialization.

09:17.000 --> 09:23.000
And as long as your type implements those, it's going to work for any of those formats.

09:23.000 --> 09:29.000
So you can take the same type and encode it into TOML,

09:29.000 --> 09:32.000
and you can encode it into JSON, and all of those.

09:32.000 --> 09:39.000
But it doesn't work exactly like that for D-Bus, because of this fundamental incompatibility.

09:39.000 --> 09:42.000
So I needed something extra.

09:42.000 --> 09:49.000
So as you can see, when you implement this sequence serializer,

09:49.000 --> 09:56.000
you get serialize_element, but for an empty sequence,

09:56.000 --> 09:58.000
this will just never be called.

09:58.000 --> 10:01.000
So you never get any information.

10:01.000 --> 10:03.000
So, how did I solve it?

10:04.000 --> 10:06.000
Enter D-Bus type signatures.

10:06.000 --> 10:13.000
So D-Bus has the concept of signatures that define what is encoded in a specific message.

10:13.000 --> 10:19.000
And that is actually always transmitted with the message itself.

10:19.000 --> 10:27.000
So you can write generic APIs, sorry, generic clients, that can encode and decode messages for any service

10:27.000 --> 10:29.000
without knowing anything about it.

10:29.000 --> 10:32.000
So it's very introspectable, and this is part of that introspection.

10:32.000 --> 10:36.000
So I was like, okay, I can just make use of this.

10:36.000 --> 10:43.000
It's not as nice as for other formats, but I just needed to have another trait, called Type,

10:43.000 --> 10:48.000
that people have to implement in addition to Serialize and Deserialize,

10:48.000 --> 10:51.000
if they want to do D-Bus communication.

10:51.000 --> 10:53.000
And it looks a bit like that.

10:53.000 --> 10:55.000
It's extremely oversimplified here.

10:55.000 --> 10:59.000
The signature is actually very different in reality.

10:59.000 --> 11:03.000
And I encoded it like this in the beginning.

11:03.000 --> 11:08.000
It's just a wrapper around a string, basically.

11:08.000 --> 11:14.000
And then you have a Type trait that all types need to implement,

11:14.000 --> 11:17.000
and they just provide me a signature.

11:17.000 --> 11:21.000
And there are cases where you need a dynamic type,

11:21.000 --> 11:24.000
where the signature is only known at runtime.

11:24.000 --> 11:26.000
So for that you have a dynamic type,

11:26.000 --> 11:29.000
dynamic signature, basically.

11:29.000 --> 11:32.000
It's a bit of a bummer, because serde is everywhere.

11:32.000 --> 11:34.000
Everyone can implement it.

11:34.000 --> 11:39.000
Actually, most of the crates you find that provide some data type you can use,

11:39.000 --> 11:43.000
they implement serde's Serialize and Deserialize for you.

11:43.000 --> 11:49.000
You just have to enable a Cargo feature, usually called serde.

11:49.000 --> 11:54.000
They all provide that, but nobody is going to provide, for D-Bus,

11:54.000 --> 11:59.000
special features or implementations, because it's very niche.

11:59.000 --> 12:02.000
So it's a bit of a bummer in that sense.

12:02.000 --> 12:06.000
But it's also a bummer in the sense that there's allocation.

12:06.000 --> 12:09.000
So usually it's represented as a string in the end.

12:09.000 --> 12:13.000
So when someone gives you a custom type, usually it is a complex type,

12:13.000 --> 12:17.000
because it's a struct or an enum.

12:17.000 --> 12:23.000
And for that one, you have to put multiple strings together.

12:23.000 --> 12:25.000
And it has to be at runtime.

12:25.000 --> 12:28.000
For reasons I won't get into.

12:28.000 --> 12:30.000
I tried really hard actually.

12:30.000 --> 12:31.000
I talked to many rust experts.

12:31.000 --> 12:34.000
It is actually possible to put these strings together at compile time.

12:34.000 --> 12:38.000
But the way I need it, it's not possible with current Rust.

12:38.000 --> 12:40.000
So that's also a problem.

12:40.000 --> 12:42.000
And that means a lot of allocations.

12:42.000 --> 12:45.000
And it's not const at all.
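
NOTE
The allocation problem can be illustrated with a string-based sketch (editor's illustration only; the real zvariant Type trait differs in its details):
```rust
// Editor's sketch: signatures as Strings. Building an array signature
// concatenates at runtime, because const &str concatenation over a
// generic parameter is not possible in current Rust.
trait Type {
    fn signature() -> String;
}
impl Type for u32 {
    fn signature() -> String {
        "u".to_string() // "u" is the D-Bus type code for UINT32
    }
}
impl<T: Type> Type for Vec<T> {
    fn signature() -> String {
        // "a" (array) followed by the element signature:
        // allocates on every single call.
        format!("a{}", T::signature())
    }
}
```
So a Vec&lt;Vec&lt;u32&gt;&gt; yields "aau", allocated fresh every time it is asked for.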

12:45.000 --> 12:51.000
But to make it still nice enough, as nice as I could possibly make it.

12:51.000 --> 12:56.000
I've provided this Type derive macro.

12:56.000 --> 13:01.000
So just like, in serde, you derive Serialize and

13:01.000 --> 13:02.000
Deserialize,

13:02.000 --> 13:05.000
you can also derive Type at the same time.

13:05.000 --> 13:07.000
So you just have to add one more thing.

13:07.000 --> 13:08.000
And that's it.

13:08.000 --> 13:12.000
So it's not a lot of work then.

13:12.000 --> 13:16.000
So yeah, it's just like this in addition to these two,

13:16.000 --> 13:19.000
you add one more to your type.

13:19.000 --> 13:24.000
And there are also implementations for all the common types, like all the standard library types,

13:24.000 --> 13:26.000
all the basic types, of course.

13:26.000 --> 13:31.000
And also popular external crates, like chrono or uuid,

13:31.000 --> 13:33.000
and all the types they provide.

13:33.000 --> 13:38.000
Because they're ubiquitous, it just makes sense.

13:38.000 --> 13:40.000
But it's still not the best.

13:40.000 --> 13:42.000
Still too many allocations, as I said.

13:42.000 --> 13:47.000
And it's not const, which I would really want to have, because then,

13:47.000 --> 13:50.000
you know, it's just zero cost, right?

13:50.000 --> 13:54.000
And you want things to be zero cost, not high cost.

13:54.000 --> 13:59.000
Oh well, I was like, okay, what about the performance?

13:59.000 --> 14:02.000
Like, should we actually look at the performance?

14:02.000 --> 14:08.000
Because I was just talking to someone earlier that a lot of times when you are talking about performance

14:08.000 --> 14:12.000
and things like that, most people don't look at actual numbers and data.

14:12.000 --> 14:15.000
They just assume that, oh yeah, you are depending on a lot of things.

14:15.000 --> 14:19.000
so the compiled result must be really slow. But if you actually look into it,

14:19.000 --> 14:20.000
usually it's not that bad.

14:20.000 --> 14:22.000
So I was like, okay, what about the performance?

14:22.000 --> 14:26.000
And fortunately, there was this one person who had decided to write yet another

14:26.000 --> 14:29.000
D-Bus library, and had written benchmarks comparing the libraries.

14:29.000 --> 14:33.000
And most people use another library from GIO.

14:33.000 --> 14:37.000
And that one has really, really nice API.

14:37.000 --> 14:40.000
For C API, it's really awesome.

14:40.000 --> 14:46.000
But it's a lot less efficient than libdbus, for many reasons.

14:46.000 --> 14:50.000
So the bar for efficiency is low there as well.

14:50.000 --> 14:53.000
But in any case, I thought like, what is the biggest bottleneck?

14:53.000 --> 14:55.000
I need to know anyway, right?

14:55.000 --> 15:00.000
So that's where we get to something called flame graphs.

15:00.000 --> 15:05.000
Flame graphs are something that show you a lot of information about

15:05.000 --> 15:07.000
what is your bottleneck?

15:07.000 --> 15:13.000
What are the functions that are taking most time at the runtime of your benchmarks?

15:13.000 --> 15:15.000
For example.

15:15.000 --> 15:19.000
And there's a tool for this, cargo flamegraph.

15:19.000 --> 15:24.000
And you can use that for profiling your applications as well as your benchmarks.

15:24.000 --> 15:27.000
And I use that for those same benchmarks.

15:27.000 --> 15:31.000
And I know it's like really depressing.

15:31.000 --> 15:33.000
I don't know what happened there.

15:33.000 --> 15:40.000
Like, when people created flame graphs, did they not know that there are other colors in the spectrum,

15:40.000 --> 15:44.000
or they thought that this is good enough.

15:44.000 --> 15:47.000
But it's insanely unreadable.

15:47.000 --> 15:50.000
But the good thing is that it creates an SVG.

15:50.000 --> 15:52.000
And you can then load it in your browser.

15:52.000 --> 15:54.000
And then you click on specific parts.

15:54.000 --> 15:55.000
And then it expands it.

15:55.000 --> 15:57.000
And then it's a lot more readable.

15:57.000 --> 15:59.000
So this might not have any colors.

15:59.000 --> 16:01.000
And it would, I think, be more useful, actually.

16:01.000 --> 16:03.000
But anyway, that's what you have.

16:03.000 --> 16:04.000
And that's the tool you have.

16:04.000 --> 16:05.000
But it works.

16:05.000 --> 16:10.000
So from that I realized that the biggest bottleneck was signature parsing.

16:10.000 --> 16:14.000
Because as I said, I represented the signature with a string.

16:14.000 --> 16:17.000
And then when I'm deserializing and serializing,

16:17.000 --> 16:22.000
I have to look at that signature because otherwise I don't know what type I'm encoding.

16:22.000 --> 16:26.000
And if I didn't, then I wouldn't know, for example, how much to pad.

16:26.000 --> 16:32.000
So the empty-array problem was solved with that, because I had access to the signature.

16:32.000 --> 16:36.000
And I knew exactly what type is encoded in the array.

16:36.000 --> 16:39.000
But I was parsing it all the time.

16:39.000 --> 16:42.000
And this was an especially big bottleneck for larger arrays.

16:42.000 --> 16:48.000
Because for larger arrays, for each and every element that I was encoding or decoding,

16:48.000 --> 16:51.000
I was parsing the signature again and again and again.

16:51.000 --> 16:54.000
And that was just that.

16:54.000 --> 16:58.000
So I had many sleepless nights because of this.

16:58.000 --> 17:01.000
And I was like, what can I do?

17:01.000 --> 17:05.000
And I recently found out that I'm a bit high on the autism spectrum.

17:05.000 --> 17:11.000
So that explains me having sleepless nights about things that don't matter to most people.

17:11.000 --> 17:13.000
But yeah.

17:13.000 --> 17:18.000
And yeah, I was thinking a lot about this thing.

17:18.000 --> 17:23.000
This also shows a bit how well my wife knows me.

17:23.000 --> 17:26.000
So she knows what I'm always obsessed about and stuff.

17:26.000 --> 17:31.000
So she knew what I was thinking about. But yeah, this is not our picture,

17:31.000 --> 17:34.000
It's just a representation.

17:34.000 --> 17:37.000
Yeah, so I was thinking about, like, how can I make it const?

17:37.000 --> 17:38.000
Like it would be so awesome.

17:38.000 --> 17:39.000
It would be constant.

17:39.000 --> 17:44.000
There would be no cost at all from this biggest bottleneck I have.

17:44.000 --> 17:48.000
And if I removed the biggest bottleneck, then I'd have really good performance.

17:48.000 --> 17:52.000
And years go by, I'm still thinking from time to time.

17:52.000 --> 17:55.000
I'm still asking people, meeting Rust experts, like, how can I make it better?

17:55.000 --> 17:57.000
Nobody's telling me anything.

17:57.000 --> 18:02.000
Until last year, when I joined my company, and we used something called postcard and postcard-rpc.

18:02.000 --> 18:10.000
Postcard is also an efficient binary IPC, or actually more like RPC, encoding.

18:10.000 --> 18:15.000
And postcard-rpc is on top of it, a way to do it through USB.

18:15.000 --> 18:18.000
Like it's designed for microcontrollers.

18:18.000 --> 18:24.000
So to communicate between microcontrollers and a host. It's from James Munns.

18:24.000 --> 18:30.000
And I realized that James was doing something very similar.

18:30.000 --> 18:40.000
He also had a trait that he was requiring, in postcard-rpc, from all the types, if they wanted to be encoded.

18:40.000 --> 18:43.000
But his was const.

18:43.000 --> 18:45.000
So I had this "aha" kind of moment.

18:45.000 --> 18:47.000
This is what I need.

18:47.000 --> 18:49.000
Why am I not doing this?

18:49.000 --> 18:53.000
I need to rethink my signature representation.

18:53.000 --> 19:00.000
Because why did I think that the representation has to be in a non-parsed format?

19:00.000 --> 19:03.000
Why can't it be in parsed format from the beginning?

19:03.000 --> 19:05.000
And that was like a mind-blowing revelation.

19:05.000 --> 19:09.000
And I immediately started coding it during the night or something I think it was late.

19:09.000 --> 19:10.000
And I was like, no, I have to do this.

19:10.000 --> 19:13.000
I have to find out if this actually can work.

19:13.000 --> 19:15.000
I couldn't sleep.

19:15.000 --> 19:20.000
So I created this new enum representation, and it's really simple.

19:20.000 --> 19:24.000
And one of the problems I had was that, in some cases, as I said,

19:24.000 --> 19:26.000
I needed the signature to be dynamic.

19:26.000 --> 19:29.000
So how do I make it dynamic if it's static only, right?

19:29.000 --> 19:36.000
And then I figured out that I can have further enums inside it, for the child types, like an array's element type or

19:36.000 --> 19:38.000
the field types of structs.

19:38.000 --> 19:41.000
I can have it either or.

19:41.000 --> 19:44.000
And in case of static, I can just create it in static way.

19:44.000 --> 19:47.000
And in case of dynamic, in more of a dynamic way.

19:47.000 --> 19:48.000
And then you do allocation.

19:48.000 --> 19:50.000
If you need allocation, then you do.

19:50.000 --> 19:53.000
But if you don't, you don't need to do any allocation.

19:53.000 --> 19:55.000
You can make it completely const and static.

19:55.000 --> 19:57.000
And that's what enabled this.

19:57.000 --> 20:00.000
So my Type trait became so simple.

20:00.000 --> 20:05.000
And all const and static, exactly as I was dreaming about for years.

20:05.000 --> 20:10.000
And you have no idea how good that felt.
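
NOTE
The parsed, const-buildable representation can be sketched like this (editor's illustration; the real zvariant signature enum is richer and also has a variant for child signatures only known at runtime):
```rust
// Editor's sketch: the signature is an enum, buildable in const
// context, so there is no runtime parsing and no allocation for
// statically known types.
#[derive(Debug, PartialEq)]
enum Signature {
    U32,
    Str,
    Array(&'static Signature), // child type behind a static reference
}
trait Type {
    const SIGNATURE: &'static Signature;
}
impl Type for u32 {
    const SIGNATURE: &'static Signature = &Signature::U32;
}
impl Type for String {
    const SIGNATURE: &'static Signature = &Signature::Str;
}
impl<T: Type> Type for Vec<T> {
    // Fully const: evaluated at compile time, zero runtime cost.
    const SIGNATURE: &'static Signature = &Signature::Array(T::SIGNATURE);
}
```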

20:10.000 --> 20:14.000
And the serializer and deserializer don't parse anymore.

20:14.000 --> 20:19.000
They have the parsed representation now, always, for the types.

20:19.000 --> 20:22.000
And they can, yeah, they just do a match basically.

20:22.000 --> 20:25.000
And that's all handled at compile time.

20:25.000 --> 20:27.000
Awesome.

20:27.000 --> 20:29.000
Well, not exactly.

20:29.000 --> 20:30.000
Actually.

20:30.000 --> 20:34.000
Because we have something called Variant in D-Bus, which is a dynamic type.

20:34.000 --> 20:38.000
So that type encodes a value, which could be anything.

20:38.000 --> 20:40.000
But it also encodes the signature with it.

20:40.000 --> 20:43.000
So anyone reading it first gets a signature.

20:43.000 --> 20:45.000
And they know, OK, what is it?

20:45.000 --> 20:46.000
What is it?

20:46.000 --> 20:47.000
What is the actual value in there?

20:47.000 --> 20:50.000
And then they can do the decoding based on the signature.

20:50.000 --> 20:53.000
But the thing is, you get the signature at runtime.

20:53.000 --> 20:55.000
So you have to do parsing at runtime.

20:55.000 --> 20:57.000
So what do I do about that?

20:57.000 --> 21:03.000
I was like, OK, let's also make the parser more robust and efficient.

21:03.000 --> 21:06.000
It's very efficient for the cases where I have to parse at runtime.

21:06.000 --> 21:10.000
And I used this crate that is also very ubiquitous:

21:10.000 --> 21:13.000
nom. For any kind of parsing you do in Rust,

21:13.000 --> 21:15.000
you use nom.
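
NOTE
What runtime signature parsing involves can be sketched with a small hand-rolled parser (editor's illustration only; the real nom-based parser handles the full signature grammar with different internals):
```rust
// Editor's sketch: parse a D-Bus signature string such as "au"
// (array of UINT32) into a tree, at runtime.
#[derive(Debug, PartialEq)]
enum Sig {
    U32,             // type code "u"
    U64,             // type code "t"
    Str,             // type code "s"
    Array(Box<Sig>), // "a" followed by the element signature
}
// Returns the parsed type plus the unconsumed remainder of the input.
fn parse_one(s: &[u8]) -> Option<(Sig, &[u8])> {
    let (first, rest) = s.split_first()?;
    match first {
        b'u' => Some((Sig::U32, rest)),
        b't' => Some((Sig::U64, rest)),
        b's' => Some((Sig::Str, rest)),
        b'a' => {
            // Recurse for the element type; note the Box allocation.
            let (child, rest) = parse_one(rest)?;
            Some((Sig::Array(Box::new(child)), rest))
        }
        _ => None, // codes not covered by this sketch
    }
}
```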

21:15.000 --> 21:22.000
So, to the numbers.

21:22.000 --> 21:24.000
Still with the old stuff,

21:24.000 --> 21:29.000
it was already the case that we were better than dbus-rs.

21:29.000 --> 21:32.000
We are now really, really much better than that.

21:33.000 --> 21:37.000
And some things went from milliseconds to microseconds.

21:37.000 --> 21:41.000
And there is the NaN in there.

21:41.000 --> 21:45.000
That's because we got a panic.

21:45.000 --> 21:49.000
And we got a panic because it became so efficient.

21:49.000 --> 21:56.000
It was because this particular test was so fast that the D-Bus broker,

21:56.000 --> 22:00.000
because, as I mentioned, D-Bus is based on a broker architecture,

22:00.000 --> 22:03.000
So there is a broker that sits between everyone.

22:03.000 --> 22:06.000
And that was disconnecting us because we were so fast.

22:06.000 --> 22:08.000
We were making the connections so fast now.

22:08.000 --> 22:10.000
Previously, they were not too fast.

22:10.000 --> 22:11.000
Like we were doing multiple connections.

22:11.000 --> 22:17.000
But it was so slow that we were not going above the security limits of the D-Bus broker.

22:17.000 --> 22:20.000
But now we were, so it just stopped us.

22:20.000 --> 22:22.000
A few times it worked.

22:22.000 --> 22:23.000
It was fine.

22:23.000 --> 22:27.000
So in those runs, I noticed that it was a very small number.

22:27.000 --> 22:30.000
But I didn't record it unfortunately.

22:30.000 --> 22:33.000
Awesome.

22:33.000 --> 22:39.000
But if you're good at math, unlike me, you would have noticed that I lied.

22:39.000 --> 22:43.000
The numbers that I showed were not 95%.

22:43.000 --> 22:45.000
But that's not because I lied.

22:45.000 --> 22:49.000
It's because it's very much hardware dependent.

22:49.000 --> 22:55.000
And if you are on a high-end machine where these benchmarks were done, it's fine.

22:55.000 --> 23:02.000
I mean, it's not that big a difference.

23:02.000 --> 23:07.000
Because the allocations and everything are super cheap on high-end machines.

23:07.000 --> 23:09.000
And there's a lot of parallelism and all that.

23:09.000 --> 23:12.000
So you don't see that big a difference.

23:12.000 --> 23:15.000
But it was still quite big actually, you saw.

23:15.000 --> 23:20.000
So I asked people who had low-end machines to run the benchmarks.

23:20.000 --> 23:22.000
And they did the benchmarks.

23:22.000 --> 23:28.000
On those machines, many of the test cases were at 94, 95, 96%, around that range.

23:28.000 --> 23:30.000
So it's not a lie.

23:30.000 --> 23:34.000
So if you have a low-end machine, you will see a huge difference.

23:34.000 --> 23:36.000
And also, it's type-dependent.

23:36.000 --> 23:41.000
So that 95% I'm citing, that was for fixed-size arrays.

23:41.000 --> 23:44.000
Even if they were huge arrays, still if they were fixed size arrays.

23:44.000 --> 23:47.000
Like for example, integers and those kind of things,

23:47.000 --> 23:52.000
then it was, it was really, really efficient.

23:52.000 --> 23:55.000
So how much time I have left?

23:55.000 --> 23:57.000
12 minutes.

23:57.000 --> 23:58.000
Okay, good.

23:58.000 --> 24:01.000
So I'm going to talk about the future now.

24:01.000 --> 24:06.000
Sorry, not to test it.

24:06.000 --> 24:19.000
So there are still some, I would say, maybe bottlenecks,

24:19.000 --> 24:22.000
small bottlenecks, etc., in all of zbus.

24:22.000 --> 24:25.000
And I'm sure people can make it more efficient.

24:25.000 --> 24:27.000
I have some to-dos as well for it.

24:27.000 --> 24:30.000
But in general, there's not a lot to do left.

24:30.000 --> 24:34.000
So, what is left is for us to move away from

24:34.000 --> 24:35.000
D-Bus now.

24:35.000 --> 24:37.000
Because it has lived its time.

24:37.000 --> 24:43.000
It's time for it to slowly move towards its demise and death.

24:43.000 --> 24:46.000
Because we are living in modern times now.

24:46.000 --> 24:49.000
And D-Bus is from very old times.

24:49.000 --> 24:52.000
And it just doesn't fit things anymore.

24:52.000 --> 24:53.000
Lennart has a talk,

24:53.000 --> 24:57.000
I think later, about systemd and the future, and stuff like that.

24:57.000 --> 24:59.000
And I think he will also talk a bit about this.

24:59.000 --> 25:03.000
Because systemd is very much moving towards this new protocol,

25:03.000 --> 25:04.000
Varlink.

25:04.000 --> 25:06.000
It's very simple actually.

25:06.000 --> 25:07.000
It's based on JSON.

25:07.000 --> 25:09.000
It's like Unix sockets,

25:09.000 --> 25:11.000
and you have JSON on top.

25:11.000 --> 25:13.000
And there's special encoding and stuff.

25:13.000 --> 25:17.000
But it's essentially JSON.

25:17.000 --> 25:21.000
And as you know, if you know anything about

25:21.000 --> 25:24.000
serde, its first target was JSON.

25:24.000 --> 25:26.000
So serde and JSON,

25:26.000 --> 25:28.000
they really, really match well.

25:28.000 --> 25:32.000
And the person who maintains them is the same person.

25:32.000 --> 25:38.000
Like, the serde developer and the serde_json developer are the same people.

25:38.000 --> 25:41.000
So it's not a big surprise.

25:41.000 --> 25:45.000
And also all the data types match well like you can use options.

25:45.000 --> 25:47.000
You can have nullable types.

25:47.000 --> 25:50.000
Twenty-first century stuff.

25:50.000 --> 25:54.000
And yeah, it's in general it's just better.

25:54.000 --> 25:57.000
But you might ask like, what about the parsing cost?

25:57.000 --> 26:01.000
And you're right like, of course there's a bigger cost because it's not a binary protocol.

26:01.000 --> 26:02.000
It's, it's text.

26:02.000 --> 26:04.000
It's a human readable thing.

26:04.000 --> 26:06.000
So there is that.

26:06.000 --> 26:09.000
But you would be surprised.

26:09.000 --> 26:12.000
Because Lennart was telling me, I sometimes meet him for lunch,

26:12.000 --> 26:14.000
And he was telling me about Varlink.

26:14.000 --> 26:17.000
And I was always like, you're really wrong.

26:17.000 --> 26:19.000
There's a lot of performance problems there.

26:19.000 --> 26:20.000
And he said, no.

26:20.000 --> 26:24.000
The problem is a lot more with context switching.

26:24.000 --> 26:27.000
And that's a bigger bottleneck.

26:27.000 --> 26:31.000
And since D-Bus is designed to be broker-based,

26:31.000 --> 26:33.000
there's a lot more context switching involved than in

26:33.000 --> 26:36.000
Varlink, which is a P2P protocol.

26:36.000 --> 26:38.000
And I didn't believe him for a long time.

26:38.000 --> 26:39.000
And I was sure that he's wrong.

26:39.000 --> 26:42.000
So I went on and wrote some benchmarks.

26:42.000 --> 26:46.000
And those benchmarks proved him right and proved me wrong.

26:46.000 --> 26:52.000
So it was very hard for me to send him a link about that.

26:52.000 --> 26:54.000
But yeah, he was right.

26:54.000 --> 26:56.000
Usually he's right in the end.

26:56.000 --> 26:59.000
So yeah.

26:59.000 --> 27:02.000
And the thing is that, because of this P2P,

27:02.000 --> 27:05.000
It's not just that there's less context switching.

27:05.000 --> 27:07.000
It also enables

27:07.000 --> 27:10.000
other things that you would normally not think about.

27:10.000 --> 27:15.000
For example, in D-Bus, you have a limited number of connections.

27:15.000 --> 27:17.000
As I said, like our test case was panicking,

27:17.000 --> 27:21.000
because it was launching too many connections.

27:21.000 --> 27:24.000
But that's because D-Bus limits your connections.

27:24.000 --> 27:26.000
And that's because there's like a broker.

27:26.000 --> 27:29.000
And it has to have some limits for security reasons.

27:29.000 --> 27:34.000
But if you have a peer to peer, you can have as many connections as you want.

27:34.000 --> 27:37.000
To the same service, you can have as many as you want.

27:37.000 --> 27:40.000
And that means you can have exterior mutability.

27:40.000 --> 27:44.000
Instead of interior mutability. I don't know how many people here are

27:44.000 --> 27:48.000
familiar with Rust's model of doing things.

27:48.000 --> 27:53.000
But with exterior mutability, you can have really, really good performance

27:53.000 --> 27:58.000
because you don't need locks at runtime guarding things.

27:58.000 --> 28:01.000
And you can have it more at compile time.

28:01.000 --> 28:06.000
The compiler is checking that you don't have multiple mutable references to the same thing

28:06.000 --> 28:07.000
And stuff.

28:07.000 --> 28:10.000
So it's really good for performance.
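
A tiny sketch of the exterior-mutability point: when each task can own its own connection outright, the caller holds the single `&mut` and the borrow checker proves exclusivity at compile time, so no runtime lock is needed. The `Connection` type here is a made-up stand-in, not zbus's actual type.

```rust
// Exterior mutability: mutation goes through one exclusive &mut that the
// compiler verifies, instead of a runtime Mutex/RwLock guarding shared state.
struct Connection {
    sent: u64,
}

impl Connection {
    fn send(&mut self, _msg: &str) {
        self.sent += 1; // direct mutation; aliasing is a compile error
    }
}

fn main() {
    // With a P2P transport, connections are cheap enough that each task
    // can own one, rather than sharing an Arc<Mutex<Connection>>.
    let mut conn = Connection { sent: 0 };
    conn.send("hello");
    conn.send("world");
    println!("{}", conn.sent);
}
```

Trying to hold two mutable references to `conn` at once would be rejected at compile time, which is exactly the check that interior mutability defers to runtime locks.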

28:10.000 --> 28:14.000
And that being able to have as many connections as you want,

28:14.000 --> 28:15.000
really enables that.

28:15.000 --> 28:18.000
And also in D-Bus, you have connection overhead.

28:18.000 --> 28:22.000
Like when you establish it, there's a handshake,

28:22.000 --> 28:23.000
protocol.

28:23.000 --> 28:26.000
But in the case of Varlink, you don't have that.

28:26.000 --> 28:31.000
So it also makes it super cheap to launch new connections between things,

28:31.000 --> 28:33.000
between programs.

28:33.000 --> 28:35.000
So it avoids a lot of allocation,

28:35.000 --> 28:37.000
cloning because of that as well.

28:37.000 --> 28:40.000
In zbus, we have quite a lot of cloning, mainly because of that.

28:40.000 --> 28:44.000
Because you need to share the same connection.

28:44.000 --> 28:47.000
That's the way things work in D-Bus.

28:47.000 --> 28:50.000
So when you share the same connection and multiple parts of your code,

28:50.000 --> 28:53.000
multiple threads, multiple tasks,

28:53.000 --> 28:56.000
want to receive messages at the same time,

28:56.000 --> 28:58.000
you have to use a broadcast channel.

28:58.000 --> 29:00.000
And that means you have to clone.

29:00.000 --> 29:03.000
I do it efficiently using Arc.

29:03.000 --> 29:08.000
But Arc also has a small overhead.

29:08.000 --> 29:12.000
The smaller your machine, the bigger the overhead.
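
The sharing pattern described above can be sketched with a plain `Arc`: one shared connection fanned out to many tasks means a reference-count bump per consumer (plus, in zbus, a broadcast channel per receiver), whereas per-task P2P connections sidestep it. The `String` here stands in for a real connection object.

```rust
use std::sync::Arc;

// Sketch of sharing one connection across tasks via Arc. Each clone is a
// refcount increment, not a deep copy, but the atomic counter and the
// shared indirection are still overhead that per-task connections avoid.
fn main() {
    let conn = Arc::new(String::from("shared d-bus connection"));
    let for_task_a = Arc::clone(&conn); // handed to one task
    let for_task_b = Arc::clone(&conn); // handed to another
    println!("{}", Arc::strong_count(&conn)); // 3 live handles

    drop(for_task_a);
    drop(for_task_b);
    println!("{}", Arc::strong_count(&conn)); // back to 1
}
```

`Arc` also drags in atomics and heap allocation, which is part of why a shared-connection design makes a no-std/no-alloc zbus so hard, as the talk goes on to say.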

29:12.000 --> 29:18.000
And also it would be very difficult for me to do no-std, for example,

29:18.000 --> 29:23.000
to not use std and have a really small size in zbus.

29:23.000 --> 29:28.000
Maybe I can do no-std, but no-alloc is out of the question.

29:28.000 --> 29:31.000
I think for now, at least it would be really, really difficult.

29:31.000 --> 29:33.000
Maybe I can do it, I think.

29:33.000 --> 29:37.000
But in the case of Varlink, that would be a lot easier, again

29:37.000 --> 29:40.000
because of being able to have multiple connections.

29:40.000 --> 29:43.000
That enables a lot of things.

29:43.000 --> 29:46.000
If you're interested more in depth in why that is the case,

29:46.000 --> 29:50.000
you can talk to me afterwards and I can explain it to you.

29:50.000 --> 29:55.000
Anyway, there is a Rust crate that exists for Varlink.

29:55.000 --> 29:57.000
Guess the name?

29:57.000 --> 29:59.000
It's called varlink.

29:59.000 --> 30:01.000
But it has a few issues.

30:01.000 --> 30:05.000
First of all, the main problem is that it's just a blocking API.

30:05.000 --> 30:10.000
It doesn't provide any async API, which is a shame because it's IPC.

30:10.000 --> 30:13.000
And it's designed for code generation only,

30:13.000 --> 30:15.000
which sometimes is good.

30:15.000 --> 30:17.000
But it's not always what you want.

30:17.000 --> 30:19.000
You want to use your existing types that

30:19.000 --> 30:22.000
implement Serialize and Deserialize with it as well.

30:22.000 --> 30:23.000
So it's not good for that.

30:23.000 --> 30:25.000
It's not designed for it, at least.

30:25.000 --> 30:27.000
And it does a lot of allocations.

30:27.000 --> 30:29.000
It does as much allocation as it wants.

30:29.000 --> 30:31.000
So it's not how I would want it to be.

30:31.000 --> 30:33.000
And it's not really maintained.

30:33.000 --> 30:35.000
I'm the new maintainer, one of them.

30:35.000 --> 30:38.000
So that tells you how unmaintained it is,

30:38.000 --> 30:41.000
because I got the maintainership so that maybe

30:42.000 --> 30:45.000
there would be a bit more maintenance than there is right now.

30:45.000 --> 30:49.000
But I'm not familiar with the actual code, how it all works.

30:49.000 --> 30:51.000
So it's very hard for me to maintain it.

30:51.000 --> 30:54.000
Especially if I know that we can do better.

30:54.000 --> 30:59.000
So if we can do better, why not do better and do it from scratch?

30:59.000 --> 31:01.000
And I have a plan how to do it.

31:01.000 --> 31:05.000
It's a bit weak, but it's based on experience.

31:05.000 --> 31:10.000
I recently, at work, did an SDK for a protocol

31:10.000 --> 31:13.000
that is very, very similar to Varlink.

31:13.000 --> 31:18.000
And yeah, based on that experience, I realized how I can do a

31:18.000 --> 31:22.000
no-std compatible implementation that is async.

31:22.000 --> 31:27.000
And it uses very little allocation, if at all.

31:27.000 --> 31:31.000
And hopefully it would be the same crate, because I'm now the

31:31.000 --> 31:34.000
maintainer, or I have a say, but if not, then

31:34.000 --> 31:36.000
we're going to do our own.

31:36.000 --> 31:38.000
And also, just one final thing.

31:38.000 --> 31:41.000
D-Bus is not going away any time soon, at least.

31:41.000 --> 31:45.000
It will hopefully go away slowly, but not any time soon,

31:45.000 --> 31:48.000
because there's so much built on top of it.

31:48.000 --> 31:51.000
But it will remain around for a long time as legacy.

31:51.000 --> 31:53.000
So you still need zbus for a while.

31:53.000 --> 31:55.000
So yeah.

31:57.000 --> 31:58.000
Sorry.

31:58.000 --> 31:59.000
Am I over time?

31:59.000 --> 32:00.000
No.

32:00.000 --> 32:10.000
Any questions?

32:10.000 --> 32:19.000
Ooh, question.

32:19.000 --> 32:21.000
So you introduce a signature.

32:21.000 --> 32:24.000
But how does that signature get matched then?

32:24.000 --> 32:25.000
Is that just a...

32:25.000 --> 32:26.000
Sorry, I can't hear you.

32:27.000 --> 32:32.000
Yeah, you need to speak up a bit.

32:32.000 --> 32:33.000
Okay.

32:33.000 --> 32:35.000
The signature you're introducing.

32:35.000 --> 32:37.000
How does that get matched?

32:37.000 --> 32:40.000
Or does the compiler already, like, handle that?

32:40.000 --> 32:43.000
Is it a one-time or a log-n search?

32:43.000 --> 32:45.000
I haven't measured it in that way.

32:45.000 --> 32:47.000
I know it makes it super efficient.

32:47.000 --> 32:50.000
It somehow enables some compiler optimizations.

32:50.000 --> 32:54.000
So it does, I think it does matching in parallel using

32:54.000 --> 32:56.000
SIMD or something.

32:56.000 --> 33:03.000
So there are a lot of optimizations in winnow itself that one can use.

33:03.000 --> 33:05.000
And I'm using them.

33:05.000 --> 33:06.000
Some of them, at least.

33:06.000 --> 33:07.000
Thank you.

33:16.000 --> 33:18.000
You'll have to speak up, even with the mic.

33:18.000 --> 33:19.000
Sorry.

33:19.000 --> 33:26.000
Can you get this one closer?

33:26.000 --> 33:29.000
When should we start using Varlink?

33:29.000 --> 33:33.000
Well, once we have a good API for you.

33:33.000 --> 33:37.000
I mean, if you're doing it in Rust, you would want a good Rust library, right?

33:37.000 --> 33:42.000
The hope is that I will be able to work on it on my work time.

33:42.000 --> 33:44.000
But we'll see.

33:44.000 --> 33:46.000
I'm working at a start-up.

33:46.000 --> 33:48.000
So I can't really make promises.

33:48.000 --> 33:53.000
But there is the desire, because we will benefit from it too.

33:53.000 --> 33:57.000
And my plan includes making it work for microcontrollers as well,

33:57.000 --> 34:02.000
like for communication between a host and a microcontroller, for example.

34:08.000 --> 34:10.000
Cool.

34:10.000 --> 34:11.000
Awesome.

34:11.000 --> 34:12.000
Thank you.

34:12.000 --> 34:22.000
Thank you, everyone.

