WEBVTT

00:00.000 --> 00:10.000
So if we can get everyone's attention, we have our next speaker.

00:10.000 --> 00:13.000
This is one everyone likes: profiling.

00:13.000 --> 00:18.000
So Brennan is going to tell us about profiling Rust applications.

00:18.000 --> 00:21.000
Thank you.

00:21.000 --> 00:27.000
All right, so hi, I'm Brennan.

00:27.000 --> 00:31.000
And I'm going to be talking about profiling Rust applications with Parca.

00:31.000 --> 00:38.000
Has anyone here who is not an employee of the same company as me ever heard of Parca?

00:38.000 --> 00:39.000
One, two, three.

00:39.000 --> 00:40.000
OK, perfect.

00:40.000 --> 00:44.000
A few more of you, hopefully, will by the end of this talk.

00:44.000 --> 00:46.000
Hello, my name is Brennan Vincent.

00:46.000 --> 00:47.000
This is my email address.

00:47.000 --> 00:51.000
It'll also be at the end if you want to contact me with questions.

00:51.000 --> 00:56.000
I live in Tucson, so I had to come to Brussels to see sort of like clouds

00:56.000 --> 00:59.000
and wind and that kind of stuff.

00:59.000 --> 01:01.000
I'm a team member of Parca; specifically,

01:01.000 --> 01:05.000
I'm a maintainer of the Parca Agent kind of sub-project.

01:05.000 --> 01:07.000
And I'm employed by Polar Signals,

01:07.000 --> 01:12.000
which is the main company that funds Parca development.

01:12.000 --> 01:15.000
So first of all, what is profiling?

01:15.000 --> 01:20.000
I can give a sort of abstract definition first: analyzing resource usage

01:20.000 --> 01:23.000
by inspecting running programs.

01:23.000 --> 01:27.000
So more concretely, what do we mean by resource usage?

01:27.000 --> 01:33.000
Things like CPU time, memory, wall-clock time, GPU utilization,

01:33.000 --> 01:39.000
whatever's the resource on a computer that you care about being efficient with.

01:39.000 --> 01:44.000
The goal is usually to attribute resource usage to a location in the code

01:44.000 --> 01:46.000
or even better a stack trace.

01:46.000 --> 01:48.000
So you might want to say something like,

01:48.000 --> 01:52.000
I ran this program for X amount of time.

01:52.000 --> 01:56.000
And during that time, for Y CPU-seconds,

01:56.000 --> 02:00.000
this stack trace was

02:00.000 --> 02:02.000
active on the CPU.

02:02.000 --> 02:05.000
So how do you make that visible so that people can understand it?

02:05.000 --> 02:08.000
There's a couple of kinds of visualizations.

02:08.000 --> 02:11.000
The most popular are probably annotated source or assembly listings,

02:11.000 --> 02:17.000
so saying, like, this line of code took this many resources when we ran it,

02:17.000 --> 02:21.000
and flame graphs, which I'm going to go into more in a bit.

02:22.000 --> 02:27.000
There's some famous profilers that probably more of you would have heard of

02:27.000 --> 02:28.000
than Parca.

02:28.000 --> 02:31.000
There's Perf, probably the most famous one on Linux,

02:31.000 --> 02:36.000
Firefox Profiler, which is mainly for client side JavaScript code.

02:36.000 --> 02:37.000
It's bundled in Firefox.

02:37.000 --> 02:43.000
And then there's Apple's Instruments.app for the sort of iOS, macOS, et cetera,

02:43.000 --> 02:45.000
ecosystem.

02:45.000 --> 02:50.000
So here's an example of Perf and a script called FlameGraph.pl,

02:50.000 --> 02:53.000
which is what I used before I heard of Parca.

02:53.000 --> 02:59.000
So this is a function that prints the 40th Fibonacci number over and over,

02:59.000 --> 03:04.000
using a naive recursive algorithm that is very slow,

03:04.000 --> 03:08.000
but we're going to look at what it looks like in Perf.
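
NOTE
As a sketch: the demo program being profiled is described as a naive, exponential-time recursive Fibonacci printed in a loop. The exact code isn't shown in the talk, so the names below are illustrative.
    // Naive recursion: roughly phi^n calls, deliberately slow.
    fn fib(n: u64) -> u64 {
        if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
    }
    fn main() {
        // Print the 40th Fibonacci number over and over, as described.
        loop {
            println!("fib(40) = {}", fib(40));
        }
    }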

03:08.000 --> 03:12.000
Okay?

03:12.000 --> 03:14.000
Hopefully this is visible.

03:20.000 --> 03:24.000
So, okay, if I remember right, this is the same program.

03:24.000 --> 03:28.000
I'm not going to go into like the meaning of these flags,

03:28.000 --> 03:31.000
because the point of the talk is not to talk about Perf,

03:31.000 --> 03:35.000
but we'll let it run for a few seconds, kill it,

03:35.000 --> 03:40.000
and then do Perf report.

03:40.000 --> 03:44.000
And you see this kind of confusing interface in my opinion,

03:44.000 --> 03:49.000
but basically 98% was in this test rust fib function,

03:49.000 --> 03:52.000
and we can annotate it and see, okay,

03:52.000 --> 03:55.000
a lot of time was spent in the function prologue,

03:55.000 --> 03:57.000
whatever, stuff like that.

03:57.000 --> 04:00.000
But like I said, I think it's

04:00.000 --> 04:03.000
a little bit of a clunky interface.

04:03.000 --> 04:07.000
So this guy, Brendan Gregg,

04:07.000 --> 04:10.000
made this tool called FlameGraph.pl,

04:10.000 --> 04:14.000
which translates Perf output to FlameGraphs.

04:14.000 --> 04:17.000
I have a screenshot of what that is.

04:17.000 --> 04:20.000
Basically, if you haven't seen a FlameGraph before,

04:20.000 --> 04:24.000
essentially, if one of these rectangles takes up,

04:24.000 --> 04:26.000
let's say 60% of the width,

04:26.000 --> 04:30.000
then it means that in 60% of the stack traces,

04:30.000 --> 04:34.000
that frame and everything below it were the frames

04:34.000 --> 04:37.000
that you can read off of the graph.

04:37.000 --> 04:42.000
So for example, let's say 60% is one of these test-rust fib frames,

04:42.000 --> 04:44.000
that's like 15 layers up.

04:44.000 --> 04:48.000
That means that in 60%, a number which I'm making up by eyeballing this,

04:48.000 --> 04:52.000
of stack traces taken while the profile was being captured,

04:52.000 --> 04:55.000
we had these sort of startup frames, main,

04:55.000 --> 04:58.000
pre-main, whatever, and then like 15 layers of fib.

04:58.000 --> 05:03.000
So I'll show a more realistic example,

05:03.000 --> 05:06.000
especially since I'm here to talk about parka,

05:06.000 --> 05:10.000
so let's talk about what parka is.

05:11.000 --> 05:16.000
So how does Parca differ from these kinds of

05:16.000 --> 05:20.000
other profilers that exist, like Perf? Why not just use Perf?

05:20.000 --> 05:24.000
So the first and biggest difference is that it's intended to run

05:24.000 --> 05:27.000
all the time, profiling everything on the host,

05:27.000 --> 05:31.000
which is what we call continuous profiling.

05:31.000 --> 05:34.000
Whereas Perf, it's more like, okay,

05:34.000 --> 05:39.000
I decide I want to profile something, so I'm going to run it under Perf right now.

05:40.000 --> 05:43.000
It has very low overhead.

05:43.000 --> 05:47.000
Obviously it's going to kind of depend on what your specific workload is,

05:47.000 --> 05:52.000
but the rule of thumb we try to use is it should take less than 1%

05:52.000 --> 05:58.000
of CPU time overhead across the entire system, running all the time.

05:58.000 --> 06:02.000
Obviously this second point is what enables the first point.

06:02.000 --> 06:06.000
And if I have time at the end, I'll try to talk about how it's implemented

06:06.000 --> 06:11.000
in such a way that the overhead can be so low.

06:11.000 --> 06:15.000
You typically have N machines sending profiles to one backend,

06:15.000 --> 06:19.000
so you run the Parca server on one machine,

06:19.000 --> 06:21.000
and then on every machine that you want to profile,

06:21.000 --> 06:24.000
you run the Parca Agent pointing at the server,

06:24.000 --> 06:27.000
and it sends profiles to the backend.

06:27.000 --> 06:32.000
And the point of that is that you get a browser interface

06:32.000 --> 06:37.000
that generates flame graphs on the fly based on queries

06:37.000 --> 06:40.000
across everything that has been submitted to this backend.

06:40.000 --> 06:42.000
So you could say, for example,

06:42.000 --> 06:47.000
generate a flame graph of all the stack traces that were taken

06:47.000 --> 06:53.000
from 10 a.m. to 11 a.m. yesterday on nodes that match this

06:53.000 --> 06:58.000
pattern, with process names that match some string,

06:58.000 --> 07:02.000
et cetera, and then it'll go back to the backend,

07:02.000 --> 07:07.000
the backend will generate the data and then render a flame graph.

07:07.000 --> 07:10.000
So let me show you a real world example.

07:10.000 --> 07:13.000
I didn't want to use multiple machines because I thought if the

07:13.000 --> 07:16.000
conference Wi-Fi goes out, it's going to make for a bad demo,

07:16.000 --> 07:20.000
so just like believe that this could have been done on multiple machines.

07:20.000 --> 07:24.000
So it's going to be done on one machine today.

07:24.000 --> 07:28.000
So I'm going to run the Parca backend.

07:28.000 --> 07:31.000
Don't worry too much about these flags.

07:31.000 --> 07:34.000
They're all documented, but probably don't have time to go into

07:34.000 --> 07:36.000
how to configure it.

07:36.000 --> 07:41.000
But I'm also going to run the Parca Agent,

07:41.000 --> 07:45.000
which has to be run as root on every machine that you want to

07:45.000 --> 07:46.000
profile.

07:46.000 --> 07:49.000
If you use Kubernetes, use a DaemonSet or whatever to run it on every

07:49.000 --> 07:53.000
machine, but in this case, I'm just running it on the command line

07:53.000 --> 07:56.000
locally.

07:56.000 --> 07:58.000
Okay, that's running.

07:58.000 --> 08:03.000
And now for a workload, I'm going to run rg, which is sort

08:03.000 --> 08:07.000
of like ripgrep, across my entire file system,

08:07.000 --> 08:10.000
looking for some random string over and over.

08:10.000 --> 08:13.000
So once everything's in the file system cache,

08:13.000 --> 08:16.000
it should be taking up 10 cores or so.

08:17.000 --> 08:20.000
And then for good measure, I'm also going to run this

08:20.000 --> 08:25.000
Fibonacci program, why not?

08:25.000 --> 08:28.000
So let's see what that looks like in Parca.

08:28.000 --> 08:30.000
This is Wikipedia.

08:30.000 --> 08:33.000
I did not invent this, unfortunately.

08:33.000 --> 08:35.000
So this is Parca.

08:35.000 --> 08:38.000
You get this sort of metric graph here by default,

08:38.000 --> 08:42.000
broken down by comm, which is Linux's kind-of word for

08:42.000 --> 08:45.000
process name, roughly speaking.

08:45.000 --> 08:49.000
So the one using the most, as we expected, is rg.

08:49.000 --> 08:53.000
Then there's test-rust, that's the Fibonacci program, named kind of

08:53.000 --> 08:56.000
uninspiredly, but then we have other stuff like

08:56.000 --> 09:00.000
GNOME Shell, just from Ubuntu, Firefox, the Parca

09:00.000 --> 09:02.000
Agent itself, GNOME Terminal, whatever.

09:02.000 --> 09:06.000
So you can see it's profiling everything on the system.

09:06.000 --> 09:09.000
There's a flame graph of everything going on on the system,

09:09.000 --> 09:13.000
but it's a little bit more interesting if I show you the

09:13.000 --> 09:17.000
filtering logic, so let's do comm equals

09:17.000 --> 09:24.000
rg or test-rust, to sort of show you, you know,

09:24.000 --> 09:27.000
the workloads that we actually care about.

09:27.000 --> 09:30.000
And then you have it here, and then the flame graph itself,

09:30.000 --> 09:33.000
you can also group by various things.

09:33.000 --> 09:35.000
So I just grouped by comm.

09:35.000 --> 09:40.000
So now we have on the left comm equals rg. I'm going to

09:40.000 --> 09:45.000
kill these by the way, because I don't want to run out of battery.

09:45.000 --> 09:51.000
Yeah, so you see on the right is the fib stuff, and sort of

09:51.000 --> 09:55.000
the form of this graph, other than being upside down,

09:55.000 --> 09:58.000
like the root is at the top. Other than that, the form of this graph

09:58.000 --> 10:03.000
sort of looks the same as the visualization I showed you before,

10:03.000 --> 10:06.000
And we can zoom in.

10:06.000 --> 10:09.000
So that is a bug, sorry.

10:09.000 --> 10:13.000
Of course, the bugs happen when I'm giving a demo;

10:13.000 --> 10:18.000
of course, that's the time for them. But anyway, okay, we're back.

10:18.000 --> 10:22.000
And then on the left here, these are seemingly realistic

10:22.000 --> 10:27.000
stack frames from rg, like, whatever:

10:27.000 --> 10:31.000
rg, search, path, file open, et cetera.

10:31.000 --> 10:33.000
This sounds like rg stuff.

10:33.000 --> 10:35.000
So you can see what we did there.

10:35.000 --> 10:39.000
So let's go back to my slides.

10:39.000 --> 10:44.000
So it's reasonable for someone to ask, well, what is

10:44.000 --> 10:46.000
this guy talking about?

10:46.000 --> 10:47.000
What does this have to do with Rust?

10:47.000 --> 10:49.000
Like, why are you talking about this in the Rust room?

10:49.000 --> 10:51.000
And it's true that, you know, Parca

10:51.000 --> 10:54.000
can also profile C++ and Go and whatever.

10:54.000 --> 10:59.000
So for the second half of my talk, I'm going to talk about kind of

10:59.000 --> 11:02.000
some Rust-specific features that we added.

11:02.000 --> 11:06.000
These do require instrumentation, like you have to decide

11:06.000 --> 11:10.000
I want to use this feature from your code, but that's okay.

11:10.000 --> 11:14.000
So the first Rust-specific feature is custom labels, which lets you

11:14.000 --> 11:18.000
annotate regions of code with application specific labels,

11:18.000 --> 11:20.000
dynamically at runtime.

11:20.000 --> 11:26.000
For an example of this, let's go back to my VM.

11:26.000 --> 11:31.000
Okay, I don't know if people can see this, hopefully you can,

11:31.000 --> 11:37.000
but this is essentially a web server that takes in a username

11:37.000 --> 11:39.000
and length.

11:39.000 --> 11:44.000
It generates that many random bytes and takes the SHA-256

11:44.000 --> 11:46.000
sum of all of them.

11:46.000 --> 11:49.000
And then it renders a string that says,

11:49.000 --> 11:56.000
thanks, username, the SHA-256 of your random data is whatever.
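
NOTE
A minimal sketch of the handler logic described above: generate the requested number of random bytes, SHA-256 them, and render the greeting. The rand and sha2 crates and all names here are assumptions, not the actual demo code.
    use rand::RngCore;
    use sha2::{Digest, Sha256};
    fn respond(username: &str, len: usize) -> String {
        // Generate `len` random bytes.
        let mut bytes = vec![0u8; len];
        rand::thread_rng().fill_bytes(&mut bytes);
        // Hash them and hex-encode the digest.
        let digest = Sha256::digest(&bytes);
        let hex: String = digest.iter().map(|b| format!("{b:02x}")).collect();
        format!("Thanks, {username}! The SHA-256 of your random data is {hex}.")
    }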

11:57.000 --> 12:03.000
So you can run this and then you can profile it in Parca,

12:03.000 --> 12:09.000
and you'll see about 70% of the time is spent generating

12:09.000 --> 12:14.000
the SHA sum and 30% of the time generating the random bytes.

12:14.000 --> 12:17.000
But what if you want to know how many resources were consumed

12:17.000 --> 12:20.000
by the person with each username?

12:20.000 --> 12:24.000
So we published a library called custom labels, so let's instrument that.

12:24.000 --> 12:29.000
So let me edit the code to add that with_label call. The key is

12:29.000 --> 12:34.000
"username", and we'll make the value of the label be the actual username,

12:34.000 --> 12:40.000
and then you pass it a function and then essentially it runs that function

12:40.000 --> 12:46.000
and while that function is running, that label is applied.
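
NOTE
A sketch of that instrumentation, assuming the custom-labels crate exposes a with_label(key, value, closure) function with the semantics just described; respond is the hypothetical handler from the earlier sketch.
    fn respond_labeled(username: &str, len: usize) -> String {
        // While the closure runs, samples taken by the Parca Agent carry
        // the label username=<value>, so profiles can be grouped by it later.
        custom_labels::with_label("username", username, || respond(username, len))
    }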

12:46.000 --> 12:59.000
So let's run this.

12:59.000 --> 13:02.000
Let's hit this.

13:02.000 --> 13:08.000
So I'm going to do username equals brennan, length is a billion bytes.

13:08.000 --> 13:09.000
Okay, thanks, brennan.

13:09.000 --> 13:15.000
Your SHA is whatever; could not have predicted that.

13:15.000 --> 13:20.000
And then I'm going to do, okay, I'll do username equals petros

13:20.000 --> 13:24.000
because there's a Petros watching this online.

13:24.000 --> 13:26.000
So hello, Petros.

13:26.000 --> 13:27.000
Thank you.

13:27.000 --> 13:33.000
The SHA sum of your random data is blah blah.

13:33.000 --> 13:40.000
Now let's see again what that looks like in Parca.

13:40.000 --> 13:51.000
Okay, the comm is the Rust SHA server, filtered down to that.

13:51.000 --> 13:55.000
And now in labels, one will show up called username if it worked,

13:55.000 --> 13:59.000
which it didn't, sorry.

13:59.000 --> 14:03.000
Another bug perhaps.

14:03.000 --> 14:06.000
Oh, it's because I didn't save main.rs as soon as

14:06.000 --> 14:08.000
I actually added the custom label.

14:08.000 --> 14:09.000
Not a bug in Parca.

14:09.000 --> 14:20.000
It's a bug in my ability to use VS code.

14:20.000 --> 14:25.000
So let's run this again.

14:25.000 --> 14:30.000
Let's do one with brennan.

14:30.000 --> 14:31.000
Reload it again.

14:31.000 --> 14:34.000
Use some more time.

14:34.000 --> 14:38.000
Now, hopefully, this will show up in Parca.

14:38.000 --> 14:42.000
Okay, now we can group by username.

14:42.000 --> 14:46.000
And you see, okay, the one with no username on the right.

14:46.000 --> 14:48.000
That's from before when I screwed it up.

14:48.000 --> 14:50.000
But then you see on the left, username equals brennan

14:50.000 --> 14:53.000
was these stack frames, and username equals petros

14:53.000 --> 14:55.000
was these ones in the middle here.

14:55.000 --> 14:59.000
So in real life, our users use this

14:59.000 --> 15:04.000
for stuff like trace IDs if they do distributed tracing.

15:04.000 --> 15:09.000
Or if you run a SQL database, you can say, okay, set a label for the query ID.

15:09.000 --> 15:11.000
And then later if some query was taking a ton of time,

15:11.000 --> 15:16.000
you can try to kind of investigate why with the profiler.

15:16.000 --> 15:25.000
But let's go back to the slides.

15:25.000 --> 15:29.000
The second feature is memory profiling.

15:29.000 --> 15:34.000
If you use jemalloc, Parca can scrape profiles of memory

15:34.000 --> 15:38.000
that has been allocated but not freed.

15:38.000 --> 15:44.000
And I consider this a Rust-specific feature because, while you can technically use this from C,

15:44.000 --> 15:48.000
we wrote the library that kind of translates this in Rust.

15:48.000 --> 15:51.000
So it's much easier to use from Rust.
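
NOTE
A sketch of the kind of setup this implies: jemalloc as the global allocator with heap profiling enabled, plus a library that converts the profile to pprof for scraping. The crate names (tikv-jemallocator, jemalloc_pprof) and the malloc_conf values are my assumptions, not confirmed by the talk.
    // Use jemalloc as the global allocator.
    #[global_allocator]
    static ALLOC: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
    // Turn on jemalloc heap profiling; sample allocations every ~2^19 bytes.
    #[export_name = "malloc_conf"]
    pub static MALLOC_CONF: &[u8] = b"prof:true,prof_active:true,lg_prof_sample:19\0";
    // A crate like jemalloc_pprof can then dump the heap profile in pprof
    // format from an HTTP endpoint for the Parca server to scrape.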

15:51.000 --> 16:00.000
So let's see, let's say I had some code now that allocates

16:00.000 --> 16:05.000
a buffer of a thousand bytes and, with a 10% chance, leaks it.

16:05.000 --> 16:20.000
So we'll get some, we'll get some leaked memory that never gets freed.

16:20.000 --> 16:29.000
Forget it.

16:29.000 --> 16:38.000
Okay, you guys all know rust, so hopefully you believe this is leaking memory.
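
NOTE
A sketch matching the leak just described: allocate a thousand-byte buffer and, 10% of the time, forget it so it is never freed. The rand usage and function name are illustrative.
    fn maybe_leak() {
        let buf = vec![0u8; 1000];
        if rand::random::<f64>() < 0.1 {
            // mem::forget gives up ownership without running the destructor,
            // so the allocation is never freed: a deliberate leak.
            std::mem::forget(buf);
        }
    }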

16:38.000 --> 16:49.000
Going to save the file this time, that'll be great.

16:49.000 --> 16:53.000
Let's load it a few times to waste some memory.

16:53.000 --> 16:58.000
This should be leaking about 100 megabytes every time.

16:58.000 --> 17:02.000
Okay, that's probably enough of that.

17:02.000 --> 17:14.000
And go look at the memory profile.

17:15.000 --> 17:19.000
Okay, you see now it started close to zero and jumped all the way up here.

17:19.000 --> 17:24.000
If I select one of the points on this profile... okay.

17:24.000 --> 17:25.000
We got a failure.

17:25.000 --> 17:28.000
There's other people who work on Parca watching this.

17:28.000 --> 17:31.000
Let's figure out what's causing this bug.

17:31.000 --> 17:35.000
So unfortunately I cannot show this, but if I clicked on this,

17:35.000 --> 17:42.000
you'd see a breakdown, similar to a flame graph, of the bytes that had been allocated

17:42.000 --> 17:47.000
and the stack traces where those bytes were allocated.

17:47.000 --> 17:52.000
Sorry?

17:52.000 --> 18:05.000
Okay, I can try that, let's see.

18:05.000 --> 18:09.000
This is, no, this is not.

18:10.000 --> 18:12.000
Thank you, Frederick, that works.

18:12.000 --> 18:18.000
But yeah, so you see, basically this SHA-rand function,

18:18.000 --> 18:24.000
you know, it says it's in main.rs plus 34, so line 34.

18:24.000 --> 18:27.000
That's where stuff is getting leaked.

18:27.000 --> 18:32.000
And indeed, if I go back to my editor,

18:32.000 --> 18:36.000
that's indeed line 34 where we're leaking the memory.

18:36.000 --> 18:41.000
So that's the second Rust-specific feature.

18:41.000 --> 18:45.000
I'm going to have to be brief for the rest of this,

18:45.000 --> 18:47.000
but I just have one more slide.

18:47.000 --> 18:48.000
How it works.

18:48.000 --> 18:50.000
The keyword is eBPF.

18:50.000 --> 18:52.000
Have you heard of that?

18:52.000 --> 18:58.000
Essentially, I don't know, well, I don't have time for you guys to digest this entire slide,

18:58.000 --> 19:02.000
but basically the idea is that with eBPF,

19:03.000 --> 19:07.000
you can write something that runs in kernel space,

19:07.000 --> 19:12.000
that can look, like, into the address space of your process,

19:12.000 --> 19:14.000
and the kernel runs it.

19:14.000 --> 19:17.000
And so you're not having to like,

19:17.000 --> 19:21.000
copy a bunch of memory back and forth between different

19:21.000 --> 19:26.000
address spaces in order to walk the stack.

19:26.000 --> 19:29.000
And I should give a shout out that,

19:30.000 --> 19:34.000
our unwinder in our agent is largely based on,

19:34.000 --> 19:36.000
although we have our own kind of features,

19:36.000 --> 19:40.000
but it's largely based on this project, the OpenTelemetry eBPF Profiler,

19:40.000 --> 19:43.000
which several companies, including us,

19:43.000 --> 19:44.000
collaborate on.

19:44.000 --> 19:48.000
So thanks to all of those contributors.

19:48.000 --> 19:52.000
And yeah, so if you want to learn more about Parca,

19:52.000 --> 19:55.000
you can join our Discord, which is this QR code,

19:55.000 --> 19:57.000
the website is parca.dev,

19:58.000 --> 20:01.000
or you can email me; this is my email address,

20:01.000 --> 20:03.000
and that is it.

20:03.000 --> 20:05.000
So I think we have like five minutes for questions,

20:05.000 --> 20:07.000
if anyone has questions.

20:18.000 --> 20:21.000
For the custom integrations, did you...

20:21.000 --> 20:23.000
Sorry, can you speak up again?

20:23.000 --> 20:26.000
Sorry, for the custom integrations of labeling,

20:26.000 --> 20:29.000
are you using like the tracing crate that's built in or?

20:29.000 --> 20:30.000
Did we consider what?

20:30.000 --> 20:33.000
The tracing crate, that a lot of libraries use.

20:33.000 --> 20:36.000
Yeah, custom integration with the tracing crate.

20:36.000 --> 20:39.000
We have considered that, but we don't,

20:39.000 --> 20:42.000
we haven't implemented anything along those lines yet,

20:42.000 --> 20:44.000
but like, it's definitely something we want to do

20:44.000 --> 20:48.000
because it's sort of huge in the Rust ecosystem.

20:48.000 --> 20:55.000
With custom, custom user-space decisions, like building

20:56.000 --> 21:03.000
your own stuff into the binary, like you did there with the username.

21:03.000 --> 21:09.000
What is the limit of using EBPF like that?

21:09.000 --> 21:14.000
If I go ham and start adding every single thing,

21:14.000 --> 21:19.000
like let's say I start putting in every time you read a single bite,

21:19.000 --> 21:22.000
I will write down the memory address.

21:22.000 --> 21:25.000
You're talking about the custom labels feature?

21:25.000 --> 21:26.000
Yeah, custom labeling.

21:26.000 --> 21:30.000
Yeah, so there are pretty tight limits because we do have to read those in eBPF.

21:30.000 --> 21:36.000
So it's like, around, it depends on the version of the Parca Agent you're running,

21:36.000 --> 21:38.000
but it's around like 10 unique labels.

21:38.000 --> 21:41.000
And the key and the value sizes are also constrained,

21:41.000 --> 21:45.000
but we've made them long enough for typical things like UUIDs and stuff like that.

21:45.000 --> 21:50.000
Do you have any experience on what is the practical performance limit of using eBPF

21:50.000 --> 21:57.000
in that sort of, like, jumping into eBPF to do some kind of labeling or something?

21:57.000 --> 22:03.000
I don't think it has a particularly significant performance impact

22:03.000 --> 22:08.000
because it runs by default 19 times a second,

22:08.000 --> 22:13.000
and then we just have to copy out, like, a tiny bit of memory when it happens.

22:13.000 --> 22:18.000
Like I said, the Parca Agent itself has to use some CPU,

22:18.000 --> 22:23.000
but we think it should typically be like again less than 1% of CPU.

22:35.000 --> 22:38.000
Speaking of CPU overhead and such,

22:38.000 --> 22:44.000
have you ever or has anyone that you've heard of ever tried running this on embedded Linux devices

22:44.000 --> 22:48.000
to profile stuff remotely?

22:48.000 --> 22:54.000
There are. I can't say the name of the company because it's confidential,

22:54.000 --> 22:58.000
but there are people using this on things that are like out in the field

22:58.000 --> 23:00.000
that don't have network connections.

23:00.000 --> 23:04.000
So we do have, like, an offline mode where you can say,

23:04.000 --> 23:08.000
store the profiles now, and then later upload all the traces.

23:08.000 --> 23:11.000
When you say embedded, it depends on what you mean by embedded,

23:11.000 --> 23:16.000
because I should mention this right now only works on x86 and ARM.

23:16.000 --> 23:20.000
So I have the idea that someday we should also port it to RISC-V,

23:20.000 --> 23:23.000
but for now it's only x86 and ARM.

23:23.000 --> 23:26.000
It only works on Linux.

23:26.000 --> 23:28.000
Hey, just one question.

23:28.000 --> 23:31.000
How does it work with async Rust?

23:31.000 --> 23:34.000
How does it work with async Rust?

23:34.000 --> 23:43.000
So we don't directly attempt to, like, reify the async logical stack trace.

23:43.000 --> 23:46.000
So you are going to see a bunch of inner stack traces,

23:46.000 --> 23:51.000
a bunch of sort of Tokio this-and-that, like, stuff in the middle, whatever,

23:51.000 --> 23:55.000
but it is, again, something we want to do someday.

23:55.000 --> 23:59.000
Most users in practice find it's still useful even with async Rust,

23:59.000 --> 24:04.000
but it could be integrated more tightly for sure.

24:04.000 --> 24:07.000
I have a question.

24:07.000 --> 24:11.000
What are some of the, how does it handle high cardinality?

24:11.000 --> 24:16.000
So if you had like a hundred or a thousand users using your system

24:16.000 --> 24:19.000
and the tags start to explode.

24:19.000 --> 24:24.000
The performance shouldn't depend on the cardinality.

24:24.000 --> 24:26.000
Okay, the cardinality of the labels.

24:26.000 --> 24:27.000
Yeah.

24:27.000 --> 24:31.000
That's nice for people using Prometheus and tools like this,

24:31.000 --> 24:36.000
because the cardinality becomes a problem.

24:36.000 --> 24:38.000
Anything else?

24:38.000 --> 24:39.000
All right, thank you so much.

24:39.000 --> 24:40.000
Thank you.

24:40.000 --> 24:50.000
Thank you.

