WEBVTT

00:00.000 --> 00:10.280
Hello everyone, I'm Maxime, I'm from France, and I'm working at Inria and Orange as a

00:10.280 --> 00:11.280
PhD student.

00:11.280 --> 00:16.400
So I started recently, and last year, when I discovered eBPF, I had the opportunity

00:16.400 --> 00:21.120
to make a performance evaluation of the eBPF verifier, and I learned a few things about it, and

00:21.120 --> 00:24.320
today I will show you some observations I made.

00:25.280 --> 00:30.960
So I will start by showing some differences between some kernel versions, and then I'll compare

00:30.960 --> 00:34.000
the eBPF verifier with PREVAIL, because it gives a comparison point.

00:37.600 --> 00:43.520
So this is a simple representation of the eBPF verifier; it is a static analyzer.

00:43.520 --> 00:48.880
It verifies the behavior of the received program inside the kernel, and it checks that it respects a

00:49.520 --> 00:54.000
set of specifications, like not jumping outside the instructions or not leaking kernel data.

00:55.200 --> 00:59.840
It does a two-step verification: it starts by working on the instructions to check if there are

00:59.840 --> 01:04.160
unreachable instructions, or back edges if you don't have the capabilities; in these cases,

01:04.160 --> 01:08.880
it rejects your program. And the second step is to verify the behavior of the program,

01:09.600 --> 01:15.040
so it steps through the program instruction by instruction, and if the program is accepted, it is then

01:15.200 --> 01:18.800
compiled, and attached to a hook point, waiting for an event to trigger the program.

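The two-step flow above can be sketched in code. This is a toy Python reconstruction of the first pass only, not the kernel's actual code: a depth-first search over the control-flow graph that flags unreachable instructions and back edges (the loops that older verifiers rejected without the right capabilities).

```python
def first_pass(cfg, entry=0):
    """cfg: dict mapping instruction index -> list of successor indices.
    Returns the back edges found and the unreachable instructions."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in cfg}
    back_edges = []

    def dfs(n):
        color[n] = GRAY                  # n is on the current DFS path
        for succ in cfg[n]:
            if color[succ] == GRAY:      # edge back into the path: a loop
                back_edges.append((n, succ))
            elif color[succ] == WHITE:
                dfs(succ)
        color[n] = BLACK

    dfs(entry)
    unreachable = [n for n, c in color.items() if c == WHITE]
    return back_edges, unreachable
```

On a tiny CFG with a loop between instructions 1 and 2 and a dead instruction 4, this reports the back edge (2, 1) and flags 4 as unreachable.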
01:22.800 --> 01:27.120
Over the years, new features have been added, and the verifier is getting more complex.

01:28.080 --> 01:33.040
Some bugs could be introduced, and it is a complex component. It has to assure the safety

01:33.040 --> 01:38.560
of the entire kernel. And the eBPF verifier, because it is complex, is not formally

01:38.560 --> 01:46.000
defined, and it is not well documented. And the verifier can also reject programs which would

01:46.000 --> 01:52.880
be valid, so it has false positives. However, for a few years, eBPF has been an active research

01:52.880 --> 01:59.920
field; some works have been done on finding bugs, dynamically using fuzzers, or statically,

01:59.920 --> 02:07.200
with, for example, Agni, which uses an SMT solver. There is also isolation, using software and

02:07.200 --> 02:13.120
hardware-based approaches, with Intel MPK or ARM MTE. There are also other verifiers for eBPF

02:13.120 --> 02:18.320
programs which have been proposed, like PREVAIL. PREVAIL is another verifier for eBPF programs,

02:18.880 --> 02:26.560
introduced in 2019; it is used in some projects, notably eBPF for Windows. I'll come back

02:26.560 --> 02:34.320
to PREVAIL a bit later, in the second part of the talk. So, eBPF exists in the Linux

02:34.320 --> 02:41.520
kernel since 2014, and over the years, many modifications have been made to the

02:41.520 --> 02:48.400
verifier, and some of these modifications could have an impact on the performance of the verifier.

02:48.400 --> 02:53.360
So, we are going to compare some versions of the eBPF verifier, in terms of

02:53.360 --> 02:58.640
verification time, memory footprint, and program rejection. For our experiments, we used

02:58.880 --> 03:04.720
several versions of the kernel, going from the 4.x series up to 6.10. We used a set of precompiled

03:04.720 --> 03:10.640
programs from a public repository, and since some of these programs contain custom sections

03:10.640 --> 03:17.280
and legacy maps, we used PREVAIL as a loader for the verifier. However, PREVAIL deduced a bad

03:17.280 --> 03:22.800
program type for some of these programs, which caused the verifier to reject the program for a bad context

03:23.120 --> 03:28.400
or bad helpers. So, I just removed those, and used only the remaining

03:28.400 --> 03:35.520
programs for my tests. This figure shows us the verification time of the accepted

03:35.520 --> 03:41.680
programs, according to the number of instructions. And we can see that for programs with fewer than

03:41.680 --> 03:48.560
a few hundred instructions, the verification time for all the verifiers seems the same. However,

03:48.640 --> 03:53.840
with larger programs, we can observe that for the kernel 4.8 version, the verification

03:53.840 --> 04:00.480
time is higher. It can be explained in part because more branches were generated on this kernel.

04:02.880 --> 04:06.560
The second figure shows us this time the memory footprint, according to the number of

04:06.560 --> 04:12.240
instructions of the accepted programs. And we have actually the same observation: the memory

04:12.320 --> 04:20.480
footprint of the 4.8 version is higher than the others. I observed that the verifier in 4.8 generated,

04:20.480 --> 04:28.640
for this range of instructions, like seven or eight times more branches, which implies that some

04:28.640 --> 04:33.520
instructions would be verified multiple times, and it requires more memory to store the branches.

04:33.600 --> 04:42.000
Now I am going to check how many programs were accepted by the verifiers. And we can see that

04:42.000 --> 04:47.200
none of the verifiers accept all the programs. However, if we look at the second part of the table,

04:47.200 --> 04:52.080
we can see that there are several reasons for rejection: the first one is the complexity limit, which is

04:52.080 --> 04:59.200
reached for the 4.9 version. There are also bad kernel configurations or a lockdown mode, but if

04:59.280 --> 05:04.000
we manage to solve all these issues, we can see that the verifier accepts all the programs

05:05.840 --> 05:14.000
except one, because of the complexity limit. So the idea is that the verifier is not always to

05:14.000 --> 05:19.040
blame for the rejection. It can be more than just the verifier: it can be the kernel configuration

05:19.040 --> 05:26.240
or the way you load your programs. Now, let's compare the verifier with PREVAIL.

05:26.640 --> 05:35.280
As I said, PREVAIL is a verifier for eBPF programs published in 2019, and it runs in user space,

05:35.280 --> 05:42.240
unlike the kernel verifier, which means it can be expensive in resources, because it uses abstract

05:42.240 --> 05:47.200
domains which require more memory and time to execute. So the abstract domains are used to

05:47.200 --> 05:57.200
represent sets of values in an abstract way. So in the zone domain,

05:58.000 --> 06:03.440
values are linked by linear constraints, so it keeps relational information between them. In comparison,

06:03.440 --> 06:08.240
the kernel verifier uses the interval domain, so it is an interval of values with a minimum and a

06:08.240 --> 06:14.080
maximum bound, and tnums, which stands for tristate numbers, which add some bit-level precision with

06:14.080 --> 06:19.360
a mask. So the mask indicates if a particular bit in the value is known or unknown; if it is known,

06:19.360 --> 06:26.320
it can be zero or one. There are also some differences in the way paths are managed.

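The tnum idea described above can be sketched in a few lines. This is a toy Python model, not the kernel's implementation, although the addition follows the same carry-propagation scheme as the kernel's tnum_add: a pair (value, mask) where a set mask bit means "this bit is unknown".

```python
from dataclasses import dataclass

M64 = (1 << 64) - 1  # model 64-bit wraparound

@dataclass(frozen=True)
class Tnum:
    value: int  # the known bits (0 wherever mask is 1)
    mask: int   # 1 where the bit is unknown

    def contains(self, x: int) -> bool:
        # x is represented if it agrees with `value` on every known bit
        return (x & ~self.mask & M64) == self.value

    def __add__(self, other: "Tnum") -> "Tnum":
        # Same scheme as the kernel's tnum_add: carries that may or may
        # not happen because of unknown bits get folded into the mask.
        sm = (self.mask + other.mask) & M64
        sv = (self.value + other.value) & M64
        sigma = (sm + sv) & M64
        chi = sigma ^ sv                    # bits where a carry may differ
        mu = (chi | self.mask | other.mask) & M64
        return Tnum(sv & ~mu & M64, mu)
```

Adding two fully known tnums stays exact, while adding an unknown low bit widens the mask so that every possible concrete sum is still covered.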
06:28.160 --> 06:33.280
In the kernel verifier, since many branches are verified, state pruning has been added to

06:33.280 --> 06:38.160
reduce the number of branches to verify. The verifier stores states at

06:38.160 --> 06:44.160
checkpoints, and when it verifies a branch, if it finds that the current branch state

06:44.160 --> 06:51.600
is the same as one checkpoint, it can prune the branch; otherwise, it continues the verification,

06:51.600 --> 06:57.360
which helps to reduce the number of branches. In comparison, PREVAIL uses a join

06:57.360 --> 07:03.600
operator, which means that when two branches meet at the same instruction, it just

07:03.600 --> 07:10.560
merges the values. There are also differences in the way loops are handled. The kernel verifier will

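The two path-management strategies just described can be contrasted on a toy interval state. This is my illustration, not either tool's real code: the kernel prunes a branch when its state is contained in a checkpoint state seen earlier, while PREVAIL joins the states at a meet point and continues with a single merged state.

```python
def contained(state, checkpoint):
    """Kernel-style pruning test: the branch can be pruned if every
    register's interval fits inside the checkpoint's interval."""
    return all(checkpoint[r][0] <= lo and hi <= checkpoint[r][1]
               for r, (lo, hi) in state.items())

def join_states(a, b):
    """PREVAIL-style join: keep one merged state per program point."""
    return {r: (min(a[r][0], b[r][0]), max(a[r][1], b[r][1])) for r in a}

# Pruning: the current branch state fits a checkpoint, so stop exploring it.
checkpoint = {"r1": (0, 10)}
prune = contained({"r1": (2, 5)}, checkpoint)

# Joining: two incoming branches collapse into a single, coarser state.
merged = join_states({"r1": (0, 3)}, {"r1": (5, 10)})
```

The trade-off is visible even here: pruning keeps precise per-path states but may explore many paths, while joining bounds the number of states at the cost of precision (the merged interval also covers values like 4 that neither branch could produce).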
07:10.560 --> 07:17.200
iterate over the loop until the exit condition is verified, and it will

07:17.200 --> 07:23.120
possibly generate a lot of branches, while PREVAIL uses a fixed-point computation, which means that

07:23.120 --> 07:28.960
when it finds a fixed point between two iterations of the loop, it can just leave the loop and continue

07:28.960 --> 07:35.600
to verify the rest. There are also some things to say about soundness. We have seen that

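The loop-handling difference can also be sketched, again as a simplification with toy interval states: a kernel-style analysis walks every iteration until the loop exits or a complexity budget is exhausted, while a PREVAIL-style analysis iterates the abstract effect of the loop body only until the state stops changing, i.e. until a fixed point.

```python
def join(a, b):
    """Interval join: smallest interval covering both inputs."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def analyze_enumerate(iters, step, state, budget=1_000_000):
    """Kernel-style: walk each iteration; cost grows with the trip count."""
    steps = 0
    for _ in range(iters):
        state = step(state)
        steps += 1
        if steps > budget:            # the verifier's complexity limit
            raise RuntimeError("complexity limit reached")
    return state, steps

def analyze_fixpoint(step, state):
    """PREVAIL-style: accumulate the loop's effect until nothing changes."""
    steps = 0
    while True:
        nxt = join(state, step(state))
        steps += 1
        if nxt == state:              # fixed point: leave the loop
            return state, steps
        state = nxt

# A counter that increments but saturates at 8, abstracted as an interval:
step = lambda iv: (min(iv[0] + 1, 8), min(iv[1] + 1, 8))
```

With this step function, the fixed-point analysis stabilizes after a handful of iterations no matter how many times the loop would actually run, while the enumerating analysis does work proportional to the trip count, which is exactly the behavior the loop-iteration figures show.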
07:36.480 --> 07:42.400
some bugs have been found in the verifier and some mistakes fixed, and as I said, since it has

07:42.400 --> 07:47.760
many subtleties, it's hard to understand, so it's hard to prove the verifier entirely, so we

07:47.760 --> 07:55.600
don't know if it's really sound or unsound. For PREVAIL, by design, it should be sound; however,

07:55.600 --> 08:02.000
as the implementation can differ from the design, it could contain bugs, so we don't really

08:02.000 --> 08:13.920
know for PREVAIL either. Now we're going to see how the verifier behaves against PREVAIL in terms of

08:13.920 --> 08:18.320
verification time, memory consumption, and accepted programs, because it serves as a comparison point.

08:18.400 --> 08:27.920
So, I used the same setup as before, with an older kernel and 6.10. The first version of PREVAIL,

08:27.920 --> 08:33.920
which is the PLDI version, ran with the older kernel, and the latest ran with the kernel 6.10.

08:37.680 --> 08:42.640
This figure shows us the verification time of the accepted programs, according to the number

08:42.720 --> 08:48.080
of instructions of these programs. And we can see that for larger programs, PREVAIL takes

08:48.080 --> 08:54.720
a lot more time to verify the program, like eight or nine seconds more than the kernel verifier.

08:56.560 --> 09:01.520
It's in part because of the zone domain, which requires more time to

09:01.520 --> 09:08.640
execute, compared to the intervals or tnums. This figure shows the memory footprint

09:08.640 --> 09:15.280
of the programs. And we have the same remark as before: PREVAIL requires more memory

09:15.280 --> 09:21.040
to verify the programs. However, we can see that between the PLDI version and the latest one,

09:21.040 --> 09:25.840
there is a huge difference in terms of memory consumption, like one gigabyte. It's because

09:25.840 --> 09:31.200
when PREVAIL was presented, it used some external libraries, and now all this is

09:31.200 --> 09:34.720
included in the source code, which reduced its memory usage, which is good.

09:34.800 --> 09:41.280
So, previously, we saw the number of accepted programs for the kernel; now let's talk

09:41.280 --> 09:47.840
about PREVAIL. PREVAIL had one goal also, which is to reduce the number of false

09:47.840 --> 09:54.000
positives. You can see that for the PLDI version of PREVAIL, there's one rejected program;

09:54.000 --> 09:58.640
with the latest version, there are four rejected programs. It seems that there is a

09:58.640 --> 10:04.880
regression. If you look at the second table, it shows the number of rejected programs by

10:04.880 --> 10:09.520
each version in the left column, which are accepted by the kernel verifier. We can see

10:09.520 --> 10:14.800
that the ones rejected by the latest PREVAIL are accepted by the first version of PREVAIL.

10:14.800 --> 10:20.240
And all the programs rejected by both versions of PREVAIL are accepted by the kernel verifier.

10:20.240 --> 10:25.680
So, PREVAIL had a regression in terms of accepted programs, and it rejects valid programs

10:25.680 --> 10:33.360
that the kernel verifier accepts, for this set of samples. Now, I'll show you some differences

10:33.360 --> 10:41.680
in the way loops are verified by the two verifiers. I'll use a template from the

10:41.680 --> 10:47.360
PREVAIL repository, because it's interesting. It contains two non-unrolled loops, and some if-

10:47.360 --> 10:52.480
else in it, and I configured the number of loop iterations, which changes how much the verifier

10:52.480 --> 10:59.600
will have to iterate over the loops. So, the two figures show the verification time and the

10:59.600 --> 11:05.280
memory footprint, according to the number of loop iterations. And you can see that

11:05.280 --> 11:09.520
while PREVAIL finds a fixed point early, the kernel verifiers continue to increase

11:09.520 --> 11:16.560
the verification time, until reaching the complexity limit at around 155 loop

11:16.880 --> 11:22.160
iterations. And we can see that the memory footprint seems the same, or is

11:22.160 --> 11:27.440
nearly the same whatever the number of loop iterations, for the verifier. We can see also that

11:27.440 --> 11:32.080
the older kernels reject the programs, because bounded loop verification was

11:32.080 --> 11:38.480
added in 5.3. And for PREVAIL, I don't know why they are rejected; in the paper, they said that

11:38.480 --> 11:43.680
they should be accepted, but as they didn't give information about the compiler they used,

11:43.680 --> 11:52.160
it can be because of this. So, that's all for this part. To conclude, we saw that the performance of the

11:52.160 --> 11:56.720
verifier has improved over the versions of the kernel. We have almost no

11:56.720 --> 12:01.840
regression, which is a good thing. And program rejection is not only due to the verifier,

12:01.840 --> 12:08.480
like I have already said; it can be broader than this. PREVAIL has an interesting approach to

12:08.480 --> 12:12.960
bounded loop verification with the fixed-point computation. And the latest versions of PREVAIL

12:12.960 --> 12:18.640
have reduced its memory usage, which is a good thing. However, it seems that there is a regression

12:18.640 --> 12:22.000
with the latest version of PREVAIL, but it would require further investigation, because it

12:22.000 --> 12:27.440
was not one of the main goals of the talk. Thank you, everyone, for joining.

12:27.440 --> 12:31.120
I hope it was interesting, and now I'm open to questions for the last minutes.

12:38.480 --> 12:59.520
The question is, which version of PREVAIL I used with which kernel. I used the first version of PREVAIL

12:59.520 --> 13:04.160
with the older kernel, because it matches the experiments of the paper. And

13:04.160 --> 13:08.720
I used the latest PREVAIL with the 6.10 kernel, because it was the latest at the time of doing the experiments.

13:10.720 --> 13:17.440
So the differences, in terms of program rejections or acceptance, come from the

13:18.080 --> 13:22.960
evolution of PREVAIL. Yes, it's not due to the kernel. Yes.

13:30.960 --> 13:34.960
The question is about usability.

13:34.960 --> 13:42.960
You see some errors, for example, the error on the standard output when

13:42.960 --> 13:48.960
the program is rejected. So the idea is, which one has

13:48.960 --> 13:56.960
a better output in terms of readability. Personally, as I sometimes use eBPF on Linux,

13:58.960 --> 14:02.960
I understand the output of the kernel verifier better when there are rejections.

14:02.960 --> 14:08.960
PREVAIL outputs something linked to the linear constraints, so it can be a bit complicated to

14:08.960 --> 14:10.960
interpret them.

14:20.960 --> 14:22.960
About the rejected programs?

14:24.960 --> 14:26.960
I don't know why they are rejected, actually.

14:26.960 --> 14:30.960
I saw a commit that changes something in the

14:30.960 --> 14:36.960
verification, at the moment where the

14:36.960 --> 14:46.960
pre-invariant is compared to the post-invariant, but I didn't dig into what the purpose of this is.

14:46.960 --> 14:56.960
Not this slide, the one about memory.

14:56.960 --> 15:02.960
The previous one, a couple of slides before.

15:02.960 --> 15:08.960
Yes. So, can you explain the gap, that

15:09.960 --> 15:12.960
it is one gigabyte?

15:12.960 --> 15:14.960
Yes.

15:16.960 --> 15:24.960
The commit which included the external libraries in the source got back 400 or 500 megabytes.

15:26.960 --> 15:30.960
For the latest version, the footprint is pretty normal for this kind of

15:30.960 --> 15:34.960
program, like 200 megabytes.

15:48.960 --> 15:51.960
I would also say that when there are more instructions to verify,

15:51.960 --> 15:54.960
it will take a lot more memory. I tested the

15:54.960 --> 16:00.960
same template, with the non-unrolled loops, and at each

16:00.960 --> 16:04.960
iteration, it requires more memory to verify the program.

16:14.960 --> 16:18.960
I used an old version of the

16:18.960 --> 16:22.960
program, and I added more and more instructions, but I did not have the

16:22.960 --> 16:26.960
time to put it on my slides.

16:26.960 --> 16:28.960
Someone else?

16:28.960 --> 16:34.960
What would you say is the conclusion of your work? Would you say

16:34.960 --> 16:38.960
something about whether we should use PREVAIL in the kernel, or keep the

16:38.960 --> 16:43.960
current verifier, given how they have evolved? Given the

16:43.960 --> 16:47.960
high memory consumption of PREVAIL,

16:47.960 --> 16:50.960
I assume we can't really use it in the kernel, and it runs

16:50.960 --> 16:54.960
in user space, but maybe we could use some of its

16:54.960 --> 16:57.960
mechanisms to improve the kernel verifier.

17:08.960 --> 17:10.960
Thank you.

