WEBVTT

00:00.000 --> 00:11.360
Before we dive into the talk, I would first like to ask how many of you have ever worked

00:11.360 --> 00:18.400
on a document and you've ignored the battery warnings for the past hour when suddenly your

00:18.400 --> 00:24.480
battery dies and you realize the last time you saved was about an hour ago and the hour

00:24.480 --> 00:26.480
is gone.

00:26.640 --> 00:35.520
Well, it's happened to me and it's very annoying so I hope this work can help alleviate this

00:35.520 --> 00:42.960
issue and it can prove useful not just for programmers but for regular computer users in the future.

00:43.920 --> 00:46.640
With that out of the way, let's officially begin.

00:46.720 --> 00:57.120
Right, so first let's get our definitions out of the way. What is persistence?

00:57.920 --> 01:06.320
Academically, it's defined as the period of time during which an object is usable and since we're

01:06.320 --> 01:15.520
dealing with operating systems and kernels here, what we mean by object can basically be any

01:15.520 --> 01:23.600
state of a program, like for example variables, expressions, etc. And what we want to

01:23.600 --> 01:32.240
persist through is exactly what I mentioned in my example: system crashes, system outages that

01:32.240 --> 01:40.880
happen unexpectedly. In short, your programs should not fear death.

01:41.840 --> 01:52.000
Now, this is not a new topic, it's been around in academia for the past couple decades and there

01:52.000 --> 01:59.760
are some traditional ways. Traditional operating systems, such as Linux and Windows for example,

02:00.720 --> 02:06.320
do have some persistence mechanisms, and I'm sure you're all familiar with them:

02:07.280 --> 02:23.120
files. The problem with files is that the user space application is

02:23.120 --> 02:29.600
responsible for the persistence mechanism. It has to serialize the data in order to persist it,

02:30.320 --> 02:35.360
it has to create the file, write to it, update it, and so on.

02:37.200 --> 02:46.480
Persistence is in the hands of the application, right? So let me introduce to you something you may

02:46.480 --> 02:55.520
not be familiar with: Phantom OS. It is an operating system offering orthogonal persistence.

02:56.480 --> 03:05.120
Actually, Phantom OS has a new methodology in mind: they want to do away with files as a

03:05.120 --> 03:13.120
persistence mechanism. A big caveat here in terms of the user space programs; you'll see why in a bit.

03:14.080 --> 03:22.800
In short, it wants to do away with files. And yeah, I'll explain how it does so, but as you can see here,

03:22.800 --> 03:29.040
I've highlighted a term, orthogonality, which you may not be familiar with, but you will be shortly.

03:30.080 --> 03:38.640
So orthogonality refers to these three concepts. Independence states that all data should

03:38.640 --> 03:46.720
be treated the same, no matter how long the data lives for. The data type should not

03:46.720 --> 03:51.680
imply anything about the persistence either, which is the second of the three

03:53.280 --> 04:01.920
principles, which you can read here. But in short, orthogonal persistence refers to the

04:01.920 --> 04:09.120
fact that persistence is in the hands of the kernel, not in the hands of the user space program.

04:09.600 --> 04:17.120
Programmers should only worry about getting the business logic right, no need to worry about saving

04:17.120 --> 04:27.360
data. In short, in traditional systems, system outages are fatal. In Phantom OS,

04:27.360 --> 04:35.360
it's just regular daily life. Now, I'm sure you're curious: how does Phantom OS achieve this

04:35.920 --> 04:42.000
new type of persistence? Well, if you've played a video game in the past 20 or so years,

04:42.560 --> 04:49.360
you know that most games have some sort of checkpoint mechanism, and it's much the same here.

04:49.360 --> 04:53.520
But instead of calling them checkpoints, we call them snapshots.

04:55.280 --> 05:02.800
So, okay, let's take a pause on that for a second. All user space programs are compiled to run on the

05:02.880 --> 05:09.520
Phantom virtual machine, which is a language virtual machine, much like JVM, the Java virtual machine,

05:10.240 --> 05:18.160
and it has strong typing, which is backed up by memory protections. Now, when the different

05:18.160 --> 05:25.200
user space programs run, they all run inside one singular address space, which is shared by all

05:26.160 --> 05:38.480
programs. So now, the snapshot mechanism. Please note that this is all done asynchronously to the running

05:38.480 --> 05:46.160
programs. So, the user space programs are interrupted very minimally. Yep, for performance reasons.

05:46.720 --> 05:54.960
So, first, we want to freeze the persistent address space, and to enable this to be done asynchronously,

05:55.120 --> 06:03.360
we use a copy-on-write mechanism. Once we have frozen the address space, we copy all the data to

06:03.360 --> 06:10.720
disk in what's known as a superblock, and then we unfreeze the persistent address space.
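
As a rough illustration of the freeze / copy / unfreeze cycle just described, here is a minimal Python sketch. All class and function names here are mine, not Phantom OS's actual API; a dict stands in for the on-disk superblock.

```python
# Minimal sketch of the snapshot cycle: freeze the address space with
# copy-on-write semantics, dump the frozen pages to a "superblock",
# then unfreeze. All names are illustrative, not Phantom OS code.

class AddressSpace:
    def __init__(self, pages):
        self.pages = dict(pages)   # live pages: page id -> bytes
        self.frozen = None         # stable view while a snapshot runs

    def freeze(self):
        # Copy-on-write stand-in: remember the current page objects;
        # writers replace pages instead of mutating them in place, so
        # this frozen view keeps seeing the old contents.
        self.frozen = dict(self.pages)

    def write(self, page_id, data):
        self.pages[page_id] = data  # replace, never mutate in place

    def unfreeze(self):
        self.frozen = None

def take_snapshot(aspace):
    aspace.freeze()
    # In Phantom OS this copy goes to disk; here the dict plays the
    # role of the on-disk superblock.
    superblock = {pid: bytes(d) for pid, d in aspace.frozen.items()}
    aspace.unfreeze()
    return superblock

space = AddressSpace({0: b"hello", 1: b"world"})
snap = take_snapshot(space)
space.write(0, b"HELLO")       # a write after the snapshot
assert snap[0] == b"hello"     # the snapshot still sees the old page
```

The key property the talk relies on is visible at the end: writes that land after the freeze do not disturb the snapshot, which is why user space programs are barely interrupted.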

06:12.160 --> 06:18.160
And of course, we can then do the inverse to recover the data back from the superblock after the

06:18.160 --> 06:29.600
system outage. Now, this way of doing the snapshot is fast, and it's very easy to implement

06:29.600 --> 06:37.520
from a developer perspective. However, what happens if your disk storage gets corrupted?

06:38.320 --> 06:45.680
Well, you're cooked, right? You don't have any way to recover the data, it's corrupted.

06:45.920 --> 06:52.400
And I hear you thinking, wouldn't it be nice to have an abstraction layer such as a file system

06:52.400 --> 07:02.240
that can handle this sort of recovery? Well, that's exactly what the new snapshot mechanism aims to

07:02.240 --> 07:10.640
do by integrating a file system as part of the snapshot mechanism. Now, I know how in the first part

07:10.640 --> 07:19.680
of the presentation, I was bashing against files. However, this file system is just a way to manage

07:19.680 --> 07:25.920
the data storage. It has no implications for the user space programs.

07:28.080 --> 07:33.920
Yeah, and by using the file system, we can have retention and redundancy policies which I'll go

07:34.880 --> 07:43.120
over in a bit. We also added multiple generations of snapshots, which I'll

07:43.120 --> 07:51.760
cover later. Right, a brief caveat. Phantom OS was ported to the Genode framework by a student

07:51.760 --> 08:00.000
working on this before me. This enabled us to use the Genode ecosystem of drivers and also to utilize

08:00.160 --> 08:06.080
the microkernel approach and configurable security policies and capabilities.

08:07.920 --> 08:16.240
And of course, one such project in the Genode ecosystem is the lwext4 port developed by Josef,

08:16.800 --> 08:26.640
which proved invaluable in this work. So, let me just give you a brief overview of

08:26.720 --> 08:31.440
what's going on under the hood? We have user land, which lives on top of the persistent

08:31.440 --> 08:40.000
address space, then we have Genode backing everything up. And we have the new snapshot

08:40.000 --> 08:48.560
mechanism running as a separate Genode component. Now, this has isolation advantages and it also allows

08:48.880 --> 08:58.080
us to manage snapshots from many different components and to also use the snapshot mechanisms

08:58.080 --> 09:11.440
in other Genode projects. Right, so how does this new snapshot mechanism work? Well, let's start

09:11.440 --> 09:16.800
from the basics again. You have your one singular persistent address space on which every single

09:17.120 --> 09:24.400
program lives. And of course, it's split up into virtual memory pages. So, the first step would

09:24.400 --> 09:31.680
be to label all of these pages. I will refer to these labels as snapper identifiers.

09:34.480 --> 09:39.920
And so, let's take a snapshot of the persistent address space, but instead of saving the raw

09:39.920 --> 09:46.560
data to disk, let's just save it into files in a file system, and let's give it some sort of

09:46.560 --> 09:56.240
directory hierarchy. The snapshot data will be stored in the black files, which I will refer to

09:56.240 --> 10:03.520
as snapshot files. And please note that the labels on the pictures here do not necessarily correspond

10:03.520 --> 10:11.280
to snapper identifiers. They're just locations in this directory hierarchy and the mapping

10:11.280 --> 10:18.080
between the snapper identifiers and the paths is managed by the white files, which you can see over

10:18.080 --> 10:26.160
there: the archive file. Now, okay, this is all well and good. We take one snapshot, we take another

10:26.240 --> 10:38.320
snapshot. However, while this may be good in terms of redundancy, let's say, yes, we took a snapshot here,

10:38.320 --> 10:44.480
we have some data stored here. And then maybe we have the same data because it didn't change

10:44.480 --> 10:51.680
between the snapshots, it's still saved here. We have the same file storing the same data. Now,

10:52.640 --> 10:59.120
this, again, it's good for redundancy, but in terms of disk storage efficiency, it's suboptimal.

11:00.720 --> 11:09.440
So, what do we do when we have the same file? Well, we create a link, right? We build the new generations

11:09.440 --> 11:18.240
on top of previous ones. And we just link back to older versions that contain the same snapshot data.

11:19.120 --> 11:24.000
You can think of this link as a hard link. However, it's not implemented as a hard link,

11:24.000 --> 11:30.960
despite my best efforts, because it's not supported in Genode. So, this linking capability

11:30.960 --> 11:36.480
is done via mappings in the archive file and reference counting in the files themselves.
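
A small sketch of how such linking could work without hard links, assuming (as the talk states) a per-generation archive mapping plus per-file reference counts. The paths, dicts, and function names are hypothetical, not the project's real structures.

```python
# Hypothetical sketch of the linking scheme: an unchanged page in a new
# generation reuses the previous generation's snapshot file, tracked by
# a reference count and the archive mapping, since Genode offers no
# hard links. Layout and names are illustrative only.
import hashlib

store = {}     # snapshot-file path -> {"data": bytes, "refs": int}
archives = []  # one archive (snapper id -> path) per generation

def take_generation(pages):
    mapping = {}
    prev = archives[-1] if archives else {}
    for sid, data in pages.items():
        prev_path = prev.get(sid)
        if prev_path is not None and store[prev_path]["data"] == data:
            store[prev_path]["refs"] += 1   # "link" back: no new file
            mapping[sid] = prev_path
        else:
            digest = hashlib.sha256(data).hexdigest()[:8]
            path = f"gen{len(archives)}/{digest}"
            store[path] = {"data": data, "refs": 1}
            mapping[sid] = path
    archives.append(mapping)

take_generation({1: b"aaaa", 2: b"bbbb"})
take_generation({1: b"aaaa", 2: b"BBBB"})   # page 1 unchanged
assert archives[1][1] == archives[0][1]     # shared file, no duplicate
assert store[archives[0][1]]["refs"] == 2
```

The reference count is what lets a cleanup pass know when an old snapshot file is no longer reachable from any retained generation.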

11:39.680 --> 11:47.840
This is the data layout of the snapshot file. We have the version of the snapper mechanism, a hash,

11:48.800 --> 11:54.880
which ensures the integrity of the version and the data, and the reference count to manage

11:54.880 --> 12:02.640
linking. And the archive file is very similar. We have a version, a hash, which covers

12:02.640 --> 12:09.280
the two fields here, N and the data. N refers to how many mappings we have in each archive file.

12:09.280 --> 12:17.120
And data holds the actual key-value pairs of snapper identifiers and snapshot file paths.
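
The two layouts just described could be packed along these lines. This is a guess at a byte layout matching the fields named in the talk (a hash, a version, a reference count or entry count, then data); the project's actual on-disk format may differ.

```python
# A guess at byte layouts matching the fields described: a snapshot
# file carries a hash, a version, a reference count and the page data;
# an archive file carries a hash, a version, an entry count N and N
# mappings. The real on-disk format may differ from this sketch.
import struct, zlib

def pack_snapshot_file(version, refcount, data):
    body = struct.pack("<II", version, refcount) + data
    return struct.pack("<I", zlib.crc32(body)) + body   # hash up front

def unpack_snapshot_file(blob):
    (stored,) = struct.unpack_from("<I", blob)
    body = blob[4:]
    assert zlib.crc32(body) == stored, "corrupted snapshot file"
    version, refcount = struct.unpack("<II", body[:8])
    return version, refcount, body[8:]

def pack_archive_file(version, mapping):
    # mapping: snapper identifier -> snapshot file path
    entries = b"".join(struct.pack("<I", sid) + path.encode() + b"\0"
                       for sid, path in mapping.items())
    body = struct.pack("<II", version, len(mapping)) + entries
    return struct.pack("<I", zlib.crc32(body)) + body

blob = pack_snapshot_file(1, 3, b"page data")
assert unpack_snapshot_file(blob) == (1, 3, b"page data")
```

The hash-over-body check is what later lets recovery detect files that survived a crash with corrupted contents.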

12:18.800 --> 12:24.960
All right, so benchmarks are a favorite topic. Let's get into it. Here are the boring specs.

12:26.480 --> 12:34.480
Ooh, okay, so it doesn't do very well. And no, this is not a mistake in the graph. This is the original

12:34.480 --> 12:41.200
superblock approach. As you can see, it's quite fast compared to our brute-force implementation.

12:42.160 --> 12:49.360
Yeah, it takes 43 seconds. Now, it's important to note that these 43 seconds are not interrupting

12:49.360 --> 12:57.360
the user space programs. They are running just fine. This 43 seconds basically stops us from taking

12:57.360 --> 13:03.520
a new snapshot within this time frame. So, we have to wait until this is over before we can take

13:04.480 --> 13:09.440
the next one; the snapshot process is synchronous between different snapshots. And also, we have a wider

13:10.560 --> 13:16.640
range where we can have a power outage during the snapshot. And we can lose the data that we're trying

13:16.640 --> 13:26.480
to save, which is not good. So, let's optimize. We have our snapshot file, which stores some data,

13:26.480 --> 13:35.040
which is referred to by the snapper identifier: a one-to-one mapping. Why don't we squeeze more

13:36.240 --> 13:45.760
virtual memory pages into this, in order to reduce the I/O workload? So, that's what we did.

13:45.760 --> 13:52.720
We grouped a bunch of memory pages and made it so that all their data is packed into one snapshot file.
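
The grouping step itself is simple batching, sketched below. The 128-pages-per-file figure comes from the benchmarks mentioned later in the talk; the constants and names here are otherwise illustrative.

```python
# Sketch of the grouping optimization: pack a fixed number of virtual
# memory pages into one snapshot file to cut the number of I/O
# operations. 128 pages per file is where the talk's benchmarks level
# out; PAGE_SIZE is a typical value, assumed here for illustration.
PAGE_SIZE = 4096
PAGES_PER_FILE = 128

def group_pages(pages):
    """pages: list of PAGE_SIZE byte blocks -> list of file payloads."""
    return [b"".join(pages[i:i + PAGES_PER_FILE])
            for i in range(0, len(pages), PAGES_PER_FILE)]

payloads = group_pages([bytes(PAGE_SIZE)] * 300)
assert len(payloads) == 3                 # 128 + 128 + 44 pages
assert len(payloads[0]) == 128 * PAGE_SIZE
```

The trade-off the talk then points out follows directly: the bigger the group, the more likely one dirty page forces the whole file to be rewritten instead of linked.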

13:53.600 --> 13:59.600
Just a quick note. This optimization is done on Phantom OS. It's not managed by the snapper mechanism.

14:00.320 --> 14:06.960
Since we want to keep the snapshot mechanism flexible to capture whatever the client component wants to

14:06.960 --> 14:14.800
give it. So, again, this is on Phantom OS. And with this, we can see drastic performance

14:14.800 --> 14:24.880
improvements, which level out at about 128 pages per snapshot file. And it takes around one to two seconds.

14:26.000 --> 14:30.320
Again, much slower than the original, but we have fault tolerance. So,

14:32.400 --> 14:39.520
yes. And one quick thing to mention here, since we are grouping a lot of pages into one snapshot

14:39.520 --> 14:49.200
file, if any of the virtual memory pages changes its content, then we have to update.

14:49.200 --> 14:53.920
We can't just link to a previous snapshot file in the previous generation, right? We have to

14:55.120 --> 15:00.880
basically create a new snapshot file. And that's what you can see here. For example, in the last

15:01.840 --> 15:10.080
column, we are changing almost 25% of our snapshot files every new generation.

15:10.960 --> 15:17.440
And if we have one virtual memory page per file, we change only the ones that we need to change, effectively.

15:19.440 --> 15:26.000
And as for how you can use and configure this: you can configure the redundancy levels,

15:26.080 --> 15:32.240
which refer to after how many generations you want to start making redundant copies.

15:32.240 --> 15:38.080
Because if you just have links back to other generations and something happens to that

15:38.080 --> 15:48.720
old generation, you lose data. So, this redundancy basically creates copies after a certain number of

15:48.720 --> 15:54.960
generations. And you can configure the number of snapshots you want to have at all times,

15:56.240 --> 16:00.320
the expiration date, and some other stuff; look at the GitHub for more.
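
A hypothetical configuration sketch of the knobs just listed. The real options live in the project's configuration (see the GitHub repo); these field names, defaults, and the helper are mine, chosen only to mirror what the talk describes.

```python
# Hypothetical policy sketch; field names and defaults are invented to
# mirror the options described in the talk, not the real config keys.
from dataclasses import dataclass

@dataclass
class SnapperPolicy:
    redundancy_every: int = 5  # full copies instead of links every N generations
    keep_snapshots: int = 10   # how many generations to retain at all times
    expire_after_days: int = 30  # drop snapshots older than this

def must_copy(generation, policy):
    # On a redundancy boundary we store full copies, so corruption of
    # one old generation cannot break the whole chain of links.
    return generation % policy.redundancy_every == 0

policy = SnapperPolicy()
assert must_copy(10, policy)
assert not must_copy(11, policy)
```

The design point is the one the talk makes: pure link chains are space-efficient but fragile, so periodic full copies bound how much a single corrupted generation can take down.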

16:02.160 --> 16:12.160
And some fault tolerance frequently asked questions. What happens if the system shuts down during

16:12.480 --> 16:19.200
the snapshot process and it corrupts the file system? Well, since we are using the file system,

16:19.200 --> 16:29.360
we can benefit from the utilities around it. So, you can just use fsck to fix your file system.

16:29.360 --> 16:35.680
However, this leads to the second question. After using this tool, we had corrupted data.

16:35.680 --> 16:40.960
Okay, we recovered the file system, but that data is still corrupted and it invalidates the hashes.

16:42.160 --> 16:49.200
So, unfortunately, you will have to go into the file system and remove the problematic files to

16:49.200 --> 16:58.640
get rid of the warnings. But then your system should recover the non-problematic pages.

17:00.880 --> 17:06.320
Now, towards the end of the talk, I would like to mention some other developments in the Phantom OS world,

17:06.880 --> 17:13.680
especially these two projects done by students prior to me. We have the introduction of

17:13.680 --> 17:20.560
a WASM runtime in order to increase support for more programming languages. And we also have

17:22.000 --> 17:34.000
a way to restore network connections. For example, with TCP, if one machine dies, and it's before a certain

17:34.000 --> 17:40.960
timeout, you can still recover the connection. However, with this work,

17:40.960 --> 17:47.520
you can restore the connection even if both machines die.

17:48.960 --> 17:55.280
Now, let's see a demo of Phantom OS using this new mechanism.

17:55.360 --> 18:11.280
So, here we start the operating system. As you can see, we have a weather app displaying some sort of

18:11.280 --> 18:19.280
state; it's making some progress. And we started the snapshot; it committed here. So, now,

18:19.360 --> 18:26.640
if the system crashes, just remember where the graph is. Okay, the system crashed.

18:28.320 --> 18:34.400
Yep. So, let's see if we recover the state. Here we reboot.

18:38.640 --> 18:48.720
Recovering. And there you go. Right? Like, yeah, you didn't lose anything. No files needed,

18:49.520 --> 18:56.000
none that you know of; it's all managed by the kernel. You didn't lose any of your work.

18:57.040 --> 19:12.800
So, to conclude, what did we cover? The question of traditional persistence mechanisms and

19:12.800 --> 19:21.440
whether they're outdated or not? Of course, we covered how Phantom OS used to take snapshots

19:21.440 --> 19:27.680
of the data, as well as the newer alternative. Whether it's better or not, I leave that up to you.

19:29.440 --> 19:35.120
I do want to mention that this work is experimental and not ready for production use,

19:35.760 --> 19:42.720
but I think it's a cool concept. And of course, here are some references in case you're interested

19:43.280 --> 19:49.520
Also, I had a talk on Phantom OS in 2020. So, if you want to go check

19:49.520 --> 19:54.320
that out, please do. And I leave the floor up to any questions that you may have.

19:54.720 --> 19:56.720
Thank you.

20:01.760 --> 20:06.800
Yep. Right, so, in one of your slides, you mentioned that your hashes are just four

20:06.880 --> 20:16.880
bytes; isn't that a little bit narrow? Yes, it's a 32-bit hash that does need to be

20:18.160 --> 20:26.560
questioned, in theory. Yes. So, that's a great question. So, to restate the question,

20:28.320 --> 20:32.960
I am using hashes that are, let's say, not the best for error detection,

20:33.440 --> 20:41.120
since they're not really collision-free. And it's restricted to four bytes. Of course,

20:41.840 --> 20:44.560
this is very, I'll just go back to the slide.

20:46.640 --> 20:54.800
Yeah, for instance, this. This is still in development. So, yes, it could be expanded to something larger,

20:54.800 --> 21:04.160
but I prioritized quickness. So, those were the trade-offs I had in mind. And also, the hashes are

21:04.160 --> 21:12.640
mainly there to keep track of whether the state has changed, if that makes sense. So, of course.
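
The answer's point, a 4-byte hash used mainly as a cheap change detector rather than a strong integrity check, can be sketched like this. CRC32 here is only a stand-in for whatever 32-bit hash the implementation actually uses.

```python
# Sketch of a 4-byte hash used mainly to detect whether a page changed
# between generations, as described in the answer above. CRC32 stands
# in for whatever 32-bit hash the implementation actually uses; it is
# cheap but not collision-free, which is the trade-off discussed here.
import zlib

def page_changed(old_crc, page):
    return zlib.crc32(page) != old_crc

crc = zlib.crc32(b"page contents")
assert not page_changed(crc, b"page contents")
assert page_changed(crc, b"page contents!")  # collisions possible, but rare
```

With only 32 bits, an unlucky collision makes an actually-changed page look unchanged, which is exactly the stale-state risk raised in the follow-up question.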

21:12.640 --> 21:17.440
What happens if you actually run into a collision? Would it ruin your snapshot?

21:17.440 --> 21:20.480
Yeah, it will just create a new snapshot file. Because once

21:21.120 --> 21:30.080
we detect an erroneous hash, because the data doesn't match... or if it matches,

21:30.080 --> 21:38.320
then, yes. Okay, I see, yes, you would reuse some stale state, which is not the best, but yeah,

21:38.320 --> 21:45.120
this should probably be changed, yeah. Yeah. Thanks. I wanted to ask: how does your work

21:45.120 --> 21:49.680
relate to some of the hardware advances, like non-volatile random-access memory?

21:49.680 --> 21:54.400
That might actually sort of render your work obsolete, I don't know.

21:54.400 --> 22:00.640
I haven't looked into that. But it's basically RAM that keeps its state across system restarts, even if you

22:00.640 --> 22:10.160
don't power it. Ah, okay. Yeah, this work is for traditional RAM. Obviously,

22:10.240 --> 22:16.960
non-volatile memory would be great to have, and you can buy it even nowadays.

22:18.080 --> 22:27.440
Yes, yes. But you might actually, yeah, I mean, still use your mechanism

22:27.440 --> 22:36.000
for managing multiple snapshots, and basically use the NVRAM as an accelerator. So, yeah,

22:36.000 --> 22:42.720
perhaps, yeah. I'll look into it. But of course, if it's non-volatile,

22:42.720 --> 22:48.000
but then we don't need to manage the snapshots, right? Like, if it's non-volatile, we'll just persist.

22:48.000 --> 22:57.440
So, maybe, yeah, definitely. I mean, I'll just look into it and see whether or not it

22:57.520 --> 23:04.880
obsoletes this work. Okay, okay. Another question, very quickly: what's the

23:04.880 --> 23:10.240
relation, actually, between the original kernel and this one? Is this really

23:10.240 --> 23:18.480
just a continuation, a reimplementation; how would you call it? It's mainly worked on by students,

23:18.480 --> 23:24.720
so it's just exploring a new direction, building on top of the microkernel capabilities that,

23:24.800 --> 23:32.720
you know, Genode provides. So, I mean, in my opinion, it would be great if that would be the main

23:32.720 --> 23:37.520
branch. But there's also a separate Phantom OS branch, which is

23:38.800 --> 23:45.440
just purely Phantom OS. However, this one has the benefit of, like, using the networking stack of

23:45.440 --> 23:52.640
Genode. So, yeah, the original is still being worked on, yes. Let's go to a

23:52.640 --> 23:59.360
question from the back. Yeah, so I remember from the original Phantom OS talk that the

23:59.360 --> 24:06.000
address space is, like, 64 bits, right? Yep. But here you show that you could run multiple

24:06.000 --> 24:11.680
Phantom OS instances, right? So, I want to confirm: what is kind of the granularity of

24:11.680 --> 24:16.400
a Phantom OS instance? Like, would you run everything in one, or would

24:16.400 --> 24:22.320
different applications go in different instances? Okay, so the question was, could we have

24:23.440 --> 24:27.600
more than one Phantom OS instances, and would we use the different

24:29.440 --> 24:39.120
instances for different apps, is that correct? From what I've read and what I've worked on,

24:39.120 --> 24:47.600
it's under the assumption that it's one Phantom OS instance. And, well, I guess, I don't know,

24:48.880 --> 24:55.120
it seems like you'd promote having different containers, different protection

24:55.120 --> 25:00.800
domains, for different applications on Phantom OS, if you really want to. But it's designed to run

25:00.800 --> 25:07.920
all your programs in one instance. Now, yes, maybe you could find a use case for having

25:08.000 --> 25:13.040
different Phantom OS instances, but they wouldn't be able to communicate with each other.

25:14.080 --> 25:18.480
So, they'll have to be managed separately. But I guess you could then link it back to this

25:18.960 --> 25:26.720
snapper mechanism, which is running as its own server, to manage all of your instances. So,

25:26.720 --> 25:33.040
I could see it being used like you suggested. Yep.

25:33.040 --> 25:38.640
So, if your premise is to get rid of the file system for applications, right? Yeah.

25:38.640 --> 25:43.440
And the file system is not only used for persistence, but also for communication in a sense, right?

25:43.440 --> 25:53.840
Well, you open an editor, write a C file, and compile it with a different program, so how does that fit into your

25:53.840 --> 25:58.480
concept, basically, given that you also use your file system for purposes other than just, you know, persistence?

25:59.440 --> 26:05.920
Yeah. So, the question was, how do we, files are not only used for persistence, they're also used for

26:05.920 --> 26:15.120
message passing, and how does Phantom OS accommodate that? Well, Phantom OS still

26:15.120 --> 26:22.880
wants to support files. However, it promotes writing programs that don't

26:23.440 --> 26:30.960
depend on files for persistence. Mainly, that's what it means. I guess

26:31.840 --> 26:39.280
it still supports files; you can still use files for persistence, but that's not what it

26:40.000 --> 26:44.160
promotes. So, you could use files for message passing, yeah. Yeah.

26:45.520 --> 26:46.400
Yep. Yeah.

26:46.400 --> 27:08.160
So, the question was, can you take snapshots of just certain processes that you're interested in?

27:09.280 --> 27:14.080
That's a very interesting topic, one which I really want to look into.

27:15.040 --> 27:23.360
However, I haven't looked into it yet, since all the processes are running

27:23.360 --> 27:32.080
in a single persistent address space, which, again, uses strong typing enforced by the virtual machine.

27:33.360 --> 27:41.600
So, when looking through the code base, I couldn't really discern which data was used

27:41.680 --> 27:48.640
for which file. Again, more research is needed in that area, but it's a really cool concept,

27:48.640 --> 27:56.320
because currently, when it freezes, the snapshot mechanism has to go through all of the pages,

27:56.320 --> 28:03.680
and it would be really cool if you could go through only a certain region, let's say, and then

28:03.680 --> 28:07.600
pause that and then pause another one. So, yeah. Do you have a connection between,

28:07.600 --> 28:13.040
like, a process ID and an application? Because then, if you don't have that context,

28:13.040 --> 28:18.480
and you see, yeah, this is positive, you cannot simply kill the app or remove the app from

28:18.480 --> 28:23.280
the context, because that means you need to store all kinds of information about the processes as well.

28:23.280 --> 28:29.120
So, yeah, that's what I was going to say; yeah, I think it's doable. So, and that's it.

28:29.120 --> 28:33.440
Yeah. I haven't looked into the, yeah, well,

28:33.440 --> 28:37.600
because if you have the different types, not sure that you're going to store them, then you

28:37.600 --> 28:42.080
will have a good application, you'll look at the trace, what would you do? Yeah, that is it,

28:42.080 --> 28:47.600
that also looks bad. Excuse me, can you probably read this? That's it, that's it, right?

28:47.600 --> 28:51.600
Yeah. Yeah. Yeah. Yeah.

28:54.160 --> 29:00.040
You use one single address space for all applications, yeah. Is that also

29:00.040 --> 29:08.520
the case for the virtual memory address space? Yeah. Well, then the browser process could

29:08.520 --> 29:15.720
access memory from the web app, in that example. There are some memory protections in place.

29:15.720 --> 29:23.640
I haven't looked into the code of the PVM. So, I can't answer in a detailed way, but I've read

29:23.640 --> 29:29.240
that there are memory protections, and that's as far as my knowledge extends in that area.

29:30.040 --> 29:36.360
If you have, yeah, memory protections, you could use a communication model just like a Smalltalk

29:36.360 --> 29:41.800
VM did back in the day. The idea is that you could have objects that could be

29:41.800 --> 29:48.280
accessible to each other. Yeah. I assume there is some sort of protection, but again,

29:48.280 --> 29:55.400
I can't answer you with 100% certainty. Yeah. It sounds like when you take a snapshot,

29:55.400 --> 30:00.280
you have to freeze everything and scan every page. Have you experimented with write-protecting

30:00.280 --> 30:04.680
pages so that you can actually see which ones are being modified? That's exactly what it does,

30:04.680 --> 30:09.880
because it's copy-on-write. So, it sets all the pages to read-only.

30:11.800 --> 30:16.600
And then it handles all the page changes that happen during the snapshot process. Yeah. Yeah.

30:16.600 --> 30:26.600
Yes. Okay. Thank you. Thank you.

