WEBVTT

00:00.000 --> 00:13.400
Hello everyone. So this is our talk with Lorena, and we are going to

00:13.400 --> 00:19.360
describe how we can investigate security incidents with forensic container checkpointing in

00:19.360 --> 00:25.200
Kubernetes environments. This is a cooperation with my supervisors,

00:25.200 --> 00:32.240
Dr. Bruno and Professor Sarmor at the university. I am a PhD student in scientific

00:32.240 --> 00:41.160
computing, and Lorena is a master's student focusing on threat detection, and

00:41.160 --> 00:49.400
Adrian and Victoria are also here. This work is done in part with the Kubernetes

00:49.400 --> 00:53.320
checkpoint/restore working group, and if you have any questions, we will be

00:53.320 --> 01:00.320
happy to answer them. Okay, yeah, I'll try to speak up. Thanks. Lorena, do you want to

01:00.320 --> 01:10.720
introduce yourself and we can start? Good morning. So in Kubernetes, most

01:10.720 --> 01:17.600
successful attacks are almost invisible, and that's because instead of using

01:17.600 --> 01:26.000
common malware, attackers rely on native tools and APIs, like kubectl, Bash, or

01:26.000 --> 01:33.800
Python. In addition, they often masquerade as legitimate workloads, for example

01:33.800 --> 01:42.080
deploying normal-looking pods, and persistence is maintained in a

01:42.080 --> 01:51.200
native Kubernetes way, like DaemonSets or CronJobs. So long story short, the problem

01:51.200 --> 02:03.600
is that attackers behave like normal Kubernetes users. On the other side,

02:03.640 --> 02:12.440
traditional forensic tools assume that systems are stable, long-lived, and

02:12.440 --> 02:20.720
observable after the fact. But Kubernetes breaks all these assumptions, because

02:20.720 --> 02:30.380
accessing nodes can be disruptive and difficult. Another aspect of classical

02:30.380 --> 02:37.900
forensic systems is that they are based on disk artifacts. But Kubernetes

02:37.900 --> 02:47.780
attacks usually rely on in-memory malware and runtime state. So we said that

02:47.780 --> 02:59.700
data are ephemeral and traditional forensic approaches don't work correctly. And

02:59.700 --> 03:08.700
attackers know that. So they design cloud-native, runtime-focused malware

03:08.700 --> 03:17.940
that lives in memory, and they use living-off-the-land techniques, abusing

03:17.940 --> 03:27.380
native Kubernetes APIs and native tools like Bash or Python that are trusted by the system.

03:29.700 --> 03:44.220
They also exploit RBAC misconfigurations and token abuse. We said that classical forensics

03:44.220 --> 03:51.940
doesn't work, but how can we detect these attacks? Detection is still valuable, of course.

03:51.940 --> 04:00.860
But detection is just the first part of the security chain. Detection tells you that

04:00.860 --> 04:09.900
something bad happened: yes, probably, or no. But we have to know what happened, by whom,

04:09.900 --> 04:17.180
why, and what the state of the system is after the attack. So detection is just the initial

04:17.180 --> 04:24.180
part of the security chain, but it's not enough, because it's based on signals,

04:24.180 --> 04:31.860
on ephemeral data, and we need something persistent. That's because we need

04:31.860 --> 04:41.860
a way to run a complete investigation based on the state and durable artifacts. So

04:41.860 --> 04:52.500
we need snapshots. Thank you, Lorena. So as Lorena mentioned, one of the problems

04:52.500 --> 05:01.540
in Kubernetes clusters is that container environments are ephemeral. Workloads start very quickly

05:01.540 --> 05:07.860
and disappear, and it's very difficult to preserve evidence during

05:07.860 --> 05:16.340
cyber attacks. So one of the solutions that we are exploring is how to create a snapshot

05:16.340 --> 05:23.460
of the container that can be used as digital evidence. Forensic readiness is essentially the

05:23.460 --> 05:29.620
approach where cluster administrators proactively prepare the environment

05:29.620 --> 05:38.020
so that it can capture evidence during cyber attacks. So that when a security incident

05:38.020 --> 05:47.620
is detected, the collection of snapshots of the containers can be fully automated. This

05:47.620 --> 05:53.220
is essentially complementary to the existing audit logging mechanisms that would, for example,

05:53.220 --> 05:59.420
capture the different events happening in Kubernetes and the different, for example, actions

05:59.420 --> 06:10.460
performed by the API server. So to understand the type of information we can essentially

06:10.460 --> 06:17.900
extract during a cyber attack, we have to understand what the different attackers

06:17.900 --> 06:23.020
can be. There's a white paper published by the Cloud Native Computing Foundation, essentially

06:23.020 --> 06:29.020
describing the threat model that is commonly used for Kubernetes clusters. So in this case,

06:29.020 --> 06:35.420
we can have, for example, a malicious outsider. This is an attacker that would be using exposed

06:35.420 --> 06:41.340
APIs, for example of a web application, and trying to exploit known vulnerabilities.

06:41.340 --> 06:46.540
This type of attacker can gain access and essentially be able to run

06:46.540 --> 06:51.420
actions within the container. We can also have an informed insider, for example

06:51.420 --> 06:55.820
if you inject malicious code in a container image and someone deploys this in the Kubernetes

06:55.820 --> 07:02.220
cluster. In that case, the attacker doesn't know what the cluster looks like. And then

07:02.220 --> 07:09.580
they can try to exfiltrate data and steal, for example, security tokens. And the third type of

07:09.580 --> 07:16.860
attacker is one who, for example, has already stolen service account credentials. Now they can execute

07:16.860 --> 07:23.100
commands, create their own malicious containers, and launch them in the cluster.

07:25.420 --> 07:32.300
So the way the checkpoint mechanism works is: we have integrated a checkpoint API in the

07:32.300 --> 07:39.260
kubelet, which then calls the container runtime. This could be CRI-O or containerd. And we have

07:39.260 --> 07:45.500
essentially checkpointing functionality that saves the configuration that was used to create the container,

07:45.500 --> 07:52.620
and other metadata. And then this API calls a tool called CRIU, Checkpoint/Restore

07:52.620 --> 07:58.620
In Userspace. This tool essentially saves the runtime state of all processes

07:58.620 --> 08:04.060
running inside the container. So this includes everything from open files, network connections,

08:04.060 --> 08:10.220
memory pages, UNIX sockets, and pipes, pretty much everything that you need to essentially

08:10.220 --> 08:17.740
reconstruct the exact same environment. And this is completely transparent, so the applications

08:17.740 --> 08:26.220
cannot detect that a checkpoint is being created. It also preserves the exact environment,

08:26.220 --> 08:32.380
including environment variables and memory pages, that can be used for analysis. We have developed

08:32.380 --> 08:40.140
a tool called checkpointctl that essentially extracts the information from the checkpoint.

08:40.140 --> 08:45.980
It then parses and deserializes the data so that it can be displayed and analyzed.
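
As a rough illustration of what such a tool does at its core: a container checkpoint is a tar archive bundling the CRIU image files with the container metadata. The sketch below groups the archive members by kind; the member names (`config.dump`, `spec.dump`, the `checkpoint/` directory) reflect the layout we describe, but treat them as assumptions rather than a fixed schema.

```python
import tarfile

def summarize_checkpoint(archive_path):
    """Group the members of a checkpoint tar archive by kind."""
    summary = {"metadata": [], "criu_images": [], "other": []}
    with tarfile.open(archive_path) as tar:
        for member in tar.getnames():
            if member.endswith(".dump"):
                # Container configuration and spec metadata.
                summary["metadata"].append(member)
            elif member.startswith("checkpoint/"):
                # CRIU image files: memory pages, file descriptors, sockets, ...
                summary["criu_images"].append(member)
            else:
                summary["other"].append(member)
    return summary
```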

08:47.740 --> 08:53.340
So one of the challenges with checkpointing for forensic analysis is that if you

08:53.660 --> 08:59.100
fully automate this, for example to perform checkpointing every second, you quickly run out of

08:59.100 --> 09:05.740
disk space, because, for example, large language models can be very large, or some applications can

09:05.740 --> 09:12.700
use a large memory state. So we have an optimization technique, used in live

09:12.700 --> 09:18.220
migration, where we keep track of which memory pages are being modified,

09:18.220 --> 09:24.380
and then we save only these memory pages. So we have integrated this with the checkpointing mechanism

09:24.380 --> 09:30.220
to be able to reduce the amount of storage that is required to create essentially a chain of

09:30.220 --> 09:36.220
checkpoints that can be used to reconstruct the timeline during a cyber security incident.
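
A minimal sketch of the page-tracking idea, assuming a flat byte buffer and 4 KiB pages (both simplifications): only pages whose content changed since the previous checkpoint are stored.

```python
import hashlib

PAGE_SIZE = 4096

def page_hashes(memory):
    """One digest per page, used to detect modified pages cheaply."""
    return [hashlib.sha256(memory[i:i + PAGE_SIZE]).digest()
            for i in range(0, len(memory), PAGE_SIZE)]

def dirty_pages(prev, curr):
    """Indices of pages that changed since the previous checkpoint."""
    return [i for i, (a, b) in enumerate(zip(page_hashes(prev), page_hashes(curr)))
            if a != b]

def incremental_checkpoint(prev, curr):
    """Store only the changed pages, keyed by page index."""
    return {i: curr[i * PAGE_SIZE:(i + 1) * PAGE_SIZE]
            for i in dirty_pages(prev, curr)}
```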

09:37.100 --> 09:41.900
And this, in combination with, for example, audit logs in Kubernetes, gives you

09:42.620 --> 09:49.580
information about exactly what happened and why it happened, and it allows a security team to

09:50.540 --> 09:59.900
essentially prevent future attacks. So in this case, we created a common scenario where we

09:59.900 --> 10:05.900
have a SQL injection. An attacker sent an HTTP request that will essentially use

10:05.900 --> 10:15.100
netcat to create a reverse shell. So it will essentially execute code that will, in this case,

10:15.100 --> 10:20.220
check if the admin password starts with the number seven. So what we can see here is: in the first

10:20.220 --> 10:27.180
checkpoint, we have a listening socket. Then, four seconds later, in the second checkpoint,

10:27.180 --> 10:34.300
we have an established TCP connection. And we can see that the TCP stream input and output

10:34.300 --> 10:40.860
are empty. This means that the application has already read the data from the TCP request, and

10:40.860 --> 10:43.460
we can see that there is a new process thread,

10:43.460 --> 10:47.980
in this case with PID 7, that has been created.

10:47.980 --> 10:49.740
And then we can essentially see

10:49.740 --> 10:52.060
what are the memory pages of this thread.

10:52.060 --> 10:54.340
And we can extract in this case the code

10:54.340 --> 10:56.020
that was sent by the attacker.
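
Comparing two sequential checkpoints, as in this scenario, boils down to a set difference over the extracted state. The dictionary shape below is hypothetical; a real tool would populate it from the parsed checkpoint data.

```python
def diff_checkpoints(before, after):
    """What appeared between two sequential checkpoints: new processes
    (or threads) and new TCP connections."""
    return {
        "new_processes": sorted(
            set(after["processes"]) - set(before["processes"])),
        "new_connections": sorted(
            set(after["tcp_connections"]) - set(before["tcp_connections"])),
    }
```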

10:56.020 --> 10:59.660
This is quite interesting because, for example,

10:59.660 --> 11:04.020
when malware wants to hide the actions

11:04.020 --> 11:06.460
that are being performed, they usually use encryption.

11:06.460 --> 11:09.260
So even if you use something like Wireshark

11:09.260 --> 11:12.300
or try to investigate the data that's transferred

11:12.300 --> 11:15.060
over the network, you cannot see the encrypted data.

11:15.060 --> 11:17.500
But because we checkpoint the memory state,

11:17.500 --> 11:19.580
you can essentially see the data before it

11:19.580 --> 11:23.060
has been encrypted.

11:23.060 --> 11:28.100
Another usage scenario is

11:28.100 --> 11:30.580
when you have, for example, a reverse shell.

11:30.580 --> 11:35.380
So in this case, we have an attack where

11:35.380 --> 11:38.860
the attacker creates a reverse shell inside

11:38.860 --> 11:40.060
of a container.

11:40.060 --> 11:43.700
Now the important thing here is that after the container

11:43.700 --> 11:47.020
exits, all the information about the actions performed

11:47.020 --> 11:50.540
by the attacker would essentially disappear.

11:50.540 --> 11:53.740
So what we have is periodic checkpointing

11:53.740 --> 11:56.460
that creates a snapshot of the running container

11:56.460 --> 11:57.900
every two seconds.

11:57.900 --> 11:59.820
And you can see here, for example,

11:59.820 --> 12:02.340
what are the different processes that are created

12:02.340 --> 12:03.180
by the attacker.

12:03.180 --> 12:10.620
And just to demonstrate how this looks like in practice.

12:10.620 --> 12:13.860
So here we have essentially a Kubernetes cluster

12:13.860 --> 12:15.700
with a web application.

12:15.700 --> 12:20.420
essentially showing the same sort of reverse shell attack.

12:20.420 --> 12:23.460
So in this case, we are starting

12:23.460 --> 12:24.460
the periodic checkpointing.

12:24.460 --> 12:27.500
And so this will create a checkpoint every second.
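
The periodic trigger used in the demo could be sketched roughly like this. The endpoint shape follows the kubelet checkpoint API (`POST /checkpoint/{namespace}/{pod}/{container}` on the kubelet port 10250, behind the `ContainerCheckpoint` feature gate); the certificate paths, node name, and loop wiring are illustrative assumptions.

```python
import ssl
import time
import urllib.request

def checkpoint_url(node, namespace, pod, container):
    # The kubelet's checkpoint endpoint is namespaced by pod and container.
    return f"https://{node}:10250/checkpoint/{namespace}/{pod}/{container}"

def periodic_checkpoint(node, namespace, pod, container,
                        client_cert, client_key, interval_s=1.0, rounds=10):
    # Authentication against the kubelet uses client certificates here;
    # error handling is omitted for brevity.
    ctx = ssl.create_default_context()
    ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
    for _ in range(rounds):
        req = urllib.request.Request(
            checkpoint_url(node, namespace, pod, container), method="POST")
        with urllib.request.urlopen(req, context=ctx) as resp:
            resp.read()  # on success, a checkpoint archive is written on the node
        time.sleep(interval_s)
```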

12:27.500 --> 12:30.860
And then while the web application is running,

12:30.860 --> 12:36.220
we send essentially the exploit in this case

12:36.220 --> 12:37.820
that will create a reverse shell.

12:37.820 --> 12:41.020
And then the attacker, in the first shell, essentially

12:41.020 --> 12:43.780
lists all the files inside of the container.

12:43.780 --> 12:46.740
And essentially the checkpoint mechanism

12:46.740 --> 12:48.100
will capture all of this.

12:55.900 --> 13:00.060
And here we can, for example, generate a JSON file

13:00.060 --> 13:03.140
that contains the full analysis of all checkpoints

13:03.140 --> 13:05.180
that have been created sequentially.

13:05.180 --> 13:07.820
And what we can see here is a lot of information,

13:07.820 --> 13:10.500
but essentially the command in this case

13:10.500 --> 13:11.820
that the attacker was running.

13:11.820 --> 13:13.820
We can also see the memory pages

13:13.820 --> 13:15.700
and all other information about it.
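
A sketch of how one might walk such a JSON report to build a command timeline. The field names (`checkpoints`, `processes`, `cmdline`, `time`) are illustrative, not the exact schema of the generated report.

```python
import json

def command_timeline(report_json):
    """Flatten a per-checkpoint report into (timestamp, command) pairs."""
    report = json.loads(report_json)
    timeline = []
    for cp in report["checkpoints"]:
        for proc in cp.get("processes", []):
            timeline.append((cp["time"], proc["cmdline"]))
    return timeline
```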

13:21.980 --> 13:26.860
So the key takeaway is that today in Kubernetes,

13:26.860 --> 13:28.660
most of the research is focusing

13:28.660 --> 13:31.180
on how to detect different types of attacks

13:31.180 --> 13:33.580
and how to preserve, for example, logs.

13:33.580 --> 13:35.860
But this doesn't give you a lot of information

13:35.860 --> 13:39.580
when you try to, for example, extract evidence

13:39.580 --> 13:43.220
of the different types of malware attacks

13:43.220 --> 13:44.460
that have been performed.

13:44.460 --> 13:47.380
You cannot see what was happening in the container.

13:47.380 --> 13:52.380
And this is especially difficult with traditional forensic tools,

13:52.380 --> 13:55.020
mainly because in Kubernetes, workloads

13:55.020 --> 13:58.340
are being constantly distributed across different nodes.

13:58.340 --> 14:01.660
And if you just create a snapshot of the disk space,

14:01.660 --> 14:06.620
you will lose a lot of information that has disappeared.

14:06.620 --> 14:09.380
And our checkpoint mechanism allows you

14:09.380 --> 14:14.660
to essentially preserve this state during cybersecurity incidents.

14:14.660 --> 14:18.300
And you can use this information to analyze what

14:18.300 --> 14:21.460
has happened during the attack.

14:21.460 --> 14:23.900
And thank you very much for listening.

14:23.900 --> 14:27.700
So this is a link to the project.

14:27.700 --> 14:29.740
This is checkpointctl, which is the tool

14:29.740 --> 14:32.060
that can be used to analyze checkpoints.

14:32.060 --> 14:35.420
And the CRIU page has more information

14:35.420 --> 14:37.620
about how checkpointing works in Kubernetes.

14:37.620 --> 14:39.180
I will be happy to take any questions.

14:51.860 --> 14:52.060
Hi.

14:52.060 --> 14:52.900
Yes?

14:54.140 --> 15:18.020
[Audience question, partly inaudible; paraphrased by the speaker below.]

15:18.020 --> 15:23.500
So, yeah, the question is,

15:23.500 --> 15:27.180
essentially, whether we have analyzed,

15:27.180 --> 15:30.700
whether we have a way of defining what information

15:30.700 --> 15:32.500
exactly is being captured.

15:32.500 --> 15:36.220
And whether we have studied what resources

15:36.220 --> 15:39.500
are needed to do this capture.

15:39.500 --> 15:43.340
So the way checkpointing works is we create,

15:43.340 --> 15:44.940
like a full snapshot of everything

15:44.940 --> 15:46.860
that's running inside of the container.

15:46.860 --> 15:49.740
So we can choose: in Kubernetes, the minimum

15:49.740 --> 15:52.340
deployable unit is a pod.

15:52.340 --> 15:55.020
So you can have multiple containers within a pod,

15:55.020 --> 15:57.580
but you can checkpoint individual containers.

15:57.580 --> 15:59.820
So for example, if you have a monitoring tool,

15:59.820 --> 16:01.220
we don't need to checkpoint this,

16:01.220 --> 16:02.940
but you can checkpoint, for example,

16:02.940 --> 16:05.540
the application that is inside it.

16:05.540 --> 16:08.980
And we preserve the full state of the processes

16:08.980 --> 16:09.700
that are running.

16:09.700 --> 16:12.660
This also means that

16:12.660 --> 16:14.580
you can restore essentially the container

16:14.580 --> 16:17.460
from this checkpoint, and you can leave it in a frozen state.

16:17.460 --> 16:21.780
So for example, you can analyze how the container looks

16:21.780 --> 16:23.980
like at specific point in time.

16:23.980 --> 16:27.460
And you can pretty much see all the file system changes,

16:27.460 --> 16:32.580
the allocated memory of the application.

16:32.580 --> 16:34.940
And for example, if you have malware

16:34.940 --> 16:36.940
that is running only in memory,

16:36.940 --> 16:39.020
you can still preserve this state.

16:39.020 --> 16:43.260
But we also save information about TCP connections

16:43.260 --> 16:45.380
and other sockets.

16:45.380 --> 16:50.380
So you can investigate this type of actions as well.

16:51.020 --> 16:53.300
Did that answer your question?

16:53.300 --> 16:54.300
OK.

16:54.300 --> 16:55.500
Yeah.

16:55.500 --> 16:59.900
Just to add something in addition to what was said:

16:59.900 --> 17:01.620
I mean, the detection is not enough,

17:01.620 --> 17:03.060
but it still works.

17:03.060 --> 17:06.860
So if there's an alert, based on, I don't know,

17:06.860 --> 17:09.380
that a reverse shell has been run,

17:09.380 --> 17:15.020
you may then focus on that process and that pod and so on.

17:15.020 --> 17:17.740
So the detection still works, but it's not enough.

17:20.380 --> 17:22.900
And one last point was about evidence.

17:22.900 --> 17:26.820
So in digital forensics, people usually care about whether

17:26.820 --> 17:30.220
you can use the evidence that is collected in court, for example,

17:30.220 --> 17:34.060
and whether it will withstand scrutiny.

17:34.060 --> 17:37.220
So what is also essential to try to do

17:37.220 --> 17:40.300
is create artifacts that can be useful in that way.

17:43.980 --> 17:45.700
Yes.

17:45.700 --> 17:46.660
You can go first.

17:46.660 --> 17:47.740
I think you were first.

17:51.380 --> 17:59.180
So external state in Kubernetes is represented.

17:59.180 --> 18:00.540
Oh, sorry.

18:00.540 --> 18:02.940
The question is, what about external state,

18:02.940 --> 18:07.180
like a secret, for example, volumes, et cetera?

18:07.180 --> 18:13.580
So in this case, the way this works in Kubernetes is that it's a mount.

18:13.580 --> 18:15.220
So essentially, it's, for example,

18:15.220 --> 18:17.940
how that is mounted inside of a container.

18:17.940 --> 18:19.980
And depending on what it is,

18:19.980 --> 18:22.100
like, for example, a temporary file system

18:22.100 --> 18:25.780
that will disappear after the container exits,

18:25.780 --> 18:27.540
essentially, the tool that creates the checkpoint

18:27.540 --> 18:31.140
also saves the content of this temporary file system.

18:31.140 --> 18:33.020
If it is an external mount, this is something

18:33.020 --> 18:35.500
we have been discussing. What we usually do,

18:35.500 --> 18:39.140
for example in Podman, is just create a copy of the volume.

18:39.140 --> 18:41.500
But in Kubernetes, this is something

18:41.500 --> 18:42.900
that we're currently working on.

18:42.900 --> 18:47.460
And yeah, I mean, it's still something,

18:48.460 --> 18:51.540
it depends on how big the volume is.

18:51.540 --> 18:53.500
There are different snapshotting mechanisms

18:53.500 --> 18:57.620
that you can use to create snapshots of the volume itself.

18:57.620 --> 18:59.540
Did that answer the question?

18:59.540 --> 19:00.860
OK, thank you.

19:00.860 --> 19:01.700
And yeah.

19:01.940 --> 19:05.820
The first question is: where are the actual snapshots saved?

19:05.820 --> 19:26.300
Is it a file saved somewhere in the environment, or would it be saved somewhere else? [Second part of the question partly inaudible; paraphrased by the speaker below.]

19:26.300 --> 19:28.660
Yeah, these are actually very interesting questions.

19:28.660 --> 19:33.540
So the question was, if we have, during the checkpoint

19:33.540 --> 19:36.820
mechanism, where the snapshots that are being created are saved.

19:36.820 --> 19:40.980
And essentially, I guess the second part of the question

19:40.980 --> 19:42.580
is about anti-forensics attacks.

19:42.580 --> 19:46.380
So whether the attacker can prevent us from capturing this state.

19:46.380 --> 19:55.140
[Audience follow-up, partly inaudible: whether the checkpoint files themselves could be attacked or deleted.]

19:55.140 --> 19:59.540
So even if, for example, the attacker can delete, for example,

19:59.540 --> 20:01.060
the checkpoint.

20:01.060 --> 20:02.740
So these are very interesting questions.

20:02.740 --> 20:07.940
And so the first part is, we save the checkpoint

20:07.940 --> 20:12.100
in a folder called kubelet/checkpoints.

20:12.100 --> 20:14.820
So there is a specific location that

20:14.820 --> 20:16.820
is used to save these checkpoints.

20:16.820 --> 20:18.740
And because they contain sensitive data,

20:18.740 --> 20:22.820
so they contain the full snapshot of the application's memory.

20:22.820 --> 20:25.940
So this could include like secrets, user credentials,

20:25.940 --> 20:26.740
et cetera, everything.

20:26.740 --> 20:31.300
So we use permissions that don't allow anyone other than administrators

20:31.300 --> 20:32.500
to be able to access this.

20:32.500 --> 20:35.860
So if the attacker gets root access on the node,

20:35.860 --> 20:37.780
and can see any file on the node, they

20:37.780 --> 20:39.780
would be able to delete the checkpoints.

20:39.780 --> 20:41.620
But this is what we have so far.

20:41.620 --> 20:44.420
We have also worked on checkpoint encryption,

20:44.420 --> 20:49.460
so that we encrypt the data with public/private key encryption

20:49.460 --> 20:50.340
during the checkpoint.

20:50.340 --> 20:54.020
So this happens as soon as the memory pages are copied,

20:54.020 --> 20:57.300
and before they are written to disk.
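
The ordering matters here and can be sketched as below: each page goes through the encryption step immediately after it is copied, so plaintext never reaches the sink. The `encrypt` callable stands in for the real public/private key scheme.

```python
def persist_checkpoint_pages(pages, encrypt, sink):
    """Encrypt every memory page as soon as it is copied, so that
    plaintext never reaches persistent storage."""
    for index, page in enumerate(pages):
        sink[index] = encrypt(page)
    return len(sink)
```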

20:57.300 --> 21:01.140
And yeah, in terms of anti-forensics,

21:01.140 --> 21:04.980
we have been discussing different ways of handling this.

21:04.980 --> 21:07.540
One thing is you can use a random number

21:07.540 --> 21:10.900
for how often you create checkpoints.
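
The randomized-interval idea is simple to sketch: add jitter to the base period so an attacker cannot time their actions to fall between snapshots. The base and jitter values here are arbitrary.

```python
import random

def next_checkpoint_interval(base_s=2.0, jitter_s=1.5):
    """Base interval plus a random offset, so snapshot times are unpredictable."""
    return base_s + random.uniform(0.0, jitter_s)
```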

21:10.900 --> 21:13.940
But there are, in general, there are a lot of things that can be done.

21:13.940 --> 21:19.300
Like, for example, an attacker can use a system call that is

21:19.300 --> 21:23.940
not something that the checkpointing tool is able to handle

21:23.940 --> 21:26.180
at the moment; in this case, it would create an error.

21:26.180 --> 21:28.420
So what we can do is we can ignore the errors,

21:28.420 --> 21:31.540
and still save an incomplete checkpoint.

21:31.540 --> 21:35.300
But essentially, it's still a research area.

21:35.300 --> 21:38.500
So that's what I'm trying to say.

21:38.500 --> 21:40.100
Hi, yes.

21:40.100 --> 21:57.380
[Audience question, partly inaudible; paraphrased by the speaker below.]

21:57.380 --> 21:59.620
Yeah, I would just repeat the question.

21:59.620 --> 22:04.180
So the question is, what is the overhead of this checkpoints?

22:04.180 --> 22:07.780
And when do you start creating checkpoints pretty much?

22:07.780 --> 22:09.620
So there are different types of overhead.

22:09.700 --> 22:11.460
The first thing is performance overhead.

22:11.460 --> 22:13.860
The time, for example, the application is not running,

22:13.860 --> 22:15.700
because we are saving checkpoints.

22:15.700 --> 22:17.460
And then there is also storage overhead.

22:17.460 --> 22:21.460
How much disk space we need to essentially save all the checkpoints

22:21.460 --> 22:22.580
that we create.

22:22.580 --> 22:29.300
Well, we sort of address both problems with memory tracking.

22:29.300 --> 22:32.580
So the largest component of a checkpoint, essentially,

22:32.580 --> 22:35.380
are the memory pages of the application.

22:35.380 --> 22:39.460
Yes, and during live migration, we are addressing this problem

22:39.460 --> 22:41.780
because we care about downtime.

22:41.780 --> 22:46.020
So when we are moving one application from one node to another,

22:46.020 --> 22:48.980
we want to keep the application running.

22:48.980 --> 22:51.860
And we essentially do this iteratively.

22:51.860 --> 22:55.540
So we transfer first the full state, then we keep track of

22:55.540 --> 23:01.220
which memory pages have changed while we are transferring the state.

23:01.220 --> 23:05.540
Then we transfer the remaining pages over and over, until there is a small enough

23:05.540 --> 23:08.100
amount of data that we need to transfer.

23:08.100 --> 23:10.180
So we do the same kind of thing here.

23:10.180 --> 23:14.900
So we keep track of what memory has changed,

23:14.900 --> 23:17.140
and save only this within the checkpoint.
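
The iterative pre-copy loop described above can be sketched like this; `get_dirty_pages` and `transfer` stand in for the real tracking and transfer machinery, and the threshold is arbitrary.

```python
def iterative_precopy(get_dirty_pages, transfer, threshold=16, max_rounds=10):
    """Pre-copy loop: send the full state once, then repeatedly re-send only
    the pages dirtied during the previous round, until the remainder is
    small enough for a short stop-and-copy pause."""
    transfer("full-state")
    for round_no in range(1, max_rounds + 1):
        dirty = get_dirty_pages()
        transfer(dirty)
        if len(dirty) <= threshold:
            return round_no  # small enough: pause, send remainder, resume
    return max_rounds
```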

23:17.140 --> 23:21.780
And this allows us to reduce the disk space needed for checkpoints,

23:21.780 --> 23:25.460
but also to reduce the performance overhead.

23:25.460 --> 23:27.620
So the amount of time it takes to create the checkpoint.

23:29.460 --> 23:32.500
And as for when we create checkpoints, this is also

23:32.500 --> 23:35.620
what Lorena was describing.

23:35.620 --> 23:39.700
So essentially, there are many tools in Kubernetes

23:39.700 --> 23:44.180
that allow you to detect, for example, suspicious activity

23:44.180 --> 23:47.940
or malicious HTTP requests.

23:47.940 --> 23:51.380
And then you can configure different types of rules

23:51.380 --> 23:55.300
that will essentially trigger the checkpoint mechanism.

23:55.300 --> 23:57.860
So essentially, the checkpoint mechanism is

23:57.860 --> 24:02.980
for preserving evidence, while threat detection is a separate

24:03.060 --> 24:08.020
category where you can specify the rules.

24:08.020 --> 24:11.940
Yeah, I hope this answers the question.

24:11.940 --> 24:13.940
Thank you very much.

