WEBVTT

00:00.000 --> 00:10.480
Thank you. It's a pleasure to be here and to have the chance to talk about our work

00:10.480 --> 00:13.640
on virtualization system security.

00:13.640 --> 00:18.640
So today we will explore how virtualization techniques can enhance the security of the Linux

00:18.640 --> 00:19.640
kernel.

00:19.640 --> 00:25.480
Specifically, I will provide an overview of the BlueRock security architecture, which leverages

00:25.480 --> 00:34.240
virtualization extensions to enhance the security of the subsystems of the Linux kernel.

00:34.240 --> 00:37.000
So the problem that we are trying to solve is the following.

00:37.000 --> 00:43.480
So the Linux architecture currently faces, let's call it, inherent challenges in

00:43.480 --> 00:44.920
its security design.

00:44.920 --> 00:50.360
Fundamentally, the OS distinguishes between the less privileged user space and

00:50.360 --> 00:52.360
the highly privileged kernel space.

00:52.360 --> 00:58.400
So this design ensures that the kernel always remains in control of user space applications.

00:58.400 --> 01:04.920
In other words, the kernel is responsible for protecting and isolating user space applications.

01:04.920 --> 01:09.800
At the same time, it is also the kernel who bears the responsibility of protecting itself

01:09.800 --> 01:11.400
from unauthorized access.

01:11.400 --> 01:15.960
And if we, for a moment, assume that the kernel has been compromised,

01:15.960 --> 01:22.160
The question arises, who could actually protect the Linux kernel from unauthorized access?

01:22.160 --> 01:28.320
And we believe that the answer lies in virtualization assisted security.

01:28.320 --> 01:33.400
So the idea behind virtualization assisted security is as follows.

01:33.400 --> 01:39.640
So it is about designing an architecture that alleviates the strict separation between the

01:39.640 --> 01:42.440
Linux kernel and a VMM.

01:42.440 --> 01:49.320
So we propose to empower the kernel to directly integrate virtualization extensions as inherent

01:49.320 --> 01:52.120
building blocks into its subsystems.

01:52.120 --> 01:59.800
We consider virtualization extensions not as limited to VMMs, but rather as a hardware resource

01:59.800 --> 02:05.720
whose capabilities can greatly benefit the security of operating systems.

02:05.720 --> 02:11.480
So by equipping the Linux subsystems with security primitives provided by the VMM, the VMM

02:11.480 --> 02:16.680
evolves beyond its traditional role of basically implementing the virtual hardware interface

02:16.680 --> 02:21.520
and becomes a resilient security support layer for virtual machines.

02:21.520 --> 02:27.000
In short, through virtualization-assisted security, we aim to strengthen the defense

02:27.000 --> 02:34.480
of the Linux kernel against malicious actors even in the presence

02:34.480 --> 02:39.520
of potential kernel vulnerabilities.

02:39.520 --> 02:46.040
Now before we can dive deeper into the topic, I suggest we introduce the BlueRock security architecture.

02:46.280 --> 02:52.080
This architecture encompasses several key components, which will also help us follow

02:52.080 --> 02:53.920
the presentation.

02:53.920 --> 02:56.280
The first component is the security support layer.

02:56.280 --> 03:02.160
So the architecture uses the NOVA microhypervisor as its security support layer for VMs.

03:02.160 --> 03:07.160
It provides a hypercall interface to supply VMs with VAS capabilities, and at the moment

03:07.160 --> 03:12.800
it supports 64-bit Intel and ARMv8-A application CPUs.

03:12.800 --> 03:17.480
So since the underlying concepts are fully hypervisor-agnostic, they can be similarly applied

03:17.480 --> 03:22.360
to other implementations such as Linux KVM.

03:22.360 --> 03:24.960
The next component of the architecture is the workload VM.

03:24.960 --> 03:30.320
So it runs an enlightened, or paravirtualized, Linux kernel, and combined with the underlying

03:30.320 --> 03:37.800
security support layer, this VM or this component presents the primary focus of this presentation.

03:37.800 --> 03:43.720
So this VM collaborates actively with the underlying security support layer and thus leverages

03:43.720 --> 03:49.640
VAS building blocks to enhance the security of the kernel subsystems.

03:49.640 --> 03:55.680
So depending on the system's configuration the workload VM can also operate as the sole VM

03:55.680 --> 04:02.800
on the system and does not necessarily require any additional VMs to assist its security.

04:02.800 --> 04:06.640
The workload VM interacts with the underlying security support layer through hypercalls

04:06.680 --> 04:13.200
and hypercalls are used here in this context to initialize and also to commit to a given security

04:13.200 --> 04:14.200
policy.

04:14.200 --> 04:19.840
But they are also used to verify whether specific operations are permitted within a given context.

04:19.840 --> 04:26.080
So, optionally the workload VM can generate high-level security events such as process

04:26.080 --> 04:30.880
life cycle and container drift events, and these are then sent through

04:30.880 --> 04:33.400
a socket to an external monitor.

04:33.400 --> 04:39.240
So these events provide insights for an external security monitor which can react to potential

04:39.240 --> 04:40.520
threats.

04:40.520 --> 04:44.920
Any policy violation triggers the security support layer to inject faults into the workload

04:44.920 --> 04:51.040
VM and thus avert the unauthorized operation.

04:51.040 --> 04:56.080
The final component of this architecture is the control domain VM; this component is optional

04:56.080 --> 04:59.760
and it won't be considered in the rest of the presentation.

04:59.760 --> 05:03.920
Still, for the sake of completeness, I would like to quickly introduce it.

05:03.920 --> 05:08.760
So it receives the security events generated by the workload

05:08.760 --> 05:16.000
VM and it allows security experts to basically define and maintain OPA-based security policies

05:16.000 --> 05:20.280
which can be then enforced through the underlying security support layer.

05:20.280 --> 05:26.520
So this design basically decouples policy decision from policy enforcement points.

05:26.520 --> 05:31.880
Now that we have covered the overall security architecture, I would like to start talking

05:31.880 --> 05:35.840
about specific VAS primitives that our system implements.

05:35.840 --> 05:40.480
So the Linux kernel integrity primitive is one of the main primitives, and it empowers

05:40.480 --> 05:47.080
the workload VM to protect its kernel code and read-only sections against unauthorized

05:47.080 --> 05:48.080
modification.

05:48.080 --> 05:54.000
So even if an attacker has gained kernel privileges, this primitive prevents them from modifying

05:54.000 --> 06:01.520
protected code and, for example, from injecting code into the kernel in an unauthorized

06:01.520 --> 06:02.520
way.

06:02.520 --> 06:08.040
The selective data structure and pointer protection primitive is designed to protect the

06:08.040 --> 06:13.280
integrity of sensitive or critical data structures and their pointers.

06:13.280 --> 06:20.520
It binds pointers of sensitive data structures to unique and immutable

06:20.520 --> 06:26.720
contexts and verifies their integrity at designated verification points.

06:26.720 --> 06:32.280
Now the security support layer also implements further primitives, including control register

06:32.280 --> 06:36.960
value blocking, syscall policy protection, driver signature enforcement, and many more.

06:36.960 --> 06:41.920
And all of them can be leveraged by the workload VM to support or to protect its subsystems.

06:41.920 --> 06:47.480
But since we have timing constraints, I will only cover the first primitive which is the

06:47.480 --> 06:51.720
Linux kernel integrity protection.

06:51.720 --> 06:57.960
Now, as such, I suggest we take a closer look at this primitive.

06:57.960 --> 07:04.920
So the VM, or as I mentioned before, the workload VM, leverages hypercalls to communicate

07:04.920 --> 07:07.200
with the underlying security support layer.

07:07.200 --> 07:13.160
In this context the VM employs hypercalls to register memory regions within the security

07:13.160 --> 07:14.160
support layer.

07:14.160 --> 07:19.760
So this concept applies not only to static kernel code and data segments but also to dynamically

07:19.760 --> 07:26.760
loadable code and data; this includes, for example, kernel modules and of course eBPF programs.

07:26.760 --> 07:32.280
So by using this VAS primitive the VM enhances the kernel's memory management capabilities.

07:32.280 --> 07:39.760
In fact it combines the kernel memory management with the system's second level address

07:39.760 --> 07:45.120
translation capabilities to grant access permissions exclusively to memory regions which

07:45.120 --> 07:48.720
have previously been registered.

07:48.720 --> 07:54.160
Now in this way the system becomes able to identify or intercept for example unauthorized

07:54.160 --> 08:00.760
supervisor execution attempts and this way effectively avert, for example, injections

08:00.760 --> 08:04.080
of unknown or malicious code into the kernel.

08:04.080 --> 08:09.360
And as you can see here on the slide in this example we have an unauthorized execution

08:09.440 --> 08:16.560
attempt in an unknown memory region, which is marked in red, and because it was not registered

08:16.560 --> 08:21.320
beforehand it can be intercepted and averted.
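
The registration-and-intercept flow just described can be sketched in C. This is a simplified userspace model: the function name `integrity_create`, the permission bits, and the region table are illustrative assumptions, not the actual hypercall ABI.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch only: names and encoding are assumptions. */
enum { PERM_R = 1u, PERM_W = 2u, PERM_X = 4u };

struct region { uint64_t base, size; unsigned perm; bool used; };

#define MAX_REGIONS 16
static struct region regions[MAX_REGIONS];

/* What the workload VM's registration hypercall conceptually records
 * in the security support layer. */
int integrity_create(uint64_t base, uint64_t size, unsigned perm)
{
    for (size_t i = 0; i < MAX_REGIONS; i++) {
        if (!regions[i].used) {
            regions[i] = (struct region){ base, size, perm, true };
            return 0;
        }
    }
    return -1; /* table full */
}

/* On a second-level-address-translation execute fault, execution may
 * proceed only if the address lies in a registered executable region. */
bool exec_allowed(uint64_t addr)
{
    for (size_t i = 0; i < MAX_REGIONS; i++) {
        if (regions[i].used && addr >= regions[i].base &&
            addr - regions[i].base < regions[i].size)
            return (regions[i].perm & PERM_X) != 0;
    }
    return false; /* unknown memory: intercept and avert */
}
```

Execution attempts in unregistered memory, like the red region on the slide, fall through to the final `return false`.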

08:21.320 --> 08:26.160
So all of this works because each registered memory region is categorized by memory

08:26.160 --> 08:28.280
type and associated flags.

08:28.280 --> 08:35.280
So the memory type is hypervisor- and hardware-independent, and it is basically an internally

08:35.600 --> 08:39.200
defined memory permission format.

08:39.200 --> 08:44.040
The memory flags, in particular transient and mutable, describe additional properties

08:44.040 --> 08:49.000
of the associated memory region which we should quickly look at.

08:49.000 --> 08:55.280
So first the transient flag indicates whether a given memory region is static or dynamic.

08:55.280 --> 09:00.560
So this flag determines if a previously registered memory region can be retrospectively removed

09:00.560 --> 09:02.280
from the system.

09:02.280 --> 09:07.560
And as you can see on the slide here the workload VM uses the integrity create hypercall

09:07.560 --> 09:13.960
to first register a transient memory region which belongs to a kernel module.

09:13.960 --> 09:19.240
Now at a later point in time, when the kernel module gets unloaded, the workload VM

09:19.240 --> 09:20.240
can unregister

09:20.240 --> 09:26.560
the previously registered memory, that is, release the memory.

09:26.560 --> 09:32.080
Please note that any attempt to release non-transient memory will cause the security support

09:32.080 --> 09:36.760
layer to inject a fault into the VM.
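
The transient-flag life cycle can be sketched as follows; the names `integrity_create` and `integrity_release` are illustrative assumptions, and the `-1` return models the injected fault.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the transient flag. A transient region (e.g. a kernel
 * module's memory) may later be released; releasing a non-transient
 * (static) region is a violation that would cause the security support
 * layer to inject a fault into the VM. */
struct ireg { uint64_t base; bool transient; bool live; };

void integrity_create(struct ireg *r, uint64_t base, bool transient)
{
    r->base = base;
    r->transient = transient;
    r->live = true;
}

/* Returns 0 on success; -1 models the injected fault. */
int integrity_release(struct ireg *r)
{
    if (!r->live || !r->transient)
        return -1; /* static memory must never be unregistered */
    r->live = false;
    return 0;
}
```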

09:36.760 --> 09:39.440
The mutable flag allows updating the type of the memory,

09:39.440 --> 09:42.440
that is, the permissions of the registered memory regions.

09:42.440 --> 09:48.320
And on this slide we see again that the workload VM creates, or registers, a memory region

09:48.320 --> 09:54.000
which is mutable and initially has read-write permissions.

09:54.000 --> 09:59.480
At a later point in time the VM can use the integrity update hypercall to change the

09:59.480 --> 10:03.400
permissions of the previously registered memory region.

10:03.400 --> 10:09.480
So in this case the type, or the permissions, are changed from read-write to read-only.

10:09.480 --> 10:16.040
Here again please note that the mutable flag allows updating only to more restrictive memory

10:16.040 --> 10:17.040
types.

10:17.120 --> 10:23.600
Subsequent updates cannot grant more permissions than were initially assigned to the memory

10:23.600 --> 10:25.600
region.
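
The "only more restrictive" rule amounts to a subset check on the permission bits. The following sketch uses illustrative names (`integrity_update`, the permission constants); it is not the real ABI.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the mutable flag: an update succeeds only if the region is
 * mutable and the new permissions are a subset of the current ones, so
 * permissions can only ever be tightened. */
enum { PERM_R = 1u, PERM_W = 2u, PERM_X = 4u };

struct mreg { unsigned perm; bool is_mutable; };

int integrity_update(struct mreg *r, unsigned new_perm)
{
    if (!r->is_mutable)
        return -1; /* immutable regions keep their initial type */
    if ((new_perm & ~r->perm) != 0)
        return -1; /* would grant more than was initially assigned */
    r->perm = new_perm;
    return 0;
}
```

So the RW-to-RO transition from the slide passes the subset check, while the reverse RO-to-RW request is rejected.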

10:25.600 --> 10:30.200
Now by using the presented memory model we become able to support basic operations.

10:30.200 --> 10:34.800
So we can boot up the Linux kernel, and we can load kernel modules and eBPF programs.

10:34.800 --> 10:43.040
However in reality the kernel is much more dynamic than that and heavily relies on runtime

10:43.040 --> 10:46.040
patching to optimize performance and functionality.

10:46.040 --> 10:51.600
So this means that without explicit support for patching the previously presented integrity

10:51.600 --> 10:56.760
primitive will raise a false positive every time the system tries to patch itself.

10:56.760 --> 10:59.040
And this is a problem.

10:59.040 --> 11:04.640
Here are some key patching facilities that are supported by the kernel, just to provide

11:04.640 --> 11:09.840
a small overview. We have alternative instructions, which replace instructions

11:09.840 --> 11:13.920
with their most optimized version depending on the CPU.

11:13.920 --> 11:21.080
We have jump labels, which are used to toggle rarely used conditional code paths, again by patching

11:21.080 --> 11:22.600
kernel code.

11:22.600 --> 11:28.120
Then we have static keys and static calls, which use jump labels underneath, and we also have something

11:28.120 --> 11:36.000
like tracepoints, which allow attaching probes to hooks in the kernel code for tracing purposes.
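
To make the jump-label idea concrete, here is a userspace model: in the kernel the branch is compiled in as a NOP and toggled by patching the instruction itself (tested with `static_branch_unlikely()`); here the patched code site is modeled by a plain flag.

```c
#include <assert.h>
#include <stdbool.h>

/* Stands in for the patched NOP-vs-JMP code site of a jump label. */
static bool tracepoint_enabled;

int traced_op(int x, int *event_count)
{
    if (tracepoint_enabled)  /* the rarely-used conditional code path */
        (*event_count)++;    /* e.g. emit a trace event */
    return x * 2;
}

/* Toggling the key is the runtime code-patching step the talk refers to. */
void set_tracing(bool on) { tracepoint_enabled = on; }
```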

11:36.000 --> 11:40.600
And while all of these patching facilities are very useful they can present significant

11:40.600 --> 11:41.800
security risks.

11:41.800 --> 11:48.320
So attackers can abuse these patching facilities to take over control over the Linux kernel.

11:48.320 --> 11:56.400
So attackers can reuse patching related code or even modify patching related data structures

11:56.400 --> 11:59.880
to take over the system.

11:59.880 --> 12:05.920
So this can lead to unauthorized modification of the kernel code or even to disabling

12:06.320 --> 12:09.320
particular security events in the system.

12:09.320 --> 12:14.920
And all of that, of course, even though we have modern CFI mechanisms in place.

12:14.920 --> 12:23.080
So at this point the question arises: how, or is it even possible, to reliably distinguish

12:23.080 --> 12:26.840
between legitimate and malicious patching activities?

12:26.840 --> 12:33.880
Well, generally, to thwart attacks targeting the patching facility, we would first need to guarantee

12:33.880 --> 12:39.160
that the patching facility is called only from a trusted benign context.

12:39.160 --> 12:44.520
And second we need to maintain the integrity of the patching related data structures.

12:44.520 --> 12:49.280
So the issue at hand is that even though we can address the first point through something

12:49.280 --> 12:54.080
like CFI the second requirement will remain an open challenge.

12:54.080 --> 13:01.960
And because of this, we again propose virtualization-assisted security to meet both requirements.

13:01.960 --> 13:05.040
In this context we introduce the vault.

13:05.040 --> 13:11.500
The vault is a new security primitive which is designed to isolate subsystems in the Linux

13:11.500 --> 13:12.500
kernel.

13:12.500 --> 13:18.600
In a nutshell we provide an API to encapsulate and isolate sensitive code and data in dedicated

13:18.600 --> 13:20.640
sections that belong to the vault.

13:20.640 --> 13:27.840
So in this way we become able to logically partition the Linux kernel and shift entire subsystems

13:27.840 --> 13:33.120
if needed, and their associated data structures into separate vaults.

13:33.120 --> 13:39.120
So the goal here is to really restrict the kernel from directly accessing memory within

13:39.120 --> 13:40.120
the vault.

13:40.120 --> 13:43.960
So access to this memory should be controlled through strictly governed transition

13:43.960 --> 13:51.120
points into and out of the vault, and any unauthorized attempt to access

13:51.120 --> 13:53.960
this memory should raise a violation.

13:53.960 --> 13:58.280
So by implementing the vault our main goal is to effectively prevent attackers from

13:58.280 --> 14:04.640
reusing protected code and from modifying sensitive data within the vault.

14:04.640 --> 14:10.720
Now before we continue I suggest we quickly recap the concepts of second level address

14:10.720 --> 14:11.720
translation.

14:11.720 --> 14:17.360
And here, typically, a VMM uses a single set of SLAT tables which translate guest-physical

14:17.360 --> 14:20.720
to machine, or host-physical, memory.

14:20.720 --> 14:26.320
So in other words, a single set of SLAT tables defines the guest's view of its physical memory.

14:26.320 --> 14:33.160
If we change the permissions of this global set of SLAT tables, which is by the way a very

14:33.160 --> 14:39.960
slow operation, we will inevitably affect the global view of the memory as it is perceived

14:39.960 --> 14:42.440
by all the vCPUs in the system.

14:42.440 --> 14:47.880
Now instead of using a single global view on the guest-physical memory, the NOVA guest

14:47.880 --> 14:53.400
spaces subsystem allows us to maintain and dynamically switch among different views.

14:53.400 --> 14:58.920
So by switching the views we can efficiently change permissions without having

14:58.920 --> 15:03.880
to walk the global set of SLAT tables.

15:03.880 --> 15:10.600
And even more importantly, this doesn't require us to affect the views

15:10.600 --> 15:13.960
of any other vCPUs.

15:13.960 --> 15:18.840
So all of this creates the basis that we need for implementing the vault.

15:18.840 --> 15:23.400
So as we also know, the hardware allows only one memory view to be active at

15:23.400 --> 15:24.800
a given time.

15:24.800 --> 15:29.320
And at the same time we would like to configure multiple memory views to guard the disjoint

15:29.320 --> 15:32.720
memory sections that belong to a vault.

15:32.720 --> 15:35.600
And these disjoint sections cannot all be accessible at the same time.

15:35.600 --> 15:40.560
So what I'm trying to say is that we cannot simply associate one vault with

15:40.560 --> 15:47.600
only one memory view; instead, we require two views to implement one vault.

15:47.600 --> 15:52.440
So we dedicate one view, which is the restricted kernel view, to unify all memory restrictions

15:52.440 --> 15:56.560
of the system and this becomes the default view in the system.

15:56.560 --> 16:02.280
We use the second view to relax restrictions for a given memory region or for given memory

16:02.280 --> 16:05.960
regions that belong to a vault.

16:05.960 --> 16:11.560
Now to accommodate n vaults we would need to define n plus one memory views, and by entering

16:11.560 --> 16:17.560
a specific vault we relax the permissions of the memory regions that belong to

16:17.560 --> 16:24.920
this particular vault while all other views would remain in their restricted state.
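
The n-plus-one mapping can be written down as a tiny access predicate. This is a deliberately simplified model with illustrative names; the real design decides accessibility through the active SLAT view, not a lookup function.

```c
#include <assert.h>
#include <stdbool.h>

/* View 0 is the restricted default kernel view; view i (1..n) relaxes
 * only the memory of vault i. Non-vault kernel memory is modeled with
 * vault id -1. */
#define KERNEL_VIEW 0

/* Is memory belonging to `vault` (-1 for plain kernel memory)
 * accessible from `view`? Simplified: the real vault view may place
 * further restrictions on outside kernel memory. */
bool view_allows(int view, int vault)
{
    if (vault < 0)
        return true;          /* non-vault memory: accessible everywhere */
    return view == vault + 1; /* vault i opens only from its view i+1 */
}
```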

16:24.920 --> 16:29.800
Now that we have established a technical concept behind the vault primitive I would like

16:29.800 --> 16:32.000
to discuss how this works in practice.

16:32.000 --> 16:39.040
And the following slides are going to be a little bit fun so please bear with me.

16:39.040 --> 16:45.360
We have integrated the vault into the kernel to protect against attacks targeting the patching

16:45.360 --> 16:47.080
and tracing facility.

16:47.080 --> 16:52.920
So by using the vaults API we basically encapsulate the necessary code and data of the patching

16:52.920 --> 16:57.360
and tracing facility and designated memory sections.

16:57.360 --> 17:03.480
Additionally we clearly define the vault's entry and exit points and share this information

17:03.480 --> 17:07.760
during early boot with the underlying security support layer.

17:07.760 --> 17:12.960
Now to implement a secure patching primitive that is compatible with the previously

17:12.960 --> 17:18.040
introduced integrity primitive, we define the following requirements.

17:18.040 --> 17:22.400
First, we consider the code outside of the vault to be untrusted.

17:22.400 --> 17:28.320
So our goal is to really prevent a potentially compromised kernel from reusing code gadgets

17:28.320 --> 17:29.400
inside of the vault.

17:29.400 --> 17:36.040
Second it is important that only the code that is inside of the vault has access to the

17:36.040 --> 17:38.560
sensitive data structures inside of the vault.

17:38.560 --> 17:43.920
So any unauthorized access to these data structures from the outside of the vault shall

17:43.920 --> 17:44.920
be blocked.

17:44.920 --> 17:51.360
And third, and this applies to this example with patching, the code within the vault should not

17:51.360 --> 17:54.120
be able to directly patch the Linux kernel itself.

17:54.120 --> 18:00.120
Instead it should instruct the security support layer through hypercalls to patch the respective

18:00.120 --> 18:03.800
code regions on behalf of the subsystems.

18:03.800 --> 18:10.480
So these patching requests are considered trusted because they originate from a secure, trusted,

18:10.480 --> 18:18.560
or verified, location in the vault, and they hold data that is also vault-protected.
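
Requirement three, patching by proxy, can be sketched as the handler the security support layer might run. The hypercall name `vault_patch` and its checks are assumptions for illustration; `from_vault` stands for the layer's own knowledge of whether the request arrived from a verified vault context.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Models the patch-request handler in the security support layer: vault
 * code never writes kernel text itself, it asks the layer to patch on
 * its behalf, and requests from outside the vault are rejected. */
int vault_patch(bool from_vault, uint8_t *text, size_t len,
                size_t offset, uint8_t newbyte)
{
    if (!from_vault)
        return -1;          /* untrusted caller: reject the request */
    if (offset >= len)
        return -1;          /* target outside the registered code region */
    text[offset] = newbyte; /* layer patches on the subsystem's behalf */
    return 0;
}
```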

18:18.560 --> 18:24.480
Now to isolate the subsystem in the vault we leverage NOVA guest spaces, and

18:24.480 --> 18:28.280
specifically we create two additional views on the guest-physical memory.

18:28.280 --> 18:31.680
So the first view is the kernel view as we discussed before.

18:31.680 --> 18:36.400
It restricts access to code and data inside of the vault, while at the same time it of course

18:36.400 --> 18:39.920
allows access to the remaining parts of the kernel.

18:39.920 --> 18:41.840
The second is the vault view.

18:41.840 --> 18:45.680
It grants access permissions to the memory within the vault.

18:47.840 --> 18:53.120
And to show how it works in practice we look at the following slides.

18:53.120 --> 19:00.960
So as we said before, during early boot the kernel communicates designated entry points

19:00.960 --> 19:03.760
into the vault with the underlying security support layer.

19:03.760 --> 19:09.760
So these entry points define the vault's interface and they comprise addresses of selected

19:09.760 --> 19:12.920
function entries inside of the vault.

19:12.920 --> 19:18.400
This means only branches to these specific entry points are authorized to enter the vault.

19:18.400 --> 19:23.440
So when the kernel jumps to one of these entry points the system will trap the execution

19:23.440 --> 19:28.200
attempt and hand over control flow to the underlying security support layer.

19:28.200 --> 19:32.440
Then the security support layer compares the trapped address against the set of authorized

19:32.440 --> 19:39.200
entry points and only if the check passes the VMM switches to the vault view allowing

19:39.200 --> 19:43.840
the kernel execution to proceed from inside of the vault.
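
The entry-point check can be sketched as the trap handler's decision; the addresses and the name `handle_exec_trap` are illustrative assumptions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of vault entry: an execute fault on vault memory traps to the
 * security support layer, which admits execution only at registered
 * entry points and then switches to the vault view. */
enum view { KERNEL_VIEW, VAULT_VIEW };

static const uint64_t entry_points[] = { 0xffff800000101000ULL,
                                         0xffff800000101040ULL };

/* Returns the view the vCPU continues in after the trap. */
enum view handle_exec_trap(uint64_t fault_addr, enum view current)
{
    for (size_t i = 0; i < sizeof entry_points / sizeof entry_points[0]; i++)
        if (fault_addr == entry_points[i])
            return VAULT_VIEW; /* authorized entry: open the vault */
    return current; /* unauthorized: stay closed (a fault is injected) */
}
```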

19:43.840 --> 19:49.200
If there is a need to temporarily exit the vault, for example to call an external

19:49.200 --> 19:53.840
function that is not part of the vault, or due to an interrupt, the underlying security

19:53.840 --> 19:58.760
support layer switches back to the kernel view, and this way the system temporarily

19:58.760 --> 20:04.160
closes the vault and maintains its integrity.

20:04.160 --> 20:11.160
As soon as the interrupt handler finishes its execution, the system can reenter the

20:11.160 --> 20:12.160
vault.

20:12.160 --> 20:17.600
For this, the VMM must first verify that the vault was opened legitimately in the first place, and

20:17.600 --> 20:25.160
second, the return address must match the location at which execution initially exited the

20:25.160 --> 20:26.160
vault.

20:26.160 --> 20:32.920
That is, the address must match the location that initially exited the vault. And finally, as soon

20:32.920 --> 20:38.920
as the execution reaches its designated exit point, the VMM closes the vault, and this

20:38.920 --> 20:43.160
completes the life cycle of the vault.
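
The interrupt-driven exit and the two re-entry checks can be modeled like this; the state layout and function names are illustrative assumptions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* The layer records where execution left the vault; re-entry is allowed
 * only if the vault was legitimately open and execution resumes exactly
 * at the recorded address. */
struct vault_state { bool open; bool suspended; uint64_t resume_addr; };

/* An interrupt forces a switch back to the kernel view. */
void vault_interrupt_exit(struct vault_state *s, uint64_t addr)
{
    if (s->open) {
        s->open = false;
        s->suspended = true;
        s->resume_addr = addr;
    }
}

/* Returns 0 if re-entry is permitted, -1 if it must be rejected. */
int vault_reenter(struct vault_state *s, uint64_t addr)
{
    if (!s->suspended || addr != s->resume_addr)
        return -1; /* vault was never legitimately open, or wrong site */
    s->suspended = false;
    s->open = true;
    return 0;
}
```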

20:43.160 --> 20:51.000
Now as we saw, the vault lends itself perfectly to reinforcing something like the patching

20:51.000 --> 20:58.160
and tracing facility and their data structures. But what about kprobes? I have explicitly

20:58.160 --> 21:02.960
not mentioned kprobes in my list before, and the reason for this is that I believe

21:02.960 --> 21:08.160
kprobes deserve special attention. Kprobes combined with eBPF programs serve as a

21:08.160 --> 21:12.880
foundation for advanced tracing and security frameworks; however, their extensive flexibility

21:12.880 --> 21:19.760
presents a double-edged sword. Kprobes allow placing hooks at almost any location in the

21:19.760 --> 21:28.120
kernel and this poses significant challenges when we try to authenticate the patching

21:28.120 --> 21:29.120
attempts.

21:29.120 --> 21:34.680
Now overall, kprobes present a great tool for debugging, profiling, generating security

21:34.680 --> 21:41.080
events, and more, but they can become a dangerous tool in the wrong hands.

21:41.080 --> 21:47.560
Because of this, my question to the community is: how do you believe VAS could reduce

21:47.560 --> 21:52.680
the attack surface of kprobes? I'm sure there are a number of scenarios in which, for

21:52.760 --> 22:00.760
example, the vault technology could assist the current security state of kprobes; however,

22:00.760 --> 22:05.160
as it stands, we have yet to pinpoint a definitive solution in this regard. So if you have

22:05.160 --> 22:10.680
suggestions or would like to discuss these topics, please feel free to reach out.

22:11.800 --> 22:14.200
Let's collaborate and talk about your ideas.

22:16.280 --> 22:21.000
So at this point I would like to conclude this presentation with an outlook and also a call

22:21.080 --> 22:26.440
for action. In summary, I can say that there is still a lot left to investigate.

22:26.440 --> 22:32.040
We at BlueRock Security will continue investigating techniques that allow us to further

22:32.040 --> 22:37.640
alleviate the strict separation between the Linux kernel and a VMM.

22:38.440 --> 22:43.320
We truly believe that virtualization extensions can become inherent building blocks of

22:43.320 --> 22:48.440
the Linux kernel. At the same time we would like to engage with the Linux community so

22:49.320 --> 22:54.040
BlueRock's VAS Linux kernel as well as the NOVA microhypervisor are both open source.

22:54.040 --> 22:58.680
One of our next goals is to start preparing patches to get at least a part of our

22:58.680 --> 23:04.840
code base into mainline. Also, from what I can tell, virtualization system security

23:04.840 --> 23:09.720
receives increasing attention from industry: software companies like Microsoft, Samsung,

23:10.440 --> 23:16.360
Huawei, and of course more, seem to put a strong emphasis on virtualization system security.

23:16.360 --> 23:23.400
So we are open to starting discussions in order to define a common hypervisor

23:23.400 --> 23:28.840
hypercall interface for Linux that would meet the requirements of all parties.

23:30.360 --> 23:34.760
Now at the end of this presentation I still claim that virtualization system security

23:34.760 --> 23:40.600
has not been explored to its full extent and because of this I look forward to further research

23:41.160 --> 23:45.320
investigating new avenues of virtualization that are yet to be discovered.

23:46.760 --> 23:49.240
This concludes my presentation. Thank you very much.

23:56.920 --> 24:01.800
All right, we are a bit short on time for questions, but I'm sure you can go outside

24:01.800 --> 24:06.440
and anyone who's got questions can come meet you very easily. Thank you. Perfect.

