WEBVTT

00:00.000 --> 00:15.960
All right, hi everyone, so yeah, my name is Edoardo Vacchi, all right, and besides a bunch

00:15.960 --> 00:21.000
of other things, you can find me online with that nickname in different places, and so I've

00:21.000 --> 00:29.800
worked on wazero for the best part of a year, a year and a half, 2023, 2024, at a company

00:29.800 --> 00:34.800
called Tetrate, and I'm going to tell you a little bit about wazero, and then I joined

00:34.800 --> 00:40.200
this company called Dylibso, and there I'm working a little bit on wazero, and especially

00:40.200 --> 00:45.760
recently on Chicory. So I'm going to tell you how these two WebAssembly runtimes work.

00:45.760 --> 00:52.360
Earlier you got a great introduction to WebAssembly, in case you weren't that familiar with

00:52.360 --> 00:59.080
it, so I'll obviously skip the details there, and I want to tell you a little bit

00:59.080 --> 01:07.680
about Dylibso. So we built an open source framework called Extism. Extism is a framework

01:07.680 --> 01:13.480
to build WebAssembly plugins, so this is a different perspective; you saw the perspective

01:13.480 --> 01:19.960
of virtualizing and containerizing your workloads with Wasm, and here instead we are proposing

01:19.960 --> 01:26.560
WebAssembly as a plugin format, as a way to create code that can be hosted on different platforms,

01:26.600 --> 01:31.120
which is the thing that Wasm was born for, after all. So instead of the browser, you have a

01:31.120 --> 01:37.200
bunch of different runtimes, and these runtimes work better when they are embedded in some

01:37.200 --> 01:48.200
languages; so, I don't know, Wasmtime works great for Rust, V8 works great for C++ and JavaScript,

01:48.440 --> 01:56.080
but what about WebAssembly that you want to run in your Go software, or on your

01:56.080 --> 02:01.480
JVM, all right? So we have different implementations that are tailored for different platforms,

02:01.480 --> 02:07.160
depending on the platform where you want to host code, and then we have the opposite side.

02:07.160 --> 02:13.800
We have frameworks so that you can compile your code into WebAssembly and host it on all

02:13.880 --> 02:20.200
these platforms. The key thing is, this ABI is standardized, I mean, it is the same across

02:20.200 --> 02:27.000
platforms, so you can run the same plugin across these different platforms. Yeah, I think I made

02:27.000 --> 02:32.200
this pretty clear. One thing that we just launched is mcp.run, I don't know if you've heard

02:32.200 --> 02:39.560
all the buzz in the AI space, or if you care. So this Model Context Protocol, it's something

02:39.560 --> 02:45.240
that was recently released by Anthropic; it essentially allows you to do agents, to plug functions

02:45.240 --> 02:52.520
inside your LLM, whatever. All right, anyway, it's kind of fun. We built this using our open source

02:52.520 --> 02:57.240
tooling, it's called mcp.run, and you can play with it; there's a bunch of different services

02:57.240 --> 03:04.840
that you can plug into your LLM, and it's pretty fun. You can write your own using web assembly

03:04.840 --> 03:11.160
and our tooling, and you can see WebAssembly in the wild for practical use cases with this,

03:11.160 --> 03:17.880
and yeah, I'm going to show you an LLM example at the end. All right, so the point is,

03:17.880 --> 03:24.680
Wasm lets your users write software extensions, using their favorite language, and you run them

03:24.680 --> 03:30.280
in a sandboxed environment. So we made it pretty clear that the key here is sandboxing, right?

03:31.000 --> 03:37.880
All right, at Wasm I/O 2024, my good friend Andrea, who's working on security, and

03:37.880 --> 03:45.000
our CTO introduced the concept of language-native runtimes. What is a language-native runtime?

03:45.000 --> 03:49.480
Language-native Wasm runtimes are runtimes written in the host language, the language that is

03:49.480 --> 03:56.440
going to host the WebAssembly runtime. Isn't that true for all runtimes?

03:56.440 --> 04:01.960
I mean, if the host is Rust, it's going to be Rust. So yeah, we make this distinction

04:02.760 --> 04:08.760
especially for managed languages, where it's a bit harder and more complicated and inconvenient

04:08.760 --> 04:15.960
to interact with so-called native code. So for the Go runtime, we have a Wasm runtime that

04:15.960 --> 04:24.200
is actually written in Go, and for the JVM, we have a runtime written in Java. So what

04:24.280 --> 04:28.120
are the benefits of this approach? Well, why are we doing this? And that's the whole point

04:28.120 --> 04:34.360
of the talk, in a way. All right, so wazero is a Wasm runtime written in Go; it comes with an interpreter,

04:34.360 --> 04:41.320
and the interpreter is portable everywhere Go runs. Who happened to be in the Go room yesterday?

04:42.520 --> 04:49.080
All right, quite a few people. So you can leave the room, pick a coffee, and come back for the rest.

04:49.720 --> 04:56.520
Because there's a little bit of overlap here. We're going to see how wazero translates

04:58.120 --> 05:02.760
WebAssembly into native code, for the second part of this section. So the first part, the

05:02.760 --> 05:09.320
interpreter, it's code that's being interpreted, obviously, and it runs everywhere Go runs and compiles.

05:10.360 --> 05:15.400
But we also have an ahead-of-time optimizing compiler for amd64 and arm64. I'm going to

05:15.400 --> 05:19.560
show you that in depth if we have enough time, and then we're going to do a comparison with

05:19.560 --> 05:24.680
Chicory. Now, Chicory is kind of different; well, conceptually it's the same: we have a runtime

05:24.680 --> 05:29.960
that's written in Java for the Java runtime, it implements an interpreter, and that's not supposed

05:29.960 --> 05:35.320
to be fast, but it's portable everywhere, and then we have an ahead-of-time bytecode translator. Now,

05:36.760 --> 05:43.160
it's a translator to Java bytecode, so that means that it is also portable.

05:44.120 --> 05:51.160
But not quite. I mean, that's this guy here, and we're going to have a section about that

05:51.160 --> 05:57.240
too, which is the most exciting part of this to me, at least. All right. So we're going to first

05:57.240 --> 06:05.160
do a comparison between Go and Java, which hopefully will be surprising for a few of you. I'm going to talk

06:05.160 --> 06:08.920
a little bit more about language-native runtimes, and then we're going to go into function

06:08.920 --> 06:15.800
compilation and evaluation in wazero, and then the same for Chicory. So, Go versus Java,

06:15.800 --> 06:21.080
they couldn't be more different, right? So they have a different form of

06:21.080 --> 06:30.440
inheritance. Java is the OOP, object-oriented, language, like, forever, like the

06:30.440 --> 06:38.360
standard nowadays in a way, because of its history, and because it was designed like 30 years

06:38.360 --> 06:48.280
ago now, it has deep type hierarchies in its standard library. Go, instead, tends to prefer shallow

06:48.280 --> 06:53.240
type hierarchies, and the form of inheritance is different, it's embedding for structs, and that's

06:53.240 --> 06:59.320
pretty different. In Go, you tend to use a lot of libraries, while in Java, you tend to use frameworks.

06:59.400 --> 07:03.560
That's kind of, yeah, kind of true, but it depends also on the language that you use on the

07:03.560 --> 07:09.880
JVM. The JVM now hosts many languages, and they might prefer libraries too, but in general,

07:09.880 --> 07:16.360
that's, I would say, a good distinction. Go compiles to native executables, and

07:16.360 --> 07:22.440
it achieves portability by having a great cross-compilation story. So you

07:22.440 --> 07:29.320
don't need to bring in a C compiler or have a sysroot; you just use the Go toolchain, and it will

07:29.320 --> 07:34.040
just support cross-compilation everywhere, and it will just work, and you can compile for ARM

07:34.040 --> 07:42.120
on a Windows x86 machine and target Linux, and that's awesome. Right, and then portability

07:42.120 --> 07:48.440
is also through static linking, and libraries are shared as source code, but it's super fast as

07:48.440 --> 07:54.440
a compiler, because it has some aggressive compile-time caching. On the other hand, Java works

07:54.440 --> 07:58.840
in a different way. It targets bytecode; portability is achieved by porting the runtime,

07:59.400 --> 08:08.680
much like WebAssembly, duh. And it's really based a lot around dynamic linking.

08:08.680 --> 08:12.280
I'm going to tell you a little bit about that in a minute, and libraries are shared as binary

08:12.360 --> 08:21.400
artifacts, where binary here means bytecode. Okay, a little bit about dynamic linking.

08:21.400 --> 08:28.520
So this is a diagram of startup time for Java. It's from a talk by one of the

08:29.480 --> 08:38.120
JVM contributors and architects, called, I think, "Starting Fast". So this shows a

08:38.120 --> 08:45.960
typical Java framework and how long it takes to actually boot user code, and it takes a while.

08:45.960 --> 08:51.880
Why? Because before you actually reach user code, there's a bunch of class loaders and

08:51.880 --> 09:00.760
class initializers that have to run. And this increases startup time. That talk is all about

09:00.760 --> 09:05.960
moving stuff to build time to make things faster; I won't go into the details, but you

09:06.120 --> 09:11.720
keep the capabilities. Someone was asking about dynamic linking. This is a way to deal with that

09:11.720 --> 09:18.520
in a bytecode environment. And it brings some issues too, right? But yeah, the way the JVM works

09:18.520 --> 09:25.800
is by loading a lot of these modules, these classes, and every class, every non-trivial class,

09:25.800 --> 09:31.560
has an initializer, just like WebAssembly modules might have an initializer, and loading each of

09:32.440 --> 09:39.960
them, obviously, takes some time. All right. Okay, they are also very similar in many ways:

09:39.960 --> 09:48.520
they have a somewhat large runtime, and they have got garbage collection. Go has goroutines.

09:48.520 --> 09:54.360
Well, Java makes some abstraction over threads. You can instantiate system threads, but

09:54.360 --> 10:00.520
very recently they got these so-called virtual threads, which are really similar to

10:00.520 --> 10:09.480
goroutines. Nowadays, well, it's not news, it's not new, but quite recently we have a

10:10.040 --> 10:14.440
better story for building native executables. So the differences between the two really are

10:14.440 --> 10:23.000
getting smaller. All right. Now, when you want to interact with foreign code, and usually that

10:23.000 --> 10:30.360
means native code, you have FFI, right? Foreign function interfaces. In Java, those are called

10:31.080 --> 10:36.760
there, that is called the Java Native Interface, JNI, and in Go that's usually called, let's see,

10:36.760 --> 10:43.400
that's what we know as cgo. Now, surprisingly enough, maybe, when it comes to limitations,

10:43.400 --> 10:49.640
they are actually kind of similar. So you have portability issues, because now you rely on a

10:49.640 --> 10:57.720
C compiler. You have tooling issues, because native code is opaque to the tooling of your platform.

10:57.720 --> 11:03.960
You cannot do step-wise debugging, using the Go debugger, of the native code that you are hosting,

11:03.960 --> 11:10.200
and the same goes for the JVM. You have runtime issues, because your native code is not aware of your virtual

11:10.200 --> 11:17.960
threads or your goroutines or your garbage collector. So essentially, the native code runs on

11:18.040 --> 11:24.120
its own thread, which is stolen from the rest of the VM, and there's not much we can do about it,

11:24.120 --> 11:29.800
because it's completely opaque to the rest of the runtime. And you also have potential performance

11:29.800 --> 11:36.360
issues, because crossing the FFI boundary usually has a cost. And safety issues; let's talk about that in a minute.

11:37.640 --> 11:42.360
So let's suppose you want to do software extensions in Go. You could use Go plugins or

11:42.360 --> 11:47.880
write native extensions, which are kind of related. For Go plugins, there are two different flavors

11:47.880 --> 11:54.200
of those, but the one that comes with the standard library essentially only works on Unix

11:54.200 --> 12:01.800
systems, and it's super easy to break, because it depends on the specific version of the compiler.

12:01.800 --> 12:08.680
And it kind of, like, works, but it depends on platform formats and dynamic libraries, and that's painful.

12:08.760 --> 12:16.680
The other version that you have is something that comes from HashiCorp, actually, which essentially

12:16.680 --> 12:23.960
spawns a new process and calls it over gRPC instead. So yeah, the other thing that you can do is

12:25.800 --> 12:31.800
use a scripting language within Go. There are so many to pick from, like, two, three, I don't know.

12:32.680 --> 12:38.200
And then you have to make a choice for your end users. And if you have native code

12:39.720 --> 12:45.160
written in any foreign language, what happens is that your memory kind of looks like this:

12:45.160 --> 12:53.800
it's all shared. So you may be able to corrupt the space from Zig; let's say you write some code,

12:53.800 --> 12:59.880
some buggy code, some unsafe code, that corrupts the state of the Go VM, the Go runtime.

13:00.840 --> 13:05.080
So there's no isolation there. Well, Wasm gives you that kind of isolation.

13:06.120 --> 13:11.320
What about Java? You could use class loading, because that's, as we said at the beginning,

13:11.320 --> 13:15.800
class loading is one of the core mechanisms through which you achieve dynamicity. You

13:18.360 --> 13:24.280
achieve, let's say, dynamically loaded code in Java. But then your code will run with

13:24.280 --> 13:28.680
the same level of privileges as the rest of the code. You could use a scripting language

13:28.760 --> 13:34.280
implemented in Java. There are many, actually, but you are then, again, making a choice for

13:34.280 --> 13:38.120
your end users. Or you could write native extensions, and you have all of the issues

13:38.120 --> 13:45.480
that we said earlier, including the problem with memory. So this is why we like language-

13:45.480 --> 13:53.160
native runtimes. We talked about this earlier, and it's the same slide, in fact. They allow you

13:53.160 --> 13:59.880
to use this code without having an FFI, because they implement the Wasm runtime in the host language.

14:00.600 --> 14:06.920
You thereby get maximum portability, safely interacting with the platform. The performance may

14:06.920 --> 14:15.320
not be state of the art; there are better runtimes that are more optimized, but the cost

14:15.320 --> 14:22.440
of crossing FFI boundaries is sometimes so high that even having slightly worse performance

14:23.160 --> 14:29.960
is good enough. Especially if you're doing a lot of IO. And I have an example on that. So yeah,

14:29.960 --> 14:36.600
I'm going to try to give you a very fast overview of how wazero does stuff, because I want to

14:36.600 --> 14:45.880
move on to Chicory as well, so we have time for each. All right. So let's say you're loading a

14:45.880 --> 14:51.880
WebAssembly module. First, you parse it, you validate it, and then what we do is compile it.

14:51.880 --> 14:56.440
What that means is that, depending on whether you're running in interpreter mode or in

14:56.440 --> 15:02.280
compilation mode, two different things happen. In interpreter mode, it's not really a

15:02.280 --> 15:06.360
compilation; we just translate it to an internal representation. The internal representation

15:06.360 --> 15:11.960
essentially has fewer opcodes and uses unstructured control flow, instead of the structured

15:11.960 --> 15:19.160
control flow that is proper to Wasm. So where we have a number of different instructions

15:19.160 --> 15:27.560
in Wasm, specialized for each type, we instead use just one operation and we switch

15:27.560 --> 15:33.560
over the type; that's how we do it, but it's just an internal implementation detail. And then

15:33.560 --> 15:39.880
it's just like any interpreter you might have seen: switch over the opcode, and loop,

15:39.880 --> 15:44.280
and that's it. What about the optimizing compiler? The optimizing compiler,

15:44.280 --> 15:49.640
I think, takes some hints, a lot of hints, from state-of-the-art compiler architectures, like V8's

15:49.640 --> 15:56.520
JIT, or LLVM, or Go's own compiler. And it works pretty much how most of these optimizing

15:56.520 --> 16:02.600
compilers work. It has a frontend and a backend. The frontend translates into SSA. It

16:02.600 --> 16:08.360
does optimizations that are independent from the platform, from the target platform, and then

16:08.360 --> 16:13.640
in the backend you generate the actual code for that target. And that's pretty interesting, because this

16:13.640 --> 16:18.120
is essentially a sort of just-in-time compiler, even though usually I don't call it just-in-time;

16:18.120 --> 16:28.200
I call it load-time AOT, for reasons. You can ask me later. But it is one. And then the

16:28.200 --> 16:35.560
interesting part is how this gets executed. I'll go super fast here because we don't have a lot

16:35.560 --> 16:41.640
of time. Essentially, you start from the Wasm bytecode, which looks like that, and it's a stack-based

16:41.640 --> 16:47.320
machine. You translate it into a representation that uses values. It's a standard kind of transformation.

16:48.600 --> 16:53.960
You split the code into basic blocks, like you would usually do in compilers, and then these

16:53.960 --> 17:01.800
basic blocks are linked together by edges that represent the branch instructions in the code.

17:01.800 --> 17:07.880
And this allows you to do relatively simple control flow analysis. It's a standard procedure.

17:08.120 --> 17:16.760
Yeah, let's skip forward on this one. Now, once you have this SSA representation, you can do optimizations

17:16.760 --> 17:21.880
over this code. And a few classic optimizations are dead code elimination; let's say you have this

17:21.880 --> 17:28.440
C code here, and you have a debug flag that evaluates to 1 or 0. If it's 0, then you know statically

17:28.440 --> 17:35.080
at build time that this block of code will never execute, so you can drop it. And let's say you have

17:35.080 --> 17:42.680
this expression here, A times B divided by 2, A and B in this expression are actually

17:43.320 --> 17:48.520
effectively constant, so you can propagate those constants, evaluate the expression at

17:48.520 --> 17:55.800
build time, and that's called constant folding, and essentially your function collapses to a single return

17:55.800 --> 18:01.400
instruction. And, well, not all optimizations delete code. Some optimizations might blow up the

18:01.400 --> 18:07.960
size of your code depending on the strategy, but they can be good strategies. Anyway, once you've

18:07.960 --> 18:15.080
done that, you have your new code in your intermediate representation. And what you want to do,

18:15.080 --> 18:19.560
now is to generate the code in your backend for your target platform. Let's say it's ARM.

18:20.600 --> 18:25.800
You use a bunch of virtual registers, because the registers in your machine are limited in number.

18:25.960 --> 18:32.200
And first you generate this code; this is the instruction selection process.

18:33.000 --> 18:38.200
And then you fit those virtual registers onto the limited number of registers that you have.

18:39.560 --> 18:44.840
ARM has a limited number of registers, but amd64 is even more limited, and these are actually

18:44.840 --> 18:51.880
plenty: it's 32 for SIMD and floating point and 31 general-purpose, which is quite a lot compared to

18:51.880 --> 19:02.040
x86. And then, you allocate registers. And finally, we encode all the code that we have, with

19:02.040 --> 19:10.680
the proper prologue and epilogue, into actual machine code. This all happens in the Go runtime,

19:10.680 --> 19:18.280
which is kind of funny, because Go is not a JITted language. So we generate bytes, like,

19:19.160 --> 19:25.480
right, and then we jump into it; that's how function invocation works. So, we compile at

19:25.480 --> 19:30.840
load time, and then, well, load time still happens at run time. So, in your Go code, you will

19:30.840 --> 19:35.880
typically compile your Wasm binary, then you instantiate it. And then you can

19:35.880 --> 19:43.880
instantiate it many times; it will be compiled only once. And then you jump into this code using a

19:43.880 --> 19:51.320
trampoline, which basically stashes away the registers that are used by the Go runtime.

19:51.320 --> 19:59.320
And then restores them when you have to leave the compiled code and get back into the Go

19:59.320 --> 20:04.840
code. And this happens, for instance, when there's an error; in that case, we return with an

20:04.840 --> 20:11.320
error code, and let it be handled properly by traditional Go code. And funnily enough, that's also

20:11.320 --> 20:16.360
what happens when you have host functions, that is, functions that are written in the Go language,

20:16.360 --> 20:22.200
in this case. We return a code that says a host function has to be called, and then we resume

20:22.200 --> 20:28.440
the Wasm execution. Now, what about Chicory? So, I've been going around telling people, yeah,

20:28.440 --> 20:33.720
Wasm is basically the new JVM, but now I have a Wasm runtime that runs inside the JVM.

20:33.720 --> 20:38.600
The Wasm bytecode is kind of different, but in many ways it's similar. So, for

20:43.000 --> 20:50.440
arithmetic, I think it's really similar, there's almost a one-to-one correspondence. But for

20:43.000 --> 20:50.440
control flow, as I mentioned earlier, the JVM uses unstructured control flow: conditional branches,

20:50.440 --> 20:55.800
and unconditional branches, and labels, and you can jump anywhere, even backwards. This example is only

20:55.800 --> 21:04.840
jumping forward, but you could also jump backwards, and nobody would care. Right. And instead, in

21:05.240 --> 21:10.680
Wasm, you have structured control flow, and that allows the bytecode to be

21:10.680 --> 21:17.880
validated more easily and more quickly than on the JVM. Okay, the interpreter is

21:17.880 --> 21:23.480
relatively simple. It's even simpler than wazero's. wazero does this transformation on

21:23.480 --> 21:28.840
control flow, while the interpreter in Chicory actually follows the spec. So,

21:28.840 --> 21:33.160
it's easy to hack on, it's written in pure Java, it's really easy to follow.

21:34.120 --> 21:40.920
And it's obviously portable to any JVM, right? Of course. We also have an AOT compiler

21:40.920 --> 21:50.360
that translates WebAssembly bytecode to Java bytecode. And it works in two modes, and essentially

21:50.360 --> 21:56.920
the mechanism is the same. It can generate bytecode at runtime and load it through class loading,

21:57.080 --> 22:04.520
which is a very common strategy in the JVM world. It loads it dynamically, but you can also dump

22:04.520 --> 22:09.720
that bytecode to disk and then load it whenever you want, so you don't have to recompile it

22:09.720 --> 22:18.840
every single time. Well, there are benefits and downsides to both approaches. And then this is

22:18.840 --> 22:23.160
the interface: you have this MachineFactory. When you instantiate your module, you can pass this

22:23.240 --> 22:29.240
MachineFactory. It's a factory, in the Java tradition; we love factories. And you can have an

22:29.240 --> 22:37.880
interpreter factory, or you can have a compiler factory. Let's skip over to the AOT machine factory.

22:37.880 --> 22:45.080
The AOT machine factory, what it does is essentially generating bytecode. So, of course,

22:45.080 --> 22:52.520
we validate the module, we parse it, but then we compile it. All right. The interesting thing here is

22:52.600 --> 22:58.280
that because both of them are stack-based machines, the analysis and the steps to generate code

22:58.280 --> 23:04.360
are really much simpler than with wazero, which has to generate native code in any case. So,

23:04.360 --> 23:11.800
we don't have to target ARM or amd64 or whatever; we just target JVM bytecode. And we generate

23:11.800 --> 23:18.360
a bunch of classes, specifically implementing some interfaces, and then essentially all we do is

23:18.360 --> 23:25.160
calling those functions and those methods. This is, for instance, a few lines of code that we

23:25.160 --> 23:30.520
use to do control flow analysis. This is where most of the analysis code goes, because

23:30.520 --> 23:37.800
that's where the differences are. But then there's a default case,

23:37.800 --> 23:41.080
well, it's small here, but this analyzeSimple method, it's kind of huge. And the reason is,

23:42.600 --> 23:48.120
it switches over all of the opcodes that don't need any special analysis.

23:48.200 --> 23:55.000
And there are many because, in fact, we can reuse a lot of the code across the interpreter

23:55.000 --> 24:02.600
and the compiler. In fact, a lot of this code is in the methods of the interpreter. And we can

24:02.600 --> 24:10.680
rely on the just-in-time compiler to inline that code, so it gets inlined anyway. I have 50 seconds or like

24:10.680 --> 24:17.240
two minutes to talk about how Android fits in the picture, and I'll try to be quick. These are

24:17.960 --> 24:24.360
performance figures. I'll have to go quick and skip those. But the thing is, even compared to a

24:24.360 --> 24:32.680
state-of-the-art implementation of a Wasm runtime, it's pretty good. And we didn't make a

24:32.680 --> 24:39.880
huge effort to make it fast. So, that's cool. So, speaking of Android, we brought Chicory there, so to

24:39.880 --> 24:48.280
speak. This is mcp.run, the thing I showed you earlier. And this is an MCP library based on

24:48.280 --> 24:55.640
Chicory running in interpreter mode on Android, interacting with Gemini, and calling WebAssembly

24:55.640 --> 25:01.240
functions to do, in this case, calls to Google Maps. And because this is essentially

25:01.240 --> 25:11.160
calling a service, an external service, it's mostly doing IO. So,

25:11.160 --> 25:16.840
it's pretty fast, even though it's in interpreter mode. But what about AOT? So, if you are generating

25:16.840 --> 25:23.560
code offline, on disk, then it works, because that's how the Android toolchain works.

25:24.200 --> 25:31.160
So, the missing thing is runtime AOT, and that requires some changes, because the Dalvik

25:31.160 --> 25:37.400
VM is a register-based machine. Most of the code is still shared, and that's pretty cool,

25:37.400 --> 25:42.360
and the end goal is to actually be able to do this as well. And that was it. Thank you.

25:54.280 --> 26:03.080
The question is, with the AOT, is it runtime compilation?

26:03.080 --> 26:09.080
It's runtime compilation. Yes.

