WEBVTT

00:00.000 --> 00:11.120
Hi. So, in research, it is actually common that a paper builds on

00:11.120 --> 00:18.120
previous work, for example by taking numbers from tables, model parameters,

00:18.120 --> 00:24.080
or reference datasets. Despite all the open source initiatives and open access

00:24.080 --> 00:32.000
nowadays, reuse is still hard: it requires the results to be shared such

00:32.000 --> 00:37.400
that they can be made reproducible. So, what often happens in practice is that we still

00:37.400 --> 00:45.220
have to extract values from tables or get the data from the figures, especially in the modeling

00:45.220 --> 00:52.160
fields, to then reimplement the analyses, because they are not there or only half

00:52.160 --> 00:58.840
shared. This is not because the other researchers don't want to share, but it is

00:58.840 --> 01:04.640
because the original research is not shared in a reusable form. And even when the data

01:04.640 --> 01:11.720
and the code are shared, reproducing the results requires reconstructing the environment

01:11.720 --> 01:17.640
that the original authors had at the time. So, the problem we face isn't that we don't

01:17.640 --> 01:25.360
share. The problem is that we publish research as static artifacts. So, thank you very much

01:25.360 --> 01:31.160
for the great organization and the opportunity to present. My name is Andreas, and

01:31.160 --> 01:37.200
I want to talk in the next few minutes about how we can turn research outputs into executable

01:37.200 --> 01:47.280
and reusable projects using open source infrastructure, with RAP as one concrete example. So,

01:47.320 --> 01:53.520
to start: where are we today compared to 10, 20 years ago, when most of the information

01:53.520 --> 02:01.320
was contained in an article. It has changed a lot with open data becoming widespread and

02:01.320 --> 02:08.200
thanks to repositories such as Zenodo or figshare, the data is accessible, but also because

02:08.200 --> 02:16.200
of funder requirements and new policies. The same is true for code: sharing code on platforms

02:16.240 --> 02:25.840
like GitLab is now common practice in many fields. And yet, when people actually try to run

02:25.840 --> 02:33.640
published analyses, results still don't reproduce. So, this is not a story about no data or

02:33.640 --> 02:39.400
no code. Even when data and code are accessible, that alone is not enough to rerun

02:39.440 --> 02:45.960
and reuse the research. So, let's see why this still happens and

02:45.960 --> 02:55.440
where it breaks down. But first, to bring everyone on the same page: there is repeatability.

02:55.440 --> 03:01.680
And this means that the original researcher can rerun the analysis and generate the same results

03:01.680 --> 03:09.320
repeatedly, using their own data and code locally. Next, we have reproducibility.

03:09.320 --> 03:15.320
This is often used in the sense of computational reproducibility, when we are talking about

03:15.320 --> 03:21.320
analysis workflows and pipelines. Here, for example, researcher A shares data and code with

03:21.320 --> 03:29.320
researcher B. And then in researcher B's environment, we try to reproduce the same results.

03:29.320 --> 03:38.320
Then there's the replication or reuse. So, here we share some code with researcher B and

03:39.320 --> 03:46.320
researcher B then tries to extend it with new data. We say a result is generalizable

03:46.320 --> 03:53.320
if this is possible with new data. And this last step is what researchers actually

03:53.320 --> 04:02.320
want. But this takes a lot of time and effort to achieve. So, still today, rarely is researcher

04:02.320 --> 04:06.320
B able to do that. So, let's talk about publishing FAIR. The concept has been around for 10 years

04:06.320 --> 04:15.320
now: data should be findable, accessible, interoperable, and reusable. But what happened

04:15.320 --> 04:21.320
is that we have now the policy expectation that if we publish a manuscript with the data and

04:21.320 --> 04:28.320
deposit the data and the code in a data repository, science magically becomes reproducible. It's true

04:28.320 --> 04:35.320
that such data is then findable and accessible, but not, by default, interoperable, for example if

04:35.320 --> 04:40.320
proprietary file formats were used at the time. And reusable only in the sense that we have

04:40.320 --> 04:46.320
now access to it and someone can use it. But the researcher experience is still that

04:46.320 --> 04:52.320
method descriptions, for example, are scattered between the manuscript and the supporting information,

04:52.320 --> 04:58.320
and the DOI to a data repository just leads to one folder with a bunch of files, which

04:58.320 --> 05:04.320
is also true for code, but there at least we have a history of how everything evolved.

05:04.320 --> 05:09.320
But then, questions arise for the researcher, like: how do I rerun this?

05:09.320 --> 05:15.320
What environment is needed to reproduce it? So, we are missing the context and

05:15.320 --> 05:22.320
executability. But research artifacts don't just exist, they evolve.

05:22.320 --> 05:29.320
Guided by a research question, they evolve in different cycles. So, data follows one cycle,

05:29.320 --> 05:36.320
for example, from planning, acquisition, and processing to archiving at some point. On the other

05:36.320 --> 05:41.320
side, we have code that follows a different life cycle from development, testing, publication,

05:41.320 --> 05:48.320
and also maintenance. But these two life cycles are managed by different tools, different

05:48.320 --> 05:54.320
systems, and often by different people, such as collaborators or maintainers of third-party

05:54.320 --> 06:01.320
libraries. So, what we are missing is a mechanism that binds a specific version of the data with

06:01.320 --> 06:07.320
a specific version of the code, under a specific execution environment in the current

06:08.320 --> 06:17.320
system. So, even if both are open, they don't form this unit. This may sound like
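
To make that binding concrete, here is a minimal, purely illustrative sketch (the field names and identifiers are invented, not RAP's actual format) of a manifest that ties one dataset version to one code commit and one environment image:

```python
import hashlib
import json

def make_manifest(dataset_id, dataset_version, code_commit, image_digest):
    """Bind one dataset version, one code commit, and one environment
    image into a single record (illustrative sketch, invented schema)."""
    record = {
        "dataset": {"id": dataset_id, "version": dataset_version},
        "code": {"commit": code_commit},
        "environment": {"image_digest": image_digest},
    }
    # A checksum over the canonical JSON makes the binding tamper-evident.
    payload = json.dumps(record, sort_keys=True).encode()
    record["manifest_sha256"] = hashlib.sha256(payload).hexdigest()
    return record

manifest = make_manifest("20240101-ABC", "v3", "9f1c2e4", "sha256:7d5e...")
print(manifest["manifest_sha256"])
```

Changing any of the three components changes the checksum, so the unit either matches what was published or visibly does not.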

06:17.320 --> 06:22.320
a publication problem, but it's not, because it fails much earlier, inside the lab.

06:22.320 --> 06:28.320
Many of you may have seen a messy folder like this semester project, where data,

06:28.320 --> 06:34.320
code, lab notes, and different versions of the report are just thrown into one folder with

06:34.320 --> 06:41.320
no apparent structure. But this is not necessarily bad practice. This happens if the tools don't

06:41.320 --> 06:48.320
capture the context. Similarly, in larger research projects, for example, this lab tries to

06:48.320 --> 06:54.320
study how a protein regulates the cell cycle. So, now the whole team works on the same

06:54.320 --> 07:02.320
project, with exactly the same data and the same tools. But here we have, for example, a student

07:03.320 --> 07:12.320
working on a Mac laptop with a recent Python version, who creates some results. But then the results

07:12.320 --> 07:18.320
start to differ, not because the student made a mistake, but because assumptions about the

07:18.320 --> 07:24.320
environment are implicit. So, the student leaves at one point, and a postdoc tries

07:24.320 --> 07:31.320
to reproduce the study, but now runs on a Linux system with a slightly older Python version,

07:31.320 --> 07:39.320
and suddenly the results are slightly different. Then, say, a PI tries to help on a Windows system with

07:39.320 --> 07:46.320
an even older version, and just gets errors. So, the question then is: is it the Python version

07:46.320 --> 07:52.320
that causes the problem, or the pre-processing steps run manually that no one captured?

07:52.320 --> 08:00.320
Sorting this out in the long term is very difficult, time consuming, and completely inefficient.

08:01.320 --> 08:08.320
And these differences all happen and slowly accumulate long before the paper is written.

08:08.320 --> 08:12.320
And even if you note it in the paper, it doesn't fix the problem, it just freezes the problem

08:12.320 --> 08:21.320
into a snapshot in the PDF. So, we've seen that sharing data and code alone does not automatically

08:21.320 --> 08:28.320
give reproducibility, but I also want to stress the point that reproducibility should never be the end

08:28.320 --> 08:36.320
goal. It's the minimum requirement, and questions like: can I rerun this? And do the results match

08:36.320 --> 08:44.320
the paper? They should be answerable with a clear yes upon publication. However, a recent study that

08:44.320 --> 08:50.320
looked at around 200 randomly selected papers from the journal Science found that they could only

08:50.320 --> 08:57.320
get the data for 44% of them, and then only reproduce the results of roughly one quarter.

08:57.320 --> 09:04.320
So, this is research as it is today, and it is extremely far from what the researcher and the

09:04.320 --> 09:10.320
wider research community want: to reuse the work and ask new questions.

09:10.320 --> 09:18.320
So, when data and code of the study are reproducible and give the same result, we can then finally

09:18.320 --> 09:24.320
work on extensions: for example, develop new analysis methods, use such data for benchmarking, for example,

09:24.320 --> 09:33.320
of new algorithms, extend the work by bringing in new data, reuse the analysis

09:33.320 --> 09:42.320
workflows, and address new questions from a different angle. So, with open source code, the environment

09:42.320 --> 09:48.320
is assumed to be obvious, but it's not; it's really the key missing dependency. In this

09:48.320 --> 09:55.320
execution stack, we see that code runs on some infrastructure or a laptop, which runs

09:55.320 --> 10:02.320
some operating system that comes with some libraries, and then on top, you have your packages

10:02.320 --> 10:09.320
of code that you write for your research to process some data. Traditionally, this stack is implicit.
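
One low-effort way to make that implicit stack explicit is to record it at run time. A small sketch (my own illustration, not part of any tool mentioned in the talk), using only the Python standard library:

```python
import platform
import sys

def capture_stack():
    """Record the implicit layers of the execution stack:
    machine, operating system, interpreter, and installed packages."""
    try:
        from importlib import metadata
        packages = {d.metadata["Name"]: d.version for d in metadata.distributions()}
    except Exception:
        packages = {}  # fall back gracefully on exotic interpreters
    return {
        "machine": platform.machine(),
        "os": platform.platform(),
        "python": sys.version.split()[0],
        "packages": packages,
    }

stack = capture_stack()
print(stack["python"])
```

Saving such a record next to the results would already let the postdoc in the earlier example see that the Python versions differ.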

10:09.320 --> 10:17.320
It lives on people's laptops, disappears after the publication, and is almost never

10:17.320 --> 10:24.320
published. What are the other options? So, bare metal is not portable, but systems can be

10:24.320 --> 10:31.320
virtualized; however, this is also heavy. So, then there are containers, which make the environment

10:31.320 --> 10:38.320
portable and executable, and have been used in research projects, but are very complex to set up.

10:38.320 --> 10:47.320
And finally, Binder used this idea to make repositories executable, but sadly, it is stateless.

10:47.320 --> 10:58.320
But long-lived research involves datasets and access control, and needs to preserve these over a long period of time.

10:58.320 --> 11:06.320
So, we've seen how Binder can help to take the environment into account,

11:06.320 --> 11:13.320
but we haven't talked about how data is managed. So far, it lives within a folder, even if it has a DOI,

11:13.320 --> 11:20.320
but the files have no identity and no record of how they were produced. On the other side, we have a research

11:20.320 --> 11:27.320
data management system, where datasets are treated as first-class objects, with persistent

11:27.320 --> 11:34.320
identifiers, metadata, and explicit provenance, and this turns a collection of data into reusable

11:34.320 --> 11:40.320
research objects. So, in our work, we use the open source RDM tool called openBIS.

11:40.320 --> 11:46.320
It's used in many institutions and scales to terabytes of data, and is specifically designed

11:46.320 --> 11:52.320
for academic and lab environments. So, here we see the laboratory information system.

11:52.320 --> 11:59.320
We have some methods and protocols, and then you have a tool section with clear APIs,

11:59.320 --> 12:07.320
for how to get the data in and how to get it out, and to publish it to open research repositories.

12:07.320 --> 12:14.320
Then here we have the lab notebook section, where we see raw data, an example of microscopy data,

12:14.320 --> 12:19.320
where we can describe all the details, and more importantly, link to the parents,

12:19.320 --> 12:25.320
like which materials, chemicals, and so on have been used, what the goal was, and then link it to the analysis.

12:25.320 --> 12:29.320
These relationships can be searched, so you see which chemical,

12:29.320 --> 12:36.320
leads to which modification of the biological strain, and then this strain is imaged on the microscope

12:36.320 --> 12:41.320
and leads to the data, and then eventually which analysis produces a result.
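
The parent links described here form a small directed graph, and tracing a result back is just a graph walk. A toy sketch (the object names are invented for illustration, not openBIS's data model):

```python
# Each entry maps an object to its parents, i.e. what it was derived from.
parents = {
    "result":     ["analysis"],
    "analysis":   ["image_data"],
    "image_data": ["strain"],
    "strain":     ["chemical"],
    "chemical":   [],
}

def trace_back(obj, graph):
    """Walk the parent links to list everything a result depends on."""
    lineage = []
    stack = [obj]
    while stack:
        node = stack.pop()
        for parent in graph[node]:
            if parent not in lineage:
                lineage.append(parent)
                stack.append(parent)
    return lineage

print(trace_back("result", parents))
# ['analysis', 'image_data', 'strain', 'chemical']
```

This is the kind of query behind "which chemical led to which strain, which image, and eventually which result".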

12:41.320 --> 12:44.320
But the system has no execution context.

12:44.320 --> 12:51.320
So that's where RAP comes in: it bridges the data and code life cycles that we saw earlier

12:51.320 --> 12:57.320
to generate an open, shareable unit of research around a question.

12:57.320 --> 13:06.320
So we need to have the data by reference, the code in version control, and we need the compute context.

13:07.320 --> 13:15.320
The data in our system lives in an RDM system like openBIS, and the environment and the code are in version control.

13:15.320 --> 13:22.320
So RAP then takes this and builds a project where one can do the whole data management,

13:22.320 --> 13:28.320
you can collaborate on it, and you can share this whole environment.

13:28.320 --> 13:36.320
So, quickly on the system architecture: a user just interfaces with it through a web browser,

13:36.320 --> 13:43.320
and the environment is built on a Kubernetes system by taking the repository, building a Docker image,

13:43.320 --> 13:51.320
mounting the data directly into it, and it then has means to publish it to different repositories.

13:51.320 --> 14:00.320
So, in practice, this looks as follows. We assume the data to be either registered or generated automatically by the lab in the RDM system.

14:00.320 --> 14:07.320
You just specify which database is yours and what datasets to use.

14:07.320 --> 14:12.320
Then you specify your environment, that is, which dependencies and which packages,

14:12.320 --> 14:17.320
and you can already specify what kind of analysis you want to do, but this you can also do later.

14:17.320 --> 14:24.320
And then you have, for example, a project like this: the environment lives in a binder folder,

14:24.320 --> 14:28.320
the datasets in a RAP folder, where you keep track of them,

14:28.320 --> 14:36.320
and the rest is completely free for you to manage and organize depending on the needs of your field and projects.
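
As a rough sketch of such a layout (the folder and file names here are illustrative guesses, not RAP's actual conventions), a project scaffold could look like this:

```python
import json
import tempfile
from pathlib import Path

def scaffold_project(root):
    """Create an illustrative project layout: a binder/ folder for the
    environment spec and a datasets.json referencing data by identifier."""
    root = Path(root)
    (root / "binder").mkdir(parents=True, exist_ok=True)
    # Pinned dependencies make the environment reconstructible later.
    (root / "binder" / "requirements.txt").write_text("numpy==1.26.4\n")
    # Datasets are referenced by ID and mount point, not copied in.
    datasets = [{"permId": "20240101-ABC", "mount": "data/raw"}]
    (root / "datasets.json").write_text(json.dumps(datasets, indent=2))
    return root

project = scaffold_project(tempfile.mkdtemp())
print(sorted(p.name for p in project.iterdir()))  # ['binder', 'datasets.json']
```

The point of the split is that the environment and the data references are machine-readable, while everything else stays free-form for the researcher.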

14:36.320 --> 14:45.320
Once you reach a milestone, you can take the whole thing, freeze it, and register it in the RDM system,

14:45.320 --> 14:52.320
and eventually publish it to an open repository, where others can use and run it.

14:52.320 --> 15:01.320
So, here you see the interaction with the UI: you can either build a project from scratch from the git repository or reuse it through openBIS.

15:01.320 --> 15:08.320
You can define the computational resources you need, such as memory and RAM; start, stop, and delete your project;

15:08.320 --> 15:13.320
you can mount new data anytime and it's immediately available within your project.

15:13.320 --> 15:24.320
You can upload intermediate data, if necessary, or save the whole thing, and you can share it within your institution or with the world.

15:24.320 --> 15:31.320
So you have a container that contains the data, the code, and the environment and runs anywhere.

15:31.320 --> 15:40.320
So, here two case studies. This one is from the field of archaeology: we first registered the data in the RDM system,

15:40.320 --> 15:50.320
then added the R packages that the study developed as a git submodule to make them available, adapted the paths in an R Markdown script,

15:50.320 --> 15:55.320
and were able to reproduce the results.

15:55.320 --> 16:05.320
Here, another example from the field of neuroscience: they had the code in a git repository, but the data they fetched from databases.

16:05.320 --> 16:14.320
So this we could just use, but they, for example, didn't specify the execution context, so we had to find the right Python version and the library

16:14.320 --> 16:21.320
versions, but once we sorted this out, we were basically able to rerun all their analysis scripts and generate the results.

16:21.320 --> 16:31.320
So these are two examples from two different fields where you can now reuse the system, trace everything back, and build on it.

16:31.320 --> 16:37.320
Now, RAP adapts to the researcher and not the other way around.

16:37.320 --> 16:44.320
This is important because, as we all know, with time we have our favorite tools and are less open to adopting new ones.

16:45.320 --> 16:53.320
So we used the open source JupyterLab interface, which has a huge community and supports many languages for analysis.

16:53.320 --> 17:05.320
Computational scientists may prefer RStudio, and this is also accessible; engineers like MATLAB, despite it being proprietary, but even that runs on the platform.

17:05.320 --> 17:18.320
Or if you're coding, you may like Visual Studio Code or similar with a debugger; and if you really like GUIs and like to click around, you can have a full Linux desktop environment with your

17:18.320 --> 17:25.320
LibreOffice or OnlyOffice, and here we see, for example, some data mounted into it.

17:25.320 --> 17:38.320
So, once we have a RAP instance running and the users have defined their environments with the data and tools, the user experience is almost like on a laptop.

17:38.320 --> 17:45.320
It can be installed on local infrastructure, such as a lab server, or on institutional infrastructure.

17:45.320 --> 17:50.320
Small labs can manage it themselves, but it is also low maintenance for an IT department,

17:50.320 --> 18:03.320
which can scale the RDM system to huge data, handle long-term preservation, enable cheap tape storage, or provide high-performance computing resources.

18:03.320 --> 18:11.320
All this technical complexity is hidden away from the users when interacting with the system and requiring more computational power.

18:12.320 --> 18:23.320
Even after the export, it contains everything necessary to run anywhere, and it integrates well with established tools and services such as LaTeX.

18:23.320 --> 18:34.320
So, once we have the environment specified, here an example on an institutional server, we can bundle it, and this is how it would look on the local computer:

18:34.320 --> 18:41.320
and we get exactly the same visualization, same tools, and the user just needs to run it.

18:41.320 --> 18:49.320
Thanks to the rich open source ecosystem around JupyterLab, we can use it in many disciplines.

18:49.320 --> 18:58.320
Here: engineering, 3D design, PCB design for electronics. In our group, we work a lot with large image datasets, terabytes in size.

18:58.320 --> 19:04.320
To segment these, we can use deep learning for segmenting images.

19:04.320 --> 19:09.320
Maybe not everyone is fluent in coding, so we can link it to LLMs, ask for help.

19:09.320 --> 19:15.320
For example, Gemini: hey, can you give me an example in Python of how to count the cells in my images?

19:15.320 --> 19:21.320
And yes, it spits out some code and found 47 cells in that image.
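
For illustration, the kind of snippet such an assistant typically returns boils down to thresholding plus connected-component labeling. This sketch (my own example, not the actual Gemini output, and assuming NumPy and SciPy are available) counts bright blobs in a synthetic image:

```python
import numpy as np
from scipy import ndimage

def count_cells(image, threshold=0.5):
    """Threshold the image and count connected bright regions,
    a simple stand-in for a cell-counting snippet."""
    mask = image > threshold
    _, n_objects = ndimage.label(mask)  # label returns (array, count)
    return n_objects

# Synthetic "microscopy" image with three separate bright blobs.
img = np.zeros((64, 64))
img[5:10, 5:10] = 1.0
img[20:28, 30:38] = 1.0
img[50:55, 50:55] = 1.0
print(count_cells(img))  # 3
```

Real microscopy data would of course need denoising and size filtering, but the counting step is this simple.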

19:21.320 --> 19:28.320
It can even be used in molecular biology to make it more reproducible, because there a lot of work is still done manually.

19:28.320 --> 19:43.320
Here we load all the strains, the sequences, and so on from the database and can simulate the processes in silico long before, for example, using automated systems to then construct the strains.

19:43.320 --> 19:48.320
And finally, at the end of every research project, there's a report summarizing the work.

19:48.320 --> 20:09.320
Here, I show you Overleaf, a LaTeX editor, where we can start a project, sync it as a git submodule into RAP, write, for example, an introduction, sync it back, and use synergies with such existing tools, for example to publish it to a preprint server or to the publisher.

20:09.320 --> 20:22.320
So RAP not only unifies data and code, but also includes the written documentation, making the publication an integral, portable part of it, so it runs everywhere in the future.

20:22.320 --> 20:27.320
So what's next? We have plans to integrate it with the European Open Science Cloud.

20:27.320 --> 20:40.320
We plan to engage with the communities to allow mounting, not downloading, reference data from Zenodo, for example, and to use LLM assistance to lower the adoption barrier even more.

20:40.320 --> 20:53.320
This work is the result of a close collaboration between software engineers, domain scientists, and the scientific infrastructure team at ETH, which I want to specifically thank and highlight here.

20:53.320 --> 21:05.320
And the early adopters in our team, who now build their workflows on this system, and the funders who made it possible to develop this platform as open source infrastructure.

21:05.320 --> 21:18.320
So, to conclude, RAP is one example of how this can work, but the broader point is not RAP: research should really be something that we can run, inspect, and extend anytime.

21:18.320 --> 21:24.320
So that's how we move away from reproducibility as the exception to reuse as the default.

21:24.320 --> 21:33.320
The QR codes link to the manuscript and the demo if you would like to install it locally on your infrastructure or laptop.

21:33.320 --> 21:38.320
Thanks for the attention. I'm happy to take questions now.

21:38.320 --> 21:43.320
Thank you.

21:43.320 --> 21:44.320
Yes.

21:44.320 --> 21:59.320
My question is about the runtime. Sometimes, when I want to reproduce some experiments in physics, the repositories are very old.

21:59.320 --> 22:13.320
And some packages don't even exist anymore to download. How do you deal with this, in your view?

22:13.320 --> 22:21.320
So, the question was: in physics, there are sometimes really old projects whose packages are gone.

22:21.320 --> 22:27.320
So, obviously, there's no magic: if the package is gone, it's gone, right?

22:27.320 --> 22:38.320
But that's why we emphasize building, or using, something like RAP, because then it's never gone: once you export this bundle, it's all there.

22:38.320 --> 22:50.320
So you have a system that recreates the computational environment, and it includes all the libraries, even if they're long gone from the repositories or removed by the maintainers.

22:50.320 --> 22:55.320
But to basically find all the old ones is impossible, right?

22:55.320 --> 23:02.320
So when you use RAP, it creates a package of all this?

23:02.320 --> 23:03.320
Yes.

23:03.320 --> 23:06.320
So someone can look at it 15 years later?

23:06.320 --> 23:07.320
Yes.

23:07.320 --> 23:08.320
Yes.

23:08.320 --> 23:13.320
So it handles the code, the data, the environment, and potentially the documentation.

23:13.320 --> 23:17.320
So everything that was really related to this research.

23:18.320 --> 23:22.320
All the dependencies, everything. But there are different export options.

23:22.320 --> 23:25.320
You can also decide to share a lightweight version, right?

23:25.320 --> 23:37.320
If you have terabytes of data, then we have a mechanism where you basically bundle only the environment and your analysis code, and the data is then downloaded from Zenodo, for example.

23:37.320 --> 23:42.320
Because otherwise these bundles, basically zip files, get very large.

23:42.320 --> 23:48.320
But for small or mid-size projects, you can really bundle it: you have one big thing that you download.

23:48.320 --> 23:54.320
You press start, and in the browser it looks exactly like what I showed, and you can run it.

23:54.320 --> 23:58.320
It's as if someone had just stopped there.

23:58.320 --> 23:59.320
Yes.

23:59.320 --> 24:00.320
Yes.

24:00.320 --> 24:03.320
My question is about the CODECHECK project.

24:03.320 --> 24:07.320
Have you come across it? We actually ran the CODECHECK process on a project.

24:07.320 --> 24:08.320
It's about reproducibility.

24:08.320 --> 24:09.320
Yes.

24:09.320 --> 24:10.320
Yes.

24:10.320 --> 24:11.320
You've got a publication,

24:11.320 --> 24:13.320
and it's checked by somebody else.

24:13.320 --> 24:14.320
Is there something similar in here?

24:14.320 --> 24:15.320
There's a whole process.

24:15.320 --> 24:16.320
Someone reruns it.

24:16.320 --> 24:18.320
What does it take, that kind of process?

24:18.320 --> 24:21.320
Of course, there's intense manual labor involved.

24:21.320 --> 24:24.320
How do you compare that to what you basically have here?

24:24.320 --> 24:28.320
So the question is if I'm aware of

24:28.320 --> 24:29.320
CODECHECK.

24:29.320 --> 24:32.320
So no, I'm not.

24:32.320 --> 24:37.320
So then I cannot comment on how it compares.

24:37.320 --> 24:39.320
But any initiative, right?

24:39.320 --> 24:46.320
I'm not telling you now that everyone should use RAP, but what I'm saying is that, as a field, in the open research community,

24:46.320 --> 24:53.320
we should move towards tools that enable reproducible research.

24:53.320 --> 24:56.320
And it's really not just putting the code somewhere.

24:56.320 --> 24:58.320
But we have to have the environment.

24:58.320 --> 25:00.320
We have to have the data.

25:00.320 --> 25:07.320
And as we heard before, we also need the packages, because it is common that some lose their maintainers

25:07.320 --> 25:09.320
or they just get deleted.

25:09.320 --> 25:16.320
So if we want to preserve this work of research for the far future, we need to bundle everything.

25:16.320 --> 25:17.320
Here we can do it.

25:17.320 --> 25:18.320
Maybe your tool can do it as well.

25:18.320 --> 25:20.320
Then that would be great.

25:20.320 --> 25:23.320
Otherwise maybe you want to integrate it.

25:23.320 --> 25:24.320
Yes.

25:24.320 --> 25:28.320
So there's your compiled file, which includes sort of the things you've got, for

25:28.320 --> 25:32.320
example a Dockerfile. Because of course the issue is, say, with a finished Docker image,

25:32.320 --> 25:35.320
I clone the repository and I run it.

25:35.320 --> 25:36.320
It's like an apt.

25:36.320 --> 25:37.320
It's actually using apt.

25:37.320 --> 25:39.320
So what is in the underlying repositories

25:39.320 --> 25:41.320
is changing, even in this.

25:41.320 --> 25:42.320
I type Python 3.10,

25:42.320 --> 25:44.320
and we get some point version.

25:44.320 --> 25:45.320
I think it's a date.

25:45.320 --> 25:46.320
It's a different date.

25:46.320 --> 25:53.320
So are you outputting, like, the SHA-256 checksum that results when you build a Docker image?

25:53.320 --> 26:05.320
So the question was if we have, basically, the image, and some kind of checksum to check that I got

26:05.320 --> 26:08.320
the correct one and that everything is still available.

26:08.320 --> 26:14.320
So yeah, basically what we share or publish as OCI containers, like the RAP, is the full image.

26:14.320 --> 26:16.320
So that's everything.

26:16.320 --> 26:21.320
But we can also publish the whole container to Zenodo.

26:21.320 --> 26:23.320
Then it's just the zip file, right?

26:23.320 --> 26:25.320
It has the checksum then.
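
Verifying such a published bundle can then be as simple as comparing a SHA-256 digest of the archive. A minimal sketch (the bundle here is a tiny stand-in built in memory, not a real RAP export):

```python
import hashlib
import io
import zipfile

def bundle_checksum(zip_bytes):
    """SHA-256 over the exported bundle, so a reader years later can
    verify the archive is bit-for-bit what was published."""
    return hashlib.sha256(zip_bytes).hexdigest()

# Build a tiny stand-in bundle in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("analysis.py", "print('hello')\n")

digest = bundle_checksum(buf.getvalue())
print(digest)
```

Publishing the digest alongside the bundle lets anyone detect corruption or tampering without rerunning anything.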

26:25.320 --> 26:28.320
Maybe you want that?

26:28.320 --> 26:30.320
Yes, please.

26:30.320 --> 26:35.320
No, it's different.

26:35.320 --> 26:41.320
So the question was if this is related to Renku.

26:41.320 --> 26:43.320
It's called Renku.

26:43.320 --> 26:45.320
It's exactly.

26:45.320 --> 26:50.320
So, Renku is also an initiative, developed by the Swiss Data Science Center.

26:50.320 --> 26:54.320
And so this is fully part and based.

26:54.320 --> 26:59.320
And I only know it as a user from this side.

26:59.320 --> 27:09.320
But if I understand it correctly, what you do is you have the system, and you can put files and everything into the system.

27:09.320 --> 27:12.320
And once it's in the system, it's tracked.

27:12.320 --> 27:19.320
But I don't think that they have the possibility to have the whole container,

27:19.320 --> 27:23.320
like a compute environment, really in it.

27:23.320 --> 27:26.320
So it's always like on demand, right?

27:26.320 --> 27:31.320
So you need to have access to the packages, to the repositories, and so on.

27:31.320 --> 27:32.320
Yeah.

27:32.320 --> 27:33.320
Thanks.

27:33.320 --> 27:36.320
Thank you.

