WEBVTT

00:00.000 --> 00:13.000
So, I have the pleasure to introduce Cedric, and Cedric will tell us what he does. You might be thinking,

00:13.000 --> 00:16.000
like, okay, who invited the intern on stage?

00:16.000 --> 00:21.000
So, he worked at Red Hat for nearly three years as an intern; now he works full-time,

00:21.000 --> 00:25.000
but he's presenting something that he did during these three years.

00:26.000 --> 00:29.000
And, you know, thank you. It's all yours.

00:30.000 --> 00:33.000
So, yay.

00:33.000 --> 00:41.000
So, my name is Cedric Clyburn, and actually, first and foremost,

00:41.000 --> 00:45.000
I want to give a big thank you to the organizers, and all you for being here,

00:45.000 --> 00:48.000
this couldn't be possible without the community.

00:48.000 --> 00:51.000
So, big round of applause for the organizers.

00:51.000 --> 00:57.000
I'm a first-time attendee, and first-time speaker.

00:57.000 --> 01:03.000
Here at FOSDEM, and really, really excited to share with you this session on building AI applications

01:03.000 --> 01:09.000
from your local machine using an open source project called Podman AI Lab.

01:09.000 --> 01:12.000
Unfortunately, my co-host, who is the PM for the project,

01:12.000 --> 01:16.000
couldn't join us today, but I'm sure you're watching, hello!

01:16.000 --> 01:18.000
So, let's go ahead and hop in.

01:18.000 --> 01:23.000
So, today's schedule, we've got a lot to cover, and I'm glad the talk right before me was

01:23.000 --> 01:27.000
talking about containers and talking about AI, because it's a perfect match.

01:27.000 --> 01:31.000
We're going to introduce Podman as a container engine, and Podman Desktop, and show you

01:31.000 --> 01:36.000
the Podman AI Lab, and how it solves a lot of the challenges that developers like myself

01:36.000 --> 01:41.000
have faced when trying to build applications with AI models, and take those applications

01:41.000 --> 01:44.000
and move them to a production environment.

01:44.000 --> 01:46.000
So, we're going to have a couple fun demos.

01:46.000 --> 01:51.000
We're going to look at different use cases, open source, and open weight models as well,

01:51.000 --> 01:56.000
and do a final crème de la crème demo: adding AI to an existing application.

01:56.000 --> 02:00.000
If you want to check out the slides or try Podman Desktop, feel free to.

02:00.000 --> 02:04.000
But without further ado, I want to introduce some friends here.

02:04.000 --> 02:08.000
These are the seals that make up the Podman and Podman Desktop projects.

02:08.000 --> 02:14.000
And recently, it was accepted into the CNCF as a sandbox project just last week.

02:14.000 --> 02:17.000
They approved it, so that's very, very exciting.

02:17.000 --> 02:20.000
They had a long night yesterday.

02:20.000 --> 02:24.000
This guy in the top right was at the Delirium Café, and was out until, like, four.

02:24.000 --> 02:27.000
And, you know, they're big fans of FOSDEM as well.

02:27.000 --> 02:32.000
If you haven't used Podman before, Podman is an open source container engine.

02:32.000 --> 02:36.000
It's open, extensible, and it's daemonless.

02:36.000 --> 02:40.000
So, instead of a client-server approach where you have a daemon running in the background,

02:40.000 --> 02:42.000
what happens is you actually do a fork-exec.

02:42.000 --> 02:45.000
That child process then becomes the container.

02:45.000 --> 02:49.000
And because of that, you can run containers in a rootless fashion by default.

02:49.000 --> 02:54.000
So, it's a great way to use container technology in larger organizations, for example.

02:54.000 --> 02:58.000
And it's open source, and it works with OCI images, containers, volumes,

02:58.000 --> 02:59.000
registries.

02:59.000 --> 03:03.000
There's a little bit about the differences between, say, for example,

03:03.000 --> 03:04.000
Docker.

03:04.000 --> 03:08.000
If you've used that before, and if you want to learn more about it on the left-hand side,

03:08.000 --> 03:13.000
there's a 10x engineer, really smart guy (he's my boss), who made a video on it.

03:13.000 --> 03:17.000
So, you can learn more online if you haven't tried out Podman.

03:17.000 --> 03:22.000
But Podman Desktop is this visual, graphical interface for working with containers,

03:22.000 --> 03:27.000
for working with Kubernetes, and as we're going to see today for working with AI models and applications.

03:28.000 --> 03:31.000
And it's great for deploying to Kubernetes.

03:31.000 --> 03:35.000
So, you can spin up a local Kind or Minikube cluster,

03:35.000 --> 03:39.000
or you can connect with your kubeconfig, and deploy something straight to your cluster

03:39.000 --> 03:43.000
without having to leave your container development environment.

03:43.000 --> 03:47.000
So, it's very easy, and it's all open and extensible by default.

03:47.000 --> 03:49.000
And it's built on a really nice stack.

03:49.000 --> 03:52.000
So, you've got Svelte and Tailwind on top of Electron,

03:52.000 --> 03:54.000
so you can use it on Mac, Windows, and Linux.

03:54.000 --> 03:59.000
And now, recently, for Mac, we're using libkrun to enable GPU passthrough,

03:59.000 --> 04:06.000
so you can use llama.cpp with, like, Vulkan, and then make those API calls to use the GPU for model inferencing.

04:06.000 --> 04:08.000
So, that's very exciting.

04:08.000 --> 04:12.000
Now, the main focus of the talk is transitioning to this new era that we're all living in,

04:12.000 --> 04:17.000
which, as you can see from our lovely little mascot, is the world of AI,

04:17.000 --> 04:21.000
which, for me, in the past few years, was kind of daunting,

04:21.000 --> 04:26.000
as an application developer myself, because I would be talking to the data scientists on my team,

04:26.000 --> 04:29.000
trying to see what they were working on, and they would send me a Jupyter notebook,

04:29.000 --> 04:31.000
and I didn't even know how to open it.

04:31.000 --> 04:36.000
But now, things are a little bit different, and we've got all of these open source tools and technologies

04:36.000 --> 04:41.000
that we've talked about today to spin up a local AI model as an API,

04:41.000 --> 04:44.000
and make those requests without even needing Wi-Fi.

04:44.000 --> 04:50.000
So, I think it's amazing, and it all starts with enabling developers to use models

04:50.000 --> 04:52.000
from their local machine.

04:52.000 --> 04:57.000
There's a lot of benefits because, for example, we could run an AI model as a service,

04:57.000 --> 05:01.000
just like we would with a database, like a Testcontainer, for example,

05:01.000 --> 05:04.000
and we have control over that model, right?

05:04.000 --> 05:09.000
So, no data is leaving our local machine, but it's also completely free,

05:09.000 --> 05:11.000
and this is amazing.
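
NOTE
Editor's note (not from the talk): a minimal sketch of what "running a model as a service" looks like from code, assuming a local llama.cpp inference server exposing an OpenAI-compatible API; the port and model name below are illustrative.
    import requests
    # Assumed local inference server; port 8000 and the model name are
    # placeholders, not values from the talk.
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "granite-3.1-8b-instruct",
            "messages": [{"role": "user", "content": "Hello from my local machine!"}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])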

05:11.000 --> 05:14.000
The Podman AI Lab is what we're taking a look at today.

05:14.000 --> 05:19.000
It's an open source project, and it's an extension of Podman Desktop,

05:19.000 --> 05:24.000
so that you can have an environment to develop applications that use AI,

05:24.000 --> 05:29.000
and run it all in containers so that when you want to move to a development or production environment

05:29.000 --> 05:35.000
on Kubernetes, everything runs with the same predictability that you would have throughout the process.

05:35.000 --> 05:37.000
So, I think it's cool.

05:37.000 --> 05:42.000
And as a developer, it can kind of feel overwhelming to start working with AI.

05:42.000 --> 05:44.000
You might be looking at different AI models.

05:44.000 --> 05:49.000
You might have picked one and then found there's a new DeepSeek that's out and released the next day,

05:49.000 --> 05:50.000
right?

05:50.000 --> 05:54.000
So, we want to have a little bit of predictability and standardization, right,

05:54.000 --> 05:58.000
when working with models, and that's the whole ideation and prototyping phase.

05:58.000 --> 06:02.000
But then you have to add things on, connect with services and data sources,

06:02.000 --> 06:07.000
with RAG or agents, to connect to APIs and chain things together.

06:07.000 --> 06:13.000
So, it can seem overwhelming, but we want to kind of make it easier for developers like me,

06:13.000 --> 06:18.000
and you guys to try out prompts and to build applications and run models.

06:18.000 --> 06:22.000
And so, what we're going to show with all the demos today is the AI Lab.

06:22.000 --> 06:28.000
So, it includes recipes; for example, with RAG, you would have an inference server,

06:28.000 --> 06:31.000
and you would have a model and you would have a vector database,

06:31.000 --> 06:35.000
all included in containers in one singular pod,

06:35.000 --> 06:39.000
so that you can quickly spin things up without having to worry about dependencies,

06:39.000 --> 06:45.000
and do your iteration of build, run, and test all in containers from your local machine,

06:45.000 --> 06:49.000
using the same workflow that you've been using for container development.
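
NOTE
Editor's note (not the recipe's actual code): a rough sketch of the RAG loop the recipe's containers implement, assuming the chromadb Python client and a local OpenAI-compatible model endpoint; the collection name, port, model, and sample text are illustrative.
    import chromadb
    import requests
    # The recipe runs these pieces as separate containers in one pod; an
    # in-process chromadb client is used here only to illustrate the flow.
    db = chromadb.Client()
    docs = db.create_collection("brochure")
    docs.add(ids=["1"], documents=["Sample brochure text about the event..."])
    question = "Who is keynoting FOSDEM this year?"
    hits = docs.query(query_texts=[question], n_results=1)
    context = hits["documents"][0][0]
    prompt = f"Answer only from the context, or say you don't know.\nContext: {context}\nQuestion: {question}"
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",  # assumed local server
        json={"model": "granite-3.1-8b-instruct",
              "messages": [{"role": "user", "content": prompt}]},
    )
    print(resp.json()["choices"][0]["message"]["content"])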

06:49.000 --> 06:52.000
So, if you want to learn more about it, feel free to check out the slides,

06:52.000 --> 06:55.000
but I love demos, and I hope you guys do too.

06:55.000 --> 06:58.000
So, who's ready for some live demos?

06:58.000 --> 07:05.000
All right, so here's the extension.

07:05.000 --> 07:12.000
I'm running Podman Desktop here on my Mac, and so the container engine is running.

07:12.000 --> 07:16.000
Under the hood, what's happening is we've set up a virtual machine.

07:16.000 --> 07:21.000
This was configured all for me, just with the click of a button, so it's very nice.

07:21.000 --> 07:26.000
And we have GPU passthrough, so that we can inference with this new M3 chip,

07:26.000 --> 07:28.000
which is very nice.

07:28.000 --> 07:31.000
So, let's go ahead and check it out.

07:31.000 --> 07:34.000
This is the extension where we've installed it,

07:34.000 --> 07:37.000
and there's tons of other similar types of extensions.

07:37.000 --> 07:42.000
We'll also kind of look at bootable containers today with AI as well,

07:42.000 --> 07:45.000
which is pretty cool. But first, the recipe catalog.

07:45.000 --> 07:49.000
So, let's say you want to build a new POC,

07:49.000 --> 07:51.000
or you want to test something out.

07:51.000 --> 07:54.000
As a developer, you've got to think about the models,

07:54.000 --> 07:56.000
and the frameworks, like LangChain,

07:56.000 --> 07:58.000
and you've got to think about an inference server.

07:58.000 --> 08:02.000
It can be a lot to think about, quite complex, right?

08:02.000 --> 08:05.000
But with these recipes, we just have the ingredients

08:05.000 --> 08:07.000
that you need to set up this application.

08:07.000 --> 08:11.000
So, you can learn about it here in this little catalog,

08:11.000 --> 08:14.000
but I'll go ahead and start one up and show you how it works.

08:14.000 --> 08:17.000
So, there's a list of different open source,

08:17.000 --> 08:21.000
Apache 2.0 models that you can download straight from Hugging Face

08:21.000 --> 08:22.000
with the click of a button,

08:22.000 --> 08:26.000
and you use that model in this recipe.

08:26.000 --> 08:29.000
So, we're essentially pulling down the chatbot

08:29.000 --> 08:30.000
GitHub repo.

08:30.000 --> 08:34.000
We're mounting that model to the inference server,

08:34.000 --> 08:35.000
and we're starting it up.

08:35.000 --> 08:38.000
We're starting the ChromaDB vector database,

08:38.000 --> 08:40.000
in order to chunk our data,

08:40.000 --> 08:42.000
and we've got this app running.

08:42.000 --> 08:44.000
So, I'll go ahead and show you this.

08:44.000 --> 08:45.000
Give it a second.

08:45.000 --> 08:48.000
In a second or two, it's all running in a singular pod,

08:48.000 --> 08:50.000
and let's check it out.

08:50.000 --> 08:53.000
So, it's a simple Streamlit application,

08:53.000 --> 08:55.000
let's ask a question.

08:55.000 --> 08:57.000
Who is keynoting

08:57.000 --> 09:01.000
FOSDEM this year, in 2025?

09:01.000 --> 09:04.000
So, this is a standard example of,

09:04.000 --> 09:10.000
right, asking the model a question that it wasn't previously trained on.

09:10.000 --> 09:14.000
The answer we get: hey, we don't have information about who's keynoting.

09:14.000 --> 09:17.000
I'm sad now, right, I want it to know.

09:17.000 --> 09:19.000
And so, I've got a little brochure in here,

09:19.000 --> 09:20.000
actually, I'll show you,

09:20.000 --> 09:23.000
that just has a little bit of information about the event.

09:23.000 --> 09:27.000
And let's provide that to the application,

09:27.000 --> 09:29.000
and let's go in the background here,

09:29.000 --> 09:30.000
and see what's actually happening.

09:30.000 --> 09:32.000
So, here we have all the logs.

09:32.000 --> 09:36.000
Oh, that was not supposed to happen.

09:36.000 --> 09:40.000
Okay, so let's say our application isn't working,

09:40.000 --> 09:43.000
and we want to come back here and restart it.

09:43.000 --> 09:45.000
Well, it's very easy.

09:45.000 --> 09:47.000
We just restart the recipe.

09:47.000 --> 09:49.000
Let's say we've made some changes to our code base.

09:49.000 --> 09:51.000
We've added a new feature.

09:51.000 --> 09:52.000
We've started iterating.

09:52.000 --> 09:55.000
Well, this will be a great way to just, you know, code,

09:55.000 --> 09:57.000
and restart, code and restart.

09:57.000 --> 09:58.000
And then, hopefully,

09:58.000 --> 09:59.000
not have the same issue.

09:59.000 --> 10:01.000
So, we'll give it a second.

10:01.000 --> 10:03.000
The containers have stopped running,

10:03.000 --> 10:05.000
and hopefully, it'll just start up for me again.

10:05.000 --> 10:08.000
And we can try to ask the question again.

10:08.000 --> 10:10.000
So, just like that,

10:10.000 --> 10:12.000
all the containers have restarted.

10:12.000 --> 10:14.000
So, the vector database,

10:14.000 --> 10:16.000
the Streamlit application,

10:16.000 --> 10:18.000
hey, who is keynoting

10:18.000 --> 10:20.000
FOSDEM this year?

10:20.000 --> 10:22.000
So, we'll try it.

10:22.000 --> 10:24.000
FOSDEM.

10:24.000 --> 10:26.000
Okay, so, we'll get the same answer.

10:26.000 --> 10:28.000
Doesn't have the context.

10:28.000 --> 10:30.000
Fingers crossed.

10:30.000 --> 10:32.000
It'll give the same answer

10:32.000 --> 10:34.000
if we give it the same question.

10:34.000 --> 10:36.000
Mmm.

10:36.000 --> 10:38.000
Good thing I have a backup.

10:38.000 --> 10:40.000
Okay.

10:40.000 --> 10:42.000
So, now we ask the question,

10:42.000 --> 10:44.000
and provide the context.

10:44.000 --> 10:46.000
And give it a second.

10:46.000 --> 10:48.000
Now, while it's processing the document,

10:48.000 --> 10:50.000
I want to show you the magic for

10:50.000 --> 10:52.000
what's going on behind the scenes.

10:52.000 --> 10:54.000
So, here is the source code

10:54.000 --> 10:57.000
that we've cloned when we started this recipe, right?

10:57.000 --> 11:00.000
And we've got a Containerfile to package our application.

11:00.000 --> 11:02.000
Let me make this a little bit bigger.

11:02.000 --> 11:06.000
We've got our RAG application in order to,

11:06.000 --> 11:09.000
say, for example, connect to the model that's running locally.

11:09.000 --> 11:13.000
And we've got all the code here to accept, say,

11:13.000 --> 11:15.000
PDF, text, some front end here,

11:15.000 --> 11:17.000
and be able to pass a prompt template.

11:17.000 --> 11:19.000
Hey, don't answer the question

11:19.000 --> 11:21.000
if you don't have the context.
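
NOTE
Editor's note: a tiny sketch of what such a RAG prompt template typically looks like; the wording and names are illustrative, not the recipe's actual template.
    # Hypothetical template; {context} is filled with the retrieved chunks.
    PROMPT_TEMPLATE = (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say 'I don't know'.\n"
        "Context: {context}\n"
        "Question: {question}"
    )
    def build_prompt(context: str, question: str) -> str:
        # Combine retrieved document chunks with the user's question.
        return PROMPT_TEMPLATE.format(context=context, question=question)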

11:21.000 --> 11:25.000
I don't know which one.

11:25.000 --> 11:27.000
One of these was supposed to work.

11:27.000 --> 11:29.000
Just trust me.

11:29.000 --> 11:31.000
We'll have to do another demo.

11:31.000 --> 11:33.000
So: code base, Containerfile,

11:33.000 --> 11:37.000
our application packaged, just like that.

11:37.000 --> 11:39.000
We've also got options here in these recipes

11:39.000 --> 11:41.000
in order to, say, for example,

11:41.000 --> 11:44.000
take our application and run it as a service,

11:44.000 --> 11:47.000
say, using Quadlet, which is a feature of Podman,

11:47.000 --> 11:51.000
or to take it and actually run it as an operating system itself.
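
NOTE
Editor's note: a minimal sketch of a Quadlet unit file (e.g. ~/.config/containers/systemd/chatbot.container) that runs a container as a systemd service; the image name and port are illustrative, not from the talk.
    [Unit]
    Description=Chatbot recipe running as a service
    [Container]
    Image=quay.io/example/chatbot-app:latest
    PublishPort=8501:8501
    [Install]
    WantedBy=default.target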

11:51.000 --> 11:53.000
Where, say, for example, we take

11:53.000 --> 11:56.000
CentOS Stream as a bootable container base image,

11:56.000 --> 11:58.000
then we overlay, as containers,

11:58.000 --> 12:00.000
all of the different components,

12:00.000 --> 12:03.000
the model, the inference server,

12:03.000 --> 12:05.000
the database, and the application itself.

12:05.000 --> 12:07.000
And we can actually take this and run it

12:07.000 --> 12:09.000
as an operating system wherever we'd like,

12:09.000 --> 12:11.000
you know, the edge, IoT devices,

12:11.000 --> 12:14.000
etc., etc., where the model needs to be as close

12:14.000 --> 12:16.000
as possible to the actual hardware itself.
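
NOTE
Editor's note: a hypothetical Containerfile sketch of the bootable-container idea, layering application pieces onto a CentOS Stream bootc base image; the paths and file names are illustrative.
    FROM quay.io/centos-bootc/centos-bootc:stream9
    # Ship a Quadlet unit (like the sketch above) so the app's containers
    # start when the operating system boots; names are illustrative.
    COPY chatbot.container /usr/share/containers/systemd/chatbot.container
    # The model, inference server, and database images would likewise be
    # embedded in the image or pulled on first boot, depending on the build.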

12:16.000 --> 12:19.000
So, that's pretty dope, as well.

12:19.000 --> 12:22.000
Let me show you a demo that hopefully works here.

12:22.000 --> 12:25.000
A lot of our applications for AI might be working with text,

12:25.000 --> 12:27.000
but it's really important to remember that audio,

12:27.000 --> 12:31.000
vision, etc., are becoming more and more popular.

12:31.000 --> 12:34.000
So, here's an example of audio to text.

12:34.000 --> 12:39.000
We're using the Whisper model, which came from OpenAI,

12:39.000 --> 12:41.000
to power this application.

12:41.000 --> 12:44.000
So, what we're going to be able to do is, say, for example,

12:44.000 --> 12:46.000
if someone is out in the field,

12:46.000 --> 12:49.000
maybe it's a doctor, and they need to transcribe their conversations,

12:49.000 --> 12:52.000
we could build some type of application to be able to

12:52.000 --> 12:55.000
understand the language in real time.
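
NOTE
Editor's note: a small sketch assuming the recipe serves Whisper through whisper.cpp's example HTTP server; the port, the /inference endpoint, and the file name are assumptions based on that server's documented API, not shown in the talk.
    import requests
    # POST an audio file to the assumed local whisper.cpp server and
    # print the transcription it returns.
    with open("clip.mp3", "rb") as f:
        resp = requests.post(
            "http://localhost:8080/inference",
            files={"file": f},
            data={"response_format": "json"},
        )
    print(resp.json()["text"])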

12:55.000 --> 12:57.000
So, let's go over here,

12:57.000 --> 13:00.000
and I'm going to talk a little bit about

13:01.000 --> 13:03.000
my first time here.

13:03.000 --> 13:06.000
It's very crazy, and I have a hard time navigating.

13:06.000 --> 13:10.000
But maybe, you know, I'm a little too close to it.

13:10.000 --> 13:14.000
And I also spoke a little Spanish,

13:14.000 --> 13:17.000
You know, a couple of different languages to test this out.

13:17.000 --> 13:21.000
And let's go ahead and upload this file to our little application.

13:21.000 --> 13:23.000
So, let's go over here.

13:23.000 --> 13:26.000
And fingers crossed.

13:26.000 --> 13:29.000
We'll give the MP3 file, and let's see what the result is.

13:29.000 --> 13:31.000
Yeah.

13:31.000 --> 13:33.000
Pretty cool, right?

13:37.000 --> 13:41.000
So, you can take whatever language; Whisper has about

13:41.000 --> 13:43.000
80 languages that it was trained on,

13:43.000 --> 13:45.000
and start to build an application just like this.

13:45.000 --> 13:46.000
Click on the button.

13:46.000 --> 13:49.000
You have the model, the server for inferencing,

13:49.000 --> 13:51.000
and the front end to open up, and then you can make it

13:51.000 --> 13:52.000
your own.

13:52.000 --> 13:54.000
Because what we were using, as the API

13:54.000 --> 13:57.000
that you saw earlier, was llama.cpp.

13:57.000 --> 14:00.000
But if you like vLLM, well, you could replace it,

14:00.000 --> 14:02.000
and kind of build your own recipes,

14:02.000 --> 14:05.000
and use kind of this model catalog here,

14:05.000 --> 14:07.000
of these open, open weight,

14:07.000 --> 14:09.000
Apache 2.0 models,

14:09.000 --> 14:11.000
or maybe import your own.

14:11.000 --> 14:13.000
So, for example, DeepSeek,

14:13.000 --> 14:15.000
I've been playing around with it recently.

14:15.000 --> 14:17.000
It's very interesting.

14:17.000 --> 14:19.000
You know, we could test that out here.

14:20.000 --> 14:22.000
Then, or...

14:26.000 --> 14:27.000
Let's see.

14:27.000 --> 14:30.000
I'm not liable for the output of this model.

14:30.000 --> 14:33.000
So, I love how it thinks.

14:33.000 --> 14:37.000
It's like, it's a wrapper, like I said.

14:37.000 --> 14:39.000
So, fun to play around with models.

14:39.000 --> 14:42.000
But, of course, the real use case is,

14:42.000 --> 14:45.000
and let me see if I can open this other,

14:46.000 --> 14:47.000
or give it a refresh.

14:47.000 --> 14:49.000
The real use case is doing some experimentation

14:49.000 --> 14:51.000
and prompting.

14:51.000 --> 14:53.000
I can't open up my other playground,

14:53.000 --> 14:57.000
but it was just some kind of messing around

14:57.000 --> 14:59.000
with the temperature and doing some prompt templates.

14:59.000 --> 15:01.000
Five minutes here.

15:01.000 --> 15:03.000
Let's do the crème de la crème demo.

15:03.000 --> 15:07.000
I want to show you the model serving side of things.

15:07.000 --> 15:08.000
So, here we go.

15:08.000 --> 15:11.000
We've got a couple different models that we've kind of spun up.

15:11.000 --> 15:14.000
Let's take, for example, this Granite 3.1 model

15:14.000 --> 15:15.000
that we've spun up.

15:15.000 --> 15:19.000
This is just running inside of a llama.cpp container.

15:19.000 --> 15:23.000
So, we've containerized this inference server.

15:23.000 --> 15:26.000
We could check out the, for example,

15:26.000 --> 15:30.000
Swagger API and kind of, you know, try it out.

15:30.000 --> 15:32.000
What's the capital of France?

15:32.000 --> 15:34.000
Hope it says Paris.

15:34.000 --> 15:35.000
That was close.

15:35.000 --> 15:40.000
We could, say, take some code and integrate this into our application.

15:40.000 --> 15:45.000
I play around with LangChain, and LangChain4j for Java folks.

15:45.000 --> 15:50.000
It's a pretty good way to start using AI if you're a Java developer.

15:50.000 --> 15:54.000
And this model is running on this specific port.

15:54.000 --> 15:58.000
I could copy this code in and start using it into my application.

15:58.000 --> 16:06.000
But let's see, let me hop over to, here we go.

16:06.000 --> 16:07.000
Cool.

16:08.000 --> 16:13.000
So, I have to do a quick step and actually I need to change one more thing.

16:13.000 --> 16:16.000
So, this is LangChain4j.

16:16.000 --> 16:21.000
Let's see, I'm going to replace the port here.

16:21.000 --> 16:23.000
So, the model is serving, right?

16:23.000 --> 16:26.000
We just tested out the endpoint a second ago.

16:26.000 --> 16:29.000
Let's start up our application.

16:30.000 --> 16:32.000
Port 8006, cool.

16:32.000 --> 16:37.000
So, let's say someone in this organization, for example,

16:37.000 --> 16:41.000
has to go through a large backlog of claims, right?

16:41.000 --> 16:44.000
Everyone's getting in car accidents for some reason.

16:44.000 --> 16:46.000
So, here's a new claim.

16:46.000 --> 16:49.000
Marty McFly crashed his DeLorean.

16:49.000 --> 16:52.000
We want to have a model we can ask,

16:52.000 --> 16:58.000
hey, who is at fault here?

16:58.000 --> 17:00.000
So, let's try it.

17:00.000 --> 17:04.000
You saw the POST request go off in the background.

17:04.000 --> 17:06.000
Okay, enough audio.

17:06.000 --> 17:09.000
And here we have, token by token, the response back.

17:09.000 --> 17:10.000
But it's really cool, right?

17:10.000 --> 17:13.000
The model's running locally, we're making that POST request,

17:13.000 --> 17:15.000
getting the response back.

17:15.000 --> 17:18.000
And hopefully this is a great way to start experimenting

17:18.000 --> 17:22.000
from our local machine and building some cool applications that use AI.

17:22.000 --> 17:24.000
Let's see who is at fault.

17:24.000 --> 17:25.000
Okay.

17:25.000 --> 17:28.000
So, you know, very nice, very nice.

17:28.000 --> 17:33.000
So, I hope you guys have enjoyed this little kind of live demo of the AI Lab.

17:33.000 --> 17:34.000
There's a lot of cool stuff you can do.

17:34.000 --> 17:37.000
And if you want to replicate a lot of this,

17:37.000 --> 17:40.000
you're able to call models and interact with the API,

17:40.000 --> 17:44.000
or you can replicate it specifically here with this local server.

17:44.000 --> 17:49.000
Let me kind of close up here and let's see.

17:49.000 --> 17:50.000
Cool.

17:50.000 --> 17:54.000
And as you might know, containers are a perfect solution for AI at the moment.

17:54.000 --> 17:58.000
Because of the portability, the ability to scale up on demand,

17:58.000 --> 18:01.000
let's say when our application is in use nine to five,

18:01.000 --> 18:04.000
or the ability to scale down afterwards and save.

18:04.000 --> 18:07.000
Not only money, but maybe the trees outside as well,

18:07.000 --> 18:08.000
and be economical.

18:08.000 --> 18:11.000
So, there's GPU acceleration, whatever platform you're on,

18:11.000 --> 18:14.000
and I highly recommend that you try out the Podman AI Lab.

18:14.000 --> 18:16.000
There's a great community behind it.

18:16.000 --> 18:17.000
It's an open source project.

18:17.000 --> 18:19.000
It just joined the CNCF.

18:19.000 --> 18:24.000
And feel free to provide feedback on the repository,

18:24.000 --> 18:28.000
to contribute, create PRs, whatever you would like to do.

18:28.000 --> 18:29.000
And be happy.

18:29.000 --> 18:32.000
On behalf of the whole team, I want to say thank you.

18:32.000 --> 18:34.000
And this has been a pleasure.

18:34.000 --> 18:35.000
This was a really fun time.

18:35.000 --> 18:37.000
And thanks for coming to FOSDEM.

18:37.000 --> 18:39.000
Thank you.

