WEBVTT

00:00.000 --> 00:11.000
Hi everyone, so this is a presentation about modernizing

00:11.000 --> 00:12.000
ROS 2 skills.

00:12.000 --> 00:14.000
I'm Miguel Xochicale.

00:14.000 --> 00:18.000
I'm a research software engineer at University College London.

00:18.000 --> 00:21.000
I'm presenting this work on behalf of the Advanced Research

00:21.000 --> 00:24.000
Computing and the Geomatic Engineering teams.

00:24.000 --> 00:29.000
So mainly the challenge is how we teach all these

00:29.000 --> 00:34.000
computing and hardware skills to students.

00:34.000 --> 00:38.000
We need to manage lots of things,

00:38.000 --> 00:42.000
the server, the virtual machines, the robot sensors,

00:42.000 --> 00:46.000
and the way we are trying to do this is by using cloud computing.

00:46.000 --> 00:50.000
And the reason is that participants

00:50.000 --> 00:53.000
have different skills and levels,

00:53.000 --> 00:56.000
so we need to handle all these kinds of dependencies

00:56.000 --> 01:01.000
and make sure they are not struggling with where to start developing.

01:01.000 --> 01:06.000
So what we are trying to do is orchestrate a few pieces of infrastructure.

01:06.000 --> 01:13.000
Like a bare-metal server that runs all the packages

01:13.000 --> 01:17.000
that collect the sensor data, control the robots,

01:17.000 --> 01:22.000
and we use a single router to communicate through the

01:23.000 --> 01:27.000
ROS 2 middleware with these virtual machines.

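As a rough sketch of how the virtual machines and the bare-metal server can end up on the same ROS 2 graph, each node just needs to agree on the domain and middleware; the specific values below are assumptions for illustration, not necessarily what we use.

```python
import os

# Assumed values: machines that should discover each other share a
# ROS_DOMAIN_ID; the middleware implementation named here is only an example.
os.environ.setdefault("ROS_DOMAIN_ID", "42")
os.environ.setdefault("RMW_IMPLEMENTATION", "rmw_cyclonedds_cpp")

import rclpy
from rclpy.node import Node

rclpy.init()
node = Node("vm_probe")
node.get_logger().info("node is up and discoverable on the shared domain")
rclpy.spin_once(node, timeout_sec=1.0)
node.destroy_node()
rclpy.shutdown()
```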
01:27.000 --> 01:30.000
So this setup makes it easier for users to start

01:30.000 --> 01:34.000
connecting to the machines via SSH and playing with

01:34.000 --> 01:37.000
ROS 2 packages and rosbag playbacks.

01:37.000 --> 01:41.000
They can really start working on the fun stuff,

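A minimal sketch of what that first session can look like once you have SSH'd in: a small rclpy subscriber. The /scan topic and LaserScan message type are assumptions about what the robot sensors publish.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan  # assumed sensor type for illustration


class ScanListener(Node):
    """Print a one-line summary of each incoming laser scan."""

    def __init__(self):
        super().__init__("scan_listener")
        self.create_subscription(LaserScan, "/scan", self.on_scan, 10)

    def on_scan(self, msg: LaserScan):
        self.get_logger().info(f"scan with {len(msg.ranges)} ranges")


def main():
    rclpy.init()
    rclpy.spin(ScanListener())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```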
01:41.000 --> 01:43.000
and not really struggling with things like,

01:43.000 --> 01:48.000
"I need to install my Python version,

01:48.000 --> 01:51.000
3.11 or 3.12," and juggling all these flavors of ROS 2.

01:51.000 --> 01:54.000
All of which is also fun for me,

01:54.000 --> 01:56.000
but maybe not for everyone.

01:56.000 --> 02:00.000
So that's why we are kind of playing with this infrastructure.

02:00.000 --> 02:04.000
So here is kind of a demo for you.

02:04.000 --> 02:07.000
We have been doing this kind of workflow

02:07.000 --> 02:12.000
where we develop Docker containers with all the ROS 2 dependencies

02:12.000 --> 02:16.000
in them, push them to the GitHub Container Registry,

02:16.000 --> 02:20.000
which makes them easier to port.

02:20.000 --> 02:25.000
And then that way we pull that image into the virtual machine

02:25.000 --> 02:29.000
and that makes it easier for users to start

02:29.000 --> 02:31.000
just playing with the data.

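Roughly what that pull step can look like with the Docker SDK for Python; the ghcr.io image name here is made up for illustration, and host networking is an assumption so ROS 2 discovery works across machines.

```python
import docker  # pip install docker

# Hypothetical image name; the real one lives in our GitHub Container Registry.
REPO = "ghcr.io/example-org/ros2-workshop"

client = docker.from_env()
client.images.pull(REPO, tag="latest")

# Run a quick command in the container on the virtual machine.
container = client.containers.run(
    f"{REPO}:latest",
    command="ros2 topic list",
    network_mode="host",
    detach=True,
)
container.wait()
print(container.logs().decode())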
02:31.000 --> 02:34.000
So that's what we are doing now.

02:34.000 --> 02:37.000
We have done virtual-to-virtual

02:37.000 --> 02:39.000
and virtual-to-bare-metal communication.

02:39.000 --> 02:45.000
We are doing this and exploring ROS 2 DDS.

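One simple way to sanity-check that topics really flow between the virtual machines and the bare-metal server is a probe along these lines; the /ping topic and the one-message-per-second rate are illustrative, and the one-way delay is only meaningful if the clocks are synchronised.

```python
import sys
import time

import rclpy
from rclpy.node import Node
from std_msgs.msg import Float64


class Pinger(Node):
    """Publish the current wall-clock time; run on the bare-metal server."""

    def __init__(self):
        super().__init__("pinger")
        self.pub = self.create_publisher(Float64, "/ping", 10)
        self.create_timer(1.0, self.tick)

    def tick(self):
        msg = Float64()
        msg.data = time.time()
        self.pub.publish(msg)


class Ponger(Node):
    """Print the apparent one-way delay; run inside a virtual machine."""

    def __init__(self):
        super().__init__("ponger")
        self.create_subscription(Float64, "/ping", self.on_ping, 10)

    def on_ping(self, msg: Float64):
        delay = time.time() - msg.data
        self.get_logger().info(f"one-way delay ~{delay:.4f}s (clocks assumed synced)")


def main(role: str):
    rclpy.init()
    rclpy.spin(Pinger() if role == "ping" else Ponger())
    rclpy.shutdown()


if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "ping")
```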
02:45.000 --> 02:48.000
The other thing we are playing around with

02:48.000 --> 02:53.000
is how we interact with GPU server capacity.

02:53.000 --> 02:57.000
So, for example, Isaac Sim and Isaac Lab.

02:57.000 --> 02:59.000
So we have been running these on the virtual machine.

02:59.000 --> 03:02.000
Users just connect via SSH

03:02.000 --> 03:05.000
and then start playing with the interaction

03:05.000 --> 03:08.000
and all kinds of models.

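Before launching anything like Isaac Sim or Isaac Lab, a quick check of what the GPU server exposes can save time; PyTorch is used here purely as a convenient stand-in for whichever framework the simulator relies on.

```python
import torch

if torch.cuda.is_available():
    # List every GPU visible from this SSH session.
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.1f} GB")
else:
    print("No CUDA device visible; simulation will fall back to CPU or fail.")
```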
03:08.000 --> 03:12.000
So there are lots of things we want to do

03:12.000 --> 03:17.000
in the future, like more multi-modal sensing

03:17.000 --> 03:21.000
and going from the low level to the high level.

03:21.000 --> 03:25.000
So a lot of things depend on the sensors we focus on.

03:25.000 --> 03:31.000
And of course, we are kind of also developing, sorry about this.

03:31.000 --> 03:35.000
But we are working on the, I don't know what's happening.

03:35.000 --> 03:41.000
But one thing we are working on is developing vision-language models,

03:42.000 --> 03:47.000
vision-language-action models, and also vision-language navigation models.

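Just to give a flavour of the vision-language side, a captioning model can be tried in a couple of lines with the transformers library; the checkpoint named here is a public example, not necessarily the one we use.

```python
from transformers import pipeline

# Publicly available captioning checkpoint, used only as an example.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# Hypothetical image path; any robot camera frame would do.
print(captioner("camera_frame.png"))
```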
03:47.000 --> 03:49.000
Sorry about this.

03:49.000 --> 03:52.000
But that's what we are trying to do.

03:52.000 --> 03:54.000
That's kind of hit a wall.

03:54.000 --> 03:59.000
The other thing we are working on is healthcare

03:59.000 --> 04:04.000
and robotics, medical robotics applications.

04:04.000 --> 04:09.000
So we've been organizing workshops.

04:09.000 --> 04:12.000
and a medical robotics symposium.

04:12.000 --> 04:16.000
So last year, we had around 50 participants,

04:16.000 --> 04:20.000
10 speakers, posters, and it was sold out.

04:20.000 --> 04:23.000
And we are organizing a new event.

04:23.000 --> 04:27.000
If anyone is interested in these topics, just talk to me.

04:27.000 --> 04:30.000
And lastly, we have a few takeaways.

04:30.000 --> 04:34.000
What we want to do is lower the barriers to advanced robotics

04:34.000 --> 04:35.000
at scale.

04:35.000 --> 04:38.000
We have a proven and scalable model for collaboration.

04:38.000 --> 04:42.000
We have a test bed for research, training, and innovation.

04:42.000 --> 04:46.000
And of course, if anyone is keen to work with us,

04:46.000 --> 04:51.000
or collaborate, please talk to me or Marlon, who is also in the audience.

04:51.000 --> 04:53.000
Yeah, thank you.

04:53.000 --> 04:56.000
And thanks to all the colleagues from UCL and the panelists.

04:56.000 --> 04:59.000
And yeah, we have everything on GitHub and on the website as well.

04:59.000 --> 05:01.000
So yeah, thank you very much.

05:01.000 --> 05:07.000
Thank you so much, Miguel.

