WEBVTT

00:00.000 --> 00:13.280
Hi everyone, good afternoon. My name is Pratichhya and I am a geomatics

00:13.280 --> 00:18.000
engineer from the openEO team. Today I am joined by Emile Sonneveld, who is one of

00:18.000 --> 00:25.000
the developers, or the face, behind openEO. And the main topic that we are here

00:25.000 --> 00:30.680
trying to cover within openEO is the reusability concept that we support for

00:30.680 --> 00:36.360
the algorithms that are developed using openEO. But before that I want to quickly

00:36.360 --> 00:41.240
have a recap, because last year as well I got the opportunity to be here. That was the first

00:41.240 --> 00:47.960
time I was here at FOSDEM, and it was also our first time presenting CDSE at

00:48.040 --> 00:54.520
FOSDEM. And openEO is one of the open source components that is supported in

00:54.520 --> 01:00.840
CDSE, which is the Copernicus Data Space Ecosystem. Quickly giving a recap for those who are

01:00.840 --> 01:08.760
still not aware or who have completely no idea what CDSE is: CDSE is a user-centric platform

01:08.760 --> 01:15.480
where we provide satellite data sets all at one single point, with the focus on the Copernicus

01:15.800 --> 01:24.600
data sets. Copernicus is the European programme of satellite missions capturing data sets that are used

01:24.600 --> 01:30.600
for several applications related to Earth observation. And openEO is one of the tools that is

01:30.600 --> 01:37.560
also provided within this ecosystem, for not just accessing the data sets that are in the ecosystem,

01:37.560 --> 01:44.680
but also processing them in a cloud environment. So that was what we covered last year, but this year

01:44.760 --> 01:50.840
we thought: why not go a bit more into the technicalities of what openEO is and how we can say

01:50.840 --> 01:56.760
that, okay, openEO solves this problem. But how does it do that behind the scenes,

01:56.760 --> 02:01.960
and how does it support the concept of FAIR that we always try to present with openEO?

02:04.760 --> 02:09.000
To start with a classical example: can you guess where this could be?

02:09.480 --> 02:20.360
Yeah, there was a voice: Brussels. To be more specific, I think the dark image that you see below there

02:21.000 --> 02:28.680
is somewhere next to us, if I'm correct. And it is actually a beautiful snowy image that we had

02:28.680 --> 02:34.760
in the second week of January this year. So this is Brussels itself.

02:35.720 --> 02:42.360
But I wanted to show this picture because it's a pretty picture to look at, yes, a snowy

02:42.360 --> 02:48.440
picture of Brussels, but along with that it also gives us insights and information

02:48.440 --> 02:53.640
on what the landscapes are and what additional information we can gather from this

02:53.640 --> 02:59.720
kind of data set. And these are a couple of other pretty images, I would say, but they tell

02:59.800 --> 03:08.840
much more than just visualization. On the right you can see an image that describes the

03:08.840 --> 03:15.960
land surface temperature, and on the left you can see a couple of ships that are detected using

03:15.960 --> 03:22.920
one of these Sentinel data sets. So these are some examples of information that we can gather from

03:22.920 --> 03:28.840
these satellite data sets. Along with that, we can also study several Earth observation,

03:28.920 --> 03:35.640
Earth-related phenomena and disasters, and also perform several applications related to these tasks.

03:36.360 --> 03:44.040
So here you can see a hurricane, followed by a wildfire, and there are gas flares; these are all

03:44.040 --> 03:50.120
detected with satellite data sets. However, there are often challenges when working with this

03:50.120 --> 03:56.600
kind of data set: there are terabytes of data acquired every day, and in order to

03:56.600 --> 04:02.680
work with them, to make useful information out of them, it can be quite troublesome to download

04:02.680 --> 04:09.240
all these data sets to your own environment and process them. Furthermore, the data sets that I

04:09.240 --> 04:13.640
showed you were from Sentinel-1 and Sentinel-2; these are just a couple of the sensors that

04:13.640 --> 04:18.920
capture the data, and they can provide you data in different formats. So the algorithm that you

04:18.920 --> 04:23.720
will be developing for, for example, NetCDF could be very different from that for another format.

04:24.600 --> 04:31.000
So every time you will have to think about how to change your working method in order to work with

04:31.000 --> 04:37.960
these kinds of data sets, as well as where they are stored. With CDSE it has now been simplified,

04:37.960 --> 04:43.400
in that we try to bring all the data sets to one location so that you can directly access the data

04:43.400 --> 04:50.440
from there. But addressing data formats was still quite a challenge, I would say.

04:50.680 --> 04:57.720
That is what openEO tries to answer. But in addition to that, I would like to show what

04:57.720 --> 05:05.000
the traditional method was that we as remote sensing engineers, or sorry, remote sensing data

05:05.000 --> 05:12.440
engineers, used to follow with the data sets previously. So for example, I wanted to identify

05:12.440 --> 05:17.640
the change in a given location; at that time I used to download the data from the previous

05:17.640 --> 05:24.840
year, for however much data was available. I just downloaded it to my machine, and if I want to be

05:24.840 --> 05:29.880
specific to Brussels, I just do additional processing and all those things, and then I again

05:29.880 --> 05:35.800
do the same steps for the next year, and I try to find the difference between those two data sets,

05:35.800 --> 05:43.400
finally focusing on developing my algorithm, which is often tricky if I want to do

05:43.480 --> 05:51.240
the same calculation again in a different location. So that didn't support the concept of reusability,

05:51.240 --> 05:58.840
I would say, or reproducibility of what I wanted to do in a different scenario. Apart from that, it was also

05:58.840 --> 06:04.840
a challenge, and I think someone else also addressed this challenge earlier in this room: I prepared

06:04.840 --> 06:10.760
the data set, but it is very tricky to share it with others, or within my team as well.

06:10.760 --> 06:16.120
How do I easily share it? Do I download it, copy it onto a pen drive, and say: please, can you

06:16.120 --> 06:23.960
look over this data set in your own environment? So that was another tricky question

06:23.960 --> 06:29.800
that openEO tried to answer, because following this traditional method it was often slow;

06:31.080 --> 06:37.080
it was not just hardware-expensive but also computationally expensive to do the same thing. As I

06:37.080 --> 06:43.800
mentioned earlier, reproducing the output was quite a tricky part of the whole algorithm development,

06:43.800 --> 06:50.440
and moreover, scaling up the algorithm: I managed to do the same work for the Brussels region,

06:50.440 --> 06:56.440
but how do I take it to a continental level or to a global level? So that was a question that always

06:56.440 --> 07:02.440
lay on the table when working with remote sensing data sets. And as I have always been mentioning,

07:02.440 --> 07:09.160
okay, openEO is the one that tried to solve it: me, as a remote sensing data engineer, I always

07:09.160 --> 07:15.160
faced the problems that I mentioned earlier, and openEO solved them. But how? To answer that question,

07:15.160 --> 07:21.160
Emile will be helping me out, because he is one of the developers of openEO who has been

07:21.160 --> 07:27.960
intensively working on improving the algorithms, developing various features and processes that can

07:28.040 --> 07:35.720
be used by the remote sensing community. So, Emile, over to you. — Thank you, Pratichhya, for explaining the challenges

07:35.720 --> 07:42.520
and also showing the nice images of Copernicus. So hello, my name is Emile Sonneveld; I'm a developer

07:42.520 --> 07:49.080
for openEO, one of the developers. And openEO is actually an open standard that's useful

07:49.080 --> 07:54.920
for working on geospatial data: the EO stands for Earth observation, and the "open"

07:54.920 --> 08:02.600
is there because it's open source and also works a lot with freely available data, which is always

08:02.600 --> 08:08.360
nice. openEO is an open source API that provides standardized access to Earth observation data

08:08.360 --> 08:12.040
and simplifies the deployment of processing workflows as scalable, cost-efficient services.

08:12.040 --> 08:18.680
So what does it mean in practice? We have a few examples where openEO is running, and I'll

08:18.840 --> 08:26.360
cover the most recent example: it's here on CDSE. On CDSE there is a very large data center

08:26.360 --> 08:32.600
with multiple petabytes of Earth observation data on it, and often we want to do computations

08:32.600 --> 08:40.520
on this data: to compute land cover, to compute how much maize is growing everywhere. But these

08:40.520 --> 08:48.040
calculations are all so large and work on so much data that it is quite difficult to write

08:48.040 --> 08:54.360
efficient algorithms for them. So what openEO helps with is to take a very large task, cut it

08:54.360 --> 09:00.200
into pieces, process them in parallel on different computers in the same data center where

09:00.200 --> 09:06.840
the data is also residing, and then, once everything has been calculated, the results are stitched

09:06.840 --> 09:12.440
back together in a very large STAC catalog, for example, and put on the same data center, which is,

09:12.840 --> 09:21.240
yeah, the best way possible to store your results in a very efficient manner.
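The split-process-stitch idea described here can be sketched in a few lines. This is a toy, single-machine illustration of the principle only, not openEO's actual implementation (which distributes the tiles over many workers): cut a raster into tiles, process each tile independently, and stitch the results back together.

```python
import numpy as np

def process_tile(tile):
    # Stand-in for a real per-tile computation (e.g. an index calculation).
    return tile * 2

def run_tiled(data, tile_size):
    """Cut `data` into tiles, process each independently, stitch results back."""
    out = np.empty_like(data)
    for i in range(0, data.shape[0], tile_size):
        for j in range(0, data.shape[1], tile_size):
            tile = data[i:i + tile_size, j:j + tile_size]
            # In a distributed setting, each of these calls would run on a
            # different worker, in parallel, next to the data.
            out[i:i + tile_size, j:j + tile_size] = process_tile(tile)
    return out
```

Because every tile is independent, the loop parallelizes trivially; the stitched output can then be written once, e.g. as a cloud-optimized GeoTIFF in a STAC catalog.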

09:23.240 --> 09:30.200
So we have a few examples where openEO is being used. Here on the left we have a calculation

09:30.200 --> 09:36.520
done on the whole of Africa, which is a very large calculation, at continental scale, and this one

09:36.520 --> 09:42.280
took only 25 hours, because of the shortcuts we can take by doing the calculations in the same data center.

09:43.240 --> 09:48.920
We have another, larger example here: we have WorldCereal, which is calculated for the whole

09:48.920 --> 09:57.160
world. To begin with, it's calculated at 10 meter resolution, and here it's then downsampled

09:57.160 --> 10:06.360
to 0.004 degrees, which is still very large, and it gives an insight into how much cereal is growing

10:06.440 --> 10:13.000
everywhere around the world. That can be very interesting for food security, among many other things.
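To put that 0.004-degree grid in perspective, here is a back-of-the-envelope conversion (my own arithmetic, not a figure from the talk), using the approximation that one degree of latitude spans about 111 km:

```python
# Approximate cell size of a 0.004-degree grid, at the equator.
deg = 0.004
km_per_degree = 111.32            # approximate length of one degree of latitude
cell_size_m = deg * km_per_degree * 1000
print(round(cell_size_m))         # -> 445 (metres per cell, vs. the native 10 m)
```

So even "downsampled", each cell still aggregates tens of thousands of native 10 m pixels.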

10:13.000 --> 10:19.240
Those calculations are very, very heavy; they sometimes take thousands of euros just in electricity

10:19.240 --> 10:27.240
to calculate, and can be done very efficiently with openEO. We also have annual median composites

10:27.240 --> 10:34.040
of Sentinel-1. Sentinel-1 is a radar satellite of Copernicus, and sometimes it's handy to already have

10:34.120 --> 10:42.280
a pre-made median of a whole time series; in this case it is a median over a year,

10:43.160 --> 10:47.800
the annual median, and then it's easy to continue working on this data. So this is a

10:48.360 --> 10:55.160
calculation that openEO does, and the result of it is also very easily usable, in a STAC catalog.

10:55.720 --> 11:06.680
A few more points to cover: also very important about openEO is that

11:06.680 --> 11:15.080
it's an open standard and it helps with code reusability. We'll cover UDPs later,

11:15.080 --> 11:21.160
the user-defined processes: a way to easily share processes that you already made, like algorithms;

11:21.160 --> 11:27.800
you package them in a way that makes them easily usable for other users to work with.
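Publishing such a user-defined process with the openEO Python client looks roughly like this. This is a sketch: the tiny "add one" process graph is a placeholder of my own, and actually running it requires the `openeo` package and an authenticated connection to a backend.

```python
def publish_udp(connection):
    """Store a process graph under a name so it can be reused and shared.

    `connection` is an authenticated openeo.Connection (pip install openeo).
    The process graph below (add 1 to an input value) is a minimal placeholder.
    """
    process_graph = {
        "add1": {
            "process_id": "add",
            "arguments": {"x": {"from_parameter": "value"}, "y": 1},
            "result": True,
        }
    }
    # Registers the graph on the backend under the id "add_one"; afterwards
    # it can be invoked like any built-in process.
    connection.save_user_defined_process(
        user_defined_process_id="add_one",
        process_graph=process_graph,
    )
```

Once stored, the UDP is just a name plus a process graph, which is exactly what makes it easy to hand to colleagues or reuse on another area.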

11:28.520 --> 11:35.720
Also, what is very important is reproducibility, because if you follow the open source and the

11:35.720 --> 11:42.600
openEO standard, the way you need to program makes it easier to ensure that someone else can

11:42.600 --> 11:50.040
rerun it too. Because if someone is running calculations on their own laptop, they might be using

11:50.040 --> 11:55.480
Docker and good build files and everything, but if you run on your own laptop there is often an issue

11:55.480 --> 12:02.200
with reproducibility: it might not work on someone else's laptop. So one of the advantages you have

12:02.200 --> 12:08.200
from running in a data center is that it's not your own laptop, so if it works there, someone else can

12:08.200 --> 12:15.160
also run it on the same data center. And we also have good machine learning support; a lot of

12:15.320 --> 12:24.040
classifications need machine learning, and we're quite smooth with that. So, openEO is this open

12:24.040 --> 12:29.560
standard I just talked about; the example was on CDSE, the Copernicus Data Space Ecosystem:

12:30.760 --> 12:35.480
we are running that on a very large data center. But before we were running there, we also had openEO

12:35.480 --> 12:44.600
running on Terrascope, which is in Belgium. It uses more Belgium-related data, and also,

12:44.600 --> 12:51.560
we cache some European satellite imagery on Terrascope, for the scope of Belgium and

12:51.560 --> 12:56.040
Europe but not the whole world. So it's very easy for Belgian scientists to do calculations

12:56.040 --> 13:03.800
about agriculture and so on, on Terrascope. And then we also have openEO Platform, which is

13:03.800 --> 13:10.440
kind of a way to distribute tasks to the relevant places. It might be that you connect

13:10.520 --> 13:16.920
through openEO to openEO Platform, and then you can load two data sets, for example a Sentinel-1

13:16.920 --> 13:23.080
and a Sentinel-1 median data set, and it might be delegating to two different backends

13:23.720 --> 13:29.160
seamlessly, so that you don't even need to realize where your data is living; it will do it

13:29.160 --> 13:38.280
effectively for you. And there are a few other examples where we have a wrapper around openEO to access

13:38.920 --> 13:45.640
different platforms: Sentinel Hub — we are moving away from that for the moment — but there are many other

13:46.680 --> 13:55.480
connectors. So yeah, how does it work? So far this was more of a high-level view of openEO,

13:55.480 --> 14:01.080
but now more technically: how do you work with it? We often use the concept of data cubes.

14:01.080 --> 14:07.240
Data cubes are n-dimensional arrays, and they often have dimensions like the

14:07.320 --> 14:12.680
x dimension, the y dimension, and, very important, also the bands, like red, green, blue, but also

14:12.680 --> 14:19.240
infrared and others — there are many different kinds of bands — and there is also the time dimension.

14:21.160 --> 14:26.520
So it makes a lot of data, and it's a lot of different dimensions. Sometimes you don't have a

14:26.520 --> 14:31.000
time dimension, sometimes you do; sometimes you have a band dimension, sometimes it's single-band.

14:31.800 --> 14:37.080
So there are many different kinds. What we provide is an abstraction layer between the data

14:37.080 --> 14:43.160
stored on disk and the data you work with; we load it as efficiently as possible.
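As a mental model of such a data cube (my own illustration, with plain numpy so it runs anywhere; in practice openEO backends use labelled arrays such as xarray): an n-dimensional array with time, band and spatial dimensions, on which band selection and time reduction are just axis operations.

```python
import numpy as np

# A toy data cube: 4 timestamps, 3 bands (red, green, nir),
# and a 100 x 100 pixel spatial extent -> dimensions (t, bands, y, x).
cube = np.zeros((4, 3, 100, 100), dtype=np.float32)

bands = ["red", "green", "nir"]
nir = cube[:, bands.index("nir")]    # select one band  -> shape (4, 100, 100)
composite = cube.mean(axis=0)        # reduce time axis -> shape (3, 100, 100)
```

The openEO processes (`filter_bands`, `reduce_dimension`, ...) express the same operations, but on a cube whose storage and chunking the backend manages for you.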

14:44.200 --> 14:49.320
The data on disk might be in a STAC catalog, it might use OpenSearch, it might be in GeoTIFFs, in NetCDF,

14:50.680 --> 14:55.560
in JPEG 2000; it might be in any kind of format. Our users don't need to worry about it; they just get

14:56.520 --> 15:03.320
a data cube full of pixels. And how do we work with those data cubes? Here on the right we see an

15:03.320 --> 15:09.400
example of a processing pipeline, or a so-called process graph, of openEO, where it will be loading

15:09.400 --> 15:17.320
Sentinel-2 data, it will be loading an external STAC catalog, and then it will work on those dimensions,

15:18.040 --> 15:24.680
call some processes on it, take the averages, and then at the end it can save the results; here it's

15:24.760 --> 15:32.680
chosen to save the results as a CSV. But those process graphs can get really complex if someone

15:32.680 --> 15:37.000
wants to use machine learning, preprocess the data, clean things up, remove clouds,

15:38.120 --> 15:45.640
and so on. And we have a very nice tutorial, written in small steps, and

15:45.640 --> 15:50.600
with plenty of pictures like this to show all the different kinds of operations.
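For a feel of what such a process graph looks like on the wire, here is a minimal hand-written example (not the one on the slide; the collection id is the CDSE Sentinel-2 collection, and the extents are made up): load a collection, reduce the time dimension with a mean, and save the result.

```json
{
  "load": {
    "process_id": "load_collection",
    "arguments": {
      "id": "SENTINEL2_L2A",
      "spatial_extent": {"west": 4.3, "south": 50.8, "east": 4.4, "north": 50.9},
      "temporal_extent": ["2024-01-01", "2024-12-31"],
      "bands": ["B04", "B08"]
    }
  },
  "mean_time": {
    "process_id": "reduce_dimension",
    "arguments": {
      "data": {"from_node": "load"},
      "dimension": "t",
      "reducer": {
        "process_graph": {
          "mean1": {
            "process_id": "mean",
            "arguments": {"data": {"from_parameter": "data"}},
            "result": true
          }
        }
      }
    }
  },
  "save": {
    "process_id": "save_result",
    "arguments": {"data": {"from_node": "mean_time"}, "format": "GTiff"},
    "result": true
  }
}
```

Client libraries build this JSON for you; the standardized graph is what any openEO backend can accept and execute.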

15:51.000 --> 15:59.400
But a little bit more about the core: what does the code look like? We often use Python to interact

15:59.960 --> 16:06.040
with openEO, but you can also use R or Julia; in the examples we're going to use Python.

16:06.760 --> 16:11.640
So first of all you need to connect to the backend. So here it is openEO Data Space

16:11.800 --> 16:17.960
Copernicus: openEO running on a Copernicus server somewhere, and so you connect to it.

16:20.040 --> 16:24.440
Then you can load the data. Here we're going to work with Sentinel-1, the radar

16:24.440 --> 16:30.040
satellite. We use a spatial extent, a temporal extent, and the bands to specify what we want to load.

16:31.720 --> 16:36.760
Then we do some computations with it; for example, here we just want to have the first timestamp

16:36.760 --> 16:40.360
of the data, and with this short script it's a very simple loading operation.
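Put together, the connect-and-load step described here looks roughly like this with the openEO Python client. This is a sketch: running it needs the `openeo` package, a (free) CDSE account and an interactive login, and the extents and time window below are illustrative values of my own.

```python
def load_first_s1_timestamp():
    import openeo  # openEO Python client: pip install openeo

    # Connect to the openEO backend of the Copernicus Data Space Ecosystem
    # and log in (opens a browser window for authentication).
    connection = openeo.connect("openeo.dataspace.copernicus.eu").authenticate_oidc()

    # Load Sentinel-1 backscatter for a small area and time window.
    cube = connection.load_collection(
        "SENTINEL1_GRD",
        spatial_extent={"west": 4.3, "south": 50.8, "east": 4.4, "north": 50.9},
        temporal_extent=["2024-01-01", "2024-01-31"],
        bands=["VV", "VH"],
    )

    # Keep only the first timestamp and download the result as NetCDF.
    first = cube.reduce_dimension(dimension="t", reducer="first")
    first.download("first_timestamp.nc")
```

Note that nothing is computed locally: the calls only build a process graph, and the backend executes it next to the data when `download` is called.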

16:41.320 --> 16:46.680
But we can also do way more complex things with the data. We can also run some Python code on it,

16:47.400 --> 16:50.520
to process it, to do machine learning, and then openEO

16:51.480 --> 16:56.040
parallelizes the Python automatically. So the Python script will get as input an xarray,

16:57.000 --> 17:01.720
and as output you have to also return an xarray. But this little snippet of code will be run

17:02.280 --> 17:07.160
on a hundred computers at the same time, giving very efficient processing.
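The mechanism behind this is the UDF (user-defined function): the backend hands each chunk of the cube to your function and expects a transformed chunk back. Here is the idea sketched with a plain numpy array so it runs standalone; in a real openEO UDF the function is called `apply_datacube` and receives/returns an `openeo.udf.XarrayDataCube` wrapping an xarray, and the scaling factor below is just an example of mine.

```python
import numpy as np

def apply_chunk(chunk: np.ndarray) -> np.ndarray:
    """Toy per-chunk function: in openEO this would be the body of a UDF.

    Example transformation: rescale digital numbers (0..10000) to reflectance.
    """
    return chunk / 10000.0

# The backend calls this once per chunk, on many workers in parallel;
# the user only writes the per-chunk logic.
chunk = np.array([[5000, 10000], [2500, 0]])
scaled = apply_chunk(chunk)  # values now in the 0..1 range
```

Because the function is pure chunk-in/chunk-out, the backend is free to cut the cube however it likes and fan the pieces out over the cluster.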

17:08.680 --> 17:12.840
And then, when you're done with specifying all your processing, you're going to actually

17:12.840 --> 17:18.600
execute the task, and then the calculations will start. Here we also specify the output file; this

17:18.600 --> 17:27.480
is an .nc file, so a NetCDF file, but you can specify CSV instead, or whatever is

17:27.480 --> 17:36.360
relevant. So, that was more of the code examples. Now more of the technical stack: what does it look like?

17:36.360 --> 17:43.080
So, openEO: openEO can load different STAC catalogs, amongst other kinds of data. Those

17:43.080 --> 17:48.600
STAC catalogs are often stored in S3 buckets; S3 buckets are kind of like FTP servers,

17:48.600 --> 17:53.880
but, like, the modern, more performant version. They often use cloud-optimized GeoTIFFs,

17:53.880 --> 18:00.520
but might also be using many other kinds of formats; that's just an example. openEO will then also

18:01.560 --> 18:06.920
provide a set of standard processes that you run on the data; you can run some Python on the data,

18:06.920 --> 18:12.760
and then openEO will cut it into pieces and run it on Spark to have it working distributed. This

18:12.760 --> 18:20.040
Spark is often running on Kubernetes, which will distribute all the compute, spinning up new servers

18:20.920 --> 18:27.800
that will be running your commands and retrying if something fails. If the STAC catalog was offline

18:27.800 --> 18:35.320
for a minute, then Spark will retry the task until it works, so that gives some stability.
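The retrying itself is Spark's, but the principle is simple; a generic sketch (my own illustration, not openEO code) of retry-until-success with a bounded number of attempts:

```python
import time

def retry(task, attempts=3, delay=0.0):
    """Run `task`; on failure, retry up to `attempts` times before giving up."""
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: surface the error
            time.sleep(delay)  # e.g. wait for a STAC catalog to come back online

calls = {"n": 0}
def flaky():
    """Fails twice (simulating a catalog that is briefly offline), then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("catalog temporarily offline")
    return "ok"

result = retry(flaky)  # succeeds on the third attempt
```

In practice Spark does this per task, so a transient outage only costs the affected tiles a retry, not the whole job.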

18:36.680 --> 18:42.520
So voilà. We want the user to only experience the implementation; the deployment, the maintenance,

18:42.520 --> 18:48.920
the scaling: they should not have to be aware of any of it. If there is a server going down, we will make sure

18:48.920 --> 18:55.400
to retry or use another server to work on. If there is a STAC catalog changing the way they structure

18:55.400 --> 19:00.920
their data, we will adapt the way we load the STAC catalog and fix bugs, so that the user can

19:00.920 --> 19:08.120
still use any STAC catalog they want, which we often need to do. There are a lot of

19:08.920 --> 19:13.960
collections in this world, a lot of satellites that provide information, and new ones are

19:13.960 --> 19:20.760
getting launched; Sentinel-3 has some new ones, Sentinel-2, Sentinel-1, they keep on being deployed,

19:20.760 --> 19:27.800
and we keep on supporting them all for a seamless experience. And then we have an example

19:28.520 --> 19:35.000
with even more code, also connecting to the same backend, if it will play. Awesome.

19:35.800 --> 19:44.360
So here there is a Jupyter notebook, and this Jupyter notebook is also connecting

19:45.400 --> 19:50.360
to the backend. It loads a bunch of collections; you can see a little piece of Python that will be

19:50.360 --> 19:56.040
running on the data. The job has been running for half an hour; you can see the larger process graph.

19:58.600 --> 20:04.360
And then there is a "get results". This is all the data sources being used to do this processing,

20:04.360 --> 20:12.840
so it's already quite a lot. And then at the end we have the calculation. So here you see the,

20:14.120 --> 20:20.200
yeah, heat waves in Europe. And it's also very nice that we could compute this in like half an

20:20.200 --> 20:27.320
hour at a continental scale; always nice. Now, you also have some visualization tools baked in:

20:27.320 --> 20:32.200
if you have your data cube of results and you want to show it, we have some nice

20:32.440 --> 20:37.400
usability features that work pretty well with Jupyter notebooks.

20:40.600 --> 20:46.040
Let's see if the clicker works on this one. All right, so a moment ago we were

20:46.040 --> 20:51.560
talking about making the processes more reusable. Sometimes someone writes a

20:51.560 --> 20:56.680
very nice algorithm to detect if there are potatoes growing somewhere, or to detect whether

20:56.680 --> 21:02.200
forests have burned or not, and those algorithms take some time to make, and it's nice to share

21:02.200 --> 21:09.320
them afterwards. So there is a place for that: the Algorithm Plaza, and that has a bunch of calculations,

21:09.320 --> 21:17.320
to calculate, for example, crop cover somewhere, or, yeah, the NDVI, a very big classic in

21:17.320 --> 21:23.800
Earth observation. And all those algorithms are also usable by other users, and so what they can do

21:23.880 --> 21:30.120
is specify the spatial and temporal extent and do a quick calculation on the area they want.

21:30.120 --> 21:37.640
If they want to see, like, how are the forests doing next to my house, you can take the NDVI, for example,

21:37.640 --> 21:43.640
and calculate it next to your house, because not everything is already calculated world-wide,

21:43.640 --> 21:51.160
because it's quite heavy; you can do some calculation on demand. And we have the same also —
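The NDVI mentioned here is the normalized difference vegetation index, (NIR − red) / (NIR + red). Sketched with plain numpy so it runs standalone (with the openEO client you would do the same band math on the cube, or use the built-in `ndvi` process):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index.

    Values near +1 indicate dense vegetation; values below 0 typically water.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

values = ndvi([0.6], [0.1])  # ~0.71: healthy vegetation
```

Applied per pixel over a Sentinel-2 cube (NIR = band B08, red = B04), this one-liner is exactly the kind of algorithm the Algorithm Plaza lets you run on demand over your own extent.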

21:51.160 --> 21:57.160
oh, maybe a nice example: the WorldCereal temporary crop extent is calculated everywhere that,

21:57.160 --> 22:04.600
yeah, cereal is being produced. It's at a very high resolution and it's pretty nice; you can

22:04.600 --> 22:08.840
also run it on Brussels, and then you see that there is a little bit of agriculture still happening

22:08.840 --> 22:19.560
in Brussels, for example; you can see it on those maps. We also have a place where we can upload

22:19.560 --> 22:27.720
algorithms that are more production-ready. So this is APEx; it's also a catalog of algorithms,

22:28.760 --> 22:33.640
and those algorithms are more ones that we developed and are, like, proud of and want to

22:33.640 --> 22:39.000
share with the world, but also from some other projects: when they have put a lot of effort into their

22:39.000 --> 22:45.160
algorithms, they can put them on here. You also have the advantage that we will be monitoring them,

22:45.240 --> 22:51.000
and if something breaks, if a STAC catalog goes offline, if our code changes some

22:51.000 --> 22:58.440
subtleties about how processes are being run — we have a bunch of tests running on them all the time,

22:59.080 --> 23:04.440
and if something goes wrong or something changes, we will catch it and we will fix it for you,

23:04.440 --> 23:10.680
or notify you, like: hey, you're relying on an external STAC catalog and they kind of put it offline,

23:10.760 --> 23:16.760
so it will not be working anymore. Whatever we can do, we try, to make this work as smoothly as possible.

23:20.040 --> 23:27.400
So, we work a lot on openEO at VITO, the Flemish institute for technological research,

23:28.200 --> 23:36.040
but there are many people also developing on openEO; we have, yeah, contributors from all around the world,

23:36.680 --> 23:41.400
but also some people implemented their own backend for openEO. So we have, for example,

23:41.400 --> 23:48.280
openeocraft; they like to use R a little bit more, and they have an R backend.

23:49.160 --> 23:54.280
So that's awesome; we love it when other people also implement the same standard. And then,

23:54.280 --> 24:01.560
recently, Development Seed made a backend that's also working with TiTiler — I don't know if

24:02.120 --> 24:08.280
I say it correctly — but it's very performant. They also recently showed a demo where you could

24:09.160 --> 24:15.320
change your process graph, the calculations you want to do, and in real time it would update

24:15.880 --> 24:22.120
the output of it in your browser window. So they also use cutting-edge technologies like

24:22.120 --> 24:27.640
WebAssembly, I heard, and they calculate only at the zoom level in your

24:27.720 --> 24:32.680
browser: if you're zoomed out quite a lot, they won't be calculating every single pixel, they just

24:33.800 --> 24:39.800
calculate the zoomed-out pixels. A very nice implementation, but it's still in progress.

24:42.360 --> 24:50.920
Another piece of connectivity is with open source QGIS, a very well-known platform: we have a QGIS

24:51.000 --> 24:59.160
plugin, and herein you can see and explore some of the layers that are available from

24:59.160 --> 25:05.240
openEO. So you can zoom around in QGIS, look around, and then it will download the layer for

25:05.240 --> 25:12.040
the location you are looking at; a nice way to explore. You can also do this online,

25:12.040 --> 25:18.040
but if you have, like, complicated things in QGIS with GeoJSONs and some GeoTIFFs that you're

25:18.040 --> 25:23.240
visualizing, it's nice to also be able to visualize a layer in the same setting.

25:26.120 --> 25:32.600
About the open source part: we are available on GitHub; you can deploy your own

25:32.600 --> 25:39.640
openEO backend somewhere; everything is available. We have like 68 contributors, which is nice.

25:39.640 --> 25:48.520
Yeah, I invite you to take a look and see if there is something you can work with.

25:51.400 --> 25:58.600
And yeah, we also have a forum, so if you have any questions, we are kind of obliged to

25:58.600 --> 26:03.640
answer them as quickly as we can, and we love having questions because it shows the usage

26:03.640 --> 26:10.920
and it shows what the users are working on. So you can search for the forum; I didn't put

26:10.920 --> 26:16.280
the URL here, but if you search for "openEO forum" you will get to it quite easily,

26:17.800 --> 26:23.640
and it's fairly active. So, voilà: openEO is built by the community; what will you build with

26:23.640 --> 26:53.560
openEO? And, questions? — Are there any constraints on how much data we can use? I mean, you said

26:53.560 --> 27:00.920
this uses a lot of resources; do you care, or can you just knock yourself out? — Yeah, there are

27:00.920 --> 27:09.400
some limits. So when you start processing you get 1,000 credits, and when you're computing

27:09.400 --> 27:15.080
you start using them up; you get 1,000 credits a month, and for personal use you don't

27:15.080 --> 27:20.840
hit this limit very fast. But if you want to process the whole of Europe, you will get to this

27:20.840 --> 27:27.800
limit, and then it's a paid service. If you have a project and you need to process the whole

27:27.800 --> 27:36.280
world and it will be very heavy, you can contact the Network of Resources: the European Space

27:36.280 --> 27:42.520
Agency has money they want to invest in such projects, and you can apply for it, and then

27:42.600 --> 27:54.200
you can do your whole project with it. So there are some limits, indeed. — So, you

27:54.200 --> 28:01.160
listed two implementations of the standard, like yours and the one by Development Seed; are there any other

28:01.160 --> 28:07.080
implementations of openEO out there, other implementations? — Yeah, there are plenty of them.

28:07.400 --> 28:13.160
openEO is sometimes wrapped around other Earth observation platforms that already exist: they

28:13.880 --> 28:18.440
provide kind of a wrapper so that you can speak to their backends in the openEO way. So then,

28:19.640 --> 28:28.360
yeah, the aim is that one standard could fit them all. So yeah, there was one for Sentinel

28:29.320 --> 28:39.400
Hub and for Google Earth Engine also. Yeah, at the moment we

28:39.400 --> 28:44.680
operationally support the Copernicus Data Space Ecosystem and Terrascope, and on openEO

28:44.680 --> 28:49.880
Platform with Google Earth Engine, as I mentioned, we can access the data sets that are there, but we cannot

28:49.880 --> 28:57.720
do any processing on them for licensing reasons. Yeah, but if you have your own data

28:57.720 --> 29:03.160
set and you deploy it as STAC, then you can use it directly in any of the backends; that's possible.

29:03.800 --> 29:12.520
But also, Dask, a tool that works with xarray in a distributed way, also has an

29:12.520 --> 29:21.000
openEO implementation. — The first time I saw openEO, it made me think of JEODPP from the

29:21.000 --> 29:25.880
JRC; I don't know if this is something you've heard of, and I was wondering whether they were

29:25.960 --> 29:31.160
sort of converging and trying to adopt at least the openEO spec for their own services; is there

29:31.960 --> 29:39.800
work being done there? — I'm not sure. I think that, yeah, indeed, there is some similar

29:39.800 --> 29:47.000
API for Earth observation processing, but, yeah, openEO and that one are not related; this one is completely

29:47.000 --> 29:52.520
different and completely open source, and is part of the CDSE ecosystem but also has ESA

29:52.600 --> 29:59.080
promotion. The JRC one is slightly different from this one, but in concept it's somewhat similar.

30:10.520 --> 30:16.840
— The examples you showed were mostly, like, the typical raster examples, Sentinel;

30:16.840 --> 30:24.520
are there also capabilities to work with vector geometries in the platform? — Yeah, it's also possible

30:24.520 --> 30:33.000
to load vector data, but often the vector data is used for masking, for example; or, if you

30:33.000 --> 30:40.440
want to calculate the NDVI over different fields and you want just one value per field, you can aggregate

30:40.440 --> 30:50.760
the raster data over them. We also have some tooling to convert raster images to vector data, so, yeah, we do have

30:50.760 --> 31:06.360
a little vector support also. — Hi, thanks for the presentation. My question is: does the Python API

31:06.360 --> 31:12.600
allow client applications to exert some sort of control in terms of the scaling, the

31:12.600 --> 31:19.240
computational usage behind the API? And if not, does it provide feedback in terms of what was used

31:19.240 --> 31:25.720
in each execution? Thank you. — Very good question. The technical question is: if you run

31:25.720 --> 31:31.400
Python code in openEO, is there any way to specify how it will be scaled? We do have some tooling for

31:31.400 --> 31:37.400
it, because if we want to run user code, it might be that the user code is very heavy or very light;

31:37.400 --> 31:43.480
we cannot read the code and know. So we provide some tooling for the user to specify how much

31:43.480 --> 31:51.480
RAM will be used in the processing, and then it can allocate computers with enough RAM to

31:51.480 --> 31:57.000
make sure it works; it's a bit more expensive, but if you need it. And then also the user can

31:57.000 --> 32:06.120
specify into how small tiles it will cut the data; smaller tiles allow for doing

32:06.120 --> 32:11.320
more compute per tile. If you have an expensive machine learning model, you might want to work

32:11.320 --> 32:19.080
on tiles of 128 by 128; if you're not doing any very heavy processing, you might work with tiles

32:19.080 --> 32:22.080
that are 512 by 512.
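Those knobs are passed as backend-specific job options when starting a batch job with the openEO Python client. A sketch: option names like `executor-memory` are used by the VITO/CDSE backends, but they are backend-specific and illustrative here, so check your backend's documentation.

```python
def start_heavy_job(cube):
    """Start a batch job with more memory per worker.

    `cube` is an openeo DataCube that has already been defined; the job
    options shown are backend-specific and illustrative, not standardized.
    """
    job = cube.execute_batch(
        out_format="netCDF",
        job_options={
            "executor-memory": "4G",  # RAM per worker process
            "executor-cores": "2",    # CPU cores per worker
        },
    )
    return job
```

On resource feedback: finished batch jobs on these backends also report usage (e.g. credits and CPU/memory consumed) in their job metadata, which answers the second half of the question.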

32:23.040 --> 32:27.080
Do we have one more question?

