WEBVTT

00:00.000 --> 00:10.000
So, this thing works, but just for the lack of time, I don't know if I can do it, but I can't do it.

00:20.000 --> 00:36.000
So, this thing works, but just for the lack of time, so I should speak loud.

00:36.000 --> 00:38.000
I don't know if I can do it.

01:32.000 --> 01:40.000
So, hi everyone, welcome, thank you for being here.

01:40.000 --> 01:56.000
This is my first presentation, I hope not to mess it up.

01:56.000 --> 02:02.000
The reason I wanted to give it is because... sorry.

02:02.000 --> 02:16.000
So, the reason I wanted to give this presentation is that, first of all, I wanted to present Mirror Hall, which is this application to use other Linux devices as kind of peer-to-peer extended monitors over the network.

02:16.000 --> 02:20.000
So you, like, use them as virtual monitors.

02:20.000 --> 02:36.000
And the reason I wanted to present it was because, I think, of basically how it was built, in this kind of layered way, and it's faster to go a bit deeper into it and explain how to basically remake your own virtual display sharing solution at home.

02:36.000 --> 02:48.000
So, I just wanted to start with a bit of history, saying that in the '90s, personal computers became more popular and people started promoting mobile devices and mobile development.

02:48.000 --> 03:02.000
One of the core ideas at Xerox, which was the research institute, and then Apple got the ideas from them, kind of a long story, was to get those devices to seamlessly interact with each other.

03:02.000 --> 03:10.000
Basically, you had a portable device that could be connected to a larger one to use the larger one as a monitor, all that kind of stuff.

03:10.000 --> 03:16.000
Why did that never happen? Why are devices still so hard to interface with each other?

03:16.000 --> 03:31.000
That is because the industry never favored standards for peer-to-peer communication, for open communication between proprietary platforms; everyone went for their own personal implementation, in a way.

03:31.000 --> 03:39.000
So, convergence and peer-to-peer sharing and wireless desktop sharing never really became a thing.

03:39.000 --> 03:51.000
So, to be fair, we have a lot of solutions that work really well for simple wireless desktop mirroring on Linux, such as Moonlight or Sunshine, which are mostly meant for games.

03:51.000 --> 03:53.000
I never used them, I just know they are good.

03:53.000 --> 03:59.000
And GNOME Network Displays, which I did use, and it is pretty good for, like, Miracast and Chromecast.

03:59.000 --> 04:03.000
But we don't really have anything for more interesting kinds of mirroring.

04:03.000 --> 04:08.000
So, first of all, I think we are familiar with how mirroring works under the hood, so it is basically...

04:08.000 --> 04:14.000
I have my screen here, this is a display buffer. We record that screen, we reuse the same buffer.

04:14.000 --> 04:18.000
We encode it, we transmit it over the network, we play somewhere else.

04:19.000 --> 04:27.000
But what if we instead wanted to not mirror the same screen, but mirror another display buffer?

04:27.000 --> 04:38.000
Then in that case, we basically have to interact with the backend, and collaborate with the backend, to somehow create a virtual display, a virtual buffer,

04:38.000 --> 04:42.000
and then that is the one we aim to record, the one we aim to stream.

04:42.000 --> 04:47.000
And that is something that is clearly not standard.

04:47.000 --> 04:53.000
So, one proprietary solution that kind of does that is Apple's Sidecar, which I also never used,

04:53.000 --> 05:00.000
but it's kind of: you extend your Mac screen, you use your iPad as a second screen, some iPads,

05:00.000 --> 05:03.000
because you need expensive ones.

05:03.000 --> 05:15.000
But doing this on Linux using existing solutions wouldn't have worked, because, for example, all those protocols are proprietary, so they are not easy to implement,

05:15.000 --> 05:23.000
and they are also pretty high latency, because most of them use TCP, and most of the implementations also, on their receiving end,

05:23.000 --> 05:30.000
like on your TV, software decode, so there is a lot of latency, because it is more important to get accurate video,

05:30.000 --> 05:35.000
normally than fast video, and what I wanted was kind of the opposite.

05:35.000 --> 05:43.000
And also, it's really easy to stream, in a sense: you record the screen, you stream it. But it's really hard to then become a receiver for that stream,

05:43.000 --> 05:53.000
currently, because, for example, to do it with Miracast you need very strange patches, you need to stop Wi-Fi, to set up an ad-hoc network... very, very complicated.

05:53.000 --> 06:05.000
So, starting from the virtual mirroring we saw before, I was wondering, like, what happens when you scale it up like what happens when you have different monitors, different devices?

06:05.000 --> 06:11.000
Can you still do some kind of virtual screen buffers? That really depends.

06:11.000 --> 06:20.000
It depends on whether we have available video buffers, whether we have available hardware encoding buffers, because, for example, when you use your hardware encoder for video,

06:20.000 --> 06:29.000
usually you can't really push more than one stream at once, so you have to do, like, software encoding for some, and hardware encoding for one.

06:29.000 --> 06:48.000
And what if we go further and make it, like, bidirectional, so that we can turn every device into both a receiver and a streamer, so that I can just decide? And that is basically what I wanted to do for a long while.

06:48.000 --> 07:04.000
So, in 2020, GNOME introduced a really cool headless native backend and a virtual monitors API, which is basically: you ask Mutter to create a virtual monitor on Wayland.

07:04.000 --> 07:09.000
It does it without any kind of hacks, any kind of stuff.

07:09.000 --> 07:26.000
So I made my first prototype in 2022 using those APIs and a bunch of Python stuff; back then I was using my Librem 5 as a wireless screen, a wireless extension, just to keep track of stuff.

07:26.000 --> 07:36.000
And then I made the first setup in November 2023, and the first release just a month ago.

07:36.000 --> 07:43.000
So, basically, Mirror Hall looks like this right now.

07:43.000 --> 07:54.000
I have to be really honest with you: I broke my test tablet this morning, so I have an alternative one, which I hope works as well.

07:54.000 --> 08:23.000
But basically, this is how it looks; I'll open it here as well. It uses mDNS internally, the same protocol as, like, DLNA I think, Chromecast for sure. So right now I basically turn this device into a mirror, which is kind of not very practical, to be honest. But, oh god, I hope this is set up correctly, because... no, there's no network.

08:23.000 --> 08:47.000
Okay, never mind. So this device apparently doesn't have a working network, but it had one 30 minutes ago when I flashed it. So anyway, okay, let's do a very boring thing then.

08:47.000 --> 08:57.000
Let's spin up another instance; I hope this works a bit better.

08:57.000 --> 09:07.000
And in theory, if I do this, we should be able to get something out of it.

09:07.000 --> 09:15.000
Okay, this is not really clear what's happening, because that's the same device, I'm really sorry, okay, this was not really,

09:15.000 --> 09:36.000
but basically what we have here is that we can kind of... kind of... yeah, okay, you see what's happening: it's literally mirroring itself in a loop, which is not very meaningful. I wanted to do it better, but I'm sorry.

09:36.000 --> 10:00.000
So you didn't see shit, thank you, Mutter; those are very, very experimental APIs for a reason. Can I, can I mirror... oh god, there is a mirror mode, but it's not really... yeah, okay, this is the only way I can do it, apparently.

10:00.000 --> 10:14.000
So you didn't see anything, basically. Okay, yeah, this was the idea: you can basically switch between them. And now it went somewhere,

10:14.000 --> 10:22.000
I mean, what, what the actual, okay, like this is not, what the fuck.

10:22.000 --> 10:34.000
Like, this is... right, okay. I never tested this presentation using multiple screens while demoing the multiple-screen API, so maybe that wasn't smart. Okay, anyway.

10:34.000 --> 10:43.000
So the way this works is basically that we have Mirror Hall, which is this player-and-sender app, which is just GTK.

10:43.000 --> 10:50.000
And below we have three small libraries: there is libmirror, which detects the desktop environment that you're using

10:50.000 --> 10:57.000
and uses D-Bus to communicate with Mutter to negotiate a virtual monitor, to create it, and then get a PipeWire stream back.

10:57.000 --> 11:09.000
Then libcast basically has a tiny database of pipelines, so that it can use the hardware acceleration on your device to generate a fast encoding pipeline, and we use raw UDP packets.

11:09.000 --> 11:14.000
It's not RTSP, not any kind of wrapper protocol, because this is way faster, you know.

11:14.000 --> 11:24.000
And then it basically takes the stream that you just created and it feeds it to... can I? Yeah, yeah, okay, I can see it.

11:24.000 --> 11:38.000
And then we have libnetwork, which instead does stuff like mDNS, so, the thing where you can see the device in the list; when you plug it in, you also see it there. But the network interface is down.

11:38.000 --> 11:42.000
Basically, that would be the idea.
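The mDNS discovery mentioned here is plain DNS wire format, multicast to 224.0.0.251 on port 5353 (RFC 6762). As a rough illustration of what such a query looks like on the wire, here is a sketch; the `_mirrorhall._udp.local` service name is made up for the example, since the talk does not state the actual name Mirror Hall registers.

```python
import struct

def mdns_ptr_query(service: str) -> bytes:
    """Build the bytes of a one-question mDNS PTR query (plain DNS wire format)."""
    # DNS header: ID=0 (as mDNS uses), flags=0, QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in service.split(".")
    ) + b"\x00"
    # QTYPE=PTR (12), QCLASS=IN (1)
    return header + qname + struct.pack("!HH", 12, 1)

# Hypothetical service name; a real client would send this packet to
# 224.0.0.251:5353 and parse PTR/SRV/A answers to find peers.
packet = mdns_ptr_query("_mirrorhall._udp.local")
```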

11:42.000 --> 11:50.000
So that's, yeah, basically it: you do D-Bus to talk to Mutter, get back the stream through PipeWire.

11:50.000 --> 11:55.000
Then you encode the stream using a pipeline and you transmit it there.

11:55.000 --> 11:59.000
And the receiver part is a bit simpler because it's basically a video player.

11:59.000 --> 12:07.000
It's basically a network video player, you can also use RTSP as a fallback, which you will see in a second, thank you.

12:07.000 --> 12:19.000
And yeah, because we also thought, if you really don't care about latency, if two seconds of latency is enough for you, then RTSP can also work.

12:19.000 --> 12:33.000
And there is also a CLI, so that you can basically type a very long gst-launch command without having Mirror Hall installed, and use a device as a receiver over SSH without installing anything.

12:33.000 --> 12:42.000
Okay, so now for the more technical part, like, I wanted to go a bit deeper into how this worked.

12:42.000 --> 12:50.000
I don't want to talk about mDNS and that kind of stuff, but just about the fact that, first of all, this only supports Mutter, unfortunately.

12:50.000 --> 12:57.000
Because, as far as I know, there is no other standardized API, except maybe in Sway.

12:57.000 --> 13:15.000
And KDE 6, someone told me, has apparently introduced something similar, or is planning to. But in theory, once there are standard APIs, this library is really easy to extend, because it's basically just detecting your configuration and creating more D-Bus calls.

13:15.000 --> 13:26.000
So the way this works... and yeah, I'm not sure if we have a tiny bit of time, but you don't see my screen anymore, which is not great.

13:26.000 --> 13:36.000
It's basically here, we would have this, okay, because I can't see my screen.

13:36.000 --> 13:41.000
Can we make this larger?

13:41.000 --> 13:54.000
Okay, we have the Mutter ScreenCast API, and here there should be ScreenCast, so what we basically do, first of all, is we create a session.

13:54.000 --> 14:03.000
It will probably want some parameters, but I think it was an empty array here, because it's like the array of options you're passing in.

14:03.000 --> 14:10.000
So here we receive back the name of the object that we're using.

14:10.000 --> 14:27.000
Then we kind of go back to that object that we just received, which in this application is a bit tedious; we have the ScreenCast object again.

14:27.000 --> 14:38.000
And you see this newly made object, and here, under Session, we can do a RecordVirtual call.

14:38.000 --> 14:48.000
That will basically create a stream object.

14:48.000 --> 14:51.000
And then we could already start the stream.

14:51.000 --> 15:01.000
The thing is, if we do, we wouldn't intercept the event that tells us when the PipeWire stream appears.

15:01.000 --> 15:07.000
So to do that we basically have to.

15:07.000 --> 15:12.000
Okay.

15:12.000 --> 15:20.000
Intercept it using the bus monitor.

15:20.000 --> 15:28.000
And in theory now, if we go back to the session that we created, and we start it...

15:28.000 --> 15:36.000
we should receive a PipeWire stream here, and you also see this thing, this yellow thing, which basically means that we're casting the screen.

15:36.000 --> 15:39.000
Okay, you see now we received an event here.

15:39.000 --> 15:54.000
And if we use pipewiresrc, which is the corresponding component in GStreamer...

15:54.000 --> 16:04.000
So we saw just now that the event was, was it 115 something like that?

16:04.000 --> 16:06.000
105.

16:06.000 --> 16:11.000
In theory, here's our virtual display.

16:11.000 --> 16:17.000
So now we kind of see the loop-play thingy again.

16:17.000 --> 16:20.000
And yeah, that's basically the interesting part.

16:20.000 --> 16:23.000
So it is not too hard to get a virtual display out of Mutter.
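The demo sequence above boils down to a few D-Bus calls plus one signal. This sketch just builds the equivalent `gdbus` command lines without running them; the interface and method names follow the org.gnome.Mutter.ScreenCast API as I understand it, and argument details may vary between GNOME versions, so treat this as a sketch rather than a reference.

```python
# Mutter's (experimental) ScreenCast D-Bus sequence, written out as
# gdbus command lines that are assembled but never executed here.
DEST = "org.gnome.Mutter.ScreenCast"

def gdbus_call(object_path: str, method: str, *args: str) -> list[str]:
    """Assemble (but do not run) a `gdbus call --session` command line."""
    return ["gdbus", "call", "--session", "--dest", DEST,
            "--object-path", object_path, "--method", method, *args]

# 1. CreateSession({}) on the top-level object returns a session object path.
create = gdbus_call("/org/gnome/Mutter/ScreenCast", DEST + ".CreateSession", "{}")

# Example value only; the real path comes back from the call above.
session = "/org/gnome/Mutter/ScreenCast/Session/u1"

# 2. RecordVirtual({}) on the session returns a stream object path.
record = gdbus_call(session, DEST + ".Session.RecordVirtual", "{}")

# 3. Watch for the stream's PipeWireStreamAdded signal (it carries the
#    PipeWire node id) before calling Session.Start(); starting first
#    would miss the event, as noted in the talk.
monitor = ["gdbus", "monitor", "--session", "--dest", DEST]
start = gdbus_call(session, DEST + ".Session.Start")
```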

16:23.000 --> 16:30.000
And when you then also like, yeah, that's basically it.

16:30.000 --> 16:39.000
And once we have it, basically we create this GStreamer pipeline that we saw which, if we divide it into components: we have the source, which picks up the stream.

16:39.000 --> 16:42.000
And then we have queues for buffering.

16:42.000 --> 16:48.000
Then we use, in this case, x264, but again it's chosen per device; it's x264 on Intel

16:48.000 --> 16:56.000
right now, on this specific build, but basically, if you are using ARM, it will try to pick a better one.

16:56.000 --> 17:04.000
And then, after we encode it into the format, we package it with the RTP payloader.

17:04.000 --> 17:08.000
And then we send it, in this case using udpsink.
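Putting those components together, a sender pipeline along these lines could look like the following. This is a sketch, not Mirror Hall's actual pipeline database: the element choices (x264enc as the software encoder, standard RTP payloading) and the node id, host, and port values are illustrative assumptions.

```python
# Sketch of a sender pipeline as described: source, queue, encoder,
# RTP payloader, UDP sink. Mirror Hall picks encoders per device; the
# node id, host, and port values here are made up for illustration.
def sender_pipeline(node_id: int, host: str, port: int = 5000) -> str:
    elements = [
        f"pipewiresrc path={node_id}",   # the virtual display's PipeWire node
        "queue",                         # buffering between stages
        "videoconvert",
        "x264enc tune=zerolatency speed-preset=ultrafast",  # software H.264 fallback
        "rtph264pay",                    # packetize H.264 into RTP
        f"udpsink host={host} port={port}",  # raw UDP, no RTSP wrapper
    ]
    return " ! ".join(elements)

pipeline = sender_pipeline(105, "192.0.2.10")
```

The resulting string is in the same form you would paste after `gst-launch-1.0`, matching the CLI usage mentioned earlier.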

17:08.000 --> 17:16.000
And yeah, the pipeline that we just used for the demo is instead just simplified; it just uses... thank you...

17:16.000 --> 17:20.000
videoconvert and autovideosink, because that's easier.

17:20.000 --> 17:23.000
And the other way around is also very simple.

17:23.000 --> 17:35.000
So you can also do it from the CLI and you basically see the command to also replicate it outside the app, so it's just basically the same components, the opposite direction.

17:35.000 --> 17:39.000
udpsrc instead as the source, and so on.
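The receiving side, as described, is the same chain in the opposite direction, basically a network video player. Again a sketch: the caps string and payload number are common RTP/H.264 defaults, not values taken from the talk.

```python
# Sketch of the matching receiver pipeline: UDP in, depayload, decode, display.
def receiver_pipeline(port: int = 5000) -> str:
    caps = "application/x-rtp,media=video,encoding-name=H264,payload=96"
    elements = [
        f'udpsrc port={port} caps="{caps}"',
        "rtph264depay",     # unwrap RTP back into an H.264 stream
        "avdec_h264",       # decode (a hardware decoder could go here)
        "videoconvert",
        "autovideosink",    # let GStreamer pick the display sink
    ]
    return " ! ".join(elements)

recv = receiver_pipeline(5000)
```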

17:39.000 --> 17:49.000
So yeah, this is something I honestly already kind of talked about, so maybe it's not super important, but it's about the hardware configurations we use.

17:49.000 --> 17:58.000
There should be a priority for like better over worse encoders and decoders, which we don't do right now.

17:58.000 --> 18:10.000
And right now we're also capping, for example, the streaming quality, because we can't really negotiate it in real time like a really nice video protocol would; we're just kind of guessing the best configuration.

18:10.000 --> 18:19.000
And we are using some additional configuration parameters to minimize the amount of computational effort needed to encode each stream.

18:19.000 --> 18:29.000
So the stream is maybe slightly larger or, if necessary, slightly less precise, but also faster if you have many streams at once.

18:29.000 --> 18:38.000
And also right now, a very big limitation that is I think kind of annoying to most people is that you have to manually patch your firewall to use it,

18:38.000 --> 18:43.000
because you need to authorize a bunch of UDP ports.

18:43.000 --> 18:48.000
That is something that the Flatpak currently doesn't do.

18:48.000 --> 19:07.000
And another very big, very interesting limitation is that, as soon as I released 0.1.0, there was a regression in GNOME (because nobody uses this API yet) that made Mirror Hall crash Mutter, which was quite interesting.

19:07.000 --> 19:12.000
Like, I was like: okay, finally I have a release... and that was like one week after that.

19:12.000 --> 19:25.000
So I added that very scary warning, and I don't really recommend using it in production until we're sure it's stable, we're sure the API is well tested, and so on.

19:25.000 --> 19:32.000
So, for the next steps, I think what's important will be, first of all... I think it does many things at once.

19:32.000 --> 19:46.000
And it would be a bit more manageable if we split it up, because maybe many people don't really care about the UI, but care only about the backend part that basically has a helper to create a virtual display on each platform.

19:46.000 --> 19:56.000
So we would ideally want to split that up. We would want to add encryption; that's very important, because right now the stream is not encrypted, so it's fine if I connect via USB.

19:56.000 --> 20:01.000
But if I use mDNS over the network, it's a bit more... I don't know, I wouldn't do it.

20:02.000 --> 20:16.000
And also, apparently, a lot of the headaches, a lot of the heavy lifting of punching holes in UDP, trying to find the right route on the network and so on,

20:16.000 --> 20:32.000
would be solved by using a pretty good stack in Rust, iroh (I don't know how you pronounce it), which is basically, yeah, peer-to-peer on UDP with automated hole punching and so on.

20:32.000 --> 20:46.000
And it would also be really nice to add input in some way, proxying the input from one device to another, but I looked it up, and for now it would take a lot of time.

20:46.000 --> 21:04.000
So that's all. Yeah, I want to thank all the people who helped me; that's the list up there. There's Sony, there are a few of the GNOME people, there are people I chatted with who helped me, like, understand the whole pipeline thing.

21:04.000 --> 21:26.000
And if someone is interested in this, please let me know, because I don't really have a lot of time to work on it, so I'm just developing it very slowly. But if someone is interested in testing it on strange platforms and so on, that would be really nice; just let me know if it breaks and why it breaks. So yeah, we can also meet in Berlin if you want.

21:26.000 --> 21:36.000
So that's, like, just the references, and I have 30 seconds for questions... no, yeah, zero seconds.

21:36.000 --> 21:40.000
Okay, if you want to.

21:57.000 --> 22:06.000
Yeah, so basically we're using RTSP right now as a... can you repeat the question?

22:06.000 --> 22:22.000
I'm sorry. Okay, so the question was whether we can use Chromecast to stream this, like, to use a fallback sink for Chromecast, and the answer is yes, you can do that.

22:22.000 --> 22:37.000
It would be very slow, like probably one second of latency, but you can do that, yeah. And that is one of the reasons I wanted to split it, like, into the part that creates the display and the other part.

22:37.000 --> 22:52.000
Thank you very much.

26:19.000 --> 26:27.000
And another thing related to it: why SUPL as a protocol is not private, and how one can work around it.

26:27.000 --> 26:35.000
And at the end, also a bit about how OpenHGPS works and where we currently are with the project.

26:35.000 --> 26:52.000
So, the quick TL;DR of what it is: it's an open source GNSS assistance data server, which crowdsources GNSS frame data, along with the ability to, ideally, later aggregate data from other sources, including potentially official ones in the future, maybe.

26:52.000 --> 27:10.000
The goals are to have a SUPL-compatible server implemented, for A-GPS or A-GNSS use, and to be able to act as a data source, either for other projects that may be interested in using this data or for people who just want to self-host their own SUPL service as well.

27:10.000 --> 27:18.000
How it came to be is interesting: I switched to Ubuntu Touch a while ago and then noticed that GPS was very slow.

27:18.000 --> 27:28.000
I found lots of forum posts talking about how the reason it was very slow was that A-GPS was not supported on Ubuntu Touch.

27:28.000 --> 27:42.000
While the reasoning for why it didn't work is a bit complicated, there are a few core issues for people who use FOSS devices or FOSS operating systems, or just want a fully private, fully open source system.

27:42.000 --> 28:00.000
Privacy being a main one, which I'll go into a bit, along with potential licensing issues if you have, for example, a Halium-based OS or are running postmarketOS, because supl.google.com, which is the main one that Android uses... my understanding is its licensing is a bit finicky if you don't have Android.

28:00.000 --> 28:10.000
That also goes straight into the fact that the main SUPL service in the Western world (it's different in China) is at this point run by Google or local governments.

28:10.000 --> 28:18.000
There's also no existing FOSS implementation of this, so we basically had to start fully from scratch.

28:18.000 --> 28:34.000
How GNSS works, skipping over a bunch of complicated physics stuff: basically, each satellite is a very precise clock, and the positioning of a phone or a handset or receiver is done via time of flight of the signals from each satellite.

28:34.000 --> 28:42.000
So each satellite knows where it is; it transmits that, and the current time, very precisely, to a device, and you can calculate from that.
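As a toy illustration of that time-of-flight idea (not anything from the talk): in 2D, three transmitters with known positions and a synchronized clock are enough to solve for the receiver position by intersecting range circles. Real GNSS works in 3D and must also solve for the receiver's clock bias, which is why it needs at least four satellites; all positions and the solver below are invented example values.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def locate(sats, times_of_flight):
    """Intersect three range circles: sats = [(x, y), ...], ranges r_i = C * t_i.
    Subtracting the circle equations pairwise gives two linear equations."""
    (x1, y1), (x2, y2), (x3, y3) = sats
    r1, r2, r3 = (C * t for t in times_of_flight)
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - b1 * a2          # Cramer's rule for the 2x2 system
    return ((c1 * b2 - b1 * c2) / det, (a1 * c2 - c1 * a2) / det)

# Receiver at (3000, 4000) m; each flight time is distance / c.
sats = [(0.0, 0.0), (10_000.0, 0.0), (0.0, 10_000.0)]
true_pos = (3000.0, 4000.0)
tofs = [math.dist(s, true_pos) / C for s in sats]
x, y = locate(sats, tofs)  # recovers roughly (3000, 4000)
```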

28:42.000 --> 28:54.000
The data needed for this is the ephemeris and the almanac data, which is basically data such as where the satellite is, the current correction parameters needed, and other physics stuff.

28:54.000 --> 29:10.000
Transmitting this is very slow; the satellite signals are designed for signal integrity and to be able to punch through things, not for speed, which results in a data rate of about 50 bits a second, for example, for GPS specifically.
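The legacy GPS L1 C/A navigation message runs at 50 bits per second, in 1500-bit frames of five 300-bit subframes, with the ephemeris in subframes 1 to 3 and the almanac spread over 25 consecutive frames. The back-of-envelope numbers for why this is slow:

```python
# Legacy GPS LNAV message timing, worked out from its frame structure.
BITS_PER_SECOND = 50         # L1 C/A navigation message data rate
SUBFRAME_BITS = 300          # five subframes per 1500-bit frame
FRAME_BITS = 5 * SUBFRAME_BITS

subframe_seconds = SUBFRAME_BITS / BITS_PER_SECOND   # 6 s per subframe
frame_seconds = FRAME_BITS / BITS_PER_SECOND         # 30 s per frame

# Ephemeris occupies subframes 1-3, so best case 18 s off the air
# (more if you tune in mid-frame); the almanac needs 25 full frames.
ephemeris_seconds = 3 * subframe_seconds             # 18 s
almanac_seconds = 25 * frame_seconds                 # 750 s = 12.5 minutes
```

Downloading the same data over the internet takes a fraction of a second, which is the whole point of assisted GNSS.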

29:10.000 --> 29:12.000
This is where assistance systems come in.

29:12.000 --> 29:22.000
Nowadays people have made assistance systems, A-GNSS systems, which download this data over the internet, which is a lot faster.

29:22.000 --> 29:28.000
There is a standardized protocol for doing this over the internet specifically: the Secure User Plane Location (SUPL) protocol.

29:28.000 --> 29:34.000
It has heritage from mobile data protocols like RRLP and things like that.

29:34.000 --> 29:46.000
This needs an approximate receiver position, so it needs to know which satellites are approximately in view, so normally cell towers are used for positioning in this case.

29:46.000 --> 29:53.000
This is also where the cellular network side comes into play.

29:53.000 --> 30:08.000
This goes straight into privacy issue number one for SUPL, which is that the protocol sends the IMSI number of the SIM card, which is unique for each SIM card.

30:08.000 --> 30:12.000
So this just goes straight into the fact that it's not designed to be private.

30:12.000 --> 30:17.000
It's designed to be not private specifically.

30:17.000 --> 30:21.000
The IMSI number can often be tied to a single person.

30:21.000 --> 30:35.000
While most SUPL, or, in this particular instance, RRLP (which is the name of the protocol when done purely over the cellular network, not over the internet) connections are made by the handset, by the phone,

30:35.000 --> 30:45.000
it allows for the network to initiate connections as well, which is what is used by emergency services to get the position of a specific receiver.

30:45.000 --> 30:53.000
For example, if someone has called the police, fire brigade, or ambulance and doesn't say where they are, the police can then determine the location from it.

30:53.000 --> 30:56.000
It also means that you have to trust the government to

30:56.000 --> 30:58.000
always use this for good reasons.

30:58.000 --> 31:07.000
Also, knowing the people here, the fact that supl.google.com is the main domain used is probably also not optimal.

31:08.000 --> 31:13.000
Now, sadly, you have to learn to work around these things, because SUPL is here to stay.

31:13.000 --> 31:19.000
That is, realistically speaking, at least for devices that have closed-source GPS drivers.

31:19.000 --> 31:24.000
So for anything that runs Halium, for example, it is here to stay.

31:24.000 --> 31:32.000
Because it is industry standard, basically everything supports only it, which means you have to support it as well.

31:32.000 --> 31:38.000
The assistance requests are additionally sometimes made, on Android phones, directly by the chipset,

31:38.000 --> 31:46.000
which means that not even the OS that's running can always see the requests being made, which is also a quite scary thought.

31:46.000 --> 31:53.000
There is also the fact that the chipset firmware itself is almost always going to respond to SUPL requests

31:53.000 --> 32:03.000
from the network, for legal reasons, so that the police can determine where the phone or other device is at any time.

32:03.000 --> 32:09.000
The workaround for this would be to have a fully open source SUPL stack, with the ability to self-host as well,

32:09.000 --> 32:15.000
so that, for the most paranoid of people, you could, for example, potentially sort of host the server,

32:15.000 --> 32:19.000
the SUPL server itself, on your own phone, maybe, or on your local network,

32:19.000 --> 32:24.000
and have it get data from somewhere else, so that you know exactly what source code is running,

32:24.000 --> 32:31.000
and that nothing is being intercepted by anyone as well, which is especially good because at least some Chinese phones

32:31.000 --> 32:37.000
have been known to ship with TLS verification off for SUPL, which is... yeah, amazing.

32:37.000 --> 32:46.000
The way that OpenHGPS specifically works is that satellite data is crowdsourced from base stations.

32:46.000 --> 32:51.000
We could also import or collect data from other projects; we're specifically trying to work with galmon,

32:51.000 --> 32:57.000
which is an existing project made to track, originally, specifically Galileo,

32:57.000 --> 33:01.000
but now generally all GNSS satellites in orbit, and it already has an existing network for this,

33:01.000 --> 33:06.000
but the person behind it is very busy at the moment, so we haven't gotten that to work quite yet.

33:06.000 --> 33:10.000
The base stations basically collect the GNSS frames.

33:10.000 --> 33:19.000
They just sit there listening; the positioning they're doing isn't as important as the data that they're getting from each satellite.

33:19.000 --> 33:25.000
This would then be forwarded to a server, which would either be able to directly handle SUPL requests,

33:25.000 --> 33:29.000
or, via other protocols that may be slightly more modern,

33:29.000 --> 33:36.000
be able to forward the raw satellite frames to something else, to be able to, for example,

33:36.000 --> 33:40.000
use the data for their own users, or host the service themselves.

33:40.000 --> 33:44.000
So, acting as the data source, basically.
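A minimal sketch of that base-station push, under stated assumptions: the endpoint path, the field names, and the hex encoding of frames are all invented for illustration. The talk only says that raw satellite frames are forwarded over HTTPS to a (mostly) stateless server.

```python
import json

# Hypothetical base-station -> server push body. A stateless server, as
# described, needs nothing beyond what is in the request itself.
def make_push(station_id: str, constellation: str, frames: list[bytes]) -> tuple[str, bytes]:
    body = json.dumps({
        "station": station_id,
        "constellation": constellation,       # e.g. "GPS", "Galileo"
        "frames": [f.hex() for f in frames],  # raw subframes, hex-encoded
    }).encode()
    return "/api/v0/frames", body             # path is made up for the example

# A real station would POST this over HTTPS to the collection server.
path, body = make_push("station-01", "GPS", [b"\x8b\x00\x01"])
```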

33:44.000 --> 33:48.000
At the moment, all of the code we've written is specifically designed to be stateless,

33:48.000 --> 33:53.000
except for the SUPL code, because SUPL as a protocol is not stateless, sadly.

33:53.000 --> 33:57.000
All of the base stations' push requests are done over HTTPS.

33:57.000 --> 34:03.000
While, at the moment, the base station code itself doesn't specifically mandate TLS, we do intend on doing this.

34:03.000 --> 34:09.000
It's just easier for development at home to just have `fastapi dev`.

34:09.000 --> 34:14.000
In addition, the fact that it's almost all stateless means that containerizing things is easy,

34:14.000 --> 34:20.000
which would be important if we potentially have to figure out ways to put servers, or

34:20.000 --> 34:26.000
some of the servers, around the world, so that people in America can get a reasonable time of acquisition.

34:26.000 --> 34:31.000
In addition, we've also designed things with the idea that potentially you could make the servers handling

34:31.000 --> 34:33.000
SUPL data diskless.

34:33.000 --> 34:39.000
The same idea in which, for example, certain VPN companies have diskless servers,

34:39.000 --> 34:42.000
but this isn't a present goal at the moment.

34:42.000 --> 34:46.000
It would just be nice if we could manage that.

34:46.000 --> 34:50.000
We haven't fully decided on what infrastructure we're using, but in case anyone's interested,

34:50.000 --> 34:57.000
at the moment we are using MariaDB for storing various pieces of data.

34:57.000 --> 35:03.000
The backend, which at the moment just goes by a codename, "powergun", is written in Python.

35:03.000 --> 35:06.000
And the base station code is written in C++.

35:06.000 --> 35:14.000
At the moment, we're using an ESP32-C3 connected to a u-blox receiver, specifically.

35:14.000 --> 35:18.000
The base station code at this point is basically complete.

35:18.000 --> 35:21.000
It works reasonably well.

35:21.000 --> 35:25.000
It's basically designed to be as cheap as possible.

35:25.000 --> 35:31.000
If you go and find the cheapest parts on AliExpress, you can build one of these for about 15 euros, if you want.

35:31.000 --> 35:34.000
Approximately.

35:34.000 --> 35:37.000
The back end isn't done yet.

35:37.000 --> 35:39.000
It's not even close to it.

35:39.000 --> 35:41.000
But work on it has begun.

35:41.000 --> 35:47.000
The backend will include the SUPL server and, basically, the GNSS data collection server.

35:47.000 --> 35:53.000
We originally hoped to have a very basic SUPL server ready for a demo

35:53.000 --> 35:58.000
here; we were a bit too optimistic on that timeline.

35:58.000 --> 36:04.000
Yeah, getting it done in such a short time while also having university didn't help.

36:04.000 --> 36:06.000
We still have a demo though.

36:06.000 --> 36:11.000
It's just not as impressive as the one I originally hoped for.

36:11.000 --> 36:17.000
So, let me get to it.

36:18.000 --> 36:27.000
Basically, what I have here is the base station as it is today.

36:27.000 --> 36:41.000
And if I can get a GPS reception in here, I should be able to show the base station booting up and sending frames to my laptop, which is running the back end.

36:41.000 --> 36:46.000
But I'm not sure I'm going to get reception in here.

36:46.000 --> 36:50.000
It doesn't seem like it, well, I'll try it.

36:50.000 --> 36:57.000
Yeah, it's really not going to happen.

36:57.000 --> 37:07.000
Well, seemingly not. There you go; now you know that it didn't work today.

37:07.000 --> 37:15.000
Before we go over to questions, I'd like to specifically thank the NLnet Foundation for giving this project a grant,

37:15.000 --> 37:25.000
from the NGI Mobifree fund, along with Leafcloud, a Dutch hosting company, which has sponsored some server infrastructure for this.

37:25.000 --> 37:29.000
I'd also like to mention that we have a website at OpenHepest.net.

37:29.000 --> 37:30.000
It's very much not complete.

37:30.000 --> 37:33.000
And it's basically just a single blog post that exists.

37:33.000 --> 37:36.000
The source code is available on GitHub, under the OpenHepest group.

37:36.000 --> 37:43.000
If you want to contact me, you can do so either on Matrix, on Matrix.org.

37:43.000 --> 37:49.000
Or you can send me an email at lectured.gmail.com.

37:49.000 --> 37:52.000
So, I'd like to thank all of you for your attention.

37:52.000 --> 37:54.000
I'm open to any questions you might have.

37:55.000 --> 38:04.000
Thank you.

38:04.000 --> 38:06.000
Hi.

38:06.000 --> 38:21.000
[Audience question, partly inaudible: is it possible to binary patch a locked-down handset so it points at a different SUPL server?]

38:22.000 --> 38:29.000
So, the question was, is it really possible to binary patch, basically,

38:29.000 --> 38:35.000
a mobile handset that cannot have its configuration changed without being rooted, to point it at your server.

38:35.000 --> 38:37.000
Sadly, probably not.

38:37.000 --> 38:40.000
What you could potentially do, I don't know about Clio specifically.

38:40.000 --> 38:41.000
It's MediaTek.

38:41.000 --> 38:43.000
It's not secure boot, actually.

38:43.000 --> 38:44.000
Yeah.

38:44.000 --> 38:50.000
But what you might be able to do, provided it actually supports A-GPS in the OS and it doesn't go through the chipset itself,

38:50.000 --> 38:54.000
which a lot of these really cheap chipset firmwares do.

38:54.000 --> 39:00.000
You could try manually redirecting supl.google.com to a different server,

39:00.000 --> 39:05.000
because a shocking number of drivers, MediaTek especially being guilty of this,

39:05.000 --> 39:07.000
and Qualcomm to a lesser degree, in Chinese phones.

39:07.000 --> 39:13.000
They specifically turn TLS certificate verification off for some unknown reason.

39:13.000 --> 39:19.000
Yeah, you could try doing that. Other than that, if it's a MediaTek chipset,

39:19.000 --> 39:23.000
there's an XML file you can change, which has the configuration for all of this in it.

39:23.000 --> 39:29.000
On Qualcomm, I think it's just a text file, but don't quote me on that, I haven't checked.
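As a sketch of the config-rewriting approach just described, here is a minimal example of pointing a vendor-style XML A-GPS config at a different SUPL server. The file layout, the `SUPL_SERVER` element name, and the attribute names are invented for illustration; real MediaTek or Qualcomm config formats differ per device.

```python
# Hypothetical sketch: rewriting the SUPL server host in a vendor-style
# XML config file. Element and attribute names are made up for
# illustration; real chipset config formats differ per device.
import xml.etree.ElementTree as ET

def rewrite_supl_host(xml_text: str, new_host: str) -> str:
    root = ET.fromstring(xml_text)
    for elem in root.iter("SUPL_SERVER"):   # hypothetical element name
        elem.set("addr", new_host)          # swap in our own server
    return ET.tostring(root, encoding="unicode")

example = '<agps><SUPL_SERVER addr="supl.google.com" port="7275" /></agps>'
patched = rewrite_supl_host(example, "supl.example.org")
```

Port 7275 is the registered SUPL (ULP) port, which is why it appears in the example.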

39:29.000 --> 39:30.000
Okay.

39:33.000 --> 39:34.000
You are first.

39:34.000 --> 39:36.000
Super interesting project.

39:36.000 --> 39:37.000
Thanks.

39:37.000 --> 39:42.000
How many base stations would be needed for worldwide coverage?

39:42.000 --> 39:46.000
[Partly inaudible] Like, is it a thousand?

39:46.000 --> 39:47.000
How many would be needed?

39:47.000 --> 39:52.000
So the question was, how many base stations would one need for worldwide coverage?

39:52.000 --> 39:57.000
That really depends on how good each station's antenna is,

39:57.000 --> 40:03.000
how good the location is in many ways, obstructions and stuff, and if it's indoors as well.

40:03.000 --> 40:09.000
The best estimate I can probably give is on average, one every few hundred kilometres.

40:09.000 --> 40:15.000
But in theory, if one had an absolutely high-quality, big antenna,

40:15.000 --> 40:21.000
you could get close to a hemisphere of reception in each position, but that is not realistically possible.

40:21.000 --> 40:28.000
In addition, many receivers, especially cheaper ones such as the ones I'm using, have limits on how many satellites they can acquire at once.

40:28.000 --> 40:35.000
So mostly I'm trying to see if I can get as many people as possible to consider doing this in the future.

40:36.000 --> 40:37.000
So, yeah.

40:48.000 --> 40:49.000
Sorry, what?

40:56.000 --> 41:01.000
Yes. The question was, would Google's or any SUPL server work if we send them a fake IMSI?

41:01.000 --> 41:06.000
From what I can tell, if you don't have a SIM card in your phone, it would send an all-zeros IMSI.

41:06.000 --> 41:09.000
So we could just send that all the time, and it would probably work.

41:24.000 --> 41:25.000
Sorry, what was that?

41:32.000 --> 41:36.000
The question was, if I was planning to run this as a service, what would be the privacy implications?

41:49.000 --> 41:50.000
Yes.

41:50.000 --> 41:57.000
Basically, on the privacy implications: the data that is received by the SUPL server includes the IMSI,

41:57.000 --> 41:58.000
or the all-zeros placeholder,

41:58.000 --> 42:03.000
and the list of cellular towers that were within range at the time of the request.

42:03.000 --> 42:10.000
Furthermore, there's also SUPL v3, I think it is, which can technically support Wi-Fi access points, I believe.

42:10.000 --> 42:15.000
Don't quote me on that, but I haven't looked into that one as much because there's some weird patent stuff on that one.

42:15.000 --> 42:24.000
This is focusing on SUPL v1, which is the bare minimum, and SUPL v2 to a certain degree, which you need for Galileo satellites; those don't support that.

42:24.000 --> 42:33.000
Risk-wise speaking, if you're running an A-GPS server and you're smart enough, you could probably figure out roughly where users are.

42:33.000 --> 42:40.000
I did have the thought of what happens if the government asks us for data, but I've just pushed that aside and hope it doesn't happen.

42:46.000 --> 42:49.000
The question was, how big is the whole data set?

42:49.000 --> 42:56.000
Basically, the data set you need is a list of every cellular tower, or close to it at least.

42:56.000 --> 43:02.000
It isn't very big. There are projects like OpenCelliD and, I think, beaconDB, which maintain such lists.

43:02.000 --> 43:05.000
They are a few gigabytes in size as CSV files; it's not actually that big.

43:05.000 --> 43:13.000
The satellite data is substantially smaller, but gets updated a lot more frequently, would be my way of putting it.

43:13.000 --> 43:22.000
Even though in theory it takes up to 12 and a half minutes to get all the frames from a GPS satellite, given there are 25 frames at 30 seconds each.

43:22.000 --> 43:26.000
It's not that much physical data, it just changes frequently.
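The arithmetic behind those numbers, assuming the legacy GPS navigation message (50 bits per second, 1500-bit frames, 25 frames per full cycle):

```python
# Back-of-the-envelope: size of one GPS satellite's full navigation message.
# The legacy GPS nav message is broadcast at 50 bits per second; one frame
# is 1500 bits (30 seconds) and the full message cycles through 25 frames.
BITS_PER_SECOND = 50
BITS_PER_FRAME = 1500
FRAMES = 25

total_bits = BITS_PER_FRAME * FRAMES          # 37,500 bits in total
total_seconds = total_bits / BITS_PER_SECOND  # 750 s to receive it all
total_bytes = total_bits // 8                 # under 5 KB per satellite

print(total_seconds / 60)  # prints 12.5 (minutes), matching the figure above
```

So the full message is only a few kilobytes per satellite; the 12.5 minutes comes from the slow broadcast rate, not the data volume.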

43:26.000 --> 43:39.000
Yes.

43:39.000 --> 43:46.000
The question was, have I considered the possibility of an adversary submitting malicious data?

43:46.000 --> 43:52.000
Yes, I have. Well, at the moment we don't have any service running to accept data;

43:52.000 --> 43:58.000
we'll be making a blog post announcement when we want to try and find people to host base stations.

43:58.000 --> 44:04.000
At the moment, our plan would be that we'd be issuing tokens for each base station.

44:04.000 --> 44:15.000
So each request is authenticated with a token, and we'd be able to track that back to the user, or at least to an email address of whoever it was issued to.

44:15.000 --> 44:23.000
And obviously, if we notice that something's wrong, we can revoke the token, and hopefully that stops it.
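A minimal sketch of such a token scheme, assuming HMAC-signed tokens and server-side revocation; all names and the token format here are illustrative, not the project's actual design:

```python
# Sketch of a per-base-station token scheme: issue an HMAC-signed token
# tied to a station and contact address, verify it on each upload, and
# revoke it if the station misbehaves. Names are illustrative only.
import hmac, hashlib, secrets

SERVER_KEY = secrets.token_bytes(32)  # kept secret on the server
revoked: set[str] = set()             # revoked token payloads

def issue_token(station_id: str, email: str) -> str:
    payload = f"{station_id}:{email}"
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token: str) -> bool:
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and payload not in revoked

tok = issue_token("station-042", "operator@example.org")
assert verify_token(tok)               # accepted while valid
revoked.add(tok.rsplit(":", 1)[0])     # revoke after noticing abuse
```

Because the signature covers the station and contact address, a bad upload can be traced back to its issuer, which is the property described above.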

44:23.000 --> 44:45.000
So the question was, is there a baseline for what kind of computer you'd need to run this server?

44:45.000 --> 44:59.000
For the entire stack, not that much. It isn't a surprisingly massive amount of computation; it should probably be doable for a single person, I'd think.

44:59.000 --> 45:10.000
I don't have a good data set on that, but I did do a bit of measuring of how long a SQL request to get the position of a cellular tower would take,

45:10.000 --> 45:20.000
or rather how long getting several towers in one SQL query would take. On my personal computer, which is reasonably quick, it did take something like 100 milliseconds.

45:20.000 --> 45:28.000
It's not massive computation; it is a huge database, but even that's not that much.

45:28.000 --> 45:34.000
A Raspberry Pi can probably do it, I'd say, but I can't really give you a good answer on that one yet, sorry.
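For a sense of what that lookup involves, here is a minimal sketch of a cell-tower position query against an indexed SQLite table, keyed on the (MCC, MNC, LAC, Cell ID) identifiers a request would carry; the schema and the example tower are invented for illustration:

```python
# Sketch of a cell-tower position lookup: an indexed SQLite query keyed
# on (MCC, MNC, LAC, Cell ID). Schema and values are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE towers (
    mcc INTEGER, mnc INTEGER, lac INTEGER, cid INTEGER,
    lat REAL, lon REAL,
    PRIMARY KEY (mcc, mnc, lac, cid))""")  # composite key doubles as index
db.execute("INSERT INTO towers VALUES (204, 8, 1234, 56789, 52.37, 4.90)")

def tower_position(mcc, mnc, lac, cid):
    row = db.execute(
        "SELECT lat, lon FROM towers WHERE mcc=? AND mnc=? AND lac=? AND cid=?",
        (mcc, mnc, lac, cid)).fetchone()
    return row  # (lat, lon), or None if the tower is unknown

pos = tower_position(204, 8, 1234, 56789)
```

With the composite primary key acting as an index, each lookup is a single B-tree probe, so a database imported from a few gigabytes of CSV stays fast even on modest hardware.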

45:40.000 --> 45:50.000
Thank you very much.

