WEBVTT

00:00.000 --> 00:16.880
Let me start off by saying that we're really glad to be here, and I'm joined today by Thijs.

00:16.880 --> 00:22.560
I heard a lot of great talks so far, and I think it's great what we're doing here.

00:22.560 --> 00:28.460
I really believe that sharing software and knowledge together is a really

00:28.460 --> 00:37.380
essential part of tackling the challenges we're facing in the energy transition, and with all

00:37.380 --> 00:43.220
this great thinking power, building data-driven applications, I think we can do great

00:43.220 --> 00:49.060
things together. I want to start off a little bit by asking you in the room what

00:49.060 --> 00:54.740
your background is, so by show of hands, let's see a little bit which people

00:54.740 --> 01:00.340
we have here. We're going to talk about load flow calculations and modeling the energy

01:00.340 --> 01:05.500
grid. So for the people here in the audience: how many of you have worked with

01:05.500 --> 01:13.060
energy grids, modeling them, doing load flow calculations? I see some hands. When we

01:13.060 --> 01:19.180
talk about load flow we'll sometimes try to give really simple examples, but sometimes it

01:19.260 --> 01:25.260
will be a little bit more in depth, but we'll keep it interesting for everyone, I hope.

01:25.260 --> 01:33.100
So let's start with where this journey began. The story for today actually started

01:33.100 --> 01:42.860
five years ago, when Thijs and I were working on a project at Alliander, and in this project

01:42.860 --> 01:48.580
we were simulating the energy grid, trying to see what effects we can expect for

01:48.580 --> 01:56.620
the coming 40 years on the grid, and to do this we were developing a Python library,

01:56.620 --> 02:03.100
which could model that. But over time we were switching teams, so we worked in other teams,

02:03.100 --> 02:07.500
and we thought: well, the things we were building then would really be useful in this

02:07.500 --> 02:15.660
team as well. So we started sharing some code within Alliander as inner source, and over time

02:15.660 --> 02:19.260
other teams came along and thought: oh, that's pretty cool, we want to use that as well.

02:19.260 --> 02:25.340
So we started maturing the software, and that's how we built an inner source library

02:25.340 --> 02:34.540
for modeling the energy grid. And then, five years later, we're standing here, and this

02:34.540 --> 02:41.700
project has become quite a bit bigger, at least in our eyes, and we're launching

02:41.700 --> 02:48.700
it today, officially talking about it for the first time in this setting, yeah, thank you,

02:48.700 --> 02:56.180
as an addition to the Power Grid Model suite, which is already a project within Alliander,

02:56.180 --> 03:02.500
and we're adding a new package focused on data science applications, and that's what we're

03:02.500 --> 03:07.140
going to talk about today: show some examples and some applications we're using

03:07.140 --> 03:14.700
it for. So let me start off with the base, and that's the Power Grid Model. I'll give a little bit

03:14.700 --> 03:20.220
of background about that, because that's a really important part of what we're doing. It's

03:20.220 --> 03:28.020
a really high-performing load flow engine, developed in C++, which we use to model loads

03:28.020 --> 03:35.660
on a network. We are not developing the core of the Power Grid Model ourselves, so don't ask us too difficult

03:35.660 --> 03:43.860
questions about the physics, but we're using it. We are involved in data science projects

03:43.860 --> 03:50.060
which really depend on having such a fast engine, so we have learned over time how it

03:50.060 --> 03:54.980
can be used in different data science applications and what additions were needed, and we're

03:54.980 --> 04:05.300
now adding those to the software. So, what are we doing with it? This is a really simple

04:05.300 --> 04:11.820
illustration of an energy grid, but in essence it's what we are modeling, and that's

04:11.820 --> 04:21.260
how energy flows through the electricity network. We work at Alliander, so we are working

04:21.260 --> 04:29.140
at a distribution system operator, and we want to manage this grid on the short and the

04:29.140 --> 04:36.420
long term, and make sure that it stays within bounds, everybody stays happy and stays

04:36.420 --> 04:46.340
connected to the electricity grid, and that means we're modeling how this network is

04:46.340 --> 04:53.180
evolving. We'll simplify a lot in this story, so we'll talk about how nodes and lines

04:53.180 --> 04:58.860
connect components in this grid, and we want to see whether they will be overloaded

04:58.980 --> 05:04.980
in the future, short term or long term. So what happens when, for example, lines get overloaded?

05:04.980 --> 05:12.300
They heat up and become less efficient, which is of course not good in the first place,

05:12.300 --> 05:20.260
but in the end they will also break if they stay overloaded. And when we put too much energy

05:20.260 --> 05:34.660
through the grid, the voltages for customers also go out of bounds, and their equipment becomes

05:34.660 --> 05:42.300
dysfunctional. So, as data scientists and software engineers at Alliander, we continue

05:42.300 --> 05:48.500
to develop applications which try to manage this for the organization. That means,

05:48.580 --> 05:55.540
for example, looking at the short term: we're forecasting the next 24 or 48 hours on the

05:55.540 --> 06:01.540
network, using, for example, things like we saw before about solar forecasts for the short

06:01.540 --> 06:08.820
term, to see what will happen to the energy grid tomorrow, and having ways to manage that if we

06:08.820 --> 06:18.020
expect overloads, for example by placing tenders on the short-term electricity market, to prevent

06:18.100 --> 06:24.900
that. And we do that by simulating, so simulations will be the key of what we're doing:

06:24.900 --> 06:32.660
we do a lot of simulations daily, in which we go through all the different scenarios on the electricity

06:32.660 --> 06:39.940
grid and make changes to that grid. If you look a little bit further into the future, then

06:39.940 --> 06:46.020
we're also working on how we can manage maintenance on the network, or outages on the network:

06:46.020 --> 06:52.420
when things in the network fail, we need to go through all the different combinations of how we

06:53.220 --> 07:00.020
would be able to solve an outage in the network to keep everybody connected, and again,

07:00.020 --> 07:05.140
that takes a lot of simulations, interfacing with the network, changing the network, and seeing what

07:05.220 --> 07:18.020
the impact would be. And then, the situation as it is today is that we have a lot of new customers,

07:18.020 --> 07:23.700
or existing customers who want to use more electricity, and that, frankly, puts a big stress on

07:23.700 --> 07:30.340
our network, and we are not always able to manage that. So we are continuously looking for solutions,

07:30.420 --> 07:39.140
flexibility in the grid, to keep those customers connected, and making

07:39.140 --> 07:47.300
forecasts of how likely it is that a solar farm needs to be disconnected for a couple of hours

07:47.300 --> 07:55.380
a year, simulating that on a large scale as well, and building live tooling for the people at

07:55.380 --> 08:03.060
Alliander who are responsible for that. And then last but not least, because that's

08:03.060 --> 08:09.620
what Thijs is doing in his team, and also what I saw in a talk earlier: we're looking

08:09.620 --> 08:16.420
at the energy scenarios for 2050, 2060, and simulating 40 years into the future. So you

08:16.420 --> 08:21.780
can imagine, if we want to calculate all those scenarios, well, we have a very powerful

08:22.100 --> 08:27.700
engine which is able to do that, and now we're adding the interfacing options to that as well.
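
The kind of scenario sweep described here can be made concrete with a toy sketch. This is a deliberately simplified stand-in assuming a radial feeder where each line carries everything downstream of it; all names, capacities, and scenarios are invented, and the real calculations go through the Power Grid Model engine, not this "sum of downstream demand" rule:

```python
# Toy scenario sweep over a small, made-up radial feeder.
LINES = [("substation", "n1"), ("n1", "n2"), ("n2", "n3")]
CAPACITY_MW = {("substation", "n1"): 10.0, ("n1", "n2"): 6.0, ("n2", "n3"): 3.0}
DOWNSTREAM = {  # nodes fed through each line in this radial layout
    ("substation", "n1"): ["n1", "n2", "n3"],
    ("n1", "n2"): ["n2", "n3"],
    ("n2", "n3"): ["n3"],
}

def overloaded_lines(loads_mw):
    """Lines whose total downstream demand exceeds their capacity."""
    return [line for line in LINES
            if sum(loads_mw[n] for n in DOWNSTREAM[line]) > CAPACITY_MW[line]]

scenarios = {
    "today":        {"n1": 2.0, "n2": 1.0, "n3": 1.0},
    "new_customer": {"n1": 2.0, "n2": 6.0, "n3": 1.0},  # extra demand at n2
}
for name, loads in scenarios.items():
    print(name, overloaded_lines(loads))
```

In the "new_customer" scenario the middle line exceeds its 6 MW capacity, which is exactly the kind of overload the daily simulations look for.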

08:32.580 --> 08:38.340
So we're going to go a little bit more into the nitty-gritty details of how this load flow

08:38.340 --> 08:43.780
works, and then we're going to show how the package does this. In essence, this is what

08:43.780 --> 08:51.300
the Power Grid Model interface looks like: a JSON interface towards the

08:51.300 --> 08:59.060
load flow, which gives you output in an array manner. This is what was already

08:59.060 --> 09:07.460
available in the Power Grid Model, and what we now built is a wrapper to really easily interact with it,

09:07.460 --> 09:13.620
and to ask questions which, from a data science perspective, are very interesting about this,

09:13.620 --> 09:19.940
so we want to know how this network is connected, which parts of it can be filtered

09:19.940 --> 09:28.100
out or are interesting, whether it is overloaded, and what happens if we change something. So those are the things

09:28.100 --> 09:36.500
we're focusing on with this package, and we would summarize that as preparation of the network,

09:36.500 --> 09:42.020
interpretation of the outputs, and simulation to close the loop. So I'm going to go a

09:42.020 --> 09:47.940
little bit faster, because I also want to give some time to Thijs. In essence, this is

09:42.020 --> 09:47.940
the summary: it's a Python wrapper which doesn't add a lot of dependencies. It takes the

09:47.940 --> 09:53.540
Power Grid Model, adds NumPy handling of the grid output, and we also add rustworkx

09:53.620 --> 10:05.140
to represent the graph of the network. And with that I want to hand over to my colleague

10:05.140 --> 10:12.900
to show you how that works and what it looks like.
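
To preview what that graph representation buys you, here is a miniature connectivity query in plain Python with a hand-rolled breadth-first search. The real package delegates this to a rustworkx graph; the adjacency dict and node numbers below are purely illustrative:

```python
from collections import deque

def connected_component(adjacency, start):
    """Breadth-first search: all nodes reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# Two separate parts of a small grid: 1-2-3-4 and 10-11-12.
adjacency = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3],
             10: [11], 11: [10, 12], 12: [11]}

print(4 in connected_component(adjacency, 1))    # True: node 1 reaches node 4
print(10 in connected_component(adjacency, 1))   # False: node 10 is a separate component
```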

10:16.260 --> 10:26.980
Yes. Thank you. I'll just hold this one. So yeah, to get into the more technical part of our

10:26.980 --> 10:32.420
application, and to get back to what we were simulating: nodes, lines, and transformers.

10:33.620 --> 10:41.860
You can represent them in two ways: either as a graph, which is quite natural, I guess, or as

10:41.940 --> 10:49.620
arrays, which is what we were already doing in the Power Grid Model, and both can answer different questions.

10:49.620 --> 10:57.220
So the arrays you can use mainly for the load flow questions, or specific properties of specific

10:57.220 --> 11:02.740
assets in the network, and the graph is mainly used for asking questions like connection topology

11:02.740 --> 11:10.740
questions. And, as just said, for the one we use rustworkx, which is a

11:10.820 --> 11:15.460
variant of NetworkX, for those familiar with it, and the other one uses NumPy structured

11:15.460 --> 11:22.900
arrays, which were already present in the Power Grid Model. So back to the

11:22.900 --> 11:31.060
graph questions: questions like these can easily be asked of the graph, and so we have methods like

11:22.900 --> 11:31.060
these in our package to find out whether a node is connected to another node. So, for example,

11:31.060 --> 11:38.580
node 1 is connected to node 4, but it's not connected to node 10. Or if you

11:38.660 --> 11:43.300
ask a question like: what are the components within this network? This is one component on the

11:43.300 --> 11:48.100
left side, and nodes 10, 11, and 12 form another component, and these answers can be very useful in our

11:48.100 --> 11:53.620
data science applications. Then, on the other hand, we have the array questions, so power flow

12:01.620 --> 12:08.260
analysis or state estimation. We built an interface to the Power Grid Model functionality that was

12:01.620 --> 12:08.260
already there, so it's easy to interact with, and it will update the results from the

12:08.340 --> 12:14.660
power flow into the data science application that you're developing, which makes it a bit easier in

12:14.660 --> 12:24.340
our opinion. Other features we added are things like filtering values; the goal here is really

12:24.340 --> 12:36.980
to make the life of the data scientist a bit easier. The Power Grid Model really acts

12:36.980 --> 12:41.860
as the calculation core, and we are trying to make the experience for the data scientist a bit

12:41.860 --> 12:46.900
easier, so we added things like filtering, updating values, and inheritance of

12:46.900 --> 12:54.260
the default data types. For example, there's a line array on the right there, but you can add custom

12:54.260 --> 12:59.460
fields to the array, or defaults, and you can even create your own completely custom array,

13:00.420 --> 13:08.420
but under the hood it's all still NumPy arrays, so that won't affect performance, for example.
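
A toy of that idea, extending a base record type with custom fields and defaults, sketched with standard-library dataclasses. The real package does this on top of NumPy structured arrays, so this sketch only mirrors the ergonomics, not the storage; every field name here is made up:

```python
from dataclasses import dataclass

@dataclass
class LineRecord:
    """Base record, mimicking one row of a line array."""
    from_node: int
    to_node: int
    capacity_mw: float = 10.0   # a default, as described in the talk

@dataclass
class MyLineRecord(LineRecord):
    """User-defined extension: a custom field, without touching the base."""
    install_year: int = 2020

line = MyLineRecord(from_node=1, to_node=2)
print(line.capacity_mw, line.install_year)  # defaults fill in: 10.0 2020
```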

13:18.500 --> 13:23.300
And then one question you might have is: how do you manage both

13:23.300 --> 13:29.780
representations at the same time? That's also something we added: we combine

13:29.780 --> 13:34.580
them both in one Grid object, and then we added functions to that object to

13:34.580 --> 13:40.340
manage both representations. So there's a graph attribute, which contains the rustworkx

13:40.340 --> 13:48.340
graph, and then below that there are all the arrays. Whenever a node is added,

13:48.420 --> 13:52.740
it's added both to the graph and to the node array, and the same goes

13:52.740 --> 13:58.660
for branches, or for activating or deactivating branches. That makes it easier to build a

13:58.660 --> 14:04.180
dynamic grid and simulate it: add a node to the grid, delete a node from the grid, and then

14:04.180 --> 14:12.980
see what that does for the load flows, and things like that. That's basically our application,

14:12.980 --> 14:21.140
and we hope we can find new people to join us in building and maintaining these applications

14:21.140 --> 14:27.540
and to contribute. So thank you for joining us and listening to our talk.
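
The Grid object just described, one class keeping a graph (for topology) and flat arrays (for calculations) in sync, can be sketched in miniature. The real package wraps rustworkx and NumPy; every name below is illustrative:

```python
class Grid:
    """Toy grid: a dict-of-sets stands in for the rustworkx graph, and a
    list of records stands in for the NumPy arrays."""

    def __init__(self):
        self.graph = {}          # node -> set of neighbours
        self.node_array = []     # flat records

    def add_node(self, node_id, load_mw=0.0):
        # One call updates BOTH representations, as described in the talk.
        self.graph.setdefault(node_id, set())
        self.node_array.append({"id": node_id, "load_mw": load_mw})

    def add_branch(self, a, b):
        self.graph[a].add(b)
        self.graph[b].add(a)

    def delete_node(self, node_id):
        # Remove from the graph (and from its neighbours)...
        for neighbour in self.graph.pop(node_id, set()):
            self.graph[neighbour].discard(node_id)
        # ...and from the array, keeping both views consistent.
        self.node_array = [r for r in self.node_array if r["id"] != node_id]

grid = Grid()
for n in (1, 2, 3):
    grid.add_node(n, load_mw=1.0)
grid.add_branch(1, 2)
grid.add_branch(2, 3)
grid.delete_node(3)
print(sorted(grid.graph))                  # [1, 2]
print([r["id"] for r in grid.node_array])  # [1, 2]
```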

14:27.540 --> 14:37.540
Yes.

14:37.540 --> 14:42.820
Does the Power Grid Model only use nodes and lines as the central components, or can you also,

14:42.820 --> 14:49.140
if you have, like, smart meter data from consumers, feed that into the model?

14:49.140 --> 14:53.140
So, let me repeat the question.

14:53.380 --> 14:59.860
So the question, I think, if I summarise correctly, is: can we also use measured data

14:59.860 --> 15:06.740
in the network to inform the model? That relates a little bit more to the calculation

15:06.740 --> 15:14.180
core. It has different parts: one part is the load flow engine, which does the physical

15:14.180 --> 15:20.180
calculations of the load flow, but it also has a state estimation part, in which you can input

15:20.260 --> 15:25.940
measurements and give them probabilities, so it's more like a statistical calculation,

15:25.940 --> 15:30.820
and that's where we use it as well, with measurements of, yeah, all points in the network.
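
The statistical flavour of state estimation can be sketched with the simplest possible case: combining noisy measurements weighted by how much you trust each one. The real engine solves a full weighted-least-squares problem over the whole network; the numbers here are made up:

```python
def weighted_estimate(measurements):
    """Weighted least squares for a single quantity: each measurement is a
    (value, variance) pair, and each is weighted by 1/variance."""
    weights = [1.0 / var for _, var in measurements]
    values = [val * w for (val, _), w in zip(measurements, weights)]
    return sum(values) / sum(weights)

# Two voltage readings at the same node: a precise sensor and a noisy one.
readings = [(10.0, 0.01), (10.4, 0.04)]
print(round(weighted_estimate(readings), 3))  # 10.08, pulled towards the precise sensor
```

Note that the estimate lands much closer to the low-variance reading than the plain average of 10.2 would.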

15:30.820 --> 15:33.460
Yep.

15:33.460 --> 15:39.300
Do you use your package to do dynamic simulations, as in, during the course of the simulation

15:39.300 --> 15:43.860
the grid changes, or do you have to fix the grid for the whole

15:44.340 --> 15:45.620
simulation?

15:45.620 --> 15:50.180
No, we actually do dynamic simulations. What we're doing, for example, in my project,

15:50.180 --> 15:56.180
is simulating 10 years ahead, and during runtime we actually simulate adding new

15:56.180 --> 16:01.860
customers to the grid, so that will build new lines and new nodes, and then we do a new load flow.

16:01.860 --> 16:06.580
That will sometimes cause congestion, and then we have to replace lines with thicker lines,

16:06.580 --> 16:11.300
or things like that. So during runtime we're changing the grid, updating it,

16:11.300 --> 16:22.660
and then seeing the results. Any other questions?
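
The yearly loop from that answer, add customers, check for congestion, reinforce, can be sketched as a toy. All names and the congestion rule are invented; the real project runs full load flows with the Power Grid Model at each step:

```python
import random

def run_toy_growth_simulation(years=10, seed=42):
    """Each simulated year: connect a new customer somewhere, then reinforce
    any overloaded line by doubling its capacity (a stand-in for replacing
    it with a thicker cable). Returns how many reinforcements were needed."""
    rng = random.Random(seed)
    lines = [{"name": "line_1", "load_mw": 3.0, "capacity_mw": 6.0}]
    reinforcements = 0
    for year in range(years):
        # A new customer connects and adds demand to some line.
        line = rng.choice(lines)
        line["load_mw"] += rng.uniform(0.5, 1.5)
        # Stand-in for the load flow: compare load against capacity.
        if line["load_mw"] > line["capacity_mw"]:
            line["capacity_mw"] *= 2  # "replace with a thicker cable"
            reinforcements += 1
    return reinforcements

print(run_toy_growth_simulation())
```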

16:22.660 --> 16:29.060
How long does it take to run a simulation?

16:29.060 --> 16:35.860
I think it depends entirely on what you build in your own simulation. In my project,

16:35.860 --> 16:41.220
we're simulating a lot of the work that Alliander is doing, so all kinds of congestion management

16:41.300 --> 16:47.620
strategies and all kinds of ways of adding customers. So, yeah, we simulate 10 years in

16:47.620 --> 16:51.780
about five hours or so, as it stands now.

16:51.780 --> 17:17.060
Yeah, so I'm not really sure I can repeat the question, but to answer it: at least

17:17.060 --> 17:24.580
the Power Grid Model is really designed for performance, so it's really fast and quick at

17:24.580 --> 17:29.780
calculating all these things, so it's entirely possible, I think, to get a quick answer,

17:29.780 --> 17:36.260
but you have to build the simulation yourself.

17:37.540 --> 17:45.620
For low voltage grids, you mean? Yeah, we are using it on low voltage as well; it's focused on low

17:45.620 --> 17:49.540
and medium voltage, for us as a distribution system operator.

17:58.420 --> 18:04.980
Yeah, so the question is: is there a specific Dutch network model in this, or not?

18:06.180 --> 18:12.500
You should see this as a general definition of how a network can be modeled, and we internally,

18:12.580 --> 18:17.060
at Alliander, are of course modeling our network, but you can model every network you want in this,

18:18.020 --> 18:23.380
whether it's low voltage or medium voltage. The Alliander data itself is not included in the
project. And maybe also to add on the question about performance: of course, it depends on the

18:31.060 --> 18:37.140
use case. I talked about some strategic use cases, for example Thijs's, where, yeah,

18:37.220 --> 18:44.900
the speed is mainly a cost concern, but it's also used operationally,

18:45.460 --> 18:48.980
where, of course, speed is much more of the essence, yeah.

18:51.380 --> 18:57.140
How badly will it go wrong if you use incomplete data? So we can get a little bit of the grid, for instance,

18:57.140 --> 19:01.540
but there's no guarantee it's complete. Is it completely useless then, or can I

19:01.860 --> 19:06.340
use it a little bit, and what kind of incomplete data can it handle?

19:06.340 --> 19:12.100
OpenStreetMap has a fair part of the grid, but there are gaps in things where people don't

19:12.100 --> 19:18.820
want to show you. Okay, yeah, so the question is how the model handles incomplete data,

19:20.420 --> 19:25.140
and I find it a little bit of a difficult question, but I'm going to try to answer. I think,

19:26.020 --> 19:31.780
on the one hand, for the physical modeling, of course, it handles the data as it is

19:31.780 --> 19:36.980
presented, and the Power Grid Model has some validation options to make sure that the data is

19:37.700 --> 19:44.980
at least valid for the calculation, and then whatever you put in, it will work with. But, for

19:44.980 --> 19:51.620
example, we also have projects where we don't know the data exactly; for example,

19:52.580 --> 19:57.300
the switching states in the network we're not quite sure of, and then we actually use the

19:57.300 --> 20:01.700
state estimation we were talking about, which can handle uncertainty in a better way,

20:02.820 --> 20:08.500
or we can simulate, for example, the different options and see which is most likely. So there are

20:08.500 --> 20:14.020
different ways of handling it, but it depends on what kind of data is missing, I think.
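
That last idea, simulating the different options when, say, switch states are unknown and keeping the most plausible one, can be sketched as a brute-force sweep. This is an illustrative toy with made-up numbers; the real workflow uses the state estimation and load flow of the Power Grid Model:

```python
from itertools import product

def feeder_loads(switch_states, demands_mw):
    """Toy: each switch routes one demand onto feeder A (closed) or
    feeder B (open); return the resulting load per feeder."""
    feeder_a = sum(d for s, d in zip(switch_states, demands_mw) if s)
    return feeder_a, sum(demands_mw) - feeder_a

def most_plausible(measured_a_mw, demands_mw):
    """Enumerate every switch configuration and keep the one whose feeder-A
    load best matches the measured value."""
    return min(product([True, False], repeat=len(demands_mw)),
               key=lambda s: abs(feeder_loads(s, demands_mw)[0] - measured_a_mw))

demands = [2.0, 3.0, 5.0]
print(most_plausible(5.0, demands))  # (True, True, False): 2.0 + 3.0 matches 5.0 MW
```

With three switches this sweep is trivial; at network scale the combinations explode, which is one reason a fast calculation core matters.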

20:14.020 --> 20:20.060
Thanks, Thijs, thanks! Thank you!

