WEBVTT

00:00.000 --> 00:10.360
If you would like to benchmark your BPF programs, you most likely need to get creative.

00:10.360 --> 00:14.600
And sometimes even a little bit weird in the sense of what kind of workloads you're using

00:14.600 --> 00:17.160
to benchmark your applications.

00:17.160 --> 00:19.480
Workload plus configuration.

00:19.480 --> 00:24.000
And the reason, as we have learned the hard way, is that if you've got your BPF programs

00:24.000 --> 00:30.200
for security, like we do for example, or for monitoring or for something else, all your

00:30.200 --> 00:35.000
programs are exposed within the kernel to all sorts of weird and strange phenomena,

00:35.000 --> 00:36.400
unfortunately.

00:36.400 --> 00:43.680
And this leads to situations where you sometimes get very counterintuitive performance

00:43.680 --> 00:45.480
results when you're testing your application.

00:45.480 --> 00:48.640
So that's what we're going to talk about.

00:48.640 --> 00:52.080
And to prove the point, I've got one particular example.

00:52.080 --> 00:57.160
It's some sort of a deep dive in the next couple of slides about the BPF framework.

00:57.160 --> 01:02.680
Sort of a case study to show what kind of unexpected things you might actually expect.

01:02.680 --> 01:06.840
Well, unexpected things that you're going to expect; it's a strange phrase, but nevertheless.

01:06.840 --> 01:09.960
So it's a benchmark from the kernel selftests.

01:09.960 --> 01:14.520
You know, there are some of those that are quite decently written, very interesting

01:14.520 --> 01:15.520
benchmarks.

01:15.520 --> 01:16.520
They are very interesting.

01:16.520 --> 01:24.480
And this is roughly the diagram of how the ring buffer benchmark looks.

01:24.480 --> 01:26.240
It's a very simple one.

01:26.240 --> 01:29.240
So we've got a small BPF program.

01:29.240 --> 01:34.000
This program is getting attached to a simple syscall, this one, I believe, like get

01:34.000 --> 01:36.840
process group ID, getpgid, or something like that.

01:36.840 --> 01:40.960
Every time the program gets triggered, it tries to create a bunch of records

01:40.960 --> 01:45.240
and tries to write this bunch of records into the ring buffer.

01:45.240 --> 01:48.840
So this is essentially a very simple kernel part.

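NOTE
A minimal sketch of what such a kernel part could look like, in libbpf-style
BPF C; the map size, the counter names, and the getpgid attach point are
assumptions here, not the exact selftest source:
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
struct {
        __uint(type, BPF_MAP_TYPE_RINGBUF);
        __uint(max_entries, 256 * 1024); /* ring buffer size in bytes */
} rb SEC(".maps");
long hits = 0;   /* successful reserve + submit */
long drops = 0;  /* failed reservations */
SEC("tracepoint/syscalls/sys_enter_getpgid")
int bench_ringbuf(void *ctx)
{
        long *rec = bpf_ringbuf_reserve(&rb, sizeof(*rec), 0);
        if (!rec) {
                __sync_fetch_and_add(&drops, 1); /* buffer full: a drop */
                return 0;
        }
        *rec = 42; /* payload content does not matter for the benchmark */
        bpf_ringbuf_submit(rec, 0);
        __sync_fetch_and_add(&hits, 1); /* a hit */
        return 0;
}
char LICENSE[] SEC("license") = "GPL";
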
01:48.840 --> 01:51.720
And there's also a user space part, obviously.

01:51.720 --> 01:57.440
So you see there is some number of producers marked with P. Their duty is essentially

01:57.440 --> 02:00.200
to trigger those syscalls.

02:00.200 --> 02:03.840
So they are essentially producing input into our system.

02:03.840 --> 02:06.080
They are creating the workload here.

02:06.080 --> 02:09.960
And on the other side of things, we've got some number of consumers that are reading

02:09.960 --> 02:11.800
from the Ring Buffer.

02:11.840 --> 02:16.200
And by this, we're essentially getting some sort of a loop of events that are flowing

02:16.200 --> 02:17.400
through our system.

02:17.400 --> 02:18.760
So that's what it looks like.

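NOTE
And a rough sketch of the consumer side, assuming libbpf's ring buffer API;
the handler and counter here are illustrative, not the selftest's exact code:
#include <bpf/libbpf.h>
static long consumed;
static int handle_event(void *ctx, void *data, size_t size)
{
        consumed++; /* payload is irrelevant here, just count the record */
        return 0;
}
void consume(int ringbuf_map_fd)
{
        /* ringbuf_map_fd is the fd of the BPF_MAP_TYPE_RINGBUF map */
        struct ring_buffer *rb =
                ring_buffer__new(ringbuf_map_fd, handle_event, NULL, NULL);
        while (rb && ring_buffer__poll(rb, 100 /* ms */) >= 0)
                ; /* keep draining events */
        ring_buffer__free(rb);
}
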
02:18.760 --> 02:21.760
And obviously, it's a benchmark.

02:21.760 --> 02:23.880
We have to measure something, right?

02:23.880 --> 02:26.680
So there are two things that we measure here.

02:26.680 --> 02:29.480
One is successful event-writing activity.

02:29.480 --> 02:33.400
So every time we were actually able to reserve space and submit something to the

02:33.400 --> 02:35.800
Ring Buffer, it counts as a hit.

02:35.800 --> 02:38.920
And every time, for whatever reason, we were not able to do this, whether

02:38.920 --> 02:41.880
it's a locking issue or the ring buffer is full or something like that,

02:41.880 --> 02:44.600
it counts as an event drop.

02:44.600 --> 02:47.920
And essentially, based on those two metrics, we could get some rough understanding,

02:47.920 --> 02:53.800
okay, how quickly our system is performing within this configuration, right?

02:53.800 --> 02:55.280
That's pretty much it.

02:55.280 --> 02:56.560
Very simple configuration.

02:56.560 --> 03:00.280
And as I said, I really like this benchmark because it's, sort of, like, a

03:00.280 --> 03:04.160
good example of how you could test something.

03:04.160 --> 03:07.080
You could, you know, provide concurrency, for example.

03:07.080 --> 03:10.160
You could create different numbers of producers and consumers.

03:10.160 --> 03:14.000
You could pin them to different CPU cores to see what it is going to be like, the

03:14.000 --> 03:16.200
inter-core interaction.

03:16.200 --> 03:20.760
You could also figure out, for example, how the batch size is going to affect the overall

03:20.760 --> 03:23.080
performance, and all those, you know, pipelining effects.

03:23.080 --> 03:24.480
Oh, it's very interesting.

03:24.480 --> 03:27.000
But there's one interesting thing that got my attention.

03:27.000 --> 03:33.360
The workload generation itself is a tight loop, meaning that it's running in a back-to-back mode:

03:33.360 --> 03:37.480
essentially what happens is that those producers are just, like, firing

03:37.480 --> 03:41.320
the syscall, and when the syscall is finished, as soon as possible they're just firing

03:41.320 --> 03:44.080
another one within the while loop.

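NOTE
For reference, such a closed-loop producer boils down to something like this;
getpgid as the trigger syscall is an assumption:
#include <sys/syscall.h>
#include <unistd.h>
void *producer(void *arg)
{
        for (;;)
                syscall(SYS_getpgid); /* fires the BPF program, back to back */
        return NULL;
}
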
03:44.080 --> 03:46.600
It sounds very straightforward and reasonable, absolutely reasonable.

03:46.600 --> 03:48.480
I'm not saying that it's bad or anything.

03:48.480 --> 03:52.920
It's a way to drive your system essentially to the edge, and that's a very reasonable first

03:52.920 --> 03:53.920
thing to do.

03:53.920 --> 03:57.360
But every time I see such a situation, I have to think about this article

03:57.360 --> 04:02.560
I have mentioned here, "Open Versus Closed."

04:02.560 --> 04:06.120
Not going into deep details, it's essentially a very interesting result from

04:06.120 --> 04:07.120
queueing theory.

04:07.120 --> 04:11.240
As you may know, everything in computer science is a queue, so queueing theory is sort

04:11.240 --> 04:15.760
of important here and this result in particular shows us a very interesting situation.

04:15.760 --> 04:20.880
It compares closed systems, which are somewhat similar to this tight loop, where essentially

04:20.880 --> 04:25.040
what happens is that we've got a system, we send an event into the system and then we wait

04:25.040 --> 04:27.760
until the event gets processed before we send another one.

04:27.760 --> 04:32.160
So there is a fixed number of events in the system. An open system, by contrast, is something

04:32.160 --> 04:37.280
where we just push an event into the system based on some distribution and we don't care

04:37.280 --> 04:42.400
how fast or how quickly they are getting processed, we just push them in.

04:42.400 --> 04:47.760
And the point being that the latter, the open systems, are actually more realistic,

04:47.760 --> 04:52.000
and they are the ones that you actually don't see very frequently in benchmarking, exactly

04:52.000 --> 04:54.320
because of this tight loop situation.

04:54.320 --> 04:59.040
And at the same time, in terms of metrics, those open systems tend to have

04:59.040 --> 05:02.120
higher latency, unfortunately.

05:02.120 --> 05:05.320
Meaning that for a realistic workload, you may actually get quite different

05:05.320 --> 05:06.720
tail latencies.

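NOTE
One way to see why, assuming the standard M/M/1 result from queueing theory:
in an open system with arrival rate \lambda and service rate \mu, the mean
time in system is E[T] = 1 / (\mu - \lambda), which blows up as \lambda
approaches \mu, while a closed loop with N outstanding events can never queue
more than N deep, so it self-throttles and hides exactly that latency.
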
05:06.720 --> 05:12.120
So, having said that, I always think: okay, what will happen if we replace this

05:12.120 --> 05:15.480
workload generation with something different?

05:15.480 --> 05:19.920
And on this premise, I have performed a couple of experiments.

05:19.920 --> 05:23.920
This is just a description of what the environment looks like.

05:23.920 --> 05:28.400
So I was trying to reduce the concurrency as much as possible to make it stable, all the things

05:28.400 --> 05:34.680
were pinned to a single core, hyper-threading was disabled, the scaling governor was pinned, and turbo was disabled,

05:34.680 --> 05:35.840
of course.

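NOTE
For reference, on a typical Intel box with intel_pstate this kind of setup
boils down to a few knobs; the exact paths vary by CPU driver, and the core
number here is arbitrary:
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
echo off | sudo tee /sys/devices/system/cpu/smt/control
taskset -c 2 ./bench   # pin the (hypothetical) benchmark binary to one core
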
05:35.840 --> 05:39.640
And the only thing that was special in this case is that we were using a different

05:39.640 --> 05:41.560
workload generation tool.

05:41.560 --> 05:45.840
So this is, as I mentioned, the berserker tool we have created, like a toy tool; more

05:45.840 --> 05:47.640
about this a little bit later.

05:47.640 --> 05:52.320
But essentially what happens is that the tool is generating a random process, which is

05:52.320 --> 05:57.080
scientifically called a Poisson process, where essentially what happens is that we are firing

05:57.080 --> 05:58.920
the syscalls a little bit randomly.

05:58.920 --> 06:05.520
So not as regularly as a tight loop, but with a little bit of a delay in between.

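NOTE
A minimal sketch of this kind of pacing: inter-arrival times of a Poisson
process are exponentially distributed, so you can draw a uniform number u and
sleep for -ln(u)/rate. The rate and the names are illustrative, not
berserker's actual code:
#include <math.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>
static void paced_worker(double rate_per_sec)
{
        for (;;) {
                syscall(SYS_getpgid); /* fire the trigger syscall */
                double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0); /* in (0,1) */
                double ns = -log(u) / rate_per_sec * 1e9; /* exponential delay */
                struct timespec ts = { .tv_sec = (time_t)(ns / 1e9),
                                       .tv_nsec = (long)ns % 1000000000L };
                nanosleep(&ts, NULL); /* sleep instead of spinning */
        }
}
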
06:05.520 --> 06:09.160
To make it clear, there is a difference in a sense; obviously, we do not

06:09.160 --> 06:11.040
do, like, asynchronous syscalls or anything.

06:11.040 --> 06:13.440
There is still one syscall after another.

06:13.440 --> 06:17.400
But there were three workers producing some amount of concurrency, although they

06:17.400 --> 06:21.880
were still never exceeding the one tight loop in terms of load.

06:21.880 --> 06:25.240
But essentially, three workers together still produce a Poisson distribution

06:25.240 --> 06:26.240
of activity.

06:26.920 --> 06:32.320
Now on this diagram, what we are doing here is we are replacing this part essentially,

06:32.320 --> 06:33.320
right?

06:33.320 --> 06:38.340
So we are just literally doing a different workload type, where before we were doing this

06:38.340 --> 06:42.880
tight loop, now what a worker does is essentially just firing a syscall,

06:42.880 --> 06:47.080
then waiting a little bit, a random amount of time, and then proceeding forward.

06:47.080 --> 06:50.000
A very silly idea; why would you do this, right?

06:50.000 --> 06:53.720
So now let's take a look at the results of this interesting benchmark.

06:53.720 --> 06:57.640
So first of all, we need to establish a baseline.

06:57.640 --> 07:02.280
For this test, the baseline was the number of syscalls this whole setup produced.

07:02.280 --> 07:06.080
Because obviously, the expectation would be that since we are adding delays, literally

07:06.080 --> 07:11.640
delays between the syscalls for the worker, we are just creating less load and eventually doing

07:11.640 --> 07:13.160
fewer syscalls.

07:13.160 --> 07:18.600
And yeah, here we are, pretty much as expected: the x-axis is time, the y-axis is syscalls,

07:18.600 --> 07:23.640
thousands per second. And yeah, clearly, this randomized version I am going to call later

07:23.640 --> 07:27.360
on the berserker one, and the tight loop I am going to call the kernel, like the vanilla,

07:27.360 --> 07:28.840
implementation.

07:28.840 --> 07:33.680
And the berserker is producing fewer syscalls, as I expected; all fine, so far so good.

07:33.680 --> 07:36.840
Now let's take a look at our metrics from the benchmark.

07:36.840 --> 07:39.440
Again, no surprises here so far.

07:39.440 --> 07:43.680
First of all, we are not getting any drops yet, so events are not getting dropped at all

07:44.000 --> 07:50.120
here. And okay, we are producing fewer syscalls, and yeah, obviously we are getting a few

07:50.120 --> 07:55.120
more hits for the tight loop; so far so good, as I said.

07:55.120 --> 07:59.520
But now, since I was mentioning this open versus closed thing, let's try to take a look at the

07:59.520 --> 08:00.920
latencies.

08:00.920 --> 08:06.440
Now, unfortunately, this is a little bit hard to get. The point is that essentially a lot of

08:06.440 --> 08:10.200
the tooling that we have so far does not allow us to collect histograms and all that stuff,

08:10.200 --> 08:15.280
so I had to patch bpftool, actually. And what happens here is that we just profile the

08:15.280 --> 08:20.120
BPF program under test, and on top of this, thanks to this patch, we are also

08:20.120 --> 08:24.680
collecting not only cycles, how many cycles it takes to execute this

08:24.680 --> 08:30.400
BPF program, but also histograms, like the buckets within which those cycle counts fall

08:30.400 --> 08:34.520
into. And let's take a look at the results.

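NOTE
For the curious, the bucketing in such a patch typically boils down to a log2
histogram; this helper is an illustration, not the actual (private) patch:
#define MAX_BUCKETS 64
static unsigned int log2_bucket(unsigned long long cycles)
{
        unsigned int b = 0;
        while (cycles > 1 && b < MAX_BUCKETS - 1) {
                cycles >>= 1; /* each bucket covers a power-of-two range */
                b++;
        }
        return b; /* then hist[b]++ for every profiled run */
}
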
08:34.520 --> 08:40.000
So this is the raw data: the x-axis is the number of cycles spent within the program,

08:40.000 --> 08:44.560
and the y-axis is the frequency, so how often this result appeared.

08:44.560 --> 08:49.640
So you can see from the raw data that suddenly there is a shift to the right, so this

08:49.640 --> 08:54.280
randomized setup, though producing less load on our system, strangely enough, has latency

08:54.280 --> 08:56.320
that is actually a little bit higher.

08:56.320 --> 09:01.760
In other words, our queries are actually a little bit slower. And to make it

09:01.760 --> 09:07.640
even a little bit clearer than before, I have created some sort of a probability

09:07.640 --> 09:13.440
density, with the same data. As you can see, the berserker one,

09:13.440 --> 09:18.120
here, for whatever reason, I don't really know why, you see it's shifted now to the

09:18.120 --> 09:22.600
right side, meaning that on average it takes a little bit longer to execute one

09:22.600 --> 09:27.120
operation, like one run of this BPF program, within this context.

09:27.120 --> 09:31.960
If you're a statistics freak, I have to tell you it's not exactly a probability density;

09:31.960 --> 09:36.640
it's something like a derivative from the histograms just to be completely mathematically

09:36.680 --> 09:45.120
correct. Yes, we care about that. Okay, so to summarize: we do less workload but we are

09:45.120 --> 09:50.160
getting slower execution. That's strange, right? Weird. The next example is going to be even

09:50.160 --> 09:56.280
weirder. So, I was showing the environment before; let's try to change one thing:

09:56.280 --> 10:00.040
let's try to enable turbo boost back. So essentially what will happen is that we're just going

10:00.040 --> 10:06.400
to get a little bit of support from the CPU, purely on the hardware basis for our execution,

10:06.440 --> 10:11.880
and things go completely, completely, like, mad. So first of all, a sanity check: everything

10:11.880 --> 10:18.360
is so far so good; you see that we're now executing many more syscalls because we're getting

10:18.360 --> 10:23.600
support from the hardware, but still the randomized version is producing less. We're

10:23.600 --> 10:30.520
producing fewer syscalls, in this case even significantly fewer than before. But then the

10:30.520 --> 10:38.920
benchmark results look completely different. So here it is; the legend is the same, the x-axis is

10:38.920 --> 10:43.720
time, the y-axis is the number of records, millions per second here. And the first thing

10:43.720 --> 10:49.600
you can see is that the randomized version for whatever reason produces more hits than the

10:49.600 --> 10:54.360
tight loop. It's a very strange situation: we're doing less work and yet are a little

10:54.360 --> 11:00.360
bit faster than before. Even more, you can see that the randomized version does not have any

11:00.360 --> 11:07.800
event drops whatsoever, whereas the tight loop, producing a little bit fewer hits, also produces

11:07.800 --> 11:13.640
the same level of event drops. So you can probably combine it in your head: the tight loop

11:13.640 --> 11:19.240
version is actually producing more activity indeed, but half of this activity is dropped events,

11:19.320 --> 11:26.440
so the effective workload here is essentially halved, unfortunately. Yeah, so that's

11:26.440 --> 11:30.200
how it looks. You might also notice this drop in the middle; it's a very interesting one.

11:30.200 --> 11:34.920
I haven't really investigated it deeply, unfortunately, but the runs were actually not done in

11:34.920 --> 11:40.200
parallel; they were actually done asynchronously, like at different moments in time,

11:40.200 --> 11:44.120
meaning that this drop still happens at the very same moment on the graph, like close to the

11:44.120 --> 11:48.600
middle, so probably it has something to do with the boost effects, but yeah, as I said,

11:48.600 --> 11:54.760
I did not really investigate it deeper. What I did, though, was try to investigate what the

11:54.760 --> 11:58.920
hell is going on here, because it's a very interesting question, right? We just changed the workload a

11:58.920 --> 12:03.960
little bit and then changed one configuration in our system, admittedly not necessarily in a production

12:03.960 --> 12:08.600
way, but still, it's a very reasonable change that you can apply, and you could get a very strange result.

12:08.600 --> 12:15.160
So what happens? We could try to figure it out using the top-down approach. Fortunately,

12:15.160 --> 12:19.480
nowadays, this just works out of the box, it's really magic, literally two years ago I had

12:19.480 --> 12:23.400
to struggle with this, but then there were a couple of fixes in perf, and since then everything works

12:23.400 --> 12:28.040
just like magic. I'm not going to explain what the top-down approach is; you can just search for

12:28.040 --> 12:33.480
it, this very powerful tool. But essentially, what it gives us: it tells us that most likely what we're

12:33.560 --> 12:40.120
dealing with is a core-bound workload. In particular, we've got some amount of stalled memory activity,

12:40.120 --> 12:45.800
and much more of that on the kernel side of things, for the tight loop.

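NOTE
On a reasonably recent perf, getting these top-down level-1 numbers is, for
example, something like the following; the sleep duration is arbitrary and
hardware support varies by CPU:
perf stat --topdown -a -- sleep 10
perf stat -M TopdownL1 -a -- sleep 10
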
12:45.800 --> 12:51.400
Incidentally enough, those numbers are also the geographical coordinates of some point in a Polish forest, I don't know,

12:51.400 --> 12:59.880
somewhere in the middle. So yeah, to summarize again: we're doing less workload, but in a different pattern,

12:59.880 --> 13:05.320
and we're getting completely wild results. So now a little bit about the tool I was talking about

13:05.320 --> 13:11.640
and was using for actually driving this benchmark. It's berserker, a tool that we're using

13:11.640 --> 13:17.720
for our security project, Tetragon. Originally, it was pretty much a toy

13:17.720 --> 13:22.280
project; now we're using it a little bit more actively, but overall it's at a very early stage,

13:22.280 --> 13:27.400
so to manage your expectations: it's not something mind-blowing. What it does is essentially

13:27.400 --> 13:33.080
what I was talking about: it creates very strange, extreme workloads that might potentially break

13:33.080 --> 13:37.960
your program. In fact, not only in terms of performance: we have caught a couple of bugs thanks to

13:37.960 --> 13:43.400
this tool in our project. And yeah, so it creates essentially a number of workloads, like

13:43.400 --> 13:50.280
process-based workloads, endpoint-based workloads, some irrelevant syscalls, for example, irrelevant

13:50.280 --> 13:56.360
for your application, with different distributions: Poisson, uniform random,

13:57.480 --> 14:01.480
and we're also doing a very interesting thing, user-space networking, it's a clever idea of one of

14:01.480 --> 14:06.200
my colleagues, where essentially the point is that thanks to this user-space thing, we could also

14:06.200 --> 14:10.280
first of all, we could save lots of resources, and at the same time we could also simulate

14:10.280 --> 14:14.440
sort of real connectivity, where connections are coming from all over the world and not from

14:14.440 --> 14:20.520
the cluster where we're testing it. And yeah, just recently a new workload landed: BPF program

14:20.520 --> 14:24.920
contention. I don't have any results from this, but it sounds like a really cool idea: what will

14:24.920 --> 14:28.440
happen if you have a program that's attached somewhere, and then there are also, like,

14:28.440 --> 14:33.000
hundreds of other BPF programs also attached to the very same tracepoint? And whether that's going to

14:33.000 --> 14:38.040
affect your performance or not. And some other things are right now work in progress.

14:39.480 --> 14:46.200
So yeah, that's pretty much it, but I can already see you asking about it. So why would you

14:46.200 --> 14:51.320
come up with some custom tool for these types of things, right? Why not just use stress-ng?

14:51.480 --> 14:57.080
That's a fair question. And in my defense, I have to answer that probably the most straightforward

14:57.080 --> 15:01.720
answer is just purely my ignorance. I did not know back in the day how awesome stress-ng is;

15:01.720 --> 15:05.880
it's really great. You can do a lot of stuff. So for example, the first benchmark I was showing,

15:05.880 --> 15:10.200
you can really do this with stress-ng. But at the same time, you have to keep in mind that

15:10.200 --> 15:16.760
stress-ng was and is designed to exercise kernel interfaces, to stress test kernel interfaces,

15:16.760 --> 15:21.160
whereas what we would like to do is stress test our BPF programs. And sometimes

15:21.160 --> 15:26.280
those goals are a little bit different, unfortunately. And in this particular case,

15:26.280 --> 15:30.200
it was actually a good idea that we started with something custom, because otherwise we

15:30.200 --> 15:35.160
would probably need to go with iperf or netperf or something like that to get, like, networking

15:35.160 --> 15:39.800
performance. And even those two do point-to-point performance; I haven't found a

15:39.800 --> 15:46.680
single one that pretends traffic comes from all over the world. So yeah, this is

15:46.680 --> 15:52.120
essentially the explanation: it's just a good idea to have something custom. And I think that's

15:52.120 --> 15:54.840
pretty much it. So I hope you've got some questions for me.

16:04.520 --> 16:05.880
Yes, please?

16:05.880 --> 16:12.680
That draining stall that you see: do BPF programs have, like, garbage collection?

16:13.480 --> 16:17.800
Well, not that I'm aware of. I mean, it's a very actively developed area. Maybe it was

16:17.800 --> 16:23.000
developed while I was speaking, but not that I'm aware of. Why? Why are you asking?

16:24.200 --> 16:31.640
We see that thing in the Java world all the time. Yeah. Yeah, okay. Yeah, obviously, but no, I don't

16:31.640 --> 16:36.280
really think that it has something to do with that. You're talking about this one, right? This one in the

16:36.280 --> 16:40.440
middle, right? It most likely has something to do with, like, the time frame during which

16:40.440 --> 16:44.600
you can get your boost from the hardware, because it looks like exactly that length of

16:44.600 --> 16:49.000
interval. So that's what I think is going on here. There was some... yeah, please?

16:55.400 --> 17:00.360
Yeah, it's not a public patch. It's something that I hacked up on my laptop. But I'm thinking,

17:00.360 --> 17:07.080
actually, there were some ideas about sort of extending bpftool with various cool metrics.

17:07.160 --> 17:11.160
I was actually trying to do the same thing with bpftrace. And unfortunately, the overhead was so high

17:11.160 --> 17:17.240
that you could see it: we were doing 10,000 hits, and with bpftrace it was 2,000. So it was

17:17.240 --> 17:22.920
just incomparable. And my only concern is that you could actually do a lot of this stuff

17:22.920 --> 17:26.920
in perf, for example, and I'm not sure how the work should be split between those tools. And I don't

17:26.920 --> 17:30.200
want to start any holy war between those. So that's why I'm trying to be careful about it.

17:31.880 --> 17:33.080
Okay, great. Yeah, please.

17:33.080 --> 17:38.600
So in this graph, you show records, units per second. Yeah, records that you also drop.

17:38.600 --> 17:43.400
In the previous one, you were showing the number of system calls, right? Yep. Did you compare the number of

17:43.400 --> 17:47.400
records between those two, like before you enabled turbo and after?

17:48.680 --> 17:51.000
Do you mean between those two configurations? Yeah.

17:51.000 --> 17:57.640
Yeah. But before you enabled turbo and after? Well, you could get those numbers from this

17:57.720 --> 18:05.720
graph, right? So, okay. Yeah. So this is it. Yeah, there is always a difference. You see?

18:06.520 --> 18:11.160
Exactly. Yeah. That's why you get drops. Well, essentially, yeah, essentially what happens is that,

18:11.160 --> 18:16.600
yeah, I think I did not actually explain this, but yeah, the tight loop is overwhelming,

18:16.600 --> 18:19.960
the ring buffer infrastructure, of course. Yeah, then we're falling back to the error

18:19.960 --> 18:24.200
path, and this whole thing starts to be probably less efficient in terms of memory, of course.

18:24.200 --> 18:28.120
And the first factor, I assume: like, when you wait, right, correct me if I'm wrong,

18:28.120 --> 18:33.000
you probably will do some sort of sleep. Yeah, it's a thread sleep. So, like, you don't

18:33.000 --> 18:36.600
burn CPU while sleeping, if I'm not wrong. Yeah. Yeah. Yeah. So you get, like, sort of, this

18:37.400 --> 18:41.720
batching and relaxation, just because of the context switching.

18:41.720 --> 18:47.640
Oh, I mean, yes, it may be the case, it may be the case. To be honest, no, I haven't thought

18:47.640 --> 18:52.440
about it from this perspective; I need to take a look, maybe, at this context as well. I see what you mean,

18:52.440 --> 18:57.240
yeah. Generally speaking, the benchmark we have is a selftest, right?

18:57.240 --> 19:01.080
It was meant to, like, just pass, so it kind of looks artificial. Mm-hmm.

19:01.080 --> 19:06.280
Yeah, yeah, of course, of course. Yeah. I mean, and it's a reasonable way to do this,

19:06.280 --> 19:12.200
of course. And all the rest of the things are just, like, silly, silly things to, you know,

19:12.200 --> 19:16.040
try to play around with this whole thing. Those, by the way, if you're asking yourself about

19:16.040 --> 19:19.880
the five similar things, those are just a reference to what the tool is doing, five different

19:19.880 --> 19:23.800
workloads, nothing more. So, there were more questions I've seen... yeah, please?

19:23.800 --> 19:27.880
Hey, so, this is, this is the smallest piece, and I think,

19:27.880 --> 19:32.200
I just want to understand this piece: why did you define the workload this way,

19:32.200 --> 19:36.040
kind of, does it also replicate what people are actually doing?

19:36.040 --> 19:39.160
Well, I mean, it depends on what you mean by realistic, because again,

19:39.160 --> 19:42.600
the distribution, this Poisson distribution, as I was saying, is claimed to be a little bit more

19:42.600 --> 19:46.920
realistic than anything else, because think about this. Imagine you've got a BPF program

19:46.920 --> 19:50.440
attached, for example, to a web server. And this web server is, in turn,

19:50.440 --> 19:54.680
driven by some customers triggering an API. And those customer arrivals are,

19:54.680 --> 19:58.200
well, usually modeled by a Poisson distribution, so at the end of the day, just by pure

19:58.200 --> 20:02.920
transitivity of those activities, it's quite easy to imagine that your BPF program is going to

20:02.920 --> 20:04.520
be called in a similar fashion.

