WEBVTT

00:00.000 --> 00:13.000
Thank you for sticking around for the last talk of the day.

00:13.000 --> 00:16.000
I'm going to start by apologising slightly.

00:16.000 --> 00:19.000
I'm Richard Mortier, or Mort, from the University of Cambridge.

00:19.000 --> 00:23.000
I'm talking on behalf of Amjad, whose work this actually is, as we did this work during his PhD,

00:23.000 --> 00:27.000
but his visa didn't come through in time, so he wasn't able to attend.

00:27.000 --> 00:34.000
I should also note that it was funded in part by the Edmunds Project, which is an EU project.

00:34.000 --> 00:39.000
It was complicated because of Brexit, but it was an EU project.

00:39.000 --> 00:46.000
So the motivation for this was that we were originally looking to try to work on getting unikernels

00:46.000 --> 00:49.000
really densely packed onto servers; for those who don't know anything about unikernels,

00:49.000 --> 00:53.000
they're essentially small tasks, densely packed onto servers.

00:54.000 --> 00:59.000
We were looking at how Kubernetes performed for that, and we were trying to analyse what performance we were achieving,

00:59.000 --> 01:03.000
in particular how the Kubernetes Autoscaler was behaving.

01:03.000 --> 01:07.000
And again and again and again, the results just made no sense at all.

01:07.000 --> 01:11.000
We couldn't make any sense of the performance measurements that we were getting.

01:11.000 --> 01:14.000
After some considerable time, I think I've spent about a year looking into this,

01:14.000 --> 01:16.000
trying to work out what was going on.

01:16.000 --> 01:21.000
We came to the conclusion that it was to do with the way the kernel scheduler was behaving on the worker nodes.

01:21.000 --> 01:26.000
So essentially, context-switch overhead was going up much more than it really should.

01:26.000 --> 01:32.000
And that has some pretty poor knock-on effects on how the overall cluster operates when Kubernetes is scheduling work across it.

01:32.000 --> 01:37.000
The original focus for this was a serverless-style workload, hence the relationship to the Edmunds Project,

01:37.000 --> 01:40.000
but we suspect there is actually a more general issue here.

01:40.000 --> 01:47.000
It's not just serverless, although serverless is a particularly bad workload in the sense that it will see this problem quite badly.

01:48.000 --> 01:54.000
So, extremely briefly, and I'm going to refer questions to Amjad if you ask anything too hard.

01:54.000 --> 01:58.000
CFS is the scheduler in Linux; it's currently the main scheduler.

01:58.000 --> 02:03.000
It is trying to be fair, and it's also trying to be reasonably efficient.

02:03.000 --> 02:06.000
So it's giving each task that's running a minimum time slice,

02:06.000 --> 02:12.000
and then it's trying to scale the scheduling period so that each task is able to get approximately the same time slice within that.

02:12.000 --> 02:15.000
So it's trying to be fair and it's trying to be efficient.
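As a rough, illustrative sketch of that trade-off (the tunables below are typical defaults, not values read from any particular system, and this is a simplification of what the kernel actually does):

```python
# Simplified model of how CFS sizes its scheduling period and per-task slice.
# Values are illustrative defaults, not read from a live system.
SCHED_LATENCY_NS = 6_000_000      # target period when only a few tasks are runnable
MIN_GRANULARITY_NS = 750_000      # minimum slice any runnable task should get

def sched_period_ns(nr_running: int) -> int:
    """Stretch the period once tasks would otherwise get less than the minimum slice."""
    if nr_running > SCHED_LATENCY_NS // MIN_GRANULARITY_NS:
        return nr_running * MIN_GRANULARITY_NS
    return SCHED_LATENCY_NS

def time_slice_ns(nr_running: int) -> int:
    """Equal-weight case: every runnable task gets the same share of the period."""
    return sched_period_ns(nr_running) // max(nr_running, 1)

for n in (2, 8, 64, 512):
    print(f"{n:>3} tasks -> {time_slice_ns(n) / 1000:.0f} us slice, "
          f"{sched_period_ns(n) / 1_000_000:.1f} ms period")
```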

02:16.000 --> 02:23.000
Layered kind of on top of that are cgroups, so the idea that you can group tasks together and schedule them as a group.

02:23.000 --> 02:31.000
And that means that you end up with these quite complex data structures that are trying to track individual tasks and groups of tasks

02:31.000 --> 02:36.000
and make sure that they're scheduled effectively and efficiently onto different cores in your systems.

02:36.000 --> 02:40.000
This is a multi-core scheduling problem with groups of tasks being scheduled.

02:40.000 --> 02:43.000
So it's fairly involved.

02:44.000 --> 02:52.000
It turns out that there's a reasonable amount of optimization that's gone on to try to make sure that selecting the next task to run is efficient.

02:52.000 --> 02:57.000
So this is essentially captured in a red-black tree, so if you look at the images on the right there,

02:57.000 --> 03:00.000
think of it as a tree that's fallen over on its side.

03:00.000 --> 03:05.000
So you start at the root on the right-hand side, and then you've got these kind of subgroups of tasks within that.

03:05.000 --> 03:09.000
So in the case of Kubernetes, you end up with a kubepods slice,

03:10.000 --> 03:15.000
then you have kubepods burstable and kubepods best-effort within that, or beneath that.

03:15.000 --> 03:19.000
And then within kubepods burstable, you've got all the pods that are running, and each of those pods is a cgroup,

03:19.000 --> 03:25.000
which may contain multiple executing tasks for the different containers that are within that pod.

03:25.000 --> 03:28.000
So you've got this kind of nested structure to it.
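For reference, here is a small sketch that prints that hierarchy on a worker node; it assumes cgroup v2 mounted at /sys/fs/cgroup and the kubelet using the systemd cgroup driver (so the groups show up as kubepods.slice, kubepods-burstable.slice and so on), and the paths will differ on other setups:

```python
import os

ROOT = "/sys/fs/cgroup/kubepods.slice"   # assumes cgroup v2 + systemd cgroup driver

def walk(path: str, depth: int = 0) -> None:
    """Print each cgroup under kubepods with the number of processes it holds."""
    try:
        with open(os.path.join(path, "cgroup.procs")) as f:
            nprocs = len(f.read().split())
    except OSError:
        nprocs = 0
    print("  " * depth + f"{os.path.basename(path)} ({nprocs} procs)")
    for entry in sorted(os.scandir(path), key=lambda e: e.name):
        if entry.is_dir(follow_symlinks=False):
            walk(entry.path, depth + 1)

if os.path.isdir(ROOT):
    walk(ROOT)
else:
    print("no kubepods.slice here; run on a Kubernetes worker node")
```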

03:28.000 --> 03:33.000
The data structure that's used to represent this is a red-black tree, so that should be pretty efficient, right?

03:33.000 --> 03:35.000
So good asymptotic performance.

03:35.000 --> 03:41.000
And the leftmost, bottommost task in the red-black tree, which is the next one that's going to be scheduled, is cached.

03:41.000 --> 03:45.000
So the point is that that's cached, so you can find that one very quickly.

03:45.000 --> 03:50.000
Unfortunately, because it's a red-black tree, when you deschedule something, when you do the context switch,

03:50.000 --> 03:54.000
and you take something off the CPU, you have to put that back into the red-black tree,

03:54.000 --> 03:58.000
and that means traversing it and rebalancing the tree on the way down.

03:58.000 --> 04:02.000
So that can take some time, and that time can increase as the load on the system increases,

04:02.000 --> 04:04.000
and the number of tasks increases.
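A toy model of that asymmetry, nothing to do with the real kernel code: picking the next task just reads a cached leftmost entry, but re-inserting the descheduled task is an ordered insert whose cost grows with the number of queued tasks (a sorted Python list standing in, crudely, for the kernel's red-black tree):

```python
import bisect, random, time

class ToyRunqueue:
    """Tasks ordered by vruntime, with the 'leftmost' (next-to-run) entry cached."""
    def __init__(self):
        self._q = []            # sorted list of (vruntime, task_id)
        self.leftmost = None    # cached next task to run

    def enqueue(self, vruntime, task_id):
        # Stands in for the rbtree insert/rebalance done when a task is put back.
        bisect.insort(self._q, (vruntime, task_id))
        self.leftmost = self._q[0]

    def pick_next(self):
        # The leftmost entry is already known; no search needed to find it.
        vruntime, task_id = self._q.pop(0)
        self.leftmost = self._q[0] if self._q else None
        return vruntime, task_id

def avg_switch_cost_us(nr_tasks, switches=2000):
    rq = ToyRunqueue()
    for t in range(nr_tasks):
        rq.enqueue(random.random(), t)
    start = time.perf_counter()
    for _ in range(switches):
        vr, t = rq.pick_next()
        rq.enqueue(vr + random.random(), t)   # descheduled task goes back in
    return (time.perf_counter() - start) / switches * 1e6

for n in (10, 100, 1000, 10000):
    print(f"{n:>6} queued tasks: {avg_switch_cost_us(n):.2f} us per model context switch")
```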

04:04.000 --> 04:10.000
So you end up with higher per context switch costs, and then you have a lot of tasks hanging around,

04:10.000 --> 04:12.000
because they're not getting completed.

04:12.000 --> 04:16.000
You also have a higher rate of context switching, and so you get this kind of multiplicative effect,

04:16.000 --> 04:22.000
where you end up increasing the overhead in the system quite substantially.

04:22.000 --> 04:28.000
We did some benchmarking to try to show this, using a resource-control framework

04:28.000 --> 04:30.000
with some modifications, so that

04:30.000 --> 04:34.000
we can look at the impact on the system

04:34.000 --> 04:36.000
as the number of concurrent cgroups increases.

04:36.000 --> 04:41.000
We looked at doing this for a very simple kind of structure, so we could try to understand the results.

04:41.000 --> 04:46.000
So that's function traces from the Azure serverless functions workload.

04:46.000 --> 04:50.000
Grouped by essentially how much computation, how heavy the function is.

04:50.000 --> 04:54.000
So put into 10 groups, you'll notice the log scale on the y-axis,

04:54.000 --> 04:58.000
and essentially a small number of tasks, which are extremely computationally intensive,

04:58.000 --> 05:02.000
and then it sort of tails off, so most of the tasks don't do very much computation.

05:02.000 --> 05:06.000
So you have a large number of light tasks and a few heavy tasks in that workload.

05:06.000 --> 05:10.000
We also looked at comparing that with a kind of closed-loop workload,

05:10.000 --> 05:14.000
where new work is only introduced as old work finishes, so it's a responsive workload.

05:14.000 --> 05:18.000
The other is just doing trace replay from this Azure trace,

05:18.000 --> 05:21.000
which is kind of open loop; it's not responsive in that way.

05:21.000 --> 05:23.000
And then we measured the overhead using ftrace,

05:23.000 --> 05:29.000
instrumenting the total time spent in the core scheduler logic.
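Roughly the kind of measurement you can make yourself, though this is a hedged sketch rather than the instrumentation used for the paper: it assumes root, the ftrace function_graph tracer, and tracefs at /sys/kernel/debug/tracing (it may be /sys/kernel/tracing on other systems), and it only traces the schedule() entry point rather than every scheduling path:

```python
import re, time

TRACING = "/sys/kernel/debug/tracing"    # may be /sys/kernel/tracing instead

def write(name, value):
    with open(f"{TRACING}/{name}", "w") as f:
        f.write(value)

# Trace only schedule() with the function_graph tracer, which reports a duration
# per invocation; limit the depth so nested calls are not double counted.
write("current_tracer", "nop")
write("set_graph_function", "schedule")
write("max_graph_depth", "1")
write("current_tracer", "function_graph")
write("tracing_on", "1")
time.sleep(5)                             # sample window
write("tracing_on", "0")

durations_us = []
pattern = re.compile(r"([\d.]+) us")      # durations print as e.g. '  12.345 us'
with open(f"{TRACING}/trace") as f:
    for line in f:
        m = pattern.search(line)
        if m:
            durations_us.append(float(m.group(1)))

if durations_us:
    print(f"{len(durations_us)} calls, total {sum(durations_us)/1e6:.3f} s, "
          f"mean {sum(durations_us)/len(durations_us):.2f} us")
```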

05:29.000 --> 05:34.000
What we find, if you look at, for example, throughput, which is the top plot, plot A,

05:34.000 --> 05:37.000
and at the same time you consider the time

05:37.000 --> 05:42.000
spent in context switching, and the time of an individual context switch.

05:42.000 --> 05:46.000
So this is the scheduler overhead, and the per-context-switch overhead.

05:46.000 --> 05:49.000
What you see is that in the green section here,

05:49.000 --> 05:54.000
you're actually getting peak performance, the throughput's higher, for the azure trace.

05:54.000 --> 05:59.000
Okay, but as you increase past that point, so you increase the density,

05:59.000 --> 06:03.000
you're trying to pack more things onto the system, the performance goes down,

06:03.000 --> 06:06.000
and it's going down as the average time per scheduling operation,

06:06.000 --> 06:09.000
and in fact the average time per context switch is increasing.

06:09.000 --> 06:12.000
So you can see on the bottom plot there that the context switch time,

06:12.000 --> 06:17.000
so the time per context switch, is going up from perhaps a few,

06:17.000 --> 06:22.000
a small handful of microseconds, is starting to get up towards, you know,

06:22.000 --> 06:28.000
10, 15, 20 microseconds, so it's going up quite a lot.

06:28.000 --> 06:34.000
So this increase in the average cost of context switching is a significant factor.

06:34.000 --> 06:39.000
So the scheduler adjustment that we made was to say:

06:39.000 --> 06:42.000
the problem here is that we have these long run queues.

06:42.000 --> 06:45.000
We've got a lot of tasks hanging around that are actually mostly idle;

06:45.000 --> 06:48.000
as I said, this is particularly bad in serverless workloads,

06:48.000 --> 06:50.000
because one of the things that you do in serverless workloads

06:50.000 --> 06:54.000
to try to avoid cold start problems where you've got to spin up the task

06:54.000 --> 06:57.000
in order to respond to the incoming request is you keep idle tasks around

06:57.000 --> 06:59.000
a bit longer than you would do normally.

06:59.000 --> 07:02.000
So you have this kind of deliberate idea that you want to keep things hanging

07:02.000 --> 07:05.000
about just in case you need them in the near future in the next minute or two.

07:05.000 --> 07:08.000
So you've got even longer run queues in that workload, which is why that workload

07:08.000 --> 07:10.000
is particularly bad for this problem.

07:10.000 --> 07:14.000
But even without that, you have this problem that you've got tasks hanging around

07:14.000 --> 07:17.000
idle; you're not getting the easy tasks out of the way, so the run queues

07:17.000 --> 07:20.000
stay long and the context-switch overheads stay high.

07:20.000 --> 07:24.000
So what was implemented instead was the usual sort of approximation to a

07:24.000 --> 07:26.000
shortest-remaining-time-first scheduler.

07:26.000 --> 07:30.000
To try to get the easy tasks, get the light tasks out of the way,

07:30.000 --> 07:33.000
keep the run queues short, don't waste your time doing lots of

07:33.000 --> 07:37.000
system bookkeeping overheads, get on and do actual useful work.

07:37.000 --> 07:41.000
What that does is it improves the latency and it reduces all the overheads

07:41.000 --> 07:44.000
because you're getting cgroups out of the way, so you're getting work

07:44.000 --> 07:47.000
completed and off the run cue.
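A toy illustration of why that ordering helps, with made-up numbers that only mimic the skew of the trace (many light functions, a few heavy ones), not the actual Azure data:

```python
# Completing light tasks first slashes mean completion time and keeps the queue
# short. The costs below are illustrative, not taken from the Azure trace.
tasks_ms = [1] * 90 + [200] * 10

def mean_completion_ms(order):
    now, total = 0, 0
    for cost in order:
        now += cost
        total += now
    return total / len(order)

print("heaviest first :", mean_completion_ms(sorted(tasks_ms, reverse=True)), "ms")
print("lightest first :", mean_completion_ms(sorted(tasks_ms)), "ms")
```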

07:47.000 --> 07:50.000
The approximation used is the cgroup load credit metric,

07:50.000 --> 07:53.000
which is tracking the recent usage of all the threads in a given

07:53.000 --> 07:55.000
cgroup.

07:55.000 --> 08:00.000
So this is the sort of plot that we have.

08:00.000 --> 08:04.000
So to try to read this, it's a cumulative distribution function.

08:04.000 --> 08:08.000
So you're looking at the latency of completing jobs of completing

08:09.000 --> 08:12.000
requests and the cumulative proportion of those.

08:12.000 --> 08:15.000
And what you can see is that with the blue line, the vanilla CFS line,

08:15.000 --> 08:19.000
for both the heavy functions, the heaviest ten percent or so of functions that are doing the

08:19.000 --> 08:24.000
most work, and also the 90% of lighter functions at the end.

08:24.000 --> 08:29.000
With the blue line, that's pushed quite a long way to the right.

08:29.000 --> 08:33.000
So there's quite a considerable proportion of those functions, which are taking quite a long

08:33.000 --> 08:35.000
time to complete.

08:35.000 --> 08:39.000
If you do the Oracle version, so this is a static, best possible case that you

08:39.000 --> 08:43.000
could achieve if you knew the future, the true shortest remaining time, in this case;

08:43.000 --> 08:45.000
that's the purple pink line.

08:45.000 --> 08:49.000
And you can see that that line is shifted way to the left.

08:49.000 --> 08:53.000
And what's happening is that, by allowing the tail of the lightest functions to complete,

08:53.000 --> 08:55.000
that right-hand plot, for the light functions,

08:55.000 --> 08:57.000
moves a long way to the left.

08:57.000 --> 08:59.000
That's getting things out of the way.

08:59.000 --> 09:02.000
That's also enabling there to be more time available for the

09:02.000 --> 09:05.000
computation intensive functions to also execute and do useful work,

09:05.000 --> 09:08.000
because you're not wasting your time context switching all the time.

09:08.000 --> 09:12.000
So it's actually doing better for all of the functions in the system,

09:12.000 --> 09:15.000
not just the ones that you're deliberately targeting,

09:15.000 --> 09:19.000
because you're reducing the overall overheads.

09:19.000 --> 09:23.000
The realisation of this was done as a sub scheduling architecture,

09:23.000 --> 09:26.000
so you can apply custom policies to specific cgroups.

09:26.000 --> 09:29.000
So in this case, given it's a sort of Kubernetes setup,

09:29.000 --> 09:32.000
applying a custom policy to the Kubernetes burstable cgroup,

09:32.000 --> 09:35.000
so things beneath that have this policy applied.

09:35.000 --> 09:38.000
So we're not affecting the rest of the system.

09:38.000 --> 09:42.000
Other tasks keep running in the way that they would have done previously.

09:42.000 --> 09:45.000
They're not being affected by this.

09:45.000 --> 09:49.000
And so load credit becomes the scheduling priority that's used for the serverless

09:49.000 --> 09:52.000
functions in the serverless function cgroups.
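A sketch of the scoping idea only, not the actual patch: the custom ordering is applied when choosing among entities under the Kubernetes burstable group, and everything else keeps the default lowest-vruntime choice; the field names here are illustrative stand-ins for kernel state:

```python
BURSTABLE = "/kubepods.slice/kubepods-burstable.slice"

def pick_child(cgroup_path, children):
    """Within the burstable group pick the child with the smallest recent load
    (the load credit); everywhere else keep the default lowest-vruntime pick."""
    if cgroup_path.startswith(BURSTABLE):
        return min(children, key=lambda c: c["load_credit"])
    return min(children, key=lambda c: c["vruntime"])

pods = [
    {"name": "fn-light", "vruntime": 50, "load_credit": 3},
    {"name": "fn-heavy", "vruntime": 20, "load_credit": 900},
]
# Under the burstable group the light function goes first despite its larger
# vruntime; elsewhere the ordinary fair choice would pick fn-heavy.
print(pick_child(BURSTABLE, pods)["name"])        # fn-light
print(pick_child("/system.slice", pods)["name"])  # fn-heavy
```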

09:52.000 --> 09:56.000
There's a couple of details there about how that's actually realised.

09:56.000 --> 09:58.000
The patches are public, if you want to have a look at them,

09:58.000 --> 10:04.000
and Amjad will be happy to answer any questions if you contact him directly or through me.

10:04.000 --> 10:09.000
The result of this is that with this in place,

10:09.000 --> 10:12.000
you can get greater function collocation,

10:12.000 --> 10:15.000
so you can pack functions in more tightly onto nodes,

10:15.000 --> 10:18.000
and you don't see the performance degradation.

10:18.000 --> 10:21.000
So you're mitigating the overheads, what you're seeing on the right

10:21.000 --> 10:24.000
with the modified scheduler, you see the green line remains below the blue line

10:24.000 --> 10:29.000
as the density increases.

10:29.000 --> 10:32.000
Looking at the performance of this in more detail,

10:32.000 --> 10:36.000
so looking at the baseline there, if you take that as

10:36.000 --> 10:38.000
the sort of current practice, if you like,

10:38.000 --> 10:40.000
which is essentially to over-provision to avoid this problem,

10:40.000 --> 10:43.000
if you don't hit these kinds of heavily loaded worker nodes,

10:43.000 --> 10:45.000
then you don't see the problem.

10:45.000 --> 10:47.000
So if you do that, for this particular workload,

10:47.000 --> 10:49.000
you needed 14 nodes,

10:49.000 --> 10:52.000
based on the idea that you were going to pack them in

10:52.000 --> 10:55.000
according to what their peak requested load was,

10:55.000 --> 10:59.000
so the Kubernetes requests field that you're putting in.

10:59.000 --> 11:01.000
So you're making sure you've definitely got enough capacity

11:01.000 --> 11:02.000
to support this workload.

11:02.000 --> 11:05.000
So this is pretty wasteful, but it means that things do

11:05.000 --> 11:08.000
at least operate efficiently, in the sense of on an individual node.

11:08.000 --> 11:11.000
So you can see the blue line there is way over to the left.

11:11.000 --> 11:13.000
So the request latency is quite low.

11:13.000 --> 11:15.000
Everything is humming along quite nicely,

11:15.000 --> 11:18.000
but with quite a lot of excess capacity available.

11:18.000 --> 11:20.000
So you see that low CPU utilization,

11:20.000 --> 11:25.000
only 30% utilization on the worker nodes in that case.

11:25.000 --> 11:28.000
If you do CFS, then you can see that's the right-hand line,

11:28.000 --> 11:31.000
the sort of orange line, I guess it is.

11:31.000 --> 11:33.000
So that can do better.

11:33.000 --> 11:36.000
It's multiplexing the workload onto a smaller number of nodes.

11:36.000 --> 11:39.000
It's multiplexing workload onto 12 nodes.

11:39.000 --> 11:42.000
But you can't pack things in too tightly,

11:42.000 --> 11:45.000
because as soon as you start going above about 45%

11:45.000 --> 11:48.000
node utilization, you begin to see this degradation take effect.

11:48.000 --> 11:51.000
Context-switch overheads become significant.

11:51.000 --> 11:55.000
With our modified CFS, we can get the same performance as with CFS,

11:55.000 --> 11:59.000
and in fact as with the base case, but with only 10 nodes,

11:59.000 --> 12:02.000
and we're able to push the CPU utilization on the work nodes up to 55%.

12:02.000 --> 12:05.000
So you can pack things in more, or you get more done,

12:05.000 --> 12:07.000
or you get the same amount done, I guess,

12:07.000 --> 12:10.000
but you only need 10 nodes instead of 14 nodes to achieve that.

12:10.000 --> 12:13.000
So it's a reasonably significant improvement

12:13.000 --> 12:16.000
in the cluster efficiency, I think.

12:17.000 --> 12:20.000
We looked at some different alternatives to doing this.

12:20.000 --> 12:22.000
So there's some current, or recent, work,

12:22.000 --> 12:25.000
EEVDF, for example, which does improve things a bit,

12:25.000 --> 12:27.000
but the same underlying mechanism is still there,

12:27.000 --> 12:30.000
and the same underlying problem still occurs.

12:30.000 --> 12:32.000
I guess it's the headline for that.

12:32.000 --> 12:34.000
It's also quite hard to tune that, effectively,

12:34.000 --> 12:37.000
to find out what the parameters are that actually make that work,

12:37.000 --> 12:38.000
as well as you can.

12:41.000 --> 12:44.000
And so, I guess the conclusions here,

12:45.000 --> 12:50.000
are that C groups are really important for managing these workloads in production.

12:50.000 --> 12:52.000
Right, you do need them.

12:52.000 --> 12:57.000
But the problem you've got is that, as the workload increases on the node,

12:57.000 --> 13:00.000
reinsertion of the switched out task,

13:00.000 --> 13:05.000
back into the red-black tree, begins to take considerable time,

13:05.000 --> 13:06.000
appreciable time.

13:06.000 --> 13:10.000
And because of that, you end up with more tasks still live,

13:10.000 --> 13:12.000
because you're wasting time switching things around,

13:12.000 --> 13:13.000
instead of getting work done.

13:13.000 --> 13:16.000
And so you end up with more context switches as well.

13:16.000 --> 13:20.000
And these two things combine multiplicatively to give you quite high overheads on the node,

13:20.000 --> 13:21.000
on the worker node.

13:23.000 --> 13:27.000
The effect of that is to increase latency variation quite dramatically.

13:27.000 --> 13:30.000
It decreases the effective capacity of the node.

13:30.000 --> 13:33.000
And the thing that that then causes is Kubernetes,

13:33.000 --> 13:36.000
when it's trying to bin-pack and schedule work on different worker nodes,

13:36.000 --> 13:39.000
it's got the wrong idea completely about how big the node is,

13:39.000 --> 13:42.000
what the effective capacity of a given node is.

13:42.000 --> 13:45.000
So it then puts more work onto a node, thinking that it's got capacity,

13:45.000 --> 13:48.000
when it doesn't have capacity, and so it all spirals down.

13:51.000 --> 13:54.000
At the moment, it appears that you avoid that by over-provisioning,

13:54.000 --> 13:56.000
in the traditional way.

13:58.000 --> 14:01.000
That's not necessarily the most efficient thing you could do.

14:01.000 --> 14:04.000
By tweaking the scheduler a bit, I mean,

14:04.000 --> 14:07.000
I don't think it's a big patch in a normal way,

14:07.000 --> 14:10.000
but it's quite a fiddly patch to get right.

14:10.000 --> 14:13.000
You can end up essentially getting 10 to 20% extra capacity

14:13.000 --> 14:17.000
out of your cluster, for these sorts of workloads at least.

14:17.000 --> 14:20.000
Because you're allowing the short task to complete,

14:20.000 --> 14:23.000
keeping the run queue short means you don't have all these context-switch

14:23.000 --> 14:24.000
overheads.

14:24.000 --> 14:26.000
So you're not wasting your time in bookkeeping.

14:26.000 --> 14:28.000
You're getting actual work done instead.

14:28.000 --> 14:32.000
And with that, with a few minutes to spare, I'll finish.

14:37.000 --> 14:42.000
Do we have questions?

14:42.000 --> 14:43.000
Yes, Alex.

14:48.000 --> 14:52.000
So I'm guessing this work has been done over several years,

14:52.000 --> 14:55.000
but I'm sure you're aware that sched_ext was merged.

14:55.000 --> 14:56.000
Yeah.

14:56.000 --> 14:57.000
Yeah.

14:57.000 --> 14:59.000
So I'm sure you're aware that sched_ext was merged last year,

14:59.000 --> 15:01.000
and I guess the question I have is,

15:01.000 --> 15:04.000
oh, actually, you actually have it on the slide.

15:05.000 --> 15:07.000
Whether this approach would apply to sched_ext,

15:07.000 --> 15:09.000
whether there's stuff that you can't do with sched_ext.

15:09.000 --> 15:11.000
So essentially, how portable it is.

15:11.000 --> 15:13.000
Got it. So I have a master's student.

15:13.000 --> 15:14.000
I'm answering on Amjad's behalf here,

15:14.000 --> 15:17.000
but I have a master's student doing his research project,

15:17.000 --> 15:18.000
exploring precisely that.

15:18.000 --> 15:22.000
So seeing how far down this path we can get using the sched_ext extensions,

15:22.000 --> 15:24.000
and in fact, looking at whether,

15:24.000 --> 15:26.000
or how much, it might apply on Arm as well, just for added fun.

15:26.000 --> 15:29.000
So he's doing it on a Raspberry Pi, just because why not.

15:29.000 --> 15:30.000
Yeah.

15:30.000 --> 15:32.000
We've definitely seen a bunch of that going on.

15:33.000 --> 15:36.000
The numbers you've seen on the Kubernetes side,

15:36.000 --> 15:38.000
that we've seen, particularly on high core counts,

15:38.000 --> 15:41.000
very busy servers, that's something where you're effectively getting

15:41.000 --> 15:45.000
locked out of 30 to 40 percent of your hardware,

15:45.000 --> 15:46.000
because of that.

15:46.000 --> 15:50.000
One of the workarounds that we tend to do is just slice the machine

15:50.000 --> 15:52.000
into a few big VMs, then call it done,

15:52.000 --> 15:55.000
which then kind of fixes the issue the other way.

15:55.000 --> 15:58.000
But sched_ext is pretty interesting for that.

15:59.000 --> 16:01.000
So the thing is,

16:01.000 --> 16:04.000
getting an actual change into CFS

16:04.000 --> 16:09.000
might well be tricky, because if you look at it the wrong way,

16:09.000 --> 16:14.000
it's going to regress someone else somewhere that you didn't really think of.

16:14.000 --> 16:16.000
But sched_ext makes that a lot easier, because then

16:16.000 --> 16:18.000
it becomes a scheduler extension

16:18.000 --> 16:20.000
that's optimized for Kubernetes,

16:20.000 --> 16:23.000
optimized for Docker, optimized for whatever you're running on the machine,

16:23.000 --> 16:26.000
and then people are going to be very willing to just run that.

16:27.000 --> 16:29.000
I think that's a completely fair point.

16:29.000 --> 16:31.000
That's why we're interested in exploring how much you can,

16:31.000 --> 16:34.000
how far down this path you could go with sched_ext.

16:34.000 --> 16:37.000
Having said that, putting on a maybe slightly more controversial,

16:37.000 --> 16:40.000
academic hat for this sort of thing, as I mentioned,

16:40.000 --> 16:43.000
I don't think it's the case that CFS is the perfect scheduler

16:43.000 --> 16:45.000
for all possible workloads,

16:45.000 --> 16:48.000
and the sorts of situations that it's deployed into

16:48.000 --> 16:51.000
are not necessarily the same as when CFS was first conceived

16:51.000 --> 16:53.000
and designed and implemented.

16:53.000 --> 16:56.000
I think that it would be a long-term mistake

16:56.000 --> 16:58.000
to decide that CFS can't ever be changed.

16:58.000 --> 17:01.000
You may need to present suitable evidence,

17:01.000 --> 17:03.000
and a suggestion that it's a good change you're making,

17:03.000 --> 17:06.000
but to say you can't change it ever is,

17:06.000 --> 17:10.000
I think, clearly wrong, particularly as workloads

17:10.000 --> 17:11.000
keep changing in this way.

17:11.000 --> 17:14.000
So we have tried to submit this work,

17:14.000 --> 17:16.000
trying to get it published in several academic venues,

17:16.000 --> 17:18.000
and we keep getting pushback for different reasons,

17:18.000 --> 17:19.000
which is becoming infuriating.

17:19.000 --> 17:22.000
But one quite reasonable pushback we had earlier on,

17:22.000 --> 17:23.000
which is actually the question of,

17:23.000 --> 17:25.000
okay, this works for your workload.

17:25.000 --> 17:27.000
What does it do for everybody else's workloads?

17:27.000 --> 17:29.000
So that's why we did spend some time

17:29.000 --> 17:31.000
trying to explore a couple of other workloads

17:31.000 --> 17:33.000
that were not this workload,

17:33.000 --> 17:37.000
and why the changes are scoped just to the,

17:37.000 --> 17:40.000
what is it, the kubepods burstable cgroup,

17:40.000 --> 17:42.000
so it shouldn't affect anything else,

17:42.000 --> 17:43.000
and then we tried to demonstrate

17:43.000 --> 17:45.000
that it didn't affect anything else,

17:45.000 --> 17:48.000
it just improved things for these workloads.

17:49.000 --> 17:50.000
Other questions?

17:54.000 --> 17:56.000
All right, thank you very much.

17:56.000 --> 17:57.000
Thank you.

17:57.000 --> 17:58.000
Thank you.

18:02.000 --> 18:04.000
And we're done; that's a wrap for the devroom,

18:04.000 --> 18:07.000
so we'll see you next year, hopefully.

