WEBVTT

00:00.000 --> 00:15.000
We're here, we're going to hear about the state of the cephalopod, so give him a hand,

00:15.000 --> 00:25.000
you can take a drink of water and then we'll start.

00:25.000 --> 00:30.000
Is this too loud? Is it just right?

00:30.000 --> 00:34.000
Okay.

00:40.000 --> 00:48.000
Okay, so, unfortunately, Josh couldn't join me here today, so I have to pretend to be him

00:49.000 --> 00:57.000
as well as myself. So we'll cherry-pick a few things here and try to cover where we are

00:57.000 --> 01:04.000
from a variety of sources. Usually, when I do the State of the Cephalopod, I cover the more

01:04.000 --> 01:11.000
vendor-y IBM stuff, and Josh covers the community side, but I'll try to

01:12.000 --> 01:16.000
cover both. Let's see how I do.

01:16.000 --> 01:25.000
Instead of looking at the roadmap so much, we wanted to look at progress.

01:25.000 --> 01:32.000
So, no, this one is not new. We had a few metrics that we shared at Cephalocon

01:32.000 --> 01:40.000
that are worth repeating. So, these are the latest in terms of where we are after 20 years of

01:40.000 --> 01:51.000
Ceph: 1,400 contributors, more than 700,000 lines of code changed,

01:51.000 --> 02:01.000
and 156,000 commits. So, the community has been going really well.

02:01.000 --> 02:13.000
As a Linux Foundation project, we also have the LF stats, let's see if I can zoom in a

02:13.000 --> 02:21.000
little bit more. The contrast on the screen is not doing it good service, but they're

02:21.000 --> 02:34.000
up and to the right, if you can see: 10,300 stars, 140,000 commits.

02:34.000 --> 02:44.000
Sorry? That's not letting me do it.

02:44.000 --> 02:50.000
Let me see. That's a little bit better, but it's really the contrast.

02:50.000 --> 02:57.000
That's the problem, because the line is really faint, but you can kind of see it.

02:57.000 --> 03:04.000
Not a lot of data there. It's really a line. So, apparently we're very predictable.

03:04.000 --> 03:11.000
And this is the community telemetry. Since it started a few years ago, now it's been

03:11.000 --> 03:19.000
already five years, wow. So, we were approaching two exabytes when I made this chart.

03:19.000 --> 03:25.000
Also looking great, but this is a massive underestimate. These are the clusters that sign up

03:25.000 --> 03:32.000
to report their existence to the community server. There is a heck of a lot more

03:32.000 --> 03:38.000
Ceph out there, and I'll give you some numbers that I put together internally for IBM, which I always

03:38.000 --> 03:44.000
promise to release and never get around to. And obviously we have been doing great

03:44.000 --> 03:50.000
in terms of releases. We keep the pace. There has been a little bit of pain and suffering

03:50.000 --> 03:57.000
in the upstream lab as we moved it from the Red Hat data center to the IBM data center.

03:57.000 --> 04:04.000
Actually, the move is not done yet, but a lot of people are suffering from that.

04:04.000 --> 04:14.000
So, thanks to those doing that. Hopefully, the critical infrastructure has come back online

04:14.000 --> 04:19.000
in the IBM lab already, but there is still quite a bit of work to do there.

04:19.000 --> 04:25.000
Unfortunately, despite being the same company, we couldn't just leave our lab at

04:25.000 --> 04:30.000
Red Hat. They told us to move it. I guess it's a good opportunity to clean house

04:30.000 --> 04:37.000
and throw away the old servers and put in new switches and stuff like that.

04:37.000 --> 04:50.000
These were the metrics that Josh, Neha, and Dan showed at Cephalocon.

04:50.000 --> 05:01.000
I wanted to show you.

05:01.000 --> 05:10.000
So, this is a much bigger presentation that I usually do internally at IBM to show where we

05:10.000 --> 05:16.000
think the market is going. You don't need all of this crap to be honest.

05:16.000 --> 05:24.000
But I have one caveat, which is that it's an IBM analysis, so it's a belt-and-

05:24.000 --> 05:31.000
suspenders analysis. Usually, when you make a guess, a market estimate, you say:

05:31.000 --> 05:37.000
here is the pessimistic estimate, here is the WAG (the wild-something guess),

05:37.000 --> 05:42.000
and then you take the midpoint between the two as something reasonable.

05:43.000 --> 05:47.000
When you do an estimate for IBM, you use a conservative estimate, a more conservative estimate,

05:47.000 --> 05:51.000
an even more conservative estimate, and you take the midpoint between the three.
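
The arithmetic being described can be sketched in a few lines; the input numbers below are illustrative placeholders, not the actual figures behind the slide.

```python
# Sketch of the two estimation styles described above.
# The input numbers are placeholders, not the real market data.

def midpoint(*estimates_eb: float) -> float:
    """Arithmetic midpoint of several estimates, in exabytes."""
    return sum(estimates_eb) / len(estimates_eb)

# Typical analyst style: a pessimistic floor, a wild guess, split the difference.
analyst_style = midpoint(5.0, 20.0)

# Belt-and-suspenders style: three independently conservative methods.
conservative_style = midpoint(6.9, 7.35, 7.8)

print(f"analyst-style midpoint:      {analyst_style:.2f} EB")
print(f"conservative-style midpoint: {conservative_style:.2f} EB")
```
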

05:51.000 --> 06:00.000
So, that is to say that this is the 2025 estimate there in the corner.

06:00.000 --> 06:08.000
It was March of 2025 when I made this. It was between 6.9 and 7.8 exabytes.

06:08.000 --> 06:13.000
And this is using three conservative estimate methods.

06:13.000 --> 06:18.000
If we were using Gartner, IDC, or any kind of aggressive analyst method,

06:18.000 --> 06:21.000
we would be talking about 20 exabytes.

06:21.000 --> 06:27.000
So, this is pretty much guaranteed to be there.

06:27.000 --> 06:34.000
In fact, one of these metrics is me counting clusters that I personally know exist.

06:34.000 --> 06:39.000
So, that's the most conservative estimate that you can possibly use.

06:39.000 --> 06:45.000
So, in terms of deployments out there, we're doing absolutely great.

06:45.000 --> 06:49.000
As I said, there is quite a bit of strategy stuff in here,

06:49.000 --> 06:55.000
but I think the trend is interesting between my different internal estimates over the years.

06:55.000 --> 06:59.000
So, the gap between the different methods is closing,

06:59.000 --> 07:04.000
maybe it's because I do all the work, so I'm consistent with myself, but I don't know.

07:04.000 --> 07:12.000
The trend is if anything, increasing even faster, so that's great.

07:12.000 --> 07:17.000
I don't think I want to spend my time on the rest of this.

07:17.000 --> 07:27.000
I promise to write a blog with this at some point, but you have seen the important bits.

07:28.000 --> 07:30.000
What else is there?

07:30.000 --> 07:33.000
There are a few slides ahead that I wanted to show you.

07:33.000 --> 07:38.000
Okay, so I wanted to do the roadmap bits from the IBM perspective, but before that:

07:38.000 --> 07:43.000
There is this other slide that I usually show internally.

07:43.000 --> 07:48.000
This is very unusual for me because I never quote analysts.

07:48.000 --> 07:53.000
In fact, I usually consider analysts a negative signal in my analysis.

07:54.000 --> 07:58.000
But first of all, Coldago Research is an analyst that IBM doesn't pay.

07:58.000 --> 08:01.000
So that's a good sign.

08:01.000 --> 08:08.000
Secondly, for some reason, they said that IBM is leading the object storage market, which is great.

08:08.000 --> 08:14.000
But the reason why I'm flagging this is that IBM has two object products,

08:14.000 --> 08:17.000
COS, which used to be Cleversafe, and Ceph.

08:18.000 --> 08:25.000
And so that statement that IBM is leading the object storage market is mostly about Ceph.

08:25.000 --> 08:29.000
The rest are analyst quotes and customer quotes; you don't need those.

08:29.000 --> 08:35.000
But this is interesting because, if you know anything about the storage market,

08:35.000 --> 08:41.000
object is about a tenth of the size of block and file, so it's a smaller market.

08:41.000 --> 08:48.000
But that bullet there that says IBM really should say Ceph.

08:48.000 --> 08:51.000
So that's a good sign too.

08:51.000 --> 08:59.000
Now, let's look at roadmap bits for IBM.

08:59.000 --> 09:08.000
And obviously, whatever we do also goes into the community, so it's largely the same thing.

09:09.000 --> 09:14.000
Sorry, somehow something here is not aligned correctly, but oh well.

09:14.000 --> 09:21.000
This is Ceph 9, which is part of... ah, the contrast is killing it.

09:38.000 --> 10:05.000
Ah, that's why.

10:05.000 --> 10:15.000
Is that why? Is it dead?

10:15.000 --> 10:30.000
Oh, and now, of course, it's going to say that I'm not on the VPN.

10:30.000 --> 10:38.000
Come on. I should have never pressed refresh.

10:38.000 --> 10:43.000
Okay, so the old ones, you don't need to know because they're old hat.

10:43.000 --> 10:46.000
But this one is Tentacle, effectively.

10:46.000 --> 10:59.000
This is 9.0, actually more than 9.0, which is what IBM calls Tentacle.

11:00.000 --> 11:09.000
It's PowerPoint, don't blame me.

11:09.000 --> 11:14.000
Red Hat uses Google, that's all I have to say.

11:14.000 --> 11:22.000
Anyway, modern PowerPoint is not as bad as it used to be, but it can still be fun.

11:22.000 --> 11:33.000
So these are... no, these are the Tentacle items that we highlight to IBM customers.

11:33.000 --> 11:40.000
And obviously, I think nearly everything here is upstream.

11:40.000 --> 11:47.000
One thing that is near and dear to me is that the Arm support is finally there.

11:47.000 --> 11:51.000
We've been working towards this for quite some time.

11:51.000 --> 12:04.000
Big item, the one that we want to be loud about commercially, is that we're going to support erasure coding for block and file, which,

12:04.000 --> 12:06.000
commercially, we didn't do before.

12:06.000 --> 12:16.000
There are plenty of community users using the previous implementation of erasure coding, with RBD especially, but also CephFS.

12:16.000 --> 12:20.000
And they didn't like the latency variance there.

12:20.000 --> 12:28.000
Now, the new implementation, Fast EC, smooths that out very nicely.

12:28.000 --> 12:36.000
So now we're supporting that for everybody. That's the big item that our marketing will highlight for nine.
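
For context, the space-overhead arithmetic that makes erasure coding commercially attractive can be sketched as follows; the 4+2 profile and the capacities are illustrative examples, not product defaults.

```python
def raw_capacity(user_bytes: int, *, replicas: int = 0, k: int = 0, m: int = 0) -> float:
    """Raw bytes needed to store user_bytes, either n-way replicated
    or erasure coded with k data chunks and m coding chunks."""
    if replicas:
        return user_bytes * replicas
    return user_bytes * (k + m) / k

TB = 10**12
# 3-way replication: 3.0x raw-to-usable ratio, tolerates two lost copies.
rep_raw = raw_capacity(100 * TB, replicas=3)
# 4+2 erasure coding: 1.5x ratio, also tolerates two lost chunks.
ec_raw = raw_capacity(100 * TB, k=4, m=2)

print(f"replicated: {rep_raw / TB:.0f} TB raw for 100 TB usable")
print(f"4+2 EC:     {ec_raw / TB:.0f} TB raw for 100 TB usable")
```

With the same two-failure tolerance, 4+2 erasure coding needs half the raw capacity of 3-way replication; the trade-off was the latency variance mentioned above.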

12:36.000 --> 12:44.000
We have a project for a big customer that I cannot name that wants to deploy a cluster with nine thousand OSDs.

12:44.000 --> 12:50.000
So far we've been comfortable supporting clusters of 5,000 OSDs.

12:50.000 --> 12:56.000
We're moving the number up and we have been doing data path enhancements.

12:56.000 --> 13:03.000
Those were in the Squid-based release that we did mid last year, which was labeled 8.1.

13:03.000 --> 13:08.000
It was one of the point releases of Squid from the upstream point of view.

13:08.000 --> 13:22.000
But now with nine we have done the control path, so that mostly the mon and the dashboard will take that kind of scale without keeling over.

13:22.000 --> 13:32.000
And we're getting to the point where we're actually going to test at the customer's site, and we'll see if it works.

13:32.000 --> 13:38.000
We should be really close to being able to claim 10,000 OSDs in a single cluster.

13:38.000 --> 13:44.000
There is the new tech preview of Crimson and it keeps moving forward.

13:44.000 --> 13:51.000
We aren't seeing a GA date for Crimson yet, but we keep working on it.

13:51.000 --> 13:53.000
Intel keeps working on it.

13:53.000 --> 13:56.000
It's a work in progress.

14:03.000 --> 14:10.000
There are a bunch of enhancements in management.

14:10.000 --> 14:25.000
The ones that are more interesting are the multi-tenancy enhancements and the ability to manage multiple clusters.

14:26.000 --> 14:34.000
There is an external provisioner which is an IBM tool to deploy on bare metal.

14:34.000 --> 14:46.000
And there are some multi-namespace workflows that the dashboard supports for NVMe-oF.

14:46.000 --> 14:52.000
I think everything here is being contributed back to the community, except the provisioner.

14:52.000 --> 14:59.000
And I think probably those of you who have been watching will have noticed that the vSphere plugin is not open source.

14:59.000 --> 15:04.000
And that's a problem with VMware. It's not a problem with us.

15:04.000 --> 15:11.000
There is a licensing problem with headers in VMware. I think you know the story.

15:11.000 --> 15:18.000
Essentially, all it does is enable vSphere to manage a Ceph cluster.

15:18.000 --> 15:31.000
Let's see. On file, there is rate limiting per share in Samba, which mirrors what we have already had for,

15:31.000 --> 15:38.000
I don't know how many years, in libvirt first, but then in librbd.

15:38.000 --> 15:43.000
We then added that to NFS and RGW. Samba is the new target.

15:43.000 --> 15:51.000
And these are throttles, essentially; it's not really QoS if you want to be computer-science precise.

15:51.000 --> 16:01.000
But because you can cap how many IOPS a certain volume class can do, if you combine some capacity planning with that,

16:01.000 --> 16:05.000
it's basically what AWS gives you when you provision volumes.

16:05.000 --> 16:09.000
You have a certain number of IOPS per volume based on the class

16:09.000 --> 16:11.000
that the volume is coming from.

16:11.000 --> 16:15.000
Which is a convenient way to make QoS easy to understand for the user.

16:15.000 --> 16:25.000
So we like that model and we have been putting that all over the place.
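
A minimal sketch of that class-based throttling model; the class names and IOPS numbers below are invented for illustration and are not the actual product tiers.

```python
# Hypothetical volume classes mapping to per-volume IOPS caps, in the
# spirit of cloud provisioned-IOPS tiers. Names and numbers are made up.
VOLUME_CLASSES = {
    "bronze": 500,
    "silver": 3000,
    "gold": 16000,
}

def iops_cap(volume_class: str) -> int:
    """Return the per-volume IOPS throttle implied by the volume's class."""
    try:
        return VOLUME_CLASSES[volume_class]
    except KeyError:
        raise ValueError(f"unknown volume class: {volume_class!r}")

# A user provisioning a 'silver' volume knows exactly what to expect.
print(iops_cap("silver"))
```

The appeal of the model is that the user never sees a raw throttle knob, only a class name with a predictable ceiling.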

16:25.000 --> 16:31.000
On the block side, most of the work is going into the NVMe-oF gateway.

16:31.000 --> 16:35.000
This is an interesting one because the NVMe-oF gateway is quite complete.

16:35.000 --> 16:42.000
In fact, if you look at those things there, I would say that they are starting to deal with things that are not really problems.

16:42.000 --> 16:48.000
Right? Who needs that export?

16:48.000 --> 16:59.000
OEL is more interesting, but the big problem here for us is that... actually, ESX is nice because it has the driver and supports it.

16:59.000 --> 17:06.000
Windows doesn't. OEL is getting there right now.

17:06.000 --> 17:09.000
RHEL is there and VMware is there.

17:09.000 --> 17:15.000
So the big ones, RL and VMware are capable of doing NVMe of F.

17:15.000 --> 17:23.000
Microsoft hasn't gotten there yet. Oracle Linux kind of depends on the kernel you're running; it's getting there.

17:23.000 --> 17:28.000
So we expect to see a lot more NVMe-oF use

17:28.000 --> 17:33.000
once there are enough client operating systems to consume it.

17:33.000 --> 17:39.000
But the reason why this is interesting is that it's not open source leading in terms of creating something new,

17:39.000 --> 17:42.000
which is the place where we usually see open source leading.

17:42.000 --> 17:49.000
This is open source leading in the sense that we have the product complete before the market is ready to consume it.

17:49.000 --> 17:55.000
So, frankly, I don't see any alternative for block to NVMe-oF.

17:55.000 --> 18:00.000
It is the standard, unless you want to fall back to iSCSI from 30 years ago.

18:00.000 --> 18:06.000
So that's got to happen. Either you use librbd directly, or it's that.

18:06.000 --> 18:11.000
So it's really nice to see how complete this has become.

18:11.000 --> 18:16.000
The CSI driver is an interesting extension. It's going to be in tech preview right now,

18:16.000 --> 18:19.000
but it's going to iterate rather quickly.

18:19.000 --> 18:24.000
This is interesting for vendors that are providing as-a-service

18:24.000 --> 18:30.000
kinds of deployments, because they don't want to expose the Ceph protocol.

18:30.000 --> 18:33.000
They want to expose something external.

18:33.000 --> 18:40.000
They want to expose S3 for object, and this gives them a way to expose NVMe-oF as the block protocol outside

18:40.000 --> 18:46.000
of the Ceph cluster, without having to expose the OSDs and the Ceph protocol itself.

18:46.000 --> 18:53.000
So for a certain type of as-a-service vendor, that's quite interesting.

18:53.000 --> 18:57.000
Object has been doing a lot more stuff than that,

18:57.000 --> 19:01.000
but those are the things that are called out on our roadmap slide.

19:01.000 --> 19:09.000
There are enhancements for the eternal quest for resharding.

19:10.000 --> 19:16.000
Some things for deduplication of large objects, and...

19:16.000 --> 19:19.000
Well, you can read as well as I can.

19:19.000 --> 19:25.000
This is the stuff that is in Tentacle, and Tentacle released in December.

19:25.000 --> 19:28.000
We released this at the end of January.

19:28.000 --> 19:37.000
So it's unprecedented that we managed to actually ship the enterprise product in less than two months from the upstream one.

19:37.000 --> 19:43.000
Of course, there were reasons because the upstream one was super late.

19:43.000 --> 19:48.000
But still, let's take the upside. This was this is quite nice.

19:48.000 --> 19:52.000
We managed to finally close the gap.

19:52.000 --> 20:00.000
If you're an IBM or a Red Hat customer, it has always been an issue that you had to wait something like six months before you could get the enterprise product.

20:00.000 --> 20:03.000
So it's nice that we're closing that gap.

20:03.000 --> 20:08.000
Now, this is, as you can see (because I haven't written that part yet)...

20:08.000 --> 20:16.000
A work-in-progress slide from my team, where we're trying to figure out what we're going to do this year.

20:16.000 --> 20:19.000
Now, we actually have a plan that's quite complete.

20:19.000 --> 20:25.000
The part that we're trying to figure out is what goes in Q2, what goes in Q3 and what goes in Q4.

20:25.000 --> 20:28.000
And we're in the middle of that negotiation and moving things around.

20:28.000 --> 20:36.000
So I'm not going to say that this is going to be Q2, but these are the things that folks are calling out.

20:36.000 --> 20:47.000
And since I tend to fill in half of block and core, you see that I haven't filled it in yet, because I want to know what I'm filling it in for.

20:47.000 --> 20:54.000
The other leads decided to say Q2, and this is the stuff that they think they can do by Q2.

20:54.000 --> 21:07.000
We'll see. The official plan will be ready in a couple of weeks, so you can ask me again at the next Ceph Day.

21:07.000 --> 21:16.000
On the management side, the big push is for multi-cluster management, because there are customers that deploy a lot of Ceph clusters.

21:16.000 --> 21:25.000
We have a customer that has close to 30 on the western side of Europe.

21:25.000 --> 21:31.000
We have a customer that has 26 on the eastern side of Europe.

21:31.000 --> 21:42.000
And these are new scenarios. In the past, it used to be that you would deploy as big a Ceph cluster as you could,

21:42.000 --> 21:48.000
because you would get a performance advantage, a manage-one-single-thing advantage, and all that.

21:48.000 --> 21:58.000
These fleet type customers are going there because they want to separate blast radius or because they have edge use cases.

21:58.000 --> 22:04.000
Or most interesting one is because they are very tightly locked with Kubernetes.

22:04.000 --> 22:10.000
And so they put their workload on Kubernetes and consequently on the self-storage that attach to that Kubernetes.

22:10.000 --> 22:18.000
They want to keep the individual Kubernetes cluster small in terms of workloads so that they can try to eventually upgrade it.

22:18.000 --> 22:23.000
Because the big problem with upgrading a Kubernetes cluster, it turns out, is not Kubernetes.

22:23.000 --> 22:25.000
It's the workloads that are running in there.

22:25.000 --> 22:30.000
And if you have a sprawl of workloads, it becomes a very difficult problem to approach.

22:30.000 --> 22:40.000
So a fleet of clusters tends to be what some customers are going for, which gives an angle to multi-cluster management.

22:40.000 --> 22:44.000
We now have a designer at IBM.

22:44.000 --> 22:47.000
Again, we haven't had one for a few years.

22:48.000 --> 22:53.000
So there is new work on the dashboard to try to improve the UX and simplify things.

22:53.000 --> 23:01.000
The dashboard sometimes suffers from being a straight cut, from being too close to the CLI.

23:01.000 --> 23:07.000
You reproduce what's in the CLI in the dashboard, and that's not what design should be.

23:07.000 --> 23:12.000
That's something that we tend to do badly, actually.

23:12.000 --> 23:15.000
I'll see if we can step up a little bit there.

23:15.000 --> 23:18.000
Then there is this mysterious "FCM deployment support" item.

23:18.000 --> 23:20.000
FCM is FlashCore Modules.

23:20.000 --> 23:23.000
It's a flash storage module from IBM.

23:23.000 --> 23:27.000
It's what the FlashSystem arrays have.

23:27.000 --> 23:33.000
They're cool technology, but it only matters if you have an array made by IBM.

23:34.000 --> 23:46.000
We now have Ceph in 14 products, branded IBM, Red Hat, or another company that I shouldn't name,

23:46.000 --> 23:49.000
That all come from my team.

23:49.000 --> 23:55.000
It's the same build of Ceph with different support scopes and different selections.

23:55.000 --> 24:02.000
So we're working on number 15 and hopefully number 16 after that.

24:03.000 --> 24:14.000
So we're putting the technology a little bit everywhere in the companies that we can reach and quite successfully I would say.

24:14.000 --> 24:18.000
I am not the expert on file systems, but you can see what's in there.

24:18.000 --> 24:24.000
NFS over TLS. This is for one cloud deployment.

24:24.000 --> 24:32.000
There are now cloud deployments using CephFS backing NFS, and soon SMB.

24:32.000 --> 24:44.000
So we have a security hardening campaign going on there.

24:44.000 --> 24:53.000
I don't want to tell you what cloud it is because it's not my job to make announcements for that cloud.

24:53.000 --> 24:58.000
But you can guess.

24:58.000 --> 25:07.000
There are performance enhancements for SMB and NFS being targeted, and improvements to CephFS.

25:07.000 --> 25:17.000
So CephFS has been slow to improve, but it's moving forward and we're continuing to invest in it.

25:17.000 --> 25:24.000
We are definitely not willing to give up on making CephFS a first-class citizen.

25:24.000 --> 25:32.000
The thing is, business-wise, we have products that use CephFS, but we don't sell CephFS as a product.

25:32.000 --> 25:39.000
So that complicates things, and it also complicates things because file systems are hard as products.

25:39.000 --> 25:43.000
The customer can do anything with the file system.

25:43.000 --> 25:46.000
So that's wonderful and it's also terrible.

25:46.000 --> 25:55.000
But if you have some limitations, then it simplifies things, because it tells you: the customer will try to do these three use cases.

25:55.000 --> 25:57.000
Let's start from there.

25:57.000 --> 26:08.000
That will be a point release of Tentacle.

26:08.000 --> 26:12.000
The question is: is 9.1 going to be based on U?

26:12.000 --> 26:18.000
No, that is going to be a point release of Tentacle.

26:18.000 --> 26:21.000
We use Rook in several of the products.

26:21.000 --> 26:28.000
The ones that are branded Data Foundation: OpenShift Data Foundation, Fusion Data Foundation, and there are three or four more.

26:28.000 --> 26:39.000
Those are Rook managed.

26:39.000 --> 26:51.000
And we're also targeting what would be an IBM Storage release 10 at the end of the year, and that will be the start of U.

26:51.000 --> 26:54.000
Okay, I think we have two more minutes.

26:54.000 --> 26:55.000
All right.

26:55.000 --> 26:58.000
But I think I'm done.

26:58.000 --> 27:00.000
Did I show you everything?

27:00.000 --> 27:01.000
Yes.

27:01.000 --> 27:05.000
Let's see. Since, as I told you, 9.1 is very much in flux,

27:05.000 --> 27:09.000
I don't think I want to talk about 10.

27:09.000 --> 27:12.000
And this is the vendor slide.

27:12.000 --> 27:15.000
We certify with any backup vendor out there.

27:15.000 --> 27:16.000
Yeah.

27:16.000 --> 27:17.000
All right.

27:17.000 --> 27:18.000
Happy.

27:18.000 --> 27:23.000
Any questions?

27:23.000 --> 27:24.000
Yes.

27:24.000 --> 27:25.000
Big numbers are great.

27:25.000 --> 27:27.000
What about small clusters?

27:27.000 --> 27:28.000
How small?

27:28.000 --> 27:33.000
And the question is about small clusters.

27:33.000 --> 27:42.000
Let's call it sub 15 nodes, so essentially just a couple of terabytes.

27:42.000 --> 27:43.000
Yeah.

27:43.000 --> 27:46.000
Not everything is in the exabyte dimension.

27:46.000 --> 27:50.000
Okay, so the question is less than 15 nodes a couple of terabytes.

27:50.000 --> 27:58.000
So commercially, I usually dismiss it with: if it's less than... where is it now?

27:58.000 --> 28:02.000
If it's less than 250 terabytes, I don't want to hear about it.

28:02.000 --> 28:08.000
But commercially, you make decisions in terms of where you spend your money trying to make a sale, right?

28:08.000 --> 28:13.000
Technically, there is a different point of view here, which is,

28:14.000 --> 28:20.000
Ceph's claim to fame is obviously scale, and more scale, and even more scale.

28:20.000 --> 28:27.000
But with things like the OpenShift integration or in general the Kubernetes integration,

28:27.000 --> 28:30.000
those deployments are not about scale.

28:30.000 --> 28:37.000
They are usually 100 terabytes or less, because containers are not that space-hungry.

28:37.000 --> 28:41.000
This will change as KubeVirt becomes more common.

28:41.000 --> 28:45.000
Then you will see those deployments go to a couple of petabytes probably.

28:45.000 --> 28:50.000
But right now, 100 terabytes is the usual.

28:50.000 --> 28:56.000
And the vast majority of the clusters we support in that scenario are three nodes,

28:56.000 --> 29:02.000
all solid state.

29:02.000 --> 29:09.000
They have to be all solid state, because with three nodes you wouldn't be able to do the failover if it wasn't really fast.

29:09.000 --> 29:15.000
So there is quite a bit of success at small scale, either in Kubernetes or at the edge in OpenStack,

29:15.000 --> 29:20.000
they tend to have small clusters like that, or at some telcos.

29:20.000 --> 29:27.000
I think Data Foundation, and generally Rook, have made Ceph a lot more interesting at small scale.

29:27.000 --> 29:32.000
And usually the reason to consume that is because we have the APIs and interfaces worked out.

29:32.000 --> 29:35.000
So you don't need the scale, but you want all those interfaces.

29:35.000 --> 29:42.000
So there are those scenarios, but I would say we have made progress.

29:42.000 --> 29:45.000
I wouldn't say that it's the perfect product for small things.

29:45.000 --> 29:50.000
In general there is always the question of why don't you buy a couple of hard drives, right?

29:50.000 --> 29:54.000
We're out of time, so I'm going to take the questions at the back.

29:54.000 --> 29:57.000
And whoever is next should come up to Sarah.

29:57.000 --> 29:59.000
Thank you.

