WEBVTT

00:00.000 --> 00:12.600
Hi, I'm going to talk about the SELinux problem that cast a long shadow, which is a story

00:12.600 --> 00:15.760
about SELinux and tech debt.

00:15.760 --> 00:20.560
Before we start: do we have any RHEL customers, RHEL users, past or present?

00:20.560 --> 00:21.560
OK.

00:22.560 --> 00:27.560
Let's start with a disclaimer: these slides are my own reflection of the events,

00:27.560 --> 00:31.560
and nothing contained in them is official Red Hat communication.

00:31.560 --> 00:38.560
If you need information or need help with your systems, please talk to our support.

00:38.560 --> 00:45.560
So, who am I? I'm a software engineer and a product owner for content management and data collection in the client tools team.

00:45.560 --> 00:49.560
That's RHEL, whether it runs on its own, under Satellite, or in the cloud.

00:49.560 --> 00:54.560
And indirectly we support many more use cases, like OpenShift or Podman Desktop.

00:54.560 --> 00:56.560
And our team targets RHEL directly.

00:56.560 --> 01:03.560
We don't have any true upstream; we target RHEL while supporting Fedora and CentOS indirectly.

01:03.560 --> 01:08.560
So, let's start with some keywords for what I'll talk about.

01:08.560 --> 01:13.560
Well, first is Red Hat Insights, a suite of software-as-a-service tools hosted by Red Hat,

01:13.560 --> 01:15.560
free of charge for all RHEL customers.

01:15.560 --> 01:21.560
SELinux, which is a security architecture for Linux, and it's part of RHEL.

01:21.560 --> 01:27.560
And insights-client, a tool that collects data about a host and uploads it to Red Hat Insights.

01:27.560 --> 01:30.560
This is how Red Hat Insights works.

01:30.560 --> 01:33.560
As you can see, there are some systems in the inventory.

01:33.560 --> 01:37.560
You can see we detect some CVEs, and we know that we can remediate

01:37.560 --> 01:41.560
some of them. And you can do wild stuff with that:

01:41.560 --> 01:48.560
remotely update packages, change configurations to ensure your host is compliant and secure.

01:48.560 --> 02:00.560
And SELinux is a kernel security module that allows you to prevent programs from randomly changing content of other packages or programs.

02:00.560 --> 02:11.560
Every file, every process, every socket is labeled, and we ensure that nothing can go wrong there.

02:11.560 --> 02:14.560
When it's set up correctly.

02:14.560 --> 02:18.560
So, let's start with the intro.

02:18.560 --> 02:22.560
Our team has a history with SELinux.

02:22.560 --> 02:29.560
In 2022, an issue was filed by our support engineers that insights-client was producing a lot of AVCs,

02:29.560 --> 02:32.560
a lot of denials.

02:32.560 --> 02:38.560
So, a few months later, the SELinux team shipped an updated policy in RHEL 8.6 and 9.

02:38.560 --> 02:46.560
And a few months later, the policy was updated again because of the complexity of insights-client's behavior.

02:46.560 --> 02:53.560
Which brings us to the specific topic of this talk: /root/.gnupg.

02:53.560 --> 03:05.560
Also in 2022, one of our engineers filed a Bugzilla saying that insights-client tries to create root's GnuPG folder and put stuff in there.

03:05.560 --> 03:11.560
So, around a year later, a year and a half, the card was picked up by a developer.

03:11.560 --> 03:16.560
Then, after a few days or weeks, the code review started.

03:16.560 --> 03:19.560
One of the focus areas was GPG, actually.

03:19.560 --> 03:22.560
We knew that it had issues with race conditions,

03:22.560 --> 03:29.560
like whether it cleans up after itself; our colleagues from Convert2RHEL and RHEL upgrades had let us know.

03:29.560 --> 03:40.560
It took some months, but in February 2024 our QE engineer performed the manual verification and the PR got merged.

03:40.560 --> 03:45.560
Now we have to start mentioning specific dates.

03:45.560 --> 03:55.560
So, on a Monday, the big insights-client outage started, and engineering wouldn't know for another week that there were problems.

03:55.560 --> 03:57.560
Actually, how does that happen?

03:57.560 --> 04:00.560
We are talking about RHEL, an enterprise distribution.

04:00.560 --> 04:03.560
Doesn't it have release cycles, gating, or component tests?

04:03.560 --> 04:08.560
So, here is where we have to make a small detour.

04:09.560 --> 04:16.560
The client and the core: the insights-client repository itself is an RPM wrapper around insights-core.

04:16.560 --> 04:23.560
The client is a minimal program that ships the configuration file and systemd services, and ensures the core is up to date.

04:23.560 --> 04:33.560
The core is a GPG-signed archive that contains the collectors and parsers that Insights later uses to display recommendations or ensure compliance.

04:33.560 --> 04:36.560
So, what was the problem there?

04:36.560 --> 04:40.560
The problem was that we shouldn't touch the directories in /root.

04:40.560 --> 04:47.560
It's root's; it's weird when sysadmins look at the contents and see some keys they didn't put in there.

04:47.560 --> 04:51.560
It's just not how you should treat the system.

04:51.560 --> 04:55.560
So, the task was to abstract GPG away from the rest of the business logic.

04:55.560 --> 05:04.560
And instead of calling it directly with no additional setup, we create a temporary directory and use it as the home for GPG to perform all actions.

05:04.560 --> 05:12.560
And when we have verified the payloads, we finish and clean up the temporary directory.

05:12.560 --> 05:14.560
This is the pull request for it.

05:14.560 --> 05:17.560
It was like 200 lines of Python plus unit tests.

05:17.560 --> 05:23.560
As I said, first we create the home, then we import the keys that are known to be signing the payloads,

05:23.560 --> 05:27.560
we verify the file, and we delete the home used by GPG.

05:27.560 --> 05:29.560
That's it.
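
NOTE
The four steps just described can be sketched roughly like this. This is a minimal illustration, not the actual insights-core code; the gpg invocations, function name, and arguments are assumptions:
```python
import subprocess
import tempfile
def verify_payload(payload: str, signature: str, signing_key: str) -> bool:
    # Step 1: create a throwaway GnuPG home; TemporaryDirectory also
    # handles step 4, deleting the home when the block exits.
    with tempfile.TemporaryDirectory(prefix="gnupg-") as home:
        # Step 2: import the key known to sign the payloads.
        subprocess.run(["gpg", "--homedir", home, "--import", signing_key],
                       check=True, capture_output=True)
        # Step 3: verify the detached signature against the payload.
        result = subprocess.run(["gpg", "--homedir", home, "--verify",
                                 signature, payload], capture_output=True)
    return result.returncode == 0
```
The key point is that no state leaks into /root/.gnupg: everything GPG touches lives and dies inside the temporary home.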

05:29.560 --> 05:32.560
Now comes the SELinux part.

05:32.560 --> 05:41.560
By default, gpg_t, the label for the GPG process, writes to the GnuPG directory, which has the label gpg_secret_t.

05:41.560 --> 05:49.560
Python's tempfile uses /tmp by default, and because it's created by insights-client, the Python code,

05:50.560 --> 05:58.560
the directory that's created is labeled insights_client_tmp_t, the temporary directory label of insights-client.

05:58.560 --> 06:05.560
Well, gpg_t is not allowed to write to insights_client_tmp_t, which is what this denial shows.

06:05.560 --> 06:10.560
The source context is gpg_t, the target is insights_client_tmp_t.

06:10.560 --> 06:16.560
You can see we are trying to do something with a directory and we get denied.
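
NOTE
On an SELinux-enabled host you can observe the label mismatch yourself by reading the security.selinux extended attribute of a directory tempfile just created. This is an illustrative helper, not part of the client; it returns None on systems without SELinux or xattr support:
```python
import os
import tempfile
def selinux_context(path: str):
    # The SELinux label lives in the "security.selinux" xattr.
    if not hasattr(os, "getxattr"):
        return None  # platform without xattr support
    try:
        return os.getxattr(path, "security.selinux").decode().rstrip("\x00")
    except OSError:
        return None  # no SELinux label on this filesystem
with tempfile.TemporaryDirectory() as tmp:
    # On an enforcing RHEL host run under the client's domain, this
    # would show the *_tmp_t type that gpg_t is not allowed to write to.
    print(selinux_context(tmp))
```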

06:17.560 --> 06:19.560
So as a result, what happens?

06:19.560 --> 06:22.560
The denial results in an OSError being raised.

06:22.560 --> 06:26.560
We are just not allowed to touch that directory or its files.

06:26.560 --> 06:30.560
The error is propagated from the core into the client wrapper.

06:30.560 --> 06:33.560
The wrapper exits with a nonzero status code as well.

06:33.560 --> 06:35.560
Again, it's standard behavior.

06:35.560 --> 06:40.560
And the systemd service that runs this transitions to the failed state.
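
NOTE
The propagation chain just described can be sketched as follows. The function names are stand-ins, not the real insights-client entry points:
```python
import sys
def core_collect() -> None:
    # Stand-in for the core's collection step (hypothetical name):
    # the SELinux denial surfaces as an OSError (EACCES) from the
    # filesystem call that was denied.
    raise OSError(13, "Permission denied")
def wrapper_main() -> int:
    # The thin client wrapper: a core failure becomes a non-zero exit
    # status, and systemd then moves the unit to the "failed" state.
    try:
        core_collect()
    except OSError as exc:
        print(f"core failed: {exc}", file=sys.stderr)
        return 1
    return 0
if __name__ == "__main__":
    sys.exit(wrapper_main())
```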

06:41.560 --> 06:49.560
Why was it bad? For historical reasons, this service is configured with Restart=no.

06:49.560 --> 06:53.560
And it's been there since 2017 or 2018.

06:53.560 --> 06:57.560
And that's because RHEL 7 was the first RHEL version that shipped with systemd.

06:57.560 --> 07:03.560
Before that, RHEL 6 used cron, which runs every time no matter the previous executions.

07:03.560 --> 07:06.560
And this behavior was just replicated.
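
NOTE
The failure mode hinges on a single directive. An illustrative unit fragment, not the shipped insights-client file:
```ini
[Service]
ExecStart=/usr/bin/insights-client
# The historical setting, replicated from the RHEL 6 cron era:
# after one failure the unit stays in the failed state and the
# service never runs again on its own.
Restart=no
# A more forgiving configuration (one option, not the actual fix)
# would retry after transient failures:
#   Restart=on-failure
#   RestartSec=5min
```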

07:06.560 --> 07:10.560
It's been known that system or network glitches may cause the service to fail,

07:10.560 --> 07:16.560
and it was never addressed, because other, more visible issues were prioritized.

07:16.560 --> 07:22.560
So we had some systems that were running the systemd service,

07:22.560 --> 07:28.560
and now they are not. That means the systems are not collecting the data archives,

07:28.560 --> 07:33.560
they are not uploading them, and they are not updating the core.

07:34.560 --> 07:36.560
So we know we have a problem.

07:36.560 --> 07:39.560
As I said, on Monday the core was published to production.

07:39.560 --> 07:45.560
On Tuesday, a server-side team discovered that their component was broken due to its dependency on the insights-client.

07:45.560 --> 07:48.560
On Wednesday, we released the fix into production.

07:48.560 --> 07:50.560
We just reverted the commits.

07:50.560 --> 07:56.560
On Monday the next week, we discovered that the bug was affecting more engineering groups.

07:57.560 --> 08:03.560
So the next day, on Tuesday the 12th, the first emergency meeting was held,

08:03.560 --> 08:05.560
where we reproduced the issue.

08:05.560 --> 08:07.560
We published a support article,

08:07.560 --> 08:11.560
so both our customers and support knew what to do.

08:11.560 --> 08:15.560
And we had more of those meetings.

08:15.560 --> 08:17.560
So who is affected?

08:17.560 --> 08:23.560
Anyone running RHEL 8.6+ or 9.0+ with SELinux in enforcing mode.

08:23.560 --> 08:27.560
How will the Insights services react when a host stops reporting?

08:27.560 --> 08:33.560
Well, you can assign metadata to a host in Inventory or in other services,

08:33.560 --> 08:37.560
and if hosts get deleted because they are not reporting anymore,

08:37.560 --> 08:40.560
these configurations would get deleted as well.

08:40.560 --> 08:43.560
So we said: let's pause the stale-host deletion and worry about that later.

08:43.560 --> 08:46.560
And how many customers and systems were affected?

08:46.560 --> 08:48.560
We actually didn't know.

08:49.560 --> 08:51.560
Because Red Hat is a big organization,

08:51.560 --> 08:54.560
the team behind the client tooling doesn't run the services.

08:54.560 --> 08:57.560
So we don't really know how many systems there are;

08:57.560 --> 08:58.560
we don't need to.

08:58.560 --> 09:03.560
So how to get the numbers, and how to even figure out who the relevant teams are?

09:03.560 --> 09:07.560
And there was no precedent on informing customers about these problems.

09:07.560 --> 09:09.560
Would the best way be to send emails?

09:09.560 --> 09:10.560
How many to send?

09:10.560 --> 09:11.560
Would it scare the customers?

09:11.560 --> 09:14.560
Some are really scared about their infrastructure, because

09:14.560 --> 09:17.560
it may be running something very critical.

09:18.560 --> 09:21.560
So for three weeks, representatives from engineering,

09:21.560 --> 09:25.560
product managers, support, and people managers met on a daily basis

09:25.560 --> 09:28.560
to prepare the next steps.

09:28.560 --> 09:31.560
All the time, discussions were constructive

09:31.560 --> 09:36.560
and everyone focused on the problem, and no single person was blamed for the problems.

09:36.560 --> 09:39.560
That was great.

09:39.560 --> 09:42.560
This is what we were working with.

09:42.560 --> 09:44.560
I can't tell you everything about this graph,

09:44.560 --> 09:48.560
but it shows the dip in hosts still reporting.

09:48.560 --> 09:50.560
This is on a scale of weeks,

09:50.560 --> 09:52.560
but I can't really say much more about it.

09:52.560 --> 09:56.560
I can't tell you which systems were displayed on this graph,

09:56.560 --> 09:58.560
but we knew that there was a drop,

09:58.560 --> 10:01.560
and then the numbers started slowly growing back up

10:01.560 --> 10:03.560
as the sysadmins started

10:03.560 --> 10:05.560
the systemd service again,

10:05.560 --> 10:08.560
because they were used to it crashing.

10:08.560 --> 10:11.560
And why wasn't it caught? Well, it wasn't caught.

10:11.560 --> 10:14.560
The manual QE verification didn't check for SELinux denials.

10:14.560 --> 10:18.560
The automated QE tests also didn't check for SELinux denials.

10:18.560 --> 10:23.560
The staging environment for Insights didn't use the staging core.

10:23.560 --> 10:27.560
Well, service engineers don't check their end-to-end pipelines

10:27.560 --> 10:29.560
on weekends, because why would they, right?

10:29.560 --> 10:33.560
So no one saw something was failing.

10:33.560 --> 10:36.560
And the worst part is that we almost found it before

10:37.560 --> 10:39.560
it went to production.

10:39.560 --> 10:42.560
Because the same change was also made to the client wrapper.

10:42.560 --> 10:45.560
And the same week the core patch was merged,

10:45.560 --> 10:50.560
the AVCs were also raised when we were trying to make a release build.

10:50.560 --> 10:54.560
So my colleague created a bug report,

10:54.560 --> 10:56.560
we triaged it the next Monday,

10:56.560 --> 11:03.560
and we decided to revert both patches before investigating further.

11:04.560 --> 11:09.560
But at that point, that Monday, the core was already in production.

11:09.560 --> 11:13.560
So what have we learned here? And it's a lot.

11:13.560 --> 11:14.560
It's a lot.

11:14.560 --> 11:18.560
So first is that you should have some emergency strategy planned.

11:18.560 --> 11:20.560
Nothing is too big to fail.

11:20.560 --> 11:24.560
It's all built by individual people;

11:24.560 --> 11:29.560
it's all individual components, and all of them can fail.

11:29.560 --> 11:32.560
It is important to make informed decisions.

11:32.560 --> 11:38.560
If you don't have the data, well, you have to guess and hope your guess was correct.

11:38.560 --> 11:41.560
And you should be aware of where you are special.

11:41.560 --> 11:44.560
You have to own both the advantages and the problems that come with it.

11:44.560 --> 11:48.560
For example, here: Red Hat Insights is software-as-a-service first,

11:48.560 --> 11:50.560
and the insights-client is an outlier.

11:50.560 --> 11:53.560
So the whole organization should have runbooks,

11:53.560 --> 12:01.560
documents that describe step by step what should be done to prevent issues like this.

12:02.560 --> 12:05.560
Responsible software development is about people,

12:05.560 --> 12:06.560
mainly about people.

12:06.560 --> 12:09.560
It's not that much about code, in the long run.

12:09.560 --> 12:13.560
So developers should talk to their quality engineering counterparts

12:13.560 --> 12:15.560
to ensure they understand the problem.

12:15.560 --> 12:18.560
Especially if they are junior engineers, they might need some guidance.

12:18.560 --> 12:20.560
So don't skip them.

12:20.560 --> 12:23.560
Don't, because they guarantee the software works,

12:23.560 --> 12:29.560
and if something breaks, they are the ones who will also be blamed for the issue.

12:29.560 --> 12:33.560
Because, well, the quality engineers, why didn't they catch it?

12:33.560 --> 12:36.560
They just didn't know.

12:36.560 --> 12:38.560
And talk to others: extend visibility,

12:38.560 --> 12:41.560
establish contacts in the company with different teams,

12:41.560 --> 12:43.560
different parts of the organization,

12:43.560 --> 12:44.560
make friends.

12:44.560 --> 12:48.560
It makes it all easier when you need to solve some problems.

12:48.560 --> 12:51.560
For communication with the clients,

12:51.560 --> 12:55.560
I think it is far better to overcommunicate,

12:56.560 --> 13:00.560
because knowing about issues that you are not affected by

13:00.560 --> 13:04.560
is far better than not knowing about issues that you are affected by,

13:04.560 --> 13:07.560
and having to figure them out on your own.

13:07.560 --> 13:11.560
And if emails start to get a bit annoying,

13:11.560 --> 13:16.560
you can always set up a filter to ignore that sender,

13:16.560 --> 13:19.560
and that's fine, that's on you.

13:19.560 --> 13:24.560
And it is hard to tell paying customers that they are not getting the features yet.

13:25.560 --> 13:27.560
I'm the product owner.

13:27.560 --> 13:30.560
I face this on a daily basis; it's not nice.

13:30.560 --> 13:33.560
But it's much harder to tell them that they have to fix problems

13:33.560 --> 13:37.560
you caused and didn't test.

13:37.560 --> 13:41.560
Also, SELinux: yes, it's hard to get into.

13:41.560 --> 13:45.560
It's still on my to-do list to go through the learning materials

13:45.560 --> 13:48.560
to understand how to write the policies.

13:48.560 --> 13:51.560
But if it's a supported use case for your program,

13:51.560 --> 13:55.560
develop and test with it; that's the only responsible way to deal with it.

13:55.560 --> 13:58.560
That also goes for other constrained systems,

13:58.560 --> 14:00.560
like FIPS or other compliance modes.

14:00.560 --> 14:03.560
And you should test for the denials automatically

14:03.560 --> 14:06.560
if you need to work with SELinux.

14:06.560 --> 14:10.560
If it's done by humans, something will slip through,

14:10.560 --> 14:13.560
and it will look like nothing happened.

14:13.560 --> 14:15.560
And code reviews.

14:15.560 --> 14:19.560
Well, everyone likes to focus on a code style or how the methods are split.

14:19.560 --> 14:21.560
That's the easy way to do the review.

14:21.560 --> 14:25.560
But performing a thorough behavior verification,

14:25.560 --> 14:27.560
it's much harder.

14:27.560 --> 14:31.560
But it's much more rewarding when you find some bug that wouldn't be caught.

14:31.560 --> 14:34.560
And developers, when they are doing the PR review,

14:34.560 --> 14:36.560
should put on the QA hat as well,

14:36.560 --> 14:40.560
because the quality engineers will, by definition,

14:40.560 --> 14:43.560
never discover all the edge cases contained in the source code.

14:43.560 --> 14:45.560
Their know-how is somewhere else;

14:45.560 --> 14:48.560
it's not in knowing all the various paths the code can take.

14:48.560 --> 14:51.560
So we have corrected a lot since then.

14:51.560 --> 14:53.560
The systemd service file

14:53.560 --> 14:55.560
was fixed; it landed in the RHEL 10 beta and in 9.5,

14:55.560 --> 14:57.560
about five months ago.

14:57.560 --> 14:59.560
And we plan to do more backports.

14:59.560 --> 15:01.560
I think the Bugzilla for that should be public,

15:01.560 --> 15:04.560
so you can look it up. We have a new

15:04.560 --> 15:06.560
and maintainable integration test suite for it

15:06.560 --> 15:08.560
that catches the regressions.

15:08.560 --> 15:11.560
The automated SELinux checks are still not there,

15:11.560 --> 15:12.560
which does bug me,

15:12.560 --> 15:15.560
but it's tracked in a well-defined manner,

15:15.560 --> 15:18.560
And we hope to tackle it in the next few months,

15:18.560 --> 15:22.560
and engineering now also has access to the usage trends.

15:22.560 --> 15:25.560
So when something seems wrong,

15:25.560 --> 15:28.560
we can look at the data and see:

15:28.560 --> 15:30.560
is our feeling just wrong,

15:30.560 --> 15:32.560
or did something actually happen

15:32.560 --> 15:33.560
that's not correct?

15:33.560 --> 15:36.560
So what does the final fix look like?

15:36.560 --> 15:38.560
It was basically a one-line change.

15:38.560 --> 15:42.560
Instead of creating the temporary directory in /tmp,

15:42.560 --> 15:46.560
we created the GnuPG home in a location that was already present

15:46.560 --> 15:48.560
in the SELinux policy.

15:48.560 --> 15:50.560
I think it's somewhere under /var,

15:50.560 --> 15:52.560
or something like that.

15:52.560 --> 15:56.560
Otherwise the code stayed the same, and it works.
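
NOTE
The essence of that one-line change, sketched under the assumption that the new parent is a path already covered by the SELinux policy; the talk doesn't name the exact directory, so it's left as a parameter here, and the function name is hypothetical:
```python
import tempfile
def make_gpg_home(parent: str) -> tempfile.TemporaryDirectory:
    # Before: tempfile.TemporaryDirectory() with no dir= argument,
    # which lands in /tmp and gets the client's own tmp label.
    # After: point dir= at a directory whose SELinux label gpg_t is
    # already allowed to use (somewhere under /var, per the talk).
    return tempfile.TemporaryDirectory(prefix="gnupg-", dir=parent)
```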

15:56.560 --> 15:58.560
And by the way, I'm not telling this story to blame anyone;

15:58.560 --> 16:00.560
I'm the one who approved the patch,

16:00.560 --> 16:01.560
so that's on me.

16:01.560 --> 16:04.560
And as the tech lead for the RHEL component,

16:04.560 --> 16:09.560
fun fact: I took over one week before this all started.

16:09.560 --> 16:13.560
So it was very harsh, but it taught me a lot.

16:13.560 --> 16:17.560
And as I said, I'm the PO for the whole area,

16:17.560 --> 16:22.560
and I think this is something that should be shared with the public.

16:22.560 --> 16:27.560
so it hopefully doesn't happen to others as well.

16:27.560 --> 16:29.560
So that's it.

16:29.560 --> 16:31.560
You can contact me via email,

16:31.560 --> 16:34.560
or if you want to contact me during FOSDEM,

16:34.560 --> 16:36.560
just catch me in person.

16:36.560 --> 16:40.560
At FOSDEM, just catch me in the hallways,

16:40.560 --> 16:42.560
or ping me on Matrix.

16:42.560 --> 16:43.560
That's it.

16:43.560 --> 16:45.560
Thank you very much.

16:50.560 --> 16:51.560
Question time.

16:51.560 --> 16:52.560
Are there any?

16:52.560 --> 16:53.560
Please.

16:53.560 --> 16:54.560
I have a question.

16:54.560 --> 16:56.560
So I was always wondering,

16:56.560 --> 16:58.560
how does development communicate

16:58.560 --> 16:59.560
with the SELinux folks,

16:59.560 --> 17:01.560
and what the feedback loop is like,

17:02.560 --> 17:03.560
like for example,

17:05.560 --> 17:08.560
like, how does it work?

17:08.560 --> 17:09.560
Right.

17:09.560 --> 17:11.560
How do developers communicate with the SELinux team?

17:11.560 --> 17:12.560
Well, for us,

17:12.560 --> 17:14.560
we have a monthly sync in our calendars.

17:14.560 --> 17:16.560
So we put our agenda there,

17:16.560 --> 17:17.560
our questions.

17:17.560 --> 17:18.560
Then we meet them,

17:18.560 --> 17:21.560
and we get our answers from them.

17:21.560 --> 17:23.560
But if there's something urgent,

17:23.560 --> 17:24.560
we ping them on Slack,

17:24.560 --> 17:26.560
and we have our contacts,

17:26.560 --> 17:27.560
we are friends.

17:27.560 --> 17:29.560
So when there's something critical,

17:29.560 --> 17:31.560
we can always talk to them directly,

17:31.560 --> 17:34.560
even outside of the scheduled agenda.

17:36.560 --> 17:37.560
Yes, please.

17:37.560 --> 17:38.560
I really like SELinux,

17:38.560 --> 17:40.560
and I use it for my own home server.

17:40.560 --> 17:41.560
But at work,

17:41.560 --> 17:43.560
I'm finding it very difficult to convince people

17:43.560 --> 17:44.560
to implement it,

17:44.560 --> 17:47.560
especially since most applications,

17:47.560 --> 17:48.560
I work in telecoms,

17:48.560 --> 17:50.560
so mostly charging and billing applications,

17:50.560 --> 17:52.560
they don't really like SELinux.

17:52.560 --> 17:56.560
So did you find a way to automate

17:56.560 --> 17:58.560
some of this policy creation,

17:58.560 --> 18:01.560
other than audit2allow?

18:01.560 --> 18:03.560
How to convince others to use SELinux,

18:03.560 --> 18:06.560
and how to create the policies?

18:06.560 --> 18:08.560
I don't have any clear answer on that.

18:08.560 --> 18:12.560
We are solving that for some of our other packages.

18:12.560 --> 18:13.560
audit2allow can

18:13.560 --> 18:14.560
give you something,

18:14.560 --> 18:15.560
but it's not enough.

18:15.560 --> 18:17.560
So for that,

18:17.560 --> 18:19.560
we created some Jira cards that are shared

18:19.560 --> 18:21.560
between our team and the SELinux team,

18:21.560 --> 18:23.560
and they help us write the policies

18:23.560 --> 18:25.560
and ensure they are actually correct

18:25.560 --> 18:27.560
and we didn't miss anything.

18:27.560 --> 18:29.560
So it's hard,

18:29.560 --> 18:32.560
but it has its value, as you said.

18:32.560 --> 18:33.560
Yeah.

18:33.560 --> 18:35.560
If I may:

18:35.560 --> 18:38.560
I maintain SELinux at Red Hat,

18:38.560 --> 18:41.560
and we can maybe have a chat afterwards,

18:41.560 --> 18:43.560
because if you want to confine

18:43.560 --> 18:44.560
containers, for example,

18:44.560 --> 18:47.560
you can generate an SELinux policy from scratch.

18:47.560 --> 18:50.560
That's one of the options: put your services

18:50.560 --> 18:51.560
into containers,

18:51.560 --> 18:53.560
and we have some kind of base policies.

18:53.560 --> 18:55.560
There are tools that can help with that,

18:55.560 --> 18:57.560
and we can talk about that,

18:57.560 --> 18:59.560
though it's easy to write the policies in the wrong way.

18:59.560 --> 19:02.560
So, to repeat on the microphone:

19:02.560 --> 19:04.560
the SELinux maintainer is over there.

19:04.560 --> 19:07.560
He'll help you with anything you need.

19:07.560 --> 19:09.560
Any other questions?

19:09.560 --> 19:11.560
Yes.

19:11.560 --> 19:14.560
So in the title of the presentation,

19:14.560 --> 19:15.560
you talked about,

19:15.560 --> 19:16.560
it said,

19:16.560 --> 19:17.560
a story about technical debt.

19:17.560 --> 19:20.560
I'm not sure which part was the technical debt.

19:21.560 --> 19:23.560
So it was in several bits.

19:23.560 --> 19:25.560
One was the Restart=no

19:25.560 --> 19:28.560
declaration in the systemd service.

19:28.560 --> 19:30.560
It should have been fixed.

19:30.560 --> 19:32.560
It should have been there since the beginning,

19:32.560 --> 19:34.560
but I wasn't on the team yet.

19:34.560 --> 19:36.560
So I don't know what happened back then.

19:36.560 --> 19:39.560
The other part was that our integration

19:39.560 --> 19:41.560
test suite for the tool was very old,

19:41.560 --> 19:44.560
and we were about to create a new one from scratch.

19:44.560 --> 19:45.560
A maintainable one.

19:45.560 --> 19:46.560
And we didn't have it yet,

19:46.560 --> 19:48.560
but we shipped a big change

19:48.560 --> 19:50.560
and we didn't know how

19:50.560 --> 19:53.560
threatening it could be.

19:53.560 --> 19:56.560
So that's one of the other things that we fixed.

19:56.560 --> 19:58.560
Now we have a new test suite

19:58.560 --> 20:01.560
that works on all the older releases

20:01.560 --> 20:02.560
we need to test.

20:02.560 --> 20:04.560
It's written in readable Python.

20:04.560 --> 20:07.560
It has abstractions that make sense.

20:07.560 --> 20:09.560
And,

20:09.560 --> 20:11.560
yeah,

20:11.560 --> 20:13.560
there is a lot of tech debt that we can talk about

20:13.560 --> 20:14.560
later if you want.

20:14.560 --> 20:16.560
Or you can just read the source code,

20:16.560 --> 20:18.560
and I'll be happy to help you find the rest.

20:22.560 --> 20:24.560
If there's nothing else,

20:24.560 --> 20:26.560
thank you again.

