WEBVTT

00:00.000 --> 00:12.000
More than 80%, kind of, so that's great. So people, repeat this, because you will really enjoy it.

00:12.000 --> 00:16.800
Today we are going to learn how we convert a HAR file into an OpenTelemetry trace, and how

00:16.800 --> 00:21.200
that really helps our browser observability.

00:21.200 --> 00:25.800
Before I start a talk, I always like to say: what is the goal?

00:25.800 --> 00:30.800
We want to monitor the browser and build better, optimized web applications.

00:30.800 --> 00:34.800
That is the reason why we are here, and now we are going to understand how we are going

00:34.800 --> 00:37.800
to do it.

00:37.800 --> 00:41.800
For the agenda today, we are going to have a quick intro about OpenTelemetry, which

00:41.800 --> 00:44.800
is the tool that we are going to use for this project.

00:44.800 --> 00:49.800
For HAR files, some of you might know what they are, but let's go a little bit deeper,

00:49.800 --> 00:52.800
and first analyze what we are implementing.

00:52.800 --> 00:56.800
Later, we are going to explain how we are doing that conversion to OpenTelemetry,

00:56.800 --> 01:00.800
our architecture, how we are building it,

01:00.800 --> 01:04.800
and of course, we are going to end with the lessons learned from the project.

01:04.800 --> 01:07.800
So now you know the topic, let's meet also the speaker:

01:07.800 --> 01:10.800
Antonio, coming from Spain, working at Cisco ThousandEyes,

01:10.800 --> 01:13.800
and specialized in observability.

01:13.800 --> 01:15.800
Let's go to the intro.

01:15.800 --> 01:21.800
I would like to ask all of you to go to the following QR code, if the internet allows.

01:21.800 --> 01:25.800
And let me know what comes to your mind when you think about OpenTelemetry.

01:25.800 --> 01:31.800
When you hear about OpenTelemetry, what comes to your mind?

01:31.800 --> 01:35.800
So these early answers are coming from a rehearsal that I did before,

01:35.800 --> 01:39.800
but many people think this is mainly open source, observability,

01:39.800 --> 01:44.800
metrics, traces, logs. All of those are true in a sense.

01:44.800 --> 01:51.800
But the way I like to explain this is that it is an open-source

01:51.800 --> 01:53.800
observability framework.

01:53.800 --> 01:55.800
But this is not a single thing.

01:55.800 --> 01:57.800
It's tools, like the collector.

01:57.800 --> 02:00.800
I guess many of you know the OpenTelemetry Collector.

02:00.800 --> 02:04.800
APIs, SDKs — SDKs for the different languages,

02:04.800 --> 02:06.800
like Go, Java, and many more.

02:06.800 --> 02:09.800
It is mainly used to manage telemetry data,

02:09.800 --> 02:13.800
because we want to instrument, generate, and export that telemetry data,

02:13.800 --> 02:15.800
so we can make use of it.

02:15.800 --> 02:17.800
So this is OpenTelemetry, and I also would like to

02:17.800 --> 02:20.800
quickly run over the different signals.

02:20.800 --> 02:23.800
We have traces, metrics, logs, and profiling.

02:23.800 --> 02:25.800
We are going to focus today on traces.

02:25.800 --> 02:28.800
I hope all of you know about this.

02:28.800 --> 02:30.800
And now let's move on.

02:30.800 --> 02:32.800
Let's talk about the collector.

02:32.800 --> 02:35.800
The collector is one of the pieces that we are using in this project,

02:35.800 --> 02:39.800
and it is mainly used to receive data in a variety of different formats,

02:39.800 --> 02:43.800
coming from different sources. It could come from OTLP,

02:43.800 --> 02:46.800
Jaeger, Prometheus, but you can even implement your own component

02:46.800 --> 02:51.800
to read data from a database, or read data from a browser, or whatever.

02:51.800 --> 02:53.800
The reception is done in parallel,

02:53.800 --> 02:56.800
and then you go on to process that data.

02:56.800 --> 02:57.800
This is done sequentially.

02:57.800 --> 02:59.800
And here you can do many things also.

02:59.800 --> 03:01.800
You can filter out data that you don't want.

03:01.800 --> 03:04.800
You can enrich that data with extra attributes,

03:04.800 --> 03:06.800
you can batch it for performance reasons.

03:06.800 --> 03:09.800
And then, similar to receivers, we have exporters.

03:09.800 --> 03:12.800
We export that data out to the different observability backends,

03:12.800 --> 03:14.800
or even, as I said before, into a file.

03:14.800 --> 03:16.800
This is totally up to the developer.

03:16.800 --> 03:18.800
There is full flexibility here.

03:18.800 --> 03:21.800
And we have also the concept of extensions.

03:21.800 --> 03:25.800
They are components that serve the whole collector,

03:25.800 --> 03:30.800
and they are like helpers for the collector: profiling,

03:30.800 --> 03:34.800
memory, storage, and many more.
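
As a rough illustration of the receiver → processor → exporter → extension layout the talk describes, a minimal collector configuration could look like the sketch below. This is not the ThousandEyes setup; the endpoints and the `skip.export` attribute are placeholders made up for the example:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # receive OTLP over gRPC

processors:
  filter:                        # drop data points nobody will consume
    error_mode: ignore
    traces:
      span:
        - 'attributes["skip.export"] == true'
  batch: {}                      # batch for performance

exporters:
  otlp/backend:
    endpoint: backend.example.com:4317   # placeholder destination

extensions:
  health_check: {}               # readiness probe for the collector

service:
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [filter, batch]
      exporters: [otlp/backend]
```

Receivers run independently, the processors listed in a pipeline run sequentially, and each exporter fans the processed data out to its destination.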

03:35.800 --> 03:37.800
So now you know about the OpenTelemetry design.

03:37.800 --> 03:38.800
Let's talk about the HAR.

03:38.800 --> 03:41.800
A HAR is nothing else than the browser interaction.

03:41.800 --> 03:44.800
Many of you might have seen that picture before.

03:44.800 --> 03:47.800
Like, you are opening a web page,

03:47.800 --> 03:49.800
and then you see all the resources that are being loaded,

03:49.800 --> 03:51.800
all the requests that are being performed.

03:51.800 --> 03:52.800
This is the HAR file.

03:52.800 --> 03:54.800
But what is the problem of the HAR file?

03:54.800 --> 03:56.800
It contains sensitive data.

03:56.800 --> 03:58.800
Like, it can contain your cookies,

03:58.800 --> 04:00.800
it can contain your headers,

04:00.800 --> 04:02.800
with your password, authorization,

04:03.800 --> 04:05.800
user, email, all those things.

04:05.800 --> 04:06.800
So it's quite sensitive.

04:06.800 --> 04:08.800
Usually it's generated

04:08.800 --> 04:10.800
for debugging purposes.

04:10.800 --> 04:12.800
When you have a problem, you send your HAR file

04:12.800 --> 04:14.800
to another person who helps you there.

04:14.800 --> 04:16.800
It looks something like this.

04:16.800 --> 04:18.800
It's a list of requests,

04:18.800 --> 04:21.800
and each request is going to have a variety of different elements.

04:21.800 --> 04:23.800
Like the method, the URL that you're hitting,

04:23.800 --> 04:26.800
headers, body, response code, response,

04:26.800 --> 04:28.800
object, and many more.
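
For reference, a HAR file is just JSON with a `log.entries` array, one entry per request. A tiny sketch of reading the fields just listed, using an inline sample instead of a real capture (a real HAR has many more fields, including the headers discussed later):

```python
import json

# Minimal inline HAR sample for illustration only.
har_text = """
{
  "log": {
    "entries": [
      {
        "startedDateTime": "2024-01-01T00:00:00.000Z",
        "time": 123.4,
        "request": {"method": "GET", "url": "https://example.com/"},
        "response": {"status": 200}
      }
    ]
  }
}
"""

har = json.loads(har_text)
for entry in har["log"]["entries"]:
    req, resp = entry["request"], entry["response"]
    # Method, URL, response code, and timing: the raw material for a span.
    print(req["method"], req["url"], resp["status"], entry["time"])
```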

04:28.800 --> 04:31.800
Let's talk quickly about ThousandEyes.

04:31.800 --> 04:33.800
We are using those standards in ThousandEyes,

04:33.800 --> 04:35.800
which is a network as well as application monitoring platform.

04:35.800 --> 04:37.800
We monitor the network,

04:37.800 --> 04:39.800
and we have agents all around the globe.

04:39.800 --> 04:41.800
Those agents are acting as real users.

04:41.800 --> 04:45.800
So we have agents in Europe, Asia, America.

04:45.800 --> 04:49.800
And you can also have agents inside of your own infrastructure,

04:49.800 --> 04:53.800
because you might also want to know what's the problem inside of your infrastructure,

04:53.800 --> 04:56.800
for your employees or in your office.

04:56.800 --> 04:58.800
And all this data is in real time.

04:58.800 --> 05:00.800
Because you want to discover the problem

05:00.800 --> 05:02.800
At the time that it is happening,

05:02.800 --> 05:04.800
not like 5 or 10 minutes before.

05:04.800 --> 05:06.800
Sorry, after.

05:06.800 --> 05:08.800
Inside of ThousandEyes, I would like to emphasize

05:08.800 --> 05:10.800
the page load test.

05:10.800 --> 05:12.800
It's a test that is using the HAR file.

05:12.800 --> 05:15.800
It just renders our web page as if it was a user,

05:15.800 --> 05:18.800
and then captures all the HAR data.

05:18.800 --> 05:20.800
And it only lives inside of ThousandEyes.

05:20.800 --> 05:23.800
That's why we said, okay, let's convert that into a trace

05:23.800 --> 05:26.800
and send it over to our customers.

05:26.800 --> 05:29.800
And this is what we are going to discuss now.

05:29.800 --> 05:31.800
We have our HAR file,

05:31.800 --> 05:33.800
and we want to convert it into distributed tracing.

05:33.800 --> 05:35.800
Many of you might have seen this before.

05:35.800 --> 05:38.800
This is like all the spans, one after the other.

05:38.800 --> 05:40.800
They have a hierarchy between each other.

05:40.800 --> 05:43.800
And you may be thinking, how do you represent a HAR?

05:43.800 --> 05:49.800
We're going to have a root span that represents the URL that you are downloading.

05:49.800 --> 05:52.800
You're downloading, let's say, cisco.com,

05:52.800 --> 05:54.800
and you are getting that web page.

05:54.800 --> 05:56.800
That is going to be the root span.

05:56.800 --> 06:00.800
It's going to have the time that it starts and the time that it ends.

06:00.800 --> 06:04.800
And now, for every resource on the web page,

06:04.800 --> 06:06.800
we're going to have a span.

06:06.800 --> 06:08.800
That span is going to have HTTP data:

06:08.800 --> 06:13.800
it's going to have the URL, the method, all this info

06:13.800 --> 06:14.800
that appears in the HAR file.

06:14.800 --> 06:17.800
We're going to convert it into an OpenTelemetry span.

06:17.800 --> 06:20.800
And all of those are going to be children of the root,

06:20.800 --> 06:23.800
because there is no dependency between those spans.

06:24.800 --> 06:26.800
The dependency is only with the root.

06:26.800 --> 06:29.800
And the cool part is you also have the time of each span,

06:29.800 --> 06:34.800
so you can see the performance of your web page perfectly.
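
The root-plus-children mapping just described can be sketched in a few lines. This is an illustrative conversion to plain span dictionaries, not the actual ThousandEyes pipeline (which emits OTLP); the entry field names and millisecond timings are made up for the sketch:

```python
import uuid

def har_to_spans(page_url, entries):
    """Map a HAR page to one root span plus one child span per request."""
    trace_id = uuid.uuid4().hex
    root = {
        "trace_id": trace_id,
        "span_id": uuid.uuid4().hex[:16],
        "parent_id": None,  # the root has no parent
        "name": f"GET {page_url}",
        # The root covers the whole page load: earliest start to latest end.
        "start": min(e["start"] for e in entries),
        "end": max(e["start"] + e["duration"] for e in entries),
    }
    children = [
        {
            "trace_id": trace_id,
            "span_id": uuid.uuid4().hex[:16],
            # Every request hangs directly off the root span:
            # there is no dependency between the requests themselves.
            "parent_id": root["span_id"],
            "name": f'{e["method"]} {e["url"]}',
            "start": e["start"],
            "end": e["start"] + e["duration"],
        }
        for e in entries
    ]
    return [root] + children

# Toy entries, times in milliseconds relative to the page load start.
entries = [
    {"method": "GET", "url": "https://example.com/", "start": 0, "duration": 800},
    {"method": "GET", "url": "https://example.com/app.js", "start": 900, "duration": 400},
]
spans = har_to_spans("https://example.com/", entries)
```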

06:34.800 --> 06:40.800
Each span is going to have things like the name, which contains the method and the URL.

06:40.800 --> 06:43.800
The kind is going to be always client, because we are doing a request.

06:43.800 --> 06:45.800
We are doing a request to our web page,

06:45.800 --> 06:47.800
we are doing a request to our server.

06:47.800 --> 06:51.800
We have also, as I said before, the start time and end time.

06:51.800 --> 06:55.800
The status: if there was any error, we are going to set the status to error.

06:55.800 --> 06:58.800
And of course, it's going to have the span ID.

06:58.800 --> 07:02.800
And now let's talk about which attributes we put in those spans.

07:02.800 --> 07:05.800
Here is a key pillar of OpenTelemetry: the semantic conventions.

07:05.800 --> 07:10.800
They don't only say how you have to write down your attributes and what the format is;

07:10.800 --> 07:12.800
they also provide extra meaning.

07:12.800 --> 07:15.800
And for me, this is the most important thing about semantic conventions,

07:15.800 --> 07:20.800
because it teaches me that "method" means the method that you have used to make that request.

07:20.800 --> 07:26.800
And not only that, it also explains to you how it is going to be read by every observability backend,

07:26.800 --> 07:29.800
so everyone is going to understand it perfectly.

07:29.800 --> 07:32.800
So we are following the OpenTelemetry

07:32.800 --> 07:38.800
required attributes, like the method, the server address, the full URL, and the status code.

07:38.800 --> 07:41.800
And those are key to represent a request, if you think about it.

07:43.800 --> 07:45.800
And you may also be thinking about headers.

07:45.800 --> 07:48.800
Headers are optional in the convention,

07:48.800 --> 07:51.800
and they are represented as a list, as an array;

07:51.800 --> 07:53.800
they represent the list of headers.

07:53.800 --> 07:56.800
But the problem is that they might contain sensitive information.

07:56.800 --> 08:02.800
For that reason, in ThousandEyes we have decided not to include them, because that could lead into PII problems.

08:02.800 --> 08:05.800
So they are not being added.
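
Putting the convention names together: a sketch of the attributes one such client span might carry, plus the header-dropping rule just mentioned. The attribute keys follow the OpenTelemetry HTTP semantic conventions; the `scrub` helper itself is made up for illustration, not ThousandEyes code:

```python
def scrub(har_entry):
    """Build span attributes from a HAR entry, deliberately skipping the
    headers (cookies, Authorization, etc.) to avoid leaking PII."""
    return {
        # Keys from the OpenTelemetry HTTP semantic conventions.
        "http.request.method": har_entry["request"]["method"],
        "url.full": har_entry["request"]["url"],
        "http.response.status_code": har_entry["response"]["status"],
        # NOTE: har_entry["request"]["headers"] is intentionally ignored.
    }

attrs = scrub({
    "request": {
        "method": "GET",
        "url": "https://example.com/",
        "headers": [{"name": "Cookie", "value": "session=secret"}],
    },
    "response": {"status": 200},
})
```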

08:05.800 --> 08:08.800
Then we have also the concept of resource in a span.

08:08.800 --> 08:11.800
A resource is nothing else than who is generating that telemetry data.

08:11.800 --> 08:14.800
For us, it's the ThousandEyes test and the agent.

08:14.800 --> 08:16.800
So we are going to have all the information.

08:16.800 --> 08:19.800
the ThousandEyes test ID, the name, and the type.

08:19.800 --> 08:24.800
Also the agent: as I said before, ThousandEyes has agents all around the world or in your infrastructure.

08:24.800 --> 08:26.800
So those agents are going to have a location,

08:26.800 --> 08:28.800
an agent ID, and an agent name,

08:28.800 --> 08:33.800
so you can identify who is sending the query to that web page.

08:33.800 --> 08:38.800
This is how a trace is going to look, just in JSON.

08:38.800 --> 08:41.800
So you're going to have all the resource attributes,

08:41.800 --> 08:45.800
and after that, you're going to have the root span.

08:45.800 --> 08:48.800
After the root span, you're going to have all the child spans.

08:48.800 --> 08:52.800
Quite simple, just a trace in JSON.

08:52.800 --> 08:56.800
Now let's dig a little bit more into our architecture:

08:56.800 --> 09:00.800
How we are building that, how we are transforming the data.

09:00.800 --> 09:05.800
We have two collectors. As I said before, we are using the OpenTelemetry Collector for that.

09:05.800 --> 09:11.800
So we have two collectors: one is for reading the HAR data and converting it into a trace,

09:11.800 --> 09:14.800
and the other one is mainly for routing that data.

09:14.800 --> 09:16.800
I want to mention something else.

09:16.800 --> 09:19.800
We have the concept of integrations in ThousandEyes. An integration

09:19.800 --> 09:23.800
says which data you want to send to which observability backend.

09:23.800 --> 09:29.800
And that's quite important; that's why we have the integrations collector, which sends that data to that observability backend.

09:29.800 --> 09:33.800
That could be Splunk, Dynatrace, Grafana, any other.

09:34.800 --> 09:39.800
And the reason why we have two collectors is because they are going to scale out at a different pace.

09:39.800 --> 09:45.800
On the left side, you have the trace collectors, which are going to scale depending on how much data you receive from Kafka.

09:45.800 --> 09:54.800
On the right side, the integrations collector is going to scale based on how many integrations you have,

09:54.800 --> 10:02.800
because for each integration, we are going to have an OpenTelemetry OTLP exporter sending the data to the destination.

10:02.800 --> 10:05.800
Let's zoom in on each of the collectors.

10:05.800 --> 10:07.800
First, we're going to receive the data from Kafka.

10:07.800 --> 10:12.800
We're going to transform that data into an OpenTelemetry trace, as we just explained before.

10:12.800 --> 10:16.800
Then, in the processors, we're going to filter that data.

10:16.800 --> 10:20.800
Why? Because maybe a data point is not going to be sent to any observability backend.

10:20.800 --> 10:25.800
We don't want to go through all the processors and exporters if it is not going to be sent,

10:25.800 --> 10:27.800
so we filter it out there.

10:27.800 --> 10:31.800
We also have the attributes processor to enrich that data,

10:31.800 --> 10:35.800
because what comes from Kafka is nothing else than the test ID and the agent ID.

10:35.800 --> 10:39.800
So you're missing the test name, you're missing, sorry,

10:39.800 --> 10:44.800
the agent name; all that data is added at that processing time.

10:44.800 --> 10:48.800
And we also have a batching processor, for performance reasons.

10:48.800 --> 10:51.800
And then we export the data to the next collector.

10:51.800 --> 10:56.800
And in the next collector, what we're doing is receiving the data, and we have our integrations processor.

10:56.800 --> 10:58.800
What are we doing here?

10:58.800 --> 11:02.800
Imagine a data point that the customer wants to be sent to Grafana and Dynatrace.

11:02.800 --> 11:08.800
So we need to duplicate that data point here and inject a new attribute, which is the trace ID.

11:08.800 --> 11:10.800
Sorry, the integration ID.

11:10.800 --> 11:16.800
We add that attribute, and then in the routing exporter, the only thing it does is look at the integration ID.

11:16.800 --> 11:20.800
It says, okay, that goes to Grafana, and then it just exports that data.

11:20.800 --> 11:24.800
So if you think about it, it's like a router.
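
The duplicate-tag-route behaviour can be modelled like this. A toy sketch, not the actual collector code: the `integration.id` attribute name comes from the talk, everything else is invented for illustration:

```python
def fan_out(span, integration_ids):
    """Duplicate one data point per destination integration, tagging each copy."""
    copies = []
    for iid in integration_ids:
        copy = dict(span)
        # New attributes dict so the original span is left untouched.
        copy["attributes"] = {**span.get("attributes", {}), "integration.id": iid}
        copies.append(copy)
    return copies

def route(span, exporters):
    """Routing step: look only at integration.id and hand the span over."""
    iid = span["attributes"]["integration.id"]
    exporters[iid].append(span)  # stand-in for an OTLP export call

exported = {"grafana": [], "dynatrace": []}
span = {"name": "GET https://example.com/", "attributes": {}}

# One incoming data point, two destination integrations.
for copy in fan_out(span, ["grafana", "dynatrace"]):
    route(copy, exported)
```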

11:25.800 --> 11:28.800
And now I want to mention also the extensions.

11:28.800 --> 11:32.800
We have an extension for the health check, to know when the collector is ready;

11:32.800 --> 11:36.800
storage, to keep all the cached data about the tests, agents, and so on;

11:36.800 --> 11:37.800
and the event bus.

11:37.800 --> 11:42.800
It's quite useful because every time that something happens in our collector, we send,

11:42.800 --> 11:44.800
it is a pub-sub system,

11:44.800 --> 11:46.800
we send an event.

11:46.800 --> 11:50.800
And then another component might receive that event, like the metrics collector.

11:50.800 --> 11:52.800
Sorry, the metrics extension.

11:52.800 --> 11:58.800
We also have the Sentry extension, which intercepts every panic and error

11:58.800 --> 12:02.800
and sends it to the Sentry platform.

12:02.800 --> 12:05.800
Okay, so now you know about all of this.

12:05.800 --> 12:08.800
So let's go in deep and see an example.

12:08.800 --> 12:15.800
In ThousandEyes, we're going to create a simple page load test that goes to cisco.com, as we said before.

12:16.800 --> 12:23.800
That ThousandEyes test is going to contain data like how much time it takes to load that web page, how many components.

12:23.800 --> 12:28.800
We are running tests periodically, like every minute, or every number of seconds you decide.

12:28.800 --> 12:34.800
And then it also generates what I mentioned before, the HAR data, which is the key of this presentation.

12:34.800 --> 12:37.800
But that data only lives in ThousandEyes.

12:37.800 --> 12:41.800
That's why we need to export it to the different observability backends.

12:42.800 --> 12:47.800
We create that integration, as I said before, where we specify where we are sending that data.

12:47.800 --> 12:51.800
In this case, it's going to be Jaeger, using gRPC.

12:51.800 --> 12:56.800
As I am running it locally, I don't have to put in any authentication,

12:56.800 --> 12:59.800
but you could add it for your observability backend.

12:59.800 --> 13:02.800
And then you select also the test data that you want to send.

13:02.800 --> 13:05.800
And that's all. You don't need anything else.

13:05.800 --> 13:07.800
And now we are in Jaeger,

13:07.800 --> 13:10.800
and we can start seeing our data and our traces as we are familiar with.

13:10.800 --> 13:13.800
So we search for traces, and this is one for cisco.com.

13:13.800 --> 13:19.800
And you can see all the spans that represent all the resources and requests of our web page.

13:19.800 --> 13:29.800
Each span, as we agreed before, is going to have all the semantic conventions, like the HTTP status code, method, full URL.

13:29.800 --> 13:35.800
And it's going to have also all the resource attributes, like the ThousandEyes test ID, test name,

13:36.800 --> 13:39.800
agent name, and all those details.

13:39.800 --> 13:43.800
And you may be thinking, whatever, what is the advantage?

13:43.800 --> 13:45.800
You have the same that you had before in ThousandEyes, right?

13:45.800 --> 13:50.800
This is not true, because now you have all the flexibility and functionality of any observability backend.

13:50.800 --> 13:56.800
If you think about it, this is a screenshot from before;

13:56.800 --> 14:04.800
now you can do things like: I want to filter only for the traces that have an error, or a status code 400.

14:04.800 --> 14:11.800
And now you go to your web page and you find that there is a resource that has that error, a bad request.

14:11.800 --> 14:16.800
So you are going to discover that you have customers having that problem.

14:16.800 --> 14:20.800
And this is quite useful because you can create alerts; you can see how often that happened.

14:20.800 --> 14:26.800
As you have the agent location, you can see if this is happening in Asia, or in Europe, or any other place.

14:26.800 --> 14:33.800
So this is quite useful because now you have all the capabilities of your observability backend to understand your traces.

14:33.800 --> 14:39.800
But not only that. You can say, okay, I want to see what the timings of my spans are.

14:39.800 --> 14:43.800
Remember, the time is going to be the time that you need to load your page.

14:43.800 --> 14:49.800
For cisco.com, it's taking around, I can't see from here, but around 5 seconds, 7 seconds.

14:49.800 --> 14:57.800
So I want to see why those are different. Are they different because they are in different regions? Are they different because there are more spans in one than in the other?

14:57.800 --> 15:01.800
So you have the capability in Jaeger to compare two traces.

15:01.800 --> 15:06.800
And that's quite cool. This is a huge trace, but if you're familiar with it, you're going to identify that.

15:06.800 --> 15:12.800
We see what's downloaded here and not here. Why? Are these sequential? If one failed, is the other not called?

15:12.800 --> 15:19.800
So you can find out those problems that your customers are hitting and you don't even realize.

15:19.800 --> 15:25.800
Now that we have seen all the process, the architecture and so on, let's go to the lessons learned.

15:25.800 --> 15:29.800
We are always using tracing for microservices talking to each other.

15:29.800 --> 15:35.800
If you think about it, this is another use case, and it's quite useful because our customers can take advantage of it.

15:35.800 --> 15:41.800
OpenTelemetry is super helpful for us. The community: we were having some bugs,

15:41.800 --> 15:45.800
they were solved; when we needed some component, they were helping us.

15:45.800 --> 15:53.800
Also the semantic conventions: what you find is mature enough to be used, and I recommend it to everyone. This is one of the pillars.

15:53.800 --> 15:58.800
And the OpenTelemetry Collector at the beginning was really hard to understand, really hard to use.

15:58.800 --> 16:04.800
But after a few sprints, you start creating your own components and playing on your own. It's quite mature.

16:04.800 --> 16:11.800
And let's go back to the beginning of the session. What is our goal? We want to understand the performance of our browser better,

16:11.800 --> 16:15.800
improve our monitoring, and identify performance issues.

16:15.800 --> 16:18.800
Thank you so much. I hope you enjoyed the session.

16:18.800 --> 16:27.800
I don't know if anyone has any questions; I will be more than happy to answer.

16:27.800 --> 16:33.800
Okay, so there are a few there. I will be at the door right after the meeting, sorry, the presentation,

16:33.800 --> 16:39.800
so I'm happy to answer any other questions in person.

16:39.800 --> 16:47.800
Thank you for your presentation. I'd like to ask you: how do you export the HAR files from your browser?

16:47.800 --> 16:54.800
The HAR? Okay, that's a detail from the first part.

16:54.800 --> 17:00.800
I assumed that everyone already has the HAR file. What we are doing here is: we have an agent,

17:00.800 --> 17:07.800
and that agent is collecting all the HAR data, converting it into a JSON, and sending it over to Kafka.

17:07.800 --> 17:12.800
And then we have our collector just reading from that Kafka. So that is the detail on the first part.

17:12.800 --> 17:19.800
And I suppose that is not the point; that's why I didn't go deeper. But yeah, it's up to you how you want to get the data into Kafka,

17:19.800 --> 17:21.800
or you can read it directly.

17:21.800 --> 17:25.800
As you said, you can create your own component, a receiver, that just loads the web page.

17:25.800 --> 17:28.800
You have the flexibility. Thank you. Thank you.

17:29.800 --> 17:34.800
Okay. Yep.

17:34.800 --> 17:39.800
With all the extensions that you can have,

17:39.800 --> 17:48.800
How is the integration between Magento and Google Cloud for telemetry data?

17:48.800 --> 17:50.800
What do you mean sorry, the integration?

17:50.800 --> 18:00.800
because there are various regions, from what I understood, and various extensions to process data.

18:00.800 --> 18:09.800
Magento does a lot of things under the hood and I was wondering if there was some integration in that.

18:09.800 --> 18:12.800
There might be some integration that I don't know of. As I mentioned in the session before,

18:12.800 --> 18:15.800
we have an agent that is acting as a user.

18:15.800 --> 18:18.800
So it's actually loading the data from Chrome.

18:18.800 --> 18:23.800
So we are extracting the data from a Chrome browser and we are getting it into a HAR.

18:23.800 --> 18:27.800
So we are not using any of those extensions that you're mentioning, but for sure there might be.

18:27.800 --> 18:32.800
Okay. I might be wrong.

18:32.800 --> 18:36.800
Anyone else? Sure. On the other side.

18:49.800 --> 18:54.800
Did you open source any of it? Like the conversion part to OTel?

18:54.800 --> 18:56.800
Yeah, I talked with the community before.

18:56.800 --> 19:00.800
Yeah, we are willing to help there and create that component.

19:00.800 --> 19:05.800
Again, it could be only a processor, because it depends on where you are getting the data from.

19:05.800 --> 19:11.800
It could be a processor that just converts a JSON, a HAR, into an OpenTelemetry trace,

19:11.800 --> 19:15.800
or it could be at receiver time. But we'd be more than happy to do it with the community.

19:15.800 --> 19:21.800
We have been working with the community before, but, for sure,

19:21.800 --> 19:26.800
getting a component in is quite a slow process, because there are many components already,

19:26.800 --> 19:28.800
and there are many that are doing similar things.

19:28.800 --> 19:33.800
So it's quite slow, but we are more than happy to contribute that to the community.

19:33.800 --> 19:38.800
That's totally part of the company's vision.

19:39.800 --> 19:45.800
So you're putting the HAR JSON on Kafka and then reading it with the Kafka receiver.

19:45.800 --> 19:54.800
Do you have a custom encoding extension to parse that JSON from HAR into OTLP?

19:54.800 --> 19:56.800
Correct. Absolutely. This is what we are doing.

19:56.800 --> 20:02.800
We are using a field in the Kafka receiver called encoding, and we implemented our own encoding extension.

20:02.800 --> 20:04.800
And that's really useful for us.

20:08.800 --> 20:13.800
Thanks for watching.

