WEBVTT

00:00.000 --> 00:29.000
So, hi everyone, my name is Dan, I'm here to talk to you about something which is becoming more

00:29.000 --> 00:35.080
boring these days, just another attempt at fraud prevention, but I'm sure it should not

00:35.080 --> 00:39.560
be your case because your network is secure.

00:39.560 --> 00:47.160
So it's just something, you know, I heard happened to someone else, nothing to panic about.

00:47.160 --> 00:51.560
About our company we will skip, again, because by now everybody should know that we are

00:51.560 --> 00:58.680
the smartest, producing the most stable software, you know, so I have to skip ahead because

00:58.680 --> 01:00.680
of the timing.

01:00.680 --> 01:04.680
I want to talk about CGRateS. What is CGRateS?

01:04.680 --> 01:10.120
It's an enterprise billing suite, we call it like that; in the end, it's a framework.

01:10.120 --> 01:17.000
It started as an online charging system, an OCS as people call it in telecom,

01:17.000 --> 01:27.360
and it evolved towards a framework of different services and they are all related.

01:27.360 --> 01:32.480
So it's not that we keep putting functionality inside because we don't know what to do in

01:32.480 --> 01:40.400
our spare time, it's because these things ask for each other.

01:40.400 --> 01:50.960
It's all open source, born in 2010, published in 2012, full sources are available on GitHub,

01:50.960 --> 01:59.040
100% Go, so imagine, in 2010 when we started, Go had kind of weekly releases, so we might

01:59.040 --> 02:04.480
be among the oldest Golang developers, you know.

02:04.480 --> 02:11.360
No add-ons in private repositories, and by now we have more than 400,000 lines of code

02:11.360 --> 02:18.400
in Go, which, being a compact language, means quite some functionality there, and we do have

02:18.400 --> 02:21.520
consideration for community contributions.

02:21.520 --> 02:26.880
We actively support three branches, which became like three versions of the software nowadays.

02:26.880 --> 02:34.800
It's performance-oriented, because we started as an OCS, and it has a modular architecture

02:34.800 --> 02:41.280
so you can take components out, put your own in, should you want to avoid the licensing

02:41.280 --> 02:43.280
problems.
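A minimal sketch of what such a modular, swappable design can look like in Go; the interface and type names here are illustrative stand-ins, not CGRateS's actual internals:

```go
package main

import "fmt"

// Rater is a hypothetical component boundary: anything that can price
// a call duration in seconds can be plugged into the engine.
type Rater interface {
	Rate(seconds float64) float64
}

// FlatRater is a stand-in built-in component charging per second.
type FlatRater struct{ PerSecond float64 }

func (f FlatRater) Rate(seconds float64) float64 { return seconds * f.PerSecond }

// FreeRater is a stand-in replacement a user might supply instead,
// e.g. to sidestep a component they do not want to license.
type FreeRater struct{}

func (FreeRater) Rate(seconds float64) float64 { return 0 }

// bill works against the interface, so components can be taken out
// and replaced without touching the caller.
func bill(r Rater, seconds float64) float64 { return r.Rate(seconds) }

func main() {
	fmt.Println(bill(FlatRater{PerSecond: 0.5}, 60)) // 30
	fmt.Println(bill(FreeRater{}, 60))               // 0
}
```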

02:44.000 --> 02:52.240
It's test-driven development software, so we like to write tests before, or right after, we have done

02:52.240 --> 02:55.440
the functionality.
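In Go that style typically means table-driven tests next to the code they cover; a generic sketch of the idiom, not taken from the CGRateS test suite:

```go
package main

import "fmt"

// add is a trivial stand-in for functionality whose test is written
// before, or right after, the implementation.
func add(a, b int) int { return a + b }

// checkAdd mirrors Go's table-driven test style in runnable form; in
// a real repository this would live in a *_test.go file and use the
// testing package.
func checkAdd() error {
	cases := []struct{ a, b, want int }{
		{1, 2, 3},
		{0, 0, 0},
		{-1, 1, 0},
	}
	for _, c := range cases {
		if got := add(c.a, c.b); got != c.want {
			return fmt.Errorf("add(%d, %d) = %d, want %d", c.a, c.b, got, c.want)
		}
	}
	return nil
}

func main() {
	fmt.Println(checkAdd()) // <nil> when all cases pass
}
```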

02:55.440 --> 02:57.440
What can you do with it?

02:57.440 --> 03:04.480
Online charging system is the classic thing we started with, you have a lot of things like

03:04.480 --> 03:13.120
complex rating, bundles; for online, it works in mobile networks, we also work with tier 1

03:14.000 --> 03:21.280
MNOs, a high number of interfaces in and out, so you can integrate it with quite a number of

03:22.560 --> 03:29.120
appliances there; then it can work as a routing server, the popular LCR systems, you know,

03:29.120 --> 03:38.320
with softswitches; then, what I'm also about to talk about, is a fraud mitigation server,

03:39.200 --> 03:48.080
you have built-in mechanisms and components which focus on detecting fraud in your network,

03:49.200 --> 03:56.560
it's the kind of fraud related to billing, so we don't go into sniffing your network for

03:56.560 --> 04:03.120
IP packets, we focus on what comes out of your real-time events or what comes out

04:03.120 --> 04:15.200
of CDRs. A statistics server, again important, it has a lot of applications in marketing,

04:15.200 --> 04:27.680
in routing also, in observability of your network, so we provide that kind of component as well,

04:27.760 --> 04:34.880
a resource utilization control server, so you can have virtual channels, since there are no physical

04:34.880 --> 04:42.720
channels anymore. A mediation server, if you have to enhance your CDRs and modify their content,

04:42.720 --> 04:49.920
or be a CDR source, which is again popular: you can do parallel billing to what you have today

04:49.920 --> 04:56.800
or put a server there which is sniffing, this one is a kind of network sniffer, producing CDRs

04:56.800 --> 05:03.760
and checking in real-time against what you have already; if you have doubts whether your billing system works

05:03.760 --> 05:15.680
accurately, you can add this in parallel and have the functionality. Today I did a sort of,

05:15.680 --> 05:21.760
this is the only nice-looking slide I have, because as a billing system we are sitting

05:21.760 --> 05:28.560
somewhere in your basement, nobody sees it, it's there and nobody wants to touch it, so this is kind

05:28.560 --> 05:36.800
of the best I could get out of myself. You can see on the left side here communication devices doing

05:36.800 --> 05:45.760
their thing, making use of your resources and producing either real-time events or CDRs, sending them

05:45.760 --> 05:55.920
to us, to the components which we call SessionS and CDRs; then SessionS, by processing

05:55.920 --> 06:03.360
them, will push them into another subsystem named StatS. Stats are built again in real-time,

06:03.360 --> 06:11.680
but their nature is to be like only the current instance, so they are not available if you

06:11.680 --> 06:18.960
want to query them, what were my stats 10 minutes ago, you cannot, because stats are always

06:18.960 --> 06:25.040
real-time. All our components work in memory; because of the speed we have to deliver,

06:25.040 --> 06:32.080
everything is in memory, there is no SELECT from MySQL where the average call duration or

06:32.080 --> 06:36.960
answer time was this and stuff like that, no, because that would be slow; it's not the target

06:37.520 --> 06:43.440
we are aiming at. Just for your information, we are doing about 7,000 requests per second per unit,

06:43.440 --> 06:51.040
and our RPC goes up to 35,000 requests per second, so we are really targeting high

06:51.040 --> 07:00.320
speeds. And these stats being in memory, we need to somehow archive them and also make use of

07:00.400 --> 07:08.240
the logic behind them, so this is why we wrote a new subsystem, which is TrendS; we just pushed

07:08.240 --> 07:15.520
it like a few months ago, this is the first conference where we talk about it, and this TrendS

07:15.520 --> 07:23.840
will regularly query StatS, it's very similar to the approach of Prometheus, for example,

07:24.400 --> 07:35.120
and archive these stats in a timely manner, in a sort of time-based database, it's again

07:35.120 --> 07:42.480
all in memory; and then TrendS can push information outside, and that would be

07:43.200 --> 07:52.320
to our ThresholdS, which is the subsystem able to read these events from TrendS and take

07:52.320 --> 07:59.280
actions on them, which is part of the fraud detection, or they can send it to another subsystem,

07:59.280 --> 08:05.120
EEs, which is our event exporter, and this EEs has access to many, many interfaces, including

08:05.120 --> 08:12.320
I don't know, Amazon SQS, RabbitMQ, Kafka or whatever, so there are many interfaces out where

08:12.320 --> 08:22.240
you can make use of these trends. In terms of TrendS, it's an archiver for stat metrics,

08:23.200 --> 08:32.960
it queries, on an interval, the stat metrics, and it has some fine-tuning parameters like time to

08:32.960 --> 08:41.600
live, limits on the number of metrics archived; it can also optionally be stored into a database

08:41.600 --> 08:49.840
which usually is NoSQL, like Redis, to be again fast, and it computes the trend growth

08:49.920 --> 08:56.480
and trend label, I'll show you an example so you understand it easier. In terms of the trend

08:56.480 --> 09:03.120
computation, the metrics must be numbers; you can have correlation types like average or last,

09:03.120 --> 09:09.200
and it can be exported, as I showed you, towards another subsystem. These are the configuration options,

09:09.200 --> 09:15.760
but I'll skip them too because we have some sample API data which you can understand better,

09:16.320 --> 09:25.200
so a trend profile is our configuration and it contains the schedule, how often we will query the

09:25.200 --> 09:31.920
stats, the statistics; it contains what sort of statistics we will query. Normally a trend

09:25.200 --> 09:31.920
profile and a statistics profile are one-to-one, so you can only configure here one

09:37.120 --> 09:43.120
stat ID, and you can filter what metrics you are archiving, because you might not want

09:43.120 --> 09:50.000
to archive all the metrics. Then you have a correlation type, in this case it is last; since we have a

09:50.000 --> 09:58.000
queue of statistics, we need to make sure that we know what we correlate to, so it can be from the

09:58.000 --> 10:04.400
last statistics, so the last two statistics in the past, or it can be an average between all of them,

10:05.040 --> 10:13.840
and then it can have a tolerance when issuing the label, and it can call the ThresholdS

10:13.840 --> 10:23.360
in order to have some detection mechanism for the fraud. This is an example of a trend, how a trend

10:23.360 --> 10:31.920
looks like; then you see it runs at points in time, and then it archives, at each of these points

10:31.920 --> 10:37.440
in time, some metrics; in this case it did the average call duration, it has a value,

10:38.400 --> 10:46.720
an unknown trend growth, so minus one means unknown, and an unknown trend label; and if we watch the

10:46.720 --> 10:55.200
second trend run, we already see a trend growth, this means minus 48 percent, so the average

10:55.200 --> 11:05.840
call duration went down, and a positive one, a trend growth of 150 percent, for the total call cost,

11:05.840 --> 11:17.040
so by sending this event to the ThresholdS, we can already make sense of them for the

11:17.040 --> 11:25.600
fraud, saying that if the trend label is negative for the average call duration and the trend label for

11:26.640 --> 11:32.880
total call cost is positive, something is wrong, they don't match up anymore, so to us it says

11:32.880 --> 11:40.000
we have to either look at it or escalate or take action, and just so you see I was not afraid about

11:40.000 --> 11:46.720
your schedule, I finished, thank you very much.
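
The trend mechanics from the talk (growth between runs, a label gated by a tolerance, and the ACD-down/TCC-up mismatch used for fraud detection) can be sketched in Go; the function names are illustrative, not the actual CGRateS TrendS or ThresholdS API:

```go
package main

import "fmt"

// trendGrowth returns the percentage change from the previous to the
// current value of a metric, e.g. 100 -> 52 is -48 percent.
func trendGrowth(prev, cur float64) float64 {
	return (cur - prev) / prev * 100
}

// trendLabel classifies a growth value, treating changes inside the
// configured tolerance (in percent) as constant.
func trendLabel(growth, tolerance float64) string {
	switch {
	case growth > tolerance:
		return "positive"
	case growth < -tolerance:
		return "negative"
	default:
		return "constant"
	}
}

// mismatch flags the pattern from the talk: average call duration
// trending down while total call cost trends up.
func mismatch(acdLabel, tccLabel string) bool {
	return acdLabel == "negative" && tccLabel == "positive"
}

func main() {
	acd := trendGrowth(100, 52)  // -48 percent
	tcc := trendGrowth(100, 250) // +150 percent
	fmt.Println(trendLabel(acd, 1), trendLabel(tcc, 1))           // negative positive
	fmt.Println(mismatch(trendLabel(acd, 1), trendLabel(tcc, 1))) // true: escalate
}
```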

