WEBVTT

00:00.000 --> 00:15.000
Please welcome Nils. We've managed to solve a laptop problem.

00:16.000 --> 00:28.000
Nice to be here. I'm going to talk about the Sigsum project, and I'm going to apply it to software updates.

00:29.000 --> 00:42.000
Our starting point is digital signatures. I imagine some of you release software, and you

00:42.000 --> 00:52.000
probably sign your releases. Others install software and hopefully verify the signatures, which is a little

00:52.000 --> 01:00.000
hard to do manually. The good thing is that lots of tools, if you do apt-get update and upgrade,

01:00.000 --> 01:07.000
or if you use Firefox automatic updates, verify signatures for you and enforce

01:07.000 --> 01:17.000
valid signatures, which is all very good. But you wouldn't know if one of the involved private keys is compromised.

01:17.000 --> 01:25.000
That's kind of hard, and that's the issue that will be the focus of this talk.

01:25.000 --> 01:32.000
So I will spend quite a bit of the talk explaining the Sigsum system.

01:32.000 --> 01:40.000
Then I will get back to the software update use case in some detail and then do a summary.

01:40.000 --> 01:48.000
And I'm Nils. I have been working at Glasklar Teknik on Sigsum for two and a half years.

01:48.000 --> 01:54.000
I'm also the maintainer of the GNU Nettle library and some other stuff.

01:54.000 --> 02:03.000
So what is a transparency log? It's an append only log of entries.

02:03.000 --> 02:13.000
Internally it uses a Merkle tree to make things efficient, and it's intended to work like a public bulletin board with a special property:

02:13.000 --> 02:20.000
that once you post something on the board, it stays there forever, or at least until you destroy the entire board.

02:20.000 --> 02:25.000
You can't modify a notice or remove it once it is posted.
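
The append-only bulletin board can be sketched with a tiny Merkle-tree model. This is an illustration only: the leaf/node hashing prefixes follow RFC 6962, but the handling of odd nodes and the encoding are simplified and are not Sigsum's exact format.

```python
# Minimal sketch of the Merkle tree behind a transparency log
# (RFC 6962-style domain separation; simplified, illustrative only).
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # Leaves are hashed with a 0x00 prefix.
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # Interior nodes are hashed with a 0x01 prefix.
    return hashlib.sha256(b"\x01" + left + right).digest()

def root(entries: list[bytes]) -> bytes:
    """Compute the tree head over an append-only list of entries."""
    if not entries:
        return hashlib.sha256(b"").digest()
    level = [leaf_hash(e) for e in entries]
    while len(level) > 1:
        # Pair up nodes; an odd trailing node is promoted unchanged.
        nxt = [node_hash(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

log = [b"entry-1", b"entry-2"]
old_root = root(log)
log.append(b"entry-3")       # appending changes the root...
new_root = root(log)
assert old_root != new_root  # ...so any rewrite or removal is detectable
```

A single small root hash commits to the whole history, which is what makes the "can't modify or remove" property cheap to check.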

02:25.000 --> 02:33.000
Sigsum is one project in this space. It's a rather minimalistic transparency log.

02:33.000 --> 02:38.000
It's a free and open software project started roughly five years ago.

02:38.000 --> 02:50.000
Over the last year we have spent some effort on making sure that the specs are stable and the tools are in reasonable shape, so you can start using it.

02:50.000 --> 02:55.000
So, the setting here: we have automatic software updates.

02:55.000 --> 03:01.000
They are properly signed, but an attacker gets control of the release signing key.

03:01.000 --> 03:07.000
For example, due to key compromise or coercion of the legitimate keyholder.

03:07.000 --> 03:18.000
And then the attacker uses the signing key to sign malicious updates and delivers those to some small number of target users only: a targeted attack.

03:19.000 --> 03:29.000
And what can we use transparency for? We will not be able to prevent this kind of attack, but the point is to enable detection.

03:29.000 --> 03:36.000
And if we can detect it, we can try to recover, but that's kind of out of scope.

03:36.000 --> 03:47.000
We won't talk about what you actually do when you discover that there is a rogue update distributed to some of your users.

03:47.000 --> 03:51.000
So the detection is the point here.

03:51.000 --> 03:55.000
The threat model we consider is a powerful attacker.

03:55.000 --> 04:01.000
The attacker owns your release signing key, and the attacker owns all your distribution infrastructure.

04:01.000 --> 04:10.000
The attacker also owns our Sigsum log server and some, but not all, of the cosigning witnesses.

04:11.000 --> 04:16.000
And in this setting, we aim to detect attacks.

04:16.000 --> 04:23.000
And by enabling detection, we also hope to deter attackers.

04:23.000 --> 04:29.000
Some attackers would only be happy to do an attack if they can do it undetected.

04:29.000 --> 04:35.000
So now we go into some details of the Sigsum system.

04:35.000 --> 04:39.000
The log entries are signed checksums.

04:39.000 --> 04:41.000
So this is the log entry.

04:41.000 --> 04:44.000
There's a checksum that says what was logged.

04:44.000 --> 04:49.000
There's a public key hash that says who logged this entry.

04:49.000 --> 04:56.000
And there's a signature that proves that the corresponding private key actually was involved in creating the log entry.
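
The three fields of a log entry can be sketched like this. The signature here is an HMAC stand-in so the sketch runs with only the Python standard library; Sigsum actually uses real public-key signatures, and all names below are illustrative.

```python
# Sketch of a Sigsum-style log entry: a signed checksum plus the hash
# of the submitter's public key. HMAC is a stand-in for a real
# signature scheme (illustrative only).
import hashlib
import hmac
from dataclasses import dataclass

@dataclass(frozen=True)
class LogEntry:
    checksum: bytes    # what was logged (hash of the artifact)
    key_hash: bytes    # who logged it (hash of the submitter's public key)
    signature: bytes   # proof that the matching private key was involved

def make_entry(artifact: bytes, priv_key: bytes, pub_key: bytes) -> LogEntry:
    checksum = hashlib.sha256(artifact).digest()
    signature = hmac.new(priv_key, checksum, hashlib.sha256).digest()
    return LogEntry(checksum, hashlib.sha256(pub_key).digest(), signature)

def verify_entry(entry: LogEntry, priv_key: bytes) -> bool:
    # With a real signature scheme, verification would use the public key.
    expected = hmac.new(priv_key, entry.checksum, hashlib.sha256).digest()
    return hmac.compare_digest(expected, entry.signature)

entry = make_entry(b"release artifact bytes", b"secret-key", b"public-key")
assert verify_entry(entry, b"secret-key")
```

Note that the log stores only the checksum, not the artifact itself, which keeps entries small and content-agnostic.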

04:57.000 --> 05:06.000
Sigsum is designed for public use: we want to enable public log servers that any application and user can send stuff to,

05:06.000 --> 05:11.000
Even if the log operator doesn't know or trust these users.

05:11.000 --> 05:23.000
And it's minimalistic, with the aim that it can be used as a building block for various other transparency applications, where the software update case is the main focus of this talk.

05:23.000 --> 05:27.000
These are the different parties in the system.

05:27.000 --> 05:35.000
To the right, we have the familiar setting: we want to distribute data to users.

05:35.000 --> 05:38.000
We want to sign it, and the user can verify that it's authentic.

05:38.000 --> 05:45.000
We add the log server, and we add the monitor that can detect when bad things happen.

05:45.000 --> 05:48.000
And we have the witnesses that provide security in the system.

05:48.000 --> 05:52.000
And I will describe this in some detail.

05:52.000 --> 05:59.000
For the user, if you don't have Sigsum, you have a public key, you get signed data, you verify the signature, and you're fine.

05:59.000 --> 06:08.000
If you have Sigsum, you extend the signature with a proof of logging and extend the public key with a trust policy.

06:08.000 --> 06:12.000
And you verify the data, but it's still offline verification.

06:12.000 --> 06:15.000
It's a different kind of detached signature.

06:16.000 --> 06:22.000
When you sign stuff without Sigsum, you just have a private key, you sign stuff, you distribute it.

06:22.000 --> 06:29.000
With Sigsum, you also send the signed checksum to the log server, and you get back a proof of logging.

06:29.000 --> 06:35.000
And then you distribute the proof of logging together with the data to your users.

06:35.000 --> 06:43.000
And one potential problem here is that the log has to be available at signing time, otherwise this will fail.

06:43.000 --> 06:47.000
So the availability of the log becomes another dependency.

06:47.000 --> 06:51.000
Monitoring: without Sigsum, you don't have any monitoring.

06:51.000 --> 07:04.000
When you introduce Sigsum, the monitor repeatedly tails the log, looks at all entries, and extracts the entries of interest: those signed by public keys of interest.

07:04.000 --> 07:15.000
And the objective here is that any signature that will be accepted by a user will also be seen by the monitor.

07:15.000 --> 07:21.000
You can also extend the monitoring to take the actual data into account and verify it.

07:21.000 --> 07:32.000
For example, if the release is supposed to be reproducible, the monitor can get the corresponding data, try to verify that, and scream if it doesn't verify.
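
A monitor's core loop can be sketched as below. The entry layout and the in-memory log are hypothetical simplifications (a real monitor talks to a log server and verifies proofs); the point is the filtering by key hash and the alert on anything the key owner did not expect.

```python
# Sketch of a monitor loop: scan the log, pick out entries signed by
# keys you care about, and alert on anything you did not sign yourself.
# The dict-based entry layout is illustrative, not Sigsum's format.
import hashlib

def key_hash(pub_key: bytes) -> bytes:
    return hashlib.sha256(pub_key).digest()

def monitor(log_entries, watched_keys, expected_checksums):
    """Yield alerts for entries by watched keys that we never made."""
    watched = {key_hash(k) for k in watched_keys}
    for index, entry in enumerate(log_entries):
        if entry["key_hash"] not in watched:
            continue  # someone else's entry; not our concern
        if entry["checksum"] not in expected_checksums:
            yield f"unexpected entry at index {index}: possible key compromise"

my_key = b"release-public-key"
good = hashlib.sha256(b"official release").digest()
rogue = hashlib.sha256(b"malicious update").digest()
log = [
    {"key_hash": key_hash(my_key), "checksum": good},
    {"key_hash": key_hash(b"other-key"), "checksum": rogue},  # ignored
    {"key_hash": key_hash(my_key), "checksum": rogue},        # alert!
]
alerts = list(monitor(log, [my_key], {good}))
assert len(alerts) == 1
```

This captures the objective from the talk: any signature a user would accept is in the log, so it is also seen by this loop.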

07:34.000 --> 07:41.000
To make sure that users and monitors agree on what's in the log, it's critical that they see the same log.

07:41.000 --> 07:46.000
So what if the log looks different depending on who's asking?

07:46.000 --> 07:50.000
And we don't want to just blindly trust the log to do the right thing.

07:50.000 --> 07:53.000
We don't want a single point of trust.

07:53.000 --> 07:57.000
And that's why we have witnesses.

07:57.000 --> 08:05.000
Typically, the log sends its current state to its witnesses and hopes to get cosignatures back.

08:05.000 --> 08:13.000
And when the witness cosigns the log state, it says that all the older entries it cosigned before are still in the log.

08:13.000 --> 08:17.000
The log really is append-only, like it should be.

08:17.000 --> 08:26.000
And for the log to convince the witness of that, the log has to send it a consistency proof that ties the old and the new state together.
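
What a witness promises can be modeled with a toy like the following. In reality the witness verifies a compact consistency proof rather than seeing the whole entry list; this sketch just shows the prefix check that such a proof establishes.

```python
# Toy model of a witness: it remembers the last tree head it cosigned
# and only cosigns a new head if the old entries are an exact prefix
# of the new ones. A real consistency proof establishes the same thing
# without the witness ever seeing the entries.
import hashlib

def tree_head(entries):
    # Stand-in for a Merkle root over the entries (illustrative only).
    return hashlib.sha256(b"|".join(entries)).digest()

class Witness:
    def __init__(self):
        self.size, self.head = 0, tree_head([])

    def cosign(self, entries):
        # Append-only check: everything cosigned before is still there.
        if tree_head(entries[: self.size]) != self.head:
            raise ValueError("log is not consistent with earlier head")
        self.size, self.head = len(entries), tree_head(entries)
        return b"cosignature-over-" + self.head  # stand-in for a signature

w = Witness()
w.cosign([b"a", b"b"])        # fine: grows from the empty log
w.cosign([b"a", b"b", b"c"])  # fine: append-only growth
try:
    w.cosign([b"a", b"x", b"c", b"d"])  # entry "b" was rewritten
except ValueError:
    print("witness refuses to cosign a rewritten log")
```

Note the witness keeps only a size and a head hash between rounds, which is why witnessing is so cheap.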

08:26.000 --> 08:30.000
The witness co-signatures are included in the proofs of logging.

08:30.000 --> 08:36.000
Users can require cosignatures from M out of N trusted witnesses.

08:36.000 --> 08:50.000
And if users and monitors are configured so that they agree on the trusted witnesses, then this ensures that users and monitors see the same view of the log.

08:50.000 --> 08:55.000
Witnesses are lightweight because they don't care about the content of the log.

08:55.000 --> 08:57.000
They only care about consistency.

08:57.000 --> 09:01.000
That means small requirements on storage, processing, and bandwidth.

09:01.000 --> 09:10.000
And a single witness can serve many transparency logs: not only Sigsum logs, but other kinds of transparency logs as well.

09:10.000 --> 09:21.000
And it doesn't need any publicly known IP address, thanks to an optional bastion host that lets the witness be a bit more in hiding.

09:21.000 --> 09:29.000
And the log will only access the witness via the bastion.

09:29.000 --> 09:33.000
A witness does need secure and reliable operation.

09:33.000 --> 09:43.000
So if you have the ability to run things securely for the public good, hosting a witness is a very good thing to do.

09:43.000 --> 09:48.000
Concretely, the trust policy can look like this.

09:48.000 --> 09:52.000
It specifies one or more logs.

09:52.000 --> 09:55.000
Any log in the list is acceptable.

09:55.000 --> 09:57.000
It defines a list of witnesses.

09:57.000 --> 10:01.000
Logs and witnesses are identified by their public keys.

10:01.000 --> 10:07.000
The URL is used when you submit stuff to the log, but it's ignored when verifying proofs.

10:07.000 --> 10:11.000
And then you can define groups with thresholds.

10:11.000 --> 10:19.000
This example sets up a quorum where we have five parties that might be trustworthy.

10:19.000 --> 10:31.000
And we require a valid cosignature from at least three of these witnesses to accept a proof of logging.
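
The M-out-of-N policy check can be sketched in a few lines. The witness names and the set-based structure are illustrative, not Sigsum's policy file format; note how cosignatures from unknown witnesses are simply discarded.

```python
# Sketch of the M-of-N quorum check: a proof of logging is accepted
# only if cosignatures from at least `threshold` of the witnesses
# listed in the trust policy are present (illustrative only).
def quorum_met(trusted: set[str], threshold: int, cosigners: set[str]) -> bool:
    # Cosignatures from witnesses not in the policy are ignored.
    return len(trusted & cosigners) >= threshold

trusted = {"witness-A", "witness-B", "witness-C", "witness-D", "witness-E"}
assert quorum_met(trusted, 3, {"witness-A", "witness-C", "witness-E"})
assert not quorum_met(trusted, 3, {"witness-A", "witness-B", "unknown-X"})
```

This is also why flooding the system with self-registered witnesses doesn't help an attacker: only the keys in your policy count toward the threshold.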

10:31.000 --> 10:37.000
So, a bit of a demo of how to use this for software updates.

10:37.000 --> 10:44.000
If you want to apply this, you first have a setup phase where you decide which logs and witnesses to rely on.

10:44.000 --> 10:46.000
That is the trust policy.

10:46.000 --> 10:49.000
You distribute this to your users.

10:49.000 --> 10:59.000
You also have to register a rate limit key in DNS, because that's how we rate-limit public submissions to the log.

11:00.000 --> 11:09.000
Then, in the pipeline that creates your releases, you add a step that submits the signature to a Sigsum log. And in your update agent:

11:09.000 --> 11:17.000
You would need to verify these proofs instead of just verifying a plain detached signature.

11:17.000 --> 11:24.000
And then you add a monitor that can respond to unexpected entries in the log.

11:24.000 --> 11:29.000
And it can verify claims like reproducible-build claims.

11:29.000 --> 11:32.000
So, here is the setup phase with our tooling.

11:32.000 --> 11:35.000
We have the policy file.

11:35.000 --> 11:40.000
It points at our test log, because that way we can run the demo without actually putting stuff in DNS.

11:40.000 --> 11:45.000
We install the latest version of our Sigsum Go tools.

11:45.000 --> 11:50.000
We generate a submit key, which would then be the long-lived release signing key.

11:50.000 --> 11:55.000
And a rate limit key, which is of less value and can be rotated easily.

11:55.000 --> 12:06.000
And we create the record we would have to put into DNS to get past the rate limit system.

12:06.000 --> 12:14.000
In the release pipeline, you create the release artifact, here release.gzip.

12:14.000 --> 12:20.000
You submit it to the log: you have to give sigsum-submit the policy,

12:20.000 --> 12:30.000
the private key used to sign, your domain, and your rate-limiting private key, to make valid requests.

12:30.000 --> 12:38.000
And then you have the artifact release.gzip that you want to submit to the log.

12:38.000 --> 12:46.000
And then, if all goes well, you get back a .proof file, and you distribute those two files to your users.

12:46.000 --> 12:51.000
For the update agent, let's first have a closer look at the proof.

12:51.000 --> 12:54.000
It names the log it was sent to.

12:54.000 --> 13:01.000
It gives the signature and the key hash for the entry in the log.

13:01.000 --> 13:05.000
Then we have the state of the log with size and root hash.

13:05.000 --> 13:09.000
The signatures from the log itself and from the witnesses.

13:09.000 --> 13:18.000
And finally, we have the inclusion proof for the entry in the log, with the index and the list of hashes.
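
Checking an inclusion proof amounts to recomputing the root hash from the entry and the proof's sibling hashes, entirely offline. The pairing scheme below is a simplified illustration, not Sigsum's exact proof encoding.

```python
# Sketch of inclusion-proof generation and verification. The tree uses
# RFC 6962-style leaf/node prefixes but a simplified pairing of odd
# nodes; it is illustrative, not Sigsum's wire format.
import hashlib

def leaf_hash(data: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def build_levels(entries):
    """All levels of the tree, from the leaves up to the root."""
    level = [leaf_hash(e) for e in entries]
    levels = [level]
    while len(level) > 1:
        nxt = [node_hash(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:      # odd trailing node is promoted unchanged
            nxt.append(level[-1])
        level = nxt
        levels.append(level)
    return levels

def inclusion_proof(entries, index):
    """List of (sibling_is_left, sibling_hash) pairs, leaf to root."""
    path = []
    for level in build_levels(entries)[:-1]:
        sibling = index ^ 1
        if sibling < len(level):
            path.append((sibling < index, level[sibling]))
        index //= 2
    return path

def verify_inclusion(entry, path, root_hash):
    """Recompute the root from the entry and the proof; fully offline."""
    h = leaf_hash(entry)
    for is_left, sibling in path:
        h = node_hash(sibling, h) if is_left else node_hash(h, sibling)
    return h == root_hash

entries = [b"e0", b"e1", b"e2", b"e3", b"e4"]
root_hash = build_levels(entries)[-1][0]
proof = inclusion_proof(entries, 2)
assert verify_inclusion(b"e2", proof, root_hash)
assert not verify_inclusion(b"tampered", proof, root_hash)
```

The proof is logarithmic in the log size, which is why the .proof file stays small even for a large log.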

13:18.000 --> 13:20.000
And then to verify this.

13:20.000 --> 13:26.000
The command-line tool is sigsum-verify. It takes the policy.

13:26.000 --> 13:30.000
The public key that should have submitted this to the log.

13:30.000 --> 13:34.000
The proof and the artifact itself.

13:34.000 --> 13:36.000
And that's it.

13:36.000 --> 13:44.000
We have a monitor, which is a bit of a work in progress, but you can run it like this.

13:44.000 --> 13:50.000
Give it a policy and the key you are interested in.

13:50.000 --> 14:00.000
It will tail all the logs mentioned in the policy and write out a line when it finds an entry that is signed by the key you specified.

14:00.000 --> 14:08.000
So here we find one entry, with index, key hash, and checksum.

14:08.000 --> 14:11.000
The signature has already been verified by this tool.

14:11.000 --> 14:18.000
And this corresponds to what we submitted to the log in the previous slide.

14:18.000 --> 14:21.000
Filippo is one of our early users.

14:21.000 --> 14:26.000
He has set this up for releases of his age tool.

14:26.000 --> 14:30.000
And he has done a proof-of-concept monitor

14:30.000 --> 14:37.000
that compares what's in the log with what's on the GitHub release page, which is very nice.

14:37.000 --> 14:44.000
So, to sum up: what is Sigsum?

14:44.000 --> 14:48.000
It's a public record of signatures.

14:48.000 --> 14:53.000
And that's useful because signatures have meaning to applications.

14:53.000 --> 15:01.000
For example, the signature by your release key means that the signed artifacts are official releases.

15:01.000 --> 15:10.000
And a signature is trusted when witnessed by a sufficient number of witnesses.

15:10.000 --> 15:18.000
Concretely, you submit and collect the proof when you sign stuff.

15:18.000 --> 15:24.000
You have to bring your own distribution network to get data and proof to users.

15:24.000 --> 15:27.000
Users do offline verification, just like any other detached signature.

15:27.000 --> 15:31.000
And then you monitor the logs for keys of interest.

15:31.000 --> 15:36.000
You can do it yourself, and third parties can do that as well.

15:36.000 --> 15:41.000
So get going and do transparency with Sigsum.

15:41.000 --> 15:46.000
We have one log up and running.

15:46.000 --> 15:53.000
We have one witness running at Mullvad since yesterday, I think.

15:53.000 --> 15:58.000
We will very soon now have our own witness up and running online.

15:58.000 --> 16:04.000
We have documented our operational procedures to get that done in a secure way.

16:04.000 --> 16:10.000
Google has the Armored Witness project that does witnessing in a compatible way.

16:10.000 --> 16:13.000
And those witnesses can witness our logs.

16:13.000 --> 16:19.000
So I think it's time to get started to use this system.

16:19.000 --> 16:23.000
And finally, I want to mention some other use cases.

16:23.000 --> 16:29.000
If you can do transparency of privileged access, you can do break-glass emergency access.

16:29.000 --> 16:34.000
If you want to be able, in special circumstances, to have access to user data,

16:34.000 --> 16:38.000
but you want it to be detectable when that mechanism is used.

16:38.000 --> 16:40.000
We can also have vulnerability disclosure:

16:40.000 --> 16:47.000
Submissions where you submit a vulnerability report to a service that promises to publish it in 30 days.

16:47.000 --> 16:57.000
If you get a receipt that it is put in a public log, you can monitor that it actually is published in a timely fashion.

16:57.000 --> 17:00.000
So that's all on www.sigsum.org.

17:00.000 --> 17:06.000
You can find contact info, docs, source code, everything.

17:06.000 --> 17:07.000
Thanks.

17:07.000 --> 17:17.000
Questions?

17:17.000 --> 17:18.000
Yes?

17:18.000 --> 17:25.000
How does the witness know if the release was legitimate or if the signer has been compromised?

17:25.000 --> 17:26.000
The witness doesn't know that.

17:26.000 --> 17:29.000
The witness basically doesn't look at the log entries.

17:29.000 --> 17:35.000
The witness says: I cosigned this log some time ago.

17:35.000 --> 17:37.000
And I am cosigning it again.

17:37.000 --> 17:44.000
And I have verified that everything that was in the log is still in the log and has not been modified.

17:44.000 --> 17:51.000
That's the only claim that the witness makes.

17:51.000 --> 17:53.000
This is in contrast to monitors.

17:53.000 --> 17:55.000
Monitors read the entire log.

17:55.000 --> 18:00.000
They read all entries, verify inclusion proofs, and extract the entries for keys of interest.

18:00.000 --> 18:04.000
And then the monitor wants to know if this is a legitimate entry.

18:04.000 --> 18:07.000
And that depends on whether you are the key owner.

18:07.000 --> 18:10.000
You would presumably know what you should have signed.

18:10.000 --> 18:16.000
And if you find unexpected entries, that could mean a key compromise or something like that has happened.

18:16.000 --> 18:20.000
If there are additional claims, like a reproducible-build claim,

18:20.000 --> 18:26.000
then a third-party monitor can also look at these entries and verify those claims.

18:27.000 --> 18:31.000
So if it's my key and it's somehow stolen.

18:31.000 --> 18:32.000
Yeah.

18:32.000 --> 18:36.000
As the owner of it, I can say: hey, this is a signature that shouldn't be in there.

18:36.000 --> 18:39.000
But if somebody takes the stolen key and doesn't log the signature,

18:39.000 --> 18:42.000
I will not see that case as part of it.

18:42.000 --> 18:46.000
The thing is, we cover both cases.

18:46.000 --> 18:49.000
The assumption is that the user will verify these proofs.

18:49.000 --> 18:54.000
So if an attacker makes a signature but doesn't log it,

18:54.000 --> 18:58.000
The signature will not be accepted by the targeted user.

18:58.000 --> 19:04.000
And if the attacker adds it to the log, the user will accept it and bad things will happen,

19:04.000 --> 19:09.000
but you will be able to detect it after the fact because it's in the log.

19:15.000 --> 19:17.000
I think I better understand now.

19:17.000 --> 19:23.000
It's a bit counterintuitive that there's no prevention, just detection.

19:25.000 --> 19:27.000
Any other questions?

19:44.000 --> 19:52.000
From the user's point of view, the idea is that you get the data and the proof in the same way that you get the data and the detached signature today.

19:55.000 --> 20:02.000
So we don't do any distribution service ourselves.

20:10.000 --> 20:12.000
Yes.

20:12.000 --> 20:20.000
So if I understand this correctly, how does this defend against being coerced to sign something

20:20.000 --> 20:24.000
and send it to the user?

20:24.000 --> 20:29.000
If it's not submitted to the log, the signature will not be accepted.

20:29.000 --> 20:33.000
And if it is added to the log, it will be detected after the fact.

20:33.000 --> 20:40.000
You need to tell the user that they actually need to verify this against the inclusion proof.

20:40.000 --> 20:47.000
Anybody who is forcing you to sign something would also know that they would have to force you to submit it to the log.

20:51.000 --> 20:52.000
And also.

20:54.000 --> 21:00.000
Okay, let's see if I got the question.

21:02.000 --> 21:05.000
I didn't quite grasp the question.

21:05.000 --> 21:13.000
So I need to tell users that they need to use this to verify the signature and the inclusion proof.

21:13.000 --> 21:22.000
So if somebody is forcing me to make a signature, that somebody knows that they also need to force me to submit it to the log.

21:22.000 --> 21:27.000
So now, does it protect against coercion?

21:27.000 --> 21:33.000
So it's an assumption that the users have enabled strict verification.

21:33.000 --> 21:36.000
We don't get the value until that happens.

21:36.000 --> 21:44.000
So the user is configured to require this proof of logging.

21:44.000 --> 21:50.000
And then the attacker coerces you into making a signature.

21:50.000 --> 21:55.000
And then, for that signature to be accepted by the user, the attacker must in some way

21:55.000 --> 21:59.000
or the other get the signature into the log.

21:59.000 --> 22:04.000
And when it's in the log, it will eventually be visible.

22:06.000 --> 22:12.000
Yeah.

22:12.000 --> 22:25.000
Are you talking to OpenPGP or X.509 or SSH, to add this to some other existing signing tools?

22:25.000 --> 22:29.000
So, the question is if we are talking to OpenPGP

22:30.000 --> 22:35.000
or other tools, to add this kind of extended signature.

22:35.000 --> 22:39.000
No, we haven't started with that yet.

22:39.000 --> 22:46.000
And I think we will have to do it one application at a time.

22:46.000 --> 22:47.000
Yes?

22:47.000 --> 22:52.000
Does the witness act like some sort of an inclusion proof, for example,

22:52.000 --> 22:55.000
like an SCT in the Certificate Transparency ecosystem?

22:55.000 --> 22:57.000
Or is it different?

22:57.000 --> 23:03.000
The question is if the witness does something related to the SCT in the CT ecosystem.

23:03.000 --> 23:05.000
I think.

23:05.000 --> 23:09.000
Does the witness act as part of the process?

23:09.000 --> 23:11.000
Are you putting witness in this process?

23:11.000 --> 23:15.000
Does it equate to an inclusion proof?

23:15.000 --> 23:17.000
It does.

23:17.000 --> 23:20.000
Is the witness doing inclusion proofs?

23:20.000 --> 23:22.000
No, I don't think so.

23:23.000 --> 23:26.000
If I got the question right, I think the answer is no.

23:26.000 --> 23:31.000
The inclusion proofs are in the proofs of logging sent to users.

23:31.000 --> 23:41.000
And witnesses use consistency proofs to verify that the log behaves in an append-only way.

23:41.000 --> 23:44.000
Question in the back?

23:45.000 --> 23:48.000
Can you register as a witness?

24:05.000 --> 24:10.000
The question was: can anybody be a witness?

24:11.000 --> 24:16.000
And can you register a lot of witnesses to overwhelm the policy?

24:16.000 --> 24:27.000
And the answer is that getting a witness to witness a log requires some kind of ping-pong of configuration between the log and the witness.

24:27.000 --> 24:31.000
Both parties must agree that they should talk to each other.

24:31.000 --> 24:35.000
You can't unilaterally witness a log.

24:35.000 --> 24:42.000
And the second part is that you do this majority check, this threshold check,

24:42.000 --> 24:46.000
Only against the witnesses that are listed in your trust policy.

24:46.000 --> 24:50.000
There can be another thousand cosignatures attached,

24:50.000 --> 24:54.000
but those are by keys that are not known to you.

24:54.000 --> 24:58.000
They are just discarded before the policy decision is made.

24:59.000 --> 25:04.000
We can take one more question.

25:04.000 --> 25:09.000
Yeah, the user verifying the proof decides that.

25:09.000 --> 25:16.000
And it's rather important that it's done in coordination with the monitors that the user will also be relying on.

25:16.000 --> 25:19.000
One final question.

25:19.000 --> 25:21.000
Yes?

25:22.000 --> 25:25.000
Why is there a single signer and multiple witnesses?

25:25.000 --> 25:26.000
Yes.

25:42.000 --> 25:49.000
The question was, what benefit do we have over having multiple signers?

25:52.000 --> 25:57.000
I think having the trust in multiple witnesses is somewhat similar.

25:57.000 --> 26:05.000
What we get here is that we can see all the signatures that were made.

26:05.000 --> 26:10.000
You can do this and have multiple signers.

26:10.000 --> 26:15.000
And then, for each of those signers, you can see if they made any unexpected signatures.

26:21.000 --> 26:23.000
Thank you, Nils.

