WEBVTT

00:00.000 --> 00:07.000
You're not going to talk about Redpanda, are you?

00:07.000 --> 00:11.000
No, sorry, I'm just going to touch on it, but no, I'm not talking about it.

00:11.000 --> 00:15.000
Jake has a really, really, really awesome demo about Redpanda,

00:15.000 --> 00:18.000
and how they use it today.

00:18.000 --> 00:20.000
This talk is not about that.

00:20.000 --> 00:25.000
There'll be a little of that in the talk, but it's also new to me, so let's see.

00:25.000 --> 00:26.000
Cool.

00:26.000 --> 00:34.000
OK, so broken docs can be a nightmare, whether you're a tech writer, a developer or a product manager.

00:34.000 --> 00:39.000
They erode your trust, they waste your time, and they make your product look kind of immature.

00:39.000 --> 00:47.000
But we're here today to talk about Doc Detective, which can turn your docs from a source of pain into a delight through robust automated testing.

00:47.000 --> 00:53.000
I'm Jake, I'm from Redpanda Data, which is a streaming data platform.

00:53.000 --> 01:02.000
And we're starting to integrate Doc Detective into our docs to get automated testing and output generation.

01:02.000 --> 01:03.000
Yes.

01:03.000 --> 01:11.000
And I'm Ariel, I'm a technical writer at Semarchy, which is a data management, integration, and governance company.

01:11.000 --> 01:18.000
And I'm starting to look at Doc Detective to test our docs too and help keep them up to date.

01:18.000 --> 01:20.000
Cool.

01:20.000 --> 01:24.000
All right, so this is just the truth: something always breaks.

01:24.000 --> 01:30.000
So imagine your company just bought a shiny new product, and you've been tasked to set it up and configure it.

01:30.000 --> 01:36.000
This is a product that's promised to solve all your company's problems.

01:36.000 --> 01:40.000
So you might start by feeling cautiously optimistic.

01:40.000 --> 01:46.000
You start researching just to see what this product does, and you might stumble across some docs.

01:46.000 --> 01:55.000
So things are looking pretty good. The docs even have a guide for the exact problem you're trying to solve. So you're confident.

01:55.000 --> 02:05.000
This guide has step-by-step instructions and copy-paste commands, and you copy-paste the first command into a terminal, hit enter, and it gives you an error.

02:05.000 --> 02:11.000
At this point, you've probably lost all trust in the rest of the doc, and you're not going to continue reading.

02:11.000 --> 02:18.000
And this is a common occurrence, because keeping docs in sync with a product that's constantly changing is hard.

02:18.000 --> 02:20.000
Can you repeat that?

02:20.000 --> 02:22.000
I'll talk a little louder.

02:22.000 --> 02:24.000
So yeah.

02:24.000 --> 02:26.000
The whole thing?

02:26.000 --> 02:29.000
No, just the last sentence.

02:29.000 --> 02:40.000
So it's common for docs to break, because keeping docs in sync with a product that constantly rolls out new changes is just difficult.

02:40.000 --> 02:45.000
Even a minor change, or what can seem like a minor change,

02:45.000 --> 02:53.000
like changing a button label from Begin to Start, can break the doc, and that's going to erode trust in the rest of your docs.

02:53.000 --> 02:58.000
So we're going to dive into why this happens.

02:58.000 --> 03:03.000
And it's a real challenge.

03:03.000 --> 03:11.000
So one of the big challenges, and this has probably happened to everyone, but actually, a question:

03:11.000 --> 03:17.000
who here has somehow never dealt with broken documentation?

03:17.000 --> 03:19.000
Ever.

03:19.000 --> 03:22.000
All right, you're in good company.

03:22.000 --> 03:24.000
These are the challenges.

03:24.000 --> 03:29.000
I mean, the biggest challenge at first is that there are lots of updates.

03:30.000 --> 03:32.000
Your product gets features.

03:32.000 --> 03:33.000
Your product gets changes.

03:33.000 --> 03:35.000
Your product gets deprecations.

03:35.000 --> 03:40.000
The libraries your product depends on are deprecated, and you find out later.

03:40.000 --> 03:44.000
And it is a shared responsibility.

03:44.000 --> 03:47.000
So I'm, for example, a technical writer.

03:47.000 --> 03:52.000
I could watch for every single change in the product.

03:52.000 --> 03:54.000
That's not very realistic.

03:54.000 --> 03:56.000
I'm a technical writer.

03:56.000 --> 04:04.000
I don't have the role of project manager who could be responsible for telling me all of the new features that are coming.

04:04.000 --> 04:08.000
But the project manager might not know about changes to existing features.

04:08.000 --> 04:10.000
And that's the engineers.

04:10.000 --> 04:12.000
But the engineers are not writing the documentation.

04:12.000 --> 04:13.000
I am.

04:13.000 --> 04:17.000
So at some point you have to figure out who is dealing with this mess.

04:17.000 --> 04:19.000
How do you get visibility?

04:19.000 --> 04:22.000
Somebody needs to know what's in the documentation.

04:22.000 --> 04:27.000
You need to know what's in the docs to decide if you want to change them.

04:27.000 --> 04:29.000
And there are a lot of moving parts.

04:29.000 --> 04:31.000
You cannot do it perfectly.

04:31.000 --> 04:34.000
And something will eventually fail.

04:34.000 --> 04:36.000
Your users will find out.

04:36.000 --> 04:42.000
And that will cascade into a horrible scenario where somebody realizes something is wrong.

04:42.000 --> 04:44.000
I mean, something's wrong.

04:44.000 --> 04:47.000
What tends to happen is you open a support ticket.

04:47.000 --> 04:50.000
Which leads to longer resolution times.

04:50.000 --> 04:57.000
And generally, if consistently reliable docs build trust, then inaccurate ones will undermine it.

04:57.000 --> 05:02.000
And that's going to hurt things like your retention metrics, your satisfaction metrics.

05:02.000 --> 05:05.000
Because building trust is hard, but losing it is super easy.

05:05.000 --> 05:10.000
So what we want is we want users to read something that just works.

05:10.000 --> 05:15.000
And there is a strategy to enable that, and it's called Docs as Tests.

05:15.000 --> 05:21.000
And Docs as Tests is not the name of Doc Detective, the tool that we're demoing.

05:21.000 --> 05:23.000
It is a practice.

05:23.000 --> 05:29.000
It's the practice of automatically verifying every step outlined in your documentation.

05:29.000 --> 05:32.000
Which sounds ambitious.

05:32.000 --> 05:38.000
But just imagine for a moment, imagine you have every command, every UI flow,

05:38.000 --> 05:43.000
and every configuration step automatically checked for accuracy.

05:43.000 --> 05:46.000
That is a dream.

05:46.000 --> 05:51.000
But quite honestly, it's kind of the goal and it's what you want to work towards.

05:51.000 --> 05:59.000
So if something breaks, like a command no longer works or UI element changes, the tests catch it automatically.

05:59.000 --> 06:01.000
This is a proactive approach.

06:01.000 --> 06:03.000
It means your docs remain accurate.

06:03.000 --> 06:06.000
It means they remain aligned with the product's current state.

06:06.000 --> 06:11.000
And your users and yourself are probably much happier.

06:11.000 --> 06:15.000
Because the chances are you have to use your documentation as well.

06:15.000 --> 06:19.000
And there are existing testing tools.

06:19.000 --> 06:25.000
Docs, sorry, Doc Detective is an intermediate tool.

06:25.000 --> 06:28.000
It's not for complete novices.

06:28.000 --> 06:31.000
You do still need some knowledge to use it.

06:31.000 --> 06:36.000
But for example, you have tools like Cypress and Playwright.

06:36.000 --> 06:41.000
And you could integrate them into your docs.

06:41.000 --> 06:48.000
But if you are like me and you are not a JavaScript developer and you have to use search engines

06:48.000 --> 06:55.000
or local LLMs to figure out code, this is a lot.

06:55.000 --> 06:56.000
This is a lot.

06:56.000 --> 06:59.000
The two snippets that just came on the screen do the exact same thing.

06:59.000 --> 07:00.000
They test a URL.

07:00.000 --> 07:04.000
They interact with the UI and they take screenshots.

07:05.000 --> 07:11.000
It would be nicer if this were a little simpler to read.

07:11.000 --> 07:15.000
And that's where Doc Detective comes in.

07:15.000 --> 07:23.000
So I'm going to get rid of the slides and just go into a demo.

07:23.000 --> 07:28.000
I think, if you want, I can just start, then.

07:28.000 --> 07:29.000
Yeah, sure.

07:29.000 --> 07:34.000
So Doc Detective is a testing framework built by Manny Silverstein.

07:34.000 --> 07:39.000
It is an implementation of the Docs as Tests methodology,

07:39.000 --> 07:43.000
and it aims to make tests much more approachable to technical writers

07:43.000 --> 07:48.000
by writing with JSON objects instead of programming languages.

07:48.000 --> 07:55.000
And in this case, the JSON, how it's presented, is aimed to be a lot more approachable.

07:55.000 --> 07:58.000
So yeah, this is the basic structure of a test.

07:58.000 --> 08:02.000
Yeah, it's just a JSON object and it's made up of step objects.

08:02.000 --> 08:05.000
And each step has an action associated with it.

08:05.000 --> 08:10.000
So here we've got the goTo action, which opens a web page at a URL.

08:10.000 --> 08:14.000
We've got a find action that will interact with that web page.

08:14.000 --> 08:17.000
And each action also has properties.

08:17.000 --> 08:21.000
So this find action has a click property that tells Doc Detective

08:21.000 --> 08:24.000
if you find this element, click it.

08:24.000 --> 08:28.000
And so what you can do is you can aggregate all of these steps up.

08:28.000 --> 08:31.000
So one after the other, they build a robust test,

08:31.000 --> 08:34.000
and it runs through a real user workflow.
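
NOTE
The structure just described might look something like this; it's a sketch based on my reading of the Doc Detective docs, and exact field names can differ between Doc Detective versions:
```json
{
  "tests": [
    {
      "steps": [
        { "action": "goTo", "url": "https://example.com" },
        { "action": "find", "selector": "#get-started", "click": true }
      ]
    }
  ]
}
```
Each step is one action; running them in order walks through the same flow a user would.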

08:34.000 --> 08:37.000
So you can define tests in a few ways.

08:37.000 --> 08:40.000
You can have a separate JSON file like this,

08:40.000 --> 08:44.000
but then you have to maintain the test separately to the doc.

08:44.000 --> 08:47.000
So then what you can also do is have inline comments.

08:47.000 --> 08:53.000
So within the document, you just define the steps in the same file,

08:53.000 --> 08:58.000
which means if you update the doc, you can update the test at the same time.
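
NOTE
An inline test in a Markdown doc might look roughly like this, with steps kept in comments next to the prose they verify; the exact comment tokens depend on how your Doc Detective markup patterns are configured, so treat these as placeholders:
```markdown
<!-- test start -->
Open [the app](https://example.com) and click **Get Started**.
<!-- step { "action": "goTo", "url": "https://example.com" } -->
<!-- step { "action": "find", "selector": "#get-started", "click": true } -->
<!-- test end -->
```
Updating the doc and the test in the same file keeps them from drifting apart.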

08:58.000 --> 09:03.000
There's a more complex option as well, where you can teach Doc Detective

09:03.000 --> 09:07.000
to dynamically generate the test steps.

09:07.000 --> 09:11.000
So you basically feed it a bunch of regular expressions.

09:11.000 --> 09:16.000
And you say, if you come across this pattern, this is the kind of step you need to build.

09:16.000 --> 09:21.000
But that does take a little bit more set up, but then it also does mean that you don't need to maintain

09:21.000 --> 09:24.000
the test separately to your documentation.

09:24.000 --> 09:31.000
But I think this is a good moment to show you the fun stuff

09:31.000 --> 09:34.000
and show you what it actually does.

09:34.000 --> 09:42.000
We've got a few demos for you, and we're going to start with a very simple demo.

09:42.000 --> 09:45.000
Who likes cats?

09:45.000 --> 09:51.000
Well, if you don't like cats, I'm sorry, you are going to get a cat fact anyhow.

09:51.000 --> 09:58.000
The first demo is a very simple test, if I can briefly show it here as well.

09:58.000 --> 10:04.000
It is a single API call to catfact.ninja slash fact.

10:04.000 --> 10:09.000
All we are doing is trying to see if it gets a valid response.
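
NOTE
A single-call API test like this one can be sketched as follows (the httpRequest action and its fields are my best reading of the Doc Detective schema; check the current docs for the exact shape):
```json
{
  "tests": [
    {
      "steps": [
        {
          "action": "httpRequest",
          "url": "https://catfact.ninja/fact",
          "statusCodes": [200]
        }
      ]
    }
  ]
}
```
The step passes only if the endpoint answers with one of the allowed status codes.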

10:09.000 --> 10:29.000
Doc Detective is reading the input, fetching cat facts, and done.

10:29.000 --> 10:36.000
Now, the terminal is a little bit small here, but it outputs its test results in a JSON file here,

10:36.000 --> 10:46.000
well formatted, which shows you how many of the steps and tests actually passed.

10:46.000 --> 10:49.000
There's the detail, and oh look, this is our actual response.

10:49.000 --> 10:53.000
Did you know that cats spend nearly a third of their waking hours cleaning themselves?

10:53.000 --> 10:57.000
And did you know that this fact is 64 characters long?

10:57.000 --> 11:03.000
Well, we have this here, but the big thing is that we got a 200 response.

11:03.000 --> 11:09.000
So, this test passed, which is exactly what we wanted.

11:09.000 --> 11:12.000
We will move on to test number two.

11:12.000 --> 11:18.000
Sorry, demo number two, which I actually labeled as one.

11:18.000 --> 11:22.000
And this one actually uses some inline checks.

11:22.000 --> 11:29.000
We've got some makeshift Markdown comments here, with the actual JSON for the test.

11:30.000 --> 11:33.000
We're going to check a URL,

11:33.000 --> 11:40.000
check another URL, go to a URL, and then do something a little bit better.

11:40.000 --> 11:44.000
We're going to save a screenshot.

11:44.000 --> 11:58.000
And if the internet connection here works well, we will launch a Chrome session pulled in by a library,

11:58.000 --> 12:03.000
which runs the script. That worked faster than on my computer.

12:03.000 --> 12:08.000
And that screenshot was automatically taken by Doc Detective,

12:08.000 --> 12:12.000
without needing our manual input when the test was running.

12:12.000 --> 12:17.000
You can imagine running this multiple times, or running it in a continuous integration,

12:17.000 --> 12:24.000
and continuous deployment system.

12:24.000 --> 12:33.000
Test number three or two is going to be a little bit different.

12:33.000 --> 12:38.000
We are detecting tests automatically.

12:38.000 --> 12:47.000
Here, we don't actually have any test statements, but there are some pre-configured checks here

12:47.000 --> 12:51.000
for Markdown files, which among other things will check the links.

12:51.000 --> 12:59.000
Let's run this here.

12:59.000 --> 13:04.000
There we go. Once again, we are loading this in Chrome.

13:04.000 --> 13:14.000
This is a Chrome that's been pulled by a JavaScript library, specifically for this kind of work.

13:14.000 --> 13:17.000
Same image. No problem.

13:17.000 --> 13:20.000
And the test output worked.

13:20.000 --> 13:23.000
It used the Chrome application. It was on a Mac.

13:23.000 --> 13:26.000
It checked the links.

13:26.000 --> 13:28.000
You can read through all this if you want.

13:28.000 --> 13:32.000
It found an element matching a selector. That's fantastic.

13:32.000 --> 13:35.000
And it took a screenshot, and better than that.

13:35.000 --> 13:40.000
It checked whether the screenshot varied enough from the last time the screenshot was taken,

13:40.000 --> 13:46.000
which means you can actually use this to keep your internal screenshots up to date.

13:46.000 --> 13:48.000
It'll see if your application has changed enough,

13:48.000 --> 13:51.000
and warn you if there's a problem.
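
NOTE
A screenshot step with a variation check might be sketched like this (field names such as maxVariation and overwrite are assumptions from my reading of the Doc Detective docs):
```json
{
  "action": "saveScreenshot",
  "path": "console-overview.png",
  "maxVariation": 5,
  "overwrite": "byVariation"
}
```
The idea: re-save the image only when it differs enough from the stored one, so a changed UI automatically refreshes the screenshot and flags the difference.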

13:51.000 --> 13:57.000
But here's my favorite test.

13:57.000 --> 14:01.000
So, in keeping with the cat theme,

14:01.000 --> 14:07.000
how about we actually do a little bit more of a recording?

14:07.000 --> 14:15.000
We can also do some basic screen recording here,

14:15.000 --> 14:18.000
in which we're going to open a website.

14:18.000 --> 14:24.000
And we're going to search for kittens.

14:24.000 --> 14:28.000
Hey, we found them.

14:28.000 --> 14:42.000
And the end result is an actual animation.

14:42.000 --> 14:44.000
You can also use Doc Detective

14:44.000 --> 14:45.000
for screen recording.

14:45.000 --> 14:48.000
If you have steps that your users need to follow.

14:48.000 --> 14:51.000
For example, you could record the key presses.

14:51.000 --> 14:53.000
You could record all the different steps,

14:53.000 --> 14:56.000
and show it as an animated image afterwards.
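
NOTE
A recorded flow like the kitten search might be sketched as follows (typeKeys and the $ENTER$ special key are from my reading of the Doc Detective docs and may vary by version; the selector is hypothetical):
```json
{
  "steps": [
    { "action": "goTo", "url": "https://duckduckgo.com" },
    { "action": "startRecording" },
    { "action": "find", "selector": "input[name='q']", "click": true },
    { "action": "typeKeys", "keys": ["kittens", "$ENTER$"] },
    { "action": "stopRecording" }
  ]
}
```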

14:57.000 --> 15:00.000
And this is just the start.

15:00.000 --> 15:04.000
Because we can do more with this.

15:04.000 --> 15:05.000
Cool.

15:05.000 --> 15:10.000
So, I just want to show a real world example now,

15:10.000 --> 15:15.000
which is, at Redpanda we've built this into one of our quickstarts

15:15.000 --> 15:17.000
for a product called Redpanda Console.

15:17.000 --> 15:21.000
So, this is a UI, so it needs some screenshots.

15:21.000 --> 15:24.000
We don't really like to have screenshots in docs,

15:24.000 --> 15:26.000
because they're quite expensive to maintain.

15:26.000 --> 15:28.000
If something changes, you have to go in.

15:28.000 --> 15:30.000
Set up the environment again.

15:30.000 --> 15:32.000
Take the screenshot and it's hard.

15:32.000 --> 15:37.000
So, I just want you to notice here that the version here is 24.3.1,

15:37.000 --> 15:41.000
which is an old version.

15:41.000 --> 15:46.000
And the test is defined in the AsciiDoc markup.

15:46.000 --> 15:48.000
So, you can see here these are Doc Detective

15:48.000 --> 15:49.000
Test steps.

15:49.000 --> 15:53.000
What we're doing is we're telling it to do exactly what we tell the users to do.

15:53.000 --> 15:58.000
We're saying to run this Docker compose file,

15:58.000 --> 15:59.000
wait a bit.

15:59.000 --> 16:02.000
And this is important because this is a real test.

16:02.000 --> 16:03.000
So, sometimes they're flaky.

16:03.000 --> 16:08.000
You need to wait for Docker to start before Redpanda Console is available.

16:08.000 --> 16:10.000
So, we tell it to wait.

16:10.000 --> 16:12.000
And then we tell it to do.

16:12.000 --> 16:14.000
all the things we tell the users to do: check the links,

16:14.000 --> 16:16.000
and it will save the screenshots.

16:16.000 --> 16:20.000
It's going to check whether or not there's a difference between the current version

16:20.000 --> 16:22.000
and the screenshot we have here.

16:22.000 --> 16:26.000
that is, whether it differs from the new one that it's about to generate.

16:26.000 --> 16:30.000
So, I'm just going to run this.

16:34.000 --> 16:38.000
And while that's running, I'm hoping it's going to work.

16:38.000 --> 16:40.000
And I'm just going to explain how this will work.

16:40.000 --> 16:42.000
So, we've got a package.json here that's set up.

16:42.000 --> 16:45.000
So, we've set it up to run specific tests depending on the script you're running.

16:45.000 --> 16:48.000
So, I've just run the test console doc script,

16:48.000 --> 16:52.000
and this is going to just point Doc Detective at my quickstart for Redpanda Console.

16:52.000 --> 16:56.000
We've also got this built into our CI pipeline.

16:56.000 --> 16:59.000
So, we've got a test-docs.yaml.

16:59.000 --> 17:04.000
And we've set this up so that we can trigger it from repository dispatch.

17:04.000 --> 17:08.000
So, if there's a new tag in the Redpanda Console source code,

17:08.000 --> 17:12.000
we can run our tests on this quick start to make sure that nothing broke

17:12.000 --> 17:14.000
in the latest release.

17:14.000 --> 17:17.000
And then we can also just run it on every pull request.

17:17.000 --> 17:19.000
So, if there's a pull request

17:19.000 --> 17:21.000
that makes changes to the Redpanda Console docs,

17:21.000 --> 17:23.000
We'll run these tests,

17:23.000 --> 17:27.000
and then you'll get the test output in the pull request.
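
NOTE
The CI setup described above might be sketched like this; the workflow name, dispatch event type, and input path are hypothetical, and the action reference is my best guess from the talk, so verify it against the Doc Detective docs:
```yaml
name: test-docs
on:
  pull_request:
  repository_dispatch:
    types: [new-release]   # hypothetical event sent when a new tag lands
jobs:
  test-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: doc-detective/github-action@v1   # action name assumed from the talk
```
Running on both triggers covers new product releases and incoming docs changes.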

17:27.000 --> 17:29.000
Cool.

17:29.000 --> 17:31.000
So, it's running, which is good.

17:31.000 --> 17:33.000
Oh, no, it's got errors.

17:33.000 --> 17:39.000
So, there's some errors here because of the internet connection.

17:39.000 --> 17:43.000
So, what I'm going to do is just tether and try this again.

17:47.000 --> 17:55.000
Okay, how long is the time out normally?

17:55.000 --> 17:59.000
I've got it set to like 30 seconds at the moment,

17:59.000 --> 18:03.000
and we will try this one more time,

18:03.000 --> 18:05.000
and I'm going to just increase the timeout,

18:05.000 --> 18:09.000
because it is just networking for me.

18:09.000 --> 18:13.000
Because what it's doing on the back end is it's just,

18:13.000 --> 18:15.000
it's pulling all the Docker images down,

18:15.000 --> 18:17.000
and it's going to try and open

18:17.000 --> 18:19.000
the local Redpanda Console instance,

18:19.000 --> 18:21.000
but it's not going to be able to do that if,

18:21.000 --> 18:23.000
like, Docker doesn't work.

18:27.000 --> 18:29.000
Well, while we're waiting,

18:29.000 --> 18:31.000
should we ask for any questions?

18:31.000 --> 18:33.000
Yeah, good.

18:33.000 --> 18:34.000
Actually, why not?

18:34.000 --> 18:35.000
Normally, this would be at the end,

18:35.000 --> 18:37.000
but while we're waiting for Docker,

18:37.000 --> 18:39.000
and the network connection to do what it should.

18:39.000 --> 18:41.000
Any questions?

18:41.000 --> 18:43.000
Yes.

18:43.000 --> 18:55.000
Have you ever used this to test changes in your documentation, to make a snapshot of what you're interested in changing, or how do you handle that?

18:55.000 --> 18:57.000
Uh, say that one more time?

19:02.000 --> 19:03.000
Oh, yes.

19:03.000 --> 19:05.000
Have we actually used this to,

19:05.000 --> 19:07.000
to test screenshots,

19:07.000 --> 19:09.000
and to do a screenshot change in our documentation?

19:09.000 --> 19:27.000
No, just testing the docs, but whenever the product changes... (partially inaudible) how do you visualize exactly what changed?

19:29.000 --> 19:31.000
Oh, um,

19:31.000 --> 19:33.000
that's going to be helpful.

19:33.000 --> 19:34.000
So, I think,

19:34.000 --> 19:36.000
um, what you're saying is like,

19:36.000 --> 19:38.000
how do you know if the documentation changed

19:38.000 --> 19:40.000
compared to what the test says it should do?

19:40.000 --> 19:42.000
And I think the idea is just,

19:42.000 --> 19:44.000
you write the test to mirror

19:44.000 --> 19:46.000
exactly what you're documenting.

19:46.000 --> 19:48.000
Uh, so, for that,

19:48.000 --> 19:50.000
you need an environment that's set up

19:50.000 --> 19:52.000
that's kind of going to be similar to what the user has,

19:52.000 --> 19:55.000
and then you have to run through the steps exactly the same way the user does.

19:55.000 --> 19:56.000
So,

19:56.000 --> 19:58.000
Doc Detective supports things like CLI commands,

19:58.000 --> 20:00.000
and you can check the output of them.

20:00.000 --> 20:01.000
You can save them,

20:01.000 --> 20:04.000
and then use the output within your docs.

20:04.000 --> 20:06.000
So, then if the test runs

20:06.000 --> 20:08.000
and the output changes,

20:08.000 --> 20:12.000
our doc gets updated automatically with the output of the new version.

20:12.000 --> 20:14.000
And it's similar to what we do here with the screenshot.

20:14.000 --> 20:17.000
So, when we take a screenshot,

20:17.000 --> 20:20.000
we save it to a specific location,

20:20.000 --> 20:21.000
um,

20:21.000 --> 20:23.000
and the test does the same thing.

20:23.000 --> 20:24.000
It saves it to the same location.

20:24.000 --> 20:25.000
So, it overwrites it.

20:25.000 --> 20:27.000
And so, here,

20:27.000 --> 20:29.000
the image will be updated in the doc,

20:29.000 --> 20:30.000
and that's automatic.

20:30.000 --> 20:32.000
So, all you need to do is just push your changes to GitHub,

20:32.000 --> 20:34.000
or have your CI pipeline,

20:35.000 --> 20:37.000
push the changes as a pull request,

20:37.000 --> 20:40.000
or just write to the branch.

20:40.000 --> 20:42.000
But that has completed,

20:42.000 --> 20:44.000
and you can see here that the version's changed,

20:44.000 --> 20:45.000
because it picked up,

20:45.000 --> 20:47.000
there's a new version of Redpanda,

20:47.000 --> 20:49.000
and so the screenshot changed with the new version.

20:49.000 --> 20:51.000
And then inside the doc,

20:51.000 --> 20:53.000
we just reference that screenshot.

20:53.000 --> 20:55.000
It just won't show it here, which is annoying.

20:55.000 --> 20:58.000
But we reference the same screenshot.

20:58.000 --> 21:01.000
And then, like Ariel showed,

21:01.000 --> 21:02.000
you get the rundown,

21:02.000 --> 21:05.000
and you can see if any of your steps failed.

21:05.000 --> 21:07.000
And it's going to show me that that step failed,

21:07.000 --> 21:09.000
because the screenshot was so different,

21:09.000 --> 21:11.000
compared to the previous version of it.

21:11.000 --> 21:13.000
So, that's what you can use in your CI pipeline,

21:13.000 --> 21:14.000
and you can say,

21:14.000 --> 21:15.000
if it does fail,

21:15.000 --> 21:16.000
then do this,

21:16.000 --> 21:17.000
create a GitHub issue,

21:17.000 --> 21:20.000
or create a pull request.

21:20.000 --> 21:22.000
And that's another thing that's great about Doc Detective:

21:22.000 --> 21:25.000
it has a GitHub action that you can use.

21:25.000 --> 21:26.000
So, that's what we're using here.

21:26.000 --> 21:29.000
It's called the Doc Detective GitHub action, v1,

21:29.000 --> 21:33.000
and that runs everything in a Docker environment for you.

21:37.000 --> 21:39.000
Let's go back to the slides.

21:45.000 --> 21:47.000
As with everything,

21:47.000 --> 21:50.000
bear in mind that your test actions

21:50.000 --> 21:52.000
are in fact real actions,

21:52.000 --> 21:55.000
and you are working on your documentation,

21:55.000 --> 21:59.000
so make sure you are developing your tests well.

21:59.000 --> 22:02.000
This is a part of the writing process.

22:02.000 --> 22:05.000
It is going to create artifacts,

22:05.000 --> 22:07.000
it is going to create logs,

22:07.000 --> 22:09.000
make sure you clean them up,

22:09.000 --> 22:11.000
practice good documentation hygiene,

22:11.000 --> 22:14.000
and make sure to test against your data.

22:14.000 --> 22:18.000
You can test against theoretical data as much as you want,

22:18.000 --> 22:20.000
but ultimately,

22:20.000 --> 22:23.000
you are doing this for your own data,

22:23.000 --> 22:28.000
and you don't know what will happen if you use synthetic data,

22:28.000 --> 22:30.000
implement it live,

22:30.000 --> 22:32.000
and then you find out that your synthetic data

22:32.000 --> 22:36.000
was not quite what you needed.

22:36.000 --> 22:40.000
But these are all part of the documentation process,

22:40.000 --> 22:43.000
and this is a way that you can also integrate best practices

22:43.000 --> 22:46.000
and make sure that everyone is aligned.

22:47.000 --> 22:50.000
Okay, cool.

22:50.000 --> 22:52.000
So yeah, these are the resources.

22:52.000 --> 22:54.000
Docs as Tests is a strategy,

22:54.000 --> 22:55.000
like we said,

22:55.000 --> 22:57.000
there is a blog post about it,

22:57.000 --> 22:58.000
by Manny Silverstein.

22:58.000 --> 23:00.000
He is also the maintainer of Doc Detective,

23:00.000 --> 23:02.000
so you can go there to see the docs,

23:02.000 --> 23:04.000
and there is an active discord channel.

23:04.000 --> 23:05.000
So if you've got questions,

23:05.000 --> 23:07.000
or you just want to try it out,

23:07.000 --> 23:08.000
we're both in there,

23:08.000 --> 23:10.000
and so is Manny.

23:10.000 --> 23:12.000
So yeah, go ahead and try that.

23:12.000 --> 23:13.000
And that is it.

23:13.000 --> 23:15.000
Thank you, everyone.

23:16.000 --> 23:18.000
Thank you.

23:36.000 --> 23:38.000
Is it only web tools,

23:38.000 --> 23:41.000
and can you evaluate source code?

23:41.000 --> 23:42.000
Yeah.

23:43.000 --> 23:48.000
So yeah, Doc Detective supports CLI input.

23:48.000 --> 23:50.000
So you can test CLI commands,

23:50.000 --> 23:52.000
which also means that you can run scripts.

23:52.000 --> 23:55.000
So part of the test that I just ran

23:55.000 --> 23:57.000
will run the scripts,

23:57.000 --> 24:00.000
that fetch the latest version of Redpanda,

24:00.000 --> 24:03.000
and then store those in environment variables,

24:03.000 --> 24:05.000
and that is defined here.

24:05.000 --> 24:06.000
So it says,

24:06.000 --> 24:09.000
runShell is the action in the step,

24:09.000 --> 24:12.000
and then we tell it to run this JavaScript file,

24:12.000 --> 24:16.000
and then we say, whatever is in the output of that script,

24:16.000 --> 24:18.000
store it in this environment variable,

24:18.000 --> 24:19.000
and this one.

24:19.000 --> 24:21.000
And then what we have is in our Docker compose file,

24:21.000 --> 24:23.000
we use those environment variables,

24:23.000 --> 24:24.000
so they get set,

24:24.000 --> 24:27.000
they get used when Docker spins up Redpanda,

24:27.000 --> 24:29.000
and yeah.

24:29.000 --> 24:30.000
So you can run scripts.
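
NOTE
The runShell flow described above might be sketched like this (the setVariables shape is an assumption from my reading of the Doc Detective docs, and the script path is hypothetical):
```json
{
  "action": "runShell",
  "command": "node ./scripts/get-latest-version.js",
  "setVariables": [
    { "name": "REDPANDA_VERSION", "regex": ".*" }
  ]
}
```
The captured environment variable can then be referenced by the Docker Compose file when it spins up Redpanda.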

24:30.000 --> 24:32.000
Can you use asciinema

24:32.000 --> 24:34.000
to do more of that?

24:34.000 --> 24:36.000
Yeah, I think so.

24:36.000 --> 24:38.000
I think because it supports any kind of scripting,

24:38.000 --> 24:39.000
I've never done it,

24:39.000 --> 24:40.000
but I would assume so.

24:40.000 --> 24:44.000
Yeah, so sorry,

24:44.000 --> 24:45.000
the question was,

24:45.000 --> 24:47.000
does it support asciinema,

24:47.000 --> 24:49.000
and I believe so.

24:49.000 --> 24:51.000
And there was a question over there?

24:51.000 --> 24:52.000
Yeah.

24:52.000 --> 24:53.000
So kind of piggybacking off of that.

24:53.000 --> 24:55.000
You mentioned you can make screenshots

24:55.000 --> 24:57.000
of your current test run.

24:57.000 --> 25:00.000
Is it also possible to capture the output of these command steps,

25:00.000 --> 25:02.000
and then have them output in a separate file,

25:02.000 --> 25:05.000
to include in the documentation afterward?

25:06.000 --> 25:07.000
I haven't done that yet,

25:07.000 --> 25:09.000
but I think you can do that.

25:09.000 --> 25:10.000
It is possible.

25:10.000 --> 25:11.000
Yes.

25:11.000 --> 25:13.000
We don't currently store the output,

25:13.000 --> 25:15.000
but we check the output.

25:15.000 --> 25:16.000
So we'll say,

25:16.000 --> 25:18.000
as long as the output contains this string,

25:18.000 --> 25:21.000
then it's good; if not, then it's bad.

25:21.000 --> 25:22.000
The one thing I will say,

25:22.000 --> 25:25.000
because you also asked about CLI commands,

25:25.000 --> 25:27.000
that's something I'm testing right now.

25:27.000 --> 25:30.000
One of the issues I ran into is that

25:30.000 --> 25:33.000
one of our internal APIs is complicated enough

25:34.000 --> 25:37.000
that I'm having problems with the way

25:37.000 --> 25:39.000
Doc Detective currently works.

25:39.000 --> 25:42.000
The maintainer is working on that for a future release,

25:42.000 --> 25:43.000
but in the meantime,

25:43.000 --> 25:46.000
the workaround is to actually call command line tools

25:46.000 --> 25:49.000
to do some checking and use that output.

25:49.000 --> 25:52.000
And I have access to all of that.

25:52.000 --> 25:53.000
Yeah.

25:53.000 --> 25:55.000
So I don't know if you can see that.

25:55.000 --> 25:57.000
But that's an example of output.

25:57.000 --> 26:00.000
So after running this command,

26:00.000 --> 26:04.000
the output should contain "Pandas are fabulous."

26:04.000 --> 26:05.000
And that's how it works.

26:09.000 --> 26:10.000
Fantastic. Thank you, everyone.

26:10.000 --> 26:11.000
Thank you.

26:30.000 --> 26:32.000
Thank you.

