WEBVTT

00:00.000 --> 00:09.000
The industry is kind of going crazy at the moment.

00:09.000 --> 00:13.000
There's also an integration mania that's happening with these companies.

00:13.000 --> 00:17.000
So you shouldn't be thinking about ChatGPT during this talk.

00:17.000 --> 00:22.000
You should be thinking about a capability that's trained with your company or community's materials,

00:22.000 --> 00:27.000
knows what you know, and that is integrated into all of your channels: Slack, Discourse,

00:27.000 --> 00:30.000
Twitter, Zendesk, etc.

00:30.000 --> 00:38.000
And that speaks through those channels as a support agent, or even possibly as the brand.

00:38.000 --> 00:42.000
How many people have seen the Gartner hype cycle before?

00:42.000 --> 00:45.000
Okay, so I'm getting like pretty old now.

00:45.000 --> 00:48.000
And I feel like I've been through this like four or five times.

00:48.000 --> 00:54.000
And we're at a point in history where people are talking about things like AI alignment research.

00:54.000 --> 01:03.000
And there's a subset of the population that is seriously concerned that AI could actually extinguish the human race.

01:03.000 --> 01:07.000
That suggests to me we're coming up here.

01:07.000 --> 01:12.000
That's a pretty inflated expectation in my experience.

01:12.000 --> 01:16.000
Now, I could be wrong. Maybe AI will kill us all.

01:16.000 --> 01:19.000
But honestly, it looks like another tool to me.

01:19.000 --> 01:22.000
You guys can all tell me I'm wrong in 10 years.

01:22.000 --> 01:25.000
But basically, there's a lot of hype.

01:25.000 --> 01:27.000
The company space is exploding.

01:27.000 --> 01:29.000
And so there's a lot of froth out there.

01:29.000 --> 01:30.000
Okay.

01:30.000 --> 01:40.000
That makes it difficult to say exactly what the future is going to hold, because you usually have to get really burned by any given technology

01:40.000 --> 01:44.000
before people figure out how to apply it in a smart way.

01:44.000 --> 01:49.000
But I'm going to attempt to look into the future on the basis of things that we've seen before.

01:49.000 --> 01:53.000
And we're always going to hew back to basic good community principles.

01:53.000 --> 01:57.000
We're not going to look at vendor marketing material, but we're going to look at community principles.

01:57.000 --> 02:02.000
And so what are those? Well, unsurprisingly, community is about people talking with one another.

02:02.000 --> 02:11.000
And so the two core principles are basically: to the extent that you can use a technology to enrich or simplify people's interactions,

02:11.000 --> 02:13.000
it's probably a good thing.

02:13.000 --> 02:23.000
To the extent that the technology replaces, complicates, disincentivizes or gets in the middle of human to human contact, then it's probably a bad idea.

02:23.000 --> 02:26.000
Okay. A couple of principles below that.

02:26.000 --> 02:31.000
And, you know, as community professionals, a lot of us talk about trust.

02:31.000 --> 02:39.000
Okay. So if you think that trust among human beings is important and I kind of do, then basically disclosure is a key principle.

02:39.000 --> 02:46.000
It's a very bad thing if people are confused whether they're talking to a robot or whether they're actually talking to a human.

02:46.000 --> 02:53.000
How many people have done tech support with a company in any context where it wasn't clear whether they were talking to an automated robot or a human?

02:53.000 --> 02:55.000
Yeah, okay, that's most people. I thought so.

02:55.000 --> 02:58.000
They didn't give you disclosure. The second is opt-in.

02:58.000 --> 03:04.000
So basically, ideally, you want to let people choose whether they want to talk to a robot.

03:04.000 --> 03:14.000
Now there's one possible exception here, which is many people are going to feel like robotic responses are okay provided that they come with a human fast follow.

03:14.000 --> 03:18.000
So if the robot is the only thing that you're talking to, that's bad.

03:18.000 --> 03:27.000
But if the robot does something useful, like for example, a GitHub automation bot, and then you're presuming that you're going to talk to a human next, that might be okay.

03:27.000 --> 03:31.000
We're going to talk about examples here.

03:31.000 --> 03:33.000
So let's get into the options.

03:33.000 --> 03:36.000
I'm going to try to structure this as a goofy menu.

03:36.000 --> 03:47.000
And we're going to fiddle around and find out, because I think that's the core method of how human beings learn things in the world: you mess around and you try to figure out what's going to happen.

03:47.000 --> 03:52.000
Sometimes you get burned, and then you learn from that, and then you rinse, lather, repeat.

03:52.000 --> 03:56.000
So the first option, let's call it the intern.

03:56.000 --> 04:05.000
Okay. In my community a while ago, we launched an LLM implementation on GitHub that automatically triages GitHub issues.

04:05.000 --> 04:14.000
And so you type what your issue or pull request is; it reads it, compares it to other things that we've seen in the past, and then it applies a set of GitHub labels

04:15.000 --> 04:22.000
automatically to the issue. Now, for disclosure purposes, we actually told the community that we did this and posted it.

04:22.000 --> 04:33.000
So far, this is going really well. I suspect one of the reasons why this is going really well is that while it is driven by LLMs, nobody knows or really cares, it's just another GitHub bot, right?

04:33.000 --> 04:40.000
So you can make the bot smarter with the use of LLMs, but this is not functionally much different from what most people have seen before.

04:40.000 --> 04:56.000
If you use the QR code to the slides, you can see where we announced it to the community for transparency purposes, and you can actually access or steal our code if you want, because it's open source.
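
A minimal sketch of this kind of triage bot, assuming a hypothetical repo, label set, and model; the open source bot linked from the slides is the real implementation.

```python
# Sketch only: an LLM classifies a new issue against a fixed label set,
# then applies the labels via the GitHub REST API. Repo, labels, and
# model name are hypothetical placeholders.
import json
import os

import requests
from openai import OpenAI

REPO = "example-org/example-repo"  # hypothetical
LABELS = ["bug", "feature-request", "question", "docs"]

def triage(issue_number: int, title: str, body: str) -> None:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        f"Classify this GitHub issue using one or more of {LABELS}. "
        f"Reply with a JSON list only.\n\nTitle: {title}\n\nBody: {body}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model follows the "JSON list only" instruction.
    labels = [l for l in json.loads(resp.choices[0].message.content) if l in LABELS]
    requests.post(
        f"https://api.github.com/repos/{REPO}/issues/{issue_number}/labels",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        json={"labels": labels},
        timeout=30,
    )
```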

04:56.000 --> 05:06.000
A branch off of the intern pattern: there are a lot of systems that already auto-detect spam for us. How many people have a Discourse instance where they're dealing with spam posts?

05:06.000 --> 05:15.000
Fewer, but they're out there. I was just triaging spam posts earlier today on ours, and yeah, we're not going to read those anyway.

05:15.000 --> 05:32.000
So it's possible to set up filters where posts get sent through an LLM and you're asking questions like: is person A being abusive to person B? Is person A using sexist, racist, or otherwise code-of-conduct-violating

05:32.000 --> 05:44.000
language in their post that might require special admin or mod intervention? We're not doing this yet. To the best of my knowledge, this isn't being done anywhere, but it's super easy to build.

05:44.000 --> 05:50.000
And I'm thinking about doing this for FOSDEM 2026; it's just one of my ideas.
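
To make the filter idea concrete, here is a minimal sketch; the question wording and model name are assumptions, and flagged posts would go into a human moderation queue rather than trigger any automated action.

```python
# Sketch only: ask an LLM a yes/no moderation question about each new post.
# The exact question and model are illustrative, not a production policy.
from openai import OpenAI

client = OpenAI()

def needs_moderator(author: str, post_text: str) -> bool:
    question = (
        "Does the following forum post contain abuse directed at another "
        "person, or sexist, racist, or otherwise code-of-conduct-violating "
        f"language? Answer YES or NO.\n\nAuthor: {author}\n\n{post_text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{"role": "user", "content": question}],
    )
    # Flagged posts are routed to a human mod; the LLM never acts alone.
    return resp.choices[0].message.content.strip().upper().startswith("YES")
```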

05:50.000 --> 06:00.000
Shout out to the devroom organizers; they're the only devroom at FOSDEM that I've seen that has the code of conduct points of contact on the whiteboard to the right.

06:00.000 --> 06:04.000
So that got noticed in a good way.

06:04.000 --> 06:13.000
So our second pattern is called the researcher. I'm planning on releasing some open source code that does this. We are already doing this internally at Grafana.

06:13.000 --> 06:21.000
So the idea here is that you might want a bot to read everything that's ever been said on a given topic and then summarize it.

06:21.000 --> 06:29.000
Either for internal stakeholders like product management or for your community people so that they know how to go about supporting a particular case.

06:29.000 --> 06:39.000
In the researcher case, you're going to create a Markdown document that might be 400 pages long, and then you're going to ask it questions.

06:39.000 --> 06:45.000
Okay, this is fairly unobtrusive for the external community, because they're never going to talk to a robot.

06:45.000 --> 06:53.000
It's simply an automated way of being faster about reading everything that they've said and being aware of their feedback.
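
A rough sketch of that researcher pattern, assuming posts have already been exported from your channels; the field names and model are illustrative.

```python
# Sketch only: flatten community posts into one big Markdown document,
# then ask questions against it. Field names and model are assumptions.
from openai import OpenAI

def build_corpus(posts: list[dict]) -> str:
    sections = [f"## {p['author']} ({p['date']})\n\n{p['text']}" for p in posts]
    return "# Community feedback corpus\n\n" + "\n\n".join(sections)

def ask(corpus: str, question: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # needs a long context window for big corpora
        messages=[
            {"role": "system", "content": "Answer only from the provided document."},
            {"role": "user", "content": corpus + "\n\nQuestion: " + question},
        ],
    )
    return resp.choices[0].message.content

posts = [{"author": "alice", "date": "2025-01-02", "text": "Setup docs were confusing."}]
print(ask(build_corpus(posts), "Summarize recurring complaints about setup."))
```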

06:54.000 --> 07:03.000
We use Common Room internally. Many community professionals may be familiar with either Common Room or, what's the other one? I'm forgetting it.

07:03.000 --> 07:05.000
Somebody shouted it out.

07:05.000 --> 07:07.000
Orbit, thank you.

07:07.000 --> 07:11.000
So we're doing this with Common Room.

07:13.000 --> 07:17.000
Our third pattern, I like to call the robot in jail.

07:17.000 --> 07:26.000
And the reason we call it a robot in jail is that we do put it on the forums, but we segregate it into a given category.

07:26.000 --> 07:33.000
We describe that category, we never make it an auto-responder to anything else, and we let those who want to interact with it do so.

07:33.000 --> 07:38.000
So we're doing this right now on our Slack and, for example, on our Discourse.

07:38.000 --> 07:42.000
Again, with full disclosure, following the principles that we described.

07:43.000 --> 07:47.000
Open source code to help you do this is available for Discourse today.

07:47.000 --> 07:50.000
Lots of vendor integrations exist for this.

07:50.000 --> 07:55.000
I've talked to a couple of vendors and asked, how do you recommend people do this? And they recommend this pattern.

07:55.000 --> 07:57.000
They say: create an "Ask AI" channel.

07:57.000 --> 08:00.000
So it's obvious and clear to everybody what they're getting involved with.

08:00.000 --> 08:03.000
And if you hate the idea, don't go in.
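
A minimal sketch of the jailed bot on Slack, assuming slack_bolt and a single designated channel; the channel ID, token names, and model are placeholders.

```python
# Sketch only: the bot answers exclusively in one labeled #ask-ai channel
# and discloses that its replies are automated. IDs/tokens are placeholders.
import os

from openai import OpenAI
from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)
llm = OpenAI()
ASK_AI_CHANNEL = "C0123456789"  # the one opt-in channel: the "jail"

@app.message("")
def reply(message, say):
    if message["channel"] != ASK_AI_CHANNEL:
        return  # never responds anywhere else
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{"role": "user", "content": message["text"]}],
    )
    # Disclosure: the reply is labeled as automated.
    say(f":robot_face: (automated answer) {resp.choices[0].message.content}")

if __name__ == "__main__":
    app.start(port=3000)
```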

08:03.000 --> 08:09.000
Okay, this one is what I would like to do next.

08:09.000 --> 08:12.000
This is called the Eggdrop, or the Reddit sidebar.

08:12.000 --> 08:16.000
So if you've been on IRC for a long time, you often have bots in the channel.

08:16.000 --> 08:19.000
People come in and they ask the same questions over and over and over again.

08:19.000 --> 08:23.000
You want a person to talk to a person, but you want the person to be able to use an LLM.

08:23.000 --> 08:29.000
So we want to be able to extend to moderators, or people that we trust in the community,

08:29.000 --> 08:33.000
the ability to tag the bot and say, hey, explain fact number seven,

08:33.000 --> 08:38.000
or explain the common instructions about this thing that we get asked every single day.

08:38.000 --> 08:44.000
So: interaction at the behest of a person, but not the bot directly answering the person.

08:44.000 --> 08:51.000
And so in this sense, it's like making an LLM into an IRC-bot-style help utility that people can call,

08:51.000 --> 08:54.000
but that never speaks without being spoken to.

08:54.000 --> 08:58.000
That would be the Reddit sidebar.
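
As a sketch of the pattern, here is what the trusted-invocation logic might look like in any chat integration; the trust list, trigger syntax, and canned answers are all made up.

```python
# Sketch only: the bot speaks just when a trusted member invokes it with a
# command, serving canned FAQ answers. A real version might route unknown
# keys to an LLM instead. All names here are illustrative.
TRUSTED = {"mod_jane", "mod_raj"}
FAQS = {
    "7": "To reset your config, delete ~/.example/config and restart.",
}

def handle_message(author: str, text: str) -> str | None:
    if author not in TRUSTED or not text.startswith("!explain "):
        return None  # stays silent unless a trusted person asks
    key = text.removeprefix("!explain ").strip()
    return FAQS.get(key, f"No canned answer for '{key}' yet.")

# e.g. handle_message("mod_jane", "!explain 7") -> the reset instructions
```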

08:59.000 --> 09:04.000
This is one that I'm telling you is coming, although I haven't seen it quite yet.

09:04.000 --> 09:06.000
How many folks know about Donut?

09:06.000 --> 09:08.000
The Slack matching bots?

09:08.000 --> 09:10.000
Okay, so most people don't.

09:10.000 --> 09:16.000
So there's a company called Donut, and basically you invite this bot into a Slack or into another chat forum.

09:16.000 --> 09:18.000
And it periodically pairs people.

09:18.000 --> 09:22.000
And it says, hey, Joe, and hey, Susan, you should meet and get coffee.

09:22.000 --> 09:25.000
And that's configurable a hundred different ways and so on.

09:25.000 --> 09:28.000
I happen to love the movie Everything Everywhere All at Once.

09:28.000 --> 09:31.000
So I couldn't resist an everything bagel reference.

09:31.000 --> 09:36.000
But essentially, this would be the use of LLMs for the purpose of knitting people together.

09:36.000 --> 09:44.000
So the LLM wouldn't speak when not spoken to, but it would basically understand what you're interested in on the basis of what you've talked about.

09:44.000 --> 09:47.000
And then it would arrange something like speed dating.

09:47.000 --> 09:49.000
It would say, hey, community member one.

09:49.000 --> 09:53.000
You should talk to community member two because you share these interests.

09:53.000 --> 09:55.000
Or because of this cool thing that this other person said.

09:55.000 --> 10:00.000
So the robot is just trying to make an introduction and then get out of the way and let people talk.

10:00.000 --> 10:09.000
I think this is an intriguing thing that we're at least thinking about, but we haven't done it yet.
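
For flavor, a sketch of what such a matchmaking step could look like; the member summaries, model, and output format are pure assumptions.

```python
# Sketch only: the LLM reads interest summaries, proposes one introduction,
# and then gets out of the way. Members, model, and format are made up.
import json

from openai import OpenAI

members = {
    "member_one": "Asks a lot about alerting and on-call culture.",
    "member_two": "Recently shared a deep dive on alert fatigue.",
    "member_three": "Mostly posts about dashboard design.",
}

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{
        "role": "user",
        "content": (
            "Given these community members and what they talk about, "
            'propose one introduction as JSON with keys "a", "b", "reason".\n\n'
            + json.dumps(members, indent=2)
        ),
    }],
)
match = json.loads(resp.choices[0].message.content)  # assumes clean JSON out
print(f"Hey {match['a']} and {match['b']}: {match['reason']}")
```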

10:09.000 --> 10:11.000
There is always this option.

10:11.000 --> 10:15.000
You could do all the things in all of the places.

10:15.000 --> 10:18.000
And I got two other case studies I want to share with you.

10:18.000 --> 10:22.000
The first, I tried to obtain permission to tell you who the companies were.

10:22.000 --> 10:23.000
I was not able to obtain it.

10:23.000 --> 10:25.000
So I'm going to describe them as best I can.

10:25.000 --> 10:27.000
The first is a Gen AI company.

10:27.000 --> 10:32.000
They basically automated everything, full stack, using LLMs,

10:32.000 --> 10:38.000
since the very inception of the company. Their argument was that they were dogfooding their own product,

10:38.000 --> 10:39.000
which they were.

10:39.000 --> 10:41.000
So you can kind of understand why they felt that way.

10:41.000 --> 10:44.000
At any rate, they had not invested in a community.

10:44.000 --> 10:48.000
And they're training their user base to just go talk to the support bot,

10:48.000 --> 10:55.000
because that's what the whole product is about.

10:55.000 --> 11:02.000
And so it seems like an active thing that's preventing them from building a community, to be frank.

11:02.000 --> 11:11.000
The second: there's a company in the data space that has both an open source and a commercial enterprise offering.

11:11.000 --> 11:17.000
They got a lot of internal pressure from one of their VPs to help them reduce support costs within the company.

11:17.000 --> 11:21.000
So they went pedal to the metal and basically automated all of the things.

11:21.000 --> 11:25.000
Whenever you come to their support forum, the support bot immediately answers you.

11:25.000 --> 11:31.000
And some of the folks I talked to internally there said, well, is this a success?

11:31.000 --> 11:33.000
I mean, it kind of depends, right?

11:33.000 --> 11:40.000
On paper with the financial metrics, they did a really killer job reducing support costs by something like 70%.

11:40.000 --> 11:46.000
Because most support requests were deflected and never got seen or dealt with by a human being.

11:46.000 --> 11:51.000
On the other side, the community folks are seeing overall engagement decline.

11:51.000 --> 11:54.000
Who's surprised by that?

11:54.000 --> 11:55.000
Right?

11:55.000 --> 11:57.000
I mean, so this is the tradeoff, right?

11:57.000 --> 12:03.000
If you really max these things out, you're kind of getting in the way of humans interacting.

12:03.000 --> 12:10.000
And you save money at the cost of potentially compromising some of the point of what community was about in the first place.

12:10.000 --> 12:13.000
Okay, so basically: how does it go wrong?

12:13.000 --> 12:16.000
I mean, there's a lot of concern online about AI slop right now.

12:16.000 --> 12:21.000
Put simply, how it goes wrong is that you prioritize the wrong thing.

12:21.000 --> 12:32.000
If you prioritize efficiency, cost, and scale, it tends to go wrong because you've made community about something other than people, pretty straightforwardly.

12:32.000 --> 12:36.000
The other way that it goes wrong is that you violate the principles.

12:36.000 --> 12:49.000
So you don't disclose that it's an LLM, and you kind of fool people into thinking that they're talking to a person, and then you bait-and-switch them, or it becomes obvious when the robot can't answer the question correctly.

12:49.000 --> 12:52.000
Or you don't let people opt out.

12:52.000 --> 12:58.000
If anybody knows the movie Vanilla Sky, there's a moment where a character gets trapped.

12:58.000 --> 13:04.000
And he realizes he really needs to escalate the issue and talk to somebody who knows what the hell is going on.

13:04.000 --> 13:08.000
And he's in a sheer panic. That's that moment.

13:08.000 --> 13:14.000
There's about a million articles online about how to reduce customer service costs.

13:14.000 --> 13:20.000
And as a community person, it's always kind of weird because it's like, well, I don't really want to increase customer service costs.

13:20.000 --> 13:23.000
But I also don't entirely want you to make them zero.

13:24.000 --> 13:28.000
Anyways. Okay. So there's this range of too-hot and too-cold options.

13:28.000 --> 13:37.000
We can think of them in three categories. You could go with no robots and you could say LLMs are evil and that the training is wrong and that they're not fully open sourced.

13:37.000 --> 13:40.000
You could take the position that they should stay out of your community completely.

13:40.000 --> 13:45.000
You could adopt a couple of these patterns that we're talking about or you could go pedal to the metal.

13:45.000 --> 13:50.000
And it's going to depend on your ecosystem, obviously.

13:51.000 --> 13:58.000
These are the trade-offs that I've seen so far, and heard in talking to other people who are doing this; they're not talking about it publicly as much.

13:58.000 --> 14:07.000
But basically, if you do some robots, for example intern and robot in jail, you're going to create these special areas that people need to know about.

14:07.000 --> 14:13.000
And that means that in order for people to get value out of them, you're going to have to do advertising and socialization.

14:14.000 --> 14:21.000
And they run the risk of becoming special tricks that only your insiders know about, and that to some degree lessens their value.

14:21.000 --> 14:24.000
It also leaves common questions on the table.

14:24.000 --> 14:33.000
So if you do robot in jail over here on this Slack channel, people are still showing up to your general channels and they're asking the same frequently asked questions that they always did,

14:33.000 --> 14:38.000
which is why I was getting interested in the Eggdrop model that we talked about.

14:38.000 --> 14:44.000
If you go pedal to the metal, I mean, it's pretty stark and it's pretty simple.

14:44.000 --> 14:47.000
You deflect a lot of support tickets. You save a lot of time and money.

14:47.000 --> 14:54.000
And it's really, really bad for networking and connection because you're diffusing all of the random connections people would make,

14:54.000 --> 14:58.000
which could develop into deeper conversations or even relationships.

14:58.000 --> 15:02.000
You're shortcutting that and chopping it up, right?

15:02.000 --> 15:10.000
So quick recommendations for 2025. I said at the top of this talk that everything is changing really fast.

15:10.000 --> 15:18.000
And so I'm going to reserve the right to change my mind. If I come back at, like, FOSDEM in 2029, I want to be able to

15:18.000 --> 15:23.000
basically disavow everything that I said on this stage here today because that's how fast it's moving.

15:23.000 --> 15:29.000
But I would like to recommend to you: robot in jail, researcher, and intern.

15:30.000 --> 15:36.000
And the reason why I'm recommending those things is because they match all of the principles that we described up top.

15:36.000 --> 15:44.000
And they give choice and agency to the people who are in your community, at the cost that you might have to do some

15:44.000 --> 15:49.000
marketing inside of your community to let people know that these capabilities exist.

15:49.000 --> 15:56.000
I recommend that you avoid "no LLMs at all," because, just like any other piece of software, it's automation that can save you time.

15:56.000 --> 16:07.000
And I also recommend that you avoid pedal to the metal because I think your bosses will be super happy with the financial returns and you will be super sad about the community returns if you do that.

16:07.000 --> 16:16.000
I am personally interested in doing future work in this area, and I'm going to be thinking about, and possibly implementing if I can get the time,

16:16.000 --> 16:22.000
Eggdrop and Reddit sidebar, and thinking with some community folks that I trust about

16:22.000 --> 16:26.000
what the super-intelligent Donut would look like. Man, that's a weird sentence, isn't it?

16:26.000 --> 16:29.000
What would a super-intelligent Donut look like?

16:29.000 --> 16:37.000
Anyways, the final warning that you get before you rush off to implement is that people aren't technical problems.

16:37.000 --> 16:41.000
People don't need to be solved like technical problems.

16:41.000 --> 16:45.000
And please try to avoid automating people.

16:45.000 --> 16:53.000
I think sometimes our community can be prone to an engineering mindset where we think about how to get efficiency and automation.

16:53.000 --> 17:00.000
And in the community world, we are simply not technical problems to automate away.

17:00.000 --> 17:04.000
Last, resources and some open questions.

17:04.000 --> 17:07.000
There was a really, really smart discussion on LinkedIn a while ago.

17:07.000 --> 17:11.000
Emily frames the question, and then the thread underneath it is worth reading.

17:11.000 --> 17:17.000
It's basically: if people show up to your ecosystem and all the code that they're putting in via PRs is mostly written by LLMs,

17:17.000 --> 17:19.000
how do you feel about that?

17:19.000 --> 17:22.000
Unsurprisingly, the answer is mixed.

17:22.000 --> 17:26.000
Duck Alignment Academy is working on policy resources for open source projects.

17:26.000 --> 17:30.000
So if you're worried about this, and/or thinking about this, you should go read that.

17:30.000 --> 17:34.000
And also the work being done on the Contributor Covenant for communities.

17:34.000 --> 17:44.000
And, unsurprisingly, you're going to find principles of disclosure in there as well.

17:44.000 --> 17:46.000
I work with Richard Hartmann.

17:46.000 --> 17:49.000
And he did the keynote for FOSDEM.

17:49.000 --> 17:52.000
And he ended the keynote with "Be excellent to each other."

17:52.000 --> 17:55.000
And I couldn't think of a better way to end this talk.

17:55.000 --> 17:57.000
So that's where we're going to leave it.

17:57.000 --> 17:58.000
And I wish you all the best.

17:59.000 --> 18:00.000
Thank you.


