WEBVTT

00:00.000 --> 00:15.000
And now we are welcoming Lola Odelola for "The Gaps We Inherit", and she will explain to you what it means.

00:15.000 --> 00:21.000
Hello, thank you.

00:21.000 --> 00:22.000
Thank you very much.

00:22.000 --> 00:32.000
Okay, so here's a question I've had, having worked in tech for a while now, to be fair.

00:32.000 --> 00:38.000
What resources would we need to create a website for a predominantly blind audience?

00:38.000 --> 00:40.000
It's not working?

00:40.000 --> 00:41.000
Yeah.

00:41.000 --> 00:46.000
Oh, it's muted.

00:46.000 --> 00:50.000
That should do.

00:50.000 --> 00:51.000
Hello.

00:51.000 --> 00:53.000
Yeah, that's definitely on.

00:53.000 --> 00:55.000
You can hear the difference, can't you?

00:55.000 --> 00:56.000
Right.

00:56.000 --> 00:59.000
So if you're creating a website for predominantly blind audience.

00:59.000 --> 01:04.000
So assuming everybody, or 90% of the people who visited your website, were blind.

01:04.000 --> 01:11.000
What resources would you need to make sure they had the experience you wanted them to have?

01:11.000 --> 01:18.000
What would you need to test the website to ensure a consistent experience across browsers and screen readers?

01:18.000 --> 01:27.000
And just for those who are unaware, if you are blind and you are using, well, not just the web, but any kind of digital device.

01:27.000 --> 01:31.000
It's likely you would use an assistive technology called a screen reader.

01:31.000 --> 01:38.000
There are a bunch of different kinds of assistive technologies, and screen readers are just one category of assistive technology.

01:38.000 --> 01:50.000
And some screen readers can even speak to other assistive technologies like Braille readers as well.

01:50.000 --> 01:52.000
So what would we check for?

01:52.000 --> 01:54.000
What should we check for?

01:54.000 --> 02:00.000
In this talk, I want to explore the gaps in interoperability data on the web and why we have them.

02:00.000 --> 02:02.000
My name is Lola Odelola.

02:02.000 --> 02:04.000
I am the founder of Lolaslab.

02:04.000 --> 02:06.000
I do web standards stuff.

02:06.000 --> 02:12.000
I call myself a web standards technologist, but really I just stole that from a job advert that I saw somewhere.

02:12.000 --> 02:23.000
I'm also the co-chair of the W3C Technical Architecture Group and the co-chair of the W3C Documentation Community Group.

02:23.000 --> 02:30.000
I'm also leading a project called the Accessibility Compat Data, which is what this talk is going to lead up to.

02:30.000 --> 02:32.000
And hopefully you learn more about that.

02:32.000 --> 02:35.000
But technology is my side hustle I like to say.

02:35.000 --> 02:40.000
And really I'm an artist and I explore a bunch of different things through painting and photography.

02:40.000 --> 02:44.000
And writing. My first degree is actually in creative writing.

02:44.000 --> 02:46.000
Nothing to do with computer science at all.

02:46.000 --> 02:50.000
And I hate cheesecake and I think everybody should too.

02:50.000 --> 02:52.000
You can debate me about that later.

02:52.000 --> 02:55.000
So let's revisit this question.

02:55.000 --> 03:00.000
What resources would we need to create a website for a predominantly blind audience?

03:00.000 --> 03:09.000
In the first instance, we'd want to make sure that the web features we used can actually be accessed by blind users.

03:09.000 --> 03:12.000
This may seem familiar to some of you.

03:12.000 --> 03:13.000
Probably most of you.

03:13.000 --> 03:16.000
I don't know how many people in the room are web developers.

03:16.000 --> 03:22.000
But if you look in the developer console of any HTML page, you see the DOM tree,

03:22.000 --> 03:26.000
which is also the representation of the page's HTML.

03:26.000 --> 03:31.000
So here we have our doc type declaration, the HTML tag.

03:31.000 --> 03:34.000
Inside it are head and body tags.

03:34.000 --> 03:37.000
And inside the head tag we have our meta and title tags.

03:37.000 --> 03:40.000
Then inside our body tag we have the main tag.

03:40.000 --> 03:43.000
And inside that we have a H1 paragraph and button.
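The page just described can be sketched as a minimal HTML document (illustrative markup; the actual slide's source isn't shown, so the text content here is made up):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Example page</title>
  </head>
  <body>
    <main>
      <h1>A heading</h1>
      <p>Some paragraph text.</p>
      <button>Click me</button>
    </main>
  </body>
</html>
```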

03:43.000 --> 03:47.000
We also have something called the CSS object model,

03:47.000 --> 03:52.000
which is a representation of the CSS rules for the given HTML.

03:52.000 --> 03:57.000
The rules we set here are to have a font size of 16 pixels in the body.

03:57.000 --> 04:02.000
And make all the paragraph text blue, with a margin of 10 pixels around the paragraph.
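Those rules, written out as a CSS sketch (assuming the selectors the talk implies):

```css
body {
  font-size: 16px; /* base font size set on the body */
}

p {
  color: blue;  /* all paragraph text is blue */
  margin: 10px; /* 10px margin around each paragraph */
}
```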

04:02.000 --> 04:05.000
And this is represented in a similar tree format.

04:05.000 --> 04:10.000
The same rules we've defined now have a clear hierarchy.

04:10.000 --> 04:13.000
The body and paragraph rules are on the same branch.

04:13.000 --> 04:17.000
While the font size branches from the body, and the color and margin

04:17.000 --> 04:19.000
branch from the paragraph.

04:19.000 --> 04:25.000
These two combined, so our DOM tree and our CSS Object Model,

04:25.000 --> 04:27.000
they give us our render tree.

04:27.000 --> 04:31.000
And the render tree is really what dictates the layout.

04:31.000 --> 04:36.000
This is the render tree for the HTML and CSS we just created.

04:36.000 --> 04:40.000
And this is everything the user would need to see,

04:40.000 --> 04:42.000
assuming that they were not blind.

04:43.000 --> 04:48.000
The render tree is important because it's how the browser determines the layout for the webpage.

04:48.000 --> 04:52.000
And what sighted users should see on the page, and how they should see it.

04:52.000 --> 04:58.000
Users who use magnification tools, or who have forms of blindness that affect how they perceive colours

04:58.000 --> 05:01.000
and contrast, would also benefit from the render tree.

05:01.000 --> 05:06.000
In fact, most WCAG requirements are met with HTML and CSS.

05:06.000 --> 05:11.000
So in other words, this is how WCAG requirements would even be presented to the web user.

05:13.000 --> 05:19.000
However, web users who use screen readers do not need the render tree at all.

05:19.000 --> 05:25.000
So if we're building our website for blind users, they wouldn't use the render tree.

05:25.000 --> 05:31.000
They don't really need to know the layout. What they need to know are the accessibility semantics,

05:31.000 --> 05:34.000
which is communicated in the accessibility tree.

05:34.000 --> 05:39.000
The important thing here is that the accessibility tree is a separate structure,

05:39.000 --> 05:49.000
and extracts the roles, names, and text of the DOM tree, allowing screen readers to interact with the necessary semantics to communicate to users.

05:49.000 --> 05:55.000
Screen readers also communicate these semantics to other devices, as I mentioned earlier, such as Braille readers.

05:55.000 --> 06:02.000
Every browser does this differently, but this is roughly what you can expect all browsers to include in the accessibility tree.

06:03.000 --> 06:10.000
We don't have any information in this accessibility tree about styles, and we have limited information about what's in the head.

06:10.000 --> 06:19.000
Mainly the role and name of the document, and for each element in the body we have the role, name, and text of the element, and whether that element is focusable.
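As a toy illustration of what the accessibility tree keeps versus what it drops, here is a sketch over plain JavaScript objects. This is not a real browser API; real accessibility trees are built from browser internals, and the focusable rule here is a simplification:

```javascript
// Each plain object stands in for a DOM element. We keep only the
// semantics the talk lists: role, name, text, and focusability.
// Style information is deliberately dropped, as in a real accessibility tree.
function toAccessibilityNode(el) {
  return {
    role: el.role,
    name: el.name ?? el.text ?? "",
    text: el.text ?? "",
    // Simplified: in reality focusability depends on tabindex, disabled state, etc.
    focusable: el.role === "button" || el.role === "link",
  };
}

const dom = [
  { role: "heading", text: "A heading", style: { fontSize: "16px" } },
  { role: "paragraph", text: "Some text", style: { color: "blue" } },
  { role: "button", text: "Click me", style: { margin: "10px" } },
];

const tree = dom.map(toAccessibilityNode);
console.log(tree);
```

Note that the `style` objects never make it into the accessibility nodes, which mirrors why the render tree and the accessibility tree are separate structures.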

06:19.000 --> 06:24.000
These are the important semantics for screen readers in general.

06:24.000 --> 06:32.000
And so the DOM tree has all the HTML, whereas the accessibility tree has the semantically meaningful information for screen readers.

06:32.000 --> 06:39.000
The DOM tree is standardized in the DOM Standard, which is worked on in the Web Hypertext Application Technology Working Group.

06:39.000 --> 06:46.000
It's a mouthful, WHATWG, some of you may be familiar with that acronym, which also works on the HTML standard.

06:46.000 --> 06:53.000
This makes it easier for there to be interoperability between browsers, as they can agree on what should be in the DOM tree.

06:53.000 --> 07:05.000
In the panel that we just had, we heard the panelists talk about the work that goes into making sure that there is a consistent web experience across browsers regardless of the browser you're using.

07:05.000 --> 07:11.000
However, the same is not exactly true for the accessibility tree.

07:11.000 --> 07:16.000
Who actually decides what should be in the accessibility tree? This isn't arbitrary.

07:16.000 --> 07:21.000
There's a whole ecosystem of specifications and APIs that govern this.

07:21.000 --> 07:30.000
However, it's not standardized, and because of that, browsers have flexibility in how they implement their accessibility trees.

07:30.000 --> 07:38.000
So we're going to go in a bit more detail and forgive me if this gets confusing.

07:38.000 --> 07:43.000
It was confusing for me as I was delving into things.

07:43.000 --> 07:50.000
There are series of specifications that specify how web features are mapped to platform APIs.

07:50.000 --> 07:55.000
When we are dealing with the DOM tree, we only really care about the web browser.

07:55.000 --> 08:03.000
When you're dealing with the accessibility tree, yes, the accessibility tree is a semantic representation of what is shown in a browser,

08:03.000 --> 08:08.000
but it connects to operating-system platform APIs.

08:08.000 --> 08:16.000
As you can imagine, each operating system has its own platform API, which I'll talk about in more detail soon.

08:16.000 --> 08:27.000
What Apple says a link is going to be completely different to what Windows says a link is.

08:27.000 --> 08:36.000
And that's because within their operating system, there are other elements that have nothing to do with the browser that may use the same roles.

08:36.000 --> 08:40.000
And so they get to define what those are and how those look.

08:40.000 --> 08:47.000
So as I said, there are a series of specifications that specify how web features are mapped to platform accessibility APIs.

08:47.000 --> 09:00.000
But the four most relevant to people who work on the web are the Core, HTML, and SVG Accessibility API Mappings, and the Accessible Name and Description Computation.

09:00.000 --> 09:12.000
The Core Accessibility API Mapping is a document which describes how user agents, so not just browsers, but in this case the browser as the web user agent,

09:12.000 --> 09:18.000
should expose the semantics of web content languages to accessibility APIs.

09:18.000 --> 09:29.000
For example, for accessibility API 1, the ARIA role link should be sent to the API as ROLE_SYSTEM_LINK.

09:29.000 --> 09:34.000
And you can see that in the example for the Core AAM on the first line there.

09:35.000 --> 09:40.000
The HTML accessibility API mapping does something completely different.

09:40.000 --> 09:47.000
It defines how user agents map HTML elements specifically and attributes to platform APIs.

09:47.000 --> 09:57.000
So in this example, the anchor element, with a non-empty href attribute, should have the ARIA role link.

09:57.000 --> 10:14.000
And because the anchor element would have the ARIA role link, when it comes to mapping to accessibility API 1, it would then use ROLE_SYSTEM_LINK to map to that operating system's accessibility API.

10:14.000 --> 10:23.000
And the HTML Accessibility API Mapping document goes through all the HTML elements and does the same thing.
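The two-step mapping just described can be sketched as a pair of lookup tables. The entries here are heavily simplified from the HTML AAM and Core AAM, and the platform constants are only indicative (ROLE_SYSTEM_LINK is a real MSAA/IAccessible2 constant; the rest are illustrative):

```javascript
// Step 1 (HTML AAM): HTML element -> implicit ARIA role.
// An anchor only gets the link role when it has a non-empty href.
function implicitRole(tagName, attrs = {}) {
  if (tagName === "a") return attrs.href ? "link" : "generic";
  const table = { button: "button", h1: "heading", p: "paragraph" };
  return table[tagName] ?? "generic";
}

// Step 2 (Core AAM): ARIA role -> platform role, per accessibility API.
// Simplified two-API table; a real mapping covers many more roles and APIs.
const platformRole = {
  ia2: { link: "ROLE_SYSTEM_LINK", button: "ROLE_SYSTEM_PUSHBUTTON" },
  uia: { link: "Hyperlink", button: "Button" },
};

const role = implicitRole("a", { href: "https://example.com" });
console.log(platformRole.ia2[role]); // "ROLE_SYSTEM_LINK"
console.log(platformRole.uia[role]);
```

The point of the sketch is that the same HTML element resolves to one ARIA role, but that role then fans out into different platform-specific representations.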

10:23.000 --> 10:29.000
And then we also have our accessible name and description computation.

10:29.000 --> 10:34.000
Roles aren't the only important semantics that need to be defined for an accessibility API.

10:34.000 --> 10:43.000
We need to define accessible names and descriptions too, and this is all helpful when creating the accessibility tree.
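As a rough sketch of the kind of precedence the accessible name computation defines (heavily simplified; the real AccName algorithm has many more steps, including aria-labelledby traversal and visibility checks):

```javascript
// Heavily simplified accessible-name precedence over plain objects:
// aria-label wins over an associated label, which wins over text content.
function accessibleName(el) {
  if (el.ariaLabel && el.ariaLabel.trim()) return el.ariaLabel.trim();
  if (el.label) return el.label;
  return el.textContent ?? "";
}

// An icon button whose visible text is just "X":
console.log(accessibleName({ ariaLabel: "Close dialog", textContent: "X" }));
// "Close dialog" — the bare "X" would be meaningless to a screen reader user
```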

10:44.000 --> 10:46.000
So the DOM tree only has to consider the browser.

10:46.000 --> 10:56.000
However, the accessibility tree has to also consider the platform and operating system level APIs, which are mostly closed source and proprietary.

10:56.000 --> 11:00.000
And they don't interoperate with each other in the same way browser specifications do.

11:00.000 --> 11:08.000
So each platform has a different way of interpreting the data it receives and requires different instructions to properly interpret that data.

11:08.000 --> 11:16.000
And so each platform API is an additional variable that needs to be considered when creating web experiences for blind people.

11:16.000 --> 11:24.000
And so this is an example of the Core AAM.

11:24.000 --> 11:30.000
Each platform API accepts different key value pairs per way RRL.

11:30.000 --> 11:38.000
For IAccessible2, which is a Windows platform API, it accepts the role, the states, and the interface, which are required.

11:38.000 --> 11:45.000
Whereas UIA, which is also a Windows API, accepts the control type and control pattern, which are required.

11:45.000 --> 11:54.000
So we will take our ARIA role link and, depending on the platform API, send different data.

11:54.000 --> 11:56.000
And you can imagine how confusing that would get.

11:56.000 --> 11:59.000
There's no consistency across platforms at all.

12:00.000 --> 12:07.000
And so each screen reader uses one of these platform APIs, depending on the operating system.

12:07.000 --> 12:14.000
So JAWS and NVDA, which are both screen readers, both operate on Windows and use IAccessible2 or Windows UI Automation.

12:14.000 --> 12:19.000
While VoiceOver is on macOS and uses the macOS accessibility protocol.

12:19.000 --> 12:25.000
And Orca is available on GNOME, and that uses the Linux Accessibility Toolkit.

12:25.000 --> 12:37.000
And so we have a group of spec definitions, the Core and HTML Accessibility API Mappings and the Accessible Name and Description Computation, which say how the accessibility tree is constructed.

12:37.000 --> 12:48.000
And then the accessibility tree passes those semantics to various platform APIs, which pass them to the assistive technologies, in this case screen readers, which then pass them on to the users.

12:48.000 --> 12:57.000
These steps can be summarized as how the semantics in a given HTML page move from browser to operating system to user.

12:57.000 --> 12:59.000
That was a lot.

12:59.000 --> 13:10.000
If we wanted to build an HTML page for sighted users, how would we know the paragraph, h1, and button tags would render as we expect them to?

13:10.000 --> 13:22.000
We could use the browser compatibility data table. This is on MDN; on most MDN pages you have a compat table at the bottom, which tells you the support for the feature per browser.

13:22.000 --> 13:26.000
We could use Can I Use, which shows similar data.

13:26.000 --> 13:34.000
It doesn't have the same reference information, but if you wanted to get straight to it, it does show you the browser support.

13:34.000 --> 13:46.000
We could use Baseline, and this is even more streamlined, more simplified, and it uses a red-yellow-green system to show whether something is widely available or not.

13:46.000 --> 13:54.000
Or maybe we don't even want to leave our IDE, and as we're typing, support data comes up to show that a thing is supported.

13:54.000 --> 14:01.000
This is currently available for CSS properties in JetBrains WebStorm and Visual Studio Code.

14:01.000 --> 14:09.000
The good thing, though, about all those options is that they actually all use the same data source.

14:09.000 --> 14:25.000
So BCD, which is Browser Compat Data, is actually the data that you see in the IDE integrations, in Baseline, in Can I Use, and in the MDN browser compatibility tables.

14:25.000 --> 14:29.000
It uses the same source, which means that there's unification of data, right?

14:29.000 --> 14:37.000
All that needs to happen is one place needs to be updated for everywhere to be updated and for developers to have accurate information.

14:37.000 --> 14:44.000
But the BCD collector does not include the accessibility tree in its support data information.

14:44.000 --> 14:52.000
It only considers the DOM tree, and as we've seen, for blind users that is not enough.

14:52.000 --> 14:58.000
So how do we know our website works with assistive technologies?

14:58.000 --> 15:10.000
There's a11ysupport.io, which is a website that includes data specifically for WAI-ARIA role mappings and ARIA attributes.

15:10.000 --> 15:17.000
However, this may be difficult for developers to read and understand, because you need to know which roles map to which elements.

15:17.000 --> 15:23.000
And the data is incomplete in some places, as it relies on volunteers to contribute and update.

15:23.000 --> 15:33.000
There's also PowerMapper, which is another good source; it takes it a step further and gives support data for HTML elements across screen readers and browsers.

15:33.000 --> 15:42.000
However, this data is closed source, and while they have a lot of data, it's difficult to know when that data is updated, where to file bugs, et cetera.

15:42.000 --> 15:54.000
There's also the ARIA-AT project, which initially was mainly for the APG patterns, which is the ARIA Authoring Practices Guide.

15:54.000 --> 16:02.000
However, they have expanded it to be HTML elements too, but work is still ongoing in this space.

16:02.000 --> 16:19.000
And then there's web platform tests, and I don't know if any of you have tried to read web platform tests or get useful information from web platform tests, but if you are not a browser engineer, even if you are a browser engineer, unless you're working on something very, very specific, it can be difficult.

16:19.000 --> 16:26.000
As my colleagues said earlier, there are like 2 million web platform tests, where the hell do you even begin?

16:26.000 --> 16:37.000
So there's presentation fragmentation: there are many options for us to visit, but they all do different things and present data in different ways, or are closed source, or all these other things.

16:37.000 --> 16:50.000
And the W3C maintains that there is one web, however, the prioritization of sighted users means that blind users often have to experience multiple or incomplete webs.

16:50.000 --> 16:54.000
What does this infrastructure disparity tell us, then?

16:54.000 --> 17:04.000
As it stands, a lack of reliable and easily accessible interop data encourages web developers to create websites that employ these kinds of dark patterns, even if accidentally.

17:04.000 --> 17:11.000
For example, there's an assumption that if a web feature is currently exposed in the DOM tree, then it's also exposed in the accessibility tree.

17:11.000 --> 17:19.000
This leads developers to not fully test their websites, causing at best a poor experience and at worst an exploitative web experience.

17:19.000 --> 17:26.000
I'm going to skip through some slides here just so that I have enough time for questions at the end.

17:26.000 --> 17:37.000
So that brings me to the thing that I'm working on, which I think would be of interest to folks in this room, which is the Accessibility Compat Data.

17:38.000 --> 17:49.000
This is a project I'm working on, and it has a few goals, right? So the first is to define machine readable data that represents accessibility support for web features.

17:49.000 --> 18:03.000
To integrate this data into MDN and other developer resources, to support developers in building accessible experiences from the start, and most importantly, improving web experiences for users of assistive technologies.

18:03.000 --> 18:09.000
ACD is not an auditing tool, and it does not define a baseline.

18:09.000 --> 18:22.000
We want to provide support data for the browser that takes the accessibility tree into consideration, and support data to highlight when screen readers support a particular element, or if there are bugs.

18:22.000 --> 18:27.000
And similar to BCD, we want to integrate into the places where developers already are.

18:27.000 --> 18:33.000
We don't want developers to have to learn a new website or go to a different place to find accessibility information.

18:33.000 --> 18:38.000
We want it to just be integrated in the tools and resources that developers already use.

18:38.000 --> 18:48.000
And so for users of ACD, the impact will probably be that users will be better catered to, especially those who use assistive technologies, right?

18:48.000 --> 18:52.000
Web developers will know where they may need to fill holes.

18:52.000 --> 19:01.000
Assistive tech and browser vendors will be more aware of bugs, and spec authors will be encouraged to write accessibility tests for their web features.

19:01.000 --> 19:06.000
And so we do have support from a bunch of different people.

19:06.000 --> 19:14.000
Last year, we had support from the early project, Open Web Docs, and Microsoft. Patrick, who was on the panel

19:14.000 --> 19:18.000
a moment ago, is a big supporter of the work we're doing.

19:18.000 --> 19:33.000
Mozilla, TetraLogical, and Apple have helped us in some cases, and then we also have a bunch of indie accessibility professionals and experts who know how all of this works and who can dedicate their resources, their time, and their knowledge.

19:33.000 --> 20:00.000
And in terms of where we are with the project, we now have a very, very rough demo of a collector, which goes into web platform tests, gets the relevant test from web platform tests, goes into a screen reader data source that we've identified, combines the two, and says, okay, this web feature is supported in this browser version and this screen reader version.
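The collector's join can be sketched like this. The data shapes, feature ids, and version strings below are invented for illustration; the project's real sources and schema may differ:

```javascript
// Hypothetical record shapes: one per feature from web-platform-tests,
// one from a screen reader support source; the collector joins on feature id.
const wptResults = [
  { feature: "html.elements.button", browser: "Firefox", version: "128", pass: true },
];
const screenReaderData = [
  { feature: "html.elements.button", screenReader: "NVDA", version: "2024.2", supported: true },
];

// Combine the two sources: a feature counts as supported only when both
// the browser test passes and the screen reader reports support.
function combine(wpt, sr) {
  return wpt.flatMap((w) =>
    sr
      .filter((s) => s.feature === w.feature)
      .map((s) => ({
        feature: w.feature,
        browser: `${w.browser} ${w.version}`,
        screenReader: `${s.screenReader} ${s.version}`,
        supported: w.pass && s.supported,
      }))
  );
}

console.log(combine(wptResults, screenReaderData));
```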

20:00.000 --> 20:05.000
And we still need to do a lot of work there, obviously, but that's where we are with it.

20:05.000 --> 20:19.000
And that's it for me. You can find me on Bluesky or Mastodon at lolaodelola, and you can follow the project on GitHub; I think it's actually now lolaslab-accessibility-compat-data.

20:19.000 --> 20:24.000
And if you'd like to sponsor us, you can sponsor this work at give.lolaslab.co.

20:24.000 --> 20:25.000
Thank you.

20:25.000 --> 20:35.000
So we have time for a couple of questions.

20:35.000 --> 20:40.000
Great talk, thanks.

20:40.000 --> 20:54.000
This might not be very relevant to the developers, but would it be easier if the screen readers talk directly to the browser instead of going through the platform APIs and all that mess?

20:54.000 --> 21:01.000
As somebody who does not work on operating systems, you have to take my answer with a huge grain of salt.

21:01.000 --> 21:10.000
I'm going to say no, mainly because the screen reader is an operating-system-level technology.

21:10.000 --> 21:17.000
It's not a browser technology, and so the screen reader still needs to speak to the operating system anyway.

21:17.000 --> 21:25.000
Like if we didn't have a browser, like actual browsers didn't exist, screen readers could still exist and still be very useful to blind people.

21:25.000 --> 21:34.000
If somebody opens up a word processor application on their desktop on their device, the screen reader still needs to engage with that.

21:34.000 --> 21:41.000
It still needs to know what a hyperlink is in the context of the word processor, so the document, et cetera.

21:41.000 --> 21:46.000
So I don't know that it would be easier.

21:46.000 --> 21:56.000
Thank you. Any other question?

21:56.000 --> 22:10.000
So after collecting this data, would there be interest in an initiative like Baseline to try to bring it in, or just incorporate some of these things into Baseline?

22:10.000 --> 22:24.000
Okay, good. So we are talking with the people who are doing baseline to see what we can do to integrate this data into baseline.

22:24.000 --> 22:31.000
It's a bit tricky because on one hand something shouldn't be baseline if it's not accessible.

22:31.000 --> 22:39.000
On the other hand, we don't want developers to rely on Baseline as a way to understand whether or not something is accessible, right?

22:39.000 --> 22:53.000
Like, Baseline is a simplification of a simplification, and really, especially for accessibility data, you want to understand the nuances, so you can know what gaps you may need to plug.

22:53.000 --> 23:01.000
So we are still in discussion. We're going to be running a survey this year to really understand how developers view baseline and accessibility.

23:01.000 --> 23:05.000
But yeah, it's a conversation that's happening.

23:05.000 --> 23:20.000
Thank you. One last question.

23:20.000 --> 23:37.000
Hi. So maybe something we notice is that, as a small company, it can be hard to focus on accessibility because it's complex, it takes a lot of work, and you don't have a lot of users who maybe need accessibility features, but it could help a lot.

23:37.000 --> 23:47.000
Do you think, for example, AI could help in this case? Is accessibility something that is good for an AI to understand, and could it help on the developer side, basically

23:47.000 --> 23:55.000
assisting in how you make a website more accessible? But maybe also from the browser side, because you have some new modern browsers that basically take away

23:55.000 --> 24:04.000
the browser pages and give more of a human interface, like chat or other kinds of things. Do you think these things will help?

24:05.000 --> 24:22.000
I think it's still to be seen. I don't know if anyone has tried to even create an HTML button with AI, but it often can't do that in an accessible way, and, you know, making an HTML button accessible is not difficult at all.

24:23.000 --> 24:41.000
I think we've had discussions with a few different people who are interested in possibly integrating this data into AI tools, so that if a developer is using AI to build something, at least the data they're pulling from is as accurate as possible.

24:41.000 --> 24:51.000
But I think it's a bit more complicated; it really depends on the AI tool and where its data is coming from.

24:51.000 --> 25:02.000
A lot of the data on the web at the moment is insufficient because a lot of websites are inaccessible, so, you know, you're kind of just running around in circles at that point.

25:02.000 --> 25:09.000
If we could integrate this into AI I think that's an interesting prospect and we have spoken to a couple of people about that.

25:09.000 --> 25:12.000
And you, a very quick last question.

25:12.000 --> 25:31.000
Here's a quick one, and another perspective on this: being able to know if a feature is supported or not is very nice and very important, but as developers we sometimes like to try and test on the real thing, for example spin up a different browser to test if the rendering is correct on Firefox and Chrome, whatever.

25:31.000 --> 25:48.000
Would it be possible to use this data to later create some kind of simulation environment, where one could for example spin up a screen reader and the stack, or a simulation of the stack it goes through, and see exactly where the quirks are, experiencing it first-hand?

25:48.000 --> 25:53.000
Would that be possible, to get a better understanding of the actual user experience?

25:53.000 --> 25:56.000
Beyond just the compatibility data.

25:56.000 --> 26:01.000
I think that's actually possible today but don't hold me to that.

26:01.000 --> 26:06.000
I don't know that you would need this data to do that basically.

26:06.000 --> 26:10.000
I think you can do that now but also maybe I'm wrong about that.

26:10.000 --> 26:17.000
I have a feeling, and don't quote me, that's what some of my colleagues have done in other accessibility tools that they are building.

26:17.000 --> 26:19.000
Cool.

26:19.000 --> 26:20.000
Thank you very much.

26:20.000 --> 26:22.000
Thank you so much.

26:22.000 --> 26:25.000
Thank you.

