<!DOCTYPE html>
<html lang="en" dir="ltr" typeof="bibo:Document " prefix="bibo: http://purl.org/ontology/bibo/ w3p: http://www.w3.org/2001/02pd/rec54#">
<head><meta property="dc:language" content="en" lang="">
<title>Mobile Accessibility: How WCAG 2.0 and Other W3C/WAI Guidelines Apply to Mobile</title>
<meta charset="utf-8">
<style type="text/css">
div.successcriteria {
border:solid #CCCCCC 1px;
background:#DCDCDC;
padding:.75em;
margin-top:1em;
color: black;
}
div.newsuccesscriteria {
border:solid #CCCCCC 1px;
background:#C7F1C2;
padding:.75em;
margin-top:1em;
color: black;
}
div.advisorysuccesscriteria {
border:solid #CCCCCC 1px;
background:#F4DBF2;
padding:.75em;
margin-top:1em;
color: black;
}
div.comment {
border:solid #B9F3BE 1px;
background:#F7F9A8;
padding:.75em;
margin-top:1em;
color: black;
}
div.technique {
border:solid #B9F3BE 1px;
background:#E4E2F1;
padding:.75em;
margin-top:1em;
color: black;
}
.blue {color:#041FF0}
</style>
<style>/*****************************************************************
* ReSpec 3 CSS
* Robin Berjon - http://berjon.com/
*****************************************************************/
/* --- INLINES --- */
em.rfc2119 {
text-transform: lowercase;
font-variant: small-caps;
font-style: normal;
color: #900;
}
h1 acronym, h2 acronym, h3 acronym, h4 acronym, h5 acronym, h6 acronym, a acronym,
h1 abbr, h2 abbr, h3 abbr, h4 abbr, h5 abbr, h6 abbr, a abbr {
border: none;
}
dfn {
font-weight: bold;
}
a.internalDFN {
color: inherit;
border-bottom: 1px solid #99c;
text-decoration: none;
}
a.externalDFN {
color: inherit;
border-bottom: 1px dotted #ccc;
text-decoration: none;
}
a.bibref {
text-decoration: none;
}
cite .bibref {
font-style: normal;
}
code {
color: #C83500;
}
/* --- TOC --- */
.toc a, .tof a {
text-decoration: none;
}
a .secno, a .figno {
color: #000;
}
ul.tof, ol.tof {
list-style: none outside none;
}
.caption {
margin-top: 0.5em;
font-style: italic;
}
/* --- TABLE --- */
table.simple {
border-spacing: 0;
border-collapse: collapse;
border-bottom: 3px solid #005a9c;
}
.simple th {
background: #005a9c;
color: #fff;
padding: 3px 5px;
text-align: left;
}
.simple th[scope="row"] {
background: inherit;
color: inherit;
border-top: 1px solid #ddd;
}
.simple td {
padding: 3px 10px;
border-top: 1px solid #ddd;
}
.simple tr:nth-child(even) {
background: #f0f6ff;
}
/* --- DL --- */
.section dd > p:first-child {
margin-top: 0;
}
.section dd > p:last-child {
margin-bottom: 0;
}
.section dd {
margin-bottom: 1em;
}
.section dl.attrs dd, .section dl.eldef dd {
margin-bottom: 0;
}
@media print {
.removeOnSave {
display: none;
}
}
/* custom styles for WCAG 2.0 guidelines */
body {
max-width: 75em;
}
code {
font-family: monospace;
}
div.constraint, div.issue, div.note, div.notice, div.example {
margin-left: 1em;
margin-bottom: 0.5em;
margin-top: 0;
padding-top:0;
}
dl div.note, dl div.example {
margin-top: 0.25em;
margin-left: 0.5em;
}
ol.enumar {
list-style-type: decimal;
margin-top: 0;
margin-bottom: .25em;
}
ol.enumla {
list-style-type: lower-alpha;
}
ol.enumlr {
list-style-type: lower-roman;
}
ol.enumua {
list-style-type: upper-alpha;
}
ol.enumur {
list-style-type: upper-roman;
}
div.div2 dl, dl.keyterms {
margin-left: 1.5em;
}
p, td {
line-height: 1.4;
margin-left: .5em;
color: #000;
background: inherit;
font-weight: normal;
}
li {
margin-top: 0;
margin-bottom: 0.25em;
padding-top: 0;
padding-bottom: 0;
}
li p, dd p {
margin-top: 0;
margin-bottom: 0;
padding-top: 0;
padding-bottom: 0
}
p.sctxt {
margin: 0.5em 0 0 0.5em;
padding: 0;
}
strong.sc-handle {
font-size: 1em;
}
dd.prefix p {
margin-bottom: 0;
}
div.head dt {
margin-top: 0.25em;
}
dt.label {
padding-top: .5em;
}
dd p {
margin-left: 0;
}
div.sc ul, div.sc ol, div.sc div.note, div.div3 ul, div.div3 ol {
display: block;
margin-top: 0;
margin-bottom: 0;
}
.principle {
padding: .5em;
border: solid #666666 1px;
background-color: #FFFFFF;
color: #000000;
}
/* If you place a comment immediately after a selector in a style sheet, IE 5 and earlier on Windows will ignore that selector. */
div.guideline/* */ {
border: solid #666666 1px;
background-color: #CFE8EF ! important;
padding:.75em;
margin-top:1em;
color: #000000;
position: relative;
}
div.sc/* */ {
border: solid #666666 1px;
background-color: #C7C7C7 ! important;
padding: 0 .5em .5em .5em;
margin: 1em 0 0 0;
color: #000000;
position: relative;
}
div.guideline h3/* */ {
background-color: #CFE8EF ! important;
color: #000000;
margin-right: 14.5em;
margin-bottom: 0.5em;
}
p.und-gl-link/* */ {
position: absolute;
right: 0.5em;
top: 0.5em;
width: 15em;
display: inline;
padding: 0;
font-size: 0.8125em;
}
div.sc li, div.sc li p {
padding: 0;
margin: 0;
}
div.sc {
margin-left: 0;
margin-bottom: 1.5em !important;}
.termref {
text-decoration:none;
color:#000000;
border-bottom:dotted #585858 1px; /* de-emphasize glossary links */
background-color: #fff;
}
a.termref:link {
color:#000000;
background : inherit;
}
a.termref:hover, .termref:active, a.termref:focus {
color:#0000CC;
background : inherit;
}
.sorethumb {
color: red; background: inherit;
}
a.HTMlink, a.HTMlink:visited, a.HTMlink:hover, a.HTMlink:focus {
font-size: 0.8125em;
padding: 0;
font-weight: normal;
}
h3.guideline a.HTMlink, h3.guideline a.HTMlink:visited, h3.guideline a.HTMlink:hover, h3.guideline a.HTMlink:focus {
margin: 0px 0px 2px 15px;
}
p.prefix {
margin: 0.25em 0 0.5em 0;
padding:0;
}
.req, .bp, .additional {
display: block;
border-bottom: solid #666666 1px;
margin-left: 1em;
margin-right: 0.25em;
padding-bottom: .25em;
padding-top: 0.5em;
}
div.sc/* */ {
position: relative;
margin-right: 11em;
top: 0;
left: 0;
}
div.sc div.note p.prefix {
margin-bottom: 0;
}
div.scinner {
margin: 1em 0 0 0;
padding-right: 1em;
}
div.doclinks/* */ {
position: absolute;
right: 1.5em; /* Fix IE5.5 (so that doclinks line up correctly) */
top: 0em;
width: 9em;
}
div.sc>div.doclinks/* */ {
right: -9em;
}
.doclinks p {
margin: 0 0 0 0 !important;
padding: 2px 8px 2px 0 !important;
line-height: 1.3
}
.doclinks p a {
margin: 0 !important;
padding: 0 !important
}
p.supportlinks {
margin: 0 5px 0 5px; /* top, right, bottom, left */
padding: 0.25em;
text-align: right;
border: solid #006 1px;
border-right: solid #006 3px;
background: #f4f4ff;
color: #000;
}
p.supportlinks a {
margin: 0.25em;
text-decoration: underline;
}
div {
clear: both;
}
span.screenreader {position: absolute; left: -10000px}
body {
background: white none no-repeat fixed left top;
color: black;
font-family: sans-serif;
margin: 0;
padding: 2em 1em 2em 70px;
}
:link {
background: transparent none repeat scroll 0 0;
color: #00c;
}
:visited {
background: transparent none repeat scroll 0 0;
color: #609;
}
a:active {
background: transparent none repeat scroll 0 0;
color: #c00;
}
a:link img, a:visited img {
border-style: none;
}
a img {
color: white;
}
@media all {
a img {
color: inherit;
}
}
th, td {
font-family: sans-serif;
}
h1, h2, h3, h4, h5, h6 {
text-align: left;
}
h1, h2, h3 {
background: white none repeat scroll 0 0;
color: #005a9c;
}
h1 {
font: 170% sans-serif;
}
h2 {
font: 140% sans-serif;
}
h3 {
font: 120% sans-serif;
}
h4 {
font: bold 100% sans-serif;
}
h5 {
font: italic 100% sans-serif;
}
h6 {
font: small-caps 100% sans-serif;
}
.hide {
display: none;
}
div.head {
margin-bottom: 1em;
}
div.head h1 {
clear: both;
margin-top: 2em;
}
div.head table {
margin-left: 2em;
margin-top: 2em;
}
p.copyright {
font-size: small;
}
p.copyright small {
font-size: small;
}
@media screen {
a[href]:hover {
background: #ffa none repeat scroll 0 0;
}
}
pre {
margin-left: 2em;
}
dt, dd {
margin-bottom: 0;
margin-top: 0;
}
dt {
font-weight: bold;
}
ul.toc, ol.toc {
list-style: outside none none;
}
@media speech {
h1, h2, h3 {
}
.hide {
}
p.copyright {
}
dt {
}
}
</style><!--[if lt IE 9]><script src='https://www.w3.org/2008/site/js/html5shiv.js'></script><![endif]--></head><body id="respecDocument" role="document" class="h-entry"><div id="respecHeader" role="contentinfo" class="head">
<h1 class="title p-name" id="title" property="dcterms:title">Touch Accessibility Proposal & Discussion</h1>
<h2 id="w3c-editor-s-draft-12-february-2015">Note: this is an internal rough proposal to the mobile task force, not for re-use outside of the Mobile TF</h2>
<p>This is the supplement to David MacDonald's <a href="http://davidmacd.com/blog/mobile-tf-proposal.html">proposed discussion</a> and reorganization of <a href="http://www.w3.org/TR/mobile-accessibility-mapping/">Mobile Accessibility: How WCAG 2.0 and Other <abbr title="World Wide Web Consortium">W3C</abbr>/WAI Guidelines Apply to Mobile</a>. The purpose of this reorganization is to:</p>
<ol>
<li> List possible WCAG Guidelines, Success Criteria, and Techniques, and capture important discussions about them.</li>
<li> Document which advice cannot become Success Criteria or sufficient techniques in its current wording </li>
<li>Begin discussion about whether we can adapt these non-SC advisory recommendations into Success Criteria format, or leave them as advisory (or best practices), or Sufficient Techniques for existing Success Criteria. <em><strong>We understand that almost no one follows advisory or best practice advice.</strong></em></li>
<li>Turn the information into a form digestible as a Normative Extension Spec for WCAG 2</li></ol>
<h3>New proposed Guideline</h3>
<p><strong>Note:</strong> All proposed new guidelines and Success Criteria are numbered as to where they are proposed in WCAG 2 (that's why their numbers don't have 3.x as per this section)</p>
<div class="guideline"><strong>Guideline 2.5 Touch and Pointer Accessible: </strong>Make it easier for users to operate touch and pointer functionality. (updated 2015-09-10)</div>
<h4>Lively discussion on list. Here it is with redundancies etc. removed</h4>
<p class="blue"> <strong>Patrick Lauke: </strong>"touch" is not strictly a "mobile" issue. There are already many devices (2-in-1 tablet/laptops, desktop machines with external touch-capable monitors, etc) beyond the mobile space which include touch interaction. So, a fundamental question for me would be: would these extensions be signposted/labelled as being "mobile-specific", or will they be added to WCAG 2 core in a more general, device-agnostic manner? Further, though I welcome the addition of SCs relating to touch target size and clearance, I'm wondering why we would not also have the equivalent for mouse or stylus interfaces...again, in short, why make it touch-specific, when in general the SCs should apply to all "pointers" ("mouse cursor, pen, touch (including multi-touch), or other pointing input device", to borrow some wording from the Pointer Events spec <a href="http://www.w3.org/TR/pointerevents/" rel="noreferrer" target="_blank">http://www.w3.org/TR/pointerevents/</a>)? </p>
<p class="blue"> <strong>Detlev: </strong>Hi Patrick, I didn't intend this first draft to be restricted to
touch-only devices - just capturing that input mode. It's certainly
good to capture input commonalities where they exist (e.g., activate
elements on touchend/mouseup)<br>
</p>
<p class="blue">
<strong>Patrick:</strong> Or, even better, just relying on the high-level focus/blur/click ones (though even for focus/blur, most touch AT don't fire them when you'd expect them - see <a href="http://patrickhlauke.github.io/touch/tests/results/#mobile-tablet-touchscreen-assistive-technology-events" rel="noreferrer" target="_blank">http://patrickhlauke.github.io/touch/tests/results/#mobile-tablet-touchscreen-assistive-technology-events</a> and particularly <a href="http://patrickhlauke.github.io/touch/tests/results/#desktop-touchscreen-assistive-technology-events" rel="noreferrer" target="_blank">http://patrickhlauke.github.io/touch/tests/results/#desktop-touchscreen-assistive-technology-events</a> where none of the tested touchscreen AT trigger a focus when moving to a control)
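<p>Patrick's suggestion can be sketched in script. A minimal illustration (the helper name is an assumption, not from the proposal): bind activation to the high-level <code>click</code> event, which browsers synthesize for mouse, keyboard, and a touchscreen AT's "double-tap to activate", rather than to low-level touch events that AT may never fire.</p>

```javascript
// Sketch: rely on the high-level 'click' event for activation.
// Browsers fire 'click' for mouse, for keyboard activation of native
// controls, and for touchscreen AT "double-tap to activate", whereas
// touchstart/touchend may never fire under a screen reader.
function addActivationListener(target, handler) {
  // One listener covers mouse, touch, keyboard and AT activation.
  target.addEventListener('click', handler);
}

// Demonstration with the standard EventTarget API
// (available in browsers and in modern Node.js):
const button = new EventTarget();
let activations = 0;
addActivationListener(button, () => { activations += 1; });
button.dispatchEvent(new Event('click'));
console.log(activations); // 1
```

In a real page the target would be a focusable element such as a <code>&lt;button&gt;</code>, so the same handler is reachable by every input modality.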
<p class="blue"><strong>Jonathan Avila:</strong> Regarding touch start and end -- we are thinking of access without AT by people with motor impairments who may tap the wrong control before sliding to locate the correct control. This is new and different than sc 3.2.x. I understand and have seen what you say about focus events and no key events so that is a separate matter to address</p>
<p class="blue" ><strong>Detlev Fischer:</strong> - but then there are touch-specific
things, not just touch target size as mentioned by Alan, but also<br>
touch gestures without mouse equivalent. Swiping - split-tapping -
long presses - rotate gestures - cursed L-shaped gestures, etc.<br>
<br>
<strong>Patrick:</strong> It's probably worth being careful about distinguishing between gestures that the *system / AT* provides, and which are then translated into high-level events (e.g. swiping left/right which a mobile AT will interpret itself and move the focus accordingly) and gestures that are directly handled via JavaScript (with touch and pointer events specific code) - also keeping in mind that the latter can't be done by default when using a touchscreen AT unless the user explicitly triggers some form of gesture passthrough.</p>
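<p>The distinction matters in code: gestures recognised by script from raw coordinates are exactly the ones a touchscreen AT user cannot perform without a pass-through gesture. A minimal sketch of such script-level recognition (names and the 30px threshold are illustrative assumptions), which therefore always needs an ordinary focusable control as a fallback:</p>

```javascript
// Sketch: classify a horizontal swipe from its start and end points.
// A gesture detected like this is invisible to a touchscreen screen
// reader, which intercepts the touches for its own navigation, so the
// same action must also be exposed as a plain activatable control.
function classifySwipe(startX, startY, endX, endY, minDistance = 30) {
  const dx = endX - startX;
  const dy = endY - startY;
  // Require a mostly-horizontal movement of at least minDistance px.
  if (Math.abs(dx) < minDistance || Math.abs(dx) < Math.abs(dy)) {
    return null; // not a horizontal swipe
  }
  return dx > 0 ? 'right' : 'left';
}

// Wiring (browser only): record coordinates in 'touchstart' and
// 'touchend' handlers and pass them to classifySwipe(); keep visible
// "next"/"previous" buttons as the AT-compatible equivalent.
console.log(classifySwipe(100, 50, 10, 55)); // 'left'
```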
<p class="blue"><strong>Detlev:</strong> That's a good point. Thinking of the perspective of an AT user carrying out an accessibility test, or even any non-programmer carrying out a heuristic accessibility evaluation using browser toolbars and things like Firebug, I wonder what is implied in making that distinction, and how it might be reflected in documented test procedures.<br>
Are we getting to the point where it becomes impossible to carry out accessibility tests without investigating in detail the chain of events fired?<br>
<br>
<strong>Patrick: </strong>For the former, the fact that the focus is moved sequentially using a swipe left/right rather than a TAB/SHIFT+TAB does not cause any new issues not covered, IMHO, by the existing keyboard-specific SCs if instead of keyboard it talked in more input agnostic terms. Same for not trapping focus etc.</p>
<p class="blue"><strong>Detlev:</strong> One important difference being that swiping on mobile also gets to non-focusable elements. While a script may keep keyboard focus safely inside a pop-up window, a SR user may swipe beyond that pop-up unawares (unless the page background has been given the aria-hidden treatment, and that may not work everywhere as intended). Also, it may be easier to reset focus on a touch interface (e.g. 4-finger tap on iOS) compared to getting out of a keyboard trap if a keyboard is all you can use to interact.<br>
<br>
<strong>Patrick: </strong>For the latter, though, I agree that this would be touch (not mobile though) specific...and advice should be given that custom gestures may be difficult/impossible to even trigger for certain users (even for single touch gestures, and even more so for multitouch ones).</p>
<p class="blue"><strong>Detlev:</strong> Assuming a non-expert perspective (say, product manager, company strategist), when looking at Principle 2 Operable it would be quite intelligible to talk about<br>
2.1 Keyboard Accessible<br>
2.5 Touch Accessible<br>
2.6 Pointer Accessible (It's not just Windows and Android with KB, Blackberry has a pointer too)<br>
2.7 Voice Accessible<br>
<br>
While the input modes touch and pointer share many aspects and (as you show) touch events are actually mapped onto mouse events, there might be enough differences to warrant different Guidelines.<br>
For example, you are right that there is no reason why target size and clearance should not also be defined for pointer input, but the actual values would probably be slightly lower in a "Pointer accessible" Guideline. A pointer is a) more pointed (sigh) and therefore more precise and b) does not obliterate its target in the same way as a finger tip.<br>
Another example: A SC for touch might address multi-touch gestures, mouse has no swipe gesture. SCs under Touch accessible may also cover two input modes: default (direct interaction) and the two-phase indirect interaction of focusing, then activating, when the screenreader is turned on.<br>
<br>
Of course it might be more elegant to just make Guideline 2.1 input mode agnostic, but I wonder whether the resulting abstraction would be intelligible to designers and testers. I think it would be worthwhile to take a stab at *just drafting* an input-agnostic Guideline 2.1 "Operable in any mode" and draft SC below, to get a feel for what tweaking core WCAG might look like, and how Success criteria and techniques down the line may play out. Interfaces catering for both mouse and touch input often lead to horrible, abject usability. Watch low vision touch users swear at Windows 8 (metro) built-in magnification via indirect input on sidebars (an abomination probably introduced because mice don't know how to pinch-zoom). Watch Narrator users struggle when swipe gestures get too close to the edge and unintentionally reveal the charms bar or those bottom and top slide-in bars in apps. Similar things happen when Blackberry screenreader users unintentionally trigger the common swipes from the edges which BB thought should be retained even with screenreader on. And finally, watch mouse users despair as they cannot locate a close button in a metro view because it is only revealed when they move the mouse right to the top edge of the screen.</p>
<p class="blue"><strong>Mike Pluke:</strong> I’d personally prefer something like “character input interface” [instead of keyboard interface] to further break the automatic assumption that we are talking about keyboards or other things with keys on them.
<p class="blue"><strong>Gregg Vanderheiden</strong>: This note is great
<ul class="blue">
<li>Note 1: A keyboard interface allows users to provide keystroke input to programs even if the native technology does not contain a keyboard. </li>
</ul>
<p class="blue">I would add a note 2</p>
<ul class="blue">
<li class="blue">Note 2: full control from a keyboard interface allows control from any input modality since it is modality agnostic. It can allow control from speech, Morse code, sip and puff, eye gaze, gestures, an augmentative communication aid, or any other device or software program that can take (any type) of user input and convert it into keystrokes. Full control from a keyboard interface is to input what text is to output. Text can be presented in any sensory modality. Keyboard interface input can be produced by software using any input modality.</li>
</ul>
<p class="blue">RE “character input interface”</p>
<ul class="blue">
<li class="blue">
we thought of that but you need more than the characters on the keyboard. You also need arrow keys and return and escape etc. </li>
<li class="blue">we thought of encoded input (but that is greek) and ascii (but that is not international) or UNICODE (but that is undefined and really geeky) </li>
</ul>
<p class="blue"><strong>Jonathan: </strong>While I agree the term [Keyboard Interface] is misleading, in desktop terms testing with a physical keyboard is one good way to make sure the keyboard interface is working. Even on mobile devices, supporting a physical keyboard through the keyboard interface is something that helps people with disabilities and is an important test. <em><strong> It just doesn’t go far enough.</strong></em>
<p class="blue"><strong>David MacDonald:</strong> +1 to "it doesn't go far enough."
However, </p></div>
<div class="comment">
<strong>David Macdonald Summary Comment: </strong>I think this discussion demonstrates that requiring <strong>ALL</strong> functionality to work with touch will be difficult. I don't think that we should require that "all functionality" be available via touch because:
<ol>
<li>It may not be possible. </li>
<li>It may not apply in ALL cases to ALL mobile sites. </li>
<li>I think we have to assume that normal usability practices will ensure that mobile apps will be primarily touch functioning.</li>
</ol>
However, the accessibility gap is that developers don't ensure that someone running assistive technology can ALSO operate the system with touch. This is 2.5.4 below. </div>
<p class="blue"><strong>David: </strong>How about this for a Guideline under which all the other touch events can be placed?</p>
<div class="guideline"><strong>Guideline 2.5 Touch Accessible:</strong> Make it easier for users to operate touch functionality (Understanding)</div>
<p class="blue"><strong>David: </strong>This provides a nice wide guideline under which we can place our Success Criteria and Techniques and it echoes the language of the existing Guidelines. (i.e., Guideline 1.4)</p>
<p class="blue"><strong>Patrick: </strong>Sure, but this SC would be relegated into the "touch/mobile" extension to WCAG, which somebody designing a desktop/mouse site may look into (again, going back to the fundamental problem of WCAG extension, but I digress).
<p class="blue"><strong>David:</strong> WCAG 2 is a stable document, entrenched in many jurisdictional laws, which is a good thing. So far, unless something drastically changes in consensus or in the charter approval, the extension model is what we are looking at. However, we may want to explore the idea of incorporating all these recommendations into failure techniques or sufficient techniques for *existing* Success Criteria in WCAG core, which would ensure they get first class treatment in WCAG proper. This would ensure that they are not left out of jurisdictions that didn't add the extension. But some of the placement in existing Success Criteria could be pretty contrived. Most would probably end up in 1.3.1 (like everything else).
<div class="comment"><strong>David Summary:</strong> I think it is worth carefully weighing the pros and cons of rolling these into WCAG core vs. adding Success Criteria and Guidelines in this extension.
</div>
<h3>New Proposed <strong>Success Criteria</strong> under this proposed Guideline </h3>
<div class="newsuccesscriteria"><strong>2.5.1 Touch:</strong> All functionality of the content is operable through touch gestures. (Level A)</div>
<div class="prefix">
<blockquote>
<p class="blue"><strong>David: </strong>Is this applicable to all mobile sites? See comment above.</p>
<p class="blue"><strong>Jonathan:</strong> But we still need an exception like we have for keyboard access for things like drawing and signatures, etc. So we need to take into account timing and paths, etc. Except when the touch interaction requires specific timing or path...Perhaps pulling out similar language that is related to the keyboard success criteria about timing and paths. </p>
<div class="newsuccesscriteria"><strong>2.5.1 Touch: </strong>All <a href="http://www.w3.org/TR/UNDERSTANDING-WCAG20/keyboard-operation-keyboard-operable.html#functiondef" target="_blank">functionality</a> of the content is operable through a <a href="http://www.w3.org/TR/UNDERSTANDING-WCAG20/keyboard-operation-keyboard-operable.html#keybrd-interfacedef" target="_blank">touch interface</a> without requiring specific timings for individual touch gestures, except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints. (Level A)</div>
<p class="blue"> <strong>David:</strong> I think we are overreaching by requiring EVERYTHING to work with touch. I think we want to stick with requiring that anything that DOES operate via touch can be used by a variety of users, including those with touch-based screen readers.
I think we can drop this and be more granular as with the other Success Criteria below.
<p class="blue"><strong>Gregg:</strong> Not possible or practical: an Apple Watch – all the physical controls on the side have to also be operable from the screen? Or do you mean that a web page designer needs to provide their own keyboard in their content for any keyboard input on their page? I’m not sure this makes sense. If you are relying on the keyboard interface for input of text – then it is not all via touch – some is via the keyboard interface. And some mobile devices don’t have an onscreen keyboard (they have a physical one) – so all by touch means you AGAIN WOULD have to provide all the input with a keyboard built into each web page or you would fail this SC<br>
</blockquote>
</div>
<div class="newsuccesscriteria"> <strong>2.5.2 Touch Target Size:</strong> One dimension of any touch target measures at least 9 mm except when the user has reduced the default scale of content. (Level AA)</div>
<blockquote>
<p class="blue"><strong>Gregg: </strong>What good is one dimension?? If you have any physical disability you need to specify both dimensions.
ALSO – for what size screen. An Apple Watch? An iPhone 4? All buttons would have to be huge in order to comply on very small screens – and you don’t know what size screen – so you can’t use absolute measures unless you assume smallest screen.
</p>
<p class="blue"><strong>Patrick:</strong> - Gregg's question about 9mm - it would be good to clarify if we mean *physical* mm, or CSS mm. Note that many guidelines (such as Google's guidelines for Android, or Microsoft's app design guidelines) use measurements such as dips (device-independent pixels) precisely to avoid having to deal with differences in actual physical device dimensions (as it's the device/OS' responsibility to map its actual physical size to a reasonable dips measure, so authors can take that as a given that is reasonably uniform across devices). - on a more general level, I questioned why there should be an SC relating to target size for *touch*, but that there's no equivalent SC for mouse or stylus interaction? </p>
<p class="blue"><strong>Jon:</strong> My guess is that touch target size would need to be larger than a mouse pointer touch area -- so the touch target would catch those as well.</p>
<p class="blue"><strong>Patrick:</strong> Too small a target size can be just as problematic for users with tremors, mobility impairments, reduced dexterity, etc.</p>
<p class="blue"><strong>Jon: </strong>That's exactly who this SC is aimed at. This SC is not specifically aimed at screen reader users or low vision users but people with motor impairments.</p>
<p class="blue"><strong>Patrick:</strong> I know it's not the remit of the TF, but I'd argue that this is exactly the sort of thing that would benefit from being a generalised SC applicable to all manner of pointing interaction (mouse, pen, touch, etc). Or is the expectation that there will be a separate TF for "pen and stylus TF", "mouse interaction TF", etc? (these two points also apply to 2.5.5)<strong><br>
</strong></p>
</blockquote>
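<p>As a rough illustration of how such a check might look in practice, the sketch below tests both dimensions of a target against a minimum expressed in CSS pixels. The 48px default is borrowed from the Android guidance Patrick mentions and is an assumption here; the proposal's 9 mm figure would need a physical-units conversion that web content cannot reliably perform.</p>

```javascript
// Sketch: check a target against a minimum size in CSS pixels.
// Assumption: 48px default, loosely based on Android's dips guidance
// cited in the discussion above, not a value from the proposal.
function meetsMinimumTargetSize(widthPx, heightPx, minPx = 48) {
  // Both dimensions are checked, per Gregg's objection that a single
  // dimension is not enough for users with motor impairments.
  return widthPx >= minPx && heightPx >= minPx;
}

console.log(meetsMinimumTargetSize(48, 48)); // true
console.log(meetsMinimumTargetSize(60, 20)); // false
```

In a browser the width and height would come from <code>element.getBoundingClientRect()</code>.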
<div class="newsuccesscriteria"> <strong>2.5.3 Single Taps and Long Presses Revocable:</strong> Interface elements that require a single tap or a long press as input will only trigger the corresponding event when the finger is lifted inside that element. (Level A)</div>
<blockquote>
<p class="blue">
<strong> Patrick:</strong> I like the concept, but the wording that follows (requiring that things only trigger if the touch point is still within the same element) is overly specific/limiting in my view. Also, it is partly out of the developer's control. For instance, in current iOS and Android, touch events have a magic "auto-capture" behavior: you can start a touch sequence on an element, move your touch point outside of the element, and release it...it will still fire touchmove/touchend events (but not click, granted). Pointer Events include an explicit feature to capture pointers and to emulate the same behavior as touch events. However, it would be possible to make taps/long presses revocable by, for instance, prompting the user with a confirmation dialog as a result of a tap/press (if the action is significant/destructive in particular). This would still fulfill the "revocable" requirement, just in a different way to "must be lifted inside the element".
In short: I'd keep the principle of "revocable" actions, but would not pin down the "the finger (touch point, whatever...keeping it a bit more agnostic) is lifted inside the element".</p>
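Patrick's two alternatives can be sketched with Pointer Events. This is a minimal illustration only: the element id, the confirmation text, and the handler wiring are hypothetical, and the geometry helper is separated out so the idea is clear (the action fires only if the pointer is released inside the target, or after an explicit confirmation).

```javascript
// Sketch of a "revocable" tap: the action only fires if the pointer is
// released inside the target's bounding box, so a user can cancel by
// sliding their finger off the element before lifting it.

// Pure helper: is a point inside a rectangle? (testable without a DOM)
function isPointInRect(x, y, rect) {
  return x >= rect.left && x <= rect.right &&
         y >= rect.top && y <= rect.bottom;
}

// DOM wiring (hypothetical element id; only runs where a DOM exists):
if (typeof document !== 'undefined') {
  const button = document.getElementById('delete-button'); // hypothetical
  if (button) {
    button.addEventListener('pointerup', (e) => {
      const rect = button.getBoundingClientRect();
      if (isPointInRect(e.clientX, e.clientY, rect)) {
        // Second "revocable" approach: confirm before a destructive action.
        if (window.confirm('Delete this item?')) {
          // perform the action here
        }
      }
    });
  }
}
```

Either mechanism alone would arguably satisfy the "revocable" principle; combining them is belt-and-braces for destructive actions.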
<p class="blue"><strong>Gregg: </strong>This will make interfaces unusable by some people who cannot reliably land and release within the same element. Also it is only a relatively small number that know about this. Also if someone hits something by mistake – they usually don’t have the motor control to use this approach. Better is the ability to reverse or undo. I think that is already in WCAG though – with caveats. <br>
</p>
</blockquote>
<div class="newsuccesscriteria"> <strong>2.5.4 Modified Touch:</strong></strong> When touch input behavior is modified by built-in assistive technology, all functionality of the content is still operable through touch gestures. (Level A)</div>
<blockquote>
<p class="blue"><strong>Gregg: </strong> You have no control of how it is changed – so how can you be expected to have anything still work? </p>
<p class="blue"><strong>David MacDonald:</strong> How about this:
<div class="newsuccesscriteria"><strong>2.5.4 Touch:</strong> For pages and applications that support touch, all functionality of the content is operable through touch gestures with and without system assistive technology activated. (Level A)</div>
</p>
<p class="blue"><strong>David:</strong> In the understanding document for this SC we would explain that touch gestures with VO on could be and probably would be the VO equivalent to the standard gestures used with VO off.</p>
<p class="blue"><strong>Patrick:</strong> As it's not possible to recognise gestures when VoiceOver is enabled, as VO intercepts gestures for its own purposes
(similar to how desktop AT intercept key presses) unless the user
explicitly uses a pass-through gesture, does this imply that interfaces
need to be made to also work just with an activation/double-tap ? i.e.,
does double-tap count in this context as a "gesture"? If not, it's not
technically possible for web pages to force pass-through (no equivalent
to role="application" for desktop/keyboard handling).<br>
<br>
<strong>David: </strong>VO uses gestures for its own purposes and then adds gestures to
substitute for those it replaced i.e., VO 3 finger swipe= 1 finger
swipe. I'm suggesting that we require everything that can be accomplished with VO
off with gestures can be accomplished with VO on.<br>
</p>
<p class="blue"><strong>Patrick: </strong>Not completely, though. If I build my own gesture recognition from basic principles (tracking the various touchstart/touchmove/touchend events), the only way that gesture can be passed on to the JS when VO is activated is if the user performs a pass-through gesture, followed by the actual gesture I'm detecting via JS. Technically, this means that yes, even VO users can make any arbitrary gesture detected via JS, but in practice, it's - in my mind - more akin to mouse-keys (in that yes, a keyboard user can nominally use any mouse-specific interface by using mouse keys on their keyboard, just as a touch-AT user can perform any custom gesture...but it's more of a last resort, rather than standard operation). Also, not sure if Android/TalkBack, Windows Mobile/Narrator have these sorts of pass-through gestures (even for iOS/VO, it's badly documented...no mention of it that I could find on any official Apple sites). In short, to me this still makes it lean more towards providing all functionality in other, more traditional ways (which would then also work for mobile/tablet users with an external keyboard/keyboard-like interface). Gestures can be like shortcuts for touch users, but should not replace more traditional buttons/widgets, IMHO. This may be a user setting perhaps? Choose if the interface should just rely on touch gestures, or provide additional focusable/actionable controls?
</p>
<p class="blue"><strong>Jonathan:</strong> I also worry that people might try to say that pass through gestures would meet this requirement.</p>
<p class="blue"><strong>David:</strong> How could we fix this concern? I think WCAG 2.1.1 already covers the need for keyboard use (without mouseKeys). We could maybe plug the hole so the pass through gesture is not relied on by the author the same way we do in 2.1.1 not relying on MouseKeys..</p>
<p class="blue"> <strong>Patrick:</strong> does this imply that interfaces need to be made to also work just with an activation/double-tap ? i.e., does double-tap count in this context as a "gesture"?
<p class="blue"><strong>Jonathan: </strong>In theory I think this would benefit people from prosthetics too. For example, many apps support zoom by double tapping without requiring a pinch. You should be able to control all actions from touch (e.g. through an API) and also through the keyboard. But I think it would be too constrictive to require on tap, double tap, long tap, etc. Since screen readers and the API support actions through rotors and other gestures it would seem that API based and keyboard access would be sufficient. But you bring up a good point that while this might make sense on native -- but mobile web apps don't have a good way without Indie UI to expose actions to the native assistive technologies. This is a key area that needs to be addressed by other groups and perhaps may be addressed by other options such as WAPA -- but we do need to be careful and perform some research as the abilities we need may not be yet supported or part of a mature enough specification.</p>
<p class="blue"><strong>David:</strong> It would be great to operate everything through taps... even creating a Morse code type of thing, where all gestures could be done with taps for those who can't swipe, but it would require a lot more functionality than is currently available. I think we should park it, and perhaps provide it as a best practise technique under this Success Criteria.</p>
<p class="blue"><strong>Gregg:</strong> do they have a way to map screen readers gestures [to avoid] colliding special gestures in apps? this was not to replace use of gestures — but to provide a simple alternate way to get at them if you can’t make them (physically can’t or can’t because of collisions) </p>
<p class="blue"><strong>Patrick:</strong> Not to my knowledge. iOS does have some form of gesture recording with Assistive Touch, but I can't seem to get it to play ball in combination with VoiceOver, and in the specific case of web content (though this may be my inexperience with this feature). On Android/Win Mobile side, I don't think there's anything comparable, so certainly no cross-platform, cross-AT mechanism.</p>
<p class="blue"><strong>Jonathan: </strong>This is only one aspect of the situation. It’s not so much as colliding gestures rather than a collision of how the touch interface is reconfigured to trap gestures combined with the issue of not being able to see where the gesture is being drawn. For iOS native apps, there is:<u></u><u></u></p>
<ul class="blue">
<li><u></u><u></u>an actions API that allows apps to associate custom actions with an actions rotor or assign a default action to a magic tap gesture<u></u><u></u></li>
<li><u></u>a pass through gesture –tap and hold and then perform the gestures.<u></u><u></u></li>
<li><u></u>A trait that can be assigned that will allow direct UI interaction with the element – allowing screen reader users the ability to sign there name, etc.</li>
</ul>
<p class="blue">Take for example a hypothetical knob on a webpage. Without a screen
reader I can turn that knob to specific settings. As a developer I can
implement keystrokes, let’s say control+1, control+2, etc. for the
different settings. I have met the letter of the success criteria by
providing a keyboard interface through creating JavaScript shortcut
keystroke listeners. In practical reality though as a mobile screen<br>
reader user who does not carry around a keyboard I have no way to
trigger those keystrokes.<br>
</p>
<p class="blue"> <strong>Patrick:</strong> Actually, it gets worse than that. As I noted previously, not all mobile/tablet devices with a paired keyboard actually send keyboard (keydown, keypress) events all the time. In iOS, with a paired keyboard (but no VO enabled), the keyboard is completely inactive except when the user is in a text entry field or similar (basically, it only works in the same situations in which iOS' on-screen keyboard would be triggered). When VO is enabled, the keyboard still only sends keyboard events when in a text entry field etc. In all other situations, every keystroke is intercepted by VO (and again, there is no mechanism to override this with role="application" or similar).
In short, for iOS you can't rely on anything that listens for keydown/keypress either. In Android, the situations is more similar to what would happen on desktop (from what I recall at least...would need to do some further testing) in that the keyboard always works/fires key events. Not had a chance to test Windows Mobile with paired keyboard yet, but I suspect it works in a similar way.
<p class="blue"><strong>David: </strong> We never envisioned in the years 2000-2008 when we were tying up WCAG people who are blind using a flat screen to operate a mobile device. I think it was a huge leap forward for our industry, and we need to foster their relationship to their devices, and run with it. Keyboard requirements are in place, they are not going away. <em><strong>Our job now is to look at the gaps, and see if there is anything we can do to ensure these users can continue to use their flat screens which has levelled the playing field for the blind, and to foster authoring that doesn't screw that up. </strong></em><strong></strong>
<p class="blue">Here's a rewrite with addressing the concerns.
<div class="newsuccesscriteria"><strong>2.5.4 Touch:</strong> For pages and applications that support touch, all functionality of the content is operable through touch gestures with and without system assistive technology activated, without relying on pass through gestures on the system (Level A) </div>
<p class="blue"><strong>Patrick:</strong> As said, when touch AT is running, all gestures are intercepted by the AT at the moment (unless you mean taps?). And only iOS, to my knowledge, has a passthrough gesture (which is not announced/exposed to users, so a user would have to guess that if they tried it, something would then happen).<br>
If the intention was to also mean "taps", this is lost on me and possibly the majority of devs, as "gesture" usually implies a swipe, pinch, rotation, etc, which are all intercepted. [ED: skimming towards the end of the document, I see that in 3.3 Touchscreen Gestures "taps" are listed here. This, to me - and I'd argue most other devs - is confusing...I don't normally think of a "tap" as a "gesture"] So this SC (at least the "touch gestures with ... assistive technology activated") part is currently technically *impossible* to satisfy (for anything other than taps), except by not using gestures or by providing alternatives to gestures like actionable buttons.<br>
This can be clarified in the prose for the SC, but perhaps a better way would be to drop the word "gestures", and then the follow-up about passthrough, leaving a much simpler/clearer:</p>
<blockquote>
<p class="blue"><strong>"2.5.4 Touch: </strong>For pages and applications that support touch, all functionality of the content is operable through touch with and without system assistive technology activated (Level A)" </p>
</blockquote>
<p class="blue">I'm even wondering about the "For pages and applications that support touch" preamble...why have it here? Every other SC relating to touch should then also have it, for consistency? Or perhaps just drop that bit too?</p>
<blockquote>
<p class="blue"> <strong>"2.5.4 Touch: </strong>All functionality of the content is operable through touch with and without system assistive technology activated (Level A)"</p>
</blockquote>
<p class="blue">OR is the original intent of this SC to be in fact</p>
<blockquote>
<p class="blue"> <strong>"2.5.4 Touch:</strong> For pages and applications that support touch *GESTURES*, all functionality of the content is operable through touch gestures with and without system assistive technology activated, without relying on pass through gestures on the system (Level A)"</p>
</blockquote>
<p class="blue">is this about gestures? In that case, it's definitely technically impossible to satisfy this SC at all currently (see above), so I'd be strongly opposed to it.</p>
<p class="blue"><strong>Detlev:</strong> Maybe it's better to separate the discussion of terminology from the discussion of reworking the mobile TF Doc.<br>
I personally don't get why someone would choose to call swiping or pinching a gesture, but refuse to apply this term to tapping. What about double and triple taps? Taps with two fingers? Long presses? Split taps? To me, it makes sense to call *all* finger actions applied to a touch screen a gesture. I simply don't get why tapping would not count. Where do you draw the line, and why? A related issue is the distinction between touch gestures and button presses. With virtual (non-tactile, but fixed-position capacitive) buttons, you already get into a grey area. The drafted Guideline 2.5 "Touch Accessible: All functionality available via touch" probably needs to be expanded to account for devices with physical (both tactile and capacitive) device buttons. Which would mean something like </p>
<div class="newsuccesscriteria"><strong>Guideline 2.5 OR SC 2.5.4: </strong>On devices that support touch input, all functions are available vie touch or button presses also after AT is turned on (i.e. without the use of external keyboards). </div>
<p class="blue"><strong>Detlev</strong>: <span class="newsuccesscriteria">Not well put, but you get the idea.</span></p>
<p class="blue"><strong>David:</strong> I think when we say Touch, we mean all touch activities such as swipes, taps, gestures etc... anything you do to operate the page by <strong><em>touching</em></strong> it. Regarding gestures, all gestures are intercepted by VoiceOver. But all standard gestures are replaced by VoiceOver, unless the author does something dumb to break that. I think we need to, at a minimum, ensure that standard replacement gestures are not messed up. For instance: I recently tested a high profile app for a major sports event. It had a continuous load feature like twitter that kept populating as you scroll down with one finger. Turn on the VoiceOver and try the 3 finger equivalent of a one finger swipe to do a standard scroll and nothing happens to populate the page. The blind user has hit a brick wall. I think we have to ensure this type of thing doesn't happen on WCAG conforming things. </p>
</blockquote>
<hr>
<div data-tooltip="Hide expanded content" aria-label="Hide expanded content" id=":1vh" role="button" tabindex="0"></div>
<div class="newsuccesscriteria"><strong>2.5.5 Touch Target Clearance: </strong>The center of each touch target has a distance of at least 9 mm from the center of any other touch target, except when the user has reduced the default scale of content. (Level AA)<br>
<br>
</div>
<p class="blue"><strong>David</em></strong>: Isn't this the same as 2.5.2 </em>above (9 mm distance)</p><p class="blue"><strong>Gregg:</strong> This is essentiall 9x9 target center to center. The same problems as above. 9mm on what mobile device? </p>
<div class="newsuccesscriteria"><strong> 2.5.6 No Swipe Trap: </strong>When touch input behavior is modified by built-in assistive technology so that touch focus can be moved to a component of the page using swipe gestures, then focus can be moved away from that component using swipe gestures or the user is advised of the method for moving focus away. (Level A)</div>
<p class="blue"><strong>Gregg:</strong> Advised in an accessible way to all users? </p>
<div class="newsuccesscriteria"><strong>2.5.7 Pinch Zoom: </strong>Browser pinch zoom is not blocked by the page's viewport meta element so that it can be used to zoom the page to 200%. Restrictive values for user-scalable and maximum-scale attributes of this meta element should be avoided.</div>
<p class="blue"><strong>David</strong>: Have to fix "should be avoided" or send to advisory <br>
<br>
<strong class="blue">Gregg Comment:</strong> Maybe better as a failure of 1.4.4. FAILURE Blocking the zoom feature (pinch zoom or other) without providing some other method for achieving 200% magnification or better</p>
<p class="blue"><strong>Patrick: </strong>Just wondering if the fact that most mobile browsers (Chrome, Firefox, IE, Edge) provide settings to override/force zooming even when a page has disabled it makes any difference here? iOS/Safari is the only mainstream mobile browser which currently does not provide such a setting, granted. But what if that too did?<br>
</p>
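The restrictive viewport values named in the SC can be detected by inspecting the meta element's content string. A sketch (string parsing only, so it is not tied to any particular browser; the threshold of 2 corresponds to the 200% zoom in the SC):

```javascript
// Sketch: flag viewport meta content that would block pinch zoom or cap
// it below 200%. Checks "user-scalable" and "maximum-scale" as named in
// the proposed SC 2.5.7.
function blocksPinchZoom(content) {
  const parts = {};
  content.split(',').forEach((pair) => {
    const [key, value] = pair.split('=').map((s) => s.trim().toLowerCase());
    parts[key] = value;
  });
  if (parts['user-scalable'] === 'no' || parts['user-scalable'] === '0') return true;
  if ('maximum-scale' in parts && parseFloat(parts['maximum-scale']) < 2) return true;
  return false;
}

// In a page, the content string would come from:
//   document.querySelector('meta[name="viewport"]').content
```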
<div class="newsuccesscriteria"><strong>2.5.8 <span class="guideline">Device manipulation: </span></strong>When device manipulation gestures are provided, touch and keyboard operable alternative control options are available.</div><p><strong>Gregg: </strong>How is this different than “all must be keyboard operable”
This says if it is gesture – then it must be gesture and keyboard. So that looks the same as it must be keyboard. </p>
<p class="blue" ><strong>David:</strong> It adds "Touch".</p>
<h3>New Possible Guideline Changing Screen Orientation (Portrait/Landscape)</h3>
<div class="guideline"><strong>3.4 Flexible Orientation:</strong></strong> Ensure users can use the content in the orientation that suits their circumstances</div>
<p class="blue"><strong>Gregg: </strong>Ensure is a requirement. Is this always possible? </p > <h3>Possible New Success Criteria </h3>
<div class="newsuccesscriteria"><strong>3.4.1 Expose Orientation: </strong>Changes in orientation are programmatically exposed to ensure detection by assistive technology such as screen readers.</div>
<p class="blue"><strong>Gregg:</strong> This is not a web content issue but a mobile device issue.
Hmmm how about alert?
Again – if it can’t always be possible – it shouldn't be an SC. Maybe it is always possible? ???? Home screens?
</h3>
<p class="blue"><strong>Patrick: </strong> Agree with Gregg this is not a web content issue as currently stated. Also, not every orientation change needs something like an alert to the user...what if nothing actually changes on the page when switching between portrait and landscape - does an AT user need to know that they just rotated the device? Perhaps the intent here is to ensure web content notifies the user if an orientation change had some effect, like a complete change in layout (for instance, a tab navigation in landscape turning into an accordion in portrait; a navigation bar in landscape turning into a button+dropdown in portrait)? If so, this needs rewording, along similar lines to a change in context?
<p class="blue"><strong>Jon: </strong>Yes, that is the intention. For example, if you change from landscape to portrait a set of links disappears and now there is a button menu instead. Or controls disappear or appear depending on the orientation.
<h3>New Possible techniques for Success Criteria 3.2.3</h3>
<div class="technique"> If the navigation bar is collapsed into a single icon, the entries in the drop-down list that appear when activating the icon are still in the same relative order as the full navigation menu. </div>
<p class="blue"><strong>Gregg:</strong> Good to focus this as technique for WCAG. </p>
<div class="technique"> A Web site, when viewed on the different screen sizes and in different orientations, has some components that are hidden or appear in a different order. The components that show, however, remain consistent for any screen size and orientation. </div>
<h3>New Techniques for 3.3.2 Labels or Instructions</h3>
<div class="technique">Therefore, instructions (e.g. overlays, tooltips, tutorials, etc.) should be provided to explain what gestures can be used to control a given interface and whether there are alternatives. </div>
<p class="blue"><strong>Gregg:</strong> Good – advisory techniques. </p> <h3 resource="#h-zoom-magnification">Advisory Technique for Grouping operable elements that perform the same action (4.4 in mobile doc)</h3>
<div class="technique">When multiple elements perform the same action or go to the same destination (e.g. link icon with link text), these should be contained within the same actionable element. This increases the touch target size for all users and benefits people with dexterity impairments. It also reduces the number of redundant focus targets, which benefits people using screen readers and keyboard/switch control. </div><p class="blue"><strong>Gregg:</strong> Good technique for WCAG
Oh, this is the same as H2, no? Are you just suggesting adding this text to H2? Good idea. </p>
<h3 resource="#h-zoom-magnification">4.5 Provide clear indication that elements are actionable</h3>
<h4>New Guideline</h4>
<div class="guideline">1.6 Make interactive elements distinguishable</div>
<h4>New Success Criteria</h4>
<div class="newsuccesscriteria"><strong>1.6.1 Triggers Distinguishable:</strong> Elements that trigger changes should be sufficiently distinct to be clearly distinguishable from non-actionable elements (content, status information, etc). </div>
<p class="blue"><strong>Gregg: </strong>Just as true for non-mobile
BUT - not testable. What does “sufficiently distinct” mean. Or “Clearly distinguishable”
WCAG requires that they be programmatically determined – so users could use AT to make the very visible (much more so than designers would ever permit)
But I’m not sure how you can create something testable out of this
Make it an ADVISORY TECHNIQUE ??? </p>
<h4>New Sufficient Techniques for 1.6.1 </h4>
<div class="technique"> <strong>Conventional Shape:</strong> Button shape (rounded corners, drop shadows), checkbox, select rectangle with arrow pointing downwards </div>
<div class="technique"> <strong>Iconography:</strong> conventional visual icons (question mark, home icon, burger icon for menu, floppy disk for save, back arrow, etc) </div>
<div class="technique"> <strong>Color Offset:</strong> shape with different background color to distinguish the element from the page background, different text color </div>
<div class="technique"> <strong>Conventional Style: </strong>Underlined text for links, color for links </div>
<div class="technique"> <strong>Conventional positioning:</strong> Commonly used position such as a top left position for back button (iOS), position of menu items within left-aligned lists in drop-down menus for navigation </div><p class="blue"><strong>Gregg: </strong>Not sure how these are sufficient by themselves to meet the above. This has to do with making things findable or understandable – not distinguishable. </p>
<h3 resource="#h-zoom-magnification">Set the virtual keyboard to the type of data entry required 5.1</h3>
<h4 resource="#h-set-the-virtual-keyboard-to-the-type-of-data-entry-required">New technique under 1.3.1 Info and Relationships</h4>
<div class="technique"><strong>Data Mask:</strong> Set the virtual keyboard to the type of data entry required</div><p class="blue"><strong>Gregg:</strong> Good advisory technique.</p>
<h4>New Success Criteria under 4.1</h4>
<div class="newsuccesscriteria"><strong>4.1.4 Non-interference of AT: </strong>Content does not interfere with default functionality of platform level assistive technology </div>
<p class="blue"><strong>Gregg:</strong> How would content know what this was? For example – if a page provided self voicing this might interfere with screen reader on platform. So no page can ever self voice? </p>
<h3><span class="secno">Advisory techniques: 2.2 </span> Zoom/Magnification </h3></body><body role="document" class="h-entry"><p class="blue"><section property="bibo:hasPart" resource="#zoom-magnification" typeof="bibo:Chapter" id="zoom-magnification"></section></p>
<h3><span class="secno">Advisory techniques: 2.2 </span> Zoom/Magnification </h3>
<div class="advisorysuccesscriteria">Support for system fonts that follow platform level user preferences for text size. </div>
<br>
(<strong><em>Rational for not being sufficient technique: </em></strong>can this be done?)<br>
<p class="blue"><strong>Gregg Comment:</strong>This looks like a technique for 1.4.4.---- but you should say “to at least 200%” or else it could not be sufficient </p>
<div class="advisorysuccesscriteria">Provide on-page controls to change the text size. <br>
<br>
(<strong><em>Rational for not being sufficient technique: </em></strong>best practice but usually not big enough, redundant with other zooming, extra work)</div>
<h3>Advisory techniques: Contrast (2.3)</h3>
<div class="advisorysuccesscriteria">The default point size for mobile platforms might be larger than the default point size used on non-mobile devices. When determining which contrast ratio to follow, developers should strive to make sure to apply the lessened contrast ratio only when text is roughly equivalent to 1.2 times bold or 1.5 times (120% bold or 150%) that of the default platform size. <br>
<br>
(<strong><em>Rational for not being SC: </em></strong>"roughly equivalent" is not testable. Can we settle on something determinable and testable?<br>
<strong>Gregg Comment: </strong>How does an author know that someone will be viewing their content on a mobile device? Or what size mobile device? A table vs an iphone 4 I mega different. Not sure how you can make an SC out of this.
<section property="bibo:hasPart" resource="#other-w3c-wai-guidelines-related-to-mobile" typeof="bibo:Chapter" id="other-w3c-wai-guidelines-related-to-mobile">
<section property="bibo:hasPart" resource="#atag-2.0-and-accessible-mobile-authoring-tools" typeof="bibo:Chapter" id="atag-2.0-and-accessible-mobile-authoring-tools"> </section>
</section>
</div>
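The size threshold described in the technique above can be expressed as a simple check. This is a sketch only: "platform default size" is whatever unit the platform reports, and the 1.2x-bold / 1.5x factors are taken directly from the draft text:

```javascript
// Sketch of the "roughly equivalent" size test from the technique above:
// text qualifies for the lessened (large-text) contrast ratio when it is
// at least 1.5x the platform default size, or bold and at least 1.2x.
// Sizes are in the same unit as the platform default (px, pt, ...).
function qualifiesForLessenedRatio(fontSize, isBold, platformDefaultSize) {
  if (fontSize >= 1.5 * platformDefaultSize) return true;
  if (isBold && fontSize >= 1.2 * platformDefaultSize) return true;
  return false;
}
```

Making the factors explicit like this is one way to turn "roughly equivalent" into something determinable and testable, as the rationale asks.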
</section>
<h3><span class="secno">Advisory Techniques for 3.2 </span>Touch Target Size and Spacing
</h3>
<div class="advisorysuccesscriteria"> Ensuring that touch targets close to the minimum size are surrounded by a small amount of inactive space. <br>
<br>
<strong>Rational for not being a Success Criteria: </strong>Cannot measure "Small amount". Can we quantify it? </div>
<p class="blue"> <strong>Gregg:</strong> What is the evidence that this is of value? Not true of many keyboards. Are they all unusable?
Also if you define a gap – see notes above on ‘what size screen for that gap?” </p>
<h3>Advisory Techniques for touchscreen gestures</h3>
<div class="advisorysuccesscriteria">Gestures in apps should be as easy as possible to carry out. <br>
<br>
<strong>Rational for not being a Success Criteria: </strong>Cannot measure "easy as possible". Can we do rework it?
</p>
</div>
<div class="advisorysuccesscriteria"> Some (but not all) mobile operating systems provide work-around features that let the user simulate complex gestures with simpler ones using an onscreen menu. <br>
<br>
</div> <p class="blue"><strong>David: Rational for not being a Success Criteria: </strong>Cannot measure this or apply it in all circumstances. Can we do rework it?
</p>
<p class="blue"> <strong>Gregg:</strong> It SHOULD be required. But it is already covered by “all functions from keyboard interface” since that would provide an alternate method. So there is already an alternate way to do this.
NOTE: again – for some devices –it may not be possible to have something be accessible.
A broach that you tap on – and ask questions and it answers in audio – would not be usable by someone who is deaf. They fact that you can’t make it usable – would not be a reason to rewrite the accessibility rules to make it possible for it to pass. It simply would always be inaccessible. Accessibility rules do not say that everything must be accessible to all. They say that if it is reasonable or not an undue burden or some such – then it needs to do x or y or z. Some things are not required to be accessible to some groups. That does not make them accessible – it only means they are not required to be accessible.
RE keyboard interface – there may be some IOT devices that do not have remote interfaces – and the iot device itself is too small or limited to be accessible. We don’t rewrite the rules to make it possible for it to pass. We simply say that it is not accessible and it is not possible or reasonable to do so.
Most IOT does have a remote interface –so that can be accessible. </p>
<div class="advisorysuccesscriteria"> Usually, design alternatives exist to allow changes to settings via simple tap or swipe gestures. <br>
<br>
<strong>Rational for not being a Success Criteria: </strong>Cannot measure this or apply it in all circumstances. Can we do rework it?
</p>
</div>
<h3>Advisory technique for Device manipulation Gestures </h3>
<div class="advisorysuccesscriteria">Some (but not all) mobile operating systems provide work-around features that let the user simulate device shakes, tilts, etc. from an onscreen menu. <br>
<br>
<strong>Rationale for it not being a Success Criteria:</strong> It doesn't apply to all situations. Can we quantify it?</div>
<h3><span property="xhv:role" resource="xhv:heading">Advisiory technique placing buttons where they are easy to access consistent layout</h3>
<div class="advisorysuccesscriteria"> Developers should also consider that an easy-to-use button placement for some users might cause difficulties for others (e.g. left- vs. right-handed use, assumptions about thumb range of motion). Therefore, flexible use should always be the goal.<br>
<br>
<strong>Rationale for it not being a Success Criteria:</strong> It doesn't apply to all situations. Can we quantify it?</div>
<p class="blue"><strong>Gregg Comment:</strong> Quantifying it would be required but since it doesn't apply to many pages which have interactive content all over the page – quantification is not relevant. </p>
<h3>Advisory technique for Positioning important page elements before the page scroll 4.3</h3>
<div class="advisorysuccesscriteria">Positioning important page information so it is visible without requiring scrolling can assist users with low vision and users with cognitive impairments. <br>
<br>
<strong>Rationale for it not being a Success Criteria:</strong> It doesn't apply to all situations. Can we quantify it? </div><p class="blue"><strong>Gregg: </strong>Agree, so advisory technique for WCAG? </p>
<h3>Advisory technique: Provide easy methods for data entry 5.2 </h3>
<div class="advisorysuccesscriteria">Reduce the amount of text entry needed by providing select menus, radio buttons, check boxes or by automatically entering known information (e.g. date, time, location). <br>
<br>
<strong>Rationale for it not being a Success Criteria:</strong> It doesn't apply to all situations. Can we quantify it?</div><p class="blue"><strong>Gregg:</strong> Can't be an SC because it is prescriptive and lists specific solutions, when others may also apply and be better. </p>
<hr>
<h4>Other ideas to consider</h4>
<ul>
<li>Moving transitions triggers some to get vertigo <a href="http://www.alphr.com/apple/1001057/why-apple-s-next-operating-systems-are-already-making-users-sick" target="_blank">http://www.alphr.com/apple/1001057/why-apple-s-next-operating-systems-are-already-making-users-sick</a> (via David)</li>
<li>iOS (or Macbook) should have separate sound outputs, so a screen reader user can play a movie connected to a TV for friends and listen to VO to operate the movie without subjecting others to VoiceOver. This is an operating system and hardware issue that authors can't address. (Via Janina Sajka) </li>
<li>Can we add touch-and-hold duration to our touch section? I have an aunt who has lost feeling in her fingers. She mentioned that touching the buttons on her iPhone for too long triggers a long-press action. Android allows changing the touch-and-release duration, but I do not see a similar setting on iPhone/iPad. (Alan Smith on list)</li>
</ul>
<hr>
<p> </p>
<h1>Understanding Mobile Document</h1>
<section property="bibo:hasPart" resource="#introduction" typeof="bibo:Chapter" id="introduction" class="informative">
<h2 resource="#intro" id="intro"><span property="xhv:role" resource="xhv:heading"><span class="secno">1. </span>Introduction</span></h2>
<p><em>This section is non-normative.</em></p>
<p>This document provides informative guidance (but does not set requirements) with regard to interpreting and applying Web Content Accessibility Guidelines (WCAG) 2.0 [WCAG20] to web and non-web mobile content and applications. </p>
<p>While the World Wide Web Consortium (<abbr title="World Wide Web Consortium">W3C</abbr>)'s <abbr title="World Wide Web Consortium">W3C</abbr> Web Accessibility Initiative (WAI) is primarily concerned with web technologies, guidance for web-based technologies is also often relevant to non-web technologies. The <abbr title="World Wide Web Consortium">W3C</abbr>-WAI has published the Note <a href="http://www.w3.org/TR/wcag2ict/">Guidance on Applying WCAG 2.0 to Non-Web Information and Communications Technologies (WCAG2ICT)</a> to provide authoritative guidance on how to apply WCAG to non-web technologies such as mobile native applications. The current document is a mobile-specific extension of this effort. </p>
<p><abbr title="World Wide Web Consortium">W3C</abbr> <a href="http://www.w3.org/Mobile/">Mobile Web Initiative</a> Recommendations and Notes pertaining to mobile technologies also include the <a href="http://www.w3.org/TR/mobile-bp/">Mobile Web Best Practices</a> and the <a href="http://www.w3.org/TR/mwabp/">Mobile Web Application Best Practices</a>. These offer general guidance to developers on how to create content and applications that work well on mobile devices. The current document is focused on the accessibility of mobile web and applications to people with disabilities and is not intended to supplant any other <abbr title="World Wide Web Consortium">W3C</abbr> work. </p>
<section property="bibo:hasPart" resource="#wcag-2.0-and-mobile-content-applications" typeof="bibo:Chapter" id="wcag-2.0-and-mobile-content-applications">
<h3 resource="#h-wcag-2.0-and-mobile-content-applications" id="h-wcag-2.0-and-mobile-content-applications"><span property="xhv:role" resource="xhv:heading"><span class="secno">1.1 </span>WCAG 2.0 and Mobile Content/Applications<br>
</h3>
<p><em>"Mobile"</em> is a generic term for a broad range of wireless devices and applications that are easy to carry and use in a wide variety of settings, including outdoors. Mobile devices range from small handheld devices (e.g. feature phones, smartphones) to somewhat larger tablet devices. The term also applies to <em>"wearables"</em> such as "smart"-glasses, "smart"-watches and fitness bands, and is relevant to other small computing devices such as those embedded into car dashboards, airplane seatbacks, and household appliances. </p>
<p>While mobile is viewed by some as separate from <em>"desktop/laptop"</em>, and thus perhaps requiring new and different accessibility guidance, in reality there is no absolute divide between the categories. For example: </p>
<ul>
<li>many desktop/laptop devices now include touchscreen gesture control, </li>
<li>many mobile devices can be connected to an external keyboard and mouse, </li>
<li>web pages utilizing responsive design can transition into a "mobile" screen size even on a desktop/laptop, and </li>
<li>mobile operating systems have been used for laptop devices. </li>
</ul>
<p>Furthermore, the vast majority of user interface patterns from desktop/laptop systems (e.g. text, hyperlinks, tables, buttons, pop-up menus, etc.) are equally applicable to mobile. Therefore, it's not surprising that a large number of existing WCAG 2.0 techniques can be applied to mobile content and applications (see Appendix A). Overall, <strong>WCAG 2.0 is highly relevant to both web and non-web mobile content and applications</strong>. </p>
<p>That said, mobile devices do present a mix of accessibility issues that are different from the typical desktop/laptop. The "Discussion of Mobile-Related Issues" section, below, explains how these issues can be addressed in the context of WCAG 2.0 as it exists or with additional best practices. All the advice in this document can be applied to mobile web sites, mobile web applications, and hybrid web-native applications. Most of the advice also applies to native applications (also known as "mobile apps"). </p>
<p><em>Note:</em> WCAG 2.0 does not provide testable success criteria for some of the mobile-related issues. The work of the Mobile Accessibility Task Force has been to develop techniques and best practices in these areas. When the techniques or best practices don't map to specific WCAG success criteria, they aren't given a sufficient, advisory or failure designation. This doesn't mean that they are optional for creating accessible web content on a mobile platform, but rather that they cannot currently be assigned a designation. The Task Force anticipates that some of these techniques will be included as sufficient or advisory in a potential future iteration of WCAG. </p>
<p>The current document references existing WCAG 2.0 Techniques that apply to mobile platform (see Appendix A) and provides new best practices, which may in the future become WCAG 2.0 Techniques that directly address emerging mobile accessibility challenges such as small screens, touch and gesture interface, and changing screen orientation. </p>
</section>
<section property="bibo:hasPart" resource="#other-w3c-wai-guidelines-related-to-mobile" typeof="bibo:Chapter" id="other-w3c-wai-guidelines-related-to-mobile2">
<h3 resource="#h-other-w3c-wai-guidelines-related-to-mobile" id="h-other-w3c-wai-guidelines-related-to-mobile"><span property="xhv:role" resource="xhv:heading"><span class="secno">1.2 </span>Other <abbr title="World Wide Web Consortium">W3C</abbr>-WAI Guidelines Related to Mobile</span><br>
</h3>
<section property="bibo:hasPart" resource="#uaag-2.0-and-accessible-mobile-browsers" typeof="bibo:Chapter" id="uaag-2.0-and-accessible-mobile-browsers">
<h4 resource="#h-uaag-2.0-and-accessible-mobile-browsers" id="h-uaag-2.0-and-accessible-mobile-browsers"><span property="xhv:role" resource="xhv:heading"><span class="secno">1.2.1 </span>UAAG 2.0 and Accessible Mobile Browsers</span><br>
</h4>
<p>The User Agent Accessibility Guidelines (UAAG) 2.0 [<a href="http://www.w3.org/TR/UAAG20/">UAAG2</a>] is meant for the developers of user agents (e.g. web browsers and media players), whether for desktop/laptop or mobile operating systems. A user agent that follows UAAG 2.0 will improve accessibility through its
own user interface, through options it provides for rendering and
interacting with
content, and through its ability to communicate with other technologies,
including assistive technologies.</p>
<p>To assist developers of mobile browsers, the <a href="http://www.w3.org/TR/UAAG20-Reference/">UAAG 2.0 Reference</a> support document contains numerous mobile examples. These examples are also available in a separate list of <a href="http://www.w3.org/TR/IMPLEMENTING-UAAG20/mobile">mobile-related examples</a>, maintained by the <a href="http://www.w3.org/WAI/UA/">User Agent Accessibility Guidelines Working Group (UAWG)</a>. </p>
</section>
<section property="bibo:hasPart" resource="#atag-2.0-and-accessible-mobile-authoring-tools" typeof="bibo:Chapter" id="atag-2.0-and-accessible-mobile-authoring-tools2">
<h4 resource="#h-atag-2.0-and-accessible-mobile-authoring-tools" id="h-atag-2.0-and-accessible-mobile-authoring-tools"><span property="xhv:role" resource="xhv:heading"><span class="secno">1.2.2 </span>ATAG 2.0 and Accessible Mobile Authoring Tools</span><br>
</h4>
<p>The Authoring Tool Accessibility Guidelines (ATAG) 2.0 [<a href="http://www.w3.org/TR/ATAG20/">ATAG2</a>] provides guidelines for the developers of authoring tools, whether for desktop/laptop or mobile operating systems. An authoring tool that follows ATAG 2.0 will be both more accessible to authors with disabilities (Part A) and designed to enable, support, and promote the production of more accessible web content by all authors (Part B). </p>
<p>To assist developers of mobile authoring tools, the <a href="http://www.w3.org/TR/IMPLEMENTING-ATAG20/">Implementing ATAG 2.0</a> support document contains numerous mobile authoring tool examples. </p>
<h2>Understanding Individual Success Criteria and Techniques</h2>
<h2 resource="#h-mobile-accessibility-considerations-primarily-related-to-principle-1-perceivable" id="h-mobile-accessibility-considerations-primarily-related-to-principle-1-perceivable"><span property="xhv:role" resource="xhv:heading"><span class="secno">2. </span>Mobile accessibility considerations primarily related to Principle 1: Perceivable</span></h2>
<section property="bibo:hasPart" resource="#small-screen-size" typeof="bibo:Chapter" id="small-screen-size">
<h3 resource="#h-small-screen-size" id="h-small-screen-size"><span property="xhv:role" resource="xhv:heading"><span class="secno">2.1 </span>Small Screen Size<br>
</span></h3>
<p>Small screen size is one of the most common characteristics of mobile devices. While the exceptional resolution of these screens theoretically enables large amounts of information to be rendered, the small size of the screen places practical limits on how much information people can actually view at one time, especially when magnification is used by people with low vision. </p>
<p>Accessibility features geared toward specific populations of people with disabilities fall under the definition of assistive technology adopted by WCAG and thus cannot be relied upon to meet the success criteria. For example, a platform-level zoom feature that magnifies all platform content and has features to specifically support people with low vision is likely considered an assistive technology. </p>
</section>
<h3><span property="xhv:role" resource="xhv:heading"><span class="secno">2.2 </span>Zoom/Magnification</span><br>
</h3>
<p>A variety of methods allow the user to control content size on mobile devices with small screens. At the browser level these methods are generally available to assist a wide audience of users. At the platform level these methods are available as accessibility features to serve people with visual impairments or cognitive disabilities.</p>
<p>The methods include the following: </p>
<ul>
<li>OS-level features
<ul>
<li> Set default text size (typically controlled from the Display Settings) <em>Note</em>: System text size is often not supported by mobile browsers. </li>
<li> Magnify entire screen (typically controlled from the Accessibility Settings). <em>Note</em>: Using this setting requires the user to pan vertically and horizontally. </li>
<li> Magnifying lens view under user's finger (typically controlled from the Accessibility Settings) </li>
</ul>
</li>
<li>Browser-level features
<ul>
<li> Set default text size of text rendered in the browser's viewport
<ul>
<li> Reading mode that renders main content at a user-specified text size </li>
</ul>
</li>
<li> Magnify browser's viewport (typically "pinch-zoom"). <em>Note</em>: Using this setting requires the user to pan vertically and horizontally.
<ul>
<li><em>Note</em>: Some browsers have features that might modify this type of magnification (e.g. re-flowing the content at the new magnification level, overriding author attempts to prevent pinch-zoom). </li>
</ul>
</li>
</ul>
</li>
</ul>
<p>The WCAG 2.0 success criterion that is most related to zoom/magnification is </p>
<div class="successcriteria"> <strong>1.4.4 Resize text</strong> (Level AA) </div>
<p>SC 1.4.4 requires text to be resizable without assistive technology up to 200 percent. To meet this requirement content must not prevent text magnification by the user.</p>
<p><em>Note:</em> Relying on full viewport zooming (e.g. not blocking the browser's pinch-zoom feature) requires the user to pan horizontally as well as vertically. While this technique meets the success criterion, it is less usable than supporting text resizing features that reflow content to the user's chosen viewport size. It is best practice to use techniques that support text resizing without requiring horizontal panning. </p>
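<p>A related authoring practice is to avoid blocking the browser's pinch-zoom in the first place. As a minimal sketch, a viewport meta tag that permits user scaling simply omits <code>user-scalable=no</code> and restrictive <code>maximum-scale</code> values:</p>

```html
<!-- Permits pinch-zoom: no "user-scalable=no" and no low
     "maximum-scale" value that would block magnification. -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```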
<h3 resource="#h-contrast" id="h-contrast"><span property="xhv:role" resource="xhv:heading"><span class="secno">2.3 </span>Contrast</span><br>
</h3>
<p>Mobile devices are more likely than desktop/laptop devices to be used in varied environments, including outdoors, where glare from the sun or other strong lighting sources is more likely. This scenario heightens the importance of good contrast for all users and may compound the challenges that users with low vision have accessing content with poor contrast on mobile devices. </p>
<p>The WCAG 2.0 success criteria related to the issue of contrast are: </p>
<div class="successcriteria">
<ul>
<li> <strong>1.4.3 Contrast (Minimum)</strong> (Level AA) which requires a contrast of at least 4.5:1 (or 3:1 for large-scale text) and </li>
<li> <strong>1.4.6 Contrast (Enhanced)</strong> (Level AAA) which requires a contrast of at least 7:1 (or 4.5:1 for large-scale text). </li>
</ul>
</div>
<p>SC 1.4.3 allows for different contrast ratios for large text. This is useful because larger text with wider character strokes is easier to read at a lower contrast, which gives designers more leeway for content such as titles. The 18-point (or 14-point bold) threshold described in SC 1.4.3 was judged large enough to enable a lower contrast ratio for web pages displayed on a 15-inch monitor at 1024x768 resolution with a 24-inch viewing distance. Mobile device content is viewed on smaller screens and in different conditions, so this allowance for lessened contrast on large text must be considered carefully for mobile apps.</p>
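<p>The contrast ratio used in these success criteria is defined by WCAG 2.0 as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colors. The following sketch (function names are illustrative) computes it for sRGB colors:</p>

```javascript
// Per-channel linearization from the WCAG 2.0 definition of
// relative luminance (c8 is an 8-bit sRGB channel value, 0-255).
function channelLuminance(c8) {
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color.
function relativeLuminance([r, g, b]) {
  return 0.2126 * channelLuminance(r) +
         0.7152 * channelLuminance(g) +
         0.0722 * channelLuminance(b);
}

// WCAG 2.0 contrast ratio: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(colorA, colorB) {
  const la = relativeLuminance(colorA);
  const lb = relativeLuminance(colorB);
  const [lighter, darker] = la >= lb ? [la, lb] : [lb, la];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black on white yields the maximum possible ratio of 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
```

<p>A result of at least 4.5 (or 3 for large-scale text) satisfies SC 1.4.3; at least 7 (or 4.5 for large-scale text) satisfies SC 1.4.6.</p>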
<p><em>Note</em>: The use of text that is 1.5 times the default on mobile platforms does not imply that the text will be readable by a person with low vision. People with low vision will likely need and use additional platform level accessibility features and assistive technology such as increased text size and zoom features to access mobile content. </p>
<h3 resource="#h-keyboard-control-for-touchscreen-devices" id="h-keyboard-control-for-touchscreen-devices"><span property="xhv:role" resource="xhv:heading"><span class="secno">3.1 </span>Keyboard Control for Touchscreen Devices</span><br>
</h3>
<p>Mobile device design has evolved away from built-in physical keyboards (e.g. fixed, slide-out) towards devices that maximize touchscreen area and display an on-screen keyboard only when the user has selected a user interface control that accepts text input (e.g. a textbox). </p>
<p>However, keyboard accessibility remains as important as ever and most major mobile operating systems do include keyboard interfaces, allowing mobile devices to be operated by external physical keyboards (e.g. keyboards connected via Bluetooth, USB On-The-Go) or alternative on-screen keyboards (e.g. scanning on-screen keyboards). </p>
<p>Supporting these keyboard interfaces benefits several groups with disabilities: </p>
<ul>
<li> People with visual disabilities who can benefit from some characteristics of physical keyboards over touchscreen keyboards (e.g. clearly separated keys, key nibs and more predictable key layouts). </li>
<li> People with dexterity or mobility disabilities, who can benefit from keyboards optimized to minimize inadvertent presses (e.g. differently shaped, spaced and guarded keys) or from specialized input methods that emulate keyboard input. </li>
<li> People who can be confused by the dynamic nature of onscreen keyboards and who can benefit from the consistency of a physical keyboard. </li>
</ul>
<p>Several WCAG 2.0 success criteria are relevant to effective keyboard control: </p>
<div class="successcriteria">
<ul>
<li> <strong>2.1.1 Keyboard</strong> (Level A) </li>
<li> <strong>2.1.2 No Keyboard Trap</strong> (Level A) </li>
<li> <strong>2.4.3 Focus Order</strong> (Level A) </li>
<li> <strong>2.4.7 Focus Visible</strong> (Level AA) </li>
</ul>
</div>
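<p>For custom widgets built from generic elements, keyboard operability has to be scripted. The sketch below (function names are illustrative) activates a control on the keys that trigger a native button, Enter and Space; the element would also need <code>tabindex="0"</code> and an appropriate role in the markup:</p>

```javascript
// Enter and Space are the keys that activate a native button.
function isActivationKey(key) {
  return key === 'Enter' || key === ' ';
}

// Sketch: wire keyboard activation onto a custom control so it is
// operable through the keyboard interface as well as by touch.
function makeKeyboardActivatable(element, action) {
  element.addEventListener('keydown', (event) => {
    if (isActivationKey(event.key)) {
      event.preventDefault(); // keep Space from scrolling the page
      action(event);
    }
  });
}
```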
<h3 resource="#h-touch-target-size-and-spacing" id="h-touch-target-size-and-spacing"><span property="xhv:role" resource="xhv:heading"><span class="secno">3.2 </span>Touch Target Size and Spacing</span><br>
</h3>
<p>The high resolution of mobile devices means that many interactive elements can be shown together on a small screen. But these elements must be big enough and have enough distance from each other so that users can safely target them by touch. </p>
<p><em>Note:</em> This size is not dependent on the screen size, device or resolution. Screen magnification should not need to be used to obtain this size, because magnifying the screen often introduces the need to pan horizontally as well as vertically, which can decrease usability. </p>
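<p>In CSS, a minimum target size and spacing can be sketched as follows. The 44px minimum and 8px gap are illustrative values drawn from common platform guidance, not WCAG 2.0 requirements:</p>

```css
/* Sketch: comfortably large, well-separated touch targets.
   The values below are illustrative, not normative. */
.touch-target {
  min-width: 44px;
  min-height: 44px;
  margin: 8px; /* spacing so neighboring targets are not hit by mistake */
}
```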
<p class="blue"><strong>Gregg comment: </strong></strong> This means what? That an Apple watch – all the physical controls on the side have to also be operable from the screen? Or do you mean that a web page designer needs to provide their own keyboard in their content for any keyboard input on their page?</br.
I’m not sure this makes sense. If you are relying on the keyboard interface for input of text – then it is not all via touch – some is via the keyboard interface.><br>
And some mobile devices don’t have an onscreen keyboard (they have a physical one) – so all by touch means you AGAIN WOULD have to provide all the input with a keyboard built into each web page or you would fail this SC</br>I can’t figure out how to make sense of this one. </p>
<h3 resource="#h-touchscreen-gestures" id="h-touchscreen-gestures"><span property="xhv:role" resource="xhv:heading"><span class="secno">3.3 </span>Touchscreen Gestures</span><br>
</h3>
<p>Many mobile devices are designed to be primarily operated via gestures made on a touchscreen. These gestures can be simple, such as a tap with one finger, or very complex, involving multiple fingers, multiple taps and drawn shapes.</p>
<p><span class="advisorysuccesscriteria">Gestures in apps should be as easy as possible to carry out. This is especially important for screen reader interaction modes that replace direct touch manipulation by a two-step process of focusing and activating elements. It is also a challenge for users with motor or dexterity impairments or people who rely on head pointers or a stylus where multi-touch gestures may be difficult or impossible to perform. Often, interface designers have different options for how to implement an action. Widgets requiring complex gestures can be difficult or impossible to use for screen reader users. </span></p>
<p>Activate elements via the mouseup or touchend event. Using the mouseup or touchend event to trigger actions helps prevent unintentional actions during touch and mouse interaction. Mouse users clicking on actionable elements (links, buttons, submit inputs) should have the opportunity to move the cursor outside the element to prevent the event from being triggered, which allows them to change their minds without being forced to commit to an action. In the same way, elements accessed via touch interaction should generally trigger an event (e.g. navigation, submits) only when the touchend event is fired, i.e. when all of the following are true: the user has lifted the finger off the screen, the last position of the finger is inside the actionable element, and the last position of the finger equals the position at touchstart. </p>
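<p>The touchend rule described above can be sketched as follows (function names are illustrative):</p>

```javascript
// Decide whether a touch sequence should activate an element, per the
// rule above: the finger was lifted inside the element's bounding box,
// at the same position where the touch started.
function shouldActivate(start, end, rect) {
  const inside = end.x >= rect.left && end.x <= rect.right &&
                 end.y >= rect.top && end.y <= rect.bottom;
  const stationary = end.x === start.x && end.y === start.y;
  return inside && stationary;
}

// Browser wiring (sketch): record the position at touchstart, then check
// shouldActivate in the touchend handler before triggering the action.
function activateOnTouchEnd(element, action) {
  let start = null;
  element.addEventListener('touchstart', (event) => {
    const t = event.changedTouches[0];
    start = { x: t.clientX, y: t.clientY };
  });
  element.addEventListener('touchend', (event) => {
    const t = event.changedTouches[0];
    const end = { x: t.clientX, y: t.clientY };
    if (start && shouldActivate(start, end, element.getBoundingClientRect())) {
      action(event);
    }
  });
}
```

<p>In practice an implementation might allow a small movement tolerance rather than requiring the exact same position, but the strict check matches the behavior described above.</p>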
<h3 resource="#h-device-manipulation-gestures" id="h-device-manipulation-gestures"><span property="xhv:role" resource="xhv:heading"><span class="secno">3.4 </span>Device Manipulation Gestures</span><br>
</h3>
<p>In addition to touchscreen gestures, many mobile operating systems provide developers with control options that are triggered by physically manipulating the device (e.g. shaking or tilting). While device manipulation gestures can help developers create innovative user interfaces, they can also be a challenge for people who have difficulty holding or are unable to hold a mobile device. </p>