Image Reorderer code question

elledge at msu.edu
Wed Apr 8 01:17:23 UTC 2009


Hi Daphne--

Ideally we test 8 people per cell (undergrads, grads, donors, etc.), but more often we test 5-6 because of client budgets. Occasionally we will have a cell of blind users, and once in a very long while a low vision group. We ask blind users to do the same tasks as sighted users. Since they are typical user tasks (usually defined by the client, since they know their app or website), they are as appropriate for blind/low vision users as for sighted users. The cell of blind/low vision users will be average users as best as we can define them; we haven't had a project that I can recall where we had more than one cell.

I've tested both JAWS and Window-Eyes users in the field; in the lab they are typically JAWS users (this seems to depend on geography). The main difference is that you have to read the consent form and questionnaire(s) to blind users; we read the test information and tasks out loud to everyone. Blind users can't reread the tasks to refresh their memory, so we'll repeat them out loud as needed.

Let me know if you have any other questions about testing with blind/low vision users; I'm happy to share what I know!

Mike

Quoting "Daphne Ogle" <daphne at media.berkeley.edu>:

> Right.  I definitely want to keep the directed tasks in.  I was
> thinking of replacing the existing task 1, "Please rearrange as many
> of the images as you wish.  Try to rearrange at least 4 images.",
> with the one I described below.  I like it better because it is more
> realistic that users will have a goal in mind (rather than just
> moving images around) and it is very dynamic.  This is how we saw
> users interacting with their images during user research.  But I hear
> you about stumping a user with something we aren't really testing.
> We don't really care how well they can order the fruits by calorie
> count.  Now I'm back to my original suggestion of adding the caloric
> information to the interface.  I'll check on it.
>
> I'm interested in what sort of comparisons you are doing.  Do you  
> typically test with large numbers?  What comparisons do you do 
> between  sighted and blind users?
>
> Thanks!  Daphne
>
> On Apr 7, 2009, at 9:57 AM, elledge at msu.edu wrote:
>
>> I guess it depends upon what you want to get from the test. We
>> generally specify tasks so that the results can be comparative. If
>> we want to get a sense of what participants will do without
>> direction, we'll usually ask them to do something in whatever way
>> makes sense to them (undirected) and then ask them why they did what
>> they did. Then we'll follow up with specific tasks (directed) so we
>> can see how long it takes them, how they try to complete the task,
>> and whether they succeed or not. Having (at least some) directed
>> tasks would enable you to compare blind and sighted user results,
>> which would be quite useful.
>>
>> There's also the risk that someone won't really know what to do if
>> they aren't directed. An alternative would be to give them a couple
>> of choices if they can't think of what to do.
>>
>> Mike
>>
>> Quoting "Daphne Ogle" <daphne at media.berkeley.edu>:
>>
>>> We could do that.  My intent for the scenario was that this is more
>>> like a quiz to kick off the class.  So the idea is that they are
>>> putting the images in order based on their current thinking/knowledge.
>>> In fact, as I write this I realize that could be a really rich task,
>>> as they would be moving a bunch of images around in many different
>>> ways.  How does that sound?
>>>
>>> -Daphne
>>>
>>> On Apr 7, 2009, at 9:04 AM, elledge at msu.edu wrote:
>>>
>>>> Hi Daphne--
>>>>
>>>> I think you're absolutely right on this. Change the wording so that
>>>> it has participants putting the apple after, the mango between, etc.
>>>> Much more realistic. Now, a question for you: if they are moving the
>>>> objects based upon nutritional value, should we provide that info as
>>>> part of the task? We could add it to the image's alt text, e.g.,
>>>> alt="Mango: 95 calories", which would more closely simulate the
>>>> task. This would also require adding the number of calories to the
>>>> visual, but I would think that, and revising the alt text, would be
>>>> trivial. What do you think?
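>>>>
>>>> To make that concrete, here is a rough sketch (the actual Reorderer
>>>> markup, file names, and calorie figures are just placeholders) of
>>>> what each thumbnail might end up looking like:
>>>>
>>>>   <img src="mango.jpg" alt="Mango: 95 calories" />
>>>>   <img src="pear.jpg" alt="Pear: 100 calories" />
>>>>
>>>> with the same calorie figures shown in the visible caption so that
>>>> sighted and screen reader users get equivalent information.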
>>>>
>>>> Mike
>>>>
>>>> Quoting "Daphne Ogle" <daphne at media.berkeley.edu>:
>>>>
>>>>> Mike & Colin,
>>>>>
>>>>> Thanks for the great feedback.  I'm working on updating the tasks
>>>>> and have questions / concerns about the wording.  We really want
>>>>> the users to do a task they might do normally with the image
>>>>> reorderer.  Is it realistic that a user would go in saying, "I want
>>>>> to move the X image to be the 5th in the row"?  I think it is more
>>>>> realistic that they want to move the X image in between the Y and Z
>>>>> images.  Which says to me we need a way for them to know *where*
>>>>> they are as they are "dragging" the image.  I'm thinking that as
>>>>> they 'shift + right arrow', for instance, we would want a screen
>>>>> reader to say something like "dropped between X and Y" (X and Y
>>>>> being the image names).  Does this make sense to you?
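>>>>>
>>>>> One rough way to provide that feedback (just a sketch of the idea,
>>>>> not how the Reorderer currently works) would be a polite ARIA live
>>>>> region that the keyboard handler updates after each move:
>>>>>
>>>>>   <div role="status" aria-live="polite"></div>
>>>>>
>>>>> After each shift+arrow move, the script would set its text to
>>>>> something like "Mango dropped between Pear and Apple", so the
>>>>> screen reader announces the new position without the user having
>>>>> to re-read the whole list.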
>>>>>
>>>>> Knowing we don't have anything like this implemented yet, the
>>>>> question becomes: how should the tasks work for the user test?  I'm
>>>>> tempted to have a couple of tasks like Mike suggests so we can see
>>>>> how the interaction of moving the images works, but also to include
>>>>> a few tasks like "put the mangosteen between the pear and apple"
>>>>> and to keep the one that asks them to "move the mangosteen to the
>>>>> end".  I don't want to set people up to fail with these last tasks,
>>>>> but I do want to see how they would try to figure them out.  Will
>>>>> they have to read through all the images and try to remember them
>>>>> and their order?  Talk about cognitive load!  Plus I feel like we
>>>>> may get some good suggestions from them about what would be
>>>>> helpful.  Thoughts?
>>>>>
>>>>> -Daphne
>>>>>
>>>>> On Apr 6, 2009, at 6:11 PM, Colin Clark wrote:
>>>>>
>>>>>> Hey Daphne,
>>>>>>
>>>>>> The default markup for the Image Reorderer is not table-based.  Mike
>>>>>> Elledge's comments about the way to phrase the test plan make a  lot
>>>>>> of sense to me.
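>>>>>>
>>>>>> (Just to illustrate the difference, not the Reorderer's actual
>>>>>> markup: a non-table layout might simply be an ordered list of
>>>>>> thumbnails, roughly
>>>>>>
>>>>>>   <ol>
>>>>>>     <li><img src="mango.jpg" alt="Mango" /></li>
>>>>>>     <li><img src="pear.jpg" alt="Pear" /></li>
>>>>>>   </ol>
>>>>>>
>>>>>> so a screen reader can expose position within the list rather than
>>>>>> rows and columns.)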
>>>>>>
>>>>>> I know that Everett spent a bit of time playing around with the
>>>>>> Image Reorderer using a screen reader and found the lack of hard
>>>>>> stops at the beginning and end of the list rather disorienting.  He
>>>>>> had some other ideas for how we might be able to provide  additional
>>>>>> cues to help orient the user. I think that feedback is probably
>>>>>> archived on the mailing list.
>>>>>>
>>>>>> Hope this helps,
>>>>>>
>>>>>> Colin
>>>>>>
>>>>>> On 6-Apr-09, at 8:57 PM, Daphne Ogle wrote:
>>>>>>
>>>>>>> As we've been updating the user testing protocols to make them
>>>>>>> relevant to users of adaptive technologies, a few questions have
>>>>>>> come up.
>>>>>>>
>>>>>>> Here's one from Mike E. that I could use some help answering:
>>>>>>>
>>>>>>> "Hi all--
>>>>>>>
>>>>>>> Is the image reorder based on an html table structure? If not   blind
>>>>>>> users won't have a sense of middle or row. If the reorderer  isn't
>>>>>>> based on table structure, have them move an item to a particular
>>>>>>> position within the list of objects (ex. "move item so that it  is
>>>>>>> the fifth object in the list")."
>>>>>>
>>>>>> ---
>>>>>> Colin Clark
>>>>>> Technical Lead, Fluid Project
>>>>>> Adaptive Technology Resource Centre, University of Toronto
>>>>>> http://fluidproject.org
>>>>>>
>>>>>
>>>>> Daphne Ogle
>>>>> Senior Interaction Designer
>>>>> University of California, Berkeley
>>>>> Educational Technology Services
>>>>> daphne at media.berkeley.edu
>>>>> cell (510)847-0308
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>
>>> Daphne Ogle
>>> Senior Interaction Designer
>>> University of California, Berkeley
>>> Educational Technology Services
>>> daphne at media.berkeley.edu
>>> cell (510)847-0308
>>>
>>>
>>>
>>>
>
> Daphne Ogle
> Senior Interaction Designer
> University of California, Berkeley
> Educational Technology Services
> daphne at media.berkeley.edu
> cell (510)847-0308
>
>
>
>

