Image Reorderer code question

Daphne Ogle daphne at media.berkeley.edu
Tue Apr 7 16:24:54 UTC 2009


We could do that.  My intent for the scenario was that this is more
like a quiz to kick off the class.  So the idea is that they are
putting the images in order based on their current thinking/knowledge.
In fact, as I write this I realize that could be a really rich task,
as they would be moving a bunch of images around in many different
ways.  How does that sound?

-Daphne

On Apr 7, 2009, at 9:04 AM, elledge at msu.edu wrote:

> Hi Daphne--
>
> I think you're absolutely right on this. Change the wording so that
> it has participants putting the apple after, the mango between, etc.
> Much more realistic. Now, a question for you. If they are moving the
> objects based on nutritional value, should we provide that info as
> part of the task? We could add the info to the image's alt text,
> e.g., alt="Mango: 95 calories", which would more closely simulate
> the task. This would also require adding the number of calories to
> the visual, but I would think that, and revising the alt text, would
> be trivial. What do you think?
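>
> Something along these lines, for example (the file names and calorie
> figures here are just placeholders):
>
>   <img src="mango.jpg" alt="Mango: 95 calories" />
>   <img src="apple.jpg" alt="Apple: 72 calories" />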
>
> Mike
>
> Quoting "Daphne Ogle" <daphne at media.berkeley.edu>:
>
> > Mike & Colin,
> >
> > Thanks for the great feedback.  I'm working on updating the tasks
> > and have questions / concerns about the wording.  We really want the
> > users to do a task they might do normally with the image reorderer.
> > Is it realistic that a user would go in saying, "I want to move the X
> > image to be the 5th in the row"?  I think it is more realistic that
> > they want to move the X image in between the Y and Z images.  Which
> > says to me we need a way for them to know *where* they are as they
> > are "dragging" the image.  I'm thinking that as they press 'shift +
> > right arrow', for instance, we would want a screen reader to say
> > something like "dropped between X and Y" (X and Y being the image
> > names).  Does this make sense to you?
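> >
> > To make that concrete, here is a rough sketch of how such an
> > announcement could be surfaced, assuming we used an ARIA live region
> > that the component updates after each keystroke (illustrative markup
> > only, not what the Reorderer renders today):
> >
> >   <!-- visually hidden live region, updated after each move -->
> >   <div aria-live="polite">
> >     Mangosteen dropped between Pear and Apple.
> >   </div>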
> >
> > Knowing we don't have anything like this implemented yet, the
> > question becomes: how should the tasks work for the user test?  I'm
> > tempted to have a couple of tasks like Mike suggests so we can see
> > how the interaction of moving the images works.  But I'd also
> > include a few tasks like "put the mangosteen between the pear and
> > apple" and keep the one that asks them to "move the mangosteen to
> > the end".  I don't want to set people up to fail with these last
> > tasks, but I do want to see how they would try to figure it out.
> > Will they have to read through all the images and try to remember
> > them and their order?  Talk about cognitive load!  Plus I feel like
> > we may get some good suggestions from them about what would be
> > helpful.  Thoughts?
> >
> > -Daphne
> >
> > On Apr 6, 2009, at 6:11 PM, Colin Clark wrote:
> >
> >> Hey Daphne,
> >>
> >> The default markup for the Image Reorderer is not table-based. Mike
> >> Elledge's comments about the way to phrase the test plan make a lot
> >> of sense to me.
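> >>
> >> For illustration only, the structure is closer to a flat list of
> >> images than to table rows and cells; a simplified sketch (not the
> >> component's actual markup) might look like:
> >>
> >>   <ul>
> >>     <li><img src="mango.jpg" alt="Mango" /></li>
> >>     <li><img src="apple.jpg" alt="Apple" /></li>
> >>     <li><img src="pear.jpg" alt="Pear" /></li>
> >>   </ul>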
> >>
> >> I know that Everett spent a bit of time playing around with the
> >> Image Reorderer using a screen reader and found the lack of hard
> >> stops at the beginning and end of the list rather disorienting. He
> >> had some other ideas for how we might be able to provide additional
> >> cues to help orient the user. I think that feedback is probably
> >> archived on the mailing list.
> >>
> >> Hope this helps,
> >>
> >> Colin
> >>
> >> On 6-Apr-09, at 8:57 PM, Daphne Ogle wrote:
> >>
> >>> As we've been updating the user testing protocols for relevance to
> >>> users of adaptive technologies, a few questions have come up.
> >>>
> >>> Here's one from Mike E. that I could use some help answering:
> >>>
> >>> "Hi all--
> >>>
> >>> Is the image reorderer based on an HTML table structure? If not,
> >>> blind users won't have a sense of middle or row. If the reorderer
> >>> isn't based on a table structure, have them move an item to a
> >>> particular position within the list of objects (ex. "move item so
> >>> that it is the fifth object in the list")."
> >>
> >> ---
> >> Colin Clark
> >> Technical Lead, Fluid Project
> >> Adaptive Technology Resource Centre, University of Toronto
> >> http://fluidproject.org
> >>
> >
> > Daphne Ogle
> > Senior Interaction Designer
> > University of California, Berkeley
> > Educational Technology Services
> > daphne at media.berkeley.edu
> > cell (510)847-0308
> >

Daphne Ogle
Senior Interaction Designer
University of California, Berkeley
Educational Technology Services
daphne at media.berkeley.edu
cell (510)847-0308


