[Accessforall] Needs & Preferences call today at 12:00 UTC

Matthew Atkinson M.T.Atkinson at lboro.ac.uk
Tue Jun 12 09:07:01 UTC 2012


Hello,

Some of you may remember me from a few months ago when Andy Heath introduced my research group to the project, following a seminar of ours.  Since then I have been recovering from an operation, hence quiet.  I now feel there is something significant I can contribute.  If it is possible/prudent to discuss the following in the call today (or via the list until the next call) please let me know.

My work involves considering the implications of human capabilities for what GPII calls preferences (I've previously put [GPII] preferences and discrete Assistive Technologies (ATs) on a spectrum of "adaptations").  Very briefly, the goal is twofold:

1. Use known human capabilities (in human terms, such as those found in WHO's classification) to help infer [GPII] preferences.  This can help bootstrap users onto new devices, OSes or applications.  The next step is for detected fluctuations in capabilities or environmental conditions to affect preferences in the most appropriate way (perhaps setting up or triggering "conditions" in GPII terms).

2. Use changes in [GPII] preferences to help track which human capabilities the user may be having problems with (being aware that these could be due to environmental barriers, too).  This (along with very occasional user interaction) closes a feedback loop that allows us to continually refine our view of the user's capabilities and thus suggest more pertinent preferences/ATs.
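To make the two goals concrete, here is a toy Python sketch of the feedback loop; every name in it (the capability keys, preference keys and thresholds) is invented purely for illustration and is not a GPII or WHO term:

```python
def infer_preferences(capabilities):
    """Goal 1: map human-capability scores (0.0 = absent, 1.0 = full
    capacity; hypothetical scale) to candidate GPII-style preferences,
    e.g. to bootstrap a user onto a new device."""
    prefs = {}
    if capabilities.get("vision.acuity", 1.0) < 0.5:
        prefs["fontSize"] = 24          # suggest larger text
    if capabilities.get("motor.fine", 1.0) < 0.4:
        prefs["largeTargets"] = True    # suggest bigger click/touch targets
    return prefs

def update_capabilities(capabilities, changed_prefs):
    """Goal 2: feed observed preference changes back into the capability
    model.  This is deliberately crude; a real system would also weigh
    environmental barriers and occasional direct user input."""
    updated = dict(capabilities)
    if changed_prefs.get("fontSize", 0) > 18:
        # User asked for larger text: lower our estimate of visual acuity.
        updated["vision.acuity"] = min(updated.get("vision.acuity", 1.0), 0.5)
    return updated
```

Running goal 2's output back through goal 1 is the "closed feedback loop" I describe below: capability estimates and preferences refine each other over time.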

Our latest paper on this was in W4A 2012 (you can read it without cost via <http://hdl.handle.net/2134/9789>).  I have pasted in an illustrative example from it as to how this may work.  The architecture was proposed before we met Andy, but it's remarkably similar to GPII and I would love to implement what I've been doing on top of what you guys are doing.

It seems that what I'm trying to do is to implement a Match Maker that takes not only GPII preferences as input (as per the wiki page: <http://wiki.gpii.net/index.php/HDM_Glossary_Proposals>) but also information we are tracking regarding human capabilities.  Output would include GPII preferences and updated capability information.
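As a rough illustration of that extended interface, here is a Python sketch; the function shape and field names are my assumptions, not the interface defined on the wiki page:

```python
def match_make(preferences, capabilities):
    """Sketch of a Match Maker taking GPII preferences AND tracked
    human-capability estimates as input, and returning both refined
    preferences and (potentially updated) capability information."""
    # Fill gaps in the stated preferences from what the capabilities imply.
    inferred = {}
    if capabilities.get("hearing", 1.0) < 0.3 and "captions" not in preferences:
        inferred["captions"] = True
    merged = {**inferred, **preferences}   # explicitly stated prefs win
    # A fuller version would also refine the capability estimates here.
    return {"preferences": merged, "capabilities": dict(capabilities)}
```

The key point is simply that both inputs and both outputs flow through the one component, so the capability tracking rides on top of the existing preference plumbing.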

I believe a similar process could be used to help interoperability between GPII and other standards that have an established -- but different -- set of preferences (e.g. SNAPI-based smart cards).

So far I've been following your architectural discussion and have not seen anything that conflicts with me developing "on top of" the existing GPII infrastructure.  However, I would like to ensure that nothing precludes this and, therefore, am wondering if I ought to start contributing a little more vocally.

Caveat: as far as I know, the agreement regarding IP has not been finalised.  I really want to get involved as much as I can with GPII -- can anyone point me to a final agreement to which I can agree?

Also, I realise that it is now the day of the call and this may take some time to digest -- please let me know, if you can, if you'd rather discuss this on-list first, rather than in the (presumably quite time-pressured) call.

Thanks for your time -- I look forward to contributing much more soon.

best regards,


Matthew

[Very brief] Illustrative example from W4A paper as to the benefits of capabilities as well as "adaptations" (GPII preferences + discrete ATs):

Imagine a user with fine motor dexterity problems.  Recording [in the user's profile] an ability to use a mouse, but not its scroll wheel, is too device-specific.  Given the capability-centred approach we would store information regarding the capabilities of the user's finger.  The inability is indicative of a finger dexterity problem -- reduced capacity in fine motor skills.  On trying to use a public multi-touch terminal, the user may find their reduced dexterity a problem, as it may preclude using pinch-to-zoom gestures.  Although the user may never have used a multi-touch device before, inference can be made from their lack of fine motor capability that the pinch gesture could be unattainable.  A zoom widget, such as a large slider bar, can be provided for the user.

On a small device such as a tablet, where screen space is at a premium, this would not be provided for most users.  On a public information terminal there is likely more space, but given the popular design aesthetic of minimising screen clutter, an explicit zoom widget may not have otherwise been provided.

Further: if a user is not known to have experience with multi-touch interaction -- therefore lacking the appropriate mental model -- the terminal can be adjusted to offer explanation.
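If it helps, the inference in the example above could be sketched like this in Python; the capability names, screen categories and threshold are all invented for illustration:

```python
def choose_zoom_ui(capabilities, screen="terminal"):
    """Pick zoom affordances for a multi-touch display from a
    (hypothetical) capability record, per the example above."""
    fine_motor = capabilities.get("motor.fine", 1.0)   # 0.0-1.0 scale
    has_mt_experience = capabilities.get("experience.multitouch", False)
    ui = []
    if fine_motor < 0.4:
        # Pinch-to-zoom is likely unattainable; offer an explicit widget,
        # but only where screen space allows (not on a small tablet).
        if screen == "terminal":
            ui.append("zoom-slider")
    else:
        ui.append("pinch-to-zoom")
    if not has_mt_experience:
        ui.append("gesture-help")      # user may lack the mental model
    return ui
```

Note the user need never have touched a multi-touch device: the adaptation follows from the stored capability, not from device-specific history.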

-- 
Matthew Tylee Atkinson
http://mta.agrip.org.uk/

