[Accessforall] Some statements for discussion regarding the further work of this group

Andy Heath andyheath at axelrod.plus.com
Mon Jan 16 12:52:30 UTC 2012

Hi all,

Thanks for this Gottfried.

I have edited in some comments below, particularly on the 5 statements.
To make it readable (at which I have failed) I have chopped some text 
(apols to Gottfried).

> [My apologies if you have received this email twice since I am sending
> this to the list and to individual email addresses.]

ditto - my apologies too.

> Below my signature are 5 statements that I would like to discuss with

> *The development of a set of preference items for user interface
> adaptation should be based upon various use cases, including a range of
> user groups and a range of contexts of use.*
>   * Use cases need to be defined, and different kinds of adaptive user
>     interfaces constructed for testing and evaluation purposes.
>   * This is what Cloud4All is doing through its SP1 and SP3, on various
>     platforms. This takes time and resources, and requires the
>     involvement of many partners.
>   * Other projects (aside from Cloud4All) may have done or may be doing
>     similar activities. This working group is suitable for information
>     exchange and coordination of these efforts.

Well yes, I personally don't disagree, provided there are no ugly
patent issues from organisations involved with Cloud4All.  I note that
some participants are paid for Cloud4All work and some (maybe most, I
don't know) of the participants here are working for free (which might
include specialists consulted).  This makes it vitally important that
there are no hidden agendas, which we can achieve by making all
arguments explicit; mixing hidden agendas with freely-given
collaboration would not leave a pleasant taste.

> *The set and structure of preference items for user interface adaptation
> depends on the context of use (applications, platforms, assistive
> technologies). There is no universal set of preference items that would
> suffice for all existing contexts of use.*
>   * This is an implication of the first statement.


> *The set of preference items for user interface adaptation will never be
> complete. New applications, assistive technologies and user interaction
> techniques will eventually require an extension of any existing set of
> items.*
>   * What if we want to accommodate new interaction techniques that
>     require new preference items? For example, consider the introduction
>     of sign language output by avatars. We would need a new preference
>     item for the speed of the sign language performance, for the
>     preferred avatar, etc. Or for gesture-based input we would need
>     additional preference items that specify a gesture alphabet.
>   * We need to define a framework that sets the structure of a user
>     profile independent of its vocabulary items.
>   * The vocabulary (preference items) should be repository-based so that
>     it can be updated on a regular basis.
>   * The framework (without vocabulary) should be defined in an ISO
>     standard. Also, a process of approval for new vocabulary items for
>     the repository should be specified in a second ISO standard.
>   * For an efficient development process, properties, structure and
>     approval process can be developed in parallel.

I agree completely.  I think it's important that we identify all the
kinds of modelling concepts and structures we might need to use; they
don't need to be populated yet.  I elaborate on this later.
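
To make that concrete, here is a tiny Python sketch of a profile whose
structure is fixed while its vocabulary lives in a replaceable
repository, in the spirit of Gottfried's framework/vocabulary split
(all item names below are invented for illustration):

```python
# The repository of approved vocabulary items, updated over time.
# Item names are invented for illustration only.
repository = {
    "fontSize": {"type": "string"},
    "highContrast": {"type": "boolean"},
}

def validate(profile):
    """A profile is valid if every item it uses is in the repository."""
    return all(key in repository for key in profile)

profile = {"fontSize": "18pt", "highContrast": "true"}
print(validate(profile))  # True

# Accommodating a new interaction technique (e.g. sign language output)
# only extends the repository, not the profile structure:
repository["signLanguageSpeed"] = {"type": "number"}
print(validate({"signLanguageSpeed": "0.8"}))  # True
```

The point of the sketch is only that the validation logic (the
"framework") never changes when the vocabulary grows.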

> *A flat set of properties (key-value pairs) is most appropriate for
> practical reasons.*
>   * In most cases, hierarchical constructs of user properties can be
>     approximated by a flat set of properties with URIs as keys.
>   * See as an example the WURFL properties database for mobile devices.
>     This simple collection of key-value pairs for thousands of mobile
>     devices is a de-facto standard in industry today for the user
>     interface adaptation based on mobile device characteristics. WURFL
>     is regularly updated by a community. http://wurfl.sourceforge.net/
>   * Many standards in this area have a flat structure, and some use URIs
>     as keys (e.g. Dublin Core, ETSI ES 202 746). This is a pragmatic
>     approach, and allows for core properties and extension properties
>     based on domain names. Also, using URIs as keys allows for formal
>     definition of preference items (and constraints) in RDF. Note that
>     this doesn't mean that the preference values need to be coded in RDF.
>   * The core items will have a commonly defined domain name (namespace)
>     as part of their URI. Third parties such as user groups and vendors
>     can define their own extension items by using a different domain
>     (namespace).
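
For illustration, a flat, URI-keyed preference set along those lines
might look like this in Python (the domain and property names below
are invented, not taken from any standard):

```python
# A hypothetical flat preference set: keys are URIs, values are strings.
# Domain and property names are invented for illustration only.
preferences = {
    "http://example.org/a4a/core#fontSize": "18pt",
    "http://example.org/a4a/core#highContrast": "true",
    # A third-party extension item lives in its own namespace:
    "http://vendor.example.com/ext#screenReaderVoice": "en-GB-female",
}

CORE_NS = "http://example.org/a4a/core#"

# Core items and extension items can be separated purely by namespace.
core = {k: v for k, v in preferences.items() if k.startswith(CORE_NS)}
extensions = {k: v for k, v in preferences.items()
              if not k.startswith(CORE_NS)}
```

Nothing here requires RDF at the value level; the URIs merely give
each key a globally unique, namespaced name.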

In contrast to Gottfried's very nicely-structured and readable 5
points, the discussion below is unfortunately rather long, complex and
rambling.  My intention was to put a few things "on the discussion
map"; I ought to re-write it but I don't have time.  Please take it
for what it is; I'm happy to expand verbally on any of the points I'm
making here on the call.

I agree about hierarchy in the model (a traditional containment
hierarchy is not a good idea), but the argument is, I think, more
involved.

I don't completely agree that key-value pairs satisfy everything we
need, though it depends on what other structures we model alongside
them.  A case in point is that in the work we have done in IMS and in
24751 we have often needed to express preferences that contain a data
dependency.  For example, taking modalities (not the only use case by
any means), we might express the preference "for textual content I
require auditory".  (The auditory content might be delivered by an
accompanying alternative or generated on the fly; this does not
matter, though it impacts the device-properties/services structure.)
This can easily be represented in a key-value architecture, much as
one might do with object attributes in an implementation, but doing so
directly requires introducing an extra attribute, and the notion of
the relationship between a modality and what a user requires for it is
easily confused, and even lost, if not made explicit.  In IMS Access
For All 3.0 (which is by the way now a little different from what is
published) we chose an RDF-like description precisely *because* we
could easily represent relationships like this as triples *inside the
model*; but we describe implementations with key-value pairs,
introducing extra attributes to model the relationships where needed.
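
A quick Python sketch of the two representations (all names are
invented for illustration): the triples carry the dependency
explicitly, while the flat form has to bury it in an extra attribute:

```python
# "For textual content I require auditory", two ways.
# All identifiers are invented for illustration.

# 1. RDF-like triples: the dependency is explicit inside the model.
triples = [
    ("user:andy", "afa:hasPreference", "_:p1"),
    ("_:p1", "afa:forModality", "textual"),
    ("_:p1", "afa:requiresModality", "auditory"),
]

# 2. Flat key-value pair: the dependency is pushed into the key
#    (an "extra attribute"), where it is easier to confuse or lose.
flat = {"afa:adaptationRequest.textual": "auditory"}

def required_for(modality):
    """Read the requirement back out of the triple store."""
    for subj, rel, value in triples:
        if rel == "afa:requiresModality":
            # Find the modality this preference node applies to.
            scope = [v for s, r, v in triples
                     if s == subj and r == "afa:forModality"]
            if modality in scope:
                return value
    return None

print(required_for("textual"))  # auditory
```

Both encode the same fact, but only the triples name the relationship
as a first-class thing in the model.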

These kinds of relationships occur in many ways, for example between
context and preference.  It isn't strictly necessary to model them
directly in the preferences as we have done; they could be external to
a key-value pair system.  But it *is* necessary to model them
somewhere.  So in the modality case I describe above we might say the
user has a preference for auditory, plus a contextual constraint that
applies it to textual modalities (the two taken together effectively
representing the preference "for textual I need auditory").

We need to ask the question "where and how is a dependency like this
represented?"; there are many possible answers.

The traditional metadata approaches introduce constraints that may or
may not be liveable with.  Hierarchical container-based structure
(such as LOM) has definite problems with clean, adaptable
representation, and we should imho abandon it.  Dublin Core approaches
may also have their problems; I would argue that DC is too dumbed down
to represent everything we might need to represent: not in the
metadata it's populated with, but in its structure (as I see it,
that's a deliberate and quite justified design decision of DC).  Let
me give an example.

In IMS AfA 3.0 we have started defining a vocabulary of modality terms
(similar arguments apply to other vocabs, such as adaptation types and
media types, but we didn't get far enough to do much with them yet).
These terms do not stand independently; much as we might like to, we
cannot impose a structure of terms on the world, and the world
wouldn't like it.  What we *have* made a start on is identifying some
terms and the relationships between them.  We have constrained
ourselves to a relationship between terms that is very simple and
something like (but not the same as) the DC refinement relationship.
An example is the terms "visual" and "color" when applied to
modalities.  If a resource has the "color" modality you can deduce
from this formalised relationship that it also has the "visual"
modality; with real resources that knowledge can be used to deliver a
resource that is an adaptation to a visual modality to a user
requiring an adaptation to color, when no adaptation to color is
available.
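
The deduction can be sketched in a few lines of Python (only the two
terms are from the example; the refinement structure and function
names are invented for illustration):

```python
# "color" refines "visual": anything in the color modality is also
# visual.  The mapping below is invented for illustration.
broader = {
    "color": "visual",
}

def modalities_of(term):
    """All modalities implied by a term, following refinement upwards."""
    result = [term]
    while term in broader:
        term = broader[term]
        result.append(term)
    return result

# A resource tagged "color" can be deduced to serve a "visual"
# requirement, so a visual adaptation can stand in when no color
# adaptation exists.
print(modalities_of("color"))   # ['color', 'visual']
```

With the relationship formalised, the traversal is mechanical; without
it, the connection between the two terms is invisible to software.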

Exploring modality terms and adaptation type terms (such as captions,
transcripts etc.), I think there are very rich relationships between
terms, and between terms and adaptation types.  There are also
relationships between media kinds and these two things.  DC's
"refinement" is not imho rich enough to express them.  With
relationships between terms formalised and made explicit, many things
become possible that were not before: it becomes possible to traverse
the knowledge and supply adaptations that, without that knowledge, we
did not know could meet the requirements.  This is the game we ought
to be in imho: exploiting to the maximum the information we *do* have
available.

I would like to see a Framework that includes ontologies of terms,
adaptation types and possibly media types (though I think this last is
an order of magnitude harder).  We absolutely *cannot* construct a
complete ontology of all these things; Gottfried's arguments about
change and complexity apply, and I also believe the world must do it,
not us.  But what we *can* do is make a start, by providing a
framework in which it can be done and by doing it for a small Core
model/set.

Where do traditional Metadata approaches fit in ?

Well, on the content side some of what we might do will map to some of
them.  DC is in a good place, as is ISO Metadata for Learning
Resources (but even that is limited in the relationships it can
model), and in some worlds each will be useful, but not in all.  The
problem with those approaches imho is that they aren't general enough
for a starting model; each is a particular solution for a particular
domain, and if one of them were going to become *the model of choice
for everything* it would have done so by now.

Components of the abstract structure of what I'd like to see us build :

1. A Preference model along the key-value pair line closely integrated 
with ontologies defining terms and relationships between them and 
adaptation types and relationships between those and terms - populated 
with a very small Core of terms and relationships.

2. An architectural model including context, device and service components

3. A Framework model that sets all of this in context and makes it
work in a way that the wider community can develop
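
A very rough Python sketch of how those three components might hang
together (all class and term names are invented; this is an
illustration, not a proposal):

```python
# All names below are invented for illustration only.

class Ontology:
    """Terms plus refinement relationships between them (part of 1)."""
    def __init__(self, broader):
        self.broader = broader  # term -> broader term

    def implies(self, term, target):
        """Does `term` imply `target` by following refinement upwards?"""
        while term is not None:
            if term == target:
                return True
            term = self.broader.get(term)
        return False

class PreferenceModel:
    """URI-keyed key-value preferences, tied to an ontology (1)."""
    def __init__(self, items, ontology):
        self.items = items
        self.ontology = ontology

class Context:
    """Device and service properties (2)."""
    def __init__(self, properties):
        self.properties = properties

# The Framework (3) would be the rules for combining these, e.g.
# resolving a preference against the context using ontology knowledge.
onto = Ontology({"color": "visual"})
print(onto.implies("color", "visual"))  # True
```

The interesting work is of course in component 3, which this sketch
deliberately leaves as a comment.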

Actually there is a model out there that in my view is *almost* there
for parts of this: ISO MLR.  As I see it, the one drawback it has is
that the kind of relationship between entities it models is restricted
to "refinement" (as in Dublin Core).

> *In general, the user properties stored in a user profile should be
> based on requirements rather than functional descriptions of the user.
> However, functional descriptions may help to efficiently derive
> requirements-based properties for the initialization or fine-tuning of
> user profiles.*
>   * Our framework should be able to accommodate both requirements-based
>     user properties and functional user properties.
>   * There will likely be stricter ethical constraints for functional
>     user properties than for requirements-based user properties. For
>     example, functional user properties could be permitted only in a
>     local environment and temporary context (no storage on a central
>     user profile repository).

I'm *almost* persuaded; without it there are some areas that we just
cannot address at all yet, but might usefully.  We all seem to be
using the word "functional" in different ways.  Gottfried, we often
referred to 24751 as expressing functional requirements (as contrasted
with medical ones).  For me the question is "can we have an abstract
model in the middle that permits requirements to be expressed in terms
that are not personal/medical/my-idea-of-your-requirements and yet
still provides for a changing ICT landscape?"  24751 is not abstract
(and not so adaptable to change as a consequence) and the answer might
be that
for some areas we cannot and that your suggested approach could be 


Andy Heath
