Engage: presence and absence of objects

James William Yoon james.yoon at utoronto.ca
Thu Mar 4 22:20:29 UTC 2010


Hi Dan,

We're definitely thinking about the same sorts of things on Engage, and I'm
excited to hear that you're struggling with similar design problems. I've
just come back from the first part of Engage's in-museum mobile application
pilot test, so I have some fresh thoughts for you.

Let me make sure that I understand your problem properly: you're uncertain
about whether to show the digital visual representation of an object (and
if so, how best to) when the visitor may already be standing in front of
the physical object in the museum. Your concern is that the image: a) takes
up precious screen real estate, b) may detract from the experience of the
physical object, and/or c) is redundant in the presence of the physical
object. Moreover, you want to provide a seamless experience both between
and within platforms, for both the in-museum and out-of-museum cases. Is
this more or less correct?

If so, here are my thoughts:

Firstly, for any experience where the object is absent (an online
exhibition that complements a physical exhibition, for instance),
regardless of the platform, it's critical that there be a digital visual
representation (supplemented with a textual description of the visual, for
users who prefer/require an alternate modality) whenever possible (i.e., if
the representation is available, if the platform supports it, and if there
aren't any technical reasons why it'd be inadvisable). This is pretty easy
to agree upon, I think--for many or most sighted users, the visual
representation of an artifact is the most conceptually accessible surrogate
for the actual object.
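
To make that concrete, here's a rough sketch of the fallback logic in
TypeScript. Everything in it is invented for illustration (the Artifact
record and its imageUrl/textAlternative fields aren't from our actual
codebase)--it's just the shape of the idea:

    // Hypothetical artifact record; field names are made up.
    interface Artifact {
        title: string;
        imageUrl?: string;        // absent if no representation exists
        textAlternative?: string; // for users who prefer/require text
    }

    // Render the best available surrogate: the image (with the text
    // alternative doubling as alt text for screen readers) when we have
    // one, otherwise fall back to the textual description alone.
    function renderSurrogate(artifact: Artifact, container: HTMLElement): void {
        if (artifact.imageUrl) {
            const img = document.createElement("img");
            img.src = artifact.imageUrl;
            img.alt = artifact.textAlternative ?? artifact.title;
            container.appendChild(img);
        } else if (artifact.textAlternative) {
            const p = document.createElement("p");
            p.textContent = artifact.textAlternative;
            container.appendChild(p);
        }
    }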

For the in-museum case where the user is already directly in front of the
physical object, I would argue that it's likewise critical to provide a
digital visual representation up-front (i.e., not on a secondary screen,
even if it's only a swipe away). At least two reasons why:

1. Confirmation. Early results from our pilot tests point overwhelmingly to
the fact that users rely on the image on the device as confirmation that
they're looking at the right object. It's the primary bridge between the
physical object and the virtual object. We suspected this before the pilot,
and our tests confirm it. A small percentage of our users did use the title
of the object or other textual metadata to confirm that they were looking
at the same object both on the device and in the physical space, but this
occurred *only* when it was not obvious that the image was of the object
(i.e., they attempted to make the link first by using the image, and when
that failed, they used text-based metadata as a fallback). This was true
both when users had a direct route to the object (e.g., entering an object
code) and when they had an indirect one (e.g., scanning through a list of
thumbnails/titles and selecting the appropriate one).

2. Pleasure. Users like seeing the image. In many cases, the image on the
device provides a different perspective than the one they're looking at. The
digital visual representation can:
- Provide the object's historical context (and thus support interpretive
activities)
- Provide angles of the object that aren't visible to the user in the
physical space (e.g., underneath the object, inside the object, etc.)
- Provide detailed views of the object that aren't easily visible to the
user in the physical space (this is especially true of smaller objects,
objects that are against a wall or a barrier, and objects that are
separated from the user by glass, casings, rope, etc.)

During the 'think aloud' component of our pilot test, we found that users
either commented positively or had no comment at all on the up-front image.
We also found that there was a relatively uniform distribution of users who
disliked, liked, or were indifferent to the up-front textual content--some
commented that they didn't like reading text or were too lazy to, while
others enjoyed the extra value it offered.

So, while the image may be something of a redundancy, I think it's both a
necessary and desirable one. As for the possibility of detracting from the
in-museum experience: I don't think this is a major concern. Most of our
users appeared to spend a nominal amount of time looking at the image
relative to the physical artifact itself.

As for how best to display the image, and how much space to afford it:
we're not really sure. In our earliest designs, we gave very little space
to the image and focused primarily on the textual content. We gradually
gave higher priority to the image, and at one point we made the artifact
image full-screen up front, with its text-based content one tap away. For
our pilot test, the image and text were about half-and-half (like you
considered), but given the consistently positive experience users had with
images and the somewhat inconsistent experience with text, we're
considering returning to the full-screen image up front.
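
If it helps to see the trade-off spelled out, here's a toy TypeScript
sketch of the three layout variants we've tried. The names and proportions
are mine, purely illustrative:

    // The three layouts we've tried; names and numbers are invented.
    type ArtifactLayout = "textFocused" | "splitView" | "fullScreenImage";

    interface LayoutSpec {
        imagePortion: number; // fraction of the screen given to the image
        textUpFront: boolean; // is the text visible without a tap?
    }

    const layouts: Record<ArtifactLayout, LayoutSpec> = {
        textFocused:     { imagePortion: 0.2, textUpFront: true },
        splitView:       { imagePortion: 0.5, textUpFront: true },  // pilot design
        fullScreenImage: { imagePortion: 1.0, textUpFront: false }  // text one tap away
    };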

Lastly, in terms of providing a seamless out-of-museum and in-museum
experience: this is something we've been doing some thinking around too.
Our current designs actually propose that we detect whether the mobile
device is in the museum and provide different options and a different
interface, just as you considered. There's something attractive about
having one application that could serve the pre-visit, in-museum, and
post-visit experiences. For instance, pre-visit, the device could provide
information about the museum, and in the in-museum case it could provide
ways of entering object codes. We're still doing a lot of thinking around
how best to do this, but I'm not entirely convinced that having a 'Swiss
Army knife' application is the best approach: I think it adds conceptual
complexity, and might be trying to fit too many use cases into a single
application. We'd be asking users to understand why the UI offers a wholly
different experience depending on where they are.
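
To sketch what that detection might look like (purely illustrative--we
haven't settled on geolocation, and the coordinates, radius, and names
below are all invented):

    // Invented museum geofence; in practice this would come from configuration.
    const MUSEUM = { lat: 43.6677, lng: -79.3948, radiusMetres: 150 };

    // Great-circle (haversine) distance between two points, in metres.
    function distanceMetres(lat1: number, lng1: number,
                            lat2: number, lng2: number): number {
        const R = 6371000; // mean Earth radius in metres
        const toRad = (d: number) => d * Math.PI / 180;
        const dLat = toRad(lat2 - lat1);
        const dLng = toRad(lng2 - lng1);
        const a = Math.sin(dLat / 2) ** 2 +
                  Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
                  Math.sin(dLng / 2) ** 2;
        return 2 * R * Math.asin(Math.sqrt(a));
    }

    // Choose which experience to offer based on where the device appears to be.
    function chooseMode(pos: GeolocationPosition): "inMuseum" | "remote" {
        const { latitude, longitude } = pos.coords;
        const d = distanceMetres(latitude, longitude, MUSEUM.lat, MUSEUM.lng);
        return d <= MUSEUM.radiusMetres ? "inMuseum" : "remote";
    }

    // Usage with the browser geolocation API:
    // navigator.geolocation.getCurrentPosition(pos => {
    //     const mode = chooseMode(pos); // "inMuseum" -> object codes, etc.
    // });

Part of why we're hesitant, incidentally, is that GPS tends to be
unreliable indoors, so something like Wi-Fi presence or simply entering an
object code might end up being the more dependable signal.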

Anyways, these are just some thoughts off the top of my head. I'd love to
hear some more of your thoughts, ideas, and questions.

Also, we'll make a list announcement once we've processed and posted the
data from our pilot test--there might be some relevant findings in there for
you.

Hope this helps some.

Cheers,
James