Sonification and data representation design

Shahi, Sepideh sshahi at
Wed Apr 29 12:18:56 EDT 2015

Hi everyone,
Michelle and I had a conversation regarding the Pie Chart Tool development process a few days ago. She recommended breaking the design down into several phases to better facilitate development, so we can start by implementing the main features and, if time and development resources become available, add the extra features to the tool.
I have prepared the attached wireframes, proposing four phases for development. Please note that these are not final and are still exploratory, so any feedback regarding the design, required features, or any other aspect of the tool is much appreciated.

Dropbox link:
Wiki link:


On Apr 16, 2015, at 12:43 PM, Shahi, Sepideh <sshahi at> wrote:

Thanks Steve. It makes much more sense after watching the demo.
Here is the demo:


On Apr 16, 2015, at 10:57 AM, Steve Lee <steve at> wrote:

[Adding Doug in so he sees the feedback]

Thanks Sepideh. I must admit I have not tried it myself, but I watched the demo in action with a blind user.

Steve Lee

On 16 April 2015 at 15:38, Shahi, Sepideh <sshahi at> wrote:
Hi Steve,
This is a very interesting project. I was able to play with the screen reader for the pie chart; however, the sonifier function did not work. It creates a trend line but does not play it...

Just a few points to consider:

  *   When it’s reading the pie chart aloud, it gives too much information at once, including the total, highest, lowest, average, median, and several actions the user can take, which may not be necessary for all users.
  *   Focus is fixed on the chart as a whole and does not move to different sections as the user tabs through parts of the chart. This makes it hard to anticipate what will be read next (see the focus sketch after this list).
  *   The trend line has no visual association with the pie chart. I missed it the first time it was displayed, and the next time I assumed there was something wrong with my display until I realized it was supposed to be a trend line.
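
On the focus point, one low-cost direction might be to make each SVG slice focusable with a concise accessible name, so the screen reader announces one slice at a time as the user tabs. A minimal sketch only, not the tool's actual markup; the selectors and data-* attributes here are assumptions:

document.querySelectorAll("svg.pie path.slice").forEach(function (slice, i) {
    // Make each slice reachable with the Tab key.
    slice.setAttribute("tabindex", "0");
    // Hypothetical data-* attributes carrying each slice's label and value.
    var label = slice.getAttribute("data-label") || "Slice " + (i + 1);
    var value = slice.getAttribute("data-value") || "";
    slice.setAttribute("role", "img");
    slice.setAttribute("aria-label", value ? label + ": " + value : label);
});

That would also let the verbose summary from the first point be read once up front, with per-slice details announced only as each slice takes focus.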


On Apr 10, 2015, at 9:35 AM, Steve Lee <steve at> wrote:

FYI - Doug Schepers had a neat hallway demo at CSUN using pitch to explore SVG charts.
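
For anyone who missed it, the gist of that kind of demo is mapping each data value to a tone's pitch. A rough Web Audio sketch, not Doug's actual code; the frequency range and linear mapping are illustrative choices of my own:

var context = new AudioContext(); // browsers may require a user gesture first

// Play one short tone whose pitch reflects the value's place in [min, max].
function playValue(value, min, max) {
    var frequency = 220 + ((value - min) / (max - min)) * (880 - 220); // A3..A5
    var oscillator = context.createOscillator();
    oscillator.frequency.value = frequency;
    oscillator.connect(context.destination);
    oscillator.start();
    oscillator.stop(context.currentTime + 0.3); // 300 ms tone
}

// Sound out a series one point at a time, e.g. while arrowing along a chart.
[3, 7, 12, 5].forEach(function (v, i) {
    setTimeout(function () { playValue(v, 0, 12); }, i * 400);
});
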
Steve Lee

On 9 April 2015 at 20:48, colinbdclark at <colinbdclark at> wrote:
Hi everyone,

As part of the Floe Project's efforts to create personalized, accessible
user interfaces that can be used across a variety of Open Educational
Resources, we've been working on a design framework and JavaScript toolkit
for authoring multimodal charts, graphs, and other data "visualizations."
One of the central goals of this effort is to make it easier for teachers,
students, and content authors to represent data in "layers" consisting of
different modalities--graphics, text, and audio.
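
To make the "layers" idea concrete, here is one purely illustrative way to picture it (hypothetical structure and names, not the toolkit's actual API): a single data set, with each modality described by its own layer.

var chart = {
    data: [
        { label: "Apples",  value: 40 },
        { label: "Oranges", value: 35 },
        { label: "Pears",   value: 25 }
    ],
    layers: {
        graphics: { type: "pie" },                          // visual layer
        text:     { template: "%label: %value percent" },   // textual layer
        audio:    { mapping: "pitch", range: [220, 880] }   // sonification layer
    }
};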

To start, we've been focusing a lot on sonification, the process of
representing data using sounds. We're in the midst of a very early
brainstorming, sketching and idea generation process. Our work is documented
in the wiki:

In order to start exploring the potential of data sonification in a way that
allows us to experiment with different approaches and to iterate from
mockups to working implementations reasonably quickly, we've constrained our
current design sketches to a tool that will help authors produce multimodal
"pie charts." The goal of this tool is to enable authors to produce layered
representations of fairly simple data, and to give end-users the ability to
explore, remap, and share their own personalized sonifications.
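
As a thumbnail of what "remap and share" could mean in practice (again, hypothetical names rather than a settled design): the data stays fixed, the user overrides only the audio layer's mapping, and the resulting preference can be serialized for sharing.

var defaultAudio = { mapping: "pitch", range: [220, 880] };    // Hz
var userAudio    = { mapping: "duration", range: [0.1, 1.0] }; // seconds per tone

// The personalized mapping replaces the default without touching the data.
var personalized = Object.assign({}, defaultAudio, userAudio);

// Serialized, it could travel with a shared link or a saved preference.
console.log(JSON.stringify(personalized));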

As part of this process, we've been exploring some new methods for designing
sonification strategies and evaluating their effectiveness. We've
started working with a small, informal group of people in a co-design
context, and will also be sharing our in-progress work here on the list.
What we've done so far is to prototype several different types of
sonifications using low-tech tools and then share them with people using a
process of "progressive explanation." We start by having them listen to the
sonification with no additional cues or explanation, asking them to describe
their impressions (including how they imagine the sounds map to some
underlying data set). From there, we progressively explain more about the
intentions behind the sonification (such as describing the sound mapping using
an "audio legend"), and continue to gather impressions and ideas from our
listeners. We've found this to be a very helpful process for exploring how
much textual or explanatory supporting material to provide with a given
sonification approach. Sepideh has posted some great examples and prototypes
in the wiki.

Over the coming months, we'll expand this design effort to encompass more
complex data and more interactive situations such as simulations, games,
and performances.

We'll continue to share ideas, sketches, and works in progress here on the
mailing list, in the #fluid-design IRC channel, and in the wiki.
Constructive feedback and creative ideas are always appreciated during this
early stage in the process, as well as the understanding that we're still
experimenting and exploring the design space. Failures and half-baked ideas
are as useful at this stage in the design process as successes.


fluid-work mailing list - fluid-work at
To unsubscribe, change settings or access archives,
