Open Exhibits - Blog



Universal Design Guidelines for Computer Interactives

At the Museum of Science, Boston, we have been reviewing our software development and design process, and we have compiled our findings from the last decade into a list of guidelines to consider. This list is an updated version of the table found in Universal Design of Computer Interactives for Museum Exhibitions (Reich, 2006). Although these are not strict rules, we hope they provide a foundation on which to build when developing a universally designed computer interactive.

This list is organized by development area, and each guideline is followed by a code indicating which audiences benefit most from that consideration. The key for these codes can be found at the bottom of this post.

Overall exhibition

  • Minimize background noise (D, HH, DYS)
  • Minimize visual noise (DYS, ADD, LV)
  • Stools for seating (LV, YC, OA, LM)
  • Consistency in interaction design throughout exhibition (B, LV, ID, LD, NCU)

Content development

  • Multisensory activities for framing (ALL)
  • Use of the clearest, simplest text that is free of jargon (ER, ID, LD, ESL)
  • Screen text that makes sense when heard and not viewed (B, LV, ID, ER, ESL)
  • A short description of the activity’s goals presented through images, audio, and text (ADD, NCU)
  • Clear, simple directions that provide a literal and precise indication of what to do and the exact order for doing it (B, LV, ID, LD, NCU)
  • Make as many options as possible visible to maximize discoverability (ALL)
  • Minimize number of actions required to accomplish a given task (ADD, ASD, ID, NCU)

Physical design

  • A tactile or gestural interface, such as buttons, for navigating choices and making selections (B, LV, LM)
  • Care should be taken when combining multiple modes of interaction (B, LV, D, CD)
  • Tactile elements that do not require a lot of strength or dexterity (LM, YC)
  • Input mechanisms are within reach for all visitors (ideally limited to a 10” depth at a 33” height) (WC, EXH, YC, LM)
  • Monitors, overlays, and lighting are designed to reduce screen glare (SV, LV, EXH)
  • Usable controls within reach of the edge of the table (LM, WC, LV, YC)
  • 27-29 inches of clearance beneath the kiosk, with a depth of at least 19 inches (WC)

Software development

  • Connect to existing standards or everyday uses of technology (FCU)
  • Minimized use of flickering and quick-moving images or lights (SZ, ASD)
  • User control over pace of feedback (HH, ASD, ID, B, LV)
  • Control over the pace of interaction, including when a computer “times out” (see the sketch after this list) (D, B, LV, LM, DYS, LD)
  • A limited number of choices presented at one time (5-7) (B, LV, ID, ADD)
  • Minimized screen scrolling (LV, ID, NCU)
  • Limit unintentional input by providing tolerance for error (B, LV, LM, NCU)
  • Provide easy methods to recover in the event errors are made (B, NCU, LD)
  • Adjustments of a control should produce noticeable feedback (ALL)
  • Ensure feedback is as close to real-time as possible (B, LV, CD, D, NCU)
  • Ensure dynamic elements indicate current status (e.g., active vs. inactive, selected vs. unselected) (ALL)
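
As one illustration of the pacing guidelines above, here is a minimal sketch of a kiosk controller that lets the visitor, rather than the software, set the pace. The names and defaults are hypothetical; this is not Open Exhibits API code.

```python
import time

DEFAULT_TIMEOUT = 120.0  # seconds of inactivity before idle; illustrative

class PacingController:
    """Tracks activity so the visitor, not the software, sets the pace."""

    def __init__(self, timeout=DEFAULT_TIMEOUT):
        self.timeout = timeout
        self.last_activity = time.monotonic()

    def record_activity(self):
        # Called on every touch or button press; any input resets the
        # clock, so the exhibit never times out while someone is engaged.
        self.last_activity = time.monotonic()

    def extend(self, extra_seconds=60.0):
        # Bound to an "I need more time" control, giving users direct
        # control over when the computer times out.
        self.last_activity = time.monotonic() + extra_seconds

    def timed_out(self):
        return time.monotonic() - self.last_activity > self.timeout
```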

Audio interface

  • Auditory feedback for what is happening on the screen (ALL)
  • Audio descriptions for videos, images, and other visual-based information (B, LV, ID)
  • Screen text that is read aloud (B, LV, ID, LD, ESL, ER)
  • Open captions for videos and non-text based audio (D, HH, OA)
  • User control over volume (HH, ASD)

Visual interface

  • Clearly labeled audio/video components that are also presented visually, through open captions or images (D, HH, OA)
  • Text with a large font, clear typeface, capital and lower case letters and ample space between lettering and text lines (test on final screen or device to ensure legibility) (LV, OA, DYS, EXH)
  • High contrast (at least 70%) images and text (a worked example follows this list) (LV, OA, CB)
  • Alternatives to color-coded cues (LV, OA, CB)
  • A non-text visual indication of what to do and the activity’s content (ER, LD, DYS, ESL)
  • A clear, consistent and repetitive layout for presenting information (B, LV, LD, NCU)
  • Clear mapping between the buttons and screen images (SV)
  • Screen design should be intuitive and should not draw attention away from the learning goals (ALL)
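
The 70% contrast guideline above is commonly computed from light reflectance values (LRV) using the (B1 − B2) / B1 × 100 formula cited in ADA signage guidance, where B1 is the LRV of the lighter area. A small worked example:

```python
def contrast_percent(lrv_light, lrv_dark):
    """Percent contrast between two surfaces, using the
    (B1 - B2) / B1 * 100 formula from ADA signage guidance,
    where B1 is the light reflectance value (LRV) of the
    lighter area and B2 that of the darker area."""
    b1, b2 = max(lrv_light, lrv_dark), min(lrv_light, lrv_dark)
    return (b1 - b2) / b1 * 100

# White text (LRV ~80) on a dark blue field (LRV ~10):
# (80 - 10) / 80 * 100 = 87.5% -- comfortably above the 70% guideline.
assert contrast_percent(80, 10) == 87.5
```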


Key for audience members who benefit:

ADD – visitors who have Attention Deficit Disorder

ALL – all visitors

ASD – visitors affected by Autism Spectrum Disorder

B – visitors who are blind

CB – visitors who are color blind

D – visitors who are d/Deaf

DYS – visitors with dyslexia

ER – visitors who are early readers or are learning to read

ESL – visitors whose first language is not English (including American Sign Language users)

EXH – visitors at extreme heights (low and high)

FCU – visitors who are frequent computer users

HH – visitors who are hard of hearing

ID – visitors with intellectual disabilities

LD – visitors with learning disabilities

LM – visitors with limited mobility

LV – visitors with low vision

NCU – visitors who are new or infrequent computer users

OA – visitors who are older adults

SV – visitors who are sighted

SZ – visitors who are subject to seizures

WC – visitors who use wheelchairs

YC – visitors who are young children

by Emily O'Hara on Jun 16, 2015

New Accessibility Feature Enhances Open Exhibits Experience

Video tour of the enhanced Solar System Exhibit

Ideum (the lead organization of Open Exhibits) has made significant progress in multitouch accessibility while developing three prototypes for the Creating Museum Media for Everyone (CMME) National Science Foundation-funded project. The third prototype, a new version of our Open Exhibits Solar System Exhibit, incorporates improvements based on usability test results and suggestions from the Museum of Science, Boston, the National Center for Interactive Learning, WGBH, and advisor Sina Bahram. The major new feature in the current version is an accessibility layer designed for visually impaired users on large touch screen devices. This new CMME software will be released February 6, 2015.

Opening screen of the Enhanced Solar System Exhibit with accessibility layer

Accessibility Layer

The main component of the accessibility layer is the information menu browser. To activate the menu browser, a user holds down three fingers for two seconds. (The activation gesture can be swapped for almost any of the hundreds of gestures supported by the Open Exhibits framework.) During this hold, the user receives audio feedback letting them know the accessibility layer is activating. Once the menu is active, the user can swipe left or right to move between choices on the menu, in this case the different planets in the solar system. The text that normally appears on the screen when an item is chosen from the visual menu is automatically narrated aloud. Using a simple set of gestures, the user can control the menu and the content to be read.
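
The sketch below (Python, with hypothetical names) only illustrates the interaction logic just described, hold to activate, swipe to browse, automatic narration; it is not the exhibit's actual code.

```python
HOLD_FINGERS = 3
HOLD_SECONDS = 2.0

class AccessibilityMenu:
    """Hold-to-activate, swipe-to-browse menu, mirroring the behavior
    described above. `play_audio` stands in for whatever audio backend
    the exhibit uses; `items` are (title, narration) pairs, e.g. planets."""

    def __init__(self, items, play_audio):
        self.items = items
        self.index = 0
        self.active = False
        self.play_audio = play_audio

    def on_hold_start(self, finger_count):
        # During the hold, audio feedback tells the user the layer
        # is activating.
        if finger_count == HOLD_FINGERS and not self.active:
            self.play_audio("Activating accessibility layer")

    def on_hold_complete(self, finger_count, held_seconds):
        if finger_count == HOLD_FINGERS and held_seconds >= HOLD_SECONDS:
            self.active = True
            self.announce()

    def on_swipe(self, direction):
        # Swiping left or right moves between the menu choices.
        if not self.active:
            return
        step = 1 if direction == "right" else -1
        self.index = (self.index + step) % len(self.items)
        self.announce()

    def announce(self):
        # The text that would appear on screen is narrated aloud.
        title, narration = self.items[self.index]
        self.play_audio(f"{title}. {narration}")
```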

User enables the accessibility layer with a three-finger gesture

Future Steps

In the current version, the accessibility layer is intended for one user, and that user controls what content is active for the entire screen. We are currently working on a multi-user version that will incorporate multiple “spheres of influence,” allowing each user to control a limited region of the table from where they stand. Using these “spheres of influence,” multiple visually impaired and/or sighted users can interact with the exhibit simultaneously. The multi-user version’s audio will be multidirectional; that is, it can be split so that users on different sides of the table listen to different parts of the content at the same time. Our next step is to develop visual elements that play along with the audio narration for visitors who have limited sight, are hard of hearing, or are learning English.
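
As a rough sketch of the “spheres of influence” idea (hypothetical, since this version is still in development), touches could be routed to whichever user's region they fall inside:

```python
import math

class Sphere:
    """One user's 'sphere of influence': a circular region of the
    table whose touches and audio channel belong to that user.
    (Hypothetical sketch of the multi-user idea described above.)"""

    def __init__(self, cx, cy, radius, audio_channel):
        self.cx, self.cy = cx, cy
        self.radius = radius
        self.audio_channel = audio_channel  # e.g., a directional speaker

    def contains(self, x, y):
        return math.hypot(x - self.cx, y - self.cy) <= self.radius

def route_touch(spheres, x, y):
    """Deliver a touch to the sphere it falls inside, so several
    users can work independently on one large screen."""
    for sphere in spheres:
        if sphere.contains(x, y):
            return sphere
    return None  # touch outside every sphere: handle globally
```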

by Stacy Hasselbacher on Jan 6, 2015

CMME Exhibit Resource Overview

We have finished posting about the Museum of Science's portion of the Creating Museum Media for Everyone (CMME) project. In case you missed any of the posts, you can find direct links to each of them below.

Background: These posts include resources and thinking that jumpstarted our exhibit development process.

Final Exhibit Component: These posts detail the final exhibit, which is a part of the Catching the Wind exhibition, at the Museum of Science.

Exhibit Development Toolkit: These posts include specifications for the software programming, design, and text we used in the final exhibit. Feel free to repurpose any of the resources in these posts for your own exhibit development.

Paths Not Taken: These posts dive deeper into multi-sensory techniques we tried that did not work for our exhibit, but may be useful in other applications.

by Emily O'Hara on Dec 31, 2014

CMME: Audio Toolkit

Audio is a major feature of the final exhibit for the Museum of Science’s portion of the Creating Museum Media for Everyone (CMME) project. The audio components help guide visitors through their interaction with the exhibit. We found that many of the audio components were important for almost all visitors, not only those with low or no vision. Audio is also useful for visitors who are dyslexic or who have other cognitive disabilities that affect reading. This post outlines the final audio components, including text and audio files, that we included in the exhibit. The findings that led to most of our audio decisions are outlined in a previous post summarizing the formative evaluation of the CMME exhibit.

In this exhibit we used audio in three distinct ways:

  • Audio phone text
  • Broadcast text audio
  • Broadcast sonified audio

Audio phone text

Audio phone text accompanies almost all of the exhibits at the Museum of Science. This audio gives an overview of the exhibit component, including the physical layout, label copy text, image descriptions, and potential interactions visitors may have at an exhibit. This audio is typically accessed through an audio phone handset and visitors can advance through the audio files by pressing the buttons mounted on the exhibit near the handset holder.


Exhibit Component Drawing

This drawing of the CMME exhibit shows the audio phone handset on the front left edge of the exhibit component. There are two buttons mounted on the slanted surface above the handset that trigger the audio files to play when they are pressed.

The audio phone used for this exhibit has two buttons. The square button plays a single audio file containing a physical description of the exhibit so that visitors can orient themselves. The round button cycles through five audio files that present the label text, image descriptions, and a brief introduction to possible visitor interactions at the exhibit. A file with the full audio phone text can be viewed and downloaded by clicking here. You can also listen to a sample audio file from the audio phone by clicking here (this matches the "Square button" section in the full audio phone text document).
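
A minimal sketch of this two-button behavior, with placeholder file names (the real exhibit's assets and playback code will differ):

```python
class AudioPhone:
    """Two-button audio phone, as described above: the square button
    plays a physical-orientation description; repeated presses of the
    round button step through the five content files. File names are
    placeholders."""

    def __init__(self, play_file):
        self.play_file = play_file
        self.orientation_file = "square_orientation.mp3"
        self.content_files = [f"round_{n}.mp3" for n in range(1, 6)]
        self.position = 0

    def press_square(self):
        self.play_file(self.orientation_file)

    def press_round(self):
        # Each press advances to the next of the five content files,
        # wrapping back to the first after the last.
        self.play_file(self.content_files[self.position])
        self.position = (self.position + 1) % len(self.content_files)
```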


Broadcast text audio

Broadcast text audio provides live feedback in response to a visitor’s action, such as touching the touch screen or pushing a button. This feedback often gives details about the visitor's selection and provides additional information about how they might interact with the exhibit. A file with the full broadcast audio text can be viewed and downloaded by clicking here. You can listen to sample audio files from the broadcast audio by clicking on the following links for the button instructions, the introduction to the graph, and a graph title (these match the text in the corresponding sections of the full broadcast audio text document). The dynamic nature of the audio feedback meant some of the phrases and instructions were recorded as separate files and then pieced together in real time by the software. For example, if a visitor holds their finger on one point on the graph, they will hear seven audio files strung together to describe the data in that area: “Turbine produced - 756 - watts in winds of - 25 - miles per hour - 4 - data points.” We chose not to use any computer-generated vocalizations; all of the audio was recorded with the same human voice.
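
A sketch of how such clip sequencing might look; the clip file names are invented for illustration, but the seven-part sequence matches the example above:

```python
def clips_for_point(watts, wind_mph, point_count):
    """Return the ordered list of pre-recorded clips that, played
    back to back, describe one spot on the graph. Clip names are
    illustrative; every number heard must exist as a recording."""
    return [
        "turbine_produced.wav",
        f"number_{watts}.wav",
        "watts_in_winds_of.wav",
        f"number_{wind_mph}.wav",
        "miles_per_hour.wav",
        f"number_{point_count}.wav",
        "data_points.wav",
    ]

# "Turbine produced - 756 - watts in winds of - 25 - miles per hour
#  - 4 - data points" becomes seven clips queued in sequence:
queue = clips_for_point(756, 25, 4)
```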

Some exhibits at the Museum of Science treat broadcast audio as an “opt in” feature: visitors can turn the audio on by pressing a button. For this exhibit, we found the introduction to the graph was so important to visitor understanding that we decided to leave the broadcast audio on all of the time. This improves understanding for many visitors, but it may also limit interactions by visitors who do not want to listen to the audio or who become overwhelmed by too much auditory stimulation. This concern led us to edit down the amount of information we readily broadcast. Additional broadcast audio instructions can be accessed through a “More Audio” button located near the audio phone handset.

CMME audio phone and more info button

Picture of the front left corner of the CMME exhibit. The audio phone handset and corresponding control buttons are on the far left. The “More Audio” button is a few inches to the right and the cutout holes in the surface, where the speaker is mounted into the tabletop for the broadcast audio, are visible next to the buttons.

Although our feedback was dynamic, we were unable to expand it to encompass audio hints, which would have added dynamic direction about the next available options whenever a visitor was idle. For example, if a visitor explored touching the screen in the area of the graph, then after a brief period of inactivity the exhibit might prompt them: “Try holding your finger in one place on the graph for a more detailed description of data at that point.” This approach divides instructions into more digestible pieces that are given when a visitor is ready for them, but it also involves an additional layer of instruction writing and programming that was beyond the scope of our project.
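
For anyone who wants to pursue this path, a hypothetical sketch of the idle-hint logic might look like this:

```python
import time

HINT_DELAY = 10.0  # seconds of inactivity before a hint; illustrative

class HintScheduler:
    """Plays a context-sensitive hint after a brief idle period,
    as in the 'path not taken' described above. Hypothetical sketch."""

    def __init__(self, play_audio):
        self.play_audio = play_audio
        self.last_touch = time.monotonic()
        self.hint_played = False
        self.region = None

    def on_touch(self, region):
        # Remember what the visitor last explored and reset the timer.
        self.last_touch = time.monotonic()
        self.hint_played = False
        self.region = region

    def tick(self):
        # Called regularly by the exhibit's main loop.
        idle = time.monotonic() - self.last_touch
        if idle >= HINT_DELAY and not self.hint_played:
            if self.region == "graph":
                self.play_audio("Try holding your finger in one place "
                                "on the graph for a more detailed "
                                "description of data at that point.")
            self.hint_played = True
```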

Broadcast sonified audio

In addition to the broadcast text audio, this exhibit also includes sonified audio: tones that represent data values on the graphs. Like the broadcast audio feedback, the sonified audio is dynamic and changes based on the data currently shown in the graph. The exhibit sonifies each graph's trend line and plays tones for individual data points as a visitor moves their finger across the touch screen. Below are two videos showing the sonified data. We used static to represent areas of the graph in which no data are present.

This video shows what happens when a graph is first selected. As the trend line slider moves across the screen, audio feedback plays the values, with higher pitches representing higher values in the data. This graph goes from low to high and then plays static for the second half of the graph, where no data are present.

This video shows a person moving their finger around within the graph area on the touch screen. Each tone that is played represents one data point and the pitch corresponds to its value. Static is played when the user moves her finger into an area of the graph where no data points are present.
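
A sketch of the value-to-pitch mapping this kind of sonification relies on (the pitch range and graph accessors are assumptions, not the exhibit's actual parameters):

```python
LOW_HZ, HIGH_HZ = 220.0, 880.0  # illustrative two-octave pitch range

def tone_for_value(value, v_min, v_max):
    """Map a data value onto a pitch; higher values yield higher
    pitches, matching the behavior shown in the videos above."""
    t = (value - v_min) / (v_max - v_min)
    return LOW_HZ + t * (HIGH_HZ - LOW_HZ)

def sound_at(x, graph):
    """Return what to play for horizontal position x: a tone for the
    trend-line value there, or static where the graph has no data.
    `graph` is a hypothetical object exposing trend_value_at()."""
    value = graph.trend_value_at(x)
    if value is None:  # no data in this region of the graph
        return ("static", None)
    return ("tone", tone_for_value(value, graph.v_min, graph.v_max))
```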

Our decision to include dynamic audio feedback allows a wider range of visitors to interact with the graphs in this exhibit and understand the wind turbine data being presented, but we had to be very judicious in our decisions about where to use audio. There were a few areas in which we had to remove audio feedback because it was causing confusion.

Originally, the buttons read aloud the name of the option they represented when they were touched, before they were even pushed. This led to visitors accidentally triggering the audio while they were interacting with another part of the exhibit and led to confusion about which feedback corresponded with their actions. Additionally, the names of the turbines were often confusing in themselves, so having them repeated was not helpful. We added “wind turbine” to each of the brand names to reinforce the exhibit topic.

At first, we also played the broadcast audio introduction after each graph button was pushed. Some visitors felt this was repetitive, many did not listen, and some felt it was too complex to understand. Additionally, some visitors didn't realize the same audio was being repeated and felt they should listen to it even if they already understood what to do from using the prior graph. This led us to play the introduction audio and animation only the first time a visitor chooses a graph, with visitor interaction locked out during this period to reinforce the instructions. For each subsequent graph choice, visitors move straight to interacting with the graph. If a visitor does want the introduction content, a more detailed explanation is available through the “More Audio” button. Once a visitor stops interacting with the exhibit, it times out and returns to the idle screen; any further interaction triggers the introduction again.
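
A sketch of this play-the-intro-once logic (hypothetical names; the callback-style `play_intro` is an assumption):

```python
class GraphSession:
    """Plays the introduction only for the first graph chosen after
    the idle screen, locking out input while it runs; a session
    time-out resets the flag. Hypothetical sketch of the logic."""

    def __init__(self, play_intro):
        self.play_intro = play_intro
        self.intro_done = False
        self.locked = False

    def choose_graph(self, graph):
        if not self.intro_done:
            self.locked = True  # ignore touches during the intro
            self.play_intro(on_done=self.unlock)
            self.intro_done = True
        self.show(graph)

    def unlock(self):
        self.locked = False

    def on_timeout(self):
        # Back to the idle screen: the next visitor hears the intro.
        self.intro_done = False
        self.locked = False

    def show(self, graph):
        ...  # render the graph, read its title, play the sonified trend
```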


We would like to note that visitors who are deaf can often feel the vibration of the audio and know that auditory information is being shared. If the interactive confuses them, they may assume they are missing critical information. All audio directions in this exhibit are therefore reinforced with visual text and images, so they are accessible to visitors who are deaf.


by Emily O'Hara on Dec 31, 2014

CMME: Graph Paths Not Taken

Written by Emily O’Hara and Stephanie Iacovelli

For the Museum of Science’s portion of the Creating Museum Media for Everyone project, we wanted to create an accessible interactive that featured graphed data. The final exhibit component contains five scatter plot graphs, each with a calculated trend line. In addition to the graph options available in the final exhibit, the development team wanted to find a way for visitors to compare the data between graphs. We explored layering the graphs and comparing data through the use of a bar graph. Although neither of these solutions worked for our exhibit, we wanted to share the paths we tried, as they may be more applicable for others.

Layered graphs

For this exhibit, we were revising an existing element in the Museum’s Catching the Wind exhibition. The original graph interactive enabled visitors to view the power production graphs of the same five turbines we used in the final version, but it also allowed visitors to compare the turbines by layering two, three, four, or five of the data sets on one graph.

Original Catching the Wind graph screen

Picture of the original exhibit computer screen, showing how data for two of the turbines could be layered on the same graph. Scatter plot points for one of the turbines are shown in red and those for the other are shown in green.

When we began adding sonified audio tones to the data, in order to make the graphs accessible to visitors with low or no vision, we wanted to maintain the option of comparing graphs. We first attempted to simply layer two tones on top of one another and play them simultaneously. This was not successful: users did not always realize there were two different sounds present. Next, we tested playing one graph’s sonified trend line, then the second's, then both of them together. When we tested this with visitors who are blind, this method helped with comprehension and comparison, but ultimately we did not find that the benefits of the layered audio outweighed the risk of confusion.
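
In code, the sequencing we tested looks roughly like this (a sketch; `play` is a hypothetical function that renders one or more sonified trend lines at once):

```python
def compare_sequence(play, trend_a, trend_b):
    """The comparison pattern described above: play the first graph's
    sonified trend line, then the second's, then both layered.
    `play` takes a list of trends to render simultaneously."""
    play([trend_a])           # first turbine alone
    play([trend_b])           # second turbine alone
    play([trend_a, trend_b])  # both together for direct comparison
```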

Bar graph

We also developed a bar graph for display in this exhibit so that visitors could more easily compare the power production of each wind turbine on the same graph.

Bar Graph Prototype

Picture of the bar graph prototype. For this version of the exhibit, visitors could still view each of the five wind turbine scatter plot graphs; the bar graph was added as a sixth option. This version also had an area of the touch screen containing dynamic instructions that changed to correspond with the type of graph on display.

While this type of graph allowed visitors to compare between wind turbines, we found it also required another layer of graph orientation. In addition to the detailed introduction visitors needed for the scatter plot graphs, they needed another set of instructions for the bar graph. Switching between scatter plot and bar graphs also created additional confusion for users who are blind: we were using tactile grid lines to help orient visitors on the scatter plot graphs, and on the bar graph the vertical grid lines no longer had meaning but were still present. Overall, we decided the value of the bar graph was not worth the confusion it caused in our particular exhibit.


In our final exhibit, each graph shows only the data from a single wind turbine’s power production. To help visitors compare graphs with one another, when a graph is first selected its title is read aloud and then the sonified audio trend line for that data is played. This enables a visitor to press the buttons for the graphs they want to compare and view or listen to them in quick succession. The axis values are also kept the same between graphs to allow for this comparison.

Final exhibit scatter plot screen

Picture of the final exhibit screen displaying the Skystream wind turbine’s power production graph. Scatter plot dots represent each data point that was collected and the bright orange line drawn through the middle of the data points represents the trend for these data. A horizontal bar runs below the length of the graph area and contains a circle which can be moved back and forth to play back the audio sonification of the trend line.

How would you design an accessible exhibit to compare data sets? What other types of graphs would be useful for this comparison?

by Emily O'Hara on Dec 31, 2014