Open Exhibits - Blog

CMME

CMME: Tactile Paths not Taken

Written by Malorie Landgreen and Ben Jones 

The Creating Museum Media for Everyone (CMME) team at the Museum of Science, Boston (MOS) explored many avenues in pursuit of our goal: making an accessible digital interactive as useful as possible.

This post reviews the tactile techniques we explored that did not end up in the final exhibit: you will see examples of the work our team developed while brainstorming this interactive component, and the reasons these attempts were not chosen for the final proof-of-concept.

If you have not yet read our prior blog posts, CMME Final Exhibit Component shows the final proof-of-concept component the team installed, and the Formative Evaluation Summary reviews how we got there.

Now, let’s dive into what didn’t work.

Capacitive Sensing Buttons

What is this?

3D-printed stainless steel buttons acting as touch sensors (printed at Shapeways)

Capacitive sensing buttons Picture of an array of six capacitive sensing buttons in an early prototype.

The far left button was 3D-printed stainless steel and the other five buttons were 3D-printed in plastic and wrapped with aluminum foil to make them conductive.

Why we tried it

The team wanted a physical representation of the turbines, and combining 3D prints of the turbines with a button press identified each button directly: touching the metal button triggered an audio label that read the turbine's name, and physically pressing the button selected the graph our visitors wanted to explore.

How it was made

The buttons were printed in stainless steel because it was the most affordable conductive material offered; bronze and brass were also available. The 3D buttons were designed to fit into off-the-shelf arcade buttons, which were altered by removing the small light bulb inside and replacing it with a piece of metal in the socket.

A spring made the electrical connection between the moving portion of the button and the light bulb socket, and a wire ran from the socket to an Arduino that did the capacitive sensing. An Arduino Leonardo sent keystrokes to the computer: one keystroke triggered the audio when a capacitive sensor detected a touch, and another registered when the button was physically pressed.

When someone touched a button, there was a spike in capacitance. Initially we simply set a threshold on the capacitance values to detect a touch, but a better method kept a moving average of all of the capacitive sensors: if a sensor's value rose above that average by a certain amount, it was registered as a touch event.
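To make the approach concrete, below is a minimal Arduino-style sketch of the idea, assuming the commonly used CapacitiveSensor and Keyboard libraries on a Leonardo. The pin numbers, sample counts, touch margin, and keystroke characters are illustrative stand-ins, not the exhibit's actual values.

```cpp
// Minimal sketch of the approach described above -- not the exhibit's actual code.
#include <CapacitiveSensor.h>
#include <Keyboard.h>

const int NUM_BUTTONS = 5;

// One capacitive sensor per button: a shared send pin (2) with one receive pin each.
CapacitiveSensor sensors[NUM_BUTTONS] = {
  CapacitiveSensor(2, 3), CapacitiveSensor(2, 4), CapacitiveSensor(2, 5),
  CapacitiveSensor(2, 6), CapacitiveSensor(2, 7)
};
const int  pressPins[NUM_BUTTONS] = {8, 9, 10, 11, 12};          // arcade button switches
const char touchKeys[NUM_BUTTONS] = {'a', 'b', 'c', 'd', 'e'};   // keystrokes that trigger audio labels
const char pressKeys[NUM_BUTTONS] = {'1', '2', '3', '4', '5'};   // keystrokes that select a graph

long runningAverage = 0;              // moving average across all capacitive sensors
const long TOUCH_MARGIN = 200;        // how far above the average counts as a touch
bool wasTouched[NUM_BUTTONS] = {false};
bool wasPressed[NUM_BUTTONS] = {false};

void setup() {
  Keyboard.begin();
  for (int i = 0; i < NUM_BUTTONS; i++) {
    pinMode(pressPins[i], INPUT_PULLUP);
    runningAverage += sensors[i].capacitiveSensor(30) / NUM_BUTTONS;  // seed the average
  }
}

void loop() {
  long readings[NUM_BUTTONS];
  long sum = 0;
  for (int i = 0; i < NUM_BUTTONS; i++) {
    readings[i] = sensors[i].capacitiveSensor(30);
    sum += readings[i];
  }
  // Moving average of all sensors: a touched sensor spikes above it, while ambient
  // drift moves every sensor (and therefore the average) together.
  runningAverage = (runningAverage * 9 + sum / NUM_BUTTONS) / 10;

  for (int i = 0; i < NUM_BUTTONS; i++) {
    bool touched = (readings[i] - runningAverage > TOUCH_MARGIN);
    if (touched && !wasTouched[i]) Keyboard.write(touchKeys[i]);   // play this turbine's audio label
    wasTouched[i] = touched;

    bool pressed = (digitalRead(pressPins[i]) == LOW);             // physical press selects the graph
    if (pressed && !wasPressed[i]) Keyboard.write(pressKeys[i]);
    wasPressed[i] = pressed;
  }
  delay(50);
}
```

Comparing each sensor against a shared moving average keeps slow ambient drift from being mistaken for touches, which is the advantage over a fixed threshold.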

Altered button Picture of the deconstructed and altered button with a small piece of metal in the light bulb socket

 

Why didn’t it work?

We moved away from capacitive sensing buttons because they caused confusion about how to interact with the component. When visitors touched a button, they triggered the audio label readout and expected something more to happen; they did not realize they needed to press the button down to select the turbine graph.

Eliminating the capacitive functionality alone would not have fixed the design: the 2-inch round buttons were also too small to carry to-scale models of the turbines, preventing an accurate understanding of the size differences between each turbine.

What was kept 

In the final proof-of-concept exhibit component, we kept the off-the-shelf arcade buttons, used as regular push buttons, and installed 3D-printed models of the turbines below each button. We added a grooved edge between each button and its 3D print for easy navigation, so visitors can easily associate each button with its 3D-printed turbine.

Button array with tactile turbines Picture of the final array of five buttons, each with a to-scale, high contrast, 3D-printed turbine below.

 

Touch Screen Grid Overlay

What is this?

A clear acrylic die-cut grid, lined up perfectly with the graph grid lines on the touch screen behind.

Touch screen grid overlay Picture of an early prototype of the exhibit component with an arrow pointing at the touch screen where the clear acrylic grid was attached.

 

Why we tried it

The team discussed the need for a grid overlay that would allow visitors who are blind or have low vision to identify the lines of the graph and get a better sense of where each data point is located. We wanted a clear acrylic overlay so that it would not detract from the visual elements of the graph.

How it was made

A grid graphic was created at the same scale as the touch screen graph and cut out in-house using a vinyl cutter. The cut overlay was attached to a full sheet of clear acrylic with a clear-drying adhesive, and the layered acrylic sheets were tested to make sure the underlying touch screen could still sense touch. Because this solution was not implemented in the final exhibit component, it was never produced in a more durable way.

Why didn’t it work?

This solution did not work because it ended up being more of a distraction to our visitors than an aid to understanding the graph. At the time of this testing, the team was designing the interaction to also allow a bar graph comparing all of the turbines, and the static overlay caused confusion when the scatterplot changed to a bar graph. With further testing, the team discovered a simpler solution that didn't cause as much permanent visual or tactile clutter: audio was added to articulate where on the graph a visitor was touching.

What was kept

A simplified version of the clear acrylic overlay was kept for the final component: only the graph axes are raised, with notches along each axis where the grid lines are located. Audio supplements this interaction by articulating the axis titles and grid line increments when they are touched. When a visitor holds their finger in one place on the graph, audio also reads out their location and details about the nearby data.
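To illustrate one way a notched axis edge can drive the audio, here is a small C++ sketch that converts a finger position along the x-axis edge into the nearest gridline value and phrases it for readout. The pixel range, the 0-40 MPH axis span, the 5 MPH gridline step, and the wording are assumptions for illustration, not the exhibit's code.

```cpp
// Illustrative sketch only: snap an axis-edge touch to the nearest gridline and build the readout.
#include <cmath>
#include <iostream>
#include <string>

// Map a touch x position (in pixels) on the axis edge to a wind speed value.
double pixelToWindSpeed(double xPixel, double axisLeftPx, double axisRightPx,
                        double minMph, double maxMph) {
    double t = (xPixel - axisLeftPx) / (axisRightPx - axisLeftPx);
    return minMph + t * (maxMph - minMph);
}

// Snap to the nearest gridline increment and phrase it the way an audio readout might.
std::string axisReadout(double xPixel) {
    const double gridStepMph = 5.0;                               // assumed gridline increment
    double mph = pixelToWindSpeed(xPixel, 100.0, 900.0, 0.0, 40.0);
    double snapped = std::round(mph / gridStepMph) * gridStepMph;
    return "Wind speed: " + std::to_string(static_cast<int>(snapped)) + " miles per hour";
}

int main() {
    std::cout << axisReadout(512.0) << "\n";   // prints "Wind speed: 20 miles per hour"
}
```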

Final tactile edged overlay Picture of the graph screen in the final exhibit component with an arrow pointing at the top right corner of the graph where the edge of the clear acrylic overlay is faintly visible.

 

Sonification Strip

What is this?

A separate horizontal bar below the graph that allowed visitors to run their finger across it to hear the trend line of each graph.

Prototype sonification strip Picture of an early prototype in which the sonification strip and the main graph area were demarcated by two separate cutouts in the graphic that covered the touch screen. An arrow is pointing to the sonification strip.

 

Why we tried it

We wanted our visitors to hear the trend line quickly and easily, as many times as they would like for each turbine. Hearing the trend line play one time after the button press, and then exploring the data points, didn’t allow all of our visitors to fully understand the power production trend of each turbine. The detached strip would keep the difference in functionality separate from the scatterplot graph above, but still allow for visitors to hear the trend as often as they desired.

How it was made

A 1-inch-tall strip was cut out of the graphic that was laid on top of the touch screen. The cutout ran the full length of the graph, about 1/4 inch below its bottom edge.

Why didn’t it work?

The separation between the graph and the sonification strip kept our visitors from either finding it or understanding that it correlated to the trend line on the graph.

What was kept

The team decided to keep the basic concept of the sonification strip, because the ability to hear the trend line multiple times was successful, but integrated it into the tactile acrylic overlay so that the relationship between the strip and the graph would be clearer.
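For readers curious how a sonification strip like this can be driven, here is a small C++ sketch of the mapping: the finger's horizontal position picks a point on the trend line, and that point's power value sets the pitch, so higher power produces a higher pitch. The trend values, frequency range, and names are illustrative assumptions, not the exhibit's code.

```cpp
// Illustrative sketch only: map finger position -> trend-line power -> tone frequency.
#include <iostream>
#include <vector>

// Trend-line power (watts) sampled at even wind-speed steps across the x-axis (made-up values).
const std::vector<double> kTrendWatts = {0, 15, 60, 140, 260, 420, 610, 830, 1080, 1360};

// Map a normalized finger position (0.0 = left edge, 1.0 = right edge) to a tone frequency.
double positionToFrequencyHz(double x01) {
    size_t idx = static_cast<size_t>(x01 * (kTrendWatts.size() - 1));
    double watts = kTrendWatts[idx];
    const double minWatts = 0.0, maxWatts = 1400.0;     // value range of this graph
    const double minHz = 220.0, maxHz = 880.0;          // audible pitch range
    return minHz + (watts - minWatts) / (maxWatts - minWatts) * (maxHz - minHz);
}

int main() {
    // Sweeping a finger left to right produces a rising pitch as power increases.
    for (double x = 0.0; x <= 1.0; x += 0.25)
        std::cout << "x=" << x << "  ->  " << positionToFrequencyHz(x) << " Hz\n";
}
```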

Final sonification strip Picture of the final exhibit component touch screen with the sonification strip highlighted during the introduction to the graph. There is an arrow pointing to the sonification strip and the text on the screen states: Touch to hear the trend line; Higher pitch = more power.

 

 

These three examples of unused tactile concepts led to a stronger final design for our component, but they may be more applicable in different situations. Have you tried other tactile options for visitors? Leave a comment below with other resources that have worked for you.

by Emily O'Hara on Oct 15, 2014

CMME

CMME Exhibit Component: Formative Evaluation Summary

Formative Evaluation Methods

A total of nine iterations of the Creating Museum Media for Everyone (CMME) exhibit prototype were tested throughout the formative evaluation phase, which ran from April 2013 to March 2014. Overall, 134 visitors took part in testing the prototypes: 15 recruited people with disabilities and 119 general Museum visitors (who were not asked whether they identified as having a disability). Because people with disabilities were the target audience for this project, they were recruited to test prototypes throughout the exhibit creation process, and their input was crucial for creating a universally designed component. Even so, all Museum of Science exhibits are tested with general Museum visitors to ensure usability, understanding, and interest. Testing with people who have a variety of abilities and disabilities ensured that features added to enhance accessibility for some visitors did not hinder the experience for others. The table below outlines the types of disabilities represented in the testing sample:

Type of disability (number of participants):

  • Blind or low vision: 9
  • Physical: 2
  • Intellectual: 5
  • Deaf or hard of hearing: 3

Note: Some participants identified as having multiple disabilities, so the totals do not add up to 15.

For each testing session, all visitors were asked to use the component as they normally would if they had walked up to it in the Museum. While they were exploring the interactive, visitors were also asked to use a “think-aloud protocol,” describing what they were thinking about during each step of the interaction. After they were done exploring, the evaluator asked some interview questions and sometimes prompted the visitors to use features that they hadn’t explored on their own. The testing protocol used with on-the-floor Museum visitors and with recruited visitors with disabilities was largely the same, except that the following question was added when testing with recruited visitors: “Was there anything you wanted to do when using this exhibit that you were not able to?”

Impacts of formative evaluation on the final design

Specific parts of the component, its features, and the exhibit content are referenced throughout this post and are explained in depth in the CMME Final Exhibit Component blog post. Briefly, the component presents data about the power generated by five wind turbines on the Museum of Science roof in the form of line and scatterplot graphs. Summarized below are the main findings from the formative evaluation, including a description of visitors’ experiences during testing and how these experiences impacted the final design of the exhibit component.

1. Understanding when there were no data points in an area of the graph

What happened during testing? In early prototypes, touching an area of the graph without data points gave no feedback, so visitors could not tell whether that part of the graph was empty or the component had stopped responding.

How is this addressed in the final component? A sound clip of static is now played whenever a visitor is touching an area of the screen that does not have data.

2. Dealing with multi-touch screen capabilities

What happened during testing? When testing the first sonification prototype with three people who are blind, all three oriented themselves to the component by feeling it with both hands. The screen often froze when multiple fingers touched it at once, making it unclear to the visitors whether the prototype was broken or not.

How is this addressed in the final component? 1) All audio that describes using the touch screen tells visitors to “use one finger.” 2) The multi-touch option for the screen is now programmed so that if multiple fingers are on the screen at once, it reads data from an average of those points (see the sketch after these findings).

3. Accessing the information found on the graph axes

What happened during testing? When people who are blind were testing an early version of the prototype, getting the number of watts or miles per hour at any specific data point required remembering how many increments they had passed on the axes. Unless they went back and counted the increments on each axis, they were unable to tell which point they were touching.

How is this addressed in the final component? 1) Axis titles and graph values are read aloud when a visitor moves their finger along each axis. 2) When a finger is held down in one place on the graph or along the trend scrub bar, data values from that area are verbalized (e.g., “Average power production: X watts at Y miles per hour”).

4. Being introduced to the component features

What happened during testing? During testing of a close-to-final version of the prototype where visitors could explore all five graphs, an introductory broadcast audio clip played when the first graph button was pushed. This audio clip verbalized information from the exhibit label about what to do at the component. During formative evaluation, this introductory audio could be interrupted if a visitor pushed another button or touched the screen. All visitors who tested in this session interrupted the audio before it finished, often doing so when the audio encouraged them to touch the screen. Many visitors had difficulty knowing what to do and which options were available for exploring the data; the instructions and information given at the end of the audio were the parts many visitors did not use or did not understand during this session.

How is this addressed in the final component? The introductory audio is now un-interruptible. Visitors must listen to this audio broadcast before being able to interact, something uncharacteristic of exhibit interactives at the Museum of Science. If a visitor touches the screen or pushes a button while the locked audio is broadcasting, a negative sound plays as feedback so visitors know the exhibit is not broken but must wait until they hear “now you can explore on your own” to move on. After implementing this change, visitors understood the instructions and options for exploration more clearly and did not appear to be negatively impacted by the un-interruptible intro audio.

5. Differentiating wind turbines

What happened during testing? Throughout testing, many visitors had trouble figuring out what the terms "Aerovironment," "Windspire," "Proven," "Swift," and "Skystream" meant. These were the wind turbine names that labeled each button and graph, and understanding that they were names of different wind turbines was essential to understanding the exhibit content.

How is this addressed in the final component? We first tried labeling the turbines with a letter (e.g., Wind Turbine A: Proven), but visitors who are blind did not prefer this strategy because the word that differentiates the buttons was not the first one they heard. Instead, we decided to change or modify some turbine names so that visitors can better understand that they are brand names. For instance, “Aerovironment” was changed to “AVX1000” and “Proven” was changed to “Proven 6.” This solution allowed the first word of each name to be unique while making the turbine names less confusing.

Final wind turbine names used in the exhibit: these names are found under the buttons and above the high contrast tactile images of each wind turbine, as well as on screen as graph titles.

6. Connecting each graph to its accompanying wind turbine

What happened during testing? Visitors had some difficulty throughout the formative evaluation understanding that pushing a new button would change the graph shown on the screen and present data from a different wind turbine. For instance, some visitors were able to interpret the data on the screen but weren’t sure what the difference was between different graphs.

How is this addressed in the final component? 1) Each button that corresponds to a wind turbine lights up when visitors choose that button/graph, and 2) an area of the screen is designated for an animation of the related turbine. When a new button is pushed, an animated image of the turbine whose data is shown on the accompanying graph appears on this part of the screen.

Final exhibit component where the button for Proven 6 is lit up and the screen in the upper left corner shows an animation of Proven 6.

7. Finding the component welcoming

What happened during testing? A few groups who tested the prototypes throughout the formative evaluation mentioned that they would not be likely to walk up to a screen with graphs on it, either because they didn’t like graphs and didn’t think the activity would be fun, or because they found graphs complex and intimidating. In one of the later testing sessions, one group talked about how much they ended up enjoying the interactive, even though they initially described it as looking boring and complex when they walked up to it.

How is this addressed in the final component? A welcome screen was added to the component that shows an animated drawing of the spinning wind turbines mounted on the Museum roof, whose power production data is represented in the graphs. This screen instructs visitors to “press a round button to begin.” If the screen is touched, the instruction “press a round button below the screen to begin” is also read aloud.

Welcome prompt screen on the final component.
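For readers who want a concrete picture of two of the screen behaviors above (collapsing multiple touches to their average and playing static where there is no data), here is a small C++ sketch. The point types, search radius, and sample values are illustrative assumptions; this is not the exhibit's actual implementation.

```cpp
// Illustrative sketch only: average simultaneous touches, then check for nearby data.
#include <cmath>
#include <iostream>
#include <vector>

struct Point { double x, y; };   // graph coordinates

// Collapse however many fingers are down into a single probe point (assumes at least one touch).
Point averageTouches(const std::vector<Point>& touches) {
    Point avg{0, 0};
    for (const Point& t : touches) { avg.x += t.x; avg.y += t.y; }
    avg.x /= touches.size();
    avg.y /= touches.size();
    return avg;
}

// True if any data point lies within the given radius of the probe point.
bool hasNearbyData(const Point& p, const std::vector<Point>& dataPoints, double radius) {
    for (const Point& d : dataPoints)
        if (std::hypot(d.x - p.x, d.y - p.y) <= radius) return true;
    return false;
}

int main() {
    std::vector<Point> data = {{10, 200}, {12, 310}, {16, 1361}};
    std::vector<Point> touches = {{11, 250}, {13, 290}};          // two fingers down
    Point probe = averageTouches(touches);                        // read from the average
    std::cout << (hasNearbyData(probe, data, 75.0) ? "play data tone / readout"
                                                   : "play static (no data here)") << "\n";
}
```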

by Stephanie Iacovelli on Oct 1, 2014

CMME

CMME Final Exhibit Component

For the Creating Museum Media for Everyone (CMME) project, the team from the Museum of Science, Boston, aimed to develop a proof-of-concept exhibit component that used multisensory options to display data and whose components could be adapted into a basic toolkit for use by other museums. The development of this exhibit was kicked off with two back-to-back workshops featuring talks by experts in the field and working sessions to explore some possible directions for an accessible digital interactive. This post gives an overview of the final exhibit component and reviews the goals and constraints of the project.

Video tour of the exhibit (closed captions available in the "CC" option along the bottom edge of the video window): https://www.youtube.com/watch?v=_yCzs7HKpYE

If you are unable to watch the video, scroll down for text and image descriptions of the exhibit.

Goals and constraints

Although the project included exploring many different technological paths, there were goals for both the overall project as well as the specific exhibit component. The project aimed to create shareable results that would help others in the museum field create more accessible digital interactives that could support data interpretation.

Project goals:

  • To further the science museum field’s understanding of ways to research, develop, and evaluate inclusive digital interactives
  • To develop a universally designed computer-based multi-sensory interactive that allows visitors to explore [and manipulate]* data
  • To develop an open-source software framework [allowing the design of the full interactive to be adapted to fit any institution]*
  • To provide an exemplar that will allow other museums to represent data sets as universally accessible scatterplot [or bar]* graphs
*Bracketed portions of the project goals were explored, but are not reflected in the final exhibit component installed on the Museum floor. Code for programming these tasks was developed and will be released in an open-source toolkit later this fall for institutions to explore.

Exhibit goals:
  • Visitors will understand abstract wind turbine data through multi-sensory interaction and interpretation
  • Visitors will improve their data analysis skills to learn about wind turbine technology
  • Visitors will view themselves as science learners through their interaction with and manipulation of wind turbine data
Final exhibit component

We revised an existing component in the Museum of Science’s Catching the Wind exhibition, which allows a broad range of visitors to explore power production data from wind turbines mounted on the Museum roof. Below are text descriptions and pictures of the exhibit that match the content in the walkthrough video above.

Catching the Wind exhibition panel

The final exhibit component is part of a larger 25-foot-long exhibition extending to the left and right. The component includes an informational label and a touch screen computer activity. The 4-foot-wide computer interactive contains auditory and visual graphs showing power production of wind turbines mounted on the Museum's roof.

CMME exhibit component: an informational label and an interactive computer touch screen

There is a large printed label above the computer screen with images and statistics for each turbine type. The lower left side of the exhibit has an audio phone handset with two buttons. The square “Audio Text” button gives a physical description of the exhibit component. The round “Next Audio” button walks a visitor through the printed imagery and text on the labels.

Slanted panel label and touch screen

The slanted panel along the front of the exhibit contains a touch screen, a small introductory label with simplified instructions, a square “More Audio” button which plays a detailed broadcast audio introduction to the graph, and five round buttons with corresponding high-contrast tactile versions of the turbines. A tactile adult figure is placed next to each turbine for scale.

Buttons and high-contrast tactile scale versions of the wind turbines

Close-up of high-contrast tactile scale versions of the wind turbines

When idle, the large touch screen shows text prompting visitors to “press a round button to begin.” Audio also articulates this prompt when the screen is touched.

Welcome prompt screen

To begin the activity, a visitor chooses a graph to explore by pressing one of the five round buttons below the computer touch screen. When any one of the buttons is pushed for the first time, a visual and audio introduction to the graph is played. Visitor interaction is limited during the introduction. Once the introduction has concluded, the visitor can explore the displayed scatterplot graph of power production for that turbine or press another button to view a different graph.

Still from the auditory and visual introduction to the graph; the graph area is highlighted. Text within the pop-up: Touch to explore individual data points. Static = no data.

The large rectangular section of exposed computer touch screen has tactile edges with notches that correspond to the axes and grid lines in the scatterplot graphs on the screen. When these are touched, the axis titles and grid line increments are read aloud. Within the graph area, the scatterplot dots are visible and, when touched, are articulated by a tone that corresponds with their value. When the visitor touches an area where no data points are present, the visitor hears static. When a visitor holds their finger in one place on the graph, a pop-up text box and audio readout articulate the power produced at that wind speed and how many data points are present in that area of the graph (see the sketch at the end of this post).

Pop-up text box when a visitor holds their finger on the screen within the graph area. Text within the pop-up: 1361 watts in winds of 16 MPH, 6 data points.

Along the bottom edge of the main graph area, there is a small, thin line of exposed computer touch screen. When this is touched, the trend line for the data is sonified, corresponding to the location of the visitor’s finger along the x-axis. If the visitor holds their finger in one place along this trend exploration bar, a text box pops up and audio verbalizes the average power produced at that wind speed.

Still from the introduction to the graph, highlighting the trend exploration bar below the graph area of the screen. Text within the pop-up: Touch to hear the trend line. Higher pitch = more power.

To the left of the graph screen, there is also an image of the turbine for which the current data is being shown. When this image is touched, audio articulates the image.

Image of the Skystream wind turbine and graph of power production data from that turbine
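As a rough sketch of how a held-finger pop-up like the one pictured above can be assembled, the C++ snippet below gathers the data points near the touched wind speed and formats the readout. The sample data, the 1 MPH window, and the averaging choice are assumptions for illustration, not the exhibit's implementation.

```cpp
// Illustrative sketch only: build the held-finger pop-up text from nearby data points.
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

struct Sample { double mph; double watts; };

std::string popupText(double touchedMph, const std::vector<Sample>& samples,
                      double windowMph = 1.0) {
    double sumWatts = 0;
    int count = 0;
    for (const Sample& s : samples) {
        if (std::abs(s.mph - touchedMph) <= windowMph) { sumWatts += s.watts; ++count; }
    }
    if (count == 0) return "No data at this wind speed";
    int avgWatts = static_cast<int>(sumWatts / count + 0.5);   // round to whole watts
    return std::to_string(avgWatts) + " watts in winds of " +
           std::to_string(static_cast<int>(touchedMph)) + " MPH, " +
           std::to_string(count) + " data points";
}

int main() {
    std::vector<Sample> samples = {{15.4, 1290}, {16.0, 1361}, {16.3, 1402}, {22.0, 2100}};
    std::cout << popupText(16.0, samples) << "\n";   // prints "1351 watts in winds of 16 MPH, 3 data points"
}
```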

by Emily O'Hara on Sep 30, 2014

IMLS Funds New Partnership Between Open Exhibits & Omeka

We are excited to announce a new partnership between the Roy Rosenzweig Center for History and New Media at George Mason University, Ideum (makers of Open Exhibits), and the University of Connecticut’s Digital Media Center.

Our organizations have been awarded a National Leadership Grant for Museums from the Institute of Museum and Library Services to extend two open museum platforms: Open Exhibits and Omeka. (A full list of awardees can be found on the IMLS website.)

The project is called Omeka Everywhere. This new initiative will help keep Open Exhibits free and open for the next three years (our NSF funding ended last month). In addition, a set of new initiatives for Open Exhibits and Omeka is planned. Here is a brief description of the project.

Dramatically increasing the possibilities for visitor access to collections, Omeka Everywhere will offer a simple, cost-effective solution for connecting onsite web content and in-gallery multi-sensory experiences, affordable to museums of all sizes and missions, by capitalizing on the strengths of two successful collections-based open-source software projects: Omeka and Open Exhibits.

Currently, museums are expected to engage with visitors, share content, and offer digitally-enabled experiences everywhere: in the museum, on the Web, and on social media networks. These ever-increasing expectations, from visitors to museum administrators, place a heavy burden on the individuals creating and maintaining these digital experiences. Content experts and museum technologists often become responsible for multiple systems that do not integrate with one another. Within the bounds of a tight budget, it is increasingly difficult for institutions to meet visitors’ expectations and to establish a cohesive digital strategy. Omeka Everywhere will provide a solution to these difficulties by developing a set of software packages, including Collections Viewer templates, mobile and touch table applications, and the Heist application, that bring digital collections hosted in Omeka into new spaces, enabling new kinds of visitor interactions.

Omeka Everywhere will expand audiences for museum-focused publicly-funded open source software projects by demonstrating how institutions of all sizes and budgets can implement next-generation computer exhibit elements into current and new exhibition spaces. Streamlining the workflows for creating and sharing digital content with online and onsite visitors, the project will empower smaller museums to rethink what is possible to implement on a shoestring budget. By enabling multi-touch and 3D interactive technologies on the museum floor, museums will reinvigorate interest in their exhibitions by offering on-site visitors unique experiences that connect them with the heart of the institution—their collections.

by Jim Spadaccini on Sep 19, 2014

Multitouch Table Research Findings

A recent addition to our Papers section is worth highlighting on the Open Exhibits blog. Open Exhibits co-PI Kate Haley Goldman and her colleague Jessica Gonzalez conducted research at three of our partner museums (the Indian Pueblo Cultural Center, the Maxwell Museum of Anthropology, and the New Mexico Museum of Natural History and Science) to better understand how visitors interact with multitouch tables.

A multitouch table at the Maxwell Museum of Anthropology.
Open Exhibits software running on multitouch table at the Maxwell Museum of Anthropology. The Maxwell was one of three museums in which research was conducted. The touch table shown is an Ideum Pro multitouch table.

The research looks at a variety of aspects of visitor interaction, including dwell time, social interaction, and a range of behavioral and verbal indicators. The data suggests that the experience is still novel for most visitors: 73-82% of visitors to our three partner institutions had not seen a multitouch table before. Stay time was longer at the table than for any other object in the gallery spaces. The full report can be found at: OE Multitouch Table Use Findings.

by Jim Spadaccini on Jul 30, 2014