Open Exhibits - Blog


  
  
CMME
  
  

CMME Exhibit Component: Formative Evaluation Summary

Formative Evaluation Methods: A total of nine iterations of the Creating Museum Media for Everyone (CMME) exhibit prototype were tested throughout the formative evaluation phase, which occurred from April 2013 to March 2014. Overall, 134 visitors took part in testing the prototypes. This includes 15 recruited people with disabilities and 119 general Museum visitors (who were not asked whether they identified as having a disability). Because people with disabilities were the target audience for this project, they were recruited to come in and test prototypes throughout the exhibit creation process. Their input was crucial for creating a universally designed component. However, even though people with disabilities were the target audience, all Museum of Science exhibits are tested with general Museum visitors to ensure usability, understanding, and interest in exhibits. Testing with people who have a variety of abilities and disabilities ensured that added features that would enhance the accessibility of the exhibit for some visitors did not hinder the experience for others. The table below outlines the types of disabilities represented in the testing sample:

Type of Disability          Number of participants
Blind or low vision         9
Physical                    2
Intellectual                5
Deaf or hard of hearing     3
Note: Some participants identified as having multiple disabilities, so the totals do not add up to 15.

For each testing session, all visitors were asked to use the component as they normally would if they had walked up to it in the Museum. While they were exploring the interactive, visitors were also asked to use a “think-aloud protocol,” describing what they were thinking about during each step of the interaction. After they were done exploring, the evaluator asked some interview questions and sometimes prompted visitors to use features that they hadn’t explored on their own. The testing protocol was largely the same for on-the-floor Museum visitors and recruited visitors with disabilities, except that one question was added when testing with recruited visitors with disabilities: “Was there anything you wanted to do when using this exhibit that you were not able to?”

Impacts of formative evaluation on the final design: Specific parts of the component, its features, and the exhibit content are referenced throughout this post and are explained in depth in the CMME Final Exhibit Component blog post. Briefly, the component presents data about the power generated by five wind turbines on the Museum of Science roof in the form of line and scatterplot graphs. Summarized below are the main findings from the formative evaluation, including descriptions of visitors’ experiences during testing and how those experiences shaped the final design of the exhibit component.

1. Understanding when there were no data points in an area of the graph

What happened during testing? When testing the first sonification prototype, areas of the graph with no data points gave no audio feedback, making it unclear to visitors whether they were touching an empty part of the graph or whether the prototype was broken.

How is this addressed in the final component? A sound clip of static is now played whenever a visitor is touching an area of the screen that does not have data.

2. Dealing with multi-touch screen capabilities

What happened during testing? When testing the first sonification prototype with three people who are blind, all three oriented themselves to the component by feeling it with both hands. The screen often froze when multiple fingers touched the screen at once, making it unclear to the visitors whether the prototype was broken or not.

How is this addressed in the final component? 1) All audio that describes using the touch screen tells visitors to “use one finger.” 2) The screen’s multi-touch handling is now programmed so that if multiple fingers are on the screen at once, it reads data from the average of those points.
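The two fixes above boil down to a small amount of touch-handling logic. The Python sketch below is a rough illustration only, not the exhibit’s actual code (which is not shown in this post): simultaneous touches are averaged into one point, and a touch that lands away from any data point falls back to static. The data, the normalized coordinate system, and the hit radius are all assumptions.

```python
import math

# Hypothetical scatterplot data in normalized screen coordinates (0.0-1.0);
# the real component maps wind speed (x) and watts (y) onto the screen.
DATA_POINTS = [(0.2, 0.1), (0.35, 0.25), (0.5, 0.4), (0.65, 0.6), (0.8, 0.85)]
TOUCH_RADIUS = 0.05  # assumed hit radius around a data point


def average_touch(touches):
    """Collapse any number of simultaneous touches into a single point (fix 2)."""
    xs = [x for x, _ in touches]
    ys = [y for _, y in touches]
    return sum(xs) / len(xs), sum(ys) / len(ys)


def respond_to_touch(touches):
    """Decide what the visitor should hear for the current set of touches."""
    if not touches:
        return "silence"
    x, y = average_touch(touches)
    nearby = [p for p in DATA_POINTS if math.hypot(p[0] - x, p[1] - y) <= TOUCH_RADIUS]
    if not nearby:
        return "play static"  # fix 1: an empty area sounds "empty" rather than broken
    return f"play tone for {len(nearby)} data point(s)"


# Two fingers near the same region are averaged; a touch far from any data plays static.
print(respond_to_touch([(0.49, 0.41), (0.52, 0.38)]))  # -> play tone for 1 data point(s)
print(respond_to_touch([(0.05, 0.9)]))                 # -> play static
```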
3. Accessing the information found on the graph axes

What happened during testing? When people who are blind tested an early version of the prototype, finding out how many watts or miles per hour a specific data point represented required remembering how many increments they had passed on the axes. Unless they went back and counted the increments on each axis, they were unable to tell which point they were touching.

How is this addressed in the final component? 1) Axis titles as well as graph values are read aloud when a visitor moves their finger along each axis. 2) When a finger is held down in one place on the graph or along the trend scrub bar, data values from that area are verbalized (e.g., "Average power production: X watts at Y miles per hour").

4. Being introduced to the component features

What happened during testing? During testing of a close-to-final version of the prototype, where visitors could explore all five graphs, an introductory broadcast audio clip played when the first graph button was pushed. This audio clip verbalized information from the exhibit label about what to do at the component. During formative evaluation, this introductory audio could be interrupted if a visitor pushed another button or touched the screen. All visitors who tested in this session interrupted the audio before it finished, often at the moment the audio encouraged them to touch the screen. Many visitors then had difficulty knowing what to do and which options were available for exploring the data; the instructions and information given at the end of the audio were the parts many visitors did not use or did not understand during this session.

How is this addressed in the final component? The introductory audio is now un-interruptible. Visitors must listen to this broadcast before being able to interact, something uncharacteristic of exhibit interactives at the Museum of Science. If a visitor touches the screen or pushes a button while the locked audio is playing, a negative feedback sound plays so visitors know that the exhibit is not broken but must wait until they hear “now you can explore on your own” to move on. After implementing this change, visitors understood the instructions and options for exploration more clearly and did not appear to be negatively impacted by the un-interruptible intro audio.
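In interaction terms, the locked introduction described in point 4 is a simple gate on visitor input. The sketch below is a minimal, hypothetical state machine illustrating that behavior; the class and method names are invented for illustration and are not taken from the exhibit’s software.

```python
class GraphIntro:
    """Hypothetical sketch of the locked introductory audio, not the exhibit's real software."""

    def __init__(self):
        self.intro_playing = False

    def press_turbine_button(self, first_press):
        if self.intro_playing:
            return "play negative feedback sound"  # the exhibit is working, just locked
        if first_press:
            self.intro_playing = True
            return "start introduction audio"
        return "switch to this turbine's graph"

    def touch_screen(self):
        if self.intro_playing:
            return "play negative feedback sound"
        return "read the data under the visitor's finger"

    def intro_finished(self):
        # Called when the audio reaches "now you can explore on your own".
        self.intro_playing = False


intro = GraphIntro()
print(intro.press_turbine_button(first_press=True))  # start introduction audio
print(intro.touch_screen())                          # play negative feedback sound
intro.intro_finished()
print(intro.touch_screen())                          # read the data under the visitor's finger
```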
5. Differentiating wind turbines

What happened during testing? Throughout testing, many visitors had trouble figuring out what the terms "Aerovironment," "Windspire," "Proven," "Swift," and "Skystream" meant. These were the wind turbine names that labeled each button and graph, and understanding that these terms were the names of different wind turbines was essential for visitors to understand the exhibit content.

How is this addressed in the final component? We first tried labeling the turbines with a letter (e.g., Wind Turbine A: Proven), but visitors who are blind did not like this strategy because the word that differentiates the buttons was not the first one they heard. Instead, we decided to change or modify some turbine names so that visitors could better understand that they are brand names. For instance, “Aerovironment” was changed to “AVX1000” and “Proven” was changed to “Proven 6.” This solution made the first word of each name unique while also making the turbine names less confusing.

[Image: Buttons and high-contrast tactile scale versions of the wind turbines. Caption: Final wind turbine names used in the exhibit. These names are found under the buttons and above the high-contrast tactile images of each wind turbine, as well as on screen as graph titles.]

6. Connecting each graph to its accompanying wind turbine

What happened during testing? Throughout the formative evaluation, visitors had some difficulty understanding that pushing a new button would change the graph shown on the screen and present data from a different wind turbine. For instance, some visitors were able to interpret the data on the screen but weren’t sure how the different graphs differed from one another.

How is this addressed in the final component? 1) Each button that corresponds to a wind turbine lights up when visitors choose that button/graph, and 2) an area of the screen is designated for an animation of the related turbine. When a new button is pushed, an animated image of the turbine whose data is shown on the accompanying graph appears in this part of the screen.

[Image: Final exhibit component where the button for Proven 6 is lit up and the screen in the upper left corner shows an animation of Proven 6.]

7. Finding the component welcoming

What happened during testing? A few groups who tested the prototypes during the formative evaluation mentioned that they would not be likely to walk up to a screen with graphs on it, either because they didn’t like graphs and didn’t think the activity would be fun, or because they found graphs complex and intimidating. In one of the later testing sessions, one group talked about how much they ended up enjoying the interactive, even though it initially looked boring and complex when they walked up to it.

How is this addressed in the final component? A welcome screen was added to the component that shows an animated drawing of the spinning wind turbines mounted on the Museum roof, whose power production data is represented in the graphs. This screen instructs visitors to “press a round button to begin.” If the screen is touched, the instruction “press a round button below the screen to begin” is also read aloud.

[Image: Welcome prompt screen on the final component.]


by Stephanie Iacovelli on Oct 1, 2014
 
  
CMME
  
  

CMME Final Exhibit Component

For the Creating Museum Media for Everyone (CMME) project, the team from the Museum of Science, Boston, aimed to develop a proof-of-concept exhibit component that used multisensory options to display data and whose components could be adapted into a basic toolkit for use by other museums. The development of this exhibit was kicked off with two back-to-back workshops featuring talks by experts in the field and working sessions to explore possible directions for an accessible digital interactive. This post gives an overview of the final exhibit component and reviews the goals and constraints of the project.

Video tour of the exhibit: https://www.youtube.com/watch?v=_yCzs7HKpYE

If you are unable to watch the video, scroll down for text and image descriptions of the exhibit.

Goals and constraints: Although the project included exploring many different technological paths, there were goals for both the overall project and the specific exhibit component. The project aimed to create shareable results that would help others in the museum field create more accessible digital interactives that could support data interpretation.

Project goals:

  • To further the science museum field’s understanding of ways to research, develop, and evaluate inclusive digital interactives
  • To develop a universally designed computer-based multi-sensory interactive that allows visitors to explore [and manipulate]* data
  • To develop an open-source software framework [allowing the design of the full interactive to be adapted to fit any institution]*
  • To provide an exemplar that will allow other museums to represent data sets as universally accessible scatterplot [or bar]* graphs
*Bracketed portions of the project goals were explored, but are not reflected in the final exhibit component installed on the Museum floor. Code for programming these tasks was developed and will be released in an open-source toolkit later this fall for institutions to explore.

Exhibit goals:
  • Visitors will understand abstract wind turbine data through multi-sensory interaction and interpretation
  • Visitors will improve their data analysis skills to learn about wind turbine technology
  • Visitors will view themselves as science learners through their interaction with and manipulation of wind turbine data
Final exhibit component: We revised an existing component in the Museum of Science’s Catching the Wind exhibition, which allows a broad range of visitors to explore power production data from wind turbines mounted on the Museum roof. Below are text descriptions and pictures of the exhibit that match the content in the walkthrough video above.

[Image: Catching the Wind exhibition panel.]

The final exhibit component is part of a larger, 25-foot-long exhibition extending to the left and right. The component includes an informational label and a touch screen computer activity. The 4-foot-wide computer interactive contains auditory and visual graphs showing power production of wind turbines mounted on the Museum's roof.

[Image: CMME exhibit component, with an informational label and an interactive computer touch screen.]

There is a large printed label above the computer screen with images and statistics for each turbine type. The lower left side of the exhibit has an audio phone handset with two buttons. The square “Audio Text” button gives a physical description of the exhibit component. The round “Next Audio” button walks a visitor through the printed imagery and text on the labels.

[Image: Slanted panel label and touch screen.]

The slanted panel along the front of the exhibit contains a touch screen, a small introductory label with simplified instructions, a square “More Audio” button that plays a detailed broadcast audio introduction to the graph, and five round buttons with corresponding high-contrast tactile versions of the turbines. A tactile figure of an adult is placed next to each turbine for scale.

[Image: Buttons and high-contrast tactile scale versions of the wind turbines.]

[Image: Close-up of high-contrast tactile scale versions of the wind turbines.]

When idle, the large touch screen shows text prompting visitors to “press a round button to begin.” Audio also articulates this prompt when the screen is touched.

[Image: Welcome prompt screen.]

To begin the activity, a visitor chooses a graph to explore by pressing one of the five round buttons below the computer touch screen. When any one of the buttons is pushed for the first time, a visual and audio introduction to the graph is played, and visitor interaction is limited during the introduction. Once the introduction has concluded, the visitor can explore the displayed scatterplot graph of power production for that turbine or press another button to view a different graph.

[Image: Still from the auditory and visual introduction to the graph, with the graph area highlighted. Text within the pop-up: “Touch to explore individual data points. Static = no data.”]
The large rectangular section of exposed computer touch screen has tactile edges with notches that correspond to the axes and grid lines in the scatterplot graphs on the screen. When these are touched, the axis titles and grid line increments are read aloud. Within the graph area, the scatterplot dots are visible and, when touched, are articulated by a tone that corresponds to their value. When the visitor touches an area where no data points are present, the visitor hears static. When a visitor holds their finger in one place on the graph, a pop-up text box and audio readout articulate the power produced at that wind speed and how many data points are present in that area of the graph.

[Image: Pop-up text box shown when a visitor holds a finger on the screen within the graph area. Text within the pop-up: “1361 watts in winds of 16 MPH. 6 data points.”]

Along the bottom edge of the main graph area, there is a small, thin line of exposed computer touch screen. When this is touched, the trend line for the data is sonified, corresponding to the location of the visitor’s finger along the x-axis. If the visitor holds their finger in one place along this trend exploration bar, a text box pops up and audio verbalizes the average power produced at that wind speed.

[Image: Still from the introduction to the graph, highlighting the trend exploration bar below the graph area of the screen. Text within the pop-up: “Touch to hear the trend line. Higher pitch = more power.”]

To the left of the graph screen, there is also an image of the turbine whose data is currently shown. When this image is touched, audio describes the image.

[Image: Skystream wind turbine and a graph of power production data from that turbine.]
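The audio mappings described above (spoken values on a held touch, and a trend line where higher pitch means more power) can be pictured with a small sketch. The Python fragment below is illustrative only, not the exhibit’s code; the wind-speed/power table, the pitch range, and the linear pitch mapping are assumptions.

```python
# Illustrative trend data only: average power (watts) produced at each wind speed (mph).
TREND = {4: 60, 7: 220, 10: 470, 13: 890, 16: 1361, 19: 1800}

MIN_HZ, MAX_HZ = 200.0, 1200.0  # assumed pitch range for the trend sonification


def trend_pitch(wind_speed_mph):
    """Map average power at this wind speed to a tone frequency (higher pitch = more power)."""
    watts = TREND[wind_speed_mph]
    lo, hi = min(TREND.values()), max(TREND.values())
    return MIN_HZ + (watts - lo) / (hi - lo) * (MAX_HZ - MIN_HZ)


def dwell_readout(wind_speed_mph, data_point_count):
    """Text spoken (and shown in a pop-up) when a finger dwells in one spot on the graph."""
    return (f"{TREND[wind_speed_mph]} watts in winds of {wind_speed_mph} MPH, "
            f"{data_point_count} data points")


# Sweeping a finger along the trend exploration bar raises the pitch with the power:
for mph in sorted(TREND):
    print(mph, "mph ->", round(trend_pitch(mph)), "Hz")

print(dwell_readout(16, 6))  # matches the example pop-up text shown above
```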


by Emily O'Hara on Sep 30, 2014
 
  
  
  

IMLS Funds New Partnership Between Open Exhibits & Omeka

We are excited to announce a new partnership between The Roy Rosenzweig Center for History and New Media at George Mason University, Ideum (makers of Open Exhibits), and the University of Connecticut’s Digital Media Center.

Our organizations have been awarded a National Leadership Grant for Museums from the Institute of Museum and Library Services to extend two open museum platforms: Open Exhibits and Omeka. (A full list of awardees can be found on the IMLS website.)

The project is called Omeka Everywhere. This new initiative will help keep Open Exhibits free and open for the next three years (our NSF funding ended last month). In addition, a set of new initiatives for Open Exhibits and Omeka is planned. Here is a brief description of the project.

Dramatically increasing the possibilities for visitor access to collections, Omeka Everywhere will offer a simple, cost-effective solution for connecting onsite web content and in-gallery multi-sensory experiences, affordable to museums of all sizes and missions, by capitalizing on the strengths of two successful collections-based open-source software projects: Omeka and Open Exhibits.

Currently, museums are expected to engage with visitors, share content, and offer digitally-enabled experiences everywhere: in the museum, on the Web, and on social media networks. These ever-increasing expectations, from visitors to museum administrators, place a heavy burden on the individuals creating and maintaining these digital experiences. Content experts and museum technologists often become responsible for multiple systems that do not integrate with one another. Within the bounds of tight budgets, it is increasingly difficult for institutions to meet visitors’ expectations and to establish a cohesive digital strategy. Omeka Everywhere will provide a solution to these difficulties by developing a set of software packages, including Collections Viewer templates, mobile and touch table applications, and the Heist application, that bring digital collections hosted in Omeka into new spaces, enabling new kinds of visitor interactions.
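As a rough sense of what “bringing digital collections hosted in Omeka into new spaces” can look like in practice, the sketch below pulls a few items from an Omeka site’s REST API so an in-gallery or touch-table application could display them. This is a generic illustration, not code from Omeka Everywhere; it assumes an Omeka Classic site with its API enabled, the site URL is a placeholder, and the metadata field handling may need adjusting for a particular installation.

```python
import requests

# Placeholder site: any Omeka Classic installation with its REST API enabled.
OMEKA_SITE = "https://example.org"


def fetch_items(per_page=5):
    """Fetch a few collection items as JSON from the site's /api/items endpoint."""
    response = requests.get(f"{OMEKA_SITE}/api/items",
                            params={"per_page": per_page}, timeout=10)
    response.raise_for_status()
    return response.json()


def item_title(item):
    """Pull a readable title out of an item record; field layout can vary by installation."""
    for element_text in item.get("element_texts", []):
        if element_text.get("element", {}).get("name") == "Title":
            return element_text.get("text")
    return "(untitled)"


if __name__ == "__main__":
    # An in-gallery viewer or touch-table app could render these however it likes.
    for item in fetch_items():
        print(item.get("id"), "-", item_title(item))
```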

Omeka Everywhere will expand audiences for museum-focused publicly-funded open source software projects by demonstrating how institutions of all sizes and budgets can implement next-generation computer exhibit elements into current and new exhibition spaces. Streamlining the workflows for creating and sharing digital content with online and onsite visitors, the project will empower smaller museums to rethink what is possible to implement on a shoestring budget. By enabling multi-touch and 3D interactive technologies on the museum floor, museums will reinvigorate interest in their exhibitions by offering on-site visitors unique experiences that connect them with the heart of the institution—their collections.


by Jim Spadaccini on Sep 19, 2014
 
  
  
  

Multitouch Table Research Findings

A recent addition to our Papers section on Open Exhibits is worth highlighting here on the blog. Open Exhibits co-PI Kate Haley Goldman and her colleague Jessica Gonzalez conducted research at three of our partner museums (the Indian Pueblo Cultural Center, the Maxwell Museum of Anthropology, and the New Mexico Museum of Natural History and Science) to better understand how visitors interact with multitouch tables.

[Image: Open Exhibits software running on a multitouch table at the Maxwell Museum of Anthropology, one of the three museums in which the research was conducted. The touch table shown is an Ideum Pro multitouch table.]

The research looks at a variety of aspects of visitor interaction, including dwell time, social interaction, and a range of behavioral and verbal indicators. The data suggest that the experience is still novel for most visitors: 73-82% of visitors to our three partner institutions had not seen a multitouch table before. Stay time was longer for the table than for any other object in the gallery spaces. The full report can be found at OE Multitouch Table Use Findings.


by Jim Spadaccini on Jul 30, 2014
 
  
CMME
  
  

Using Personas to Create Inclusive Digital Exhibit Interactives

The Creating Museum Media for Everyone (CMME) project created personas, or hypothetical archetypes of actual users, to guide the design process of the four prototypes produced during the CMME Workshop. The personas are not real people, but they represent real people throughout the design process and are based strictly on real user data. Personas are useful to design teams because they help ensure user-centered design by representing and communicating user needs to developers, and grounding them in real user data discourages personal bias on the team. Personas were of particular interest to the CMME team because not everyone attending the workshop was likely to be familiar with the project’s specific target audience: people with disabilities. Although personas are useful in the beginning of the design process, they are not meant to take the place of user testing once prototypes are created; therefore, along with using the personas as a tool early on, the CMME team has tested all new prototype iterations with people who have a range of abilities and disabilities.

Persona Creation

The first step in persona creation was reading background literature about how personas can be used and how to create them. Some useful resources are:
  • The Persona Lifecycle: Keeping People in Mind Throughout Product Design by John Pruitt and Tamara Adlin
  • The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity by Alan Cooper
  • A Web for Everyone: Designing Accessible User Experiences by Sarah Horton and Whitney Quesenbery (see UX Magazine's book excerpt)
  • User Interface Engineering blog posts
  • The AEGIS Project
  • The Fluid Project

After reading the background literature about persona use and creation, the next step was finding data sources that could provide information on how people with different types of disabilities use exhibits and digital interactives, as well as how these groups experience a museum. The CMME personas were based on data from 11 previous Museum of Science research and evaluation studies. A bulleted list of characteristics was created for each real person who took part in a research or evaluation study. These characteristics lists contained important and relevant findings from the studies, including traits and qualities of the people, as well as ways in which they used exhibits.

The characteristics lists of all participants were then compared to identify interesting and important distinctions between the people in the sample. Some distinctions that came out of these data were tech savviness, reliance on auditory elements, reliance on visual elements, and reliance on tactile elements. Using the distinctions, scales were created in order to map the real people along a continuum. After each data point was mapped on the continua, patterns began to emerge as the same groups of people fell in the same area of multiple continua (see Figure 1).

[Image: Figure 1: Some of the continua, where each two-letter or letter-number combination represents one person from a research or evaluation study. Groups circled in red show people from the studies who often fell in the same area along the continua.]

These groupings became the individual personas. Key characteristics of each person in a cluster were written down, and then all of the notable traits were combined to create initial drafts of the personas. In this case, key characteristics included difficulties when using exhibits or digital interactives, parts of exhibits or digital interactives that were helpful, attitudes, interests, and familiarity with computers. After some back-and-forth review with other members of the team, photos were added to the personas, and they were ready to introduce at the CMME Workshop.

Use of personas at the Workshop

The personas were introduced to the participants on the first day of the workshop in a presentation with slides containing photos and quotes from each persona, along with a description of their characteristics. A paper version of the personas was also included in the participant packets. Large cutouts of the personas’ heads were placed on each table as a visual cue to remind participants to think about them during development. Each team was free to use the personas as they saw fit. Here is a summary of how each group used the personas at the workshop:
  • Personalization Options Team: This team created a prototype that addressed what personalization might look like at a museum. During development, the team went through each persona and created a personalized experience for them. After their prototype was created, they used a spreadsheet to show each persona going through the personalization path.
  • Dynamic Haptic Display Team: This team aimed to create a dynamic haptic representation of a graph. During their design process, the team thought about how each persona would find their individual data point and go about manipulating the data.
  • Multi-touch Audio Layer Team: This team set out to create a descriptive audio layer for a multi-touch table. They came up with two plausible options to pursue and went through them with each persona, listing out the pros and cons.
  • Data Sonification Team: This team’s goal was to create a prototype that would present data using sound to provide audio cues. This team chose three target personas and pictured these three personas going through and using their prototype.
Click here for more information about the Prototyping Workshop.

According to Kate Haley Goldman’s evaluation of the workshop, participants found the personas to be highly useful once they were being implemented during development. However, the introduction to the personas could have been stronger: participants felt that the personas were introduced quickly during the Possibilities Workshop and then not mentioned again until the Prototyping Workshop started. The CMME personas were meant to be a living document, updated as the project progressed. For the formative evaluation, at least 20 visitors with different types of disabilities tested different iterations of the CMME prototypes, and the personas were expanded with the knowledge gathered from these visitors. Click here to see the current version of the personas.
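For readers curious what the continua-mapping step described above might look like in code, here is a small hypothetical sketch, not the CMME team’s actual analysis: participants receive illustrative 0–10 scores on a few continua, and those who land close together on every continuum are grouped into the clusters that seed persona drafts. The participant scores, continuum names, and similarity threshold are all invented for illustration.

```python
# Hypothetical participants scored 0-10 on a few of the continua mentioned above.
PARTICIPANTS = {
    "P1": {"tech_savviness": 8, "auditory_reliance": 2, "tactile_reliance": 3},
    "P2": {"tech_savviness": 7, "auditory_reliance": 3, "tactile_reliance": 2},
    "P3": {"tech_savviness": 2, "auditory_reliance": 9, "tactile_reliance": 8},
    "P4": {"tech_savviness": 3, "auditory_reliance": 8, "tactile_reliance": 9},
}

THRESHOLD = 3  # assumed: "same area of a continuum" means within 3 points on every scale


def similar(a, b):
    """True when two participants sit near each other on every continuum."""
    return all(abs(PARTICIPANTS[a][k] - PARTICIPANTS[b][k]) <= THRESHOLD
               for k in PARTICIPANTS[a])


groups = []
for person in PARTICIPANTS:
    for group in groups:
        if all(similar(person, member) for member in group):
            group.append(person)
            break
    else:
        groups.append([person])

# Each group of participants who cluster together would seed one draft persona.
print(groups)  # -> [['P1', 'P2'], ['P3', 'P4']]
```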


by Stephanie Iacovelli on Jul 15, 2014
 
  