Open Exhibits - Blog



CMME: Haptic Possibilities in Exhibits

Post written by: Marta Beyer, Peter Moriarty, Emily O’Hara, and Robert Rayle

Through the Creating Museum Media for Everyone (CMME) grant, the Museum of Science and several other institutions set out to explore various possibilities for developing accessible digital museum interactives. One particular area CMME allowed us to explore was the potential of haptics technology within museum settings. Haptics, the ability to get information through touch, presents a promising and unique way to convey information. By sharing many of the lessons we learned about haptics in this series of blog posts, we hope the field can continue to discover possibilities connected with this technology. The first post below provides a quick overview of why we were interested in exploring haptics and some of the initial research we did for the CMME project. In a second post, we describe some of the haptic methods we tried.

Project background and inspiration for haptics

With the CMME grant, we set out to consider how haptic technology could be used in a museum exhibit to provide information to audiences of all abilities. Specifically, we were trying to figure out how haptics could be incorporated into an accessible digital exhibit focusing on data exploration. As we started this project, we were aware that some museums currently incorporate basic haptic experiences to provide dynamic tactile feedback. For example, in the Museum of Science’s Take a Closer Look exhibition on senses there is a very basic tactile experience that includes a small vibrating post. By touching this post, visitors can gain a sense of how sensitive their skin is to different frequencies of vibration.

Take a Closer Look haptic vibrating post
Photo of a visitor touching the vibrating post in a haptic interactive experience in the Museum of Science’s Take a Closer Look exhibition

Another haptic experience at the Museum of Science is in the Making Models exhibition, which highlights how people employ many different types of models to understand the world. One exhibit component, in particular, reveals several different ways humans model a heart, including a computer animation showing blood flow, a plastic model replicating the heart’s shape, a heart-shaped candy box, and words in different languages that convey the abstract concept of a heart. One model includes an audio representation of a heartbeat. Here visitors can also place their hand on the thin plastic sheeting which covers a speaker and feel the heartbeat through direct haptic feedback.

Making Models haptic heartbeat
Photo of a visitor touching the haptic part of this exhibit component to feel the sonified heartbeat in the Museum of Science’s Making Models exhibition

Together, these exhibits encouraged us to think about even broader possibilities of haptics in museums and the potential to convey more complex ideas using touch.

Haptic resources

In addition to referring to these haptic exhibits, we also turned to other resources for haptic inspiration. The following list includes some of the most helpful resources we referenced.

Simple Haptics

  • On this site, industrial designer Camille Moussette shares his Ph.D. thesis on different forms of vibrational feedback. It explores a variety of haptic techniques, such as shaking a whole unit, rotational vibrations, whacking vibrations, and combinations of these. For his dissertation, Moussette built different prototypes to document the processes and results. This reference provides a wide spectrum of ideas and inspiration and is a good starting point when setting design and exhibit goals.

Precision Microdrives

  • This site is an in-depth exploration of specific haptic motors, in particular ERM (eccentric rotating mass, or pager-motor style) and LRA (linear resonant actuator, or voice-coil style) motors. It highlights the design choices one needs to think about when choosing between these motor styles, and it walks through the design process for a whole system. Because this is a commercial site that sells motors, a designer can get information and specs about different systems, which in turn could inform a wide range of implementation approaches.

Tactile Labs

  • This non-profit organization provides access to haptic technologies. In particular, their site offers products related to the Haptuator, a specific type of vibrotactile transducer. The output accelerations of these transducers allow one to distinctly feel vibrations in the 50 Hz to 500 Hz range. Because many different choices are provided, this site might be of use if someone is looking for a particular product or trying to decide whether specs meet their design parameters. Several articles are referenced in the “Related Publication” section of the site, including those by Hsin-Yun Yao, whose Ph.D. thesis on vibrotactile transducers led to the creation of the Haptuator.

Stanford University Mechanical Engineering Course

  • Information about Stanford’s Mechanical Engineering course called the “Design and Control of Haptic Systems” is provided on this site. Several papers/pdfs of relevant research are available. The lectures and assignments associated with this course could inform any technical designer’s haptics research and are quite inspirational. The “Haptic Interaction Design for Everyday Interfaces” article by Karon MacLean was particularly noteworthy for us as it describes basic information about haptics and relevant, current technology.

Haptics Group at the University of Pennsylvania

  • This site provides an overview of haptics and describes how the Haptics Group is connected with the GRASP Lab at the University of Pennsylvania. It also links to Professor Katherine Kuchenbecker’s short but fascinating TED talk on haptics and some potential applications.

If you know of other haptic resources—please pass them along in the comments below!

by Marta Beyer on Oct 31, 2014

CMME: Tactile Paths not Taken

Written by Malorie Landgreen and Ben Jones 

The Creating Museum Media for Everyone (CMME) team at the Museum of Science, Boston (MOS) explored many avenues to address our goal of making an accessible digital interactive as useful as possible.

While reading through this blog, you will see examples of the work our team developed while brainstorming this interactive component, and the reasons some of these attempts were not chosen for the final proof-of-concept component. This post will review the tactile techniques we explored that did not end up in the final exhibit.

If you have not yet read our prior blog posts, the CMME Final Exhibit Component shows the final proof-of-concept exhibit component that the team installed and the Formative Evaluation Summary reviews how we got there.

Now, let’s dive into what didn’t work.

Capacitive Sensing Buttons

What is this?

3D-printed stainless steel buttons acting as touch sensors (printed at Shapeways)

Capacitive sensing buttons
Picture of an array of six capacitive sensing buttons in an early prototype.

The far left button was 3D-printed in stainless steel and the other five buttons were 3D-printed in plastic and wrapped with aluminum foil to make them conductive.

Why we tried it

The team wanted a physical representation of the turbines, and combining 3D prints of the turbines with a button press let each button identify itself directly. Touching the metal button triggered an audio label that read the name of the turbine; physically pressing the button selected the graph visitors wanted to explore.

How it was made

The buttons were printed in stainless steel because it was the most affordable conductive material; however, bronze and brass were also available. The 3D buttons were designed to fit into an off-the-shelf arcade button. These buttons were altered by removing the small light bulb inside, and replacing it with a piece of metal in the socket.

A spring made the electrical connection between the moving portion of the button and the light bulb socket. A wire ran from the light bulb socket to an Arduino, which performed the capacitive sensing. An Arduino Leonardo sent keystrokes to the computer: one keystroke for the audio when a capacitive sensor detected a touch, and another when the button was pressed.

When someone touched a button, there would be a spike in capacitance. Initially, we just set a fixed threshold on the capacitance values to detect a touch, but a better method used a moving average across all of the capacitive sensors: if a sensor's value rose above the average by a certain amount, it was registered as a touch event.
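The detection logic described above can be sketched as follows. The exhibit ran it on an Arduino Leonardo; this Python version, with illustrative class names and parameter values, just shows the moving-average idea:

```python
from collections import deque

class TouchDetector:
    """Keeps a moving average of recent readings pooled from all sensors;
    a reading well above that average is treated as a touch."""

    def __init__(self, history=60, margin=50):
        self.recent = deque(maxlen=history)  # recent readings, all sensors
        self.margin = margin                 # spike size that counts as a touch

    def update(self, readings):
        """readings: one capacitance value per sensor; returns touch flags."""
        if self.recent:
            average = sum(self.recent) / len(self.recent)
            touches = [r > average + self.margin for r in readings]
        else:
            touches = [False] * len(readings)  # no baseline yet
        # Fold only untouched readings into the baseline, so a held
        # touch does not drag the average upward over time.
        self.recent.extend(r for r, t in zip(readings, touches) if not t)
        return touches

detector = TouchDetector()
for _ in range(10):                         # idle frames establish the baseline
    detector.update([100, 101, 99, 100, 102, 98])
flags = detector.update([100, 101, 99, 100, 102, 400])  # finger on sensor 6
print(flags)   # only the spiked sensor registers as touched
```

Comparing against a moving average rather than a fixed threshold lets the baseline drift with temperature and humidity, which raw capacitance readings are sensitive to.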

Altered button
Picture of the deconstructed and altered button with a small piece of metal in the light bulb socket


Why didn’t it work?

We moved away from capacitive sensing buttons because they caused confusion about how to interact with the component. When visitors touched a button, they triggered the audio label readout and expected something more to happen. They did not realize they needed to press the button down to select the turbine graph.

Capacitive functionality was not the only thing we eliminated from the 3D buttons. The 2-inch round buttons were also too small to allow for to-scale models of the turbines, preventing an accurate understanding of the size differences between the turbines.

What was kept 

In the final proof-of-concept exhibit component, we kept the off-the-shelf arcade buttons as regular buttons and installed 3D-printed models of the turbines below each button. We added a grooved edge for easy navigation between each button and its 3D print, so visitors can easily associate each button with its 3D-printed turbine.

Button array with tactile turbines
Picture of the final array of five buttons, each with a to-scale, high contrast, 3D-printed turbine below.


Touch Screen Grid Overlay

What is this?

A clear acrylic die-cut grid, lined up perfectly with the graph grid lines on the touch screen behind.

Touch screen grid overlay
Picture of an early prototype of the exhibit component with an arrow pointing at the touch screen where the clear acrylic grid was attached.


Why we tried it

The team discussed the need for a grid overlay that would allow our visitors who are blind or have low vision to be able to identify the lines of the graph for a better sense of where each data point is located. We wanted it to be a clear acrylic overlay so that it would not detract from visual elements of the graph.

How it was made

A graphic was created to the same scale as the graphic for the touch screen graph. It was then cut out in-house using a vinyl cutter. The overlay was attached to a full sheet of clear acrylic with a clear-drying adhesive. The layered acrylic sheets were tested to make sure the underlying touch screen could still sense touch. This solution was not implemented in the final exhibit component, so it was never produced in a more durable way.

Why didn’t it work?

This solution did not work because it ended up being more of a distraction to our visitors than an aid to understand the graph more thoroughly. At the time of this testing, the team was designing the interaction to also allow for a bar graph to compare all of the turbines, but the static overlay caused confusion when the scatterplot graph changed to a bar graph. With further testing, the team discovered that there was a simple solution that didn’t cause as much permanent visual or tactile clutter; audio was added to articulate where on the graph a visitor was touching.

What was kept

A simplified version of the clear acrylic overlay was kept for the final component, in which just the graph axes are raised, with notches along each axis where the grid lines are located. Audio supplements this interaction by articulating the axes titles and gridline increments when they are touched. When a visitor holds their finger in one place on the graph, audio also reads out their location and details about the nearby data.

Final tactile edged overlay
Picture of the graph screen in the final exhibit component with an arrow pointing at the top right corner of the graph where the edge of the clear acrylic overlay is faintly visible.


Sonification Strip

What is this?

A separate horizontal bar below the graph that allowed visitors to run their finger across it to hear the trend line of each graph.

Prototype sonification strip
Picture of an early prototype in which the sonification strip and the main graph area were demarcated by two separate cutouts in the graphic that covered the touch screen. An arrow is pointing to the sonification strip.


Why we tried it

We wanted our visitors to hear the trend line quickly and easily, as many times as they would like for each turbine. Hearing the trend line play one time after the button press, and then exploring the data points, didn’t allow all of our visitors to fully understand the power production trend of each turbine. The detached strip would keep the difference in functionality separate from the scatterplot graph above, but still allow for visitors to hear the trend as often as they desired.

How it was made

There was a cutout portion of the graphic that was laid on top of the touch screen. The cutout was a 1-inch tall strip that ran along the full length of the graph, about 1/4 inch below the bottom of the graph.

Why didn’t it work?

The separation between the graph and the sonification strip kept our visitors from either finding it or understanding that it correlated to the trend line on the graph.

What was kept

The team decided to keep the basic concept of the sonification strip, because the ability to hear the trend line multiple times was successful, but integrated it into the tactile acrylic overlay, making the relationship between the strip and the graph clearer.

Final sonification strip
Picture of the final exhibit component touch screen with the sonification strip highlighted during the introduction to the graph. There is an arrow pointing to the sonification strip and the text on the screen states: Touch to hear the trend line; Higher pitch = more power.



These three examples of unused tactile concepts led to a stronger final design for our component, but they may be more applicable in different situations. Have you tried other tactile options for visitors? Leave a comment below with other resources that have worked for you.

by Emily O'Hara on Oct 15, 2014

CMME Exhibit Component: Formative Evaluation Summary

Formative Evaluation Methods:

A total of nine iterations of the Creating Museum Media for Everyone (CMME) exhibit prototype were tested throughout the formative evaluation phase, which occurred from April 2013 to March 2014. Overall, 134 visitors took part in testing the prototypes. This includes 15 recruited people with disabilities and 119 general Museum visitors (who were not asked whether they identified as having a disability). Because people with disabilities were the target audience for this project, they were recruited to come in and test prototypes throughout the exhibit creation process. Their input was crucial for creating a universally designed component. However, even though people with disabilities were the target audience, all Museum of Science exhibits are tested with general Museum visitors to ensure usability, understanding, and interest in exhibits. Testing with people who have a variety of abilities and disabilities ensured that added features that would enhance the accessibility of the exhibit for some visitors did not hinder the experience for others.

The table below outlines the types of disabilities represented in the testing sample:

Type of Disability / Number of participants
Blind or low vision /
Deaf or hard of hearing /

Note: Some participants identified as having multiple disabilities. Therefore, totals do not add up to 15.

For each testing session, all visitors were asked to use the component as they normally would if they had walked up to it in the Museum. While they were exploring the interactive, visitors were also asked to use a “think-aloud protocol,” describing what they were thinking about during each step of the interaction. After they were done exploring, the evaluator asked some interview questions and sometimes prompted the visitors to use features that they hadn’t explored on their own. The testing protocol used with on-the-floor Museum visitors versus recruited visitors with disabilities was largely the same, except that the following question was added when testing with recruited visitors with disabilities: “Was there anything you wanted to do when using this exhibit that you were not able to?”

Impacts of formative evaluation on the final design:

Specific parts of the component, its features, and the exhibit content are referenced throughout this post and are explained in depth in the CMME Final Exhibit Component blog post. Briefly, the component presents data about the power generated by five wind turbines on the Museum of Science roof in the form of line and scatterplot graphs. Summarized below are main findings from the formative evaluation, which include a description of visitors’ experiences during testing and how these experiences impacted the final design of the exhibit component.

1. Understanding when there were no data points in an area of the graph

What happened during testing?

When testing the first sonification prototype with people who are blind, areas of the graph without data points produced no sound. Visitors could not tell whether they were touching an empty area of the graph or whether the prototype was broken.

How is this addressed in the final component?

A sound clip of static is now played whenever a visitor is touching an area of the screen that does not have data.

2. Dealing with multi-touch screen capabilities

What happened during testing?

When testing the first sonification prototype with three people who are blind, all three people oriented themselves to the component by feeling it with both hands. The screen often froze when multiple fingers touched the screen at once, making it unclear to the visitors whether the prototype was broken or not.

How is this addressed in the final component?

1) All audio that describes using the touch screen tells visitors to “use one finger” when using the touch screen. 2) The multi-touch option for the screen is now programmed such that if multiple fingers are on the screen at once, it reads data from an average of those points.
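The second change can be sketched as collapsing simultaneous touch points into their average before any data lookup happens. This Python fragment is illustrative only; the function name and coordinates are not from the exhibit's actual code:

```python
def effective_touch_point(points):
    """Collapse one or more (x, y) touch points into their average,
    so multiple fingers behave like a single touch instead of freezing
    the interaction."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

one_finger = effective_touch_point([(100, 40)])            # unchanged
two_fingers = effective_touch_point([(100, 40), (200, 80)])  # midpoint
print(one_finger, two_fingers)
```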

3. Accessing the information found on the graph axes

What happened during testing?

When people who are blind were testing an early version of the prototype, in order to get information about the number of watts or number of miles per hour at any specific data point shown on the screen, they would have to remember how many increments they had passed on the axes. Unless they went back and counted the increments on each axis, they were unable to tell which point they were touching.

How is this addressed in the final component?

1) Axes titles as well as graph values are read aloud when a visitor moves their finger along each axis. 2) When a finger is held down in one place on the graph or along the trend scrub bar, data values from that area are verbalized (e.g. "Average power production: X watts at Y miles per hour").
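The held-finger readout might look something like the following sketch. The data values, window size, and function name here are made up for illustration, not taken from the exhibit's software:

```python
def average_power_at(wind_speed, points, window=1.0):
    """Average the power of data points near a wind speed on the x-axis."""
    near = [watts for mph, watts in points if abs(mph - wind_speed) <= window]
    if not near:
        return None  # no data here: the exhibit plays static instead
    return sum(near) / len(near)

data = [(15, 1200), (16, 1400), (16, 1322), (22, 2000)]  # (mph, watts) pairs
avg = average_power_at(16, data)
# The average is then phrased for the audio readout:
print(f"Average power production: {avg:.0f} watts at 16 miles per hour")
```

Returning a sentinel for empty regions matters here: silence was ambiguous to visitors, so "no nearby data" must map to an explicit sound (static) rather than to nothing.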

4. Being introduced to the component features

What happened during testing?

During testing of a close-to-final version of the prototype where visitors could explore all five graphs, an introductory broadcast audio clip played when the first graph button was pushed. This audio clip verbalized information that was included on the exhibit label about what to do at the component. During formative evaluation, this introductory audio could be interrupted if a visitor pushed another button or touched the screen. All visitors who tested in this session interrupted the audio before it finished, often doing so when the audio encouraged them to touch the screen. Many visitors had difficulty knowing what to do and which options were available for exploring the data, and many did not use or did not understand the instructions and information given at the end of the audio.

How is this addressed in the final component?

The introductory audio blurb is now un-interruptible. Visitors must listen to this audio broadcast before being able to interact, something uncharacteristic of exhibit interactives at the Museum of Science. If a visitor touches the screen or pushes a button while the locked audio is broadcasting, a negative sound is given as feedback so visitors know that the exhibit is not broken, but they must wait until they hear “now you can explore on your own” to move on. After implementing this change, visitors understood the instructions and options for exploration more clearly and did not appear to be negatively impacted by the un-interruptible intro audio.

5. Differentiating wind turbines

What happened during testing?

Throughout testing, many visitors were having trouble figuring out what the terms "Aerovironment," "Windspire," "Proven," "Swift," and "Skystream" meant. These were the wind turbine names that labeled each button and graph. Understanding that these terms were names of different wind turbines was essential for visitors to understand the exhibit content.

How is this addressed in the final component?

We first tried labeling the turbines with a letter (e.g. Wind Turbine A: Proven), but visitors who are blind disliked this strategy because the word that differentiated the buttons was not the first one they heard. Instead, we decided to change or modify some turbine names so that visitors can better understand that they are brand names. For instance, “Aerovironment” was changed to “AVX1000” and “Proven” was changed to “Proven 6.” This solution allowed the first word of each name to be unique while making the turbine names less confusing.

Buttons and high contrast tactile scale versions of the wind turbines
Final wind turbine names used in the exhibit: these names are found under the buttons and above the high contrast tactile images of each wind turbine, as well as on screen as graph titles.

6. Connecting each graph to its accompanying wind turbine

What happened during testing?

Visitors had some difficulties throughout the formative evaluation understanding that pushing a new button would change the graph shown on the screen and present data from a different wind turbine. For instance, some visitors were able to interpret the data on the screen but weren’t sure what the difference was between different graphs.

How is this addressed in the final component?

1) Each button that corresponds to a wind turbine lights up when visitors choose that button/graph, and 2) an area of the screen is designated for an animation of the related turbine. When a new button is pushed, an animated image of the turbine whose data is shown on the accompanying graph is shown on this part of the screen.

Final exhibit component where the button for Proven 6 is lit up and the screen in the upper left corner shows an animation of Proven 6.

7. Finding the component welcoming

What happened during testing?

A few groups who tested the prototypes throughout the formative evaluation mentioned that they would not be likely to walk up to a screen with graphs on it, either because they didn’t like graphs and didn’t think the activity would be fun, or because they found graphs complex and intimidating. In one of the later testing sessions, one group talked about how much they ended up enjoying the interactive, even though they talked about it initially looking boring and complex when they walked up to it.

How is this addressed in the final component?

A welcome screen was added to the component that shows an animated drawing of the spinning wind turbines mounted on the Museum roof, whose power production data is represented in the graphs. This screen instructs visitors to “press a round button to begin.” If the screen is touched, the instruction “press a round button below the screen to begin” is also read aloud.

Welcome prompt screen on the final component.



by Stephanie Iacovelli on Oct 1, 2014

CMME Final Exhibit Component

For the Creating Museum Media for Everyone (CMME) project, the team from the Museum of Science, Boston, aimed to develop a proof-of-concept exhibit component that used multisensory options to display data and whose components could be adapted into a basic toolkit for use by other museums. The development of this exhibit was kicked off with two back-to-back workshops featuring talks by experts in the field and working sessions to explore some possible directions for an accessible digital interactive. This post gives an overview of the final exhibit component and reviews the goals and constraints of the project.

Video tour of the exhibit (closed captions available in the "CC" option along the bottom edge of the video window):

If you are unable to watch the video, scroll down for text and image descriptions of the exhibit.

Goals and constraints:

Although the project included exploring many different technological paths, there were goals for both the overall project as well as the specific exhibit component. The project aimed to create shareable results that would help others in the museum field create more accessible digital interactives that could support data interpretation.

Project goals:

  • To further the science museum field’s understanding of ways to research, develop, and evaluate inclusive digital interactives
  • To develop a universally designed computer-based multi-sensory interactive that allows visitors to explore [and manipulate]* data
  • To develop an open-source software framework [allowing the design of the full interactive to be adapted to fit any institution]*
  • To provide an exemplar that will allow other museums to represent data sets as universally accessible scatterplot [or bar]* graphs

*Bracketed portions of the project goals were explored, but are not reflected in the final exhibit component installed on the Museum floor. Code for programming these tasks was developed and will be released in an open source toolkit later this fall for institutions to explore.

Exhibit goals:

  • Visitors will understand abstract wind turbine data through multi-sensory interaction and interpretation
  • Visitors will improve their data analysis skills to learn about wind turbine technology
  • Visitors will view themselves as science learners through their interaction with and manipulation of wind turbine data

Final exhibit component:

We revised an existing component in the Museum of Science’s Catching the Wind exhibition, which allows a broad range of visitors to explore power production data from wind turbines mounted on the Museum roof. Below are text descriptions and pictures of the exhibit that match the content in the walkthrough video above:

Catching the Wind exhibition panel
Catching the Wind exhibition

The final exhibit component is part of a larger 25-foot long exhibition extending to the left and right. The component includes an informational label and a touch screen computer activity. The 4-foot wide computer interactive contains auditory and visual graphs showing power production of wind turbines mounted on the Museum's roof.

Exhibit component contains an informational label and an interactive computer touch screen
CMME exhibit component

There is a large printed label above the computer screen with images and statistics for each turbine type. The lower left side of the exhibit has an audio phone handset with two buttons. The square “Audio Text” button gives a physical description of the exhibit component. The round “Next Audio” button walks a visitor through the printed imagery and text on the labels.

Slanted panel label and touch screen
Slanted panel label and touch screen

The slanted panel along the front of the exhibit contains a touch screen, a small introductory label with simplified instructions, a square “More Audio” button which plays a detailed broadcast audio introduction to the graph, and five round buttons with corresponding high-contrast tactile versions of the turbines. A tactile adult person is used for scale next to each turbine.

Buttons and high contrast tactile scale versions of the wind turbines
Buttons and high-contrast tactile scale versions of the wind turbines

Close-up image of high contrast tactile scale versions of the wind turbines
Close-up of high-contrast tactile scale versions of the wind turbines

When idle, the large touch screen shows text prompting visitors to “press a round button to begin.” Audio also articulates this prompt when the screen is touched.

Welcome prompt screen
Welcome prompt screen

To begin the activity, a visitor chooses a graph to explore by pressing one of the five round buttons below the computer touch screen. When any one of the buttons is pushed for the first time, a visual and audio introduction to the graph is played. Visitor interaction is limited during the introduction. Once the introduction has concluded, the visitor can then explore the displayed scatterplot graph of power production for that turbine or press another button to view a different graph.

Auditory and visual introduction to the graph
Still from auditory and visual introduction to the graph. Graph area is highlighted. Text within pop-up shown in picture: Touch to explore individual data points, Static = no data

The large rectangular section of exposed computer touch screen has tactile edges with notches that correspond to the axes and grid lines in the scatterplot graphs on the screen. When these are touched, the axes titles and grid line increments are read aloud. Within the graph area, the scatterplot dots are visible and when touched, are articulated by a tone that corresponds with their value. When the visitor touches an area where no data points are present, the visitor hears static. When a visitor holds their finger in one place on the graph, a pop-up text box and audio readout articulate the power produced at that wind speed and how many data points are present in that area of the graph.

Pop-up text box of power production when visitor holds finger on screen within graph area
Pop-up text box when visitor holds finger on screen within graph area. Text within pop-up shown in the picture: 1361 watts in winds of 16 MPH 6 data points

Along the bottom edge of the main graph area, there is a small, thin line of exposed computer touch screen. When this is touched, the trend line for the data is sonified, corresponding to the location of the visitor’s finger along the x-axis. If the visitor holds their finger in one place along this trend exploration bar, a text box will pop up and audio verbalizes the average power produced at that wind speed.
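The "higher pitch = more power" mapping can be sketched as a simple linear interpolation from power to frequency. The frequency range and maximum power below are illustrative assumptions, not the exhibit's actual tuning:

```python
def power_to_pitch(watts, max_watts=2000.0, low_hz=220.0, high_hz=880.0):
    """Linearly map a power value onto an audible frequency range,
    clamping out-of-range values so the tone stays within bounds."""
    fraction = max(0.0, min(1.0, watts / max_watts))
    return low_hz + fraction * (high_hz - low_hz)

low, mid, high = power_to_pitch(0), power_to_pitch(1000), power_to_pitch(2000)
print(low, mid, high)   # low power gives a low tone, peak power a high tone
```

As a visitor drags a finger along the trend exploration bar, the x position is converted to a wind speed, the trend value at that speed is looked up, and a function like this picks the pitch of the tone that plays.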

Highlighted trend exploration bar below graph area of the screen
Still from introduction to the graph, highlighting the trend exploration bar below graph area of the screen. Text within pop-up shown in the picture: Touch to hear the trend line, Higher pitch = more power

To the left of the graph screen, there is also an image of the turbine for which the current data is being shown. When this image is touched, audio articulates the image.

Turbine image and graph of data from that turbine
Image of Skystream wind turbine and graph of power production data from that turbine


by Emily O'Hara on Sep 30, 2014

IMLS Funds New Partnership Between Open Exhibits & Omeka

We are excited to announce a new partnership between the Roy Rosenzweig Center for History and New Media at George Mason University, Ideum (makers of Open Exhibits), and the University of Connecticut’s Digital Media Center.

Our organizations have been awarded a National Leadership Grant for Museums from the Institute of Museum and Library Services to extend two open museum platforms: Open Exhibits and Omeka. (A full list of awardees can be found on the IMLS website.)

The project is called Omeka Everywhere. This new initiative will help keep Open Exhibits free and open for the next three years (our NSF Funding ended last month). In addition, a set of new initiatives for Open Exhibits and Omeka are planned. Here is a brief description of the project.

Dramatically increasing the possibilities for visitor access to collections, Omeka Everywhere will offer a simple, cost-effective solution for connecting onsite web content and in-gallery multi-sensory experiences, affordable to museums of all sizes and missions, by capitalizing on the strengths of two successful collections-based open-source software projects: Omeka and Open Exhibits.

Currently, museums are expected to engage with visitors, share content, and offer digitally-enabled experiences everywhere: in the museum, on the Web, and on social media networks. These ever-increasing expectations, from visitors to museum administrators, place a heavy burden on the individuals creating and maintaining these digital experiences. Content experts and museum technologists often become responsible for multiple systems that do not integrate with one another. Within the bounds of tight budgets, it is increasingly difficult for institutions to meet visitors’ expectations and to establish a cohesive digital strategy. Omeka Everywhere will provide a solution to these difficulties by developing a set of software packages, including Collections Viewer templates, mobile and touch table applications, and the Heist application, that bring digital collections hosted in Omeka into new spaces, enabling new kinds of visitor interactions.

Omeka Everywhere will expand audiences for museum-focused publicly-funded open source software projects by demonstrating how institutions of all sizes and budgets can implement next-generation computer exhibit elements into current and new exhibition spaces. Streamlining the workflows for creating and sharing digital content with online and onsite visitors, the project will empower smaller museums to rethink what is possible to implement on a shoestring budget. By enabling multi-touch and 3D interactive technologies on the museum floor, museums will reinvigorate interest in their exhibitions by offering on-site visitors unique experiences that connect them with the heart of the institution—their collections.

by Jim Spadaccini on Sep 19, 2014