Open Exhibits Tutorials
Add Interactive Touch Objects with CML and GML, Method 3, Part 1

Introduction
Method 3: Using CML Constructors & GML Manipulations

In this tutorial we are going to add a GML gesture to a touch object defined and managed in CML. Objects created using CML can have multiple independent gestures directly attached. All gestureEvents and property updates on the associated touch object are handled automatically by the GW3 framework and can be controlled via the GML and CML documents. This tutorial requires Adobe Flash CS5+ and GestureWorks 3 (download the SDK).

 

Adding a GML-Defined Gesture
Using Method 3: CML Constructors & GML Manipulations

In GestureWorks 3 we designed Creative Markup Language (CML) to simplify the development of multitouch applications by providing advanced methods in Flash for developers to create interactive objects and containers that can be manipulated using configurable gestures. Each application created with GestureWorks 3 has two associated XML documents, “my_application.cml” and “my_gestures.gml”, located in the folders “bin/library/cml” and “bin/library/gml” respectively.

As part of the CML toolkit in GestureWorks 3 there are multiple built-in components that can be accessed using “my_application.cml”. In this example an ImageElement component from the ComponentKit is used to dynamically load an image into a touch object (“touchContainer”), set its properties and place it on stage. For example:


<CanvasKit>
    <ComponentKit>
        <TouchContainer id="touchContainer" x="300" y="300" rotation="-45" dimensionsTo="image">
            <ImageElement id="image" src="library/assets/blimp0.jpg"/>
            <GestureList>
            </GestureList>
        </TouchContainer>
    </ComponentKit>
</CanvasKit>

To attach a gesture to a touch object defined in the CML document “my_application.cml”, simply add a gesture between the “GestureList” tags associated with the touchContainer. For example:

 


<CanvasKit>
    <ComponentKit>
        <TouchContainer id="touchContainer" x="300" y="300" rotation="-45" dimensionsTo="image">
            <ImageElement id="image" src="library/assets/blimp0.jpg"/>
            <GestureList>
                <Gesture ref="n-drag" gestureOn="true"/>
            </GestureList>
        </TouchContainer>
    </ComponentKit>
</CanvasKit>

 

This adds the gesture “n-drag” to the touch object (“touchContainer”) and effectively activates gesture analysis and processing. Any touch point placed on the touch object is added to the local cluster. The touch object will inspect touch point clusters for a matching gesture “action” and then calculate cluster motion in the x and y direction. The result is then processed and prepared for mapping.

The traditional event model in Flash employs the explicit use of event listeners and handlers to manage gesture events on a touch object. However, in GestureWorks 3 Gesture Markup Language can be used to directly control how gesture events map to touch object properties, and therefore how touch objects are transformed. These tools are integrated into the gesture analysis engine inside each touch object and allow custom gesture manipulations and property updates to occur on each touch object.


<Gesture id="n-drag" type="drag">
    <match>
        <action>
            <initial>
                <cluster point_number="0" point_number_min="1" point_number_max="5" translation_threshold="0"/>
            </initial>
        </action>
    </match>
    <analysis>
        <algorithm>
            <library module="drag"/>
            <returns>
                <property id="drag_dx"/>
                <property id="drag_dy"/>
            </returns>
        </algorithm>
    </analysis>
    <processing>
        <inertial_filter>
            <property ref="drag_dx" release_inertia="false" friction="0.996"/>
            <property ref="drag_dy" release_inertia="false" friction="0.996"/>
        </inertial_filter>
    </processing>
    <mapping>
        <update>
            <gesture_event>
                <property ref="drag_dx" target="x" delta_threshold="true" delta_min="0.01" delta_max="100"/>
                <property ref="drag_dy" target="y" delta_threshold="true" delta_min="0.01" delta_max="100"/>
            </gesture_event>
        </update>
    </mapping>
</Gesture>

In this example the gesture “n-drag”, as defined in the root GML document “my_gestures.gml”, directly maps the values returned from gesture processing (“drag_dx” and “drag_dy”) to the “target” properties “x” and “y”. Internally the delta values are added to the “$x” and “$y” properties of the touch object. This translates the object on stage with the center of the touch point cluster: as the points move, so does the touch object, effectively “dragging” the touch object across the stage.

As shown in this example, a single* gesture (defined in the GML) is attached to a single touch object (defined in the CML). However, multiple independent gestures can be attached to multiple touch objects using a single GML and a single CML document.
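As a sketch of that extension (the gesture names “n-scale” and “n-rotate” follow the naming convention of “n-drag” above but are assumptions here, not definitions covered in this tutorial), a touch container could simply list several gestures side by side:

```xml
<TouchContainer id="touchContainer" x="300" y="300" rotation="-45" dimensionsTo="image">
    <ImageElement id="image" src="library/assets/blimp0.jpg"/>
    <GestureList>
        <!-- each Gesture "ref" must match a gesture "id" defined in my_gestures.gml -->
        <Gesture ref="n-drag" gestureOn="true"/>
        <Gesture ref="n-scale" gestureOn="true"/>
        <Gesture ref="n-rotate" gestureOn="true"/>
    </GestureList>
</TouchContainer>
```

Each listed gesture is analyzed independently on the same touch point cluster, so drag, scale and rotate manipulations can occur concurrently on the object.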

The benefit of using CML to construct the touch object and GML to handle gesture events is that complex interactive media objects can be created with sophisticated gesture-based manipulations in a few simple lines of code. The details of component creation, media loading and unloading, display layouts, gesture interactions and event management are all handled automatically by the CML and GML framework in GestureWorks 3.

The tools available as part of the CML and GML internal framework allow developers to rapidly create configurable Flash applications that can be completely described using editable XML documents. This method mimics best practices used for dynamically assigning object-based media assets and formatting, providing a framework that fully externalizes object gesture descriptions and interactions. This approach allows developers to efficiently refine UI/UX interactions, layouts and content without the need to recompile applications.
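For instance (a sketch: the attribute values here are illustrative edits to the “n-drag” definition above, not recommended settings), enabling drag inertia only requires editing the processing block of “my_gestures.gml” and re-running the application; no recompile is needed:

```xml
<processing>
    <inertial_filter>
        <!-- release_inertia="true" lets the object coast after the touch points lift; -->
        <!-- friction controls how quickly that coasting motion decays -->
        <property ref="drag_dx" release_inertia="true" friction="0.996"/>
        <property ref="drag_dy" release_inertia="true" friction="0.996"/>
    </inertial_filter>
</processing>
```

Because the GML is loaded at runtime, interaction tuning of this kind can be iterated entirely in the XML documents.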

*For more information on how to add multiple gestures see: Creating Interactive Touch Objects Using AS3 & GML, Part 2 (Adding Multiple GML Defined Manipulations).

Note: Method 3 defines a workflow that uses a combination of CML and GML to create and manage touch objects and their interactions. Also included in GW3 are “traditional” methods for explicitly creating touch objects and managing touch/gesture interactions using ActionScript.