Creating Multiple Objects

Add Interactive Touch Objects with CML and GML, Method 3, Part 2

Tutorial Series: Add Interactive Touch Objects with CML & GML: Method 3

Tutorial 2: Add multiple GML-defined gestures using CML constructors & GML manipulations

Introduction
Method 3: Using CML Constructors & GML Manipulations

In this tutorial we are going to use the CML and GML documents to create touch objects and manage interactions. Each object created using CML can have multiple independent gestures directly attached. All gesture events and property updates on the associated touch object are handled automatically by the GW3 framework and can be controlled via the GML and CML documents. This tutorial requires Adobe Flash CS5+ and GestureWorks 3 (download the SDK).

 

Adding Multiple GML-Defined Gestures,
Using Method 3: CML Constructors & GML Manipulations

In GestureWorks 3 we designed Creative Markup Language (CML) to simplify the development of multitouch applications by providing advanced methods in Flash for developers to create interactive objects and containers that can be manipulated using configurable gestures. Each application created with GestureWorks 3 has two associated XML documents, “my_application.cml” and “my_gestures.gml”, located in the folders “bin/library/cml” and “bin/library/gml” respectively.

As part of the CML toolkit in GestureWorks 3 there are multiple built-in components that can be accessed using “my_application.cml”. For example:


<CanvasKit>
    <ComponentKit>
        <TouchContainer id="touchContainer" x="200" y="200" rotation="-45" dimensionsTo="image" mouseChildren="true">
            <ImageElement id="image" src="library/assets/wb0.jpg"/>
            <GestureList>
            </GestureList>
        </TouchContainer>
        <TouchContainer id="touchContainer2" x="400" y="400" rotation="45" dimensionsTo="image2" mouseChildren="true">
            <ImageElement id="image2" src="library/assets/wb1.jpg"/>
            <GestureList>
            </GestureList>
        </TouchContainer>
    </ComponentKit>
</CanvasKit>

This creates a new TouchContainer object that houses an ImageElement object holding a dynamically loaded bitmap image. The “touchContainer” is then positioned and rotated inside the ComponentKit container, which is placed in the CanvasKit and then on stage. This is repeated for the “touchContainer2” item.
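
The same constructor pattern extends to any number of touch objects: each additional TouchContainer needs only a unique id, a child element to size against, and its own placement attributes. A minimal sketch of a third container (the id, coordinates and image path here are illustrative, not part of the tutorial files):

<TouchContainer id="touchContainer3" x="600" y="200" rotation="0" dimensionsTo="image3" mouseChildren="true">
    <!-- illustrative asset path; any bitmap under library/assets would work -->
    <ImageElement id="image3" src="library/assets/wb2.jpg"/>
    <GestureList>
    </GestureList>
</TouchContainer>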

To attach a gesture to a touch object defined in the CML document, it must be added to the <GestureList> block associated with the TouchContainer. To attach multiple gestures to a touch object, multiple gesture references are added to the <GestureList> block and activated by setting the “gestureOn” attribute to true. For example:


<CanvasKit>
    <ComponentKit>
        <TouchContainer id="touchContainer" x="200" y="200" rotation="-45" dimensionsTo="image">
            <ImageElement id="image" src="library/assets/wb0.jpg"/>
            <GestureList>
                <Gesture ref="n-drag" gestureOn="true"/>
                <Gesture ref="n-scale" gestureOn="true"/>
                <Gesture ref="n-rotate" gestureOn="true"/>
            </GestureList>
        </TouchContainer>
        <TouchContainer id="touchContainer2" x="400" y="400" rotation="45" dimensionsTo="image2">
            <ImageElement id="image2" src="library/assets/wb1.jpg"/>
            <GestureList>
                <Gesture ref="n-drag" gestureOn="true"/>
                <Gesture ref="n-rotate" gestureOn="true"/>
                <Gesture ref="n-scale" gestureOn="true"/>
            </GestureList>
        </TouchContainer>
    </ComponentKit>
</CanvasKit>

The CML document in this case describes the attachment of three gestures, “n-drag”, “n-scale” and “n-rotate”, to the two TouchContainer items. The effect of this is to activate matching, analysis and processing for the three gestures on each touch object. These gestures are uniquely defined in the root GML document “my_gestures.gml” located in the bin folder of the application.
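
Because activation is controlled per reference by the “gestureOn” attribute, a gesture can be left defined in the list but switched off declaratively. A sketch, assuming that gestureOn="false" deactivates matching and analysis for that gesture, as the attribute name suggests:

<GestureList>
    <Gesture ref="n-drag" gestureOn="true"/>
    <Gesture ref="n-scale" gestureOn="true"/>
    <!-- assumed: defined but inactive, so this object cannot be rotated -->
    <Gesture ref="n-rotate" gestureOn="false"/>
</GestureList>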

The traditional event model in Flash employs the explicit use of event listeners and handlers to manage gesture events on a touch object. In GestureWorks 3, however, Gesture Markup Language can be used to directly control how gesture events map to touch object properties and therefore how touch objects are transformed. These tools are integrated into the gesture analysis engine inside every touch object and allow custom gesture manipulations and property updates to occur on each touch object.


<Gesture id="n-drag" type="drag">
    <match>
        <action>
            <initial>
                <cluster point_number="0" point_number_min="1" point_number_max="5" translation_threshold="0"/>
            </initial>
        </action>
    </match>
    <analysis>
        <algorithm>
            <library module="drag"/>
            <returns>
                <property id="drag_dx"/>
                <property id="drag_dy"/>
            </returns>
        </algorithm>
    </analysis>
    <processing>
        <inertial_filter>
            <property ref="drag_dx" release_inertia="true" friction="0.996"/>
            <property ref="drag_dy" release_inertia="true" friction="0.996"/>
        </inertial_filter>
    </processing>
    <mapping>
        <update>
            <gesture_event>
                <property ref="drag_dx" target="x" delta_threshold="true" delta_min="0.01" delta_max="100"/>
                <property ref="drag_dy" target="y" delta_threshold="true" delta_min="0.01" delta_max="100"/>
            </gesture_event>
        </update>
    </mapping>
</Gesture>

In this example the gesture “n-drag”, as defined in the root GML document “my_gestures.gml”, directly maps the values returned from gesture processing, “drag_dx” and “drag_dy”, to the targets “x” and “y”. Internally the delta values are added to the “$x” and “$y” properties of the touch object (“TouchContainer”). This translates the object on stage, following the motion of the center of the touch point cluster.
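
The feel of the drag can be tuned in the processing block without touching any ActionScript. A sketch of an alternative inertial_filter, assuming friction acts as a per-frame velocity multiplier (so values further below 1 damp the release glide more quickly; the numbers here are illustrative):

<processing>
    <inertial_filter>
        <!-- assumed: 0.95 retains less velocity per frame than 0.996, so the
             object glides a shorter distance after the touch points are released -->
        <property ref="drag_dx" release_inertia="true" friction="0.95"/>
        <!-- presumably release_inertia="false" would stop the object immediately on release -->
        <property ref="drag_dy" release_inertia="true" friction="0.95"/>
    </inertial_filter>
</processing>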


<Gesture id="n-scale" type="scale">
    <match>
        <action>
            <initial>
                <cluster point_number="0" point_number_min="2" point_number_max="5" separation_threshold="0"/>
            </initial>
        </action>
    </match>
    <analysis>
        <algorithm>
            <library module="scale"/>
            <returns>
                <property id="scale_dsx"/>
                <property id="scale_dsy"/>
            </returns>
        </algorithm>
    </analysis>
    <processing>
        <inertial_filter>
            <property ref="scale_dsx" release_inertia="true" friction="0.996"/>
            <property ref="scale_dsy" release_inertia="true" friction="0.996"/>
        </inertial_filter>
    </processing>
    <mapping>
        <update>
            <gesture_event>
                <property ref="scale_dsx" target="scaleX" func="linear" factor="0.0033" delta_threshold="true" delta_min="0.0001" delta_max="1"/>
                <property ref="scale_dsy" target="scaleY" func="linear" factor="0.0033" delta_threshold="true" delta_min="0.0001" delta_max="1"/>
            </gesture_event>
        </update>
    </mapping>
</Gesture>

The gesture “n-scale”, as defined in the root GML document “my_gestures.gml”, directly maps the values returned from gesture processing, “scale_dsx” and “scale_dsy”, to the targets “scaleX” and “scaleY”. Internally the delta values are multiplied with the “$scaleX” and “$scaleY” properties of the touch object (“touchContainer”), scaling the object on stage about the center of the touch point cluster.
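
The “factor” attribute in the mapping block controls how strongly the returned deltas drive the target property; with func="linear" it presumably acts as a simple multiplier. A sketch that makes scaling more sensitive (the factor value is illustrative):

<mapping>
    <update>
        <gesture_event>
            <!-- assumed: a larger factor scales the object faster for the same change in finger separation -->
            <property ref="scale_dsx" target="scaleX" func="linear" factor="0.005" delta_threshold="true" delta_min="0.0001" delta_max="1"/>
            <property ref="scale_dsy" target="scaleY" func="linear" factor="0.005" delta_threshold="true" delta_min="0.0001" delta_max="1"/>
        </gesture_event>
    </update>
</mapping>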


<Gesture id="n-rotate" type="rotate">
    <match>
        <action>
            <initial>
                <cluster point_number="0" point_number_min="2" point_number_max="5" rotation_threshold="0"/>
            </initial>
        </action>
    </match>
    <analysis>
        <algorithm>
            <library module="rotate"/>
            <returns>
                <property id="rotate_dtheta"/>
            </returns>
        </algorithm>
    </analysis>
    <processing>
        <noise_filter>
            <property ref="rotate_dtheta" noise_filter="true" percent="30"/>
        </noise_filter>
        <inertial_filter>
            <property ref="rotate_dtheta" release_inertia="true" friction="0.996"/>
        </inertial_filter>
    </processing>
    <mapping>
        <update>
            <gesture_event>
                <property ref="rotate_dtheta" target="rotation" delta_threshold="true" delta_min="0.1" delta_max="10"/>
            </gesture_event>
        </update>
    </mapping>
</Gesture>

The gesture “n-rotate”, as defined in the root GML document “my_gestures.gml”, directly maps the value returned from gesture processing, “rotate_dtheta”, to the target “rotation”. Internally the delta value is added to the “$rotation” property of the touch object (“TouchContainer”) and the object is rotated about the center of the touch point cluster on stage.
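
The rotation pipeline also demonstrates chained processing: a noise_filter runs before the inertial_filter. A sketch with a higher percent value, assuming percent controls how aggressively small jittery deltas are discarded before mapping (the value is illustrative):

<processing>
    <noise_filter>
        <!-- assumed: a higher percent filters out more low-amplitude jitter -->
        <property ref="rotate_dtheta" noise_filter="true" percent="50"/>
    </noise_filter>
    <inertial_filter>
        <property ref="rotate_dtheta" release_inertia="true" friction="0.996"/>
    </inertial_filter>
</processing>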

When touch points are placed on the touch object they are collected into a local “cluster”. The motion of this cluster is inspected for matching gesture actions, and the motion and geometry are then analyzed, processed and returned for mapping. Each gesture activated on the touch object can be triggered independently. When multiple gestures are detected in the same frame, the touch object undergoes a blended transformation, which results in animated motion on stage.

Adding the three gestures “n-drag”, “n-scale” and “n-rotate” in combination allows the touch object to be interactively moved, scaled and rotated around a dynamic center of motion. This gives a “natural” feel to the touch object, as it can pivot and scale around any touch point in the cluster. These “affine” transformations are managed internally by the touch object (“TouchContainer”).

The benefit of using CML to construct touch objects and GML to handle gesture events is that complex interactive media objects with sophisticated gesture-based manipulations can be created in a few simple lines of code. The details of component creation, media loading and unloading, display layouts, gesture interactions and event management are all handled automatically by the CML and GML framework in GestureWorks 3.

The tools available as part of the CML and GML internal framework allow developers to rapidly create configurable Flash applications that can be completely described using editable XML documents. This method mimics best practices for dynamically assigning object-based media assets and formatting, providing a framework that fully externalizes object gesture descriptions and interactions. This approach allows developers to efficiently refine UI/UX interactions, layouts and content without the need to recompile applications.

*For more information on how to add multiple gestures see: Creating Interactive Touch Objects Using AS3 & GML, Part 2 (Adding Multiple GML Defined Manipulations).

Note: Method 3 defines a workflow that uses a combination of CML and GML to create and manage touch objects and their interactions. Included in GW3 are “traditional” methods for explicitly creating touch objects and managing touch/gesture interactions using ActionScript.