Technical Article
Windows Phone 7™ Gestures Compared
Lab version: 1.0.0
Last updated: August 25, 2014
Phone Operating System Flavors
Creating a Gesture-Aware Application
Windows Phone 7 introduces gestures as part of the operating system.
This technical article compares ideas about gestures and their implementation across several phone operating systems, focusing on Windows Phone 7 as a reference.
Objectives
When you’ve finished the article, you will have:
· A high-level understanding of the gestures used by phone operating systems.
· A clear view of the ways that gesture implementations differ between resistive and capacitive touch screens, and across various phone operating systems.
· Knowledge of how to implement gesture-aware applications in Windows Phone 7.
Handheld devices and, in particular, smartphone devices have evolved over the years to use touch screen interaction. The primary user interface evolved from stylus-operated devices, with or without additional hardware buttons, to finger-based touch screens built on resistive touch screen hardware, which treat the human finger as a "big stylus." Because resistive screens rely on the application of firm pressure, these input devices had to settle for press/long press/release operations and had little use for drag-and-drop actions.
Dragging an object with a finger over a resistive touch screen is a frustrating task. Why? Because during the drag operation we instinctively tend to release some of the pressure from the screen and drop the dragged object too soon. Moreover, resistive touch screens are capable of detecting only a single touch point, thereby limiting the possible actions the user can perform.
Capacitive touch screens
The recent use of capacitive touch screen hardware in phones has allowed the user to have better control over the location of the selecting finger. The hardware detects the touch location—regardless of the amount of pressure applied to the screen—by applying a tiny amount of electrical current through the finger, and then by calculating the position accordingly.
Because of their hardware implementation, capacitive screens, unlike older resistive touch screens, support multiple touch locations. Capacitive screens provide for a multitouch experience and unlock a wide range of innovative gestures that users can apply to control applications.
New interaction possibilities
Beyond selecting and then dragging an object on the screen, capacitive touch screens let us hold, pinch, rotate, enlarge, and even "throw away" an object.
These input methods, known as gestures, are virtual actions that the user applies to the phone screen. The hardware determines the type of gesture based on the location, velocity, and direction of each finger touching the screen.
Phone Operating System Flavors
As of today, three major phone operating systems employ gestures: Apple iOS™, Google Android™, and Microsoft Windows Phone 7.
Because there is no uniform standard for gestures, each operating system has its own set. Some gestures match those of other operating systems; others do not.
The following sections review how gesture support differs among these operating systems.
Apple iOS gesture set
Apple iOS, used mainly on iPhone™, iPad™, and iPod touch™ devices, supports the following gestures:
Tap – Press or select a screen object: a brief touch within a bounded area on the screen.
Double tap – Two rapid sequential taps on the same object.
Swipe – Move a finger across the screen and raise it without stopping.
Drag/pan – Hold the finger over a screen object and move it around.
Pinch – Hold two fingers on the screen, and in a relatively straight, virtual line move toward (pinch in) or away from (pinch out) each other.
Rotate – Hold two fingers on the screen and move them in opposite directions on different virtual lines; one finger acts as a center while the other circles around it.
Apple iOS 4 introduces two new gestures:
Long press – Touch a screen object without releasing.
Three-finger tap – Tap the screen with three fingers at once.
Google Android™ gesture set
Google Android™ takes a different approach to gestures. The basic set is limited, but the operating system also lets you record custom gestures into gesture sets that applications can recognize later.
The basic set includes:
Single tap – Press or select a screen object.
Double tap – Two rapid sequential taps on the same object.
Down – Touch a spot on the screen with a finger. This is the first phase of a tap, fling, or another gesture.
Up – Finger no longer touches the screen. This is the last phase of a tap, fling, or another gesture.
Fling – Move a finger across the screen and raise the finger without stopping.
Long press – Touch a screen object without releasing it.
Scroll – Hold the finger over a screen object, and then move the finger.
Windows Phone 7 gesture set
Windows Phone 7 supports the following gestures:
Tap – Press or select a screen object—a brief touch within a bounded area on the screen.
Double tap – Two rapid sequential taps on the same object.
Pan – Hold a finger on the screen and move it around.
Flick – Move a finger across the screen and raise it while still in motion. This gesture may be used to create kinetic movements, and can follow a pan gesture.
Touch and hold – Touch a screen object for a defined time.
Also, the following gestures are supported for multitouch:
Pinch – Hold two fingers on the screen, and move the fingers toward each other.
Stretch – Hold two fingers on the screen, and move the fingers away from each other.
Similarities and differences
Windows Phone 7 gestures are compared to the gestures of the other two operating systems in the following table.
Table 1. Gesture comparison
| Gesture name | Windows Phone 7 usage | Apple iOS equivalent | Google Android equivalent |
| --- | --- | --- | --- |
| Tap | Select an object, or stop content that is moving on the screen | Tap | Tap |
| Double tap | Toggle between zoomed-in and zoomed-out states | Double tap | Double tap |
| Pan | Move ("drag") an object on the screen to a different location | Drag/pan | Scroll |
| Flick | Move the whole canvas in any direction | Swipe | Fling |
| Touch and hold | Display a context menu or an options page for an item | Long press | Long press |
| Pinch | Zoom out, or shrink an object (depending on the application) | Pinch (in) | No standard gesture |
| Stretch | Zoom in, or enlarge an object (depending on the application) | Pinch (out) | No standard gesture |
Creating a Gesture-Aware Application
To create a gesture-aware application in Windows Phone 7 under XNA, we first must know how Windows Phone 7 exposes gestures to the programmer. Once we know that, we use programmable gestures to define which gestures our application allows, to sample incoming gestures, and to react to them.
Programmable gestures
Windows Phone 7 breaks the above logical gestures into more elaborate programmable gestures, as follows:
Table 2. Logical vs. programmable gestures
| Logical gesture | Programmable gestures | Notes |
| --- | --- | --- |
| Tap | Tap | |
| Double tap | DoubleTap | |
| Pan | FreeDrag (holding and moving in any direction), HorizontalDrag, or VerticalDrag, followed by a DragComplete gesture | Pan starts with the detection of one of the drag gestures and ends with the detection of DragComplete. An application may limit the user to horizontal-only, vertical-only, both horizontal and vertical, or free drag types. |
| Flick | Flick | A flick may be detected following a drag gesture set, and should be treated accordingly. |
| Touch and hold | Hold | |
| Pinch | Pinch gesture followed by a PinchComplete gesture | Both the pinch and stretch logical gestures map to the programmable Pinch/PinchComplete gesture set; the changing deltas between the touch points let the programmer determine whether a pinch or a stretch is being performed. |
| Stretch | Pinch gesture followed by a PinchComplete gesture | Same as pinch: the changing deltas between the touch points distinguish a stretch from a pinch. |
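The pinch-versus-stretch decision can be sketched in code. The following fragment is illustrative only: it assumes a GestureSample named gestureSample of type GestureType.Pinch, obtained from TouchPanel.ReadGesture, and recovers the previous positions of the two touch points by subtracting the deltas before comparing distances:

```csharp
// Illustrative sketch: deciding between pinch and stretch from a
// GestureType.Pinch sample (gestureSample from TouchPanel.ReadGesture()).
Vector2 previousPoint1 = gestureSample.Position - gestureSample.Delta;
Vector2 previousPoint2 = gestureSample.Position2 - gestureSample.Delta2;

float previousDistance = Vector2.Distance(previousPoint1, previousPoint2);
float currentDistance = Vector2.Distance(gestureSample.Position,
                                         gestureSample.Position2);

// Touch points moving apart: the user is stretching (zoom in);
// moving together: the user is pinching (zoom out).
bool isStretch = currentDistance > previousDistance;
```

The same comparison can be repeated for every Pinch sample until the PinchComplete sample arrives.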
Enabling desired gestures
Assuming we already have an XNA Windows Phone 7 Game project open in Visual Studio 2010, we should instruct the framework to enable the desired gestures in our application.
As described in the previous section, XNA allows the programmer to define which gestures the application is capable of consuming, thus allowing the user to perform those gestures. There might be a case, for example, where you would like to disallow FreeDrag/VerticalDrag gestures in your application. You may, however, want to allow HorizontalDrag or to support the Flick gesture.
To define this, we use the namespace Microsoft.Xna.Framework.Input.Touch to access the static class TouchPanel, and set its static Flags-enumeration property EnabledGestures to the desired set of enabled gestures.
Gestures must be enabled before we can use them. Thus, TouchPanel.EnabledGestures must be set to the appropriate set of gesture types before calling TouchPanel.IsGestureAvailable or TouchPanel.ReadGesture (both are described later in the article) for the first time.
The following example shows how to allow Tap, DoubleTap, and Hold touch gestures, and to disallow all the rest:
C#
TouchPanel.EnabledGestures = GestureType.Tap |
                             GestureType.DoubleTap |
                             GestureType.Hold;
Waiting for a gesture and reacting accordingly
We now want to detect and react to incoming gestures. To do this, we sample the current gestures from within the XNA project's Update override method. We test whether new gestures are available by checking the Boolean property TouchPanel.IsGestureAvailable, then sample the gestures and react accordingly.
When true is returned, we know that there are new gestures waiting to be sampled. We then call the TouchPanel.ReadGesture method, which returns an instance of the GestureSample class. The returned GestureSample instance holds a sample of the detected gesture, supplying information such as the type and location of the gesture, and the deltas between touch points (for multitouch gestures). In the following code example, the Boolean property is tested, a sample is acquired, and information is collected into a string for later display:
C#
string infoMessage = "";
while (TouchPanel.IsGestureAvailable)
{
    GestureSample gestureSample = TouchPanel.ReadGesture();
    infoMessage += String.Format(
        "Type: {0}\nFirst touch point position: {1},{2}; Delta: {3},{4}\n" +
        "Second touch point position: {5},{6}; Delta: {7},{8}\n",
        gestureSample.GestureType,
        gestureSample.Position.X, gestureSample.Position.Y,
        gestureSample.Delta.X, gestureSample.Delta.Y,
        gestureSample.Position2.X, gestureSample.Position2.Y,
        gestureSample.Delta2.X, gestureSample.Delta2.Y);
}
Special considerations
A single physical gesture on the screen can produce several programmable gesture samples. For example, a DoubleTap gesture is always preceded by a Tap gesture located near the succeeding DoubleTap. When coding against gestures in the XNA Framework, such subsequent gestures must be considered when reacting to incoming gestures.
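One way to cope with the Tap that precedes every DoubleTap is to keep the Tap reaction harmless if a DoubleTap follows. The dispatch loop below is a sketch only; HandleTap and HandleDoubleTap are hypothetical application methods, not framework APIs:

```csharp
while (TouchPanel.IsGestureAvailable)
{
    GestureSample gesture = TouchPanel.ReadGesture();
    switch (gesture.GestureType)
    {
        case GestureType.Tap:
            // A Tap sample also arrives before every DoubleTap, so this
            // action should stay harmless when a DoubleTap follows it.
            HandleTap(gesture.Position);
            break;
        case GestureType.DoubleTap:
            HandleDoubleTap(gesture.Position);
            break;
    }
}
```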
Gestures are not always required in our applications. Sometimes we only need to know where the user's fingers are on the screen at any given moment.
This could be compared to registering a MouseClick event on a standard Windows application, or registering MouseDown and MouseUp events.
A MouseClick event represents the combination of subsequent MouseDown and MouseUp events at the same location. Registering to the raw MouseDown or MouseUp events allows us to make our own decisions about what the user is trying to do.
Reading raw touch information
To react to raw user activities, we call the static method TouchPanel.GetState from within the project's Update override method. This method returns a TouchCollection structure holding a collection of touch locations, each represented by a TouchLocation structure. The collection holds one TouchLocation for each finger touching the screen.
Using the raw method requires the programmer to watch closely for touch screen changes, and to interpret them accordingly.
The following code example runs within the Update override method, gets the current touch state, tests if there are any current touched positions, and collects the information into a string for later display:
C#
string infoMessage = "";
TouchCollection touchLocations = TouchPanel.GetState();
if (touchLocations.Count > 0)
{
    infoMessage = String.Format(
        "Detected {0} touch points at the following locations:\n",
        touchLocations.Count);
    for (int i = 0; i < touchLocations.Count; i++)
    {
        infoMessage += String.Format("{0}. {3} at {1}, {2};\n",
            i, touchLocations[i].Position.X, touchLocations[i].Position.Y,
            touchLocations[i].State);
    }
}
Summary
Gestures are an exciting new way for applications to interact with the user under Windows Phone 7, employing the power of capacitive touch screen hardware.
Programming gesture-aware applications against the XNA Framework is quite simple and straightforward.
Instead of having to interpret and calculate touch locations over time, the programmer can rely on the XNA Framework to do most of the work of interpreting standard gestures, leaving only the application-specific logic to implement. Nevertheless, the programmer may still use the raw touch input detection method when required, and combine the two methods where needed.
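As a closing sketch of that combination (assuming a standard XNA Game subclass with gestures already enabled through TouchPanel.EnabledGestures), a single Update override can drain recognized gestures first and then consult the raw touch state:

```csharp
protected override void Update(GameTime gameTime)
{
    // 1. Consume any recognized gestures first.
    while (TouchPanel.IsGestureAvailable)
    {
        GestureSample gesture = TouchPanel.ReadGesture();
        // React to gesture.GestureType here...
    }

    // 2. Fall back to raw touch data for anything gestures do not cover.
    TouchCollection touches = TouchPanel.GetState();
    foreach (TouchLocation touch in touches)
    {
        // touch.State is Pressed, Moved, or Released; touch.Position
        // gives the raw screen coordinates of this finger.
    }

    base.Update(gameTime);
}
```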