Technical Article
Windows Phone 7™ Gestures Compared
Lab version: 1.0.0
Last updated: August 25, 2014
Phone Operating System Flavors
Creating a Gesture-Aware Application
Windows Phone 7 introduces gestures as part of the operating system.
This technical article compares ideas about gestures and their implementation across several phone operating systems, focusing on Windows Phone 7 as a reference.
Objectives
When you’ve finished the article, you will have:
· A high-level understanding of the gestures used by phone operating systems.
· A clear view of the ways that gesture implementations differ across resistive and capacitive touch screens, and across various phone operating systems.
· Knowledge of how to implement gesture-aware applications in Windows Phone 7.
Handheld devices and, in particular, smartphone devices, have evolved over the years to use touch screen interaction. The primary user interface evolved from stylus-operated devices—with or without additional hardware buttons—to finger-based touch screens over resistive touch screen hardware, using the human finger as a "big stylus." Since resistive screens rely on the application of firm pressure, these input devices had to settle for press/long press/release operations, and had little use for drag-and-drop actions.
Dragging an object with a finger over a resistive touch screen is a frustrating task. Why? Because during the drag operation we instinctively tend to release some of the pressure from the screen and drop the dragged object too soon. Moreover, resistive touch screens are capable of detecting only a single touch point, thereby limiting the possible actions the user can perform.
Capacitive touch screens
The recent use of capacitive touch screen hardware in phones has allowed the user to have better control over the location of the selecting finger. The hardware detects the touch location—regardless of the amount of pressure applied to the screen—by applying a tiny amount of electrical current through the finger, and then by calculating the position accordingly.
Because of their hardware implementation, capacitive screens, unlike older resistive touch screens, support multiple touch locations. Capacitive screens provide for a multitouch experience and unlock a wide range of innovative gestures that users can apply to control applications.
New interaction possibilities
In addition to using an input method to select and then drag an object on the screen, we can use capacitive touch screens to hold, pinch, rotate, enlarge, and throw away an object, and so on.
These input methods, known as gestures, are virtual actions that the user applies to the phone screen. The hardware determines the type of gesture based on the location, velocity, and direction of each finger touching the screen.
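As an illustration of that idea, the following framework-independent sketch (plain C#; the class name and the 10-pixel threshold are assumptions for this example, not platform values) classifies a single finger's displacement into a tap or a directional swipe:

```csharp
using System;

public static class SwipeClassifier
{
    // Classify a single-finger movement by its dominant axis.
    // deltaX/deltaY are the finger's total displacement in pixels.
    // The 10-pixel dead zone is an illustrative threshold, not a platform value.
    public static string Classify(double deltaX, double deltaY)
    {
        if (Math.Abs(deltaX) < 10 && Math.Abs(deltaY) < 10)
            return "Tap";                       // barely moved: treat as a tap
        if (Math.Abs(deltaX) >= Math.Abs(deltaY))
            return deltaX > 0 ? "SwipeRight" : "SwipeLeft";
        return deltaY > 0 ? "SwipeDown" : "SwipeUp";
    }
}
```

A real touch stack also weighs velocity and timing, but the core decision is this kind of geometric comparison.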
Phone Operating System Flavors
As of today, three major phone operating systems employ gestures: Apple iOS™, Google Android™, and Microsoft Windows Phone 7.
Because there is no uniform standard for gestures, each operating system has its own set. Some gestures are shared across operating systems; others are not.
Next is a review of the differences among operating systems with regard to gesture support.
Apple iOS gesture set
Apple iOS, used mainly on iPhone™, iPad™, and iPod touch™ devices, supports the following gestures:
Tap – Press or select a screen object: a brief touch within a bounded area on the screen.
Double tap – Two rapid sequential taps on the same object.
Swipe – Move a finger across the screen and raise it without stopping.
Drag/pan – Hold the finger over a screen object and move it around.
Pinch – Hold two fingers on the screen, and in a relatively straight, virtual line move toward (pinch in) or away from (pinch out) each other.
Rotate – Hold two fingers on the screen and move them in opposite directions on different virtual lines; one finger acts as a center while the other circles around it.
Apple iOS 4 introduces two new gestures:
Long press – Touch a screen object without releasing.
Three-finger tap – A tap performed with three fingers simultaneously.
Google Android™ gesture set
Google Android™ takes a different approach to gestures. The basic set is limited, but the operating system also lets you record new gestures into gesture sets that applications can load and recognize later.
The basic set includes:
Single tap – Press or select a screen object.
Double tap – Two rapid sequential taps on the same object.
Down – Touch a spot on the screen with a finger. This is the first phase of a tap, fling, or another gesture.
Up – Finger no longer touches the screen. This is the last phase of a tap, fling, or another gesture.
Fling – Move a finger across the screen and raise the finger without stopping.
Long press – Touch a screen object without releasing it.
Scroll – Hold the finger over a screen object, and then move the finger.
Windows Phone 7 gesture set
Windows Phone 7 supports the following gestures:
Tap – Press or select a screen object: a brief touch within a bounded area on the screen.
Double tap – Two rapid sequential taps on the same object.
Pan – Hold a finger on the screen and move it around.
Flick – Move a finger across the screen and raise the finger without stopping, while the finger is still in motion. This gesture may be used to create kinetic movements, and can follow a pan gesture.
Touch and hold – Touch a screen object for a defined time.
Also, the following gestures are supported for multitouch:
Pinch – Hold two fingers on the screen, and move the fingers toward each other.
Stretch – Hold two fingers on the screen, and move the fingers away from each other.
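Because a flick is raised while the finger is still in motion, it is commonly used to seed kinetic (inertial) scrolling: the release velocity is decayed frame by frame until the content coasts to a stop. A minimal, framework-independent sketch in plain C# (the damping factor and cutoff are illustrative assumptions, not platform constants):

```csharp
using System;
using System.Collections.Generic;

public static class KineticScroller
{
    // Simulate inertial scrolling: start at `position` with `velocity`
    // (pixels per frame) and apply a per-frame damping factor until the
    // motion falls below a small cutoff. Returns the position at each frame.
    public static List<double> Simulate(double position, double velocity,
                                        double damping = 0.9)
    {
        var frames = new List<double>();
        while (Math.Abs(velocity) > 0.5)   // stop once motion is negligible
        {
            position += velocity;
            velocity *= damping;            // friction slows the scroll
            frames.Add(position);
        }
        return frames;
    }
}
```

With damping below 1.0, the total travel is bounded (a geometric series), which is what gives kinetic scrolling its natural-feeling deceleration.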
Similarities and differences
The following table compares Windows Phone 7 gestures with their equivalents in the other two operating systems.
Table 1. Gesture comparison
Gesture name | Windows Phone 7 usage | Apple iOS equivalent | Google Android equivalent
Tap | Select an object, or stop any content from moving on the screen | Tap | Tap
Double tap | Toggle between zoomed-in and zoomed-out states | Double tap | Double tap
Pan | Move ("drag") an object on the screen to a different location | Drag/pan | Scroll
Flick | Move the whole canvas in any direction | Swipe | Fling
Touch and hold | Display a context menu or an options page for an item | Long press | Long press
Pinch | Zoom out, or shrink an object (depending on the application) | Pinch | No standard gesture
Stretch | Zoom in, or enlarge an object (depending on the application) | Pinch | No standard gesture
Creating a Gesture-Aware Application
In order to create a gesture-aware application in Windows Phone 7 under XNA, we first must know how Windows Phone 7 exposes gestures to the programmer. Once we know how gestures are exposed, we can use programmable gestures to define which gestures are allowed in our application, to sample incoming gestures, and to react to them.
Programmable gestures
Windows Phone 7 breaks the above logical gestures into more elaborate programmable gestures, as follows:
Table 2. Logical vs. programmable gestures
Logical gesture | Programmable gestures | Notes
Tap | Tap |
Double tap | DoubleTap |
Pan | FreeDrag (holding and moving in any direction), HorizontalDrag, or VerticalDrag, followed by a DragComplete gesture | Pan starts with the detection of a drag gesture and ends with the detection of the DragComplete gesture. An application may limit the user to horizontal-only, vertical-only, both horizontal and vertical, or free drag types.
Flick | Flick | A flick may be detected following a drag gesture set, and should be treated accordingly.
Touch and hold | Hold |
Pinch | Pinch gesture followed by a PinchComplete gesture | Both pinch and stretch logical gestures are achieved by the programmable Pinch/PinchComplete gesture set; the changing deltas between the touch points let the programmer determine whether a pinch or a stretch is being performed.
Stretch | Pinch gesture followed by a PinchComplete gesture | Same as pinch; the deltas distinguish the two.
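The pinch/stretch distinction described above can be sketched in framework-independent C# (the tuple-based points and method name here are assumptions for illustration, not XNA types): whether the distance between the two touch points shrinks or grows after their deltas are applied decides the case.

```csharp
using System;

public static class PinchClassifier
{
    // p1/p2 are the two touch points; d1/d2 are their per-frame deltas.
    // If the fingers end up closer together, it is a pinch; farther apart,
    // a stretch. Coordinates are (X, Y) pairs in pixels.
    public static string Classify(
        (double X, double Y) p1, (double X, double Y) d1,
        (double X, double Y) p2, (double X, double Y) d2)
    {
        double before = Distance(p1, p2);
        double after = Distance((p1.X + d1.X, p1.Y + d1.Y),
                                (p2.X + d2.X, p2.Y + d2.Y));
        return after < before ? "Pinch" : "Stretch";
    }

    static double Distance((double X, double Y) a, (double X, double Y) b)
        => Math.Sqrt((a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y));
}
```

In an XNA handler the same comparison would be made with the GestureSample's two positions and two deltas.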
Enabling desired gestures
Assuming we already have an XNA Windows Phone 7 Game project open in Visual Studio 2010, we should instruct the framework to enable the desired gestures in our application.
As described in the previous section, XNA allows the programmer to define which gestures the application is capable of consuming, thus allowing the user to perform those gestures. There might be a case, for example, where you would like to disallow FreeDrag/VerticalDrag gestures in your application. You may, however, want to allow HorizontalDrag or to support the Flick gesture.
In order to define the above instruction, we use the namespace Microsoft.Xna.Framework.Input.Touch to access the static class TouchPanel. Within this static class, we then access the static (Flags enumeration) property EnabledGestures and set it to the desired set of enabled gestures.
Gestures must be enabled before we can use them. Thus, TouchPanel.EnabledGestures must be set to the appropriate set of gesture types before calling TouchPanel.IsGestureAvailable or TouchPanel.ReadGesture (both are described later in the article) for the first time.
The following example shows how to allow Tap, DoubleTap, and Hold touch gestures, and to disallow all the rest:
C#
TouchPanel.EnabledGestures = GestureType.Tap |
                             GestureType.DoubleTap |
                             GestureType.Hold;
Waiting for a gesture and reacting accordingly
We now want to detect and react to incoming gestures. To do this, we sample the current gestures from within the XNA project Update override method. We test if new gestures are available by checking the Boolean property TouchPanel.IsGestureAvailable. Next, we sample the gestures, and then react accordingly.
When true is returned, we know that there are new gestures waiting to be sampled. We then call the TouchPanel.ReadGesture method, which returns an instance of the GestureSample class. The returned GestureSample instance holds a sample of the detected gesture, supplying information such as the type and location of the gesture and the deltas between touch points (for multitouch gestures). In the following code example, the Boolean property is tested, a sample is acquired, and information is collected into a string for later display:
C#
string infoMessage = "";
while (TouchPanel.IsGestureAvailable)
{
    GestureSample gestureSample = TouchPanel.ReadGesture();
    infoMessage += String.Format(
        "Type: {0}\nFirst touch point position: {1},{2}; Delta: {3},{4}\n" +
        "Second touch point position: {5},{6}; Delta: {7},{8}\n",
        gestureSample.GestureType,
        gestureSample.Position.X, gestureSample.Position.Y,
        gestureSample.Delta.X, gestureSample.Delta.Y,
        gestureSample.Position2.X, gestureSample.Position2.Y,
        gestureSample.Delta2.X, gestureSample.Delta2.Y);
}
Special considerations
A single gesture on the screen may generate several subsequent gestures to sample. For example, a DoubleTap gesture will always be preceded by a Tap gesture located near the succeeding DoubleTap gesture. When coding against gestures in the XNA Framework, such subsequent gestures must be considered when reacting to incoming gestures.
Gestures are not always required in our applications. Sometimes we only need to know where the user's fingers are on the screen at any given moment.
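One way to handle this, sketched here with plain strings standing in for the XNA gesture types (the helper name is an assumption for this example), is to drop a Tap that is immediately followed by a DoubleTap:

```csharp
using System;
using System.Collections.Generic;

public static class GestureFilter
{
    // Given the raw gesture stream, drop any "Tap" that is immediately
    // followed by a "DoubleTap", since both are reported for one double tap.
    public static List<string> SuppressTapBeforeDoubleTap(IEnumerable<string> raw)
    {
        var result = new List<string>();
        foreach (var g in raw)
        {
            if (g == "DoubleTap" && result.Count > 0 &&
                result[result.Count - 1] == "Tap")
            {
                result.RemoveAt(result.Count - 1); // the Tap was half of this double tap
            }
            result.Add(g);
        }
        return result;
    }
}
```

In a real game loop the same idea is usually implemented with a short timer: act on a Tap only after the double-tap window has expired without a DoubleTap arriving.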
This could be compared to registering a MouseClick event on a standard Windows application, or registering MouseDown and MouseUp events.
A MouseClick event represents the combination of subsequent MouseDown and MouseUp events at the same location. Registering to the raw MouseDown or MouseUp events allows us to make our own decisions about what the user is trying to do.
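That decision can be sketched in plain C# (the method name and the 8-pixel tolerance are illustrative assumptions): a down/up pair counts as a click only when the two positions are close together.

```csharp
using System;

public static class ClickDetector
{
    // A down event followed by an up event counts as a click only when
    // the two positions are within `tolerance` pixels of each other.
    public static bool IsClick(double downX, double downY,
                               double upX, double upY,
                               double tolerance = 8.0)
    {
        double dx = upX - downX, dy = upY - downY;
        return Math.Sqrt(dx * dx + dy * dy) <= tolerance;
    }
}
```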
Reading raw touch information
In order to react to raw user activities, we call the static method TouchPanel.GetState from within the project's Update override method. The static method returns an instance of the TouchCollection structure, holding a collection of touch locations (each represented by the TouchLocation structure). The collection holds one instance of the structure for each finger touching the screen.
Using the raw method requires the programmer to watch closely for touch screen changes, and to interpret them accordingly.
The following code example runs within the Update override method, gets the current touch state, tests if there are any current touched positions, and collects the information into a string for later display:
C#
string infoMessage = "";
TouchCollection touchLocations = TouchPanel.GetState();
if (touchLocations.Count > 0)
{
    infoMessage = String.Format("Detected {0} touch points at the following locations:\n",
        touchLocations.Count);
    for (int i = 0; i < touchLocations.Count; i++)
        infoMessage += String.Format("{0}. {3} at {1}, {2};\n", i,
            touchLocations[i].Position.X, touchLocations[i].Position.Y,
            touchLocations[i].State);
}
Summary
Gestures are an exciting new way for applications to interact with the user under Windows Phone 7, employing the power of capacitive touch screen hardware.
Programming gesture-aware applications against the XNA Framework is quite simple and straightforward.
Instead of having to interpret and calculate touch locations over time, the programmer may rely on the XNA Framework to do the major part of the work of interpreting standard gestures, leaving only the application-specific logic to implement. Nevertheless, the programmer may still use the raw touch input detection method when required, and combine the two methods where needed.