U.S. patent application number 12/701150 was filed with the patent office on 2010-02-05 and published on 2011-08-11 as publication number 20110195781, "Multi-Touch Mouse in Gaming Applications."
This patent application is currently assigned to Microsoft Corporation. Invention is credited to Hrvoje Benko, Billy Chen, Eyal Ofek, and Daniel A. Rosenfeld.
United States Patent Application 20110195781
Kind Code: A1
Chen, Billy; et al.
August 11, 2011
MULTI-TOUCH MOUSE IN GAMING APPLICATIONS
Abstract
Keyboards, mice, joysticks, customized gamepads, and other
peripherals are continually being developed to enhance a user's
experience when playing computer video games. Unfortunately, many
of these devices provide users with limited input control because
of the complexity of today's gaming applications. For example, many
computer video games require a combination of mouse and keyboard to
control even the simplest of in-game tasks (e.g., walking into a
room and looking around may require several keyboard keystrokes and
mouse movements). Accordingly, one or more systems and/or
techniques for performing in-game tasks based upon user input
within a multi-touch mouse are disclosed herein. User input
comprising one or more user interactions detected by spatial sensors
within the multi-touch mouse may be received. A wide variety of
in-game tasks (e.g., character movements, character actions,
character view, etc.) may be performed based upon the user
interactions (e.g., a swipe gesture, a mouse position change,
etc.).
Inventors: Chen, Billy (Bellevue, WA); Benko, Hrvoje (Seattle, WA); Ofek, Eyal (Redmond, WA); Rosenfeld, Daniel A. (Seattle, WA)
Assignee: Microsoft Corporation, Redmond, WA
Family ID: 44354145
Appl. No.: 12/701150
Filed: February 5, 2010
Current U.S. Class: 463/37
Current CPC Class: A63F 13/06 20130101; A63F 13/214 20140902; A63F 2300/6045 20130101; A63F 13/10 20130101; A63F 2300/1006 20130101; A63F 13/42 20140902
Class at Publication: 463/37
International Class: A63F 9/24 20060101 A63F009/24
Claims
1. A method for performing in-game tasks based upon user input on a
multi-touch mouse comprising: receiving user input comprising a
first user interaction with a first spatial sensor of a multi-touch
mouse and a second user interaction with a second spatial sensor of
the multi-touch mouse; and performing a first in-game task based
upon the first user interaction and a second in-game task based
upon the second user interaction.
2. The method of claim 1, the first spatial sensor overlaid the
second spatial sensor within the multi-touch mouse.
3. The method of claim 1, the user input comprising a third user
interaction with the second spatial sensor; and the performing
comprising performing a third in-game task based upon the third
user interaction.
4. The method of claim 3, the first user interaction comprising a
multi-touch mouse position change; the second user interaction
comprising a swipe gesture on the surface of the multi-touch mouse;
and the third user interaction comprising a grip of the multi-touch
mouse.
5. The method of claim 1, the first user interaction comprising a
multi-touch mouse position change; and the second user interaction
comprising a swipe gesture on the surface of the multi-touch mouse,
a swipe gesture length, and a swipe gesture direction.
6. The method of claim 5, comprising: performing a view control
in-game task based upon mapping the multi-touch mouse position
change with a view change in-game task; and performing an in-game
navigation control task based upon mapping the swipe gesture with
movement of a character; mapping the swipe gesture length with a
speed of the movement; and mapping the swipe gesture direction with
a movement direction.
7. The method of claim 1, the second user interaction comprising a
first finger to second finger distance on the multi-touch mouse
surface.
8. The method of claim 1, the second user interaction comprising a
change of grip of the multi-touch mouse; and the performing the
second in-game task comprising mapping the change of grip with a
camera view roll in-game task.
9. The method of claim 1, the first user interaction and the second
user interaction received concurrently within the user input, and
the first in-game task and the second in-game task performed
concurrently.
10. The method of claim 3, comprising: detecting the second user
interaction within a first portion of the multi-touch mouse surface
mapped to the first in-game task; and detecting the third user
interaction within a second portion of the multi-touch mouse
surface mapped to the third in-game task.
11. A system for performing in-game tasks based upon user input on
a multi-touch mouse, comprising: a mapping component configured to:
maintain a mapping of one or more user interactions on a
multi-touch mouse to one or more in-game tasks; and a task
component configured to: receive user input comprising a first user
interaction with a first spatial sensor of a multi-touch mouse and
a second user interaction with a second spatial sensor of the
multi-touch mouse; and perform a first in-game task corresponding
to the first user interaction and a second in-game task
corresponding to the second user interaction based upon the
mapping.
13. The system of claim 11, the mapping component configured to:
map a first portion of the multi-touch mouse surface to a third
in-game task and a second portion of the multi-touch mouse surface
to a fourth in-game task.
14. The system of claim 11, the user input comprising a third user
interaction with the second spatial sensor of the multi-touch
mouse.
15. The system of claim 14, the task component configured to:
perform a third in-game task corresponding to the third user
interaction based upon the mapping.
16. The system of claim 11, the first spatial sensor overlaid the
second spatial sensor within the multi-touch mouse.
17. The system of claim 16, the task component configured to
perform the first in-game task and the second in-game task
concurrently.
18. The system of claim 11, the task component configured to:
receive user input comprising the first user interaction
corresponding to a multi-touch mouse position change; and the
second user interaction corresponding to a swipe gesture on the
surface of the multi-touch mouse; perform a character selection
in-game task of one or more characters based upon the multi-touch
mouse position change corresponding to locations of one or more
characters within a game; and perform a character movement to
destination in-game task of the selected characters based upon the
swipe gesture.
19. The system of claim 11, the mapping component configured to:
maintain the mapping based upon user preference defined by one or
more user settings.
20. A method for performing in-game tasks based upon user input on
a multi-touch mouse comprising: receiving user input comprising a
first user interaction with a first spatial sensor of a multi-touch
mouse, a second user interaction with a second spatial sensor of
the multi-touch mouse, and a third user interaction with a button
of the multi-touch mouse, the first spatial sensor, second spatial
sensor, and the button in an overlaid configuration within the
multi-touch mouse; and performing a first in-game task based upon
the first user interaction, a second in-game task based upon the
second user interaction, and a third in-game task based upon the
third user interaction concurrently.
Description
BACKGROUND
[0001] Today's gaming applications are becoming increasingly
comprehensive, allowing players to accomplish a wide range of
complex tasks. For example, players may perform tasks ranging from
flying F-16 fighter jets to commanding army squadrons in WWII
scenarios. Many of these tasks require a large number of input
parameters from various input devices, such as a mouse and keyboard
combination. For example, in a first-person-shooter (FPS), the
player's view orientation may be mapped to the mouse position, the
player's movement may be mapped to keyboard keys (e.g., w, a, s,
d), and the player's actions may be mapped to a combination of
keyboard keys and/or mouse clicks. Unfortunately, this type of
input scheme may provide limited control in a computer video game.
For example, the keyboard's inherent binary control may limit the
player's ability to control tasks that require continuous control,
such as in navigation (e.g., varying travel speed). Furthermore,
the mouse provides limited control in computer video games due to
the limited number of inputs (e.g., 3 input mouse buttons may not
map easily to 8 different character abilities). In addition, these
input schemes may have a high learning curve for new players.
SUMMARY
[0002] This Summary is provided to introduce a selection of
concepts in a simplified form that are further described below in
the Detailed Description. This Summary is not intended to identify
key factors or essential features of the claimed subject matter,
nor is it intended to be used to limit the scope of the claimed
subject matter.
[0003] Among other things, one or more systems and/or techniques
for performing in-game tasks based upon user input within a
multi-touch mouse are disclosed herein. It may be appreciated that
a multi-touch mouse may be interpreted as a mouse comprising two or
more spatial sensors, which may or may not be overlaid on one
another. For example, a multi-touch mouse may comprise a first
spatial sensor configured to detect the position of the multi-touch
mouse (e.g., a position of the mouse based upon the relative
location of the mouse on a surface, such as a mouse pad, direction
change, velocity, etc.). The multi-touch mouse may comprise a
second spatial sensor configured to detect gestures and/or the
position of one or more fingers on the surface of the multi-touch
mouse. For example, the second spatial sensor may be configured to
detect a grip, change in a grip, a swipe gesture, a finger
location, a finger press, a first finger to second finger distance,
and/or other hand or finger detection. In this example, the first
sensor can be said to face "down" toward a (desktop/mouse pad)
surface upon which the mouse rests/moves, while the second sensor
can be said to face "up" away from said surface, and this can be
thought of as an "overlying" arrangement as the second sensor
(located on an upper surface of the mouse) may be substantially
directly above the first sensor (located on a lower surface of the
mouse). It will be appreciated, however, that other "overlying"
arrangements are possible, and also that such arrangements may not
be necessary. That is, while it may not be unusual for a touch
sensor to be located substantially directly above a movement
sensor, such an arrangement is not necessary. Moreover, "overlying"
as used herein is not meant to be interpreted in a limiting manner,
for example, to mean that the sensors are in direct contact with
one another, influence one another, need to be acted on
concurrently, etc. That is, one sensor can be acted on by a user,
for example, while the other sensor is not. Generally, a
multi-touch mouse may comprise one or more sensors (e.g., buttons)
that may or may not overlay one or more other sensors (e.g., none,
one, some, all) of the mouse.
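By way of a minimal sketch (not drawn from the application itself), the two-sensor arrangement may be modeled as two independent input streams, one reporting mouse motion against the desk surface and one reporting finger contacts on the upper surface. All of the Python names below (MouseMotion, TouchPoint, MultiTouchMouse) are hypothetical:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class MouseMotion:
        """Reading from the first (down-facing) spatial sensor."""
        dx: float = 0.0        # position change across the desk/mouse-pad surface
        dy: float = 0.0
        velocity: float = 0.0  # e.g., derived from consecutive readings

    @dataclass
    class TouchPoint:
        """A finger contact detected by the second (up-facing) spatial sensor."""
        finger_id: int
        x: float               # location on the mouse's upper surface
        y: float

    @dataclass
    class MultiTouchMouse:
        """Two overlaid sensors whose readings arrive independently."""
        motion: Optional[MouseMotion] = None   # may be absent in a given frame
        touches: List[TouchPoint] = field(default_factory=list)

    # One sensor may report while the other is idle; the overlaid
    # configuration does not require the sensors to be acted on concurrently.
    frame = MultiTouchMouse(motion=MouseMotion(dx=3.0, dy=-1.0),
                            touches=[TouchPoint(0, 0.4, 0.7)])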
[0004] It may be appreciated that a spatial sensor may (but need
not) detect one or more user interactions concurrently. For
example, a second spatial sensor may detect a grip and a swipe
gesture concurrently. It may be appreciated that a user interaction
may comprise one or more spatial measurements. For example, a
second spatial sensor may detect a finger swipe gesture as
comprising a swipe gesture, a swipe gesture direction, a swipe
gesture length, and/or a swipe gesture speed. It may be appreciated
that the term "finger" may be interpreted as one or more digits of
a hand (e.g., thumb, index finger, middle finger, ring finger,
pinky, etc.). It may be appreciated that, in one example, a
character may be interpreted as an in-game character representing a
user within the gaming application environment. It may also be
appreciated that any type of user interactions are contemplated
herein and that said term (e.g., as used in the claims) is not
meant to be limited to merely the examples provided herein. For
example, while a grip, swipe, pinch, finger location, finger press,
distance between digits, etc. may be mentioned herein, these and
any other gestures and/or interactions are contemplated.
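To illustrate a user interaction carrying several spatial measurements, a detected swipe might be represented as a single record such as the following hypothetical Python sketch; the field names and units are assumptions, not taken from the application:

    from dataclasses import dataclass

    @dataclass
    class SwipeGesture:
        """One user interaction comprising several spatial measurements."""
        direction: float  # heading of the swipe on the mouse surface, in degrees
        length: float     # distance traveled by the finger, in millimeters
        speed: float      # millimeters per second over the gesture duration

    swipe = SwipeGesture(direction=90.0, length=25.0, speed=80.0)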
[0005] A mapping component may be configured to maintain a mapping
of one or more user interactions within a multi-touch mouse to one
or more in-game tasks. In one example, a finger swipe gesture on
the surface of the multi-touch mouse may be mapped to an avatar
movement in-game task. In another example, a grip (or a change in
grip) of the multi-touch mouse may be mapped to a camera view roll
in-game task. The mappings may be derived based upon user
preferences defined by one or more user settings. In another
example, the mapping component may be configured to map portions of
the multi-touch mouse surface with in-game tasks. For example, the
surface of the multi-touch mouse may be treated as one or more
portions (regions), such that respective portions are mapped to
different in-game tasks (e.g., a first portion relative to the
forefinger location may be mapped to an avatar movement in-game
task, while a second portion relative to the pinky location may be
mapped to a character jump in-game task). It may be appreciated
that multiple mapping variations may be maintained for a single
computer video game.
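A minimal sketch of such a mapping component, assuming a plain dictionary from interaction names (and surface regions) to task callbacks; the specific interaction keys, region names, and task functions below are invented for illustration:

    from typing import Callable, Dict

    InGameTask = Callable[..., None]

    class MappingComponent:
        """Maintains user-interaction -> in-game-task (and region -> task) maps."""

        def __init__(self) -> None:
            self.interaction_map: Dict[str, InGameTask] = {}
            self.region_map: Dict[str, InGameTask] = {}

        def map_interaction(self, interaction: str, task: InGameTask) -> None:
            self.interaction_map[interaction] = task

        def map_region(self, region: str, task: InGameTask) -> None:
            self.region_map[region] = task

        def load_user_settings(self, settings: Dict[str, InGameTask]) -> None:
            # Mappings may be derived from user-defined preferences.
            self.interaction_map.update(settings)

    def move_avatar(**measurements) -> None:
        print("avatar movement in-game task", measurements)

    def roll_camera(**measurements) -> None:
        print("camera view roll in-game task", measurements)

    mapping = MappingComponent()
    mapping.map_interaction("swipe", move_avatar)         # finger swipe -> movement
    mapping.map_interaction("grip_change", roll_camera)   # grip change -> camera roll
    mapping.map_region("forefinger_region", move_avatar)  # portion-based mapping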
[0006] A task component may be configured to receive user input
comprising a first user interaction with a first spatial sensor of
the multi-touch mouse, a second user interaction with a second
spatial sensor of the multi-touch mouse, and/or other user
interactions with spatial sensors within the multi-touch mouse. In
one example, the first user interaction and the second user
interaction may be received concurrently within the user input
because the first spatial sensor may detect the first user
interaction at substantially the same time as the second spatial
sensor detects the second user interaction, due to the overlaid
configuration of the spatial sensors. It may be appreciated that the
overlaid configuration allows the spatial sensors to operate
independently of one another.
[0007] The task component may be configured to perform a first
in-game task corresponding to the first user interaction, a second
in-game task corresponding to the second user interaction, and/or
other in-game tasks corresponding to other user interactions based
upon the mapping. For example, the task component may receive user
input comprising a first user interaction relating to a multi-touch
mouse position change detected by the first spatial sensor, a
second user interaction relating to a first finger to second finger
distance detected by the second spatial sensor, and a third user
interaction relating to a grip detected by the second spatial
sensor. The task component may be configured to perform a character
view change in-game task based upon the multi-touch mouse position
change, a view zoom in-game task based upon the first finger to
second finger distance, and a camera view tilt based upon the grip
as specified in the mapping maintained by the mapping
component.
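Dispatch by the task component could then be sketched as a lookup into the maintained mapping, mirroring the example above (position change to view change, finger distance to zoom, grip to camera tilt); the interaction names and measurement parameters are again illustrative assumptions:

    from typing import Callable, Dict

    def change_view(dx: float, dy: float) -> None:
        print("character view change", dx, dy)

    def zoom_view(distance_mm: float) -> None:
        print("view zoom", distance_mm)

    def tilt_camera(angle: float) -> None:
        print("camera view tilt", angle)

    # Hypothetical mapping mirroring the paragraph above.
    MAPPING: Dict[str, Callable[..., None]] = {
        "mouse_position_change": change_view,
        "finger_distance": zoom_view,
        "grip": tilt_camera,
    }

    def handle_input(user_input: Dict[str, dict]) -> None:
        """Perform the in-game task mapped to each received user interaction."""
        for interaction, measurements in user_input.items():
            task = MAPPING.get(interaction)
            if task is not None:
                task(**measurements)

    # Three interactions received concurrently from the overlaid sensors.
    handle_input({
        "mouse_position_change": {"dx": 4.0, "dy": 0.0},
        "finger_distance": {"distance_mm": 32.0},
        "grip": {"angle": 10.0},
    })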
[0008] To the accomplishment of the foregoing and related ends, the
following description and annexed drawings set forth certain
illustrative aspects and implementations. These are indicative of
but a few of the various ways in which one or more aspects may be
employed. Other aspects, advantages, and novel features of the
disclosure will become apparent from the following detailed
description when considered in conjunction with the annexed
drawings.
DESCRIPTION OF THE DRAWINGS
[0009] FIG. 1 is a flow chart illustrating an exemplary method of
performing in-game tasks based upon user input on a multi-touch
mouse.
[0010] FIG. 2 is a component block diagram illustrating an
exemplary system for performing in-game tasks based upon user input
on a multi-touch mouse.
[0011] FIG. 3A is an illustration of an example of a multi-touch
mouse.
[0012] FIG. 3B is an illustration of an example of a multi-touch
mouse.
[0013] FIG. 3C is an illustration of an example of a multi-touch
mouse.
[0014] FIG. 4 is an illustration of an example of a multi-touch
mouse configured to generate user input based upon detected user
interaction.
[0015] FIG. 5 is an illustration of an example of performing
in-game tasks based upon user input on a multi-touch mouse.
[0016] FIG. 6 is an illustration of an example of performing
in-game tasks based upon user input on a multi-touch mouse.
[0017] FIG. 7 is an illustration of an example of performing
in-game tasks based upon user input on a multi-touch mouse.
[0018] FIG. 8 is an illustration of an exemplary computer-readable
medium wherein processor-executable instructions configured to
embody one or more of the provisions set forth herein may be
comprised.
[0019] FIG. 9 illustrates an exemplary computing environment
wherein one or more of the provisions set forth herein may be
implemented.
DETAILED DESCRIPTION
[0020] The claimed subject matter is now described with reference
to the drawings, wherein like reference numerals are used to refer
to like elements throughout. In the following description, for
purposes of explanation, numerous specific details are set forth in
order to provide a thorough understanding of the claimed subject
matter. It may be evident, however, that the claimed subject matter
may be practiced without these specific details. In other
instances, structures and devices are illustrated in block diagram
form in order to facilitate describing the claimed subject
matter.
[0021] Today, many computer users spend significant time playing
video games through their computers. A wide variety of video game
genres have developed over the years. For example, video games may
be developed based upon action, role playing, first person shooter,
strategy, flight simulation, third person adventure, racing,
massively multiplayer online, and/or other genres. To accommodate the
different play styles associated with the different video game
genres and to provide video game players with an enhanced
experience, different user input devices have been developed. For
example, joysticks, keyboards, mice, gamepads, motion sensors, and
other peripherals are available to video game players. There are
even keyboards customized solely for the purpose of playing
particular games. These input devices are developed in an attempt
to map user input with in-game tasks. Unfortunately, conventional
input devices do not allow for user interaction that adequately
maps to the complex input parameters required by today's computer
video games.
[0022] Accordingly, one or more systems and/or techniques for
performing in-game tasks based upon user input on a multi-touch
mouse are provided herein. The multi-touch mouse allows a video
game player to invoke various in-game tasks within a computer video
game by performing user interactions on the multi-touch mouse that
are mapped to the various in-game tasks. For example, the
multi-touch mouse may comprise a first spatial sensor configured to
detect the position of the multi-touch mouse. It may be appreciated
that the mouse position change may comprise one or more
measurements, such as direction change, speed, acceleration,
velocity, etc. An in-game task, such as a character view change,
may be performed based upon receiving user input of the multi-touch
mouse position change. The multi-touch mouse may comprise a second
spatial sensor configured to detect a plurality of user
interactions mapped to one or more in-game tasks. For example, the
second spatial sensor may detect a finger position, a finger swipe
gesture, a first finger to second finger distance, etc.
[0023] The first spatial sensor and the second spatial sensor allow
a user to perform a wide variety of gestures that may be
individually mapped to different in-game tasks. Because a wide
variety of gestures are available using just the multi-touch mouse,
the need to use an additional input device, such as a keyboard,
and/or the need to perform complex input combinations may be
mitigated. For example, a user may be able to control the movement,
camera view, and actions of a character within a first person
shooter video game using just the multi-touch mouse.
[0024] One embodiment of performing in-game tasks based upon user
input on a multi-touch mouse is illustrated by an exemplary method
100 in FIG. 1. At 102, the method begins. At 104, user input
comprising a first user interaction with a first spatial sensor of
a multi-touch mouse and a second user interaction with a second
spatial sensor of the multi-touch mouse is received. It may be
appreciated that the first user interaction and the second user
interaction may be received concurrently within the user input.
[0025] In one example, user interactions may be mapped to in-game
tasks. For example, if the video game player is engaged with a
strategy video game, then user interactions may be mapped to
in-game tasks of the strategy video game. The in-game tasks (e.g.,
select an infantry, rotate view, issue infantry movement commands,
issue infantry action commands, etc.) may be invoked within the
strategy video game based upon user interactions with the
multi-touch mouse (e.g., a swipe gesture, a multi-touch mouse
position change, a grip or change in grip, etc.).
[0026] In another example, portions (regions) of the multi-touch
mouse surface may be mapped to in-game tasks. That is, a user
interaction with a particular portion of the multi-touch mouse
surface may be mapped to a specific in-game task. For example, if
the video game player is engaged with a flight simulation game,
then user interactions with different portions of the multi-touch
mouse surface may be mapped to different in-game tasks of the
flight simulation game. User interaction with a first portion of
the multi-touch mouse surface may be mapped to
acceleration/deceleration. User interaction with a second portion
of the multi-touch mouse surface may be mapped to a view change of
the pilot. User interaction with a third portion of the multi-touch
mouse surface may be mapped to the flight path direction. It may be
appreciated that the user interaction with the one or more portions
of the multi-touch mouse surface may be detected by one or more
spatial sensors within the multi-touch mouse. It may be appreciated
that a portion (region) of the multi-touch mouse may be mapped to
more than one in-game task. For example, an upper left corner
portion of the multi-touch mouse may be mapped to a zoom aim
in-game task and a shoot in-game task.
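A sketch of region-based dispatch under these assumptions: each portion is an axis-aligned rectangle on a normalized mouse surface, a touch point is hit-tested against every portion, and a portion may carry more than one task. The region bounds and task names below are hypothetical:

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class Region:
        name: str
        bounds: Tuple[float, float, float, float]  # x0, y0, x1, y1, normalized
        tasks: List[Callable[[], None]]            # a portion may map to several tasks

        def contains(self, x: float, y: float) -> bool:
            x0, y0, x1, y1 = self.bounds
            return x0 <= x <= x1 and y0 <= y <= y1

    # Hypothetical flight-simulation layout from the example above.
    regions = [
        Region("throttle", (0.0, 0.0, 0.5, 0.5),
               [lambda: print("acceleration/deceleration")]),
        Region("pilot_view", (0.5, 0.0, 1.0, 0.5),
               [lambda: print("view change of the pilot")]),
        Region("flight_path", (0.0, 0.5, 1.0, 1.0),
               [lambda: print("flight path direction")]),
    ]

    def dispatch_touch(x: float, y: float) -> None:
        """Invoke the task(s) of every portion containing the touch point."""
        for region in regions:
            if region.contains(x, y):
                for task in region.tasks:
                    task()

    dispatch_touch(0.25, 0.25)  # falls within the throttle portion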
[0027] In another example, a first portion may be a first region of
the multi-touch mouse associated with a first in-game task. A
second portion may be a second region of the multi-touch mouse
associated with a second in-game task. It may be appreciated that
the first region and the second region may or may not overlap on
the surface of the multi-touch mouse. In this way, the first
in-game task and second in-game task may be invoked based upon user
interaction with the overlap (and/or degree thereof) of the first
and second regions.
[0028] At 106, a first in-game task may be performed based upon the
first user interaction and a second in-game task may be performed
based upon the second user interaction. The in-game tasks may be
performed based upon the user interactions being mapped to the in-game
tasks. It may be appreciated that the first in-game task and the
second in-game task may be performed concurrently. At 108, the
method ends.
[0029] FIG. 2 illustrates an example of a system 200 configured for
performing in-game tasks 212 based upon user input 204 on a
multi-touch mouse 202. The system 200 may comprise a task component
210 and/or a mapping component 206. The mapping component 206 may
be configured to maintain a mapping 208 of one or more user
interactions on a multi-touch mouse 202 to one or more in-game
tasks of a computer video game 214. For example, a grip user
interaction may be mapped to a camera view roll in-game task, a
swipe user interaction may be mapped to a movement in-game task, a
swipe length may be mapped to a movement speed in-game task, a
multi-touch mouse location may be mapped to a view in-game task,
etc. The mapping 208 may be maintained based upon user preference
defined by one or more user settings. It may be appreciated that a
spatial sensor of the multi-touch mouse 202 may be configured to
detect one or more of the user interactions mapped within the
mapping 208.
[0030] It may be appreciated that the mapping component 206 may be
configured to maintain the mapping 208 as comprising mappings of
one or more portions (regions) of the multi-touch mouse surface to
one or more in-game tasks. That is, user interaction with a first
portion of the multi-touch mouse surface may be mapped to a first
in-game task and user interaction with a second portion of the
multi-touch mouse surface may be mapped to a second in-game task.
It may be appreciated that a spatial sensor of the multi-touch
mouse 202 may be configured to detect user interaction with one or
more of the portions (regions) of the multi-touch mouse surface. It
may be appreciated that the mapping component 206 may map two or
more in-game tasks to a single user interaction. It may be
appreciated that the mapping component 206 may map a single in-game
task to two or more user interactions.
[0031] The multi-touch mouse 202 may comprise a first spatial
sensor, a second spatial sensor, a button, and/or other spatial
sensors. In one example, the first spatial sensor and the second
spatial sensor may be in an overlaid configuration within the
multi-touch mouse 202.
[0032] The task component 210 may be configured to receive user
input 204 comprising a first user interaction detected by a first
spatial sensor of the multi-touch mouse 202 and a second user
interaction detected by a second spatial sensor of the multi-touch
mouse 202. In one example, the task component 210 may be configured
to receive the user input 204 comprising three or more user
interactions (e.g., a third user interaction detected by the second
spatial sensor, a fourth user interaction with a button of the
multi-touch mouse 202) within the user input 204. The task
component 210 may be configured to perform in-game tasks 212 within
the computer video game 214 based upon the user interactions within
the user input 204 and the mapping 208. For example, the task
component 210 may be configured to perform a first in-game task
corresponding to the first user interaction and a second in-game
task corresponding to the second user interaction based upon the
mapping 208 (e.g., the first user interaction is mapped to the
first in-game task within the mapping 208). It may be appreciated
that the task component 210 may be configured to perform one or
more in-game tasks concurrently.
[0033] In one example, the task component 210 may receive the user
input 204 comprising a first user interaction corresponding to a
multi-touch mouse position change and a second user interaction
corresponding to a swipe gesture on the surface of the multi-touch
mouse 202. The mapping 208 may comprise a mapping of a multi-touch
mouse position change to a character selection in-game task and a
mapping of a swipe gesture to a character movement to destination
in-game task. The task component 210 may perform the character
selection in-game task of one or more characters within a strategy
game (the computer video game 214) based upon the multi-touch mouse
position change (user interaction) moving a cursor over the one
more characters within the strategy game. That is, a video game
player may move the multi-touch mouse in such a way that a
corresponding cursor within the strategy game hovers over one or
more characters within the strategy game. In this way, the one or
more characters hovered over by the cursor may be selected. The
task component 210 may perform the character movement to
destination in-game task of the one or more selected characters
based upon the swipe gesture (user interaction). It may be
appreciated that the task component 210 may perform the character
selection in-game task and the character movement to destination
in-game task concurrently.
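This strategy-game example might be sketched as follows, assuming characters are selected when the mouse-driven cursor passes near them and a subsequent swipe sends the current selection to a destination; the geometry and names are illustrative only:

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Character:
        name: str
        position: Tuple[float, float]
        selected: bool = False

    def select_under_cursor(characters: List[Character],
                            cursor: Tuple[float, float],
                            radius: float = 1.0) -> None:
        """Character selection in-game task: select characters the cursor hovers over."""
        cx, cy = cursor
        for ch in characters:
            x, y = ch.position
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                ch.selected = True

    def move_selected(characters: List[Character],
                      destination: Tuple[float, float]) -> None:
        """Character movement to destination in-game task for the selection."""
        for ch in characters:
            if ch.selected:
                ch.position = destination

    squad = [Character("a", (0.0, 0.0)), Character("b", (5.0, 5.0))]
    select_under_cursor(squad, cursor=(0.2, 0.1))  # multi-touch mouse position change
    move_selected(squad, destination=(9.0, 9.0))   # swipe gesture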
[0034] FIG. 3A illustrates an example of a multi-touch mouse 300.
The multi-touch mouse surface may be divided into one or more
portions (regions), which may be individually mapped to in-game
tasks. For example, the multi-touch mouse 300 may comprise a first
portion 302, a second portion 304, a third portion 306, a fourth
portion 308, a fifth portion 310, and/or other portions. In one
example, a first spatial sensor (not illustrated) within the
multi-touch mouse 300 may be configured to detect a change in
position of the multi-touch mouse (e.g., an optical eye located at
the bottom of the multi-touch mouse 300). A second spatial sensor
within the multi-touch mouse 300 may be configured to detect user
interaction within the various portions of the multi-touch mouse
surface. For example, the second spatial sensor may be configured
to detect a swipe gesture within the second portion 304. The second
spatial sensor may be configured to detect a first finger to second
finger distance based upon the first portion 302 and the third
portion 306. The second spatial sensor may be configured to detect a grip
based upon user interaction within the first portion 302, the
second portion 304, the third portion 306, the fourth portion 308,
and the fifth portion 310. It may be appreciated that the
multi-touch mouse 300 may comprise a single surface that is not
divided into multiple portions.
[0035] FIG. 3B illustrates an example of a multi-touch mouse 320.
The multi-touch mouse 320 may comprise a first spatial sensor (not
illustrated) configured to detect the movement of the multi-touch
mouse on a surface, such as a mouse pad (e.g., an optical eye
located at the bottom of the multi-touch mouse 320). The
multi-touch mouse 320 may comprise a second spatial sensor
overlaying the first spatial sensor. The second spatial sensor may
comprise a light source 322, a camera 324, a mirror 326, and/or
other components (e.g., optical components). The second spatial
sensor may be configured to detect user interaction on the surface
328 of the multi-touch mouse 320. For example, the second spatial
sensor may be configured to detect user interaction by a first
finger 330 and user interaction by a second finger 332.
[0036] FIG. 3C illustrates an example of a multi-touch mouse 340.
The multi-touch mouse 340 may comprise a first spatial sensor (not
illustrated) configured to detect the movement of the multi-touch
mouse on a surface, such as a mouse pad (e.g., an optical eye
located at the bottom of the multi-touch mouse 340). The
multi-touch mouse 340 may comprise a second spatial sensor 344 and
a third spatial sensor 346. It may be appreciated that the second
spatial sensor 344 and the third spatial sensor 346 may be
configured to operate as a single spatial sensor or two separate
spatial sensors. The second spatial sensor 344 and the third
spatial sensor 346 may be configured to detect user interaction
with the multi-touch mouse 340.
[0037] FIG. 4 illustrates an example of a multi-touch mouse 400
configured to generate user input based upon detected user
interaction. For example, the multi-touch mouse 400 may comprise a
first spatial sensor (not illustrated) configured to detect the
movement of the multi-touch mouse on a surface, such as a mouse pad
(e.g., an optical eye located at the bottom of the multi-touch
mouse 400). The multi-touch mouse 400 may comprise a second spatial
sensor configured to detect user interaction on a multi-touch mouse
surface 402 of the multi-touch mouse 400. In one example, the
multi-touch mouse surface 402 may be divided into multiple portions
(e.g., a first portion 404, a second portion 408, a third portion
410, a fourth portion 412, a fifth portion 416, a sixth portion
418, a seventh portion 420, etc.). Respective portions of the
multi-touch mouse surface 402 may be mapped to one or more in-game
tasks of a computer video game. In another example, the multi-touch
mouse surface 402 may be a single surface that is not divided into
multiple portions. It may be appreciated that user interaction with
more than one portion may be detected concurrently by a spatial
sensor. It may be appreciated that a single portion may be mapped
to more than one in-game task.
[0038] In one example, the second spatial sensor may be configured
to detect user interaction with the first portion 404 of the
multi-touch mouse surface 402. The user interaction
with the first portion 404 may be mapped to a view change in-game
task of a computer video game. For example, a user may roll or
swipe their wrist across the first portion 404 to change their view
within the computer video game (e.g., rotate the view as though the
character in the computer video game was moving his or her
head).
[0039] The second spatial sensor may be configured to detect user
interaction with the second portion 408 of the
multi-touch mouse surface 402. The user interaction with the second
portion 408 may be mapped to a camera view roll in-game task of the
computer video game. For example, a user may roll their palm across
the second portion 408 to tilt the view as though the character
within the computer video game tilted his or her head.
[0040] The second spatial sensor may be configured to detect user
interaction with the third portion 410 of the
multi-touch mouse surface 402. The user interaction with the third
portion 410 may be mapped to a weapon fire in-game task of the
computer video game. For example, a user may press a finger over
the third portion 410 to fire a weapon within the computer video
game.
[0041] The second spatial sensor may be configured to detect user
interaction with the fourth portion 412 of the
multi-touch mouse surface 402. The user interaction with the fourth
portion 412 may be mapped to an in-game navigation control task of
the computer video game. For example, a user may swipe a finger
over the fourth portion to move a character within the computer
video game in a direction of the swipe gesture. In particular, a
swipe gesture length of the swipe gesture may translate into the
speed at which the character moves and a swipe gesture direction of
the swipe gesture may translate into the direction in which the
character moves.
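One possible translation, assuming the swipe gesture direction is an angle on the mouse surface and the swipe gesture length scales linearly (with clamping) into character speed; the application does not fix a particular formula, so the constants below are placeholders:

    import math

    def navigation_from_swipe(direction_deg: float, length_mm: float,
                              max_speed: float = 5.0, max_length_mm: float = 40.0):
        """Map a swipe gesture to a character movement vector.

        The swipe gesture direction becomes the movement direction; the
        swipe gesture length is clamped and scaled into movement speed.
        """
        speed = max_speed * min(length_mm, max_length_mm) / max_length_mm
        heading = math.radians(direction_deg)
        return (speed * math.cos(heading), speed * math.sin(heading))

    # A 20 mm swipe toward the upper-right moves the character at half speed.
    print(navigation_from_swipe(direction_deg=45.0, length_mm=20.0))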
[0042] The second spatial sensor may be configured to detect user
interaction between one or more fingers. That is, a first finger to
second finger user interaction 414 may be detected based upon a
distance between a first finger and a second finger on the
multi-touch mouse surface 402. For example, the first finger to
second finger user interaction 414 may be mapped to a zoom in/out
in-game task.
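A comparable sketch for the zoom mapping, assuming the zoom factor follows the ratio of the current finger-to-finger distance to the distance when the interaction began (a common pinch-zoom convention, adopted here as an assumption):

    def zoom_from_finger_distance(initial_distance_mm: float,
                                  current_distance_mm: float,
                                  base_zoom: float = 1.0) -> float:
        """Map the first finger to second finger distance to a zoom level.

        Spreading the fingers (ratio > 1) zooms in; pinching them
        together (ratio < 1) zooms out.
        """
        if initial_distance_mm <= 0:
            return base_zoom
        return base_zoom * (current_distance_mm / initial_distance_mm)

    print(zoom_from_finger_distance(20.0, 30.0))  # fingers spread: zoom in 1.5x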
[0043] FIG. 5 illustrates an example of performing in-game tasks in
a first person shooter computer video game 510 based upon user
input on a multi-touch mouse. The multi-touch mouse may comprise
one or more spatial sensors. For example, a first spatial sensor
may be configured to detect positional changes of the multi-touch
mouse. A second spatial sensor may be configured to detect gestures
on the multi-touch mouse surface 502. In one example, a mapping
component may be configured to maintain a mapping of user
interactions with in-game tasks. In another example, the mapping
component may be configured to maintain a mapping of portions
(regions) of the multi-touch mouse surface 502 with in-game tasks.
A task component may be configured to receive user input comprising
user interactions detected by the spatial sensors of the
multi-touch mouse. The task component may be configured to perform
in-game tasks based upon the user input and the mappings maintained
by the mapping component.
[0044] In one example, the mapping component may maintain a mapping
that maps a swipe gesture user interaction 508 with a character
movement in-game task, a finger press user interaction 506 with a
fire weapon in-game task, and a multi-touch mouse position change
user interaction 504 with a character view change. A first spatial
sensor may be configured to detect the multi-touch mouse position
change user interaction 504 and/or other user interactions. A
second spatial sensor may be configured to detect the swipe gesture
user interaction 508, the finger press user interaction 506, and/or
other user interactions. The task component may be configured to
receive user input from the multi-touch mouse. The user input may
comprise one or more detected user interactions. The task component
may invoke one or more in-game tasks based upon the in-game tasks
being mapped to the user interactions within the mapping.
[0045] FIG. 6 illustrates an example of performing in-game tasks
within a first person shooter computer video game 610 based upon
user input on a multi-touch mouse. The multi-touch mouse may
comprise one or more spatial sensors configured to detect user
interaction with the multi-touch mouse (e.g., a gesture on the
multi-touch mouse surface 602). In one example, a mapping component
may maintain a mapping that maps a swipe gesture user interaction
608 with a character movement in-game task, a first finger to
second finger distance user interaction 612 with a character view
zoom in-game task, and a multi-touch mouse position change user
interaction 604 with a character view change. A task component may
be configured to receive user input from the multi-touch mouse. The
user input may comprise one or more detected user interactions. The
task component may invoke one or more in-game tasks based upon the
in-game tasks being mapped to the user interactions within the
mapping.
[0046] FIG. 7 illustrates an example of performing in-game tasks
within a strategy computer video game 710 based upon user input on
a multi-touch mouse. The multi-touch mouse may comprise one or more
spatial sensors configured to detect user interaction with the
multi-touch mouse (e.g., a gesture on the multi-touch mouse surface
706). In one example, a mapping component may maintain a mapping
that maps a swipe gesture user interaction 704 with a character
selection in-game task 712 and a multi-touch mouse position change
user interaction 702 with a character movement to destination in-game
task 714. A task component may be configured to receive user input
from the multi-touch mouse. The user input may comprise one or more
detected user interactions.
[0047] For example, a user may perform a circle gesture on the
multi-touch mouse surface 706 using a finger, while moving the
position of the multi-touch mouse to the right. A first spatial
sensor within the multi-touch mouse may detect the position change
of the multi-touch mouse as the multi-touch mouse position change
user interaction 702. A second spatial sensor within the
multi-touch mouse may detect the circle gesture as the swipe
gesture user interaction 704. The task component may receive user
input of the two user interactions. In response, a cursor 708 may
be moved in a circular motion around a set of characters within the
strategy video game 710. In this way, the task component may invoke
the character selection in-game task 712 of the characters
encompassed within the circular motion of the cursor 708. The task
component may also invoke the character movement to destination
in-game task 714 to move the selected characters.
[0048] Still another embodiment involves a computer-readable medium
comprising processor-executable instructions configured to
implement one or more of the techniques presented herein. An
exemplary computer-readable medium that may be devised in these
ways is illustrated in FIG. 8, wherein the implementation 800
comprises a computer-readable medium 816 (e.g., a CD-R, DVD-R, or a
platter of a hard disk drive), on which is encoded
computer-readable data 814. This computer-readable data 814 in turn
comprises a set of computer instructions 812 configured to operate
according to one or more of the principles set forth herein. In one
such embodiment 800, the processor-executable instructions 812 may
be configured to perform a method 810, such as the exemplary method
100 of FIG. 1, for example. That is, the processor-executable
instructions 812 may implement the exemplary method 100 which may
be executed via one or more processors. In another such embodiment,
the processor-executable instructions 812 may be configured to
implement a system, such as the exemplary system 200 of FIG. 2.
Many such computer-readable media may be devised by those of
ordinary skill in the art that are configured to operate in
accordance with the techniques presented herein.
[0049] Although the subject matter has been described in language
specific to structural features and/or methodological acts, it is
to be understood that the subject matter defined in the appended
claims is not necessarily limited to the specific features or acts
described above. Rather, the specific features and acts described
above are disclosed as example forms of implementing the
claims.
[0050] As used in this application, the terms "component,"
"module," "system", "interface", and the like are generally
intended to refer to a computer-related entity, either hardware, a
combination of hardware and software, software, or software in
execution. For example, a component may be, but is not limited to
being, a process running on a processor, a processor, an object, an
executable, a thread of execution, a program, and/or a computer. By
way of illustration, both an application running on a controller
and the controller can be a component. One or more components may
reside within a process and/or thread of execution and a component
may be localized on one computer and/or distributed between two or
more computers.
[0051] Furthermore, the claimed subject matter may be implemented
as a method, apparatus, or article of manufacture using standard
programming and/or engineering techniques to produce software,
firmware, hardware, or any combination thereof to control a
computer to implement the disclosed subject matter. The term
"article of manufacture" as used herein is intended to encompass a
computer program accessible from any computer-readable device,
carrier, or media. Of course, those skilled in the art will
recognize many modifications may be made to this configuration
without departing from the scope or spirit of the claimed subject
matter.
[0052] FIG. 9 and the following discussion provide a brief, general
description of a suitable computing environment to implement
embodiments of one or more of the provisions set forth herein. The
operating environment of FIG. 9 is only one example of a suitable
operating environment and is not intended to suggest any limitation
as to the scope of use or functionality of the operating
environment. Example computing devices include, but are not limited
to, personal computers, server computers, hand-held or laptop
devices, mobile devices (such as mobile phones, Personal Digital
Assistants (PDAs), media players, and the like), multiprocessor
systems, consumer electronics, mini computers, mainframe computers,
distributed computing environments that include any of the above
systems or devices, and the like.
[0053] Although not required, embodiments are described in the
general context of "computer readable instructions" being executed
by one or more computing devices. Computer readable instructions
may be distributed via computer readable media (discussed below).
Computer readable instructions may be implemented as program
modules, such as functions, objects, Application Programming
Interfaces (APIs), data structures, and the like, that perform
particular tasks or implement particular abstract data types.
Typically, the functionality of the computer readable instructions
may be combined or distributed as desired in various
environments.
[0054] FIG. 9 illustrates an example of a system 910 comprising a
computing device 912 configured to implement one or more
embodiments provided herein. In one configuration, computing device
912 includes at least one processing unit 916 and memory 918.
Depending on the exact configuration and type of computing device,
memory 918 may be volatile (such as RAM, for example), non-volatile
(such as ROM, flash memory, etc., for example) or some combination
of the two. This configuration is illustrated in FIG. 9 by dashed
line 914.
[0055] In other embodiments, device 912 may include additional
features and/or functionality. For example, device 912 may also
include additional storage (e.g., removable and/or non-removable)
including, but not limited to, magnetic storage, optical storage,
and the like. Such additional storage is illustrated in FIG. 9 by
storage 920. In one embodiment, computer readable instructions to
implement one or more embodiments provided herein may be in storage
920. Storage 920 may also store other computer readable
instructions to implement an operating system, an application
program, and the like. Computer readable instructions may be loaded
in memory 918 for execution by processing unit 916, for
example.
[0056] The term "computer readable media" as used herein includes
computer storage media. Computer storage media includes volatile
and nonvolatile, removable and non-removable media implemented in
any method or technology for storage of information such as
computer readable instructions or other data. Memory 918 and
storage 920 are examples of computer storage media. Computer
storage media includes, but is not limited to, RAM, ROM, EEPROM,
flash memory or other memory technology, CD-ROM, Digital Versatile
Disks (DVDs) or other optical storage, magnetic cassettes, magnetic
tape, magnetic disk storage or other magnetic storage devices, or
any other medium which can be used to store the desired information
and which can be accessed by device 912. Any such computer storage
media may be part of device 912.
[0057] Device 912 may also include communication connection(s) 926
that allows device 912 to communicate with other devices.
Communication connection(s) 926 may include, but is not limited to,
a modem, a Network Interface Card (NIC), an integrated network
interface, a radio frequency transmitter/receiver, an infrared
port, a USB connection, or other interfaces for connecting
computing device 912 to other computing devices. Communication
connection(s) 926 may include a wired connection or a wireless
connection. Communication connection(s) 926 may transmit and/or
receive communication media.
[0058] The term "computer readable media" may include communication
media. Communication media typically embodies computer readable
instructions or other data in a "modulated data signal" such as a
carrier wave or other transport mechanism and includes any
information delivery media. The term "modulated data signal" may
include a signal that has one or more of its characteristics set or
changed in such a manner as to encode information in the
signal.
[0059] Device 912 may include input device(s) 924 such as keyboard,
mouse, pen, voice input device, touch input device, infrared
cameras, video input devices, and/or any other input device. Output
device(s) 922 such as one or more displays, speakers, printers,
and/or any other output device may also be included in device 912.
Input device(s) 924 and output device(s) 922 may be connected to
device 912 via a wired connection, wireless connection, or any
combination thereof. In one embodiment, an input device or an
output device from another computing device may be used as input
device(s) 924 or output device(s) 922 for computing device 912.
[0060] Components of computing device 912 may be connected by
various interconnects, such as a bus. Such interconnects may
include a Peripheral Component Interconnect (PCI), such as PCI
Express, a Universal Serial Bus (USB), firewire (IEEE 1394), an
optical bus structure, and the like. In another embodiment,
components of computing device 912 may be interconnected by a
network. For example, memory 918 may be comprised of multiple
physical memory units located in different physical locations
interconnected by a network.
[0061] Those skilled in the art will realize that storage devices
utilized to store computer readable instructions may be distributed
across a network. For example, a computing device 930 accessible
via a network 928 may store computer readable instructions to
implement one or more embodiments provided herein. Computing device
912 may access computing device 930 and download a part or all of
the computer readable instructions for execution. Alternatively,
computing device 912 may download pieces of the computer readable
instructions, as needed, or some instructions may be executed at
computing device 912 and some at computing device 930.
[0062] Various operations of embodiments are provided herein. In
one embodiment, one or more of the operations described may
constitute computer readable instructions stored on one or more
computer readable media, which if executed by a computing device,
will cause the computing device to perform the operations
described. The order in which some or all of the operations are
described should not be construed as to imply that these operations
are necessarily order dependent. Alternative ordering will be
appreciated by one skilled in the art having the benefit of this
description. Further, it will be understood that not all operations
are necessarily present in each embodiment provided herein.
[0063] Moreover, the word "exemplary" is used herein to mean
serving as an example, instance, or illustration. Any aspect or
design described herein as "exemplary" is not necessarily to be
construed as advantageous over other aspects or designs. Rather,
use of the word exemplary is intended to present concepts in a
concrete fashion. As used in this application, the term "or" is
intended to mean an inclusive "or" rather than an exclusive "or".
That is, unless specified otherwise, or clear from context, "X
employs A or B" is intended to mean any of the natural inclusive
permutations. That is, if X employs A; X employs B; or X employs
both A and B, then "X employs A or B" is satisfied under any of the
foregoing instances. In addition, the articles "a" and "an" as used
in this application and the appended claims may generally be
construed to mean "one or more" unless specified otherwise or clear
from context to be directed to a singular form.
[0064] Also, although the disclosure has been shown and described
with respect to one or more implementations, equivalent alterations
and modifications will occur to others skilled in the art based
upon a reading and understanding of this specification and the
annexed drawings. The disclosure includes all such modifications
and alterations and is limited only by the scope of the following
claims. In particular regard to the various functions performed by
the above described components (e.g., elements, resources, etc.),
the terms used to describe such components are intended to
correspond, unless otherwise indicated, to any component which
performs the specified function of the described component (e.g.,
that is functionally equivalent), even though not structurally
equivalent to the disclosed structure which performs the function
in the herein illustrated exemplary implementations of the
disclosure. In addition, while a particular feature of the
disclosure may have been disclosed with respect to only one of
several implementations, such feature may be combined with one or
more other features of the other implementations as may be desired
and advantageous for any given or particular application.
Furthermore, to the extent that the terms "includes", "having",
"has", "with", or variants thereof are used in either the detailed
description or the claims, such terms are intended to be inclusive
in a manner similar to the term "comprising".
* * * * *