U.S. patent application number 13/903,056 was filed with the patent office on 2013-05-28 and published on 2014-12-04 as publication number US 2014/0359538 A1 for systems and methods for moving display objects based on user gestures.
This patent application is currently assigned to General Electric Company. The applicant listed for this patent is General Electric Company. Invention is credited to Robert William Grubbs, Justin V. John, and Pavan Kumar Singh Thakur.
Application Number: 13/903056
Publication Number: 2014/0359538
Family ID: 50942897
Filed: 2013-05-28
Published: 2014-12-04
United States Patent Application 20140359538
Kind Code: A1
Thakur; Pavan Kumar Singh; et al.
December 4, 2014
SYSTEMS AND METHODS FOR MOVING DISPLAY OBJECTS BASED ON USER
GESTURES
Abstract
Certain embodiments herein relate to systems and methods for
moving display objects based on user gestures. In one embodiment, a
system can include at least one memory configured to store
computer-executable instructions and at least one control device
configured to access the at least one memory and execute the
computer-executable instructions. The instructions may be
configured to detect a first user gesture adjacent to an output
device in order to identify a display object displayed on the
output device. The instructions may be configured to detect a
second user gesture adjacent to the output device in order to
identify a location to move the display object. The instructions
may be configured to update the output device to display the
display object at the identified location on the output device.
Inventors: Thakur; Pavan Kumar Singh; (Hyderabad, IN); Grubbs; Robert William; (Salem, VA); John; Justin V.; (Schenectady, NY)
Applicant: General Electric Company, Schenectady, NY, US
Assignee: General Electric Company, Schenectady, NY
Family ID: 50942897
Appl. No.: 13/903056
Filed: May 28, 2013
Current U.S. Class: 715/863
Current CPC Class: G06F 3/0486 (20130101); G06F 3/04886 (20130101); G06F 3/0488 (20130101); G06F 3/04883 (20130101); G06F 3/017 (20130101)
Class at Publication: 715/863
International Class: G06F 3/01 (20060101) G06F003/01
Claims
1. A method for moving display objects based on user gestures, the
method comprising: detecting, by at least one control device, a
first user gesture adjacent to an output device to identify at
least one display object displayed on the output device; detecting,
by the at least one control device, a second user gesture adjacent
to the output device identifying a location to move the at least
one display object; and updating, by the at least one control
device, the output device to display the at least one display
object at the identified location on the output device.
2. The method of claim 1, wherein the first user gesture is
generated by a first user.
3. The method of claim 1, wherein the first user gesture is a
finger stroke gesture.
4. The method of claim 1, wherein either the first user gesture or
the second user gesture is detected by the at least one control
device via an input device disposed in close proximity to the
output device.
5. The method of claim 4, wherein the input device comprises at
least one of: a camera, a transparent ink pad, a gesture reader
software module, or a transparent ink pad user interface
control.
6. The method of claim 4, wherein detecting, by at least one
control device, a first user gesture adjacent to the output device
to identify at least one display object displayed on the output
device further comprises: receiving, by the at least one control
device from the input device, the first user gesture; determining,
by the at least one control device, a text character or command
associated with the first user gesture; and identifying, by the at
least one control device, a gesture action associated with the text
character or command.
7. The method of claim 6, wherein the gesture action comprises
selecting and moving the at least one display object on the output
device.
8. The method of claim 1, wherein the second user gesture is
generated by a second user.
9. The method of claim 1, wherein the second user gesture is a
finger tap gesture.
10. The method of claim 6, wherein updating, by the at least one
control device, the output device to display the at least one
display object at the identified location on the output device
further comprises: moving, by the at least one control device, the
at least one display object to the identified location on the
output device based at least in part on detecting the second user
gesture.
11. A system for moving display objects being displayed on an
output device of a computer based on one or more user gestures, the
system comprising: an input unit configured to detect at least one
of a first user gesture or a second user gesture on an output
device of a surface computer; and at least one control device in
communication with the input unit that is configured to: detect a
first user gesture adjacent to an output device in order to
identify at least one display object displayed on the output
device; detect a second user gesture adjacent to the output device
in order to identify a location to move the at least one display
object; and update the output device to display the at least one
display object at the identified location on the output device.
12. The system of claim 11, wherein the first user gesture is
generated by a first user.
13. The system of claim 11, wherein the first user gesture is a
finger stroke gesture.
14. The system of claim 11, wherein the input unit is disposed in
close proximity to the output device.
15. The system of claim 11, wherein the input unit comprises at
least one of: a camera, a transparent ink pad, a gesture reader
software module, or a transparent ink pad user interface
control.
16. The system of claim 15, wherein the input unit detects the
first user gesture or the second user gesture via the transparent
ink pad user interface control.
17. The system of claim 11, wherein the at least one control device
is further configured to select the at least one display object based
on the first user gesture.
18. The system of claim 11, wherein the second user gesture is
generated by a second user.
19. The system of claim 11, wherein the second user gesture is a
finger tap gesture.
20. The system of claim 11, wherein the at least one control device is
further configured to update the output device to display the at
least one display object at the identified location on the output
device in response to detecting the second user gesture.
Description
FIELD OF THE DISCLOSURE
[0001] Embodiments of the disclosure generally relate to moving
display objects displayed on an output device, and more
particularly, to systems and methods for moving display objects
based on user gestures.
BACKGROUND
[0002] It has become increasingly popular to provide touch-sensitive
displays in mobile and communication-type computing devices, such
as handheld tablets or smartphones. Typically, objects being
displayed on such devices can be moved by touching and dragging the
object using one or more fingers. While users of relatively small
computing devices can comfortably use conventional touch-and-drag
gestures to move objects on the associated displays, relatively large
surface computers have much larger display areas, making it
relatively uncomfortable for users to move display objects using
conventional touch-and-drag gestures.
BRIEF SUMMARY OF THE DISCLOSURE
[0003] Some or all of the above needs and/or problems may be
addressed by certain embodiments of the disclosure. Certain
embodiments may include systems and methods for moving display
objects based on user gestures, such as objects displayed on an
output device of a surface computer. According to one embodiment of
the disclosure, there is disclosed a system. The system may include
at least one memory configured to store computer-executable
instructions and at least one control device configured to access
the at least one memory and execute the computer-executable
instructions. The instructions may be configured to detect a first
user gesture adjacent to an output device of a surface computer to
identify a display object displayed on the output device. The
instructions may be further configured to detect a second user
gesture adjacent to the output device identifying a location to
move the display object on the output device. The instructions may
further be configured to update the output device to display the
display object at the identified location on the output device.
[0004] According to another embodiment of the disclosure, there is
disclosed a method. The method can include detecting, by a control
device, a first user gesture adjacent to an output device of a
surface computer identifying at least one display object displayed
on the output device. The method may further include detecting, by
the control device, a second user gesture adjacent to the output
device identifying a location to move the display object. The
method may also include updating, by the control device, the output
device to display the identified display object at the identified
location on the output device.
[0005] Other embodiments, systems, methods, aspects, and features
of the disclosure will become apparent to those skilled in the art
from the following detailed description, the accompanying drawings,
and the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The detailed description is set forth with reference to the
accompanying drawings, which are not necessarily drawn to scale.
The use of the same reference numbers in different figures
indicates similar or identical items.
[0007] FIG. 1 illustrates an example system for moving display
objects based on user gestures, according to an embodiment of the
disclosure.
[0008] FIG. 2 is a flow diagram of an example method for moving
display objects based on user gestures, according to an embodiment
of the disclosure.
[0009] FIG. 3A is an example method for identifying a display
object based on user gestures, according to an embodiment of the
disclosure.
[0010] FIG. 3B is an example method for identifying a location to
move a display object based on user gestures, according to an
embodiment of the disclosure.
[0011] FIG. 3C is an example method for updating an output device
to display an identified display object at an identified location
on the output device, according to an embodiment of the
disclosure.
DETAILED DESCRIPTION
[0012] Illustrative embodiments of the disclosure will now be
described more fully hereinafter with reference to the accompanying
drawings, in which some, but not all embodiments of the disclosure
are shown. The disclosure may be embodied in many different forms
and should not be construed as limited to the embodiments set forth
herein; rather, these embodiments are provided so that this
disclosure will satisfy applicable legal requirements.
[0013] Certain embodiments disclosed herein relate to moving one or
more display objects based on user gestures. Accordingly, a system
can be provided to facilitate moving display objects based upon
detecting a first user gesture and a second user gesture generated
by one or more users interacting with the output device. For
example, a user may interact with the output device by, for
instance, a finger stroke and/or a finger tap adjacent to the
surface of the output device. Based upon the first and the second
user gesture, a display object may be selected and a location to
move the display object on the output device may be identified.
Thereafter, the output device may be updated to display the
identified display object at the identified location on the output
device. One or more technical effects associated with certain
embodiments herein may include, but are not limited to, reduced
time and expense for a user to move display objects to new
positions on a relatively large output device of a surface computer
without employing traditional touch and drag methods as briefly
described above. Furthermore, one or more technical effects
associated with certain embodiments can include providing a more
convenient way for one or more users to reposition display objects
on an output device of a surface computer.
[0014] FIG. 1 depicts a block diagram of one example system 100
that facilitates moving display objects based on user gestures.
According to an embodiment of the disclosure, the system 100 may
include a surface computer 110 that includes an output device 120.
The output device 120 may be configured to display to a user one or
more display objects 130, such as, for instance, user interface
controls, that may include text, colors, images, icons, and the
like.
[0015] With continued reference to FIG. 1, the surface computer 110
may further include one or more input devices 140 configured to
detect and/or capture user gestures adjacent to the output device
120. In certain embodiments, the input devices 140 may include a
user gesture capturing device, such as, for instance, one or more
cameras and/or transparent ink pad controls disposed in close
proximity to the output device 120. In certain embodiments, an input
device 140 can include a gesture reader software module and/or a
transparent ink pad user interface control. In any instance, the
input devices 140 can be configured to detect a first user gesture
and a second user gesture adjacent to the output device 120 and
communicate them in real-time or near real-time to a control
device, such as, control device 150 in FIG. 1, via a network, such
as, network 105 in FIG. 1. In certain embodiments, the control
device 150 may be configured to receive and to analyze the first
and the second user gestures from the input devices 140.
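By way of a non-limiting illustration, the following Python sketch shows one possible form that a captured gesture, and its real-time or near real-time transmission from an input device 140 to a control device 150 over a network such as network 105, might take. The names GestureEvent and send_to_control_device, the host name, and the port number are hypothetical and are introduced here solely for illustration; they are not part of the disclosure.

    # Hypothetical sketch: an input device 140 packaging a detected user gesture
    # and forwarding it to the control device 150 over network 105.
    import json
    import socket
    from dataclasses import dataclass, asdict

    @dataclass
    class GestureEvent:
        kind: str          # e.g., "tap" or "stroke"
        x: float           # gesture location on the output device, in pixels
        y: float
        timestamp: float   # seconds since the epoch

    def send_to_control_device(event: GestureEvent,
                               host: str = "control-device.local",
                               port: int = 9000) -> None:
        """Transmit a captured gesture to the control device in near real time."""
        payload = json.dumps(asdict(event)).encode("utf-8")
        with socket.create_connection((host, port), timeout=1.0) as conn:
            conn.sendall(payload)

    # Example usage (assuming the control device is listening at the given address):
    # send_to_control_device(GestureEvent("tap", 420.0, 310.0, 1685000000.0))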
[0016] Based at least in part upon the first and the second user gestures,
the control device 150 may also be configured to identify and/or
select a display object 130, to identify a location on the output
device 120 to which the display object 130 is to be moved, and/or to
generate and transmit to the surface computer 110, via network 105,
an updated presentation for the output device 120 that displays the
identified display object 130 at the identified location, as will be
described.
[0017] The control device 150 may include any number of suitable
computer processing components that may, among other things,
analyze user gestures detected by the input devices 140. Examples
of suitable processing devices that may be incorporated into the
control device 150 include, but are not limited to, personal
computers, server computers, application-specific circuits,
microcontrollers, minicomputers, other computing devices, and the
like. As such, the control device 150 may include any number of
processors 155 that facilitate the execution of computer-readable
instructions. By executing computer-readable instructions, the
control device 150 may include or form a special purpose computer
or particular machine that facilitates processing of user gestures
in order to move display objects displayed on the output device
120.
[0018] In addition to one or more processor(s) 155, the control
device 150 may include one or more memory devices 160, one or more
input/output ("I/O") interfaces 165, and/or one or more
communications and/or network interfaces 170. The one or more
memory devices 160 or memories may include any suitable memory
devices, for example, caches, read-only memory devices, random
access memory devices, magnetic storage devices, etc. The one or
more memory devices 160 may store user gestures or other data,
executable instructions, and/or various program modules utilized by
the control device 150, for example, data files 170, an operating
system ("OS") 180 and/or a user gesture analyzer module 185. The
data files 170 may include any suitable data that facilitates the
operation of the control device 150 including, but not limited to,
information associated with one or more detected user gestures
and/or information associated with one or more control actions
directed by the control device 150 based on detected user gestures.
The OS 180 may include executable instructions and/or program
modules that facilitate and/or control the general operation of the
control device 150.
[0019] Additionally, the OS 180 may facilitate the execution of
other software programs and/or program modules by the processors
155, such as, the user gesture analyzer module 185. The user
gesture analyzer module 185 may be a suitable software module
configured to analyze and/or process user gestures detected by the
input devices 140. For instance, the user gesture analyzer module
185 may analyze user gestures detected by the input devices 140,
which may be collected and stored in memory 160.
[0020] According to one embodiment, the control device 150 may be
configured to detect a first user gesture via the one or more input
devices 140. For instance, upon viewing one or more display objects
130 displayed on the output device 120, a first user may generate a
first user gesture using one or more fingers in order to select, or
otherwise identify, a display object 130 the user would like to
move. To do so, in one embodiment, a user may tap the screen of the
output device 120 with a finger where the display object 130 is
displayed in order to indicate that the user would like to move the
display object 130 to another location on the output device 120. As
another non-limiting example, the user may generate a finger stroke
gesture on the screen of the output device 120 in order to identify
the display object 130 the user would like to move.
[0021] In certain embodiments, the input devices 140 may include
one or more program modules that facilitate capturing detected user
gestures and any other information associated with the user
gestures. For instance, the input devices 140 may include one or
more cameras that detect a user gesture. Thereafter, a user gesture
reader software module may be executed and configured to
automatically, or in response to some other trigger, transmit the
captured user gesture and any other information associated with the
user gesture to the control device 150 via network 105. Similarly,
in another example, the input devices 140 may include one or more
transparent ink pad controls, where upon detecting a user gesture
by the transparent ink pad controls, the transparent ink pad
control interface transmits the user gesture to the control device
150 via network 105.
[0022] Upon receiving the first user gesture, the control device
150 may be configured to execute the user gesture analyzer module
185. The user gesture analyzer module 185 may be configured to
analyze the first user gesture. For instance, the user gesture
analyzer module 185 may be configured to associate a location of
the first user gesture on the output device 120 to the location of
a display object 130 on the output device 120. Using this example,
the user gesture analyzer module 185 may determine the particular
display object 130 the user would like to move. Having identified
the display object 130 the user would like to move, in one
embodiment, the user gesture analyzer module 185 may be configured
to select the display object 130 and/or wait to receive a second
user gesture detected from the input device 140.
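As a non-limiting illustration of the association described in this paragraph, the Python sketch below hit-tests the location of a first user gesture against the bounds of the displayed objects in order to identify the display object 130 the user would like to move. DisplayObject and select_display_object are hypothetical names introduced for illustration only.

    # Hypothetical sketch: associating the location of a first user gesture with
    # the display object occupying that location on the output device.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class DisplayObject:
        name: str
        x: float       # top-left corner on the output device
        y: float
        width: float
        height: float

    def select_display_object(objects: List[DisplayObject],
                              gx: float, gy: float) -> Optional[DisplayObject]:
        """Return the display object whose bounds contain the gesture location."""
        for obj in objects:
            if obj.x <= gx <= obj.x + obj.width and obj.y <= gy <= obj.y + obj.height:
                return obj
        return None   # the gesture did not land on any display object

    # objects = [DisplayObject("trend_chart", 100, 100, 200, 150)]
    # select_display_object(objects, 150, 160)   # -> the "trend_chart" object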
[0023] Continuing with the same example, after a first user gesture
by a first user, a second user gesture may be generated by a second
user. According to one embodiment, a second user may generate a
second user gesture, such as, a finger tap gesture, using one or
more fingers in order to select or otherwise identify a location on
the output device 120 to move the identified display object 130. To
do so, in one embodiment, a user may tap a location on the screen
of the output device 120 with a finger in order to indicate the
location to move the display object 130 on the output device
120.
[0024] Similar to the first user gesture, the input device 140 may
be configured to automatically, or in response to some other
trigger, transmit to the control device 150 via network 105 the
captured second user gesture and any other information associated
with the second user gesture. Upon receiving the second user
gesture, the control device 150 may be configured to execute the
user gesture analyzer module 185. The user gesture analyzer module
185 may be configured to analyze the second user gesture. For
instance, the user gesture analyzer module 185 may be configured to
associate the second user gesture to the first user gesture. In
this way, the user gesture analyzer module 185 may be configured to
associate the display object 130 identified by the first user
gesture to the location on the output device 120 identified by the
second user gesture. Thereafter, the user gesture analyzer module
185 may be configured to update the output device 120 to display
the identified display object 130 at the identified location on the
output device 120. For instance, the user gesture analyzer module
185 may direct the communication by the control device 150 of an
updated presentation of the display objects 130 to the surface
computer 110 for display on the output device 120.
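The two-gesture sequence described in this paragraph can be summarized, purely for illustration, by the Python sketch below: the first gesture identifies the object to move, the second gesture identifies the destination, and the presentation on the output device is then updated. GestureAnalyzer and its method names are assumptions introduced here and do not appear in the disclosure.

    # Hypothetical sketch: a first gesture identifies which display object to
    # move, a second gesture identifies where to move it, and the output device
    # presentation is then updated.
    from typing import Dict, Optional, Tuple

    class GestureAnalyzer:
        def __init__(self, object_positions: Dict[str, Tuple[float, float]]):
            self.positions = object_positions   # object name -> (x, y) location
            self.pending: Optional[str] = None  # object identified by the first gesture

        def on_first_gesture(self, object_name: str) -> None:
            # The first gesture (e.g., a finger tap or stroke on the object)
            # identifies the display object the user would like to move.
            if object_name in self.positions:
                self.pending = object_name

        def on_second_gesture(self, x: float, y: float) -> None:
            # The second gesture (e.g., a finger tap elsewhere on the screen)
            # identifies the destination; the object is moved and the display updated.
            if self.pending is not None:
                self.positions[self.pending] = (x, y)
                self.update_output_device()
                self.pending = None

        def update_output_device(self) -> None:
            # Placeholder for communicating the updated presentation to the
            # surface computer for display on the output device.
            for name, (x, y) in self.positions.items():
                print(f"{name} displayed at ({x}, {y})")

    # analyzer = GestureAnalyzer({"trend_chart": (100.0, 100.0)})
    # analyzer.on_first_gesture("trend_chart")
    # analyzer.on_second_gesture(640.0, 480.0)   # trend_chart moves to (640, 480)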
[0025] In another non-limiting example, the user gesture analyzer
module 185 may be configured to make more complex user gesture
assessments. For instance, a first user gesture may be, for
instance, a finger stroke that is a circle gesture representing a
command or a text character, such as the letter "o." Upon receiving
a first user gesture, according to one embodiment, the user gesture
analyzer module 185 may be executed and configured to analyze a
first user gesture in order to identify the command or the text
character associated with the first user gesture. Thereafter, in
certain embodiments, the user gesture analyzer module 185 may then
search one or more data files 170 that may identify, for each
command or text character, a corresponding gesture action, such as
selecting and/or moving the display object 130, to be executed by
the control device 150 in response to detecting a second user
gesture by the input device 140 or some other trigger.
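As a non-limiting illustration of the lookup described in this paragraph, the Python sketch below maps a text character or command recognized from a gesture, such as a circular finger stroke recognized as the letter "o," to a corresponding gesture action. The table contents and the function name gesture_action_for are hypothetical and do not reflect the actual contents of the data files.

    # Hypothetical sketch: mapping a recognized text character or command to a
    # gesture action to be executed by the control device.
    GESTURE_ACTIONS = {
        "o": "select_and_move",   # circle stroke: select the object, then await a destination
        "x": "cancel_selection",  # hypothetical: discard the pending selection
    }

    def gesture_action_for(character_or_command: str) -> str:
        """Return the gesture action associated with a recognized character or command."""
        return GESTURE_ACTIONS.get(character_or_command, "ignore")

    # gesture_action_for("o")  -> "select_and_move"
    # gesture_action_for("q")  -> "ignore" (no action is associated with this character)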
[0026] Upon detecting the second user gesture, the user gesture
analyzer module 185 may then execute the identified gesture action
and/or direct the communication from the control device 150 of an
updated presentation of the display objects 130 to the surface
computer 110 for display on the output device 120.
[0027] As desired, embodiments of the disclosure may include a
system 100 with more or fewer components than those illustrated in
FIG. 1. Additionally, certain components of the system 100 may be
combined in various embodiments of the disclosure. The system 100
of FIG. 1 is provided by way of example only.
[0028] Referring now to FIG. 2, shown is a flow diagram of an
example method 200 for moving one or more display objects being
displayed on an output device of a surface computer based on one or
more user gestures, according to an illustrative embodiment of the
disclosure. The method 200 may be utilized in association with
various systems, such as the system 100 illustrated in FIG. 1.
[0029] The method 200 may begin at block 205. At block 205, a
control device, such as 150 in FIG. 1, may detect a first user
gesture adjacent to an output device, such as 120 in FIG. 1, of a
surface computer, such as 110 in FIG. 1. In certain embodiments,
the first user gesture may be analyzed by, for example, a user
gesture analyzer module such as 185 in FIG. 1, in order to identify
a display object, such as 130 in FIG. 1, on the output device that
a user would like to move to another location on the output device.
In certain embodiments, the first user gesture may be detected by
an input device, such as, input device 140 illustrated in FIG. 1.
As described above, the first user gesture may include a
finger-based gesture, such as, a finger stroke gesture, that may be
generated by a first user.
[0030] Next, at block 210, the control device 150 may detect, via
the input device 140, a second user gesture adjacent to the output
device 120 of the surface computer 110 identifying a location to
move the display object 130 on the output device 120. In certain
embodiments, the second user gesture may be a finger tap gesture
generated by a second user.
[0031] Lastly, at block 215, the control device 150 may update the
output device 120 to display the identified display object 130 at
the identified location on the output device 120. As described
above, based on the detected first and second user gesture, the
control device 150 may be configured to communicate an updated
presentation of the display object 130 to the output device 120 for
display to one or more users.
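For illustration only, the three blocks of example method 200 can be expressed as the single Python routine sketched below. The helpers detect_gesture, identify_object_at, and update_display are hypothetical stand-ins for the input device, the user gesture analyzer module, and the output device update described above.

    # Hypothetical sketch of example method 200 (blocks 205, 210, and 215).
    def move_display_object(detect_gesture, identify_object_at, update_display) -> None:
        # Block 205: detect a first user gesture and identify the display object.
        first = detect_gesture()                  # e.g., {"x": 150, "y": 160}
        display_object = identify_object_at(first["x"], first["y"])

        # Block 210: detect a second user gesture identifying the destination.
        second = detect_gesture()                 # e.g., {"x": 640, "y": 480}

        # Block 215: update the output device to display the object at the new location.
        update_display(display_object, second["x"], second["y"])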
[0032] The method 200 of FIG. 2 may optionally end following block
215.
[0033] The operations described and shown in the method 200 of FIG.
2 may be carried out or performed in any suitable order as desired
in various embodiments of the disclosure. Additionally, in certain
embodiments, at least a portion of the operations may be carried
out in parallel. Furthermore, in certain embodiments, fewer or more
operations than those described in FIG. 2 may be performed. As
desired, the operations set forth in FIG. 2 may also be performed in
a loop.
[0034] Referring now to FIG. 3A, shown is an example method for
identifying a display object based on user gestures as described in
block 205 of FIG. 2. As illustrated in FIG. 3A, one or more display
objects 320a may be displayed on an output device 310a. A user may
identify or otherwise select the display object 320a by generating
a first user gesture. For example, as shown in FIG. 3A, a user may
tap the screen of the output device 310a with a finger where the
display object 320a is displayed in order to indicate that the user
would like to move the display object 320a to another location on
the output device 310a.
[0035] Next, in FIG. 3B, shown is an example method for identifying
a location to move a display object based on user gestures as
described in block 210 of FIG. 2. As shown in FIG. 3B, a user may
generate a second user gesture using one or more fingers in order
to identify a location on an output device 310b to move an
identified display object 320b. For instance, as shown in FIG. 3B,
a user may tap a location on the screen of the output device 310b
with a finger in order to indicate the location to move the display
object 320b on the output device 310b.
[0036] Lastly, in FIG. 3C, shown is an example method for updating
an output device to display an identified display object at an
identified location on the output device as described in block 215
of FIG. 2. According to one embodiment, based upon the first and
second user gestures, an updated presentation of a selected display
object 320c at an identified location on an output device 310c may
be displayed to one or more users. The disclosure is described
above with reference to block and flow diagrams of systems,
methods, apparatus, and/or computer program products according to
example embodiments of the disclosure. It will be understood that
one or more blocks of the block diagrams and flow diagrams, and
combinations of blocks in the block diagrams and flow diagrams,
respectively, can be implemented by computer-executable program
instructions. Likewise, some blocks of the block diagrams and flow
diagrams may not necessarily need to be performed in the order
presented, or may not necessarily need to be performed at all,
according to some embodiments of the disclosure.
[0037] These computer-executable program instructions may be loaded
onto a general purpose computer, a special purpose computer, a
processor, or other programmable data processing apparatus to
produce a particular machine, such that the instructions that
execute on the computer, processor, or other programmable data
processing apparatus create means for implementing one or more
functions specified in the flow diagram block or blocks. These
computer program instructions may also be stored in a
computer-readable memory that can direct a computer or other
programmable data processing apparatus to function in a particular
manner, such that the instructions stored in the computer-readable
memory produce an article of manufacture including instruction
means that implement one or more functions specified in the flow
diagram block or blocks. As an example, embodiments of the
disclosure may provide for a computer program product, comprising a
computer usable medium having a computer-readable program code or
program instructions embodied therein, said computer-readable
program code adapted to be executed to implement one or more
functions specified in the flow diagram block or blocks. The
computer program instructions may also be loaded onto a computer or
other programmable data processing apparatus to cause a series of
operational elements or steps to be performed on the computer or
other programmable apparatus to produce a computer-implemented
process such that the instructions that execute on the computer or
other programmable apparatus provide elements or steps for
implementing the functions specified in the flow diagram block or
blocks.
[0038] Accordingly, blocks of the block diagrams and flow diagrams
support combinations of means for performing the specified
functions, combinations of elements or steps for performing the
specified functions and program instruction means for performing
the specified functions. It will also be understood that each block
of the block diagrams and flow diagrams, and combinations of blocks
in the block diagrams and flow diagrams, can be implemented by
special purpose, hardware-based computer systems that perform the
specified functions, elements or steps, or combinations of special
purpose hardware and computer instructions.
[0039] While the disclosure has been described in connection with
what is presently considered to be the most practical and various
embodiments, it is to be understood that the disclosure is not to
be limited to the disclosed embodiments, but on the contrary, is
intended to cover various modifications and equivalent arrangements
included within the spirit and scope of the appended claims.
[0040] This written description uses examples to disclose the
disclosure, including the best mode, and also to enable any person
skilled in the art to practice the disclosure, including making and
using any devices or systems and performing any incorporated
methods. The patentable scope of the disclosure is defined in the
claims, and may include other examples that occur to those skilled
in the art. Such other examples are intended to be within the scope
of the claims if they have structural elements that do not differ
from the literal language of the claims, or if they include
equivalent structural elements with insubstantial differences from
the literal language of the claims.
* * * * *