U.S. patent application number 12/666,916 was published by the patent office on 2010-07-01 as publication number 20100164897 for virtual keypad systems and methods.
This patent application is currently assigned to Panasonic Corporation. The invention is credited to David Kryze, Philippe Morin, Luca Rigazio, and Peter Veprek.
Application Number: 12/666,916
Publication Number: 20100164897
Kind Code: A1
Family ID: 40162292
Publication Date: July 1, 2010
First Named Inventor: Morin, Philippe; et al.
United States Patent Application
VIRTUAL KEYPAD SYSTEMS AND METHODS
Abstract
A virtual keypad system for inputting text is
provided. A virtual keypad system includes a remote controller
having at least one touchpad incorporated therein and divided into
a plurality of touch zones. A display device is in data
communication with the remote controller and is operable to display
a user interface including a keypad, where each key of the keypad
is mapped to a touch zone of the touchpad. A prediction module, in
response to an operator pressing a given touch zone to select a
particular character, performs one or more key prediction methods
to predict one or more next plausible keys. A key mapping module
remaps the touch zones of the touchpad to the keys of the keypad
based on the one or more next plausible keys.
Inventors: Morin, Philippe (Goleta, CA); Kryze, David (Campbell, CA); Rigazio, Luca (San Jose, CA); Veprek, Peter (San Jose, CA)
Correspondence Address: GREGORY A. STOBBS, 5445 CORPORATE DRIVE, SUITE 400, TROY, MI 48098, US
Assignee: Panasonic Corporation (Kadoma-shi, JP)
Family ID: 40162292
Appl. No.: 12/666,916
Filed: June 26, 2008
PCT Filed: June 26, 2008
PCT No.: PCT/US08/68384
371 Date: December 28, 2009
Related U.S. Patent Documents

Application Number | Filing Date  | Patent Number
11/977,346         | Oct 24, 2007 |
12/666,916         |              |
60/946,858         | Jun 28, 2007 |
60/946,858         | Jun 28, 2007 |
Current U.S. Class: 345/173
Current CPC Class: G06F 1/1698 (20130101); H04N 5/4403 (20130101); H04N 2005/4441 (20130101); H04N 21/42222 (20130101); H04N 21/42224 (20130101); G06F 3/04883 (20130101); H04N 21/42204 (20130101); G06F 3/0237 (20130101); H04N 21/42228 (20130101); H04N 2005/443 (20130101); G06F 3/0346 (20130101); G06F 3/04886 (20130101)
Class at Publication: 345/173
International Class: G06F 3/041 (20060101)
Claims
1. A virtual keypad system for inputting text, comprising: a remote controller having at least one touchpad incorporated therein and divided into a plurality of touch zones; a display device in data communication with the remote controller and operable to display a user interface including a keypad, where each key of the keypad is mapped to a touch zone of the touchpad; a prediction module that, in response to an operator pressing a given touch zone to select a particular character, performs one or more key prediction methods to predict one or more next plausible keys; and a key mapping module that remaps the touch zones of the touchpad to the keys of the keypad based on the one or more next plausible keys.
2. The system of claim 1, wherein an arrangement of the keypad on
the user interface is modified based on the one or more next
plausible keys.
3. The system of claim 2, wherein one or more keys of the keypad on the user interface are enlarged based on the one or more next plausible keys.
4. The system of claim 2, wherein one or more keys of the keypad on the user interface are highlighted based on the one or more next plausible keys.
5. The system of claim 1 further comprising a language model
operable to predict the one or more next plausible keys based on
previous characters selected by the operator.
6. The system of claim 5, wherein the language model is further
operable to generate a list of plausible words based on the
previous characters selected by the operator.
7. The system of claim 1, wherein the one or more key prediction
methods includes a trajectory analysis method, where the next
plausible key is predicted based on a direction of operator
movement on the touchpad.
8. The system of claim 1, wherein the one or more key prediction
methods includes a hand movement analysis method, where the next
plausible key is predicted based on a detection of which hand is
moving on the touchpad.
9. The system of claim 1, wherein the touchpad is divided into two
operating zones and wherein the key mapping module maps a first
subset of keys of the keypad to a first operating zone of the
touchpad and maps a second subset of keys of the keypad to a second
operating zone of the touchpad.
10. The system of claim 9, wherein the first operating zone
corresponds to a top zone of the touchpad and wherein the second
operating zone corresponds to a bottom zone of the touchpad.
11. The system of claim 9, wherein the first operating zone
corresponds to a right zone of the touchpad and wherein the second
operating zone corresponds to a left zone of the touch pad.
12. The system of claim 1, wherein the user interface includes at
least one of auto-completion selection buttons and auto-completion
selection list.
13. The system of claim 12, wherein the touchpad is divided into two operating zones and wherein the key mapping module maps a first operating zone to the keys of the keypad and maps a second operating zone to the at least one of the auto-completion selection buttons and auto-completion selection list.
14. The system of claim 9, wherein the key mapping module maps a
third subset of keys of the first subset of keys to the second
operating zone and maps a fourth subset of keys of the second
subset of keys to the first operating zone.
15. The system of claim 14, wherein the third subset of keys includes the keys designated by the letters `t,` `g,` and `b,` and wherein the fourth subset of keys includes the keys designated by the letters `y,` `h,` and `n.`
16. The system of claim 1, wherein the key prediction methods
include a timing analysis method, where the next plausible keys are
predicted based on a timing of operator movement on the
touchpad.
17. The system of claim 16, wherein the timing analysis method
predicts no next plausible keys when the timing exceeds a
predetermined limit.
18. The system of claim 17, wherein the key mapping module remaps
the touch zones of the touchpad to the keys of the keypad based on
the no next plausible keys.
19. The system of claim 1, wherein the touch zones are defined by
one or more XY coordinates of the touchpad and wherein the key
mapping module maps the touch zones of the touchpad to the keys of
the keypad by associating a key of the keypad to each XY coordinate
of the touch zones.
20. The system of claim 19, wherein the key mapping module generates a map from the mapping and wherein the map is a
two-dimensional lookup table defined by the coordinates of the
touchpad.
21. A virtual keypad system for inputting text, comprising: a
remote controller having at least one touchpad incorporated therein
and divided into a plurality of touch zones; a display device in
data communication with the remote controller and operable to
display a keypad and an area for displaying input from the keypad,
where each key on the keypad is associated with a touch zone on the
touchpad and, in response to an operator pressing a given touch
zone, a character on the key correlating to the given touch zone is
displayed in an input area of the display; a language model adapted
to receive characters displayed in the input area and operable to
predict next most plausible characters in a string of received characters; and a module that arranges each key on the keypad to facilitate entry of a desired key based on output from the language model.
22. The system of claim 21, wherein the arranging comprises
enlarging keys on the keypad based on the output from the language
model.
23. A virtual keypad system for inputting text, comprising: a
remote controller having at least one touchpad incorporated therein
and divided into a plurality of touch zones; a display device in
data communication with the remote controller and operable to
display a user interface including a keypad, where each key of the
keypad is mapped to a touch zone of the touchpad; a prediction module that performs one or more key prediction methods to predict one or more next plausible keys; a key mapping module that remaps the touch zones of the touchpad to the keys of the keypad based on the one or more next plausible keys; a user interface manager module that modifies an arrangement of the keypad based on the remapping of the touch zones of the touchpad to the keys of the keypad; and a text input module that, in response to an operator pressing a given touch zone, selects a key based on the remapping of the touch zones of the touchpad to the keys of the keypad.
24. The system of claim 23, wherein the user interface manager
module enlarges one or more keys of the keypad based on the
remapping of the touch zones of the touchpad to the keys of the
keypad.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Patent Application Nos. 60/946,858 and 11/977,346, filed on Jun. 28, 2007 and Oct. 24, 2007, respectively. The disclosures of the above applications are incorporated herein by reference.
FIELD
[0002] The present invention relates to methods and systems for
recognizing text input from a remote controller.
BACKGROUND ART
[0003] Many electronic consumer products come with remote control
devices. These remote control devices can communicate a variety of
commands to the electronic product. As these electronic products have grown more technologically advanced, their remote control devices have become complex to operate. For example, modern television remote controls can include selection buttons for volume, channel selection, menu selection, and picture viewing. To operate the remote control, the user must take time away from the program he or she is watching to focus on the buttons. This can be very
distracting to a viewer.
[0004] In addition, many Internet-based services, such as online shopping, are now being provided through the television. Additional selection buttons such as keypad buttons must be included on the remote control device to accommodate these new services. The new selection buttons increase the complexity as well as the cost of the remote control devices. Various solutions have been proposed to address such problems. One solution, disclosed in U.S. Pat. No. 6,765,557, is to use a touchpad for controlling a home entertainment device such as an interactive television. However, even this solution does not completely solve the problem of user inconvenience.
[0005] The statements in this section merely provide background
information related to the present disclosure and may not
constitute prior art.
SUMMARY
[0006] A virtual keypad system for inputting text is provided. A
virtual keypad system includes a remote controller having at least
one touchpad incorporated therein and divided into a plurality of
touch zones. A display device is in data communication with the
remote controller and is operable to display a user interface
including a keypad, where each key of the keypad is mapped to a
touch zone of the touchpad. A prediction module, in response to an
operator pressing a given touch zone to select a particular
character, performs one or more key prediction methods to predict
one or more next plausible keys. A key mapping module remaps the
touch zones of the touchpad to the keys of the keypad based on the
one or more next plausible keys.
[0007] Further areas of applicability will become apparent from the
description provided herein. It should be understood that the
description and specific examples are intended for purposes of
illustration only and are not intended to limit the scope of the
present disclosure.
BRIEF DESCRIPTION OF DRAWINGS
[0008] The drawings described herein are for illustration purposes
only and are not intended to limit the scope of the present
teachings in any way.
[0009] FIG. 1 is an illustration of a text input system according
to various aspects of the present disclosure.
[0010] FIG. 2A is an illustration of a remote controller of the
text input system of FIG. 1 that includes a touchpad according to
various aspects of the present disclosure.
[0011] FIG. 2B is a block diagram illustrating internal components
of the remote controller of FIG. 2A according to various aspects of
the present disclosure.
[0012] FIG. 3 is a dataflow diagram illustrating a virtual keypad
module of the text input system of FIG. 1 according to various
aspects of the present disclosure.
[0013] FIG. 4A is an illustration of the remote controller being
held in a portrait position according to various aspects of the
present disclosure.
[0014] FIG. 4B is an illustration of the remote controller being
held in a landscape position according to various aspects of the
present disclosure.
[0015] FIG. 5 is a table illustrating a mapping between XY
coordinates of the touchpad and keys of a keypad.
[0016] FIG. 6 is a table illustrating a remapping between the
coordinates of the touchpad and the keys of a keypad.
[0017] FIG. 7 is an illustration of a first embodiment of a virtual
keypad graphical user interface according to various aspects of the
present disclosure.
[0018] FIG. 8A is an illustration of a second embodiment of a
virtual keypad graphical user interface according to various
aspects of the present disclosure.
[0019] FIG. 8B is an illustration of a third embodiment of a
virtual keypad graphical user interface according to various
aspects of the present disclosure.
[0020] FIG. 9 is an illustration of a fourth embodiment of a
virtual keypad graphical user interface according to various
aspects of the present disclosure.
[0021] FIG. 10 is an illustration of a fifth embodiment of a
virtual keypad graphical user interface according to various
aspects of the present disclosure.
DETAILED DESCRIPTION
[0022] The following description is merely exemplary in nature and
is not intended to limit the present teachings, their application,
or uses. It should be understood that throughout the drawings,
corresponding reference numerals indicate like or corresponding
parts and features. As used herein, the term module or sub-module can refer to a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, and/or other suitable components that can provide the described functionality, and/or combinations thereof.
[0023] Referring now to FIG. 1, FIG. 1 depicts an exemplary text
input system 10 implemented according to various aspects of the
present disclosure. The exemplary text input system 10 includes a
virtual keypad module 12 that facilitates the input of alphanumeric
characters by a user for interacting with various services
delivered through a display device 14. The display device 14 can
be, but is not limited to, a television (as shown), a projector and
screen, or a computer. The services can be, for example, Internet-based services such as online shopping and movie subscriptions.
The virtual keypad module 12 provides feedback to the user via a
graphical user interface (GUI) 18. The GUI 18 includes a virtual
keypad 20, as will be discussed in more detail below.
[0024] In various embodiments, the virtual keypad module 12 can be
implemented within the display device 14. In various other
embodiments, the virtual keypad module 12 can be implemented
separate from the display device 14 (such as, for example, on a set
top box (not shown)) and can be in data communication with the
display device 14. For ease of discussion, the remainder of the
disclosure will be discussed in the context of the virtual keypad
module 12 being implemented within the display device 14.
[0025] The text input system 10 further includes a remote
controller 16 that generates one or more signals to the display
device 14 in response to user input. The virtual keypad module 12
receives and processes the signals. Based on the signals, the
virtual keypad module 12 determines an orientation and a holding
position of the remote controller 16, recognizes text input, and/or
provides visual feedback to the user via a graphical user interface
(GUI) 18. In particular, the virtual keypad module 12 implements
selection auto-correction methods that compensate for human typing
(i.e., clicking) error. For example, when attempting to input text
quickly, users often undershoot or overshoot the intended location and click on a nearby unintended key. The virtual keypad module 12
employs a combination of prediction and auto-correction methods to
determine which character(s) is/are most likely to be entered by
the user.
[0026] In one example, at fast input speeds, a prediction method is used to compensate for possible overshoot and undershoot. As will be discussed in more detail below, the predictions can be used to enlarge an activation area of possible keys while reducing (or zeroing) activation areas of keys that are
not in the next-character prediction list. However, if the
prediction methods are unable to generate a prediction, even at
fast input speeds, the virtual keypad module 12 disables the
selection auto-correction methods and reverts to a default mode
(i.e., without enlarging or reducing the activation area). The
virtual keypad module 12 can also disable the selection
auto-correction method when the interaction becomes slow because it
is assumed that clicking errors do not generally occur during slow
interaction.
[0027] FIGS. 2A and 2B illustrate an exemplary remote controller 16
according to various aspects of the present disclosure. As shown in
FIG. 2A, the exterior of the remote controller 16 includes a
touchpad 22 and one or more soft keys 24a-24d. In various
embodiments, touch zones defined by one or more coordinates of the
touchpad 22 can be mapped to a particular key of the virtual keypad
20 (FIG. 1).
[0028] A user can select a particular key of the virtual keypad 20
by gently placing his finger or thumb on the touchpad 22 at or near
the associated touch zone (FingerDown event), by dragging a finger
or thumb along the touchpad 22 to the associated touch zone
(FingerDrag event), and/or by lifting the finger or thumb away from
the touchpad 22 (FingerUp event). While the user has a finger or
thumb on the touchpad 22 (i.e., between FingerDown and FingerUp
events), the user can click on the touchpad 22 by applying greater
force (FingerPress event) followed by releasing the force
(FingerRelease event) to select a key.
[0029] In various other embodiments, a relative access method can
be used as an alternative or as a secondary method for selecting
keys. The relative access method assumes a position of the user's
finger or thumb to be a current coordinate or touch zone of the
touchpad 22. Subsequent gestures by the user are then interpreted
relative to that coordinate or touch zone. This allows for an
adjustable precision in selection.
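By way of illustration only, the following Python sketch shows one way the relative access method described above might behave. The class, the gain value, and the coordinate conventions are assumptions for the example and are not taken from the disclosure.

    # Hypothetical sketch of the relative access method: the first touch
    # anchors a reference point, and subsequent drags move an on-screen
    # pointer relative to it. The gain controls the precision of selection.
    class RelativePointer:
        def __init__(self, gain=0.5):
            self.gain = gain           # < 1.0 trades speed for finer precision
            self.last = None           # last raw touchpad sample
            self.pointer = (0.0, 0.0)  # current virtual keypad coordinate

        def finger_down(self, x, y):
            self.last = (x, y)         # anchor; the pointer stays where it was

        def finger_drag(self, x, y):
            dx, dy = x - self.last[0], y - self.last[1]
            self.last = (x, y)
            px, py = self.pointer
            self.pointer = (px + self.gain * dx, py + self.gain * dy)
            return self.pointer

    pad = RelativePointer()
    pad.finger_down(100, 80)
    print(pad.finger_drag(110, 80))    # pointer moves 5 units right: (5.0, 0.0)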
[0030] Important functions of the remote controller 16 (such as,
for example, volume, channel, and mute) can be associated with
specific selection buttons 26a-26d of the touchpad 22. The
selection buttons 26a-26d can be designated by a specific touchpad
button that is painted or illuminated on the touchpad 22 (as shown)
or by a button displayed on an overlay to the virtual keypad 20
(FIG. 1) of the GUI 18 (FIG. 1). This allows the user to use the
remote controller 16 in complete darkness without having to look
away from the content being displayed on the display device 14
(FIG. 1).
[0031] The functions can be controlled by simply touching the
buttons or be controlled by performing a specific gesture. In one
example, sliding a finger or thumb up or down on the right side of
the touchpad 22 can trigger a volume up or volume down action. In
another example, sliding a finger or thumb right or left on the top
side of the touchpad 22 can trigger a channel up or channel down
action.
[0032] In various embodiments, the body of the remote controller 16
can be made of a soft material, allowing the remote controller 16
to be squeezed. The squeezing of the remote controller 16 can be
performed by the user to trigger certain actions, particularly in
contexts where the GUI 18 (FIG. 1) is just waiting for an
acknowledgement without proposing a choice (such as a "next" button
in a slideshow).
[0033] As shown in FIG. 2B, the internal components of the remote
controller 16 can include, but are not limited to, input sensors
30, output actuators 32, an input controller 34, an output
controller 36, a processing handler 38, a wireless transmitter
(e.g., RF, Bluetooth, etc.) 40, and/or combinations thereof. The input sensors 30 can include touchpad
sensors 42. The touchpad sensors 42 can be single-position
registering touchpad sensors mounted side-by-side that allow for
the selection of at least two contact points on the touchpad 22
(FIG. 2A) simultaneously. Alternatively, the touchpad sensors 42
can be a single multi-touch capable touchpad sensor that can
register, with equal precision, two points of contact at the same
time. In various embodiments, the touchpad sensors 42 can register
pressure information to allow the touchpad 22 (FIG. 2A) to be
clickable.
[0034] The input sensors 30 can also include one or more selection
button sensors 44, one or more touchpad button sensors 46, one or
more accelerometers 48, and one or more holding sensors 50. The
holding sensors 50 can be, for example, capacitive sensors that are
located around the border of the remote controller 16, and/or
behind the remote controller 16. The holding sensors 50 indicate
whether the user is touching an area of the remote controller 16 in
a proximity of the holding sensor 50. The accelerometer 48 can be a
three-axis accelerometer that indicates a positioning of the remote
controller 16. The input controller 34 reads the real-time data
from all active sensors. In various embodiments, some sensors may
not be active at all times to reduce power consumption. The
processing handler 38 gathers the data to be transmitted and forms it into packets, and/or processes the real-time data from one or more active sensors to perform local actions. The RF transmitter 40 (RF driver 40) transmits the signals in packet form to the display device 14 (FIG. 1).
[0035] The output actuators 32 can include one or more LED panels
52 for displaying the touchpad buttons 26a-26d, depending on the
specific state of interaction with the GUI 18 present on-screen.
The output actuators 32 can additionally or alternatively include
actuators for providing sufficient haptic feedback to the user
(such as, for example, vibration actuators 54, light actuators 55,
and/or speaker actuators 56). The output controller 36 updates the state of all the active actuators.
[0036] Referring now to FIG. 3, a dataflow diagram illustrates a
more detailed exemplary virtual keypad module 12. Various
embodiments of the virtual keypad module 12 according to the
present disclosure may include any number of sub-modules. As can be
appreciated, the sub-modules shown in FIG. 3 may be combined and/or
further partitioned to similarly perform text input. The data inputs 70, 72, and 74 to the virtual keypad module 12 are received from the remote controller 16 (FIG. 1) and/or received
from other modules (not shown) within the display device 14 (FIG.
1). In various embodiments, the virtual keypad module 12 includes
an orientation recognition module 60, a hand position recognition
module 62, a prediction module 64, a key input module 66, a key
mapping module 67, and a GUI manager module 68. The following
describes one example of operations performed by each module. The
orientation recognition module 60 determines an orientation of the
remote controller 16 based on data received from the holding
sensors 50 (FIG. 2B) and the accelerometer 48 (FIG. 2B). For
example, the user can be holding the remote controller 16 (FIG. 2A)
in a portrait position, as shown in FIG. 4A, or in a landscape
position, as shown in FIG. 4B. In various embodiments, the
orientation recognition module 60 determines the orientation by way
of an Artificial Neural Network (ANN). The ANN can be trained by
data indicating both landscape position conditions and portrait
position conditions.
[0037] In one example, the orientation is determined by training an ANN with sensory data. The sensory data can comprise three-dimensional acceleration (accx, accy, accz) and an activation state of the n capacitive holding position sensors, each of which signals whether human skin is in proximity (1) or not (0). These n+3 values are fed into a single perceptron or linear classifier to determine whether the remote controller 16 (FIG. 2A) is horizontal or vertical. Perceptron coefficients can be trained on a database and hard-coded by a manufacturer.
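As a concrete illustration, a minimal Python sketch of such a perceptron decision follows. The number of holding sensors, the weights, and the bias are invented for the example; in practice they would come from offline training as described above.

    # Sketch of the orientation decision: three acceleration values plus the
    # activation states of n capacitive holding sensors feed one perceptron.
    def is_horizontal(acc, holding, weights, bias):
        """acc: (accx, accy, accz); holding: n sensor states, each 0 or 1."""
        features = list(acc) + list(holding)         # the n + 3 input values
        score = sum(w * f for w, f in zip(weights, features)) + bias
        return score > 0.0                           # True: landscape grip

    # Example with n = 4 holding sensors and made-up, hard-coded coefficients.
    weights = [0.1, -0.9, 0.2, 0.5, 0.5, -0.4, -0.4]
    print(is_horizontal((0.02, -0.98, 0.05), (1, 1, 0, 0), weights, bias=0.3))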
The hand position recognition module 62 determines a holding style of the
remote controller 16 (FIG. 2A) based on data received from the
holding sensors 50 (FIG. 2B) and the accelerometer 48 (FIG. 2B).
For example, the sensory data 70, 72 can be used to determine
whether the remote controller 16 (FIG. 2A) is held with one or two
hands; and if it is held with one hand, whether it is held with the
left or right hand. In various embodiments, the hand position
recognition module 62 determines the holding style by way of an
ANN. The ANN can be trained by data indicating right-hand
conditions, left-hand conditions, and two hands conditions.
[0038] In one example, the hand position is determined in a manner similar to that discussed above. Multiple perceptrons can be implemented for the multiple binary decisions (e.g., left hand, right hand, two-handed).
[0039] As will be discussed in more detail below, the determination
of the orientation and the holding style gives the virtual keypad
module 12 the ability to accommodate the user by automatically
adapting the text input methods and the look and feel of the GUI 18
(FIG. 1). Thus, the determination of the orientation and holding
position allows the user to hold the remote controller 16 (FIG. 2A)
in the most convenient way based on their personal preference and
the actual conditions of use (e.g., standing, sitting, lying down).
In the case of operating the remote controller 16 (FIG. 2A) in a
dark room, the user can pick up and operate the remote controller
16 (FIG. 2A) without worrying about how they are holding it.
[0040] The hand position recognition module 62 can further perform
user verification based on a combination of holding sensor data 70,
accelerometer data 72, additional sensor information (such as an
image of the palm of the user's hand), and/or bio-sensors. The data
can be used to fully determine the identity of the user or, more
broadly, infer the category to which the user belongs (e.g.,
left-handed, right-handed, kid, adult, elderly). User
identification can be used, for example, for parental control,
personalization, and profile switching. User categorization can be
used to adapt the GUI 18 (FIG. 1).
[0041] The key mapping module 67 generates a map indicating an
association between the coordinates or touch zones of the touchpad
22 (FIG. 2A) and the keys of the keypad and/or touchpad selection
buttons. In various embodiments, the key mapping module 67
generates the map based on the orientation and hand position
information determined from the orientation recognition module 60
and the hand position recognition module 62, respectively. The key
mapping module 67 maps the touch zones of the touchpad 22 to the keys of the keypad by associating a key of the keypad with each XY coordinate of the touch zones, where the touch zones are defined by one or more XY coordinates of the touchpad 22. The resulting map is a two-dimensional lookup table defined by the coordinates of the touchpad 22. For example, as shown in FIG. 5, the map can be a two-dimensional (XY) table 80 that assigns a key of the keypad and/or a touchpad selection button to each coordinate of the touchpad 22 (FIG. 2A). As will be discussed in more detail below,
the map can then be referenced by the key input module 66 to
determine an action to be taken and can be referenced by the GUI
manager module 68 to generate the GUI 18 (FIG. 1).
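For illustration, the following Python sketch builds such a two-dimensional lookup table and resolves a coordinate to a key, as the key input module does below. The touchpad resolution and key rectangles are assumed values.

    # Sketch of the XY lookup table of FIG. 5: every touchpad coordinate
    # resolves directly to a key or touchpad selection button.
    TOUCHPAD_W, TOUCHPAD_H = 240, 160    # assumed sensor resolution

    def build_map(zones):
        """zones: list of (x0, y0, x1, y1, key) rectangles covering the pad."""
        table = [[None] * TOUCHPAD_W for _ in range(TOUCHPAD_H)]
        for x0, y0, x1, y1, key in zones:
            for y in range(y0, y1):
                for x in range(x0, x1):
                    table[y][x] = key
        return table

    def lookup(table, x, y):
        """Used by the key input module: coordinate in, key (action) out."""
        return table[y][x]

    key_map = build_map([(0, 0, 24, 40, 'q'), (24, 0, 48, 40, 'w')])  # etc.
    print(lookup(key_map, 30, 10))       # -> 'w'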
[0042] Referring back to FIG. 3, the key input module 66 processes
touchpad sensor data 74 and/or the accelerometer data 72. In
various embodiments, the key input module 66 interprets the
touchpad sensor data 74 to be a coordinate or coordinates of the
touchpad 22 (FIG. 2A) and determines what action to take based
on the coordinate or coordinates. For example, the key input module
66 can receive the touchpad sensor data 74, determine a particular
coordinate from the data 74, and reference the map generated by the
key mapping module 67. Based on the entries in the map, the key
input module 66, for example, can project that the user is hovering
over a particular key of the keypad and, thus, entering a
particular text.
[0043] In various embodiments, the key input module 66 interprets
the accelerometer data 72 as an action to be taken. For example,
the accelerometer data 72 can indicate if a user has lifted the
remote controller 16 (FIG. 2A) quickly to select, for example, an
uppercase mode. The accelerometer data 72 can indicate when a user
has lowered the remote controller 16 (FIG. 2A) quickly to select,
for example, a lowercase mode.
[0044] To enhance the precision and speed at which the text is
entered, the prediction module 64 generates a prediction of which
key and/or word the user is trying to select. The prediction module
64 generates the prediction based on the touchpad sensor data 74
and/or based on a determination of previous text entered. In
various embodiments, the prediction module 64 performs one or more
next key prediction methods, such as, for example, a language model
method, a trajectory analysis method, a hand movement analysis
method, a timing analysis method, and/or combinations thereof. In
short, the prediction module 64, in response to an operator
pressing a given touch zone to select a particular character,
performs one or more key prediction methods to predict one or more
next plausible keys. Examples of the operator include a finger of the user, a stylus (touch-pen), and the like that operates the touchpad 22.
[0045] In one example, the prediction module 64 employs one or more
language models known in the art to predict the next key based on
previous text entered. The language models, for instance, predict
the one or more next plausible keys based on previous characters
selected by the operator. For example, if the partial word `pr` has
been entered, the language model can predict that a vowel is likely
to follow and that the letter `r` will not be a possibility.
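The disclosure does not commit to a particular language model; as one minimal possibility, the Python sketch below uses character bigram counts (from a toy corpus) to rank next plausible keys.

    # Sketch of next-key prediction with a character bigram model. The toy
    # training words are placeholders; a real model would be estimated from
    # a large corpus.
    from collections import defaultdict

    bigram_counts = defaultdict(lambda: defaultdict(int))
    for word in ["press", "price", "proud", "prune"]:   # toy training data
        for a, b in zip(word, word[1:]):
            bigram_counts[a][b] += 1

    def next_plausible_keys(prefix, k=3):
        """Return up to k most likely next characters given the text so far."""
        if not prefix:
            return []
        candidates = bigram_counts[prefix[-1]]
        ranked = sorted(candidates, key=candidates.get, reverse=True)
        return ranked[:k]

    print(next_plausible_keys("pr"))   # vowels rank high; 'r' never follows 'r'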
[0046] In another example, the prediction module 64 employs one or
more language models to provide a list of reliable candidates of
full words from partial word inputs. In this case, the language
model generates a list of plausible words based on the previous
characters selected by the operator. The full words can be selected
by the user for auto-completion. For example, if the partial word
`Pan` has been entered, a list can be generated that includes
`Panasonic` and `Pan-American.` Instead of typing the remaining
characters, the user can simply select one of the full words.
[0047] In various embodiments, the language model can generate the
word predictions based on words previously entered. For example,
once selected, the words can be remembered and the language model
can be adapted to favor the remembered words.
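As an illustrative sketch of this auto-completion behavior, the Python example below filters a small, assumed vocabulary by the typed prefix and favors words the user has previously selected.

    # Sketch of the auto-completion list: match the typed prefix against a
    # vocabulary and rank by a usage count that is bumped whenever the user
    # picks a word, so remembered words float to the top.
    usage = {"Panasonic": 0, "Pan-American": 0, "panel": 0, "pantry": 0}

    def complete(prefix, k=2):
        matches = [w for w in usage if w.lower().startswith(prefix.lower())]
        return sorted(matches, key=lambda w: -usage[w])[:k]

    def word_selected(word):
        usage[word] = usage.get(word, 0) + 1   # adapt: favor remembered words

    print(complete("Pan"))         # e.g., ['Panasonic', 'Pan-American']
    word_selected("Pan-American")
    print(complete("Pan"))         # 'Pan-American' now ranks first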
[0048] In yet another example, the trajectory analysis method can
be performed to predict possible next keys based on future path
estimation including directions and/or velocities of the user's finger
or thumb movement on the touchpad 22 (FIG. 2A) as indicated by the
touchpad sensor data 74. The trajectory analysis method predicts
next plausible keys based on a direction of operator movement on
the touchpad 22. For example, if the user first selects the `k` key
and the language model predicts that the next key can be one of
`i,` `e,` or `a,` the touchpad sensor data 74 can be evaluated to
determine a direction the user is heading and velocity of the
movement and, thus, eliminate one or more of the choices.
[0049] For example, the trajectory analysis method determines a
coordinate of the key `k` and subsequent finger movements. From
that history of XY coordinates, the future path is determined. The
path includes a tolerance to account for short-term prediction
(more accurate) and longer-term prediction (less accurate). If the
future path estimation is heading away from the coordinates of the
predicted key, the choice is eliminated. For example, if the path
is heading up and to the left, then the keys `e` and `a` are eliminated and the key `i` is selected as the predicted next
key.
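The following Python sketch illustrates one plausible form of the trajectory analysis: it fits a heading to the recent XY history and keeps only candidate keys that lie within an angular tolerance of that heading. The key positions, the coordinate convention (y grows downward), and the tolerance are assumptions for the example.

    # Sketch of the trajectory analysis: estimate the direction of movement
    # from the recent XY history and keep only predicted keys that lie along
    # the estimated future path.
    import math

    def filter_by_trajectory(history, candidates, tol_deg=30.0):
        """history: recent (x, y) samples; candidates: {key: (x, y) center}."""
        (x0, y0), (x1, y1) = history[0], history[-1]
        heading = math.atan2(y1 - y0, x1 - x0)
        kept = []
        for key, (kx, ky) in candidates.items():
            angle = math.atan2(ky - y1, kx - x1)
            diff = abs(math.remainder(heading - angle, 2 * math.pi))
            if diff <= math.radians(tol_deg):   # key lies along the path
                kept.append(key)
        return kept

    # Finger heading up-left from 'k' (y grows downward); only 'i' lies on
    # the estimated path, so 'e' and 'a' are eliminated.
    print(filter_by_trajectory([(70, 55), (68, 47)],
                               {'i': (60, 30), 'e': (20, 30), 'a': (5, 50)}))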
[0050] In another example, the hand movement analysis method can be
performed using the holding style information provided by the hand
position recognition module 62 and the predictions provided by the
prediction module 64. The hand movement analysis method predicts
next plausible keys based on a detection of which hand is moving on
the touchpad 22. The hand movement analysis method can evaluate
whether the remote controller 16 (FIG. 2A) is held with two hands
or one hand. If the remote controller 16 (FIG. 2A) is held by two
hands and movement by a right hand or left hand is detected, then
the choices that are associated with the non-moving hand would be
eliminated.
[0051] For example, if the user first selects the `k` key, the
prediction module 64 predicts that the next key can be one of `i,` `e,` or `a,` and movement is detected by the right hand, then the keys `e` and `a` are eliminated and the key `i` is selected as the
predicted next key.
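A minimal Python sketch of this filtering follows; the assignment of letters to hands mirrors a split QWERTY layout and is illustrative.

    # Sketch of the hand movement analysis for a two-handed grip: candidate
    # keys mapped to the hand that is not moving are discarded.
    LEFT_HAND = set("qwertasdfgzxcvb")
    RIGHT_HAND = set("yuiophjklnm")

    def filter_by_hand(candidates, moving_hand):
        zone = LEFT_HAND if moving_hand == "left" else RIGHT_HAND
        return [k for k in candidates if k in zone]

    print(filter_by_hand(['i', 'e', 'a'], "right"))   # -> ['i']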
[0052] In yet another example, the timing analysis method can be
performed when the prediction module 64 is unable to predict the
appropriate next key or word via any of the next key prediction
methods. Such may be the case when the user is entering proper
nouns, such as a last name or a password. The timing analysis
method evaluates the time the user takes to move from one key to
the next. That is, the timing analysis method predicts next
plausible keys based on a timing of operator movement on the
touchpad 22. In more detail, the timing analysis method predicts no
next plausible keys when the timing exceeds a predetermined limit.
Here, when no next plausible keys are predicted, the key mapping module 67 remaps the touch zones of the touchpad 22 to the keys of the keypad accordingly (i.e., restores the default mapping). If the user moves more slowly, it is more likely that a proper noun is being entered, and the predictions are then ignored.
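The sketch below illustrates the timing rule in Python; the 0.8-second limit is an assumed value, as the disclosure does not specify one.

    # Sketch of the timing analysis: if the dwell time between selections
    # exceeds a limit, assume careful input (e.g., a proper noun) and return
    # no predictions so the default key mapping is restored.
    SLOW_LIMIT_S = 0.8    # assumed predetermined limit

    def predict_with_timing(predictions, seconds_since_last_key):
        if seconds_since_last_key > SLOW_LIMIT_S:
            return []     # no next plausible keys: revert to default mapping
        return predictions

    print(predict_with_timing(['i', 'e'], 0.25))   # fast typing -> keep
    print(predict_with_timing(['i', 'e'], 1.40))   # slow, deliberate -> []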
[0053] Based on the predictions provided by the prediction module
64, the key mapping module 67 can remap the coordinates of the
touchpad 22 (FIG. 2A) to the keys of the virtual keypad 20 (FIG.
1). That is, the key mapping module 67 remaps the touch zones of
the touchpad 22 to the keys of the keypad based on the one or more
next plausible keys predicted by the prediction module 64. In
various embodiments, the coordinates that are associated with the
predicted next key can be expanded to make the key more accessible.
For example, as shown in FIG. 6, if the predicted next key is `p,`
the map can be adjusted such that the coordinates that were
previously mapped to the keys `o` or `i` are now mapped to the key `p.` Thus, if the user is actually hovering over the `i` key, the
`p` key will be selected and entered if the user clicks on the
touchpad 22 (FIG. 2A).
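The following Python sketch illustrates this remapping on a toy one-row lookup table: growing the predicted key's rectangle reassigns neighboring coordinates to it. The table size, zones, and pad amount are illustrative.

    # Sketch of the remapping of FIG. 6: coordinates around a predicted key
    # are reassigned to it, enlarging its activation area.
    def grow_key(table, zones, key, pad=1):
        """Expand `key`'s rectangle by `pad` cells in the XY lookup table."""
        x0, y0, x1, y1 = zones[key]
        h, w = len(table), len(table[0])
        for y in range(max(0, y0 - pad), min(h, y1 + pad)):
            for x in range(max(0, x0 - pad), min(w, x1 + pad)):
                table[y][x] = key   # neighboring coordinates now select key
        return table

    # Tiny 1-row pad: columns 0-2 were 'o', 3-5 were 'p'. After growing 'p',
    # a click on column 2 (formerly 'o') selects 'p'.
    zones = {'o': (0, 0, 3, 1), 'p': (3, 0, 6, 1)}
    table = [['o', 'o', 'o', 'p', 'p', 'p']]
    print(grow_key(table, zones, 'p'))   # -> [['o', 'o', 'p', 'p', 'p', 'p']]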
[0054] Referring back to FIG. 3, in various embodiments, the key
mapping module 67 can remap the coordinates to the keys based on a
relative speed of the user movement. For example, the key mapping
module 67 can remap the coordinates such that the predicted key is
mapped to a larger touch zone area when a faster movement is
detected. The key mapping module 67 can maintain the original
coordinates or remap to the original coordinates when slower
movements are detected. The key mapping module 67 can scale the
area between the larger area and the original area when a movement of intermediate speed is detected.
[0055] The GUI manager module 68 generates GUI data 76 for
displaying the GUI 18. The GUI 18 provides visual feedback to the
user indicating the actions they have performed with the remote
controller 16 (FIG. 2A). As shown in the exemplary GUIs 18 of FIGS.
7 through 10, the GUI 18 can include the virtual keypad 20,
including multiple alphanumeric keys 90, one or more selection
buttons 92, a selection list 94, a text display box 96, a title bar
98, and/or combinations thereof.
[0056] As discussed further below, the touchpad 22 is divided into
two operating zones (a first operating zone and a second operating
zone). The key mapping module 67 maps a first subset of keys of the
keypad to the first operating zone of the touchpad 22 and maps a
second subset of keys of the keypad to the second operating zone of
the touchpad 22. For example, the first operating zone corresponds
to a top zone or a right zone of the touchpad 22, and the second
operating zone corresponds to a bottom zone or a left zone of the
touchpad 22. Here, each of the first subset and the second subset
is a group of keys of the keypad. In various embodiments, the GUI
manager module 68 displays the virtual keypad 20 based on the
holding position and the orientation determined by the hand
position recognition module 62 and the orientation recognition
module 60, respectively. For example, as shown in FIG. 7, if two
hands are used to hold the remote controller 16 (FIG. 2A) in the
landscape position, the virtual keypad 20 and the touchpad 22 can
be divided into two zones 100, 102 (e.g., left and right). The keys
in the right zone 102 can be associated with a first zone 104 of
the touchpad 22, and the keys in the left zone 100 can be
associated with a second zone 106 of the touchpad 22. The user can
select a key 90 in the right zone 102 with a right thumb or finger,
and the user can select a key 90 in the left zone 100 with a left
thumb or finger. The keys 90 in each zone 100, 102 can be
distinguished by a particular color or shading of the keys 90 so
that the user can determine which keys 90 can be selected with
which thumb or finger. This will allow for a natural text input,
similar to the experience of entering text on a conventional keyboard.
[0057] In this example, selection of a predicted word
(auto-completion) can be made through the display of the two most
probable words (e.g., `Panasonic,` `Pan-America`). The selection
buttons 92 are auto-completion selection buttons. When the user
selects one of the selection buttons 92, a probable word displayed
in the selected selection button 92 is displayed as a complete word
in the text display box 96. For example, the two words can be displayed
on selection buttons 92. The user can select the selection buttons
92 by pushing soft keys 24b, 24d located on the top side of the
remote controller 16 with the index fingers, or by dragging the
finger or thumb to a dedicated zone located at a designated
location of the touchpad 22.
[0058] In various embodiments, when the touchpad 22 and the virtual
keypad 20 are divided into two zones, the mapping of the
coordinates can provide for an overlap between the two areas. The
key mapping module 67 maps a third subset of keys of the first
subset of keys to the second operating zone and maps a fourth
subset of keys of the second subset of keys to the first operating
zone. For instance, the third subset of keys include the keys
designated by the letters `t,` `g,` and `b` of the keys of the
first subset (keys included in the zone 100 of FIG. 7). Likewise,
the fourth subset of keys include the keys designated by the
letters `y,` `h,` and `n` of the keys of the second subset (keys
included in the zone 102 of FIG. 7). This enables the keys along the boundary between the divided zones to be operated from either zone of the touchpad 22. For example, the letter `g` in the left keyboard area can be selected from the first zone 104 of the touchpad 22 as well as from the second zone 106. The overlap
keys can be identified on the GUI 18 by shading or color.
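As an illustrative sketch, the Python example below builds the two overlapping key sets so that boundary keys can be reached from either operating zone; the key-to-zone assignment is assumed for the example.

    # Sketch of the overlapping split layout: boundary keys appear in both
    # operating zones, so 't', 'g', 'b' can also be typed from the right
    # zone and 'y', 'h', 'n' from the left.
    LEFT_KEYS = set("qwertasdfgzxcvb")
    RIGHT_KEYS = set("yuiophjklnm")
    OVERLAP_FROM_LEFT = set("tgb")   # left-side keys also reachable on the right
    OVERLAP_FROM_RIGHT = set("yhn")  # right-side keys also reachable on the left

    left_zone = LEFT_KEYS | OVERLAP_FROM_RIGHT    # zone 106 addresses these keys
    right_zone = RIGHT_KEYS | OVERLAP_FROM_LEFT   # zone 104 addresses these keys

    print('g' in left_zone, 'g' in right_zone)    # -> True True (boundary key)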
[0059] Referring now to FIGS. 8A and 8B, in another example, if two
hands are used to hold the remote controller 16 in the landscape
position, the touchpad 22 can be divided into two zones 108, 110 or
112, 114. The virtual keypad 20, however, is not divided. A first
zone 110 or 114 of the touchpad 22 can be associated with the
entire virtual keypad 20, and can be referred to as the "Key-Entry
Zone." The second zone 108 or 112 of the touchpad 22 can be
associated with the auto-completion buttons 92 (FIG. 7) or
selection lists 94 (auto-completion selection list), and can be
referred to as the "Auto-Completion Zone." The auto-completion
buttons 92 or selection lists 94 can be displayed when the user
touches the touchpad 22 in the "Auto-Completion Zone." The user
interface can include a small visual notification to signal the
availability and status of auto-completion words. This way the user
will not be bothered with auto-completion unless he decides to use
it.
[0060] In various embodiments, the zones 108, 110 or 112, 114 of
the touchpad 22 can be configured based on an identification of the
user. For example, in the case of a right-handed user, the right
zone 110 can be associated with the "Key-Entry Zone" used most
often and the left zone 108 can be associated with the
"Auto-Completion Zone." Similarly, for a left-handed user, the left
zone 108 can be associated with the "Key-Entry Zone" used most
often and the right zone 110 can be associated with the
"Auto-Completion Zone."
[0061] Referring now to FIG. 9, in yet another example, if one hand
is used to hold the remote controller 16 in the portrait position,
the touchpad 22 is divided into zones; the virtual keypad 20, however, is not. Only one zone of the touchpad 22 is used at any given time, and the user can address the entire virtual keypad 20 from the entire touchpad 22, which principally functions as the "Key-Entry Zone."
If auto-completion is needed, the user can switch the touchpad 22
to the "Auto-completion Zone" by using a simple gesture. Such
gesture can include, but is not limited to, moving the thumb or
finger to a specific area of the touchpad 22 (for instance lower
right), or sliding the finger or thumb along the touchpad 22
quickly from right to left.
[0062] In any of the examples shown in FIGS. 7 through 9, the user
can select a key 90 by dragging the thumb or finger on the touchpad
22. In response to the movement, one or more on-screen pointers
(such as, for example, a cursor, or an illustration of a thumb or
finger) slides to a target key 90. The on-screen pointers can be
displayed according to the hand position (e.g., left hand only,
right hand only, or both hands). The key 90 can be selected by
clicking the clickable touchpad 22 and/or upon release. When
displaying a thumb or finger as the pointer, a different thumb or
finger posture can be used to indicate a thumb or finger press as
opposed to a thumb or finger that is dragged on the touchpad
surface. The selected character associated with the key will be
displayed in the text display box 96.
[0063] Referring now to FIG. 10, in various embodiments, the GUI
manager module 68 displays the keys 90 of the virtual keypad 20
based on the predicted next key and the remapping of the
coordinates performed by the key mapping module 67 (FIG. 3). For
example, by knowing the next likely key, the GUI manager module 68
(FIG. 3) can highlight and/or enlarge the most likely key 116 based
on the mapping of the coordinates. That is, an arrangement of the
keypad on the user interface is modified based on the one or more
next plausible keys predicted by the prediction module 64. In more
detail, one or more keys of the keypad on the user interface is
highlighted or enlarged based on the one or more next plausible
keys predicted by the prediction module 64. These operations are
performed by the GUI manager module 68. However, when the user is
not moving the cursor quickly, or hovers over a given coordinate, the highlighted and/or enlarged key 116 is remapped to its original coordinates on the touchpad 22 and can be resized back to its original size.
[0064] Those skilled in the art can now appreciate from the
foregoing description that the broad teachings of the present
disclosure can be implemented in a variety of forms. Therefore,
while this disclosure has been described in connection with
particular examples thereof, the true scope of the disclosure
should not be so limited since other modifications will become
apparent to the skilled practitioner upon a study of the drawings,
specification, and the following claims.
* * * * *