U.S. patent application number 13/610698 was published by the patent office on 2013-10-03 as publication number 20130257792 for systems and methods for determining user input using position information and force sensing. This patent application is currently assigned to SYNAPTICS INCORPORATED. The applicants listed for this patent are Raymond Trent and Tom Vandermeijden. Invention is credited to Raymond Trent and Tom Vandermeijden.

Application Number | 13/610698 |
Publication Number | 20130257792 |
Family ID | 49234247 |
Filed Date | 2012-09-11 |
Publication Date | 2013-10-03 |
United States Patent Application | 20130257792 |
Kind Code | A1 |
Trent; Raymond; et al. | October 3, 2013 |

SYSTEMS AND METHODS FOR DETERMINING USER INPUT USING POSITION INFORMATION AND FORCE SENSING
Abstract
The embodiments described herein provide devices and methods
that facilitate improved input device performance. Specifically,
the devices and methods provide improved resistance to the effects
of errors that may be caused by the motion of detected objects on
such input devices, and in particular, to the effect of aliasing
errors on input devices that use capacitive techniques to generate
images of sensor values. The devices and methods provide improved
resistance to the effects of aliasing errors by using force values
indicative of force applied to the input surface. Specifically, the
devices and methods use the force value to disambiguate determined
position information for objects detected in the images of sensor
values. This disambiguation of position information can lead to a
reduction in the effects of aliasing errors and can thus improve
the accuracy and usability of the input device.
Inventors: | Trent; Raymond; (Santa Clara, CA); Vandermeijden; Tom; (Santa Clara, CA) |

Applicant:
Name | City | State | Country | Type |
Trent; Raymond | Santa Clara | CA | US | |
Vandermeijden; Tom | Santa Clara | CA | US | |

Assignee: | SYNAPTICS INCORPORATED, Santa Clara, CA |
Family ID: | 49234247 |
Appl. No.: | 13/610698 |
Filed: | September 11, 2012 |
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number |
61619344 | Apr 2, 2012 | |
Current U.S. Class: | 345/174; 345/175 |
Current CPC Class: | G06F 3/04883 20130101; G06F 3/0488 20130101 |
Class at Publication: | 345/174; 345/175 |
International Class: | G06F 3/042 20060101 G06F003/042; G06F 3/044 20060101 G06F003/044 |
Claims
1. A processing system for an input device, the processing system
comprising: a sensor module comprising sensor circuitry configured
to: operate a plurality of sensor electrodes to generate images of
sensor values indicative of objects in a sensing region proximate
to an input surface at a first rate; operate at least one force
sensor to generate force values indicative of force applied to the
input surface at a second rate; a determination module configured
to: determine if an input object detected in a first image of
sensor values and an input object detected in a second image of
sensor values remained in contact with the input surface between
the first image and the second image based at least in part on the
force values.
2. The processing system of claim 1 wherein the second rate is
greater than the first rate.
3. The processing system of claim 1 wherein the first image of sensor values and the second image of sensor values comprise consecutive images generated by the determination module.
4. The processing system of claim 1 wherein the force sensor
comprises a capacitive force sensor.
5. The processing system of claim 1 wherein the determination
module is further configured to determine positional information
for an input object based on the force values.
6. The processing system of claim 1 wherein the determination
module is further configured to determine an initial contact
location for an input object first detected in the first image of
sensor values based at least in part on at least one force value
preceding the first image of sensor values and the first image of
sensor values.
7. The processing system of claim 1 wherein the determination
module is configured to generate a first user interface action in
response to a determination that the input object detected in the
second image of sensor values remained in contact with the input
surface between the first image and the second image and generate a
second user interface action in response to a determination that
the input object detected in the second image of sensor values did
not remain in contact with the input surface between the first
image and the second image.
8. A processing system for an input device, the processing system
comprising: a sensor module comprising sensor circuitry configured
to: operate a plurality of sensor electrodes to generate images of
sensor values indicative of objects in a sensing region proximate
to an input surface at a first rate; operate at least one force
sensor to generate force values indicative of force applied to the
input surface at a second rate; a determination module configured
to: determine an initial contact location for an input object first
detected in a first image of sensor values based at least in part
on at least one force value preceding the first image of sensor
values and the first image of sensor values.
9. The processing system of claim 8 wherein the second rate is
greater than the first rate.
10. The processing system of claim 8 wherein the determination
module is further configured to initiate an edge user interface
action based on the initial contact location in response to the
initial contact location being in an edge region.
11. The processing system of claim 8 wherein the determination
module is further configured to not initiate an edge user interface
action in response to a determination that an initial contact
within an edge region did not occur prior to the first image of
sensor values.
12. The processing system of claim 8 wherein the force sensor
comprises a capacitive force sensor.
13. The processing system of claim 8 wherein the determination
module is further configured to generate positional information for
the input object using the force values.
14. The processing system of claim 8 wherein the determination
module is further configured to determine if an input object
detected in the first image of sensor values and an input object
detected in a second image of sensor values remained in contact
with the input surface between the first image and the second image
based at least in part on the force values.
15. An input device comprising: an input surface; a plurality of capacitive sensor electrodes proximate to the input surface; at
least one force sensor coupled to the input surface; a processing
system operatively coupled to the plurality of capacitive sensor
electrodes and the at least one force sensor, the processing system
configured to: operate the plurality of capacitive sensor
electrodes to generate images of sensor values indicative of
objects in a sensing region proximate to the input surface at a
first rate; operate the at least one force sensor to generate force
values indicative of force applied to the input surface at a second
rate; determine if an input object detected in a first image of
sensor values and an input object detected in a second image of
sensor values remained in contact with the input surface between
the first image and the second image based at least in part on the
force values.
16. The input device of claim 15 wherein the second rate is greater
than the first rate.
17. The input device of claim 15 wherein the first image of sensor values and the second image of sensor values comprise consecutive images generated by the processing system.
18. The input device of claim 15 wherein the force sensor comprises
a capacitive force sensor.
19. The input device of claim 15 wherein the processing system is
further configured to determine positional information for an input
object based on the force values.
20. The input device of claim 15 wherein the processing system is
further configured to determine an initial contact location for an
input object first detected in the first image of sensor values
based at least in part on at least one force value preceding the
first image of sensor values and the first image of sensor
values.
21. The input device of claim 15 wherein the processing system is
configured to generate a first user interface action in response to
a determination that the input object detected in the second image
of sensor values remained in contact with the input surface between
the first image and the second image and generate a second user
interface action in response to a determination that the input
object detected in the second image of sensor values did not remain
in contact with the input surface between the first image and the
second image.
22. A method of determining input in an input device, the method
comprising: operating a plurality of sensor electrodes to generate
images of sensor values indicative of objects in a sensing region
proximate to an input surface at a first rate; operating at least
one force sensor to generate force values indicative of force
applied to the input surface at a second rate; determining an
initial contact location for an input object first detected in a
first image of sensor values based at least in part on at least one
force value preceding the first image of sensor values and the
first image of sensor values; and generating a user interface
action based at least in part on the initial contact location.
23. The method of claim 22 wherein the second rate is greater than
the first rate.
24. The method of claim 22 wherein the generating the user
interface action based at least in part on the initial contact
location comprises initiating an edge user interface action based
on the initial contact location in response to the initial contact
location being in an edge region.
25. The method of claim 22 wherein the force sensor comprises a
capacitive force sensor.
26. The method of claim 22 further comprising generating positional
information for the input object using the force values.
27. The method of claim 22 further comprising determining if an
input object detected in the first image of sensor values and an
input object detected in a second image of sensor values remained
in contact with the input surface between the first image and the
second image based at least in part on the force values.
Description
PRIORITY CLAIM
[0001] This application claims priority to U.S. Provisional Patent Application Ser. No. 61/619,344, filed Apr. 2, 2012.
FIELD OF THE INVENTION
[0002] This invention generally relates to electronic devices, and
more specifically relates to input devices.
BACKGROUND OF THE INVENTION
[0003] Input devices including proximity sensor devices (also
commonly called touchpads or touch sensor devices) are widely used
in a variety of electronic systems. A proximity sensor device
typically includes a sensing region, often demarked by a surface,
in which the proximity sensor device determines the presence,
location and/or motion of one or more input objects. Proximity
sensor devices may be used to provide interfaces for the electronic
system. For example, proximity sensor devices are often used as
input devices for larger computing systems (such as opaque
touchpads integrated in, or peripheral to, notebook or desktop
computers, or as transparent sensor devices integrated with display
screens to provide a touch screen interface).
[0004] Many proximity sensor devices use capacitive techniques to
sense input objects. Such proximity sensor devices may typically
incorporate either profile capacitive sensors or capacitive image
sensors. Capacitive profile sensors alternate between multiple axes
(e.g., x and y), while capacitive image sensors scan multiple
transmitter rows to produce a more detailed capacitive "image" of
"pixels" associated with an input object. While capacitive image
sensors are advantageous in a number of respects, they do share
some potential disadvantages.
[0005] Specifically, because of the time required to generate each
capacitive image, image sensors can be sensitive to errors caused
by quickly moving objects. For example, aliasing errors may arise
when sequential images show input objects at different locations.
In such cases it can be difficult to determine if the detected
input objects are the same input object or different input objects.
Likewise, it can be difficult to determine where a detected object
first entered or later exited the sensing region. These aliasing
errors can occur when objects are quickly moving within or in
and/or out of the sensing region. In such situations the proximity
sensor device can incorrectly interpret the presence and movement
of such objects. Such errors can thus result in unwanted or missed
user interface actions, and thus can frustrate the user and degrade
the usability of the device.
[0006] Thus, while capacitive image proximity sensor devices are
advantageous in a number of respects, there is a continuing need to
improve the performance of such devices, for example by improving the responsiveness of such sensors or their resistance to errors such as aliasing errors.
[0007] Other desirable features and characteristics will become
apparent from the subsequent detailed description and the appended
claims, taken in conjunction with the accompanying drawings and the
foregoing technical field and background.
BRIEF SUMMARY OF THE INVENTION
[0008] The embodiments of the present invention provide devices and
methods that facilitate improved input device performance.
Specifically, the devices and methods provide improved resistance
to the effects of errors that may be caused by the motion of
detected objects on such input devices, and in particular, to the
effect of aliasing errors on input devices that use capacitive
techniques to generate images of sensor values. The devices and
methods provide improved resistance to the effects of aliasing
errors by using force values indicative of force applied to the
input surface. Specifically, the devices and methods use the force
value to disambiguate determined position information for objects
detected in the images of sensor values. This disambiguation of
position information can lead to a reduction in the effects of
aliasing errors and can thus improve the accuracy and usability of
the input device.
[0009] In one embodiment, a processing system is provided for an
input device having a plurality of sensor electrodes, where the
processing system comprises a sensor module and a determination
module. The sensor module comprises sensor circuitry configured to
operate the plurality of sensor electrodes to generate images of
sensor values indicative of objects in a sensing region proximate
to an input surface at a first rate. The sensor module is further
configured to operate at least one force sensor to generate force
values indicative of force applied to the input surface at a second
rate. The determination module is configured to determine if an
input object detected in a first image of sensor values and an
input object detected in a second image of sensor values remained
in contact with the input surface between the first image and the
second image based at least in part on the force values. Such a
determination can disambiguate the positional information for the
detected objects, and thus can be used to improve the accuracy and
usability of the input device.
[0010] For example, such a determination can disambiguate whether
such detected objects indicate a first object lifting from the
input surface and a second object being placed on the input
surface, or instead indicates the same input object being moved
across the input surface without lifting from the input surface.
Such a disambiguation of position information can improve the likelihood that the input device will respond to the detected objects correctly, and thus can improve the accuracy and usability of the input device.
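The lift-versus-drag disambiguation described above can be sketched as follows. This is an illustrative sketch only, not the disclosed implementation; the function names, the force threshold, and the assumption that force samples arrive at the higher second rate between consecutive images are all hypothetical.

```python
def remained_in_contact(force_values, force_threshold=0.05):
    """Decide whether an object stayed on the input surface between two
    capacitive images, using the higher-rate force samples captured
    between those images.

    If force never dropped below the threshold, the object most likely
    slid across the surface without lifting; if it did drop, the first
    object likely lifted and a second object was placed down.
    """
    return all(f >= force_threshold for f in force_values)


def interpret_images(pos1, pos2, force_values):
    """Map two detected positions plus the inter-image force samples to
    a user-interface interpretation (labels are illustrative only)."""
    if remained_in_contact(force_values):
        return ("drag", pos1, pos2)      # same object moved across the surface
    return ("lift_and_tap", pos1, pos2)  # first object lifted, second placed
```

With this sketch, two images showing an object at different locations yield a "drag" interpretation only when the force signal confirms continuous contact, which is exactly the ambiguity the force values are used to resolve.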
[0011] In another embodiment, a processing system is provided for
an input device having a plurality of sensor electrodes, where the
processing system comprises a sensor module and a determination
module. The sensor module comprises sensor circuitry configured to
operate the plurality of sensor electrodes to generate images of
sensor values indicative of objects in a sensing region proximate
to an input surface at a first rate. The sensor module is further
configured to operate at least one force sensor to generate force
values indicative of force applied to the input surface at a second
rate. The determination module is configured to determine an
initial contact location for an input object first detected in a
first image of sensor values based at least in part on at least one
force value preceding the first image of sensor values and the
first image of sensor values. Such a determination can disambiguate
the positional information for the detected objects, and thus can
be used to improve the accuracy and usability of the input
device.
[0012] For example, such a determination can disambiguate whether
such a detected object had an initial contact location in a
specified region that would indicate a specific user interface
action. Such a disambiguation of position information can improve the likelihood that the input device will respond to the detected object correctly, and thus can improve the accuracy and usability of the input device.
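One way a force value preceding the first image could refine an initial contact location is to back-extrapolate from the first two imaged positions using the time at which force first appeared. The constant-velocity assumption, the threshold, and the function names below are illustrative assumptions, not details from the disclosure.

```python
def first_force_time(force_samples, threshold=0.05):
    """Return the timestamp of the first force sample at or above the
    threshold: the estimated moment of initial contact. force_samples
    holds (timestamp, force) pairs captured at the higher second rate,
    before the first capacitive image."""
    for t, f in force_samples:
        if f >= threshold:
            return t
    return None


def estimate_initial_contact(pos1, t1, pos2, t2, contact_time):
    """Back-extrapolate the initial contact location from the first two
    imaged positions, assuming roughly constant velocity."""
    vx = (pos2[0] - pos1[0]) / (t2 - t1)
    vy = (pos2[1] - pos1[1]) / (t2 - t1)
    dt = t1 - contact_time  # time between true contact and the first image
    return (pos1[0] - vx * dt, pos1[1] - vy * dt)


def in_edge_region(pos, width, edge_width=5.0):
    """Crude edge-region test along x, e.g. for an edge-swipe gesture."""
    return pos[0] < edge_width or pos[0] > width - edge_width
```

An estimated contact location inside the edge region could then trigger the edge user interface action even though the object was first imaged well inside the surface.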
BRIEF DESCRIPTION OF DRAWINGS
[0013] The preferred exemplary embodiment of the present invention
will hereinafter be described in conjunction with the appended
drawings, where like designations denote like elements, and:
[0014] FIG. 1 is a block diagram of an exemplary system that
includes an input device in accordance with an embodiment of the
invention;
[0015] FIGS. 2A and 2B are block diagrams of sensor electrodes in
accordance with exemplary embodiments of the invention;
[0016] FIGS. 3A-3B are top and side views of an exemplary input device that includes at least one force sensor;
[0017] FIGS. 4-7 are schematic views of an exemplary input device
with one or more input objects in the sensing region; and
[0018] FIG. 8 is a schematic view of an input device showing
various exemplary object positions.
DETAILED DESCRIPTION OF THE INVENTION
[0019] The following detailed description is merely exemplary in
nature and is not intended to limit the invention or the
application and uses of the invention. Furthermore, there is no
intention to be bound by any expressed or implied theory presented
in the preceding technical field, background, brief summary, or the
following detailed description.
[0020] Various embodiments of the present invention provide input
devices and methods that facilitate improved usability. FIG. 1 is a
block diagram of an exemplary input device 100, in accordance with
embodiments of the invention. The input device 100 may be
configured to provide input to an electronic system (not shown). As
used in this document, the term "electronic system" (or "electronic
device") broadly refers to any system capable of electronically
processing information. Some non-limiting examples of electronic
systems include personal computers of all sizes and shapes, such as
desktop computers, laptop computers, netbook computers, tablets,
web browsers, e-book readers, and personal digital assistants
(PDAs). Additional example electronic systems include composite
input devices, such as physical keyboards that include input device
100 and separate joysticks or key switches. Further example
electronic systems include peripherals such as data input devices
(including remote controls and mice), and data output devices
(including display screens and printers). Other examples include
remote terminals, kiosks, and video game machines (e.g., video game
consoles, portable gaming devices, and the like). Other examples
include communication devices (including cellular phones, such as
smart phones), and media devices (including recorders, editors, and
players such as televisions, set-top boxes, music players, digital
photo frames, and digital cameras). Additionally, the electronic
system could be a host or a slave to the input device.
[0021] The input device 100 can be implemented as a physical part
of the electronic system, or can be physically separate from the
electronic system. As appropriate, the input device 100 may
communicate with parts of the electronic system using any one or
more of the following: buses, networks, and other wired or wireless
interconnections. Examples include I²C, SPI, PS/2, Universal Serial Bus (USB), Bluetooth, RF, SMBus, and IRDA.
[0022] In FIG. 1, the input device 100 is shown as a proximity
sensor device (also often referred to as a "touchpad" or a "touch
sensor device") configured to sense input provided by one or more
input objects 140 in a sensing region 120. Example input objects
include fingers and styli, as shown in FIG. 1.
[0023] Sensing region 120 encompasses any space above, around, in
and/or near the input device 100 in which the input device 100 is
able to detect user input (e.g., user input provided by one or more
input objects 140). The sizes, shapes, and locations of particular
sensing regions may vary widely from embodiment to embodiment. In
some embodiments, the sensing region 120 extends from a surface of
the input device 100 in one or more directions into space until
signal-to-noise ratios prevent sufficiently accurate object
detection. The distance to which this sensing region 120 extends in
a particular direction, in various embodiments, may be on the order
of less than a millimeter, millimeters, centimeters, or more, and
may vary significantly with the type of sensing technology used and
the accuracy desired. Thus, some embodiments sense input that
comprises no contact with any surfaces of the input device 100,
contact with an input surface (e.g. a touch surface) of the input
device 100, contact with an input surface of the input device 100
coupled with some amount of applied force or pressure, and/or a
combination thereof. In various embodiments, input surfaces may be
provided by surfaces of casings within which sensor electrodes
reside, by face sheets applied over the sensor electrodes or any
casings, etc. In some embodiments, the sensing region 120 has a
rectangular shape when projected onto an input surface of the input
device 100.
[0024] The input device 100 also includes one or more force sensors
that are coupled to a surface below the sensing region 120 and the
processing system 110, and configured to provide force values that
are indicative of force applied to the input surface (not shown in
FIG. 1). The input device 100 utilizes capacitive sensing to detect
user input in the sensing region 120. To facilitate capacitive
sensing, the input device 100 comprises one or more sensing
electrodes for detecting user input (not shown in FIG. 1).
[0025] Some implementations are configured to provide images that
span one, two, three, or higher dimensional spaces. Some
implementations are configured to provide projections of input
along particular axes or planes.
[0026] In some capacitive implementations of the input device 100,
voltage or current is applied to create an electric field. Nearby
input objects cause changes in the electric field, and produce
detectable changes in capacitive coupling that may be detected as
changes in voltage, current, or the like.
[0027] Some capacitive implementations utilize arrays or other
regular or irregular patterns of capacitive sensing elements to
create electric fields. In some capacitive implementations,
separate sensing elements may be ohmically shorted together to form
larger sensor electrodes. Some capacitive implementations utilize
resistive sheets, which may be uniformly resistive.
[0028] Some capacitive implementations utilize "transcapacitive"
sensing methods. Transcapacitive sensing methods, sometimes
referred to as "mutual capacitance", are based on changes in the
capacitive coupling between sensor electrodes. In various
embodiments, an input object near the sensor electrodes alters the
electric field between the sensor electrodes, thus changing the
measured capacitive coupling. In one implementation, a
transcapacitive sensing method operates by detecting the capacitive
coupling between one or more transmitter sensor electrodes (also
"transmitter electrodes" or "transmitters") and one or more
receiver sensor electrodes (also "receiver electrodes" or
"receivers"). Transmitter sensor electrodes may be modulated
relative to a reference voltage (e.g., system ground) to transmit
transmitter signals. Receiver sensor electrodes may be held
substantially constant relative to the reference voltage to
facilitate receipt of resulting signals. A resulting signal may
comprise effect(s) corresponding to one or more transmitter
signals, one or more conductive input objects, and/or to one or
more sources of environmental interference (e.g. other
electromagnetic signals). Sensor electrodes may be dedicated
transmitters or receivers, or may be configured to both transmit
and receive.
[0029] In contrast, absolute capacitance sensing methods, sometimes
referred to as "self capacitance", are based on changes in the
capacitive coupling between sensor electrodes and an input object.
In various embodiments, an input object near the sensor electrodes
alters the electric field near the sensor electrodes, thus changing
the measured capacitive coupling. In one implementation, an
absolute capacitance sensing method operates by modulating sensor
electrodes with respect to a reference voltage (e.g. system ground)
to generate resulting signals on the sensor electrodes. In this
case, the resulting signals received on a sensor electrode are
generated by the modulation of that same sensor electrode. The
resulting signals for absolute capacitive sensing thus comprise the
effects of modulating the same sensor electrode, the effects of
proximate conductive input objects, and the effects of one or more sources of environmental interference. Thus, by
analyzing the resulting signals on the sensor electrodes the
capacitive coupling between the sensor electrodes and input objects
may be detected.
[0030] Notably, in transcapacitive sensing the resulting signals
corresponding to each transmission of a transmitter signal are
received on different sensor electrodes than the transmitter
electrode used to transmit. In contrast, in absolute capacitive
sensing each resulting signal is received on the same electrode
that was modulated to generate that resulting signal.
[0031] In FIG. 1, processing system 110 is shown as part of the
input device 100. The processing system 110 is configured to
operate the hardware of the input device 100 to detect input in the
sensing region 120. The processing system 110 comprises parts of or
all of one or more integrated circuits (ICs) and/or other circuitry
components. For example, as described above, the processing system
110 may include the circuit components for operating the plurality
of sensor electrodes to generate images of sensor values indicative
of objects in a sensing region proximate to an input surface, and
may also include circuit components to operate at least one force
sensor to generate force values indicative of force applied to an
input surface.
[0032] In some embodiments, the processing system 110 also
comprises electronically-readable instructions, such as firmware
code, software code, and/or the like. In some embodiments,
components composing the processing system 110 are located
together, such as near sensing element(s) of the input device 100.
In other embodiments, components of processing system 110 are
physically separate with one or more components close to sensing
element(s) of input device 100, and one or more components
elsewhere. For example, the input device 100 may be a peripheral
coupled to a desktop computer, and the processing system 110 may
comprise software configured to run on a central processing unit of
the desktop computer and one or more ICs (perhaps with associated
firmware) separate from the central processing unit. As another
example, the input device 100 may be physically integrated in a
phone, and the processing system 110 may comprise circuits and
firmware that are part of a main processor of the phone. In some
embodiments, the processing system 110 is dedicated to implementing
the input device 100. In other embodiments, the processing system
110 also performs other functions, such as operating display
screens, driving haptic actuators, etc.
[0033] The processing system 110 may be implemented as a set of
modules that handle different functions of the processing system
110. Each module may comprise circuitry that is a part of the
processing system 110, firmware, software, or a combination
thereof. In various embodiments, different combinations of modules
may be used. Example modules include hardware operation modules for
operating hardware such as sensor electrodes and display screens,
data processing modules for processing data such as sensor signals
and positional information, and reporting modules for reporting
information. Further example modules include sensor operation
modules configured to operate sensing element(s) and a
determination module. In accordance with the embodiments described
herein, the sensor module may be configured to operate the
plurality of sensor electrodes to generate images of sensor values
indicative of objects in a sensing region proximate to an input
surface at a first rate. The sensor module may be further
configured to operate at least one force sensor to generate force
values indicative of force applied to the input surface at a second
rate. In one embodiment, the determination module is configured to
determine if an input object detected in a first image of sensor
values and an input object detected in a second image of sensor
values remained in contact with the input surface between the first
image and the second image based at least in part on the force
values. In another embodiment, the determination module may be
configured to determine an initial contact location for an input
object first detected in a first image of sensor values based at
least in part on at least one force value preceding the first image
of sensor values and the first image of sensor values. In either
case such a determination can disambiguate the positional
information for the detected objects, and thus can be used to
improve the accuracy and usability of the input device.
[0034] In some embodiments, the processing system 110 responds to
user input (or lack of user input) in the sensing region 120
directly by causing one or more actions. Example actions include
changing operation modes, as well as GUI actions such as cursor
movement, selection, menu navigation, and other functions. In some
embodiments, the processing system 110 provides information about
the input (or lack of input) to some part of the electronic system
(e.g. to a central processing system of the electronic system that
is separate from the processing system 110, if such a separate
central processing system exists). In some embodiments, some part
of the electronic system processes information received from the
processing system 110 to act on user input, such as to facilitate a
full range of actions, including mode changing actions and GUI
actions.
[0035] For example, in some embodiments, the processing system 110
operates the sensing element(s) of the input device 100 to produce
electrical signals indicative of input (or lack of input) in the
sensing region 120. The processing system 110 may perform any
appropriate amount of processing on the electrical signals in
producing the information provided to the electronic system. For
example, the processing system 110 may digitize analog electrical
signals obtained from the sensor electrodes. As another example,
the processing system 110 may perform filtering or other signal
conditioning. As yet another example, the processing system 110 may
subtract or otherwise account for a baseline, such that the
information reflects a difference between the electrical signals
and the baseline. As yet further examples, the processing system
110 may determine positional information, recognize inputs as
commands, recognize handwriting, and the like. In one embodiment,
the processing system 110 includes a determination module configured
to determine positional information for an input object based on
such measurements.
[0036] "Positional information" as used herein broadly encompasses
absolute position, relative position, velocity, acceleration, and
other types of spatial information. Exemplary "zero-dimensional"
positional information includes near/far or contact/no contact
information. Exemplary "one-dimensional" positional information
includes positions along an axis. Exemplary "two-dimensional"
positional information includes motions in a plane. Exemplary
"three-dimensional" positional information includes instantaneous
or average velocities in space. Further examples include other
representations of spatial information. Historical data regarding
one or more types of positional information may also be determined
and/or stored, including, for example, historical data that tracks
position, motion, or instantaneous velocity over time.
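For illustration only (not part of the claimed subject matter), the following minimal Python sketch shows one way historical positional data could yield an instantaneous velocity estimate; all names here are hypothetical and chosen for the example:

```python
from dataclasses import dataclass

@dataclass
class PositionSample:
    x: float  # position along the first axis
    y: float  # position along the second axis
    t: float  # timestamp in seconds

def instantaneous_velocity(history):
    """Estimate velocity from the two most recent position samples."""
    if len(history) < 2:
        return (0.0, 0.0)
    a, b = history[-2], history[-1]
    dt = b.t - a.t
    if dt <= 0:
        return (0.0, 0.0)
    return ((b.x - a.x) / dt, (b.y - a.y) / dt)
```

A longer history buffer would similarly support acceleration estimates or gesture recognition over time.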
[0037] Likewise, the term "force values" as used herein is intended
to broadly encompass force information regardless of format. For
example, the force values can be provided for each object as a
vector or scalar quantity. As other examples, the force information
can also include time history components used for gesture
recognition. As will be described in greater detail below, the
processing system may use the force values to disambiguate
positional information for detected objects, and thus to improve
the accuracy and usability of the input
device.
[0038] In some embodiments, the input device 100 is implemented
with additional input components that are operated by the
processing system 110 or by some other processing system. These
additional input components may provide redundant functionality for
input in the sensing region 120, or some other functionality. FIG.
1 shows buttons 130 near the sensing region 120 that can be used to
facilitate selection of items using the input device 100. Other
types of additional input components include sliders, balls,
wheels, switches, and the like. Conversely, in some embodiments,
the input device 100 may be implemented with no other input
components.
[0039] In some embodiments, the input device 100 comprises a touch
screen interface, and the sensing region 120 overlaps at least part
of an active area of a display screen. For example, the input
device 100 may comprise substantially transparent sensor electrodes
overlaying the display screen and provide a touch screen interface
for the associated electronic system. The display screen may be any
type of dynamic display capable of displaying a visual interface to
a user, and may include any type of light emitting diode (LED),
organic LED (OLED), cathode ray tube (CRT), liquid crystal display
(LCD), plasma, electroluminescence (EL), or other display
technology. The input device 100 and the display screen may share
physical elements. For example, some embodiments may utilize some
of the same electrical components for displaying and sensing. As
another example, the display screen may be operated in part or in
total by the processing system 110.
[0040] It should be understood that while many embodiments of the
invention are described in the context of a fully functioning
apparatus, the mechanisms of the present invention are capable of
being distributed as a program product (e.g., software) in a
variety of forms. For example, the mechanisms of the present
invention may be implemented and distributed as a software program
on information bearing media that are readable by electronic
processors (e.g., non-transitory computer-readable and/or
recordable/writable information bearing media readable by the
processing system 110). Additionally, the embodiments of the
present invention apply equally regardless of the particular type
of medium used to carry out the distribution. Examples of
non-transitory, electronically readable media include various
discs, memory sticks, memory cards, memory modules, and the like.
Electronically readable media may be based on flash, optical,
magnetic, holographic, or any other storage technology.
[0041] In accordance with various embodiments of the invention, the
input device 100 is configured with the processing system 110
coupled to a plurality of capacitive sensor electrodes and at least
one force sensor.
[0042] In general, the input device 100 facilitates improved
performance. Specifically, the input device 100 provides improved
resistance to the effects of errors that may be caused by the
motion of detected objects, and in particular, to the effect of
aliasing errors that can arise from the capacitive techniques used
to generate images of sensor values. The input device 100 provides
this improved resistance by using force values indicative of force
applied to the input surface. Specifically, the processing system
110 uses the force values to disambiguate determined position
information for objects detected in the images of sensor
values.
[0043] In one embodiment, a processing system 110 is coupled to a
plurality of sensor electrodes and at least one force sensor. In
one embodiment, the processing system 110 comprises a sensor module
and a determination module. The processing system 110 is configured
to operate the plurality of sensor electrodes to generate images of
sensor values indicative of objects in a sensing region proximate
to an input surface at a first rate. The processing system 110 is
further configured to operate at least one force sensor to generate
force values indicative of force applied to the input surface at a
second rate. In one embodiment, the second rate is greater than the
first rate, and specifically the second rate may be more than twice
the first rate. In one embodiment, the processing system 110 is
configured to determine if an input object detected in a first
image of sensor values and an input object detected in a second
image of sensor values remained in contact with the input surface
between the first image and the second image based at least in part
on the force values. In another embodiment, the processing system
110 is configured to determine an initial contact location for an
input object first detected in a first image of sensor values based
at least in part on at least one force value preceding the first
image of sensor values and the first image of sensor values.
[0044] In either case, such a determination can disambiguate the
positional information for the detected objects, and thus can be
used to improve the accuracy and usability of the input device
100.
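For illustration only, the following sketch (with assumed, hypothetical rate values) shows why it is useful for the second rate to exceed twice the first: at least one force value then falls between any two consecutive capacitive images. Integer microsecond periods are used to keep the arithmetic exact:

```python
IMAGE_PERIOD_US = 16000  # first rate: one capacitive image every 16 ms (assumed)
FORCE_PERIOD_US = 4000   # second rate: one force value every 4 ms (> 2x first)

def force_samples_between_images():
    """Count force samples that fall strictly between two consecutive
    image captures. With the periods above, several force values
    separate each pair of images, giving contact information during
    the interval the imaging sensor cannot observe."""
    count = 0
    t = FORCE_PERIOD_US
    while t < IMAGE_PERIOD_US:
        count += 1
        t += FORCE_PERIOD_US
    return count
```

With these assumed periods, three force values land between each pair of images.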
[0045] As was described above, the processing system 110 is coupled
to sensor electrodes to determine user input. Specifically, the
processing system operates by detecting the capacitive coupling
between one or more transmitter sensor electrodes and one or more
receiver sensor electrodes. Turning now to FIGS. 2A and 2B, these
figures conceptually illustrate exemplary sets of capacitive sensor
electrodes configured to sense in a sensing region. Specifically,
FIG. 2A shows electrodes 200 in a rectilinear arrangement, while
FIG. 2B shows electrodes 225 in a radial/concentric arrangement.
However, it will be appreciated that the invention is not so
limited, and that a variety of electrode shapes and arrangements
may be suitable in any particular embodiment.
[0046] Turning now to FIG. 2A, in the illustrated embodiment the
capacitive sensor electrodes 200 comprise first sensor electrodes
210 and second sensor electrodes 220. Specifically, in the
illustrated embodiment, the first sensor electrodes 210 comprise
six electrodes 210-1 to 210-6, and the second sensor electrodes 220
comprise six electrodes 220-1 to 220-6. Each of the first sensor
electrodes 210 is arranged to extend along a second axis.
Specifically, each first sensor electrode 210 has a major axis that
extends along the second axis. It should also be noted that the
first sensor electrodes 210 are distributed in an array, with each
of the first sensor electrodes 210 positioned a distance from
adjacent first sensor electrodes 210 and corresponding to a
different position in the first axis.
[0047] Likewise, each of the second sensor electrodes 220 is
arranged to extend along a first axis, where the first and second
axes are different. Specifically, each second sensor electrode
220 has a major axis that extends along the first axis. It should
also be noted that the second sensor electrodes 220 are distributed
in an array, with each of the second sensor electrodes 220
positioned a distance from adjacent second sensor electrodes 220
and corresponding to a different position in the second axis.
[0048] Sensor electrodes 210 and 220 are typically ohmically
isolated from each other. That is, one or more insulators separate
sensor electrodes 210 and 220 and prevent them from electrically
shorting to each other. In some embodiments, sensor electrodes 210
and 220 are separated by insulative material disposed between them
at cross-over areas; in such constructions, the sensor electrodes
210 and/or sensor electrodes 220 may be formed with jumpers
connecting different portions of the same electrode. In some
embodiments, sensor electrodes 210 and 220 are separated by one or
more layers of insulative material. In some other embodiments,
sensor electrodes 210 and 220 are separated by one or more
substrates; for example, they may be disposed on opposite sides of
the same substrate, or on different substrates that are laminated
together. The capacitive coupling between the transmitter
electrodes and receiver electrodes changes with the proximity and
motion of input objects in the sensing region associated with the
transmitter electrodes and receiver electrodes.
[0049] In transcapacitive sensing, the sensor pattern is "scanned"
to determine the capacitive couplings between transmitter and
receiver electrodes. That is, the transmitter electrodes are driven
to transmit transmitter signals, and the receiver electrodes are
used to acquire the resulting signals. The resulting signals are then
used to determine measurements of the capacitive couplings between
electrodes, where each capacitive coupling between a transmitter
electrode and a receiver electrode provides one "capacitive pixel".
A two-dimensional array of measured values derived from the
capacitive pixels forms a "capacitive image" (also commonly referred
to as a "capacitive frame") representative of the capacitive
couplings at the pixels. Multiple capacitive images may be acquired
over multiple time periods, and differences between them used to
derive information about input in the sensing region. For example,
successive capacitive images acquired over successive periods of
time can be used to track the motion(s) of one or more input
objects entering, exiting, and within the sensing region.
[0050] A detailed example of generating images of sensor values
will now be given with reference to FIG. 2A. In this detailed
example, sensor values are generated on a "column-by-column" basis,
with the resulting signals for each column captured substantially
simultaneously. Each column of resulting signals is captured at a
different time, and taken together the columns are used to
generate the first image of sensor values.
FIG. 2A, a transmitter signal may be transmitted with electrode
210-1, and first resulting signals captured with each of the
receiver electrodes 220-1 to 220-6, where each first resulting
signal comprises effects of the first transmitter signal. These six
first resulting signals comprise a set (corresponding to a column)
of first resulting signals that may be used to generate the first
image of sensor values. Specifically, each of these six first
resulting signals provides a capacitive measurement that
corresponds to a pixel in the first capacitive image, and together
the six pixels make up a column in the first capacitive image.
[0051] Another transmitter signal may then be transmitted with
electrode 210-2, and again first resulting signals may then be
captured with each of the receiver electrodes 220-1 to 220-6. This
comprises another column of first resulting signals that may be
used to generate the first image. This process may be continued,
transmitting from electrodes 210-3, 210-4, 210-5 and 210-6, with
each transmission generating another column of first resulting
signals until the complete first image of sensor values is
generated.
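The column-by-column scan described above can be sketched, for illustration only, as the following loop. The `measure(tx, rx)` callback is a hypothetical stand-in for driving one transmitter electrode and capturing the resulting signal on one receiver electrode:

```python
def scan_capacitive_image(transmitters, receivers, measure):
    """Build a capacitive image one column at a time.

    `measure(tx, rx)` is a hypothetical driver callback returning the
    capacitive coupling measurement for one transmitter/receiver pair.
    """
    image = []
    for tx in transmitters:
        # Drive one transmitter; capture the resulting signals on all
        # receivers substantially simultaneously (one column of pixels).
        column = [measure(tx, rx) for rx in receivers]
        image.append(column)
    return image  # len(transmitters) columns x len(receivers) rows
```

Because each iteration of the outer loop occurs at a different time, a quickly moving object can appear at inconsistent positions within a single image, which is one source of the aliasing errors discussed below.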
[0052] It should next be noted that this is only one example of how
such a capacitive image of sensor values can be generated. For
example, such images could instead be generated on a "row-by-row"
basis using electrodes 220 as transmitter electrodes and electrodes
210 as receiver electrodes. In any case the images of sensor values
can be generated and used to determine positional information for
objects in the sensing region.
[0053] It should next be noted that in some embodiments the sensor
electrodes 210 and 220 are both configured to be selectively
operated as receiver electrodes and transmitter electrodes, and may
also be selectively operated for absolute capacitive sensing. Thus,
the sensor electrodes 210 may be operated as transmitter electrodes
while the sensor electrodes 220 are operated as receiver electrodes
to generate the image of sensor values. Likewise, the sensor
electrodes 220 may be operated as transmitter electrodes while the
sensor electrodes 210 are operated as receiver electrodes to
generate the image of sensor values. Finally,
sensor electrodes 210 and 220 may be selectively modulated for
absolute capacitive sensing.
[0054] It should next be noted again that while the embodiment
illustrated in FIG. 2A shows sensor electrodes arranged in a
rectilinear grid, this is just one example arrangement of
the electrodes. In another example, the electrodes may be arranged
to facilitate position information determination in polar
coordinates (e.g., r, .THETA.). Turning now to FIG. 2B, capacitive
sensor electrodes 225 in a radial/concentric arrangement are
illustrated. Such electrodes are examples of the type that can be
used to determine position information in polar coordinates.
[0055] In the illustrated embodiment, the first sensor electrodes
230 comprise 12 electrodes 230-1 to 230-12 that are arranged
radially, with each of the first sensor electrodes 230 starting
near a center point and extending in different radial directions
outward. In the illustrated embodiment the second sensor electrodes
240 comprise four electrodes 240-1 to 240-4 that are arranged in
concentric circles arranged around the same center point, with each
second sensor electrode 240 spaced at different radial distances
from the center point. So configured, the first sensor electrodes
230 and second sensor electrodes 240 can be used to generate images
of sensor values.
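For illustration only, the crossing of a radial electrode and a concentric electrode in FIG. 2B maps naturally to a polar-coordinate pixel. The following sketch assumes hypothetical ring radii (the application does not specify electrode dimensions):

```python
import math

N_SPOKES = 12                       # radial electrodes 230-1 to 230-12
RING_RADII = (1.0, 2.0, 3.0, 4.0)   # assumed radii for electrodes 240-1 to 240-4

def pixel_polar(spoke_index, ring_index):
    """Polar coordinates (r, theta) of one capacitive pixel, taken as
    the crossing of radial electrode `spoke_index` and concentric
    electrode `ring_index`."""
    theta = 2 * math.pi * spoke_index / N_SPOKES
    r = RING_RADII[ring_index]
    return r, theta
```

A scan analogous to the column-by-column example above would then yield an image indexed by (spoke, ring) rather than (row, column).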
[0056] As described above, generating an image of sensor values is
relatively processing-intensive. For example, using transcapacitive
sensing to scan the capacitive couplings either on a "row-by-row"
or "column-by-column" basis generally requires significant time and
processing capability because each row and/or column in the image
is generated separately. Furthermore, the rate at which each row or
column can be scanned may be further limited by the relatively
large RC time constants in some input device sensor electrodes.
Furthermore, in typical applications multiple capacitive images are
acquired over multiple time periods, and differences between them
used to derive information about input in the sensing region. For
all these reasons, the rate at which images of sensor values can be
generated may be limited.
[0057] As was described above, because of the time required to
generate each capacitive image, image sensors can be sensitive to
errors caused by quickly moving objects. For example, aliasing
errors may arise when sequential images show input objects at
different locations. In such cases it can be difficult to determine
if the detected input objects are the same input object or
different input objects. Likewise, it can be difficult to determine
where a detected object first entered or later exited the sensing
region.
[0058] Returning to FIG. 1, as was noted above the processing
system 110 is further configured to operate at least one force
sensor to generate force values that are indicative of force
applied to an input surface. In general, the one or more force
sensors are coupled to a surface and are configured to provide a
plurality of measures of force applied to the surface. Such force
sensor(s) can be implemented in a variety of different
arrangements. To give several examples, the force sensor(s) can be
implemented as multiple force sensors arranged near a perimeter of
the sensing region 120. Furthermore, each of the force sensors can
be implemented to measure compression force, expansion force, or
both, as it is applied at the surface. Finally, the force sensors
can be implemented with a variety of different technologies,
including piezoelectric force sensors, capacitive force sensors,
and resistive force sensors.
[0059] In general, the force sensor(s) operate to provide signals
to the processing system 110 that are indicative of force. The
processing system 110 may be configured to perform a variety of
actions to facilitate such force sensing. For example, the
processing system 110 may perform a variety of processes on the
signals received from the sensor(s). For example, processing system
110 may select or couple individual force sensor electrodes,
calibrate individual force sensors, and determine force
measurements from data provided by the force sensors.
[0060] Turning now to FIGS. 3A and 3B, examples of input objects in
a sensing region and applying force to a surface are illustrated.
Specifically, FIGS. 3A and 3B show top and side views of an
exemplary input device 300. In the illustrated example, a user's
finger 302 provides input to the device 300. Specifically, the
input device 300 is configured to determine the position of the
finger 302 and other input objects within the sensing region 306
using a sensor. For example, the input device 300 may use a
plurality of electrodes (e.g., electrodes 210 and 220 of FIG. 2A)
configured to capacitively detect objects such as the finger 302,
and a processor configured to determine the position of the finger
from the capacitive detection.
[0061] In accordance with the embodiments of the invention, the
input device 300 is further configured to include one or more force
sensor(s) 310. Specifically, one or more force sensor(s) 310 are
arranged about the sensing region 306. Each of these force
sensor(s) provides a measure of the force applied to the surface
308 by the fingers. Each of these individual force sensors can be
implemented with any suitable force sensing technology. For
example, the force sensors can be implemented with piezoelectric
force sensors, capacitive force sensors, and/or resistive force
sensors. It should be noted that while the force sensor(s) 310 are
illustrated as being arranged around the perimeter of the sensing
region 306, this is just one example configuration. As one
example, in other embodiments a full array of force sensors 310
could be provided to generate an "image" of force values.
[0062] The force sensor(s) are configured to each provide a measure
of the force applied to the surface. A variety of different
implementations can be used to facilitate this measurement. For
example, the sensing element of the force sensor can be directly
affixed to the surface. For example, the sensing element can be
directly affixed to the underside of the surface or other layer. In
such an embodiment, each force sensor can provide a measure of the
force that is being applied to the surface by virtue of being
directly coupled to the surface. In other embodiments, the force
sensor can be indirectly coupled to the surface. For example,
through intermediate coupling structures that transfer force,
intermediate material layers or both. In any such case, the force
sensors are again configured to each provide a measure of the force
applied to the surface. In yet other embodiments the force sensors
can be configured to directly detect force applied by the input
object itself, or to a substrate directly above the force
sensors.
[0063] In one specific example, the force sensor(s) can be
implemented as contact/no-contact sensors by being configured to
simply indicate when contact is detected. Such a contact/no-contact
sensor can be implemented with a force sensor that identifies
contact when detected force is above a specified threshold, and
provides a simple binary output indicating that such contact has
been detected. Variations in such contact/no-contact sensors
include the use of hysteresis in the force thresholds used to
determine contact. Additionally, such sensors can use averaging of
detected force in determining if contact is occurring.
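The hysteresis variation mentioned above can be sketched, for illustration only, as follows; the threshold values are assumed, not taken from the application:

```python
class ContactDetector:
    """Contact/no-contact decision with hysteresis: contact is asserted
    only above a higher press threshold and released only below a lower
    release threshold, which suppresses chatter that a single threshold
    would produce for force values hovering near it. Thresholds are
    illustrative."""

    def __init__(self, press_threshold=10.0, release_threshold=6.0):
        self.press = press_threshold
        self.release = release_threshold
        self.in_contact = False

    def update(self, force):
        """Feed one force value; return the current contact decision."""
        if self.in_contact:
            if force < self.release:
                self.in_contact = False
        elif force > self.press:
            self.in_contact = True
        return self.in_contact
```

Averaging several recent force values before calling `update` would implement the averaging variation also described above.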
[0064] In general it will be desirable to position each of the
plurality of force sensors near the perimeter edge of the sensor
and to space the sensors to the greatest extent possible, as this
will tend to maximize the accuracy of the sensing measurements. In
most cases this will position the sensors near the outer edge of
the sensing region. In other cases it will be near the outer edge
of the touch surface, while the sensing region may extend beyond
the surface for some distance. Finally, in other embodiments one or
more of the sensors can be positioned in the interior of the
sensor.
[0065] In the example of FIG. 3, four force sensors 310 are
positioned near the perimeter of the rectangular sensing region 306
and beneath the surface 308. However, it should be noted that this
is just one example configuration. Thus, in other embodiments fewer
or more of such sensors may be used. Furthermore, the sensors may
be located in a variety of different positions beneath the surface
308. Thus, it is not necessary to locate the force sensors near the
corners or perimeters of the surface 308.
[0066] It should be noted that many force sensors can be used to
generate force values at relatively high rates compared to the
rates at which images of sensor values can be generated. For
example, in capacitive force sensors each force sensor can generate
a force value with relatively few capacitive measurements compared
to the number of capacitive measurements required for each image,
and thus force values can be generated at a relatively higher rate
compared to the rate at which images of sensor values can be generated.
As will be described in greater detail below, the faster rate at
which force values can be generated may be used to reduce errors in
the positional information determined by the sensor. Specifically,
the embodiments described herein can use the faster rate of force
values to provide improved resistance to the effects of errors that
may be caused by the motion of detected objects, and in particular,
to the effect of aliasing errors. In such embodiments the faster
rate of force values is used to disambiguate determined position
information for objects detected in the images of sensor values.
This disambiguation of position information can lead to a reduction
in the effects of aliasing errors and can thus improve the accuracy
and usability of the input device. Furthermore, in other
embodiments the force sensors can be provided to generate force
values at the same rate at which capacitive images are generated.
In these embodiments it will generally be desirable to control the
force sensors such that the force values are generated between
images, so that the force values provide information regarding the
contact of input objects between such images.
[0067] So configured, the at least one force sensor operates to
generate force values that are indicative of force applied to the
surface 308. As will now be described in detail, in the various
embodiments the processing system is configured to determine if an
input object detected in a first image of sensor values and an
input object detected in a second image of sensor values remained
in contact with the input surface between the first image and the
second image based at least in part on these force values. In
various other embodiments the processing system is configured to
determine an initial contact location for an input object first
detected in a first image of sensor values based at least in part
on at least one force value preceding the first image of sensor
values and the first image of sensor values. It should be noted
that determining an initial contact location does not require
calculating the actual location of initial contact. Instead, in
many cases all that is needed is to determine whether the initial
contact was within a certain region or within a threshold distance
of some location.
Such an example will be described in greater detail below.
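The region-or-threshold test just described can be sketched, for illustration only, as follows. The edge width, threshold, and function name are all hypothetical; the key idea is that the absence of earlier force above a threshold lets the first detected location stand in for the initial contact location:

```python
EDGE_WIDTH = 5.0         # assumed width of the edge region (sensor units)
CONTACT_THRESHOLD = 1.0  # assumed force level indicating contact

def initial_contact_possibly_in_edge(first_detected_x, preceding_forces,
                                     surface_width):
    """Decide whether the initial contact may have been in an edge region.

    If no force above threshold preceded the first image, the first
    detected location is taken as the initial contact location, so the
    edge test applies to it directly. Otherwise the object was on the
    surface before it was imaged and may have entered at an edge."""
    in_edge = (first_detected_x < EDGE_WIDTH
               or first_detected_x > surface_width - EDGE_WIDTH)
    if any(f > CONTACT_THRESHOLD for f in preceding_forces):
        return True  # earlier contact detected: initial location ambiguous
    return in_edge
```

This mirrors the pull-down interface scenario discussed later: an edge-specific response is suppressed only when the force values rule out earlier contact.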
[0068] Turning now to FIGS. 4 and 5, the input device 300 is
illustrated with two different exemplary input object scenarios. In
FIG. 4, an input object (i.e., a finger 402) is illustrated moving
across the sensing region 306 from a first position to a second
position while remaining in contact with the surface 308. In FIG.
5, two input objects (i.e., finger 502 and finger 504) are shown,
where finger 502 is being lifted from the surface 308 at the first
position and shortly thereafter the finger 504 is placed at the
surface in the second position.
[0069] It should be appreciated that when either scenario occurs
within a sufficiently short time period, the input device 300 will
effectively detect an image with a finger in the first position
followed by an image with a finger in the second position. Without
more information, the input device 300 may not be able to
distinguish between the scenario illustrated in FIG. 4 where the
finger stayed in contact with the surface 308 and the scenario
illustrated in FIG. 5 where a finger was lifted from the surface
308 and thereafter a finger was quickly placed down on the surface
308. Without such a reliable determination, the input device 300
will be unable to reliably generate the appropriate response.
[0070] This can lead to several different potential problems. For
example, the input device may not reliably "scroll" or "pan" as
intended by the user in response to a motion across the surface.
Instead, the motion across the surface by the finger may be
interpreted as a new "tap" at the new location of the finger and
inadvertently activate a function associated with such a tap. As
another example, pointing with an input object can be
misinterpreted as taps at a new location and vice versa. In such
cases misinterpreting an intended "tap" as pointing motion can
cause unwanted cursor jumping when selection was instead intended
by the user.
[0071] The embodiments described herein avoid these potential
problems by providing a mechanism for more reliably determining if
an input object detected in a first image of sensor values and an
input object detected in a second image of sensor values remained
in contact with the input surface between the first image and the
second image. Specifically, by using the force values from one or
more force sensors to disambiguate whether the input object
remained in contact between the images. Thus, the input device 300
may be then configured to generate a first user interface action in
response to a determination that the input object detected in the
second image of sensor values remained in contact with the input
surface between the first image and the second image and generate a
second user interface action in response to a determination that
the input object detected in the second image of sensor values did
not remain in contact with the input surface between the first
image and the second image.
[0072] As was noted above, many types of force sensors can be
utilized to provide force values to the processing system. By
providing at least one force value between images generated by the
sensor electrodes it can be more reliably determined whether the
input object detected in a first image remained in contact with the
surface between images. Specifically, if applied force is detected
between the consecutive images it can be more reliably assumed that
the input object remained in contact between images and thus the
correct user interface response can be more reliably generated.
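For illustration only, the disambiguation just described can be sketched as follows; the threshold and action names are hypothetical stand-ins for whatever user interface responses an embodiment defines:

```python
CONTACT_THRESHOLD = 1.0  # assumed force level indicating contact

def remained_in_contact(forces_between_images):
    """True if every force value taken between two consecutive images
    exceeded the contact threshold, i.e. the object never left the
    surface during the interval the imaging sensor could not observe."""
    return all(f > CONTACT_THRESHOLD for f in forces_between_images)

def user_interface_action(forces_between_images):
    # Hypothetical dispatch: continuous contact is treated as motion
    # (e.g. scrolling or panning); a break in contact is treated as a
    # lift followed by a new tap.
    if remained_in_contact(forces_between_images):
        return "continue-motion"
    return "new-tap"
```

With several force values per image interval, even a brief lift-and-replace between images produces at least one sub-threshold force value and is therefore distinguished from continuous motion.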
[0073] Furthermore, as was discussed above many typical force
sensors can be configured to provide force values at a relatively
high rate. Specifically, because of the amount of time and
processing typically required to generate a full capacitive image
(that will include numerous rows or columns of values, each
generated at a different time) the rate at which such images may be
generated is limited. In contrast, many force sensors can generate
force values at a considerably greater rate. This is particularly
true of some capacitive force sensors, where each capacitive
measurement may be used to generate a force value. Thus, by using
such force sensors multiple force values can be generated between
each generated image. Generating and using multiple force values
between images can thus provide further ability to determine if an
object has remained in contact with the surface between images, or
if instead the object has lifted from the surface and the same or
other object placed back down.
[0074] It should be noted that this is just one example of the type
of ambiguity that can be potentially resolved through the use of
force values. In another example such force values can be used to
determine if an input object detected in a first image had actually
initially contacted the input surface at a different location
before it was first sensed in an image. Turning now to FIGS. 6 and
7, the input device 300 is illustrated with two such different
exemplary input object scenarios. In FIG. 6, an input object (i.e.,
a finger 602) is illustrated with an initial detected location at a
position 604 and then moving across the sensing region 306 to the
second position 606. In FIG. 7, an input object (i.e., a finger
602) is illustrated with an initial contact location at a contact
position 608 and then moving across the sensing region 306 from the
position 604 to the second position 606. In both scenarios the
input object is first detected in an image at position 604 and then
subsequently detected in the next image at the second position 606.
However, the two scenarios differ as to where the actual initial
contact occurred.
[0075] Such a distinction can make a difference in applications
where the resulting user interface action is dependent upon the
location of initial contact by the input object. And without more
information, the input device 300 may not be able to determine that
the input object actually made initial contact at an earlier
location than it was first detected in an image. Without such a
reliable determination, the input device 300 will be unable to
reliably generate the appropriate response.
[0076] This can lead to several different potential problems. For
example, where a resulting user interface action is dependent upon
the location of initial contact by the input object. As a specific
example, in some cases a user interface may be configured to
provide a pull-down interface in response to a user initially
contacting the sensor near an edge region and dragging the input
object across the sensing region 306. Such pull-down interfaces can
provide a variety of functions to the user. However, in most
embodiments such pull-down interfaces will only be activated when
the initial contact location is determined to be at or near an edge
region of the sensing region. Thus, if the initial contact location
is inaccurately determined to not be near the edge, the pull-down
interface will not be activated. As noted above, with a quickly
moving finger the input object may not be detected in an image at
its first true contact location (e.g., contact location 608) and
may instead only be detected at a later position (e.g., position
604). In such a scenario the pull-down interface may incorrectly
not be activated when it was intended to be activated by the
user.
[0077] It should be noted that in such embodiments it may be
sufficient to determine that no contact occurred prior to the input
object being detected in the first image, or that no such contact
occurred within a specified threshold distance of where the object
was first detected. Stated another way, the lack of any detected
force can itself be used to disambiguate whether the initial contact
location was within an edge region. For example, in the scenario of
FIG. 6, if no contact above a threshold level is detected in the
time immediately prior to the input object being detected at
position 604, then it can be reliably determined that contact in the
edge region did not occur, and an edge-region-specific response need
not be generated.
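As a minimal sketch of this negative determination (assuming force
samples arrive as (timestamp, force) pairs; the function name, the
threshold, and the window parameter are hypothetical illustrations,
not part of the described embodiments):

```python
def contact_before_image(force_samples, image_time, force_threshold, window):
    """Return True if any force sample within `window` seconds
    immediately before the first image exceeds the contact
    threshold, suggesting the object touched down earlier than
    it was first seen in a capacitive image."""
    return any(
        force > force_threshold
        for t, force in force_samples
        if image_time - window <= t < image_time
    )
```

If this returns False, as in the FIG. 6 scenario, the
edge-region-specific response can safely be suppressed.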
[0078] As another example, in some embodiments a gesture may be
performed when an initial contact location is within a distance
threshold, or when it meets some other criterion such as speed of
motion. In this case the embodiments described herein can be used to
determine whether such an initial contact within the threshold
occurred. Again, the input device can be configured to not perform
the gesture when an initial contact is not detected immediately
prior to the input object being detected in an image and the input
object was detected outside the specified distance threshold in that
image. Alternatively, the input device can be configured to perform
the gesture only when the initial contact is affirmatively
determined to be within the specified distance threshold, for
example, when an initial contact is determined to have occurred
prior to detecting the input object in the first image and that
initial contact location is within the specified distance threshold,
or when the input object is detected within the specified range in
the first image and no force values indicate that the actual initial
contact occurred outside the specified range. In each of these
various embodiments the force values are used with the images to
determine the gesture intended by the user.
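One way to sketch this decision logic (a hypothetical illustration;
all names and the three boolean inputs are assumptions made for
clarity, not an actual device API):

```python
def should_perform_gesture(detected_in_threshold, contact_detected_before,
                           contact_in_threshold):
    """Decide whether to perform a gesture gated on the initial
    contact location.

    detected_in_threshold -- True if the object's first imaged
        position was within the specified distance threshold
    contact_detected_before -- True if force values show a contact
        immediately before the first image
    contact_in_threshold -- True if the estimated initial contact
        location was within the threshold (meaningful only when
        contact_detected_before is True)
    """
    if contact_detected_before:
        # Gate on the estimated true initial contact location.
        return contact_in_threshold
    # No earlier contact: the first imaged position is the contact.
    return detected_in_threshold
```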
[0079] The embodiments described herein avoid these potential
problems by providing a mechanism for more reliably determining an
initial contact location for an input object first detected in a
first image of sensor values based at least in part on at least one
force value preceding the first image of sensor values and the
first image of sensor values. Specifically, the force values from
one or more force sensors are used to disambiguate whether the input
object actually made contact prior to being first detected in an
image, and to determine an estimate of the location of the initial
contact. As was noted above, many types of force sensors can be
utilized to provide force values to the processing system. By
providing at least one force value between images generated by the
sensor electrodes, those force values can be used to determine if an
input object made contact before it was detected in an image.
Specifically, if applied force is detected shortly before the object
was detected in an image, it can be more reliably assumed that the
input object may have actually contacted the input surface at a
different location.
[0080] Furthermore, as was discussed above, many typical force
sensors can be configured to provide force values at a relatively
high rate. Specifically, because of the amount of time and
processing typically required to generate a full capacitive image
(that will include numerous rows or columns of values, each
generated at a different time) the rate at which such images may be
generated is limited. In contrast, many force sensors can generate
force values at a considerably higher rate. This is particularly true
of some capacitive force sensors, where each capacitive measurement
may be used to generate a force value. Thus, by using such force
sensors multiple force values can be generated between each
generated image. Generating and using multiple force values between
images can thus provide further ability to determine if an object
had initially contacted the surface prior to being detected in an
image.
[0081] A variety of different techniques can be used to determine
an initial contact location for an input object first detected in a
first image of sensor values based at least in part on at least one
force value preceding the first image of sensor values and the
first image of sensor values. As one example, the locations of the
input object in the first image and the second image, together with
the time difference between those images, are used to estimate a
rate of motion of the input object across the sensing region. From
the estimated rate of motion of the input object and the time that
contact was detected by the force sensor, an estimate of the initial
contact location can be determined.
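The extrapolation just described can be sketched as follows (a
minimal illustration assuming positions as (x, y) coordinates and
times in seconds; the function and parameter names are hypothetical):

```python
def estimate_contact_position(p1, t1, p2, t2, t_contact):
    """Extrapolate the initial contact location backward along the
    object's path, assuming roughly constant velocity and a roughly
    straight-line trajectory between contact and the two imaged
    positions.

    p1, t1 -- position and time of the object in the first image
    p2, t2 -- position and time of the object in the second image
    t_contact -- time of initial contact reported by the force sensor
    """
    # Velocity components estimated from the two imaged positions.
    vx = (p2[0] - p1[0]) / (t2 - t1)
    vy = (p2[1] - p1[1]) / (t2 - t1)
    # Project backward from the first imaged position to contact time.
    dt = t1 - t_contact
    return (p1[0] - vx * dt, p1[1] - vy * dt)
```

The result is only an estimate, since both the constant-velocity and
straight-line assumptions may be violated by real input objects.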
[0082] Turning now to FIG. 8, the input device 300 is illustrated
with position 604, second position 606 and contact position 608
illustrated as crosses in the sensing region 306. As can be seen in
FIG. 8, position 604 and second position 606 are separated by a
distance D1, while contact position 608 and position 604 are
separated by a distance D2. The time of the input object being at
position 604, and the location of position 604 can be determined
from the first image. Likewise, the time of the input object being
at second position 606, and the location of second position 606 can
be determined from the second image. Finally, the time of contact
at contact position 608 can be determined from the force
values.
[0083] With these values determined, the position of the input
object contact (i.e., contact position 608) can be accurately
estimated. Specifically, because the distance D1 can be determined
and used to estimate the velocity of the input object as it moved
from position 604 to second position 606, the distance D2 can be
estimated by assuming the velocity was relatively constant between
all three positions. Furthermore, the location of contact position
608 can be estimated by assuming that the input object was
traveling in a relatively straight line. Thus, from these
determinations it can be determined if the initial contact at
contact position 608 likely occurred in a region that would
indicate a specific user interface action was intended to be
performed.
[0084] For example, it can be determined if the initial contact
position 608 occurred in an edge region proximate an edge of the
sensor region 306. FIG. 8 illustrates the boundary of an exemplary
edge region with line 610. As described above, such edge regions
are commonly implemented to support a variety of user interface
functions, for example, to provide a pull-down interface in
response to a user initially contacting the sensor in the edge
region and dragging the input object across the sensing region 306.
In this case, by determining the location of contact position 608
it can be more reliably determined that the user intended to
initiate such a pull-down interface. Thus, if the initial contact
position 608 is determined to have occurred in the edge region the
pull-down interface can be activated even though the input object
was not detected in a capacitive image until it was outside the
edge region at position 604. The input device 300 can thus more
reliably respond to quickly moving fingers and other input objects
that may not be detected at their initial locations.
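A sketch of the edge-region test (assuming, purely for illustration,
a fixed-width band along the left edge of the sensing region; the
coordinate convention, width value, and names are hypothetical):

```python
def in_left_edge_region(contact_position, edge_width=5.0):
    """Return True when the estimated initial contact position falls
    inside the left edge region, so the edge-specific response (such
    as a pull-down interface) should be activated even though the
    object was first imaged outside that region."""
    x, _y = contact_position
    return x <= edge_width
```

Feeding the extrapolated contact estimate into such a test lets the
device activate the pull-down interface for fast swipes that were
first imaged well inside the sensing region.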
[0085] As described above, the force values provided by the force
sensors can be used with the images of sensor values to provide a
variety of positional information. For example, positional
information for an input object detected in a first image of sensor
values may be determined based at least in part on the first image
of sensor values and the force values. This positional information
may be used to distinguish between a variety of different user
interface actions, for example, to determine if an input object
detected in a first image of sensor values and an input object
detected in a second image of sensor values performed a swipe across
the input surface while remaining in contact with the input surface
between the first image and the second image, or instead if the
input object detected in the first image lifted from the input
surface between the first image and the second image. As another
example, an initial contact location for an input object first
detected in a first image of sensor values may be determined based
at least in part on at least one force value preceding the first
image of sensor values and the first image of sensor values.
[0086] The force values provided by the force sensors can also be
used for additional functions. For example, one or more force
values may themselves be used to generate positional information
for the input object. This can be done using a variety of
techniques, such as by estimating a deflection response or
deformation response from the force values. Examples of these
techniques are described in U.S. patent application Ser. No.
12/729,969, filed Mar. 23, 2010, entitled DEPRESSABLE TOUCH SENSOR;
U.S. patent application Ser. No. 12/948,455, filed Nov. 17, 2010,
entitled SYSTEM AND METHOD FOR DETERMINING OBJECT INFORMATION USING
AN ESTIMATED DEFLECTION RESPONSE; U.S. patent application Ser. No.
12/968,000 filed Dec. 14, 2010, entitled SYSTEM AND METHOD FOR
DETERMINING OBJECT INFORMATION USING AN ESTIMATED RIGID MOTION
RESPONSE; and U.S. patent application Ser. No. 13/316,279, filed
Dec. 9, 2011, entitled INPUT DEVICE WITH FORCE SENSING.
[0087] Thus, the embodiments and examples set forth herein were
presented in order to best explain the present invention and its
particular application and to thereby enable those skilled in the
art to make and use the invention. However, those skilled in the
art will recognize that the foregoing description and examples have
been presented for the purposes of illustration and example only.
The description as set forth is not intended to be exhaustive or to
limit the invention to the precise form disclosed.
* * * * *