U.S. patent application number 14/216344 was filed with the patent office on 2014-03-17 and published on 2014-07-17 for speakerphone control for mobile device.
This patent application is currently assigned to Apple Inc. The applicant listed for this patent is Apple Inc. Invention is credited to John O. Louch.
United States Patent Application 20140199984
Kind Code: A1
Inventor: Louch; John O.
Application Number: 14/216344
Family ID: 40955612
Filed: 2014-03-17
Published: 2014-07-17
Speakerphone Control For Mobile Device
Abstract
A speakerphone system integrated in a mobile device is
automatically controlled based on the current state of the mobile
device. In one implementation, the mobile device is controlled
based on an orientation or position of the mobile device. In
another implementation, the control of the speakerphone includes
automatically controlling one or more graphical user interfaces
associated with the speakerphone system.
Inventors: Louch; John O. (San Luis Obispo, CA)
Applicant: Apple Inc., Cupertino, CA, US
Assignee: Apple Inc., Cupertino, CA
Family ID: 40955612
Appl. No.: 14/216344
Filed: March 17, 2014
Related U.S. Patent Documents

Application Number: 12033706, filed Feb 19, 2008, now Patent Number 8676224 (parent of the present application, 14216344)
Current U.S. Class: 455/418
Current CPC Class: H04M 1/605 (20130101); G06F 3/165 (20130101); H04M 2250/12 (20130101); G06F 3/02 (20130101); H04M 1/6041 (20130101); G06F 3/04886 (20130101); G06F 3/005 (20130101); G06F 3/04817 (20130101); H04M 1/6033 (20130101); H04M 1/72583 (20130101); H04M 1/72569 (20130101)
Class at Publication: 455/418
International Class: H04M 1/60 (20060101)
Claims
1. A method comprising: recording a first movement pattern of a
mobile phone in a learning mode, the first movement pattern of the
mobile phone being recorded according to sensor data acquired by
one or more sensors of the mobile phone during the learning mode;
after the recording of the first movement pattern in the learning
mode, and during an automatic control mode, detecting a second
movement pattern of the mobile phone; comparing the first movement
pattern and the second movement pattern; determining a control
action associated with the second movement pattern, based on a
result of comparing the first movement pattern and the second
movement pattern; and automatically controlling an audio system
associated with the mobile phone based on the control action.
2. The method of claim 1, wherein detecting the second movement
pattern includes receiving at least one sensor input related to
acceleration, proximity, orientation, rotation, angle, connection
to one or more hardware devices, gripping, time, distance, touch
for user input, or volume levels.
3. The method of claim 1, further comprising: receiving user input
to activate or deactivate the automatic control of the audio
system.
4. The method of claim 1, further comprising: prompting for user
input to accept the determined control action associated with the
second movement pattern.
5. The method of claim 1, wherein detecting the second movement
pattern comprises: obtaining information from a state machine.
6. The method of claim 1, wherein controlling the audio system further comprises: activating or deactivating the audio system.
7. The method of claim 1, wherein controlling the audio system further comprises: automatically adjusting a sensitivity of a microphone or a speaker volume.
8. The method of claim 1, wherein controlling the audio system further comprises: generating or adjusting a graphical user interface of the mobile phone.
9. The method of claim 8, wherein generating or adjusting the graphical user interface further comprises: changing a brightness of a display presenting the graphical user interface.
10. The method of claim 8, wherein generating or adjusting the graphical user interface further comprises: resizing elements displayed on the graphical user interface.
11. A method, comprising: with one or more sensors in a mobile
phone, detecting whether a user is holding the mobile phone; and
automatically controlling an audio system associated with the
mobile phone based on whether the user is holding the mobile
phone.
12. The method defined in claim 11 wherein the one or more sensors
comprise a pressure sensor.
13. The method defined in claim 11 wherein the one or more sensors
comprise a temperature sensor.
14. The method defined in claim 11 wherein automatically
controlling the audio system associated with the mobile phone based
on whether the user is holding the mobile phone comprises: in
response to detecting that the user is holding the mobile phone,
deactivating a speakerphone system; and in response to detecting
that the user is not holding the mobile phone, activating a
speakerphone system.
15. A method, comprising: in a mobile phone having a port that
couples to one or more external devices, detecting whether any
external devices are coupled to the port in the mobile phone; and
automatically controlling an audio system associated with the
mobile phone based on whether any external devices are coupled to
the port in the mobile phone.
16. The method defined in claim 15 wherein automatically
controlling the audio system associated with the mobile phone based
on whether any external devices are coupled to the port in the
mobile phone comprises: in response to detecting that one or more
external devices are coupled to the port in the mobile phone,
activating a speakerphone system; and in response to detecting that
no external devices are coupled to the port in the mobile phone,
deactivating a speakerphone system.
17. A method, comprising: with a microphone in a mobile phone,
detecting a volume of a user's voice; determining whether the
volume of the user's voice exceeds a default value; and
automatically controlling an audio system associated with the
mobile phone based on whether the volume of the user's voice
exceeds the default value.
18. The method defined in claim 17 wherein automatically
controlling the audio system associated with the mobile phone based
on whether the volume of the user's voice exceeds the default value
comprises: in response to determining that the volume of the user's
voice is greater than the default value, deactivating a
speakerphone system; and in response to determining that the volume
of the user's voice is lower than the default value, activating a
speakerphone system.
19. A method, comprising: with a touch-sensitive display in a
mobile phone, determining whether a user is providing input via the
touch-sensitive display; and automatically controlling an audio
system associated with the mobile phone based on whether the user
is providing input via the touch-sensitive display.
20. The method defined in claim 19 wherein automatically
controlling an audio system associated with the mobile phone based
on whether the user is providing input via the touch-sensitive
display comprises: in response to determining that the user is
providing input via the touch-sensitive display, activating a
speakerphone system; and in response to determining that the user
is not providing input via the touch-sensitive display,
deactivating a speakerphone system.
Description
RELATED APPLICATION
[0001] This application is a continuation of, and claims the benefit of and priority to, U.S. patent application Ser. No. 12/033,706, filed Feb. 19, 2008, which is hereby incorporated by reference herein in its entirety. This
subject matter is related to U.S. patent application Ser. No.
11/937,463, for "Variable Device Graphical User Interface," filed
Nov. 8, 2007, the subject matter of which is incorporated by
reference herein in its entirety.
TECHNICAL FIELD
[0002] This subject matter is generally related to mobile
devices.
BACKGROUND
[0003] Modern mobile devices (e.g., mobile phones, media players)
often include a speakerphone system. The speakerphone system, which
typically includes a loudspeaker and a microphone integrated into
the mobile device, can free the user's hands and facilitate
multi-party conversations using the mobile device. A typical
speakerphone system for a mobile device is controlled by hardware
and/or software mechanisms which require the user to make physical
contact with the mobile device. When operating the mobile device in
hands free mode, the user must manually activate the speakerphone
system to engage in a conversation, and then deactivate the
speakerphone system when finished with the conversation. Even if
the user is holding the device, the manual steps of activating and
deactivating the speakerphone system can be annoying to the
user.
SUMMARY
[0004] A speakerphone system integrated in a mobile device is
automatically controlled based on the current state of the mobile
device. In one implementation, the mobile device is controlled
based on an orientation or position of the mobile device. In
another implementation, the control of the speakerphone includes
automatically controlling one or more graphical user interfaces
associated with the speakerphone system.
DESCRIPTION OF DRAWINGS
[0005] FIGS. 1A and 1B illustrate an example speakerphone system
for a mobile device which can be managed based on a current state
of the mobile device.
[0006] FIG. 2 illustrates a graphical user interface for the
example mobile device.
[0007] FIG. 3 is a block diagram of an example implementation of
the mobile device of FIG. 2.
[0008] FIG. 4 illustrates an example process for managing a mobile
device's speakerphone system based on a current state of the mobile
device.
[0009] FIG. 5 illustrates an example data structure for mapping
mobile device states to speakerphone control actions.
DETAILED DESCRIPTION
Speakerphone System Overview
[0010] FIGS. 1A and 1B illustrate an example speakerphone system
for a mobile device 100 which can be managed based on a current
state of the mobile device 100. The mobile device 100 can be, for
example, a handheld computer, a personal digital assistant, a
cellular telephone, a network appliance, a camera, a smart phone,
an enhanced general packet radio service (EGPRS) mobile phone, a
network base station, a media player, a navigation device, an email
device, a game console, or a combination of any two or more of
these data processing devices or other data processing devices. The
mobile device 100 can have a speakerphone system associated with
the mobile device 100, e.g., an integrated speaker in the mobile
device 100, or an external speaker wirelessly connected to the
mobile device 100. The speakerphone system can include a microphone
122 which can be arranged on a front or back side of the mobile
device 100 to facilitate hands-free operation during a telephone
conversation or playback of audio content (e.g., music, voicemail).
In various implementations, the speakerphone system also includes a
loudspeaker 124 to deliver and/or amplify voice to reach a user of
the mobile device 100.
[0011] The speakerphone system can be managed based on the current
state of the mobile device 100. In some implementations, the
current state of the mobile device 100 can be determined using a
reference frame and one or more sensors (e.g., accelerometer, gyro,
light sensor, proximity sensor) integrated into the mobile device
100. A "state" can be an orientation or position of the device with
respect to the reference frame. For example, when the mobile device
100 is laid flat on a surface (e.g., the x-y plane) of a table top
140, the speakerphone system can be activated to allow hands free
operation. If the mobile device 100 is lying flat on a surface, an
assumption can be made that the user intends to use the
speakerphone system for hands free operation. Similarly, if the
mobile device 100 is oriented substantially vertically (along the z-axis), an assumption can be made that the mobile device 100 is
temporarily fixed in a docking or recharging device (as shown in
FIG. 1A), and the speakerphone system can be activated to allow
hands free operation while docked and/or while recharging.
[0012] A "state" of the mobile device 100 can also be determined
based solely on sensor inputs. For example, one or more sensors
(e.g., proximity sensor 268) on the front and/or back sides of the
mobile device 100 can indicate a state of the mobile device 100.
For example, if a first proximity sensor on the back side of the
mobile device 100 is triggered and a second proximity sensor on the
front side of the mobile device 100 is not triggered, then an
assumption can be made that the mobile device 100 is lying flat on
a surface. Based on this assumption, the speakerphone system can be
controlled (e.g., activated) to allow hands free operation. If both proximity sensors are triggered, or if only the second proximity sensor is triggered, then an assumption can be made that the mobile device 100 is being held by the user or is stored (e.g., stored in a bag or case). Based on this assumption, the speakerphone system can be controlled differently (e.g., deactivated).
[0013] In some implementations, a processor (e.g., processor 304)
in the mobile device 100 can use a state machine to maintain the
current state of the mobile device 100. The state machine can track
various combinations of inputs which can cause a state change to
occur. A control action can then be issued based on the current
state of the mobile device 100 as indicated by the state machine. A
control action can be activating or deactivating the speakerphone
system, generating or adjusting a graphical user interface and/or
any other suitable control action.
[0014] For example, a first state of the mobile device 100 can be
defined by a first proximity sensor on the back side of the mobile
device 100 sensing proximity to an object (e.g., a table top
surface) and a motion sensor not sensing motion of the mobile
device 100 (e.g., acceleration is below a threshold value). The
combination of these sensor inputs can place the state machine of
the mobile device 100 into the first state. The first state can
exist when the mobile device 100 is lying at rest, face up on a
flat surface, for example. The control action can be activating the
speakerphone system and adjusting the volume of the loudspeaker
124. Another control action can be to generate a graphical user
interface, as described in U.S. patent application Ser. No.
11/937,463.
[0015] A second state of the mobile device 100 can be defined by
the motion sensor sensing motion (e.g., acceleration above a
threshold value). Such motion can place the state machine into the
second state. The second state can exist when a user has picked up
the mobile device 100 from the surface to make a call, for example.
The control action can be lowering the volume of the loudspeaker
124. Other control actions are possible.
[0016] A third state of the mobile device 100 can be defined by a
second proximity sensor located on the front side of the mobile
device 100 sensing proximity to an object (e.g., the user's head)
and the motion sensor not sensing motion of the mobile device 100
(e.g., acceleration is again below a threshold value). The
combination of these sensor inputs can place the state machine of
the mobile device 100 into the third state. The third state can
exist when the user 110 has raised the mobile device 100 to the
user's ear and the mobile device is no longer in motion. The
control action can be deactivating the speakerphone system. Other
control actions are possible.
[0017] It should be understood that any number of states and/or
combinations of states can be defined and used to trigger control
actions. The state machine can be implemented by a processor of the
mobile device 100 (e.g., processor 304). The processor can also
determine appropriate control actions based on the current state of
the mobile device 100 as determined by the state machine.
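By way of illustration only, the state-machine logic of paragraphs [0013]-[0017] can be sketched in a few lines of Python. This is a minimal, hypothetical sketch: the state names, sensor fields, threshold value, and control-action strings are assumptions made for illustration, not the device's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class DeviceState(Enum):
    FACE_UP_AT_REST = auto()  # first state: back proximity, no motion
    IN_MOTION = auto()        # second state: acceleration above threshold
    AT_EAR = auto()           # third state: front proximity, no motion
    UNKNOWN = auto()

@dataclass
class SensorInputs:
    back_proximity: bool   # back-side proximity sensor triggered
    front_proximity: bool  # front-side proximity sensor triggered
    acceleration: float    # assumed gravity-compensated magnitude, in g

ACCEL_THRESHOLD = 0.15  # hypothetical motion threshold

def next_state(inputs: SensorInputs) -> DeviceState:
    """Map a combination of sensor inputs to a device state ([0014]-[0016])."""
    if inputs.acceleration >= ACCEL_THRESHOLD:
        return DeviceState.IN_MOTION
    if inputs.front_proximity:
        return DeviceState.AT_EAR
    if inputs.back_proximity:
        return DeviceState.FACE_UP_AT_REST
    return DeviceState.UNKNOWN

# Control actions keyed by state, per the examples in the text.
CONTROL_ACTIONS = {
    DeviceState.FACE_UP_AT_REST: "activate speakerphone; adjust loudspeaker volume",
    DeviceState.IN_MOTION: "lower loudspeaker volume",
    DeviceState.AT_EAR: "deactivate speakerphone",
}

state = next_state(SensorInputs(back_proximity=True, front_proximity=False,
                                acceleration=0.02))
print(state.name, "->", CONTROL_ACTIONS.get(state, "no change"))
```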
Example Mobile Device
[0018] FIG. 2 illustrates a graphical user interface for an example
mobile device. As described in reference to FIG. 1, the mobile
device 100 typically includes a built-in microphone 122 and
loudspeaker 124. In some implementations, an up/down button 284 for
volume control of the loudspeaker 124 and the microphone 122 can be
included. The mobile device 100 can also include an on/off button
282 for a ring indicator of incoming phone calls. An audio jack 266
can also be included for use of headphones and/or a microphone.
[0019] In addition, as shown in FIG. 2, the mobile device 100 can
include a display 202, which, in some implementations, is
touch-sensitive. The touch-sensitive display 202 can implement
liquid crystal display (LCD) technology, light emitting polymer
display (LPD) technology, or some other display technology. The
touch-sensitive display 202 can be sensitive to haptic and/or
tactile contact with a user.
[0020] In some implementations, the mobile device 100 can display
one or more graphical user interfaces on the touch-sensitive
display 202 for providing user access to various system objects and
for conveying information to a user. In some implementations, the
graphical user interfaces can include one or more display objects,
e.g., 204 and 206. In the example shown, the display objects 204
and 206 are graphic representations of system objects. Some
examples of system objects include device functions, applications,
windows, files, alerts, events, or other identifiable system
objects.
[0021] In some implementations, a proximity sensor 268 can be
included to determine the current state of the mobile device 100 by
detecting the user 110 positioning the mobile device 100 proximate
to the user's ear, as described in reference to FIG. 1. In some
implementations, the graphical user interface can be resized to
reduce the graphical representations of display objects 204 and
206, e.g., graphical icons, and their corresponding touch areas
(e.g., areas on the touch-sensitive display where a touch on the
display 202 selects the graphical icons). In various
implementations, an ambient light sensor 270 can also be used to
determine the current state of the device. For example, the ambient
light sensor 270 can sense when the mobile device 100 has been
stored away. This sensor input can be used alone or in combination
with other sensor inputs to determine the current state of the
mobile device 100.
[0022] In some implementations, the microphone 122 can be used as a
volume sensor which can detect the user's voice volume. For
example, when the volume level from the voice source exceeds a
default value, an assumption can be made that the user is speaking
directly into the microphone 122 while holding the mobile device
100 to their ear, resulting in the speakerphone system being
deactivated, for example.
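As a rough sketch of this volume heuristic (and of claims 17 and 18), the decision reduces to a threshold comparison. The decibel figure and function name below are illustrative assumptions.

```python
DEFAULT_VOICE_LEVEL_DB = -20.0  # hypothetical default value

def speakerphone_action_for_voice(measured_db: float,
                                  default_db: float = DEFAULT_VOICE_LEVEL_DB) -> str:
    """A loud, close voice suggests the handset is at the user's ear;
    a quieter voice suggests hands-free use at a distance."""
    if measured_db > default_db:
        return "deactivate speakerphone"
    return "activate speakerphone"

print(speakerphone_action_for_voice(-10.0))  # -> deactivate speakerphone
```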
[0023] In some implementations, the ambient light sensor 270 can be
utilized to facilitate adjusting the brightness of the display 202,
and an accelerometer 272 can be utilized to detect movement of the
mobile device 100, as indicated by the directional arrow 274.
Accordingly, the speakerphone system and a graphical user interface
can be adjusted according to a detected orientation of the mobile
device 100.
[0024] In some implementations, the mobile device 100 includes
circuitry and sensors for supporting a location determining
capability, such as that provided by the global positioning system
(GPS) or other positioning systems (e.g., systems using Wi-Fi
access points, television signals, cellular grids, Uniform Resource
Locators (URLs)). In some implementations, a positioning system
(e.g., a GPS receiver) can be integrated into the mobile device 100
or provided as a separate device that can be coupled to the mobile
device 100 through an interface (e.g., port device 290) to provide
access to location-based services. In some implementations, the
mobile device 100 includes a gyroscopic sensor or other sensors
that can be used to detect motion or orientation of the device with
respect to a reference frame.
[0025] In some implementations, positioning sensors (e.g., an
accelerometer 272) can be used to compute an instantaneous
coordinate frame of the mobile device 100. For example, when the
mobile device 100 is lying flat on a surface, an instantaneous
coordinate frame centered on the mobile device 100 can be computed.
For example, the z-axis can be perpendicular to the surface which
can lie in the x-y plane in a right-handed coordinate system, as
shown in FIG. 1A. If the user 110 moves the mobile device 100 to
the position and orientation shown in FIG. 1B, then a trajectory
for the mobile device 100 can be determined from the change in
coordinates of the mobile device 100. For example, in reference to FIGS. 1A and 1B, the coordinate frame of the mobile device 100 in FIG. 1A rotates by about ninety degrees with respect to the z-axis to change to the coordinate frame in FIG. 1B while the user 110 is
holding the mobile device 100. Accordingly, the speakerphone system
can be controlled according to the detected change of coordinate
frames.
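One way to make the coordinate-frame comparison concrete is to measure the rotation of the device's x-axis projected into the world x-y plane. The sketch below assumes such projected axis vectors are already available from the positioning sensors; the tolerance value is illustrative.

```python
import math

def rotation_about_z(axis_before, axis_after):
    """Degrees by which the device's x-axis (projected into the world
    x-y plane) rotates between two instantaneous coordinate frames."""
    a = math.atan2(axis_before[1], axis_before[0])
    b = math.atan2(axis_after[1], axis_after[0])
    # Wrap the difference into (-180, 180] degrees.
    return math.degrees((b - a + math.pi) % (2 * math.pi) - math.pi)

# FIG. 1A -> FIG. 1B: roughly a quarter turn about the z-axis.
delta = rotation_about_z((1.0, 0.0), (0.0, 1.0))
if abs(abs(delta) - 90.0) < 15.0:  # hypothetical tolerance
    print(f"~{delta:.0f} degree rotation detected; control speakerphone accordingly")
```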
[0026] In some implementations, one or more sensors (e.g., a
pressure sensor, temperature sensor) for detecting when a user is
holding or gripping the mobile device 100 can be integrated into a
housing of the mobile device 100. These sensors can detect when the
mobile device 100 is gripped by a user, for example, by detecting a
pressure exerted upon the body of the mobile device 100 or a
partial temperature change (e.g., deviation from an ambient
temperature) on the mobile device 100.
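A gripping check along these lines might combine a pressure reading with a temperature deviation from ambient, as sketched below; both threshold values are assumptions chosen only for illustration.

```python
def is_gripped(pressure_kpa: float, surface_temp_c: float, ambient_temp_c: float,
               pressure_threshold_kpa: float = 2.0,
               temp_delta_threshold_c: float = 3.0) -> bool:
    """Infer gripping from pressure exerted on the housing or from a
    partial temperature rise over ambient ([0026]); thresholds are
    illustrative assumptions."""
    pressed = pressure_kpa >= pressure_threshold_kpa
    warmed = (surface_temp_c - ambient_temp_c) >= temp_delta_threshold_c
    return pressed or warmed

print(is_gripped(pressure_kpa=0.5, surface_temp_c=31.0, ambient_temp_c=24.0))  # True
```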
[0027] In some implementations, the mobile device 100 can include a
touch sensor, which detects a user entering input via the graphical
user interface, resulting in the speakerphone system being
activated, for example. The user input can be received by the
mobile device 100 from the user touching the touch-sensitive
display 202, or from the user touching a keypad or a like device
(not shown) associated with the mobile device 100.
[0028] In some implementations, the mobile device 100 can include a
time sensor (e.g., using the internal clock of the mobile device
100), which detects a duration for a certain state (e.g., position,
or orientation) of the mobile device 100. The detected duration can
be used to determine if a control action will be triggered, to
prevent overly frequent, unnecessary responses to each state
change. By way of illustration, if the state change does not exceed
a certain amount of time, e.g., five seconds, an assumption can be
made that the state change is temporary, and therefore no control
action will be triggered in response. By contrast, if the state
change lasts longer than five seconds, an assumption can be made
that the state change will remain for a longer period, and thus a
control action can be triggered accordingly.
[0029] The decision whether to trigger a corresponding control
action can also be made upon detection of time in combination with
a transition distance of the mobile device 100, to enhance accuracy
of the state determination. For example, in FIGS. 1A and 1B, if the
mobile device 100 has been raised by the user 110 by twenty feet,
for an interval exceeding five seconds, an assumption can be made
that the user intends to use the handset for the telephone
conversation. Accordingly, the speakerphone system can be
deactivated in response to the assumption. Otherwise, the
speakerphone system can remain unchanged until the state change is greater than a certain amount of time or distance.
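The duration and distance gating of paragraphs [0028] and [0029] amounts to a debouncer: a candidate state triggers a control action only after it persists past a time (and optionally distance) threshold. The five-second figure below comes from the text; the class itself is a hypothetical sketch.

```python
import time

class StateDebouncer:
    """Report a state change only after it persists for min_duration_s and,
    optionally, after a minimum transition distance ([0028]-[0029])."""

    def __init__(self, min_duration_s=5.0, min_distance=0.0):
        self.min_duration_s = min_duration_s
        self.min_distance = min_distance
        self._candidate = None  # (state, start_time)

    def update(self, state, distance_moved=0.0, now=None):
        now = time.monotonic() if now is None else now
        if self._candidate is None or self._candidate[0] != state:
            self._candidate = (state, now)  # treat as temporary for now
            return None                     # no control action yet
        held_for = now - self._candidate[1]
        if held_for >= self.min_duration_s and distance_moved >= self.min_distance:
            return state                    # persisted: trigger the control action
        return None

deb = StateDebouncer()
deb.update("raised_to_ear", now=0.0)         # candidate recorded, no action
print(deb.update("raised_to_ear", now=6.0))  # -> raised_to_ear
```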
[0030] In some implementations, a port device 290, e.g., a
Universal Serial Bus (USB) port, or a docking port, or some other
wired port connection, can be included. The port device 290 can,
for example, be utilized to establish a wired connection to other
computing devices, such as other communication devices, network
access devices, a personal computer, a printer, a display screen,
or other processing devices capable of receiving and/or
transmitting data. In some implementations, the port device 290
allows the mobile device 100 to synchronize with a host device
using one or more protocols, such as, for example, TCP/IP, HTTP, UDP, or any other known protocol.
[0031] In some implementations, the mobile device 100 can have
hardware connection sensors that detect whether the mobile device
100 is connected to any hardware devices via the port device 290.
When the mobile device 100 is connected to hardware devices (e.g., a docking station or re-charger), it is more likely than not that a user of the mobile device 100 is not holding the handset, and thus the speakerphone system (e.g., the speaker volume and/or microphone sensitivity) and the graphical user interface can be adjusted accordingly.
[0032] The mobile device 100 can also include a camera lens and
sensor 280. In some implementations, the camera lens and sensor 280
can be located on the back surface of the mobile device 100. The
camera can capture still images and/or video. In some implementations, the images captured by the camera can be used to determine proximity to a user or whether the mobile device 100 is being held by the user, and the speakerphone system and graphical user interface can be activated or adjusted accordingly.
[0033] The mobile device 100 can also include one or more wireless
communication subsystems, such as an 802.11b/g communication device
186, and/or a Bluetooth.TM. communication device 188. Other
communication protocols can also be supported, including other
802.x communication protocols (e.g., WiMax, Wi-Fi, 3G), code
division multiple access (CDMA), global system for mobile
communications (GSM), Enhanced Data GSM Environment (EDGE),
etc.
Example Mobile Device Architecture
[0034] FIG. 3 is a block diagram of an example implementation 300
of the mobile device 100 of FIG. 1. The mobile device 100 can
include a memory interface 302, one or more data processors, image
processors and/or central processing units 304, and a peripherals
interface 306. The memory interface 302, the one or more processors
304 and/or the peripherals interface 306 can be separate components
or can be integrated in one or more integrated circuits. The
various components in the mobile device 100 can be coupled by one
or more communication buses or signal lines.
[0035] Sensors, devices, and subsystems can be coupled to the
peripherals interface 306 to facilitate multiple functionalities.
For example, a motion sensor 310, a light sensor 312, and a
proximity sensor 314 can be coupled to the peripherals interface
306 to facilitate the orientation, lighting, and proximity
functions described with respect to FIG. 2. A hardware connection
sensor 318 can be coupled to the peripherals interface 306, to
facilitate determining a state of connecting the mobile device 100
to any hardware, e.g., a docking station, a charger, a personal
computer, etc. A gripping sensor 319 can be coupled to the
peripherals interface 306, to determine if the mobile device 100 is
being gripped. In various implementations, a gripping sensor can include a temperature sensor and/or a pressure sensor. Further, a touch sensor 321 can be coupled to the peripherals interface 306, to detect if a user is touching a user input interface, e.g., a touch
screen or a keypad. A time sensor 323 can also be coupled to the
peripherals interface 306, to detect a duration of a certain state
of the mobile device 100. Other sensors 316 can also be connected
to the peripherals interface 306, such as a positioning system
(e.g., GPS receiver), a temperature sensor, a biometric sensor, a
gyroscope, or other sensing device, to facilitate related
functionalities.
[0036] A camera subsystem 320 and an optical sensor 322, e.g., a
charge-coupled device (CCD) or a complementary metal-oxide
semiconductor (CMOS) optical sensor, can be utilized to facilitate
camera functions, such as recording photographs and video
clips.
[0037] Communication functions can be facilitated through one or
more wireless communication subsystems 324, which can include radio
frequency receivers and transmitters and/or optical (e.g.,
infrared) receivers and transmitters. The specific design and
implementation of the communication subsystem 324 can depend on the
communication network(s) over which the mobile device 100 is
intended to operate. For example, a mobile device 100 may include
communication subsystems 324 designed to operate over a GSM
network, a GPRS network, an EDGE network, a Wi-Fi or WiMax network,
and a Bluetooth.TM. network. In particular, the wireless
communication subsystems 324 may include hosting protocols such
that the device 100 may be configured as a base station for other
wireless devices.
[0038] An audio subsystem 326 can be coupled to a loudspeaker 124 and a microphone 122 to facilitate voice-enabled functions, for
example, hands-free functionalities, voice recognition, voice
replication, digital recording, and telephony functions.
[0039] The I/O subsystem 340 can include a touch screen controller
342 and/or other input controller(s) 344. The touch-screen
controller 342 can be coupled to a touch screen 346. The touch
screen 346 and touch screen controller 342 can, for example, detect
contact and movement or break thereof using any of a plurality of
touch sensitivity technologies, including but not limited to
capacitive, resistive, infrared, and surface acoustic wave
technologies, as well as other proximity sensor arrays or other
elements for determining one or more points of contact with the
touch screen 346.
[0040] The other input controller(s) 344 can be coupled to other
input/control devices 348, such as one or more buttons, rocker
switches, thumb-wheel, infrared port, USB port, and/or a pointer
device such as a stylus. The one or more buttons (not shown) can include an up/down button for volume control of the loudspeaker 124 and/or the microphone 122.
[0041] In some implementations, the mobile device 100 can present
recorded audio and/or video files, such as MP3, AAC, and MPEG
files. In some implementations, the mobile device 100 can include
the functionality of an MP3 player, such as an iPod.TM.. The mobile
device 100 may, therefore, include a 36-pin connector that is
compatible with the iPod. Other input/output and control devices
can also be used.
[0042] The memory interface 302 can be coupled to memory 350. The
memory 350 can include high-speed random access memory and/or
non-volatile memory, such as one or more magnetic disk storage
devices, one or more optical storage devices, and/or flash memory
(e.g., NAND, NOR). The memory 350 can store an operating system
352, such as Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an
embedded operating system such as VxWorks. The operating system 352
may include instructions for handling basic system services and for
performing hardware dependent tasks. In some implementations, the
operating system 352 can be a kernel (e.g., UNIX kernel).
[0043] The memory 350 may also store communication instructions 354
to facilitate communicating with one or more additional devices,
one or more computers and/or one or more servers. The memory 350
may include graphical user interface instructions 356 to facilitate
graphic user interface processing; sensor processing instructions
358 to facilitate sensor-related processing and functions; phone
instructions 360 to facilitate phone-related processes and
functions; electronic messaging instructions 362 to facilitate
electronic-messaging related processes and functions; web browsing
instructions 364 to facilitate web browsing-related processes and
functions; media processing instructions 366 to facilitate media
processing-related processes and functions; GPS/navigation
instructions 368 to facilitate GPS and navigation-related processes and functions; camera instructions 370 to facilitate
camera-related processes and functions; GUI adjustment instructions
373 to facilitate adjustment of graphical user interfaces and user
interface elements in response to sensor data; and/or other
software instructions 372 to facilitate other processes and
functions.
[0044] In addition, the memory 350 can store audio management instructions 376 to facilitate functions for managing the audio subsystem, including the loudspeaker 124 and the microphone 122. In some
implementations, the audio management instructions 376 are operable
to toggle the speakerphone system and adjust speaker volume and/or
microphone sensitivity, in response to the sensor processing
instructions 358.
[0045] The memory 350 may also store other software instructions
(not shown), such as web video instructions to facilitate web
video-related processes and functions; and/or web shopping
instructions to facilitate web shopping-related processes and
functions. In some implementations, the media processing
instructions 366 are divided into audio processing instructions and
video processing instructions to facilitate audio
processing-related processes and functions and video
processing-related processes and functions, respectively. An
activation record and International Mobile Equipment Identity
(IMEI) 374 or similar hardware identifier can also be stored in
memory 350.
[0046] Each of the above identified instructions and applications
can correspond to a set of instructions for performing one or more
functions described above. These instructions need not be
implemented as separate software programs, procedures, or modules.
The memory 350 can include additional instructions or fewer
instructions. Furthermore, various functions of the mobile device
100 may be implemented in hardware and/or in software, including in
one or more signal processing and/or application specific
integrated circuits.
Example Process of Controlling Speakerphone System
[0047] FIG. 4 illustrates an example process 400 for managing a
mobile device's speakerphone system based on a current state of the
mobile device 100. For convenience, the process 400 is described
below in reference to FIGS. 1-3 (e.g., a mobile device 100, a
speakerphone system, and other components that perform the process
400).
[0048] In some implementations, the process 400 can begin when
input from one or more sensors on the mobile device is used to
determine a current state of the mobile device (410). An example
state can be a change of the mobile device's position or
orientation relative to a user of the mobile device or a reference
frame. The one or more sensors (e.g., accelerometer 272, motion
sensor 310, proximity sensor 314, gyroscope) can detect changes in
position and orientation of the mobile device, including the
magnitude of those changes. In some implementations, the
determining can include detecting and determining two or more
different states based on inputs from two or more different
sensors. For example, the mobile device can detect that it is being held or gripped, and also that the sound and light sensors indicate proximity, to cross-check and/or determine a level of proximity and/or motion of the device.
[0049] The current state or a sequence of states is used
determine a control action (420). Some examples of control actions
include activating and deactivating a speakerphone system and/or
generating or adjusting one or more graphical user interfaces of
the mobile device. The control action can then be implemented by
one or more components of the mobile device (430) to, for example,
automatically control a speakerphone system of the mobile device or
generate and/or adjust graphical user interfaces of the mobile
device.
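Taken together, process 400 is essentially a sense-map-act loop. The sketch below assumes hypothetical `sensors` and `device` interfaces and reuses the state-machine and mapping ideas sketched earlier; it illustrates the flow, not the device's actual firmware.

```python
import time

def run_process_400(sensors, determine_state, state_action_map, device,
                    poll_interval_s=0.5):
    """(410) determine the current state from sensor input, (420) map the
    state to a control action, (430) implement the action on the device."""
    last_action = None
    while True:
        state = determine_state(sensors.read())   # step 410
        action = state_action_map.get(state)      # step 420
        if action is not None and action != last_action:
            device.apply(action)                  # step 430
            last_action = action
        time.sleep(poll_interval_s)
```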
Example of Mapping States to Control Actions
[0050] FIG. 5 illustrates an example data structure 500 for mapping
mobile device states to speakerphone control actions. The mobile
device 100 can use the data structure 500 to map the current state
to one or more control actions. The states can be determined based
on sensor inputs, as described in reference to FIGS. 1-4, and
mapped to control actions which can be applied to the speakerphone
system, a graphical user interface, and any other feature, peripheral, or application of the mobile device 100. Based on the
target of a control action, the control actions can be implemented
by sensor processing instructions 358, audio management
instructions 376 and/or GUI adjustment instructions 373. In some
implementations, the mobile device 100 can include a user-settable
option to enable or disable control actions.
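A data structure along the lines of FIG. 5 could be as simple as a dictionary keyed by state, with per-target actions and the user-settable enable/disable option applied on lookup. All names below are illustrative assumptions; FIG. 5's actual entries are not reproduced here.

```python
# Hypothetical stand-in for the FIG. 5 mapping: each state maps to one or
# more control actions, keyed by the target (audio system, GUI, ...).
STATE_ACTION_MAP = {
    "flat_on_surface": {"audio": "activate_speakerphone", "gui": "enlarge_touch_areas"},
    "held_at_ear":     {"audio": "deactivate_speakerphone", "gui": "dim_display"},
    "docked":          {"audio": "activate_speakerphone"},
    "in_motion":       {"audio": "lower_volume"},
}

def control_actions_for(state, enabled=True):
    """Look up the control actions for a state; the user-settable option
    can disable automatic control entirely."""
    return STATE_ACTION_MAP.get(state, {}) if enabled else {}

print(control_actions_for("held_at_ear"))
print(control_actions_for("held_at_ear", enabled=False))  # {} -> no automatic control
```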
[0051] Referring to FIG. 5, the current state can be any desired state of the mobile device 100, including but not limited to: the mobile device 100 being oriented towards or away from a user; the mobile device 100 being in proximity or within a distance to a user; the mobile device 100 being gripped or released by a user; the mobile device 100 being connected to or disconnected from hardware device(s) (e.g., a docking station); the device 100 being held at a certain angle with reference to an instantaneous coordinate frame; volume levels received at the microphone 122; the device 100 receiving input from a user interface or not; and other events that tend to denote a change of position, orientation, or state of the mobile device 100.
[0052] In various implementations, a control action applied to the
speakerphone system can account for the magnitude of the determined
event. For example, the sensitivity of the microphone 122 and/or the volume of the loudspeaker 124 can be adjusted in correspondence with the proximity change, to help maintain voice quality.
As another example, an angle at which the mobile device 100 is
being held can determine the angle of the speakerphone system
relative to the ground, and correspondingly, the reception and
transmission of sound waves by the speakerphone system can be
affected by the angle relative to the ground. A control action can
be made that accounts for this angle change. For example, if the
mobile device 100 is held at an angle more parallel than
perpendicular with a plane (e.g., the ground), the control action
can increase the sensitivity and/or volume of the speakerphone
system to optimize sending and receiving sound waves.
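The angle-dependent adjustment could, for instance, scale gain linearly with how far the device is tilted from vertical toward lying flat. The linear mapping and gain range below are assumptions made for illustration only.

```python
def gain_for_tilt(tilt_from_vertical_deg, min_gain=0.5, max_gain=1.0):
    """Higher speakerphone gain the closer the device lies to parallel
    with the ground (90 degrees from vertical), compensating for the
    angle's effect on sending and receiving sound waves ([0052])."""
    t = min(max(abs(tilt_from_vertical_deg), 0.0), 90.0) / 90.0
    return min_gain + t * (max_gain - min_gain)

print(gain_for_tilt(90))  # lying flat -> 1.0 (full gain)
print(gain_for_tilt(10))  # nearly vertical -> about 0.56
```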
[0053] In some implementations, a control action can include
resizing of one or more graphical user interfaces displayed on the mobile device 100, such as resizing of display objects 204 or 206
and their corresponding touch areas, shifting a graphical user
interface or user interface elements in the graphical user
interface, changing of a sensitivity level of the touch-sensitive
display, and/or adjusting and enlarging of the target touch area of
a user interface element based on an angle at which the device is
being held, such as, for example, described in U.S. patent
application Ser. No. 11/937,463.
[0054] In some implementations, a control action can include
adjusting a parameter by a predefined amount or percentage. For
example, a speaker's volume can be attenuated by a predetermined
amount of decibels or a predetermined percentage of the original
volume of the speaker. In some other implementations, the amount of
an adjustment is based on the amount of detected motion, amount of
detected acceleration, amount of change in orientation from a
starting position, or the like. For example, if the amount of
acceleration from rest is small, the amount of change of the
speaker's volume is small as well.
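Both styles of adjustment can be written out directly: a fixed cut expressed in decibels, and a cut scaled by the detected acceleration. The constants below are illustrative assumptions.

```python
def apply_db_cut(volume, cut_db=6.0):
    """Attenuate by a predetermined number of decibels (amplitude ratio)."""
    return volume * 10 ** (-cut_db / 20.0)

def motion_scaled_volume(volume, accel_g, max_cut_fraction=0.4, full_scale_g=1.0):
    """Scale the cut with the amount of detected acceleration: small
    acceleration from rest yields a small change in volume ([0054])."""
    fraction = min(accel_g / full_scale_g, 1.0) * max_cut_fraction
    return volume * (1.0 - fraction)

print(apply_db_cut(1.0))                # ~0.50 after a 6 dB cut
print(motion_scaled_volume(1.0, 0.05))  # small motion -> 0.98
```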
[0055] In some implementations, the amount of adjustment can be
based on user preferences. For example, the user can configure (in
a device options and settings interface, for example) a maximum
allowable amount (e.g., a percentage) for a type of adjustment. In
some implementations, the user can toggle the automatic control of
the speaker system through the graphical user interface.
[0056] In some implementations, a control action can include
interrupting actions initiated by the user with additional
confirmation steps. Rather than activating/deactivating or
adjusting the speakerphone system by some amount, for example, the
device can add confirmation dialogs or question interfaces to the
user interface flow for particular actions. As an example, if a user attempts to make a call on device 100 while a connection between the device 100 and a personal computer is detected, the device 100 can display a confirmation dialog box asking the user whether to activate the speakerphone system. The confirmation
dialog box can have relatively large target touch areas for "Yes"
and "No" buttons for the user to select to confirm or decline to
activate speakerphone.
[0057] In some implementations, the mobile device 100 "learns"
particular characteristics or patterns of the state of the device
and/or the user's interactions with the device 100 in view of the
state to determine which control action should be issued. By way of
illustration, the mobile device 100 can have a learning mode, where
the device 100 can "learn" by recording a detected state of the
device 100, e.g., a trajectory of a motion, or a signature of
proximity. Further, the device 100 can compare its state detected
from the sensor input to the one or more patterns already stored in
the device 100, e.g., a signature of a bouncing motion while the
user is walking, a signature of a change in orientation, a
trajectory while the user is picking up and raising the device 100,
etc. If the detected state matches one of the patterns, the mobile device 100 can apply the control action corresponding to the matching pattern to the speakerphone system and/or graphical user interface. In some scenarios, the device 100 can apply the control action based on the closest pattern determined from the comparison.
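A minimal sketch of this matching step, assuming recorded trajectories of (x, y, z) samples: resample both trajectories to a common length and pick the learned pattern with the smallest summed point-to-point distance. A real implementation might use a more robust measure (e.g., dynamic time warping); everything here is illustrative.

```python
import math

def resample(traj, n=32):
    """Linearly resample a list of (x, y, z) points to n samples."""
    if len(traj) == 1:
        return list(traj) * n
    out = []
    for i in range(n):
        pos = i * (len(traj) - 1) / (n - 1)
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, len(traj) - 1)
        out.append(tuple(a + frac * (b - a)
                         for a, b in zip(traj[lo], traj[hi])))
    return out

def pattern_distance(a, b, n=32):
    return sum(math.dist(p, q) for p, q in zip(resample(a, n), resample(b, n)))

def closest_pattern(detected, learned):
    """`learned` maps pattern names to (trajectory, control_action);
    return the name and action of the nearest learned pattern."""
    name = min(learned, key=lambda k: pattern_distance(detected, learned[k][0]))
    return name, learned[name][1]

learned = {
    "pick_up_and_raise": ([(0, 0, 0), (0, 0.1, 0.3), (0, 0.2, 0.6)],
                          "deactivate speakerphone"),
    "lay_flat":          ([(0, 0, 0.6), (0, 0, 0.3), (0, 0, 0)],
                          "activate speakerphone"),
}
print(closest_pattern([(0, 0, 0), (0, 0.12, 0.28), (0, 0.2, 0.55)], learned))
```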
[0058] In various implementations, the patterns that can be
compared to the detected state are related to detected volume
levels, detected gripping, detected proximity, detected user input,
or other states that can be associated with patterns. In some implementations, the patterns can be configured or modified by the
user or by learning from the user's past preference of a control
action given a detected state. In some implementations, the
patterns and the control action account for the magnitude of the
state, e.g., the level of detected volume, or the distance of the
motion.
[0059] In some implementations, the device 100 includes an
indicator, e.g., a light indicator, to signify that an adjustment
to the speakerphone system is being made. For example, the light
indicator can show different colors to signify different
adjustments to the speakerphone system that are being made. In some
implementations, a control action can include changing the lighting of the display of the device 100, providing the user a better view of the graphical user interface. In various implementations, the
different lighting of the display can indicate that some changes to
the speakerphone system are being made.
[0060] The disclosed and other embodiments and the functional
operations described in this specification can be implemented in
digital electronic circuitry, or in computer software, firmware, or
hardware, including the structures disclosed in this specification
and their structural equivalents, or in combinations of one or more
of them. The disclosed and other embodiments can be implemented as
one or more computer program products, i.e., one or more modules of
computer program instructions encoded on a computer-readable medium
for execution by, or to control the operation of, data processing
apparatus. The computer-readable medium can be a machine-readable
storage device, a machine-readable storage substrate, a memory
device, a composition of matter effecting a machine-readable
propagated signal, or a combination of one or more of them. The term
"data processing apparatus" encompasses all apparatus, devices, and
machines for processing data, including by way of example a
programmable processor, a computer, or multiple processors or
computers. The apparatus can include, in addition to hardware, code
that creates an execution environment for the computer program in
question, e.g., code that constitutes processor firmware, a
protocol stack, a database management system, an operating system,
or a combination of one or more of them. A propagated signal is an
artificially generated signal, e.g., a machine-generated
electrical, optical, or electromagnetic signal, that is generated
to encode information for transmission to suitable receiver
apparatus.
[0061] A computer program (also known as a program, software,
software application, script, or code) can be written in any form
of programming language, including compiled or interpreted
languages, and it can be deployed in any form, including as a
stand-alone program or as a module, component, subroutine, or other
unit suitable for use in a computing environment. A computer
program does not necessarily correspond to a file in a file system.
A program can be stored in a portion of a file that holds other
programs or data (e.g., one or more scripts stored in a markup
language document), in a single file dedicated to the program in
question, or in multiple coordinated files (e.g., files that store
one or more modules, sub-programs, or portions of code).
[0062] The processes and logic flows described in this
specification can be performed by one or more programmable
processors executing one or more computer programs to perform
functions by operating on input data and generating output. The
processes and logic flows can also be performed by, and apparatus
can also be implemented as, special purpose logic circuitry, e.g.,
an FPGA (field programmable gate array) or an ASIC
(application-specific integrated circuit).
[0063] Processors suitable for the execution of a computer program
include, by way of example, both general and special purpose
microprocessors, and any one or more processors of any kind of
digital computer. Generally, a processor will receive instructions
and data from a read-only memory or a random access memory or both.
The essential elements of a computer are a processor for performing
instructions and one or more memory devices for storing
instructions and data. Generally, a computer will also include, or
be operatively coupled to receive data from or transfer data to, or
both, one or more mass storage devices for storing data, e.g.,
magnetic, magneto-optical disks, or optical disks. However, a
computer need not have such devices. Computer-readable media
suitable for storing computer program instructions and data include
all forms of non-volatile memory, media and memory devices,
including by way of example semiconductor memory devices, e.g.,
EPROM, EEPROM, and flash memory devices; magnetic disks, e.g.,
internal hard disks or removable disks; magneto-optical disks; and
CD-ROM and DVD-ROM disks. The processor and the memory can be
supplemented by, or incorporated in, special purpose logic
circuitry.
[0064] To provide for interaction with a user, the disclosed
embodiments can be implemented on a computer having a display
device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, or a touch-sensitive device or display, for displaying
information to the user and a keyboard and a pointing device, e.g.,
a mouse or a trackball, by which the user can provide input to the
computer. Other kinds of devices can be used to provide for
interaction with a user as well; for example, feedback provided to
the user can be any form of sensory feedback, e.g., visual
feedback, auditory feedback, or tactile feedback; and input from
the user can be received in any form, including acoustic, speech,
or tactile input.
[0065] While this specification contains many specifics, these
should not be construed as limitations on the scope of what is being claimed or of what may be claimed, but rather as descriptions of
features specific to particular embodiments. Certain features that
are described in this specification in the context of separate
embodiments can also be implemented in combination in a single
embodiment. Conversely, various features that are described in the
context of a single embodiment can also be implemented in multiple
embodiments separately or in any suitable subcombination. Moreover,
although features may be described above as acting in certain
combinations and even initially claimed as such, one or more
features from a claimed combination can in some cases be excised
from the combination, and the claimed combination may be directed
to a subcombination or variation of a subcombination.
[0066] Similarly, while operations are depicted in the drawings in
a particular order, this should not be understood as requiring that
such operations be performed in the particular order shown or in
sequential order, or that all illustrated operations be performed,
to achieve desirable results. In certain circumstances,
multitasking and parallel processing may be advantageous. Moreover,
the separation of various system components in the embodiments
described above should not be understood as requiring such
separation in all embodiments, and it should be understood that the
described program components and systems can generally be
integrated together in a single software product or packaged into
multiple software products.
[0067] Thus, particular embodiments have been described. Other
embodiments are within the scope of the following claims.
* * * * *