U.S. patent application number 15/670,323 was filed with the patent office on August 7, 2017 and published on March 29, 2018 as publication number 2018/0088208 for systems and methods for power optimization in VLC positioning.
The applicant listed for this patent is QUALCOMM Incorporated. The invention is credited to Ajay Kumar Dhiman and Gaurav Gagrani.
Application Number: 15/670,323
Publication Number: 20180088208
Kind Code: A1
Publication Date: March 29, 2018
First Named Inventor: Gagrani, Gaurav; et al.
United States Patent Application
Systems and Methods for Power Optimization in VLC Positioning
Abstract
Embodiments of systems and methods of power optimization in VLC
positioning are disclosed. In one embodiment, a method of power
optimization in visible light communication (VLC) positioning of a
mobile device comprises receiving, by a transceiver, positioning
assistance data of a venue, where the positioning assistance data
includes identifiers and positions of light fixtures in the venue,
decoding, by a VLC signal decoder, one or more light fixtures
within a field of view of the mobile device to obtain corresponding
light fixture identifiers, determining, by a controller, a motion
of the mobile device with respect to the one or more light fixtures
based on the light fixture identifiers and the positioning
assistance data of the venue, and controlling, by the controller,
the mobile device to operate in a reduced power mode based on the
motion of the mobile device with respect to the one or more light
fixtures.
Inventors: Gagrani, Gaurav (Hyderabad, IN); Dhiman, Ajay Kumar (Hyderabad, IN)
Applicant: QUALCOMM Incorporated, San Diego, CA, US
Family ID: 61686076
Appl. No.: 15/670,323
Filed: August 7, 2017
Related U.S. Patent Documents
Application Number: 62/401,101
Filing Date: Sep 28, 2016
Current U.S. Class: 1/1
Current CPC Class: G01S 2201/01 20190801; G01S 5/0263 20130101; H04N 5/23212 20130101; G01S 5/163 20130101; H04W 64/003 20130101; H04W 4/33 20180201; H04W 52/029 20130101; H04N 5/23241 20130101; G01S 1/70 20130101; G02B 7/09 20130101; H04B 10/116 20130101; G02B 7/10 20130101; H04W 72/005 20130101; G03B 3/10 20130101; G03B 13/34 20130101; H04W 52/0254 20130101; Y02D 30/70 20200801; G01C 21/206 20130101; H04N 5/23293 20130101; G01S 5/16 20130101; H04W 4/80 20180201; G06F 1/3206 20130101; Y02D 10/00 20180101; G01S 1/7034 20190801; H04B 10/60 20130101; H04N 5/23245 20130101; G06F 1/3287 20130101; H04W 48/10 20130101; G03B 2217/007 20130101
International Class: G01S 5/16 20060101 G01S005/16; H04W 4/00 20060101 H04W004/00; H04W 72/00 20060101 H04W072/00; H04W 64/00 20060101 H04W064/00; H04B 10/116 20060101 H04B010/116
Claims
1. A method of power optimization in visible light communication
(VLC) positioning of a mobile device comprising: receiving, by a
transceiver, positioning assistance data of a venue, wherein the
positioning assistance data includes identifiers and positions of
light fixtures in the venue; decoding, by a VLC signal decoder, one
or more light fixtures within a field of view of the mobile device
to obtain corresponding light fixture identifiers; determining, by
a controller, a motion of the mobile device with respect to the one
or more light fixtures based on the light fixture identifiers and
the positioning assistance data of the venue; and controlling, by
the controller, the mobile device to operate in a reduced power
mode based on the motion of the mobile device with respect to the
one or more light fixtures.
2. The method of claim 1, further comprising: in the reduced power mode, monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device; and determining a position of the mobile device using decoded light fixture identifiers and the angles of arrival of light from the one or more light fixtures in the field of view of the mobile device.
3. The method of claim 2, wherein monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device comprises: for each light fixture in the field of view of the mobile device, measuring light pixels of the each light fixture for at least one image sensor frame; and determining the field of view for the each light fixture based on the light pixels measured for the at least one image sensor frame.
4. The method of claim 1, wherein controlling the mobile device to
operate in a reduced power mode comprises: controlling an actuator
of a camera of the mobile device to position one or more lenses of
the camera in a fixed focal length during a period while the motion
of the mobile device is above a first threshold.
5. The method of claim 1, wherein controlling the mobile device to
operate in a reduced power mode further comprises: controlling a
video frontend engine of the mobile device to stop generating
statistics data during a period while the motion of the mobile
device is above a first threshold.
6. The method of claim 1, wherein controlling the mobile device to
operate in a reduced power mode further comprises: controlling a
video front end engine of the mobile device to stop transferring
statistics data to a memory during a period while the motion of the
mobile device is above a first threshold.
7. The method of claim 1, wherein controlling the mobile device to
operate in a reduced power mode further comprises: controlling an
image processing engine of the mobile device to stop processing
data in support of auto focus, auto white balance, and auto
exposure during a period while the motion of the mobile device is
above a first threshold.
8. The method of claim 1, wherein controlling the mobile device to
operate in a reduced power mode further comprises: controlling the
VLC signal decoder of the mobile device to intermittently decode
incoming video frames during a period while the motion of the
mobile device is above a first threshold.
9. The method of claim 1, further comprising: detecting that the motion of the mobile device with respect to the one or more light fixtures is below a second threshold; and controlling the mobile device to
operate in a normal power mode based on the motion of the mobile
device with respect to the one or more light fixtures being below
the second threshold.
10. A mobile device, comprising: a camera configured to receive
visible light communication signals; a memory configured to store
the visible light communication signals; a transceiver configured
to receive positioning assistance data of a venue, wherein the
positioning assistance data includes identifiers and positions of
light fixtures in the venue; a VLC signal decoder configured to
decode one or more light fixtures within a field of view of the
mobile device to obtain corresponding light fixture identifiers; a
controller configured to: determine a motion of the mobile device
with respect to the one or more light fixtures based on the light
fixture identifiers and the positioning assistance data of the
venue; and control the mobile device to operate in a reduced power
mode based on the motion of the mobile device with respect to the
one or more light fixtures.
11. The mobile device of claim 10, wherein the controller is further configured to: in the reduced power mode, monitor angles of arrival of light from the one or more light fixtures in the field of view of the mobile device; and determine a position of the mobile device using decoded light fixture identifiers and the angles of arrival of light from the one or more light fixtures in the field of view of the mobile device.
12. The mobile device of claim 11, wherein the controller is
further configured to: for each light fixture in the field of view
of the mobile device, measure light pixels of the each light
fixture for at least one image sensor frame; and determine the
field of view for the each light fixture based on the light pixels
measured for the at least one image sensor frame.
13. The mobile device of claim 10, wherein the controller is
further configured to: control an actuator of the camera of the
mobile device to position one or more lenses of the camera in a
fixed focal length during a period while the motion of the mobile
device is above a first threshold.
14. The mobile device of claim 10, wherein the controller is
further configured to: control a video frontend engine of the
mobile device to stop generating statistics data during a period
while the motion of the mobile device is above a first
threshold.
15. The mobile device of claim 10, wherein the controller is
further configured to: control a video front end engine of the
mobile device to stop transferring statistics data to the memory
during a period while the motion of the mobile device is above a
first threshold.
16. The mobile device of claim 10, wherein the controller is
further configured to: control an image processing engine of the
mobile device to stop processing data in support of auto focus,
auto white balance, and auto exposure during a period while the
motion of the mobile device is above a first threshold.
17. The mobile device of claim 10, wherein the controller is
further configured to: control a VLC signal decoder of the mobile
device to intermittently decode incoming video frames during a
period while the motion of the mobile device is above a first
threshold.
18. The mobile device of claim 10, wherein the controller is
further configured to: detect that the motion of the mobile device with respect to the one or more light fixtures is below a second threshold; and control the mobile device to operate in a normal
power mode based on the motion of the mobile device with respect to
the one or more light fixtures being below the second
threshold.
19. A mobile device, comprising: means for receiving positioning
assistance data of a venue, wherein the positioning assistance data
includes identifiers and positions of light fixtures in the venue;
means for decoding one or more light fixtures within a field of
view of the mobile device to obtain corresponding light fixture
identifiers; means for determining a motion of the mobile device
with respect to the one or more light fixtures based on the light
fixture identifiers and the positioning assistance data of the
venue; and means for controlling the mobile device to operate in a
reduced power mode based on the motion of the mobile device with
respect to the one or more light fixtures.
20. The mobile device of claim 19, further comprising: in the reduced power mode, means for monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device; and means for determining a position of the mobile device using decoded light fixture identifiers and the angles of arrival of light from the one or more light fixtures in the field of view of the mobile device.
21. The mobile device of claim 20, wherein monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device comprises: for each light fixture in the field of view of the mobile device, means for measuring light pixels of the each light fixture for at least one image sensor frame; and means for determining the field of view for the each light fixture based on the light pixels measured for the at least one image sensor frame.
22. The mobile device of claim 19, wherein the means for
controlling the mobile device to operate in a reduced power mode
comprises: means for controlling an actuator of a camera of the
mobile device to position one or more lenses of the camera in a
fixed focal length during a period while the motion of the mobile
device is above a first threshold.
23. The mobile device of claim 19, wherein the means for
controlling the mobile device to operate in a reduced power mode
further comprises: means for controlling a video frontend engine of
the mobile device to stop generating statistics data during a
period while the motion of the mobile device is above a first
threshold.
24. The mobile device of claim 19, wherein the means for
controlling the mobile device to operate in a reduced power mode
further comprises: means for controlling a video front end engine
of the mobile device to stop transferring statistics data to a
memory during a period while the motion of the mobile device is
above a first threshold.
25. The mobile device of claim 19, wherein the means for
controlling the mobile device to operate in a reduced power mode
further comprises: means for controlling an image processing engine
of the mobile device to stop processing data in support of auto
focus, auto white balance, and auto exposure during a period while
the motion of the mobile device is above a first threshold.
26. The mobile device of claim 19, wherein the means for
controlling the mobile device to operate in a reduced power mode
further comprises: means for controlling a VLC signal decoder of
the mobile device to intermittently decode incoming video frames
during a period while the motion of the mobile device is above a
first threshold.
27. The mobile device of claim 19, further comprising: means for detecting that the motion of the mobile device with respect to the one or more light fixtures is below a second threshold; and means for
controlling the mobile device to operate in a normal power mode
based on the motion of the mobile device with respect to the one or
more light fixtures being below the second threshold.
28. A non-transitory medium storing instructions for execution by
one or more processors of a mobile device, the instructions
comprising: instructions for receiving, by a transceiver, positioning assistance data of a venue, wherein the positioning assistance data includes identifiers and positions of light fixtures in the venue; instructions for decoding, by a VLC signal decoder, one or more light fixtures within a field of view of the mobile device to obtain corresponding light fixture identifiers;
instructions for determining, by a controller, a motion of the
mobile device with respect to the one or more light fixtures based
on the light fixture identifiers and the positioning assistance
data of the venue; and instructions for controlling, by the
controller, the mobile device to operate in a reduced power mode
based on the motion of the mobile device with respect to the one or
more light fixtures.
29. The non-transitory medium of claim 28, further comprising: in the reduced power mode, instructions for monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device; and instructions for determining a position of the mobile device using decoded light fixture identifiers and the angles of arrival of light from the one or more light fixtures in the field of view of the mobile device.
30. The non-transitory medium of claim 29, wherein the instructions for monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device comprise: for each light fixture in the field of view of the mobile device, instructions for measuring light pixels of the each light fixture for at least one image sensor frame; and instructions for determining the field of view for the each light fixture based on the light pixels measured for the at least one image sensor frame.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims benefit of U.S. provisional
application No. 62/401,101, "SYSTEMS AND METHODS TO MINIMIZE
ACTUATOR POWER LEAKAGE" filed Sep. 28, 2016. The aforementioned
United States patent application is hereby incorporated by
reference in its entirety.
FIELD
[0002] The present disclosure relates to the field of positioning
of mobile devices. In particular, the present disclosure relates to
systems and methods for power optimization in visible light
communication (VLC) positioning.
BACKGROUND
[0003] Recently, interest in radio over fiber technologies
complementary to Radio Frequency (RF) technologies has increased
due to the exhaustion of RF band frequencies, potential crosstalk
between several wireless communication technologies, increased
demand for communication security, and the advent of an ultra-high
speed ubiquitous communication environment based on various
wireless technologies. Consequently, visible light communication
employing visible light LEDs has been developed to complement RF
technologies.
[0004] In conventional visible light communication positioning
applications, a camera of a mobile device is kept on while the user
of the mobile device is in motion. Such conventional applications
would require the image acquisition unit of the camera to continuously make adjustments based on the new scenes in the field of view of the camera, and would cause the processing unit of the mobile device to continuously process the newly acquired image frames, for example to make decisions regarding auto focus, auto exposure, and auto white balance. These operations may unnecessarily consume
battery power and computing bandwidth of the mobile device.
Therefore, there is a need for systems and methods for power
optimization in visible light communication positioning.
SUMMARY
[0005] Embodiments of systems and methods for power optimization in
visible light communication positioning are disclosed. In one
embodiment, a method of power optimization in visible light
communication (VLC) positioning of a mobile device comprises
receiving, by a transceiver, positioning assistance data of a
venue, where the positioning assistance data includes identifiers
and positions of light fixtures in the venue, decoding, by a VLC
signal decoder, one or more light fixtures within a field of view
of the mobile device to obtain corresponding light fixture
identifiers, determining, by a controller, a motion of the mobile
device with respect to the one or more light fixtures based on the
light fixture identifiers and the positioning assistance data of
the venue, and controlling, by the controller, the mobile device to
operate in a reduced power mode based on the motion of the mobile
device with respect to the one or more light fixtures.
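For illustration, the overall flow of this embodiment can be sketched in a few lines of Python. The objects, method names, and thresholds used below (transceiver, decoder, controller, device, MOTION_HIGH, MOTION_LOW) are hypothetical placeholders under stated assumptions, not an implementation of the disclosed system:

    # Minimal sketch of the power-optimization flow described above. All
    # objects, method names, and thresholds are hypothetical placeholders.

    MOTION_HIGH = 1.0   # illustrative "first threshold" on estimated motion
    MOTION_LOW = 0.2    # illustrative "second threshold"

    def vlc_positioning_step(transceiver, decoder, controller, device):
        # Receive assistance data: identifiers and positions of the venue's fixtures.
        assistance = transceiver.receive_assistance_data()
        # Decode the light fixtures currently in the camera's field of view.
        fixture_ids = decoder.decode_visible_fixtures()
        # Estimate device motion relative to those fixtures.
        motion = controller.estimate_motion(fixture_ids, assistance)
        # Reduced power while motion is above the first threshold (cf. claims 4-8);
        # back to normal power once motion drops below the second threshold (claim 9).
        if motion > MOTION_HIGH:
            device.enter_reduced_power_mode()
        elif motion < MOTION_LOW:
            device.enter_normal_power_mode()
        return fixture_ids, motion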
[0006] In another embodiment, a mobile device comprises a camera
configured to receive visible light communication signals, a memory
configured to store the visible light communication signals, a
transceiver configured to receive positioning assistance data of a
venue, where the positioning assistance data includes identifiers
and positions of light fixtures in the venue, a VLC signal decoder
configured to decode one or more light fixtures within a field of
view of the mobile device to obtain corresponding light fixture
identifiers, a controller configured to determine a motion of the
mobile device with respect to the one or more light fixtures based
on the light fixture identifiers and the positioning assistance
data of the venue, and control the mobile device to operate in a
reduced power mode based on the motion of the mobile device with
respect to the one or more light fixtures.
[0007] According to aspects of the present disclosure, systems and
techniques for minimizing power usage in cameras that have a voltage regulator that is shared between at least one camera component and a display are disclosed. The camera of a phone has an actuator module which moves the lens between the macro and infinity positions. In some embodiments, the actuator includes a voice coil
motor (VCM). A spring (or other biasing mechanism) may be coupled
to the lens. In some lens/sensor arrangements, the infinity
position is the lens position where the lens is moved to be nearest
to the image sensor, and the macro position is the lens position
where the lens is moved to be farthest from the image sensor.
Typically, when the camera is powered down, a 3A algorithm
(auto-focus, auto-exposure, auto-white balance) moves the lens to a
default position, for example, the infinity position, using the actuator to physically move the lens. In most low-end and mid-tier phones, multimedia subsystems share voltage
regulators due to space restrictions of the system-on-chip (SOC)
and to lower cost of the power management integrated circuit
(PMIC).
[0008] In some imaging systems, for example cell phones, a
low-dropout voltage regulator (LDO) is shared between a display and
a camera, and power is consumed by the camera sensor/components
even after the camera is turned off if the display is still ON.
Some components of the phone, like the actuator, have power on/off control only through a voltage regulator. That is, the actuator does
not have another way to power down except to power down the
LDO.
[0009] As a result, in some imaging systems, the actuator does not
turn off, and continues to consume power (at least at a low level)
even when camera software turns the camera off, because the LDO cannot be powered down while it is shared and needed by the display. Consequently, the camera continues to consume power even when the camera is off. The power consumption becomes negligible only if
the camera and the display are placed in an OFF or SUSPEND state,
and the voltage regulator can be powered down.
[0010] According to aspects of the present disclosure, an imaging
device may include a camera system having an image sensor having
sensing elements arranged in an imaging plane, a lens having at
least one optical element, the lens and image sensor arranged in an
optical path configured to propagate light through the lens and to
the image sensor, and an actuator coupled to the lens, the actuator
operative to move the lens to a plurality of focus positions each
being a different distance from the image sensor. The imaging
device also may include a display, a voltage regulator electrically
connected to provide power to the display and to the camera system,
a memory circuit configured to store information representing an
actuator control value that corresponds to a low power focus
position, the low power focus position being the lens position
where the actuator uses the least amount of power, and a processor
coupled to the memory circuit, the actuator and the display, the
processor configured to control the actuator to move the lens to
the low power focus position when the camera system is placed in an
OFF state.
[0011] Such imaging devices can include other features, as
described herein. For example, the imaging device may be configured
such that in the OFF state, camera imaging functionality is
disabled. In the OFF state, the voltage regulator supplies power to
the camera system and to the display, and camera imaging
functionality is disabled. In some embodiments, in the OFF state,
the voltage regulator supplies power to the actuator and to the
display, and the image sensor functionality is in an OFF state such
that no image data is generated by the image sensor. In some
embodiments, the voltage regulator is a low-dropout voltage
regulator. In some embodiments, the low power focus position
includes a predetermined value. In some embodiments, the low power
focus position stored in the memory circuit is selected based on a
type of camera. In some embodiments, the low power focus position
is selected based on a type of actuator. In some embodiments, the
imaging device further includes actuator control information stored
in the memory circuit and used by the processor to control the
actuator to move the lens to the low power focus position. In some
embodiments, the memory circuit comprises two or more memory
components. In some embodiments, the lens is positioned on one side
of the image sensor, an optical axis of the lens is aligned
perpendicular with the image sensor, and the actuator operates to
move the lens in a direction substantially perpendicular to the
imaging plane, towards and away from the image sensor.
[0012] According to aspects of the present disclosure, a method of
operating a mobile imaging device may include supplying power from
a voltage regulator to a display of an imaging device and supplying
power from a voltage regulator to a camera system of the imaging
device. The camera system may include an image sensor having
sensing elements arranged in an imaging plane, a lens having at
least one optical element, the lens and image sensor arranged in an
optical path configured to propagate light through the lens and to
the image sensor, and an actuator coupled to the lens, the actuator
operative to move the lens to a plurality of focus positions each
being a different distance from the image sensor. The method
further comprises receiving a control signal indicating to place
the camera system in an OFF state, retrieving from a memory circuit
an actuator control value that corresponds to a low power focus
position, the low power focus position being the lens position
where the actuator uses the least amount of power, and controlling,
with a processor, the actuator to move the lens to the low power
focus position, wherein the voltage regulator supplies power to the
display and to the camera system when the camera system is in the
OFF state.
[0013] Such methods can include other features, as described
herein. For example, in some embodiments, when in the OFF state,
camera imaging functionality is disabled. In some embodiments, in
the OFF state, the voltage regulator supplies power to the camera
system and to the display, and camera imaging functionality is
disabled. In some embodiments, in the OFF state, the voltage
regulator supplies power to the actuator and to the display, and the
image sensor functionality is in an OFF state such that no image
data is generated by the image sensor. In some embodiments, the
voltage regulator is a low-dropout voltage regulator. In some
embodiments, the low power focus position includes a predetermined
value. In some embodiments, the low power focus position stored in
the memory circuit is selected based on a type of camera. In some
embodiments, the low power focus position is selected based on a
type of actuator. In some embodiments, the method uses actuator
control information stored in the memory circuit, which is used by
the processor to control the actuator to move the lens to the low
power focus position. In some embodiments, the memory circuit
comprises two or more memory components.
[0014] According to aspects of the present disclosure, a
non-transitory computer readable medium may include instructions
that when executed cause a processor to perform a method for
reducing defocus events occurring during autofocus search
operations, the method including supplying power from a voltage
regulator to a display of an imaging device, supplying power from a
voltage regulator to a camera system of the imaging device. The
camera system may include an image sensor having sensing elements
arranged in an imaging plane, a lens having at least one optical
element, the lens and image sensor arranged in an optical path
configured to propagate light through the lens and to the image
sensor, and an actuator coupled to the lens, the actuator operative
to move the lens to a plurality of focus positions each being a
different distance from the image sensor. The method may further
include receiving a control signal indicating to place the camera
system in an OFF state, retrieving from a memory circuit an
actuator control value that corresponds to a low power focus
position, the low power focus position being the lens position
where the actuator uses the least amount of power, and controlling,
with a processor, the actuator to move the lens to the low power
focus position, wherein the voltage regulator supplies power to the
display and to the camera system when the camera system is in the
OFF state.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The aforementioned features and advantages of the
disclosure, as well as additional features and advantages thereof,
will be more clearly understandable after reading detailed
descriptions of embodiments of the disclosure in conjunction with
the non-limiting and non-exhaustive aspects of following drawings.
The drawings are shown for illustration purposes and they are not
drawn to scale.
[0016] FIG. 1 illustrates an exemplary environment for power
optimization in visible light communication positioning according
to aspects of the present disclosure.
[0017] FIG. 2 illustrates another exemplary environment for power
optimization in visible light communication positioning according
to aspects of the present disclosure.
[0018] FIG. 3A illustrates an exemplary block diagram of a mobile
device for implementing power optimization in visible light
communication positioning according to aspects of the present
disclosure.
[0019] FIG. 3B illustrates an exemplary block diagram of a
controller of the mobile device according to aspects of the present
disclosure.
[0020] FIG. 4A illustrates a camera control engine according to
aspects of the present disclosure.
[0021] FIG. 4B illustrates exemplary information that may be
stored in memory and used by an actuator to move a lens to a
position according to aspects of the present disclosure.
[0022] FIG. 5A illustrates a block diagram of an exemplary camera
control mechanism according to aspects of the present
disclosure.
[0023] FIG. 5B illustrates different states of a power supply, a
display, and a camera system, and a corresponding lens position of
a lens of the camera system according to aspects of the present
disclosure.
[0024] FIG. 5C illustrates an exemplary implementation of camera
system control according to aspects of the present disclosure.
[0025] FIG. 6 illustrates an exemplary implementation of power
optimization in visible light communication positioning according
to aspects of the present disclosure.
[0026] FIG. 7A illustrates an exemplary implementation of
determining a position of the mobile device in a reduced power mode
according to aspects of the present disclosure.
[0027] FIG. 7B illustrates various exemplary implementations of
controlling a mobile device to operate in a reduced power mode
according to aspects of the present disclosure.
[0028] FIG. 8 illustrates an exemplary block diagram of a mobile
device for implementing power optimization in visible light
communication positioning according to aspects of the present
disclosure.
[0029] FIG. 9 illustrates an exemplary implementation of
determining position of a mobile device according to aspects of the
present disclosure.
DESCRIPTION OF EMBODIMENTS
[0030] Embodiments of systems and methods for power optimization in
visible light communication positioning are disclosed. The
following descriptions are presented to enable any person skilled
in the art to make and use the disclosure. Descriptions of specific
embodiments and applications are provided only as examples. Various
modifications and combinations of the examples described herein
will be readily apparent to those skilled in the art, and the
general principles defined herein may be applied to other examples
and applications without departing from the scope of the
disclosure. Thus, the present disclosure is not intended to be
limited to the examples described and shown, but is to be accorded
the scope consistent with the principles and features disclosed
herein. The word "exemplary" or "example" is used herein to mean
"serving as an example, instance, or illustration." Any aspect or
embodiment described herein as "exemplary" or as an "example" is
not necessarily to be construed as preferred or advantageous over
other aspects or embodiments.
[0031] According to aspects of the present disclosure, visible
light communication is a method of communication using modulation
of a light intensity emitted by a light fixture, such as a light
emitting diode (LED) luminary device. Visible light is light having
a wavelength in a range that is visible to the human eye. The
wavelength of the visible light is in the range of 380 to 780 nm.
Since humans cannot perceive on-off cycles of an LED luminary device
above a certain number of cycles per second, for example 150 Hz,
LEDs may use Pulse Width Modulation (PWM) to increase the lifespan
thereof and save energy.
[0032] FIG. 1 illustrates an exemplary environment for power
optimization in visible light communication positioning according
to aspects of the present disclosure. As shown in FIG. 1, in the
exemplary environment 180, a mobile device 182 may observe one or
more light fixtures, such as LEDs 184a, 184b, 184c, 184d, 184e,
184f, in the field of view (FOV) of camera 186 of the mobile
device 182. According to aspects of the present disclosure, the
field of view is that part of the world that is visible through the
lens of the camera at a particular position and orientation in
space. Objects outside the FOV when an image is captured are not
recorded in the camera. The term FOV can sometimes be used to refer
to a physical characteristic of a lens system, and when used in
this way, the FOV may refer to the angular size of the view cone,
which may be alternatively referred to as an angle of view. As used
herein, however, the FOV refers to that which is visible in a
camera system given the lens stack (with its optical
characteristics, including the angle of view) at a particular
location in a particular orientation.
[0033] According to aspects of the present disclosure, light
fixtures, such as LEDs 184a through 184f, may broadcast positioning
signals by modulating their light output level over time in the
visible light communication mode. LED light output can be modulated
at relatively high frequencies. Using modulation frequencies in the
kHz range ensures that VLC signals will not cause any light flicker
that could be perceived by the human eye, while at the same
time allowing for sufficiently high data rates for positioning. The
VLC signals can be designed in such a way that the energy
efficiency of the light fixture is not compromised, and this is
achieved, for example, by using binary modulation. This type of
modulation can be produced by highly efficient boost converters
that are an integral component of PWM-dimmable LED drivers. In
addition to being efficient and compatible with existing driver
hardware, the VLC signal can also conform to the standard methods
of dimming based on the PWM and variation of the maximum
current.
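As an illustration of the binary modulation idea, the following Python sketch maps a fixture identifier to an on/off chip sequence at a kHz-range rate. The preamble, identifier width, and chip rate are assumptions chosen for the example; the disclosure does not prescribe a specific framing:

    # Illustrative only: one way a fixture identifier could be mapped to a
    # binary on/off (PWM-style) chip pattern at a kHz-range rate. The preamble,
    # identifier width, and chip rate are assumptions, not the disclosed format.

    def identifier_to_chips(fixture_id, id_bits=16, preamble=(1, 1, 1, 0)):
        """Return a list of 0/1 light-output levels: preamble followed by ID bits."""
        bits = [(fixture_id >> i) & 1 for i in reversed(range(id_bits))]
        return list(preamble) + bits

    chips = identifier_to_chips(0x2A5F)
    chip_rate_hz = 2000                 # kHz-range: fast enough to avoid visible flicker
    chip_period_s = 1.0 / chip_rate_hz
    print(chips, chip_period_s)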
[0034] The VLC signal transmitted by each light fixture conveys a
unique identification which differentiates that light fixture from
other light fixtures in the venue. The assistance data that
contains a map of locations of the light fixtures and their
identifiers may be created at the time the system is installed, and
may be stored on a remote server. To determine its position, a
mobile device may download the assistance data and may reference it
every time the mobile device decodes light fixture identification
from a VLC signal.
[0035] The identification may be either stored internally in the
driver or may be supplied from an external system, such as a
Bluetooth wireless mesh network. A light fixture may periodically
switch to transmitting a new identification in order to prevent
unauthorized use of the positioning infrastructure.
[0036] FIG. 2 illustrates another exemplary environment for power
optimization in visible light communication positioning according
to aspects of the present disclosure. As shown in FIG. 2, in the
exemplary environment 280, mobile device 182 may observe one or
more light fixtures, such as LEDs 280a, 280b, 280c, 280d, 280e,
280f, in the field of view of camera 186 of the mobile device 182.
Numerals 282 and 284 indicate angle of arrival of light at an image
sensor of the mobile device 182 from LEDs 280d and 280e,
respectively. In this example, the mobile device 182 may be in
motion, as indicated by arrow 286 with velocity V.
[0037] According to aspects of the present disclosure, the mobile
device 182 may use a light fixture's identification as well as
angle of arrival of light from the light fixture to estimate its
position. Based on the mobile device's orientation and motion, the
light fixtures visible at any position may remain substantially the
same for a period of time. Thus, when the same light fixtures are
available in the FOV, mobile device 182 can be configured to
estimate its position by using angle of arrival of light and
previously decoded identifiers of the light fixtures in the
environment 280. Since angle of arrival estimation may take less
time than full identity detection, image sensors can be configured
to be enabled or disabled periodically. The disabling of the image
sensors and the intermittent decoding of VLC messages can result in
significant power savings over conventional solutions. In some
implementations, to determine the FOV of the image sensor, the
controller may first identify a subset of the pixel array where the
source of the VLC signal is visible. Note that the FOV may have a
larger signal to noise ratio than the rest of the image. The
controller may identify this region by identifying pixels that are
brighter than the others. For example, it may set a threshold T,
such that if the pixel intensity in luminance value is greater than
T, the pixel may then be considered to be part of the FOV. The
threshold T may, for instance, be the 50% luminance value of the
image.
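A minimal Python sketch of this thresholding step is shown below; it assumes the threshold T is taken as 50% of the frame's peak luminance, and the frame size and helper names are illustrative:

    import numpy as np

    # Sketch of the thresholding step: mark pixels brighter than a threshold T
    # (here taken as 50% of the frame's peak luminance) as part of the FOV of a
    # VLC source. The frame and threshold choice are illustrative assumptions.

    def fov_mask(luma_frame):
        """Return a boolean mask of pixels considered part of the VLC source's FOV."""
        t = 0.5 * float(luma_frame.max())   # threshold T = 50% luminance value
        return luma_frame > t

    frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
    mask = fov_mask(frame)
    rows, cols = np.nonzero(mask)           # pixel region where the source appears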
[0038] According to aspects of the present disclosure, a
controller/processor of the mobile device 182 can be configured to
decode the light fixture identifiers for neighboring light fixtures
or visible light fixtures in the environment. In one exemplary
approach, the controller can be configured to decode light fixtures
in the FOV of the mobile device. Using the decoded identifiers of
the light fixtures and assistance data of the environment 280, the
controller can estimate the position of the mobile device and
determine the relative position of neighboring light fixtures. As
the position/orientation of the mobile device changes, the
controller may continue to use light fixture identifiers as well
as angle of arrival to estimate the position of the mobile
device.
[0039] In some embodiments, in situations when the motion of a
mobile device is low, or the mobile device is moving slowly, or
stationary, if the same light fixtures are visible, the mobile
device may enter a reduced duty cycle state, where the image
sensors may be switched off for a period of time and be turned on
only for a programmable duration in a reduced duty cycle state.
According to aspects of the present disclosure, in the reduced duty
cycle state, the controller may skip decoding the light fixture
identification and may measure the light pixels from the particular
light fixture. In some implementations, light fixtures in the FOV
of the image sensors can be detected based on just one or two image
sensor full frames. Based on the last FOV information for a
particular light fixture and mobile device orientation and motion
information, the controller may predict the upcoming FOV of the
particular light fixture in the environment. In addition, the
controller may be configured to compute the likelihood between
predicted FOV and observed/measured FOV. If the likelihood is high,
then the controller may determine that the previously decoded
identification of the light fixtures can be used for
positioning.
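One way to quantify the likelihood between the predicted and measured FOV is an intersection-over-union comparison of their bounding regions, as in the Python sketch below. The metric and the reuse threshold are assumptions made for illustration; the disclosure does not mandate a particular similarity measure:

    # Sketch of the likelihood check between predicted and measured FOV regions,
    # using intersection-over-union of bounding boxes as a simple similarity
    # measure. The metric and reuse threshold are illustrative assumptions.

    def region_iou(pred, meas):
        """pred/meas: (row0, col0, row1, col1) bounding boxes of a fixture's FOV."""
        r0, c0 = max(pred[0], meas[0]), max(pred[1], meas[1])
        r1, c1 = min(pred[2], meas[2]), min(pred[3], meas[3])
        inter = max(0, r1 - r0) * max(0, c1 - c0)
        area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
        union = area(pred) + area(meas) - inter
        return inter / union if union > 0 else 0.0

    REUSE_THRESHOLD = 0.7   # hypothetical "high likelihood" cutoff

    def can_reuse_identifier(predicted_fov, measured_fov):
        # If the measured FOV closely matches the prediction, skip re-decoding and
        # keep using the previously decoded fixture identifier for positioning.
        return region_iou(predicted_fov, measured_fov) >= REUSE_THRESHOLD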
[0040] In some embodiments, the controller may also examine the
similarities between two image frames to determine the validity of
previously decoded identifiers of the light fixtures. The position
of the mobile device may be calculated using measurements of the
angle of arrival of the light fixtures' signals. Based on this angle computation and the light fixtures' position information, the distance
between the mobile device and the light fixtures in the FOV of the
mobile device can be computed. Using the triangulation method, the
mobile device's precise position may then be calculated based on
the distance between the mobile device and the light fixtures in
the FOV of the mobile device.
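The following Python sketch illustrates one way such a computation could proceed, assuming the fixtures are mounted at a known height above the camera and that each angle of arrival is measured from the vertical; the least-squares trilateration and all numeric values are illustrative assumptions rather than the disclosed method:

    import numpy as np

    # Sketch of angle-of-arrival based positioning: convert each fixture's angle
    # of arrival (measured from the vertical) into a horizontal distance using a
    # known mounting height, then solve a least-squares trilateration for (x, y).
    # The geometry, height, and angles are illustrative assumptions.

    def horizontal_distance(height_m, aoa_from_vertical_rad):
        return height_m * np.tan(aoa_from_vertical_rad)

    def trilaterate_xy(fixture_xy, distances):
        """Least-squares (x, y) from three or more fixtures and horizontal distances."""
        (x0, y0), d0 = fixture_xy[0], distances[0]
        a_rows, b_rows = [], []
        for (xi, yi), di in zip(fixture_xy[1:], distances[1:]):
            a_rows.append([2.0 * (x0 - xi), 2.0 * (y0 - yi)])
            b_rows.append(di**2 - d0**2 + x0**2 - xi**2 + y0**2 - yi**2)
        xy, *_ = np.linalg.lstsq(np.array(a_rows), np.array(b_rows), rcond=None)
        return xy

    fixtures = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])   # fixture (x, y) in meters
    aoa = np.radians([20.0, 35.0, 30.0])                        # angles of arrival
    dists = horizontal_distance(2.5, aoa)                       # fixtures 2.5 m above camera
    print(trilaterate_xy(fixtures, dists))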
[0041] FIG. 3A illustrates an exemplary block diagram of a mobile
device for implementing power optimization in visible light
communication positioning according to aspects of the present
disclosure. As shown in FIG. 3A, mobile device 300 may be
implemented with a computer bus architecture, represented generally
by the bus 314. The bus 314 may include any number of
interconnecting buses and bridges depending on the specific
application of the mobile device 300 and the overall design
constraints. The bus 314 links together various circuits including
one or more controller/processors and/or hardware components,
represented by the controller/processor 304, VLC signal decoder
305, display 309, and the computer-readable medium/memory 306. The
bus 314 may also link various other circuits such as timing
sources, peripherals, voltage regulators, and power management
circuits, which are well known in the art, and therefore, will not
be further described.
[0042] The mobile device 300 may include a transceiver 310, motion
sensors 311, and a camera (also referred to as image sensor) 307.
The transceiver 310 is coupled to one or more antennas 320. The
transceiver 310 provides a means for communicating with various
other apparatus over a transmission medium. The transceiver 310
receives a signal from the one or more antennas 320, extracts
information from the received signal, and provides the extracted
information to the mobile device 300. In addition, the transceiver
310 receives information from the mobile device 300, and based on
the received information, generates a signal to be applied to the
one or more antennas 320.
[0043] The camera 307 provides a means for capturing VLC signal
frames. The camera 307 captures a VLC signal frame from a light
source, extracts information from the captured VLC signal frame,
and provides the extracted information to the mobile device
300.
[0044] The motion sensors 311 may include, but are not limited to, an accelerometer, a gyroscope, and a magnetometer configured to detect
motions and rotations of the mobile device. The accelerometer may
perform better in detecting linear movements, the gyroscope may
perform better in detecting rotations, and the magnetometer may
perform better in detecting orientations of the mobile device. A
combination of two or more such sensors may be used to detect
movement, rotation, and orientation of the mobile device according
to aspects of the present disclosure.
[0045] According to embodiments of the present disclosure, an
accelerometer is a device that measures the acceleration of the
mobile device. It measures the acceleration associated with the
weight experienced by a test mass that resides in the frame of
reference of the accelerometer. For example, an accelerometer
measures a value even if it is stationary, because masses have
weights, even though there is no change of velocity. The
accelerometer measures weight per unit of mass, a quantity also
known as gravitational force or g-force. In other words, by
measuring weight, an accelerometer measures the acceleration of the
free-fall reference frame (inertial reference frame) relative to
itself. In one approach, a multi-axis accelerometer can be used to
detect magnitude and direction of the proper acceleration (or
g-force), as a vector quantity. In addition, the multi-axis
accelerometer can be used to sense orientation as the direction of
weight changes, coordinate acceleration as it produces g-force or a
change in g-force, vibration, and shock. In another approach, a
micro-machined accelerometer can be used to detect position,
movement, and orientation of the mobile device.
[0046] According to embodiments of the present disclosure, a
gyroscope is used to measure rotation and orientation of the mobile
device, based on the principles of conservation of angular
momentum. The accelerometer or magnetometer can be used to
establish an initial reference for the gyroscope. After the initial
reference is established, the gyroscope can be more accurate than
the accelerometer or magnetometer in detecting rotation of the
mobile device because it is less impacted by vibrations, or by the
electromagnetic fields generated by electrical appliances around the
mobile device. A mechanical gyroscope can be a spinning wheel or
disk whose axle is free to take any orientation. This orientation
changes much less in response to a given external torque than it
would without the large angular momentum associated with the
gyroscope's high rate of spin. Since external torque is minimized
by mounting the device in gimbals, its orientation remains nearly
fixed, regardless of any motion of the platform on which it is
mounted. In other approaches, gyroscopes based on other operating
principles may also be used, such as the electronic,
microchip-packaged Micro-electromechanical systems (MEMS) gyroscope
devices, solid state ring lasers, fiber optic gyroscopes and
quantum gyroscopes.
[0047] According to embodiments of the present disclosure, a
magnetometer can be used to measure orientations by detecting the
strength or direction of magnetic fields around the mobile device.
Various types of magnetometers may be used. For example, a scalar
magnetometer measures the total strength of the magnetic field it
is subjected to, and a vector magnetometer measures the component
of the magnetic field in a particular direction, relative to the
spatial orientation of the mobile device. In another approach, a
solid-state Hall-effect magnetometer can be used. The Hall-effect
magnetometer produces a voltage proportional to the applied
magnetic field, and it can be configured to sense polarity.
[0048] The mobile device 300 includes a controller (also referred
to as a processor) 304 coupled to a computer-readable medium/memory 306, which can include, in some implementations, a non-transitory computer readable medium storing instructions for execution by one or more processors, such as controller/processor 304. The
controller/processor 304 is responsible for general processing,
including the execution of software stored on the computer-readable
medium/memory 306. The software, when executed by the controller
304, causes the mobile device 300 to perform the various functions
described in FIGS. 5B-5C, FIG. 6, FIGS. 7A-7B, and FIG. 9 for any
particular apparatus. The computer-readable medium/memory 306 may
also be used for storing data that is manipulated by the controller
304 when executing software. The mobile device 300 may further
include at least one of VLC signal decoder 305, and display 309.
Some components may be implemented as software modules running in
the controller 304, resident/stored in the computer readable
medium/memory 306, or may be implemented as one or more hardware
components coupled to the controller 304, or some combination
thereof. The components of mobile device 300 can be configured to
implement the methods as described in association with FIGS. 5B-5C,
FIG. 6, FIGS. 7A-7B, and FIG. 9.
[0049] According to aspects of the present disclosure, the camera
or image sensor in the mobile device can be configured to extract a
time domain VLC signal from a sequence of image frames that capture
a given light fixture. The received VLC signal can be demodulated
and decoded by the mobile device to produce a unique identification
for a light fixture. Furthermore, the camera or image sensor can in
parallel extract VLC signals from images containing multiple light
fixtures that are visible in the field of view of the image sensor.
In this manner, the mobile device may use multiple independent
sources of information to confirm and refine its position.
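A simplified Python sketch of this extraction step is shown below; it averages luminance inside a fixture's FOV region frame by frame and slices the series into binary symbols. Practical receivers (for example, ones that exploit rolling-shutter readout) are typically more involved, and the helper names and synthetic frames here are assumptions:

    import numpy as np

    # Simplified sketch of extracting a time-domain VLC signal from a frame
    # sequence: average luminance inside a fixture's FOV region per frame, then
    # slice the series into binary symbols.

    def extract_symbol_stream(frames, fov_mask):
        """frames: iterable of 2-D luminance arrays; fov_mask: boolean pixel mask."""
        levels = np.array([float(f[fov_mask].mean()) for f in frames])
        threshold = 0.5 * (levels.min() + levels.max())   # midpoint slicer
        return (levels > threshold).astype(int)

    frames = [np.full((8, 8), v, dtype=float) for v in (200, 40, 210, 30, 220, 35)]
    mask = np.ones((8, 8), dtype=bool)
    print(extract_symbol_stream(frames, mask))            # -> [1 0 1 0 1 0]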
[0050] Each pixel in an image sensor accumulates light energy
coming from a narrow range of physical directions, so by performing
pixel-level analysis the mobile device can precisely determine the
angle of arrival of light from one or more light fixtures. This
angle of arrival information enables the mobile device to compute
its position relative to a light fixture to within a few
centimeters.
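As a rough illustration of this pixel-level analysis, the Python sketch below converts a pixel location to angles of arrival under a simple pinhole-camera assumption, with the principal point at the image center and a known focal length in pixels; the parameters are illustrative and not taken from the disclosure:

    import math

    # Rough sketch of pixel-level angle-of-arrival estimation under a pinhole
    # camera model: the principal point is assumed to be the image center and
    # the focal length (in pixels) is assumed known. Values are illustrative.

    def pixel_to_angles(px, py, cx, cy, f_pixels):
        """Return (angle_x, angle_y) in radians for a bright-spot centroid (px, py)."""
        angle_x = math.atan2(px - cx, f_pixels)
        angle_y = math.atan2(py - cy, f_pixels)
        return angle_x, angle_y

    # Centroid of a fixture's bright region in a 640x480 frame, focal length ~500 px.
    ax, ay = pixel_to_angles(400.0, 120.0, 320.0, 240.0, 500.0)
    print(math.degrees(ax), math.degrees(ay))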
[0051] By combining the position relative to a light fixture with
the information about the location of that light fixture as
determined based on the decoded identification coming from the
positioning signal, the mobile device can determine its global
position in the venue within accuracy of centimeters.
[0052] FIG. 3B illustrates an exemplary block diagram of a
controller according to aspects of the present disclosure. In the
example of FIG. 3B, a controller 304 may include a camera control
engine 322, a video frontend engine (VFE) 324, an image processing
engine 326, and one or more processors (not shown). The camera
control engine 322, the VFE 324, and the image processing engine
326 may be configured to perform various power saving modes during
VLC positioning, which are further described in FIG. 4A and FIG.
7B.
[0053] According to aspects of the present disclosure, the actuator
movement is controlled via a digital to analog converter (DAC)
register using an inter-IC (I2C) controller. Since infinity/macro
DAC value may not be the same across all sensors due to module
manufacturing errors, the infinity and macro DAC values are stored
in EEPROM calibration data for each sensor. Infinity is generally the position where the sensor consumes less power than at macro. Because each sensor's infinity position is different due to module manufacturing errors, different power is consumed by different sensors at the infinity position.
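The sketch below illustrates this kind of calibration-aware DAC write in Python. The I2C address, register number, EEPROM field name, and smbus-style bus object are hypothetical assumptions; actual VCM drivers and per-module calibration layouts vary by vendor:

    # Sketch of a calibration-aware DAC write. The I2C address, register number,
    # EEPROM field name, and smbus-style bus object are hypothetical; real VCM
    # drivers and per-module calibration layouts vary by vendor.

    VCM_I2C_ADDR = 0x0C        # hypothetical voice-coil-motor driver address
    VCM_DAC_REG = 0x03         # hypothetical 10-bit lens-position DAC register

    def read_calibrated_infinity_dac(eeprom):
        # Per-module calibration: infinity/macro DAC codes differ across sensors.
        return eeprom["af_infinity_dac"]

    def move_lens_to_dac(i2c_bus, dac_code):
        dac_code &= 0x03FF                  # clamp to 10 bits
        payload = [(dac_code >> 8) & 0xFF, dac_code & 0xFF]
        i2c_bus.write_i2c_block_data(VCM_I2C_ADDR, VCM_DAC_REG, payload)

    def park_lens_near_infinity(i2c_bus, eeprom):
        # Infinity generally draws less actuator current than macro, so park there.
        move_lens_to_dac(i2c_bus, read_calibrated_infinity_dac(eeprom))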
[0054] To minimize power consumption, the system can be configured
to place the actuator in a lowest power mode (for example close to
or at infinity) when the LDO is shared with others and the camera
is turned off even though the display is still on (such that the
LDO shared by the actuator and display is still activated).
[0055] The process of putting the actuator into the low power mode can be determined with a module integrator, which can be configured to find the low power state of the actuator, which may be close to the EEPROM-saved value for infinity. In other words, when the
camera is turned off but the LDO is still activated (in situations
when it is shared by the display), the actuator places the lens in
a position that is predetermined to be the lowest power consumption
position for the lens; information about this position is stored in memory and may be based on the specific configuration of the sensor/actuator. When this process has been implemented, a significant power consumption improvement of 2 to 4 percent may be observed (i.e., 2 to 4 percent less power is consumed for the same set of operations).
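The corresponding power-down policy can be sketched in Python as follows; the function and object names are hypothetical placeholders that only illustrate the decision described above:

    # Sketch of the power-down policy described above: when the camera turns off
    # but its shared LDO must stay on for the display, park the lens at the
    # stored low-power position. Function and object names are hypothetical.

    def on_camera_state_change(camera_on, display_on, ldo_shared_with_display,
                               actuator, stored_low_power_dac):
        if not camera_on and display_on and ldo_shared_with_display:
            # The LDO stays powered for the display, so the actuator still draws
            # current; minimize it by moving to the calibrated low-power position.
            actuator.move_to_dac(stored_low_power_dac)
        elif not camera_on and not display_on:
            # Both consumers are off; the shared regulator itself can be disabled.
            actuator.power_down()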
[0056] FIG. 4A illustrates a camera control engine according to
aspects of the present disclosure. As shown in FIG. 4A, the components of a sensor module (or camera system) 124 may be incorporated in a mobile device 100. The sensor module 124
includes an image sensor 107, a lens 112, and an auto focus (AF)
component or actuator 118 configured to move the lens in a
direction perpendicular to the image sensor 107 to one of a
plurality of lens positions in a range of lens positions, that are
represented by lens positions A, B, C, through N. For example, in
the illustrated embodiment the actuator 118 is operable to move the
lens 112 towards the scene (away from the image sensor 107) or away
from the scene (towards the image sensor 107). Moving the lens to
one of the lens positions requires the actuator to consume power to
keep the lens at that position. In other words, as long as power is
being supplied to the actuator 118 and the lens is in any of the
plurality of lens positions, then power is being consumed by the
actuator 118. The camera system 124 also includes an aperture 123
through which light can enter the camera system 124 and propagate
through the lens 112 to the image sensor 107. The embodiment of a
camera system 124 illustrated in FIG. 4A also includes a controller
108 coupled to the actuator 118 that can receive signals and
control the actuator 118 to move the lens 112 to a lens
position.
[0057] In the example shown in FIG. 4A, the mobile device 100 also
includes a processor 120 and memory 126. As discussed below in
reference to FIG. 5A, the processor 120 may be an image processor
or can be configured to be a plurality of processors. Memory 126
provides storage for the processor 120, which can be a solid state
memory such as a FLASH memory, RAM, ROM, and/or EEPROM. The memory
126 is configured to store information used by the processor 120 to
operate and control the mobile device 100. Such information may
include information that is used by the processor 120 to control
the actuator to move the lens to a low power focus position when
the camera is deactivated or in a standby mode, to save power.
[0058] The mobile device 100 may also include a power supply 128
and a display 125. The power supply 128 may be coupled to the
camera system 124 and supplies power to the actuator 118. The power
supply 128 may also be coupled to the display 125 and provides
power to the display 125. The display may be any type of an
electronic display having, for example, a plurality of LCD, LED, or
OLED display elements. The power supply may be any type of a power
supply component that supplies power to both the display 125 and
the camera system 124, and may be controlled such that if one of
either the display 125 or the camera system 124 is activated (needs
power), power is also available and is supplied to the other
component. In some embodiments the power supply can be a voltage
regulator, for example a low dropout voltage regulator (LDO).
[0059] FIG. 4B illustrates exemplary information that may be
stored in memory and used by an actuator to move a lens to a
position according to aspects of the present disclosure. As shown
in FIG. 4B, a table 400 includes an example of information that may
be stored in memory, and that may be retrieved and used by an
actuator to move a lens to a position that uses a minimum amount of
power, for example, when power is being supplied to the actuator of
a camera system but the camera system is not being used for
imaging. In various embodiments, information that can be used by a
processor to control an actuator to place the lens in a position of
low (or lowest) power usage is stored in memory of the imaging
device or the camera system itself. For example, in some
embodiments, the camera system 124 includes a controller 108 (FIG.
4A) that includes memory that can be used to store information
representative of a lens position for low power usage. That is, at
what distance from the image sensor 107 should the actuator 118
position the lens 112 such that the power usage by the actuator 118
(or the camera system 124) is low or at a minimum. In other
embodiments, the low power focus position information is stored in
working memory 126 or in a specific module (e.g., actuator control
module 240) of the imaging device 200 (FIG. 5A).
The table 400 illustrates one example of how information of
low power focus positions may be ordered (or related) for an
embodiment where a plurality of low power lens positions are
stored, each for a different set or type of camera components, in
an imaging device. In this example, Table 400 includes a first
column 465 that includes information representing different
components that may be incorporated into the camera system 124.
Table 400 also includes a second column 470 that includes
information representing a low power lens position that corresponds
to each of the component sets in the first column 465. For example,
if the camera system has component set 3, the lens position for
minimal power usage is LENS POSITION C. Accordingly, a processor
can be configured to retrieve the stored information of a low power
lens position corresponding to whatever camera component set is
included on the imaging device 200 from such a table of stored
information and use the retrieved information to place the lens in
the position that uses the least amount of power when the camera
is not being used but power is being supplied to the camera system
124 (for example, to the actuator 118). The table 400 illustrates
an example of how information contained therein can be ordered, and
an example of what type of information may be stored, according to
some embodiments. In various embodiments, information may be stored
in a lookup table, a list, or any type of relational arrangement
such that a particular low power focus position can be retrieved
from memory and used to control an actuator to move a lens to a
position that uses the least amount of power for that particular
actuator.
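A lookup of this kind might be sketched in Python as below; the component-set keys and lens-position labels mirror the illustrative entries described for table 400 and are not actual calibration data:

    # Sketch of the table 400 lookup: map a camera component set to its stored
    # low-power lens position. Keys and position labels mirror the illustrative
    # entries described above and are not actual calibration data.

    LOW_POWER_LENS_POSITION = {
        "component_set_1": "LENS_POSITION_A",
        "component_set_2": "LENS_POSITION_B",
        "component_set_3": "LENS_POSITION_C",   # e.g. component set 3 -> position C
    }

    def low_power_position_for(component_set, default="LENS_POSITION_A"):
        return LOW_POWER_LENS_POSITION.get(component_set, default)

    print(low_power_position_for("component_set_3"))   # LENS_POSITION_C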
[0061] FIG. 5A illustrates a block diagram of an exemplary camera
control mechanism according to aspects of the present disclosure.
In particular, FIG. 5A illustrates an example imaging device 200
(also referred to as mobile device) that can be, for example, a
cell phone, a camera, or another type of mobile image capture
device. The imaging device 200 is shown with certain components
which may be included in embodiments of the imaging device 200. The
illustrated imaging device 200 includes an image processor 120
coupled to a camera system 124. The image processor 120 is also
coupled to, and in communication with, a working memory 126, memory
230, and device processor 250, which in turn is in communication
with storage 210 and an electronic display 125.
[0062] The camera system 124 includes an image sensor 107, an
actuator 118, and a lens 112. The image sensor 107 includes sensing
elements arranged in an imaging plane. The lens 112 includes at
least one optical element 213 through which light propagates in a
path through the lens to the image sensor 107. Light propagates to
the lens 112 from a scene through aperture 123, which allows light
to pass through the housing of the imaging device 200 and enter the
camera system 124. Light passing through the aperture 123 is
refracted by the lens 112 as it propagates through the lens 112 to
the image sensor 107. In some embodiments, the actuator 118 is a
voice coil motor (VCM). Other embodiments can include other types
of actuators. The actuator 118 is coupled to the lens 112 and is
configured to move the lens 112 to a plurality of lens positions
each at a different distance from the image sensor 107. For
example, the image processor 120 can control the actuator 118 to
move the lens 112 to a desired position for focusing or for an
optical zoom operation, or to place the lens 112 in a determined
position when the camera system 124 is deactivated such that the
least amount (or a minimal amount) of power is consumed by the
actuator 118. The imaging device 200 also includes a display 125, a
device processor 250, and a power supply 128. The power supply 128
is coupled to, and supplies power to, the display 125 and/or the
camera system 124, including the actuator 118 via power line 217
for example. In some embodiments, the power supply 128 supplies
power to other components of imaging device 200 as well. Because
the power supply 128 supplies power to both the display 125 and the
camera system 124, powering down (turning off) the power supply 128
will affect the power received by both the camera system 124 (and
actuator 118) and the display 125.
[0063] Imaging device 200 may be a cell phone, digital camera,
tablet computer, personal digital assistant, or the like. Some
embodiments of imaging device 200 can be incorporated into a
vehicle-based imaging system, for example an unmanned aerial
vehicle. There are many portable computing devices in which a
reduced thickness stereoscopic imaging system such as is described
herein would provide advantages. Imaging device 200 may also be a
stationary computing device or any device in which a thin
stereoscopic imaging system would be advantageous. A plurality of
applications may be available on imaging device 200. These
applications may include traditional photographic and video
applications, panoramic image generation, stereoscopic imaging such
as three-dimensional images and/or three-dimensional video,
three-dimensional modeling, and three-dimensional object and/or
landscape mapping, to name a few.
[0064] The image processor 120 may be configured to perform various
processing operations on received image data. Examples of image
processing operations include cropping, scaling (e.g., to a
different resolution), image format conversion, image filtering
(e.g., spatial image filtering), lens artifact or defect
correction, stereoscopic matching, depth map generation, etc. In
some embodiments, the image processor 120 can be a chip in a
three-dimensional wafer stack including the image sensor 107 of the
camera system 124, for example a RICA processor. In such
embodiments the working memory 126 and memory 230 can be
incorporated as hardware or software of the image processor 120. In
some embodiments, image processor 120 may be a general purpose
processing unit or a processor specially designed for imaging
applications. Image processor 120 may, in some embodiments,
comprise a plurality of processors. Certain embodiments may have a
processor dedicated to each image captured and transmitted to the
image processor 120. In some embodiments, image processor 120 may
be one or more dedicated image signal processors (ISPs).
[0065] As shown, the image processor 120 is connected to a memory
230 (any "memory" described herein is also referred to herein as a
"memory circuit" indicating that the memory may be hardware or
media that is configured to store information) and a working memory
126. In the illustrated embodiment, the memory 230 stores capture
control module 235, actuator control module 240, and operating
system 245. These modules include instructions that configure the
image processor 120 to perform various image processing and device
management tasks. Working memory 126 may be used by image processor
120 to store a working set of processor instructions contained in
the modules of memory 230. Alternatively, working memory 126 may
also be used by image processor 120 to store dynamic data created
during the operation of imaging device 200. As discussed above, in
some embodiments the working memory 126 and memory 230 can be
incorporated as hardware or software of the image processor 120. In
some embodiments, the functionality of the image processor 120 and
the device processor 250 are combined to be performed by the same
processor.
[0066] As mentioned above, the image processor 120 is configured by
several modules stored in the memory 230. The capture control
module 235 may include instructions that configure the image
processor 120 to adjust the capture parameters (for example
exposure time, focus position, and the like) of the image sensor
107 and optionally the camera system 124. Capture control module
235 may further include instructions that control the overall image
capture functions of the imaging device 200. For example, capture
control module 235 may include instructions that call subroutines
to configure the image processor 120.
[0067] Actuator control module 240 may comprise instructions that
configure the image processor 120 to control the actuator 118. For
example, the actuator control module 240 may configure the image
processor 120 (or another processor) to determine if the camera
system 124 is ON or OFF, determine whether the display 125 is ON or
OFF, and provide certain control actions for the actuator 118
depending on the activation state of the display 125 and the camera
system 124. The actuator control module 240 may also configure the
image processor 120 (or another processor) to retrieve information
from memory 230 or working memory 126 and use the retrieved
information to control the actuator 118. For example, information
indicating a low power focus position of the lens 112 may be stored
in a memory 230 or 126 of the imaging device 200. The low power
focus position is a position of the lens 112 such that, when the
actuator 118 moves the lens to that position, the actuator 118 consumes the
lowest amount of power. This position is not necessarily at an
infinity position of the lens. The particular low power focus
position can depend on the components of the camera system 124. For
example, a particular low power focus position may depend on the
type of actuator being used in the camera system 124, and even the
particular make and/or model of the actuator 118.
[0068] If the image processor 120 determines that the display 125
is in an active state and the camera system 124 is in an inactive
state (for example, an OFF state), the image processor 120 can
operate to retrieve the low power focus position information from
memory 230 or 126, and control the actuator 118 to move the lens to
a position that corresponds to the low power lens position. An
example of various "states" of the imaging device 200
is illustrated in FIG. 5B. In some embodiments, the information of
a low power focus position corresponding to the actuator 118 is
loaded in a memory component of the imaging device 200 when it is
being manufactured and configured (for example, pre-sale) once the
components incorporated into the imaging device 200 are known. For
example, working memory 126, the actuator control module 240, or
storage 210 can store information corresponding to, or
representative of, one or more low power focus positions. An
example of such information is represented in the table illustrated
in FIG. 4B. In some embodiments, information corresponding to the
low power focus position (that is, indicative of the low power
focus position) can be downloaded to a memory of the imaging device
200 while the device is operational, for example, during a software
update or upon initial configuration of the imaging device 200.
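A minimal Python sketch of the control flow described above is given below; the function, memory key, and actuator call are assumed placeholders for device driver interfaces, not an actual camera API.

# Minimal sketch (not the disclosed implementation): when the display is ON
# and the camera system is OFF but still powered, look up the low power focus
# position and command the actuator to park the lens there.
def park_lens_if_idle(display_on: bool, camera_on: bool,
                      memory: dict, actuator_move) -> bool:
    """Move the lens to the stored low power focus position when appropriate.

    Returns True if the actuator was commanded, False otherwise.
    """
    if display_on and not camera_on:
        low_power_position = memory["low_power_focus_position"]  # assumed key
        actuator_move(low_power_position)  # e.g., drive a VCM to this position
        return True
    return False

# Example use with a stand-in actuator command.
memory = {"low_power_focus_position": 0}  # assumed calibration value
park_lens_if_idle(display_on=True, camera_on=False, memory=memory,
                  actuator_move=lambda pos: print(f"moving lens to {pos}"))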
[0069] Operating system module 245 configures the image processor
120 to manage the working memory 126 and the processing resources
of imaging device 200. For example, operating system module 245 may
include device drivers to manage hardware resources such as the
image sensor 107, the power supply 128, and storage 210. Therefore,
in some embodiments, instructions contained in the image processing
modules discussed above may not interact with these hardware
resources directly, but instead interact through standard
subroutines or application programming interfaces (APIs) located in operating
system component 245. Instructions within operating system 245 may
then interact directly with these hardware components. Operating
system module 245 may further configure the image processor 120 to
share information with device processor 250.
[0070] Device processor 250 may be configured to control the
display 125 to display captured images, or a preview of a captured
image, to a user. The display 125 may also be configured to provide a view
finder displaying a preview image for use prior to capturing an
image, or may be configured to display a captured image stored in
memory or recently captured by the user. The display 125 may comprise
an LCD or LED screen and may implement touch sensitive
technologies, for example providing a user interface for
controlling device functions.
[0071] Device processor 250 or image processor 120 may write data
to storage module 210, for example data representing captured
images and/or depth information. While storage module 210 is
represented graphically as a traditional disk device, those with
skill in the art would understand that the storage module 210 may
be configured as any storage media device. For example, the storage
module 210 may include a disk drive, such as a hard disk drive,
optical disk drive or magneto-optical disk drive, or a solid state
memory such as a FLASH memory, RAM, ROM, and/or EEPROM. The storage
module 210 can also include multiple memory units, and any one of
the memory units may be configured to be within the imaging device
200, or may be external to the imaging device 200. For example, the
storage module 210 may include a ROM memory containing system
program instructions stored within the imaging device 200. The
storage module 210 may also include memory cards or high speed
memories configured to store captured images which may be removable
from the imaging device 200. Though not illustrated, imaging device
200 may include one or more ports or devices for establishing wired
or wireless communications with a network or with another
device.
[0072] Although FIG. 5A depicts an imaging device 200 having
separate components including a processor, imaging sensor, and
memory, one skilled in the art would recognize that these separate
components may be combined in a variety of ways to achieve
particular design objectives. For example, in some embodiments the
memory components may be combined with processor components to save
cost and improve performance. Additionally, although FIG. 5A
illustrates two memory components, including memory component 230
comprising several modules and a separate memory 126 comprising a
working memory, one with skill in the art would recognize several
embodiments utilizing different memory architectures. For example,
a design may utilize ROM or static RAM memory for the storage of
processor instructions implementing the modules contained in memory
230. The processor instructions may be loaded into RAM to
facilitate execution by the image processor 120. For example,
working memory 126 may comprise RAM memory, with instructions
loaded into working memory 126 before execution by the image
processor 120.
[0073] FIG. 5B illustrates different states of a power supply, a
display, and a camera system, and a corresponding lens position of
a lens of the camera system according to aspects of the present
disclosure. In this example, FIG. 5B shows a state diagram
illustrating three different states of a power supply, a display,
and a camera system of an imaging device, and also illustrates the
corresponding lens position of a lens of the camera system. The
components referred to in reference to FIG. 5B can be, for example,
the components illustrated in FIG. 5A. In a first state 405, a
display is activated, a camera system is activated and a power
supply is providing power to both the display and to the camera
system. In the first state 405, the lens is positioned by the
actuator at a lens position needed to support an imaging operation,
for example, a focusing operation or performing an optical zoom
operation. In a second state 410 a display is activated, a camera
system is off, and a power supply is providing power to both the
display and to the camera system. In the second state 410, the lens
is positioned by the actuator at a low power focus position, for
example, a position that takes the least amount of power to
maintain. In a third state 415, a display is off, a camera system
is off, and a power supply is not providing power to either the
display or the camera system. In the third state 415, the lens is
positioned at a position that corresponds to the camera-off
position when no power is being supplied to the camera
system.
[0074] FIG. 5B also illustrates operations where the imaging device
changes state. For example, when the imaging device deactivates the
camera and deactivates the display (labelled by numeral 411) from
activated states, the imaging device changes states from the first
state 405 to the third state 415 and the lens may be moved to a
power off position. When the imaging device activates the camera
and the display (labelled by numeral 413) when both are in a
deactivated state, the imaging device changes states from the third
state 415 to the first state 405 where the lens is operationally
controlled for focusing or optical zoom operations.
[0075] FIG. 5B further illustrates when the imaging device changes
states between the first state 405 and the second state 410. For
example, when the imaging device deactivates the camera (labelled
by numeral 407) when the imaging device is in the first state 405,
and the display remains activated, the imaging device changes state
from the first state 405 to the second state 410 and the lens is
moved to be in the low power focus position. When the imaging
device activates the camera (labelled by numeral 409) and the
display remains activated, the imaging device changes states from
the second state 410 to the first state 405, where the lens is
operationally controlled for focusing or optical zoom
operations.
[0076] FIG. 5B also illustrates when the imaging device changes
states between the second state 410 and the third state 415. For
example, when the imaging device deactivates the display (labelled
by numeral 417) and the camera is off when the imaging device is in
the second state 410, the imaging device changes state from the
second state 410 to the third state 415, where the lens may be
moved to be in a power off focus position. When the imaging device
is in the third state, and the imaging device activates the display
(labelled by numeral 419) and the camera remains off, the imaging
device changes states from the third state 415 to the second state
410, where the lens is moved to the low power focus position.
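The three states and transitions of FIG. 5B can be summarized, purely as an illustrative sketch, by the following Python fragment; the state labels follow the figure, while the mapping structure and function names are hypothetical.

# Hedged sketch of the three states of FIG. 5B and the lens position each one
# implies. The structure is only an illustration.
LENS_POLICY = {
    "STATE_405": "operational",         # display ON, camera ON: focus/zoom control
    "STATE_410": "low_power_focus",     # display ON, camera OFF: park at low power position
    "STATE_415": "power_off_position",  # display OFF, camera OFF: no power supplied
}

def next_state(display_on: bool, camera_on: bool) -> str:
    """Map display/camera activation to the corresponding FIG. 5B state."""
    if display_on and camera_on:
        return "STATE_405"
    if display_on and not camera_on:
        return "STATE_410"
    return "STATE_415"  # display off, camera off

for display_on, camera_on in [(True, True), (True, False), (False, False)]:
    state = next_state(display_on, camera_on)
    print(state, "->", LENS_POLICY[state])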
[0077] FIG. 5C illustrates an exemplary implementation of camera
system control according to aspects of the present disclosure. In
the example of FIG. 5C, a method 500 is shown for operating a
mobile imaging device camera system to minimize power loss when the
camera system of the imaging device is turned off, the display of
the imaging device is ON, and the actuator of the camera system
and the display both receive power from the same voltage regulator.
At block 505, the method 500 includes supplying power from a
voltage regulator (e.g., a power supply) to a display of an imaging
device. In some embodiments, this may be performed by the power
supply 128 (FIG. 4A and FIG. 5A) supplying power to the display
125. At block 510 the method includes supplying power from a
voltage regulator (e.g., a power supply) to a camera system of the
imaging device. In some embodiments, this may be performed by the
power supply 128 (FIG. 4A and FIG. 5A) supplying power to the
camera system 124. As illustrated in FIG. 5A, embodiments of such a
camera system may include, for example, image sensor 107 having
sensing elements arranged in an imaging plane, a lens 112 having at
least one optical element 213, the lens 112 and image sensor 107
arranged in an optical path configured to propagate light through
the lens 112 and to the image sensor 107. The camera system may
also include an actuator 118 coupled to the lens 112, the actuator
118 operative to move the lens 112 to a plurality of focus
positions each being a different distance from the image sensor
107.
[0078] At block 515, the method 500 further includes receiving a
control signal indicating to place the camera system in an OFF
state. Even though the camera is in the OFF state, power may still
be supplied to the actuator because the camera system and the
display share the power supply and the power supply cannot be
de-activated when the display is being used, even if the camera
system is de-activated (or OFF). At block 520, the method 500
includes retrieving from a memory circuit an actuator control
value, for example, information that corresponds to (or represents)
a lens position that is a low power focus position, where the low
power focus position is a lens position where the actuator uses the
least amount of power. The low power focus position can be
dependent on the particular components used in the camera system
(for example, the actuator). Accordingly, in some embodiments,
information corresponding to a plurality of low power lens
positions, each for different types of components of the camera
system, may be stored in memory, and retrieved when needed. An
example of an embodiment of such ordered information is illustrated
in FIG. 4B. Using the actuator control value, the actuator can move
the lens to a certain position (the low power focus position). At
block 525, the method 500 further comprises controlling, with a
processor, the actuator to move the lens to the low power focus
position, wherein the voltage regulator supplies power to the
display and to the camera system when the camera system is in the
OFF state, or in a VLC positioning state and the velocity of the
mobile device exceeds a first threshold.
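Below is a rough, non-authoritative Python outline of blocks 505 through 525 for the shared-regulator case; the class, dictionary key, and callable names are stand-ins for platform driver calls.

# Rough sketch of blocks 505-525 of method 500 for the shared-regulator case.
class SharedRegulator:
    def supply(self, load: str):
        print(f"regulator supplying power to {load}")  # blocks 505 and 510

def method_500(regulator, camera_off_requested: bool, memory: dict, move_lens):
    regulator.supply("display")         # block 505
    regulator.supply("camera system")   # block 510
    if camera_off_requested:            # block 515: control signal -> camera OFF
        value = memory["low_power_focus_position"]  # block 520: retrieve control value
        move_lens(value)                # block 525: actuator parks lens; regulator stays on

method_500(SharedRegulator(), camera_off_requested=True,
           memory={"low_power_focus_position": 0},
           move_lens=lambda pos: print(f"lens moved to {pos}"))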
[0079] Such methods can include other features, as described
herein. For example, in some embodiments, when the camera system is
in the OFF state, camera imaging functionality is disabled. In some
embodiments, in the OFF state, the voltage regulator supplies power
to the camera system and to the display, and camera imaging
functionality is disabled. In some embodiments, in the OFF state,
the voltage regulator supplies power to the actuator and to the
display, and the image sensor functionality is in an OFF state such
that no image data is generated by the image sensor. In some
embodiments, the voltage regulator is a low-dropout voltage
regulator. In some embodiments, the low power focus position
includes a predetermined value. In some embodiments, the low power
focus position stored in the memory circuit is selected based on a
type of camera. In some embodiments, the low power focus position
is selected based on a type of actuator. In some embodiments, the
method uses actuator control information stored in the memory
circuit, which is used by the processor to control the actuator to
move the lens to the low power focus position. In some embodiments,
the memory circuit comprises two or more memory components.
[0080] FIG. 6 illustrates an exemplary implementation of power
optimization in visible light communication positioning according
to aspects of the present disclosure. In block 602, the method
receives, by a transceiver, positioning assistance data of a venue,
wherein the positioning assistance data includes identifiers and
positions of light fixtures in the venue. In block 604, the method
decodes, by a VLC signal decoder, one or more light fixtures within
a field of view of the mobile device to obtain corresponding light
fixture identifiers. In block 606, the method determines, by a
controller, a motion of the mobile device with respect to the one
or more light fixtures based on the light fixture identifiers and
the positioning assistance data of the venue. In block 608, the
method controls, by the controller, the mobile device to operate in
a reduced power mode based on the motion of the mobile device with
respect to the one or more light fixtures.
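One possible, simplified realization of blocks 602 through 608 is sketched below in Python; the callables and the motion threshold are assumptions introduced for illustration only.

# Illustrative outline of blocks 602-608 of FIG. 6. The callables passed in
# stand in for the transceiver, VLC signal decoder, and controller; none of
# these names come from an actual device API.
def vlc_power_optimization(receive_assistance, decode_fixtures,
                           estimate_motion, set_power_mode):
    assistance_data = receive_assistance()                  # block 602
    fixture_ids = decode_fixtures()                         # block 604
    motion = estimate_motion(fixture_ids, assistance_data)  # block 606
    set_power_mode("reduced" if motion > 0.5 else "normal")  # block 608 (threshold assumed)

# Toy usage with hard-coded stand-ins.
vlc_power_optimization(
    receive_assistance=lambda: {"fixture_7": (3.0, 4.0, 2.5)},   # positions (m), assumed
    decode_fixtures=lambda: ["fixture_7"],
    estimate_motion=lambda ids, data: 1.2,                       # metres/second, assumed
    set_power_mode=lambda mode: print(f"operating in {mode} power mode"),
)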
[0081] FIG. 7A illustrates an exemplary implementation of
determining a position of the mobile device in a reduced power mode
according to aspects of the present disclosure. In block 702, in a
reduced power mode, the method monitors angles of arrival of light
from the one or more light fixtures in the field of view of the
mobile device, and determines a position of the mobile device using
decoded light fixture identifiers and the angles of arrival of
light from the one or more light fixtures in the field of view of
the mobile device.
[0082] According to aspects of the present disclosure, the method
performed in block 702 may further include the method performed in
block 704. In block 704, the method of monitoring angles of arrival
of light from the one or more light fixtures in the field of view of
the mobile device may include, for each light fixture in the field
of view of the mobile device, measuring light pixels of each
light fixture for at least one image sensor frame, and determining
the field of view for each light fixture based on the light
pixels measured for the at least one image sensor frame.
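As a hedged illustration of the per-fixture measurement in block 704, the following Python sketch thresholds the light pixels of a fixture in one image sensor frame, takes their centroid, and converts the centroid to an angle of arrival under an assumed pinhole camera model; the threshold and focal length values are illustrative.

# Hedged sketch: find the bright pixels belonging to a fixture in one frame,
# take their centroid, and convert it to an angle of arrival with an assumed
# pinhole model. Threshold and focal length are illustrative values.
import numpy as np

def fixture_angle_of_arrival(frame: np.ndarray, threshold: float = 200.0,
                             focal_length_px: float = 1000.0):
    """Return (azimuth, elevation) in radians for the brightest blob in `frame`."""
    ys, xs = np.nonzero(frame > threshold)      # light pixels of the fixture
    if xs.size == 0:
        return None                              # fixture not in field of view
    cy, cx = ys.mean(), xs.mean()                # centroid of the fixture's pixels
    h, w = frame.shape
    dx, dy = cx - w / 2.0, cy - h / 2.0          # offset from optical axis (pixels)
    azimuth = np.arctan2(dx, focal_length_px)
    elevation = np.arctan2(dy, focal_length_px)
    return azimuth, elevation

# Toy frame with a bright 3x3 patch standing in for a light fixture.
frame = np.zeros((480, 640))
frame[100:103, 300:303] = 255
print(fixture_angle_of_arrival(frame))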
[0083] FIG. 7B illustrates various exemplary implementations of
controlling a mobile device to operate in a reduced power mode
according to aspects of the present disclosure. In the examples
shown in FIG. 7B, in block 712, the method may control an actuator
of a camera of the mobile device to position one or more lenses of
the camera in a fixed focal length during a period while the motion
of the mobile device is above a first threshold. In block 714, the
method may control a video frontend engine of the mobile device to
stop generating statistics data during a period while the motion of
the mobile device is above a first threshold. In block 716, the
method may control the video front end engine of the mobile device
to stop transferring statistics data to a memory during a period
while the motion of the mobile device is above a first threshold.
In block 718, the method may control an image processing engine of
the mobile device to stop processing data in support of auto focus,
auto white balance, and auto exposure during a period while the
motion of the mobile device is above a first threshold. In block
720, the method may control a VLC signal decoder of the mobile
device to intermittently decode incoming video frames during a
period while the motion of the mobile device is above a first
threshold.
[0084] According to aspects of the present disclosure, the mobile
device may be configured to perform one or more of the methods
described in block 712, block 714, block 716, block 718, or block
720, alone or in combination. For example, in one implementation,
the mobile device may be configured to perform the methods in
blocks 712, 716, and 720. In another implementation, the mobile
device may be configured to perform the methods in blocks 712, 714,
and 718. In yet another implementation, the mobile device may be
configured to perform the methods in all the blocks from 712 to
720.
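The following Python sketch shows one assumed way an implementation might combine the controls of blocks 712 through 720 while the motion of the mobile device exceeds the first threshold; the action strings and default block selection are illustrative only.

# Sketch of combining blocks 712-720: while motion exceeds the first threshold,
# apply whichever subset of power-saving actions an implementation has enabled.
REDUCED_POWER_ACTIONS = {
    712: "fix lens at constant focal length",
    714: "stop generating video frontend statistics",
    716: "stop transferring statistics to memory",
    718: "stop auto focus / auto white balance / auto exposure processing",
    720: "decode incoming video frames only intermittently",
}

def apply_reduced_power_mode(motion: float, first_threshold: float,
                             enabled_blocks=(712, 716, 720)):
    """Apply the enabled subset of reduced power actions when motion is high."""
    if motion <= first_threshold:
        return []
    applied = [REDUCED_POWER_ACTIONS[b] for b in enabled_blocks]
    for action in applied:
        print("reduced power mode:", action)
    return applied

apply_reduced_power_mode(motion=1.5, first_threshold=1.0)  # thresholds assumed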
[0085] FIG. 8 illustrates an exemplary block diagram of a mobile
device for implementing power optimization in visible light
communication positioning according to aspects of the present
disclosure. As shown in the example of FIG. 8, mobile device 800
may include a camera 801, a controller/processor 802 (which, in
some implementations, may additionally or alternatively be a
controller), memory 804, an input module 806 and an output module
808 coupled together via a bus 809 over which the various
components (801, 802, 804, 806, 808) may interchange data and
information. In some embodiments, memory 804 includes routines 811
and data/information 813. In some embodiments, the input module 806
and output module 808 may be located internal to the
controller/processor 802. The blocks of mobile device 800 can be
configured to implement the methods as described in association
with FIGS. 5B-5C, FIG. 6, FIGS. 7A-7B, and FIG. 9.
[0086] According to aspects of the present disclosure, camera 801
includes a lens 850, a shutter 852, photo detector array 854 (which
is an image sensor), a shutter control module 856, a photo detector
readout module 858, an auto exposure lock activation module 860,
and an interface module 862. The shutter control module 856, photo
detector readout module 858, and interface module 862 are
communicatively coupled together via bus 864. In some embodiments,
camera 801 may further include auto exposure lock activation module
860. Shutter control module 856, photo detector readout module 858,
and auto exposure lock activation module 860 may be configured to
receive control messages from controller/processor 802 via bus 809,
interface module 862 and bus 864. Photo detector readout module 858
may communicate readout information of photo detector array 854 to
controller/processor 802, via bus 864, interface module 862, and
bus 809. Thus, the image sensor, such as the photo detector array
854, can be communicatively coupled to the controller/processor 802
via photo detector readout module 858, bus 864, interface module
862, and bus 809.
[0087] Shutter control module 856 may control the shutter 852 to
expose different rows of the image sensor to input light at
different times, for example, under the direction of
controller/processor 802. Photo detector readout module 858 may
output information to the processor, for example, pixel values
corresponding to the pixels of the image sensor.
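Assuming a rolling-shutter readout in which each row samples the scene at a slightly different time, the short Python sketch below estimates the capture time of a given row; the frame rate and row count are example values, not parameters taken from the disclosure.

# Small sketch of why row-wise readout matters here: each row of the image
# sensor samples the scene at a slightly different time, so one frame yields
# many temporal samples of the VLC waveform. Values below are assumed examples.
FRAME_RATE_HZ = 30.0
ROWS_PER_FRAME = 480

def row_sample_time(frame_index: int, row_index: int) -> float:
    """Approximate capture time (seconds) of a given row in a given frame."""
    frame_period = 1.0 / FRAME_RATE_HZ
    row_period = frame_period / ROWS_PER_FRAME
    return frame_index * frame_period + row_index * row_period

# Effective temporal sampling rate across rows, assuming readout spans the frame.
print("row sampling rate ~", FRAME_RATE_HZ * ROWS_PER_FRAME, "Hz")
print("row 240 of frame 0 sampled at", row_sample_time(0, 240), "s")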
[0088] Input module 806 may include a wireless radio receiver
module 810 and a wired and/or optical receiver interface module
814. Output module 808 may include a wireless radio transmitter
module 812 and a wired and/or optical transmitter interface module
816. Wireless radio receiver module 810, such as a radio receiver
supporting OFDM and/or CDMA, may receive input signals via receive
antenna 818. Wireless radio transmitter module 812, such as a radio
transmitter supporting OFDM and/or CDMA, may transmit output
signals via transmit antenna 820. In some embodiments, the same
antenna can be used for transmitting and receiving signals. Wired
and/or optical receiver interface module 814 may be communicatively
coupled to the Internet and/or other network nodes, for example via
a backhaul, and receives input signals. Wired and/or optical
transmitter interface module 816 may be communicatively coupled to
the Internet and/or other network nodes, for example via a
backhaul, and may transmit output signals.
[0089] In various embodiments, controller/processor 802 can be
configured to: sum pixel values in each row of pixel values
corresponding to a first region of an image sensor to generate a
first array of pixel value sums, at least some of the pixel value
sums representing energy recovered from different portions of the
VLC light signal, the different portions being output at different
times and with different intensities; and perform a first
demodulation operation on the first array of pixel value sums to
recover information communicated by the VLC signal.
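A non-authoritative Python sketch of this row-summing step is shown below; it sums the pixel values in each row of an assumed region and recovers a binary symbol per row with a simple threshold, standing in for the demodulation operations described elsewhere herein.

# Non-authoritative sketch: sum the pixel values in each row of the region
# where the fixture is visible to get one light-energy sample per row, then
# demodulate. Here an assumed ON-OFF keyed signal is recovered by thresholding.
import numpy as np

def row_sums(frame: np.ndarray, region_rows: slice, region_cols: slice) -> np.ndarray:
    """First array of pixel value sums: one sum per row of the region."""
    return frame[region_rows, region_cols].sum(axis=1)

def demodulate_on_off(sums: np.ndarray) -> np.ndarray:
    """Recover a binary symbol per row by thresholding at the midpoint."""
    threshold = 0.5 * (sums.max() + sums.min())
    return (sums > threshold).astype(int)

# Toy frame: alternating bright/dark bands standing in for an OOK VLC signal.
frame = np.zeros((8, 16))
frame[::2, 4:12] = 255
sums = row_sums(frame, slice(0, 8), slice(4, 12))
print(demodulate_on_off(sums))  # expected alternating 1s and 0s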
[0090] In some embodiments, controller/processor 802 can be further
configured to: identify, as a first region of the image sensor, a
first subset of pixel sensor elements in a sensor where the VLC
signal is visible during a first frame time. In some such
embodiments, controller/processor 802 can be further configured to
identify a second region of the image sensor corresponding to a
second subset of pixel sensor elements in the sensor where the VLC
signal may be visible during a second frame time, the first and
second regions being different. In some embodiments,
controller/processor 802 is further configured to: sum pixel values
in each row of pixel values corresponding to the second region of
the image sensor to generate a second array of pixel value sums, at
least some of the pixel value sums in the second array representing
energy recovered from different portions of the VLC light signal,
the different portions being output at different times and with
different intensities; and perform a second demodulation operation
on the second array of pixel value sums to recover information
communicated by the VLC signal, where the first demodulation
operation produces a first symbol value and the second demodulation
operation produces a second symbol value.
[0091] In various embodiments, the recovered information includes a
first symbol value, and different information is recovered from the
VLC signal over a period of time. In some embodiments, the array of
pixel value sums represents an array of temporally sequential light
signal energy measurements made over a period of time.
[0092] In various embodiments, the portion of the VLC signal
corresponding to a first symbol from which the first symbol value
is produced has a duration equal to or less than the duration of a
frame captured by the image sensor.
[0093] In some embodiments, controller/processor 802 is configured
to identify a frequency from among a plurality of alternative
frequencies, as part of being configured to perform a demodulation
operation. In some embodiments, the transmitted VLC signal includes
pure tones or square waves corresponding to tone frequencies equal
to or greater than 150 Hz, and the lowest frequency component of
the VLC signal may be 150 Hz or larger. In some embodiments, the
transmitted VLC signal is a digitally modulated signal with binary
amplitude (ON or OFF). In some such embodiments, the transmitted
VLC signal is a digitally modulated signal with binary ON-OFF
signals whose frequency content is at least 150 Hz.
[0094] In some embodiments, controller/processor 802 is configured
to perform one of: an OFDM demodulation, CDMA demodulation, Pulse
Position Modulation (PPM) demodulation, or ON-OFF keying
demodulation to recover modulated symbols, as part of being
configured to perform a demodulation operation.
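As one hedged example of identifying a frequency from among a plurality of alternative frequencies, the Python sketch below correlates a block of samples against each candidate tone and picks the strongest; the candidate frequencies and sample rate are assumed example values.

# Hedged sketch of identifying one tone frequency from a set of alternatives:
# correlate the samples against each candidate tone and return the candidate
# with the largest energy. Candidate frequencies and sampling rate are assumed.
import numpy as np

def identify_tone(samples: np.ndarray, sample_rate_hz: float,
                  candidates_hz=(150.0, 300.0, 450.0, 600.0)) -> float:
    """Return the candidate frequency with the largest correlation energy."""
    t = np.arange(samples.size) / sample_rate_hz
    energies = []
    for f in candidates_hz:
        i = np.dot(samples, np.cos(2 * np.pi * f * t))
        q = np.dot(samples, np.sin(2 * np.pi * f * t))
        energies.append(i * i + q * q)
    return candidates_hz[int(np.argmax(energies))]

# Toy signal: a 300 Hz tone sampled at an assumed 14.4 kHz row rate.
rate = 14400.0
t = np.arange(1024) / rate
samples = np.cos(2 * np.pi * 300.0 * t)
print(identify_tone(samples, rate))  # expected 300.0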
[0095] In some embodiments, the image sensor may be a part of a
camera that supports an auto exposure lock which when enabled
disables automatic exposure, and controller/processor 802 is
further configured to: activate the auto exposure lock; and
capture the pixel values using a fixed exposure time
setting.
[0096] In some embodiments, controller/processor 802 is further
configured to detect a beginning of a codeword including a
predetermined number of symbols. In some such embodiments,
controller/processor 802 is configured to: detect a predetermined
VLC synchronization signal having duration equal to or less than
the duration of a frame; and interpret the VLC synchronization
signal as an identification of the beginning of the codeword, as
part of being configured to detect a beginning of a codeword.
[0097] According to aspects of the present disclosure, the image
sensor in the mobile device can be configured to extract a time
domain VLC signal from a sequence of image frames that capture a
given light fixture. The received VLC signal can be demodulated and
decoded by the mobile device to produce a unique identification for
a light fixture. Furthermore, an image sensor can in parallel
extract VLC signals from images containing multiple light fixtures
that are visible in the field of view of the image sensor. In this
manner, the mobile device may use multiple independent sources of
information to confirm and refine its position.
[0098] Each pixel in an image sensor accumulates light energy
coming from a narrow range of physical directions, so by performing
pixel-level analysis the mobile device can precisely determine the
angle of arrival of light from one or more light fixtures. This
angle-of-arrival information enables the mobile device to compute
its position relative to a light fixture to within a few
centimeters.
[0099] By combining the position relative to a light fixture with
the information about the location of that light fixture as
determined based on the decoded identification coming from the
positioning signal, the mobile device can determine its global
position in the venue to within an accuracy of a few centimeters.
[0100] FIG. 9 illustrates an exemplary implementation of
determining position of a mobile device according to aspects of the
present disclosure. In the example shown in FIG. 9, in block 902,
an image sensor of the mobile device is configured to
capture/receive image frames. In block 904, a controller of the
mobile device is configured to extract a time-domain VLC signal from
the image frames. In block 906, the controller is configured to
decode the light fixture identification using the VLC signal. In block
908, the controller is configured to compute the angle of arrival of
light using the received image frames. In block 910, the controller
is configured to determine the position of the mobile device relative
to the light fixture(s) using the angle of arrival of light. In
block 912, the controller is configured to determine the position of
the mobile device in the venue based on the decoded light
fixture(s) identification and the position of the mobile device
relative to the light fixture(s).
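Purely as an illustrative sketch of blocks 910 and 912, the following Python fragment combines a fixture position from the assistance data with a measured angle of arrival to estimate the device position; the simple geometry, including the assumed known height below the fixture, is a modeling convenience rather than the disclosed method.

# Illustrative sketch of blocks 910-912: given a fixture's known position from
# the assistance data and the measured angle of arrival, estimate the device
# position. A known device-to-fixture height difference is assumed here purely
# to keep the geometry simple.
import math

def position_from_fixture(fixture_xyz, azimuth, elevation, height_below_fixture):
    """Estimate device (x, y, z) from one fixture and the light's arrival angle."""
    fx, fy, fz = fixture_xyz
    dz = height_below_fixture              # vertical offset (block 910)
    dx = dz * math.tan(azimuth)            # horizontal offsets from arrival angles
    dy = dz * math.tan(elevation)
    return fx - dx, fy - dy, fz - dz       # global position in the venue (block 912)

fixture = (10.0, 5.0, 3.0)                 # metres, from assistance data (assumed)
print(position_from_fixture(fixture, azimuth=0.1, elevation=-0.05,
                            height_below_fixture=2.0))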
[0101] One of the benefits of the disclosed positioning method is
that VLC-based positioning does not suffer from the uncertainty
associated with the measurement models used by other positioning
methods. For example, RF-signal-strength-based approaches may
suffer from unpredictable multipath signal propagation. On the
other hand, VLC-based positioning uses line-of-sight paths whose
direction can be more precisely determined using the image
sensor.
[0102] Another benefit of the disclosed positioning method is that,
in addition to providing the position of the device in the
horizontal plane, the disclosed positioning method may also provide
position of the mobile device in the vertical dimension (the
Z-axis). This is a benefit of using angle of arrival of light,
which is a three-dimensional vector. The ability to obtain
accurate height estimates can enable new applications such as
autonomous navigation and operation of drones and forklifts in
warehouses and on manufacturing floors.
[0103] Another benefit of using directionality of the light vectors
is that the mobile device can determine its orientation in the
horizontal (X-Y) plane, which is also referred to as the mobile
device's azimuth or yaw angle. This information can be used to
inform a user which way the user is holding the mobile device
relative to other items in the venue. By contrast, a global
positioning system (GPS) receiver determines the heading from a
time sequence of position estimates, which may require that the user
move in a certain direction before it can determine which way the user
is going. With the disclosed approach, the orientation/heading is
determined as soon as the first position is computed.
[0104] Moreover, the disclosed positioning method may have a low
latency and a high update rate. Typical indoor positioning systems that
use RF signals may require many measurements to be taken over time
and space in order to get a position fix. With the disclosed
approach, the time to first fix can be on the order of 0.1 second
and the update rate can be as high as 30 Hz. This ensures a
responsive and lively user experience and can even satisfy many of
the challenging drone/robot navigation applications.
[0105] Furthermore, the disclosed positioning method has better
scalability than conventional positioning methods. Conventional
positioning methods that require two-way communication between a
mobile device and infrastructure typically do not scale well as the
number of mobile users and infrastructure access points increases.
This is because each mobile-to-infrastructure communication creates
interference for other mobile devices, as they attempt to position
themselves. In the case of RTT-based positioning in Wi-Fi frequency
bands, the interference may also cause a drop in total WLAN
throughput. On the other hand, the disclosed VLC-based positioning
can be inherently scalable because it employs one-way
communication. As a result, the performance of the disclosed
positioning approach does not degrade no matter how many users and
transmitters may be simultaneously active in the venue.
[0106] Another benefit of the disclosed method of visible light
communication positioning is that VLC enables communication
through widely available bandwidth without regulation. In addition,
since users can observe a location and direction at which light
corresponding to a VLC communication travels, information regarding
coverage can be accurately ascertained. VLC can also offer reliable
security and low power consumption. In light of these and other
advantages, VLC can be applied in locations where the use of RF
communications is prohibited, such as hospitals or airplanes, and
can also provide additional information services through electronic
display boards.
[0107] Note that at least the following three paragraphs, FIG.
3A-3B through FIG. 9 and their corresponding descriptions provide
support for receiver means for receiving positioning assistance
data of a venue; means for decoding one or more light fixtures
within a field of view of the mobile device to obtain corresponding
light fixture identifiers; means for determining a motion of the
mobile device with respect to the one or more light fixtures based
on the light fixture identifiers and the positioning assistance
data of the venue; means for controlling the mobile device to
operate in a reduced power mode based on the motion of the mobile
device with respect to the one or more light fixtures; means for
monitoring angles of arrival of light from the one or more light
fixtures in the field of view of the mobile device; means for
determining a position of the mobile device using decoded light
fixture identifiers and the angles of arrival of light from the one
or more light fixtures in the field of view of the mobile device; means
for measuring light pixels of each light fixture for at least
one image sensor frame; means for determining the field of view for
each light fixture based on the light pixels measured for the
at least one image sensor frame; means for controlling an actuator
of a camera of the mobile device to position one or more lenses of
the camera in a fixed focal length during a period while the motion
of the mobile device is above a first threshold; means for
controlling a video frontend engine of the mobile device to stop
generating statistics data during a period while the motion of the
mobile device is above a first threshold; means for controlling a
video front end engine of the mobile device to stop transferring
statistics data to a memory during a period while the motion of the
mobile device is above a first threshold; means for controlling an
image processing engine of the mobile device to stop processing
data in support of auto focus, auto white balance, and auto
exposure during a period while the motion of the mobile device is
above a first threshold; means for controlling a VLC signal decoder
of the mobile device to intermittently decode incoming video frames
during a period while the motion of the mobile device is above a
first threshold; means for detecting that the motion of the mobile
device with respect to the one or more light fixtures is below a
second threshold; and means for controlling the mobile device to
operate in a normal power mode based on the motion of the mobile
device with respect to the one or more light fixtures being below
the second threshold.
[0108] The methodologies described herein may be implemented by
various means depending upon applications according to particular
examples. For example, such methodologies may be implemented in
hardware and firmware/software. In a hardware implementation, for
example, a processing unit may be implemented within one or more
application specific integrated circuits (ASICs), digital signal
processors (DSPs), digital signal processing devices (DSPDs),
programmable logic devices (PLDs), field programmable gate arrays
(FPGAs), processors, controllers, micro-controllers,
microprocessors, electronic devices, other device units designed
to perform the functions described herein, or combinations
thereof.
[0109] Some portions of the detailed description included herein
are presented in terms of algorithms or symbolic representations of
operations on binary digital signals stored within a memory of a
specific apparatus or special purpose computing device or platform.
In the context of this particular specification, the term specific
apparatus or the like includes a general purpose computer once it
is programmed to perform particular operations pursuant to
instructions from program software. Algorithmic descriptions or
symbolic representations are examples of techniques used by those
of ordinary skill in the signal processing or related arts to
convey the substance of their work to others skilled in the art. An
algorithm is here, and generally, considered to be a
self-consistent sequence of operations or similar signal processing
leading to a desired result. In this context, operations or
processing involve physical manipulation of physical quantities.
Typically, although not necessarily, such quantities may take the
form of electrical or magnetic signals capable of being stored,
transferred, combined, compared or otherwise manipulated. It has
proven convenient at times, principally for reasons of common
usage, to refer to such signals as bits, data, values, elements,
symbols, characters, terms, numbers, numerals, or the like. It
should be understood, however, that all of these or similar terms
are to be associated with appropriate physical quantities and are
merely convenient labels. Unless specifically stated otherwise, as
apparent from the discussion herein, it is appreciated that
throughout this specification discussions utilizing terms such as
"processing," "computing," "calculating," "determining" or the like
refer to actions or processes of a specific apparatus, such as a
special purpose computer, special purpose computing apparatus or a
similar special purpose electronic computing device. In the context
of this specification, therefore, a special purpose computer or a
similar special purpose electronic computing device is capable of
manipulating or transforming signals, typically represented as
physical electronic or magnetic quantities within memories,
registers, or other information storage devices, transmission
devices, or display devices of the special purpose computer or
similar special purpose electronic computing device.
[0110] Wireless communication techniques described herein may be in
connection with various wireless communications networks such as a
wireless wide area network (WWAN), a wireless local area network
(WLAN), a wireless personal area network (WPAN), and so on. The
term "network" and "system" may be used interchangeably herein. A
WWAN may be a Code Division Multiple Access (CDMA) network, a Time
Division Multiple Access (TDMA) network, a Frequency Division
Multiple Access (FDMA) network, an Orthogonal Frequency Division
Multiple Access (OFDMA) network, a Single-Carrier Frequency
Division Multiple Access (SC-FDMA) network, or any combination of
the above networks, and so on. A CDMA network may implement one or
more radio access technologies (RATs) such as cdma2000,
Wideband-CDMA (W-CDMA), to name just a few radio technologies.
Here, cdma2000 may include technologies implemented according to
IS-95, IS-2000, and IS-856 standards. A TDMA network may implement
Global System for Mobile Communications (GSM), Digital Advanced
Mobile Phone System (D-AMPS), or some other RAT. GSM and W-CDMA are
described in documents from a consortium named "3rd Generation
Partnership Project" (3GPP). Cdma2000 is described in documents
from a consortium named "3rd Generation Partnership Project 2"
(3GPP2). 3GPP and 3GPP2 documents are publicly available. 4G Long
Term Evolution (LTE) communications networks may also be
implemented in accordance with claimed subject matter, in an
aspect. A WLAN may comprise an Institute of Electrical and
Electronic Engineers (IEEE) 802.11x network, and a WPAN may
comprise a Bluetooth.RTM. network or an IEEE 802.15x network, for example.
Wireless communication implementations described herein may also be
used in connection with any combination of WWAN, WLAN or WPAN.
[0111] In another aspect, as previously mentioned, a wireless
transmitter or access point may comprise a femtocell, utilized to
extend cellular telephone service into a business or home. In such
an implementation, one or more mobile devices may communicate with
a femtocell via a CDMA cellular communication protocol, for
example, and the femtocell may provide the mobile device access to
a larger cellular telecommunication network by way of another
broadband network such as the Internet.
[0112] Techniques described herein may be used with a GPS that
includes any one of several global navigation satellite systems
(GNSS) and/or combinations of GNSS. Furthermore, such techniques
may be used with positioning systems that utilize terrestrial
transmitters acting as "pseudolites", or a combination of satellite
vehicles (SVs) and such terrestrial transmitters. Terrestrial
transmitters may, for example, include ground-based transmitters
that broadcast a pseudo noise (PN) code or other ranging code (for
example, similar to a GPS or CDMA cellular signal). Such a
transmitter may be assigned a unique PN code so as to permit
identification by a remote receiver. Terrestrial transmitters may
be useful, for example, to augment a GPS in situations where GPS
signals from an orbiting SV might be unavailable, such as in
tunnels, mines, buildings, urban canyons or other enclosed areas.
Another implementation of pseudolites is known as radio-beacons.
The term "SV", as used herein, is intended to include terrestrial
transmitters acting as pseudolites, equivalents of pseudolites, and
possibly others. The terms "GPS signals" and/or "SV signals", as
used herein, are intended to include GPS-like signals from
terrestrial transmitters, including terrestrial transmitters acting
as pseudolites or equivalents of pseudolites.
[0113] The terms "and" and "or" as used herein may include a
variety of meanings that will depend at least in part upon the
context in which they are used. Typically, "or" if used to associate a
list, such as A, B, or C, is intended to mean A, B, and C, here used
in the inclusive sense, as well as A, B, or C, here used in the
exclusive sense. Reference throughout this specification to "one
example" or "an example" means that a particular feature,
structure, or characteristic described in connection with the
example is included in at least one example of claimed subject
matter. Thus, the appearances of the phrase "in one example" or "an
example" in various places throughout this specification are not
necessarily all referring to the same example. Furthermore, the
particular features, structures, or characteristics may be combined
in one or more examples. Examples described herein may include
machines, devices, engines, or apparatuses that operate using
digital signals. Such signals may comprise electronic signals,
optical signals, electromagnetic signals, or any form of energy
that provides information between locations.
[0114] While there has been illustrated and described what are
presently considered to be example features, it will be understood
by those skilled in the art that various other modifications may be
made, and equivalents may be substituted, without departing from
claimed subject matter. Additionally, many modifications may be
made to adapt a particular situation to the teachings of claimed
subject matter without departing from the central concept described
herein. Therefore, it is intended that claimed subject matter not
be limited to the particular examples disclosed, but that such
claimed subject matter may also include all aspects falling within
the scope of the appended claims, and equivalents thereof.
* * * * *