U.S. patent application number 16/048132 was published by the patent office on 2018-11-22 as application publication 20180338124 for a method of decaying chrominance in images. The applicant listed for this patent is Filmic Inc. The invention is credited to Christopher Cohen and Matthew Voss.
United States Patent Application | 20180338124 |
Kind Code | A1 |
Application Number | 16/048132 |
Family ID | 63078962 |
Filed Date | July 27, 2018 |
Publication Date | November 22, 2018 |
Cohen; Christopher; et al. |
METHOD OF DECAYING CHROMINANCE IN IMAGES
Abstract
A method and system for decaying chrominance. One or more
processors obtain a selected one of a series of root images of a
digital video. The selected root image includes root pixels each
associated with color values. The processor(s) selects one of the
root pixels until each of the root pixels has been selected. The
color values associated with the selected root pixel are
expressible as a color vector with a plurality of elements each
storing a different one of the color values. The processor(s)
determines a perceptual luminance value for the selected root
pixel, generates a monochromic vector for the selected root pixel,
generates a biased monochromic vector by multiplying the
monochromic vector with a bias, and generates new color values
associated with a new pixel of a denoised image corresponding to
the selected root pixel by blending the biased monochromic vector
with the color vector.
Inventors: | Cohen; Christopher; (Seattle, WA); Voss; Matthew; (Seattle, WA) |
Applicant: |
Name | City | State | Country | Type |
Filmic Inc. | Seattle | WA | US | |
Family ID: | 63078962 |
Appl. No.: | 16/048132 |
Filed: | July 27, 2018 |
Related U.S. Patent Documents
Application Number | Filing Date | Patent Number |
15910993 | Mar 2, 2018 | 10051252 |
16048132 | | |
62468063 | Mar 7, 2017 | |
62468874 | Mar 8, 2017 | |
Current U.S. Class: | 1/1 |
Current CPC Class: | G06T 5/002 20130101; G06T 2207/10016 20130101; G06T 7/90 20170101; H04N 5/23212 20130101; H04N 5/23293 20130101; H04N 5/23222 20130101; G06T 2207/10024 20130101; H04N 9/646 20130101; H04N 5/23216 20130101; H04N 5/213 20130101; H04N 9/77 20130101 |
International Class: | H04N 9/64 20060101 H04N009/64; H04N 5/213 20060101 H04N005/213; H04N 9/77 20060101 H04N009/77 |
Claims
1. A system comprising: memory storing a digital video and a noise
decay module, the digital video comprising a series of root images,
the noise decay module comprising instructions; and at least one
processor configured to execute the instructions that, when executed,
cause the at least one processor to obtain a selected one of the
series of root images and generate a denoised image based on the
selected root image, the selected root image comprising a plurality
of root pixels each associated with a set of color values, for each
root pixel in the plurality of root pixels, the at least one
processor generating the denoised image by: determining a
perceptual luminance value for the root pixel based on (a) the set
of color values associated with the root pixel and (b) the set of
color values associated with each of a predetermined number of root
pixels neighboring the root pixel, the set of color values
associated with the root pixel being expressible as a color vector
with a first plurality of elements, each of the first plurality of
elements storing a different one of the set of color values,
generating a monochromic vector for the root pixel, the monochromic
vector having a second plurality of elements, each of the second
plurality of elements equaling the perceptual luminance value,
generating a biased monochromic vector by multiplying the monochromic
vector with a bias calculated as a function of the perceptual
luminance value, and generating a new set of color values
associated with a new pixel of the denoised image by blending the
biased monochromic vector with the color vector.
2. The system of claim 1, further comprising: a camera configured
to capture the digital video and store the digital video in the
memory.
3. The system of claim 2, wherein the instructions cause the at
least one processor to obtain each of the series of root images one
at a time and generate a different denoised image for each of the
series of root images in real-time as the digital video is
captured.
4. The system of claim 1, wherein when each of the plurality of
root pixels of the selected root image does not have linearized
gamma values, the instructions cause the at least one processor to
remap each value in the set of color values of each of the
plurality of root pixels of the selected root image to a
corresponding linear color value before generating the denoised
image.
5. The system of claim 4, wherein the selected root image is in a
Standard Red Green Blue ("sRGB") color space before the remapping
occurs.
6. The system of claim 1, wherein the instructions, when executed
by the at least one processor, cause the at least one processor to
remap the denoised image to a Standard Red Green Blue (sRGB) color
space after the denoised image is generated.
7. The system of claim 1, wherein the predetermined number of root
pixels and the root pixel comprise nine root pixels.
8. The system of claim 1, wherein the root pixel and the
predetermined number of root pixels are region pixels, and the
perceptual luminance value for the root pixel is determined by:
determining a plurality of relative luminance values by calculating
a relative luminance value for each of the region pixels, and
determining a median of the plurality of relative luminance values,
the perceptual luminance value for the root pixel being the
median.
9. The system of claim 8, wherein the relative luminance value is
determined for each of the region pixels by: multiplying 0.2126 by
a red component of the set of color values associated with the
region pixel, multiplying 0.7152 by a green component of the set of
color values associated with the region pixel, and multiplying
0.0722 by a blue component of the set of color values associated
with the region pixel.
10. The system of claim 1, wherein the bias is calculated by
multiplying 0.16667 with a natural log of the perceptual luminance
value to obtain a result and adding one to the result.
11. The system of claim 1 implemented as a smartphone, comprising: a
camera configured to capture the digital video and store the
digital video in the memory, the instructions causing the at least
one processor to obtain each of the series of root images one at a
time and generate a different denoised image for each of the series
of root images in real-time as the digital video is captured.
12. A method comprising: obtaining a selected one of a series of
root images of a digital video with at least one processor, the
selected root image comprising a plurality of root pixels each
associated with a set of color values; and until each of the
plurality of root pixels has been selected: selecting, with the at
least one processor, one of the root pixels, the set of color
values associated with the selected root pixel being expressible as
a color vector with a first plurality of elements, each of the
first plurality of elements storing a different one of the set of
color values, determining, with the at least one processor, a
perceptual luminance value for the selected root pixel based on (a)
the set of color values associated with the selected root pixel, and
(b) the set of color values associated with each of a predetermined
number of root pixels neighboring the selected root pixel,
generating, with the at least one processor, a monochromic vector
for the selected root pixel, the monochromic vector having a second
plurality of elements, each of the second plurality of elements
equaling the perceptual luminance value, generating, with the at
least one processor, a biased monochromic vector by multiplying the
monochromic vector with a bias calculated as a function of the
perceptual luminance value, and generating, with the at least one
processor, a new set of color values associated with a new pixel of
a denoised image corresponding to the selected root pixel by
blending the biased monochromic vector with the color vector.
13. The method of claim 12, further comprising: capturing the
digital video with a camera; and storing the digital video in a
storage location accessible by the at least one processor, the at
least one processor obtaining the selected root image from the
storage location.
14. The method of claim 12, further comprising: determining, with
the at least one processor, whether each of the plurality of root
pixels of the selected root image has linearized gamma values; and
when it is determined that the plurality of root pixels of the root
image do not have linearized gamma values, remapping each value in
the set of color values of each of the plurality of root pixels of
the selected root image to a corresponding linear color value.
15. The method of claim 14, wherein the selected root image is in a
Standard Red Green Blue (sRGB) color space before the remapping
occurs.
16. The method of claim 12, further comprising: remapping, with the
at least one processor, the denoised image to a Standard Red Green
Blue (sRGB) color space after each of the plurality of root pixels
has been selected.
17. The method of claim 12, wherein the predetermined number of
root pixels and the selected root pixel comprise nine root
pixels.
18. The method of claim 12, wherein the selected root pixel and the
predetermined number of root pixels are region pixels, and the
perceptual luminance value for the selected root pixel is
determined by: determining, with the at least one processor, a
plurality of relative luminance values by calculating a relative
luminance value for each of the region pixels, and determining,
with the at least one processor, a median of the plurality of
relative luminance values, the perceptual luminance value for the
selected root pixel being the median.
19. The method of claim 18, wherein the relative luminance value is
determined for each of the region pixels by: multiplying 0.2126 by
a red component of the set of color values associated with the
region pixel, multiplying 0.7152 by a green component of the set of
color values associated with the region pixel, and multiplying
0.0722 by a blue component of the set of color values associated
with the region pixel.
20. The method of claim 12, wherein the bias is calculated by
multiplying 0.16667 with a natural log of the perceptual luminance
value to obtain a result and adding one to the result.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional
Application No. 62/468,063, filed on Mar. 7, 2017, and U.S.
Provisional Application No. 62/468,874, filed on Mar. 8, 2017, both
of which are incorporated herein by reference in their
entireties.
BACKGROUND OF THE INVENTION
Field of the Invention
[0002] The present invention is directed generally to methods of
reducing or removing chromatic noise in images and digital
video.
Description of the Related Art
[0003] Luminance noise refers to fluctuations in brightness.
Luminance noise may appear as light and dark specks (e.g., within a
region of an image in which pixels should have the same or similar
brightness). Chromatic or chroma noise refers to fluctuations in
color. Chroma noise may appear as specks or blotches of unexpected
color(s) (e.g., within a region of an image in which pixels should
have the same or similar colors). Chroma noise is often more
apparent in very dark or very light areas of an image and may give
the image an unnatural appearance.
[0004] Image editing software often includes a user input (e.g.,
slider) that may be used to remove chroma noise manually. Software
may also automatically remove chroma noise by decolorizing any
pixels that have an unexpected color when compared to their
neighboring pixels. Decolorized pixels are set to black, which
essentially converts the chroma noise to luminance noise. Then,
other image processing techniques may be applied to the image to
remove the luminance noise and improve the overall appearance of
the image.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
[0005] FIG. 1 is a functional block diagram of a video capture
system.
[0006] FIG. 2 is a flow diagram of a method of generating a
denoised image performable by the video capture system.
[0007] FIG. 3 is a functional block diagram illustrating an
exemplary mobile communication device that may be used to implement
the video capture system.
DETAILED DESCRIPTION OF THE INVENTION
[0008] FIG. 1 illustrates a video capture system 200 configured to
capture digital video 203, which may be referred to as an image
stream. For example, the digital video 203 may be captured and/or
processed as a Real-Time Messaging Protocol ("RTMP") video stream.
By way of a non-limiting example, the video capture system 200 may
be implemented as a mobile communication device 140 (described
below and illustrated in FIG. 3). The video capture system 200
includes a housing 202, a camera 204, one or more processors 206,
memory 208, a display 210, and one or more manual controls 220. The
camera 204, the processor(s) 206, the memory 208, and the display
210 may be connected together by a bus 212 (e.g., like a bus system
186 illustrated in FIG. 3).
[0009] The camera 204 is mounted on the housing 202. The camera 204
is configured to capture the digital video 203 and store that
digital video 203 in the memory 208. The captured digital video 203
includes a series of root images (e.g., including a root image 240)
of a scene. By way of a non-limiting example, the camera 204 may be
implemented as a camera or video capture device 158 (see FIG.
3).
[0010] The processor(s) 206 is/are configured to execute software
instructions stored in the memory 208. By way of a non-limiting
example, the processor(s) 206 may be implemented as a central
processing unit ("CPU") 150 (see FIG. 3) and the memory 208 may be
implemented as memory 152 (see FIG. 3).
[0011] The display 210 is positioned to be viewed by the user while
the user operates the video capture system 200. The display 210 is
configured to display a preview of the digital video 203 being
captured by the camera 204. By way of a non-limiting example, the
display 210 may be implemented as a conventional display device, such
as a touch screen. The display 210 may be mounted on the housing
202. For example, the display 210 may be implemented as a display
154 (see FIG. 3). Alternatively, the display 210 may be implemented
as an electronic viewfinder, an auxiliary monitor connected to the
video capture system 200, and the like.
[0012] The manual control(s) 220 is/are configured to be operated
by the user and may affect properties (e.g., focus, exposure, and
the like) of the digital video 203 being captured. The manual
control(s) 220 may be implemented as software controls that
generate virtual controls displayed by the display 210. In such
embodiments, the display 210 may be implemented as a touch screen
configured to receive user input that manually manipulates the
manual control(s) 220. Alternatively, the manual control(s) 220 may
be implemented as physical controls (e.g., buttons, knobs, and the
like) disposed on the housing 202 and configured to be manually
manipulated by the user. In such embodiments, the manual control(s)
220 may be connected to the processor(s) 206 and the memory 208 by
the bus 212.
[0013] By way of non-limiting examples, the manual control(s) 220
may include a focus control 220A, an exposure control 220B, and the
like. The focus control 220A may be used to change the focus of the
digital video being captured by the camera 204. The exposure
control 220B may change an ISO value, shutter speed, aperture, or
an exposure value ("EV") of the digital video being captured by the
camera 204.
[0014] The memory 208 stores a noise decay module 230 implemented
by the processor(s) 206. In some embodiments, the noise decay
module 230 may generate and display the virtual controls
implementing the manual control(s) 220. Alternatively, the manual
control(s) 220 may be implemented by other software instructions
stored in the memory 208.
[0015] FIG. 2 is a flow diagram of a method 280 performed by the
noise decay module 230 (see FIG. 1). Referring to FIG. 1, the
method 280 (see FIG. 2) generates the denoised image 250 from one
of the series of root images of the digital video 203. For ease of
illustration, the method 280 (see FIG. 2) will be described as
generating the denoised image 250 from the root image 240.
[0016] In first block 282 (see FIG. 2), the noise decay module 230
obtains the root image 240 as a raw bitmap (e.g., directly from the
camera 204) before the root image 240 is encoded. The root image
240 includes a plurality of root pixels each associated with one or
more color values within a color space (e.g., a standard Red Green
Blue ("sRGB") color space). In this example, the RGB color values
of each root pixel include separate values for red ("R.sub.srgb"),
green ("G.sub.srgb"), and blue ("B.sub.srgb"). However, through
application of ordinary skill in the art to the present teachings,
the method 280 may be adapted for use with other color spaces, such
as HSL (Hue, Saturation, Lightness), HSV (Hue, Saturation, Value),
and the like.
[0017] In decision block 284 (see FIG. 2), the noise decay module
230 determines whether the root image 240 has linearized gamma
values. In other words, has the root image 240 not yet been gamma
corrected? The decision in decision block 284 (see FIG. 2) is
"YES" when the root image 240 has linearized gamma values, meaning
the root image 240 has not yet been gamma corrected. Otherwise, the
decision in decision block 284 (see FIG. 2) is "NO."
[0018] When the decision in decision block 284 (see FIG. 2) is
"YES," the noise decay module 230 advances to block 288 (see FIG.
2). On the other hand, when the decision in decision block 284 (see
FIG. 2) is "NO," in block 286 (see FIG. 2), the noise decay module
230 remaps the root image 240 to linear gamma (e.g., using a shader
or a lookup table). For example, if the root pixels are in the SRGB
color space and the RGB values (R.sub.srgb, G.sub.srgb, and
B.sub.srgb) are scaled to range from 0 to 1, the following formulas
may be used to obtain the linear RGB values (R.sub.linear,
G.sub.linear, and B.sub.linear) for each root pixel in the root
image 240:
$$R_{\mathrm{linear}} = \begin{cases} \dfrac{R_{\mathrm{srgb}}}{12.92}, & R_{\mathrm{srgb}} \le 0.04045 \\[4pt] \left(\dfrac{R_{\mathrm{srgb}} + 0.055}{1.055}\right)^{2.4}, & R_{\mathrm{srgb}} > 0.04045 \end{cases} \tag{Eq. 1R}$$

$$G_{\mathrm{linear}} = \begin{cases} \dfrac{G_{\mathrm{srgb}}}{12.92}, & G_{\mathrm{srgb}} \le 0.04045 \\[4pt] \left(\dfrac{G_{\mathrm{srgb}} + 0.055}{1.055}\right)^{2.4}, & G_{\mathrm{srgb}} > 0.04045 \end{cases} \tag{Eq. 1G}$$

$$B_{\mathrm{linear}} = \begin{cases} \dfrac{B_{\mathrm{srgb}}}{12.92}, & B_{\mathrm{srgb}} \le 0.04045 \\[4pt] \left(\dfrac{B_{\mathrm{srgb}} + 0.055}{1.055}\right)^{2.4}, & B_{\mathrm{srgb}} > 0.04045 \end{cases} \tag{Eq. 1B}$$
Then, the noise decay module 230 advances to block 288 (see FIG.
2).
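As an illustrative sketch (not part of the patent text), the Eq. 1 remapping can be written in a few lines of Python; the function names are hypothetical:

```python
def srgb_to_linear(c):
    """Map one sRGB channel value, scaled to 0..1, to linear light (Eq. 1)."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linearize_pixel(rgb):
    """Apply Eq. 1 to each channel of an (R, G, B) tuple of sRGB values."""
    return tuple(srgb_to_linear(c) for c in rgb)
```

In practice this per-channel function would be applied by a shader or baked into a lookup table, as the paragraph above notes.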
[0019] At this point, the noise decay module 230 processes each
root pixel of the root image 240 one at a time. Thus, in block 288
(see FIG. 2), the noise decay module 230 selects one of the root
pixels.
[0020] Then, in block 290 (see FIG. 2), the noise decay module 230
calculates a perceptual luminance ("p") for the selected root
pixel. The perceptual luminance ("p") may be calculated by first
calculating a relative luminance ("Y") for the selected root pixel.
The relative luminance ("Y") refers to the brightness of the
selected root pixel.
[0021] The relative luminance ("Y") of a particular pixel may be
calculated using the following function in which a variable "s"
represents the three linearized RGB color values (R.sub.linear,
G.sub.linear, and B.sub.linear) of the particular pixel expressed
as an RGB vector:
$$Y = \mathrm{dot}(s, \mathrm{vec3}(0.2126, 0.7152, 0.0722))$$

$$Y = [R_{\mathrm{linear}}, G_{\mathrm{linear}}, B_{\mathrm{linear}}] \cdot [0.2126, 0.7152, 0.0722] = (R_{\mathrm{linear}} \times 0.2126) + (G_{\mathrm{linear}} \times 0.7152) + (B_{\mathrm{linear}} \times 0.0722) \tag{Eq. 2}$$
Using the above equation, the relative luminance ("Y") may be
calculated for each pixel in a two-dimensional region of the root
image 240 centered at the selected root pixel. For example, the
region may be three pixels by three pixels. In this example, the
selected root pixel may be characterized as being an origin of the
region (which includes the root pixel and its eight surrounding
neighbors) and assigned a coordinate value of (0, 0). Thus, a
separate relative luminance value may be calculated for each of the
eight root pixels neighboring the selected root pixel as well as
for the selected root pixel. In this example, the following set of
nine relative luminance values would be calculated: Y.sub.(-1,-1),
Y.sub.(-1,0), Y.sub.(-1,1), Y.sub.(0,-1), Y.sub.(0,0), Y.sub.(0,1),
Y.sub.(1,-1), Y.sub.(1,0), and Y.sub.(1,1). Then, these relative
luminance values may be combined to determine the relative
luminance ("Y") of the selected root pixel. For example, an average
or a median of the relative luminance values may be calculated and
used as the relative luminance ("Y") of the selected root
pixel.
[0022] If the color values of the selected root pixel (represented
by the RGB vector "s") are linear, the perceptual luminance ("p")
of the selected root pixel equals the relative luminance ("Y") of
the selected root pixel. Otherwise, the relative luminance ("Y")
may be linearized to obtain the perceptual luminance ("p") using
the following formula:
$$p = \left(\frac{Y + 0.055}{1.055}\right)^{2.4} \tag{Eq. 3}$$
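The computation of block 290 (Eqs. 2 and 3, using the median of a 3-by-3 region) can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the helper names and the `already_linear` flag are hypothetical:

```python
import statistics

LUMA_WEIGHTS = (0.2126, 0.7152, 0.0722)  # the Eq. 2 coefficients

def relative_luminance(rgb):
    """Eq. 2: dot product of an RGB vector with the luminance weights."""
    return sum(c * w for c, w in zip(rgb, LUMA_WEIGHTS))

def perceptual_luminance(region, already_linear=True):
    """Median relative luminance of the nine region pixels.

    `region` is an iterable of nine (R, G, B) tuples: the selected
    root pixel and its eight neighbors. When the color values are not
    linear, Eq. 3 remaps the median to perceptual luminance.
    """
    y = statistics.median(relative_luminance(p) for p in region)
    return y if already_linear else ((y + 0.055) / 1.055) ** 2.4
```

Because the three weights sum to 1.0, a uniform gray region yields a perceptual luminance equal to the gray level itself.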
[0023] The perceptual luminance ("p") in the RGB color space may be
used by the method 280 (see FIG. 2) for two reasons. First, the
human eye is vastly more sensitive to green than any other color
and the RGB perceptual luminance easily accounts for this
sensitivity. Second, digital image sensors (e.g., included in the
camera 204) that include an RGB color filter array ("CFA")
configuration produce green channels that are lower in noise than
their red and blue counterparts. By using the perceptual luminance
("p") to determine chrominance decay (or desaturate the root image
240), the method 280 (see FIG. 2) spares (or causes less decay in)
higher-quality green-dominant colors in the root image 240.
[0024] Next, in block 292 (see FIG. 2), the noise decay module 230
creates a linear monochromatic RGB vector by setting the value of
each of the R, G, and B elements of the linear monochromatic RGB
vector equal to the perceptual luminance ("p").
linear monochromatic RGB vector=[p, p, p] Eq. 4
[0025] In block 294 (see FIG. 2), the noise decay module 230
multiplies the linear monochromatic RGB vector by a
relative-luminance weighted saturation bias ("o") to obtain a
biased monochromatic RGB vector.
biased monochromatic RGB vector=[o*p, o*p, o*p] Eq. 5
The relative-luminance weighted saturation bias ("o") may be
calculated using the following formula:
$$o = 0.16667 \times \ln(p) + 1.0 \tag{Eq. 6}$$
[0026] In block 296 (see FIG. 2), the noise decay module 230
generates a new pixel of the denoised image 250 with new
(desaturated) color values by blending the biased monochromatic RGB
vector ([o*p, o*p, o*p]) with the RGB vector ([R.sub.linear,
G.sub.linear, B.sub.linear]) of the selected root pixel. In other
words, the biased monochromatic RGB vector is multiplied by a first
weight and the RGB vector is multiplied by a second weight wherein
the first and second weights total one. The new color values are
less saturated than the original color values associated with the
selected root pixel. In particular, dim or less bright areas are
more desaturated than brighter areas. Thus, the method 280 may be
characterized as desaturating the selected root pixel and/or
applying a weighted saturation to the selected root pixel.
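Blocks 292 through 296 (Eqs. 4, 5, and 6) can be sketched together as one function. This is an assumption-laden illustration: the patent only requires the two blend weights to total one, so the 0.5 default is assumed, and the small floor on p (to guard the natural log at zero) is a guard the patent does not address:

```python
import math

def decay_pixel(rgb_linear, p, blend_weight=0.5):
    """Bias a monochromatic vector and blend it with the root pixel.

    Eq. 6 gives the bias o = 0.16667 * ln(p) + 1.0; Eq. 5 gives the
    biased monochromatic vector [o*p, o*p, o*p]; the new pixel is a
    convex blend of that vector with the pixel's linear RGB vector.
    """
    p = max(p, 1e-6)                  # assumed guard against ln(0)
    o = 0.16667 * math.log(p) + 1.0   # Eq. 6 (natural log)
    biased = o * p                    # each element of the Eq. 5 vector
    w2 = 1.0 - blend_weight
    return tuple(blend_weight * biased + w2 * c for c in rgb_linear)
```

Note the behavior at the extremes: a white pixel (p = 1) has ln(p) = 0, so the bias is 1.0 and the pixel passes through unchanged, while darker pixels receive a bias below 1.0 and are pulled toward a dimmer gray, which is the progressive desaturation the paragraph above describes.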
[0027] Next, in decision block 298 (see FIG. 2), the noise decay
module 230 determines whether all of the root pixels of the root
image 240 have been selected in block 288 (see FIG. 2). The
decision in decision block 298 (see FIG. 2) is "YES," when the
noise decay module 230 has not yet selected all of the root pixels.
When the decision in decision block 298 (see FIG. 2) is "YES," the
noise decay module 230 returns to block 288 and selects a next root
pixel from the root image 240.
[0028] On the other hand, the decision in decision block 298 (see
FIG. 2) is "NO," when the noise decay module 230 has selected all
of the root pixels. When the decision in decision block 298 (see
FIG. 2) is "NO," the method 280 (see FIG. 2) terminates.
[0029] At this point, a new pixel has been generated for each of
the root pixels. Combined, the new pixels define the denoised image
250. Optionally, the denoised image 250 may be remapped to a
different color space. For example, the linear RGB values may be
remapped to the sRGB color space. The denoised image 250 may be
subject to one or more additional operations, such as Gamma curve
remapping, luma curve augmentation (shadow/highlight repair),
histogram equalization, additional spatial denoising, RGB mixing, and
lookup table application. Optionally, the denoised image 250 may be
displayed to the user using the display 210.
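The optional remap back to sRGB mentioned above is the inverse of the Eq. 1 transfer. The sketch below uses the conventional sRGB encoding formula, which the patent does not spell out:

```python
def linear_to_srgb(c):
    """Inverse of Eq. 1: map a linear-light channel value back to sRGB."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * (c ** (1.0 / 2.4)) - 0.055
```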
[0030] The method 280 (see FIG. 2) desaturates the root image 240
(or linear bitmap) using the perceptual luminance ("p") assigned to
each root pixel to reduce or minimize chroma noise in critically
underexposed (or dark) areas of the root image 240. Darker regions
are desaturated more than lighter areas, which may be characterized
as progressively desaturating the very darkest pixels (where
chrominance typically decomposes in low bit-depth images).
[0031] Referring to FIG. 2, the method 280 does not evaluate
high-frequency chrominance of either the root pixel selected in
block 288 or its neighborhood. Instead, the method 280 assumes that
the occurrence of chrominant anomalies (or chroma noise)
progressively increases as the perceptual luminance ("p") of the
selected root-pixel (or its neighborhood) approaches zero.
Therefore, the method 280 evaluates only the perceptual luminance
("p") of the selected root pixel (which may be the median relative
luminance of its spatial neighborhood). The visual reduction of
chrominance noise in darker sectors of the root image 240 is an
incidental byproduct of the progressive desaturation process.
[0032] The method 280 decays the chrominance of the root image 240
and generates the denoised image 250 within the gamut of the
original color space (e.g., the sRGB color space) of the root image
240.
Mobile Communication Device
[0033] FIG. 3 is a functional block diagram illustrating a mobile
communication device 140. The mobile communication device 140 may
be implemented as a cellular telephone, smart phone, a tablet
computing device, a self-contained camera module (e.g., a wired web
camera or an Action Camera module), and the like. By way of a
non-limiting example, the mobile communication device 140 may be
implemented as a smartphone executing iOS or Android OS. The mobile
communication device 140 may be configured to capture the digital
video 203 (see FIG. 1) and process the digital video 203 as an RTMP
video stream.
[0034] The mobile communication device 140 includes the CPU 150.
Those skilled in the art will appreciate that the CPU 150 may be
implemented as a conventional microprocessor, application specific
integrated circuit (ASIC), digital signal processor (DSP),
programmable gate array (PGA), or the like. The mobile
communication device 140 is not limited by the specific form of the
CPU 150.
[0035] The mobile communication device 140 also contains the memory
152. The memory 152 may store instructions and data to control
operation of the CPU 150. The memory 152 may include random access
memory, read-only memory, programmable memory, flash memory, and
the like. The mobile communication device 140 is not limited by any
specific form of hardware used to implement the memory 152. The
memory 152 may also be integrally formed in whole or in part with
the CPU 150.
[0036] The mobile communication device 140 also includes
conventional components, such as a display 154 (e.g., operable to
display the denoised image 250), the camera or video capture device
158, and keypad or keyboard 156. These are conventional components
that operate in a known manner and need not be described in greater
detail. Other conventional components found in wireless
communication devices, such as USB interface, Bluetooth interface,
infrared device, and the like, may also be included in the mobile
communication device 140. For the sake of clarity, these
conventional elements are not illustrated in the functional block
diagram of FIG. 3.
[0037] The mobile communication device 140 also includes a network
transmitter 162 such as may be used by the mobile communication
device 140 for normal network wireless communication with a base
station (not shown). FIG. 3 also illustrates a network receiver 164
that operates in conjunction with the network transmitter 162 to
communicate with the base station (not shown). In a typical
embodiment, the network transmitter 162 and network receiver 164
are implemented as a network transceiver 166. The network
transceiver 166 is connected to an antenna 168. Operation of the
network transceiver 166 and the antenna 168 for communication with
a wireless network (not shown) is well-known in the art and need
not be described in greater detail herein.
[0038] The mobile communication device 140 may also include a
conventional geolocation module (not shown) operable to determine
the current location of the mobile communication device 140.
[0039] The various components illustrated in FIG. 3 are coupled
together by the bus system 186. The bus system 186 may include an
address bus, data bus, power bus, control bus, and the like. For
the sake of convenience, the various busses in FIG. 3 are
illustrated as the bus system 186.
[0040] The memory 152 may store instructions executable by the CPU
150. The instructions may implement portions of one or more of the
methods described above (e.g., the method 280 illustrated in FIG.
2). Such instructions may be stored on one or more non-transitory
computer or processor readable media.
[0041] The foregoing described embodiments depict different
components contained within, or connected with, different other
components. It is to be understood that such depicted architectures
are merely exemplary, and that in fact many other architectures can
be implemented which achieve the same functionality. In a
conceptual sense, any arrangement of components to achieve the same
functionality is effectively "associated" such that the desired
functionality is achieved. Hence, any two components herein
combined to achieve a particular functionality can be seen as
"associated with" each other such that the desired functionality is
achieved, irrespective of architectures or intermedial components.
Likewise, any two components so associated can also be viewed as
being "operably connected," or "operably coupled," to each other to
achieve the desired functionality.
[0042] While particular embodiments of the present invention have
been shown and described, it will be obvious to those skilled in
the art that, based upon the teachings herein, changes and
modifications may be made without departing from this invention and
its broader aspects and, therefore, the appended claims are to
encompass within their scope all such changes and modifications as
are within the true spirit and scope of this invention.
Furthermore, it is to be understood that the invention is solely
defined by the appended claims. It will be understood by those
within the art that, in general, terms used herein, and especially
in the appended claims (e.g., bodies of the appended claims) are
generally intended as "open" terms (e.g., the term "including"
should be interpreted as "including but not limited to," the term
"having" should be interpreted as "having at least," the term
"includes" should be interpreted as "includes but is not limited
to," etc.). It will be further understood by those within the art
that if a specific number of an introduced claim recitation is
intended, such an intent will be explicitly recited in the claim,
and in the absence of such recitation no such intent is present.
For example, as an aid to understanding, the following appended
claims may contain usage of the introductory phrases "at least one"
and "one or more" to introduce claim recitations. However, the use
of such phrases should not be construed to imply that the
introduction of a claim recitation by the indefinite articles "a"
or "an" limits any particular claim containing such introduced
claim recitation to inventions containing only one such recitation,
even when the same claim includes the introductory phrases "one or
more" or "at least one" and indefinite articles such as "a" or "an"
(e.g., "a" and/or "an" should typically be interpreted to mean "at
least one" or "one or more"); the same holds true for the use of
definite articles used to introduce claim recitations. In addition,
even if a specific number of an introduced claim recitation is
explicitly recited, those skilled in the art will recognize that
such recitation should typically be interpreted to mean at least
the recited number (e.g., the bare recitation of "two recitations,"
without other modifiers, typically means at least two recitations,
or two or more recitations).
[0043] Accordingly, the invention is not limited except as by the
appended claims.
* * * * *