U.S. patent application number 13/245,941 was filed with the patent office on September 27, 2011, and published on January 24, 2013, for regional image processing in an image capture device.
This patent application is currently assigned to BROADCOM CORPORATION. The applicants listed for this patent are Gordon (Chong Ming Gordon) Lee, David Plowman, Benjamin Sewell, and Efrat Swissa. Invention is credited to Gordon (Chong Ming Gordon) Lee, David Plowman, Benjamin Sewell, and Efrat Swissa.

Publication Number: 20130021489
Application Number: 13/245,941
Family ID: 47555520
Publication Date: January 24, 2013

United States Patent Application 20130021489
Kind Code: A1
Sewell, Benjamin; et al.
January 24, 2013
Regional Image Processing in an Image Capture Device
Abstract
Disclosed are various embodiments of applying image processing
techniques in an image capture device. Regions of an image can be
isolated and a respective region type identified. The image capture
device can apply various image processing techniques to various
regions of the image based at least upon a region type that is
identified for the various regions.
Inventors: Sewell, Benjamin (Truro, GB); Plowman, David (Great Chesterfield, GB); Lee, Gordon (Chong Ming Gordon) (Cambridge, GB); Swissa, Efrat (Pittsburgh, PA, US)

Applicants:
Sewell, Benjamin (Truro, GB)
Plowman, David (Great Chesterfield, GB)
Lee, Gordon (Chong Ming Gordon) (Cambridge, GB)
Swissa, Efrat (Pittsburgh, PA, US)

Assignee: BROADCOM CORPORATION (Irvine, CA)

Family ID: 47555520
Appl. No.: 13/245,941
Filed: September 27, 2011
Related U.S. Patent Documents

Application Number: 61/509,747
Filing Date: Jul 20, 2011
Current U.S. Class: 348/222.1; 348/E5.031; 382/103
Current CPC Class: G06T 2207/10016 20130101; H04N 5/23267 20130101; H04N 19/436 20141101; G06T 5/50 20130101; G06T 5/003 20130101; H04N 19/192 20141101; H04N 19/54 20141101; H04N 19/56 20141101; G06T 2207/20201 20130101; H04N 5/23254 20130101; H04N 19/139 20141101
Class at Publication: 348/222.1; 382/103; 348/E05.031
International Class: H04N 5/228 20060101 H04N005/228; G06K 9/00 20060101 G06K009/00
Claims
1. An image capture device, comprising: at least one image sensor;
and an application executed in the image capture device, the
application comprising: logic that initiates capture of at least
one image via the at least one image sensor associated with the
image capture device; logic that isolates at least one region of
the image; logic that identifies a region type associated with the
at least one region; logic that applies at least one image
processing technique to the at least one region of the image based
at least upon a preconfigured image processing configuration
associated with the region type.
2. The image capture device of claim 1, wherein the logic that isolates
the at least one region of the image further comprises: logic that
identifies at least one object in the image; and logic that
designates the at least one object as at least one region of the
image.
3. The image capture device of claim 2, wherein the image capture
device is configured to capture a plurality of images associated
with a plurality of frames of a video, and the image capture
application further comprises: logic that tracks the object in
subsequent frames of the video; and logic that applies the image
processing technique associated with the region type to the at least
one region in the subsequent frames.
4. The image capture device of claim 1, wherein the logic that
isolates the at least one region of the image further comprises:
logic that performs at least one edge recognition algorithm on the
image; and logic that extracts at least one region of the image
associated with at least one edge identified in the image.
5. The image capture device of claim 1, wherein the logic that
identifies the region type associated with the at least one region
further comprises performing at least one image recognition
algorithm on the at least one region, the image recognition
algorithm configured to determine whether the at least one region
corresponds to a region type specified by a region library
accessible to the image capture device.
6. The image capture device of claim 5, wherein the region library
comprises at least one signature associated with a respective known
region type, the at least one signature comprising at least one
parameter uniquely associated with the respective known region
type.
7. The image capture device of claim 6, wherein the logic that
identifies the region type associated with the at least one region
further comprises: logic that generates a respective signature
associated with the at least one region; and logic that determines
whether the respective signature is within a predetermined range of
the at least one signature associated with the respective known
region type.
8. The image capture device of claim 1, wherein the logic that
applies the at least one image processing technique to the at least
one region of the image further comprises recording the at least
one image processing technique to metadata associated with the
image.
9. The image capture device of claim 1, wherein the logic that
identifies the region type associated with the at least one region
further comprises: logic that generates a confidence score
associated with identification of the region type, the confidence
score corresponding to a confidence level of the identification;
and logic that adjusts a level associated with the at least one
image processing technique associated with the region type based at
least upon the confidence score.
10. The image capture device of claim 1, wherein the region type is
one of: a landscape region, a portrait region, a low light region,
a fireworks region, a backlight region, a sky region, a high motion
region, and a facial region.
11. The image capture device of claim 1, wherein the logic that
identifies a region type associated with the at least one region
further comprises: logic that identifies a first region type
associated with a first region; logic that identifies a second
region type associated with a second region; logic that applies a
first image processing technique to the first region, the first
image processing technique associated with the first region type;
and logic that applies a second image processing technique to the
second region, the second image processing technique associated
with the second region type.
12. A method, comprising the steps of: capturing, in an image
capture device, an image via an image sensor associated with the
image capture device; isolating, in the image capture device, at
least one region of the image; identifying, in the image capture
device, a region type associated with the at least
one region; and applying, in the image capture device, at least one
image processing technique to the at least one region of the image
based at least upon a preconfigured image processing configuration
associated with the region type.
13. The method of claim 12, wherein the step of isolating the at
least one region of the image further comprises: identifying at
least one object in the image; extracting the object from the
image; and designating the object as at least one region of the
image.
14. The method of claim 12, wherein the step of isolating at least
one region of the image further comprises: performing at least one
edge recognition algorithm on the image; and extracting at least
one region of the image associated with at least one edge
identified in the image.
15. The method of claim 12, wherein the step of identifying the
region type associated with the at least one region further
comprises performing at least one image recognition algorithm on
the at least one region, the image recognition algorithm configured
to determine whether the at least one region corresponds to a region
type specified by a region library accessible to the image capture
device.
16. The method of claim 15, wherein the region library comprises at
least one signature associated with a respective known region type,
the at least one signature comprising at least one parameter
uniquely associated with the respective known region type.
17. The method of claim 12, wherein the step of applying the at
least one image processing technique to the at least one region of
the image further comprises recording the at least one image
processing technique to metadata associated with the image.
18. The method of claim 12, wherein the step of identifying a
region type associated with the at least one region further
comprises: generating a confidence score associated with
identification of the region type, the confidence score
corresponding to a confidence level of the identification; and
adjusting a level associated with the at least one image processing
technique associated with the region type based at least upon the
confidence score.
19. The method of claim 12, wherein the step of identifying a
region type associated with the at least one region further
comprises: identifying a first region type associated with a first
region; identifying a second region type associated with a second
region; applying a first image processing technique to the first
region, the first image processing technique associated with the
first region type; and applying a second image processing technique
to the second region, the second image processing technique
associated with the second region type.
20. A system, comprising: means for capturing an image via an image
sensor associated with an image capture device; means for
isolating at least one region of the image; means for identifying a
region type associated with the at least one region; and means for
applying at least one image processing technique to the at least
one region of the image based at least upon a preconfigured image
processing configuration associated with the region type.
Description
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to co-pending U.S.
provisional application entitled, "Image Capture Device Systems and
Methods," having Ser. No. 61/509,747, filed Jul. 20, 2011, which is
entirely incorporated herein by reference.
BACKGROUND
[0002] Image capture devices (e.g., still cameras, video cameras,
etc.) can apply various image processing techniques. These
techniques can be applied globally or, in other words, to an entire
image. Images captured by an image capture device can often contain
various objects and/or subjects such that application of a single
image processing technique to the entirety of the image can result
in a less than desirable result.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] Many aspects of the invention can be better understood with
reference to the following drawings. The components in the drawings
are not necessarily to scale, emphasis instead being placed upon
clearly illustrating the principles of the present invention.
Moreover, in the drawings, like reference numerals designate
corresponding parts throughout the several views.
[0004] FIGS. 1A-1B are drawings of a mobile device incorporating an
image capture device according to various embodiments of the
disclosure.
[0005] FIG. 2 is a drawing of an image capture device that can be
incorporated into the mobile device of FIG. 1 according to various
embodiments of the disclosure.
[0006] FIGS. 3-4 are drawings of an example image that can be
captured and processed by an image capture device of FIG. 2
according to various embodiments of the disclosure.
[0007] FIG. 5 is a flowchart depicting one example of a process
that can be executed in an image capture device according to
various embodiments of the disclosure.
DETAILED DESCRIPTION
[0008] Embodiments of the present disclosure relate to systems and
methods that can be executed in an image capture device. More
specifically, embodiments of the present disclosure further
comprise tailored regional image processing techniques applied to
various regions of an image based at least upon an identification
and characterization of various image elements and/or objects that
can be isolated within the captured imagery and/or video. An image
capture device can include a camera, video camera, a mobile device
with an integrated image capture device, or other devices suitable
for capturing imagery and/or video as can be appreciated. In some
embodiments, an image capture device according to an embodiment of
the disclosure can include a device such as a smartphone, tablet
computing system, laptop computer, desktop computer, or any other
computing device that has the capability to receive and/or capture
imagery via image capture hardware.
[0009] Accordingly, image capture device hardware can include
components such as lenses, image sensors (e.g., charge coupled
devices, CMOS image sensor, etc.), processor(s), image signal
processor(s), a main processor, memory, mass storage, or any other
hardware or software components that can facilitate capture of
imagery and/or video. In some embodiments, an image signal
processor can be incorporated as a part of a main processor in an
image capture device module that is in turn incorporated into a
device having its own processor, memory and other components.
[0010] An image capture device according to an embodiment of the
disclosure can provide a user interface via a display that is
integrated into the image capture device. The display can be
integrated with a mobile device, such as a smartphone and/or tablet
computing device, and can include a touchscreen input device (e.g.,
a capacitive touchscreen, etc.) with which a user may interact with
the user interface that is presented thereon. The image capture
device hardware can also include one or more buttons, dials,
toggles, switches, or other input devices with which the user can
interact with software executed in the image capture device.
[0011] Referring now to the drawings, FIGS. 1A-1B show a mobile
device 102 that can comprise and/or incorporate an image capture
device according to various embodiments of the disclosure. The
mobile device 102 may comprise, for example, a processor-based
system, such as a computer system. Such a computer system may be
embodied in the form of a desktop computer, a laptop computer, a
personal digital assistant, a mobile device (e.g., cellular
telephone, smart phone, etc.), tablet computing system, set-top
box, music players, or other devices with like capability. The
mobile device can include, for example, an image capture device
104, which can further include a lens system as well as other
hardware components that can be integrated with the device to
facilitate image capture. The mobile device 102 can also include a
display device 141 upon which various content and other user
interfaces may be rendered. The mobile device 102 can also include
one or more input devices with which a user can interact with a
user interface rendered on the display device 141. For example, the
mobile device 102 can include or be in communication with a mouse,
touch input device (e.g., capacitive and/or resistive touchscreen
incorporated with the display device 141), keyboard, or other input
devices.
[0012] The mobile device 102 may be configured to execute various
applications, such as a camera application that can interact with
an image capture module that includes various hardware and/or
software components that facilitate capture and/or storage of
images and/or video. In one embodiment, the camera application can
interact with application programming interfaces (API's) and/or
other software libraries and/or drivers that are provided for the
purpose of interacting with image capture hardware, such as the lens
system and other image capture hardware. The camera application can
be a special purpose application, a plug-in or executable library,
one or more API's, image control algorithms, image capture device
firmware, or other software that can facilitate communication with
image capture hardware in communication with the mobile device 102.
Accordingly, a camera application according to embodiments of the
present disclosure can capture imagery and/or video via the various
image capture hardware as well as facilitate storage of the
captured imagery and/or video in memory and/or mass storage
associated with the mobile device 102.
[0013] FIG. 2 illustrates an embodiment of the various image
capture components, or one example of an image capture device 104,
that can be incorporated in the mobile device 102 illustrated in
FIGS. 1A-1B. Although one implementation is shown in FIG. 2 and
described herein, an image capture device according to an
embodiment of the disclosure more generally comprises an image
capture device that can provide images in digital form.
[0014] The image capture device 104 includes a lens system 200 that
conveys images of viewed scenes to an image sensor 202. By way of
example, the image sensor 202 comprises a charge-coupled device
(CCD) or a complementary metal oxide semiconductor (CMOS) sensor
that is driven by one or more sensor drivers 204. The analog image
signals captured by the sensor 202 are provided to an
analog-to-digital (A/D) converter 206 for conversion into binary
code that can be processed by a processor 208. The processor can
also execute a regional image processing application 151 that can
carry out the regional image processing discussed herein. In some
embodiments, the regional image processing application 151 can take
the form of API's, control algorithms, or other software accessible
to the image capture device 104 and/or a mobile device 102 or other
system in which the image capture device 104 is integrated.
[0015] Operation of the sensor driver(s) 204 is controlled through
a camera controller 210 that is in bi-directional communication
with the processor 208. In some embodiments, the controller 210 can
control one or more motors 212 that are used to drive the lens
system 200 (e.g., to adjust focus, zoom, and/or aperture settings).
The controller 210 can also communicate with a flash system, user
input devices (e.g., buttons, dials, toggles, etc.) or other
components associated with the image capture device 104. Operation
of the camera controller 210 may be adjusted through manipulation
of a user interface. A user interface comprises the various
components used to enter selections and commands into the image
capture device 104 and therefore can include various buttons as
well as a menu system that, for example, is displayed to the user
in, for example, a camera application executed on a mobile device
102 and/or on a back panel associated with a standalone digital
camera.
[0016] The digital image signals are processed in accordance with
instructions from an image signal processor 218 stored in permanent
(non-volatile) device memory. Processed (e.g., compressed) images
may then be stored in storage memory, such as that contained within
a removable solid-state memory card (e.g., Flash memory card). The
embodiment shown in FIG. 2 further includes a device interface 224
through which the image capture device 104 can communicate with a
mobile device or other computing system in which it may be
integrated. For example, the device interface 224 can allow the
image capture device to communicate with a main processor
associated with a mobile device as well as memory, mass storage, or
other resources associated with the mobile device. The device
interface 224 can communicate with a mobile device in various
communications protocols, and this communication can be
facilitated, at a software level, by various device drivers,
libraries, API's or other software associated with the image
capture device 104 that is executed in the mobile device.
[0017] An image capture device (e.g., camera, mobile device with
integrated camera, etc.) and/or processing system can be configured
with tailored regional processing that is based at least upon an
identification and characterization of various image elements.
Image and video adjustments associated with prior art image capture
systems (e.g., post processing outside of the camera or image
capture device) are often applied to the entirety of an image. For
example, adjusting brightness, tone, color intensity, contrast,
gamma, etc., or other aspects of an image generally involves
application of such an adjustment to the entire image or sequence
of frames in prior art systems. The following drawings illustrate
various examples of logic that can, alone or in combination, be
implemented in an image capture device.
[0018] An image capture device according to an embodiment of the
disclosure can apply various image processing techniques to various
regions of an image that can be associated with a particular region
type. The identification and characterization of regions within
captured imagery as well as application of various image processing
techniques can be accomplished via software executed by the
processor 208, the ISP 218 as well as a processor associated with a
device in communication with the image capture device 104. It
should be appreciated that the specific implementation and/or
embodiments disclosed herein are merely examples.
[0019] Accordingly, reference is now made to FIG. 3, which
illustrates an example image that can be captured by the image
capture device 104 (FIG. 2) according to various embodiments of the
disclosure. In the depicted non-limiting examples
of FIGS. 3-4, the image capture device 104 is incorporated into a
mobile device 102, which can execute a camera application that
renders a user interface for display on a display device associated
with the mobile device 102. It should be appreciated that this is
only one non-limiting illustrative implementation. FIG. 3
additionally illustrates how an image capture device according to
the disclosure can apply regional image processing techniques to
potentially enhance the visual appeal of an image and/or video
captured by the image capture device. Prior art image capture
devices may identify a scene represented by an image and apply an
image processing technique globally (e.g., to the image in its
entirety). Accordingly, embodiments of the disclosure can identify
various regions in an image, characterize the region by associating
the region with a known region type, and selectively apply image
processing techniques to the various identified regions based at
least upon their region types.
[0020] Therefore, FIG. 3 illustrates an example of an image 300
that can be captured by the image capture device. As one example,
the image 300 can be captured via a camera application executed on
a mobile device where the camera application is configured to
communicate with API's associated with the image capture device for
the purposes of initiating capture of imagery, display of imagery
on a display of the mobile device as well as storage of captured
imagery in the form of still images and/or video in memory or mass
storage associated with the mobile device. The example image 300
can include various objects and/or regions that can be identified
by employing image recognition, pattern recognition, and other
techniques that can be utilized to identify and/or isolate certain
regions from others within the image 300.
[0021] FIG. 4 continues the example of FIG. 3 by illustrating an
example of the various regions that can be identified in an image
captured by an image capture device. In the example of FIG. 4, the
image capture device can initiate an analysis of the image 300 upon
capture. The image capture device can identify various regions 302,
304, 306, 308, 310 of the image by employing various image
recognition and/or pattern recognition techniques. In one
embodiment, the image capture device can identify edges that are
depicted in the image and identify the region of the image within
the various edges as a region. The image capture device can also
analyze color properties, lighting properties, or any other
properties of the various portions of an image to isolate the
various regions. Accordingly, the image capture device can maintain
or reference a region library that defines the various properties
associated with known region types. For example, the region library
can specify various parameters or parameter ranges, which can
include color, shape, size, etc., that correspond to various known
region types. As another example, the image capture device can
calculate a signature corresponding to the region and determine
whether the calculated signature corresponds to or is within a
range of a signature specified by the region library.
[0022] The region library can specify parameters and/or signatures
corresponding to various region types, which can include, but are
not limited to, a landscape region, a portrait region, a low light
region, a fireworks region, a backlight region, a high motion
region, a facial region, or any other region for which various
image parameters and/or ranges of parameters can be defined.
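The region library described above can be thought of as a mapping from known region types to parameter ranges. The following is a minimal illustrative sketch, not the patent's implementation: the parameter names, ranges, and region types here are assumptions chosen for the example.

```python
# Hypothetical region library: each known region type maps to named
# parameter ranges (all names and numeric ranges are illustrative).
REGION_LIBRARY = {
    "sky":    {"mean_hue": (180, 260), "brightness": (120, 255)},
    "facial": {"mean_hue": (0, 50),    "brightness": (60, 220)},
}

def classify_region(params, library=REGION_LIBRARY):
    """Return the first region type whose parameter ranges all contain
    the measured parameters, or None when no known type matches."""
    for region_type, ranges in library.items():
        if all(lo <= params.get(name, float("-inf")) <= hi
               for name, (lo, hi) in ranges.items()):
            return region_type
    return None
```

A region whose measured parameters fall inside every range for a type would be characterized as that type; anything else remains unclassified, mirroring the "within a certain range" test in the text.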
[0023] In the depicted example, the image capture device can employ
facial recognition algorithms to isolate and/or determine whether a
region in the image 300 corresponds to a human face. In the
depicted example, the image capture device can determine whether
region 302 corresponds to a human face by analyzing its relative
size, color, shape, and other properties as can be appreciated.
Accordingly, the image capture device can associate region 302 with
a region type corresponding to a human face or head. The image
capture device can employ the various image recognition techniques
to determine whether a portion of the image corresponds to a
background and/or sky region type. In the depicted example, the
image capture device can determine whether region 304 corresponds
to a set of parameters and/or parameter ranges specified by a region
library as associated with a sky. The image capture device can also
calculate a confidence score that is based at least upon how
closely a region isolated in the image 300 matches the parameters
associated with a known region type specified by the region
library. In other words, the image capture device can isolate the
various regions of an image and characterize certain regions as a
known region type if the parameters specified by the region library
are within a certain range.
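The confidence score described above could be derived from how closely a region's measured parameters match a library signature. The patent does not specify a formula, so the normalized-distance metric below is purely an illustrative assumption.

```python
def confidence_score(params, signature):
    """Toy confidence metric: 1.0 when the measured parameters match
    the library signature exactly, decreasing toward 0.0 as they
    diverge. (Illustrative only; no formula is given in the text.)"""
    distance = sum(abs(params[k] - signature[k]) for k in signature)
    scale = sum(abs(signature[k]) for k in signature) or 1.0
    return max(0.0, 1.0 - distance / scale)
```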
[0024] Similarly, the image capture device can isolate other
regions 306, 308, 310 as well as other regions that correspond to known
region types in a region library. In various embodiments, the
region library can be stored in memory associated with a mobile
device with which the image capture device is integrated, in memory
associated with the image capture device, hard coded into the
processor and/or ISP of the image capture device, or provided in
other ways as can be appreciated.
[0025] Upon identification of region types associated with the
various regions of the image, the image capture device can apply
various image processing techniques that can be associated with the
region types. Image processing techniques can include, but are not
limited to, adjusting color levels, sharpness, brightness,
contrast, or any other parameter or property associated with a
region of the image. An image processing technique can also
include, but is not limited to, the application of one or more
signal processing techniques, filters, or any other process that
receives as an input image data associated with a region and
outputs image data that is altered or modified in some form. For
example, one or more image processing techniques can be associated
with a known region type corresponding to a human face and applied
only to the region 302 corresponding to the face rather than
globally to the entire image 300. Accordingly, the image capture
device can apply smoothing, blemish removal, or other image
processing techniques to the facial region 302.
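The per-region application described above amounts to dispatching preconfigured techniques by region type instead of applying one technique globally. A minimal sketch follows; the technique functions and the mapping are hypothetical stand-ins for the device's preconfigured processing.

```python
# Hypothetical techniques; real ones would transform pixel data.
def smooth(region):  return {**region, "smoothed": True}
def enhance(region): return {**region, "color_enhanced": True}

# Preconfigured association of region types with technique chains.
TECHNIQUES = {"facial": [smooth], "sky": [enhance]}

def process_regions(regions):
    """Apply each region's preconfigured techniques only to that
    region; regions of unknown type pass through unchanged."""
    out = []
    for region in regions:
        for technique in TECHNIQUES.get(region["type"], []):
            region = technique(region)
        out.append(region)
    return out
```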
[0026] As another example, the image capture device can employ
various image recognition techniques to determine whether a portion
of the image 300 corresponds to a background or sky region. In the
depicted example, the image capture device can determine whether
region 304 corresponds to a sky region and apply image processing
techniques specific to such a region type only to the region 304.
For example, these image processing techniques can include color
enhancement, adjustment of various color levels, modifying
sharpness, contrast, application of one or more image filters, or
other image processing techniques as can be appreciated. The image
processing techniques associated with a particular region type can
be preconfigured so that the image capture device applies these
image processing techniques only to the region types that are
identified within the image 300 rather than to the entire image 300
globally. Similarly, the image capture device can also determine
whether the other regions 306, 308, 310 correspond to other region
types for which image processing techniques are defined and apply
the preconfigured image processing techniques that are associated
with the identified region types to these regions.
[0027] Additionally, as noted above, the image capture device can
calculate a confidence score that is associated with an
identification of a region type in an image. Accordingly, the image
capture device can apply the image processing techniques to an
identified region at higher levels when a confidence score
associated with identification of the region type is higher. In
other words, the image capture device can more aggressively apply
image processing techniques associated with a region type when a
confidence score reflects a high degree of confidence that
identification of a region is accurate. Additionally, in the case
of video captured by the image capture device, the image capture
device can employ the same techniques described above to each frame
associated with a video. In some embodiments, the image capture
device can apply the image processing techniques to a sampling of
frames associated with a video. Additionally, the image capture
device can employ object tracking techniques to track a particular
object throughout the various frames of a video so that the same
image processing techniques are applied to the object in the
various video frames.
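Scaling a technique's level by the identification confidence, as described above, might look like the sketch below. The cutoff threshold and linear ramp are assumptions for illustration; the patent only states that higher confidence yields more aggressive application.

```python
def technique_strength(base_level, confidence, threshold=0.5):
    """Scale an image processing technique's level by the confidence
    of the region-type identification: apply nothing below a cutoff,
    then ramp linearly up to the full base level at confidence 1.0.
    (Threshold and ramp shape are illustrative assumptions.)"""
    if confidence < threshold:
        return 0.0
    return base_level * (confidence - threshold) / (1.0 - threshold)
```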
[0028] In some embodiments, the image capture device can apply the
image processing techniques that are associated with identified
regions in an image and/or video frames by modifying the captured
image data prior to storage in memory and/or a mass storage device.
In other embodiments, the image capture device can record the image
processing techniques that are applied in metadata associated with
the image while retaining the originally captured image data. In
such a scenario, a camera application or other software generating
a user interface associated with content captured by the image
capture device can display either the originally captured image
data or the image after application of the image processing
techniques.
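The metadata-recording embodiment above can be sketched as follows: the originally captured pixel data is left untouched, and the applied techniques are recorded alongside it so a viewer can render either version. Field names here are illustrative, not from the patent.

```python
def record_techniques(image, applied):
    """Return a copy of the image with the applied techniques recorded
    in its metadata, leaving the original capture data unmodified.
    (The 'metadata'/'applied_techniques' field names are assumed.)"""
    meta = dict(image.get("metadata", {}))
    meta["applied_techniques"] = list(applied)
    return {**image, "metadata": meta}
```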
[0029] Referring next to FIG. 5, shown is a flowchart that provides
one example of the operation of a portion of a regional image
processing application 151 executed by an image capture device 104,
a mobile device 102 or any other device in which an image capture
device 104 is integrated according to various embodiments of the
disclosure. It is understood that the flowchart of FIG. 5 provides
merely an example of the many different types of functional
arrangements that may be employed to implement the operation of the
portion of logic employed by the image capture device as described
herein. As an alternative, the flowchart of FIG. 5 may be viewed as
depicting an example of steps of a method implemented in a
computing device, processor, or other circuits according to one or
more embodiments.
[0030] First, in box 501, image capture can be initiated in the
image capture device so that one or more images are captured by the
lens system, image sensor, and other image capture device hardware
as discussed above. In box 503, the image capture device can
isolate a region within the captured imagery. In box 505, the image
capture device can associate the isolated region with a region
type. If a region type can be identified, then in box 507 the image
capture device can apply image processing techniques that are
preconfigured as associated with the identified region type. In box
509, the image capture device can determine whether there are
additional regions to be processed in the captured image and repeat
the process if so.
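The flow of boxes 501 through 509 might be sketched as a simple loop; the helper callables (`isolate_regions`, `identify_region_type`, `apply`) and the technique table are hypothetical stand-ins for the device logic, not names drawn from the disclosure:

```python
# Hypothetical table mapping region types to preconfigured techniques
TECHNIQUES = {
    "face": ["skin_smoothing", "red_eye_reduction"],
    "sky": ["saturation_boost"],
    "text": ["sharpening"],
}

def process_capture(image, isolate_regions, identify_region_type, apply):
    # Box 501: image has already been captured by the device hardware.
    # Boxes 503-509: isolate each region, identify its type, apply the
    # techniques preconfigured for that type, and repeat until no
    # regions remain.
    applied = []
    for region in isolate_regions(image):           # box 503 (loop: box 509)
        region_type = identify_region_type(region)  # box 505
        if region_type in TECHNIQUES:               # box 507
            for technique in TECHNIQUES[region_type]:
                image = apply(image, region, technique)
                applied.append((region, region_type, technique))
    return image, applied
```

Regions whose type cannot be identified simply fall through the `if` test, leaving those pixels unprocessed, which matches the conditional branch out of box 505.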
[0031] Embodiments of the present disclosure can be implemented in
various devices having, for example, a processor and memory, as
well as image capture hardware, that can be coupled to a local
interface.
The logic described herein can be executable by one or more
processors integrated with a device. In one embodiment, an
application executed in a computing device, such as a mobile
device, can invoke APIs that provide the logic described herein as
well as facilitate interaction with image capture hardware. Where
any component discussed herein is implemented in the form of
software, any one of a number of programming languages may be
employed such as, for example, processor specific assembler
languages, C, C++, C#, Objective-C, Java, JavaScript, Perl, PHP,
Visual Basic, Python, Ruby, Delphi, Flash, or other programming
languages.
[0032] As such, these software components can be executable by one
or more processors in various devices. In this respect, the term
"executable" means a program file that is in a form that can
ultimately be run by a processor. Examples of executable programs
include a compiled program that can be translated into
machine code in a format that can be loaded into a random access
portion of memory and run by a processor, source code that may be
expressed in proper format such as object code that is capable of
being loaded into a random access portion of the memory and
executed by the processor, or source code that may be interpreted
by another executable program to generate instructions in a random
access portion of the memory to be executed by the processor, etc.
An executable program may be stored in any portion or component of
the memory including, for example, random access memory (RAM),
read-only memory (ROM), hard drive, solid-state drive, USB flash
drive, memory card, optical disc such as compact disc (CD) or
digital versatile disc (DVD), floppy disk, magnetic tape, or other
memory components.
[0033] Although various logic described herein may be embodied in
software or code executed by general purpose hardware as discussed
above, as an alternative the same may also be embodied in dedicated
hardware or a combination of software/general purpose hardware and
dedicated hardware. If embodied in dedicated hardware, each can be
implemented as a circuit or state machine that employs any one of
or a combination of a number of technologies. These technologies
may include, but are not limited to, discrete logic circuits having
logic gates for implementing various logic functions upon an
application of one or more data signals, application specific
integrated circuits having appropriate logic gates, or other
components, etc. Such technologies are generally well known by
those skilled in the art and, consequently, are not described in
detail herein.
[0034] The flowchart of FIG. 5 shows the functionality and
operation of an implementation of portions of an image capture
device according to embodiments of the disclosure. If embodied in
software, each block may represent a module, segment, or portion of
code that comprises program instructions to implement the specified
logical function(s). The program instructions may be embodied in
the form of source code that comprises human-readable statements
written in a programming language or machine code that comprises
numerical instructions recognizable by a suitable execution system
such as a processor in a computer system or other system. The
machine code may be converted from the source code, etc. If
embodied in hardware, each block may represent a circuit or a
number of interconnected circuits to implement the specified
logical function(s).
[0035] Although the flowchart of FIG. 5 shows a specific order of
execution, it is understood that the order of execution may differ
from that which is depicted. For example, the order of execution of
two or more blocks may be scrambled relative to the order shown.
Also, two or more blocks shown in succession in FIG. 5 may be
executed concurrently or with partial concurrence. Further, in some
embodiments, one or more of the blocks shown in FIG. 5 may be
skipped or omitted. In addition, any number of counters, state
variables, warning semaphores, or messages might be added to the
logical flow described herein, for purposes of enhanced utility,
accounting, performance measurement, or providing troubleshooting
aids, etc. It is understood that all such variations are within the
scope of the present disclosure.
[0036] Also, any logic or application described herein that
comprises software or code can be embodied in any non-transitory
computer-readable medium for use by or in connection with an
instruction execution system such as, for example, a processor in a
computer device or other system. In this sense, the logic may
comprise, for example, statements including instructions and
declarations that can be fetched from the computer-readable medium
and executed by the instruction execution system. In the context of
the present disclosure, a "computer-readable medium" can be any
medium that can contain, store, or maintain the logic or
application described herein for use by or in connection with the
instruction execution system. The computer-readable medium can
comprise any one of many physical media such as, for example,
magnetic, optical, or semiconductor media. More specific examples
of a suitable computer-readable medium would include, but are not
limited to, magnetic tapes, magnetic floppy diskettes, magnetic
hard drives, memory cards, solid-state drives, USB flash drives, or
optical discs. Also, the computer-readable medium may be a random
access memory (RAM) including, for example, static random access
memory (SRAM) and dynamic random access memory (DRAM), or magnetic
random access memory (MRAM). In addition, the computer-readable
medium may be a read-only memory (ROM), a programmable read-only
memory (PROM), an erasable programmable read-only memory (EPROM),
an electrically erasable programmable read-only memory (EEPROM), or
other type of memory device.
[0037] It should be emphasized that the above-described embodiments
of the present disclosure are merely possible examples of
implementations set forth for a clear understanding of the
principles of the disclosure. Many variations and modifications may
be made to the above-described embodiment(s) without departing
substantially from the spirit and principles of the disclosure. All
such modifications and variations are intended to be included
herein within the scope of this disclosure and protected by the
following claims.
* * * * *