U.S. patent application number 13/921161 was filed with the patent office on 2014-12-18 for method and system for rendering simulated depth-of-field visual effect.
The applicant listed for this patent is NVIDIA Corporation. Invention is credited to Holger GRUEN, Nikolay SAKHARNYKH.
United States Patent Application 20140368494
Kind Code: A1
SAKHARNYKH; Nikolay; et al.
December 18, 2014
METHOD AND SYSTEM FOR RENDERING SIMULATED DEPTH-OF-FIELD VISUAL EFFECT
Abstract
Systems and methods for rendering depth-of-field visual effect
on images with high computing efficiency and performance. A
diffusion blurring process and a Fast Fourier Transform (FFT)-based
convolution are combined to achieve high-fidelity depth-of-field
visual effect with Bokeh spots in real-time applications. The
brightest regions in the background of an original image are
enhanced with Bokeh effect by virtue of FFT convolution with a
convolution kernel. A diffusion solver can be used to blur the
background of the original image. By blending the Bokeh spots with
the image with gradually blurred background, a resultant image can
present an enhanced depth-of-field visual effect. The FFT-based
convolution can be computed with multi-threaded parallelism.
Inventors: SAKHARNYKH; Nikolay (San Jose, CA); GRUEN; Holger (Peissenberg, DE)
Applicant: NVIDIA Corporation, Santa Clara, CA, US
Family ID: 52018820
Appl. No.: 13/921161
Filed: June 18, 2013
Current U.S. Class: 345/419
Current CPC Class: G06T 5/004 20130101; G06T 5/10 20130101; G06T 2200/21 20130101; G06T 2207/20056 20130101; G06T 5/00 20130101
Class at Publication: 345/419
International Class: G06T 15/00 20060101 G06T015/00
Claims
1. A computer implemented method of rendering an image, the method
comprising: accessing a first image represented in a space domain,
said first image comprising a first section and a second section;
applying a blur operator on said second section to generate a
second image; identifying a plurality of target regions from said
second section in said first image; converting said plurality of
target regions from said space domain to a frequency domain by
performing a convolution thereon; restoring said plurality of
target regions in said first image from said frequency domain to
said space domain to produce a third image; and blending said
second image and said third image to produce a resultant image.
2. The computer implemented method of claim 1, wherein said
converting and said performing a convolution comprises performing a
Fast Fourier Transform (FFT) convolution on a convolution kernel
with said plurality of target regions, and wherein said convolution
kernel represents a hexagon camera aperture.
3. The computer implemented method of claim 2, wherein a size of
said convolution kernel corresponds to a circle of confusion (CoC)
at an infinite distance.
4. The computer implemented method of claim 2 further comprising
configuring a plurality of execution threads to perform said FFT
convolution in parallel.
5. The computer implemented method of claim 1, wherein said
identifying said plurality of target regions comprises:
down-sampling said first image to a downsized image; filtering said
downsized image with a bright pass filter based on a configurable
luminance threshold.
6. The computer implemented method of claim 1, wherein said first
section corresponds to an in-focus subject represented in said
first image, and wherein further said second section corresponds to
a background represented in said first image.
7. The computer implemented method of claim 1, wherein said blur
operator comprises a diffusion depth-of-field solver, and wherein
further said resultant image comprises Bokeh effect components.
8. A non-transient computer readable storage medium comprising
executable instructions that implements a method of rendering an
image, said method comprising: accessing a first image; identifying
a first portion and a second portion of said first image; selecting
a plurality of regions from said second portion of said first
image, said plurality of regions comprising luminous regions on
said first image; performing a Fast Fourier Transform (FFT)
convolution on said plurality of regions with a convolution kernel
representing a geometric shape; and performing an inverse FFT
convolution on said plurality of regions to produce a second
image.
9. The non-transient computer readable storage medium of claim 8,
wherein said method further comprises: applying a diffusion
depth-of-field solver to said second portion of said first image
and preserving said first portion of said first image to produce a
third image; and combining said second image and said third image
to produce an output image, wherein said output image comprises
components with Bokeh effect.
10. The non-transient computer readable storage medium of claim 9,
wherein said convolution kernel represents a pentagon with a size
equal to a circle of confusion (CoC) at an infinite distance of
said first image.
11. The non-transient computer readable storage medium of claim 9,
wherein selecting said plurality of regions comprises:
down-sampling said second portion of said first image to a
reduced-resolution image, and applying a bright pass filter on said
reduced-resolution image based on a luminance threshold.
12. The non-transient computer readable storage medium of claim 9,
wherein said method further comprises configuring a plurality of
execution threads to perform said FFT convolution and said inverse
FFT convolution respectively in parallel.
13. The non-transient computer readable storage medium of claim 9,
wherein said identifying said first portion and said second portion
of said first image comprises distinguishing said first portion and
said second portion based on a cut-off distance with reference to a
focal point of said first image.
14. The non-transient computer readable storage medium of claim 9,
wherein said first image corresponds to a computer created
image.
15. A system comprising a processor; a memory coupled to said
processor and storing an image processing program, said image
processing program comprising instructions that cause said
processor to perform a method of generating an image, said method
comprising: accessing a first image; identifying a first portion and a
second portion of said first image; selecting a plurality of
regions from said second portion of said first image, said
plurality of regions comprising luminous regions on said first image;
performing a Fast Fourier Transform (FFT) convolution on said
plurality of regions with a convolution kernel representing a
geometric shape to produce an intermediate image; performing an
inverse FFT convolution on said intermediate image to produce a
second image; applying a blur operator on said second portion of
said first image and preserving said first portion to produce a
third image; and combining said second image and said third image
to produce a resultant image.
16. The system of claim 15, wherein said blur operator comprises a
diffusion depth-of-field (DOF) solver, and wherein said resultant
image presents a Bokeh effect.
17. The system of claim 15, wherein said Fast Fourier Transform
(FFT) convolution converts said plurality of regions from a space
domain to a frequency domain, and wherein said convolution kernel
represents a camera aperture shape with a size equal to a circle of
confusion (CoC) at an infinite distance of said first image.
18. The system of claim 15, wherein said selecting said plurality
of regions comprises: down-sampling said second portion of said
first image to a reduced-resolution image, and applying a bright
pass filter on said reduced-resolution image based on a luminance
threshold.
19. The system of claim 15, wherein said method further comprises
configuring a plurality of execution threads to perform said FFT
convolution and said inverse FFT convolution respectively in
parallel.
20. The system of claim 15, wherein said first portion corresponds
to an in-focus subject, and wherein said second portion corresponds
to a background.
Description
TECHNICAL FIELD
[0001] The present disclosure relates generally to the field of
computer graphics, and, more specifically, to the field of image
processing.
BACKGROUND
[0002] In optics, depth of field (DOF) refers to the distance
between the nearest and farthest objects in a viewing window that
appear acceptably sharp in an image, for example in a video game, a
movie, or a photograph. The DOF effect can be simulated in computer
graphics, e.g. based on a blur algorithm. Control over depth of
field is an important artistic tool that can be used to emphasize
the subject of an image.
[0003] Using diffusion equations to compute blurred images became
quite popular with the growing computational power of graphics
processors. The diffusion method simulates color
distribution in an original image with varying blur radii as a
function of the depth. This is typically done using an analogy of
temperature field distribution on a 2D plate with non-uniform
conductivity. FIG. 1A shows a diffusion equation that can be used
to simulate DOF effect in accordance with the prior art. In such a
simulation, equations may be discretized and solved using an
alternating direction implicit (ADI) method in a single time step.
The main property of diffusion approaches is that they can respect
edges and depth discontinuities in the scene while using a varying
filter size for blurring.
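The idea behind this prior-art diffusion blur can be sketched as a heat-equation time-stepper in which a per-pixel conductivity map (derived from depth) controls the local blur strength. The sketch below uses simple explicit Euler steps rather than the ADI solver mentioned above; the function and parameter names are illustrative assumptions, not from the application:

```python
import numpy as np

def diffusion_blur(img, conductivity, steps=20, dt=0.2):
    """Blur `img` by explicit-Euler heat diffusion.

    `conductivity` is a per-pixel blur-strength map (0 = keep sharp,
    larger = more blur). The application describes an implicit ADI
    solver; explicit stepping is shown here only for clarity, and is
    stable for dt * conductivity <= 0.25.
    """
    out = img.astype(np.float64)
    for _ in range(steps):
        # 4-neighbour Laplacian with replicated (clamped) edges.
        up = np.roll(out, -1, axis=0);    up[-1] = out[-1]
        down = np.roll(out, 1, axis=0);   down[0] = out[0]
        left = np.roll(out, -1, axis=1);  left[:, -1] = out[:, -1]
        right = np.roll(out, 1, axis=1);  right[:, 0] = out[:, 0]
        lap = up + down + left + right - 4.0 * out
        out = out + dt * conductivity * lap
    return out
```

Because the conductivity varies per pixel, the effective blur radius varies with depth, which is what lets diffusion approaches respect depth discontinuities.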
[0004] In real life, DOF blur naturally comes from a camera
aperture, which usually has a polygonal shape. The polygonal blur
with emphasized bright features is referred to as the "Bokeh
effect." FIG. 1B illustrates the Bokeh effect created by a camera
in a photograph. The hexagonal bright spots in the background
exhibit the Bokeh effect in the photograph. In general, Gaussian-like blur
operates well for averaging a scene with no brightness spikes. The
Bokeh effect has become almost a standard in realistic image
processing due to its natural presence in movies, so it is an
important feature that needs to be addressed in almost any
realistic-looking depth-of-field technique. However, due to their
inherent limitations, diffusion equations alone can only produce a
Gaussian-like blur that lacks a polygonal shape or selectively
highlighted bright spots in the image. In other words, diffusion
equations do not produce good quality Bokeh effect objects in image
processing.
[0005] Other DOF techniques include convolutions based on gather
DOF algorithms and scatter DOF algorithms. FIG. 1C illustrates the
principle of using gather algorithms to produce DOF effects in
accordance with the prior art. Such algorithms generally involve
computing the final color by performing a convolution with a filter
kernel whose size is determined by the circle of confusion (CoC)
and whose shape is a polygon. FIG. 1D illustrates the principle of
using scatter algorithms to produce DOF effects with a polygon
filter in accordance with the prior art. The scatter algorithms
involve setting up a quad with the Bokeh texture and rendering the
quad with additive blending.
[0006] In the context of computer games or other application
programs that often call for real-time image or video processing,
computing efficiency and performance are important factors
contributing to high quality visual effects.
[0007] Fast Fourier Transform (FFT)-based convolution is an
important tool for generating fixed kernel shape blurs of the
image. Graphic processors are very suitable for FFT and there are
many efficient implementations of FFT using different application
program interfaces (APIs). In general, FFT blurs are much faster
than alternatives based on gather or scatter algorithms. However,
their application in depth-of-field effect techniques remains
underdeveloped due to the restriction of a fixed kernel shape.
SUMMARY OF THE INVENTION
[0008] It would be advantageous to provide a computer implemented
method of rendering high quality depth-of-field visual effect to
images in real-time with high computing efficiency and performance.
Accordingly, embodiments of the present disclosure exploit a hybrid
approach that combines a diffusion blurring process and a Fast
Fourier Transform (FFT)-based convolution to achieve high-fidelity
depth-of-field visual effects. The brightest regions in the
background of an original image are enhanced with a Bokeh effect by
virtue of FFT convolution with a convolution kernel. In addition, a
diffusion solver can be used to blur the background of the original
image. By blending the Bokeh spots with the image with blurred
background, a resultant image is rendered with an enhanced
depth-of-field visual effect. The convolution kernel can be
selected based on a desired geometric shape (of the aperture) that
resembles the Bokeh spots generated by the optical system in a
camera. Preferably, the FFT-based convolution can be computed with
multi-threaded parallelism in a graphic processor having multiple
programmable parallel processors.
[0009] In one embodiment of present disclosure, a computer
implemented method of rendering an image comprises: (1) accessing a
first image having a first section and a second section; (2) applying
a blur operator on the second section to generate a second image;
(3) identifying a plurality of target regions from the second
section in the first image; (4) converting the plurality of target
regions from the space domain to a frequency domain by performing a
convolution thereon; (5) restoring the plurality of target regions
in the first image from the frequency domain to the space domain to
produce a third image; and (6) blending the second image and the
third image to produce a resultant image. The blur operator may
comprise a diffusion depth-of-field solver. The convolution may
comprise an FFT convolution performed on a convolution kernel with
the plurality of target regions. The convolution kernel may
represent the geometric shape of a camera aperture and have a size
equal to the circle of confusion at an infinite distance. The
convolution may be performed in parallel by multiple execution
threads. The plurality of target regions may correspond to the
brightest regions in the second section.
[0010] In another embodiment of present disclosure, a non-transient
computer readable storage medium comprises executable instructions
that implement a method of rendering an image comprising: (1)
accessing a first image; (2) identifying a first portion and a
second portion of the first image; (3) selecting a plurality of
regions from the second portion of the first image, the plurality
of regions comprising luminous regions on the first image; (4)
performing a Fast Fourier Transform (FFT) convolution on the
plurality of regions with a convolution kernel representing a
geometric shape; and (5) performing an inverse FFT convolution on
the plurality of regions to produce a second image. The method may
further comprise: (1) applying a diffusion depth-of-field solver to
the second portion of the first image and preserving the first
portion of the first image to produce a third image; and (2)
combining the second image and the third image to produce an output
image, wherein the output image comprises components with Bokeh
effect.
[0011] In another embodiment of present disclosure, a system
comprises a processor, a memory, and an image processing program
stored in the memory. The image processing program comprises
instructions to perform a method of: (1) accessing a first image;
(2) identifying a first portion and a second portion of the first
image; (3) selecting a plurality of luminous regions from the
second portion of the first image; (4) performing an FFT
convolution on the plurality of regions with a convolution kernel
representing a geometric shape to produce an intermediate image;
and (5) performing an inverse FFT convolution on the intermediate
image to produce a second image; (6) applying a blur operator on
the second portion of the first image and preserving the first
portion to produce a third image; and (7) combining the second
image and the third image to produce a resultant image.
[0012] This summary contains, by necessity, simplifications,
generalizations and omissions of detail; consequently, those
skilled in the art will appreciate that the summary is illustrative
only and is not intended to be in any way limiting. Other aspects,
inventive features, and advantages of the present invention, as
defined solely by the claims, will become apparent in the
non-limiting detailed description set forth below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] Embodiments of the present invention will be better
understood from a reading of the following detailed description,
taken in conjunction with the accompanying drawing figures in which
like reference characters designate like elements and in which:
[0014] FIG. 1A shows a typical diffusion equation that can be used
to simulate DOF effect in accordance with the prior art.
[0015] FIG. 1B illustrates the Bokeh effect created by a camera in
a photograph.
[0016] FIG. 1C illustrates the principle of using gather algorithms
to produce DOF effect in accordance with the prior art.
[0017] FIG. 1D illustrates the principle of using scatter
algorithms to produce DOF effect with a polygon filter in
accordance with the prior art.
[0018] FIG. 2 is a flow chart depicting an exemplary computer
implemented method of creating enhanced depth-of-focus visual
effect on an original image using the hybrid approach in accordance
with an embodiment of the present disclosure.
[0019] FIG. 3 is a flow chart depicting an exemplary computer
implemented method of creating Bokeh spots in the background layer
by use of FFT convolution in accordance with an embodiment of the
present disclosure.
[0020] FIG. 4A shows a sample original image.
[0021] FIG. 4B illustrates an image resulting from a blurring
operation performed on a background layer of the original
image.
[0022] FIG. 4C illustrates an image resulting from down-sampling and
bright pass filtering on the background layer of the original
image.
[0023] FIG. 4D illustrates a frequency domain image resulting from a
forward FFT convolution performed on the image in FIG. 4C.
[0024] FIG. 4E illustrates a space domain image resulting from an
inverse FFT convolution performed on the image shown in FIG.
4D.
[0025] FIG. 4F illustrates the final image resultant from blending
the image with blurred background shown in FIG. 4B and the image
with the Bokeh spots shown in FIG. 4E.
[0026] FIG. 5A illustrates that an exemplary forward FFT transforms
a filtered image from a space domain to a frequency domain.
[0027] FIG. 5B illustrates that the forward FFT transforms a
polygon from a space domain to a frequency domain which is then
used as the convolution kernel.
[0028] FIG. 5C illustrates that the frequency domain image and the
frequency domain kernel are multiplied, or convolved, based on an
FFT algorithm, resulting in an image similar to FIG. 4D.
[0029] FIG. 6 is a block diagram illustrating a computing system
including a depth-of-focus effect generator in accordance with an
embodiment of the present disclosure.
DETAILED DESCRIPTION
[0030] Reference will now be made in detail to the preferred
embodiments of the present invention, examples of which are
illustrated in the accompanying drawings. While the invention will
be described in conjunction with the preferred embodiments, it will
be understood that they are not intended to limit the invention to
these embodiments. On the contrary, the invention is intended to
cover alternatives, modifications and equivalents, which may be
included within the spirit and scope of the invention as defined by
the appended claims. Furthermore, in the following detailed
description of embodiments of the present invention, numerous
specific details are set forth in order to provide a thorough
understanding of the present invention. However, it will be
recognized by one of ordinary skill in the art that the present
invention may be practiced without these specific details. In other
instances, well-known methods, procedures, components, and circuits
have not been described in detail so as not to unnecessarily
obscure aspects of the embodiments of the present invention. The
drawings showing embodiments of the invention are semi-diagrammatic
and not to scale and, particularly, some of the dimensions are for
the clarity of presentation and are shown exaggerated in the
drawing Figures. Similarly, although the views in the drawings for
the ease of description generally show similar orientations, this
depiction in the Figures is arbitrary for the most part. Generally,
the invention can be operated in any orientation.
NOTATION AND NOMENCLATURE
[0031] It should be borne in mind, however, that all of these and
similar terms are to be associated with the appropriate physical
quantities and are merely convenient labels applied to these
quantities. Unless specifically stated otherwise as apparent from
the following discussions, it is appreciated that throughout the
present invention, discussions utilizing terms such as "processing"
or "accessing" or "executing" or "storing" or "rendering" or the
like, refer to the action and processes of a computer system, or
similar electronic computing device, that manipulates and
transforms data represented as physical (electronic) quantities
within the computer system's registers and memories and other
computer readable media into other data similarly represented as
physical quantities within the computer system memories or
registers or other such information storage, transmission or
display devices. When a component appears in several embodiments,
the use of the same reference numeral signifies that the component
is the same component as illustrated in the original
embodiment.
Method and System for Rendering Simulated Depth-of-Field Visual
Effect
[0032] FIG. 2 is a flow chart depicting an exemplary computer
implemented method 200 of creating comprehensive depth-of-focus
visual effect on an original image using the hybrid approach in
accordance with an embodiment of the present disclosure. At 201, an
in-focus layer and a background layer (or an out-of-focus layer)
are first identified on an original image. At 202, a blur operator
can be performed on the background layer of the image, resulting in
a gradually blurred background to achieve a depth-of-focus effect
in the image. At 203, a plurality of regions, or spots, in the
background layer are selected and converted to Bokeh spots by
virtue of a Fast Fourier Transform (FFT) convolution. As to be
described in greater detail below, the selected regions may
correspond to bright spots in the background layer. At 204, the DOF
effect produced by the blur operator and the Bokeh effect produced
by the FFT are combined to achieve a resultant realistic looking
image. Thereby, the resultant image advantageously presents a
comprehensive depth-of-field visual effect attributed to the Bokeh
spots in a blurred background. In addition, because the diffusion
blur approach and the FFT-based convolution need not depend on the
scene and camera parameters, the hybrid approach can maintain
constant performance independent of these factors. For example,
during gameplay this feature can offer a stable frame rate, which
is essential in modern games of many genres. Further, the hybrid
approach can advantageously handle varying sizes of circles of
confusion as well as effectively highlight proper Bokeh features
of the scene.
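As a compact illustration, the four steps of method 200 can be composed as below. The helper names `blur_bg` and `make_bokeh` are hypothetical stand-ins for the diffusion solver of step 202 and the FFT Bokeh pass of step 203, and additive blending is one plausible choice for step 204 (the disclosure does not fix a particular blend operator):

```python
import numpy as np

def hybrid_dof(img, depth, focus_cutoff, blur_bg, make_bokeh):
    """Sketch of method 200; `img` and `depth` are same-shaped arrays."""
    # 201: split into in-focus and background layers by a depth cutoff.
    in_focus = depth <= focus_cutoff
    # 202: blur only the background layer, preserving in-focus pixels.
    blurred = np.where(in_focus, img, blur_bg(img))
    # 203: generate Bokeh spots from the background layer only.
    bokeh = make_bokeh(np.where(in_focus, 0.0, img))
    # 204: blend the blurred image with the Bokeh spots.
    return np.clip(blurred + bokeh, 0.0, 1.0)
```

Because steps 202 and 203 read only the original image, they can run in either order or concurrently before the final blend, as noted in paragraph [0035].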
[0033] The original image and the resultant image may be 2D or 3D
images. For purposes of practicing the present disclosure, the
blurring techniques can involve any suitable algorithm or program
configured to impart a blur effect on an image. In some
embodiments, the blur operator includes a diffusion equation solver
that generates Gaussian-blurs. Further, the present disclosure is
not limited to any specific FFT convolution algorithm or program.
For example, the FFT convolution may be based on a Cooley-Tukey
algorithm, Prime-factor FFT algorithm, Bruun's FFT algorithm,
Rader's FFT algorithm, etc. Specific feasible methods of computing
various FFT convolutions or inverse FFT convolutions are well known
in the art and can be used.
[0034] The method 200 can be implemented as a computer program. The
modern graphics processing unit (GPU) has evolved from a
fixed-function graphics pipeline to a programmable parallel
processor with computing power that exceeds that of a multi-core
central processing unit. Because FFT convolutions and diffusion
blur operations are especially suitable for parallel processing, a
GPU may offer a preferred platform due to its lightweight
thread-creation overhead and capability of running thousands of
threads at full efficiency. However, it will be appreciated by
those skilled in the art that method 200 can also be executed on a
central processing unit (CPU).
[0035] As will be appreciated by those with ordinary skill in the
art, the blurring operation and the Bokeh spots generation in
accordance with the present disclosure can be independent of each
other. Thereby, the two steps 202 and 203 can be performed in any
order, or simultaneously, before their results are blended.
[0036] FIG. 3 is a flow chart depicting an exemplary computer
implemented method 300 of creating Bokeh spots on an original image
by use of FFT convolutions in accordance with an embodiment of the
present disclosure. Method 300 corresponds to block 203 in FIG. 2.
In the illustrated embodiment, target spots to be converted
correspond to bright pixels in the background layer. At 301, the
background layer of the original image is optionally down-sampled
to a lower resolution, and then further filtered with a bright pass
filter in accordance with a programmable luminance threshold, e.g.
specified by a user, at 302. In some other embodiments, the
targeted regions to be converted to Bokeh spots may be selected
based on any other aspect of the image data, or by any other
filtering mechanism e.g. a color filter. At 303, a convolution
kernel is determined based on a desired size and shape of the Bokeh
spots to be achieved. At 304, the forward FFT convolution is
performed with the convolution kernel on the selected regions in
the background layer. Here the selected regions correspond to the
bright pass filtered image. The FFT convolution converts the
filtered image from the space domain to a frequency/phase domain.
At 305, a corresponding inverse FFT is performed to restore the
frequency/phase domain image to a space domain image which has
Bokeh spots as a result of the FFT convolutions.
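A minimal sketch of steps 302-305 follows: bright pass filtering, forward FFT of both the filtered image and the aperture kernel, pointwise multiplication of the spectra, and an inverse FFT back to the space domain. The down-sampling of step 301 is omitted, and the aperture `kernel` is assumed to be supplied as a small binary mask (e.g. a hexagon) no larger than the image; these simplifications are assumptions for illustration:

```python
import numpy as np

def fft_bokeh(background, threshold, kernel):
    """Steps 302-305: bright pass, FFT convolve with an aperture kernel."""
    # 302: keep only pixels at or above the luminance threshold.
    bright = np.where(background >= threshold, background, 0.0)
    # Zero-pad the kernel to image size and center it at the origin,
    # so the circular convolution places the shape symmetrically.
    h, w = background.shape
    kh, kw = kernel.shape
    k = np.zeros((h, w))
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    # 303-304: forward FFTs and pointwise spectral multiplication.
    spectrum = np.fft.fft2(bright) * np.fft.fft2(k)
    # 305: inverse FFT restores a space-domain image with Bokeh spots.
    return np.real(np.fft.ifft2(spectrum))
```

Each bright pixel that survives the threshold is stamped with the aperture shape, which is how isolated highlights become polygonal Bokeh spots.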
[0037] In some embodiments, the FFT and/or inverse FFT conversion
operations can be implemented on a block-linear memory layout that
caters for the fact that the input data is from the 2D-image
domain. This memory layout change can advantageously and greatly
improve the cache coherency of all memory accesses executed by the
conversion algorithm.
[0038] Bokeh spots produced by a real-life camera usually echo the
shape of the camera aperture which also determines the sizes of the
Bokeh spots on the picture. In some embodiments, the sizes of the
Bokeh spots may be specified by a user or varied according to a
specified function. However, it has been generally observed that
Bokeh features produced by optical cameras are typically visible
only at distant ranges and in high-contrast bright parts of the
image, regardless of the camera parameters used to take the image
or the subject matter included, as illustrated in FIG. 1B.
Moreover, the DOF circles of confusion tend to converge to a
constant at distant ranges. These observations suggest that Bokeh
spots on the background layer can be of similar or even identical
size. Accordingly, in some other embodiments, a fixed
convolution kernel may be uniformly applied for convolution on all
the selected regions to be converted to Bokeh spots.
[0039] For example, the circle of confusion (CoC) at infinite
distance can be used as the Bokeh spot size, e.g.,
$$\lim_{z \to \infty} \frac{a f (z - p)}{z (p - f)} = \frac{a f}{p - f},$$
[0040] where z is the distance of the pixels, a is the aperture
capable of producing the desired Bokeh effect, f is the focal
length of the image, and p is the distance of the pixels in
focus.
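The behavior of this expression can be checked numerically; consistent units (e.g. meters) are assumed, and the function names are illustrative:

```python
def coc(z, a, f, p):
    """Circle-of-confusion size at distance z, for aperture a,
    focal length f, and in-focus distance p (the expression above)."""
    return a * f * (z - p) / (z * (p - f))

def coc_infinity(a, f, p):
    """The z -> infinity limit of coc(): a fixed Bokeh kernel size."""
    return a * f / (p - f)

# The CoC vanishes at the in-focus plane and converges far away,
# which justifies using one fixed kernel size for distant Bokeh spots.
a, f, p = 2.0, 0.05, 3.0
assert coc(p, a, f, p) == 0.0
assert abs(coc(1e9, a, f, p) - coc_infinity(a, f, p)) < 1e-6
```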
[0041] FIGS. 4A-4F are sample images produced during an exemplary
process of creating a comprehensive DOF visual effect by use of the
hybrid approach in accordance with an embodiment of the present
disclosure. FIG. 4A is the example original image which may be a
photograph taken with a camera, or a computer created graphic. In
this example, the original image shows little DOF effect where the
background and the foreground compete for a viewer's attention.
[0042] FIG. 4B illustrates an image resulting from a blur operation
applied on a background layer of the original image. In the
illustrated example, the foremost object is selected as the
in-focus object to be emphasized by virtue of DOF effect. However,
the distinction between the background and the in-focus objects can
be flexibly defined in any manner, e.g. based on a cut-off distance
with reference to a view point. In some embodiments, pixels in the
background may be blurred in accordance with Gaussian functions to
various extents, as described above with reference to FIG. 2 and
FIG. 3.
[0043] FIG. 4C illustrates an image resulting from down-sampling and
bright pass filtering on the background layer of the original
image. Thus, this image data only includes the bright regions of
the background layer at a reduced resolution, which facilitates
fast convolution. In this example, the original image in FIG. 4A
has a resolution of 1920×1200 pixels, while the down-sampled image
has a resolution of 512×512 pixels. FIG. 4D illustrates a
frequency domain image resulting from a forward FFT convolution
performed on the image shown in FIG. 4C. FIG. 4E illustrates a
space domain image resulting from an inverse FFT convolution
performed on the image shown in FIG. 4D. Compared to FIG. 4D, FIG.
4E demonstrates that the bright regions are blurred to a polygonal
shape which mimics real-life Bokeh spots with high fidelity. FIG.
4F illustrates the final image resulting from blending the image
with the blurred background in FIG. 4B and the image with the
Bokeh spots in FIG. 4E. The blending operation can be performed in
accordance with any technique well known in the art. Compared to
the original image in FIG. 4A, the final image presents a
comprehensive depth-of-field effect that encompasses an emphasized
subject and a gradually blurred background with the bright regions
highlighted in a polygonal shape, i.e., the Bokeh effect.
[0044] FIGS. 5A-5C are sample images produced during an exemplary
process of creating Bokeh spots by virtue of FFT convolutions in
accordance with an embodiment of the present disclosure. FIG. 5A
illustrates that an exemplary forward FFT transforms a filtered
image from a space domain to a frequency domain. The image may be
down-sampled for purposes of simplifying the FFT computation. FIG.
5B illustrates that the forward FFT transforms a polygon to a
frequency domain kernel which is used as the convolution kernel.
The polygon is selected based on the desired shape and size of the
Bokeh spots to be generated. FIG. 5C illustrates that the frequency
domain image and the frequency domain kernel are multiplied, or
convolved, based on an FFT algorithm, resulting in an image similar
to FIG. 4D.
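The FFT shortcut illustrated in FIGS. 5A-5C relies on the convolution theorem: multiplying two spectra element-wise and inverse-transforming is equivalent to circularly convolving the two space-domain signals. A brute-force check of this equivalence, on illustrative random data:

```python
import numpy as np

def direct_circular_conv(img, kern):
    """Brute-force 2-D circular convolution, used only to verify
    the frequency-domain shortcut."""
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            for v in range(h):
                for u in range(w):
                    out[y, x] += img[v, u] * kern[(y - v) % h, (x - u) % w]
    return out

# FFT route: transform both operands, multiply spectra, transform back.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
kern = rng.random((8, 8))
via_fft = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern)))
assert np.allclose(via_fft, direct_circular_conv(img, kern))
```

The direct route costs O(N^2) per pixel, while the FFT route costs O(log N), which is why FFT blurs outpace gather- and scatter-based alternatives as noted in paragraph [0007].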
[0045] FIG. 6 is a block diagram illustrating an exemplary
computing system 600 including a depth-of-focus effect generator
610 in accordance with an embodiment of the present disclosure. The
computing system comprises a processor 601, a system memory 602, a
GPU 603, I/O interfaces 604 and other components 605, an operating
system 606 and application software 607 including a depth-of-focus
effect generator 610 stored in the memory 602. When incorporating
the user's configuration input and executed by the CPU 601 or the
GPU 603, the DOF effect generator 610 can produce a comprehensive
DOF effect with Bokeh effect by using the hybrid approach in
accordance with an embodiment of the present disclosure. The DOF
effect generator 610 may include a diffusion solver, a bright pass
filter, a sampler and a forward and inverse FFT convolution
operator. The user configuration input may include an original
image for processing, and Bokeh spot size and shape for example, as
discussed with reference to FIG. 2 and FIG. 3. The hybrid Bokeh
effect generator 610 may be an integral part of an image processing
tool, a processing library, or a computer game that is written in
Fortran, C++, or any other programming languages known to those
skilled in the art.
[0046] Although certain preferred embodiments and methods have been
disclosed herein, it will be apparent from the foregoing disclosure
to those skilled in the art that variations and modifications of
such embodiments and methods may be made without departing from the
spirit and scope of the invention. It is intended that the
invention shall be limited only to the extent required by the
appended claims and the rules and principles of applicable law.
* * * * *