U.S. patent application number 13/329,943, filed with the patent
office on December 19, 2011, was published on 2013-06-20 as
publication number 20130156332 for a system and method for
depth-guided image filtering in a video conference environment. This
patent application is currently assigned to Cisco Technology, Inc.
The applicant listed for this patent is Dihong Tian. The invention is
credited to Dihong Tian.
United States Patent Application: 20130156332
Kind Code: A1
Application Number: 13/329,943
Family ID: 47430136
Inventor: Tian; Dihong
Publication Date: June 20, 2013
SYSTEM AND METHOD FOR DEPTH-GUIDED IMAGE FILTERING IN A VIDEO
CONFERENCE ENVIRONMENT
Abstract
A method is provided in one example embodiment that includes
receiving a plurality of depth values corresponding to pixels of an
image; and filtering the image as a function of a plurality of
variations in the depth values between adjacent pixels of a window
associated with the image. In more detailed embodiments, the method
may include encoding the image into a bit stream for transmission
over a network. The filtering can account for a bit rate associated
with the encoding of the image.
Inventors: Tian; Dihong (San Jose, CA)
Applicant: Tian; Dihong (San Jose, CA, US)
Assignee: Cisco Technology, Inc.
Family ID: 47430136
Appl. No.: 13/329,943
Filed: December 19, 2011
Current U.S. Class: 382/232; 375/240.02; 375/E7.161; 382/260;
382/264; 382/265
Current CPC Class: H04N 13/128 (20180501); H04N 13/194 (20180501);
H04N 19/117 (20141101); H04N 21/2662 (20130101); H04N 19/82
(20141101); H04N 21/4223 (20130101); H04N 13/161 (20180501); H04N
19/136 (20141101); H04N 21/234345 (20130101); H04N 21/816 (20130101);
H04N 19/182 (20141101); H04N 7/148 (20130101)
Class at Publication: 382/232; 382/260; 382/265; 382/264; 375/240.02;
375/E07.161
International Class: G06K 9/36 (20060101) G06K009/36; H04N 7/26
(20060101) H04N007/26; G06K 9/40 (20060101) G06K009/40
Claims
1. A method, comprising: receiving a plurality of depth values
corresponding to pixels of an image; and filtering the image as a
function of a plurality of variations in the depth values between
adjacent pixels of a window associated with the image.
2. The method of claim 1, further comprising: encoding the image
into a bit stream for transmission over a network, wherein the
filtering includes accounting for a bit rate associated with the
encoding of the image.
3. The method of claim 1, further comprising: receiving intensity
values corresponding to the pixels, wherein the filtering is a
function of variations in the intensity values between the adjacent
pixels.
4. The method of claim 1, wherein filtering the image comprises
smoothing certain adjacent pixels having variations of depth values
below a threshold value.
5. The method of claim 1, wherein the image is filtered in a loop
comprising an inverse transform, an inverse quantization, and a
prediction compensation that is based on previous encoding.
6. The method of claim 1, wherein the window comprises pixels from
a spatial region.
7. The method of claim 1, wherein the window comprises pixels from
a temporal region.
8. The method of claim 1, wherein the filtering preserves pixels
corresponding to depth values closer to a viewpoint over pixels
corresponding to depth values further away from the viewpoint.
9. Logic encoded in one or more non-transitory media that includes
code for execution and when executed by one or more processors is
operable to perform operations comprising: receiving a plurality of
depth values corresponding to pixels of an image; and filtering the
image as a function of a plurality of variations in the depth
values between adjacent pixels of a window associated with the
image.
10. The logic of claim 9, the operations further comprising:
encoding the image into a bit stream for transmission over a
network, wherein the filtering includes accounting for a bit rate
associated with the encoding of the image.
11. The logic of claim 9, the operations further comprising:
receiving intensity values corresponding to the pixels, wherein the
filtering is a function of variations in the intensity values
between the adjacent pixels.
12. The logic of claim 9, wherein filtering the image comprises
smoothing certain adjacent pixels having variations of depth values
below a threshold value.
13. The logic of claim 9, wherein the image is filtered in a loop
comprising an inverse transform, an inverse quantization, and a
prediction compensation that is based on previous encoding.
14. The logic of claim 9, wherein the window comprises pixels from
a spatial region.
15. The logic of claim 9, wherein the window comprises pixels from
a temporal region.
16. The logic of claim 9, wherein the filtering preserves pixels
corresponding to depth values closer to a viewpoint over pixels
corresponding to depth values further away from the viewpoint.
17. An apparatus, comprising: one or more processors; a memory; and
a video encoder with a depth-guided filter, wherein the apparatus
is configured for: receiving a plurality of depth values
corresponding to pixels of an image; and filtering the image as a
function of a plurality of variations in the depth values between
adjacent pixels of a window associated with the image.
18. The apparatus of claim 17, the apparatus being further
configured for: encoding the image into a bit stream for
transmission over a network, wherein the filtering includes
accounting for a bit rate associated with the encoding of the
image.
19. The apparatus of claim 17, the apparatus being further
configured for: receiving intensity values corresponding to the
pixels, wherein the filtering is a function of variations in the
intensity values between the adjacent pixels.
20. The apparatus of claim 17, wherein the filtering is configured
to preserve pixels corresponding to depth values closer to a
viewpoint over pixels corresponding to depth values further away
from the viewpoint.
Description
TECHNICAL FIELD
[0001] This disclosure relates in general to the field of
communications, and more particularly, to a system and a method for
depth-guided image filtering in a video conference environment.
BACKGROUND
[0002] Video architectures have grown in complexity in recent
times. Some video architectures can deliver real-time, face-to-face
interactions between people using advanced visual, audio, and
collaboration technologies. In certain architectures, service
providers may offer sophisticated video conferencing services for
their end users, which can simulate an "in-person" meeting
experience over a network. The ability to optimize video encoding
and decoding with certain bitrate constraints during a video
conference presents a significant challenge to developers and
designers, who attempt to offer a video conferencing solution that
is realistic and that mimics a real-life meeting.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] To provide a more complete understanding of the present
disclosure and features and advantages thereof, reference is made
to the following description, taken in conjunction with the
accompanying figures, wherein like reference numerals represent
like parts, in which:
[0004] FIG. 1 is a simplified block diagram illustrating an example
embodiment of a communication system in accordance with this
disclosure;
[0005] FIG. 2A is a simplified block diagram illustrating
additional details that may be associated with a video processing
unit in which a depth-guided filter is coupled with a video encoder
to encode an image;
[0006] FIG. 2B is a simplified block diagram illustrating
additional details that may be associated with a video processing
unit in which a depth-guided filter is coupled with a video decoder
to decode an image;
[0007] FIG. 3 is a simplified block diagram illustrating additional
details that may be associated with another embodiment of a video
processing unit, in which a depth-guided filter is coupled with a
video encoder as a pre-filter;
[0008] FIG. 4 is a simplified block diagram illustrating additional
details that may be associated with another embodiment of a video
encoder in which a depth-guided filter is an in-loop filter;
[0009] FIG. 5 is a simplified block diagram illustrating additional
details that may be associated with another embodiment of a video
decoder in which a depth-guided filter is an in-loop filter;
and
[0010] FIG. 6 is a simplified flowchart illustrating one possible
set of activities associated with the present disclosure.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
Overview
[0011] A method is provided in one example embodiment that includes
receiving a plurality of depth values corresponding to pixels of an
image. The method also includes filtering (e.g., adjusting,
modifying, improving) the image as a function of a plurality of
variations (e.g., differences) in the depth values between adjacent
pixels of a window associated with the image. In more detailed
embodiments, the method may include encoding the image into a bit
stream for transmission over a network. The filtering can account
for a bit rate associated with the encoding of the image.
[0012] In other embodiments, the method includes receiving
intensity values corresponding to the pixels, where the filtering
is a function of variations in the intensity values between the
adjacent pixels. The filtering of the image can include smoothing
certain adjacent pixels having variations of depth values below a
threshold value. The image can also be filtered in a loop comprising
an inverse transform, an inverse quantization, and a prediction
compensation that is based on previous encoding. The window may
include pixels from a spatial region, or a temporal region. The
filtering can preserve pixels corresponding to depth values closer
to a viewpoint over pixels corresponding to depth values further
away from the viewpoint.
Example Embodiments
[0013] Turning to FIG. 1, FIG. 1 is a simplified schematic diagram
illustrating a communication system 100 for conducting a video
conference in accordance with one embodiment of the present
disclosure. FIG. 1 includes multiple endpoints associated with
various end users of the video conference. In general, endpoints
may be geographically separated, where in this particular example,
a plurality of endpoints 112a-112c are located in San Jose, Calif.
and remote endpoints (not shown) are located in Chicago, Ill. FIG.
1 includes a multipoint manager element 120 coupled to endpoints
112a-112c. Note that the numerical and letter designations assigned
to the endpoints do not connote any type of hierarchy; the
designations are arbitrary and have been used for purposes of
teaching only. These designations should not be construed in any
way to limit their capabilities, functionalities, or applications
in the potential environments that may benefit from the features of
communication system 100.
[0014] In this example, each endpoint 112a-112c is fitted
discreetly along a desk and is proximate to its associated
participant. Such endpoints could be provided in any other suitable
location, as FIG. 1 only offers one of a multitude of possible
implementations for the concepts presented herein. In one example
implementation, the endpoints are videoconferencing endpoints,
which can assist in receiving and communicating video and audio
data. Other types of endpoints are certainly within the broad scope
of the outlined concepts, and some of these example endpoints are
further described below. Each endpoint 112a-112c is configured to
interface with a respective multipoint manager element 120, which
helps to coordinate and to process information being transmitted by
the end users.
[0015] As illustrated in FIG. 1, a number of image capture devices
114a-114c and displays 115a-115c are provided to interface with
endpoints 112a-112c, respectively. Displays 115a-115c render images
to be seen by conference participants and, in this particular
example, reflect a three-display design (e.g., a "triple"). Note
that as used herein in this specification, the term "display" is
meant to connote any element that is capable of rendering an image
during a video conference. This would necessarily be inclusive of
any panel, screen, Telepresence display or wall, computer display,
plasma element, television, monitor, or any other suitable surface
or element that is capable of such rendering.
[0016] The components of communication system 100 may use
specialized applications and hardware to create a system that can
leverage a network. Communication system 100 can use standard IP
technology and can operate on an integrated voice, video, and data
network. The system can also support high-quality, real-time voice
and video communications using broadband connections. It can
further offer capabilities for ensuring quality of service (QoS),
security, reliability, and high availability for high-bandwidth
applications such as video. Power and Ethernet connections for all
end users can be provided. Participants can use their laptops to
access data for the meeting, join a meeting place protocol or a Web
session, or stay connected to other applications throughout the
meeting.
[0017] For purposes of illustrating certain example techniques of
communication system 100, it is important to understand certain
image processing techniques and the communications that may be
traversing the network. The following foundational information may
be viewed as a basis from which the present disclosure may be
properly explained.
[0018] Conceptually, an image may be described as any electronic
element (e.g., an artifact) that reproduces the form of a subject,
such as an object or a scene. In many contexts, an image may be an
optically formed duplicate or reproduction of a subject, such as a
two-dimensional photograph of an object or scene. In a broader
sense, an image may also include any two-dimensional representation
of information, such as a drawing, painting, or map. A video is a
sequence of images, in which each still image is generally referred
to as a "frame."
[0019] A digital image, in general terms, is a numeric
representation of an image. A digital image is most commonly
represented as a set (rows and columns) of numeric values, in which
each value is a picture element (i.e., a "pixel"). A pixel
holds quantized values that represent the intensity (or
"brightness") of a given color at any specific point in the
two-dimensional space of the image. A digital image can be
classified generally according to the number and nature of those
values (samples), such as binary, grayscale, or color. Typically,
pixels are stored in a computer memory as a two-dimensional array
of small integers (i.e., a raster image or a raster map).
[0020] An image (or video) may be captured by optical devices
having a sensor that converts light into electrical charges, such
as a digital camera or a scanner, for example. The electrical
charges can then be converted into digital values. Some digital
cameras give access to almost all the data captured by the camera,
using a raw image format. An image can also be synthesized from
arbitrary non-image information, such as mathematical functions or
three-dimensional geometric models.
[0021] Images from digital image capture devices often receive
further processing to improve their quality and/or to reduce the
consumption of resources, such as memory or bandwidth. For example,
a digital camera frequently includes a dedicated digital
image-processing unit (or chip) to convert the raw data from the
image sensor into a color-corrected image in a standard image file
format. Image processing in general includes any form of signal
processing for which the input is an image, such as a photograph or
video frame. The output of image processing may be either an image
or a set of characteristics or parameters related to the image.
Most image-processing techniques involve treating the image as a
two-dimensional signal and applying standard signal-processing
techniques to it.
[0022] Digital images can be coded (or compressed) to reduce or
remove irrelevance and redundancy from the image data to improve
storage and/or transmission efficiency. For example,
general-purpose compression generally includes entropy encoding to
remove statistical redundancy from data. However, entropy encoding
is frequently not very effective for image data without an image
model that attempts to represent a signal in a form that is more
readily compressible. Such models exploit the subjective redundancy
of images (and video). A motion model that estimates and
compensates for motion can also be included to exploit significant
temporal redundancy usually found in video.
[0023] An image encoder usually processes image data in blocks of
samples. Each block can be transformed (e.g., with a discrete
cosine transform) into spatial frequency coefficients. Energy in
the transformed image data tends to be concentrated in a few
significant coefficients; other coefficients are usually close to
zero or insignificant. The transformed image data can be quantized
by dividing each coefficient by an integer and discarding the
remainder, typically leaving very few non-zero coefficients, which
can readily be encoded with an entropy encoder. In video, the
amount of data to be coded can be reduced significantly if the
previous frame is subtracted from the current frame.
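As a concrete illustration of the transform-and-quantize step
described above, the following Python sketch applies a
two-dimensional DCT to an 8.times.8 block and quantizes the
coefficients; the block size and the flat quantization step are
illustrative assumptions, not parameters taken from this disclosure.

    # Illustrative sketch (not from the disclosure): DCT transform and
    # quantization of one 8x8 block of image samples.
    import numpy as np
    from scipy.fft import dctn

    def transform_and_quantize(block, q_step=16):
        # Transform the block into spatial frequency coefficients, then
        # divide each coefficient by an integer and discard the remainder.
        coeffs = dctn(block.astype(np.float64), norm="ortho")
        return np.trunc(coeffs / q_step).astype(np.int32)

    # A smooth block yields very few non-zero coefficients after
    # quantization, which is what makes entropy coding effective.
    block = np.tile(np.linspace(100, 120, 8), (8, 1))
    print(np.count_nonzero(transform_and_quantize(block)), "non-zero of 64")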
[0024] Digital image processing often also includes some form of
filtering intended to improve the quality of an image, such as by
reducing noise and other unwanted artifacts. Image noise can be
generally defined as random variation of brightness or color
information in an image that is not present in the imaged object.
Image noise
is usually an aspect of electronic noise, which can be produced by
the sensor and/or other circuitry of a capture device. Image noise
can also originate during quantization. In video, noise can also
refer to the random dot pattern that is superimposed on the picture
as a result of electronic noise. Interference and static are other
forms of noise, in the sense that they are unwanted, which can
affect transmitted signals.
[0025] Smoothing filters attempt to preserve important patterns in
an image, while reducing or eliminating noise or other fine-scale
structures. Many different algorithms can be implemented in filters
to smooth an image. One of the most common algorithms is the
"moving average", often used to try to capture important trends in
repeated statistical surveys. Noise filters, for example, generally
attempt to determine whether the actual differences in pixel values
constitute noise or real photographic detail, and average out the
former while attempting to preserve the latter. However, there is
often a tradeoff made between noise removal and preservation of
fine, low-contrast detail that may have characteristics similar to
noise. Other filters (e.g., a deblocking filter) can be applied to
improve visual quality and prediction performance, such as by
smoothing the sharp edges that can form between macroblocks when
block-coding techniques are used.
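For illustration, a minimal moving-average smoothing filter might
look like the following Python sketch (the 3.times.3 window size is
an arbitrary assumption); it makes the tradeoff noted above concrete,
since the same averaging that suppresses noise also blurs fine,
low-contrast detail.

    # Illustrative sketch: moving-average ("box") smoothing, in which
    # each output pixel is the mean of a small window centered on the
    # corresponding input pixel.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def moving_average(image, size=3):
        return uniform_filter(image.astype(np.float64), size=size)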
[0026] Image textures can also be calculated in image processing to
quantify the perceived texture of an image. Image texture data
provides information about the spatial arrangement of color or
intensities in an image or a selected region of an image. The use
of edge detection to determine the number of edge pixels in a
specified region helps determine a characteristic of texture
complexity. After edges have been found, the direction of the edges
can also be applied as a characteristic of texture and can be
useful in determining patterns in the texture. These directions can
be represented as an average or in a histogram. Image textures may
also be useful for classification and segmentation of images. In
general, there are two primary types of segmentation based on image
texture: region-based and boundary-based. Region-based segmentation
generally attempts to group or cluster pixels together based on
shared texture properties, while boundary-based segmentation attempts
to separate pixels along edges between regions that have different
texture properties. Though image texture is not
always a perfect measure for segmentation, it can be used
effectively along with other measures, such as color, to facilitate
image segmentation.
[0027] In 3-D imaging, an image may be accompanied by a depth map
that contains information corresponding to a third dimension of the
image, indicating the distances of objects in the scene from the
viewpoint. In this sense, depth is a broad term indicative of any
type of measurement within a given image. Each depth value in a
depth map can correspond to a pixel in an image, which can be
correlated with other image data (e.g., intensity values). Depth
maps may be used for virtual view synthesis in 3-D video systems
(e.g., 3DTV) or for gesture recognition in human-computer
interaction (e.g., MICROSOFT KINECT).
[0028] From a video coding perspective, depth maps may also be used
for segmenting images into multiple regions, usually along large
depth discontinuities. Each region may then be encoded separately,
with possibly different parameters. Segmenting each image into
foreground and background is one example, in which foreground
objects in closer proximity to the viewpoint are differentiated
from background objects that are relatively far away from the
viewpoint. Such segmentation can be especially meaningful for
Telepresence and video conferencing, in which scenes comprise
primarily meeting participants, i.e., people.
[0029] However, merely using depth maps for image segmentation does
not fully exploit the information to optimize image coding. In
general, pixels within a region have been treated equally in coding
after segmentation, regardless of their locations in the region with
respect to other regions. In the foreground-background case, for
example, a block of pixels in a color image is encoded as either
foreground or background, which lacks a fine-grained approach for
improving image coding using depth.
[0030] In accordance with embodiments disclosed herein,
communication system 100 can overcome this shortcoming (and others)
by providing depth-guided image filtering. More specifically,
communication system 100 can provide a system and method for
processing a sequence of images using depth maps that are generated
in correspondence to the images. Depth maps and texture data of
images can be used to develop a filter, which can be applied to the
images. Such a system and method may be particularly advantageous
for a conferencing environment such as communication system 100, in
which images are encoded under a bitrate constraint and transported
over a network, but the filter may also be applied advantageously
independent of image encoding.
[0031] At its most general level, the system and method described
herein may include receiving an image and a depth map, such as from
a 3-D camera, and filtering the image according to the depth map
such that details in the image that correspond to depth
discontinuity and intensity variation can be preserved while
substantially reducing or eliminating noise in the image. When
coupled with a video encoder, the image may be further filtered
such that details of objects closer to a viewpoint are preserved
preferentially over objects further away, which may be particularly
useful when the bitrate for encoding the image is constrained. For
a block-based video encoder such as H.264 or MPEG-4, for example,
the filtering may operate to reduce coding artifacts, such as
artifacts introduced by quantization errors. When coupled with a
video decoder, depth-guided filtering may further operate to conceal
errors from partial image corruption, such as might occur with data
loss during transmission.
[0032] Before turning to some of the additional operations of
communication system 100, a brief discussion is provided about some
of the infrastructure of FIG. 1. Endpoint 112a may be used by
someone wishing to participate in a video conference in
communication system 100. The term "endpoint" may be inclusive of
devices used to initiate a communication, such as a switch, a
console, a proprietary endpoint, a telephone, a bridge, a computer,
a personal digital assistant (PDA), a laptop or electronic
notebook, an iPhone, an iPad, a Google Droid, any other type of
smartphone, or any other device, component, element, or object
capable of initiating voice, audio, or data exchanges within
communication system 100. In some embodiments, image capture
devices may be integrated with an endpoint, particularly mobile
endpoints.
[0033] Endpoint 112a may also be inclusive of a suitable interface
to an end user, such as a microphone, a display, or a keyboard or
other terminal equipment. Endpoint 112a may also include any device
that seeks to initiate a communication on behalf of another entity
or element, such as a program, a database, or any other component,
device, element, or object capable of initiating a voice or a data
exchange within communication system 100. Data, as used herein,
refers to any type of video, numeric, voice, or script data, or any
type of source or object code, or any other suitable information in
any appropriate format that may be communicated from one point to
another. Additional details relating to endpoints are provided
below with reference to FIG. 2A and FIG. 2B.
[0034] In operation, multipoint manager element 120 can be
configured to establish or foster a video session between one
or more end users, which may be located in various other sites and
locations. Multipoint manager element 120 can also coordinate and
process various policies involving endpoints 112a-112c. In general,
multipoint manager element 120 may communicate with endpoints
112a-112c through any standard or proprietary conference control
protocol. Multipoint manager element 120 includes a switching
component that determines which signals are to be routed to
individual endpoints 112a-112c. Multipoint manager element 120 can
also determine how individual end users are seen by others involved
in the video conference. Furthermore, multipoint manager element
120 can control the timing and coordination of this activity.
Multipoint manager element 120 can also include a media layer that
can copy information or data, which can be subsequently
retransmitted or simply forwarded along to one or more endpoints
112a-112c.
[0035] FIG. 2A is a simplified block diagram illustrating
additional details that may be associated with a video processing
unit 204a, in which a depth-guided filter is coupled with a video
encoder to encode an image. In this example embodiment, the video
processing unit is integrated with image capture device 114a, which
can also include an image sensor unit 202. Video processing unit
204a may further include a processor 206a, a memory element 208a, a
video encoder 210 with a depth-guided filter, a filter parameter
controller 212a, and a rate controller 214. Video processing unit
204a may be associated with a proprietary element, a server, a
network appliance, or any other suitable component, device, module,
or element capable of performing the operations discussed
herein.
[0036] Video processing unit 204a can also be configured to store,
aggregate, process, export, and/or otherwise maintain image data
and logs in any appropriate format, where these activities can
involve processor 206a and memory element 208a. Video processing
unit 204a is generally configured to receive information as a
signal (e.g., an image signal or a video signal) from image sensor
unit 202 via some connection. In the example embodiment of FIG. 2A,
video processing unit 204a is integrated with image capture device
114a, but it may alternatively be implemented independently of image
capture device 114a, or it may be integrated with other components in
communication system 100, such as endpoint 112a or multipoint
manager element 120.
[0037] Video processing unit 204a may interface with image sensor
unit 202 through a wireless connection, or via one or more cables
or wires that allow for the propagation of signals between these
two elements. These devices can also receive signals from an
intermediary device, a remote control, etc., where the signals may
leverage infrared, Bluetooth, WiFi, electromagnetic waves
generally, or any other suitable transmission protocol for
communicating data (e.g., potentially over a network) from one
element to another. Virtually any control path can be leveraged in
order to deliver information between video processing unit 204a and
image sensor unit 202. Transmissions between these two sets of
devices can be bidirectional in certain embodiments such that the
devices can interact with each other (e.g., dynamically, real-time,
etc.). This would allow the devices to acknowledge transmissions
from each other and offer feedback, where appropriate. Any of these
devices can be consolidated with each other, or operate
independently based on particular configuration needs. For example,
a single box may encompass audio and video reception capabilities
(e.g., a set-top box that includes video processing unit 204a,
along with camera and microphone components for capturing video and
audio data).
[0038] In general terms, video processing unit 204a is a video
element, which is intended to encompass any suitable unit, module,
software, hardware, server, program, application, application
program interface (API), proxy, processor, field programmable gate
array (FPGA), erasable programmable read only memory (EPROM),
electrically erasable programmable ROM (EEPROM), application
specific integrated circuit (ASIC), digital signal processor (DSP),
or any other suitable device, component, element, or object
configured to process video data. This video element may include
any suitable hardware, software, components, modules, interfaces,
or objects that facilitate the operations thereof. This may be
inclusive of appropriate algorithms and communication protocols
that allow for the effective exchange (reception and/or
transmission) of data or information.
[0039] In yet other embodiments, though, video processing unit 204a
may be a network element, or may be integrated with a network
element. A network element generally encompasses routers, switches,
gateways, bridges, load balancers, firewalls, servers, processors,
modules, or any other suitable device, component, element, or
object operable to exchange information in a network environment.
This includes proprietary elements equally, which can be
provisioned with particular features to satisfy a unique scenario
or a distinct environment.
[0040] Video processing unit 204a may share (or coordinate) certain
processing operations with other video elements. Memory element
208a may store, maintain, and/or update data in any number of
possible manners. In a general sense, the arrangement depicted
herein may be more logical in its representations, whereas a
physical architecture may include various
permutations/combinations/hybrids of these elements.
[0041] In one example implementation, video processing unit 204a
may include software (e.g., as part of video encoder 210) to
achieve certain operations described herein. In other embodiments,
operations may be provided externally to any of the aforementioned
elements, or included in some other video element or endpoint
(either of which may be proprietary) to achieve this intended
functionality. Alternatively, several elements may include software
(or reciprocating software) that can coordinate in order to achieve
the operations, as outlined herein. In still other embodiments, any
of the devices illustrated herein may include any suitable
algorithms, hardware, software, components, modules, interfaces, or
objects that facilitate operations disclosed herein, including
depth-guided image filtering.
[0042] In the context of a video conference, image sensor unit 202
can capture participants and other scene elements as a sequence of
images 216 and depth maps 218. Each image 216 and depth map 218 can
be passed as a signal to video encoder 210 in video processing unit
204a. Video encoder 210 includes a depth-guided filter that can be
used to filter and encode the signal into a bit stream 220, which
can be transmitted to another endpoint in a video conference, for
example. Video encoder 210 may operate under rate controller 214 by
receiving instructions from rate controller 214 and providing rate
controller 214 with rate statistics of the video encoding. Filter
parameter controller 212a may also receive instructions from rate
controller 214 and determine parameters for the depth-guided filter
based on image 216 and depth map 218. The encoded bit stream may
include compressed image data, depth values, and/or parameters from
filter parameter controller 212a, for example.
[0043] FIG. 2B is a simplified block diagram illustrating
additional details that may be associated with a video processing
unit 204b, in which a depth-guided filter is coupled with a video
decoder to decode an image. Video processing unit 204b is similar
to video processing unit 204a in that it also includes a respective
processor 206b, a memory element 208b, and a filter parameter
controller 212b, and may also be configured to store, aggregate,
process, export, and/or otherwise maintain image data and logs in
any appropriate format, where these activities can involve
processor 206b and memory element 208b. It also shares many of the
other characteristics of video processing unit 204a, including
characteristics of a video element and/or a network element in
various embodiments. In some embodiments, elements of video
processing unit 204a and video processing unit 204b may be
integrated into a single unit. Video processing unit 204b differs
from video processing unit 204a to the extent that it includes a
video decoder 222 and operates to decode a bit stream into an image
that can be rendered on a suitable output device, such as display
115a, based on received depth values and filter parameters. Video
processing unit 204b is generally configured to receive information
from a bit stream via some connection, which may be a wireless
connection, or via one or more cables or wires that allow for the
propagation of signals.
[0044] FIG. 3 is a simplified block diagram illustrating additional
details that may be associated with another embodiment of a video
processing unit in which a depth-guided filter is coupled with a
video encoder as a pre-filter. In this example embodiment, video
processing unit 302 is integrated with image capture device 114a,
which includes image sensor unit 202. Video processing unit 302 may be
similar to video processing unit 204a in that it also includes a
respective processor 304, a memory element 308, a filter parameter
controller 312, and a rate controller 314, and it may also be
configured to store, aggregate, process, export, and/or otherwise
maintain image data and logs in any appropriate format, where these
activities can involve processor 304 and memory element 308. It
also shares many of the other characteristics of video processing
unit 204a, including characteristics of a video element and/or a
network element in various embodiments.
[0045] Video processing unit 302 is generally configured to receive
information as a signal from image sensor unit 202 via some
connection, which may be a wireless connection, or via one or more
cables or wires that allow for the propagation of signals. Video
processing unit 302 applies depth-guided filtering to an image 316
based on depth map 318 before it is encoded with video encoder 320,
such that edges in image 316 that correspond to depth discontinuity
and intensity variations can be preserved while noise in image 316
is removed or reduced.
[0046] FIG. 4 is a simplified block diagram illustrating additional
details that may be associated with another embodiment of a video
encoder, in which a depth-guided filter is an in-loop filter. A
video encoder 402 is generally configured to receive image
information as a signal via some connection, which may be a
wireless connection, or via one or more cables or wires that allow
for the propagation of signals. In the example embodiment of video
encoder 402, an image can be processed in blocks or macroblocks of
samples. In general, a video encoder can transform each block into
a block of spatial frequency coefficients, divide each coefficient
by an integer, and discard the remainder, such as in a transform
and quantization module 404. The resulting coefficients can then be
encoded, for example, with entropy encoding module 406.
[0047] Prediction (intra/inter prediction module 408) may also be
used to enhance encoding, such as with motion compensation. A
prediction can be formed based on previously encoded data, either
from the current frame (intra-prediction) or from other frames
that have already been coded (inter-prediction). For example,
inverse transform and inverse quantization 410 can be used to
rescale the quantized transform coefficients. Each coefficient can
be multiplied by an integer value to restore its original scale. An
inverse transform can combine the standard basis patterns, weighted
by the rescaled coefficients, to re-create each block of data.
These blocks can be combined together to form a macroblock, and the
prediction can be subtracted from the current macroblock to form a
residual.
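A decoder-side counterpart to the transform sketch shown earlier can
illustrate this rescaling step; as before, the flat integer step is
an assumption made for illustration only.

    # Illustrative sketch: inverse quantization (multiply each
    # coefficient by the integer step to restore its scale) followed by
    # an inverse DCT that recombines the weighted basis patterns into a
    # block of samples. The remainders discarded during quantization
    # survive as quantization error in the reconstructed block.
    import numpy as np
    from scipy.fft import idctn

    def dequantize_and_inverse_transform(qcoeffs, q_step=16):
        return idctn(qcoeffs.astype(np.float64) * q_step, norm="ortho")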
[0048] In a video encoder with an in-loop filter, such as video
encoder 402, a deblocking filter 412 can also be applied to blocks
in decoded video to improve visual quality and prediction
performance by smoothing the sharp edges that can form between
macroblocks when block-coding techniques are used. In video encoder
402, a depth-guided filter 414 can be applied to an image after
inverse transform and inverse quantization, deblocking filtering,
and prediction compensation. By fusing depth information with
texture data, the depth-guided filtering can help reduce coding
artifacts, such as those that can be introduced by quantization
errors.
[0049] FIG. 5 is a simplified block diagram illustrating additional
details that may be associated with another embodiment of a video
decoder, in which a depth-guided filter is an in-loop filter. A
video decoder 502 is generally configured to receive information
from a bit stream via some connection. Entropy decoding, inverse
transform, and inverse quantization 504 can be used to decode and
rescale quantized transform coefficients from the bit stream. Each
coefficient can be multiplied by an integer value to restore its
original scale. An inverse transform can combine the standard basis
patterns, weighted by the rescaled coefficients, to re-create each
block of data. These blocks can be combined together to form a
macroblock. In a video decoder with an in-loop filter, such as
video decoder 502, a deblocking filter 506 can also be applied to
decoded blocks to improve visual quality and prediction performance
by smoothing the sharp edges that can form between macroblocks when
block-coding techniques are used. In video decoder 502, a
depth-guided filter 508 can be applied after deblocking filter 506.
Depth-guided filter 508 may also be advantageous for concealing
errors if part of an image is corrupted, such as by data loss during
transmission.
[0050] One example form of a depth-guided filter may be defined as:

$$\mathrm{DGF}(p) = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_d}(D_p - D_q)\, G_{\sigma_r}(I_p - I_q)\, I_q$$

[0051] In the equation, $p$ is the center pixel to be filtered, and
$q$ is a neighboring pixel in the window $S$. $D_p$, $I_p$ and $D_q$,
$I_q$ denote the depth and intensity values of the two pixels,
respectively; $G_{\sigma_d}$ and $G_{\sigma_r}$ are two zero-mean
Gaussian distributions with standard deviations $\sigma_d$ and
$\sigma_r$, which control the strength of Gaussian smoothing
according to depth and texture, respectively; and $W_p$ is a
normalization factor:

$$W_p = \sum_{q \in S} G_{\sigma_d}(D_p - D_q)\, G_{\sigma_r}(I_p - I_q)$$
[0052] In general, a filter window comprises a finite group of
pixels around a pixel to be filtered (the "center" pixel). The
window is typically symmetric about the center pixel, but may also
be asymmetrical in some embodiments. The window can be square (e.g.,
3×3, 5×5, etc.), but can also be circular or another shape. The
window $S$ may include pixels from a spatial or temporal region
(e.g., a neighborhood) or both. In this example embodiment of a
depth-guided filter, all pixels are given the same weight regardless
of their spatial or temporal distance to the center pixel $p$, but
in other embodiments different weights can be assigned to
neighboring pixels in accordance with their distance to the center
pixel. Such weights may also follow a Gaussian distribution with
respect to the distance of the pixels. Alternatively, other
distributions, such as the Gibbs distribution (also known as the
Gibbs measure) or user-defined piece-wise linear/non-linear
functions, may be used instead of the Gaussian distribution.
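The equations in paragraphs [0050] and [0051] translate directly
into code. The following Python sketch is a naive, unoptimized
reading of the filter under the equal-spatial-weight assumption
described above; the function and parameter names are ours, not the
disclosure's.

    # Sketch of the depth-guided filter DGF(p) defined above: a weighted
    # mean over the window S, with weights from two zero-mean Gaussians
    # over depth differences (sigma_d) and intensity differences (sigma_r).
    import numpy as np

    def depth_guided_filter(intensity, depth, sigma_d, sigma_r, radius=2):
        I = intensity.astype(np.float64)
        D = depth.astype(np.float64)
        H, W = I.shape
        out = np.empty_like(I)
        for y in range(H):
            for x in range(W):
                # Square spatial window S, clipped at the image borders.
                y0, y1 = max(0, y - radius), min(H, y + radius + 1)
                x0, x1 = max(0, x - radius), min(W, x + radius + 1)
                Iq, Dq = I[y0:y1, x0:x1], D[y0:y1, x0:x1]
                # G_sigma_d(Dp - Dq) * G_sigma_r(Ip - Iq), up to constant
                # factors that cancel in the W_p normalization below.
                w = (np.exp(-((D[y, x] - Dq) ** 2) / (2 * sigma_d ** 2)) *
                     np.exp(-((I[y, x] - Iq) ** 2) / (2 * sigma_r ** 2)))
                out[y, x] = (w * Iq).sum() / w.sum()  # divide by W_p
        return out

Pixels across a depth or intensity discontinuity receive near-zero
weight and so contribute little to the mean, which is how edges
survive while low-variation regions are smoothed; extending the
window $S$ with co-located pixels from neighboring frames would give
the temporal variant mentioned above.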
[0053] A depth-guided filter as described herein may be applied to
remove spatial and temporal noise that is inherent in the production
of images (e.g., from the video camera) and to reduce coding errors
such as quantization errors. By taking into account both depth and
texture variations, the filtering operations can be performed such
that pixels with small depth and intensity variations (and
therefore likely to be noise) will be smoothed, whereas those with
large depth or intensity variations can be preserved. As a result,
details corresponding to contours and texture-rich areas in the
image may be perceptually enhanced.
[0054] The strength of the smoothing effect of each Gaussian can be
controlled by its standard deviation (i.e., $\sigma$): the larger the
standard deviation, the stronger the smoothing, and consequently the
fewer details that are preserved after filtering. In a video
encoding context, stronger smoothing means there is less information
to be encoded. Therefore, by adjusting $\sigma_d$ according to the
depth of the pixels, one may preferentially preserve more detail for
objects that are closer to the viewpoint and less for objects that
are farther away. When operating under a bit-rate controller, the
adjustment may also account for the bit rate that is available for
encoding the current image.
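The disclosure does not specify a mapping from pixel depth (or
available bit rate) to $\sigma_d$; the linear ramp in the following
Python sketch is purely a hypothetical choice to make the idea
concrete.

    # Hypothetical mapping (not from the disclosure): a small sigma_d
    # for near pixels preserves their detail; a large sigma_d for far
    # pixels smooths them more. rate_scale could be raised by a rate
    # controller when fewer bits are available for the current image.
    import numpy as np

    def adaptive_sigma_d(depth, sigma_min=1.0, sigma_max=8.0,
                         rate_scale=1.0):
        span = max(float(depth.max() - depth.min()), 1e-9)
        farness = (depth - depth.min()) / span  # 0 = nearest, 1 = farthest
        return rate_scale * (sigma_min + farness * (sigma_max - sigma_min))

A per-pixel $\sigma_d$ like this would be evaluated at the center
pixel $p$ inside the filter loop sketched earlier.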
[0055] When included in a video decoding loop, the depth-guided
filter may also operate to conceal decoding errors that may be
caused by, for example, loss of image data during transmission over
a network, assuming that the corresponding depth data was correctly
received. For example, the error concealment process may include
first copying candidate image data from multiple locations in
previously decoded images, selecting the candidate whose strong
edges best align with discontinuities in the received depth map, and
then applying the depth-guided filter to the image formed by the
preceding steps.
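A sketch of that concealment process follows; the disclosure does
not define the edge-alignment measure, so the gradient-magnitude
correlation used to score candidates here is an illustrative
assumption.

    # Sketch: choose, among patches copied from previously decoded
    # images, the one whose strong edges best align with the
    # discontinuities of the correctly received depth map for the
    # corrupted region.
    import numpy as np

    def edge_magnitude(a):
        gy, gx = np.gradient(a.astype(np.float64))
        return np.hypot(gx, gy)

    def select_concealment_patch(candidates, depth_region):
        depth_edges = edge_magnitude(depth_region)
        scores = [float((edge_magnitude(c) * depth_edges).sum())
                  for c in candidates]
        return candidates[int(np.argmax(scores))]

    # The depth-guided filter is then applied to the image formed by
    # pasting the selected patch into the corrupted region.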
[0056] FIG. 6 is a simplified flowchart 600 illustrating potential
operations that may be associated with example embodiments of a
video encoder and/or decoder according to this disclosure. At 602,
depth values corresponding to pixels of an image may be received.
At 604, intensity values corresponding to the pixels may also be
received. At 606, the image can be filtered as a function of
variations in depth and intensity values between adjacent pixels of
a window. For example, the filtering may include smoothing adjacent
pixels having variations of depth values below a configurable
threshold value. In general, the filtering can preserve pixels
corresponding to depth values closer to a viewpoint
(preferentially) over pixels corresponding to depth values further
away from the viewpoint. At 608, the image can be encoded into a
bit stream for transmission (e.g., over a network interface).
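Steps 602 through 608 can be strung together as in the following
sketch, which reuses the depth_guided_filter function from the
earlier sketch; the encoder call is a stub, since this disclosure
targets standard codecs such as H.264 rather than defining one.

    # Illustrative pipeline for steps 602-608.
    import numpy as np

    def encode_stub(frame):
        # Placeholder for a standard video encoder (e.g., H.264).
        return np.clip(frame, 0, 255).astype(np.uint8).tobytes()

    def process_frame(intensity, depth, sigma_d=4.0, sigma_r=10.0):
        # 602/604: receive depth and intensity values for the image.
        # 606: filter as a function of depth and intensity variations.
        filtered = depth_guided_filter(intensity, depth, sigma_d, sigma_r)
        # 608: encode the filtered image into a bit stream.
        return encode_stub(filtered)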
[0057] At 610, the encoded bit stream may be transmitted, along
with depth information and other codec parameters, which may be
received at 612. At 614, the depth information and other codec
parameters may be used to decode the bit stream into an image. Note
that such depth-guided filtering may provide significant
advantages, some of which have already been discussed. In
particular, a depth map can be used to improve image quality, by
reducing noise and coding errors, for example. Depth-guided
filtering may also provide a fine granular control of image
details.
[0058] In certain example implementations, the image processing
functions outlined herein may be implemented by logic encoded in
one or more tangible media (e.g., embedded logic provided in an
application specific integrated circuit (ASIC), digital signal
processor (DSP) instructions, software (potentially inclusive of
object code and source code) to be executed by a processor, or
other similar machine, etc.). In some of these instances, a memory
element (as shown in FIG. 2A and FIG. 2B) can store data used for
the operations described herein. This includes the memory element
being able to store software, logic, code, or processor
instructions that are executed to carry out the activities
described herein. A processor can execute any type of instructions
associated with the data to achieve the operations detailed herein.
In one example, a processor (e.g., as shown in FIG. 2A and FIG. 2B)
could transform an element or an article (e.g., data) from one
state or thing to another state or thing. In another example, the
activities outlined herein may be implemented with fixed logic or
programmable logic (e.g., software/computer instructions executed
by a processor) and the elements identified herein could be some
type of a programmable processor, programmable digital logic (e.g.,
a field programmable gate array (FPGA), an erasable programmable
read only memory (EPROM), an electrically erasable programmable ROM
(EEPROM)) or an ASIC that includes digital logic, software, code,
electronic instructions, or any suitable combination thereof.
[0059] In certain implementations, a video processing unit (or
other elements of communication system 100) can include software in
order to achieve the depth-guided image filtering outlined herein.
For example, at least some portions of the activities outlined
herein may be implemented in non-transitory logic (i.e., software)
provisioned in, for example, video processing units 204a-204b,
multipoint manager element 120, and/or any of endpoints 112a-112c.
Such a configuration can include one or more instances of video
encoder with depth-guided filter 210/video decoder with
depth-guided filter 222 being provisioned in various locations of
the network. In some embodiments, one or more of these features may
be implemented in hardware, provided external to the aforementioned
elements, or consolidated in any appropriate manner to achieve the
intended functionalities. Moreover, the aforementioned elements may
include software (or reciprocating software) that can coordinate in
order to achieve the operations as outlined herein. In still other
embodiments, these elements may include any suitable algorithms,
hardware, software, components, modules, interfaces, or objects
that facilitate the operations thereof.
[0060] Furthermore, components of communication system 100
described and shown herein may also include suitable interfaces for
receiving, transmitting, and/or otherwise communicating data or
information in a network environment. Additionally, some of the
processors and memories associated with the various components may
be removed, or otherwise consolidated such that a single processor
and a single memory location are responsible for certain
activities. In a general sense, the arrangements depicted in the
FIGURES may be more logical in their representations, whereas a
physical architecture may include various permutations,
combinations, and/or hybrids of these elements. It is imperative to
note that countless possible design configurations can be used to
achieve the operational objectives outlined here. Accordingly, the
associated infrastructure has a myriad of substitute arrangements,
design choices, device possibilities, hardware configurations,
software implementations, equipment options, etc.
[0061] The elements discussed herein may be configured to keep
information in any suitable memory element (random access memory
(RAM), ROM, EPROM, EEPROM, ASIC, etc.), software, hardware, or in
any other suitable component, device, element, or object where
appropriate and based on particular needs. Any of the memory items
discussed herein (e.g., database, table, cache, key, etc.) should
be construed as being encompassed within the broad term "memory
element." Similarly, any of the potential processing elements,
modules, and machines described herein should be construed as being
encompassed within the broad term "processor."
[0062] Note that with the examples provided above, interaction may
be described in terms of two, three, or four elements or
components. However, this has been done for purposes of clarity and
example only. In certain cases, it may be easier to describe one or
more of the functions or operations by only referencing a limited
number of components. It should be appreciated that the principles
described herein are readily scalable and can accommodate a large
number of components, as well as more complicated/sophisticated
arrangements and configurations. Accordingly, the examples provided
should not limit the scope or inhibit the broad teachings provided
herein as potentially applied to a myriad of other architectures.
Additionally, although described with reference to particular
scenarios, where a particular module is provided within an element,
these modules can be provided externally, or consolidated and/or
combined in any suitable fashion. In certain instances, such
modules may be provided in a single proprietary unit.
[0063] It is also important to note that operations in the appended
diagrams illustrate only some of the possible scenarios and
patterns that may be executed by, or within elements of
communication system 100. Some of these operations may be deleted
or removed where appropriate, or these operations may be modified
or changed considerably without departing from the scope of
teachings provided herein. In addition, a number of these
operations have been described as being executed concurrently with,
or in parallel to, one or more additional operations. However, the
timing of these operations may be altered considerably. The
preceding operational flows have been offered for purposes of
example and discussion. Substantial flexibility is provided in that
any suitable arrangements, chronologies, configurations, and timing
mechanisms may be provided without departing from the teachings
provided herein.
[0064] Although a system and method for depth-guided image
filtering has been described in detail with reference to particular
embodiments, it should be understood that various other changes,
substitutions, and alterations may be made hereto without departing
from the spirit and scope of this disclosure. For example, although
the previous discussions have focused on video conferencing
associated with particular types of endpoints, handheld devices
that employ video applications could readily adopt the teachings of
the present disclosure. For example, iPhones, iPads, Android
devices, personal computing applications (i.e., desktop video
solutions, Skype, etc.) can readily adopt and use the depth-guided
filtering operations detailed above. Any communication system or
device that encodes video data would be amenable to the features
discussed herein.
[0065] It is also imperative to note that the systems and methods
described herein can be used in any type of imaging or video
application. This can include standard video rate transmissions,
adaptive bit rate (ABR), variable bit rate (VBR), constant bit rate
(CBR), or any other
imaging technology in which image encoding can be utilized.
Numerous other changes, substitutions, variations, alterations, and
modifications may be ascertained by one skilled in the art, and it
is intended that the present disclosure encompass all such changes,
substitutions, variations, alterations, and modifications as
falling within the scope of the appended claims.
[0066] In order to assist the United States Patent and Trademark
Office (USPTO) and, additionally, any readers of any patent issued
on this application in interpreting the claims appended hereto,
Applicant wishes to note that the Applicant: (a) does not intend
any of the appended claims to invoke paragraph six (6) of 35 U.S.C.
section 112 as it exists on the date of the filing hereof unless
the words "means for" or "step for" are specifically used in the
particular claims; and (b) does not intend, by any statement in the
specification, to limit this disclosure in any way that is not
otherwise reflected in the appended claims.
* * * * *