U.S. patent application number 15/180985 was published by the patent office on 2016-10-13 for a method of efficiently implementing a MPEG-4 AVC deblocking filter on an array of parallel processors.
The applicant listed for this patent is Amazon Technologies, Inc. The invention is credited to Brian G. Lewis.
Publication Number: 20160301943
Application Number: 15/180985
Document ID: /
Family ID: 47017456
Publication Date: 2016-10-13

United States Patent Application 20160301943
Kind Code: A1
Lewis; Brian G.
October 13, 2016
METHOD OF EFFICIENTLY IMPLEMENTING A MPEG-4 AVC DEBLOCKING FILTER
ON AN ARRAY OF PARALLEL PROCESSORS
Abstract
A method for implementing a deblocking filter including the
steps of (A) reading pixel values for a plurality of macroblocks of
an unfiltered video frame from an input buffer into a working
buffer, where the working buffer has dimensions determined by a
predefined input region of the deblocking filter and a portion of
the working buffer forms a filter output region of the deblocking
filter, (B) sequentially processing the pixel values in the working
buffer through a plurality of filter processing stages using an
array of software-configurable general purpose parallel processors,
where each of the plurality of filter processing stages operates on
a respective set of the pixel values in the working buffer, and (C)
writing filtered pixel values from the filter output region of the
working buffer to an output buffer after the plurality of filter
processing stages are completed.
Inventors: Lewis; Brian G. (Portland, OR)

Applicant: Amazon Technologies, Inc. (Reno, NV, US)

Family ID: 47017456
Appl. No.: 15/180985
Filed: June 13, 2016
Related U.S. Patent Documents

Application Number   Filing Date    Patent Number
13656877             Oct 22, 2012   9369725
15180985
Current U.S. Class: 1/1

Current CPC Class: H04N 19/439 20141101; H04N 19/86 20141101; H04N 19/182 20141101; H04N 19/436 20141101

International Class: H04N 19/436 20060101 H04N019/436; H04N 19/86 20060101 H04N019/86; H04N 19/182 20060101 H04N019/182; H04N 19/42 20060101 H04N019/42
Claims
1. A method comprising: processing pixel values in a working buffer
in parallel using an array of parallel processors by processing the
pixel values in the working buffer through a plurality of filter
processing stages including: a first filter processing stage
configured to filter pixel values across vertical edges in pixel
values in the working buffer; and a second filter processing stage
configured to filter pixel values across horizontal edges in pixel
values in the working buffer; determining that the plurality of filter
processing stages have been processed; and writing the filtered
pixel values from the working buffer to an output buffer.
2. The method of claim 1, wherein each of the plurality of filter
processing stages computes filtered pixel values by applying a
predefined filter on the pixel values in the working buffer.
3. The method of claim 1, wherein for the first filter processing
stage, the array of parallel processors processes rows of the pixel
values within the pixel values in the working buffer
concurrently.
4. The method of claim 1, wherein for the second filter processing
stage, the array of parallel processors processes columns of the
pixel values within the pixel values in the working buffer
concurrently.
5. The method of claim 1, wherein each of the filter processing
stages writes the filtered pixels back to a respective stage output
region of the working buffer.
6. The method of claim 1, wherein each of the filter processing
stages processes a different filter region of a plurality of filter
regions of the working buffer.
7. The method of claim 1, wherein the plurality of filter processing
stages form a MPEG-4 part 10 compliant deblocking filter and the
filtered pixel values are computed using an adaptive multi-tap filter
applied at right angles to edges being filtered.
8. An apparatus comprising: a working buffer configured to store
pixel values during filter processing; an output buffer configured
to store filtered pixel values; and an array of parallel
processors, the array of parallel processors being configured to
process the pixel values in the working buffer through a plurality
of filter processing stages including: a first filter processing
stage configured to filter pixel values across vertical edges in
pixel values in the working buffer; and a second filter processing
stage configured to filter pixel values across horizontal edges in
pixel values in the working buffer.
9. The apparatus of claim 8, wherein the array of parallel
processors are configured to compute filtered pixel values for each
of the plurality of filter processing stages by applying a
predefined filter on the pixel values in the working buffer.
10. The apparatus of claim 8, wherein for the first filter
processing stage, the array of parallel processors processes rows
of pixel values within the pixel values in the working buffer
concurrently.
11. The apparatus of claim 8, wherein for the second filter
processing stage, the array of parallel processors processes
columns of pixel values within the pixel values in the working
buffer concurrently.
12. The apparatus of claim 8, wherein the array of parallel
processors are configured to write the filtered pixels back to a
respective stage output region of the working buffer for each of
the plurality of filter processing stages.
13. The apparatus of claim 8, wherein the array of parallel
processors are configured to process a different filter region of a
plurality of filter regions of the working buffer for each of the
plurality of filter processing stages.
14. The apparatus of claim 8, wherein the plurality of filter
processing stages form a MPEG-4 part 10 compliant deblocking filter
and the filtered pixel values are computed using an adaptive
multi-tap filter applied at right angles to edges being filtered.
15. A computing system comprising: at least one processor; and a
memory device including instructions that, when executed by the at
least one processor, cause the computing system to: process pixel
values in a working buffer in parallel using an array of parallel
processors by processing the pixel values in the working buffer
through a plurality of filter processing stages including: a first
filter processing stage configured to filter pixel values across
vertical edges in pixel values in the working buffer; and a second
filter processing stage configured to filter pixel values across
horizontal edges in pixel values in the working buffer; determine
that the plurality of filter processing stages have been processed; and
write the filtered pixel values from the working buffer to an
output buffer.
16. The computing system of claim 15, wherein each of the plurality
of filter processing stages computes filtered pixel values by
applying a predefined filter on the pixel values in the working
buffer.
17. The computing system of claim 15, wherein for the first filter
processing stage, the array of parallel processors processes rows
of the pixel values within the pixel values in the working buffer
concurrently.
18. The computing system of claim 15, wherein for the second filter
processing stage, the array of parallel processors processes
columns of the pixel values within the pixel values in the working
buffer concurrently.
19. The computing system of claim 15, wherein each of the filter
processing stages writes the filtered pixels back to a respective
stage output region of the working buffer.
20. The computing system of claim 15, wherein the plurality of
filter processing stages form a MPEG-4 part 10 compliant deblocking
filter and the filtered pixel values are computed using an adaptive
multi-tap filter applied at right angles to edges being filtered.
Description
[0001] This application is a continuation of U.S. application Ser.
No. 13/656,877 filed Oct. 22, 2012 entitled "METHOD OF EFFICIENTLY
IMPLEMENTING A MPEG-4 AVC DEBLOCKING FILTER ON AN ARRAY OF PARALLEL
PROCESSORS" which is a continuation of U.S. application Ser. No.
12/342,229, issued as U.S. Pat. No. 8,295,360 on Oct. 23, 2012, all
of which are incorporated herein by reference.
FIELD OF THE INVENTION
[0002] The present invention relates to video compression generally
and, more particularly, to a method and/or architecture for
efficiently implementing a MPEG-4 AVC deblocking filter on an array
of parallel processors.
BACKGROUND OF THE INVENTION
[0003] Referring to FIG. 1, a diagram is shown illustrating
divisions of a video frame 10 in accordance with the MPEG-4 part 10
advanced video coding (AVC) standard. The MPEG-4 part 10 standard
defines a method for video compression that operates on rectangular
groups of pixels. The type of compression performed by the MPEG-4
part 10 AVC standard is generally referred to as "block-based"
compression. Each frame 10 of video is divided into a number of
macroblocks 12. Each of the macroblocks 12 is further divided into
transform blocks 14. The transform blocks 14 can also be referred
to as sub-blocks.
[0004] As part of the video compression process, a prediction for
the pixels in each macroblock 12 is generated based upon either (i)
pixels from adjacent macroblocks 12 in the same frame 10 or (ii)
pixels from previous frames in the video sequence. Differences
between the prediction and the actual pixel values for the
macroblock 12 are referred to as residual values (or just
residuals). The residual values for each transform block 14 are
converted from spatial-domain to frequency-domain coefficients. The
frequency-domain coefficients are then divided down to reduce the
range of values needed to represent the frequency-domain
coefficients through a process known as quantization. Quantization
allows much higher compression ratios, but at the cost of
discarding information about the original video sequence. Once the
data has been quantized, the frames of the original sequence can no
longer be reconstructed exactly.
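The lossy nature of quantization described above can be illustrated with a small numeric sketch. The step size and coefficient values below are purely hypothetical, not drawn from the standard:

```python
# Hypothetical frequency-domain coefficients and a hypothetical
# quantization step size (illustrative values only).
coeffs = [312, -47, 15, -4, 2, 0, 0, 1]
step = 20

# Quantization divides the coefficients down to a smaller range of
# integers, which need fewer bits to represent.
quantized = [round(c / step) for c in coeffs]

# Reconstruction rescales the quantized values, but the discarded
# remainders cannot be recovered.
reconstructed = [q * step for q in quantized]
error = [c - r for c, r in zip(coeffs, reconstructed)]

print(quantized)      # [16, -2, 1, 0, 0, 0, 0, 0]
print(reconstructed)  # [320, -40, 20, 0, 0, 0, 0, 0]
print(error)          # the information lost to quantization
```

Note that the small coefficients quantize to zero entirely, which is where much of the compression gain, and much of the visible blocking at block edges, comes from.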
[0005] The quantized coefficients and a description of how to
generate the macroblock prediction pixel values constitute the
compressed video stream. When video frames are reconstructed from
the compressed stream, the compression sequence is reversed. The
coefficients for each transform block 14 are converted back to
spatial residuals. A prediction for each macroblock is generated
based on the description in the stream and added to the residuals
to reconstruct the pixels for the macroblock. Because of the
information lost in quantization, however, the reconstructed pixels
differ from the original ones. One of the goals of video
compression is to minimize the perceived differences as much as
possible for a given compression ratio.
[0006] In block-based video compression the differences in the
reconstructed images tend to be most obvious at the edges of the
macroblocks 12 and the transform blocks 14. Because the blocks are
compressed and reconstructed separately, errors tend to accumulate
differently on each side of block boundaries and can produce a
noticeable seam. To counteract the production of a noticeable seam,
the MPEG-4 part 10 video compression standard includes a deblocking
filter.
[0007] A definition of the deblocking filter can be found in
Section 8.7 of the MPEG-4 part 10 video compression standard. The
deblocking filter blends pixel values across macroblock and
transform block edges in the reconstructed frames to reduce the
discontinuities that result from quantization. Filtering takes
place as part of both the compression and decompression processes.
Filtering is performed after the video frames are reconstructed,
but before the reconstructed frames are used to predict macroblocks
in other frames. Because filtered frames are used for prediction,
the filtering process must be exactly the same during compression
and decompression or errors will accumulate in the decompressed
video frames.
[0008] The definition of the deblocking filter in the MPEG-4 part 10
specification requires that macroblocks be filtered in raster
order (i.e., from left to right and top to bottom of the video
frame).
[0009] Because the macroblocks are filtered in raster order, the
inputs to the deblocking filter include pixels that were already
filtered as part of a previous macroblock. The inclusion of already
filtered pixels as inputs to the deblocking filter implies
sequential processing of the macroblocks in a frame in the
specified raster order. The MPEG-4 part 10 deblocking filter
improves both the perceived quality of the reconstructed image and
the compression ratio, but requires additional processing. When
performed sequentially, the deblocking filter processing can
significantly increase the time required to encode and decode each
frame.
[0010] It would be desirable to filter an arbitrary number of
macroblock-size areas in a single video frame at the same time to
reduce the time required to filter the frame.
SUMMARY OF THE INVENTION
[0011] The present invention concerns a method for implementing a
deblocking filter comprising the steps of (A) reading pixel values
for a plurality of macroblocks of an unfiltered video frame from an
input buffer into a working buffer, where the working buffer has
dimensions determined by a predefined input region of the
deblocking filter and a portion of the working buffer forms a
filter output region of the deblocking filter, (B) sequentially
processing the pixel values in the working buffer through a
plurality of filter processing stages using an array of
software-configurable general purpose parallel processors, where
each of the plurality of filter processing stages operates on a
respective set of the pixel values in the working buffer, and (C)
writing filtered pixel values from the filter output region of the
working buffer to an output buffer after the plurality of filter
processing stages are completed.
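The three steps (A), (B) and (C) above can be sketched as a control-flow outline. The flat pixel list, the slice-based output region and the helper names are simplifications of the 2-D buffers, not part of the claimed method:

```python
def run_deblocking_pipeline(input_pixels, stages, output_slice):
    """Sketch of steps (A)-(C) with a 1-D buffer standing in for the
    2-D working buffer."""
    # (A) Read unfiltered pixel values into a working buffer whose
    # size is fixed by the predefined filter input region.
    working = list(input_pixels)

    # (B) Sequentially run each filter processing stage; each stage
    # operates in place on its respective set of working-buffer pixels.
    for stage in stages:
        stage(working)

    # (C) Only after all stages complete, write the filter output
    # region of the working buffer to the output buffer.
    return working[output_slice]

def add_one(buf):
    """Trivial in-place placeholder for a real filter stage."""
    for i in range(len(buf)):
        buf[i] += 1

out = run_deblocking_pipeline([10, 20, 30, 40], [add_one, add_one],
                              slice(1, 3))
# out == [22, 32]
```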
[0012] The objects, features and advantages of the present
invention include providing a method and/or architecture for
efficiently implementing a MPEG-4 AVC deblocking filter on an array
of parallel processors that may (i) use multiple processors to
filter an arbitrary number of macroblock-size areas in a single
video frame at the same time, (ii) reduce the time taken to filter
a frame, (iii) utilize separate storage buffers for unfiltered and
filtered video frames, (iv) utilize a sequence of stages to
generate output pixel values, (v) alternate between filtering
across vertical edges and filtering across horizontal edges, (vi)
process multiple columns of pixels at the same time when filtering
across horizontal edges, and/or (vii) process multiple rows of
pixels at the same time when filtering across vertical edges.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] These and other objects, features and advantages of the
present invention will be apparent from the following detailed
description and the appended claims and drawings in which:
[0014] FIG. 1 is a diagram illustrating division of a video frame
into macroblocks and transform blocks;
[0015] FIG. 2 is a diagram illustrating an array of parallel
processors on which a filter in accordance with an example
embodiment of the present invention may be implemented;
[0016] FIG. 3 is a diagram illustrating filter input and output
regions in accordance with an example embodiment of the present
invention;
[0017] FIG. 4 is a diagram illustrating an arrangement of 3×3
filter regions in a video frame;
[0018] FIG. 5 is a diagram illustrating a number of filter
processing steps in accordance with an example embodiment of the
present invention; and
[0019] FIG. 6 is a diagram illustrating simultaneous filtering of
pixel rows.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0020] In an example embodiment of the present invention, multiple
processors may be used to filter an arbitrary number of
macroblock-size areas in a single video frame at the same time. For
example, deblocking filter logic as specified in ISO/IEC 14496-10
(MPEG-4 part 10, Advanced Video Coding) may be implemented using
parallel processors. The use of parallel processors allows
simultaneous processing of all pixel blocks in a video frame. The
use of multiple processors may reduce the amount of time taken to
filter the frame in proportion to the number of processors used.
Examples of systems in which a filter in accordance with an
embodiment of the present invention may be implemented can be found
in co-pending non-provisional U.S. patent applications: U.S. Ser.
No. 12/342,145, entitled "Video Encoder Using GPU," Attorney Docket
No. 9974.00001, filed Dec. 23, 2008, U.S. Ser. No. 12/058,636,
entitled "Video Encoding and Decoding Using Parallel Processors,"
Attorney Docket No. 58013/8:1, filed Mar. 28, 2008, now U.S. Pat.
No. 8,121,197; U.S. Ser. No. 12/189,735, entitled "A Method For
Efficiently Executing Video Encoding Operations On Stream Processor
Architectures," Attorney Docket No. 58013-7:2, filed Aug. 11, 2008;
each of which is herein incorporated by reference in its
entirety.
[0021] Referring to FIG. 2, a diagram is shown illustrating a
system in accordance with an example embodiment of the present
invention. In one example, separate storage buffers may be utilized
for input (unfiltered) and output (filtered) video frames.
[0022] For example, an architecture 100 in accordance with an
example embodiment of the present invention may comprise a parallel
processor array (PPA) 102 and storage medium 104. The storage
medium 104 may contain an input buffer 106 and an output buffer
108. The parallel processor array 102 may comprise, in one example,
a plurality of single instruction multiple data (SIMD) processors
110. The plurality of SIMD processors 110 may be configured to
perform deblocking filter processing on a video frame using a
filter kernel. A set of program instructions for the parallel
processor array 102 may be referred to as a kernel. In one example,
the filter kernel may implement a deblocking filter that is
compliant with the MPEG-4 part 10 AVC standard using the parallel
processor array 102. In one example, the plurality of SIMD
processors 110 may read unfiltered pixels from the input buffer 106
and write filtered pixels to the output buffer 108.

Referring to FIG. 3, a diagram is shown illustrating example filter
input and output regions in accordance with an example embodiment of
the present invention. A portion of a video frame that may be filtered
separately from, and in parallel with, the remainder of the video
frame may be referred to as a filter region. The minimum filterable
region is the smallest such region that can be filtered independently
of the rest of the video frame. In one example, the minimum filterable
region may be the size of a single macroblock.
However, specific implementations may be configured to filter larger
regions, provided the larger regions contain an integer number of
minimum filterable regions. The arrangement illustrated in FIG. 3, for
example, uses filter regions that are three times as high and wide as
the minimum filterable region. Input pixels for each filter region may
be read from a filter input region 120 of the input
buffer 106. Corresponding output pixels for the filtered area may
be written to a filter output region 122 within the output buffer
108. Macroblock boundaries 124 (thinner solid lines) and transform
block boundaries 126 (dotted lines) are shown for reference.
[0023] The dimensions of the filter input region 120 and the filter
output region 122 are shown relative to an upper-left macroblock.
In one example, the filter input region 120 may have dimensions of
21 horizontal pixels by 29 vertical pixels. An upper left corner of
the filter input region 120 may be six pixels down and six pixels
right of an upper-left corner of the upper-left macroblock. The
filter output region 122 may be 16 by 16 pixels. An upper-left
corner of the filter output region 122 may be located three pixels
right of and nine pixels below the upper-left corner of the filter
input region 120. In one example, each of the filter regions in a
video frame may be filtered by a separate processor 110.
Alternatively, multiple filter regions may be filtered using a
single processor 110.
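The region geometry described above can be captured directly in code. The coordinate convention (x right, y down, zero-based) is an assumption; the dimensions and offsets are those given in the text:

```python
from collections import namedtuple

Rect = namedtuple("Rect", "x y width height")

def filter_regions(mb_x, mb_y):
    """Given the upper-left corner of the upper-left macroblock,
    return the filter input region (120) and filter output region
    (122) as described in connection with FIG. 3."""
    # Input region: 21 x 29 pixels, with its upper-left corner six
    # pixels down and six pixels right of the macroblock corner.
    input_region = Rect(mb_x + 6, mb_y + 6, 21, 29)
    # Output region: 16 x 16 pixels, three pixels right of and nine
    # pixels below the input region's upper-left corner.
    output_region = Rect(input_region.x + 3, input_region.y + 9, 16, 16)
    return input_region, output_region
```

One consequence worth noting: the 16 by 16 output region lies entirely inside the 21 by 29 input region, so every output pixel can be computed from the pixels read into the working buffer.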
[0024] Referring to FIG. 4, a diagram is shown illustrating an
example video frame 200 with filter regions arranged into 3 by 3
groups. Boundaries 202 of the 3 by 3 filter regions (indicated by
thicker lines) generally do not align with the macroblock
boundaries 204 or the boundaries of the video frame. Because the
boundaries do not align, partial filter regions may occur at the
edges of the video frame 200. If separate processors are allocated
to the partial regions, the processors generally have fewer pixels
to filter and may be underutilized when compared to processors
allocated to whole regions. The underutilization of the processors
allocated to the partial regions may be rectified by assigning
partial region pairs to the same processor to increase the pixels
available for filtering. For example, a simple approach may be to
form partial region pairs 206 by pairing partial regions from the
top and bottom of the same column, or the left and right of the
same row.

Referring to FIG. 5, diagrams are shown illustrating a
number of filter processing steps in accordance with an example
embodiment of the present invention. The processing for each filter
region in a video frame may be performed in a number of filtering
steps or stages. In one example, six stages 300, 302, 304, 306, 308
and 310 may be implemented. The stages 300, . . . , 310 may be used
in sequence to process each filter region. Processing for each
filter region in a video frame may be performed using a working
buffer 312 having dimensions similar to the filter input region 120
(described in connection with FIG. 3). The working buffer 312 may
be loaded initially with pixels from the input video frame buffer
106 at the beginning of filter processing. The order in which pixel
values are computed within the working buffer 312 is generally
important for the filter to generate output pixel values compliant
with the MPEG-4 part 10 AVC specification. Each of the stages 300,
. . . , 310 reads pixel values from the working buffer 312,
computes filtered pixel values based upon the pixel values read,
and writes the filtered pixels back to a respective stage output
region 314 of the working buffer 312. The filtering of individual
pixels may be performed according to the process described in
section 8.7 of the MPEG-4 part 10 AVC specification. A general
description of the filtering process may be as follows. Pixel
values may be computed using an adaptive multi-tap filter applied
at right angles to the edge being filtered. Up to 4 pixels on each
side of the edge may be used as the filter input, and filtered
values may be computed for up to three pixels on each side of the
edge. The specific filter technique used may be determined based
upon the type of edge being filtered (e.g., macroblock or transform
block), the input pixel values and the prediction method and degree
of quantization used to generate the input pixels. The respective
stage output regions 314 for each of the stages 300, . . . , 310 are
illustrated with thick borders.
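Purely as a loose illustration of blending across one edge, a fixed-tap integer smoothing of the two pixels nearest the edge might look like the following. This is a simplified sketch, not the normative adaptive filter of section 8.7:

```python
def blend_edge(p1, p0, q0, q1):
    """Blend the two pixels nearest an edge (p0 | q0) using one
    neighbor on each side, with integer rounding. Much simpler than,
    but in the spirit of, the standard's adaptive multi-tap filters."""
    new_p0 = (p1 + 2 * p0 + q0 + 2) >> 2
    new_q0 = (p0 + 2 * q0 + q1 + 2) >> 2
    return new_p0, new_q0

# A hard 10 | 50 step across the edge is softened to 20 | 40,
# while a flat area passes through unchanged.
print(blend_edge(10, 10, 50, 50))  # (20, 40)
print(blend_edge(7, 7, 7, 7))      # (7, 7)
```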
[0025] When processing of the particular filter region is complete,
the pixels from a filter output region 318 of the working buffer
312 may be written out to the output video frame buffer 108. The
output pixels from a particular stage generally form the input
pixels to the following stage. In some cases, pixels that lie
outside the final filter output region 318 may be processed to
generate intermediate results that may be used to compute the
pixels within the filter output region 318. In one example, the
stages 300, 304 and 308 may filter data across vertical
macroblock/transform block edges, and the stages 302, 306 and 310
may filter data across horizontal edges. Dotted lines are shown in
FIG. 5 to generally illustrate edges 316 that may influence the
output pixel values for each step/stage.
[0026] Within each of the filtering stages, pixels are generally
processed sequentially from left to right for vertical edges, and
from top to bottom for horizontal edges. The filtering of
individual pixels is generally compliant with section 8.7 of the
MPEG-4 part 10 specification. For example, pixel values may be
computed using an adaptive multi-tap filter applied at right angles
to the edge being filtered. Up to 4 pixels on each side of the edge
may be used as the filter input, and filtered values may be
computed for up to three pixels on each side of the edge. The
specific filter technique used may be determined based on the type
of edge being filtered (e.g., macroblock or transform block), the
input pixel values and the prediction method and degree of
quantization used to generate the input pixels.
[0027] In one example, the location and size of the respective
output regions 314 for each of the filtering stages 300, . . . ,
310 may be summarized as in the following TABLE 1:
TABLE 1

  Region             X    Y    Width   Height
  Stage 300          3    0    5       29
  Stage 302 Upper    3    3    7       5
  Stage 302 Lower    3    19   7       5
  Stage 304 Upper    7    3    12      7
  Stage 304 Lower    7    19   12      7
  Stage 306 Upper    3    7    7       12
  Stage 306 Lower    3    23   7       5
  Stage 308 Upper    7    10   12      9
  Stage 308 Lower    7    26   12      3
  Stage 310          10   7    9       18
[0028] All dimensions in TABLE 1 are in pixels. X and Y values
represent the location of the upper-left corner of the particular
region as measured from the upper-left corner of the working buffer
(zero-based).
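TABLE 1 can be expressed as a data structure and sanity-checked against the 21 by 29 working buffer dimensions; the check itself is an addition for illustration, not part of the specification:

```python
# (x, y, width, height) of each stage output region, zero-based from
# the upper-left corner of the working buffer, per TABLE 1.
STAGE_OUTPUT_REGIONS = {
    "stage_300":       (3, 0, 5, 29),
    "stage_302_upper": (3, 3, 7, 5),
    "stage_302_lower": (3, 19, 7, 5),
    "stage_304_upper": (7, 3, 12, 7),
    "stage_304_lower": (7, 19, 12, 7),
    "stage_306_upper": (3, 7, 7, 12),
    "stage_306_lower": (3, 23, 7, 5),
    "stage_308_upper": (7, 10, 12, 9),
    "stage_308_lower": (7, 26, 12, 3),
    "stage_310":       (10, 7, 9, 18),
}

# Working buffer dimensions from the example of FIG. 3.
BUFFER_WIDTH, BUFFER_HEIGHT = 21, 29

def regions_fit():
    """Every stage output region must lie inside the working buffer."""
    return all(
        x >= 0 and y >= 0
        and x + w <= BUFFER_WIDTH and y + h <= BUFFER_HEIGHT
        for x, y, w, h in STAGE_OUTPUT_REGIONS.values()
    )
```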
[0029] The method in accordance with embodiments of the present
invention generally allows additional parallelism within each of
the stages 300, . . . , 310 when multiple processors or processors
with single instruction/multiple data (SIMD) capability are used.
For example, in the steps that filter across vertical edges (e.g.,
stages 300, 304 and 308), all rows of pixels may be processed at
the same time. In the steps that filter across horizontal edges
(e.g., stages 302, 306 and 310), all columns of pixels may be
processed at the same time.
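The per-stage parallelism described above can be sketched as mapping an independent per-row function over the stage output region; here a thread pool stands in for the SIMD lanes of the parallel processor array, and the per-row operation is a placeholder, not the actual filter:

```python
from concurrent.futures import ThreadPoolExecutor

def filter_row(row):
    """Placeholder per-row operation for a vertical-edge stage. Each
    row depends only on its own pixels, so rows are independent."""
    return [(a + b) // 2 for a, b in zip(row, row[1:] + row[-1:])]

def filter_vertical_edges(rows):
    # All rows of the stage output region may be processed at the same
    # time; a horizontal-edge stage would map over columns instead.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(filter_row, rows))
```

Because each row (or column) is independent within a stage, the concurrent result is identical to processing the rows one at a time, which is what makes this dimension of parallelism safe.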
[0030] Referring to FIG. 6, a diagram is shown illustrating the
level of parallelism for the case of the third filtering step 304
in FIG. 5. Rows of pixels 404 that may be processed simultaneously
in the step 304 are shown within the stage output region 402 for
the filtering step. The filter input region 400 is shown for
reference.
[0031] As used herein, the terms "simultaneous" and
"simultaneously" are meant to describe events that share some
common time period, but the term is not meant to be limited to
events that begin at the same point in time, end at the same point
in time, or have the same duration.
[0032] The functions illustrated in the diagrams of FIGS. 5 and 6
may be implemented using one or more of a conventional general
purpose processor, digital computer, microprocessor,
microcontroller, RISC (reduced instruction set computer) processor,
CISC (complex instruction set computer) processor, SIMD (single
instruction multiple data) processor, signal processor, central
processing unit (CPU), arithmetic logic unit (ALU), video digital
signal processor (VDSP) and/or similar computational machines,
programmed according to the teachings of the present specification,
as will be apparent to those skilled in the relevant art(s).
Appropriate software, firmware, coding, routines, instructions,
opcodes, microcode, and/or program modules may readily be prepared
by skilled programmers based on the teachings of the present
disclosure, as will also be apparent to those skilled in the
relevant art(s). The software is generally executed from a medium
or several media by one or more of the processors of the machine
implementation.
[0033] The present invention may also be implemented by the
preparation of ASICs (application specific integrated circuits),
Platform ASICs, FPGAs (field programmable gate arrays), PLDs
(programmable logic devices), CPLDs (complex programmable logic
device), sea-of-gates, RFICs (radio frequency integrated circuits),
ASSPs (application specific standard products) or by
interconnecting an appropriate network of conventional component
circuits, as is described herein, modifications of which will be
readily apparent to those skilled in the art(s).
[0034] The present invention thus may also include a computer
product which may be a storage medium or media and/or a
transmission medium or media including instructions which may be
used to program a machine to perform one or more processes or
methods in accordance with the present invention. Execution of
instructions contained in the computer product by the machine,
along with operations of surrounding circuitry, may transform input
data into one or more files on the storage medium and/or one or
more output signals representative of a physical object or
substance, such as an audio and/or visual depiction. The storage
medium may include, but is not limited to, any type of disk
including floppy disk, hard drive, magnetic disk, optical disk,
CD-ROM, DVD and magneto-optical disks and circuits such as ROMs
(read-only memories), RAMs (random access memories), EPROMs
(erasable programmable ROMs), EEPROMs (electrically erasable
programmable ROMs), UVPROMs (ultra-violet erasable ROMs), Flash memory,
magnetic cards, optical cards, and/or any type of media suitable
for storing electronic instructions.
[0035] The elements of the invention may form part or all of one or
more devices, units, components, systems, machines and/or
apparatuses. The devices may include, but are not limited to,
servers, workstations, storage array controllers, storage systems,
personal computers, laptop computers, notebook computers, palm
computers, personal digital assistants, portable electronic
devices, battery powered devices, set-top boxes, encoders,
decoders, transcoders, compressors, decompressors, pre-processors,
post-processors, transmitters, receivers, transceivers, cipher
circuits, cellular telephones, digital cameras, positioning and/or
navigation systems, medical equipment, heads-up displays, wireless
devices, audio recording, storage and/or playback devices, video
recording, storage and/or playback devices, game platforms,
peripherals and/or multi-chip modules. Those skilled in the
relevant art(s) would understand that the elements of the invention
may be implemented in other types of devices to meet the criteria
of a particular application.
[0036] While the invention has been particularly shown and
described with reference to the preferred embodiments thereof, it
will be understood by those skilled in the art that various changes
in form and details may be made without departing from the spirit
and scope of the invention.
* * * * *