U.S. patent application number 11/063387 was published by the patent office on 2005-07-07 for optical processing. This patent application is currently assigned to Lenslet Ltd. Invention is credited to Efraim Goldenberg, Naim Konforti, David Mendlovic, Aviram Sariel and Zeev Zalevsky.
United States Patent Application: 20050149598
Kind Code: A1
Mendlovic, David; et al.
July 7, 2005
Optical processing
Abstract
A method of performing a DFT (discrete Fourier transform) or a DFT derived transform on data, comprising: providing spatially modulated light having spatial coherence, said spatially modulated light representing the data to be transformed; Fourier transforming said spatially modulated light, using at least one optical element; and compensating for at least one of a scaling effect and a dispersion effect of said at least one optical element, using at least one dispersive optical element.
Inventors: Mendlovic, David (Petach-Tikva, IL); Goldenberg, Efraim (Ashdod, IL); Konforti, Naim (Holon, IL); Zalevsky, Zeev (Rosh-Ha'ayin, IL); Sariel, Aviram (Ramot-Hashavim, IL)
Correspondence Address: REED SMITH, LLP, ATTN: PATENT RECORDS DEPARTMENT, 599 LEXINGTON AVENUE, 29TH FLOOR, NEW YORK, NY 10022-7650, US
Assignee: Lenslet Ltd. (Herzelia Pituach, IL)
Family ID: 26323841
Appl. No.: 11/063387
Filed: February 22, 2005
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
11063387 | Feb 22, 2005 |
09979180 | Feb 25, 2002 |
09979180 | Feb 25, 2002 |
PCT/IL00/00285 | May 19, 2000 |
11063387 | Feb 22, 2005 |
09926547 | Mar 5, 2002 |
09926547 | Mar 5, 2002 |
PCT/IL99/00479 | Sep 5, 1999 |
Current U.S. Class: 708/816; 375/E7.226
Current CPC Class: H04N 19/60 20141101; G06K 9/58 20130101; G06T 7/262 20170101; G06E 3/005 20130101
Class at Publication: 708/816
International Class: G06E 003/00
Foreign Application Data

Date | Code | Application Number
May 19, 1999 | IL | 130038
Jul 25, 1999 | IL | 131094
Sep 5, 1999 | WO | PCT/IL99/00479
Claims
1. A method of performing a DFT (discrete Fourier transform) or a DFT derived transform on data, comprising: providing spatially modulated light having spatial coherence, said spatially modulated light representing the data to be transformed; Fourier transforming said spatially modulated light, using at least one optical element; and compensating for at least one of a scaling effect and a dispersion effect of said at least one optical element, using at least one dispersive optical element.
2. A method according to claim 1, wherein said spatially modulated
light is substantially temporally incoherent.
3. A method according to claim 1, wherein said spatially modulated
light is non-monochromatic light.
4. A method according to claim 1, wherein said spatially modulated
light is a multi-wavelength light including at least one wavelength
gap.
5. A method according to any of claims 1-4, wherein said data is
mirrored and replicated in said modulated light.
6. A method according to any of claims 1-5, wherein said at least
one dispersive element comprises a zone plate.
7. A method according to any of claims 1-6, wherein said at least
one dispersive optical element comprises a zone plate array.
8. A method according to any of claims 1-7, wherein said at least
one optical element comprises a phase conjugate plate.
9. A method according to any of claims 1-7, wherein said at least one optical element comprises a dispersive lens.
10. A method according to any of claims 1-8, wherein said
transformed light encodes a DCT transform of said data.
11. A method according to any of claims 1-10, comprising spatially
modulating light from a light source using an SLM (spatial light
modulator) to produce said spatially modulated light.
12. A method according to any of claims 1-11, comprising detecting
said transformed light using a detector array.
13. A method according to any of claims 1-12, wherein said
transform is a block transform.
14. Apparatus for performing a DFT (discrete Fourier transform) or
a discrete Fourier derived transform, comprising: at least one
reflective element; a detector array; and a spatially modulated
light source, wherein said reflective element, said detector and
said source are arranged so that light from said spatially
modulated light source is reflected from said mirror to be focused
on said array.
15. Apparatus according to claim 14, comprising a lens to focus
said light.
16. Apparatus according to claim 14, wherein said at least one
reflective element comprises a curved mirror that focuses said
light.
17. Apparatus according to any of claims 14-16, wherein said at
least one reflective element is partially transparent and wherein
said spatially modulated light source comprises a primary light
source on an opposite side of said mirror from said detector
array.
18. Apparatus according to claim 17, wherein said spatially
modulated light source comprises an SLM (spatial light modulator)
between said at least one reflective element and said primary light
source.
19. Apparatus according to claim 17, wherein said detector array is
integrated with a reflective SLM (spatial light modulator).
20. Combined detector and spatial modulator apparatus, comprising:
a plurality of detector elements; and a plurality of light
modulating elements interspersed with said detector elements.
21. Apparatus according to claim 20, wherein all of said elements
are formed on a single substrate.
22. Apparatus according to claim 20 or claim 21, wherein said light
modulating elements are reflective.
23. Apparatus for performing a DFT (discrete Fourier transform) or
a discrete Fourier derived transform, comprising: a detector array
having formed therein at least one pinhole; a light source on one
side of said array; at least one processing element; and an SLM
(spatial light modulator) on an opposite side of said array from
said light source, wherein said array, source, processing element
and SLM are so positioned and arranged that light from said light
source passes through said pinhole and is modulated by said SLM
before being processed by said processing element and impinging on
said detector.
24. Apparatus according to claim 23, wherein said SLM is
reflective.
25. A method of separating channels in a multi-channel optical
system, comprising: optically processing a plurality of adjacent
channels using a common optical element to have overlapping output
areas; detecting a result of said processing on an image plane; and
deriving the processing of a single channel of said plurality of
channels by subtracting an effect of the overlapping channels.
26. A method according to claim 25, wherein said optical element
comprises a lens.
27. A method according to claim 26, wherein said plurality of adjacent channels comprises a set of 3×3 channels.
28. A method according to any of claims 25-27, comprising a
plurality of spatially shifting elements associated with at least
some of said channels, to spatially shift said detected result on
said detector plane.
29. A method according to claim 28, wherein said plurality of spatially shifting elements comprises a plurality of prisms.
30. A method according to claim 29, wherein a prism is not
associated with a central channel in a spatial arrangement of said
plurality of channels.
Description
RELATED APPLICATIONS
[0001] This application is a continuation-in-part of PCT
application PCT/IL99/00479, filed Sep. 5, 1999, by applicant
Lenslet Ltd. in the IL receiving office and designating the US, the
disclosure of which is incorporated herein by reference.
FIELD OF THE INVENTION
[0002] The present invention relates to the field of optical
processing and, in some embodiments, to compressing images using
optical components.
BACKGROUND OF THE INVENTION
[0003] Often, the information bandwidth to be transmitted is
greater than the available bandwidth. Therefore, information is
often compressed before it is transmitted (or stored), to reduce
the required bandwidth. For example, the HDTV standard was defined,
at its inception, to include compression. Many types of signals are
compressed, including still images, video and telephone
conversations. The reigning compression standards are JPEG for
still images and MPEG (I, II, III or IV) for video. In actuality,
these standards are standards for the compressed signals; they place no particular requirements on the method for converting uncompressed signals into compressed signals.
[0004] Compression and in some cases decompression are often very
demanding and typically require dedicated hardware. Both JPEG and
MPEG are transform-based methods, in which the uncompressed data is
transformed into a transform space, where the data is represented
by a set of coefficients. It is usually desirable that the
coefficients have less autocorrelation than the image data or even
no autocorrelation at all. Although the DCT transform does not
completely decorrelate the coefficients, the correlation between
them is significantly reduced. In other compression methods, other
transform spaces are used. In transform space, some of the
coefficients have a greater visual and/or other effect on the
image, than other coefficients. To obtain compression, the
coefficients are quantized, with fewer bits being allocated to
those coefficients which have a lesser effect. Typically, a
coefficient is quantized by dividing it by a weight and then
rounding or truncating the result.
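The quantization step described above can be sketched as follows; the weight table here is a hypothetical uniform one for illustration, not one of the standard JPEG tables.

```python
def quantize(block, weights):
    """Quantize a block of transform coefficients: divide each
    coefficient by its per-frequency weight, then round. Coefficients
    with large weights (lesser visual effect) collapse toward zero."""
    return [[round(c / w) for c, w in zip(brow, wrow)]
            for brow, wrow in zip(block, weights)]

# Illustrative 2x2 example: a uniform weight of 8 maps small
# coefficients to zero, which is where the lossy compression comes from.
coeffs = [[16, 9], [3, 2]]
weights = [[8, 8], [8, 8]]
quantized = quantize(coeffs, weights)
```

Dequantization multiplies each value back by its weight; the precision lost to rounding is not recovered.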
[0005] Optical and electro-optical processors have been used in the
art, to a small extent, for computationally demanding applications.
However, with the advent of very fast electronic computer
components and parallel processors, their acceptance has been
limited.
[0006] Performing some types of linear transforms, for example
Fourier transforms, continuous cosine transforms and Walsh
transforms, using optical components is known, for example, as
described in "Cosinusoidal Transforms in White Light", by N. George
and S. Wang, in Applied Optics, Vol. 23, No. 6, Mar. 15, 1984, in
"Hartley Transforms for Hybrid Pattern Matching", by Nomura, K.
Itoh and Y. Ichioka, in Applied Optics, Vol. 29, No. 29, Oct., 10,
1990, in "Lens Design for a White-Light Cosine-Transform Achromat",
by K. B. Farr and S. Wang, in Applied Optics, Vol. 34, No. 1, Jan.
1, 1995 and in "Optical Computing", by D. Feitelson, in a chapter
titled "Optical Image and Signal Processing", pp. 102-104 (which
pages describe general discrete linear transforms using a lenslet
array), and pp. 117-129 (which describe matrix multiplication), MIT
Press 1988, the disclosures of which are incorporated herein by
reference.
SUMMARY OF THE INVENTION
[0007] An aspect of some embodiments of the invention relates to
optical processing architectures. Exemplary architectures are used
for general linear transforms, block transforms, wavelet
transforms, such as the S transform, S+P transform family, other
integer to integer "wavelet-like" transforms, or generally known wavelet transforms (Daubechies, etc.) useful for wavelet compression,
DCT transforms, communication signal processing, and/or image
compression and/or decompression. In some embodiments, pure optical
systems are provided. In other embodiments, hybrid optical and
electronic systems are provided.
[0008] An optical processing system in accordance with an exemplary
embodiment of the invention optionally comprises five stages, an
input which receives the data to be processed, an optional
pre-processing stage which converts the representation of the data
into a presentation more suitable for processing, a processing
stage which performs the processing, an optional post processing
stage which converts the representation of the processed data into
one suitable for output and an output stage which outputs the data.
In an exemplary embodiment of the invention, some or all of the
stages are optical. In some embodiments, one or more electronic or
hybrid electronic and optical stages may be used, for example for
pre-processing the data. Additionally, in some embodiments, only
some of the processing is performed optically, with the balance of
the processing optionally being performed electronically.
[0009] An aspect of some embodiments of the invention relates to
optical block transforms, especially of image data. In an exemplary
embodiment of the invention, an optical component is used to
transform image data in blocks, with each block being transformed
separately. In an exemplary embodiment of the invention, the
transform used is a DCT (Discrete Cosine Transform) transform,
optionally a JPEG-DCT, which is the DCT transform variant used for
JPEG. Optionally, the block size is 8×8, which is a standard block size for many applications. Alternatively, different block sizes may be used, for example 16×16 or 64×64, possibly
with different block sizes and/or block aspect ratios for different
parts of the image. For wavelet transforms, larger blocks are
optionally used.
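The blockwise processing above can be expressed in software terms; `split_blocks` is a hypothetical helper, and the image dimensions are assumed to be multiples of the block size.

```python
def split_blocks(image, bs=8):
    """Split a 2-D image (a list of rows) into a grid of bs x bs
    blocks, each of which is then transformed independently."""
    h, w = len(image), len(image[0])
    return [[[row[x:x + bs] for row in image[y:y + bs]]
             for x in range(0, w, bs)]
            for y in range(0, h, bs)]
```

An optical block-transform element effectively performs this partitioning spatially, with one lenslet or sub-element per block.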
[0010] An aspect of some embodiments of the invention relates to
performing a DCT (Discrete Cosine Transform) using optical
processing, optionally a JPEG-DCT. In an exemplary embodiment of
the invention, a single optical element is used to transform data
from an image domain to a DCT domain. A related aspect is an
optical element which performs discrete wavelet and "integer to
integer" wavelet transforms (such as the S and S+P transforms), for
example using a combination of diffraction gratings and neutral density filters (for weighting sums and differences).
[0011] An aspect of some embodiments of the invention relates to a
block-DCT-transforming lens, optionally a JPEG-DCT performing lens.
In an exemplary embodiment of the invention, such a lens comprises
a two dimensional matrix of groups of optical elements, each such
group performing a DCT on a single block. Optionally, such a group
comprises a lenslet array which performs the DCT directly.
Alternatively, the matrix comprises a matrix of optical elements,
with each optical element performing a DCT transform for a single
block. Alternatively to performing a DCT transform directly, a correspondence between the JPEG DCT and the DFT (Discrete Fourier Transform) may be utilized, so that a Fourier-transforming lens (or optical element or lenslet array) is used. Optionally, optical or electrical components are provided to modify the data and/or the transformed data so that the Fourier lens generates a DCT transform, at least for real image data. Alternatively to a block-DCT lens, a lens for performing other types of block transforms, such as a block-wavelet transform, can be provided.
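The DCT/DFT correspondence invoked here can be made concrete numerically: mirroring and replicating the data, taking a DFT of the doubled sequence, and phase-correcting yields the DCT-II. This is a standard identity, sketched below with a naive DFT for clarity (an optical Fourier element would play the DFT's role).

```python
import cmath

def dct_via_dft(x):
    """DCT-II of x, computed from a DFT of the mirrored-and-replicated
    sequence [x, reversed(x)], then phase-corrected and restricted to
    its real part."""
    N = len(x)
    y = list(x) + list(reversed(x))  # mirror and replicate, length 2N
    # Naive DFT of the mirrored sequence (an FFT would be used in practice).
    Y = [sum(y[n] * cmath.exp(-2j * cmath.pi * k * n / (2 * N))
             for n in range(2 * N)) for k in range(2 * N)]
    # Half-sample phase shift recovers the cosine sums of the DCT-II.
    return [0.5 * (cmath.exp(-1j * cmath.pi * k / (2 * N)) * Y[k]).real
            for k in range(N)]
```

This is why claim 5's mirroring and replication of the data in the modulated light suffices to obtain a cosine transform from a Fourier-transforming element.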
[0012] An aspect of some embodiments of the invention relates to
performing optical motion estimation. In an exemplary embodiment of
the invention, the motion estimation is performed on
block-DCT-transformed data, by comparing DCT coefficients of
neighboring blocks. Optionally, the same hardware is used to perform
DCT for motion estimation and for image compression. Alternatively
or additionally to motion estimation, motion compensation may also
be performed by correcting DCT coefficients of transformed
data.
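A DCT-domain block comparison of the kind described can be sketched with a simple sum-of-absolute-differences score over coefficients; this particular measure is an illustrative choice, not one the text specifies.

```python
def coeff_sad(block_a, block_b):
    """Sum of absolute differences between two blocks' DCT
    coefficients."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_match(reference, candidates):
    """Pick the index of the candidate block whose coefficients are
    closest to the reference block's, as a motion-estimation match."""
    return min(range(len(candidates)),
               key=lambda i: coeff_sad(reference, candidates[i]))
```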
[0013] An aspect of some embodiments of the invention relates to
data compression using optical components. In various exemplary
embodiments of the invention, individual steps of image compression
methods are performed using optical components. In some embodiments
of the invention, multiple sequential steps are implemented using
optical components, possibly without conversion back to electrical
signals in-between steps.
[0014] In an exemplary embodiment of the invention, the data
compressed is image data. Optionally, the compression method is a transform-based method, especially a DCT-based method, such as JPEG
or MPEG. Alternatively or additionally, other types of data
compression which require processing (not spatial zooming) may be
used, for example, entropy encoding. In an exemplary embodiment of
the invention, at least the DCT and/or motion estimation steps used
for the above compression methods are performed optically.
Alternatively, the compression method is a wavelet based
compression method.
[0015] Alternatively or additionally to compression, data
decompression may be effected using optical processing, for example
to perform an inverse DCT.
[0016] An aspect of some embodiments of the invention relates to
direct acquisition of images which are compressed, partially
compressed, pre-processed for rapid compression and/or otherwise
processed. In an exemplary embodiment of the invention, a camera
uses a DCT-transforming lens, which receives light from an imaged
object and projects a transform of the light onto an optical
detector, such as a CCD, for data acquisition. Alternatively, other
types of optical detectors, such as a CMOS detector may be used.
Optionally, but not necessarily, other optical elements are
provided between the DCT lens and the CCD to perform further
optical processing and/or image compression on the data.
Alternatively or additionally, optical and/or electro-optical
elements are provided between the object and the DCT lens to
perform pre-processing on the optical data, for example to change
its data representation scheme, or to better process polychromatic
light. In an exemplary embodiment of the invention, the DCT lens
accepts polychromatic light. Alternatively, color information is
separated out of the light received from the imaged object and the
DCT lens accepts monochromatic light. In an exemplary embodiment of
the invention, the optical processing is used to perform or aid in performing JPEG or MPEG compression. Alternatively or additionally, other compression protocols are performed. Alternatively to a DCT lens, other block-transform lenses may be provided, for example for an S-transform.
[0017] An aspect of some preferred embodiments of the invention
relates to using a continuous Fourier-transform optical system, for
example a Fourier lens, for performing a discrete transform, for
example a Fourier based transform such as a DCT transform. In a
preferred embodiment of the invention, data to the Fourier lens is
matched to a data receptor at the other side of the Fourier lens to
allow a discrete transform to be performed.
[0018] An aspect of some embodiments of the invention relates to applying Fourier based transforms, such as a DCT transform, using incoherent light systems. In some embodiments of the invention, a combination of dispersive elements, in which one compensates for the other, can be used. For example, a pair of conjugate zone plates is used to effect a Fourier transform, by providing a dispersive effect and correcting for the wavelength scaling. Alternatively, a
zone plate combined with a suitable lens is provided. In an
exemplary embodiment of the invention, one or more arrays of
conjugate zone plates are provided in order to create a
multi-channel system. Potential advantages of using incoherent
light include (a) allowing direct processing of an incoming image;
(b) reducing speckle effects; (c) allowing the light to always be
real and non-negative, with the detected signal representing
intensity, which may be more appropriate to cosine transform
applications and to square-law detection devices; and/or (d)
reducing complexity, since incoherent optical systems are often
less sensitive to deformation of the components, such as the
flatness of the spatial light modulator.
[0019] An aspect of some embodiments of the invention relates to
using one or more reflective elements in an image processor, for
example to reduce an optical path. In a processor comprising
generally a source, SLM and processing lens (or other optical
element), one or more of the elements may be reflective, rather
than transmissive. For example, a reflective source may comprise a
source viewed through a pinhole formed in a mirror. Alternatively
or additionally, the SLM may be combined with the source to provide
a spatially modulated source of any type. In some embodiments of
the invention, a mirror may be semi-reflective, in that light
impinging on the mirror from one side is reflected and from the
other side is transmitted. Alternatively or additionally, the
mirror may be a polarizing beam splitter, that selectively reflects
light of a certain polarization. In some embodiments, such a selective mirror is used.
[0020] In some embodiments of the invention, two or more of the
optical elements are integrated into a single element, for example,
the SLM and the lens, the SLM and the detector or the lens and the
detector. In one example, the detector is partially reflective
(and/or a polarizing beam splitter) and is curved, to act as a
lens. A second mirror (optionally polarization affecting) returns
the light processed by the lens effect of the detector, to the
detector, for detection. In some embodiments, two light beams are
thus provided, a processed beam and an unprocessed beam that can be
used as a reference beam for various uses.
[0021] An aspect of some embodiments of the invention relates to
reducing interactions between light from adjacent pixels or pixel
groups. In an exemplary embodiment of the invention, one or more of
the following separation methods are practiced: frequency
separation, spatial separation (optionally with a light absorbing
or light redirecting separator between adjacent pixels),
polarization axis differences, temporal offset and/or their
combinations. Alternatively or additionally, no separation is
practiced. In an exemplary embodiment, a plurality of channels are
processed using a single lens or other optical element. A prism or
other spatially shifting optical element is provided for at least
one of the channels, so that the transform effect of the lens is
offset for that channel. Then, the effects of channel overlap are
calculated or estimated and corrected for.
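A minimal numerical sketch of the overlap correction above, assuming the leakage from each neighboring channel into the detected area is a known, calibrated fraction (the coefficient and helper name here are hypothetical):

```python
def recover_channel(detected, neighbor_outputs, leak=0.1):
    """Estimate a single channel's true output from a detector reading
    that also contains a known fraction of each overlapping neighbor
    channel's output, by subtracting the estimated leakage."""
    return detected - leak * sum(neighbor_outputs)
```

In a 3×3 arrangement such as the one in claim 27, the central channel would subtract the contributions of its eight shifted neighbors.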
[0022] An aspect of some embodiments of the invention relates to
using optical switching technology for transforming data or for
otherwise processing data encoded using light waves. Optionally,
calcite or other bi-refringent materials are used to split light
beams, each original light beam representing a pixel or a part
thereof. The split light beams are then added, subtracted and/or
multiplied by constants to perform the required calculations (such
as a DCT transform or a DWT transform), with the end result of the
addition and subtraction being light waves encoding the transformed
data. Alternatively to calcite, diffractive or refractive optical
elements may be used to split the beams of light.
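The add/subtract processing of split beams described above is exactly the kind of computation performed by the integer-to-integer S transform named earlier; a software sketch of that transform and its exact inverse:

```python
def s_transform(pairs):
    """S transform: replace each pair (a, b) by its truncated mean
    (low-pass) and difference (high-pass); integer in, integer out."""
    return [((a + b) // 2, a - b) for a, b in pairs]

def inverse_s_transform(coeffs):
    """Exactly invert the S transform, recovering the original pairs."""
    out = []
    for low, diff in coeffs:
        a = low + (diff + 1) // 2  # undo the truncated mean
        out.append((a, a - diff))
    return out
```

The truncation in the forward direction loses no information, which is what makes such "wavelet-like" transforms attractive for lossless compression stages.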
[0023] There is thus provided in accordance with an exemplary
embodiment of the invention, a method of performing a DFT (discrete
Fourier transform) or a DFT derived transform on data,
comprising:
[0024] providing spatially modulated light having spatial
coherence, said spatially modulated light representing the data to
be transformed;
[0025] Fourier transforming said spatially modulated light, using at least one optical element; and

[0026] compensating for at least one of a scaling effect and a dispersion effect of said at least one optical element, using at least one dispersive optical element. Optionally, said spatially modulated light is substantially temporally incoherent.
Alternatively, said spatially modulated light is non-monochromatic
light.
[0027] In an exemplary embodiment of the invention, said spatially
modulated light is a multi-wavelength light including at least one
wavelength gap.
[0028] In an exemplary embodiment of the invention, said data is
mirrored and replicated in said modulated light. Alternatively or
additionally, said at least one dispersive element comprises a zone
plate.
[0029] In an exemplary embodiment of the invention, said at least
one dispersive optical element comprises a zone plate array.
Alternatively or additionally, said at least one optical element
comprises a phase conjugate plate.
[0030] Alternatively or additionally, said at least one optical
element comprises a dispersive lens. Alternatively or additionally,
said transformed light encodes a DCT transform of said data.
[0031] In an exemplary embodiment of the invention, the method
comprises spatially modulating light from a light source using an
SLM (spatial light modulator) to produce said spatially modulated
light. Alternatively or additionally, the method comprises
detecting said transformed light using a detector array.
Alternatively or additionally, said transform is a block
transform.
[0032] There is also provided in accordance with an exemplary
embodiment of the invention, apparatus for performing a DFT
(discrete Fourier transform) or a discrete Fourier derived
transform, comprising:
[0033] at least one reflective element;
[0034] a detector array; and
[0035] a spatially modulated light source,
[0036] wherein said reflective element, said detector and said
source are arranged so that light from said spatially modulated
light source is reflected from said mirror to be focused on said
array. Optionally, said apparatus comprises a lens to focus said
light. Alternatively, said at least one reflective element
comprises a curved mirror that focuses said light.
[0037] In an exemplary embodiment of the invention, said at least
one reflective element is partially transparent and wherein said
spatially modulated light source comprises a primary light source
on an opposite side of said mirror from said detector array.
Optionally, said spatially modulated light source comprises an SLM
(spatial light modulator) between said at least one reflective
element and said primary light source. Alternatively or
additionally, said detector array is integrated with a reflective
SLM (spatial light modulator).
[0038] There is also provided in accordance with an exemplary
embodiment of the invention, a combined detector and spatial
modulator apparatus, comprising:
[0039] a plurality of detector elements; and
[0040] a plurality of light modulating elements interspersed with
said detector elements.
[0041] Optionally, all of said elements are formed on a single
substrate. Alternatively or additionally, said light modulating
elements are reflective.
[0042] There is also provided in accordance with an exemplary
embodiment of the invention, apparatus for performing a DFT
(discrete Fourier transform) or a discrete Fourier derived
transform, comprising:
[0043] a detector array having formed therein at least one
pinhole;
[0044] a light source on one side of said array;
[0045] at least one processing element; and
[0046] an SLM (spatial light modulator) on an opposite side of said
array from said light source, wherein said array, source,
processing element and SLM are so positioned and arranged that
light from said light source passes through said pinhole and is
modulated by said SLM before being processed by said processing
element and impinging on said detector. Optionally, said SLM is
reflective.
[0047] There is also provided in accordance with an exemplary
embodiment of the invention, a method of separating channels in a
multi-channel optical system, comprising:
[0048] optically processing a plurality of adjacent channels using
a common optical element to have overlapping output areas;
[0049] detecting a result of said processing on an image plane;
and
[0050] deriving the processing of a single channel of said
plurality of channels by subtracting an effect of the overlapping
channels. Optionally, said optical element comprises a lens.
Alternatively or additionally, said plurality of adjacent channels
comprises a set of 3×3 channels.
[0051] In an exemplary embodiment of the invention, the method
comprises a plurality of spatially shifting elements associated
with at least some of said channels, to spatially shift said
detected result on said detector plane. Optionally, said plurality of spatially shifting elements comprises a plurality of prisms.
Optionally, a prism is not associated with a central channel in a
spatial arrangement of said plurality of channels.
BRIEF DESCRIPTION OF THE DRAWINGS
[0052] The present invention will be more clearly understood from
the following detailed description of some embodiments of the
invention and from the attached drawings, in which:
[0053] FIG. 1 is a flowchart of a baseline method of
JPEG-compliant-compression;
[0054] FIG. 2 is a schematic block diagram of an optical JPEG
compression system, in accordance with an exemplary embodiment of
the invention;
[0055] FIG. 3A is a schematic block diagram of a
matrix-multiplication based optical DCT component, in accordance
with an exemplary embodiment of the invention;
[0056] FIG. 3B is a schematic block diagram of an optical matrix by
vector multiplication component, in accordance with an exemplary
embodiment of the invention;
[0057] FIG. 4A is a schematic diagram of a lens-matrix based 2D DCT
component, in accordance with an exemplary embodiment of the
invention;
[0058] FIG. 4B is a schematic diagram of an optical element for the
lens matrix of FIG. 4A;
[0059] FIG. 4C is a schematic diagram of a lenslet array for the
lens matrix of FIG. 4A;
[0060] FIG. 4D is a schematic illustration of an optical system for
performing a DCT transform using a Fourier lens;
[0061] FIG. 5A is a schematic block diagram of an optical JPEG
compression system, in accordance with another exemplary embodiment
of the invention;
[0062] FIG. 5B is a schematic cross-section of a channel separation
method in accordance with an exemplary embodiment of the
invention;
[0063] FIG. 6 is a schematic flowchart of a base-line method of
MPEG-compliant compression;
[0064] FIG. 7A is a schematic diagram of a direct-compression
camera system, in accordance with an exemplary embodiment of the
invention;
[0065] FIG. 7B is a schematic block diagram of a YUV-separated
implementation of the embodiment of FIG. 7A;
[0066] FIG. 8 is a schematic diagram of a lithographic
implementation of an optical compression system in accordance with
an exemplary embodiment of the invention;
[0067] FIG. 9A is a flowchart for a DIF (decimation in frequency) type of DCT computation;
[0068] FIG. 9B is a schematic figure of a calcite based DCT
transforming optical element, in accordance with an exemplary
embodiment of the invention;
[0069] FIG. 10 is a schematic figure of a detail of FIG. 9B;
[0070] FIG. 11 is a conjugate-zone array based optical processor,
in accordance with an exemplary embodiment of the invention;
[0071] FIG. 12 is a schematic diagram of a polarizing reflective
optical processor, in accordance with an exemplary embodiment of
the invention;
[0072] FIG. 13 is a schematic diagram of a planar reflective
optical processor, in accordance with an exemplary embodiment of
the invention;
[0073] FIG. 14 is a schematic diagram of a sphere based reflective
optical processor, in accordance with an exemplary embodiment of
the invention; and
[0074] FIG. 15 is a schematic diagram of a pin-hole based
reflective optical processor, in accordance with an exemplary
embodiment of the invention.
DETAILED DESCRIPTION OF SOME EMBODIMENTS
[0075] FIG. 1 is a flowchart of a base-line method 20 of
JPEG-compliant compression. Image data is first transformed using
the DCT (Discrete Cosine Transform) (22), to generate a set of
coefficients. These coefficients are then quantized (24). The quantized coefficients are then unfolded from an 8×8 representation to a 64×1 representation ("Zig-Zag", 26).
These quantized coefficients are encoded using a variable-length
encoding scheme (28), zero-run length encoded and then Huffman
encoded (30), to reduce entropy. A compressed data file is then
generated by prefixing the encoded data with header information
(32). Other, similar, methods of JPEG compression are also
known.
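The "Zig-Zag" unfolding in step 26 above can be sketched generically: coefficients are read out along anti-diagonals, alternating direction, so that low-frequency coefficients come first.

```python
def zigzag(block):
    """Unfold a square coefficient block into zig-zag order: traverse
    anti-diagonals (constant i + j), alternating direction, as in
    baseline JPEG's 8x8 -> 64x1 readout."""
    n = len(block)
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [block[i][j] for i, j in order]
```

Grouping coefficients this way puts the long runs of zeros produced by quantization at the end of the vector, where run-length coding is most effective.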
[0076] In accordance with an exemplary embodiment of the invention,
various of the above steps are performed using optical elements,
rather than using electronic or software elements. In the above
described JPEG compression method, the step that is typically most
computationally demanding, is the DCT step. Thus, in an exemplary
embodiment of the invention, the DCT step is performed
optically.
[0077] FIG. 2 is a schematic block diagram of an optical JPEG
compression system 50, in accordance with an exemplary embodiment
of the invention. An electronic input 52 is optionally used to
receive the data to be compressed. This element will generally
depend on the application for which system 50 is used. For example,
if system 50 is implemented on a PC card, electronic input 52 will
generally comprise a PC-compatible bus connector. The acquired
data is then converted into light, using a spatial light source 54.
In an exemplary embodiment of the invention, light source 54
comprises an SLM (Spatial Light Modulator) which modulates a beam
of light that is reflected from it or that is transmitted through
it. Alternatively, source 54 may comprise an array of LEDs, laser
diodes, VCSELs (vertical cavity surface emitting lasers) or other types of
pixelated display devices such as CRT, field effect emitter arrays
and plasma displays.
[0078] The type of light emitted by source 54 is optionally
selected to match an optical DCT unit 56. In some embodiments of
the invention, the light from source 54 is coherent (so a laser
source is optionally used). In other embodiments, the optics do not
require coherent light. In an exemplary embodiment of the
invention, the light is optionally monochromatic. Alternatively,
polychromatic light may be used. In some particular exemplary
embodiments of the invention, multiple frequencies of monochromatic
light are used, for example wherein the frequencies are used to
encode attributes of the data, such as its numerical sign. In an
exemplary embodiment of the invention, the data is encoded using an
analog encoding scheme, for example phase or amplitude.
Alternatively, a digital encoding scheme is used. Possibly, as
described below, the light may be A/D converted from
analog-encoding light into digital-encoding light, for example
after it is transformed.
[0079] Optical DCT unit 56 transforms the light from an image space
to a transform space. Optionally, the transformed light is
projected onto a spatial optical sensor 58, such as a CCD array.
Details of various types of DCT unit 56 and methods of construction
thereof are provided below.
[0080] Data is read out of CCD array 58 and is then quantized,
using a quantizer 60. In an exemplary embodiment of the invention,
the quantization may be performed by setting gain and/or offset
characteristic of the CCD and/or individual elements thereof and/or
controlling the readout of the CCD, for example to provide a
reduced bit-per-pixel ratio. Alternatively, the data is quantized
as it is read off the CCD or after it is read off the CCD.
Alternatively or additionally, the data is quantized by optical
means, such as a second SLM in front of the CCD. The unfolding of
the data may be performed before the quantizing or after the
quantizing. Then, the data is encoded using a variable length
encoding unit 62, Huffman-encoded using a Huffman encoding unit 64
and, finally, a header is attached to the compressed data so that
it meets the JPEG standard. Alternatively or additionally, the data
is encoded using arithmetic coding (optionally performed by an
arithmetic coding unit--not shown).
[0081] As will be described below, additional elements of the
compression system may be replaced with optical units. In some
embodiments of the invention, the different optical units will be
interconnected with electrical circuitry, for example for control,
data management or data conversion. Thus, even if two consecutive
units are embodied using optical means, they may have an
intervening step of optical/electrical conversion and then
electrical/optical conversion. In other embodiments, the processed
light will feed directly from one optical unit to the next. In an
exemplary embodiment of the invention, a system includes both
optical and electronic components and the processing is divided
between the components so they can act in parallel. In one example,
some of the transforming may be performed optically and some
electronically. Such dividing up of work can better utilize all
the elements in a compression/decompression device, especially if
some of the components are dual use, for example DSP
components.
[0082] Optical DCT unit 56 may be implemented in various ways. It
should be noted that when compressing images, the DCT transform
applied is in actuality a block-DCT transform, where each part of
the image is separately transformed.
[0083] FIG. 3A is a schematic block diagram of a
matrix-multiplication based optical DCT component 70, in accordance
with an exemplary embodiment of the invention. The DCT transform
can be presented in matrix form as [DCT]=[C][T][C]^T, where [T] is
the data block and [C] is the cosine basis matrix. Matrix by
matrix multiplication may be performed in many ways, including
using multiple repetitions of vector by matrix multiplication, for
example as described in "Introduction to Fourier Optics", Goodman,
pp. 286, or using direct matrix by matrix multiplication, for
example as described in Feitelson, pp. 126-127 (double or triple
products), optionally using monochromatic coherent light, or as
described in Feitelson, pp. 118, using lenslet arrays, which can
accommodate white light; the disclosures of all of which are
incorporated herein by reference.
[0084] In a vector by matrix embodiment of component 70, a line
data provider 72 provides individual lines or columns of an
8×8 block to a matrix multiplier 74. The DCT transform of a
vector can be performed by multiplying a source vector V by a
convolution matrix C, to obtain a transformed vector T. For each
8×8 block, the lines (or the columns) are individually
transformed and then the result is transformed along the individual
columns (or lines). In an exemplary embodiment of the invention,
the data is row transformed using a first unit 74 and is then
column transformed using a second multiplication unit 74'.
Alternatively, a same unit is used for both the row and column
transforms. Optionally, the transformed row data is accumulated
using a store unit 76. If each one of the rows is transformed in
sequence, the transformed row data may be accumulated using store
76 even if a separate unit 74' is used for column transforms.
[0085] FIG. 3B is a schematic block diagram of an optical
matrix-by-vector multiplication component 80, in accordance with an
exemplary embodiment of the invention. When performing a DCT
transform, negative-valued results may be produced. Multiplication
component 80 separately processes negative- and positive-valued
results, to avoid mis-processing. Mathematically, the
multiplication of a matrix C by a vector V is a linear operation,
so that it can be separated into negative and positive components,
e.g.: C*V=Cp*Vp+Cn*Vn-Cp*Vn-Cn*Vp, where the "n" subscript
indicates negative numbers and the "p" subscript indicates positive
numbers. In the component of FIG. 3B, vector V is separated into
positive and negative values, which are each separately multiplied
by positive or negative valued component matrixes 82 and then
summed using subtractors 84 and an adder 86. In an exemplary
embodiment of the invention, four matrix multiplication units 82
are provided. Alternatively, only two or even only one unit 82 is
used, for example to sequentially process negative and positive
numbers. In general, the source data is all positive, so that the
vector Vn is empty. It is noted that the DCT of the original image
data, which is positive, may be simpler to implement than the DCT
of transformed data, which may be negative.
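The separation identity can be verified numerically. The following sketch (plain Python, illustrative values) splits a small matrix and vector into their positive and negative parts and checks that the four partial products recombine to the direct product:

```python
# Numerical check of the linear separation C*V = Cp*Vp + Cn*Vn - Cp*Vn - Cn*Vp,
# where Xp holds the positive entries of X (zeros elsewhere) and Xn holds
# the magnitudes of the negative entries.  Values are illustrative only.

def split(v):
    pos = [x if x > 0 else 0 for x in v]
    neg = [-x if x < 0 else 0 for x in v]
    return pos, neg

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

C = [[1, -2], [3, -4]]
V = [5, -6]

Cp_rows, Cn_rows = zip(*(split(row) for row in C))
Cp, Cn = list(Cp_rows), list(Cn_rows)
Vp, Vn = split(V)

direct = matvec(C, V)
separated = [pp + nn - pn - np_
             for pp, nn, pn, np_ in zip(matvec(Cp, Vp), matvec(Cn, Vn),
                                        matvec(Cp, Vn), matvec(Cn, Vp))]
assert direct == separated
```

All four partial products involve only non-negative operands, which is what makes them realizable with intensity-encoded light.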
[0086] In an exemplary embodiment of the invention, sign issues are
solved using a bipolar number representation. In a bipolar
representation, each number is designated by two components:
s=[p,n], where s is a general signed number (not necessarily an
integer), and [p,n] are its positive and negative components. s is
retrieved by setting s=p-n. Therefore, the number -5 can be
described by [0,5], [3,8], [10,15], [1,6] or other combinations of
p and n, as long as p, n ≥ 0 and p-n = s.
[0087] The generalized bipolar representation can be adapted to
matrix calculation, by representing each number by a 2×2
matrix of the form
[0088] [ p n ]
[0089] [ n p ].
[0090] For example, the signed product
[ 1 -2 ] [ 1 -1 ]   [ -3  -5 ]
[ 3 -4 ] [ 2  2 ] = [ -5 -11 ]
corresponds to the all-non-negative bipolar product
[ 1 0 0 2 ]   [ 1 0 0 1 ]   [ 1 4  0  5 ]
[ 0 1 2 0 ]   [ 0 1 1 0 ]   [ 4 1  5  0 ]
[ 3 0 0 4 ] × [ 2 0 2 0 ] = [ 3 8  0 11 ]
[ 0 3 4 0 ]   [ 0 2 0 2 ]   [ 8 3 11  0 ]
in which each 2×2 block [ p n ; n p ] encodes the signed value p-n.
[0091] This representation can be extended to triple product matrix
multiplication.
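A sketch of this representation, under the assumption that signed values are expanded entrywise into the 2×2 blocks shown above:

```python
# Sketch of the bipolar [p, n] representation: each signed entry s becomes
# a 2x2 block [[p, n], [n, p]] with s = p - n.  Multiplying the expanded
# (all non-negative) matrices and reading p - n off each block recovers
# the ordinary signed product.

def expand(m):
    """Signed N x N matrix -> non-negative 2N x 2N bipolar matrix."""
    n = len(m)
    out = [[0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            p = max(m[i][j], 0)
            q = max(-m[i][j], 0)
            out[2*i][2*j], out[2*i][2*j+1] = p, q
            out[2*i+1][2*j], out[2*i+1][2*j+1] = q, p
    return out

def collapse(b):
    """Bipolar 2N x 2N matrix -> signed N x N matrix via s = p - n."""
    n = len(b) // 2
    return [[b[2*i][2*j] - b[2*i][2*j+1] for j in range(n)] for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, -2], [3, -4]]
B = [[1, -1], [2, 2]]
assert collapse(matmul(expand(A), expand(B))) == matmul(A, B)
```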
[0092] The [p,n] representation may be implemented using separate
optical beams to represent each of the p and n components.
Alternatively, a single, multi-characteristic beam may be used to
represent both components. In one example, different optical
frequencies are used for the different components. Alternatively or
additionally, different polarizations are used for the different
components. The two components may be separated out after
processing, for example one component being diverted (or copied) to
a different CCD. Alternatively, a single optical detector detects
both components, for example a detector that is sensitive to the
difference between the amplitudes in the two frequencies. Such a
detector may be implemented by electronically subtracting the
output of two adjacent detectors, each detector being sensitive to
a different frequency.
[0093] Alternatively to the method of FIG. 3B, negative numbers may
be dealt with in other ways. In one example, negative and positive
numbers are differentiated by their phase. A diffraction grating
can be used to divert numbers with one phase value to a different
part of a CCD target (or other optical element) than those numbers
with a second phase values. In another example, negative numbers
are encoded using a different frequency than positive numbers. The
different frequencies can be separated using suitable gratings or
other optical elements. Alternatively or additionally, a
self-electro-optical device may use one frequency in order to
modulate the other frequency. Alternatively or additionally, a
frequency sensitive CCD may be used, for example an RGB CCD.
Alternatively or additionally, a CCD may be provided with binary
phase or frequency detection, by providing a controllable polarizer
or spectral filter before the CCD and timing its operation to the
acquisition of positive or negative numbers.
[0094] Alternatively, negative numbers may be managed by biasing
them to be positive, for example, by forcing the results of a DCT
to be in the range [0 . . . 2] instead of [-1 . . . 1] (normalized
values). In practice, if the maximum DC amplitude is A, the DCT
results are shifted by +A, from the range [-A . . . A] to the range
[0 . . . 2A]. In the example (described below) where a DCT is
performed by mirroring the 8×8 datablock into a 16×16
datablock, a strong spatial delta pulse is provided in the middle
of each 16×16 datablock, for example by controlling the SLM.
The effects of this pulse (the bias) are optionally removed using
electronic processing after the data is transformed.
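Numerically, the biasing idea reduces to adding, and later subtracting, a known offset. A minimal sketch with illustrative values:

```python
# Sketch of sign handling by biasing: shift transform results from
# [-A, A] to [0, 2A] before detection (a non-negative range a detector
# can record), then subtract the bias electronically after readout.
# The amplitude A and the coefficient values are illustrative.

A = 8.0                                   # assumed maximum coefficient amplitude
coeffs = [-7.5, -1.0, 0.0, 3.25, 8.0]     # hypothetical DCT results in [-A, A]

biased = [c + A for c in coeffs]          # what the non-negative detector sees
assert all(0.0 <= b <= 2 * A for b in biased)

recovered = [b - A for b in biased]       # bias removal in electronic post-processing
assert recovered == coeffs
```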
[0095] Once the data is multiplied, further processing, such as
sign extraction or as described below, can be achieved, for
example, by performing optical A/D, and then binary operations or
by using electronic components.
[0096] In the above description, a plurality of matrix-by-matrix or
vector-by-matrix operations are performed. The number of actually
provided multiplication units depends on the implementation and
especially on the level of parallelism of the implementation. For
example, in one implementation, all the 8×8 blocks are
processed in parallel. However, within each block, the
multiplications may be performed in parallel (using a plurality of
units if required) or in sequence (reusing a single unit for two
operations, for example for row and for column DCT). Alternatively
or additionally, two or more of the blocks may be processed in
series, for example the two blocks sharing a single 8×8
multiplier. Such sequential processing generally requires
electronic components, such as store 76, to read and store
intermediate results and possibly also for summing up the
individual results.
[0097] In an exemplary embodiment of the invention, each matrix
multiplication unit comprises a series of {SLM, lens, CCD}
sub-systems; each such unit accepts electronic data at one end, converts
it into optical signals, transforms the data using a lens and then
converts the transformed data into electronic signals again.
Alternatively, a single SLM and/or a single CCD may be shared among
several multipliers.
[0098] FIG. 4A is a schematic diagram of a lens-matrix based 2D DCT
component 90, in accordance with an exemplary embodiment of the
invention. Light from an image of an object 92 impinges on a
lens-matrix 94. Array 94 optionally comprises a plurality of
lens-elements 95, each of which performs a DCT on one 8×8
block of image 92. The result of the DCT is recorded by a CCD
96.
[0099] FIG. 4B is a schematic diagram of a single optical element
98 suitable for the lens matrix of FIG. 4A, for performing a DCT.
Optical element 98 is designed so that light emitted by different
portions of the lens corresponds to different coefficients of the
DCT transform of the impinging light. Thus, light corresponding to
a first DCT coefficient is detected by CCD 96 at a point A. Light
corresponding to a second DCT coefficient is detected at a point B.
Typically, at least some of the light emitted by element 98 does not
correspond to a DCT coefficient, due to design considerations. Such
light may be detected, for instance at a point C. The readout of
CCD 96 is optionally configured to account for the correspondence
between DCT coefficients and spatial locations on CCD 96. In an
exemplary embodiment of the invention, a plurality of or even all
of optical elements 98 are combined into a single composite optical
element. Alternatively, a single optical element 98 may be
implemented as a sequence of individual optical elements.
[0100] FIG. 4C is a schematic diagram of a lenslet array 100 for
the lens matrix of FIG. 4A, for performing a DCT. In a lenslet
array, each individual lenslet optionally generates one DCT
coefficient from the impinging light. In one exemplary embodiment,
light from an 8×8 block of the image is received by 64
lenslets, optionally arranged in an 8×8 array. After each
lenslet is a mask having opaque and transmissive portions and a CCD
element is positioned opposite the mask to receive light which
passes through the mask. In an exemplary embodiment, each lenslet
creates an image of the image to be transformed. Each DCT
coefficient d(k,l) is defined as:

$$d(k,l) = \sum_{i=1}^{N} \sum_{j=1}^{N} f(i,j)\, h(k,l;i,j) \qquad (1)$$
[0101] where f is the input and h is the transform kernel.
The opaque and transmissive portions of each of the (k,l) masks are
defined to represent the values of h, in which the transmissiveness
of mask elements for a lenslet (k,l) are defined to match the
relative contribution of those image pixels (i,j) which take part
in determining the (k,l) coefficient. The CCD element sums the
light which passes through the mask, determining the DCT
coefficient.
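Equation (1) amounts to a mask-weighted sum per lenslet. The sketch below uses a toy 4×4 block and a hypothetical separable cosine kernel standing in for h; a physical mask would additionally need h rendered as non-negative transmissivities (for example via the biasing described above).

```python
# Sketch of equation (1): one lenslet's mask weights the image block by
# h(k, l; i, j) and the CCD element sums the transmitted light, yielding
# the (k, l) coefficient.  The block f and the kernel h are illustrative.
import math

N = 4
f = [[(i + 1) * (j + 1) for j in range(N)] for i in range(N)]  # toy image block

def h(k, l, i, j):
    # hypothetical separable cosine kernel standing in for the DCT weights
    return (math.cos(math.pi * (2 * i + 1) * k / (2 * N)) *
            math.cos(math.pi * (2 * j + 1) * l / (2 * N)))

def d(k, l):
    """CCD sum of mask-weighted light for the (k, l) lenslet."""
    return sum(f[i][j] * h(k, l, i, j) for i in range(N) for j in range(N))
```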
[0102] The formulae for a DCT transform and for an inverse DCT
transform of an 8×8 block of image data f(x,y) and an
8×8 block of transform data F(u,v) are, respectively:

$$F(u,v) = \frac{1}{4} C(u) C(v) \sum_{x=0}^{7} \sum_{y=0}^{7} f(x,y) \cos\frac{(2x+1)u\pi}{16} \cos\frac{(2y+1)v\pi}{16}$$

$$f(x,y) = \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} C(u) C(v) F(u,v) \cos\frac{(2x+1)u\pi}{16} \cos\frac{(2y+1)v\pi}{16}$$

[0103] where C(u) and C(v) are $1/\sqrt{2}$ for u,v = 0 and 1
otherwise.
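As a numerical sanity check, the two formulae can be implemented directly and verified to invert one another (an illustrative sketch, not an optical implementation):

```python
# Direct implementation of the 8x8 forward and inverse DCT formulae,
# checked on a random block to confirm they invert one another.
import math, random

def C(u):
    return 1 / math.sqrt(2) if u == 0 else 1.0

def dct2(f):
    return [[0.25 * C(u) * C(v) * sum(
        f[x][y] * math.cos((2*x + 1) * u * math.pi / 16)
                * math.cos((2*y + 1) * v * math.pi / 16)
        for x in range(8) for y in range(8))
        for v in range(8)] for u in range(8)]

def idct2(F):
    return [[0.25 * sum(
        C(u) * C(v) * F[u][v]
        * math.cos((2*x + 1) * u * math.pi / 16)
        * math.cos((2*y + 1) * v * math.pi / 16)
        for u in range(8) for v in range(8))
        for y in range(8)] for x in range(8)]

random.seed(0)
f = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(8)]
g = idct2(dct2(f))
assert all(abs(f[x][y] - g[x][y]) < 1e-9 for x in range(8) for y in range(8))
```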
[0104] In an exemplary embodiment of the invention, a single large
lenslet array is used to implement a matrix of individual lenslet
arrays. In an exemplary embodiment of the invention, the light from
object 92 is focused onto lens matrix 94, so that all of the object
is viewed by each one of the lens elements 95. Alternatively, only
a block area of the light impinges on each lens element, for
example by providing multiple side-by-side lenses, each one viewing
only a portion of object 92. Alternatively, where an SLM is used,
the light which passes through the SLM can be formed of blocks of
non-parallel light, so that block portions of the modulated light
impinge each on a different lens element 95. Alternatively,
especially where individual optical elements are used, the light
from object 92 can be parallel light, so that each optical element
receives parallel light from a single block area. In some cases, a
lens element may receive light from more than one block area, for
example for processing which is beyond the extent of a single block
or to provide an overlap between blocks, for example to solve
calibration problems.
[0105] FIG. 4D is a schematic illustration of an optical system for
performing a DCT transform using a Fourier lens. Although in one
exemplary embodiment of the invention the lens-element directly
performs a DCT, in an alternative exemplary embodiment of the
invention, a correspondence between Fourier transform and DCT is
utilized to perform DCT (or other transforms) using a Fourier
transform lens. Mathematically, a Fourier transform of real and
symmetrical data results in only real (and symmetric) cosine
coefficients. The image data to be compressed is typically real. It
can be made symmetric by mirroring in the SLM. In order to achieve
a discrete transform, the data is optionally provided as an impulse
image, with each image portion being a spatial delta function, each
of which pulses is transformed using a Fourier transform lens. This
type of data provision can be achieved using an SLM with a pinhole
filter. In multi-wavelength based embodiments, different pinholes
may be designated for different wavelengths. Optionally both the
SLM and the CCD are spatially matched according to the following
formula: $\Delta\bar{x} = 0.5\,\Delta x$, which defines the distances
between the delta functions (pinholes) in the SLM, and

$$\Delta u = \frac{\lambda f}{2\,\Delta x\,N},$$

[0106] which defines the distances between the delta-function
receptors in the CCD (these can also be modeled by providing a pinhole
filter in front of the CCD). In these formulae, $\Delta x$ and $\Delta u$
are the intervals between delta functions in the SLM and CCD
respectively, $\lambda$ is the wavelength, f is the focal length, N is
the block size and $\Delta\bar{x}$ is the placement of the delta function
in the interval (phase shift) in the SLM. It can be seen that the
pixel intervals in the CCD and the SLM are not necessarily the
same, which may be implemented by ignoring some of the CCD pixels.
An alternative matching condition is described below.
[0107] In the example of FIG. 4D, an 8×8 block of image 92 is
made symmetric using a doubling and mirroring optical system 93
(alternatively to using an SLM) and then transformed by a Fourier
lens 97. Since the data is mirrored in two dimensions (only one
shown for clarity), an 8×8 block is transformed into a
16×16 block. The result is then combined using a combining
optical system 99, to provide an 8×8 DCT transform. In some
embodiments of the invention, optical system 93, lens 97 and
optical system 99 are combined into a single optical element, thus,
the end result is a single optical element which performs a DCT,
suitable for use in lens-matrix 94. A matrix of such optical
elements may be combined to form a single optical element which
performs a block DCT transform. Alternatively to the optical
systems shown, other constructions can be utilized for mirroring,
doubling and combining. In one example, an image block is first
doubled and arranged as a 2×2 matrix of blocks and then
individual blocks of the 2×2 matrix are flipped, to provide
the symmetry required for the DCT transform (or other type of
transform).
[0108] The correspondence between Fourier transform and DCT can
also be utilized for other optical transform architectures, for
example the matrix-vector multiplication method described above. In
another example, a wavelet transform can be performed by mirroring
data to be anti-symmetric instead of symmetric, as in the DCT
case.
[0109] The above matching condition may be derived using the
following analysis (for a one dimensional case). The following
equation defines the JPEG-DCT which is to be achieved:

$$F(k) = \sum_{n=0}^{N-1=7} f(n)\cos\frac{\pi k(2n+1)}{2N} \qquad (2)$$

[0110] Assuming symmetric input, where every block of 16 samples is
represented as a combination of delta functions, spaced at
intervals of size $\Delta x$, and transmitted from a
$\Delta\bar{x}$ position inside each interval:

$$s(x) = \sum_{n=0}^{N-1=7} f(n)\,\delta(x - n\Delta x - \Delta\bar{x}) + \sum_{n=0}^{N-1=7} f(n)\,\delta(x + n\Delta x + \Delta\bar{x}) \qquad (3)$$

[0111] Applying the optical Fourier transform:

$$\tilde{s}(u) = \int_{-\infty}^{\infty} s(x)\, e^{-j 2\pi u x / \lambda f}\, dx \qquad (4)$$

[0112] The imaginary parts cancel out (due to the input being
symmetric):

$$\tilde{s}(u) = \sum_{n=0}^{N-1=7} f(n)\cos\frac{2\pi u (n\Delta x + \Delta\bar{x})}{\lambda f} \qquad (5)$$

[0113] Assuming accurate sampling at the Fourier plane (the CCD):

$$\tilde{s}(k) = \sum_{n=0}^{N-1=7} f(n)\cos\frac{2\pi k\Delta u (n\Delta x + \Delta\bar{x})}{\lambda f} \qquad (6)$$

[0114] Since equation (2) is desired, we match:

$$\tilde{s}(k) = \sum_{n=0}^{N-1=7} f(n)\cos\frac{2\pi k\Delta u (n\Delta x + \Delta\bar{x})}{\lambda f} = \sum_{n=0}^{N-1=7} f(n)\cos\frac{\pi k(2n+1)}{2N} \qquad (7)$$

[0115] Thus, one matching condition is:

$$\cos\frac{2\pi k\Delta u (n\Delta x + \Delta\bar{x})}{\lambda f} = \cos\frac{\pi k(2n+1)}{2N} \qquad (8)$$

[0116] Leading to:

$$\frac{\Delta u\,\Delta x}{\lambda f} = \frac{1}{2N} \qquad (9)$$

$$\frac{2\,\Delta u\,\Delta\bar{x}}{\lambda f} = \frac{1}{2N} \qquad (10)$$

[0117] resulting in the above matching condition:

$$\begin{cases} \Delta\bar{x} = 0.5\,\Delta x \\ \Delta u = \dfrac{\lambda f}{2\,\Delta x\,N} \end{cases} \qquad (11)$$
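Matching condition (11) can be confirmed numerically: with the half-interval offset and the stated CCD pitch, the sampled optical cosine kernel equals the JPEG-DCT kernel exactly. The wavelength, focal length and SLM pitch below are arbitrary illustrative values.

```python
# Numerical check of matching condition (11): sampling the optical kernel
# cos(2*pi*k*du*(n*dx + dxbar) / (lam*f)) with dxbar = 0.5*dx and
# du = lam*f / (2*dx*N) reproduces the DCT kernel cos(pi*k*(2n+1)/(2N)).
import math

N = 8
lam, foc, dx = 0.5e-6, 0.1, 10e-6        # wavelength, focal length, SLM pitch (assumed)
dxbar = 0.5 * dx                          # half-interval pinhole offset
du = lam * foc / (2 * dx * N)             # matched CCD sampling interval

for k in range(N):
    for n in range(N):
        optical = math.cos(2 * math.pi * k * du * (n * dx + dxbar) / (lam * foc))
        target = math.cos(math.pi * k * (2 * n + 1) / (2 * N))
        assert abs(optical - target) < 1e-9
```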
[0118] In some cases, it may not be suitable to provide delta
functions (pinholes or other optical elements) on one or both of
the SLM and CCD. The following analysis shows a method of matching
a CCD and an SLM, by spatially modulating the light in a less
drastic manner, for example using continuous neutral density
filters.
[0119] The following equation describes an SLM-like object:

$$s(x) = \sum_{n=0}^{N-1=7} f(n)\, l(x - n\Delta x) + \sum_{n=0}^{N-1=7} f(n)\, l(x + n\Delta x) \qquad (12)$$

[0120] where l(x) is a general transmission function of the SLM,
assumed identical for all pixels, and symmetric, so it can be
mirrored. However, it should be noted that a similar but more
complex analysis can also be performed in the case where not all
the pixels are identical.
[0121] After applying the optical (and continuous) Fourier
transform:

$$\tilde{s}(u) = \sum_{n=0}^{N-1=7} f(n)\, L(u) \cos\frac{2\pi u\, n\Delta x}{\lambda f} \qquad (13)$$
[0122] where L(u) is the Fourier transform of l(x). Since the
actual sampling is done by summing all intensities on a detector
cell (i.e., a CCD pixel cell), equation (5) transforms to:

$$\tilde{s}(k) = \frac{1}{\Delta\bar{u}} \int_{k\Delta u - \Delta\bar{u}/2}^{k\Delta u + \Delta\bar{u}/2} \tilde{s}(u)\, W(u)\, du \qquad (14)$$

[0123] where W(u) is the CCD detection weight function. Again, it
is assumed that W is the same for all pixels but this assumption is
not required. Using equation (13):

$$\tilde{s}(k) = \frac{1}{\Delta\bar{u}} \int_{k\Delta u - \Delta\bar{u}/2}^{k\Delta u + \Delta\bar{u}/2} W(u) \left\{ \sum_{n=0}^{N-1=7} f(n)\, L(u) \cos\frac{2\pi n\Delta x\, u}{\lambda f} \right\} du \qquad (15)$$

[0124] Since equation (2) is desired, we match:

$$\cos\frac{\pi k(2n+1)}{2N} = \frac{1}{\Delta\bar{u}} \int_{k\Delta u - \Delta\bar{u}/2}^{k\Delta u + \Delta\bar{u}/2} W(u)\, L(u) \cos\frac{2\pi n\Delta x\, u}{\lambda f}\, du \qquad (16)$$

[0125] We define:

$$R(u) \equiv W(u) \cdot L(u) / \Delta\bar{u} \qquad (17)$$

[0126] The matching requirement is thus:

$$\cos\frac{\pi k(2n+1)}{2N} = \int_{k\Delta u - \Delta\bar{u}/2}^{k\Delta u + \Delta\bar{u}/2} R(u) \cos\frac{2\pi n\Delta x\, u}{\lambda f}\, du, \qquad n,k = 0,1,\ldots,N-1 \qquad (18)$$

[0127] which results in the following N×N Fredholm I
equations (for the 1D case; in 2D there are N×N×N×N
equations):

$$\begin{cases} k=0,\ n=0: & 1 = \int_{-\Delta\bar{u}/2}^{\Delta\bar{u}/2} R(u)\, du \\ k=0,\ n=1: & 1 = \int_{-\Delta\bar{u}/2}^{\Delta\bar{u}/2} R(u) \cos\frac{2\pi\Delta x\, u}{\lambda f}\, du \\ k=1,\ n=0: & \cos\frac{\pi}{2N} = \int_{\Delta u - \Delta\bar{u}/2}^{\Delta u + \Delta\bar{u}/2} R(u)\, du \\ k=1,\ n=1: & \cos\frac{3\pi}{2N} = \int_{\Delta u - \Delta\bar{u}/2}^{\Delta u + \Delta\bar{u}/2} R(u) \cos\frac{2\pi\Delta x\, u}{\lambda f}\, du \\ \vdots \end{cases} \qquad (19)$$
[0128] Equation set (19) defines a Fourier-coefficients solution to
the problem of describing R(u) by a cosine series, i.e.,

$$R(u)\big|_k = \sum_{n=0}^{N-1=7} \cos\frac{\pi k(2n+1)}{2N} \cos\frac{2\pi u\, n\Delta x}{\lambda f} \qquad (20)$$

[0129] This solution optionally defines a matching between
individual pixels in the SLM (u) and the CCD (k):
$u \in [k\Delta u - \Delta\bar{u}/2,\ k\Delta u + \Delta\bar{u}/2]$.
[0130] It should be noted that equation 20 actually defines a
family of solutions, thus, in some embodiments of the invention,
standard geometries of SLMs and CCDs are used, while in others one
or both of the SLM and CCD are modified to better fit a particular
matching solution. In the general case, the matching may be
performed by using neutral filters and by matching at least the
locations, if not the sizes of CCD and SLM pixels.
[0131] In an exemplary embodiment of the invention, the above
matching condition(s) are applied towards other discrete linear
transforms which are to be applied using a Fourier lens:

$$F(k) = \sum_{n=0}^{N-1} f(n)\, C(k,n) \qquad (21)$$

[0132] Applying the same procedure as in equations (12)-(20), (18)
now reads:

$$C(k,n) = \int_{k\Delta u - \Delta\bar{u}/2}^{k\Delta u + \Delta\bar{u}/2} R(u) \cos\frac{2\pi n\Delta x\, u}{\lambda f}\, du, \qquad n,k = 0,1,\ldots,N-1 \qquad (22)$$

[0133] So for the general 1D linear transform:

$$R(u)\big|_k = \sum_{n=0}^{N-1} C(k,n) \cos\frac{2\pi u\, n\Delta x}{\lambda f}, \qquad u \in [k\Delta u - \Delta\bar{u}/2,\ k\Delta u + \Delta\bar{u}/2] \qquad (23)$$
[0134] or the matching condition of equation (11) can be used. In
the context of matching conditions it should be noted that a matrix
arrangement of sub-elements is not required. Rather, it is
sufficient that there be a correspondence between the pixels in the
SLM and the pixels in the CCD. A simple construction is that of a
matrix of elements.
[0135] The use of the above matching condition may depend on the
type of detector used. A standard CCD detector measures power
(amplitude squared). Thus, a square root of the measurement may
need to be determined. Additionally, some types of processing
require the sign of the result, or even its phase. Various methods
of determining a sign of the result are described above. A related
issue is that a CCD detector integrates the square of the
amplitude, so even after taking a square root the result is
not precise. However, in many cases the effect of the error is
negligible and usually smaller than that allowed by the JPEG
standard. This error is especially small if most of the CCD area
(for each pixel) is ignored. Ignoring most of the CCD area is also
useful in that it reduces noise, albeit usually requiring more
signal strength.
[0136] Alternatively, an amplitude (rather than power) detector is
used, for example using a detector with a gamma of 0.5.
Alternatively or additionally, a phase detector is used to
determine the sign. One possible implementation of a phase detector
is to supply a polarized reference beam that can be compared to the
detected beam, for example using interference effects.
[0137] In an alternative exemplary embodiment of the invention,
DCT, FFT or block transforms are achieved using a holographic lens,
for example replacing lens-matrix 94, individual lens-elements 95
and/or other optical elements (described below). Alternatively or
additionally, two dimensional holograms may be used, for example,
by providing arrays of phase and amplitude modifying materials,
instead of refracting elements. Alternatively or additionally, a
look-up-table based approach to transforming may be used, for
example using the look-up table methods described in U.S. Pat. No.
4,892,370, the disclosure of which is incorporated herein by
reference. Alternatively or additionally, acousto-optical type
optical elements are used. An advantage of transform-lens, such as
described with reference to FIGS. 4A-4D, is that they are better
matched to the physical model of the compression, i.e.,
transforming data from an image space into a transform space.
Holograms are a general purpose optical element design method,
which although they are very flexible, may have an efficiency
penalty. Look-up tables are general purpose solutions which may
require a larger and/or more complex optical architecture than a
matched architecture such as a lenslet array.
[0138] FIG. 5A is a schematic block diagram of an optical JPEG
compression system 110, in accordance with another exemplary
embodiment of the invention, in which the DCT transformed data is
further processed prior to being converted to electrical signals. A
main difference from the embodiment of FIG. 2 is the provision of
an A/D converter 112, which converts the data from an analog
representation to a digital representation. Thus, coding (e.g., VLC
and Huffman) can be performed optically using various types of
available hardware architectures. The Zig-Zag step (26) may be
performed before or after quantization, for example, even after the
data is converted to electrical signals, by optical sensor 58. An
exemplary optical A/D converter is described in "Real-Time Parallel
Optical Analog-to-Digital Conversion", by A. Armand, A. A. Sawchuk,
T. C. Strand, D. Boswell and B. H. Soffer, in Optics Letters, Vol.
5 No. 3, March 1980, the disclosure of which is incorporated herein
by reference.
[0139] In the embodiment of FIG. 5A, quantization is shown as being
performed on the optical data, for example utilizing an SLM or a
controllable attenuator such as an LCA with one or more face
polarizers which selectively "divide" DCT coefficients by a weight.
Alternatively, the data is quantized after the A/D conversion, for
example using a suitable lookup table or a holographic lens. In
embodiments where digital data is represented by spatial bit
patterns, as in the above paper ("real-time"), quantizing may be
performed by spatially blocking out certain bits. In embodiments
where digital data is represented temporally, temporal filtering
may be used in which certain pixels are darkened, in synchrony to
the bit order of the light pattern, so that those bits are blocked
out. It is noted that the quantization step and the encoding step
(at least the VLC) may be combined as a single step, using
relatively standard tables, as known in the art of electronic
JPEG.
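Blocking out low-order bit positions is, numerically, an integer divide-and-rescale; a minimal sketch:

```python
# Sketch of quantization by bit blocking: in a spatial (or temporal)
# binary light encoding, masking out the b lowest-order bit positions
# is equivalent to an integer divide-and-rescale by 2**b.

def block_low_bits(value, b):
    """Zero the b least significant bits, as an opaque mask would."""
    return (value >> b) << b

coeff = 0b10110111          # hypothetical 8-bit DCT coefficient (183)
assert block_low_bits(coeff, 3) == 0b10110000   # low 3 bits blocked -> 176
```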
[0140] In some embodiments of the invention, it is desirable to
achieve different spatial and/or bit resolutions for different
parts of the image. In one example, the CCD can be read out at
varying resolutions, responsive to the desired spatial resolution.
In another example the light source is defocused for portions where
a lower resolution is required. Alternatively or additionally, the
quantization is varied between the blocks. If for example
quantization is achieved by selective blocking of pixels, this
blocking may be implemented using an electrically controllable
spatial filter, for example an LCD, which can be set to the desired
quantization.
[0141] In an exemplary embodiment of the invention it is desirable
to simultaneously generate multiple resolutions of JPEG data. In an
exemplary embodiment of the invention, this is achieved by parallel
application of the JPEG algorithm, using hardware as described
herein. Alternatively, this may be achieved (for example in the
embodiment of FIG. 2) by reading out the CCD at different
resolutions, for different JPEG resolution levels. Alternatively,
varying resolutions may be achieved by zooming the source image 92
up or down, for example using a zooming lens or by suitable control
of an SLM which generates the light.
[0142] Compression of color images may be achieved by converting
the image from an RGB format into a YUV format (if it is not
already so represented) and then compressing each of the Y, U and V
components. Typically, only the Y component is compressed at a full
spatial resolution, while the U and V components are compressed at
half their spatial resolution. In one exemplary embodiment of the
invention, different hardware is provided for the different
components. Alternatively, the same hardware is used, sequentially.
Alternatively, other color component separation methods may be
used.
[0143] In an exemplary embodiment of the invention, an image
sequence, such as a video sequence, is compressed utilizing the
above methodology. In an exemplary embodiment of the invention,
each of the images in the sequence is compressed in turn
using the above method of JPEG compression, providing a series of
JPEG compressed images. In an exemplary embodiment of the
invention, inter-frame compression is achieved by motion estimation
for example using adaptive differential coding by subtracting
consecutive images. In an exemplary embodiment of the invention,
consecutive images are subtracted using an SLM which is driven with
a previous image's density distribution. In a self-electro-optic
effect device, the SLM can be programmed directly using the
previous image, without requiring external electronics to store or
otherwise manipulate the image.
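The differential-coding scheme above can be sketched numerically. This minimal Python sketch (function names are illustrative, not from the source) codes a sequence as a key frame plus frame-to-frame differences, which is what an SLM driven with the previous image's density distribution would compute optically.

```python
import numpy as np

def difference_code(frames):
    """Differential coding of an image sequence: the first frame is kept
    as-is, every later frame is replaced by its difference from the
    previous frame."""
    frames = [np.asarray(f, dtype=int) for f in frames]
    return [frames[0]] + [b - a for a, b in zip(frames, frames[1:])]

def difference_decode(coded):
    """Inverse operation: cumulatively sum the differences."""
    out = [coded[0]]
    for d in coded[1:]:
        out.append(out[-1] + d)
    return out
```

For slowly changing scenes the difference frames are mostly zeros, which the subsequent run-length/VLC stage compresses well.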
[0144] In some configurations, especially those using lower quality
optics, light from one group of pixels (i.e., an 8×8 block) may
spill into an adjacent group, adding noise to the processing. In
some cases, but not typically, even leakage
between two adjacent pixels is a problem. In some embodiments of
the invention, this issue is tackled by separating light in
adjacent channels (pixels), so as to reduce the probability, degree
and/or intensity of overlap.
[0145] In some embodiments of the invention, the separation is
achieved using spatial separation. In one exemplary embodiment of
the invention, a light absorbing material is provided between
adjacent groups or pixels (e.g., on the SLM, the CCD or in the
optical path between them).
[0146] FIG. 5B is a schematic cross-section of a channel separation
method in accordance with an exemplary embodiment of the invention.
A device 114 comprises a plurality of channels 116. The channels
are separated by absorbing columns 115. In one embodiment of the
invention (not shown) the columns are solid. However, light may
reflect off the side of such a solid column. Thus, in an exemplary
embodiment of the invention, each of columns 115 comprises a
plurality of spaced apart absorbing portions 117. When a near
parallel ray of light hits such a portion (as shown by arrow 118)
the ray is likely to hit the absorbing material at a near
perpendicular angle, assuring a high absorption. In one particular
implementation, a plurality of layers 119 are provided, each layer
having at least one absorbing portion 117 defined thereon. Layers
119 are stacked to achieve the configuration shown in FIG. 5B.
Portions 117 may be the thickness of a layer, in which case the
layers are optionally arranged so that portions 117 of two
contiguous layers are not aligned. Alternatively, portions 117 are
shallow. In some embodiments, a generous spacing between portions
117 is provided, so that light will be less likely to be reflected
off the sides of portions 117. Alternatively or additionally to
spacing, portions 117 may have a sawtooth pattern defined thereon
which has a face substantially perpendicular to light rays 118.
Although absorbing portions 117 are shown to have a face
perpendicular to main path of the light, other angles may also be
used advantageously, for example to provide faces which are
perpendicular to off-axis light rays, such as light ray 118.
[0147] Alternatively to light absorbing material, beam forming
elements may be provided to maintain the light beams in paths
corresponding to their individual channels. Alternatively or
additionally, light from adjacent groups or pixels may be separated
using divergent optics, so that there is dead space between the
individual beams. Alternatively or additionally, inactive CCD or
SLM elements may be used so that the pixels are generated and/or
detected in spatial separation. Alternatively or additionally,
non-square pixels are used, for example circular pixels, so that
there is less contact between adjacent pixels. Alternatively or
additionally, the pixel groups are mapped onto non-square regions,
for example circles, to minimize overlap.
[0148] Alternatively or additionally to spatial separation,
temporal separation may be practiced. In one example, the image
plane is separated into two or more sets of pixels such that there
is spatial separation between pixels (or specific groups thereof)
of each set, within the set. Then the two sets are processed
at a relative temporal delay, to reduce inter-pixel interactions.
The separation may be achieved, for example at the SLM or at the
detector.
[0149] Alternatively or additionally, frequency separation may be
practiced, with adjacent pixels or other pixels in danger of
overlap having different wavelengths of light.
[0150] Alternatively or additionally, polarization separation may
be practiced, for example with adjacent pixels using light
polarized at 90° relative to each other. Optionally, each pixel
utilizes
two polarizers, one when it is generated (or later in the optical
path) and one when it is detected (or earlier in the optical path).
Possibly, source polarization is provided by the SLM, in addition
to or instead of a separate polarizer.
[0151] In the above separation methods, different configurations
may be used based on the expected degree of leakage of light. For
example, in a simplest case, the separation is in a checkerboard
pattern having alternating "black" and "white" pixels, with the
"black" pixels (or pixel groups) being one channel type (e.g.,
polarization angle, frequency, time delay), and the "white" pixels
having a second value. Alternatively, more than two channels are
used, for example if leakage of a pixel to a distance of more than
one pixel is expected. In the example of polarization, the relative
angle may be selected to be 70°, rather than 90°.
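The channel-assignment patterns described above can be sketched as follows (Python; `channel_map` is an illustrative name). With two channels the map is the checkerboard; with more channels, horizontally and vertically adjacent cells still never share a channel.

```python
import numpy as np

def channel_map(rows, cols, n_channels=2):
    """Assign each pixel (or pixel group) a channel index so that
    horizontally and vertically adjacent cells never share a channel.
    n_channels=2 gives the checkerboard pattern; larger values guard
    against leakage over more than one cell along a row or column."""
    i, j = np.indices((rows, cols))
    return (i + j) % n_channels
```

For the polarization example, channel indices 0 and 1 would be mapped to the two polarization angles; for temporal separation, to the two readout phases.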
[0152] Alternatively or additionally to physical based separation
methods, a calculation based separation method is provided, as
follows:
[0153] In an exemplary embodiment of the invention, instead of
using a single lenslet per channel (e.g., per 8×8 block), a
single lenslet (or other optical element) is provided for a
plurality of channels, for example, for an array of 3×3 channels.
Optionally, prisms are added to all the channels except the
central one, in order to obtain the desired DCT coefficients in the
same positions at the output plane as in the above systems.
Alternatively, other spatially shifting optical elements may be
used.
[0154] In an exemplary embodiment of the invention, overlapping
between the information of two adjacent channels in the output
plane is removed using matched sampling. Denote the DCT of one of
the channels, "A", by H_a(x) and the DCT of its adjacent channel,
"B", by H_b(x). Since channels A and B are adjacent in the output
plane, in the overlapping region:
E(x) = H_a(x) + H_b(x)·e^(2πixΔx/(λf))
[0155] where E is the total field. The linear phase arises since a
spatial shift is expressed as a linear phase in the Fourier plane.
Δx is the size of a channel in the input plane, λ is the wavelength
and f is the focal length.
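The linear-phase factor comes from the Fourier shift theorem, which is easy to verify numerically in the discrete setting (a sketch, using NumPy's FFT as a stand-in for the optical transform):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
a = rng.standard_normal(N)        # the "channel A" samples
shift = 5                         # circular shift by 5 samples
b = np.roll(a, shift)

# DFT shift theorem: shifting by `shift` multiplies the spectrum by the
# linear phase exp(-2*pi*i*k*shift/N) -- the discrete analogue of the
# e^(2*pi*i*x*dx/(lambda*f)) factor in the optical derivation.
k = np.arange(N)
phase = np.exp(-2j * np.pi * k * shift / N)
assert np.allclose(np.fft.fft(b), np.fft.fft(a) * phase)
```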
[0156] The root of the intensity in the Fourier plane is:
|E(x)| = √([H_a(x) + H_b(x)·cos(2πxΔx/(λf))]² + [H_b(x)·sin(2πxΔx/(λf))]²)
[0157] It should be noted that the maximal frequency of H_a or of
H_b is smaller than Δx/(λf), since the dimensions of each channel
do not exceed Δx. Thus, H is more or less constant when sampled
within its own pixel:
H_a,b(nδx + δx/4) ≈ H_a,b(nδx + δx/2) ≈ H_a,b(nδx),  δx = λf/Δx
[0158] where n is the pixel number and δx is the pixel dimension.
This is not generally true of the fast oscillating sine and cosine;
on the other hand:
sin(2πnδxΔx/(λf) + 2π(δx/4)Δx/(λf)) = 1
sin(2πnδxΔx/(λf) + 2π(δx/2)Δx/(λf)) = 0
cos(2πnδxΔx/(λf) + 2π(δx/4)Δx/(λf)) = 0
cos(2πnδxΔx/(λf) + 2π(δx/2)Δx/(λf)) = -1
[0159] Thus, the output intensity root becomes:
|E(nδx + δx/4)| = √(H_a(nδx)² + H_b(nδx)²)
|E(nδx + δx/2)| = |H_a(nδx) - H_b(nδx)|
[0160] which allows, by a fixed computation, extraction of the
value of the present facet H_a and of the overlapping information
coming from the adjacent facet H_b.
[0161] It should be noted that since the dimensions of the input
channel are unchanged, the above-mentioned derivation does not
change the length of the system required due to the conventional
matching condition. The fixed computations may be applied at
various stages of the system, depending on the system design, for
example using optical processing, on the CCD plane or in a digital
post-processing.
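The fixed computation can be sketched as follows. Assuming, for the sketch, that both coefficients are real and non-negative with H_a ≥ H_b (which resolves the square-root sign ambiguities), the two samples s₁ = √(H_a² + H_b²) and s₂ = |H_a − H_b| determine both values, since (H_a + H_b)² = 2s₁² − s₂².

```python
import numpy as np

def extract_overlap(s_quarter, s_half):
    """Recover (Ha, Hb) at one output pixel from the two samples
    s_quarter = sqrt(Ha^2 + Hb^2)  (taken at n*dx + dx/4) and
    s_half    = |Ha - Hb|          (taken at n*dx + dx/2),
    assuming Ha >= Hb >= 0 so the square-root signs are fixed."""
    total = np.sqrt(2 * s_quarter**2 - s_half**2)   # = Ha + Hb
    return (total + s_half) / 2, (total - s_half) / 2
```

For example, Ha = 4, Hb = 3 give s_quarter = 5 and s_half = 1, from which the two values are recovered exactly.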
[0162] FIG. 6 is a simplified schematic flowchart of a base-line
method 120 of MPEG-compliant compression, which flowchart does not
show various feedback loops often utilized in the MPEG compression
method. One of the main advantages of the MPEG compression method
over the JPEG compression method is that the MPEG method takes into
account similarities between consecutive images. One of the main
tools for similarity determination is motion estimation, in which
the motion of portions of the image are determined, so that an
image can be reconstructed from spatially translated parts of
previous images. Transmitting only the amount of translation
usually requires less bandwidth than transmitting coefficients for
an entire block. Thus, in an exemplary method, input data is
transformed using a DCT transform (122). Motion estimation is
performed (124). The resulting coefficients and translation data
are quantized (126), encoded (128) and combined with a header (130)
to form a data stream.
[0163] In an exemplary embodiment of the invention, alternatively
or additionally to performing the DCT step using optical processing
methods, the motion estimation is also performed using optical
processing. In an exemplary embodiment of the invention, motion
estimation is performed by an autocorrelation of the source data
with itself, allowing small amounts of block motion, using well
known optical means to determine block motion.
Alternatively however, a DCT based motion estimation scheme is
used. Thus, a same or similar hardware as used for the DCT may also
be used for at least part of the motion estimation. A method of
motion estimation using DCT is described in a Ph.D. Dissertation
titled "Low Complexity and High Throughput Fully DCT-Based Motion
Compensated Video Coders", by Ut-Va Koc, presented in 1996 to K. J.
Ray Liu of the Institute for Systems Research and sponsored by the
National Science Foundation Engineering Research Center Program,
the University of Maryland, Harvard University and Industry, in U.
V. Koc and K. J. R. Liu, "Low-Complexity Motion Estimation
Techniques", U.S. Pat. No. 5,790,686, issued Aug. 4, 1998 and in U.
V. Koc and K. J. R. Liu, "DCT-Based Motion Estimation", IEEE Trans.
on Image Processing, Vol. 7, No. 7, pp. 948-965, July, 1998, the
disclosures of which are incorporated herein by reference. The
method described therein can be summarized as follows (based on
table 4.2 in the Ph.D. dissertation), with the DCT portions
optionally being performed as described herein. Optionally, other
elements of the process are also implemented using optical
components, for example peak finding.
[0164] a. Compute the 2D DCT coefficients of the second kind
(2D-DCT-II) of an N×N block of pixels at the current frame t,
{x_t(m,n); m,n ∈ {0, . . . , N-1}}.
[0165] b. Convert stored 2D-DCT-II coefficients of the
corresponding N×N block of pixels at the previous frame t-1,
{x_{t-1}(m,n); m,n ∈ {0, . . . , N-1}}, into 2D DCT coefficients
of the first kind (2D-DCT-I) through a simple rotation unit T.
[0166] c. Find the pseudo phases {g^CS(k,l); k=0, 1, . . . , N-1;
l=1, 2, . . . , N} and {g^SC(k,l); k=1, 2, . . . , N; l=0, 1, . . . ,
N-1}, which are calculated from the DCT coefficients independently
at each spectral location.
[0167] d. Determine the normalized pseudo phases f(k,l) and g(k,l)
from g^CS and g^SC by setting ill-formed pseudo phases to zero.
[0168] e. Obtain the inverse DCT (2D-IDCT-II) of f(k,l) and g(k,l)
as DCS(m,n) and DSC(m,n) for m,n ∈ {0, . . . , N-1},
respectively.
[0169] f. Find peaks in DSC and DCS, which peak positions represent
the shift amounts and peak signs represent the direction of
movement.
[0170] g. Estimate the displacement from the signs and positions of
the found peaks.
[0171] It is noted that even in this method of motion estimation,
some processing is required beyond the DCT; however, a significant
portion of the computation may be dealt with by DCT or IDCT
transforming of the data (in parallel or in sequence for each
block). In an exemplary embodiment of the invention, the previous
image and/or its DCT coefficients are stored and/or provided using
suitable electronics. Possibly, the optical DCT transforming
elements are used for performing DCT and IDCT. Alternatively to the
above method of motion estimation, direct correlation of image
blocks may be used to estimate motion, for example, using image
correlation optical systems known in the art as part of the
compression process.
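The direct-correlation alternative can be sketched as exhaustive block matching (a minimal Python sketch with illustrative names; real MPEG encoders use faster search strategies than the full search shown here):

```python
import numpy as np

def block_match(prev, cur, top, left, n=8, search=4):
    """Exhaustive block matching: find the displacement (dy, dx) within
    +/-search pixels at which the n x n block of `cur` at (top, left)
    best matches the previous frame, by minimum sum of absolute
    differences (a direct-correlation style motion estimate)."""
    block = cur[top:top + n, left:left + n].astype(int)
    best, best_cost = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > prev.shape[0] or x + n > prev.shape[1]:
                continue  # candidate block falls outside the frame
            cost = np.abs(prev[y:y + n, x:x + n].astype(int) - block).sum()
            if best_cost is None or cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```

The inner sum-of-differences is exactly the operation an optical correlator evaluates in parallel for all candidate displacements at once.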
[0172] The above description has centered on compression, however,
it should be noted that decompression is very similar to
compression and can often utilize similar or the same hardware. In
the example of JPEG, DCT (for compression) and inverse DCT (for
decompression) can be performed using a same optical transform
element. In the example of MPEG, motion compensation, i.e.,
recreating images by compensating for the effect of motion, which
motion was determined using motion estimation, can utilize a
similar DCT-based method, also described in the above doctorate. It
is noted that for some decompression methods, there is a
requirement for some processing before the transforming of
coefficients into an image domain. For example, in JPEG
de-compression, the compressed image data is run-length decoded
and de-quantized prior to being inverse-DCT transformed. As with
compression, these
processing steps may be performed optically and/or electronically,
depending on the implementation.
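The decode ordering for one JPEG-style block can be sketched as follows (Python; the run-length decode is omitted and a trivial quantization table is used so the round trip is exact; names are illustrative):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix; its transpose is the inverse DCT."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    c[0] /= np.sqrt(2)
    return c

def jpeg_decode_block(coeffs, q):
    """JPEG-style decode order for one block: de-quantize first,
    then inverse-DCT (2D separable: C.T @ X @ C)."""
    c = dct_matrix(coeffs.shape[0])
    dequant = coeffs * q              # undo the quantization step
    return c.T @ dequant @ c          # 2D IDCT

# Round trip: forward 2D DCT and "quantization", then decode.
rng = np.random.default_rng(1)
block = rng.standard_normal((8, 8))
c = dct_matrix(8)
q = np.ones((8, 8))                   # trivial table: lossless round trip
coeffs = (c @ block @ c.T) / q
assert np.allclose(jpeg_decode_block(coeffs, q), block)
```

The point of the ordering is that the same transform element can serve both directions: the forward pass uses C, the decode pass its transpose.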
[0173] FIG. 7A is a schematic diagram of a direct-compression
camera system 200, in accordance with an exemplary embodiment of
the invention. In system 200, an image of a real object 202 is
acquired directly as transformed and/or compressed data, rather
than being acquired as image data which is later compressed. There
are many applications in which an image is acquired in order to be
stored on a digital media or in order to be transmitted over
bandwidth-limited transmission lines. Examples of such applications
include digital cameras, security cameras, teleconferencing
systems, live-feed TV cameras and video-telephones. In an exemplary
embodiment of the invention, the data is acquired in a compressed
manner, using the above described methods of compressing optical
data, except that it is optionally the original optical waves,
arriving from the object, that are compressed, rather than an
electronic representation which is separately acquired and
compressed.
[0174] In the exemplary system of FIG. 7A, an optional object lens
204 focuses and directs the light from object 202 onto a DCT lens
(or lens matrix) 206. Lens 206 may also comprise other optical
elements which perform further steps of the image compression
method. The processed light is collected by a CCD 208 and then
further processed and stored in a storage 210. In real-time
embodiments, the acquired data may be transmitted, instead of or in
addition to storage. The compression method performed may be a
method suitable for still images, such as JPEG or a method suitable
for moving images.
[0175] In an exemplary embodiment of the invention, DCT lens 206 is
designed to operate on white light. Alternatively, the light
arriving from the scene is filtered so that it is monochrome.
Alternatively or additionally, the image is acquired under
controlled lighting situations, so that the light has known
characteristics, such as being coherent, monochromatic or formed of
a small number of narrow spectral bands. Alternatively or
additionally, the image is acquired using a monochromatic light,
possibly a laser flash, so that the characteristics of the light
are controlled by system 200. Such controlled lighting is
especially useful for low-light level cameras, such as those using
GICCD (Gated Intensified CCD) technology. Also, the use of coherent
light simplifies the use of hologram-based image processing
techniques.
[0176] Alternatively or additionally to an objective lens 204, a
light encoding module, such as a combination CCD/SLM may be used,
to convert incoming light into light having desired spatial and
spectral characteristics. Alternatively or additionally, a
self-electro-optical effect shutter is used, in which the impinging
light is used to modulate the transmission of laser or other
controlled light.
[0177] FIG. 7B is a schematic block diagram of a YUV-separated
implementation 220 of the embodiment of FIG. 7A. Color images may
be compressed by separately compressing each color component, or
more commonly, each of the YUV components. These components may be
determined using a look-up table or by simple arithmetic on the R,
G and B components of an incoming image. These separations may be
performed using optical means and/or electronic means, shown
generally as a splitter 222. Each of the resulting color components
(224) is then processed separately, using a dedicated DCT 206 and a
dedicated CCD 208. The results are then added using a combiner 226.
It should be noted that the U component and the V component are
usually processed at a lower resolution than the Y component. Thus,
the U and the V can share optical components. Alternatively or
additionally, all three components are processed using a single
optical path, for example on different parts of a same lens-CCD
set. Alternatively or additionally, the three components are
processed sequentially.
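The "simple arithmetic" separation and the half-resolution handling of U and V can be sketched as follows (Python; the BT.601-style coefficients are an assumed choice, as the source does not specify a particular matrix):

```python
import numpy as np

def rgb_to_yuv(rgb):
    """BT.601-style conversion by simple arithmetic on the R, G, B planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def subsample(plane):
    """Halve the spatial resolution by 2x2 averaging, as is typically
    done for the U and V planes."""
    h, w = plane.shape[0] // 2 * 2, plane.shape[1] // 2 * 2
    p = plane[:h, :w]
    return (p[0::2, 0::2] + p[0::2, 1::2] + p[1::2, 0::2] + p[1::2, 1::2]) / 4
```

Because U and V are subsampled, they can share a smaller optical path (or half of a lens-CCD set) while Y uses the full-resolution path.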
[0178] A component which performs image compression or
decompression may be packed in various ways, depending on the
application. In one application, a PC graphics card includes an
optical processor for aiding in displaying MPEG images. In another
example, a CCD camera includes an MPEG or a JPEG optical module so
that it can provide a compressed data output as well as a
standard data output.
[0179] In an exemplary embodiment of the invention, the above
described optical elements are provided embedded in a transparent
substrate, such as a clear plastic or glass, so that once the
elements are coupled, there is no relative movement due to
vibration, heat or other external forces. It should be noted that
pixel-sized transverse shifts in the optical elements do not
substantially affect the output, provided the SLM can be
controlled to shift its image by the pixel shift error. In an
exemplary embodiment of the invention, the optical elements are
manufactured and tested without a surrounding matrix or with a
liquid surrounding matrix, which is then solidified when the
relative positions of the optical elements are determined. In an
exemplary embodiment of the invention, the optical processor is
calibrated by entering known data and measuring the compressed (or
other processing) output and then correcting for it.
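The calibration loop described above can be sketched as a per-pixel gain/offset fit (Python; `measure` stands for the whole optical path and the linear distortion model is an assumption made for the sketch):

```python
import numpy as np

def calibrate(measure, known_inputs):
    """Estimate a per-pixel gain and offset by driving the processor
    with two known input patterns and comparing measured outputs;
    `measure` maps an input array to a (distorted) output array."""
    x0, x1 = known_inputs
    y0, y1 = measure(x0), measure(x1)
    gain = (y1 - y0) / (x1 - x0)      # elementwise two-point fit
    offset = y0 - gain * x0
    return gain, offset

def correct(y, gain, offset):
    """Invert the fitted per-pixel distortion on later measurements."""
    return (y - offset) / gain
```

More elaborate calibrations (nonlinear response, cross-pixel leakage) would fit more parameters, but the measure-known-data-then-correct structure is the same.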
[0180] FIG. 11 is a conjugate-zone array based optical processor
1000, in accordance with an exemplary embodiment of the invention.
Processor 1000 can be used to process temporally incoherent light,
which is optionally spatially coherent. In some embodiments, the
use of incoherent light allows a less robust and less exact design
to be used, since there are fewer or no interference effects. A white
light source 1002 which is spatially coherent, for example a white
LED or a halogen source is spread using a lens 1004 to provide an
area source. Optionally, the light is collimated by the lens or by
a collimator into a parallel or fan beam. Alternatively, other
types of one or two dimensional light sources may be used.
Optionally, a multi-wavelength source in which each wavelength
profile is spatially coherent (e.g., 2, 4, 5, 10, 100 or more
spectral lines) is used instead of a white light source, to allow
better control over the constituent frequencies.
[0181] The light is spatially modulated by an SLM 1006, which may
be of any type, for example as described elsewhere in this
application. Alternatively, other spatially modulated white light
sources may be used. In some exemplary embodiments of the
invention, the light is spatially coherent. A Fourier transform is
applied to the spatially modulated light using two conjugated zone
plates 1008 and 1012. Other optical elements may be used as well.
Typically, processing multi-frequency light results in two
sometimes undesirable effects, (a) dispersion of the results; and
(b) wavelength based scaling. In an exemplary embodiment of the
invention, a combination of two or more compensating dispersive
elements is used, such that one compensates for the dispersion
and/or scaling caused by the other. In some cases, a controlled
amount of dispersion may be desired. Fourier derived transforms,
which are transforms that can be mathematically derived from an
expression of a Fourier transform, such as DCT, DFT or the Fourier
transform itself, can be determined, for example by providing
symmetrically mirrored input, as described herein. In an exemplary
embodiment of the invention, array versions of plates 1008 and 1012
are provided, to allow multi-channel processing. Various channel
separation methods may be used (for example channel separation
elements indicated as reference 1010), for example as described
herein. The results are detected using a detector array 1014.
Exemplary suitable conjugate zone plate designs are described in
D. Mendlovic, Z. Zalevsky and P. Andreas, "A novel device for
achieving negative or positive dispersion and its application,"
Optik 110, 45-50 (1999), the disclosure of which is incorporated
herein by reference.
[0183] FIG. 8 is a schematic diagram of a lithographic
implementation of an optical compression system 300 in accordance
with an exemplary embodiment of the invention. An advantage of
lithographic optics is that they can be fabricated in large
batches, and, depending on the process, in conjunction with the
forming of electronic circuits on a same substrate. It should be
noted that most lithographic implementations utilize reflective
optics rather than transmissive optics. The above description has
focused on transmissive optics, however, reflective optics can also
be used for non-lithographic optical processors. Various
lithographic implementations will occur to a person skilled in the
art, however, an exemplary embodiment is shown in FIG. 8.
[0184] System 300 generally comprises a substrate 301, one or
more reflective surfaces 303 which are etched and/or otherwise
lithographically processed or micro-machined to form reflective
optical elements and an interposing clear medium 305. An SLM or a
diode array 302 is used to provide an image. The light is reflected
off substrate 301 to a reflective DCT lens 304. The transformed
light is reflected back to substrate 301 and then to a CCD or other
optical array detector 306. Optionally, the CCD array or other
optical, electrical or electro-optical elements may be formed
directly on the substrate, for example as indicated by reference
308. In one example, a quantizer or a holographic reflecting lens
is formed at location 308. Possibly, reference 308 indicates an
active element, such as an LCD array.
[0185] Alternatively or additionally, diffractive or refractive
elements, for example bi-refringent calcite crystals as described
below, may be used in part of the construction of system 300.
[0186] In an exemplary embodiment of the invention, device 300 is
manufactured to DCT a single 8×8 block rather than a whole
image. A plurality of systems 300 is optionally used to compress an
entire image. Alternatively, system 300 is manufactured to process
a single vector rather than an array. Although system 300 may form
a part of a dedicated JPEG or MPEG decoder or encoder, in an
exemplary embodiment of the invention, one or more system 300 type
elements are used for the construction of digital signal processor
or other integrated circuits, for example to assist in high-end
graphical applications.
[0187] In one exemplary embodiment of the invention, a reflective
SLM is coupled directly to a back of a CCD camera. Thus, cheaper,
more efficient and/or faster circuitry can be used to couple light
input at the CCD to encoding of light reflected by the SLM. In one
example, the CCD-SLM sandwich can encode laser light using light
from an external object, which impinges on the CCD. In another
example, electronic circuitry sandwiched between the SLM and the
CCD can perform various electronic processing steps, as suggested
herein. Typically, a highly parallel architecture can be achieved,
so a higher than standard throughput is envisioned for some
implementations. Several variations of such an SLM, especially with
the capability of processing the data between the CCD and the SLM,
are described in U.S. Pat. No. 5,227,886, the disclosure of which
is incorporated herein by reference. These SLMs can use parallel
connections between the CCD elements and the SLM elements or serial
connections.
[0188] FIGS. 12-15 illustrate various exemplary embodiments of
reflective optical processors. Various additional optical elements,
such as masks and SLMs may be added to the embodiments shown, to
achieve various effects. One or more of the following potential
advantages may be realized:
[0189] (a) reduction or elimination of dead zones of transmissive
SLMs, by using reflective SLMs;
[0190] (b) allowing electronic circuitry for SLM pixels to be
mounted behind the SLM, possibly on a same substrate, thus possibly
shortening communication and/or power lines and/or allowing faster
operation;
[0191] (c) allowing the construction of a smaller volume device
and/or devices of various geometries, such as "L" shaped or "U"
shaped, utilizing the optical path folding characteristics of
reflective optics.
[0192] FIG. 12 is a schematic diagram of a polarizing reflective
optical processor 1100, in accordance with an exemplary embodiment
of the invention. A light source 1102 is expanded using a lens 1104
to provide a one or two dimensional optical source. Alternatively,
other means of providing a large, preferably temporally coherent
source may be used. The light, which is optionally polarized, is
intercepted by a polarizing beam splitter 1108 or another similar
optical element, that reflects a significant part of the light
towards a reflective SLM 1106. A polarizing beam splitter is
preferred for some embodiments, for example, for reason of energy
efficiency, however, it is not essential. The light is modulated by
the SLM and reflected towards a detector array 1114. The light may
pass through beam splitter 1108 without any significant
interaction, for example if the SLM changes the polarization of the
light, or if a suitable λ/4 wave-plate is provided in the
light path. Alternatively, SLM 1106 is not perpendicular to the
path of the light, so the light is reflected at an angle, and
bypasses the beam splitter.
[0193] Before reaching detector 1114, the light may be imaged using
a lens 1109 onto an image plane 1111. The image may then be further
processed or conveyed using a lenslet array 1112 via an optional
multi-channel structure 1110 to detector array 1114. In some
embodiments, lens 1109 and the space between lens 1109 and image
plane 1111 are omitted.
[0194] Optionally, a second reflective SLM 1107 or a polarization
changing mirror is provided perpendicular to SLM 1106, allowing a
second, optional reference, beam to be generated by system
1100.
[0195] FIG. 13 is a schematic diagram of a planar reflective
optical processor 1200, in accordance with an exemplary embodiment
of the invention. In this embodiment, the light source is a planar
type light source, in which light from a source 1202 is injected
into a light guide 1204 and emitted along its length. Various
designs may be used for light guide 1204. In one design, the light
is collimated before being injected and is allowed to leak out at
various points along the length of light guide 1204, for example
using a diffraction grating etched on the light guide, and is
reflected using total internal reflection from the walls of light
guide 1204 at other points along its length. In another design, the
light is not collimated and is reflected several times until a
sufficient expansion of the beam is achieved, at which point the
entire beam exits from the side of the light guide. The
diffraction grating may be non-uniform to control the uniformity of
the exiting beam. Alternatively or additionally, the SLM,
processing optics and/or detector may compensate for any
non-uniformity.
[0196] The exiting light is reflected by a polarizing beam splitter
1208 towards SLM 1206, which is optionally a polarization-affecting
SLM or associated with a polarizer. Splitter (or reflector, in
embodiments where it does not polarize) 1208 may also serve to
align the light at a desired angle relative to SLM 1206 and/or the
rest of system 1200. Alternatively or additionally, the light may
exit light guide 1204 only on the side near the SLM, so no
reflector is necessary. The spatially modulated light then passes
substantially unaffected through (transversely) light guide 1204 (or
it is reflected around it) to a lenslet array 1212, which processes
the modulated light and passes it through a multi-channel structure
1210 to a detector array 1214. Alternatively, lenslet array 1212
may be inside the multi-channel structure.
[0197] FIG. 14 is a schematic diagram of a sphere based reflective
optical processor 1300, in accordance with an exemplary embodiment
of the invention. A particular feature of this embodiment is
combining a processing lens, lenslet array or other optical
processing element with a reflector 1308.
[0198] Light from a source 1302 is expanded (and optionally
collimated) using a lens 1304. A spherical section 1308 is provided
with a non-reflecting surface 1307 and an at least partially
reflecting surface 1309. The light from lens 1304 passes through
spherical section 1308 (or a plurality of small spherical
reflectors), substantially unaffected and impinges on a combined
SLM-detector element 1313. In a multi-channel device, each channel
may have a separate spherical section.
[0199] In an exemplary embodiment of the invention, element 1313 is
formed of an array of interspersed reflective SLM elements 1306 and
detector elements 1314. These elements may be arranged in groups,
but this is not required. The distribution of element types may be
uniform. Alternatively, a non-uniform distribution is provided, for
example, a greater density of detectors may be provided at the focal
point of section 1308, where greater accuracy may be required for
some types of calculations. The pixels may be distributed on a
Cartesian grid. Alternatively, other grids, such as a polar grid
may be used. Alternatively or additionally, the pixels are not
square, for example, being triangular, round or hexagonal. These
variations in pixel design and distribution may depend, for
example, on the type of processing to be performed using the
system.
[0200] Alternatively to being reflective, SLM elements 1306 may be
transmissive, with source 1302 on the other side of element
1313.
[0201] The spatially modulated light is then Fourier transformed by
spherical section 1308, and reflected back toward detectors 1314.
In some embodiments, a non-perfect spherical surface is used, for
example a parabolic surface.
[0202] In some embodiments of the invention, an optional polarizer
is added to increase the efficiency. In one embodiment, SLM
elements 1306 are polarization rotating (and/or a suitable
.lambda./4 plate is provided) and sphere 1308 only reflects
suitably polarized light. Alternatively or additionally, detectors
1314 are polarizing and only accept suitably polarized light.
Alternatively or additionally, lens 1304 and/or surface 1307 have a
pattern formed thereon to prevent light from directly impinging on
detector elements 1314.
[0203] FIG. 15 is a schematic diagram of a pin-hole based
reflective optical processor 1400, in accordance with an exemplary
embodiment of the invention. Light from a source 1402 is focused
using a lens 1404 and a lens array 1405 (or other suitable optical
elements) through a plurality of pinholes 1409 formed in a detector
array 1414. Each pinhole optionally corresponds to a single optical
channel. Alternatively to pinholes, small lenses may be provided.
The light from each pinhole is spatially modulated by a reflective
SLM 1406 and then processed by a lenslet array 1412, to yield a
desired processed light on the plane of detector 1414. As with the
embodiment of FIG. 14, a polarizer may be provided. Alternatively
or additionally, SLM 1406 may polarize or include a polarizer.
[0204] Alternatively to using a lens and a pinhole, other
configurations may be used, for example, a plurality of optical
fibers. Optionally, the light is polarized.
[0205] An optional advantage of this embodiment is that no beam
splitter is used.
[0206] The above description has centered on DCT based compression
methods. However, other transform based compression methods may
also be implemented in accordance with exemplary embodiments of the
invention. In one example, a wavelet compression method is
implemented using a block DWT (discrete wavelet transform).
Possibly, there is an overlap between blocks. Such a transform is
described, for example, in G. Strang and T. Nguyen, "Wavelets and
Filter Banks", Wellesley-Cambridge Press, 1997, p. 502, the
disclosure of which is incorporated herein by reference.
Optionally, such a wavelet compression implementation includes
bit-plane coding techniques such as SPIHT or EZW, possibly
implemented using a lookup table.
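To illustrate the kind of block transform involved, the following is a minimal sketch of a one-level Haar DWT on a single block, in Python. The Haar filter pair is used here only as the simplest possible example; it is an assumption for illustration and not the specific filter bank construction of the cited reference.

```python
# One-level Haar DWT on a single block (illustrative only; the averaging
# normalization is an arbitrary choice for this sketch).
def haar_dwt_1d(block):
    """Split an even-length block into approximation and detail coefficients."""
    assert len(block) % 2 == 0
    approx = [(block[i] + block[i + 1]) / 2 for i in range(0, len(block), 2)]
    detail = [(block[i] - block[i + 1]) / 2 for i in range(0, len(block), 2)]
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Invert the one-level Haar transform."""
    block = []
    for a, d in zip(approx, detail):
        block.extend([a + d, a - d])
    return block

data = [4, 6, 10, 12, 8, 8, 2, 0]
a, d = haar_dwt_1d(data)
assert haar_idwt_1d(a, d) == data  # perfect reconstruction
```

In a block-based wavelet compression scheme, the detail coefficients of smooth blocks are small and compress well, which is what the bit-plane coders such as SPIHT or EZW exploit.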
[0207] The above description has centered on image compression,
however, in accordance with an exemplary embodiment of the
invention, optical components are used for compressing other types
of signals, for example, audio signals. It is noted, however, that
image data is generally more suitable for transform based
compression and, being two dimensional, is more computationally
complex to compress than other types of data.
[0208] In the above detailed description, various types of optical
data representations are suggested, as well as various types of
optical systems. In an exemplary embodiment of the invention, the
optical representation used is selected to match the optical
system, for example, an analog representation for an analog system.
In some cases, the data may be converted between representations,
to take advantage of particular optical configurations, for example
digital optical data may be converted into analog optical data to
use a particular lenslet-based implementation of a DCT transforming
element.
[0209] Many different types of SLMs may be used to practice various
embodiments of the present invention. However, in an exemplary
embodiment of the invention, a binary SLM is used for practicing
the present invention or even for performing linear transforms in
other applications. In an exemplary embodiment of the invention,
the data is separated into bit planes and each bit plane is
processed separately. Then the bit planes are combined to yield the
processed result. The following equation describes the relationship
between the Fourier transforming of bit-plane separated and
unseparated data: F(.SIGMA..sub.i 2.sup.i a.sub.i)=.SIGMA..sub.i 2.sup.i F(a.sub.i)
[0210] This equation is correct for all linear transforms. In an
exemplary embodiment of the invention, the data is separated into
bit-planes using an electronic circuit; however, optical means can
also be used. The data may be represented in several different ways,
depending on the specific application, including, spatial encoding
where adjacent pixels represent different bits and temporal
encoding, where the different bits are temporally separated.
Combinations of temporal and spatial separations may also be used.
In spatial separations, the bits may be arranged so that the MSB is
surrounded by less significant bits, so that cross-talk between
pixels (groups of bits) will be less likely to cause a modification
of the MSB. An alternative binary representation uses separate
optical channels (or channel portion) for the different bit
planes.
[0211] After processing, the processed bit planes may be combined
using optical or electronic means. The optical means may be analog
or digital. One example of an optical combining means is using a
weighted mask which reduces the intensity of light from each bit
plane responsive to the bit position, after which all the light is
directed to a single CCD pixel. Another example of combining is
having each bit illuminate a different CCD pixel and then
performing weighted addition on the pixels. Alternatively or
additionally, different bit planes may be generated with different
intensity values depending on the bit position.
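The bit-plane identity above, and the weighted recombination just described, can be checked numerically. The following is a minimal sketch in Python, using a naive software DFT as a stand-in for the optical transform and assuming 3-bit sample values; it illustrates the mathematics only, not any particular optical hardware.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a sequence (stands in for the
    optical Fourier transforming element)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

data = [5, 3, 0, 7, 1, 6, 2, 4]  # assumed 3-bit samples

# Separate the data into bit planes a_i (each a binary sequence).
planes = [[(v >> i) & 1 for v in data] for i in range(3)]

# Transform each plane separately, then recombine the transformed planes
# with weights 2^i, per the linearity identity in the text.
combined = [sum((2 ** i) * c for i, c in enumerate(col))
            for col in zip(*(dft(p) for p in planes))]

direct = dft(data)
assert all(abs(a - b) < 1e-9 for a, b in zip(combined, direct))
```

The same check holds for any linear transform in place of `dft`, which is the point made in the text: separation and weighted recombination commute with the transform.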
[0212] It is contemplated that the use of a binary SLM may be
advantageous also for other applications using optical processing,
for example radar signal processing. By using high speed modulation
of parallel data beams, a higher system clock can be provided,
possibly even providing a better throughput than electronic
processors of a similar size, cost and/or heat dissipation.
[0213] Alternatively to a two-level SLM, three or more
discrete levels may be provided at the SLM. Alternatively or
additionally, although a radix based separation and combination of
data is described, other methods can be used to separate the data
and recombine it. In one example, a set of optionally orthogonal
basis vectors is used to separate the data and recombine it. Such
a set of basis vectors may be arbitrary. Alternatively, it may be
designed for other reasons, for example, for noise reduction, for
distributing noise evenly between bits and/or for matching the
basis vector set to a system characteristic, such as a system
MTF.
[0214] In some cases, the SLM switching time is shorter than the
propagation time in the processor. Optionally, the processor is treated as a pipe-line
in which the SLM and detector are not processing the same data, but
rather there is a delay between the SLM and the CCD. Multiple data
streams may also be utilized using different frequencies of light.
In some cases, either the SLM or the CCD will be faster.
Optionally, several instances of the slower element are provided in
order not to slow the pipeline. Light from a plurality of SLMs can
be collimated to a single optical path and, conversely, light from
a single optical path can be projected or copied to a plurality of
CCDs. Such mixing and copying is especially useful when different
data streams are implemented using different frequencies of light.
However, such frequencies may also be differentiated using an
active optical filter such as an LCD-color filter-polarizer
combination.
[0215] The optical processing hardware is optionally dedicated for
particular tasks. Alternatively, in some embodiments of the
invention, the same hardware components may be used for different
steps in a process (such as a DCT component for compression and for
motion estimation), for different processes (such as compression
and decompression) and/or for different data blocks in a same
process (such as in serial processing of data blocks).
Alternatively or additionally, the hardware may be programmable, at
least to some extent. For example, by modifying the behavior of an
SLM and a CCD which form part of a Fourier-based data transform
optical component, different types of transforms can be achieved
with the same hardware, for example, DCT and DST. Alternatively or
additionally, the matching layer may be programmable, for example
being an addressable LCD, so that the size and/or location of
pinholes can be controlled. Alternatively or additionally, by
controlling the opacity of single LCD cells, different continuous
spatial filtering configurations can be achieved.
[0216] In some embodiments of the invention, the above transforming
of data or other processing of data are performed using other
optical and electro-optical effects, for example bi-refringent
calcite crystals as used in switching networks. Such crystals and
exemplary uses are described, for example in "All-Optical Reduced
State 4.times.4 Switch", by Dan M. Marom and David Mendlovic,
Optics and Photonics News, March 1996, p. 43, in "Optical Array
Generation and Interconnection Using Birefringent Slabs", by Tomas W.
Stone and James M. Battiato, Applied Optics, Vol. 33, No. 2, pp.
182-191, January 1994, and in "Cantor Network, Control Algorithm,
Two-Dimensional Compact Structure and its Optical Implementation",
by Ning Wang, Liren Liu and Yaozu Yin, Applied Optics, Vol. 34, No.
35, pp. 8176-8182, December 1995, the disclosures of which are
incorporated herein by reference.
[0217] In one exemplary embodiment of the invention, an optical
processing component is designed to implement a DCT algorithm by
simple manipulations of light, such as splitting, adding,
subtracting and/or multiplying by various factors. DIF (decimation
in frequency) or a DIT (decimation in time) algorithm are
considered to be especially suitable in accordance with an
exemplary embodiment of the invention. However, many other
algorithms are known for calculating a DCT and may be implemented
in accordance with other exemplary embodiments of the present
invention. FIGS. 9B and 10 describe an implementation using calcite
crystals, attenuators, phase retarders and polarizers to achieve
these effects. However, other optical elements may be used instead,
for example diffractive optics.
[0218] FIG. 9A is a flowgraph for performing an 8.times.1 DCT-II
using a DIF type algorithm. In the DIF and DIT representations, the
input data points are rearranged into groups, each of which can
be recursively evaluated using a lower-order DIF or DIT algorithm.
Reference 390 is an 8.times.1 DCT which uses two 4.times.1 DCT
components, indicated by a reference 392. A copy of this figure can
be found in "DCT-applications and Algorithms" by P. Yip and K. R.
Rao (Academic Press, 1990), page 61, the associated discussion (pp.
56-61) being incorporated herein by reference. FIGS. 9B and 10
illustrate one possible implementation of these flowgraphs. It is
noted that due to the differences between optical elements and line
diagrams, in some cases single operations are spread out between
several optical components or plural operations are combined in a
single optical component. Also, although the example shown is of a
DCT process, similar embodiments may be used for DFT (discrete
Fourier transform) and for DWT (discrete wavelet transform).
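For reference, the transform computed by the flowgraph of FIG. 9A can be stated directly. The following Python sketch implements a naive, unnormalized DCT-II and checks the first DIF property underlying the flowgraph: the even-index outputs of the 8-point transform equal a 4-point DCT-II of the folded input sums. It describes the mathematics only, not the optical implementation, and the unnormalized convention is an assumption made here for simplicity.

```python
import math

def dct_ii(x):
    """Naive, unnormalized DCT-II: X[k] = sum_t x[t]*cos(pi*(2t+1)*k/(2n))."""
    n = len(x)
    return [sum(x[t] * math.cos(math.pi * (2 * t + 1) * k / (2 * n))
                for t in range(n))
            for k in range(n)]

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
X = dct_ii(x)

# First DIF stage: the even-index outputs of the 8-point DCT-II are the
# 4-point DCT-II of the folded sums x[t] + x[7 - t]. (The odd-index
# outputs need an additional twiddle stage, omitted in this sketch.)
sums = [x[t] + x[7 - t] for t in range(4)]
even = dct_ii(sums)
assert all(abs(X[2 * m] - even[m]) < 1e-9 for m in range(4))
```

This folding into half-length transforms is what allows the 8.times.1 element 390 to be built from the two 4.times.1 components 392.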
[0219] FIG. 9B is a schematic figure of a calcite based DCT
transforming optical element 400, in accordance with an exemplary
embodiment of the invention. Eight beams of circularly polarized
light, each representing one pixel of an 8 pixel vector, are split
into two sets of beams by a first calcite crystal 402. One set of
beams, "ordinary rays", comprises the light that is polarized at
0.degree. to the calcite polarization axis. The other beams,
"extraordinary rays", comprise light polarized at 90.degree. to the
axis. It should be noted that as the beams are processed, they stop
corresponding to the original pixels, however, for convenience, the
separate beams of light are referred to in the order in which they
are shown in the figure. It should be noted that in some cases,
fewer or more beams may be active during the DCT processing stage,
even if both the input and the output are for eight rays of light.
Four of the split beams (pixels 4-7) are recombined with the other
four beams (pixels 0-3) and then polarized by a linear polarizer
404 at 45.degree. and converted into zero axis polarized light by a
.lambda./4 plate 406. This completes a parallel shift and addition
operation on all the pixels. These beams are then spatially
recombined with the original beams for pixels 0-3 using a second
calcite crystal 408. However, two polarizations are transmitted by
crystal 408, the 90.degree. light being further shifted by the
following crystal (410). A third calcite crystal 410 is used to
combine the beams of pixels 0-3 with phase delayed beams of pixels
4-7, which are retarded using a .lambda./2 phase plate
412. The result of the combination is attenuated using an
attenuator 414, polarized using a polarizer 416 and then converted
into circularly polarized light using a .lambda./4 plate 418. Each
of pixel sets 0-3 and 4-7 is then processed using a 4 pixel DCT-II
element 420 or 422, described in FIG. 10.
[0220] The outputs of elements 420 and 422 are further processed to
yield the final DCT. Beams 4-6 are retarded using a .lambda./4
retarding plate 424 and then combined with beams 5-7, using a
fourth calcite crystal 426. Beams 5-7 then sum up their two
polarizations using a 45.degree. polarizer 428, to yield the DCT
result in eight beams.
[0221] Typically, but not necessarily, a 2D DCT is desirable. One
way of generating a 2D DCT is to apply a DCT to the rows and then
process the result by columns. This can be achieved, for example,
by chaining two systems 400, where one is perpendicular to the
other, thus performing first row transforms and then column
transforms. Phase information is maintained by the light, so there
is no need for separate circuitry to support chaining two DCT
elements. A .lambda./4 retarder 429 is optionally provided on beams
5-7 of the first system 400, to support the chaining.
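The row-then-column decomposition described above relies on the separability of the 2D DCT, which also makes the chaining order symmetric. The following Python sketch (using an assumed naive, unnormalized DCT-II, not the optical system itself) checks that transforming rows then columns gives the same result as columns then rows:

```python
import math

def dct_ii(x):
    """Naive, unnormalized DCT-II of a 1D sequence."""
    n = len(x)
    return [sum(x[t] * math.cos(math.pi * (2 * t + 1) * k / (2 * n))
                for t in range(n))
            for k in range(n)]

def transpose(m):
    return [list(r) for r in zip(*m)]

def dct_2d(block):
    """Separable 2D DCT: transform each row, then each column of the result,
    as performed by the two chained systems 400."""
    rows = [dct_ii(r) for r in block]
    return transpose([dct_ii(c) for c in transpose(rows)])

block = [[1.0, 2.0], [3.0, 4.0]]
result = dct_2d(block)

# Column-then-row (transform the transpose, then transpose back) gives the
# same coefficients, so either chaining order is valid.
alt = transpose(dct_2d(transpose(block)))
assert all(abs(a - b) < 1e-9 for ra, rb in zip(result, alt)
           for a, b in zip(ra, rb))
```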
[0222] In an exemplary embodiment of the invention, the system is
implemented as a 2.times.4 array, rather than as a 1.times.8 array.
In one calculated embodiment the system is about 27 times as long
as the width of each of the calcite crystals. It is noted that the
input and output are not in pixel order. In an exemplary
embodiment of the invention, the pixel order is generated by
suitable wiring of the SLM or of the CCD. In a 2.times.4 folded
embodiment, the required length is calculated to be 18 times the
width. It should be noted that the required length can vary by a
significant factor depending on engineering considerations, such as
materials, folded optical paths and noise considerations.
[0223] FIG. 10 is a schematic figure of a 4 pixel DCT element 440,
such as elements 420 and 422 of FIG. 9B. Letter indications a-d are
used to avoid mix-up with pixel beams 0-7 of FIG. 9B. Again it is
noted that as beams a-d are processed, they lose their original
meaning and are simply used to designate the ordinal location of
the beam in the figure. Beam d is retarded using a .lambda./2
retarder 442, then beam c is combined with beam d using a calcite
crystal 444. The resulting beam c is retarded again using a
.lambda./2 retarder 446 and then split into beam c and beam d using
a second calcite crystal 447. Thus, the data in beams c and d is
exchanged. A .lambda./4 retarder 448 is applied to all the beams,
converting them to 45.degree. polarization. Beams c and d are
combined with beams a and b using a calcite crystal 450, thus
implementing addition operations a+c and b+d. The resulting beams a
and b are then combined with the original beams a and b, using a
calcite crystal 456, the result beams being first polarized
using a polarizer 452 and then retarded using a .lambda./4 plate
454. Beams c and d are delayed using a phase plate 457 and then
have beams a and b combined with them, using a calcite crystal 458.
This completes a subtraction operation between the original
beams, a-d and b-c. The resulting beams c and d are attenuated
using an attenuator 460, polarized using a polarizer 462 and
retarded using a .lambda./4 retarder 464. Beams b-d are then
combined with beams a-c, using a calcite crystal 466.
[0224] At this point in the process, each pair of beams is
processed to yield a 2 input DCT. Beams a and c are polarized using
a polarizer 468 and retarded using a .lambda./4 retarder 470. A
calcite 472 combines the pre-466 crystal a beam with the current a
beam and spatially combines the b beam with the current c beam,
although they do not have the same polarization and are separated
by the next calcite. Beams b and d are delayed using a phase plate
474. A calcite 476 combines beams a and c into beams b and d. Beams
b and d are attenuated using an attenuator 478, beam b is polarized
using a polarizer 480 and then beams b and d are retarded using a
.lambda./4 retarder 482. A calcite crystal 484 is used to combine
beam d into beam c. The resulting beam c is polarized using a
polarizer 486 and is retarded using a .lambda./4 retarder 488. A
calcite crystal 490 spatially combines beam c into beam b. A
.lambda./2 retarder 492 retards beam b and a calcite crystal 494
splits out the pre-490 beam c. Elements 490-492 are used to
exchange the polarization states of beams b and c. Beam c is then
retarded using a .lambda./2 retarder 496, generating the DCT
result.
[0225] The description of FIGS. 9B and 10 has focused on
non-programmable embodiments. However, it should be noted that
bi-refringent switching networks usually include active elements,
which allow outside control of their behavior. An example of such
an element is a liquid crystal cell which can selectively (on the
application of an electric field) rotate the polarization of a
light ray. Another example is a beam switching element which
selectively swaps two beams. In some embodiments of the invention,
such controllable active elements are used to allow programming of
the device, however, in other embodiments this is not required.
Programming is especially useful for allowing a single component to
function in different ways for example for different image
portions, for compression or decompression and/or for different
steps of processing.
[0226] The present application is related to the following four PCT
applications filed on the same date as the instant application in the
IL receiving office, by applicant JTC2000 development (Delaware),
Inc.: attorney docket 141/01540 which especially describes various
optical processor designs, attorney docket 141/01542 which
especially describes data processing using separation into bit
planes and/or using feedback, attorney docket 141/01581 which
especially describes a method of optical sign extraction and
representation, and attorney docket 141/01582 which especially
describes a method of matching of discrete and continuous optical
components. The disclosures of all of these applications are
incorporated herein by reference.
[0227] It will be appreciated that the above described methods and
apparatus for optical processing may be varied in many ways,
including, changing the order of steps, which steps are performed
using electrical components and which steps are performed using
optical components, the representation of the data and/or the
hardware design. In addition, various distributed and/or
centralized hardware configurations may be used to implement the
above invention. In addition, a multiplicity of various features,
both of methods and of devices, have been described. It should be
appreciated that different features may be combined in different
ways. In particular, not all the features shown above in a
particular embodiment are necessary in every similar embodiment
of the invention. Further, combinations of the above features are
also considered to be within the scope of some embodiments of the
invention. In addition, the scope of the invention includes methods
of using, constructing, calibrating and/or maintaining the
apparatus described herein. When used in the following claims, the
terms "comprises", "comprising", "includes", "including" or the
like mean "including but not limited to".
* * * * *