U.S. patent application number 12/754773, for an omniview motionless camera orientation system, was published by the patent office on 2011-01-13 as publication number 20110007129.
This patent application is currently assigned to Sony Corporation. Invention is credited to Nicholas Busko, Daniel P. Kuban, H. Lee Martin, Steven D. Zimmerman.
Publication Number | 20110007129 |
Application Number | 12/754773 |
Document ID | / |
Family ID | 26686187 |
Publication Date | 2011-01-13 |
United States Patent Application | 20110007129 |
Kind Code | A1 |
Martin; H. Lee; et al. | January 13, 2011 |
OMNIVIEW MOTIONLESS CAMERA ORIENTATION SYSTEM
Abstract
An apparatus and method are provided for converting digital
images for use in an imaging system. The apparatus includes a data
memory which stores digital data representing an image having a
circular or spherical field of view such as an image captured by a
fish-eye lens, a control input for receiving a signal for selecting
a portion of the image, and a converter responsive to the control
input for converting digital data corresponding to the selected
portion into digital data representing a planar image for
subsequent display. Various methods include the steps of storing
digital data representing an image having a circular or spherical
field of view, selecting a portion of the image, and converting the
stored digital data corresponding to the selected portion into
digital data representing a planar image for subsequent display. In
various embodiments, the data converter and data conversion step
may use an orthogonal set of transformation algorithms.
Inventors: |
Martin; H. Lee; (Knoxville,
TN) ; Kuban; Daniel P.; (Oak Ridge, TN) ;
Zimmerman; Steven D.; (Knoxville, TN) ; Busko;
Nicholas; (Knoxville, TN) |
Correspondence
Address: |
FINNEGAN, HENDERSON, FARABOW, GARRETT & DUNNER;LLP
901 NEW YORK AVENUE, NW
WASHINGTON
DC
20001-4413
US
|
Assignee: |
Sony Corporation
|
Family ID: |
26686187 |
Appl. No.: |
12/754773 |
Filed: |
April 6, 2010 |
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
08887319 | Jul 2, 1997 | 7714936
08339663 | Nov 14, 1994 |
08189585 | Jan 31, 1994 | 5384588
08014508 | Feb 8, 1993 | 5359363
07699366 | May 13, 1991 | 5185667
Current U.S.
Class: |
348/36 ;
348/E5.024 |
Current CPC
Class: |
H04N 5/335 20130101;
H04N 5/23238 20130101; H04N 1/2158 20130101; A61B 2090/0813
20160201; H04N 5/2259 20130101; G08B 13/19628 20130101; G08B
13/19689 20130101; H04N 5/2628 20130101; H04N 7/183 20130101; G06T
3/0018 20130101; H04N 1/217 20130101; G06F 16/40 20190101; H04N
7/002 20130101 |
Class at
Publication: |
348/36 ;
348/E05.024 |
International
Class: |
H04N 5/225 20060101
H04N005/225 |
Claims
1-31. (canceled)
32. A system for providing perspective corrected views of a
selected portion of a received optical image captured using a wide
angle lens, the received optical image being distorted, the system
comprising: image capture means for receiving signals corresponding
to the received optical image and for digitizing the signal; input
image memory means for receiving the digitized signal; input means
for selecting a portion of the received image to view; image
transform processor means for processing the digitized signals to
produce an output signal corresponding to a perspective corrected
image of the selected portion of the received image; output image
memory means for receiving the output signal from the image
transform processor means; and output means connected to the output
image memory means for recording or displaying the perspective
corrected image of the selected portion; wherein the image
transform processor means comprises transform parameter calculation
means for calculating transform parameters for the selected portion
of the image and processes the digitized signal based on the
calculated transform parameters to generate the output signal.
33. A system according to claim 32, comprising a camera imaging
system for receiving the optical image and for producing the
signals corresponding to the received optical image for output to
the image capture means.
34. A system according to claim 33, comprising wide angle lens
means mounted on the camera imaging system for producing the
optical image for optical conveyance to the camera imaging
system.
35. A system according to claim 34, wherein the lens means is one
or more fish-eye lenses.
36. A system according to claim 32, wherein the input means
provides for input to the image transform processor means of one or
more of: a direction of view, tilting of a viewing angle, rotation
of a viewing angle, pan of the viewing angle, focus of the image
and magnification of the selected portion of the image.
37. A system according to claim 36, wherein tilting of the viewing
angle through at least 180 degrees is provided for.
38. A system according to claim 36, wherein rotation of the viewing
angle through 360 degrees is provided for.
39. A system according to claim 36, wherein pan of the viewing
angle through at least 180 degrees is provided for.
40. A system according to claim 39, wherein pan of the viewing
angle through 360 degrees is provided for.
41. A system according to claim 32, wherein the input means is a
user-operated manipulator switch means.
42. A system according to claim 32, wherein the input means is a
signal from a computer input means.
43. A system according to claim 32, wherein the image transform
processing means is programmed to implement the following
equations:

x = R[uA - vB + mR sin β sin ∂] / √(u² + v² + m²R²)

y = R[uC - vD - mR sin β cos ∂] / √(u² + v² + m²R²)

where:

A = (cos θ cos ∂ - sin θ sin ∂ cos β)

B = (sin θ cos ∂ + cos θ sin ∂ cos β)

C = (cos θ sin ∂ + sin θ cos ∂ cos β)

D = (sin θ sin ∂ - cos θ cos ∂ cos β)

and where: R = radius of the image circle; β = zenith angle;
∂ = azimuth angle in image plane; θ = object plane rotation angle;
m = magnification; u, v = object plane coordinates; x, y = image
plane coordinates.
44. A method for providing perspective corrected views of a
selected portion of an optical image captured with a wide angle
lens, the received optical image being distorted, the method
comprising: providing a digitized signal corresponding to the
optical image; selecting a portion of the optical image;
transforming the digitized signal to produce an output signal
corresponding to a perspective corrected image of the selected
portion of the received image; and displaying or recording the
perspective corrected image of the selected portion; wherein the
step of transforming the digitized signal comprises calculating
transform parameters for the selected portion of the image, the
calculated transform parameters being used to control the
transformation of the digitized signal to generate the output
signal.
45. A method according to claim 44, further comprising first
receiving the optical image, producing signals corresponding to the
received optical image, and digitizing the signals.
46. A method according to claim 44, further comprising capturing
the optical image with one or more fish-eye lenses.
47. A method according to claim 44, wherein the step of selecting
the portion of the image to view comprises selecting one or more
of: a direction of view; tilting of a viewing angle; rotation of a
viewing angle; pan of the viewing angle; focus of the image and
magnification of the selected portion of the image.
48. A method according to claim 47, wherein tilting of the viewing
angle through at least 180 degrees is provided for.
49. A method according to claim 47, wherein rotation of the viewing
angle through 360 degrees is provided for.
50. A method according to claim 47, wherein pan of the viewing
angle through at least 180 degrees is provided for.
51. A method according to claim 50, wherein pan of the viewing
angle through 360 degrees is provided for.
52. A method according to claim 44, wherein selection of the
portion of the image to view is achieved using a user-operated
manipulator switch means.
53. A method according to claim 44, wherein selection of the
portion of the image to view is controlled by a signal from a
computer input means.
54. A method according to claim 44, wherein the image
transformation implements the following two equations:

x = R[uA - vB + mR sin β sin ∂] / √(u² + v² + m²R²)

y = R[uC - vD - mR sin β cos ∂] / √(u² + v² + m²R²)

where:

A = (cos θ cos ∂ - sin θ sin ∂ cos β)

B = (sin θ cos ∂ + cos θ sin ∂ cos β)

C = (cos θ sin ∂ + sin θ cos ∂ cos β)

D = (sin θ sin ∂ - cos θ cos ∂ cos β)

and where: R = radius of the image circle; β = zenith angle;
∂ = azimuth angle in image plane; θ = object plane rotation angle;
m = magnification; u, v = object plane coordinates; x, y = image
plane coordinates.
55. A method according to claim 44, wherein a plurality of portions
of the image are selected for viewing and are displayed either
simultaneously or consecutively.
56. A method according to claim 44, wherein the image is viewed
interactively by repeating the steps of selecting, transforming and
displaying the portion of the image.
57. A method according to claim 44, wherein the step of
transforming the image is based on lens characteristics of the wide
angle lens.
58. A method according to claim 57, wherein the step of
transforming the image is based on azimuth angle invariability and
equidistant projection.
59. A method according to claim 44, wherein the step of
transforming the image is performed at real time video rates.
Description
[0001] This application is a continuation of U.S. application Ser.
No. 08/339,663 filed Nov. 14, 1994, which is a continuation of U.S.
application Ser. No. 08/189,585 filed Jan. 31, 1994 (now U.S. Pat.
No. 5,384,588), which is a continuation-in-part of U.S. application
Ser. No. 08/014,508 filed Feb. 8, 1993 (now U.S. Pat. No.
5,359,363), which is a continuation-in-part of U.S. application
Ser. No. 07/699,366 filed May 13, 1991 (now U.S. Pat. No.
5,185,667). Each of the above-referenced U.S. patents is expressly
incorporated by reference herein.
BACKGROUND OF THE INVENTION
[0002] 1. Technical Field
[0003] This invention relates generally to an apparatus and method
for transforming perspective-distorted circular field of view
images into non-distorted, normal perspective images having various
orientations, rotations, and magnifications within the field of
view.
[0004] 2. Related Information
[0005] Camera viewing systems are used in abundance for
surveillance, inspection, security, and remote sensing. Remote
viewing is critical, for example, for robotic manipulation tasks.
Close viewing is necessary for detailed manipulation tasks, while
wide-angle viewing aids positioning of the robotic system to avoid
collisions with the work space. Most of these systems use either a
fixed-mount camera with a limited viewing field to reduce
distortion, or they utilize mechanical pan-and-tilt platforms and
mechanized zoom lenses to orient the camera and magnify its image.
In applications where orientation of the camera and magnification
of its image are required, the mechanical solution is large and can
subtend a significant volume, making the viewing system difficult
to conceal or use in close quarters.
Several cameras are usually necessary to provide wide-angle viewing
of the work space.
[0006] In order to provide a maximum amount of viewing coverage or
subtended angle, mechanical pan/tilt mechanisms usually use
motorized drives and gear mechanisms to manipulate the vertical and
horizontal orientation. An example of such a device is shown in
U.S. Pat. No. 4,728,839 issued to J. B. Coughlan, et al, on Mar. 1,
1988. Collisions with the working environment caused by these
mechanical pan/tilt orientation mechanisms can damage both the
camera and the work space and impede the remote handling operation.
At the same time, viewing in such remote environments is extremely
important to the performance of inspection and manipulation
activities.
[0007] Camera viewing systems that use internal optics to provide
wide viewing angles have also been developed in order to minimize
the size and volume of the camera and the intrusion into the
viewing area. These systems rely on the movement of either a mirror
or prism to change the tilt-angle of orientation and provide
mechanical rotation of the entire camera to change the pan angle of
orientation. Additional lenses are used to minimize distortion.
Using this means, the size of the camera orientation system can be
minimized, but "blind spots" in the center of the view result.
Also, these systems typically have no means of magnifying the image
and/or producing multiple images from a single camera.
[0008] References that may be relevant to the evaluation of the
present invention are U.S. Pat. Nos. 4,772,942 issued to M. J. Tuck
on Sep. 20, 1988; 5,023,725 issued to D. McCutchen on Jun. 11,
1991; 5,067,019 issued to R. D. Juday on Nov. 19, 1991; and
5,068,735 issued to K. Tuchiya, et al on Nov. 26, 1991.
[0009] Accordingly, it is an object of the present invention to
provide an apparatus that can provide an image of any portion of
the viewing space within a selected field-of-view without moving
the apparatus, and then electronically correct for visual
distortions of the view.
[0010] It is another object of the present invention to provide
horizontal orientation (pan), vertical orientation (tilt) and
rotational orientation (rotation) of the viewing direction with no
moving mechanisms.
[0011] It is another object of the present invention to provide the
ability to magnify or scale the image (zoom in and out)
electronically.
[0012] It is another object of the present invention to provide
electronic control of the image intensity (iris level).
[0013] It is another object of the present invention to be able to
accomplish pan, tilt, zoom, rotation, and iris adjustments with
simple inputs made by a lay person from a joystick, keyboard
controller, or computer controlled means.
[0014] It is also an object of the present invention to provide
accurate control of the absolute viewing direction and orientations
using said input devices.
[0015] A further object of the present invention is to provide the
ability to produce multiple images with different orientations and
magnifications simultaneously from a single input image.
[0016] Another object of the present invention is to be able to
provide these images at real-time video rates, e.g. thirty
transformed images per second, and to support various display
format standards such as the National Television Standards
Committee RS-170 signal format and/or higher resolution formats
currently under development.
[0017] It is also an object of the present invention to provide a
system that can be used for automatic or manual surveillance of
selected environments, with optical views of these environments
corrected electronically to remove distortion so as to facilitate
this surveillance.
[0018] These and other objects of the present invention will become
apparent upon consideration of the drawings hereinafter in
combination with a complete description thereof.
DISCLOSURE OF THE INVENTION
[0019] In accordance with the present invention, there is provided
an omnidirectional viewing system that produces the equivalent of
pan, tilt, zoom, and rotation within a selected field-of-view with
no moving parts. Further, the present invention includes means for
controlling this omnidirectional viewing in surveillance
applications. This device includes a means for digitizing an
incoming or prerecorded video image signal, transforming a portion
of the video image based upon operator or preselected commands, and
producing one or more output images that are in correct perspective
for human viewing. In one embodiment, the incoming image is
produced by a fisheye lens which has a wide angle field-of-view.
This image is captured into an electronic memory buffer. A portion
of the captured image, either in real time or as prerecorded,
containing a region-of-interest is transformed into a
perspective-correct image by an image processing computer. The image
processing computer provides direct mapping of the image
region-of-interest into a corrected image using an orthogonal set
of transformation algorithms. The viewing orientation is designated
by a command signal generated by either a human operator or
computerized input. The transformed image is deposited in a second
electronic memory buffer where it is then manipulated to produce
the output image or images as requested by the command signal. This
is coupled with appropriate alarms and other outputs to provide a
complete surveillance system for selected environments.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] FIG. 1 shows a schematic block diagram of the signal
processing portion of the present invention illustrating the major
components thereof.
[0021] FIG. 2 is an exemplary drawing of a typical fisheye image
used as input by the present invention. Lenses having other
field-of-view values will produce images with similar distortion,
particularly when the field-of-view is about eighty degrees or
greater.
[0022] FIG. 3 is an exemplary drawing of the output image after
correction for a desired image orientation and magnification within
the original image.
[0023] FIG. 4 is a schematic diagram of the fundamental geometry
that the present invention embodies to accomplish the image
transformation.
[0024] FIG. 5 is a schematic diagram demonstrating the projection
of the object plane and position vector into image plane
coordinates.
[0025] FIG. 6 is a block diagram of the present invention as
utilized for surveillance/inspection applications incorporating the
basic transformation of video images obtained with, for example,
wide angle lenses to correct for optical distortions due to the
lenses, together with the control of the surveillance/inspection
and appropriate alarm systems.
[0026] FIGS. 7A and 7B, together, show a logic flow diagram
illustrating one specific embodiment of controller operation for
manual and automatic surveillance operations of the present
invention.
BEST MODE FOR CARRYING OUT THE INVENTION
[0027] In order to minimize the size of the camera orientation
system while maintaining the ability to zoom, a camera orientation
system that utilizes electronic image transformations rather than
mechanisms was developed. While numerous patents on mechanical
pan-and-tilt systems have been filed, no approach using strictly
electronic transforms and wide angle optics is known to have been
successfully implemented. In addition, the electro-optical approach
utilized in the present invention allows multiple images to be
extracted from the output of a single camera. These images can be
then utilized to energize appropriate alarms, for example, as a
specific application of the basic image transformation in
connection with a surveillance system. As utilized herein, the term
"surveillance" has a wide range including, but not limited to,
determining ingress or egress from a selected environment. Further,
the term "wide angle" as used herein means a field-of-view of about
eighty degrees or greater. Motivation for this device came from
viewing system requirements in remote handling applications where
the operating envelope of the equipment is a significant constraint
to task accomplishment.
[0028] The principles of the optical transform utilized in the
present invention can be understood by reference to the system 10
of FIG. 1. (This is also set forth in the afore-cited U.S. patent
application Ser. No. 07/699,366 that is incorporated herein by
reference.) Shown schematically at 11 is a wide angle, e.g., a
fisheye, lens that provides an image of the environment with a 180
degree field-of-view. The lens is attached to a camera 12 which
converts the optical image into an electrical signal. These signals
are then digitized electronically 13 and stored in an image buffer
14 within the present invention. An image processing system
consisting of an X-MAP and a Y-MAP processor shown as 16 and 17,
respectively, performs the two-dimensional transform mapping. The
image transform processors are controlled by the microcomputer and
control interface 15. The microcomputer control interface provides
initialization and transform parameter calculation for the system.
The control interface also determines the desired transformation
coefficients based on orientation angle, magnification, rotation,
and light sensitivity input from an input means such as a joystick
controller 22 or computer input means 23. The transformed image is
filtered by a 2-dimensional convolution filter 18 and the output of
the filtered image is stored in an output image buffer 19. The
output image buffer 19 is scanned out by display electronics 20 to
a video display device 21 for viewing.
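The X-MAP and Y-MAP processors of FIG. 1 amount to per-pixel coordinate lookup tables that need recomputation only when the view parameters change; each output frame is then a pure table lookup. A minimal software sketch of that structure follows; the NumPy arrays, nearest-neighbor rounding, and function names are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def build_maps(out_h, out_w, transform):
    """Precompute X-MAP and Y-MAP lookup tables.

    `transform` maps an output (object-plane) pixel coordinate
    (col, row) to an input (image-plane) coordinate (x, y); it is
    evaluated once per output pixel, so per-frame work reduces to
    a table lookup.
    """
    xmap = np.empty((out_h, out_w), dtype=np.int32)
    ymap = np.empty((out_h, out_w), dtype=np.int32)
    for row in range(out_h):
        for col in range(out_w):
            x, y = transform(col, row)
            xmap[row, col] = int(round(x))
            ymap[row, col] = int(round(y))
    return xmap, ymap

def remap(frame, xmap, ymap):
    """Apply the maps: output[row, col] = frame[ymap[row, col], xmap[row, col]]."""
    return frame[ymap, xmap]
```

A hardware implementation would stream the same lookups at video rates, but the separation of slow map computation from fast per-frame remapping is the same.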
[0029] A range of lens types can be accommodated to support various
fields of view. The lens optics 11 correspond directly with the
mathematical coefficients used with the X-MAP and Y-MAP processors
16 and 17 to transform the image. The capability to pan and tilt
the output image remains even though a different maximum field of
view is provided with a different lens element.
[0030] The invention can be realized by proper combination of a
number of optical and electronic devices. The lens 11 is
exemplified by any of a series of wide angle lenses from, for
example, Nikon, particularly the 8 mm F2.8. Any video source 12 and
image capturing device 13 that converts the optical image into
electronic memory can serve as the input for the invention such as
a Videk Digital Camera interfaced with Texas Instruments TMS34061
integrated circuits. Input and output image buffers 14 and 19 can
be constructed using Texas Instrument TMS44C251 video random access
memory chips or their equivalents. The control interface can be
accomplished with any of a number of microcontrollers including the
Intel 80C196. The X-MAP and Y-MAP transform processors 16 and 17
and image filtering 18 can be accomplished with application
specific integrated circuits or other means as will be known to
persons skilled in the art. The display driver can also be
accomplished with integrated circuits such as the Texas Instruments
TMS34061. The output video signal can be, for example, NTSC
RS-170, compatible with most commercial television displays in the
United States. Remote control 22 and computer control 23 are
accomplished via readily available switches and/or computer systems
that also will be well known. These components function as a system
to select a portion of the input image (fisheye or other wide
angle) and then mathematically transform the image to provide the
proper perspective for output. The keys to the success of the
invention include:
[0031] (1) the entire input image need not be transformed, only the
portion of interest;
[0032] (2) the required mathematical transform is predictable based
on the lens characteristics; and
[0033] (3) calibration coefficients can be modified by the end user
to correct for any lens/camera combination supporting both new and
retrofit applications.
[0034] The transformation that occurs between the input memory
buffer 14 and the output memory buffer 19, as controlled by the two
coordinated transformation circuits 16 and 17, is better understood
by referring to FIGS. 2 and 3. The image shown in FIG. 2 is a
rendering of the image of a grid pattern produced by a fisheye
lens. This image has a field-of-view of 180 degrees and shows the
contents of the environment throughout an entire hemisphere. Notice
that the resulting image in FIG. 2 is significantly distorted
relative to human perception. Similar distortion will be obtained
even with lesser field-of-view lenses. Vertical grid lines in the
environment appear in the image plane as 24a, 24b, and 24c.
Horizontal grid lines in the environment appear in the image plane
as 25a, 25b, and 25c. The image of an object is exemplified by 26.
A portion of the image in FIG. 2 has been corrected, magnified, and
rotated to produce the image shown in FIG. 3. Item 27 shows the
corrected representation of the object in the output display. The
results shown in the image in FIG. 3 can be produced from any
portion of the image of FIG. 2 using the present invention. The
corrected perspective of the view is demonstrated by the
straightening of the grid pattern displayed in FIG. 3. In the
present invention, these transformations can be performed at
real-time video rates (e.g., thirty times per second), compatible
with commercial video standards.
[0035] The transformation portion of the invention as described has
the capability to pan and tilt the output image through the entire
field of view of the lens element by changing the input means, e.g.
the joystick or computer, to the controller. This allows a large
area to be scanned for information as can be useful in security and
surveillance applications. The image can also be rotated through
any portion of 360 degrees on its axis changing the perceived
vertical of the displayed image. This capability provides the
ability to align the vertical image with the gravity vector to
maintain a proper perspective in the image display regardless of
the pan or tilt angle of the image. The invention also supports
modifications in the magnification used to display the output
image. This is commensurate with a zoom function that allows a
change in the field of view of the output image. This function is
extremely useful for inspection and surveillance operations. The
magnitude of zoom provided is a function of the resolution of the
input camera, the resolution of the output display, the clarity of
the output display, and the amount of picture element (pixel)
averaging that is used in a given display. The invention supports
all of these functions to provide capabilities associated with
traditional mechanical pan (through 180 degrees), tilt (through 180
degrees), rotation (through 360 degrees), and zoom devices. The
digital system also supports image intensity scaling that emulates
the functionality of a mechanical iris by shifting the intensity of
the displayed image based on commands from the user or an external
computer.
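The iris-level emulation described above reduces to a per-pixel intensity scaling. A minimal sketch, assuming 8-bit pixels and a simple multiplicative gain (both assumptions for illustration):

```python
import numpy as np

def scale_intensity(frame, gain):
    """Emulate a mechanical iris by scaling pixel intensities.

    `frame` is an 8-bit image; gain > 1 brightens, gain < 1 darkens.
    Results are clipped to the displayable 0-255 range.
    """
    scaled = frame.astype(np.float64) * gain
    return np.clip(scaled, 0, 255).astype(np.uint8)
```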
[0036] The postulates and equations that follow are based on the
image transformation portion of the present invention utilizing a
wide angle lens as the optical element. These also apply to other
field-of-view lens systems. There are two basic properties and two
basic postulates that describe the perfect wide angle lens system.
The first property of such a lens is that the lens has a 2π
steradian field-of-view and the image it produces is a circle. The
second property is that all objects in the field-of-view are in
focus, i.e. the perfect wide angle lens has an infinite
depth-of-field. The two important postulates of this lens system
(refer to FIGS. 4 and 5) are stated as follows:
[0037] Postulate 1: Azimuth angle invariability--For object points
that lie in a content plane that is perpendicular to the image
plane and passes through the image plane origin, all such points
are mapped as image points onto the line of intersection between
the image plane and the content plane, i.e. along a radial line.
The azimuth angle of the image points is therefore invariant to
elevation and object distance changes within the content plane.
[0038] Postulate 2: Equidistant Projection Rule--The radial
distance, r, from the image plane origin along the azimuth angle
containing the projection of the object point is linearly
proportional to the zenith angle β, where β is defined as
the angle between a perpendicular line through the image plane
origin and the line from the image plane origin to the object
point. Thus the relationship:

r = kβ (1)
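The equidistant projection rule of Postulate 2 can be exercised numerically. A minimal sketch, assuming a 180-degree lens whose rim (zenith angle β = π/2) falls exactly on the image circle of radius R, so that k = 2R/π (the lens and constant are assumptions for illustration):

```python
import math

def fisheye_radius(beta, R):
    """Radial image distance r = k*beta for an equidistant-projection
    fisheye, with k chosen so that beta = pi/2 (the rim of a
    180-degree field of view) maps to the image circle radius R."""
    k = R / (math.pi / 2)
    return k * beta
```

A point halfway up the hemisphere (β = π/4) thus lands at exactly half the image circle radius, which is the linearity the postulate asserts.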
[0039] Using these properties and postulates as the foundation of
the lens system, the mathematical transformation for obtaining a
perspective corrected image can be determined. FIG. 4 shows the
coordinate reference frames for the object plane and the image
plane. The coordinates u,v describe object points within the object
plane. The coordinates x,y,z describe points within the image
coordinate frame of reference.
[0040] The object plane shown in FIG. 4 is a typical region of
interest to determine the mapping relationship onto the image plane
to properly correct the object. The direction of view vector,
DOV[x,y,z], determines the zenith and azimuth angles for mapping
the object plane, UV, onto the image plane, XY. The object plane is
defined to be perpendicular to the vector, DOV[x,y,z].
[0041] The location of the origin of the object plane in terms of
the image plane [x,y,z] in spherical coordinates is given by:
x = D sin β cos ∂

y = D sin β sin ∂

z = D cos β (2)

where D = scalar length from the image plane origin to the object
plane origin, β is the zenith angle, and ∂ is the azimuth angle in
image plane spherical coordinates. The origin of the object plane is
represented as a vector using the components given in Equation 2
as:

DOV[x,y,z] = [D sin β cos ∂, D sin β sin ∂, D cos β] (3)
[0042] DOV[x,y,z] is perpendicular to the object plane and its
scalar magnitude D provides the distance to the object plane. By
aligning the YZ plane with the direction of action of DOV[x,y,z],
the azimuth angle ∂ becomes either 90 or 270 degrees and
therefore the x component becomes zero, resulting in the
DOV[x,y,z] coordinates:

DOV[x,y,z] = [0, -D sin β, D cos β] (4)
[0043] Referring now to FIG. 5, the object point relative to the UV
plane origin in coordinates relative to the origin of the image
plane is given by the following:

x = u

y = v cos β

z = v sin β (5)

Therefore, the coordinates of a point P(u,v) that lies in the
object plane can be represented as a vector P[x,y,z] in image plane
coordinates:

P[x,y,z] = [u, v cos β, v sin β] (6)
where P[x,y,z] describes the position of the object point in image
coordinates relative to the origin of the UV plane. The object
vector O[x,y,z] that describes the object point in image
coordinates is then given by:

O[x,y,z] = DOV[x,y,z] + P[x,y,z] (7)

O[x,y,z] = [u, v cos β - D sin β, v sin β + D cos β] (8)

Projection onto a hemisphere of radius R attached to the image
plane is determined by scaling the object vector O[x,y,z] to
produce a surface vector S[x,y,z]:

S[x,y,z] = R·O[x,y,z] / |O[x,y,z]| (9)
[0044] By substituting for the components of O[x,y,z] from Equation
8, the vector S[x,y,z] describing the image point mapping onto the
hemisphere becomes:

S[x,y,z] = R·[u, (v cos β - D sin β), (v sin β + D cos β)] / √(u² + (v cos β - D sin β)² + (v sin β + D cos β)²) (10)
[0045] The denominator in Equation 10 represents the length or
absolute value of the vector O[x,y,z] and can be simplified through
algebraic and trigonometric manipulation to give:

S[x,y,z] = R·[u, (v cos β - D sin β), (v sin β + D cos β)] / √(u² + v² + D²) (11)
[0046] From Equation 11, the mapping onto the two-dimensional image
plane can be obtained for both x and y as:

x = Ru / √(u² + v² + D²) (12)

y = R(v cos β - D sin β) / √(u² + v² + D²) (13)
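The mapping of Equations 12 and 13 can be written directly as a function. This is a minimal sketch of that single object-to-image point mapping (the function name and argument order are assumptions for illustration):

```python
import math

def object_to_image(u, v, beta, D, R):
    """Map object-plane coordinates (u, v) to image-plane (x, y)
    per Equations 12 and 13, with the azimuth aligned to the YZ
    plane as in Equation 4.

    beta = zenith angle, D = distance from image plane origin to
    object plane origin, R = radius of the image circle.
    """
    denom = math.sqrt(u * u + v * v + D * D)
    x = R * u / denom
    y = R * (v * math.cos(beta) - D * math.sin(beta)) / denom
    return x, y
```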
[0047] Additionally, the image plane center to object plane
distance D can be represented in terms of the image circular radius
R by the relation:
D=mR (14)
[0048] where m represents the scale factor in radial units R from
the image plane origin to the object plane origin. Substituting
Equation 14 into Equations 12 and 13 provides a means for obtaining
an effective scaling operation or magnification which can be used
to provide zoom operation.
x=Ru/sqrt(u^2+v^2+m^2R^2) (15)
y=R(v cos .beta.-mR sin .beta.)/sqrt(u^2+v^2+m^2R^2) (16)
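Equations 15 and 16 can be exercised numerically in this special case of no in-plane rotation. A minimal Python sketch; the function name is illustrative only:

```python
import math

def planar_to_image(u, v, beta, m, R):
    """Map object-plane (u, v) to image-plane (x, y) per
    Equations 15 and 16, with D expressed as mR (Equation 14)."""
    denom = math.sqrt(u * u + v * v + (m * R) ** 2)
    x = R * u / denom
    y = R * (v * math.cos(beta) - m * R * math.sin(beta)) / denom
    return x, y
```

Note that m enters only through the term (mR)^2 in the denominator (and the tilt offset in y), which is how a single scalar provides the effective scaling or zoom operation.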
[0049] Using the equations for two-dimensional rotation of axes for
both the UV object plane and the XY image plane, Equations 15 and 16
can be manipulated further into a more general set of equations that
provides for rotation within both the image plane and the object
plane:
x=R[uA-vB+mR sin .beta. sin .differential.]/sqrt(u^2+v^2+m^2R^2) (17)
y=R[uC-vD-mR sin .beta. cos .differential.]/sqrt(u^2+v^2+m^2R^2) (18)
[0050] where:
A=(cos o cos .differential.-sin o sin .differential. cos .beta.)
B=(sin o cos .differential.+cos o sin .differential. cos .beta.)
C=(cos o sin .differential.+sin o cos .differential. cos .beta.)
D=(sin o sin .differential.-cos o cos .differential. cos .beta.)
(19)
[0051] and where:
[0052] R=radius of the image circle
[0053] .beta.=zenith angle
[0054] .differential.=Azimuth angle in image plane
[0055] o=Object plane rotation angle
[0056] m=Magnification
[0057] u,v=object plane coordinates
[0058] x,y=image plane coordinates
[0059] Equations 17 and 18 provide a direct mapping from the UV
space to the XY image space and are the fundamental mathematical
result that supports the functioning of the present omnidirectional
viewing system with no moving parts. By knowing the desired zenith,
azimuth, and object plane rotation angles and the magnification,
the locations of x and y in the imaging array can be determined.
This approach provides a means to transform an image from the input
video buffer to the output video buffer exactly. Also, the imaging
system is completely symmetrical about the zenith; therefore, the
vector assignments and resulting signs of various components can be
chosen differently depending on the desired orientation of the
postulates and mathematical equations can be modified for various
lens elements as necessary for the desired field-of-view coverage
in a given application.
[0060] The input means defines the zenith angle, .beta., the
azimuth angle, .differential., the object rotation, o, and the
magnification, m. These values are substituted into Equation 19 to
determine the coefficients for substitution into Equations 17 and 18.
The image circle radius, R, is a fixed value that is determined by
the camera lens and element relationship. The variables u and v vary
throughout the object plane, determining the values for x and y in
the image plane coordinates.
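The procedure just described amounts to evaluating Equation 19 and then Equations 17 and 18 for each (u, v). A minimal Python sketch; names are illustrative, with `phi` standing in for the object rotation angle o, `delta` for the azimuth .differential., and lower-case a, b, c, d for the coefficients A, B, C, D of Equation 19:

```python
import math

def uv_to_xy(u, v, beta, delta, phi, m, R):
    """Direct mapping from UV object-plane coordinates to XY
    image-plane coordinates via Equations 17, 18 and 19."""
    # Equation 19: rotation coefficients
    a = math.cos(phi) * math.cos(delta) - math.sin(phi) * math.sin(delta) * math.cos(beta)
    b = math.sin(phi) * math.cos(delta) + math.cos(phi) * math.sin(delta) * math.cos(beta)
    c = math.cos(phi) * math.sin(delta) + math.sin(phi) * math.cos(delta) * math.cos(beta)
    d = math.sin(phi) * math.sin(delta) - math.cos(phi) * math.cos(delta) * math.cos(beta)
    denom = math.sqrt(u * u + v * v + (m * R) ** 2)  # D = mR (Equation 14)
    # Equations 17 and 18
    x = R * (u * a - v * b + m * R * math.sin(beta) * math.sin(delta)) / denom
    y = R * (u * c - v * d - m * R * math.sin(beta) * math.cos(delta)) / denom
    return x, y
```

With all three angles zero, the coefficients collapse to a = 1, b = c = 0, d = -1, and the expressions reduce to Equations 15 and 16, which gives a quick sanity check.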
[0061] From the foregoing, it can be seen that a wide angle lens
provides a substantially hemispherical view that is captured by a
camera. The image is then transformed into a corrected image at a
desired pan, tilt, magnification, rotation, and focus based on the
desired view as described by a control input. The image is then
output to a television display with the perspective corrected.
Accordingly, no mechanical devices are required to attain this
extensive analysis and presentation of the view of an environment
through 180 degrees of pan, 180 degrees of tilt, 360 degrees of
rotation, and various degrees of zoom magnification.
[0062] As indicated above, one application for the perspective
correction of images obtained with a motionless wide angle camera
is in the field of surveillance. The term "surveillance" is meant
to include inspection and like operations as well. It is often
desired to continuously or periodically view a selected environment
to determine activity in that environment. The term "environment"
is meant to include such areas as rooms, warehouses, parks and the
like. This activity might be, for example, ingress or egress of
some object relative to that environment. It might also be some
action that is taking place in that environment. It may be desired
to carry out this surveillance either automatically at the desired
frequency (or continuously), or upon demand by an operator. The
size of the environment may require more than one motionless camera
for complete surveillance.
[0063] Such a surveillance system is indicated generally at 30 of
FIG. 6. A video camera unit 32, including a wide angle lens 31, is
utilized to view the selected environment (or portion of the
environment), with the output therefrom being electrical signals
related to the elements as seen by the camera system. These
signals, when present, are either directly presented to an image
transformation system 33 (the components of FIG. 1 without the
camera/lens and the TV monitor thereof) or to a videotape recorder
34 for subsequent processing in the image transformation system 33.
This permits evaluation during "post event" review as well as a
review of events that occur in real time. It will be understood
that additional camera-lens units in an environment, as well as
videotape recorders, can be utilized as indicated at 35.
[0064] Various external elements are utilized to govern the
operation of the transformation system. For example, appropriate
discrete switches 36 are used for the selection of the environment
or portion of the environment to be monitored. These switches can
be positioned (and operated) either at the control center or at the
environment (or other remote location). When positioned in the
environment, these switches can indicate some action occurring in
the environment (door opening, window breaking, etc.), with the
result that the virtual camera of the system is directed to the
point of interest and an external audible alarm is then signaled if
desired. Also, alarm conditions can activate the video tape recorder
discussed below. Since the system monitors the
presence of an incoming video signal, the device can signal an
alarm when the incoming video signal is disrupted. Where the
monitoring is to be preselected, one input can be a computer 38.
Another form of control is through the use of operator controls 40
such that the operator can select at any time the operation of the
transformation. Options that are available in either of these types
of control are "Quad display" (either through the control by the
computer 38 or the operator controls 40) wherein four displays
occur on a monitor. Another option available through either control
is that of "tweening" which is a selection of moving the effective
view of the camera incrementally between active points or switching
between active cameras within the environment. As previously
described, these inputs are used also for selecting pan, tilt, zoom
and rotation.
[0065] The output of the transformation system 33 is typically in
digital format. As such, this output can control alarm enunciators
42 positioned at any location, or other forms of discrete alarms
44. They can also activate a videotape recording machine 46. In
addition, the alarms 44 can be used to detect and announce loss of
video signal, and permit external interrogation (manual or
automated) of system status by the computer interface of the system
parameters including component or power failure. Such interrogation
would include verification of operation, video input,
pan-tilt-rotation angles, magnification and setup parameters. As in
the system of FIG. 1, this surveillance system 30 provides for
pictorial environment display on a TV-type monitor 48 and/or on the
tape of the recording machine 46.
[0066] FIGS. 7A and 7B, jointly, form a logic flow diagram that
illustrates how one specific embodiment of the controller 30 can
perform manual and automatic surveillance activities. A decision is
made at 50 as to whether the system is under computer (external)
control (see 38 of FIG. 6) or manual (internal) control. If under
computer operation, the camera orientations and magnifications are
communicated directly to the system for action at 52. In the event
of internal control, it is next determined if any environmental
switches are closed as at 54. These switches typically are hard
wired, magnetic, infrared, or other forms that indicate a change in
the environment in a certain location. The choice of a specific
type of switch for each application will be known by persons
skilled in the art. These changes (if "YES") give rise to signals
at 56 that point the virtual camera in the direction of interest and
then signal an external alarm for creating an audible alarm 42
and/or turning on the video tape recorder 34.
[0067] After initiating these steps, or if the answer is "NO" at
54, the switches on the unit's control panel are read at 58 to
determine the configuration and display actions needed. "Quad
display" (either four displays or one display on the monitor 48) is
checked at 60 and, if the four displays are desired (the "YES"),
this is initiated at 62. If "tweening" (incremental effective
movement of a camera or switching between cameras) is desired, this
is checked at 64 and the appropriate selection is made at 66.
[0068] Inputs for pan, tilt, zoom and rotation are interpreted at
68 and applied to the presently active display camera. Every user
interaction resets the scan timer at 70 so that while the user is
in control, no virtual camera change is occurring. When the scan
time reaches zero, as monitored at 72, the next camera is made
active and the image being displayed changes direction and/or
content as at 74 to thereby update operation as at 52.
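The main path of this flow (decisions 50, 54 and 72, and actions 52, 56 and 74; the quad-display and tweening branches at 60 through 66 are omitted) can be sketched as one pass of a polling loop. The data model below is entirely hypothetical and is not part of the disclosed embodiment:

```python
def scan_step(state):
    """One pass of the FIG. 7A/7B control loop (hypothetical data model)."""
    if state["external_control"]:                   # decision 50
        return state["computer_command"]            # action 52
    if any(state["env_switches"]):                  # decision 54
        state["camera"] = state["env_switches"].index(True)
        state["alarm"] = True                       # action 56: alarm and/or VTR
        return {"camera": state["camera"]}
    if state["user_input"] is not None:             # inputs read at 68
        state["scan_timer"] = state["scan_period"]  # timer reset at 70
        return state["user_input"]
    state["scan_timer"] -= 1
    if state["scan_timer"] <= 0:                    # decision 72
        state["scan_timer"] = state["scan_period"]
        state["camera"] = (state["camera"] + 1) % state["n_cameras"]  # action 74
    return {"camera": state["camera"]}
```

The key behavior is that any user interaction resets the scan timer, so the view never changes out from under an operator; only when the timer expires does the next camera become active.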
[0069] From the foregoing, it will be understood by those versed in
the art that an advanced surveillance system has been provided.
This system utilizes at least one motionless video camera, having a
wide angle lens, within the environment where surveillance is
desired. The perspective of images obtained with the lens/camera
are corrected through transformation according to the technology of
Ser. No. 07/699,366 either directly or after storage in a videotape
recorder. Many operational conditions are selectable including
tilt, pan, zoom and rotation. Further options include multi-image
displays, incremental scanning of the images, and switching between
cameras. The system provides for automatic operation coupled with
user operation if desired.
[0070] While certain specific elements of construction are
indicated throughout the description of the present invention,
these are given for illustration and not for limitation. Thus, the
invention is to be limited only by the appended claims and their
equivalents.
* * * * *