U.S. patent application number 14/595203 was filed with the patent
office on 2015-01-12 for system and method of video wall setup and
adjustment using automated image analysis.
This patent application is currently assigned to Userful
Corporation. The applicants listed for this patent are Timothy
Griffin and Adam Ryan McDaniel. The invention is credited to
Timothy Griffin and Adam Ryan McDaniel.
United States Patent Application 20150279037
Kind Code: A1
Griffin; Timothy; et al.
October 1, 2015
Application Number: 14/595203
Family ID: 54191113
System and Method of Video Wall Setup and Adjustment Using
Automated Image Analysis
Abstract
A system is disclosed for identifying, placing and configuring a
physical arrangement of a plurality of displays via image analysis
of captured digital camera images depicting unique configuration
images output to said displays to facilitate uniform operation of
said plurality of displays as a single display area for example as
a video wall. The system pairs and configures displays depicted in
the captured images to individual displays within the physical
arrangement through controlling and analyzing of the output of said
displays captured in said images. A method and computer readable
medium are also disclosed that operate in accordance with the
system.
Inventors: Griffin; Timothy (Calgary, CA); McDaniel; Adam Ryan
(Calgary, CA)
Applicants: Griffin; Timothy (Calgary, CA); McDaniel; Adam Ryan
(Calgary, CA)
Assignee: Userful Corporation (Calgary, CA); Griffin; Timothy
(Calgary, CA)
Family ID: 54191113
Appl. No.: 14/595203
Filed: January 12, 2015
Related U.S. Patent Documents
Application Number: 61926295
Filing Date: Jan 11, 2014
Current U.S. Class: 345/1.3
Current CPC Class: G09G 2340/14 20130101; G09G 5/006 20130101;
G09G 2370/042 20130101; G09G 5/12 20130101; G09G 2370/022
20130101; G06F 3/1438 20130101; G09G 2320/0693 20130101; G09G
2370/06 20130101; G06F 3/1446 20130101; G09G 2370/16 20130101;
G06T 1/60 20130101; H04N 9/3147 20130101
International Class: G06T 7/00 20060101 G06T007/00; G06T 7/60
20060101 G06T007/60; H04N 9/12 20060101 H04N009/12; G06F 3/14
20060101 G06F003/14; G02F 1/1333 20060101 G02F001/1333
Claims
1. A system for operating a plurality of displays as a video
wall, comprising: a control module in communication with each of
said plurality of displays, configured to receive display
information from, and provide output commands to, individual ones
of the plurality of displays; unique configuration images,
designed to be interpretable via computerized image analysis of
their captured output, to provide information on ones of
identity, edges, corners, color characteristics, settings, size
and placement of individual displays, said unique configuration
images being output to individual ones of the plurality of
displays in response to commands from the control module; a
digital camera device in communication with the control module,
being configured to capture and send for analysis digital camera
images depicting ones of the plurality of displays including the
unique configuration images output thereupon at the time of
capture; an automated image analysis module, in communication
with the control module, for receiving and analyzing said digital
camera images, said analyzing comprising: isolating image data
from the unique configuration images output thereupon; pairing
ones of the depicted displays in digital camera images to
corresponding ones of the plurality of displays; and deriving
individual display mapping data relative to ones of identity,
position, placement, rotation, settings and color for ones of the
displays within the video wall; said mapping data being stored in
computer readable memory and applied to facilitate uniformity of
output of said plurality of displays.
2. The system of claim 1, further comprising a Graphical User
Interface (GUI) module consisting of a user interacting with a
web-page being rendered by a web-browser running on a web-browsing
device comprising a digital camera, the web-browser in
communication with the control module, being configured to request
the user to grant camera access and capture digital images of the
plurality of displays.
3. The system of claim 2, further comprising the automated method
being used in conjunction with a GUI controlled by a user,
certain ones of the setup and configuration information required
being provided by the user, others being performed via the
automated image analysis module.
4. The system of claim 3, further comprising the GUI being
configured to display a graphical representation of the mapping
comprising a plurality of blocks, each block representing, and
corresponding to, one of the devices comprising the video wall,
the user being able to manipulate elements of the display to
further adjust the mapping data.
5. The system of claim 2, where the digital camera device is
embedded within a smart-phone device and the GUI is provided by a
native smart-phone application in communication with the control
module over a wireless network connection.
6. The system of claim 2, wherein the user is interacting with the
GUI via ones of: a web browser; a laptop; a smartphone; a tablet; a
personal computer; a mobile device; a touch-screen; a mouse; a
keyboard; an input device; voice commands; gesture input; touch
input.
7. The system of claim 1, wherein the control module further
comprises a web-server running on an embedded PC housed within at
least one of the plurality of displays.
8. The system of claim 1, further comprising the plurality of
displays being updated to output, using the mapping data, at least
one image spanning the plurality of displays.
9. The system of claim 1, being further configured to perform the
outputting (of the unique configuration images), capturing (via a
digital camera device), and analyzing (to derive mapping data)
multiple times in sequence, each time utilizing the updated mapping
data and each time further facilitating uniformity of output to the
plurality of displays, the output of subsequent unique
configuration images being controlled by the system.
10. The system of claim 1 wherein, the updating of the mapping data
based on digital image analysis performed by the automated image
analysis module includes ones of: adjusting the aspect ratio or
size of the video-wall canvas to match one or more of the bounding
edges of the total display canvas captured by the camera;
spatially positioning (shifting and rotating) of ones of the
displays based on detected markers; adjusting the relative size of
each display based on detected locations of display corner markers;
modifying the positioning and scaling of the images in response to
detected physical display sizes; increasing or decreasing the
relative brightness settings for image data sent to individual ones
of the displays; increasing or decreasing various color settings
for image data sent to individual ones of the displays; increasing
or decreasing various color settings in communication with the
display itself via a communications protocol; detecting the size of
the bezel for ones of the displays.
11. The system of claim 1, wherein, the sending for analysis of the
captured images comprises wireless transmission of image data from
the digital camera device over a wireless communication
network.
12. The system of claim 1, further comprising the digital camera
device supplying additional meta-data about the captured image
comprising ones of: camera orientation, detected ambient light,
detected distance from the subject, focal length, shutter speed,
flash settings, camera aperture, detected camera rotation angle
relative to the horizon, and GPS location; these additional data
being
used to increase the accuracy or speed of image analysis or provide
additional details about the video wall.
13. The system of claim 1, wherein, visual elements, being
specific unique identification symbols, are used in the
configuration images to facilitate assessing ones of the
identity, relative position, rotation and color of the displays,
these visual elements being ones of: embedded QR codes; specific
corner markers to facilitate spatial location of corners of the
display; specific edge markers;
linear patterns across the canvas as a way of assessing continuity
across bezel edges between different displays; individual pixels
at the edge of each display being illuminated to ensure they are
visible within the canvas, providing an edge-check method;
specific color(s) as a means of assessing color uniformity
between multiple ones of the displays; a QR code embedded within
the image indicating display identity; lines proximal to display
edges indicating display edges;
markers proximal to display corners indicating display corners;
solid blocks of color depicting color characteristics and
settings;
corner and edge markers depicting relative display size; a sequence
of lines spanning the multiple displays within the video wall
canvas facilitating precise positioning of displays; a uniform
color across all displays.
14. The system of claim 1, where the image analysis software
corrects for planar spatial analysis based on the position and
angle of the camera.
15. The system of claim 1, where the display information received
from the display via the control module includes display sizing
and resolution information, the automated image analysis module
further using this sizing and resolution information to assist in
pairing ones of the depicted displays.
16. The system of claim 1, where several images are used in
rotation to precisely determine alignment, the images comprising:
at least one identification image to determine the identity of
each display; at least one corner coordinates image to determine
the spacing, rotation and placement of displays; and at least one
color calibration image to match and calibrate color amongst
multiple displays.
17. The system of claim 1, further comprising error checks being
performed on captured image data either prior to or after sending
for analysis, where the checks are performed on the captured
image and error messages are generated as feedback for output to
the camera operator, said detected error conditions comprising
ones of: the detected number of displays in the captured image
not matching the detected number of displays in communication
with the server; the incidence-angle of the captured image
deviating too far from the recommended 90 deg angle; the clarity,
contrast, and resolution of the captured image being sub-optimal
for automated detection routines; the captured image being taken
too far from or too close to the video wall; light or flash
reflections being too strong for image detection.
18. A computer implemented method of adjusting, within a video-wall
canvas, ones of identity, placement, color characteristics and
configuration of individual ones of a plurality of displays by a
control module in communication with each of said plurality of
displays, the control module also being in communication with an
image analysis module, the image analysis module also being in
communication with a digital camera device, in order to facilitate
the operation of said plurality of displays as a video-wall, the
method comprising: detecting the plurality of displays; retrieving
information from said displays; generating of unique configuration
images, the configuration images having been designed to
communicate, via computerized image analysis, ones of the
corresponding display's identity, edges and corners, placement
within the canvas and color calibration; creating a test canvas
based on said configuration images for outputting said unique
configuration images to individual ones of the plurality of
displays; outputting said test canvas to the displays; capturing
via the digital camera device digital images of the plurality of
displays including the unique configuration images output
thereupon; retrieving by the image analysis module over a network
said digital images for analysis; analyzing, by the image analysis
module, of the received digital images; pairing ones of the
depicted displays in digital camera images to corresponding ones of
the plurality of displays; deriving individual display mapping data
relative to ones of identity, position, placement, rotation,
settings and color for ones of the displays within the video wall;
adjusting in response to the analyzing said identification,
placement, and configuration for individual ones of the plurality
of displays; storing said settings in computer readable memory;
applying the updated settings to facilitate uniformity of output
through an updated canvas.
19. The method of claim 18, further comprising a Graphical User
Interface (GUI) consisting of a user interacting with a web-page
being rendered by a web-browser running on a web-browsing device
comprising a digital camera, the web-browser in communication with
the control module, being configured to request the user to grant
camera access and capture digital images of the plurality of
displays.
20. A computer-readable medium storing one or more computer
readable instructions configured to cause one or more processors
to: display, via a control module in communication with each of a
plurality of displays, unique configuration images, said images
designed to be interpretable via computerized image analysis of
their captured output, to provide information on ones
of identity, edges, corners, color characteristics, settings, size
and placement of individual displays in the form of a test canvas;
receive, via the control module, display information from
individual ones of the plurality of displays; receive, via the
control module, images from a digital camera device configured to
capture, and send for analysis, digital images depicting ones of
the plurality of displays including the unique configuration images
output thereupon at the time of capture; deliver via the control
module both said digital images and the said test canvas as
displayed to an automated image analysis module, for analysis of
the digital camera images; analyze the images in the automated
image analysis module, said analyzing comprising: isolating image
data from the unique configuration images output thereupon; pairing
ones of the depicted displays in digital camera images to
corresponding ones of the plurality of displays; deriving
individual display mapping data relative to ones of identity,
position, placement, rotation, settings and color for ones of the
displays within the physical arrangement; retrieve, via the
control module, the individual display mapping data; and write,
via the control module, a configuration file to permanent
storage.
Description
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority from U.S. Provisional
Patent Application No. 61/926,295 filed on Jan. 11, 2014, which is
hereby incorporated by reference.
FIELD OF INVENTION
[0002] Large electronic displays may be formed from an array of
monitors referred to as a "video-wall". For example, a video-wall
might be composed of a 3 by 3 array of nine monitors, each
monitor simultaneously displaying a segment of a single image,
thereby creating the appearance of a single large display
composed of rectangular portions.
[0003] The present invention relates generally to improving the
setup and operation of large displays and particularly to network
addressable video-wall displays.
BACKGROUND OF THE INVENTION
[0004] The present invention relates generally to improving the
setup and operation of video-wall displays and particularly to
network addressable displays.
[0005] A video-wall display system overcomes the costs of
manufacturing and installing very large displays by assembling a
large display from multiple smaller displays arranged and working
together. By dividing a single image into several sub-images and
displaying the sub-images on an appropriately arranged array of
display devices, a larger display with higher resolution can be
created.
[0006] Because the plurality of display devices need to be operated
together to display a single image or canvas across a video-wall
(rather than a separate independent image for each display), the
set-up of the output displays is critical and their fine tuning can
be laborious. Informing the server of the initial positioning of
each display (so that the image segments are sent to the
appropriate displays); the precise cropping of each of the
sub-images (to allow the eye to interpret continuity of the total
image across the bezels of the displays where no image can appear);
and the adjustment of the color of the sub-segments of the image to
provide equal luminosity, color and intensity/brightness ranges
across the whole array of displays within the video-wall, are all
essential to providing the optimal viewing experience. With
conventional approaches to video-wall setup these tasks can be
laborious. This invention offers methods of automating the setup
process to improve the ease and speed of video-wall setup.
DESCRIPTION OF THE INVENTION
[0007] A video wall server splits source-video into sub-images and
distributes these sub-images to multiple listening display devices.
Built-in algorithms optimize, parse and scale the individual
video-wall segments. To accomplish this splitting efficiently it is
beneficial to create a configuration file stored in a computer
readable medium using information on the position, configuration
and settings for each individual physical display and how they
relate to the video-wall canvas. Using such a configuration file
allows the video wall server to efficiently create a seamless
canvas across the display units. This invention deals with
methods of supplying the information for the creation of such
files by means of feedback based on test-canvasses and of
sequentially changing the configuration file before redeploying a
test-canvas to further improve the overall viewer-image.
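The structure of such a configuration file is not specified in
the text; the following minimal sketch (JSON layout and field
names are hypothetical, chosen only for illustration) shows one
way the per-display mapping data described above might be stored
by the server:

```python
import json

def make_display_entry(display_id, x, y, width, height,
                       rotation_deg=0, color_gains=(1.0, 1.0, 1.0)):
    """One display's mapping record: where its sub-image sits within
    the video-wall canvas and how its color output is corrected.
    All field names are illustrative, not taken from the patent."""
    return {
        "id": display_id,
        "canvas_rect": {"x": x, "y": y, "w": width, "h": height},
        "rotation_deg": rotation_deg,
        "color_gains": {"r": color_gains[0], "g": color_gains[1],
                        "b": color_gains[2]},
    }

def write_config(path, displays):
    """Persist the mapping so the server can split any source video
    into correctly placed, corrected sub-images."""
    with open(path, "w") as f:
        json.dump({"displays": displays}, f, indent=2)

# Example: a 2x2 wall of 1920x1080 panels, row-major order.
wall = [make_display_entry("dp-%d" % i, (i % 2) * 1920,
                           (i // 2) * 1080, 1920, 1080)
        for i in range(4)]
```

Reading such a file back tells the server which crop of the
source canvas, and which corrections, belong to each display.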
[0008] Configuration of Displays: This invention provides methods
for equipping the server with a configuration file containing:
[0009]
the overall shape of the video wall; [0010] the ordering of the
sub-images within the video wall; [0011] any further rotation or
displacement of displays required to form the appropriate canvas on
the video wall; [0012] interactively fine-tuning the positioning
and bezel width of the displays to achieve perfect alignment across
display monitor bezels; [0013] adjusting the color intensity of
displays to achieve a uniform color across the video-wall. Once
this information is established it is stored in the server's
configuration files.
[0014] The methods presented to achieve, by automation, the five
types of adjustments outlined above typically involve a user
interacting with the server via a GUI containing instructions,
and a camera in communication with the server. In a typical usage
the user would have a smart-phone, tablet, laptop or similar
device, interacting with the server via the web; the user giving
permission to the server to use that camera to obtain digital
images of the canvas as displayed across the video wall, and the
server giving instructions to the user about positioning the
camera and, where required, to supply eye-based evaluation
concerning the correctness of any changes made to the displays.
[0015] The server knows (via DPMS and EDID) certain details about
each display (aspect ratio, number of pixels, etc.). Using these in
conjunction with the image captured from the camera gives a unique
ability to identify the exact positioning of the display.
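One plausible use of these DPMS/EDID-reported details, sketched
here as an assumption rather than a method the text spells out,
is to pair each display quadrilateral detected in the camera
image with the connected display whose reported aspect ratio it
most closely fits:

```python
def aspect_ratio(corners):
    """Approximate aspect ratio of a detected display quadrilateral,
    given corners as (x, y) tuples in the order TL, TR, BR, BL."""
    (x0, y0), (x1, y1), (x2, y2), _ = corners
    width = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    height = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    return width / height

def pair_by_aspect(detected_quads, edid_ratios):
    """Greedily pair detected quads with connected displays by the
    closest EDID-reported aspect ratio.
    Returns {quad_index: display_id}."""
    pairs = {}
    free = dict(edid_ratios)  # display_id -> reported ratio
    for i, quad in enumerate(detected_quads):
        r = aspect_ratio(quad)
        best = min(free, key=lambda d: abs(free[d] - r))
        pairs[i] = best
        del free[best]
    return pairs
```

In practice the unique configuration images carry the definitive
identity; aspect-ratio matching would only narrow the candidates.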
[0016] The ordering and overall shape. Once the display units have
been mounted to form the wall and connected to the server the
server will know the number of display units involved and will
analyze for shape. This can be accomplished by sending each display
a unique computer-recognizable image. This could for example be a
specialized "bar codes" designed for image recognition software
(similar to QR codes). The image should have special symbols used
to identify the exact spatial location of the corner pixels of each
display. Next a message would be sent requesting the user to point
the camera at the displays in the wall. Digital analysis of the
image in comparison to the information as displayed allows the
server to determine which displays are in the wall (some may be
displaying in a different room), to identify the geometric
placement of the displays (rectangular, linear or "artistic",
meaning an informal non-geometric setup) and the position in
which each signal sent appears in the display (which Ethernet or
other connection leads to each display position). In addition it
determines the rotation (do the images need to be rotated through
90 or 180 degrees and what rotations are needed for non-standard
setups).
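Once the corner symbols of a display have been located in the
camera image, its placement and rotation reduce to coordinate
arithmetic. A minimal sketch (assuming the four corner markers
have already been detected and are listed in the order they were
drawn into the configuration image):

```python
import math

def placement_from_corners(corners):
    """Given the detected image coordinates of one display's corner
    markers, in the order TL, TR, BR, BL as drawn into the
    configuration image, derive its center and rotation. A display
    mounted at 90, 180 or 270 degrees shows up as a rotated marker
    ordering, which the angle of the TL->TR edge reveals."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    cx = (x0 + x1 + x2 + x3) / 4.0
    cy = (y0 + y1 + y2 + y3) / 4.0
    # Angle of the top edge relative to horizontal, in degrees.
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return {"center": (cx, cy), "rotation_deg": angle}
```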
[0017] Once the digital image analysis has been completed the
server would re-adjust the canvas presented across the screens and
instruct the user to ready the camera for another image. This
correction process would continue until the server's digital
analysis was satisfied with the overall alignment; in addition it
might ask for by-eye evaluation to confirm the result.
[0018] Interactive fine tuning of placement and rotation. Generally
the canvas on the video wall will appear to be interrupted by the
bezels making up the edges of each display monitor. The fine tuning
is used to minimize the bezel effect by appropriately moving each
of the displays a few pixel widths horizontally or vertically. For
example this could be achieved by displaying a test canvas of
diagonal lines on the video wall. The digital analysis being aware
of the exact location of these lines in the canvas sent to the
displays can examine the lines on the digital image very precisely
for alignment and by calculation measure the number of pixels each
display must be moved vertically or horizontally to achieve perfect
alignment. Once these corrections have been made and a new canvas
displayed it can be checked digitally and by eye.
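The pixel-shift calculation described above might look like the
following sketch, under the assumption that the detected line
crossings in the camera image have already been mapped back into
canvas coordinates:

```python
def required_shift(expected_points, detected_points):
    """Estimate how many pixels a display must be shifted so the
    detected test-line crossings coincide with where the canvas
    says they should be. Points are (x, y) pairs in canvas pixels.
    Returns the average (dx, dy) correction for the display."""
    n = len(expected_points)
    dx = sum(e[0] - d[0]
             for e, d in zip(expected_points, detected_points)) / n
    dy = sum(e[1] - d[1]
             for e, d in zip(expected_points, detected_points)) / n
    return dx, dy
```

Averaging over several crossings damps out per-point detection
noise before the correction is written into the mapping.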
[0019] Adjusting color intensity across the canvas. In a typical
embodiment the next stage would be to check for color. The canvas
might be such that each display contains the same pattern of
rectangles each of a different color (perhaps red, blue and green)
displayed with a range of intensities. Now the analysis is of each
color intensity across all of the displays, so that any fine
distinction between the treatment of a particular color/intensity
combination can be adjusted for. Other tests of a similar nature
can be used for particular differences between displays.
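The per-color comparison across displays can be reduced to
computing per-channel correction gains against a reference
display. A minimal sketch, assuming each patch measurement has
been averaged down to an RGB triple:

```python
def channel_gains(reference_rgb, measured_rgb):
    """Compute multiplicative per-channel gains that would bring a
    display's measured patch color in line with a reference
    display's measurement of the same patch. A gain above 1 means
    that channel must be driven harder on the measured display."""
    return tuple(ref / meas if meas else 1.0
                 for ref, meas in zip(reference_rgb, measured_rgb))
```

The resulting gains can be applied either to the image data sent
to the display or, where supported, to the display's own settings.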
[0020] In an alternative and potentially complementary method of
calibration, a moving image is output to the video-wall (for
example horizontal and vertical lines moving across the
video-wall canvas); the frames are captured and communicated in
real-time by the camera, and image analysis software interprets
the captured frames to determine positioning.
[0021] In the stage-wise process, the methods outlined above are
carried out in stages, and at each stage the configuration file
being used by the server is updated based on the newly calculated
adjustments, so that the end result is a file that can be used to
promote perfect display of any video file presented to the
server. Color calibration can be achieved in two possible ways.
[0022] In one embodiment of the invention color calibration is done
by controlling monitor settings via the centralized server software
being in communication with the display settings (potentially via
an RS232 or other interface) and a uniform image canvas is output
to the display. In an alternative embodiment color adjustments are
stored in the server software and applied by the server as the
image data is output to the display itself. In the first of these
cases the display settings are permanently stored on the server in
a configuration file.
[0023] In one realization the same color is output on each of the
displays within the video-wall and after each change in the
display, an image is captured for analysis. This image analysis
detects relative differences between each display and adjusts color
output characteristics on individual displays within the
video-wall, successively adjusting hue, intensity and brightness of
the individual images so that the same high and low values for each
display are achievable by each of the individual displays within
the video-wall, making the fine adjustments necessary to the color
output characteristics and settings of each individual display.
[0024] In one embodiment of the invention the computer-recognizable
images output to each of the displays include a right-angled line
in various corners of the displays comprising the video-wall to aid
in detecting the exact placement of these corners in relation to
other display corners within the video-wall.
[0025] In another embodiment of the invention, component displays
within the video-wall provide instructions to the user on how to
connect their camera to the display (for example by providing a
URL to visit on their network-connected or Internet-connected
device).
[0026] Visual prompting and status indicators to assist during
video-wall setup. As displays are linked into a video-wall it is
helpful to the individual setting up the video-wall to receive
visual feedback from the displays themselves as screens are added
to or removed from the video-wall. In one embodiment of the
invention, visual status indicators show progress as each
display's position within the video-wall is successfully
identified and the display is "linked into" the video-wall. For
example, a line, pattern, color change, picture, or animated effect
is used to differentiate monitors which have been added or
positioned within the video-wall from those that haven't. A
different status indicator such as an image, icon, or input prompt
could be output to those displays which are being output to by the
video-wall server, but are still awaiting
placement/assignment/relative-positioning within the video-wall. In
one embodiment, once an adjacency relationship is established
between edges of displays within the video-wall, a status
indicator shows that the edges of both displays have been
successfully linked. In one embodiment, once the full video-wall
has been set up, the system will show a visual success image
indicator spanning the full video-wall.
[0027] In one embodiment of the invention, in addition to the image
data, the digital camera device also provides meta-data about the
image. Data such as: camera orientation, detected ambient light,
detected distance from the subject, focal length, shutter speed,
flash settings, camera aperture, detected camera rotation angle
relative to the horizon, and GPS location; this additional data
can be
used to increase the accuracy or speed of image analysis or provide
additional details about the video wall.
[0028] In one embodiment of the invention, a smart phone or other
mobile device with an embedded camera device is in communication
wirelessly with a video wall control server (which is in turn in
communication with the video wall displays). The video wall control
server outputs one or more optimized configuration images to the
video-wall displays. Application code executed on the mobile
device (either by the browser or by a native mobile device
application) captures image data from said camera (this could be a
still image, a stream of video data, or a sequence of still images)
and forwards this image data over a wireless connection to the
server.
[0029] An image analysis module (which could be executed on the
server or on the mobile device, or parts of the analysis could be
performed by each) processes the captured image data,
[0030] determining the identity and placement of each display
within the captured image, then subsequently assessing
differences in display placement, rotation, color, brightness,
contrast, and other attributes of the various displays present
within the captured image data. Via these comparisons the
automated image analysis module is able to determine any
adjustments required for mappings of various ones of the displays
captured in the image and subsequently translate these
adjustments into changes to the video wall configuration mapping
file(s) or data stores. In response to these changes, the updated
mapping would then be communicated by the control module to the
server; the server then updates the test images or advances to
the next test image in the sequence, repeating any failed steps
as necessary and moving to subsequent configuration tests as
successful calibration of each unique configuration image is
achieved.
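The capture, analyze and readjust cycle described in paragraphs
[0017] and [0030] can be sketched as a convergence loop; the
callables below are stand-ins (assumptions, not interfaces from
the patent) for the camera, image analysis module and control
module respectively:

```python
def calibrate(capture, analyze, apply_mapping, max_rounds=10, tol=1.0):
    """Iterative calibration: output a test canvas, capture it,
    derive corrections, apply them, and repeat until the residual
    error falls below `tol` or the round budget is exhausted."""
    for round_no in range(1, max_rounds + 1):
        image = capture()
        corrections, error = analyze(image)
        if error <= tol:
            return round_no, error  # converged
        apply_mapping(corrections)
    return max_rounds, error

# Simulated run: each applied correction halves a 1-D misalignment.
state = {"offset": 16.0}
rounds, err = calibrate(
    capture=lambda: state["offset"],
    analyze=lambda img: ({"shift": img / 2}, abs(img)),
    apply_mapping=lambda c: state.__setitem__(
        "offset", state["offset"] - c["shift"]),
)
```

Each round plays the role of one test canvas being re-displayed
and re-photographed after the configuration file is updated.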
[0031] In one embodiment the user is visiting a web-page with their
mobile device (equipped with a camera), and the server is a
web-server. That web-server is also in communication with (able
to send controlling signals to) the displays comprising the video
wall. The displays are controlled by the web-server to output the
configuration images.
[0032] With the above embodiments in mind, it should be understood
that the embodiments might employ various computer-implemented
operations involving data stored in computer systems. These
operations are those requiring physical manipulation of physical
quantities. Usually, though not necessarily, these quantities take
the form of electrical or magnetic signals capable of being stored,
transferred, combined, compared, and otherwise manipulated.
Further, the manipulations performed are often referred to in
terms, such as producing, identifying, determining, or comparing.
Any of the operations described herein that form part of the
embodiments are useful machine operations. The embodiments also
relate to a device or an apparatus for performing these operations.
The apparatus can be specially constructed for the required
purpose, or the apparatus can be a general-purpose computer
selectively activated or configured by a computer program stored in
the computer. In particular, various general-purpose machines can
be used with computer programs written in accordance with the
teachings herein, or it may be more convenient to construct a more
specialized apparatus to perform the required operations.
[0033] The embodiments can also be embodied as computer readable
code on a computer readable medium. The computer readable medium is
any data storage device that can store data, which can be
thereafter read by a computer system. Examples of the computer
readable medium include hard drives, solid state drives (SSD),
network attached storage (NAS), read-only memory, random-access
memory, optical discs (CD/DVD/Blu-ray/HD-DVD), magnetic tapes, and
other optical and non-optical data storage devices. The computer
readable medium can also be distributed over a network-coupled
computer system so that the computer readable code is stored and
executed in a distributed fashion. Embodiments described herein may
be practiced with various computer system configurations including
hand-held devices, tablets, microprocessor systems,
microprocessor-based or programmable consumer electronics,
minicomputers, mainframe computers and the like. The embodiments
can also be practiced in distributed computing environments where
tasks are performed by remote processing devices that are linked
through a wire-based or wireless network.
[0034] Although the method operations were described in a specific
order, it should be understood that other operations may be
performed in between described operations, described operations may
be adjusted so that they occur at slightly different times or the
described operations may be distributed in a system which allows
the occurrence of the processing operations at various intervals
associated with the processing.
[0035] While the system and method has been described in
conjunction with several specific embodiments, it is evident to
those skilled in the art that many further alternatives,
modifications and variations will be apparent in light of the
foregoing description. Thus, the embodiments described herein are
intended to embrace all such alternatives, modifications,
applications and variations as may fall within the spirit and scope
of the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] Embodiments will now be described more fully with reference
to the accompanying drawings in which:
[0037] FIG. 1 illustrates the basic problem to be solved.
[0038] FIG. 2 illustrates the need for precise mapping, placement
and bezel correction of displays when creating a video wall.
[0039] FIG. 3 shows a schematic diagram depicting a 2.times.2 video
wall.
[0040] FIG. 4 illustrates in detail camera use in combination with
uniquely identifying output images.
[0041] FIG. 5 shows a process flow-chart of automated video-wall
calibration.
[0042] FIG. 6 illustrates the display adjustment process.
[0043] FIG. 7 illustrates a specific embodiment of the whole
process.
[0044] FIG. 8 is the flow diagram for an image analysis and
detection module.
DETAILED DESCRIPTION OF THE DRAWINGS
[0045] FIG. 1 illustrates the basic problem. It shows a complex
video wall layout (multiple displays arranged artistically at
multiple angles). It also shows how the image output to each of
these displays has been correctly aligned and correctly color
calibrated so that the image displays correctly and evenly across
all screens regardless of their placement, rotation, spacing,
bezel, etc. The 9 displays comprising the video wall are showing a
test pattern illustrating the precise placement of the artistic
display orientations within a video-wall. Once the physical
video-wall displays have been installed, the mapping and placement
of these individual displays within the video-wall canvas (10) on
the video-wall server can be accomplished by multiple means;
however, disclosed herein is an automated method for accomplishing
this process, adaptable even to complex non-standard video wall
layouts such as this.
[0046] FIG. 2 illustrates the need for precise mapping, placement
and bezel correction of displays when creating a video wall. The
figure again shows a 9-screen video wall, this time arranged in a
3.times.3 grid configuration. The output image shown on the two
video walls depicted is an image of diagonal straight lines which
have been drawn to span the whole video-wall canvas. In both
versions the pattern is interrupted by the bezel edges of the nine
displays, which form pairs of horizontal and vertical interruption
bands. However, in the upper illustration of FIG. 2 (where the
display placement within the video wall canvas has not been
corrected/adjusted to account for bezel interruptions) it can be
seen that the lines on the video-wall canvas do not align
correctly and hence do not appear as portions of a single straight
line. This is emphasized by the overlaid dotted straight diagonal
lines (21) and (22). Notice that the lines on the non-aligned
(non-bezel corrected) canvas do not exactly follow these ruled
lines. The bottom half of the figure illustrates the view after
effective bezel adjustments have been performed: the diagonals on
the canvas line up accurately and the fine lines added at (23) and
(24) confirm this. In practice such ruled lines cannot be generated
by the video wall server itself, so alignment must be evaluated by
a viewer, either by eye or by automated image capture (a method for
which is disclosed herein).
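The bezel adjustment described above amounts to simple arithmetic: canvas pixels that would fall "behind" a bezel seam are skipped, so that lines spanning the canvas remain geometrically continuous. A minimal sketch of that calculation follows; the function names and parameters are illustrative and not taken from this disclosure.

```python
def hidden_pixels(bezel_mm: float, pixel_pitch_mm: float) -> int:
    """Canvas pixels covered by one bezel edge at a given pixel pitch."""
    return round(bezel_mm / pixel_pitch_mm)

def canvas_x_origin(col: int, display_px: int, bezel_mm: float,
                    pixel_pitch_mm: float) -> int:
    """Left edge, on the virtual canvas, of the display in grid column
    `col`, counting two bezel gaps per seam (the right bezel of the
    left neighbour plus the left bezel of this display)."""
    gap = 2 * hidden_pixels(bezel_mm, pixel_pitch_mm)
    return col * (display_px + gap)
```

For example, with 1920-pixel-wide panels, 15 mm bezels, and a 0.5 mm pixel pitch, the second column would begin at canvas pixel 1980 rather than 1920, leaving the intervening 60 pixels undrawn behind the seam.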
[0047] FIG. 3 shows a schematic diagram depicting a 2.times.2 video
wall comprising four independent displays working together to
output a video wall that has been configured to correctly
compensate for the display bezel. Also shown is a tablet computing
device with a built in camera (31) which is being operated by a
user (32) to capture an image of all four of the displays
comprising the video wall (along with their output).
[0048] FIG. 4 illustrates in more detail the method of using a
camera in combination with uniquely identifying output images along
with automated image analysis to inform the server both of the
identity of the individual displays in the video wall as well as
the precise relative position and size of the displays in the
video-wall. In this example, the video-wall server has output a
different QR code to each of the nine displays comprising the video
wall, as seen in the top half of the figure (41). Also shown is an
enlarged view of a single display with its QR code (45) and corner
markers (46), which serve as potential cues used by the system to
help spatially locate edges and corners correctly through the image
capture and analysis process. Also depicted schematically is a
smart-phone or tablet equipped with a
built in camera (42) being used to capture and relay images of the
real-world video-wall output and relay it back to the video-wall
web-server, allowing the video-wall server, through automated image
recognition and analysis of the captured rendition of the output
images, to correctly align and place the individual sub-image
segments (and displays) within the video-wall canvas to create a
mapping for use when outputting to the video wall. In one
embodiment different images are output at different stages in the
automated calibration process. For example, at the next stage the
server might output a line pattern for precise calibration and use
the image capture of a line pattern across the displays to
automatically make the fine adjustments needed to allow for bezel
corrections. Finally, a color adjustment image (e.g., uniform colors
across all displays) might be output across the video-wall, using
feedback from image analysis of the camera-captured image to assess
differences in color output across the various displays comprising the
video wall. A second video wall is depicted in the lower half of
FIG. 4 showing a non-rectangular video-wall (43) this time with a
different type of identification and calibration image
differentiating individual displays. Again the image of the
video-wall is captured by a camera, in this case depicted as a
mobile phone or tablet (44).
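For a simple rectangular grid, pairing the decoded QR payloads with physical positions can be done by sorting the detected code centres by their photo coordinates. The sketch below is a hypothetical illustration of that idea, assuming the photo is roughly fronto-parallel; a real capture would first need perspective correction, for which the corner markers (46) provide the reference points.

```python
def assign_grid(detections, rows, cols):
    """detections: list of (display_id, cx, cy) tuples, where display_id
    is the payload decoded from a QR code and (cx, cy) is the code's
    centre in photo pixels. Returns {display_id: (row, col)}."""
    by_y = sorted(detections, key=lambda d: d[2])  # group rows top-down
    grid = {}
    for r in range(rows):
        # within one row band, order the displays left to right
        row = sorted(by_y[r * cols:(r + 1) * cols], key=lambda d: d[1])
        for c, (disp_id, _, _) in enumerate(row):
            grid[disp_id] = (r, c)
    return grid
```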
[0049] FIG. 5 shows a process flow-chart of automated video-wall
calibration (50). Here (in step 51) the video-wall server is
configured to output unique images to each of the unassigned
displays within the system (some of which may be arranged into a
video-wall). The user
is then prompted (either on the displays or from within an
administrative GUI potentially accessed from a mobile device such
as a tablet, smart-phone, or laptop) to take a photo of the
video-wall (step 52). In the case that the user is using a mobile
device, the administrative web-application may request permission
to directly access the mobile device's camera. The user then takes
a photo of the video-wall and transfers this to the server for
automated analysis (step 53). The server then (using automated
image analysis of the specially output images) determines which
display is arranged where within the video-wall and also determines
the exact spacing, rotation, and placement of the displays by
recognizing the distinctive identifying images in the photo and
also by calculating the distances between each adjacent display
edge within the photo (step 54). This information is then used to
create a video-wall configuration file which is stored in a
computer readable medium (step 55). A test pattern is then output
to each of the displays comprising the video-wall (the displays
that were contained within the photo taken by the administrator in
step 52) (56), and the accuracy of the completed step is checked
and reported either by an automated process or by a human observer
(57). If the result is not satisfactory, further adjustments to
this pattern are made and a sequence of new test patterns is
analyzed (58), continuing until the total result is optimal and the
process ends (59).
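The configuration file produced in step 55 only needs to record, per display, its identity and its placement within the canvas. A minimal sketch of writing and re-loading such a file follows; the JSON layout and field names are illustrative assumptions, not specified by this disclosure.

```python
import json

def write_wall_config(path, placements):
    """placements: {display_id: {"x":.., "y":.., "w":.., "h":..,
    "rotation":..}} in canvas pixels and degrees. Serialized so the
    server can re-load the mapping (step 55)."""
    with open(path, "w") as f:
        json.dump({"displays": placements}, f, indent=2)

def read_wall_config(path):
    """Load the stored display placements back from the medium."""
    with open(path) as f:
        return json.load(f)["displays"]
```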
[0050] FIG. 6 illustrates the display adjustment process (60). The
server outputs the appropriate (as needed at this stage in the
process) unique identification images or color calibration image(s)
to all the screens in the array (61). The user captures a live
image of the screens (as requested by the server) (62). These are
transferred to the server and analyzed (63). If both the user and
server regard the results as satisfactory, this step in the
adjustment process ends (66); otherwise, the server adjusts the
settings and/or relationships between displays in the array based
on the results of automated image analysis (65) and outputs
appropriate images at (61) once more.
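The cycle of FIG. 6 (output, capture, analyze, adjust, repeat) can be sketched generically. In the sketch below, the callables stand in for the server's real output, capture, analysis, and adjustment stages; all names are illustrative assumptions.

```python
def adjustment_loop(output_images, capture, analyze, adjust, max_rounds=10):
    """Repeat the FIG. 6 cycle: output calibration images (61), capture
    them (62), analyze the captures (63), then either finish (66) or
    adjust settings (65) and go around again. Bounded by max_rounds so
    a non-converging calibration cannot loop forever."""
    result = None
    for _ in range(max_rounds):
        output_images()                 # (61)
        result = analyze(capture())     # (62)-(63)
        if result["satisfactory"]:      # (66)
            return result
        adjust(result)                  # (65)
    return result
```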
[0051] FIG. 7 is an illustration of one specific embodiment of the
whole video-wall system process of set-up and use with a networked
video wall using zero clients. The user using a browser device with
a camera authenticates as a user and authorizes the web calibration
process (70). The server first discovers and connects to the
zero-client devices over the network and builds a list of displays
and assigns a unique identity to each display (71). Next it
collects the available display resolutions and other available
settings from all the connected displays (72). Then the automated
setup process is launched beginning with a browser accessible GUI
containing instructions for the user being launched on the user's
web-browser device with an embedded camera (73). Initially the
web-server requests permission to access a camera (if permission
is not granted the system will fall back to manual calibration
methods). Once permission has been granted, the user will be
provided with instructions and real time feedback as needed
throughout the whole process to assist them to correctly capture an
image of the video wall (e.g., it may provide screen recognition
features to ensure all screens have been captured from a usable
angle). The process outputs a unique calibration image optimized
for automated image recognition to each display (e.g., QR codes
with corner markers as the initial identification image) (74).
Image(s) are then captured and sent to the web-server for automated
image analysis, which adjusts identification, positioning, and
calibration settings as required (75). The camera is controlled
(either by the user or
the web-server) to capture one or more images of the video wall of
sufficient resolution to perform the required analysis (e.g., via
html media capture or similar method). The images are transmitted
to the web-server (wirelessly) for analysis by image analysis
module (see flow-chart in FIG. 8). Further data from the camera
may also be sent (such as orientation, ambient light, GPS, etc.).
The settings file representing the positions and order of the
display units is updated as communicated by the automated analysis;
as needed throughout the whole calibration process this file is
updated, and both the user instructions and the individual displays
are updated as changes occur to it. The results of the calibration
are checked (710) and evaluated to determine whether the current
step is satisfactory in all respects (and if not, calibration
continues at (74)). If it is satisfactory, the process proceeds to
the next set of calibration images, for example line calibration
images to fine-tune display placement, or color calibration. Once
no further calibration steps remain, the canvas size and the
position of displays within the canvas are calculated and all
sub-image mapping info is written to the settings file.
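The final canvas-size calculation mentioned above is essentially a bounding-box computation over the resolved display placements. A minimal sketch, assuming axis-aligned placements already expressed in canvas pixels (handling rotated displays would require taking the rotated corner points instead):

```python
def canvas_size(placements):
    """placements: {display_id: (x, y, w, h)} in canvas pixels.
    Returns the (width, height) of the smallest canvas containing
    every display."""
    width = max(x + w for x, _, w, _ in placements.values())
    height = max(y + h for _, y, _, h in placements.values())
    return width, height
```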
[0052] The preliminary steps in the setup of the video wall system
are now complete and the system is ready to process and deliver
content to the video wall displays. It can now receive content for
display via the primary GPU processing application, output frame by
frame to the frame buffer (78), and process (e.g.,
crop/split/rotate/resize/color-convert) individual sub-image
portions based on the stored identity, placement, and calibration
settings (79). These are encoded and sent to the appropriate
devices for output to the appropriate secondary display adapters,
which in turn output the transformed image data to the
corresponding displays (710), together displaying the video wall
image across all displays. This decoding and displaying process
continues while the video-wall is in use (711), and ends when
terminated (712).
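The per-display crop in step (79) can be illustrated with a toy frame represented as a nested list of pixel values. This is only an illustrative sketch: a real implementation would operate on GPU frame buffers and also perform the rotation, scaling, and color conversion mentioned above.

```python
def crop_sub_image(frame, placement):
    """frame: 2-D list of pixel values (rows of columns) standing in
    for the rendered canvas. placement: (x, y, w, h) of one display
    within that canvas. Returns the sub-image destined for it."""
    x, y, w, h = placement
    return [row[x:x + w] for row in frame[y:y + h]]
```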
[0053] FIG. 8 is the flow diagram for an image analysis and
detection module. The process begins at (80) and starts by
receiving image(s) from camera device and the display information
appropriate to the displays that formed the canvas for the camera
(81). It analyzes the newly received image for recognized markers
and matches them to existing data (82). It performs initial checks on the
image and provides error messages to the user as required (e.g.,
does the number of displays in the captured image match the number
of detected displays in communication with the web-server? Is the
angle, clarity, and resolution of the image sufficient for
automated detection routines?) (83). Next it matches the
geometrical model of the video wall (either a pre-existing model
built at an earlier stage in the automated setup process, or one
built by combining the data retrieved from output to the displays
with the display positioning and sizing information obtained
through image analysis) (84). Next the process isolates the captured display
area of each detected display and analyzes the image data from each
display (85); and utilizing the spatial information from the
captured image automatically determines the placement of this
display (86). By comparing the captured display area of each of the
plurality of displays for perceived differences in the unique
configuration images (87), it determines the adjustments to the
mapping settings as required for the identity, position, size and
output characteristics of the plurality of displays visible within
the captured image (88). It returns the adjusted mapping settings
for the plurality of displays based on the required adjustments
determined in the previous step (89). This ends the current step in
image analysis (810) and allows a new and better canvas to be
displayed using the modifications to the updated configuration
file.
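The initial sanity checks at (83) compare what the camera saw against what the server knows about the connected displays. A hypothetical sketch of that check follows; the error messages and resolution threshold are illustrative assumptions.

```python
def check_capture(detected_ids, known_ids, image_w, image_h, min_dim=640):
    """Return a list of error messages for the user; an empty list means
    the captured image passes the initial checks at step (83)."""
    errors = []
    # does every display known to the web-server appear in the photo?
    missing = sorted(set(known_ids) - set(detected_ids))
    if missing:
        errors.append("displays not visible in photo: " + ", ".join(missing))
    # is the image large enough for the automated detection routines?
    if image_w < min_dim or image_h < min_dim:
        errors.append("image resolution too low for automated detection")
    return errors
```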
* * * * *