U.S. patent application number 12/514674 was filed with the patent office on 2010-04-22 for a system for embedding data. Invention is credited to Leonid Dorrendorf and Zeev Geyzel.

United States Patent Application 20100100971
Kind Code: A1
Geyzel; Zeev; et al.
April 22, 2010

SYSTEM FOR EMBEDDING DATA
Abstract

A method and system including receiving marking information; determining, at least in part, based on the marking information, a plurality of color element additives; and adding the plurality of color element additives to at least one color element of a video frame, wherein the at least one color element includes a color element R, a color element G, and a color element B. Related methods and systems are also described.
Inventors: Geyzel; Zeev (Alon Shvut, IL); Dorrendorf; Leonid (Adumim, IL)
Correspondence Address: LADAS & PARRY LLP, 26 WEST 61ST STREET, NEW YORK, NY 10023, US
Family ID: 39365938
Appl. No.: 12/514674
Filed: November 5, 2007
PCT Filed: November 5, 2007
PCT No.: PCT/IB07/54477
371 Date: December 10, 2009
Current U.S. Class: 726/32
Current CPC Class: G06T 1/0021 20130101; G06T 1/0028 20130101; G06T 1/0085 20130101; G06T 7/90 20170101
Class at Publication: 726/32
International Class: G06F 21/00 20060101 G06F021/00

Foreign Application Data

Date | Code | Application Number
Nov 16, 2006 | IL | 179351
Mar 26, 2007 | IL | 182201
Claims
1-96. (canceled)
97. A method comprising: at a marking information receiver,
receiving a unique device identifier comprising information
identifying a rendering device, the information identifying the
rendering device comprising marking information; at a determiner,
determining, at least in part, based on the marking information, a
plurality of color element additives; at a color element adder,
adding the plurality of color element additives to at least one
color element of a video frame, wherein the at least one color
element comprises a color element R, a color element G, and a color
element B.
98. The method according to claim 97 and wherein the marking
information comprises one of: a copyright mark; and access rights
data.
99. The method according to claim 98 and wherein the access rights
data comprise playback/copying permission.
100. The method according to claim 97 and wherein the at least one
color element comprises a Red-Green-Blue color element.
101. The method according to claim 97 and wherein the at least one
color element comprises a chrominance/luminance color element.
102. The method according to claim 101 and wherein the
chrominance/luminance color element comprises one of: a YCbCr
chrominance/luminance color element; a YPbPr chrominance/luminance
color element; a YDbDr chrominance/luminance color element; and a
xvYCC chrominance/luminance color element.
103. The method according to claim 97 and wherein the determining the plurality of color element additives comprises, at least in part: providing a variable R(t), a variable G(t), and a variable B(t), each of R(t), G(t), and B(t) denoting one of the plurality of color element additives; providing a variable A, the variable A denoting a wave amplitude; providing a variable t, the variable t denoting a frame number; providing a variable f_R, a variable f_G, and a variable f_B, the variable f_R, the variable f_G, and the variable f_B each denoting one of a plurality of values determined based, at least in part, on the marking information; providing a variable τ, the variable τ denoting a base wavelength; providing a variable φ, the variable φ denoting a wave phase; providing a constant a, denoting a base frequency; and determining:

R(t) = A·sin( (2π(f_R + a)/τ)·(t + φ_R) )

G(t) = A·sin( (2π(f_G + a)/τ)·(t + φ_G) )

B(t) = A·sin( (2π(f_B + a)/τ)·(t + φ_B) ).
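For illustration only, the additive equations of claim 103 can be transcribed into a short Python sketch. The function name and the default parameter values below are hypothetical choices within the ranges suggested by claims 106, 107, and 109; they are not part of the application.

```python
import math

def color_additives(t, bits, A=4, tau=600, a=8, phases=(0.0, 0.0, 0.0)):
    """Compute the per-frame additives R(t), G(t), B(t) of claim 103.

    t      -- frame number
    bits   -- (f_R, f_G, f_B), values derived from the marking information
    A      -- wave amplitude (claim 106: 1 <= A <= 10)
    tau    -- base wavelength in frames (claim 107: 180 <= tau <= 3000)
    a      -- base frequency constant (claim 109: 0 <= a <= 80)
    phases -- wave phases (phi_R, phi_G, phi_B), optionally random (claim 108)
    """
    f_R, f_G, f_B = bits
    phi_R, phi_G, phi_B = phases
    R = A * math.sin(2 * math.pi * (f_R + a) / tau * (t + phi_R))
    G = A * math.sin(2 * math.pi * (f_G + a) / tau * (t + phi_G))
    B = A * math.sin(2 * math.pi * (f_B + a) / tau * (t + phi_B))
    # Claim 104: each additive is rounded to an integer before being added
    # to the corresponding color element of the frame.
    return round(R), round(G), round(B)
```

Each frame t would then have these three values added to its R, G, and B color elements (per claim 112, applied to every pixel), subject to the range limits of the color representation system noted in claim 113.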
104. The method according to claim 103 and wherein R(t), G(t), and
B(t) are each rounded to an integer value.
105. The method according to claim 103 and wherein each of the values denoted by the variable f_R, the variable f_G, and the variable f_B comprises a binary value.
106. The method according to claim 103 and wherein A comprises at least one of: a value sufficiently low as to not substantially cause a change in color intensity; a value sufficiently high as to be substantially distinct upon detection, such that A comprises a value noticeable above detected background noise; a value in a range of 1-4% of total amplitude; and 1 ≤ A ≤ 10.
107. The method according to claim 103 and wherein 180 ≤ τ ≤ 3000.
108. The method according to claim 103 and wherein at least one of: φ is randomly selected; and φ is randomly selected each time a new byte of marking information is utilized as input to determine a value of the variable f_R, a value of the variable f_G, and a value of the variable f_B.
109. The method according to claim 103 and wherein a comprises a value such that: 0 ≤ a ≤ 80.
110. The method according to claim 103 and further comprising
modifying a value of R(t), G(t), and B(t) by a fractional
multiplier before the adding.
111. The method according to claim 110 and wherein the fractional
multiplier is incrementally increased until the fractional
multiplier is equal to one (1).
112. The method according to claim 103 and wherein the variable
R(t), the variable G(t), and the variable B(t) are applied to the
color element R, the color element G, and the color element B,
respectively, and the applying of variable R(t), the variable G(t),
and the variable B(t) to the color element R, the color element G,
and the color element B, respectively, comprises applying to every
pixel comprised in a video screen.
113. The method according to claim 103 and wherein the variable
R(t), the variable G(t), and the variable B(t) are applied to a
color element R, a color element G, and a color element B,
respectively, such that at least one of: the color element R, the
color element G, and the color element B do not exceed a maximum
value allowed for each said color element in a color representation
system; and the color element R, the color element G, and the color
element B do not fall below a minimum value allowed for each said
color element in the color representation system.
114. The method according to claim 113 and wherein the color
representation system comprises one of: a Red-Green-Blue color
element; and a chrominance/luminance color element.
115. A method comprising: providing a plurality of video frames; at
a segmenter, segmenting the plurality of video frames into groups
of video frames; at a marking information receiver, receiving a
unique device identifier comprising information identifying a
rendering device, the information identifying the rendering device
comprising marking information; at a determiner, determining, based
at least in part, on the marking information, a plurality of color
element additives; at a selector, selecting some of the groups of
video frames for modification; and at a color element adder, adding
the plurality of color element additives to a plurality of color
elements of a plurality of video frames comprised in the selected
groups of video frames.
116. The method according to claim 115 and wherein no groups of
video frames are selected for modification for a period of time
between 0.25 and 0.75 of a base wavelength.
117. The method according to claim 115 and wherein the marking information comprises one of: a copyright mark; and access rights data.
118. The method according to claim 117 and wherein the access
rights data comprise playback/copying permission.
119. The method according to claim 115 and wherein the at least one
color element comprises a Red-Green-Blue color element.
120. The method according to claim 115 and wherein the color
element comprises a chrominance/luminance color element.
121. The method according to claim 120 and wherein the
chrominance/luminance color element comprises one of: a YCbCr
chrominance/luminance color element; a YPbPr chrominance/luminance
color element; a YDbDr chrominance/luminance color element; and a
xvYCC chrominance/luminance color element.
122. The method according to claim 115 and wherein the determining the plurality of color element additives comprises, at least in part: providing a variable R(t), a variable G(t), and a variable B(t), each of R(t), G(t), and B(t) denoting one of the plurality of color element additives; providing a variable A, the variable A denoting a wave amplitude; providing a variable t, the variable t denoting a frame number; providing a variable f_R, a variable f_G, and a variable f_B, the variable f_R, the variable f_G, and the variable f_B each denoting one of a plurality of binary values determined based, at least in part, on the marking information; providing a variable τ, the variable τ denoting a base wavelength; providing a variable φ, the variable φ denoting a wave phase; providing a constant a, denoting a base frequency; and determining:

R(t) = A·sin( (2π(f_R + a)/τ)·(t + φ_R) )

G(t) = A·sin( (2π(f_G + a)/τ)·(t + φ_G) )

B(t) = A·sin( (2π(f_B + a)/τ)·(t + φ_B) ).
123. The method according to claim 122 and wherein R(t), G(t), and
B(t) are each rounded to an integer value.
124. The method according to claim 122 and wherein each of the values denoted by the variable f_R, the variable f_G, and the variable f_B comprises a binary value.
125. The method according to claim 122 and wherein A comprises at least one of: a value sufficiently low as to not substantially cause a change in color intensity; a value sufficiently high as to be substantially distinct upon detection, such that A comprises a value noticeable above detected background noise; a value in a range of 1-4% of total amplitude; and 1 ≤ A ≤ 10.
126. The method according to claim 122 and wherein 180 ≤ τ ≤ 3000.
127. The method according to claim 122 and wherein at least one of: φ is randomly selected; and φ is randomly selected each time a new byte of marking information is utilized as input to determine a value of the variable f_R, a value of the variable f_G, and a value of the variable f_B.
128. The method according to claim 122 and wherein a comprises a value such that: 0 ≤ a ≤ 80.
129. The method according to claim 122 and further comprising
modifying a value of R(t), G(t), and B(t) by a fractional
multiplier before the adding.
130. The method according to claim 129 and wherein the fractional
multiplier is incrementally increased until the fractional
multiplier is equal to one (1).
131. The method according to claim 122 and wherein the variable
R(t), the variable G(t), and the variable B(t) are applied to the
color element R, the color element G, and the color element B,
respectively, and the applying of variable R(t), the variable G(t),
and the variable B(t) to the color element R, the color element G,
and the color element B, respectively, comprises applying to every
pixel comprised in a video screen.
132. The method according to claim 122 and wherein the variable
R(t), the variable G(t), and the variable B(t) are applied to a
color element R, a color element G, and a color element B,
respectively, such that at least one of: the color element R, the
color element G, and the color element B do not exceed a maximum
value allowed for each said color element in a color representation
system; and the color element R, the color element G, and the color
element B do not fall below a minimum value allowed for each said
color element in a color representation system.
133. The method according to claim 132 and wherein the color
representation system comprises one of: a Red-Green-Blue color
element; and a chrominance/luminance color element.
134. A method comprising: capturing a video stream; at a segmenter,
segmenting the video stream into a plurality of video segments; at
a splitter, splitting each segment of the plurality of video
segments into a plurality of video frames comprised therein; for
each one of the plurality of video segments: at a first determiner,
determining a color mass for every individual video frame of the
plurality of video frames by summing color value coordinates
comprised in the individual video frame; at an aggregator,
aggregating a result of the determining into three series of color
value coordinates for every individual video segment over the
plurality of video segments, each one of the three series of color
value coordinates corresponding to a distinct color element; for
each one of the three series of color value coordinates: at a
discrete Fourier transform applier, applying a discrete Fourier
transform to the series of color value coordinates; at a second
determiner, determining, as a result of the discrete Fourier
transform, an intensity of a plurality of frequencies; at a third
determiner, determining a peak frequency from among the plurality
of frequencies; and at a marking information determiner,
determining at least a portion of marking information as a result
of the determining the peak frequency; and at a combiner, combining
the determined at least a portion of marking information for each
one of the plurality of video segments, thereby determining the
marking information.
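As an informal illustration, the detection steps of claim 134 might be sketched in Python as follows. The helper names and the candidate-frequency list are hypothetical; the claim does not prescribe any particular implementation, and the intensity formula used is the one recited in claim 143.

```python
import math

def color_mass(frame):
    """Claim 134: sum the color value coordinates over one frame.

    `frame` is a sequence of (R, G, B) pixel tuples; the result is a
    single (R, G, B) triple of sums -- the frame's "color mass".
    """
    r = sum(p[0] for p in frame)
    g = sum(p[1] for p in frame)
    b = sum(p[2] for p in frame)
    return r, g, b

def peak_frequency(series, candidates):
    """Find the peak frequency in one series of color masses.

    Computes, for each candidate frequency w, the intensity
    A(w) = 2*sqrt(C(w)^2 + S(w)^2) of claim 143, and returns the
    candidate with the highest intensity.
    """
    L = len(series)
    best_w, best_A = None, -1.0
    for w in candidates:
        C = sum(series[t] * math.cos(w * t) for t in range(L))
        S = sum(series[t] * math.sin(w * t) for t in range(L))
        A = 2 * math.sqrt(C * C + S * S)
        if A > best_A:
            best_w, best_A = w, A
    return best_w
```

The per-frame color masses of one segment form three series, one per color element; the peak frequency found in each series yields a portion of the marking information, and the portions recovered from all segments are then combined.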
135. The method according to claim 134 and wherein the capturing a video stream comprises at least one of: capturing the video stream from a streaming content network; downloading the video stream from a peer-to-peer file content sharing network; and capturing the video stream from an illegal content distribution scheme.
136. The method according to claim 135 and wherein the illegal content distribution scheme comprises at least one of: an online illegal content distribution scheme; an offline illegal content distribution scheme; and retail sale of pirated DVDs.
137. The method according to claim 134 and wherein each video
segment of the plurality of video segments comprises an at least
partially overlapping video segment.
138. The method according to claim 134 and wherein each video
segment of the plurality of video segments is of a length
approximately equal to 1.5 times a base wavelength.
139. The method according to claim 138 and wherein the base wavelength, denoted τ, is positive.
140. The method according to claim 134 and wherein the color value
coordinates are comprised in a Red-Green-Blue color value
coordinate system.
141. The method according to claim 134 and wherein the color value
coordinates are comprised in a chrominance/luminance color value
coordinate system.
142. The method according to claim 141 and wherein the chrominance/luminance color value coordinate system comprises one of: a YCbCr chrominance/luminance color value coordinate system; a YPbPr chrominance/luminance color value coordinate system; a YDbDr chrominance/luminance color value coordinate system; and an xvYCC chrominance/luminance color value coordinate system.
143. The method according to claim 134 and also comprising: providing a variable R, a variable G, and a variable B, the variable R, the variable G, and the variable B each respectively denoting a color value coordinate; providing a variable R'(t), a variable G'(t), and a variable B'(t), the variable R'(t), the variable G'(t), and the variable B'(t) each denoting a series of sums of color value components in a plurality of frames; providing a variable t, the variable t denoting a frame number; providing a variable ω, the variable ω denoting a frequency; providing a variable L, the variable L denoting a length, in frames, of a video segment presently undergoing analysis; providing a variable C and a variable S, the variable C denoting a cosine portion of the discrete Fourier transform, and the variable S denoting a sine portion of the discrete Fourier transform; providing a variable A, the variable A denoting an intensity of frequency ω in the video segment presently undergoing analysis; determining

A(ω) = 2·√( C(ω)² + S(ω)² ) for

C(ω) = Σ_{t=0}^{L-1} R'(t)·cos(ωt)

S(ω) = Σ_{t=0}^{L-1} R'(t)·sin(ωt);

determining

A(ω) = 2·√( C(ω)² + S(ω)² ) for

C(ω) = Σ_{t=0}^{L-1} G'(t)·cos(ωt)

S(ω) = Σ_{t=0}^{L-1} G'(t)·sin(ωt); and

determining

A(ω) = 2·√( C(ω)² + S(ω)² ) for

C(ω) = Σ_{t=0}^{L-1} B'(t)·cos(ωt)

S(ω) = Σ_{t=0}^{L-1} B'(t)·sin(ωt).
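The intensity formula of claim 143 can be checked numerically on an idealized, noise-free series (the values below are illustrative only, not taken from the application): for R'(t) = M·sin(ωt) over L frames spanning whole periods, C(ω) ≈ 0 and S(ω) ≈ M·L/2, so A(ω) ≈ M·L; that is, the recovered intensity scales with both the embedded amplitude and the segment length.

```python
import math

L = 600                    # segment length in frames (illustrative)
M = 4.0                    # embedded wave amplitude (illustrative)
w = 2 * math.pi * 9 / 600  # frequency under test: 9 whole cycles over L frames

# An ideal marked series of color masses, with no underlying video signal:
series = [M * math.sin(w * t) for t in range(L)]

# Cosine and sine portions of the discrete Fourier transform (claim 143):
C = sum(series[t] * math.cos(w * t) for t in range(L))
S = sum(series[t] * math.sin(w * t) for t in range(L))
A = 2 * math.sqrt(C * C + S * S)
# For a pure sinusoid, C ~ 0 and S ~ M*L/2, so A ~ M*L.
```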
144. The method according to claim 143 and wherein ω comprises a value such that: ω ≥ 0.
145. A system comprising: a marking information receiver; a
determiner, which determines, at least in part, based on the
received marking information, a plurality of color element
additives; a color element adder, which adds the plurality of color
element additives to at least one color element of a video frame,
wherein the at least one color element comprises a color element R,
a color element G, and a color element B.
146. A system comprising: a plurality of video frames; a segmenter,
which segments the plurality of video frames into groups of video
frames; a marking information receiver; a determiner, which
determines, at least in part, based on the received marking
information, a plurality of color element additives; a selector,
which selects some of the groups of video frames for modification;
and a color element adder, which adds a plurality of color element
additives to a plurality of color elements of a plurality of video
frames comprised in the selected groups of video frames.
147. A system comprising: a captured video stream; a segmenter,
which segments the video stream into a plurality of video segments;
a splitter, which splits each segment of the plurality of video
segments into a plurality of video frames comprised therein; a
first determiner, which determines for each one of the plurality of
video segments, a color mass for every individual video frame of
the plurality of video frames by summing color value coordinates
comprised in the individual video frame; an aggregator which
aggregates results of the first determiner into three series of
color value coordinates for every individual video segment over the
plurality of video segments, each one of the three series of color
value coordinates corresponding to a distinct color element; a
discrete Fourier transform applier, which applies a discrete
Fourier transform to each one of the three series of color value
coordinates; a second determiner, which determines, as a result of
the discrete Fourier transform, an intensity of a plurality of
frequencies for each one of the three series of color value
coordinates; a third determiner, which determines a peak frequency
from among the plurality of frequencies for each one of the three
series of color value coordinates; a marking information
determiner, which determines at least a portion of marking
information as a result of the determining the peak frequency for
each one of the three series of color value coordinates; and a
combiner which combines the determined at least a portion of
marking information for each one of the plurality of video
segments, thereby determining the marking information.
148. A system comprising: a marking information receiving means;
determining means operative to determine, at least in part, based
on the received marking information, a plurality of color element
additives; color element adding means, operative to add the
plurality of color element additives to at least one color element
of a video frame, wherein the at least one color element comprises
a color element R, a color element G, and a color element B.
149. A system comprising: a plurality of video frames; segmenting
means, operative to segment the plurality of video frames into
groups of video frames; marking information receiving means;
determining means, operative to determine, at least in part, based
on the received marking information, a plurality of color element
additives; selecting means, operative to select some of the groups
of video frames for modification; and color element adding means,
operative to add a plurality of color element additives to a
plurality of color elements of a plurality of video frames
comprised in the selected groups of video frames.
150. A system comprising: a captured video stream; segmenting
means, operative to segment the video stream into a plurality of
video segments; splitting means, operative to split each segment of
the plurality of video segments into a plurality of video frames
comprised therein; first determining means, operative to determine
for each one of the plurality of video segments, a color mass for
every individual video frame of the plurality of video frames by
summing color value coordinates comprised in the individual video
frame; aggregating means operative to aggregate results of the first determining means into three series of color value coordinates for
every individual video segment over the plurality of video
segments, each one of the three series of color value coordinates
corresponding to a distinct color element; a discrete Fourier
transform applying means, operative to apply a discrete Fourier
transform to each one of the three series of color value
coordinates; second determining means, operative to determine, as a
result of the discrete Fourier transform, an intensity of a
plurality of frequencies for each one of the three series of color
value coordinates; third determining means, operative to determine
a peak frequency from among the plurality of frequencies for each
one of the three series of color value coordinates; marking
information determining means, operative to determine at least a
portion of marking information as a result of the determining the
peak frequency for each one of the three series of color value
coordinates; and combining means operative to combine the
determined at least a portion of marking information for each one
of the plurality of video segments, thereby determining the marking
information.
Description
FIELD OF THE INVENTION
[0001] The present invention relates to data embedding systems, and
particularly to data embedding systems using unique identification
as input.
BACKGROUND OF THE INVENTION
[0002] With recent advances in Internet content distribution, including peer-to-peer networks and real-time video streaming systems, it has become important to embed data in video in order to trace the point of distribution and thereby prevent unauthorized distribution of content. The point of distribution is often an authorized viewer, such as a cinema where pirated copies are made with camcorders, or a set-top-box TV decoder whose output is captured and re-encoded into a video file. After the source is traced, measures can be taken to prevent further unauthorized distribution.
[0003] Embedding signals in video is a rich field both in academic
research and commercial inventions. Covert watermarking in the
compressed (MPEG) domain is well known in the art, as are overt
watermarks that appear as bitmaps on top of the video, and
steganographic watermarks.
[0004] Digital Watermarking of Visual Data: State of the Art and New Trends, by M. Barni, F. Bartolini and A. Piva, in Signal Processing X: Theories and Applications, EUSIPCO 2000: 10th European Signal Processing Conference (Tampere, Finland, 4-8 Sep. 2000), briefly reviews the state of the art in
digital watermarking of visual data. A communication perspective is
adopted to identify the main issues in digital watermarking and to
present the most common solutions adopted by the research
community. The authors first consider the various approaches to
watermark embedding and hiding. The communication channel is then
taken into account, and the main research trends in attack modeling
are overviewed. Particular attention is paid to watermark recovery
due to the impact it has on the final reliability of the whole
watermarking system.
[0005] Multichannel Watermarking of Color Images, by M. Barni, F.
Bartolini and A. Piva., published in IEEE Transactions on Circuits
and Systems for Video Technology, Vol. 12, No. 3, March 2002,
describes that in the field of image watermarking, research has
been mainly focused on grayscale image watermarking, whereas the
extension to the color case is usually accomplished by marking the
image luminance, or by processing each color channel separately. In
this paper, a DCT domain watermarking technique expressly designed
to exploit the peculiarities of color images is presented. The
watermark is hidden within the data by modifying a subset of
full-frame DCT coefficients of each color channel. Detection is
based on a global correlation measure which is computed by taking
into account the information conveyed by the three color channels
as well as their interdependency. To ultimately decide whether or
not the image contains the watermark, the correlation value is
compared to a threshold. With respect to existing grayscale
algorithms, a new approach to threshold selection is proposed,
which permits reducing the probability of missed detection to a
minimum, while ensuring a given false detection probability.
Experimental results, as well as theoretical analysis, are
presented to demonstrate the validity of the new approach with
respect to algorithms operating on image luminance only.
[0006] Digital Watermarking for 3D Polygons using Multiresolution
Wavelet Decomposition, by Satoshi Kanai, Hiroaki Date, and Takeshi
Kishinami, available on the World Wide Web at
citeseer.ist.psu.edu/504450.html, describes that much interest has recently been taken in methods to protect the copyright of digital data and to prevent its illegal duplication. However, in the area of CAD/CAM and CG, there are no effective ways to protect the copyright of 3D geometric models. As a first step toward solving this
problem, a new digital watermarking method for 3D polygonal models
is introduced in this paper. Watermarking is one of the copyright
protection methods where an invisible watermark is secretly
embedded into the original data. The proposed watermarking method
is based on wavelet transform (WT) and multiresolution
representation (MRR) of the polygonal model. The watermark can be
embedded in the large wavelet coefficient vectors at various
resolution levels of the MRR. This makes the embedded watermark imperceptible and invariant to affine transformation, and also makes control of the geometric error caused by the watermarking reliable. First, the requirements and features of the proposed watermarking method are discussed. Second, the mathematical formulations of WT and MRR of the polygonal model are shown. Third, the algorithm for embedding and extracting the watermark is proposed. Finally, the effectiveness of the proposed watermarking method is shown through several simulation results.
[0007] U.S. Pat. No. 7,068,809 of Stach describes a method wherein
segmentation techniques are used in methods for embedding and
detecting digital watermarks in multimedia signals, such as images,
video and audio. A digital watermark embedder segments a media
signal into arbitrary shaped regions based on a signal
characteristic, such as a similarity measure, texture measure,
shape measure or luminance or other color value extrema measure.
The attributes of these regions are then used to adapt an auxiliary
signal such that it is more effectively hidden in the media signal.
In one example implementation, the segmentation process takes
advantage of a human perceptibility model to group samples of a
media signal into contiguous regions based on their similarities.
Attributes of the region, such as its frequency characteristics,
are then adapted to the frequency characteristics of a desired
watermark signal. One embedding method adjusts a feature of the
region to embed elements of an auxiliary signal, such as an error
correction encoded message signal. The detecting method re-computes
the segmentation, calculates the same features, and maps the
feature values to symbols to reconstruct an estimate of the
auxiliary signal. The auxiliary signal is then demodulated or
decoded to recover the message using error correction
decoding/demodulation operations.
[0008] U.S. Pat. No. 6,950,532 of Schumann et al. describes a visual copyright protection system including input content, a disruption processor, and output content. The disruption processor inserts disruptive content into the input content, creating output content that impedes the ability of optical recording devices to make useful copies of the output content.
[0009] The following references are also believed to reflect the
present state of the art:
[0010] U.S. Pat. No. 6,760,463 to Rhoads;
[0011] U.S. Pat. No. 6,721,440 to Reed et al.; and
[0012] WO 02/07362 of Digimarc Corp.
[0013] The disclosures of all references mentioned above and
throughout the present specification, as well as the disclosures of
all references mentioned in those references, are hereby
incorporated herein by reference.
SUMMARY OF THE INVENTION
[0014] The present invention seeks to provide an improved video
watermarking system.
[0015] There is thus provided in accordance with a preferred
embodiment of the present invention a method including receiving
marking information, determining, at least in part, based on the
marking information, a plurality of color element additives, adding
the plurality of color element additives to at least one color
element of a video frame, wherein the at least one color element
includes a color element R, a color element G, and a color element
B.
[0016] Further in accordance with a preferred embodiment of the
present invention the marking information includes information
identifying a rendering device.
[0017] Still further in accordance with a preferred embodiment of
the present invention the information identifying a rendering
device includes a unique device identifier.
[0018] Additionally in accordance with a preferred embodiment of
the present invention the marking information includes a copyright
mark.
[0019] Moreover in accordance with a preferred embodiment of the
present invention marking information includes access rights
data.
[0020] Further in accordance with a preferred embodiment of the
present invention the access rights data include playback/copying
permission.
[0021] Still further in accordance with a preferred embodiment of
the present invention the at least one color element includes a
Red-Green-Blue color element.
[0022] Additionally in accordance with a preferred embodiment of
the present invention the at least one color element includes a
chrominance/luminance color element.
[0023] Moreover in accordance with a preferred embodiment of the
present invention the chrominance/luminance color element includes
a YCbCr chrominance/luminance color element.
[0024] Further in accordance with a preferred embodiment of the
present invention the chrominance/luminance color element includes
a YPbPr chrominance/luminance color element.
[0025] Still further in accordance with a preferred embodiment of
the present invention the chrominance/luminance color element
includes a YDbDr chrominance/luminance color element.
[0026] Additionally in accordance with a preferred embodiment of
the present invention the chrominance/luminance color element
includes a xvYCC chrominance/luminance color element.
[0027] Further in accordance with a preferred embodiment of the present invention the determining the plurality of color element additives includes, at least in part, providing a variable R(t), a variable G(t), and a variable B(t), each of R(t), G(t), and B(t) denoting one of the plurality of color element additives, providing a variable A, the variable A denoting a wave amplitude, providing a variable t, the variable t denoting a frame number, providing a variable f_R, a variable f_G, and a variable f_B, the variable f_R, the variable f_G, and the variable f_B each denoting one of a plurality of values determined based, at least in part, on the marking information, providing a variable τ, the variable τ denoting a base wavelength, providing a variable φ, the variable φ denoting a wave phase, providing a constant a, denoting a base frequency, and determining

R(t) = A * sin( (2π(f_R + a)/τ) * (t + φ_R) )
G(t) = A * sin( (2π(f_G + a)/τ) * (t + φ_G) )
B(t) = A * sin( (2π(f_B + a)/τ) * (t + φ_B) ).
[0028] Still further in accordance with a preferred embodiment of
the present invention R(t), G(t), and B(t) are each rounded to an
integer value.
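As an illustration only, the determination of paragraphs [0027]-[0028] may be sketched in Python. The function name and the default values chosen here for A, τ, φ, and a are illustrative assumptions drawn from the ranges given below, not values fixed by the present invention:

```python
import math

def color_additives(t, f_rgb, A=4, tau=600, phi=(0.0, 0.0, 0.0), a=0):
    """Per-frame additives (R(t), G(t), B(t)) for frame number t.

    f_rgb - (f_R, f_G, f_B), values derived from the marking information
    A     - wave amplitude (e.g. 1 <= A <= 10)
    tau   - base wavelength in frames (e.g. 180 <= tau <= 3000)
    phi   - wave phases (phi_R, phi_G, phi_B)
    a     - base frequency constant (e.g. 0 <= a <= 80)
    Each additive is rounded to an integer value, per paragraph [0028].
    """
    return tuple(
        round(A * math.sin(2 * math.pi * (f + a) / tau * (t + ph)))
        for f, ph in zip(f_rgb, phi)
    )
```

Because |sin| ≤ 1, each additive is bounded by the amplitude A, keeping the modification small relative to the 0-255 range of an 8-bit color element.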
[0029] Additionally in accordance with a preferred embodiment of the present invention each of the values denoted by the variable f_R, the variable f_G, and the variable f_B includes a binary value.
[0030] Moreover in accordance with a preferred embodiment of the
present invention A includes a value sufficiently low as to not
substantially cause a change in color intensity.
[0031] Further in accordance with a preferred embodiment of the
present invention A includes a value sufficiently high as to be
substantially distinct upon detection, such that A includes a value
noticeable above detected background noise.
[0032] Still further in accordance with a preferred embodiment of
the present invention A includes a value in a range of 1-4% of
total amplitude.
[0033] Additionally in accordance with a preferred embodiment of the present invention, 1 ≤ A ≤ 10.
[0034] Moreover in accordance with a preferred embodiment of the present invention, 180 ≤ τ ≤ 3000.
[0035] Further in accordance with a preferred embodiment of the present invention φ is randomly selected.
[0036] Still further in accordance with a preferred embodiment of the present invention φ is randomly selected each time a new byte of marking information is utilized as input to determine a value of the variable f_R, a value of the variable f_G, and a value of the variable f_B.
[0037] Additionally in accordance with a preferred embodiment of the present invention a includes a value such that 0 ≤ a ≤ 80.
[0038] Moreover in accordance with a preferred embodiment of the
present invention the method further includes modifying a value of
R(t), G(t), and B(t) by a fractional multiplier before the
adding.
[0039] Further in accordance with a preferred embodiment of the
present invention the fractional multiplier is incrementally
increased until the fractional multiplier is equal to one (1).
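The fractional multiplier of paragraphs [0038]-[0039] may, for example, be read as a fade-in ramp applied to successive per-frame additives. The sketch below is a non-limiting illustration, and the step size is an arbitrary assumption:

```python
def fade_in(additives, step=0.1):
    """Scale successive per-frame additives by a fractional multiplier
    that is incrementally increased until it is equal to one (1)."""
    multiplier, out = 0.0, []
    for value in additives:
        multiplier = min(1.0, multiplier + step)
        out.append(value * multiplier)
    return out
```

Ramping the multiplier avoids introducing the full-amplitude wave abruptly at the start of the marked sequence.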
[0040] Still further in accordance with a preferred embodiment of
the present invention the variable R(t), the variable G(t), and the
variable B(t) are applied to the color element R, the color element
G, and the color element B, respectively, and the applying of
variable R(t), the variable G(t), and the variable B(t) to the
color element R, the color element G, and the color element B,
respectively, includes applying to every pixel included in a video
screen.
[0041] Additionally in accordance with a preferred embodiment of
the present invention the variable R(t), the variable G(t), and the
variable B(t) are applied to a color element R, a color element G,
and a color element B, respectively, such that the color element R, the color element G, and the color element B do not exceed a maximum value allowed for each color element in a color representation system.
[0042] Moreover in accordance with a preferred embodiment of the
present invention the color representation system includes a
Red-Green-Blue color element.
[0043] Further in accordance with a preferred embodiment of the
present invention the color representation system includes a
chrominance/luminance color element.
[0044] Still further in accordance with a preferred embodiment of
the present invention the variable R(t), the variable G(t), and the
variable B(t) are applied to a color element R, a color element G,
and a color element B, respectively, such that the color element R, the color element G, and the color element B do not fall below a minimum value allowed for each color element in a color representation system.
[0045] Additionally in accordance with a preferred embodiment of
the present invention the color representation system includes a
Red-Green-Blue color element.
[0046] Moreover in accordance with a preferred embodiment of the
present invention the color representation system includes a
chrominance/luminance color element.
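Paragraphs [0041]-[0046] require that a modified color element neither exceed the maximum nor fall below the minimum value allowed in the color representation system. A minimal sketch, assuming 8-bit color elements in the range 0-255:

```python
def add_clamped(element, additive, lo=0, hi=255):
    """Add an additive to a color element without exceeding the maximum
    or falling below the minimum value allowed in the color
    representation system (assumed here to be 8-bit, 0-255)."""
    return max(lo, min(hi, element + additive))
```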
[0047] There is also provided in accordance with another preferred
embodiment of the present invention a method including providing a
plurality of video frames, segmenting the plurality of video frames
into groups of video frames, receiving marking information,
determining, based at least in part on the marking information, a
plurality of color element additives, selecting some of the groups
of video frames for modification, and adding the plurality of color
element additives to a plurality of color elements of a plurality
of video frames included in the selected groups of video
frames.
[0048] Further in accordance with a preferred embodiment of the
present invention no groups of video frames are selected for
modification for a period of time between 0.25 and 0.75 of a base
wavelength.
[0049] Still further in accordance with a preferred embodiment of
the present invention the marking information includes information
identifying a rendering device.
[0050] Additionally in accordance with a preferred embodiment of
the present invention the information identifying a rendering
device includes a unique device identifier.
[0051] Moreover in accordance with a preferred embodiment of the
present invention the marking information includes a copyright
mark.
[0052] Further in accordance with a preferred embodiment of the
present invention the marking information includes access rights
data.
[0053] Still further in accordance with a preferred embodiment of
the present invention the access rights data include
playback/copying permission.
[0054] Additionally in accordance with a preferred embodiment of
the present invention the color element includes a Red-Green-Blue
color element.
[0055] Moreover in accordance with a preferred embodiment of the
present invention the color element includes a
chrominance/luminance color element.
[0056] Further in accordance with a preferred embodiment of the
present invention the chrominance/luminance color element includes
a YCbCr chrominance/luminance color element.
[0057] Still further in accordance with a preferred embodiment of
the present invention the chrominance/luminance color element
includes a YPbPr chrominance/luminance color element.
[0058] Additionally in accordance with a preferred embodiment of
the present invention the chrominance/luminance color element
includes a YDbDr chrominance/luminance color element.
[0059] Moreover in accordance with a preferred embodiment of the present invention the chrominance/luminance color element includes an xvYCC chrominance/luminance color element.
[0060] Further in accordance with a preferred embodiment of the present invention the determining the plurality of color element additives includes, at least in part, providing a variable R(t), a variable G(t), and a variable B(t), each of R(t), G(t), and B(t) denoting one of the plurality of color element additives, providing a variable A, the variable A denoting a wave amplitude, providing a variable t, the variable t denoting a frame number, providing a variable f_R, a variable f_G, and a variable f_B, the variable f_R, the variable f_G, and the variable f_B each denoting one of a plurality of binary values determined based, at least in part, on the marking information, providing a variable τ, the variable τ denoting a base wavelength, providing a variable φ, the variable φ denoting a wave phase, providing a constant a, denoting a base frequency, and determining

R(t) = A * sin( (2π(f_R + a)/τ) * (t + φ_R) )
G(t) = A * sin( (2π(f_G + a)/τ) * (t + φ_G) )
B(t) = A * sin( (2π(f_B + a)/τ) * (t + φ_B) ).
[0061] Still further in accordance with a preferred embodiment of
the present invention R(t), G(t), and B(t) are each rounded to an
integer value.
[0062] Additionally in accordance with a preferred embodiment of the present invention each of the values denoted by the variable f_R, the variable f_G, and the variable f_B includes a binary value.
[0063] Moreover in accordance with a preferred embodiment of the
present invention A includes a value sufficiently low as to not
substantially cause a change in color intensity.
[0064] Further in accordance with a preferred embodiment of the
present invention A includes a value sufficiently high as to be
substantially distinct upon detection, such that A includes a value
noticeable above detected background noise.
[0065] Still further in accordance with a preferred embodiment of
the present invention A includes a value in a range of 1-4% of
total amplitude.
[0066] Additionally in accordance with a preferred embodiment of the present invention, 1 ≤ A ≤ 10.
[0067] Moreover in accordance with a preferred embodiment of the present invention, 180 ≤ τ ≤ 3000.
[0068] Further in accordance with a preferred embodiment of the present invention φ is randomly selected.
[0069] Still further in accordance with a preferred embodiment of the present invention φ is randomly selected each time a new byte of marking information is utilized as input to determine a value of the variable f_R, a value of the variable f_G, and a value of the variable f_B.
[0070] Additionally in accordance with a preferred embodiment of the present invention a includes a value such that 0 ≤ a ≤ 80.
[0071] Moreover in accordance with a preferred embodiment of the
present invention the method further includes modifying a value of
R(t), G(t), and B(t) by a fractional multiplier before the
adding.
[0072] Further in accordance with a preferred embodiment of the present invention the fractional multiplier is incrementally increased until the fractional multiplier is equal to one (1).
[0073] Still further in accordance with a preferred embodiment of
the present invention the variable R(t), the variable G(t), and the
variable B(t) are applied to the color element R, the color element
G, and the color element B, respectively, and the applying of
variable R(t), the variable G(t), and the variable B(t) to the
color element R, the color element G, and the color element B,
respectively, includes applying to every pixel included in a video
screen.
[0074] Additionally in accordance with a preferred embodiment of
the present invention the variable R(t), the variable G(t), and the
variable B(t) are applied to a color element R, a color element G,
and a color element B, respectively, such that the color element R, the color element G, and the color element B do not exceed a maximum value allowed for each color element in a color representation system.
[0075] Moreover in accordance with a preferred embodiment of the
present invention the color representation system includes a
Red-Green-Blue color element.
[0076] Further in accordance with a preferred embodiment of the
present invention the color representation system includes a
chrominance/luminance color element.
[0077] Still further in accordance with a preferred embodiment of
the present invention the variable R(t), the variable G(t), and the
variable B(t) are applied to a color element R, a color element G,
and a color element B, respectively, such that the color element R, the color element G, and the color element B do not fall below a minimum value allowed for each color element in a color representation system.
[0078] Additionally in accordance with a preferred embodiment of
the present invention the color representation system includes a
Red-Green-Blue color element.
[0079] Moreover in accordance with a preferred embodiment of the
present invention the color representation system includes a
chrominance/luminance color element.
[0080] There is also provided in accordance with still another
preferred embodiment of the present invention a method including
capturing a video stream, segmenting the video stream into a
plurality of video segments, splitting each segment of the
plurality of video segments into a plurality of video frames
included therein, for each one of the plurality of video segments
determining a color mass for every individual video frame of the
plurality of video frames by summing color value coordinates
included in the individual video frame, aggregating a result of the
determining into three series of color value coordinates for every
individual video segment over the plurality of video segments, each
one of the three series of color value coordinates corresponding to
a distinct color element, for each one of the three series of color
value coordinates applying a discrete Fourier transform to the
series of color value coordinates, determining, as a result of the
discrete Fourier transform, an intensity of a plurality of
frequencies, determining a peak frequency from among the plurality
of frequencies, and determining at least a portion of marking
information as a result of the determining the peak frequency, and
combining the determined at least a portion of marking information
for each one of the plurality of video segments, thereby
determining the marking information.
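The color-mass and segmentation steps of the method of paragraph [0080] may be sketched as follows. The frame representation (a list of (R, G, B) pixel triples) and the segment step parameter are illustrative assumptions; a step smaller than the segment length yields the partially overlapping segments of paragraph [0087]:

```python
def color_mass(frame):
    """Color mass of one frame: the sum of each color value coordinate.
    A frame is modeled here as a list of (R, G, B) pixel triples."""
    return tuple(sum(pixel[c] for pixel in frame) for c in range(3))

def segment_series(frames, seg_len, step=None):
    """Segment a stream of frames and aggregate, for every segment,
    three series of color masses, one series per color element."""
    step = step or seg_len  # step < seg_len => overlapping segments
    segments = []
    for start in range(0, len(frames) - seg_len + 1, step):
        masses = [color_mass(f) for f in frames[start:start + seg_len]]
        segments.append(tuple([m[c] for m in masses] for c in range(3)))
    return segments
```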
[0081] Further in accordance with a preferred embodiment of the
present invention the capturing a video stream includes capturing a
video stream from a streaming content network.
[0082] Still further in accordance with a preferred embodiment of
the present invention the capturing a video stream includes
downloading the video stream from a peer-to-peer file content
sharing network.
[0083] Additionally in accordance with a preferred embodiment of
the present invention the capturing a video stream includes
capturing the stream from an illegal content distribution
scheme.
[0084] Moreover in accordance with a preferred embodiment of the
present invention the illegal content distribution scheme includes
an online illegal content distribution scheme.
[0085] Further in accordance with a preferred embodiment of the
present invention the illegal content distribution scheme includes
an offline illegal content distribution scheme.
[0086] Still further in accordance with a preferred embodiment of
the present invention the illegal content distribution scheme
includes retail sale of pirated DVDs.
[0087] Additionally in accordance with a preferred embodiment of
the present invention each video segment of the plurality of video
segments includes an at least partially overlapping video
segment.
[0088] Moreover in accordance with a preferred embodiment of the
present invention each video segment of the plurality of video
segments is of a length approximately equal to 1.5 times a base
wavelength.
[0089] Further in accordance with a preferred embodiment of the present invention the base wavelength, denoted τ, is positive.
[0090] Still further in accordance with a preferred embodiment of
the present invention the color value coordinates are included in a
Red-Green-Blue color value coordinate system.
[0091] Additionally in accordance with a preferred embodiment of
the present invention the color value coordinates are included in a
chrominance/luminance color value coordinate system.
[0092] Moreover in accordance with a preferred embodiment of the
present invention the chrominance/luminance color value coordinate
system includes a YCbCr chrominance/luminance color value
coordinate system.
[0093] Further in accordance with a preferred embodiment of the
present invention the chrominance/luminance color value coordinate
system includes a YPbPr chrominance/luminance color value
coordinate system.
[0094] Still further in accordance with a preferred embodiment of
the present invention the chrominance/luminance color value
coordinate system includes a YDbDr chrominance/luminance color
value coordinate system.
[0095] Additionally in accordance with a preferred embodiment of the present invention the chrominance/luminance color value coordinate system includes an xvYCC chrominance/luminance color value coordinate system.
[0096] Moreover in accordance with a preferred embodiment of the present invention the method also includes providing a variable R, a variable G, and a variable B, the variable R, the variable G, and the variable B each respectively denoting a color value coordinate, providing a variable R′(t), a variable G′(t), and a variable B′(t), the variable R′(t), the variable G′(t), and the variable B′(t) each denoting a series of sums of color value components in a plurality of frames, providing a variable t, the variable t denoting a frame number, providing a variable ω, the variable ω denoting a frequency, providing a variable L, the variable L denoting a length, in frames, of a video segment presently undergoing analysis, providing a variable C and a variable S, the variable C denoting a cosine portion of the discrete Fourier transform, and the variable S denoting a sine portion of the discrete Fourier transform, providing a variable A, the variable A denoting an intensity of frequency ω in the video segment presently undergoing analysis, determining

A(ω) = 2·√( C(ω)² + S(ω)² ) for
C(ω) = Σ_{t=0..L-1} R′(t)·cos(ωt)
S(ω) = Σ_{t=0..L-1} R′(t)·sin(ωt),

determining

A(ω) = 2·√( C(ω)² + S(ω)² ) for
C(ω) = Σ_{t=0..L-1} G′(t)·cos(ωt)
S(ω) = Σ_{t=0..L-1} G′(t)·sin(ωt), and

determining

A(ω) = 2·√( C(ω)² + S(ω)² ) for
C(ω) = Σ_{t=0..L-1} B′(t)·cos(ωt)
S(ω) = Σ_{t=0..L-1} B′(t)·sin(ωt).
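The intensity determination of paragraph [0096] translates directly into code. The candidate-frequency scan in peak_frequency below is an illustrative addition and not part of the recited formula:

```python
import math

def intensity(series, w):
    """A(w) = 2*sqrt(C(w)^2 + S(w)^2), with C and S the cosine and sine
    portions of the discrete Fourier transform over the L frames of one
    color-mass series of the segment presently undergoing analysis."""
    C = sum(x * math.cos(w * t) for t, x in enumerate(series))
    S = sum(x * math.sin(w * t) for t, x in enumerate(series))
    return 2.0 * math.sqrt(C * C + S * S)

def peak_frequency(series, candidates):
    """Return the candidate angular frequency with the highest intensity."""
    return max(candidates, key=lambda w: intensity(series, w))
```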
[0097] Further in accordance with a preferred embodiment of the present invention ω includes a value such that ω ≥ 0.
[0098] There is also provided in accordance with still another
preferred embodiment of the present invention a system including a
marking information receiver, a determiner, which determines, at
least in part, based on the received marking information, a
plurality of color element additives, a color element adder, which
adds the plurality of color element additives to at least one color
element of a video frame, wherein the at least one color element
includes a color element R, a color element G, and a color element
B.
[0099] There is also provided in accordance with still another
preferred embodiment of the present invention a system including a
plurality of video frames, a segmenter, which segments the
plurality of video frames into groups of video frames, a marking
information receiver, a determiner, which determines, at least in
part, based on the received marking information, a plurality of
color element additives, a selector, which selects some of the
groups of video frames for modification, and a color element adder,
which adds a plurality of color element additives to a plurality of
color elements of a plurality of video frames included in the
selected groups of video frames.
[0100] There is also provided in accordance with still another
preferred embodiment of the present invention a system including a
captured video stream, a segmenter, which segments the video stream
into a plurality of video segments, a splitter, which splits each
segment of the plurality of video segments into a plurality of
video frames included therein, a first determiner, which determines
for each one of the plurality of video segments, a color mass for
every individual video frame of the plurality of video frames by
summing color value coordinates included in the individual video
frame, an aggregator which aggregates results of the first
determiner into three series of color value coordinates for every
individual video segment over the plurality of video segments, each
one of the three series of color value coordinates corresponding to
a distinct color element, a discrete Fourier transform applier,
which applies a discrete Fourier transform to each one of the three
series of color value coordinates, a second determiner, which
determines, as a result of the discrete Fourier transform, an
intensity of a plurality of frequencies for each one of the three
series of color value coordinates, a third determiner, which
determines a peak frequency from among the plurality of frequencies
for each one of the three series of color value coordinates, a
marking information determiner, which determines at least a portion
of marking information as a result of the determining the peak
frequency for each one of the three series of color value
coordinates, and a combiner which combines the determined at least
a portion of marking information for each one of the plurality of
video segments, thereby determining the marking information.
[0101] There is also provided in accordance with still another
preferred embodiment of the present invention a signal including a
video stream including a plurality of video frames, each of the
plurality of video frames including a plurality of pixels, and each
pixel of the plurality of pixels including a plurality of color
elements, wherein at least one of the color elements included in
one of the pixels included in one of the plurality of video frames
has been modified by having a color element additive added
thereto.
[0102] There is also provided in accordance with still another
preferred embodiment of the present invention a signal including a
video stream including a plurality of video frames, each of the
plurality of video frames including a plurality of pixels, and each
pixel of the plurality of pixels including a plurality of color
elements, wherein the plurality of video frames has been segmented
into groups of video frames, a plurality of color element additives
has been determined, based, at least in part, on received marking
information, some of the groups of video frames were selected for
modification, and the plurality of color element additives has been
added to the plurality of color elements included in the selected
groups of video frames.
[0103] There is also provided in accordance with still another
preferred embodiment of the present invention a storage medium
including a video stream including a plurality of video frames,
each of the plurality of video frames including a plurality of
pixels, and each pixel of the plurality of pixels including a
plurality of color elements, wherein at least one of the color
elements included in one of the pixels included in one of the
plurality of video frames has been modified by having a color
element additive added thereto.
[0104] There is also provided in accordance with still another
preferred embodiment of the present invention a storage medium
including a video stream including a plurality of video frames,
each of the plurality of video frames including a plurality of
pixels, and each pixel of the plurality of pixels including a
plurality of color elements wherein the plurality of video frames
has been segmented into groups of video frames, a plurality of
color element additives has been determined, based, at least in
part, on received marking information, some of the groups of video
frames were selected for modification, and the plurality of color
element additives has been added to the plurality of color elements
included in the selected groups of video frames.
BRIEF DESCRIPTION OF THE DRAWINGS
[0105] The present invention will be understood and appreciated
more fully from the following detailed description, taken in
conjunction with the drawings in which:
[0106] FIG. 1 is a simplified block drawing of a video data
embedding system constructed and operative in accordance with a
preferred embodiment of the present invention;
[0107] FIG. 2 is a simplified block drawing depicting manipulation of input bits in the video data embedding system of FIG. 1;
[0108] FIG. 3 is a simplified illustration depicting a single pixel
comprised in a video frame before and after data embedding,
according to the system of FIG. 1;
[0109] FIG. 4 is a simplified illustration depicting a plurality of
individual pixels comprised in a plurality of video frames
comprising embedded data, a graphical depiction of a data embedding
function used, at least in part, to embed data in the plurality of
video frames, and a graphical representation of an effect of the
data embedding function on the pixels of individual frames within
the system of FIG. 1;
[0110] FIG. 5 is a simplified illustration of an embedded data
detection portion of the video data embedding system of FIG. 1;
[0111] FIG. 6 is a simplified illustration depicting use of a
Fourier transform in the embedded data detection portion of the
video data embedding system of FIG. 1; and
[0112] FIGS. 7-9B are simplified flowcharts of preferred methods of
operation of the system of FIG. 1.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT
[0113] Reference is now made to FIG. 1, which is a simplified block
drawing of a video data embedding system constructed and operative
in accordance with a preferred embodiment of the present invention.
The system of FIG. 1 comprises a content rendering device 10. The
content rendering device 10 preferably comprises marking
information 15 and a data embedding system 20.
[0114] The marking information 15 preferably comprises any
appropriate information, for example and without limiting the
generality of the foregoing, information identifying the rendering
device 10, and preferably a unique device ID for the content
rendering device 10. Alternatively and preferably, the marking information 15 comprises a copyright mark or other access rights data, for example and without limiting the generality of the foregoing, the playback/copying permissions to be obeyed by the content rendering device 10. Those skilled in the art
will appreciate that copyright information may, for example and
without limiting the generality of the foregoing, be a single bit,
indicating copyrighted/not copyrighted. Alternatively, copyright
may be indicated in a plurality of bits, such as, and without
limiting the generality of the foregoing, permission to copy but
not to burn to CD. It is assumed that authorized playback devices
respect such signals, while unauthorized playback devices are
assumed not to respect such signals. It is appreciated that
combinations of appropriate types of identifying information may
alternatively be used as the marking information 15.
[0115] The data embedding system 20 is preferably operative to
inject embedded data, depicted in FIG. 1 as an asterisk, *, onto
frames 30, 40, 50 of a video stream 60.
[0116] The operation of the system of FIG. 1 is now described. The video stream 60 is depicted as comprising three distinct types of video frames:
[0117] frames not yet comprising embedded data 30;
[0118] frames presently being embedded with data 40; and
[0119] frames already embedded with data 50.
[0120] The data embedding system 20 preferably receives the marking
information 15 as an input, generates the embedded data, depicted
as an asterisk, *, and injects a watermark (termed herein "WM")
into the frames presently being embedded with data 40.
[0121] Content comprising the video stream 60, now comprising a
plurality of frames already embedded with data 50, may be uploaded
or otherwise made available on a content sharing network 70. The
content sharing network 70 typically comprises either a streaming content sharing network or a peer-to-peer content sharing network.
Alternatively, the content sharing network 70 may comprise any
appropriate type of online and/or offline content distribution
scheme, for example and without limiting the generality of the
foregoing, retail sale of pirated DVDs. A second device 80 may then
acquire the video stream 60 from the content sharing network
70.
[0122] A broadcaster, a content owner, or other appropriately
authorized agent may also acquire the video stream 60 from the
content sharing network 70. Upon acquisition of the video stream 60
from the content sharing network 70 by the broadcaster, content
owner, or other interested stakeholder, the video stream 60 is
preferably input into a detection device 90. The detection device
90 preferably extracts the embedded data, depicted as an asterisk,
*, from each of the frames already embedded with data 50 comprised
in the video stream 60. The extracted embedded data is then input into an embedded data detection system 95. The embedded data
detection system 95 preferably is able to determine the injected
marking information 15 from the input embedded data.
[0123] Reference is now additionally made to FIG. 2, which is a simplified block drawing depicting manipulation of input bits in the video data embedding system of FIG. 1. The content rendering
device 10 of FIG. 1 is depicted in FIG. 2. It is appreciated that a single byte, B_i, of input data is depicted (among a plurality of bytes) being processed for injection as a component of the embedded data. The injection occurs over a large number of frames, denoted below as τ.
[0124] Those skilled in the art will appreciate that a digital
video frame is presented to a viewer as an ordered arrangement of
pixels on a viewing monitor or screen. Certain changes may be made
to one or more of the pixels which will, typically, not be
perceptible to the viewer. For example and without limiting the
generality of the foregoing, a color element of the pixel may be
represented by a triad of Red-Green-Blue values, typically
expressed as values ranging from 0-255. A slight change in the
value of the Red-Green-Blue values, for example and without
limiting the generality of the foregoing, from 179-221-18 to
184-220-20 will, typically, not be perceptible to the viewer.
[0125] Those skilled in the art will appreciate that pixel color may alternatively be expressed in any appropriate color space, such as any of the well-known chrominance/luminance systems (for instance, YCbCr, YPbPr, or YDbDr), or according to the xvYCC standard, IEC 61966-2-4. For simplicity of discussion, pixel color is expressed herein, in a non-limiting manner, as an RGB triplet.
[0126] As discussed above, the data embedding system 20 receives
the marking information 15 as input. The marking information 15 is
expressed as a series of bytes: B.sub.0, B.sub.1, . . .
B.sub.k.
[0127] Each byte comprises eight bits:
B i = b i 0 b i 1 b i 2 b i 3 b i 4 b i 5 b i 6 b i 7
##EQU00004##
[0128] Each byte B.sub.i is then extended with one bit to extended
byte E.sub.i:
E i = s i b i 0 b i 1 b i 2 b i 3 b i 4 b i 5 b i 6 b i 7
##EQU00005##
Where s.sub.i=1 for the first extended byte, and 0 for all other
extended bytes.
[0129] E.sub.i is then split into three binary values;
f R = s i b i 0 b i 1 ##EQU00006## f G = b i 2 b i 3 b i 4
##EQU00006.2## and ##EQU00006.3## f B = b i 5 b i 6 b i 7
##EQU00006.4##
where f.sub.R, f.sub.G, and f.sub.B comprise a triplet of binary
values. For example and without limiting the generality of the
foregoing let: f.sub.R=101 (binary)=5 (decimal); f.sub.G=010
(binary)=2 (decimal); and f.sub.B=110 (binary)=6 (decimal). It is
appreciated that the above values of f.sub.R, f.sub.G, and f.sub.B
have been selected on a purely arbitrary basis, and, as such,
comprise but one possible example.
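The extension and splitting steps above can be sketched as follows (an illustrative Python sketch, not part of the claimed system; the function name and the most-significant-bit-first ordering are assumptions):

```python
def split_extended_byte(byte_value: int, start_bit: int) -> tuple:
    """Split a 9-bit extended byte E_i (start bit s_i followed by the
    eight data bits b_i0..b_i7) into the triplet (f_R, f_G, f_B)."""
    e = (start_bit << 8) | (byte_value & 0xFF)  # bits: s b0 b1 b2 b3 b4 b5 b6 b7
    f_r = (e >> 6) & 0b111  # s, b0, b1
    f_g = (e >> 3) & 0b111  # b2, b3, b4
    f_b = e & 0b111         # b5, b6, b7
    return f_r, f_g, f_b

# The arbitrary example from the text: bits s b0..b7 = 1 0 1 0 1 0 1 1 0
print(split_extended_byte(0b01010110, 1))  # (5, 2, 6)
```

With the example values, the triplet recovered is f.sub.R=5, f.sub.G=2, f.sub.B=6, matching the decimal values given above.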
[0131] It is further appreciated that alternative preferred methods
of segmenting the bytes of the marking information 15 may also be
used: for example and without limiting the generality of the
foregoing, using more than one bit to indicate the position of a
byte in the sequence B.sub.0, B.sub.1, . . . B.sub.k, or using
portion sizes of more or less than a byte. For instance, using 2
bits of position information per 8 bits of identifying information
enables appending a precise position to every byte of a 4-byte
instance of marking information 15. It is appreciated that any bit
width, and any corresponding space size between 1 and infinity,
comprises a possible preferred embodiment of the present invention.
[0131] In the above example, f.sub.R, f.sub.G, and f.sub.B each
comprise one in a space of eight possible frequencies of a sine
wave. It is appreciated that by changing the method of segmentation
of the marking information 15, other preferred implementations of
f.sub.R, f.sub.G, and f.sub.B are possible. For instance, in a
preferred embodiment of the present invention, extended bytes
E.sub.i are preferably derived from 12-bit portions of the marking
information 15, without adding positional bits. E.sub.i then
comprises 12 bits, and f.sub.R, f.sub.G, and f.sub.B each comprise
4 bits:
E.sub.i = b.sub.i0 b.sub.i1 b.sub.i2 b.sub.i3 b.sub.i4 b.sub.i5 b.sub.i6 b.sub.i7 b.sub.i8 b.sub.i9 b.sub.i10 b.sub.i11, then:
f.sub.R = b.sub.i0 b.sub.i1 b.sub.i2 b.sub.i3;
f.sub.G = b.sub.i4 b.sub.i5 b.sub.i6 b.sub.i7;
f.sub.B = b.sub.i8 b.sub.i9 b.sub.i10 b.sub.i11.
In which case, f.sub.R,
f.sub.G, and f.sub.B each comprise one of a space of sixteen
different frequencies of a sine wave.
[0132] It is appreciated that segmentation of marking information
preferably need not comprise values comprising the same number of
bits as each other with respect to frequencies. For example and
without limiting the generality of the foregoing, f.sub.G can be
assigned a 3-bit width while f.sub.R and f.sub.B are 4 bits
wide.
[0133] The eight frequencies mentioned above preferably provide
inputs into the following equations, in order to define a change in
the corresponding color space dimensions, as follows:
R(t) = A * sin(2.pi.(f.sub.R + a)/.tau. * (t + .phi..sub.R))
G(t) = A * sin(2.pi.(f.sub.G + a)/.tau. * (t + .phi..sub.G))
B(t) = A * sin(2.pi.(f.sub.B + a)/.tau. * (t + .phi..sub.B))
(It is appreciated, for example, that in a Chrominance/Luminance
system, R(t) is preferably represented as Y(t), f.sub.R is
preferably represented as f.sub.Y, and .phi..sub.R is preferably
represented as .phi..sub.Y.)
Where:
[0134] t--frame number;
R(t), G(t), and B(t)--the change to apply to the values of R, G,
and B, respectively, at frame t. Those skilled in the art will
appreciate that the changes R(t), G(t), and B(t) are rounded to the
nearest integer in video representation systems that use integers
for color component values;
A--wave amplitude. A is preferably low enough that the viewer will
not notice any change in color intensity, yet high enough that,
upon detection (described below with reference to FIGS. 5 and 6),
it will stand out above background noise. Accordingly, it is
preferable that A be in the range of 1%-4% of total amplitude;
therefore, if the video pixels are defined in the R, G, B domain
with values between 0 and 255, preferably 1.ltoreq.A.ltoreq.10;
.tau.--base wavelength, expressed in frames. Assuming 30 frames per
second, in order to achieve reasonable detection times and to avoid
causing flickering which is noticeable to the viewer at higher
frequencies, .tau. is preferably in the range 180-3000;
.phi.--wave phase. A random .phi. is preferably chosen per byte
B.sub.0, B.sub.1, . . . , B.sub.k. Randomly varying .phi.
preferably causes waves of the same frequency to cancel each other
out when summed over an overly long period of time; detection is
therefore made harder for attackers not familiar with the exact
data embedding method. .phi. is an integer between 0 and .tau.;
a--base frequency, comprising a constant. Numerically low
frequencies are typically unusable due to associated noise levels
and flicker. Given the parameters above, a preferably ranges
between 0 and 80. It is preferable that the frequencies which
result from an addition of a to f.sub.R, f.sub.G, and f.sub.B, and
division by .tau., range between 0.5 Hz and 2 Hz. Frequencies above
2 Hz may cause jitter which is perceptible to the viewer, and are
thus undesirable. Likewise, frequencies below 0.5 Hz comprise
longer sine wavelengths, and thus slower detection times, and are
hence undesirable.
[0135] It is appreciated that the functions R(t), G(t), and B(t)
comprise harmonic functions. Thus, -A.ltoreq.R(t).ltoreq.A;
-A.ltoreq.G(t).ltoreq.A; and -A.ltoreq.B(t).ltoreq.A.
[0136] As a non-limiting example, continuing the discussion above,
where:
f.sub.R=101 (bin)=5 (dec), let:
A=5,
[0137] a=9 (chosen to be between 8 and 12), .tau.=900 (a value
chosen so as to be between 180-1800),
.phi..sub.R=400 (0<.phi..sub.R<.tau.), and
[0138] t=1776 (chosen arbitrarily for the present example).
Thus, for R(t) = A * sin(2.pi.(f.sub.R + a)/.tau. * (t + .phi..sub.R)):
R(t) = 5 * sin(2 * .pi. * (5 + 9) * (1776 + 400)/900)
= 5 * sin(28.pi. * 2176/900)
= 5 * sin(67.6978 * .pi.)
= 5 * sin(212.6789)
= 5 * (-0.8131)
= -4.0655
As mentioned above, R(t) is rounded to the nearest integer, and
therefore, the value of R in any pixel in frame t would be
decreased by 4.
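The computation above can be checked mechanically (an illustrative Python sketch; the function name is an assumption, and the default parameters are those of the worked example):

```python
import math

def color_change(t, f, A=5, a=9, tau=900, phi=400):
    """Change to apply to one color component at frame t,
    rounded to the nearest integer as described in the text."""
    return round(A * math.sin(2 * math.pi * (f + a) / tau * (t + phi)))

print(color_change(1776, 5))  # -4, matching the worked example
```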
[0139] It is appreciated that the values of R, G, and B can never
exceed the maximum imposed by the video color representation
system, regardless of the values of R(t), G(t), and B(t). For
example and without limiting the generality of the foregoing, in
systems of RGB values between 0 and 255, R, G, and B can never go
above a maximum of 255. Likewise, the value of R, G, and B can
never go below a minimum of 0, regardless of the values of R(t),
G(t), and B(t). For example and without limiting the generality of
the foregoing, if G(t)=-3 and G=2 in frame t, after data embedding,
G=0.
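The saturation behaviour described in this paragraph amounts to clamping the result to the bounds of the representation system, which can be sketched as (illustrative Python; the function name is an assumption):

```python
def apply_embedding_delta(color_value: int, delta: int) -> int:
    """Add an embedding change R(t)/G(t)/B(t) to an 8-bit color value,
    saturating at the 0..255 bounds of the representation system."""
    return max(0, min(255, color_value + delta))

print(apply_embedding_delta(2, -3))   # 0, as in the G example above
print(apply_embedding_delta(254, 4))  # 255, saturated at the maximum
```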
[0140] The data embedding system 20 preferably applies the
modifications of R(t), G(t), and B(t) to an entire picture's color
components for a period of about 2*.tau., before taking a next
extended byte, E.sub.i+1. Specifically, the modifications of R(t),
G(t), and B(t) are preferably applied to every pixel of a video
screen. After each byte of the marking information 15 has been used
to generate R(t), G(t), and B(t), the data embedding system 20
cycles back to E.sub.0.
[0141] In some preferred embodiments of the present invention, the
data embedding system 20 preferably applies no modifications for a
small randomly timed break of length between 1/4*.tau. and
3/4*.tau.. The small randomly timed break in inserting the watermark (WM) is
added in order to enable waves of different phase to preferably
cancel each other out, when summed over a long period of time,
thereby adding an element of confusion and thereby making an attack
on the data embedding system more difficult.
[0142] In some preferred embodiments of the present invention, each
phase of data embedding starts with a gradual fade-in. Each
modification value R(t), G(t), and B(t) is multiplied by some
fraction for several frames, in order to prevent any flicker from
suddenly appearing. For example and without limiting the generality
of the foregoing, a first trio of R(t), G(t), and B(t) is
preferably multiplied by 0.1. A second trio of R(t), G(t), and B(t)
is preferably multiplied by 0.2, and so on, until the multiplier
reaches 1. It is appreciated that in preferred embodiments where
values of R(t), G(t), and B(t) are multiplied by a fractional
value, due to the effect of rounding to an integer, some of the
multiplications result in repetitions of certain values of R(t),
G(t), and B(t). For instance, if R(t) ranges from -5 to 5, then,
multiplying by 0.1, 0.2, . . . 1 gives products which are going to
be at most plus/minus 0.5, plus/minus 1, plus/minus 1.5, . . .
plus/minus 5, meaning that because of rounding to integers, every
second multiplication gets rounded, either up or down. Those
skilled in the art will appreciate that, as wave phase .phi. will
have an effect on fade-in, it is not important if the fade-in is
precisely timed to the amplitude.
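The fade-in schedule described above might be sketched as follows (illustrative Python; the ten-step schedule follows the 0.1, 0.2, . . . example, and the function name is an assumption):

```python
def faded_change(change: float, step: int, steps: int = 10) -> int:
    """Scale an embedding change during fade-in: the k-th trio is
    multiplied by (k+1)/steps until the multiplier reaches 1, then
    rounded to an integer, producing the repeated values noted above."""
    factor = min(1.0, (step + 1) / steps)
    return round(change * factor)

# With a change of 4, the fade-in steps give 0.4, 0.8, 1.2, ... before
# rounding, so consecutive steps repeat the same integer value:
print([faded_change(4, k) for k in range(10)])
```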
[0143] Reference is now made to FIG. 3, which is a simplified
illustration depicting a single pixel comprised in a video frame
before and after data embedding, according to the system of FIG. 1.
FIG. 3 focuses on a single pixel, depicted, by way of example, as
having Red, Green, and Blue values of: Red=235; Green=17; and
Blue=186. Applying the WM method described above, with, by way of
example, R(t)=1; G(t)=-3; and B(t)=2, the color values of the
pixel with the WM applied are Red=236; Green=14; and Blue=188.
[0144] Reference is now made to FIG. 4, which is a simplified
illustration depicting a plurality of individual pixels comprised
in a plurality of video frames comprising embedded data, a
graphical depiction of a data embedding function used, at least in
part, to embed data in the plurality of video frames, and a
graphical representation of an effect of the data embedding
function on the pixels of individual frames within the system of
FIG. 1. FIG. 4 depicts a graphical representation of one of the
harmonic functions described above, where the value of the function
ranges from -A to A. For example and without limiting the
generality of the foregoing, assume that R(t) is depicted in FIG.
4. Accordingly, the value of R(t), and the corresponding
modification of the Red color value R in a given video frame is
seen to fluctuate between -A and A, as R(t) fluctuates.
[0145] Those skilled in the art will appreciate that a video signal
or other appropriate signal may comprise video comprising embedded
data as described above with reference to FIGS. 1-4. Those skilled
in the art will appreciate that video comprising embedded data as
described above with reference to FIGS. 1-4 may be stored on a
compact disk (CD), a digital versatile disk (DVD), flash memory, or
other appropriate storage medium.
[0146] Reference is now made to FIG. 5, which is a simplified
illustration of an embedded data detection portion of the video
data embedding system of FIG. 1. A detection device 90 acquires the
video stream 60 (FIG. 1), for example and without limiting the
generality of the foregoing, by capturing the video stream 60 (FIG.
1) from a streaming content sharing network 70 or downloading the
video stream 60 (FIG. 1) from a peer-to-peer file content sharing
network 70. The detection device 90 preferably splits the video
stream 60 into short overlapping segments of length approximately
1.5*.tau.. The detection device 90 then preferably splits each
segment into individual frames and preferably determines a color
mass of each individual frame by summing the R, G, and B color
values of each individual frame.
[0147] Summing the R, G, and B color components of each individual
frame results in three series of data, one series for each
component. The three series are denoted below as
R'(t), G'(t) and B'(t). Each one of the three series is subjected
to frequency analysis by means of Discrete Fourier Transform, with
decomposition on the frequencies in the WM range, such that, for
example and without limiting the generality of the foregoing, for
component R:
C(.omega.) = .SIGMA..sub.t=0.sup.L-1 R'(t) cos(.omega.t)
S(.omega.) = .SIGMA..sub.t=0.sup.L-1 R'(t) sin(.omega.t)
A(.omega.) = 2 {square root over (C(.omega.).sup.2 + S(.omega.).sup.2)}
Where:
[0148] t--is the frame number;
.omega.--is a frequency, taken in the range between a and a+8,
where a is the same as during injection;
L--is the length (in frames) of the video portion subjected to
analysis;
C and S--are the cosine and sine parts of the transform,
respectively; and
A--corresponds to the intensity of frequency .omega. in the
analyzed segment.
Similar analysis is performed for G and B.
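The per-component frequency analysis can be sketched as follows (illustrative Python; a naive per-frequency transform rather than an FFT, and the synthetic color-mass series in the usage lines is an assumption made for demonstration):

```python
import math

def frequency_intensity(series, omega):
    """A(omega) = 2*sqrt(C^2 + S^2) for one color-mass series R'(t)."""
    L = len(series)
    c = sum(series[t] * math.cos(omega * t) for t in range(L))  # C(omega)
    s = sum(series[t] * math.sin(omega * t) for t in range(L))  # S(omega)
    return 2.0 * math.sqrt(c * c + s * s)

# A synthetic color-mass series carrying a single injected frequency:
tau, f, a, A = 900, 5, 9, 5
series = [A * math.sin(2 * math.pi * (f + a) / tau * t) for t in range(tau)]
peak = frequency_intensity(series, 2 * math.pi * (f + a) / tau)
off = frequency_intensity(series, 2 * math.pi * (f + a + 3) / tau)
print(peak > 10 * off)  # prints True: the injected frequency stands out
```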
[0149] Reference is now made to FIG. 6, which is a simplified
illustration depicting use of a Fourier transform in the embedded
data detection portion of the video data embedding system of FIG.
1. For the purposes of the discussion of FIG. 6, it is appreciated
that the abscissa of the graph on the left side of FIG. 6 comprises
`t` (frame number), and the ordinate of the graph on the left side
of FIG. 6 comprises R'(t) (the sum of the R component over frame
t). Likewise, the abscissa of the graph on the right side of FIG. 6
comprises .omega. (frequency), and the ordinate of the graph on the
right side of FIG. 6 comprises A(.omega.) (amplitude/intensity of
frequency .omega.). A frequency injected by the data embedding
system 20 (FIG. 1) into one of the color components, appears as a
peak on the frequency chart, on the right of FIG. 6, and is,
accordingly, preferably discernable to either a human operator or
to a computerized program. In one preferred embodiment of the
present invention, the WM detection portion of the video data
embedding system of FIG. 1 preferably analyzes the ratio of the
intensity of the most prominent frequency detected to the intensity
of the second most prominent frequency detected in order to decide
if the most prominent frequency detected does comprise a dominant
frequency, thereby indicating successful detection.
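The dominance test described at the end of this paragraph might look like the following (illustrative Python; the 2:1 ratio threshold and the function name are assumptions, not values given in the text):

```python
def dominant_frequency(intensities, min_ratio=2.0):
    """Given a mapping {frequency: intensity A(omega)}, return the most
    prominent frequency if its intensity exceeds the runner-up's by
    min_ratio, else None (no successful detection)."""
    ranked = sorted(intensities, key=intensities.get, reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if intensities[best] >= min_ratio * intensities[runner_up]:
        return best
    return None

print(dominant_frequency({14: 4500.0, 17: 310.0, 20: 295.0}))  # 14
print(dominant_frequency({14: 400.0, 17: 310.0}))  # None (too close)
```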
[0150] The detection device 90 (FIG. 1) preferably extracts and
determines dominant frequencies from consecutive video segments. In
preferred embodiments where only one position bit is utilized, as
explained above, frequencies encoding the first extended byte
E.sub.0 are preferably identified by the start bit s.sub.0. Once
the first extended byte E.sub.0 is determined, each subsequent byte
can be determined by translating frequencies back. Accordingly,
marking information 15 can be determined in its entirety.
[0151] It is appreciated that in embodiments where more positional
bits are used, a relative position of a byte E.sub.i in a sequence
of marking information bytes E.sub.0 . . . E.sub.n can preferably
be determined by the value of the positional bits. In embodiments
that do not use positional bits, the sequence of marking
information bytes can preferably be determined by other means,
including: [0152] correlating the values of the extended bytes, for
example, the last byte comprising a checksum of the first bytes;
[0153] using special frequencies for one of the extended bytes, for
example, using a frequency in the first byte that is lower than the
minimum frequency used in any other byte; and [0154] using other
signals, such as marking the last byte by a period of "silence" (no
injection of any frequencies) following that byte.
[0155] It is the opinion of the inventors of the present invention
that the color mass frequency data embedding technique described
herein is highly resistant to known attacks. Specifically: [0156]
Filtering--the proposed WM technique cannot be detected or removed
using standard low-pass filters, video color balance tools, etc.,
since the frequencies used by the invention preferably comprise
frequencies which are below the range normally considered to be
noise; [0157] Resizing (stretching), rotation, and cropping--since
the whole screen carries the WM information uniformly, no known
attack using geometric transformation can damage the WM; and [0158]
Collusion attacks--collusion attacks typically work by averaging
several video signals comprising WMs, or choosing each frame out of
several frames comprising WMs, thereby resulting in a WM that
combines data from all originally examined signals. In particular,
a frequency analysis of the combined signal typically reveals all
injected frequencies. If the data embedding system 20 (FIG. 1)
waits between injections of separate bytes, as is described above,
then the resulting signal preferably contains intervals when only
one of the original WMs is present, thereby allowing signal
separation. Standard error-correction techniques, well known in the
art, used both at injection and at detection, preferably are
utilized in order to assist in separating the WMs.
[0159] Reference is now made to FIGS. 7-9B, which are simplified
flowcharts of preferred methods of operation of the system of FIG.
1. FIGS. 7-9B are believed to be self-explanatory in light of the
above discussion.
[0160] It is appreciated that software components of the present
invention may, if desired, be implemented in ROM (read only memory)
form. The software components may, generally, be implemented in
hardware, if desired, using conventional techniques.
[0161] It is appreciated that various features of the invention
which are, for clarity, described in the contexts of separate
embodiments may also be provided in combination in a single
embodiment. Conversely, various features of the invention which
are, for brevity, described in the context of a single embodiment
may also be provided separately or in any suitable
subcombination.
[0162] It will be appreciated by persons skilled in the art that
the present invention is not limited by what has been particularly
shown and described hereinabove. Rather, the scope of the invention
is defined only by the claims which follow:
* * * * *