U.S. patent number 10,346,624 [Application Number 14/186,528] was granted by the patent office on 2019-07-09 for methods, systems, and devices for obscuring entities depicted in captured images.
This patent grant is currently assigned to ELWHA LLC. The grantee listed for this patent is Elwha LLC. Invention is credited to Pablos Holman, Roderick A. Hyde, Royce A. Levien, Richard T. Lord, Robert W. Lord, Mark A. Malamud.
United States Patent 10,346,624
Holman, et al.
July 9, 2019

Methods, systems, and devices for obscuring entities depicted in captured images
Abstract
Computationally implemented methods and systems include
acquiring an image that includes a depiction of a feature of one or
more entities, attaining identification of a particular entity of
the one or more entities for which the depiction of the feature is
present in the image, and obtaining relationship data that
indicates whether the particular entity has a relationship with a
device that facilitated acquisition of the image. In addition to
the foregoing, other aspects are described in the claims, drawings,
and text.
Inventors: Holman; Pablos (Seattle, WA), Hyde; Roderick A. (Redmond, WA), Levien; Royce A. (Lexington, MA), Lord; Richard T. (Gig Harbor, WA), Lord; Robert W. (Seattle, WA), Malamud; Mark A. (Seattle, WA)
Applicant: Elwha LLC (Bellevue, WA, US)
Assignee: ELWHA LLC (Bellevue, WA)
Family ID: 52809717
Appl. No.: 14/186,528
Filed: February 21, 2014
Prior Publication Data

Document Identifier   Publication Date
US 20150104080 A1     Apr 16, 2015
Related U.S. Patent Documents

Application Number   Filing Date     Patent Number
14185741             Feb 20, 2014
14051213             Oct 10, 2013
14055471             Oct 16, 2013
14055543             Oct 16, 2013
14084254             Nov 19, 2013
14084579             Nov 19, 2013
14084581             Nov 19, 2013
14084591             Nov 19, 2013
14108077             Dec 16, 2013
14108107             Dec 16, 2013
14108185             Dec 16, 2013    10102543
14108217             Dec 16, 2013
14109682             Dec 17, 2013
14109726             Dec 17, 2013
14145873             Dec 31, 2013
14145886             Dec 31, 2013
14148523             Jan 6, 2014
14148560             Jan 6, 2014     10185841
Current U.S. Class: 1/1
Current CPC Class: G06F 21/6209 (20130101); G06Q 50/184 (20130101)
Current International Class: H04L 29/06 (20060101); G06Q 50/18 (20120101); G06F 21/62 (20130101)
References Cited

U.S. Patent Documents

Foreign Patent Documents

2009188922        Aug 2009    JP
2009288245        Dec 2009    JP
WO 2014/150073    Sep 2014    WO
Other References

Gao, Yongsheng et al., "Face Recognition Using Line Edge Map," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, No. 6, Jun. 2002, pp. 764-779. cited by applicant.
Kim, Dong-Ju et al., "Face Recognition via Local Directional Pattern," International Journal of Security and Its Applications, vol. 7, No. 2, Mar. 2013, pp. 191-200. cited by applicant.
Kodate, Kashiko et al., "Compact Parallel Optical Correlator for Face Recognition, and Its Application," Face Recognition, Kresimir Delac and Mislav Grgic (Eds.), ISBN: 978-3-902613-03-5, InTech, available from: http://www.intechopen.com/books/face_recognition/compact_parallel_optical_correlator_for_face_recognition_and_its_application, Jun. 2007, pp. 235-249. cited by applicant.
Krawczyk, H. et al., "HMAC-based Extract-and-Expand Key Derivation Function (HKDF)," Internet Engineering Task Force (IETF), Request for Comments: 5869, May 2010, 15 pages. cited by applicant.
Vander Lugt, A. B., "Signal Detection by Complex Spatial Filtering," Report of Project Michigan, Institute of Science and Technology, The University of Michigan, Jul. 1963, 56 pages. cited by applicant.
PCT International Search Report; International App. No. PCT/US2014/060141; dated Jan. 16, 2015; pp. 1-4. cited by applicant.
PCT International Search Report; International App. No. PCT/US2014/060148; dated Mar. 31, 2015; pp. 1-4. cited by applicant.
"Liquidated damages"; Wikipedia; bearing a date of May 4, 2011; printed on Jan. 12, 2017; pp. 1-2; located at: http://en.wikipedia.org/wiki/Liquidated_damages. cited by applicant.
Yamada et al.; "Use of Invisible Noise Signals to Prevent Privacy Invasion through Face Recognition from Camera Images"; Oct. 29-Nov. 2, 2012; pp. 1315-1316; Nara, Japan. cited by applicant.
"Circuitry"; Merriam-Webster; bearing a date of Jan. 23, 2018; printed on Feb. 21, 2018; 1 page. cited by applicant.
Ashok et al.; "Do Not Share! Invisible Light Beacons for Signaling Preferences to Privacy-Respecting Cameras"; VLCS '14; Sep. 7, 2014; pp. 1-6; ACM. cited by applicant.
Chattopadhyay et al.; "PrivacyCam: a Privacy Preserving Camera Using uCLinux on the Blackfin DSP"; IEEE Workshop on Embedded Vision Systems; Jun. 2007; pp. 1-8; IEEE. cited by applicant.
Kapadia et al.; "Virtual Walls: Protecting Digital Privacy in Pervasive Environments"; Pervasive; 2007; pp. 162-179; Springer-Verlag, Berlin Heidelberg, Germany. cited by applicant.
Konings et al.; "PriFi Beacons: Piggybacking Privacy Implications on WiFi Beacons"; UbiComp '13; Sep. 8-12, 2013; pp. 83-86; ACM; Zurich, Switzerland. cited by applicant.
Laibowitz et al.; "Wearable Sensing for Dynamic Management of Dense Ubiquitous Media"; IEEE; 2009; pp. 1-6. cited by applicant.
Pidcock et al.; "NotiSense: An Urban Sensing Notification System to Improve Bystander Privacy"; 2011; pp. 1-5; PhoneSense. cited by applicant.
Primary Examiner: Hoffman; Brandon S
Assistant Examiner: Corum, Jr.; William A
Claims
What is claimed is:
1. A system comprising: one or more elements of programmable
hardware configured to perform functions including at least:
acquire metadata and encrypted data of an image that contains a
depiction of a feature of a particular entity, the metadata
including a pointer associated with a beacon included in the image;
obtain term data using the pointer, which term data includes one or
more terms of service that govern release of a clear version of the
image, the one or more terms of service specifying one or more
damages; and release an at least partially clear version of the
image based on an evaluation of the one or more damages.
2. The system of claim 1, wherein said one or more elements of
programmable hardware is further configured to perform a function
comprising: capture the image that contains the depiction of the
feature of the particular entity.
3. The system of claim 1, wherein said acquire metadata and
encrypted data of an image that contains a depiction of a feature
of a particular entity, the metadata including a pointer associated
with a beacon included in the image comprises: acquire from another
device metadata and encrypted data of an image that contains a
depiction of a feature of a particular entity, the metadata
including a pointer associated with a beacon included in the
image.
4. The system of claim 1, wherein said acquire metadata and
encrypted data of an image that contains a depiction of a feature
of a particular entity, the metadata including a pointer associated
with a beacon included in the image comprises: acquire via a
network metadata and encrypted data of an image that contains a
depiction of a feature of a particular entity, the metadata
including a pointer associated with a beacon included in the
image.
5. The system of claim 1, wherein the acquire metadata and
encrypted data of an image that contains a depiction of a feature
of a particular entity, the metadata including a pointer associated
with a beacon included in the image comprises: acquire name
metadata and encrypted data of an image that contains a depiction
of a feature of a particular entity, the metadata including a
pointer associated with a beacon included in the image.
6. The system of claim 1, wherein the acquire metadata and
encrypted data of an image that contains a depiction of a feature
of a particular entity, the metadata including a pointer associated
with a beacon included in the image comprises: acquire number
metadata and encrypted data of an image that contains a depiction
of a feature of a particular entity, the metadata including a
pointer associated with a beacon included in the image.
7. The system of claim 1, wherein the one or more elements of
programmable hardware is further configured to perform a function
comprising: identify the particular entity.
8. The system of claim 7, wherein said identify the particular
entity comprises: identify the particular entity at least partly
through analysis of one or more previously captured images.
9. The system of claim 7, wherein said identify the particular
entity comprises: identify the particular entity at least partly
through analysis of the metadata.
10. The system of claim 1, wherein the one or more elements of
programmable hardware is further configured to perform functions
comprising: request identification data of the particular entity;
and receive the identification data of the particular entity.
11. The system of claim 1, wherein the one or more elements of
programmable hardware is further configured to perform a function
comprising: identify the particular entity at least partly based on
the beacon.
12. The system of claim 1, wherein the one or more elements of
programmable hardware is further configured to perform a function
comprising: determine whether a name of the particular entity is
stored.
13. The system of claim 1, wherein the one or more elements of
programmable hardware is further configured to perform a function
comprising: determine whether the particular entity is included
within a contact list stored in memory.
14. The system of claim 1, wherein the one or more elements of
programmable hardware is further configured to perform a function
comprising: determine at least partly via user input whether the
particular entity has a relationship with a device.
15. The system of claim 1, wherein the acquire metadata and
encrypted data of an image that contains a depiction of a feature
of a particular entity, the metadata including a pointer associated
with a beacon included in the image comprises: acquire metadata and
obfuscated data of an image that contains a depiction of a feature
of a particular entity, the metadata including a pointer associated
with a beacon included in the image.
16. The system of claim 1, wherein the acquire metadata and
encrypted data of an image that contains a depiction of a feature
of a particular entity, the metadata including a pointer associated
with a beacon included in the image comprises: acquire metadata and
encrypted data of an image that contains a depiction of a feature
of a particular entity, the metadata including a pointer associated
with a beacon associated with the particular entity in the
image.
17. The system of claim 1, wherein the one or more elements of
programmable hardware is further configured to perform a function
comprising: determine whether the particular entity is included
within a social network friend list.
18. The system of claim 1, wherein the release an at least
partially clear version of the image based on an evaluation of the
one or more damages comprises: release for viewing an at least
partially clear version of the image based on an evaluation of the
one or more damages.
19. The system of claim 1, wherein the one or more elements of
programmable hardware is further configured to perform a function
comprising: establish a handshake communication with the
beacon.
20. The system of claim 1, wherein the one or more elements of
programmable hardware is further configured to perform a function
comprising: determine whether a pre-existing relationship exists
with respect to the particular entity based at least partly on one
or more images previously captured or stored.
21. The system of claim 1, wherein the release an at least
partially clear version of the image based on an evaluation of the
one or more damages comprises: release an at least partially clear
version of the image based on an evaluation of the one or more
damages and based on a specific key.
22. The system of claim 1, wherein the one or more elements of
programmable hardware is further configured to perform a function
comprising: determine whether a pre-existing social media
relationship exists with the particular entity.
23. The system of claim 1, wherein the one or more elements of
programmable hardware are further configured to perform an
operation comprising: automatically adjust the one or more terms of
service based at least partly on a popularity of the particular
entity on social media.
24. The system of claim 23, wherein the automatically adjust the
one or more terms of service based at least partly on a popularity
of the particular entity on social media comprises: automatically
adjust the one or more damages based at least partly on a
popularity of the particular entity on social media.
25. The system of claim 24, wherein the automatically adjust the
one or more damages based at least partly on a popularity of the
particular entity on social media comprises: automatically adjust
the one or more damages based at least partly on a popularity of
the particular entity on social media and based at least partly on
the feature of the particular entity depicted in the image.
26. The system of claim 1, wherein the acquire metadata and
encrypted data of an image that contains a depiction of a feature
of a particular entity, the metadata including a pointer associated
with a beacon included in the image comprises: acquire metadata and
encrypted data of an image that contains a depiction of a feature
of a particular entity, the metadata including a pointer
broadcasted by a beacon included in the image.
27. The system of claim 1, wherein the release an at least
partially clear version of the image based on an evaluation of the
one or more damages comprises: release an at least partially clear
version of the image based on one or more determined revenues for
the image being greater than the one or more damages.
28. The system of claim 1, wherein the release an at least
partially clear version of the image based on an evaluation of the
one or more damages comprises: release an at least partially clear
version of the image whereby the depiction of the feature of the
particular entity has been modified based on an evaluation of the
one or more damages.
29. The system of claim 1, wherein the one or more elements of
programmable hardware are further configured to perform an
operation comprising: perform entity recognition with respect to
the image to identify the feature of the particular entity, wherein
the one or more terms of service are adjusted based on
identification of the entity.
30. The system of claim 1, wherein the acquire metadata and
encrypted data of an image that contains a depiction of a feature
of a particular entity, the metadata including a pointer associated
with a beacon included in the image comprises: acquire metadata and
encrypted data of an image that contains a depiction of a feature
of a particular entity, the metadata including a pointer associated
with a beacon depicted in the image.
31. A method implemented by at least one computer processing
device, the method comprising: acquiring metadata and encrypted
data of an image that contains a depiction of a feature of a
particular object, the metadata including a pointer associated with
a beacon included in the image; obtaining term data using the
pointer, which term data includes one or more terms of service that
govern release of a clear version of the image, the one or more
terms of service specifying one or more damages; and releasing an
at least partially clear version of the image based on an
evaluation of the one or more damages.
32. A system comprising: circuitry configured for acquiring
metadata and encrypted data of an image that contains a depiction
of a feature of a particular object, the metadata including a
pointer associated with a beacon included in the image; circuitry
configured for obtaining term data using the pointer, which term
data includes one or more terms of service that govern release of a
clear version of the image, the one or more terms of service
specifying one or more damages; and circuitry configured for
releasing an at least partially clear version of the image based on
an evaluation of the one or more damages.
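The flow recited in claims 1, 27, and 31 can be summarized in a minimal Python sketch. Every name and data shape below is an illustrative assumption rather than the patented implementation: the pointer dereference is stubbed out, the XOR "decryption" only keeps the sketch self-contained, and the revenue-versus-damages test is one example of evaluating the one or more damages (cf. claim 27).

    from dataclasses import dataclass


    @dataclass
    class TermData:
        """Terms of service retrieved via the beacon's pointer (hypothetical shape)."""
        liquidated_damages: float  # damages specified by the terms of service


    def fetch_term_data(pointer: str) -> TermData:
        # Placeholder: a real system would dereference the pointer (e.g., a
        # URL broadcast by a privacy beacon) to a terms-of-service server.
        return TermData(liquidated_damages=5000.0)


    def decrypt_image(encrypted: bytes, key: bytes) -> bytes:
        # Stand-in for decryption with a "specific key" (cf. claim 21); XOR
        # is used only to keep the sketch runnable and is not secure.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(encrypted))


    def maybe_release(metadata: dict, encrypted: bytes, key: bytes,
                      estimated_revenue: float) -> bytes | None:
        """Release an at least partially clear image only when the evaluation
        of the one or more damages favors release."""
        terms = fetch_term_data(metadata["beacon_pointer"])
        if estimated_revenue > terms.liquidated_damages:
            return decrypt_image(encrypted, key)
        return None  # otherwise the image stays obscured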
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
If an Application Data Sheet (ADS) has been filed on the filing
date of this application, it is incorporated by reference herein.
Any applications claimed on the ADS for priority under 35 U.S.C.
§§ 119, 120, 121, or 365(c), and any and all parent,
grandparent, great-grandparent, etc. applications of such
applications, are also incorporated by reference, including any
priority claims made in those applications and any material
incorporated by reference, to the extent such subject matter is not
inconsistent herewith.
The present application is related to and/or claims the benefit of
the earliest available effective filing date(s) from the following
listed application(s) (the "Priority Applications"), if any, listed
below (e.g., claims earliest available priority dates for other
than provisional patent applications or claims benefits under 35
USC § 119(e) for provisional patent applications, for any and
all parent, grandparent, great-grandparent, etc. applications of
the Priority Application(s)). In addition, the present application
is related to the "Related Applications," if any, listed below.
PRIORITY APPLICATIONS
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation of U.S. patent application
Ser. No. 14/185,741, entitled METHODS, SYSTEMS, AND DEVICES FOR
OBSCURING ENTITIES DEPICTED IN CAPTURED IMAGES, naming Pablos
Holman, Roderick A. Hyde, Royce A. Levien, Richard T. Lord, Robert
W. Lord, and Mark A. Malamud as inventors, filed 20 Feb. 2014,
which is currently co-pending or is an application of which a
currently co-pending application is entitled to the benefit of the
filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/051,213, entitled METHODS, SYSTEMS, AND
DEVICES FOR FACILITATING VIABLE DISTRIBUTION OF DATA COLLECTED BY
WEARABLE COMPUTATION, naming Pablos Holman, Roderick A. Hyde, Royce
A. Levien, Richard T. Lord, Robert W. Lord, and Mark A. Malamud as
inventors, filed 10 Oct. 2013, which is currently co-pending or is
an application of which a currently co-pending application is
entitled to the benefit of the filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/055,471, entitled METHODS, SYSTEMS, AND
DEVICES FOR HANDLING IMAGE DATA FROM CAPTURED IMAGES, naming Pablos
Holman, Roderick A. Hyde, Royce A. Levien, Richard T. Lord, Robert
W. Lord, and Mark A. Malamud as inventors, filed 16 Oct. 2013,
which is currently co-pending or is an application of which a
currently co-pending application is entitled to the benefit of the
filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/055,543, entitled METHODS, SYSTEMS, AND
DEVICES FOR HANDLING IMAGE DATA FROM CAPTURED IMAGES, naming Pablos
Holman, Roderick A. Hyde, Royce A. Levien, Richard T. Lord, Robert
W. Lord, and Mark A. Malamud as inventors, filed 16 Oct. 2013,
which is currently co-pending or is an application of which a
currently co-pending application is entitled to the benefit of the
filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/084,254, entitled DEVICES, METHODS, AND
SYSTEMS FOR ANALYZING CAPTURED IMAGE DATA AND PRIVACY DATA, naming
Pablos Holman, Roderick A. Hyde, Royce A. Levien, Richard T. Lord,
Robert W. Lord, and Mark A. Malamud as inventors, filed 19 Nov.
2013, which is currently co-pending or is an application of which a
currently co-pending application is entitled to the benefit of the
filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/084,579, entitled DEVICES, METHODS, AND
SYSTEMS FOR ANALYZING CAPTURED IMAGE DATA AND PRIVACY DATA, naming
Pablos Holman, Roderick A. Hyde, Royce A. Levien, Richard T. Lord,
Robert W. Lord, and Mark A. Malamud as inventors, filed 19 Nov.
2013, which is currently co-pending or is an application of which a
currently co-pending application is entitled to the benefit of the
filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/084,581, entitled METHODS, SYSTEMS, AND
DEVICES FOR HANDLING IMAGE DATA FROM CAPTURED IMAGES, naming Pablos
Holman, Roderick A. Hyde, Royce A. Levien, Richard T. Lord, Robert
W. Lord, and Mark A. Malamud as inventors, filed 19 Nov. 2013,
which is currently co-pending or is an application of which a
currently co-pending application is entitled to the benefit of the
filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/084,591, entitled METHODS, SYSTEMS, AND
DEVICES FOR HANDLING IMAGE DATA FROM CAPTURED IMAGES, naming Pablos
Holman, Roderick A. Hyde, Royce A. Levien, Richard T. Lord, Robert
W. Lord, and Mark A. Malamud as inventors, filed 19 Nov. 2013,
which is currently co-pending or is an application of which a
currently co-pending application is entitled to the benefit of the
filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/108,077, entitled METHODS, SYSTEMS, AND
DEVICES FOR DELIVERING IMAGE DATA FROM CAPTURED IMAGES TO DEVICES,
naming Pablos Holman, Roderick A. Hyde, Royce A. Levien, Richard T.
Lord, Robert W. Lord, and Mark A. Malamud as inventors, filed 16
Dec. 2013, which is currently co-pending or is an application of
which a currently co-pending application is entitled to the benefit
of the filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/108,107, entitled METHODS, SYSTEMS, AND
DEVICES FOR DELIVERING IMAGE DATA FROM CAPTURED IMAGES TO DEVICES,
naming Pablos Holman, Roderick A. Hyde, Royce A. Levien, Richard T.
Lord, Robert W. Lord, and Mark A. Malamud as inventors, filed 16
Dec. 2013, which is currently co-pending or is an application of
which a currently co-pending application is entitled to the benefit
of the filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/108,185, entitled METHODS, SYSTEMS, AND
DEVICES FOR HANDLING INSERTED DATA INTO CAPTURED IMAGES, naming
Pablos Holman, Roderick A. Hyde, Royce A. Levien, Richard T. Lord,
Robert W. Lord, and Mark A. Malamud as inventors, filed 16 Dec.
2013, which is currently co-pending or is an application of which a
currently co-pending application is entitled to the benefit of the
filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/108,217, entitled METHODS, SYSTEMS, AND
DEVICES FOR HANDLING INSERTED DATA INTO CAPTURED IMAGES, naming
Pablos Holman, Roderick A. Hyde, Royce A. Levien, Richard T. Lord,
Robert W. Lord, and Mark A. Malamud as inventors, filed 16 Dec.
2013, which is currently co-pending or is an application of which a
currently co-pending application is entitled to the benefit of the
filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/109,682, entitled METHODS, SYSTEMS, AND
DEVICES FOR HANDLING CAPTURED IMAGE DATA THAT IS RECEIVED BY
DEVICES, naming Pablos Holman, Roderick A. Hyde, Royce A. Levien,
Richard T. Lord, Robert W. Lord, and Mark A. Malamud as inventors,
filed 17 Dec. 2013, which is currently co-pending or is an
application of which a currently co-pending application is entitled
to the benefit of the filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/109,726, entitled METHODS, SYSTEMS, AND
DEVICES FOR HANDLING CAPTURED IMAGE DATA THAT IS RECEIVED BY
DEVICES, naming Pablos Holman, Roderick A. Hyde, Royce A. Levien,
Richard T. Lord, Robert W. Lord, and Mark A. Malamud as inventors,
filed 17 Dec. 2013, which is currently co-pending or is an
application of which a currently co-pending application is entitled
to the benefit of the filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/145,873, entitled METHODS, SYSTEMS, AND
DEVICES FOR MONITORING PRIVACY BEACONS RELATED TO ENTITIES DEPICTED
IN IMAGES, naming Pablos Holman, Roderick A. Hyde, Royce A. Levien,
Richard T. Lord, Robert W. Lord, and Mark A. Malamud as inventors,
filed 31 Dec. 2013, which is currently co-pending or is an
application of which a currently co-pending application is entitled
to the benefit of the filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/145,886, entitled METHODS, SYSTEMS, AND
DEVICES FOR MONITORING PRIVACY BEACONS RELATED TO ENTITIES DEPICTED
IN IMAGES, naming Pablos Holman, Roderick A. Hyde, Royce A. Levien,
Richard T. Lord, Robert W. Lord, and Mark A. Malamud as inventors,
filed 31 Dec. 2013, which is currently co-pending or is an
application of which a currently co-pending application is entitled
to the benefit of the filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/148,523, entitled DEVICES, METHODS, AND
SYSTEMS FOR MANAGING REPRESENTATIONS OF ENTITIES THROUGH USE OF
PRIVACY BEACONS, naming Pablos Holman, Roderick A. Hyde, Royce A.
Levien, Richard T. Lord, Robert W. Lord, and Mark A. Malamud as
inventors, filed 6 Jan. 2014, which is currently co-pending or is
an application of which a currently co-pending application is
entitled to the benefit of the filing date.
For purposes of the USPTO extra-statutory requirements, the present
application constitutes a continuation-in-part of U.S. patent
application Ser. No. 14/148,560, entitled DEVICES, METHODS, AND
SYSTEMS FOR MANAGING REPRESENTATIONS OF ENTITIES THROUGH USE OF
PRIVACY BEACONS, naming Pablos Holman, Roderick A. Hyde, Royce A.
Levien, Richard T. Lord, Robert W. Lord, and Mark A. Malamud as
inventors, filed 6 Jan. 2014, which is currently co-pending or is
an application of which a currently co-pending application is
entitled to the benefit of the filing date.
RELATED APPLICATIONS
None.
The United States Patent Office (USPTO) has published a notice to
the effect that the USPTO's computer programs require that patent
applicants reference both a serial number and indicate whether an
application is a continuation, continuation-in-part, or divisional
of a parent application. Stephen G. Kunin, Benefit of Prior-Filed
Application, USPTO Official Gazette Mar. 18, 2003. The USPTO
further has provided forms for the Application Data Sheet which
allow automatic loading of bibliographic data but which require
identification of each application as a continuation,
continuation-in-part, or divisional of a parent application. The
present Applicant Entity (hereinafter "Applicant") has provided
above a specific reference to the application(s) from which
priority is being claimed as recited by statute. Applicant
understands that the statute is unambiguous in its specific
reference language and does not require either a serial number or
any characterization, such as "continuation" or
"continuation-in-part," for claiming priority to U.S. patent
applications. Notwithstanding the foregoing, Applicant understands
that the USPTO's computer programs have certain data entry
requirements, and hence Applicant has provided designation(s) of a
relationship between the present application and its parent
application(s) as set forth above and in any ADS filed in this
application, but expressly points out that such designation(s) are
not to be construed in any way as any type of commentary and/or
admission as to whether or not the present application contains any
new matter in addition to the matter of its parent
application(s).
If the listings of applications provided above are inconsistent
with the listings provided via an ADS, it is the intent of the
Applicant to claim priority to each application that appears in the
Priority Applications section of the ADS and to each application
that appears in the Priority Applications section of this
application.
All subject matter of the Priority Applications and the Related
Applications and of any and all parent, grandparent,
great-grandparent, etc. applications of the Priority Applications
and the Related Applications, including any priority claims, is
incorporated herein by reference to the extent such subject matter
is not inconsistent herewith.
BACKGROUND
This application is related to the capture of images that may
include personality rights.
SUMMARY
Recently, wearable computers have grown in popularity, e.g., computers that are placed in articles of clothing or clothing accessories such as watches, eyeglasses, shoes, jewelry, shirts, pants, headbands, and the like. As technology allows electronic devices to become smaller and smaller, more and more items may be "smart" items, e.g., may contain a computer.
In addition, image capturing technology has also improved, allowing
for high quality digital cameras that can capture pictures, audio,
video, or a combination thereof. These digital cameras may be small
enough to fit onto wearable computers, e.g., inside of eyeglasses.
In some instances, the digital camera may blend into the eyeglasses
mold, and may not be immediately recognizable as a camera. Such
eyeglasses may be indistinguishable or somewhat distinguishable
from standard eyeglasses that do not contain a camera and/or a
computer.
Further, the cost of data storage has decreased dramatically, and
it is not uncommon for an average person in a developed nation to
have access to enough digital storage to store months' and/or
years' worth of video and pictures. As the cost of data storage has
decreased dramatically, so too has the cost of processors to
process that data, meaning that automation may be able to take an
entire day's worth of surreptitious recording, and isolate those
portions of the recording that captured persons, either specific
persons or persons in general.
Accordingly, technology makes it possible for a person to "wear" a computer, in the form of eyeglasses, watches, shirts, or hats, or to carry one in a pocket-sized device, e.g., a cellular telephone. This wearable computer may be used to record people, e.g., to capture pictures, audio, video, or a combination thereof of a person, without that person's knowledge. Thus, conversations that a person may assume to be private may be recorded and widely distributed. Moreover, a person may be surreptitiously recorded while in a locker room, in a bathroom, or in a telephone booth. It may be difficult or impossible to tell when a person is being recorded. Further, once wearable computers with digital cameras become widespread, people must assume that they are under surveillance 100% of the time they are not in their own homes.
Therefore, a need has arisen to provide systems that attempt to limit the capture and distribution of a person's personality rights. The present invention is directed to devices, methods, and systems that attempt to limit the capture and distribution of captured images of persons, implemented at the device that carries out the capturing of the image. In some embodiments, this device may be a wearable computer, but in other embodiments, any image capturing device, or any device that has an image capturing device incorporated into its functionality, may implement the devices, methods, and systems described herein.
The instant application is directed to devices, methods, and systems that have a capability to capture images, where the captured images may include images of a person, persons, or portion(s) of a person with which a privacy beacon may be associated. The privacy beacon may be optical, digital, or of another form (e.g., radio, electromagnetic, biomechanic, quantum-state, and the like), and may be detected through digital or optical operations, as discussed herein. The instant application describes devices, methods, and systems that may interface with other parts of a larger system, which may be described in detail in this or other applications.
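As one hedged illustration of beacon detection, the sketch below assumes a digital beacon that broadcasts its identifier in frames carrying a fixed prefix; the frame format and prefix are hypothetical, since the application leaves the beacon's form open (optical, radio, and so on).

    def detect_privacy_beacons(broadcast_frames: list[bytes]) -> set[str]:
        """Return identifiers of privacy beacons heard near the capture device."""
        BEACON_PREFIX = b"PRIVACY-BEACON:"  # hypothetical frame format
        found = set()
        for frame in broadcast_frames:
            if frame.startswith(BEACON_PREFIX):
                # Treat the remainder of the frame as the beacon's identifier.
                found.add(frame[len(BEACON_PREFIX):].decode("utf-8", "replace"))
        return found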
In one or more various aspects, a method includes, but is not
limited to, acquiring an image that includes a depiction of a
feature of one or more entities, attaining identification of a
particular entity of the one or more entities for which the
depiction of the feature is present in the image, obtaining
relationship data that indicates whether the particular entity has
a relationship with a device that facilitated acquisition of the
image, and performing obfuscation on at least a portion of the
image, wherein the depiction of the feature of the particular
entity is excluded from the obfuscation when the obtained
relationship data indicates that the particular entity has the
relationship with the device that facilitated the acquisition of
the image. In addition to the foregoing, other method aspects are
described in the claims, drawings, and text forming a part of the
disclosure set forth herein.
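A minimal Python sketch of this method aspect follows, assuming a hypothetical Depiction shape, a set of related-entity identifiers, and an injected blur routine, none of which the application prescribes.

    from dataclasses import dataclass
    from typing import Callable


    @dataclass
    class Depiction:
        """A depicted feature of one entity (hypothetical shape)."""
        entity_id: str
        region: tuple[int, int, int, int]  # (x, y, width, height) in the image


    def obfuscate_unrelated(image, depictions: list[Depiction],
                            related_ids: set[str], blur: Callable) -> None:
        """Obfuscate each depicted region unless the relationship data shows the
        entity is related to the device that facilitated acquiring the image."""
        for d in depictions:
            if d.entity_id not in related_ids:
                blur(image, d.region)  # related entities are excluded from blurring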
In one or more various aspects, one or more related systems may be
implemented in machines, compositions of matter, or manufactures of
systems, limited to patentable subject matter under 35 U.S.C. 101.
The one or more related systems may include, but are not limited
to, circuitry and/or programming for carrying out the
herein-referenced method aspects. The circuitry and/or programming
may be virtually any combination of hardware, software, and/or
firmware configured to effect the herein-referenced method aspects
depending upon the design choices of the system designer, and
limited to patentable subject matter under 35 USC 101.
In one or more various aspects, a system includes, but is not
limited to, means for acquiring an image that includes a depiction
of a feature of one or more entities, means for attaining
identification of a particular entity of the one or more entities
for which the depiction of the feature is present in the image,
means for obtaining relationship data that indicates whether the
particular entity has a relationship with a device that facilitated
acquisition of the image, and means for performing obfuscation on
at least a portion of the image, wherein the depiction of the
feature of the particular entity is excluded from the obfuscation
when the obtained relationship data indicates that the particular
entity has the relationship with the device that facilitated the
acquisition of the image. In addition to the foregoing, other
system aspects are described in the claims, drawings, and text
forming a part of the disclosure set forth herein.
In one or more various aspects, a system includes, but is not
limited to, circuitry for acquiring an image that includes a
depiction of a feature of one or more entities, circuitry for
attaining identification of a particular entity of the one or more
entities for which the depiction of the feature is present in the
image, circuitry for obtaining relationship data that indicates
whether the particular entity has a relationship with a device that
facilitated acquisition of the image, and performing obfuscation on
at least a portion of the image, wherein the depiction of the
feature of the particular entity is excluded from the obfuscation
when the obtained relationship data indicates that the particular
entity has the relationship with the device that facilitated the
acquisition of the image. In addition to the foregoing, other
system aspects are described in the claims, drawings, and text
forming a part of the disclosure set forth herein.
In one or more various aspects, a computer program product,
comprising a signal bearing medium, bearing one or more
instructions including, but not limited to, one or more
instructions for acquiring an image that includes a depiction of a
feature of one or more entities, one or more instructions for
attaining identification of a particular entity of the one or more
entities for which the depiction of the feature is present in the
image, one or more instructions for obtaining relationship data
that indicates whether the particular entity has a relationship
with a device that facilitated acquisition of the image, and one or
more instructions for performing obfuscation on at least a portion
of the image, wherein the depiction of the feature of the
particular entity is excluded from the obfuscation when the
obtained relationship data indicates that the particular entity has
the relationship with the device that facilitated the acquisition
of the image. In addition to the foregoing, other computer program
product aspects are described in the claims, drawings, and text
forming a part of the disclosure set forth herein.
In one or more various aspects, a device is defined by a
computational language, such that the device comprises one or more
interchained physical machines ordered for acquiring an image that
includes a depiction of a feature of one or more entities, one or
more interchained physical machines ordered for attaining
identification of a particular entity of the one or more entities
for which the depiction of the feature is present in the image, one
or more interchained physical machines ordered for obtaining
relationship data that indicates whether the particular entity has
a relationship with a device that facilitated acquisition of the
image, and one or more interchained physical machines ordered for
performing obfuscation on at least a portion of the image, wherein
the depiction of the feature of the particular entity is excluded
from the obfuscation when the obtained relationship data indicates
that the particular entity has the relationship with the device
that facilitated the acquisition of the image.
In addition to the foregoing, various other method and/or system
and/or program product aspects are set forth and described in the
teachings such as text (e.g., claims and/or detailed description)
and/or drawings of the present disclosure.
The foregoing is a summary and thus may contain simplifications,
generalizations, inclusions, and/or omissions of detail;
consequently, those skilled in the art will appreciate that the
summary is illustrative only and is NOT intended to be in any way
limiting. Other aspects, features, and advantages of the devices
and/or processes and/or other subject matter described herein will
become apparent by reference to the detailed description, the
corresponding drawings, and/or in the teachings set forth
herein.
BRIEF DESCRIPTION OF THE FIGURES
For a more complete understanding of embodiments, reference now is
made to the following descriptions taken in connection with the
accompanying drawings. The use of the same symbols in different
drawings typically indicates similar or identical items, unless
context dictates otherwise. The illustrative embodiments described
in the detailed description, drawings, and claims are not meant to
be limiting. Other embodiments may be utilized, and other changes
may be made, without departing from the spirit or scope of the
subject matter presented here.
FIG. 1, including FIGS. 1-A through 1-T, shows a high-level system
diagram of one or more exemplary environments in which transactions
and potential transactions may be carried out, according to one or
more embodiments. FIG. 1 forms a partially schematic diagram of an
environment(s) and/or an implementation(s) of technologies
described herein when FIGS. 1-A through 1-T are stitched together
in the manner shown in FIG. 1-P, which is reproduced below in table
format.
In accordance with 37 C.F.R. § 1.84(h)(2), FIG. 1 shows "a view of a large machine or device in its entirety . . . broken into partial views . . . extended over several sheets" labeled FIG. 1-A through FIG. 1-T (Sheets 1-20). The "views on two or more sheets form, in effect, a single complete view, [and] the views on the several sheets . . . [are] so arranged that the complete figure can be assembled" from "partial views drawn on separate sheets . . . linked edge to edge." Thus, in FIG. 1, the partial views FIGS. 1-A through 1-T are ordered alphabetically, increasing across columns from left to right and down rows from top to bottom, as shown in the following table:
TABLE 1. Alignment of the enclosed drawings to form the partial schematic of one or more environments.

(1, 1) FIG. 1-A   (1, 2) FIG. 1-B   (1, 3) FIG. 1-C   (1, 4) FIG. 1-D   (1, 5) FIG. 1-E
(2, 1) FIG. 1-F   (2, 2) FIG. 1-G   (2, 3) FIG. 1-H   (2, 4) FIG. 1-I   (2, 5) FIG. 1-J
(3, 1) FIG. 1-K   (3, 2) FIG. 1-L   (3, 3) FIG. 1-M   (3, 4) FIG. 1-N   (3, 5) FIG. 1-O
(4, 1) FIG. 1-P   (4, 2) FIG. 1-Q   (4, 3) FIG. 1-R   (4, 4) FIG. 1-S   (4, 5) FIG. 1-T
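The row-major, alphabetical layout of Table 1 can be verified with a few lines of Python (an illustration, not part of the specification):

    def partial_view_label(row: int, col: int) -> str:
        """Map a 1-indexed (row, column) position in the 4x5 grid to its label."""
        index = (row - 1) * 5 + (col - 1)  # row-major order
        return f"FIG. 1-{chr(ord('A') + index)}"

    assert partial_view_label(1, 1) == "FIG. 1-A"
    assert partial_view_label(2, 3) == "FIG. 1-H"
    assert partial_view_label(4, 5) == "FIG. 1-T"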
In accordance with 37 C.F.R. § 1.84(h)(2), FIG. 1 is ". . . a
view of a large machine or device in its entirety . . . broken into
partial views . . . extended over several sheets . . . [with] no
loss in facility of understanding the view." The partial views
drawn on the several sheets indicated in the above table are
capable of being linked edge to edge, so that no partial view
contains parts of another partial view. As here, "where views on
two or more sheets form, in effect, a single complete view, the
views on the several sheets are so arranged that the complete
figure can be assembled without concealing any part of any of the
views appearing on the various sheets." 37 C.F.R. § 1.84(h)(2).
It is noted that one or more of the partial views of the drawings
may be blank, or may not contain substantive elements (e.g., may
show only lines, connectors, and the like). These drawings are
included in order to assist readers of the application in
assembling the single complete view from the partial sheet format
required for submission by the USPTO, and, while their inclusion is
not required and may be omitted in this or other applications,
their inclusion is proper, and should be considered
intentional.
FIG. 1-A, when placed at position (1,1), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-B, when placed at position (1,2), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-C, when placed at position (1,3), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-D, when placed at position (1,4), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-E, when placed at position (1,5), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-F, when placed at position (2,1), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-G, when placed at position (2,2), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-H, when placed at position (2,3), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-I, when placed at position (2,4), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-J, when placed at position (2,5), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-K, when placed at position (3,1), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-L, when placed at position (3,2), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-M, when placed at position (3,3), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-N, when placed at position (3,4), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-O, when placed at position (3,5), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-P, when placed at position (4,1), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-Q, when placed at position (4,2), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-R, when placed at position (4,3), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-S, when placed at position (4,4), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 1-T, when placed at position (4,5), forms at least a portion
of a partially schematic diagram of an environment(s) and/or an
implementation(s) of technologies described herein.
FIG. 2A shows a high-level block diagram of an exemplary
environment 200A, including image capture device 220A, according to
one or more embodiments.
FIG. 2B shows a high-level block diagram of an exemplary
environment 200B, including image capture device 220B, according to
one or more embodiments.
FIG. 2C shows a high-level block diagram of an exemplary
environment 200C, including image capture device 220C, according to
one or more embodiments.
FIG. 2D shows a high-level block diagram of an exemplary
environment 200D, including image capture device 220D, according to
one or more embodiments.
FIG. 2E shows a high-level block diagram of an exemplary
environment 200E, including image capture device 210 and image
receipt device 220E, according to one or more embodiments.
FIG. 2F shows a high-level block diagram of an exemplary
environment 200F, including image capture device 210 and remote
computer device 220F, according to one or more embodiments.
FIG. 2G shows a high-level block diagram of a computing device,
e.g., a device 220 operating in an exemplary environment 200*,
according to one or more embodiments.
FIG. 3A shows a high-level block diagram of an exemplary image
capturing device 302, according to one or more embodiments.
FIG. 3B shows a high-level block diagram of an exemplary image
capturing device 304, according to one or more embodiments.
FIG. 3C shows a high-level block diagram of an exemplary image
capturing device 306, according to one or more embodiments.
FIG. 3D shows a high-level block diagram of an exemplary image
capturing device 308, according to one or more embodiments.
FIG. 3E shows a high-level block diagram of an exemplary image
capturing device 309, according to one or more embodiments.
FIG. 4A shows a high-level block diagram of an exemplary
environment 400A including a computing device 420A and a server
device 430A.
FIG. 4B shows a high-level block diagram of an exemplary
environment 400B including a computing device 420B and a server
device 430B.
FIG. 4C shows a high-level block diagram of an exemplary
environment 400C including a computing device 420C and a server
device 430C.
FIG. 4D shows a high-level block diagram of an exemplary
environment 400D including a computing device 420D and a server
device 430D.
FIG. 4E shows a high-level block diagram of an exemplary environment 400E including a computing device 420E and a server device 430E.
FIG. 5A shows a high-level block diagram of an exemplary
environment 500A including a computing device 520A and a server
device 530A.
FIG. 5B shows a high-level block diagram of an exemplary
environment 500B including a computing device 520B and a server
device 530B.
FIG. 5C shows a high-level block diagram of an exemplary
environment 500C including a computing device 520C and a server
device 530C.
FIG. 5D shows a high-level block diagram of an exemplary
environment 500D including a computing device 520D and a server
device 530D.
FIG. 6, including FIGS. 6A-6C, shows a particular perspective of an
image that contains a depiction of a feature of a particular entity
acquiring module 252 of processing module 250 of device 220 of FIG.
2G, according to an embodiment.
FIG. 7, including FIGS. 7A-7D, shows a particular perspective of an
identification data related to an identity of the particular entity
for which the depiction of the feature of the particular entity is
present in the image attaining module 254 of processing module 250
of device 220 of FIG. 2G, according to an embodiment.
FIG. 8, including FIGS. 8A-8E, shows a particular perspective of a
relation data that describes a relation between the particular
entity and a device that facilitated the acquisition of the image
that contains the depiction of the feature of the particular entity
obtaining module, wherein the relation may be nonextant 256 of
processing module 250 of device 220 of FIG. 2G, according to an
embodiment.
FIG. 9, including FIGS. 9A-9C, shows a particular perspective of an
obfuscation of a particular portion of the image, wherein the
depiction of the feature of the particular entity is excluded from
the particular portion of the image when the relation data
indicates that the relation between the particular entity and the
device that facilitated the acquisition of the image that contains
the depiction of the feature of the particular entity is extant
performing module 258 of processing module 250 of device 220 of
FIG. 2G, according to an embodiment.
FIG. 10 is a high-level logic flowchart of a process, e.g.,
operational flow 1000, according to an embodiment.
FIG. 11A is a high-level logic flow chart of a process depicting
alternate implementations of an acquiring an image operation 1002,
according to one or more embodiments.
FIG. 11B is a high-level logic flow chart of a process depicting
alternate implementations of an acquiring an image operation 1002,
according to one or more embodiments.
FIG. 11C is a high-level logic flow chart of a process depicting
alternate implementations of an acquiring an image operation 1002,
according to one or more embodiments.
FIG. 12A is a high-level logic flow chart of a process depicting
alternate implementations of an identifying a particular entity
operation 1004, according to one or more embodiments.
FIG. 12B is a high-level logic flow chart of a process depicting
alternate implementations of an identifying a particular entity
operation 1004, according to one or more embodiments.
FIG. 12C is a high-level logic flow chart of a process depicting
alternate implementations of an identifying a particular entity
operation 1004, according to one or more embodiments.
FIG. 12D is a high-level logic flow chart of a process depicting
alternate implementations of an identifying a particular entity
operation 1004, according to one or more embodiments.
FIG. 13A is a high-level logic flow chart of a process depicting
alternate implementations of an obtaining relationship data
operation 1006, according to one or more embodiments.
FIG. 13B is a high-level logic flow chart of a process depicting
alternate implementations of an obtaining relationship data
operation 1006, according to one or more embodiments.
FIG. 13C is a high-level logic flow chart of a process depicting
alternate implementations of an obtaining relationship data
operation 1006, according to one or more embodiments.
FIG. 13D is a high-level logic flow chart of a process depicting
alternate implementations of an obtaining relationship data
operation 1006, according to one or more embodiments.
FIG. 13E is a high-level logic flow chart of a process depicting
alternate implementations of an obtaining relationship data
operation 1006, according to one or more embodiments.
FIG. 14A is a high-level logic flow chart of a process depicting
alternate implementations of a performing obfuscation operation
1008, according to one or more embodiments.
FIG. 14B is a high-level logic flow chart of a process depicting
alternate implementations of a performing obfuscation operation
1008, according to one or more embodiments.
FIG. 14C is a high-level logic flow chart of a process depicting
alternate implementations of a performing obfuscation operation
1008, according to one or more embodiments.
DETAILED DESCRIPTION
In the following detailed description, reference is made to the
accompanying drawings, which form a part hereof. In the drawings,
similar symbols typically identify similar or identical components
or items, unless context dictates otherwise. The illustrative
embodiments described in the detailed description, drawings, and
claims are not meant to be limiting. Other embodiments may be
utilized, and other changes may be made, without departing from the
spirit or scope of the subject matter presented here.
Thus, in accordance with various embodiments, computationally
implemented methods, systems, circuitry, articles of manufacture,
ordered chains of matter, and computer program products are
designed to, among other things, provide an interface for acquiring
an image that includes a depiction of a feature of one or more
entities, attaining identification of a particular entity of the
one or more entities for which the depiction of the feature is
present in the image, obtaining relationship data that indicates
whether the particular entity has a relationship with a device that
facilitated acquisition of the image, and performing obfuscation on
at least a portion of the image, wherein the depiction of the
feature of the particular entity is excluded from the obfuscation
when the obtained relationship data indicates that the particular
entity has the relationship with the device that facilitated the
acquisition of the image.
The claims, description, and drawings of this application may
describe one or more of the instant technologies in
operational/functional language, for example as a set of operations
to be performed by a computer. Such operational/functional
description in most instances would be understood by one skilled in
the art as specifically-configured hardware (e.g., because a
general purpose computer in effect becomes a special purpose
computer once it is programmed to perform particular functions
pursuant to instructions from program software (e.g., a high-level
computer program serving as a hardware specification)).
Importantly, although the operational/functional descriptions
described herein are understandable by the human mind, they are not
abstract ideas of the operations/functions divorced from
computational implementation of those operations/functions. Rather,
the operations/functions represent a specification for massively
complex computational machines or other means. As discussed in
detail below, the operational/functional language must be read in
its proper technological context, i.e., as concrete specifications
for physical implementations.
The logical operations/functions described herein are a
distillation of machine specifications or other physical mechanisms
specified by the operations/functions such that the otherwise
inscrutable machine specifications may be comprehensible to a human
reader. The distillation also allows one of skill in the art to
adapt the operational/functional description of the technology
across many different specific vendors' hardware configurations or
platforms, without being limited to specific vendors' hardware
configurations or platforms.
Some of the present technical description (e.g., detailed
description, drawings, claims, etc.) may be set forth in terms of
logical operations/functions. As described in more detail herein,
these logical operations/functions are not representations of
abstract ideas, but rather are representative of static or
sequenced specifications of various hardware elements. Differently
stated, unless context dictates otherwise, the logical
operations/functions will be understood by those of skill in the
art to be representative of static or sequenced specifications of
various hardware elements. This is true because tools available to
one of skill in the art to implement technical disclosures set
forth in operational/functional formats--tools in the form of a
high-level programming language (e.g., C, Java, Visual Basic), etc.,
or tools in the form of Very High Speed Integrated Circuit Hardware
Description Language ("VHDL," which is a language that uses text to
describe logic circuits)--are generators of static or sequenced
specifications of various hardware configurations. This fact is
sometimes obscured by the broad term "software," but, as shown by
the following explanation, those skilled in the art understand that
what is termed "software" is a shorthand for a massively complex
interchaining/specification of ordered-matter elements. The term
"ordered-matter elements" may refer to physical components of
computation, such as assemblies of electronic logic gates,
molecular computing logic constituents, quantum computing
mechanisms, etc.
For example, a high-level programming language is a programming
language with strong abstraction, e.g., multiple levels of
abstraction, from the details of the sequential organizations,
states, inputs, outputs, etc., of the machines that a high-level
programming language actually specifies. See, e.g., Wikipedia,
High-level programming language,
http://en.wikipedia.org/wiki/High-level_programming_language (as of
Jun. 5, 2012, 21:00 GMT). In order to facilitate human
comprehension, in many instances, high-level programming languages
resemble or even share symbols with natural languages. See, e.g.,
Wikipedia, Natural language,
http://en.wikipedia.org/wiki/Natural_language (as of Jun. 5, 2012,
21:00 GMT).
It has been argued that because high-level programming languages
use strong abstraction (e.g., that they may resemble or share
symbols with natural languages), they are therefore a "purely
mental construct" (e.g., that "software"--a computer program or
computer programming--is somehow an ineffable mental construct,
because at a high level of abstraction, it can be conceived and
understood by a human reader). This argument has been used to
characterize technical description in the form of
functions/operations as somehow "abstract ideas." In fact, in
technological arts (e.g., the information and communication
technologies) this is not true.
The fact that high-level programming languages use strong
abstraction to facilitate human understanding should not be taken
as an indication that what is expressed is an abstract idea. In
fact, those skilled in the art understand that just the opposite is
true. If a high-level programming language is the tool used to
implement a technical disclosure in the form of
functions/operations, those skilled in the art will recognize that,
far from being abstract, imprecise, "fuzzy," or "mental" in any
significant semantic sense, such a tool is instead a
near-incomprehensibly precise sequential specification of specific
computational machines--the parts of which are built up by
activating/selecting such parts from typically more general
computational machines over time (e.g., clocked time). This fact is
sometimes obscured by the superficial similarities between
high-level programming languages and natural languages. These
superficial similarities also may cause a glossing over of the fact
that high-level programming language implementations ultimately
perform valuable work by creating/controlling many different
computational machines.
The many different computational machines that a high-level
programming language specifies are almost unimaginably complex. At
base, the hardware used in the computational machines typically
consists of some type of ordered matter (e.g., traditional
electronic devices (e.g., transistors), deoxyribonucleic acid
(DNA), quantum devices, mechanical switches, optics, fluidics,
pneumatics, optical devices (e.g., optical interference devices),
molecules, etc.) that are arranged to form logic gates. Logic gates
are typically physical devices that may be electrically,
mechanically, chemically, or otherwise driven to change physical
state in order to create a physical reality of logic, such as
Boolean logic.
Logic gates may be arranged to form logic circuits, which are
typically physical devices that may be electrically, mechanically,
chemically, or otherwise driven to create a physical reality of
certain logical functions. Types of logic circuits include such
devices as multiplexers, registers, arithmetic logic units (ALUs),
computer memory, etc., each type of which may be combined to form
yet other types of physical devices, such as a central processing
unit (CPU)--the best known of which is the microprocessor. A modern
microprocessor will often contain more than one hundred million
logic gates in its many logic circuits (and often more than a
billion transistors). See, e.g., Wikipedia, Logic gates,
http://en.wikipedia.org/wiki/Logic_gates (as of Jun. 5, 2012, 21:03
GMT).
The logic circuits forming the microprocessor are arranged to
provide a microarchitecture that will carry out the instructions
defined by that microprocessor's defined Instruction Set
Architecture. The Instruction Set Architecture is the part of the
microprocessor architecture related to programming, including the
native data types, instructions, registers, addressing modes,
memory architecture, interrupt and exception handling, and external
Input/Output. See, e.g., Wikipedia, Computer architecture,
http://en.wikipedia.org/wiki/Computer_architecture (as of Jun. 5,
2012, 21:03 GMT).
The Instruction Set Architecture includes a specification of the
machine language that can be used by programmers to use/control the
microprocessor. Since the machine language instructions are such
that they may be executed directly by the microprocessor, typically
they consist of strings of binary digits, or bits. For example, a
typical machine language instruction might be many bits long (e.g.,
32, 64, or 128 bit strings are currently common). A typical machine
language instruction might take the form
"11110000101011110000111100111111" (a 32 bit instruction).
It is significant here that, although the machine language
instructions are written as sequences of binary digits, in
actuality those binary digits specify physical reality. For
example, if certain semiconductors are used to make the operations
of Boolean logic a physical reality, the apparently mathematical
bits "1" and "0" in a machine language instruction actually
constitute a shorthand that specifies the application of specific
voltages to specific wires. For example, in some semiconductor
technologies, the binary number "1" (e.g., logical "1") in a
machine language instruction specifies around +5 volts applied to a
specific "wire" (e.g., metallic traces on a printed circuit board)
and the binary number "0" (e.g., logical "0") in a machine language
instruction specifies around -5 volts applied to a specific "wire."
In addition to specifying voltages of the machines' configurations,
such machine language instructions also select out and activate
specific groupings of logic gates from the millions of logic gates
of the more general machine. Thus, far from abstract mathematical
expressions, machine language instruction programs, even though
written as a string of zeros and ones, specify many, many
constructed physical machines or physical machine states.
Machine language is typically incomprehensible by most humans
(e.g., the above example was just ONE instruction, and some
personal computers execute more than two billion instructions every
second). See, e.g., Wikipedia, Instructions per second,
http://en.wikipedia.org/wiki/Instructions_per_second (as of Jun. 5,
2012, 21:04 GMT). Thus, programs written in machine language--which
may be tens of millions of machine language instructions long--are
incomprehensible to most humans. In view of this, early assembly
languages were developed that used mnemonic codes to refer to
machine language instructions, rather than using the machine
language instructions' numeric values directly (e.g., for
performing a multiplication operation, programmers coded the
abbreviation "mult," which represents the binary number "011000" in
MIPS machine code). While assembly languages were initially a great
aid to humans controlling the microprocessors to perform work, in
time the complexity of the work that needed to be done by the
humans outstripped the ability of humans to control the
microprocessors using merely assembly languages.
At this point, it was noted that the same tasks needed to be done
over and over, and the machine language necessary to do those
repetitive tasks was the same. In view of this, compilers were
created. A compiler is a device that takes a statement that is more
comprehensible to a human than either machine or assembly language,
such as "add 2+2 and output the result," and translates that human
understandable statement into a complicated, tedious, and immense
machine language code (e.g., millions of 32, 64, or 128 bit length
strings). Compilers thus translate high-level programming language
into machine language.
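By way of a non-limiting illustration (this sketch is not part of the
claimed subject matter; the register names and the "li"/"add"
encodings below are a toy subset chosen for clarity), the gap that a
compiler bridges can be shown in a few lines of Python:

    # One human-readable statement versus the mnemonic and binary
    # forms it might be lowered to on a hypothetical MIPS-like machine.
    high_level = "add 2+2 and output the result"  # what the human writes

    assembly = [
        "li  $t0, 2",         # load immediate 2 into register $t0
        "li  $t1, 2",         # load immediate 2 into register $t1
        "add $t2, $t0, $t1",  # $t2 = $t0 + $t1
    ]

    # Each mnemonic ultimately stands for a fixed-width bit string;
    # e.g., as noted above, "mult" is the binary code 011000 in MIPS
    # machine code ("add" -> 100000 is likewise a MIPS function code).
    OPCODES = {"mult": "011000", "add": "100000"}

    print(high_level, "->", assembly, "->", OPCODES["add"])

A real compiler performs this lowering across millions of such
statements, emitting the bit strings that, as described above, select
and drive specific hardware.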
This compiled machine language, as described above, is then used as
the technical specification which sequentially constructs and
causes the interoperation of many different computational machines
such that useful, tangible, and concrete work is done. For example,
as indicated above, such machine language--the compiled version of
the higher-level language--functions as a technical specification
which selects out hardware logic gates, specifies voltage levels,
voltage transition timings, etc., such that the useful work is
accomplished by the hardware.
Thus, a functional/operational technical description, when viewed
by one of skill in the art, is far from an abstract idea. Rather,
such a functional/operational technical description, when
understood through the tools available in the art such as those
just described, is instead understood to be a humanly
understandable representation of a hardware specification, the
complexity and specificity of which far exceeds the comprehension
of most any one human. With this in mind, those skilled in the art
will understand that any such operational/functional technical
descriptions--in view of the disclosures herein and the knowledge
of those skilled in the art--may be understood as operations made
into physical reality by (a) one or more interchained physical
machines, (b) interchained logic gates configured to create one or
more physical machine(s) representative of sequential/combinatorial
logic(s), (c) interchained ordered matter making up logic gates
(e.g., interchained electronic devices (e.g., transistors), DNA,
quantum devices, mechanical switches, optics, fluidics, pneumatics,
molecules, etc.) that create physical reality of logic(s), or (d)
virtually any combination of the foregoing. Indeed, any physical
object which has a stable, measurable, and changeable state may be
used to construct a machine based on the above technical
description. Charles Babbage, for example, designed an early
mechanized computational apparatus, the Difference Engine, which was
operated by cranking a handle.
Thus, far from being understood as an abstract idea, those skilled
in the art will recognize a functional/operational technical
description as a humanly-understandable representation of one or
more almost unimaginably complex and time sequenced hardware
instantiations. The fact that functional/operational technical
descriptions might lend themselves readily to high-level computing
languages (or high-level block diagrams for that matter) that share
some words, structures, phrases, etc. with natural language should
not be taken as an indication that such functional/operational
technical descriptions are abstract ideas, or mere expressions of
abstract ideas. In fact, as outlined herein, in the technological
arts this is simply not true. When viewed through the tools
available to those of skill in the art, such functional/operational
technical descriptions are seen as specifying hardware
configurations of almost unimaginable complexity.
As outlined above, the reason for the use of functional/operational
technical descriptions is at least twofold. First, the use of
functional/operational technical descriptions allows
near-infinitely complex machines and machine operations arising
from interchained hardware elements to be described in a manner
that the human mind can process (e.g., by mimicking natural
language and logical narrative flow). Second, the use of
functional/operational technical descriptions assists the person of
skill in the art in understanding the described subject matter by
providing a description that is more or less independent of any
specific vendor's piece(s) of hardware.
The use of functional/operational technical descriptions assists
the person of skill in the art in understanding the described
subject matter since, as is evident from the above discussion, one
could easily, although not quickly, transcribe the technical
descriptions set forth in this document as trillions of ones and
zeroes, billions of single lines of assembly-level machine code,
millions of logic gates, thousands of gate arrays, or any number of
intermediate levels of abstractions. However, if any such low-level
technical descriptions were to replace the present technical
description, a person of skill in the art could encounter undue
difficulty in implementing the disclosure, because such a low-level
technical description would likely add complexity without a
corresponding benefit (e.g., by describing the subject matter
utilizing the conventions of one or more vendor-specific pieces of
hardware). Thus, the use of functional/operational technical
descriptions assists those of skill in the art by separating the
technical descriptions from the conventions of any vendor-specific
piece of hardware.
In view of the foregoing, the logical operations/functions set
forth in the present technical description are representative of
static or sequenced specifications of various ordered-matter
elements, in order that such specifications may be comprehensible
to the human mind and adaptable to create many various hardware
configurations. The logical operations/functions disclosed herein
should be treated as such, and should not be disparagingly
characterized as abstract ideas merely because the specifications
they represent are presented in a manner that one of skill in the
art can readily understand and apply in a manner independent of a
specific vendor's hardware implementation.
Those having skill in the art will recognize that the state of the
art has progressed to the point where there is little distinction
left between hardware, software (e.g., a high-level computer
program serving as a hardware specification), and/or firmware
implementations of aspects of systems; the use of hardware,
software, and/or firmware is generally (but not always, in that in
certain contexts the choice between hardware and software can
become significant) a design choice representing cost vs.
efficiency tradeoffs. Those having skill in the art will appreciate
that there are various vehicles by which processes and/or systems
and/or other technologies described herein can be effected (e.g.,
hardware, software (e.g., a high-level computer program serving as
a hardware specification), and/or firmware), and that the preferred
vehicle will vary with the context in which the processes and/or
systems and/or other technologies are deployed. For example, if an
implementer determines that speed and accuracy are paramount, the
implementer may opt for a mainly hardware and/or firmware vehicle;
alternatively, if flexibility is paramount, the implementer may opt
for a mainly software (e.g., a high-level computer program serving
as a hardware specification) implementation; or, yet again
alternatively, the implementer may opt for some combination of
hardware, software (e.g., a high-level computer program serving as
a hardware specification), and/or firmware in one or more machines,
compositions of matter, and articles of manufacture, limited to
patentable subject matter under 35 U.S.C. 101. Hence, there are
several possible vehicles by which the processes and/or devices
and/or other technologies described herein may be effected, none of
which is inherently superior to the other in that any vehicle to be
utilized is a choice dependent upon the context in which the
vehicle will be deployed and the specific concerns (e.g., speed,
flexibility, or predictability) of the implementer, any of which
may vary. Those skilled in the art will recognize that optical
aspects of implementations will typically employ optically-oriented
hardware, software (e.g., a high-level computer program serving as
a hardware specification), and/or firmware.
In some implementations described herein, logic and similar
implementations may include computer programs or other control
structures. Electronic circuitry, for example, may have one or more
paths of electrical current constructed and arranged to implement
various functions as described herein. In some implementations, one
or more media may be configured to bear a device-detectable
implementation when such media hold or transmit device detectable
instructions operable to perform as described herein. In some
variants, for example, implementations may include an update or
modification of existing software (e.g., a high-level computer
program serving as a hardware specification) or firmware, or of
gate arrays or programmable hardware, such as by performing a
reception of or a transmission of one or more instructions in
relation to one or more operations described herein. Alternatively
or additionally, in some variants, an implementation may include
special-purpose hardware, software (e.g., a high-level computer
program serving as a hardware specification), firmware components,
and/or general-purpose components executing or otherwise invoking
special-purpose components. Specifications or other implementations
may be transmitted by one or more instances of tangible
transmission media as described herein, optionally by packet
transmission or otherwise by passing through distributed media at
various times.
Alternatively or additionally, implementations may include
executing a special-purpose instruction sequence or invoking
circuitry for enabling, triggering, coordinating, requesting, or
otherwise causing one or more occurrences of virtually any
functional operation described herein. In some variants,
operational or other logical descriptions herein may be expressed
as source code and compiled or otherwise invoked as an executable
instruction sequence. In some contexts, for example,
implementations may be provided, in whole or in part, by source
code, such as C++, or other code sequences. In other
implementations, source or other code implementation, using
commercially available tools and/or techniques in the art, may be
compiled/implemented/translated/converted into a high-level
descriptor language (e.g., initially implementing described
technologies in C or C++ programming language and thereafter
converting the programming language implementation into a
logic-synthesizable language implementation, a hardware description
language implementation, a hardware design simulation
implementation, and/or other such similar mode(s) of expression).
For example, some or all of a logical expression (e.g., computer
programming language implementation) may be manifested as a
Verilog-type hardware description (e.g., via Hardware Description
Language (HDL) and/or Very High Speed Integrated Circuit Hardware
Description Language (VHDL)) or other circuitry model which may then
be used to create a physical implementation having hardware (e.g.,
an Application Specific Integrated Circuit). Those skilled in the
art will recognize how to obtain, configure, and optimize suitable
transmission or computational elements, material supplies,
actuators, or other structures in light of these teachings.
The term module, as used in the foregoing/following disclosure, may
refer to a collection of one or more components that are arranged
in a particular manner, or a collection of one or more
general-purpose components that may be configured to operate in a
particular manner at one or more particular points in time, and/or
also configured to operate in one or more further manners at one or
more further times. For example, the same hardware, or same
portions of hardware, may be configured/reconfigured in
sequential/parallel time(s) as a first type of module (e.g., at a
first time), as a second type of module (e.g., at a second time,
which may in some instances coincide with, overlap, or follow a
first time), and/or as a third type of module (e.g., at a third
time which may, in some instances, coincide with, overlap, or
follow a first time and/or a second time), etc. Reconfigurable
and/or controllable components (e.g., general purpose processors,
digital signal processors, field programmable gate arrays, etc.)
are capable of being configured as a first module that has a first
purpose, then a second module that has a second purpose and then, a
third module that has a third purpose, and so on. The transition of
a reconfigurable and/or controllable component may occur in as
little as a few nanoseconds, or may occur over a period of minutes,
hours, or days.
In some such examples, at the time the component is configured to
carry out the second purpose, the component may no longer be
capable of carrying out that first purpose until it is
reconfigured. A component may switch between configurations as
different modules in as little as a few nanoseconds. A component
may reconfigure on-the-fly, e.g., the reconfiguration of a
component from a first module into a second module may occur just
as the second module is needed. A component may reconfigure in
stages, e.g., portions of a first module that are no longer needed
may reconfigure into the second module even before the first module
has finished its operation. Such reconfigurations may occur
automatically, or may occur through prompting by an external
source, whether that source is another component, an instruction, a
signal, a condition, an external stimulus, or similar.
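By way of a non-limiting sketch (the class and the module behaviors
below are hypothetical and chosen only to mirror the description
above), the time-sequenced reconfiguration of a single component may
be illustrated in Python as follows:

    class ReconfigurableComponent:
        """One underlying component; its current configuration
        determines which module it embodies at a given moment."""

        def __init__(self):
            self.current_module = None

        def configure_as(self, module_name, behavior):
            # Reconfiguring displaces the prior configuration: until
            # reconfigured again, the component no longer carries out
            # its first purpose.
            self.current_module = module_name
            self._behavior = behavior
            return self

        def run(self, *args):
            return self._behavior(*args)

    component = ReconfigurableComponent()

    component.configure_as("graphics_module", lambda: "draw frame")
    print(component.current_module, "->", component.run())

    # Later (nanoseconds to days afterward), the same hardware serves
    # as a different module entirely.
    component.configure_as("storage_module",
                           lambda data: f"wrote {data!r}")
    print(component.current_module, "->", component.run("image.bin"))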
For example, a central processing unit of a personal computer may,
at various times, operate as a module for displaying graphics on a
screen, a module for writing data to a storage medium, a module for
receiving user input, and a module for multiplying two large prime
numbers, by configuring its logic gates in accordance with its
instructions. Such reconfiguration may be invisible to the naked
eye, and in some embodiments may include activation, deactivation,
and/or re-routing of various portions of the component, e.g.,
switches, logic gates, inputs, and/or outputs. Thus, in the
examples found in the foregoing/following disclosure, if an example
includes or recites multiple modules, the example includes the
possibility that the same hardware may implement more than one of
the recited modules, either contemporaneously or at discrete times
or timings. The implementation of multiple modules, whether using
more components, fewer components, or the same number of components
as the number of modules, is merely an implementation choice and
does not generally affect the operation of the modules themselves.
Accordingly, it should be understood that any recitation of
multiple discrete modules in this disclosure includes
implementations of those modules as any number of underlying
components, including, but not limited to, a single component that
reconfigures itself over time to carry out the functions of
multiple modules, and/or multiple components that similarly
reconfigure, and/or special purpose reconfigurable components.
Those skilled in the art will recognize that it is common within
the art to implement devices and/or processes and/or systems, and
thereafter use engineering and/or other practices to integrate such
implemented devices and/or processes and/or systems into more
comprehensive devices and/or processes and/or systems. That is, at
least a portion of the devices and/or processes and/or systems
described herein can be integrated into other devices and/or
processes and/or systems via a reasonable amount of
experimentation. Those having skill in the art will recognize that
examples of such other devices and/or processes and/or systems
might include--as appropriate to context and application--all or
part of devices and/or processes and/or systems of (a) an air
conveyance (e.g., an airplane, rocket, helicopter, etc.), (b) a
ground conveyance (e.g., a car, truck, locomotive, tank, armored
personnel carrier, etc.), (c) a building (e.g., a home, warehouse,
office, etc.), (d) an appliance (e.g., a refrigerator, a washing
machine, a dryer, etc.), (e) a communications system (e.g., a
networked system, a telephone system, a Voice over IP system,
etc.), (f) a business entity (e.g., an Internet Service Provider
(ISP) entity such as Comcast Cable, Qwest, Southwestern Bell,
etc.), or (g) a wired/wireless services entity (e.g., Sprint,
Cingular, Nextel, etc.), etc.
In certain cases, use of a system or method may occur in a
territory even if components are located outside the territory. For
example, in a distributed computing context, use of a distributed
computing system may occur in a territory even though parts of the
system may be located outside of the territory (e.g., relay,
server, processor, signal-bearing medium, transmitting computer,
receiving computer, etc. located outside the territory).
A sale of a system or method may likewise occur in a territory even
if components of the system or method are located and/or used
outside the territory. Further, implementation of at least part of
a system for performing a method in one territory does not preclude
use of the system in another territory.
In a general sense, those skilled in the art will recognize that
the various embodiments described herein can be implemented,
individually and/or collectively, by various types of
electro-mechanical systems having a wide range of electrical
components such as hardware, software, firmware, and/or virtually
any combination thereof, limited to patentable subject matter under
35 U.S.C. 101; and a wide range of components that may impart
mechanical force or motion such as rigid bodies, spring or
torsional bodies, hydraulics, electro-magnetically actuated
devices, and/or virtually any combination thereof. Consequently, as
used herein "electro-mechanical system" includes, but is not
limited to, electrical circuitry operably coupled with a transducer
(e.g., an actuator, a motor, a piezoelectric crystal, a Micro
Electro Mechanical System (MEMS), etc.), electrical circuitry
having at least one discrete electrical circuit, electrical
circuitry having at least one integrated circuit, electrical
circuitry having at least one application specific integrated
circuit, electrical circuitry forming a general purpose computing
device configured by a computer program (e.g., a general purpose
computer configured by a computer program which at least partially
carries out processes and/or devices described herein, or a
microprocessor configured by a computer program which at least
partially carries out processes and/or devices described herein),
electrical circuitry forming a memory device (e.g., forms of memory
(e.g., random access, flash, read only, etc.)), electrical
circuitry forming a communications device (e.g., a modem,
communications switch, optical-electrical equipment, etc.), and/or
any non-electrical analog thereto, such as optical or other analogs
(e.g., graphene based circuitry). Those skilled in the art will
also appreciate that examples of electro-mechanical systems include
but are not limited to a variety of consumer electronics systems,
medical devices, as well as other systems such as motorized
transport systems, factory automation systems, security systems,
and/or communication/computing systems. Those skilled in the art
will recognize that electro-mechanical as used herein is not
necessarily limited to a system that has both electrical and
mechanical actuation except as context may dictate otherwise.
In a general sense, those skilled in the art will recognize that
the various aspects described herein which can be implemented,
individually and/or collectively, by a wide range of hardware,
software, firmware, and/or any combination thereof can be viewed as
being composed of various types of "electrical circuitry."
Consequently, as used herein "electrical circuitry" includes, but
is not limited to, electrical circuitry having at least one
discrete electrical circuit, electrical circuitry having at least
one integrated circuit, electrical circuitry having at least one
application specific integrated circuit, electrical circuitry
forming a general purpose computing device configured by a computer
program (e.g., a general purpose computer configured by a computer
program which at least partially carries out processes and/or
devices described herein, or a microprocessor configured by a
computer program which at least partially carries out processes
and/or devices described herein), electrical circuitry forming a
memory device (e.g., forms of memory (e.g., random access, flash,
read only, etc.)), and/or electrical circuitry forming a
communications device (e.g., a modem, communications switch,
optical-electrical equipment, etc.). Those having skill in the art
will recognize that the subject matter described herein may be
implemented in an analog or digital fashion or some combination
thereof.
Those skilled in the art will recognize that at least a portion of
the devices and/or processes described herein can be integrated
into an image processing system. Those having skill in the art will
recognize that a typical image processing system generally includes
one or more of a system unit housing, a video display device,
memory such as volatile or non-volatile memory, processors such as
microprocessors or digital signal processors, computational
entities such as operating systems, drivers, applications programs,
one or more interaction devices (e.g., a touch pad, a touch screen,
an antenna, etc.), control systems including feedback loops and
control motors (e.g., feedback for sensing lens position and/or
velocity; control motors for moving/distorting lenses to give
desired focuses). An image processing system may be implemented
utilizing suitable commercially available components, such as those
typically found in digital still systems and/or digital motion
systems.
Those skilled in the art will recognize that at least a portion of
the devices and/or processes described herein can be integrated
into a data processing system. Those having skill in the art will
recognize that a data processing system generally includes one or
more of a system unit housing, a video display device, memory such
as volatile or non-volatile memory, processors such as
microprocessors or digital signal processors, computational
entities such as operating systems, drivers, graphical user
interfaces, and applications programs, one or more interaction
devices (e.g., a touch pad, a touch screen, an antenna, etc.),
and/or control systems including feedback loops and control motors
(e.g., feedback for sensing position and/or velocity; control
motors for moving and/or adjusting components and/or quantities). A
data processing system may be implemented utilizing suitable
commercially available components, such as those typically found in
data computing/communication and/or network computing/communication
systems.
Those skilled in the art will recognize that at least a portion of
the devices and/or processes described herein can be integrated
into a mote system. Those having skill in the art will recognize
that a typical mote system generally includes one or more memories
such as volatile or non-volatile memories, processors such as
microprocessors or digital signal processors, computational
entities such as operating systems, user interfaces, drivers,
sensors, actuators, applications programs, one or more interaction
devices (e.g., an antenna, USB ports, acoustic ports, etc.), control
systems including feedback loops and control motors (e.g., feedback
for sensing or estimating position and/or velocity; control motors
for moving and/or adjusting components and/or quantities). A mote
system may be implemented utilizing suitable components, such as
those found in mote computing/communication systems. Specific
examples of such components include Intel Corporation's and/or
Crossbow Corporation's mote components and supporting hardware,
software, and/or firmware.
For the purposes of this application, "cloud" computing may be
understood as described in the cloud computing literature. For
example, cloud computing may be methods and/or systems for the
delivery of computational capacity and/or storage capacity as a
service. The "cloud" may refer to one or more hardware and/or
software components that deliver or assist in the delivery of
computational and/or storage capacity, including, but not limited
to, one or more of a client, an application, a platform, an
infrastructure, and/or a server. The cloud may refer to any of the
hardware and/or software associated with a client, an application,
a platform, an infrastructure, and/or a server. For example, cloud
and cloud computing may refer to one or more of a computer, a
processor, a storage medium, a router, a switch, a modem, a virtual
machine (e.g., a virtual server), a data center, an operating
system, a middleware, a firmware, a hardware back-end, a software
back-end, and/or a software application. A cloud may refer to a
private cloud, a public cloud, a hybrid cloud, and/or a community
cloud. A cloud may be a shared pool of configurable computing
resources, which may be public, private, semi-private,
distributable, scalable, flexible, temporary, virtual, and/or
physical. A cloud or cloud service may be delivered over one or
more types of network, e.g., a mobile communication network, and
the Internet.
As used in this application, a cloud or a cloud service may include
one or more of infrastructure-as-a-service ("IaaS"),
platform-as-a-service ("PaaS"), software-as-a-service ("SaaS"),
and/or desktop-as-a-service ("DaaS"). As a non-exclusive example,
IaaS may include, e.g., one or more virtual server instantiations
that may start, stop, access, and/or configure virtual servers
and/or storage centers (e.g., providing one or more processors,
storage space, and/or network resources on-demand, e.g., EMC and
Rackspace). PaaS may include, e.g., one or more software and/or
development tools hosted on an infrastructure (e.g., a computing
platform and/or a solution stack from which the client can create
software interfaces and applications, e.g., Microsoft Azure). SaaS
may include, e.g., software hosted by a service provider and
accessible over a network (e.g., the software for the application
and/or the data associated with that software application may be
kept on the network, e.g., Google Apps, SalesForce). DaaS may
include, e.g., providing desktop, applications, data, and/or
services for the user over a network (e.g., providing a
multi-application framework, the applications in the framework, the
data associated with the applications, and/or services related to
the applications and/or the data over the network, e.g., Citrix).
The foregoing is intended to be exemplary of the types of systems
and/or methods referred to in this application as "cloud" or "cloud
computing" and should not be considered complete or exhaustive.
One skilled in the art will recognize that the herein described
components (e.g., operations), devices, objects, and the discussion
accompanying them are used as examples for the sake of conceptual
clarity and that various configuration modifications are
contemplated. Consequently, as used herein, the specific exemplars
set forth and the accompanying discussion are intended to be
representative of their more general classes. In general, use of
any specific exemplar is intended to be representative of its
class, and the non-inclusion of specific components (e.g.,
operations), devices, and objects should not be taken as limiting.
The herein described subject matter sometimes illustrates different
components contained within, or connected with, different other
components. It is to be understood that such depicted architectures
are merely exemplary, and that in fact many other architectures may
be implemented which achieve the same functionality. In a
conceptual sense, any arrangement of components to achieve the same
functionality is effectively "associated" such that the desired
functionality is achieved. Hence, any two components herein
combined to achieve a particular functionality can be seen as
"associated with" each other such that the desired functionality is
achieved, irrespective of architectures or intermedial components.
Likewise, any two components so associated can also be viewed as
being "operably connected", or "operably coupled," to each other to
achieve the desired functionality, and any two components capable
of being so associated can also be viewed as being "operably
couplable," to each other to achieve the desired functionality.
Specific examples of operably couplable include but are not limited
to physically mateable and/or physically interacting components,
and/or wirelessly interactable, and/or wirelessly interacting
components, and/or logically interacting, and/or logically
interactable components.
To the extent that formal outline headings are present in this
application, it is to be understood that the outline headings are
for presentation purposes, and that different types of subject
matter may be discussed throughout the application (e.g.,
device(s)/structure(s) may be described under
process(es)/operations heading(s) and/or process(es)/operations may
be discussed under structure(s)/process(es) headings; and/or
descriptions of single topics may span two or more topic headings).
Hence, any use of formal outline headings in this application is
for presentation purposes, and is not intended to be in any way
limiting.
Throughout this application, examples and lists are given, with
parentheses, the abbreviation "e.g.," or both. Unless explicitly
otherwise stated, these examples and lists are merely exemplary and
are non-exhaustive. In most cases, it would be prohibitive to list
every example and every combination. Thus, smaller, illustrative
lists and examples are used, with focus on imparting understanding
of the claim terms rather than limiting the scope of such
terms.
With respect to the use of substantially any plural and/or singular
terms herein, those having skill in the art can translate from the
plural to the singular and/or from the singular to the plural as is
appropriate to the context and/or application. The various
singular/plural permutations are not expressly set forth herein for
the sake of clarity.
Although one or more users may be shown and/or described herein,
e.g., in FIG. 1, and other places, as a single illustrated figure,
those skilled in the art will appreciate that one or more users may
be representative of one or more human users, robotic users (e.g.,
computational entity), and/or substantially any combination thereof
(e.g., a user may be assisted by one or more robotic agents) unless
context dictates otherwise. Those skilled in the art will
appreciate that, in general, the same may be said of "sender"
and/or other entity-oriented terms as such terms are used herein
unless context dictates otherwise.
In some instances, one or more components may be referred to herein
as "configured to," "configured by," "configurable to,"
"operable/operative to," "adapted/adaptable," "able to,"
"conformable/conformed to," etc. Those skilled in the art will
recognize that such terms (e.g. "configured to") generally
encompass active-state components and/or inactive-state components
and/or standby-state components, unless context requires
otherwise.
It is noted that "wearable computer" is used throughout this
specification, and in the examples given, it is generally a
wearable computer that captures images. However, this is merely for
exemplary purposes. The same systems may apply to conventional
digital cameras, and any other camera, including security cameras,
surveillance cameras, motor vehicle mounted cameras, road/traffic
cameras, cameras at automated teller machines, and the like.
Referring now to FIG. 1, in an embodiment, an entity, e.g., a user
of a privacy beacon, e.g., user 2105, e.g., a person, e.g., "Jules
Caesar," may be associated with a "Don't Capture Me" (hereinafter
"DCM") privacy beacon, e.g., DCM Beacon 2110. In an embodiment, a
DCM beacon may be active, e.g., may contain circuitry and be an
active unit, e.g., something wearable, e.g., on a piece of
clothing, or on a ring, or on a drone associated with the user. In
an embodiment, the DCM beacon may be passive, e.g., it may be
something that can be detected in the electromagnetic spectrum, or
can be otherwise detected but does not contain any circuitry or
advanced logic gates of its own. In an embodiment, the DCM beacon
may be a combination of the two.
In an embodiment, a DCM beacon may be detectable by a machine or a
human being (e.g., a stop sign painted on a user's forehead may be
a DCM beacon). In an embodiment, a DCM beacon may be detectable by
a particular type of machine, structure, or filter, and may be
otherwise undetectable or difficult to detect through human senses.
For example, in an embodiment, a DCM beacon may be seen using
ultraviolet or infrared light, or a DCM beacon may emit light
outside the visible spectrum. In an embodiment, a DCM beacon may be
visible or detectable after a filter is applied, e.g., a DCM beacon
may be visible after a red filter is applied, or after a
transformation is applied to a captured image, e.g., a Fourier
transformation.
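As a minimal sketch of the filter-based detection just described
(assuming the NumPy library; the channel test, threshold values, and
pixel count below are illustrative, not prescribed by this
disclosure):

    import numpy as np

    def beacon_visible_after_red_filter(image_rgb, threshold=200,
                                        min_pixels=50):
        """image_rgb: H x W x 3 uint8 array. Returns True if enough
        strongly red pixels remain after the "red filter" test, which
        here stands in for a filter-detectable DCM beacon."""
        red = image_rgb[..., 0]
        green = image_rgb[..., 1]
        blue = image_rgb[..., 2]
        beacon_mask = (red > threshold) & (green < 80) & (blue < 80)
        return int(beacon_mask.sum()) >= min_pixels

    # Synthetic frame with a small bright-red patch standing in for a
    # beacon.
    frame = np.zeros((100, 100, 3), dtype=np.uint8)
    frame[40:50, 40:50] = (255, 10, 10)
    print(beacon_visible_after_red_filter(frame))  # True

A Fourier-transform-revealed beacon would follow the same shape:
apply the transformation, then test for the beacon's signature in the
transformed data.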
In an embodiment, a DCM beacon may be detected optically. In
another embodiment, a DCM beacon may be detected by sensing a
different kind of wave emitted by a DCM beacon, e.g., a wave in the
nonvisible electromagnetic spectrum, a sound wave, an
electromagnetic wave, and the like. In an embodiment, a DCM beacon
may use quantum entanglement (e.g., through use of an
entanglement-based protocol, among others).
In an embodiment, a DCM beacon may transmit data, e.g., a terms of
service for the user (e.g., user 2105) with which the DCM beacon
(e.g., DCM beacon 2110) is associated or linked. In an embodiment, a
DCM beacon may be encoded with a location of data, e.g., a web
address of a server storing the terms of service for the user (e.g.,
user 2105) with which the DCM beacon (e.g., DCM beacon 2110) is
associated.
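The two payload styles just described may be sketched as follows (the
JSON field names "terms_of_service" and "tos_url" are assumptions
made for illustration; the disclosure does not prescribe an
encoding):

    import json

    def interpret_beacon_payload(payload_bytes):
        payload = json.loads(payload_bytes)
        if "terms_of_service" in payload:
            # Beacon carries the terms of service directly.
            return ("inline", payload["terms_of_service"])
        if "tos_url" in payload:
            # Beacon carries only the location; the caller would fetch
            # the terms of service from this address.
            return ("by-reference", payload["tos_url"])
        return ("unknown", None)

    inline_beacon = (b'{"user": "Jules Caesar", '
                     b'"terms_of_service": "No capture without license."}')
    referenced_beacon = (b'{"user": "Jules Caesar", '
                         b'"tos_url": "https://example.com/tos/2110"}')

    print(interpret_beacon_payload(inline_beacon))
    print(interpret_beacon_payload(referenced_beacon))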
In an embodiment, a DCM beacon may be provided by a drone, of any
size, e.g., nanometers to full-sized aircraft, that is associated
with the user.
In an embodiment, a DCM beacon may be provided by a piece of
electronics that a user carries, e.g., a cellular telephone,
tablet, watch, wearable computer, or otherwise.
In an embodiment, a DCM beacon may be embedded in the user,
ingested by the user, implanted in the user, taped to the skin of
the user, or may be engineered to grow organically in the user's
body.
In an embodiment, a DCM beacon may be controlled by a magnetic
field or other field emitted by a user, either through a user's
regular electromagnetic field or through a field generated by a
device, local or remote, associated with the user.
Referring again to FIG. 1, in an embodiment, a different user,
e.g., a wearable computer user 3105, may have a wearable computer
3100. A wearable computer may be a pair of eyeglasses, a watch,
jewelry, clothing, shoes, or a piece of tape placed on the user's
skin; it may be ingested by the user or otherwise embedded into the
user's body. Wearable computer 3100 may be a piece of electronics
carried by a user 3105. Wearable computer 3100 may not be a
"wearable" computer in a traditional sense, but may be a laptop
computer, tablet device, or smartphone carried by a user. In an
embodiment, wearable computer 3100 may not be associated with a
user at all, but may simply be a part of a surveillance system,
e.g., a security camera, or a camera at an Automated Teller Machine
("ATM").
Wearable Computer That Captures the Image (FIGS. 1-I, 1-J, 1-N,
1-O).
Referring now to FIG. 1, e.g., FIG. 1-J, wearable computer 3100 may
include a wearable computer image capturing device 3110, e.g., a
lens. Wearable computer image capturing device 3110 may include
functionality to capture images, e.g., an image sensor, e.g., a
charge-coupled device ("CCD") or a complementary metal-oxide
semiconductor ("CMOS"), an analog-to-digital converter, and/or any
other equipment used to convert light into electrons. Wearable
computer image capturing device 3110 may capture the optical data,
which may remain as light data, or may be converted into electrons
through an image sensor, as raw data. This raw data, e.g., raw data
2200 may be captured by the optical image data acquiring module
3120 of wearable computer 3100. Optical image data acquiring module
3120 may be configured to acquire an image, e.g., an image of user
2105. As described above, a DCM beacon 2110 may be associated with
user 2105. In an embodiment, at this point in the operation of
wearable computer 3100, no processing has been performed on the raw
image data 2200.
Although not pictured here, wearable computer image capturing
device 3110 may also include circuitry to detect audio (e.g., a
microphone) and/or video (e.g., the ability to capture frames above
a certain rate of frames per second). This circuitry and its
related explanation have been omitted to maintain simplicity of the
drawing; however, throughout this application, "raw image data 2200"
should be considered to also possibly include still pictures,
video, and audio, in some embodiments.
Referring now to FIG. 1-I, in an embodiment, wearable computer 3100
then may transfer the raw/optical image data 2200 to an image path
splitting module 3130. This splitting path may be optical, e.g., a
set of mirrors/lenses, for the case in which raw image data 2200 is
still in optical form, or digital, e.g., through use of known
electrical signal splitters. Image path splitting module 3130 may
be implemented as hardware, software, or a combination thereof.
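In its digital form, the splitting may be sketched as follows (the
branch functions below are stand-ins, not the encryption or detection
logic of modules 3150 and 3140):

    def split_image_path(raw_image_data, north_branch, south_branch):
        """Deliver identical copies of the raw data to both branches,
        mirroring the optical/electrical splitter described above."""
        encrypted = north_branch(bytes(raw_image_data))
        beacon_found = south_branch(bytes(raw_image_data))
        return encrypted, beacon_found

    encrypted, beacon_found = split_image_path(
        b"raw-image-2200+beacon-2110",
        north_branch=lambda data: data[::-1],         # stand-in: encryption
        south_branch=lambda data: b"beacon" in data,  # stand-in: detection
    )
    print(beacon_found)  # True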
Referring again to FIG. 1, e.g., FIG. 1-I, in an embodiment, the
north (upper) branch, as illustrated in FIG. 1, transmits the raw
image data 2200 to an image prior-to-processing encryption module
3150. Image prior-to-processing encryption module 3150 may receive
the raw image data 2200. From there, image prior-to-processing
encryption module 3150 may acquire an encryption key that is
device-specific, e.g., wearable computer device specific encryption
key 3182. In an embodiment, wearable computer device-specific
encryption key 3182 may be stored in wearable computer device
memory 3180, which also may include encrypted image storage 3184,
and a wearable computer user-specific encryption key 3186. In
another embodiment, device-specific encryption key 3182 may be
retrieved from elsewhere, e.g., cloud storage. In another
embodiment, device-specific encryption key 3182 may be generated in
real time by the device. In another embodiment, device-specific
encryption key 3182 may be generated in real time by the device
based on random user input (e.g., the last five words spoken by the
user and recorded by the device).
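A hedged sketch of the prior-to-processing encryption follows,
assuming the third-party Python "cryptography" package (the key
handling below is illustrative only; as described above, key 3182 may
instead come from device memory, cloud storage, or real-time
generation):

    from cryptography.fernet import Fernet

    # Stand-in for wearable computer device-specific encryption
    # key 3182.
    device_specific_key = Fernet.generate_key()

    def encrypt_prior_to_processing(raw_image_data: bytes) -> bytes:
        # Raw image data is encrypted before any other processing or
        # application can touch it.
        return Fernet(device_specific_key).encrypt(raw_image_data)

    encrypted_image_data = encrypt_prior_to_processing(
        b"raw image data 2200")
    print(encrypted_image_data[:16])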
In an embodiment, image prior-to-processing encryption module 3150
may generate encrypted image data 2210. Encrypted image data 2210
may be stored in encrypted image storage 3184 of wearable computer
device memory 3180. In an embodiment, encrypted image data 2210
also may be transmitted to central server encrypted data and beacon
metadata transmission module 3170.
Referring again to FIG. 1-I and FIG. 1-N, in an embodiment, the
south (lower) branch, as illustrated in FIG. 1, may transmit the
raw image data 2200 to a DCM beacon detecting module 3140. In an
embodiment, DCM beacon detecting module 3140 may include one or
more of optics-based DCM beacon detecting module 3142, which may be
configured to detect the DCM beacon in an optical signal (e.g.,
light). In an embodiment, DCM beacon detecting module 3140 may
include digital image processing-based DCM beacon detecting module
3144, which may be configured to detect the DCM beacon in a
converted electron signal (e.g., data signal). In an embodiment,
DCM beacon detecting module 3140 is configured to detect a presence
or an absence of a DCM beacon, e.g., DCM beacon 2110, associated
with the entity (e.g., user 2105, e.g., "Jules Caesar"), without
performing any additional processing on the image, or releasing the
image for other portions of wearable computer 3100 to use. In an
embodiment, for example, raw image data 2200 is not stored in
device memory of wearable computer 3100 in a form that is
accessible to other applications and/or programs available to
wearable computer 3100 or other computing devices that may
communicate with wearable computer 3100. For example, a user 3105
of wearable computer 3100 may not, at this stage in processing,
capture the raw data 2200 and upload it to a social networking
site, e.g., Facebook. In an embodiment, DCM beacon detecting module
3140 may be implemented in hardware, which may prevent users or
third parties from bypassing the DCM beacon detecting module 3140,
without disassembling the device and physically altering the
circuit/logic.
Referring now to FIG. 1-N, in an embodiment, the DCM beacon
detecting module 3140 may detect the DCM beacon 2110. For example,
in the exemplary embodiment shown in FIG. 1, DCM beacon detecting
module 3140 may detect the DCM beacon 2110 that is associated with
user 2105, e.g., Jules Caesar. Thus, DCM beacon detecting module
3140 now knows to lock the image data and prevent unencrypted image
data from being accessed on the device. Although not shown in this
example, if the DCM beacon had not been found, then in an
embodiment, the image data 2200 would have been released for use by
the device, e.g., for uploading to social network or cloud storage,
for example.
In an embodiment, the detected DCM beacon 2110 associated with
Jules Caesar may be transmitted to DCM beacon metadata generating
module 3160. DCM beacon metadata generating module 3160 may
generate metadata based on the detection of the beacon. The
metadata may be as simple as "the image data contains a privacy
beacon," e.g., Boolean data. In an embodiment, the metadata may be
more complex, and may identify the user associated with the privacy
beacon, e.g., the metadata may describe "A privacy beacon
associated with Jules Caesar has been found in the image data." In
another embodiment, the metadata may include the terms of service
associated with the personality rights of Jules Caesar, an example
of which terms of service will be provided in more detail
herein.
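The three levels of metadata just described might be structured as
follows (the dictionary fields are assumptions for illustration; the
disclosure does not fix a format):

    def generate_beacon_metadata(beacon_found, user=None,
                                 terms_of_service=None):
        # Simplest form: Boolean presence data.
        metadata = {"privacy_beacon_present": beacon_found}
        if user is not None:
            # More complex form: identify the associated user.
            metadata["description"] = (
                f"A privacy beacon associated with {user} has been "
                "found in the image data."
            )
        if terms_of_service is not None:
            # Richest form: include the terms of service themselves.
            metadata["terms_of_service"] = terms_of_service
        return metadata

    print(generate_beacon_metadata(True))
    print(generate_beacon_metadata(True, user="Jules Caesar"))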
In an embodiment, the detected DCM beacon 2110 may be very simple
(e.g., optically detectable), and to obtain/generate metadata
associated with the detected DCM beacon 2110, DCM beacon metadata
generating module 3160 may include a DCM server contacting module
3162, which may contact one or more entities to obtain more
information regarding the DCM beacon 2110. The DCM beacon metadata
generating module 3160 may, in some embodiments, transmit the DCM
beacon, or the image in which the DCM beacon was captured, to the
external entity, in order to obtain more accurate data. For
example, the DCM server contacting module 3162 may contact service
term management server 5000, which may have DCM beacon registry
5010, which will be discussed in more detail further herein.
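A sketch of such a lookup follows (the registry endpoint and its
response format are hypothetical; the function is defined but not
invoked here, since the example URL does not resolve):

    import json
    import urllib.request

    def lookup_beacon_in_registry(
            beacon_id,
            registry_url="https://server-5000.example.com/registry"):
        # Ask the service term management server's beacon registry
        # for richer metadata about a minimally detectable beacon.
        request = urllib.request.Request(f"{registry_url}/{beacon_id}")
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())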
In an embodiment, DCM beacon metadata generating module 3160 may
generate the DCM beacon metadata 2230, and transfer DCM beacon
metadata 2230 to central server encrypted data and beacon metadata
transmission module 3170.
Referring again to FIG. 1, e.g., FIG. 1-I, central server encrypted
data and beacon metadata transmission module 3170 may receive the
encrypted image data 2210 and the DCM beacon metadata 2230 (e.g.,
see FIG. 1-N). In an embodiment, central server encrypted data and
beacon metadata transmission module 3170 may facilitate the
transmission of encrypted image data 2210 and DCM beacon metadata
2230 to a server, e.g., wearable computer encrypted data receipt
and determination server 4000, which will be discussed in more
detail herein. In an embodiment, central server encrypted data and
beacon metadata transmission module 3170 may include one or more of
DCM beacon metadata transmission module 3172, which may be
configured to transmit the DCM beacon metadata 2230, and encrypted
data transmission module 3174, which may be configured to transmit
the encrypted image data 2210.
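The transmission may be sketched as follows (the endpoint and field
names are assumptions; the function is defined but not invoked, since
the example URL does not resolve):

    import json
    import urllib.request

    def transmit_to_server(
            encrypted_image_data, beacon_metadata,
            server_url="https://server-4000.example.com/receive"):
        # Forward the encrypted image data together with the DCM
        # beacon metadata for the server-side determination.
        body = json.dumps({
            "encrypted_image_data": encrypted_image_data.hex(),
            "dcm_beacon_metadata": beacon_metadata,
        }).encode("utf-8")
        request = urllib.request.Request(
            server_url, data=body,
            headers={"Content-Type": "application/json"},
        )
        return urllib.request.urlopen(request)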
Wearable Computer Server (FIGS. 1-H, 1-G).
Referring again to FIG. 1, e.g., FIG. 1-H, in an embodiment, a
system may include a wearable computer server, e.g., wearable
computer encrypted data receipt and determination server 4000. In
an embodiment, a wearable computer server may be provided by a
manufacturer of the wearable device 3100. In an embodiment, a
wearable computer server may be provided by a developer of one or
more software applications for the wearable device 3100. In an
embodiment, wearable computer server 4000 may not have a direct
relationship with wearable device 3100 prior to receiving the
encrypted image data and the DCM beacon metadata 2230, as will be
discussed in more detail herein. In an embodiment, a wearable
computer server 4000 may be implemented at a home computer of a
user, for example, and may communicate only with wearable devices
that are associated with that user. In another embodiment, a
wearable computer server 4000 may communicate with many wearable
devices 3100, which may or may not have some relationship. In an
embodiment, wearable computer server 4000 may communicate with one
or more wearable devices 3100 through use of a communication
network, which may use any known form of device communication. In
an embodiment, wearable computer server 4000 may be chosen by
wearable device 3100, either due to proximity or due to one or more
properties or characteristics of wearable computer server 4000. In
an embodiment, wearable computer server 4000 may be free to agree
or disagree to process DCM beacon and image data received from
various wearable devices 3100. In an embodiment, wearable computer
server 4000 may be distributed across many computers and/or
servers.
In an embodiment, wearable computer encrypted data receipt and
determination server 4000 may include an encrypted data and beacon
metadata reception module 4100. Encrypted data and beacon metadata
reception module 4100 may receive encrypted image data 2210 and DCM
beacon metadata 2230 from wearable computer 3100, e.g., central
server encrypted data and beacon metadata transmission module 3170.
In an embodiment, encrypted data and beacon metadata reception
module 4100 may include a DCM beacon metadata reception module
4104. DCM beacon metadata reception module 4104 may be configured
to acquire a privacy metadata, e.g., DCM beacon metadata 2230,
corresponding to a detection of a DCM beacon, e.g., DCM beacon
2110, in the one or more images captured by the image capture
device, e.g., wearable computer 3100. In an embodiment, encrypted
data and beacon metadata reception module 4100 may include
encrypted data reception module 4102. In an embodiment, encrypted
data reception module 4102 may be configured to acquire one or more
blocks of encrypted data corresponding to one or more images
that previously have been encrypted, e.g., encrypted image data
2210. In an embodiment, encrypted data reception module 4102 may transmit, or
facilitate the transmission of, encrypted image data 2210 to an
entity that will perform a secondary detection of the privacy
beacon, e.g., DCM beacon detection test duplicating server 4800,
which will be discussed in more detail further herein.
Referring again to FIG. 1-H, in an embodiment, encrypted data and
beacon metadata reception module 4100 may transmit the received DCM
beacon metadata to DCM beacon metadata reading module 4120. If the
DCM beacon metadata 2230 indicates that a DCM beacon was not found,
then, in an embodiment, processing may transfer to module 4220,
which will be discussed in more detail further herein. In the
example shown in FIG. 1, the DCM beacon 2110 associated with Jules
Caesar was found, and the DCM beacon metadata 2230 indicates this
state to DCM beacon metadata reading module 4120.
Referring now to FIG. 1-G, in an embodiment, when the presence of
the DCM beacon 2110 is determined through the DCM beacon metadata,
e.g., DCM beacon metadata 2230, then a DCM beacon TOS retrieval
module 4122 may retrieve term data from a location, which may be a
remote location, e.g., a DCM beacon management server 5100, which
will be discussed in more detail further herein. In an embodiment,
DCM beacon TOS retrieval module 4122 may retrieve term data that
includes a terms of service that specifies one or more conditions
in which the image containing the DCM beacon 2110 may be used. In
an embodiment, the TOS may also specify one or more penalties for
using the personality rights that may be associated with the image,
without acquiring permission or paying a licensing fee prior to
releasing or utilizing the image. In an embodiment, the TOS also
may include language forcing the entity that viewed the privacy
beacon to accept the TOS upon viewing of the beacon. The TOS will
be described in more detail with respect to servers 5000 and
5100.
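By way of non-limiting illustration, the sketch below suggests how DCM beacon TOS retrieval module 4122 might retrieve term data from a remote location such as DCM beacon management server 5100. The endpoint path and the JSON fields are assumptions introduced for illustration, not a specified interface.

```python
# Hypothetical sketch of DCM beacon TOS retrieval module 4122: fetch term
# data for a detected beacon from a remote server (e.g., server 5100).
# The endpoint path and JSON schema are illustrative assumptions.
import json
import urllib.request


def retrieve_terms_of_service(beacon_id: str, management_server: str) -> dict:
    """Return term data (conditions of use, penalties) for a beacon."""
    url = f"{management_server}/beacons/{beacon_id}/terms-of-service"
    with urllib.request.urlopen(url) as response:
        term_data = json.load(response)
    # Assumed fields: conditions in which the image may be used, liquidated
    # damages, and whether viewing the beacon constitutes TOS acceptance.
    return {
        "conditions": term_data.get("conditions", []),
        "liquidated_damages": term_data.get("liquidated_damages", 0),
        "acceptance_on_viewing": term_data.get("acceptance_on_viewing", True),
    }
```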
Referring again to FIG. 1-G, in an embodiment, wearable computer
encrypted data receipt and determination server 4000 also may
include an encrypted data value calculation module 4130. Encrypted
data value calculation module 4130 may use one or more algorithms
or other methods of inducing or deducing an estimate regarding how
much advertising or other revenue may be garnered by using the
images containing the entity associated with the privacy beacon.
For example, in an embodiment, encrypted data value calculation
module 4130 may include a facial recognition program to recognize
the person or persons associated with the beacon. In another
embodiment, however, this may not be necessary, because the DCM
beacon metadata and/or the TOS may identify the person. In an
embodiment, encrypted data value calculation module 4130 may use
various heuristics to calculate ad revenue, e.g., based on models
used by popular advertising methods, or based on prior releases of
images of the person associated with the DCM beacon 2110. In an
embodiment, module 4130 may use social networking to acquire a
focus group and test the image on the focus group, in order to
assist in revenue determination. For example, in the example shown
in FIG. 1, the image in question is of Jules Caesar, who is the
reclusive leader of the Roman Empire, and so the ad revenue
generated from having an actual picture of Jules Caesar, or a video
of Jules Caesar drinking a mead-and-tonic, may have high net
value.
Referring again to FIG. 1-G, in an embodiment, the TOS acquired
from DCM beacon TOS retrieval module 4122, and the encrypted data
valuation calculated from encrypted data value calculation module
4130 may be sent to release of encrypted data determination module
4140. Release of encrypted data determination module 4140 may make
a determination, at least partly based on the acquired metadata,
and at least partly based on a value calculation based on the
representation of the feature of the person associated with the DCM
beacon 2110 (e.g., Jules Caesar drinking a mead-and-tonic). That
determination may be regarding whether to allow an action, e.g.,
processing, decryption, distribution, editing, releasing, sharing,
saving, posting to a social network, and the like, of the image. In
an embodiment, the decision may be based on whether the potential
advertising revenue outweighs the potential damages retrieved from
the terms of service. In an embodiment, this calculation may be a
strict number comparison (e.g., is "revenue" greater than
"damages"). In an embodiment, the calculation may include more
complex factors, e.g., likelihood of success on a damages claim,
likelihood that revenues will increase, secondary revenue factors
from increased traffic and/or brand awareness, and the like. In
addition, in an embodiment, the comparison may not be a strict
less-than/greater-than test, e.g., in a risk-averse algorithm, if the
numbers are close, then the determination may be to not release the
encrypted data, even if the potential ad revenue is calculated as
larger than the potential damages by a small amount.
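By way of non-limiting illustration, the following sketch suggests one way release of encrypted data determination module 4140 might implement the comparison just described. The revenue and damages estimates would come from modules 4130 and 4122; the risk margin and the likelihood weighting are assumptions introduced here for illustration.

```python
# Hypothetical sketch of release of encrypted data determination module 4140.
# The risk margin and likelihood weighting are illustrative assumptions.
def determine_release(estimated_revenue: float,
                      potential_damages: float,
                      damages_claim_likelihood: float = 1.0,
                      risk_margin: float = 0.10) -> bool:
    """Return True ("YES") to release the image, False ("NO") to quarantine."""
    # Weight the damages by the likelihood of success on a damages claim.
    expected_damages = potential_damages * damages_claim_likelihood
    # Risk-averse comparison: a narrow win for revenue is still a "NO".
    return estimated_revenue > expected_damages * (1.0 + risk_margin)


# Example: $40,000 in projected ad revenue against the $1,000,000
# liquidated-damages clause from Jules Caesar's TOS yields "NO".
release = determine_release(40_000, 1_000_000, damages_claim_likelihood=0.2)
```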
Referring again to FIG. 1-G, if the determination made by release
of encrypted data determination module 4140 is "NO," e.g., the
potential revenue is less than the potential damages, then the
encrypted data 2210 is moved to an encrypted data holding and/or
quarantine module 4150. In an embodiment, the data from encrypted
data holding and/or quarantine module 4150 is deleted after a
predetermined time period, e.g., seven days. In an embodiment, the
data is simply stored, encrypted and locked away. In an embodiment,
the encrypted image data 2210 may be transmitted to an ad
replacement value determination server 4400, shown in FIG. 1-F,
which will be discussed in more detail herein.
Referring again to FIG. 1-G, if the determination made by release
of encrypted data determination module 4140 is "YES," e.g., the
potential revenue is more than the potential damages, then the
encrypted data 2210 is transferred to encrypted data decryption
enabling module 4152, shown in FIG. 1-H. In an embodiment,
encrypted data decryption enabling module 4152 may be configured to
determine whether to perform decryption of at least a portion of
the encrypted data 2210 based on the result from module 4140 by
transmitting the encrypted image data 2210 to wearable computer
acquired encrypted data decryption and re-encryption server 4200,
which will be discussed in more detail.
Wearable Computer Acquired Encrypted Data Decryption and
Re-Encryption Server 4200 (FIGS. 1-L and 1-M)
Referring now to FIG. 1-M, in an embodiment, the system may include
wearable computer acquired encrypted data decryption and
re-encryption server 4200. In an embodiment, wearable computer
acquired encrypted data decryption and re-encryption server 4200
may be a portion of wearable computer server 4000. In an
embodiment, however, wearable computer acquired encrypted data
decryption and re-encryption server 4200 may be a different server
than wearable computer server 4000, and may be controlled by a
different entity. For example, in an embodiment, the owner of the
wearable computer 3100 hardware may control wearable computer
server 4000. After the decision is made to decrypt the data at the
wearable computer server 4000, control may be handed off to a
different server in control of software on the wearable computer,
e.g., software that handles pictures taken by the wearable computer
3100. In another embodiment, wearable computer acquired encrypted
data decryption and re-encryption server 4200 may be controlled by
a social networking/media site, e.g., Facebook, which may have an
agreement to acquire the image data at the same time as the
device.
Referring again to FIG. 1-M, in an embodiment, wearable computer
acquired encrypted data decryption and re-encryption server 4200
may include encrypted data acquiring module 4210, which may acquire
the encrypted image data 2210 from the wearable computer server
4000. In an embodiment, wearable computer acquired encrypted data
decryption and re-encryption server 4200 may include a privacy
metadata acquiring module 4220, which may acquire privacy metadata
from module 4120, if the DCM beacon was never detected and the
image is free to be used. For example, in an embodiment, image data
with no DCM beacon may be treated similarly to image data with a
DCM beacon, but that has been determined to have an advertising
value greater than a potential damages value. For example, in an
embodiment, image data with no DCM beacon may be treated as image
data with potential damages value of zero.
Referring again to FIG. 1-M, in an embodiment, wearable computer
acquired encrypted data decryption and re-encryption server 4200
may include data indicating profitability of image with DCM beacon
acquiring module 4230, which may receive data from module 4140 of
wearable computer server 4000 indicating that the image should be
decrypted regardless of the DCM beacon because of its potential
profitability.
Referring again to FIG. 1-M, in an embodiment, wearable computer
acquired encrypted data decryption and re-encryption server 4200
may include image data decryption preparation module 4240, which
may receive data from one or more of data indicating profitability
of image with DCM beacon acquiring module 4230, encrypted data
acquiring module 4210, and privacy metadata acquiring module 4220.
In an embodiment, module 4240 may prepare the image or images for
decryption, e.g., perform pre-processing, check image integrity,
reconfirm the privacy beacon calculations, and the like.
Referring now to FIG. 1-L, wearable computer acquired encrypted
data decryption and re-encryption server 4200 may include
device-specific key retrieving module 4250 which may retrieve the
device-specific key used to encrypt/decrypt the encrypted image
data 2210. In an embodiment, device-specific key retrieving module
4250 may include a device-specific key retrieving from device
module 4252, which may be configured to retrieve the
device-specific key directly from the device that encrypted the
image, e.g., wearable computing device 3100. In an embodiment,
device-specific key retrieving module 4250 may include a
device-specific key retrieving from server module 4254, which may
be configured to retrieve the device-specific key from a server,
e.g., from wearable computer encrypted data receipt and
determination server 4000, or from DCM beacon detection test
duplicating server 4800, or from another server not depicted in
FIG. 1.
Referring again to FIG. 1-L, in an embodiment, image data
decryption with device-specific key module 4260 may take the
device-specific key retrieved from module 4250, and apply it to the
encrypted image data 2210 to generate decrypted image data 2280, as
shown by the icon with the unlocked lock in FIG. 1-L.
Referring again to FIG. 1-L, the image data has been decrypted.
However, to protect security, in some embodiments, the data may be
re-encrypted with a key that is not tied to a specific device, but
may be tied to a specific user of the device, e.g., the key may be
related to user 3105, rather than wearable device 3100. This
embodiment, which will be described in more detail herein, allows the
re-encrypted data to be securely sent to a different
device belonging to the user, e.g., a smart TV, a home computer, a
video game system, or another portable electronic device, e.g., a
cellular smartphone. In an embodiment, the re-encryption with a
user-specific key may be omitted.
In an embodiment, wearable computer acquired encrypted data
decryption and re-encryption server 4200 may include a
user-specific key retrieving module 4270, which may be configured to
obtain a user-specific encryption key, whether through generation,
acquisition, reception, or retrieval. The user-specific encryption key
may be delivered to image data encrypting with user-specific key
module 4280, which, in an embodiment, also may receive the
decrypted image data 2280.
Referring again to FIG. 1-L, in an embodiment, image data
encrypting with user-specific key module 4280 may be configured to
encrypt the block of decrypted data through use of a unique user
code that is related to the user 3105 of the wearable device 3100.
The again-encrypted image data then may be transferred to encrypted
image data transmitting module 4290. In an embodiment, encrypted
image data transmitting module 4290 may transmit the image data
that has been encrypted with a user-specific key to one or more
other devices, which will be discussed in more detail herein.
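By way of non-limiting illustration, the sketch below suggests how image data decryption with device-specific key module 4260 and image data encrypting with user-specific key module 4280 might be chained. Fernet symmetric encryption stands in for whatever scheme the device actually uses; that choice is an assumption introduced for illustration.

```python
# Hypothetical sketch of modules 4260 and 4280: decrypt the image with the
# device-specific key, then re-encrypt it with a user-specific key so it can
# travel safely to the user's other devices. Fernet is an assumed stand-in
# for the actual symmetric scheme.
from cryptography.fernet import Fernet


def decrypt_and_reencrypt(encrypted_image_data: bytes,
                          device_specific_key: bytes,
                          user_specific_key: bytes) -> bytes:
    # Module 4260: apply the device-specific key to recover the clear image.
    decrypted_image_data = Fernet(device_specific_key).decrypt(
        encrypted_image_data)
    # Module 4280: encrypt again with a key tied to user 3105, not device 3100.
    return Fernet(user_specific_key).encrypt(decrypted_image_data)
```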
Computing Device That Receives the Image Data (FIGS. 1-S and
1-T).
Referring now to FIG. 1-S, in an embodiment, the system may include
a computing device 3200, which may be a wearable computer or other
device. In an embodiment, computing device 3200 may be the same as
wearable computer 3100, but it does not necessarily have to be the
same. In an embodiment, computing device 3200 receives the image
data. In an embodiment, as described above, the received image data
has been encrypted with a user-specific code. Thus, in such an
embodiment, computing device 3200 may be associated with user 3105
of the wearable computing device 3100. For example, a user 3105 may
have a wearable computing device 3100 that captures images of
people. After processing those images at the server 4000, for
example, the images, which, in some embodiments, now may be
encrypted with a user-specific code, may be transmitted to
computing device 3200, which may be the user 3105's home media
center back at her house. In another embodiment, computing device
3200 may be user 3105's laptop device, or user 3105's smartphone or
tablet device. And, as previously mentioned, in another embodiment,
computing device 3200 may simply be the user 3105's wearable
computing device 3100 that captured the images originally.
In an embodiment, the computing device 3200 and the wearable
computing device 3100 pictured in FIG. 1 are the same device. In an
embodiment, the encryption, transmission to a server, decryption,
and transmission back may occur invisibly to the user 3105, e.g.,
from the perspective of the user 3105 of the wearable computing
device 3100, the images simply become available after they are
recorded and saved, following an unspecified delay. In some
embodiments, the user 3105 may not be informed of the path taken by
the captured image data.
In an embodiment, computing device 3200 may include an
encrypted image data receiving module 3210 configured to acquire
the data encrypted by the user-specific key code from encrypted
image data transmitting module 4290 of server 4200. In
an embodiment, computing device 3200 may include image data release
verification acquiring module 3220, which may be configured to
determine that the images received from the encrypted image data
transmitting module 4290 of server 4200 have been
approved for release and/or use. In an embodiment, the
determination may be made based on the ground that the images are
encrypted with a user-specific key rather than a device specific
key, if it is possible to tell from the encrypted information
(e.g., in some embodiments, different types of encryption that may
leave a different "signature" may be used). In an embodiment, the
determination may be made by again analyzing the image data. In an
embodiment, image data release verification acquiring module 3220
may include encrypted image data analysis module 3222 which may
perform analysis on the encrypted image data, including, but not
limited to, reading metadata attached to the encrypted image data,
to verify that the received encrypted image data is approved for
release and/or processing. In an embodiment, image data release
verification acquiring module 3220 may include release verification
data retrieving module 3224, which may be configured to obtain
release verification data from the device that performed the
verification, e.g., server 4000, or from a different device.
Referring now to FIG. 1-T, in an embodiment, computing device 3200
may include device memory 3280. Device memory 3280 may store the
wearable computer user-specific encryption/decryption key 3286,
which may be used to decrypt the received encrypted image data. In
an embodiment, device memory 3280 also may include encrypted image
storage 3284, which may include one or more image data, which may
be encrypted.
Referring again to FIG. 1-S, in an embodiment, computing device
3200 may include user-specific decryption key obtaining module
3230, which may obtain the user-specific encryption/decryption key.
In an embodiment, user-specific decryption key obtaining module
3230 may include encryption/decryption key external source
obtaining module 3232, which may be configured to obtain the
encryption/decryption key from an external source, e.g., server
4000. In an embodiment, user-specific decryption key obtaining
module may include encryption/decryption key memory retrieving
module 3234, which may be configured to retrieve the
encryption/decryption key from device memory 3280 of computing
device 3200.
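By way of non-limiting illustration, the sketch below suggests how user-specific decryption key obtaining module 3230 might try device memory 3280 first (module 3234) and fall back to an external source such as server 4000 (module 3232). The storage key name and endpoint are assumptions introduced for illustration.

```python
# Hypothetical sketch of user-specific decryption key obtaining module 3230:
# retrieve the key from device memory 3280 (module 3234) when present,
# otherwise fetch it from an external source (module 3232). The storage
# layout and endpoint are illustrative assumptions.
import urllib.request


def obtain_user_specific_key(device_memory: dict, server_url: str) -> bytes:
    # Module 3234: retrieve the key from device memory if it is present.
    key = device_memory.get("wearable_computer_user_key_3286")
    if key is not None:
        return key
    # Module 3232: otherwise fetch the key from the external source.
    with urllib.request.urlopen(f"{server_url}/user-key") as response:
        return response.read()
```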
Referring again to FIG. 1-S, in an embodiment, computing device
3200 may include image decryption module 3240, which may use the
user-specific encryption/decryption key to decrypt the image data.
In an embodiment, the decrypted image data then may be sent to
decrypted image release module 3250, where the clear image data may
be accessed by the device, transmitted to other locations, posted to
social networking or cloud storage, shared, manipulated, saved, or
edited; in short, the device has open access to the decrypted image
data.
Ad Replacement Value Determination Server (FIG. 1-F).
Referring back to FIG. 1-G, as discussed briefly above, release of
encrypted data determination module 4140 may determine not to
release the encrypted data, which may be stored in an encrypted
data holding and/or quarantine module 4150. In an embodiment, the
encrypted data and the DCM beacon may be transmitted to an ad
replacement value determination server, as shown in FIG. 1-F.
Referring now to FIG. 1-F, in an embodiment, the system may include
an ad replacement value determination server 4400. Ad replacement
value determination server 4400 may take the encrypted image data
and determine if there is a way to monetize the images such that
the monetization may outweigh the potential damages. For example,
ad replacement value determination server 4400 may calculate
potential earnings and limited damages liability, if, for example,
an entity with the DCM beacon, e.g., Jules Caesar, is instead shown
with an advertisement where his head would normally be. In an
embodiment, ad replacement value server may be controlled by a
different entity than server 4000, and there may be an agreement in
place for the ad replacement value determination server 4400 to
receive encrypted data for which the server 4000 decides it does
not want to allow distribution. For example, ad replacement value
server 4400 may be run by a smaller social networking site that
cares less about potential damages because they have fewer assets,
or are less risk-averse. In another embodiment, ad replacement
value determination server 4400 may be part of server 4000, and it
may be a practice of server 4000 to send an encrypted image for
further analysis after the server 4000 determines that the image is
not likely to be profitable without modification.
Referring again to FIG. 1-F, in an embodiment, ad replacement value
determination server 4400 may include a DCM beacon metadata
reception module 4410 configured to receive the DCM beacon metadata
from the wearable computer encrypted data receipt and determination
server 4000. In an embodiment, ad replacement value determination
server 4400 may include an encrypted data reception module 4420
that may be configured to receive the encrypted data from the
wearable computer encrypted data receipt and determination server
4000, e.g., from the encrypted data holding module 4150.
Referring again to FIG. 1-F, in an embodiment, ad replacement value
determination server 4400 may include a DCM beacon term acquiring
module 4430, which may acquire one or more terms of service from
service term management server 5000 and/or DCM beacon management
server 5100, similarly to DCM beacon TOS retrieval
module 4122 of wearable computer encrypted data receipt and
determination server 4000. In an embodiment, DCM beacon term
acquiring module 4430 may include DCM beacon remote retrieval module
4432. In an embodiment, DCM beacon term acquiring module 4430 may be
configured to retrieve term data from a remote location, e.g.,
service term management server 5000, which term data may correspond
to a term of service associated with a release of image data that
includes the person with which the DCM beacon is associated, e.g.,
Jules Caesar.
Referring again to FIG. 1-F, in an embodiment, ad replacement value
determination server 4400 may include an encrypted data value
calculation with standard ad placement module 4440. In an
embodiment, standard ad placement module 4440 may perform a similar
calculation as encrypted data value calculation module 4130 of
wearable computer encrypted data receipt and determination server
4000. In an embodiment, for example, encrypted data value
calculation with standard ad placement module 4440 may calculate
whether an estimated advertising revenue from one or more
advertisement images placed in the encrypted image data will be
greater than an estimated potential liability for distribution of
the images. In an embodiment, the estimated potential liability is
based at least in part on the terms of service which may be
retrieved by the DCM beacon term acquiring module 4430.
Referring again to FIG. 1-F, in an embodiment, ad replacement value
determination server 4400 may include encrypted image data
modification with intentionally obscuring ad placement module 4450.
In an embodiment, encrypted image data modification with
intentionally obscuring ad placement module 4450 may be configured
to modify the encrypted image data (e.g., which, in some
embodiments, may require limited decryption and then re-encryption)
by replacing one or more areas associated with the entity related
to the DCM beacon, e.g., Jules Caesar's face (e.g., or in another
embodiment, Jules Caesar's genitalia, if, e.g., it was a naked
picture of Jules Caesar), with one or more advertisement
images.
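By way of non-limiting illustration, the sketch below suggests how encrypted image data modification with intentionally obscuring ad placement module 4450 might paste an advertisement over the area associated with the entity. Pillow is an assumed implementation choice, the entity bounding box is assumed to come from a prior recognition step, and the limited decryption and re-encryption around this step are omitted.

```python
# Hypothetical sketch of module 4450: replace the area associated with the
# entity related to the DCM beacon (e.g., the face) with an advertisement.
# Pillow and the bounding-box input are illustrative assumptions.
from PIL import Image


def replace_entity_area_with_ad(image_path: str, ad_path: str,
                                entity_box: tuple, out_path: str) -> None:
    """Paste an advertisement over the entity's area (e.g., the face)."""
    image = Image.open(image_path)
    left, top, right, bottom = entity_box
    ad = Image.open(ad_path).resize((right - left, bottom - top))
    image.paste(ad, (left, top))  # The ad now intentionally obscures the entity.
    image.save(out_path)
```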
Referring again to FIG. 1-F, in an embodiment, ad replacement value
determination server 4400 may include modified encrypted data value
calculation with intentionally obscuring ad placement module 4460.
In an embodiment, modified encrypted data value calculation with
intentionally obscuring ad placement module 4460 may be configured
to calculate an estimated advertising revenue from the modified
image data. In an embodiment, the modified image data then may be
distributed through modified encrypted data distributing module
4470.
Tracking Server (FIG. 1-E).
Referring now to FIG. 1-E, in an embodiment, a system may include
tracking server 9000. Tracking server 9000 may be configured to log
use of a "Don't Capture Me" (hereinafter "DCM") beacon by one or
multiple users. In an embodiment, tracking server 9000 may track
active DCM beacons, e.g., beacon 2110, through communication with
said one or more beacons. In an embodiment, tracking server may
track DCM beacons through other means, e.g., social networking and
the like. The DCM beacon does not need to be an active DCM beacon
in order to be tracked by tracking server 9000.
In an embodiment, tracking server 9000 may include deployment of
one or more active and/or passive DCM beacons monitoring module
9010. Deployment of one or more active and/or passive DCM beacons
monitoring module 9010 may include one or more of active DCM beacon
monitoring module 9012 and passive DCM beacon monitoring/data
gathering module 9020. In an embodiment, passive DCM beacon
monitoring/data gathering module 9020 may gather data about the
passive DCM beacon by observing it, e.g., through satellite video
capture, through other image capturing devices, e.g., phone
cameras, security cameras, laptop webcams, and the like, or through
other means. In an embodiment, passive DCM beacon monitoring/data
gathering module 9020 may include user input module 9022, which may
receive an indication from a user, e.g., a switch flipped on a
user's cell phone, indicating that the user is using the DCM
beacon. In an embodiment, passive DCM beacon monitoring/data
gathering module 9020 may include a device status module which
tracks a device with which the passive DCM beacon is associated,
e.g., a wearable computer that is a shirt, or a cellular phone
device in the pocket. In an embodiment, passive DCM beacon
monitoring/data gathering module 9020 may include a social media
monitoring module that monitors posts on social networking sites to
determine if the DCM beacon is being used, and a location of the
user.
Referring again to FIG. 1-E, in an embodiment, tracking server 9000
may include a record of the deployment of the one or more active
and/or passive DCM beacons storing module 9030, which may be
configured to store a record of usage and/or detection logs of the
DCM beacons that are monitored. In an embodiment, record of the
deployment of the one or more active and/or passive DCM beacons
storing module 9030 may store a record of the deployment in
deployment record storage 9032. In an embodiment, record of the
deployment of the one or more active and/or passive DCM beacons
storing module 9030 may transmit all or portions of the recorded
record through record of the deployment of one or more active
and/or passive DCM beacons transmitting module 9040.
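By way of non-limiting illustration, the sketch below suggests how storing module 9030 and deployment record storage 9032 might log observations and answer a later request to confirm whether a beacon was active at a particular time. The record schema and in-memory storage are assumptions introduced for illustration.

```python
# Hypothetical sketch of module 9030 and deployment record storage 9032.
# The record schema and in-memory storage are illustrative assumptions.
import datetime


class DeploymentRecordStorage:
    def __init__(self):
        self._records = []  # Deployment record storage 9032.

    def store_deployment(self, beacon_id: str, active: bool,
                         observed_at: datetime.datetime) -> None:
        """Module 9030: log one observation of a monitored DCM beacon."""
        self._records.append({"beacon_id": beacon_id, "active": active,
                              "observed_at": observed_at})

    def was_active_at(self, beacon_id: str,
                      when: datetime.datetime) -> bool:
        """Confirm whether a beacon was active at a particular time
        (e.g., for a request served via module 9040)."""
        observations = [r for r in self._records
                        if r["beacon_id"] == beacon_id
                        and r["observed_at"] <= when]
        if not observations:
            return False
        latest = max(observations, key=lambda r: r["observed_at"])
        return latest["active"]
```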
Service Term Management Server 5000 (FIG. 1-A)
Referring now to FIG. 1-A, in an embodiment, the system may include
service term management server 5000, which may manage terms of
service that are associated with a DCM beacon and/or a person. In
an embodiment, service term management server 5000 may include a
DCM beacon registry 5010. In an embodiment, the DCM beacon registry
5010 may include one or more of a user's name, e.g., Jules Caesar,
a terms of service associated with Jules Caesar, which may be
custom to Jules Caesar, or may be a generic terms of service that
is used for many persons, and various representations of portions
of Jules Caesar, e.g., likeness, handprint, footprint, voiceprint,
pictures of private areas, and the like.
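By way of non-limiting illustration, one entry in DCM beacon registry 5010 might be structured as in the sketch below. The field names are assumptions based on the items listed above.

```python
# Hypothetical sketch of one entry in DCM beacon registry 5010.
# Field names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class BeaconRegistryEntry:
    user_name: str                       # e.g., "Jules Caesar"
    terms_of_service: str                # custom to the user, or generic
    representations: dict = field(default_factory=dict)
    # representations maps a feature name to stored reference data, e.g.:
    # {"likeness": ..., "handprint": ..., "footprint": ..., "voiceprint": ...}
```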
Referring again to FIG. 1-A, in an embodiment, the system may
include a terms of service generating module 5020. Terms of service
generating module 5020 may create a terms of service for the user
Jules Caesar. A sample Terms of Service is shown in FIG. 1-A and is
reproduced here. It is noted that this is a condensed Terms of
Service meant to illustrate an exemplary operation of the system in
the environment, and accordingly, several necessary legal portions
may be omitted. Accordingly, the example Terms of Service should
not be considered as a binding, legal document, but rather a
representation of what the binding, legal document would look like,
that would enable one skilled in the art to create a full Terms of
Service.
Exemplary Terms of Service for User 2105 (Jules Caesar)
1. By capturing an image of any part of the user Jules Caesar
(hereinafter "Image"), or providing any automation, design,
resource, assistance, or other facilitation in the capturing of the
Image, you agree that you have captured these Terms of Service and
that you acknowledge and agree to them. If you cannot agree to
these Terms of Service, you should immediately delete the captured
Image. Failure to do so will constitute acceptance of these Terms
of Service.
2. The User Jules Caesar owns all of the rights associated with the
Image and any representation of any part of Jules Caesar
thereof.
3. By capturing the Image, you agree to provide the User Jules
Caesar just compensation for any commercialization of the User's
personality rights that may be captured in the Image.
4. By capturing the Image, you agree to take all reasonable actions
to track the Image and to provide an accounting of all
commercialization attempts related to the Image, whether successful
or not.
5. By capturing the Image, you accept a Liquidated Damages
agreement in which unauthorized use of the Image will result in
mandatory damages of at least, but not limited to, $1,000,000.
In an embodiment, terms of service generating module 5020 may include
one or more of a default terms of service storage module 5022, a
potential damage calculator 5024, and an entity interviewing for
terms of service generation module 5026. In an embodiment, default terms
of service storage module 5022 may store the default terms of
service that are used as a template for a new user, e.g., when
Jules Caesar signs up for the service, this is the terms of service
that is available to him. In an embodiment, potential damage
calculator 5024 may determine an estimate of how much in damages
that Jules Caesar could collect for a breach of his personality
rights. In an embodiment, for example, potential damage calculator
5024 may search the internet to determine how often Jules Caesar
appears on social media, blogs, and microblog (e.g., Twitter) accounts. In
an embodiment, entity interviewing for terms of service generation
module 5026 may create an online questionnaire/interview for Jules
Caesar to fill out, which will be used to calculate potential
damages to Jules Caesar, e.g., through determining Jules Caesar's
net worth, for example.
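By way of non-limiting illustration, the sketch below suggests one heuristic potential damage calculator 5024 might apply, combining online-presence counts from the internet search described above with the net worth gathered by module 5026. The base figure and weights are assumptions introduced for illustration.

```python
# Hypothetical sketch of potential damage calculator 5024. The base figure,
# weights, and inputs (mention counts from an internet search, net worth
# from the module 5026 questionnaire) are illustrative assumptions.
def estimate_potential_damages(social_media_mentions: int,
                               blog_mentions: int,
                               microblog_mentions: int,
                               net_worth: float,
                               base_damages: float = 10_000.0) -> float:
    """Estimate damages collectible for a breach of personality rights."""
    presence = (social_media_mentions
                + 2 * blog_mentions          # assumed: blogs weighted higher
                + 0.5 * microblog_mentions)  # assumed: tweets weighted lower
    # Assumed model: damages grow with online presence and with net worth.
    return base_damages + 1.0 * presence + 0.001 * net_worth
```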
In an embodiment, service term management server 5000 may include
terms of service maintenance module 5030, which may maintain the
terms of service and modify them if, for example, the user becomes
more popular, or gains a larger online or other presence. In an
embodiment, terms of service maintenance module 5030 may include
one or more of a social media monitoring module 5042, which may
search social networking sites, and an entity net worth tracking
module 5034 that may have access to the entity's online bank
accounts, brokerage accounts, property indexes, etc., and monitor
the entity's wealth.
In an embodiment, service term management server 5000 may include
a use of representations of an entity detecting module 5040. In an
embodiment, use of representations of an entity detecting module
5040 may include one or more of a social media monitoring module
5042, a public photo repository monitoring module 5044, and a
public blog monitoring module 5046. In an embodiment, use of
representations of an entity detecting module 5040 may track uses
of representations, e.g., images, of the user Jules Caesar, to try
to detect violations of the terms of service, in various
forums.
DCM Beacon Management Server 5100 (FIG. 1-C)
Referring now to FIG. 1-C, in an embodiment, the system may include
a DCM beacon management server 5100, which may be configured to
manage the DCM beacon associated with a user, e.g., DCM beacon 2110
for user 2105, e.g., Jules Caesar. In an embodiment, DCM beacon
management server 5100 and service term management server 5000 may
be the same server. In another embodiment, DCM beacon management
server 5100 and service term management server 5000 may be hosted
by different entities. For example, a specialized entity may handle
the terms of service generation, e.g., a valuation company that may
be able to determine a net "social network" worth of a user, e.g.,
Jules Caesar, and use that to fashion the terms of service.
Referring again to FIG. 1-C, in an embodiment, DCM beacon
management server 5100 may include DCM beacon communication with
entity wanting to avoid having their image captured module 5110.
DCM beacon communication with entity wanting to avoid having their
image captured module 5110 may be configured to communicate with a
user, e.g., user 2105, e.g., Jules Caesar, and may handle the
creation, generation, maintenance, and providing of the DCM beacon
2110 to Jules Caesar, whether through electronic delivery or
through conventional delivery systems (e.g., mail, pickup at a
store, etc.). In an embodiment, DCM beacon communication with
entity wanting to avoid having their image captured module 5110 may
include one or more of DCM beacon transmission module 5112, DCM
beacon receiving module 5114, and DCM beacon generating module
5116.
In an embodiment, DCM beacon management server 5100 may include
entity representation acquiring module 5120. Entity representation
acquiring module 5120 may be configured to receive data regarding
one or more features of the user that will be associated with the
DCM beacon. For example, the user might upload pictures of his
body, face, private parts, footprint, handprint, voice recording,
hairstyle, silhouette, or any other representation that may be
captured and/or may be deemed relevant.
In an embodiment, DCM beacon management server 5100 may include DCM
beacon association with one or more terms of service and one or
more entity representations module 5130. In an embodiment, DCM
beacon association with one or more terms of service and one or
more entity representations module 5130 may be configured to, after
generation of a DCM beacon, obtain a terms of service to be
associated with that DCM beacon. In an embodiment, the terms of
service may be received from service term management server
5000.
In an embodiment, DCM beacon management server 5100 may include a
DCM beacon capture detecting module 5140. DCM beacon capture
detection module 5140 may detect when a DCM beacon is captured,
e.g., if it is an active beacon, or it may receive a notification
from various servers (e.g., server 4000) and/or wearable devices
(e.g., wearable device 3100) that a beacon has been detected, if it
is a passive DCM beacon.
In an embodiment, when a DCM beacon is detected, DCM beacon
management server 5100 may include terms of service associated with
DCM beacon distributing module, which may be configured to provide
the terms of service associated with the DCM beacon to an entity
that captured the image including the DCM beacon, e.g., to module
4122 of wearable computer encrypted data receipt and determination
server 4000, or DCM beacon term acquiring module 4430 of ad
replacement value determination server 4400, for example.
Wearable Computer with Optional Paired Personal Device 3300 (FIGS.
1-Q and 1-R)
Referring now to FIG. 1-R, in an embodiment, the system may include
a wearable computer 3300. Wearable computer 3300 may have
additional functionality beyond capturing images, e.g., it may also
store a user's contact list for emails, phone calls, and the like.
In another embodiment, wearable computer 3300 may be paired with
another device carried by a user, e.g., the user's smartphone
device, which stores the user's contact list. As will be described
in more detail herein, wearable computer 3300 operates similarly to
wearable computer 3100, except that entities with DCM beacons are
obscured, unless they have a preexisting relationship with the
user. It is noted that DCM beacon detection and encryption may
operate similarly in wearable computer 3300 as in wearable computer
3100, and so substantially duplicated parts have been omitted.
Referring again to FIG. 1-R, in an embodiment, wearable computer
3300 may include an image capturing module 3310, which may capture
an image of Jules Caesar, who has DCM beacon "A", Beth Caesar, who
has DCM beacon "B", and Auggie Caesar, who has no DCM beacon. In an
embodiment, wearable computer 3300 may include an image acquiring
module 3320, which may be part of image capturing module 3310, to
acquire one or more images captured by an image capture device,
e.g., the image of Jules Caesar, Beth Caesar, and Auggie
Caesar.
In an embodiment, wearable computer 3300 may include an entity
identification module 3330, which may perform one or more
recognition algorithms on the image in order to identify persons in
the image. Entity identification module may use known facial
recognition algorithms, for example, or may ask the user for input,
or may search the internet for similar images that have been
identified, for example.
Referring again to FIG. 1-R, in an embodiment, wearable computer
3300 may include preexisting relationship data retrieval module
3340, which may retrieve names of known persons, e.g., from a
device contact list, e.g., device contact list 3350. In the example
shown in FIG. 1, Jules Caesar is in the contact list of the device
3300. It is noted that the device contact list 3350 may be stored
on a different device, e.g., the user's cellular telephone.
Referring now to FIG. 1-Q, in an embodiment, wearable computer 3300
may include data indicating an identified entity from the image
data has a preexisting relationship obtaining module 3360, which,
in an embodiment, may obtain data indicating that one of the
entities recorded in the image data (e.g., Jules Caesar) is in the
user's contact list.
Referring again to FIG. 1-Q, in an embodiment, wearable computer
3300 may include entities with preexisting relationship marking to
prevent obfuscation module 3370. In an embodiment, entities with
preexisting relationship marking to prevent obfuscation module 3370
may attach a marker to the image, e.g., a real marker on the image
or a metadata attachment to the image, or another type of marker,
that prevents obfuscation of that person, regardless of DCM beacon
status, because they are in the user's contact list.
Referring again to FIG. 1-Q, in an embodiment, wearable computer
3300 may include unknown entities with DCM beacon obscuring module
3380, which may obfuscate any of the entities in the image data
that have a DCM beacon and are not in the contact list. For
example, in the example shown in FIG. 1, Beth Caesar's image is
obscured, e.g., blurred, blacked out, covered with advertisements,
or the like, because she has a DCM beacon associated with her
image, and because she is not in the user's contact list. Jules
Caesar, on the other hand, is not obscured because a known entity
marker was attached to his image at module 3370, because Jules
Caesar is in the contact list of an associated device of the user.
Auggie Caesar is not obscured regardless of contact list status,
because there is no DCM beacon associated with Auggie Caesar.
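By way of non-limiting illustration, the sketch below suggests the decision implemented across modules 3360-3380: an entity is obscured only if it has a DCM beacon and no preexisting relationship (e.g., no contact-list entry). Pillow and the blur call are one illustrative obscuring method among those named above; both are assumptions.

```python
# Hypothetical sketch of the modules 3360-3380 decision and one obscuring
# method. Pillow and the blur radius are illustrative assumptions.
from PIL import Image, ImageFilter


def should_obscure(has_dcm_beacon: bool, in_contact_list: bool) -> bool:
    # Jules Caesar: beacon + contact -> keep. Beth Caesar: beacon, no
    # contact -> obscure. Auggie Caesar: no beacon -> keep.
    return has_dcm_beacon and not in_contact_list


def obscure_region(image: Image.Image, box: tuple) -> Image.Image:
    """Blur the region occupied by an entity marked for obfuscation."""
    region = image.crop(box)
    image.paste(region.filter(ImageFilter.GaussianBlur(radius=12)), box)
    return image
```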
Referring again to FIG. 1-Q, after the image is obscured, an obscured
image releasing module 3390 of wearable computer 3300 may release the
obscured image to the rest of the device for processing, or to
another device, the Internet, or cloud storage, for further
operations on the image data.
Active DCM Beacon 6000 (FIGS. 1-P and 1-K).
Referring now to FIG. 1-P, in an embodiment, a user 2107 may be
associated with an active DCM beacon 2610, which will be discussed
in more detail herein. The word "Active" in this context merely
means that the DCM beacon has some form of circuitry or
emitter.
Referring now to FIG. 1-K, in an embodiment, the system may include
an active DCM beacon 6000, which may show an active DCM beacon,
e.g., active DCM beacon 2610, in more detail. In an embodiment,
beacon 6000 may include DCM beacon broadcasting module 6010. In an
embodiment, DCM beacon broadcasting module 6010 may broadcast a
privacy beacon associated with at least one user, e.g., user 2107,
from at or near the location of user 2107. The beacon may be
detected by an image capturing device when the user is captured in
an image.
Referring again to FIG. 1-K, in an embodiment, the beacon 6000 may
include an indication of DCM beacon detection module 6020, which
may detect, be informed of, or otherwise acquire an indication that
the active DCM beacon has been captured by an image capturing
device. In an embodiment, indication of DCM beacon detection module
6020 may include one or more of DCM beacon scanning module 6022,
which may scan nearby devices to see if they have detected the
beacon, and DCM beacon communications handshake module 6024, which
may establish communication with one or more nearby devices to
determine if they have captured the beacon.
Referring again to FIG. 1-K, in an embodiment, beacon 6000 may
include term data broadcasting module 6030, which may broadcast, or
which may order to be broadcasted, term data, which may include the
terms of service. In an embodiment, term data broadcasting module
6030 may include one or more of a substantive term data
broadcasting module 6032, which may broadcast the actual terms of
service, and pointer to term data broadcasting module 6034, which
may broadcast a pointer to the terms of service data that a
capturing device may use to retrieve the terms of service from a
particular location.
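By way of non-limiting illustration, the sketch below suggests how term data broadcasting module 6030 might build the two payload variants handled by modules 6032 and 6034. The payload format is an assumption introduced for illustration; the transport (e.g., a radio advertisement or optical modulation) is abstracted away.

```python
# Hypothetical sketch of term data broadcasting module 6030. The payload
# format is an illustrative assumption; the transport is abstracted away.
import json


def build_term_data_payload(terms_of_service: str,
                            terms_url: str,
                            broadcast_full_terms: bool) -> bytes:
    if broadcast_full_terms:
        # Module 6032: broadcast the actual terms of service.
        payload = {"type": "substantive", "terms": terms_of_service}
    else:
        # Module 6034: broadcast a pointer a capturing device may use to
        # retrieve the terms of service from a particular location.
        payload = {"type": "pointer", "terms_url": terms_url}
    return json.dumps(payload).encode()
```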
DCM Beacon Test Duplicating Server 4800 (FIGS. 1-C and 1-D)
Referring now to FIG. 1-C, in an embodiment, the system may include
a DCM beacon test duplicating server 4800. In an embodiment, the
DCM beacon test duplicating server 4800 may take the image data,
and perform the test for capturing the beacon again, as a
redundancy, as a verification, or as a protection for wearable
computer server 4000. In an embodiment, DCM beacon test duplicating
server 4800 may be a part of wearable computer server 4000. In
another embodiment, DCM beacon test duplicating server 4800 may be
separate from wearable computer server 4000, and may be controlled
by a different entity, e.g., a watchdog entity, or an independent
auditing agency.
Referring again to FIG. 1-C, in an embodiment, DCM beacon test
duplicating server 4800 may include encrypted data reception for
secondary DCM beacon detection module 4810, which may acquire the
encrypted image data containing the user, e.g., user 2105, e.g.,
Jules Caesar, and the associated DCM beacon, e.g., DCM beacon
2110.
Referring again to FIG. 1-C, in an embodiment, DCM beacon test
duplicating server 4800 may include a device-specific key
retrieving module 4820, which may retrieve the device-specific key,
e.g., from wearable computer device 3100, or from wearable computer
server 4000. In an embodiment, DCM beacon test duplicating server
4800 may include image data decryption with device-specific key
module 4830, which may apply the device-specific key obtained by
device-specific key retrieving module 4820, and apply it to the
encrypted image data, to generate decrypted image data.
Referring again to FIG. 1-C, in an embodiment, the unencrypted
image data may be sent to DCM beacon detecting module 4840 of DCM
beacon test duplicating server 4800. If the raw image data was
optical in its original form, then it may be reconverted to optical
(e.g., light) data. In an embodiment, DCM beacon detecting module
4840 may perform a detection for the DCM beacon, as previously
described. In an embodiment, DCM beacon detecting module 4840 may
include one or more of an optics-based DCM beacon detecting module
4842 and a digital image processing-based DCM beacon detecting
module 4844.
Referring now to FIG. 1-D, after the test for detecting the DCM
beacon 2220 (which may be the same as the DCM beacon 2110, but is
detected at a different place, so a different number has been
assigned), DCM beacon detection at duplicating server result
obtaining module 4850 may obtain the result of the detection
performed at DCM beacon test duplicating server 4800. Similarly,
DCM beacon detection at device result obtaining module 4860 may
obtain the result from the DCM beacon detection performed at
wearable computer device 3100. The results from module 4850 and
4860 may be stored at DCM beacon test result storage and logging
module 4870 of DCM beacon test duplicating server 4800.
Referring again to FIG. 1-D, the test results from DCM beacon test
duplicating server 4800 and from wearable computer 3100 may be
stored at DCM beacon test result storage and logging module 4870,
and such results may be kept for a predetermined length of time. In
an embodiment, the results may be transmitted to a requesting party
using DCM beacon test result transmitting module 4880.
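By way of non-limiting illustration, the sketch below suggests how the results gathered by modules 4850 and 4860 might be compared and logged at module 4870. The log record format is an assumption introduced for illustration.

```python
# Hypothetical sketch of modules 4850-4870: compare the beacon-detection
# result reported by wearable computer 3100 with the re-test performed at
# server 4800, and log both. The log format is an illustrative assumption.
import datetime


def log_duplicate_test(device_detected: bool, server_detected: bool,
                       test_log: list) -> bool:
    """Store both results (module 4870) and return whether they match."""
    matches = device_detected == server_detected
    test_log.append({
        "device_result": device_detected,    # from module 4860
        "server_result": server_detected,    # from module 4850
        "matches": matches,
        "logged_at": datetime.datetime.now(datetime.timezone.utc),
    })
    return matches
```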
Referring again to the system, in an embodiment, a
computationally-implemented method may include acquiring an image,
said image including at least one representation of a feature of at
least one entity, detecting a presence of a privacy beacon
associated with the at least one entity from the acquired image,
without performance of a further process on the acquired image,
encrypting the image using a unique device code prior to
performance of one or more image processes other than privacy
beacon detection, said unique device code unique to an image
capture device and not transmitted from the image capture device,
and facilitating transmission of the encrypted image and privacy
beacon data associated with the privacy beacon to a location
configured to perform processing on one or more of the encrypted
image and the privacy beacon data.
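By way of non-limiting illustration, the capture-side method just recited might be sketched end to end as below. Fernet stands in for the device's actual encryption scheme, and detect_privacy_beacon and transmit are assumed helper callables, not part of the disclosed method.

```python
# Hypothetical end-to-end sketch of the capture-side method recited above.
# The encryption scheme (Fernet) and the helper callables are illustrative
# assumptions.
from cryptography.fernet import Fernet

DEVICE_CODE = Fernet.generate_key()  # unique to the device; never transmitted


def process_captured_image(raw_image: bytes, detect_privacy_beacon,
                           transmit) -> None:
    # Detect the privacy beacon from the acquired image, without performing
    # any further process on the acquired image.
    beacon_metadata = {"beacon_detected": detect_privacy_beacon(raw_image)}
    # Encrypt with the unique device code prior to any other image process.
    encrypted_image = Fernet(DEVICE_CODE).encrypt(raw_image)
    # Facilitate transmission of the encrypted image and the beacon metadata.
    transmit(encrypted_image, beacon_metadata)
```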
Referring again to the system, in an embodiment, a
computationally-implemented method may include acquiring a block of
encrypted data corresponding to one or more images that have
previously been encrypted through use of a unique device code
associated with an image capture device configured to capture the
one or more images, wherein at least one of the one or more images
includes at least one representation of a feature of at least one
entity, acquiring a privacy metadata, said privacy metadata
corresponding to a detection of a privacy beacon in the one or more
images captured by the image capture device, said privacy beacon
associated with the at least one entity, and determining, at least
partly based on the acquired privacy metadata, and partly based on
a value calculation based on the representation of the feature of
the at least one entity for which the privacy beacon is associated,
whether to allow processing, which may include distribution,
decryption, etc., of the encrypted data block.
Referring again to the system, in an embodiment, a
computationally-implemented method may include acquiring a block of
encrypted data corresponding to one or more images that have
previously been encrypted through use of a unique device code
associated with an image capture device configured to capture the
one or more images, wherein at least one of the one or more images
includes at least one representation of a feature of at least one
entity, acquiring a privacy metadata indicating detection of a
privacy beacon in the one or more images captured by the image
capture device, said privacy beacon associated with the at least
one entity, retrieving term data from a remote location, said term
data corresponding to a term of service associated with a potential
release of the block of encrypted data corresponding to the one or
more images that have previously been encrypted through use of the
unique device code associated with the image capture device
configured to capture the one or more images, calculating an
expected valuation corresponding to potential revenue associated
with the release of at least a portion of the block of encrypted
data corresponding to the one or more images that have previously
been encrypted through use of the unique device code associated
with the image capture device configured to capture the one or more
images, and determining whether to perform decryption of at least a
portion of the block of encrypted data at least partially based on
the calculation of the expected valuation corresponding to the
potential revenue associated with the release of the at least the
portion of the block of encrypted data, and at least partially
based on the retrieved term data corresponding to the term of
service.
Referring again to the system, in an embodiment, a
computationally-implemented method may include acquiring a block of
encrypted data corresponding to one or more images that have
previously been encrypted through use of a unique device code
associated with an image capture device configured to capture the
one or more images, wherein at least one of the one or more images
includes at least one representation of a feature of at least one
entity, acquiring a privacy metadata indicating a lack of detection
of a privacy beacon in the one or more images captured by the image
capture device, decrypting the block of encrypted data
corresponding to the one or more images that have previously been
encrypted through use of a unique device code associated with the
image capture device, and encrypting the block of decrypted data
through use of a unique entity code that is related to an entity
associated with the image capture device configured to capture the
one or more images.
Referring again to the system, in an
embodiment, a computationally-implemented method may include
acquiring a block of encrypted data from a remote location, said
block of encrypted data corresponding to one or more images
captured by an image capture device, said block of encrypted data
previously encrypted through use of a unique entity code that is
related to an entity associated with the image capture device,
receiving an indication that the one or more images captured by the
image capture device were approved for decryption through a
verification related to privacy metadata associated with the one or
more images, obtaining the unique entity code related to the entity
associated with the image capture device, and releasing the one or
more images through decryption of the block of encrypted data
acquired from the remote location using the obtained unique entity
code related to the entity associated with the image capture
device.
Referring again to the system, in an embodiment, a
computationally-implemented method may include acquiring a block of
encrypted data corresponding to one or more images that have
previously been encrypted through use of a unique device code
associated with an image capture device configured to capture the
one or more images, wherein at least one of the one or more images
includes at least one representation of a feature of at least one
entity, retrieving term data from a remote location, said term data
corresponding to a term of service associated with a potential
release of the one or more images that have previously been
encrypted through use of the unique device code associated with the
image capture device configured to capture the one or more images,
calculating whether an estimated advertising revenue from one or
more advertisement images placed in the one or more images of the
block of encrypted data will be greater than an estimated potential
liability for distribution of the one or more images of the block
of encrypted data, said estimated potential liability at least
partly based on the retrieved term data, modifying the one or more
images of the block of encrypted data by replacing one or more
areas associated with one or more entities at least partially
depicted in the one or more images with the one or more
advertisement images, and calculating a modified estimated
advertising revenue from the modified one or more images of the
block of encrypted data.
Referring again to the system, in an embodiment, a
computationally-implemented method may include monitoring a
deployment of a privacy beacon associated with a user, said privacy
beacon configured to alert a wearable computer of one or more terms
of service associated with said user in response to recordation of
image data that includes said privacy beacon by said wearable
computer, and said privacy beacon configured to instruct said
wearable computer to execute one or more processes to impede
transmission of the one or more images that include the user
associated with said privacy beacon, and storing a record of the
deployment of the privacy beacon associated with the user, said
record configured to be retrieved upon request to confirm whether
the privacy beacon associated with the user was active at a
particular time.
Referring again to the system, in an embodiment, a
computationally-implemented method may include receiving data
regarding one or more features of one or more entities that are
designated for protection by one or more terms of service,
associating the one or more terms of service with a privacy beacon
configured to be captured in an image when the one or more features
of the one or more entities are captured in the image, and
providing the terms of service to one or more media service
providers associated with a device that captured an image that
includes the privacy beacon, in response to receipt of an
indication that an image that includes the privacy beacon has been
captured.
Referring again to the system, in an embodiment, a
computationally-implemented method may include acquiring one or
more images that have previously been captured by an image capture
device, wherein at least one of the one or more images includes at
least one representation of a feature of one or more entities,
identifying a first entity for which at least one representation of
a first entity feature is present in the one or more images, and a
second entity for which at least one representation of a second
entity feature is present in the one or more images, obtaining data
indicating that the first entity has a preexisting relationship
with an entity associated with the image capture device, e.g., in a
contact list, preventing an obfuscation of the representation of
the first entity for which the preexisting relationship with the
entity associated with the image capture device has been indicated,
and obfuscating the representation of the second entity for which
at least one representation of the second entity feature is present
in the one or more images.
Referring again to the system, in an embodiment, a
computationally-implemented method may include broadcasting a
privacy beacon associated with at least one entity from a location
of the at least one entity, said privacy beacon configured to be
detected by an image capturing device upon capture of an image of
the at least one entity, acquiring an indication that the privacy
beacon associated with the at least one entity has been captured by
the image capturing device, and broadcasting term data including
one or more conditions and/or consequences of distribution of one
or more images that depict at least a portion of the at least one
entity.
Referring again to the system, in an embodiment, a
computationally-implemented method may include acquiring a block of
encrypted data corresponding to one or more images that have
previously been encrypted through use of a unique device code
associated with an image capture device configured to capture the
one or more images, wherein at least one of the one or more images
includes at least one representation of a feature of at least one
entity, decrypting the block of encrypted data corresponding to the
one or more images that have previously been encrypted through use
of the unique device code associated with the image capture device
configured to capture the one or more images, performing an
operation to detect a presence of a privacy beacon associated with
the at least one entity from the one or more images, wherein the
privacy beacon previously had been detected by the image capture
device, and storing outcome data corresponding to an outcome of the
operation to detect the presence of the privacy beacon associated
with the at least one entity of the one or more images, wherein
said outcome data includes an indication of whether a result of the
performed operation to detect the presence of the privacy beacon
associated with the at least one entity from the one or more images
matches the previous detection of the privacy beacon by the image
capture device.
FIGS. 2A-2F illustrate example environments 200* (e.g., 200A-200F)
in which methods, systems, circuitry, articles of manufacture, and
computer program products and architecture, in accordance with
various embodiments, may be implemented by one or more devices
220*. As will be discussed in more detail herein, devices 220* may
be any kind of device, e.g., they may be an image capture device,
or a device that communicates with an image capture device, e.g., a
smartphone, a remote server, a network resource, and the like.
Devices 220* may be a wearable computer. Devices 220* may be a
stationary camera. In an embodiment, the devices 220* may be more
complex than the devices that capture images, e.g., as in a
stationary camera example. For example, a stationary camera, e.g.,
mounted in an ATM, or at the door of a secured building,
may transmit images to another computer, e.g., devices 220*, for
processing, which will be discussed in more detail herein. In
another embodiment, devices 220* may be simpler than a device that
captured the image. In yet another embodiment, devices 220* may be
the device that captured the image.
Referring now to FIG. 2, FIG. 2A illustrates an example environment
200A in which the device 220* is an image capture device 220A. In
an embodiment, image capture device 220A may include an image
capturing module 222A, which may be a lens, CMOS or CCD sensor,
optical component, or any combination thereof. In an embodiment,
image capturing module 222A may capture an image, e.g., captured
image 22A, which for exemplary purposes, may include the entities
"Jules Caesar," "Beth Caesar," and "Auggie Caesar." Image capture
device 220A may identify the entities in the captured image 22A,
e.g., through known facial recognition techniques, e.g., facial
recognition using local directional patterns, and facial
recognition using a line edge map. These techniques are described,
respectively, in "Face Recognition via Local Directional Pattern"
by Dong-Ju Kim, Sang-Heon Lee, and Myoung-Kyu Sohn, and "Face
Recognition Using Line Edge Map," by Yongsheng Gao and Maylor K. H.
Leung, the entireties of which are incorporated herein by
reference.
Referring again to FIG. 2A, in an embodiment, image capture device
220A may compare the identified entities in the captured image 22A
to one or more lists of contacts that are "known" to the device,
e.g., either stored on the device, e.g., through a device contact
list 236A, or stored elsewhere but are accessible to the device,
e.g., a social network site friend list 235A. This list may be used
to generate an obscured image, e.g., obscured image 26A, which
obscures the depictions of entities that have a DCM beacon but are
not present in the contact list 236A. For example, in the example given in FIG.
2A, the entity "Jules Caesar" has a DCM beacon associated with the
entity, but because Jules Caesar is in the device contact list,
Jules Caesar's image is not obscured in the obscured image 26A. In
contrast, in the example given in FIG. 2A, entity "Beth Caesar" has
a DCM beacon associated with the entity, and is not in the device
contact list 236A or the social network site friend list 235A, so
Beth Caesar's image is obscured in the obscured image 26A.
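The gating rule just described reduces to a simple predicate. The following minimal Python sketch assumes the entity names and per-entity DCM beacon flags have already been extracted by the recognition step, and mirrors the Jules Caesar / Beth Caesar example.

```python
def should_obscure(entity_name, has_dcm_beacon, device_contacts, friend_list):
    """An entity is obscured when it carries a DCM beacon but appears in
    neither the device contact list nor the social network friend list."""
    known = entity_name in device_contacts or entity_name in friend_list
    return has_dcm_beacon and not known

# Mirroring FIG. 2A: Jules Caesar is a known contact, Beth Caesar is not.
contacts, friends = {"Jules Caesar"}, set()
assert should_obscure("Jules Caesar", True, contacts, friends) is False
assert should_obscure("Beth Caesar", True, contacts, friends) is True
```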
Referring now to FIG. 2B, FIG. 2B illustrates an example
environment 200B in which the device 220* is an image capture
device 220B. In the embodiment shown in FIG. 2B, entities that are
not recognized by the device as "known" are obscured regardless of
the presence of a DCM beacon. For example, in an embodiment, image
capture device 220B may include an image capturing module 222B,
which may be a lens, CMOS or CCD sensor, optical component, or any
combination thereof. In an embodiment, image capturing module 222B
may capture an image, e.g., captured image 22B, which for exemplary
purposes, may include the entities "Jules Caesar," "Beth Caesar,"
and "Auggie Caesar." Image capture device 220B may identify the
entities in the captured image 22B, e.g., through known facial
recognition techniques previously described.
Referring again to FIG. 2B, in an embodiment, image capture device
220B may compare the identified entities in the captured image 22B
to one or more lists of contacts that are "known" to the device,
e.g., either stored on the device, e.g., through a device contact
list 236B, or stored elsewhere but are accessible to the device,
e.g., a social network site friend list 235B. This list may be used
to generate an obscured image, e.g., obscured image 26B, which
obscures the depictions of entities that are not present in the
contact list 236B, regardless of whether a DCM beacon is present.
For example, in the example given in FIG.
2B, the entity "Jules Caesar" has a DCM beacon associated with the
entity, but because Jules Caesar is in the device contact list,
Jules Caesar's image is not obscured in the obscured image 26B. In
contrast, in the example given in FIG. 2B, entities "Beth Caesar"
and "Auggie Caesar" are not known to the device, and thus are
obscured regardless of the presence of a DCM beacon associated with
either entity.
Referring now to FIG. 2C, FIG. 2C illustrates an example
environment 200C in which the device 220* is an image capture
device 220C. In an embodiment, image capture device 220C may
include an image capturing module 222C, which may be a lens, CMOS
or CCD sensor, optical component, or any combination thereof. In an
embodiment, image capturing module 222C may capture an image, e.g.,
captured image 22C, which for exemplary purposes, may include the
entities "Jules Caesar," "Beth Caesar," and "Auggie Caesar." Image
capture device 220C may identify the entities in the captured image
22C, e.g., through known facial recognition techniques previously
described.
Referring again to FIG. 2C, in an embodiment, image capture device
220C may determine an identity of one or more of the entities in
the image, e.g., by receiving input, e.g., input from device user
237, e.g., or by receiving input from one or more devices and/or
users of devices. In an embodiment, entities that were able to be
identified by the user, or entities whose user identification
matched an independently-verified identification of the entities,
may not be obscured. For example, in an embodiment, if a user
correctly identifies "Jules Caesar," then Jules Caesar may not be
obscured in the obscured image 26C. In another embodiment, if the
user asserts that Jules Caesar is a known entity, then Jules Caesar
may not be obscured. In still another embodiment, if the user
identifies the "Jules Caesar" as "Bill Smith," and independent
verification (e.g., automated facial recognition) determines that
the entity is Jules Caesar, then the entity may still be
obscured.
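The FIG. 2C decision may be sketched as follows, assuming a hypothetical independently verified identity (e.g., from automated facial recognition) alongside the user-supplied label; the function names are illustrative only.

```python
def obscure_after_user_identification(user_label, verified_identity,
                                      known_entities):
    """Do not obscure when the user's identification matches the
    independently verified identity and that identity is known; obscure
    when the label contradicts the verification (the FIG. 2C
    'Bill Smith' vs. 'Jules Caesar' case) or the entity is unknown."""
    if user_label == verified_identity and user_label in known_entities:
        return False          # correctly identified; leave unobscured
    return True               # mismatch or unknown; obscure

assert obscure_after_user_identification(
    "Jules Caesar", "Jules Caesar", {"Jules Caesar"}) is False
assert obscure_after_user_identification(
    "Bill Smith", "Jules Caesar", {"Jules Caesar"}) is True
```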
Referring now to FIG. 2D, FIG. 2D illustrates an example
environment 200D in which the device 220* is an image capture
device 220D. For example, in an embodiment, image capture device
220D may include an image capturing module 222D, which may be a
lens, CMOS or CCD sensor, optical component, or any combination
thereof. In an embodiment, image capturing module 222D may capture
an image, e.g., captured image 22D, which for exemplary purposes,
may include the entities "Jules Caesar," "Beth Caesar," and "Auggie
Caesar." In an embodiment, image capture device 220D may include an
entity recognition module 228 which may be configured to identify
one or more of the entities captured in the captured image 22D. In
an embodiment, the entity recognition module 228 may use, for
example, a facial characteristic database 229 to assist in
identification of the entities in the captured image 22D. In an
embodiment, entity recognition module 228 may facilitate generation
of an identified entity captured image 24D. The identified entity
captured image 24D may, for example, identify the entities in the
captured image 22D and their positions in the image in the metadata
of the image.
Referring again to FIG. 2D, identified entity captured image 24D
may be used to generate an obscured image, e.g., obscured image
26D, which obscures the depictions of entities that have a DCM
beacon but are not recognized by the entity recognition module 228.
For example, in the example given in
FIG. 2D, the entity "Jules Caesar" has a DCM beacon associated with
the entity, but because Jules Caesar has been recognized by the
entity recognition module 228, Jules Caesar's image is not obscured
in the obscured image 26D. In contrast, in the example given in
FIG. 2D, entity "Beth Caesar" has a DCM beacon associated with the
entity, and was not recognized by the entity recognition module
228, so Beth Caesar's image is obscured in the obscured image 26D.
Referring now to FIG. 2E, FIG. 2E illustrates an example
environment in which the device 220* is an image receipt device
220E that communicates, e.g., receives the captured image 22E from
an image capture device 210. For example, in an embodiment, image
capture device 210 may include an image capturing module 222E,
which may be a lens, CMOS or CCD sensor, optical component, or any
combination thereof. In an embodiment, image capturing module 222E
may capture an image, e.g., captured image 22E, which for exemplary
purposes, may include the entities "Jules Caesar," "Beth Caesar,"
and "Auggie Caesar." In an embodiment, image capture device 210 may
transmit captured image 22E to image receipt device 220E, e.g.,
through a communication network 240, e.g., a Wi-Fi network, a
cellular (e.g., 4G, LTE, CDMA, etc.) network, or other network. In
an embodiment, image capture device 210 may transmit captured image
22E to image receipt device 220E through a private network 240A. In
an embodiment, private network 240A may be a short range network,
e.g., one that utilizes 60 GHz frequency. In another embodiment,
private network 240A may be a proprietary network that is
accessible only to particular types of devices, e.g., devices
manufactured by a particular manufacturer, or that have a particular
application installed in their memory.
Referring again to FIG. 2E, image receipt device 220E may receive
captured image 22E. Image receipt device 220E may identify the
entities in the captured image 22E, e.g., through known facial
recognition techniques, e.g., facial recognition using local
directional patterns, and facial recognition using a line edge map.
In an embodiment, image receipt device 220E may compare the
identified entities in the captured image 22E to one or more lists
of contacts that are "known" to the device, e.g., either stored on
the device, e.g., through a device contact list 221, or stored
elsewhere but are accessible to the device, e.g., a social network
site friend list 235A (e.g., as shown in FIG. 2A). In an
embodiment, the lists of contacts may be specific to the type of
device. For example, if image receipt device 220E is a cellular
telephone device, then there may be a "recent calls" list, e.g.,
recent calls list 223, which may be used to determine whether
identified entities in the captured image 22E are known to the
device. In an embodiment, one or more lists of persons may also be
retrieved from image capture device 210, e.g., from capture device
contact list 218. In an embodiment, one or more of these lists may
be used to generate an obscured image, e.g., obscured image 26E,
which obscures the depictions of entities that have a DCM beacon but
are not present in the contact lists. For example, in the example given in
FIG. 2E, the entity "Jules Caesar" has a DCM beacon associated with
the entity, but because Jules Caesar is in the device contact list
221, Jules Caesar's image is not obscured in the obscured image. In
contrast, in the example given in FIG. 2E, entity "Beth Caesar" has
a DCM beacon associated with the entity, and is not in a list
available to the image receipt device 220E, so Beth Caesar's image
is obscured in the obscured image 26E.
Referring again to FIG. 2E, in an embodiment, image receipt device
220E and image capture device 210 may both be associated with a
user 115. For example, image receipt device 220E may be a
smartphone device, tablet device, or laptop device carried by a
user. In an embodiment, for example, image capture device 210 may
be a wearable computer, e.g., a Google Glass device, or another
wearable computer that transmits the captured images to a different
device located in a proximity of a user. In an embodiment, the
image capture device may be of limited processing power or have
limited access to various contact lists (e.g., may have a metered
network connection or only a short-range network connection).
Referring now to FIG. 2F, FIG. 2F illustrates an example
environment in which the device 220* is a remote computer device
220F that communicates, e.g., receives the captured image 22F from
an image capture device 211. For example, in an embodiment, image
capture device 211 may include an image capturing module 222F,
which may be a lens, CMOS or CCD sensor, optical component, or any
combination thereof. In an embodiment, image capturing module 222F
may capture an image, e.g., captured image 22F, which for exemplary
purposes, may include the entities "Jules Caesar," "Beth Caesar,"
and "Auggie Caesar." In an embodiment, image capture device 211 may
transmit captured image 22F to a remote computer device 220F, e.g.,
through a communication network 240, e.g., a Wi-Fi network, a
cellular (e.g., 4G, LTE, CDMA, etc.) network, or other network. In
an embodiment, image capture device 211 may transmit captured image
22F to remote computer device 220F through a private network 240A.
In an embodiment, remote computer device 220F may be in proximity
to image capture device 211. In another embodiment, remote computer
device 220F may be distant from the image capture device 211. In an
embodiment, remote computer device 220F may receive images from
many image capture devices 211. In another embodiment, remote
computer device 220F may be a home computer, for example, and may
receive images from one image capture device 211.
Referring again to FIG. 2F, remote computer device 220F may receive
captured image 22F. Remote computer device 220F may identify the
entities in the captured image 22F, e.g., through known facial
recognition techniques, e.g., facial recognition using local
directional patterns, and facial recognition using a line edge map.
In an embodiment, remote computer device 220F may compare the
identified entities in the captured image 22F to one or more lists
of contacts that are "known" to the device, e.g., either stored on
the device, e.g., through a device contact list 221, or stored
elsewhere but are accessible to the device, e.g., a social network
site friend list 233A (e.g., which may be local or stored at a
remote location). In an embodiment, the lists of contacts may be
specific to the type of device. In an embodiment, one or more lists
of persons may also be retrieved from image capture device, e.g.,
from capture device contact list 219. In an embodiment, one or more
of these lists may be used to generate an obscured image, e.g.,
obscured image 26F, which obscures the depictions of entities that
have a DCM beacon but are not present in the contact list. For example, in the
example given in FIG. 2F, the entity "Jules Caesar" has a DCM
beacon associated with the entity, but because Jules Caesar is in
the device contact list, Jules Caesar's image is not obscured in
the obscured image 26F. In contrast, in the example given in FIG.
2F, entity "Beth Caesar" has a DCM beacon associated with the
entity, and is not in a list available to the remote computer device
220F, so Beth Caesar's image is obscured in the obscured image 26F.
In various embodiments, the communication network 240 may include
one or more of a local area network (LAN), a wide area network
(WAN), a metropolitan area network (MAN), a wireless local area
network (WLAN), a personal area network (PAN), a Worldwide
Interoperability for Microwave Access (WiMAX) network, a public
switched telephone network (PSTN), a general packet radio service (GPRS)
network, a cellular network, and so forth. The communication
networks 240 may be wired, wireless, or a combination of wired and
wireless networks. It is noted that "communication network" as it
is used in this application refers to one or more communication
networks, which may or may not interact with each other.
Referring now to FIG. 2G, FIG. 2G shows a more detailed version of
computing device 220, which is an example of device 220* according
to various embodiments. Computing device 220 may be any electronic
device, portable or not, that may be operated by or associated with
one or more users. Computing device 220 may interact with a user
115. As set forth above, user 115 may be a person, or a group of
people, or another entity that mimics the operations of a user. In
an embodiment, user 115 may be a computer or a computer-controlled
device. Computing device 220 may be, but is not limited to, a
wearable computer. Computing device 220 may be any device that is
equipped with an image capturing component, including, but not
limited to, a cellular phone, a network phone, a smartphone, a
tablet, a music player, a walkie-talkie, a radio, an augmented
reality device (e.g., augmented reality glasses and/or headphones),
wearable electronics, e.g., watches, belts, earphones, or "smart"
clothing, headphones, audio/visual equipment, media
player, television, projection screen, flat screen, monitor, clock,
appliance (e.g., microwave, convection oven, stove, refrigerator,
freezer), a navigation system (e.g., a Global Positioning System
("GPS") system), a medical alert device, a remote control, a
peripheral, an electronic safe, an electronic lock, an electronic
security system, a video camera, a personal video recorder, a
personal audio recorder, and the like.
Referring again to FIG. 2G, computing device 220 may include a
device memory 245. In an embodiment, device memory 245 may include
random access memory ("RAM"), read only memory ("ROM"),
flash memory, hard drives, disk-based media, disc-based media,
magnetic storage, optical storage, volatile memory, nonvolatile
memory, and any combination thereof. In an embodiment, device
memory 245 may be separated from the device, e.g., available on a
different device on a network, or over the air. For example, in a
networked system, there may be many computing devices 220 whose
device memory 245 is located at a central server that may be a few
feet away or located across an ocean. In an embodiment, device
memory 245 may comprise one or more of: one or more mass storage
devices, read-only memory (ROM), programmable read-only memory
(PROM), erasable programmable read-only memory (EPROM), cache
memory such as random access memory (RAM), flash memory,
static random access memory (SRAM), dynamic random access
memory (DRAM), and/or other types of memory devices. In an
embodiment, memory 245 may be located at a single network site. In
an embodiment, memory 245 may be located at multiple network sites,
including sites that are distant from each other.
Referring again to FIG. 2G, in an embodiment, computing device 220
may include one or more of an image capture component 262, image
receipt component 263, database interface component 264 (e.g.,
which, in an embodiment, may include one or more of internal
database interface component 264A and external database interface
component 264B), and entity interface 266.
Referring again to FIG. 2G, FIG. 2G shows a more detailed
description of computing device 220. In an embodiment, computing
device 220 may include a processor 222. Processor 222 may include
one or more microprocessors, Central Processing Units ("CPU"),
Graphics Processing Units ("GPU"), Physics Processing Units,
Digital Signal Processors, Network Processors, Floating Point
Processors, and the like. In an embodiment, processor 222 may be a
server. In an embodiment, processor 222 may be a distributed-core
processor. Although processor 222 is illustrated as a single processor that is
part of a single computing device 220, processor 222 may be
multiple processors distributed over one or many computing devices
220, which may or may not be configured to operate together.
Processor 222 is illustrated as being configured to execute
computer readable instructions in order to execute one or more
operations described above, and as illustrated in FIGS. 10,
11A-11C, 12A-12D, 13A-13E, and 14A-14C. In an embodiment, processor
222 is designed to be configured to operate as processing module
250, which may include one or more of image that contains a
depiction of a feature of a particular entity acquiring module 252,
identification data related to an identity of the particular entity
for which the depiction of the feature of the particular entity is
present in the image attaining module 254, relation data that
describes a relation between the particular entity and a device
that facilitated the acquisition of the image that contains the
depiction of the feature of the particular entity obtaining module,
wherein the relation may be nonextant 256, and obfuscation of a
particular portion of the image, wherein the depiction of the
feature of the particular entity is excluded from the particular
portion of the image when the relation data indicates that the
relation between the particular entity and the device that
facilitated the acquisition of the image that contains the
depiction of the feature of the particular entity is extant
performing module 258.
FIGS. 3A-3E refer to an "image capture device," which is defined as
any device that is equipped with the ability to capture images, and
not necessarily a wearable computer or a device designed
specifically to capture images.
Referring now to FIG. 3A, FIG. 3A shows an exemplary embodiment of
a computing device 220 as image capture device 302. In an
embodiment, image capture device 302 may include an image capture
component, e.g., a lens 306A. Image capture component 306A may
capture an image including the user 105 and the DCM beacon 110, and
capture that image as raw (optical or digital) data 120. In an
embodiment, image capture device 302 may include beacon detection
module 310A that is configured to detect DCM beacon 110, either
optically, digitally, or otherwise, depending on the embodiment. After
detection of the beacon, the image data may be sent to an image
data encryption module 320A to encrypt the image. In an embodiment,
if the beacon is not detected, the image data 120 is released past
barrier 340A and the other image capture device modules 350A may
operate on the image data 120. In an embodiment, the encrypted
data, and data associated with the DCM beacon 110 (although not
necessarily the beacon itself) may be transmitted to encrypted data
and beacon transmitting module 330A, which may transmit the
encrypted data and beacon data to an external source, e.g., server
3000 as described in FIG. 1. It is noted that beacon detection
module 310A, image data encryption module 320A, and encrypted data
and beacon transmitting module 330A may be separated from other
image capture device modules 350A by barrier 340A.
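Functionally, the modules inside barrier 340A implement a gate of roughly the following shape; this is a sketch only, with detect_beacon, encrypt, and transmit standing in for modules 310A, 320A, and 330A.

```python
def process_raw_image(raw_data: bytes, detect_beacon, encrypt, transmit):
    """Gate implemented by the modules inside barrier 340A: when a DCM
    beacon is detected the image is encrypted and transmitted (e.g., to
    server 3000) and nothing crosses the barrier; when no beacon is
    detected the raw data is released to the other device modules."""
    if detect_beacon(raw_data):
        transmit(encrypt(raw_data))   # encrypted data + beacon data out
        return None                   # unencrypted data never escapes
    return raw_data                   # released past barrier 340A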
In an embodiment, barrier 340A may be a physical barrier, e.g.,
beacon detection module 310A, lens 306A, image data encryption
module 320A, and encrypted data and beacon transmitting module 330A
may be hard-wired to each other and electrically excluded from
other image capture device modules 350A. In another embodiment,
barrier 340A may be implemented as a programmed barrier, e.g., the
image data 120 is not transmitted to modules other than beacon
detection module 310A, lens 306A, image data encryption module
320A, and encrypted data and beacon transmitting module 330A. In
another embodiment, barrier 340A may be implemented as a data
access barrier, e.g., the captured image data 120 may be protected,
e.g., with an access or clearance level, so that only beacon
detection module 310A, lens 306A, image data encryption module
320A, and encrypted data and beacon transmitting module 330A may
read or operate on the image data 120. In another embodiment,
barrier 340A may not be a complete barrier, e.g., barrier 340A may
allow "read" access to the image data, but not "copy" or "write"
access. In another embodiment, barrier 340A may be a barrier to
transmission, e.g., the image may be viewed locally at the device,
but may be barred from being saved to a removable memory, or
uploaded to a cloud storage or social networking site/social media
site.
Referring now to FIG. 3B, FIG. 3B shows an embodiment of a
computing device 220 as image capture device 304. In an embodiment,
image capture device 304 may include an image capture component,
e.g., a lens and sensor 306B. Image capture component 306B may
capture an image including the user 105 and the DCM beacon 110, and
capture that image as raw (optical or digital) data 120. In an
embodiment, image capture device 304 may include image path
splitting module 305B that may receive the raw data 120 as a
signal, e.g., optical or digital, and split the signal into two
branches. As shown in FIG. 3B, one branch, e.g., the north branch,
sends the raw signal to image data encryption module 320B, which
may encrypt the image. In an embodiment, the other branch, e.g.,
the south branch, may send the signal to a beacon detection module
310B, which may detect the DCM beacon 110. In an embodiment, if the
DCM beacon 110 is detected, then the unencrypted image data that
arrived at beacon detection module 310B is destroyed. In an
embodiment, if the DCM beacon 110 is not detected, then the
encrypted image data from image data encryption module 320B is
destroyed, and the unencrypted image data at beacon detection
module 310B is allowed to pass to other image capture device
modules 350B. In an embodiment, the beacon detection result and the
encrypted image data are transmitted to the encrypted data and
beacon transmitting module 330B. In an embodiment, barrier 340B may
separate image path splitting module 305B, beacon detection module
310B, image data encryption module 320B, and encrypted data and
beacon transmitting module 330B from other image capture device
modules 350B.
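The split-and-destroy behavior of FIG. 3B may be sketched as follows; destruction of the losing branch is modeled simply by not returning it, and the helper functions are hypothetical.

```python
def split_path_process(raw: bytes, encrypt, detect_beacon) -> dict:
    """FIG. 3B sketch: the signal is split, both branches run, and exactly
    one product survives."""
    encrypted = encrypt(raw)       # north branch: encryption module 320B
    detected = detect_beacon(raw)  # south branch: beacon detection 310B
    if detected:
        return {"beacon": True, "encrypted": encrypted}   # raw destroyed
    return {"beacon": False, "raw": raw}          # encrypted copy destroyed
```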
In an embodiment, barrier 340B may be a physical barrier, e.g.,
beacon detection module 310B, lens 306B, image data encryption
module 320B, and encrypted data and beacon transmitting module 330B
may be hard-wired to each other and electrically excluded from
other image capture device modules 350B. In another embodiment,
barrier 340B may be implemented as a programmed barrier, e.g., the
image data 120 is not transmitted to modules other than image path
splitting module 305B, beacon detection module 310B, lens 306B, image data
encryption module 320B, and encrypted data and beacon transmitting
module 330B. In another embodiment, barrier 340B may be implemented
as a data access barrier, e.g., the captured image data may be
protected, e.g., with an access or clearance level, so that only
beacon detection module 310B, lens 306B, image data encryption
module 320B, and encrypted data and beacon transmitting module 330B
may read or operate on the image data 120. In another embodiment,
barrier 340B may not be a complete barrier, e.g., barrier 340B may
allow "read" access to the image data, but not "copy" or "write"
access. In another embodiment, barrier 340B may be a barrier to
transmission, e.g., the image may be viewed locally at the device,
but may be barred from being saved to a removable memory, or
uploaded to a cloud storage or social networking site/social media
site.
Referring now to FIG. 3C, FIG. 3C shows an embodiment of a
computing device 220 implemented as image capture device 306. In an
embodiment, image capture device 306 may include an image capture
component 306C that captures optical data 120A. In an embodiment,
optical data 120A may be sent to optical splitting module 305C,
which may split the optical signal, e.g., the light, into two
paths. Referring to FIG. 3C, the "south" path may transmit the
light to an optical filter 312, which may filter the light for a
specific characteristic, e.g., a wavelength or an object, according
to known optical filtration techniques. In an embodiment, the
filtered optical signal may then be transmitted to a filtered
optical signal beacon detection module 310C, which may detect the
beacon 110 in the optical data 120A.
Referring again to FIG. 3C, the "north" path from optical splitting
module 305C may transmit the optical image data to an
optical-to-digital converter 314, e.g., a CMOS or CCD sensor. In an
embodiment, the digital signal then may be transmitted to image
data encryption module 320C, and the encrypted data transmitted to
encrypted data and beacon transmitting module 330C, along with the
beacon detection result, for transmission to an external source,
e.g., server 3000 as shown in FIG. 1. In an embodiment, barrier
340C may prevent access to the unencrypted image data by other
image capture device modules 350C. In an embodiment, barrier 340C
may function similarly to barriers 340A and 340B, and the
descriptions of those barriers and their possible implementations
also may apply to barrier 340C. In an embodiment, image data
encryption module 320C, encrypted data and beacon transmitting
module 330C, and optical-to-digital converter 314 may be controlled
by beacon detection control module 325, which may be part of the
processor of image capture device 306, or may be a separate
processor. In an embodiment, beacon detection control module 325
may form part or all of processor 222 of computing device 220 of
FIG. 2G.
Referring now to FIG. 3D, FIG. 3D shows an exemplary implementation
of a computing device 220 implemented as image capture device 308,
according to an embodiment. Image capture device 308 may include an
optical image collector 306D that may capture an image including
the user 105 and the DCM beacon 110, and capture that image as
optical data 120A. Optical data 120A may then be sent to optical
splitting module 305D, which may split the optical signal, e.g.,
the light, into two paths. Referring to FIG. 3D, the "south" path
may transmit the light to an optical transformation module 332,
which may apply a transformation, e.g., a Fourier transformation,
to the optical image data. The transformed optical data from module
332, as well as a reference image from optical beacon reference
signal providing module 334 may be transmitted to optical beacon
detection module 310D. Optical beacon detection module 310D may
optically detect the beacon using Fourier transformation and an
optical correlator. The basic operation of performing optical image
object detection is described in the publicly-available (at the
University of Michigan Online Library) paper "Report of Project
MICHIGAN, SIGNAL DETECTION BY COMPLEX SPATIAL FILTERING," by A. B.
Vander Lugt, printed in July 1963 at the Radar Laboratory at the
Institute of Science and Technology, the University of Michigan,
which is hereby incorporated by reference in its entirety.
Applicant's representative is including a copy of this paper with
the filing of this application, for the convenience of the
Examiner.
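The digital counterpart of such an optical correlator is cross-correlation computed via the Fourier transform. The following NumPy sketch, which assumes a known reference image of the beacon and a hypothetical detection threshold, illustrates the principle only.

```python
import numpy as np

def correlation_peak(scene: np.ndarray, reference: np.ndarray) -> float:
    """Cross-correlate the scene against a beacon reference via the FFT,
    the digital analogue of a lens-based optical correlator; a sharp
    peak suggests the reference pattern appears in the scene."""
    padded = np.zeros_like(scene, dtype=float)
    padded[:reference.shape[0], :reference.shape[1]] = reference
    corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(padded)))
    return float(np.abs(corr).max())

def beacon_detected(scene: np.ndarray, reference: np.ndarray,
                    threshold: float) -> bool:
    """Hypothetical decision step thresholding the correlation peak."""
    return correlation_peak(scene, reference) > threshold
```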
Referring again to FIG. 3D, the "north" path from optical splitting
module 305D may transmit the optical image data to an
optical-to-digital converter 324, e.g., a CMOS or CCD sensor. In an
embodiment, the digital signal then may be transmitted to image
data encryption module 320D, and the encrypted data transmitted to
encrypted data and beacon transmitting module 330D, along with the
beacon detection result, for transmission to an external source,
e.g., server 3000 as shown in FIG. 1. In an embodiment, barrier
340D may prevent access to the unencrypted image data by other
image capture device modules 350D. In an embodiment, barrier 340D
may function similarly to barriers 340A and 340B, and the
descriptions of those barriers and their possible implementations
also may apply to barrier 340D. In an embodiment, image data
encryption module 320D, encrypted data and beacon transmitting
module 330D, and optical-to-digital converter 324 may be controlled
by beacon detection control module 335, which may be part of the
processor of image capture device 308, or may be a separate
processor. In an embodiment, beacon detection control module 335
may form part or all of processor 222 of computing device 220 of
FIG. 2G.
Referring now to FIG. 3E, FIG. 3E shows an exemplary embodiment of
an implementation of computing device 220 as image capture device
309. In an embodiment, image capture device 309 may include an
optical image collector 306E, e.g., a lens, which may collect the
optical data 120A. Optical data 120A may be transmitted to an optical
beacon detection module 310E, which may detect the DCM beacon 110
using one of the above-described optical detection methods. After
detection of the beacon using optical techniques, the optical
signal may be captured by an optical-to-digital conversion module
344, and converted to digital image data, which is transferred to
image data encryption module 320E for encryption. In an embodiment,
modules 306E, 310E, 344, and 320E are hard-wired to each other,
and separated from encrypted data and beacon transmitting module
330E and other image capture device modules 350E by barrier 340E
(which, in this embodiment, is shown for exemplary purposes only,
because the physical construction of modules 306E, 310E, 344, and
320E removes the need for a barrier 340E, whether implemented as
hardware, programming, security, or access). In this embodiment, the
image data is encrypted prior to interaction with the "main"
portions of image capture device 309, and after the beacon data has
been optically detected.
FIGS. 4A-4E show one or more embodiments of a server device 230,
according to one or more embodiments. Unless otherwise stated or
contradictory to FIGS. 4A-4E, the server devices 430A, 430B, 430C,
430D, and 430E may include the elements of server device 230, as
previously described. Similarly, unless otherwise stated or
contradictory to FIGS. 4A-4E, the computing devices 420A, 420B,
420C, 420D, and 420E may include the elements of computing device
220, as previously described.
Referring now to FIG. 4A, FIG. 4A shows an exemplary implementation
of server device 230 as server device 430A operating in exemplary
environment 400A. In an embodiment, computing device 420A further
includes a location and time log and transmission module 422A. In
an embodiment, location and time log and transmission module 422A
may record a location, e.g., through global positioning sensors,
triangulation using radio signals, or other methods, of the
computing device 420A, and a time that the image is captured, at
the time the image is captured. This data of location and time of
the image capture, e.g., location and time of detection data 162,
may be transmitted to server device 430A, as shown in FIG. 4A.
Referring again to FIG. 4A, server device 430A may include a beacon
metadata acquisition module 433. Beacon metadata acquisition module
433 may include location and time of beacon detection data
acquisition module 433A. Location and time of beacon detection data
acquisition module 433A may receive the location and time of
detection data 162. In an embodiment in which the beacon metadata
150 is binary beacon metadata 150A, additional data regarding the
image may be obtained. For example, server device 430A may transmit
the location and time of detection data 162 to a remote location,
e.g., to beacon support server 490. Beacon support server 490 may
include, for example, a geotagged and timestamped list of detected
beacons 436, which may track a location and time when a beacon is
detected. Beacon support server 490 may be associated with DCM
beacon 110, and may be configured to log each time DCM beacon 110
is detected, e.g., in an embodiment in which DCM beacon 110 is an
active beacon that can determine when it is detected. In an
embodiment, beacon support server 490 may use the location and time
of detection data 162 to determine which DCM beacon 110 is
detected, and transmit the beacon identification information back
to server device 430A, e.g., to beacon identification data
acquisition module 433B. In an embodiment, this beacon
identification information may be used by server device 430A. In an
embodiment, the beacon identification information may be used to
identify the entity in the image, without decrypting the image, for
example.
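A minimal model of the lookup performed by beacon support server 490 follows, assuming planar coordinates and epoch-second timestamps for the geotagged and timestamped list of detected beacons 436; the record layout is hypothetical.

```python
from math import hypot

def identify_beacon(detections: list, location: tuple, capture_time: float,
                    max_distance: float = 50.0,
                    max_seconds: float = 60.0):
    """Return the beacon whose logged detection best matches the reported
    location and time of image capture, or None if no detection is
    close enough in both space and time."""
    candidates = [d for d in detections
                  if hypot(d["x"] - location[0], d["y"] - location[1])
                  <= max_distance
                  and abs(d["time"] - capture_time) <= max_seconds]
    if not candidates:
        return None
    return min(candidates,
               key=lambda d: abs(d["time"] - capture_time))["beacon_id"]
```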
Referring now to FIG. 4B, FIG. 4B shows an exemplary implementation
of server device 230 as server device 430B operating in exemplary
environment 400B. In an embodiment, the computing device 420B may
generate beacon metadata 150, which may be binary beacon metadata
150A, and transmit the binary beacon metadata 150A to server device
430B. In an embodiment, server device 430B receives the binary
beacon metadata 150A, e.g., through use of beacon metadata
acquisition module 443, which may describe whether a beacon was
detected in the encrypted image data block 160, but which may, in
an embodiment, not provide additional data regarding the beacon. In
an embodiment, server device 430B may include encrypted image
analysis and data extraction module 442, which may perform analysis
on the encrypted image 24, if possible. Such analysis may exploit,
for example, the fact that the encrypted image data block 160 may
have metadata that is not encrypted or that may be read through the
encryption. In an embodiment, for example, the image 22 may be
encrypted in such a manner that certain characteristics of the
encrypted image 24 may be obtained without decrypting the image. In
an embodiment, server device 430B may use encrypted image analysis
and data extraction module 442 to determine more information about
the image, e.g., which may be used to perform valuation of the
image and/or to retrieve term data regarding one or more terms of
service associated with the DCM beacon 110 and the entity Jules
Caesar 105.
Referring now to FIG. 4C, FIG. 4C shows an exemplary implementation
of server device 230 as server device 430C operating in exemplary
environment 400C. In an embodiment, computing device 420C may
transmit the beacon metadata 150, which may be binary beacon
metadata 150A, to server device 430C. Beacon metadata 150 may be
obtained by beacon metadata acquisition module 456. In an
embodiment, beacon metadata acquisition module 456 may relay data
regarding the received metadata to a decision-making portion of
server device 430C, e.g., a central processor. In an embodiment,
server device 430C may determine that more data is needed regarding
the image 22, in order to retrieve term data, or perform a
valuation of the image data. Accordingly, in an embodiment, server
device 430C may include encrypted image analysis and data
extraction module 436, which may operate similarly to encrypted
image analysis and data extraction module 442, and also, in an
embodiment, encrypted image analysis and data extraction module 436
may transmit the encrypted image data block to a "sandbox," e.g.,
image decryption sandbox 492. Image decryption sandbox 492 may
place the image in a virtual or physical "sandbox" where other
processes may be unable to access the data. Image decryption
sandbox 492 may be part of server device 430C, or may be a separate
entity. In an embodiment, image decryption sandbox 492 may decrypt
the encrypted image. Encrypted image decryption and beacon
identification module 493 may perform analysis on the decrypted
image, including identifying the beacon, or identifying the entity,
or a combination thereof. The identification data then may be given
to beacon identification data reception module 438. In an
embodiment, the decrypted image data is then trapped in the sandbox
and/or destroyed.
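In outline, the sandboxed identification step may be sketched as follows, with decrypt and identify standing in for the operations of modules 492 and 493; only the identification data leaves the function.

```python
def sandboxed_identification(encrypted_block: bytes, decrypt, identify):
    """Sketch of image decryption sandbox 492: decrypt the block, extract
    only beacon/entity identification data, and discard the plaintext
    inside the sandbox rather than returning it."""
    plaintext = decrypt(encrypted_block)
    try:
        return identify(plaintext)   # identification data only leaves
    finally:
        del plaintext                # decrypted image trapped/destroyed
```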
Referring now to FIG. 4D, FIG. 4D shows an exemplary implementation
of server device 230 as server device 430D operating in exemplary
environment 400D. In an embodiment, computing device 420D may
transmit beacon metadata 150, e.g., beacon identifier metadata
150B, to server device 430D. In an embodiment, beacon identifier
metadata 150B may identify the beacon, e.g., the DCM beacon 110.
The identification may be a unique identification, e.g., "this
beacon is associated with user #13606116, Jules Caesar," or, in an
embodiment, the identification may be a class of beacon, e.g.,
"this is a beacon with a $100,000 dollar liquidated damages clause
associated with using a likeness of the entity associated with the
beacon," or "this is a beacon of a television celebrity," or "this
is a beacon provided by Image Protect Corporation."
Referring again to FIG. 4D, server device 430D receives the beacon
identifier metadata 150B, e.g., through use of beacon metadata
acquisition module 447. In an embodiment, server device 430D may
transmit the identifier to an external location, e.g., a terms of
service transmission server 485. Terms of service transmission
server 485 may store terms of service associated with various
beacons in its terms of service repository 489. In an embodiment,
each unique beacon may be associated with its own unique terms of
service. In another embodiment, there may be common terms of
service for various users. In another embodiment, there may be
common terms of service for various classes of users. In an
embodiment, the terms of service may vary depending on how much the
entity, e.g., Jules Caesar, is paying to use the beacon
service.
In an embodiment, terms of service transmission server 485 may
include beacon identifier lookup table 487. Beacon identifier
lookup table 487 may receive the beacon identifier metadata 150B,
and use the beacon identifier metadata 150B to obtain the terms of
service associated with that beacon, e.g., terms of service data
151. In an embodiment, terms of service data 151 then may be
transmitted to server device 430D.
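Beacon identifier lookup table 487 may be modeled as a simple mapping; the identifiers and terms below are illustrative placeholders echoing the examples given above.

```python
# Hypothetical lookup table 487 mapping beacon identifiers -- unique or
# class-based -- to terms of service stored in repository 489.
TERMS_OF_SERVICE = {
    "user:13606116": "Unique terms for Jules Caesar.",
    "class:liquidated-damages-100k": "USD 100,000 liquidated damages "
                                     "for unauthorized use of likeness.",
    "class:television-celebrity": "Common terms for this class of users.",
}

def terms_for_beacon(beacon_identifier_metadata: str) -> str:
    """Return terms of service data 151 for a received beacon identifier."""
    return TERMS_OF_SERVICE.get(beacon_identifier_metadata,
                                "No terms on file for this beacon.")
```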
Referring now to FIG. 4E, FIG. 4E shows an exemplary implementation
of server device 230 as server device 430E operating in exemplary
environment 400E. In an embodiment, computing device 420E may
detect the DCM beacon 110, and may obtain the terms of service from
the detected beacon (e.g., the terms of service may be read from
the beacon, e.g., in compressed binary). In an embodiment, the
computing device 420E may use the detected beacon data to obtain
the terms of service data from another location, e.g., a terms of
service data server (not pictured).
Referring again to FIG. 4E, in an embodiment, computing device 420E
may transmit beacon metadata 150, e.g., beacon identifier and terms
of service metadata 150C, to server device 430E. Beacon metadata
acquisition module 444 may receive the beacon identifier and terms
of service metadata 150C, and detect that the terms of service are
present in the beacon metadata 150. In an embodiment, beacon
metadata terms of service reading module 454 may read the terms of
service from the beacon metadata 150.
The foregoing examples are merely provided as examples of how
beacon data may operate, and how identifying data and/or terms of
service data may be obtained by the various server devices, and
should not be interpreted as limiting the scope of the invention,
which is defined solely by the claims. Any and all components of
FIGS. 4A-4E may be combined with each other, modified, or
eliminated.
FIGS. 5A-5D show one or more embodiments of a server device 230,
among other components, operating in an environment 500 (e.g.,
500A-500D), according to one or more embodiments. Unless otherwise
stated or contradictory to FIGS. 5A-5D, the server devices 530A,
530B, 530C, and 530D may include the elements of server device 230,
as previously described. Similarly, unless otherwise stated or
contradictory to FIGS. 5A-5D, the computing devices 520A, 520B,
520C, and 520D may include the elements of computing device 220, as
previously described.
Referring now to FIG. 5A, FIG. 5A shows an exemplary implementation
of server device 230 as server device 530A operating in exemplary
environment 500A. In an embodiment, as shown in FIG. 5A, computing
device 520A may capture an image that includes an entity 105 that
may be associated with a privacy beacon, e.g., DCM beacon 110. In
an embodiment, the captured image, e.g., image 22, may be encrypted
into encrypted image 24 using a device-based encryption key. In an
embodiment, encrypted image 24 may be combined with beacon
metadata, e.g., beacon metadata 150, in an encrypted image data
block 160. In another embodiment, beacon metadata 150 may be
separate from encrypted image data block 160. In an embodiment, the
encrypted image 24 may be transmitted to a server device 530A by
encrypted image data transmitting module 180. In an embodiment, a
decryption determination module 532A may determine to decrypt the
image, e.g., in a process described in one or more of this and/or
previous applications incorporated by reference. In an embodiment,
server device 530A may include decryption module 534A, which may
apply a device-based decryption key to the encrypted image 24 to
generate decrypted image data. In an embodiment, client-based
encryption module 536A may apply a client-based encryption key to
the decrypted image data, to generate a client-based encrypted
image. In an embodiment, the client-based encrypted image then may
be transmitted back to the computing device 520A, which may be a
wearable computer, e.g., to client-based encrypted data receiving
module 190. In an embodiment, upon receipt of the client-based
encrypted image, the client-based encrypted image decryption module
195 may decrypt the client-based encrypted image.
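The server-side key swap of FIG. 5A may be sketched as follows; the XOR stream transform is again a toy stand-in for the device-based and client-based ciphers actually used.

```python
import hashlib
from itertools import cycle

def _xor_stream(data: bytes, key_material: str) -> bytes:
    """Toy symmetric transform standing in for a real cipher."""
    key = hashlib.sha256(key_material.encode()).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def server_reencrypt(encrypted_image: bytes,
                     device_key: str, client_key: str) -> bytes:
    """Decrypt with the device-based key, then re-encrypt with the
    client-based key, so plaintext is never returned over the network."""
    plaintext = _xor_stream(encrypted_image, device_key)   # module 534A
    return _xor_stream(plaintext, client_key)              # module 536A
```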
In an embodiment, one or more of the originally-captured image 22,
the decrypted image data in the decryption module 534A of server
device 530A, and the decrypted image data in the client-based
encryption module 536A may be identical. In another embodiment, the
substantive portion of the data (e.g., the color data) may be
identical, and other data, e.g., header data or compression data,
may be different. In another embodiment, the decrypted image data
in the decryption module 534A of server device 530A, and the
decrypted image data in the client-based encryption module 536A may
be slightly different.
Referring now to FIG. 5B, FIG. 5B shows an exemplary implementation
of server device 230 as server device 530B operating in exemplary
environment 500B. In an embodiment, as shown in FIG. 5B, computing
device 520B may capture an image that includes an entity 105 that
may be associated with a privacy beacon, e.g., DCM beacon 110. In
an embodiment, the captured image, e.g., image 22, may be encrypted
into encrypted image 24 using a device-based encryption key. In an
embodiment, encrypted image 24 may be combined with beacon
metadata, e.g., beacon metadata 150, in an encrypted image data
block 160. In another embodiment, beacon metadata 150 may be
separate from encrypted image data block 160. In an embodiment, the
encrypted image 24 may be transmitted to a server device 530B by
encrypted image data transmitting module 180. In an embodiment, a
decryption determination module 532B may determine to decrypt the
image, e.g., in a process described in one or more of this and/or
previous applications incorporated by reference. In an embodiment,
server device 530B may include decryption module 534B, which may
apply a device-based decryption key to the encrypted image 24 to
generate decrypted image data. In an embodiment, client-based
encryption module 536B may apply a client-based encryption key to
the decrypted image data, to generate a client-based encrypted
image.
Referring again to FIG. 5B, in an embodiment, the client-based
encrypted image then may be transmitted to a device that is not the
computing device 520B, e.g., rather to other client-associated
computer device 550. Other client-associated computer device 550
may include client-based encrypted data receiving module 191 and/or
client-based encrypted image decryption module 194 which may
decrypt the client-based encrypted image, similarly to modules 190
and 195 of FIG. 5A, but not part of computing device 520B. In an
embodiment, computer device 550 may also be worn or carried by the
client, e.g., a smartphone carried by the client that was wearing
the wearable computer 520B. In an embodiment, computer device 550
may be remote from the client, e.g., the client's home computer. In
another embodiment, computer device 550 may be a shared server,
e.g., where the client stores images on the cloud. In an embodiment
similar to the one described above, the computing device 520B may
not possess the decrypted image at any point during the
process.
Referring again to FIG. 5B, similarly to FIG. 5A, in an embodiment,
one or more of the originally-captured image 22, the decrypted
image data in the decryption module 534B of server device 530B, and
the decrypted image data in the client-based encryption module 536B
may be identical. In another embodiment, the substantive portion of
the data (e.g., the color data) may be identical, and other data,
e.g., header data or compression data, may be different. In another
embodiment, the decrypted image data in the decryption module 534B
of server device 530B, and the decrypted image data in the
client-based encryption module 536B may be slightly different.
Referring now to FIG. 5C, FIG. 5C shows an exemplary implementation
of server device 230 as server device 530C operating in exemplary
environment 500C. In an embodiment, as shown in FIG. 5C, computing
device 520C may capture an image that includes an entity 105 that
may be associated with a privacy beacon, e.g., DCM beacon 110. In
an embodiment, the captured image, e.g., image 22, may be encrypted
into encrypted image 24 using a device-based encryption key. In an
embodiment, encrypted image 24 may be combined with beacon
metadata, e.g., beacon metadata 150, in an encrypted image data
block 160. In another embodiment, beacon metadata 150 may be
separate from encrypted image data block 160. In an embodiment, the
encrypted image 24 may be transmitted to a server device 530C by
encrypted image data transmitting module 180. In an embodiment, a
decryption determination module 532C may determine to decrypt the
image, e.g., in a process described in one or more of this and/or
previous applications incorporated by reference.
Referring again to FIG. 5C, in an embodiment, one or more of the
decision to decrypt the encrypted image 24, and the encrypted image
24 may be transmitted to a client-based encryption handling device
560. In an embodiment, client-based encryption handling device 560
may include decryption module 562, which may apply a device-based
decryption key to the encrypted image 24 to generate decrypted
image data. In an embodiment, client-based encryption module 564
may apply a client-based encryption key to the decrypted image
data, to generate a client-based encrypted image. In an embodiment,
the client-based encrypted image then may be transmitted back to
the computing device 520C, which may be a wearable computer, e.g.,
to client-based encrypted data receiving module 190. In an
embodiment, upon receipt of the client-based encrypted image, the
client-based encrypted image decryption module 195 may decrypt the
client-based encrypted image.
Referring again to FIG. 5C, similarly to FIG. 5A, in an embodiment,
one or more of the originally-captured image 22, the decrypted
image data in the decryption module 562 of client-based encryption
handling device 560, and the decrypted image data in the
client-based encryption module 564 may be identical. In another
embodiment, the substantive portion of the data (e.g., the color
data) may be identical, and other data, e.g., header data or
compression data, may be different. In another embodiment, the
decrypted image data in the decryption module 562 of client-based
encryption handling device 560, and the decrypted image data in the
client-based encryption module 564 may be slightly different.
Referring now to FIG. 5D, FIG. 5D shows an exemplary implementation
of server device 230 as server device 530D operating in exemplary
environment 500D. In an embodiment, as shown in FIG. 5D, computing
device 520D may capture an image that includes an entity 105 that
may be associated with a privacy beacon, e.g., DCM beacon 110. In
an embodiment, the captured image, e.g., image 22, may be encrypted
into encrypted image 24 using a device-based encryption key. In an
embodiment, encrypted image 24 may be combined with beacon
metadata, e.g., beacon metadata 150, in an encrypted image data
block 160. In another embodiment, beacon metadata 150 may be
separate from encrypted image data block 160. In an embodiment, the
encrypted image 24 may be transmitted to a server device 530D by
encrypted image data transmitting module 180. In an embodiment, a
decryption determination module 532D may determine to decrypt the
image, e.g., in a process described in one or more of this and/or
previous applications incorporated by reference. In an embodiment,
server device 530D may include decryption module 534D, which may
apply a device-based decryption key to the encrypted image 24 to
generate decrypted image data. In an embodiment, client-based
encryption module 536D may apply a client-based encryption key to
the decrypted image data, to generate a client-based encrypted
image.
Referring again to FIG. 5D, in an embodiment, the client-based
encrypted image then may be transmitted to a device that is not the
computing device 520D, e.g., rather to a social networking server
570 or file repository 570. In an embodiment, social networking
server 570 may include client-based encrypted data receiving module
192, similarly to client-based encrypted data receiving module 190
of FIG. 5A. In an embodiment, social networking server 570 may
include the client-based encrypted image decryption module 197,
which may be similar to the client-based encrypted image decryption
module 195 of FIG. 5A, and which may decrypt the client-based
encrypted image. In an embodiment, social networking server 570 may
automatically decrypt the image, and/or take one or more actions,
e.g., posting the image to a user's account, e.g., their "wall" on
Facebook, or a similar structure. In another embodiment, the social
networking server 570 may wait to decrypt the image, and/or to take
one or more actions with the image, until the client that captured
the image logs into the social networking service associated with
the social networking server.
Referring again to FIG. 5D, similarly to FIG. 5A, in an embodiment,
one or more of the originally-captured image 22, the decrypted
image data in the decryption module 534D of server device 530D, and
the decrypted image data in the client-based encryption module 536D
may be identical. In another embodiment, the substantive portion of
the data (e.g., the color data) may be identical, and other data,
e.g., header data or compression data, may be different. In another
embodiment, the decrypted image data in the decryption module 534D
of server device 530D, and the decrypted image data in the
client-based encryption module 536D may be slightly different.
Referring now to FIG. 6, FIG. 6 illustrates an exemplary
implementation of the image that contains a depiction of a feature
of a particular entity acquiring module 252. As illustrated in FIG.
6, the image that contains a depiction of a feature of a particular
entity acquiring module 252 may include one or more sub-logic modules
in various alternative implementations and embodiments. For
example, as shown in FIG. 6, e.g., FIG. 6A, in an embodiment,
module 252 may include one or more of image that contains a
depiction of a feature of a particular entity capture module 602
and image that contains a depiction of a feature of a particular
entity receiving module 606. In an embodiment, module 602 may
include image that contains a depiction of a feature of a
particular entity capture through use of an image capture component
module 604. In an embodiment, module 606 may include one or more of
image that contains a depiction of a feature of a particular entity
receiving from an image capture device module 608 and image that
contains a depiction of a feature of a particular entity receiving
at an image receipt device from an image capture device module 610.
In an embodiment, module 610 may include image that contains a
depiction of a feature of a particular entity receiving, from an
image capture device, at an image receipt device that is configured
to access an acquaintance database module 612. In an embodiment,
module 612 may include one or more of image that contains a
depiction of a feature of a particular entity receiving, from an
image capture device, at an image receipt device that is configured
to access a device contact list module 614 and image that contains
a depiction of a feature of a particular entity receiving, from an
image capture device, at an image receipt device that is configured
to access a social networking site friend list module 616.
Referring again to FIG. 6, e.g., FIG. 6B, in an embodiment, module
252 may include module 606 and module 610, as previously described.
In an embodiment, module 610 may include one or more of image that
contains a depiction of a feature of a particular entity receiving
at an image receipt device from an image capture device that is
configured to communicate on a same network as the image receipt
device module 618, image that contains a depiction of a feature of
a particular entity receiving at an image receipt device from an
image capture device that is configured to access one or more same
resources as the image receipt device module 620, image that
contains a depiction of a feature of a particular entity receiving
at an image receipt device from an image capture device that is
under common control as the image receipt device module 624, and
image that contains a depiction of a feature of a particular entity
receiving at an image receipt device from an image capture device
that has one or more properties in common with the image receipt
device module 626. In an embodiment, module 620 may include image
that contains a depiction of a feature of a particular entity
receiving at an image receipt device from an image capture device
that is configured to access one or more data storage resources as
the image receipt device module 622. In an embodiment, module 626
may include image that contains a depiction of a feature of a
particular entity receiving at an image receipt device from an
image capture device that has a same manufacturer as the image
receipt device module 628.
Referring again to FIG. 6, e.g., FIG. 6C, in an embodiment, module
252 may include one or more of image that contains a depiction of a
feature of a person acquiring module 630, image that contains a
depiction of a feature of a particular entity capturing module 634,
and access to the captured image inhibiting prior to obfuscation of
the at least the portion of the image module 636. In an embodiment,
module 630 may include image that contains a depiction of a face of
a person acquiring module 632. In an embodiment, module 636 may
include one or more of captured image storing at a limited-access
location prior to obfuscation of the at least the portion of the
image module 638 and access by one or more applications to the
captured image inhibiting prior to obfuscation of the at least the
portion of the image module 640. In an embodiment, module 640 may
include access by social network interaction applications to the
captured image inhibiting prior to obfuscation of the at least the
portion of the image module 642.
Referring now to FIG. 7, FIG. 7 illustrates an exemplary
implementation of identification data related to an identity of the
particular entity for which the depiction of the feature of the
particular entity is present in the image attaining module 254. As
illustrated in FIG. 7, the identification data related to an
identity of the particular entity for which the depiction of the
feature of the particular entity is present in the image attaining
module 254 may include one or more sub-logic modules in various
alternative implementations and embodiments. For example, as shown
in FIG. 7, e.g., FIG. 7A, in an embodiment, module 254 may
include one or more of identification data related to an identity
of the particular entity for which the depiction of the feature of
the entity is present in the image receiving module 702,
identification data that uniquely identifies the particular entity
for which the depiction of the feature of the entity is present in
the image attaining module 712, and identification data related to
an identity of each of one or more entities that includes the
particular entity for which the depiction of the feature of the
particular entity is present in the image attaining module 714. In
an embodiment, module 702 may include one or more of identification
data related to an identity of the particular entity for which the
depiction of the feature of the entity is present in the image
receiving with the image data module 704, unique name of an
identity of the particular entity for which the depiction of the
feature of the entity is present in the image receiving module 708,
and assigned identification number of the particular entity for
which the depiction of the feature of the entity is present in the
image receiving module 710. In an embodiment, module 704 may
include identification data related to an identity of the
particular entity for which the depiction of the feature of the
entity is present in the image receiving as metadata with the image
data module 706.
Referring again to FIG. 7, e.g., FIG. 7B, in an embodiment, module
254 may include particular entity for which the depiction of the
feature of the particular entity is present in the image
identifying module 716. In an embodiment, module 716 may include
one or more of particular entity for which the depiction of the
feature of the particular entity is present in the image
identifying through facial identification module 718, particular
entity for which the depiction of the feature of the particular
entity is present in the image identifying through analysis of one
or more previously captured images module 720, and particular
entity for which the depiction of the feature of the particular
entity is present in the image identifying through analysis of
metadata of the image module 722. In an embodiment, module 722 may
include particular entity for which the depiction of the feature of
the particular entity is present in the image identifying through
analysis of image tag metadata of the image module 724. In an
embodiment, module 724 may include particular entity for which the
depiction of the feature of the particular entity is present in the
image identifying through analysis of image tag metadata inputted
by a user of an image capture device that captured the image module
726.
Referring again to FIG. 7, e.g., FIG. 7C, in an embodiment, module
254 may include one or more of identification data related to an
identity of the particular entity for which the depiction of the
feature of the particular entity is present in the image requesting
module 728 and identification data related to an identity of the
particular entity for which the depiction of the feature of the
particular entity is present in the image receiving module 730. In
an embodiment, module 728 may include one or more of identification
data related to an identity of the particular entity for which the
depiction of the feature of the particular entity is present in the
image requesting from an image capture device that captured the
image module 732, identification data related to an identity of the
particular entity for which the depiction of the feature of the
particular entity is present in the image requesting from a user of
an image capture device that captured the image module 734, and
identification data related to an identity of the particular entity
for which the depiction of the feature of the particular entity is
present in the image requesting from an external resource module
736. In an embodiment, module 736 may include one or more of
identification data related to an identity of the particular entity
for which the depiction of the feature of the particular entity is
present in the image requesting from a social networking site
module 738 and identification data related to an identity of the
particular entity for which the depiction of the feature of the
particular entity is present in the image requesting from an image
management site module 740. In an embodiment, module 730 may
include one or more of unique identification data related to an
identity of the particular entity for which the depiction of the
feature of the particular entity is present in the image receiving
module 742, nonunique identification data related to an identity of
the particular entity for which the depiction of the feature of the
particular entity is present in the image receiving module 744, and
identification data related to a group to which the particular
entity for which the depiction of the feature of the particular
entity is present in the image belongs receiving module 746.
Referring again to FIG. 7, e.g., FIG. 7D, in an embodiment, module
254 may include one or more of identification data that describes
whether the particular entity is recognizable by a device that
captured the image related to an identity of the particular entity
for which the depiction of the feature of the particular entity is
present in the image attaining module 748, identification data
related to whether a privacy beacon is associated with the
particular entity for which the depiction of the feature of the
particular entity is present in the image attaining module 752,
privacy beacon associated with the particular entity for which the
depiction of the feature of the particular entity is present in the
image detecting in the image module 754, and particular entity
identifying at least partially through use of the detected privacy
beacon module 756. In an embodiment, module 748 may include
identification data that describes whether the particular entity is
recognizable by an entity that controls the device that captured
the image related to an identity of the particular entity for which
the depiction of the feature of the particular entity is present in
the image attaining module 750. In an embodiment, module 756 may
include one or more of particular entity identification from
analysis of the detected privacy beacon module 758 and particular
entity identification retrieving from a database through use of
index data derived from the detected privacy beacon module 760.
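By way of illustration only, modules 758 and 760 might be sketched as follows; the beacon payload format and the index database are hypothetical, since the disclosure contemplates many beacon types.

```python
# Non-limiting sketch: an entity identified either from data carried in a
# detected privacy beacon itself (module 758), or by using index data
# derived from the beacon to query a database (module 760).
from typing import Optional

BEACON_INDEX = {"b-1138": "Jules Caesar"}   # hypothetical index -> identity

def identify_from_beacon(payload: str) -> Optional[str]:
    if payload.startswith("id:"):
        # Module 758: the beacon payload itself encodes the identification.
        return payload[3:]
    # Module 760: treat the payload as index data and look it up.
    return BEACON_INDEX.get(payload)
```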
Referring now to FIG. 8, FIG. 8 illustrates an exemplary
implementation of relation data that describes a relation between
the particular entity and a device that facilitated the acquisition
of the image that contains the depiction of the feature of the
particular entity obtaining module, wherein the relation may be
nonextant 256. As illustrated in FIG. 8, the relation data that
describes a relation between the particular entity and a device
that facilitated the acquisition of the image that contains the
depiction of the feature of the particular entity obtaining module,
wherein the relation may be nonextant 256 may include one or more
sub-logic modules in various alternative implementations and
embodiments. For example, as shown in FIG. 8, e.g., FIG. 8A, in an
embodiment, module 256 may include one or more of relation data
that describes whether data about the particular entity is stored
on the device that facilitated the acquisition of the image that
contains the depiction of the feature of the particular entity
obtaining module 802, relation data that describes whether data
about the particular entity is accessible to the device that
facilitated the acquisition of the image that contains the
depiction of the feature of the particular entity obtaining module
804, relation data that describes whether a name of the particular
entity is stored on the device that facilitated the acquisition of
the image that contains the depiction of the feature of the
particular entity obtaining module 806, and relation data that
describes whether the depicted feature of the particular entity has
previously been depicted in one or more previously-captured images
associated with the device that facilitated the acquisition of the
image that contains the depiction of the feature of the particular
entity obtaining module 812. In an embodiment, module 806 may
include one or more of relation data that describes whether a name
of the particular entity is stored in a contact list associated
with the device that facilitated the acquisition of the image that
contains the depiction of the feature of the particular entity
obtaining module 808 and relation data that describes whether a
name of the particular entity is stored in a friend list accessible
to the device that facilitated the acquisition of the image that
contains the depiction of the feature of the particular entity
obtaining module 810.
Referring again to FIG. 8, e.g., FIG. 8B, in an embodiment, module
256 may include one or more of relation data that describes whether
the depicted feature of the particular entity has previously been
depicted in one or more previously-captured images captured by the
device that facilitated the acquisition of the image that contains
the depiction of the feature of the particular entity obtaining
module 814, relation data that describes whether the particular
entity is known to the device that facilitated the acquisition of
the image that contains the depiction of the feature of the
particular entity obtaining module 816, relation data that
describes whether the particular entity is known to a control
entity that controls the device that facilitated the acquisition of
the image that contains the depiction of the feature of the
particular entity obtaining module 818, and relation data that
describes a relation between the particular entity and a device
that facilitated the acquisition of the image that contains the
depiction of the feature of the particular entity querying module,
wherein the relation may be nonextant 820.
Referring again to FIG. 8, e.g., FIG. 8C, in an embodiment, module
256 may include one or more of inputted identification of the
particular entity at the device that facilitated the acquisition of
the image receiving module 822 and inputted identification of the
particular entity at the device and an obtained identity of the
particular entity obtained from a remote location comparison for
determining the relation data that describes the relation between
the particular entity and the device module 824. In an embodiment,
module 824 may include one or more of identification of the
particular entity from the remote location obtaining module 826 and
inputted identification of the particular entity and obtained
identification of the particular entity from the remote location
comparing module 828. In an embodiment, module 826 may include one
or more of identification of the particular entity from a facial
recognition database obtaining module 830, identification of the
particular entity from a social network site obtaining module 832,
and identification of the particular entity from a public image
repository obtaining module 834.
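A non-limiting sketch of modules 824 through 834 follows. The fetcher callables stand in for a facial recognition database, a social network site, and a public image repository; their names and signatures are assumptions made for the sketch.

```python
# Reconciling an inputted identification with one obtained remotely.
from typing import Callable, Iterable, Optional

def obtain_remote_identity(image_bytes: bytes,
                           fetchers: Iterable[Callable[[bytes], Optional[str]]]
                           ) -> Optional[str]:
    for fetch in fetchers:              # cf. modules 830, 832, and 834
        identity = fetch(image_bytes)
        if identity is not None:
            return identity
    return None

def compare_identifications(inputted: str, remote: Optional[str]) -> dict:
    # Module 828: agreement between the two sources is itself relation
    # data usable in the downstream obfuscation decision.
    return {"inputted": inputted, "remote": remote,
            "match": remote is not None and inputted == remote}
```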
Referring again to FIG. 8, e.g., FIG. 8D, in an embodiment, module
256 may include one or more of relation data that describes a
relation between the particular entity and an image capture device
that captured the image that contains the depiction of the feature
of the particular entity obtaining module, wherein the relation may
be nonextant 836, relation data that describes a relation between
the particular entity and a receiver device that received the image
that was captured by an image capture device that captured the
image that contains the depiction of the feature of the particular
entity obtaining module, wherein the relation may be nonextant 838,
and relation data that describes a relation between the particular
entity and a device that facilitated the acquisition of the image
that contains the depiction of the feature of the particular entity
receiving from a device user module, wherein the relation may be
nonextant 844. In an embodiment, module 838 may include one or more
of relation data that describes a relation between the particular
entity and a smartphone device that received the image that was
captured by an image capture device that captured the image that
contains the depiction of the feature of the particular entity
obtaining module, wherein the relation may be nonextant 840 and
relation data that describes a relation between the particular
entity and a remote server device that received the image that was
captured by an image capture device that captured the image that
contains the depiction of the feature of the particular entity
obtaining module, wherein the relation may be nonextant 842.
Referring again to FIG. 8, e.g., FIG. 8E, in an embodiment, module
256 may include one or more of relation data that describes a
relation between the particular entity and a device that
facilitated the acquisition of the image that contains the
depiction of the feature of the particular entity receiving from
the device that facilitated the acquisition of the image module,
wherein the relation may be nonextant 846 and relation data that
describes a relation between the particular entity and a control
entity that controls the device that facilitated the acquisition of
the image that contains the depiction of the feature of the
particular entity receiving from the device that facilitated the
acquisition of the image module, wherein the relation may be
nonextant 848.
Referring now to FIG. 9, FIG. 9 illustrates an exemplary
implementation of obfuscation of a particular portion of the image,
wherein the depiction of the feature of the particular entity is
excluded from the particular portion of the image when the relation
data indicates that the relation between the particular entity and
the device that facilitated the acquisition of the image that
contains the depiction of the feature of the particular entity is
extant performing module 258. As illustrated in FIG. 9, the
obfuscation of a particular portion of the image, wherein the
depiction of the feature of the particular entity is excluded from
the particular portion of the image when the relation data
indicates that the relation between the particular entity and the
device that facilitated the acquisition of the image that contains
the depiction of the feature of the particular entity is extant
performing module 258 may include one or more sub-logic modules in
various alternative implementations and embodiments. For example,
as shown in FIG. 9, e.g., FIG. 9A, in an embodiment, module 258 may
include one or more of the particular portion of the image that
includes the depiction of the feature of the particular entity
selecting for obfuscation when the relation data indicates that the
relation between the particular entity and the device that
facilitated the acquisition of the image that contains the
depiction of the feature of the particular entity is absent module
902 and obfuscation of the selected portion of the image performing
module 904. In an embodiment, module 904 may include one or more of
image manipulation to reduce image clarity of the selected portion
of the image performing module 906, noise addition to the selected
portion of the image performing module 910, and image obscuration
function execution on the selected portion of the image performing
module 912. In an embodiment, module 906 may include image
manipulation to reduce image clarity of the selected portion of the
image below a threshold level at which a particular facial
recognition algorithm is capable of execution performing module
908.
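Module 908 might be realized, purely as a sketch, by degrading the selected region until a chosen face detector no longer fires; OpenCV's GaussianBlur and CascadeClassifier are used here for illustration, the cascade file path is a placeholder, and equating "recognition" with Haar-cascade detection is an assumption of the sketch.

```python
# Blur the selected portion until clarity falls below the threshold at
# which the particular facial recognition algorithm is capable of
# execution. Expects a BGR uint8 image region, as OpenCV convention.
import cv2

def blur_below_threshold(region, cascade="haarcascade_frontalface_default.xml"):
    detector = cv2.CascadeClassifier(cascade)
    kernel = 3
    while kernel < 99:
        blurred = cv2.GaussianBlur(region, (kernel, kernel), 0)
        gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)
        if len(detector.detectMultiScale(gray)) == 0:
            return blurred           # detector can no longer execute
        kernel += 2                  # Gaussian kernel sizes must be odd
    return cv2.GaussianBlur(region, (99, 99), 0)
```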
Referring again to FIG. 9, e.g., FIG. 9B, in an embodiment, module
258 may include one or more of obfuscation of total image, when the
relation data indicates that the relation between the particular
entity and the device that facilitated the acquisition of the image
that contains the depiction of the feature of the particular entity
is absent performing module 914 and obfuscation of a particular
portion of the image that contains a further entity, wherein the
depiction of the feature of the particular entity is excluded from
the particular portion of the image when the relation data
indicates that the relation between the particular entity and the
device that facilitated the acquisition of the image that contains
the depiction of the feature of the particular entity is extant
performing module 916.
Referring again to FIG. 9, e.g., FIG. 9C, in an embodiment, module
258 may include one or more of particular portion of the image that
contains the depiction of the feature of the particular entity
identifying module 918, obfuscation of a further portion of the
image that is other than the identified particular portion of the
image that contains the depiction of the feature of the particular
entity performing module 920, and determination regarding whether
to perform obfuscation of the particular portion of the image, at
least partly based on the relation data performing module 922. In
an embodiment, module 920 may include one or more of obfuscation of
a further portion of the image that depicts a feature of one or
more entities other than the particular entity performing module
924 and obfuscation of an entire portion of the image that is other
than the identified particular portion of the image that contains
the depiction of the feature of the particular entity performing
module 928. In an embodiment, module 924 may include obfuscation of
a further portion of the image that depicts a feature of one or
more entities other than the particular entity for which a privacy
beacon is detected performing module 926. In an embodiment, module
922 may include one or more of obfuscation of the particular
portion of the image when the relation data indicates that the
relation between the particular entity and the device that
facilitated the acquisition of the image that contains the
depiction of the feature of the particular entity is absent
performing module 930 and obfuscation of the particular portion of
the image when the relation data indicates that the relation
between the particular entity and the device that facilitated the
acquisition of the image that contains the depiction of the feature
of the particular entity is extant avoiding module 932.
Referring now to FIG. 10, FIG. 10 shows operation 1000, e.g., an
example operation of server device 230 operating in an environment
200. In an embodiment, operation 1000 may include operation 1002
depicting acquiring an image that includes a depiction of a feature
of one or more entities. For example, FIG. 2, e.g., FIG. 2G, shows
image that contains a depiction of a feature of a particular entity
acquiring module 252 acquiring (e.g., obtaining, receiving,
calculating, selecting from a list or other data structure,
retrieving, or receiving information regarding, performing
calculations to find out, retrieving data that indicates, receiving
notification, receiving information that leads to an inference,
whether by human or automated process, or being party to any action
or transaction that results in informing, inferring, or deducing,
including but not limited to circumstances
without absolute certainty, including more-likely-than-not and/or
other thresholds) an image (e.g., a description of a graphic
picture that is a visual representation of something, regardless of
whether that something is coherent, nonsensical, abstract, or
otherwise) that includes a depiction (e.g., a form of, e.g.,
pixels, vector maps, instructions for recreating, a set of
brightness and color values, and the like) of a feature (e.g., a
body, a part of a body, a thing carried by a body, a thing worn by
a body, a thing possessed by a body, where the body is not
necessarily human, living, or animate) of one or more entities
(e.g., one or more of a thing, e.g., a person, a rock, a deer,
anything that has separate and distinct existence and objective or
conceptual reality).
Referring again to FIG. 10, operation 1000 may include operation
1004 depicting attaining identification of a particular entity of
the one or more entities for which the depiction of the feature is
present in the image. For example, FIG. 2, e.g., FIG. 2G, shows
identification data related to an identity of the particular entity
for which the depiction of the feature of the particular entity is
present in the image attaining module 254 attaining (e.g.,
obtaining, receiving, calculating, selecting from a list or other
data structure, retrieving, or receiving information regarding,
performing calculations to find out, retrieving data that
indicates, receiving notification, receiving information that leads
to an inference, whether by human or automated process, or being
party to any action or transaction that results in informing,
inferring, or deducing, including but not limited to circumstances
without absolute certainty, including more-likely-than-not and/or
other thresholds) identification (e.g., one or more pieces of
information regarding, where the information is in the form of
data, including, but not limited to, a name, a characteristic of, a
property of, a number identifying, a name of a group to which the
entity belongs, a fact about the entity, and a binary signal of
whether the entity is known to another particular entity or to a
centralized database, etc.) of a particular entity (e.g., a
specifically-identified thing, e.g., a person, for which a feature
is depicted in the image, e.g., "Jules Caesar" in the previous
examples) for which the depiction (e.g., a form of, e.g., pixels,
vector maps, instructions for recreating, a set of brightness and
color values, and the like) of a feature (e.g., a body, a part of a
body, a thing carried by a body, a thing worn by a body, a thing
possessed by a body, where the body is not necessarily human,
living, or animate) of the entity (e.g., one or more of a thing,
e.g., a person, a rock, a deer, anything that has separate and
distinct existence and objective or conceptual reality) is present
in the image (e.g., a description of a graphic picture that is a
visual representation of something, regardless of whether that
something is coherent, nonsensical, abstract, or otherwise).
Referring again to FIG. 10, operation 1000 may include operation
1006 depicting obtaining relationship data that indicates whether
the particular entity has a relationship with a device that
facilitated acquisition of the image. For example, FIG. 2, e.g.,
FIG. 2G, shows relation data that describes a relation between the
particular entity and a device that facilitated the acquisition of
the image that contains the depiction of the feature of the
particular entity obtaining module, wherein the relation may be
nonextant 256 obtaining (e.g., acquiring, receiving, calculating,
selecting from a list or other data structure, retrieving, or
receiving information regarding, performing calculations to find
out, retrieving data that indicates, receiving notification,
receiving information that leads to an inference, whether by human
or automated process, or being party to any action or transaction
that results in informing, inferring, or deducing,
including but not limited to circumstances without absolute
certainty, including more-likely-than-not and/or other thresholds)
relationship data (e.g., data, which may be in any format or of any
complexity, including yes/no, e.g., binary data, and also including
complex data structures or databases, and everything in between)
that indicates whether the particular entity (e.g., a
specifically-identified thing, e.g., a person, for which a feature
is depicted in the image, e.g., "Jules Caesar" in the previous
examples) has a relationship (e.g., this term in this application
means there is some connection between the entity, e.g., Jules
Caesar, and/or one of the devices associated with Jules Caesar, and
the device, or an entity that controls the device, for example,
Jules Caesar and the person taking the image are friends, or Jules
Caesar appears in a contact list or friend list of the device or a
database accessible by the device, or similar) with a device (e.g.,
this could be an image capture device, e.g., as in FIGS. 2A-2D, an
image receipt device, as in FIG. 2E, a remote computer device,
e.g., as in FIG. 2F, or any other device that is capable of
capturing images or receiving captured images) that facilitated
acquisition (e.g., captured the image, received at least a portion
of the captured image, or facilitated one or more steps to further
the execution of capturing the image or receiving the captured
image) of the image (e.g., a description of a graphic picture that
is a visual representation of something, regardless of whether that
something is coherent, nonsensical, abstract, or otherwise).
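As a non-limiting sketch, the relationship data of operation 1006 may reduce to a binary check against data the device can reach; the Device fields below are stand-ins for the contact lists, friend lists, and previously captured images discussed with respect to FIG. 8.

```python
# Deriving yes/no relationship data from sources the device can access.
from dataclasses import dataclass, field

@dataclass
class Device:
    contact_list: set = field(default_factory=set)
    friend_list: set = field(default_factory=set)
    previously_depicted: set = field(default_factory=set)

def obtain_relationship_data(entity: str, device: Device) -> bool:
    return (entity in device.contact_list             # cf. module 808
            or entity in device.friend_list           # cf. module 810
            or entity in device.previously_depicted)  # cf. module 812
```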
Referring again to FIG. 10, operation 1000 may include operation
1008 depicting performing obfuscation on at least a portion of the
image, wherein the depiction of the feature of the particular
entity is excluded from the obfuscation when the obtained
relationship data indicates that the particular entity has the
relationship with the device that facilitated the acquisition of
the image. For example, FIG. 2, e.g., FIG. 2G, shows obfuscation of
a particular portion of the image, wherein the depiction of the
feature of the particular entity is excluded from the particular
portion of the image when the relation data indicates that the
relation between the particular entity and the device that
facilitated the acquisition of the image that contains the
depiction of the feature of the particular entity is extant
performing module 258 performing (e.g., executing one or more steps
that are intended to achieve the furtherance of) obfuscation (e.g.,
obscuring, hiding, covering, making more difficult to read or
process, by any known or future technique, including, but not
limited to, blurring, obscuring, pixelating, noisifying, covering,
deleting, hiding, moving, scrambling, etc.) on at least a portion
of the image (e.g., the description of a graphic picture that is a
visual representation of something, regardless of whether that
something is coherent, nonsensical, abstract, or otherwise),
wherein the depiction of the feature (e.g., a body, a part of a
body, a thing carried by a body, a thing worn by a body, a thing
possessed by a body, where the body is not necessarily human,
living, or animate) of the particular entity (e.g., one or more of
a thing, e.g., a person, a rock, a deer, anything that has separate
and distinct existence and objective or conceptual reality, e.g.,
in this case, a specifically-identified thing, e.g., a person, for
which a feature is depicted in the image, e.g., "Jules Caesar" in
the previous examples) is excluded (e.g., is not performed) from
the obfuscation (e.g., the obscuring, hiding, covering, making more
difficult to read or process, by any known or future technique,
including, but not limited to, blurring, obscuring, pixelating,
noisifying, covering, deleting, hiding, moving, scrambling, etc.)
when the obtained relationship data (e.g., data, which may be in
any format or of any complexity, including yes/no, e.g., binary
data, and also including complex data structures or databases, and
everything in between, that is in any way related to some
connection between the entity, e.g., Jules Caesar, and/or one of
the devices associated with Jules Caesar, and the device, or an
entity that controls the device, for example, Jules Caesar and the
person taking the image are friends, or Jules Caesar appears in a
contact list or friend list of the device or a database accessible
by the device, or similar) indicates that the particular entity
(e.g., one or more of a thing, e.g., a person, a rock, a deer,
anything that has separate and distinct existence and objective or
conceptual reality, e.g., in this case, a specifically-identified
thing, e.g., a person, for which a feature is depicted in the
image, e.g., "Jules Caesar" in the previous examples) has the
relationship (e.g., this term in this application means there is
some connection between the entity, e.g., Jules Caesar, and/or one
of the devices associated with Jules Caesar, and the device, or an
entity that controls the device, for example, Jules Caesar and the
person taking the image are friends, or Jules Caesar appears in a
contact list or friend list of the device or a database accessible
by the device, or similar) with the device (e.g., this could be an
image capture device, e.g., as in FIGS. 2A-2D, an image receipt
device, as in FIG. 2E, a remote computer device, e.g., as in FIG.
2F, or any other device that is capable of capturing images or
receiving captured images) that facilitated acquisition (e.g.,
captured the image, received at least a portion of the captured
image, or facilitated one or more steps to further the execution of
capturing the image or receiving the captured image) of the image
(e.g., a description of a graphic picture that is a visual
representation of something, regardless of whether that something
is coherent, nonsensical, abstract, or otherwise).
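Operation 1008 as a whole might be sketched as below; pixelation stands in for any of the obfuscation techniques named above (blurring, noisifying, covering, scrambling, etc.), and the detected regions and relationship data are assumed to come from operations 1002 through 1006.

```python
# Obfuscate flagged regions while excluding the depiction of any entity
# whose relationship with the acquiring device is extant.
import numpy as np

def pixelate(img: np.ndarray, x: int, y: int, w: int, h: int, block: int = 16):
    patch = img[y:y+h, x:x+w]
    small = patch[::block, ::block]              # coarse downsample
    img[y:y+h, x:x+w] = np.repeat(
        np.repeat(small, block, axis=0), block, axis=1)[:h, :w]

def perform_obfuscation(img, regions, relationship):
    # regions: entity name -> (x, y, w, h); relationship: name -> bool.
    for name, (x, y, w, h) in regions.items():
        if not relationship.get(name, False):
            pixelate(img, x, y, w, h)   # no relation: obfuscate
        # extant relation: the depiction is excluded from obfuscation
    return img
```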
An example terms of service is listed below as numbered paragraphs
1-5. Many other variations of terms of service are known and used
in click-through agreements that are common at the time of filing,
and the example herein is intended to be exemplary only and not
limiting in any way.
1. By capturing an image of any part of the user Jules Caesar
(hereinafter "Image"), or providing any automation, design,
resource, assistance, or other facilitation in the capturing of the
Image, you agree that you have captured these Terms of Service and
that you acknowledge and agree to them. If you cannot agree to
these Terms of Service, you should immediately delete the captured
Image. Failure to do so will constitute acceptance of these Terms
of Service.
2. The User Jules Caesar owns all of the rights associated with the
Image and any representation of any part of Jules Caesar
therein.
3. By capturing the Image, you agree to provide the User Jules
Caesar just compensation for any commercialization of the User's
personality rights that may be captured in the Image.
4. By capturing the Image, you agree to take all reasonable actions
to track the Image and to provide an accounting of all
commercialization attempts related to the Image, whether successful
or not.
5. By capturing the Image, you accept a Liquidated Damages
agreement in which unauthorized use of the Image will result in
mandatory damages of at least, but not limited to, $1,000,000.
A privacy beacon may include, but is not limited to, one or more of
a marker that reflects light in a visible spectrum, a marker that
reflects light in a nonvisible spectrum, a marker that emits light
in a visible spectrum, a marker that emits light in a nonvisible
spectrum, a marker that emits a radio wave, a marker that, when a
particular type of electromagnetic wave hits it, emits a particular
electromagnetic wave, an RFID tag, a marker that uses near-field
communication, a marker that is in the form of a bar code, a marker
that is in the form of a bar code and painted on a user's head and
that reflects light in a nonvisible spectrum, a marker that uses
high frequency low penetration radio waves (e.g., 60 GHz radio
waves), a marker that emits a particular thermal signature, a
marker that is worn underneath clothing and is detectable by an
x-ray-type detector, a marker that creates a magnetic field, a
marker that emits a sonic wave, a marker that emits a sonic wave at
a frequency that cannot be heard by humans, a marker that is
tattooed to a person's bicep and is detectable through clothing, a
marker that is a part of a user's cellular telephone device, a
marker that is broadcast by a part of a user's cellular telephone
device, a marker that is broadcast by a keychain carried by a
person, a marker mounted on a drone that maintains a particular
proximity to the person, a marker mounted in eyeglasses, a marker
mounted in a hat, a marker mounted in an article of clothing, a
shape of the person's face that is registered as the beacon, a
feature of a person that is registered as the beacon, a marker
displayed on a screen, a marker in the form of an LED, a marker
embedded on a page or in a book, a string of text or data that
serves as a marker,
a marker embedded or embossed onto a device, and the like.
FIGS. 11A-11C depict various implementations of operation 1002,
depicting acquiring an image that includes a depiction of a feature
of one or more entities according to embodiments. Referring now to
FIG. 11A, operation 1002 may include operation 1102 depicting
capturing the image that includes the depiction of the feature of
one or more entities. For example, FIG. 6, e.g., FIG. 6A, shows
image that contains a depiction of a feature of a particular entity
capture module 602 capturing the image (e.g., an image of two
people on a park bench) that includes the depiction of the feature
(e.g., a face and shoulders) of one or more entities (e.g., one of
the two people on the park bench).
Referring again to FIG. 11A, operation 1102 may include operation
1104 depicting capturing the image that includes the depiction of
the feature of one or more entities, through use of an image
capture component of an image capture device. For example, FIG. 6,
e.g., FIG. 6A, shows image that contains a depiction of a feature
of a particular entity capture through use of an image capture
component module 604 capturing the image (e.g., an image of two
people eating in a restaurant) that includes the depiction of the
feature (e.g., a face) of one or more entities (e.g., one of the
two people eating at the restaurant), through use of an image
capture component (e.g., a CMOS or CCD sensor of a digital camera
that is part of a smartphone or a wearable computer, that can be
configured to capture images or video or both) of an image capture
device (e.g., a wearable computer, e.g., Google Glass).
Referring again to FIG. 11A, operation 1002 may include operation
1106 depicting receiving the image that includes the depiction of
the feature of one or more entities. For example, FIG. 6, e.g.,
FIG. 6A, shows image that contains a depiction of a feature of a
particular entity receiving module 606 receiving (e.g., acquiring,
at a device or entity that was not responsible for the capturing of
the image) the image (e.g., an image of a person at a bar) that
includes the depiction of the feature (e.g., a face) of one or more
entities (e.g., the person at the bar).
Referring again to FIG. 11A, operation 1106 may include operation
1108 depicting receiving the image that includes the depiction of
the feature of one or more entities from an image capture device.
For example, FIG. 6, e.g., FIG. 6A, shows image that contains a
depiction of a feature of a particular entity receiving from an
image capture device module 608 receiving the image (e.g., an image
of two people waiting for a bus) that includes the depiction of the
feature (e.g., a face) of one or more entities (e.g., the two
people waiting for the bus) from an image capture device (e.g., a
camera mounted in a smartphone device).
Referring again to FIG. 11A, operation 1106 may include operation
1110 depicting receiving, at an image receipt device, the image
that includes the depiction of the feature of one or more entities
from an image capture device that is linked to the image receipt
device. For example, FIG. 6, e.g., FIG. 6A, shows image that
contains a depiction of a feature of a particular entity receiving
at an image receipt device from an image capture device module 610
receiving, at an image receipt device (e.g., a user's smart phone),
the image (e.g., an image of people on a camping trip) that
includes the depiction of the feature (e.g., a face) of one or more
entities (e.g., one or more persons on the camping trip) from an
image capture device (e.g., a head-mounted camera) that is linked
to the image receipt device (e.g., the smartphone of the user who
is wearing the head-mounted camera).
Referring again to FIG. 11A, operation 1110 may include operation
1112 depicting receiving, at the image receipt device that has
access to an acquaintance database, the image that includes the
depiction of the feature of the one or more entities from the image
capture device that is linked to the image receipt device. For
example, FIG. 6, e.g., FIG. 6A, shows image that contains a
depiction of a feature of a particular entity receiving, from an
image capture device, at an image receipt device that is configured
to access an acquaintance database module 612 receiving, at the
image receipt device (e.g., a user's smartphone) that has access to
an acquaintance database (e.g., a database that includes names of
one or more people, e.g., a contact list, a friend list, a text
file of names, a phone directory, an email contact list, a list of
all the persons that have contacted the device through instant
messaging, text messaging, e-mail, voice calls, and the like), the
image that includes the depiction of the feature of the one or more
entities (e.g., a face of one or more persons) from the image
capture device (e.g., a wearable computer, e.g., an EyeTap device)
that is linked to the image receipt device (e.g., the user's
smartphone).
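A minimal sketch of the acquaintance database described in operation 1112 follows, built as the union of whatever name sources the image receipt device is configured to access; the example sources are placeholders.

```python
# An acquaintance database as the union of several name sources.
from itertools import chain

def build_acquaintance_database(*sources):
    # Each source is any iterable of names: a device contact list, a
    # social networking friend list, a phone directory, e-mail or
    # instant-messaging correspondents, and the like.
    return set(chain.from_iterable(sources))

acquaintances = build_acquaintance_database(
    ["Jules Caesar"],        # contact list
    ["Marc Antony"],         # social networking friend list
    ["Cleopatra"],           # messaging correspondents
)
```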
Referring again to FIG. 11A, operation 1112 may include operation
1114 depicting receiving, at the image receipt device that has
access to a contact list of a user of the image receipt device, the
image that includes the depiction of the feature of the one or more
entities from the image capture device that is linked to the image
receipt device. For example, FIG. 6, e.g., FIG. 6A, shows image
that contains a depiction of a feature of a particular entity
receiving, from an image capture device, at an image receipt device
that is configured to access a device contact list module 614
receiving, at the image receipt device (e.g., a
remote server managed by a manufacturer of the user's wearable
computer) that has access to a contact list (e.g., a list of
persons for which the entity has information) of a user of the
image receipt device (e.g., a person that has access to the remote
server), the image that includes the depiction of the feature of
the one or more entities (e.g., a face of the person) from the
image capture device (e.g., a wearable computer, e.g., a LifeLog
device) that is linked to the image receipt device (e.g., the
remote server managed by the manufacturer of the user's wearable
computer).
Referring again to FIG. 11A, operation 1112 may include operation
1116 depicting receiving, at the image receipt device that has a
friend list of one or more entities known to a user of the image
receipt device through a social networking site, the image that
includes the depiction of the feature of the one or more entities
from the image capture device that is linked to the image receipt
device. For example, FIG. 6, e.g., FIG. 6A, shows image that
contains a depiction of a feature of a particular entity receiving,
from an image capture device, at an image receipt device that is
configured to access a social networking site friend list module
616 receiving, at the image receipt device (e.g., a laptop device
of a user that is sitting in a coffee shop, wearing a wearable
computer) that has a friend list (e.g., a list of "contacts"
associated with a social network site, e.g., Facebook) of one or
more entities (e.g., persons, companies, products, restaurants,
anything that has a presence on social media) known to a user of
the image receipt device (e.g., the person using the laptop,
sitting in the coffee shop), the image (e.g., a picture of two
people having breakfast at the coffee shop, three tables over from
where the person using the laptop is located) that includes the
depiction of the feature (e.g., a profile image of one of the two
people having breakfast at the coffee shop) of the one or more
entities (e.g., one of the two people having breakfast at the
coffee shop) from the image capture device (e.g., the wearable
computer worn by the user in the coffee shop) that is linked (e.g.,
the laptop device is capable of communicating with the wearable
computer) to the image receipt device (e.g., the laptop
device).
Referring now to FIG. 11B, operation 1110 may include operation
1118 depicting receiving, at the image receipt device, the image
that includes the depiction of the feature of one or more entities
from the image capture device that communicates on a same network
as the image receipt device. For example, FIG. 6, e.g., FIG. 6B,
shows image that contains a depiction of a feature of a particular
entity receiving at an image receipt device from an image capture
device that is configured to communicate on a same network as the
image receipt device module 618 receiving, at the image receipt
device (e.g., a home computer of a user that is out at a baseball
game), the image that includes the depiction of the feature of one
or more entities (e.g., a person watching the baseball game) from
the image capture device (e.g., a wearable computer, e.g., a
hypothetical Apple-branded wearable computer, e.g., "iGlasses")
that communicates on a same network (e.g., a wireless network) as
the image receipt device (e.g., a home computer).
Referring again to FIG. 11B, operation 1110 may include operation
1120 depicting receiving, at an image receipt device, the image
that includes the depiction of the feature of one or more entities
from an image capture device that shares one or more resources with
the image receipt device. For example, FIG. 6, e.g., FIG. 6B, shows
image that contains a depiction of a feature of a particular entity
receiving at an image receipt device from an image capture device
that is configured to access one or more same resources as the
image receipt device module 620 receiving, at an image receipt
device (e.g., a smartphone of a mother, where the image capture
device is a wearable computer or a camera operated by her child),
the image that includes the depiction of the feature of one or more
entities (e.g., three people at the mall) from an image capture
device (e.g., a smartphone carried by a child) that shares one or
more resources (e.g., the two devices share a common data plan, or
access a common share drive, or have the same application installed
on them) with the image receipt device (e.g., the mother's
smartphone).
Referring again to FIG. 11B, operation 1120 may include operation
1122 depicting receiving, at an image receipt device, the image
that includes the depiction of the feature of one or more entities
from the image capture device that shares a data storage resource
with the image receipt device. For example, FIG. 6, e.g., FIG. 6B,
shows image that contains a depiction of a feature of a particular
entity receiving at an image receipt device from an image capture
device that is configured to access one or more data storage
resources as the image receipt device module 622 receiving (e.g.,
reading from a particular directory in a shared storage resource),
at an image receipt device (e.g., a tablet device held by a person
that is also wearing a wearable computer), the image that includes
the depiction of the feature of one or more entities (e.g., a
picture of three friends at a bar) from the image capture device
(e.g., a wearable computer, e.g., a hypothetical Microsoft-branded
wearable computer, e.g., a Microsoft "KinectVision") that shares a
data storage resource (e.g., the tablet device uses the same cloud
storage, e.g., Microsoft's "SkyDrive" or DropBox) with the image
receipt device (e.g., the tablet device receives the image by
retrieving it from a particular directory in the cloud storage, to
which the tablet device also has access).
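Operation 1122 might be sketched as below, with the image receipt device "receiving" the image simply by reading files newly present in a storage resource both devices share (a cloud-synced directory in this sketch); the path and file pattern are placeholders.

```python
# Receiving images by retrieving them from a shared storage directory.
import pathlib

SHARED_DIR = pathlib.Path("~/SharedDrive/incoming").expanduser()

def receive_from_shared_storage(already_seen: set) -> list:
    new_images = []
    for path in sorted(SHARED_DIR.glob("*.jpg")):
        if path.name not in already_seen:   # only not-yet-retrieved files
            new_images.append(path.read_bytes())
            already_seen.add(path.name)
    return new_images
```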
Referring again to FIG. 11B, operation 1110 may include operation
1124 depicting receiving, at the image receipt device, the image
that includes the depiction of the feature of one or more entities
from the image capture device that is under control by a same
entity as the image receipt device. For example, FIG. 6, e.g., FIG.
6B, shows image that contains a depiction of a feature of a
particular entity receiving at an image receipt device from an
image capture device that is under common control as the image
receipt device module 624 receiving, at the image receipt device
(e.g., a user's smartphone device), the image that includes the
depiction of the feature of one or more entities (e.g., a picture
of two men at a baseball game) from the image capture device (e.g.,
a wearable computer, e.g., Nokia SmartGlasses) that is under
control by a same entity (e.g., the same user controls the
smartphone that is receiving the image and the wearable computer
that captured the image) as the image receipt device (e.g., the
user's smartphone device).
Referring again to FIG. 11B, operation 1110 may include operation
1126 depicting receiving, at the image receipt device, the image
that includes the depiction of the feature of one or more entities
from the image capture device that has at least one characteristic
in common with the image receipt device. For example, FIG. 6, e.g.,
FIG. 6B, shows image that contains a depiction of a feature of a
particular entity receiving at an image receipt device from an
image capture device that has one or more properties in common with
the image receipt device module 626 receiving, at the image receipt
device (e.g., a tablet device running a particular application,
e.g., a hypothetical "ObscurAway"), the image that includes the
depiction of the feature of one or more entities (e.g., a picture
that shows the faces of two people in a fishing boat on a fishing
trip) from the image capture device (e.g., a wearable computer,
e.g., a hypothetical Samsung-branded wearable computer, e.g.,
Samsung "Spectacles") that has at least one characteristic in
common (e.g., is also running the same application, e.g.,
"ObscurAway") with the image receipt device (e.g., the tablet
device).
Referring again to FIG. 11B, operation 1126 may include operation
1128 depicting receiving, at the image receipt device, the image
that includes the depiction of the feature of one or more entities
from the image capture device that has a same manufacturer as the
image receipt device. For example, FIG. 6, e.g., FIG. 6B, shows
image that contains a depiction of a feature of a particular entity
receiving at an image receipt device from an image capture device
that has a same manufacturer as the image receipt device module 628
receiving, at the image receipt device (e.g., a smartphone device
manufactured by Apple, Inc.), the image that includes the depiction
of the feature of the one or more entities (e.g., a picture of two
people playing chess in a park) from the image capture device
(e.g., a wearable computer device manufactured by Apple, Inc.) that
has a same manufacturer as the image receipt device (e.g., the
smartphone that is manufactured by Apple, Inc.).
Referring now to FIG. 11C, operation 1002 may include operation
1130 depicting acquiring the image that includes the depiction of
the feature of one or more persons. For example, FIG. 6, e.g., FIG.
6C, shows image that contains a depiction of a feature of a person
acquiring module 630 acquiring the image that includes the
depiction of the feature (e.g., a body shot) of one or more persons
(e.g., two girls on spring break vacation).
Referring again to FIG. 11C, operation 1130 may include operation
1132 depicting acquiring the image that includes the depiction of a
face of one or more persons. For example, FIG. 6, e.g., FIG. 6C,
shows image that contains a depiction of a face of a person
acquiring module 632 acquiring the image that includes the
depiction of a face of one or more persons.
Referring again to FIG. 11C, operation 1002 may include operation
1134 depicting capturing the image that includes the depiction of
the feature of one or more entities. For example, FIG. 6, e.g.,
FIG. 6C, shows image that contains a depiction of a feature of a
particular entity capturing module 634 capturing the image that
includes the depiction of the feature (e.g., a face) of one or more
entities (e.g., a person sitting at an airport).
Referring again to FIG. 11C, operation 1002 may include operation
1136 depicting preventing further access to the captured image
prior to the performance of obfuscation on the at least the portion
of the image. For example, FIG. 6, e.g., FIG. 6C, shows access to
the captured image inhibiting prior to obfuscation of the at least
the portion of the image module 636 preventing further access
(e.g., one or more of copying, viewing, posting to a social
networking site, e-mailing, manipulating) to the captured image
prior to the performance of obfuscation (e.g., altering the image
to decrease recognizability) on the at least the portion of the
image (e.g., the portion that depicts the face of the person).
Referring again to FIG. 11C, operation 1136 may include operation
1138 depicting storing the captured image in a location to which
access is restricted, prior to the performance of obfuscation on
the at least the portion of the image. For example, FIG. 6, e.g.,
FIG. 6C, shows captured image storing at a limited-access location
prior to obfuscation of the at least the portion of the image
module 638 storing the captured image (e.g., an image of someone
sitting at a bar) in a location to which access is restricted
(e.g., an area of memory to which some programs do not have access,
or a remote location that requires a login to access), prior to the
performance of obfuscation on the at least the portion of the image
(e.g., the portion that contains the face of the person sitting at
the bar).
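By way of illustration, operations 1136 and 1138 might be sketched as quarantining the captured image under owner-only permissions until obfuscation completes; the POSIX mode bits and the quarantine path are assumptions of the sketch, and any platform access control would serve equally.

```python
# Park the captured image where other applications cannot reach it
# until obfuscation has been performed.
import os
import pathlib

QUARANTINE = pathlib.Path("/var/lib/capture-quarantine")

def quarantine_capture(image_bytes: bytes, name: str) -> pathlib.Path:
    QUARANTINE.mkdir(mode=0o700, parents=True, exist_ok=True)  # owner-only dir
    dest = QUARANTINE / name
    dest.write_bytes(image_bytes)
    os.chmod(dest, 0o600)     # owner-only file until obfuscation completes
    return dest
```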
Referring again to FIG. 11C, operation 1136 may include operation
1140 depicting denying access to one or more applications
configured to perform one or more operations on the captured image,
prior to the performance of obfuscation on the at least the portion
of the image. For example, FIG. 6, e.g., FIG. 6C, shows access by
one or more applications to the captured image inhibiting prior to
obfuscation of the at least the portion of the image module 640
denying access to one or more applications (e.g., an image
transmitting application) configured to perform one or more
operations (e.g., transmission of the image to a different
location) on the captured image (e.g., an image of three friends at
a football game), prior to the performance of obfuscation on the at
least the portion of the image (e.g., the portion of the image
that shows the faces of the three friends at the football
game).
Referring again to FIG. 11C, operation 1140 may include operation
1142 depicting denying access to an application configured to
upload the image to a social networking site, prior to the
performance of obfuscation on the at least the portion of the
image. For example, FIG. 6, e.g., FIG. 6C, shows access by social
network interaction applications to the captured image inhibiting
prior to obfuscation of the at least the portion of the image
module 642 denying access to an application configured to upload
the image to a social networking site, prior to the performance of
obfuscation on the at least the portion of the image (e.g., an
image of a kid sitting in a fire truck).
FIGS. 12A-12D depict various implementations of operation 1004,
depicting attaining identification of a particular entity of the
one or more entities for which the depiction of the feature is
present in the image, according to embodiments. Referring now to
FIG. 12A, operation 1004 may include operation 1202 depicting
receiving an identification of the particular entity of the one or
more entities for which the depiction of the feature is present in
the image. For example, FIG. 7, e.g., FIG. 7A, shows identification
data related to an identity of the particular entity for which the
depiction of the feature of the entity is present in the image
receiving module 702 receiving an identification (e.g., a name
followed by a unique string of digits) of the particular entity
(e.g., the person depicted in the image, e.g., Jules Caesar) of the
one or more entities for which the depiction of the feature is
present in the image (e.g., the face of the person in the
image).
Referring again to FIG. 12A, operation 1202 may include operation
1204 depicting receiving, with the image data, the identification
of the particular entity of the one or more entities for which the
depiction of the feature is present in the image. For example, FIG.
7, e.g., FIG. 7A, shows identification data related to an identity
of the particular entity for which the depiction of the feature of
the entity is present in the image receiving with the image data
module 704 receiving, with the image data (e.g., from the same
source as the image data, or approximately at the same time, or
coded into the image data, e.g., as metadata or as a header), the
identification (e.g., a common name) of the particular entity
(e.g., one of the people in a team picture) of the one or more
entities (e.g., a team picture of a men's league hockey team) for
which the depiction of the feature is present in the image (e.g.,
the team picture).
Referring again to FIG. 12A, operation 1204 may include operation
1206 depicting receiving, as metadata of the image data, the
identification of the particular entity of the one or more entities
for which the depiction of the feature is present in the image. For
example, FIG. 7, e.g., FIG. 7A, shows identification data related
to an identity of the particular entity for which the depiction of
the feature of the entity is present in the image receiving as
metadata with the image data module 706 receiving, as metadata of
the image data (e.g., data that includes the image), the
identification (e.g., a not-necessarily-unique name) of the
particular entity (e.g., a person depicted in the image, e.g., a
person at a bar) of the one or more entities for which the
depiction of the feature is present in the image.
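A sketch of operation 1206 follows; Pillow's getexif() is an existing API, while carrying the depicted entity's name in the EXIF Artist tag (0x013B) is purely an illustrative assumption and not something this disclosure specifies.

```python
# Reading an identification carried as metadata of the image data.
from typing import Optional
from PIL import Image

ARTIST_TAG = 0x013B   # standard EXIF "Artist" tag, repurposed here

def identification_from_metadata(path: str) -> Optional[str]:
    exif = Image.open(path).getexif()
    return exif.get(ARTIST_TAG)   # e.g., "Jules Caesar", or None if absent
```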
Referring again to FIG. 12A, operation 1202 may include operation
1208 depicting receiving a name of the particular entity of the one
or more entities for which the depiction of the feature is present
in the image. For example, FIG. 7, e.g., FIG. 7A, shows unique name
of an identity of the particular entity for which the depiction of
the feature of the entity is present in the image receiving module
708 receiving a name of the particular entity (e.g., "John Smith")
of the one or more entities (e.g., there may be many people in the
picture) for which the depiction of the feature (e.g., a face of
John Smith) is present in the image.
Referring again to FIG. 12A, operation 1202 may include operation
1210 depicting receiving an identification number of the particular
entity of the one or more entities for which the depiction of the
feature is present in the image. For example, FIG. 7, e.g., FIG.
7A, shows assigned identification number of the particular entity
for which the depiction of the feature of the entity is present in
the image receiving module 710 receiving an identification number
(e.g., "426-264224") of (e.g., that identifies) the particular
entity (e.g., a person in a picture) of the one or more entities
for which the depiction of the feature is present in the image
(e.g., an image of two people at a baseball game).
Referring again to FIG. 12A, operation 1004 may include operation
1212 depicting identifying the one or more entities for which the
depiction of the feature is present in the image. For example, FIG.
7, e.g., FIG. 7A, shows identification data that uniquely
identifies the particular entity for which the depiction of the
feature of the entity is present in the image attaining module 712
identifying the one or more entities (e.g., the one or more
persons) for which the depiction of the feature (e.g., the face) is
present in the image (e.g., an image of two people on a date at a
restaurant).
Referring again to FIG. 12A, operation 1004 may include operation
1214 depicting identifying each entity of the one or more entities,
including the particular entity, for which the depiction of the
feature is present in the image. For example, FIG. 7, e.g., FIG.
7A, shows identification data related to an identity of each of one
or more entities that includes the particular entity for which the
depiction of the feature of the particular entity is present in the
image attaining module 714 identifying each entity of the one or
more entities, including the particular entity, for which the
depiction of the feature (e.g., the face) is present in the image
(e.g., a picture of a group at a high school reunion).
Referring now to FIG. 12B, operation 1004 may include operation
1216 depicting identifying the particular entity of the one or more
entities for which the depiction of the feature is present in the
image. For example, FIG. 7, e.g., FIG. 7B, shows particular entity
for which the depiction of the feature of the particular entity is
present in the image identifying module 716 identifying (e.g.,
obtaining data about the entity that identifies the entity, whether
specifically, uniquely, or part of a group) the particular entity
of the one or more entities for which the depiction of the feature
is present in the image (e.g., an image of three friends at the
mall).
Referring again to FIG. 12B, operation 1216 may include operation
1218 depicting identifying the particular entity of the one or more
entities for which the depiction of a face of the particular entity
is present in the image, through use of a facial recognition
algorithm. For example, FIG. 7, e.g., FIG. 7B, shows particular
entity for which the depiction of the feature of the particular
entity is present in the image identifying through facial
identification module 718 identifying the particular entity (e.g.,
one of the persons in the image for which there is a DCM beacon) of
the one or more entities (e.g., the other persons in the image) for
which the depiction of a face of the particular entity is present
in the image, through use of a facial recognition algorithm (e.g.,
an eigenfaces algorithm).
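By way of illustration only, the following is a minimal Python sketch
of the eigenfaces approach named above, assuming pre-aligned,
same-size grayscale face crops; a deployed recognizer would add
detection, alignment, and a rejection threshold.

```python
# Minimal eigenfaces-style sketch: project gallery faces onto principal
# components and identify a probe face by nearest neighbor.
import numpy as np

def train_eigenfaces(gallery, k=20):
    # gallery: dict mapping name -> HxW uint8 array; all crops same size.
    names = list(gallery)
    X = np.stack([gallery[n].ravel().astype(float) for n in names])
    mean = X.mean(axis=0)
    # Principal components of the mean-centered gallery.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:k]
    coords = (X - mean) @ basis.T
    return names, mean, basis, coords

def identify(face, names, mean, basis, coords):
    probe = (face.ravel().astype(float) - mean) @ basis.T
    dists = np.linalg.norm(coords - probe, axis=1)
    return names[int(np.argmin(dists))]  # nearest gallery identity
```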
Referring again to FIG. 12B, operation 1216 may include operation
1220 depicting identifying the particular entity of the one or more
entities for which the depiction of the feature is present in the
image, through use of one or more previously-captured images. For
example, FIG. 7, e.g., FIG. 7B, shows particular entity for which
the depiction of the feature of the particular entity is present in
the image identifying through analysis of one or more previously
captured images module 720 identifying the particular entity (e.g.,
a famous tennis player) of the one or more entities for which the
depiction of the feature (e.g., a rear end) is present in the image
(e.g., a surreptitious image of a tennis player on the tennis
court), through use of one or more previously captured images
(e.g., through images that show the same body part, or show the
person from a different angle but at the same time, etc.).
Referring again to FIG. 12B, operation 1216 may include operation
1222 depicting identifying the particular entity of the one or more
entities for which the depiction of the feature is present in the
image, at least partially through use of metadata in the image. For
example, FIG. 7, e.g., FIG. 7B, shows particular entity for which
the depiction of the feature of the particular entity is present in
the image identifying through analysis of metadata of the image
module 722 identifying the particular entity of the one or more
entities for which the depiction of the feature (e.g., the face) is
present in the image, at least partially through use of metadata
(e.g., obtaining the location and time of the image and using a
location-deduction algorithm, or using image tags in the metadata)
in the image (e.g., a picture of a professional baseball game).
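As a hedged illustration of the metadata route, the sketch below pulls
the capture time and GPS coordinates from standard EXIF fields using
Pillow; how a location-deduction algorithm would then use these values
is left abstract.

```python
# Sketch of extracting capture time and GPS coordinates from EXIF so a
# location/time lookup (e.g., "who was at this stadium then?") can
# narrow identification. Tag numbers are standard EXIF; Pillow only.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def capture_context(path):
    exif = Image.open(path).getexif()
    when = exif.get(0x0132)               # DateTime tag
    gps_ifd = exif.get_ifd(0x8825)        # GPSInfo sub-IFD
    gps = {GPSTAGS.get(t, t): v for t, v in gps_ifd.items()}
    return when, gps.get("GPSLatitude"), gps.get("GPSLongitude")
```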
Referring again to FIG. 12B, operation 1222 may include operation
1224 depicting identifying the particular entity of the one or more
entities for which the depiction of the feature is present in the
image, at least partially through use of one or more image tags
that regard the image. For example, FIG. 7, e.g., FIG. 7B, shows
particular entity for which the depiction of the feature of the
particular entity is present in the image identifying through
analysis of image tag metadata of the image module 724 identifying
the particular entity of the one or more entities for which the
depiction of the feature is present in the image (e.g., the face of
a celebrity that is present in a picture taken at a bar), at least
partially through use of one or more image tags (e.g., the user
added the image tag "LeBron James" because the picture was of the
famous basketball player LeBron James) that regard the image.
Referring again to FIG. 12B, operation 1224 may include operation
1226 depicting identifying the particular entity of the one or more
entities for which the depiction of the feature is present in the
image, at least partially through use of one or more image tags
that regard the image, said one or more image tags provided by a
user of the image capture device. For example, FIG. 7, e.g., FIG.
7B, shows particular entity for which the depiction of the feature
of the particular entity is present in the image identifying
through analysis of image tag metadata inputted by a user of an
image capture device that captured the image module 726 identifying
the particular entity of the one or more entities for which the
depiction of the feature is present in the image, at least
partially through use of one or more image tags that regard the
image (e.g., an image of three friends at a bar, which was tagged
by the device that captured the image through use of an
algorithm/recognition application provided by a social networking
site), said one or more image tags provided by a user of the image
capture device.
Referring now to FIG. 12C, operation 1004 may include operation
1228 depicting requesting identification of the particular entity
of the one or more entities for which the depiction of the feature
is present in the image. For example, FIG. 7, e.g., FIG. 7C, shows
identification data related to an identity of the particular entity
for which the depiction of the feature of the particular entity is
present in the image requesting module 728 requesting (e.g.,
facilitating the presentation of a request for information, e.g.,
from the person operating the device) identification of the
particular entity (e.g., the person in the image) of the one or
more entities for which the depiction of the feature is present in
the image (e.g., a picture of two people at a party).
Referring again to FIG. 12C, operation 1004 may include operation
1230 depicting receiving identification of the particular entity of
the one or more entities for which the depiction of the feature is
present in the image. For example, FIG. 7, e.g., FIG. 7C, shows
identification data related to an identity of the particular entity
for which the depiction of the feature of the particular entity is
present in the image receiving module 730 receiving identification
of the particular entity of the one or more entities for which the
depiction of the feature (e.g., the face of the person) is present
in the image.
Referring again to FIG. 12C, operation 1228 may include operation
1232 depicting requesting identification of the particular entity
for which the depiction of the feature is present in the image from
an image capture device that captured the image. For example, FIG.
7, e.g., FIG. 7C, shows identification data related to an identity
of the particular entity for which the depiction of the feature of
the particular entity is present in the image requesting from an
image capture device that captured the image module 732 requesting
identification of the particular entity for which the depiction of
the feature is present in the image from an image capture device
(e.g., a wearable computer, e.g., an Oculon Optoelectronics
device) that captured the image.
Referring again to FIG. 12C, operation 1228 may include operation
1234 depicting requesting identification of the particular entity
for which the depiction of the feature is present in the image from
a user of an image capture device that captured the image. For
example, FIG. 7, e.g., FIG. 7C, shows identification data related
to an identity of the particular entity for which the depiction of
the feature of the particular entity is present in the image
requesting from a user of an image capture device that captured the
image module 734 requesting identification of the particular entity
(e.g., a person in the captured image) for which the depiction of
the feature (e.g., a face) is present in the image (e.g., a picture
of three friends at a bar) from a user of an image capture device
(e.g., a wearable computer) that captured the image (e.g., the
picture of three friends at a bar).
Referring again to FIG. 12C, operation 1228 may include operation
1236 depicting requesting identification of the particular entity
for which the depiction of the feature is present in the image from
a remote resource. For example, FIG. 7, e.g., FIG. 7C, shows
identification data related to an identity of the particular entity
for which the depiction of the feature of the particular entity is
present in the image requesting from an external resource module
736 requesting identification of the particular entity for which
the depiction of the feature is present in the image (e.g., an
image of two people waiting for a bus) from a remote resource
(e.g., from a remote server that is operated by a facial
recognition service provider, e.g., Animetrics).
Referring again to FIG. 12C, operation 1236 may include operation
1238 depicting requesting identification of the particular entity
for which the depiction of the feature is present in the image from
a social networking site. For example, FIG. 7, e.g., FIG. 7C, shows
identification data related to an identity of the particular entity
for which the depiction of the feature of the particular entity is
present in the image requesting from a social networking site
module 738 requesting identification of the particular entity for
which the depiction of the feature (e.g., a face of the person) is
present in the image (e.g., a picture from a lake vacation) from a
social networking site (e.g., Facebook, Twitter, Instagram,
SnapChat, etc.).
Referring again to FIG. 12C, operation 1236 may include operation
1240 depicting requesting identification of the particular entity
for which the depiction of the feature is present in the image from
an image management site. For example, FIG. 7, e.g., FIG. 7C, shows
identification data related to an identity of the particular entity
for which the depiction of the feature of the particular entity is
present in the image requesting from an image management site
module 740 requesting identification of the particular entity for
which the depiction of the feature (e.g., a person's eyes) is
present in the image (e.g., a closeup picture of a football player)
from an image management site (e.g., Google's Picasa, Snapfish,
Shutterfly, etc.).
Referring again to FIG. 12C, operation 1230 may include operation
1242 depicting receiving unique identification of the particular
entity of the one or more entities for which the depiction of the
feature is present in the image. For example, FIG. 7, e.g., FIG.
7C, shows unique identification data related to an identity of the
particular entity for which the depiction of the feature of the
particular entity is present in the image receiving module 742
receiving unique identification (e.g., social security number, name
with a unique numeric string appended, an ID to one or more online
services, e.g., a gamer tag, a live ID, an email address, etc.) of
the particular entity (e.g., a person depicted in the image, in
line at a concession stand at a football game) of the one or more
entities for which the depiction of the feature is present in the
image (e.g., a security image taken by a security camera mounted in
a stadium for the Super Bowl football game).
Referring again to FIG. 12C, operation 1230 may include operation
1244 depicting receiving general identification of the particular
entity of the one or more entities for which the depiction of the
feature is present in the image. For example, FIG. 7, e.g., FIG.
7C, shows nonunique identification data related to an identity of
the particular entity for which the depiction of the feature of the
particular entity is present in the image receiving module 744
receiving general identification (e.g., identification as part of a
group (e.g., a group that has privacy beacons associated with them,
or a group that has a net worth of over one million dollars, or a
group of professional athletes), or an identification that may
belong to multiple people (e.g., one John Smith of thousands)) of
the particular entity of the one or more entities for which the
depiction of the feature is present in the image.
Referring again to FIG. 12C, operation 1230 may include operation
1246 depicting receiving identification of a group to which the
particular entity belongs, of the one or more entities for which
the depiction of the feature is present in the image. For example,
FIG. 7, e.g., FIG. 7C, shows identification data related to a group
to which the particular entity for which the depiction of the
feature of the particular entity is present in the image belongs
receiving module 746 receiving identification of a group to which
the particular entity belongs (e.g., a group of persons who have
registered with a privacy database, or a group of employees of a
particular company, or members of a particular foreign military),
of the one or more entities for which the depiction of the feature
is present in the image.
Referring now to FIG. 12D, operation 1004 may include operation
1248 depicting attaining data that regards whether the particular
entity for which the depiction of the feature is present in the
image is known to an entity that controls an image capture device
that captured the image. For example, FIG. 7, e.g., FIG. 7D, shows
identification data that describes whether the particular entity is
recognizable by a device that captured the image related to an
identity of the particular entity for which the depiction of the
feature of the particular entity is present in the image attaining
module 748 attaining data that regards whether the particular
entity (e.g., the person in the image) for which the depiction of
the feature (e.g., the face) is present in the image (e.g., a
surreptitious picture of someone in an airport) is known to an
entity (e.g., the taker of the image) that controls (e.g.,
instructed the image to be taken, either by executing a program
that controls a remote camera, or pushing a button on a camera, or
giving a wearable computer a haptic or auditory command) an image
capture device that captured the image.
Referring again to FIG. 12D, operation 1248 may include operation
1250 depicting attaining data that regards whether the particular
entity for which the depiction of the feature is present in the
image is known to the entity that controls the image capture device
that captured the image, at least partly through use of one or more
images previously captured by the image capture device. For
example, FIG. 7, e.g., FIG. 7D, shows identification data that
describes whether the particular entity is recognizable by an
entity that controls the device that captured the image related to
an identity of the particular entity for which the depiction of the
feature of the particular entity is present in the image attaining
module 750 attaining data that regards whether the particular
entity (e.g., the person depicted in the image) for which the
depiction of the feature (e.g., the person's face) is present in
the image (e.g., a picture of people at a cooking class) is known
to the entity (e.g., the person that took the picture) that
controls (e.g., that gives one or more commands to) the image
capture device (e.g., a wearable computer) that captured the image
(e.g., the image of the people at a cooking class), at least partly
through use of one or more images (e.g., previously taken pictures
that contain identified faces for comparison) previously captured
by the image capture device (e.g., the wearable computer).
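One non-limiting way to realize such a comparison is sketched below;
embed() stands in for any face-embedding model, and the 0.6 cosine
threshold is an assumption of the example, not a value taken from this
disclosure.

```python
# Sketch: decide whether a depicted person is "known" by comparing a
# face embedding against faces from images previously captured on the
# device. embed() is a hypothetical stand-in for an embedding model.
import numpy as np

def is_known(face_img, prior_faces, embed, threshold=0.6):
    """prior_faces: list of (name, embedding) from earlier captures."""
    probe = embed(face_img)
    for name, ref in prior_faces:
        cos = np.dot(probe, ref) / (np.linalg.norm(probe) * np.linalg.norm(ref))
        if cos > threshold:
            return True, name   # previously captured, identified face matched
    return False, None
```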
Referring again to FIG. 12D, operation 1004 may include operation
1252 depicting determining whether a privacy beacon is associated
with the particular entity of the one or more entities for which
the depiction of the feature is present in the image. For example,
FIG. 7, e.g., FIG. 7D, shows identification data related to whether
a privacy beacon is associated with the particular entity for which
the depiction of the feature of the particular entity is present in
the image attaining module 752 determining whether a privacy beacon
(e.g., a detectable marker that indicates that an entity captured
in an image may have terms and/or conditions associated with the
potential use of her image) is associated with the particular
entity (e.g., the person whose face is shown in the image) of the
one or more entities (e.g., multiple people in the image) for which
the depiction of the feature (e.g., the person's face) is present
in the image (e.g., the picture of two people on a fishing
trip).
Referring again to FIG. 12D, operation 1004 may include operation
1254 depicting detecting, in the image, a privacy beacon associated
with the particular entity of the one or more entities for which
the depiction of the feature is present in the image. For example,
FIG. 7, e.g., FIG. 7D, shows privacy beacon associated with the
particular entity for which the depiction of the feature of the
particular entity is present in the image detecting in the image
module 754 detecting, in the image, a privacy beacon (e.g., a
detectable marker that indicates that an entity captured in an
image may have terms and/or conditions associated with the
potential use of her image) associated with the particular
entity (e.g., the person whose face is shown in the image) of the
one or more entities (e.g., multiple people in the image) for which
the depiction of the feature (e.g., the person's face) is present
in the image (e.g., a picture of two people on a date at a fancy
restaurant).
Referring again to FIG. 12D, operation 1004 may include operation
1256 depicting identifying the particular entity at least partially
through use of the detected privacy beacon. For example, FIG. 7,
e.g., FIG. 7D, shows particular entity identifying at least
partially through use of the detected privacy beacon module 756
identifying the particular entity (e.g., a celebrity) at least
partially through use of the detected privacy beacon (e.g., a
detectable marker that indicates that an entity captured in an
image may have terms and/or conditions associated with the
potential use of her image, examples of which are previously
outlined in this application).
Referring again to FIG. 12D, operation 1256 may include operation
1258 depicting obtaining an identity of the particular entity from
data that is read from the detected privacy beacon. For example,
FIG. 7, e.g., FIG. 7D, shows particular entity identification from
analysis of the detected privacy beacon module 758 obtaining an
identity of the particular entity from data that is read (e.g., the
privacy beacon contains (e.g., emits, broadcasts, forms, etc.)
identification data, e.g., name, unique ID number, etc.) from the
detected privacy beacon (e.g., a detectable marker that indicates
that an entity captured in an image may have terms and/or
conditions associated with the potential use of her image, examples
of which are previously outlined in this application).
Referring again to FIG. 12D, operation 1256 may include operation
1260 depicting retrieving an identity of the particular entity from
a database through use of index data that is read from the detected
privacy beacon. For example, FIG. 7, e.g., FIG. 7D, shows
particular entity identification retrieving from a database through
use of index data derived from the detected privacy beacon module
760 retrieving an identity of the particular entity from a
database, through use of index data (e.g., a key value for an entry
into a database, e.g., "privacy beacon TK-402") that is read from
the detected privacy beacon (e.g., a detectable marker that
indicates that an entity captured in an image may have terms and/or
conditions associated with the potential use of her image, examples
of which are previously outlined in this application).
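A minimal sketch of such an index lookup follows, assuming (purely for
illustration) a SQLite registry that maps beacon index values to
identities and terms-of-service records; the schema is hypothetical.

```python
# Sketch of operation 1260: the beacon yields only an index value
# (e.g., "TK-402"), which keys into a registry mapping beacons to
# identities. The table layout here is an assumption for illustration.
import sqlite3

def identity_from_beacon(beacon_index, db_path="beacon_registry.db"):
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT entity_name, terms_url FROM beacons WHERE beacon_id = ?",
        (beacon_index,),
    ).fetchone()
    conn.close()
    return row  # e.g., ("Jules Caesar", "https://example.com/terms")

# identity_from_beacon("TK-402") -> identity plus any associated terms
```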
FIGS. 13A-13E depict various implementations of operation 1006,
depicting obtaining relationship data that indicates whether the
particular entity has a relationship with a device that facilitated
acquisition of the image, according to embodiments. Referring now
to FIG. 13A, operation 1006 may include operation 1302 depicting
obtaining relationship data that indicates whether data about the
particular entity is stored on the device that facilitated
acquisition of the image. For example, FIG. 8, e.g., FIG. 8A, shows
relation data that describes whether data about the particular
entity is stored on the device that facilitated the acquisition of
the image that contains the depiction of the feature of the
particular entity module 802 obtaining relationship data that
indicates whether data about the particular entity (e.g., a person
attending a Matt & Kim concert) is stored on the device that
facilitated acquisition of the image (e.g., an image of people at a
Matt & Kim concert).
Referring again to FIG. 13A, operation 1006 may include operation
1304 depicting obtaining relationship data that indicates whether
data about the particular entity is accessible to the device that
facilitated acquisition of the image. For example, FIG. 8, e.g.,
FIG. 8A, shows relation data that describes whether data about the
particular entity is accessible to the device that facilitated the
acquisition of the image that contains the depiction of the feature
of the particular entity module 804 obtaining relationship data
(e.g., binary yes/no data indicating whether the device has
information about the entity) that indicates whether data about the
particular entity (e.g., name, contact info, characteristics of the
entity, characteristics of a device owned by the entity, etc.) is
accessible to the device (e.g.,
the smartphone device that received the image from a wearable
computer) that facilitated acquisition of the image (e.g., a
picture of two guys at a baseball game).
Referring again to FIG. 13A, operation 1006 may include operation
1306 depicting obtaining relationship data that indicates whether a
name of the particular entity is stored on the device that
facilitated acquisition of the image. For example, FIG. 8, e.g.,
FIG. 8A, shows relation data that describes whether a name of the
particular entity is stored on the device that facilitated the
acquisition of the image that contains the depiction of the feature
of the particular entity module 806 obtaining relationship data
(e.g., numeric data that indicates the size of the data block
stored about the particular entity on the device, along with a
summary of the contents of the data block, that may be in "flag"
form (e.g., binary form)) that indicates whether a name of the
particular entity is stored on the device (e.g., a wearable
computer, e.g., Google Glass) that facilitated acquisition (e.g.,
capture) of the image (e.g., a surreptitious picture of five people
at a train station).
Referring again to FIG. 13A, operation 1306 may include operation
1308 depicting obtaining relationship data that indicates whether
the name of the particular entity is stored in a contact list of
the device that facilitated acquisition of the image. For example,
FIG. 8, e.g., FIG. 8A, shows relation data that describes whether a
name of the particular entity is stored in a contact list
associated with the device that facilitated the acquisition of the
image that contains the depiction of the feature of the particular
entity module 808 obtaining relationship data (e.g., yes/no data)
that indicates whether the name of the particular entity is stored
in a contact list of the device (e.g., a smartphone device) that
facilitated acquisition (e.g., received from an image capture
device, e.g., a wearable computer, or captured the image with its
own image capture component) of the image (e.g., an image of two
friends having dinner at a fancy restaurant).
Referring again to FIG. 13A, operation 1306 may include operation
1310 depicting obtaining relationship data that indicates whether
the name of the particular entity is stored in a friend list
accessible to the device. For example, FIG. 8, e.g., FIG. 8A, shows
relation data that describes whether a name of the particular
entity is stored in a friend list accessible to the device that
facilitated the acquisition of the image that contains the
depiction of the feature of the particular entity module 810
obtaining relationship data (e.g., a description of the friend list
that stores the name of the entity, if found) that indicates
whether the name of the particular entity is stored in a friend
list (e.g., for a social networking site, a list of people that are
"friends" of the user)
Referring again to FIG. 13A, operation 1006 may include operation
1312 depicting obtaining relationship data that indicates whether
the depicted feature of the particular entity has previously been
stored in one or more images on the device that facilitated
acquisition of the image. For example, FIG. 8, e.g., FIG. 8A, shows
relation data that describes whether the depicted feature of the
particular entity has previously been depicted in one or more
previously-captured images associated with the device that
facilitated the acquisition of the image that contains the
depiction of the feature of the particular entity module 812
obtaining relationship data (e.g., a percentage likelihood) that
indicates whether the depicted feature (e.g., full body shot) of
the particular entity (e.g., a particular celebrity) has previously
been stored in one or more images on the device (e.g., the digital
SLR camera) that facilitated acquisition of the image (e.g., an
image of two people having coffee in a coffee shop).
Referring now to FIG. 13B, operation 1006 may include operation
1314 depicting obtaining relationship data that indicates whether
the depicted feature of the particular entity is present in one or
more images previously captured on the device that facilitated
acquisition of the image. For example, FIG. 8, e.g., FIG. 8B, shows
relation data that describes whether the depicted feature of the
particular entity has previously been depicted in one or more
previously-captured images captured by the device that facilitated
the acquisition of the image that contains the depiction of the
feature of the particular entity module 814 obtaining relationship
data that indicates whether the depicted feature of the particular
entity is present in one or more images previously captured on the
device (e.g., an image capture device) that facilitated acquisition
of the image (e.g., an image of three friends at a blackjack table
in Las Vegas).
Referring again to FIG. 13B, operation 1006 may include operation
1316 depicting obtaining relationship data that indicates whether
the particular entity is known to a control entity that controls
the device that facilitated acquisition of the image. For example,
FIG. 8, e.g., FIG. 8B, shows relation data that describes whether
the particular entity is known to the device that facilitated the
acquisition of the image that contains the depiction of the feature
of the particular entity obtaining module 816 obtaining
relationship data that indicates whether the particular entity is
known to a control entity (e.g., the person that took the picture,
or that is controlling the device that's receiving the picture from
the image capture device) that controls the device (e.g., the
smartphone device) that facilitated acquisition of the image.
Referring again to FIG. 13B, operation 1006 may include operation
1318 depicting obtaining relationship data that indicates whether a
control entity that controls the device that facilitated
acquisition of the image has indicated that the particular entity
is known to the control entity. For example, FIG. 8, e.g., FIG. 8B,
shows relation data that describes whether the particular entity is
known to a control entity that controls the device that facilitated
the acquisition of the image that contains the depiction of the
feature of the particular entity obtaining module 818 obtaining
relationship data that indicates whether a control entity (e.g., a
person operating the wearable computer) that controls the device
(e.g., the wearable computer) that facilitated acquisition of the
image (e.g., an image of a person sitting at a desk taken by a
wearable computer in an office cubicle farm) has indicated that the
particular entity (e.g., the person captured in the picture) is
known to the control entity (e.g., the person operating the
wearable computer).
Referring again to FIG. 13B, operation 1006 may include operation
1320 depicting querying a user of the device that facilitated
acquisition of the image with regard to whether the particular
entity has a relationship with the user of the device. For example,
FIG. 8, e.g., FIG. 8B, shows relation data that describes a
relation between the particular entity and a device that
facilitated the acquisition of the image that contains the
depiction of the feature of the particular entity querying module,
wherein the relation may be nonextant 820 querying a user of the
device (e.g., an image capture device, e.g., a wearable computer)
that facilitated acquisition of the image (e.g., that captured the
image) with regard to whether the particular entity has a
relationship with the user of the device (e.g., the wearable
computer).
Referring now to FIG. 13C, operation 1006 may include operation
1322 depicting receiving an inputted identification of the
particular entity at the device that facilitated the acquisition of
the image. For example, FIG. 8, e.g., FIG. 8C, shows inputted
identification of the particular entity at the device that
facilitated the acquisition of the image receiving module 822
receiving an inputted identification (e.g., spoken input, or
typed-in input) of the particular entity (e.g., the person depicted
in the image) at the device (e.g., the image capture device, e.g.,
the wearable computer, or at another device that receives the image
from the wearable computer) that facilitated the acquisition of the
image (e.g., an image of two people at a party).
Referring again to FIG. 13C, operation 1006 may include operation
1324 depicting determining whether the particular entity has the
relationship with the device that facilitated acquisition of the
image by a comparison of the inputted identification of the
particular entity with an obtained identification of the particular
entity from a remote database. For example, FIG. 8, e.g., FIG. 8C,
shows inputted identification of the particular entity at the
device and an obtained identity of the particular entity obtained
from a remote location comparison for determining the relation data
that describes the relation between the particular entity and the
device module 824 determining whether the particular entity (e.g.,
the person depicted in the picture) has the relationship with the
device (e.g., the device recognizes that the person is someone that
is "known" to the device or the device user) that facilitated
acquisition of the image by a comparison of the inputted
identification of the particular entity (e.g., the person depicted
in the image) with an obtained identification of the particular
entity from a remote database (e.g., a face recognition database,
e.g., which may be run by a social networking site).
Referring again to FIG. 13C, operation 1324 may include operation
1326 depicting obtaining identification of the particular entity
from the remote database. For example, FIG. 8, e.g., FIG. 8C, shows
identification of the particular entity from the remote location
obtaining module 826 obtaining identification (e.g., a name) of the
particular entity (e.g., the person depicted in the image) from the
remote database (e.g., a face recognition database of various faces
that is compiled by a social networking site that has lots of
pictures).
Referring again to FIG. 13C, operation 1324 may include operation
1328 depicting comparing the inputted identification of the
particular entity with the obtained identification of the
particular entity from the remote database. For example, FIG. 8,
e.g., FIG. 8C, shows inputted identification of the particular
entity and obtained identification of the particular entity from
the remote location comparing module 828 comparing the inputted
identification (e.g., by the operator of the acquiring device) of
the particular entity (e.g., the person depicted in the image) with
the obtained identification of the particular entity from the
remote database (e.g., the face recognition database of various
faces that is compiled by a social networking site that has lots of
pictures).
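The comparison of operations 1324 through 1328 might be sketched as
follows; fetch_remote_id() is a hypothetical stand-in for the query to
the remote recognition database, and the case-insensitive string match
is only one possible comparison rule.

```python
# Sketch: compare the user's typed/spoken identification against what a
# remote recognition service returned; treat the relationship as
# confirmed only when the two agree.
def confirm_relationship(inputted_id, face_img, fetch_remote_id):
    remote_id = fetch_remote_id(face_img)   # e.g., social-network lookup
    match = (remote_id is not None
             and inputted_id.strip().lower() == remote_id.strip().lower())
    return {"identity": remote_id, "has_relationship": match}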
Referring again to FIG. 13C, operation 1326 may include operation
1330 depicting obtaining identification of the particular entity
from a facial recognition database. For example, FIG. 8, e.g., FIG.
8C, shows identification of the particular entity from a facial
recognition database obtaining module 830 obtaining identification
of the particular entity (e.g., the person depicted in the image)
from a facial recognition database (e.g., a government or
university-sponsored database).
Referring again to FIG. 13C, operation 1326 may include operation
1332 depicting obtaining identification of the particular entity
from a social networking site. For example, FIG. 8, e.g., FIG. 8C,
shows identification of the particular entity from a social network
site obtaining module 832 obtaining identification of the
particular entity (e.g., the person depicted in the image) from a
social networking site (e.g., Facebook).
Referring again to FIG. 13C, operation 1326 may include operation
1334 depicting obtaining identification of the particular entity
from a public image repository in which images are tagged. For
example, FIG. 8, e.g., FIG. 8C, shows identification of the
particular entity from a public image repository obtaining module
834 obtaining identification of the particular entity from a public
image repository (e.g., Google's Picasa) in which images are tagged
(e.g., in which metadata is added to the image or stored separately
that identifies one or more entities in or characteristics about
the image).
Referring now to FIG. 13D, operation 1006 may include operation
1336 depicting obtaining relationship data that indicates whether
the particular entity has a relationship with an image capture
device that captured the image. For example, FIG. 8, e.g., FIG. 8D,
shows relation data that describes a relation between the
particular entity and an image capture device that captured the
image that contains the depiction of the feature of the particular
entity obtaining module, wherein the relation may be nonextant 836
obtaining relationship data (e.g., binary data indicating whether
or not the relationship is present) that indicates whether the
particular entity (e.g., the person depicted in the image) has a
relationship with an image capture device (e.g., a wearable
computer) that captured the image (e.g., three women at a
wedding).
Referring again to FIG. 13D, operation 1006 may include operation
1338 depicting obtaining relationship data that indicates whether
the particular entity has a relationship with a receiver device
that received the image from an image capture device. For example,
FIG. 8, e.g., FIG. 8D, shows relation data that describes a
relation between the particular entity and a receiver device that
received the image that was captured by an image capture device
that captured the image that contains the depiction of the feature
of the particular entity obtaining module, wherein the relation may
be nonextant 838 obtaining relationship data that indicates whether
the particular entity has a relationship (e.g., has stored data
about, e.g., has received an email from, or has corresponded with)
with a receiver device (e.g., a remote server that receives the
image) that received the image (e.g., an image of two women playing
basketball) from an image capture device (e.g., a wearable
computer).
Referring again to FIG. 13D, operation 1338 may include operation
1340 depicting obtaining relationship data that indicates whether
the particular entity has the relationship with a smartphone device
that received the image from an image capture device. For example,
FIG. 8, e.g., FIG. 8D, shows relation data that describes a
relation between the particular entity and a smartphone device that
received the image that was captured by an image capture device
that captured the image that contains the depiction of the feature
of the particular entity obtaining module, wherein the relation may
be nonextant 840 obtaining relationship data that indicates whether
the particular entity (e.g., the person depicted in the image) has
the relationship with (e.g., is in the contact list of) a smartphone
device that received the image (e.g., an image of three women at a
bar) from an image capture device (e.g., a wearable computer).
Referring again to FIG. 13D, operation 1338 may include operation
1342 depicting obtaining relationship data that indicates whether
the particular entity has the relationship with a remote server
device that received the image from an image capture device. For
example, FIG. 8, e.g., FIG. 8D, shows relation data that describes
a relation between the particular entity and a remote server device
that received the image that was captured by an image capture
device that captured the image that contains the depiction of the
feature of the particular entity obtaining module, wherein the
relation may be nonextant 842 obtaining relationship data that
indicates whether the particular entity (e.g., the person depicted
in the image) has the relationship with a remote server device
(e.g., a home computer that communicates with one or more wearable
computers that are away from the house) that received the image
(e.g., a picture of a family of four at a baseball game) from an
image capture device (e.g., a wearable computer).
Referring again to FIG. 13D, operation 1006 may include operation
1344 depicting receiving relationship data that indicates whether
the particular entity has the relationship with the device that
facilitated the acquisition of the image, from a user of the device
that facilitated the acquisition of the image. For example, FIG. 8,
e.g., FIG. 8D, shows relation data that describes a relation
between the particular entity and a device that facilitated the
acquisition of the image that contains the depiction of the feature
of the particular entity receiving from a device user module,
wherein the relation may be nonextant 844 receiving relationship
data that indicates whether the particular entity (e.g., the person
depicted in the image) has the relationship (e.g., has previously
communicated with, e.g., through the particular entity's own
device) with the device that facilitated the acquisition of the
image, from a user of the device that facilitated the acquisition
of the image (e.g., an image of two people at a bar).
Referring now to FIG. 13E, operation 1006 may include operation
1346 depicting receiving relationship data that indicates whether
the particular entity has the relationship with the device that
facilitated the acquisition of the image, from the device that
facilitated the acquisition of the image. For example, FIG. 8,
e.g., FIG. 8E, shows relation data that describes a relation
between the particular entity and a device that facilitated the
acquisition of the image that contains the depiction of the feature
of the particular entity receiving from the device that facilitated
the acquisition of the image module, wherein the relation may be
nonextant 846 receiving relationship data that indicates whether
the particular entity (e.g., the person depicted in the image, for
which a privacy beacon has been detected) has the relationship with
the device (e.g., a tablet device that received the image from a
wearable computer) that facilitated the acquisition of the image
(e.g., an image of people at a park, from an image capture device,
e.g., the wearable computer), from the device that facilitated the
acquisition of the image.
Referring again to FIG. 13E, operation 1006 may include operation
1348 depicting obtaining relationship data, from the device that
facilitated acquisition of the image, that indicates whether the
particular entity is known to a control entity that controls the
device that facilitated acquisition of the image. For example, FIG.
8, e.g., FIG. 8E, shows relation data that describes a relation
between the particular entity and a control entity that controls
the device that facilitated the acquisition of the image that
contains the depiction of the feature of the particular entity
receiving from the device that facilitated the acquisition of the
image module, wherein the relation may be nonextant 848 obtaining
relationship data, from the device (e.g., a cellular telephone
device) that facilitated acquisition of the image, that indicates
whether the particular entity is known to a control entity that
controls the device that facilitated acquisition of the image.
FIGS. 14A-14C depict various implementations of operation 1008,
depicting performing obfuscation on at least a portion of the
image, wherein the depiction of the feature of the particular
entity is excluded from the obfuscation when the obtained
relationship data indicates that the particular entity has the
relationship with the device that facilitated the acquisition of
the image, according to embodiments. Referring now to FIG. 14A,
operation 1008 may include operation 1402 depicting selecting a
portion of the image for obfuscation that includes the depiction of
the feature of the particular entity when the obtained relationship
data does not indicate the relationship between the particular
entity and the device. For example, FIG. 9, e.g., FIG. 9A, shows
the particular portion of the image that includes the depiction of
the feature of the particular entity selecting for obfuscation when
the relation data indicates that the relation between the
particular entity and the device that facilitated the acquisition
of the image that contains the depiction of the feature of the
particular entity is absent module 902 selecting a portion of the
image (e.g., an image of several people at a bar) for obfuscation
(e.g., blurring or otherwise obscuring) that includes the depiction
of the feature (e.g., the face and shoulders) of the particular
entity (e.g., person "A" in the picture) when the obtained
relationship data does not indicate the relationship between the
particular entity (e.g., person "A") and the device that facilitated
the acquisition of the image.
Referring again to FIG. 14A, operation 1008 may include operation
1404 depicting performing obfuscation of the selected portion of
the image. For example, FIG. 9, e.g., FIG. 9A, shows obfuscation of
the selected portion of the image performing module 904 performing
obfuscation (e.g., obscuring, e.g., blurring, pixelating, adding
noise to, filtering, shading, redacting, blocking out, etc.) of the
selected portion of the image (e.g., the portion of the image that
contains the face and shoulders of person "A").
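As one concrete (but non-limiting) rendering of the blurring option,
the Pillow sketch below obscures only the selected region; the bounding
box is assumed to come from an earlier detection step, and the radius
of 12 is illustrative.

```python
# Sketch of operation 1404 using Gaussian blur (one of the obscuring
# techniques listed above) on just the selected region; Pillow only.
from PIL import Image, ImageFilter

def obfuscate_region(img, box, radius=12):
    """box: (left, upper, right, lower) region to obscure in place."""
    region = img.crop(box).filter(ImageFilter.GaussianBlur(radius))
    img.paste(region, box)
    return img

img = Image.open("bar_photo.jpg")
obfuscate_region(img, (120, 40, 260, 200)).save("bar_photo_obscured.jpg")
```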
Referring again to FIG. 14A, operation 1404 may include operation
1406 depicting performing image manipulation of the selected
portion of the image to reduce clarity of the image below a
particular threshold level. For example, FIG. 9, e.g., FIG. 9A,
shows image manipulation to reduce image clarity of the selected
portion of the image performing module 906 performing image
manipulation of the selected portion of the image (e.g., a portion
of a full body shot of a person) to reduce clarity of the image
below a particular threshold level.
Referring again to FIG. 14A, operation 1406 may include operation
1408 depicting performing image manipulation of the selected
portion of the image to reduce clarity of the image below a
particular threshold level at which facial recognition can be
performed on the image. For example, FIG. 9, e.g., FIG. 9A, shows
image manipulation to reduce image clarity of the selected portion
of the image below a threshold level at which a particular facial
recognition algorithm is capable of execution performing module 908
performing image manipulation of the selected portion of the image
to reduce clarity of the image below a particular threshold level
at which facial recognition can be performed on the image (e.g., an
image of two people on a fishing boat).
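One way to realize a "below the recognition threshold" criterion is to
increase the blur until a detector no longer fires, as sketched below;
detect_faces() is a hypothetical stand-in for any face detector, and
the doubling schedule is an assumption of the example.

```python
# Sketch of operation 1408: blur the region progressively harder until
# a face detector finds nothing there, i.e., clarity has dropped below
# the level at which facial recognition can be performed.
from PIL import Image, ImageFilter

def blur_below_recognition(img, box, detect_faces, max_radius=64):
    radius = 4
    candidate = img.crop(box)
    while radius <= max_radius:
        candidate = img.crop(box).filter(ImageFilter.GaussianBlur(radius))
        if not detect_faces(candidate):   # detector finds nothing: done
            break
        radius *= 2                       # still recognizable; blur harder
    img.paste(candidate, box)
    return img
```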
Referring again to FIG. 14A, operation 1404 may include operation
1410 depicting adding noise to the selected portion of the image to
obscure the selected portion of the image. For example, FIG. 9,
e.g., FIG. 9A, shows noise addition to the selected portion of the
image performing module 910 adding noise to the selected portion of
the image (e.g., an image showing the face of a celebrity) to
obscure the selected portion of the image (e.g., the image showing
the face of the celebrity).
Referring again to FIG. 14A, operation 1404 may include operation
1412 depicting obscuring the selected portion of the image through
use of one or more image manipulation techniques. For example, FIG.
9, e.g., FIG. 9A, shows image obscuration function execution on the
selected portion of the image performing module 912 obscuring the
selected portion of the image through use of one or more image
manipulation techniques (e.g., blur, gaussify, depixelate, blend,
cover, etc.).
Referring now to FIG. 14B, operation 1008 may include operation
1414 depicting performing obfuscation on the entire image, when the
obtained relationship data indicates no relationship between the
particular entity and the device that facilitated the acquisition
of the image. For example, FIG. 9, e.g., FIG. 9B, shows obfuscation
of total image, when the relation data indicates that the relation
between the particular entity and the device that facilitated the
acquisition of the image that contains the depiction of the feature
of the particular entity is absent performing module 914 performing
obfuscation on the entire image, when the obtained relationship
data indicates no relationship between the particular entity and
the device that facilitated the acquisition of the image.
Referring again to FIG. 14B, operation 1008 may include operation
1416 depicting performing obfuscation on each entity identified in
the image, except for the particular entity for which exists the
relationship with the device that facilitated the acquisition of
the image. For example, FIG. 9, e.g., FIG. 9B, shows obfuscation of
a particular portion of the image that contains a further entity,
wherein the depiction of the feature of the particular entity is
excluded from the particular portion of the image when the relation
data indicates that the relation between the particular entity and
the device that facilitated the acquisition of the image that
contains the depiction of the feature of the particular entity is
extant performing module 916 performing obfuscation on each entity
identified in the image, except for the particular entity for which
exists the relationship with the device that facilitated the
acquisition of the image (e.g., an image of people at a party).
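A sketch of this selective pass follows; detect_faces() and identify()
are hypothetical stand-ins for the detection and identification steps
described earlier in this application.

```python
# Sketch of operation 1416: blur every detected face except the one
# whose relationship with the device was established.
from PIL import Image, ImageFilter

def obfuscate_unknown(img, detect_faces, identify, known_id, radius=12):
    for box in detect_faces(img):          # each face bounding box
        if identify(img.crop(box)) == known_id:
            continue                       # relationship extant: keep clear
        region = img.crop(box).filter(ImageFilter.GaussianBlur(radius))
        img.paste(region, box)
    return img
```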
Referring now to FIG. 14C, operation 1008 may include operation
1418 depicting identifying a particular portion of the image that
contains the depiction of the feature of the particular entity. For
example, FIG. 9, e.g., FIG. 9C, shows particular portion of the
image that contains the depiction of the feature of the particular
entity identifying module 918 identifying a particular portion of
the image (e.g., an image of two people at a baseball game) that
contains the depiction of the feature (e.g., a face) of the
particular entity (e.g., a person depicted in the image).
Referring again to FIG. 14C, operation 1008 may include operation
1420 depicting performing obfuscation on a further portion of the
image that is other than the identified particular portion of the
image. For example, FIG. 9, e.g., FIG. 9C, shows obfuscation of a
further portion of the image that is other than the identified
particular portion of the image that contains the depiction of the
feature of the particular entity performing module 920 performing
obfuscation on a further portion of the image (e.g., a portion
containing all the faces of the entities that are not the
identified particular entity) that is other than the identified
particular portion of the image (e.g., the portion containing the
face of the particular entity).
Referring again to FIG. 14C, operation 1008 may include operation
1422 depicting determining whether to perform obfuscation of the
particular portion of the image, at least partly based on the
obtained relationship data. For example, FIG. 9, e.g., FIG. 9C,
shows determination regarding whether to perform obfuscation of the
particular portion of the image, at least partly based on the
relation data performing module 922 determining whether to perform
obfuscation of the particular portion of the image (e.g., a portion
containing the private parts of a famous celebrity woman), at least
partly based on the obtained relationship data (e.g., indicating
whether the person taking the picture knew the woman, or just
happened to take an opportune photo).
Referring again to FIG. 14C, operation 1420 may include operation
1424 depicting performing obfuscation on the further portion of the
image that depicts a feature of the one or more entities other than
the particular entity. For example, FIG. 9, e.g., FIG. 9C, shows
obfuscation of a further portion of the image that is other than
the identified particular portion of the image that contains the
depiction of the feature of the particular entity and that depicts
a feature of one or more entities other than the particular entity
performing module 924 performing obfuscation on the further portion
of the image (e.g., an image of three people at a party, where the
further portion is the image of two of the three women in the
picture that are not the particular entity, e.g., that are not
recognized as known to the device/device operator) that depicts a
feature (e.g., a face) of the one or more entities other than the
particular entity (e.g., the image of two of the three women in the
picture that are not the particular entity, e.g., that are not
recognized as known to the device/device operator).
Referring again to FIG. 14C, operation 1424 may include operation
1426 depicting performing obfuscation on the further portion of the
image that depicts a feature of the one or more entities other than
the particular entity, for which a privacy beacon is detected. For
example, FIG. 9, e.g., FIG. 9C, shows obfuscation of a further
portion of the image that depicts a feature of one or more entities
other than the particular entity for which a privacy beacon is
detected performing module 926 performing obfuscation on the
further portion (e.g., an image of three people at a party, where
the further portion is the image of two of the three women in the
picture that are not the particular entity, e.g., that are not
recognized as known to the device/device operator) that depicts a
feature (e.g., a face) of the one or more entities other than the
particular entity (e.g., the image of two of the three women in the
picture that are not the particular entity, e.g., that are not
recognized as known to the device/device operator), for which a
privacy beacon (e.g., a detectable marker that indicates that an
entity captured in an image may have terms and/or conditions
associated with the potential use of her image, examples of which
are previously outlined in this application) has been detected.
Referring again to FIG. 14C, operation 1420 may include operation
1428 depicting performing obfuscation on all other portions of the
image other than the particular portion. For example, FIG. 9, e.g.,
FIG. 9C, shows obfuscation of an entire portion of the image that
is other than the identified particular portion of the image that
contains the depiction of the feature of the particular entity
performing module 928 performing obfuscation on all other portions
of the image other than the particular portion (e.g., the portion
containing the face of the particular entity, e.g., the
celebrity).
Referring again to FIG. 14C, operation 1422 may include operation
1430 depicting performing obfuscation of the particular portion of
the image when the obtained relationship data does not indicate the
relationship between the particular party and the device that
facilitated acquisition of the image. For example, FIG. 9, e.g.,
FIG. 9C, shows obfuscation of the particular portion of the image
when the relation data indicates that the relation between the
particular entity and the device that facilitated the acquisition
of the image that contains the depiction of the feature of the
particular entity is absent performing module 930 performing
obfuscation (e.g., one or more operations that decrease the
recognizability of a depiction of a feature of an entity, e.g.,
blur) of the particular portion of the image (e.g., the private
parts of a celebrity) when the obtained relationship data (e.g.,
data that indicates that the depicted entity, e.g., a woman, is not
known to the device that acquired the image, e.g., because it was
taken surreptitiously) does not indicate the relationship
between the particular party (e.g., a woman) and the device (e.g.,
a wearable computer) that facilitated acquisition (e.g., that
captured the image) of the image (e.g., a risque picture of a
woman).
Referring again to FIG. 14C, operation 1422 may include operation
1432 depicting excluding obfuscation of the particular portion of
the image when the obtained relationship data indicates the
relationship between the particular party and the device that
facilitated acquisition of the image. For example, FIG. 9, e.g.,
FIG. 9C, shows obfuscation of the particular portion of the image
when the relation data indicates that the relation between the
particular entity and the device that facilitated the acquisition
of the image that contains the depiction of the feature of the
particular entity is extant avoiding module 932 excluding
obfuscation (e.g., one or more operations that decrease the
recognizability of a depiction of a feature of an entity, e.g.,
blur) of the particular portion of the image (e.g., the private
parts of a celebrity) when the obtained relationship data (e.g.,
data that indicates that the depicted entity, e.g., a woman, is
known to the device that acquired the image, e.g., it is her
husband's device and she is in his contact list) indicates the
relationship between the particular party (e.g., the woman) and
the device (e.g., the husband's cell phone) that facilitated
acquisition
(e.g., that received the image from a wearable computer that
captured the image) of the image (e.g., a risque picture of a
woman).
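Taken together, operations 1430 and 1432 amount to a single branch
on the obtained relationship data. The sketch below is
illustrative only; the relationship_data mapping and its
is_related flag are assumptions introduced here (e.g., standing in
for a contact-list lookup) and are not drawn from the
specification.

    # Illustrative sketch: obfuscate the particular portion when no
    # relationship is indicated (operation 1430); otherwise exclude it
    # from obfuscation (operation 1432).
    from PIL import ImageFilter

    def gate_obfuscation(img, particular_box, relationship_data,
                         radius=12):
        # relationship_data: assumed mapping; is_related=True indicates a
        # relationship between the particular party and the device
        if not relationship_data.get("is_related", False):
            region = img.crop(particular_box)
            img.paste(region.filter(ImageFilter.GaussianBlur(radius)),
                      particular_box)
        return img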
It is noted that, in the foregoing examples, various concrete,
real-world examples of terms that appear in the following claims
are described. These examples are meant to be exemplary only and
non-limiting. Moreover, any example of any term may be combined or
added to any example of the same term in a different place, or a
different term in a different place, unless context dictates
otherwise.
All of the above U.S. patents, U.S. patent application
publications, U.S. patent applications, foreign patents, foreign
patent applications and non-patent publications referred to in this
specification and/or listed in any Application Data Sheet, are
incorporated herein by reference, to the extent not inconsistent
herewith.
The foregoing detailed description has set forth various
embodiments of the devices and/or processes via the use of block
diagrams, flowcharts, and/or examples. Insofar as such block
diagrams, flowcharts, and/or examples contain one or more functions
and/or operations, it will be understood by those within the art
that each function and/or operation within such block diagrams,
flowcharts, or examples can be implemented, individually and/or
collectively, by a wide range of hardware, software (e.g., a
high-level computer program serving as a hardware specification),
firmware, or virtually any combination thereof, limited to
patentable subject matter under 35 U.S.C. 101. In an embodiment,
several portions of the subject matter described herein may be
implemented via Application Specific Integrated Circuits (ASICs),
Field Programmable Gate Arrays (FPGAs), digital signal processors
(DSPs), or other integrated formats. However, those skilled in the
art will recognize that some aspects of the embodiments disclosed
herein, in whole or in part, can be equivalently implemented in
integrated circuits, as one or more computer programs running on
one or more computers (e.g., as one or more programs running on one
or more computer systems), as one or more programs running on one
or more processors (e.g., as one or more programs running on one or
more microprocessors), as firmware, or as virtually any combination
thereof, limited to patentable subject matter under 35 U.S.C. 101,
and that designing the circuitry and/or writing the code for the
software (e.g., a high-level computer program serving as a hardware
specification) and/or firmware would be well within the skill of
one skilled in the art in light of this disclosure. In addition,
those skilled in the art will appreciate that the mechanisms of the
subject matter described herein are capable of being distributed as
a program product in a variety of forms, and that an illustrative
embodiment of the subject matter described herein applies
regardless of the particular type of signal bearing medium used to
actually carry out the distribution. Examples of a signal bearing
medium include, but are not limited to, the following: a recordable
type medium such as a floppy disk, a hard disk drive, a Compact
Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer
memory, etc.; and a transmission type medium such as a digital
and/or an analog communication medium (e.g., a fiber optic cable, a
waveguide, a wired communications link, a wireless communication
link (e.g., transmitter, receiver, transmission logic, reception
logic, etc.), etc.).
While particular aspects of the present subject matter described
herein have been shown and described, it will be apparent to those
skilled in the art that, based upon the teachings herein, changes
and modifications may be made without departing from the subject
matter described herein and its broader aspects and, therefore, the
appended claims are to encompass within their scope all such
changes and modifications as are within the true spirit and scope
of the subject matter described herein. It will be understood by
those within the art that, in general, terms used herein, and
especially in the appended claims (e.g., bodies of the appended
claims) are generally intended as "open" terms (e.g., the term
"including" should be interpreted as "including but not limited
to," the term "having" should be interpreted as "having at least,"
the term "includes" should be interpreted as "includes but is not
limited to," etc.).
It will be further understood by those within the art that if a
specific number of an introduced claim recitation is intended, such
an intent will be explicitly recited in the claim, and in the
absence of such recitation no such intent is present. For example,
as an aid to understanding, the following appended claims may
contain usage of the introductory phrases "at least one" and "one
or more" to introduce claim recitations. However, the use of such
phrases should not be construed to imply that the introduction of a
claim recitation by the indefinite articles "a" or "an" limits any
particular claim containing such introduced claim recitation to
claims containing only one such recitation, even when the same
claim includes the introductory phrases "one or more" or "at least
one" and indefinite articles such as "a" or "an" (e.g., "a" and/or
"an" should typically be interpreted to mean "at least one" or "one
or more"); the same holds true for the use of definite articles
used to introduce claim recitations. In addition, even if a
specific number of an introduced claim recitation is explicitly
recited, those skilled in the art will recognize that such
recitation should typically be interpreted to mean at least the
recited number (e.g., the bare recitation of "two recitations,"
without other modifiers, typically means at least two recitations,
or two or more recitations).
Furthermore, in those instances where a convention analogous to "at
least one of A, B, and C, etc." is used, in general such a
construction is intended in the sense one having skill in the art
would understand the convention (e.g., "a system having at least
one of A, B, and C" would include but not be limited to systems
that have A alone, B alone, C alone, A and B together, A and C
together, B and C together, and/or A, B, and C together, etc.). In
those instances where a convention analogous to "at least one of A,
B, or C, etc." is used, in general such a construction is intended
in the sense one having skill in the art would understand the
convention (e.g., "a system having at least one of A, B, or C"
would include but not be limited to systems that have A alone, B
alone, C alone, A and B together, A and C together, B and C
together, and/or A, B, and C together, etc.). It will be further
understood by those within the art that typically a disjunctive
word and/or phrase presenting two or more alternative terms,
whether in the description, claims, or drawings, should be
understood to contemplate the possibilities of including one of the
terms, either of the terms, or both terms unless context dictates
otherwise. For example, the phrase "A or B" will be typically
understood to include the possibilities of "A" or "B" or "A and
B."
With respect to the appended claims, those skilled in the art will
appreciate that recited operations therein may generally be
performed in any order. Also, although various operational flows
are presented in a sequence or sequences, it should be understood that the
various operations may be performed in other orders than those
which are illustrated, or may be performed concurrently. Examples
of such alternate orderings may include overlapping, interleaved,
interrupted, reordered, incremental, preparatory, supplemental,
simultaneous, reverse, or other variant orderings, unless context
dictates otherwise. Furthermore, terms like "responsive to,"
"related to," or other past-tense adjectives are generally not
intended to exclude such variants, unless context dictates
otherwise.
This application may make reference to one or more trademarks,
e.g., a word, letter, symbol, or device adopted by one manufacturer
or merchant and used to identify and/or distinguish his or her
product from those of others. Trademark names used herein are set
forth in language that makes clear their identity, that
distinguishes them from common descriptive nouns, and that gives
them fixed and definite meanings, or, in many if not all cases,
they are accompanied by other specific identification using terms not
covered by trademark. In addition, trademark names used herein have
meanings that are well-known and defined in the literature, or do
not refer to products or compounds for which knowledge of one or
more trade secrets is required in order to divine their meaning.
All trademarks referenced in this application are the property of
their respective owners, and the appearance of one or more
trademarks in this application does not diminish or otherwise
adversely affect the validity of the one or more trademarks. All
trademarks, registered or unregistered, that appear in this
application are assumed to include a proper trademark symbol, e.g.,
the circle R or bracketed capitalization (e.g., [trademark name]),
even when such trademark symbol does not explicitly appear next to
the trademark. To the extent a trademark is used in a descriptive
manner to refer to a product or process, that trademark should be
interpreted to represent the corresponding product or process as of
the date of the filing of this patent application.
Throughout this application, the terms "in an embodiment," "in one
embodiment," "in several embodiments," "in at least one
embodiment," "in various embodiments," and the like may be used.
Each of these terms, and all such similar terms, should be
construed as "in at least one embodiment, and possibly but not
necessarily all embodiments," unless explicitly stated otherwise.
Specifically, unless explicitly stated otherwise, the intent of
phrases like these is to provide non-exclusive and non-limiting
examples of implementations of the invention. The mere statement
that one, some, or many embodiments include one or more things or
have one or more features, does not imply that all embodiments
include one or more things or have one or more features, but also
does not imply that such embodiments must exist. It is a mere
indicator of an example and should not be interpreted otherwise,
unless explicitly stated as such.
Those skilled in the art will appreciate that the foregoing
specific exemplary processes and/or devices and/or technologies are
representative of more general processes and/or devices and/or
technologies taught elsewhere herein, such as in the claims filed
herewith and/or elsewhere in the present application.
* * * * *