U.S. patent application number 17/138208, for a system, method, and apparatus for an interactive container, was published by the patent office on 2021-04-22.
This patent application is currently assigned to Omni Consumer Products, LLC, which is also the listed applicant. The invention is credited to Stephen Howard and Larry McNutt.
Publication Number | 20210117040 |
Application Number | 17/138208 |
Family ID | 1000005315829 |
Publication Date | 2021-04-22 |
[Patent drawings: seven figure sheets (D00000–D00006) accompany publication US20210117040A1, dated 2021-04-22.]
United States Patent Application | 20210117040 |
Kind Code | A1 |
Howard; Stephen; et al. | April 22, 2021 |
SYSTEM, METHOD, AND APPARATUS FOR AN INTERACTIVE CONTAINER
Abstract
An interactive container creation method, apparatus, and system.
The method includes creating a list, deploying the list to at least
one device, calibrating and identifying touch areas, identifying at
least one of an asset and a shape to be defined as a touch area,
identifying the x, y axis of each point for a predetermined number
of points for each of the at least one of asset or shape, and
creating a touch area based on the identified x, y axis.
Inventors: | Howard; Stephen (Dallas, TX); McNutt; Larry (Carrollton, TX) |
Applicant: | Omni Consumer Products, LLC (Addison, TX, US) |
Assignee: | Omni Consumer Products, LLC (Addison, TX) |
Family ID: | 1000005315829 |
Appl. No.: | 17/138208 |
Filed: | December 30, 2020 |
Related U.S. Patent Documents

| Parent Application | Filing Date | Patent Number | Continuing Application |
| --- | --- | --- | --- |
| 15394799 | Dec 29, 2016 | 10891003 | 17138208 |
| 15258973 | Sep 7, 2016 | | 15394799 |
| 14535823 | Nov 7, 2014 | 9465488 | 15258973 |
| 13890709 | May 9, 2013 | 9360888 | 14535823 |
| 14985044 | Dec 30, 2015 | | 15394799 |
| PCT/US2015/068192 | Dec 30, 2015 | | 14985044 |
| 62311354 | Mar 21, 2016 | | |
| 62373272 | Aug 10, 2016 | | |
Current U.S. Class: | 1/1 |
Current CPC Class: | H04N 13/204 20180501; G06F 3/0418 20130101; G06F 3/0425 20130101; H04N 5/232 20130101; G06K 9/00355 20130101; G06K 9/00288 20130101; G06F 2203/04101 20130101; G06F 3/017 20130101; G06K 9/209 20130101; G06F 3/04886 20130101; G06F 3/0482 20130101; H04N 5/2251 20130101; G06F 2203/04108 20130101 |
International Class: | G06F 3/041 20060101 G06F003/041; G06F 3/042 20060101 G06F003/042; G06F 3/0482 20060101 G06F003/0482; H04N 13/204 20060101 H04N013/204; H04N 5/225 20060101 H04N005/225; G06F 3/0488 20060101 G06F003/0488; G06K 9/00 20060101 G06K009/00; G06K 9/20 20060101 G06K009/20; G06F 3/01 20060101 G06F003/01; H04N 5/232 20060101 H04N005/232 |
Claims
1. An interactive system comprising a processor capable of
executing instructions relating to an interactive container creation
method for creating an interactive experience, the interactive
container creation method comprising: identifying at least one of
an asset and a shape, in an image captured by a camera, to be
defined as a touch area in a first interactive container, wherein
the identifying comprises at least: retrieving a baseline depth;
retrieving a value of a real-time area; determining a difference
between the baseline depth and the value; comparing the difference
to a threshold and determining if the real-time area is to be
defined as the touch area based on the comparison; identifying the
x, y axis of at least one point of the at least one of asset or
shape; and creating the touch area based on the identified x, y axis
and in response to the real-time area being defined as the touch
area in the first interactive container based on the
comparison.
2. The interactive container creation method of claim 1, further
comprising: creating a list; and deploying the list to at least one
device.
3. The interactive container creation method of claim 2, further
comprising: creating a correlation between (i) at least a portion
of the list to (ii) the touch area, image captured from the camera
or a projected image from a projector, wherein the touch area of
the first interactive container produces an activity identified in
the list resulting from interaction related to the touch area, and
wherein the produced activity is in a second interactive
container.
4. The interactive container creation method of claim 3, wherein
the second interactive container relates to the display from the
projector.
5. The interactive container creation method of claim 4, wherein
the display is viewable by a human eye without need for wearable
devices.
6. The interactive container creation method of claim 1, wherein
the retrieving the baseline depth comprises retrieving a baseline
depth area utilizing multiple depth frames.
7. The interactive container creation method of claim 1, wherein
the value is a moving average.
8. The interactive container creation method of claim 1, wherein
the method utilizes at least one of: a depth camera; a weighted
average to add items into the container over time; and an interactive
container that at least one of communicates with and causes change in
another container.
9. The interactive container creation method of claim 3, wherein a
radius of surrounding pixels changes based on the depth of the
camera.
10. The interactive container creation method of claim 2, wherein
the list comprises at least one of an image, an asset, an
attribute, a wisp, a rule, a menu, axis location and any
combination thereof, wherein the attribute is at least one of
audio, video, image, display, or combination thereof, and wherein
the asset is at least one of an object, a person, printout of an
object or person, a displayed item, an image, a video, an
identified item or person, or a combination thereof.
11. The interactive container creation method of claim 2, wherein
the list is created on a machine by identifying at least one of the
asset and the shape.
12. The interactive container creation method of claim 2, wherein
the list is deployed simultaneously on several devices in the same
or in different locations.
13. The interactive container creation method of claim 1, further
comprising a calibration method, wherein the calibration method
comprises: identifying an item to be defined as the touch area,
wherein the item is one of the asset, a display, a shape, light,
exposure, contrast, RGB difference, infrared, or a combination
thereof; identifying coordinates of predetermined number of points
related to the item; and identifying an area within the
predetermined points as the touch area.
14. The interactive container creation method of claim 13, wherein
the calibration method utilizes the camera to identify the
coordinates.
15. The interactive container creation method of claim 13, wherein
the calibration method is performed on a single container or
multiple containers at the same time.
16. The interactive container creation method of claim 13, wherein
the calibration method is one of automatic or manual.
17. The interactive container creation method of claim 13, wherein
the calibration method further comprises identifying one of a rule,
a menu, a display, and an activity related to the identified touch
area.
18. The interactive container creation method of claim 13, further
comprising at least one of: training an image to detect at least
one of a known image, asset, logo, item or combination thereof; and
cropping a calibration stream to calibrate only areas of
interest.
19. An interactive container creation method for creating an
interactive experience, the method comprising: calibrating and
identifying at least one interactive container; identifying at
least one of an asset and a shape to be defined as a touch area in
the interactive container; identifying the x,y axis of at least one
point of the at least one of the asset and the shape; and creating
a correlation between the touch area and at least a portion of a
list, wherein the list identifies a plurality of activities
resulting from interaction related to the touch area, and wherein
each touch area of the interactive container produces an activity
from among the plurality of activities identified in the list.
20. An interactive container creation system for creating an
interactive experience, comprising: a processor; a storage medium
comprising at least one of deployed data and touch area data,
wherein the processor is coupled to the storage medium; a touch
detector for generating the touch area data, wherein the touch area
data relates to at least one of an asset or a shape; a touch
listener, coupled to the processor and the touch detector, for
determining an activity related to a touch area, wherein each touch
area of the interactive container produces an activity determined
by the touch area data resulting from interaction related to the
touch area; and at least one input/output device for at least one
of receiving input to the interactive container system or to cause
an action related to the interactive container system.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of U.S. patent
application Ser. No. 15/394,799, filed Dec. 29, 2016, which is a
continuation-in-part of U.S. application Ser. No. 15/258,973, filed
on Sep. 7, 2016, which is a continuation-in-part of U.S.
application Ser. No. 14/535,823 filed Nov. 7, 2014, which is a
continuation-in-part of U.S. application Ser. No. 13/890,709 filed
May 9, 2013. This application is a continuation of U.S. patent
application Ser. No. 15/394,799, filed Dec. 29, 2016, which is a
continuation-in-part of U.S. application Ser. No. 14/985,044 and a
continuation-in-part of PCT Application No. PCT/US2015/068192 both
filed on Dec. 30, 2015. This application claims priority to U.S.
Provisional Applications 62/311,354 filed on Mar. 21, 2016 and
62/373,272 filed on Aug. 10, 2016. The above identified patent
applications are incorporated herein by reference in their entirety
to provide continuity of disclosure.
FIELD OF THE INVENTION
[0002] The disclosure relates to systems, apparatus and methods for
creating and operating interactive containers. More specifically,
this disclosure relates to creating and operating interactive
containers that relate to any assets that are projected, printed,
displayed, etc.
BACKGROUND OF THE INVENTION
[0003] It has become more common for assets of different origin or
type to communicate and cause an activity based on such
interaction. For example, it has become common for users to utilize
their portable devices to control various products in their home
and/or office made by different manufacturers. The selection of the
assets and their interactions can be customized and varied.
Therefore, it is desirable to be able to simulate and customize
such interactions. In addition, some assets may be
susceptible to tampering. Thus, it is beneficial to display an
interactive image, printout, etc. of such assets. Therefore, there
is a need for an improved system, apparatus and method for creating
and operating interactive container(s).
SUMMARY OF THE INVENTION
[0004] Embodiments described herein relate to an interactive
container creation method, apparatus and system. The method
includes creating a list, deploying the list to at least one
device, calibrating and identifying touch areas, identifying at
least one of an asset and a shape to be defined as a touch area,
identifying the x,y axis of each point for a predetermined number
of points for each of the at least one of asset or shape, and
creating a touch area based on the identified x, y axis.
BRIEF DESCRIPTION OF DRAWINGS
[0005] Reference will now be made to the following drawings:
[0006] FIG. 1 is an embodiment illustrating a flow diagram of a
method for creating at least one interactive container;
[0007] FIG. 2 is an embodiment illustrating a flow diagram of a
method for calibrating at least one interactive container;
[0008] FIG. 3 is a block diagram illustrating an embodiment of an
apparatus of interactive containers;
[0009] FIG. 4 is a block diagram illustrating an embodiment of an
interactive system relating to at least one interactive
container;
[0010] FIG. 5 is an embodiment illustrating a flow diagram of a
method for refining touch recognition; and
[0011] FIG. 6A-C are diagrams depicting an embodiment of an
interactive container.
DETAILED DESCRIPTION
[0012] In the descriptions that follow, like parts are marked
throughout the specification and drawings with the same numerals,
respectively. The drawing figures are not necessarily drawn to
scale and certain figures may be shown in exaggerated or
generalized form in the interest of clarity and conciseness.
[0013] It will be appreciated by those skilled in the art that
aspects of the present disclosure may be illustrated and described
herein in any of a number of patentable classes or context
including any new and useful process, machine, manufacture, or
composition of matter, or any new and useful improvement thereof.
Therefore, aspects of the present disclosure may be implemented
entirely in hardware or as a combination of software and hardware
that may all generally be referred to herein as a
"circuit," "module," "component," or "system" (including firmware,
resident software, micro-code, etc.). Further, aspects of the
present disclosure may take the form of a computer program product
embodied in one or more computer readable media having computer
readable program code embodied thereon.
[0014] Any combination of one or more computer readable media may
be utilized. The computer readable media may be a computer readable
signal medium, any type of memory or a computer readable storage
medium. For example, a computer readable storage medium may be, but
not limited to, an electronic, magnetic, optical, electromagnetic,
or semiconductor system, apparatus, or device, or any suitable
combination of the foregoing. More specific examples of the
computer readable storage medium would include, but are not limited
to: a portable computer diskette, a hard disk, a random access
memory ("RAM"), a read-only memory ("ROM"), an erasable
programmable read-only memory ("EPROM" or Flash memory), an
appropriate optical fiber with a repeater, a portable compact disc
read-only memory ("CD-ROM"), an optical storage device, a magnetic
storage device, or any suitable combination of the foregoing. Thus,
a computer readable storage medium may be any tangible medium that
can contain, or store a program for use by or in connection with an
instruction execution system, apparatus, or device.
[0015] Computer program code for carrying out operations utilizing
a processor for aspects of the present disclosure may be written in
any combination of one or more programming languages, markup
languages, style sheets and JavaScript libraries, including but not
limited to Windows Presentation Foundation (WPF), HTML/CSS, XAML,
jQuery, C, Basic, Ada, Python, C++, C#, Pascal, and Arduino.
Additionally, operations can be carried out using any available
compiler.
[0016] Aspects of the present disclosure are described herein with
reference to flowchart illustrations and/or block diagrams of
methods, systems and computer program products according to
embodiments of the disclosure. It will be understood that each
block of the flowchart illustrations and/or block diagrams, and
combinations of blocks in the flowchart illustrations and/or block
diagrams, can be implemented by computer program instructions.
[0017] These computer program instructions may be provided to a
processor of a general purpose computer, special purpose computer,
or other programmable data processing apparatus to produce a
machine, such that the instructions, which execute via the
processor of the computer or other programmable instruction
execution apparatus, create a mechanism for implementing the
functions/acts specified in the flowchart and/or block diagram
block or blocks.
[0018] These computer program instructions may also be stored in a
computer readable medium that can direct a computer, processor,
other programmable data processing apparatus, or other devices to
function in a particular manner, such that the instructions stored
in the computer readable medium produce an article of manufacture
including instructions which, when executed, cause a computer to
implement the function/act specified in the flowchart and/or block
diagram block or blocks. The computer
program instructions may also be loaded onto a computer, processor,
other programmable instruction execution apparatus, or other
devices to cause a series of operational steps to be performed on
the computer, other programmable apparatuses or other devices to
produce a computer implemented process such that the instructions
which execute on the computer or other programmable apparatus
provide processes for implementing the functions/acts specified in
the flowchart and/or block diagram block or blocks.
[0019] FIG. 1 is an embodiment illustrating a flow diagram of a
method 100 for creating at least one interactive container. The
method 100 starts at step 102 and proceeds to step 104. At step
104, the method 100 creates a list. The list may contain images,
assets, attributes, WISPs, rules, menus, etc. A WISP in this
application relates to a shell that defines the rules and the
interaction between the assets and/or containers. In an embodiment,
the creation of the list is performed at a remote location or on a
cloud. In other embodiments, the creation of the list is performed
on the same device operating the interaction between the assets,
menus, and/or containers. In such embodiments, the deployment step
would not be necessary.
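As a purely illustrative sketch, such a list might be represented as a simple structure bundling assets, attributes, and WISP rules for one container. Every field name and value below is a hypothetical example; the disclosure does not specify a concrete format.

```python
# Hypothetical sketch of a deployable "list" for one container. All
# names and values are illustrative assumptions, not a format defined
# by this disclosure.

interactive_list = {
    "container": "engine_display",
    "assets": [
        {"name": "alternator", "type": "printout"},
        {"name": "battery", "type": "displayed_item"},
    ],
    # Attributes may be audio, video, image, display, etc.
    "attributes": {"alternator": "audio", "battery": "video"},
    # A WISP acts as a shell of rules tying interactions to activities.
    "wisps": [
        {"on_touch": "alternator", "do": "play_engine_sound"},
    ],
}

print(interactive_list["wisps"][0]["do"])  # -> play_engine_sound
```

Deploying such a structure to a device, or only its changed entries, is one way the description's deployment step could be realized.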
[0020] At step 106, the method 100 deploys at least one list to a
device that is operating the interaction between the assets, menus,
and/or containers. In one embodiment, the deployment may occur on
several devices that may or may not be at the same location. The
device(s) may be at the same location as the container being
operated. In one embodiment, the axis location, i.e., the x, y, z
location of the assets, may be incorporated into the list at the
list creation time or it may be determined on the device
controlling the interaction, i.e., a device located at the same
location as the container. The device controlling the interaction
may learn the location of the assets, it may display the assets, or
it may scan for characteristics to learn their location. In one
embodiment, a list may already exist and only changes, omissions
and/or additions are deployed, rather than the entire list.
Furthermore, the deployment may be initiated/conducted manually or
it may be automatic.
[0021] At step 108, the method 100 calibrates the assets in
the container and/or identifies the touch areas. During the
calibration process, the method 100 may perform projection mapping
for every container to ensure that the display matches the physical
space. In one embodiment, the method 100 uses image training during
calibration to detect a known image, item, logo, etc.
[0022] In other embodiments, a person manually calibrates the
system by shifting from point to point identifying the touch area
and triggering a new touch area when the current touch area is done
and another touch area exists and needs to be identified by the
system. Whereas, during an automatic calibration, the system
automatically identifies a predetermined number of points per touch
area relating to assets and/or shapes. In another embodiment, a
calibration stream is cropped to where only areas of interest are
calibrated. Only calibrating areas of interest results in a more
accurate and more efficient calibration. The calibration process is
described further in FIG. 2. Method 100 ends at step 110.
[0023] FIG. 2 is an embodiment illustrating a flow diagram of a
method 200 for calibrating at least one interactive container.
Method 200 starts at step 202 and proceeds to step 204, wherein the
method 200 detects an asset or shape displayed that needs to be
defined as a touch area. At step 206, the method 200 identifies a
predetermined number of points relating to the asset or shape where
each point is defined by its x, y axis. At step 208, the method 200
determines if there are more assets or shapes to be identified as
touch areas. If there are more assets or shapes to be identified as
touch areas, the method 200 returns to step 204. Otherwise, the
method 200 ends at step 210.
[0024] For example, a projector displays a pre-determined shape
over a touch area not identified yet. Using a camera, the method
identifies the x, y axis for each point in a pre-determined number
of points relating to the asset or displayed shape. Once the axis
is identified, the method 200 proceeds to the next asset or shape
in the container. The method 200 may perform such function on a
single container or multiple containers. The method 200 may utilize
asset identification, display recognition, shape recognition,
light, exposure, contrast, RGB difference, infrared, etc. to
determine the areas that need to be identified as touch areas. When
all touch areas are identified, the camera and/or method are
capable of identifying the touch areas and the corresponding rule,
menu, activity, etc. relating to each touch area.
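Once the x, y points of an area are identified, checking whether a later touch falls inside them is a standard point-in-polygon test. The sketch below is a minimal illustration under that assumption; the `TouchArea` name and the ray-casting test are not from the disclosure.

```python
# Sketch: store the calibrated (x, y) points of a touch area and test
# whether a detected coordinate falls inside. The TouchArea class and
# the ray-casting method are illustrative assumptions.

class TouchArea:
    def __init__(self, name, points):
        # points: the calibrated (x, y) pairs outlining the area
        self.name = name
        self.points = points

    def contains(self, x, y):
        # Ray casting: count how many polygon edges a horizontal ray
        # from (x, y) crosses; an odd count means the point is inside.
        inside = False
        n = len(self.points)
        for i in range(n):
            x1, y1 = self.points[i]
            x2, y2 = self.points[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                # x coordinate where this edge crosses the ray
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

# Example: a square touch area calibrated from four points
area = TouchArea("engine_hood", [(0, 0), (10, 0), (10, 10), (0, 10)])
print(area.contains(5, 5))   # True: inside the calibrated area
print(area.contains(15, 5))  # False: outside
```

The same test works for any predetermined number of points, so areas outlined by more than four calibration points need no special handling.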
[0025] FIG. 3 is a block diagram illustrating an embodiment of an
apparatus 300 of interactive containers. In this embodiment, the
apparatus 300 has two containers 302A and 302B, where container
302A has two menus/attributes 304A and 304B. Container 302B has a
single menu/attribute 304C. Each of the menus/attributes 304A,
304B and 304C has a WISP/Rules 306A, 306B and 306C, respectively.
Each of the WISP/Rules 306A, 306B and 306C has assets 308A, 308B
and 308C, respectively.
[0026] A single interactive apparatus 300 may include any number of
containers that may or may not communicate and/or interact. As
such, in one embodiment, interacting with one container may cause a
change in another container. Containers create an interactive
experience using the menus/attributes and WISP/rules relating to
assets. The menu/attributes are options at an instance, which may
be a default instance or options that come about due to an
interaction or touch on or around a menu item or attribute
presented. A container may contain any number of menus/attributes
304, which may interact or stand alone. Attributes may be audio,
video, image, change in display, etc. WISP/rules are the
interactive active mask over a touch area that triggers a menu or
attribute due to a pre-determined activity. Assets may be
pre-determined object or person, printouts of objects, displayed
items, images, video, an identified object or person, and the
like.
[0027] In one embodiment, a weighted average may be used. In such
an embodiment, when a new object/asset is added to a container, the
weighted average method adds the object/asset incrementally, so
that the weight of the new item relative to the whole picture
increases over time. Such a method ensures that the item is truly
added, allows for real-time reaction to change in a container, and
allows for a realistic change over time.
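A minimal sketch of this weighted-average behavior, assuming a per-frame exponential blend; the alpha value and function name are illustrative, not from the disclosure.

```python
# Sketch of the weighted-average idea: a newly added object/asset is
# blended into the container's baseline a little more each frame, so
# its weight grows over time. Alpha and the function name are
# illustrative assumptions.

def blend_frame(baseline, frame, alpha=0.1):
    # Each pixel moves a fraction alpha toward the new frame's value.
    return [(1 - alpha) * b + alpha * f for b, f in zip(baseline, frame)]

# A new object raises the depth reading of the middle pixel from 0 to 100.
baseline = [0.0, 0.0, 0.0]
frame = [0.0, 100.0, 0.0]
for _ in range(30):  # over many frames the object joins the baseline
    baseline = blend_frame(baseline, frame)
print(baseline[1])  # approaches 100 as the item is "truly added"
```

Because the blend is gradual, a transient object (e.g., a hand passing through) never accumulates enough weight to alter the baseline, which is what makes real-time reaction possible.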
[0028] FIG. 4 is a block diagram illustrating an embodiment of an
interactive system 400 relating to at least one interactive
container. In this embodiment, the system 400 includes a processor
402, memory/storage medium 404, a calibrator 406, a touch detector
408, a touch listener 410, an analytics module 412 and an I/O 414.
The memory 404 includes deployed data 404A, touch area data 404B,
analytics data 404C, and the like.
[0029] Even though all these items are shown to be in the same
system 400, they may be distributed across multiple systems that
may or may not be in the same location. In one embodiment, a cloud
may communicate with the system 400 to deploy items, such as the
deployed data 404A, from remote locations.
[0030] The touch detector 408 detects a touch and its related
information, which includes identifying coordinates related to a
touch area. In one embodiment, the touch detector 408 may
distinguish between a hover and a touch, where the distinction
relates to the z axis of the touch. If the hand or object is closer
to the object, or further from a camera or system, then it is a
touch. If the hand or object is further from the object, or closer
to a camera or system, then it is a hover. In one embodiment, the
touch detector may identify different types of touch based on
thresholds, such as time, proximity, color of the object doing the
touching, a sequence of touches, etc. The touch detector 408
may refine the recognition of a touch by performing the method of
FIG. 5, which is described below. In another
embodiment, the touch detector may crop areas to where only areas
of interest are detected, resulting in a touch detection that is
more accurate and more efficient.
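The z-axis hover/touch distinction described above can be sketched as a simple depth comparison; the millimeter values and margin below are illustrative assumptions.

```python
# Sketch of the hover/touch distinction from the z axis: a hand near
# the surface (far from an overhead depth camera) is a touch, while a
# hand held above the surface is a hover. The threshold is an
# illustrative assumption.

def classify(z_camera_to_hand, z_camera_to_surface, touch_margin=15):
    # Distances in millimeters from the camera along the z axis.
    gap = z_camera_to_surface - z_camera_to_hand
    if gap <= touch_margin:
        return "touch"   # hand essentially at the surface
    return "hover"       # hand held above the surface

print(classify(990, 1000))   # hand 10 mm above the surface -> touch
print(classify(900, 1000))   # hand 100 mm above the surface -> hover
```

Additional thresholds (touch duration, object color, touch sequences) could be layered on the same comparison to yield the different touch types the paragraph mentions.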
[0031] The touch listener 410 reads the coordinates determined by
the touch detector and determines if the touch occurred in a touch
area identified during calibration. The touch listener 410
determines the type of reaction, or no reaction, to take place based
on the deployed data, the location of the touch, and sometimes the
type of touch. In some cases, the touch listener 410 may facilitate
a zoom in/out or a drag based on the determination of the type of
touch. The touch listener may determine that there are no persons
and/or no touch for a predetermined time, or sense a person walking
away, and initiate a default display or a predetermined
activity.
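A minimal sketch of the listener's lookup from touch coordinates to a deployed activity; the area bounds, names, and activities below are hypothetical examples.

```python
# Sketch of the touch listener: map a detected coordinate to a
# calibrated touch area and return the activity deployed for it.
# Bounds, names, and activities are hypothetical examples.

deployed = {
    "alternator": {"bounds": (0, 0, 50, 50),   "activity": "play_engine_sound"},
    "battery":    {"bounds": (60, 0, 120, 50), "activity": "show_info_panel"},
}

def on_touch(x, y):
    # Check each calibrated touch area's rectangle for the coordinate.
    for name, entry in deployed.items():
        x1, y1, x2, y2 = entry["bounds"]
        if x1 <= x <= x2 and y1 <= y <= y2:
            return entry["activity"]
    return None  # no reaction: touch fell outside every touch area

print(on_touch(10, 10))    # -> play_engine_sound
print(on_touch(70, 10))    # -> show_info_panel
print(on_touch(200, 200))  # -> None
```

Returning `None` corresponds to the "no reaction" case; a fuller listener would also consult the touch type before choosing between, say, a zoom and a drag.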
[0032] The analytics module 412 is designed to collect data and/or
measure characteristics related to a predetermined object, person,
movement, lack of movement, etc. For example, the analytics module
412 may identify a person, follow a person's path, follow a
person's selections, track the duration of a touch or the lack of
touch, list a person's activity, and determine gender, personal
characteristics, traffic, dwell time, etc.
[0033] FIG. 5 is an embodiment illustrating a flow diagram of a
method 500 for refining touch recognition. The method 500 starts at
step 502 and proceeds to step 504. At step 504 the method 500
creates a baseline depth area using multi-frames from a depth
camera. At step 506, the method 500 creates a moving average of a
real-time area from the depth camera. At step 508, the method 500
determines the difference between the baseline and the moving
average. At step 510, the method 500 determines if the difference
is less than a pre-determined threshold. If the difference is
greater than the threshold, the method 500 determines, at step 514,
that the event is a touch. If the difference is less than the
threshold, the method 500 proceeds to step 512 and examines the
surrounding pixels to determine whether the event is a touch or
noise. If the surrounding pixels have the same z-axis depth, the
event is a touch, and the method 500 proceeds to step 514. If the
surrounding pixels have different z-axis depths, the method 500
proceeds to step 516, where it determines that the event is not a
touch. In one embodiment, the radius of the surrounding pixels
changes based on the depth of the camera. From steps 514 and 516,
the method 500 proceeds to step 518, where it ends.
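The refinement of FIG. 5 can be sketched as follows. For brevity the moving average of the real-time area is represented by a single live frame, and the grid values, threshold, and radius are illustrative assumptions.

```python
# Sketch of FIG. 5's touch refinement. A baseline is built from several
# depth frames; a live reading is compared against it, and surrounding
# pixels separate a touch from sensor noise. Values are illustrative.

def baseline_depth(frames):
    # Average several depth frames pixel-by-pixel to form the baseline.
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

def is_touch(baseline, live, r, c, threshold=20, radius=1):
    diff = abs(baseline[r][c] - live[r][c])
    if diff >= threshold:
        return True   # difference exceeds the threshold: a touch
    if diff == 0:
        return False  # no change at all (guard added for this sketch)
    # Below the threshold: a real touch shifts the surrounding pixels
    # to the same z-axis depth, while noise leaves them scattered.
    rows, cols = len(live), len(live[0])
    neighbors = [live[rr][cc]
                 for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                 for cc in range(max(0, c - radius), min(cols, c + radius + 1))
                 if (rr, cc) != (r, c)]
    return all(n == live[r][c] for n in neighbors)

base = baseline_depth([[[100] * 3 for _ in range(3)] for _ in range(4)])

touch_frame = [[85] * 3 for _ in range(3)]  # whole patch shifted alike
noise_frame = [[100] * 3 for _ in range(3)]
noise_frame[1][1] = 85                      # lone pixel changed

print(is_touch(base, touch_frame, 1, 1))  # True: neighbors share the depth
print(is_touch(base, noise_frame, 1, 1))  # False: isolated change is noise
```

A depth-dependent `radius`, as the description suggests, would simply scale the neighborhood examined in the second step.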
[0034] FIG. 6A-C are diagrams depicting an embodiment of an
interactive container. In FIG. 6A, a container is shown that
displays a car engine with its mechanics and electronics. In FIG.
6B, a touch is detected activating a touch area. In FIG. 6C, the
touch results in the display of information related to the touch
area. In other embodiments, such a touch may result in an engine
sound, a menu display, a video activation, etc.
[0035] It will be appreciated by those skilled in the art that
changes could be made to the embodiments described above without
departing from the broad inventive concept. It is understood,
therefore, that this disclosure is not limited to the particular
embodiments herein, but it is intended to cover modifications
within the spirit and scope of the present disclosure as defined by
the appended claims.
* * * * *