U.S. patent application number 15/239272 was published by the patent office on 2016-12-08 for an intentionally annoying toy with an interactive shut-off mechanism.
The applicant listed for this patent is Andrew Breckman. The invention is credited to Andrew Breckman.
Application Number | 15/239272
Publication Number | 20160354701
Family ID | 57451392
Publication Date | 2016-12-08
United States Patent Application | 20160354701
Kind Code | A1
Breckman; Andrew | December 8, 2016
INTENTIONALLY ANNOYING TOY WITH AN INTERACTIVE SHUT-OFF
MECHANISM
Abstract
The sound emitting device in at least one configuration is
intended to emit a continuous sound upon activation that requires a
code be manually entered or requires the user to complete a series
of challenges to silence the device. The sound emitting device has
multiple power sources so the removal of one of the power sources
will not silence the device. In one embodiment, the sound emitting
device has a display embedded into the device, which may display
code information and provide further user interaction.
Additionally, a method of silencing a sound emitting device is
disclosed.
Inventors: | Breckman; Andrew (Madison, NJ)
Applicant: | Breckman; Andrew (Madison, NJ, US)
Family ID: | 57451392
Appl. No.: | 15/239272
Filed: | August 17, 2016
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
14317052 | Jun 27, 2014 |
15239272 | |
61842166 | Jul 2, 2013 |
Current U.S. Class: | 1/1
Current CPC Class: | A63H 3/28 20130101; A63H 2200/00 20130101; A63H 3/006 20130101
International Class: | A63H 3/28 20060101 A63H003/28; A63H 5/00 20060101 A63H005/00; A63H 33/26 20060101 A63H033/26; A63H 3/00 20060101 A63H003/00
Claims
1. A sound emitting device comprising: a processor coupled to a
non-transitory computer readable memory, wherein the non-transitory
computer readable memory has at least one code stored thereon; a
display having at least two touch based regions, wherein each of
the at least two touch based regions corresponds to a first light
source and a first tone and a second light source and a second tone
respectively; at least one power source; and at least one speaker,
the at least one speaker being configured to generate a continuous
sound effect upon activation of the sound emitting device, wherein
the continuous sound effect ceases when the at least one code is
entered via the display.
2. The sound emitting device of claim 1 wherein the first tone and
the second tone each comprise a singular tone, multiple tones, a
dialogue, or a combination thereof.
3. The sound emitting device of claim 1 wherein the sound emitting
device is a doll.
4. The sound emitting device of claim 1 wherein the at least one
power source comprises at least one dry cell battery, at least one
rechargeable battery, or a combination thereof.
5. The sound emitting device of claim 1 wherein the at least one
code comprises contact with at least one of the at least two touch
based regions.
6. The sound emitting device of claim 1 wherein the at least one
code comprises contact with each of the at least two touch based
regions.
7. The sound emitting device of claim 1 wherein there are two power
sources.
8. A sound emitting doll comprising: a processor contained within
the sound emitting doll, the processor being coupled to a
non-transitory computer readable memory having machine readable
instructions stored thereon that when executed by the processor
cause the processor to perform the steps of, producing a continuous
sound from at least one speaker associated with the sound emitting
doll, and ceasing to produce the continuous sound upon at least one
code being entered via a display, wherein the display has four
touch based regions, each of the four touch based regions
corresponding to a separate light and a separate tone, and wherein
the code comprises contact with at least one of the at least two
touch based regions.
9. The doll of claim 8 wherein the at least one code comprises
contact with each of the at least two touch based regions.
10. The doll of claim 8 wherein the continuous sound is at least
one tone or a dialogue.
11. The doll of claim 8 wherein the display is an electronic touch
based display.
12. The doll of claim 8 wherein the display is a manual touch based
display.
13. A method of silencing a sound emitting device, the method
comprising the steps of: providing the sound emitting device,
wherein the sound emitting device comprises, a processor coupled to
a non-transitory computer readable memory, wherein the
non-transitory computer readable memory has at least one code
stored thereon; a display having at least two touch based regions,
wherein each of the at least two touch based regions corresponds to
a first light source and a first tone and a second light source and
a second tone respectively; at least one power source; and at least
one speaker, the at least one speaker being configured to generate
a continuous sound effect upon activation of the sound emitting
device, wherein the continuous sound effect ceases when the at
least one code is entered via the display; activating the sound
emitting device causing a continuous sound to emanate from the
sound emitting device; a user entering, via the display, a touch
based code, wherein if the touch based code is correctly entered,
then the continuous sound ceases, and wherein if the touch based
code is incorrectly entered, then the continuous sound
continues.
14. The method of claim 13 further comprising the step of:
prompting, by the sound emitting device, the touch based code to be
entered by the user.
Description
CLAIM OF PRIORITY
[0001] This application claims the priority of U.S. application
Ser. No. 14/317,052 filed on Jun. 27, 2014, which claims priority
of U.S. Application 61/842,166 filed on Jul. 2, 2013, the contents
of both of which are fully incorporated herein by reference in their
entirety.
FIELD OF THE EMBODIMENTS
[0002] The field of the embodiments of the present invention
relates generally to sound emitting devices and toys, namely
dolls that have auditory and visual components, and methods of
ceasing said auditory and/or visual components. In particular, the
present invention and its embodiments make a continuous noise upon
activation and only a properly entered code, sequence, etc. can
quiet the sound emitting device.
BACKGROUND OF THE EMBODIMENTS
[0003] Toys and games have long been a staple of society as a way
to entertain the masses. They have evolved from the most basic of
games, such as marbles, to highly complex machines such as today's
video game consoles. Nowadays, the toy and entertainment market is
a multi-billion dollar business with no signs of slowing down.
[0004] The integration of printed circuit boards and processors
into what used to be the simplest of toys has fundamentally changed
the way children now play and interact with their toys. Dolls used
to simply be just a doll. Now, however, they can move, talk, walk,
and respond to external stimuli. Some dolls have "needs" controlled
by preset programs to let an individual know when they need to be
fed or changed. Once the appropriate action has been taken, the
doll will then react accordingly. For example, a crying doll may
signal the owner to "feed" the doll, and the owner "feeding" the
doll will cause the crying to cease. These dolls have become quite
popular among children and more advanced versions are used in
schools to teach responsibility to adolescents.
[0005] However, this opens up a market for toys that do not quit
talking or making noise simply because you want them to, or because
you performed a simple action. Such a toy would require much more
substantive action to be taken and may be quieted only by the
person who knows or can complete the secret mechanism or code
necessary. The purpose is to playfully annoy another by leaving
them with the task of quieting the device. The present invention
and its embodiments meet and exceed these objectives.
Review of Related Technology
[0006] U.S. Pat. No. 8,414,346 pertains to an infant simulator
capable of emulating the care requirements of an infant and
recording the quality of care and responsiveness of a person caring
for the infant simulator and/or signaling the person caring for the
infant simulator when care is required. The infant simulator is
capable of sensing the unacceptable environmental conditions of
exposure to direct sunlight and exposure to temperature extremes
and to which the infant simulator is subjected. The infant
simulator is also programmed with the ancillary features of
multiple behavior modes based upon the historic level of care
experienced by the infant, and/or the health of the infant, and
perceptibly different demand and distress signals for each type of
environmental event.
[0007] U.S. RE36,776 pertains to an infant care simulation system
for use in teaching individuals the realities, responsibilities and
constraints inherent in caring for young babies. The system also
demonstrates the special problems of drug-dependent babies.
Basically, the system includes a doll having the shape and weight
of a young baby and accessories of the sort used with such a baby.
The doll and accessories are assigned to an individual for an
extended period such as several days. A sound system and electronic
circuitry are included within the doll to generate sounds
simulating a baby crying at selected intervals for selected time
periods. A spring loaded key or other manual switch is provided so
that the individual can turn off the crying sound by holding the
key in an off position. Preferably the key is secured to the
assigned individual in a way preventing it being given to another
person. Indicators showing rough handling, improper positioning of
the doll, periods before a response is made to a crying signal,
etc. are provided. Mechanisms demonstrating the characteristics of
a drug-dependent baby are included. The overall system also
includes accessories, such as car seats, strollers and diaper bags
that are to be taken everywhere with the doll.
[0008] U.S. Patent Application 2008/0176481 pertains to an
interactive baby doll. The interactive baby doll has a head, a
body, and a display on the surface of the body. The display
controllably displays a plurality of different images. Each image
depicts an action to be taken with the baby doll. A plurality of
sensors are located in the head or body. The sensors detect when a
depicted action is taken with the baby doll, and the subsequent
display or sounds of the baby doll depends on whether or not the
sensors sense that the action depicted by the image displayed on
the display is taken within a period of time after the image is
displayed on the display.
[0009] Various devices are known in the art. However, their
structure and means of operation are substantially different from
the present disclosure. The other inventions fail to solve all the
problems taught by the present disclosure. The present invention
provides for toys, namely, dolls that require a particular code,
challenge, or mechanism to cause the toy to be silenced. At least
one embodiment of this invention is presented in the drawings below
and will be described in more detail herein.
SUMMARY OF THE EMBODIMENTS
[0010] The present invention and its embodiments describe and
teach, in at least one embodiment, a sound emitting device having
a processor contained within the sound emitting device, the
processor containing a code wherein the code is numerical or
orientation based; a power source contained within the sound
emitting device; and at least one speaker contained within the
sound emitting device, the at least one speaker generates a
continuous sound effect upon activation, and the sound emitting
device may have at least one digital gyroscope contained
therein.
[0011] Preferably, the sound emitting device is a doll, which emits
a tone or a dialogue upon activation. This tone or dialogue is
emitted continuously until a particular code is correctly entered.
In this instance, the code may be spatially (orientation) based and
the code may be completed by orienting the appendages in a
particular configuration.
[0012] The sound emitting device may further have at least two
touch based sensors, at least two light emitting diodes (LEDs), and
a touch based code. Here, a first tone would correspond to a first
sensor and a first light, whereas a second tone corresponds to a
second sensor and a second light, and so forth. This touch based code
is preferably a pattern of tones and lights that must be correctly
repeated by the user after first displayed by the sound emitting
device. The sound emitting device is powered by at least one dry
cell battery or at least one rechargeable battery.
[0013] In another embodiment, there is a doll having a processor
contained within the doll, the processor containing a code wherein
the code is a string of digits; at least two power sources
contained within the doll; at least one speaker contained within
the doll, wherein the at least one speaker generates a continuous
sound effect upon activation; and a liquid crystal display (LCD),
the liquid crystal display having a translucent covering. The doll
may further comprise at least one digital gyroscope. Upon
activation, the code is displayed on the embedded display. The doll
emits a tone or dialogue until a plurality of challenges linked to
the corresponding code have been completed, upon which the doll
ceases to continue making noise. The doll may be powered by at
least one dry cell battery and at least one rechargeable
battery.
[0014] Further, a method of silencing a sound emitting device is
disclosed having the steps of: receiving a string of digits, the
string of digits being in numerical length of 1-7 digits; accessing
a mobile application associated with the sound emitting device;
inputting the string of digits into the mobile application;
receiving a plurality of challenges; and completing the plurality
of challenges to silence the sound emitting device. The plurality
of challenges are designed to be level, turn, or pattern based, or
any combination of the aforementioned challenge types. The number,
type and difficulty of challenges assigned to a user are designed
to be random.
[0015] In another embodiment of the present invention there is a
sound emitting device comprising: a processor coupled to a
non-transitory computer readable memory, wherein the non-transitory
computer readable memory has at least one code stored thereon; a
display having at least two touch based regions, wherein each of
the at least two touch based regions corresponds to a first light
source and a first tone and a second light source and a second tone
respectively; at least one power source; and at least one speaker,
the at least one speaker being configured to generate a continuous
sound effect upon activation of the sound emitting device, wherein
the continuous sound effect ceases when the at least one code is
entered via the display.
[0016] In yet another embodiment of the present invention there is
a sound emitting doll comprising: a processor contained within the
sound emitting doll, the processor being coupled to a
non-transitory computer readable memory having machine readable
instructions stored thereon that when executed by the processor
cause the processor to perform the steps of, producing a continuous
sound from at least one speaker associated with the sound emitting
doll, and ceasing to produce the continuous sound upon at least one
code being entered via a display, wherein the display has four
touch based regions, each of the four touch based regions
corresponding to a separate light and a separate tone, and wherein
the code comprises contact with at least one of the at least two
touch based regions.
[0017] In yet another embodiment of the present invention there is
a method of silencing a sound emitting device, the method
comprising the steps of: providing the sound emitting device,
wherein the sound emitting device comprises, a processor coupled to
a non-transitory computer readable memory, wherein the
non-transitory computer readable memory has at least one code
stored thereon; a display having at least two touch based regions,
wherein each of the at least two touch based regions corresponds to
a first light source and a first tone and a second light source and
a second tone respectively; at least one power source; and at least
one speaker, the at least one speaker being configured to generate
a continuous sound effect upon activation of the sound emitting
device, wherein the continuous sound effect ceases when the at
least one code is entered via the display; activating the sound
emitting device causing a continuous sound to emanate from the
sound emitting device; a user entering, via the display, a touch
based code, wherein if the touch based code is correctly entered,
then the continuous sound ceases, and wherein if the touch based
code is incorrectly entered, then the continuous sound
continues.
[0018] In general, the present invention succeeds in conferring the
following, and others not mentioned, benefits and objectives.
[0019] It is an object of the present invention to provide a sound
emitting device that requires substantive user interaction.
[0020] It is an object of the present invention to provide a sound
emitting device that emits a continuous and annoying tone or
dialogue.
[0021] It is an object of the present invention to provide a sound
emitting device that has multiple power sources and cannot be
silenced by removing one of the power sources.
[0022] It is an object of the present invention to provide a sound
emitting device that can only be silenced by the manual input of a
particular code or completion of a series of challenges.
[0023] It is an object of the present invention to provide a doll
that makes a continuous sound, and requires user interaction to
silence the sound.
[0024] It is an object of the present invention to provide a doll
that requires a user to interact with a mobile application.
[0025] It is an object of the present invention to provide a doll
that gives commands to a user.
[0026] It is an object of the present invention to provide a method
for silencing the sound emitting device or doll.
BRIEF DESCRIPTION OF THE DRAWINGS
[0027] FIG. 1 is a frontal view of a preferred embodiment of the
present invention.
[0028] FIG. 2 is a frontal view of an alternate embodiment of the
present invention.
[0029] FIG. 3 is a frontal view of an alternate embodiment of the
present invention.
[0030] FIG. 4 is a frontal view of an alternate embodiment of the
present invention.
[0031] FIG. 5 is a flow chart illustrating a method of silencing an
embodiment of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0032] The preferred embodiments of the present invention will now
be described with reference to the drawings. Identical elements in
the various figures are identified, as far as possible, with the
same reference numerals.
[0033] Reference will now be made in detail to embodiments of the
present invention. Such embodiments are provided by way of
explanation of the present invention, which is not intended to be
limited thereto. In fact, those of ordinary skill in the art may
appreciate upon reading the present specification and viewing the
present drawings that various modifications and variations can be
made thereto without deviating from the innovative concepts of the
invention.
[0034] FIG. 1 illustrates a preferred embodiment of the present
invention. The sound emitting device 100 preferably takes the form
of a doll. However, one will appreciate that the sound emitting
device 100 may take any number of forms including household items,
personal items, and electronics. The sound emitting device 100
preferably has a head 101, two arms 102, a torso 108, and two legs
104. Each of the appendages (including the head 101) can move or
twist independently of one another to change the position of the
body as need be. The electrical components are disposed within the
device. The internal components may include a printed circuit board
103, processor 109, speaker(s) 114, LEDs 112, power sources 107,
wiring (not shown), and digital gyroscope(s) 111. In some
instances, there are at least two power sources 107 such that the sound
emitting device 100 can function with only one of the two power
sources 107 coupled to the sound emitting device 100, thereby
preventing a user from attempting to silence the device by removing
a power source 107.
[0035] The sound emitting device 100 begins to emit a continuous
sound through at least one of the speaker(s) 114 upon activation.
Activation of the sound emitting device 100 may be prompted by a
number of means including depressing any of the touch based sensors
106, orienting a limb in a particular direction, by changing the
orientation of the sound emitting device 100 as a whole, or by a
start switch (not shown). Once activated, the only way to silence
the sound emitting device 100 is to complete the programmed
step(s). This step or steps can vary depending on the preinstalled
settings of each sound emitting device 100.
[0036] The sound emitting device 100 may be silenced by changing
the orientation of a limb or limbs. Each of the limbs is preferably
operably coupled to a three axis digital gyroscope 111. In some
instances, it may not be necessary to include
the gyroscopes 111 in each of the limbs and various configurations
employing the gyroscopes 111 may exist. The digital gyroscopes 111
send spatial orientation readouts to the processor 109 which
processes the information. When the correct orientation is
achieved, a signal is sent from the processor 109 to silence the
speakers 114 and consequently the sound emitting device 100. As
stated, the orientation necessary may include more than one limb.
For example, the right arm may need to be rotated upwards about
90° and the left leg rotated outwards about 90°. Upon completion,
the sound emitting device 100 will silence itself.
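The orientation-based silencing check described above can be sketched as follows. This is an illustrative sketch only; the limb names, target angles, and tolerance are assumptions not taken from the specification, which provides no code.

```python
# Illustrative sketch of the orientation-based silencing check.
# The limb names, target angles, and the 10-degree tolerance are
# assumptions; the specification provides no code.

TARGET_POSE = {"right_arm": 90.0, "left_leg": 90.0}  # degrees of rotation
TOLERANCE = 10.0  # allowable deviation per limb, in degrees

def pose_matches(gyro_readings, target=TARGET_POSE, tol=TOLERANCE):
    """Return True when every limb in the target pose is within tolerance."""
    return all(
        abs(gyro_readings.get(limb, 0.0) - angle) <= tol
        for limb, angle in target.items()
    )

def process_readout(gyro_readings, silence_speakers):
    """Called by the processor 109 on each gyroscope 111 readout."""
    if pose_matches(gyro_readings):
        silence_speakers()  # correct orientation achieved: stop the sound
        return True
    return False
```

A tolerance band is assumed because a child cannot position a limb at exactly 90°; the specification's "about 90.degree." suggests some such margin.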
[0037] Alternatively, the sound emitting device 100 may be silenced
by repeating a particular sequence presented to the user. In this
case, the LEDs 112 located in each of the appendages will flash.
Accompanying the flash may be a sound or tone produced by a speaker
114 located in each appendage respectively. The user then interacts
with the sound emitting device 100 by repeating the light and sound
sequence by depressing the touch based sensors 106 located in the
corresponding appendages. The length of the sequence will vary, and
the sequence may get sequentially longer as the user plays along.
Additionally, the sequences and response time(s) for the user may
be timed. That is, if the user were to take too long on any one
move or on the sequence as a whole, the user would fail. For
example, the first pattern may be one light and one sound and the
final pattern could be ten lights and ten sounds. If the user were
to fail by employing the wrong sensor or taking too much time
between depressing the sensors, then the pattern resets and the
user starts at the first sequence.
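The repeat-the-sequence game described in this paragraph can be sketched as follows. The appendage names, the 3-second per-press limit, and the length-10 winning pattern are illustrative assumptions; the specification only states that the sequence grows, may be timed, and resets on failure.

```python
import random
import time

# Illustrative sketch of the grow-by-one light/sound sequence game.
# Appendage names, the per-press timeout, and the winning length are
# assumptions, not taken from the specification.

APPENDAGES = ["head", "right_arm", "left_arm", "right_leg", "left_leg"]

def play_round(sequence, get_press, per_press_timeout=3.0):
    """Return True if the user repeats `sequence` in order, with each
    press arriving within `per_press_timeout` seconds."""
    for expected in sequence:
        start = time.monotonic()
        pressed = get_press()  # blocks until a touch based sensor 106 fires
        if pressed != expected or time.monotonic() - start > per_press_timeout:
            return False  # wrong sensor or too slow
    return True

def run_game(get_press, flash, max_length=10):
    """Grow the pattern by one appendage per round; any mistake resets
    the pattern and the user starts again at a length-1 sequence."""
    sequence = []
    while len(sequence) < max_length:
        sequence.append(random.choice(APPENDAGES))
        for item in sequence:
            flash(item)  # flash the LED 112 and play the accompanying tone
        if not play_round(sequence, get_press):
            sequence = []  # failure: restart from the first sequence
    return True  # full pattern repeated: silence the device
```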
[0038] Yet another silencing process may involve the user following
directions given to them by the sound emitting device 100. In this
instance, the touch based sensors 106 and gyroscopes 111 would
provide the information to the processor 109 to confirm the
directives are being followed correctly. Thus, the device may
instruct the user to "squeeze the right hand" or "move the right
leg." Following these directives correctly adds to the sequence as
stated above. By making an incorrect move, the process restarts at
the beginning. The sound emitting device 100 may also include a
command before the directive such as, "I ask you to move my right
leg." Only directives employing that particular "I ask" or
similarly phrased commands are valid and contribute to the correct
sequence. Thus, if a user follows a directive without the proper
command then the user has failed and the sequence restarts.
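The command-prefix rule above can be sketched as a small state update. The exact trigger phrase handling and the progress-counter model are illustrative assumptions.

```python
# Illustrative sketch of the command-prefix rule: only directives
# preceded by the "I ask" command count toward the sequence. The
# phrase matching and progress model are assumptions.

TRIGGER = "i ask you to"

def directive_is_valid(spoken_line):
    """A directive counts only when prefixed by the trigger phrase."""
    return spoken_line.strip().lower().startswith(TRIGGER)

def handle_user_action(spoken_line, user_followed, progress):
    """Return the updated sequence progress for one spoken directive.

    Following a directive that lacks the trigger phrase is a failure
    and restarts the sequence (progress returns to zero)."""
    if directive_is_valid(spoken_line):
        return progress + 1 if user_followed else 0
    return 0 if user_followed else progress
```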
[0039] FIG. 2 shows a frontal view of an alternate embodiment of
the present invention. The embodiment is substantially similar to
and contains the elements and functionality described in FIG. 1.
The inclusion of a display, such as a liquid crystal display (LCD),
210 provides a more interactive experience. For protection
purposes, the display 210 and/or directional pad 214 is preferably
covered with a translucent covering 212. The display 210 may be
generally rectangular in shape as shown. However, the display 210
may be a number of shapes including triangular or circular. The
location of the display 210 may vary depending on the particular
doll to a location best suited for that doll. The translucent
covering 212 may be a polymer or plastic of an appropriate strength
and clarity as to protect the display 210 underneath.
[0040] Here, the display 210 is capable of expressing various
characters and numerals. In some instances, the display 210 may be
of higher quality and capable of displaying more complex graphics,
such as images. Once the sound emitting device 100 is activated,
the tone or dialogue begins to emanate from the speakers 114.
Depending on the programming, one of a number of different actions
can occur. Upon activation, a string of numerals will appear on the
display 210. This string of numerals aids the user in deactivating
the sound emitting device 100. The numerals are a code that
determines a random number and random type of challenge(s). The
successful completion of these challenges is what will silence the
sound emitting device 100.
[0041] Alternatively, the sound emitting device 100 may prompt the
user with a manual code comprising manipulating a number of the
installed sensors on the sound emitting device 100. In this
instance, the sound emitting device 100 asks a user to guess the
code. The code is a combination of moving appendages and/or
squeezing the touch based sensor(s) 106. After the user inputs a
specified amount of actions, the display 210 will display the
number of correct actions (i.e. 3 of 5 correct). The user then
manipulates the sound emitting device 100 again to receive another
readout. This process continues until the code is completed in a
trial and error scenario. In some instances, the sound emitting
device 100 may give a readout on the display 210 wherein the
sequence of the code guessed correctly is identified (i.e. 3 of 5
correct, actions 1, 3, and 4 are correct). As previously stated,
the user must again achieve completion of the code in this scenario
as well.
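The trial-and-error readout described above ("3 of 5 correct, actions 1, 3, and 4 are correct") can be sketched as follows. Codes are modeled as sequences of action identifiers; the identifiers and readout wording are illustrative assumptions.

```python
# Illustrative sketch of the trial-and-error feedback readout.
# Action identifiers and readout wording are assumptions.

def score_guess(secret, guess):
    """Return (count, positions): the number of actions guessed
    correctly and their 1-based positions in the code."""
    positions = [i + 1 for i, (s, g) in enumerate(zip(secret, guess)) if s == g]
    return len(positions), positions

def format_readout(secret, guess, show_positions=False):
    """Build the display 210 readout for one guess; the extended form
    also identifies which actions in the sequence were correct."""
    count, positions = score_guess(secret, guess)
    readout = f"{count} of {len(secret)} correct"
    if show_positions and positions:
        readout += ", actions " + ", ".join(map(str, positions)) + " are correct"
    return readout
```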
[0042] Further, upon activation, the sound emitting device 100 may
prompt the user, via the display 210, to complete a game or series
of challenges presented on the display 210. Here, the user may
interact with the sound emitting device 100 via the directional pad
214. The directional pad 214 may comprise a number of interactive
buttons including selection buttons and directional buttons. The
display 210 may display, for example, a video game and the user
controls the actions taken on the display 210 by interacting with
the directional pad 214.
[0043] FIG. 3 shows a frontal view of another alternate embodiment
of the present invention. This embodiment is similar to that of
FIG. 2 but does not contain the display 210, translucent covering
212, and the associated functionality. In this embodiment, the
sound emitting device 100 may be dressed in Western themed attire.
The sound emitting device 100 again has a head 101, two arms 102, a
torso 108, and two legs 104. This embodiment also contains touch
based sensors 106, speakers 114, and LEDs 112 and may contain other
components as previously described in FIG. 1. However, whereas, in
FIG. 1 the code is orientation based and in FIG. 2 may require one
to interact with a mobile application (see FIG. 4), the code in
this embodiment is preferably sung by the sound emitting device
100.
[0044] Once the user presses the start switch or other means of
activating the sound emitting device 100, the sound emitting device
100 begins to play a song or other musical arrangement. The song
may vary in accordance with the particular dress theme or character
of the sound emitting device 100, and in this instance the song may
be similar to a square dance routine. The song may require the user
to interact with the LEDs 112 or touch based sensors 106. For
example, the sound emitting device 100 may sing a command such as
"squeeze my left hand and spin me around." The user can then
depress the touch based sensor 106 in the left hand, and the
spinning of the sound emitting device 100 can be monitored by the
digital gyroscope 111 (see FIG. 1).
[0045] The sound emitting device 100 may try to mislead or confuse
the user. This may be done with commands such as "squeeze my right
hand." In most instances, especially when under pressure or by way
of a rapid fire of commands, a user may squeeze the left hand (the
user's right when viewed facing sound emitting device 100) instead
of the device's 100 right hand. This would cause a reset of the
song, and the user would have to start again from the beginning.
Additionally, the device may increase or decrease the frequency of
the directives in the song. These changes may be in response to a
user getting a particular number of commands correct or incorrect
in a row. The sound emitting device 100 may embody a number of
other alternative characters such as pop singers, rock stars, and
sports fans.
[0046] Referring now to FIG. 4, the sound emitting device 100
generally has a head 101, two arms 102, a torso 108, and two legs
104. Each of the appendages, including the head 101, can move or
twist independently of one another to change the position of the
body as need be. Such a layout is only intended to be
representative and other iterations and combinations of appendages
and parts may be contained under the purview of this invention.
Further, the sound emitting device may contain a display 310,
speaker or sound emitting mechanism 114, first region 120, second
region 122, third region 124, fourth region 126, and selection
buttons 128.
[0047] In such an embodiment, the sound emitting device 100 may be
silenced by repeating a particular sequence presented to the user
by the sound emitting device 100 via the display 310. The display
310 may be an electronic display (e.g. touch screen) or may be a
manual display (e.g. area with depressible buttons). The display
may comprise light emitting diodes or another light source
configured to generate symbols, characters, icons, images,
patterns, colors, and the like and varying combinations
thereof.
[0048] The display 310 is preferably distinguished by regions, that
is, contacting one region generates one response by the sound
emitting device 100 and contacting another region generates another
response. Here, the display 310 is shown having a first region 120,
second region 122, third region 124, and fourth region 126.
However, the display 310 may have anywhere between one and twenty
separate regions.
[0049] In addition, the display 310 may have selection buttons 128
that may control parameters associated with the display 310 or the
sound emitting device 100 as a whole. One may be able to readily
manipulate such parameters or have to first solve the code in order
to have access to the selection buttons 128.
[0050] Each region may be programmed to have its own color or
combination of colors. Further, each region may be programmed to be
coupled with one or more sounds or tones produced by speaker(s)
114. For example, contacting or depressing a button in one region
will generate a color or a tone. In some instances, a region may
continually show a particular color to define the limits of the
region, and depressing or contacting the region will then generate
a tone or other response by the sound emitting device 100.
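The per-region color and tone coupling can be sketched as a small mapping. The particular colors and tone frequencies are illustrative assumptions; the specification only states that each region may be programmed with its own color(s) and sound(s).

```python
# Illustrative sketch of the per-region color/tone coupling on the
# display 310. Colors and frequencies are assumptions.

REGIONS = {
    "first": {"color": "red", "tone_hz": 262},
    "second": {"color": "green", "tone_hz": 330},
    "third": {"color": "blue", "tone_hz": 392},
    "fourth": {"color": "yellow", "tone_hz": 523},
}

def on_region_contact(region, light_up, play_tone):
    """Contacting or depressing a region lights its color and sounds
    its coupled tone through the speaker(s) 114."""
    cfg = REGIONS[region]
    light_up(cfg["color"])
    play_tone(cfg["tone_hz"])
    return cfg
```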
[0051] In one embodiment, the user must attempt to guess the
correct sequence of contacting the regions to satisfy the code. The
display 310 may then show a user after an attempt how "close" they
were to solving the code. The display 310 may show the number of
inputted sequences or may show other data to signify to a user what
part of the code was correct and which part was incorrect.
[0052] In another embodiment, the user may interact with the sound
emitting device 100 by repeating a light and sound sequence by
contact or depressing a portion of the respective region when
prompted by the display 310. For example, the display 310 displays
a pattern of light/sound/etc. associated with the display 310 and
the corresponding regions. The user must then repeat back this
pattern by contacting the requisite region(s).
[0053] The length of the provided sequence (code) will vary, and
the sequence may get sequentially longer as the user interacts with
the sound emitting device 100. Additionally, the sequences and
response time(s) for the user may be timed. That is, if the user
were to take too long on any one move or on the sequence as a
whole, the user would fail. For example, the first pattern may be
one light and one sound and the final pattern could be ten lights
and ten sounds. If the user were to fail by employing the wrong
sensor or taking too much time between depressing the sensors, then
the pattern resets and the user starts at the first sequence.
[0054] FIG. 5 shows a flowchart demonstrating one method of
silencing a sound emitting device 400. The method 400 starts with a
user receiving a string of digits from a sound emitting device 100.
The string of digits is displayed on the display 210 (see FIG. 2)
embedded in the sound emitting device 100. The string of digits may
vary in length but will be between 1 and 7 digits long. The
particular number received by the user
corresponds to a particular set of challenges for the user to
complete to silence the sound emitting device 100.
[0055] The user then accesses the mobile application 404 associated
with the sound emitting device 100. In the written code for the
mobile application are numbers that correspond to the output
numerical string. When the user inputs the numerical string 406,
the corresponding set of challenges is presented to the user 408.
If the user incorrectly inputs the numerical string given to them,
they will be redirected and prompted to input the numerical string
again 416. Upon a successful input, the challenges are presented to
the user 408. The challenges may take a number of forms including
but not limited to level-based, turn-based (against computer),
timed memory sequences, reaction time sequences, trivia, timed
trivia, and direction following. The user completes a random number
of random challenges generated by the mobile application. Upon, and
only upon, completion of all the challenges 410 the device will
deactivate and turn off 412. If the user does not complete all the
challenges or cannot, then the device will stay active 418.
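The method of FIG. 5 can be sketched as follows, with the displayed digit string seeding the mobile application's random challenge selection. The challenge names, the seeding scheme, and the challenge count are assumptions; the specification only requires that the number and type of challenges be random and tied to the code.

```python
import random

# Illustrative sketch of the FIG. 5 method 400. Challenge names, the
# seeding scheme, and the maximum count are assumptions.

CHALLENGE_TYPES = [
    "level", "turn_based", "timed_memory", "reaction_time",
    "trivia", "timed_trivia", "direction_following",
]

def challenges_for_code(code, max_challenges=5):
    """Map a 1-7 digit string to a reproducible random challenge set."""
    if not (code.isdigit() and 1 <= len(code) <= 7):
        raise ValueError("code must be a string of 1-7 digits")
    rng = random.Random(int(code))  # same code -> same challenge set
    count = rng.randint(1, max_challenges)  # random number of challenges
    return [rng.choice(CHALLENGE_TYPES) for _ in range(count)]

def silence_device(code, entered_code, complete):
    """Deactivate (return True) only when the entered string matches
    and every generated challenge is completed (steps 406-412);
    otherwise the device stays active (steps 416/418)."""
    if entered_code != code:
        return False  # step 416: prompt the user to re-enter the string
    return all(complete(c) for c in challenges_for_code(code))
```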
[0056] Alternatively, there may be questions presented on the
display. The answers may be designated by a letter (i.e. A, B, C,
etc.) or by a body part such as LH (left hand). If designated by a
letter, the letter will correspond to a certain body part. By
depressing a touch based sensor 106 in that part, the user answers
the question. The same goes for if an answer is identified by the
body part instead of a letter. This methodology follows the same
method 400 identified above, replacing the need to access a web or
mobile based application. While this methodology was described in
relation to the embodiment shown in FIG. 2, it can be applicable to
any embodiment of the present invention described herein or
otherwise.
* * * * *