U.S. patent application number 13/918451 was filed with the patent office on 2013-06-14 and published on 2013-12-19 for interactive input device.
This patent application is currently assigned to Muzik LLC. The applicant listed for this patent is Muzik LLC. Invention is credited to John Cawley, Jason Hardi.
Application Number: 20130339850 (Appl. No. 13/918451)
Family ID: 49757146
Publication Date: 2013-12-19

United States Patent Application 20130339850
Kind Code: A1
Hardi; Jason; et al.
December 19, 2013
INTERACTIVE INPUT DEVICE
Abstract
An implementation of the technology includes a control device that is used in conjunction with a computing device (e.g., a media player or smartphone) and allows a user to control the operation of the computing device without directly handling the computing device itself. This allows the user to interact with the computing device in a more convenient manner.
Inventors: Hardi; Jason (Miami Beach, FL); Cawley; John (Raleigh, NC)

Applicant:
Name | City | State | Country
Muzik LLC | Miami Beach | FL | US

Assignee: Muzik LLC (Miami Beach, FL)

Family ID: 49757146
Appl. No.: 13/918451
Filed: June 14, 2013
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
61660662 | Jun 15, 2012 |
61749710 | Jan 7, 2013 |
61762605 | Feb 8, 2013 |
Current U.S. Class: 715/702
Current CPC Class: G06F 1/1694 20130101; G06F 3/016 20130101; H04L 65/601 20130101; G06F 3/017 20130101; H04L 65/4076 20130101; B62D 1/04 20130101; G11B 27/105 20130101; H04R 1/1041 20130101; H04W 4/025 20130101; H04W 4/80 20180201; G06F 3/04883 20130101; G06F 3/033 20130101; G06F 1/169 20130101; G06F 3/041 20130101; G11B 27/34 20130101; H04M 1/6066 20130101; G06F 3/165 20130101; H04B 1/385 20130101
Class at Publication: 715/702
International Class: G06F 3/01 20060101 G06F003/01
Claims
1. A system for interacting with an application on a processing apparatus, comprising: an input module; a processing module; and a transmission module; wherein the input module is configured to detect a tactile input applied by a user; wherein the processing module is configured to translate the input into an application command; and wherein the transmission module is adapted to transmit the command to the processing apparatus.
2. The system of claim 1, wherein the tactile input comprises one
or more of the following: a momentary touching gesture, a sustained
touching gesture, and a swiping gesture.
3. The system of claim 1, wherein the tactile input comprises a series of two or more of the following: a momentary touching gesture, a sustained touching gesture, and a swiping gesture.
4. The system of claim 1, wherein the processing module is configured to determine a number of fingers used by the user to apply the input.
5. The system of claim 4, wherein the processing module is configured to translate the input into an application command based on the number of fingers detected.
6. The system of claim 1, further comprising one or more audio
speakers.
7. The system of claim 1, wherein the application is a media
management application.
8. The system of claim 1, wherein the processing apparatus is one
of the following: a media player, a smartphone, a gaming device,
and a computer.
9. The system of claim 1, wherein the command comprises a command
to control a media playback function of the processing
apparatus.
10. The system of claim 1, wherein the command comprises a command
to transmit a message to a recipient through a communications
network.
11. The system of claim 10, wherein the communications network
comprises one or more of the following: a LAN, a WAN, the internet,
and a cellular network.
12. The system of claim 10, wherein the recipient is one of the following: a communications device, a social media website, an email server, and a telephone.
13. The system of claim 10, further comprising a steering wheel for
controlling a vehicle.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of priority to U.S. Prov. App. No. 61/660,662, Social User Motion Controlled Headphone Apparatuses, Methods and Systems, filed Jun. 15, 2012; U.S. Prov. App. No. 61/749,710, Interactive Networked Headphones, filed Jan. 7, 2013; and U.S. Prov. App. No. 61/762,605, System and Method of Remote Content Interaction, filed Feb. 8, 2013; all of which are incorporated by reference in their entirety.
BACKGROUND
[0002] This specification relates to remote input devices and more
specifically input devices integrated with output devices.
Computing devices are commonly used by a user to perform a wide
variety of functions. A user issues commands to a computing device by interacting with one or more controls; this input is often provided through an input device such as a keyboard, touchpad, mouse, or touchscreen. The computing device outputs content in response to
the user commands in various forms via a video monitor, speaker,
headphones or other sensory/perceptive device(s). It may be
desirable to input controls and commands to the computing device
directly from the output device, such as inputting commands to an
audio player via a headphone or interacting with a social media
channel in real time via a headphone as an audio file is played.
With the exception of rudimentary output commands such as "play,"
"stop," "pause," and "volume," current output devices do not allow
for controls or input to software programs running on the computing
device.
SUMMARY
[0003] This specification describes technologies relating to
interactive remote input devices and interactive output devices,
such as, for example and without limitation, network-connected interactive headphones, interactive dongles, interactive cables, interactive speakers, and interactive hand controllers.
[0004] In general, one innovative aspect of the subject matter
described in this specification can be embodied in a headphone
apparatus and a media player device that are used in conjunction to
provide a user with audio playback of media content, and to allow
the user to interact with social media sites, email providers,
supplementary content providers, and ad providers based on the
media content being played. In an exemplary embodiment of the
apparatus the headphones are operably connected to the media player
through a hardwire connection or through a wireless connection,
such as Bluetooth or Wi-Fi. The media player communicates with a
network gateway through wireless network connection, such as
through a cellular connection or Wi-Fi connection. The network
gateway provides network connectivity to the Internet, facilitating
access to various content and service providers connected to the
Internet. Content and service providers may include email servers,
social media sites, ad servers, and content servers.
[0005] Other implementations are contemplated. For example, the
media player may be one of many types of mobile devices, such as a
cellular telephone, a tablet, a computer, a pager, a gaming device,
or a media player. In other implementations, the wireless network
connection may be one of many types of communications networks
through which data can be transferred, such as a Wi-Fi network, a
cellular telephone network, a satellite communications network, a
Bluetooth network, or an infrared network. In other
implementations, the content and service providers may also include
search engines, digital content merchant sites, instant messaging
providers, SMS message providers, VOIP providers, fax providers,
content review sites, and online user forums.
[0006] Implementations of the present invention may include a system for interacting with an application on a processing apparatus, comprising: an input module; a processing module; and a transmission module; wherein the input module is configured to detect a tactile input applied by a user; wherein the processing module is configured to translate the input into an application command; and wherein the transmission module is adapted to transmit the command to the processing apparatus.
[0007] In another implementation of the present invention, a method
for providing input to a processing device comprises: providing a networked processing device, wherein the processing device delivers content as an output to an output device; providing an input module configured to detect a tactile input applied by a user; and translating the input into an application command at the processing device.
[0008] Implementations of the present invention may comprise one or more of the following features. The input module is adjacent to the output device. The tactile input on the input module comprises one or more of the following: a momentary touching gesture, a sustained touching gesture, and a swiping gesture. The tactile input comprises a series of two or more of the following: a momentary touching gesture, a sustained touching gesture, and a swiping gesture. The processing module is configured to determine a number of fingers used by the user to apply the input. The processing module is configured to translate the input into an application command based on the number of fingers detected. The system comprises one or more audio speakers. The application is a media management application. The processing apparatus is one of the following: a media player, a smartphone, a gaming device, and a computer. The command comprises a command to control a media playback function of the processing apparatus. The command comprises a command to broadcast a user preference or user indication over a network, such as a social network. The command comprises a command to transmit a message to a recipient through a communications network. The communications network comprises one or more of the following: a LAN, a WAN, the internet, and a cellular network. The recipient is a communications device, a social media website, an email server, or a telephone. The system comprises a user control device for controlling a device unassociated with the system for interacting with an application on a processing apparatus. The control device comprises a steering wheel for controlling a vehicle. The output device comprises one or more video displays, audio speakers, headphones, or ear buds.
[0009] The details of one or more embodiments of the subject matter
described in this specification are set forth in the accompanying
drawings and the description below. Other features, aspects, and
advantages of the subject matter will become apparent from the
description, the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] FIG. 1 is a schematic diagram of an example control
system.
[0011] FIG. 2 is a flow chart showing an example usage of a control
system.
[0012] FIG. 3 is a flow chart showing an example usage of a control
system.
[0013] FIGS. 4A-D are example embodiments of an input module.
[0014] FIGS. 5A-F are example user interactions.
[0015] FIGS. 6A-H are example user interactions.
[0016] FIG. 7 is an example input module.
[0017] FIGS. 8A-E are example embodiments of control systems.
[0018] FIG. 9 shows example embodiments of control systems.
[0019] FIG. 10 is an example network of the present invention
including interactive, networked headphones.
[0020] FIG. 11A is an example of an interactive, networked
headphone of the present invention.
[0021] FIG. 11B is an example of an implementation of the present
invention.
[0022] FIG. 11C is an example of an implementation of the present
invention.
[0023] FIG. 12 is an example of a method of an implementation of
the present invention.
[0024] FIG. 13 is an example of an implementation of the present
invention.
[0025] FIG. 14 is an example of an implementation of the present
invention.
[0026] FIG. 15 is an example of a method of an implementation of
the present invention.
[0027] Like reference numbers and designations in the various
drawings indicate like elements.
DETAILED DESCRIPTION
[0028] Broadly, an implementation of the technology includes a control device that is used in conjunction with a computing device (e.g., a media player or smartphone) and allows a user to control the operation of the computing device without directly handling the computing device itself. In example implementations the computing
device may be controlled from a traditional output device, such as
a headphone, speaker, ear bud, speaker cable, wearable output
display such as heads-up display glasses or visors, a cable
comprising an input device such as an interactive input device on a
headphone or ear bud cable, or even a remote input device
disassociated with the computing device, such as a steering wheel,
a dash board panel, a visual or audio kiosk, and the like.
[0029] Providing input to the computing device from an interactive output device or remote input device allows the user to interact with the computing device in a more convenient manner. For instance, a user may use the control device to interact with the computing device without first having to remove the computing device from a storage location (e.g. a clothing pocket, a carrying bag, a holding bracket, an armband, etc.). In this manner, a user
may use the control device to operate the computing device, without
exposing the computing device to potential damage due to
mishandling or environmental factors. The user may also use the
control device to operate a computing device that is not readily
accessible, for example a device that is secured in a container or
a protective housing, or built into a fixed enclosure (e.g. a
household audio system or a media system in a vehicle).
[0030] In another example, a user may use the control device to
interact with the computing device without having to look at either
the control device or the computing device. In this manner, a user
may use the computing device while engaging in other activities,
such as walking, running, reading, driving, or any other activity
where averting one's attention is undesirable. In addition, a user
may use the control device to simplify specific tasks of the
computing device, such that the user may issue complex instructions
to the computing device using relatively simple inputs on the
control device. For example, in one implementation the user may
share or "like" content, such as a music recording being played on
a mobile phone and delivered to the user via an output device such
as headphones, wherein the headphones include an input component
such that the user can communicate preferences for the music file
with other users over a social network by simple manipulation of
the input device on the headset.
[0031] In yet another example, a user may share content in real
time to a predetermined set of additional users (e.g., members of a
contact list, attendees to an event, users within a geographic or
localized area). Also, multiple users can communicate and share files via a network with a single device, such as a communal audio speaker (e.g., multiple users can share one or more music files to a device to create a just-in-time playlist by simple manipulation of the input component on the user headphones).
[0032] FIG. 1 illustrates an example implementation of a control
device 100 used to control the operation of a computing device 150.
A control device 100 includes an input module 102, a processing
module 104, a transmission module 106, and a power module 108. Each
module 102, 104, 106, and 108 may be interconnected through one or
more connection interfaces 110, which may provide a connective
pathway for the transfer of power or data between each of the
modules. The transmission module 106 is connected to the computing
device 150 through another connection interface 152, which provides
a connective pathway for the transfer of data between control
device 100 and computing device 150.
[0033] In general, the input module 102 is provided so that a user
can physically interact with the control device 100. The input
module 102 includes one or more sensors to detect physical
interaction from the user, and also includes electronic components
necessary to convert the physical interactions into a form that may
be interpreted by the other modules of the device (e.g. by
digitizing the input so that it may be understood by the processing
module 104 and the transmission module 106). The input module 102
may include one or more types of sensors, for instance
touch-sensitive sensors, buttons, switches, or dials, or
combinations of one or more of these sensors.
[0034] In general, the processing module 104 is provided so that
control device 100 may interpret the user's physical interactions,
and translate these interactions into specific commands to the
computing device 150. In general, the transmission module 106 is
provided so that the control device 100 can transmit commands to
the computing device 150. The transmission module 106 may include
components to encode and transmit data to computing device 150 in a
form recognizable by computing device 150. The transmission module
106 may include, for example, a serial communication module, a
universal serial bus (USB) communication module, a Bluetooth
networking module, a WiFi networking module, a cellular phone
communication module (e.g. a CDMA or GSM radio), or any other
module for communicating with computing device 150.
[0035] In general, the power module 108 is provided to supply power
to each of the other modules of control device 100. The power
module 108 may be of any standard form for powering small
electronic circuit board devices such as the following power cells:
alkaline, lithium hydride, lithium ion, lithium polymer, nickel
cadmium, solar cells, and/or the like. Other types of AC or DC
power sources may be used as well. In the case of solar cells, in
some implementations, the case provides an aperture through which
the solar cell may capture photonic energy. In some
implementations, power module 108 is located external to system
100, and power for each of the modules of control device 100 is
provided through connection interface 110 or another connection
interface, which may provide a connective pathway for the transfer
of power between the externally located power module 108 and the
components of system 100.
[0036] In general, the connection interface 152 provides a connective pathway for the transfer of data between control device 100 and computing device 150. Connection interface 152 may be a wired connection, a wireless connection, or a combination of both. For instance, connection interface 152 may be a serial connection, a USB connection, a Bluetooth connection, a WiFi connection, a cellular connection (e.g. a connection made through a CDMA or GSM network), or combinations of one or more of these connections.
[0037] In some implementations, connection interface 152 is
established over a wireless connection, and a "paired" relationship
must be established between control system 100 and computing device
150 before the user may use control system 100 to issue commands to
computing device 150. For example, in some implementations,
connection interface 152 is a Bluetooth connection, and a user
interacts with computing device 150, first to view a list of active
Bluetooth modules in the vicinity, then to select the module
representing control system 100. Computing device 150 and control
system 100 establish a Bluetooth connection interface 152, and data
can be transmitted between the two through this connection
interface. In another example, control system 100 may contain a
near-field communication (NFC) identification tag that uniquely
identifies control system 100, and computing device 150 has an NFC reader that is capable of reading the identification information
from the NFC identification tag. In these implementations, a user
may pair control system 100 and computing device 150 by placing the
NFC identification tag in proximity to the NFC reader, and
computing device 150 establishes a connection interface 152 with
control system 100. This paired relationship may be retained by
control system 100 or computing device 150, such that each device
will continue to communicate with the other over subsequent usage
sessions. The paired relationship may also be altered, such that control system 100 may be paired with another computing device 150.
[0038] In some implementations, connection interface 152 may also
be used to transmit data unassociated with control system 100. For
example, in some implementations, a control system 100 may be used
with an audio headset, such that audio information from computing
device 150 is transmitted through connection interface 152 to
transmission module 106, then transmitted to the audio headset
using another connection interface. In these implementations, data associated with control system 100 (i.e. data that relates to the operation of control system 100) and data unassociated with control system 100 may both be transmitted through connection interface 152, either simultaneously or in an alternating manner. In some
implementations, connection interface 152 is an analog connection,
such as those commonly used to transfer analog audio data to a
headset, and may include multiple channels, for instance channels
commonly used for left audio, right audio, microphone input, etc.
Control system 100 may transfer analog audio data, as well as data
associated with control system 100, through such a connection
interface 152. For instance, transmission module 106 may encode the
data according to patterns of shorted analog channels, then
transmit the encoded data by shorting one or more of the channels
of connection interface 152. These patterns of shorted channels may
be interpreted by computing system 150. In some implementations, transmission module 106 may transfer data by sending patterns of analog voltage waveforms to computing system 150, which may then be interpreted by computing system 150. Combinations of more than one encoding and transmission technique may also be used.
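By way of illustration, the shorting scheme described above might be sketched as follows. The pattern table and the short_channel()/release_channel() primitives are assumptions for illustration; the specification does not fix a particular encoding.

```python
import time

# Hypothetical pattern table: each command is encoded as a sequence of
# (channel, short_duration_s) pulses. Neither the channel names nor the
# durations come from the specification.
COMMAND_PATTERNS = {
    "play_pause": [("mic", 0.05)],
    "next_track": [("mic", 0.05), ("mic", 0.05)],
    "share":      [("left_audio", 0.05), ("right_audio", 0.05)],
}

def short_channel(channel):
    """Placeholder: drive the named analog channel to ground."""

def release_channel(channel):
    """Placeholder: restore the channel to normal operation."""

def transmit(command, gap_s=0.05):
    """Send a command by shorting channels in the pattern's order."""
    for channel, duration in COMMAND_PATTERNS[command]:
        short_channel(channel)
        time.sleep(duration)       # hold the short for the encoded duration
        release_channel(channel)
        time.sleep(gap_s)          # inter-pulse gap keeps pulses distinct

transmit("next_track")             # two short pulses on the microphone channel
```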
[0039] An example usage 200 of a control device 100 for controlling
a computing device 150 is illustrated in FIG. 2. The control device
100 first detects an input from the user (block 202). Generally,
the input is detected by the input module 102, and may include any
form of physical interaction, for instance a touching motion, a
swiping or sweeping motion, a pressing motion, a toggling motion,
or a turning motion. In some implementations, the control device
100 may detect the number of fingers that the user uses to apply
the input, in addition to detecting the physical motion of the
input.
[0040] The control device 100 then translates the input into a
command to a computing device (block 204). Generally, input module
102 detects and digitizes the user's input, and processing module
104 translates the digitized input into a command. In some
implementations, the processing module 104 receives digitized
information describing the user's interaction with input module 102
and compares it to a predetermined list of inputs and associated
commands. If processing module 104 determines that the user's input
matches an input from the predetermined list, it selects the
command associated with the matched input from the list. In some
implementations, processing module 104 does not require an exact
match between the user's interactions, and may instead select the
closest match. A match may be based on one or more criteria, for
instance based on a temporal similarity, a spatial similarity, or a
combination of the two.
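A minimal sketch of this translation step follows, assuming hypothetical gesture templates described by a duration and a displacement, and a weighted temporal-plus-spatial distance for closest-match selection:

```python
import math

# Assumed templates: gesture -> (duration_s, dx, dy) in normalized units.
TEMPLATES = {
    "swipe_forward":  (0.3,  1.0, 0.0),
    "swipe_backward": (0.3, -1.0, 0.0),
    "swipe_up":       (0.3,  0.0, 1.0),
    "tap":            (0.1,  0.0, 0.0),
}
# Assumed command associations for each template gesture.
COMMANDS = {
    "swipe_forward": "next_track",
    "swipe_backward": "previous_track",
    "swipe_up": "volume_up",
    "tap": "play_pause",
}

def closest_gesture(duration, dx, dy, w_time=1.0, w_space=1.0):
    """Pick the template minimizing a weighted temporal + spatial distance."""
    def score(name):
        t_dur, t_dx, t_dy = TEMPLATES[name]
        return (w_time * abs(duration - t_dur)
                + w_space * math.hypot(dx - t_dx, dy - t_dy))
    return min(TEMPLATES, key=score)

def translate(duration, dx, dy):
    """Translate a digitized input into the associated application command."""
    return COMMANDS[closest_gesture(duration, dx, dy)]

print(translate(0.28, 0.9, 0.1))   # -> "next_track"
```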
[0041] In general, a command may include any instruction to a
computing device to perform a particular task. For instance,
example commands may include instructions to the computing device
to start the playback of a particular media file, stop the playback
of a media file, adjust the volume of playback, jog to a particular
time-point within the media file, or select another media file
entirely. In another example, commands may include instructions to
the computing device to transmit data to a recipient across a
network connection. Example network connections include local area
networks (LANs), wide area networks (WANs), Bluetooth networks,
cellular networks, or any other network where data may be
transmitted between two computing devices. Example recipients may
include other computing devices, for instance a computer, a
smartphone, or a server. For instance, a command may include
instructions to send a particular message across a WAN to a server
operated by a social media website, such that a user may interact
with the social media website. In another example a command may
include instructions to send a particular message across a cellular
network to another computing device, such that a user may interact
with another user through the other user's computing device.
[0042] The control device 100 then transmits the translated command to the computing device (block 206). In general, the translated command is transmitted through the transmission module 106 to the computing device 150. Upon receipt of the translated command, the computing device 150 may execute the command.
[0043] Another example usage 300 of control device 100 to revise
the association between a user's inputs and the corresponding
commands is illustrated in FIG. 3. The control device 100 first detects an input from the user (block 302). Generally, the input is
detected by the input module 102, and may include any form of
physical interaction, for instance a touching motion, a swiping or
sweeping motion, a pressing motion, a toggling motion, or a turning
motion. In some implementations, the control device 100 may detect
the number of fingers that the user uses to apply the input, in
addition to detecting the physical motion of the input.
[0044] The control device 100 then associates a new command to the
detected input (block 304). A user may specify the new command in
various ways. For instance, in some implementations, a user may
interact with control device 100, computing device 150, or a
combination of the two, to instruct control device 100 to associate
a new command with a particular user input. In an example, a user
may interact with control device 100 to browse through a list of possible commands and select a command from this list. In
another example, a user may interact with computing device 150 to
browse through a list of possible commands and select a command
from this list, where the selected command is transmitted to
control device 100 through connection interface 152. The selected
command is received by the transmission module 106, then is
transmitted to the processing module 104.
[0045] The control device 100 then stores the association between
the new command and the detected input for future retrieval (block
306). In general, the association between the new command and the
detected input is saved by processing module 104, and may be
recalled whenever a user interacts with control device 100 or
computing device 150. In this manner a user can initiate commands
to a program with simple remote inputs, for example, a user can set
up the control device to recognize that designated user inputs
(e.g., a two finger sweep across the control interface or input
module) should instruct a music player to share a music file with a
designated group of friends or other users. In another example,
voting options may be incorporated into traditionally passive
content, for example, an audio stream such as that provided by an
internet radio provider could include a prompt to initiate a known
user input (e.g. hold two fingers on the interface or control
module or depress two buttons simultaneously on the control module)
to have an e-mail sent about a product advertised, or to enroll the
recipient in an additional service.
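A minimal sketch of this remapping flow follows; the JSON file name, gesture names, and command strings are illustrative assumptions:

```python
import json

ASSOCIATIONS_FILE = "gesture_map.json"   # hypothetical persistent store

def load_associations():
    """Recall previously stored gesture-to-command associations."""
    try:
        with open(ASSOCIATIONS_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def associate(gesture, command):
    """Bind a new command to a detected input and store it (blocks 304, 306)."""
    associations = load_associations()
    associations[gesture] = command
    with open(ASSOCIATIONS_FILE, "w") as f:
        json.dump(associations, f)

# Example: a two-finger sweep now shares the current file with a group.
associate("two_finger_sweep_forward", "share_with_group:friends")
```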
[0046] In general, the input module 102 includes one or more
sensors to detect physical interaction from the user. Example
arrangements of sensors are shown in FIG. 4. Referring to FIG. 4A,
an input module 102a may include a single touch-sensitive sensor
402. The sensor 402 may be of various types, for instance a sensor capable of resistive sensing or a sensor capable of capacitive sensing. In some implementations, the sensor 402 may detect
interaction from a user in the form of physical interaction, for
instance a touching motion, a swiping or sweeping motion, a
pressing motion, a toggling motion, or a turning motion. In some
implementations the sensor 402 may detect the absolute or relative
position of a user's finger upon the sensor 402, in order to
provide additional spatial information regarding the nature of the
user's interactions. In some implementations, the sensor 402 may
detect the period of time in which a user interacts with sensor
402, in order to provide additional temporal information regarding
the nature of the user's interactions.
[0047] Referring to FIG. 4B, another example input module 102b may
include several individual touch-sensitive sensors, for instance
five sensors 404a-e. The sensors 404a-e may be of various types,
for instance a sensor capable of resistive sensing or a sensor
capable of capacitive sensing, or a combination of two or more
types of sensors. In some implementations the sensors 404a-e may
detect the absolute or relative position of a user's finger upon
the sensors 404a-e, in order to provide additional spatial
information regarding the nature of the user's interactions. In
some implementations, the sensors 404a-e may detect the period of
time in which a user interacts with sensors 404a-e, in order to
provide additional temporal information regarding the nature of the
user's interactions. In some implementations, each of the sensors
404a-e is discrete, such that input module 102b is able to discern
which of the sensors 404a-e were touched by the user. While input
module 102b is illustrated as having five sensors, any number of
individual sensors may be used. Similarly, while input module 102b
is illustrated as having rectangular sensors arranged in a
grid-like pattern, sensors may take any shape, and may be arranged
in any pattern. For example, referring to FIG. 4C, an example input
module 102c may include eight sensors 406a-h arranged in a circle,
such that each sensor represents a sector of a circle.
[0048] Input modules 102 that detect physical interaction from a
user through capacitive sensing may be implemented in various ways.
For instance, in some embodiments, an input module 102 includes a
printed circuit board (PCB) in a two layer stack. A first layer
includes one or more conductive surfaces (for instance copper pads)
that serve as capacitive elements, where each capacitive element
corresponds to a touch-sensitive sensor. The opposing layer houses
a microcontroller and support circuitry to enable
resistive-capacitive (RC) based capacitive touch sensing in each of
the capacitive elements. Firmware loaded into the microcontroller
continuously measures the capacitance of each capacitive element
and uses the capacitance reading to detect the presence of a
finger, track finger swipe motion, and determine the gesture motion
of the user.
[0049] A gesture detection algorithm may be included as a part of
the firmware. In some implementations, a gesture event is detected when the capacitance measurement of any capacitive element increases over a finger detection threshold set in firmware. The gesture event ceases either when the capacitance measurement drops back below the finger detection threshold or when a timeout is reached.
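A minimal sketch of this event detection follows, assuming per-cycle capacitance readings are supplied as a stream of element-to-value mappings and the timeout is expressed in polling cycles; the threshold and timeout values are assumptions:

```python
FINGER_THRESHOLD = 120.0   # assumed firmware-set finger detection threshold
MAX_EVENT_SAMPLES = 150    # assumed timeout, expressed in polling cycles

def detect_gesture_event(reading_stream):
    """Return (samples, timed_out) for one gesture event.

    reading_stream yields one {element: capacitance} mapping per polling
    cycle. The event starts when any element rises above the threshold and
    ends when all elements drop back below it or the timeout is reached.
    """
    samples = []
    for readings in reading_stream:
        touched = any(v > FINGER_THRESHOLD for v in readings.values())
        if not samples:
            if touched:
                samples.append(readings)   # event begins
            continue
        samples.append(readings)
        if not touched:
            return samples, False          # finger lifted: event ends
        if len(samples) >= MAX_EVENT_SAMPLES:
            return samples, True           # timeout: e.g. a "hold" gesture
    return samples, False

stream = [{"a": 10.0}, {"a": 140.0}, {"a": 150.0}, {"a": 9.0}]
print(detect_gesture_event(stream))        # three samples, no timeout
```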
[0050] Once a gesture is detected, the microcontroller may communicate to another component, for instance processing module 104, which gesture occurred. This communication may occur through a wired connection, such as communication interface 110, or through a wireless connection, such as a WiFi, Bluetooth, infrared, or near-field communication (NFC) connection.
[0051] The input module 102 uses a capacitive touch scheme that
measures capacitance of an isolated section of one or more of the
capacitive elements by alternately charging and discharging the
capacitive elements through a known resistor. The combination of
the resistor value and capacitance of the capacitive elements
define the rate at which the capacitive elements charge and
discharge. Since the resistor is a fixed value, the discharge rate
has a direct relation to each capacitive element's capacitance. The
capacitance of the capacitive elements is measured by recording the
amount of time it takes to charge then discharge the capacitive
elements. The capacitance of the capacitive elements in an
unchanging environment will remain the same. When a finger comes very close to or touches the capacitive elements, the finger increases the measurable capacitance of the capacitive elements by storing
charge, thus causing the charge and discharge events to take
longer. Since the measured capacitance increases in the presence of
a finger, the firmware may then use the capacitance measurement as
a means to decide that a finger is touching a sensor of input
module 102.
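A sketch of the charge-time measurement follows; the three pin primitives are hypothetical stand-ins for the hardware layer:

```python
import time

def measure_charge_time(drive_low, release, is_high, settle_s=0.001):
    """Time how long the element takes to charge through the known resistor.

    drive_low(), release(), and is_high() are hypothetical hardware
    primitives: discharge the element, let it charge through the fixed
    resistor, and report whether the voltage has crossed the input
    threshold. A finger stores extra charge, so this time grows when a
    finger is near or touching the element.
    """
    drive_low()                    # discharge to a known starting state
    time.sleep(settle_s)
    release()                      # begin charging through the resistor
    start = time.monotonic()
    while not is_high():           # wait for the threshold crossing
        pass
    return time.monotonic() - start
```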
[0052] Input module 102 may include sensors other than
touch-sensitive sensors. For example, referring to FIG. 4D, an
input module 102d may include several physical buttons 408a-d. A
user may interact with input module 102 by pressing one or more of
the buttons 408a-d. Input module 102 may detect one or more events
associated with this interaction, for example by detecting when a
button is depressed, how long a button is held, when a button is
released, a sequence or pattern of button presses, or a timing between two or more button presses.
[0053] In some implementations, input module 102 includes one or
more proximity sensors. These proximity sensors may be used to
detect the motion of objects near the sensor. For example,
proximity sensors may be used to detect a user waving his hand
close to input module 102. These proximity sensors may also be used
to detect the presence of objects near to the sensor. For example,
a proximity sensor may be used to detect that system 100 is in
close proximity to a user.
[0054] In some implementations, input module 102 includes one or
more accelerometer sensors, such that input module 102 may
determine the motion or the orientation of control system 100. For
instance, an input module 102 with one or more accelerometer
sensors may be able to determine if control system 100 is upright
or not upright, or if control system 100 is moving or stationary.
[0055] In general, the input modules 102 may detect a broad range
of user interactions. Referring to FIG. 5, an input module 102a
that includes a single touch-sensitive sensor 402 may detect and
differentiate between several distinct types of user interaction.
For instance, referring to FIG. 5A, the input module 102a may
determine that a user applied a horizontal left-to-right motion to
the input module by recognizing that the user initiated contact
with sensor 402 at point 510a, sustained contact along path 510b in
the direction of arrow 510c, then released contact at point 510d.
In another example, referring to FIG. 5B, the input module 102a may
determine that a user applied a vertical bottom-to-top motion to
the input module by recognizing that the user initiated contact
with sensor 402 at point 520a, sustained contact along path 520b in
the direction of arrow 520c, then released contact at point
520d.
[0056] The input module 102 is not limited to recognizing
straight-line user interactions. For instance, referring to FIG. 5C, the input module 102a may determine that a user applied an S-shaped motion to the input module by recognizing that the user initiated contact with sensor 402 at point 530a, sustained contact along path 530b in the direction of arrow 530c, then released contact at point 530d.
[0058] The input module 102 may also detect touching motions. For
instance, referring to FIG. 5D, the input module 102a may determine
that a user applied a touching motion to the input module by
recognizing that the user initiated contact with the sensor 402 at
point 540 and released contact at point 540. In some implementations, sensor 402 is sensitive to the location of point 540, and can differentiate among different points of contact along sensor 402. In some implementations, sensor 402 is sensitive to the
time in between when the user initiated contact with the sensor and
when the user released contact with the sensor. Thus, input module
102 may provide both spatial and temporal information regarding a
user's interactions.
[0059] In addition, input module 102 may also detect multiple
points of contact, and may differentiate, for example, between an
interaction applied by a single finger and an interaction applied
by multiple fingers. For instance, referring to FIG. 5E, the input
module 102a may determine that a user applied a touching motion to
the input module using two fingers by recognizing that the user
initiated contact with the sensor 402 at two points 550a and 552a, and released contact from points 550a and 552a. In another example,
referring to FIG. 5F, the input module 102a may determine that a
user applied a horizontal left-to-right motion to the input module
using two fingers by recognizing that the user initiated contact
with sensor 402 at points 560a and 562a, sustained contact along
paths 560b and 562b in the direction of arrows 560c and 562c, then
released contact at points 560d and 562d.
[0060] In some implementations, the input module 102 may determine
spatially and temporally-dependent information about a user's
input, even if each sensor is limited only to making a binary
determination regarding whether the sensor is being touched, and is
otherwise not individually capable of determining more detailed
spatial information. For example, referring to FIG. 6, an input
module 102b may include several individual touch-sensitive sensors,
for instance five sensors 404a-e. If the sensors 404a-e are capable
of making only a binary determination regarding the presence or
lack of user contact on each of the sensors, and cannot make a
determination about the specific location of contact on each
sensor, the input module 102b may still recognize several types of
user interaction.
[0061] For instance, referring to FIG. 6A, the input module 102b
may determine that a user applied a horizontal left-to-right motion
to the input module by recognizing that the user initiated contact
at point 610a, sustained contact along path 610b in the
direction of arrow 610c, then released contact at point 610d. Input
module 102b may make this determination based on a detection of
contact on sensors 404b, 404c, and 404d in sequential order.
[0062] In another example, referring to FIG. 6B, the input module
102b may similarly determine that a user applied a vertical
bottom-to-top motion to the input module by recognizing that the
user initiated contact at point 620a, sustained contact along
path 620b in the direction of arrow 620c, then released contact at
point 620d. Input module 102b may make this determination based on
a detection of contact on sensors 404e, 404c, and 404a in
sequential order.
[0063] In another example, referring to FIG. 6C, the input module
102b may determine that a user initiated contact at point 630a,
sustained contact along path 630b in the direction of arrow 630c,
then released contact at point 630d.
[0064] The input module 102 may also detect touching motions. For instance, referring to FIG. 6D, the input module 102b may determine that a user applied a touching motion to the input module by recognizing that the user initiated contact with the sensor 404c at point 640 and released contact at point 640.
[0065] The input module 102 may also detect touching motions from multiple points of contact. For instance, referring to FIG. 6E, the input module 102b may determine that a user applied a touching motion to the input module by recognizing that the user initiated contact with the sensor 404b at point 650a and contact with the sensor 404c at point 650b, and released contact at points 650a and 650b.
[0066] In another example, referring to FIG. 6F, the input module
102b may determine that a user applied a horizontal left-to-right
motion to the input module using two fingers by recognizing that
the user initiated contact at points 660a and 662a, sustained contact along paths 660b and 662b in the direction of arrows 660c and 662c, then released contact at points 660d and 662d. Input module 102b may make this determination based on a detection of two simultaneous points of contact moving across sensors 404b, 404c, and 404d in sequential order.
[0067] In some implementations, the sensors, for instance sensors 404a-e, may be capable of individually determining spatial information, and may use this information to further differentiate between different types of user interaction. For instance, referring to FIG. 6G, an input module 102b may determine that a user applied multiple points of contact onto a single sensor 404c. In another example, referring to FIG. 6H, an input module 102b may determine that a user applied a horizontal left-to-right motion to the input module using two fingers by
recognizing that two points of contact exist along the same
sequence of sensors 404b, 404c, and 404d.
[0068] An input module 102 need not have sensors arranged in a
grid-like pattern in order to determine spatial information about a
user's interaction. For example, referring to FIG. 7A, an input
module 102c with eight sensors 406a-h arranged as sectors of a
circle has a sensor 406a in the 0.degree. position, a sensor 406b
in the 45.degree. position, a sensor 406c in the 90.degree.
position, a sensor 406d in the 135.degree. position, a sensor 406e
in the 180.degree. position, a sensor 406f in the 225.degree.
position, a sensor 406g in the 270.degree. position, and a sensor
406h in the 315.degree. position. In an example implementation,
each sensor's capacitance measurement reading is converted into X
and Y components that provide the finger's location relative to the
center of the array of sensors 406a-h. This is possible because the
angle of each sensor 406 is known and may be used with cosine and
sine functions to extract the components. For simplicity's sake,
each sensor 406 is centered on 45.degree. offsets from the unit
circle 0.degree.. To calculate the X component of the sensor 406b
located at 45.degree., for example, the firmware multiplies the
sensor's capacitance reading by cos(45.degree.). For the Y
component, the sensor's capacitance reading is multiplied by
sin(45.degree.). Once the X and Y components are known for all 8
sensors, all X's are summed together, and all Y's are summed
together. The resulting point approximates the location of the
finger on the array of sensors 406a-h.
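A compact sketch of this computation follows, using the 45-degree sector angles given above:

```python
import math

SECTOR_ANGLES = [0, 45, 90, 135, 180, 225, 270, 315]   # sensors 406a-h

def finger_location(readings):
    """Approximate the finger position from eight sector readings.

    readings holds one capacitance value per sector, in sensor order.
    Each reading is projected onto X and Y with the cosine and sine of
    its sector angle, and the sums locate the finger relative to the
    center of the array.
    """
    x = sum(r * math.cos(math.radians(a))
            for r, a in zip(readings, SECTOR_ANGLES))
    y = sum(r * math.sin(math.radians(a))
            for r, a in zip(readings, SECTOR_ANGLES))
    return x, y

# A centered finger gives similar readings everywhere, so opposite
# sectors cancel and the result stays near (0, 0).
print(finger_location([1.0] * 8))
```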
[0069] If the finger is in the center of the array of sensors
406a-h, then each sensor will have some non-zero, but similar
capacitance reading. When the X components of two oppositely faced
sensors (e.g. sensors 406a and 406e) are summed together, they
have a cancelling effect. As the finger moves outward from center,
one or two sensors will show increasing capacitance readings, while
the other 6-7 sensors' readings will decrease. The result seen in
the summed X and Y values tracks the finger away from center and
outward in the direction of the one or two sensors that the finger
is in contact with.
[0070] A gesture detection algorithm may be included as a part of
the firmware of input module 102c. In some implementations, when a
user's finger first touches the input module 102, the algorithm
stores the starting location, as determined by the summed X and Y
values calculated from the capacitance readings. The algorithm then waits for the detection of the finger to disappear, at which point the last known location of the finger before being removed is stored as an ending location. If no timeout is reached, and
both a start and stop event have occurred, the gesture algorithm
decides which gesture has occurred by analyzing the measured
starting point, ending point, slope of the line formed by the two
points, distance between the two points, and the change in X and
Y.
[0071] In general, the algorithm may differentiate between multiple
gestures. In some implementations, there are 4 "non-timeout"
gestures that must be distinguished from each other (each direction
is referenced to headphones on a user's head, so forward is motion
from the back of the head towards the face): "up," "down,"
"forward," and "backward". First, the algorithm determines whether
the gesture was horizontal or vertical, then determines whether the
motion was forward, backward, up or down.
[0072] For example, to differentiate between a horizontal motion and a vertical motion, the algorithm may compare the change in X (X2-X1) to the change in Y (Y2-Y1) and select the larger of the two. If the change in Y is larger than the change in X, then the motion is assumed to be vertical; if the change in X is larger than the change in Y, then the motion is assumed to be horizontal.
[0073] To differentiate between an upward and a downward motion, the algorithm may determine if Y2-Y1 is positive or negative. For example, if Y2-Y1 is positive, then the motion is assumed to be upward. If Y2-Y1 is negative, then the motion is assumed to be downward.
[0074] To differentiate between a forward and a backward motion,
the algorithm may determine if X2-X1 is positive or negative. For
example, if X2-X1 is positive, then the motion is assumed to be
forward. If X2-X1 is negative, then the motion is assumed to be
backward.
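Taken together, paragraphs [0071]-[0074] amount to the following classification, sketched here with assumed normalized coordinates:

```python
def classify_swipe(x1, y1, x2, y2):
    """Classify a swipe from its start and end points.

    Compare |X2-X1| to |Y2-Y1| to pick the axis, then use the sign of the
    larger change for the direction. "Forward" is motion from the back of
    the head towards the face.
    """
    dx, dy = x2 - x1, y2 - y1
    if abs(dy) > abs(dx):                          # vertical gesture
        return "up" if dy > 0 else "down"
    return "forward" if dx > 0 else "backward"     # horizontal gesture

print(classify_swipe(0.0, 0.0, 0.8, 0.1))          # -> "forward"
```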
[0075] In general, each direction swipe may initiate a separate
command. In addition to the swipe-based gestures shown above, the
touch algorithm can also detect that a user is holding the finger
on one or more of the sensors of the input module 102. A "hold"
gesture is detected in the event that a timeout occurs prior to an
end-of-gesture event (finger lifting off the sensors of input
module 102). Once it is determined that a finger is not being
removed from the sensors, its location is analyzed to determine
which command is intended.
[0076] In some implementations, input module 102c may differentiate
between different finger locations when a user interacts with input
module 102c. For instance, in some embodiments, input module 102c
includes an algorithm that differentiates between four finger
locations (e.g. "up", "down", "back", "forward"). This may be done
by comparing the capacitance readings of the sensors centered in
each cardinal location, for instance sensor 406a at the 0.degree.
position, sensor 406c at the 90.degree. position, sensor 406e at
the 180.degree. position, and sensor 406g at the 270.degree.
position. The sensor with the highest capacitance reading indicates
the position of the finger.
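A small sketch of this comparison follows; the mapping of sector angles to the four locations is an assumption for illustration:

```python
def hold_location(cap_by_angle):
    """Return the held location from the four cardinal sensor readings.

    cap_by_angle maps sector angle in degrees to a capacitance reading
    (0: sensor 406a, 90: 406c, 180: 406e, 270: 406g). The sensor with
    the highest reading indicates the finger position. The angle-to-
    direction mapping below is an assumption for illustration.
    """
    direction = {0: "forward", 90: "up", 180: "back", 270: "down"}
    best_angle = max(cap_by_angle, key=cap_by_angle.get)
    return direction[best_angle]

print(hold_location({0: 0.2, 90: 1.4, 180: 0.3, 270: 0.2}))   # -> "up"
```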
[0077] In this manner, the input module 102 may detect and differentiate between several different types of user interaction, and control device 100 may use this information to assign a unique command to each of these different user interactions.
[0078] The user may use one or more actions (e.g. pressing a
button, gesturing, waving a hand, etc.) to interact with computing
device 150, without requiring that the user directly interact with
computing device 150. For instance, without a control device 100, a
user must interact with a computing device 150 by averting his
attention from a different activity to ensure that he is sending
his intended commands to computing device 150 (i.e. touching the
correct area of a touchscreen, inputting the correct sequence of commands, etc.). In contrast, a user may instead use control device 100 to interact with computing device 150, and may use gestures to
replace or supplement the normal commands of computing device 150.
As an example, a user may input a forward swiping gesture in order
to command the computing device 150 to skip a currently playing
content item, and to playback the next content item on a playlist.
In another example, a user may input a backward swiping gesture in
order to command the computing device 150 to playback the previous
content item on a playlist. In this manner, each gesture may
correspond to a particular command, and a user may input these
gestures into input module 102 rather than manually enter the
commands into computing device 150.
[0079] In general, a user may also use one or more actions to input
commands unrelated to controlling content playback. For example, in
some implementations, gestures may be used to input commands
related to interacting with other systems on a network, for
instance websites and social media sites. For instance, a user may
input a hold gesture on a forward part of input module 102 in order
to command the computing device 150 to share the currently playing
content item on a social media site. Sharing may include
transmitting data to the social media site that includes
identifying information regarding the currently playing content
item, a user action relating to the content (e.g. "liking" the
content, "linking" the content to other users, etc.), a
pre-determined message introducing the content to other users, or
any other data related to sharing the content item with others. In
general, gestures and other actions may be used to issue any
command, including commands to visit a website, send a message
(e.g. an email, SMS message, instant message, etc.), purchase an
item (e.g. a content item, a physical product, a service, etc.), or
any other command that may be performed on the computing device
150.
[0080] In some embodiments, control system 100 may send commands to
computing device 150 based on the proximity of control system 100
to the user. For example, a control system 100 may include an input
module 102 with one or more proximity sensors. These sensors may
detect when control system 100 is in close proximity to the user,
and may issue commands according to this detection. For example,
input module 102 may be arranged in such a way that its proximity
sensors are positioned to detect the presence of a user when
control system 100 is in a typical usage position (e.g. against the
body of a user). When control system 100 is moved away from the
user, control system 100 may respond by issuing one or more
commands to computing system 150, for instance a command to stop
playback of any currently playing content items, a command to send
one or more messages indicating that the user is away from the
device, or a command to switch computing system 150 into a lower
power state to conserve energy. Commands may also be issued when
the control system 100 is moved back towards the user and into a
typical usage position. For example, when control system 100 is
moved back towards the user, control system 100 may respond by issuing commands to computing system 150 to restart playback of a content item, to send one or more messages indicating that the user has returned, or to switch computing system 150 into an active-use state.
[0081] In some embodiments, control system 100 may send commands to
a computing device 150 based on the orientation or motion of
control system 100. For example, a control system 100 may include
an input module 102 with one or more accelerometer sensors. These
sensors may detect the orientation of control system 100, and may issue
commands according to this detection. For example, input module 102
may detect that control system 100 is upright, and send a command
to computing system 150 in response.
[0082] In some implementations, control system 100 may send
commands to computing system 150 based on determinations from more
than one sensor. For example, control system 100 may determine if
control system 100 is being actively used by a user based on
determinations from the proximity sensors and the accelerometers.
For instance, if the proximity sensors determine that no objects
are in proximity to control system 100, and the accelerometers
determine that control system 100 is in a non-upright position,
control system 100 may determine that it is not being actively used by a user, and will command computing system 150 to enter a lower power state. For all other combinations of proximity and orientation, the system may determine that it is being actively used by a user, and will command computing system 150 to enter an active-use state. In this manner, control system 100 may consider
determinations from more than one sensor before issuing a
particular command.
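A sketch of this two-sensor policy follows; the command strings are illustrative:

```python
def power_command(object_in_proximity, is_upright):
    """Combine proximity and orientation before issuing a power command.

    Only the combination "nothing in proximity" and "not upright" is
    treated as inactive; every other combination keeps the computing
    device in an active-use state.
    """
    if not object_in_proximity and not is_upright:
        return "enter_low_power_state"
    return "enter_active_use_state"

print(power_command(False, False))   # -> "enter_low_power_state"
print(power_command(True, False))    # -> "enter_active_use_state"
```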
[0083] In some implementations, control system 100 may include one
or more audio sensors, such as microphones. These sensors may be
provided in order for control system 100 to detect and interpret auditory data. For instance, an audio sensor may be used to listen for spoken commands from the user. Control system 100 may interpret these spoken commands and translate them into commands to computing system 150. In some implementations, control system 100 may include more than one audio sensor. In some implementations, different audio sensors can be used for different purposes. For example,
some implementations may include two audio sensors, one for
recording audio used for telephone calls, and one for recording
audio for detecting spoken user commands.
[0084] In some implementations, control system 100 may include one
or more display modules in order to display information to a user.
These display modules may be, for example, LED lights, incandescent lights, LCD displays, OLED displays, or any other type of display component that can visually present information to a user. In some embodiments, control system 100 includes multiple display modules, with either multiple display modules of the same type, or with multiple display modules of more than one type. Display modules can display any type of visual information to a user. For instance, a display module may display information regarding the operation of control system 100 (e.g. power status, pairing status, command-related statuses, etc.), information regarding computing system 150 (e.g. volume level, content item information, email content, Internet content, telephone content, power status, pairing status, command-related statuses, etc.), or other content (e.g. varying aesthetically pleasing displays, advertisements, frequency spectrum histograms, etc.).
[0085] In general, control device 100 may be used in conjunction
with a variety of user-controllable devices. For instance,
referring to FIG. 8A, control device 100 may be used in conjunction with an audio headset 800. Portions of control device 100 may be mounted to headset 800, or mounted within portions of headset 800. For instance, portions of control device 100 may be mounted within ear piece 802, ear piece 804, or connecting band 806. In some implementations, portions of input module 102 may be mounted to headset 800 in such a way that it is readily accessible by a user. For
instance, in some implementations, sensor 402 is mounted on an
exterior surface of ear piece 802, so that a user may interact with
control device 100 by touching ear piece 802. The control device
100 may be connected to a computing device 150 through a connection
interface 152. Connection interface 152 may also provide a
connective pathway for the transfer of data between headset 800
and the computing device, for instance audio information when
computing device 150 plays back a media file. While FIG. 8A
illustrates the use of a single touch sensor 402, various types of
sensors may be used, and in various combinations. For example,
multiple touch sensors may be used (for example sensors 404a-e and
406a-h), physical controls (for example buttons 408a-d), or
combinations of touch sensors and physical controls.
[0086] In another example, control device 100 may be used in
conjunction with an audio headset, but the components of control
device 100 may be in a separate housing rather than mounted within
the headset. Referring to FIG. 8B, a control device 100 may be
housed in a casing 820 that is external to a headset 830. Portions
of control device 100 may be mounted to the exterior of the casing
820. For instance, in some implementations, buttons 408a-d are
mounted on an exterior surface of casing 820, so that a user may
interact with control device 100 by touching an exterior surface of
casing 820. Connection interface 152 may be connected on one end to
the transmission module 106 of the control device 100, and may have
a detachable connector 824 on the other end. The detachable
connector 824 may plug into a computing device 150, and may be
repeatedly connected and detached by the user so that control
device 100 may be swapped between computing devices. While FIG. 8B
illustrates the use of buttons 408a-d, various types of sensors
may be used, and in various combinations. For example, one or more
touch sensors (for example, sensors 402, 404a-e, and 406a-h),
physical controls (for example, buttons 408a-d), or combinations of
touch sensors and physical controls may be used.
[0087] In another example, control device 100 may be used in
conjunction with an audio headset, but the components of control
device 100 may be in a separate housing that is mounted away from
the headset. Referring to FIG. 8C, a control device 100 may be
housed in a casing 840 that is separate from a headset 842.
Portions of control device 100 may be mounted to the exterior of
the casing 840. For instance, in some implementations, touch sensor
402 is mounted on an exterior surface of casing 840, so that a user
may interact with control device 100 by touching an exterior
surface of casing 840. Connection interface 152 may be connected on
one end to the transmission module 106 of the control device 100,
and may have a detachable connector 824 on the other end. The
detachable connector 824 may plug into a computing device 150, and
may be repeatedly connected and detached by the user so that
control device 100 may be swapped between computing devices. In
some implementations, a connector port 826 may be provided on the
exterior of casing 840, where the connector port 826 provides
detachable data transmission access to control device 100. In some
implementations, connector port 826 provides a connection for data
transmission between computing device 150 and headset 842, such
that headset 842 can also communicate with computing device 150. A
user may plug the headset 842 into connector port 826 so that the
headset or presentation device can receive audio information from
computing device 150. In some implementations, other devices may be
plugged into connector port 826, either in addition to or instead
of headset 842. For instance, in some implementations, a microphone
may be plugged into connector port 826, such that audio information
from the microphone is transmitted to control system 100, and then
transmitted to computing device 150. In some implementations, a
display device may be plugged into connector port 826, such that
audio and/or video data from computing device 150 is transmitted to
the display device for presentation. In some implementations, more
than one device may be plugged into connector port 826. For
instance, in some implementations, a headset and a display device
may be plugged into connector port 826, such that audio information
from computing device 150 is played back on the headset, and video
information is played back on the display device. In another
example, a headset and a microphone may be plugged into connector
port 826, such that audio information from computing device 150 is
played back on the headset, and audio information from the
microphone is transmitted from the microphone to computing device
150. While FIG. 8C illustrates the use of a single touch sensor
402, various types of sensors may be used, and in various
combinations. For example, multiple touch sensors (for example,
sensors 404a-e and 406a-h), physical controls (for example, buttons
408a-d), or combinations of touch sensors and physical controls may
be used.
[0088] In some implementations, a user may use a control device 100
during a performance in order to share information regarding the
performance with a group of predetermined recipients. In an example,
a control device 100 may be connected to a computing device 150, and
to one or more other devices,
such as a headset, a microphone, or a display device. A user may
use computing device 150 to play content items for an audience, for
example to play audio and/or video content to an audience as a part
of a performance. During this performance, the user may use the one
or more devices connected to control device 100, for instance a
headset (e.g. to monitor audio information from computing device
150), a microphone (e.g. to address an audience), and a display
device (e.g. to present visual data, such as images or video to an
audience). During this performance, the user may also use control
system 100 to send commands to computing device 150, such as to
control the playback of content items, and to share information
regarding the performance. For instance, the user may use control
system 100 to command computing device 150 to transmit information
regarding the currently playing content item (e.g. the name of a
song or video, the name of the creator of the song or video, or any
other information) to one or more recipients, such as by posting a
message to a social media site, by emailing one or more users, by
sending an SMS or instant message to one or more users, or by other
such communications methods. In this manner, a user may use a
control device 100 in conjunction with several other devices to
render a performance, as well as to share information regarding the
performance with one or more recipients.
[0089] In another example, control device 100 may be used in
conjunction with other audio and video playback devices, for
instance a speaker apparatus 860. Portions of control device 100
may be mounted to the speaker apparatus 860, or mounted within
portions of the speaker apparatus 860. In some implementations,
portions of input module 102 may be mounted to speaker apparatus
860 in such a way that they are readily accessible by a user. For
instance, in some implementations, sensor 402 is mounted on an
exterior surface of speaker apparatus 860, so that a user may
interact with control device 100 by touching an exterior surface of
speaker apparatus 860. In some implementations, speaker apparatus
860 may also include a connection interface (not shown) that
provides data connectivity between control device 100 and a
computing device 150. In some implementations, speaker apparatus
860 includes a computing device 150. In some implementations, the
computing device 150 includes data storage and data processing
capabilities, such that media files may be stored and played back
from within the speaker apparatus 860. In some implementations, the
computing device 150 may also include one or more data connection
interfaces, such that a user may transmit media files to the
computing device 150 for playback. The data connection interfaces
may include components for transferring data through local area
networks (LANs), wide area networks (WANs), Bluetooth networks,
cellular networks, or any other network capable of
data transmission. While FIG. 8D illustrates the use of a single
touch sensor 402, various types of sensors may be used, and in
various combinations. For example, multiple touch sensors (for
example, sensors 404a-e and 406a-h), physical controls (for example,
buttons 408a-d), or combinations of touch sensors and physical
controls may be used.
[0090] In another example, control device 100 may be used in
conjunction with other devices not normally associated with media
playback. For instance, referring to FIG. 8E, a control device 100
may be used in conjunction with a steering wheel 880. Typically, a
steering wheel 880 is manipulated by a user to control the
direction of a vehicle. In some implementations, portions of
control device 100 may be mounted to the steering wheel 880, or
mounted within portions of steering wheel 880. In some
implementations, portions of input module 102 may be mounted to
steering wheel 880 in such a way that they are readily accessible by a
user. For instance, in some implementations, sensor 402 is mounted
on an exterior surface of the steering wheel 880, so that a user
may interact with control device 100 by touching steering wheel
880. The control device 100 may be connected to a computing device
150 through a connection interface 152. While FIG. 8E illustrates
the use of multiple touch sensors 402, various types of sensors may
be used, and in various combinations. For example, multiple touch
sensors (for example, sensors 404a-e and 406a-h), physical controls
(for example, buttons 408a-d), or combinations of touch sensors and
physical controls may be used.
[0091] While example implementations of control device 100 are
described above, these examples are not exhaustive. In general,
control device 100 may be used in conjunction with any
user-operated device, and may be used to control any computing
device 150.
[0092] For instance, referring to FIG. 9, in another example
implementation the control system 100 may include a computing
component attached to a headphone 800 (such as one of headphones
800a-e), where the computing component includes a touch pad, a
motion sensor, and/or a camera. In one implementation, a user may make a
hand gesture near the headphone to provide a control command to the
audio source, e.g., swirling the index finger clockwise may
indicate a "replay" request; pointing one finger forward may
indicate a "forward" request; pointing one finger downward may
indicate a "pause" request; and/or the like. In one implementation,
control system 100 may capture the user's hand gesture via a camera,
a remote control sensor held by the user, the user making different
movements on the touch pad, and/or the like. Such captured
movement data may be analyzed by the control system 100 and
translated into control commands for the audio source to change the
audio playing status.
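By way of illustration only, the translation step described above
might reduce to a lookup from recognized gesture labels to control
commands, as in the following Python sketch; the gesture names,
command strings, and the send_command callback are hypothetical
stand-ins for whatever recognizer and transport a given
implementation provides.

    # Hypothetical mapping from recognized gestures to audio-source commands.
    GESTURE_COMMANDS = {
        "swirl_clockwise": "replay",
        "point_forward": "forward",
        "point_down": "pause",
    }

    def translate_gesture(gesture_label):
        """Translate a recognized gesture label into a control command."""
        return GESTURE_COMMANDS.get(gesture_label)

    def handle_gesture(gesture_label, send_command):
        """Translate a gesture and, if recognized, send the command."""
        command = translate_gesture(gesture_label)
        if command is not None:
            send_command(command)  # e.g., transmit to the audio source

    # Example: print the command instead of transmitting it.
    handle_gesture("swirl_clockwise", send_command=print)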
[0093] In some implementations, the control system 100 may
facilitate social sharing between users. For example, a user may
issue a command so that the control system 100 may automatically
post the currently playing song to social media, e.g., Tweeting
"John Smith is listening to #Scientist #Coldplay," a Facebook
message "John Smith likes Scientist, Coldplay," and/or the
like.
[0094] In some implementations, a user may make a gesture to share
audio content to another user using another control system 100. For
example, the user may make an "S"-shaped gesture on a touch pad of
the control system 100 of a headphone 800, which may indicate
"sharing" with another control system 100 user in a detectable
range (e.g., Bluetooth, etc.). In another implementation, the
control system 100 may communicate via a Near Field Communication
(NFC) handshake. The second control system 100 may receive the
sharing message and adjust its audio source to the Internet radio
station the first control system 100 user is listening to, so that the two
users may be able to listen to the same audio content. In some
implementations, the sharing may be conducted among two or more
control system 100 users. In some implementations, the control
system 100 may share the radio frequency from one user to another,
so that they can be tuned to the same radio channel.
[0095] In some implementations, the control system 100 may allow a
user to configure user-preferred "shortcut keys" for a command. For
example, in one implementation, the control system 100 may be
connected to a second device (e.g., other than a headphone), such
as a computer, a smart phone, and/or the like, which may provide a
user interface for a user to set up shortcut-key movements. For
example, the user may select a one-finger double-tap as sharing the
currently playing song to a social media platform (e.g., Twitter,
Facebook, etc.) as a "like" event, and a two-finger double-tap as
sharing the currently playing song to social media by posting a link
to the song, and/or the like.
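A minimal sketch of how such user-defined shortcut mappings might be
stored and reloaded, assuming a simple JSON file written by the
configuration interface; the gesture identifiers and action names
below are illustrative, not part of this disclosure.

    import json

    # Illustrative defaults: gesture identifier -> sharing action.
    DEFAULT_SHORTCUTS = {
        "one_finger_double_tap": "share_like",
        "two_finger_double_tap": "share_link",
    }

    def save_shortcuts(shortcuts, path="shortcuts.json"):
        """Persist the user's shortcut configuration."""
        with open(path, "w") as f:
            json.dump(shortcuts, f, indent=2)

    def load_shortcuts(path="shortcuts.json"):
        """Load the configuration, falling back to the defaults."""
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:
            return dict(DEFAULT_SHORTCUTS)

    save_shortcuts(DEFAULT_SHORTCUTS)
    print(load_shortcuts()["one_finger_double_tap"])  # share_like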
[0096] In some implementations, the control system 100 may include
a headphone with aesthetic designs. For example, the earpad portion
may have a transparent design or a colored exterior skin that may
feature sponsor information and/or branding logos. In some
implementations, the control system 100 may include a headphone
with a touch pad or a touch screen that may show social sharing
information (e.g., Tweets, Facebook messages, etc.). In some
implementations, the control system 100 may include a headphone
with a removable headband portion to feature user customized
graphics. The user may remove the headband portion from the
headphone for cleaning purposes. In some implementations, the
control system 100 may include a headphone that may be adaptable to
helmets.
[0097] In some implementations, the control system 100 may be
engaged in a "DJ display" mode, wherein a digital screen at the
headphone may display color visualizations including variating
color bars that illustrates the frequency of the audio content
being played.
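One way such a visualization could be derived, sketched here with
NumPy: take the magnitude spectrum of a short audio frame and
collapse it into a handful of bar heights. The frame size, sample
rate, and bar count are arbitrary choices for the sketch; a real
implementation would repeat this per frame as audio plays.

    import numpy as np

    def frequency_bars(frame, num_bars=8):
        """Collapse one audio frame into coarse spectral bar heights."""
        spectrum = np.abs(np.fft.rfft(frame))       # magnitude spectrum
        bands = np.array_split(spectrum, num_bars)  # group bins into bars
        return [float(band.mean()) for band in bands]

    # Example: a 440 Hz tone sampled at 8 kHz concentrates in a low bar.
    t = np.arange(1024) / 8000.0
    frame = np.sin(2 * np.pi * 440 * t)
    print(["%.1f" % height for height in frequency_bars(frame)])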
[0098] In some implementations, the control system 100 may provide
APIs to allow third-party services. For example, the control system
100 may include a microphone so that a user may speak over a phone
call. In some implementations, a user may instantiate a mobile
component at the audio source (e.g., a computer, a smart phone,
etc.). When the audio source detects an incoming audio
communication request (e.g., a Skype call, a phone call, and/or the
like), the control system 100 may automatically turn down the
volume of the media player, and a user may make a gesture to answer
the incoming audio communication request, e.g., by tapping on
the touch pad of the headphone as the user may have configured
one-tap as the shortcut key, etc.
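The call-handling behavior described above might look roughly like
the following event handler; the player object, its volume
attribute, and the answer_call hook are assumptions made for the
sake of the sketch.

    class CallHandler:
        """Duck media volume on an incoming call; answer on a shortcut gesture."""

        def __init__(self, player, answer_call, duck_level=0.2):
            self.player = player            # assumed to expose a .volume attribute
            self.answer_call = answer_call  # platform call-answer hook, stubbed
            self.duck_level = duck_level
            self.saved_volume = None

        def on_incoming_call(self):
            """Automatically turn down the media player's volume."""
            self.saved_volume = self.player.volume
            self.player.volume = self.duck_level

        def on_gesture(self, gesture):
            """Answer if the user performs their configured shortcut (one tap)."""
            if gesture == "one_tap":
                self.answer_call()

        def on_call_ended(self):
            """Restore the pre-call volume."""
            if self.saved_volume is not None:
                self.player.volume = self.saved_volume

    class FakePlayer:
        volume = 1.0

    handler = CallHandler(FakePlayer(), answer_call=lambda: print("answered"))
    handler.on_incoming_call()
    handler.on_gesture("one_tap")
    handler.on_call_ended()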
[0099] In some implementations, the control system 100 may allow a
user to sing and record the user's own singing. In one
implementation, the Motion-HP may instantiate a "Karaoke" mode so
that the control system 100 may remix the background soundtrack of
the song being played with the user's recorded singing to make a
cover version of the song. In one implementation,
the user may make a gesture on the touch pad of the control system
100 to share the "cover" version to social media.
[0100] In some implementations, the control system 100 may provide
audio recognition (e.g., a "Shazam"-like component, etc.). In some
implementations, when a user is listening to a radio channel
without digital identification of the audio content, the control
system 100 may identify the audio content via an audio recognition
procedure.
[0101] In some implementations, the control system 100 may
broadcast audio content it receives from an audio source to other
control systems 100 via Bluetooth, NFC, etc. For example, a user
may connect his/her control system 100 to a computer to listen to
media content, and broadcast the content to other control systems
100 so that other users may hear the same media content via
broadcasting without directly connecting to an audio source.
[0102] In some implementations, the control system 100 may include
accelerometers to sense the body movement of the user to facilitate
game control in a game play environment. In one implementation, the
control system 100 may be engaged as a remote game control via
Bluetooth, NFC, WiFi, and/or the like, and a user may move his head
to create motions which indicate game control commands.
[0103] In some implementations, the control system 100 may
automatically send real-time audio listening status of a user to
his subscribed followers, e.g., the fan base, etc.
[0104] In some implementations, the control system 100 may be
accompanied by a wrist band, which may detect a user's pulse to
determine the user's emotional status, so that the control system
100 may automatically select music for the user. For example, when
a heavy pulse is sensed, the control system 100 may select soft and
soothing music for the user.
[0105] In some implementations, the control system 100 may comprise
a flash memory to store the user's social media feeds, the user's
configuration of audio settings, user-defined shortcut keys, and/or
the like. For example, when the user connects a control system 100
to a different audio source, the user does not need to re-configure
the parameters of control system 100.
[0106] In some implementations, the control system 100 may allow a
user to add third party music services, such as but not limited to
iTunes, Pandora, Rhapsody, and/or the like, to the control system
100. In further implementations, the user may configure shortcut
keys for selection of music services, control the playlist, and/or
the like.
[0107] In some implementations, the control system 100 may provide
registration services in order to grant access to full usage of the
control system 100. For example, a user may access a registration
platform via a computer, etc. An unregistered user may be allowed to
access limited features of the control system 100, e.g., play music,
etc., but may not be able to access additional features such as "DJ
mode," "Karaoke mode," and/or the like.
[0108] Further implementations of the control system 100 include
analytics for targeting advertisements, revenue sharing between
advertising channels and sponsors, music selection and
recommendation to a user, and/or the like.
[0109] Some implementations of the present technology may be fully
integrated into a headphone apparatus, and used in conjunction with
a media player device to provide a user with audio playback of
media content, and to allow the user to interact with social media
sites, email providers, supplementary content providers, and ad
providers based on the media content being played. FIG. 10
illustrates an exemplary embodiment of the apparatus 1000. The
headphones 1000 are operably connected to a media player 1002
through a connection 1004, for instance a hardwired connection or a
wireless connection, such as Bluetooth or Wi-Fi. The media player
1002 communicates with a network gateway 1006 through wireless
network connection 1008, such as through a cellular connection or
Wi-Fi connection. The network gateway 1006 provides network
connectivity to the Internet 1010 through a network connection
1014, facilitating access to various content and service providers
1012a-d connected to the Internet 1010 through network connections
1016. Content and service providers 1012a-d may include email
servers, social media sites, ad servers, and content servers.
[0110] Other implementations are contemplated. For example, the
media player 1002 may be one of many types of mobile devices, such
as a cellular telephone, a tablet, a computer, a pager, a gaming
device, or a media player. In some implementations, the wireless
network connection 1008 may be one of many types of communications
networks through which data can be transferred, such as a Wi-Fi
network, a cellular telephone network, a satellite communications
network, a Bluetooth network, or an infrared network. In some
implementations, the content and service providers 1012a-d may also
include search engines, digital content merchant sites, instant
messaging providers, SMS message providers, VOIP providers, fax
providers, content review sites, and online user forums.
[0111] FIG. 11A illustrates an example embodiment of the headphones
1000. The headphones include a first earpiece assembly 1102, a
second earpiece assembly 1104, and a headband assembly 1106 that
securely positions the earpieces 1102 and 1104 over the ears of a
user. Each earpiece assembly 1102 and 1104 includes one or more
externally accessible touch sensor arrays 1108 and 1110 for user
interaction.
[0112] FIG. 11B illustrates the components of the first earpiece
assembly 1102. Mounted on the Main PCB 1112 are a microcontroller
1114, a baseband digital signal processor (DSP) 1116, a Kalimba DSP
1118, an audio/video codec 1120, random access memory (RAM) 1122,
and non-volatile "flash" memory 1124. Also connected to the Main
PCB 1112 are a USB connector 1126, a wired connector 1128, light
emitting diode (LED) indicators 1130, a power switch 1132, an audio
driver 1134, and touch sensor array 1108. The first earpiece
assembly 1102 is connected to the second earpiece assembly 1104
through a wired connection 1136 passing through the headband
assembly 1106.
[0113] FIG. 11C illustrates the components of the second earpiece
assembly 1104. The Slave PCB 1138 is connected to the Main PCB 1112
of the first earpiece assembly 1102 through a hardwire connection
1136. Also connected to the Slave PCB 1138 are a battery 1142,
microphone array 1144, near-field communication (NFC) module 1146,
an audio driver 1148, and a touch sensor array 1110.
[0114] The Main PCB 1112 and Slave PCB 1138 provide connectivity
between the various components of the earpiece assemblies. The
microcontroller 1114 accepts inputs from the touch sensor arrays
1108 and 1110, USB connector 1126, and wired connector 1128, and if
necessary, translates the inputs into machine-compatible commands.
Commands and other data are transmitted between the microcontroller
and the connected components. For example, audio from the
microphone array 1144 and the wired connector 1128 is digitally
encoded by the codec 1120 and processed by the baseband DSP 1116
and Kalimba DSP 1118, where it may be modified and mixed with other
audio information. Mixed audio is decoded by the codec 1120 into an
analog representation and is output to the audio drivers 1134 and
1148 for playback. LEDs 1130 are connected to the microcontroller
1114 and may be illuminated or flashed to indicate the operational
status of the headphone apparatus. Power is supplied by the battery
1142 connected to the microcontroller 1114, and power may be
toggled by using a power switch 1132. Additional components, such
as wireless transceivers 1150, may be connected to and controlled
by the microcontroller 1114. The microcontroller 1114 may transmit
data to an externally connected computing device, such as a smart
phone or media player, via the wireless transceivers 1150, the USB
connector 1126, or the wired connector 1128. The data may include
data used to identify the specific model, features, and unique
identifying information of the headphones.
[0115] Other implementations are contemplated. For example, one or
more of the touch sensor arrays 1108 and 1110 may instead be
physical buttons, switches, or dials. Additional connectors may be
provided on the first or second earpiece assemblies 1102 and 1104,
including an audio output port, an optical port, a FireWire port, an
Ethernet port, a SATA port, a power input port, a Lightning port,
or a serial port. Power, digital data, or analog data may be input
into the apparatus or output from the apparatus using these ports.
In some implementations, the headphone apparatus 1000 may also
include a video display unit, such that visual content may be
displayed on the device. The video display unit may be an LCD
display, or may be a heads-up display (HUD) that overlays visual
data over a transparent or translucent viewing element. In some
embodiments, one or more of the components stored in each of the
earpiece assemblies 1102 and 1104 may be relocated to the other
earpiece assembly or to an external housing unit. The housing unit
may be positioned on the headband 1106, on one of the wired
connections (i.e. connections 1128 and 1136), or elsewhere on the
headphone apparatus 1000. In some implementations, the headphone
1000 may have a GPS device that can be used to determine locational
data. In some implementations, the battery 1142 is removable.
[0116] The user may use the touch sensor arrays 1108 and 1110 to
input commands into the headphone apparatus 1000. For example, each
of the individual input surfaces of sensor arrays 1108 and 1110
may be programmed to correspond to specific functions, such as
play, stop, rewind, fast forward, pause, repeat, skip, volume
increase, or volume decrease. Additional commands may include a
command to wirelessly "pair" the headphone 1000 to another wireless
device, a command to create a post on a social networking site, a
command to draft an email, or a command to search for additional
information regarding the media content currently being played. The
touch sensor arrays 1108 and 1110 may be of a PCB, Flex-PCB, or ITO
film based design.
[0117] Additional commands may be programmed depending on the
length of time the button or touch sensor is activated. For
example, a brief touch may correspond to a command to fast forward,
while a longer touch may correspond to a command to skip forward to
the next track. Additional commands may be programmed depending on
a sequence of multiple inputs. For example, pressing the touch
array 1108 or 1110 twice may correspond to a command to create a
post on a social media site, while pressing the touch array 1108 or
1110 three times may correspond to a command to draft an email. In
addition, touching the sensor array 1108 or 1110 in a specific
order and within a certain timeframe, such as to simulate a gesture,
can correspond to a command. For example, touching the bottom,
middle, and top sensors of array 1108 in sequence in a single
sliding motion may correspond to a command to increase the volume. Touching
the top, middle, and bottom sensors of array 1108 in sequence in a
single sliding motion may correspond to a command to decrease the
volume. Other such "gestures" can be recognized as user commands,
including a sliding left-to-right motion, a sliding right-to-left
motion, a clockwise circular motion, or a counter-clockwise
circular motion.
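A rough sketch of how press duration, press count, and slide
direction might be resolved into the commands described above; the
event format (sensor index plus press duration) and the thresholds
are invented for illustration.

    # Each touch event is a (sensor_index, press_duration_seconds) pair;
    # sensors on one array are numbered bottom (0) to top (2), matching
    # the volume example above.

    def classify_touches(events, hold_threshold=0.5):
        """Map a short burst of touch events to a command name."""
        if not events:
            return None
        indices = [index for index, _ in events]
        if indices == [0, 1, 2]:
            return "volume_up"        # bottom-to-top sliding motion
        if indices == [2, 1, 0]:
            return "volume_down"      # top-to-bottom sliding motion
        if len(events) == 1:
            _, duration = events[0]
            return "skip_track" if duration >= hold_threshold else "fast_forward"
        if len(set(indices)) == 1 and len(events) == 2:
            return "post_social"      # two presses: post to social media
        if len(set(indices)) == 1 and len(events) == 3:
            return "draft_email"      # three presses: draft an email
        return None

    print(classify_touches([(0, 0.1), (1, 0.1), (2, 0.1)]))  # volume_up
    print(classify_touches([(1, 0.8)]))                      # skip_track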
[0118] In some implementations, the headphone 1000 may be "paired"
with another device through a wireless connection, such that the
headphone will only communicate with the paired device. Example
wireless connections may include Bluetooth, enabled through an
appropriately provided Bluetooth transceiver. Near-field
communication (NFC) tags, for instance a tag on NFC module 1146,
may be used to simplify the "pairing" process. For example, the NFC
tag may be pre-programmed from the factory with the unique
Bluetooth ID information of the Bluetooth transceiver. A device
capable of reading NFC tags can be passed over the NFC tag in order
to access the Bluetooth ID information. This information can be
used to uniquely identify the Bluetooth transceiver contained
within the headphone assembly and to establish the "paired"
connection without requiring additional manual entry of the
Bluetooth ID by a user. The NFC tag may also contain other
information used to identify the specific model, features, and
unique identifying information of the headphones 1000.
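To make the pairing shortcut concrete, a speculative sketch follows:
the NFC tag payload is assumed to carry the transceiver's Bluetooth
address as delimited text, which the reading device hands to
whatever pairing API its platform provides (stubbed here as a
callback); the payload format is an assumption, not part of this
disclosure.

    def parse_nfc_payload(payload):
        """Extract a Bluetooth address from a hypothetical NFC tag payload."""
        text = payload.decode("utf-8")
        # Assumed payload format: "BTADDR=00:11:22:33:44:55;MODEL=..."
        fields = dict(item.split("=", 1) for item in text.split(";") if "=" in item)
        return fields["BTADDR"]

    def pair_via_nfc(payload, pair):
        """Pair without manual ID entry: read the address, then pair."""
        address = parse_nfc_payload(payload)
        pair(address)  # platform-specific Bluetooth pairing, stubbed here

    pair_via_nfc(b"BTADDR=00:11:22:33:44:55;MODEL=EXAMPLE", pair=print)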
[0119] FIG. 12 illustrates exemplary tasks that may be performed by
various implementations of the present technology. A media player
(for instance, media player 1002) loads a playlist of media content
to be played (block 1202), plays the media content, and recognizes
contextual information about the media contents of the playlist
(block 1204). Examples of contextual information may include the
name of the track. The media player may also determine the location
of the user using a built-in GPS sensor, or using a GPS sensor
located on the headphone assembly (block 1205).
[0120] Using the contextual and location information, the apparatus
may deliver supplemental content to the user. The media player
sends a request to content servers (for instance, content servers
1012a-d) for supplemental content based on the contextual and
location information acquired (block 1206). Supplemental content
may include information such as biographical information about the
artist, album art or other visual data about the artist, social
media messages written by or written about the artist, a list of
past and upcoming tour dates by the artist, "remixed" or
alternative tracks, a listing of related merchandise, or a list of
"similar" artists and tracks. The media player receives the
supplemental content from the content servers (block 1208),
aggregates the summary information into display templates (block
1210), and displays the aggregated information to the user (block 1212).
An example display template 1300 with aggregated information 1302
is illustrated in FIG. 13. A user may interact with the aggregated
data by selecting an item 1306 that he wishes to learn more about.
The media player will direct the user to an external site where more
detailed information is displayed about the selected item, or to an
Internet-based marketplace where merchandise related to the
selected item is offered for sale (block 1214).
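The request of block 1206 might be assembled along the following
lines; the endpoint URL and parameter names are placeholders, not
part of this disclosure.

    from urllib.parse import urlencode

    def build_supplemental_request(track, artist, lat, lon,
                                   endpoint="https://example.com/supplemental"):
        """Build a content-server query from contextual and location data."""
        params = {
            "track": track,
            "artist": artist,
            "lat": "%.4f" % lat,
            "lon": "%.4f" % lon,
        }
        return endpoint + "?" + urlencode(params)

    print(build_supplemental_request("Example Track", "Example Artist",
                                     25.7907, -80.1300))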
[0121] The apparatus may also deliver ad content based on the
contextual information and location information collected. The
media player sends a request to ad servers for ad content based on
the contextual and location information acquired (block 1220). Ad
content may include static images, videos, text, audio recordings,
or other forms of media (block 1222). The media player receives the
ad content from the ad servers, inserts the ads into display
templates (block 1224), and displays the ads to the user (block
1226). An example display template 1300 with ads 1308 is
illustrated in FIG. 13. The user may interact with the ads by
selecting an ad that he wishes to learn more about. The media player
will direct the user to an external site where more detailed information
is displayed about the selected ad, or to an Internet-based
marketplace where merchandise related to the selected ad is offered
for sale (block 1228).
[0122] The apparatus may also allow the user to share media or
other content with one or more users. The media player receives a
command from the user to share content with a local second user
(block 1240). The command may be a voice command or an input from
the touch sensor array. The media player searches for and connects
to the local second user's device over a wireless connection (block
1242). The wireless connection can be established over any of
several common wireless networks, including Wi-Fi, Bluetooth, or infrared. After
establishing a connection, the media player transmits the content
to the second user's device over the wireless connection (block
1244).
[0123] In some implementations, the user may instead share media or
other content with one or more users over an Internet connection.
In these implementations, the media player may access the Internet
and search for a second user or for a content sharing site through
the Internet connection. Access to the Internet may be over any of
several common wireless networks including Wi-Fi, Bluetooth,
infrared, a cellular network, or a satellite network. The media
player connects to the second user's device or the content sharing
site over the Internet connection, and transmits the content to the
second user's device or content sharing site. The media player may
also draft and send a message to one or more users, notifying the
one or more users of the newly shared content and providing the
location from which it can be retrieved.
[0124] The apparatus may also allow the user to interact with
various social media sites based upon the contextual data and
locational data acquired. In these implementations, the media
player receives a command from the user to interact with a social
media site (block 1260). The media player generates a message or an
action based upon the contextual and location information (block
1262). Examples of messages may include "[User Name] is listening
to [Track Name] by [Artist Name] at [Location]", "[User Name] is
playing [Album Name] on the way to [Location]," or any similar
message identifying contextual and location information in a social
media compatible format. Messages and actions may be transmitted to
social media sites using established application programming
interfaces (APIs) to ensure compatibility (block 1264).
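The message-generation step of block 1262 reduces to filling a
template, as in this trivial sketch using the placeholder fields
from the examples above; the sample values are illustrative.

    MESSAGE_TEMPLATE = "{user} is listening to {track} by {artist} at {location}"

    def generate_message(user, track, artist, location):
        """Fill contextual and location information into a message template."""
        return MESSAGE_TEMPLATE.format(user=user, track=track,
                                       artist=artist, location=location)

    print(generate_message("John Smith", "Example Track",
                           "Example Artist", "Miami Beach"))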
[0125] In some implementations, the message may also be modified by
the user to allow for personalization. The message may also include
photographs, videos, audio, or any other related content, either
generated by the user or retrieved from content servers or ad
servers. Examples of actions may include "liking" an artist or
track and subscribing to an artist's social media page. Example
social media sites may include Facebook, Twitter, Google+,
Instagram, or any other such site. In some embodiments, the
apparatus may also send messages or perform other such actions over
other networking sites or services, such as email, instant
messaging providers, SMS message providers, VOIP providers, fax
providers, content review sites, and online user forums.
[0126] Referring to FIG. 14, in some implementations, the apparatus
may operate in "karaoke mode," such that it records the user's
voice and mixes it with a background audio sound track. The
apparatus enters "karaoke mode" after receiving an appropriate
command from the user via voice command or touch sensor input
(block 1402). Audio content from a playlist is played on one
audio channel (i.e. audio driver 1134), while audio is recorded
from the microphone (i.e. microphone array 1144) and played over
the other audio channel (i.e. audio driver 1148) (block 1404).
Audio from the microphone is mixed with the audio track and saved
locally, for example on the flash memory 1124 or RAM 1122 (block
1406). The mixed audio track may be uploaded to a content sharing
site or a social media site via an appropriate Internet connection
(block 1410). The mixed audio track may be shared using mechanisms
described above, such as through the use of a generated message on
a social media site, a generated email message, or message through
any other such communications network (block 1410). This generated
message is then transmitted to the recipient, for example
transmitted to the social media site using an appropriate API or to
an email server for transmission to the recipient (block 1412). The
mixed audio track may also be retained on local storage for future
playback.
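The mixing step of block 1406 could be approximated as below with
NumPy, assuming the backing track and the microphone recording are
already aligned sample arrays in the range -1.0 to 1.0; the even
gain split is an arbitrary choice for the sketch.

    import numpy as np

    def mix_karaoke(backing, vocals, vocal_gain=0.5):
        """Mix a recorded vocal take with the backing track, sample by sample."""
        n = min(len(backing), len(vocals))        # truncate to a common length
        mix = (1.0 - vocal_gain) * backing[:n] + vocal_gain * vocals[:n]
        return np.clip(mix, -1.0, 1.0)            # keep the mix within full scale

    backing = np.sin(np.linspace(0, 200, 8000))   # stand-in audio buffers
    vocals = np.random.uniform(-0.3, 0.3, 8000)
    print(mix_karaoke(backing, vocals).shape)     # (8000,)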
[0127] In some embodiments, "karaoke mode" may instead identify the
selected audio track using contextual information and access a
vocal-free version of the audio track from an appropriate content
server. The vocal-free version of the audio track may be used in
place of the vocalized version, resulting in a "karaoke" mix that
better accentuates the user's own voice without interference from
the original vocalizations. The vocal-free version of the audio
track may also be mixed with the vocalized version, such that a
reduced portion of the original vocalizations remain in the final
mix. In some embodiments, accessing the vocal-free versions may
also include connecting to an Internet-connected marketplace, such
that vocal-free versions may be purchased, downloaded, stored, and
used for "karaoke" mode using the apparatus.
[0128] In some embodiments, the features of the media player 1002
may be limited or enabled based upon the connected headphone 1000.
Identifying information from the headphone 1000 may be transferred
from the headphone 1000 to the media player 1002 via the wired
connector 1128 or via a wireless connection, such as through a
Bluetooth network, Wi-Fi network, NFC, or other such
communication connections. Identifying information is validated
against a list of authorized devices, and features of the media
player may be disabled or enabled as desired. For example, a user
may plug in a headphone 1000 as described above. Information
identifying the headphone is transmitted to the media player 1002
and is validated against a recognized list of compatible devices,
and all features of the media player are enabled as a result. The
user may alternatively plug in a headphone that is not recognized
by the media player 1002. Certain features, for example "karaoke
mode," may be disabled on the media player as a result
[0129] Various implementations of the present invention allow for
the control of, interaction with, and creation of content via a
remote device, such as an audio headphone, to a base station such
as a mobile device, mp3 player, cell phone, mobile phone, smart
phone, tablet computer, e-book reader, laptop computer, smart
television, smart video screen, networked video players, game
networks, and the like. For example, embodiments of the
present invention allow for the programming of short-cut commands,
such as hand gestures received at or near the headphones, to
initiate a command on a software application running on a smart
phone, such as posting a "like" on a social network relating to a
song played on the headphone.
[0130] Previous attempts to control content or content players via
remote devices such as headphones and remote controls have allowed
user manipulation of the audio visual content as experienced by the
user (e.g., adjusting volume, pausing, rewinding, etc.).
Implementations of the present invention allow for the user to
create additional content from the remote device for distribution
over a network, such as comments relating to content, accessing
promotional offers, product registration, participation in live
promotions, etc. Such layered content creation has previously been
done through user input at the base device, such as typing into a
smart phone to indicate a favorable response or opinion for a song.
With various implementations of the present invention, a user can
program the base device, such as a smart phone, to recognize simple
inputs made at the remote device and associate those inputs with a
specific command to be executed in programs or applications running
on the device or accessible by the device.
[0131] By way of example, and without limitation, a user can
download a program onto a smartphone that recognizes input made via
an input pad on a headphone. The input, such as a circle made by
the finger on the input pad (or touch sensor array) can be
associated with a command on an mp3 player application. The circle
motion can be associated with a command to pull all songs of a
related genre from a sponsor's play list.
[0132] In a broad implementation of the present invention, and with
reference to FIG. 15, a method of remote access to a hosted
application comprises the steps of creating an associated command
(block 1500) (e.g., abbreviated inputs at a remote device
associated with the execution of a function or step in a hosted
application) and receiving a remote command for execution. More
specifically, a method of remote access to a hosted application
comprises the steps of: recording a user input from a sensor on a
remote device (block 1502); associating the recorded user input
with a specific command (block 1504); storing the command-input
association (block 1506); receiving a user input on a sensor on a
remote device (block 1508); transmitting the user input from the
remote device to a base device (block 1510); receiving at the base
device the user input transmitted from the remote device (block
1512); comparing the input with the previously recorded inputs for
association with a command specific to an application running on or
accessible by the base device (block 1514); matching the user input
to the desired command (block 1516); and executing the command
(block 1518). In some embodiments the execution of the command
(block 1518) may initiate certain cloud functionality (block 1520)
to allow user interaction with content available over a network,
such as the Internet, a web page, a blogosphere, a blog spot, a
social network, a shared media network, a closed or private
network, and the like.
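The record, associate, store, match, and execute steps of FIG. 15
could be skeletonized as follows; encoding each recorded input as a
plain string, and the executor callback, are simplifications made
for the sketch.

    class CommandAssociations:
        """Store command-input associations and match incoming inputs (FIG. 15)."""

        def __init__(self):
            self.associations = {}  # recorded input -> command name

        def record(self, recorded_input, command):
            """Blocks 1502-1506: record an input and store its association."""
            self.associations[recorded_input] = command

        def handle(self, user_input, execute):
            """Blocks 1508-1518: match a received input and execute its command."""
            command = self.associations.get(user_input)
            if command is not None:
                execute(command)

    # Example mirroring the circle gesture described above.
    table = CommandAssociations()
    table.record("circle", "pull_sponsor_genre_playlist")
    table.handle("circle", execute=print)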
[0133] Various implementations of the invention utilize human vital
and biological data collected via the external device, such as
interactive headphones, to choose music according to mood and/or
activity level. For example, when a user is working out in the gym,
more up-beat music is played while the user is running, and more
relaxing music is played as the user begins to walk, cool off, and
wind down an activity session. This includes a relational database
of music, artists, and songs with mood classifications (pumped-up,
calm/relax, etc.). The association of content with activity can be made with
simple commands entered via the touch pad on the interactive
headphones, or the device can include an accelerometer to detect
activity levels. The application running on the base device can
include GPS or other location determining software as well as logic
to correlate location with calendar entries or other data to
determine or confirm activity.
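A simplistic sketch of such activity-driven selection: bucket an
accelerometer magnitude into an activity level and pick from a
mood-classified catalog. The threshold and the catalog entries are
illustrative only.

    # Illustrative mood-classified catalog, per the relational database above.
    CATALOG = {
        "pumped_up": ["Track A", "Track B"],
        "calm_relax": ["Track C", "Track D"],
    }

    def activity_level(accel_magnitude_g):
        """Crude activity estimate from accelerometer magnitude (in g)."""
        return "running" if accel_magnitude_g > 1.5 else "walking"

    def pick_track(accel_magnitude_g):
        """Choose a mood bucket from the activity level, then a track."""
        mood = "pumped_up" if activity_level(accel_magnitude_g) == "running" \
            else "calm_relax"
        return CATALOG[mood][0]

    print(pick_track(2.0))  # up-beat music while running
    print(pick_track(1.0))  # relaxing music while walking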
[0134] In other examples, the software application of some
implementations of the device can recognize, via an indication from
the headphones, when the headphones have been removed. In a
particular commercial implementation, a music aggregator such as
Pandora would be able to determine when music is played and when it is
paused based on whether the interactive headphones are over the
ears or not, thereby avoiding unnecessary licensing fees for the
music.
[0135] In another example, a user can interact with content, such
as just-in-time promotions, targeted marketing, geo-based
marketing, and the like, by associating simple commands with
registration of the user for participation in a promotional offer,
opt-in or opt-out of promotional offers or materials, voting,
association, and the like.
[0136] Implementations of the invention are not limited to
headphones, but can be incorporated into dongles, or other external
input devices. The methods of creating layered content and
interacting with programs and content hosted on a base device via
commands entered into a remote device can be implemented in video
devices or headphones/video combinations.
[0137] The operations described in this specification can be
implemented as operations performed by a data processing apparatus
on data stored on one or more computer-readable storage devices or
received from other sources.
[0138] The term "data processing apparatus" encompasses all kinds
of apparatus, devices, and machines for processing data, including
by way of example a programmable processor, a computer, a system on
a chip, or multiple ones, or combinations, of the foregoing. The
apparatus can include special purpose logic circuitry, e.g., an
FPGA (field programmable gate array) or an ASIC
(application-specific integrated circuit). The apparatus can also
include, in addition to hardware, code that creates an execution
environment for the computer program in question, e.g., code that
constitutes processor firmware, a protocol stack, a database
management system, an operating system, a cross-platform runtime
environment, a virtual machine, or a combination of one or more of
them. The apparatus and execution environment can realize various
different computing model infrastructures, such as web services,
distributed computing and grid computing infrastructures.
[0139] A computer program (also known as a program, software,
software application, script, or code) can be written in any form
of programming language, including compiled or interpreted
languages, declarative or procedural languages, and it can be
deployed in any form, including as a stand-alone program or as a
module, component, subroutine, object, or other unit suitable for
use in a computing environment. A computer program may, but need
not, correspond to a file in a file system. A program can be stored
in a portion of a file that holds other programs or data (e.g., one
or more scripts stored in a markup language document), in a single
file dedicated to the program in question, or in multiple
coordinated files (e.g., files that store one or more modules,
sub-programs, or portions of code). A computer program can be
deployed to be executed on one computer or on multiple computers
that are located at one site or distributed across multiple sites
and interconnected by a communication network.
[0140] The processes and logic flows described in this
specification can be performed by one or more programmable
processors executing one or more computer programs to perform
actions by operating on input data and generating output. The
processes and logic flows can also be performed by, and apparatus
can also be implemented as, special purpose logic circuitry, e.g.,
an FPGA (field programmable gate array) or an ASIC
(application-specific integrated circuit).
[0141] Processors suitable for the execution of a computer program
include, by way of example, both general and special purpose
microprocessors, and any one or more processors of any kind of
digital computer. Generally, a processor will receive instructions
and data from a read-only memory or a random access memory or both.
The essential elements of a computer are a processor for performing
actions in accordance with instructions and one or more memory
devices for storing instructions and data. Generally, a computer
will also include, or be operatively coupled to receive data from
or transfer data to, or both, one or more mass storage devices for
storing data, e.g., magnetic, magneto-optical disks, or optical
disks. However, a computer need not have such devices. Moreover, a
computer can be embedded in another device, e.g., a mobile
telephone, a personal digital assistant (PDA), a mobile audio or
video player, a game console, a Global Positioning System (GPS)
receiver, or a portable storage device (e.g., a universal serial
bus (USB) flash drive), to name just a few. Devices suitable for
storing computer program instructions and data include all forms of
non-volatile memory, media and memory devices, including by way of
example semiconductor memory devices, e.g., EPROM, EEPROM, and
flash memory devices; magnetic disks, e.g., internal hard disks or
removable disks; magneto-optical disks; and CD-ROM and DVD-ROM
disks. The processor and the memory can be supplemented by, or
incorporated in, special purpose logic circuitry.
[0142] To provide for interaction with a user, embodiments of the
subject matter described in this specification can be implemented
on a computer having a display device, e.g., a CRT (cathode ray
tube) or LCD (liquid crystal display) monitor, for displaying
information to the user and a keyboard and a pointing device, e.g.,
a mouse or a trackball, by which the user can provide input to the
computer. Other kinds of devices can be used to provide for
interaction with a user as well; for example, feedback provided to
the user can be any form of sensory feedback, e.g., visual
feedback, auditory feedback, or tactile feedback; and input from
the user can be received in any form, including acoustic, speech,
or tactile input. In addition, a computer can interact with a user
by sending documents to and receiving documents from a device that
is used by the user; for example, by sending web pages to a web
browser on a user's client device in response to requests received
from the web browser.
[0143] Embodiments of the subject matter described in this
specification can be implemented in a computing system that
includes a back-end component, e.g., as a data server, or that
includes a middleware component, e.g., an application server, or
that includes a front-end component, e.g., a client computer having
a graphical user interface or a Web browser through which a user
can interact with an implementation of the subject matter described
in this specification, or any combination of one or more such
back-end, middleware, or front-end components. The components of
the system can be interconnected by any form or medium of digital
data communication, e.g., a communication network. Examples of
communication networks include a local area network ("LAN") and a
wide area network ("WAN"), an inter-network (e.g., the Internet),
and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
[0144] The computing system can include clients and servers. A
client and server are generally remote from each other and
typically interact through a communication network. The
relationship of client and server arises by virtue of computer
programs running on the respective computers and having a
client-server relationship to each other. In some embodiments, a
server transmits data (e.g., an HTML page) to a client device
(e.g., for purposes of displaying data to and receiving user input
from a user interacting with the client device). Data generated at
the client device (e.g., a result of the user interaction) can be
received from the client device at the server.
[0145] While this specification contains many specific
implementation details, these should not be construed as
limitations on the scope of any inventions or of what may be
claimed, but rather as descriptions of features specific to
particular embodiments of particular inventions. Certain features
that are described in this specification in the context of separate
embodiments can also be implemented in combination in a single
embodiment. Conversely, various features that are described in the
context of a single embodiment can also be implemented in multiple
embodiments separately or in any suitable subcombination. Moreover,
although features may be described above as acting in certain
combinations and even initially claimed as such, one or more
features from a claimed combination can in some cases be excised
from the combination, and the claimed combination may be directed
to a subcombination or variation of a subcombination.
[0146] Similarly, while operations are depicted in the drawings in
a particular order, this should not be understood as requiring that
such operations be performed in the particular order shown or in
sequential order, or that all illustrated operations be performed,
to achieve desirable results. In certain circumstances,
multitasking and parallel processing may be advantageous. Moreover,
the separation of various system components in the embodiments
described above should not be understood as requiring such
separation in all embodiments, and it should be understood that the
described program components and systems can generally be
integrated together in a single software product or packaged into
multiple software products.
[0147] Thus, particular embodiments of the subject matter have been
described. Other embodiments are within the scope of the following
claims. In some cases, the actions recited in the claims can be
performed in a different order and still achieve desirable results.
In addition, the processes depicted in the accompanying figures do
not necessarily require the particular order shown, or sequential
order, to achieve desirable results. In certain implementations,
multitasking and parallel processing may be advantageous.
* * * * *