U.S. patent application number 14/141194 was filed with the patent office on 2013-12-26 and published on 2015-06-18 for providing vicarious tourism sessions.
This patent application is currently assigned to Google Inc. The applicant listed for this patent is Google Inc. The invention is credited to Udi Manber.
Publication Number: 20150172607
Application Number: 14/141194
Family ID: 51534399
Publication Date: 2015-06-18

United States Patent Application 20150172607
Kind Code: A1
Manber; Udi
June 18, 2015
PROVIDING VICARIOUS TOURISM SESSIONS
Abstract
Methods, systems, and apparatus, including computer programs
encoded on a computer storage medium, for providing vicarious
tourism sessions. In one aspect, a method includes receiving a
request from a user of a user device to participate in a vicarious
tourism session; selecting candidate docents, wherein a docent is a
user who has registered to participate in vicarious tourism
sessions that are relevant to a particular geographic location;
providing data identifying the candidate docents for presentation
to the user device; receiving a user input selecting a candidate
docent; and initiating a vicarious tourism session between the
selected docent and the user, wherein initiating the vicarious
tourism session comprises providing a video feed of video captured
from a session accessory worn by the selected docent to the user
device for presentation to the user.
Inventors: Manber; Udi (Los Altos Hills, CA)
Applicant: Google Inc., Mountain View, CA, US
Assignee: Google Inc., Mountain View, CA
Family ID: 51534399
Appl. No.: 14/141194
Filed: December 26, 2013
Related U.S. Patent Documents

Application Number: 61/785,085
Filing Date: Mar 14, 2013
Current U.S. Class: 348/158
Current CPC Class: H04N 7/185 (2013.01); H04W 4/021 (2013.01); H04L 65/403 (2013.01); H04L 65/1069 (2013.01)
International Class: H04N 7/18 (2006.01)
Claims
1. A method performed by one or more computers, the method
comprising: receiving a request from a user of a user device to
participate in a vicarious tourism session; selecting candidate
docents, wherein a docent is a user who has registered to
participate in vicarious tourism sessions that are relevant to a
particular geographic location; providing data identifying the
candidate docents for presentation to the user device; receiving a
user input selecting a candidate docent; and initiating a vicarious
tourism session between the selected docent and the user, wherein
initiating the vicarious tourism session comprises providing a
video feed of video captured from a session accessory worn by the
selected docent to the user device for presentation to the
user.
2. The method of claim 1, wherein the request specifies one or more
tourism parameters, and wherein selecting candidate docents
comprises selecting as candidate docents available docents who have
registered to provide vicarious tourism sessions matching the
tourism parameters specified in the request.
3. The method of claim 2, wherein selecting candidate docents
further comprises: identifying available docents.
4. The method of claim 1, further comprising: ranking the candidate
docents based on a respective cost to the user to participate in a
vicarious tourism session with each candidate docent or on user
reviews of previous tourism sessions with each candidate
docent.
5. The method of claim 1, wherein providing data identifying the
candidate docents comprises: providing a map interface for
presentation to the user, wherein the map interface identifies
locations where candidate docents are available to participate in
interactive tourism sessions.
6. The method of claim 1, further comprising: overlaying relevant
information over the video feed for presentation to the user,
wherein the relevant information is relevant to the geographic
location of the selected docent.
7. The method of claim 1, further comprising: receiving an input
from the user; determining that the input is an instruction for the
selected docent; and translating the input into one of a
standardized set of commands that is understandable by the selected
docent.
8. The method of claim 7, wherein the standardized set of commands
includes one or more of an audio signal, a touch signal, or a
visual signal.
9. A system comprising one or more computers and one or more
storage devices storing instructions that, when executed by the one
or more computers, cause the one or more computers to perform
operations comprising: receiving a request from a user of a user
device to participate in a vicarious tourism session; selecting
candidate docents, wherein a docent is a user who has registered to
participate in vicarious tourism sessions that are relevant to a
particular geographic location; providing data identifying the
candidate docents for presentation to the user device; receiving a
user input selecting a candidate docent; and initiating a vicarious
tourism session between the selected docent and the user, wherein
initiating the vicarious tourism session comprises providing a
video feed of video captured from a session accessory worn by the
selected docent to the user device for presentation to the
user.
10. The system of claim 9, wherein the request specifies one or
more tourism parameters, and wherein selecting candidate docents
comprises selecting as candidate docents available docents who have
registered to provide vicarious tourism sessions matching the
tourism parameters specified in the request.
11. The system of claim 10, wherein selecting candidate docents
further comprises: identifying available docents.
12. The system of claim 9, the operations further comprising:
ranking the candidate docents based on a respective cost to the
user to participate in a vicarious tourism session with each
candidate docent or on user reviews of previous tourism sessions
with each candidate docent.
13. The system of claim 9, wherein providing data identifying the
candidate docents comprises: providing a map interface for
presentation to the user, wherein the map interface identifies
locations where candidate docents are available to participate in
interactive tourism sessions.
14. The system of claim 9, the operations further comprising:
overlaying relevant information over the video feed for
presentation to the user, wherein the relevant information is
relevant to the geographic location of the selected docent.
15. The system of claim 9, the operations further comprising:
receiving an input from the user; determining that the input is an
instruction for the selected docent; and translating the input into
one of a standardized set of commands that is understandable by the
selected docent.
16. The system of claim 15, wherein the standardized set of
commands includes one or more of an audio signal, a touch signal,
or a visual signal.
17. A computer storage medium encoded with instructions that, when
executed by one or more computers, cause the one or more computers
to perform operations comprising: receiving a request from a user
of a user device to participate in a vicarious tourism session;
selecting candidate docents, wherein a docent is a user who has
registered to participate in vicarious tourism sessions that are
relevant to a particular geographic location; providing data
identifying the candidate docents for presentation to the user
device; receiving a user input selecting a candidate docent; and
initiating a vicarious tourism session between the selected docent
and the user, wherein initiating the vicarious tourism session
comprises providing a video feed of video captured from a session
accessory worn by the selected docent to the user device for
presentation to the user.
18. The computer storage medium of claim 17, wherein the request
specifies one or more tourism parameters, and wherein selecting
candidate docents comprises selecting as candidate docents
available docents who have registered to provide vicarious tourism
sessions matching the tourism parameters specified in the
request.
19. The computer storage medium of claim 17, wherein providing data
identifying the candidate docents comprises: providing a map
interface for presentation to the user, wherein the map interface
identifies locations where candidate docents are available to
participate in interactive tourism sessions.
20. The computer storage medium of claim 17, the operations further
comprising: receiving an input from the user; determining that the
input is an instruction for the selected docent; and translating
the input into one of a standardized set of commands that is
understandable by the selected docent.
Description
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to U.S. Provisional
Application No. 61/785,085, filed on Mar. 14, 2013. The disclosure
of the prior application is considered part of and is incorporated
by reference in the disclosure of this application.
BACKGROUND
[0002] This specification relates to interactive environments that
connect network-enabled communication devices. Various types of
devices, e.g., desktop computers and mobile phones, can communicate
with one another using various data communication networks, e.g.,
the Internet.
SUMMARY
[0003] This specification describes technologies relating to
providing vicarious tourism sessions to users.
[0004] In general, one innovative aspect of the subject matter
described in this specification can be embodied in methods that
include the actions of receiving a request from a user of a user
device to participate in a vicarious tourism session; selecting
candidate docents, wherein a docent is a user who has registered to
participate in vicarious tourism sessions that are relevant to a
particular geographic location; providing data identifying the
candidate docents for presentation to the user device; receiving a
user input selecting a candidate docent; and initiating a vicarious
tourism session between the selected docent and the user, wherein
initiating the vicarious tourism session comprises providing a
video feed of video captured from a session accessory worn by the
selected docent to the user device for presentation to the
user.
[0005] Other embodiments of this aspect include corresponding
computer systems, apparatus, and computer programs recorded on one
or more computer storage devices, each configured to perform the
actions of the methods.
[0006] A system of one or more computers can be configured to
perform particular operations or actions by virtue of having
software, firmware, hardware, or a combination of them installed on
the system that in operation causes the system to perform
the actions. One or more computer programs can be configured to
perform particular operations or actions by virtue of including
instructions that, when executed by data processing apparatus,
cause the apparatus to perform the actions.
[0007] The foregoing and other embodiments can each optionally
include one or more of the following features, alone or in
combination. The request can specify one or more tourism
parameters, and selecting candidate docents can include selecting
as candidate docents available docents who have registered to
provide vicarious tourism sessions matching the tourism parameters
specified in the request. Selecting candidate docents can further
include: identifying available docents.
[0008] The actions can further include: ranking the candidate
docents based on a respective cost to the user to participate in a
vicarious tourism session with each candidate docent or on user
reviews of previous tourism sessions with each candidate
docent.
[0009] Providing data identifying the candidate docents can
include: providing a map interface for presentation to the user,
wherein the map interface identifies locations where candidate
docents are available to participate in interactive tourism
sessions.
[0010] The actions can further include: overlaying relevant
information over the video feed for presentation to the user,
wherein the relevant information is relevant to the geographic
location of the selected docent.
[0011] The actions can further include: receiving an input from the
user; determining that the input is an instruction for the selected
docent; and translating the input into one of a standardized set of
commands that is understandable by the selected docent.
[0012] The standardized set of commands can include one or more of
an audio signal, a touch signal, or a visual signal.
[0013] Particular embodiments of the subject matter described in
this specification can be implemented so as to realize one or more
of the following advantages. Users of a vicarious tourism system
can easily experience touring various locations or points of
interest through vicarious tourism sessions without having to be
physically located in the location or at the point of interest.
During the vicarious tourism session, a user can easily communicate
with a docent giving the tour even if the user and the docent do
not speak the same language.
[0014] The details of one or more embodiments of the subject matter
described in this specification are set forth in the accompanying
drawings and the description below. Other features, aspects, and
advantages of the subject matter will become apparent from the
description, the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] FIG. 1 shows an example vicarious tourism session
system.
[0016] FIG. 2 is a flow diagram of an example process for
initiating a vicarious tourism session.
[0017] Like reference numbers and designations in the various
drawings indicate like elements.
DETAILED DESCRIPTION
[0018] FIG. 1 shows an example vicarious tourism session system
140. The vicarious tourism session system 140 is an example of a
system implemented as computer programs on one or more computers in
one or more locations, in which the systems, components, and
techniques described below can be implemented.
A user can interact with the vicarious tourism session system 140
using a user device 130 through a data communication network 102.
The network 102 enables data communication between multiple
electronic devices. Users can access content, provide content,
exchange information, and participate in vicarious tourism sessions
by use of the devices and systems that can communicate with each
other over the network 102. The network 102 can include, for
example, a local area network (LAN), e.g., a Wi-Fi network, a
cellular phone network, a wide area network (WAN), e.g., the
Internet, or a combination of them.
[0020] A user device 130 is an electronic device, or collection of
devices, that is under the control of a user and is capable of
interacting with the vicarious tourism session system 140 over the
network 102. Example user devices 130 include personal computers
132, mobile communication devices 134, and other devices that can
send and receive data over the network 102. A user device 130 is
typically configured with a user application, e.g., a web browser,
that sends and receives data over the network 102, generally in
response to user actions. The user application can enable a user to
display and interact with text, images, videos, music and other
content, which can be located on a web page on the World Wide Web
or a local area network.
[0021] The vicarious tourism session system 140 allows people to
organize, request, and participate in vicarious tourism sessions.
In a vicarious tourism session, a person interacts with another
person to allow one of the people, who for convenience may be
referred to as the visitor, to experience visiting a particular
physical location or point of interest, e.g., by viewing live video
being captured by the other person. In some instances, the visitor
acts purely as an observer. In other instances, however, the
visitor can play an active role, and information goes in both
directions during the session. The term "vicarious tourism session"
may thus refer to such an interaction, the period of interaction,
or a recording of such an interaction, as the context requires.
[0022] In particular, the vicarious tourism session system 140
allows visitors using user devices 130 to experience visiting a
particular physical location or point of interest by viewing video
being captured by docents using session accessories 160. That is,
during the vicarious tourism session, the vicarious tourism session system
140 provides a video stream of video and audio captured by the
session accessory 160 worn by the docent to a user device 130 for
presentation to the user.
[0023] In general, session accessories 160 are devices that a
person can use to participate in sessions with the vicarious
tourism session system 140. Session accessories 160 will typically
be portable, personal, multimode, e.g., audio and video, electronic
devices. Session accessories 160 can be, for example, wearable
computing devices that include a camera and a microphone that may
be worn on a user's person. For example, a session accessory 160
can include a hat camera system that includes a camera mounted on a
hat, e.g., on the brim of a hat, which can connect wirelessly to
the vicarious tourism session system 140, e.g., by connecting
wirelessly to a mobile computing system, e.g., a mobile phone or
other mobile device, that can connect to the vicarious tourism
session system 140 or by connecting to the vicarious tourism
session system 140 over a Wi-Fi network. An example hat camera
system is described in more detail in U.S. patent application Ser.
No. 61/781,506, entitled "Wearable Camera Systems" and filed on
Mar. 14, 2013. Session accessories 160 can also include other
camera systems worn by a person that provide point-of-view video
data, for example, a helmet-mounted camera. Generally, a session
accessory 160 is a system that includes one or more of an audio
input device 160-1, a video input device 160-2, a display device
160-3, and optionally other input devices, e.g., for text or
gesture input.
[0024] A session accessory 160 may be used during a vicarious
tourism session to broadcast video taken from the point of view of
a docent wearing the session accessory 160 to another user
participating in the session. In implementations where the session
accessory 160 connects wirelessly to a mobile device, the mobile
device can be configured to communicate with the vicarious tourism
session system 140 through an application executing on the mobile
device. Optionally, a session accessory 160 may include multiple
video input devices 160-2 that, after being processed by the
vicarious tourism session system 140, can provide video feeds that
present panoramic views without distortion to other users
participating in a session. Further optionally, the video input
device 160-2 may include image stabilization features to improve
the stability of the video feed captured by the session accessory
160 and provided to other users participating in the session.
Further optionally, the zoom and position of the video input
devices 160-2 may be controllable by other users participating in a
session by submitting an input to the vicarious tourism session
system 140.
[0025] A docent is a user who has registered with the vicarious
tourism session system 140 in order to be accepted by the system to
provide vicarious tourism sessions that are relevant to a specified
geographic location or region or to a particular point of interest.
For example, a user may register to be a docent who provides
vicarious tours of the Great Wall of China or of Paris, France
using a session accessory 160. Vicarious tourism sessions are
described in more detail below with reference to FIG. 2.
[0026] Completed vicarious tourism sessions can be stored as
session data 142 so that they can be replayed by the user or, with
the visitor's and the docent's consent, by other users.
[0027] FIG. 2 is a flow diagram of an example process 200 for
initiating a vicarious tourism session between a user and a docent.
For convenience, the process 200 will be described as being
performed by a system of one or more computers located in one or
more locations. For example, a vicarious tourism session system,
e.g., the vicarious tourism session system 140 of FIG. 1,
appropriately programmed, can perform the process 200.
[0028] The system receives a request from a user of a user device
to participate in a vicarious tourism session (step 202). The
request can specify one or more tourism parameters. For example,
the request may identify a particular location or a particular
point of interest that the user would like to tour. The request may
specify a date and a time, or a range of dates and times, for the
vicarious tourism session. The request may specify a maximum amount
of money the user is willing to pay to participate in the vicarious
tourism session. The request may specify one or more types of
vicarious tourism sessions the user would like to participate in,
e.g., a monument visit session, a nature hike session, an
architectural tour session, and so on. The request may identify one
or more docents that the user would like to provide the vicarious
tourism session. In some implementations, if the request does not
specify values for one or more of the parameters, the system can
assign a default value for the parameter.
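The request described above can be modeled as a small parameter set in which any field the user leaves unspecified is assigned a default. The following sketch is illustrative only; the field names, default values, and example location are assumptions, not taken from the application.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical tourism-parameter record; every field name and default
# below is an illustrative assumption.
@dataclass
class TourRequest:
    location: str                          # location or point of interest to tour
    session_type: str = "monument visit"   # assumed default session type
    max_price: float = 0.0                 # maximum amount the user will pay
    preferred_docents: List[str] = field(default_factory=list)

def apply_defaults(params: dict) -> TourRequest:
    """Build a request, assigning a default value for any unspecified parameter."""
    return TourRequest(
        location=params["location"],
        session_type=params.get("session_type", "monument visit"),
        max_price=params.get("max_price", 0.0),
        preferred_docents=params.get("preferred_docents", []),
    )

# A request that specifies only a location and a price cap; the session
# type falls back to its default.
req = apply_defaults({"location": "Great Wall of China", "max_price": 25.0})
```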
[0029] The system selects candidate docents in response to the
request (step 204). The system can select as a candidate docent any
available docent that has registered with the system to provide a
vicarious tourism session that meets the tourism parameters
specified in the received request. The system can determine which
docents of the docents registered with the system are available in
any of a variety of ways.
[0030] For example, the system can make the determination based on
availability data received from the docents that identifies time
periods during which the docents are available to give tours. As
another example, the system can make the determination based on
which docents are currently logged in to the system or based on
data that identifies a current presence status of the docents,
i.e., active, idle, busy, or offline.
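The two availability checks above can be combined: a docent is a candidate only if the requested time falls inside a registered availability window and the docent's presence status permits a session. A minimal sketch, with an assumed record schema and made-up docents:

```python
from datetime import datetime

# Hypothetical docent records; the schema and values are assumptions.
docents = [
    {"name": "A", "windows": [("2015-06-18T09:00", "2015-06-18T17:00")], "status": "active"},
    {"name": "B", "windows": [("2015-06-18T09:00", "2015-06-18T17:00")], "status": "offline"},
    {"name": "C", "windows": [("2015-06-19T09:00", "2015-06-19T17:00")], "status": "active"},
]

def is_available(docent: dict, when: datetime) -> bool:
    """A docent is available if the requested time falls within a
    registered availability window and the docent is neither busy nor
    offline according to the current presence status."""
    in_window = any(
        datetime.fromisoformat(start) <= when <= datetime.fromisoformat(end)
        for start, end in docent["windows"]
    )
    return in_window and docent["status"] in ("active", "idle")

when = datetime.fromisoformat("2015-06-18T12:00")
available = [d["name"] for d in docents if is_available(d, when)]
```

Here only docent A survives both checks: B is offline, and C's window is on a different day.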
[0031] The system provides data identifying the candidate docents
to the user device (step 206). For example, the system can provide
a map interface for presentation to the user that identifies
locations of vicarious tourism sessions provided by the candidate
docents. Once the user submits an input selecting a location, the
system can provide another user interface through which the user
can request a virtual tourism session. The system can rank the
available sessions based on, e.g., cost to the user to participate
in the session, on user reviews of previous tourism sessions
provided by a given docent, e.g., on reviews by the user, reviews
by other users, or both, or on whether the docent has expertise in
something of interest to the user, and provide the sessions for
display in an order according to the ranking. Optionally, the user
interface can allow the user to sort the identified sessions
according to any of a set of criteria.
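The ranking step described above can be sketched as a sort over a composite key of cost and review score; the particular weighting, field names, and values below are assumptions for illustration.

```python
# Hypothetical candidate sessions; costs and review averages are made up.
sessions = [
    {"docent": "A", "cost": 30.0, "avg_review": 4.9},
    {"docent": "B", "cost": 10.0, "avg_review": 4.2},
    {"docent": "C", "cost": 10.0, "avg_review": 4.8},
]

def rank_sessions(sessions):
    """Order sessions by lowest cost to the user first, breaking ties by
    the best average review score from previous tourism sessions."""
    return sorted(sessions, key=lambda s: (s["cost"], -s["avg_review"]))

ranked = [s["docent"] for s in rank_sessions(sessions)]
```

With these values, the two cheaper sessions come first, and the better-reviewed of the two leads.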
[0032] The system receives a user input selecting a candidate
docent (step 208) and initiates a vicarious tourism session
between the docent and the user of the user device (step 210). In
some implementations, if one or more other users select the same
candidate docent, e.g., within a predetermined time window of the
user, the system can initiate the vicarious tourism session between
the docent and multiple users. The vicarious tourism session is a
"point of view" session in which a docent wearing a session
accessory offers a user of a user device the experience of visiting
a particular physical location or point of interest. That is,
during the vicarious tourism session, the system provides a video
stream of video captured by the session accessory of the docent to
one or more user devices for presentation to one or more users. The
docent may also include in the video stream video or audio from
other sources, e.g., from a video camera with high quality zoom
lenses on a sturdy mounting. Optionally, when the video stream
includes video from a mounted video camera configured to
communicate with the system, the user may be able to control the
zoom and the position of the camera by providing an input to the
system. Further optionally, depending on the session accessory worn
by the docent, the user may be able to control the zoom and the
position of the camera or video input device of the session
accessory by providing an input to the system.
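The multi-user case above amounts to grouping selections of the same docent that arrive within a predetermined time window into one shared session. A sketch under assumed names, with a 60-second window chosen arbitrarily:

```python
# Hypothetical selection events: (seconds since some epoch, user, docent).
selections = [
    (0, "user1", "docentA"),
    (30, "user2", "docentA"),
    (200, "user3", "docentA"),
    (40, "user4", "docentB"),
]

WINDOW = 60  # assumed predetermined window, in seconds

def group_sessions(selections, window=WINDOW):
    """Group users who selected the same candidate docent within `window`
    seconds of the first selection into a single shared session."""
    sessions = []  # each entry: {"docent": ..., "start": ..., "users": [...]}
    for t, user, docent in sorted(selections):
        for s in sessions:
            if s["docent"] == docent and t - s["start"] <= window:
                s["users"].append(user)
                break
        else:
            sessions.append({"docent": docent, "start": t, "users": [user]})
    return sessions

groups = group_sessions(selections)
```

Here user1 and user2 share one session with docentA, user3 arrives too late and gets a fresh session, and user4 gets a separate session with docentB.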
[0033] In some implementations, prior to initiating the vicarious
tourism session, the system allows the docent the opportunity to
agree to participate in the session or for the user and the docent
to negotiate the terms of the vicarious tourism session, e.g., by
providing a user interface identifying the vicarious tourism session and
the user for presentation to the user or by establishing a channel
of communication between the user and the docent.
[0034] In some implementations, the docent can provide an itinerary
or a route for the session. The system can then provide information
identifying the itinerary or route for display to the user, e.g.,
prior to the user selecting a session. In some implementations, the
user may have an option to generate an itinerary or route, e.g., by
interacting with a map interface provided by the system, uploading
an existing itinerary or route, or by selecting from available
itineraries or routes maintained by the system for the location or
point of interest. In these implementations, the system may provide
the user-created, user-uploaded, or user-selected itinerary or
route to the docent for approval before initiating the vicarious
tourism session.
[0035] A user may be able to interact with the docent during the
vicarious tourism session. For example, the user may be able to
pose questions or make requests to the docent as part of the
vicarious tourism session. In particular, the user may be able to
give directions to the docent on where to go next, where to look,
how fast to go, and so on.
[0036] However, in many circumstances, the user and the docent may
not be able to effectively communicate directly due to not being
able to speak the same language, for example. Therefore, in some
implementations, the system can receive input from the user and
translate it into one of a standardized set of statements that will
be understandable by the docent. For example, the system can
receive an input from the user that specifies an instruction for
the docent, e.g., a user selection of a user interface element that
indicates that the user wants the docent to move in a specific
direction, a user swipe movement on a touchscreen display of the
user device in the specific direction, or a user movement of an
input device in the specific direction, and translate the inputs so
that they may be understood by the docent. For example, the system
can translate the instruction into an audio command in a language
that is spoken by the docent, e.g., a language specified in a user
profile of the docent or a most-commonly spoken language where the
docent is located. In some implementations, the system can generate
one of a pre-defined set of audio signals that correspond to the
received instruction, e.g., a hum, a bang, a squeak, and so on.
Depending on the features available on the session accessory or
other user devices possessed by the docent, other ways of signaling
a docent may be possible, e.g., by causing the session accessory or
a mobile device wirelessly connected to the session accessory to
vibrate, by causing a portion of the session accessory to move and
contact the docent in a pre-determined fashion, i.e., by causing a
pre-determined touch signal to be applied to the docent, or by
generating a signal that is visible to the docent while wearing the
session accessory, e.g., if the session accessory includes a
display.
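The translation step described above can be sketched as two lookups: first from a recognized user input (a UI tap, a swipe) to one of a standardized set of commands, then from that command to a rendering in whatever signaling modality the docent's session accessory supports. The command set, input names, and renderings below are all illustrative assumptions.

```python
# Hypothetical mapping from raw user inputs to a standardized command set.
INPUT_TO_COMMAND = {
    "swipe_left": "TURN_LEFT",
    "swipe_right": "TURN_RIGHT",
    "tap_forward": "MOVE_FORWARD",
}

# Per-command renderings for each assumed modality: an audio phrase in the
# docent's language, a vibration pattern, or on-display text.
COMMAND_RENDERINGS = {
    "TURN_LEFT": {"audio_fr": "tournez à gauche", "vibration": "short-short", "display": "<- left"},
    "TURN_RIGHT": {"audio_fr": "tournez à droite", "vibration": "short-long", "display": "right ->"},
    "MOVE_FORWARD": {"audio_fr": "avancez", "vibration": "long", "display": "^ forward"},
}

def translate_input(user_input: str, modality: str) -> str:
    """Map a raw user input to a standardized command, then render that
    command in the modality available on the docent's session accessory."""
    command = INPUT_TO_COMMAND[user_input]
    return COMMAND_RENDERINGS[command][modality]

# A leftward swipe, rendered as a French audio instruction for the docent.
signal = translate_input("swipe_left", "audio_fr")
```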
[0037] Optionally, the system can overlay various kinds of
information over the video stream that is presented to the user.
For example, the system can overlay a map that displays the current
location of the docent, e.g., obtained by the system from the
session accessory or from a different device on the
docent's person. The map may also display the proposed route for
the session. As another example, the system can, based on the
location information or by applying object recognition techniques
to the video being captured by the session accessory, detect points
of interest or other geographic entities near the location of the
docent and overlay information about the points of interest or
other geographic entities in proximity to the docent. As another
example, the system can overlay information obtained from a social
network account of the user, e.g., images of past visits to the
location by the user or by other users in the user's social
network, status updates by users in the user's social network, and
so on.
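Detecting points of interest near the docent, as described above, reduces to a distance test between each candidate point and the docent's reported location. A sketch using the haversine great-circle formula; the point-of-interest list, coordinates, and radius are made-up values for illustration.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical points of interest near Paris; coordinates are approximate.
POIS = [
    ("Eiffel Tower", 48.8584, 2.2945),
    ("Louvre", 48.8606, 2.3376),
    ("Notre-Dame", 48.8530, 2.3499),
]

def nearby_pois(docent_lat, docent_lon, radius_km=1.0):
    """Return names of points of interest within `radius_km` of the
    docent's location, suitable for overlaying on the video stream."""
    return [
        name for name, lat, lon in POIS
        if haversine_km(docent_lat, docent_lon, lat, lon) <= radius_km
    ]

# A docent standing near the Louvre; only that point falls inside 1 km.
overlay = nearby_pois(48.8610, 2.3360)
```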
[0038] Embodiments of the subject matter and the operations
described in this specification can be implemented in digital
electronic circuitry, or in computer software, firmware, or
hardware, including the structures disclosed in this specification
and their structural equivalents, or in combinations of one or more
of them. Embodiments of the subject matter described in this
specification can be implemented as one or more computer programs,
i.e., one or more modules of computer program instructions, encoded
on computer storage medium for execution by, or to control the
operation of, data processing apparatus. Alternatively or in
addition, the program instructions can be encoded on an
artificially-generated propagated signal, e.g., a machine-generated
electrical, optical, or electromagnetic signal, that is generated
to encode information for transmission to suitable receiver
apparatus for execution by a data processing apparatus. A computer
storage medium can be, or be included in, a computer-readable
storage device, a computer-readable storage substrate, a random or
serial access memory array or device, or a combination of one or
more of them. Moreover, while a computer storage medium is not a
propagated signal, a computer storage medium can be a source or
destination of computer program instructions encoded in an
artificially-generated propagated signal. The computer storage
medium can also be, or be included in, one or more separate
physical components or media.
[0039] The operations described in this specification can be
implemented as operations performed by a data processing apparatus
on data stored on one or more computer-readable storage devices or
received from other sources. The term "data processing apparatus"
encompasses all kinds of apparatus, devices, and machines for
processing data, including by way of example a programmable
processor, a computer, a system on a chip, or multiple ones, or
combinations, of the foregoing. The apparatus can also include, in
addition to hardware, code that creates an execution environment
for the computer program in question, e.g., code that constitutes
processor firmware, a protocol stack, a database management system,
an operating system, a cross-platform runtime environment, a
virtual machine, or a combination of one or more of them. The
apparatus and execution environment can realize various different
computing model infrastructures, e.g., web services, distributed
computing and grid computing infrastructures.
[0040] A computer program (also known as a program, software,
software application, script, or code) can be written in any form
of programming language, including compiled or interpreted
languages, declarative or procedural languages, and it can be
deployed in any form, including as a stand-alone program or as a
module, component, subroutine, object, or other unit suitable for
use in a computing environment. A computer program may, but need
not, correspond to a file in a file system. A program can be stored
in a portion of a file that holds other programs or data, e.g., one
or more scripts stored in a markup language document, in a single
file dedicated to the program in question, or in multiple
coordinated files, e.g., files that store one or more modules,
sub-programs, or portions of code. A computer program can be
deployed to be executed on one computer or on multiple computers
that are located at one site or distributed across multiple sites
and interconnected by a communication network.
[0041] The processes and logic flows described in this
specification can be performed by one or more programmable
processors executing one or more computer programs to perform
actions by operating on input data and generating output.
Processors suitable for the execution of a computer program
include, by way of example, both general and special purpose
microprocessors, and any one or more processors of any kind of
digital computer. Generally, a processor will receive instructions
and data from a read-only memory or a random access memory or both.
The essential elements of a computer are a processor for performing
actions in accordance with instructions and one or more memory
devices for storing instructions and data. Generally, a computer
will also include, or be operatively coupled to receive data from
or transfer data to, or both, one or more mass storage devices for
storing data. However, a computer need not have such devices.
Moreover, a computer can be embedded in another device, e.g., a
mobile telephone, a smart phone, a mobile audio or video player, a
game console, a Global Positioning System (GPS) receiver, and a
wearable computer device, to name just a few. Devices suitable for
storing computer program instructions and data include all forms of
non-volatile memory, media and memory devices, including by way of
example semiconductor memory devices, magnetic disks, and the like.
The processor and the memory can be supplemented by, or
incorporated in, special purpose logic circuitry.
[0042] To provide for interaction with a user, embodiments of the
subject matter described in this specification can be implemented
on a computer having a display device for displaying information to
the user and a keyboard and a pointing device, e.g., a mouse or a
trackball, by which the user can provide input to the computer.
Other kinds of devices can be used to provide for interaction with
a user as well; for example, feedback provided to the user can be
any form of sensory feedback, e.g., visual feedback, auditory
feedback, or tactile feedback; and input from the user can be
received in any form, including acoustic, speech, or tactile
input.
[0043] While this specification contains many specific
implementation details, these should not be construed as
limitations on the scope of any inventions or of what may be
claimed, but rather as descriptions of features specific to
particular embodiments of particular inventions. Certain features
that are described in this specification in the context of separate
embodiments can also be implemented in combination in a single
embodiment. Conversely, various features that are described in the
context of a single embodiment can also be implemented in multiple
embodiments separately or in any suitable subcombination. Moreover,
although features may be described above as acting in certain
combinations and even initially claimed as such, one or more
features from a claimed combination can in some cases be excised
from the combination, and the claimed combination may be directed
to a subcombination or variation of a subcombination.
[0044] Similarly, while operations are depicted in the drawings in
a particular order, this should not be understood as requiring that
such operations be performed in the particular order shown or in
sequential order, or that all illustrated operations be performed,
to achieve desirable results. In certain circumstances,
multitasking and parallel processing may be advantageous. Moreover,
the separation of various system components in the embodiments
described above should not be understood as requiring such
separation in all embodiments, and it should be understood that the
described program components and systems can generally be
integrated together in a single software product or packaged into
multiple software products.
[0045] Thus, particular embodiments of the subject matter have been
described. Other embodiments are within the scope of the following
claims. In some cases, the actions recited in the claims can be
performed in a different order and still achieve desirable results.
In addition, the processes depicted in the accompanying figures do
not necessarily require the particular order shown, or sequential
order, to achieve desirable results. In certain implementations,
multitasking and parallel processing may be advantageous.
* * * * *