U.S. patent application number 13/840,016 was filed with the patent office on 2013-03-15 and published on 2014-04-10 as publication number 2014/0101608 for user interfaces for head-mountable devices.
This patent application is currently assigned to Google Inc. The applicant listed for this patent is Google Inc. Invention is credited to Max Benjamin Braun, Alexander Hanbing Chen, Antonio Bernardo Monteiro Costa, Alexander Faaborg, Michael J. LeBeau, Chris McKenzie, Nirmal Patel, Hayes Solos Raffle, Robert Allen Ryskamp, and Richard The.
United States Patent Application 20140101608, Kind Code A1
Ryskamp, Robert Allen; et al.
Published: April 10, 2014
User Interfaces for Head-Mountable Devices
Abstract
Methods, apparatus, and computer-readable media are described
herein related to a user interface (UI) for a head-mountable device
(HMD). A computing device, such as an HMD, can display at least a
portion of a first linear arrangement of cards. The first linear
arrangement can include an ordered plurality of cards that can
include an actionable card and a bundle card that can correspond to
a group of cards. A moveable selection region can be displayed. A
given card can be selected by aligning the selection region with
the given card. After selection of a bundle card, the computing
device can display a second linear arrangement of cards that
includes a portion of the corresponding group of cards. After
selection of an actionable card, the computing device can display a
third linear arrangement of cards that includes action card(s)
selectable to perform action(s) based on the actionable card.
Inventors: Ryskamp, Robert Allen (Zurich, CH); Braun, Max Benjamin (San Francisco, CA); Patel, Nirmal (Mountain View, CA); McKenzie, Chris (Brooklyn, NY); Raffle, Hayes Solos (Palo Alto, CA); Costa, Antonio Bernardo Monteiro (San Francisco, CA); The, Richard (New York City, NY); Chen, Alexander Hanbing (Brooklyn, NY); LeBeau, Michael J. (New York City, NY); Faaborg, Alexander (Mountain View, CA)

Applicant: Google Inc. (US)

Assignee: Google Inc., Mountain View, CA

Family ID: 50432335

Appl. No.: 13/840,016

Filed: March 15, 2013

Related U.S. Patent Documents: Application No. 61/710,543, filed Oct. 5, 2012

Current U.S. Class: 715/810

Current CPC Class: G02B 2027/0141 (20130101); G06F 3/0484 (20130101); G06F 3/0483 (20130101); G06T 11/206 (20130101); G06F 3/0482 (20130101); G06F 2203/04806 (20130101); G02B 27/0172 (20130101); G06F 40/103 (20200101)

Class at Publication: 715/810

International Class: G06F 3/0482 (20060101)
Claims
1. A computing device, comprising: a processor; and a
non-transitory computer-readable medium configured to store program
instructions that, when executed by the processor, cause the
computing device to carry out functions comprising: displaying at
least a portion of a first linear arrangement of cards, wherein the
first linear arrangement comprises an ordered plurality of cards
that includes one or more first cards of a first card-type and one
or more second cards of a second card-type, and wherein each first
card corresponds to a group of cards; displaying a selection region
that is moveable with respect to the first linear arrangement,
wherein a given card is selected when the selection region is
aligned with the given card; in response to selection of a given
first card by the selection region, displaying at least a portion
of a second linear arrangement of cards, wherein the second linear
arrangement comprises an ordered plurality of the group of cards
that corresponds to the given first card; and in response to
selection of a given second card by the selection region,
displaying at least a portion of a third linear arrangement of
cards, wherein the third linear arrangement comprises one or more
third cards of a third type, wherein each third card is selectable
to perform an action based on the given second card.
2. The computing device of claim 1, wherein each first card is a
bundle card, wherein each second card is an actionable card, and
wherein each third card is an action card.
3. The computing device of claim 1, wherein the first linear
arrangement comprises a timeline, and wherein each card of the
first linear arrangement is associated with a specific time.
4. The computing device of claim 1, wherein each card of the first
linear arrangement comprises a relationship-related parameter.
5. The computing device of claim 4, wherein each card in the group
of cards comprises a same relationship-related parameter.
6. The computing device of claim 4, wherein the selected second
card is related to a first relationship-related parameter, and
wherein displaying at least the portion of the third linear
arrangement comprises determining the one or more third cards based
on the first relationship-related parameter.
7. The computing device of claim 1, wherein the computing device is
further configured with a touchpad, and wherein the functions
further comprise: initially displaying a single card from the first
linear arrangement using a single-card view; while displaying the
single card, receiving a first input via the touchpad; and in
response to the first input: switching to a multi-timeline view;
and displaying, in the multi-timeline view, the at least a portion
of the first linear arrangement of cards, wherein the at least the
portion of the first linear arrangement of cards comprises the
single card.
8. The computing device of claim 1, wherein the computing device is
configured to detect head movements, and wherein displaying the
selection region that is moveable with respect to the first linear
arrangement comprises moving the selection region with respect to
the first linear arrangement based on the head movements.
9. The computing device of claim 1, wherein displaying the at least
a portion of the third linear arrangement of cards comprises
displaying the third linear arrangement adjacent to and parallel to
the first linear arrangement, and wherein the third linear
arrangement begins with a third card aligned with and adjacent to
the selected second card.
10. The computing device of claim 1, wherein the functions further
comprise: after displaying the second linear arrangement of cards,
selecting a card other than the selected first card; and ceasing
display of the second linear arrangement.
11. The computing device of claim 1, wherein the second linear
arrangement further comprises the bundle card.
12. The computing device of claim 1, wherein the computing device
is configured as a head-mountable device.
13. A non-transitory computer-readable medium configured to store program
instructions that, when executed by a processor of a computing
device, cause the computing device to carry out functions
comprising: displaying at least a portion of a first linear
arrangement of cards, wherein the first linear arrangement
comprises an ordered plurality of cards that includes one or more
first cards of a first card-type and one or more second cards of a
second card-type, and wherein each first card corresponds to a
group of cards; displaying a selection region that is moveable with
respect to the first linear arrangement, wherein a given card is
selected when the selection region is aligned with the given card;
in response to selection of a given first card by the selection
region, displaying at least a portion of a second linear
arrangement of cards, wherein the second linear arrangement
comprises an ordered plurality of the group of cards that
corresponds to the given first card; and in response to selection
of a given second card by the selection region, displaying at least
a portion of a third linear arrangement of cards, wherein the third
linear arrangement comprises one or more third cards of a third
type, and wherein each third card is selectable to perform an
action based on the given second card.
14. The non-transitory computer-readable medium of claim 13,
wherein each first card is a bundle card, wherein each second card
is an actionable card, and wherein each third card is an action
card.
15. The non-transitory computer-readable medium of claim 13,
wherein the first linear arrangement comprises a timeline, and
wherein each card of the first linear arrangement is associated
with a specific time.
16. The non-transitory computer-readable medium of claim 13,
wherein each card of the first linear arrangement comprises a
relationship-related parameter.
17. The non-transitory computer-readable medium of claim 16,
wherein each card in the group of cards comprises a same
relationship-related parameter.
18. The non-transitory computer-readable medium of claim 16,
wherein the selected second card is related to a first
relationship-related parameter, and wherein displaying at least the
portion of the third linear arrangement comprises determining the
one or more third cards based on the first relationship-related
parameter.
19. The non-transitory computer-readable medium of claim 13,
wherein the computing device is associated with a touchpad, and
wherein the functions further comprise: initially displaying a
single card of the plurality of cards using a single-card view;
while displaying the single card, receiving a first input via the
touchpad; and in response to the first input: switching to a
multi-timeline view; and displaying, in the multi-timeline view,
the at least a portion of the first linear arrangement of cards,
wherein the at least the portion of the first linear arrangement of
cards comprises the single card.
20. The non-transitory computer-readable medium of claim 13,
wherein displaying the at least a portion of the third linear
arrangement of cards comprises displaying the third linear
arrangement adjacent to and parallel to the first linear
arrangement, and wherein the third linear arrangement begins with
an action card aligned with and adjacent to the selected actionable
card.
21. The non-transitory computer-readable medium of claim 13,
wherein the computing device is configured as a head-mountable
device.
22. A method, comprising: displaying at least a portion of a first
linear arrangement of cards using a computing device, wherein the
first linear arrangement comprises an ordered plurality of cards
that includes one or more first cards of a first card-type and one
or more second cards of a second card-type, and wherein each first
card corresponds to a group of cards; displaying a selection region
that is moveable with respect to the first linear arrangement using
the computing device, wherein a given card is selected when the
selection region is aligned with the given card; in response to
selection of a given first card by the selection region, displaying
at least a portion of a second linear arrangement of cards using
the computing device, wherein the second linear arrangement
comprises an ordered plurality of the group of cards that
corresponds to the given first card; and in response to selection
of a given second card by the selection region, displaying at least
a portion of a third linear arrangement of cards using the
computing device, wherein the third linear arrangement comprises
one or more third cards of a third type, and wherein each third
card is selectable to perform an action based on the given second
card.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Patent App. No.
61/710,543, entitled "User Interfaces for Head-Mountable Devices",
filed on Oct. 5, 2012, the contents of which are fully incorporated
by reference herein for all purposes.
BACKGROUND
[0002] Unless otherwise indicated herein, the materials described
in this section are not prior art to the claims in this application
and are not admitted to be prior art by inclusion in this
section.
[0003] Computing systems such as personal computers, laptop
computers, tablet computers, cellular phones, and countless types
of Internet-capable devices are prevalent in numerous aspects of
modern life. Over time, the manner in which these devices are
providing information to users is becoming more intelligent, more
efficient, more intuitive, and/or less obtrusive.
[0004] The trend toward miniaturization of computing hardware,
peripherals, as well as of sensors, detectors, and image and audio
processors, among other technologies, has helped open up a field
sometimes referred to as "wearable computing." In the area of image
and visual processing and production, in particular, it has become
possible to consider wearable displays that place a very small
image display element close enough to a wearer's (or user's) eye(s)
such that the displayed image fills or nearly fills the field of
view, and appears as a normal sized image, such as might be
displayed on a traditional image display device. The relevant
technology can be referred to as "near-eye displays."
[0005] Near-eye displays are fundamental components of wearable
displays, also sometimes called "head-mounted displays" (HMDs). A
head-mounted display places a graphic display or displays close to
one or both eyes of a wearer. To generate the images on a display,
a computer processing system can be used. Such displays can occupy
part or all of a wearer's field of view. Further, head-mounted
displays can be as small as a pair of glasses or as large as a
helmet.
SUMMARY
[0006] In one aspect, a method is provided. A head-mountable device (HMD) displays a home card of an ordered plurality of cards. While displaying the home card, the HMD receives a first input associated with a first input type, where the first input type can be a choose-next input type or a choose-previous input type. In response to the first input type being the choose-next input type: a next card of the ordered plurality of cards is obtained, the next card being subsequent to the home card in the ordered plurality of cards, and the HMD displays the next card. In response to the first input type being the choose-previous input type: a previous card of the ordered plurality of cards is obtained, where the previous card is prior to the home card in the ordered plurality of cards, and the HMD displays the previous card.
[0007] In another aspect, a method is provided. At an HMD, a home
card is displayed. The HMD maintains a user-interface (UI) state, which is initially in a home UI state. While in the home UI state, a
first UI of the HMD receives a first input. The first input is
associated with a first type of input. In response to the first
type of input being a choose-next type of input: the HMD displaying
a next card of an ordered plurality of cards, where the ordered
plurality of cards additionally includes the home card, where the
next card differs from the home card, and setting the UI state of
the HMD to a timeline-next state. In response to the first type of
input being a choose-previous type of input, the HMD displaying a
previous card of the ordered plurality of cards, where the
choose-previous type of input differs from the choose-next type of
input, and where the previous card differs from both the next card
and the home card, and setting the UI state of the HMD to a
timeline-previous state. In response to the first type of input
being a tap type of input: activating a second UI of the HMD, where
the first UI of the HMD is a touch-based UI and where the second UI
is a voice-based UI, and setting the UI state of the HMD to a
voice-home state. In response to the first type of input being a
speech-type of input: determining whether text associated with the
first input matches a predetermined text, and, in response to
determining that the text associated with first input matches the
predetermined text, activating the second UI and setting the UI
state of the HMD to the voice-home state.
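For exposition, the state transitions recited in this aspect can be summarized in a short sketch. This is not code from the application; the state names follow the paragraph above, and the predetermined wake text is a hypothetical placeholder.

```python
from enum import Enum, auto

class UIState(Enum):
    HOME = auto()
    TIMELINE_NEXT = auto()
    TIMELINE_PREVIOUS = auto()
    VOICE_HOME = auto()

PREDETERMINED_TEXT = "ok glass"  # hypothetical; the text is unspecified above

def next_state_from_home(input_type, speech_text=None):
    """Return the UI state after a first input received in the home UI state."""
    if input_type == "choose-next":
        return UIState.TIMELINE_NEXT       # HMD also displays the next card
    if input_type == "choose-previous":
        return UIState.TIMELINE_PREVIOUS   # HMD also displays the previous card
    if input_type == "tap":
        return UIState.VOICE_HOME          # tap activates the voice-based UI
    if input_type == "speech" and speech_text == PREDETERMINED_TEXT:
        return UIState.VOICE_HOME          # matching speech also activates it
    return UIState.HOME                    # otherwise, remain in the home state
```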
[0008] In another aspect, a computing device is provided. The
computing device includes a processor and a non-transitory
computer-readable medium that is configured to store program
instructions that, when executed by the processor, cause the
computing device to carry out functions. The functions include:
displaying at least a portion of a first linear arrangement of
cards, where the first linear arrangement includes an ordered
plurality of cards that includes one or more first cards of a first
card-type and one or more second cards of a second card-type, and
where each first card corresponds to a group of cards; displaying a
selection region that is moveable with respect to the first linear
arrangement, where a given card is selected when the selection
region is aligned with the given card; in response to selection of
a given first card by the selection region, displaying at least a
portion of a second linear arrangement of cards, where the second
linear arrangement includes an ordered plurality of the group of
cards that correspond to the given first card; and in response to
selection of a given second card by the selection region,
displaying at least a portion of a third linear arrangement of
cards, where the third linear arrangement includes one or more
third cards of a third card-type, where each third card is
selectable to perform an action based on the given second card.
[0009] In another aspect, a non-transitory computer readable medium
is provided. The non-transitory computer-readable medium is
configured to store program instructions that, when executed by a
processor of a computing device, cause the computing device to
carry out functions. The functions include: displaying at least a
portion of a first linear arrangement of cards, where the first
linear arrangement includes an ordered plurality of cards that
includes one or more first cards of a first card-type and one or
more second cards of a second card-type, and where each first card
corresponds to a group of cards; displaying a selection region that
is moveable with respect to the first linear arrangement, where a
given card is selected when the selection region is aligned with
the given card; in response to selection of a given first card by
the selection region, displaying at least a portion of a second
linear arrangement of cards, where the second linear arrangement
includes an ordered plurality of the group of cards that correspond
to the given first card; and in response to selection of a given
second card by the selection region, displaying at least a portion
of a third linear arrangement of cards, where the third linear
arrangement includes one or more third cards of a third card-type,
where each third card is selectable to perform an action based on
the given second card.
[0010] In another aspect, a method is provided. A computing device
displays at least a portion of a first linear arrangement of cards.
The first linear arrangement includes an ordered plurality of
cards. The ordered plurality of cards includes one or more first
cards of a first card-type and one or more second cards of a second
card-type. Each first card corresponds to a group of cards. The
computing device displays a selection region that is moveable with
respect to the first linear arrangement, where a given card is
selected when the selection region is aligned with the given card.
In response to selection of a given first card by the selection
region, the computing device displays at least a portion of a
second linear arrangement of cards, where the second linear
arrangement includes an ordered plurality of the group of cards
corresponding to the given first card. In response to selection of
a given second card by the selection region, the computing device
displays at least a portion of a third linear arrangement of cards,
where the third linear arrangement includes one or more third cards
of a third card-type, where each third card is selectable to
perform an action based on the given second card.
[0011] In another aspect, a device is provided. The device
includes: means for displaying at least a portion of a first linear
arrangement of cards, where the first linear arrangement includes
an ordered plurality of cards that includes one or more first cards
of a first card-type and one or more second cards of a second
card-type, and where each first card corresponds to a group of
cards; means for displaying a selection region that is moveable
with respect to the first linear arrangement, where a given card is
selected when the selection region is aligned with the given card;
means for, in response to selection of a given first card by the
selection region, displaying at least a portion of a second linear
arrangement of cards, where the second linear arrangement includes
an ordered plurality of the group of cards that correspond to the
given first card; and means for, in response to selection of a
given second card by the selection region, displaying at least a
portion of a third linear arrangement of cards, where the third
linear arrangement includes one or more third cards of a third
card-type, where each third card is selectable to perform an action
based on the given second card.
[0012] These as well as other aspects, advantages, and alternatives
will become apparent to those of ordinary skill in the art by
reading the following detailed description, with reference where
appropriate to the accompanying drawings. Further, it should be
understood that this summary and other descriptions and figures
provided herein are intended to illustrate embodiments by way of
example only and, as such, that numerous variations are possible.
For instance, structural elements and process steps can be
rearranged, combined, distributed, eliminated, or otherwise
changed, while remaining within the scope of the embodiments as
claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] FIG. 1A illustrates a wearable computing system according to
an example embodiment.
[0014] FIG. 1B illustrates an alternate view of the wearable
computing device illustrated in FIG. 1A.
[0015] FIG. 1C illustrates another wearable computing system
according to an example embodiment.
[0016] FIG. 1D illustrates another wearable computing system
according to an example embodiment.
[0017] FIGS. 1E to 1G are simplified illustrations of the wearable
computing system shown in FIG. 1D, being worn by a wearer.
[0018] FIG. 2A illustrates a schematic drawing of a computing
device according to an example embodiment.
[0019] FIG. 2B shows an example projection of an image by an
example head-mountable device (HMD), according to an example
embodiment.
[0020] FIG. 3 shows an example home card of an example user
interface for an HMD, according to an example embodiment.
[0021] FIG. 4 shows example operations of a multi-tiered user model
for a user interface for a head-mountable device (HMD), according
to an example embodiment.
[0022] FIG. 5A shows a scenario of example timeline interactions,
according to an example embodiment.
[0023] FIG. 5B shows a scenario of example timeline interactions
including splicing a new card into a timeline, according to an
example embodiment.
[0024] FIG. 5C shows a scenario for using a multi-timeline display,
according to an example embodiment.
[0025] FIG. 6A shows an example of using a two-fingered swipe on a
touch-based UI of an HMD for zoomed scrolling, according to an
example embodiment.
[0026] FIG. 6B shows a scenario for using a clutch operation to
generate a multi-card display, according to an example
embodiment.
[0027] FIG. 6C shows a scenario for using a clutch operation to
generate a multi-timeline display, according to an example
embodiment.
[0028] FIG. 6D shows a scenario for using head movements to
navigate a multi-timeline display, according to an example
embodiment.
[0029] FIG. 7 shows a user-interface scenario including contextual
menus, according to an example embodiment.
[0030] FIG. 8 shows a user-interface scenario including a people
chooser, according to an example embodiment.
[0031] FIG. 9 shows a user-interface scenario with camera
interactions, according to an example embodiment.
[0032] FIG. 10A shows a user-interface scenario with photo bundles,
according to an example embodiment.
[0033] FIG. 10B shows a user-interface scenario with message
bundles, according to an example embodiment.
[0034] FIG. 11 shows a user-interface scenario with a timeline
having settings cards, according to an example embodiment.
[0035] FIG. 12 shows a user-interface scenario related to WiFi
settings, according to an example embodiment.
[0036] FIG. 13 shows a user-interface scenario related to Bluetooth
settings, according to an example embodiment.
[0037] FIG. 14A shows an example visual stack, according to an
example embodiment.
[0038] FIG. 14B shows another example visual stack, according to an
example embodiment.
[0039] FIG. 15 shows a user-interface scenario related to voice
interactions, according to an example embodiment.
[0040] FIG. 16A is a flow chart illustrating a method, according to
an example embodiment.
[0041] FIG. 16B is a flow chart illustrating another method,
according to an example embodiment.
[0042] FIG. 17 is a flow chart illustrating another method,
according to an example embodiment.
DETAILED DESCRIPTION
[0043] Example methods and systems are described herein. It should
be understood that the words "example" and "exemplary" are used
herein to mean "serving as an example, instance, or illustration."
Any embodiment or feature described herein as being an "example" or
"exemplary" is not necessarily to be construed as preferred or
advantageous over other embodiments or features. In the following
detailed description, reference is made to the accompanying
figures, which form a part thereof. In the figures, similar symbols
typically identify similar components, unless context dictates
otherwise. Other embodiments can be utilized, and other changes can
be made, without departing from the spirit or scope of the subject
matter presented herein.
[0044] The example embodiments described herein are not meant to be
limiting. It will be readily understood that the aspects of the
present disclosure, as generally described herein, and illustrated
in the figures, can be arranged, substituted, combined, separated,
and designed in a wide variety of different configurations, all of
which are explicitly contemplated herein.
A. OVERVIEW
[0045] In an example embodiment, a UI for a computing device can
include a timeline feature that allows the wearer to navigate
through a sequence of ordered screens. In the context of such a
timeline feature, each screen can be referred to as a "card." Among
the sequence of cards, one or more cards can be displayed, and of
the displayed card(s), one card can be "focused on" for possible
selection. For example, the timeline can be present one card for
display at a time, and the card being displayed is also the card
being focused on. In one embodiment, when a card is selected, the
card can be displayed using a single-card view that occupies
substantially all of the viewing area of the display. In some
embodiments, the computing device utilizing the herein-disclosed UI
can be configured as an HMD, wearable computer, tablet computer,
laptop computer, desktop computer, mobile telephone, and/or other
computing device. In particular embodiments, computing device 210
and/or remote device 230 discussed below in the context of FIG. 2A
can be configured to utilize the herein-disclosed UI.
[0046] Each card can be associated with a certain application,
object, or operation. The cards can be ordered by a time associated
with the card, application, object, or operation represented by the
card. For example, if a card shows a photo captured by a wearer of
the HMD at 2:57 PM, the time associated with the card is the time
associated with the underlying photo object of 2:57 PM. As another
example, a card representing a weather application can continuously
update temperature, forecast, wind, and other weather-related
information, and as such, the time associated with the weather
application can be the current time. As an additional example, a
card representing a calendar application can show a next appointment 2 hours from now, and so the time associated with
the card can be a time corresponding to the displayed next
appointment, or 2 hours in the future.
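As one concrete illustration of this time-based ordering, the examples above could be modeled as follows. This is a minimal Python sketch for exposition only; the Card class and its fields are assumptions, not part of the application.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Card:
    title: str
    associated_time: datetime  # time of the underlying object, app, or operation

def build_timeline(cards):
    """Order cards by their associated times to form the timeline."""
    return sorted(cards, key=lambda card: card.associated_time)

now = datetime(2013, 3, 15, 15, 0)  # an arbitrary "current" time
timeline = build_timeline([
    Card("Photo captured at 2:57 PM", datetime(2013, 3, 15, 14, 57)),
    Card("Weather (continuously updated)", now),
    Card("Calendar: next appointment", now + timedelta(hours=2)),
])
```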
[0047] The timeline feature can allow the wearer to navigate
through the cards according to their associated times. For example,
a wearer could move their head to the left to navigate to cards
with times prior to a time associated with the focused-on card, and
to the right to navigate to cards with times after the time
associated with the focused-on card. As another example, the wearer
can use a touch pad or similar device as part of a touch-based UI
to make a swiping motion in one direction on the touch-based UI to
navigate to cards with times prior to the time associated with the
focused-on card, and make a swiping motion in another direction to
navigate to cards with times after the time associated with the
focused-on card.
[0048] Upon power up, the HMD can display a "home card", also
referred to as a home screen. The home card can be associated with a time of "now" or a current time, and in some cases can display a clock to reinforce the association between the home card and now. Then, cards associated
with times before now can be viewed in the timeline as prior to the
home card, and cards associated with times equal to or after now
can be viewed in the timeline subsequent to the home card.
[0049] After viewing cards on the timeline, the wearer can choose
to interact with some cards. To select the focused-on card for interaction, the wearer can tap on the touch-based UI, also referred to as performing a "tap operation". In some cases, a "contextual menu"
can be used to interact with the selected card. For example, if the
selected focused-on card shows a photo or an image captured by the
HMD, the contextual menu can provide one or more options or
operations for interacting with the selected photo, such as sharing
the image with one or more people, or deleting the photo.
[0050] Different contextual menus can be used for different
objects. For example, a contextual menu for a contact or
representation of information about a person can have options or
operations such as call the contact, send a message to the contact,
delete the contact, or review/update contact details such as
telephone numbers, e-mail addresses, display names, etc.
[0051] Lists of some objects can be arranged in an order other than the time-based order used by the timeline. For example, a
a list of contacts can be arranged by frequency of contact; e.g., a
contact for the person most-communicated-with using the HMD can be
displayed first in a list of contacts, the
second-most-communicated-with contact can be displayed second in
the list, and so on. Other orderings are possible as well.
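A frequency-based ordering like the one described here could be produced as follows. This is an assumed sketch; the application does not give an implementation, and the message-count bookkeeping structure is hypothetical.

```python
def order_contacts_by_frequency(contacts, message_counts):
    """Sort contacts so the most-communicated-with contact comes first.

    message_counts maps a contact name to how often that contact has been
    communicated with using the HMD (a hypothetical bookkeeping structure).
    """
    return sorted(contacts, key=lambda c: message_counts.get(c, 0), reverse=True)

contacts = ["Alice", "Bob", "Carol"]
counts = {"Alice": 3, "Bob": 12, "Carol": 7}
print(order_contacts_by_frequency(contacts, counts))  # ['Bob', 'Carol', 'Alice']
```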
[0052] Groups of cards that share a relationship can be
collected into a "bundle", or "stack" or "deck" of cards. The terms
bundle of cards, stack of cards, and deck of cards are used
interchangeably herein. A bundle of cards can include any cards
that can be considered to be related for a certain purpose; that is, related based on one or more criteria or a combination of criteria. For
example, a collection of photos captured within a certain span of
time can be represented as a photo bundle. As another example, a
collection of messages (e.g. an instant messaging session,
SMS/text-message exchange, or e-mail chain) can be represented as a
message bundle. A bundle card can be constructed for display on the
timeline that represents the bundle and, in some cases, summarizes
the bundle; e.g., shows thumbnail photos of photos in a photo
bundle. In some cases, data related to the card can be used to
track relationship(s) used to create bundles, e.g., a location
associated with a card, an indication that the card is a photo,
message, or other kind of card, a name of an application that
created the card, etc.
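One plausible way to form such bundles is sketched below, using time proximity as the chosen relationship; location, card kind, or creating application could be used the same way. The Card class comes from the earlier sketch, and the 30-minute gap is an arbitrary illustrative threshold.

```python
from datetime import timedelta

def bundle_photos_by_time(photo_cards, max_gap=timedelta(minutes=30)):
    """Group photo cards captured close together in time into bundles."""
    bundles = []
    current = []
    for card in sorted(photo_cards, key=lambda c: c.associated_time):
        # Start a new bundle whenever the gap to the previous photo is too large.
        if current and card.associated_time - current[-1].associated_time > max_gap:
            bundles.append(current)
            current = []
        current.append(card)
    if current:
        bundles.append(current)
    return bundles
```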
[0053] In some embodiments, cards can be classified according to
activities taken upon selection. For example, upon selection of a
bundle card, the bundle card can be replaced by one or more of the
cards the bundle card represents. An "actionable" card can be a non-bundle card for which the HMD can perform one or more related actions. In some example scenarios, a photo
related to an actionable card can be shared, deleted, named, or
stored by the HMD. In some other example scenarios, a message
represented by an actionable card can be accepted, rejected, or
transferred by the HMD. The user interface can generate and/or use
"action" cards to represent actions that can be performed by the
HMD related to the actionable card.
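The relationship between actionable cards and action cards could be sketched as follows. The table of actions mirrors the examples in this paragraph; the mapping structure and function names are assumptions for illustration.

```python
# Hypothetical mapping from actionable-card kind to the actions the HMD
# can perform; the example actions are named in the paragraph above.
ACTIONS_BY_KIND = {
    "photo": ["share", "delete", "name", "store"],
    "message": ["accept", "reject", "transfer"],
}

def action_cards_for(card_kind):
    """Generate action cards for a selected actionable card."""
    return [f"Action: {action}" for action in ACTIONS_BY_KIND.get(card_kind, [])]

print(action_cards_for("photo"))
# ['Action: share', 'Action: delete', 'Action: name', 'Action: store']
```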
[0054] The HMD can also use a speech or voice-based UI that can
include one or more microphones to capture audible input, such as
speech from the wearer. The HMD can use speakers or a bone conduction transducer (BCT) to
present audible output to the wearer. Upon receiving audible input,
the HMD can attempt to recognize the input as a speech command and
process the command accordingly; for example, by converting the
audible input to text and operating on the text. The speech input
can represent commands to the HMD, such as commands to search,
navigate, take photos, record videos, send messages, make telephone
calls, etc.
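Recognized speech could be dispatched to such commands as sketched below. The command verbs follow the examples in this paragraph; the simple prefix-matching scheme and the exact phrasing of each command are assumptions.

```python
def handle_speech_command(text):
    """Dispatch recognized speech text to an HMD action."""
    commands = {
        "search for": lambda arg: f"searching for {arg}",
        "take a photo": lambda arg: "capturing a photo",
        "record a video": lambda arg: "recording a video",
        "send a message to": lambda arg: f"composing a message to {arg}",
        "call": lambda arg: f"placing a call to {arg}",
    }
    lowered = text.lower()
    for prefix, action in commands.items():
        if lowered.startswith(prefix):
            return action(lowered[len(prefix):].strip())
    return "unrecognized command"

print(handle_speech_command("Call Mom"))  # placing a call to mom
```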
[0055] By organizing objects, applications, and operations into
cards, the UI can provide a relatively simple interface to a large
collection of possible data sources. Further, by enabling operation
on a collection of cards arranged in a natural fashion--according
to time in one example--the wearer can readily locate and then
utilize cards stored by the HMD.
B. EXAMPLE WEARABLE COMPUTING DEVICES
[0056] Systems and devices in which example embodiments can be
implemented will now be described in greater detail. In general, an
example system can be implemented in or can take the form of a
wearable computer (also referred to as a wearable computing
device). In an example embodiment, a wearable computer takes the
form of or includes a head-mountable device (HMD).
[0057] An example system can also be implemented in or take the
form of other devices, such as a mobile phone, among other
possibilities. Further, an example system can take the form of
non-transitory computer readable medium, which has program
instructions stored thereon that are executable by a processor to
provide the functionality described herein. An example system can
also take the form of a device such as a wearable computer or
mobile phone, or a subsystem of such a device, which includes such
a non-transitory computer readable medium having such program
instructions stored thereon.
[0058] An HMD can generally be any display device that is capable
of being worn on the head and places a display in front of one or
both eyes of the wearer. An HMD can take various forms such as a
helmet or eyeglasses. As such, references to "eyeglasses" or a
"glasses-style" HMD should be understood to refer to an HMD that
has a glasses-like frame so that it can be worn on the head.
Further, example embodiments can be implemented by or in
association with an HMD with a single display or with two displays,
which can be referred to as a "monocular" HMD or a "binocular" HMD,
respectively.
[0059] FIG. 1A illustrates a wearable computing system according to
an example embodiment. In FIG. 1A, the wearable computing system
takes the form of a head-mountable device (HMD) 102 (which can also
be referred to as a head-mounted display). It should be understood,
however, that example systems and devices can take the form of or
be implemented within or in association with other types of
devices, without departing from the scope of the invention. As
illustrated in FIG. 1A, the HMD 102 includes frame elements
including lens-frames 104, 106 and a center frame support 108, lens
elements 110, 112, and extending side-arms 114, 116. The center
frame support 108 and the extending side-arms 114, 116 are
configured to secure the HMD 102 to a user's face via a user's nose
and ears, respectively.
[0060] Each of the frame elements 104, 106, and 108 and the
extending side-arms 114, 116 can be formed of a solid structure of
plastic and/or metal, or can be formed of a hollow structure of
similar material so as to allow wiring and component interconnects
to be internally routed through the HMD 102. Other materials are possible as well.
[0061] Each of the lens elements 110, 112 can be formed of any material that can suitably display a projected image
or graphic. Each of the lens elements 110, 112 can also be
sufficiently transparent to allow a user to see through the lens
element. Combining these two features of the lens elements can
facilitate an augmented reality or heads-up display where the
projected image or graphic is superimposed over a real-world view
as perceived by the user through the lens elements.
[0062] The extending side-arms 114, 116 can each be projections
that extend away from the lens-frames 104, 106, respectively, and
can be positioned behind a user's ears to secure the HMD 102 to the
user. The extending side-arms 114, 116 can further secure the HMD
102 to the user by extending around a rear portion of the user's
head. Additionally or alternatively, for example, the HMD 102 can
connect to or be affixed within a head-mounted helmet structure.
Other configurations for an HMD are also possible.
[0063] The HMD 102 can also include an on-board computing system
118, an image capture device 120, a sensor 122, and a
finger-operable touch pad 124. The on-board computing system 118 is
shown to be positioned on the extending side-arm 114 of the HMD
102; however, the on-board computing system 118 can be provided on
other parts of the HMD 102 or can be remotely positioned from the
HMD 102 (e.g., the on-board computing system 118 could be wire- or
wirelessly-connected to the HMD 102). The on-board computing system
118 can include a processor and memory, for example. The on-board
computing system 118 can be configured to receive and analyze data
from the image capture device 120 and the finger-operable touch pad
124 (and possibly from other sensory devices, user interfaces, or
both) and generate images for output by the lens elements 110 and
112.
[0064] The image capture device 120 can be, for example, a camera
that is configured to capture still images and/or to capture video.
In the illustrated configuration, image capture device 120 is
positioned on the extending side-arm 114 of the HMD 102; however,
the image capture device 120 can be provided on other parts of the
HMD 102. The image capture device 120 can be configured to capture
images at various resolutions or at different frame rates. Many
image capture devices with a small form-factor, such as the cameras
used in mobile phones or webcams, for example, can be incorporated
into an example of the HMD 102.
[0065] Further, although FIG. 1A illustrates one image capture
device 120, more image capture devices can be used, and each can be
configured to capture the same view, or to capture different views.
For example, the image capture device 120 can be forward facing to
capture at least a portion of the real-world view perceived by the
user. This forward facing image captured by the image capture
device 120 can then be used to generate an augmented reality where
computer generated images appear to interact with or overlay the
real-world view perceived by the user.
[0066] The sensor 122 is shown on the extending side-arm 116 of the
HMD 102; however, the sensor 122 can be positioned on other parts
of the HMD 102. For illustrative purposes, only one sensor 122 is
shown. However, in an example embodiment, the HMD 102 can include
multiple sensors. For example, an HMD 102 can include sensors
such as one or more gyroscopes, one or more accelerometers, one or
more magnetometers, one or more light sensors, one or more infrared
sensors, and/or one or more microphones. Other sensing devices can
be included in addition or in the alternative to the sensors that
are specifically identified herein.
[0067] The finger-operable touch pad 124 is shown on the extending
side-arm 114 of the HMD 102. However, the finger-operable touch pad
124 can be positioned on other parts of the HMD 102. Also, more
than one finger-operable touch pad can be present on the HMD 102.
The finger-operable touch pad 124 can be used by a user to input
commands. The finger-operable touch pad 124 can sense at least one
of a pressure, position and/or a movement of one or more fingers
via capacitive sensing, resistance sensing, or a surface acoustic
wave process, among other possibilities. The finger-operable touch
pad 124 can be capable of sensing movement of one or more fingers
simultaneously, in addition to sensing movement in a direction
parallel or planar to the pad surface, in a direction normal to the
pad surface, or both, and can also be capable of sensing a level of
pressure applied to the touch pad surface. In some embodiments, the
finger-operable touch pad 124 can be formed of one or more
translucent or transparent insulating layers and one or more
translucent or transparent conducting layers. Edges of the
finger-operable touch pad 124 can be formed to have a raised,
indented, or roughened surface, so as to provide tactile feedback
to a user when the user's finger reaches the edge, or other area,
of the finger-operable touch pad 124. If more than one
finger-operable touch pad is present, each finger-operable touch
pad can be operated independently, and can provide a different
function.
[0068] In a further aspect, HMD 102 can be configured to receive
user input in various ways, in addition or in the alternative to
user input received via finger-operable touch pad 124. For example,
on-board computing system 118 can implement a speech-to-text
process and utilize a syntax that maps certain spoken commands to
certain actions. In addition, HMD 102 can include one or more
microphones via which a wearer's speech can be captured. Configured
as such, HMD 102 can be operable to detect spoken commands and
carry out various computing functions that correspond to the spoken
commands.
[0069] As another example, HMD 102 can interpret certain
head-movements as user input. For example, when HMD 102 is worn,
HMD 102 can use one or more gyroscopes and/or one or more
accelerometers to detect head movement. The HMD 102 can then
interpret certain head-movements as being user input, such as
nodding, or looking up, down, left, or right. An HMD 102 could also
pan or scroll through graphics in a display according to movement.
Other types of actions can also be mapped to head movement.
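A head-movement input of this kind could be mapped to timeline navigation as sketched here. The left/right convention follows paragraph [0047]; the yaw delta would come from the gyroscopes and/or accelerometers, and the 5-degree threshold for ignoring small, unintentional movements is an assumption.

```python
def scroll_from_head_movement(yaw_delta_degrees, threshold=5.0):
    """Map a detected head rotation to a timeline navigation direction."""
    if yaw_delta_degrees <= -threshold:
        return "previous"  # head turned left: cards with earlier times
    if yaw_delta_degrees >= threshold:
        return "next"      # head turned right: cards with later times
    return None            # below threshold: no navigation
```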
[0070] As yet another example, HMD 102 can interpret certain
gestures (e.g., by a wearer's hand or hands) as user input. For
example, HMD 102 can capture hand movements by analyzing image data
from image capture device 120, and initiate actions that are
defined as corresponding to certain hand movements.
[0071] As a further example, HMD 102 can interpret eye movement as
user input. In particular, HMD 102 can include one or more
inward-facing image capture devices and/or one or more other
inward-facing sensors (not shown) that can be used to track eye
movements and/or determine the direction of a wearer's gaze. As
such, certain eye movements can be mapped to certain actions. For
example, certain actions can be defined as corresponding to
movement of the eye in a certain direction, a blink, and/or a wink,
among other possibilities.
[0072] HMD 102 also includes a speaker 125 for generating audio
output. In one example, the speaker could be in the form of a bone
conduction speaker, also referred to as a bone conduction
transducer (BCT). Speaker 125 can be, for example, a vibration
transducer or an electroacoustic transducer that produces sound in
response to an electrical audio signal input. The frame of HMD 102
can be designed such that when a user wears HMD 102, the speaker
125 contacts the wearer. Alternatively, speaker 125 can be embedded
within the frame of HMD 102 and positioned such that, when the HMD
102 is worn, speaker 125 vibrates a portion of the frame that
contacts the wearer. In either case, HMD 102 can be configured to
send an audio signal to speaker 125, so that vibration of the
speaker can be directly or indirectly transferred to the bone
structure of the wearer. When the vibrations travel through the
bone structure to the bones in the middle ear of the wearer, the
wearer can interpret the vibrations provided by BCT 125 as
sounds.
[0073] Various types of bone-conduction transducers (BCTs) can be
implemented, depending upon the particular implementation.
Generally, any component that is arranged to vibrate the HMD 102
can be incorporated as a vibration transducer. Yet further it
should be understood that an HMD 102 can include a single speaker
125 or multiple speakers. In addition, the location(s) of
speaker(s) on the HMD can vary, depending upon the implementation.
For example, a speaker can be located proximate to a wearer's
temple (as shown), behind the wearer's ear, proximate to the
wearer's nose, and/or at any other location where the speaker 125
can vibrate the wearer's bone structure.
[0074] FIG. 1B illustrates an alternate view of the wearable
computing device illustrated in FIG. 1A. As shown in FIG. 1B, the
lens elements 110, 112 can act as display elements. The HMD 102 can
include a first projector 128 coupled to an inside surface of the
extending side-arm 116 and configured to project a display 130 onto
an inside surface of the lens element 112. Additionally or
alternatively, a second projector 132 can be coupled to an inside
surface of the extending side-arm 114 and configured to project a
display 134 onto an inside surface of the lens element 110.
[0075] The lens elements 110, 112 can act as a combiner in a light
projection system and can include a coating that reflects the light
projected onto them from the projectors 128, 132. In some
embodiments, a reflective coating may not be used (e.g., when the
projectors 128, 132 are scanning laser devices).
[0076] In alternative embodiments, other types of display elements
can also be used. For example, the lens elements 110, 112
themselves can include: a transparent or semi-transparent matrix
display, such as an electroluminescent display or a liquid crystal
display, one or more waveguides for delivering an image to the
user's eyes, or other optical elements capable of delivering an in
focus near-to-eye image to the user. A corresponding display driver
can be disposed within the frame elements 104, 106 for driving such
a matrix display. Alternatively or additionally, a laser or LED
source and scanning system could be used to draw a raster display
directly onto the retina of one or more of the user's eyes. Other
possibilities exist as well.
[0077] FIG. 1C illustrates another wearable computing system
according to an example embodiment, which takes the form of an HMD
152. The HMD 152 can include frame elements and side-arms such as
those described with respect to FIGS. 1A and 1B. The HMD 152 can
additionally include an on-board computing system 154 and an image
capture device 156, such as those described with respect to FIGS.
1A and 1B. The image capture device 156 is shown mounted on a frame
of the HMD 152. However, the image capture device 156 can be
mounted at other positions as well.
[0078] As shown in FIG. 1C, the HMD 152 can include a single
display 158 which can be coupled to the device. The display 158 can
be formed on one of the lens elements of the HMD 152, such as a
lens element described with respect to FIGS. 1A and 1B, and can be
configured to overlay computer-generated graphics in the user's
view of the physical world. The display 158 is shown to be provided
in a center of a lens of the HMD 152, however, the display 158 can
be provided in other positions, such as for example towards either
the upper or lower portions of the wearer's field of view. The
display 158 is controllable via the computing system 154 that is
coupled to the display 158 via an optical waveguide 160.
[0079] FIG. 1D illustrates another wearable computing system
according to an example embodiment, which takes the form of a
monocular HMD 172. The HMD 172 can include side-arms 173, a center
frame support 174, and a bridge portion with nosepiece 175. In the
example shown in FIG. 1D, the center frame support 174 connects the
side-arms 173. The HMD 172 does not include lens-frames containing
lens elements. The HMD 172 can additionally include a component
housing 176, which can include an on-board computing system (not
shown), an image capture device 178, and a button 179 for operating
the image capture device 178 (and/or usable for other purposes).
Component housing 176 can also include other electrical components
and/or can be electrically connected to electrical components at
other locations within or on the HMD. HMD 172 also includes a BCT
186.
[0080] The HMD 172 can include a single display 180, which can be
coupled to one of the side-arms 173 via the component housing 176.
In an example embodiment, the display 180 can be a see-through
display, which is made of glass and/or another transparent or
translucent material, such that the wearer can see their
environment through the display 180. Further, the component housing
176 can include the light sources (not shown) for the display 180
and/or optical elements (not shown) to direct light from the light
sources to the display 180. As such, display 180 can include
optical features that direct light that is generated by such light
sources towards the wearer's eye, when HMD 172 is being worn.
[0081] In a further aspect, HMD 172 can include a sliding feature
184, which can be used to adjust the length of the side-arms 173.
Thus, sliding feature 184 can be used to adjust the fit of HMD 172.
Further, an HMD can include other features that allow a wearer to
adjust the fit of the HMD, without departing from the scope of the
invention.
[0082] FIGS. 1E to 1G are simplified illustrations of the HMD 172
shown in FIG. 1D, being worn by a wearer 190. As shown in FIG. 1F, when HMD 172 is worn, BCT 186 is located behind the wearer's ear. As such, BCT 186 is not visible from the perspective shown in FIG. 1E.
[0083] In the illustrated example, the display 180 can be arranged such that, when HMD 172 is worn, display 180 is positioned in front of or proximate to the wearer's eye.
For example, display 180 can be positioned below the center frame
support and above the center of the wearer's eye, as shown in FIG.
1E. Further, in the illustrated configuration, display 180 can be
offset from the center of the wearer's eye (e.g., so that the
center of display 180 is positioned to the right and above of the
center of the wearer's eye, from the wearer's perspective).
[0084] Configured as shown in FIGS. 1E to 1G, display 180 can be
located in the periphery of the field of view of the wearer 190,
when HMD 172 is worn. Thus, as shown by FIG. 1F, when the wearer
190 looks forward, the wearer 190 can see the display 180 with
their peripheral vision. As a result, display 180 can be outside
the central portion of the wearer's field of view when their eye is
facing forward, as it commonly is for many day-to-day activities.
Such positioning can facilitate unobstructed eye-to-eye
conversations with others, as well as generally providing
unobstructed viewing and perception of the world within the central
portion of the wearer's field of view. Further, when the display
180 is located as shown, the wearer 190 can view the display 180
by, e.g., looking up with their eyes only (possibly without moving
their head). This is shown in FIG. 1G, where the
wearer has moved their eyes to look up and align their line of
sight with display 180. A wearer might also use the display by
tilting their head down and aligning their eye with the display
180.
[0085] FIG. 2A illustrates a schematic drawing of a computing
device 210 according to an example embodiment. In an example
embodiment, device 210 communicates using a communication link 220
(e.g., a wired or wireless connection) to a remote device 230. The
device 210 can be any type of device that can receive data and
display information corresponding to or associated with the data.
For example, the device 210 can be a heads-up display system, such
as the head-mounted devices 102, 152, or 172 described with
reference to FIGS. 1A to 1G.
[0086] Thus, the device 210 can include a display system 212
comprising a processor 214 and a display 216. The display 216 can
be, for example, an optical see-through display, an optical
see-around display, or a video see-through display. The processor
214 can receive data from the remote device 230, and configure the
data for display on the display 216. The processor 214 can be any
type of processor, such as a micro-processor or a digital signal
processor, for example.
[0087] The device 210 can further include on-board data storage,
such as memory 218 coupled to the processor 214. The memory 218 can
store software that can be accessed and executed by the processor
214, for example.
[0088] The remote device 230 can be any type of computing device or
transmitter including a laptop computer, a mobile telephone, or
tablet computing device, etc., that is configured to transmit data
to the device 210. The remote device 230 and the device 210 can
contain hardware to enable the communication link 220, such as
processors, transmitters, receivers, antennas, etc.
[0089] Further, remote device 230 can take the form of or be
implemented in a computing system that is in communication with and
configured to perform functions on behalf of a client device, such as
computing device 210. Such a remote device 230 can receive data
from another computing device 210 (e.g., an HMD 102, 152, or 172 or
a mobile phone), perform certain processing functions on behalf of
the device 210, and then send the resulting data back to device
210. This functionality can be referred to as "cloud"
computing.
[0090] In FIG. 2A, the communication link 220 is illustrated as a
wireless connection; however, wired connections can also be used.
For example, the communication link 220 can be a wired serial bus
such as a universal serial bus or a parallel bus. A wired
connection can be a proprietary connection as well. The
communication link 220 can also be a wireless connection using,
e.g., Bluetooth.RTM. radio technology, communication protocols
described in IEEE 802.11 (including any IEEE 802.11 revisions),
Cellular technology (such as GSM, CDMA, UMTS, EV-DO, WiMAX, or
LTE), or Zigbee.RTM. technology, among other possibilities. The
remote device 230 can be accessible via the Internet and can
include a computing cluster associated with a particular web
service (e.g., social-networking, photo sharing, address book,
etc.).
C. EXAMPLE IMAGE PROJECTION
[0091] FIG. 2B shows an example projection of UI elements described
herein via an image 280 by an example head-mountable device (HMD)
252, according to an example embodiment. Other configurations of an
HMD can also be used to present the UI described herein via image
280. FIG. 2B shows wearer 254 of HMD 252 looking at an eye of
person 256. As such, wearer 254's gaze, or direction of viewing, is
along gaze vector 260. A horizontal plane, such as horizontal gaze
plane 264 can then be used to divide space into three portions:
space above horizontal gaze plane 264, space in horizontal gaze
plane 264, and space below horizontal gaze plane 264. In the
context of projection plane 276, horizontal gaze plane 264 appears as a line that divides projection plane 276 into a subplane above the line of horizontal gaze plane 264, a subplane below the line of horizontal gaze plane 264, and the line where horizontal gaze plane 264 intersects projection plane 276. In FIG. 2B, horizontal gaze
plane 264 is shown using dotted lines.
[0092] Additionally, a dividing plane, indicated using dividing
line 274 can be drawn to separate space into three other portions:
space to the left of the dividing plane, space on the dividing
plane, and space to the right of the dividing plane. In the context of projection plane 276, the dividing plane intersects projection plane 276 at dividing line 274. Thus, the dividing plane divides projection plane 276 into: a subplane to the left of dividing line 274,
a subplane to the right of dividing line 274, and dividing line
274. In FIG. 2B, dividing line 274 is shown as a solid line.
[0093] Humans, such as wearer 254, when gazing in a gaze direction,
can have limits on what objects can be seen above and below the
gaze direction. FIG. 2B shows the upper visual plane 270 as the
uppermost plane that wearer 254 can see while gazing along gaze
vector 260, and shows lower visual plane 272 as the lowermost plane
that wearer 254 can see while gazing along gaze vector 260. In FIG.
2B, upper visual plane 270 and lower visual plane 272 are shown
using dashed lines.
[0094] The HMD can project an image for view by wearer 254 at some
apparent distance 262 along display line 282, which is shown as a
dotted and dashed line in FIG. 2B. For example, apparent distance
262 can be 1 meter, four feet, infinity, or some other distance.
That is, HMD 252 can generate a display, such as image 280, which
appears to be at the apparent distance 262 from the eye of wearer
254 and in projection plane 276. In this example, image 280 is
shown between horizontal gaze plane 264 and upper visual plane 270;
that is, image 280 is projected above gaze vector 260. In this
example, image 280 is also projected to the right of dividing line
274. As image 280 is projected above and to the right of gaze
vector 260, wearer 254 can look at person 256 without image 280
obscuring their general view. In one example, the display element
of the HMD 252 is translucent when not active (i.e. when image 280
is not being displayed), and so the wearer 254 can perceive objects
in the real world along the vector of display line 282.
[0095] Other example locations for displaying image 280 can be used
to permit wearer 254 to look along gaze vector 260 without
obscuring the view of objects along the gaze vector. For example,
in some embodiments, image 280 can be projected above horizontal
gaze plane 264 near and/or just above upper visual plane 270 to
keep image 280 from obscuring most of wearer 254's view. Then, when
wearer 254 wants to view image 280, wearer 254 can move their eyes
such that their gaze is directly toward image 280.
D. AN EXAMPLE USER INTERFACE FOR AN HMD
[0096] FIGS. 3 through 15 collectively describe aspects of an
example user interface for an HMD such as discussed above at least
in the context of FIGS. 1A through 2. The HMD can be configured
with a user interface (UI) controller receiving inputs from at
least two user interfaces: a touch-based UI and a voice-based UI.
The touch-based UI can include a touch pad and a button, configured
to receive various touches, such as one-finger swipes in various
directions, two-finger or multi-finger swipes in various
directions, taps, button presses of various durations, and button
releases.
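By way of illustration only, the following minimal Java sketch shows one way a touch-based UI might report recognized operations to a UI controller; the type and method names here are hypothetical and are not drawn from the disclosure.

    // Hypothetical sketch: the touch-based UI translates raw touches into
    // named operations and reports them to a UI controller.
    enum TouchOperation {
        TAP, SWIPE_FORWARD, SWIPE_BACKWARD, SWIPE_DOWN,
        TWO_FINGER_SWIPE_FORWARD, TWO_FINGER_SWIPE_BACKWARD, TWO_FINGER_SWIPE_DOWN,
        BUTTON_PRESS, BUTTON_LONG_PRESS, BUTTON_RELEASE
    }

    interface UiController {
        void onTouchOperation(TouchOperation op);
    }

    class TouchPad {
        private final UiController controller;
        TouchPad(UiController controller) { this.controller = controller; }
        // Called by lower-level input handling once a gesture is recognized.
        void reportGesture(TouchOperation op) { controller.onTouchOperation(op); }
    }

    class TouchDemo {
        public static void main(String[] args) {
            TouchPad pad = new TouchPad(op -> System.out.println("received: " + op));
            pad.reportGesture(TouchOperation.SWIPE_FORWARD); // prints "received: SWIPE_FORWARD"
        }
    }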
[0097] Once a touch is received, the touch-based UI can report the
touch; e.g., a "swipe forward" or "tap" to the HMD, or in some
cases, to a component of the HMD such as a UI controller. In other
embodiments, the HMD can act as the UI controller. As described
herein, the HMD includes any necessary components, such as but not
limited to one or more UI controllers, which are configured to
perform and control the UI operations described herein.
[0098] The voice-based UI can include a microphone configured to
receive various words, including commands, and to report the
received words; e.g., "Call Mom", to the HMD. In some embodiments,
the HMD can include a gaze-based UI that is configured to detect
duration and/or direction of one or more gazes of a wearer of the
HMD. For example, the gaze-based UI can be configured to detect
"dwell time" or how long the wearer gazes in a fixed direction, the
direction of the gaze, a rate of change of the gaze, and additional
information related to wearer gazes. In some cases, the HMD can
generate audible outputs; e.g., tones, words, songs, etc., that can
be heard by the wearer via headphones, speakers, or bone conduction
devices of the HMD.
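For illustration, a dwell time of the kind the gaze-based UI can detect could be tracked along the following lines. This is a minimal Java sketch under assumed conventions (gaze direction reported as an angle, with a small angular tolerance treated as a fixed gaze); the names and the tolerance value are hypothetical.

    // Hypothetical sketch: measure how long the wearer's gaze stays fixed.
    class DwellTimer {
        private static final double FIXED_GAZE_TOLERANCE_DEG = 2.0; // assumed tolerance
        private double lastDirectionDeg = Double.NaN;
        private long gazeStartMillis;

        // Returns the current dwell time in milliseconds; the timer restarts
        // whenever the gaze moves by more than the tolerance.
        long update(double directionDeg, long nowMillis) {
            if (Double.isNaN(lastDirectionDeg)
                    || Math.abs(directionDeg - lastDirectionDeg) > FIXED_GAZE_TOLERANCE_DEG) {
                gazeStartMillis = nowMillis;
            }
            lastDirectionDeg = directionDeg;
            return nowMillis - gazeStartMillis;
        }
    }

    class DwellDemo {
        public static void main(String[] args) {
            DwellTimer t = new DwellTimer();
            t.update(10.0, 0);
            System.out.println(t.update(10.5, 1500)); // 1500: gaze held steady
            System.out.println(t.update(45.0, 2000)); // 0: gaze moved, timer restarts
        }
    }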
[0099] The HMD can generate "cards", also referred to as screens or
images, which are capable of occupying the full display of the HMD
when selected. One card is a home card that is the first card
displayed when the UI is activated, for example shortly after the HMD powers up or when the HMD wakes from a sleep or power-saving mode.
FIG. 3 shows an example home card 300 of an example user interface,
according to an example embodiment. Home card 300 includes
application status indicators 310, device status indicators 312,
hint 314, and a clock shown in large numerals indicating the current
time in the center of home card 300. Application status indicators
310 can indicate which application(s) are operating on the HMD. As
shown in FIG. 3, application status indicators 310 include camera
and Y-shaped road icons to respectively indicate operation of a
camera application and a navigation application. Such indicators
can remind the wearer what applications or processes are presently
running and/or consuming power and/or processor resources of the
HMD.
[0100] Device status indicators 312 can indicate which device(s)
are operating on the HMD and the HMD's status. As shown in FIG. 3, device
status indicators 312 include icons for a wireless network and a
Bluetooth network, respectively, that indicate the HMD is presently
configured for communication via a wireless network and/or a
Bluetooth network. In one embodiment, the HMD may not present
device status indicators 312 on home card 300.
[0101] Hint 314 is shown in FIG. 3 as "ok glass". Hint 314 is shown
in quotes to indicate that the hint is related to the voice-based
UI of the HMD. In some embodiments, hint 314 can be related to the
touch-based UI of the HMD. The words in hint 314 illustrated as "ok
glass" indicate that a wearer should say the words "ok glass" to
activate the voice-based UI of the HMD. In other words, "ok glass"
in this instance is a word (that can also be referred to as "a
hotword") that triggers activation of a voice-based UI. Other
hotwords can also be used.
[0102] As also indicated in the lower portion of FIG. 3, if hint 314 is used successfully a number of times, e.g., five times, the HMD can remove hint 314 from being displayed on home card 300. However, if the HMD has a gaze-based UI and detects that a dwell time of the wearer on the home card exceeds a threshold, such as a 30-second threshold, the HMD can add hint 314 back to home card 300 to remind
the wearer about specific words, e.g., ok glass, used to activate
the voice-based UI. In one embodiment, the hotword presented as
hint 314 on home card 300 can be updated to make the user aware of
other functionality of the HMD, or to suggest queries or actions
based on the HMD's current geographic location or situational
context.
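The hint-display behavior described above (hide the hint after, e.g., five successful uses; re-add it after, e.g., a 30-second dwell on the home card) could be captured by logic along these lines. This is a minimal Java sketch; the class and method names are hypothetical.

    class HintPolicy {
        static final int HIDE_AFTER_USES = 5;           // per the example above
        static final long REDISPLAY_DWELL_MS = 30_000;  // 30-second threshold
        private int successfulUses = 0;
        private boolean hintVisible = true;

        void onHotwordUsedSuccessfully() {
            if (++successfulUses >= HIDE_AFTER_USES) hintVisible = false;
        }
        void onHomeCardDwell(long dwellMs) {
            if (dwellMs > REDISPLAY_DWELL_MS) hintVisible = true; // remind the wearer
        }
        boolean isHintVisible() { return hintVisible; }
    }

    class HintDemo {
        public static void main(String[] args) {
            HintPolicy p = new HintPolicy();
            for (int i = 0; i < 5; i++) p.onHotwordUsedSuccessfully();
            System.out.println(p.isHintVisible()); // false: hint removed after 5 uses
            p.onHomeCardDwell(31_000);
            System.out.println(p.isHintVisible()); // true: hint re-added after long dwell
        }
    }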
[0103] The UI can accept as inputs certain operations performed
using the touch-based UI. The UI can receive these operations and
responsively perform actions to enable the wearer to interact with
the HMD. These operations can be organized into tiers. FIG. 4 lists
example operations of a multi-tiered user model 400 for a user
interface for a head-mountable device (HMD), according to an
example embodiment.
[0104] As shown in FIG. 4, multi-tiered user model 400 has three
tiers: basic, intermediate, and advanced. The basic tier provides
the smallest number of operations of any tier of multi-tiered user
model 400. The intermediate tier includes all operations provided
by the basic tier, along with additional operations not provided by
the basic tier. Similarly, the advanced tier includes all
operations provided by the basic and intermediate tiers, along with
additional operations not provided by either the basic tier or
intermediate tier.
[0105] FIG. 4 shows that the basic tier of multi-tiered user model
400 provides tap, swipe forward, swipe backward, voice, and camera
button press operations. A tap operation can involve a single
physical tap--that is, one quick, slight strike with one or more
fingers on the touch pad of the touch-based UI. A swipe forward
operation, sometimes termed a "swipe right", can involve a movement
forward by one or more fingers touching the touch pad, where
forward is the general direction from the wearer's ear toward the
wearer's eye when the wearer has the HMD on. A swipe backward
operation, sometimes termed a "swipe left", can involve a movement
backward by one or more fingers touching the touch pad, where
backward is the general direction from the wearer's eye toward the
wearer's ear when the wearer has the HMD on. A "swipe down"
operation can involve a downward movement by one or more fingers
touching the touch pad, where downward is the general direction
from the top of the wearer's head toward the wearer's neck when the
wearer has the HMD on.
[0106] While example embodiments in this description make reference
to particular directions of touchpad input such as up, down, left,
right, it should be understood that these are exemplary and that
embodiments where certain operations can be triggered via different
input directions are contemplated.
[0107] In one embodiment, the physical actions used by the wearer
to perform some or all of the herein-described operations can be
customized; e.g., by the wearer and/or other entity associated with
the HMD. For example, suppose the wearer prefers to perform a
physical action of a "double-tap"--that is, one physical tap
quickly followed by a second physical tap--rather than the
above-mentioned single physical tap, to perform a tap operation. In
this embodiment, the wearer and/or other entity could configure the
HMD to recognize a double-tap as a tap operation, such as by
training or setting the HMD to associate the double-tap with the
tap operation. As another example, suppose that the wearer would
like to interchange the physical operations to perform swipe
forward and backward operations; e.g., the swipe forward operation
would be performed using a physical action described above as a
swipe left and the swipe backward operation would be performed
using a physical action described above as a swipe right. In this
embodiment, the wearer could configure the HMD to recognize a
physical swipe left as a swipe forward operation and physical swipe
right as a swipe backward operation. Other customizations are
possible as well; e.g., using a sequence of swipes to carry out the
tap operation.
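For illustration, such customization could be implemented as a remapping table from physical actions to logical operations, along the lines of the following Java sketch; all names are hypothetical.

    import java.util.EnumMap;
    import java.util.Map;

    class GestureMap {
        enum Physical { TAP, DOUBLE_TAP, SWIPE_LEFT, SWIPE_RIGHT }
        enum Logical  { TAP, SWIPE_FORWARD, SWIPE_BACKWARD }

        private final Map<Physical, Logical> map = new EnumMap<>(Physical.class);

        GestureMap() {
            // Default mapping: forward is a physical swipe right, backward a swipe left.
            map.put(Physical.TAP, Logical.TAP);
            map.put(Physical.SWIPE_RIGHT, Logical.SWIPE_FORWARD);
            map.put(Physical.SWIPE_LEFT, Logical.SWIPE_BACKWARD);
        }

        void remap(Physical from, Logical to) { map.put(from, to); }
        Logical resolve(Physical p) { return map.get(p); }

        public static void main(String[] args) {
            GestureMap g = new GestureMap();
            // Customize as in the examples above: double-tap acts as tap, and
            // the swipe directions are interchanged.
            g.remap(Physical.DOUBLE_TAP, Logical.TAP);
            g.remap(Physical.SWIPE_LEFT, Logical.SWIPE_FORWARD);
            g.remap(Physical.SWIPE_RIGHT, Logical.SWIPE_BACKWARD);
            System.out.println(g.resolve(Physical.SWIPE_LEFT)); // SWIPE_FORWARD
        }
    }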
[0108] The tap operation can select a currently visible card. The
swipe forward operation can remove the currently visible card from
display and select a next card for display. The swipe backward
operation can remove the currently visible card from display and
select a previous card for display.
[0109] The swipe down operation can, depending on context, act to go back, go home, or sleep. Going back can remove the currently visible card from display and display a previously-visible card. For example, the previously-visible card can be the card that was most recently viewed; e.g., if card A is currently visible and card B is the previously-viewed card, then the swipe down operation can remove card A from visibility and display card B. Going home can remove the currently visible card from display and display the home card. Sleeping can cause part of the HMD, e.g., the display, or all of the HMD to be deactivated.
[0110] A voice operation can provide access to a voice menu of
operations. Voice interactions with the UI are discussed below in
more detail in the context of FIG. 15. A camera button press can
instruct the HMD to take a photo using a camera associated with
and/or part of the HMD.
[0111] FIG. 4 shows that the intermediate tier of multi-tiered user
model 400 provides tap, swipe forward, swipe backward, voice, and
camera button press operations as described above in the context of
the basic tier. Also, the intermediate tier provides camera button
long press, two finger swipe forward, two finger swipe backward,
and two finger swipe down operations.
[0112] The camera button long press operation can instruct the HMD
to provide a capture menu for display and use. The capture menu can
provide one or more operations for using the camera associated with
HMD. The capture menu is discussed below in more detail in the
context of FIG. 7.
[0113] The two finger swipe forward operation removes the currently visible card from display and selects a next card for display using a "zoomed scroll". The two finger swipe backward operation removes the currently visible card from display and selects the previous card for display using a zoomed scroll. Zoomed scrolls are discussed in more detail in the context of at least FIG. 6A. The two finger swipe down causes the HMD to sleep at this position in the timeline.
[0114] FIG. 4 shows that the advanced tier of multi-tiered user
model 400 provides tap, swipe forward, swipe backward, voice, and
camera button press operations as described above in the context of
the basic tier, as well as camera button long press, two finger
swipe forward, two finger swipe backward, and two finger swipe down
operations described above in the context of the intermediate tier.
The advanced tier also provides one-finger press-and-hold,
two-finger press-and-hold, and nudge operations.
[0115] The one-finger press-and-hold operation zooms, or expands, the display of the current card, or content related to the current card, starting when the wearer presses on the touch-based UI and continuing to zoom as long as the wearer "holds" or keeps pressing on the touch-based UI.
[0116] The two-finger press-and-hold can provide a "clutch"
operation, which can be performed by pressing on the touch-based UI
in two separate spots using two fingers and holding the fingers in
their respective positions on the touch-based UI. After the fingers
are held in position on the touch-based UI, the clutch operation is
engaged. In some embodiments, the HMD recognizes the clutch
operation only after the fingers are held for at least a threshold
period of time; e.g., one second. The clutch operation will stay
engaged as long as the two fingers remain on the touch-based UI.
Clutch operations are discussed in more detail below in the context
of at least FIGS. 6B and 6C.
[0117] The nudge operation can be performed using a short, slight
nod of the wearer's head. For example, the HMD can be configured
with accelerometers or other motion detectors that can detect the
nudge and provide an indication of the nudge to the HMD. Upon
receiving indication of a nudge, the HMD can toggle an activation
state of the HMD. That is, if the HMD is active (e.g., displaying a
card on the activated display) before the nudge, the HMD can
deactivate itself (e.g., turn off the display) in response.
Alternatively, if the HMD is inactive before the nudge but is active enough to detect nudges, e.g., within a few seconds of a notification of message arrival, the HMD can activate itself in response.
[0118] By way of further example, in one scenario, the HMD is
powered on with the display inactive. In response to the HMD
receiving a new text message, an audible chime can be emitted by
the HMD. Then, if the wearer nudges within a few seconds of the
chime, the HMD can activate and present a card with the content of
the text message. If, from the activated state, the user nudges
again, the display will deactivate. Thus, in this example, the user
can interact with the device in a completely hands-free manner.
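The nudge behavior described above (toggle the activation state, but only activate when the nudge closely follows a notification) could be sketched as follows in Java; the names and the length of the nudge window are assumptions, not values from the disclosure.

    class NudgeHandler {
        static final long NUDGE_WINDOW_MS = 3_000; // "within a few seconds" (assumed value)
        private boolean displayActive = false;
        private long lastChimeMillis = -1;

        void onMessageChime(long nowMillis) { lastChimeMillis = nowMillis; }

        // A nudge toggles the activation state: deactivate if active, or
        // activate if the nudge follows a notification chime closely enough.
        void onNudge(long nowMillis) {
            if (displayActive) {
                displayActive = false;
            } else if (lastChimeMillis >= 0 && nowMillis - lastChimeMillis <= NUDGE_WINDOW_MS) {
                displayActive = true;
            }
        }
        boolean isDisplayActive() { return displayActive; }

        public static void main(String[] args) {
            NudgeHandler h = new NudgeHandler();
            h.onMessageChime(0);
            h.onNudge(1_000);                        // within the window: activates
            System.out.println(h.isDisplayActive()); // true
            h.onNudge(2_000);                        // active, so a nudge deactivates
            System.out.println(h.isDisplayActive()); // false
        }
    }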
[0119] As mentioned above, the UI maintains a timeline or ordered
sequence of cards that can be operated on using the operations
described in FIG. 4 immediately above. FIG. 5A shows a scenario 500
of example timeline interactions, according to an example
embodiment.
[0120] Scenario 500 begins with home card 502 being displayed by an
HMD worn by a wearer. Home card 502 and cards 520a-520f can be
arranged as a "timeline" or ordered sequence of cards. In the
example shown in FIG. 5A, each card in timeline 510 has a specific
time associated with the card. The timeline can be ordered based on
the specific time associated with each card. In some cases, the
specific time can be "now" or the current time. For example, home
card 502 can be associated with the specific time of now. In other
cases, the time can be a time associated with an event leading to
the card. For example, FIG. 5A shows that card 520a represents a
photo taken at a time 2 hours ago. Then, card 520a can be
associated with the specific time of 1:28, which is 2 hours before
the current time of 3:28 shown on home card 502.
[0121] Cards 520b-520f represent current cards, or cards associated with the specific time of now, or upcoming cards, or cards associated with a future time. For example, card 520b is a current card that includes an image currently generated by a camera associated with the HMD, and card 520c is a current card that includes an image of a "hangout" or video conference call currently in-progress generated by an application of the HMD. Card 520d is a current card that includes an image and text currently generated by a navigation application/process presently running on the HMD, and card 520e is a current card that includes images and text currently generated by a weather application of the HMD. Card 520f is an upcoming card that includes images and text generated by a calendar application of the HMD indicating an appointment for "Lunch with Monica Kim" in "2 hours".
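For illustration, ordering a timeline by the specific time associated with each card could look like the following Java sketch, using a few of the cards above as sample data; the types are hypothetical.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    class TimelineDemo {
        record Card(String label, long timeMillis) {}

        public static void main(String[] args) {
            long now = System.currentTimeMillis();
            List<Card> timeline = new ArrayList<>(List.of(
                new Card("lunch (in 2 hours)", now + 2 * 60 * 60 * 1000L),
                new Card("photo (2 hours ago)", now - 2 * 60 * 60 * 1000L),
                new Card("home", now)));
            // Order the timeline by the specific time associated with each card.
            timeline.sort(Comparator.comparingLong(Card::timeMillis));
            timeline.forEach(c -> System.out.println(c.label()));
            // photo (2 hours ago), home, lunch (in 2 hours)
        }
    }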
[0122] In scenario 500, the HMD can enable navigation of the time
line using swipe operations. For example, starting at home card
502, a swipe backward operation can cause the HMD to select and
display a previous card, such as card 520a, and a swipe forward
operation can cause the HMD to select and display a next card, such
as card 520b. Upon displaying card 520b, the swipe backward operation can cause the HMD to select and display the previous card, which is home card 502, and the swipe forward operation can cause the HMD to select and display the next card, which is card 520c.
[0123] In scenario 500, there are no cards in timeline 510 that are
previous to card 520a. In one embodiment, the timeline is
represented as a circular timeline. For example, in response to a swipe backward operation on card 520a requesting a previous card for display, the HMD can select card 520f for (re)display, as there are no cards in timeline 510 that are previous to card 520a during scenario 500. Similarly, in response to a swipe forward operation on card 520f requesting a next card for display, the HMD can select card 520a for (re)display, as there are no cards in timeline 510 that are after card 520f during scenario 500.
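A circular timeline of this kind amounts to wrapping the card index at both ends, as in the following minimal Java sketch (the seven-card size matches home card 502 plus cards 520a-520f; the names are hypothetical):

    // Hypothetical sketch of circular-timeline navigation: the card index
    // wraps around at either end of the timeline.
    class CircularTimeline {
        private final int size;
        private int index;
        CircularTimeline(int size, int startIndex) { this.size = size; index = startIndex; }
        int swipeForward()  { index = (index + 1) % size; return index; }        // next card
        int swipeBackward() { index = (index - 1 + size) % size; return index; } // previous card

        public static void main(String[] args) {
            CircularTimeline t = new CircularTimeline(7, 0);
            System.out.println(t.swipeBackward()); // 6: wraps from the oldest to the newest card
        }
    }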
[0124] In another embodiment, instead of a circular representation
of the timeline, when the user navigates to the end of the
timeline, a notification is generated to indicate to the user that
there are no additional cards to navigate to in the instructed
direction. Examples of such notifications could include any of or a
combination of the following: a visual effect, an audible effect, a
glowing effect on the edge of the card, a three-dimensional animation twisting the edge of the card, a sound (e.g., a click), or a textual or audible message indicating that the end of the timeline has been reached (e.g., "there are no cards older than this"). Alternatively, in one embodiment, an attempt by the user to navigate past a card in a direction where there are no additional cards could result in no effect; e.g., swiping right on card 520a results in no perceptible change to the display or card 520a.
[0125] While displaying home card 502, a wearer of the HMD can
recite or utter a hotword, for example the words "ok glass" to
activate the voice-based interface of the HMD. In response, the HMD
can display card 530 that lists some of the commands that can be
uttered by the wearer to interact with the voice-based interface.
FIG. 5A shows example commands as "Google" to perform a search
query, "navigate to" to find directions to a location, "take a
photo" to capture an image using a camera associated with the HMD,
"record a video" to capture a sequence of images and/or associated
sounds, using a camera and/or a microphone associated with the HMD,
and "send a message" to generate and send an e-mail, SMS message,
instant message, or some other type of message.
[0126] While displaying card 530, the wearer can utter something in
response, which can lead to voice interactions with the UI, such as
those discussed below with respect to FIG. 15. The commands capable
of triggering voice interactions are not necessarily limited to
those presented on card 530 at the time the utterance is received.
For example, as the user dwells on card 530, additional commands
can be presented for other features. Further, such commands
presented on card 530 can change over time through further use of
the HMD, or can be remotely updated to surface additional features
or content of the HMD. Still further, similar to the frequent
contact aspects described herein, commands for frequently used
functions of the HMD can be presented on card 530. As such, these
commands can change over time based on use of the HMD by the
wearer.
[0127] In some examples, timelines can become lengthy. The UI
provides operations for speedy use of the UI, such as two-fingered
swipes and clutches, although other gestures to invoke such
navigation operations are possible. FIG. 6A shows an example of
using a two-fingered swipe on a touch-based UI of an HMD for zoomed
scrolling, according to an example embodiment.
[0128] FIG. 5B shows scenario 540 of example timeline interactions
including splicing a new card into timeline 550, according to an
example embodiment. Scenario 540 begins with a wearer of an HMD
using the HMD to observe timeline 550, focusing in on card 550b,
which is the home card for timeline 550. FIG. 5B shows the
focused-on card, card 550b, of timeline 550 using a dotted-line
border. At this point of scenario 540, card 550b is displayed by
the HMD using a single-card view.
[0129] Scenario 540 continues by the HMD receiving an incoming
telephone call from a contact, Kelly Young. For example, the HMD
can be configured with one or more transceivers configured to
establish, maintain, and tear down communication links, such as
communication link 220 discussed above in the context of FIG. 2A,
that utilize one of a number of cellular and/or other technologies
to originate and terminate wireless telephone calls.
[0130] Upon receiving the phone call, the HMD can generate,
retrieve, and/or determine card 560 representing the calling party,
Kelly Young, of the telephone call. Once available, the HMD can
display card 560 using a single-card view.
[0131] In scenario 540, the wearer of the HMD would like to answer
the call from Kelly Young. To accomplish this, the wearer can
perform a tap operation to bring up a contextual menu suitable for
the context of a telephone call. This contextual menu can have
options such as, but not limited to, answering the telephone call,
routing/forwarding the telephone call to another number (e.g.,
another phone, voice mail), ignoring/rejecting the telephone call,
putting the calling party on hold, bridging the calling party into
a three-way or multi-way call, bridging the calling party into a
video conference call, such as a hangout, and saving contact
information related to the telephone call.
[0132] In scenario 540, the first option of the contextual menu for
the telephone call is an answer option. The options of the
contextual menu can be displayed as an overlay on top of card 560
representing the telephone call. In some embodiments, such as shown
in FIG. 5B, card 570 can be generated by (a) displaying text and/or
graphics related to the contextual menu item overlaying (b) a
dimmed version of card 560. Card 570 can then be focused on and
displayed using a single-card view.
[0133] The wearer can answer the call by performing a tap operation
while the answer option is active; e.g., while card 570 is focused
on. In response, the HMD can generate display 580 by determining
where in the timeline a new card representing the telephone call
would be displayed. As the telephone call is a current and most
recent event for the HMD, a card representing the telephone call;
e.g. card 560, would be adjacent to and on the future/now side of a
timeline. That is, for timeline 550 shown at the top of FIG. 5B,
card 560 would be "spliced into", or inserted or placed into the
middle of, timeline 550 between home card 550b and card 550c.
[0134] The HMD can be configured to animate this splicing operation
by showing room being made for the to-be-spliced-in card in the
timeline and then showing the to-be-spliced-in card placed into the
timeline. Once spliced in, the HMD can show the spliced card in
single-card view as the focused-on card. For example, in response
to the tap operation performed while card 570 is displayed, the HMD
can switch to a zoomed-out, or multi-card, display of the timeline
as shown in display 580 showing part or all of cards 550a-550d of
timeline 550. Then, the HMD can show the cards on each side of the
to-be-spliced-in card; e.g., cards 550b and 550c, moving away from
the center of the zoomed-out display as indicated in display
580.
[0135] Then, once cards on each side of the to-be-spliced-in card
have each moved far enough away from the center of the display to
permit insertion of a new card, the to-be-spliced-in card can be
shown in the display between the cards on each side. Display 582
shows a stage of this insertion animation after cards 550b and 550c
have moved far enough apart to permit splicing in card 560--the
to-be-spliced-in card.
[0136] Timeline 550 at the bottom of FIG. 5B shows the result of the splicing operation. Card 560 (the former to-be-spliced-in card) is shown between cards 550b and 550c in timeline 550, and is indicated as being focused on by the HMD. As card 560 is focused on by the HMD, card 560 can be shown in single-card mode.
[0137] Scenario 540 can conclude with the HMD answering the
telephone call before, during, or after the animation of the
splicing operation, and the telephone call between Kelly Young and
the wearer entering the talking state.
[0138] The splicing operation can be performed in reverse when a
card is to be removed from a timeline; that is, a "reverse splice"
can be performed. For example, after the call with Kelly Young is
completed, card 560 could be removed from the timeline 550. In an
embodiment, an animation that is substantially in the reverse of
the splicing process described above is used in conjunction with
removing card 560 from the timeline 550.
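For illustration, splicing a card into a timeline next to the home card, and the reverse splice that removes it, reduce to a positional insert and a removal on an ordered list, as in this Java sketch (animation omitted; labels are hypothetical):

    import java.util.ArrayList;
    import java.util.List;

    class SpliceDemo {
        public static void main(String[] args) {
            List<String> timeline = new ArrayList<>(List.of("550a", "550b(home)", "550c", "550d"));
            int home = timeline.indexOf("550b(home)");
            timeline.add(home + 1, "560(call)"); // splice in on the future/now side of home
            System.out.println(timeline);        // [550a, 550b(home), 560(call), 550c, 550d]
            timeline.remove("560(call)");        // reverse splice after the call ends
            System.out.println(timeline);        // [550a, 550b(home), 550c, 550d]
        }
    }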
[0139] FIG. 5C shows scenario 584 using a multi-timeline display,
according to an example embodiment. Scenario 584 begins with a
wearer of an HMD using View A, shown at the top of FIG. 5C, that
can be generated by the HMD to observe home card 588a displayed in
single-card view 586. In scenario 584, a wearer of an HMD can
switch from single-card view 586 into a multi-timeline view using a
clutch operation, as discussed in detail below in the context of
FIG. 6D. In other scenarios, a different operation or operations
than a clutch can be performed to switch into the multi-timeline
view.
[0140] In scenario 584, multiple cards of main timeline 588 can be
displayed simultaneously upon entering the multi-timeline view.
View B of FIG. 5C, shown just below View A, illustrates a
multi-timeline view and shows three cards 588a, 588b, and 588c of
main timeline 588 in a linear arrangement. Card 588a is a home card
for main timeline 588, card 588b is a card representing an "Email"
from "LunchPal" that arrived "5 min ago", and card 588c is a bundle
card that shows a number of thumbnail images related to a bundle of
contacts called "Friends".
[0141] In scenario 584, card 588a was shown while in single-card view 586 and in an initial multi-timeline view. In some scenarios, the initial multi-timeline view can be centered on the card shown in a previous single-card view; e.g., home card 588a. In other scenarios, multiple timelines can be displayed as part of the initial multi-timeline view; for example, main timeline 588 can be accompanied by one or more timelines showing cards representing one or more contacts, photos, previous events, future events, and/or other cards.
[0142] In scenario 584, the wearer of the HMD can select a card for
use by controlling a selection region; e.g., focus 688a shown in
FIG. 6D. A given card, such as card 588b, can be selected when the
selection region is aligned with the given card. In this context,
the selection region can be aligned with a given card in a display
when the selection region is placed over the given card in the
display, the selection region substantially overlaps the given card
in the display, and/or a UI action (e.g., a tap of a touchpad, a
click of a mouse, a key press) is performed when the selection
region overlaps the given card in the display. Other techniques for
aligning a selection region and a given card are possible as well.
In some embodiments, the selection region substantially overlaps
the given card when at least 50% of the selection region overlaps
the given card in the display. In some embodiments, the HMD can be
configured to detect head movements and the selection region can be
moved using the head movements.
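The "substantially overlaps" test described above (at least 50% of the selection region over the card) could be computed as in the following Java sketch; the rectangle representation and names are assumptions.

    // Hypothetical sketch: the selection region selects a card when at least
    // half of the region's area lies over the card.
    class SelectionRegion {
        static double overlapArea(double ax, double ay, double aw, double ah,
                                  double bx, double by, double bw, double bh) {
            double ox = Math.max(0, Math.min(ax + aw, bx + bw) - Math.max(ax, bx));
            double oy = Math.max(0, Math.min(ay + ah, by + bh) - Math.max(ay, by));
            return ox * oy;
        }

        static boolean substantiallyOverlaps(double sx, double sy, double sw, double sh,
                                             double cx, double cy, double cw, double ch) {
            return overlapArea(sx, sy, sw, sh, cx, cy, cw, ch) >= 0.5 * sw * sh;
        }

        public static void main(String[] args) {
            // Selection region 10x10 at (0,0); card 10x10 at (4,0): 60% overlap.
            System.out.println(substantiallyOverlaps(0, 0, 10, 10, 4, 0, 10, 10)); // true
        }
    }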
[0143] In scenario 584, the wearer of the HMD selects card 588b
and, after the selection of card 588b, View C can be generated,
which is shown below and to the left of View B in FIG. 5C. View C
shows card 588b of main timeline 588 and a linear arrangement of
three action cards 590a, 590b, and 590c shown above card 588b; that
is, View C shows multiple linear arrangements simultaneously. As
shown in View C, the linear arrangement of action cards starts with
card 590a that is directly above selected card 588b, and the linear
arrangement of action cards is adjacent to, above, and parallel to
main timeline 588. Card 588a is shown in View C as greyed out to
indicate that card 588a is not selected.
[0144] Upon selection of action card 590a to "View All", the wearer
can view the e-mail represented by card 588b. Selection of action
card 590b to "Share" can enable the wearer to share; e.g., reply
to, forward, post to a website, etc., the e-mail represented by
card 588b. Selection of action card 590c to "Delete" can permit the
wearer to delete the e-mail represented by card 588b.
[0145] In scenario 584, the wearer selects card 590a to view all of
the e-mail represented by card 588b. After selection of card 590a,
the content of the e-mail is shown using three content cards 592a,
592b, and 592c shown in View D as adjacent to and above selected
card 590a. View D is shown directly to the right of View C in FIG.
5C.
[0146] View D also shows that the linear arrangement of content cards begins with card 592a, which is shown directly above selected card 590a. View D does not show unselected action cards 590b and 590c; in some embodiments, unselected cards can be displayed. In particular scenarios, unselected but displayed cards can be displayed in a visually distinct manner to indicate non-selection; e.g., shown with a grey background as for card 588a in View C.
[0147] Scenario 584 continues with the wearer of the HMD
manipulating the selection region to return to the main timeline
588 and select card 588c as shown in View E. FIG. 5C shows View E
below and to the left of View D. As mentioned above, card 588c is a
bundle card representing a group of related cards; in this example,
a group of contact cards. Each contact card can have an indication
that the card is a contact card. In some embodiments, cards represented by bundle card 588c can have an indication that the card is in the "Friends" bundle of cards/contacts. As such, the HMD
can determine cards in the "Friends" bundle by searching for each
card having an indication that the card is in the "Friends" group
of cards.
[0148] Upon selection of card 588c, the HMD can generate View F, which shows contact cards 594a and 594b of the "Friends" bundle displayed in the linear arrangement with main timeline 588. View F is shown in FIG. 5C directly below View E. Bundle card 588c is shown by View F as remaining in the linear arrangement with main timeline 588. In some scenarios, contact cards 594a and 594b, as well as
additional cards in the "Friends" bundle can be shown in a linear
arrangement adjacent to the linear arrangement showing a selected
bundle card; e.g., card 588c. In other scenarios, upon selection of
bundle card 588c, bundle card 588c is no longer displayed; rather,
the bundle card can be considered to be replaced by the content of
the bundle.
[0149] Also, the splicing operation can utilize cards generated by
other applications. For example, suppose a card representing a
navigation application/process is displayed on a timeline, and the
wearer uses a tap operation to activate the navigation
application/process to provide directions to a destination. To show
the directions to the destination, the navigation
application/process can generate a results card that includes one
or more directions. When first generated, the results card can be
spliced into the timeline, using the splicing operation described
immediately above. When the wearer arrives at the destination, the
results card can be removed using the reverse splice operation
described above. In some scenarios, multiple cards can be spliced
in and/or reverse spliced out of a timeline simultaneously or
substantially as so, such as when being added to or leaving a
multi-party hangout, telephone call, or other communication.
[0150] To speed movement in selecting next card(s) in the timeline,
a wearer can swipe forward with two fingers, as shown in FIG. 6A,
to perform a zoomed scroll to a next card. Similarly, to speed
movement in selecting previous card(s) in the timeline, a wearer
can swipe backward with two fingers, as also shown in FIG. 6A, to
perform a zoomed scroll to a previous card.
[0151] Upon receiving a UI operation for a zoomed scroll, for
example, a two-fingered swipe forward, a reduced-size view of cards
can be displayed in the resulting timeline 610. That is, as shown
in FIG. 6A, multiple cards can be shown in example display 612
generated by the HMD. A swipe or drag operation associated with the
zoomed scroll can move content faster, e.g., 4 times faster, than
when performing a regular swipe or drag operation. Inertial free
scrolling can be performed as part of zoomed scrolling. After the
zoomed scroll completes, the focus for the UI is on card 614 of
timeline 610. FIG. 6A shows card 614 outlined using a thick dashed
line in the center of display 612.
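For illustration, the zoomed-scroll speedup could be modeled as a simple multiplier on the regular scroll delta, as in this Java sketch; the 4x factor follows the example above, and the names are hypothetical.

    class ScrollModel {
        static final double ZOOMED_SCROLL_FACTOR = 4.0; // e.g., 4 times faster

        // Content moved per unit of finger movement, for regular and zoomed scrolls.
        static double contentDelta(double fingerDelta, boolean zoomedScroll) {
            return zoomedScroll ? fingerDelta * ZOOMED_SCROLL_FACTOR : fingerDelta;
        }

        public static void main(String[] args) {
            System.out.println(contentDelta(25.0, false)); // 25.0: regular swipe
            System.out.println(contentDelta(25.0, true));  // 100.0: zoomed scroll
        }
    }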
[0152] A timeline that has been released after the zoomed scroll
can stay zoomed out, or can continue with reduced image views,
until a minimum velocity threshold for the timeline is reached.
After the minimum velocity threshold is reached, display 612 can be
instructed to zoom to the card that is closest to the center of
display 612; e.g., display 612 can zoom to card 614. That is, the
HMD can show card 614 as large as possible within display 612.
[0153] Additional techniques for rapid movement within a timeline
and between timelines can be provided by the UI. For example, a
clutch operation can lead to generation and display of a multi-card
display, such as shown in FIG. 6B, or a multi-timeline display,
such as shown in FIG. 6C. Navigation within the multi-card display
and/or multi-timeline display can, in some embodiments, be
performed using head movements. In other embodiments, the
multi-card display or multi-timeline display in toto can be focused
on, or displayed by the HMD. Thus, to aid navigation, a sub-focus
can be implemented to highlight a card or a timeline within a
multi-card or multi-timeline display.
[0154] FIG. 6B shows a scenario 620 for using clutch operation 642
to generate a multi-card display 634a, according to an example
embodiment. Scenario 620 begins with an HMD having timeline 630
with cards 630a through 630g, and with a focus on card 630d. During
scenario 620, prior to clutch 642, the HMD displays cards in the
timeline using a single-card view, while solely displaying a
focused-upon card. As the focus is on card 630d, which FIG. 6B shows
as a photo of a woman's face, the HMD displays a single-card view
of card 630d.
[0155] Scenario 620 continues with a wearer of the HMD performing
clutch operation 642 using the touch-based UI of the HMD. A clutch
operation can involve pressing on the touch-based UI of the HMD
using two fingers and holding the two-finger press until the HMD
recognizes the clutch operation 642 has been performed. Other
gestures, techniques, inputs or time thresholds can be used to
trigger the clutch operation. For example, in certain embodiments,
a three-finger gesture or a voice-action could be used to engage
and/or disengage the clutch operation.
[0156] Upon recognition of clutch operation 642, in scenario 620,
the HMD can generate and display multi-card display 634a, which is
shown in an expanded view as multi-card display 634b. In some
embodiments, the HMD can focus on the entire multi-card display
634a using focus 636. In other embodiments, the HMD can focus on a subset of cards, such as, but not limited to, a single card, a row
of cards, a column of cards, a block of cards, or some other
selection of cards, within multi-card display 634a using sub-focus
638. For example, in scenario 620, the HMD is configured to display
sub-focus 638 on a single card. In some embodiments, the sub-focus
can remain on one or more cards at or near the center of the
display.
[0157] As shown in FIG. 6B using expanded multi-card display 634b,
the multi-card display shows nine cards: cards 630a through 630g of
timeline 630 and two other cards 640a and 640b not shown as part of
timeline 630. The wearer of the HMD can navigate around multi-card
display 634a, 634b using head movements, such as moving the
wearer's head up, down, left, and/or right. In some embodiments,
gaze tracking can be used in place of or in addition to head
movements for navigating around multi-card display 634a, 634b
and/or multi-timeline display 664a, 664b.
[0158] In scenario 620, "wrap-around" movements, or moving off the
end of a row or column to the respective other end of the row or
column, are enabled. Then, in response to respective movements
upward, downward, leftward, or rightward by the head of the wearer,
the sub-focus 638 can move from card 630d, as shown in FIG. 6B, to
respective cards 630a, 630g, 630f, or 630e. In particular
embodiments, wrap-around can be inhibited, so moving the wearer's
head leftward will not move sub-focus 638 from card 630d to card
630f, but rather sub-focus 638 will stay at the left-end of the
middle row on card 630d.
[0159] In some embodiments, in response to respective movements
diagonally up-and-left, up-and-right, down-and-left, and
down-and-right by the head of the wearer, the sub-focus 638 can
move from card 630d, as shown in FIG. 6B, to respective cards 630c,
630b, 640b, or 640a. Other types of head movements and/or UI
operations can be used as well or instead with multi-card display
634a, 634b, including but not limited to head movements and/or UI
operations that move the focus faster than and/or slower than one
card at a time, zooming in and out, reshaping sub-focus 638,
selecting card(s), and deselecting card(s).
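For illustration, sub-focus movement on the 3x3 multi-card display, with wrap-around either enabled or inhibited, could be implemented as modular arithmetic on a row/column pair, as in this Java sketch; the names are hypothetical.

    class GridFocus {
        static final int ROWS = 3, COLS = 3;
        int row = 1, col = 0; // e.g., card 630d: middle row, leftmost column
        boolean wrapAround = true;

        void move(int dRow, int dCol) {
            int r = row + dRow, c = col + dCol;
            if (wrapAround) {
                // Moving off one end of a row or column continues at the other end.
                row = ((r % ROWS) + ROWS) % ROWS;
                col = ((c % COLS) + COLS) % COLS;
            } else if (r >= 0 && r < ROWS && c >= 0 && c < COLS) {
                row = r; col = c;
            } // otherwise wrap-around is inhibited: stay on the current card
        }

        public static void main(String[] args) {
            GridFocus f = new GridFocus();
            f.move(0, -1); // leftward with wrap-around: 630d -> 630f
            System.out.println(f.row + "," + f.col); // 1,2
        }
    }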
[0160] In some embodiments, sub-focus 638 may not be used. For example, in these embodiments, a leftward head movement can move each of cards 630b, 630c, 630e, 630f, 640a, and 640b to the left by one card and bring in new cards to the "right" of these cards (new cards not shown in FIG. 6B) onto multi-card displays 634a and 634b. The new cards can be displayed in the respective positions of cards 630c, 630f, and 640b, and cards 630a, 630d, and 630g can be removed from multi-card displays 634a and 634b. Also, a rightward head movement can move each of cards 630a, 630b, 630d, 630e, 630g, and 640a to the right by one card and bring in new cards to the "left" of these cards (not shown in FIG. 6B) onto multi-card displays 634a and 634b. The new cards can be displayed in the respective positions of cards 630a, 630d, and 630g, and cards 630c, 630f, and 640b can be removed from multi-card displays 634a and 634b.
[0161] In these embodiments, an upward head movement can: (1) bring into view a new row of cards considered to be "above" the top row of cards; e.g., cards in the positions of cards 630a, 630b, and 630c of multi-card displays 634a and 634b, (2) display the new row of cards as the top row of multi-card displays 634a and 634b, (3) move the top row of cards down to be displayed as the middle row of cards; e.g., display cards 630a, 630b, and 630c in the positions of cards 630d, 630e, and 630f of multi-card displays 634a and 634b, and (4) move the middle row of cards down to the bottom row of cards; e.g., display cards 630d, 630e, and 630f in the positions of cards 630g, 640a, and 640b of multi-card displays 634a and 634b, thus removing the bottom row of cards; e.g., cards 630g, 640a, and 640b, from view on multi-card displays 634a and 634b.
[0162] In these embodiments, a downward head movement can: (1) bring into view a new row of cards considered to be "below" the bottom row of cards of multi-card displays 634a and 634b, (2) display the new row of cards as the bottom row of multi-card displays 634a and 634b, (3) move the bottom row of cards up to be displayed as the middle row of cards; e.g., display cards 630g, 640a, and 640b in the positions of cards 630d, 630e, and 630f of multi-card displays 634a and 634b, and (4) move the middle row of cards up to the top row of cards; e.g., display cards 630d, 630e, and 630f in the positions of cards 630a, 630b, and 630c of multi-card displays 634a and 634b, thus removing the top row of cards; e.g., cards 630a, 630b, and 630c, from view on multi-card displays 634a and 634b.
[0163] Scenario 620 continues with clutch 642 being released while sub-focus 638 is on card 630g. Clutch 642 can be released by the wearer removing one or both of their fingers from the touch-based UI of the HMD. After clutch 642 is released, the HMD can use a single-card view to display either (a) card 630d, as the card being focused on before clutch operation 642 began, or (b) card 630g, as the card focused on using sub-focus 638 just prior to release of clutch 642. In response to clutch 642 being released for HMD embodiments not using sub-focus 638, the HMD can use a single-card view to display card 630d.
[0164] FIG. 6C shows a scenario 650 for using clutch operation 680
to generate a multi-timeline display 664a, according to an example
embodiment. Scenario 650 begins with an HMD displaying main
timeline 660 with a focus on card 660a. During scenario 650 prior
to clutch 680, the HMD displays cards in main timeline 660 using a
single-card view, displaying a focused-upon card. As the focus is
on card 660a, the HMD displays a single-card view of card 660a.
[0165] Scenario 650 continues with a wearer of the HMD performing
clutch operation 680. Upon recognition of clutch operation 680, in
scenario 650, the HMD can generate and display multi-timeline
display 664a, which is shown in an expanded view as multi-timeline
display 664b. In some embodiments, the HMD can focus on the entire
multi-timeline display 664a using focus 666. In other embodiments, the HMD can focus on a subset of cards and/or timelines, such as, but not limited to, a single card, one, some, or all cards on a timeline, a column of cards across one or more timelines, a block of cards across multiple timelines, a single timeline, a group of timelines, or some other selection of cards and/or timelines, within multi-timeline display 664a using sub-focus 668.
[0166] As shown in FIG. 6C using expanded multi-timeline display
664b, the multi-timeline display shows five timelines (TLs): timelines
670, 672, 674, 676, and 678. The multi-timeline display displays
five cards for each of displayed timelines 670, 672, 674, 676, and
678. The timelines can be selected for display based on a type of
object displayed in a card; e.g., a timeline having only photos,
only photo bundles, only messages, only message bundles, only cards
representing active applications. Additional criteria can be used
to further select items for a timeline; e.g., for photo objects,
some criteria can be: only photos taken before (or after) a
predetermined date, within a date range, at a location, as part of
a photo bundle, photos that were shared, photos that were shared
and with one or more messages received in response, etc. Other
criteria for photo objects and/or other types of objects are
possible as well for selection in a timeline. For example, in
scenario 650, all of the cards in timeline 670 represent photos in
a photo bundle, all of the cards in timeline 672 represent photos
taken in a given city location, and all of the cards in timeline
678 represent contacts that do not have associated
photos/images.
[0167] The additional timelines presented can represent different
user accounts associated with the HMD, for example, a first
timeline could be cards generated by a user's work account, e.g.
photos, events, contacts, email, messages, sent to or received by
his/her work account, e.g. user@google.com. In this example, the
HMD could be configured to allow access to multiple user accounts,
such as the user's personal account, e.g. user@gmail.com; such that
a second timeline accessible from the grid view could be cards
generated by the user's personal account, e.g. photos, events,
contacts, email, messages, sent to or received by his/her personal
account. This way, the user can easily interact with the HMD via
different profiles or personas, such as work or personal.
[0168] The timelines can be selected to be part or all of the main
timeline; for example, FIG. 6C shows that timeline 674 includes
five cards selected from main timeline 660. Cards can be selected
from main timeline 660 randomly, based on focus 662, based on a
type of object represented on the main timeline; e.g., select only
cards representing active applications visible from the main
timeline, and/or based on other criteria. For example, in scenario
650, timeline 674 includes card 660a, which was the focused-on card
prior to clutch 680, and the two cards on each side of card 660a in
main timeline 660. Other criteria for selecting cards from a main
timeline are possible as well.
[0169] One or more timelines can act as contextual menu(s) for
multi-timeline display 664a, including possible operations that can
be performed from multi-timeline display 664a, operations on
multi-timeline display 664a, and/or other operations. For example,
timeline 678 includes a menu of operations including navigate, take
a video, take a photo, remove a timeline option, and add a
timeline. Other operations are possible as well. For example, if
clutch is engaged from card 660a in main timeline 660, the
multi-timeline display 664a could present a contextual menu of
operations that could be executed based off of the presently
selected card 660a, e.g. share this card, delete the card, remove
from timeline, add to bundle, etc.
[0170] In one embodiment, the wearer of the HMD can navigate around
multi-timeline display 664a, 664b using head movements. For
example, in scenario 650, the HMD is configured to display
sub-focus 668, shown as a dotted line on both multi-timeline
displays 664a and 664b, focusing on a single timeline; e.g., timeline 674.
[0171] In one example of scenario 650, "wrap-around" movements, or
moving off the end of a row or column to the respective other end
of the row or column, are enabled. Then, in response to respective
movements upward, downward, leftward, or rightward by the head of
the wearer, the sub-focus 668 can move from timeline 674, as shown
in FIG. 6C, to respective timelines 672, 676, 672, or 676. In
particular embodiments, wrap-around can be inhibited, so moving the
head of the wearer leftward will not move sub-focus 668 from
timeline 674 to timeline 672 and moving the head of the wearer
rightward will not move sub-focus 668 from timeline 674 to timeline
676, but rather sub-focus 668 will stay on timeline 674 in response
to either the leftward or the rightward movement.
[0172] In some embodiments, in response to respective movements diagonally up-and-left, up-and-right, down-and-left, and down-and-right by the head of the wearer with wrap-around enabled, the sub-focus 668 can move from timeline 674, as shown in FIG. 6C, to respective timelines 672, 672, 676, and 676. In particular embodiments, wrap-around can be inhibited, but as each of the diagonal movements has an up or down component, movement to a respective timeline will succeed when sub-focus 668 is on timeline 674.
[0173] In some embodiments, sub-focus 668 may not be used. For
example, in these embodiments, a leftward head movement can move
each of timelines 670, 672, 674, 676, 678 to the left on
multi-timeline display 664a,664b by one or more cards and a
rightward head movement can move each of timelines 670, 672, 674,
676, 678 to the right on multi-timeline display 664a,664b by one or
more cards. Also in these embodiments, an upward head movement can bring a timeline "above" timeline 670 (not shown in FIG. 6C) into view as a top-most timeline on multi-timeline displays 664a and 664b, move down each of timelines 670, 672, 674, 676 by one timeline on multi-timeline displays 664a and 664b, and remove timeline 678 from view. Further, a downward head movement can bring a timeline "below" timeline 678 (not shown in FIG. 6C) into view as a bottom-most timeline on multi-timeline displays 664a and 664b, move up each of timelines 672, 674, 676, 678 by one timeline on multi-timeline displays 664a and 664b, and remove timeline 670 from view.
[0174] Other types of head movements and/or UI operations can be
used as well or instead with multi-timeline display 664a, 664b,
including but not limited to head movements and/or UI operations
that move the focus faster than and/or slower than one timeline at
a time, enable navigation of cards within a timeline, which can
include some or all of the navigation techniques discussed above
regarding multi-card displays 634a and 634b, zooming in and out,
reshaping sub-focus 668, selecting card(s)/timeline(s), and
deselecting card(s)/timeline(s).
[0175] Scenario 650 continues with clutch 680 being released while
sub-focus 668 is on timeline 670. After clutch 680 is released, the
HMD can use a single-card view to display a card on selected
timeline 670.
[0176] FIG. 6D shows scenario 682 for using head movements to
navigate a multi-timeline display, according to an example
embodiment. Scenario 682 begins with the HMD displaying a
single-card view 684 of a contact named "George Farley"
participating in a hangout, as shown at the upper-left hand corner
of FIG. 6D. A hangout can be indicated by the HMD using icon 684a
of a camera inside of a speech balloon. Scenario 682 continues with
the wearer of the HMD performing a clutch operation, or pressing
two fingers on the touch-based UI of the HMD for at least one
second.
[0177] After determining a clutch operation was performed, the HMD
can generate multi-timeline display 686a, shown in the
upper-right-hand corner of FIG. 6D as a rectangle with thick lines.
Multi-timeline display 686a is shown displaying a focus 688a and
parts of three timelines, including timeline (TL) 690a. In scenario
682, focus 688a, shown in FIG. 6D as a circular arrangement of gray
trapezoids, rests or focuses on card 684. Focus 688a rests on card 684, as card 684 was the card previously being displayed in a single-card view. In one embodiment, the focus 688a element may not be presented.
[0178] During scenario 682, head movements can be used to target items
and move between levels of navigation. Each level of navigation can
be represented in a multi-timeline display as one or more cards on
a timeline. For example, multi-timeline display 686a shows that if
the wearer made a leftward head movement, card 692a on timeline
690a, representing a navigation application/process would be
centered on by focus 688a. Multi-timeline display 686a also shows
that if the wearer made a rightward head movement, card 692b on
timeline 690a representing a weather application would be centered
on by focus 688a. Similarly, multi-timeline display 686a shows that
if the wearer made respective upward or downward head movements,
respective cards 692c or 692d would be centered on by focus
688a.
[0179] Scenario 682 continues with the wearer making a downward
head tilt. After determining a downward head movement was
performed, the HMD can move focus 688a downward onto card 692d with
text of "expand". The HMD can generate multi-timeline display 686b
with focus 688b on card 692d, as shown in the center-left portion
of FIG. 6D. Multi-timeline display 686b shows that card 692d is
part of timeline 690b.
[0180] Timeline 690b represents a contextual menu for the hangout, which includes card 692d to expand, or show other members in, the hangout, as well as cards to invite, or request that other people join the hangout, to end the hangout, and to mute sound from one or more persons at the hangout.
Below timeline 690b, a card 694a representing an attendee of the
hangout is shown, in part to represent the next level of navigation
if the wearer were to decide to make another downward head
motion.
[0181] Scenario 682 continues with the wearer of the HMD making
another downward head motion. After determining a downward head
movement was performed, the HMD can move focus 688b downward onto
card 694a, which represents George Farley as a hangout
attendee.
[0182] The HMD can generate multi-timeline display 686c with focus
688c on card 694a, as shown in the center-right portion of FIG. 6D.
Multi-timeline display 686c shows that card 694a is part of
timeline 690c, which represents attendees of the hangout. FIG. 6D
shows that there are three other attendees at the hangout beyond
the wearer: Pieter Vrijman represented by card 694b, George Farley
represented by card 694a, and Richard The, who is represented by
card 694c. Below card 694a is card 696a with text of "mute",
representing a contextual menu of operations regarding attendees of
hangouts. Card 696a also represents the next level of navigation if
the wearer were to decide to make another downward head motion.
[0183] Scenario 682 continues with the wearer of the HMD making a
rightward head motion. After determining a rightward head movement
was performed, the HMD can move focus 688c rightward onto card
694c, which represents Richard The. The HMD can generate
multi-timeline display 686d with focus 688d on card 694c, as shown
in the lower-left corner of FIG. 6D. Below card 694c is card 696b
with text of "mute", representing a contextual menu of operations
regarding attendees of hangouts and the next level of navigation
corresponding to downward head movements.
[0184] Scenario 682 continues with the wearer releasing his/her
fingers from the touch-based UI of the HMD, thereby ending the
clutch operation. After determining the clutch operation has
completed, the HMD can revert to a single-card view as shown at the
lower right hand corner of FIG. 6D. In some embodiments, the single-card view can display the last-focused card during multi-timeline display. For example, the last focus; e.g., focus 688d, during multi-timeline display was on card 694c representing Richard The. Then, the HMD can display last-focused card 694c in a single-card view to end scenario 682.
[0185] The user interface can use contextual menus to designate
operations for specific objects, applications, and/or cards. FIG. 7
shows user-interface scenario 700 including contextual menus,
according to an example embodiment. A contextual menu is a menu of
operations or other possible selections that are based on a card.
For example, if the card is a card representing a video, a contextual menu can include operations such as sharing the video, editing the video, watching the video, deleting the video, adding the video to a "video bundle" or collection of videos, annotating the video, adding, deleting, and/or editing sound associated with the video, and/or other operations related to the video; a contextual menu can include more or fewer options.
[0186] Scenario 700 begins with the HMD receiving a tap while
displaying image 710. In some embodiments, image 710 is part of a
timeline. In response to the tap, the HMD can select operations for
a contextual menu, such as sharing and deleting the photo, based on
the displayed card; e.g., image 710. To display the contextual
menu, the HMD can then display card 720 to indicate that a share
operation can be performed on image 710. Card 720 also shows two
dots to indicate that the current contextual menu has two options,
with the leftmost dot being black and the rightmost dot being white
to indicate that the current Share option is the first option of
the two options.
[0187] To select the other option in the contextual menu, a wearer
can perform a swipe operation while card 720 is displayed. In
response to the swipe operation, card 722 can be displayed, where
card 722 is associated with a delete operation for image 710. As with
card 720, card 722 shows two dots to indicate that the current
contextual menu has two options, with the leftmost dot being white
and the rightmost dot being black to indicate that the current
Delete option is the second option of the two options. A swipe
operation while displaying card 722 causes (re)display of card
720.
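The two-option menu behavior described above, including the dot
indicators, can be sketched as a small cyclic selector. The Python
sketch below is illustrative only; the class name and dot rendering
are assumptions.

    class ContextualMenu:
        """Cycle through contextual-menu options with swipes."""

        def __init__(self, options):
            self.options = options   # e.g., ["Share", "Delete"]
            self.index = 0

        def swipe(self):
            # A swipe advances to the next option, wrapping at the end.
            self.index = (self.index + 1) % len(self.options)
            return self.options[self.index]

        def dots(self):
            # A black (filled) dot marks the current option; the rest are white.
            return "".join("●" if i == self.index else "○"
                           for i in range(len(self.options)))

    menu = ContextualMenu(["Share", "Delete"])
    print(menu.options[menu.index], menu.dots())   # Share ●○
    print(menu.swipe(), menu.dots())               # Delete ○●
    print(menu.swipe(), menu.dots())               # Share ●○ (wraps around)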
[0188] If a tap operation is received while displaying card 720,
the HMD can interpret the tap operation as selection of the Share
option of the contextual menu. In response, a "people chooser" can
be used to select a first person for sharing.
[0189] The people chooser can display card 730, which includes an
image and a name of a first contact. FIG. 7 shows that card 730
indicates the first person as "Jane Smith". In response to viewing
card 730, the wearer can instruct the people chooser to show other
possible recipients of image 710 by swiping through a list of
contacts. In scenario 700, the list of contacts can be represented
by cards that include: card 732a showing "Another Person", card
732b showing "Friends", and card 732c indicating other person(s),
circle(s), and/or social network(s) for sharing photos. People
choosers are also discussed in more detail at least in the context
of FIG. 8.
[0190] FIG. 7 shows that swiping left while card 732c is displayed
to request a next possible recipient can lead to re-displaying card
730 associated with Jane Smith. Similarly, FIG. 7 shows that
swiping right while card 730 is displayed to request a previous
possible recipient can lead to card 732c.
[0191] In scenario 700, the wearer taps on the touch-based UI while
card 730 is displayed, indicating that the wearer wishes to share
image 710 with Jane Smith. In response to this tap, card 734 is
displayed, which includes the word "Sending" and a progress bar. In
scenario 700, the HMD is configured to wait for a "grace period",
such as one or a few second(s), before sending or deleting images,
to give the wearer a brief interval to cancel the send or delete
operation.
[0192] The progress bar on card 734 can show the passing of the
time of the grace period for sending image 710. Once the grace
period expires or a tap is received, the HMD can send image 710,
e.g., via e-mail or multi-media message, to Jane Smith. If image
710 is sent successfully, the HMD can display card 736 with text of
"Sent" to indicate that image 710 was indeed successfully sent to
Jane Smith. After displaying card 736, the HMD can return to a
timeline display, such as discussed above in the context of at
least FIG. 5A.
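One way to realize such a grace period is a cancellable timer that
commits the action when the period lapses, commits immediately on a
tap, and aborts on a swipe down. The sketch below assumes a
one-second default and a threading-based timer; these choices and
all names are illustrative, not prescribed by the description.

    import threading

    class GracePeriodAction:
        """Run an action after a grace period unless cancelled first."""

        def __init__(self, action, grace_seconds=1.0):
            self.action = action
            self.done = threading.Event()
            self.timer = threading.Timer(grace_seconds, self._commit)

        def start(self):
            # Display of the progress bar would track this timer.
            self.timer.start()

        def _commit(self):
            if not self.done.is_set():
                self.done.set()
                self.action()

        def tap(self):
            # A tap skips the remainder of the grace period.
            self.timer.cancel()
            self._commit()

        def swipe_down(self):
            # A swipe down cancels the pending send or delete.
            self.timer.cancel()
            self.done.set()

    send = GracePeriodAction(lambda: print("Sent"), grace_seconds=1.0)
    send.start()   # "Sent" prints one second later unless cancelled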
[0193] If image 710 is not sent successfully or was cancelled, such
as by the wearer performing a swipe down operation during the grace
period, the HMD can display card 738 to indicate to the wearer that
the HMD was unsuccessful in sending image 710 to Jane Smith.
After displaying card 738, the HMD can return to a timeline
display, such as discussed above in the context of at least FIG.
5A.
[0194] If a tap operation is received while displaying card 722,
which FIG. 7 shows is the "Delete" card, the HMD can interpret the
tap operation as selection of the Delete option of the contextual
menu. In response to this tap, the HMD can display card 740 with
text of "Deleting" and a progress bar for a grace period that has
to expire before the HMD will delete image 710. Once the grace
period expires or a tap is received, the HMD can delete image 710.
Once image 710 is deleted, the HMD can display card 742 to indicate
to the wearer that image 710 was indeed deleted. After displaying
card 742, the HMD can return to a timeline display, such as
discussed above in the context of at least FIG. 5A.
[0195] FIG. 7 also shows that at any time while displaying cards
720, 722, 730, 732a-732c, 734, 736, 740, and 742, a swipe down
operation can be performed. In response, the HMD can stop the
current operation; e.g., send or delete, and return to displaying
image 710.
[0196] The UI can utilize "people choosers" or software configured
to help a wearer find a person from among the wearer's contacts,
such as when the wearer wants to contact that person. FIG. 8
shows a user-interface scenario 800 including a people chooser,
according to an example embodiment. In scenario 800, two techniques
are shown for invoking the people chooser. While card 810 is
displayed, a wearer of an HMD can use a voice interface that
requests that the wearer "Speak a name from your contacts." Also or
instead, at 812, the HMD can be in a contextual menu with a "Share"
option that is selected.
[0197] After either card 810 or 812 is displayed, the people
chooser is invoked to permit selection of a person or "contact" as
a destination for sharing, being called, looked up in a contact
directory, or some other activity. The people chooser sorts
contacts by frequency of use, rather than by time of use; e.g.,
recency, so as to provide a useful alternative to the timeline.
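Ordering contacts by frequency of use, with wrap-around swiping, can
be sketched as follows in Python; the usage-log representation and
function names are illustrative assumptions.

    from collections import Counter

    def order_contacts_by_frequency(usage_log):
        """Order contacts from most to least frequently used.

        usage_log is assumed to be a list of contact names, one per use.
        """
        return [name for name, _ in Counter(usage_log).most_common()]

    def swipe(contacts, index, direction=+1):
        # Swiping past either end wraps around to the other end.
        return (index + direction) % len(contacts)

    contacts = order_contacts_by_frequency(
        ["Jane Smith", "Another Person", "Jane Smith", "Jane Smith"])
    i = 0                          # most frequently used contact first
    i = swipe(contacts, i)         # next-most frequently used contact
    print(contacts[i])             # Another Person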
[0198] FIG. 8 shows that card 820 is selected for display by the
people chooser. Card 820 represents "Jane Smith". In scenario 800,
Jane Smith is the most frequently used contact. Card 820 includes
the contact's name, Jane Smith, and an image related to the
contact, e.g., a picture of Jane Smith. After reviewing the card
shown at 820, the wearer of the HMD can either tap or swipe the
touch-based UI to select "Jane Smith" as the person selected for
the activity; e.g., sharing, calling, etc., that can lead to
invocation of the people chooser.
[0199] If a tap is received while card 820 is shown, the HMD can
then take action 822 with the choice. If a swipe is received while
card 820 is displayed, then another card can be displayed for the
next-most frequently used contact; e.g., card 824 for "Another Person". To
select "Another Person" for the action while card 824 is displayed,
a wearer can either tap the HMD using the touch-based UI or say the
person's name, e.g., "Another Person", using the voice-based
interface. If "Another Person" is selected, the HMD can carry out
the action with "Another Person".
[0200] Otherwise, if "Another Person" is not selected, the
wearer can swipe again, and another card can be displayed for a
group of contacts, such as card 826 for "Friends". To select a
"Friend" for the action while card 826 is displayed, a wearer can
either tap the HMD using the touch-based UI or say the person's
name, e.g., "Friend", using the voice-based interface. If the
"Friends" group is selected, the HMD can provide cards in the
"Friends" group in response to swipe actions until either a contact
in the "Friends" group is selected or the "Friends" group is
exhausted without the wearer making a selection. Each item in the
"Friends" group, or friend, can be a contact or other
representation of a person, organization, group, family, etc. that
the wearer has designated as a friend. In one embodiment, the
"Friends" group can be a bundle or folder that enables access to
the items or friends within the bundle or folder. In one
embodiment, the "Friends" group can be a group of friends ordered
based on time of friend designation, most recent access, or by some
other criteria.
[0201] Otherwise, if the "Friends" group is not selected, the wearer
can swipe while card 826 is displayed to bring up card 828,
representing another contact frequently called by the wearer.
Scenario 800 can continue with swipes that show contacts until
either a contact is selected or until all contacts have been
displayed. If all contacts have been displayed, then after
displaying the last contact, the HMD can "wrap around" or return to
the first card; e.g., card 820 representing "Jane Smith".
[0202] As mentioned above, the HMD can be configured with a camera,
and the UI can aid wearer interaction with the camera. FIG. 9 shows
a user-interface scenario 900 with camera interactions, according
to an example embodiment. Scenario 900 can begin by displaying card
910 or card 930 for an HMD configured with one or more cameras that
can perform at least the activities described herein.
[0203] While displaying card 910, at any point while utilizing the
UI of the HMD, the camera button; e.g., button 179 of HMD 172 shown
in FIG. 1D, can be pressed for either a short time; e.g., less than
one second, or a long time; e.g., longer than the short time. If
the camera button is pressed for the short time, also referred to
as a "short press" of the camera button, scenario 900 continues by
displaying card 920. Otherwise, if the camera button is pressed for
the long time, also referred to as a "long press" of the camera
button, scenario 900 continues by displaying card 934.
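Distinguishing a short press from a long press reduces to comparing
the hold duration against a threshold. In this Python sketch the
one-second threshold and the names are illustrative assumptions.

    import time

    SHORT_PRESS_MAX_S = 1.0   # assumed threshold: "less than one second"

    class CameraButton:
        """Classify a camera-button press by its hold duration."""

        def __init__(self):
            self.pressed_at = None

        def press(self):
            self.pressed_at = time.monotonic()

        def release(self):
            held = time.monotonic() - self.pressed_at
            # Short press captures a photo; long press starts a video.
            return "short_press" if held < SHORT_PRESS_MAX_S else "long_press"

    button = CameraButton()
    button.press()
    print(button.release())   # "short_press" if released quickly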
[0204] In response to the short press of the camera button, a photo
or still image is captured using the camera--an example image
capture is shown as card 920. If, after capturing the photo, a tap
is received, scenario 900 continues by displaying card 922;
otherwise, if either a swipe down is received or no interaction
with the touch-based UI is recorded during a wait interval; e.g.,
one second, scenario 900 continues by displaying card 924.
[0205] Card 922 is part of a contextual menu with options for
operating on the captured photo. The contextual menu can include
options such as a share option for the captured photo; e.g., as
indicated by the "Share" card shown at 922, a delete option for the
captured photo, and other options for the captured photo (e.g.,
editing the photo).
[0206] Card 924 shows the captured photo as "animated out"; that
is, the image of the captured photo is replaced with a blank card
shown as card 926 via an animated transition. After displaying card
926, the HMD can return to a previous state; e.g., a position in
the timeline being displayed at 910 before receiving the short
press of the camera button.
[0207] After displaying a home card, such as card 300 shown in FIG.
3, a tap can be received via the touch-based UI. In response to the
tap, the HMD can display a "Capture" card, such as card 930. After
displaying card 930, scenario 900 can continue with a display of
card 932.
[0208] Card 932 is shown in FIG. 9 as a "Photo" card, indicating
to the wearer that a photo or still image can be captured
using the camera. If a swipe is received while displaying card 932,
scenario 900 can continue by displaying card 934; otherwise,
scenario 900 can continue at card 940.
[0209] Card 934 is shown in FIG. 9 as a "Video" card to indicate to
the wearer that a video can be captured using the camera. If a
swipe is received while displaying card 934, scenario 900 can
continue by displaying card 936. In one embodiment, multiple camera
operations can occur simultaneously; e.g., the HMD can perform some
or all of recording video, capturing still images, capturing
timelapse images, and conducting video conferencing at the same
time. In more particular embodiments, the HMD can perform the
multiple camera operations and/or multiple telephone operations
simultaneously; e.g., the HMD can, while performing multiple camera
operations, conduct one or more two-party or multi-party voice
calls, dial one or more parties, place one or more voice calls on
hold, forward one or more voice calls, and perform other telephone
operations.
[0210] Otherwise, the HMD can determine whether a new video session
is to be started to capture the requested video or if a pending
video session is to be rejoined. If the new video session is to be
started, the HMD can trigger the camera to start recording images
(if not already recording) and scenario 900 can continue by
displaying card 950. If the pending video session is to be
rejoined, the HMD can redirect to, or request display of, an
already-existing card for the pending video session and scenario
900 can continue by displaying a card for the pending video
session, shown in FIG. 9 as card 952.
[0211] Card 936 is shown in FIG. 9 as a "Timelapse" card to
indicate to the wearer that a timelapse image can be captured using
the camera. If a swipe is received while displaying card 936,
scenario 900 can continue by displaying card 932.
[0212] Otherwise, the HMD can determine whether a new timelapse
session is to be started to capture the requested timelapse image
or if a pending timelapse session is to be rejoined. If the new
timelapse session is to be started, the HMD can trigger a timelapse
card to start displaying a timelapse image being captured by the
camera (if not already recording) and scenario 900 can continue by
displaying card 960. If the pending timelapse session is to be
rejoined, the HMD can redirect to an already-existing card for the
pending timelapse session and scenario 900 can continue by
displaying a card for the pending timelapse session, shown in FIG.
9 as card 962.
[0213] Upon displaying card 940, the HMD can launch a temporary
view finder and instruct the camera to begin capturing images. Upon
capturing each image, the HMD can display the image. While
displaying the image, the wearer can either (a) provide a tap to
the HMD and scenario 900 can continue by displaying card 942 or (b)
provide a swipe down using the HMD and scenario 900 can continue by
displaying card 944.
[0214] Upon displaying card 942, the HMD can capture an image using
the camera. Once captured, the HMD can display the captured image
for a short period of time; e.g., one or a few seconds. After
displaying the captured image for the short period, scenario 900
can proceed to display card 940.
[0215] Upon displaying card 944, which is a blank card, any image
for possible capture, e.g., card 940, animates out. In some
embodiments, the camera can be deactivated after animating out the
image, if no other application; e.g., video, is using the camera.
After displaying card 944, the HMD can return to a previous state;
e.g., a position in the timeline being displayed at 910 before
reaching 944.
[0216] Card 950 can be a card representing the new video session.
While the video session is active, the HMD can capture images and,
in some embodiments, sound, and store the captured video. Upon
capturing each image for the video session, the HMD can display the
captured image using card 950, which represents the new video
session. While displaying the images for the video session using
card 950, the wearer can either (a) provide a tap to the HMD and
scenario 900 can continue by displaying card 954 or (b) provide a
swipe down using the HMD and scenario 900 can continue by
displaying card 956.
[0217] Card 952 can be a card representing the pending video
session. While the video session is active, the HMD can capture
images, and in some embodiments, sound, and store the captured
video. Upon capturing each image for the video session, the HMD can
display the captured image using the card 952, which represents the
pending video session. While displaying the images for the video
session using card 952, the wearer can either (a) provide a tap to
the HMD and scenario 900 can continue by displaying card 954 or (b)
provide a swipe down using the HMD and scenario 900 can continue by
displaying card 956.
[0218] Card 954 can represent a contextual menu with options for
the captured video. The contextual menu can include options for the
captured video, such as a stop recording option, restart recording
option, delete video option, and other options.
[0219] Card 956 can be a blank card indicating to the wearer that
the video session has terminated. In some embodiments, the captured
video can be deleted after the video session is stopped, while in
other embodiments, the captured video and/or audio can remain in
storage after the video session is stopped. In some embodiments,
the camera can be deactivated if no other application; e.g., a
timelapse photo capture, is using the camera. In other embodiments,
after displaying the blank card, the HMD can return to a previous
state; e.g., a position in the timeline being displayed using card
910 before card 956 was ever displayed.
[0220] Card 960 can represent the new timelapse session. While the
new timelapse session is active, the HMD can capture images for
addition to the timelapse image. Upon capturing each image for the
timelapse session, the HMD can display image(s) related to the new
timelapse session using card 960. While displaying card 960, the
wearer can either (a) provide a tap to the HMD and scenario 900 can
continue by displaying card 964 or (b) provide a swipe down using
the HMD and scenario 900 can continue by displaying card 966.
[0221] Card 962 can represent the pending timelapse session. While
the pending timelapse session is active, the HMD can capture images
for addition to the timelapse image. Upon capturing each image for
the timelapse session, the HMD can display image(s) related to the
pending timelapse session using card 962. While displaying card
962, the wearer can either (a) provide a tap to the HMD and
scenario 900 can continue by displaying card 964 or (b) provide a
swipe down using the HMD and scenario 900 can continue by
displaying card 966.
[0222] Card 964 can represent a contextual menu with options for
the captured timelapse image. The contextual menu can include
options for the captured timelapse image, such as a stop timelapse
option, a timelapse frequency option, a restart timelapse option,
and other options.
[0223] Card 966 can be a blank card that indicates to the wearer
that the timelapse session has terminated. In some embodiments, the
captured timelapse image can be deleted after the timelapse session
is stopped, while in other embodiments, the captured timelapse
image can remain in storage after the timelapse session is stopped.
In some embodiments, the camera can be deactivated if no other
application; e.g., video, is using the camera. In other embodiments,
after displaying the blank card, the HMD can return to a previous
state; e.g., a position in the timeline being displayed using card
910 before card 966 was ever displayed.
[0224] Objects, such as photos and messages, can be grouped or
"bundled" by the UI to simplify interactions with these bundles.
FIG. 10A shows user-interface scenario 1000 with photo bundles,
according to an example embodiment. Scenario 1000 begins with an
HMD displaying photo bundle card (PBC) 1010 in a timeline. Photo
bundle card 1010 includes photo bundle indicator (PBI) 1010a,
example photo 1010b, and thumbnails 1010c. Photo bundle indicator
1010a, shown in FIG. 10A as a page with a turned-down corner,
indicates that a "photo bundle" or collection of photos is
associated with photo bundle card 1010. Example photo 1010b, shown
in FIG. 10A as occupying roughly one-half of photo bundle card
1010, provides a relatively large image of an example photo in the
photo bundle. Thumbnails 1010c, shown in FIG. 10A as collectively
occupying roughly one-half of photo bundle card 1010, provide four
relatively small images of four example photos in the photo
bundle.
[0225] While displaying photo bundle card 1010, the wearer of the
HMD can tap on a touch-based UI to instruct the HMD to display the
photos in the photo bundle. During scenario 1000, while displaying
photo bundle card 1010, the HMD can receive a tap and subsequently
display a card with photo 1012.
[0226] Each individual item within a bundle, e.g., a photo within a
photo bundle, functions the same with respect to the user interface
as it would if the item were displayed on the timeline. For
example, in the case of a photo, such as photo 1012, tapping on the
touch-based UI would enter a contextual menu for the photo, and
swiping down while in the contextual menu would return to photo
1012.
[0227] While displaying photo 1012, the HMD can receive a swipe
forward to display the next photo in the bundle or a swipe backward
to display the previous photo in the bundle. In scenario 1000 as
shown in FIG. 10A, the next photo can be photo 1014. As photo 1012
is the first photo in the bundle, the previous photo is the last
photo in the bundle, or photo 1018.
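The wrap-around behavior of swiping within a bundle is modular
arithmetic over the bundle's items. A minimal Python sketch follows,
with illustrative names.

    class Bundle:
        """Navigate a bundle's items with wrap-around swipes."""

        def __init__(self, items):
            self.items = items   # e.g., the photos in a photo bundle
            self.index = 0       # tapping the bundle card opens the first item

        def swipe_forward(self):
            self.index = (self.index + 1) % len(self.items)
            return self.items[self.index]

        def swipe_backward(self):
            self.index = (self.index - 1) % len(self.items)
            return self.items[self.index]

    bundle = Bundle(["photo_1012", "photo_1014", "photo_1016", "photo_1018"])
    print(bundle.swipe_backward())   # from the first photo back to photo_1018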
[0228] During scenario 1000, the HMD receives a swipe backward
while displaying photo 1012. In response to the swipe backward, the
HMD can display photo 1018 as discussed above. Scenario 1000
continues with the HMD receiving two more swipes backwards. In
response, the HMD can first display photo 1016 which is the
previous photo to photo 1018, and, after receiving the second swipe
backward, display photo 1014 which is the previous photo to photo
1016 as shown in FIG. 10A.
[0229] While displaying photo 1014, the HMD can receive a tap. In
response to the tap, the HMD can display photo bundle card 1010 and
scenario 1000 can end.
[0230] FIG. 10B shows user-interface scenario 1050 with message
bundles, according to an example embodiment. Scenario 1050 begins
with an HMD displaying message bundle card (MBC) 1060 in a
timeline. Message bundle card 1060 includes message bundle
indicator (MBI) 1060a and a most-recent message in the message
bundle, which includes image 1060b and message 1060c. Message bundle
indicator 1060a, shown in FIG. 10B as a page with a turned-down
corner, indicates that a "message bundle" or collection of messages
is associated with message bundle card 1060. Image 1060b can be an
image associated with the sender of the most-recent message in the
message bundle. Message 1060c can include text, and in some
embodiments, other type(s) of data, that is sent with the
most-recent message in the message bundle. As shown in FIG. 10B,
image 1060b, which occupies roughly one-third of message bundle card
1060, is an image of "Joe W.", who sent message 1060c; message 1060c
occupies roughly two-thirds of message bundle card 1060. Message 1060c
includes text that says "Sounds great. See you there," and was sent
three minutes ago.
[0231] In scenario 1050, while displaying message bundle card 1060,
the wearer of the HMD can tap on a touch-based UI. Some bundles
have additional functionality, specific to the bundle, associated
with a tap. In the example of the message bundle, a contextual menu
can be displayed in response to the tap. FIG. 10B shows two options
in the contextual menu: a reply option associated with card 1070
and a read-all option associated with card 1072.
[0232] While card 1070 associated with the reply option is
displayed, the HMD can receive a tap. In response, the HMD can
interpret the tap as a selection to reply to the most recently
displayed message card. While card 1072 associated with the read
all option is displayed, the HMD can receive a tap, which can be
interpreted to read the messages in the message bundle, starting
with the most recent. In one embodiment, the HMD can start with the
first message in the message bundle rather than the most recent. In
response to receiving a swipe down while in the contextual menu for
message bundles, the HMD can select message bundle card 1060 for
display.
[0233] Each individual item within a bundle, e.g., a message within
a message bundle, functions the same with respect to the user
interface as it would if the item were displayed on the timeline.
For example, in the case of a message, such as message 1062,
tapping on the touch-based UI would enter a contextual menu for the
message, and swiping down while in the contextual menu for the
message would return to message 1062.
[0234] While displaying message 1062, the HMD can receive a swipe
forward to display the next message in the bundle or a swipe
backward to display the previous message in the bundle. In scenario
1050 as shown in FIG. 10B, the previous message can be message
1064. As message 1062 is the first message in the bundle, there is
no "next" message, so the last message in the bundle, or message
1066, can be displayed instead.
[0235] During scenario 1050, the HMD receives a swipe forward while
displaying message 1062. In response to the swipe forward, the HMD
can display message 1066 as discussed above. Scenario 1050
continues with the HMD receiving two more swipe forwards. In
response, the HMD can first display message 1064 which is the next
message to message 1066, and, after receiving the second swipe
forward, display message 1062, which is the next message to message
1064 as shown in FIG. 10B.
[0236] While displaying message 1062, the HMD can receive a tap. In
response to the tap, the HMD can enter a contextual menu for
message 1062 and scenario 1050 can end.
[0237] The HMD has various settings, including settings for
networks such as WiFi and Bluetooth networks. FIG. 11 shows
user-interface scenario 1100 with timeline 1110 including settings
cards 1120, 1130, according to an example embodiment. As shown in
FIG. 11, timeline 1110 has two settings cards 1120 and 1130 at the
now/future end of the timeline. As shown in FIG. 11, both cards
1120 and 1130 permit interaction with various "settings", e.g.,
controls, preferences, data, and/or other information, in response
to a tap input of the touch-based user interface.
[0238] Card 1120 is related to wireless network ("WiFi") settings,
which can be settings related to wireless networks operating using
one or more protocols, such as IEEE 802.11 protocols, which are
discussed in more detail below in the context of FIG. 12. Card 1130
is related to Bluetooth settings, which can be settings related to
short range wireless networks operating using one or more Bluetooth
protocols, which are discussed in more detail below in the context
of FIG. 13.
[0239] FIG. 12 shows user-interface scenario 1200 related to WiFi
settings, according to an example embodiment. Scenario 1200 begins
with an HMD displaying card 1210. Card 1210 indicates that the HMD
is connected via WiFi to a network of computers called
"GGuest."
[0240] During scenario 1200, in response to viewing card 1210, a
wearer of the HMD taps the touch-based UI of the HMD. In response,
the HMD displays card 1220, indicating both that the HMD is
connected to GGuest and a map of the general area around the
HMD.
[0241] After viewing card 1220, the wearer can swipe next through
cards 1230 and 1240, which indicate available networks to which the
HMD can connect, card 1250 to begin the process to add another WiFi
network, and card 1260 to turn off the WiFi functionality of the
HMD. In some embodiments, swiping next after displaying card 1260
leads to display of card 1220. In other embodiments, swiping
previous after displaying card 1220 leads to display of card
1260.
[0242] In response to tapping while displaying card 1220, the HMD
displays card 1222 with text of "Forget". After viewing card 1222
during scenario 1200, the wearer can use the touch-based UI of the HMD
to either (a) tap to instruct the HMD to begin a process of
forgetting, e.g., deleting stored information, about the currently
connected WiFi network, or (b) swipe to bring up card 1232 with
text of "Disconnect" to begin a process of disconnecting from the
currently connected WiFi network. In scenario 1200, the currently
connected WiFi network would be GGuest, as card 1220 was reached
after tapping card 1210, and card 1210 is associated with the
GGuest WiFi network.
[0243] During one aspect of scenario 1200, the wearer taps on the
touch-based UI of the HMD while card 1222 is displayed to instruct the HMD to
forget about the GGuest network. The process of forgetting about a
WiFi network is associated with a grace period to permit the wearer
to reconsider. In response to the tap operation, the HMD can
display card 1224 with text of "Forgetting" and progress bar 1224a.
Progress bar 1224a can take a length of time, such as equal to or
greater than the grace period, to complete display. After progress
bar 1224a is completely displayed, the grace period is deemed to
have expired.
[0244] Once the grace period expires or a tap is received during
display of card 1224, the HMD can delete stored information about
the currently connected WiFi network and display card 1226
indicating the currently connected WiFi network is now forgotten.
After displaying card 1226, the HMD can return to the settings
context menu.
[0245] During another aspect of scenario 1200, the wearer taps on
the touch-based HMD while card 1232 is displayed to instruct the
HMD to disconnect from the GGuest network. The process of
disconnecting from a WiFi network is associated with a grace period
to permit the wearer to reconsider. In response to the tap
operation, the HMD can display card 1234 with text of
"Disconnecting" and progress bar 1234a. Progress bar 1234a can take
a length of time, such as equal to or greater than the grace
period, to complete display. After progress bar 1234a is completely
displayed, the grace period is deemed to have expired.
[0246] Once the grace period expires or a tap is received during
display of card 1234, the HMD can disconnect from the currently
connected WiFi network and display card 1236 indicating that the
HMD is now disconnected from the previously-connected WiFi network.
After displaying card 1236, the HMD can return to the settings
context menu.
[0247] Card 1230 displays information about a nearby WiFi network
named "GA Ntwk" including the network's use of Wired Equivalent
Privacy or "WEP", and a map with location information about "GA
Ntwk." In response to tapping while displaying card 1230, the HMD
attempts to connect the "GA Ntwk" network and displays card 1244
with text of "Connecting"
[0248] After displaying card 1244, if the HMD is able to
successfully connect to the WiFi network, the HMD will display card
1246 with text of "Connected" and return to the setting context
menu. If the HMD is unable to successfully connect to the WiFi
network; e.g., the network is not open access and requires
authentication for access, the HMD will display card 1248 with text
of "Failed" and return to the previous card; e.g., card 1230 to
request additional input related to the "GA Ntwk." In some
embodiments, the HMD can automatically attempt WiFi reconnection
upon (initial) failure. In particular embodiments, the HMD will
automatically attempt WiFi reconnection for a fixed number of
attempts before indicating failure. If the HMD automatically
reattempts WiFi connection upon failure, the HMD can display card
1244 as the "previous" card.
[0249] Card 1240 displays information about a nearby WiFi network
named "Coffee Shop" including a map with location information about
"Coffee Shop". In response to tapping while displaying card 1240,
the HMD can determine that the "Coffee Shop" network is secured and
display card 1242. Card 1242 displays an icon of a Quick Response
(QR) code, text to "Enter Password", and a hint of "Generate QR
code at <Example URL>."
[0250] In scenario 1200, the QR code is provided to the HMD. For
example, the QR code can be on a sticker, poster, paper, or
otherwise displayed at the wearer's location; e.g., the "Coffee
Shop" location. As another example, the QR code can be generated
via a website in which the user entered the credentials for access
to the network. Once a suitable QR code is located, the wearer can
capture the QR code by pointing the HMD's camera at it. In other
embodiments, other techniques besides a QR code can be used to
enter network credentials, such as the wearer speaking the password
for access to a network.
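The application does not specify how network credentials are encoded
in the QR code. One widely used convention is the "WIFI:" payload
format popularized by barcode-scanner applications; the Python
sketch below parses that format and is purely an assumption about a
possible encoding.

    def parse_wifi_qr(payload):
        """Parse a 'WIFI:T:WPA;S:MyNet;P:secret;;' style QR payload."""
        if not payload.startswith("WIFI:"):
            raise ValueError("not a WiFi configuration code")
        fields = {}
        for part in payload[len("WIFI:"):].split(";"):
            if ":" in part:
                key, value = part.split(":", 1)
                fields[key] = value
        return {"ssid": fields.get("S"),
                "security": fields.get("T", "nopass"),
                "password": fields.get("P")}

    print(parse_wifi_qr("WIFI:T:WPA;S:Coffee Shop;P:espresso123;;"))
    # {'ssid': 'Coffee Shop', 'security': 'WPA', 'password': 'espresso123'}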
[0251] In response to the HMD successfully capturing the QR code or
otherwise obtaining the password for the "Coffee Shop" network, the
HMD can display card 1244. After displaying card 1244, if the HMD
is able to successfully connect to the WiFi network, the HMD will
display card 1246 with text of "Connected" and return to the
setting context menu. If the HMD is unable to successfully connect
to the WiFi network, the HMD will display card 1248 with text of
"Failed" and return to the previous card; e.g., card 1240 to
request additional input related to the "Coffee Shop" network.
[0252] In some embodiments, the HMD can automatically reattempt
WiFi connection upon (initial) failure. In particular embodiments,
the HMD will automatically attempt WiFi reconnection for a fixed
number of attempts before indicating failure. If the HMD
automatically reattempts WiFi connection upon failure, the HMD can
display card 1244 as the "previous" card.
[0253] Card 1250 displays a QR code encoding information about a
WiFi network. In response to tapping while displaying card 1250,
the wearer can obtain a QR code and the HMD's camera can be
utilized to capture the QR code as discussed above. In other
embodiments, other techniques besides a QR code can be used to
enter network credentials, such as the wearer speaking the password
for access to a network. In response to the HMD obtaining the QR
code or otherwise obtaining the password for the WiFi network to be
added, the HMD can display card 1244.
[0254] After displaying card 1244, if the HMD is able to
successfully connect to the WiFi network, the HMD will display card
1246 with text of "Connected" and return to the setting context
menu. If the HMD is unable to successfully connect to the WiFi
network, the HMD will display card 1248 with text of "Failed" and
return to the previous card; e.g., card 1250 to request additional
input related to the WiFi network to be added. In some embodiments,
the HMD can automatically reattempt WiFi
connection upon (initial) failure. In particular embodiments, the
HMD will automatically attempt WiFi reconnection for a fixed number
of attempts before indicating failure. If the HMD automatically
reattempts WiFi connection upon failure, the HMD can display card
1244 as the "previous" card.
[0255] In response to tapping card 1260, the HMD begins a process
of "turning off" or deactivating WiFi functionality for the HMD. In
scenario 1200, the process of deactivating WiFi functionality is
associated with a grace period to permit the wearer to cancel or
abort the WiFi deactivation. In response to the tap operation, the
HMD can display card 1262 with text of "Turning off" and progress
bar 1262a. Progress bar 1262a can take a length of time, such as
equal to or greater than the grace period, to complete display.
After progress bar 1262a is completely displayed, the grace period
is deemed to have expired.
[0256] Once the grace period expires or a tap is received during
display of card 1262, the HMD can deactivate WiFi functionality for
the HMD, and display card 1264 indicating the WiFi functionality
for the HMD is off. After displaying card 1264, the HMD can return
to the settings context menu.
[0257] In other example scenarios, card 1210 could indicate, as
shown using card 1212 of FIG. 12, that the HMD is not connected to
a WiFi network or, as shown using card 1214 of FIG. 12, that WiFi
functionality of the HMD is turned off. In those examples, card
1220 is not used, and tapping either card 1212 or 1214 leads to
display of card 1230.
[0258] If the WiFi functionality is off; e.g., card 1214 is
displayed, card 1260 displays "Turn On" or similar text, and
tapping card 1260 while the WiFi functionality is initially off
leads to activation of the HMD's WiFi functionality.
[0259] FIG. 13 shows user-interface scenario 1300 related to
Bluetooth settings, according to an example embodiment. Scenario
1300 begins with the HMD displaying card 1310 in a timeline. As
shown in FIG. 13, card 1310 includes a Bluetooth logo and text
indicating the HMD is "Connected to Galaxy Nexus [and] Home-PC."
During scenario 1300, in response to viewing card 1310, the wearer
performs a tap operation using the touch-based UI of the HMD.
[0260] In response to the tap operation, the HMD can display card
1320. Card 1320 shows an image of a mobile device and text of
"Connected to Galaxy Nexus." After viewing card 1320, the wearer
can swipe next through card 1330, which indicates a connection to a
Home-PC, and card 1340 to begin the process to "pair with" or
connect to another device using Bluetooth. In some embodiments,
swiping next after displaying card 1340 leads to display of card
1320. In other embodiments, swiping previous after displaying card
1320 leads to display of card 1340.
[0261] After viewing card 1320, the wearer can perform a tap
operation using the touch-based UI of the HMD. In response to this
tap operation, card 1332 is displayed with text of "Disconnect" to
indicate a disconnect operation to be performed on the current
Bluetooth connection. After viewing card 1332, the wearer can use
the touch-based UI to perform a swipe operation. In response to the
swipe, the HMD can display card 1322 with text of "Forget" to
indicate a forget operation for the current Bluetooth
connection.
[0262] After viewing card 1322 during scenario 1300, the wearer can
use the touch-based UI of the HMD to either (a) tap to instruct the
HMD to begin a process of forgetting about the current Bluetooth
connection, or (b) swipe to re-view card 1332. In scenario 1300,
the Bluetooth connection would be a connection between the HMD and
"Galaxy Nexus", as card 1322 was reached after tapping card 1320,
and card 1320 is associated with the HMD/Galaxy Nexus Bluetooth
connection.
[0263] During one aspect of scenario 1300, the wearer taps on the
touch-based UI of the HMD while card 1322 is displayed to instruct the HMD to
forget about the HMD/Galaxy Nexus Bluetooth connection. The process
of forgetting about a Bluetooth connection is associated with a
grace period to permit the wearer to reconsider. In response to the
tap operation, the HMD can display card 1324 with text of
"Forgetting" and a progress bar. The progress bar can take a length
of time, such as equal to or greater than the grace period, to
complete display. After the progress bar is completely displayed,
the grace period is deemed to have expired.
[0264] Once the grace period expires or a tap is received during
display of card 1324, the HMD can delete stored information about
the current Bluetooth connection and display card 1326 indicating
the current Bluetooth connection is now forgotten. After displaying
card 1326, the HMD can return to the home card context menu.
[0265] In another aspect of scenario 1300, the wearer taps on the
touch-based UI of the HMD while card 1332 is displayed to instruct the HMD to
disconnect from the Galaxy Nexus. The process of disconnecting a
Bluetooth connection is associated with a grace period to permit
the wearer to reconsider. In response to the tap operation, the HMD
can display card 1334 with text of "Disconnecting" and a progress
bar that can take a length of time, such as equal to or greater
than the grace period, to complete display. After the progress bar
is completely displayed, the grace period is deemed to have
expired.
[0266] Once the grace period expires or a tap is received during
display of card 1334, the HMD can disconnect from the current
Bluetooth connection and display card 1336 indicating that the HMD
is now disconnected from the previously-connected Bluetooth
connection. After displaying card 1336, the HMD can return to the
home card context menu.
[0267] In another example of scenario 1300, the wearer can use the
touch-based UI of the HMD to perform a tap operation while card
1330 is displayed. Card 1330 shows an image of a computer display
and has text of "Connected to Home-PC" to indicate a Bluetooth
connection between the HMD and a device named "Home-PC". In
response to this tap operation, the HMD can display card 1332 for
disconnecting the HMD/Home-PC connection, or after receiving a
swipe operation, the HMD can display card 1322 for forgetting the
HMD/Home-PC connection. After either card 1322 or 1332 is
displayed, the wearer can use the touch-based UI to perform a tap
operation. In response to the tap, the HMD can respectively perform
the forgetting (after card 1322 display) or disconnecting (after
card 1332 display) operations for Bluetooth connections,
using the HMD/Home-PC connection as the current Bluetooth
connection, as the tap for card 1322/1332 was received after most
recently displaying card 1330 representing the HMD/Home-PC
connection.
[0268] After displaying card 1340, the HMD can be brought into, or
already be in, proximity of some other device configured to pair
with the HMD. In scenario 1300, the other device is a mobile phone
identified, e.g., as "Galaxy Nexus." In other scenarios, the HMD
can attempt to pair with a device other than the Galaxy Nexus. If
the other device attempts to pair with the HMD (or vice versa),
card 1342 can be displayed in response. As shown in FIG. 13, card
1342 includes an image of a mobile device and text of "Pair with
Galaxy Nexus? Tap if Galaxy Nexus displays 186403."
[0269] In response to the display of card 1342, the wearer can use
the touch-based UI to perform a tap operation, and so instruct the
HMD to pair with the other device; e.g., the Galaxy Nexus. After
receiving the tap, the HMD can display card 1344 with text of
"Pairing" to indicate that the HMD is attempting to pair with the
Galaxy Nexus.
[0270] If the pairing operation between the HMD and the Galaxy
Nexus is successful, the HMD can display card 1346 with text of
"Paired" and can return to the main timeline after "splicing" or
adding a card for the new device to the timeline. On the other
hand, if the pairing operation between the HMD and the Galaxy Nexus
is unsuccessful, the HMD can display card 1348 with text of
"Failed" and can return to card 1340 ("Pair with new device") to
possibly reattempt pairing with the Galaxy Nexus and/or pair with a
different device.
[0271] The HMD can arrange portions of a card in a "visual stack"
in order to generate the visual rendering of the card. FIG. 14A
shows example visual stack 1400, according to an example
embodiment. Visual stack 1400 is used to generate or render a card
from a set of overlaid images.
[0272] From the wearer's perspective, visual stack 1400 is the
collection of images viewed looking down viewport 1410 via a
"rectangular tube", shown with dashed lines in FIG. 14A, to main
timeline 1430. The collection of images can be part of timelines,
menus, etc., each of which can be considered to run independently
at different levels perpendicular to the rectangular tube. The
wearer can then perceive content on the display of the HMD through
the viewport 1410 to see the portions of the timelines, menus, etc.
that are within the rectangular tube.
[0273] FIG. 14A shows that three items are in visual stack 1400
between viewport 1410 and main timeline 1430: submenu 1420,
contextual menu 1422, and overlay 1424. Submenu 1420 includes three
images: an image of "Jane Smith", an image of "Another Person", and
an image associated with "Friends", with the image of "Another
Person" inside the rectangular tube. Contextual menu 1422 includes
two options: a "Share" option and a "Delete" option, with the
"Share" inside the rectangular tube. Thus, visual stack 1400 shows
contextual menu 1422 for a photo bundle card shown on main timeline
1430 with a "Share" option selected from the contextual menu 1422,
and a sharing destination of "Another Person" selected from submenu
1420.
[0274] FIG. 14B shows example visual stack 1450, according to an
example embodiment. From the wearer's perspective, visual stack
1450 is the collection of images viewed looking down viewport 1460
via a rectangular tube, shown with dashed lines in FIG. 14B, to main
timeline 1480. FIG. 14B shows two items are in visual stack 1450
between viewport 1460 and main timeline 1480: action notification
1470 and overlay 1472. Action notification 1470 shows a "Send"
notification. Thus, visual stack 1400 shows a "Send" notification
for a photo bundle card shown on main timeline 1480.
[0275] In some embodiments, overlay 1472 is completely opaque with
respect to main timeline 1480. In these embodiments, the wearer
viewing visual stack 1450 sees action notification 1470 and overlay
1472. In other embodiments, overlay 1472 is partially or completely
transparent or translucent with respect to main timeline 1480. In
these embodiments, the wearer viewing visual stack 1450 sees action
notification 1470 and overlay 1472 with some portion(s) of the
photo bundle card shown on main timeline 1480 visible, depending on
the visibility of an image on main timeline 1480 through overlay
1472.
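Rendering a visual stack with opaque or translucent overlays amounts
to alpha compositing the layers from the main timeline upward. The
single-channel Python sketch below uses the standard "over"
operator; treating each layer as one float value is an illustrative
simplification.

    def composite(layers):
        """Blend a visual stack bottom-to-top using per-layer opacity.

        layers: (value, alpha) pairs ordered from the main timeline at
        the bottom to the topmost overlay; values and alphas in [0, 1].
        """
        result = 0.0
        for value, alpha in layers:
            # "Over" operator: the upper layer covers a fraction alpha
            # of whatever has been composited so far.
            result = alpha * value + (1.0 - alpha) * result
        return result

    print(composite([(0.8, 1.0), (0.2, 1.0)]))  # opaque overlay hides the timeline
    print(composite([(0.8, 1.0), (0.2, 0.5)]))  # translucent overlay blends both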
[0276] Along with the touch-based UI detailed above, the HMD can
utilize a voice-based interface. FIG. 15 shows a user-interface
scenario 1500 related to voice interactions, according to an
example embodiment. Scenario 1500 begins with a wearer of the HMD
reciting and/or uttering the phrase "ok glass", which can be
prompted by a hint provided on a home card such as discussed above
in the context of FIG. 3.
[0277] In response to the HMD recognizing "ok glass", the HMD can
receive the utterance "ok glass" via the voice-based interface and
display card 1510. Card 1510 shows the input command "ok glass" to
confirm the input received at the voice-based UI of the HMD and a
list of available voice commands including "Google", "navigate to",
"take a photo", "record a video", "send a message to", and "make a
call to."
[0278] The wearer of the HMD can tilt his/her head up or down to
respectively scroll up or down through lists, such as the list of
available voice commands. For example, card 1512 shows the result
of scrolling down the list of possible voice commands shown in card
1510, indicating the removal of the previously-visible available
voice command "Google" and the addition of the available voice
command "hangout with." The HMD can use tilt sensors,
accelerometers, motion detectors, and/or other devices/sensors to
determine if the wearer tilted their head and/or whether the wearer
tilted their head up or down.
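Mapping sensed head pitch to list scrolling can be done with a
simple dead-zone threshold on the pitch angle. The 10-degree
threshold and sign convention in this Python sketch are illustrative
assumptions.

    PITCH_THRESHOLD_DEG = 10.0   # assumed dead zone for head tilt

    def scroll_from_pitch(pitch_deg, top_index, page_size, total):
        """Map a sensed head pitch to a scroll step through a list.

        pitch_deg > 0 is assumed to mean the head is tilted up.
        """
        if pitch_deg > PITCH_THRESHOLD_DEG:
            top_index = max(top_index - 1, 0)                  # scroll up
        elif pitch_deg < -PITCH_THRESHOLD_DEG:
            top_index = min(top_index + 1, total - page_size)  # scroll down
        return top_index

    # Tilting down past the threshold scrolls one command into view.
    print(scroll_from_pitch(-15.0, top_index=0, page_size=6, total=7))  # 1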
[0279] During display of card 1510, the wearer utters the word
"Google", which causes card 1520 to be displayed. Card 1520 can
also be displayed in response to the wearer uttering "OK Google"
and the voice-based UI of the HMD recognizing the "OK Google" phrase. Card
1520, as shown in FIG. 15, includes a hint of "ask a question"
along with an icon of a microphone that can act as a reminder to
the wearer that they are using the voice-based interface.
[0280] In response to the display of card 1520, the wearer can
utter "How tall is the Eiffel Tower?" The HMD can then display card
1522 showing that the HMD is processing the input utterance, and
once processed, display card 1524. Card 1524 "echo-prints" or
repeats the input utterance of "How tall is the Eiffel Tower?" and
also prints an indicator of "searching" to inform the wearer that a
search is ongoing based on their input. After the search is
complete, the HMD can display card 1526, which includes an image of
the Eiffel Tower and an answer of "1,063 feet (324 m)" to the
question asked by the wearer.
[0281] In another aspect of scenario 1500, in response to card
1512, the wearer can utter "Navigate to". In response, the
voice-based UI of the HMD can capture the utterance, determine that
the utterance is "Navigate to" and display card 1530 showing an
echo-print of the "Navigate to" utterance.
[0282] Scenario 1500 includes two examples of a destination of the
"Navigate to" command. In one example, the wearer utters
"Restaurant 1" as the destination, indicated via card 1532. Card
1532 includes an echo-print of the "ok glass, navigate to
Restaurant 1" command and an indication that the HMD is searching
for a location of Restaurant 1. In this example, a single location
result is returned for "Restaurant 1". Card 1534 includes a map to
the single location that occupies about one-third of the card. The
remainder of card 1534 shows that the HMD is "navigating to
Restaurant 1" which is on "13.sup.th Street" and will take "16
minutes" to get to the restaurant.
[0283] In another navigation example, the wearer utters "Popular
Pueblo" as the destination, as indicated via card 1536. Card 1536
includes an echo-print of the "ok glass, navigate to Popular
Pueblo" command and an indication that the HMD is searching for a
location of Popular Pueblo. In this example, a search for a
location of "Popular Pueblo" returned multiple locations, as
indicated by location cards 1538. The wearer can use the
touch-based UI to swipe through the multiple location cards 1538 to
view each location card individually, and to perform a tap
operation to select a particular Popular Pueblo location while the
desired Popular Pueblo location card is displayed. After the
desired Popular Pueblo location card is displayed and the tap
operation is completed, the desired location result is shown in
card 1540 of FIG. 15. Card 1540 includes a map to the desired
location that occupies about one-third of the card. The remainder
of card 1540 shows that the HMD is "navigating to Popular Pueblo"
which is on "14.sup.th Street" and will take "5 minutes" to get to
the desired Popular Pueblo.
[0284] In another aspect of scenario 1500, in response to card
1512, the wearer can utter "Send message to". In response, the
voice-based UI of the HMD can capture the utterance, determine that
the utterance is "Send message to" and display card 1550 showing an
echo-print of the "Navigate to" utterance, along with a list of
potential recipients of the message. FIG. 15 shows that the list of
potential recipients includes "Sarah Johnson", "Steve Johnson", and
"Julie Dennis".
[0285] To navigate through the list of potential recipients, the
wearer can tilt their head up or down to respectively scroll up or
down through the list as indicated by cards 1550 and 1552. Scenario
1500 continues with the wearer uttering "Sarah Johnson" to the HMD.
The voice-based UI of the HMD can capture the utterance, determine
that the utterance is "Sarah Johnson" and display card 1554 showing
an echo-print of the "send message to" utterance, along with "Sarah
Johnson" as a recipient of the message.
[0286] The HMD can wait for a period of time, e.g., one second, for
the wearer to provide additional recipients. If the wearer does not
provide additional recipients in that period of time, the HMD can
display a card, such as card 1556, for composing and echo-printing
a message. Card 1556 shows echo-printed utterances including "hi
sarah I'm on my way out will be a few" and blocks. The blocks
indicate that the HMD is in the process of recognizing the
utterances provided by the wearer and translating those utterances
into text. In some embodiments, speech can be translated to text
using one or more automatic speech recognition (ASR)
techniques.
[0287] After some time, the wearer stops uttering content for the
message. In scenario 1500, after uttering the content of the
message, the wearer decides to send the message to the recipient,
Sarah Johnson. To send the message, the user can either perform a
tap operation using the touch-based UI, or stop uttering for a
period of time; e.g., one second. In response, the HMD can display
a card such as card 1558 indicating that the message is in the
process of being sent. After the message is sent, the sent message
is spliced into the timeline.
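The send-on-pause behavior can be modeled as a silence timeout: the
message is committed when the wearer taps or when no new speech has
been recognized for the waiting period. The one-second timeout and
all names in this Python sketch are illustrative assumptions.

    import time

    SILENCE_TIMEOUT_S = 1.0   # assumed end-of-utterance pause

    class MessageComposer:
        """Accumulate recognized speech; send on a tap or after a pause."""

        def __init__(self):
            self.words = []
            self.last_speech = time.monotonic()

        def on_recognized(self, text):
            # Called by the speech recognizer as utterances are transcribed.
            self.words.append(text)
            self.last_speech = time.monotonic()

        def should_send(self, tapped=False):
            paused = time.monotonic() - self.last_speech >= SILENCE_TIMEOUT_S
            return tapped or paused

    composer = MessageComposer()
    composer.on_recognized("hi sarah I'm on my way out will be a few")
    print(composer.should_send(tapped=True))   # a tap sends immediately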
E. EXAMPLE METHODS OF OPERATION
[0288] FIG. 16A is a flow chart illustrating a method 1600,
according to an example embodiment. In FIG. 16A, method 1600 is
described by way of example as being carried out by a computing
device, such as a wearable computer, and possibly a wearable
computer that includes a head-mounted display (HMD). However, it
should be understood that example methods, such as method 1600, can
be carried out by a wearable computer that is not being worn. For
example, such methods can be carried out by a user simply holding
the wearable computer in his or her hands. Other possibilities can
also exist.
[0289] Further, example methods, such as method 1600, can be
carried out by devices other than a wearable computer, and/or can
be carried out by sub-systems in a wearable computer or in other
devices. For example, an example method can alternatively be
carried out by a device such as a mobile phone, which is programmed
to simultaneously display a graphic object in a graphic display and
also provide a point-of-view video feed in a physical-world window.
Other examples are also possible.
[0290] As shown in FIG. 16A, method 1600 begins at block 1610,
where an HMD can display a home card of an ordered plurality of
cards. In some embodiments, the ordered plurality of cards can be
ordered based on time. Each card in the ordered plurality of cards
is associated with a specific time. In particular ones of these
embodiments, the choose-next input type can be associated with
going forward in time: the next card can be associated with a
specific time that is equal to or later than the specific time
associated with the home card. The choose-previous input type can
be associated with going backward in time: the previous card can be
associated with a specific time that is equal to or earlier than
the specific time associated with the home card.
[0291] At block 1620, while displaying the home card, the HMD can
receive a first input. The first input can be associated with a
first input type. The first input type can include a choose-next
input type and a choose-previous input type. In some embodiments,
the HMD can include a touch-based UI, such as a touch pad, via
which the first input can be received. The user-input can be
received via other user-interfaces as well.
[0292] At block 1630, if the first input is of the choose-next
input type, the HMD can: (a) obtain a next card of the ordered
plurality of cards, where the next card is subsequent to the home
card in the ordered plurality of cards, and (b) display the next
card using the HMD.
[0293] At block 1640, in response to the first input type being the
choose-previous input type, the HMD can: (a) obtain a previous card
of the ordered plurality of cards, where the previous card is prior
to the home card in the ordered plurality of cards, and (b) display
the previous card using the HMD.
[0294] In some embodiments, method 1600 can further involve the HMD
receiving a next input while displaying the next card. The next
input can be associated with a next input type. The next input type
can include the choose-next input type and the choose-previous
input type. In response to the next input type being the
choose-next input type, the HMD can obtain a second-next card of
the plurality of cards, where the second-next card is subsequent to
the next card in the ordered plurality of cards. The HMD can display
the second-next card. In response to the next input type being the
choose-previous input type, the HMD can obtain the home card and
display the home card.
[0295] In other embodiments, method 1600 can additionally include
that the HMD can, while displaying the previous card, receive a
previous input. The previous input can be associated with a
previous input type. The previous input type can include the
choose-next input type and the choose-previous input type. In
response to the previous input type being the choose-next input
type, the HMD can obtain the home card and display the home card.
In response to the previous input type being the choose-previous
input type, the HMD can obtain a second-previous card of the
plurality of cards, where the second-previous card is prior to the
previous card in the ordered plurality of cards. The HMD can
display the second-previous card.
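The choose-next/choose-previous behavior of blocks 1630 and 1640 and
their successors can be sketched as index arithmetic over the
ordered plurality of cards. The Python function below clamps at the
ends of the ordered plurality; whether an embodiment clamps or
wraps at the ends is an assumption.

    def navigate(cards, position, input_type):
        """Move through an ordered plurality of cards from a position.

        Cards are assumed ordered by their associated times: choose_next
        moves forward in time, choose_previous moves backward.
        """
        if input_type == "choose_next" and position < len(cards) - 1:
            position += 1
        elif input_type == "choose_previous" and position > 0:
            position -= 1
        return position, cards[position]

    cards = ["second_previous", "previous", "home", "next", "second_next"]
    pos = 2                                           # start at the home card
    pos, shown = navigate(cards, pos, "choose_next")  # display the next card
    print(shown)                                      # next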
[0296] In particular ones of the other embodiments, the second-previous
card can include a bundle card. The bundle card can represent a
collection of cards and can include a bundle card indicator. Then,
method 1600 can further include receiving a bundle-card input of a
bundle-card type at the HMD, while displaying the bundle card. A
first card of the collection of cards can be displayed in response
to the bundle-card type of input being a tap. While displaying the
first card of the collection of cards, the HMD can receive a
first-card input associated with a first-card type. In response to
the first-card input being the choose-next type of input, the HMD
can select a second card in the collection of cards, where the
second card is subsequent to the first card and display the second
card. In response to the first-card input being the choose-previous
type of input, the HMD can select a third card in the collection of
cards, where the third card is prior to the first card; and display
the third card.
[0297] In still other embodiments, the first input type can
additionally include a fast-choose-next input type and a
fast-choose-previous input type. Each of the choose-next input type
and the choose-previous input type can be associated with a first
card rate, and each of the fast-choose-next input type and the
fast-choose-previous input type can be associated with a second
card rate. The second card rate can exceed the first card rate.
[0298] In these embodiments, method 1600 can additionally include:
in response to the first input type being the choose-next input
type: (i) simulating movement at the first card rate through the
ordered plurality of cards subsequent to the home card, and (ii)
obtaining the next card based on the simulated movement subsequent
to the home card at the first card rate. In response to the first
input type being the choose-previous input type, method 1600 can
include: (iii) simulating movement at the first card rate through
the ordered plurality of cards prior to the home card, and (iv)
obtaining the previous card based on the simulated movement prior
to the home card at the first card rate. In response to the first
input type being the fast-choose-next input type, method 1600 can
include: (v) simulating movement at the second card rate through
the ordered plurality of cards subsequent to the home card, and
(vi) obtaining a fast-next card based on the simulated movement
subsequent to the home card at the second card rate. In response to
the first input type being the fast-choose-previous input type,
method 1600 can additionally include: (vii) simulating movement at
the second card rate through the ordered plurality of cards
prior to the home card, and (viii) obtaining a fast-previous card
based on the simulated movement prior to the home card at the
second card rate.
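One way to read the card rates of paragraphs [0297] and [0298] is as the number of cards traversed per input, with the second rate strictly exceeding the first. The sketch below follows that reading; the rate values and the simulate_movement function are illustrative assumptions, not the disclosed implementation.

    FIRST_CARD_RATE = 1    # cards per choose-next/choose-previous input (assumed)
    SECOND_CARD_RATE = 5   # cards per fast-choose input; exceeds the first rate

    def simulate_movement(cards, home_index, rate, direction):
        # direction is +1 toward cards subsequent to the home card and -1
        # toward cards prior to it; the result is clamped to the timeline ends
        target = home_index + direction * rate
        target = max(0, min(len(cards) - 1, target))
        return cards[target]   # the next/previous or fast-next/fast-previous card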
[0299] In particular of these embodiments, each of the choose-next
input type and the choose-previous input type can be associated
with a swipe made using a first number of fingers and each of the
fast-choose-next input type and the fast-choose-previous input type
can be associated with a swipe made using a second number of
fingers. Then, the first number of fingers can differ from the
second number of fingers.
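Under paragraph [0299], the number of fingers in a swipe selects between the two rates. A hypothetical classifier, reusing the rate constants from the preceding sketch and assuming one-finger swipes map to the regular inputs and two-finger swipes to the fast inputs:

    def classify_swipe(finger_count, direction):
        # assumption: one finger selects the first card rate, two or more
        # fingers select the second (faster) card rate
        fast = finger_count >= 2
        base = "choose-next" if direction > 0 else "choose-previous"
        rate = SECOND_CARD_RATE if fast else FIRST_CARD_RATE
        return ("fast-" + base if fast else base), rate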
[0300] FIG. 16B is a flow chart illustrating a method 1650,
according to an example embodiment. In FIG. 16B, method 1650 is
described by way of example as being carried out by a computing
device, such as a wearable computer, and possibly a wearable
computer that includes an HMD, but other techniques and/or devices
can be used to carry out method 1650, such as discussed above in
the context of method 1600.
[0301] As shown in FIG. 16B, method 1650 begins at block 1660,
where a home card can be displayed by a head-mountable device
(HMD). The HMD can include a user-interface (UI) state, where the
UI state is initially a home UI state.
[0302] In some embodiments, displaying the home card can include
displaying a hint for using a UI of the HMD on the home card. In
particular embodiments, the hint can include a hint for the
voice-based UI. In other particular embodiments, the hint can
include a hint for the touch-based UI. In still other particular
embodiments, displaying the hint can include determining whether a
number of times the hint is used successfully meets or exceeds a
threshold number of times. In response to the number of times the hint has been used successfully not meeting or exceeding the threshold number of times, the hint can be displayed on the home card. In response to the number of times the hint has been used successfully meeting or exceeding the threshold number of times, the hint can be inhibited from display on the home card.
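The hint behavior of paragraph [0302] reduces to a counter check. A minimal sketch, in which the threshold value is an illustrative assumption since the disclosure fixes no particular number:

    HINT_SUCCESS_THRESHOLD = 3   # assumed threshold number of times

    def hint_for_home_card(successful_hint_uses, hint_text):
        # inhibit the hint once it has been used successfully often enough
        if successful_hint_uses >= HINT_SUCCESS_THRESHOLD:
            return None          # nothing displayed on the home card
        return hint_text         # e.g. a voice-based or touch-based UI hint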
[0303] At block 1670, while in the home UI state, a first UI of the
HMD can receive a first input. The first input can be associated
with a first type of input.
[0304] At block 1680, in response to the first type of input being
a choose-next type of input, the HMD can: display a next card of an
ordered plurality of cards, where the ordered plurality of cards
also includes the home card, and where the next card can differ
from the home card, and set the UI state to a timeline-next
state.
[0305] At block 1682, the HMD can, in response to the first type of
input being a choose-previous type of input: display a previous
card of the ordered plurality of cards, where the choose-previous
type of input differs from the choose-next type of input, and where
the previous card differs from both the next card and the home
card, and set the UI state to a timeline-previous state.
[0306] At block 1684, the HMD can, in response to the first type of
input being a tap type of input: activate a second UI of the HMD,
where the first UI of the HMD is a touch-based UI and where the
second UI is a voice-based UI, and set the UI state of the HMD to a
voice-home state.
[0307] At block 1686, the HMD can, in response to the first type of
input being a speech-type of input, determine whether text
associated with the first input matches a predetermined text. In
response to determining that the text associated with the first input
matches the predetermined text, the HMD can activate the second UI
and set the UI state to the voice-home state.
[0308] In some embodiments, method 1650 can additionally include:
in response to the first input being a sleep-type of input,
deactivating at least a portion of the HMD and setting the UI state
of the HMD to a deactivated state.
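Blocks 1680 through 1686, together with the sleep input of paragraph [0308], can be summarized as a transition function out of the home UI state. The state and input names below follow the text; the function itself and its predetermined_text parameter are a hypothetical sketch.

    def home_state_transition(input_type, text=None, predetermined_text=None):
        """Return the next UI state for an input received in the home state."""
        if input_type == "choose-next":
            return "timeline-next"        # next card displayed
        if input_type == "choose-previous":
            return "timeline-previous"    # previous card displayed
        if input_type == "tap":
            return "voice-home"           # voice-based UI activated
        if input_type == "speech" and text == predetermined_text:
            return "voice-home"           # text matches the predetermined text
        if input_type == "sleep":
            return "deactivated"          # at least a portion of the HMD deactivated
        return "home"                     # otherwise remain in the home state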
[0309] In other embodiments, method 1650 can additionally include:
receiving a second input using the first UI while in the
timeline-previous state. The second input can be associated with a
second type of input. In response to the second type of input being
the tap type of input: (i) one or more operations can be selected
based on the previous card, (ii) a menu of operation cards can be
generated based on the selected one or more operations, where each
operation card in the menu of operation cards can correspond to an
operation of the one or more operations, and (iii) at least one
operation card of the menu of operation cards can be displayed.
[0310] In particular of the other embodiments, displaying the at
least one operation card of the menu of operation cards can include
displaying text associated with the operation that overlays the
display of the previous card.
[0311] In other particular of the other embodiments, the one or
more operations can include an operation associated with a grace
period of time. Then, method 1650 can additionally include: while
displaying at least one operation card of the menu of operation
cards, receiving an operation input using the first UI, where the
operation input has an operation type. In response to the operation
type of input being the tap type of input, an operation associated
with the displayed at least one operation card can be determined. A
determination can be made whether a grace period of time is associated with that operation. If the grace period of time is associated with that operation, a card can be
displayed that is configured to graphically indicate the grace
period of time, where the displaying takes at least the grace
period of time. After displaying the card configured to graphically
indicate the grace period of time, the HMD can perform the
associated operation.
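A minimal sketch of the grace-period behavior of paragraph [0311], assuming the operation is supplied as a callable and the countdown card is drawn by a caller-provided hook; all names here are illustrative:

    import time

    def run_with_grace_period(perform, grace_period_s=0.0, draw_countdown=None):
        # displaying the countdown card takes at least the grace period of time
        deadline = time.monotonic() + grace_period_s
        while time.monotonic() < deadline:
            if draw_countdown:
                # e.g. redraw a card graphically indicating the time remaining
                draw_countdown(deadline - time.monotonic())
            time.sleep(0.05)
        perform()   # perform the associated operation after the grace period

A cancel input checked inside the loop would let the wearer abort before perform() runs, which is the usual purpose of such a grace period.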
[0312] In still other embodiments, method 1650 can additionally
include: while in the timeline-next state, receiving a second input
using the first UI. The second input can be associated with a
second type of input. In response to the second type of input being
the tap type of input: (a) one or more operations can be selected
based on the next card, (b) a menu of operations can be generated
based on the selected one or more operations, and (c) at least one
menu operation of the menu of operations can be displayed. In
particular of the still other embodiments, displaying the at least
one menu operation of the menu of operations can include displaying
text associated with the at least one menu operation that overlays
the display of the next card.
[0313] In even other embodiments, method 1650 can additionally
include: in response to the UI state of the HMD being in the
voice-home state, generating a menu card to display a menu of
operations for using the voice-based UI. The HMD can display the
menu card. After displaying the menu card, a head-related input
related to a head movement associated with the HMD can be received.
The menu card can be modified based on the head-related input. The
modified menu card can be displayed using the HMD.
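The head-movement modification of paragraph [0313] can be pictured as panning the menu card in proportion to head rotation. The MenuCard dataclass, the yaw input, and the sensitivity constant below are illustrative assumptions:

    from dataclasses import dataclass

    PIXELS_PER_DEGREE = 12   # assumed pan sensitivity

    @dataclass
    class MenuCard:
        operations: list      # menu of operations for using the voice-based UI
        offset_px: int = 0    # horizontal pan applied when the card is drawn

    def modify_menu_card(card, yaw_degrees):
        # shift the menu in proportion to the head-related input
        card.offset_px += int(yaw_degrees * PIXELS_PER_DEGREE)
        return card           # modified menu card, redisplayed by the HMD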
[0314] FIG. 17 is a flow chart illustrating a method 1700,
according to an example embodiment. In FIG. 17, method 1700 is
described by way of example as being carried out by a computing
device, such as a wearable computer and possibly a wearable
computer that includes an HMD, but other techniques and/or devices
can be used to carry out method 1700, such as discussed above in
the context of method 1600.
[0315] Method 1700 can begin at block 1710. At block 1710, a
computing device can display at least a portion of a first linear
arrangement of cards. The first linear arrangement can include an
ordered plurality of cards that includes one or more first cards of
a first card-type and one or more second cards of a second
card-type. Each first card corresponds to a group of cards. Aspects
of the first linear arrangement are discussed above at least in the
context of FIGS. 5C, 6D, 7, 8, 10A, 10B, 14A, and 14B.
[0316] In some embodiments, the first linear arrangement can
include a timeline and each card of the first linear arrangement
can be associated with a specific time, such as discussed above at
least in the context of FIGS. 5A-15.
[0317] In other embodiments, each card of the ordered plurality of
cards can include a relationship-related parameter, such as a type
as discussed above at least in the context of FIGS. 5C,
6C, 6D, 10A, and 10B, or other kind(s) of relationship-related
parameter(s). In particular embodiments, each card in the group of
cards can be related to a same relationship-related parameter, such
as discussed above in the context of at least FIGS. 5C, 6C, 6D,
10A, and 10B.
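For blocks 1710 through 1740, the ordered plurality can be modeled as a list of typed cards, each associated with a time and an optional relationship-related parameter. The field names in this sketch are hypothetical, not claim language:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Card:
        card_type: str                       # first (bundle) or second (actionable)
        timestamp: float                     # each card associated with a specific time
        relationship: Optional[str] = None   # relationship-related parameter, e.g. a type
        group: List["Card"] = field(default_factory=list)  # non-empty for first cards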
[0318] At block 1720, the computing device can display a selection
region that is moveable with respect to the first linear
arrangement, where a given card is selected when the selection
region is aligned with the given card. Alignment of the selection
region and the given card is discussed above in more detail in the
context of FIG. 5C. Additional aspects of the selection region are
discussed above at least in the context of FIGS. 5C and 6A-6D.
[0319] In some embodiments, the HMD can be configured to detect
head movements. In these embodiments, displaying the selection
region that is moveable with respect to the first linear
arrangement can include moving the selection region with respect to
the first linear arrangement based on the head movements, such as
discussed above at least in the context of FIGS. 5C and 6D.
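Alignment of the selection region with a given card reduces to mapping the region's position onto card slots, with head movement steering that position. A sketch assuming fixed-width card slots; the constants and function names are illustrative:

    CARD_WIDTH_PX = 240   # assumed on-screen card width

    def move_selection(selection_x_px, yaw_degrees, px_per_degree=12):
        # head movement translates the selection region along the arrangement
        return selection_x_px + yaw_degrees * px_per_degree

    def selected_card(cards, selection_x_px):
        index = int(selection_x_px // CARD_WIDTH_PX)   # slot the region aligns with
        return cards[index] if 0 <= index < len(cards) else None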
[0320] At block 1730, in response to selection of a given first
card by the selection region, the computing device can display at
least a portion of a second linear arrangement of cards, where the
second linear arrangement can include an ordered plurality of the
group of cards that corresponds to the given first card. Aspects of
the given first card are discussed above at least in the context of
FIGS. 5C, 6D, 7, 8, 10A, 10B, 14A, and 14B. In some embodiments,
the second linear arrangement can also include the given first
card.
[0321] At block 1740, in response to selection of a given second
card by the selection region, the computing device can display at
least a portion of a third linear arrangement of cards, where the
third linear arrangement includes one or more third cards of a
third card-type, where each third card is selectable to perform an
action based on the given second card. Aspects of actionable cards
are discussed above at least in the context of FIGS. 5A-15.
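Blocks 1730 and 1740 amount to a dispatch on the type of the selected card: a first card opens its group as the second linear arrangement, while a second card opens a third arrangement of action cards. A sketch using plain dictionaries, in which the actions_for callback is a hypothetical stand-in for however the third cards are derived:

    def arrangement_for_selection(card, actions_for):
        # card is a dict with a "card_type" key and, for first cards, a "group" list
        if card["card_type"] == "bundle":
            return [card] + card["group"]   # second arrangement; may include the first card
        if card["card_type"] == "actionable":
            return actions_for(card)        # third arrangement of selectable action cards
        return []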
[0322] In some embodiments, the selected second card can be related
to a first relationship-related parameter, and displaying at least
the portion of the third linear arrangement can include determining
the one or more third cards based on the first relationship-related
parameter, such as discussed above in the context of at least FIGS.
5C and 6D. In other embodiments, the second linear arrangement can
include a bundle card.
[0323] In even other embodiments, the HMD can be configured with a
touchpad. In these embodiments, such as discussed above in the
context of at least FIGS. 5C and 6A-6D, method 1700 can further
include: initially displaying a single card from the first
linear arrangement using a single-card view; while displaying the
single card, receiving a first input via the touchpad; in response
to the first input: switching to a multi-timeline view and
displaying, in the multi-timeline view, the at least a portion of
the first linear arrangement of cards, wherein the at least the
portion of the first linear arrangement of cards comprises the
single card.
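The view switch of paragraph [0323] can be sketched as follows, where the size of the window of cards shown in the multi-timeline view is an assumption and the view names mirror the text:

    def on_touchpad_input(view, cards, current_index, visible=3):
        if view == "single-card":
            # switch to the multi-timeline view; the displayed portion of the
            # first linear arrangement comprises the previously single card
            lo = max(0, current_index - visible // 2)
            return "multi-timeline", cards[lo:lo + visible]
        return view, [cards[current_index]]   # otherwise keep the current view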
[0324] In still other embodiments, method 1700 can further include:
after displaying the second linear arrangement of cards, selecting
a card other than the selected first card; and ceasing display of
the second linear arrangement.
F. CONCLUSION
[0325] The present disclosure is not to be limited in terms of the
particular embodiments described in this application, which are
intended as illustrations of various aspects. Many modifications
and variations can be made without departing from its spirit and
scope, as will be apparent to those skilled in the art.
Functionally equivalent methods and apparatuses within the scope of
the disclosure, in addition to those enumerated herein, will be
apparent to those skilled in the art from the foregoing
descriptions. Such modifications and variations are intended to
fall within the scope of the appended claims.
[0326] The above detailed description describes various features
and functions of the disclosed systems, devices, and methods with
reference to the accompanying figures. In the figures, similar
symbols typically identify similar components, unless context
dictates otherwise. The example embodiments described herein and in
the figures are not meant to be limiting. Other embodiments can be
utilized, and other changes can be made, without departing from the
spirit or scope of the subject matter presented herein. It will be
readily understood that the aspects of the present disclosure, as
generally described herein, and illustrated in the figures, can be
arranged, substituted, combined, separated, and designed in a wide
variety of different configurations, all of which are explicitly
contemplated herein.
[0327] With respect to any or all of the ladder diagrams,
scenarios, and flow charts in the figures and as discussed herein,
each block and/or communication can represent a processing of
information and/or a transmission of information in accordance with
example embodiments. Alternative embodiments are included within
the scope of these example embodiments. In these alternative
embodiments, for example, functions described as blocks,
transmissions, communications, requests, responses, and/or messages
can be executed out of order from that shown or discussed,
including substantially concurrent or in reverse order, depending
on the functionality involved. Further, more or fewer blocks and/or
functions can be used with any of the ladder diagrams, scenarios,
and flow charts discussed herein, and these ladder diagrams,
scenarios, and flow charts can be combined with one another, in
part or in whole.
[0328] A block that represents a processing of information can
correspond to circuitry that can be configured to perform the
specific logical functions of a herein-described method or
technique. Alternatively or additionally, a block that represents a
processing of information can correspond to a module, a segment, or
a portion of program code (including related data). The program
code can include one or more instructions executable by a processor
for implementing specific logical functions or actions in the
method or technique. The program code and/or related data can be
stored on any type of computer readable medium such as a storage
device including a disk or hard drive or other storage medium.
[0329] The computer readable medium can also include non-transitory computer readable media such as computer-readable media that store data for short periods of time, like register memory, processor cache, and random access memory (RAM). The computer readable media can also include non-transitory computer readable media that store program code and/or data for longer periods of time, such as secondary or persistent long-term storage, like read-only memory (ROM), optical or magnetic disks, and compact-disc read-only memory (CD-ROM). The computer readable media can also be any
other volatile or non-volatile storage systems. A computer readable
medium can be considered a computer readable storage medium, for
example, or a tangible storage device.
[0330] Moreover, a block that represents one or more information
transmissions can correspond to information transmissions between
software and/or hardware modules in the same physical device.
However, other information transmissions can be between software
modules and/or hardware modules in different physical devices.
[0331] The particular arrangements shown in the figures should not
be viewed as limiting. It should be understood that other
embodiments can include more or fewer of each element shown in a
given figure. Further, some of the illustrated elements can be
combined or omitted. Yet further, an example embodiment can include
elements that are not illustrated in the figures.
[0332] While various aspects and embodiments have been disclosed
herein, other aspects and embodiments will be apparent to those
skilled in the art. The various aspects and embodiments disclosed
herein are for purposes of illustration and are not intended to be
limiting, with the true scope and spirit being indicated by the
following claims.
* * * * *