U.S. patent application number 12/483920 was filed with the patent office on 2009-06-12 and published on 2009-12-24 as publication number 20090319178 for overlay of information associated with points of interest of direction based data services. This patent application is currently assigned to MICROSOFT CORPORATION. Invention is credited to Moe Khosravy and Lev Novik.
Application Number: 12/483920
Publication Number: 20090319178
Family ID: 41430673
Publication Date: 2009-12-24
United States Patent Application 20090319178
Kind Code: A1
Khosravy; Moe; et al.
December 24, 2009
OVERLAY OF INFORMATION ASSOCIATED WITH POINTS OF INTEREST OF
DIRECTION BASED DATA SERVICES
Abstract
With the addition of directional information in the environment,
a variety of service(s) can be provided on top of user
identification or interaction with specific object(s) of interest
by pointing at the objects. Image data representing a subset of
real space near a portable computing device can be displayed
including a set of points of interest (POIs) for direction based
service(s) within scope and automatically overlaying POI content on
the image data relating to the POIs. In one embodiment, the display
is included in an electronic device worn such that the display is
substantially in front of an eye, e.g., a heads up display.
Inventors: Khosravy; Moe (Bellevue, WA); Novik; Lev (Bellevue, WA)
Correspondence Address: TUROCY & WATSON, LLP, 127 Public Square, 57th Floor, Key Tower, CLEVELAND, OH 44114, US
Assignee: MICROSOFT CORPORATION, Redmond, WA
Family ID: 41430673
Appl. No.: 12/483920
Filed: June 12, 2009
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
61074415 | Jun 20, 2008 |
61074590 | Jun 20, 2008 |
61073849 | Jun 19, 2008 |
Current U.S. Class: 701/408
Current CPC Class: G01C 21/3664 20130101; H04W 4/029 20180201; H04W 4/02 20130101; H04L 67/18 20130101; G01C 21/00 20130101; G01C 21/20 20130101; H04W 4/026 20130101; G06Q 30/0241 20130101
Class at Publication: 701/207
International Class: G01C 21/00 20060101 G01C021/00
Claims
1. A method, comprising: displaying image data representing a
subset of real space in a pre-defined vicinity of a portable
computing device; determining a set of points of interest (POIs) of
one or more location based services supported by the portable
computing device within scope of the subset of real space
represented by the image data; and automatically overlaying content
relating to at least one POI of the set of POIs on the image data
displayed according to the displaying.
2. The method of claim 1, wherein the overlaying includes
automatically overlaying content indicating at least one
interactive capability with respect to the at least one POI via the
one or more location based services.
3. The method of claim 1, further comprising: automatically
receiving the content relating to the at least one POI from the one
or more location based services.
4. The method of claim 1, wherein the automatically overlaying
includes automatically overlaying content relating to the at least
one POI over or overlapping with the at least one POI as
represented in the image data.
5. The method of claim 1, wherein the automatically overlaying
includes automatically overlaying content relating to the at least
one POI substantially near the at least one POI as represented in
the image data.
6. The method of claim 1, wherein the displaying includes
displaying video data input from an image capture device of the
portable computing device.
7. The method of claim 1, wherein the displaying includes
displaying the image data received from a network service based on
a location of the portable computing device.
8. The method of claim 1, wherein the displaying includes
displaying satellite image data received from a network service
based on a location of the portable computing device.
9. The method of claim 1, wherein the displaying includes
displaying the image data received from a network service based on
a direction and the location of the portable computing device.
10. The method of claim 1, further comprising: determining a planar
orientation of a display of the portable computing device employed
for the displaying.
11. The method of claim 10, wherein, if the planar orientation is
substantially vertical, the displaying includes displaying two
dimensional image data representing a subset of three dimensional
real space corresponding to a direction defined from the front of
the display to the back of the display and substantially orthogonal
to the display of the portable computing device.
12. The method of claim 10, wherein, if the planar orientation is
substantially vertical, the displaying includes displaying two
dimensional image data representing a subset of three dimensional
real space corresponding to a direction defined from the back of
the display to the front of the display and substantially
orthogonal to the display of the portable computing device.
13. The method of claim 10, further comprising: if the planar
orientation is substantially horizontal, determining whether the
display is facing substantially up or substantially down.
14. The method of claim 13, wherein, if the display is facing
substantially up, the displaying includes displaying image data
representing the subset of real space in the pre-defined vicinity
of the portable computing device as topographical map image data
representing a topographical map of at least part of the
pre-defined vicinity.
15. The method of claim 13, wherein, if the display is facing
substantially down, the displaying includes displaying image data
representing the subset of real space in the pre-defined vicinity
of the portable computing device as celestial body map image data
representing a sky object map associated with a skyward direction
from the pre-defined vicinity.
16. The method of claim 1, wherein the determining includes
determining a set of points of interest (POIs) of one or more
direction based services based on an orientation of the portable
computing device.
17. An electronic device adapted to be worn with a display of the
electronic device substantially in front of at least one eye of a
user of the electronic device, comprising: a positional component
that outputs position information as a function of a location of
the electronic device; a directional component that outputs
direction information as a function of an orientation of the
electronic device; and at least one processor configured to process
at least the position information and the direction information to
determine at least one point of interest relating to the position
information, configured to display, within the display of the
electronic device, the at least one point of interest representing
geographical space nearby the electronic device, and configured to
overlay interactive user interface elements corresponding to the at
least one point of interest nearby, overlapping or over the at
least one point of interest in the user interface.
18. The electronic device of claim 17, wherein the direction
component is a digital compass.
19. The electronic device of claim 17, wherein the at least one
processor is further configured to receive input via one or more of
the interactive user interface elements and automatically take
action based on the input.
20. The electronic device of claim 17, further comprising: a motion
component that outputs motion information as a function of at least
one movement of the electronic device.
21. The electronic device of claim 20, wherein the at least one
processor is further configured to process at least the motion
information and the direction information to determine at least one
pre-defined gesture undergone with respect to the at least one
point of interest and to automatically make a request based on the
at least one pre-defined gesture and the at least one point of
interest.
22. A method for displaying point of interest information on a
mobile device, comprising: determining direction information as a
function of a direction of the mobile device; determining position
information as a function of a position of the mobile device;
determining a set of points of interest within interactive scope of
the mobile device based on the direction information and the
position information; displaying an image based representation of
at least the subset of points of interest; and receiving point of
interest advertisement information for at least a subset of the
points of interest of the set and automatically overlaying the
point of interest advertisement information at pertinent one or
more locations of the image based representation associated with at
least the subset of points of interest.
23. The method of claim 22, further comprising: modifying the image
based representation prior to the displaying.
24. The method of claim 23, further comprising: wherein the
modifying includes modifying the image based representation based
on whether the mobile device is experiencing substantially
nighttime or substantially daytime.
25. The method of claim 23, further comprising: wherein the
modifying includes modifying the image based representation based
on a planar orientation of a display of the mobile device.
26. The method of claim 23, further comprising: wherein the
modifying includes modifying the image based representation based
on the direction information.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Application Ser. No. 61/074,415, filed on Jun. 20, 2008, entitled "MOBILE COMPUTING SERVICES BASED ON DEVICES WITH DYNAMIC DIRECTION INFORMATION," U.S. Provisional Application Ser. No. 61/074,590, filed on Jun. 20, 2008, entitled "MOBILE COMPUTING SERVICES BASED ON DEVICES WITH DYNAMIC DIRECTION INFORMATION," and to U.S. Provisional Application Ser. No. 61/073,849, filed on Jun. 19, 2008, entitled "MOBILE COMPUTING DEVICES, ARCHITECTURE AND USER INTERFACES BASED ON DYNAMIC DIRECTION INFORMATION," the entirety of each of which is incorporated herein by reference.
TECHNICAL FIELD
[0002] The subject disclosure relates to the provision of
direction-based services for a device based on direction
information and/or other information, such as location information,
and to overlaying information in an image based view of a set of
points of interest associated with one or more direction-based
services.
BACKGROUND
[0003] By way of background concerning some conventional systems,
mobile devices, such as portable laptops, PDAs, mobile phones,
navigation devices, and the like have been equipped with location
based services, such as global positioning system (GPS) systems,
WiFi, cell tower triangulation, etc. that can determine and record
a position of mobile devices. For instance, GPS systems use
triangulation of signals received from various satellites placed in
orbit around Earth to determine device position. A variety of
map-based services have emerged from the inclusion of such location
based systems that help users of these devices to be found on a map
and to facilitate point to point navigation in real-time and search
for locations near a point on a map.
[0004] However, such navigation and search scenarios are currently
limited to displaying relatively static information about endpoints
and navigation routes. While some of these devices with location
based navigation or search capabilities allow update of the bulk
data representing endpoint information via a network, e.g., when
connected to a networked personal computer (PC) or laptop, such
data again becomes fixed in time. Accordingly, it would be
desirable to provide a set of richer experiences for users than
conventional experiences predicated on location and conventional
processing of static bulk data representing potential endpoints of
interest.
[0005] Moreover, with conventional navigation systems, a user may wish to request information about a particular point of interest (POI), but it may not be clear what additional information is available about the various POIs represented on the display, other than that it is possible to navigate to a particular POI. The user experience suffers as a result, since opportunities to interact with POIs are lost with conventional navigation systems.
[0006] The above-described deficiencies of today's location based
systems and devices are merely intended to provide an overview of
some of the problems of conventional systems, and are not intended
to be exhaustive. Other problems with the state of the art and
corresponding benefits of some of the various non-limiting
embodiments may become further apparent upon review of the
following detailed description.
SUMMARY
[0007] A simplified summary is provided herein to help enable a
basic or general understanding of various aspects of exemplary,
non-limiting embodiments that follow in the more detailed
description and the accompanying drawings. This summary is not
intended, however, as an extensive or exhaustive overview. Instead,
the sole purpose of this summary is to present some concepts
related to some exemplary non-limiting embodiments in a simplified
form as a prelude to the more detailed description of the various
embodiments that follow.
[0008] Direction based pointing services are provided for portable
devices or mobile endpoints. Mobile endpoints can include a
positional component for receiving positional information as a
function of a location of the portable electronic device, a
directional component that outputs direction information as a
function of an orientation of the portable electronic device and a
processing engine that processes the positional information and the
direction information to determine a subset of points of interest
relative to the portable electronic device as a function of the
positional information and/or the direction information.
[0009] Devices or endpoints can include compass(es), e.g., magnetic or gyroscopic, to determine a direction, and location based systems, e.g., GPS, for determining location. To supplement the positional
information and/or the direction information, devices or endpoints
can also include component(s) for determining speed and/or
acceleration information for processing by the engine, e.g., to aid
in the determination of gestures made with the device.
[0010] With the addition of directional information in the
environment, a variety of service(s) can be provided on top of
identification of specific object(s) of interest. For instance,
content for POIs can be overlaid on top of an image based
representation of real space to provide entry points to viewing
information about the POIs or interacting with the POIs.
[0011] Various embodiments include displaying image data
representing a subset of real space near a portable computing
device; determining a set of points of interest (POIs) for
direction based service(s) supported by the portable computing
device within scope of the real space represented by the image data
and automatically overlaying POI content on the image data. In one embodiment, the display is included in an electronic device worn such that the display is substantially in front of a user's eyes, e.g., as part of a heads up display, helmet, headgear, shoulder supported device, neck supported device, etc.
[0012] These and other embodiments are described in more detail
below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] Various non-limiting embodiments are further described with
reference to the accompanying drawings in which:
[0014] FIG. 1 illustrates a block diagram of POIs displayed and
corresponding overlay information in accordance with an
embodiment;
[0015] FIG. 2 illustrates a non-limiting sample image overlay in
accordance with an embodiment;
[0016] FIG. 3 illustrates another non-limiting sample image overlay
in accordance with an embodiment;
[0017] FIG. 4 is a flow diagram illustrating an exemplary
non-limiting process for when a portable electronic device is held
in a vertical plane;
[0018] FIG. 5 is a flow diagram illustrating an exemplary
non-limiting process for determining a planar orientation of a
device;
[0019] FIG. 6 is a block diagram illustrating alternate embodiments
for image based representation of real space based on whether the
device is horizontal or vertical;
[0020] FIG. 7 is a block diagram illustrating an embodiment for
image based representation of real space when a planar orientation
of the device is vertical;
[0021] FIG. 8 is a block diagram illustrating an embodiment for
image based representation of real space when executing a collision
based algorithm(s);
[0022] FIG. 9 is a block diagram illustrating an embodiment for
image based representation of real space when marking points of
interest for audio/visual notification;
[0023] FIG. 10 is an embodiment of an image rendering device as
implemented in a heads up display device, such as headgear,
glasses, or the like;
[0024] FIG. 11 is a block diagram illustrating alternate
embodiments for image based representation of real space based on
the device being in a substantially horizontal plane;
[0025] FIG. 12 is a block diagram illustrating alternate
embodiments for image based representation of real space based on
the device being in a substantially horizontal plane;
[0026] FIG. 13 is a block diagram illustrating alternate 2-D or 3-D
embodiments for image based representation of real space in front
of a user of the device based on the device being in a
substantially vertical plane;
[0027] FIG. 14 is a block diagram illustrating alternate 2-D or 3-D
embodiments for image based representation of real space behind a
user of the device based on the device being in a substantially
vertical plane;
[0028] FIG. 15 is a block diagram illustrating alternate
embodiments for modifying the image based representation of real
space prior to overlaying POI content;
[0029] FIG. 16 is a non-limiting process for overlaying POI content
on a display in a direction based services environment;
[0030] FIG. 17 is another non-limiting process for overlaying POI
content on a display in a direction based services environment;
[0031] FIG. 18 is a sample mobile computing device for performing
POI overlay of content in a direction based services environment
applicable to one or more embodiments herein;
[0032] FIG. 19 is an exemplary non-limiting architecture for
providing direction based services based on direction based
requests as satisfied by network services and corresponding data
layers;
[0033] FIG. 20 is a sample computing device in which one or more
embodiments described herein may be implemented;
[0034] FIG. 21 illustrates a sample embodiment in the context of
advertisement content and opportunity to deliver the advertisement
content as overlay content to clients consuming direction based
services for a set of POIs within scope;
[0035] FIG. 22 is a block diagram illustrating the formation of
motion vectors for use in connection with location based
services;
[0036] FIG. 23, FIG. 24 and FIG. 25 illustrate aspects of
algorithms for determining intersection endpoints with a pointing
direction of a device;
[0037] FIG. 26 represents a generic user interface for a mobile
device for representing points of interest based on pointing
information;
[0038] FIG. 27 represents some exemplary, non-limiting alternatives
for user interfaces for representing point of interest
information;
[0039] FIG. 28 represents some exemplary, non-limiting fields or
user interface windows for displaying static and dynamic
information about a given point of interest;
[0040] FIG. 29 illustrates a process for predicting points of
interest and aging out old points of interest in a region-based
algorithm;
[0041] FIG. 30 illustrates a first process for a device upon
receiving a location and direction event;
[0042] FIG. 31 illustrates a second process for a device upon
receiving a location and direction event;
[0043] FIG. 32 is a block diagram representing an exemplary
non-limiting networked environment in which embodiment(s) may be
implemented; and
[0044] FIG. 33 is a block diagram representing an exemplary
non-limiting computing system or operating environment in which
aspects of embodiment(s) may be implemented.
DETAILED DESCRIPTION
Overview
[0045] Among other things, current location based systems and services, e.g., GPS, cell triangulation, P2P location services such as Bluetooth or WiFi, etc., tend to be based on the location of the
device only, and tend to provide static experiences that are not
tailored to a user because the data about endpoints of interest is
relatively static, or fixed in time. Another problem is that a user
may wish to do other things than navigate to a particular point of
interest (POI).
[0046] At least partly in consideration of these and other
deficiencies of conventional location based services, in various
non-limiting embodiments, in addition to displaying image based
representations of real space including representations of
direction based services objects determined for the real space,
e.g., points of interest, the image based representations are
overlaid with additional POI information pertaining to the POIs. In
this regard, the user experience is substantially improved since
users can view or interact with POI information in conceptual
proximity to the objects as represented in the image based
representation of real space, e.g., in real time.
[0047] For instance, various embodiments of a portable device are
provided that use direction information, position information
and/or motion information to determine a set of POIs within scope.
Then, when displaying an image based view (e.g., video data or satellite images) of the set of POIs and corresponding real space, POI information is overlaid next to, nearby, or over the POIs. A
way to interact with POIs is thus provided via a device having
access to direction information about a direction of the device,
position information about a position of the device and optional
motion information, wherein based on the information, the device
intelligently fetches content regarding POIs and overlays the
content in association with the POIs as represented in the image
data.
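As a minimal sketch of how such a device might decide which POIs fall "within scope," the angular offset between the device heading and each POI's bearing can be tested against a cone around the heading. The POI field names, the coordinate scheme, and the 30-degree half-angle are illustrative assumptions, not details from this application:

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial compass bearing, in degrees, from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360.0

def in_scope(device_lat, device_lon, heading_deg, poi, half_angle=30.0):
    """True if the POI lies within +/- half_angle degrees of the heading."""
    b = bearing_to(device_lat, device_lon, poi["lat"], poi["lon"])
    # Normalize the offset into [-180, 180) before comparing.
    offset = (b - heading_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= half_angle
```

A device pointed north (heading 0) would thus keep a POI due north of it in scope and discard one due south.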
[0048] A non-limiting device provisioned for direction based
services can include an engine for analyzing location information
(e.g., GPS, cell phone triangulation, etc.), direction information
such as compass information (e.g., North, West, South, East, up,
down, etc.), and optionally movement information (e.g.,
accelerometer information) to allow a platform for pointing to and
thereby finding objects of interest in a user's environment. A
variety of scenarios are contemplated based on a user finding
information of interest about objects of interests, such as
restaurants, or other items around an individual, or persons,
places or events of interest nearby a user and tailoring
information to that user (e.g., coupons, advertisements), and then
overlaying that content on a display representing real space in
proximity to the device. Any of the embodiments described herein
can be provided in the context of a heads up display of POIs, or
portable electronic device, i.e., any computing device wherein the
act of pointing directionally with the device can be used in
connection with one or more direction based services
[0049] In various non-limiting embodiments, a process includes
displaying image data representing a subset of real space in a
pre-defined vicinity of a portable computing device, determining a
set of POIs of direction based service(s) supported by the portable
computing device within scope and automatically overlaying content
relating to the POIs of the set on the image data. The overlaying
can include indicating an interactive capability with respect to
the POI(s) via the direction based service(s). The overlaid content can overlap with, or be presented near, the underlying POI as represented in the image data. The content
relating to the POIs can be automatically received from the
direction based service(s).
[0050] The image data can be any one or more of: video data input
from an image capture device of the portable computing device,
image data received from a network service based on a location of
the portable computing device, satellite image data received from a
network service based on a location of the portable computing
device, or image data received from a network service based on a
direction and the location of the portable computing device.
[0051] The process can also include determining a planar
orientation of a display of the portable computing device. In such
embodiments, if the planar orientation is substantially vertical,
two dimensional image data representing a subset of three
dimensional real space in front of, or alternatively behind, the
user is displayed. The determining can also ascertain whether the
display is facing substantially up or substantially down. If the
display is facing substantially up, the image data is a
topographical map of the area in the vicinity of the device. If the
display is facing substantially down, the image data is a celestial
body map of the sky in the vicinity of the device.
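One way the planar-orientation determination above might be implemented is by classifying the accelerometer's gravity vector and dispatching to the corresponding view. The axis convention (z normal to the display, positive out of the screen), the 0.8 g threshold, and the view names are assumptions for illustration:

```python
def classify_orientation(ax, ay, az, threshold=0.8):
    """Classify the display's planar orientation from a gravity vector.

    (ax, ay, az) is an accelerometer reading in g units, with the z axis
    normal to the display and pointing out of the screen (an assumed
    convention). Returns "vertical", "horizontal-up", or "horizontal-down".
    """
    if az > threshold:      # screen normal opposes gravity: facing up
        return "horizontal-up"
    if az < -threshold:     # screen normal aligned with gravity: facing down
        return "horizontal-down"
    return "vertical"

# View selection per the embodiments described above.
VIEW_FOR_ORIENTATION = {
    "vertical": "camera",         # image of real space in front of the user
    "horizontal-up": "topo_map",  # topographical map of the vicinity
    "horizontal-down": "sky_map", # celestial body map for the skyward direction
}
```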
[0052] In other embodiments, a portable electronic device includes
a positional component that outputs position information as a
function of a location of the portable electronic device and a
directional component, e.g., a digital compass, that outputs
direction information as a function of an orientation of the
portable electronic device. The position information and the
direction information are processed to determine POIs relating to
the position information. Then, the POIs are displayed within a
user interface representing geographical space nearby the portable
electronic device along with overlaid interactive user interface
elements overlapping or over the at least one point of interest in
the user interface. Automatic action can thus be taken by inputting
one or more of the interactive user interface elements.
[0053] The device can also include a motion component that outputs
motion information as a function of at least one movement of the
portable device. Using the motion information, gestures can be
determined with respect to the POIs, and the gestures can initiate
automatic action with respect to at least one POI.
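By way of illustration only, a gesture determination from motion information could be as simple as counting spikes in acceleration magnitude, e.g., for a shake gesture; the thresholds and the shake criterion are assumptions, not the method of this application:

```python
import math

def detect_shake(samples, spike_g=2.0, min_spikes=3):
    """Crude shake-gesture detector over accelerometer samples.

    samples: list of (ax, ay, az) readings in g units. A shake is declared
    when the acceleration magnitude deviates from 1 g (rest under gravity)
    by more than (spike_g - 1) g at least min_spikes times.
    """
    spikes = sum(
        1 for ax, ay, az in samples
        if abs(math.sqrt(ax * ax + ay * ay + az * az) - 1.0) > (spike_g - 1.0)
    )
    return spikes >= min_spikes
```

A detected gesture could then trigger an automatic request regarding whichever POI is currently in scope.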
[0054] In other embodiments, a method for displaying POI
information on a mobile device is provided including determining a
set of POIs within interactive scope of the device based on direction information and position information of the device,
displaying an image based representation of some POIs, receiving
POI advertisement information for the POIs and automatically
overlaying the POI advertisement information at pertinent locations
of the image based representation.
[0055] This can include modifying the image based representation
prior to the displaying of the POIs. This might, for example,
include switching between nighttime and daytime views, modifying
the image based representation based on a planar orientation of the
device, or modifying the image based representation as a function
of the direction information of the device.
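As a rough sketch of the nighttime/daytime modification, the rendering style could be chosen from the local clock. The fixed cutoff hours are an assumption; a real implementation would more likely compute sunrise and sunset for the device's position:

```python
from datetime import datetime

def pick_map_style(now=None, night_start=19, night_end=6):
    """Return "night" or "day" for the image based representation.

    night_start/night_end are assumed fixed local-hour cutoffs used here
    purely for illustration.
    """
    hour = (now or datetime.now()).hour
    return "night" if (hour >= night_start or hour < night_end) else "day"
```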
[0056] Accordingly, in various non-limiting embodiments, a way to
interact with POIs is provided by a pointing device having imaging
means, such as a camera for still or video imaging by the device. In a variety of embodiments, visual indications of POIs are
overlaid on an image or map or graphic of a location, so that a
user can easily distinguish among their actual surroundings and
POIs in their actual surroundings. In addition to being implemented
on a pointing device, a heads up display embodiment is provided
that is worn on the head. A variety of scenarios are explored
showing the benefits of POI overlay content.
[0057] While each of the various embodiments herein is presented independently, e.g., as part of the sequence of respective Figures, one can appreciate that a portable device and/or associated network services, as described, can incorporate or combine two or more of any of the embodiments. Given that each of the various embodiments improves the overall services ecosystem in which users wish to operate, a synergy results from combining their different benefits. Accordingly, the combination of different embodiments described below shall be considered herein to represent a host of further alternate embodiments.
[0058] Details of various other exemplary, non-limiting embodiments
are provided below.
Overlay of Information Associated with Points of Interest of
Direction Based Data Services
[0059] As mentioned, with the addition of directional information
in the environment, a variety of service(s) can be provided on top
of identification of specific object(s) or point(s) of interest.
For instance, content for POIs can be overlaid on top of an image
based representation of real space to provide entry points to
viewing information about the POIs or interacting with the POIs.
The techniques can be embodied in any device provisioned for
direction based services, such as a portable electronic device, or
an electronic device worn such that the display is substantially in
front of a user's eyes, e.g., as part of a heads up display, helmet, headgear, shoulder supported device, neck supported device, etc.
[0060] FIG. 1 is a high level block diagram of POIs displayed and
corresponding overlay information in accordance with an embodiment
of a user interface. Direction based services enabled device 100
(examples provided below) includes a display 110 for displaying
image based data corresponding to real space in proximity to device
100 and/or as a function of direction of the display 110 of device
100. In a typical scenario, based on location and/or direction, a
set of POIs is displayed in the image data on display 110, such as
POIs 122, 124 and 126. Correspondingly, in various embodiments, POI
content is retrieved from one or more direction based data services
and overlaid near the POIs 122, 124 and 126, for example, at
locations indicated by POI overlays 112, 114 and 116,
respectively.
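One plausible way to place an overlay near a POI as represented in the image data is to map the angular offset between the device heading and the POI's bearing onto the display width. The linear mapping across the camera's field of view, and the default field-of-view and screen-width values, are simplifying assumptions:

```python
def poi_screen_x(poi_bearing, heading, fov_deg=60.0, screen_w=480):
    """Horizontal screen position for a POI overlay, or None if off-screen.

    Assumes a simple linear mapping of angular offset onto pixels across
    the camera's horizontal field of view (fov_deg); both defaults are
    illustrative.
    """
    offset = (poi_bearing - heading + 180.0) % 360.0 - 180.0
    if abs(offset) > fov_deg / 2.0:
        return None  # POI is outside the viewfinder
    return round((offset / fov_deg + 0.5) * screen_w)
```

A POI dead ahead would land at the center of the display, and one 15 degrees to the right would land three quarters of the way across.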
[0061] FIG. 2 illustrates a non-limiting sample image overlay in
accordance with an embodiment. Where a device includes a camera, a
representative non-limiting overlay UI 200 might, for example,
include an image based representation of four POIs POI1, POI2, POI3 and POI4. The POIs are overlaid over actual image data being viewed in real time on the device via an LCD screen or like display.
The actual image data can be of products on a shelf or other
display or exhibit in a store. Thus, as the user aims the camera
around his or her environment, the lens becomes the pointer, and
the POI information can be overlaid intelligently for discovery of
endpoints of interest. Moreover, a similar embodiment can be
imagined even without a camera, such as a UI in which 3-D objects
are virtually represented based on real geometries known for the
objects relative to the user.
[0062] Thus, in the present non-limiting embodiment, the device UI
can be implemented consistent with a camera, or a virtual camera,
view for intuitive use of such devices. The pointer mechanism of
the device could also switch based on whether the user was
currently in live view mode for the camera or not. Moreover,
assuming sufficient processing power and storage, real time image
processing could discern an object of interest and based on image
signatures, overlay POI information over such image in a similar
manner to the above embodiments. In this regard, with the device
provided herein with a camera, a user can perform such actions as zooming in and out, tilting for looking down or up, or panning across a field of view to obtain a range of POIs associated with a panning scope, etc.
[0063] With respect to a representative set of user settings, a
number or maximum number of desired endpoints delivered as results
can be configured. How to filter can also be configured, e.g., 5
most likely, 5 closest, 5 closest to 100 feet away, 5 within
category or sub-category, alphabetical order, etc. In each case,
based on a pointing direction, implicitly a cone or other cross
section across physical space is defined as a scope of possible
points of interest. In all cases, some set of POIs is defined
according to a proximity to the device. In this regard, the width
or depth of this cone or cross section can be configurable by
the user to control the accuracy of the pointing, e.g., narrow or
wide radius of points and how far out to search. The images of FIG.
2 do not need to come from a camera but could come from a network
or satellite service based on location and/or direction.
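For purposes of illustration only, the cone-based scoping of POIs described above might be sketched as follows. All names, thresholds and the use of a planar mathematical heading angle are illustrative assumptions, not part of the disclosure:

```python
import math

def filter_pois(pois, device_pos, heading_deg, half_angle_deg=20.0,
                max_range_m=500.0, max_results=5):
    """Return up to max_results POI names inside the pointing cone,
    closest first. heading_deg is a planar math angle (0 = +x axis)."""
    hx = math.cos(math.radians(heading_deg))
    hy = math.sin(math.radians(heading_deg))
    hits = []
    for name, (px, py) in pois.items():
        dx, dy = px - device_pos[0], py - device_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0 or dist > max_range_m:
            continue  # beyond the configured search depth
        # Angle between the device heading and the POI bearing.
        cos_a = (dx * hx + dy * hy) / dist
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle <= half_angle_deg:
            hits.append((dist, name))
    return [name for _, name in sorted(hits)[:max_results]]
```

Widening `half_angle_deg` or `max_range_m` corresponds to the user widening or deepening the cone; `max_results` corresponds to the configurable maximum number of endpoints delivered.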
[0064] FIG. 3 illustrates another non-limiting sample image overlay
in accordance with an embodiment. In contrast to the "in front of
the user's device or face" view of FIG. 2, FIG. 3 illustrates a
topographical map view via non-limiting overlay UI 300. For
example, UI 300 includes an image based topographical
representation of five POIs POI1, POI2, POI3, POI4 and POI5. The
view of POIs in FIG. 3 can be compared with that of FIG. 2 (except
that POI5 is not visible in FIG. 2).
[0065] FIG. 4 is a flow diagram of a non-limiting process whereby
it is anticipated that a user will hold a device substantially in a
vertical plane, as if scanning an area in a camera viewfinder with
overlay information and actions introduced to give the viewfinder
context for POI action, though the image data representing the real
space can be received from any source. For instance, the user's
arm may be extended forward in front of the user's eyes while the
user observes the display by looking forward towards the landscape. In
such a case where the device is held upright, which can be detected
by motion information of the device, substantially in the vertical
plane, at 400, camera imagery is displayed with overlay of point of
interest indication or information. At 410, a distance is indicated
to scope the points of interest on display, e.g., close, near or
far items. For instance, nearness or farness can be based on tiers
of concentric rings and user indication of which tier.
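For illustration only, the concentric-ring scoping at 410 might be sketched as below; the ring radii and tier labels are illustrative assumptions:

```python
def distance_tier(dist_m, rings=(50, 200, 1000)):
    """Map a POI distance onto concentric-ring tiers, e.g., 'close',
    'near' or 'far' items, per the user's tier indication."""
    labels = ("close", "near", "far")
    for label, limit in zip(labels, rings):
        if dist_m <= limit:
            return label
    return "out-of-range"
```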
[0066] At 420, information about a selected point of interest is
displayed as overlay over the image. At 430, an action is requested
with respect to the selected place or item, e.g., show information,
directions, etc. For example, a user may wish to review the item or
add to wikipedia knowledge about point of interest, e.g., upload
information, images, etc. In this regard, because it is intuitive
to give a 3-D perspective view when the viewing plane is orthogonal
to the ground plane, in the present embodiment, a 3-D perspective
view with POI information overlay is implemented when the device is
held substantially in the vertical plane. In effect, the camera
shows the real space behind the device, and indications of points
of interest in that space as if the user was performing a scan of
his or her surroundings with the device. Direction information of
the device 2600 enables data and network services to know what the
scope of objects for interaction with the device is.
[0067] FIG. 5 is another non-limiting flow diagram relating to a
process for determining whether a portable device is aligned
substantially vertically or horizontally with respect to a viewing
plane of the device. At 500, motion information of the device is
analyzed, e.g., accelerometer input. At 510, it is determined
whether a viewing plane of a portable device is aligned with a
substantially horizontal plane substantially parallel to a ground
plane or aligned with a substantially vertical plane substantially
orthogonal to the ground plane. At 520, if the answer is
horizontal, a topographical map view of a geographical area map is
displayed determined based on location and direction information
measured by the portable device. Indication(s) of the point(s) of
interest on the map can also be displayed, e.g., highlighting or
other designation, or enhancement. At 530, if the answer is
vertical, then an image based view of three-dimensional (3-D) space
extending from the portable device (e.g., from the camera) is
displayed. Similarly to the topographical map view, indication(s)
of point(s) of interest pertaining to the 3-D space can be
displayed.
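By way of non-limiting illustration, the determination at 500-510 from accelerometer input might be sketched as follows. The axis convention (z normal to the display, readings in g) and the tolerance are illustrative assumptions:

```python
def viewing_plane(ax, ay, az, tol=0.35):
    """Classify device orientation from accelerometer output (in g).
    Gravity mostly along the screen normal implies the display lies
    flat (substantially horizontal plane); gravity mostly within the
    screen plane implies the device is held upright (substantially
    vertical plane)."""
    if abs(az) > 1.0 - tol:
        return "horizontal"   # 520: show topographical map view
    if abs(az) < tol:
        return "vertical"     # 530: show 3-D image based view
    return "indeterminate"    # between planes; retain current view
```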
[0068] FIG. 6 is a block diagram illustrating alternate embodiments
for image based representation of real space based on whether the
device is horizontal or vertical, illustrating a general difference
between embodiments having a display of the device in a horizontal
planar orientation or a vertical planar orientation. With device
600 in the horizontal plane, a 2-D topographical map display of
geographical area and indications of points of interest 620 is
displayed. In this regard, device 600 detects it is substantially
in the horizontal plane and displays UI 610. When device 650
detects it is substantially in the vertical plane, upright, a
vertical plane UI 660 is invoked which, instead of a 2-D plan view
of the world, includes a 3-D perspective view 670 as reflected by
the 2-D imagery of the camera input.
[0069] FIG. 7 is a block diagram illustrating an embodiment for
image based representation of real space when a planar orientation
of the device 700 is vertical, thereby invoking the image
acquisition device 710 to acquire input 720 and display the input
on display 730 with POI information 740. In this regard, as the
user rotates the camera according to the arrow 750, the POI
information changes along with the scope of the camera input 710 as
it changes with the device 700 spinning around.
[0070] FIG. 8 is a block diagram illustrating an embodiment for
image based representation of real space when executing a collision
based algorithm. Direction based services enabled device 800
includes a display 810 for displaying image based data
corresponding to real space in proximity to device 800 and/or as a
function of direction of the display 810 of device 800. In a
typical scenario, based on location and/or direction, a set of POIs
is displayed in the image data on display 810, such as POIs 822,
824 and 826. Correspondingly, in various embodiments, POI content
is retrieved from one or more direction based data services and
overlaid near the POIs 822, 824 and 826, for example, at locations
indicated by POI overlays 812, 814 and 816, respectively.
[0071] In addition, since POIs 822, 824 and 826 may be moving along
a path recorded or tracked by one or more direction based services,
direction indicators 832, 834 and 836, respectively, can be
provided to give a user a real-time view of the movement of the
POIs 822, 824 and 826 and their current direction. In this way,
based on algorithms that either help the user to collide (or
otherwise come into contact) with other POIs, or help the user to
avoid other POIs, a variety of applications and scenarios are
contemplated from social networking scenarios to restaurant finding
to games, such as hide and seek.
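For illustration only, one way a collision based algorithm as above might estimate when a user and a moving POI come closest, assuming constant velocities (a simplifying assumption not required by the disclosure), is:

```python
def time_of_closest_approach(p_user, v_user, p_poi, v_poi):
    """Time (>= 0) at which a moving user and a moving POI are
    nearest, given 2-D positions and velocities; small times suggest
    an imminent collision (or a successful rendezvous), large times
    suggest the POI is easily avoided."""
    rx, ry = p_poi[0] - p_user[0], p_poi[1] - p_user[1]
    vx, vy = v_poi[0] - v_user[0], v_poi[1] - v_user[1]
    vv = vx * vx + vy * vy
    if vv == 0:
        return 0.0  # no relative motion; separation is constant
    t = -(rx * vx + ry * vy) / vv
    return max(t, 0.0)
```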
[0072] FIG. 9 is a block diagram illustrating an embodiment for
image based representation of real space when marking points of
interest for audio/visual notification. Direction based services
enabled device 900 includes a display 910 for displaying image
based data corresponding to real space in proximity to device 900
and/or as a function of direction of the display 910 of device 900.
In a typical scenario, based on location and/or direction, a set of
POIs is displayed in the image data on display 910, such as POIs
922, 924 and 926. Correspondingly, in various embodiments, POI
content is retrieved from one or more direction based data services
and overlaid near the POIs 922, 924 and 926, for example, at
locations indicated by POI overlays 912, 914 and 916, respectively.
In one embodiment, any of the POIs 922, 924 or 926 can be marked by
a user implicitly or explicitly, and as a result, an audio or
visual notification 932 can be applied to the marked POI 926 or POI
overlay 916 now, or at a future interaction time as well (e.g., a
reminder).
[0073] FIG. 10 is an embodiment of an image rendering device as
implemented in a heads up display device, such as headgear,
glasses, or the like. As mentioned, any of the embodiments herein
can be equally applied in a set of glasses, or other embodiment in
which a display can be presented in front of a user's eyes without
being a handheld device per se. For instance, this could be glasses
1014 or head gear 1012. In either case, the device includes a heads
up display 1010 that supports the display of POI data received from
direction based services. A camera C can be included to observe
what the user's eye or eyes 1020 are looking at. Devices 1014 or
1012 can further include voice input 1040 for voice input commands
to the display to take action with respect to overlay content. The
content can also be projected content or a virtual image plane with
2-D or 3-D POI overlays in alternative embodiments of the device
1012, or 1014, or HUD 1010.
[0074] FIG. 11 is a block diagram illustrating alternate
embodiments for image based representation of real space based on
the device being in a substantially horizontal plane. In this
embodiment, a device 1100 is held with the display 1105 facing
substantially up towards the sky, or sky plane 1120, which is
defined generally parallel with respect to a ground plane 1110. In
such an embodiment, it can be inferred that the user wants a
topographical map view 1125 of his or her surroundings or proximity
in connection with display 1105.
[0075] Similar to FIG. 11, FIG. 12 is a block diagram illustrating
an alternate embodiment for image based representation of real
space based on the device being in a substantially horizontal
plane. However, in this case, instead of up, a device 1200 is held
with the display 1205 facing substantially down towards the ground,
i.e., a ground plane 1210 running parallel to a sky plane 1220. In such
an embodiment, it can be inferred that the user wants a sky map
view 1225 of the sky above the user in connection with display
1205, particularly if it can be determined if the user's head or
eyes are underneath the display (i.e., looking up, e.g.,
stargazing). In one application, at nighttime, a user can scan the
sky and learn of planets, constellations, etc., marking them,
interacting with them, etc., via the universe of users also
observing or having observed such heavenly bodies.
[0076] FIG. 13 is a block diagram illustrating alternate 2-D or 3-D
embodiments for image based representation of real space in front
of a user of the device based on the device being in a
substantially vertical plane. As mentioned, where the user holds a
device 1300 having a display 1305 substantially facing the device
user, the display 1305 can display a 2-D or 3-D view of the POIs in
front of the user 1325. For instance, an imaging element 1330 can
be used to provide the image based view in front of the user, and
POI content can be overlaid on the image based view.
[0077] FIG. 14 is a block diagram illustrating alternate 2-D or 3-D
embodiments for image based representation of real space behind a
user of the device based on the device being in a substantially
vertical plane, e.g., a sleuth mode to see what is happening with
moving POIs behind the user. Where a user thus holds a device 1400
having a display 1405 substantially facing the device user, the
display 1405 can display a 2-D or 3-D view of the POIs behind the
user 1425. For instance, an imaging element 1430 can be used to provide
an image based view of what is behind the user, and POI content can
be overlaid on the image based view.
[0078] FIG. 15 is a block diagram illustrating alternate
embodiments for modifying the image based representation of real
space prior to overlaying POI content. For instance, a device 1500
supporting direction based services may include an overall scene on
display 1505 of some area being pointed at by the device. The area
might include POIs 1510 and 1512 on the display, and overlay
elements 1520 and 1522 are respectively positioned near the POIs.
According to the present embodiment, a variety of views of the
image data can be achieved other than camera based views of the
surroundings. For instance, algorithms can be applied to the image
based view on display 1505 including night view 1530, edge detected
view 1532, a cartoonized view 1534, a virtual earth image view
1536, a POI heat map view (popularity, relevance, etc.) 1537 or
other image based representations of a scene, or POIs 1538, which
may be suited to a given application.
[0079] FIG. 16 is a non-limiting process for overlaying POI content
on a display in a direction based services environment. At 1600,
image data is displayed representing a subset of real space in a
pre-defined vicinity of a portable computing device. At 1610, a set
of POIs of direction based service(s) are determined within scope.
At 1620, a planar orientation of a display of the portable
computing device is determined. At 1630, the content relating to
the POIs can be received from direction based service(s). At 1640,
the content relating to the POIs is automatically overlaid on the
image data ready for user viewing or interaction.
[0080] FIG. 17 is another non-limiting process for overlaying POI
content on a display in a direction based services environment. At
1700, direction information is determined as a function of a
direction of the device and at 1710, position information is
determined as a function of a position of the device. At 1720, a
set of points of interest within interactive scope of the device is
determined based on the direction information and the position
information. At 1730, an image based representation of the point(s)
of interest is displayed by the device. At 1740, point of interest
advertisement information for the point(s) of interest of the set
is received and at 1750, the point of interest advertisement
information is automatically overlaid at pertinent locations of the
image based representation relating to the point(s) of
interest.
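The flow of FIG. 17 can, for illustration only, be sketched as a composition of steps 1700-1750; every callable below is a hypothetical injected dependency, not an API of the disclosure:

```python
def overlay_pipeline(read_direction, read_position, scope_service,
                     ad_service, renderer):
    """Sketch of the FIG. 17 flow with hypothetical dependencies."""
    heading = read_direction()                 # 1700: direction info
    position = read_position()                 # 1710: position info
    pois = scope_service(position, heading)    # 1720: POIs in scope
    frame = renderer.draw_scene(pois)          # 1730: image based view
    ads = ad_service(pois)                     # 1740: POI ad content
    for poi in pois:                           # 1750: overlay at POIs
        renderer.draw_overlay(frame, poi, ads.get(poi))
    return frame
```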
[0081] FIG. 18 is a sample mobile computing device for performing
POI overlay of content in a direction based services environment
applicable to one or more embodiments herein. In this regard, a set
of services 1860 can be built based on location information 1822
and direction information 1832 collected by the phone with a
corresponding interface or display 1825 including POI overlay
content as described in one or more embodiments herein. For
instance, location information 1822 can be recorded by a location
subsystem 1820 such as a GPS subsystem communicating with GPS
satellites 1840. Direction or pointing information 1832 can be
collected by a direction subsystem 1830, such as a compass, e.g.,
gyroscopic, magnetic, digital compass, etc. In addition,
optionally, movement information 1812 can be gathered by the device
1800, e.g., via tower triangulation algorithms, and/or acceleration
of the device 1800 can be measured as well, e.g., with an
accelerometer. The collective information 1850 can be used to gain
a sense of not only where the device 1800 is located in relation to
other potential points of interest tracked or known by the overall
set of services 1860, but also what direction the user is pointing
the device 1800, so that the services 1860 can appreciate at whom
or what the user is pointing the device 1800.
[0082] In addition, a gesture subsystem 1870 can optionally be
included, which can be predicated on any one or more of the motion
information 1812, location information 1822 or direction
information 1832. In this regard, not only can direction
information 1832 and location information 1822 be used to define a
set of unique gestures, but also motion information 1812 can be
used to define an even more complicated set of gestures. The
gesture monitor 1870 produces gesture information 1872, which can
be input as appropriate in connection with delivering services
1860.
[0083] As mentioned, in another aspect, a device 1800 can include a
client side memory 1880, such as a cache, of potentially relevant
points of interest, which, based on the user's movement history can
be dynamically updated. The context, such as geography, speed, etc.
of the user can be factored in when updating. For instance, if a
user's velocity is 2 miles an hour, the user may be walking and
interested in updates at a city block by city block level, or at a
lower level granularity if they are walking in the countryside.
Similarly, if a user is moving on a highway at 60 miles per hour,
the block-by-block updates of information are no longer desirable,
but rather a granularity can be provided and predictively cached on
the device 1800 that makes sense for the speed of the vehicle.
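For illustration only, the speed-dependent choice of cache granularity might be sketched as below; the speed thresholds, radii and level names are illustrative assumptions:

```python
def cache_granularity(speed_mph):
    """Pick a prefetch radius and update granularity from device
    speed: block-by-block for pedestrians, coarser for vehicles."""
    if speed_mph < 4:            # likely walking
        return {"radius_m": 200, "level": "block"}
    if speed_mph < 30:           # cycling or city driving
        return {"radius_m": 1000, "level": "neighborhood"}
    return {"radius_m": 10000, "level": "exit-to-exit"}  # highway
```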
[0084] FIG. 19 is an exemplary non-limiting architecture for
providing direction based services 1910 based on direction based
requests as satisfied by network services and corresponding data
layers according to one or more embodiments. Location information
1900 (e.g., WiFi, GPS, tower triangulation, etc.), direction
information 1902 (e.g., digital compass) and user intent
information 1904, which can be implicit or explicit, are input to
services 1910, which may be any one or more of web services 1912,
cloud services 1914 or other data services 1916. As a result,
content 1940 is returned for efficient real-time interactions with
POIs of current relevance. Data can come from more than one storage
layer or abstraction 1920, 1922, 1924, . . . , or abstraction 1930,
1932, 1934, . . . , e.g., from local server databases or remote
third party storage locations.
[0085] FIG. 20 illustrates an exemplary non-limiting device 2000
including processor(s) 2010 having a position engine or subsystem
2020 for determining a location of the device 2000 and a direction
engine or subsystem 2030 for determining a direction or orientation
of the device 2000. Then, by interacting with local application(s)
2040 and/or service(s) 2070, content, such as advertisements, can
be delivered to the device, which can be tailored to device intent
and a place in which the device is present, or other factors. When
the content is displayed according to an interaction, the content can be
rendered by graphic subsystem or display/UI 2050 or audio subsystem
2060, and POI content can be supplemented with overlay content
placed at, near, overlapping with or over corresponding POIs in an
underlying image based representation.
[0086] In one non-limiting embodiment, point structure 2090 is
included, e.g., a triangular or other polygonal piece that points
along an orientation line 2095 upon which directional calculations
are based. Similarly, the orientation line 2095 can be indicated by
graphics subsystem display/UI 2050 with or without point structure
2090. In this regard, various embodiments herein enable POI ID
information 2080 to be received from services 2070 so that the
content can be viewed or interactions can occur with services 2070
with respect to the POIs.
[0087] FIG. 21 illustrates a sample embodiment in the context of
advertisement content and opportunity to deliver the advertisement
content as overlay content to clients consuming direction based
services for a set of POIs within scope. A potential benefit of the
POI overlay content 2120 for devices supporting direction based
services 2120 based on location information 2140 and direction
information 2150 is advertising opportunity 2130. Based on
aggregate data, statistics and other factors, business intelligence
can price an advertising opportunity 2130, i.e., calculate its
cost. In short, if Coca Cola believes that it is likely
that a user will be nearby Coca Cola merchandise soon, there is
value to Coca Cola in accelerating the process of getting
information to the user's device about a Coke coupon via POI
overlay, such that the Coke coupon pops up immediately when the
user is within range of a Coke retailer POI.
[0088] Due to the enhanced interactive skills of a device
provisioned for direction based location services, FIG. 21 also
illustrates a variety of device interactions that help to form
aggregate and individual user data for purposes of input to a
business intelligence and advertising engine 2130, and/or invited
by way of POI overlay content. By measuring interactions with
points of interest via text 2100, search 2102, barcode scan 2104,
image scan 2106, designation/selection of item of interest 2108,
price compare operations 2110, gesture input 2112, other
interaction with item of interest 2114, voice input, etc., a lot of
user knowledge is gained that can help determine probabilities
sufficient to trigger advertising opportunities for interested
entities 2130. In addition, those advertising opportunities 2130
can be sent to the user in the form of overlay UI content 2120 that
invites any of the foregoing types of device interactions as
well.
[0089] In this regard, users can interact with the endpoints in a
host of context sensitive ways to provide or update information
associated with endpoints of interest, or to receive beneficial
information or instruments (e.g., coupons, offers, etc.) from
entities associated with the endpoints of interest, and any of such
actions can be facilitated by information, content, advertising,
etc. that can relate to POIs and overlaid with the POIs in
connection with an image based representation of the POIs.
Supplemental Context Regarding Pointing Devices, Architectures and
Services
[0090] The following description contains supplemental context
regarding potential non-limiting pointing devices, architectures
and associated services to further aid in understanding one or more
of the above embodiments. Any one or more of any additional
features described in this section can be accommodated in any one
or more of the embodiments described above with respect to
predictive direction based services at a particular location for
given POI(s). While such combinations of embodiments or features
are possible, for the avoidance of doubt, no embodiments set forth
in the subject disclosure should be considered limiting on any
other embodiments described herein.
[0091] As mentioned, a broad range of scenarios can be enabled by a
device that can take location and direction information about the
device and build a service on top of that information. For example,
by using an accelerometer in coordination with an on board digital
compass, an application running on a mobile device updates what
each endpoint is "looking at" or pointed towards, attempting hit
detection on potential points of interest to either produce
real-time information for the device, or to allow the user to select
a range or, using the GPS, a location on a map, and set information
such as "Starbucks--10% off cappuccinos today" or "The Alamo--site
of . . . " for others to discover. One or more accelerometers can
also be used to perform the function of determining direction
information for each endpoint as well. As described herein, these
techniques can become more granular to particular items within a
Starbucks, such as "blueberry cheesecake" on display in the
counter, enabling a new type of sale opportunity.
[0092] Accordingly, a general device for accomplishing this
includes a processing engine to resolve a line of sight vector sent
from a mobile endpoint and a system to aggregate that data as a
platform, enabling a host of new scenarios predicated on the
pointing information known for the device. The act of pointing with
a device, such as the user's mobile phone, thus becomes a powerful
vehicle for users to discover and interact with points of interest
around the individual in a way that is tailored for the individual.
Synchronization of data can also be performed to facilitate roaming
and sharing of POV data and contacts among different users of the
same service.
[0093] In a variety of embodiments described herein, 2-dimensional
(2D), 3-dimensional (3D) or N-dimensional directional-based search,
discovery, and interactivity services are enabled for endpoints in
the system of potential interest to the user.
[0094] The pointing information and corresponding algorithms depend
upon the assets available in a device for producing the pointing or
directional information. The pointing information, however produced
according to an underlying set of measurement components, and
interpreted by a processing engine, can be one or more vectors. A
vector or set of vectors can have a "width" or "arc" associated
with the vector for any margin of error associated with the
pointing of the device. A panning angle can be defined by a user
with at least two pointing actions to encompass a set of points of
interest, e.g., those that span a certain angle defined by a
panning gesture by the user.
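By way of non-limiting illustration, membership of a POI bearing in the arc swept by such a panning gesture might be tested as follows, assuming bearings in degrees swept clockwise from the first pointing action to the second:

```python
def bearing_in_arc(bearing, start, end):
    """True if a POI bearing (degrees, 0-360) falls inside the arc
    swept clockwise from start to end, handling wrap past 360."""
    span = (end - start) % 360
    return (bearing - start) % 360 <= span
```

The same test also accommodates a per-vector "width" or margin of error by widening `start` and `end` before the check.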
[0095] In one non-limiting embodiment, a portable electronic device
includes a positional component for receiving positional
information as a function of a location of the portable electronic
device, a directional component that outputs direction information
as a function of an orientation of the portable electronic device
and a location based engine that processes the positional
information and the direction information to determine a subset of
points of interest relative to the portable electronic device as a
function of at least the positional information and the direction
information.
[0096] The positional component can be a positional GPS component
for receiving GPS data as the positional information. The
directional component can be a magnetic compass and/or a gyroscopic
compass that outputs the direction information. The device can
include acceleration component(s), such as accelerometer(s), that
outputs acceleration information associated with movement of the
portable electronic device. A separate sensor can also be used to
further compensate for tilt and for altitude adjustment
calculations.
[0097] In one embodiment, the device includes a cache memory for
dynamically storing a subset of endpoints of interest that are
relevant to the portable electronic device and at least one
interface to a network service for transmitting the positional
information and the direction information to the network service.
In return, based on real-time changes to the positional information
and direction/pointing information, the device dynamically receives
in the cache memory an updated subset of endpoints that are
potentially relevant to the portable electronic device.
[0098] For instance, the subset of endpoints can be updated as a
function of endpoints of interest within a pre-defined distance
substantially along a vector defined by the orientation of the
portable electronic device. Alternatively or in addition, the
subset of endpoints can be updated as a function of endpoints of
interest relevant to a current context of the portable electronic
device. In this regard, the device can include a set of
Representational State Transfer (REST)-based application
programming interfaces (APIs), or other stateless set of APIs, so
that the device can communicate with the service over different
networks, e.g., Wi-Fi, a GPRS network, etc. or communicate with
other users of the service, e.g., Bluetooth. For the avoidance of
doubt, the embodiments are in no way limited to a REST based
implementation, but rather any other state or stateful protocol
could be used to obtain information from the service to the
devices.
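For illustration only, a stateless (REST-style) query carrying the positional and direction information might be composed as below; the endpoint path and parameter names are hypothetical, not an API of the service:

```python
from urllib.parse import urlencode

def build_poi_request(base_url, lat, lon, heading_deg):
    """Compose a stateless query URL sending position and direction;
    each request is self-describing, so it can travel over Wi-Fi,
    GPRS or any other network without session state."""
    params = urlencode({"lat": f"{lat:.6f}", "lon": f"{lon:.6f}",
                        "heading": f"{heading_deg:.1f}"})
    return f"{base_url}/pois?{params}"
```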
[0099] The directional component outputs direction information
including compass information based on calibrated and compensated
heading/directionality information. The directional component can
also include direction information indicating upward or downward
tilt information associated with a current upward or downward tilt
of the portable electronic device, so that the services can detect
when a user is pointing upwards or downwards with the device in
addition to a certain direction. The height of the vector itself
can also be taken into account to distinguish between an event of
pointing with a device from the top of a building (likely pointing
to other buildings, bridges, landmarks, etc.) and the same event
from the bottom of the building (likely pointing to a shop at
ground level), or towards a ceiling or floor to differentiate among
shelves in a supermarket. A 3-axis magnetic field sensor can also
be used to implement a compass to obtain tilt readings.
[0100] Secondary sensors, such as altimeters or pressure readers,
can also be included in a mobile device and used to detect a height
of the device, e.g., what floor a device is on in a parking lot or
floor of a department store (changing the associated map/floorplan
data). Where a device includes a compass with a planar view of the
world (e.g., 2-axis compass), the inclusion of one or more
accelerometers in the device can be used to supplement the motion
vector measured for a device as a virtual third component of the
motion vector, e.g., to provide measurements regarding a third
degree of freedom. This option may be deployed where the provision
of a 3-axis compass is too expensive, or otherwise unavailable.
[0101] In this respect, a gesturing component can also be included
in the device to determine a current gesture of a user of the
portable electronic device from a set of pre-defined gestures. For
example, gestures can include zoom in, zoom out, panning to define
an arc, all to help filter over potential subsets of points of
interest for the user.
[0102] For instance, web services can effectively resolve vector
coordinates sent from mobile endpoints into <x,y,z> or other
coordinates using location data, such as GPS data, as well as
configurable, synchronized POV information similar to that found in
a GPS system in an automobile. In this regard, any of the
embodiments can be applied similarly in any motor vehicle device.
One non-limiting use is also facilitation of endpoint discovery for
synchronization of data of interest to or from the user from or to
the endpoint.
[0103] Among other algorithms for interpreting
position/motion/direction information, as shown in FIG. 22, a
device 2200 employing the direction based location based services
2202 as described herein in a variety of embodiments includes
a way to discern between near objects, such as POI 2214 and far
objects, such as POI 2216. Depending on the context of usage, the
time, the user's past, the device state, the speed of the device,
the nature of the POIs, etc., the service can determine a general
distance associated with a motion vector. Thus, a motion vector
2206 will implicate POI 2214, but not POI 2216, and the opposite
would be true for motion vector 2208.
[0104] In addition, a device 2200 includes an algorithm for
discerning items substantially along a direction at which the
device is pointing, and those not substantially along a direction
at which the device is pointing. In this respect, while motion
vector 2204 might implicate POI 2212, without a specific panning
gesture that encompassed more directions/vectors, POIs 2214 and
2216 would likely not be within the scope of points of interest
defined by motion vector 2204. The distance or reach of a vector
can also be tuned by a user, e.g., via a slider control or other
control, to quickly expand or contract the scope of endpoints
encompassed by a given "pointing" interaction with the device.
[0105] In one non-limiting embodiment, the determination of at what
or whom the user is pointing is performed by calculating an
absolute "Look" vector, within a suitable margin of error, by a
reading from an accelerometer's tilt and a reading from the
magnetic compass. Then, an intersection of endpoints determines an
initial scope, which can be further refined depending on the
particular service employed, i.e., any additional filter. For
instance, for an apartment search service, endpoints falling within
the look vector that are not apartments ready for lease, can be
pre-filtered.
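For purposes of illustration only, the absolute "Look" vector computation from the compass and accelerometer readings might be sketched as follows; the axis convention (x east, y north, z up) is an illustrative assumption:

```python
import math

def look_vector(heading_deg, tilt_deg):
    """Unit Look vector from a magnetic compass heading (degrees
    clockwise from north) and an accelerometer-derived tilt (degrees,
    positive = pointing upward). Endpoints near the ray from the
    device along this vector form the initial scope."""
    h, t = math.radians(heading_deg), math.radians(tilt_deg)
    return (math.sin(h) * math.cos(t),   # east component
            math.cos(h) * math.cos(t),   # north component
            math.sin(t))                 # up component
```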
[0106] In addition to the look vector determination, the engine can
also compensate for, or begin the look vector at, where the user
is, by establishing position (to within about 15 feet) through an
A-GPS stack (or other location based or GPS subsystem, including
those with assistance strategies), and can also compensate for any
significant movement/acceleration of the device, where such
information is available.
[0107] As mentioned, in another aspect, a device can include a
client side cache of potentially relevant points of interest,
which, based on the user's movement history can be dynamically
updated. The context, such as geography, speed, etc. of the user
can be factored in when updating. For instance, if a user's
velocity is 2 miles per hour, the user may be walking and interested
in updates at a city block by city block level, or at a coarser
granularity if walking in the countryside. Similarly, if a
user is moving on a highway at 60 miles per hour, block-by-block
updates of information are no longer desirable; rather, a
granularity that makes sense for the speed of the vehicle can be
provided and predictively cached on the device.
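The speed-dependent granularity choice above can be sketched as a simple threshold function; the cutoff speeds and level names are illustrative assumptions, since the text only gives the walking and highway examples.

```python
def cache_granularity(speed_mph):
    """Pick a POI cache granularity from the device's speed.
    Thresholds and level names are made-up illustrations.
    """
    if speed_mph < 5:       # walking: city-block-level updates
        return "block"
    elif speed_mph < 30:    # cycling / city driving
        return "neighborhood"
    else:                   # highway speeds: coarse, exit-level regions
        return "region"
```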
[0108] In an automobile context, the location becomes the road on
which the automobile is travelling, and the particular items are
the places and things that are passed on the roadside much like
products in a particular retail store on a shelf or in a display.
The pointing based services thus create a virtual "billboard"
opportunity for items of interest generally along a user's
automobile path. Proximity to location can lead to an impulse buy,
e.g., a user might stop by a museum they are passing and pointing
at with their device, if offered a discount on admission.
[0109] In various alternative embodiments, gyroscopic or magnetic
compasses can provide directional information. A REST based
architecture enables data communications to occur over different
networks, such as Wi-Fi and GPRS architectures. REST based APIs can
be used, though any stateless messaging that does not require a
long keep-alive for communicated data/messages can be used. This
way, if a network served by GPRS antennae goes down, seamless
switching can occur to Wi-Fi or Bluetooth networks to continue the
pointing based services enabled by the embodiments described
herein.
[0110] A device as provided herein according to one or more
embodiments can include a file system to interact with a local
cache, store updates for synchronization to the service, exchange
information by Bluetooth with other users of the service, etc.
Accordingly, operating from a local cache, at least the data in the
local cache is still relevant at a time of disconnection, and thus,
the user can still interact with the data. Finally, the device can
synchronize according to any updates made at a time of
re-connection to a network, or to another device that has more up
to date GPS data, POI data, etc. In this regard, a switching
architecture can be adopted for the device to perform a quick
transition from connectivity from one networked system (e.g., cell
phone towers) to another computer network (e.g., Wi-Fi) to a local
network (e.g., mesh network of Bluetooth connected devices).
[0111] With respect to user input, a set of soft keys, touch keys,
etc. can be provided to facilitate in the directional-based
pointing services provided herein. A device can include a windowing
stack in order to overlay different windows, or provide different
windows of information regarding a point of interest (e.g., hours
and phone number window versus interactive customer feedback
window). Audio can be rendered or handled as input by the device.
For instance, voice input can be handled by the service to
explicitly point without the need for a physical movement of the
device. For instance, a user could say into a device "what is this
product right in front of me? No, not that one, the one above it"
and have the device transmit current direction/movement information
to a service, which in turn intelligently, or iteratively,
determines what particular item of interest the user is pointing
at, and returns a host of relevant information about the item.
[0112] One non-limiting way for determining a set of points of
interest is illustrated in FIG. 23. In FIG. 23, a device 2300 is
pointed (e.g., point and click) in a direction D1, which according
to the device or service parameters, implicitly defines an area
within arc 2310 and distance 2320 that encompasses POI 2330, but
does not encompass POI 2332. Such an algorithm will also need to
determine any edge case POIs, i.e., whether POIs such as POI 2334
are within the scope of pointing in direction D1, where the POI
only partially falls within the area defined by arc 2310 and
distance 2320.
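One way to realize the arc-and-distance scope test of FIG. 23 is a planar wedge test per POI; the coordinate system, function name, and treatment of each POI as a single point (ignoring the partial-overlap edge case noted above) are simplifying assumptions.

```python
import math

def in_sector(device, heading_deg, arc_deg, max_dist, poi):
    """Return True if poi (x, y) lies inside the wedge implicitly
    defined by pointing: within arc_deg total width about heading_deg
    and within max_dist of the device. Planar x=east, y=north
    coordinates; all names are illustrative.
    """
    dx = poi[0] - device[0]
    dy = poi[1] - device[1]
    dist = math.hypot(dx, dy)
    if dist > max_dist:
        return False
    # Bearing measured clockwise from north (+y), matching a compass.
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    # Smallest signed angular difference to the pointing heading.
    diff = abs((bearing - heading_deg + 180) % 360 - 180)
    return diff <= arc_deg / 2
```

A POI such as 2334 that only partially falls inside the wedge would need a boundary-aware variant of this test rather than a single-point check.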
[0113] Other gestures that can be of interest for a gesturing
subsystem include recognizing a user's gesture to zoom in or zoom
out. Zooming in or out can be done in terms of distance, as in FIG.
24. There, a device 2400 pointed in direction D1 may include
zoomed in view which includes points of interest within distance
2420 and arc 2410, or a medium zoomed view representing points of
interest between distance 2420 and 2422, or a zoomed out view
representing points of interest beyond distance 2422. These zoom
zones correspond to POIs 2430, 2432 and 2434, respectively. More or
fewer zones can be considered depending upon a variety of factors,
such as the service, user preference, etc.
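The three distance-based zoom zones can be sketched as a banding function; the threshold values standing in for distances 2420 and 2422 are made-up examples.

```python
def zoom_zone(distance, near=100.0, far=300.0):
    """Classify a POI's distance into one of three zoom zones,
    analogous to the bands bounded by distances 2420 and 2422 in
    FIG. 24 (the numeric thresholds here are illustrative).
    """
    if distance <= near:
        return "zoomed-in"
    elif distance <= far:
        return "medium"
    else:
        return "zoomed-out"
```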
[0114] For another non-limiting example, with location information
and direction information, a user can input a first direction via a
click, and then a second direction after moving the device via a
second click, which in effect defines an arc 2510 for objects of
interest in the system as illustrated in FIG. 25. For instance, via
first pointing act by the user at time t1 in direction D1 and a
second pointing act at time t2 by the user in direction D2, an arc
2510 is implicitly defined. The area of interest implicitly
includes a search of points of interest within a distance 2520, which
can be zoomed in and out, or selected by the service based on a
known granularity of interest, selected by the user, etc. This can
be accomplished with a variety of forms of input to define the two
directions. For instance, the first direction can be defined upon a
click-and-hold button event, or other engage-and-hold user
interface element, and the second direction can be defined upon
release of the button. Similarly, two consecutive clicks
corresponding to the two different directions D1 and D2 can also be
implemented.
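Determining whether a POI's bearing falls within the arc swept from D1 to D2 reduces to an angular-interval test with 360-degree wraparound; this sketch assumes the sweep is taken clockwise from the first click to the second.

```python
def bearing_in_arc(d1_deg, d2_deg, bearing_deg):
    """True if bearing_deg lies within the arc swept clockwise from
    the first pointing direction d1 to the second direction d2 (all
    values in compass degrees). Handles the wrap past 360.
    """
    sweep = (d2_deg - d1_deg) % 360    # total clockwise extent of the arc
    offset = (bearing_deg - d1_deg) % 360
    return offset <= sweep
```

Combined with a distance bound such as 2520, this yields the set of POIs selected by the two-click gesture.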
[0115] Also, instead of focusing on real distance, zooming in or
out could also represent a change in terms of granularity, or size,
or hierarchy of objects. For example, a first pointing gesture with
the device may result in a shopping mall appearing, but with
another gesture, a user could carry out a recognizable gesture to
gain or lose a level of hierarchical granularity with the points of
interest on display. For instance, after such gesture, the points
of interest could be zoomed in to the level of the stores at the
shopping mall and what they are currently offering.
[0116] In addition, a variety of even richer behaviors and gestures
can be recognized when acceleration of the device in various axes
can be discerned. Panning, arm extension/retraction, swirling of
the device, backhand tennis swings, breaststroke arm action, and
golf swing motions could each signify something unique in terms of
the behavior of the pointing device, to name just a few motions
that could be implemented in practice. Thus, any of the
embodiments herein can define a set of gestures that serve to help
the user interact with a set of services built on the pointing
platform, to help users easily gain information about points of
interest in their environment.
[0117] Furthermore, with relatively accurate upward and downward
tilt of the device, in addition to directional information such as
calibrated and compensated heading/directional information, other
services can be enabled. Typically, if a device is at ground level,
the user is outside, and the device is "pointed" up towards the top
of buildings, the granularity of information about points of
interest sought by the user (building level) is different than if
the user was pointing at the first floor shops of the building
(shops level), even where the same compass direction is implicated.
Similarly, where a user is at the top of a landmark such as the
Empire State Building, a downward tilt at the street level (street
level granularity) would implicate information about different
points of interest than if the user of the device pointed with
relatively no tilt at the Statue of Liberty (landmark/building
level of granularity).
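The tilt-dependent granularity selection can be sketched as a banding of the tilt angle; the cutoff angles and level names are illustrative, as the text only distinguishes the scenarios qualitatively.

```python
def granularity_from_tilt(tilt_deg):
    """Map upward/downward tilt (degrees from horizontal) to a POI
    granularity level. Angle bands are illustrative assumptions.
    """
    if tilt_deg > 20:       # pointing up toward the tops of buildings
        return "building"
    elif tilt_deg < -20:    # pointing down, e.g., from an observation deck
        return "street"
    else:                   # roughly level: storefronts at eye level
        return "shops"
```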
[0118] Also, when a device is moving in a car, the pointing
direction may appear to change while the user maintains a pointing
action on a single location; the user is still pointing at the same
thing, but the device has been displaced. Thus, the time varying
location can be factored into the mathematics and engine for
resolving at what the user is pointing with the device, so as to
preserve a user experience in which all items are relative to the
user's position.
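One way to compensate for displacement is to recompute the expected bearing to each candidate POI from the device's current position at each resolution step, rather than comparing raw headings over time; this planar sketch and its coordinate convention are illustrative.

```python
import math

def bearing_to(position, target):
    """Compass bearing (degrees clockwise from north) from position to
    target, in planar x=east, y=north coordinates."""
    dx = target[0] - position[0]
    dy = target[1] - position[1]
    return math.degrees(math.atan2(dx, dy)) % 360

# As a car moves from p1 to p2 while the user keeps pointing at one
# landmark, the expected bearing changes; recomputing it from the
# current position keeps the same POI resolved despite displacement.
p1, p2 = (0.0, 0.0), (100.0, 0.0)
landmark = (200.0, 200.0)
expected_at_p1 = bearing_to(p1, landmark)  # 45 degrees
expected_at_p2 = bearing_to(p2, landmark)  # smaller bearing after moving east
```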
[0119] Accordingly, armed with the device's position, one or more
web or cloud services can analyze the vector information to
determine at what or whom the user is looking/pointing. The service
can then provide additional information such as ads, specials,
updates, menus, happy hour choices, etc., depending on the endpoint
selected, the context of the service, the location (urban or
rural), the time (night or day), etc. As a result, instead of a
blank contextless Internet search, a form of real-time visual
search for users in real 3-D environments is provided.
[0120] In one non-limiting embodiment, the direction based pointing
services are implemented in connection with a pair of glasses,
headband, etc. having a corresponding display means that acts in
concert with the user's looking to highlight or overlay features of
interest around the user.
[0121] As shown in FIG. 26, once a set of objects is determined
from the pointing information according to a variety of contexts of
a variety of services, a mobile device 2600 can display the objects
via representation 2602 according to a variety of user experiences
tailored to the service at issue. For instance, a virtual camera
experience can be provided, where POI graphics or information can
be positioned relative to one another to simulate an imaging
experience. A variety of other user interface experiences can be
provided based on the pointing direction as well.
[0122] For instance, a set of different choices are shown in FIG.
27. UI 2700 and 2702 illustrate navigation of hierarchical POI
information. For instance, level1 categories may include category1,
category2, category3, category4 and category5; if a user selects
around the categories with a thumb-wheel, up-down control, or the
like, and chooses one such as category2, then subcategory1,
subcategory2, subcategory3 and subcategory4 are displayed as
subcategories of category2. Then, if the user selects, for
instance, subcategory4, perhaps few enough POIs, such as buildings
2700 and 2710, are found in the subcategory to display on a
2D map UI 2704 along the pointing direction, or alternatively as a
3D virtual map view 2706 along the pointing direction.
[0123] Once a single POI is implicated or selected, a full
screen view for the single POI can be displayed, such as the
exemplary UI 2800. UI 2800 can have one or more of any of the
following representative areas. UI 2800 can include a static POI
image 2802 such as a trademark of a store, or a picture of a
person. UI 2800 can also include other media, and a static POI
information portion 2804 for information that tends not to change
such as restaurant hours, menu, contact information, etc. In
addition, UI 2800 can include an information section for dynamic
information to be pushed to the user for the POI, e.g., coupons,
advertisements, offers, sales, etc. In addition, a dynamic
interactive information area 2808 can be included where the user can
fill out a survey, provide feedback to the POI owner, request the
POI to contact the user, make a reservation, buy tickets, etc. UI
2800 also can include a representation of the direction information
output by the compass for reference purposes. Further, UI 2800 can
include other third party static or dynamic content in area
2812.
[0124] When things change from the perspective of either the
service or the client, a synchronization process can bring either
the client or service, respectively, up to date. In this way, an
ecosystem is enabled where a user can point at an object or point
of interest, gain information about it that is likely to be
relevant to the user, interact with the information concerning the
point of interest, and add value to services ecosystem where the
user interacts. The system thus advantageously supports both static
and dynamic content.
[0125] Other user interfaces can be considered such as left-right,
or up-down arrangements for navigating categories or a special set
of soft-keys can be adaptively provided.
[0126] To support processing of vector information and aggregating
POI databases from third parties, a variety of storage techniques,
such as relational storage techniques can be used. For instance,
Virtual Earth data can be used for mapping and aggregation of POI
data can occur from third parties such as Tele Atlas, NavTeq, etc.
In this regard, businesses not in the POI database will want to be
discovered, and thus the service provides a Yellow Pages experience
that is similar, but far superior from a spatial relevance
standpoint, where businesses will desire to have their additional
information, such as menus, price sheets, coupons, pictures,
virtual tours, etc., accessible via the system.
[0127] In addition, a synchronization platform or framework can
keep the roaming caches in sync, thereby capturing what users are
looking at and efficiently processing changes. Or, where a user
goes offline, local changes can be recorded, and when the user goes
back online, such local changes can be synchronized to the network
or service store. Also, since the users are in effect pulling
information they care about in the here and now through the act of
pointing with the device, the system generates high cost per
thousand impressions (CPM) rates as compared to other forms of
demographic targeting. Moreover, the system drives impulse buys,
since the user may not be physically present in a store, but the
user may be near the object, and by being nearby and pointing at
the store, information about a sale concerning the object can be
sent to the user.
[0128] As mentioned, different location subsystems, such as tower
triangulation, GPS, A-GPS, E-GPS, etc. have different tolerances.
For instance, with GPS, tolerances can be achieved to about 10
meters. With A-GPS, tolerances can be tightened to about 12 feet.
In turn, E-GPS may have yet a different error margin. Compensating
for the different tolerances is part of the
interpretation engine for determining intersection of a pointing
vector and a set of points of interest. In addition, a distance to
project out the pointing vector can be explicit, configurable,
contextual, etc.
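One illustrative way to fold positional tolerance into the intersection test is to widen the pointing cone by the angle the error circle subtends at a candidate's distance; the formula and function below are an assumption, not the patent's stated method.

```python
import math

def effective_half_arc(base_half_arc_deg, position_error, target_dist):
    """Widen the pointing cone's half-angle to absorb the location
    subsystem's positional error (e.g., ~10 meters for GPS, tighter
    for A-GPS). The extra angle subtended by the error circle at the
    target's distance is added to the nominal half-arc.
    """
    if target_dist <= position_error:
        # Target is closer than the error radius: any bearing could hit it.
        return 180.0
    extra = math.degrees(math.asin(position_error / target_dist))
    return base_half_arc_deg + extra
```

A POI then intersects the pointing vector when its bearing differs from the heading by no more than this widened half-angle.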
[0129] In this regard, the various embodiments described herein can
employ any algorithm for distinguishing among boundaries of the
endpoints, such as boundary boxes, or rectangles, triangles,
circles, etc. As a default radius, e.g., 150 feet could be
selected, and such value can be configured or be context sensitive
to the service provided. On-line real estate sites can be leveraged
for existing POI information. Since different POI databases may
track different information at different granularities, a way of
normalizing the POI data according to one convention or standard
can also be implemented, so that, for example, the residential real
estate location data of Zillow can be integrated with GPS
information for all Starbucks locations by country.
[0130] In addition, similar techniques can be implemented in a
moving vehicle client that includes GPS, compass, accelerometer,
etc. By filtering based on scenarios (e.g., I need gas), different
subsets of points of interest (e.g., gas stations) can be
determined for the user based not only on distance, but actual time
it may take to get to the point of interest. In this regard, while
a gas station may be 100 yards to the right off the highway, the
car may have already passed the corresponding exit, and thus more
useful information to provide is what gas station will take the
least amount of time to drive from a current location based on
direction/location so as to provide predictive points of interest
that are up ahead on the road, and not already aged points of
interest that would require turning around from one's destination
in order to get to them.
[0131] For existing motor vehicle navigation devices, or other
conventional portable GPS navigation devices, where a device does
not natively include directional means such as a compass, the
device can have an extension slot that accommodates direction
information from an external directional device, such as a compass.
Similarly, for laptops or other portable electronic devices, such
devices can be outfitted with a card or board with a slot for a
compass. While any of the services described herein can make web
service calls as part of the pointing and retrieval of endpoint
process, as mentioned, one advantageous feature of a user's
locality in real space is that it is inherently more limited than a
general Internet search for information. As a result, a limited
amount of data can be predictively maintained on a user's device in
cache memory and properly aged out as data becomes stale.
[0132] In another aspect of any of the embodiments described
herein, because stateless messaging is employed, if communications
drop with one network, the device can begin further communicating
via another network. For instance, a device may have two channels;
a user gets on a bus and no longer has GPRS or GPS activity.
Nonetheless the user is able to get the information the device
needs from some other channel. Just because a tower or satellites
are down, does not mean that the device cannot connect through an
alternative channel, e.g., the bus's GPS location information via
Bluetooth.
[0133] With respect to exemplary mobile client architectures, a
representative device can include, as described variously herein,
client-side storage for housing and providing fast access to cached
POI data in the current region including associated dynamically
updated or static information, such as annotations, coupons from
businesses, etc. This includes usage data tracking and storage. In
addition, regional data can be a cached subset of the larger
service data, always updated based on the region in which the
client is roaming. For instance, POI data could include as a
non-limiting example, the following information:
TABLE-US-00001
POI coordinates and data   //{-70.26322, 43.65412, "STARBUCK'S"}
Localized annotations      //Menu, prices, hours of operation, etc.
Coupons and ads            //Classes of coupons (new user, returning, etc.)
[0134] Support for different kinds of information can be included,
e.g., blob versus structured information (blob for storage and
media; structured for tags, annotations, etc.).
[0135] A device can also include usage data and preferences to hold
settings as well as usage data such as coupons "activated,"
waypoints, businesses encountered per day, other users encountered,
etc. to be analyzed by the cloud services for business intelligence
analysis and reporting.
[0136] A device can also include a continuous update mechanism,
which is a service that keeps the client's cached copy of the
current region updated with the latest data. Among other ways, this can
be achieved with a ping-to-pull model that pre-fetches and swaps
out the client's cached region using travel direction and speed to
facilitate roaming among different regions. This is effectively a
paging mechanism for upcoming POIs. This also includes sending a
new or modified POI for the region (with annotations+coupons),
sending a new or modified annotation for the POIs (with coupons),
or sending a new or modified coupon for the POI.
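The ping-to-pull pre-fetch using travel direction and speed might be sketched as projecting the position ahead and selecting the grid region to page in; the grid scheme, abstract planar units, parameter names, and defaults are illustrative assumptions.

```python
import math

def next_region(x, y, heading_deg, speed, lookahead_s=60.0, region_size=1000.0):
    """Project the device's planar position lookahead_s seconds ahead
    along its heading and return the id of the grid cell ("region")
    whose POI data should be pre-fetched. Units are abstract planar
    units; heading is measured clockwise from +y (north).
    """
    h = math.radians(heading_deg)
    d = speed * lookahead_s            # distance covered during the lookahead
    nx = x + d * math.sin(h)
    ny = y + d * math.cos(h)
    # Snap the projected point to its region grid cell.
    return (math.floor(nx / region_size), math.floor(ny / region_size))
```

The client's cached region can then be swapped toward the returned cell before the device arrives, effectively paging in upcoming POIs.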
[0137] A device can also include a Hardware Abstraction Layer (HAL)
having components responsible for abstracting the way the client
communicates with the measuring instruments, e.g., the GPS driver
for positioning and LOS accuracy (e.g., open eGPS), magnetic
compass for heading and rotational information (e.g., gyroscopic),
one or more accelerometers for gestured input and tilt (achieves 3D
positional algorithms, assuming gyroscopic compass).
[0138] As described earlier, a device can also include
methods/interfaces to make REST calls via GPRS/Wi-Fi and a file
system and storage for storing and retrieving the application data
and settings.
[0139] A device can also include user input and methods to map
input to the virtual keys. For instance, one non-limiting way to
accomplish user input is to have softkeys as follows, though it is
to be understood a great variety of user inputs can be used to
achieve interaction with the user interfaces of the pointing based
services.
TABLE-US-00002
SK up/down:                  //Up and down on choices
SK right, SK ok/confirm:     //Choose an option or drill down/next page
SK left, SK cancel/back:     //Go back to a previous window, cancel
Exit / Incoming Call events  //Exit the app or minimize
[0140] In addition, a representative device can include a graphics
and windowing stack to render the client side UI, as well as an
audio stack to play sounds/alerts.
[0141] As mentioned, such a device may also include spatial and
math computational components, including a set of APIs to perform
3D collision testing between subdivided surfaces such as spherical
shells (e.g., a simple hit testing model to adopt, with boundary
definitions for POIs), to rotate points, and to cull as appropriate
from conic sections.
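A minimal version of the spherical-shell hit test could look like the following ray-sphere intersection; the pure-Python vector math and the single-sphere boundary model per POI are simplifications.

```python
def ray_hits_sphere(origin, direction, center, radius):
    """Minimal 3D hit test of the look ray against a spherical POI
    boundary. direction need not be normalized; a sphere behind the
    device (negative ray parameter) is treated as a miss.
    """
    # Vector from the ray origin to the sphere center.
    oc = [c - o for o, c in zip(origin, center)]
    d2 = sum(d * d for d in direction)
    # Ray parameter of the closest approach to the sphere center.
    t = sum(o * d for o, d in zip(oc, direction)) / d2
    if t < 0:
        return False  # sphere is behind the device
    closest = [o + t * d for o, d in zip(origin, direction)]
    dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist2 <= radius * radius
```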
[0142] A representative interaction with a pointing device as
provided in one or more embodiments herein is illustrated in FIG.
29. At 2900, location/direction vector information is determined
based on the device measurements. This information can be recorded
so that a user's path or past can be used when predictively
factoring what the user will be interested in next, as illustrated
at 2910. The predicting can be made based on a variety of other
factors as well, such as context, application, user history,
preferences, path, time of day, proximity, etc. such that the
object(s) or POI(s) a user is most likely to interact with in the
future are identified.
[0143] At 2920, based on the object(s) or POI(s) identified at
2910, predictive information is pre-fetched or otherwise
pre-processed for use with the predicted services with respect to
such object(s) or POI(s). Then, based on current vector
information, or more informally, the act of pointing by the user,
at 2930, an object or point of interest is selected based on any of
a variety of "line of sight" algorithms that determine what POI(s)
are currently within (or outside) of the vector path. It is noted
that occlusion culling techniques can optionally be used to
facilitate overlay techniques. In this regard, at 2940, based at
least in part on the pre-fetched or pre-processed predictive
information, services are performed with respect to the object(s)
or POI(s).
[0144] Additionally, whether the point of interest at issue falls
within the vector can factor in the error in precision of any of
the measurements, e.g., different GPS subsystems have different
error in precision. In this regard, one or more items or points of
interest may be found along the vector path or arc, within a
certain distance depending on context. As mentioned, at 2940, any
of a great variety of services can be performed with respect to any
point of interest selected by the user via a user interface. Where
only one point of interest is concerned, the service can be
automatically performed with respect to the point of interest.
[0145] FIG. 30 is a block diagram of an example region based
prediction algorithm 3000 that takes into account user path and
heading, e.g., as a user has moved from age out candidate 3010 to
the present location 3002, and based on a current user path,
locations 3004 and 3006 are predicted for the user. Accordingly,
based on the direction and location based path history, POI data
for locations 3004 and 3006 can be pre-fetched to local memory of
the device. Similarly, location 3010 becomes the topic for a
decision as to when to age out the data. Such an age out decision
can also be made based on the amount of unused space remaining in
memory of the device. While FIG. 30 illustrates a path based
algorithm, as mentioned, other algorithms can be used to predict
what POIs will be of interest as well.
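The age-out decision combining the predicted path and remaining memory can be sketched as a small policy function; the signature and the free-memory threshold are illustrative assumptions.

```python
def should_age_out(poi_region, predicted_regions, free_fraction, min_free=0.2):
    """Decide whether a cached region's POI data (e.g., age out
    candidate 3010 in FIG. 30) should be evicted: keep anything on the
    predicted path, and otherwise evict only when the device's free
    memory fraction drops below a threshold.
    """
    if poi_region in predicted_regions:
        return False  # still on the predicted path: keep it cached
    return free_fraction < min_free
```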
[0146] As noted above, for existing motor vehicle navigation
devices or other conventional portable GPS navigation devices that
do not natively include directional means such as a compass, an
extension slot, card or board can accommodate direction information
from an external directional device. While any of the services
described herein can make web service calls as part of the pointing
and retrieval of endpoint process, limited bandwidth may degrade
the interactive experience. As a result, a limited amount of data
can be predictively maintained on a user's device in cache memory
and optionally aged out as data becomes stale, e.g., when relevance
to the user falls below a threshold.
[0147] As described in various embodiments herein, FIG. 31
illustrates a process for a device when location (e.g., GPS) and
direction (e.g., compass) events occur. Upon the detection of a
location and direction event, at 3100, for POIs in the device's
local cache, a group of POIs are determined that pass an
intersection algorithm for the direction of pointing of the device.
At 3110, POIs in the group can be represented in some fashion on a
UI, e.g., a full view if only one POI, a categorized view, a 2-D
map view, a 3-D perspective view, or user images if the POIs are
other users, etc. The possibilities for representation are
limitless; the embodiments described herein are intuitive based on
the general notion of pointing based direction services.
[0148] At 3120, upon selection of a POI, static content is
determined and any dynamic content is acquired via synchronization.
When new data becomes available, it is downloaded to stay up to
date. At 3130, POI information is filtered further by user specific
information (e.g., if it is the user's first time at the store,
returning customer, loyalty program member, live baseball game
offer for team clothing discounts, etc.). At 3140, static and
dynamic content that is up to date is rendered for the POI. In
addition, updates and/or interaction with POI information is
allowed which can be synced back to the service.
Exemplary Networked and Distributed Environments
[0149] One of ordinary skill in the art can appreciate that the
various embodiments of methods and devices for pointing based
services and related embodiments described herein can be
implemented in connection with any computer or other client or
server device, which can be deployed as part of a computer network
or in a distributed computing environment, and can be connected to
any kind of data store. In this regard, the various embodiments
described herein can be implemented in any computer system or
environment having any number of memory or storage units, and any
number of applications and processes occurring across any number of
storage units. This includes, but is not limited to, an environment
with server computers and client computers deployed in a network
environment or a distributed computing environment, having remote
or local storage.
[0150] FIG. 32 provides a non-limiting schematic diagram of an
exemplary networked or distributed computing environment. The
distributed computing environment comprises computing objects 3210,
3212, etc. and computing objects or devices 3220, 3222, 3224, 3226,
3228, etc., which may include programs, methods, data stores,
programmable logic, etc., as represented by applications 3230,
3232, 3234, 3236, 3238. It can be appreciated that objects 3210,
3212, etc. and computing objects or devices 3220, 3222, 3224, 3226,
3228, etc. may comprise different devices, such as PDAs,
audio/video devices, mobile phones, MP3 players, laptops, etc.
[0151] Each object 3210, 3212, etc. and computing objects or
devices 3220, 3222, 3224, 3226, 3228, etc. can communicate with one
or more other objects 3210, 3212, etc. and computing objects or
devices 3220, 3222, 3224, 3226, 3228, etc. by way of the
communications network 3240, either directly or indirectly. Even
though illustrated as a single element in FIG. 32, network 3240 may
comprise other computing objects and computing devices that provide
services to the system of FIG. 32, and/or may represent multiple
interconnected networks, which are not shown. Each object 3210,
3212, etc. or 3220, 3222, 3224, 3226, 3228, etc. can also contain
an application, such as applications 3230, 3232, 3234, 3236, 3238,
that might make use of an API, or other object, software, firmware
and/or hardware, suitable for communication with or implementation
of the predicted interaction model as provided in accordance with
various embodiments.
[0152] There are a variety of systems, components, and network
configurations that support distributed computing environments. For
example, computing systems can be connected together by wired or
wireless systems, by local networks or widely distributed networks.
Currently, many networks are coupled to the Internet, which
provides an infrastructure for widely distributed computing and
encompasses many different networks, though any network
infrastructure can be used for exemplary communications made
incident to the techniques as described in various embodiments.
[0153] Thus, a host of network topologies and network
infrastructures, such as client/server, peer-to-peer, or hybrid
architectures, can be utilized. In a client/server architecture,
particularly a networked system, a client is usually a computer
that accesses shared network resources provided by another
computer, e.g., a server. In the illustration of FIG. 32, as a
non-limiting example, computers 3220, 3222, 3224, 3226, 3228, etc.
can be thought of as clients and computers 3210, 3212, etc. can be
thought of as servers where servers 3210, 3212, etc. provide data
services, such as receiving data from client computers 3220, 3222,
3224, 3226, 3228, etc., storing of data, processing of data,
transmitting data to client computers 3220, 3222, 3224, 3226, 3228,
etc., although any computer can be considered a client, a server,
or both, depending on the circumstances. Any of these computing
devices may be processing data, or requesting services or tasks
that may implicate the predicted interaction model and related
techniques as described herein for one or more embodiments.
[0154] A server is typically a remote computer system accessible
over a remote or local network, such as the Internet or wireless
network infrastructures. The client process may be active in a
first computer system, and the server process may be active in a
second computer system, communicating with one another over a
communications medium, thus providing distributed functionality and
allowing multiple clients to take advantage of the
information-gathering capabilities of the server. Any software
objects utilized pursuant to the direction based services can be
provided standalone, or distributed across multiple computing
devices or objects.
[0155] In a network environment in which the communications
network/bus 3240 is the Internet, for example, the servers 3210,
3212, etc. can be Web servers with which the clients 3220, 3222,
3224, 3226, 3228, etc. communicate via any of a number of known
protocols, such as the hypertext transfer protocol (HTTP). Servers
3210, 3212, etc. may also serve as clients 3220, 3222, 3224, 3226,
3228, etc., as may be characteristic of a distributed computing
environment.
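As a non-limiting illustration of the client/server exchange described above, the following minimal sketch shows a server answering HTTP requests from a client process, in the manner of servers 3210, 3212, etc. serving clients 3220, 3222, etc. All names, paths, and payloads here are hypothetical and are not drawn from the disclosure:

```python
# Minimal sketch (assumptions labeled): a server process answers HTTP
# requests from a client process, illustrating the client/server data
# services described above. The handler name, URL path, and payload
# are hypothetical.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PointOfInterestHandler(BaseHTTPRequestHandler):
    """Server side: returns a small JSON payload for any GET request."""
    def do_GET(self):
        body = json.dumps({"poi": "example", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # suppress request logging to keep the example quiet

def fetch(url):
    """Client side: request data from the server over HTTP."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Port 0 lets the OS pick a free port; the client reads it back.
    server = HTTPServer(("127.0.0.1", 0), PointOfInterestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    reply = fetch(f"http://127.0.0.1:{server.server_port}/poi")
    print(reply["poi"])  # prints "example"
    server.shutdown()
```

The client and server here run in one process for brevity; as the passage notes, in a distributed environment the same exchange occurs between separate computer systems over a communications medium.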
Exemplary Computing Device
[0156] As mentioned, various embodiments described herein apply to
any device wherein it may be desirable to perform pointing based
services, and predict interactions with points of interest. It
should be understood, therefore, that handheld, portable and other
computing devices and computing objects of all kinds are
contemplated for use in connection with the various embodiments
described herein, i.e., anywhere that a device may request pointing
based services. Accordingly, the general purpose remote
computer described below in FIG. 33 is but one example, and the
embodiments of the subject disclosure may be implemented with any
client having network/bus interoperability and interaction.
[0157] Although not required, any of the embodiments can partly be
implemented via an operating system, for use by a developer of
services for a device or object, and/or included within application
software that operates in connection with the operable
component(s). Software may be described in the general context of
computer-executable instructions, such as program modules, being
executed by one or more computers, such as client workstations,
servers or other devices. Those skilled in the art will appreciate
that network interactions may be practiced with a variety of
computer system configurations and protocols.
[0158] FIG. 33 thus illustrates an example of a suitable computing
system environment 3300 in which one or more of the embodiments may
be implemented, although as made clear above, the computing system
environment 3300 is only one example of a suitable computing
environment and is not intended to suggest any limitation as to the
scope of use or functionality of any of the embodiments. Neither
should the computing environment 3300 be interpreted as having any
dependency or requirement relating to any one or combination of
components illustrated in the exemplary operating environment
3300.
[0159] With reference to FIG. 33, an exemplary remote device for
implementing one or more embodiments herein can include a general
purpose computing device in the form of a handheld computer 3310.
Components of handheld computer 3310 may include, but are not
limited to, a processing unit 3320, a system memory 3330, and a
system bus 3321 that couples various system components including
the system memory to the processing unit 3320.
[0160] Computer 3310 typically includes a variety of computer
readable media, which can be any available media that can be
accessed by computer 3310. The system memory 3330 may include computer
storage media in the form of volatile and/or nonvolatile memory
such as read only memory (ROM) and/or random access memory (RAM).
By way of example, and not limitation, memory 3330 may also include
an operating system, application programs, other program modules,
and program data.
[0161] A user may enter commands and information into the computer
3310 through input devices 3340. A monitor or other type of display
device is also connected to the system bus 3321 via an interface,
such as output interface 3350. In addition to a monitor, computers
may also include other peripheral output devices such as speakers
and a printer, which may be connected through output interface
3350.
[0162] The computer 3310 may operate in a networked or distributed
environment using logical connections to one or more other remote
computers, such as remote computer 3370. The remote computer 3370
may be a personal computer, a server, a router, a network PC, a
peer device or other common network node, or any other remote media
consumption or transmission device, and may include any or all of
the elements described above relative to the computer 3310. The
logical connections depicted in FIG. 33 include a network 3371,
such as a local area network (LAN) or a wide area network (WAN), but may
also include other networks/buses. Such networking environments are
commonplace in homes, offices, enterprise-wide computer networks,
intranets and the Internet.
[0163] As mentioned above, while exemplary embodiments have been
described in connection with various computing devices, networks
and advertising architectures, the underlying concepts may be
applied to any network system and any computing device or system in
which it is desirable to derive information about surrounding
points of interest.
[0164] There are multiple ways of implementing one or more of the
embodiments described herein, e.g., an appropriate API, tool kit,
driver code, operating system, control, standalone or downloadable
software object, etc. which enables applications and services to
use the pointing based services. Embodiments may be contemplated
from the standpoint of an API (or other software object), as well
as from a software or hardware object that provides pointing
platform services in accordance with one or more of the described
embodiments. Various implementations and embodiments described
herein may have aspects that are wholly in hardware, partly in
hardware and partly in software, as well as in software.
[0165] The word "exemplary" is used herein to mean serving as an
example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In
addition, any aspect or design described herein as "exemplary" is
not necessarily to be construed as preferred or advantageous over
other aspects or designs, nor is it meant to preclude equivalent
exemplary structures and techniques known to those of ordinary
skill in the art. Furthermore, to the extent that the terms
"includes," "has," "contains," and other similar words are used in
either the detailed description or the claims, for the avoidance of
doubt, such terms are intended to be inclusive in a manner similar
to the term "comprising" as an open transition word without
precluding any additional or other elements.
[0166] As mentioned, the various techniques described herein may be
implemented in connection with hardware or software or, where
appropriate, with a combination of both. As used herein, the terms
"component," "system" and the like are likewise intended to refer
to a computer-related entity, either hardware, a combination of
hardware and software, software, or software in execution. For
example, a component may be, but is not limited to being, a process
running on a processor, a processor, an object, an executable, a
thread of execution, a program, and/or a computer. By way of
illustration, both an application running on a computer and the
computer can be a component. One or more components may reside
within a process and/or thread of execution and a component may be
localized on one computer and/or distributed between two or more
computers.
[0167] The aforementioned systems have been described with respect
to interaction between several components. It can be appreciated
that such systems and components can include those components or
specified sub-components, some of the specified components or
sub-components, and/or additional components, and according to
various permutations and combinations of the foregoing.
Sub-components can also be implemented as components
communicatively coupled to other components rather than included
within parent components (hierarchical). Additionally, it should be
noted that one or more components may be combined into a single
component providing aggregate functionality or divided into several
separate sub-components, and any one or more middle layers, such as
a management layer, may be provided to communicatively couple to
such sub-components in order to provide integrated functionality.
Any components described herein may also interact with one or more
other components not specifically described herein but generally
known by those of skill in the art.
[0168] In view of the exemplary systems described supra,
methodologies that may be implemented in accordance with the
disclosed subject matter will be better appreciated with reference
to the flowcharts of the various figures. While for purposes of
simplicity of explanation, the methodologies are shown and
described as a series of blocks, it is to be understood and
appreciated that the claimed subject matter is not limited by the
order of the blocks, as some blocks may occur in different orders
and/or concurrently with other blocks from what is depicted and
described herein. Where non-sequential, or branched, flow is
illustrated via flowchart, it can be appreciated that various other
branches, flow paths, and orders of the blocks, may be implemented
which achieve the same or a similar result. Moreover, not all
illustrated blocks may be required to implement the methodologies
described hereinafter.
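The point above, that independent flowchart blocks may execute in a different order or concurrently and still achieve the same result, can be sketched as follows. The two block bodies (determining a location, loading point-of-interest metadata) are hypothetical stand-ins, not steps taken from the disclosure:

```python
# Sketch (hypothetical steps): two independent flowchart blocks
# produce the same end state whether run sequentially or
# concurrently, since neither block depends on the other's output.
import threading

def run_blocks(concurrent):
    results = {}

    def block_a():  # e.g., determine a device location
        results["location"] = (47.6, -122.2)

    def block_b():  # e.g., load point-of-interest metadata
        results["poi"] = "example store"

    if concurrent:
        threads = [threading.Thread(target=block_a),
                   threading.Thread(target=block_b)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    else:
        block_a()
        block_b()
    return results

print(run_blocks(False) == run_blocks(True))  # prints True
```

Blocks with a data dependency between them would, of course, still need to respect that ordering; the flexibility described applies only where no such dependency exists.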
[0169] While the various embodiments have been described in
connection with the preferred embodiments of the various figures,
it is to be understood that other similar embodiments may be used
or modifications and additions may be made to the described
embodiment for performing the same function without deviating
therefrom. Still further, one or more aspects of the above
described embodiments may be implemented in or across a plurality
of processing chips or devices, and storage may similarly be
effected across a plurality of devices. Therefore, the present
invention should not be limited to any single embodiment, but
rather should be construed in breadth and scope in accordance with
the appended claims.
* * * * *