U.S. patent application number 17/333655 was filed with the patent office on 2021-05-28 and published on 2021-11-11 as publication number 20210352435, for apparatus, systems and methods for visually connecting people. The applicant listed for this patent is Flaregun Inc. The invention is credited to Kris Cadle and Keith Crutchfield.
United States Patent Application 20210352435
Kind Code: A1
Inventors: Crutchfield; Keith; et al.
Application Number: 17/333655
Family ID: 1000005726478
Filed: 2021-05-28
Published: 2021-11-11
APPARATUS, SYSTEMS AND METHODS FOR VISUALLY CONNECTING PEOPLE
Abstract
A method for visually finding and interacting with people and places, operable on a computing system including a mobile device comprising a processor, a display, a camera and other sensors. The method includes obtaining the current location of members of a group, provided the group members have similar mobile devices (i.e., having the disclosed app) and have either set to permit their visibility or shared their location, by scanning the surroundings with the camera of the mobile device or by receiving location data from the group members; and, for each location obtained, displaying on the mobile device's display a group member representation associated with the group member's location.
Inventors: Crutchfield; Keith (Los Angeles, CA); Cadle; Kris (Agoura Hills, CA)
Applicant: Flaregun Inc. (Agoura Hills, CA, US)
Family ID: 1000005726478
Appl. No.: 17/333655
Filed: May 28, 2021
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number | Continued by
16377111 | Apr 5, 2019 | 11026046 | 17333655
15665225 | Jul 31, 2017 | 10257649 | 16377111
15134334 | Apr 20, 2016 | 9743244 | 15665225
14251368 | Apr 11, 2014 | 9351118 | 15134334
Current U.S. Class: 1/1
Current CPC Class: H04W 4/08 20130101; H04W 4/026 20130101; G06F 3/04842 20130101; H04W 4/024 20180201; G06F 3/04815 20130101; G06F 3/011 20130101; G06F 3/04847 20130101; G06F 3/0481 20130101; G06F 3/04817 20130101; G06Q 50/01 20130101; H04W 4/023 20130101; G06F 3/167 20130101; H04W 4/029 20180201; H04N 5/23216 20130101; G06F 3/0482 20130101; G06F 3/14 20130101
International Class: H04W 4/02 20060101 H04W004/02; H04W 4/08 20060101 H04W004/08; G06F 3/16 20060101 G06F003/16; G06F 3/0482 20060101 G06F003/0482; G06F 3/0484 20060101 G06F003/0484; G06F 3/14 20060101 G06F003/14; H04N 5/232 20060101 H04N005/232; G06Q 50/00 20060101 G06Q050/00; H04W 4/029 20060101 H04W004/029; G06F 3/0481 20060101 G06F003/0481; H04W 4/024 20060101 H04W004/024; G06F 3/01 20060101 G06F003/01
Claims
1. A method for visually finding and interacting with people and places with minimized network data usage, operable on a computing system including a first and a second device, each device comprising a processor, a display, a camera, a location sensor and an orientation sensor, the method comprising: downloading to the first and second device an application for visually connecting people, adapted to function in a weak network environment by having a set of graphics preloaded with the application; determining the location of the first device by the application; sharing the location of the first device by directly communicating to the second device, with no or minimal involvement of a server associated with the application, a code string comprising location information of the first device which the user of the first device wishes to share with the operator of the second device, and instructions for displaying at least a graphic from the set of preloaded graphics; wherein the at least a graphic from the set of preloaded graphics comprises information associated with the location of the first device.
2. The method of claim 1, wherein the code string is a deep
link.
3. The method of claim 1, wherein the direct communication of the
code string is performed via an SMS application.
4. The method of claim 1, wherein the direct communication of the
code string is performed via visible light that encodes data, thus
requiring no connection to the server.
5. The method of claim 1, wherein the direct communication of the
code string is performed via one of a QR code, a bar code, an image
which encodes data or a sound that encodes data.
6. The method of claim 1, wherein the application is further
configured to allow a financial transaction to be conducted by the
user within the application, including purchasing of commercial
content.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of U.S. Non-Provisional application Ser. No. 16/377,111, filed Apr. 5, 2019, which is a continuation-in-part of, and claims the benefit of, U.S. Non-Provisional application Ser. No. 15/665,225, filed Jul. 31, 2017, now issued as U.S. Pat. No. 10,257,649, which is a continuation-in-part of U.S. Non-Provisional application Ser. No. 15/134,334, filed Apr. 20, 2016, now issued as U.S. Pat. No. 9,743,244, which is a continuation of U.S. Non-Provisional application Ser. No. 14/251,368, filed Apr. 11, 2014, now U.S. Pat. No. 9,351,118, all of which are hereby incorporated by reference to the extent that they do not conflict with the present application.
BACKGROUND OF INVENTION
1. Field of the Invention
[0002] The invention relates generally to mobile technology and
more particularly to an apparatus, system and methods for visually
connecting people in real time.
2. Description of the Related Art
[0003] Oftentimes, people participate in large and crowded events
(e.g., music festivals, sporting events, etc.) or activities that
take place in large spaces such as a mall, theme park, college
campus, and so on. A problem the participants face is getting
lost or separated from the group they are part of (e.g., family,
friends group, etc.). Another problem is that the participants have
limited options of socially interacting with their group while at
these events. Thus, there is a need for a new apparatus, system and
methods for visually connecting people, to solve the above
problems.
[0004] The aspects of the problems and the associated solutions
presented in this section could be or could have been pursued; they
are not necessarily approaches that have been previously conceived
or pursued. Therefore, unless otherwise indicated, it should not be
assumed that any of the approaches presented in this section
qualify as prior art merely by virtue of their presence in this
section of the application.
BRIEF INVENTION SUMMARY
[0005] This Summary is provided to introduce a selection of
concepts in a simplified form that are further described below in
the Detailed Description. This Summary is not intended to identify
key aspects or essential aspects of the claimed subject matter.
Moreover, this Summary is not intended for use as an aid in
determining the scope of the claimed subject matter.
[0006] In an aspect, apparatus, systems and methods are disclosed
herein which are designed to visually connect groups of friends,
families and colleagues in large crowded environments like music
festivals, theme parks, sporting events, conventions, etc. The core
function is achieved through an algorithm of augmented reality, 3D
GPS mapping and other specialized technologies, as disclosed
herein. They allow users to privately view, in real time and 3D
space, the precise location of each friend by displaying a profile
icon on the screen of their mobile device. As such, they enhance the
user's experience at these events for example by reducing the
anxiety of getting separated or lost.
[0007] In another aspect, a method for visually finding and
interacting with people and places is provided, operable on a
computing system including a mobile device comprising a processor,
a display, a camera and other sensors, the method including
obtaining the current location of members of a group, if the group
members have similar mobile devices (i.e., having the disclosed
app) and if they have set to permit their visibility or if they
shared their location, by scanning their surroundings with the
camera of their mobile device or by receiving location data from
the group members; and, for each location obtained, displaying on
the mobile device's display a group member representation
associated with the group member's location. In addition to finding
and locating friends, the method enables finding locations of
interest (e.g., a vendor's location) associated with the current
location of the user, and even making purchases within the app.
[0008] The above aspects or examples and advantages, as well as
other aspects or examples and advantages, will become apparent from
the ensuing description and accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] For exemplification purposes, and not for limitation
purposes, aspects, embodiments or examples of the invention are
illustrated in the figures of the accompanying drawings, in
which:
[0010] FIG. 1 illustrates a diagrammatic view of a system for
visually connecting people, according to an embodiment.
[0011] FIG. 2 illustrates a diagrammatic view of an architectural
example of the system and process for visually connecting people,
as seen on a user's device, according to an embodiment.
[0012] FIG. 3 is a flow chart depicting examples of user steps in a
process for visually connecting with people, according to an
embodiment.
[0013] FIGS. 4-31 illustrate a collection of examples of user
actions that can be performed on a user's mobile apparatus when
using the system and method for visually connecting people,
according to several embodiments.
[0014] FIG. 32 illustrates an example of use of an apparatus and
method for visually connecting people, according to an
embodiment.
[0015] FIGS. 33A-33B illustrate a flow chart depicting steps in
another example of a process for visually connecting with people,
according to an aspect.
[0016] FIG. 33C is a legend for the items illustrated in the flow
chart of FIGS. 33A-33B, according to an aspect.
[0017] FIGS. 33D-33E illustrate a flow chart showing exemplary steps in a
user's experience with the application for visually connecting
people, according to an aspect.
[0018] FIGS. 34-36 illustrate examples of user interfaces that may
be shown during a loading or onboarding sequence of the application
for visually connecting people, according to an aspect.
[0019] FIGS. 37-40 illustrate examples of user interfaces that may
be shown to a user while using the application for visually
connecting people to find friends or group members, according to an
aspect.
[0020] FIGS. 41-43 illustrate examples of user interfaces that may
be shown to a user while using the application for visually
connecting people to view tagged points of interest, according to
an aspect.
[0021] FIGS. 44-47 illustrate examples of user interfaces that may
be presented to the user when using the application for visually
connecting people to view advertisements, according to an
aspect.
[0022] FIG. 48 is a flow chart showing a method of utilizing
preloaded and user-generated content for low data usage, according
to an aspect.
[0023] FIG. 49 is a flow chart showing the process for the
geolocation algorithm of the application for visually connecting
people, according to an aspect.
[0024] FIG. 50 is a flow chart showing the process for tagging
points of interest within the ecosystem for the application for
visually connecting people, according to an aspect.
[0025] FIG. 51 is a flow chart showing an exemplary process for
hardware having a processor for the application for visually
connecting people, according to an aspect.
[0026] FIGS. 52A-52B illustrate a flow chart depicting steps in
another example of a process for visually connecting with people,
according to an aspect.
[0027] FIG. 53 is a flow chart showing an exemplary process for
device-to-device communication via SMS for the application for
visually connecting people, according to an aspect.
[0028] FIG. 54 is a flow chart showing an exemplary process for
device-to-device communication using a QR code for the application
for visually connecting people, according to an aspect.
[0029] FIG. 55 is a flow chart showing an exemplary process for
device-to-device communication using visible light for the
application for visually connecting people, according to an
aspect.
DETAILED DESCRIPTION
[0030] What follows is a description of various aspects,
embodiments and/or examples in which the invention may be
practiced. Reference will be made to the attached drawings, and the
information included in the drawings is part of this detailed
description. The aspects, embodiments and/or examples described
herein are presented for exemplification purposes, and not for
limitation purposes. It should be understood that structural and/or
logical modifications could be made by someone of ordinary skill
in the art without departing from the scope of the invention.
Therefore, the scope of the invention is defined by the
accompanying claims and their equivalents.
[0031] As used herein and throughout this disclosure, the term
"mobile device" refers to any electronic device capable of
communicating across a mobile network. A mobile device may have a
processor, a memory, a transceiver, an input, and an output.
Examples of such devices include cellular telephones, personal
digital assistants (PDAs), portable computers, etc. The memory
stores applications, software, or logic. Examples of processors are
computer processors (processing units), microprocessors, digital
signal processors, controllers and microcontrollers, etc. Examples
of device memories that may comprise logic include RAM (random
access memory), flash memories, ROMS (read-only memories), EPROMS
(erasable programmable read-only memories), and EEPROMS
(electrically erasable programmable read-only memories). A
transceiver includes but is not limited to cellular, GPRS,
Bluetooth, and Wi-Fi transceivers.
[0032] "Logic" as used herein and throughout this disclosure,
refers to any information having the form of instruction signals
and/or data that may be applied to direct the operation of a
processor. Logic may be formed from signals stored in a device
memory. Software is one example of such logic. Logic may also be
embodied in digital and/or analog hardware circuits, for example,
hardware circuits comprising logical AND, OR, XOR, NAND, NOR, and
other logical operations. Logic may be formed from combinations of
software and hardware. On a network, logic may be programmed on a
server, or a complex of servers. A particular logic unit is not
limited to a single logical location on the network.
[0033] Mobile devices communicate with each other and with other
elements via a network, for instance, a cellular network. A
"network" can include broadband wide-area networks, local-area
networks, and personal area networks. Communication across a
network can be packet-based or use radio and frequency/amplitude
modulations using appropriate analog-digital-analog converters and
other elements. Examples of radio networks include GSM, CDMA, Wi-Fi
and BLUETOOTH® networks, with communication being enabled by
transceivers. A network typically includes a plurality of elements
such as servers that host logic for performing tasks on the
network. Servers may be placed at several logical points on the
network. Servers may further be in communication with databases and
can enable communication devices to access the contents of a
database. For instance, an authentication server hosts or is in
communication with a database having authentication information for
users of a mobile network. A "user account" may include several
attributes for a particular user, including a unique identifier of
the mobile device(s) owned by the user, relationships with other
users, call data records, bank account information, etc. A billing
server may host a user account for the user to which value is added
or removed based on the user's usage of services. One of these
services includes mobile payment. In exemplary mobile payment
systems, a user account hosted at a billing server is debited or
credited based upon transactions performed by a user using their
mobile device as a payment method.
[0034] For the following description, it can be assumed that most
correspondingly labeled elements across the figures (e.g., 101 and
401, etc.) possess the same characteristics and are subject to the
same structure and function. If there is a difference between
correspondingly labeled elements that is not pointed out, and this
difference results in a non-corresponding structure or function of
an element for a particular embodiment, example or aspect, then the
conflicting description given for that particular embodiment,
example or aspect shall govern.
[0035] FIG. 1 illustrates a diagrammatic view of a system 100 for
visually connecting people, according to an embodiment. As shown,
the system 100 may include several servers, which may be connected
to a network, such as the internet. A mobile device 101, such as a
smart phone (e.g., iPhone®), may be used by a user to connect
to the system's servers and to perform the functions related to
visually connecting with people, as will be described
hereinafter.
[0036] The server 102 may, for example, be an API server of a
social media site, such as Facebook™, and may be used to enable
the user to log in to the visual connection application
("application") disclosed herein, by using the login credentials
of the user for the respective social media site. An advantage of
using such an application login is that it gives the user the
option of pulling friends lists, photos or other data into the
application. Alternatively, the user may create a login profile and
then login using a visual connection server ("application server")
103, which may be dedicated to run the application's server portion
("server application"), in the form of an application or website,
for example.
[0037] The visual connection application may contain mobile device
logic 105 ("Flaregun app", "mobile device app"), which may be
downloadable by a user into their mobile device 101, and server
logic ("server application"), that will typically be preinstalled
on the visual connection server 103. As shown, the Flaregun app 105
may be configured to communicate with the application server 103
and other servers (102 and 104) to enable the mobile device 101 to
perform the functions related to visually connecting with people,
as will be described hereinafter. In addition, the mobile device
app 105 may communicate with the social media server 102 for the
purposes described above.
[0038] The mobile device app 105 may also communicate with a
location server 104, such as a Google™ Maps API server, to
support the bird's eye view of the mobile device 101.
[0039] The mobile device 101 may be equipped with a GPS module 106 to determine the position (e.g., longitude and latitude) and a gyroscope sensor 107 to determine the orientation (e.g., yaw, pitch, and roll) of the mobile device 101 and its camera (not shown), which may be needed for operation of the application, as will be described in more detail later herein.
[0040] Other sensors may be used in addition to or in combination
with the gyroscope and the GPS module, such as an accelerometer or
a compass, to determine the position and/or orientation of the
mobile device 101.
[0041] The mobile device app 105 may use various buttons to enable
a user to operate the application and employ its functions. Some
example buttons are shown in FIG. 1: "Login", "Finder", "Groups",
"Messages" and "Save Location." Other example buttons will be shown
and described when referring to the subsequent figures.
[0042] A "Finder" button for example may be used to launch the
scope mode, the function and purpose of which will be described
later herein. The functions associated with "Groups" and "Messages"
buttons will also be described later herein when referring to some
of the subsequent figures.
[0043] A "Save Location" button may be used typically in bird's eye
view, when a graphic pin or icon may be displayed on a map (e.g.,
Google™ Maps, provided by location server 104) representing the
current location/position of the user. When actuating this button,
the current location of the user is saved on the map. This function
may be useful in several instances. For example, a user may save a
current location when parking a car in a large parking lot. Later,
when returning to the car, the user could easily find the parked
car by walking in the direction of the previously saved location
pin or icon. The application may be configured to give the option
to the user to name/label the saved location (e.g., "Car Location")
by typing the respective name in a text box. Further, more than one
location may be saved and labeled.
[0044] An augmented reality (AR) module is preferably built into
the application and may be downloaded with the mobile device app
105 and may be configured to run on the mobile device 101.
[0045] Preferably, the application server 103 stores all user
groups that were created by different users. For every user, the
user's latest position (e.g., GPS location) may be stored on
application server 103, preferably at login. The user's GPS location is then preferably updated periodically (e.g., once per second) by the user's mobile device 101. Each time the user's mobile device's GPS 106 notifies the application of a location update, the location data is preferably sent to and stored on the application server 103.
When a user goes into the group section ("Groups" or "My Groups")
of the mobile device app 105, and then selects a group from the
shown group list (see FIGS. 5-6), a list of users/members in that
group is preferably loaded into the user's mobile device 101, with
their most recent coordinates (e.g., latitude and longitude). Those
group users/members are preferably shown in scope mode using the
augmented reality module (see for example FIG. 32).
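For illustration only (the disclosure does not specify a wire format), the periodic update described above could be sketched in Python as a simple loop that posts the latest GPS fix to the application server about once per second; the endpoint URL, path and payload fields here are hypothetical:

    import time
    import requests

    SERVER = "https://flaregun.example/api"  # placeholder server URL

    def publish_location(user_id, get_gps_fix, interval_s=1.0):
        """Send this device's latest GPS fix to the application server
        roughly once per second, as described for the update loop."""
        while True:  # runs for as long as the user shares her location
            lat, lon = get_gps_fix()  # supplied by the device's GPS module 106
            requests.post(f"{SERVER}/users/{user_id}/location",
                          json={"lat": lat, "lon": lon, "ts": time.time()})
            time.sleep(interval_s)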
[0046] Even if those group users/members are not in the visual proximity of the user and of her mobile device 101 (as seen in FIG. 32, for example), because they are many miles away or because, if they are nearby, an obstruction blocks the view (e.g., a wall, a building, a hill, etc.), the information about those group members/users may still be displayed on the user's mobile device in scope when scanning around, from left to right for example. Such information may include the group member's name, photograph, current location and/or distance. For example, if the user of the mobile device 101 is in Orange County, California, and sets her mobile device in scope mode, the group members/users (e.g., the user's friends) will preferably still show up in scope view when the mobile device is pointed in the direction where the respective friend is at that time (e.g., when the mobile device is pointed in the direction of Los Angeles, this information may be shown: John Doe, Los Angeles, 25 miles, or, John Doe, Los Angeles, 34.0500° N, 118.2500° W, 25 miles). Thus, the user's mobile device's camera may not need to actually see the physical location of the group member (e.g., the actual street in Los Angeles), such as when the group member is far away; it needs only to be pointed in that direction, and location and other information about that group member may still be displayed in scope. Similarly, the same group members/users and information about them may also be seen in the bird's eye view, depending on the view range the user has set.
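The distance and direction in the example above can be computed from two GPS fixes alone. The disclosure does not give a formula, so the following Python sketch uses the standard haversine great-circle distance and initial-bearing calculation; the function names are ours:

    import math

    def distance_and_bearing(lat1, lon1, lat2, lon2):
        """Great-circle distance (miles) and initial bearing (degrees,
        0 = north) from point 1 to point 2, via the haversine formula."""
        r_miles = 3958.8  # mean Earth radius in miles
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = (math.sin(dlat / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
        distance = 2 * r_miles * math.asin(math.sqrt(a))
        y = math.sin(dlon) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
        bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
        return distance, bearing

    # Example: device in Orange County (33.79, -117.85), friend in Los
    # Angeles (34.05, -118.25) -> about 29 miles at a bearing near 307
    # degrees (northwest); the icon appears when the camera points there.
    d, b = distance_and_bearing(33.79, -117.85, 34.05, -118.25)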
[0047] Thus, the users who wish to use the application, including the users who wish to be shown/located in scope view, would typically need to establish a server connection (via the internet, for example), be logged into the application and have the GPS function 106 enabled on their mobile device 101. However, alternative technologies, such as multipeer technology, may potentially be used to achieve similar results.
[0048] Preferably, the data about the groups, users, their
location, etc., is stored on application server 103 and then loaded
by the user mobile devices 101 that need it.
[0049] FIG. 2 illustrates a diagrammatic view of an architectural
example of the system and process for visually connecting people,
as seen on a user's device (i.e., user interface), according to an
embodiment. As shown, a user may be provided on her mobile device
201 with a main landing screen 209, which may include various
application buttons such as "My Groups", "Contacts", "Messages",
"My Profile", "Settings" and a "More . . . " button, to reveal
additional buttons when activated. Other buttons may be placed on
the main landing screen as well, such as "Invites", to show the
number of invitations sent or received to join groups.
[0050] When the user selects/activates (e.g., by pressing, touching
or swiping) the "My Groups" button, a groups' screen 209a will
preferably open, on which a list of groups the user has created or
is part of will be displayed.
[0051] Next, when the user selects a group (e.g., Group 1) from the
group list, a screen 218 for that group will preferably open,
listing all group members of that group and giving the user several
options. One of such options may be to add members 223 to that
group, which may open a contacts screen 223a, on which the user may
add group members from his mobile device's contacts or from a
social media site such as Facebook™. Similarly, the contacts
screen 223a may also be accessed via the "Contacts" button on the
main landing screen 209.
[0052] As shown, from the groups' screen 209a, the user may select
an "All Messages" button 215 to access the groups' messages screen
209b, which may list information such as the number of messages
(not shown) exchanged within each group. The groups' messages
screen may also be accessed from the main screen 209 by selecting
the "Messages" button 211b. From the groups' messages screen 209b,
the user may select a group (e.g., Group 1) to view the messages
exchanged 219ab within that group, and/or to send messages to the
group by typing them into a message box 219a and then selecting a
"Send" button 219b.
[0053] From the main screen 209, a user may also select the "My
Profile" button 211d to view and edit her profile (e.g., name, age,
preferences, etc.) on a profile screen 409d.
[0054] When on the scope screen 232, a user may select a
representation of a group member 230 (e.g., an icon) showing on the
scope screen, to enlarge that group user's container 234, in order
to view additional information about that user and/or to access
optional interaction methods with that group member, such as by
text messaging 236.
[0055] It should be noted that a universal application "Home"
button 212 is preferably provided on all screens, other than the
main screen, to enable the user to return to the main screen 209 at
any time.
[0056] FIG. 3 is a flow chart depicting examples of user steps in a
process for visually connecting with people, according to an
embodiment. As shown, a user may start the application by
activating a launch button (step 250). Next, after a main landing
page/screen of the application loads (step 251), the user may be
presented with several options on how to proceed. For example, the
user may be offered the option to first view a movie, run an
animation (step 252) of the application, or go through a tutorial
(step 253), each of which may be designed to educate the user on
how the application works, and how to access its various features,
such as scope, bird's eye view, friends finder, group video chat,
and the other application features described in this disclosure. On
the main landing screen, as well as on other application screens, a
"More . . . " button may be offered to the user (step 254), which
when selected may reveal various buttons such as the ones shown at
255. These buttons' functions are self-explanatory or are described
hereinafter when referring to FIGS. 4-31.
[0057] Of particular importance may be the "Set Scope Range" button
shown at 255. Using this button's function, users can set the range
(e.g., 0.5, 1, 5, 10, or 50 miles radius) of their accessibility
(i.e., accessibility/range to find friends or range of their
visibility to others, or both), which may be limited to the close
range of an avenue for example, or broadened to a range of miles. A
numerical indicator under each user icon visible in scope may
indicate the approximate distance away. Users may also get an alert
if they are within range of certain blocked or listed
individuals.
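By way of a non-limiting sketch, such a scope-range filter might be implemented as below, assuming each member record carries its latest latitude/longitude (the field names are illustrative, not from the disclosure):

    from math import asin, cos, radians, sin, sqrt

    def miles_between(lat1, lon1, lat2, lon2):
        """Haversine great-circle distance in miles."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 3958.8 * asin(sqrt(a))

    def members_in_range(user_lat, user_lon, members, range_miles):
        """Keep only members inside the user-selected scope range and
        attach the approximate distance shown under each icon."""
        visible = []
        for m in members:
            d = miles_between(user_lat, user_lon, m["lat"], m["lon"])
            if d <= range_miles:
                visible.append({**m, "distance_miles": round(d, 1)})
        return visible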
[0058] Similarly, a bird's eye range may be set.
[0059] Also, the shown "Social Nets" button may be offered to allow
the user to, for example, view two social network sites on a split
screen and/or to post to such sites.
[0060] After the user lands on the main screen, the user may be asked to log in (step 256) using the login credentials of one of the user's social media sites (e.g., Facebook™), to log in by email, or to create login credentials in the application itself. Next, the user may be offered the option to add contacts (e.g., friends, family members, professional colleagues, etc.) to the application, from the user's social media sites (e.g., friends from Facebook™), or from the contacts stored on the user's mobile device, for example (step 256).
[0061] Next, the user may be presented with the option to set the user's privacy preferences (step 257). The user may, for example, choose to be discoverable/visible by all application users (i.e., the "Public" option in step 258), only by the user's contacts (i.e., the "All Contacts" option in step 258), or only by members of private groups (i.e., the "Private" option in step 258) that the user created or is part of. Thus, users have complete control over who can view their location. Users can manage their visibility, limiting it to private invitation-only groups, Facebook™ friends for example, and/or all application users, which can be filtered by, for example, Facebook™ interests or specific event invitations. Or, users can disappear from sight at the touch of a button (step 260).
[0062] Next, the user may create groups or choose groups (step 259)
to interact with, as will be described in more detail
hereinafter, when referring to FIGS. 4-32.
[0063] Next, the user may launch the scope (step 261), view group
members' location in scope (step 266) and interact with groups and
groups' members, such as described later when referring to FIGS.
22-32. By turning on the scope, the user can find her friends
without saying a word. The user may then select (step 263) a group
member's profile icon appearing in scope, to view (step 264) that
group member's profile (e.g., name, age, etc) and/or to start (step
265) a text, audio, or video chat, or to exchange media (e.g.,
pictures) with that group member.
[0064] Application users may privately share photos, videos and/or
comments with selected groups, person-to-person within a group, or
across social networks of their choice. In addition to sharing on
social media servers, users can choose to share content
peer-to-peer via, for example, Bluetooth LE. As an example, the sharing may be similar to SnapChat™, but only the sender and receiver keep the photo/video on their devices. In this mode, no content is stored on servers.
[0065] Users using the application may also be able to post status
updates (step 264) to multiple social networks directly from the
application. The application may be configured to provide a split
screen, horizontal layout, for the users to view two social sites
of their choice simultaneously.
[0066] When in scope mode (step 266), the user of the application
may freeze the screen (step 262) by pressing a "Hold" or "Pause"
button. Freezing the screen may be useful so the user can easily touch
a group member's icon appearing in scope, to initiate contact
without chasing them on screen.
[0067] From the scope screen, the user may navigate to the group
message screen (step 270), on which the user can view that group's
messages and/or send messages to that group. The user may also
choose to navigate to a "Choose Group" screen (step 269) to select
a different group or to add a group to see in scope and interact
with.
[0068] In scope, users can easily scan (e.g., left or right) and
view the location of other application groups/users, in real time
and 3D space on the screen of their handheld device via augmented
reality. A compass may live in the top right corner of the screen
to indicate all group members' location relative to each other
and/or the direction the camera points to (e.g., north, east,
etc.).
[0069] From scope mode, or at any time after the application is
launched, a user may be allowed to flip/tilt down (step 267) her
mobile device for bird's eye view, and optionally, save locations
(step 268) showing up on the displayed map. For example, the user
may want to save her location when parking her car, when next to a
tent or a preferred location, for easy finding later.
[0070] In scope mode, bird's eye mode, or at any time after
launching the application, the user may be permitted to press a
button to disappear (step 260), such that she is not
discoverable/visible by the other application users.
[0071] The user may also add groups to create, be part of or interact with, as described in more detail hereinafter when referring to FIGS. 4-32. The user may invite/create and manage various private groups or individuals to view in scope mode. Invitation recipients may get a text and/or an appropriate welcoming email message. Creating a group is like inviting friends to your party and makes you the host. You control who is in each group. It is like hitting a reset for your Facebook™ friends, paring them down into manageable groups.
[0072] A group admin can create an on-screen ticker scroll message specific to each group. Thus, a festival command center, for example, would preferably be able to override this message on all devices in the event of an emergency.
[0073] FIGS. 4-31 illustrate a collection of examples of user
actions that can be performed on a user's mobile apparatus when
using the system and method for visually connecting people,
according to several embodiments. FIG. 4 shows a mobile device 401
displaying the application's main menu screen 409. As shown,
several main buttons 411 may be provided, for the user to activate
by tapping, swiping, touching, pressing or the like. Additional
application buttons, such as "Log Out", "FAQ" or the like may be
revealed by activating the "More . . . " button 413. As shown, the
main menu screen 409 may also display additional data 414, such as
how many groups were created by the user and/or the user is part
of, how many unread messages were received, and so on. The main
menu screen 409, as well as several other application screens
described below, may also display a main menu or home button 412,
which the user may activate to conveniently return to the main menu
screen, when so desired.
[0074] Referring now to FIGS. 5-7, a user may select "My Groups"
411a to view, on a group list screen 409a, a list 411aa of all
groups the user has created or is part of. As shown, the list 411aa
may include the names of the groups and the number of members in
each group. On the group list screen 409a, the user may be provided
with several options, such as to view all messages 415 sent
and/or received from all groups, send a message 415a to a
particular group from the list, add additional members 416 to a
particular group or create a new group 417. Additionally, the user
may select a group 411ab (FIG. 6) to view that group's screen 418
(FIG. 7).
[0075] When on a group's screen 418, a group member list 418a may
be displayed. The group member list 418a may include the names of
the group members and a photograph of each user. As indicated
earlier, when a user logs in to the application using her Facebook™ credentials, the user's name and/or photograph may be retrieved from her Facebook™ page. When on a group's screen 418, a user
may add friends 423 to the group, start a video chat 424 with a
group member, send a message 422 to a group member, delete a member
421 from the group (if the user is the one who created the group),
view or send group messages 419 or remove this group 420 from his
group list.
[0076] When the user selects to add friends 423 (FIG. 8), the application may be configured to open a screen on which the user may select friends to add from his mobile device's contact list, or on which the user may be given the option to select friends to add from his Facebook™ list of friends, or the like, as shown in FIG. 9. Similarly, a user may be allowed to add contacts to her application contacts ("Contacts" in FIG. 4).
[0077] Referring now to FIGS. 10-11, when the user selects "Group
Messages" 419 (FIG. 10), the application may be configured to open
up that group's messages screen (FIG. 11) displaying the messages
exchanged by the group members. On the same screen, the user may
tap an input field 419a, which causes the Apple™ OS keyboard to appear so that the user can type his message in the text box 419a.
Next, the user may hit a "Send" button 419b to post the message to
the entire group. Next, the user may swipe or tap a "Back" button
419c, to return to Group screen (FIG. 10).
[0078] Referring now to FIGS. 12-13, it is shown that the user may
select "Remove Group" 420 to delete the current group. Typically, a
user may delete only the groups she created or if she is an
administrator/operator of the application. Otherwise, by deleting a
group, the user would be leaving the group and unable to view the
other group members. An alert popup 420a (FIG. 13), asking for
confirmation, may also be displayed to the user.
[0079] From the main menu screen 409 (FIG. 14), the user may select
"Messages" 411b to open an "All Messages" screen 409b (FIG. 15). On
that screen, the user may be shown the total number of messages
409ba associated with each group. The user may also select a group
to view group's messages and/or send messages to that group as
described earlier when referring to FIG. 11.
[0080] Referring now to FIGS. 16-21, it is shown that on the main
menu screen 409, the user may select "My Profile" 411d, to view
and/or edit her information on a profile screen 409d. The main
profile data 409da, as shown, may include a user's photograph,
name, age and/or data about her location. The user may, for
example, tap the main profile data 409da to view and edit the
respective profile information. From the profile screen 409d (FIG.
17), for example, the user may also be provided with the option to
access other profile settings such as her privacy settings 409db
and preferences 409dc. The user may for example tap to view and/or
edit privacy settings (see 409db in FIG. 18; see also FIG. 19). As
shown in FIG. 19, privacy settings options may include "Private
Groups," "All Contacts," and "All Application/Flaregun Users". The
user may also tap to view and edit preferences 409dc (FIGS. 20-21).
User may type keywords or phrases into the input field to add
personal preferences or may delete existing preferences. Keywords
and phrases should typically be separated by a comma.
[0081] FIG. 22 shows the application in scope view/mode on the
user's mobile device 401. After launching the application, the
scope may be activated in various ways, such as by tapping a
"Finder" button (see FIG. 1), taking the mobile device's camera out
of the bird's eye view, or by pointing the camera at the
surroundings as shown in FIG. 22. A purpose of the scope mode may
be to visually connect with friends, family members, professional
colleagues, work colleagues, and the like, who typically will be
members of the application groups created by the user, or groups
the user is part of (see 411aa in FIG. 6 for example). These real
time visual connection features provide not only, for example, a better social media experience, but various other benefits as well, such as meeting, locating, and/or finding a
person (e.g., friend or family member), and/or their mobile
devices, in large spaces (e.g., a mall, college campus, show
venues, ski resorts, airports, etc.), and/or in large crowds, such
as at a concert (see FIG. 32).
[0082] The application may be configured to allow the user to scan
the crowd or the large space, and the augmented reality module may
overlay a representation 430/FIG. 22 (e.g., icon or photograph,
plus name, location and/or distance) of each group member over a
point in the scope image 432 having the position coordinates (e.g.,
latitude and longitude) corresponding with the position coordinates
of the group member's mobile device. Again, as described earlier
when referring to FIG. 1, the determination of the location of a
group member in the scope image 432 may be facilitated by the
mobile device's 401 (FIG. 22) gyroscope and GPS module (see FIG. 1)
which may provide the data necessary to determine what the user
mobile device's 401 camera is looking at (i.e., by knowing the
orientation and location/position of the user mobile device 401 and
thus its camera). In addition, as mentioned earlier, the group
members' 430 position may be known from the GPS coordinates
supplied by group members' mobile devices to the application server
103 (FIG. 1).
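Although the disclosure does not detail the overlay math, one simple way to place a member's representation horizontally on the screen, given the device heading (from the gyroscope/compass) and the bearing toward the member (from the GPS coordinates), is the following sketch; the field-of-view and screen-width values are assumptions:

    def screen_x(bearing_to_member, device_heading, fov_deg=60, screen_w_px=1170):
        """Map the angle between the camera heading and the member's bearing
        onto a horizontal pixel position; None means off-screen."""
        delta = (bearing_to_member - device_heading + 180) % 360 - 180
        half = fov_deg / 2
        if abs(delta) > half:
            return None  # member is outside the camera's view; user must pan
        return int((delta + half) / fov_deg * screen_w_px)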
[0083] Thus, for example, the user holding her mobile device 401 in
her hand, in scope view, may choose to walk toward a particular
group member appearing in scope in order to meet that group member
in person. Or, as another example, within the application, the user
could start a social media interaction (e.g., text messages, video
or audio chat, etc.) with one or more group members (including
simultaneously) appearing in scope.
[0084] As suggested in FIG. 22, some of the user representations
430 may be faded out and/or a size scaling effect may be used on
them to correlate with the group member's distance. The group
member that is the furthest away will preferably have the smallest
(and/or most faded) representation (e.g., icon) and the closest
will have the largest (and/or less faded), with preferably at least
three size points in between, to indicate depth of field. Other
similar graphical effects may also be used.
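A minimal sketch of such distance-based scaling, quantized to five discrete size points (so at least three sizes lie between the largest and smallest); the distance bounds and scale range are illustrative assumptions:

    def icon_scale(distance_m, d_min=50.0, d_max=5000.0,
                   s_max=1.0, s_min=0.4, steps=5):
        """Return an icon scale factor: nearest members get the largest
        icons, with a few discrete steps to suggest depth of field."""
        d = min(max(distance_m, d_min), d_max)
        t = (d - d_min) / (d_max - d_min)            # 0 = closest, 1 = farthest
        step = round(t * (steps - 1)) / (steps - 1)  # quantize to size points
        return s_max - step * (s_max - s_min)

The same step value could also drive the fade (alpha) of the representation.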
[0085] As shown in FIG. 22, when in scope view, a user may tap a
pause button 431 to freeze the screen (FIG. 23). Freezing the
screen may be useful such as when the user may want to start an
interaction with a group member showing up in scope as the freezing
may stop any jumping that may come from the user's camera movement.
For example, tapping a group member's icon 433 (see FIG. 24) may
cause the expansion of that group member's container 434 (FIG. 25),
revealing several options, such as to send messages, start a video
chat, or the like. After the container 434 is expanded, tapping, for example, a "Face" button (see FIG. 25) may connect the user of the mobile device 401 with the group member from the container 434 via a video chat application such as Apple™ FaceTime™. As another example, tapping a "Chat" button in the
container 434 (see FIG. 25) may cause a connection via text
message. Further, tapping anywhere else (see FIG. 26) on the
expanded container 434 may take the user to that group member's
profile, to view more information about that user, such as her
preferences, and see additional options (see FIG. 27). Selecting
"Send a message" 435 on that user's profile screen may open a chat
dialog screen 436 for the user and that group member (peer-to-peer
messaging; see FIG. 28).
[0086] It should be understood that, for example, the
representations 430 (FIG. 22) of the group members appearing in
scope may include a real-time video, so real time video chat may be
held simultaneously in scope view with, for example, all or some of
the group members appearing in scope. Similarly, same real-time
video chat may be held in bird's eye view as well. These inventive
aspects may even further augment the benefits of the application
described herein.
[0087] A "Settings" button may also be provided on the main menu
screen (see FIG. 4 for example). By tapping "Settings" a user may
change application settings such as the range of scope view,
general account data or privacy settings.
[0088] Referring now to FIGS. 29-31, when in scope mode, the user
may be provided with the option to swipe left 437a (FIG. 29) to
reveal the current Group screen (as seen in FIG. 7) or to swipe
right 437b (FIG. 30) to return to the Scope screen. Again, as
mentioned earlier in this description, in scope view as well,
selecting the application logo/button 412 (FIG. 31) will take the
user to the Main Menu (as seen in FIG. 4). This button lives on
most of the application's screens.
[0089] FIG. 32 illustrates an example of use of an apparatus and
method for visually connecting people, according to an embodiment.
As shown, a user may point her mobile device 401 in scope view to a
large crowd (e.g., at a concert) to locate her friends and/or start
one or more of the interactions described herein with one or more
of the friends appearing in scope.
[0090] FIGS. 33A-33B illustrate a flow chart depicting steps in
another example of a process for visually connecting with people,
according to an aspect. The system may be implemented by an
application on any mobile device having GPS or other similar sensor
features, internet access, and a front-facing camera (step 3380).
The application may also run a process for GPS mapping and use of
geolocation algorithms ("MapTag"). For MapTag, a 3D render engine
may be implemented. The mobile device (as described when referring
to FIG. 1) may be used to launch the application for visually
connecting with people. Next, the application may run a start
sequence (step 3381). The start sequence may request required
device permissions (step 3382). If the permissions are not
authorized ("No") the application may return to the application
initialization (step 3380). If the permissions are authorized by
the user ("Yes"), the application may access the mobile device's
location services and camera functions, or any other necessary
functions (step 3383). Next, the application may check if it is a
first run, to determine whether the application software is being
run for the first time, or if data for the application has
previously been stored (step 3384). If no, the application may load
data from the server (which may be referred to as a "Flaregun
server"), such as security and encryption data (step 3387) and
reload an existing unique group number ("UGN") (step 3388). Next,
the application may load application version data and information
(step 3390) from the server.
[0091] If yes, the application may procedurally create the
following exemplary data: a unique group number ("UGN") (step
3385), and a unique user ID ("unique ID," "unique Flaregun User
ID," of "Flaregun user number") (step 3386). The unique user ID and
the UGN may be created from a set of random numbers and may be
checked for duplicates. A directory with the UGN name may be
created, and the device may be allowed access to that UGN. UGNs may
be assigned programmatically when the user first begins to use the
application.
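A minimal sketch of this procedural creation, assuming the set of already-issued numbers stands in for the server-side duplicate check (the digit count is an illustrative assumption):

    import random

    def new_unique_number(existing, digits=8):
        """Create a UGN or unique user ID from random digits,
        retrying until the candidate is not a duplicate."""
        while True:
            candidate = "".join(random.choices("0123456789", k=digits))
            if candidate not in existing:  # duplicate check
                existing.add(candidate)
                return candidate

    # Example: a fresh UGN would also get a server directory named
    # after it, to which the creating device is granted access.
    ugn = new_unique_number(set())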
[0092] Next, the application may store data on the user's device
(step 3389), such as the user ID, the UGN, the user's phone number,
GPS data, and the time of day at runtime. Next, the application may
load data from the server, such as security and encryption data
(step 3387) and application version data and information (step
3390).
[0093] Next, the application may proceed to a load sequence (step
3391). The load sequence (step 3391) may enable the transfer of
data between a device running the application software (which may
be referred to as a "Flaregun device") and the server, and may
initiate an update loop. The load sequence may load the following
data from the server: application ("Flaregun") version information,
application security and encryption data, metrics profile, and past
user data and settings. The application version information may
enable a mode of data access that may be tied to a specific or
particular version of application software, and may be used for
customizing the application software based on factors such as
geographic locations, and other methods of changing user
experience, using version information to control content of the
application. The metrics profile may be stored as a data file, and
may direct the software to begin collecting specific metrics.
Metrics variables may be used for the collection of any software
usage information, device behavior, location data, and other sensor
data. Past user data may include past UGNs and vendor data, and
settings information may include blocked user lists, and other user
interface data such as user preferences.
[0094] The load sequence (step 3391) may load GPS diagnostics and
start the device's GPS functions (step 3392), activate the device
camera (step 3393), and load sensor diagnostics (step 3394) and
server data (step 3395).
[0095] When starting the device's GPS functions (step 3392), the
application may enable the device's location services, and check if
the GPS is current. If yes, the application may begin sending GPS
data to the server and update the current group number with the
location of the user. If GPS functions are not current, the
application may send a notification to the user to enable location
services on the device, or may correct the problem automatically
and next return to the load sequence. The camera activate step
(step 3393) may send the camera feed to the background processes of
the application, and enable an augmented content layer onto the
camera feed. The load sequence may also scan the current
application vendor directories and the user data files inside the
UGN directory for data (to be further discussed when referring to
steps 3341 and 3342). If the GPS telemetry is set to yes, the
application may load the user's GPS variables into the
application's MapTag functions. MapTag may then be used to
translate GPS into 3D coordinates, and locate the user inside of
the Flaregun ecosystem. Next, translation from GPS to Cartesian
coordinates may occur, and this process may include but not be
limited to Ellipsoidal and Spherical conversions of World Geodetic
System 1984 (WGS84). The user's GPS data may be sent to the server,
and update intervals may be dynamically optimized based on the
device performance. The GPS data may be converted to degrees,
minutes, and seconds for display and for data logging purposes. The
metrics variables may also be updated with the user's GPS data.
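The ellipsoidal WGS84 conversion and the degrees-minutes-seconds display mentioned above can be illustrated with the standard formulas; the constants are the published WGS84 values, while the function names are ours:

    import math

    A = 6378137.0            # WGS84 semi-major axis, meters
    F = 1 / 298.257223563    # WGS84 flattening
    E2 = F * (2 - F)         # first eccentricity squared

    def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
        """Convert latitude/longitude/height to Earth-centered
        Cartesian (x, y, z) coordinates in meters."""
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
        x = (n + h) * math.cos(lat) * math.cos(lon)
        y = (n + h) * math.cos(lat) * math.sin(lon)
        z = (n * (1 - E2) + h) * math.sin(lat)
        return x, y, z

    def to_dms(deg):
        """Decimal degrees -> (degrees, minutes, seconds) for display/logging."""
        sign = -1 if deg < 0 else 1
        deg = abs(deg)
        d = int(deg)
        m = int((deg - d) * 60)
        s = ((deg - d) * 60 - m) * 60
        return sign * d, m, round(s, 2)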
[0096] Loading of server data (step 3395) may include scanning for
user data files inside of the UGN directory, and loading each found
user's data file from any user in the UGN directory, and scanning
current application vendor directories. The step may also include
loading default and custom icons for communication, text, video,
audio, and uploading photos and sharing data between users inside
the UGN. This may be used for adding communication functionality to
the Flaregun application ecosystem. Metrics data may also be
loaded. This step may include loading the list of applicable
metrics variables which are to be enabled when the user is
accessing the application. Event profiles, such as a list of timed
events which may affect augmented content, may also be loaded.
Geolocation profiles may also be loaded, which may include a list
of directions for functionality based on specific geolocations, and
may include information from development kit ("dev kit") data files
from any user in the UGN directory. This may also allow the
application to display augmented data on the device screen, based
on each data file. The augmented data may include visible icons
representing data, data points that are geo-located within the
Flaregun application ecosystem, friend locations from users in a
specific UGN, and vendor content locations from the applicable
vendor directory.
[0097] Dev kits may be standalone client-side software, and may be
run as a standalone application to provide a means of controlling
augmented data content to be used with the application for visually
connecting with people. The dev kits may create data files on the
server based on the user interface design of each dev kit, and
there may be an unlimited number of dev kits. Visibility of dev kit
content may be controlled by the application for geolocation
("MapTag"), and may be limited or unlimited to all users in all
UGNs.
[0098] Next, the application may send user data and information to
the server (step 3396). The user data may include GPS and other
positioning information, and updates to metrics variables based on
the programs being used by the device. Metrics data may be a
placeholder variable for any device-related data that could be sent
through the Flaregun system, and may be aggregated for improvements
to the software, and for providing metrics that may be of use to
clients. This data may include device location over time,
overheating and other performance issues, proximity to sensors
which may work with the software, and any other data being
collected by the Flaregun platform and sent to the server. Such
metrics data (step 3398) and any other user data (step 3396) may
both be sent to the server (step 3399).
[0099] The application may also check for internet connectivity and
check internet diagnostics (step 3346) and if yes, the application
may communicate with the server (step 3399) to send or update the
user data file. The update rate may be based on optimum device
speed, and the network performance threshold may be enabled. Next,
metrics variables and data may be communicated to (step 3398) and
stored on the server (step 3399). Without internet access (checked
for in step 3346), the application may send a notification to the
user, and may load offline content. Offline content may include
past experience data, tags, location information, and any data that
was previously downloaded as offline content. Offline content may
also include data stored by the user that does not require an
internet connection, such as the user's current location for
returning to a location, and sharing locations with others through
SMS.
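For the SMS-shareable location data described here (and the "code string" of claim 1), a deep link is one natural encoding. The scheme and parameter names below are hypothetical; the point is that the receiver needs only the coordinates and an index into the graphics already preloaded with the application, so no server round trip is required:

    from urllib.parse import urlencode, urlparse, parse_qs

    def encode_location_link(lat, lon, graphic_id):
        """Pack a shared location and a preloaded-graphic index into a
        compact code string suitable for sending over SMS."""
        return "flaregun://share?" + urlencode(
            {"lat": f"{lat:.5f}", "lon": f"{lon:.5f}", "g": graphic_id})

    def decode_location_link(link):
        """Recover the coordinates and the preloaded graphic to display."""
        q = parse_qs(urlparse(link).query)
        return float(q["lat"][0]), float(q["lon"][0]), int(q["g"][0])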
[0100] Next, the application may create and store data from the
server (as shown by step 3399 of FIG. 33A) into a UGN directory
(step 3341). UGN directories may contain the group of users which
will remain visible to each other. Multiple UGNs may be combined
into the same user experience, and UGNs may be sharable with other
users who wish to join that particular group and become visible to
other users of that same UGN directory. Users may also have the
ability to share their UGN with others through text messaging, or
any other means.
[0101] The application may also create and store data from the
server into a vendor directory (step 3342). Vendor directories may
correspond to a dev kit, and the data files in a vendor directory
may be visible to all or a select number of users in a single UGN,
or a plurality of UGNs, such that a user may be both visible to
other users of a UGN, and view the other users within that UGN, as
well as being able to view members of the other UGNs. The UGN
numbers may be used repeatedly, may be stored in the user data
directories of the application, and may provide a way to distribute
a group identity to the application user experience through text,
email, voice call, and so on, such that anyone having the UGN may
then be visible to the group, regardless of any other social
network connectivity. Vendor directories may be used to send data
from a dev kit to all or some of the users in a Flaregun
application ecosystem. Vendor directories may be used to add
content to the ecosystem without the need for updating the
software.
[0102] The vendor directories (step 3342) may process security and
encryption data (step 3346) for processing graphical
representations of augmented content data ("tags"). The data for
tags may be referred to as "TagAR" (step 3347) and may be stored on
the hard disk of the user's device (step 3348) (to be further
discussed when referring to FIG. 50).
[0103] The vendor directories (step 3342) may also next communicate
with another process for geolocation. Again, the application may
also run a process for GPS mapping and use of geolocation
algorithms ("MapTag") (step 3343) (to be further discussed when
referring to FIG. 49). For MapTag, a 3D render engine may be
implemented (step 3345). The application may also make use of
graphical representations of augmented content data (which may be
referred to as "tags"). An advantage may be that less network usage
may be needed and the speed of the device may be improved, and when
user movement is detected, the GPS rate may be automatically reset
to top speed to maintain best possible performance across the
network.
[0104] The MapTag (step 3343) may process metrics data (step 3398)
and next create augmented content for the application display (step
3344). The augmented content may be graphic or audio
representations of data from the server, and may include but not be
limited to 3D models, 2D billboarded images, text, and procedurally
created sounds and graphics. The data represented by the augmented
content may include friend locations and information about all
users inside a UGN, and allow the ability to hide or block users,
and information about additional UGNs, and allow the ability to
search for nearby UGNs and allow the user to add the other UGNs.
The data may also include vendor locations, and may display all
data files inside a vendor directory, and may dynamically update
data from a dev kit. The augmented content may display content
preloaded into the Flaregun application on a device based on data
from the vendor directory data files. The data files may also be
used to create procedural content. Such procedural content may be
generated by using only the data from the data files and no
preloaded or downloaded graphics, which may reduce the need for
downloaded content and thus reduce network usage. Other data
represented by the augmented content may include any content
already existing on the device which can be manipulated in the
render of the application display, such as photos, videos, and
audio recordings.
[0105] The augmented content may also include display content based
on an outside dev kit accessed by the application. Multiple dev
kits may be used to implement augmented content into the
application. Dev kit content may be downloaded from the Flaregun
servers at any time, and may be used to implement real time and/or
scheduled changes to the Flaregun application environment without
the need for updating the software. The dev kit may, again, be used
as standalone client-side software, may be used as a web-based
control panel, or as a sensor-based network. The dev kit may create
data files on the server based on the client side user interfaces.
The visibility of dev kit content may be limited or unlimited to
all users in all UGNs.
[0106] Next, the 3D render engine may be implemented by the
application (step 3345). The 3D render engine may update the user
position data, such as GPS and other positions, and orientation
data, as well as any available visual and image recognition based
location data from the device running the application. The 3D
render engine may determine the best accuracy of the device and may
procedurally adapt GPS to safe speeds, which may allow for future
improvement of the device to gain performance while running the
application. The 3D render engine may also determine the battery
status of the user's device, and may send the user a notification
if the battery is too low for the application to function. The
general movement of the user may be determined, and the rate of GPS
updates may be slowed down when movement is recognized as inactive,
such that power consumption of the device may be optimized. Again,
an advantage may be that less network usage may be needed than, for
example, needed for standard text messaging, and the speed of the
device may be improved, and when user movement is detected, the GPS
rate may be automatically reset to top speed to maintain best
possible performance across the network. The GPS data may be
combined with sensor data from the device to control the
orientation, position, and scale, to create the appearance of any
augmented data existing in the physical space being captured on the
device display. Time of day modes may also be programmed into each
dev kit to control independent versions of the same experience
based on a user's location, time of day, or any other similar
factors. The 3D render engine may also determine the device's
position and implement gyroscope sensors and align the GPS heading
to the device sensors. The 3D render engine may also parse
augmented content data files for GPS data, and load each data file
into MapTag in order to display the appropriate image or sound, and
translate GPS data into XYZ coordinates. Next, it may locate the
position of augmented content in the Flaregun ecosystem based on
the position data in the data file. The 3D render engine may
control the activation of network content, hyperlinks, SMS links,
and other connectivity from the augmented data content with the
internet.
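By way of illustration only, the following Python sketch shows one
way the movement-based GPS rate adjustment described above might be
structured; the class name, intervals and movement threshold are
assumptions and are not taken from the disclosure.

    # Illustrative sketch (not the disclosed implementation): adapt the
    # GPS update interval to detected user movement, as described above.
    import math

    class GpsRateController:
        """Slows GPS polling while inactive; resets to top speed on movement."""

        def __init__(self, fast_interval_s=1.0, slow_interval_s=30.0,
                     movement_threshold_m=3.0):
            self.fast = fast_interval_s       # best possible update rate
            self.slow = slow_interval_s       # power-saving rate while inactive
            self.threshold = movement_threshold_m
            self.interval = fast_interval_s
            self.last_fix = None              # (lat, lon) of previous reading

        @staticmethod
        def _distance_m(a, b):
            # Equirectangular approximation; adequate for short distances.
            lat1, lon1 = map(math.radians, a)
            lat2, lon2 = map(math.radians, b)
            x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
            y = lat2 - lat1
            return math.hypot(x, y) * 6371000.0

        def on_fix(self, lat, lon):
            fix = (lat, lon)
            if self.last_fix is not None:
                moved = self._distance_m(self.last_fix, fix)
                # Movement detected: reset GPS rate to top speed.
                # Inactive: back off to conserve power and network usage.
                self.interval = self.fast if moved > self.threshold else self.slow
            self.last_fix = fix
            return self.interval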
[0107] The 3D render engine may be used for either a general user
device user interface (step 3372), which may be accessed freely by
any user with a compatible device to find friends and access map
tags, or the 3D render engine may be used for an administrator user
interface (step 3371, "TagAR"). The TagAR user interface may be
accessed, for example, by vendors, advertisers, or any other users
who wish to purchase ad space or tag locations in the Flaregun
ecosystem and may do so through a secondary portal such as a web
portal. The administrators or other users of the TagAR application
may also add tags or advertisements by using the augmented reality
features of the Flaregun application, and view their environment
through a mobile device camera in order to add tags or
advertisements. As an example, a vendor or merchant may access the
Flaregun ecosystem through TagAR, and may manually add
advertisements or points of interest.
[0108] Next, the 3D render engine may connect with an optional
Flaregun WEBGL (Web Graphics Library) in order to provide preloaded
data, images, graphics, or any other suitable content to the user
(step 3347). Use of preloaded data may limit the need for accessing
network data to provide content to the user, for example.
[0109] The server-side code may include the following exemplary
scripts. FlareVerse.php may store user data in a unique data file
inside the UGN directory, and may update the data file based on
current user activity. Scandir.php may scan the UGN directory and
load data onto the user's device, and may scan the Flaregun vendor
directory and load the data onto the user's device. CreateValid.php
may procedurally create a valid UGN directory. Delete.php may
delete a selected tag from the render engine, and may allow the
application to continue to block a single user, and remove a tag
from the environment using a dev kit. Deleteall.php may be used to
delete all tags inside a UGN or vendor directory.
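The following Python sketch is a hypothetical equivalent of the
directory-based server behavior described above; the disclosure
names PHP scripts, and the paths, function names and JSON file
format here are illustrative assumptions only.

    # Hypothetical sketch of the described server-side behavior.
    import json
    from pathlib import Path

    ROOT = Path("/var/flaregun")  # assumed server data root

    def flareverse(ugn: str, user_id: str, data: dict) -> None:
        """Like FlareVerse.php: store/update a user data file in the UGN directory."""
        ugn_dir = ROOT / "ugn" / ugn
        ugn_dir.mkdir(parents=True, exist_ok=True)
        (ugn_dir / f"{user_id}.json").write_text(json.dumps(data))

    def scandir(ugn: str) -> list[dict]:
        """Like Scandir.php: scan the UGN directory and return all user data files."""
        ugn_dir = ROOT / "ugn" / ugn
        return [json.loads(p.read_text()) for p in sorted(ugn_dir.glob("*.json"))]

    def deleteall(ugn: str) -> None:
        """Like Deleteall.php: delete all tags/data files inside a UGN directory."""
        for p in (ROOT / "ugn" / ugn).glob("*.json"):
            p.unlink()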
[0110] FIG. 33C is a legend for the items illustrated in the flow
chart of FIGS. 33A-33B, according to an aspect. As shown, various
shapes are used to represent user interfaces, decisions, predefined
processes, displays, terminators, databases, hard disks, data,
stored data, and internal storage in the flow chart.
[0111] FIGS. 33D-33E is a flow chart showing exemplary steps in a
user's experience with the application for visually connecting
people, according to an aspect. First, a user may launch the
application (step 33111). Next, the application may be loaded on
the user's device (step 33112). Next, the user may be prompted by
the application to enter their name (step 33113). An example of the
user interface that may be presented to the user is shown in step
33114. Next, a unique group number may be randomly generated by the
application and assigned to the user (step 33115). Next, the user
may be taken to the application home page (step 33116). An example
of the user interface for a home page that may be presented to the
user is shown in step 33117. The home page may provide the user
with various options. The user may request a new group number (step
33118). The user may also invite other users to join their group
(step 33119). This may be done by the user sharing their group
number with other individuals through, for example, standard text
messaging such as SMS, or email, or any other suitable means. The
user may also join any other group by entering the unique group
number (step 33120). Next, the user may start the group viewer step
33121) by, for example, pressing a button on a mobile device touch
screen (33122). Next, the user may use the application for visually
locating group members (step 33123), by using the application's
augmented reality features, device sensors, and device camera. An
example of the user interface presented to the user for visually
locating group members is shown in step 33124.
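As an illustration of the group-number flow above, the following
Python sketch generates a random group number and checks an entered
number; the six-digit format and the function names are assumptions.

    # Minimal sketch of the group-number flow; format is assumed.
    import secrets

    def generate_group_number() -> str:
        """Randomly generate a unique group number to assign to the user."""
        return f"{secrets.randbelow(1_000_000):06d}"

    def join_group(entered_number: str, known_groups: set[str]) -> bool:
        """Join any other group by entering its unique group number."""
        # True: user becomes visible to that group; False: no such group.
        return entered_number in known_groups

    group = generate_group_number()   # e.g. "042917", shared via SMS or email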
[0112] FIGS. 34-36 illustrate examples of user interfaces that may
be shown during a loading or onboarding sequence of the application
for visually connecting people, according to an aspect. First, a
user may be shown a loading page (FIG. 34). Next, a user may be
shown a prompt to enter their name (FIG. 35). Next, a user may be
shown a home page with options for proceeding with a unique group
number assigned to them (FIG. 36), as described when referring to
FIGS. 33D-33E.
[0113] FIGS. 37-40 illustrate examples of user interfaces that may
be shown to a user while using the application for visually
connecting people to find friends or group members, according to an
aspect. The user may be shown a smaller visual representation
("tag") 37125 of a friend or group member for an individual that is
far away (FIG. 37, and a larger tag 38125 for an individual that is
close by (FIG. 38). Users may assign certain preloaded images of
graphics to individuals, which may be bundled or preloaded with the
downloading of the application to the mobile device, without the
need for loading of or linking to external profile pictures. When
searching for another user, the application may find the GPS or
coordinates of the user, and the code of the application may be
designed such that the system points to the assigned graphic and
name, and does not need to communicate with an external server or
API in order to point to a string of data representing the found
user. An advantage may be that the string may need a very low
amount of data, such as approximately 13-18 bytes, for example. As
an example of a comparison, text messaging with approximately 140
characters may use approximately 125 KB of data. Again, the user
may first download the application to their mobile device, and the
application may contain all necessary code for running the
application. Light code such as for pointing to another found
user's location may be provided within the application.
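The following Python sketch illustrates the idea of such a low-byte
found-user string: a location and an index into the preloaded
graphics are packed into a few bytes, so the receiver can point to
the assigned graphic and name without a server round trip. The exact
field layout of the disclosed 13-18 byte string is not specified and
the format below is an assumption.

    # Sketch of a compact found-user payload; field layout is assumed.
    import struct

    def encode_found_user(lat: float, lon: float, graphic_index: int) -> bytes:
        # 4-byte signed ints at 1e-6 degree resolution, 1-byte graphic id.
        return struct.pack(">iiB", int(lat * 1e6), int(lon * 1e6), graphic_index)

    def decode_found_user(payload: bytes) -> tuple[float, float, int]:
        lat_i, lon_i, graphic_index = struct.unpack(">iiB", payload)
        return lat_i / 1e6, lon_i / 1e6, graphic_index

    payload = encode_found_user(34.1443, -118.7815, 7)   # 9 bytes total
    # The receiver points to preloaded graphic #7; no server or API call needed.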
[0114] FIGS. 41-43 illustrate examples of user interfaces that may
be shown to a user while using the application for visually
connecting people to view tagged points of interest, according to
an aspect. The user may be shown points of interest through the
camera viewfinder with the application's augmented reality
features, and be displayed as visual indicators ("tags," or "Map
tags") such as 42126. The user may be able to use the application
to walk towards a selected point of interest, such as restrooms,
restaurants, tourist attractions, ATMs, and so on.
[0115] FIGS. 44-47 illustrate examples of user interfaces that may
be presented to the user when using the application for visually
connecting people to view advertisements, according to an aspect.
Similar to points of interest described when referring to FIGS.
41-43, advertisements may be marked or tagged in the augmented
reality of the application ecosystem and shown by visual indicators
and may be referred to as tags, or Ad Tags. Viewing an environment
through the camera of the user's mobile device with the
application's augmented reality features incorporated may show such
tagged advertisements. A vendor, advertiser, or merchant may access
the Flaregun ecosystem through a partner application ("TagAR") in
order to tag or mark areas with advertisements. The tagged
locations may be associated with the advertised product, for
example. A third party device may also be used for adding or
inserting content to the AR space. As an example, any device with a
processor, GPS, and network connectivity may be used by a third
party with administrative access, to add data to the Flaregun user
environment. Adding data to the system may be unavailable for
general users without administrative access or capabilities. As an
example, users accessing the Flaregun ecosystem for free may be
able to view all content, but may not have the ability to add
content, and users with paid access may be able to add and view
content.
[0116] FIG. 48 is a flow chart showing a method of utilizing
preloaded and user-generated content for low data usage, according
to an aspect. Preloaded and user-generated content as well as the
application's location and tag data may be used for displaying
content in the application without the need for transmitting the
content through a web server. By coding a system of identification
into the Flaregun platform, a small amount of data may be sent to a
device instructing it to find a specific image on a user's device,
or image embedded in the Flaregun application, and position it in
virtual space and render it accordingly, using, for example, a 3D
render engine. By reducing the need to upload graphic images to the
server, average tag data rate of transfer may be lowered by a
significant amount, as compared to transferring tag data through a
server and to another device. Data transfer may be reduced by
approximately 98% (plus or minus) by replacing a user avatar or downloaded image
with a preloaded image displayed in real time through the coded
system. The graphics may be embedded in the application and may
come preloaded when the user downloads the application to their
mobile device, or the user may create an image to be stored in the
application. The user may activate the application for visually
connecting people ("Flaregun") and a geolocation algorithm
("MapTag") to be discussed further when referring to FIG. 49) may
run, and the user's location data may be encoded with graphics data
to be used when displayed on other receiving devices. As MapTag
continues to run, location data including graphics data may be
received. MapTag may then use the graphics data to change the
surface appearance of the model or artwork of the tag in focus. For
device-to-device communication, the following exemplary process may
take place. First, graphics codes may be downloaded. One set
("graphic code 2," step 48130) may be downloaded to a first device
(step 48131), and another set ("graphic code 1," step 48140) may be
downloaded to a second device (step 48141). The graphic content may
be indexed by code (step 48135, step 48143), and there may next be
an optional step of manual input, wherein the user may create
graphic data with the device or by loading from the network (step
48134, step 48142). Next, the user graphic data may be added to the
preloaded Flaregun graphic content library (step 48136, step 48145)
and stored as part of the library (step 48137, step 48144). This
optionally created data may also be indexed by code (step 48135,
step 48143). The graphics codes may also be received from the
server (step 48139). The data may then be saved to the user's
devices (step 48131, step 48141). Graphic code 1 and graphic code 2
may then be sent to the server (step 48138, step 48146) for storage. The
user's devices may then receive graphic codes from the server and
may afterwards skip the step of communicating with the server, and
instead point to downloaded graphic code on the device when the
graphic code is needed or retrieved. The device may use a graphical
code translator (step 48132, step 48147) and then, on the device,
display the graphic content from the preloaded content based on the
downloaded code (step 48133, step 48148).
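The following Python sketch illustrates the graphical code
translator concept of FIG. 48: a short code received from another
device or the server is resolved against the preloaded (or
user-extended) graphic content library, so no image needs to be
transferred. The library contents and code formats are illustrative
assumptions.

    # Sketch of the graphical code translator; library contents assumed.
    PRELOADED_LIBRARY = {
        "G001": "assets/flare_red.png",      # shipped with the application
        "G002": "assets/flare_blue.png",
    }
    USER_LIBRARY = {}                        # graphics the user created locally

    def add_user_graphic(code: str, path: str) -> None:
        """Optional manual input: index user-created graphic data by code."""
        USER_LIBRARY[code] = path

    def translate_graphic_code(code: str):
        """Point to the preloaded/stored graphic rather than downloading it."""
        return USER_LIBRARY.get(code) or PRELOADED_LIBRARY.get(code)

    add_user_graphic("U100", "user/selfie_marker.png")
    print(translate_graphic_code("G002"))    # assets/flare_blue.png, no network use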
[0117] FIG. 49 is a flow chart showing the process for the
geolocation algorithm of the application for visually connecting
people ("MapTag"), according to an aspect. The geolocation
algorithm MapTag may be designed to transmit and receive data to a
server or to another device specifically related to translating GPS
positioning data into virtual space coordinates. The process may
run on any suitable device with a processor, GPS sensor, and
network access. First, MapTag may activate a real time render
engine, which may be designed as known in the art. MapTag may also
activate the device's GPS sensors, and find the telemetry (step
49152), and may use a plurality of GPS satellites to do so (step
49149). The device may begin GPS features and simulate GPS (step
49153) and check whether the GPS is calibrated (step 49154). If
not, the process may begin again to restart GPS features (step
49153) and calibrate the GPS. If GPS is calibrated, then the user's
GPS location may be updated (step 49155). Next, the algorithm may
begin looping on the device or other hardware having a Flaregun
processor (step 49156). As an example, altitude and latitude and
longitude in WGS84 coordinates may be used (step 49157). These may
next be converted to degrees, minutes, and seconds (step 49160) and
saved to the device (step 49156). The current status of the GPS
data may be checked, and may be parsed into a float variable or
float data (step 49161). This may next be converted into a location
data string for use or storage on the Flaregun server (step 49158).
Next, the data may be saved to the device (step 49156). Next, the
virtual position of the user's point of view may be moved in real
time to the corresponding position based on the location data.
Next, the location data may be uploaded (step 49166) to the
Flaregun server (step 49167). Next, user profile data may be found,
and the corresponding tag data based on the found user profile may
be downloaded, and the tag data may be translated into a 3-axis
orientation system (step 49159), and a 3-axis location may be
generated. Next, the virtual position of the tag may be moved in
real time to that location, by moving the position of the tag to
the coordinates provided by the float data (step 49162). Next, the
heading data from the device sensor may be calibrated and the
distance of the tag in the real world may be calculated (step
49164), and the virtual position of the user's point of view may be
moved to the rotation angle of the sensor data by setting the tag
pitch, yaw, and roll of the position (step 49163). Next, the tag
may be rendered over a background camera or virtual environment,
and the rendering may be based on the application design use (step
49165), and may be viewed on the user's device (step 49156).
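As an illustration of the coordinate handling in FIG. 49, the
following Python sketch converts decimal WGS84 degrees to degrees,
minutes and seconds for display, and parses a reading into float
data formatted as a location string; the string format is an
assumption.

    # Sketch of the DMS conversion and float parsing described above.
    def to_dms(decimal_degrees: float) -> tuple[int, int, float]:
        sign = -1 if decimal_degrees < 0 else 1
        value = abs(decimal_degrees)
        degrees = int(value)
        minutes = int((value - degrees) * 60)
        seconds = (value - degrees - minutes / 60) * 3600
        return sign * degrees, minutes, seconds

    def to_location_string(lat: float, lon: float, alt: float) -> str:
        # Parse the GPS reading into float data, then into a server string.
        return f"{float(lat):.6f},{float(lon):.6f},{float(alt):.1f}"

    print(to_dms(34.144265))            # approximately (34, 8, 39.35)
    print(to_location_string(34.144265, -118.781479, 281.0))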
[0118] FIG. 50 is a flow chart showing the process for tagging
points of interest within the ecosystem for the application for
visually finding people, according to an aspect. A partner
application ("TagAR") accessible by vendors, merchants, and
advertisers ("client") may be used for tagging points of interest
or adding advertisements to the augmented reality ecosystem, for
example. The client device (step 50169) may be used for accessing
and providing client data (step 50171) for the MapTag system for
geolocation algorithms (step 50172). Tags used for marking or
showing advertisements on certain geographical locations may be
created by the client such that new content can be added to the
ecosystem, and may be viewable by users of the application at those
locations (step 50170) after the client sends the data to the
server (step 50173). The user may access the application through
their user device (step 50174) and be provided with MapTag data
(step 50175) and client data preloaded in the Flaregun application
(step 50176), which may be received from the server or may be
preloaded when the user downloads the application to their mobile
device. The marked tags may then be viewable to the user when using
the application ("FindAR," "Flaregun," or "application for visually
connecting people," step 50177).
[0119] FIG. 51 is a flow chart showing an exemplary process for
hardware having a processor for the application for visually
connecting people ("Flaregun processor"), according to an aspect. A
hardware transponder may, for example, be linked to the system for
visually connecting people, and may allow for geolocation
capabilities from an embedded chip. As an example, a wristband or
any other similar wearable object may be used for holding the
embedded chip. Similar to the geolocation algorithm being run on a
mobile device, the process may be run with any hardware having the
Flaregun processor. Again, the geolocation algorithm MapTag may be
designed to transmit and receive data to a server or to another
device specifically related to translating GPS positioning data
into virtual space coordinates. First, MapTag may activate a real
time render engine, which may be designed as known in the art.
MapTag may also activate the device's GPS sensors, and find the
telemetry (step 51152), and may use a plurality of GPS satellites
to do so (step 51149). The device may begin GPS features and
simulate GPS (step 51153) and check whether the GPS is calibrated
(step 51154). If not, the process may begin again to restart GPS
features (step 51153) and calibrate the GPS. If GPS is calibrated,
then the user's GPS location may be updated (step 51155). Next, the
algorithm may begin looping (step 51183) on the hardware having a
Flaregun processor (step 51182). Geolocation data of the hardware
may be sent to the server (step 51184) and may be stored on the
server (step 51185). The server may run processes to check where
the location tag has occurred or where it is viewable (step 51189)
and may then make the tag accessible or viewable through various
platforms, such as, for example, a general free application for
users ("FindAR," "Flaregun," "Flaregun FindAR," step 51190), or a
client viewer such as a web portal used by clients to access the
Flaregun ecosystem (step 51191), or a client application ("TagAR,"
"Flaregun TagAR," step 51192).
[0120] FIGS. 52A-52B illustrate a flow chart depicting steps in
another example of a process for visually connecting with people,
according to an aspect.
[0121] The process depicted in FIGS. 52A-B is similar to the
process described above when referring to FIGS. 33A-B. However,
there are several significant differences and enhancements present
in the process depicted in FIGS. 52A-B, as will be apparent from
the ensuing description referring to FIGS. 52A-B.
Start Sequence
[0122] As shown in FIGS. 52A-B, in an example, a User Device 5280
having the Flaregun software/app, may start a sequence (step 5281)
by requesting permission (step 5282) from the respective device
(i.e., the User Device 5280). The user of the User Device 5280 must
authorize access (step 5283) to, for example, location services and
camera functions of the User Device 5280. The Flaregun application
("application," "software," "app") may determine the GPS signal
strength and bearing and may collect any other location data from
all available sources based on the capabilities of the User Device
5280. The application may also localize user position based on
which data sources the positioning algorithm determines are the
most accurate, by taking several readings from each and comparing
the data with preset rules. This can also include data collected
from, for example, the device camera by means of landmark and
feature tracking algorithms. User position may also be updated based on the
highest accuracy rating from all available data sources and the
Flaregun software may determine which source is most accurate. The
software can also combine and analyze this data and extrapolate the
user position.
[0123] In an example, the positioning algorithm of the Flaregun
software may run GPS data through a smoothing method, along with
additional forms of data (including camera recognition, RF data and
others), in, for example, a weighted summation method using a
neural network (i.e., the Flaregun machine learning layer 51), to
achieve a finer positioning scale.
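By way of illustration, the following Python sketch shows a simple
weighted summation of several position sources (see also the
smoothing sketch later in this description); the fixed weights
stand in for whatever weighting the machine learning layer would
learn, and all values are assumptions.

    # Minimal sketch of weighted-summation position fusion.
    def fuse_position(readings):
        """readings: list of ((lat, lon), weight) from the available sources."""
        total = sum(w for _, w in readings)
        lat = sum(p[0] * w for p, w in readings) / total
        lon = sum(p[1] * w for p, w in readings) / total
        return lat, lon

    fused = fuse_position([
        ((34.14431, -118.78150), 0.5),   # GPS, moderate confidence
        ((34.14428, -118.78147), 0.3),   # camera landmark tracking
        ((34.14437, -118.78155), 0.2),   # RF / beacon data
    ])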
[0124] In another step, the application may check (step 5284) to
see if this is the first time the application has run on the
installed device (i.e., User Device 5280). If the answer is (Yes),
the application may require the user to input user data (step
5289a), such as user name and phone number. The user name may be
used for visual recognition by users of other devices (e.g.,
Another Device 50) having the application installed, and the phone
number may be used to create a unique ID and also to return SMS and
text messages. If the answer is (No), the application may proceed
with the Load Sequence.
Load Sequence
[0125] The Load Sequence may include loading saved data (i.e.,
steps 5288a, 5289c, 5290) into the Flaregun software from the User
Device 5280. Such saved data may include all previously stored
location data including data that has been saved by the user, data
that other users have shared and accepted and/or data that has been
included in a Flaregun software update. The previously saved data
may be loaded from the stored memory on the user's device 5280,
from the Flaregun server 5299 and/or from external datasets, such
as bar codes, QR codes, and so on.
[0126] The data that has been saved by the user may include all
saved preferences and programmatic data, saved locations, saved
text, saved associated media content, saved dynamic links and URLs,
any dynamic link data from incoming applink request(s), user's name
(for sharing with others, for example) and/or user's phone number
(for sharing with others, for example). The data from incoming
applink requests can be specific data from dynamic links sent to
the user by other users, not just the links themselves.
[0127] The data that other users have shared and accepted may
include other users' position data, points of interest shared by
other users and/or associated media shared by other users.
[0128] The data that has been included in a Flaregun software
update may include sponsored content ("Ad Units") and/or
geo-location data based on user's preset selections in Flaregun app
preferences.
[0129] Another step in the Load Sequence may be loading of data
from the Flaregun server (step 5295), which may include site
specific content associated with the user's position, advertising
and other content predetermined by Flaregun based on the user's
previous behavior, usage and/or location, and, calibration signal
data to increase accuracy of the user's position and the position of
other users and for other functions of Flaregun application.
[0130] Another step in the Load Sequence may be the storing of
additional data on the Server, which may include a unique Flaregun
User ID (e.g., user's phone number), GPS data (step 5292), metrics
variables (step 5298) and/or dynamic link or deep link URLs
uploaded to the Flaregun ecosystem by users.
[0131] An important improvement in the process depicted in FIGS.
52A-B is the direct communication (see 52a-52g) between user
devices, i.e., User Device 5280 and Another User Device 50 in this
example, enabled by the Flaregun application installed in both
devices. The data sent from the first device may be a string of
variables, which the receiving Flaregun app may use to configure
its content or as instructions to automatically download additional
content or interact with content existing on the receiving device.
As an example, if the User Device 5280 sends its location and the
receiving device (e.g., Another User Device 50) is at another
specific location, the receiving app will load the User Device's
location data and may also load graphic image data and display an
image on the receiving device from that data directly or load
instructions which may trigger other interactions (e.g., open
another app, open a browser and show specific content, display
images that are already in receiving Flaregun app, etc.).
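The following Python sketch illustrates a receiving-side handler for
such a string of variables; the field names, separator and dispatch
rules are assumptions, and the helper functions are stubs standing
in for app behavior.

    # Sketch of parsing a device-to-device variable string; format assumed.
    def handle_incoming(payload: str):
        """Parse a string of variables and configure content or trigger actions."""
        fields = dict(pair.split("=", 1) for pair in payload.split(";"))
        if "lat" in fields and "lon" in fields:
            show_location(float(fields["lat"]), float(fields["lon"]))
        if "img" in fields:
            display_preloaded_image(fields["img"])   # image already on device
        if "url" in fields:
            open_browser(fields["url"])              # trigger other interactions

    def show_location(lat, lon): ...
    def display_preloaded_image(code): ...
    def open_browser(url): ...

    handle_incoming("lat=34.14431;lon=-118.78150;img=G002")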
[0132] The direct communication may occur without the server's 5299
involvement or with very little involvement requiring very little
data traffic, which is advantageous particularly in areas with poor
Internet connectivity, such as remote locations (e.g., mountains).
In an example, the existing SMS module (step 52a) of both devices
can be used by the Flaregun app to transfer data directly from one
device to another. For example, a deep link may be text messaged
for the user to click on and see the data. It should be noted that
while the deep link approach may require connection to the server
5299, the data traffic required between the user device and the
server is very low since the actual data intended to be seen by the
receiving user is passed directly from device to device and/or
already existing in the receiving device.
[0133] The SMS module may also be used without implicating a deep
link approach requiring connection to server 5299. For example, the
receiving user could simply copy and paste the text messaged link
into the Flaregun app, so that the Flaregun app knows what data,
already part of the Flaregun app of the receiving device, to
display.
[0134] Thus, it should be noted that, in an example, some data used
for visually displaying a user's position may already exist in the
Flaregun application, and thus the data sent from device to device
simply instructs the receiving Flaregun app what to display.
However, some data, such as profile pictures and AR markers, may be
created in the sending device, sent to the receiving device, and
displayed as a visual representation of the sending device's
location and/or other data.
[0135] As another example, an image (step 52e) could be used to
encode data that is needed by the receiving device in order to
display the intended data to the receiving user. Such an image
(e.g., a .png image) can, for example, be printed on a flyer and then the
receiving device could scan it so that the Flaregun app knows what
data to display. In another example, the image can be emailed by
the sending device (e.g., User Device 5280) to the receiving device
(e.g., Another Device 50), which, again, would require very little
network traffic, and the receiving user could then copy and paste
the image in the Flaregun app.
[0136] In an example, the email links, like any other links sent
via the methods described herein, can also be dynamic or deep links
and not require any copy or paste action; a simple click, when
using a compatible device, such as a mobile phone, may open the
Flaregun app and populate the data automatically.
[0137] FIG. 53 is a flow chart showing an exemplary process for
device-to-device communication via SMS for the application for
visually connecting people, according to an aspect. It should be
noted that this method requires limited network connection.
[0138] FIG. 54 is a flow chart showing an exemplary process for
device-to-device communication using a QR code for the application
for visually connecting people, according to an aspect. It should
be noted that this method requires no network connection.
[0139] FIG. 55 is a flow chart showing an exemplary process for
device-to-device communication using visible light for the
application for visually connecting people, according to an aspect.
It should be noted that this method also requires no network
connection.
[0140] Another step in the Load Sequence may be the enabling of the
Flaregun render engine. The Flaregun software requires a
physics-based real-time render engine.
[0141] Another step in the Load Sequence may be the check of
available device sensors (Step 5294a). This may include enabling
gyroscope, accelerometer, compass, and/or other sensors, checking
device performance and setting optimum performance settings and/or
adding sensor data to Metrics variable 5298 to be stored on the
server 5299.
[0142] Another step in the Load Sequence may be (Step 5293) the
activation of the device's camera, which may include sending camera
feed to background, and/or enabling of the Augmented Content layer
5244.
[0143] Another step in the Load Sequence may be the check for
Internet access (Step 5246). If Internet access is available,
operations such as updating the user data file and/or loading
advertising content and other media into the user's device 5280 may
be performed by the Flaregun application.
[0144] Updating the user data file may include updating the data
update rate based on optimum user device's speed, enabling the
network performance threshold and/or storing metrics variables on
server 5299. Loading advertising content and other media may
include checking the most current ad unit and/or populating the ad
unit based on the user's metrics variables.
[0145] If Internet access is not available a notification of such
may be sent to the user. Also, content may be loaded offline.
Offline content may include past experience data, tags, location
info, and any other data that was previously downloaded as offline
content, as well as data stored by the user device, which does not
require Internet access, such as current location marking for
returning to a location and sharing locations with others through
SMS.
[0146] Another step in the Load Sequence may be to start the
localization and mapping data algorithm (step 5243). In one
example, the algorithm simultaneously localizes the input from the
camera (uses the camera data to generate a physical position in
space) and maps this position by collecting multiple positions and
their spatial relationships, generating a physical representation of
the digital data. This step may augment GPS data for a more
accurate location augmentation and for helping to develop a map of
specific features of the surroundings for use later. This step may
include serializing localization and mapping data to enable saving
and sharing of unique feature detection by the Flaregun
algorithm.
[0147] Another step in the Load Sequence may be to enable GPS
service on user device 5280. If GPS service is enabled, the
Flaregun application may determine and set maximum GPS accuracy per
device.
[0148] If GPS service is not enabled, the Flaregun application may
send a notification to user to enable GPS service on the user
device 5280.
[0149] If GPS telemetry is enabled, the Flaregun application may
load user's GPS variables into MapTag, filter GPS through Flaregun
GPS smoothing algorithm, use MapTag to translate GPS into 3D
coordinates and locate the user in the Flaregun ecosystem without
requiring an outside API. This operation may include GPS
translation to Cartesian coordinates including by, for example,
ellipsoidal and spherical conversions of WGS84 and other forms of
GPS and augmented GPS systems. In an example, GPS data can be
combined with other natural image tracking and feature recognition
data to improve actual and perceived positional accuracy of the
user and user device 5280. This operation may also include sending
user's GPS data from the user device 5280 to the server 5299.
However, this is not required for core functionality of the
Flaregun application, as described herein. This means that the
Flaregun application can locate the user and perform its core
functionality without sending the user's data to the Flaregun
server. However, it may send the user's data to the server if
additional functionality requires it, to enhance the user's
experience, or to improve the core functionality performance. If
the GPS data is sent, the update intervals may be dynamically
optimized based on the device's detected performance.
[0150] The GPS data may be converted to degrees, minutes and
seconds for display and for data logging purposes. Metrics variable
may be updated with user's GPS data and user's position may be made
available for sharing with other users via Flaregun coordinate
system.
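As an illustration of the ellipsoidal conversion mentioned above,
the following Python sketch translates WGS84 geodetic coordinates
to Earth-centered Cartesian (ECEF) coordinates using the standard
WGS84 constants; the disclosed MapTag translation may differ in its
details.

    # Sketch of a WGS84 geodetic-to-Cartesian (ECEF) conversion.
    import math

    A = 6378137.0                 # WGS84 semi-major axis (m)
    F = 1 / 298.257223563         # WGS84 flattening
    E2 = F * (2 - F)              # first eccentricity squared

    def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius
        x = (n + alt_m) * math.cos(lat) * math.cos(lon)
        y = (n + alt_m) * math.cos(lat) * math.sin(lon)
        z = (n * (1 - E2) + alt_m) * math.sin(lat)
        return x, y, z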
Update Loop
[0151] During this loop, the Flaregun application may load the
following data from the server 5299 into the user device 5280. In
an example, it may scan current Flaregun Vendor Directories 5242
and load Flaregun content. This may be content loaded into Flaregun
system, such as new graphics specific to Flaregun vendors, and
location specific content, such as messages and location
information that the Flaregun administrator adds to the system, and
is thus not typically from other users. In other words, Flaregun
content may be any content not added by users and which is viewable
to all or some users. It may also load default and custom icons for
communication, text, video, audio, photo uploads and data sharing
between users. This can be used for adding communication
functionality to the Flaregun ecosystem and/or to add sponsored
content to the Flaregun ecosystem.
[0152] During this loop, the Flaregun application may also load the
list of applicable metrics variables which are to be enabled
(metrics profile), a list of timed events which will affect the
augmented content (event profile), a list of directions for
functionality based on specific geolocations (geolocation profile),
each Dev Kit data file from any user previously shared and may
display augmented data on the device screen based on each data
file. These profiles will typically have corresponding data files
which can be stored on the Flaregun server, on the user's device,
and/or in graphical coded image formats.
[0153] The displayed augmented data may include visible icons
representing data, such as geolocation of a user, or any fixed
geographic location, inside the Flaregun ecosystem, friends'
locations from users sharing data and/or vendor content locations
from the applicable vendor directory.
[0154] During the Update Loop, the Flaregun application may also
encrypt and encode (Step 5287a-b) Flaregun dynamic links/deep links
and make them available for sharing through SMS or any other
methods with other users, as described hereinabove. In an example,
a dataset may be encrypted with Flaregun encryption. Thus, the data
is first encoded to, for example, base64 for handling; it may then
be encrypted with a key function to keep anyone from, for example,
spoofing the links. This dataset may include, for example, the "FCS"
(Flaregun Coordinate System), which may be pre-processed cartesian
coordinates localizing position data for sharing within the
Flaregun ecosystem, user data, site specific data and metrics
data.
[0155] In another example, a dataset may be encoded with for
example base64 encoding and added to dynamic link/deep link scheme
for sharing across platforms and between devices.
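The following Python sketch illustrates the encode-then-protect
step described above: the dataset is base64-encoded for handling,
and a keyed HMAC signature stands in here for the unspecified
Flaregun key function that keeps the links from being spoofed. The
key, field names and token format are assumptions.

    # Sketch of base64 encoding plus a keyed anti-spoofing signature.
    import base64, hashlib, hmac, json

    SECRET_KEY = b"assumed-shared-key"       # illustrative only

    def encode_link_payload(dataset: dict) -> str:
        raw = json.dumps(dataset).encode()
        body = base64.urlsafe_b64encode(raw).decode()
        sig = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
        return f"{body}.{sig}"

    def decode_link_payload(token: str):
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return None                       # spoofed or corrupted link
        return json.loads(base64.urlsafe_b64decode(body))

    token = encode_link_payload({"fcs": [1204.2, -88.7, 3.1], "user": "K"})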
[0156] During the Update Loop, the Flaregun application may also
send user's information to the server 5299, which may include GPS
and other position information and/or an update of metrics
variables based on a schedule program. The schedule program may be
part of the Flaregun development kit and may allow the client to
schedule when specific content would be loaded into Flaregun.
During the Update Loop, the Flaregun application may also apply the
Flaregun GPS smoothing algorithm to improve accuracy of GPS
position data. In an example, the smoothing algorithm may combine
multiple GPS readings and average them, determine whether the
readings are accurate by comparing multiple GPS readings after
smoothing and the amount of change between readings, and apply a
variable range of accuracy that lets the algorithm judge the
current reading accurate by allowing only a preset amount of change
between readings; this allows the smoothing algorithm to update
when the user's position is changing between readings. If readings
are outside the preset thresholds for GPS accuracy, no update will
occur. In another example, multiple readings are taken, compared
without truncation, and then averaged; the final reading is
determined to be good or bad based on the amount of change from the
average of the first set of readings. If good, the reading is
applied to the transform function and the user's position is
updated; if bad, further readings are taken until a good reading is
determined.
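By way of illustration, the following Python sketch implements a
smoothing filter of the kind described above: readings are averaged
over a small window, and a reading is applied only when its change
from the average stays inside a preset threshold. The window size
and threshold values are assumptions.

    # Minimal sketch of the accept/reject GPS smoothing described above.
    class GpsSmoother:
        def __init__(self, window=5, max_jump_deg=0.0005):
            self.window = window
            self.max_jump = max_jump_deg
            self.readings = []                # recent (lat, lon) samples

        def submit(self, lat, lon):
            """Return a smoothed fix, or None if the reading was judged bad."""
            self.readings.append((lat, lon))
            self.readings = self.readings[-self.window:]
            avg_lat = sum(r[0] for r in self.readings) / len(self.readings)
            avg_lon = sum(r[1] for r in self.readings) / len(self.readings)
            # Good reading: within the allowed change from the running average.
            if (abs(lat - avg_lat) <= self.max_jump
                    and abs(lon - avg_lon) <= self.max_jump):
                return avg_lat, avg_lon       # apply to transform, update position
            return None                       # bad reading: no update occurs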
[0157] During the Update Loop, the Flaregun application may also
apply the Flaregun `6 degrees of freedom` algorithm. This operation
may include determining the orientation of the user's device and
applying that data to the user's position associated with the
augmented content 5244. This algorithm provides the ability for the
Flaregun camera to move in pitch, yaw and roll across six degrees
of freedom. Magnetic heading and accelerometer data may be combined
to provide this function. In an example, this function must be
synchronized with GPS data to provide the translation from GPS to
the cartesian coordinate system, for use in the render engine.
Flaregun AR--Augmented Content
[0158] The Flaregun augmented reality ("AR") module may include
augmented content which may be graphic or audio representations of
data from the server and from other devices, such as 3D models, 2D
billboarded images, text and procedurally created sounds and
graphics. In an example, friend location data may be encrypted and
encoded into a string to be passed directly to other devices via
multiple methods such as SMS and text messaging services, QR codes,
barcodes and other visual data transfer methods, wireless data
transfer methods and/or manual entry through the UI. In another
example, a user may request from or share location with another
user via SMS or other methods listed above.
[0159] In another example, the Flaregun application may allow a
user to search for and see augmented content such as nearby public
users, sponsored Flaregun content, media and other points of
interest.
[0160] In another example, the augmented content may include vendor
locations and the Flaregun application may display all data files
inside a Vendor Directory, dynamically update data from Dev Kit,
display content preloaded in Flaregun on a device based on data
from the vendor directory data files or use data files to create
procedural content, which is content generated by using only the
data from the data files and no preloaded or downloaded
graphics.
[0161] In another example, the augmented content may include any
content already existing on the device which can be manipulated in
the render engine, such as photos, videos and audio recordings.
[0162] In another example, the Flaregun application may display
content based on an outside development kit (Dev Kit). Multiple
Dev Kits can be used to implement augmented content into Flaregun.
Dev Kit content can be downloaded from Flaregun servers at any time
and Dev Kits can be used to implement real-time and scheduled
changes to the Flaregun environment without updating the software.
Further, Dev Kits can be used as standalone client-side software,
as a web-based control panel or as a sensor-based network. Dev Kit
may create data files on the server based on client side UI and
visibility of Dev Kit content can be limited or unlimited to all
users. Dev Kits can be used to generate QR codes, barcodes and
other methods of transferring location data between devices through
visual methods. Dev Kit can be used to link online content, media,
website data and RSS feeds to location specific data inside the
Flaregun ecosystem. Dev Kit can be any outside software that adds
content data to an augmented digital ecosystem. Lastly, the
Flaregun coordinate system can be used to include Dev Kits in the
Flaregun application for 3rd parties.
Maptag--GPS Mapping and Geolocation Algorithm
[0163] For MapTag, a 3D render engine may be implemented. MapTag
may update user position data including GPS and other position
and/or orientation data, and/or any available visual and image
recognition-based location data from the user device 5280. MapTag
may also determine the best accuracy of the device and procedurally
adapt GPS to safe speeds. This may allow for future improvement of
the device to gain performance and may extend battery life and
lower operating temperature of the user device. In an example, a
programmatic survey may be created where the user device GPS signal
strength is compared with the frame rate of the device camera. The
algorithm tries to keep the frame rate from dropping below a
determined threshold and will throttle the GPS receiver in order to
maintain an optimum balance of frame rate and GPS update speed,
which would allow for the best performance to battery life
compromise. The speed of the device camera will be device
dependent; when new devices are released with increased performance
in this area, the algorithm will speed up according to the balance
between these three factors (processor speed, GPS signal strength,
battery life). This allows the Flaregun app speed
to be scalable and gain performance based on new device
improvements.
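The following Python sketch illustrates such a survey loop: the GPS
polling interval is throttled when the camera frame rate drops
below a threshold and sped back up when headroom returns; all
values and names are assumptions.

    # Sketch of frame-rate-driven GPS throttling; thresholds assumed.
    def throttle_gps(current_interval_s, frame_rate_fps,
                     min_fps=24.0, fastest_s=0.5, slowest_s=10.0):
        """Return the next GPS polling interval given the observed frame rate."""
        if frame_rate_fps < min_fps:
            # Frame rate dropping: slow the GPS receiver down.
            return min(current_interval_s * 2.0, slowest_s)
        # Headroom available: speed GPS back up toward the device's best rate.
        return max(current_interval_s / 2.0, fastest_s)

    interval = 1.0
    interval = throttle_gps(interval, frame_rate_fps=19.0)   # -> 2.0 s
    interval = throttle_gps(interval, frame_rate_fps=40.0)   # -> 1.0 s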
[0164] MapTag may determine battery status and send user a
notification if Flaregun software cannot function because of low
battery charge level.
[0165] MapTag may also determine general movement of user and slow
rate of GPS updates when movement is recognized as inactive to
optimize power consumption of the device. This function of the
MapTag allows for less network usage and improves speed of the
device. Further, when user movement is detected, GPS rate can be
automatically reset to top speed to maintain best performance
across the network.
[0166] Further, as described hereinbefore, GPS data combined with
sensor data from the user device 5280 may be used to control the
orientation, position and scale and thus the appearance of the
augmented data associated with the respective physical space.
[0167] In an aspect, the Flaregun coordinate system described
herein can be used as a standard for visualizing AR content from
3rd party apps and services.
[0168] Time-of-day modes may be programmed into each DevKit to
control independent versions of the same experience based on a
user's location, time of day, and other factors.
[0169] The MapTag algorithm may also determine the user device's
position by implementing the gyroscope sensors of the user's device
5280 and by aligning the GPS heading to the user device's sensors.
[0170] In another aspect, the MapTag algorithm may be configured to
parse augmented content data files for GPS data, including
decrypting and decoding data shared by other devices 50 with the
user device 5280, decrypting and decoding data shared by DevKits
and visual methods, loading each data file into MapTag and
displaying the appropriate image or sound, translating GPS data
into XYZ coordinates, scaling augmented content to the physical
world in a 1:1 ratio, locating the position of the augmented
content in Flaregun based on the position data in the data file and
locating the position of the augmented content in Flaregun based on
shared location data from other devices and from visual and
wireless data transfer methods.
[0171] In another aspect, the MapTag algorithm controls the
activation of network content, hyperlinks, SMS links, and other
connectivity from augmented data created content to the greater
Internet, and may also control interaction from other users and
other preset Flaregun interactive content.
Flaregun Artificial Intelligence and Machine Learning Layer
[0172] The Flaregun Artificial Intelligence and Machine Learning
Layer (FAIML) (51 in FIG. 52A) processes data when, for example,
user input requires specific functions. FAIML may contextualize user
data and handle user input and visual display of that data. This
may include audio input such as sounds from the surrounding
environment or pre-programmed sounds and cues, speech recognition
including preprogrammed words based on the user's voice and
speech-to-text processing, gesture and device movement recognition
including speed and position of user's device and combined
movements, visual landmark recognition including physical site
geography and color and light pattern recognition, QR codes and
barcodes including codes generated by Flaregun and codes generated
from 3rd party apps and sources, geolocation position, site
specific orientation data, radio frequency input, "beacons" of all
types and text input.
Flaregun AR Tags--Graphical Representations of Augmented Content
Data
[0173] This module of the Flaregun software uses GPS data from the
user's device, from other devices through shared data methods
and/or from Vendor Directories to overlay icons and/or models over
the camera view or other sensor view to create the effect of
real-life attachment of digital data to physical objects and
spaces. It may also create each Tag based on data from the user's
device, from other devices or the Flaregun Server, and may
determine the position of each Tag and locate each based on the
unique metadata attached to each Tag.
[0174] When a tag is created programmatically it can orient itself
to face the user's position and orientation within the available
precision tolerances of the user device.
[0175] This module may also update the orientation of each Tag to
face the direction of the user's camera. Tags can be 2D or 3D and
Tag locations are updated at the best speed of the device. Tags may
contain button properties and act as a link connection in the UI.
Tag properties such as scale, color information and position can be
updated based on the distance of the user to the Tag. Tag behavior
and content can be controlled by the directory behavior variables
applying specific behavior and content to procedurally generated
Tags.
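As an illustration of these Tag updates, the following Python
sketch computes a billboard yaw so a Tag faces the user's camera
and scales the Tag from the user-to-Tag distance; the falloff
constants are assumptions.

    # Sketch of Tag billboarding and distance-based scaling.
    import math

    def update_tag(tag_pos, user_pos, base_scale=1.0, ref_distance=50.0):
        dx = user_pos[0] - tag_pos[0]
        dz = user_pos[2] - tag_pos[2]
        yaw = math.atan2(dx, dz)                   # face the camera direction
        distance = math.sqrt(dx * dx + dz * dz
                             + (user_pos[1] - tag_pos[1]) ** 2)
        scale = base_scale * ref_distance / max(distance, ref_distance)
        return yaw, scale                          # nearer tags render larger

    yaw, scale = update_tag(tag_pos=(10.0, 0.0, 40.0), user_pos=(0.0, 1.7, 0.0))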
Flaregun AR--3D Models
[0176] The Flaregun AR may determine the location, orientation and
scale of the 3D models and determine if the model, and other
metadata including time of day information, should be visible to
the user. It may also display the 3D model (and/or 2D graphics,
etc.) and apply location data and texture to the 3D model. It
should be noted that the 3D model geometry may be limited by the
processor power of the device and that the 3D model geometry may be
changed dynamically based on the user's position and distance to
the 3D model. Further, the 3D model can be generated dynamically and
changed based on the data shared by any other device.
Flaregun AR--Vendor Directory
[0177] The Vendor Directory (5242 in FIG. 52B) is a directory that
may correspond to a DevKit or to any Flaregun media content, or
admin portal on Flaregun websites. The data files in a Vendor
Directory may be visible to all or a select number of users based
on preset parameters and can change dynamically based on a user's
behavior or data usage, location or shared content.
[0178] Vendor Directories may be used to send data from a DevKit or
Flaregun server to all or some of the users in the Flaregun
ecosystem. Vendor Directories can be used to add content to the
Flaregun ecosystem without updating the Flaregun software. Vendor
Directories can be preloaded into the software via software update
and used without further contact with the Flaregun server or any
online service. Vendor Directories can be loaded into Flaregun from
any visual method of data transfer including but not limited to QR
codes, barcodes, image recognition, landmark detection, and GPS
sensor data. Vendor Directories can be shared between Flaregun AR
app and any other app utilizing the Flaregun engine and this
functionality can be repeated between apps or blocked from sharing
with an app based on intended use. Finally, Vendor Directories can
be available to users based on time of day and preset time periods
programmed into the directory behavior algorithm.
[0179] Testing for the system disclosed hereinabove may generate,
process and display thousands and possibly millions of GPS points
automatically.
Flaregun AR--Array
[0180] The Array is a graphical representation of Augmented Content
data procedurally generated to create an array of searchable Tag
locations. In an example, Flaregun Array locations are calculated
by the Flaregun Array Generator and can be programmed into the
Flaregun software to allow, for example, large numbers of
searchable locations to be accessed and displayed by the Flaregun
algorithm.
[0181] The Flaregun Array locations can incorporate online content,
can be served from the Flaregun server, can be parsed from an
existing API, can be dynamically created by the Flaregun software,
can be accessed in real time through the Flaregun Server, can be
shared between any device with a processor through dynamic deep
links without the need of the Flaregun Server, and can be
graphically displayed by the Flaregun algorithm and used to
populate a map.
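By way of illustration, the following Python sketch procedurally
generates a grid of searchable Tag locations around a center point,
in the spirit of the Flaregun Array Generator; the grid size and
spacing are assumptions.

    # Sketch of procedurally generating an array of searchable Tag locations.
    def generate_array(center_lat, center_lon, rows=100, cols=100,
                       spacing_deg=0.001):
        """Yield (id, lat, lon) for a rows x cols array of searchable locations."""
        for r in range(rows):
            for c in range(cols):
                yield (f"A{r:03d}{c:03d}",
                       center_lat + (r - rows // 2) * spacing_deg,
                       center_lon + (c - cols // 2) * spacing_deg)

    # 10,000 points here; the generator form scales toward the millions of
    # GPS points mentioned above without holding them all in memory at once.
    locations = generate_array(34.1443, -118.7815)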
[0182] The Proprietary Hardware and its UI, shown at 53 and 54 in
FIG. 52B, refer to the ability to run the Flaregun software on an
embedded chip that may reside for example inside a product being
manufactured by a licensee of the Flaregun software. Such products
may be smart city sensors, wearables, smart glasses, and so on.
[0183] In an example, Proprietary Hardware 53 would connect to the
Flaregun ecosystem as described herein. The "Another Device" would
instead be "Proprietary Hardware." These terms can be
interchangeable because "Another Device" does not need to be the
same type of device as the "User Device". The Flaregun algorithm
can run across device types to/from mobile devices and any device
with a processor running Flaregun including but not limited to
embedded chips in proprietary hardware which would allow for the
same connectivity described herein.
[0184] The Financial Transactions 55 refers to Flaregun.TM.
software being capable of accessing 3rd party APIs such as
PayPal.TM., which would allow Flaregun.TM. users to make a purchase
inside the Flaregun.TM. platform using a 3rd party API and
authorize delivery of special Commercial Content 56 for example
based on or associated with current user's location.
[0185] A financial transaction may be done for example via private
block chain associated with a licensee, via a public platform such
as PayPal.TM., via in game purchase from a 3rd party vendor, or via
a proprietary Flaregun.TM. virtual currency or currency related to
a licensee of the Flaregun.TM. platform.
[0186] In an example, the Financial Transaction layer 55 may
communicate with Flaregun's AI machine learning layer 51, which
will learn user behavior, and financial reactions will be used to
train the machine learning layer and enhance the user's experience
accordingly.
[0187] Commercial Content 56 may be any paid content provided by
for example the operator of the Flaregun platform or its sponsors
or licensees. This content may be purchased inside the Flaregun app
or any app running on the Flaregun platform and the content may be
delivered or unlocked accordingly.
[0188] It may be advantageous to set forth definitions of certain
words and phrases used in this patent document. The term "or" is
inclusive, meaning and/or. The phrases "associated with" and
"associated therewith," as well as derivatives thereof, may mean to
include, be included within, interconnect with, contain, be
contained within, connect to or with, couple to or with, be
communicable with, cooperate with, interleave, juxtapose, be
proximate to, be bound to or with, have, have a property of, or the
like.
[0189] Further, as used in this application, "plurality" means two
or more. A "set" of items may include one or more of such items.
Whether in the written description or the claims, the terms
"comprising," "including," "carrying," "having," "containing,"
"involving," and the like are to be understood to be open-ended,
i.e., to mean including but not limited to. Only the transitional
phrases "consisting of" and "consisting essentially of,"
respectively, are closed or semi-closed transitional phrases with
respect to claims.
[0190] If present, use of ordinal terms such as "first," "second,"
"third," etc., in the claims to modify a claim element does not by
itself connote any priority, precedence or order of one claim
element over another or the temporal order in which acts of a
method are performed. These terms are used merely as labels to
distinguish one claim element having a certain name from another
element having a same name (but for use of the ordinal term) to
distinguish the claim elements. As used in this application,
"and/or" means that the listed items are alternatives, but the
alternatives also include any combination of the listed items.
[0191] Throughout this description, the aspects, embodiments or
examples shown should be considered as exemplars, rather than
limitations on the apparatus or procedures disclosed or claimed.
Although some of the examples may involve specific combinations of
method acts or system elements, it should be understood that those
acts and those elements may be combined in other ways to accomplish
the same objectives.
[0192] Acts, elements and features discussed only in connection
with one aspect, embodiment or example are not intended to be
excluded from a similar role(s) in other aspects, embodiments or
examples.
[0193] Aspects, embodiments or examples of the invention may be
described as processes, which are usually depicted using a
flowchart, a flow diagram, a structure diagram, or a block diagram.
Although a flowchart may depict the operations as a sequential
process, many of the operations can be performed in parallel or
concurrently. In addition, the order of the operations may be
re-arranged. With regard to flowcharts, it should be understood
that additional and fewer steps may be taken, and the steps as
shown may be combined or further refined to achieve the described
methods.
[0194] If means-plus-function limitations are recited in the
claims, the means are not intended to be limited to the means
disclosed in this application for performing the recited function,
but are intended to cover in scope any equivalent means, known now
or later developed, for performing the recited function.
[0195] Claim limitations should be construed as means-plus-function
limitations only if the claim recites the term "means" in
association with a recited function.
[0196] If any presented, the claims directed to a method and/or
process should not be limited to the performance of their steps in
the order written, and one skilled in the art can readily
appreciate that the sequences may be varied and still remain within
the spirit and scope of the present invention.
[0197] Although aspects, embodiments and/or examples have been
illustrated and described herein, one of ordinary skill in the art
will easily detect alternates of the same and/or equivalent
variations, which may be capable of achieving the same results, and
which may be substituted for the aspects, embodiments and/or
examples illustrated and described herein, without departing from
the scope of the invention. Therefore, the scope of this
application is intended to cover such alternate aspects,
embodiments and/or examples. Hence, the scope of the invention is
defined by the accompanying claims and their equivalents. Further,
each and every claim is incorporated as further disclosure into the
specification.
* * * * *