U.S. patent application number 16/030788 was filed with the patent office on 2019-01-10 for artificial intelligence enhanced system for adaptive control driven ar/vr visual aids.
The applicant listed for this patent is EYEDAPTIC, INC. Invention is credited to Jay E. CORMIER, Brian KIM, David A. WATOLA.
Application Number: 20190012841 (Ser. No. 16/030788)
Family ID: 64902844
Filed Date: 2019-01-10

United States Patent Application 20190012841
Kind Code: A1
KIM; Brian; et al.
January 10, 2019
ARTIFICIAL INTELLIGENCE ENHANCED SYSTEM FOR ADAPTIVE CONTROL DRIVEN
AR/VR VISUAL AIDS
Abstract
Interactive systems using adaptive control software and hardware, spanning known and later-developed eyepieces, head-wear, and lenses, including implantable, temporarily insertable, contact, and related film-based lenses with thin-film transparent elements housing cameras, lenses, projectors, and functionally equivalent processing tools. Simple controls, real-time updates and instant feedback allow implicit optimization of a universal model while managing complexity.
Inventors: KIM; Brian (San Clemente, CA); WATOLA; David A. (Irvine, CA); CORMIER; Jay E. (Laguna Niguel, CA)

Applicant: EYEDAPTIC, INC.; Laguna Niguel, CA, US

Family ID: 64902844
Appl. No.: 16/030788
Filed: July 9, 2018
Related U.S. Patent Documents
Application Number | Filing Date | Patent Number
62530286 | Jul 9, 2017 |
62530792 | Jul 10, 2017 |
Current U.S. Class: 1/1

Current CPC Class: G06T 19/006 20130101; G06F 3/012 20130101; G06N 5/04 20130101; G06F 9/453 20180201; G06N 20/00 20190101; G06F 3/167 20130101; G06N 3/006 20130101; G06F 3/013 20130101

International Class: G06T 19/00 20060101 G06T019/00; G06F 9/451 20060101 G06F009/451; G06F 3/16 20060101 G06F003/16; G06F 3/01 20060101 G06F003/01
Claims
1. An adaptive control driven system for visual enhancement and
correction useful for addressing ocular disease states, which
further comprises, in combination; Adaptive peripheral vision
training; Eccentric viewing Training; Pupil tracking with
customizable offset for eccentric viewing; Gamification--follow
fixation targets around screen for training; Targeting lines
overlaid on reality for fixation; Guided fixation across page or
landscape w/head tracking; Guided fixation with words moving across
screen at fixed rates; Guided fixation with words moving at
variable rates triggered by user; Guided Training & controlling
eye movements with tracking lines; Look ahead preview to piece
together words for increased reading speed; Distortion training to
improve fixation.
2. In a system for using AR/VR to address visual issues in users,
improvements which comprise the following improvements in HYBRID
WARPING & LAYERING: Layered processing-- All processing
Implemented as a serial pipeline of independent processing stages,
directly connected only by one-way data flow; General model both
for transforming data and adding to it (e.g. rendering independent
Information into the image); Internally, a pipeline stage can be
arbitrarily complex (e.g. effectively combine multiple stages into
one monolithic stage for performance reasons); Can reconfigure
pipeline connections on-the-fly to enable or disable features;
Tiered Radial Warp-- Divide output display into central, inner,
transition, and outer regions with different radial mapping
characteristics in each region; Only three primary parameters
adjust; Easy to understand and manipulate; Capable of accommodating
varied situations/tasks with different FOV requirements; Two
auxiliary parameters customized per user but infrequently changed;
Simple and efficient processing; Reference lines: highly visible,
guide to optimal location, show distortion contours even as warping
changes, speed up scanning to start of lines in spite of
distortion; Guide the eye to optimal location for reading or
high-acuity task; Show distortion contours as they bend away into
peripheral vision; Speed up scanning to start of next line in spite
of distortion; Dual reference lines--bracket text to show further
distortion details; Fiducial Change traditional crosshairs to
oriented-T (typically inverted-T) to provide intersection point for
fixation without drawing over reading/high-acuity areas; Different
color from reference line to avoid confusion and distraction;
Control Numerous interfaces provided for parameter adjustment;
Freely change any parameter at any time; Save or restore entire
"hotkey" configurations: Automatic mode to adjust warp parameters
and/or select configuration based on camera view (not described in
detail here); and Smooth and continuous transit.
3. An adaptive control driven system for visual enhancement and
correction useful for addressing ocular disease states, which
comprises, in combination; Software using at least one feature
programmed to simulate improved functional vision for a user from a
matrix selected from the group consisting of Hybrid magnification
& warping; FOV dependent on head tracking; Word shifting with
"target lines"; Central radial warping; Interactive on the fly FOV
mapping; Dynamic Zoom; OCR & Font change adaptation; Distortion
Grid adjustment; Scotoma interactive adjustment; and, Adaptive
peripheral vision training.
4. An adaptive control driven system for visual enhancement and
correction useful for addressing ocular disease states, comprising
hardware which further comprises, at least the following features
and their functional equivalents: At least a machine or manufacture
of matter in the state of the art effective for managing; One
button wireless update; Stabilization & targeting training;
Targeting lines & crosshairs for eye fixation & tracking;
Interactive voice recognition and control; Reading & text
recognition mode; Voice memo; and Mode shift transitions.
5. An adaptive control driven system for visual enhancement and
correction useful for addressing ocular disease states, which
comprises in combination: At least a set of hardware capable of
implementing user-driven adjustments, driven by any subject
software described herein to effectively manage; Hybrid
magnification & warping; FOV dependent on head tracking; Word
shifting with "target lines"; Central radial warping; Interactive
on the fly FOV mapping; Dynamic Zoom; OCR & Font change
adaptation; Distortion Grid adjustment; Scotoma interactive
adjustment; Adaptive peripheral vision training; In combination in
whole or in part with: One button wireless update; Stabilization
& targeting training; Training lines & crosshairs for eye
fixation & tracking; Interactive voice recognition mode; Voice
memo; and Mode shift transitions.
6. The System of claim 1, further comprising data collected and
arrayed by AI.
7. The System of claim 2, further comprising data collected and
arrayed by AI.
8. The System of claim 3, further comprising data collected and
arrayed by AI.
9. The System of claim 4, further comprising data collected and
arrayed by AI.
10. The System of claim 5, further comprising data collected and
arrayed by AI.
11. An AI enhanced system, further comprising at least an AI Data 101, residing both in its own database and in an AI cloud 109, along with an AI Compiler 111, an AI filter 107, and any other required AI architecture 103 and AI Intervenor 105.
12. The system of claim 11, wherein user data is supplemented by
key medical information arrayed within an AI architecture, further
comprising both resident and transient data sets operatively linked
thereto.
13. The system of claim 12, wherein said resident and transient
data sets are in communication with at least the AI cloud, and the
AI Intervenor analyzes and makes available select data, through and
in connection with AI filter(s) to provide models which better
support user's needs.
14. The system of claim 13, whereby AI Data is linked to and
operatively connected with AI Architecture and any required
interfaces with Adaptive Control visual system(s).
15. The system of claim 14, whereby user data is protected and
subject to AI Filter before becoming part of larger data
super-sets.
16. The system of claim 15, wherein settings control which portions
of user data are arrayed in specific cloud locations, the AI
Compiler, and other aspects of said AI Architecture.
17. The system of claim 16, whereby key ophthalmic, optical and
other medical data are sequestered.
18. The system of claim 17, whereby said sequestration is
concomitantly managed with other hypothecation of select aspects of
the AI Data.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of and priority to U.S. Provisional Patent Applications Ser. Nos. 62/530,286 & 62/530,792, filed in July 2017, the content of each of which is incorporated herein by reference in its entirety, along with full reservation of all Paris Convention rights.
BACKGROUND OF THE DISCLOSURES
[0002] The Interactive Augmented Reality (AR) Visual Aid invention
described below is intended for users with visual impairments that
impact field of vision (FOV). These may take the form of
age-related macular degeneration, retinitis pigmentosa, diabetic
retinopathy, Stargardt's disease, and other diseases where damage
to part of the retina impairs vision. The invention described is
novel because it not only supplies algorithms to enhance vision,
but also provides simple but powerful controls and a structured
process that allows the user to adjust those algorithms.
[0003] The basic hardware is constructed from a non-invasive,
wearable electronics-based AR eyeglass system (see FIGURE)
employing any of a variety of integrated display technologies,
including LCD, OLED, or direct retinal projection. One or more
cameras, mounted on the glasses, continuously monitor the view
where the glasses are pointing. The AR system also contains an
integrated processor and memory storage (either embedded in the
glasses, or tethered by a cable) with embedded software
implementing real-time algorithms that modify the images as they
are captured by the camera(s). These modified, or corrected, images
are then continuously presented to the eyes of the user via the
integrated displays.
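The capture-modify-display loop described above can be sketched as follows. This is a minimal illustrative sketch only: the frame representation (a flat list of pixel values) and the placeholder contrast enhancement are assumptions for illustration, not part of the disclosed hardware or software.

```python
# Sketch of the real-time capture -> enhance -> display loop.
# Frames are modeled as flat lists of 0..255 pixel values; the
# contrast stretch is a stand-in for the disclosed algorithms.

def enhance(frame, params):
    """Apply a simple placeholder enhancement (contrast stretch)."""
    lo, hi = min(frame), max(frame)
    if hi == lo:
        return list(frame)
    gain = params.get("contrast", 1.0)
    mid = (hi + lo) / 2
    # Stretch pixel values about the midpoint, clamped to 0..255.
    return [max(0, min(255, mid + gain * (p - mid))) for p in frame]

def run_pipeline(frames, params):
    """Continuously present corrected frames to the user's displays."""
    return [enhance(f, params) for f in frames]
```

A gain of 1.0 leaves the frame unchanged; higher gains push values toward the extremes, mimicking a contrast-enhancement stage.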
[0004] The basic image modification algorithms come in multiple
forms as described later. In conjunction with the AR hardware
glasses, they enable users to enhance vision in ways extending far
beyond simple image changes such as magnification or contrast
enhancement. The fundamental invention is a series of adjustments
that are applied to move, modify, or reshape the image in order to
reconstruct it to suit each specific user's FOV and take full
advantage of the remaining useful retinal area. The following
disclosure describes a variety of mapping, warping, distorting and
scaling functions used to correct the image for the end user.
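One hypothetical instance of such a radial scaling function is sketched below: it magnifies the central region and blends smoothly back to the unwarped image beyond an outer radius. The region boundaries and gain values are invented for illustration and are not taken from the disclosure.

```python
def radial_scale(r, r_central=0.3, r_outer=0.8, central_gain=0.5):
    """Map an output-pixel radius r (0..1) to a source radius.

    Inside r_central the image is magnified (the source radius
    shrinks); beyond r_outer the mapping is identity; in between the
    two mappings are blended linearly so the warp has no visible seam.
    """
    if r <= r_central:
        return r * central_gain
    if r >= r_outer:
        return r
    # Linear blend between the magnified and identity mappings.
    t = (r - r_central) / (r_outer - r_central)
    inner_end = r_central * central_gain
    return inner_end + t * (r_outer - inner_end)
```

Keeping the mapping continuous and monotone across region boundaries is what makes a tiered warp usable: objects shift and stretch, but never tear or overlap.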
[0005] The invention places these fundamental algorithms under
human control, allowing the user to interact directly with the
corrected image and tailor its appearance for their particular
condition or specific use case (see flowchart below). In prior art,
an accurate map of the usable user FOV is a required starting point
that must be known in order to provide a template for modifying the
visible image. With this disclosure, such a detailed starting point
derived from FOV measurements does not have to be supplied.
Instead, an internal model of the FOV is developed, beginning with
the display of a generic template or a shape that is believed to
roughly match the type of visual impairment of the user. From this
simple starting point the user adjusts the shape and size of the
displayed visual abnormality, using the simple control interface to
add detail progressively, until the user can visually confirm that
the displayed model captures the nuances of his or her personal
visual field. Using this unique method, accurate FOV tests and
initial templates are not required. Furthermore, the structured
process, which incrementally increases model detail, makes the
choice of initial model non-critical.
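The progressive refinement process described above can be sketched as follows. The circular starting template, the single-point adjustment, and the neighbor-interpolation refinement step are illustrative assumptions, not the actual model representation.

```python
# Sketch of the progressive FOV-model refinement loop: start from a
# generic template, apply user adjustments, then add detail by
# doubling the control-point count.

def generic_template(n_points=8, radius=1.0):
    """Coarse circular scotoma boundary as {angle_index: radius}."""
    return {k: radius for k in range(n_points)}

def adjust_point(model, key, new_radius):
    """User drags one boundary control point; return updated model."""
    model = dict(model)
    model[key] = new_radius
    return model

def add_detail(model):
    """Double the control-point count by interpolating neighbors."""
    n = len(model)
    refined = {}
    for k in range(n):
        nxt = model[(k + 1) % n]
        refined[2 * k] = model[k]
        refined[2 * k + 1] = (model[k] + nxt) / 2
    return refined
```

Because each refinement pass only interpolates between points the user has already confirmed, the coarse initial template is non-critical, matching the structured process described above.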
OBJECTS AND SUMMARY OF THE INVENTION
[0006] For people with retinal diseases, adapting to loss of vision becomes a way of life. This impact can affect their lives in many ways, including loss of the ability to read, loss of income, loss of mobility and an overall degraded quality of life. However, with prevalent retinal diseases such as AMD (Age-related Macular Degeneration), not all of the vision is lost; the peripheral vision remains intact, as only the central vision is impacted by the degradation of the macula. Given that the peripheral vision remains intact, it is possible to take advantage of eccentric viewing and, through patient adaptation, to increase functionality such as reading. Research has shown that training of eccentric viewing increases reading ability (both accuracy and speed). Eye movement control training and PRL (Preferred Retinal Locus) training were important to achieving these results.sup.1. Another factor in increasing reading ability for those with reduced vision is the ability to view words in context as opposed to in isolation. Magnification is often used as a simple visual aid with some success. However, with increased magnification comes decreased FOV (Field of View), and therefore the inability to see other words or objects around the word or object of interest. Although it has been shown that with extensive training isolated-word reading can improve, eye control was important to this as well.sup.2. The capability to guide the training for eccentric viewing and for eye movement and fixation is important to achieving improvement in functionality such as reading. The approaches outlined below describe novel ways to use augmented reality techniques to both automate and improve this training.
[0007] In order to help users with retinal diseases, especially users with central vision deficiencies, it is first important to train and improve their ability to fixate on a target. Since central vision is normally used for this, this is an important step in helping users control their ability to focus on a target, thereby laying the groundwork for further training and adaptation functionality. This fixation training can be accomplished through gamification built into the software algorithms, and can be utilized periodically for increased fixation training and improved adaptation. The gamification can be accomplished by following fixation targets around the display screen; in conjunction with a hand-held pointer, the user can select or click on the target during timed or untimed exercises. Furthermore, this can be accomplished through voice-activated controls as a substitute or adjunct to a hand-held pointer.
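A minimal sketch of the gamified fixation exercise, assuming normalized screen coordinates and a simple hit-within-tolerance scoring rule; both representations are invented for illustration.

```python
def run_fixation_round(target, click, tolerance=0.1):
    """Score one gamified fixation trial: did the user's click (or
    voice-selected point) land within tolerance of the target?"""
    dx = target[0] - click[0]
    dy = target[1] - click[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance

def fixation_session(targets, clicks, tolerance=0.1):
    """Follow targets around the screen; return (hits, total)."""
    hits = sum(run_fixation_round(t, c, tolerance)
               for t, c in zip(targets, clicks))
    return hits, len(targets)
```

The hit ratio over a session gives a simple progress metric that could drive timed or untimed exercise difficulty.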
[0008] To aid the user in targeting and fixation, certain guide lines can be overlaid on reality or on the incoming image to help guide the user's eye movements along the optimal path. These guidelines can be a plurality of constructs such as, but not limited to, cross-hair targets, bullseye targets, or linear guidelines such as singular or parallel dotted lines a fixed or variable distance apart, or a dotted-line or solid box of varying colors. This will enable the user to increase their training and adaptation for eye movement control by following the tracking lines or targets as their eyes move across a scene, in the case of a landscape, picture or video monitor, or across a page, in the case of reading text.
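The dotted guide-line overlay can be sketched as below, assuming a grayscale image represented as a list of row lists; the dash pattern and brightness value are illustrative choices, not specified by the disclosure.

```python
def overlay_dual_reference_lines(image, row_top, row_bottom,
                                 dash=2, gap=2, value=255):
    """Overlay two parallel dotted guide lines that bracket a line of
    text. `image` is a list of row lists; the original is untouched."""
    out = [row[:] for row in image]
    for row in (row_top, row_bottom):
        for col in range(len(out[row])):
            if col % (dash + gap) < dash:   # dotted pattern
                out[row][col] = value
    return out
```

Rendering the guides as a separate overlay pass keeps them independent of any warping applied earlier, so they remain visible even as the distortion changes.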
[0009] This approach can be further modified and improved with
other interactive methods beyond simple eye movement. Targeting
approaches as described above can also be tied to head movement
based on inertial sensor inputs or simply following along as the
head moves. Furthermore, these guided fixation targets, or lines,
can move across the screen at a predetermined fixed rate to
encourage the user to follow along and keep pace. These same
targets can also be scrolled across the screen at variable rates as
determined or triggered by the user for customization to the
situation or scene or text of interest.
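The fixed-rate and user-triggered target motion can be sketched as a single position function; the drift rate, jump size, and wrap-around behavior are illustrative assumptions.

```python
def target_position(t, rate=0.1, width=1.0, user_steps=()):
    """Horizontal position of a guided-fixation target at time t.

    The target drifts at a fixed `rate` (the predetermined-rate
    mode); each user-triggered time in `user_steps` that has already
    occurred adds a small jump, modeling the variable, user-paced
    mode. Position wraps at the screen `width`.
    """
    jumps = sum(1 for s in user_steps if s <= t) * 0.05
    return (t * rate + jumps) % width
```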
[0010] To make the most of a user's remaining useful vision, methods for adaptive peripheral vision training can be employed. Training and encouraging the user to make the most of their eccentric viewing capabilities is important. As described, the user may naturally gravitate to their PRL (preferred retinal locus) to help optimize their eccentric viewing. However, this may not be the optimal location to maximize their ability to view images or text with their peripheral vision. Through skewing and warping of the images presented to the user, along with the targeting guidelines, the optimal place for the user to target their eccentric vision can be determined.
[0011] Eccentric viewing training through reinforced learning can be encouraged by a series of exercises. The targeting described in fixation training can also be used for this training. With fixation targets on, the object, area, or word of interest can be incrementally tested by shifting locations to determine the best PRL for eccentric viewing.
[0012] Also, pupil tracking algorithms can be employed that not only have eye tracking capability but can also utilize a user-customized offset for improved eccentric viewing capability, whereby the eccentric viewing targets are offset to guide the user to focus on their optimal area for eccentric viewing.
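A minimal sketch of pupil tracking with a customizable PRL offset, assuming gaze samples arrive as coordinate pairs; the data representation is an assumption for illustration.

```python
def eccentric_target(gaze, prl_offset):
    """Place the viewing target at the tracked gaze point shifted by a
    per-user PRL offset, steering fixation onto healthy retina."""
    return (gaze[0] + prl_offset[0], gaze[1] + prl_offset[1])

def track(gaze_samples, prl_offset):
    """Apply the customizable offset to a stream of pupil-tracker
    samples."""
    return [eccentric_target(g, prl_offset) for g in gaze_samples]
```

The offset is the per-user calibration quantity: once the best PRL is found through the training exercises, the same shift is applied to every tracked sample.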
[0013] Further improvements in visual adaptation can be achieved
through use of the hybrid distortion algorithms. With the layered
distortion approach objects or words on the outskirts of the image
can receive a different distortion and provide a look ahead preview
to piece together words for increased reading speed. While the user
is focused on the area of interest that is being manipulated the
words that are moving into the focus area can help to provide
context in order to interpolate and better understand what is
coming for faster comprehension and contextual understanding.
[0014] Furthermore, the user can be run through a series of
practice modules whereby different distortion levels and methods
are employed. With these different methods hybrid distortion
training can be used to switch between areas of interest to improve
fixation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] Various preferred embodiments are described herein with
references to the drawings in which merely illustrative views are
offered for consideration, whereby:
[0016] FIG. 1 is a grid manipulation flowchart, including
hierarchical model construction.
[0017] Corresponding reference characters indicate corresponding components throughout the single view of the drawing. Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity, and have not
necessarily been drawn to scale. For example, the dimensions of
some of the elements in the figures may be exaggerated relative to
other elements to help to improve understanding of various
embodiments of the present invention. Also, common but
well-understood elements that are useful or necessary in a
commercially feasible embodiment are often not depicted in order to
facilitate a less obstructed view of these various embodiments of
the present invention.
DETAILED DESCRIPTIONS
[0018] The present inventors have discovered that low-vision users can conform a user-tuned software set to improve needed aspects of vision, enabling functional vision to be restored.
[0019] Expressly incorporated by reference as if fully set forth
herein are the following: U.S. Provisional Patent Application No.
62/530,286 filed Jul. 9, 2017, U.S. Provisional Patent Application
No. 62/530,792 filed Jul. 9, 2017, U.S. Provisional Patent
Application No. 62/579,657, filed Oct. 13, 2017, U.S. Provisional
Patent Application No. 62/579,798, filed Oct. 13, 2017, Patent
Cooperation Treaty Patent Application No. PCT/US17/62421, filed
Nov. 17, 2017, U.S. NonProvisional patent application Ser. No.
15/817,117, filed Nov. 17, 2017, U.S. Provisional Patent
Application No. 62/639,347, filed Mar. 6, 2018, U.S. NonProvisional
patent application Ser. No. 15/918,884, filed Mar. 12, 2018, and
U.S. Provisional Patent Application No. 62/677,463, filed May 29,
2018.
[0020] It is contemplated that the processes described above are
implemented in a system configured to present an image to the user.
The processes may be implemented in software, such as machine
readable code or machine executable code that is stored on a memory
and executed by a processor. Input signals or data are received by the unit from a user, cameras, detectors or any other device. Output is presented to the user in any manner, including a screen display or headset display.
[0021] In addition, the logic flows depicted in the figures do not
require the particular order shown, or sequential order, to achieve
desirable results. Furthermore, other steps may be provided or
steps may be eliminated from the described flows, and other
components may be added to, or removed from, the described systems.
Accordingly, other embodiments are within the scope of the
following claims.
[0022] Referring now to FIG. 1, systems of the present invention
are shown schematically. Steps A-I are enhanced by the various
interfaces and loops connecting AI interfaces with the instant
system, as is known to those of skill in the art.
[0023] AI Data 101, resides both in its own database and in an AI
cloud 109, along with AI Compiler 111, and AI filter 107 along with
any other required AI architecture 103 and AI Intervenor 105. Step
A involves identifying region(s) to remap from within the source FOV;
Step B initializing the same to achieve Step C wherein the model
created is ratified.
[0024] AI Architecture 103 provides both resident and transient
data sets to address the issue(s) being ameliorated in the user's
vision. Said data sets reside in at least one of the sub-elements
of the AI architecture, namely AI cloud 109, AI compiler 111, AI
filter 107 and AI intervenor 105, as known to those skilled in the
art. Likewise, in Step D the user selects point outputs; in Step E the user moves selected point(s), updating the model in real time; in Step F the user releases the selected point(s); and in Step G the interlocutory model is evaluated, leading either to Step H, wherein updates are needed, or to Step I, wherein the model is complete. Those skilled in the art
understand the multi-path approach and orientation to use AI
elements to create functional and important models using said data,
inter alia.
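Read as a state machine, the lettered steps can be sketched as follows; the transition logic, including the loop from H back to D, is one illustrative reading of FIG. 1 rather than a definitive implementation.

```python
# Sketch of the Step A-I grid-manipulation loop from FIG. 1 as a
# simple state machine. State names follow the lettered steps.

TRANSITIONS = {
    "A": "B",  # identify region(s) to remap within the source FOV
    "B": "C",  # initialize the model
    "C": "D",  # model ratified; user begins point selection
    "D": "E",  # user selects point(s)
    "E": "F",  # user moves point(s); model updates in real time
    "F": "G",  # user releases point(s); interlocutory model checked
}

def step_g(needs_update):
    """Step G branches to H (more updates) or I (complete)."""
    return "H" if needs_update else "I"

def run(updates_needed):
    """Walk A..G, then loop through H back to D until complete (I)."""
    path, state = [], "A"
    while state != "I":
        path.append(state)
        if state == "G":
            state = step_g(updates_needed > 0)
        elif state == "H":
            updates_needed -= 1
            state = "D"
        else:
            state = TRANSITIONS[state]
    path.append("I")
    return path
```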
[0025] It will be appreciated that the above embodiments that have
been described in particular detail are merely example or possible
embodiments, and that there are many other combinations, additions,
or alternatives that may be included. For example, while AR/VR visual aids have been referred to throughout, other applications of the above embodiments include online or web-based applications or other cloud services.
[0026] Also, the particular naming of the components,
capitalization of terms, the attributes, data structures, or any
other programming or structural aspect is not mandatory or
significant, and the mechanisms that implement the invention or its
features may have different names, formats, or protocols. Further,
the system may be implemented via a combination of hardware and
software, as described, or entirely in hardware elements. Also, the
particular division of functionality between the various system
components described herein is merely exemplary, and not mandatory;
functions performed by a single system component may instead be
performed by multiple components, and functions performed by
multiple components may instead be performed by a single
component.
[0027] Some portions of the above description present features in
terms of algorithms and symbolic representations of operations on
information. These algorithmic descriptions and representations may
be used by those skilled in the data processing arts to most
effectively convey the substance of their work to others skilled in
the art. These operations, while described functionally or
logically, are understood to be implemented by computer programs.
Furthermore, it has also proven convenient at times, to refer to
these arrangements of operations as modules or by functional names,
without loss of generality.
[0028] Unless specifically stated otherwise as apparent from the
above discussion, it is appreciated that throughout the
description, discussions utilizing terms such as "processing" or
"computing" or "calculating" or "determining" or "identifying" or
"displaying" or "providing" or the like, refer to the action and
processes of a computer system, or similar electronic computing
device, that manipulates and transforms data represented as
physical (electronic) quantities within the computer system
memories or registers or other such information storage,
transmission or display devices.
[0029] Based on the foregoing specification, the above-discussed
embodiments of the invention may be implemented using computer
programming or engineering techniques including computer software,
firmware, hardware or any combination or subset thereof. Any such
resulting program, having computer-readable and/or
computer-executable instructions, may be embodied or provided
within one or more computer-readable media, thereby making a
computer program product, i.e., an article of manufacture,
according to the discussed embodiments of the invention. The
computer readable media may be, for instance, a fixed (hard) drive,
diskette, optical disk, magnetic tape, semiconductor memory such as
read-only memory (ROM) or flash memory, etc., or any
transmitting/receiving medium such as the Internet or other
communication network or link. The article of manufacture
containing the computer code may be made and/or used by executing
the instructions directly from one medium, by copying the code from
one medium to another medium, or by transmitting the code over a
network.
[0030] While the disclosure has been described in terms of various
specific embodiments, it will be recognized that the disclosure can
be practiced with modification within the spirit and scope of the
claims.
[0031] While several embodiments of the present disclosure have
been described and illustrated herein, those of ordinary skill in
the art will readily envision a variety of other means and/or
structures for performing the functions and/or obtaining the
results and/or one or more of the advantages described herein, and
each of such variations and/or modifications is deemed to be within
the scope of the present disclosure. More generally, those skilled
in the art will readily appreciate that all parameters, dimensions,
materials, and configurations described herein are meant to be
exemplary and that the actual parameters, dimensions, materials,
and/or configurations will depend upon the specific application or
applications for which the teachings of the present disclosure
is/are used.
[0032] Those skilled in the art will recognize, or be able to
ascertain using no more than routine experimentation, many
equivalents to the specific embodiments of the disclosure described
herein. It is, therefore, to be understood that the foregoing
embodiments are presented by way of example only and that, within
the scope of the appended claims and equivalents thereto, the
disclosure may be practiced otherwise than as specifically
described and claimed. The present disclosure is directed to each
individual feature, system, article, material, kit, and/or method
described herein. In addition, any combination of two or more such
features, systems, articles, materials, kits, and/or methods, if
such features, systems, articles, materials, kits, and/or methods
are not mutually inconsistent, is included within the scope of the
present disclosure.
[0033] AI definitions, as defined and used herein, should be
understood to control over dictionary definitions, definitions in
documents incorporated by reference, and/or ordinary meanings of
the defined terms.
[0034] The indefinite articles "a" and "an," as used herein in the
specification and in the claims, unless clearly indicated to the
contrary, should be understood to mean "at least one." The phrase
"and/or," as used herein in the specification and in the claims,
should be understood to mean "either or both" of the elements so
conjoined, i.e., elements that are conjunctively present in some
cases and disjunctively present in other cases. Other elements may
optionally be present other than the elements specifically
identified by the "and/or" clause, whether related or unrelated to
those elements specifically identified, unless clearly indicated to
the contrary.
[0035] Reference throughout this specification to "one embodiment"
or "an embodiment" means that a particular feature, structure, or
characteristic described in connection with the embodiment is
included in at least one embodiment. Thus, appearances of the
phrases "in one embodiment" or "in an embodiment" in various places
throughout this specification are not necessarily all referring to
the same embodiment. Furthermore, the particular features,
structures, or characteristics may be combined in any suitable
manner in one or more embodiments.
[0036] The terms and expressions which have been employed herein
are used as terms of description and not of limitation, and there
is no intention, in the use of such terms and expressions, of
excluding any equivalents of the features shown and described (or
portions thereof), and it is recognized that various modifications
are possible within the scope of the claims. Accordingly, the
claims are intended to cover all such equivalents.
[0037] Reference throughout this specification to "one embodiment,"
"an embodiment," or similar language means that a particular
feature, structure, or characteristic described in connection with
the embodiment is included in at least one embodiment of the
present invention. Thus, appearances of the phrases "in one
embodiment," "in an embodiment," and similar throughout this
specification may, but do not necessarily, all refer to the same
embodiment.
[0038] Furthermore, the described features, structures, or
characteristics of the invention may be combined in any suitable
manner in one or more embodiments. In the following description,
numerous specific details are provided to provide a thorough
understanding of embodiments of the invention. One skilled in the
relevant art will recognize, however, that the invention may be
practiced without one or more of the specific details, or with
other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
[0039] The schematic flow chart diagrams included herein are
generally set forth as logical flow chart diagrams. As such, the
depicted order and labeled steps are indicative of one embodiment
of the presented method. Other steps and methods may be conceived
that are equivalent in function, logic, or effect to one or more
steps, or portions thereof, of the illustrated method.
Additionally, the format and symbols employed are provided to
explain the logical steps of the method and are understood not to
limit the scope of the method. Although various arrow types and
line types may be employed in the flow chart diagrams, they are
understood not to limit the scope of the corresponding method.
Indeed, some arrows or other connectors may be used to indicate
only the logical flow of the method. For instance, an arrow may
indicate a waiting or monitoring period of unspecified duration
between enumerated steps of the depicted method. Additionally, the
order in which a particular method occurs may or may not strictly
adhere to the order of the corresponding steps shown. Unless
otherwise indicated, all numbers expressing quantities of
ingredients, properties such as molecular weight, reaction
conditions, and so forth used in the specification and claims are
to be understood as being modified in all instances by the term
"about." Accordingly, unless indicated to the contrary, the
numerical parameters set forth in the specification and attached
claims are approximations that may vary depending upon the desired
properties sought to be obtained by the present invention. At the
very least, and not as an attempt to limit the application of the
doctrine of equivalents to the scope of the claims, each numerical
parameter should at least be construed in light of the number of
reported significant digits and by applying ordinary rounding
techniques. Notwithstanding that the numerical ranges and
parameters setting forth the broad scope of the invention are
approximations, the numerical values set forth in the specific
examples are reported as precisely as possible. Any numerical
value, however, inherently contains certain errors necessarily
resulting from the standard deviation found in its respective
testing measurements.
[0040] The terms "a," "an," "the" and similar referents used in the
context of describing the invention (especially in the context of
the following claims) are to be construed to cover both the
singular and the plural, unless otherwise indicated herein or
clearly contradicted by context. Recitation of ranges of values
herein is merely intended to serve as a shorthand method of
referring individually to each separate value falling within the
range. Unless otherwise indicated herein, each individual value is
incorporated into the specification as if it were individually
recited herein. All methods described herein can be performed in
any suitable order unless otherwise indicated herein or otherwise
clearly contradicted by context. The use of any and all examples,
or exemplary language (e.g., "such as") provided herein is intended
merely to better illuminate the invention and does not pose a
limitation on the scope of the invention otherwise claimed. No
language in the specification should be construed as indicating any
non-claimed element essential to the practice of the invention.
[0041] Groupings of alternative elements or embodiments of the
invention disclosed herein are not to be construed as limitations.
Each group member may be referred to and claimed individually or in
any combination with other members of the group or other elements
found herein. It is anticipated that one or more members of a group
may be included in, or deleted from, a group for reasons of
convenience and/or patentability. When any such inclusion or
deletion occurs, the specification is deemed to contain the group
as modified, thus fulfilling the written description of all Markush
groups used in the appended claims.
[0042] Certain embodiments of this invention are described herein,
including the best mode known to the inventors for carrying out the
invention. Of course, variations on these described embodiments
will become apparent to those of ordinary skill in the art upon
reading the foregoing description. The inventors expect skilled
artisans to employ such variations as appropriate, and the
inventors intend for the invention to be practiced otherwise than
as specifically described herein. Accordingly, this invention includes
all modifications and equivalents of the subject matter recited in
the claims appended hereto as permitted by applicable law.
Moreover, any combination of the above-described elements in all
possible variations thereof is encompassed by the invention unless
otherwise indicated herein or otherwise clearly contradicted by
context.
[0043] Specific embodiments disclosed herein may be further limited
in the claims using "consisting of" or "consisting essentially of"
language. When used in the claims, whether as filed or added per
amendment, the transition term "consisting of" excludes any
element, step, or ingredient not specified in the claims. The
transition term "consisting essentially of" limits the scope of a
claim to the specified materials or steps and those that do not
materially affect the basic and novel characteristic(s).
Embodiments of the invention so claimed are inherently or expressly
described and enabled herein.
[0044] As one skilled in the art would recognize as necessary or
best-suited for performance of the methods of the invention, a
computer system or machines of the invention include one or more
processors (e.g., a central processing unit (CPU), a graphics
processing unit (GPU), or both), a main memory and a static memory,
which communicate with each other via a bus.
[0045] A processor may be provided by one or more processors
including, for example, one or more of a single core or multi-core
processor (e.g., AMD Phenom II X2, Intel Core Duo, AMD Phenom II
X4, Intel Core i5, Intel Core i7 Extreme Edition 980X, or
Intel Xeon E7-2820).
[0046] An I/O mechanism may include a video display unit (e.g., a
liquid crystal display (LCD) or a cathode ray tube (CRT)), an
alphanumeric input device (e.g., a keyboard), a cursor control
device (e.g., a mouse), a disk drive unit, a signal generation
device (e.g., a speaker), an accelerometer, a microphone, a
cellular radio frequency antenna, and a network interface device
(e.g., a network interface card (NIC), Wi-Fi card, cellular modem,
data jack, Ethernet port, modem jack, HDMI port, mini-HDMI port, USB
port), touchscreen (e.g., CRT, LCD, LED, AMOLED, Super AMOLED),
pointing device, trackpad, light (e.g., LED), light/image
projection device, or a combination thereof.
[0047] Memory according to the invention refers to a non-transitory
memory which is provided by one or more tangible devices which
preferably include one or more machine-readable medium on which is
stored one or more sets of instructions (e.g., software) embodying
any one or more of the methodologies or functions described herein.
The software may also reside, completely or at least partially,
within the main memory, the processor, or both during execution
thereof by a computer within the system, the main memory and the
processor also
constituting machine-readable media. The software may further be
transmitted or received over a network via the network interface
device.
[0048] While the machine-readable medium can in an exemplary
embodiment be a single medium, the term "machine-readable medium"
should be taken to include a single medium or multiple media (e.g.,
a centralized or distributed database, and/or associated caches and
servers) that store the one or more sets of instructions. The term
"machine-readable medium" shall also be taken to include any medium
that is capable of storing, encoding or carrying a set of
instructions for execution by the machine and that cause the
machine to perform any one or more of the methodologies of the
present invention. Memory may be, for example, one or more of a
hard disk drive, solid state drive (SSD), an optical disc, flash
memory, zip disk, tape drive, "cloud" storage location, or a
combination thereof. In certain embodiments, a device of the
invention includes a tangible, non-transitory computer readable
medium for memory. Exemplary devices for use as memory include
semiconductor memory devices (e.g., EPROM, EEPROM, solid state
drive (SSD), and flash memory devices such as SD, micro SD, SDXC,
SDIO, and SDHC cards); magnetic disks (e.g., internal hard disks or
removable disks); and optical disks (e.g., CD and DVD disks).
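The arrangement described in paragraphs [0047] and [0048], in which a set of instructions stored on a tangible machine-readable medium is loaded into main memory and executed by a processor, can be illustrated with a minimal sketch. This is purely illustrative and not part of the claimed subject matter; the file path and the toy "instructions" are hypothetical, and the on-disk file simply stands in for any machine-readable medium.

```python
import os
import tempfile

# Hypothetical "set of instructions" (software) to be stored on a medium.
source = "result = sum(range(10))\n"

# Store the instructions on a tangible medium (here, a temporary file on disk).
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(source)
    path = f.name

# Load the instructions from the medium back into main memory.
with open(path) as f:
    loaded = f.read()

# Execute the loaded instructions on the processor.
namespace = {}
exec(loaded, namespace)
print(namespace["result"])  # -> 45

os.remove(path)  # release the medium
```

The same pattern applies regardless of whether the medium is a local disk, removable storage, or a "cloud" storage location: the instructions are persisted, read into memory, and executed.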
[0049] Furthermore, numerous references have been made to patents
and printed publications throughout this specification. Each of the
above-cited references and printed publications is individually
incorporated herein by reference in its entirety.
[0050] In closing, it is to be understood that the embodiments of
the invention disclosed herein are illustrative of the principles
of the present invention. Other modifications that may be employed
are within the scope of the invention. Thus, by way of example, but
not of limitation, alternative configurations of the present
invention may be utilized in accordance with the teachings herein.
Accordingly, the present invention is not limited to that precisely
as shown and described.
* * * * *