U.S. patent application number 15/817117 was filed with the patent office on 2018-05-24 for systems for augmented reality visual aids and tools.
The applicant listed for this patent is Eyedaptic, LLC. Invention is credited to Jay E. Cormier, Brian Kim, David Watola.
Application Number: 20180144554 (Appl. No. 15/817117)
Family ID: 62147749
Filed Date: 2018-05-24

United States Patent Application 20180144554
Kind Code: A1
Watola; David; et al.
May 24, 2018
SYSTEMS FOR AUGMENTED REALITY VISUAL AIDS AND TOOLS
Abstract
Adaptive Control Driven System/ACDS 99 supports visual enhancement and mitigation of visual challenges, with basic image modification algorithms and any known hardware from contact lenses to IOLs to AR hardware glasses. It enables users to enhance vision with a user interface based on a series of adjustments that are applied to move, modify, or reshape image sets and components, taking full advantage of the remaining useful retinal area and thus addressing aspects of visual challenges heretofore inaccessible, by devices which learn the needed adjustments.
Inventors: Watola; David (Irvine, CA); Cormier; Jay E. (Laguna Niguel, CA); Kim; Brian (San Clemente, CA)
Applicant: Eyedaptic, LLC (Laguna Niguel, CA, US)
Family ID: 62147749
Appl. No.: 15/817117
Filed: November 17, 2017
Related U.S. Patent Documents

Application Number    Filing Date     Patent Number
62424343              Nov 18, 2016    --
62579798              Oct 31, 2017    --
Current U.S. Class: 1/1
Current CPC Class: G02B 2027/0178 20130101; G06F 3/013 20130101; G09B 21/008 20130101; G06F 3/167 20130101; G02B 2027/0138 20130101; G02B 2027/0141 20130101; G09B 21/009 20130101; G06T 5/002 20130101; G06T 19/006 20130101; G02B 27/0172 20130101; G06F 3/012 20130101; G06F 3/011 20130101; G02B 2027/011 20130101; G02B 2027/0134 20130101; G02B 2027/014 20130101
International Class: G06T 19/00 20060101 G06T019/00; G09B 21/00 20060101 G09B021/00; G06F 3/01 20060101 G06F003/01; G06F 3/16 20060101 G06F003/16; G06T 5/00 20060101 G06T005/00; G02B 27/01 20060101 G02B027/01
Claims
1. In a system for dramatically changing display of the environment to a user, the improvements comprising, in combination: a means for direct user control of image modification; instantaneous visual feedback; and a structured software process guiding a user to address large-scale appearance before fine-tuning smaller details, to tailor the subject outputs for optimized visual enhancement addressing visual defects, whereby relatively intact peripheral vision can be used to provide data to improve central vision impacted by degradation of the macula.
2. The system of claim 1, further comprising use of eccentric viewing through patient adaptation to increase functionality such as reading, for example by viewing words in context.
3. The system of claim 2, whereby a user's field of view (FOV) is not decreased by overmagnification, but rather is managed automatically by use of Augmented Reality (AR) methodologies.
4. The system of claim 3, wherein the continuous-space model for the camera's inputs and displayed FOVs is governed by a class of radial mapping functions with simple parameterizations having limited degrees of freedom, lending them to intuitive adjustment by untrained users.
5. The system of claim 4, further comprising fixation training, for example gamification accomplished by following a fixation target around a display screen in conjunction with at least one of a hand-held pointer and voice-activated controls.
6. The system of claim 5, further comprising, in combination: grid-based mapping functions; composability and scalability of warps; and visual updates in real time with a temporally independent ability to adjust and fine-tune the subject models.
7. The system of claim 6, disposed within a device, lens, IOL, two- or three-dimensional film, sheet, modular assembly of integrated components, or any type of internal or external framed glasses skeleton, infrastructure, housing or frame, being wearable and non-invasive to a user.
8. An adaptive control driven system/ACDS for visual enhancement and correction useful for addressing ocular disease states, which comprises, in combination: software using at least one feature programmed to simulate improved functional vision for a user, from a matrix selected from the group consisting of: hybrid magnification & warping; FOV dependent on head tracking; word shifting with "target lines"; central radial warping; interactive on-the-fly FOV mapping; dynamic zoom; OCR & font change adaptation; distortion grid adjustment; scotoma interactive adjustment; and adaptive peripheral vision training.
9. The adaptive control driven system/ACDS for visual enhancement and correction of claim 8, useful for identifying, diagnosing, addressing, or otherwise mitigating ocular disease states, comprising hardware which further comprises at least the following features and their functional equivalents: at least a wearable machine or manufacture of matter in the state of the art effective for managing: one-button wireless update; stabilization & targeting training; targeting lines & crosshairs for eye fixation & tracking; interactive voice recognition and control; reading & text recognition mode; voice memo; and mode shift transitions.
10. An adaptive control driven system/ACDS for visual enhancement and correction useful for addressing ocular disease states, which comprises in combination: at least a set of hardware capable of implementing user-driven adjustments, driven by any subject software described herein, to effectively manage: hybrid magnification & warping; FOV dependent on head tracking; word shifting with "target lines"; central radial warping; interactive on-the-fly FOV mapping; dynamic zoom; OCR & font change adaptation; distortion grid adjustment; scotoma interactive adjustment; and adaptive peripheral vision training; in combination, in whole or in part, with: one-button wireless update; stabilization & targeting training; training lines & crosshairs for eye fixation & tracking; interactive voice recognition mode; voice memo; and mode shift transitions.
11. The adaptive control driven system/ACDS for visual enhancement defined in claim 10, further comprising, in combination: on-board batteries; Bluetooth/Wi-Fi connection; and charging and data ports.
12. The adaptive control driven system/ACDS for visual enhancement of claim 11, further comprising, in combination: on-board dual stereoscopic see-through displays and an autofocus camera.
13. The adaptive control driven system/ACDS for visual enhancement of claim 12, further comprising, in combination: on-board processing and accelerometer, gyroscope and magnetometer chips.
14. The adaptive control driven system/ACDS for visual enhancement of claim 13, manifested within and graphically user-interfaced through basic setup mode displays and training mode displays; wherein user registration, visual field calibration, field of view definition, contrast configuration, indicator configuration and control registration function in tandem.
15. The adaptive control driven system/ACDS for visual enhancement of claim 14, further comprising training mode displays.
16. The adaptive control driven system/ACDS for visual enhancement of claim 15, further comprising software updates.
17. An adaptive control driven system/ACDS for visual enhancement
further comprising processes for driving the ACDS for adaptive
peripheral vision training.
18. The adaptive control driven system/ACDS for visual enhancement
of claim 17, further comprising processes for driving the ACDS for
adaptive Eccentric viewing Training.
19. The adaptive control driven system/ACDS for visual enhancement of claim 18, further comprising pupil tracking with customizable offset for eccentric viewing.
20. The adaptive control driven system/ACDS for visual enhancement
of claim 19, further comprising means for enabling users to
experience gamification, namely following fixation targets around a screen for training.
21. The adaptive control driven system/ACDS for visual enhancement
of claim 20, further comprising targeting lines overlaid on reality
for fixation.
22. The adaptive control driven system/ACDS for visual enhancement
of claim 21, further comprising guided fixation across a page or landscape with head tracking.
23. The adaptive control driven system/ACDS for visual enhancement
of claim 22, further comprising guided fixation with words moving
across screen at fixed rates.
24. The adaptive control driven system/ACDS for visual enhancement
of claim 23, further comprising guided fixation with words moving
at variable rates triggered by user.
25. The adaptive control driven system/ACDS for visual enhancement
of claim 24, further comprising guided training & controlling
eye movements with tracking lines.
26. The adaptive control driven system/ACDS for visual enhancement
of claim 25, further comprising look ahead preview to piece
together words for increased reading speed.
27. The adaptive control driven system/ACDS for visual enhancement of claim 26, further comprising distortion training to improve fixation.
Description
CROSS REFERENCE TO PRIORITY APPLICATIONS
[0001] Expressly incorporated by reference, the present disclosures relate to U.S. Provisional Patent Application Ser. No. 62/424,343, filed Nov. 18, 2016 and assigned to EYEDAPTIC, LLC. All domestic and foreign priority reserved and claimed from said USSN remains the property of said assignee.
FIELD
[0002] The fields of vision augmentation, automation of the same, and specialized interfaces between users and such tools, including but not limited to artificial intelligence, particularly for visually challenged users of certain types, were a launch point for the instant systems, which now encompass improved systems for augmented reality visual aids and tools.
BACKGROUND OF THE DISCLOSURES
[0003] A modicum of background stitches together the various aspects of what the instant inventions offer relative to several divergent attempts to merge optical, visual and cognitive elements in systems that create, correct and project images for users.
[0004] Augmented Reality (AR) eyewear implementations fall cleanly
into two disjoint categories, video see-through (VST) and optical
see-through (OST). Apparatus for VST AR closely resembles Virtual
Reality (VR) gear, where the wearer's eyes are fully enclosed so
that only content directly shown on the embedded display remains
visible. VR systems maintain a fully-synthetic three-dimensional
environment that must be continuously updated and rendered at
tremendous computational cost. In contrast, VST AR instead presents
imagery based on the real-time video feed from an
appropriately-mounted camera (or cameras) directed along the user's
eyeline; hence the data and problem domain are fundamentally
two-dimensional. VST AR provides absolute control over the final
appearance of visual stimulus, and facilitates registration and
synchronization of captured video with any synthetic augmentations.
Very wide fields-of-view (FOV) approximating natural human limits
are also achievable at low cost.
[0005] OST AR eyewear has a direct optical path allowing light from
the scene to form a natural image on the retina. This natural image
is essentially the same one that would be formed without AR
glasses. A camera is used to capture the scene for automated
analysis, but its image does not need to be shown to the user.
Instead, computed annotations or drawings from an internal display
are superimposed onto the natural retinal image by (e.g.) direct
laser projection or a half-silvered mirror for optical
combining.
[0006] The primary task of visual-assistance eyewear for low-vision
sufferers does not match the most common use model for AR (whether
VST or OST), which involves superimposing annotations or drawings
on a background image that is otherwise faithful to the reality
seen by the unaided eye. Instead, assistive devices need to
dramatically change how the environment is displayed in order to
compensate for defects in the user's vision. Processing may include
contrast enhancement and color mapping, but invariably incorporates
increased magnification to counteract deficient visual acuity.
Existing devices for low-vision are magnification-centric, and
hence operate in the VST regime with VST hardware.
[0007] Tailoring the central visual field to suit the user and
current task leverages a hallmark capability of the VST
paradigm--absolute control over the finest details of the retinal
image--to provide flexible customization and utility where it is
most needed. Even though the underlying platform is fundamentally
OST, careful blending restores a naturally wide field-of-view for a
seamless user experience despite the narrow active display
region.
[0008] There exists a longstanding need to merge the goals of
visual-assistance eyewear for low-vision sufferers with select
benefits of the AR world and models emerging from the same--which
did not exist, it is respectfully proposed, in advance of the
instant teachings, thus making them eligible for Letters Patent
under the Paris Convention and National and International Laws.
OBJECTS AND SUMMARY OF THE INVENTION
[0009] The FOV model from AR, in light of the needs of visually challenged users, then becomes a template for the changes needed for re-mapping and, in many cases, the required warping of subject images, as known to those of skill in the art. Like the adjustments used to create the model, modifications to parameters that control warping are also interactively adjusted by the user. In addition to direct user control of the image modification coupled with instantaneous visual feedback, the software imposes a structured process guiding the user to address large-scale appearance before fine-tuning small details. This combination allows the user to tailor the algorithm precisely to his or her affected vision for optimal visual enhancement.
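By way of illustration only, and not as the patented algorithm itself, the following is a minimal sketch of the kind of user-adjustable radial remapping described above, assuming a simple two-parameter model (inner radius and magnification) applied with OpenCV's remap; the function names and numeric defaults are assumptions chosen for clarity.

```python
# Illustrative sketch: a two-parameter radial warp that magnifies the central
# tier and blends back to the unmagnified image toward the edge of the field.
import numpy as np
import cv2

def tiered_radial_map(r, inner_radius, magnification):
    """Map each normalized display radius r to a source radius.

    Inside inner_radius the source radius is r/magnification (zoomed in);
    beyond it, the mapping blends linearly back to the identity at r = 1,
    preserving peripheral context instead of cropping it away.
    """
    zoomed = r / magnification
    t = np.clip((r - inner_radius) / max(1.0 - inner_radius, 1e-6), 0.0, 1.0)
    return zoomed * (1.0 - t) + r * t

def apply_radial_warp(frame, inner_radius=0.35, magnification=2.0):
    h, w = frame.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    scale = min(cx, cy)                      # normalize radii to roughly [0, 1.4]
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    dx, dy = (x - cx) / scale, (y - cy) / scale
    r = np.sqrt(dx * dx + dy * dy) + 1e-9
    r_src = tiered_radial_map(r, inner_radius, magnification)
    map_x = (cx + dx / r * r_src * scale).astype(np.float32)
    map_y = (cy + dy / r * r_src * scale).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
```

Because only two intuitive parameters drive the warp, each can be adjusted interactively with immediate visual feedback, consistent with the coarse-to-fine process described above.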
[0010] For people with retinal diseases, adapting to loss of vision becomes a way of life. This impact can affect their lives in many ways, including loss of the ability to read, loss of income, loss of mobility and an overall degraded quality of life. However, with prevalent retinal diseases such as AMD (Age-related Macular Degeneration), not all of the vision is lost; the peripheral vision remains intact, as only the central vision is impacted by the degradation of the macula. Given that the peripheral vision remains intact, it is possible to take advantage of eccentric viewing and, through patient adaptation, to increase functionality such as reading. Another factor in increasing reading ability for those with reduced vision is the ability to view words in context as opposed to in isolation. Magnification is often used as a simple visual aid with some success. However, with increased magnification comes decreased FOV (Field of View) and therefore the inability to see other words or objects around the word or object of interest. The capability to guide the training for eccentric viewing, and for eye movement and fixation training, is important to achieving improvement in functionality such as reading. The approaches outlined below describe novel ways to use augmented reality techniques to both automate and improve the training.
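The magnification/FOV tradeoff noted above follows directly from geometry: uniform digital zoom divides the visible scene field roughly by the magnification factor. A tiny worked example follows; the 40 degree display FOV is an assumed figure for illustration, not a device specification.

```python
# Assumed numbers, illustration only: uniform zoom shrinks the visible
# scene field of view in proportion to the magnification.
display_fov_deg = 40.0                       # hypothetical display FOV
for magnification in (1.0, 2.0, 4.0):
    scene_fov_deg = display_fov_deg / magnification
    print(f"{magnification:.0f}x zoom -> {scene_fov_deg:.0f} deg of scene visible")
```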
[0011] Many of the instant tools evolved in order to help users with central vision deficiencies. It is important to train and support their ability to fixate on a target. Since central vision is normally used for this, this is an important step in helping users control their ability to focus on a target, as groundwork for more training and adaptation functionality. This fixation training can be accomplished through gamification built into the software algorithms, and is utilized periodically for increased fixation training and improved adaptation. The gamification can be accomplished by following fixation targets around the display screen; in conjunction with a hand-held pointer, the user can select or click on the target during timed or untimed exercises. Furthermore, this can be accomplished through voice-activated controls as a substitute or adjunct to a hand-held pointer.
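A minimal sketch of one such gamified fixation round follows; the display and pointer interfaces (show_target, clicked_at) are hypothetical placeholders standing in for the AR display and the hand-held pointer or a voice "select" command, not the product API.

```python
# Hypothetical I/O objects; the scoring loop mirrors the timed/untimed
# target-following exercise described above.
import random
import time

TARGETS = [(0.2, 0.3), (0.8, 0.5), (0.5, 0.8)]   # normalized screen positions

def run_fixation_round(display, pointer, timed=True, limit_s=5.0):
    score = 0
    for pos in random.sample(TARGETS, len(TARGETS)):
        display.show_target(pos)                  # draw the fixation target
        start = time.monotonic()
        while True:
            if pointer.clicked_at(pos):           # user selects the target
                score += 1
                break
            if timed and time.monotonic() - start > limit_s:
                break                             # missed this target in time
            time.sleep(0.01)
    return score
```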
[0012] To aid the user in targeting and fixation, certain guide lines can be overlaid on reality or on the incoming image to help guide the user's eye movements along the optimal path. These guidelines can be a plurality of constructs such as, but not limited to, crosshair targets, bullseye targets, or linear guidelines such as singular or parallel dotted lines a fixed or variable distance apart, or a dotted or solid box of varying colors. This enables the user to increase their training and adaptation for eye movement control by following the tracking lines or targets as their eyes move across a scene in the case of a landscape, picture or video monitor, or across a page in the case of reading text.
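As a hedged illustration, the guide constructs listed above (crosshair, bullseye, parallel dotted reading lines) could be drawn on each camera frame roughly as follows; the sizes, spacing and colors are arbitrary assumptions.

```python
# Sketch: overlaying the guide constructs on a BGR frame with OpenCV.
import cv2

def draw_guides(frame, style="crosshair", color=(0, 255, 0)):
    h, w = frame.shape[:2]
    cx, cy = w // 2, h // 2
    if style == "crosshair":
        cv2.line(frame, (cx - 40, cy), (cx + 40, cy), color, 2)
        cv2.line(frame, (cx, cy - 40), (cx, cy + 40), color, 2)
    elif style == "bullseye":
        for radius in (15, 30, 45):                # concentric target rings
            cv2.circle(frame, (cx, cy), radius, color, 2)
    elif style == "reading_lines":
        gap = 30                                   # fixed spacing; could vary
        for y in (cy - gap // 2, cy + gap // 2):
            for x in range(0, w, 20):              # dotted parallel guide lines
                cv2.line(frame, (x, y), (x + 10, y), color, 1)
    return frame
```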
[0013] To make the most of a user's remaining useful vision, methods for adaptive peripheral vision training can be employed. Training and encouraging the user to make the most of their eccentric viewing capabilities is important. As described, the user may naturally gravitate to their PRL (preferred retinal locus) to help optimize their eccentric viewing. However, this may not be the optimal location to maximize their ability to view images or text with their peripheral vision. Through use of skewing and warping of the images presented to the user, along with the targeting guidelines, the optimal place for the user to target their eccentric vision can be determined. Eccentric viewing training through reinforced learning can be encouraged by a series of exercises. The targeting described in fixation training can also be used for this training. With fixation targets on, the object, area, or word of interest can be incrementally tested by shifting locations to determine the best PRL for eccentric viewing.
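The incremental PRL search described above can be summarized as a small optimization loop; this sketch assumes a hypothetical present_trial callback that runs one eccentric-viewing exercise at a given offset and returns a performance score.

```python
# Candidate eccentric offsets in normalized screen units (assumed grid).
CANDIDATE_OFFSETS = [(dx, dy)
                     for dx in (-0.2, 0.0, 0.2)
                     for dy in (-0.2, 0.0, 0.2)]

def find_best_prl(present_trial):
    # Keep the offset at which the user performed best across trials.
    return max(CANDIDATE_OFFSETS, key=present_trial)
```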
[0014] Also, pupil tracking algorithms can be employed that not only have eye tracking capability but can also utilize a user-customized offset for improved eccentric viewing capability, whereby the eccentric viewing targets are offset to guide the user to focus on their optimal area for eccentric viewing.
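A one-function sketch of that customized offset: the tracked gaze point is shifted by a per-user PRL offset (calibrated during training) before targets or guides are placed. The tuple-based interface is an assumption for illustration.

```python
# Shift the tracked gaze point by the user's calibrated PRL offset so that
# targets land on the preferred retinal locus rather than the damaged fovea.
def offset_gaze(gaze_xy, prl_offset_xy):
    gx, gy = gaze_xy
    ox, oy = prl_offset_xy        # determined during training/calibration
    return (gx + ox, gy + oy)     # screen position for targets and guides
```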
[0015] Further improvements in visual adaptation are achieved through use of the hybrid distortion algorithms. With the layered distortion approach, objects or words on the outskirts of the image can receive a different distortion and provide a look-ahead preview to piece together words for increased reading speed. While the user is focused on the area of interest being manipulated, the words moving into the focus area can help to provide context, in order to interpolate and better understand what is coming, for faster comprehension and contextual understanding.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] Various preferred embodiments are described herein with
references to the drawings in which merely illustrative views are
offered for consideration, whereby:
[0017] FIG. 1A is a view of schematized example of external framed
glasses typical for housing features of the present invention;
[0018] FIG. 1B is a view of example glasses typical for housing
features of the present invention;
[0019] FIG. 1C is a view of example glasses typical for housing
features of the present invention;
[0020] FIG. 1D is a view of example glasses typical for housing features of the present invention;
[0021] FIG. 2 is a flowchart showing integration of data management
arrangements according to embodiments of the present invention;
[0022] FIG. 3 is a flowchart illustrating interrelationship of
various elements of the features of the present invention;
[0023] FIG. 4A is a flowchart showing camera and image function
software;
[0024] FIG. 4B is a flowchart showing higher order function
software;
[0025] FIG. 4C is a flowchart showing higher order function
software;
[0026] FIG. 5A is a schematic and flow chart showing user interface
improvements;
[0027] FIG. 5B is a schematic and flow chart showing user interface
improvements; and
[0028] FIG. 5C is a schematic and flow chart showing user interface
improvements.
DETAILED DESCRIPTION OF THE INVENTIONS AND EXAMPLES
[0029] As defined herein, "ACDS" comprises those objects of the present inventions embodying the defined characteristic functionality illustrated herein by way of schematic figures and exemplary descriptions, none of which is intended to be limiting of the scope of the instant teachings. By way of example, any other and further features of the present invention or desiderata offered for consideration hereto may be manifested, as known to artisans, in any known or developed contact lens, Intra Ocular Lens (IOL), thin or thick film having optical properties, GOOGLE® type of glass, or the like means for arraying, disposing and housing functional optical and visual enhancement elements.
[0030] As referenced, embodiments of the Interactive Augmented
Reality (AR) Visual Aid inventions described below were designed
and intended for users with visual impairments that impact field of
vision (FOV). Usages beyond this scope have evolved in real-time
and have been incorporated herein expressly by reference.
[0031] By way of example these disease states may take the form of
age-related macular degeneration, retinitis pigmentosa, diabetic
retinopathy, Stargardt's disease, and other diseases where damage
to part of the retina impairs vision. The invention described is
novel because it not only supplies algorithms to enhance vision,
but also provides simple but powerful controls and a structured
process that allows the user to adjust those algorithms.
[0032] Referring now to FIGS. 1-10, and in particular to FIGS. 1A-1D and 2, exemplary ACDS 99 is housed in a glasses frame model including both features and zones of placement which are interchangeable for processor 101, charging and data port 103, dual display 111, control buttons 106, accelerometer/gyroscope/magnetometer 112, Bluetooth/Wi-Fi 108, and autofocus camera 113, as known to those skilled in the art. For example, batteries 107, including the lithium-ion batteries shown in one figure, or any other known or developed versions shown in others of said figures, are contemplated as either an integral element of, or a supplement/attachment/appendix to, the instant teachings, the technical feature being that they function as a battery.
[0033] In sum, as shown in FIGS. 1A-1D, any basic hardware can be constructed from a non-invasive, wearable electronics-based AR eyeglass system employing any of a variety of integrated display technologies, including LCD, OLED, or direct retinal projection. Materials may also be substituted for the "glass" having electronic elements embedded within it, so that "glasses" may be understood to encompass, for example, sheets of lens- and camera-containing materials, IOLs, contact lenses and the like functional units. Likewise, electronic magnifiers may be used, such as the Ruby® brand of electronic magnifier.
[0034] A plurality of cameras, mounted on the glasses, continuously
monitors the view where the glasses are pointing. The AR system
also contains an integrated processor and memory storage (either
embedded in the glasses, or tethered by a cable) with embedded
software implementing real-time algorithms that modify the images
as they are captured by the camera(s). These modified, or
corrected, images are then continuously presented to the eyes of
the user via the integrated displays.
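Functionally, the two paragraphs above describe a continuous capture-process-display loop. A minimal sketch follows, assuming OpenCV capture and using a desktop window as a stand-in for the headset's integrated displays; the process_frame callback represents whatever warp and enhancement pipeline is configured.

```python
# Continuous capture -> modify -> display loop; placeholder I/O only.
import cv2

def run_visual_aid(process_frame, camera_index=0):
    cam = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            corrected = process_frame(frame)     # e.g., warp + contrast steps
            cv2.imshow("display", corrected)     # stand-in for headset display
            if cv2.waitKey(1) == 27:             # Esc quits in this sketch
                break
    finally:
        cam.release()
        cv2.destroyAllWindows()
```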
[0035] It is contemplated that the processes described above are implemented in a system configured to present an image to the user. The processes may be implemented in software, such as machine readable code or machine executable code that is stored on a memory and executed by a processor. Input signals or data are received by the unit from a user, cameras, detectors or any other device. Output is presented to the user in any manner, including a screen display or headset display. The processor and memory are part of the headset 99 shown in FIGS. 1A-1D or a separate component linked to the same. Electronic magnifiers, as discussed, are also able to be used.
[0036] Referring also to FIG. 2, a block diagram shows example or representative computing devices and associated elements that may be used to implement the methods and serve as the apparatus described herein. FIG. 2 shows an example of a generic computing
device 200A and a generic mobile computing device 250A, which may
be used with the techniques described here. Computing device 200A
is intended to represent various forms of digital computers, such
as laptops, desktops, workstations, personal digital assistants,
servers, blade servers, mainframes, and other appropriate
computers. Computing device 250A is intended to represent various
forms of mobile devices, such as personal digital assistants,
cellular telephones, smart phones, and other similar computing
devices. The components shown here, their connections and
relationships, and their functions, are meant to be exemplary only,
and are not meant to limit implementations of the inventions
described and/or claimed in this document.
[0037] The memory 204A stores information within the computing device 200A. In one implementation, the memory 204A is a volatile memory unit or units. In another implementation, the memory 204A is a non-volatile memory unit or units. The memory 204A may also be another form of computer-readable medium, such as a magnetic or optical disk.
[0038] The storage device 206A is capable of providing mass storage for the computing device 200A. In one implementation, the storage device 206A may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied
in an information carrier. The computer program product may also
contain instructions that, when executed, perform one or more
methods, such as those described above. The information carrier is
a computer- or machine-readable medium, such as the memory 204A,
the storage device 206A, or memory on processor 202A.
[0039] The high speed controller 208A manages bandwidth-intensive operations for the computing device 200A, while the low-speed controller 212A manages lower bandwidth-intensive operations. Artisans understand that ACDS 99 comprises any and all incorporated sensing, computing, and optical and visual data management tools, and/or state-of-the-art signal processing, coupling and communication means. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 208A is
coupled to memory 204A, display 216A (e.g., through a graphics
processor or accelerator), and to high-speed expansion ports 210A,
which may accept various expansion cards (not shown). In the
implementation, low-speed controller 212A is coupled to storage
device 206A and low-speed bus 214A. The low-speed bus 214A, which
may include various communication ports (e.g., USB, Bluetooth,
Ethernet, wireless Ethernet) may be coupled to one or more
input/output devices, such as a keyboard, a pointing device, a
scanner, or a networking device such as a switch or router, e.g.,
through a network adapter.
[0040] The computing device 200A may be implemented in a number of
different forms, as shown in the figure. For example, it may be
implemented as a standard server 220A, or multiple times in a group of such servers. It may also be implemented as part of a rack
server system 224A. In addition, it may be implemented in a
personal computer such as a laptop computer 222A. Alternatively,
components from computing device 200A may be combined with other
components in a mobile device (not shown), such as device 250A.
Each of such devices may contain one or more of computing device
200A, 250A, and an entire system may be made up of multiple
computing devices 200A, 250A communicating with each other.
[0041] Computing device 250A includes a processor 252A, memory
264A, an input/output device such as a display 254A, a
communication interface 266A, and a transceiver 268A, along with other
components. The device 250A may also be provided with a storage
device, such as a Microdrive or other device, to provide additional
storage. Each of the components 250A, 252A, 264A, 254A, 266A, and
268A, are interconnected using various buses, and several of the
components may be mounted on a common motherboard or in other
manners as appropriate.
[0042] The processor 252A can execute instructions within the
computing device 250A, including instructions stored in the memory
264A. The processor may be implemented as a chipset of chips that
include separate and multiple analog and digital processors. The
processor may provide, for example, for coordination of the other
components of the device 250A, such as control of user interfaces,
applications run by device 250A, and wireless communication by
device 250A.
[0043] Referring now to FIGS. 4A-4C and 5A-5C schematic flow-charts
show detailed operations inherent in subject software, as
implemented in ACDS 99, or any related IOL, contact lenses or
combinations thereof.
[0044] FIGS. 4A, 4B and 4C show how images continuously captured by the cameras are stored, manipulated and used with ACDS 99. FIG. 4B shows sequences of operations once control buttons 106 are actuated, including setup/training and update modes. FIG. 4C details user mode, and FIG. 5A integrates displays with functional steps and shows setup, training and update interplay.
[0045] Referring now to FIG. 5B, trainer-controlled modules and sub-modes are illustrated whereby users learn to regain functional vision in places impaired by their visual challenges. FIG. 5C completes a detailed overview of user interfacing, as known to those skilled in the art, with user registration, visual field calibration, FOV definition, contrast configuration, indicator configuration and control registration.
[0046] Processor 252A may communicate with a user through control
interface 258A and display interface 256A coupled to a display
254A. The display 254A may be, for example, a TFT LCD
(Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic
Light Emitting Diode) display, or other appropriate display
technology. The display interface 256A may comprise appropriate
circuitry for driving the display 254A to present graphical and
other information to a user. The control interface 258A may receive
commands from a user and convert them for submission to the
processor 252A. In addition, an external interface 262A may be
provided in communication with processor 252A, so as to enable near
area communication of device 250A with other devices. External
interface 262A may provide for example, for wired communication in
some implementations, or for wireless communication in other
implementations, and multiple interfaces may also be used.
[0047] The memory 264A stores information within the computing
device 250A. The memory 264A can be implemented as one or more of a
computer-readable medium or media, a volatile memory unit or units,
or a non-volatile memory unit or units. Expansion memory 274A may
also be provided and connected to device 250A through expansion
interface 272A, which may include, for example, a SIMM (Single In
Line Memory Module) card interface. Such expansion memory 274A may
provide extra storage space for device 250A, or may also store
applications or other information for device 250A. Specifically,
expansion memory 274A may include instructions to carry out or
supplement the processes described above, and may include secure
information also. Thus, for example, expansion memory 274A may be
provided as a security module for device 250A, and may be
programmed with instructions that permit secure use of device 250A.
In addition, secure applications may be provided via the SIMM
cards, along with additional information, such as placing
identifying information on the SIMM card in a non-hackable manner.
The memory may include, for example, flash memory and/or NVRAM
memory, as discussed below. In one implementation, a computer
program product is tangibly embodied in an information carrier. The
computer program product contains instructions that, when executed,
perform one or more methods, such as those described above. The
information carrier is a computer- or machine-readable medium, such
as the memory 264A, expansion memory 274A, or memory on processor
252A, that may be received, for example, over transceiver 268A or
external interface 262A.
[0048] Device 250A may communicate wirelessly through communication
interface 266A, which may include digital signal processing
circuitry where necessary. Communication interface 266A may provide
for communications under various modes or protocols, such as GSM
voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA,
CDMA2000, or GPRS, among others. Such communication may occur, for
example, through radio-frequency transceiver 268A. In addition,
short-range communication may occur, such as using a Bluetooth,
WI-FI, or other such transceiver (not shown). In addition, GPS
(Global Positioning System) receiver module 270A may provide
additional navigation- and location-related wireless data to device
250A, which may be used as appropriate by applications running on
device 250A.
[0049] Device 250A may also communicate audibly using audio codec
260A, which may receive spoken information from a user and convert
it to usable digital information. Audio codec 260A may likewise
generate audible sound for a user, such as through a speaker, e.g.,
in a handset of device 250A. Such sound may include sound from
voice telephone calls, may include recorded sound (e.g., voice
messages, music files, etc.) and may also include sound generated
by applications operating on device 250A.
[0050] The computing device 250A may be implemented in a number of
different forms, as shown in the figure. For example, it may be
implemented as part of ACDS 99 or any smart/cellular telephone
280A. It may also be implemented as part of a smart phone 282A,
personal digital assistant, a computer tablet, or other similar
mobile device.
[0051] Thus, various implementations of the system and techniques
described here can be realized in digital electronic circuitry,
integrated circuitry, specially designed ASICs (application
specific integrated circuits), computer hardware, firmware,
software, and/or combinations thereof. These various
implementations can include implementation in one or more computer
programs that are executable and/or interpretable on a programmable
system including at least one programmable processor, which may be
special or general purpose, coupled to receive data and
instructions from, and to transmit data and instructions to, a
storage system, at least one input device, and at least one output
device.
[0052] These computer programs (also known as programs, software,
software applications or code) include machine instructions for a
programmable processor, and can be implemented in a high-level
procedural and/or object-oriented programming language, and/or in
assembly/machine language. As used herein, the terms
"machine-readable medium" "computer-readable medium" refers to any
computer program product, apparatus and/or device (e.g., magnetic
discs, optical disks, memory, Programmable Logic Devices (PLDs))
used to provide machine instructions and/or data to a programmable
processor, including a machine-readable medium that receives
machine instructions as a machine-readable signal. The term
"machine-readable signal" refers to any signal used to provide
machine instructions and/or data to a programmable processor.
[0053] To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer; as mentioned, known or developed electronic magnifiers are also able to be used. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form, including acoustic, speech, or tactile input.
[0054] The systems and techniques described here can be implemented
in a computing system (e.g., computing device 200A and/or 250A)
that includes a back end component (e.g., as a data server), or
that includes a middleware component (e.g., an application server),
or that includes a front end component (e.g., a client computer
having a graphical user interface or a Web browser through which a
user can interact with an implementation of the systems and
techniques described here), or any combination of such back end,
middleware, or front end components. The components of the system
can be interconnected by any form or medium of digital data
communication (e.g., a communication network). Examples of
communication networks include a local area network ("LAN"), a wide
area network ("WAN") and the Internet.
[0055] The computing system can include clients and servers. A
client and server are generally remote from each other and
typically interact through a communication network. The
relationship of client and server arises by virtue of computer
programs running on the respective computers and having a
client-server relationship to each other.
[0056] In the example embodiment, computing devices 200A and 250A
are configured to receive and/or retrieve electronic documents from
various other computing devices connected to computing devices 200A
and 250A through a communication network, and store these
electronic documents within at least one of memory 204A, storage
device 206A, and memory 264A. Computing devices 200A and 250A are
further configured to manage and organize these electronic
documents within at least one of memory 204A, storage device 206A,
and memory 264A using the techniques described herein.
[0057] In addition, the logic flows depicted in the figures do not
require the particular order shown, or sequential order, to achieve
desirable results. Furthermore, other steps may be provided or
steps may be eliminated from the described flows, and other
components may be added to, or removed from, the described systems.
Accordingly, other embodiments are within the scope of the
following claims.
[0058] It will be appreciated that the above embodiments that have
been described in particular detail are merely example or possible
embodiments, and that there are many other combinations, additions,
or alternatives that may be included. For example, while gamification has been referred to throughout, other applications of the above embodiments include online or web-based applications or other cloud services. Similarly, for example, VMware's future-event failure-systems approach may be used in conjunction with the instant systems.
[0059] Unless specifically stated otherwise as apparent from the
above discussion, it is appreciated that throughout the
description, discussions utilizing terms such as "processing" or
"computing" or calculating" or "determining" or "identifying" or
"displaying" or "providing" or the like, refer to the action and
processes of a computer system, or similar electronic computing
device, that manipulates and transforms data represented as
physical (electronic) quantities within the computer system
memories or registers or other such information storage,
transmission or display devices.
[0060] Based on the foregoing specification, the above-discussed
embodiments of the invention may be implemented using computer
programming or engineering techniques including computer software,
firmware, hardware or any combination or subset thereof. Any such
resulting program, having computer-readable and/or
computer-executable instructions, may be embodied or provided
within ACDS or within one or more computer-readable media, thereby
making a computer program product, i.e., an article of manufacture,
according to the discussed embodiments of the invention. The
computer readable media may be, for instance, a fixed (hard) drive,
diskette, electronic magnifier with on-board memory, optical disk,
magnetic tape, semiconductor memory such as read-only memory (ROM)
or flash memory, etc., or any transmitting/receiving medium such as
the Internet or other communication network or link. The article of
manufacture containing the computer code may be made and/or used by
executing the instructions directly from one medium, by copying the
code from one medium to another medium, or by transmitting the code
over a network.
[0061] Referring now also to FIG. 3, another schematic is shown
which illustrates an example embodiment of ACDS 99 and/or a mobile
device 200B (used interchangeably herein). This is but one possible
device configuration, and as such it is contemplated that one of
ordinary skill in the art may differently configure the mobile
device. Many of the elements shown in FIG. 3 may be considered
optional and not required for every embodiment. In addition, the
configuration of the device may be any shape or design, may be
wearable, or separated into different elements and components. ACDS
99 and/or a device 200B may comprise any type of fixed or mobile
communication device that can be configured in such a way so as to
function as described below. The mobile device may comprise a PDA,
cellular telephone, smart phone, tablet PC, wireless electronic
pad, or any other computing device.
[0062] In this example embodiment, ACDS 99 and/or mobile device
200B is configured with an outer housing 204B that protects and
contains the components described below. Within the housing 204B is
a processor 208B and a first and second bus 212B1, 212B2
(collectively 212B). The processor 208B communicates over the buses
212B with the other components of the mobile device 200B. The
processor 208B may comprise any type of processor or controller
capable of performing as described herein. The processor 208B may
comprise a general purpose processor, ASIC, ARM, DSP, controller,
or any other type of processing device.
[0063] The processor 208B and other elements of ACDS 99 and/or a
mobile device 200B receive power from a battery 220B or other power
source. As discussed, it is contemplated that the use of superconductive materials for super batteries is incorporated or implemented in ACDS; see, for example, the LAMBORGHINI/MIT battery-type functional elements, which use materials that store and release energy with room-temperature, battery-like output and other technical benefits. An electrical interface 224B
provides one or more electrical ports to electrically interface
with the mobile device 200B, such as with a second electronic
device, computer, a medical device, or a power supply/charging
device. The interface 224B may comprise any type of electrical
interface or connector format.
[0064] One or more memories 210B are part of ACDS 99 and/or mobile
device 200B for storage of machine readable code for execution on
the processor 208B, and for storage of data, such as image data,
audio data, user data, medical data, location data, shock data, or
any other type of data. The memory may store the messaging
application (app). The memory may comprise RAM, ROM, flash memory,
optical memory, or micro-drive memory. The machine-readable code as
described herein is non-transitory.
[0065] As part of this embodiment, the processor 208B connects to a
user interface 216B. The user interface 216B may comprise any
system or device configured to accept user input to control the
mobile device. The user interface 216B may comprise one or more of
the following: keyboard, roller ball, buttons, wheels, pointer key,
touch pad, and touch screen. A touch screen controller 230B is also
provided which interfaces through the bus 212B and connects to a
display 228B.
[0066] The display comprises any type of display screen configured
to display visual information to the user. The screen may comprise
an LED, LCD, thin film transistor screen, OEL, CSTN (color super
twisted nematic), TFT (thin film transistor), TFD (thin film
diode), OLED (organic light-emitting diode), AMOLED display
(active-matrix organic light-emitting diode), capacitive touch
screen, resistive touch screen or any combination of these
technologies. The display 228B may further comprise a display
processor (not shown) or controller that interfaces with the
processor 208B. The touch screen controller 230B may comprise a
module configured to receive signals from a touch screen which is
overlaid on the display 228B. Messages may be entered on the touch
screen 230B, or the user interface 216B may include a keyboard or
other data entry device.
[0067] Also part of this exemplary mobile device is a speaker 234B
and microphone 238B. The speaker 234B and microphone 238B may be
controlled by the processor 208B and are configured to receive and
convert audio signals to electrical signals, in the case of the
microphone, based on processor control. Likewise, processor 208B
may activate the speaker 234B to generate audio signals. These
devices operate as is understood in the art and as such are not
described in detail herein. Expressly incorporated herein is the
system of U.S. Pat. No. 9,782,084, as mentioned above, artisans understanding that remote devices, sensors, and interactive communication tools are readily interfaced into, and data-friendly with, ACDS 99.
[0068] Also connected to one or more of the buses 212B is a first
wireless transceiver 240B and a second wireless transceiver 244B,
each of which connect to respective antenna 248B, 252B. The first
and second transceiver 240B, 244B are configured to receive
incoming signals from a remote transmitter and perform analog front
end processing on the signals to generate analog baseband signals.
The incoming signal may be further processed by conversion to a
digital format, such as by an analog to digital converter, for
subsequent processing by the processor 208B. Likewise, the first
and second transceiver 240B, 244B are configured to receive
outgoing signals from the processor 208B, or another component of
the mobile device 200B, and up-convert these signals from baseband
to RF frequency for transmission over the respective antenna 248B,
252B. Although shown with a first wireless transceiver 240B and a
second wireless transceiver 244B, it is contemplated that the
mobile device 200B may have only one such system or two or more
transceivers. For example, some devices are tri-band or quad-band
capable, or have Bluetooth and NFC communication capability.
[0069] It is contemplated that ACDS 99 and/or a mobile device, and
hence the first wireless transceiver 240B and a second wireless
transceiver 244B may be configured to operate according to any
presently existing or future developed wireless standard including,
but not limited to, Bluetooth, WI-FI such as IEEE 802.11 a,b,g,n,
wireless LAN, WMAN, broadband fixed access, WiMAX, any cellular
technology including CDMA, GSM, EDGE, 3G, 4G, 5G, TDMA, AMPS, FRS,
GMRS, citizen band radio, VHF, AM, FM, and wireless USB.
[0070] Also part of ACDS 99 and/or a mobile device are one or more systems connected to the second bus 212B, which also interfaces with
the processor 208B. These devices include a global positioning
system (GPS) module 260B with associated antenna 262B. The GPS
module 260B is capable of receiving and processing signals from
satellites or other transponders to generate location data
regarding the location, direction of travel, and speed of the GPS
module 260B. GPS is generally understood in the art and hence not
described in detail herein.
[0071] A gyro 264B connects to the bus 212B to generate and provide
orientation data regarding the orientation of the mobile device
204B. A compass 268B, such as a magnetometer, provides directional
information to the mobile device 204B. A shock detector 272B, which
may include an accelerometer, connects to the bus 212B to provide
information or data regarding shocks or forces experienced by the
mobile device. In one configuration, the shock detector 272B
generates and provides data to the processor 208B when the mobile
device experiences a shock or force greater than a predetermined
threshold. This may indicate a fall or accident; again, the device of U.S. Pat. No. 9,782,084 and its communication systems and methods apply herein.
[0072] One or more cameras (still, video, or both) 276B are
provided to capture image data for storage in the memory 210B
and/or for possible transmission over a wireless or wired link or
for viewing at a later time. The processor 208B may process image
data to perform the steps described herein.
[0073] A flasher and/or flashlight 280B are provided and are
processor controllable. The flasher or flashlight 280B may serve as
a strobe or traditional flashlight, and may include an LED. A power
management module 284B interfaces with or monitors the battery 220B to manage power consumption, control battery charging, and provide supply voltages to the various devices, which may have different power requirements.
[0074] Eccentric Viewing & Peripheral Vision
Adaptation--Further modules are included to guide the trainer and
user in order for the user to more effectively and consistently
utilize their peripheral vision. This includes more quickly
adapting to the most effective eccentric viewing technique and
preferred retinal locus.
[0075] Contextual Viewing & Inclusion with Hybrid
Distortion--In order to better utilize the full FOV of the
augmented reality device and take advantage of the nonlinear
transformations that maximize the field of vision, a training module is utilized to help the user understand and adapt to this view.
[0076] Gamification of Training--The above modules can be gamified
to help the user practice and improve over time. Many such games
can be constructed to help improve fixation, eccentric viewing,
contextual viewing and adaptation.
[0077] Distortion Mapping--As described elsewhere, when distortion
is present in a user's field of view, a mapping can be
interactively fashioned to reflect the details of this distortion.
In training mode this map can be constructed with help of the
trainer/specialists for later use to undistort the user's
vision.
[0078] Training Mode also functions as a greatly-enhanced Setup
Mode, providing a trained facilitator with a number of graphical
user interface tools for tailoring the device to a specific user's
requirements. Whereas the standard Setup Mode provides a
low-complexity route to semi-custom settings suitable for a class
of users who make the same choices in Setup Mode, Training Mode
allows for detailed customization to the specific user.
[0079] Referring now to FIGS. 4B, 4C, 5A, 5B & 5C, for example,
the facilitator can narrow down the wide array of possible
parameter combinations presented by the device by choosing from a
set of tool palettes that contain archetypical configurations for
various conditions, e.g. Age-related Macular Degeneration,
Retinitis Pigmentosa, etc. This initial choice subsequently
determines the baseline scenarios selected for User Mode.
Additional adjustments made during Training mode can automatically
fine-tune these for the user. The initial choice also restricts the
suggested processing options to help the trainer test and evaluate
their utility for the specific user, but the facilitator always has
the option of incorporating any feature combination into a user's
configuration.
Update Mode
[0080] When update mode is entered from Setup Mode, the operator
can initiate a request to check for available software updates or
patches. The device will attempt to satisfy this request by
connecting to a remote server via an available wireless or tethered
interface. If the device is not up-to-date, the user will be asked
to confirm a desire to perform the update. Once confirmed, the
update will occur automatically.
[0081] Interrupted updates that are recoverable (e.g. due to loss
of connection) will prompt the operator for a decision about
continuing or returning to Setup Mode. Unrecoverable interruptions
(e.g. loss of power) will require a restart into User Mode, and
will be accompanied by a warning message.
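The update flow of paragraphs [0080] and [0081] can be summarized in a short sketch; the server and ui objects and all of their methods are hypothetical assumptions for illustration, not the device's actual update API.

```python
# Check for an update, confirm with the operator, then apply; recoverable
# interruptions prompt for a decision rather than failing silently.
def run_update_mode(server, ui):
    try:
        update = server.check_for_update()       # wireless or tethered link
        if update is None:
            ui.notify("Software is up to date")
            return "setup"
        if not ui.confirm(f"Install update {update.version}?"):
            return "setup"
        server.download_and_apply(update)        # proceeds automatically
        return "setup"
    except ConnectionError:
        # Recoverable interruption: let the operator continue or back out.
        return "retry" if ui.confirm("Connection lost. Try again?") else "setup"
```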
Overview & Modes
[0082] The User Interface for the Eyedaptic AR glasses, taking
advantage of Hybrid See Through technology (described in a separate provisional patent filing), offers several modes for either
the user or trainer. Many of these feature unique constructs and
methodologies to address usage with a variety of retinal diseases
such as macular degeneration. These combine advanced functions to
enable adaptations tailored to the user's particular affliction and
its progression along with a simple and easy to use control
interface for everyday usage in a variety of settings. The
supported modes are:
[0083] User Mode--The AR visual aid automatically boots into this mode upon power up.
Setup Mode--A specific sequence and/or combination of buttons (or a specific physical or wireless connection) places the visual aid into a setup mode for user or trainer control of the visual field and user controls. Setup Mode is also the gateway to Training and Update Modes.
Training Mode--When in Setup Mode, a specific sequence and/or combination of buttons (or a specific physical or wireless connection) places the visual aid into a training mode intended for use by a trained Low Vision Specialist to help the user adapt and to customize settings for the user.
[0084] Update Mode--This is a separate mode, also entered from
Setup Mode, that supports software updates and patches as well as
restoration of user settings to a previous configuration.
User Mode
[0085] For everyday operation by the end-user, the inherent range of
features and capabilities of the device is balanced against its
utility by providing a simple and responsive user interface that
provides instantaneous access to the most commonly-used
functionality.
[0086] This is accomplished by defining overarching use-case
templates, or scenarios. A single button B1 located on the eyewear
controls the current scenario: a short button-push cycles through
the available scenarios in a fixed order, while a prolonged (e.g.
one-second) button-press immediately selects a predefined "home"
scenario. Sufficiently dexterous users can associate other
sequences (e.g. appropriately-timed "double-clicks," etc.) with
specific scenarios, but otherwise multiple pushes provide cycling
behavior to avoid causing confusion and frustration for users with
less developed fine motor skills.
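A minimal sketch of the single-button scenario control described above: a short push cycles through the scenarios in fixed order, while a prolonged press jumps to the predefined home scenario. The one-second threshold comes from the text; the class structure and event plumbing are assumptions.

```python
# Single-button scenario selection with short-push cycling and a
# long-press shortcut to the "home" scenario.
SCENARIOS = ["plain", "wide", "narrow"]
HOME = "plain"
LONG_PRESS_S = 1.0

class ScenarioButton:
    def __init__(self):
        self.index = SCENARIOS.index(HOME)

    def on_press(self, pressed_duration_s):
        if pressed_duration_s >= LONG_PRESS_S:
            self.index = SCENARIOS.index(HOME)              # long press: home
        else:
            self.index = (self.index + 1) % len(SCENARIOS)  # short push: cycle
        return SCENARIOS[self.index]
```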
[0087] Each scenario corresponds to a common vision-related
use-case or set of similar use-cases. For example, the device ships
preconfigured with three use-case templates: [0088] A "plain"
scenario that does not apply any processing to the input image
(until further customized) preconfigured as the default "home"
scenario; [0089] A "wide" scenario--suitable (e.g.) for navigating
indoors, or watching television--where tiered radial warping is
configured with a medium-sized inner circle and moderate
magnification so that objects can be recognized at a distance while
still maintaining context; [0090] A "narrow" scenario--suitable for
reading and other close-in work--where tiered radial warping is
configured with a relatively large inner circle and high
magnification.
[0091] Additional scenarios can be added in Setup Mode. Because
User Mode operation always falls within the context of one of these
mutually-exclusive templates, a visual cue is needed to orient the
user; a color-coded border or partial-border indicates the
currently-selected scenario (e.g. no border for "plain," a green
border for "wide," and a red border for "narrow"). Colors can be
reassigned according to preference and for mnemonic value (e.g.
"red" to suggest the "reading" use-case), but clearly fewer
scenarios are inherently more manageable.
[0092] Transitions between scenarios are effected by gradually and
smoothly varying the parameters of the various relevant processing
effects (e.g. magnification, amount of contrast enhancement, radii
in tiered warps). Using a brief but obvious animation gives the
user a visual indication of the changes that are occurring. When
applicable, further screen annotations can further point out the
modifications, e.g. a visible circle that animates to show ac
hanging radius. When the user is quickly cycling past multiple
scenarios, such animations can be deferred until a final selection
is determined.
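One plausible realization of such a transition (a minimal sketch;
the frame count, linear easing, and parameter names are
assumptions) interpolates each numeric parameter over a brief
animation:

    def blend(old, new, t):
        # t ramps from 0.0 to 1.0 over the course of the animation
        return {k: (1.0 - t) * old[k] + t * new[k] for k in new}

    def animate_transition(old, new, frames=20, apply_fn=print):
        # deferred entirely while the user is still cycling scenarios
        for i in range(1, frames + 1):
            apply_fn(blend(old, new, i / frames))

    # e.g. widening the inner circle while lowering magnification:
    animate_transition({"magnification": 3.0, "inner_radius": 0.5},
                       {"magnification": 2.0, "inner_radius": 0.3})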
[0093] Every scenario is bipartite, comprising one segment used
for "sustained" viewing and another for "spotting." Each segment
possesses a full set of parameters that can be independently
chosen, but are most advantageously selected for complementary
usage (as demonstrated by the default preconfigured scenarios,
below). When the user activates a scenario, the sustained subset of
parameters is applied. A prolonged press of a second device-mounted
button B2 changes parameters to the corresponding spotting subset
(with accompanying animation); conversely, releasing the button
restores sustained settings once again. No border color change or
other indicator marks transitions between sustained and spotting
segments since they are always coincident with direct user
manipulation of button B2. Spotting mode is normally transient, but
a discrete command (separate button or voice command) can be used
to lock it in place indefinitely, until released by a complementary
command or B2--an additional unique display element or pattern will
unambiguously indicate the locked configuration.
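A minimal sketch of the bipartite selection logic follows (class
and method names are assumptions): holding B2 applies the spotting
subset, releasing it restores the sustained subset, and a discrete
lock command makes spotting persist without the button.

    class BipartiteScenario:
        def __init__(self, sustained, spotting):
            self.sustained, self.spotting = sustained, spotting
            self.locked = False   # spotting is normally transient

        def active_params(self, b2_held):
            # spotting applies while B2 is held or the lock is engaged
            return self.spotting if (b2_held or self.locked) else self.sustained

        def toggle_lock(self):
            # separate button or voice command; a unique display
            # element would indicate the locked configuration
            self.locked = not self.locked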
[0094] The table below illustrates one possible baseline
configuration for the three default scenarios, where sustained and
spotting characteristics are chosen to complement each other.
Consider the "narrow" scenario, primarily intended for reading or
similar close-up work using the sustained subset. When tailored to
the individual, this reading-optimized sustained configuration
might include color mapping or binarization to give high-contrast
two-level images. Occasionally, the user may desire to look up and
examine the environment or interact with another person without
experiencing visual processing artifacts best suited to text
processing; the spotting mode expedites reducing magnification and
restoring a more natural, less distorted field of view with no
color changes for this purpose. This can be further facilitated
with automatic adaptive shifting between spotting and sustained
modes, which is discussed later. The other sample scenarios
presented here exemplify use-cases where it is desirable to apply
more magnification temporarily.
TABLE-US-00001
Scenario   Primary Usage         Sustained Segment                    Spotting Segment
"Plain"    Outdoors              No processing                        Tiered radial warp with medium
           [Outdoor navigation]                                       inner radius and high zoom
                                                                      [Reading street signs]
"Wide"     Indoors,              Tiered radial warp with medium       Tiered radial warp with
           navigation, TV        inner radius and moderate            medium/large inner radius and
                                 magnification                        high magnification
                                 [Indoor navigation, TV]              [Reading text on TV]
"Narrow"   Reading,              Tiered radial warp with large        Tiered radial warp with medium
           close-up viewing      inner radius, high magnification,    inner radius and moderate
                                 strong contrast enhancement or       magnification, no color mapping
                                 binarization/color mapping           [looking around, turning page]
                                 [reading]
[0095] There are two keys to success with a strategy built upon a
small number of templates. The first is per-user customization.
During setup mode, a user customizes each of the scenarios to the
peculiarities of his vision, viz. the location and size of visual
defects, tiered radial warp configuration, preferred amount of
magnification, and desired enhancements to contrast or color. This
allows the fewest presses of a single button to reach the most
commonly desired personal configurations without delay.
[0096] The second key to success is supporting further modification
within the existing template. Two additional buttons provide this
capability.
Button B2, when pressed quickly instead of being held down, cycles
through or selects within a fixed set of changes. The specific set
of changes is fully customizable, even to the point of radically
altering the nature of the display. Once again, however, the device
remains more usable when easily-remembered or predicted choices are
made. For example, in a scenario defined for reading (e.g. the
"narrow" scenario above), it is sensible to have B2 cycle through a
small number of contrast-enhancement selections, e.g. low
enhancement, high enhancement, inverted video, binarization, and
inverted binarization. Setup Mode and Training Mode offer a menu of
predefined palettes for common tasks.
[0097] Button B3 works like B2, but has its own collection of
settings. For the reading-related scenario, B3 could be defined
(e.g.) to toggle reference guide lines on and off independently of
any other settings.
[0098] A fourth control facilitates further fine-tuning of
magnification and/or tiered radial warp internal radius. This
control is a touch-pad that allows continuous fine adjustment of
parameters. By sliding a finger back and forth on the touch-pad along
one axis (e.g. parallel to the side of the head), the magnification
can be changed. Sliding along the other axis (i.e. left-to-right)
manipulates the size of the magnified circle. To reduce complexity,
the outer circle radius automatically changes along with the inner
circle radius in a predefined non-linear fashion. For some users,
having two controls share a single touchpad is untenable; in that
case, Setup Mode can configure the touchpad to control only
magnification directly, and both radii will then be automatically
adjusted in conjunction.
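The touchpad logic might be sketched as follows (the gains, clamps,
and exponent in the radius coupling are assumptions): one axis
adjusts magnification, the other adjusts the inner radius, and the
outer radius follows the inner radius through a fixed non-linear
rule.

    def on_touch_drag(params, dx, dy, zoom_gain=0.01, radius_gain=0.002):
        # fore/aft axis (dy): continuous magnification adjustment
        params["magnification"] = max(1.0,
                                      params["magnification"] + dy * zoom_gain)
        # left/right axis (dx): inner-circle radius adjustment
        params["inner_radius"] = min(0.9, max(0.05,
                                     params["inner_radius"] + dx * radius_gain))
        # hypothetical coupling: outer radius tracks the inner radius
        # sub-linearly so the warp remains visually consistent
        params["outer_radius"] = min(1.0, params["inner_radius"] ** 0.75)
        return params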
[0099] The four-control (B1, B2, B3, and touchpad) user interface
described above exists to provide a minimum viable control
mechanism to all users. Its existence is crucial because the
presence of an external controller or online data service
connection cannot be guaranteed. Some users will be unable to
maintain external hardware reliably without misplacing it; others
will lack the dexterity or visual ability to use one. When
possible, additional buttons will be provided. Additionally, the
Eyedaptic visual aid affords a wide variety of control interfaces
for more sophisticated and determined users. These interfaces
provide fine-grained access to all features and capabilities, and
include voice access for spoken commands and responses, handheld
Bluetooth controllers, mobile phone or tablet-based applications,
and Wifi-based control by computers or other devices. Any of the
remote devices can incorporate virtual graphical user interfaces,
command-line or scripted interfaces, physical switches and other
physically-manipulated controls, or motion-sensing devices.
Supporting Wifi, Bluetooth, and other wireless communication
schemes also means that controls can originate at great
distances.
[0100] Because the device incorporates motion sensors, they are
also part of the user interface. Movements can trigger behavior
dependent on the scenario. For example, when operating in a
sustained scenario for reading or other close-up work, large head
motions can trigger a switch to the associated spotting
parameters, allowing the user to re-orient his view; once
large-scale motions cease, an automatic return to the sustained
reading configuration will occur after a suitable programmed delay.
As a more complex example, the amount of magnification can be
adjusted to be proportional to the amount of head motion. Automatic
image stabilization, which depends in part on these embedded motion
sensors, can also be associated with a specific subset of
scenarios.
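A sketch of the motion-based switch follows (the threshold, its
units, and the settling delay are assumptions): the
sustained/spotting choice is gated on gyroscope output.

    import time

    MOTION_THRESHOLD = 0.8   # hypothetical head-rotation rate threshold
    SETTLE_DELAY_S = 1.5     # programmed delay before returning to sustained

    class MotionSwitcher:
        def __init__(self):
            self._last_big_motion = float("-inf")

        def select_segment(self, gyro_magnitude):
            now = time.monotonic()
            if gyro_magnitude > MOTION_THRESHOLD:
                self._last_big_motion = now   # large head motion detected
            if now - self._last_big_motion < SETTLE_DELAY_S:
                return "spotting"             # re-orientation view
            return "sustained"                # settled: back to reading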
[0101] With the addition of more interfaces and controls comes even
greater flexibility. One important feature that requires more than
the minimal user interface is the "floating" scenario. As described
above, the touchpad and B2/B3 can be used to fine-tune a scenario.
However, such changes are ephemeral, and will be lost as soon as B1
is used to change the scenario. A permanent change to the default
settings for a scenario requires returning to Setup Mode. As an
expedient alternative, the current configuration can be instantly
stored into a designated "floating" scenario via a single button
press, voice command, or other well-defined control activation.
This allows the user to tailor a custom configuration on-the-fly,
creating a corresponding pair of sustained/spotting configurations
suited to a specific task without entering Setup Mode. Once
created, the floating scenario behaves just like any other scenario
except that it retains any changes made to it.
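The snapshot operation itself can be as simple as the following
sketch (the storage layout is an assumption): one command
deep-copies the live, fine-tuned parameters into the persistent
"floating" slot.

    import copy

    def snapshot_floating(scenarios, live_sustained, live_spotting):
        # a single button press or voice command stores the current state
        scenarios["floating"] = {"sustained": copy.deepcopy(live_sustained),
                                 "spotting": copy.deepcopy(live_spotting)}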
[0102] Another advanced feature that needs to be voluntarily
activated is "autozoom," or automatic magnification based on text
size. In a scenario that is intended to be a reading context
(either sustained or spotting), when this feature is activated the
images are scanned to look for text or text-like features in the
high-acuity portion of the wearer's field of view. A standard
Computer Vision/Optical Character Recognition technique such as the
well-known Stroke Width Transform can be used to locate these
features. When detected, the magnification level and/or
field-of-view is adjusted to increase small text to the preferred
text size for reading. The magnification is never permitted to
change too quickly, and is restored to a neutral setting when large
head movements are detected. Autozoom can operate fully
autonomously, or can be activated in a one-shot fashion by a
command.
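A single autozoom update step might look like the sketch below. The
text detector is abstracted away (detected_px stands in for the
glyph height reported by an SWT-style detector); the rate limit and
the neutral reset on large head movements follow the description
above, while the specific constants are assumptions.

    def autozoom_step(current_mag, detected_px, preferred_px,
                      big_head_motion, max_step=0.05):
        if big_head_motion:
            return 1.0              # restore a neutral setting
        if detected_px is None:
            return current_mag      # no text-like features found
        # scale so detected text reaches the preferred reading size
        target = current_mag * (preferred_px / detected_px)
        # magnification is never permitted to change too quickly
        step = max(-max_step, min(max_step, target - current_mag))
        return current_mag + step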
[0103] Note that relatively few distinct features converge in this
device to give it tremendous power in User Mode without producing
overwhelming complexity. A summary of the set of independent
features displayed in User Mode follows:
[0104] Selectable scenarios with user-defined contents, but
typically based on generic, widely-applicable and archetypical
templates that are tailored to user preferences
[0105] Single-button rapid cycling through available scenarios
[0106] Single-button expedited selection of a designated "home"
scenario
[0107] Animated transitions to avoid disorientation
[0108] Scenario indicator via a colored border or partial border
[0109] Two additional button controls cycle through
scenario-dependent options (user-tailored behavior)
[0110] "Floating" scenario that can be defined on-the-fly
[0111] Single button or other designated control (not part of the
minimal interface) to snapshot the current configuration as
"floating" sustained or spotting parameters
[0112] Floating scenario retains all changes made to it
[0113] Unique complementary sustained vs. spotting segments
comprising each bipartite scenario
[0114] Single-button switching between sustained and spotting
behaviors
[0115] Motion-based switching between sustained and spotting
behaviors (not necessarily symmetric)
[0116] Ability to lock or unlock spotting mode (which is normally
transient) via a discrete command
[0117] Unambiguous additional displayed element or pattern
indicates locked configuration
[0118] Animated transitions to avoid disorientation
[0119] Tiered radial warping (described in detail elsewhere) with
continuous field-of-view and magnification adjustment
[0120] Touchpad control of FOV and magnification parameters in
real-time
[0121] Magnification, inner, and outer radius parameters
individually controlled, OR
[0122] Magnification and inner radius individually controlled with
outer radius automatically adjusted in a consistent and visually
pleasing way, OR
[0123] Magnification individually controlled, with the two radii
automatically adjusted in a consistent and visually pleasing way
[0124] Automatic magnification adjustment based on head movement,
with increased FOV and decreased magnification accompanying larger
movements
[0125] Basic contrast adjustment (for global contrast)
[0126] Local contrast adjustment using unsharp masking to induce
prominent high-contrast halos ("Britext") at image transitions,
particularly text, in order to improve reading ability and speed (a
sketch follows this list); although this is a continuously-varying
parameter, it is anticipated that any given user will prefer to use
a small number of settings (e.g. "moderate" and "very high").
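The "Britext" effect named in the last item is, in essence, classic
unsharp masking with a strong gain. The following minimal sketch
(grayscale input assumed; the sigma and gain values are
illustrative, not the product's tuning) shows the principle:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def britext(image, sigma=2.0, gain=1.5):
        # subtract a blurred copy and amplify the residual; this creates
        # bright high-contrast halos at sharp transitions such as text
        img = image.astype(np.float32)
        blurred = gaussian_filter(img, sigma)
        halo = img + gain * (img - blurred)
        return np.clip(halo, 0, 255).astype(np.uint8)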
Setup Mode
[0127] This mode can be entered from User Mode, and can be utilized
by the user, the trainer, or someone helping the user to configure
their device. This mode supports initial setup and registration, as
well as later configuration changes that override existing User
Mode settings; it also constitutes a prerequisite gateway to the
other special-purpose modes, Training Mode and Update Mode, which
can only be entered from Setup Mode.
[0128] Functions provided here are deliberately limited to avoid
confusing the untrained user, but still provide a high degree of
utility and customizability. The determined and capable user can
perform further customizations by entering Training Mode.
Setup Mode functions are:
[0129] Registration--User data such as name, date, contact
information, etc. is entered. This does not typically change after
initial setup, but can be updated as desired.
[0130] Rough calibration of visual field--A short device-directed
exercise is used to obtain a very rough estimate of the size,
shape, and placement of a central scotoma or other visual field
defect for later use in automatically adapting the processing to
the user's visual characteristics.
[0131] Magnification and FOV--A short device-directed exercise is
used to help the user select desired amounts of magnification under
conditions simulating typical visual tasks, such as reading or
navigating. Field of view is automatically adjusted based on
requested magnification, with the exact formula depending on the
measured visual field; one possible linkage is sketched after this
list. Preferred text size is also configured here for use with
automatic zooming.
[0132] Contrast--A short device-directed exercise assists the user
in selecting preferred contrast enhancements under conditions
simulating typical visual tasks that often require improved
contrast (e.g. reading).
[0133] Mode indicator configuration--Allows the user to override
the automatic choices made for indicating mode changes in the
display; typically the user will adjust this to obtain
mnemonically-useful results, e.g. "red" for "reading."
[0134] Control interface--The user can enable or disable voice
control, external controllers, and other control-related options
(e.g. to limit the control scheme in User Mode for a low-dexterity
user).
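As a non-limiting sketch of the magnification/FOV linkage mentioned
under item [0131] (the formula, the 40-degree nominal display FOV,
and the 5-degree floor are all assumptions):

    def auto_fov(requested_mag, usable_field_deg, full_fov_deg=40.0):
        # shrink the displayed field as magnification grows, scaled by
        # the measured usable visual field
        fov = min(full_fov_deg, usable_field_deg) / requested_mag
        return max(fov, 5.0)   # hypothetical floor so context is never lost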
[0135] For the most part, the basic Setup Mode makes a large number
of complex processing decisions based on a small number of
interactive user decisions. The result is expected to be
satisfactory for a majority of users; those who desire further
customization must turn to Training Mode.
Training Mode
[0136] This mode is targeted at trainers, usually Low Vision
Specialists or Occupational Therapists who help users with their
low vision training. It makes the capabilities of Setup Mode
available to either the trainer or user, and layers on many novel
features. During Training Mode, the trainer has external control of
the device via a wirelessly-linked console or tablet. He or she can
monitor what the camera captures by using an external display, and
can also provide the wearer with alternate visual displays or
overlaid annotations that aid in the training process.
The trainer directs the flow of the training session, giving
limited control to the wearer only for the purpose of making
user-directed responses or adjustments under guidance. User-based
control of the state (mode) is limited to abandoning Training Mode
via the same control sequence that initiates Setup Mode; this
facility is provided mainly for the case where Training Mode has
been accidentally activated.
[0137] Features used both to calibrate the user's affliction and to
provide various training aspects for more effective use of the AR
visual aid include the following:
Initial setup (see Setup Mode)--Before entering Training Mode the
trainer will enter Setup Mode for initial setup of data entry,
registration and default user settings.
[0138] Clock face scotoma mapping--This module gives the ability to
establish a rough map of the user's scotoma or visual defect in
order to better understand their low vision affliction and needs.
The clock face methodology, which presents only the numbers
associated with a traditional clock face, is instantly recognizable
and relatable to most users. Once a rough mapping of a visual field
defect is established based on visibility of numbers at the
standard twelve positions on a clock faces of various sizes, those
same positions can be used to grade acuity further by varying the
brightness or size of the numbers, or using the well-known
"oriented-E" technique.
[0139] Eye movement control & fixation training--This module
guides the user through a training regime to better control eye
movement and fixation, optimizing the usefulness of the augmented
reality device. This includes displayed guide lines and targets to
help both the user and trainer. It may also include eye tracking so
that the trainer can better understand the user's particular
needs.
[0140] Eccentric Viewing & Peripheral Vision
Adaptation--Further modules are included to guide the trainer and
user so that the user can more effectively and consistently utilize
peripheral vision. This includes more quickly
adapting to the most effective eccentric viewing technique and
preferred retinal locus.
[0141] Contextual Viewing & Inclusion with Hybrid
Distortion--To better utilize the full FOV of the augmented reality
device and take advantage of the nonlinear transformations that
maximize the field of vision, this training module helps the user
understand and adapt to this view. It also supports training for
reading in the presence of the nonlinear transformation, both with
and without the presence of reference guidelines.
Gamification of Training--The above modules can be gamified to help
the user practice and improve over time. Many such games can be
constructed to help improve fixation, eccentric viewing, contextual
viewing and adaptation.
[0142] Distortion Mapping--As described elsewhere, when distortion
is present in a user's field of view, a mapping can be
interactively fashioned to reflect the details of this distortion.
In Training Mode this map can be constructed with the help of the
trainer/specialist for later use to undistort the user's
vision.
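Once constructed, applying such a map at display time is
straightforward; the sketch below uses a nearest-neighbour remap
(the per-pixel map format is an assumption): map_x and map_y give,
for each output pixel, the source pixel whose relocation
counteracts the user's perceived distortion.

    import numpy as np

    def apply_distortion_map(image, map_x, map_y):
        h, w = image.shape[:2]
        xs = np.clip(np.rint(map_x), 0, w - 1).astype(int)
        ys = np.clip(np.rint(map_y), 0, h - 1).astype(int)
        return image[ys, xs]   # remap every output pixel in one step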
[0143] Training Mode also functions as an Enhanced Setup Mode,
providing the trained facilitator with a number of graphical user
interface tools for tailoring the device to a specific user's
requirements. Whereas the standard Setup Mode provides a
low-complexity route to semi-custom settings suitable for an
equivalence-class of users who make the same choices in the basic
Setup Mode, Training Mode allows for detailed customization to the
specific user. Whenever none of the special training module
functions is engaged, the device is nominally providing this
Enhanced Setup capability. During this period, it displays a
distinctive "Training Mode" status indicator but responds to
standard user inputs as if in User Mode; however, the trainer can
also manipulate the configuration. In fact, the trainer is
permitted to enable, disable, or adjust any feature or mode at any
time, even during the execution of a training module. This freedom
allows issues to be addressed or experiments to be performed
without having to suspend the current activity.
[0144] For example, the facilitator can narrow down the wide array
of possible parameter combinations presented by the device by
choosing from a set of tool palettes that contain archetypical
configurations for various conditions, e.g. Age-related Macular
Degeneration, Retinitis Pigmentosa, etc. This initial choice
subsequently determines the baseline scenarios selected for User
Mode. Additional adjustments made during Training mode can
automatically fine-tune these for the user. The initial choice also
restricts the suggested processing options to help the trainer test
and evaluate their utility for the specific user, but the
facilitator always has the option of incorporating any feature
combination into a user's configuration.
Update Mode
[0145] When update mode is entered from Setup Mode, the operator
can initiate a request to check for available software updates or
patches. The device will attempt to satisfy this request by
connecting to a remote server via an available wireless or tethered
interface. If the device is not up-to-date, the user will be asked
to confirm a desire to perform the update. Once confirmed, the
update will occur automatically.
[0146] Interrupted updates that are recoverable (e.g. due to loss
of connection) will prompt the operator for a decision about
continuing or returning to Setup Mode. Unrecoverable interruptions
(e.g. loss of power) will require a restart into User Mode, and
will be accompanied by a warning message.
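The overall flow can be summarized in the following sketch
(function names are placeholders; a loss of power cannot be handled
in-process and instead triggers the warning-and-restart path
described above):

    def run_update(check_server, confirm, download, resume_prompt):
        update = check_server()        # None when already up-to-date
        if update is None or not confirm(update):
            return "setup_mode"
        try:
            download(update)           # automatic once confirmed
            return "updated"
        except ConnectionError:        # recoverable interruption
            return "resume" if resume_prompt() else "setup_mode"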
[0147] While several embodiments of the present disclosure have
been described and illustrated herein, those of ordinary skill in
the art will readily envision a variety of other means and/or
structures for performing the functions and/or obtaining the
results and/or one or more of the advantages described herein, and
each of such variations and/or modifications is deemed to be within
the scope of the present disclosure. More generally, those skilled
in the art will readily appreciate that all parameters, dimensions,
materials, and configurations described herein are meant to be
exemplary and that the actual parameters, dimensions, materials,
and/or configurations will depend upon the specific application or
applications for which the teachings of the present disclosure
is/are used.
[0148] Those skilled in the art will recognize, or be able to
ascertain using no more than routine experimentation, many
equivalents to the specific embodiments of the disclosure described
herein. It is, therefore, to be understood that the foregoing
embodiments are presented by way of example only and that, within
the scope of the appended claims and equivalents thereto, the
disclosure may be practiced otherwise than as specifically
described and claimed. The present disclosure is directed to each
individual feature, system, article, material, kit, and/or method
described herein. In addition, any combination of two or more such
features, systems, articles, materials, kits, and/or methods, if
such features, systems, articles, materials, kits, and/or methods
are not mutually inconsistent, is included within the scope of the
present disclosure.
[0149] All definitions, as defined and used herein, should be
understood to control over dictionary definitions, definitions in
documents incorporated by reference, and/or ordinary meanings of
the defined terms.
[0150] Unless otherwise indicated, all numbers expressing
quantities of ingredients, properties such as molecular weight,
reaction conditions, and so forth used in the specification and
claims are to be understood as being modified in all instances by
the term "about." Accordingly, unless indicated to the contrary,
the numerical parameters set forth in the specification and
attached claims are approximations that may vary depending upon the
desired properties sought to be obtained by the present invention.
At the very least, and not as an attempt to limit the application
of the doctrine of equivalents to the scope of the claims, each
numerical parameter should at least be construed in light of the
number of reported significant digits and by applying ordinary
rounding techniques. Notwithstanding that the numerical ranges and
parameters setting forth the broad scope of the invention are
approximations, the numerical values set forth in the specific
examples are reported as precisely as possible. Any numerical
value, however, inherently contains certain errors necessarily
resulting from the standard deviation found in their respective
testing measurements.
[0151] The terms "a," "an," "the" and similar referents used in the
context of describing the invention (especially in the context of
the following claims) are to be construed to cover both the
singular and the plural, unless otherwise indicated herein or
clearly contradicted by context. Recitation of ranges of values
herein is merely intended to serve as a shorthand method of
referring individually to each separate value falling within the
range. Unless otherwise indicated herein, each individual value is
incorporated into the specification as if it were individually
recited herein. All methods described herein can be performed in
any suitable order unless otherwise indicated herein or otherwise
clearly contradicted by context. The use of any and all examples,
or exemplary language (e.g., "such as") provided herein is intended
merely to better illuminate the invention and does not pose a
limitation on the scope of the invention otherwise claimed. No
language in the specification should be construed as indicating any
non-claimed element essential to the practice of the invention.
[0152] Groupings of alternative elements or embodiments of the
invention disclosed herein are not to be construed as limitations.
Each group member may be referred to and claimed individually or in
any combination with other members of the group or other elements
found herein. It is anticipated that one or more members of a group
may be included in, or deleted from, a group for reasons of
convenience and/or patentability. When any such inclusion or
deletion occurs, the specification is deemed to contain the group
as modified thus fulfilling the written description of all Markush
groups used in the appended claims.
[0153] Certain embodiments of this invention are described herein,
including the best mode known to the inventors for carrying out the
invention. Of course, variations on these described embodiments
will become apparent to those of ordinary skill in the art upon
reading the foregoing description. The inventors expect skilled
artisans to employ such variations as appropriate, and the
inventors intend for the invention to be practiced otherwise than
specifically described herein. Accordingly, this invention includes
all modifications and equivalents of the subject matter recited in
the claims appended hereto as permitted by applicable law.
Moreover, any combination of the above-described elements in all
possible variations thereof is encompassed by the invention unless
otherwise indicated herein or otherwise clearly contradicted by
context.
[0154] Specific embodiments disclosed herein may be further limited
in the claims using consisting of or consisting essentially of
language. When used in the claims, whether as filed or added per
amendment, the transition term "consisting of" excludes any
element, step, or ingredient not specified in the claims. The
transition term "consisting essentially of" limits the scope of a
claim to the specified materials or steps and those that do not
materially affect the basic and novel characteristic(s).
Embodiments of the invention so claimed are inherently or expressly
described and enabled herein.
[0155] As one skilled in the art would recognize as necessary or
best-suited for performance of the methods of the invention, a
computer system or machines of the invention include one or more
processors (e.g., a central processing unit (CPU) a graphics
processing unit (GPU) or both), a main memory and a static memory,
which communicate with each other via a bus.
[0156] A processor may be provided by one or more processors
including, for example, one or more of a single core or multi-core
processor (e.g., AMD Phenom II X2, Intel Core Duo, AMD Phenom II
X4, Intel Core i5, Intel Core i7 Extreme Edition 980X, or Intel
Xeon E7-2820).
[0157] An I/O mechanism may include a video display unit (e.g., a
liquid crystal display (LCD) or a cathode ray tube (CRT)), an
alphanumeric input device (e.g., a keyboard), a cursor control
device (e.g., a mouse), a disk drive unit, a signal generation
device (e.g., a speaker), an accelerometer, a microphone, a
cellular radio frequency antenna, and a network interface device
(e.g., a network interface card (NIC), Wi-Fi card, cellular modem,
data jack, Ethernet port, modem jack, HDMI port, mini-HDMI port,
USB port), touchscreen (e.g., CRT, LCD, LED, AMOLED, Super AMOLED),
pointing device, trackpad, light (e.g., LED), light/image
projection device, or a combination thereof.
[0158] Memory according to the invention refers to a non-transitory
memory which is provided by one or more tangible devices which
preferably include one or more machine-readable medium on which is
stored one or more sets of instructions (e.g., software) embodying
any one or more of the methodologies or functions described herein.
The software may also reside, completely or at least partially,
within the main memory, processor, or both during execution thereof
by a computer within the system, the main memory and the processor also
constituting machine-readable media. The software may further be
transmitted or received over a network via the network interface
device.
[0159] While the machine-readable medium can in an exemplary
embodiment be a single medium, the term "machine-readable medium"
should be taken to include a single medium or multiple media (e.g.,
a centralized or distributed database, and/or associated caches and
servers) that store the one or more sets of instructions. The term
"machine-readable medium" shall also be taken to include any medium
that is capable of storing, encoding or carrying a set of
instructions for execution by the machine and that cause the
machine to perform any one or more of the methodologies of the
present invention. Memory may be, for example, one or more of a
hard disk drive, solid state drive (SSD), an optical disc, flash
memory, zip disk, tape drive, "cloud" storage location, or a
combination thereof. In certain embodiments, a device of the
invention includes a tangible, non-transitory computer readable
medium for memory. Exemplary devices for use as memory include
semiconductor memory devices (e.g., EPROM, EEPROM, solid state
drives (SSD), and flash memory devices, e.g., SD, micro SD, SDXC,
SDIO, SDHC cards); magnetic disks (e.g., internal hard disks or
removable disks); and optical disks (e.g., CD and DVD disks).
[0160] Furthermore, numerous references have been made to patents
and printed publications throughout this specification. Each of the
above-cited references and printed publications is individually
incorporated herein by reference in its entirety.
[0161] In closing, it is to be understood that the embodiments of
the invention disclosed herein are illustrative of the principles
of the present invention. Other modifications that may be employed
are within the scope of the invention. Thus, by way of example, but
not of limitation, alternative configurations of the present
invention may be utilized in accordance with the teachings herein.
Accordingly, the present invention is not limited to that precisely
as shown and described.
* * * * *