U.S. patent application number 15/396,871 was published by the patent office on 2018-04-26 for a system for AHAH-based feature extraction of surface manifolds.
The applicant listed for this patent is KnowmTech, LLC. The invention is credited to Alex Nugent.
United States Patent Application 20180114143
Application Number: 15/396,871
Kind Code: A1
Family ID: 54069224
Inventor: Nugent; Alex
Publication Date: April 26, 2018
SYSTEM FOR AHAH-BASED FEATURE EXTRACTION OF SURFACE MANIFOLDS
Abstract
Methods and systems for feature extraction of LIDAR surface
manifolds. LIDAR point data with respect to one or more LIDAR
surface manifolds can be generated. An AHAH-based feature
extraction operation can be automatically performed on the point
data for compression and processing thereof. The results of the
AHAH-based feature extraction operation can be output as a
compressed binary label representative of the at least one surface
manifold rather than the point data to afford a high-degree of
compression for transmission or further processing thereof.
Additionally, one or more voxels of a LIDAR point cloud composed of
the point data can be scanned in order to recover the compressed
binary label, which represents prototypical surface patches with
respect to the LIDAR surface manifold(s).
Inventors: Nugent; Alex (Santa Fe, NM)
Applicant: KnowmTech, LLC; Albuquerque, NM, US
Family ID: 54069224
Appl. No.: 15/396,871
Filed: January 3, 2017
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number | Continued By
15/057,184 | Mar 1, 2016 | 9,589,238 | 15/396,871
13/613,700 | Sep 13, 2012 | 9,280,748 | 15/057,184
61/663,264 (provisional) | Jun 22, 2012 | |
Current U.S. Class: 1/1
Current CPC Class: G06N 3/063 20130101; G06N 5/02 20130101; G06N 20/00 20190101; G06K 9/6232 20130101; G06N 5/047 20130101; G06N 3/049 20130101; G06N 3/08 20130101
International Class: G06N 99/00 20060101 G06N099/00; G06N 3/08 20060101 G06N003/08; G06K 9/62 20060101 G06K009/62; G06N 5/02 20060101 G06N005/02; G06N 5/04 20060101 G06N005/04; G06N 3/063 20060101 G06N003/063
Claims
1.-20. (canceled)
21. A system for feature extraction of surface manifolds, said
system comprising: an optical remote sensor that measures a
distance to and/or other properties of a target by illuminating
said target with light, wherein said optical remote sensor provides
sensor data indicative of at least one surface manifold and wherein
point data is generated with respect to said at least one surface
manifold; and an AHAH (Anti-Hebbian and Hebbian) based mechanism
that performs an AHAH-based feature extraction operation on said
point data for compression and processing thereof.
22. The system of claim 21 wherein said optical remote sensor
comprises a LIDAR (Light Detection and Ranging) device.
23. The system of claim 21 further comprising scanning at least one
voxel of a point cloud composed of said point data in order to
recover said compressed binary label which represents prototypical
surface patches with respect to said at least one surface
manifold.
24. The system of claim 21 wherein said point data comprises LIDAR
point data and said surface manifolds comprises LIDAR surface
manifolds.
25. The system of claim 24 wherein said LIDAR point data is capable
of being represented as a voxel with at least one active grid
element and at least one inactive grid element, wherein a voxel
grid element is considered active if said voxel grid element
contains within a spatial boundary thereof at least one LIDAR
point.
26. The system of claim 24 wherein said output of said voxel
comprises a sparse binary activation vector including grid elements
that are active.
27. The system of claim 21 further comprising a plurality of
meta-stable switches, said AHAH based mechanism comprising said
plurality of meta-stable switches.
28. The system of claim 27 wherein each meta-stable switch among
said plurality of meta-stable switches comprises a self-organizing
unit.
29. The system of claim 27 wherein each meta-stable switch among
said plurality of meta-stable switches comprises a self-organizing
node.
30. The system of claim 27 wherein said AHAH based mechanism
comprises at least one memristor composed of said plurality of
meta-stable switches.
31. The system of claim 21 wherein said AHAH based mechanism
comprises at least one AHAH node comprising a plurality of synapses
and associated feedback circuitry.
32. The system of claim 31 wherein said associated feedback
circuitry acts on at least one of three electrode configurations
including a 1-2 electrode configuration, a 2-1 electrode
configuration, or a 2-2 electrode configuration.
33.-36. (canceled)
Description
CROSS-REFERENCE TO PATENT APPLICATION
[0001] This application is a continuation of U.S. patent
application Ser. No. 15/057,184, which was filed on Mar. 1, 2016,
the disclosure of which is incorporated herein by reference in its
entirety. U.S. patent application Ser. No. 15/057,184 is a
continuation of U.S. patent application Ser. No. 13/613,700, which
was filed on Sep. 13, 2012, and which claimed priority under 35
U.S.C. 119(e) to U.S. Provisional Patent Application Ser. No.
61/663,264, which was filed on Jun. 22, 2012, the disclosures of
which are also incorporated herein by reference in their
entireties. This patent application thus claims priority to the
Jun. 22, 2012 filing date of U.S. Provisional Patent Application
Ser. No. 61/663,264.
TECHNICAL FIELD
[0002] Embodiments are generally related to machine learning and
artificial intelligence. Embodiments additionally relate to feature
extraction methods and systems. Embodiments further relate to the
field of LIDAR (Light Detection And Ranging, also LADAR).
BACKGROUND
[0003] Machine learning can roughly be characterized as the process
of creating algorithms that can learn a behavior from examples. One
simple example is that of pattern classification. A series of input
patterns is given to the algorithm along with a desired output
(the label), and the algorithm learns how to classify the patterns
by producing the desired label for any given input pattern. Such a
method is called supervised learning, since a human operator must
provide the labels during the teaching phase. An example is the
kernel-based SVM algorithm. Alternately, unsupervised "clustering"
is a process of assigning labels to the input patterns without the
use of a human operator. Such unsupervised methods must usually
function through a statistical analysis of the input data, for
example, finding the eigenvectors of the covariance matrix. One
such example of unsupervised clustering is the suite of k-means
algorithms.
[0004] A few problems have continued to challenge the field of
machine learning. Few if any standard and accepted methods exist for
learning based on few patterns or exemplars. Without sufficient
examples, finding a solution that balances memorization with
generalization is often difficult. The difficulty is due to the
separation of a training and a testing stage, where the variables
that encode the algorithm's learning behavior are modified during
the learning stage and tested for accuracy and generalization
during the testing phase. Without sufficient examples during the
learning stage, it is difficult or impossible to determine the
appropriate variable configurations leading to this optimal point.
Theoretically, the mathematical technique of
support-vector maximization provides an optimal solution, should
there be sufficient training data to encompass the natural
statistics of the data and presuming the statistics do not change
over time (a problem called concept drift). The idea is that all
input patterns are projected into a high-dimensional space where
they are linearly separable.
[0005] A linear classifier can then be used to label the data in a
binary classification task. A linear classifier can be thought of
as a hyperplane in a high-dimensional space, where we call the
hyperplane the decision boundary. An input falling on one side of
the decision boundary results in a positive output, while all inputs
on the other side result in a negative output. The support vectors
are the input points closest to the decision boundary, and the
margin is their distance from it. The process of maximizing this
distance is the process of support-vector maximization. However,
without sufficient examples it is of little or no use, since
identifying the support vectors requires testing a number of input
patterns to find which ones are closest to the decision boundary.
Indeed, some thought may convince the reader that finding the point
of optimal generalization is not possible with only one example,
since by definition measuring generalization requires evaluation of
a number of exemplars.
[0006] Another problem facing the field of machine learning is
adaptation to non-stationary statistics, i.e., concept drift. The
problem occurs when the statistics of the underlying data change
over time. Any method that relies on a separation of training and
testing is doomed to failure, as whatever the algorithm has learned
quickly becomes incorrect as time moves forward. Methods for
continual real-time adaptation are clearly needed, but such methods
are often at odds with the training methods employed to find the
initial solution.
[0007] Another problem facing the field of machine learning is power
consumption. Finding statistical regularities in large quantities
of streaming information can be incredibly power intensive, as the
problem encounters combinatorial explosions. The complexity of the
task is echoed in biological nervous systems, which are essentially
communication networks that self-evolve to detect and act on
regularities present in the input data stream. It is estimated that
there are between 2 and 4 kilometers of wire in one cubic
millimeter of cortex. At 2500 cm.sup.2 total area and 2 mm
thickness, that is about 1.5 million kilometers of wire in the human
cortex, or enough wire to wrap around the earth 37 times.
[0008] For this reason, the closer one can match the distributed
processors of the hardware to the structure of the underlying
network being simulated, the less information must be shuttled
back-and-forth between memory and processor and the lower the power
dissipation required for emulation. The limit of efficiency occurs
when the hardware becomes the network, which occurs when memory
becomes processing. We call this point physical computation, since
the physical properties of the system are now "computing" the answer
rather than the answer being arrived at abstractly through
operations on numbers represented as binary values. Physical
computation is related to, but not the same as, analog computation.
For example, consider the problem of simulating the fall of a rock
dropped from some height. We may go about a solution a number of
ways. First, we may derive a mathematical expression and evaluate
this on a digital computer. This is digital computing. Second, we
may solve a differential equation by noticing the equations of
motions are mathematically equivalent to some other process, for
example, those of transistor physics. This is analog computing. The
third option is that we could find a rock and drop it. This is
physical computing. Relatively simple arguments can be made to show
that this is the only practical solution to large adaptive systems
on the scale of living systems such as a brain. Digital and analog
computing each suffer from the memory-processing duality, a
condition which does not exist in nature and which introduces very
high power dissipations for highly adaptive large-scale
systems.
[0009] As an example of just how significant computation is,
consider IBM's recent cat-scale cortical simulation of 1 billion
neurons and 10 trillion synapses. This effort required 147,456
CPUs and ran at 1/100th real time. At a power consumption of 20 W
per CPU, this is 3 megawatts. If we presume perfect scaling, a
real-time simulation would consume 100 times more power, or 300
megawatts. A human brain is about 20 times larger than a cat's, so
a real-time simulation of a network at the scale of a human
would consume 6 GW if done with traditional serial processors. This
is 600 million times more energy than a human brain actually
dissipates. It is worth consideration that every brain in existence
has evolved for just one purpose: to control an autonomous
platform. An algorithm for finding regularities in large quantities
of streaming information that cannot be mapped directly to
physically adaptive hardware will likely not find use in mobile
platforms, as the energy demands far exceed practical power
budgets.
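The power figures quoted above follow from the document's own numbers, as the following back-of-envelope check shows:

```python
# Back-of-envelope check of the power figures quoted above (all inputs
# are the document's own numbers).
cpus = 147_456
watts_per_cpu = 20
sim_power = cpus * watts_per_cpu            # at 1/100th real time
realtime_power = sim_power * 100            # perfect scaling to real time
human_scale_power = realtime_power * 20     # human brain ~20x cat scale

print(f"{sim_power / 1e6:.1f} MW")          # -> 2.9 MW ("3 megawatts")
print(f"{realtime_power / 1e6:.0f} MW")     # -> 295 MW ("300 megawatts")
print(f"{human_scale_power / 1e9:.1f} GW")  # -> 5.9 GW ("6 GW")
```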
BRIEF SUMMARY
[0010] The following summary of the invention is provided to
facilitate an understanding of some of the innovative features
unique to the present invention, and is not intended to be a full
description. A full appreciation of the various aspects of the
invention can be gained by taking the entire specification, claims,
drawings, and abstract as a whole.
[0011] It is, therefore, one aspect of the disclosed embodiments to
provide for an improved feature extraction method and system.
[0012] It is another aspect of the disclosed embodiments to provide
for methods and systems for feature extraction of surface manifolds
from noisy point cloud data sources.
[0013] The aforementioned aspects and other objectives and
advantages can now be achieved as described herein. Methods and
systems for feature extraction of surface manifolds are disclosed
herein. In general, pixelated data with depth information can be
generated via techniques including, for example, LIDAR. An
AHAH-based feature extraction operation can be automatically
performed on or with respect to the point data for compression and
processing thereof. The results of the AHAH-based feature
extraction operation can be output as a compressed binary label
representative of the at least one surface manifold rather than the
point data to afford a high-degree of compression for transmission
or further processing thereof. Additionally, one or more voxels of
a point cloud composed of the point data can be scanned in order to
recover the compressed binary label, which represents prototypical
surface patches with respect to the surface manifold(s).
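As an illustrative sketch of the voxel-scanning step just described, the following code bins hypothetical LIDAR points within a single voxel into grid elements and emits the sparse activation vector of active grid-element indices; the grid resolution and point coordinates are invented for illustration, not taken from the disclosure.

```python
# Sketch of the voxelization step: LIDAR points inside a voxel are binned
# into grid elements; a grid element is "active" if it contains at least
# one point, and the voxel's output is the sparse binary activation
# vector of active grid-element indices.
def activation_vector(points, voxel_origin, voxel_size, n):
    """Bin 3-D points into an n x n x n grid inside one voxel and return
    the sorted indices of active grid elements."""
    cell = voxel_size / n
    active = set()
    for x, y, z in points:
        i = int((x - voxel_origin[0]) / cell)
        j = int((y - voxel_origin[1]) / cell)
        k = int((z - voxel_origin[2]) / cell)
        if 0 <= i < n and 0 <= j < n and 0 <= k < n:
            active.add(i * n * n + j * n + k)   # flatten (i, j, k)
    return sorted(active)

points = [(0.1, 0.1, 0.1), (0.15, 0.12, 0.9), (0.9, 0.9, 0.9)]
print(activation_vector(points, (0.0, 0.0, 0.0), 1.0, 4))  # -> [0, 3, 63]
```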
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The accompanying figures, in which like reference numerals
refer to identical or functionally-similar elements throughout the
separate views and which are incorporated in and form part of the
specification, further illustrate the present invention and,
together with the detailed description of the invention, serve to
explain the principles of the present invention.
[0015] FIG. 1 illustrates a graph depicting data indicative of a
meta-stable switch, in accordance with the disclosed
embodiments;
[0016] FIG. 2 illustrates a sample graph of a Lissajous I-V curve,
in accordance with the disclosed embodiments;
[0017] FIG. 3 illustrates a schematic diagram of a synapse based on
a differential pair of memristors, in accordance with the disclosed
embodiments;
[0018] FIG. 4 illustrates a schematic diagram depicting a circuit
that includes a plurality of AHAH nodes, in accordance with the
disclosed embodiments;
[0019] FIG. 5 illustrates a schematic diagram of a group of
differential synapses;
[0020] FIG. 6 illustrates a schematic diagram of a 2-1 post-flip
AHAH circuit, in accordance with the disclosed embodiments;
[0021] FIG. 7 illustrates a data structure of four different
distributions on two wires x0 and x1, in accordance with the
disclosed embodiments;
[0022] FIG. 8 illustrates a schematic diagram of AHAH rule
attractor points representing bifurcations of its input space, in
accordance with the disclosed embodiments;
[0023] FIG. 9 illustrates how a collective of AHAH nodes, each
occupying distinct attractor states, can distinguish features, in
accordance with the disclosed embodiments;
[0024] FIG. 10 illustrates a schematic diagram of a system of AHAH
nodes, in accordance with the disclosed embodiments;
[0025] FIG. 11 illustrates a block diagram of a basic module
layout, where the noisy input X0 in dimension D0 is reduced in
dimensionality and conditioned to a stable bit pattern X1 in
dimension D1, which is further reduced to a maximally efficient
compact bit pattern X2 in dimension D2, in accordance with the
disclosed embodiments;
[0026] FIG. 12 illustrates a graphical representation of a
collection of points from a measurement representing a surface
patch; and
[0027] FIG. 13 illustrates a graph indicating that LIDAR point data
may be encapsulated within voxels, which can be further discretized
into grid elements and used to construct an activity vector
consisting of the indices to the grid elements that contain point
data, in accordance with the disclosed embodiments.
DETAILED DESCRIPTION
[0028] The particular values and configurations discussed in these
non-limiting examples can be varied and are cited merely to
illustrate an embodiment of the present invention and are not
intended to limit the scope of the invention.
[0029] The embodiments will now be described more fully hereinafter
with reference to the accompanying drawings, in which illustrative
embodiments of the invention are shown. The embodiments disclosed
herein can be embodied in many different forms and should not be
construed as limited to the embodiments set forth herein; rather,
these embodiments are provided so that this disclosure will be
thorough and complete, and will fully convey the scope of the
invention to those skilled in the art. Like numbers refer to like
elements throughout. As used herein, the term "and/or" includes any
and all combinations of one or more of the associated listed
items.
[0030] The terminology used herein is for the purpose of describing
particular embodiments only and is not intended to be limiting of
the invention. As used herein, the singular forms "a", "an" and
"the" are intended to include the plural forms as well, unless the
context clearly indicates otherwise. It will be further understood
that the terms "comprises" and/or "comprising" when used in this
specification, specify the presence of stated features, integers,
steps, operations, elements and/or components, but do not preclude
the presence or addition of one or more other features, integers,
steps, operations, elements, components, and/or groups thereof.
[0031] The disclosed embodiments offer first a mechanism that can
optimally produce a series of labels responding to features in an
input data stream. Given a set of noisy, incomplete but
re-occurring patterns, the goal is to output stable labels for the
patterns. We can achieve this goal by using a collection of AHAH
nodes to collapse the input space from a high-dimensional and noisy
input space to a low-dimensional and noise-free space, where we can
then perform exact bit matching on the output to further reduce the
dimensionality. Because our methods can be constructed as a
physical circuit, we will proceed with a background on memristors
as meta-stable switches and the AHAH plasticity rule. However, the
methods may also be utilized in more traditional methods of
software and hardware where we extract core mathematical models and
simulate these models within our computing system. The AHaH rule
can be understood as a two-part procedure of state evaluation that
results in negative feedback to the synaptic state (Anti-Hebbian
Learning) followed by state reinforcement that results in positive
feedback to the synaptic state (Hebbian learning). Such techniques
are detailed in, for example, U.S. Pat. No. 7,599,895, which is
incorporated herein by reference.
[0032] A memristor is a collection of meta-stable switches (MSS).
Each MSS possesses at least two states, A and B, separated by a
potential energy barrier. This can be seen in Illustration 1. We
will set the barrier potential as the reference potential V=0. The
probability that the MSS will transition from the A state to the B
state is given by P.sub.A, while the probability that the MSS will
transition from the B state to the A state is given by P.sub.B.
[0033] The transition probabilities [P.sub.A, P.sub.B] can be
modeled as:
$$P_A = \frac{\alpha}{1 + e^{-\beta(\Delta V - V_A)}} = \alpha\,\Gamma(\Delta V, V_A)$$

$$P_B = \alpha\left(1 - \Gamma(\Delta V, -V_B)\right)$$
[0034] In this case, β = q/kT is the inverse thermal voltage and is
equal to 28 mV⁻¹, α = Δt/t_c is the ratio of the time step period
Δt to the characteristic time scale of the device t_c, and ΔV is
the voltage across the device. We will define P_A as the
positive-going direction so that a positive applied voltage
increases the chances of occupying the B state. Each state has an
intrinsic electrical conductance given by w_A and w_B. An MSS
possesses utility in an electrical circuit as a memory or adaptive
computational element so long as these conductances are different.
We will take the convention that w_B ≥ w_A.
[0035] FIG. 1 illustrates a graph 10 depicting data indicative of a
meta-stable switch, in accordance with the disclosed embodiments. A
meta-stable switch is a two-state device that switches
probabilistically between its states as a function of applied bias
and temperature. A memristor is a collection of N meta-stable
switches. We can model this in discrete time steps Δt. The
memristor conductance is given by the sum over each meta-stable
switch:

$$W_m = N_A w_A + N_B w_B = N_B (w_B - w_A) + N w_A$$

wherein N_A is the number of MSSs in the A state, N_B is the
number of MSSs in the B state, and N = N_A + N_B. At each
time step some sub-population of the MSSs in the A state will
transition to the B state, while some sub-population in the B state
will transition to the A state. The probability that k switches
will transition out of a population of n switches given a
probability of p is given by the binomial distribution:
$$P(n, k) = \frac{n!}{k!\,(n-k)!}\, p^k (1-p)^{n-k}$$
[0036] As n becomes large, we may approximate the binomial
distribution with a normal distribution:
$$G(\mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

wherein μ = np and σ² = np(1-p).
[0037] The change in conductance of a memristor is a probabilistic
process since a memristor is composed of discrete meta-stable
switches. Using the approximation above, the number of MSSs that
transition between A and B states is picked from a normal
distribution with a center at np and variance np(1-p), where the
state transition probabilities are given as above.
[0038] The update to the memristor conductance is thus given by the
contribution from two random variables picked from two normal
distributions:

$$\Delta N_B = G\!\left(N_A P_A,\, N_A P_A (1 - P_A)\right) - G\!\left(N_B P_B,\, N_B P_B (1 - P_B)\right)$$
[0039] The update to the conductance of the memristor is then given
by:
$$\Delta w_m = \Delta N_B (w_B - w_A)$$
[0040] To measure the characteristic timescale of the device, one
may initialize a memristor into a non-equilibrium state such as
N_B = N or N_B = 0 and measure the decay back to an equilibrium
conductance under zero bias.
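A minimal Monte-Carlo sketch of the MSS memristor model above is given below; the parameter values (alpha, beta, V_A, V_B, w_A, w_B, N) are placeholders rather than values from the disclosure.

```python
import math
import random

# Monte-Carlo sketch of the meta-stable-switch (MSS) memristor model
# described above. All parameter values are illustrative placeholders.
def gamma(dV, V, beta):
    return 1.0 / (1.0 + math.exp(-beta * (dV - V)))

def step(N_B, N, dV, alpha=0.01, beta=38.0, V_A=0.2, V_B=0.2):
    """One discrete time step: returns the new number of switches in the
    B state, sampled from the normal approximation to the binomial."""
    N_A = N - N_B
    P_A = alpha * gamma(dV, V_A, beta)            # A -> B transition prob.
    P_B = alpha * (1.0 - gamma(dV, -V_B, beta))   # B -> A transition prob.
    dN_B = (random.gauss(N_A * P_A, math.sqrt(max(N_A * P_A * (1 - P_A), 0.0)))
            - random.gauss(N_B * P_B, math.sqrt(max(N_B * P_B * (1 - P_B), 0.0))))
    return min(max(int(round(N_B + dN_B)), 0), N)

def conductance(N_B, N, w_A=1e-6, w_B=1e-4):
    return N_B * (w_B - w_A) + N * w_A            # W_m = N_B(w_B - w_A) + N w_A

N, N_B = 10_000, 0
for _ in range(200):                              # positive bias drives A -> B
    N_B = step(N_B, N, dV=0.5)
print(conductance(N_B, N))                        # conductance grows under bias
```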
[0041] FIG. 2 illustrates a sample graph 20 of a Lissajous I-V
curve, in accordance with the disclosed embodiments. Graph 20 plots
output current along the y-axis versus input voltage along the
x-axis for a sample memristor. A memristor is intrinsically a
stochastic element, although when composed of many MSSs or
subjected to large voltage differentials it may appear to be
deterministic. Depending on the relative values of V_A and V_B,
the device will display a range of characteristics. Of some
interest is the property of decay and a non-conducting ground
state, which can be achieved under the conditions V_B < V_A,
V_A > kT/q, and w_B > w_A.
[0042] The utility of a memristor lies in its ability to change its
conductance as a function of the voltage history applied to the
device. This can be illustrated by a Lissajous I-V curve that shows
how the conductance of a memristor changes over time as a
sinusoidal voltage is applied.
[0043] The core device element of our self-organizing unit or node
is thus the meta-stable switch, and a memristor can be seen as a
device composed of a collection of meta-stable switches. A synapse
is a differential pair of memristors, W = w_0 - w_1, where W
denotes the difference in conductance between the two memristors
composing the synapse.
[0044] FIG. 3 illustrates a schematic diagram of a synapse 30 based
on a differential pair of memristors 32, 34, in accordance with the
disclosed embodiments. The configuration shown in FIG. 3 is not
unique and in fact there exist three possible configurations 2-1,
1-2, and 2-2, which refer to the number of input and output
electrodes on the synapse. A further discussion of possible
arrangements can be found in U.S. Pat. No. 7,509,805 entitled
"Methodology for the Configuration and Repair of Unreliable
Switching Elements," which issued on Oct. 6, 2009, and is
incorporated herein by reference in its entirety.
[0045] The probability that a meta-stable switch will transition
from its ground state to excited state is a function of the applied
voltage and time it is applied. For this treatment, we will take
the function to be quadratic in voltage and linear in time as
indicated by the following equation:
$$P(E_0 \rightarrow E_1) \approx \alpha V^2 T$$
[0046] In the above equation, the variable α represents a constant
and the variable T represents a characteristic update timescale. We
are thus presuming that both applied voltage polarities will cause
the memristor to increase its conductance. This is the case in
memristors composed of nanoparticles in a colloidal suspension, but
is not true of all memristors, for example, those reported by
Hewlett-Packard, the University of Boise, the University of
Michigan, UCLA, and others. In these cases, a reverse voltage
polarity will cause a decrease in device conductance. This is shown
above in FIG. 2.
[0047] Furthermore, a memristor may act as an intrinsic diode. This
is true of some memristors, for example, that reported by the
University of Michigan, but not of others (e.g., the University of
Boise). We may thus categorize the various types of memristors as
polar or non-polar in regard to their ability to change conductance
as a function of the applied voltage, and rectifying or
non-rectifying as a function of their intrinsic (or not) diode
properties. Our methods apply to all such configurations, although
various synaptic configurations (1-2, 2-1, 2-2) must be used.
Furthermore, a mechanism for lowering the conductance of the device
must be available, be it a reverse bias, application of a high
frequency AC voltage, or simply letting it decay over time.
[0048] FIG. 4 illustrates a schematic diagram depicting a circuit
40 that includes a plurality of AHAH nodes 42, 44, and 46, in
accordance with the disclosed embodiments. Circuit 40 further
includes memristors 50 to 84. An AHAH node is a collection of
synapses and associated CMOS feedback circuitry acting on one of
the three possible electrode configurations of 1-2, 2-1 or 2-2. We
will illustrate the 2-1 case below for a non-rectifying polar
memristor. Synapses are formed at the intersection of output and
input electrodes. A synapse is a differential pair of memristors
between the two output electrodes and one input electrode, as shown
in illustration 3. A node is formed from many such synapses
connecting many inputs to the node's electrode, as seen in FIG. 4.
Many such circuits are possible for providing feedback voltage to
the node's electrode. Before we discuss one possible circuit, let
us turn our attention to one node and recognize it as a series of
voltage-divider circuits formed from the input lines.
[0049] FIG. 5 illustrates a schematic diagram of a group of
synapses, wherein each synapse can be seen as a voltage divider
prior to application of feedback voltage, in accordance with the
disclosed embodiments. The sample synapses shown in FIG. 5 include
memristors 52, 54, and 56, 58, and 60, 62. In general, the AHAH
rule is composed of two basic phases: Evaluate and Feedback. During
the evaluate phase, input voltages are applied and these voltages
are integrated via the differential synapses on the node's
electrode. The evaluation phase is a passive process and, in the
case of the 2-1 configuration, consists of solving for the
steady-state voltage. It should be noted that during this phase,
each synaptic state undergoes negative feedback.
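The passive evaluate phase just described amounts to solving Kirchhoff's current law for a floating electrode; the following sketch computes that steady-state voltage, with illustrative conductance values standing in for the memristor pairs.

```python
# Sketch of the passive "evaluate" phase: a node electrode connected to
# input voltages V_i through conductances G_i settles (by Kirchhoff's
# current law) at the conductance-weighted mean of the inputs. All
# numeric values below are illustrative.
def steady_state_voltage(conductances, voltages):
    """Solve sum_i G_i (V_i - V) = 0 for the floating electrode voltage V."""
    total = sum(conductances)
    return sum(g * v for g, v in zip(conductances, voltages)) / total

# Two differential synapses driving one electrode: each input contributes
# through a w0 memristor (pulling toward +V) and a w1 memristor (pulling
# toward -V).
w0 = [3e-6, 1e-6]    # conductances toward the positive rail, per input
w1 = [1e-6, 1e-6]    # conductances toward the negative rail, per input
G = w0 + w1
V = [0.5, 0.5, -0.5, -0.5]
print(steady_state_voltage(G, V))   # positive: net synaptic weight > 0
```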
[0050] For example, suppose a memristor was highly positive so
that w_0 >> w_1. This will have the effect of pulling the
electrode voltage (V in Illustrations 3 and 5) up, reducing the
voltage drop across the w_0 memristor but increasing it over the
w_1 memristor. This will cause the w_1 memristor to increase its
conductance more than the w_0 memristor, thus moving the synapse
back toward the zero point. During the feedback phase, positive
feedback is applied to the electrode via a voltage-keeper circuit.
During the feedback phase, the synapse undergoes an update that is
opposite in direction to that which it received during the
evaluation phase, and it proceeds for a variable time. This can be
seen more clearly in FIG. 6, where we have identified four basic
phases, θ0 through θ3, labeled charge, evaluate, feedback, and
decay, respectively.
[0051] FIG. 6 illustrates a schematic diagram of a 2-1 post-flip
AHAH circuit 70, in accordance with the disclosed embodiments.
Circuit 70 generally includes memristors 72, 74 and a voltage
source 76 (V.sub..infin./2), along with components 78, 80 and
inverters 82, 84, and 86. A graph 88, also shown in FIG. 6, tracks
charge, evaluate, feedback, and decay phases. In the most general
case, we may omit the evaluate and decay phase, although we include
them here for clarity. In the charge phase, voltages are applied to
the inputs and the post-synaptic voltage is evaluated via a passive
integration. In the evaluate phase, the positive-feedback voltage
keeper circuit controlled with clock C0 is activated. During
feedback, the post-synaptic voltage is inverted through
deactivation of passgate with clock C1 and activation of inverter
C2.
[0052] During the decay phase, the inputs are inverted and the
post-synaptic electrode voltage is set to 1/2 of the supply voltage
such that each memristor receives equal and opposite voltage and
thus achieves an accelerated decay. This phase may be removed from
each clock cycle, for example, occurring once every 100 clock
cycles; or, should the memristor have a natural decay rate
commensurate with the usage, a period of sleep would suffice. One
can see how many strategies are available to accomplish the basic
anti-Hebbian and Hebbian phases of the AHaH rule.
[0053] The operation of the device can be seen as a "memory read"
and "memory refresh cycle", where the act of read (evaluate)
damages the synaptic states, while feedback repairs the state. It
is obvious that each memristor's conductance would saturate over
time if not reduced. This can be accomplished by adding a decay
phase in the cycle as shown, by providing for a sufficiently long
rest-state to allow the memristor to decay, or to force the decay
by applying an equal-magnitude reverse bias across both memristors
after a set or variable number of cycles. Although there are
multiple ways to accomplish the task, the net effect is simply to
decay the memristors to keep them operating within their dynamic
range and to prevent saturation over time. We call such an
operation a synaptic normalization. As the dynamic range of the
memristors increases, the frequency of synaptic renormalization may
be reduced.
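As an illustrative sketch (a toy model, not the circuit itself; function name and constants are ours), the effect of periodic synaptic normalization on an otherwise saturating weight can be simulated:

```python
def run_cycles(n_cycles, growth=0.01, decay=0.5, period=100, w_max=1.0):
    """Toy model of synaptic normalization: each evaluate/feedback cycle
    nudges a conductance upward; every `period` cycles a decay phase
    scales it back down, keeping the weight inside its dynamic range
    instead of saturating at w_max."""
    w = 0.0
    for t in range(1, n_cycles + 1):
        w = min(w + growth, w_max)  # feedback tends to grow the conductance
        if t % period == 0:
            w *= decay              # periodic decay phase (normalization)
    return w
```

With the decay phase enabled the weight cycles well inside its range; with the decay phase effectively disabled it pins at the saturation limit.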
[0054] A fundamentally important property of the AHAH rule is that
as the magnitude of the post-synaptic activation becomes large, the
Hebbian portion of the update must decrease in magnitude or
transition to anti-Hebbian. This property ensures the rule
converges to independent components of the data being
processed by the AHaH node.
[0055] Let us now take a step back and discuss data structure
before again focusing on specific circuit implementations.
Suppose that we took a monochromatic digital picture of a page of
text. By arranging the pixels in proper rows and columns, you would
of course be able to perceive the text and understand what is
written. However, the underlying data structure is not letters; it
is binary pixels. By simply taking the array of pixels and arranging
them into any other pattern, what was coherent is now an incoherent
jumble of bits. The conclusion is that the structure of the
information (the letters) is not the same as the information
channels that carry the information (the pixels). The primary
function of unsupervised clustering algorithms is to uncover the
structure of the information.
[0056] Two wires carrying the same sequence of bits do not carry
any additional information. This is captured in the concept of
mutual information, which measures how much one signal tells us
about another signal. If the mutual information between wire A and
wire B was 1, for example, they carry the same information. If the
mutual information is zero, then they are independent. Let us
visualize this with respect to FIG. 7, which illustrates a data
structure 90 of four different distributions on two wires X0 and
X1, in accordance with the disclosed embodiments. These four
different distributions are: (I) The mutual information between X0
and X1 is 1 and no states can be distinguished; (II) Two states can
be resolved; (III) Three states can be resolved; and (IV) Nine
states can be resolved.
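The notion of mutual information can be made concrete with a small plug-in estimator over observed bit pairs (`mutual_information` and the sample data are illustrative, not part of the specification):

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Estimate I(X0; X1) in bits from a list of (x0, x1) observations."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x0 for x0, _ in pairs)
    py = Counter(x1 for _, x1 in pairs)
    mi = 0.0
    for (x0, x1), c in joint.items():
        p_xy = c / n
        mi += p_xy * log2(p_xy / ((px[x0] / n) * (py[x1] / n)))
    return mi

# Two wires carrying the same bit stream: fully redundant, I = 1 bit
# (two equally likely shared states).
same = [(0, 0)] * 50 + [(1, 1)] * 50
# Two independent wires: knowing one tells us nothing about the other.
indep = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25
```

Here `mutual_information(same)` evaluates to 1.0 and `mutual_information(indep)` to 0.0, matching the two extremes described above.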
[0057] We must make two important points about FIG. 7. First, the
number of states carried by the wires is in general unrelated to
the number of wires that carry the information. For binary
encodings the total number of resolvable states over N wires is as
high as 2.sup.N but likely much lower. The first challenge in
unsupervised clustering or learning algorithms is to resolve
underlying states from observations over time. Second, wires that
do not resolve more than one state are useless. Think about it this
way: suppose you were locked in a room and could only observe two
light bulbs. If the light bulbs were always on, as in (I) of
FIG. 7, they can convey no information. On the other hand,
if one of the light bulbs blinked on and off while the other
remained on as in (II), you could distinguish between two states.
Something must be causing that light to turn on and off. There must
be a source. What is the nature of this source? An answer to this
question gets at the heart of machine learning. Suppose that our
observations of two wires X0 and X1 led to the distribution of (II).
Furthermore, once we recognized the existence of two states we took
note of their specific sequence over time: ABAABBABAABB . . . .
[0058] We can now identify further temporal structure. It is
temporal structure that allows us to infer the existence of a
source or mechanism in the environment since temporal events link
cause and effect. Explaining a temporal sequence requires a model
of a mechanism that generates the sequence. We could analyze the
sequence in a number of ways. For example, the pair AA follows
AB, BB follows AA, and AB follows BB, repeating in a cycle. On the
other hand, the sequence ABAABB is simply repeating, or ABB follows
ABA. How we view the sequence is dependent on the temporal window
we are capable of holding in memory. This leads us to an important
simplifying observation. The sequence above is not actually a
sequence! It is a spatial pattern. After all, what we have
communicated is static letters on a page, not a temporally dynamic
video or song. By recording in temporary memory prior states, we
have converted a temporal pattern into a spatial pattern. The
problem of identifying temporal structure then becomes the problem
of identifying spatial structure.
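The conversion of a temporal pattern into a spatial one can be sketched with a sliding memory buffer (an illustrative delay-embedding; the function name and window length are our choices):

```python
def to_spatial(sequence, window):
    """Record the prior `window` states in a sliding buffer, turning a
    temporal sequence into a collection of spatial patterns."""
    return [tuple(sequence[i:i + window])
            for i in range(len(sequence) - window + 1)]

seq = list("ABAABBABAABB")
patterns = to_spatial(seq, 3)
# The repeating temporal structure now shows up as a small set of
# recurring spatial patterns rather than as a time series.
```

For the example sequence, the ten length-3 windows collapse to only six distinct spatial patterns, exposing the repetition spatially.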
[0059] It is this realization that has spawned the recent work in
liquid and echo state machines, which have been used successfully
in predicting/modeling time sequences and learning temporal
transfer functions. A rock thrown into a still pond will create
ripples such that taking a picture at any instant in time enables
the past to be reconstructed. Temporal structure is converted into
spatial structure when information travels through networks of
path-delays.
[0060] Suppose we created a large dynamic network as in biological
cortex, where pulses traveled between nodes over links. The fact
that a signal takes time to propagate over a link introduces a
time-delay in the network and at this moment temporal structure is
converted into spatial structure.
[0061] For example, two pulses that arrive at the same time at a
node could have taken two paths: one path could come directly from
a sensory input while the other could have come from another
sensory input multiple time steps in the past. What the node
perceives as a spatial coincidence is actually a prediction, and
the prediction has been encoded as links in the network.
[0062] Recall from the discussion of FIG. 7 that only states which
can be distinguished are useful. A light that is always on or
always off can communicate no information and can serve no useful
function to a network. Extraction of data structure would then
appear to be related to a fundamental operation of differentiation
between states over time. Given some time series of vectors
P=[p.sub.0, p.sub.1, p.sub.2, . . . p.sub.n], what spatial states
can be distinguished? What are the spatial building blocks?
Technically, we are looking for the extraction of signals that are
independent in time.
[0063] If an input pattern falls on one side of the decision
boundary, the output of the AHAH node is positive, while it is
negative if it is on the other side of the boundary. Stated another
way, the node output is an efficient binary encoding representing
one natural independent component of the input data distribution.
It is clear that a single AHAH node cannot alone become selective
to a particular feature or pattern in the data. For example,
consider the distribution of IV in FIG. 8. There are nine input
data states or patterns. An AHAH node may only bifurcate its space;
it can only output a binary label. However, a collective of AHAH
nodes each occupying different states can, as a group, distinguish
each feature.
[0064] Suppose we had two input wires that carried a sequence of
vectors that, over time, matched the distribution of IV in FIG. 8.
These two inputs connect to four AHAH nodes 1-4. One such
configuration is seen in FIG. 9. FIG. 9 illustrates a collective
102 of AHAH nodes each occupying distinct attractor states that can
distinguish features. Lines 1-4 in FIG. 9 represent the decision
boundaries of AHAH nodes 1-4. Features A-I fall in FIG. 9 on various
sides of each node's decision boundary so that each feature results
in a unique binary output from the group of AHAH nodes.
[0065] Given some input pattern, we can simply read off the output
value of each node as a binary label that encodes each unique
feature. Feature A gets the binary label 0011 because node 1 output
is negative, 2 is negative, 3 is positive, and 4 is positive.
[0066] In such a way, a collective of AHAH nodes serves as a
"partitioning" or "clustering" algorithm, outputting a unique
binary label for each unique statistically independent input
source, regardless of the number of input lines that carry the
data.
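Reading off such a collective's label can be sketched as follows (the sign convention and the sample node outputs are invented for illustration):

```python
def binary_label(node_outputs):
    """One bit per AHAH node: 1 where the node's post-synaptic
    activation is positive, 0 where it is negative."""
    return ''.join('1' if y > 0 else '0' for y in node_outputs)

# Feature A from the example above: nodes 1 and 2 negative,
# nodes 3 and 4 positive.
label_a = binary_label([-0.7, -0.2, 0.4, 0.9])  # -> '0011'
```

Any input pattern falling on the same sides of the four decision boundaries maps to the same stable label, regardless of how many input lines carried it.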
[0067] The core operation of a collective of AHAH nodes can be seen
in FIG. 10. Many sparse binary (spiking) inputs synapse
onto a small collection of AHAH nodes. Each temporally correlated
group of inputs forms independent components (IC), and the AHAH rule
binds these inputs together by assigning them synapses of the same
sign. In FIG. 10 we can see six ICs with positive weights
shown as green and negative as red. The space of allowable AHAH
states is 2.sup.F, wherein F is the number of input features (i.e.,
patterns). However, we do not wish each AHAH node to occupy every
state. There is a possibility of all weights taking the same sign,
a condition we call the null state. The null state is useless
computationally, as the node's output never changes. To prevent
occupation of the null state, we include a bias input that is
always active and only ever receives anti-Hebbian updates.
[0068] To emphasize the general quality of the AHAH rule and its
applicability outside of purely physically adaptive circuits, we
will re-state the rule in a more generic mathematical form:
Δw_i = x_i·f(y)

y = Σ_(i=0)^N x_i·w_i + x_bias·w_bias

f(y) = α·sign(y) - β·y

Δw_bias = -γ·sign(y)
[0069] In the above equations, α, β, and γ are
constants, x_i is the ith input, and w_i is the
ith weight. We may generally presume that the inputs are binary and
sparse so that only a small subset, perhaps 10% or less, of inputs
are active. Seen in this light, we may simply set x=1 for all
active inputs and state that weights are modified when they are
used and not otherwise. The bias can be seen as an input that is
always active, and the update to the bias weight can be seen as
purely anti-Hebbian. The constants control the relative
contribution of positive and negative feedback and may be modified
to achieve certain desired results. For example, by increasing the
contribution of the bias input, we may increase a "restoring force"
that forces the AHAH node into a state that bifurcates its input
space. The net effect is a subtraction of an adaptive average. If
the node has found an attractor state that splits its space in half,
such that approximately half of the IC's are given positive weights
and half are given negative weights, the average node output will
be zero and the bias weight will be zero. If the output becomes
unbalanced, the bias will bring it back, thus preventing the
occupation of the null state.
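The rule restated in [0068] might be sketched in code as follows; the constants and the exact functional form of f(y) here are illustrative assumptions, chosen only so that the Hebbian term shrinks as the activation magnitude grows:

```python
import random

class AHAHNode:
    """Sketch of a single AHAH node. alpha, beta, gamma and the form of
    f(y) are illustrative assumptions, not values from the specification."""

    def __init__(self, n_inputs, alpha=0.1, beta=0.05, gamma=0.02):
        self.w = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
        self.w_bias = 0.0
        self.alpha, self.beta, self.gamma = alpha, beta, gamma

    def update(self, x):
        """x is a sparse binary input vector; returns sign of the output."""
        y = sum(xi * wi for xi, wi in zip(x, self.w)) + self.w_bias
        s = 1.0 if y >= 0 else -1.0
        f = self.alpha * s - self.beta * y  # Hebbian part decays with |y|
        for i, xi in enumerate(x):
            if xi:                          # weights modified only when used
                self.w[i] += f
        self.w_bias -= self.gamma * s       # bias: purely anti-Hebbian update
        return s
```

Repeated presentation of the same sparse pattern drives the output toward a stable sign while the anti-Hebbian bias term subtracts an adaptive average, as described above.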
[0070] FIG. 10 illustrates a schematic diagram of a system 104 of
AHAH nodes. As shown in FIG. 10, the core operation of the
collection or system 104 of AHAH nodes is spatial pooling of input
lines into temporally independent components (IC), collapsing the
large input space, and outputting stable binary labels for input
features.
[0071] Once each AHAH node has settled into unique attractor states,
the collective will output a binary label for each input feature,
converting large, sparse, incomplete, noisy patterns into small,
complete, noise-free binary patterns. There are various ways of
describing this operation in the literature. We are performing a
spatial pooling in that we are collapsing a large set of input
patterns into a much smaller set, also known as clustering. We
emphasize, however, that we are certainly not clustering in the
traditional sense.
[0072] Methods like k-means perform projection operations, taking
an input vector and projecting it onto the weight vectors of each
node. The node with the best match is considered the "winner" and
its weight vector is moved closer to the data pattern. The
projection operation is critically dependent on the number of nodes
used, which we believe to be an incorrect starting assumption. This
is like presuming an absolute reference frame on the incoming data
statistics. Given 10 nodes, the data will be projected onto a
10-dimensional coordinate axis, regardless of the natural
statistics of the data. Our microcircuit is like a relative
reference frame, where a feature is only distinguishable because it
is not another feature. If the underlying dimensionality of the
feature space is 10, for example, our circuit will output 10 labels
independent of the number of AHAH nodes.
[0073] We are generating labels (L) for features (F). Let us
presume that each AHAH node will randomly assign each IC to either
the positive or negative state. The total number of output labels is
2.sup.N, where N is the number of AHAH nodes. If N is small and the
number of features high, it is possible that the AHAH node
collective will output the same label for different features.
However, as the number of nodes increases, the probability of this
occurring drops exponentially. Specifically, the probability P that
any two features will be assigned the same binary label goes as:

P = 1/2^N + 2/2^N + . . . + F/2^N = Σ_(i=0)^F i/2^N = (F^2 + F)/2^(N+1)
[0074] For 64 features and 16 nodes, the probability of two features
being assigned the same label is 3%. By increasing N to 20 we can
reduce this to only 0.4% and with 32 nodes it is less than one in a
million.
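A quick sketch for evaluating the collision expression from [0073] (the function name is ours):

```python
def collision_probability(F, N):
    """Probability that any two of F features share a binary label over
    N AHAH nodes, per the expression (F^2 + F) / 2^(N + 1)."""
    return (F ** 2 + F) / 2 ** (N + 1)

p16 = collision_probability(64, 16)  # about 0.03 for 64 features, 16 nodes
p32 = collision_probability(64, 32)  # below one in a million for 32 nodes
```

The probability falls exponentially in the number of nodes, which is why a modest increase in N makes label collisions negligible.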
[0075] Let us suppose for our purposes that we have 16 nodes so
that the output of the collective is a stable 16-bit pattern. Each
of the 16-bit patterns represents a feature. Although the space of
possible patterns is 2.sup.16, only a small subset will ever occur
if the data is structured. However, far from being noisy and
incomplete, the bit patterns are stable and can therefore be matched exactly. A
further reduction from 16 bits to, for example 8 bits can be
accomplished through the use of content-addressable memory (CAM) or
a least-recently-used cache (LRUC), or other common methods. For
example, given a set of 256 patterns, we store the patterns as rows
and match new patterns bit-for-bit against the stored patterns.
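One way to sketch the reduction from stable 16-bit patterns to compact 8-bit labels is a least-recently-used cache standing in for the CAM (class, method, and field names are ours):

```python
from collections import OrderedDict

class LabelCache:
    """LRU cache mapping stable bit patterns to compact labels, a sketch
    of the CAM/LRUC reduction described above."""

    def __init__(self, capacity=256):
        self.capacity = capacity
        self.table = OrderedDict()           # pattern -> compact label
        self.free = list(range(capacity))    # unassigned compact labels

    def lookup(self, pattern):
        if pattern in self.table:
            self.table.move_to_end(pattern)  # mark as recently used
            return self.table[pattern]
        if not self.free:                    # evict least-recently-used
            _, freed = self.table.popitem(last=False)
            self.free.append(freed)
        label = self.free.pop()
        self.table[pattern] = label
        return label
```

Because only a small subset of the 2^16 possible patterns ever occurs in structured data, a 256-entry table suffices and each stable pattern maps consistently to the same 8-bit label.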
[0076] Let us now return to FIG. 4, which depicts an array of M
AHAH nodes (AHAH.sub.1, AHAH.sub.2, . . . , AHAH.sub.M) receiving
inputs from an array of inputs (X.sub.1, X.sub.2, . . . , X.sub.N)
and producing an output on a register R with values (R.sub.1,
R.sub.2, . . . , R.sub.M). The output of this register is a binary
bit pattern of length M, which we may feed into a CAM to further
reduce its dimension. This can be seen in FIG. 11, which shows the
AHAH module providing an output that is input to the CAM
module.
[0077] Content-Addressable Memory (CAM) is well known in the art,
and many such embodiments are possible. Indeed, many variations may
be used in the circuit described in FIG. 11.
[0078] FIG. 11 illustrates a block diagram of a basic module layout
120, where the noisy input X0 in dimension D0 is reduced in
dimensionality and conditioned to a stable bit pattern X1 in
dimension D1, which is further reduced to a maximally efficient
compact digital encoding in dimension D2, in accordance with the
disclosed embodiments.
[0079] The basic operation of our circuit is thus as follows: A
large and likely sparse input is presented to a synaptic matrix of
AHAH nodes, termed the AHAH module, which operates the AHAH
plasticity rule via the evaluate-feedback cycle as discussed. A
bias input line is modulated such that the bias weights do not
receive the Hebbian portion of the weight update during the
feedback cycle, which prevents occupation of the null state. The
collection of AHAH nodes or AHAH module 122 fall randomly into
their attractor states, which act to bifurcate their input space.
The output of the AHAH module 122 forms a stable bit pattern, which
is then provided as an input to a Content-Addressable Memory or CAM
124 for further reduction of dimensionality. The output of the CAM
124 thus provides maximally efficient binary labels for the
regularities present in the input to the AHAH module 122.
[0080] The methods developed above are ideally suited for
applications where large quantities of sparse data must be encoded
into discrete packets or labels and transmitted over a wire. One
such application is the identification of LIDAR surface manifolds
from point cloud data. Rather than transmitting, for example, 1000
points of data we may only transmit the binary label for the
particular surface patch that these points are revealing.
[0081] LIDAR (Light Detection And Ranging, also LADAR) is an
optical remote sensing technology that can measure the distance to,
or other properties of a target by illuminating the target with
light, often using pulses from a laser. LIDAR technology has
application in geomatics, archaeology, geography, geology,
geomorphology, seismology, forestry, remote sensing and atmospheric
physics as well as in airborne laser swath mapping (ALSM), laser
altimetry and LIDAR contour mapping.
[0082] The acronym LADAR (Laser Detection and Ranging) is often
used in military contexts. The term "laser radar" is sometimes
used, even though LIDAR does not employ microwaves or radio waves
and therefore is not radar in the strict sense of the word.
[0083] LIDAR uses ultraviolet, visible, or infrared light to image
objects and can be used with a wide range of targets, including
non-metallic objects, rocks, rain, chemical compounds, aerosols,
clouds, and even single molecules. A narrow laser beam can be used
to map physical features with very high resolution.
[0084] LIDAR has been used extensively for atmospheric research and
meteorology. Downward-looking LIDAR instruments fitted to aircraft
and satellites are used for surveying and mapping, a recent example
being the NASA Experimental Advanced Research LIDAR. In addition,
LIDAR has been identified by NASA as a key technology for enabling
autonomous precision safe landing of future robotic and crewed
lunar landing vehicles.
[0085] Wavelengths in a range from about 10 micrometers to the UV
(ca. 250 nm) are used to suit the target. Typically light is
reflected via backscattering. Different types of scattering are
used for different LIDAR applications; most common are Rayleigh
scattering, Mie scattering, and Raman scattering, as well as
fluorescence. Based on the different kinds of backscattering, the
LIDAR can accordingly be called Rayleigh LIDAR, Mie LIDAR, Raman
LIDAR, Na/Fe/K fluorescence LIDAR, and so on.
[0086] Suitable combinations of wavelengths can allow for remote
mapping of atmospheric contents by looking for wavelength-dependent
changes in the intensity of the returned signal.
[0087] FIG. 12 illustrates a graphical representation of a
collection of points from a LIDAR measurement(s) representing a
surface patch 130, in accordance with the disclosed embodiments. As
may now be appreciated, LIDAR is a well-developed technology that
is used to image an area by sending out pulses of light and
detecting the backscattered radiation. This technology produces
what has become known as a "point cloud" data representation that
consists of a collection of many points, where each point has an
"XYZ" coordinate and perhaps more attributes such as color. If one
breaks the LIDAR data into smaller "3D pixels" or voxels of a
certain size, one is left with a volume of space and a collection
of LIDAR points that encode one or more surfaces. As the size of
the voxel is reduced, the number of LIDAR points captured in the
space will of course reduce. Although LIDAR produces a point
representation, what is actually being sensed is, in many cases, a
surface. It is therefore advantageous to have efficient mechanisms
for converting the point representation into a more compact "surface
patch" representation for transmission and further processing.
[0088] The methods we have illustrated for spontaneous extraction
of features via a collective of AHAH nodes may be employed for
extraction and encoding of surface patches from LIDAR point data.
The processing of the LIDAR point data may be broken down into two
parts. First, we may decompose the three-dimensional space into
voxels. Each voxel is composed of a number of volume elements that
may contain one or more LIDAR points. We show the 2D representation for
convenience. It can be easily seen how the LIDAR point data may be
represented as a voxel with active and inactive grid elements,
where a voxel grid element is considered active if it contains
within its spatial boundary one or more LIDAR points. The output of
the voxel is thus a sparse binary activation vector consisting of
the grid elements that are active.
[0089] FIG. 13 illustrates a graph 134 indicating that LIDAR point
data may be encapsulated within voxels, which can be further
discretized into grid elements and used to construct an activity
vector consisting of the indices to the grid elements that contain
LIDAR point data, in accordance with the disclosed embodiments. In
the example shown in FIG. 13, grid elements [12, 14, 17, 19, 23,
29, 40] are active. We may thus activate the corresponding input
lines of an AHAH feature extraction system as in the configuration
shown in FIG. 11. By scanning the voxel over the LIDAR point cloud,
we may recover binary labels that represent prototypical surface
patches. The output is thus a compressed binary label
representative of a surface manifold rather than the point data,
thus affording a high degree of compression for transmission or
further processing.
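The voxel-to-activation-vector step of [0088]-[0089] might be sketched as follows; the row-major grid layout, function name, and parameters are assumptions for illustration:

```python
def activity_vector(points, voxel_origin, voxel_size, grid_n):
    """Map LIDAR points inside one voxel to the sorted indices of active
    grid elements, i.e. the sparse binary activation vector described
    above. Grid elements are flattened row-major over a grid_n^3
    subdivision of the voxel."""
    cell = voxel_size / grid_n
    active = set()
    for x, y, z in points:
        i = int((x - voxel_origin[0]) / cell)
        j = int((y - voxel_origin[1]) / cell)
        k = int((z - voxel_origin[2]) / cell)
        if all(0 <= v < grid_n for v in (i, j, k)):
            active.add((i * grid_n + j) * grid_n + k)   # element is active
    return sorted(active)
```

The resulting index list plays the role of the activity vector in FIG. 13: each active index activates the corresponding input line of the AHAH feature extraction system, and scanning the voxel over the cloud yields a label per surface patch.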
[0090] It can be appreciated that the exact features produced
through this technique are dependent on the statistics of
the LIDAR data, which in turn are a direct reflection of the
statistics of the environment being measured. Thus, the method has
the potential to provide a high degree of compression due to the
fact that the extracted features represent the characteristic
features of the environment. For example, the commonly occurring
surface features within a city are largely flat and orthogonal to
the ground, whereas features within a more natural environment,
such as a forest, may occur at many orientations.
[0091] Based on the foregoing, it can be appreciated that a number
of embodiments, preferred and alternative, are disclosed herein.
For example, in one embodiment, a method for feature extraction of
surface manifolds can be implemented. Such a method can include the
steps or logical operations of, for example, generating point data
with respect to at least one surface manifold, and performing an
AHAH-based feature extraction operation on the point data for
compression and processing thereof.
[0092] In another embodiment, a step or logical operation can be
implemented for scanning at least one voxel of a point cloud
composed of the point data in order to recover the compressed
binary label which represents prototypical surface patches with
respect to the at least one surface manifold. In still another
embodiment, the aforementioned point data can include LIDAR point
data and the surface manifolds comprise LIDAR surface manifolds. In
still another embodiment, the LIDAR point data is capable of being
represented as a voxel with at least one active grid element and at
least one inactive grid element, wherein a voxel grid element is
considered active if the voxel grid element contains within a
spatial boundary thereof at least one LIDAR point. In another
embodiment, the output of the voxel can comprise a sparse binary
activation vector including grid elements that are active.
[0093] In another embodiment, a method for feature extraction can
be implemented. Such a method can include, for example, the steps
or logical operations of presenting an input to an AHAH module
comprising a plurality of AHAH nodes, wherein the plurality of AHAH
nodes operates according to an AHAH plasticity rule; and providing
an output of the AHAH module as an input of a content-addressable
memory for further reduction of dimensionality, wherein an output
of the content-addressable memory provides maximally efficient binary
labels for features present in the input to the AHAH module. In
another embodiment, the plurality of AHAH nodes can fall randomly
into attractor states which act to bifurcate input space. In yet
another embodiment, steps or logical operations can be implemented
for providing a link structure so as to convert temporally
phase-shifted activation patterns into temporally correlated
activations and extracting binary labels of features of the
temporally correlated activations for compression or processing
thereof.
[0094] In another embodiment a system for feature extraction of
surface manifolds can be implemented. Such a system may include,
for example, a processor; a data bus coupled to the processor and a
computer-usable medium embodying computer program code, the
computer-usable medium being coupled to the data bus. The computer
program code can include instructions executable by the processor
and configured for generating point data with respect to at least
one surface manifold; and performing an AHAH-based feature
extraction operation on the point data for compression and
processing thereof. In another embodiment, such instructions can be
further configured for scanning at least one voxel of a point cloud
composed of the point data in order to recover the compressed
binary label which represents prototypical surface patches with
respect to the at least one surface manifold. In another
embodiment, the point data can constitute LIDAR point data and the
surface manifolds can constitute LIDAR surface manifolds. In other
embodiments, the LIDAR point data is capable of being represented
as a voxel with at least one active grid element and at least one
inactive grid element, wherein a voxel grid element is considered
active if the voxel grid element contains within a spatial boundary
thereof at least one LIDAR point. In another embodiment, the output
of the voxel can comprise a sparse binary activation vector
including grid elements that are active.
[0095] In still another embodiment, a system for feature extraction
can be implemented. Such a system can include, for example, an AHAH
module comprising a plurality of AHAH nodes, wherein the plurality
of AHAH nodes operates according to an AHAH plasticity rule and
wherein an input is presented to the AHAH module, and a
content-addressable memory, wherein the output of the AHAH module
can be provided as an input of the content-addressable memory for
further reduction of dimensionality, wherein an output of the
content-addressable memory provides maximally efficient binary
labels for features present in the input to the AHAH module.
[0096] In another embodiment, the plurality of AHAH nodes can fall
randomly into attractor states which act to bifurcate their input
space. In another embodiment, a link structure can convert
temporally phase-shifted activation patterns into temporally
correlated activations. In another embodiment, an extractor can be
provided for extracting binary labels of features of the temporally
correlated activations for compression or processing thereof.
[0097] In another embodiment, a link structure can be provided,
which converts temporally phase-shifted activation patterns into
temporally correlated activations, wherein binary labels of
features of the temporally correlated activations are extractable
for compression or processing thereof.
[0098] In still another embodiment, a system for feature extraction
of surface manifolds can be implemented. Such a system can include
a means for generating point data with respect to at least one
surface manifold and a means for performing an AHAH-based feature
extraction operation on said point data for compression and
processing thereof. In another embodiment, such a system may
include a means for scanning at least one voxel of a point cloud
composed of said point data in order to recover said compressed
binary label which represents prototypical surface patches with
respect to said at least one surface manifold. In another
embodiment of such a system, the point data can include LIDAR point
data and said surface manifolds can include LIDAR surface
manifolds. In another embodiment of such a system, the LIDAR point
data is capable of being represented as a voxel with at least one
active grid element and at least one inactive grid element, wherein
a voxel grid element is considered active if said voxel grid element
contains within a spatial boundary thereof at least one LIDAR
point. In yet another embodiment of such a system, the output of
said voxel can include a sparse binary activation vector including
grid elements that are active.
[0099] It will be appreciated that variations of the
above-disclosed and other features and functions, or alternatives
thereof, may be desirably combined into many other different
systems or applications. It will also be appreciated that various
presently unforeseen or unanticipated alternatives, modifications,
variations or improvements therein may be subsequently made by
those skilled in the art which are also intended to be encompassed
by the following claims.
* * * * *