U.S. patent application number 14/082059 was filed with the patent office on 2014-05-15 for self-organizing sensing and actuation for automatic control.
This patent application is currently assigned to GENERAL CYBERNATION GROUP, INC. The applicant listed for this patent is GENERAL CYBERNATION GROUP, INC. Invention is credited to George Shu-Xing Cheng.
Application Number: 20140136455 (14/082059)
Family ID: 50682697
Filed Date: 2014-05-15

United States Patent Application 20140136455, Kind Code A1
Cheng; George Shu-Xing
May 15, 2014
Self-Organizing Sensing and Actuation for Automatic Control
Abstract
A Self-Organizing Process Control Architecture is introduced
with a Sensing Layer, Control Layer, Actuation Layer, Process
Layer, as well as Self-Organizing Sensors (SOS) and Self-Organizing
Actuators (SOA). A Self-Organizing Sensor for a process variable
with one or multiple input variables is disclosed. An artificial
neural network (ANN) based dynamic modeling mechanism as part of
the Self-Organizing Sensor is described. As a case example, a
Self-Organizing Soft-Sensor for CFB Boiler Bed Height is presented.
Also provided is a method to develop a Self-Organizing Sensor.
Inventors: Cheng; George Shu-Xing (Folsom, CA)
Applicant: GENERAL CYBERNATION GROUP, INC., Rancho Cordova, CA, US
Assignee: GENERAL CYBERNATION GROUP, INC., Rancho Cordova, CA
Family ID: 50682697
Appl. No.: 14/082059
Filed: November 15, 2013
Related U.S. Patent Documents
Application Number 61727045, filed Nov 15, 2012
Current U.S. Class: 706/12; 700/20; 703/2; 706/17; 706/25
Current CPC Class: G05B 13/048 (20130101); G05B 15/02 (20130101)
Class at Publication: 706/12; 700/20; 706/25; 703/2; 706/17
International Class: G06N 3/08 (20060101) G06N003/08; G05B 15/02 (20060101) G05B015/02
Government Interests
[0001] This invention was made with government support under SBIR
grant DE-SC0008235 awarded by the U.S. Department of Energy. The
government has certain rights in the invention.
Claims
1. A self-organizing process control system, comprising: a) a
control layer that includes one or multiple automatic controllers
for controlling various process variables; b) a sensing layer that
includes one or multiple sensors for measuring various process
variables; c) an actuation layer that includes one or multiple
actuators that take control command signals from the controllers
and manipulate certain process inputs or manipulated variables; d)
a process layer that includes physical processes or systems with
inputs and outputs that have dynamic relationships; and e) one or
more of a self-organizing sensor (SOS) and a self-organizing
actuator (SOA).
2. The control system of claim 1, in which e) comprises
self-organizing sensors (SOS) having parallel and distributed
information processing capabilities.
3. The control system of claim 1, in which e) comprises
self-organizing actuators (SOA) having parallel and distributed
information processing capabilities.
4. The control system of claim 1, comprising a self-organizing
sensor (SOS) characterized by one or more of: a) having one or
multiple inputs from the process layer; b) having one or multiple
inputs from the sensing layer; c) sending its output to the sensing
layer; and d) sending its output to the control layer.
5. The control system of claim 1, comprising a self-organizing
actuator (SOA) characterized by one or more of: a) taking commands
from the control layer; b) having inputs from the sensing layer;
and c) manipulating one manipulated variable or manipulating
multiple manipulated variables in a coordinated way at the same
time.
6. A method of developing a self-organizing sensor for estimating a
variable of interest in a system, comprising: a) determining a
relationship between (i) the variable of interest at steady-state
and (ii) one or more variables of the system that are predetermined
or pre-measured in the steady-state; b) converting the variable of
interest to a target variable; c) constructing a dynamic model
mechanism having one or multiple variables of the system as inputs;
d) producing a dynamic model output; e) training the dynamic model
so that its output tracks a given trajectory of the target
variable; and f) combining or selecting the dynamic model output
and the target variable to produce a final self-organizing sensor
output.
7. The method of claim 6, in which the training of the dynamic
model is designed to minimize a difference between the dynamic
model output and the target variable.
8. The method of claim 6, in which the training of the dynamic
model is designed to determine whether the dynamic model is still
in its learning phase based upon one or more of a model error and
the convergence of weighting factors of the dynamic model.
9. The method of claim 6, in which the variable of interest is a
process variable of interest in a physical process, comprising: a)
selecting or determining a variable Y.sub.S(t) that is the process
variable of interest or a process variable that is related to the
process variable of interest at steady-state condition; b) deriving
a formula to calculate Y.sub.S(t) based on one or multiple
variables or parameters obtained by one or more of pre-measurement
in the steady-state of the process, determination through
experimentation, and manual entry; c) converting the variable
Y.sub.S(t) to a target variable Y.sub.T(t) of the process variable
of interest; wherein a converter is designed based on energy and
material balance calculations of the process or a ratio of the
input and output signals of the converter; d) using a dynamic
modeling mechanism to produce a dynamic model output Y.sub.D(t); e)
training the dynamic model based on minimizing the model error
e.sub.M(t) which is the difference between the dynamic model output
Y.sub.D(t) and the target variable Y.sub.T(t); f) judging the
amplitude of the model error e.sub.M(t) and the convergence of the
weighting factors of the dynamic model to determine whether the
dynamic model is still in its learning phase; g) implementing a
combiner and switch mechanism to combine or select Y.sub.D(t) and
Y.sub.T(t) to produce the final self-organizing sensor output Y(t);
h) using the estimated target variable Y.sub.T(t) as the
self-organizing sensor output, if the dynamic model is still in its
learning phase; and i) using the Y.sub.D(t) as the self-organizing
sensor output, if the dynamic model has finished its learning
phase.
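Purely as an illustrative sketch (not part of the claimed disclosure), the judging and combiner-and-switch steps f) through i) above might be coded as follows. The function name, threshold parameters, and convergence test are assumptions for demonstration only:

```python
# Hypothetical combiner-and-switch logic for a self-organizing sensor.
# Thresholds error_tol and converge_tol are illustrative assumptions.

def sos_output(y_t, y_d, model_errors, weight_deltas,
               error_tol=0.01, converge_tol=1e-4):
    """Select the final self-organizing sensor output Y(t).

    y_t: target variable Y_T(t) derived from the steady-state model
    y_d: dynamic model output Y_D(t)
    model_errors: recent values of e_M(t) = Y_T(t) - Y_D(t)
    weight_deltas: recent magnitudes of dynamic-model weight updates
    """
    # Judge the amplitude of the model error e_M(t) ...
    error_small = all(abs(e) < error_tol for e in model_errors)
    # ... and the convergence of the weighting factors
    weights_converged = all(d < converge_tol for d in weight_deltas)
    learning_phase = not (error_small and weights_converged)
    # Learning phase: use the target variable; otherwise use the model output
    return y_t if learning_phase else y_d
```

In this sketch the switch is a hard selection; the claim also permits combining the two signals, e.g. by a weighted blend during the transition.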
10. The method of claim 9, wherein the use of a dynamic modeling
mechanism comprises using an artificial neural network (ANN) based
dynamic modeling mechanism.
11. A self-organizing sensor (SOS) for estimating a process
variable of interest in a physical process, comprising: a) a
steady-state model, including: i) a plurality of inputs,
S.sub.1(t), S.sub.2(t), . . . , S.sub.L(t), wherein L is the number
of input variables or parameters, and ii) an output Y.sub.S(t),
which is a function of the inputs S.sub.1(t), S.sub.2(t), . . . ,
S.sub.L(t) substantially in the following form:
Y.sub.S(t)=F.sub.S[S.sub.1(t),S.sub.2(t), . . . ,S.sub.L(t)],
wherein F.sub.s[.] is a function that describes the input and
output relationship of the steady-state model; b) a dynamic model,
including: i) a plurality of inputs, x.sub.1(t), x.sub.2(t), . . .
, x.sub.M(t), wherein M is the number of input variables, and ii)
an output Y.sub.D(t) that tracks the given trajectory of its target
variable Y.sub.T(t) under process dynamic and operating condition
changes; c) a converter that converts the output Y.sub.S(t) of the
steady-state model to the target variable Y.sub.T(t); d) an adder
that implements the following function:
e.sub.M(t)=Y.sub.T(t)-Y.sub.D(t); e) an objective function for the
dynamic model that is substantially in the following form:
E.sub.M(t)=1/2e.sub.M(t).sup.2=1/2[Y.sub.T(t)-Y.sub.D(t)].sup.2;
and f) a combiner and switch mechanism that combines
or selects the signals Y.sub.D(t) and Y.sub.T(t) to produce the
final Soft-Sensor output Y(t).
12. The self-organizing sensor (SOS) of claim 11, wherein the
inputs of the steady-state model are in one or more of the following
forms: a) process variables measured online from the process
directly, b) process variables or other information from the sensor
networks, c) internal variables or parameters of the SOS, and d)
parameters set from a computer or device.
13. The self-organizing sensor (SOS) of claim 11, wherein the
dynamic model is an artificial neural network (ANN), comprising: a)
an input signal processing layer, having a plurality of input
signals x.sub.1(t), x.sub.2(t), . . . , x.sub.M(t), wherein M=1, 2,
3, . . . , as an integer, whereby each of the input signals moves
iteratively through a series of delay units so that a set of
normalized input signals X.sub.1 to X.sub.N is generated
substantially in the following form:
X.sub.1=N(x.sub.1(1)), X.sub.2=N(x.sub.1(2)), . . . , X.sub.10=N(x.sub.1(10)),
X.sub.11=N(x.sub.2(1)), X.sub.12=N(x.sub.2(2)), . . . , X.sub.20=N(x.sub.2(10)),
. . . ,
X.sub.N-9=N(x.sub.M(1)), X.sub.N-8=N(x.sub.M(2)), . . . , X.sub.N=N(x.sub.M(10)),
wherein N=1, 2, 3, . . . , as an integer, and N(.) denotes the
normalization function; b) a hidden layer with N neurons, wherein
each input signal is conveyed separately to each of the neurons in
the hidden layer via a path weighted by an individual weighting
factor W.sub.ij, where i=1, 2, . . . N, and j=1, 2, . . . N, and
the inputs to each of the neurons in the hidden layer are summed by
a set of adders to produce signal P.sub.j, which is further
filtered by a set of activation functions f(.) to produce Q.sub.j,
wherein j=1, 2, . . . N, which denotes the jth neuron in the hidden
layer; c) a piecewise continuous linear function f(x) mapping real
numbers to [0,1] used as the activation function in the neural
network as defined by: f(x)=0, if x<-b/a; f(x)=ax+b, if
-b/a.ltoreq.x.ltoreq.b/a; f(x)=1, if x>b/a;
where a is an arbitrary constant and b=1/2; d) an
output layer with one neuron, wherein each output signal from the
hidden layer is conveyed to the single neuron in the output layer
via a path weighted by an individual weighting factor h.sub.j,
where j=1, 2, . . . N; e) an output signal Y.sub.D(t) generated by
the ANN based on the following difference equations:
P.sub.j(n)=.SIGMA..sub.i=1.sup.N w.sub.ij(n)X.sub.i(n),
Q.sub.j(n)=f(P.sub.j(n)),
O(n)=f(.SIGMA..sub.j=1.sup.N h.sub.j(n)Q.sub.j(n))=a.SIGMA..sub.j=1.sup.N h.sub.j(n)Q.sub.j(n)+b,
Y.sub.D(t)=100[a.SIGMA..sub.j=1.sup.N h.sub.j(n)Q.sub.j(n)+b],
wherein n denotes the nth iteration, O(t) is the
continuous function of O(n), and D(.) is a de-normalization
function; and f) a mechanism to minimize the objective function
E.sub.M(t) by adjusting the weighting factors in the artificial
neural networks substantially in the following form:
.DELTA.w.sub.ij(n)=a.sup.2.eta.e.sub.M(n)X.sub.i(n)h.sub.j(n),
.DELTA.h.sub.j(n)=a.eta.e.sub.M(n)Q.sub.j(n), wherein .eta.>0 is
the learning rate, e.sub.M(n) is the discrete signal of model error
e.sub.M(t), a is a constant, and X.sub.i(n) is the ith input
signal.
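The difference equations and weight-update rules of this claim can be exercised numerically. The following is a minimal sketch, assuming a toy dimension N, arbitrary constant a, b=1/2, and a small learning rate; the function names and sizes are illustrative, not from the patent:

```python
# Toy numeric sketch of the ANN in claim 13: forward pass per the
# difference equations, then the delta-w and delta-h update rules.

a, b = 0.5, 0.5     # activation slope a (arbitrary) and b = 1/2
eta = 0.1           # learning rate, eta > 0

def f(x):
    """Piecewise continuous linear activation mapping onto [0, 1]."""
    if x < -b / a:
        return 0.0
    if x > b / a:
        return 1.0
    return a * x + b

def ann_step(X, W, h, y_target):
    """One iteration n: produce Y_D(n), then adjust W and h in place.

    X: normalized inputs X_1..X_N; W: N x N weights w_ij;
    h: output weights h_1..h_N; y_target: target variable Y_T(n).
    """
    N = len(X)
    P = [sum(W[i][j] * X[i] for i in range(N)) for j in range(N)]
    Q = [f(p) for p in P]
    s = sum(h[j] * Q[j] for j in range(N))
    O = f(s)                       # equals a*s + b in the linear region
    y_d = 100.0 * O                # de-normalized output Y_D(n)
    e_m = y_target - y_d           # model error e_M(n) = Y_T(n) - Y_D(n)
    for i in range(N):             # delta-w_ij = a^2 * eta * e_M * X_i * h_j
        for j in range(N):
            W[i][j] += a**2 * eta * e_m * X[i] * h[j]
    for j in range(N):             # delta-h_j = a * eta * e_M * Q_j
        h[j] += a * eta * e_m * Q[j]
    return y_d, e_m
```

Iterating `ann_step` with a fixed target drives the model error toward zero, which is the training behavior the claim describes.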
14. The self-organizing sensor (SOS) of claim 11, in which the
self-organizing sensor (SOS) is a CFB bed height self-organizing
sensor for estimating the bed height of a circulating fluidized-bed
(CFB) boiler, comprising: a) a process variable PT.sub.B that is
the bed thickness pressure differential, which is calculated
substantially in the following form: PT.sub.B=PT.sub.2-PT.sub.1,
wherein PT.sub.1 is Damper Delta P and PT.sub.2 is Furnace Delta P
of the CFB boiler, which are estimated in the idle or steady-state
condition of the CFB boiler through experimentation; b) a
steady-state model, including: i) a plurality of inputs:
PT.sub.1(t), PT.sub.2(t), H.sub.0, and a1, wherein PT.sub.1 is
Damper Delta P, PT.sub.2 is Furnace Delta P, H.sub.0 is Bed Height
at PT.sub.B=0, and a1 is a constant found in the steady-state
through experimentation; ii) an output H.sub.s(t), which is
calculated substantially in the following form:
H.sub.S(t)=a.sub.1*PT.sub.B+H.sub.0; c) a dynamic model, including:
i) a plurality of inputs: PT.sub.2(t), F.sub.C(t), F.sub.R(t),
F.sub.D(t), F.sub.P(t); wherein PT.sub.2(t) is Furnace Delta P,
F.sub.C(t) is Coal Flow, F.sub.R(t) is Recycle Flow, F.sub.D(t) is
Disposal Flow, and F.sub.P(t) is Primary Air of the CFB boiler; ii)
an output H.sub.D(t) that tracks the given trajectory of its target
variable H.sub.T(t) under process dynamic and operating condition
changes; d) a converter that converts the output H.sub.s(t) of the
steady-state model to the target variable H.sub.T(t); e) an adder
that implements the following function:
e.sub.M(t)=H.sub.T(t)-H.sub.D(t); f) an objective function for the
dynamic model substantially in the following form:
E.sub.M(t)=1/2e.sub.M(t).sup.2=1/2[H.sub.T(t)-H.sub.D(t)].sup.2; and g)
a combiner and switch mechanism that either combines or selects the
signals H.sub.D(t) and H.sub.T(t) to produce the final soft-sensor
output H(t).
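As a numeric illustration of the steady-state model in b), the calculation H.sub.S(t)=a.sub.1*PT.sub.B+H.sub.0 with PT.sub.B=PT.sub.2-PT.sub.1 can be sketched as follows; the values of a1 and H0 below are made-up placeholders (the patent determines them through steady-state experimentation):

```python
# Illustrative steady-state bed-height calculation for a CFB boiler.
# a1 and h0 here are placeholder constants, not values from the patent.

def bed_height_steady_state(pt1, pt2, a1, h0):
    """H_S(t) = a1 * PT_B + H0, with PT_B = PT_2 - PT_1.

    pt1: Damper Delta P (PT_1), pt2: Furnace Delta P (PT_2),
    a1: slope found in the steady-state through experimentation,
    h0: bed height at PT_B = 0.
    """
    pt_b = pt2 - pt1       # bed thickness pressure differential PT_B
    return a1 * pt_b + h0

# With placeholder constants: PT_B = 8 - 3 = 5, so H_S = 0.4*5 + 10 = 12.0
print(bed_height_steady_state(3.0, 8.0, a1=0.4, h0=10.0))
```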
15. The CFB bed height self-organizing sensor of claim 14, wherein
the dynamic model is an artificial neural network (ANN),
comprising: a) an input signal processing layer, including a
plurality of input signals x.sub.1(t), x.sub.2(t), . . . ,
x.sub.M(t), wherein M=1, 2, 3, . . . , as an integer, each of the
input signals moves iteratively through a series of delay units so
that a set of normalized input signals X.sub.1 to X.sub.N is
generated substantially in the following form:
X.sub.1=N(PT.sub.2(1)), X.sub.2=N(PT.sub.2(2)), . . . , X.sub.10=N(PT.sub.2(10)),
X.sub.11=N(F.sub.C(1)), X.sub.12=N(F.sub.C(2)), . . . , X.sub.20=N(F.sub.C(10)),
. . . ,
X.sub.N-9=N(F.sub.P(1)), X.sub.N-8=N(F.sub.P(2)), . . . , X.sub.N=N(F.sub.P(10)),
wherein N=1, 2, 3, . . . 49, 50 as an integer, and N(.) denotes the
normalization function; b) a hidden layer with N neurons, wherein
each input signal is conveyed separately to each of the neurons in
the hidden layer via a path weighted by an individual weighting
factor W.sub.ij, where i=1, 2, . . . N, and j=1, 2, . . . N, and
the inputs to each of the neurons in the hidden layer are summed by
a set of adders to produce signal P.sub.j, which is further
filtered by a set of activation functions f(.) to produce Q.sub.j, wherein
j=1, 2, . . . N, which denotes the jth neuron in the hidden layer;
c) a piecewise continuous linear function f(x) mapping real numbers
to [0,1] being used as the activation function in the neural
network as defined by: f(x)=0, if x<-b/a; f(x)=ax+b, if
-b/a.ltoreq.x.ltoreq.b/a; f(x)=1, if x>b/a;
where a is an arbitrary constant and b=1/2; d) an
output layer with one neuron, wherein each output signal from the
hidden layer is conveyed to the single neuron in the output layer
via a path weighted by an individual weighting factor h.sub.j,
where j=1, 2, . . . N; and e) an output signal H.sub.D(t) generated
by the ANN based on the following difference equations:
P.sub.j(n)=.SIGMA..sub.i=1.sup.N w.sub.ij(n)X.sub.i(n),
Q.sub.j(n)=f(P.sub.j(n)),
O(n)=f(.SIGMA..sub.j=1.sup.N h.sub.j(n)Q.sub.j(n))=a.SIGMA..sub.j=1.sup.N h.sub.j(n)Q.sub.j(n)+b,
H.sub.D(t)=100[a.SIGMA..sub.j=1.sup.N h.sub.j(n)Q.sub.j(n)+b],
wherein n denotes the nth iteration, O(t) is the
continuous function of O(n), and D(.) is a de-normalization
function.
16. The CFB bed height self-organizing sensor of claim 15, wherein
the minimization of the objective function E.sub.M(t) is
accomplished by adjusting the weighting factors in the artificial
neural networks (ANN) substantially in the following form:
.DELTA.w.sub.ij(n)=a.sup.2.eta.e.sub.M(n)X.sub.i(n)h.sub.j(n),
.DELTA.h.sub.j(n)=a.eta.e.sub.M(n)Q.sub.j(n), wherein .eta.>0 is
the learning rate, e.sub.M(n) is the discrete signal of model error
e.sub.M(t), a is a constant, and X.sub.i(n) is the ith input
signal.
Description
[0002] The subject of this patent relates to sensing, actuation,
and automatic control of physical processes including industrial
processes, equipment, facilities, buildings, devices, boilers,
valve positioners, motion stages, drives, motors, turbines,
compressors, engines, robotics, vehicles, and appliances.
[0003] In the foreseeable future, the energy needed to support our
economic growth will continue to come mainly from coal, our
nation's most abundant and lowest cost resource. The performance of
coal-fired power plants is highly dependent on coordinated and
integrated sensing, control, and actuation technologies and
products.
[0004] The implementation of sensors and advanced controls in power
systems can provide valuable methods to improve operational
efficiency, reduce emissions, and lower operating costs. As new
power generation technologies and systems mature, the plant that
encompasses these systems will become inherently complex. The
traditional process control architecture that includes a
conventional process layer, sensing layer, control layer, and
actuation layer would no longer be sufficient. The process control
architecture that supports the plant control systems therefore needs
to evolve to manage complexity and optimize performance.
[0005] On the other hand, with the advent of information
technology, sensor networks have been implemented in more and more
industrial plants. Most "modern" sensors and actuators are equipped
with Fieldbus, a digital network for the industrial environment,
that can send and receive useful information throughout the
network. However, much of the information from the sensor networks
is not very well utilized due to various reasons.
[0006] This patent introduces a novel Self-Organizing Process
Control Architecture based on distributed intelligence and
self-organizing methodologies that can distribute and use the
intelligence in the sensing and actuation levels to manage
complexity and solve real process control problems. This
Self-Organizing Process Control Architecture can enable distributed
intelligence at all levels, and allow the sensing and actuation
networks to function in a self-organizing manner. The
Self-Organizing Process Control Architecture comprises a Sensing
Layer, Control Layer, Actuation Layer, Process Layer, as well as
one or more of Self-Organizing Sensors (SOS) and Self-Organizing
Actuators (SOA). A Self-Organizing Sensor for a process variable
with one or multiple input variables is disclosed. An artificial
neural network (ANN) based dynamic modeling mechanism as part of
the Self-Organizing Sensor is also described. As a case example, a
Self-Organizing Soft-Sensor for CFB Boiler Bed Height is presented.
Finally, a method to develop a Self-Organizing Sensor is
disclosed.
[0007] In the accompanying drawings:
[0008] FIG. 1 is a block diagram illustrating a traditional
single-loop automatic control system incorporating a sensor,
controller, actuator, and process under control.
[0009] FIG. 2 is a block diagram illustrating a traditional process
control architecture encompassing the Sensing Layer, Control Layer,
Actuation Layer, and Process Layer.
[0010] FIG. 3 is a block diagram illustrating a novel
Self-Organizing Process Control Architecture comprising the Sensing
Layer, Control Layer, Actuation Layer, Process Layer, as well as
one or more of Self-Organizing Sensors (SOS) and Self-Organizing
Actuators (SOA) according to an embodiment of this invention.
[0011] FIG. 4 is a schematic representation of the combustion
process of a Circulating Fluidized-Bed (CFB) boiler.
[0012] FIG. 5 is a diagram illustrating the relationship between
the Primary Air and Damper Delta P of a CFB boiler according to an
embodiment of this invention.
[0013] FIG. 6 is a diagram illustrating the relationship between
the Bed Thickness Pressure Differential and Bed Height in
Steady-State Condition in a CFB boiler according to an embodiment
of this invention.
[0014] FIG. 7 is a block diagram illustrating a Self-Organizing
Soft-Sensor for CFB Boiler Bed Height according to an embodiment of
this invention.
[0015] FIG. 8 is a block diagram illustrating an artificial neural
network (ANN) based dynamic modeling mechanism as part of the
Self-Organizing Soft-Sensor for the CFB Boiler Bed Height according
to an embodiment of this invention.
[0016] FIG. 9 is a block diagram illustrating a Self-Organizing
Sensor for a process variable with one or multiple input variables
according to an embodiment of this invention.
[0017] FIG. 10 is a block diagram illustrating an artificial neural
network (ANN) based dynamic modeling mechanism as part of the
Self-Organizing Sensor for a process variable according to an
embodiment of this invention.
[0018] In this patent, the term "mechanism" is used to represent
hardware, software, or any combination thereof. The term "process"
is used to represent a physical system or process with inputs and
outputs that have dynamic relationships. The term "sensor" is used
to represent a sensing mechanism. The term "Soft-Sensor" is used to
represent a sensing mechanism typically implemented in computer
software. The term "Process Variable of Interest" is used to
represent a process variable that is important to the control and
operation of the process but is too difficult or costly to measure
using conventional methods. The term "Target Variable" is used to
represent the target value for the "Process Variable of
Interest".
Without loss of generality, all numerical values given in
this patent are examples. Other values can be used without
departing from the spirit or scope of the invention. The
description of specific embodiments herein is for demonstration
purposes and in no way limits the scope of this disclosure to
exclude other not specially described embodiments of this
invention.
DESCRIPTION
A. Traditional Process Control Architecture
[0020] Traditionally, automatic control is based on the concept of
feedback. The essence of the feedback theory consists of three
components: measurement, comparison, and correction. Measuring the
quantity of the variable to be controlled, comparing it with the
desired value, and using the error to correct the control action is
the basic procedure of feedback automatic control.
[0021] FIG. 1 is a block diagram illustrating a traditional
single-loop automatic control system incorporating a Controller 10,
an Actuator 12, Process 14, a Sensor 16, and Adders 18 and 20. The
Sensor 16 measures the Process Variable (PV) to be controlled. The
Measured Process Variable y(t) is compared at Adder 18 with the
Setpoint (SP) signal r(t) to produce an error signal e(t), which is
used as the input to the Controller 10. The control objective is
for the Controller 10 to produce an output (OP) signal u(t) to
drive the Actuator 12 to manipulate the Process 14 so that the
Process Variable (PV) tracks the given trajectory of the Setpoint.
The signals shown in FIG. 1 are as follows:
[0022] r(t)--Setpoint (SP),
[0023] PV--Process Variable, PV=x(t)+d(t),
[0024] y(t)--Measured Process Variable,
[0025] x(t)--Process Output,
[0026] u(t)--Controller Output (OP),
[0027] d(t)--Disturbance, the disturbance caused by noise or load
changes,
[0028] e(t)--Error between the Setpoint and Measured Variable,
e(t)=r(t)-y(t).
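The loop structure and the error computation e(t)=r(t)-y(t) can be sketched in software as follows. A proportional control law and a first-order process response are used as stand-ins here, purely as assumptions, since FIG. 1 does not fix the controller type or process dynamics:

```python
# Minimal sketch of the single-loop feedback structure of FIG. 1.
# The proportional gain kp and the process response are illustrative.

def feedback_step(r, y, kp=0.5):
    """One loop iteration: e(t) = r(t) - y(t), then u(t) = Kp * e(t)."""
    e = r - y          # error formed at Adder 18
    u = kp * e         # Controller 10 output OP
    return e, u

# Toy process: the measured PV y(t) moves toward the controller output,
# so the PV tracks the given trajectory of the Setpoint r(t).
y, r = 0.0, 1.0
for _ in range(20):
    e, u = feedback_step(r, y)
    y += 0.2 * u       # crude actuator/process response
```

After repeated iterations the error shrinks, which is the "measurement, comparison, and correction" cycle described above.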
[0029] For simplification, the sensor and actuator are typically
included as part of the process. Therefore, the Measured Process
Variable y(t) can be considered the same as the Process
Variable.
[0030] FIG. 2 is a block diagram illustrating a traditional process
control architecture encompassing the Control Layer 22, Sensing
Layer 24, Actuation Layer 26, and Process Layer 28. Note that both
FIGS. 1 and 2 show the signals flowing from the Process to Sensing, to
Control, to Actuation, and then back to the Process in a loop.
That is why a feedback control system is sometimes referred to as a
control loop.
[0031] The Process Layer includes physical processes or systems
with inputs and outputs that have dynamic relationships. For
instance, a Circulating Fluidized-Bed (CFB) Boiler is a physical
process that has multiple process variables to be controlled.
[0032] The Sensing Layer includes multiple sensors for measuring
various process variables. These sensors can vary significantly in
size, type, and physical characteristics. For instance, for a CFB
boiler, Bed Temperature, Excess O.sub.2, and Furnace Negative
Pressure are typically measured.
[0033] The Control Layer includes multiple automatic controllers
for controlling various process variables. The controllers are
typically implemented in control devices such as Distributed
Control Systems (DCS), Programmable Logic Controllers (PLC),
Programmable Automation Controllers (PAC), Single-Loop Controllers
(SLC), or computer software. The controllers include Inputs/Outputs
(I/Os), communication buses, or digital networks to interface with
sensors and actuators. The Setpoints are the target values for the
process variables to track, which are entered, managed, and
monitored in the Control Layer. The Control Layer usually includes
a Graphical User Interface (GUI) for the operators to monitor the
process and control system.
[0034] The Actuation Layer includes multiple actuators that take
control command signals from the controllers and manipulate certain
process inputs or manipulated variables. For instance, for a CFB
boiler, Primary Air, Secondary Air, and Exhaust Air can be
manipulated in order to control the Bed Temperature, Excess
O.sub.2, and Furnace Negative Pressure.
[0035] By way of comparison, a traditional process control
architecture may possess the following properties:
[0036] 1. Multiple sensors for measuring various process variables
may exist. However, they send the measurement signals to the
Control Layer only;
[0037] 2. Multiple actuators for controlling different process
variables may exist. However, they take commands from the Control
Layer only; and
[0038] 3. A sensor network may exist, but sensors do not talk to
each other.
B. Self-Organizing Process Control Architecture
[0039] In regard now to the present invention, the following first
reviews the concept of Distributed Intelligence, Self-Organizing,
and other related terms in preparation for further discussions of
the invention with reference in certain instances to FIGS.
3-10.
Distributed Intelligence
[0040] Distributed Intelligence can be considered an artificial
intelligence method that includes distributed solutions for solving
complex problems. It is closely related to Multi-Agent Systems.
Self-Organizing
[0041] Without using strict and academic type definitions,
Self-Organizing can be understood as an organization that is
achieved in a way that is parallel and distributed. Here, parallel
means that all the elements act at the same time, and distributed
means no element is a central coordinator.
Self-Organizing System
[0042] A self-organizing system is a complex system made up of
small and simple units connected to each other and having
self-organizing capabilities.
[0043] FIG. 3 is a block diagram illustrating a novel
Self-Organizing Process Control Architecture comprising the Control
Layer, Sensing Layer, Actuation Layer, Process Layer, as well as
Self-Organizing Sensing (SOS) and/or Self-Organizing Actuation
(SOA) components according to an embodiment of this invention. More
specifically, the Self-Organizing Process Control Architecture not
only comprises the Control Layer 32, Sensing Layer 34, Actuation
Layer 36, Process Layer 38, but also one or more of Self-Organizing
Sensors (SOS) 40, and Self-Organizing Actuators (SOA) 42.
[0044] Notice that the signal flows are not as simple as those of
traditional feedback control loops. The Self-Organizing Sensors
(SOS) and Self-Organizing Actuators (SOA) can have direct inputs
from the sensor networks. The intelligence has not only been
distributed in the sensing and actuation layers, but has also been
utilized. The signal flows indicate that this architecture is
beyond the scope of traditional control schemes.
[0045] This Self-Organizing Process Control Architecture can have
one or more of the following properties:
[0046] 1. A Self-Organizing Sensor (SOS) can have multiple inputs
from the sensor network.
[0047] 2. A Self-Organizing Sensor (SOS) can send its output to the
sensor networks.
[0048] 3. A Self-Organizing Actuator (SOA) can manipulate multiple
manipulated variables in a coordinated way at the same time.
[0049] Potential key differences, one or more of which may exist
between the traditional process control architecture and the
Self-Organizing Process Control Architecture, are compared and
summarized in Table 1.
TABLE 1. Comparison of Process Control Architectures
1. Common Property: Multiple sensors for measuring various process variables may exist. Traditional: Sensors send the measurement signals to the Control Layer only. Self-Organizing: Sensors may also send measurement signals to other sensors and actuators.
2. Common Property: Multiple actuators for controlling different process variables may exist. Traditional: Actuators take commands from the Control Layer only. Self-Organizing: A Self-Organizing Actuator (SOA) takes commands from the Controller and may have inputs from sensors.
3. Common Property: A sensor network may exist. Traditional: Sensors do not talk to each other. Self-Organizing: Sensors may talk to each other.
4. Common Property: A sensor typically measures one physical property. Traditional: A sensor typically has only one or two inputs. Self-Organizing: A Self-Organizing Sensor (SOS) can have multiple inputs from the sensor network.
5. Traditional: An actuator typically manipulates one manipulated variable. Self-Organizing: A Self-Organizing Actuator (SOA) can manipulate multiple variables in a coordinated way at the same time.
6. Self-Organizing: A Self-Organizing Sensor (SOS) can send its output to the sensor networks.
C. Self-Organizing Sensor (SOS) for Circulating Fluidized-Bed (CFB)
Boilers
[0050] To realize and describe the concept, properties, and
significance of the Self-Organizing Process Control Architecture, a
realistic sensing scenario is investigated in the context of an
industrial process control, where conventional sensors do not
work.
[0051] Circulating fluidized-bed (CFB) boilers are becoming
strategic in power and energy generation. The unique design allows
fuel such as coal powders to be fluidized in the air so that they
have better contact with the surrounding air for better combustion.
CFB boilers can burn low-grade materials such as waste coal, wood,
and refuse-derived fuel. More importantly, fewer emissions such as
COx and NOx are produced compared to conventional boilers.
[0052] FIG. 4 is a schematic representation of the combustion
process of a Circulating Fluidized-Bed (CFB) boiler. Through the
Coal Feeder, fuel is fed to the lower furnace where it is burned in
an upward flow of combustion air. Unburned fuel and ash leaving the
furnace are collected by the Cyclone Separator and returned to the
lower furnace. Limestone is also fed to the lower furnace for
emission reduction. Multiple fans and dampers are used to form the
Primary Air, Secondary Air, and Exhaust Air as manipulated
variables to achieve the following control objectives:
[0053] 1. Hold the proper CFB circulating condition;
[0054] 2. Keep the combustion fuel-air-ratio; and
[0055] 3. Control the furnace negative pressure.
Since each manipulated variable can affect all three control
objectives, this is a strongly coupled multivariable process.
[0056] In a CFB furnace, there are four regions based on the
vertical distribution of solids, which can be coal or fuel powder.
They are the Bottom Region, Dense Region, Dilute Region, and Exit
Region. The Bed Thickness is a process variable representing the
thickness or the height of the Dense Region.
[0057] Since the Dense Region has the highest heat-transfer
efficiency through direct contact with the furnace wall, it is
important to run the CFB furnace at an optimal Bed Thickness. If
the fluidized bed is too thin, the heat-transfer efficiency is low.
If the fluidized bed is too thick, it cannot be held up. So, it is
desirable to run the CFB furnace at the maximum Bed Thickness
possible, while not causing other operating-condition problems such
as a fuel-air ratio mismatch. An appropriate amount of air and
pressure is required to establish the fluidized bed and maintain an
optimal fuel-air ratio at the same time.
[0058] This, indeed, can be a very complex problem for which the
industry still does not have good answers. Typically, plants run a
trial-and-error based operation, and the Bed Thickness is fixed at
a relatively conservative and safe position. This results in low
efficiency and potential CFB furnace shutdowns if the fuel type and
size suddenly change. Automatic control of Bed Thickness is very
important for the new generation of CFB boilers, especially
Supercritical CFB boilers. However, this presents not only a
control issue but also a measurement problem. To conclude, the CFB
Boiler Bed Thickness is difficult to measure, yet it is a Process
Variable of Interest. As defined at the beginning of this document,
the term "Process Variable of Interest" is used to represent a
process variable that is important to the control and operation of
the process but is too difficult or costly to measure using
conventional methods.
CFB Boiler Bed Height Steady-State Model
[0059] From an automatic control point of view, controlling the Bed
Height or Bed Thickness should achieve similar results. Therefore,
a feature of the invention can provide simplification to the design
by developing a CFB Bed Height Soft-Sensor instead of a CFB Bed
Thickness Soft-Sensor.
[0060] From a material balance point of view, Bed Thickness is
related to the actual fuel amount or fuel density. For a coal-fired
CFB boiler, fuel is mainly composed of coal powder. Since the
fluidized bed varies, the actual Bed Thickness of a CFB boiler can
only be a rough estimate. On the other hand, the pressure
differential between the top and bottom of the fuel can be used as
a key variable. This is defined as Bed Thickness Pressure
Differential, PT.sub.B.
[0061] In the fluidized condition, the Bed Thickness Pressure
Differential, PT.sub.B, is representative of the fuel height. In
addition, PT.sub.B is proportional to the weight of the fuel
lifted, which can be described as
PT.sub.B=K*H.sub.S(t)+PT.sub.0, (1)
where
[0062] PT.sub.B is the Bed Thickness Pressure Differential,
[0063] H.sub.S(t) is the Bed Height in the Steady-State
Condition,
[0064] K is a constant that is related to the CFB and coal grade,
etc.,
[0065] PT.sub.0 is the Bed Thickness Pressure Differential when CFB
is idle.
Equation (1) can also be written as
H.sub.S(t)=a.sub.1*PT.sub.B+H.sub.0, (2)
where
[0066] H.sub.0 is the Bed Height at PT.sub.B=0, which can be
estimated in the idle or steady-state condition of the CFB boiler
through experimentation.
[0067] a.sub.1=1/K is a constant that can be found in the
steady-state through experimentation. Although PT.sub.B cannot be
measured, it can be calculated based on the following formula:
PT.sub.B=PT.sub.2-PT.sub.1, (3)
where
[0068] PT.sub.1 is the pressure differential across the CFB furnace
bottom damper, called Damper Delta P for short, and
[0069] PT.sub.2 is the pressure differential between the bottom and
top of the CFB furnace, called Furnace Delta P for short.
[0070] Reference is made to FIG. 4 for the actual locations of
PT.sub.1 and PT.sub.2. Notice that PT.sub.1 can be measured when
there is no fuel in the CFB furnace at different Primary Air
(F.sub.P) operating points.
[0071] FIG. 5 is a diagram illustrating the relationship between
the Primary Air F.sub.P and Damper Delta P of a CFB boiler. As
shown in FIG. 5, the relationship between PT.sub.1 and F.sub.p is
typically nonlinear.
[0072] FIG. 6 is a diagram illustrating the relationship between
the Bed Thickness Pressure Differential and Bed Height in
Steady-State Condition in a CFB boiler. Since PT.sub.2 is
measurable during normal operations, PT.sub.B can be calculated. In
this way, the steady-state CFB Bed Height H.sub.S(t) can be
calculated based on the following formula.
H.sub.S(t)=a.sub.1*(PT.sub.2-PT.sub.1)+H.sub.0. (4)
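Equations (3) and (4) together give a direct way to compute the steady-state Bed Height from measured and pre-measured signals. Below is a minimal Python sketch; the interpolation of the pre-measured PT.sub.1 curve (FIG. 5) and all numeric values (a.sub.1, H.sub.0, the operating points) are illustrative assumptions, not plant data:

```python
import numpy as np

def damper_delta_p(f_p, f_p_points, pt1_points):
    """Interpolate the pre-measured Damper Delta P (PT1) for the current
    Primary Air flow F_P, following the nonlinear curve of FIG. 5."""
    return float(np.interp(f_p, f_p_points, pt1_points))

def steady_state_bed_height(pt2, pt1, a1, h0):
    """Equation (4): H_S(t) = a1 * (PT2 - PT1) + H0."""
    return a1 * (pt2 - pt1) + h0

# Illustrative numbers only; a1, H0, and the PT1 curve are plant-specific.
f_p_pts = [10.0, 20.0, 30.0, 40.0]   # Primary Air operating points
pt1_pts = [0.5, 1.4, 2.8, 5.0]       # pre-measured Damper Delta P at each point
pt1 = damper_delta_p(25.0, f_p_pts, pt1_pts)
h_s = steady_state_bed_height(pt2=4.0, pt1=pt1, a1=0.8, h0=0.3)
```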
Self-Organizing Soft-Sensor for CFB Boiler Bed Height
[0073] FIG. 7 is a block diagram illustrating a Self-Organizing
Soft-Sensor for CFB Boiler Bed Height according to an embodiment of
this invention. The Soft-Sensor comprises a CFB Bed Height
Steady-State Model 44, an ANN-Based Self-Organizing CFB Bed Height
Dynamic Model 46, a Converter 48, an Adder 50, and a Combiner and
Switch 52.
[0074] The CFB Bed Height Steady-State Model 44 produces the
Steady-State Bed Height H.sub.s(t) as described by Equation (4).
The ANN-Based Self-Organizing CFB Bed Height Dynamic Model 46 has
five inputs, PT.sub.2(t), F.sub.C(t), F.sub.R(t), F.sub.D(t), and
F.sub.P(t), and one output H.sub.D(t).
[0075] Although H.sub.S(t) is the Steady-State Bed Height, it
varies during normal operation because fuel flow, recycle flow, and
disposal flow all affect the actual amount of the fuel inside the
CFB furnace. Therefore, it is not a constant value. On the other
hand, although H.sub.S(t) is calculated while the CFB is running,
it is not the dynamic Bed Height. In fact, H.sub.S(t) should be
equal to the Bed Height or Fuel Height if the CFB shuts down. To
conclude, H.sub.S(t) reflects the amount of fuel and fuel density
in the CFB furnace. A feature of the invention can convert
H.sub.S(t) through Converter 48 to a targeted Bed Height
H.sub.T(t), which can be used as the target value for the neural
network training. H.sub.T(t) is defined and referenced herein using
the above Target Variable term.
[0076] The Converter 48 can be designed based on energy or material
balance calculations of the process or simply a ratio of the
Converter input and output signals. Without losing generality, a
ratio factor b.sub.1 can be multiplied to convert the Steady-State
Bed Height H.sub.S(t) to Bed Height Target Variable H.sub.T(t).
Here b.sub.1 is related to how much the Bed Height will increase
when the fluidized bed is established from the CFB idle condition.
For instance, if H.sub.S(t) is 2 meters and b.sub.1=3, then
H.sub.T(t)=3*H.sub.S(t)=6 meters.
[0077] The Combiner and Switch 52 is a mechanism to either combine
or select the Bed Height signals H.sub.D(t) and H.sub.T(t) to
produce the final Soft-Sensor output H(t). For instance, when the
dynamic model is still in its learning phase where H.sub.D(t)
cannot be used, the estimated Target Variable H.sub.T(t) can be
used as the Soft-Sensor output.
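The combine-or-select behavior of the Combiner and Switch 52 might be sketched as below; the blending weight alpha for the "combine" case is an assumption, since the text specifies only that the two signals are either combined or selected:

```python
def combiner_and_switch(h_d, h_t, learning_phase, alpha=1.0):
    """Produce the Soft-Sensor output H(t) from the dynamic-model
    output H_D(t) and the target variable H_T(t).  During the learning
    phase, H_T(t) is used alone; afterwards the two signals are blended
    with weight alpha (alpha=1.0 simply selects H_D(t))."""
    if learning_phase:
        return h_t
    return alpha * h_d + (1.0 - alpha) * h_t

# While the dynamic model is still learning, the estimated target
# variable serves as the Soft-Sensor output.
h = combiner_and_switch(5.8, 6.0, learning_phase=True)
```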
[0078] The variables in the block diagram of FIG. 7 are listed and
described in Table 2.
TABLE 2. CFB Bed Height Soft-Sensor Variables

Symbol       Name                               Type          Note
PT.sub.1     Damper Delta P                     Pre-measured  Based on Primary Air at different operating points.
PT.sub.2     Furnace Delta P                    Measured      Using pressure sensors.
PT.sub.B     Bed Thickness Pressure             Calculated    Based on Damper Delta P and Furnace Delta P.
             Differential
H.sub.0      Bed Height at Idle                 Pre-measured  When CFB is idle.
a.sub.1      Coefficient a                      Constant      Determined by experimentation.
F.sub.C      Coal Flow                          Measured      Using flow sensors.
F.sub.R      Recycle Flow                       Measured      Using flow sensors.
F.sub.D      Disposal Flow                      Measured      Using flow sensors.
F.sub.P      Primary Air                        Measured      Using sensors.
H.sub.S(t)   Bed Height at Steady-State         Calculated    The Bed Height that equals the fuel height when CFB is idle.
H.sub.T(t)   Bed Height, Target Variable        Calculated    The Bed Height Target Variable relating to H.sub.S(t).
H.sub.D(t)   Bed Height, Dynamic Model Output   Calculated    The output of the Dynamic Model.
e.sub.M(t)   Model Error                        Calculated    For ANN learning or training.
H(t)         Bed Height, Soft-Sensor Output     Calculated    Bed Height PV (Process Variable) for control.
b.sub.1      Coefficient b                      Constant      Determined by experiments.
[0079] In this case example, H(t) is the CFB Boiler Bed Height that
is produced by the Self-Organizing Soft-Sensor. Since controlling
the CFB Bed Thickness or CFB Bed Height can achieve similar
results, H(t) can be used as the Process Variable of Interest. By
using H(t) as the measured process variable, the Bed Height can be
automatically controlled, which can improve the safety and
efficiency of the CFB Boiler. This method can be of significant
importance for controlling boilers in future energy plants that
deliver maximum energy efficiency, near-zero emissions, fuel
flexibility, and multiple products.
ANN-Based Self-Organizing CFB Bed Height Dynamic Model
[0080] FIG. 8 is a block diagram illustrating an artificial neural
network (ANN) based dynamic modeling mechanism as part of the
Self-Organizing Soft-Sensor for the CFB Boiler Bed Height according
to an embodiment of this invention.
[0081] An objective for the CFB Bed Height Dynamic Model is to
produce an output H.sub.D(t) that can track the given trajectory of
its target variable H.sub.T(t) under process dynamic and operating
condition changes. In other words, the dynamic modeling objective
can be to minimize the model error e.sub.M(t) between the dynamic
model output H.sub.D(t) and its target variable H.sub.T(t) in an
online fashion.
e.sub.M(t)=H.sub.T(t)-H.sub.D(t) (5)
[0082] Then, the objective function for the CFB Bed Height Dynamic
Model can be selected as follows:
E.sub.M(t)=1/2*e.sub.M(t).sup.2=1/2*[H.sub.T(t)-H.sub.D(t)].sup.2. (6)
As shown in FIG. 8, the CFB Bed Height Dynamic Model comprises an
input signal processing layer 54 and a linear Perceptron multi-layer
artificial neural network that has one input layer 56, one hidden
layer with N neurons 58, and one output layer with one neuron 60.
There are five input signals:
[0083] PT.sub.2(t)--Furnace Delta P
[0084] F.sub.C(t)--Coal Flow
[0085] F.sub.R(t)--Recycle Flow
[0086] F.sub.D(t)--Disposal Flow
[0087] F.sub.p(t)--Primary Air
[0088] In the input signal processing layer 54, each of the input
signals passes iteratively through a series of delay units, where
z.sup.-1 denotes the unit delay operator. In computer real-time
programming, this can correspond to digitizing of an analog signal
at a pre-determined Sample Interval. As an example, the analog
signal PT.sub.2(t) becomes a series of discrete signals
PT.sub.2(1), PT.sub.2(2), . . . PT.sub.2(i), where i denotes the
ith sample. The digitization is based on a moving time interval and
is carried out continuously. The moving time interval is called a
temporal window. The number of discrete values saved and used in
the temporal window is dependent on the number of neurons in the
neural network design.
[0089] Without losing generality, ten discrete values are selected
for each of the five input variables. Then, the input signal
PT.sub.2(t) is digitized to a series of discrete signals
PT.sub.2(1), PT.sub.2(2), . . . PT.sub.2(10); F.sub.C(t) is
digitized to F.sub.C(1), F.sub.C(2), . . . F.sub.C(10), and so on. The
numerical value 10 selected here is an example to describe the
input signal processing mechanism 54. Other values can be used
without departing from the spirit or scope of the invention.
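The temporal window described above is a delay line: each new sample enters and the oldest one leaves. A minimal sketch, assuming zero initialization (the class name and initial values are illustrative):

```python
from collections import deque

class TemporalWindow:
    """Delay line (chain of z^-1 unit delays) holding the last `size`
    samples of one input signal, as in the input signal processing
    layer 54."""
    def __init__(self, size=10):
        self.buf = deque([0.0] * size, maxlen=size)

    def push(self, sample):
        self.buf.append(sample)   # the oldest sample falls off the window
        return list(self.buf)

w = TemporalWindow(size=10)
for i in range(1, 13):
    samples = w.push(float(i))
# After 12 pushes of samples 1..12, the window holds samples 3..12.
```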
[0090] At the ANN input layer 56, all the discrete signals enter
the neural network in parallel. They go through a normalization
function N(.), individually. Then, a set of normalized input
signals X.sub.1 to X.sub.N is generated as follows:
X.sub.1=N(PT.sub.2(1)), X.sub.2=N(PT.sub.2(2)), . . . X.sub.10=N(PT.sub.2(10)),
X.sub.11=N(F.sub.C(1)), X.sub.12=N(F.sub.C(2)), . . . X.sub.20=N(F.sub.C(10)),
. . .
X.sub.N-9=N(F.sub.P(1)), X.sub.N-8=N(F.sub.P(2)), . . . X.sub.N=N(F.sub.P(10)), (7)
where N=50 in this case example.
[0091] These delayed signals X.sub.i=1, 2, . . . N, are then
conveyed to the hidden layer 58 through the neural network
connections. This is equivalent to adding a feedback structure to
the neural network. Accordingly, the regular static multilayer
neural network becomes a dynamic neural network.
[0092] Then, each input signal is conveyed separately to each of
the neurons in the hidden layer via a path weighted by an
individual weighting factor W.sub.ij, where i=1, 2, . . . N, and
j=1, 2, . . . N. The inputs to each of the neurons in the hidden
layer are summed by a set of adders to produce signal P.sub.j. This
signal P.sub.j is filtered by a set of activation functions f(.) to
produce Q.sub.j, where j=1, 2, . . . N, which denotes the jth
neuron in the hidden layer.
[0093] A piecewise continuous linear function f(x) mapping real
numbers to [0,1] is used as the activation function in the neural
network as defined by
f(x)=0, if x<-b/a, (8a)
f(x)=a*x+b, if -b/a.ltoreq.x.ltoreq.b/a, (8b)
f(x)=1, if x>b/a, (8c)
where a is an arbitrary constant and b=1/2.
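A minimal sketch of the activation function of Equations (8a)-(8c), with a and b as named in the text (a an arbitrary positive constant, b=1/2):

```python
def activation(x, a=1.0, b=0.5):
    """Piecewise continuous linear activation f(x) of Equations
    (8a)-(8c), mapping real numbers into [0, 1]."""
    if x < -b / a:
        return 0.0            # Eq (8a)
    if x > b / a:
        return 1.0            # Eq (8c)
    return a * x + b          # Eq (8b)
```

With b=1/2 the function is continuous at the break points: f(-b/a)=0 and f(b/a)=1.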
[0094] Each output signal from the hidden layer is conveyed to the
single neuron in the output layer 60 via a path weighted by an
individual weighting factor h.sub.j, where j=1, 2, . . . N. These
signals are summed in a set of adders to produce signal Z(.), and
then filtered by activation function f(.) to produce the output
O(.) of the neural network with a range of 0 to 1.
A de-normalization function defined by
D(x)=100x, (9)
maps the O(.) signal back into the real space to produce the output
H.sub.D(t).
[0095] An algorithm governing the input-output of the neural
network dynamic model consists of the following difference
equations:
P.sub.j(n)=.SIGMA..sub.i=1.sup.N w.sub.ij(n)X.sub.i(n), (10)
Q.sub.j(n)=f(P.sub.j(n)), (11)
O(n)=f(.SIGMA..sub.j=1.sup.N h.sub.j(n)Q.sub.j(n))=a*.SIGMA..sub.j=1.sup.N h.sub.j(n)Q.sub.j(n)+b, (12)
when the variable of function f(.) is in the range specified in
Equation (8b), and O(n) is bounded by the limits specified in
Equations (8a) and (8c).
[0096] The dynamic model output becomes
H.sub.D(t)=D(O(t))=100*[a*.SIGMA..sub.j=1.sup.N h.sub.j(n)Q.sub.j(n)+b], (13)
where n denotes the nth iteration; O(t) is the continuous function
of O(n); and D(.) is the de-normalization function.
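Equations (10)-(13) amount to the following forward pass, sketched here with NumPy. The matrix convention W[j, i]=w.sub.ij and the clip-based form of f(.) (equivalent to Equation (8) for a>0) are implementation choices:

```python
import numpy as np

def f_act(x, a=1.0, b=0.5):
    """Equation (8) as a vectorized clip: 0 below x=-b/a, 1 above x=b/a."""
    return np.clip(a * np.asarray(x) + b, 0.0, 1.0)

def forward(X, W, h, a=1.0, b=0.5):
    """Forward pass of Equations (10)-(13).  X: N normalized inputs,
    W[j, i] = w_ij: hidden-layer weights, h: output-layer weights."""
    P = W @ X                      # Eq (10): P_j = sum_i w_ij * X_i
    Q = f_act(P, a, b)             # Eq (11): hidden-layer outputs
    O = f_act(h @ Q, a, b)         # Eq (12): single output neuron, in [0, 1]
    return 100.0 * O               # Eq (13): H_D(t) = D(O) = 100 * O
```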
[0097] The minimization of objective function E.sub.M(t) is
performed by adjusting the weighting factors in artificial neural
networks. An online learning algorithm is developed to continuously
update the values of the weighting factors of the neural network as
follows:
.DELTA.w.sub.ij(n)=a.sup.2.eta.e.sub.M(n)X.sub.i(n)h.sub.j(n),
(14)
.DELTA.h.sub.j(n)=a.eta.e.sub.M(n)Q.sub.j(n). (15)
where .eta.>0 is the learning rate, e.sub.M(n) is the discrete
signal of model error e.sub.M(t), a is a constant in Eqn (8), and
X.sub.i(n) is the ith input signal.
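The learning rules of Equations (14) and (15) can be written as one vectorized update step; the learning-rate value eta=0.05 is illustrative:

```python
import numpy as np

def update_weights(W, h, X, Q, e_m, a=1.0, eta=0.05):
    """One online learning step per Equations (14)-(15):
       delta w_ij(n) = a^2 * eta * e_M(n) * X_i(n) * h_j(n)
       delta h_j(n)  = a   * eta * e_M(n) * Q_j(n)
    Both deltas use the pre-update h_j values."""
    W_new = W + a**2 * eta * e_m * np.outer(h, X)  # W_new[j, i] gets X_i * h_j
    h_new = h + a * eta * e_m * Q
    return W_new, h_new
```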
[0098] The dynamic model algorithm can be implemented in computer
software to perform real-time computation for real
applications.
D. Self-Organizing Sensor (SOS) for a Process Variable
[0099] FIG. 9 is a block diagram illustrating a Self-Organizing
Sensor for a process variable with one or multiple input variables
according to an embodiment of this invention. The Self-Organizing
Sensor is shown comprising a Steady-State Model 62, a Dynamic Model
64, a Converter 66, an Adder 68, and a Combiner and Switch 70.
[0100] The Steady-State Model 62 has inputs, S.sub.1(t),
S.sub.2(t), . . . , S.sub.L(t), where L is the number of input
variables or parameters. The inputs for the Steady-State Model can
be in any one or more of the following forms:
[0101] Process variables measured online from the process
directly,
[0102] Process variables or other information from the sensor
networks,
[0103] Internal variables or parameters of the SOS, and
[0104] Parameters set from a computer or device.
The Steady-State Model 62 produces an output Y.sub.S(t), which is a
function of the inputs S.sub.1(t), S.sub.2(t), . . . , S.sub.L(t)
as follows:
Y.sub.S(t)=F.sub.S[S.sub.1(t),S.sub.2(t), . . . ,S.sub.L(t)],
(16)
where L is the number of input variables or parameters, and
F.sub.s[.] is a function that describes the input and output
relationship of the Steady-State Model. This function can be as
simple as a linear function or a complicated multi-variable
dynamical equation. In the CFB Boiler Bed Height example, the
Steady-State model output can be calculated based on the following
formula:
H.sub.S(t)=a.sub.1*PT.sub.B+H.sub.0. (17)
Therefore, the function F.sub.S[.] in this case is a linear
function.
[0105] The Dynamic Model 64 can be implemented using a mechanism
such as an artificial neural network (ANN) that has dynamic
modeling capabilities. It has inputs, x.sub.1(t), x.sub.2(t), . . .
, x.sub.M(t), where M is the number of input variables. The inputs
for the Dynamic Model are process variables measured online from
the process directly or from the sensor networks.
[0106] The objective for the Dynamic Model is to produce an output
Y.sub.D(t) that can track the given trajectory of its target
variable Y.sub.T(t) under process dynamic and operating condition
changes. In other words, the dynamic modeling objective is to
minimize the model error e.sub.M(t) between the dynamic model
output Y.sub.D(t) and its target variable Y.sub.T(t) in an online
fashion.
e.sub.M(t)=Y.sub.T(t)-Y.sub.D(t) (18)
An objective function for the Dynamic Model can be selected as
follows:
E.sub.M(t)=1/2*e.sub.M(t).sup.2=1/2*[Y.sub.T(t)-Y.sub.D(t)].sup.2. (19)
[0107] FIG. 10 is a block diagram illustrating an artificial neural
network (ANN) based dynamic modeling mechanism as part of the
Self-Organizing Soft-Sensor for a process variable according to an
embodiment of this invention.
[0108] As shown in FIG. 10, the Dynamic Model comprises an input
signal processing layer 72 and a linear Perceptron multi-layer
artificial neural network that has one input layer 74, one hidden
layer with N neurons 76, and one output layer with one neuron
78.
[0109] There are M input signals, x.sub.1(t), x.sub.2(t), . . . ,
x.sub.M(t), where M is a positive integer. In the input
signal processing layer 72, each of the input signals moves
iteratively through a series of delay units, where z.sup.-1 denotes
the unit delay operator. In computer real-time programming, this
can correspond to digitizing of an analog signal at a
pre-determined Sample Interval. As an example, the analog signal
x.sub.1(t) becomes a series of discrete signals x.sub.1(1),
x.sub.1(2), . . . x.sub.1(i), where i denotes the ith sample. The
digitization is based on a moving time interval and is carried out
continuously. The moving time interval is called a temporal window.
The number of discrete values saved and used in the temporal window
is dependent on the number of neurons in the neural network
design.
[0110] If the number of discrete values is represented by I for
each of the M input variables, the input signal x.sub.1(t) is
digitized to a series of discrete signals x.sub.1(1), x.sub.1(2), .
. . x.sub.1(I); x.sub.2(t) is digitized to x.sub.2(1), x.sub.2(2),
. . . x.sub.2(I); . . . , and x.sub.M(t) is digitized to
x.sub.M(1), x.sub.M(2), . . . x.sub.M(I). Without losing
generality, I=10 can be selected to describe the input signal
processing mechanism 72. Other values can be used without departing
from the spirit or scope of the invention.
[0111] At the ANN input layer 74, all the discrete signals enter
the neural network in parallel. They go through a normalization
function N(.), individually. Then, a set of normalized input
signals X.sub.1 to X.sub.N is generated.
X.sub.1=N(x.sub.1(1)), X.sub.2=N(x.sub.1(2)), . . . X.sub.10=N(x.sub.1(10)),
X.sub.11=N(x.sub.2(1)), X.sub.12=N(x.sub.2(2)), . . . X.sub.20=N(x.sub.2(10)),
. . .
X.sub.N-9=N(x.sub.M(1)), X.sub.N-8=N(x.sub.M(2)), . . . X.sub.N=N(x.sub.M(10)), (20)
where N is a positive integer. If 10 discrete values are selected
for digitizing each continuous variable, N=10*M, where M is the
number of input variables and N is the number of neurons in the ANN
input layer.
[0112] These delayed signals X.sub.i, i=1, . . . N, are then
conveyed to the hidden layer 76 through the neural network
connections. This is equivalent to adding a feedback structure to
the neural network, whereby the regular static multilayer neural
network becomes a dynamic neural network.
[0113] Next, each input signal is conveyed separately to each of
the neurons in the hidden layer via a path weighted by an
individual weighting factor W.sub.ij, where i=1, 2, . . . N, and
j=1, 2, . . . N. The inputs to each of the neurons in the hidden
layer are summed by a set of adders to produce signal P.sub.j. This
signal P.sub.j is filtered by a set of activation functions f(.) to
produce Q.sub.j, where j=1, 2, . . . N, which denotes the jth
neuron in the hidden layer.
[0114] A piecewise continuous linear function f(x) mapping real
numbers to [0,1] is used as the activation function in the neural
network as defined by
f(x)=0, if x<-b/a, (21a)
f(x)=a*x+b, if -b/a.ltoreq.x.ltoreq.b/a, (21b)
f(x)=1, if x>b/a, (21c)
where a is an arbitrary constant and b=1/2.
[0115] Each output signal from the hidden layer is conveyed to the
single neuron in the output layer 78 via a path weighted by an
individual weighting factor h.sub.j, where j=1, 2, . . . N. These
signals are summed in a set of adders to produce signal Z(.), and
then filtered by activation function f(.) to produce the output
O(.) of the neural network with a range of 0 to 1.
A de-normalization function defined by
D(x)=100x, (22)
maps the O(.) signal back into the real space to produce the output
Y.sub.D(t).
[0116] An algorithm governing the input-output of the neural
network dynamic model can consist of the following difference
equations:
P.sub.j(n)=.SIGMA..sub.i=1.sup.N w.sub.ij(n)X.sub.i(n), (23)
Q.sub.j(n)=f(P.sub.j(n)), (24)
O(n)=f(.SIGMA..sub.j=1.sup.N h.sub.j(n)Q.sub.j(n))=a*.SIGMA..sub.j=1.sup.N h.sub.j(n)Q.sub.j(n)+b, (25)
when the variable of function f(.) is in the range specified in
Equation (21b), and O(n) is bounded by the limits specified in
Equations (21a) and (21c).
[0117] The dynamic model output becomes
Y.sub.D(t)=D(O(t))=100*[a*.SIGMA..sub.j=1.sup.N h.sub.j(n)Q.sub.j(n)+b], (26)
where n denotes the nth iteration; O(t) is the continuous function
of O(n); and D(.) is the de-normalization function.
[0118] The minimization of objective function E.sub.M(t) can be
accomplished by adjusting the weighting factors in artificial
neural networks. An online learning algorithm is developed to
continuously update the values of the weighting factors of the
neural network as follows:
.DELTA.w.sub.ij(n)=a.sup.2.eta.e.sub.M(n)X.sub.i(n)h.sub.j(n),
(27)
.DELTA.h.sub.j(n)=a.eta.e.sub.M(n)Q.sub.j(n). (28)
where .eta.>0 is the learning rate, e.sub.M(n) is the discrete
signal of model error e.sub.M(t), a is a constant in Eqn (21), and
X.sub.i(n) is the ith input signal.
[0119] The dynamic model algorithm can be implemented in computer
software to perform real-time computation for real-world
applications.
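The pieces above (temporal windows, forward pass, and online learning) can be tied together in one generic dynamic-model sketch for M inputs. The class and method names, the zero weight initialization, and the assumption that signals are pre-scaled to [0, 100] (so that N(.) divides by 100 and D(x)=100x) are illustrative choices, not specified by the text:

```python
import numpy as np
from collections import deque

class SOSDynamicModel:
    """Sketch of the ANN dynamic model of FIGS. 9-10 for M inputs,
    each with a temporal window of I samples (N = I * M neurons)."""
    def __init__(self, m_inputs, window=10, a=1.0, b=0.5, eta=0.05):
        self.windows = [deque([0.0] * window, maxlen=window)
                        for _ in range(m_inputs)]
        n = window * m_inputs
        self.W = np.zeros((n, n))     # hidden-layer weights w_ij
        self.h = np.zeros(n)          # output-layer weights h_j
        self.a, self.b, self.eta = a, b, eta

    def _f(self, x):
        # Piecewise linear activation of Eq (21), via clipping.
        return np.clip(self.a * np.asarray(x) + self.b, 0.0, 1.0)

    def step(self, samples, y_t):
        """Push one sample per input, produce Y_D(t), and learn toward
        the target Y_T(t) per Equations (23)-(28)."""
        for w, s in zip(self.windows, samples):
            w.append(s / 100.0)                # N(.): scale into [0, 1]
        X = np.concatenate([list(w) for w in self.windows])
        Q = self._f(self.W @ X)                # Eqs (23)-(24)
        y_d = 100.0 * self._f(self.h @ Q)      # Eqs (25)-(26)
        e_m = (y_t - y_d) / 100.0              # normalized model error
        # Eq (27) uses the pre-update h, so update W first.
        self.W += self.a**2 * self.eta * e_m * np.outer(self.h, X)
        self.h += self.a * self.eta * e_m * Q  # Eq (28)
        return y_d
```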
E. Self-Organizing Sensor Development Method
[0120] In accordance with an aspect of the invention, a method of
developing a Self-Organizing Sensor can comprise one or more of
determining a relationship between (i) a variable of interest in a
system at steady-state and (ii) one or more variables of the system
that can be predetermined or pre-measured in the steady-state,
converting the variable of interest to a target variable, producing
a dynamic model output based on the target variable, training the
dynamic model, and combining or selecting the dynamic model output
and the target variable to produce a final Self-Organizing Sensor
output. The training can be designed to minimize a difference
between the dynamic model output and the target variable. Also, or
alternatively, the training can be followed by determining (e.g.,
based upon a model error and/or the convergence of weighting
factors of the dynamic model) whether the dynamic model is still in
its learning phase. Various permutations of the above, as well as
modifications and/or combinations with any of the below features,
as would be evident or suggested to one skilled in the art in view
of this disclosure, are intended to be encompassed by the
method.
[0121] One implementation of the method is provided below with the
understanding that such can be modified, rearranged, or tailored to
encompass versions of a plurality, or preferably all, of the
following:
1. Select or determine a variable Y.sub.S(t) that is the Process
Variable of Interest or related to the Process Variable of Interest
at Steady-State Condition. In the CFB boiler case example, the
Variable of Interest is the CFB Bed Thickness or the CFB Bed
Height, and Y.sub.S(t) is the CFB Bed Height at Steady-State
Condition.
2. Derive a formula to calculate Y.sub.S(t) based on one or
multiple variables or parameters that can be pre-measured in the
steady-state of the process, determined through experimentation, or
entered manually.
3. Convert the variable Y.sub.S(t) to a Target Variable Y.sub.T(t)
of the Process Variable of Interest. The Converter can be designed
based on energy and material balance calculations of the process or
simply a ratio of the input and output signals of the Converter.
4. Use a dynamic modeling mechanism, such as but not limited to an
artificial neural network (ANN) based dynamic modeling mechanism,
to produce a dynamic model output Y.sub.D(t) based on the Target
Variable Y.sub.T(t).
5. Train the dynamic model by minimizing the model error
e.sub.M(t), which is the difference between the dynamic model
output Y.sub.D(t) and the Target Variable Y.sub.T(t).
6. Judge the amplitude of the model error e.sub.M(t) and the
convergence of the weighting factors of the dynamic model to
determine whether the dynamic model is still in its learning
phase.
7. Implement a Combiner and Switch mechanism to either combine or
select Y.sub.D(t) and Y.sub.T(t) to produce the final
Self-Organizing Sensor output Y(t). If the dynamic model is still
in its learning phase where Y.sub.D(t) cannot be used, use the
estimated Target Variable Y.sub.T(t) as the Self-Organizing Sensor
output.
8. If the dynamic model has finished its learning phase, such that
the dynamic model output Y.sub.D(t) can represent the dynamics of
the Process Variable of Interest, use Y.sub.D(t) as the
Self-Organizing Sensor output.
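The decision logic of steps 3 and 6-8 might be sketched as follows; the ratio form of the Converter (b.sub.1) and the error tolerance are illustrative stand-ins for the plant-specific Converter design and learning-phase judgment:

```python
def sos_output(y_s, y_d, e_m, weights_converged, b1=1.0, tol=0.1):
    """One decision cycle of the method.  b1 and tol are illustrative
    tuning values; real settings come from balance calculations and
    commissioning experiments."""
    y_t = b1 * y_s                                          # step 3: Converter
    learning = (abs(e_m) > tol) or (not weights_converged)  # step 6: judge phase
    return y_t if learning else y_d                         # steps 7-8: switch
```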
* * * * *