Evolving Intelligent Systems

Selected Topics of Evolving Intelligent Systems

José de Jesús Rubio, Manuel Jimenez, Humberto Perez, Maricela Figueroa


In this paper, some interesting characteristics of evolving intelligent systems are described: the definition of evolving intelligent systems, their applications, evolving pattern classification, evolving control, evolving identification, evolving prediction, and evolving optimization. For the details of the described methods, please see the references.
Keywords: evolving systems, identification, pattern classification, control.

1. Introduction

There is some interesting research about evolving systems, such as [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], and [12].

The paper [2] introduces a hybrid evolving architecture for incremental learning, consisting of two sequential and incremental learning modules: a growing Gaussian mixture model (GGMM) and a resource allocating neural network (RAN). The rationale of the architecture rests on two issues: incrementality and the possibility of processing partially labeled data in the context of classification. The paper [3] presents an online self-evolving fuzzy controller with global learning capabilities; starting from very simple, even empty configurations, the controller learns from its own actions while controlling the plant. The paper [4] proposes the application of evolving fuzzy modeling to fault-tolerant control in two steps: fault detection and fault accommodation; fault accommodation uses evolving Takagi-Sugeno fuzzy models, and fault detection uses a model-based approach also based on fuzzy models. In [5], an online Takagi-Sugeno fuzzy model is presented; the method combines the recursive Gustafson-Kessel clustering algorithm and the fuzzy recursive least squares method. The article [6] proposes knowledge-based short-time prediction methods for multivariate streaming time series that rely on the early recognition of local patterns; a parametric fuzzy model for such patterns is presented, along with an online, classification-based recognition procedure, which introduces the notion of evolving classification results. In [7], a new incremental linear discriminant analysis (ILDA) is proposed for multitask pattern recognition (MTPR) problems, in which chunks of training data for a particular task are given sequentially and the task is switched to another related task one after another. In [8], a general approach to the classification of streaming data representing a specific agent behavior is presented, based on evolving systems; the approach can efficiently model and recognize different behaviors.
In [9], a new online predictor model for complex nonlinear processes is proposed; while the developed model can be as complex as a TS fuzzy model with flexible antecedents, it habitually tends to shrink to an adaptive linear model. The paper [10] introduces a new approach for evolving fuzzy modeling using tree structures; the model is a fuzzy linear regression tree whose topology can be continuously updated through a statistical model selection test. In [11], the underlying concepts are revisited and the essential optimization problems arising therein are identified. In [12], a stable backpropagation algorithm is used to train an evolving radial basis function neural network; it generates groups with an online clustering, and the centers are updated at each iteration so that they remain near the incoming data, so the neurons do not need to be pruned.

From the aforementioned papers, [2], [6], [7], and [8] work with evolving pattern classification; [3] and [4] work with evolving control; [5], [10], and [12] work with evolving identification; [9] works with evolving prediction; and [11] works with evolving optimization.

The rest of this paper describes these characteristics of evolving intelligent systems: the definition of evolving intelligent systems, their applications, evolving pattern classification, evolving control, evolving identification, evolving prediction, and evolving optimization. For the details of the described methods, please see the references.


Evolving intelligent systems are characterized by the ability to adjust their structure, as well as their parameters, to the varying characteristics of the environment (with the term environment embracing the processes/phenomena with which the system has to interact and/or the users using the system) [11].

Neural networks were born in the eighties; their philosophy is to simulate the behavior of the brain. Evolving intelligent systems now simulate the brain better than neural networks because neural networks only train their parameters, while evolving intelligent systems train both their structure and their parameters. Please see Figure 1.

Fig. 1. Evolving Intelligent Systems vs Neural Networks

The emerging area of evolving intelligent systems was conceived around the areas of neural networks, fuzzy rule-based systems, and neuro-fuzzy hybrids [1].

It is currently being expanded to the areas of general systems, control, hardware implementations, etc. Numerous interesting applications of such systems to robotics, autonomous unmanned systems, vehicle systems, process monitoring and control, bio-medical data processing, etc., have been reported [1]. Please see Figure 2.

Fig. 2. Applications of the evolving intelligent systems


In [2], the growing Gaussian mixture model (GGMM) is presented. The first module of the proposed architecture is the growing mixture model, used to self-learn the labels of the unlabeled data. The GGMM can be seen as a model-oriented clustering method. In essence, the model-based clustering method perceives the data as a population with K different components, where each component is generated by an underlying probability distribution. The density of an m-dimensional data point xi from the jth component is fj(xi; θj), where θj represents a vector of some unknown parameters associated with component j. The associated density function is expressed as:

f(xi) = Σj τj fj(xi; θj)

where τj is the weight of the jth component such that:

τ1 + τ2 + ... + τK = 1, τj ≥ 0

A typical case is when fj(xi; θj) is a multivariate normal (Gaussian) density φj, with θj representing the characteristics of component j, namely the mean μj and the covariance matrix Σj. The density φj is described as follows:

φj(xi; μj, Σj) = (2π)^(-m/2) |Σj|^(-1/2) exp(-(1/2)(xi - μj)^T Σj^(-1) (xi - μj))

Gaussian mixtures are usually trained using the iterative Expectation-Maximization (EM) algorithm. The existing approaches can be split into two categories: refinement-based methods and learning methods.
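As an illustration of how such a mixture is evaluated and trained, the following is a minimal sketch of the mixture density f(xi) = Σj τj fj(xi; θj) and one EM iteration for a one-dimensional Gaussian mixture; the function names and the plain-Python implementation are illustrative, not the GGMM algorithm of [2]:

```python
import math

def gaussian_pdf(x, mu, var):
    # Univariate normal density with mean mu and variance var.
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_density(x, weights, means, variances):
    # f(x) = sum_j tau_j * phi_j(x), the mixture density from the text.
    return sum(t * gaussian_pdf(x, m, v)
               for t, m, v in zip(weights, means, variances))

def em_step(data, weights, means, variances):
    # One Expectation-Maximization iteration for a 1-D Gaussian mixture.
    K = len(weights)
    # E-step: responsibilities r[i][j] = tau_j * phi_j(x_i) / f(x_i).
    resp = []
    for x in data:
        f = mixture_density(x, weights, means, variances)
        resp.append([weights[j] * gaussian_pdf(x, means[j], variances[j]) / f
                     for j in range(K)])
    # M-step: re-estimate tau_j, mu_j, and sigma_j^2 from the responsibilities.
    n = [sum(r[j] for r in resp) for j in range(K)]
    new_weights = [n[j] / len(data) for j in range(K)]
    new_means = [sum(r[j] * x for r, x in zip(resp, data)) / n[j]
                 for j in range(K)]
    new_vars = [sum(r[j] * (x - new_means[j]) ** 2
                    for r, x in zip(resp, data)) / n[j] for j in range(K)]
    return new_weights, new_means, new_vars
```

The refinement-based and learning methods mentioned above differ in how such steps are applied to streaming data; the sketch only shows the classical batch iteration.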

In [2], the minimal resource allocating network (MRAN) is also presented. The second module of the proposed architecture consists of a RAN, which is mainly responsible for the classification. Originally proposed to approximate functions, it is able, according to several studies, to learn non-stationary data. The RAN consists of three layers: an input layer, a hidden layer, and an output layer. Each output node is computed as follows:

y(x) = w0 + Σj wj φj(x)

where w0 is a bias and φj(x) is a Gaussian function associated with the jth hidden unit, defined by the mean μj and the standard deviation σj. φj(x) is defined as:

φj(x) = exp(-||x - μj||² / σj²)

where || · || denotes the Euclidean norm.
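The RAN output described above can be sketched as follows, assuming Gaussian hidden units of the stated form; the function name and the list-based representation are illustrative:

```python
import math

def rbf_output(x, w0, weights, centers, sigmas):
    # Output of a RAN/RBF network: y = w0 + sum_j w_j * phi_j(x),
    # with phi_j(x) = exp(-||x - mu_j||^2 / sigma_j^2).
    y = w0
    for w, mu, sigma in zip(weights, centers, sigmas):
        dist_sq = sum((xi - mi) ** 2 for xi, mi in zip(x, mu))
        y += w * math.exp(-dist_sq / sigma ** 2)
    return y
```

At a hidden-unit center the Gaussian contributes its full weight; far from all centers the output decays to the bias w0.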

In [6], a multivariate extension to a fuzzy set μ defined in an N-dimensional space of features xi, i = 1, ..., N, is formulated; an intersection of N normalized fuzzy sets of this type is performed and assigned one joint maximum truth value a, which will again occur at the modal point x = r of this set:

For high-dimensional fuzzy sets, i.e., a larger number N, μ(x) would decrease to zero too quickly when moving away from the modal point if T-norm intersection operators were used. The N-fold intersection of the N truth values μi, i = 1, ..., N, is defined as:

Applying this Hamacher intersection operator (6) to (5) yields:

which, if we insert the normalized univariate fuzzy sets for each μi(xi) in the rewritten form given above, becomes:
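Assuming the intersection operator is the Hamacher product T(a, b) = ab/(a + b − ab) (the exact operator used in [6] may differ), the N-fold intersection can be sketched as follows; note the closed form 1/T = Σi 1/μi − (N − 1), which decays much more slowly with N than an ordinary product:

```python
def hamacher_and(a, b):
    # Hamacher product T-norm: T(a, b) = ab / (a + b - ab), with T(0, 0) = 0.
    if a == 0.0 and b == 0.0:
        return 0.0
    return a * b / (a + b - a * b)

def n_fold_intersection(truth_values):
    # N-fold intersection of truth values mu_1, ..., mu_N,
    # applied pairwise from left to right.
    result = truth_values[0]
    for t in truth_values[1:]:
        result = hamacher_and(result, t)
    return result

def n_fold_closed_form(truth_values):
    # Equivalent closed form (for strictly positive truth values):
    # 1/T = sum_i 1/mu_i - (N - 1).
    n = len(truth_values)
    return 1.0 / (sum(1.0 / t for t in truth_values) - (n - 1))
```

For example, three truth values of 0.5 intersect to 0.25 under the Hamacher product, versus 0.125 under the ordinary product.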

In [7], the incremental linear discriminant analysis (ILDA) is used. It is assumed that there are N training data which belong to one of C classes; the set of class-c data is denoted as:

Xc = {xc1, ..., xcηc}

where xcj is the jth data point of class c and ηc is the number of class-c data.

The whole set of training data is denoted as:

X = {X1, ..., XC}

For X, the between-class scatter matrix SB and the within-class scatter matrix SW can be defined:

SB = Σc ηc (μc - μ)(μc - μ)^T,   SW = Σc Σj (xcj - μc)(xcj - μc)^T

where μc is the mean of the class-c data and μ is the mean of all the training data.

In LDA, a discriminant vector w is obtained by maximizing the following class separability:

J(w) = (w^T SB w) / (w^T SW w)

For a set of discriminant vectors W, the following objective function has to be maximized:

J(W) = |W^T SB W| / |W^T SW W|

It is well known that W is computed by solving the following generalized eigenvalue problem:

SB W = SW W Λ

where Λ is a diagonal matrix whose diagonal element λi is the eigenvalue associated with wi. In their paper, a discriminant space model is represented by the sextuplet:
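A minimal batch-LDA sketch of the scatter matrices and the generalized eigenvalue problem (not the incremental ILDA of [7], which updates these quantities chunk by chunk) could look like:

```python
import numpy as np

def lda_discriminants(class_data):
    # class_data: list of (n_c, m) arrays, one per class.
    # Builds the between-class scatter S_B and the within-class scatter S_W,
    # then solves S_B w = lambda S_W w via eig(inv(S_W) @ S_B).
    mu = np.mean(np.vstack(class_data), axis=0)      # global mean
    m = mu.shape[0]
    SB = np.zeros((m, m))
    SW = np.zeros((m, m))
    for Xc in class_data:
        mu_c = Xc.mean(axis=0)
        d = (mu_c - mu).reshape(-1, 1)
        SB += len(Xc) * (d @ d.T)                    # between-class scatter
        centered = Xc - mu_c
        SW += centered.T @ centered                  # within-class scatter
    SW += 1e-8 * np.eye(m)                           # regularize for invertibility
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(SW) @ SB)
    order = np.argsort(eigvals.real)[::-1]           # sort by decreasing eigenvalue
    return eigvals.real[order], eigvecs.real[:, order]
```

For two classes separated along one axis, the leading eigenvector aligns with the separating direction.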

In [8], an evolving classifier of behavior models module (EvCBM) is presented. The procedure of this classifier includes the following steps:
1.- Classify a new sample (agent behavior) into a group represented by a prototype.
2.- Calculate the potential of the new data sample to be a prototype.
3.- Update all the prototypes considering the new data sample.
4.- Insert the new data sample as a new prototype if needed.
5.- Remove existing prototypes if needed.
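A greatly simplified, hypothetical sketch of a potential-based prototype classifier in the spirit of these steps might look as follows; the potential definition and the insertion rule here are illustrative, while [8] uses recursive potential updates:

```python
def potential(x, samples):
    # Potential of x: inversely proportional to the mean squared
    # distance from x to all samples seen so far.
    if not samples:
        return 1.0
    mean_sq = sum(sum((a - b) ** 2 for a, b in zip(x, s))
                  for s in samples) / len(samples)
    return 1.0 / (1.0 + mean_sq)

def classify(x, prototypes):
    # Step 1: assign x to the group of the nearest prototype.
    return min(prototypes, key=lambda p: sum((a - b) ** 2
                                             for a, b in zip(x, p["center"])))

def evolve(x, label, prototypes, samples):
    # Steps 2-4: compute the potential of x, recompute the potentials of
    # the existing prototypes, and insert x as a new prototype if its
    # potential exceeds all of theirs.
    samples.append(x)
    p_new = potential(x, samples[:-1])
    for p in prototypes:
        p["potential"] = potential(p["center"], samples)
    if not prototypes or p_new > max(p["potential"] for p in prototypes):
        prototypes.append({"center": x, "label": label, "potential": p_new})
```

Step 5 (removing weak prototypes) is omitted for brevity; a real implementation would also avoid storing all past samples by updating the potentials recursively.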


In [3], an online self-evolving fuzzy controller (OSEFC) is presented. For instance, consider a single-input single-output plant whose dynamics are given by:

where xk = (yk, yk-1, ..., yk-p, uk-1, uk-2, ..., uk-q) is the state of the plant, uk is the control signal exerted by the controller, f is an unknown, continuous, and differentiable function, and p and q are constants that determine the plant order.

The controller can be expressed as a function G such that:

The selected fuzzy system uses the product as the T-norm for the conjunction and the weighted average as the defuzzification strategy. Thus, the output of the fuzzy system at instant k is given by:

where Φ is the set of parameters defining the fuzzy controller at time k, Qi is a scalar value representing the rule consequent, and the firing strength of the ith rule is calculated by:
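A minimal sketch of this inference scheme, assuming Gaussian membership functions and the product T-norm with weighted-average defuzzification, might be:

```python
import math

def firing_strength(x, rule_centers, rule_sigmas):
    # Product T-norm over the membership degrees of each input component.
    w = 1.0
    for xi, c, s in zip(x, rule_centers, rule_sigmas):
        w *= math.exp(-((xi - c) / s) ** 2)   # Gaussian membership (assumed shape)
    return w

def fuzzy_output(x, rules):
    # Weighted-average defuzzification: u = sum_i w_i Q_i / sum_i w_i,
    # where Q_i is the scalar consequent of the ith rule.
    ws = [firing_strength(x, r["centers"], r["sigmas"]) for r in rules]
    total = sum(ws)
    return sum(w * r["Q"] for w, r in zip(ws, rules)) / total
```

At a rule center the output approaches that rule's consequent; between two symmetric rules it interpolates smoothly.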

In [4], a control scheme which uses an evolving fuzzy model is proposed. The proposed fuzzy modeling uses the scatter approach and the potential approach.

Online learning of the ETS fuzzy models using the scatter approach entails the following stages:
a) initialization of the rule-base structure (antecedent part of the rules);
b) reading the next data sample at the next time step;
c) recursive calculation of the scatter of the new data;
d) recursive update of the scatter at the focal points (rule centers) of the existing rules;
e) possible modification or upgrade of the rule-base, based on the scatter of the new data in comparison to the scatter of the existing rules;
f) recursive calculation of the consequent parameters;
g) prediction of the model output for the next time step.
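Stage c) can be carried out without storing past samples; the following sketch computes the mean squared distance of a new sample to all previous ones from running sums only (an illustrative identity, not necessarily the exact scatter formula of [4]):

```python
class RecursiveScatter:
    # Recursively computes the mean squared distance ("scatter") of each
    # new sample to all previous samples using only running sums, so no
    # past data needs to be stored.
    def __init__(self, dim):
        self.k = 0
        self.s = [0.0] * dim          # running sum of samples
        self.q = 0.0                  # running sum of squared norms

    def scatter(self, x):
        # mean_i ||x - x_i||^2 = ||x||^2 - 2 x.s/k + q/k
        if self.k == 0:
            return 0.0
        xx = sum(v * v for v in x)
        xs = sum(v * w for v, w in zip(x, self.s))
        return xx - 2.0 * xs / self.k + self.q / self.k

    def add(self, x):
        self.k += 1
        self.s = [a + b for a, b in zip(self.s, x)]
        self.q += sum(v * v for v in x)
```

The same running sums serve stage d), since the scatter at a fixed focal point can be re-evaluated after every new sample.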

In the potential approach, the first data sample establishes the focal point of the first cluster, and the potential is later updated recursively.

For the model predictive control using evolving fuzzy models, the control signals change only inside the control horizon:

u(k + j) = u(k + Hc - 1) (20)

for j = Hc, ..., Hp - 1. The sequence of future control signals is obtained by optimizing a cost function which describes the control goal and is usually of the following form:

where the predicted error at step (k + i) is given by the difference between the reference r and the output of the system y:
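A typical quadratic cost of this kind, together with the control-horizon constraint of (20), can be sketched as follows (the exact weighting used in [4] may differ):

```python
def mpc_cost(refs, preds, du, lam):
    # Typical quadratic MPC cost: squared tracking errors over the
    # prediction horizon plus penalized control moves over the control
    # horizon, weighted by lam.
    tracking = sum((r - y) ** 2 for r, y in zip(refs, preds))
    effort = lam * sum(d ** 2 for d in du)
    return tracking + effort

def extend_controls(u_seq, Hp):
    # Beyond the control horizon, hold the last computed control:
    # u(k + j) = u(k + Hc - 1) for j = Hc, ..., Hp - 1  (Eq. 20).
    return u_seq + [u_seq[-1]] * (Hp - len(u_seq))
```

An optimizer would search over the Hc free control moves, evaluate the model over the Hp-step prediction horizon with the extended sequence, and pick the sequence minimizing the cost.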


In [5], the recursive Gustafson-Kessel clustering algorithm is presented; the detailed algorithm is given in that reference.

In [10], an algorithm to evolve fuzzy linear regression models is presented; the detailed algorithm is given in that reference.

In [12], an evolving radial basis function neural network algorithm is presented; the detailed algorithm is given in that reference.


In [9], an adaptive habitually linear and transiently nonlinear model (AHLTNM) is proposed. The algorithm is given as follows:

1.- Define the parameters of the first rule at t = t0: w1,t0 = z(n+1)×1, P1,t0 = λI(n+1)×(n+1), where z and I denote the zero vector and the unit matrix, respectively, and λ is a large positive constant which is used as the resetting factor.

2.- For the initial N0 = 20n intake data points, from t = 1 up to t = N0, perform the following tasks:

    2.1.- Estimate the output of the linear model and the absolute errors.

    2.2.- Update the parameters with the RLS technique.

    2.3.- Compute e0 as 1/20 of the mean value of the absolute errors.

3.- Define the centers and lengths of the first hyper-rectangle using the considered initial N0 data points; also initialize the adaptive threshold as Et = e0.

4.- Let t = t + 1 and compute the output of the model.

5.- Update the parameters of the rule (the lth one) with the highest validation function at xt.

6.- Update the adaptive error threshold Et.

7.- If |et| ≥ Et and the number of rules is smaller than Mmax, then run the split operation for the lth rule.

8.- If |et| < Et and there are at least two rules, run the merge operation for the lth rule, and also make an intra-merge for blocks of the new rule if possible.

9.- Go to step 4.
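Step 2.2 relies on the recursive least squares (RLS) technique; a standard RLS update (illustrative, with plain lists and a regressor that includes a bias term) is:

```python
def rls_update(w, P, x, y):
    # Recursive least squares update of the linear-model parameters w,
    # with covariance-like matrix P, for regressor x and target y:
    #   K = P x / (1 + x' P x),  w <- w + K (y - w' x),  P <- P - K (P x)'.
    n = len(w)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = 1.0 + sum(x[i] * Px[i] for i in range(n))
    gain = [Px[i] / denom for i in range(n)]
    err = y - sum(w[i] * x[i] for i in range(n))
    w = [w[i] + gain[i] * err for i in range(n)]
    P = [[P[i][j] - gain[i] * Px[j] for j in range(n)] for i in range(n)]
    return w, P, err
```

Initializing P = λI with large λ, as in step 1 of the algorithm, makes the early updates behave like an unregularized least squares fit.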


In [11], the dynamic formation of fuzzy clusters through splitting and merging mechanisms is presented. It is described that the objective function minimized by the FCM for the 1st data snapshot exploits the standard sum of distances:

Here Q[1] concerns the first data snapshot. Let us also note that the number of clusters, say c[1], may be selected on the basis of an assumed threshold level Vmax.

The formulation of the cluster splitting is given in the form:

where zi are the prototypes and F = [fik] denotes the partition matrix that satisfies constraint (25). The detailed calculations of the partition matrix F and the two prototypes z1 and z2 are carried out iteratively according to the two expressions:

For the cluster merging, the new prototype results from the minimization of the following performance index:
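For reference, one standard FCM iteration, which underlies both the splitting and the merging mechanisms, updates the partition matrix F = [fik] and the prototypes zi as follows (a generic sketch with fuzzifier m = 2, not the snapshot-specific scheme of [11]):

```python
def fcm_step(data, prototypes, m=2.0):
    # One fuzzy c-means iteration: update the partition matrix F = [f_ik]
    # (memberships summing to 1 over the clusters), then the prototypes z_i.
    c = len(prototypes)
    F = []
    for x in data:
        d = [sum((a - b) ** 2 for a, b in zip(x, z)) or 1e-12
             for z in prototypes]                       # squared distances
        row = [1.0 / sum((d[i] / d[j]) ** (1.0 / (m - 1.0)) for j in range(c))
               for i in range(c)]
        F.append(row)
    new_protos = []
    for i in range(c):
        wsum = sum(F[k][i] ** m for k in range(len(data)))
        new_protos.append(tuple(
            sum(F[k][i] ** m * data[k][dim] for k in range(len(data))) / wsum
            for dim in range(len(data[0]))))
    return F, new_protos
```

Splitting a cluster amounts to running such iterations with two prototypes restricted to the cluster's data; merging replaces two prototypes by one minimizing the corresponding performance index.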


In this paper, some interesting characteristics of evolving intelligent systems were described: the definition of evolving intelligent systems, their applications, evolving pattern classification, evolving control, evolving identification, evolving prediction, and evolving optimization. For the details of the described methods, please see the references.


The authors are grateful to Dr. Plamen Angelov for inviting them to write this paper. The authors thank the Secretaria de Investigacion y Posgrado del IPN, the Comision de Operacion y Fomento de Actividades Academicas del IPN, and the Consejo Nacional de Ciencia y Tecnologia for their help in this research.


[1]      P. Angelov, D. Filev, N. Kasabov, Editorial, Evolving Systems, vol. 1, no. 1, pp. 1-2, 2010.

[2]      A. Bouchachia, An evolving classification cascade with self-learning, Evolving Systems, vol. 1, no. 3, pp. 143-160, 2010.

[3]      A. B. Cara, H. Pomares, I. Rojas, Zs. Lendek, R. Babuska, Online self-evolving fuzzy controller with global learning capabilities, Evolving Systems, vol. 1, no. 4, pp. 225-240, 2010.

[4]      D. Chivala, L. F. Mendoza, J. M. C. Sousa, J. M. G. Sá da Costa, Application of evolving fuzzy modeling to fault tolerant control, Evolving Systems, vol. 1, no. 4, pp. 209-224, 2010.

[5]      D. Dovzan, I. Skrjanc, Recursive clustering based on a Gustafson-Kessel algorithm, Evolving Systems, vol. 2, no. 1, pp. 15-24, 2011.

[6]      G. Herbst, S. F. Bocklisch, Recognition of fuzzy time series patterns using evolving classification results, Evolving Systems, vol. 1, no. 2, pp. 97-110, 2010.

[7]      M. Hisada, S. Ozawa, K. Zhang, N. Kasabov, Incremental linear discriminant analysis for evolving feature spaces in multitask pattern recognition problems, Evolving Systems, vol. 1, no. 1, pp. 17-28, 2010.

[8]      J. A. Iglesias, P. Angelov, A. Ledezma, A. Sanchis, An evolving classification of agent's behaviors: a general approach, Evolving Systems, vol. 1, no. 3, pp. 161-172, 2010.

[9]      A. Kalhor, B. N. Araabi, C. Lucas, An online predictor model as adaptive habitually linear and transiently nonlinear model, Evolving Systems, vol. 1, no. 1, pp. 29-42, 2010.

[10]      A. Lemos, W. Caminhas, F. Gomide, Fuzzy evolving linear regression trees, Evolving Systems, vol. 2, no. 1, pp. 1-15, 2011.

[11]      W. Pedrycz, Evolvable fuzzy systems: some insights and challenges, Evolving Systems, vol. 1, no. 2, pp. 73-82, 2010.

[12]      J. J. Rubio, D. M. Vázquez, J. Pacheco, Backpropagation to train an evolving radial basis function neural network, Evolving Systems, vol. 1, no. 3, pp. 173-180, 2010.

About the Authors

José de Jesús Rubio (M'08) was born in México City in 1979. He received the B.S. degree from the Instituto Politécnico Nacional in México in 2001. He received the M.S. degree in automatic control from the CINVESTAV-IPN in México in 2004, and the Ph.D. degree in automatic control from the CINVESTAV-IPN in México in 2007. He was a full-time professor at the Autonomous Metropolitan University - Mexico City from 2006 to 2008. Since 2008, he has been a full-time professor at the Sección de Estudios de Posgrado e Investigación - Instituto Politécnico Nacional - ESIME Azcapotzalco. He has published 28 papers in international journals and 8 chapters in international books, and he has presented 27 papers at international conferences, with more than 100 citations. He is a member of the IEEE AFS Adaptive Fuzzy Systems. He is part of the editorial board of the journal Evolving Systems. He has been the tutor of 14 M.S. students and 10 B.S. students. His research interests are primarily focused on evolving intelligent systems, intelligent control, nonlinear control, adaptive control, sliding mode control, optimal control, neural-fuzzy systems, the Kalman filter, least squares, bounded ellipsoids, delayed systems, collision detectors, trajectory generators, pattern recognition, identification, prediction, image processing, robotics, mechatronics, medicine, automotive systems, alternative energy, signal processing, greenhouses, petroleum, incubators, warehouses, chemical reactors, and mixing.

Manuel Jimenez-Lizarraga received the B.S. degree in electrical engineering from the Instituto Tecnologico de Culiacan, Mexico, and the M.S. and Ph.D. degrees in automatic control from CINVESTAV-IPN, Mexico, in 1996, 2000, and 2006, respectively. He was a postdoctoral fellow at the ECE Department of the Ohio State University, USA, from 2008 to 2009. He is currently with the Faculty of Physical and Mathematical Sciences of the Autonomous University of Nuevo Leon, Mexico. His research interests include differential games, robust, optimal, and sliding mode control, and applications.

J. H. Pérez-Cruz received the diploma in electronic engineering from the Oaxaca Institute of Technology, Mexico, in 1999, the M.S. degree from the Toluca Institute of Technology, Mexico, in 2004, and the Ph.D. degree from CINVESTAV, Mexico, in 2008. He is currently a professor at the Oaxaca Institute of Technology. His fields of interest are system identification, control, and neural networks.