The team is currently working on the project "Intelligent nonlinear systems with shallow and deep architectures", financed by the National Science Centre (NCN) in Poland, grant no. 2013/11/B/ST6/01337.
The discovery of the EBP (Error Back Propagation) algorithm started a rapid growth of computational intelligence systems. Thousands of practical problems have been solved with the help of neural networks. Other neural network architectures are also possible, but the main accomplishments were achieved with feedforward neural networks, primarily using MLP (Multi-Layer Perceptron) architectures. Although EBP was a real breakthrough, it is not only a very slow algorithm, but it is also incapable of training networks with super compact architectures.

The most noticeable progress was made with the adaptation of the LM (Levenberg-Marquardt) algorithm to neural network training. The LM algorithm can train networks in 100 to 1000 times fewer iterations, but the size of the problems it can handle is significantly limited, because the size of the Jacobian matrix is proportional to the number of training patterns (illustrated in the first sketch below).

A recent study shows that the popular SLP architecture (an MLP with a single hidden layer) has very limited capabilities. For example, an SLP network with 10 neurons can solve only the Parity-9 problem, while the same 10 neurons arranged in an FCC (Fully Connected Cascade) architecture can solve a problem as large as Parity-1023 (see the second sketch below). Unfortunately, popular training algorithms (including the LM algorithm) are not capable of training these compact and powerful architectures. Frustration with traditional neural networks has pushed researchers in different directions, toward implementing, for example, fuzzy systems, support vector machines, and extreme learning machines. The issue can instead be addressed by using super compact architectures with a reduced number of degrees of freedom. Therefore, our research will focus on these compact architectures and on new algorithms to train them.
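As a back-of-envelope illustration of the LM limitation, the first sketch below estimates the storage needed for the Jacobian matrix J, which has one row per training pattern per network output and one column per weight. The network size and pattern counts are hypothetical, chosen only to show the linear growth with the number of patterns; this is not a figure from the project.

    def jacobian_gigabytes(patterns, outputs, weights, bytes_per_float=8):
        # J has (patterns * outputs) rows and (weights) columns,
        # so its storage grows linearly with the number of patterns.
        return patterns * outputs * weights * bytes_per_float / 1e9

    # Hypothetical network with 300 weights and a single output:
    print(jacobian_gigabytes(1_000, 1, 300))      # 0.0024 GB
    print(jacobian_gigabytes(1_000_000, 1, 300))  # 2.4 GB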
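The parity claim follows from a known weight construction for cascade networks with hard-threshold neurons: n neurons can compute the parity of 2^n - 1 inputs by extracting the binary digits of the input sum, with the last digit being the parity. The second sketch below verifies this for n = 3 (Parity-7); the same construction with n = 10 yields Parity-1023, although exhaustive verification is no longer feasible at that size. The weights are hand-set rather than trained, and the code is an illustrative sketch, not the project's implementation.

    from itertools import product

    def fcc_parity(bits, n):
        # Every neuron receives the (unit-weighted) sum of all inputs plus
        # cascade connections from every earlier neuron: FCC connectivity.
        s = sum(bits)
        hidden = []
        for k in range(1, n + 1):
            # Neuron k subtracts the amount already accounted for by earlier
            # neurons, then thresholds at 2**(n - k); together the neurons
            # recover the binary digits of s, and the last digit is s mod 2.
            net = s - sum(2 ** (n - j) * h for j, h in enumerate(hidden, start=1))
            hidden.append(1 if net - 2 ** (n - k) + 0.5 > 0 else 0)
        return hidden[-1]   # the last cascade neuron is the output

    n = 3                   # 3 neurons -> Parity-7; 10 neurons -> Parity-1023
    N = 2 ** n - 1
    assert all(fcc_parity(bits, n) == sum(bits) % 2
               for bits in product((0, 1), repeat=N))
    print(f"A {n}-neuron FCC computes Parity-{N} exactly")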