PROJECTS


New project "A new approach for effective training of complex intelligent systems", financed by the National Science Centre (NCN) in Poland, grant 2015/17/B/ST6/01880. The project starts in March 2016.

Abstract

Shallow networks are trained with various gradient-based methods or with newer technologies such as Support Vector Machines (SVM) or Extreme Learning Machines (ELM). These networks use a single hidden layer with limited capabilities, so even medium-size problems require excessive numbers of processing units, commonly called neurons. At the same time, too many neurons may lead to poor generalization. On the other hand, deep network architectures exhibit tremendous capabilities, but it is almost impossible to train them with supervised learning algorithms because of the "vanishing gradient" problem. Currently used solutions for deep learning combine extensive data preprocessing, unsupervised learning algorithms for the first layers, and supervised learning algorithms for the last layers. Solving the "vanishing gradient" problem would allow supervised training of entire deep architectures, which may more fully exploit the tremendous power of deep networks and lead to truly robust learning. We have already shown that this is possible, and extending this success is the purpose of the proposed research. More specifically, the goals are listed below (a small numerical illustration of the vanishing-gradient effect follows the list):
Goal 1. Practical elimination of the "vanishing gradient" problem.
Goal 2. Development of dedicated advanced second-order algorithms to train merged deep and shallow architectures.
Goal 3. An alternative approach to training BMLP (Bridged MLP) architectures by converting them to traditional MLP architectures with dual structures.
Goal 4. Development of an on-line learning algorithm for adaptive learning.
Goal 5. Possible reduction of the number of training parameters and of the randomness in the process.
Goal 6. Distribution of the developed software and dissemination of results.
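
As a rough illustration of the problem behind Goal 1, the short sketch below (a toy example in Python/NumPy written for this page; it assumes a random sigmoid network and is not the project's method) backpropagates an error signal through a stack of sigmoid layers and prints how its norm typically shrinks by orders of magnitude on the way down, which is exactly what stalls supervised training of deep architectures:

    # Toy demonstration of the "vanishing gradient" effect in a deep
    # stack of sigmoid layers (random weights, no training involved).
    import numpy as np

    rng = np.random.default_rng(0)
    n_layers, width = 20, 10

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Forward pass through n_layers random sigmoid layers, storing activations.
    weights = [0.5 * rng.standard_normal((width, width)) for _ in range(n_layers)]
    x = rng.standard_normal(width)
    activations = [x]
    for W in weights:
        x = sigmoid(W @ x)
        activations.append(x)

    # Backward pass: delta_k = W_k^T (delta_{k+1} * sigmoid'(net_{k+1})),
    # where sigmoid'(net) = a * (1 - a) for activation a.
    delta = np.ones(width)
    for k in range(n_layers - 1, -1, -1):
        a = activations[k + 1]
        delta = weights[k].T @ (delta * a * (1.0 - a))
        print(f"below layer {k:2d}: ||delta|| = {np.linalg.norm(delta):.3e}")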


The team is currently working on the project "Intelligent nonlinear systems with shallow and deep architectures", financed by the National Science Centre (NCN) in Poland, grant 2013/11/B/ST6/01337.

Abstract

The discovery of the EBP (Error Back Propagation) algorithm started a period of rapid growth in computational intelligence systems. Thousands of practical problems have been solved with the help of neural networks. Other neural network architectures are also possible, but the main accomplishments have been achieved with feed-forward neural networks, primarily MLP (Multi-Layer Perceptron) architectures. Although EBP was a real breakthrough, it is not only a very slow algorithm but is also incapable of training networks with super-compact architectures. The most noticeable progress came with the adaptation of the LM (Levenberg-Marquardt) algorithm to neural network training. The LM algorithm can train networks in 100 to 1000 times fewer iterations, but the size of the problems it can handle is significantly limited because the size of the Jacobian matrix is proportional to the number of training patterns. Recent studies show that the most popular SLP architecture (an MLP with a single hidden layer) has very limited capabilities. For example, an SLP network with 10 neurons can solve only the Parity-9 problem, while the same 10 neurons in an FCC (Fully Connected Cascade) architecture can solve a problem as large as Parity-1023 (a small sketch of this construction follows below). Unfortunately, popular training algorithms (including the LM algorithm) are not capable of training these compact and powerful architectures. Frustration with traditional neural networks has pushed researchers in other directions, toward, for example, fuzzy systems, support vector machines, and extreme learning machines. These issues can be addressed by using super-compact architectures with a reduced number of degrees of freedom. Therefore, our research will focus on these compact architectures and on new algorithms to train them.
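
To make the Parity-1023 claim concrete, the following minimal sketch (a hypothetical construction with hard-threshold units, written here for illustration; it is not the project's training algorithm) shows how n neurons in a fully connected cascade can compute Parity-(2^n - 1). Each cascade neuron sees all network inputs plus the outputs of the preceding neurons; in effect, neuron j extracts bit (n - j) of the input sum, so the last neuron outputs the parity bit itself:

    # FCC parity construction (illustrative): n hard-threshold neurons
    # compute the parity of up to 2^n - 1 binary inputs.
    import itertools

    def fcc_parity(bits, n_neurons):
        s = float(sum(bits))          # every input feeds every neuron with weight +1
        outputs = []
        for j in range(1, n_neurons + 1):
            threshold = 2.0 ** (n_neurons - j)
            # Subtract the already-extracted higher bits fed back from
            # earlier cascade neurons, then compare with this bit's value.
            net = s - sum(2.0 ** (n_neurons - i) * o
                          for i, o in enumerate(outputs, start=1)) - threshold
            outputs.append(1.0 if net >= 0.0 else 0.0)
        return outputs[-1]            # last neuron's output is the parity bit

    # Exhaustive check for Parity-7 with 3 neurons (2^3 - 1 = 7 inputs).
    for bits in itertools.product([0, 1], repeat=7):
        assert fcc_parity(bits, 3) == sum(bits) % 2
    print("3 cascade neurons reproduce Parity-7 on all 128 patterns")

With n_neurons = 10 the same construction covers Parity-1023, matching the capability quoted above, although exhaustively verifying all patterns is of course infeasible at that size.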


