Abstract
Two-hidden-layer feedforward neural networks (TLFNs) have been shown to outperform single-hidden-layer feedforward neural networks (SLFNs) for function approximation in many cases. However, their added complexity makes a good topology more difficult to find. Given a fixed number of hidden nodes nh, this paper investigates how their allocation between the first and second hidden layers (nh = n1 + n2) affects the likelihood of finding the best generaliser. The experiments were carried out over ten public-domain datasets with nh = 8 and 16. The findings were that the heuristic n1 = 0.5nh + 1 has an average probability of at least 0.85 of finding a network with a generalisation error within 0.18% of the best generaliser. Furthermore, the worst case over all datasets was within 0.23% for nh = 8, and within 0.15% for nh = 16. These findings could be used to reduce the complexity of the topology search for TLFNs from quadratic to linear, or alternatively for 'topology mapping' between TLFNs and SLFNs with the same number of hidden nodes, to compare their performance.
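As an illustration of the node-allocation heuristic, the sketch below splits a fixed budget of nh hidden nodes using n1 = 0.5nh + 1 and trains a single two-hidden-layer network on that split rather than searching over every (n1, n2) pair. It is a minimal example only: scikit-learn's MLPRegressor and the synthetic dataset are stand-ins and are not taken from the paper's experimental setup.

```python
# Minimal sketch of the n1 = 0.5*nh + 1 allocation heuristic.
# MLPRegressor and make_regression are stand-ins, not the paper's setup.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor


def heuristic_split(nh: int) -> tuple[int, int]:
    """Allocate nh hidden nodes between two hidden layers via n1 = 0.5*nh + 1."""
    n1 = int(0.5 * nh + 1)
    return n1, nh - n1


# Hypothetical regression dataset for illustration only.
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

nh = 16
n1, n2 = heuristic_split(nh)  # (9, 7) for nh = 16

# One heuristic topology per node budget, instead of trying every
# (n1, n2) split with n1 + n2 = nh.
model = MLPRegressor(hidden_layer_sizes=(n1, n2), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print(f"n1={n1}, n2={n2}, held-out R^2: {model.score(X_te, y_te):.3f}")
```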
| Original language | English |
|---|---|
| Pages (from-to) | 241-247 |
| Number of pages | 7 |
| Journal | International Journal of Machine Learning and Computing |
| Volume | 6 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 7 Oct 2016 |
Keywords
- ANN
- optimal node ratio
- topology mapping
- two-hidden-layer feedforward
- function approximation