Title: The MEE principle in data classification: A perceptron-based analysis
Publication Type: Journal Article
Year of Publication: 2010
Authors: Silva, LM; Marques de Sá, J; Alexandre, LA
Journal: Neural Computation
Volume: 22
Issue: 10
Pagination: 2698-2728
Date Published: 2010
ISSN: 0899-7667
Keywords: algorithms, artificial intelligence, artificial neural networks, automated pattern recognition, computer simulation, entropy, mathematical concepts, statistical models
Abstract: This letter examines whether risk functionals derived from information-theoretic principles, such as Shannon's or Rényi's entropies, are able to cope with the data classification problem, both in the sense of attaining the risk functional minimum and of implying the minimum probability of error allowed by the family of functions implemented by the classifier, here denoted min Pe. The analysis of this so-called minimization of error entropy (MEE) principle is carried out for a single perceptron with continuous activation functions, yielding continuous error distributions. Although the analysis is restricted to single perceptrons, it reveals a large spectrum of behaviors that MEE can be expected to exhibit in both theory and practice. Regarding the theoretical MEE, our study clarifies the role of the parameters controlling the perceptron activation function (of the squashing type) in frequently attaining the minimum probability of error. Our study also clarifies the role of the kernel density estimator of the error density in achieving the minimum probability of error in practice. © 2010 Massachusetts Institute of Technology.
URL: http://www.scopus.com/inward/record.url?eid=2-s2.0-78149334887&partnerID=40&md5=97c7445cfa4693d406d979afc4b42319
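
The abstract above turns on two technical ingredients: Rényi's quadratic entropy of the classifier's error and its kernel (Parzen-window) density estimate. As a minimal sketch of how MEE training works in practice, not code from the paper itself, the following Python trains a single tanh perceptron by gradient descent on the KDE estimate of H2(e) = -log V(e), where V(e) = (1/N^2) sum_ij G_{sigma*sqrt(2)}(e_i - e_j) is the so-called information potential; the function names and the values of sigma, lr, and epochs are illustrative assumptions, not the paper's settings.

import numpy as np

def gaussian(u, s):
    # Gaussian (Parzen) kernel with standard deviation s.
    return np.exp(-u**2 / (2.0 * s**2)) / (s * np.sqrt(2.0 * np.pi))

def train_mee_perceptron(X, t, sigma=0.4, lr=0.1, epochs=300, seed=0):
    # Single perceptron y = tanh(w.x + b) with targets t in {-1, +1}.
    # Descends the KDE estimate of Renyi's quadratic error entropy
    #   H2(e) = -log V(e),  V(e) = (1/N^2) sum_ij G_{sigma*sqrt(2)}(e_i - e_j),
    # which is the same as ascending the information potential V.
    n, d = X.shape
    rng = np.random.default_rng(seed)
    w, b = rng.normal(scale=0.1, size=d), 0.0
    for _ in range(epochs):
        y = np.tanh(X @ w + b)
        e = t - y                      # errors are bounded because |y| < 1
        diffs = e[:, None] - e[None, :]
        G = gaussian(diffs, sigma * np.sqrt(2.0))
        V = G.mean()                   # information potential estimate
        # dV/de_k = (2/N^2) sum_j G'(e_k - e_j), with G'(u) = -u G(u) / (2 sigma^2)
        dV_de = -(G * diffs).sum(axis=1) / (sigma**2 * n**2)
        chain = dV_de * (1.0 - y**2)   # through tanh: de/dz = -(1 - y^2)
        grad_w = -(chain[:, None] * X).sum(axis=0)
        grad_b = -chain.sum()
        # Descending H2 = -log V is ascending V, with the 1/V factor from the log.
        w += (lr / V) * grad_w
        b += (lr / V) * grad_b
    return w, b

# Toy usage: two Gaussian classes in the plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 0.7, (100, 2)), rng.normal(1.0, 0.7, (100, 2))])
t = np.hstack([-np.ones(100), np.ones(100)])
w, b = train_mee_perceptron(X, t)
print("training accuracy:", np.mean(np.sign(np.tanh(X @ w + b)) == t))

One design point worth noting: entropy is shift invariant, so minimizing H2 by itself only concentrates the error density somewhere. With a bounded squashing activation and targets of -1 and +1, however, the errors of the two classes lie in disjoint intervals that meet only at zero, so concentrating both implies errors near zero; this is the kind of interplay between the activation's parameters and min Pe that the letter analyzes.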