Abstract
This paper presents a new approach to the unsupervised training of Bayesian network classifiers. Three models have been analysed: the Chow and Liu (CL) multinets; the tree-augmented naive Bayes; and a new model, called the simple Bayesian network classifier, which is more robust in its structure learning. To perform the unsupervised training of these models, the classification maximum likelihood criterion is used. The maximization of this criterion is derived for each model under the classification expectation-maximization (EM) algorithm framework. To test the proposed unsupervised training approach, 10 well-known benchmark datasets have been used to measure clustering performance. For comparison, the results of the k-means and EM algorithms, as well as those obtained when the three Bayesian network classifiers are trained in a supervised way, are also analysed. A real-world image processing application is also presented, dealing with the clustering of wood board images described by 165 attributes. Results show that the proposed learning method, in general, outperforms traditional clustering algorithms; in the wood board image application, the CL multinets obtained a 12 per cent increase in clustering accuracy, on average, compared with the k-means method and a 7 per cent increase, on average, compared with the EM algorithm.
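The abstract does not reproduce the update equations, but the classification EM (CEM) framework it refers to can be summarised as alternating a hard-assignment E-step with a maximum-likelihood M-step over the resulting partition, so that the classification maximum likelihood criterion (a function of both the parameters and the hard partition) never decreases. The sketch below is a minimal illustration of that loop for a plain naive Bayes model with discrete attributes; it is an assumption for exposition only and is simpler than the CL-multinet, tree-augmented naive Bayes, and simple Bayesian network classifiers trained in the paper. The function name, the Laplace smoothing parameter `alpha`, and the convergence test are illustrative choices, not the authors' implementation.

```python
import numpy as np

def cem_naive_bayes(X, n_clusters, n_values, max_iter=100, alpha=1.0, seed=0):
    """Classification EM (CEM) sketch for an unsupervised naive Bayes model
    with discrete attributes.  X is an (n_samples, n_features) integer array
    with X[i, j] in {0, ..., n_values[j] - 1}.  Returns hard cluster labels."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    z = rng.integers(n_clusters, size=n)          # random initial hard partition

    for _ in range(max_iter):
        # M-step: maximum-likelihood estimates from the current partition,
        # with Laplace smoothing (alpha) to avoid zero probabilities.
        log_prior = np.log(np.bincount(z, minlength=n_clusters) + alpha) \
                    - np.log(n + alpha * n_clusters)
        log_cond = []
        for j in range(d):
            counts = np.full((n_clusters, n_values[j]), alpha)
            np.add.at(counts, (z, X[:, j]), 1.0)
            log_cond.append(np.log(counts / counts.sum(axis=1, keepdims=True)))

        # E-step + classification step: score log p(x, c) for every cluster
        # and commit each sample to its most probable cluster (hard assignment).
        log_joint = np.tile(log_prior, (n, 1))
        for j in range(d):
            log_joint += log_cond[j][:, X[:, j]].T
        z_new = log_joint.argmax(axis=1)

        if np.array_equal(z_new, z):              # partition unchanged: converged
            break
        z = z_new
    return z

# Illustrative usage: cluster 200 samples with three binary attributes into 2 groups.
X = np.random.default_rng(1).integers(2, size=(200, 3))
labels = cem_naive_bayes(X, n_clusters=2, n_values=[2, 2, 2])
```

Unlike standard EM, which weights the M-step by posterior probabilities, CEM commits each sample to its most probable cluster at every iteration; the paper derives the corresponding maximization steps for the three Bayesian network classifier structures rather than for the naive Bayes model used here.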
| Original language | English |
|---|---|
| Pages (from-to) | 2927-2948 |
| Number of pages | 22 |
| Journal | Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences |
| Volume | 465 |
| Issue number | 2109 |
| DOIs | |
| Publication status | Published - 1 Sept 2009 |
Keywords
- Bayesian networks
- machine learning
- classification expectation-maximization algorithm
- unsupervised training
- clustering