Pattern Analysis and Machine Intelligence, vol. 17, no. 1, pp. 90-94, (1995).
[21] JEONG K., XU J., ERDOGMUS D., PRINCIPE J. C., A new classifier based on information theoretic learning with unlabeled data. Neural Networks, vol. 18, pp. 719-726, (2005).
[22] JOACHIMS T., Transductive inference for text classification using support vector machines. In Proceedings of ICML-99, 16th International Conference on Machine Learning, pp. 200-209. Morgan Kaufmann Publishers, San Francisco, CA, (1999).
[23] KIM H., PANG S., JE H., KIM D., BANG S. Y., Constructing support vector machine ensemble. Pattern Recognition, vol. 36, pp. 2757-2767, (2003).
[24] KOTSIANTIS S. B., PINTELAS P. E., Combining bagging and boosting. International Journal of Computational Intelligence, vol. 1, no. 4, pp. 324-333, (2004).
[25] MITCHELL T., Machine learning. McGraw Hill, (1997).
[26] NANNI L., LUMINI A., Ensemblator: an ensemble of classifiers for reliable classification of biological data. Pattern Recognition Letters, vol. 28, pp. 622-630, (2007).
[27] POLIKAR R., Ensemble based systems in decision making. IEEE Circuits and Systems Magazine, vol. 6, no. 3, pp. 21-45, (2006).
[28] ROSEN B., Ensemble learning using decorrelated neural networks. Connection
Science, vol. 8, no. 3, pp. 373-384, (1996).
[29] RUTA D., GABRYS B., Classifier selection for majority voting. Information Fusion, vol. 6, pp. 63-81, (2005).
[30] RUTA D., GABRYS B., A theoretical analysis of majority voting errors for multiple classifier systems. Pattern Analysis and Applications, vol. 5, no. 4, pp. 333-350, (2002).
[31] SCHAPIRE R. E., The strength of weak learnability. Machine Learning, vol. 5,