An Enhanced Random Linear Oracle Ensemble Method using Feature Selection Approach based on Naïve Bayes Classifier

Boon Pin Ooi, Norasmadi Abdul Rahim, Ammar Zakaria, Maz Jamilah Masnan, Shazmin Aniza Abdul Shukor


The Random Linear Oracle (RLO) ensemble replaces each classifier with a mini-ensemble of two, allowing the base classifiers to be trained on different data subsets and thereby improving the diversity of the trained classifiers. The Naïve Bayes (NB) classifier was chosen as the base classifier for this research due to its simplicity and low computational cost. Different feature selection algorithms were applied to the RLO ensemble to investigate the effect of differently sized data on its performance. Experiments were carried out using 30 data sets from the UCI repository and 6 learning algorithms: the NB classifier; the RLO ensemble; the RLO ensemble trained with Genetic Algorithm (GA) feature selection using the accuracy of the NB classifier as the fitness function; the RLO ensemble trained with GA feature selection using the accuracy of the RLO ensemble as the fitness function; the RLO ensemble trained with t-test feature selection; and the RLO ensemble trained with Kruskal-Wallis test feature selection. The results showed that the RLO ensemble can significantly improve the diversity of the NB classifier in dealing with distinctively selected feature sets through its fusion-selection paradigm. Consequently, feature selection algorithms can greatly benefit the RLO ensemble: with a properly selected number of features from the filter approach, or GA natural selection from the wrapper approach, it achieved considerable improvement in classification accuracy as well as growth in diversity.
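The RLO scheme summarized above can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' implementation: the random hyperplane is taken as the perpendicular bisector of two randomly drawn training points, each side gets its own Naïve Bayes model (falling back to the full data when a side is too small), and the ensemble combines members by majority vote. All names here (`GaussianNB`, `RLOEnsembleMember`, `rlo_predict`) are illustrative.

```python
import math
import random

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means/variances + priors."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.stats, self.priors = {}, {}
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            n = len(rows)
            means = [sum(col) / n for col in zip(*rows)]
            varis = [sum((v - m) ** 2 for v in col) / n + 1e-9
                     for col, m in zip(zip(*rows), means)]
            self.stats[c] = (means, varis)
            self.priors[c] = n / len(X)
        return self

    def predict_one(self, x):
        def log_lik(c):
            means, varis = self.stats[c]
            s = math.log(self.priors[c])
            for v, m, var in zip(x, means, varis):
                s += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            return s
        return max(self.classes, key=log_lik)

class RLOEnsembleMember:
    """One RLO 'mini-ensemble': a random hyperplane (perpendicular bisector of
    two random training points) routes each sample to one of two NB models."""
    def __init__(self, rng):
        self.rng = rng

    def _side(self, x):
        d1 = sum((a - b) ** 2 for a, b in zip(x, self.p1))
        d2 = sum((a - b) ** 2 for a, b in zip(x, self.p2))
        return 0 if d1 <= d2 else 1

    def fit(self, X, y):
        self.p1, self.p2 = self.rng.sample(X, 2)
        halves = ([], []), ([], [])
        for x, label in zip(X, y):
            s = self._side(x)
            halves[s][0].append(x)
            halves[s][1].append(label)
        self.models = []
        for Xs, ys in halves:
            # Fall back to the full data when a side lacks enough samples per class
            counts = {c: ys.count(c) for c in set(y)}
            if min(counts.values()) < 3:
                Xs, ys = X, y
            self.models.append(GaussianNB().fit(Xs, ys))
        return self

    def predict_one(self, x):
        return self.models[self._side(x)].predict_one(x)

def rlo_predict(members, x):
    """Fuse the members' oracle-selected outputs by majority vote."""
    votes = [m.predict_one(x) for m in members]
    return max(set(votes), key=votes.count)

# Toy two-class data: two well-separated Gaussian clusters
rng = random.Random(0)
X = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(60)] + \
    [(rng.gauss(4, 1), rng.gauss(4, 1)) for _ in range(60)]
y = [0] * 60 + [1] * 60

members = [RLOEnsembleMember(rng).fit(X, y) for _ in range(7)]
acc = sum(rlo_predict(members, x) == label for x, label in zip(X, y)) / len(X)
```

Because every member trains its two NB models on different random subsets, the members disagree more often than identically trained NB classifiers would, which is the source of the diversity gain discussed in the abstract.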


Ensemble; Feature Selection; Naïve Bayes; Pattern Recognition; Random Linear Oracle







This work is licensed under a Creative Commons Attribution 3.0 License.

ISSN: 2180-1843

eISSN: 2289-8131