Comparative Analysis of Cluster based Boosting

Authors

  • Kolhe N, Department of Computer Engineering, K.K. Wagh College of Engineering & Research, Savitribai Phule Pune University, Maharashtra, India
  • Kulkarni H, Department of Computer Engineering, K.K. Wagh College of Engineering & Research, Savitribai Phule Pune University, Maharashtra, India
  • Kedia I, Department of Computer Engineering, K.K. Wagh College of Engineering & Research, Savitribai Phule Pune University, Maharashtra, India
  • Gaikwad S, Department of Computer Engineering, K.K. Wagh College of Engineering & Research, Savitribai Phule Pune University, Maharashtra, India

Keywords:

Boosting, Clustering, Hierarchical clustering, Classifier combining, Machine Learning, Supervised learning, Computer graphics, Artificial intelligence

Abstract

Clustering groups similar objects into one cluster and dissimilar objects into other clusters. Combined with boosting, this idea is applied in predictive data mining to generate multiple clusters. The existing cluster-based boosting (CBB) system operates on real data sets supplied as input. It uses the K-means algorithm, which yields a limited number of clusters, is prone to overfitting, and suffers from two further limitations: (1) subsequent functions ignore troublesome areas, and (2) subsequent functions become complex. To overcome these drawbacks, hierarchical clustering is proposed, which is expected to improve the accuracy of the CBB approach compared to popular boosting algorithms. The comparative analysis may show an improvement in system performance, and users may obtain more refined and accurate clusters as the desired output.
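The abstract outlines the general cluster-then-boost idea: partition the training data, then fit a boosted learner per partition, here with hierarchical clustering in place of K-means. Below is a minimal sketch of that idea using scikit-learn; the AgglomerativeClustering and AdaBoostClassifier choices, the three-cluster setting, the nearest-centroid routing at prediction time, and the synthetic data are all illustrative assumptions rather than the authors' exact CBB procedure.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, pairwise_distances_argmin
from sklearn.model_selection import train_test_split

# Synthetic two-class data standing in for the "real data sets" mentioned above.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Partition the training data with hierarchical (agglomerative) clustering,
#    the replacement for K-means proposed in the abstract.
n_clusters = 3  # illustrative choice, not taken from the paper
labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)
centroids = np.vstack([X_train[labels == c].mean(axis=0) for c in range(n_clusters)])

# 2. Train one boosted classifier per cluster so each learner concentrates on
#    its own region of the feature space.
models = {}
for c in range(n_clusters):
    Xc, yc = X_train[labels == c], y_train[labels == c]
    if np.unique(yc).size < 2:
        # Degenerate single-class cluster: fall back to a constant prediction.
        models[c] = ("const", yc[0])
    else:
        models[c] = ("boost", AdaBoostClassifier(n_estimators=50, random_state=0).fit(Xc, yc))

# 3. Route each test point to its nearest cluster centroid and use that
#    cluster's boosted classifier for the final prediction.
nearest = pairwise_distances_argmin(X_test, centroids)
y_pred = np.array([
    m if kind == "const" else m.predict(x.reshape(1, -1))[0]
    for x, (kind, m) in zip(X_test, (models[c] for c in nearest))
])
print("cluster-based boosting test accuracy:", accuracy_score(y_test, y_pred))
```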

References

[1] L. Dee Miller and Leen-Kiat Soh, "Cluster-based Boosting", IEEE Transactions on Knowledge and Data Engineering, Vol. 27, Issue 6, pp. 1-12, June 2015.

[2] C. Zhang and Y. Ma, "Ensemble Machine Learning", New York, NY, USA: Springer, p. 76, July 2012.

[3] A. Vezhnevets and O. Barinova, "Avoiding boosting overfitting by removing confusing samples", Springer Berlin Heidelberg, Vol. 4701, pp. 430-441, 2007.

[4] D.-S. Kim, Y.-M. Baek, and W.-Y. Kim, "Reducing overfitting of AdaBoost by clustering-based pruning of hard examples", The Korean Society of Broadcast Engineers, Vol. 18, Issue 4, pp. 643-646, July 2007.

[5] M. Okabe and S. Yamada, "Clustering by learning constraints priorities", in Proc. Int. Conf. Data Mining, pp. 1050-1055, 2012.

[6] A. Ganatra and Y. Kosta, "Comprehensive evolution and evaluation of boosting", Int. J. Comput. Theory Eng., Vol. 2, pp. 931-936, 2010.

[7] D. Frossyniotis, A. Likas, and A. Stafylopatis, "A clustering method based on boosting", Pattern Recog. Lett., Vol. 25, pp. 641-654, 2004.

[8] L. Reyzin and R. Schapire, "How boosting the margin can also boost classifier complexity", in Proc. Int. Conf. Mach. Learn., pp. 753-760, 2006.

[9] R. Schapire and Y. Freund, "Boosting: Foundations and Algorithms", Cambridge, MA, USA: MIT Press, 2012.

[10] J. Chou, C. Chiu, M. Farfoura, and I. Al-Taharwa, "Optimizing the prediction accuracy of concrete compressive strength based on a comparison of data-mining techniques", J. Comp. Civil Eng., Vol. 25, pp. 242-253, 2011.

[11] David Eppstein, "Fast Hierarchical Clustering and Other Applications of Dynamic Closest Pairs", ACM, New York, NY, USA, Vol. 5, 2005.

[12] Y. Freund, "An adaptive version of the boost by majority algorithm", Mach. Learn., Vol. 43, pp. 293-318, 2001.

[13] Preeti Baser and Jatinderkumar R. Saini, "A Comparative Analysis of Various Clustering Techniques used for Very Large Datasets", Vol. 3, Issue 4, pp. 1-3, March 2013.

Published

2015-10-31

How to Cite

[1]
N. Kolhe, H. Kulkarni, I. Kedia, and S. Gaikwad, “Comparative Analysis of Cluster based Boosting”, Int. J. Comp. Sci. Eng., vol. 3, no. 10, pp. 66–70, Oct. 2015.