Convolutional Neural Network Based Enhanced Learning Classification Using Privileged Information
DOI: https://doi.org/10.26438/ijcse/v7i6.353357

Keywords: Untagged corpora, Transfer learning, Privileged information, Neural network

Abstract
The accuracy of data-driven learning methods is often unsatisfactory when the training data are insufficient in quantity or quality. A common remedy is to incorporate privileged information (PI) — manually labeled tags, properties, or attributes — to improve classifier learning. The manual labeling process, however, is time-consuming and labor-intensive. In addition, manually labeled privileged information may not be sufficiently rich, owing to the limits of individual annotators' knowledge. In this approach, classifier learning is enhanced by mining privileged information from untagged corpora, which effectively removes the reliance on manually labeled data and enriches the privileged information. Each selected piece of privileged information is treated as a subcategory, and one classifier is learned independently for each subcategory. The classifiers for all subcategories are then integrated to form a more powerful category-level classifier. In particular, a CNN classifier is used to learn the optimal output from the selected images. The superiority of the proposed approach is demonstrated by extensive experiments on two benchmark data sets.
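The per-subcategory training and integration step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes linear (logistic) subcategory classifiers on synthetic 2-D features and combines them by max-pooling their scores; all function names (`train_subcategory_clf`, `category_score`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_subcategory_clf(X, y, lr=0.1, epochs=200):
    """Logistic regression for one PI subcategory, trained with
    plain batch gradient descent (stand-in for the paper's classifier)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid scores
        grad = p - y                              # dLoss/dlogit per sample
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def category_score(x, clfs):
    """Category confidence = max over the subcategory classifiers' scores,
    one simple way to integrate them into a category-level classifier."""
    return max(1.0 / (1.0 + np.exp(-(x @ w + b))) for w, b in clfs)

# Two synthetic subcategories of the positive category, plus negatives.
sub1 = rng.normal([2.0, 2.0], 0.3, (50, 2))
sub2 = rng.normal([-2.0, 2.0], 0.3, (50, 2))
neg = rng.normal([0.0, -2.0], 0.3, (100, 2))

# One classifier per subcategory, each trained against the negatives.
clfs = []
for sub in (sub1, sub2):
    X = np.vstack([sub, neg])
    y = np.r_[np.ones(len(sub)), np.zeros(len(neg))]
    clfs.append(train_subcategory_clf(X, y))

print(category_score(np.array([2.0, 2.0]), clfs))   # inside subcategory 1
print(category_score(np.array([0.0, -2.0]), clfs))  # in the negative region
```

The max-pooling rule means a sample is accepted as soon as any one subcategory classifier fires, mirroring the idea that the subcategory classifiers jointly cover the full category.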
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
