A Review on Multi-Task Clustering with Self-Adaptive and Model Relation Learning

Authors

  • Rupakumar C, Dept. of MCA, Sri Padmavathi College of Computer Sciences And Technology, Tiruchanoor-Tirupati, India
  • Girinath S, Dept. of MCA, Sri Padmavathi College of Computer Sciences And Technology, Tiruchanoor-Tirupati, India

Keywords

Multi-task Clustering, Partially Related Tasks, Negative Transfer, Instance Transfer

Abstract

Multi-task clustering improves the clustering performance of each task by transferring knowledge among related tasks. An important aspect of multi-task clustering is assessing task relatedness; to our knowledge, however, only two previous works have done so, and both have limitations. In this paper, we propose two multi-task clustering methods for partially related tasks: self-adapted multi-task clustering (SAMTC) and manifold regularized coding multi-task clustering (MRCMTC). Both methods automatically identify and transfer related instances among the tasks, thereby avoiding negative transfer. Each constructs the similarity matrix for a target task by exploiting useful information from the source tasks through related-instance transfer, and then applies spectral clustering to obtain the final clustering result; the two methods differ in how they learn the related instances from the source tasks.
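To make the pipeline described above concrete, the following minimal Python sketch augments a target task's similarity matrix with source instances judged related and then applies spectral clustering on the combined graph. It is an illustration of the general idea only, not the authors' SAMTC or MRCMTC: the RBF affinity, the fixed relatedness_threshold filter, and the function cluster_with_instance_transfer are assumptions standing in for the relatedness learning that the two methods perform in their own ways.

    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.metrics.pairwise import rbf_kernel

    def cluster_with_instance_transfer(X_target, X_source, n_clusters,
                                       relatedness_threshold=0.5, gamma=1.0):
        # Score each source instance by its strongest affinity to any target
        # instance; this fixed threshold is an illustrative stand-in for the
        # learned relatedness criteria of SAMTC/MRCMTC.
        cross_sim = rbf_kernel(X_source, X_target, gamma=gamma)
        related = cross_sim.max(axis=1) >= relatedness_threshold

        # Augment the target task with the transferred source instances and
        # build one similarity matrix over the combined set.
        X_aug = np.vstack([X_target, X_source[related]])
        W = rbf_kernel(X_aug, gamma=gamma)

        # Spectral clustering on the precomputed affinity graph.
        labels = SpectralClustering(n_clusters=n_clusters,
                                    affinity="precomputed",
                                    assign_labels="kmeans",
                                    random_state=0).fit_predict(W)

        # Only the target instances' labels form the task's clustering result.
        return labels[: len(X_target)]

Called as cluster_with_instance_transfer(X_t, X_s, n_clusters=4), the sketch returns labels for the target instances only; unrelated source instances never enter the graph, which is the mechanism by which instance transfer avoids negative transfer.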

Published

2025-11-25

How to Cite

[1]
C. Rupakumar and S. Girinath, “A Review on Multi-Task Clustering with Self-Adaptive and Model Relation Learning”, Int. J. Comp. Sci. Eng., vol. 7, no. 6, pp. 106–108, Nov. 2025.