Improving Web Image Re-Ranking Precision Using Semantic Signatures
Keywords:
Image Search, Semantic Space, Semantic Signature, Context, Query Image, Query Keyword

Abstract
Image re-ranking, an effective way to improve the results of web-based image search, has been adopted by commercial search engines such as Bing and Google. Given a query keyword, a pool of images is first retrieved based on textual information. By asking the user to select a query image from the pool, the remaining images are re-ranked based on their visual similarities to the query image. A major challenge is that the similarities of low-level visual features do not correlate well with images' semantic meanings, which express the user's search intention. Recent work has proposed matching images in a semantic space, using attributes or reference classes closely related to the semantic meanings of images as its basis. However, learning a universal visual semantic space to characterize the highly diverse images found on the web is difficult and inefficient. We propose a novel image re-ranking framework that automatically learns different semantic spaces offline for different query keywords. The visual features of images are projected into their related semantic spaces to obtain semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic space specified by the query keyword. The proposed query-specific semantic signatures significantly improve both the accuracy and the efficiency of image re-ranking: original visual features with thousands of dimensions can be projected to semantic signatures as short as 25 dimensions. Preliminary results show a 25-40 percent relative improvement in re-ranking precision compared with state-of-the-art approaches.
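The pipeline described above can be sketched in a few lines. In this illustrative (hypothetical) example, each query keyword has a small set of linear reference-class classifiers learned offline; a semantic signature is simply the vector of class scores for an image's visual feature, and online re-ranking sorts the pool by Euclidean distance between signatures. The classifier weights, feature vectors, and class count are toy assumptions, not the paper's actual learned models.

```python
import math

# Hypothetical reference-class classifiers learned offline for ONE query
# keyword (e.g. "apple" -> classes such as fruit, laptop, tree). Each row
# maps a raw visual feature vector to the score of one reference class.
CLASSIFIERS = [
    [0.9, 0.1, 0.0],  # scores for reference class 0
    [0.1, 0.8, 0.1],  # scores for reference class 1
    [0.0, 0.2, 0.9],  # scores for reference class 2
]

def semantic_signature(visual_feature):
    """Project a visual feature into the keyword's semantic space:
    one classifier score per reference class."""
    return [sum(w * x for w, x in zip(row, visual_feature))
            for row in CLASSIFIERS]

def rerank(query_feature, pool_features):
    """Online stage: order pool images by L2 distance between their
    semantic signatures and the query image's signature."""
    q = semantic_signature(query_feature)
    def dist(f):
        s = semantic_signature(f)
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(q, s)))
    return sorted(range(len(pool_features)),
                  key=lambda i: dist(pool_features[i]))

# Toy run: pool image 1 is closest to the query in the semantic space.
query = [1.0, 0.0, 0.0]
pool = [[0.0, 1.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0]]
print(rerank(query, pool))  # -> [1, 0, 2]
```

Note the efficiency gain the abstract refers to: the online distance computation runs over the short signatures (here 3 dimensions, 25 in the paper) rather than over the original visual features with thousands of dimensions.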
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors contributing to this journal agree to publish their articles under the Creative Commons Attribution 4.0 International License, allowing third parties to share their work (copy, distribute, transmit) and to adapt it, under the condition that the authors are given credit and that in the event of reuse or distribution, the terms of this license are made clear.
