HUANG Na, HE Jingsha. Image Retrieval Based on Fusion of Deep Feature and Local Feature[J]. Journal of Beijing University of Technology, 2020, 46(12): 1345-1354. DOI: 10.11936/bjutxb2019070005


    Image Retrieval Based on Fusion of Deep Feature and Local Feature


Abstract: To improve accuracy and efficiency, an image retrieval method based on the fusion of global and local features was proposed. A deep feature was selected as the global feature, and speeded up robust features (SURF) and the local binary pattern (LBP) were used as local features. Feature fusion based on canonical correlation analysis (CCA) has two shortcomings, namely information loss and information redundancy. To address these problems, the criterion function was modified to obtain basis vectors that minimize the correlation between the features. Through projection onto these basis vectors, the independent information contained in each of the two feature vectors was obtained; the final fusion result was formed by combining this independent information with the correlated information contained in one of the two features. The improved fusion method represents the original data more comprehensively while eliminating redundant information. In the experiments, the fusion of the deep feature and the LBP feature was first shown to have strong discriminative ability in an image classification task, reaching an average classification accuracy of 99.1% with high time efficiency. A further group of experiments examined the influence of feature dimensionality on fusion performance; the results show that increasing the number of selected feature dimensions can improve classification accuracy to a certain extent. Finally, the image retrieval method based on the fusion of deep and local features was validated: the Manhattan distance was used to measure the similarity of the fused features, and the retrieval ranking was obtained from this similarity measure. On the experimental data set, the precision reached 98.0% and the recall reached 46.0%. The comparison shows that the method not only achieves reliable accuracy but also has high time efficiency.
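
The abstract only sketches the retrieval pipeline. As a rough illustration (not the authors' code), the Python sketch below fuses a deep (global) feature with a local feature using plain CCA and then ranks a gallery by Manhattan (L1) distance to a query, as described for the retrieval stage. The feature dimensions, the helper names, and the use of scikit-learn's CCA are assumptions made for illustration; the paper's improved objective (minimizing inter-feature correlation and re-adding one feature's correlated component) is not reproduced here.

```python
# Illustrative sketch only: plain CCA fusion + Manhattan-distance retrieval.
# The paper improves on plain CCA; that modified objective is not shown here.
import numpy as np
from sklearn.cross_decomposition import CCA


def fuse_features_cca(deep_feats, local_feats, n_components=16):
    """Project both feature sets with CCA and concatenate the projections.

    deep_feats  : (n_samples, d_deep)  e.g. CNN activations (global feature)
    local_feats : (n_samples, d_local) e.g. LBP histograms or pooled SURF descriptors
    Returns the fused representation of shape (n_samples, 2 * n_components).
    """
    cca = CCA(n_components=n_components, max_iter=1000)
    deep_c, local_c = cca.fit_transform(deep_feats, local_feats)
    # Plain CCA fusion keeps only the correlated subspaces; the paper's variant
    # additionally retains each feature's independent component.
    return np.hstack([deep_c, local_c]), cca


def retrieve(query_fused, gallery_fused, top_k=10):
    """Rank gallery images by Manhattan (L1) distance to the query feature."""
    dists = np.abs(gallery_fused - query_fused).sum(axis=1)
    order = np.argsort(dists)[:top_k]
    return order, dists[order]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    deep = rng.normal(size=(600, 256))   # hypothetical deep features
    local = rng.normal(size=(600, 59))   # hypothetical uniform-LBP histograms
    fused, _ = fuse_features_cca(deep, local)
    ranks, dists = retrieve(fused[0], fused, top_k=5)
    print("top-5 gallery indices:", ranks)
```

In this sketch the fused vectors are simply concatenated canonical projections; the dimensionality study mentioned in the abstract would correspond to varying n_components.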
