CN108829711B - Image retrieval method based on multi-feature fusion - Google Patents


Info

Publication number
CN108829711B
CN108829711B (application CN201810418660.XA)
Authority
CN
China
Prior art keywords
image
feature
target image
color
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810418660.XA
Other languages
Chinese (zh)
Other versions
CN108829711A (en)
Inventor
栾雄
张闻强
徐念龙
杨莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dejian Computer Technology Co ltd
Original Assignee
Shanghai Dejian Computer Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dejian Computer Technology Co ltd filed Critical Shanghai Dejian Computer Technology Co ltd
Priority to CN201810418660.XA
Publication of CN108829711A
Application granted
Publication of CN108829711B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image retrieval method based on multi-feature fusion, comprising the following steps: acquire a target image I and compute its image features: extract the color features of the target image I and store them in the image color feature library; extract the shape features of the target image I and store them in the image shape feature library; extract the texture features of the target image I and store them in the image texture feature library; then compute the similarity between the target image I and the image data set to obtain the final retrieval result. The invention has the following beneficial effects: a user can retrieve similar home-design images from a query image. Addressing the shortcomings of single features, the method combines a feature set suited to the home-furnishing industry, improving on the efficiency of single-feature retrieval and overcoming the insufficient coverage of any single feature in real home scenes. The method improves the search efficiency for home-product images, saves the user time in finding a home design, and improves the user's actual search experience.

Description

Image retrieval method based on multi-feature fusion
Technical Field
The invention relates to the technical field of image retrieval, in particular to an image retrieval method based on multi-feature fusion.
Background
Content-based image retrieval builds a library of high-dimensional feature vectors from information such as the visual features and spatial relations of images, matches images by these high-dimensional vectors, and returns the retrieval result to the user. Compared with text-based image retrieval, its results are more relevant.
By logical type, image features fall into three broad categories: color features, texture features and shape features. Color features are among the most widely used image features and support color-based classification in image search; texture features describe the texture patterns within image blocks; shape features mainly describe the structural content of the image. In image search, these features serve as low-level information supporting retrieval of the target image I.
An image retrieval method built on a single feature can be efficient with respect to that feature, but often falls short when facing complex, varied natural scenes of different categories. Retrieval methods that combine multiple image features are therefore in urgent demand in practical engineering applications.
In a concrete application scene, one difficulty in fusing multiple image features is choosing the extraction method for each feature: different extraction methods express the features differently and hence interpret the image differently, which ultimately affects the retrieval quality in that scene.
Disclosure of Invention
The invention provides an image retrieval method based on multi-feature fusion, which addresses the shortcomings of prior-art methods that fuse multiple image features.
The technical scheme of the invention is realized as follows:
an image retrieval method based on multi-feature fusion comprises the following steps:
(1) acquiring a target image I and calculating the image characteristics of the target image I
1) Extracting the color characteristics of the target image I and storing the color characteristics in a color characteristic library of the image
Converting a target image I from an RGB color space to an HSV color space according to a standard conversion formula;
② Apply a first-step quantization to the target image I in HSV color space according to the quantization formulas for H, S and V (given as equation images in the source), quantizing the hue H into 7 intervals and the brightness V and saturation S into 3 intervals each;
③ Through this quantization, map the RGB values of the target image I into the 63 HSV color bins according to the formula L = 9H + 3S + V;
Partition the target image I into blocks and assign each block a weight according to the amount of information it contains. Writing H(I_k) for the color histogram of block k and w_k for its weight, the block-weighted color histogram of the whole image is H(I) = Σ_{k=1}^{n} w_k·H(I_k), where n is the number of blocks of the target image I and Σ_{k=1}^{n} w_k = 1;
④ Normalize the block-weighted color histogram of the whole image and store it in the image color feature library as the color feature of the target image I;
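For illustration, steps ② through ④ can be sketched as follows in Python with NumPy. The exact interval boundaries of the H/S/V quantization are given only as equation images in the source, so uniform splits are assumed here; this is a sketch, not the claimed implementation.

```python
import numpy as np

def quantize_hsv(h, s, v):
    """Map one HSV triple to a single bin index L = 9H + 3S + V.

    Hue is quantized into 7 intervals, saturation and value into 3 each
    (7 * 3 * 3 = 63 bins).  Uniform interval boundaries are an assumption.
    """
    H = min(int(h / 360.0 * 7), 6)   # hue in [0, 360) -> 0..6
    S = min(int(s * 3), 2)           # saturation in [0, 1] -> 0..2
    V = min(int(v * 3), 2)           # value in [0, 1] -> 0..2
    return 9 * H + 3 * S + V

def block_weighted_histogram(block_hists, weights):
    """H(I) = sum_k w_k * H(I_k) with sum_k w_k = 1, then L1-normalized."""
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0)
    h = np.zeros(63)
    for hist, w in zip(block_hists, weights):
        h += w * np.asarray(hist, dtype=float)
    total = h.sum()
    return h / total if total > 0 else h
```

The normalized 63-dimensional vector returned by `block_weighted_histogram` is what would be stored in the color feature library.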
2) extracting the shape feature of the target image I and storing the shape feature in the shape feature library of the image
① Perform edge enhancement on the original color image of the target image I using the Domain Transform method, with parameters σ_s = 10 and σ_r = 0.15;
② Convert the edge-enhanced target image I into a grayscale image using the standard formula and scale it by bilinear interpolation; perform edge detection on the scaled grayscale image with the Canny edge operator;
③ Compute the gradient magnitude and gradient direction at each contour point of the grayscale image: filter the grayscale image with the standard 3×3 Sobel operator templates to obtain its horizontal gradient G_x(x, y) and vertical gradient G_y(x, y), and from these the gradient magnitude G(x, y) = √(G_x(x, y)² + G_y(x, y)²) and the gradient direction θ(x, y) = arctan(G_y(x, y) / G_x(x, y)); from the edge detection result, obtain the gradient magnitude and direction of the grayscale image at every contour point;
④ Perform multi-scale processing on the grayscale image: partition it into L layers by the pyramid method, layer l being divided into 2^l blocks (l = 0, ..., L). At the contour points of block n of layer l, accumulate the gradient magnitudes falling into a given gradient direction interval as that interval's statistic; traversing all contour points and direction intervals of the grayscale image yields the gradient direction histogram h_n^l of block n of layer l;
⑤ Concatenate the gradient direction histograms of all image blocks of the grayscale image to obtain the complete gradient direction histogram H(I);
⑥ Normalize the complete gradient direction histogram H(I) of the grayscale image to obtain a shape feature vector whose dimension is the number of direction intervals times the total number of blocks (the exact dimension formula is given as an equation image in the source); this vector is stored in the image shape feature library as the shape feature of the target image I;
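A minimal NumPy sketch of steps ③ through ⑥: Sobel gradients, then a pyramid of block-wise gradient direction histograms. Reading "2^l blocks per layer" as a 2^l × 2^l grid is an assumption, as is folding directions into [0, 180); both are stated here, not taken from the source.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_gradients(gray):
    """3x3 Sobel filtering; returns gradient magnitude G = sqrt(Gx^2 + Gy^2)
    and direction folded into [0, 180) degrees."""
    H, W = gray.shape
    pad = np.pad(gray.astype(float), 1, mode="edge")
    gx = np.zeros((H, W)); gy = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * SOBEL_X).sum()
            gy[i, j] = (win * SOBEL_Y).sum()
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    return mag, ang

def pyramid_orientation_histogram(mag, ang, contour_mask, L=2, bins=20):
    """Accumulate gradient magnitude per direction bin, per block of a
    spatial pyramid (layer l split into a 2^l x 2^l grid, an assumed
    reading), then L1-normalize the concatenated vector."""
    H, W = mag.shape
    feats = []
    for l in range(L + 1):
        n = 2 ** l
        for bi in range(n):
            for bj in range(n):
                ys = slice(bi * H // n, (bi + 1) * H // n)
                xs = slice(bj * W // n, (bj + 1) * W // n)
                hist = np.zeros(bins)
                m = mag[ys, xs]; a = ang[ys, xs]; c = contour_mask[ys, xs]
                idx = np.minimum((a / 180.0 * bins).astype(int), bins - 1)
                for b in range(bins):
                    hist[b] = m[(idx == b) & c].sum()
                feats.append(hist)
    v = np.concatenate(feats)
    s = v.sum()
    return v / s if s > 0 else v
```

Under these assumptions the feature dimension is bins × (1 + 4 + ... + 4^L), e.g. 20 × 21 = 420 for L = 2 and 20 direction intervals.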
3) extracting the texture feature of the target image I and storing the texture feature in the texture feature library of the image
① Compute the coarseness, contrast and directionality of each pixel of the target image I separately;
② For one channel of the target image I (each of R, G, B is processed in turn), apply block mean filtering of size 2m (m = 1, 2, 3, 4, 5), obtaining 5 different mean-filtered images; compute for each of them the horizontal and vertical difference images between the filtered values on opposite sides of each pixel (the exact difference formulas are given as equation images in the source);
③ From the channel's difference images, compute the 10 values E_{T,R}(x, y) at each pixel (x, y) and select their maximum as the coarseness value at pixel (x, y), where m = 1, 2, 3, 4, 5;
④ In a 7 × 7 window centred on pixel (x, y) of the channel, compute the mean μ and variance σ² of the pixel values, and in the same window the fourth-order moment μ₄ (rendered "fourth difference" in the translated text); following the Tamura contrast measure, the contrast value of the channel at pixel (x, y) is then σ / (μ₄ / σ⁴)^{1/4};
⑤ Convolve the channel with the gradient operator templates (given as an equation image in the source) to obtain its horizontal gradient ΔH and vertical gradient ΔV, and from these the directionality value of the channel at pixel (x, y), θ(x, y) = arctan(ΔV(x, y) / ΔH(x, y)) (up to the constant offset used in the Tamura directionality measure);
⑥ Accumulate the coarseness, contrast and directionality values of the R, G, B channels at pixel (x, y) to obtain channel-independent coarseness, contrast and directionality values at pixel (x, y);
⑦ Uniformly quantize the coarseness, contrast and directionality of the target image I into g intervals, so that each takes values in [0, g - 1];
⑧ Combine the coarseness, contrast and directionality of pixel (x, y) to determine its corresponding bin in the texture histogram, and accumulate every pixel into its bin to obtain the texture histogram H_T(I_T), whose dimension is g × g × g;
⑨ Normalize the g × g × g texture histogram H_T(I_T) to obtain the texture feature of the target image I and store it in the image texture feature library;
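Two pieces of the texture pipeline sketched in NumPy: a Tamura-style local contrast (step ④, reading "fourth difference" as the fourth-order central moment, an assumption) and the joint g × g × g histogram of step ⑧ over feature maps already scaled to [0, 1].

```python
import numpy as np

def local_contrast(gray, x, y, win=7):
    """Tamura-style contrast at (x, y): sigma / kurtosis^(1/4) over a
    win x win window.  kurtosis = mu4 / sigma^4 (fourth central moment)."""
    r = win // 2
    patch = gray[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1].astype(float)
    mu = patch.mean()
    var = patch.var()
    if var == 0:
        return 0.0
    mu4 = ((patch - mu) ** 4).mean()
    kurt = mu4 / var ** 2
    return np.sqrt(var) / kurt ** 0.25

def texture_histogram(coarse, contrast, direction, g=8):
    """Joint g*g*g histogram over uniformly quantized coarseness, contrast
    and directionality maps (each assumed pre-scaled to [0, 1])."""
    hist = np.zeros((g, g, g))
    q = lambda m: np.minimum((np.asarray(m) * g).astype(int), g - 1)
    for a, b, c in zip(q(coarse).ravel(), q(contrast).ravel(), q(direction).ravel()):
        hist[a, b, c] += 1
    hist = hist.ravel()
    return hist / hist.sum()
```

The normalized g³-dimensional vector is what would enter the texture feature library.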
(2) similarity calculation of target image I and image dataset
1) Input a query image Q and extract its color feature X_C, shape feature X_S and texture feature X_T. Compute the linear kernel distance d_C between X_C and each feature Y_C in the color feature library, the Euclidean distance d_S = √(Σ_{i=1}^{d} (X_{S,i} - Y_{S,i})²) between X_S and each feature Y_S in the shape feature library, and the JSD (Jensen-Shannon divergence) distance d_T between X_T and each feature Y_T in the texture feature library, where d is the dimension of the corresponding feature (the linear kernel and JSD formulas are given as equation images in the source);
2) Randomly draw pairs of images Q_{r1} and Q_{r2} from the image library and compute the color feature distance of each pair, obtaining a sample set; compute the sample mean and sample standard deviation of this set. Repeating the procedure, the average of the sample means gives the mean μ_C of the Gaussian distribution of d_C, and likewise the standard deviation σ_C of d_C; by the same steps obtain the mean μ_S and standard deviation σ_S of d_S, and the mean μ_T and standard deviation σ_T of d_T. Then convert d_C, d_S and d_T to a standard Gaussian distribution (the normalization formulas are given as equation images in the source; a common choice is d′ = (d - μ) / (3σ) + 1/2);
3) Fuse the three distances d_C, d_S and d_T by the weighting method: d_merge = w_C·d_C + w_S·d_S + w_T·d_T, with w_C + w_S + w_T = 1;
4) Sort the computed distances d_merge and take the first P entries, i.e. the P closest images, as the retrieval result.
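The similarity stage can be sketched as follows. The linear kernel distance, the JSD formula and the Gaussian normalization are only equation images in the source, so the variants below (1 minus the inner product for normalized histograms, textbook JSD, and 3-sigma normalization) are assumptions, labelled as such in the comments.

```python
import numpy as np

def linear_kernel_distance(x, y):
    """Color distance: one common 'linear kernel' reading, 1 - <x, y> for
    L1-normalized histograms (assumed; exact formula is an image)."""
    return 1.0 - float(np.dot(x, y))

def euclidean_distance(x, y):
    """Shape distance: Euclidean distance between feature vectors."""
    return float(np.linalg.norm(np.asarray(x, float) - np.asarray(y, float)))

def jsd_distance(x, y, eps=1e-12):
    """Texture distance: Jensen-Shannon divergence between histograms."""
    x = np.asarray(x, float) + eps; y = np.asarray(y, float) + eps
    m = 0.5 * (x + y)
    kl = lambda p, q: float(np.sum(p * np.log(p / q)))
    return 0.5 * kl(x, m) + 0.5 * kl(y, m)

def gaussian_normalize(d, mu, sigma):
    """Map a raw distance to roughly [0, 1] via (d - mu)/(3 sigma) + 1/2,
    a standard 3-sigma Gaussian normalization (assumed variant)."""
    return (d - mu) / (3.0 * sigma) + 0.5

def fuse(dC, dS, dT, wC=1/3, wS=1/3, wT=1/3):
    """d_merge = wC*dC + wS*dS + wT*dT with wC + wS + wT = 1."""
    assert np.isclose(wC + wS + wT, 1.0)
    return wC * dC + wS * dS + wT * dT
```

Ranking then amounts to sorting the fused distances ascending and keeping the first P images.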
Preferably, the scaling of the grayscale image in step 2) of step (1) does not exceed 500.
The color, shape and texture features of an image correspond to content attributes perceived differently by human vision. In this scheme, the three categories of image features are computed independently, and their information is then fused during similarity computation to yield a comprehensive similarity result.
Specifically, the color features are extracted with a color histogram, the shape features with a hierarchical histogram of oriented gradients, and the texture features with the Tamura texture representation; finally, the color, shape and texture features are fused by the weighting method for retrieval.
The invention has the beneficial effects that:
and enabling the user to acquire similar home scheme images according to the retrieval images. Aiming at the defects of the existing single features, the method deals with the combination of feature sets in the household industry, can improve the efficiency of single feature retrieval and solve the problem of insufficient coverage of the single features in the actual household scene.
The method and the device can improve the searching efficiency of the images of the household products, save the time for a user to search the household scheme, and improve the actual searching experience of the user.
Drawings
To illustrate the embodiments of the present invention or the prior-art technical solutions more clearly, the drawings needed in their description are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a non-uniform blocking method for blocking an image according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them; all other embodiments derived from them by those skilled in the art without creative effort fall within the protection scope of the present invention.
Examples
An image retrieval method based on multi-feature fusion comprises the following specific steps:
(1) acquiring a target image I and calculating the image characteristics of the target image I
1) Extracting the color characteristics of the target image I and storing the color characteristics in a color characteristic library of the image
Converting a target image I from an RGB color space to an HSV color space according to a standard conversion formula;
② Apply a first-step quantization to the target image I in HSV color space according to the quantization formulas for H, S and V (given as equation images in the source), quantizing the hue H into 7 intervals and the brightness V and saturation S into 3 intervals each;
③ Through this quantization, map the RGB values of the target image I into the 63 HSV color bins according to the formula L = 9H + 3S + V;
The amount of information provided by color differs across positions in an image: the information of an image is usually concentrated at its center, while the border often serves as background. The target image I is therefore simply partitioned into blocks, each given a weight according to the amount of information it contains, using the non-uniform blocking shown in Fig. 1: region A lies at the image center, contains the main information of the image and is given a larger weight, while regions B, C, D, E, F, G, H, I contain less image information and are given smaller weights;
Writing H(I_k) for the color histogram of block k and w_k for its weight, the block-weighted color histogram of the whole image is H(I) = Σ_{k=1}^{n} w_k·H(I_k), where n is the number of blocks of the target image I and Σ_{k=1}^{n} w_k = 1;
④ Normalize the block-weighted color histogram of the whole image and store it in the image color feature library as the color feature of the target image I;
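A small sketch of the non-uniform block weighting of Fig. 1. The actual weight values are not stated in the text, so CENTER_WEIGHT below is purely a hypothetical choice illustrating "centre block A large, border blocks B..I small".

```python
import numpy as np

# Hypothetical weights for Fig. 1's non-uniform blocking: the centre block A
# carries the image's main information and gets the largest weight; the 8
# border blocks B..I share the remainder equally.  CENTER_WEIGHT is an
# assumption, not a value from the patent.
CENTER_WEIGHT = 0.5

def block_weights(n_border=8, center=CENTER_WEIGHT):
    """Return [w_A, w_B, ..., w_I] with w_A = center and the rest equal,
    normalized so the weights sum to 1 as H(I) = sum_k w_k H(I_k) requires."""
    w = np.full(n_border + 1, (1.0 - center) / n_border)
    w[0] = center  # index 0 = centre block A
    return w
```

These weights would be fed, together with the 9 per-block color histograms, into the block-weighted histogram formula of step ④.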
2) extracting the shape feature of the target image I and storing the shape feature in the shape feature library of the image
① Perform edge enhancement on the original color image of the target image I using the Domain Transform method, with parameters σ_s = 10 and σ_r = 0.15;
② Convert the edge-enhanced target image I into a grayscale image using the standard formula and scale it by bilinear interpolation, with the scaling capped at 500; perform edge detection on the scaled grayscale image with the Canny edge operator, with low threshold 46, high threshold 115 and apertureSize 3;
③ Compute the gradient magnitude and gradient direction at each contour point of the grayscale image: filter the grayscale image with the standard 3×3 Sobel operator templates to obtain its horizontal gradient G_x(x, y) and vertical gradient G_y(x, y), and from these the gradient magnitude G(x, y) = √(G_x(x, y)² + G_y(x, y)²) and the gradient direction θ(x, y) = arctan(G_y(x, y) / G_x(x, y)); from the edge detection result, obtain the gradient magnitude and direction of the grayscale image at every contour point. The gradient direction ranges over 0-180° and is uniformly quantized into K_S intervals, where K_S = 20;
④ Perform multi-scale processing on the grayscale image: partition it into L layers by the pyramid method, layer l being divided into 2^l blocks (l = 0, ..., L). At the contour points of block n of layer l, accumulate the gradient magnitudes falling into a given gradient direction interval as that interval's statistic; traversing all contour points and direction intervals of the grayscale image yields the gradient direction histogram h_n^l of block n of layer l;
⑤ Concatenate the gradient direction histograms of all image blocks of the grayscale image to obtain the complete gradient direction histogram H(I);
⑥ Normalize the complete gradient direction histogram H(I) of the grayscale image to obtain a shape feature vector whose dimension is the number of direction intervals times the total number of blocks (the exact dimension formula is given as an equation image in the source); this vector is stored in the image shape feature library as the shape feature of the target image I;
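The preprocessing of step ② can be sketched in NumPy as follows. The "standard formula" for grayscale conversion is assumed here to be the usual BT.601 luma weights; the bilinear resampling is written out explicitly since the translated text calls it the "bilinear difference method".

```python
import numpy as np

def to_gray(rgb):
    """Grayscale conversion (BT.601 luma weights assumed as the
    'standard formula')."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def bilinear_resize(img, out_h, out_w):
    """Bilinear interpolation of a 2-D grayscale image to out_h x out_w.
    Per the embodiment, scaling is capped at 500 before the Canny step
    (low threshold 46, high threshold 115, aperture size 3)."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    a = img[np.ix_(y0, x0)]; b = img[np.ix_(y0, x1)]
    c = img[np.ix_(y1, x0)]; d = img[np.ix_(y1, x1)]
    top = a * (1 - wx) + b * wx
    bot = c * (1 - wx) + d * wx
    return top * (1 - wy) + bot * wy
```

The scaled image would then be passed to a Canny detector with the stated parameters (for example OpenCV's `cv2.Canny(img, 46, 115, apertureSize=3)`).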
3) extracting the texture feature of the target image I and storing the texture feature in the texture feature library of the image
① Compute the coarseness, contrast and directionality of each pixel of the target image I separately;
② For one channel of the target image I (each of R, G, B is processed in turn), apply block mean filtering of size 2m (m = 1, 2, 3, 4, 5), obtaining 5 different mean-filtered images; compute for each of them the horizontal and vertical difference images between the filtered values on opposite sides of each pixel (the exact difference formulas are given as equation images in the source);
③ From the channel's difference images, compute the 10 values E_{T,R}(x, y) at each pixel (x, y) and select their maximum as the coarseness value at pixel (x, y), where m = 1, 2, 3, 4, 5;
④ In a 7 × 7 window centred on pixel (x, y) of the channel, compute the mean μ and variance σ² of the pixel values, and in the same window the fourth-order moment μ₄ (rendered "fourth difference" in the translated text); following the Tamura contrast measure, the contrast value of the channel at pixel (x, y) is then σ / (μ₄ / σ⁴)^{1/4};
⑤ Convolve the channel with the gradient operator templates (given as an equation image in the source) to obtain its horizontal gradient ΔH and vertical gradient ΔV, and from these the directionality value of the channel at pixel (x, y), θ(x, y) = arctan(ΔV(x, y) / ΔH(x, y)) (up to the constant offset used in the Tamura directionality measure);
⑥ Accumulate the coarseness, contrast and directionality values of the R, G, B channels at pixel (x, y) to obtain channel-independent coarseness, contrast and directionality values at pixel (x, y);
⑦ Uniformly quantize the coarseness, contrast and directionality of the target image I into g intervals, so that each takes values in [0, g - 1];
⑧ Combine the coarseness, contrast and directionality of pixel (x, y) to determine its corresponding bin in the texture histogram, and accumulate every pixel into its bin to obtain the texture histogram H_T(I_T), whose dimension is g × g × g;
⑨ Normalize the g × g × g texture histogram H_T(I_T) to obtain the texture feature of the target image I and store it in the image texture feature library;
(2) similarity calculation of target image I and image dataset
1) Input a query image Q and extract its color feature X_C, shape feature X_S and texture feature X_T. Compute the linear kernel distance d_C between X_C and each feature Y_C in the color feature library, the Euclidean distance d_S = √(Σ_{i=1}^{d} (X_{S,i} - Y_{S,i})²) between X_S and each feature Y_S in the shape feature library, and the JSD (Jensen-Shannon divergence) distance d_T between X_T and each feature Y_T in the texture feature library, where d is the dimension of the corresponding feature (the linear kernel and JSD formulas are given as equation images in the source);
2) Randomly draw pairs of images Q_{r1} and Q_{r2} from the image library and compute the color feature distance of each pair, obtaining a sample set; compute the sample mean and sample standard deviation of this set. Repeating the procedure, the average of the sample means gives the mean μ_C of the Gaussian distribution of d_C, and likewise the standard deviation σ_C of d_C; by the same steps obtain the mean μ_S and standard deviation σ_S of d_S, and the mean μ_T and standard deviation σ_T of d_T. Then convert d_C, d_S and d_T to a standard Gaussian distribution (the normalization formulas are given as equation images in the source; a common choice is d′ = (d - μ) / (3σ) + 1/2);
3) Fuse the three distances d_C, d_S and d_T by the weighting method: d_merge = w_C·d_C + w_S·d_S + w_T·d_T with w_C + w_S + w_T = 1; in this embodiment w_C = w_S = w_T = 1/3;
4) Sort the computed distances d_merge and take the first P entries, i.e. the P closest images, as the retrieval result.
The above description covers only preferred embodiments of the present invention and does not limit it; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall be included in its protection scope.

Claims (2)

1. An image retrieval method based on multi-feature fusion is characterized by comprising the following steps:
(1) acquiring a target image I and calculating the image characteristics of the target image I
1) Extracting the color characteristics of the target image I and storing the color characteristics in a color characteristic library of the image
Converting a target image I from an RGB color space to an HSV color space according to a standard conversion formula;
② Apply a first-step quantization to the target image I in HSV color space according to the quantization formulas for H, S and V (given as equation images in the source), quantizing the hue H into 7 intervals and the brightness V and saturation S into 3 intervals each;
③ Through this quantization, map the RGB values of the target image I into the 63 HSV color bins according to the formula L = 9H + 3S + V;
Partition the target image I into blocks and assign each block a weight according to the amount of information it contains. Writing H(I_k) for the color histogram of block k and w_k for its weight, the block-weighted color histogram of the whole image is H(I) = Σ_{k=1}^{n} w_k·H(I_k), where n is the number of blocks of the target image I and Σ_{k=1}^{n} w_k = 1;
④ Normalize the block-weighted color histogram of the whole image and store it in the image color feature library as the color feature of the target image I;
2) extracting the shape feature of the target image I and storing the shape feature in the shape feature library of the image
① Perform edge enhancement on the original color image of the target image I using the Domain Transform method, with parameters σ_s = 10 and σ_r = 0.15;
② Convert the edge-enhanced target image I into a grayscale image using the standard formula and scale it by bilinear interpolation; perform edge detection on the scaled grayscale image with the Canny edge operator;
③ Compute the gradient magnitude and gradient direction at each contour point of the grayscale image: filter the grayscale image with the standard 3×3 Sobel operator templates to obtain its horizontal gradient G_x(x, y) and vertical gradient G_y(x, y), and from these the gradient magnitude G(x, y) = √(G_x(x, y)² + G_y(x, y)²) and the gradient direction θ(x, y) = arctan(G_y(x, y) / G_x(x, y)); from the edge detection result, obtain the gradient magnitude and direction of the grayscale image at every contour point;
④ Perform multi-scale processing on the grayscale image: partition it into L layers by the pyramid method, layer l being divided into 2^l blocks (l = 0, ..., L). At the contour points of block n of layer l, accumulate the gradient magnitudes falling into a given gradient direction interval as that interval's statistic; traversing all contour points and direction intervals of the grayscale image yields the gradient direction histogram h_n^l of block n of layer l;
⑤ Concatenate the gradient direction histograms of all image blocks of the grayscale image to obtain the complete gradient direction histogram H(I);
⑥ Normalize the complete gradient direction histogram H(I) of the grayscale image to obtain a shape feature vector whose dimension is the number of direction intervals times the total number of blocks (the exact dimension formula is given as an equation image in the source); this vector is stored in the image shape feature library as the shape feature of the target image I;
3) extracting the texture feature of the target image I and storing the texture feature in the texture feature library of the image
① Compute the coarseness, contrast and directionality of each pixel of the target image I separately;
② For one channel of the target image I (each of R, G, B is processed in turn), apply block mean filtering of size 2m (m = 1, 2, 3, 4, 5), obtaining 5 different mean-filtered images; compute for each of them the horizontal and vertical difference images between the filtered values on opposite sides of each pixel (the exact difference formulas are given as equation images in the source);
③ Using the difference images of channel
Figure FDA0003023770340000036
compute the 10 E values at each pixel (x, y) and select the maximum among them as the roughness value at pixel (x, y), i.e.
Figure FDA0003023770340000037
wherein m = 1, 2, 3, 4, 5;
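Steps ② and ③ mirror the coarseness measure of Tamura texture features. A numpy-only sketch in which the separable box filter, the shift distance k/2, and taking the maximum over the 10 difference maps are assumptions about the claim's image formulas:

```python
import numpy as np

def box_mean(img, k):
    """k x k block mean via two separable 1-D 'same' convolutions."""
    ker = np.ones(k) / k
    tmp = np.apply_along_axis(np.convolve, 1, img.astype(float), ker, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, ker, mode="same")

def coarseness(channel):
    """Per-pixel roughness: max over 10 mean-filter difference maps (sketch)."""
    best = np.zeros(channel.shape)
    for m in range(1, 6):                     # window sizes 2^m, m = 1..5
        k = 2 ** m
        a = box_mean(channel, k)
        # horizontal / vertical differences of the mean-filtered image
        eh = np.abs(np.roll(a, -k // 2, axis=1) - np.roll(a, k // 2, axis=1))
        ev = np.abs(np.roll(a, -k // 2, axis=0) - np.roll(a, k // 2, axis=0))
        best = np.maximum.reduce([best, eh, ev])
    return best
```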
④ In channel
Figure FDA0003023770340000038
compute the mean
Figure FDA0003023770340000039
and variance
Figure FDA00030237703400000310
of the pixels in a 7 × 7 window centred at pixel (x, y); in channel
Figure FDA00030237703400000311
compute the fourth-order moment
Figure FDA00030237703400000312
of pixel (x, y) over the same 7 × 7 window; the contrast value of channel
Figure FDA00030237703400000313
at pixel (x, y) is then
Figure FDA00030237703400000314
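The mean/variance/fourth-moment combination of step ④ matches the usual Tamura contrast definition, σ / (μ₄/σ⁴)^(1/4). A per-window sketch; the exponent 1/4 and the window handling are assumptions, since the claim's formula is only available as an image:

```python
import numpy as np

def window_contrast(window):
    """Tamura-style contrast of one local window: sigma / kurtosis**(1/4)."""
    w = np.asarray(window, dtype=float).ravel()
    mu = w.mean()
    var = ((w - mu) ** 2).mean()              # variance
    if var == 0.0:
        return 0.0                            # a flat window has no contrast
    mu4 = ((w - mu) ** 4).mean()              # fourth central moment
    alpha4 = mu4 / var ** 2                   # kurtosis
    return float(np.sqrt(var) / alpha4 ** 0.25)
```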
⑤ Convolve channel
Figure FDA00030237703400000315
with the template according to the formula
Figure FDA00030237703400000316
to obtain, for channel
Figure FDA00030237703400000317
the horizontal gradient
Figure FDA00030237703400000318
and vertical gradient
Figure FDA00030237703400000319
and further compute, for channel
Figure FDA00030237703400000320
the directivity value at pixel (x, y)
Figure FDA00030237703400000321
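Step ⑤ derives the directivity from horizontal and vertical gradients. A sketch using hypothetical 3 × 3 Prewitt-style templates; the claim's actual convolution templates and directivity formula are only available as images and are not reproduced here:

```python
import numpy as np

# hypothetical Prewitt-style templates for the horizontal / vertical gradients
TEMPLATE_H = np.array([[-1.0, 0.0, 1.0]] * 3)
TEMPLATE_V = TEMPLATE_H.T

def filter2_same(img, ker):
    """Naive 3 x 3 'same' cross-correlation with zero padding."""
    pad = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape)
    for dy in range(3):
        for dx in range(3):
            out += ker[dy, dx] * pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def directionality(channel):
    """Per-pixel directivity: orientation of the local gradient."""
    gh = filter2_same(channel, TEMPLATE_H)
    gv = filter2_same(channel, TEMPLATE_V)
    return np.arctan2(gv, gh)
```

On a purely horizontal intensity ramp the vertical gradient vanishes and the directivity is 0 at interior pixels.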
Sixthly, accumulating the roughness values, the contrast values and the directivity values of R, G, B channels at the pixel (x, y) to obtain channel-independent roughness values, contrast values and directivity values at the pixel (x, y);
⑦ Uniformly quantize the roughness, contrast and directivity of the target image I into g intervals, so that their value ranges become [0, g-1];
⑧ Determine the corresponding interval of pixel (x, y) in the texture histogram by jointly using its roughness, contrast and directivity
Figure FDA0003023770340000041
and accumulate each pixel into its corresponding interval to obtain the accumulated texture histogram H_T(I_T), where H_T(I_T) has dimension g × g × g;
⑨ Normalize the texture histogram H_T(I_T) of dimension g × g × g to obtain the texture feature of the target image I, and store it in the texture feature library of the image;
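Steps ⑦–⑨ can be sketched as a joint g × g × g histogram over the three quantized per-pixel texture maps; the per-feature min–max quantization used here is an assumption:

```python
import numpy as np

def texture_histogram(rough, contr, direc, g=4):
    """Joint g*g*g texture histogram over three per-pixel maps (sketch)."""
    def quantize(a):
        a = np.asarray(a, dtype=float)
        span = a.max() - a.min()
        if span == 0.0:
            return np.zeros(a.shape, dtype=int)
        return np.minimum((g * (a - a.min()) / span).astype(int), g - 1)
    # each pixel falls into one cell of the g x g x g histogram
    idx = quantize(rough) * g * g + quantize(contr) * g + quantize(direc)
    hist = np.bincount(idx.ravel(), minlength=g ** 3).astype(float)
    return hist / hist.sum()                  # normalised texture feature
```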
(2) similarity calculation of target image I and image dataset
1) Input a retrieval image Q and extract its color feature X_C, shape feature X_S and texture feature X_T; calculate the linear-kernel distance between X_C and each feature Y_C in the color feature library
Figure FDA0003023770340000042
the Euclidean distance between X_S and each feature Y_S in the shape feature library
Figure FDA0003023770340000043
and the JSD distance between X_T and each feature Y_T in the texture feature library
Figure FDA0003023770340000044
wherein d is the dimension of the corresponding feature;
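The three per-feature distances can be sketched as follows. The linear-kernel distance form 1 − ⟨x, y⟩ is an assumption (the claim's formula is only available as an image), while the Euclidean and Jensen–Shannon divergence forms are standard:

```python
import numpy as np

def euclidean(x, y):
    """Euclidean distance between two feature vectors."""
    return float(np.sqrt(((np.asarray(x) - np.asarray(y)) ** 2).sum()))

def linear_kernel_distance(x, y):
    """Assumed linear-kernel distance: 1 - <x, y> for normalised features."""
    return 1.0 - float(np.dot(x, y))

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two normalised histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float((a * np.log(a / b)).sum())
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```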
2) Randomly extract image pairs Q_r1 and Q_r2 from the image library and calculate the color-feature distance of each pair
Figure FDA0003023770340000045
obtaining a sample set
Figure FDA0003023770340000046
and further a sampling set
Figure FDA0003023770340000047
Repeat the process to obtain the average of the sample means, i.e. the mean μ_C and standard deviation σ_C of the Gaussian distribution of d_C; by the same operation steps obtain the mean μ_S and standard deviation σ_S of the Gaussian distribution of d_S, and the mean μ_T and standard deviation σ_T of the Gaussian distribution of d_T; according to
Figure FDA0003023770340000048
Figure FDA0003023770340000049
convert d_C, d_S and d_T to standard Gaussian distributions respectively;
3) Fuse the three distances d_C, d_S and d_T by the weighting method to obtain d_merge = w_C·d_C + w_S·d_S + w_T·d_T, where w_C + w_S + w_T = 1;
4) Sort the calculated distances d_merge and take the first P entries, i.e. the first P images, as the retrieval result.
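Steps 3) and 4) can be sketched as a weighted fusion followed by a top-P sort; the weight values used here are hypothetical:

```python
import numpy as np

def retrieve(d_c, d_s, d_t, weights=(0.4, 0.3, 0.3), top_p=5):
    """Fuse three per-image distance arrays and return the P closest indices."""
    w_c, w_s, w_t = weights                       # must satisfy w_c+w_s+w_t = 1
    d_merge = (w_c * np.asarray(d_c)
               + w_s * np.asarray(d_s)
               + w_t * np.asarray(d_t))
    return np.argsort(d_merge)[:top_p].tolist()   # ascending: smaller is closer
```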
2. The image retrieval method based on multi-feature fusion according to claim 1, wherein the scaling ratio of the gray-level image in step 2) of step (1) is not more than 500.
CN201810418660.XA 2018-05-04 2018-05-04 Image retrieval method based on multi-feature fusion Active CN108829711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810418660.XA CN108829711B (en) 2018-05-04 2018-05-04 Image retrieval method based on multi-feature fusion


Publications (2)

Publication Number Publication Date
CN108829711A CN108829711A (en) 2018-11-16
CN108829711B true CN108829711B (en) 2021-06-01

Family

ID=64148349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810418660.XA Active CN108829711B (en) 2018-05-04 2018-05-04 Image retrieval method based on multi-feature fusion

Country Status (1)

Country Link
CN (1) CN108829711B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109782214B (en) * 2019-01-26 2021-02-26 哈尔滨汇鑫仪器仪表有限责任公司 Electric energy meter state remote sending mechanism
CN110135440A (en) * 2019-05-15 2019-08-16 北京艺泉科技有限公司 A kind of image characteristic extracting method suitable for magnanimity Cultural Relics Image Retrieval
CN110826446B (en) * 2019-10-28 2020-08-21 衢州学院 Method and device for segmenting field of view region of texture-free scene video
CN113095341A (en) * 2019-12-23 2021-07-09 顺丰科技有限公司 Image matching method, device and storage medium
CN114170418B (en) * 2021-11-30 2024-05-24 吉林大学 Multi-feature fusion image retrieval method for automobile harness connector by means of graph searching
CN116403419B (en) * 2023-06-07 2023-08-25 贵州鹰驾交通科技有限公司 Traffic light control method based on vehicle-road cooperation

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101551823A (en) * 2009-04-20 2009-10-07 浙江师范大学 Comprehensive multi-feature image retrieval method
CN101763429A (en) * 2010-01-14 2010-06-30 中山大学 Image retrieval method based on color and shape features
CN101770644A (en) * 2010-01-19 2010-07-07 浙江林学院 Forest-fire remote video monitoring firework identification method
CN102663391A (en) * 2012-02-27 2012-09-12 安科智慧城市技术(中国)有限公司 Image multifeature extraction and fusion method and system
CN105404657A (en) * 2015-11-04 2016-03-16 北京工业大学 CEDD feature and PHOG feature based image retrieval method
CN106202338A (en) * 2016-06-30 2016-12-07 合肥工业大学 Image search method based on the many relations of multiple features
CN107958073A (en) * 2017-12-07 2018-04-24 电子科技大学 A kind of Color Image Retrieval based on particle swarm optimization algorithm optimization

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US20110255589A1 (en) * 2009-08-03 2011-10-20 Droplet Technology, Inc. Methods of compressing data and methods of assessing the same
US8744180B2 (en) * 2011-01-24 2014-06-03 Alon Atsmon System and process for automatically finding objects of a specific color
ES2530687B1 (en) * 2013-09-04 2016-08-19 Shot & Shop. S.L. Method implemented by computer for image recovery by content and computer program of the same
CN104298775A (en) * 2014-10-31 2015-01-21 北京工商大学 Multi-feature content-based image retrieval method and system


Non-Patent Citations (2)

Title
Mean-shift algorithm fusing multi feature; Yue Gao et al.; 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference; 2017-10-02; pp. 1245-1249 *
Research on Image Retrieval Technology Based on a Multi-feature DS Fusion Strategy; Shao Tianri; China Master's Theses Full-text Database, Information Science and Technology; 2014-07-15 (No. 7); I138-815 *


Similar Documents

Publication Publication Date Title
CN108829711B (en) Image retrieval method based on multi-feature fusion
CN115861135B (en) Image enhancement and recognition method applied to panoramic detection of box body
CN109636784B (en) Image saliency target detection method based on maximum neighborhood and super-pixel segmentation
CN106846339A (en) Image detection method and device
CN111915572B (en) Adaptive gear pitting quantitative detection system and method based on deep learning
CN107038416B (en) Pedestrian detection method based on binary image improved HOG characteristics
CN110766689A (en) Method and device for detecting article image defects based on convolutional neural network
CN106548160A (en) A kind of face smile detection method
CN108710916B (en) Picture classification method and device
CN106557740B (en) The recognition methods of oil depot target in a kind of remote sensing images
CN108073940B (en) Method for detecting 3D target example object in unstructured environment
CN115797813B (en) Water environment pollution detection method based on aerial image
CN109726649A (en) Remote sensing image cloud detection method of optic, system and electronic equipment
CN110969164A (en) Low-illumination imaging license plate recognition method and device based on deep learning end-to-end
CN115033721A (en) Image retrieval method based on big data
CN108985346B (en) Existing exploration image retrieval method fusing low-level image features and CNN features
CN113361407B (en) PCANet-based spatial spectrum feature combined hyperspectral sea ice image classification method
CN117456376A (en) Remote sensing satellite image target detection method based on deep learning
CN110276260B (en) Commodity detection method based on depth camera
US10115195B2 (en) Method and apparatus for processing block to be processed of urine sediment image
CN110766655A (en) Hyperspectral image significance analysis method based on abundance
CN113034543B (en) 3D-ReID multi-target tracking method based on local attention mechanism
CN110162654A (en) It is a kind of that image retrieval algorithm is surveyed based on fusion feature and showing for search result optimization
Abadi et al. Vehicle model recognition based on using image processing and wavelet analysis
Kavitha et al. Exemplary Content Based Image Retrieval using visual contents & genetic approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant