CN112800267A - Fine-grained shoe print image retrieval method - Google Patents


Info

Publication number
CN112800267A
CN112800267A (application CN202110152570.2A; granted as CN112800267B)
Authority
CN
China
Prior art keywords
shoe, width, image, semantic, calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110152570.2A
Other languages
Chinese (zh)
Other versions
CN112800267B (en)
Inventor
王新年
段硕古
王文卿
白桂欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202110152570.2A priority Critical patent/CN112800267B/en
Publication of CN112800267A publication Critical patent/CN112800267A/en
Application granted granted Critical
Publication of CN112800267B publication Critical patent/CN112800267B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G06F16/538 Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a fine-grained shoe print image retrieval method and system comprising the following steps: extracting shoe print attribute information; calculating shoe print attribute similarity; calculating shoe print content similarity; calculating a ranking score that combines the shoe print content information and attribute information; and outputting the images in the data set in descending order of ranking score to obtain the query result. The similarity between shoe print images is calculated by combining shoe print attribute information, shoe print content information and the spatial layout relation of shoe print semantic blocks, which effectively increases the distinguishability between shoe print images with small differences and improves fine-grained shoe print image retrieval precision.

Description

Fine-grained shoe print image retrieval method
Technical Field
The invention relates to a fine-grained image retrieval method, in particular to a method for retrieving suspect shoe prints that have the same pattern but different sizes.
Background
At present, shoe print retrieval methods mainly comprise retrieval methods based on global appearance features, local region features and key point features. Shoe print retrieval algorithms based on global appearance features take the whole shoe print image as input and extract features of the shoe print image such as Fourier spectrum features, Gabor features and depth features. Pradeep M. Patil treats the sole pattern as a texture image: first, the deflection angle of the sole pattern is calculated with the Radon transform and corrected; second, 8 Gabor filters in different directions are constructed and the pattern is filtered; then the 4 Gabor feature maps with the largest energy values are selected and each feature map is divided into non-overlapping 8×8 blocks; the variance of each sub-block is calculated to represent it, yielding the feature of the sole pattern. Shoe print retrieval algorithms based on local region features mainly extract a region of interest in the shoe print and extract features of that region. In 2015, Wang et al. proposed a shoe print retrieval algorithm based on wavelet-Fourier-Mellin features. The algorithm first divides the shoe print image into a half-sole part and a heel part, assigns the two parts different weights, and extracts features from each part separately.
The specific method comprises the following steps: first, wavelet transforms are applied to the two parts of the shoe print image; second, a Fourier transform is applied to the wavelet-transformed image; then a polar coordinate transform is applied to the amplitude obtained from the first Fourier transform, and a Fourier transform is applied to it again, yielding a spectral feature that is invariant to rotation, translation and scale. Retrieval methods based on key point features, which essentially represent specific physical regions or spatial-relationship nodes on an image, may also be called interest point feature retrieval. Almaadeed et al. used multi-scale Harris and Hessian key point detection to obtain scale invariance, applied the SIFT transform to the detected features to obtain rotation invariance, and finally fused the features extracted by the two detection methods at the score level.
At present, shoe print retrieval based on global features, local features and key point features has achieved certain results, but existing methods mainly consider the similarity between patterns; that is, only shoe print images with the same type of pattern as the query image are ranked at the front of the retrieval results, while the ordering by size among identical patterns is not considered. Size is an important attribute of a shoe print: if the returned results with similar patterns are arranged in descending order of size similarity to the query image, shoe print images sharing common attributes can be found and retrieved more accurately, improving the case-handling efficiency of public security personnel.
Disclosure of the Invention
In view of the technical problem above, the invention provides a fine-grained shoe print image retrieval method, characterized by comprising the following steps:
s1: extracting shoe print attribute information;
s2: calculating the similarity of the shoe print attributes;
s3: calculating the similarity of the shoe print contents;
s4: calculating the ranking score of combining the shoe print content information and the attribute information;
s5: and outputting the images in the data set in a descending order according to the sorting score to obtain a retrieval result.
Further, step S1 further includes the following steps:
s11: extracting attribute information of the single shoe print;
s111: judging the integrity of the shoe print image: obtain the image region contained in the minimum circumscribed rectangle of the single shoe print image A and record it as P_A; binarize P_A, marking points on the pattern as 1 and background points as 0; calculate the occupancy ratio of shoe print image A, denoted O_A, where O_A is the ratio of the number of points in P_A marked 1 to the number of pixels of the whole image; if O_A is less than a threshold z, the shoe print image is considered to be missing the sole or heel;
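Step S111 can be sketched as follows; the text does not give the occupancy threshold z or the binarization threshold, so the values used here are assumptions for illustration only.

```python
import numpy as np

def occupancy_ratio(patch, thresh=128):
    """Binarize a grayscale shoe-print patch (dark pattern on a light
    background) and return the fraction of pattern pixels (marked 1)."""
    binary = (patch < thresh).astype(np.uint8)  # pattern points -> 1
    return binary.sum() / binary.size

def is_complete(patch, z=0.15, thresh=128):
    """Step S111: the print is considered incomplete (sole or heel
    missing) when the occupancy ratio falls below the threshold z.
    z = 0.15 is an assumed value, not from the patent."""
    return occupancy_ratio(patch, thresh) >= z

# toy patch: dense pattern in the top half, empty bottom half
patch = np.full((40, 20), 255, dtype=np.uint8)
patch[:20, :] = 0  # occupancy_ratio(patch) is then 0.5
```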
s112: calculating the shoe length, the shoe width and the heel width of the shoe print;
a: if the shoe print image is complete, take the height of P_A as the shoe length H_A and its width as the shoe width W_A; divide the shoe print image contained in P_A at a ratio of 3:2 into a sole part and a heel part, find the minimum circumscribed rectangle of the heel part, and define the width of that rectangle as the heel width w_A; judge the plausibility of the shoe length, shoe width and heel width of the shoe print image and update the attribute information;
b: if the shoe print image is missing the sole or heel, define the shoe length H_A = 0, shoe width W_A = 0 and heel width w_A = 0;
S12: following the single shoe print attribute extraction method, extract attribute information from the shoe prints in the data set G = {G_1, ..., G_k, ..., G_N}, k = 1, ..., N, to form a shoe print attribute information matrix M whose k-th row holds the shoe length, shoe width and heel width of G_k:
M = [H_1 W_1 w_1; ...; H_k W_k w_k; ...; H_N W_N w_N]
further, step S112 further includes the steps of:
s1121: let r_1 = W_A / w_A; if r_1 < 1.28, the two sides of the sole are considered incomplete and the shoe width is updated to W_A = 1.28 × w_A; if r_1 > 1.33, the two sides of the heel are considered incomplete and the heel width is updated to w_A = W_A / 1.33;
S1122: let r_2 = H_A / W_A; if r_2 < 2.9, the toe or heel end is considered incomplete and the shoe length is updated to H_A = 3.0 × W_A; if r_2 > 3.1, the shoe width W_A updated in step S1121 is considered still smaller than the actual shoe width, so the shoe width is updated again to W_A = H_A / 3.0; then determine whether r_1 > 1.33, and if so, update the heel width to w_A = W_A / 1.33.
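Steps S1121 and S1122 can be sketched as a small repair routine. Note one assumption: since r_2 = H_A / W_A, the update for a too-short print is read here as H_A = 3.0 × W_A (shoe width); the source rendering loses the subscript case.

```python
def correct_attributes(H, W, w):
    """Sketch of steps S1121-S1122: repair implausible shoe length H,
    shoe width W and heel width w using the ratio bounds from the text
    (W/w expected in [1.28, 1.33], H/W expected in [2.9, 3.1])."""
    r1 = W / w
    if r1 < 1.28:            # sole sides incomplete -> widen shoe
        W = 1.28 * w
    elif r1 > 1.33:          # heel sides incomplete -> widen heel
        w = W / 1.33
    r2 = H / W
    if r2 < 2.9:             # toe or heel end incomplete -> lengthen
        H = 3.0 * W
    elif r2 > 3.1:           # width still too small -> re-derive it
        W = H / 3.0
        if W / w > 1.33:     # re-check the heel ratio afterwards
            w = W / 1.33
    return H, W, w

# e.g. a print measured as H=300, W=80, w=70 (units arbitrary)
H2, W2, w2 = correct_attributes(300, 80, 70)
```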
Further, step S2 further includes the following steps:
s21: extract attribute information of the query image Q using the single shoe print attribute extraction method, denoting the shoe length as H_Q, the shoe width as W_Q and the heel width as w_Q;
S22: calculate the attribute similarity score S_p(k) between the query image Q and the library image G_k;
a: if the shoe length, shoe width and heel width of either the query image Q or the library image G_k are all 0, the attribute similarity score is S_p(k) = 0;
b: if the shoe length, shoe width and heel width of both the query image Q and the library image G_k are all non-zero, perform the following calculation:
(1) calculate the differences dis_i(k), i = 1, 2, 3, between the attributes of the query image Q and the library image G_k:
dis_1(k) = |H_Q - H_k|,  dis_2(k) = |W_Q - W_k|,  dis_3(k) = |w_Q - w_k|
(2) calculate the similarity scores s_i(k), i = 1, 2, 3, between the attributes of the query image Q and the library image G_k:
Figure BDA0002932153220000041
Wherein, Δ d1,Δd2,Δd3Respectively representing the difference values of the shoe length, the shoe width and the heel width when the difference values are different by one size;
(3) calculate the overall attribute similarity score S_P(k) of the query image Q and the library image G_k;
If it is
Figure BDA0002932153220000042
then the overall attribute similarity score of the query image Q and the library graph G_k is
Figure BDA0002932153220000043
Wherein
Figure BDA0002932153220000044
which denote the weights of the three attribute similarity scores; α_i is an indicator: when dis_i(k) > 0, α_i = 1, otherwise α_i = 0; s_i(k) are the similarity scores of the three attributes;
if it is
Figure BDA0002932153220000045
then the overall attribute similarity score of the query image Q and the library graph G_k is
Figure BDA0002932153220000046
Wherein
Figure BDA0002932153220000047
which denote the weights of the three attribute similarity scores; s_i(k) are the similarity scores of the three attributes.
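Step S2 can be sketched end to end. The per-attribute score formula appears only as an image in the source, so a simple linear falloff over three size steps is assumed here, and the Δd and weight values are illustrative, not from the patent.

```python
def attribute_similarity(q, g, deltas, weights):
    """Hedged sketch of step S2. q and g are (shoe length, shoe width,
    heel width) tuples; deltas are the per-size-step changes Δd_i;
    weights are the per-attribute weights (assumed to sum to 1)."""
    if all(v == 0 for v in q) or all(v == 0 for v in g):
        return 0.0  # case a: a print with missing sole/heel scores 0
    score = 0.0
    for qi, gi, dd, om in zip(q, g, deltas, weights):
        dis = abs(qi - gi)                      # dis_i(k)
        s = max(0.0, 1.0 - dis / (3.0 * dd))    # assumed falloff shape
        score += om * s
    return score

# illustrative values: Δd per size step and weights are assumptions
s = attribute_similarity((300, 100, 75), (305, 101, 76),
                         deltas=(5.0, 1.5, 1.0), weights=(0.5, 0.3, 0.2))
```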
Further, step S3 further includes the following steps:
S31: constructing a semantic block sample matrix;
S32: constructing the spatial layout relation of the semantic blocks in the query graph;
S33: constructing the spatial layout relation of the semantic blocks in the image set G = {G_1, ..., G_k, ..., G_N}, k = 1, ..., N;
S34: calculating the spatial layout similarity score S_D(k) and the pattern similarity score S_F(k) between the query graph Q and the library graph G_k;
S35: calculating the content similarity score S_c(k) between the query graph Q and the library graph G_k, S_c(k) = η×S_D(k) + (1-η)×S_F(k), where η ≥ 0.5.
Further, step S31 further includes the following steps:
s311: define the minimum circumscribed rectangle of the query image Q as semantic block P_1, and obtain its height H_1, its width W_1, the y coordinate of its upper-left corner point in Q, and the y coordinate of its lower-right corner point in Q;
S312: trisect semantic block P_1 in the vertical direction and take the top and bottom parts as semantic blocks P_2 and P_5; obtain the height H_i and width W_i of each semantic block, together with the y coordinates of its upper-left and lower-right corner points in Q, where i = 2, 5;
s313: halve semantic blocks P_2 and P_5 in the horizontal direction to obtain semantic blocks P_3, P_4 and P_6, P_7; obtain the height H_i and width W_i of each semantic block, together with the y coordinates of its upper-left and lower-right corner points in Q, where i = 3, 4, 6, 7;
s314: selected in steps S311-S313Semantic Block PiHorizontally turning to obtain a turned semantic block Ti1, 7, constructing a semantic block sample matrix
Figure BDA0002932153220000055
S315: binarize the image content in each semantic block, marking points on the pattern as 1 and background points as 0; calculate the occupancy ratio of each semantic block, denoted D_i, i = 1, ..., 7, where D_i is the ratio of the number of pixels marked 1 in the i-th semantic block to its area.
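The block construction in steps S311-S314 can be sketched as follows, assuming the query image has already been cropped to its minimum circumscribed rectangle (so P_1 is the whole array):

```python
import numpy as np

def semantic_blocks(img):
    """Sketch of S311-S314: P1 = the whole cropped print; trisect
    vertically and keep the top/bottom thirds (P2, P5); halve each of
    those horizontally (P3, P4 and P6, P7); T_i are horizontal flips."""
    H, W = img.shape
    t = H // 3
    P = {1: img}
    P[2], P[5] = img[:t, :], img[H - t:, :]
    for src, (a, b) in ((2, (3, 4)), (5, (6, 7))):
        half = W // 2
        P[a] = P[src][:, :half]   # left half
        P[b] = P[src][:, half:]   # right half
    T = {i: np.fliplr(p) for i, p in P.items()}  # flipped row of E
    return P, T

img = np.arange(36, dtype=np.uint8).reshape(6, 6)
P, T = semantic_blocks(img)
```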
Further, step S32 further has the steps of:
s321: calculate the vertical distance V_Q(P_i, P_{i+3}), i = 2, 3, 4, between semantic blocks in Q, where V_Q(P_i, P_{i+3}) is the difference between the upper-left-corner y coordinate of semantic block P_{i+3} and that of semantic block P_i;
S322: update the y coordinate values of the upper-left and lower-right corner points of the semantic blocks in Q by a vertical offset λ, where λ ∈ [0, H_Q);
s323: calculate the proportions R_{i,j}, i = 1, ..., 7, j = 1, 2, of the semantic blocks in the query image Q, where R_{i,1} is the ratio of the upper-left-corner y coordinate of the i-th semantic block to the height of the query image Q, R_{i,2} is the ratio of the lower-right-corner y coordinate of the i-th semantic block to the height of the query image Q, and H_Q is the height of the query image Q.
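Step S323 reduces to dividing corner coordinates by the query height, which is what lets the layout transfer to library images of different heights (step S331). A minimal sketch:

```python
def layout_ratios(tops, bottoms, HQ):
    """Sketch of S323: for each semantic block i, R[i][0] is the
    upper-left-corner y over the query image height HQ, and R[i][1]
    the lower-right-corner y over HQ."""
    return {i: (tops[i] / HQ, bottoms[i] / HQ) for i in tops}

# toy blocks: block 2 spans rows 0..40, block 5 spans rows 80..120
R = layout_ratios({2: 0, 5: 80}, {2: 40, 5: 120}, HQ=120)
```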
Further, step S33 further includes the following steps:
s331: according to the proportions R_{i,j} of the i-th semantic block in the query image Q, preliminarily determine the y coordinate B_{k,i} of its upper-left corner point in G_k and the y coordinate N_{k,i} of its lower-right corner point in G_k by scaling with the height of the database image G_k;
s332: according to the ith semantic block, displaying the database image GkCoordinates of the upper left corner point and the lower right corner point determined preliminarily in the database image GkIn the matching area J corresponding to the intermediate cutk,i
S333: calculate the similarities of E_(1,1) and E_(2,1) with J_{k,1} respectively; if the similarity of E_(2,1) with J_{k,1} is the larger, Q and G_k are considered symmetric about the y axis, and the flipped semantic blocks E_(2,i) are selected for matching and similarity calculation with G_k; otherwise the semantic blocks E_(1,i) are used for matching and similarity calculation, where i = 2, ..., 7.
S334: traverse the semantic blocks selected in step S333 and perform the following operations:
a: perform a normalized cross-correlation operation between each semantic block E_(j,i) and the corresponding matching region J_{k,i} cut from G_k, obtaining a response map F_{k,i} for each semantic block, where j = 1 or 2 and i = 2, ..., 7;
b: take the y coordinate of the maximum point of F_{k,i} as the y coordinate y_{k,i} of the center point of semantic block E_(j,i) in J_{k,i}; record the maximum of F_{k,i} as β_i and take it as the pattern similarity score between the i-th semantic block and the library graph G_k;
c: calculate the y coordinate u_{k,i} of the upper-left corner point of semantic block E_(j,i) in J_{k,i} from y_{k,i} and the height H_i of the i-th semantic block;
S335: calculate the y coordinate U_{k,i} of the upper-left corner point of E_(j,i) in G_k, where U_{k,i} = u_{k,i} + B_{k,i}, i = 2, ..., 7;
S336: calculate the vertical distance V_k(E_(j,i), E_(j,i+3)) between semantic blocks in G_k, where V_k(E_(j,i), E_(j,i+3)) is the difference between the upper-left-corner y coordinates of E_(j,i+3) and E_(j,i) in G_k, i.e. V_k(E_(j,i), E_(j,i+3)) = U_{k,i+3} - U_{k,i}, i = 2, 3, 4.
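Step S334 hinges on normalized cross-correlation between a semantic block and its matching region. The patent does not specify an implementation; since only y coordinates are used downstream, this sketch slides the block vertically only:

```python
import numpy as np

def ncc_match_y(block, region):
    """Sketch of S334 a-b: slide the semantic block down the matching
    region, compute a normalized cross-correlation response per y
    offset, and return (best top-edge y, peak response β)."""
    bh, bw = block.shape
    b = block - block.mean()
    bn = np.sqrt((b * b).sum())
    best_y, best_r = 0, -1.0
    for y in range(region.shape[0] - bh + 1):
        win = region[y:y + bh, :bw]
        w = win - win.mean()
        denom = bn * np.sqrt((w * w).sum())
        r = (b * w).sum() / denom if denom > 0 else 0.0
        if r > best_r:
            best_y, best_r = y, r
    return best_y, best_r

rng = np.random.default_rng(0)
region = rng.random((30, 10))
block = region[12:18, :].copy()   # plant the block at y = 12
u, beta = ncc_match_y(block, region)
```

In practice a library routine such as OpenCV's normalized template matching would replace the explicit loop.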
Further, step S34 further includes the following steps:
s341: calculate the sum O_{i,i+3}, i = 3, 4, of the occupancy ratios of semantic blocks E_(j,i) and E_(j,i+3) in Q;
S342: calculate the difference Dis_k(P_i, P_{i+3}) between the semantic block distances in the library graph G_k and in the query graph Q, where Dis_k(P_i, P_{i+3}) = |V_Q(P_i, P_{i+3}) - V_k(E_(j,i), E_(j,i+3))|, i = 2, 3, 4;
S343: calculate the spatial layout similarity score S_d(P_i, P_{i+3}) between semantic blocks of the query graph Q and the library graph G_k, where
Figure BDA0002932153220000062
S344: calculate the pattern similarity score S_f(P_i, P_{i+3}) between semantic blocks of the query graph Q and the library graph G_k from the pattern scores β_i and β_{i+3}, i = 2, 3, 4;
S345: assign different weights γ_m, m = 1, 2, 3, to the similarity scores;
a: the weight γ_1 of the spatial layout similarity score S_d(P_2, P_5) and the pattern similarity score S_f(P_2, P_5) is a constant, γ_1 = 0.5;
b: according to the sum O_{i,i+3} of the occupancy ratios of the i-th and (i+3)-th semantic blocks, assign different weights γ_m, m = 2, 3, to the spatial layout similarity scores S_d(P_i, P_{i+3}) and the pattern similarity scores S_f(P_i, P_{i+3}), i = 3, 4; if O_{3,6} ≥ O_{4,7} then γ_2 > γ_3, and vice versa γ_2 < γ_3; and γ_2 + γ_3 = 0.5;
S346: calculate the spatial layout similarity score S_D(k):
S_D(k) = γ_1×S_d(P_2, P_5) + γ_2×S_d(P_3, P_6) + γ_3×S_d(P_4, P_7);
S347: calculate the pattern similarity score S_F(k):
S_F(k) = γ_1×S_f(P_2, P_5) + γ_2×S_f(P_3, P_6) + γ_3×S_f(P_4, P_7).
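Steps S345-S347 can be sketched as follows. The text fixes γ_1 = 0.5 and γ_2 + γ_3 = 0.5 with the larger share going to the block pair with the larger occupancy sum, but does not give the exact split, so a 0.3/0.2 split is assumed here:

```python
def content_scores(Sd, Sf, O36, O47):
    """Sketch of S345-S347: Sd and Sf are the per-pair layout and
    pattern scores for the pairs (P2,P5), (P3,P6), (P4,P7); O36 and
    O47 are the occupancy sums O_{3,6} and O_{4,7}."""
    g1 = 0.5                                   # fixed weight
    g2, g3 = (0.3, 0.2) if O36 >= O47 else (0.2, 0.3)  # assumed split
    SD = g1 * Sd[0] + g2 * Sd[1] + g3 * Sd[2]  # layout score S_D(k)
    SF = g1 * Sf[0] + g2 * Sf[1] + g3 * Sf[2]  # pattern score S_F(k)
    return SD, SF

SD, SF = content_scores(Sd=(0.9, 0.8, 0.7), Sf=(0.6, 0.5, 0.4),
                        O36=0.35, O47=0.25)
```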
Further, according to the similarity of the query graph Q and each of the calculated shoe print attribute information and content information in the data set calculated in steps S2 and S3, calculating the ranking score of the query graph and the data set graph:
Figure BDA0002932153220000071
wherein k is the index of the shoe print image in the data set.
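Steps S4-S5 can be sketched as follows. The exact combination formula appears only as an image in the source, so an equally weighted sum of the attribute score S_p(k) and the content score S_c(k) is assumed:

```python
def rank_results(Sp, Sc, weight=0.5):
    """Sketch of S4-S5: combine the attribute score Sp[k] and content
    score Sc[k] per library image k (equal weighting is an assumption)
    and return the image ids in descending order of combined score."""
    scores = {k: weight * Sp[k] + (1 - weight) * Sc[k] for k in Sp}
    return sorted(scores, key=scores.get, reverse=True)

Sp = {0: 0.9, 1: 0.2, 2: 0.7}   # attribute similarity per image
Sc = {0: 0.4, 1: 0.9, 2: 0.8}   # content similarity per image
order = rank_results(Sp, Sc)
```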
Compared with the prior art, the invention has the following advantages:
the method not only utilizes the pattern similarity information to search the shoe print, but also considers the spatial position information of the pattern to search the shoe print, namely if the semantic block selected in the query graph is positioned in the half sole area of the shoe, the position of the most similar semantic block matched in the library graph is the heel area, and the spatial similarity score corresponding to the heel area is smaller, so that the final similarity score is reduced, and the influence of the similar pattern on the search result can be effectively weakened.
In the shoe print retrieval process, the method considers size attribute information among identical patterns and ranks library images that share the query image's pattern and are close to its size at the front of the returned results, so that public security personnel can determine the identity of a suspect more quickly and accurately, improving case-handling efficiency.
By measuring the size attribute information of shoe prints, the method facilitates effective management of shoe prints by public security personnel and provides a reasonable and effective organization scheme for building a suspect shoe print database.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic view of the overall process of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1, the invention provides a fine-grained shoe print image retrieval method comprising the following steps:
step S1: extracting shoe print attribute information. Further, step S1 further includes the following steps:
step S11: extracting attribute information of the single shoe print;
s111: judging the integrity of the shoe print image: obtain the image region contained in the minimum circumscribed rectangle of the single shoe print image A and record it as P_A; binarize P_A, marking points on the pattern as 1 and background points as 0; calculate the occupancy ratio of shoe print image A, denoted O_A, where O_A is the ratio of the number of points in P_A marked 1 to the number of pixels of the whole image; if O_A is less than a threshold z, the shoe print image is considered to be missing the sole or heel. Step S112: calculating the shoe length, shoe width and heel width of the shoe print;
a: if the shoe print image is complete, take the height of P_A as the shoe length H_A and its width as the shoe width W_A; divide the shoe print image contained in P_A at a ratio of 3:2 into a sole part and a heel part, find the minimum circumscribed rectangle of the heel part, and define the width of that rectangle as the heel width w_A; judge the plausibility of the shoe length, shoe width and heel width of the shoe print image and update the attribute information;
b: if the shoe print image is missing the sole or heel, define the shoe length H_A = 0, shoe width W_A = 0 and heel width w_A = 0;
Further, step S112 further includes the steps of:
step S1121: let r_1 = W_A / w_A; if r_1 < 1.28, the two sides of the sole are considered incomplete and the shoe width is updated to W_A = 1.28 × w_A; if r_1 > 1.33, the two sides of the heel are considered incomplete and the heel width is updated to w_A = W_A / 1.33;
Step S1122: let r_2 = H_A / W_A; if r_2 < 2.9, the toe or heel end is considered incomplete and the shoe length is updated to H_A = 3.0 × W_A; if r_2 > 3.1, the shoe width W_A updated in step S1121 is considered still smaller than the actual shoe width, so the shoe width is updated again to W_A = H_A / 3.0; then determine whether r_1 > 1.33, and if so, update the heel width to w_A = W_A / 1.33. In a preferred embodiment, the full sole includes the toe; if the toe of the sole is missing, the calculated shoe length is smaller than the actual shoe length, and if the heel is missing, the calculated shoe width is smaller than the actual shoe width.
Step S12: following the single shoe print attribute extraction method, extract attribute information from the shoe prints in the data set G = {G_1, ..., G_k, ..., G_N}, k = 1, ..., N, to form a shoe print attribute information matrix M whose k-th row holds the shoe length, shoe width and heel width of G_k:
M = [H_1 W_1 w_1; ...; H_k W_k w_k; ...; H_N W_N w_N]
Further, after extracting the shoe print attribute information, step S2 is executed: calculating the shoe print attribute similarity.
Step S21: extract attribute information of the query image Q using the single shoe print attribute extraction method, denoting the shoe length as H_Q, the shoe width as W_Q and the heel width as w_Q;
Step S22: calculate the attribute similarity score S_p(k) between the query image Q and the library image G_k;
a: if the shoe length, shoe width and heel width of either the query image Q or the library image G_k are all 0, the attribute similarity score is S_p(k) = 0;
b: if the shoe length, shoe width and heel width of both the query image Q and the library image G_k are all non-zero, perform the following calculation:
(1) calculate the differences dis_i(k), i = 1, 2, 3, between the attributes of the query image Q and the library image G_k; here, the suspect shoe print image library and the database are the same database.
dis_1(k) = |H_Q - H_k|,  dis_2(k) = |W_Q - W_k|,  dis_3(k) = |w_Q - w_k|
(2) calculate the similarity scores s_i(k), i = 1, 2, 3, between the attributes of the query image Q and the library image G_k; in the present application the attributes shoe length, shoe width and heel width are described as a preferred embodiment.
Figure BDA0002932153220000093
Wherein, Δ d1,Δd2,Δd3Respectively representing the difference values of the shoe length, the shoe width and the heel width when the difference values are different by one size;
(3) calculate the overall attribute similarity score S_P(k) of the query image Q and the library image G_k. A shoe print image has three types of attribute information: shoe length, shoe width and heel width. Each attribute has a similarity score, but the three attributes carry different weights, so the three attribute similarities are combined with different weights to compute the overall attribute similarity score.
If it is
Figure BDA00029321532200001019
then the overall attribute similarity score of the query image Q and the library graph G_k is
Figure BDA0002932153220000101
Wherein
Figure BDA0002932153220000102
which denote the weights of the three attribute similarity scores; α_i is an indicator: when dis_i(k) > 0, α_i = 1, otherwise α_i = 0; s_i(k) are the similarity scores of the three attributes;
if it is
Figure BDA0002932153220000103
then the overall attribute similarity score of the query image Q and the library graph G_k is
Figure BDA0002932153220000104
Wherein
Figure BDA0002932153220000105
which denote the weights of the three attribute similarity scores; s_i(k) are the similarity scores of the three attributes.
As a preferred embodiment, the present application further includes step S3: and calculating the similarity of the shoe print contents.
Step S31: and constructing a semantic block sample matrix.
As a preferred embodiment, in the present application, the step S31 further includes the steps of:
step S311: define the minimum circumscribed rectangle of the query image Q as semantic block P_1, and obtain its height H_1, its width W_1, the y coordinate of its upper-left corner point in Q, and the y coordinate of its lower-right corner point in Q;
Step S312: trisect semantic block P_1 in the vertical direction and take the top and bottom parts as semantic blocks P_2 and P_5; obtain the height H_i and width W_i of each semantic block, together with the y coordinates of its upper-left and lower-right corner points in Q, where i = 2, 5;
step S313: halve semantic blocks P_2 and P_5 in the horizontal direction to obtain semantic blocks P_3, P_4 and P_6, P_7; obtain the height H_i and width W_i of each semantic block, together with the y coordinates of its upper-left and lower-right corner points in Q, where i = 3, 4, 6, 7;
step S314: horizontally flip each semantic block P_i selected in steps S311-S313 to obtain the flipped semantic blocks T_i, i = 1, ..., 7, and construct the semantic block sample matrix E, whose first row holds P_1, ..., P_7 and whose second row holds T_1, ..., T_7; its entries are denoted E_(j,i), with j = 1 for the original blocks and j = 2 for the flipped blocks;
Step S315: carrying out binarization on the image content in each semantic block, marking the points on the patterns as 1 and marking the background points as 0; calculating the duty ratio of each semantic block, and recording the duty ratio as Di1., 7 where DiThe ratio of the sum of the number of the pixels marked as 1 in the ith semantic block to the area of the pixel is shown.
Step S32: constructing the spatial layout relationship of the semantic blocks in the query image.
Step S321: calculating the vertical distance V_Q(P_i, P_{i+3}) between semantic blocks in Q, i = 2, 3, 4, where V_Q(P_i, P_{i+3}) is the difference of the upper-left-corner y coordinate values of semantic blocks P_{i+3} and P_i, i.e. V_Q(P_i, P_{i+3}) = y_{i+3,1} − y_{i,1};
Step S322: updating the y coordinate values of the upper-left and lower-right corner points of the semantic blocks in Q by a vertical offset λ, where λ ∈ [0, H_Q);
Step S323: calculating the proportion R_{i,j} of each semantic block in the query image Q, i = 1, ..., 7, j = 1, 2, where R_{i,1} is the ratio of the y coordinate value y_{i,1} of the upper-left corner of the i-th semantic block to the height of the query image Q, i.e. R_{i,1} = y_{i,1} / H_Q, and R_{i,2} is the ratio of the y coordinate value y_{i,2} of the lower-right corner of the i-th semantic block to the height of the query image Q, i.e. R_{i,2} = y_{i,2} / H_Q; H_Q is the height of the query image Q.
Step S33: constructing the spatial layout relationship of the semantic blocks in each image of the image set G = {G_1, ..., G_k, ..., G_N}, k = 1, ..., N.
Step S331: according to the proportion R_{i,j} of the i-th semantic block in the query image Q, preliminarily determining the y coordinate value B_{k,i} of its upper-left corner point in G_k and the y coordinate value N_{k,i} of its lower-right corner point in G_k, i.e. B_{k,i} = R_{i,1} × H_{G_k} and N_{k,i} = R_{i,2} × H_{G_k}, where H_{G_k} is the height of the database image G_k;
Step S332: according to the preliminarily determined upper-left and lower-right corner coordinates of the i-th semantic block in the database image G_k, cutting out the corresponding matching area J_{k,i} from G_k;
Step S333: respectively calculating the similarities of E_{(1,1)} and E_{(2,1)} to J_{k,1}; if the similarity of E_{(2,1)} to J_{k,1} is greater than that of E_{(1,1)}, Q and G_k are regarded as symmetric about the y axis and the semantic blocks E_{(2,i)} are selected for matching and similarity calculation with G_k; otherwise the semantic blocks E_{(1,i)} are used for matching and similarity calculation with G_k, where i = 2, ..., 7.
Step S334: traversing the semantic blocks selected in step S333 and performing the following operations:
a: performing a normalized cross-correlation operation between each semantic block E_{(j,i)} and the corresponding matching area J_{k,i} cut from G_k, obtaining a response map F_{k,i} for each semantic block, where j = 1 or 2 and i = 2, ..., 7;
b: taking the y coordinate of the maximum point of F_{k,i} as the y coordinate y_{k,i} of the center point of semantic block E_{(j,i)} in J_{k,i}; recording the maximum value of F_{k,i} as β_i and taking it as the pattern similarity score between the i-th semantic block and the library image G_k;
c: calculating the y coordinate u_{k,i} of the upper-left corner point of semantic block E_{(j,i)} in J_{k,i}, i.e. u_{k,i} = y_{k,i} − H_i / 2, where H_i is the height of the i-th semantic block;
Step S335: calculating the y coordinate U_{k,i} of the upper-left corner point of semantic block E_{(j,i)} in G_k, where U_{k,i} = u_{k,i} + B_{k,i}, i = 2, ..., 7;
Step S336: calculating the vertical distance V_k(E_{(j,i)}, E_{(j,i+3)}) between semantic blocks in G_k, where V_k(E_{(j,i)}, E_{(j,i+3)}) is the difference of the upper-left-corner y coordinate values of E_{(j,i+3)} and E_{(j,i)} in G_k, i.e. V_k(E_{(j,i)}, E_{(j,i+3)}) = U_{k,i+3} − U_{k,i}, i = 2, 3, 4.
Step S34: calculating the spatial layout similarity score S_D(k) and the pattern similarity score S_F(k) between the query image Q and the library image G_k.
Step S35: calculating the content similarity score S_c(k) between the query image Q and the library image G_k: S_c(k) = η × S_D(k) + (1 − η) × S_F(k), where η ≥ 0.5.
According to the attribute similarity and the content similarity between the query image Q and each shoe print image in the data set calculated in steps S2 and S3, the ranking score of the query image with respect to each data set image is calculated from S_p(k) and S_c(k), where k is the index of the shoe print image in the data set.
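Since the combination formula itself appears only in the original drawings, the ranking step can be sketched under the assumption of a simple weighted sum of S_p(k) and S_c(k); the weight `mu` and both function names are hypothetical:

```python
def ranking_score(s_p, s_c, mu=0.5):
    """Combine attribute similarity S_p(k) and content similarity S_c(k).
    A weighted sum with an assumed weight mu stands in for the patent's
    drawing-only formula."""
    return mu * s_p + (1.0 - mu) * s_c

def rank_dataset(scores):
    """Return data-set indices k sorted by descending ranking score (step S5)."""
    return sorted(scores, key=scores.get, reverse=True)
```

`rank_dataset` then realizes the descending-order output of step S5 over the per-image scores.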
Step S4: calculating the ranking score combining the shoe print content information and the attribute information.
Step S34 further includes the following steps:
S341: calculating the sum O_{i,i+3} of the duty ratios of semantic blocks E_{(j,i)} and E_{(j,i+3)} in Q, i = 3, 4;
S342: calculating the difference Dis_k(P_i, P_{i+3}) between the semantic block distances in the library image G_k and in the query image Q, where Dis_k(P_i, P_{i+3}) = |V_Q(P_i, P_{i+3}) − V_k(E_{(j,i)}, E_{(j,i+3)})|, i = 2, 3, 4;
S343: calculating the spatial layout similarity score S_d(P_i, P_{i+3}) between semantic blocks of the query image Q and the library image G_k from the distance difference Dis_k(P_i, P_{i+3}), i = 2, 3, 4;
S344: calculating the pattern similarity score S_f(P_i, P_{i+3}) between semantic blocks of the query image Q and the library image G_k from the pattern similarity scores β_i and β_{i+3}, i = 2, 3, 4;
S345: assigning different weights γ_m, m = 1, 2, 3, to the similarity scores;
a: the weight γ_1 of the spatial layout similarity score S_d(P_2, P_5) and the pattern similarity score S_f(P_2, P_5) is a constant, γ_1 = 0.5;
b: according to the sum O_{i,i+3} of the duty ratios of the i-th and (i+3)-th semantic blocks, the spatial layout similarity scores S_d(P_i, P_{i+3}), i = 3, 4, and the pattern similarity scores S_f(P_i, P_{i+3}), i = 3, 4, are given different weights γ_m, m = 2, 3: if O_{3,6} ≥ O_{4,7}, then γ_2 > γ_3, and vice versa γ_2 < γ_3; and γ_2 + γ_3 = 0.5;
S346: calculating the spatial layout similarity score S_D(k):
S_D(k) = γ_1 × S_d(P_2, P_5) + γ_2 × S_d(P_3, P_6) + γ_3 × S_d(P_4, P_7);
S347: calculating the pattern similarity score S_F(k):
S_F(k) = γ_1 × S_f(P_2, P_5) + γ_2 × S_f(P_3, P_6) + γ_3 × S_f(P_4, P_7).
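Steps S345–S347 together with step S35 can be sketched as below; the per-pair scores `s_d`, `s_f` are assumed precomputed, and the split of γ_2 + γ_3 = 0.5 into 0.3/0.2 is an illustrative choice (the patent fixes only the ordering):

```python
def content_score(s_d, s_f, o36, o47, eta=0.6):
    """Weighted combination of the three block-pair scores (steps S345-S347)
    followed by the content score S_c(k) = eta*S_D + (1-eta)*S_F of step S35.
    s_d, s_f map the pair index i in {2, 3, 4} to S_d(P_i, P_{i+3}) and
    S_f(P_i, P_{i+3}); o36, o47 are the duty-ratio sums O_{3,6} and O_{4,7}."""
    g1 = 0.5                                              # fixed weight for the (P2, P5) pair
    g2, g3 = (0.3, 0.2) if o36 >= o47 else (0.2, 0.3)     # denser pair weighted higher
    s_D = g1 * s_d[2] + g2 * s_d[3] + g3 * s_d[4]
    s_F = g1 * s_f[2] + g2 * s_f[3] + g3 * s_f[4]
    return eta * s_D + (1.0 - eta) * s_F
```

Weighting the denser block pair more strongly follows the patent's rule that the pair with the larger duty-ratio sum carries more pattern evidence.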
Step S5: outputting the images in the data set in descending order of ranking score to obtain the retrieval result.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the merits of the embodiments. In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for retrieving a fine-grained shoe print image, characterized by comprising the following steps:
S1: extracting shoe print attribute information;
S2: calculating the similarity of the shoe print attributes;
S3: calculating the similarity of the shoe print contents;
S4: calculating a ranking score combining the shoe print content information and the attribute information;
S5: outputting the images in the data set in descending order of ranking score to obtain a retrieval result.
2. The fine-grained shoe print image retrieval method according to claim 1, characterized in that step S1 further includes the steps of:
S11: extracting attribute information of a single shoe print;
S111: judging the integrity of the shoe print image: obtaining the image area contained in the minimum circumscribed rectangle of the single shoe print image A and recording it as P_A; binarizing P_A, marking points on the pattern as 1 and background points as 0; calculating the duty ratio of the shoe print image A, denoted O_A, where O_A is the ratio of the number of points marked 1 in P_A to the total number of pixels in P_A; if O_A is less than a threshold z, the shoe print image is considered to be missing the sole or the heel;
S112: calculating the shoe length, shoe width, and heel width of the shoe print;
a: if the shoe print image is complete, taking the height of P_A as the shoe length H_A and its width as the shoe width W_A; dividing the shoe print image contained in P_A at a ratio of 3:2 into a sole part and a heel part, finding the minimum circumscribed rectangle of the heel part, and defining the width of that rectangle as the heel width w_A; judging the plausibility of the shoe length, shoe width, and heel width of the shoe print image and updating the attribute information;
b: if the shoe print image is missing the sole or heel, defining the shoe length H_A = 0, the shoe width W_A = 0, and the heel width w_A = 0;
S12: according to the single shoe print attribute extraction method, extracting attribute information from the shoe prints in the data set G = {G_1, ..., G_k, ..., G_N}, k = 1, ..., N, to form a shoe print attribute information matrix M holding the shoe length, shoe width, and heel width of each data set image.
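The integrity test of step S111 reduces to a duty-ratio threshold; a minimal sketch follows (the threshold value z = 0.3 and the helper name are assumed placeholders, since the patent leaves z unspecified):

```python
import numpy as np

def integrity_check(p_a, z=0.3):
    """Step S111: p_a is the binarized content of the minimum bounding
    rectangle P_A, with pattern pixels marked 1. Returns the duty ratio O_A
    and whether the print is considered complete (O_A >= z)."""
    o_a = float(np.asarray(p_a).mean())   # share of pixels marked 1
    return o_a, o_a >= z
```

A print failing the check has its attributes zeroed per step S112b before the attribute matrix M is assembled.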
3. The fine-grained shoe print image retrieval method according to claim 2, characterized in that step S112 further includes the steps of:
S1121: letting r_1 = W_A / w_A; if r_1 < 1.28, the two sides of the sole are considered incomplete and the shoe width is updated as W_A = 1.28 × w_A; if r_1 > 1.33, the two sides of the heel are considered incomplete and the heel width is updated as w_A = W_A / 1.33;
S1122: letting r_2 = H_A / W_A; if r_2 < 2.9, the toe end or the heel end is considered incomplete and the shoe length is updated as H_A = 3.0 × W_A; if r_2 > 3.1, the shoe width W_A updated in step S1121 is considered still smaller than the actual shoe width, the shoe width is updated again as W_A = H_A / 3.0, and it is judged whether r_1 > 1.33; if so, the heel width is updated as w_A = W_A / 1.33.
4. The fine-grained shoe print image retrieval method according to claim 1, characterized in that step S2 further includes the steps of:
S21: extracting the attribute information of the query image Q according to the single shoe print attribute extraction method, denoting the shoe length as H_Q, the shoe width as W_Q, and the heel width as w_Q;
S22: calculating the attribute similarity score S_p(k) between the query image Q and the library image G_k;
a: if the shoe length, shoe width, and heel width of the shoe print in either the query image Q or the library image G_k are all 0, the attribute similarity score is S_p(k) = 0;
b: if the shoe length, shoe width, and heel width of the shoe prints in both the query image Q and the library image G_k are all nonzero, performing the following calculation:
(1) calculating the difference dis_i(k), i = 1, 2, 3, between each attribute (shoe length, shoe width, heel width) of the query image Q and the library image G_k;
(2) calculating the similarity score s_i(k), i = 1, 2, 3, between each attribute of the query image Q and the library image G_k from dis_i(k) and Δd_i, where Δd_1, Δd_2, Δd_3 respectively represent the differences in shoe length, shoe width, and heel width corresponding to a difference of one shoe size;
(3) calculating the overall attribute similarity score S_P(k) of the query image Q and the library image G_k: S_P(k) is obtained as a weighted sum of the three attribute similarity scores, S_P(k) = Σ ω_i s_i(k), i = 1, 2, 3, where ω_i is the weight of the i-th attribute similarity score; depending on the attribute differences, the weights are computed either with the constants α_i, where α_i = 1 when dis_i(k) > 0 and α_i = 0 otherwise, or as fixed weights of the three attribute similarity scores.
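Claim 4's per-attribute score formula appears only in a drawing, so the sketch below substitutes an assumed linear falloff s_i = max(0, 1 − dis_i/Δd_i) with equal weights; the Δd values and the function name are placeholders, not patent values:

```python
def attribute_similarity(q_attr, g_attr, delta=(5.0, 3.5, 2.5)):
    """Step S22 sketch: q_attr and g_attr are (shoe length, shoe width,
    heel width) triples; delta holds assumed one-size differences Δd_1..Δd_3.
    Returns 0 when either print had its attributes zeroed (sole or heel
    missing); otherwise an equal-weight average of assumed linear scores."""
    if all(v == 0 for v in q_attr) or all(v == 0 for v in g_attr):
        return 0.0
    dis = [abs(a - b) for a, b in zip(q_attr, g_attr)]          # dis_i(k)
    s = [max(0.0, 1.0 - d / dd) for d, dd in zip(dis, delta)]   # s_i(k)
    return sum(s) / len(s)
```

Any attribute differing by more than its assumed one-size step Δd_i contributes a zero score rather than a negative one.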
5. The fine-grained shoe print image retrieval method according to claim 1, characterized in that step S3 further includes the steps of:
S31: constructing a semantic block sample matrix;
S32: constructing the spatial layout relationship of the semantic blocks in the query image;
S33: constructing the spatial layout relationship of the semantic blocks in each image of the image set G = {G_1, ..., G_k, ..., G_N}, k = 1, ..., N;
S34: calculating the spatial layout similarity score S_D(k) and the pattern similarity score S_F(k) between the query image Q and the library image G_k;
S35: calculating the content similarity score S_c(k) between the query image Q and the library image G_k: S_c(k) = η × S_D(k) + (1 − η) × S_F(k), where η ≥ 0.5.
6. The fine-grained shoe print image retrieval method according to claim 1, characterized in that step S31 further includes the steps of:
S311: defining the minimum circumscribed rectangle of the query image Q as semantic block P_1, and obtaining its height H_1, its width W_1, the y coordinate y_{1,1} of its upper-left corner point in Q, and the y coordinate y_{1,2} of its lower-right corner point in Q;
S312: trisecting semantic block P_1 in the vertical direction and taking the top and bottom parts to obtain semantic blocks P_2 and P_5, and obtaining each block's height H_i, width W_i, the y coordinate y_{i,1} of its upper-left corner point in Q, and the y coordinate y_{i,2} of its lower-right corner point in Q, where i = 2, 5;
S313: bisecting semantic blocks P_2 and P_5 in the horizontal direction to obtain semantic blocks P_3, P_4 and P_6, P_7 respectively, and obtaining each block's height H_i, width W_i, the y coordinate y_{i,1} of its upper-left corner point in Q, and the y coordinate y_{i,2} of its lower-right corner point in Q, where i = 3, 4, 6, 7;
S314: horizontally flipping each semantic block P_i obtained in steps S311–S313 to obtain the flipped semantic block T_i, i = 1, ..., 7, and constructing the semantic block sample matrix E = [P_1, ..., P_7; T_1, ..., T_7], whose first row holds the original blocks E_{(1,i)} = P_i and whose second row holds the flipped blocks E_{(2,i)} = T_i;
S315: binarizing the image content in each semantic block, marking points on the pattern as 1 and background points as 0; calculating the duty ratio of each semantic block, denoted D_i, i = 1, ..., 7, where D_i is the ratio of the number of pixels marked 1 in the i-th semantic block to the area of that block.
7. The fine-grained shoe print image retrieval method according to claim 1, characterized in that step S32 further includes the steps of:
S321: calculating the vertical distance V_Q(P_i, P_{i+3}) between semantic blocks in Q, i = 2, 3, 4, where V_Q(P_i, P_{i+3}) is the difference of the upper-left-corner y coordinate values of semantic blocks P_{i+3} and P_i, i.e. V_Q(P_i, P_{i+3}) = y_{i+3,1} − y_{i,1};
S322: updating the y coordinate values of the upper-left and lower-right corner points of the semantic blocks in Q by a vertical offset λ, where λ ∈ [0, H_Q);
S323: calculating the proportion R_{i,j} of each semantic block in the query image Q, i = 1, ..., 7, j = 1, 2, where R_{i,1} is the ratio of the y coordinate value y_{i,1} of the upper-left corner of the i-th semantic block to the height of the query image Q, i.e. R_{i,1} = y_{i,1} / H_Q, and R_{i,2} is the ratio of the y coordinate value y_{i,2} of the lower-right corner of the i-th semantic block to the height of the query image Q, i.e. R_{i,2} = y_{i,2} / H_Q; H_Q is the height of the query image Q.
8. The fine-grained shoe print image retrieval method according to claim 1, characterized in that step S33 further includes the steps of:
S331: according to the proportion R_{i,j} of the i-th semantic block in the query image Q, preliminarily determining the y coordinate value B_{k,i} of its upper-left corner point in G_k and the y coordinate value N_{k,i} of its lower-right corner point in G_k, i.e. B_{k,i} = R_{i,1} × H_{G_k} and N_{k,i} = R_{i,2} × H_{G_k}, where H_{G_k} is the height of the database image G_k;
S332: according to the preliminarily determined upper-left and lower-right corner coordinates of the i-th semantic block in the database image G_k, cutting out the corresponding matching area J_{k,i} from G_k;
S333: respectively calculating the similarities of E_{(1,1)} and E_{(2,1)} to J_{k,1}; if the similarity of E_{(2,1)} to J_{k,1} is greater than that of E_{(1,1)}, Q and G_k are regarded as symmetric about the y axis and the semantic blocks E_{(2,i)} are selected for matching and similarity calculation with G_k; otherwise the semantic blocks E_{(1,i)} are used for matching and similarity calculation with G_k, where i = 2, ..., 7;
S334: traversing the semantic blocks selected in step S333 and performing the following operations:
a: performing a normalized cross-correlation operation between each semantic block E_{(j,i)} and the corresponding matching area J_{k,i} cut from G_k, obtaining a response map F_{k,i} for each semantic block, where j = 1 or 2 and i = 2, ..., 7;
b: taking the y coordinate of the maximum point of F_{k,i} as the y coordinate y_{k,i} of the center point of semantic block E_{(j,i)} in J_{k,i}; recording the maximum value of F_{k,i} as β_i and taking it as the pattern similarity score between the i-th semantic block and the library image G_k;
c: calculating the y coordinate u_{k,i} of the upper-left corner point of semantic block E_{(j,i)} in J_{k,i}, i.e. u_{k,i} = y_{k,i} − H_i / 2, where H_i is the height of the i-th semantic block;
S335: calculating the y coordinate U_{k,i} of the upper-left corner point of semantic block E_{(j,i)} in G_k, where U_{k,i} = u_{k,i} + B_{k,i}, i = 2, ..., 7;
S336: calculating the vertical distance V_k(E_{(j,i)}, E_{(j,i+3)}) between semantic blocks in G_k, where V_k(E_{(j,i)}, E_{(j,i+3)}) is the difference of the upper-left-corner y coordinate values of E_{(j,i+3)} and E_{(j,i)} in G_k, i.e. V_k(E_{(j,i)}, E_{(j,i+3)}) = U_{k,i+3} − U_{k,i}, i = 2, 3, 4.
9. The fine-grained shoe print image retrieval method according to claim 1, characterized in that step S34 further includes the steps of:
S341: calculating the sum O_{i,i+3} of the duty ratios of semantic blocks E_{(j,i)} and E_{(j,i+3)} in Q, i = 3, 4;
S342: calculating the difference Dis_k(P_i, P_{i+3}) between the semantic block distances in the library image G_k and in the query image Q, where Dis_k(P_i, P_{i+3}) = |V_Q(P_i, P_{i+3}) − V_k(E_{(j,i)}, E_{(j,i+3)})|, i = 2, 3, 4;
S343: calculating the spatial layout similarity score S_d(P_i, P_{i+3}) between semantic blocks of the query image Q and the library image G_k from the distance difference Dis_k(P_i, P_{i+3}), i = 2, 3, 4;
S344: calculating the pattern similarity score S_f(P_i, P_{i+3}) between semantic blocks of the query image Q and the library image G_k from the pattern similarity scores β_i and β_{i+3}, i = 2, 3, 4;
S345: assigning different weights γ_m, m = 1, 2, 3, to the similarity scores;
a: the weight γ_1 of the spatial layout similarity score S_d(P_2, P_5) and the pattern similarity score S_f(P_2, P_5) is a constant, γ_1 = 0.5;
b: according to the sum O_{i,i+3} of the duty ratios of the i-th and (i+3)-th semantic blocks, the spatial layout similarity scores S_d(P_i, P_{i+3}), i = 3, 4, and the pattern similarity scores S_f(P_i, P_{i+3}), i = 3, 4, are given different weights γ_m, m = 2, 3: if O_{3,6} ≥ O_{4,7}, then γ_2 > γ_3, and vice versa γ_2 < γ_3; and γ_2 + γ_3 = 0.5;
S346: calculating the spatial layout similarity score S_D(k):
S_D(k) = γ_1 × S_d(P_2, P_5) + γ_2 × S_d(P_3, P_6) + γ_3 × S_d(P_4, P_7);
S347: calculating the pattern similarity score S_F(k):
S_F(k) = γ_1 × S_f(P_2, P_5) + γ_2 × S_f(P_3, P_6) + γ_3 × S_f(P_4, P_7).
10. The fine-grained shoe print image retrieval method according to claim 1, characterized in that: according to the attribute similarity and the content similarity between the query image Q and each shoe print image in the data set calculated in steps S2 and S3, the ranking score of the query image with respect to each data set image is calculated from S_p(k) and S_c(k), where k is the index of the shoe print image in the data set.
CN202110152570.2A 2021-02-03 2021-02-03 Fine-granularity shoe print image retrieval method Active CN112800267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110152570.2A CN112800267B (en) 2021-02-03 2021-02-03 Fine-granularity shoe print image retrieval method


Publications (2)

Publication Number Publication Date
CN112800267A true CN112800267A (en) 2021-05-14
CN112800267B CN112800267B (en) 2024-06-11

Family

ID=75814048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110152570.2A Active CN112800267B (en) 2021-02-03 2021-02-03 Fine-granularity shoe print image retrieval method

Country Status (1)

Country Link
CN (1) CN112800267B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115795081A (en) * 2023-01-20 2023-03-14 安徽大学 Cross-domain incomplete footprint image retrieval system based on multi-channel fusion

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170043308A (en) * 2015-10-13 2017-04-21 충북대학교 산학협력단 Method for identificating Person on the basis gait data
CN106776950A (en) * 2016-12-02 2017-05-31 大连海事大学 A kind of field shoe impression mark decorative pattern image search method based on expertise guiding
CN106951906A (en) * 2017-03-22 2017-07-14 重庆市公安局刑事警察总队 The comprehensive analysis method that shoe sole print various dimensions are classified with identification
CN106970968A (en) * 2017-03-22 2017-07-21 重庆市公安局刑事警察总队 The classification of shoe sole print various dimensions looks into a yard method with identification
KR20170095062A (en) * 2016-02-12 2017-08-22 대한민국(관리부서: 행정자치부 국립과학수사연구원장) A Method Of Providing For Searching Footprint And The System Practiced The Method
WO2017168125A1 (en) * 2016-03-31 2017-10-05 Queen Mary University Of London Sketch based search methods
KR101833986B1 (en) * 2016-10-05 2018-03-02 김현철 Shoes component with image film and manufacturing method thereof
US10043109B1 (en) * 2017-01-23 2018-08-07 A9.Com, Inc. Attribute similarity-based search
CN109191481A (en) * 2018-08-03 2019-01-11 辽宁师范大学 Shoes watermark image Fast Threshold dividing method under bad conditions of exposure
CN110188222A (en) * 2019-06-03 2019-08-30 大连海事大学 Shoes based on the semantic filter in part and bridge joint similarity print search method


Also Published As

Publication number Publication date
CN112800267B (en) 2024-06-11

Similar Documents

Publication Publication Date Title
İlsever et al. Two-dimensional change detection methods: remote sensing applications
Pun et al. A two-stage localization for copy-move forgery detection
Ma et al. Bicov: a novel image representation for person re-identification and face verification
Ruichek Local concave-and-convex micro-structure patterns for texture classification
Tao et al. Unsupervised detection of built-up areas from multiple high-resolution remote sensing images
Huang et al. Multi-scale local context embedding for LiDAR point cloud classification
KR100884066B1 (en) System and method for comparing image based on singular value decomposition
Xie et al. Combination of dominant color descriptor and Hu moments in consistent zone for content based image retrieval
CN104850822B (en) Leaf identification method under simple background based on multi-feature fusion
CN106021603A (en) Garment image retrieval method based on segmentation and feature matching
Alizadeh et al. Automatic retrieval of shoeprint images using blocked sparse representation
Folego et al. From impressionism to expressionism: Automatically identifying van Gogh's paintings
CN107644227A (en) A kind of affine invariant descriptor of fusion various visual angles for commodity image search
Patil et al. Rotation and intensity invariant shoeprint matching using Gabor transform with application to forensic science
Almaadeed et al. Partial shoeprint retrieval using multiple point-of-interest detectors and SIFT descriptors
Mata-Montero et al. A texture and curvature bimodal leaf recognition model for identification of costa rican plant species
Lan et al. An edge-located uniform pattern recovery mechanism using statistical feature-based optimal center pixel selection strategy for local binary pattern
CN112800267A (en) Fine-grained shoe print image retrieval method
Ghanmi et al. A new descriptor for pattern matching: application to identity document verification
Li et al. Research of shoeprint image stream retrival algorithm with scale-invariance feature transform
Gwo et al. Shoeprint retrieval: Core point alignment for pattern comparison
Alghamdi et al. Automated person identification framework based on fingernails and dorsal knuckle patterns
Wei et al. The use of scale‐invariance feature transform approach to recognize and retrieve incomplete shoeprints
Srihari et al. Computational methods for the analysis of footwear impression evidence
Zubair Machine learning based biomedical image analysis and feature extraction methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant