CN114722226A - Adaptive retrieval method and device capable of matching images and storage medium - Google Patents


Info

Publication number
CN114722226A
CN114722226A · Application CN202210645024.7A · Granted as CN114722226B
Authority
CN
China
Prior art keywords
image
similarity
adaptive
images
value
Prior art date
Legal status
Granted
Application number
CN202210645024.7A
Other languages
Chinese (zh)
Other versions
CN114722226B (en)
Inventor
王伟玺
谢林甫
郭欢
李晓明
汤圣君
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202210645024.7A priority Critical patent/CN114722226B/en
Publication of CN114722226A publication Critical patent/CN114722226A/en
Application granted granted Critical
Publication of CN114722226B publication Critical patent/CN114722226B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 — Information retrieval of still image data
    • G06F16/54 — Browsing; Visualisation therefor
    • G06F16/53 — Querying
    • G06F16/538 — Presentation of query results
    • G06F16/56 — Information retrieval of still image data having vectorial format

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an adaptive retrieval method, device, and storage medium for matchable images, wherein the method comprises the following steps: acquiring an unordered image set to participate in three-dimensional reconstruction, and extracting a similarity vector for each image according to a pre-trained visual dictionary; sorting the similarity values of the similarity vector to obtain a sorted similarity vector; substituting the similarity values of the sorted similarity vector into a high-order polynomial function and resolving the function coefficients to obtain a high-order polynomial function fitting the image similarity distribution curve; calculating the inflection point value of the high-order polynomial function, and calculating an adaptive threshold for the image according to the inflection point value; and retrieving images according to the calculated adaptive thresholds, and outputting similar-image results and the corresponding images according to the retrieval results. The invention can retrieve, from unordered images, the image pairs that share corresponding image points, reduces unnecessary matching between unrelated image pairs, and improves the overall efficiency of three-dimensional reconstruction.

Description

Adaptive retrieval method, device, and storage medium for matchable images
Technical Field
The invention relates to the technical field of matchable images, and in particular to an adaptive retrieval method, device, and storage medium for matchable images.
Background
At present, three-dimensional model reconstruction from unordered images is a research focus and hot spot in fields such as digital photogrammetry and computer vision. Image feature matching is the foundation of three-dimensional reconstruction from unordered images and one of the most time-consuming computational steps in Structure from Motion (SfM). Because unordered images lack prior information such as POS data, GPS positioning, flight-route planning, and shooting order, the feature matching stage usually requires exhaustive matching across all images (N×(N−1)/2 computations), and a large amount of unnecessary matching between unrelated images occurs, causing great waste of computing resources and time.
Currently, the mainstream technical route of matchable-image retrieval for three-dimensional reconstruction is to extract local feature point operators (SIFT, SURF, ORB, etc.) from the images, cluster them to generate a visual dictionary based on the bag-of-words model, convert all images into visual dictionary vectors of the same dimension through the dictionary, and compute distances between the high-dimensional vectors (Euclidean distance, cosine distance, Hamming distance, etc.) as similarity values representing the degree of similarity between images (between 0 and 1; the higher the value, the more similar). Matchable image pairs are then obtained through similarity measurement.
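The bag-of-words route described above can be sketched as follows. This is a minimal illustration in which random numpy arrays stand in for real SIFT/SURF/ORB descriptors and a random codebook stands in for a trained visual dictionary; the function names are illustrative, not from the patent.

```python
import numpy as np

def bow_vector(descriptors, codebook):
    """Quantize local descriptors against a visual dictionary (codebook)
    and return an L2-normalized word-frequency histogram."""
    # nearest-centroid assignment: index of the closest visual word per descriptor
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def cosine_similarity(u, v):
    return float(u @ v)  # vectors are already L2-normalized

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))           # 8 visual words, 16-D descriptors
img_a = rng.normal(size=(30, 16))             # stand-in descriptors of image A
img_b = img_a + rng.normal(scale=0.05, size=img_a.shape)  # near-duplicate of A
img_c = rng.normal(size=(30, 16))             # unrelated image

va, vb, vc = (bow_vector(d, codebook) for d in (img_a, img_b, img_c))
print(cosine_similarity(va, vb))  # expected high (near-duplicate images)
print(cosine_similarity(va, vc))
```

In real use the descriptors would come from a feature detector and the codebook from clustering; the quantize-then-compare structure is the same.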
However, similarity measurement in the prior art mainly adopts a fixed-threshold method: an empirical threshold obtained through repeated experiments (keep images whose similarity exceeds some value n, or keep the top-n images by similarity value) is used as the criterion for judging image similarity. This easily yields redundant or too few retrieval results, causing unnecessary matching between unrelated images or insufficient matching of image features, which in turn degrades the subsequent three-dimensional reconstruction.
In view of this, there is still a need for improvement and development in the art.
Disclosure of Invention
In view of the above shortcomings of the prior art, an object of the present invention is to provide an adaptive retrieval method, device, and storage medium for matchable images, so as to solve the technical problem of low efficiency in existing three-dimensional reconstruction.
The technical scheme adopted by the invention for solving the technical problem is as follows:
In a first aspect, the present invention provides an adaptive retrieval method for matchable images, the method comprising:
acquiring an unordered image set to participate in three-dimensional reconstruction, and extracting a similarity vector for each image according to a pre-trained visual dictionary;
sorting the similarity values of the similarity vector to obtain a sorted similarity vector;
substituting the similarity values of the sorted similarity vector into a high-order polynomial function and resolving the function coefficients to obtain the high-order polynomial function fitting the image similarity distribution curve;
calculating the inflection point value of the high-order polynomial function, and calculating an adaptive threshold for the image according to the inflection point value;
and performing image retrieval according to the calculated adaptive thresholds of the images, and outputting similar-image results and the corresponding images according to the retrieval results.
In one implementation, before the acquiring of an unordered image set to participate in three-dimensional reconstruction and the extracting of a similarity vector for each image according to a pre-trained visual dictionary, the method includes:
acquiring a plurality of preset images;
extracting local feature points of a plurality of preset images, and clustering the extracted local feature points to generate a visual dictionary;
converting the preset images into visual dictionary vectors with the same dimensionality according to the visual dictionary;
and computing distances among the visual dictionary vectors, and obtaining similarity values characterizing the similarity relations of all images according to the distances, so as to obtain a similarity matrix.
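The similarity matrix of the last step can be sketched directly from the dictionary vectors. Assuming L2-normalized histogram vectors and cosine similarity (one of the distance options the text lists), the matrix is symmetric with a unit diagonal:

```python
import numpy as np

def similarity_matrix(vectors):
    """Pairwise cosine similarity of visual-dictionary vectors.
    The result is symmetric with a diagonal of 1 (each image vs itself)."""
    V = np.asarray(vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)  # L2-normalize rows
    return V @ V.T

rng = np.random.default_rng(1)
vecs = np.abs(rng.normal(size=(5, 12)))  # 5 images, 12-word histograms (nonnegative)
S = similarity_matrix(vecs)
print(np.allclose(S, S.T), np.allclose(np.diag(S), 1.0))
```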
In one implementation, the acquiring of an unordered image set to participate in three-dimensional reconstruction and extracting of a similarity vector for each image according to a pre-trained visual dictionary includes:
acquiring the unordered image set to participate in three-dimensional reconstruction and the pre-trained visual dictionary;
extracting the similarity matrix from the pre-trained visual dictionary;
and extracting the similarity vector of each image in the unordered image set according to the similarity matrix.
In one implementation, the substituting of the similarity values of the sorted similarity vector into a high-order polynomial function and resolving of the function coefficients to obtain the high-order polynomial function fitting the image similarity distribution curve includes:
arranging the plurality of similarity values in descending order;
and substituting the similarity values of the sorted similarity vector into the high-order polynomial function and resolving the function coefficients:
y = a·x^n + b·x^2 + c·x + d
wherein y is the similarity value, x is the sorted image sequence number, and a, b, c, and d are the function coefficients;
and obtaining the high-order polynomial function fitting the image similarity distribution curve according to the resolved function coefficients.
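As an illustration of the fitting step, a cubic can be fitted to a made-up sorted similarity curve with numpy. The degree n = 3 and the similarity values are assumptions for the sketch; the patent leaves the power configurable.

```python
import numpy as np

# Sorted (descending) similarity values for one query image, plotted
# against rank x; fit y = a*x**3 + b*x**2 + c*x + d to the curve.
sims = np.array([1.00, 0.91, 0.82, 0.77, 0.52, 0.30, 0.27, 0.25, 0.16])
ranks = np.arange(1, len(sims) + 1)

a, b, c, d = np.polyfit(ranks, sims, deg=3)   # resolve the function coefficients
fitted = np.polyval([a, b, c, d], ranks)
print(np.round(fitted, 2))
```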
In one implementation, the calculating of the inflection point value of the high-order polynomial function and calculating of the adaptive threshold of the image according to the inflection point value includes:
taking the derivative of the high-order polynomial function, setting the derivative to zero, calculating the inflection point value, and substituting the inflection point value into the high-order polynomial function to calculate the adaptive threshold of the image.
In one implementation, the performing of image retrieval according to the calculated adaptive threshold of the image and outputting of similar-image results and corresponding images according to the retrieval results includes:
sequentially judging, in descending order, whether the similarity value of each similarity vector is larger than the corresponding adaptive threshold;
selecting all similarity vectors larger than the adaptive threshold, and outputting matched images for the selected similarity vectors;
and selecting all similarity vectors less than or equal to the adaptive threshold, and excluding them.
In one implementation, the performing of image retrieval according to the calculated adaptive threshold of the image and outputting of similar-image results and corresponding images according to the retrieval results further includes:
counting, at the same sequence numbers of the similarity matrix, the number of similarity values larger than the adaptive threshold to obtain the number of common partners, until all similarity vectors are traversed;
and setting a specified value according to the number of common partners; if the number of common partners is larger than or equal to the specified value, the two images are considered a similar image pair sharing corresponding image points.
In one implementation, the method for adaptively retrieving a matchable image further includes:
and if the number of common partners is smaller than the specified value, the image pair is regarded as a retrieval gross error and removed.
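A sketch of the common-partner count on a toy similarity matrix. The function name, the stand-in adaptive thresholds, and the minimum-partner value are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def common_partner_count(S, i, j, thresholds):
    """Number of images k that are similar (above their adaptive threshold)
    to BOTH image i and image j in similarity matrix S."""
    similar_to_i = S[i] > thresholds[i]
    similar_to_j = S[j] > thresholds[j]
    both = similar_to_i & similar_to_j
    both[[i, j]] = False              # do not count the pair itself
    return int(both.sum())

# toy 5-image similarity matrix (symmetric, diagonal 1)
S = np.array([
    [1.00, 0.82, 0.91, 0.16, 0.25],
    [0.82, 1.00, 0.77, 0.12, 0.20],
    [0.91, 0.77, 1.00, 0.18, 0.22],
    [0.16, 0.12, 0.18, 1.00, 0.85],
    [0.25, 0.20, 0.22, 0.85, 1.00],
])
thresholds = np.full(5, 0.5)          # stand-in adaptive thresholds
MIN_PARTNERS = 1                      # the "specified value" of the text

n = common_partner_count(S, 0, 1, thresholds)
print(n, "keep pair" if n >= MIN_PARTNERS else "reject as gross error")
```

Here images 0 and 1 share image 2 as a common partner, so the pair is kept; a pair such as (0, 3) has no common partner and would be rejected.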
In a second aspect, the present invention provides an adaptive retrieval device for matchable images, comprising: a memory and a processor; the memory stores a matchable-image adaptive retrieval program which, when executed by the processor, is configured to implement the operations of the adaptive retrieval method for matchable images described above.
In a third aspect, the present invention provides a storage medium, which is a computer-readable storage medium storing a matchable-image adaptive retrieval program; when executed by a processor, the program implements the operations of the adaptive retrieval method for matchable images described above.
Compared with the prior art, the invention has the beneficial effects that:
1) the method fits the similarity distribution curve of unordered images with a high-order polynomial function, obtains the inflection point where the similarity value changes most sharply through derivation of the function, and obtains an adaptive threshold for each image as the judgment criterion of the similarity measurement, thereby avoiding the problem of redundant or too few retrieval results caused by inaccurate thresholds in current similarity measurement methods;
2) the method proposes a gross-error rejection strategy based on the idea that similar image pairs should share many common similar images: using the similarity matrix and the adaptive threshold values, the number of common partners between images is counted and used as the screening criterion for retrieval gross errors, thereby avoiding erroneous retrieval results caused by locally similar visual content.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flow chart of a method for adaptive searching of matchable images according to the present invention;
FIG. 2 is a flow chart for establishing a similarity matrix according to the present invention;
FIG. 3 is a flow chart of gross error rejection after similar image results are obtained;
FIG. 4 is a detailed flowchart of the adaptive searching method for matching images shown in FIG. 1;
FIG. 5 is a detailed flowchart of the gross error rejection after obtaining similar image results shown in FIG. 3;
fig. 6 is a functional schematic diagram of the adaptive image retrieval device according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions, and effects of the present application clearer and clearer, the present application will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Exemplary method
Existing similarity measurement methods mainly adopt a fixed-threshold method: an empirical threshold obtained through repeated experiments (keep images whose similarity exceeds some value n, or keep the top-n images by similarity value) is used as the criterion for judging image similarity. This easily yields redundant or too few retrieval results, causing unnecessary matching between unrelated images or insufficient matching of image features, which in turn degrades the subsequent three-dimensional reconstruction.
Meanwhile, existing similarity measurement also includes a mean-threshold method: the mean threshold of a retrieved image is computed as a linear function of the mean and standard deviation of the similarity values between that image and the remaining images. The mean threshold has a certain adaptivity and, compared with the fixed-threshold method, can suppress redundant or too few retrieval results to some extent.
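The mean-threshold baseline can be sketched as follows. The weight k on the standard deviation is an assumed parameter, since the text only says the threshold is a linear function of the mean and standard deviation:

```python
import numpy as np

def mean_threshold(sim_row, self_index, k=1.0):
    """Mean-threshold baseline: mean + k*std of one image's similarity
    values to all remaining images (its self-similarity of 1.0 is dropped)."""
    others = np.delete(sim_row, self_index)
    return float(others.mean() + k * others.std())

row = np.array([1.00, 0.82, 0.91, 0.16, 0.25, 0.77])  # one row of the matrix
print(round(mean_threshold(row, 0), 3))
```

Because the threshold moves with each row's own statistics, it adapts per image, which is the adaptivity the text attributes to this baseline.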
In view of the above problems, this embodiment provides an adaptive retrieval method for matchable images, which extracts a similarity vector for each image through a pre-trained visual dictionary; substitutes the similarity values of the similarity vector into a high-order polynomial function and resolves the function coefficients to obtain the high-order polynomial function fitting the image similarity distribution curve; calculates the inflection point value of the high-order polynomial function and the adaptive threshold of the image from it; and performs image retrieval according to the calculated adaptive threshold, outputting similar-image results and the corresponding images according to the retrieval results, thereby avoiding the problem of redundant or too few retrieval results caused by inaccurate thresholds in current similarity measurement methods.
As shown in fig. 1, an embodiment of the present invention provides a method for adaptive searching of a matchable image, where the method for adaptive searching of a matchable image includes the following steps:
step S100: acquiring a disordered image set to be participated in three-dimensional reconstruction, and extracting a similarity vector of each image according to a pre-trained visual dictionary.
The image similarity value curve is fitted by a high-order polynomial function, and the inflection point value of the fitted function, obtained by derivation, serves as the judgment threshold for image similarity; this yields an adaptive image-similarity measurement method based on the fitted function and improves the precision of matchable-image retrieval results. For the problem of retrieval gross errors caused by locally similar visual content, the idea that a similar image pair participating in three-dimensional reconstruction should share many common similar images is proposed, and retrieval gross errors caused by similar local feature points are eliminated through the numerical relations of the similarity matrix, further improving the precision of the image retrieval results.
Before the adaptive retrieval method for matchable images is carried out, a visual dictionary is obtained by pre-training. The visual dictionary is a combination of many visual words; visual dictionary vectors are extracted with the dictionary, and the similarity values of all images are obtained in order to build the similarity matrix.
Before implementing step S100, the similarity matrix needs to be established, specifically, as shown in fig. 2, the step of establishing the similarity matrix includes:
step S001: a plurality of preset images are acquired.
The preset images are previously captured images; they are prepared so that local feature points can be extracted from them in the next step.
Step S002: and extracting a plurality of local feature points of the preset image, and clustering the extracted local feature points to generate a visual dictionary.
Extracting local feature points from an image generally includes two steps: local feature point detection and local feature description. Local feature point detection uses a suitable mathematical operator to detect the positions or regions of gradient-distribution extrema in the image; the region corresponding to an extremum contains rich visual information, and the corresponding feature vector has strong discriminative and descriptive power. Currently, the main local feature point detection operators include the SIFT operator, SURF operator, ORB operator, MSER operator, Harris-Affine operator, Hessian-Affine operator, and so on. After the local regions corresponding to the local feature points are determined, effective local feature descriptions, generally high-dimensional vectors, need to be generated.
Local feature points represent the low-level visual characteristics of an image and are widely used in image content analysis. However, most local image features live in a high-dimensional space — for example, the SIFT descriptor is 128-dimensional and the SURF descriptor is 64-dimensional — which is inconvenient for storage and subsequent computation. In addition, high-dimensional vectors usually face the "curse of dimensionality" (sparseness, noise), so algorithms that perform well in low-dimensional spaces degrade sharply in high-dimensional ones. Therefore, the high-dimensional local features of the image need to be mapped to a low-dimensional space for storage, indexing, and computation: a large number of local feature points are mapped to the low-dimensional space to obtain corresponding codes, these codes are called visual words, and all the visual words form the visual dictionary.
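The clustering that produces visual words can be sketched with a minimal Lloyd's k-means over synthetic descriptors. The 8-D arrays stand in for real 128-D SIFT features, and in practice a library implementation (e.g. an OpenCV bag-of-words trainer) would normally be used:

```python
import numpy as np

def lloyd_kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means: cluster high-dimensional local features;
    the resulting centroids act as the visual words of the dictionary."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)                 # assign to nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)  # recenter
    return centers, labels

rng = np.random.default_rng(2)
# stand-ins for local descriptors drawn from two well-separated groups
feats = np.vstack([rng.normal(0, 0.1, (50, 8)), rng.normal(5, 0.1, (50, 8))])
words, labels = lloyd_kmeans(feats, k=2)
print(len(words))
```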
Step S003: and converting the preset images into a plurality of visual dictionary vectors with the same dimensionality according to the visual dictionary.
Specifically, the preset images are converted into visual dictionary vectors of the same dimension using OpenCV and some Boost functions in a C++ implementation.
Step S004: computing distances between the visual dictionary vectors — for example, as shown in the table below, the distances between the row vectors S1, S2, S3, S4, …, Si, …, Sn−1, Sn and the column vectors S1, S2, S3, S4, …, Si, …, Sn−1, Sn — and obtaining similarity values representing the image similarity relations according to the distances, so as to obtain the similarity matrix.
Steps S001 to S004 constitute the dictionary-training stage of the technical route.
The similarity matrix is specifically shown in the following table:
        S1     S2     S3     S4     …     Sn-1   Sn
S1     1.00   0.82   …      …      …     …      …
S2     0.82   1.00   0.91   0.16   …     0.25   0.77
S3     …      0.91   1.00   …      …     …      …
S4     …      0.16   …      1.00   …     …      …
…      …      …      …      …      …     …      …
Sn-1   …      0.25   …      …      …     1.00   …
Sn     …      0.77   …      …      …     …      1.00
As the table shows, similarity values range from 0 to 1, and it should be noted that the higher the similarity value, the more similar the two images; when the row number equals the column number, the value is the similarity of an image with itself, i.e., 1. The similarity matrix is therefore a symmetric matrix with a diagonal of 1. Similarity values are filled in at the positions given by the row and column numbers until the similarity values of all images have been traversed.
Specifically, in an implementation manner of this embodiment, the step S100 includes the following steps:
step S101: and acquiring a disordered image set to be participated in three-dimensional reconstruction and the pre-trained visual dictionary.
The unordered image set is all the preset images in the past, and visual words are extracted from the preset images so as to be combined into a visual dictionary.
Step S102: and extracting a similarity matrix in the pre-trained visual dictionary.
The similarity matrix is specifically shown in the table above, and the similarity vector of each image can be read from the matrix.
Step S103: and extracting the similarity vector of each image in the unordered image set according to the similarity matrix.
Extracting the similarity vector yields the corresponding similarity values.
As shown in fig. 1, an embodiment of the present invention provides a method for adaptive searching of a matchable image, where the method further includes the following steps:
step S200: and sorting according to the similarity value of the similarity vector to obtain a sorted similarity vector.
In an implementation of the embodiment of the present invention, the similarity values of the similarity vector are arranged in descending order along a certain row or column of the matrix. Since a higher similarity value indicates stronger similarity between images, retrieval of a target image starts from the most similar image and proceeds to the least similar. All images are retrieved in this manner until the end.
Specifically, taking the second column of the similarity matrix as an example, the similarity values are: 1.00, 0.82, 0.91, 0.16, …, 0.25, 0.77; sorted from large to small they become: 1.00, 0.91, 0.82, 0.77, …, 0.25, 0.16, giving the descending similarity vector ordering.
This embodiment arranges the values in descending order. Because a higher similarity value indicates stronger similarity between images, retrieval of the target image starts from the most similar image; this avoids the problem of redundant or too few retrieval results caused by inaccurate thresholds in current similarity measurement methods.
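The descending sort of one matrix column can be sketched directly with the example values above; `np.argsort` keeps the original image indices, so the retrieved image IDs survive the reordering:

```python
import numpy as np

col = np.array([1.00, 0.82, 0.91, 0.16, 0.25, 0.77])  # one column of the matrix
order = np.argsort(col)[::-1]      # image indices from most to least similar
sorted_sims = col[order]
print(sorted_sims)                 # descending similarity vector
```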
As shown in fig. 1, an embodiment of the present invention provides a method for adaptive searching of a matchable image, where the method further includes the following steps:
step S300: and substituting the similarity value of the sorted similarity vector into a high-order polynomial function for calculation, and analyzing a function coefficient to obtain the high-order polynomial function fitting the image similarity distribution curve.
Specifically, in an implementation manner of this embodiment, the step S300 includes the following steps:
step S301: substituting the similarity value of the sorted similarity vector into a high-order polynomial function for calculation, and analyzing a function coefficient:
y = a·x^n + b·x^2 + c·x + d
wherein y is the similarity value, x is the sorted image sequence number, a, b, c, and d are the function coefficients, and n denotes the power; the value range of n can be set as needed, which is not limited in this embodiment.
As shown in fig. 1, an embodiment of the present invention provides a method for adaptive searching of a matchable image, where the method further includes the following steps:
step S400: and calculating the inflection point value of the high-order polynomial function, and calculating the self-adaptive threshold of the image according to the inflection point value.
Specifically, in an implementation manner of this embodiment, the step S400 includes the following steps:
step S401: the step of calculating the inflection point value of the high-order polynomial function and calculating the self-adaptive threshold of the image according to the inflection point value comprises the following steps:
and carrying out derivation according to the high-order polynomial function, namely:
y′ = n·a·x^(n−1) + 2·b·x + c
Setting the derivative to zero and solving yields an x value, which is the inflection point value of the high-order polynomial function — the point where the similarity value changes most sharply. Substituting this inflection point value back into the high-order polynomial function yields a y value, which is the adaptive threshold of the image.
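This derivative-and-root computation can be sketched with numpy's polynomial helpers. The cubic degree, the sample similarity values, and the choice to keep only real roots inside the observed rank range are illustrative assumptions (the text calls the derivative-zero point the inflection point, and that convention is followed here):

```python
import numpy as np

sims = np.array([1.00, 0.91, 0.82, 0.77, 0.52, 0.30, 0.27, 0.25, 0.16])
ranks = np.arange(1, len(sims) + 1)
coeffs = np.polyfit(ranks, sims, 3)       # cubic fit of the sorted curve

deriv = np.polyder(coeffs)                # y' = 3a*x^2 + 2b*x + c
roots = np.roots(deriv)                   # solve y' = 0
# keep real roots inside the observed rank range (an assumed selection rule)
real = roots[np.isreal(roots)].real
x_knee = real[(real >= ranks[0]) & (real <= ranks[-1])]
threshold = np.polyval(coeffs, x_knee[0]) # adaptive threshold: y at that point
print(round(float(threshold), 3))
```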
As shown in fig. 1, an embodiment of the present invention provides a method for adaptive searching of a matchable image, where the method further includes the following steps:
step S500: and searching the images according to the self-adaptive threshold value of the calculated images, and outputting similar image results and corresponding images according to the searching results.
Specifically, in an implementation manner of this embodiment, the step S500 includes the following steps:
step S501: and sequentially judging whether the similarity value of each similarity vector is larger than the corresponding adaptive threshold value or not according to the sequence from large to small.
Because the similarity value is higher, the similarity of the two images is stronger, and the similarity values are sequenced from large to small, the most similar images can be searched by the staff in the first time, and meanwhile, the screening mechanism is optimized.
Step S502: selecting all similarity vectors whose similarity values are larger than the adaptive threshold, and outputting matched images for the selected similarity vectors.
When the similarity value is larger than the adaptive threshold, the image pair is ensured to have high similarity.
Step S503: selecting all similarity vectors whose similarity values are less than or equal to the adaptive threshold, and excluding the selected similarity vectors.
When the similarity value is less than or equal to the adaptive threshold, the image pair is correspondingly less similar.
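Steps S501 to S503 amount to a descending-order filter. A minimal sketch, assuming each similarity vector is a list of (image id, similarity value) pairs; the names and sample values are hypothetical:

```python
def retrieve_matches(similarity_vector, threshold):
    """Keep entries whose similarity value exceeds the image's
    adaptive threshold; discard the rest (steps S501-S503)."""
    ranked = sorted(similarity_vector, key=lambda p: p[1], reverse=True)
    return [(img, s) for img, s in ranked if s > threshold]

# Hypothetical similarity vector: (image id, similarity value)
vec = [("img2", 0.91), ("img5", 0.40), ("img3", 0.82), ("img7", 0.73)]
print(retrieve_matches(vec, 0.77))  # [('img2', 0.91), ('img3', 0.82)]
```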
Step S100 to step S500 constitute the image retrieval stage of the technical route of the present invention.
In the present invention, a high-order polynomial function fitting the image similarity distribution curve is obtained through step S300; through step S400, this function is differentiated to obtain the inflection point where the similarity values change most severely and the adaptive threshold of each image, which together serve as the judgment basis of the similarity measure, thereby solving the problem of redundant or too few retrieval results caused by inaccurate thresholds in current similarity measure methods.
In the practical application process of the steps S100 to S500 of the present embodiment, as shown in fig. 4, the method includes the following steps:
step S011: acquiring an unordered image set to participate in three-dimensional reconstruction, and extracting a similarity vector of each image according to a pre-trained visual dictionary;
step S012: sorting according to the similarity value of the similarity vectors to obtain sorted similarity vectors;
step S013: substituting the similarity values of the sorted similarity vectors into a high-order polynomial function for calculation, and analyzing a function coefficient to obtain the high-order polynomial function fitting the image similarity distribution curve;
step S014: calculating an inflection point value of the high-order polynomial function, and calculating an adaptive threshold value of the image according to the inflection point value;
step S015: repeating the step S011 to the step S014 until the self-adaptive threshold values of all the images are obtained;
step S016: judging whether the similarity value between the image pairs is larger than a self-adaptive threshold value or not;
step S017: if the similarity value between the image pairs is larger than the adaptive threshold, the similarity degree of the image pairs is higher, and the images with the similarity value larger than the adaptive threshold are screened out, so that an image retrieval result is obtained;
step S018: if the similarity value between the image pairs is less than or equal to the self-adaptive threshold, the similarity degree of the image pairs is low, and the images with the similarity value less than or equal to the self-adaptive threshold are screened out and excluded.
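The loop of steps S011 to S018 can be sketched end to end in Python. This is a minimal illustration, not the patented implementation: the similarity matrix is hypothetical, numpy.polyfit stands in for the coefficient-analysis step, and the median fallback is an assumption for the case where the derivative has no real root in the valid range:

```python
import numpy as np

def retrieval_with_adaptive_thresholds(sim_matrix):
    """For each image: sort its similarity vector (S012), fit a cubic to
    the sorted values (S013), take y at a root of the derivative as the
    adaptive threshold (S014), then keep image pairs whose similarity
    exceeds that threshold (S016-S018)."""
    n = len(sim_matrix)
    thresholds, pairs = [], []
    for i in range(n):
        s = np.sort(sim_matrix[i])[::-1]        # step S012: descending sort
        x = np.arange(1, n + 1)
        a, b, c, d = np.polyfit(x, s, 3)        # step S013: y = ax^3+bx^2+cx+d
        roots = np.roots([3 * a, 2 * b, c])     # step S014: solve y' = 0
        real = [r.real for r in roots
                if abs(r.imag) < 1e-9 and 1 <= r.real <= n]
        if real:
            t = float(np.polyval([a, b, c, d], real[0]))
        else:
            t = float(np.median(s))             # fallback: an assumption
        thresholds.append(t)
        pairs += [(i, j) for j in range(n)
                  if j != i and sim_matrix[i][j] > t]
    return thresholds, pairs
```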
In multi-view image-based three-dimensional reconstruction, high-precision local feature point operators such as SIFT or SURF are adopted to ensure that the extracted image feature points remain robust to rotation, scale, brightness, affine transformation, noise, and the like. Despite these advantages, describing an image by its local information means that, during image retrieval, image pairs whose local visual content is similar but which actually share no same name image points are easily retrieved; such results may be called retrieval gross errors. Retrieval gross errors cause erroneous image pairs to participate in feature point matching in the subsequent image feature matching process, which wastes computing resources and degrades the precision of the subsequent point clouds and models. Therefore, if the retrieval gross errors caused by similar local visual content can be eliminated, the precision of the matchable image retrieval results can be further improved, erroneous image feature matching can be reduced, and the overall effect of three-dimensional reconstruction can be improved.
To avoid this situation when collecting data in the field, other objects around the building (such as lawns, trees, street lamps and vehicles) can be photographed to increase the distinctiveness of the images. The essential idea is that image pairs that truly share same name image points will also share this additional visual content, whereas gross-error image pairs will not. Therefore, based on this idea of adding other visual content during field collection to avoid matching errors caused by similar local visual content, this embodiment further proposes that, if two images have more common similar images, the retrieval gross errors caused by similar local visual content can be eliminated by counting the number of such common similar images.
For the problem of image retrieval gross errors, this embodiment further provides gross error elimination on the basis of the matchable image adaptive retrieval method, so as to improve the precision of the matchable image retrieval results and avoid wasting computing resources.
As shown in fig. 3, in another implementation manner of this embodiment, the method for adaptively retrieving a matchable image further includes the following steps:
step S600: counting, at the same sequence numbers of the similarity matrix, the number of similarity values larger than the adaptive threshold to obtain the number of common partners, until all similarity vectors are traversed; and setting a specified value according to the number of common partners, wherein if the number of common partners is larger than or equal to the specified value, the two images are regarded as a similar image pair having the same name image points.
The more common partners there are, the more similar feature points the image pair shares, and the stronger the similarity of the image pair.
Of course, it should be understood that the specified value is an integer of at least 3.
Specifically, an example based on the similarity matrix follows:
for the similar image pair i and j with similarity vectors Si and Sj respectively, the number of sequence numbers at which both Si and Sj have similarity values larger than the adaptive threshold is counted.
Accordingly, for example, the similarity values of the 3×3 similarity matrix S3 are counted: 1.00, 0.82, 0.91; 0.82, 1.00, 0.73; 0.91, 0.73, 1.00. If the adaptive threshold of the image is 0.77, then 7 similarity values are greater than the adaptive threshold, so the number of common partners is 7, and the similarity vectors of all images are traversed in this way. Further, if the specified value is set to 3, the number of common partners is greater than the specified value, and the two images are regarded as a similar image pair having the same name image points.
Step S700: if the number of common partners is smaller than the specified value, the image pair is regarded as a retrieval gross error and is removed.
For example, taking the 4×4 similarity matrix S4, if the adaptive threshold of the image is calculated to be 0.88 and 5 similarity values (such as 1.00, 0.91 and 1.00) are greater than the adaptive threshold, the number of common partners is 5. Further, if the specified value is set to 6, the number of common partners is smaller than the specified value, and the image pair is regarded as a retrieval gross error and is removed.
Of course, it should be understood that the greater the number of common partners between two images, the higher their degree of similarity, in which case the two images are regarded as a similar image pair having the same name image points; otherwise the pair is regarded as a retrieval gross error and is removed.
Step S600 to step S700 constitute the gross error elimination stage of the technical route of the invention.
In the present invention, through steps S600 to S700, the number of common similar images between images is counted based on the similarity matrix and the adaptive threshold and used as the judgment basis for retrieval gross errors, thereby avoiding gross-error retrieval results caused by similar local visual content.
In the practical application process of the steps S600 to S700 of the present embodiment, as shown in fig. 5, the method includes the following steps:
step S019: obtaining a similar image pair and an image similarity matrix of image retrieval and an adaptive threshold;
step S020: for the similar image pairs i and j, the similarity vectors are Si and Sj respectively;
step S021: counting the number of sequence numbers at which the similarity values of both vectors Si and Sj are larger than the adaptive threshold, namely the number of common partners;
step S022: judging whether the number of the common partners is larger than or equal to a specified value or not;
step S023: if the number of common partners is larger than or equal to the specified value, the two images share more common points and are regarded as a similar image pair having the same name image points, thereby obtaining the final retrieval result after gross errors are removed;
step S024: if the number of common partners is smaller than the specified value, the two images share fewer common points and the pair is regarded as a retrieval gross error; images whose number of common partners is smaller than the specified value are eliminated.
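Steps S019 to S024 can be sketched as follows. The function name, the candidate pair list, and the per-image threshold list are assumptions; the pairwise common-partner count follows step S021:

```python
def eliminate_gross_errors(candidate_pairs, sim_matrix, thresholds,
                           specified_value=3):
    """Keep only image pairs (i, j) whose number of common partners --
    sequence numbers k where both Si[k] and Sj[k] exceed the respective
    adaptive thresholds -- reaches the specified value (steps S019-S024)."""
    kept = []
    for i, j in candidate_pairs:
        si, sj = sim_matrix[i], sim_matrix[j]
        t_i, t_j = thresholds[i], thresholds[j]
        partners = sum(
            1 for k in range(len(si)) if si[k] > t_i and sj[k] > t_j
        )
        if partners >= specified_value:
            kept.append((i, j))   # step S023: similar pair, keep
        # step S024: otherwise discard as a retrieval gross error
    return kept
```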
In the embodiment of the invention, through the three stages of dictionary training, image retrieval and gross error elimination, the technical problems in the prior art of redundant or too few retrieval results and of retrieval gross errors caused by similar local visual content are solved, the precision of the matchable image retrieval results is improved, and the overall efficiency of three-dimensional reconstruction is further improved.
Exemplary device
Based on the above embodiments, the present invention further provides a self-adaptive searching device for matchable images, and a schematic block diagram thereof can be shown in fig. 6.
The matchable image adaptive retrieval device comprises: a processor, a memory, an interface, a display screen and a communication module which are connected through a system bus. The processor of the matchable image adaptive retrieval device provides computing and control capabilities; the memory of the device comprises a storage medium and an internal memory, wherein the storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program stored in the storage medium; the interface is used for connecting external devices, such as mobile terminals and computers; the display screen is used for displaying corresponding combined navigation information based on deep learning; the communication module is used for communicating with a cloud server or a mobile terminal.
The computer program is used for realizing the self-adaptive searching method of the matchable image when being executed by a processor.
It will be understood by those skilled in the art that the schematic block diagram shown in fig. 6 is only a block diagram of part of the structure related to the solution of the present invention and does not limit the matchable image adaptive retrieval device to which the solution is applied; a particular matchable image adaptive retrieval device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, an adaptive image retrieval device is provided, which includes: a memory and a processor; the memory stores a matchable image adaptive retrieval program, which when executed by the processor is configured to implement the operations of the matchable image adaptive retrieval method as described above.
In one embodiment, a storage medium is provided, which is a computer readable storage medium, and the storage medium stores a matchable image adaptive retrieval program, and the matchable image adaptive retrieval program is used for implementing the operation of the matchable image adaptive retrieval method as described above when being executed by a processor.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a non-volatile storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory.
The invention discloses a matchable image adaptive retrieval method and device and a storage medium, wherein the method comprises: acquiring an unordered image set to participate in three-dimensional reconstruction, and extracting a similarity vector of each image according to a pre-trained visual dictionary; sorting the similarity values of the similarity vectors to obtain sorted similarity vectors; substituting the similarity values of the sorted similarity vectors into a high-order polynomial function and analyzing the function coefficients to obtain a high-order polynomial function fitting the image similarity distribution curve; calculating the inflection point value of the high-order polynomial function, and calculating the adaptive threshold of the image according to the inflection point value; and performing image retrieval according to the calculated adaptive threshold of each image, and outputting similar image results and corresponding images according to the retrieval results. The invention can retrieve image pairs having the same name image points from unordered images, reduces unnecessary matching between unrelated image pairs, and improves the overall efficiency of three-dimensional reconstruction.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A self-adaptive retrieval method for a matchable image is characterized by comprising the following steps:
acquiring an unordered image set to participate in three-dimensional reconstruction, and extracting a similarity vector of each image according to a pre-trained visual dictionary;
sorting according to the similarity value of the similarity vector to obtain a sorted similarity vector;
substituting the similarity value of the sorted similarity vector into a high-order polynomial function for calculation, and analyzing a function coefficient to obtain the high-order polynomial function fitting the image similarity distribution curve;
calculating an inflection point value of the high-order polynomial function, and calculating a self-adaptive threshold value of the image according to the inflection point value;
and searching the images according to the self-adaptive threshold value of the calculated images, and outputting similar image results and corresponding images according to the searching results.
2. The adaptive retrieval method for matchable images according to claim 1, wherein the acquiring an unordered image set to participate in three-dimensional reconstruction and extracting the similarity vector of each image according to a pre-trained visual dictionary comprises:
acquiring a plurality of preset images;
extracting local feature points of a plurality of preset images, and clustering the extracted local feature points to generate a visual dictionary;
converting the preset images into visual dictionary vectors with the same dimensionality according to the visual dictionary;
and setting the distances among the visual dictionary vectors, and obtaining similarity values of all the representation image similarity relations according to the distances to obtain a similarity matrix.
3. The adaptive retrieval method for matchable images according to claim 1, wherein the acquiring an unordered image set to participate in three-dimensional reconstruction and extracting a similarity vector of each image according to a pre-trained visual dictionary comprises:
acquiring an unordered image set to participate in three-dimensional reconstruction and the pre-trained visual dictionary;
extracting a similarity matrix in the pre-trained visual dictionary;
and extracting the similarity vector of each image in the unordered image set according to the similarity matrix.
4. The adaptive retrieval method for matchable images according to claim 1, wherein the step of substituting the similarity values of the sorted similarity vectors into a high-order polynomial function to calculate, and analyzing the function coefficients to obtain the high-order polynomial function fitting the image similarity distribution curve comprises:
arranging a plurality of similarity values in a descending order;
substituting the similarity values of the sorted similarity vectors into a high-order polynomial function for calculation, and analyzing the function coefficients:

y = ax³ + bx² + cx + d

wherein y is the similarity value, x is the sorted image sequence number, and a, b, c and d are the coefficients of the polynomial;
and obtaining a high-order polynomial function of the image similarity distribution curve according to the analyzed function coefficient.
5. The adaptive image retrieval method of claim 1, wherein the calculating a knee value of a high degree polynomial function and calculating an adaptive threshold of the image based on the knee value comprises:
carrying out derivation on the high-order polynomial function, setting the derivative function to zero to calculate an inflection point value, and substituting the inflection point value into the high-order polynomial function to calculate the adaptive threshold of the image.
6. The adaptive image retrieval method of claim 1, wherein the image retrieval according to the adaptive threshold of the calculated image and outputting the similar image result and the corresponding image according to the retrieval result comprises:
sequentially judging whether the similarity value of each similarity vector is larger than the corresponding self-adaptive threshold value or not according to the sequence from large to small;
selecting all similarity vectors which are larger than the self-adaptive threshold value, and outputting matched images to the selected similarity vectors;
all similarity vectors less than or equal to the adaptive threshold are selected and the selected similarity vectors are excluded.
7. The adaptive retrieval method for matchable images according to claim 1, wherein after the image retrieval is performed according to the calculated adaptive threshold of the image and the similar image result and the corresponding image are output according to the retrieval result, the method further comprises:
counting, at the same sequence numbers of the similarity matrix, the number of similarity values larger than the adaptive threshold to obtain the number of common partners, until all similarity vectors are traversed;
and setting a specified value according to the number of common partners, wherein if the number of common partners is larger than or equal to the specified value, the two images are regarded as a similar image pair having the same name image points.
8. The adaptive retrieval method for matchable images according to claim 7, wherein the adaptive retrieval method for matchable images further comprises:
and if the number of common partners is smaller than the specified value, regarding the image pair as a retrieval gross error and removing it.
9. An adaptive searching device for matchable images, comprising: a memory and a processor; the memory stores a matchable image adaptive retrieval program, which when executed by the processor is configured to implement the operation of the matchable image adaptive retrieval method according to any of claims 1-8.
10. A storage medium, which is a computer-readable storage medium, and which stores a matchable image adaptive search program, when being executed by a processor, for implementing the operation of the matchable image adaptive search method according to any one of claims 1-8.
CN202210645024.7A 2022-06-09 2022-06-09 Self-adaptive retrieval method and device capable of matching images and storage medium Active CN114722226B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210645024.7A CN114722226B (en) 2022-06-09 2022-06-09 Self-adaptive retrieval method and device capable of matching images and storage medium


Publications (2)

Publication Number Publication Date
CN114722226A true CN114722226A (en) 2022-07-08
CN114722226B CN114722226B (en) 2022-11-15

Family

ID=82233081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210645024.7A Active CN114722226B (en) 2022-06-09 2022-06-09 Self-adaptive retrieval method and device capable of matching images and storage medium

Country Status (1)

Country Link
CN (1) CN114722226B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902190A (en) * 2019-03-04 2019-06-18 京东方科技集团股份有限公司 Image encrypting algorithm optimization method, search method, device, system and medium
US20210044787A1 (en) * 2018-05-30 2021-02-11 Panasonic Intellectual Property Corporation Of America Three-dimensional reconstruction method, three-dimensional reconstruction device, and computer
CN112926695A (en) * 2021-04-16 2021-06-08 动员(北京)人工智能技术研究院有限公司 Image recognition method and system based on template matching

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210044787A1 (en) * 2018-05-30 2021-02-11 Panasonic Intellectual Property Corporation Of America Three-dimensional reconstruction method, three-dimensional reconstruction device, and computer
CN109902190A (en) * 2019-03-04 2019-06-18 京东方科技集团股份有限公司 Image encrypting algorithm optimization method, search method, device, system and medium
CN112926695A (en) * 2021-04-16 2021-06-08 动员(北京)人工智能技术研究院有限公司 Image recognition method and system based on template matching

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
REN Chaofeng et al., "UAV image matching pair extraction method considering geospatial information", Remote Sensing for Natural Resources *
LIN Jiaxiang et al., "Association rule mining with adaptive support and confidence", Computer Engineering and Design *
WANG Xueping et al., "Adaptive association rule mining algorithm based on coefficient of determination", CAAI Transactions on Intelligent Systems *

Also Published As

Publication number Publication date
CN114722226B (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN109784223B (en) Multi-temporal remote sensing image matching method and system based on convolutional neural network
CN113297975A (en) Method and device for identifying table structure, storage medium and electronic equipment
CN110458175B (en) Unmanned aerial vehicle image matching pair selection method and system based on vocabulary tree retrieval
CN107832335B (en) Image retrieval method based on context depth semantic information
CN104199842A (en) Similar image retrieval method based on local feature neighborhood information
CN109919084B (en) Pedestrian re-identification method based on depth multi-index hash
CN110543581A (en) Multi-view three-dimensional model retrieval method based on non-local graph convolution network
CN113255714A (en) Image clustering method and device, electronic equipment and computer readable storage medium
CN110751027B (en) Pedestrian re-identification method based on deep multi-instance learning
CN114792372A (en) Three-dimensional point cloud semantic segmentation method and system based on multi-head two-stage attention
CN111027140A (en) Airplane standard part model rapid reconstruction method based on multi-view point cloud data
CN110083731B (en) Image retrieval method, device, computer equipment and storage medium
Yan et al. Geometrically based linear iterative clustering for quantitative feature correspondence
CN113157962B (en) Image retrieval method, electronic device, and storage medium
CN113936214A (en) Karst wetland vegetation community classification method based on fusion of aerospace remote sensing images
CN112734818A (en) Multi-source high-resolution remote sensing image automatic registration method based on residual error network and SIFT
CN111241326B (en) Image visual relationship indication positioning method based on attention pyramid graph network
CN116817887B (en) Semantic visual SLAM map construction method, electronic equipment and storage medium
CN114913330B (en) Point cloud component segmentation method and device, electronic equipment and storage medium
CN114722226B (en) Self-adaptive retrieval method and device capable of matching images and storage medium
CN107578069B (en) Image multi-scale automatic labeling method
CN111898618B (en) Method, device and program storage medium for identifying ancient graphic characters
CN114943766A (en) Relocation method, relocation device, electronic equipment and computer-readable storage medium
JP7192990B2 (en) Learning device, retrieval device, learning method, retrieval method, learning program, and retrieval program
CN113160291A (en) Change detection method based on image registration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant