CN116740374A - Repeated texture recognition method and device - Google Patents

Repeated texture recognition method and device

Info

Publication number
CN116740374A
Authority
CN
China
Prior art keywords
image
matching
images
feature points
determining
Prior art date
Legal status
Granted
Application number
CN202211348049.7A
Other languages
Chinese (zh)
Other versions
CN116740374B (en)
Inventor
郭睿 (Guo Rui)
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202410741680.6A (CN118691848A)
Priority to CN202211348049.7A (CN116740374B)
Publication of CN116740374A
Application granted
Publication of CN116740374B
Status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a repeated texture recognition method and device. The two images of an image pair are stitched horizontally, and the slope of the line connecting each pair of matched feature points across the two images is calculated. The proportion of connecting lines whose slope deviates from a slope standard value by a set value or more is then counted; if this proportion is greater than a corresponding preset value, the image pair contains a small-region repeated texture lying outside the main direction of feature matching, that is, the matching relationship of the image pair is determined to be wrong, and the matching relationship of the image pair is deleted. The method therefore needs no analysis of the content of the feature points in the image pair and identifies repeated textures off the main direction of feature matching purely from the slopes of the feature point connecting lines, so it is insensitive to the input image, which improves its robustness.

Description

Repeated texture recognition method and device
Technical Field
The application relates to the technical field of three-dimensional reconstruction, and in particular to a repeated texture recognition method and device.
Background
Three-dimensional reconstruction (3D reconstruction) recovers the three-dimensional structure of an object or scene from a plurality of two-dimensional images, ultimately creating in a computer a virtual representation of the objective world. Three-dimensional reconstruction is applied in many scenarios where a three-dimensional digital model is built from a real scene, for example augmented reality (AR) and the three-dimensional digitization of cultural relics.
The two-dimensional images used for three-dimensional reconstruction may contain objects of identical shape and texture, and the features of such objects can form incorrect matches, making the three-dimensional reconstruction model incorrect. For example, if identical appliances, such as water dispensers with the same shape and texture, are placed at different positions in a space, the dispensers at those different positions are likely to be identified as the same dispenser when the scene is reconstructed, so that the three-dimensional model of the environment is wrong. How to accurately identify and filter out repeated textures during three-dimensional reconstruction is therefore a problem that currently needs to be solved.
Disclosure of Invention
In view of the above, the present application provides a repeated texture recognition method and apparatus to solve the above technical problems. The technical scheme disclosed by the present application is as follows:
In a first aspect, the present application provides a repeated texture recognition method applied to an electronic device, the method comprising: acquiring feature point matching information of at least two images having a matching relationship, the feature point matching information comprising information of the matched feature points contained in the at least two images; stitching the at least two images horizontally, and calculating the slopes of the lines connecting all matching feature points in the at least two images; counting the number of connecting lines in the at least two images meeting a first preset condition, the first preset condition comprising that the deviation between the slope and a slope standard value is greater than or equal to a first preset value; determining that the number of connecting lines meets a second preset condition, the second preset condition comprising that the proportion of connecting lines meeting the first preset condition is greater than or equal to a second preset value; and determining that the at least two images contain a small-region repeated texture. This scheme needs no analysis of the content of the feature points in the image pair and identifies repeated textures lying outside the main direction of feature matching purely from the slopes of the feature point connecting lines, so the method is insensitive to the input image, which improves its robustness.
In a possible implementation manner of the first aspect, before the stitching of the at least two images horizontally, the method further comprises: counting the number of matching feature points contained in either of the at least two images; if the number of matching feature points is greater than or equal to a first threshold, stitching the at least two images horizontally; and if the number of matching feature points is smaller than the first threshold, determining that a small-region repeated texture exists in the at least two images. The presence of a small-region repeated texture can thus be identified preliminarily from the number of matching feature points in the image: if that number is smaller than the first threshold, the image pair is determined to contain a small-region repeated texture and hence to have a wrong matching relationship, which improves the efficiency of repeated texture recognition.
In a possible implementation manner of the first aspect, the process of determining the slope standard value comprises: calculating the median of the slopes of all connecting lines contained in the image pair, and determining the slope median as the slope standard value.
In a possible implementation manner of the first aspect, the method further comprises: deleting the matching relationship between the at least two images containing the small-region repeated texture.
In a possible implementation manner of the first aspect, the method further comprises: after determining that the number of connecting lines meeting the first preset condition in the at least two images does not meet the second preset condition, counting the distribution range of the matching feature points contained in either of the at least two images; and if the distribution range is smaller than or equal to a preset range, determining that the at least two images contain a small-region repeated texture. Repeated textures that cannot be identified from the connecting line slopes can thus be identified further from the distribution of the matching feature points, which improves the accuracy of repeated texture recognition.
In a possible implementation manner of the first aspect, the counting of the distribution range of the matching feature points contained in either of the at least two images comprises: dividing either of the at least two images into a plurality of grids, and counting the number of grids containing matching feature points in that image; merging the grids that contain matching feature points and are adjacent in position into connected regions; and determining the distribution range of the matching feature points based on parameters of the connected regions contained in the image, the parameters comprising at least one of the number and the area of the regions. Counting the distribution of feature points over a grid in this way is simple and efficient.
In a possible implementation manner of the first aspect, the determining of the distribution range of the matching feature points based on parameters of the regions contained in the image comprises: if the number of all connected regions contained in the image is smaller than or equal to a third threshold, determining that the distribution range of the matching feature points is smaller than the preset range; if the number of all connected regions contained in the image is greater than the third threshold, judging whether the total area of all connected regions in the image is smaller than or equal to a fourth threshold; if the total area is smaller than or equal to the fourth threshold, determining that the at least two images contain a small-region repeated texture; and if the total area is greater than the fourth threshold, determining that the matching relationship of the at least two images is correct. Determining the distribution of the matching feature points by counting the number or area of the connected regions contained in the image improves the accuracy of the repeated texture recognition result.
In a possible implementation manner of the first aspect, before the counting of the distribution range of the matching feature points contained in either of the at least two images, the method further comprises: counting the number of matching feature points contained in either of the at least two images; if the number of matching feature points is smaller than or equal to a second threshold, performing the step of counting the distribution range of the matching feature points contained in either of the at least two images; and if the number of matching feature points is greater than the second threshold, determining that the matching relationship of the at least two images is correct. Images without small-region repeated textures are thus identified preliminarily from the number of matching feature points they contain, reducing the number of images that must undergo small-region repeated texture recognition and improving its efficiency.
In a second aspect, the present application further provides a repeated texture recognition method applied to an electronic device, the method comprising: acquiring feature point matching information of at least two images having a matching relationship, the feature point matching information comprising information of the matched feature points contained in the at least two images; counting the distribution range of the matching feature points contained in either of the at least two images; and determining that at least two images whose distribution range is smaller than or equal to a preset range contain a small-region repeated texture. This scheme needs no analysis of image feature information and identifies small-region repeated textures from the matching feature points contained in the image pair, so the method is insensitive to the input image, which improves its robustness.
In a possible implementation manner of the second aspect, the counting of the distribution range of the matching feature points contained in either image of the image pair comprises: dividing either of the at least two images into a plurality of grids, and counting the number of grids containing matching feature points in that image; merging the grids that contain matching feature points and are adjacent in position into connected regions; and determining the distribution range of the matching feature points based on parameters of the connected regions contained in the image, the parameters comprising at least one of the number and the area of the regions.
In a possible implementation manner of the second aspect, the determining of the distribution range of the matching feature points based on parameters of the regions contained in the image comprises: if the number of all connected regions contained in the image is smaller than or equal to a third threshold, determining that the distribution range of the matching feature points is smaller than the preset range; if the number of all connected regions contained in the image is greater than the third threshold, judging whether the total area of all connected regions in the image is smaller than or equal to a fourth threshold; if the total area is smaller than or equal to the fourth threshold, determining that the at least two images contain a small-region repeated texture; and if the total area is greater than the fourth threshold, determining that the matching relationship of the at least two images is correct.
In a possible implementation manner of the second aspect, before the counting of the distribution range of the matching feature points contained in either of the at least two images, the method further comprises: counting the number of matching feature points contained in either of the at least two images; if the number of matching feature points is smaller than or equal to a second threshold, performing the step of counting the distribution range of the matching feature points contained in either of the at least two images; and if the number of matching feature points is greater than the second threshold, determining that the matching relationship of the at least two images is correct.
In a third aspect, the present application further provides an electronic device comprising: one or more processors, a memory, and a touch screen; the memory is used to store program code, and the processor is configured to execute the program code to cause the electronic device to implement the repeated texture recognition method of any one of the first or second aspects.
In a fourth aspect, the present application also provides a computer readable storage medium having instructions stored thereon which, when executed on an electronic device, cause the electronic device to perform the repetitive texture recognition method of any of the first or second aspects.
In a fifth aspect, the present application further provides a computer program product storing executable instructions which, when run on an electronic device, cause the electronic device to implement the repeated texture recognition method according to any one of the first or second aspects.
It should be appreciated that the description of technical features, aspects, benefits or similar language in the present application does not imply that all of the features and advantages may be realized with any single embodiment. Conversely, it should be understood that the description of features or advantages is intended to include, in at least one embodiment, the particular features, aspects, or advantages. Therefore, the description of technical features, technical solutions or advantageous effects in this specification does not necessarily refer to the same embodiment. Furthermore, the technical features, technical solutions and advantageous effects described in the present embodiment may also be combined in any appropriate manner. Those of skill in the art will appreciate that an embodiment may be implemented without one or more particular features, aspects, or benefits of a particular embodiment. In other embodiments, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for identifying repetitive textures provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an image pair provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of another image pair provided by an embodiment of the present application;
FIG. 4 is a flowchart of another method for identifying repetitive textures provided by an embodiment of the present application;
FIG. 5 is a flow chart of yet another method for repeating texture recognition provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The terms "first", "second", "third", and the like in the description, the claims, and the drawings are used to distinguish between different objects, not to limit a specific order.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or "such as" in the embodiments should not be construed as preferred over, or more advantageous than, other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete fashion.
Referring to fig. 1, which shows a flowchart of a repeated texture recognition method provided by an embodiment of the present application, the method may be applied to an electronic device such as a server, the electronic device comprising a feature extraction module, a feature matching module, a first filtering module, and a second filtering module.
As shown in fig. 1, the method comprises the steps of:
S11, the feature extraction module acquires a plurality of views of the three-dimensional reconstruction object.
For example, images of a plurality of different perspectives, i.e. a plurality of views, of a three-dimensional reconstructed object may be acquired by a terminal device (e.g. camera, smartphone, virtual reality device, etc.).
The plurality of views of the three-dimensional reconstruction object acquired by the terminal device are uploaded to the electronic device, which performs the three-dimensional reconstruction process.
S12, the feature extraction module extracts feature points from each view to obtain a feature map.
In an exemplary embodiment of the present application, feature points may be extracted from each image using the scale-invariant feature transform (SIFT) method. The purpose of SIFT feature point screening is to search for extreme points across different scale spaces, which ensures that the feature points still exist when the image is enlarged or reduced.
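As an illustration only, this extraction step can be sketched with OpenCV's SIFT implementation; the helper name and the grayscale-loading convention below are assumptions, not part of the application:

```python
# Illustrative sketch of S12 using OpenCV's SIFT (available as
# cv2.SIFT_create in OpenCV >= 4.4); extract_view_features is a
# hypothetical helper name.
import cv2

def extract_view_features(image_path: str):
    """Detect scale-invariant keypoints and 128-dim descriptors for one view."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    # SIFT searches for extrema across scale spaces, so the returned
    # keypoints persist when the image is enlarged or reduced.
    keypoints, descriptors = sift.detectAndCompute(image, None)
    return keypoints, descriptors
```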
S13, the feature extraction module transmits all feature graphs to the feature matching module.
S14, the feature matching module performs feature matching on all feature maps to obtain image pairs having a matching relationship, and transmits all image pairs having a matching relationship to the first filtering module.
In an exemplary embodiment, each image has a unique identifier, and an image pair here comprises the unique identifiers of the two matched images and their mapping. For example, after the feature matching module performs feature matching on image A and image B and confirms that the successfully matched features of the two images meet a preset condition (for example, that the number of successfully matched features is greater than a preset value), images A and B are recorded as an image pair having a matching relationship.
In an exemplary embodiment, a random sample consensus (RANSAC) algorithm may be used when feature matching all feature maps, yielding the image pairs having a matching relationship together with information about the feature point pairs (also called matching feature point pairs) that each image pair contains. The RANSAC algorithm is a simple and effective way of removing the influence of noise when estimating a model: it estimates the model parameters using as few points as possible and then expands the set of points consistent with the resulting model as far as possible.
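A minimal sketch of this step, assuming OpenCV's brute-force matcher and fundamental-matrix RANSAC; the ratio-test constant, the RANSAC threshold, and the helper name are illustrative choices rather than values from the application:

```python
# Sketch of S14: descriptor matching followed by RANSAC-based geometric
# verification; match_image_pair is a hypothetical helper.
import cv2
import numpy as np

def match_image_pair(kp_a, des_a, kp_b, des_b, min_matches=15):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test discards ambiguous correspondences.
    good = [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < 0.75 * n.distance]
    if len(good) < min_matches:
        return None  # too few matches: no matching relationship
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    # RANSAC estimates the model from as few points as possible, then
    # keeps the correspondences consistent with it (the inliers).
    _, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 3.0)
    if mask is None:
        return None
    inliers = mask.ravel().astype(bool)
    return pts_a[inliers], pts_b[inliers]
```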
S15, the first filtering module acquires feature point matching information of the image pair.
For example, in an exemplary embodiment, the feature point matching information may include the number, position information, feature values, and the like of the feature points having a matching relationship.
S16, judging whether the number of the matched feature points contained in one image in the image pair is larger than or equal to a first threshold value; if yes, then S17 is performed; if not, S20 is performed.
The matching feature points are feature points having a matching relationship in two images included in the image pair.
As shown in fig. 2, image 30 and image 40 are two images whose matching relationship was determined after feature matching by the feature matching module; the scene shown in images 30 and 40 includes objects such as a table.
In this example, feature point A contained in image 30 has a matching relationship with feature point B contained in image 40; that is, points A and B form a matching feature point pair. This example shows only one such pair.
For either image of the image pair, count the number of its feature points that have a matching relationship with the other image; if this number is greater than or equal to the first threshold, the matching relationship of the image pair may be correct.
The first threshold may be obtained statistically from a limited number of tests; in this example, the first threshold may take the value 15.
In one application scenario, the proportion of the lines connecting all matching feature points of the two images whose slope lies within a certain slope range is greater than or equal to a preset proportion; that slope range can be called the main direction of feature matching of the image pair. For example, if the slopes of more than 80% of the lines connecting the successfully matched feature points of images A and B are about 60°, the main direction of feature matching of images A and B is about 60°.
However, there may still be matching feature points in the two images that do not lie in the main direction; such feature points are wrongly matched, and if wrongly matched feature points exist in the two images, the two images can be determined to be wrongly matched. Such cases can be identified by the procedure shown in steps S17 to S19 below.
S17, stitch the two images of the pair horizontally, and calculate the slope of the line connecting each matching feature point pair as well as the slope median.
Horizontal stitching refers to joining the two images along a horizontal direction (for example, the X-axis direction).
Fig. 2 is a schematic diagram of image 30 and image 40 after stitching along the X-axis direction; the slope of the connecting line AB between feature point B and feature point A is then calculated.
The slope median is the median of the slopes of all matching feature point connecting lines in the image pair. For example, if the image pair contains 20 matching feature point pairs, 20 feature point connecting lines are obtained; the slope of each of the 20 connecting lines is calculated, and the median of those 20 slopes is then computed.
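A sketch of this computation, assuming the matched points are given as pixel coordinates in each image; offsetting the second image's x-coordinates by the first image's width reproduces the horizontal stitching, and expressing each connecting line as an angle in degrees (an assumption) makes the deviations directly comparable with the 8° example below:

```python
# Sketch of S17: connecting-line directions after horizontal stitching.
import numpy as np

def line_angles_and_median(pts_a, pts_b, width_a):
    # Horizontal stitching places image B to the right of image A,
    # so every x-coordinate of B is offset by A's width.
    dx = (pts_b[:, 0] + width_a) - pts_a[:, 0]
    dy = pts_b[:, 1] - pts_a[:, 1]
    angles = np.degrees(np.arctan2(dy, dx))  # one angle per connecting line
    return angles, np.median(angles)
```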
S18, count the number of connecting lines of the same image pair whose slope deviates from the slope median by a first preset value or more.
In the embodiment of the present application, the first preset value may be determined from the statistics of a limited number of experiments, for example 8°; the present application does not limit the preset value corresponding to the slope deviation.
S19, judge whether the proportion of connecting lines meeting the condition is greater than or equal to a second preset value; if yes, S20 is performed; if not, S21 is performed.
In the embodiment of the present application, the second preset value may be determined from experimental statistics; for example, the second preset value may range from 5% to 15%.
The proportion of connecting lines meeting the condition is the ratio of the number of feature point connecting lines meeting the condition to the number of all feature point connecting lines contained in the image pair. For example, assuming the second preset value is 5%, if an image pair contains 8 feature point connecting lines meeting the condition out of 100 feature point connecting lines in total, the proportion of connecting lines meeting the condition is 8%, which is clearly greater than the second preset value.
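With the example values above (an 8° deviation and a 5% proportion), steps S18 and S19 reduce to a short counting check; this sketch is illustrative only:

```python
# Sketch of S18-S19: count the lines deviating from the median direction
# and compare their proportion with the second preset value.
import numpy as np

def off_main_direction(angles, first_preset=8.0, second_preset=0.05):
    deviating = np.abs(angles - np.median(angles)) >= first_preset
    # True means the pair contains a small-region repeated texture off the
    # main matching direction, so its matching relationship is deleted (S20).
    return deviating.mean() >= second_preset
```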
S20, deleting the matching relation of the image pair.
For example, if image A and image B are two images with a wrong matching relationship, the matching mapping relationship between image A and image B is deleted.
The processes of S15 to S20 above can screen out image pairs whose matching feature points are inconsistent with the main direction of feature matching; such images usually have a wrong matching relationship.
In another scenario, as shown in fig. 3, image 10 and image 20 are an image pair having a matching relationship, stitched horizontally in fig. 3. The matching feature points contained in image 10 and image 20 are concentrated in the region where the "good fortune" character is located; the broken lines in that region in fig. 3 represent the lines connecting the feature points of image 10 and image 20 that have a matching relationship. As shown in fig. 3, the slopes of the feature point connecting lines of the two images are essentially consistent, yet the matching feature points of both images are concentrated in one small region. Such repeated textures cannot be identified by the connecting line slopes described above. The embodiment of the application therefore also provides another scheme for filtering repeated textures, which identifies images whose feature points are concentrated in a small region by examining the distribution of the matching feature points in the image. This process may include the following steps:
S21, judge whether the number of matching feature points is smaller than or equal to a second threshold; if yes, S22 is performed; if not, S27 is performed.
In the embodiment of the present application, the second threshold is greater than the first threshold and may likewise be obtained statistically from a limited number of tests; for example, the second threshold in this example may take the value 500. The present application does not limit the value range of the second threshold.
If the number of matching feature points contained in the image is greater than the first threshold and smaller than or equal to the second threshold, it is necessary to identify further whether a small-region repeated texture exists in the image. If the number of matching feature points is greater than the second threshold, the matching relationship of the image pair is determined to be correct and the image pair is retained directly.
S22, divide either image of the image pair into N×M grids, and count the distribution of the grids containing matching feature points in that image.
It will be appreciated that N and M may be adjusted according to the size of the image; in an exemplary embodiment, each grid cell is guaranteed to contain a certain number of pixels, for example 40×40 pixels.
S23, merge the adjacent grids containing matching feature points in the image into regions (also called connected regions).
A matching feature point is a feature point in either image of the image pair that has a matching relationship with a feature point of the other image; for example, feature point A in image 30 matches feature point B in image 40 in fig. 2, so both point A and point B may be called matching feature points.
In an exemplary embodiment, the N×M grids of the image are first traversed to screen out the grids containing feature points; the grids containing feature points are then traversed again to screen out those containing feature points that have a matching relationship with the other image of the image pair; finally, the grids that contain matching feature points and are adjacent in position are merged into regions.
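A sketch of this grid statistic, assuming the matching feature points are given in pixel coordinates and using scipy's connected-component labeling for the merging step; the application does not prescribe a particular labeling routine, and the 40×40-pixel cell size follows the example above:

```python
# Sketch of S22-S23: occupancy grid of matching feature points, then
# 4-connected labeling of adjacent occupied cells into regions.
import numpy as np
from scipy import ndimage

def matching_point_regions(points, img_w, img_h, cell=40):
    n_rows, n_cols = max(img_h // cell, 1), max(img_w // cell, 1)
    occupied = np.zeros((n_rows, n_cols), dtype=bool)
    for x, y in points:
        r = min(int(y) // cell, n_rows - 1)
        c = min(int(x) // cell, n_cols - 1)
        occupied[r, c] = True  # this grid cell contains a matching point
    labels, num_regions = ndimage.label(occupied)  # merge adjacent cells
    sizes = [int((labels == k).sum()) for k in range(1, num_regions + 1)]
    return num_regions, sizes, n_rows * n_cols
```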
S24, judging whether the number of the areas contained in the image is smaller than a third threshold value; if yes, S26 is performed, and if not, S25 is performed.
Count the number of regions, obtained by merging grids, contained in either image of the image pair; if this number is smaller than a certain preset value, for example 3, it is determined that the matching feature points of the image pair are distributed over a small region, and hence that a repeated texture exists in that small region.
The matching feature points of a correctly matched image pair are normally distributed relatively uniformly over the image; therefore, if the matching feature points of an image pair are concentrated in a small region, it can be determined that the matching relationship of the image pair is wrong.
As shown in fig. 3, image 10 and image 20 are an image pair having a matching relationship, stitched horizontally in fig. 3. Because the matching feature points contained in image 10 and image 20 are concentrated in the region where the "good fortune" character is located, judging whether the number of regions contained in the image is smaller than the third threshold determines that this image pair contains a repeated texture and that its matching relationship is wrong.
S25, judge whether the maximum number of grids contained in a region is smaller than a fourth threshold; if yes, S26 is performed; if not, S27 is performed.
If the number of regions contained in either image of the image pair is greater than the third threshold, continue to judge whether the sum of the numbers of grids contained in all connected regions of the image is smaller than the fourth threshold; if so, the matching feature points of the image are concentrated in a small region.
In an exemplary embodiment of the present application, the fourth threshold may be determined from the total number of grids contained in the image; for example, the fourth threshold may be set to 0.05 × the total number of grids.
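Combining the two judgments of S24 and S25 with the example thresholds (3 regions, 0.05 × the total grid count) gives the following sketch; since the text describes S25 both as the largest region's grid count and as the sum over all connected regions, the sum is used here as an assumption:

```python
# Sketch of the S24-S25 decision: True means the matching feature points
# are concentrated in a small region and the pair should be deleted.
def concentrated_in_small_region(num_regions, sizes, total_cells,
                                 third_threshold=3, area_ratio=0.05):
    if num_regions < third_threshold:
        return True  # too few connected regions (S24)
    # S25: grid count summed over all connected regions vs fourth threshold
    return sum(sizes) < area_ratio * total_cells
```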
S26, deleting the matching relation of the image pair.
If the number of regions is smaller than the third threshold, or the number of grids contained in the largest region is smaller than the fourth threshold, the matching feature points of the image pair are determined to be concentrated in a small region. As described above, the matching feature points of correctly matched image pairs are generally distributed uniformly over the whole image; therefore, if the matching feature points of the two images are concentrated in a small region, the matching relationship of the two images is determined to be wrong, and the wrongly matched image pair needs to be deleted.
For example, if image A and image B are an image pair with a wrong matching relationship, the matching mapping relationship of image A and image B is deleted.
S27, reserving the matching relation of the image pair.
If the sum of the numbers of grids contained in all connected regions of the image is greater than the fourth threshold, the total area of the connected regions is relatively large; in other words, the matching feature points of the image pair are distributed over a wide range. The matching relationship of the image pair is then determined to be correct, and the matching relationship of the image pair is retained.
S28, judging whether unprocessed image pairs exist or not.
In an exemplary embodiment, a parameter i may be set to represent the number of currently unprocessed image pairs corresponding to the same three-dimensional reconstruction object, and the value of i is updated for each processed image pair, for example i = i - 1. In this scenario, i equal to 0 indicates that no unprocessed image pair exists, and i greater than 0 indicates that unprocessed image pairs remain.
In another exemplary embodiment of the present application, a parameter j may be set to represent the number of processed image pairs corresponding to the same three-dimensional reconstruction object, with j incremented by 1 for each processed image pair. In this scenario, j smaller than the total number of image pairs corresponding to the three-dimensional reconstruction object indicates that unprocessed image pairs remain, and j equal to that total indicates that no unprocessed image pair exists.
If there are unprocessed image pairs, execution returns to S15 to continue processing the next pair of image pairs. If there is no unprocessed image pair, S29 is performed.
S29, outputting the image pair with the matching relation reserved.
Retaining the image pairs having a matching relationship amounts to screening out the image pairs whose matching relationship is correct; the correctly matched image pairs are finally output, and subsequent processing continues.
It will be appreciated that the process of S15-S20 alone may be used to filter out image pairs having small-region repeated textures in a non-main direction, or the process of S21-S27 alone may be used to filter out image pairs whose repeated textures are concentrated in a small region.
According to the repeated texture recognition method provided by this embodiment, the slopes of the lines connecting the matching feature points contained in an image pair, and the slope median of the image pair, are calculated; the number of connecting lines whose slope deviates from the slope median by more than a first preset value is counted, and if the proportion of connecting lines meeting the condition is greater than a second preset value, the matching relationship of the image pair is wrong and is deleted. Further, the method can identify whether the matching relationship of an image pair is correct from the distribution of the matching feature point pairs it contains. Specifically, the image may be divided into a plurality of grids and the distribution of feature points over the grids counted; adjacent grids containing matching feature point pairs are merged into regions, and the number of regions contained in the image and the number of grids contained in each region are counted. If the number of regions contained in the image is smaller than or equal to a third threshold, or the maximum number of grids contained in a region is smaller than or equal to a fourth threshold, the matching feature point pairs of the image pair are concentrated in a small region, and the matching relationship of the image pair is deleted; if the maximum number of grids contained in a region is greater than the fourth threshold, the matching relationship of the image pair is retained. The image pairs whose matching relationships are retained are finally output, and subsequent processing continues. This scheme needs no analysis of image feature information: small-region repeated textures are identified from the slopes or the distribution of the matching feature point pairs contained in the image pairs, so the method is insensitive to the input image, which improves its robustness.
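Tying the sketches above together, a hypothetical per-pair filter might read as follows; the helper names come from the earlier sketches and every threshold is illustrative, so this is not the claimed flow verbatim:

```python
# Illustrative per-pair filter combining both checks of this embodiment.
def keep_image_pair(pts_a, pts_b, width_a, img_w, img_h,
                    first_threshold=15, second_threshold=500):
    n = len(pts_a)
    if n < first_threshold:
        return False  # S16/S20: too few matching feature points
    angles, _ = line_angles_and_median(pts_a, pts_b, width_a)
    if off_main_direction(angles):
        return False  # S19/S20: repeated texture off the main direction
    if n > second_threshold:
        return True   # S21/S27: plenty of matches, relationship kept
    num, sizes, total = matching_point_regions(pts_a, img_w, img_h)
    return not concentrated_in_small_region(num, sizes, total)
```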
Fig. 4 is a flowchart of another repeated texture recognition method provided by an embodiment of the present application; this embodiment determines whether an image pair has a repeated texture from the slopes of the lines connecting the matching feature point pairs in the image pair. The method is applied to an electronic device comprising a feature extraction module, a feature matching module, and a first filtering module.
As shown in fig. 4, the method may include the steps of:
S31, the feature extraction module acquires a plurality of views of the three-dimensional reconstruction object.
S32, the feature extraction module extracts feature points of each view to obtain a feature map.
S33, the feature extraction module transmits all feature graphs to the feature matching module.
S34, the feature matching module performs feature matching on all feature maps to obtain image pairs having a matching relationship, and transmits all image pairs having a matching relationship to the first filtering module.
S35, the first filtering module acquires feature point matching information of the image pair.
S36, judging whether the number of the matched feature points contained in one image in the image pair is larger than or equal to a first threshold value; if yes, then execute S37; if not, S310 is performed.
S37, stitch the two images of the pair horizontally, and calculate the slope of the line connecting each matching feature point pair as well as the slope median.
S38, count the number of connecting lines of the same image pair whose slope deviates from the slope median by a first preset value or more.
S39, judge whether the proportion of connecting lines meeting the condition is greater than or equal to a second preset value; if yes, S310 is performed; if not, S311 is performed.
S310, deleting the matching relation of the image pair.
In this embodiment, the implementation process of S31 to S310 is the same as the implementation process of S11 to S20 in the embodiment shown in fig. 1, and will not be repeated here.
S311, the matching relation of the image pair is reserved.
In this embodiment, if the first filtering module determines that the proportion of connecting lines meeting the condition is smaller than the second preset value, the proportion of feature points not lying in the main direction of feature matching is small and can be ignored; the matching relationship of the image pair can therefore be determined to be correct, and the matching relationship of the image pair is retained.
S312, the first filtering module determines whether an unprocessed image pair exists; if yes, execution returns to S35; if not, S313 is performed.
In this embodiment, the process of determining whether an unprocessed image pair exists by the first filtering module is the same as the implementation of S28 in the embodiment shown in fig. 1, and will not be described herein.
S313, the first filtering module outputs an image pair that retains the matching relationship.
The implementation of each step in this embodiment is substantially the same as that of the related steps in the embodiment shown in fig. 1, and the detailed description of this embodiment will not be repeated.
According to the repeated texture recognition method provided by this embodiment, the two images of an image pair are stitched horizontally, and the slope of the line connecting each matching feature point pair and the median of the slopes of all connecting lines in the image pair are calculated. The proportion of connecting lines whose slope deviates from the slope median by a first preset value or more is then counted; if this proportion is greater than the corresponding preset value, the image pair contains a small-region repeated texture lying outside the main direction of feature matching, that is, the matching relationship of the image pair is determined to be wrong, and the matching relationship of the image pair is deleted. The method thus needs no analysis of the content of the feature points in the image pair and identifies repeated textures off the main direction of feature matching purely from the slopes of the feature point connecting lines, so it is insensitive to the input image, which improves its robustness.
Fig. 5 is a flowchart of yet another repeated texture recognition method provided in an embodiment of the present application; this embodiment analyzes the distribution of the matching feature points in the image pair to identify whether the image pair has a repeated texture. The method is applied to an electronic device comprising a feature extraction module, a feature matching module, and a second filtering module.
As shown in fig. 5, the method may include the steps of:
S41, the feature extraction module acquires a plurality of views of the three-dimensional reconstruction object.
S42, the feature extraction module extracts feature points of each view to obtain a feature map.
S43, the feature extraction module transmits all feature graphs to the feature matching module.
S44, the feature matching module performs feature matching on all feature maps to obtain image pairs having a matching relationship, and transmits all image pairs having a matching relationship to the second filtering module.
S45, the second filtering module acquires feature point matching information of the image pair.
S46, the second filtering module judges whether the number of the matched feature points is smaller than or equal to a second threshold value; if yes, executing S47; if not, S412 is performed.
S47, divide either image of the image pair into N×M grids, and count the distribution of the grids corresponding to the feature points in that image.
S48, connecting adjacent grids containing the matched feature points in the image into a region.
S49, judging whether the number of the areas contained in the image is smaller than a third threshold value; if yes, S411 is performed, and if not, S410 is performed.
S410, judge whether the maximum number of grids contained in all connected regions is smaller than a fourth threshold; if yes, S411 is performed; if not, S412 is performed.
S411, deleting the matching relation of the image pair.
S412, preserving the matching relation of the image pair.
S413, it is determined whether or not an unprocessed image pair exists. If there are unprocessed image pairs, execution returns to S45 to continue processing the next pair of image pairs. If there is no unprocessed image pair, S414 is performed.
S414, outputting the image pair with the matching relation reserved.
In this embodiment, the processes described in S45 to S414 are all performed in the second filter module.
The processes of S41 to S414 in this embodiment are the same as the implementation of the corresponding steps in the embodiment shown in fig. 1 and are not repeated here.
According to the repeated texture recognition method provided by this embodiment, whether an image pair has a small-region repeated texture is identified from the distribution of the matching feature point pairs contained in the image pair; if so, the matching relationship of the image pair is deleted, that is, wrongly matched image pairs are filtered out. This scheme needs no analysis of image feature information and identifies small-region repeated textures from the matching feature points contained in the image pair, so the method is insensitive to the input image, which improves its robustness.
In another aspect, the application further provides an electronic device applying the above repeated texture recognition method.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may be a server, a terminal device, or the like. The terminal device may include a mobile phone, a tablet, a desktop computer, a laptop, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable electronic device, a smart watch, and the like.
As shown in fig. 6, the electronic device may include a processor 101, a memory 102, a bus 103, and a communication interface 104, where the number of processors 101 may be 1 to N, where N is an integer greater than 1.
The processor 101 and the memory 102 communicate with each other via the bus 103. The processor 101 may communicate with external devices through the bus 103 and the communication interface 104; for example, the communication interface 104 includes a transmitting unit and a receiving unit. The communication interface 104 receives data sent by a peripheral device via the receiving unit, and the data is transferred to the processor 101 via the bus 103; data sent by the processor 101 is transferred to the communication interface 104 via the bus 103 and transmitted to the peripheral device via the transmitting unit. The processor 101 is configured to invoke program instructions in the memory 102 to perform the repeated texture recognition method shown in fig. 1, 4, or 5.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above. The specific working processes of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present embodiment may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of this embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments. The aforementioned storage media include: flash memory, removable hard disk, read-only memory, random access memory, magnetic disk, optical disk, and the like.
The foregoing is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto, but any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A method of repeating texture recognition, for use with an electronic device, the method comprising:
acquiring feature point matching information contained in at least two images with a matching relationship, wherein the feature point matching information comprises information of matched feature points contained in the at least two images;
horizontally stitching the at least two images, and calculating the slopes of the connecting lines of all matching feature points in the at least two images;
counting the number of connecting lines meeting a first preset condition in the at least two images, wherein the first preset condition comprises that the deviation between the slope and the standard value of the slope is larger than or equal to a first preset value;
determining that the number of the connecting lines meets a second preset condition, wherein the second preset condition comprises that the proportion of the connecting lines meeting the first preset condition is larger than or equal to a second preset value;
determining that the at least two images contain a small-region repeated texture.
2. The method of claim 1, wherein prior to horizontally stitching the at least two images, the method further comprises:
counting the number of matching feature points contained in any one of the at least two images; if the number of the matching feature points is greater than or equal to a first threshold, performing the horizontal stitching of the at least two images;
and if the number of the matching feature points is smaller than the first threshold, determining that the at least two images have a small-region repeated texture.
3. The method according to claim 1, wherein the process of determining the slope standard value comprises:
calculating the median of the slopes of all connecting lines contained in the image pair, and determining the slope median as the slope standard value.
4. The method according to any one of claims 1-3, wherein the method further comprises: deleting the matching relationship between the at least two images containing the small-region repeated texture.
5. The method according to any one of claims 1-4, further comprising:
after determining that the number of connecting lines meeting the first preset condition in the at least two images does not meet the second preset condition, counting the distribution range of matching feature points contained in any image in the at least two images;
and if the distribution range is smaller than or equal to a preset range, determining that the at least two images contain a small-region repeated texture.
6. The method of claim 5, wherein said counting the distribution range of the matching feature points included in any one of the at least two images comprises:
dividing any one image of the at least two images into a plurality of grids, and counting the number of grids containing the matching characteristic points in any one image;
merging the grids that contain the matching feature points and are adjacent in position into a connected region;
and determining the distribution range of the matching feature points based on parameters of connected areas contained in any image, wherein the parameters comprise at least one of the number and the area of the areas.
7. The method of claim 6, wherein the determining the distribution range of the matched feature points based on the parameters of the connected regions contained in the image comprises:
if the number of connected regions contained in the image is less than or equal to a third threshold, determining that the distribution range of the matched feature points is smaller than the preset range;
if the number of connected regions contained in the image is greater than the third threshold, determining whether the total area of all connected regions in the image is less than or equal to a fourth threshold;
if the total area is less than or equal to the fourth threshold, determining that the at least two images contain small-region repeated texture; and
if the total area is greater than the fourth threshold, determining that the matching relationship of the at least two images is correct.
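The two-stage decision of claims 7 and 11 then reduces to threshold comparisons on those two statistics; the third and fourth threshold values below are placeholders, since the claims leave them open:

```python
def classify_distribution(num_regions, total_area,
                          third_threshold=2,    # region count (assumed value)
                          fourth_threshold=6):  # total area in cells (assumed value)
    """Sketch of claims 7/11: True means small-region repeated texture."""
    if num_regions <= third_threshold:
        # Few connected regions: the matches are clustered, so the
        # distribution range is smaller than the preset range.
        return True
    if total_area <= fourth_threshold:
        # More regions that still cover little total area: also too narrow.
        return True
    # Matches are widely distributed; the matching relationship is correct.
    return False
```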
8. The method according to any one of claims 5-7, wherein, prior to the counting the distribution range of the matched feature points contained in any one of the at least two images, the method further comprises:
counting the number of matched feature points contained in any one of the at least two images;
if the number of matched feature points is less than or equal to a second threshold, performing the counting the distribution range of the matched feature points contained in any one of the at least two images; and
if the number of matched feature points is greater than the second threshold, determining that the matching relationship of the at least two images is correct.
9. A repeated texture recognition method, applied to an electronic device, the method comprising:
acquiring feature point matching information of at least two images having a matching relationship, wherein the feature point matching information comprises information of matched feature points contained in the at least two images;
counting the distribution range of the matched feature points contained in any one of the at least two images; and
if the distribution range is smaller than or equal to a preset range, determining that the at least two images contain small-region repeated texture.
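Claims 9-12 apply the distribution-range test on its own, without the slope stage; reusing the sketches after claims 6 and 7, the claim-9 flow might read as follows (the image size, the random test points, and all names are assumptions):

```python
import numpy as np

# Hypothetical claim-9 flow, reusing match_point_distribution and
# classify_distribution from the sketches above.
pts_a = np.random.default_rng(0).uniform([0, 0], [1920, 1080], size=(40, 2))
num_regions, total_area = match_point_distribution(pts_a, img_shape=(1080, 1920))
if classify_distribution(num_regions, total_area):
    print("small-region repeated texture: drop this matching relationship")
else:
    print("matching relationship treated as correct")
```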
10. The method of claim 9, wherein the counting the distribution range of the matched feature points contained in any one of the at least two images comprises:
dividing any one of the at least two images into a plurality of grid cells, and counting the number of grid cells in the image that contain matched feature points;
merging grid cells that contain matched feature points and are adjacent in position into connected regions; and
determining the distribution range of the matched feature points based on parameters of the connected regions contained in the image, wherein the parameters comprise at least one of the number of the connected regions and the area of the connected regions.
11. The method according to claim 10, wherein the determining the distribution range of the matched feature points based on the parameters of the connected regions contained in the image comprises:
if the number of connected regions contained in the image is less than or equal to a third threshold, determining that the distribution range of the matched feature points is smaller than the preset range;
if the number of connected regions contained in the image is greater than the third threshold, determining whether the total area of all connected regions in the image is less than or equal to a fourth threshold;
if the total area is less than or equal to the fourth threshold, determining that the at least two images contain small-region repeated texture; and
if the total area is greater than the fourth threshold, determining that the matching relationship of the at least two images is correct.
12. The method according to any one of claims 9-11, wherein, prior to the counting the distribution range of the matched feature points contained in any one of the at least two images, the method further comprises:
counting the number of matched feature points contained in any one of the at least two images;
if the number of matched feature points is less than or equal to a second threshold, performing the counting the distribution range of the matched feature points contained in any one of the at least two images; and
if the number of matched feature points is greater than the second threshold, determining that the matching relationship of the at least two images is correct.
13. An electronic device, comprising: one or more processors, a memory, and a touch screen; wherein the memory is configured to store program code, and the processor is configured to execute the program code to cause the electronic device to implement the repeated texture recognition method according to any one of claims 1 to 12.
14. A computer-readable storage medium having instructions stored thereon which, when executed on an electronic device, cause the electronic device to perform the repeated texture recognition method according to any one of claims 1 to 12.
CN202211348049.7A 2022-10-31 2022-10-31 Repeated texture recognition method and device Active CN116740374B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202410741680.6A CN118691848A (en) 2022-10-31 2022-10-31 Repeated texture recognition method and device
CN202211348049.7A CN116740374B (en) 2022-10-31 2022-10-31 Repeated texture recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211348049.7A CN116740374B (en) 2022-10-31 2022-10-31 Repeated texture recognition method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410741680.6A Division CN118691848A (en) 2022-10-31 2022-10-31 Repeated texture recognition method and device

Publications (2)

Publication Number Publication Date
CN116740374A 2023-09-12
CN116740374B CN116740374B (en) 2024-06-21

Family

ID=87906647

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202410741680.6A Pending CN118691848A (en) 2022-10-31 2022-10-31 Repeated texture recognition method and device
CN202211348049.7A Active CN116740374B (en) 2022-10-31 2022-10-31 Repeated texture recognition method and device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202410741680.6A Pending CN118691848A (en) 2022-10-31 2022-10-31 Repeated texture recognition method and device

Country Status (1)

Country Link
CN (2) CN118691848A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101316368A (en) * 2008-07-18 2008-12-03 西安电子科技大学 Full view stabilizing method based on global characteristic point iteration
CN101957919A (en) * 2010-09-22 2011-01-26 上海交通大学 Character recognition method based on image local feature retrieval
CN108875451A (en) * 2017-05-10 2018-11-23 腾讯科技(深圳)有限公司 A kind of method, apparatus, storage medium and program product positioning image
CN110674780A (en) * 2019-09-30 2020-01-10 苏州科达科技股份有限公司 Scene change detection method, device, equipment and readable storage medium
CN112837353A (en) * 2020-12-29 2021-05-25 北京市遥感信息研究所 Heterogeneous image matching method based on multi-order characteristic point-line matching
CN114782715A (en) * 2022-04-08 2022-07-22 宁波芯然科技有限公司 Vein identification method based on statistical information
CN115115861A (en) * 2022-08-31 2022-09-27 中国民航大学 Image correction method applied to rotating binocular stereoscopic vision system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Jing; YUAN Zhenwen; ZHANG Xiaochun; LI Ying: "Image Stitching Based on SIFT Features and Successive Removal of Mismatches", Semiconductor Optoelectronics, no. 01, pages 136-141 *
LIN Min; CHEN Shu; YUAN Haoxiang: "A Dual-Constraint Feature Point Matching Algorithm Based on Grid Correspondence", Computer Technology and Automation, vol. 39, no. 1, pages 84-88 *

Also Published As

Publication number Publication date
CN118691848A (en) 2024-09-24
CN116740374B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
CN111815755B (en) Method and device for determining blocked area of virtual object and terminal equipment
CN114186632B (en) Method, device, equipment and storage medium for training key point detection model
CN111340077A (en) Disparity map acquisition method and device based on attention mechanism
CN111080654B (en) Image lesion region segmentation method and device and server
CN110941978B (en) Face clustering method and device for unidentified personnel and storage medium
CN107908998B (en) Two-dimensional code decoding method and device, terminal equipment and computer readable storage medium
CN114529490B (en) Data processing method, device, equipment and readable storage medium
JP7282474B2 (en) Encryption mask determination method, encryption mask determination device, electronic device, storage medium, and computer program
CN113888635B (en) Visual positioning method and related device
CN112966687B (en) Image segmentation model training method and device and communication equipment
CN112287945A (en) Screen fragmentation determination method and device, computer equipment and computer readable storage medium
CN109615620A (en) The recognition methods of compression of images degree, device, equipment and computer readable storage medium
CN113191189A (en) Face living body detection method, terminal device and computer readable storage medium
CN116740374B (en) Repeated texture recognition method and device
CN114863450B (en) Image processing method, device, electronic equipment and storage medium
CN111931794B (en) Sketch-based image matching method
CN108805883A (en) A kind of image partition method, image segmentation device and electronic equipment
CN112991451B (en) Image recognition method, related device and computer program product
CN114723796A (en) Three-dimensional point cloud generation method and device and electronic equipment
CN113610856A (en) Method and device for training image segmentation model and image segmentation
CN118196910B (en) Gesture interaction method, gesture interaction system, computer and storage medium
CN116994002B (en) Image feature extraction method, device, equipment and storage medium
CN113111891B (en) Image reconstruction method and device, terminal equipment and storage medium
CN115731096A (en) Method, device and equipment for restoring image occlusion area and storage medium
CN114842292A (en) Face recognition data set balancing method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant