CN113160389A - Image reconstruction method and device based on characteristic line matching and storage medium


Info

Publication number
CN113160389A
Authority
CN
China
Prior art keywords
line
feature
characteristic
matching
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110452921.1A
Other languages
Chinese (zh)
Inventor
苏星
李路平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Fanglian Technical Service Co Ltd
Original Assignee
Shanghai Fanglian Technical Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Fanglian Technical Service Co Ltd filed Critical Shanghai Fanglian Technical Service Co Ltd
Priority to CN202110452921.1A
Publication of CN113160389A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/12 - Edge-based segmentation
    • G06T 7/13 - Edge detection
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G06T 2207/10012 - Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image reconstruction method and device based on feature line matching, as well as a storage medium and a processor. The image reconstruction method based on feature line matching comprises the following steps: obtaining contour lines of at least three images; determining a first contour line among the contour lines and selecting a first feature line from it; searching the remaining images according to a first feature point closest to the first feature line to obtain second feature points; matching the second feature line closest to each second feature point with the first feature line; and performing three-dimensional image reconstruction based on the matching result. The method and device solve the technical problem that very high requirements are otherwise placed on imaging quality and shooting mode.

Description

Image reconstruction method and device based on characteristic line matching and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to a method and an apparatus for reconstructing an image based on feature line matching, a storage medium, and a processor.
Background
Three-dimensional reconstruction from aerial images has developed mainly on the basis of the Structure from Motion method, a three-dimensional reconstruction technique that recovers three-dimensional structure from camera photographs.
Given a set of photographs of a fixed scene, the technique identifies feature points in the pictures, matches the same feature points across different pictures, computes the camera position at which each picture was taken together with the spatial positions of the feature points, and finally generates a three-dimensional point cloud reflecting the spatial structure of the scene.
Feature-line-based approaches instead start from identifying feature lines in the images and recovering their spatial structure, generating a set of three-dimensional line segments through three-dimensional reconstruction. An algorithm based on feature lines can preserve the straight-line contours of buildings and thus produce a more accurate three-dimensional model.
However, the existing Structure from Motion technology based on the characteristic line has the following defects:
1) The requirements on the quality of the captured images are very high. The technique relies on the same building contour line being completely and accurately identified in images taken from different viewing angles. However, with existing straight-line detection techniques, detection accuracy is affected by illumination, shooting angle, and similar factors: the same building contour line tends to be detected as a complete segment in a high-quality image, but as several fragments, or not at all, in a low-quality image.
2) The requirements on the shooting mode are very high. The technique requires that the contour line of the same building be fully visible in images from different viewing angles, but the actual shooting route is limited and objects easily occlude one another during shooting. A building contour line that is fully visible in one image may be partially occluded in an image taken from another viewing angle, which places higher demands on the shooting mode.
No effective solution has yet been proposed for the problem in the related art that very high requirements are placed on imaging quality and shooting mode.
Disclosure of Invention
The present application mainly aims to provide an image reconstruction method and device based on feature line matching, a storage medium, and a processor, so as to solve the problem that existing approaches place very high requirements on imaging quality and shooting mode.
In order to achieve the above object, according to one aspect of the present application, there is provided an image reconstruction method based on feature line matching.
The image reconstruction method based on feature line matching comprises the following steps: S01, obtaining contour lines of at least three images; S02, determining a first contour line among the contour lines and selecting a first feature line from it; S03, searching the remaining images according to a first feature point closest to the first feature line to obtain second feature points; S04, matching the second feature line closest to each second feature point with the first feature line; and S05, performing three-dimensional image reconstruction based on the matching result.
Further, S01, acquiring the contour lines of at least three images includes: S011, identifying feature points in the three selected images; S012, taking any one of the images as a base image and matching feature points between it and the other two images; S013, calculating the camera positions at which the three images were taken using a RANSAC algorithm, and screening out feature points that do not satisfy the camera position relationship; and S014, rapidly identifying the contour lines of the scene in the images using a FastLineDetector algorithm.
Further, S03, the step of searching for a second feature point in the remaining images according to the first feature point closest to the first feature line includes: s031, search for and get the first characteristic point nearest to said first characteristic line; s032, judging whether the first feature point meets a preset positioning condition, if so, executing S033, and if not, executing S034; s033, searching the rest of the images according to the first feature point to obtain a second feature point; and S034, terminating the characteristic line matching.
Further, S04, matching the second feature line closest to the second feature point with the first feature line includes: s041, judging whether second feature points exist in other images, if so, executing a step S042, and if not, executing a step S043; s042, matching a second characteristic line closest to the second characteristic point with the first characteristic line; and S043, terminating the characteristic line matching.
Further, the performing, at S05, an operation of three-dimensionally reconstructing the image based on the matching result includes: and S051, according to the position of the camera, the matched characteristic points and the matched characteristic lines, projecting the matched characteristic points to a three-dimensional space by utilizing a triangulation algorithm to obtain a three-dimensional scenery model.
Further, matching a second feature line closest to the second feature point with the first feature line further includes: s061, judging whether all first characteristic lines in the first contour line are traversed or not; s062, if yes, finishing the feature line matching, and executing the step S05; s063, if not, executing step S02 to step S04.
Further, after S04, matching the second feature line closest to the second feature point with the first feature line, the method includes: S071, parametrically expressing at least three first feature lines in the matched first feature line group; S072, substituting the parametric expression results into a preset trifocal tensor model to judge whether a preset contour line condition is met; and S073, if the first feature lines meet the condition, screening the at least three matched first feature lines into a target matching group.
Further, after S05, performing the operation of three-dimensionally reconstructing the image based on the matching result, the method includes: S081, judging whether steps S01 to S05 have been performed for all images; S082, if yes, the three-dimensional reconstruction of the image is finished; and S083, if not, performing steps S01 to S05 based on the images not yet processed.
In order to achieve the above object, according to another aspect of the present application, there is provided an image reconstruction apparatus based on feature line matching.
The application provides an image reconstruction device based on feature line matching, which comprises: an obtaining module 10, configured to obtain contour lines of at least three images; a selecting module 20, configured to determine a first contour line among the contour lines and select a first feature line from it; a searching module 30, configured to search the remaining images according to a first feature point closest to the first feature line to obtain second feature points; a matching module 40, configured to match the second feature line closest to each second feature point with the first feature line; and an execution module 50, configured to perform three-dimensional image reconstruction based on the matching result.
In order to achieve the above object, according to another aspect of the present application, there is provided a storage medium.
According to the present application, there is provided a storage medium comprising a stored program, wherein the program performs the image reconstruction method based on feature line matching described above.
In the embodiment of the application, in a manner based on feature line matching, the contour lines of at least three images are obtained; a first contour line is determined among the contour lines and a first feature line is selected from it; the remaining images are searched according to a first feature point closest to the first feature line to obtain second feature points; the second feature line closest to each second feature point is matched with the first feature line; and three-dimensional image reconstruction is performed based on the matching result. This prevents the completeness of a feature line from affecting feature line matching, thereby greatly reducing the requirements on imaging quality and shooting mode and solving the technical problem that these requirements are very high.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
fig. 1 is a schematic flowchart of an image reconstruction method based on feature line matching according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating an image reconstruction method based on feature line matching according to a preferred embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of an image reconstruction method based on feature line matching according to still another preferred embodiment of the present application;
FIG. 4 is a flowchart illustrating an image reconstruction method based on feature line matching according to another preferred embodiment of the present application;
FIG. 5 is a flowchart illustrating an image reconstruction method based on feature line matching according to another preferred embodiment of the present application;
FIG. 6 is a flowchart illustrating an image reconstruction method based on feature line matching according to another preferred embodiment of the present application;
FIG. 7 is a flowchart illustrating an image reconstruction method based on feature line matching according to another preferred embodiment of the present application;
FIG. 8 is a flowchart illustrating an image reconstruction method based on feature line matching according to another preferred embodiment of the present application;
fig. 9 is a flowchart of an image reconstruction apparatus based on feature line matching according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described here. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the invention and its embodiments and are not intended to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meanings of these terms in the present invention can be understood by those skilled in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to specific situations.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
According to an embodiment of the present invention, there is provided an image reconstruction method based on feature line matching, as shown in fig. 1, the method including steps S01 to S05 as follows:
S01, obtaining contour lines of at least three images;
At least three images are selected from all of the images to form a working unit. In this embodiment, the images may be selected automatically by a computer using a random algorithm, or at least three images may be selected by a user through selection operations on a software interface. The latter is preferred, since manual comparison in the interface can ensure that one of the images has a relatively large scene overlap with each of the remaining images, and that the remaining images partially overlap one another; this guarantees the image quality and provides a basis for subsequent feature point and feature line matching.
According to the embodiment of the present invention, preferably, as shown in fig. 2, the step S01 of acquiring the contour lines of at least three images includes:
S011, identifying feature points in the three selected images;
S012, taking any one of the images as a base image and matching feature points between it and the other two images;
S013, calculating the camera positions at which the three images were taken using a RANSAC algorithm, and screening out feature points that do not satisfy the camera position relationship;
S014, rapidly identifying the contour lines of the scene in the images using a FastLineDetector algorithm.
In this embodiment, three images are preferably selected, and the middle image of the three is defined as the one having a large scene overlap with each of the other two. The classical Structure from Motion technique is combined with the feature-line-based Structure from Motion technique, taking advantage of the fact that the classical technique places lower requirements on the images: only a relatively large scene overlap between pairs of images is needed, which greatly reduces the requirements on imaging.
Feature points of the three images are identified following the feature-line-based Structure from Motion technique, and feature point matching with the other two images is carried out based on the intermediate image, so that identical feature points in different images are matched with each other. Since the matched feature points may contain mismatches, a RANSAC algorithm is used to calculate the camera positions at which the three images were taken (different images have different shooting angles and positions) and to screen out feature points that do not satisfy the camera position relationship, so that the final set of feature points is more accurate. Finally, based on the matched feature points, the contour lines of the scene in each image are rapidly identified with a FastLineDetector algorithm. The contour lines are the outer contours of the scene; they effectively preserve its straight-line contours and help generate a more accurate three-dimensional model.
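The application names RANSAC and FastLineDetector but no specific toolkit. The following sketch of S011 to S014 uses OpenCV, with SIFT feature detection, findEssentialMat/recoverPose for the RANSAC pose step, and ximgproc.createFastLineDetector; these tool choices, and all function and variable names below, are assumptions for illustration rather than components prescribed by the application.

```python
import cv2
import numpy as np

def preprocess_triplet(img_paths, K):
    """Sketch of S011-S014: detect and match feature points, filter them with
    RANSAC pose estimation, and detect line segments.  K is the 3x3 intrinsic
    matrix; image index 0 is assumed to be the intermediate (base) image."""
    imgs = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in img_paths]
    sift = cv2.SIFT_create()
    feats = [sift.detectAndCompute(im, None) for im in imgs]        # S011

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = {}
    for j in (1, 2):                                                 # S012: base image vs the other two
        raw = matcher.knnMatch(feats[0][1], feats[j][1], k=2)
        good = [m[0] for m in raw
                if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # ratio value assumed
        pts0 = np.float32([feats[0][0][m.queryIdx].pt for m in good])
        ptsj = np.float32([feats[j][0][m.trainIdx].pt for m in good])

        # S013: estimate the relative camera pose with RANSAC, keep only inliers
        E, mask = cv2.findEssentialMat(pts0, ptsj, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts0, ptsj, K, mask=mask)
        inl = mask.ravel().astype(bool)
        pairs[j] = {"pts0": pts0[inl], "ptsj": ptsj[inl], "R": R, "t": t}

    # S014: fast line segment detection (requires opencv-contrib)
    fld = cv2.ximgproc.createFastLineDetector()
    lines = []
    for im in imgs:
        segs = fld.detect(im)
        lines.append(segs.reshape(-1, 4) if segs is not None else np.empty((0, 4)))
    return pairs, lines
```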
S02, determining a first contour line in the contour lines, and selecting a first characteristic line from the first contour line;
The contour lines identified by the FastLineDetector algorithm alone may differ considerably from the real contour of the scene, so the final three-dimensional model cannot faithfully reflect that contour and its accuracy is low. To solve this problem, a feature line matching technique is introduced. Specifically, the contour line of the intermediate image is taken as the first contour line, the first contour line is divided into several feature lines, and one of them is selected as the first feature line using a random selection algorithm, which provides the basis for subsequent feature line matching.
S03, searching the other images according to the first characteristic point closest to the first characteristic line to obtain a second characteristic point;
according to the embodiment of the present invention, preferably, as shown in fig. 3, the step S03 of searching for the second feature point in the remaining images according to the first feature point closest to the first feature line includes:
S031, searching for the first feature point closest to the first feature line;
S032, judging whether the first feature point meets a preset positioning condition; if so, executing S033, and if not, executing S034;
S033, searching the remaining images according to the first feature point to obtain second feature points;
and S034, terminating the characteristic line matching.
As can be seen from S011 to S014 above, identical feature points in the three images have all been matched with one another, so the matched feature points can serve as references for feature line matching, ensuring that the matching is not affected even when a feature line is incomplete. Specifically, the distances from all matched feature points to the first feature line are first calculated and sorted, and the feature point with the smallest distance is taken as the first feature point. Since this first feature point has already been matched with feature points in the other two images, those corresponding points can be taken as the second feature points, which in turn provides the basis for searching for the second feature lines.
In this embodiment, a positioning condition is preferably preset, and the first feature point is used as a positioning point only when the condition is judged to be satisfied; otherwise the subsequent feature line matching is terminated. Specifically, the criterion is as follows: the two feature lines closest to the first feature point are found in the image, the closer of the two being the selected feature line; the distances from the feature point to these two feature lines are computed, and the shorter distance must be less than 0.7 times the other; the shorter distance must also be smaller than a fixed threshold, which in this embodiment is preferably set to 10 pixels. If the feature point satisfies these conditions, it is regarded as a positioning point of the selected feature line and step S033 is executed; if the two conditions are not both satisfied, the feature point cannot serve as a positioning point of the selected feature line, and the matching of this first feature line is skipped. The preset conditions ensure that the first feature point is a qualified positioning point of the feature line, which improves the matching success rate.
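A minimal sketch of this positioning test, assuming line segments are stored as flat (x1, y1, x2, y2) rows such as those returned by the detector in the previous sketch; the 0.7 ratio and the 10-pixel threshold come from this embodiment, while the function names are illustrative only.

```python
import numpy as np

def point_segment_distance(p, seg):
    """Distance from point p=(x, y) to a finite segment seg=(x1, y1, x2, y2)."""
    a, b = np.array(seg[:2], float), np.array(seg[2:], float)
    ab, ap = b - a, np.asarray(p, float) - a
    t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return float(np.linalg.norm(ap - t * ab))

def is_anchor_point(point, selected_line, all_lines, ratio=0.7, max_dist=10.0):
    """Positioning condition: `point` qualifies as a positioning point for
    `selected_line` if that line is the nearest one, the nearest distance is
    less than ratio * (distance to the second-nearest line), and it is also
    smaller than max_dist pixels."""
    if len(all_lines) < 2:
        return False
    dists = sorted((point_segment_distance(point, l), tuple(l)) for l in all_lines)
    (d1, l1), (d2, _) = dists[0], dists[1]
    return l1 == tuple(selected_line) and d1 < ratio * d2 and d1 < max_dist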
S04, matching a second feature line closest to the second feature point with the first feature line;
according to the embodiment of the present invention, as shown in fig. 4, preferably, the matching, at S04, of the second feature line closest to the second feature point with the first feature line includes:
S041, judging whether second feature points exist in the other images; if so, executing step S042, and if not, executing step S043;
S042, matching the second feature line closest to each second feature point with the first feature line;
and S043, terminating the characteristic line matching.
Following the same approach used to find the first feature point closest to the first feature line, the second feature line closest to each corresponding second feature point can be found in each of the other two images; thus, through the matching relationship between the first feature point and the second feature points, a matching relationship between the first feature line and the two second feature lines is established. Consequently, even if some of the three images are of low quality or constrained by the shooting mode, so that illumination, shooting angle, occlusion of the scene contour, or other problems prevent the straight-line detection algorithm from completely and accurately identifying the contour of the scene and discontinuous contour lines appear, these discontinuities do not affect the matching of the first and second feature lines. During feature line matching, the feature lines are not matched directly based on their own characteristics but indirectly through matched feature points lying close to them; that is, even if quality or shooting-mode problems leave the feature lines themselves with poor contour completeness, the matching between feature lines is unaffected. The requirements on imaging quality and shooting mode are therefore greatly reduced, and three-dimensional reconstruction can ultimately be achieved even from low-quality images or images constrained by the shooting mode.
As a preference in this embodiment, to ensure that a second feature point matching the first feature point exists in each of the other two images, it is necessary to judge whether the second feature points exist before matching the first and second feature lines, and to perform that matching only when both second feature points exist; otherwise the subsequent feature line matching is terminated. This eliminates mismatches to a certain degree.
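A corresponding sketch of the indirect matching itself: the second feature line in another image is taken as the detected segment nearest to the second feature point, and matching terminates when no second feature point exists. The helper point_segment_distance is the hypothetical one defined in the previous sketch, and the function name here is likewise an assumption.

```python
def match_line_via_point(point_match, other_lines):
    """S041-S043 sketch: indirectly match the first feature line into another
    image.  point_match is the second feature point (the match of the first
    feature point in that image), or None if no match exists.
    Returns the second feature line, or None to terminate the matching."""
    if point_match is None or len(other_lines) == 0:
        return None                                   # S043: terminate matching
    # S042: the second feature line is the detected line segment closest to
    # the second feature point.
    return min(other_lines, key=lambda l: point_segment_distance(point_match, l))
```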
According to the embodiment of the present invention, as shown in fig. 6, after matching the second feature line closest to the second feature point with the first feature line, S04 further includes:
S061, judging whether all first feature lines in the first contour line have been traversed;
S062, if yes, finishing the feature line matching, and executing step S05;
S063, if not, executing steps S02 to S04.
After a first feature line and its second feature lines have been matched, it must be ensured that every first feature line in the first contour line goes through the feature line matching process. The judgment in this embodiment therefore checks whether all first feature lines in the first contour line have been traversed; only after they have all been traversed is feature line matching finished, which guarantees that all matched feature line groups can subsequently be combined for three-dimensional reconstruction. If some first feature lines have not yet been traversed, steps S02 to S04 continue to be executed to match them.
And S05, performing the operation of three-dimensionally reconstructing the image based on the matching result.
According to the embodiment of the present invention, preferably, as shown in fig. 5, performing the operation of three-dimensionally reconstructing the image based on the matching result at S05 includes:
and S051, according to the position of the camera, the matched characteristic points and the matched characteristic lines, projecting the matched characteristic points to a three-dimensional space by utilizing a triangulation algorithm to obtain a three-dimensional scenery model.
According to the computed camera positions, the mutually matched feature points, and the mutually matched feature line segments, the matched feature points are projected into three-dimensional space with a triangulation algorithm to obtain a three-dimensional scene model. Because the feature lines of the three images cannot completely reflect the overall contour of the scene, the reconstructed model is likewise only a partial three-dimensional structure of the scene.
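A brief sketch of this projection step using OpenCV triangulation; cv2.triangulatePoints is this sketch's choice of tool rather than one named by the application, and K, R, and t are assumed to come from the pose estimation sketched earlier.

```python
import cv2
import numpy as np

def triangulate_matches(K, R, t, pts0, pts1):
    """S051 sketch: project matched feature points into 3D space.
    pts0, pts1 are Nx2 arrays of matched pixel coordinates in the base image
    and in a second image whose relative pose is (R, t)."""
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # base camera: [I | 0]
    P1 = K @ np.hstack([R, t.reshape(3, 1)])            # second camera: [R | t]
    X_h = cv2.triangulatePoints(P0, P1, pts0.T.astype(float), pts1.T.astype(float))
    X = (X_h[:3] / X_h[3]).T                            # homogeneous -> Euclidean, Nx3
    return X

# Matched feature line endpoints can be triangulated in the same way to obtain
# 3D line segments that preserve the straight contours of the scene.
```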
According to the embodiment of the present invention, as shown in fig. 8, after the performing the operation of three-dimensionally reconstructing the image based on the matching result at S05, the method further includes:
S081, judging whether steps S01 to S05 have been performed for all images;
S082, if yes, the three-dimensional reconstruction of the image is finished;
S083, if not, performing steps S01 to S05 based on the images not yet processed.
A partial three-dimensional structure cannot represent the whole scene, so the structure must be completed using other groups of images (one group being three images). After one group of images has gone through all the steps and completed its partial reconstruction of the three-dimensional model, it is judged whether all image groups have been subjected to steps S01 to S05; only when the judgment is yes does the routine terminate, completing the three-dimensional reconstruction of the image. If the judgment is no, steps S01 to S05 continue to be executed on one of the groups not yet processed, until the judgment is yes and the routine terminates.
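An illustrative top-level loop tying the steps together for multiple image groups, reusing the hypothetical helpers sketched above; the grouping strategy and the helper match_feature_lines are assumptions, since the application only requires that steps S01 to S05 be repeated until every group has been processed.

```python
import numpy as np

def reconstruct_scene(image_groups, K):
    """Run steps S01-S05 on every group of three images and merge the
    resulting partial 3D structures into one scene model (sketch only)."""
    scene_points = []
    for group in image_groups:                         # each group: three image paths
        pairs, lines = preprocess_triplet(group, K)    # S01 (S011-S014)
        matches = match_feature_lines(pairs, lines)    # S02-S04, hypothetical helper
        for j, data in pairs.items():                  # S05: partial reconstruction
            scene_points.append(
                triangulate_matches(K, data["R"], data["t"],
                                    data["pts0"], data["ptsj"]))
    return np.vstack(scene_points) if scene_points else np.empty((0, 3))
```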
From the above description, it can be seen that the present invention achieves the following technical effects:
in the embodiment of the application, in a manner based on feature line matching, the contour lines of at least three images are obtained; a first contour line is determined among the contour lines and a first feature line is selected from it; the remaining images are searched according to a first feature point closest to the first feature line to obtain second feature points; the second feature line closest to each second feature point is matched with the first feature line; and three-dimensional image reconstruction is performed based on the matching result. This prevents the completeness of a feature line from affecting feature line matching, thereby greatly reducing the requirements on imaging quality and shooting mode and solving the technical problem that these requirements are very high.
According to the embodiment of the present invention, as shown in fig. 7, after matching the second feature line closest to the second feature point with the first feature line, the method further includes:
S071, parametrically expressing at least three first feature lines in the matched first feature line group;
A first feature line group containing three first feature lines can be obtained either by direct feature line matching or by feature line matching based on the nearest feature point, and this group is input into a parameterization module for parametric expression. Specifically, because the camera positions differ when the three images are taken, the three mutually matched first feature lines must each be expressed mathematically in an image coordinate system established from the corresponding camera position. The corresponding model is then built with reference to these parametric expressions.
S072, substituting the parameterized expression result into a preset trifocal tensor model to judge whether a preset contour line condition is met;
A trifocal tensor model is built from the camera positions of the three images. Through this model it can be judged whether the three feature lines are the same feature line, that is, whether the first feature lines in the three different images are mismatched. Specifically, the three parametric expressions obtained for the three first feature lines are substituted into the trifocal tensor model and the model is evaluated: if the resulting equation holds, the three first feature lines reflect the same feature line and there is no mismatch; if the equation does not hold, at least one of the three first feature lines does not reflect the same feature line, and a mismatch exists. This provides the basis for subsequently screening out mismatched feature line groups.
Preferably, the presetting of the trifocal tensor model includes:
and calculating to obtain a trifocal tensor model according to the camera positions respectively corresponding to the three images.
The camera matrices corresponding to the three images are, respectively, P = [I | 0], P' = [A | a_4], and P'' = [B | b_4], where A = [a_1, a_2, a_3] and B = [b_1, b_2, b_3], and a_i and b_i (i = 1, 2, 3, 4) are the i-th columns of P' and P''. l, l', and l'' denote the feature lines to be matched, lying in the three images respectively. If the three feature lines are correctly matched, they should satisfy the following formula, which for the camera matrices above is the standard trifocal tensor line incidence relation:

l_i = l'^T (a_i b_4^T - a_4 b_i^T) l'',   i = 1, 2, 3.
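A small numerical sketch of this verification, assuming the cameras are given in the canonical form above; the tensor slices T_i = a_i b_4^T - a_4 b_i^T and the residual check are the standard trifocal line relation, and the tolerance value and function names are assumptions.

```python
import numpy as np

def trifocal_line_residual(P1, P2, l, l1, l2):
    """Check the trifocal line constraint for a matched line triplet.
    P1, P2 : 3x4 camera matrices [A | a4] and [B | b4] (base camera is [I | 0]).
    l, l1, l2 : homogeneous line coordinates (3-vectors) in the three images.
    Returns a residual between l and the line transferred from the other two
    views; a small residual indicates a consistent match."""
    A, a4 = P1[:, :3], P1[:, 3]
    B, b4 = P2[:, :3], P2[:, 3]
    # Trifocal tensor slices T_i = a_i * b4^T - a4 * b_i^T
    T = [np.outer(A[:, i], b4) - np.outer(a4, B[:, i]) for i in range(3)]
    l_transfer = np.array([l1 @ Ti @ l2 for Ti in T])   # transferred line in the base image
    cosang = abs(l_transfer @ l) / (np.linalg.norm(l_transfer) * np.linalg.norm(l) + 1e-12)
    return 1.0 - cosang

def lines_consistent(P1, P2, l, l1, l2, tol=1e-3):
    """True if the three lines plausibly arise from one 3D line (tolerance assumed)."""
    return trifocal_line_residual(P1, P2, l, l1, l2) < tol
```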
and S073, if the first feature lines meet the condition, screening the at least three matched first feature lines into a target matching group.
Through the judgment of the trifocal tensor model, it can be determined whether the feature lines in a first feature line group are mismatched. If they are mismatched, the mismatched feature line group is discarded directly; if not, the three first feature lines are confirmed as matched and screened into the target matching group to await subsequent three-dimensional reconstruction. Mismatched feature line groups are thereby screened out, and the use of more accurate feature line groups effectively improves the accuracy of the reconstructed three-dimensional image.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
According to an embodiment of the present invention, there is also provided an apparatus for implementing the above image reconstruction method based on feature line matching, as shown in fig. 9, the apparatus includes:
an obtaining module 10, configured to obtain contour lines of at least three images;
At least three images are selected from all of the images to form a working unit. In this embodiment, the images may be selected automatically by a computer using a random algorithm, or at least three images may be selected by a user through selection operations on a software interface. The latter is preferred, since manual comparison in the interface can ensure that one of the images has a relatively large scene overlap with each of the remaining images, and that the remaining images partially overlap one another; this guarantees the image quality and provides a basis for subsequent feature point and feature line matching.
According to the embodiment of the present invention, preferably, acquiring the contour lines of at least three images includes:
identifying the feature points in the three selected images;
performing feature point matching between any image and the other two images;
calculating the camera positions when the three images are shot by adopting an RANSAC algorithm, and screening out feature points which do not meet the camera position relation;
and rapidly identifying the outline of the scene in the image by adopting a FastLineDetector algorithm.
In this embodiment, three images are preferably selected, and the middle image of the three is defined as the one having a large scene overlap with each of the other two. The classical Structure from Motion technique is combined with the feature-line-based Structure from Motion technique, taking advantage of the fact that the classical technique places lower requirements on the images: only a relatively large scene overlap between pairs of images is needed, which greatly reduces the requirements on imaging.
Feature points of the three images are identified following the feature-line-based Structure from Motion technique, and feature point matching with the other two images is carried out based on the intermediate image, so that identical feature points in different images are matched with each other. Since the matched feature points may contain mismatches, a RANSAC algorithm is used to calculate the camera positions at which the three images were taken (different images have different shooting angles and positions) and to screen out feature points that do not satisfy the camera position relationship, so that the final set of feature points is more accurate. Finally, based on the matched feature points, the contour lines of the scene in each image are rapidly identified with a FastLineDetector algorithm. The contour lines are the outer contours of the scene; they effectively preserve its straight-line contours and help generate a more accurate three-dimensional model.
A selecting module 20, configured to determine a first contour line in the contour lines, and select a first feature line therefrom;
The contour lines identified by the FastLineDetector algorithm alone may differ considerably from the real contour of the scene, so the final three-dimensional model cannot faithfully reflect that contour and its accuracy is low. To solve this problem, a feature line matching technique is introduced. Specifically, the contour line of the intermediate image is taken as the first contour line, the first contour line is divided into several feature lines, and one of them is selected as the first feature line using a random selection algorithm, which provides the basis for subsequent feature line matching.
The searching module 30 is configured to search for a second feature point in the remaining images according to a first feature point closest to the first feature line;
according to the embodiment of the present invention, it is preferable that the searching for the second feature point in the remaining images according to the first feature point closest to the first feature line includes:
searching to obtain a first characteristic point closest to the first characteristic line;
judging whether the first characteristic point meets a preset positioning condition or not;
if yes, searching the rest images according to the first characteristic point to obtain a second characteristic point;
if not, the feature line matching is terminated.
As can be seen from the above, identical feature points in the three images have all been matched with one another, so the matched feature points can serve as references for feature line matching, ensuring that the matching is not affected even when a feature line is incomplete. Specifically, the distances from all matched feature points to the first feature line are first calculated and sorted, and the feature point with the smallest distance is taken as the first feature point. Since this first feature point has already been matched with feature points in the other two images, those corresponding points can be taken as the second feature points, which in turn provides the basis for searching for the second feature lines.
In this embodiment, a positioning condition is preferably preset, and the first feature point is used as a positioning point only when the condition is judged to be satisfied; otherwise the subsequent feature line matching is terminated. Specifically, the criterion is as follows: the two feature lines closest to the first feature point are found in the image, the closer of the two being the selected feature line; the distances from the feature point to these two feature lines are computed, and the shorter distance must be less than 0.7 times the other; the shorter distance must also be smaller than a fixed threshold, which in this embodiment is preferably set to 10 pixels. If the feature point satisfies these conditions, it is regarded as a positioning point of the selected feature line and execution continues; if the two conditions are not both satisfied, the feature point cannot serve as a positioning point of the selected feature line, and the matching of this first feature line is skipped. The preset conditions ensure that the first feature point is a qualified positioning point of the feature line, which improves the matching success rate.
A matching module 40, configured to match a second feature line closest to the second feature point with the first feature line;
according to the embodiment of the present invention, it is preferable that matching a second feature line closest to the second feature point with the first feature line includes:
judging whether second feature points exist in other images or not;
if so, matching a second characteristic line closest to the second characteristic point with the first characteristic line;
if not, the feature line matching is terminated.
Following the same approach used to find the first feature point closest to the first feature line, the second feature line closest to each corresponding second feature point can be found in each of the other two images; thus, through the matching relationship between the first feature point and the second feature points, a matching relationship between the first feature line and the two second feature lines is established. Consequently, even if some of the three images are of low quality or constrained by the shooting mode, so that illumination, shooting angle, occlusion of the scene contour, or other problems prevent the straight-line detection algorithm from completely and accurately identifying the contour of the scene and discontinuous contour lines appear, these discontinuities do not affect the matching of the first and second feature lines. During feature line matching, the feature lines are not matched directly based on their own characteristics but indirectly through matched feature points lying close to them; that is, even if quality or shooting-mode problems leave the feature lines themselves with poor contour completeness, the matching between feature lines is unaffected. The requirements on imaging quality and shooting mode are therefore greatly reduced, and three-dimensional reconstruction can ultimately be achieved even from low-quality images or images constrained by the shooting mode.
As a preference in this embodiment, to ensure that a second feature point matching the first feature point exists in each of the other two images, it is necessary to judge whether the second feature points exist before matching the first and second feature lines, and to perform that matching only when both second feature points exist; otherwise the subsequent feature line matching is terminated. This eliminates mismatches to a certain degree.
According to the embodiment of the present invention, after matching the second feature line closest to the second feature point with the first feature line, the method further includes:
judging whether all the first characteristic lines in the first contour line are traversed or not;
if yes, ending the feature line matching and entering the execution module 50;
if not, the selection module 20, the search module 30 and the matching module 40 are executed.
After a first feature line and its second feature lines have been matched, it must be ensured that every first feature line in the first contour line goes through the feature line matching process. The judgment in this embodiment therefore checks whether all first feature lines in the first contour line have been traversed; only after they have all been traversed is feature line matching finished, which guarantees that all matched feature line groups can subsequently be combined for three-dimensional reconstruction. If some first feature lines have not yet been traversed, the selecting module 20, the searching module 30, and the matching module 40 continue to be executed to match them.
And the execution module 50 is used for executing the operation of three-dimensional reconstruction images based on the matching result.
According to the embodiment of the present invention, preferably, as shown in fig. 5, performing the operation of three-dimensionally reconstructing the image based on the matching result at S05 includes:
and S051, according to the position of the camera, the matched characteristic points and the matched characteristic lines, projecting the matched characteristic points to a three-dimensional space by utilizing a triangulation algorithm to obtain a three-dimensional scenery model.
According to the computed camera positions, the mutually matched feature points, and the mutually matched feature line segments, the matched feature points are projected into three-dimensional space with a triangulation algorithm to obtain a three-dimensional scene model. Because the feature lines of the three images cannot completely reflect the overall contour of the scene, the reconstructed model is likewise only a partial three-dimensional structure of the scene.
According to the embodiment of the present invention, preferably, after the performing the operation of three-dimensionally reconstructing the image based on the matching result, the method further includes:
judging whether all the images execute the acquisition module 10, the selection module 20, the search module 30, the matching module 40 and the execution module 50;
if yes, completing the three-dimensional reconstruction of the image;
if not, the acquisition module 10, the selection module 20, the search module 30, the matching module 40, and the execution module 50 are executed based on the unexecuted image.
A partial three-dimensional structure cannot represent the whole scene, so the structure must be completed using other groups of images (one group being three images). After one group of images has been processed by all the modules and completed its partial reconstruction of the three-dimensional model, it is judged whether all image groups have been processed by the obtaining module 10, the selecting module 20, the searching module 30, the matching module 40, and the execution module 50; only when the judgment is yes does the routine terminate, completing the three-dimensional reconstruction of the image. If the judgment is no, the obtaining module 10, the selecting module 20, the searching module 30, the matching module 40, and the execution module 50 continue to process one of the groups not yet handled, until the judgment is yes and the routine terminates.
From the above description, it can be seen that the present invention achieves the following technical effects:
in the embodiment of the application, in a manner based on feature line matching, the contour lines of at least three images are obtained; a first contour line is determined among the contour lines and a first feature line is selected from it; the remaining images are searched according to a first feature point closest to the first feature line to obtain second feature points; the second feature line closest to each second feature point is matched with the first feature line; and three-dimensional image reconstruction is performed based on the matching result. This prevents the completeness of a feature line from affecting feature line matching, thereby greatly reducing the requirements on imaging quality and shooting mode and solving the technical problem that these requirements are very high.
According to the embodiment of the present invention, after matching the second feature line closest to the second feature point with the first feature line, the method further includes:
parametrically expressing at least three first characteristic lines in the matched first characteristic line group;
A first feature line group containing three first feature lines can be obtained either by direct feature line matching or by feature line matching based on the nearest feature point, and this group is input into a parameterization module for parametric expression. Specifically, because the camera positions differ when the three images are taken, the three mutually matched first feature lines must each be expressed mathematically in an image coordinate system established from the corresponding camera position. The corresponding model is then built with reference to these parametric expressions.
Substituting the parameterized expression result into a preset trifocal tensor model to judge whether a preset contour line condition is met or not;
A trifocal tensor model is built from the camera positions of the three images. Through this model it can be judged whether the three feature lines are the same feature line, that is, whether the first feature lines in the three different images are mismatched. Specifically, the three parametric expressions obtained for the three first feature lines are substituted into the trifocal tensor model and the model is evaluated: if the resulting equation holds, the three first feature lines reflect the same feature line and there is no mismatch; if the equation does not hold, at least one of the three first feature lines does not reflect the same feature line, and a mismatch exists. This provides the basis for subsequently screening out mismatched feature line groups.
Preferably, the presetting of the trifocal tensor model includes:
and calculating to obtain a trifocal tensor model according to the camera positions respectively corresponding to the three images.
The camera matrices corresponding to the three images are, respectively, P = [I | 0], P' = [A | a_4], and P'' = [B | b_4], where A = [a_1, a_2, a_3] and B = [b_1, b_2, b_3], and a_i and b_i (i = 1, 2, 3, 4) are the i-th columns of P' and P''. l, l', and l'' denote the feature lines to be matched, lying in the three images respectively. If the three feature lines are correctly matched, they should satisfy the following formula, which for the camera matrices above is the standard trifocal tensor line incidence relation:

l_i = l'^T (a_i b_4^T - a_4 b_i^T) l'',   i = 1, 2, 3.
and if so, screening the at least three matched first characteristic lines into a target matching group.
Through the judgment of the trifocal tensor model, it can be determined whether the feature lines in a first feature line group are mismatched. If they are mismatched, the mismatched feature line group is discarded directly; if not, the three first feature lines are confirmed as matched and screened into the target matching group to await subsequent three-dimensional reconstruction. Mismatched feature line groups are thereby screened out, and the use of more accurate feature line groups effectively improves the accuracy of the reconstructed three-dimensional image.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented with program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, or they may each be fabricated as an individual integrated circuit module, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An image reconstruction method based on feature line matching is characterized by comprising the following steps:
S01, obtaining contour lines of at least three images;
S02, determining a first contour line in the contour lines, and selecting a first feature line from the first contour line;
S03, searching the other images according to a first feature point closest to the first feature line to obtain a second feature point;
S04, matching a second feature line closest to the second feature point with the first feature line;
and S05, performing the operation of three-dimensionally reconstructing an image based on the matching result.
2. The image reconstruction method according to claim 1, wherein the step S01 of obtaining contour lines of at least three images comprises:
S011, identifying feature points in the three selected images;
S012, taking any one of the images as a reference, matching feature points between that image and the other two images;
S013, calculating the camera positions when the three images were shot by adopting a RANSAC algorithm, and screening out feature points which do not satisfy the camera position relation;
S014, adopting a FastLineDetector algorithm to quickly identify the contour lines of the scenery in the images.
3. The image reconstruction method according to claim 1, wherein the step S03 of searching for the second feature point in the remaining images based on the first feature point closest to the first feature line includes:
S031, searching for the first feature point closest to the first feature line;
S032, judging whether the first feature point meets a preset positioning condition; if so, executing S033, and if not, executing S034;
S033, searching the remaining images according to the first feature point to obtain a second feature point;
and S034, terminating the feature line matching.
4. The image reconstruction method according to claim 1, wherein the step S04 of matching a second feature line closest to the second feature point with the first feature line comprises:
S041, judging whether a second feature point exists in the other images; if so, executing step S042, and if not, executing step S043;
S042, matching a second feature line closest to the second feature point with the first feature line;
and S043, terminating the feature line matching.
5. The image reconstruction method according to claim 2, wherein the step S05 of performing the operation of three-dimensionally reconstructing an image based on the matching result comprises:
and S051, according to the camera positions, the matched feature points and the matched feature lines, projecting the matched feature points into three-dimensional space by using a triangulation algorithm to obtain a three-dimensional scenery model.
6. The image reconstruction method according to claim 1, wherein the step S04, after matching a second feature line closest to the second feature point with the first feature line, further comprises:
S061, judging whether all first feature lines in the first contour line have been traversed;
S062, if yes, finishing the feature line matching, and executing step S05;
S063, if not, executing steps S02 to S04.
7. The image reconstruction method according to claim 1, wherein the step S04, after matching a second feature line closest to the second feature point with the first feature line, further comprises:
S071, parameterizing at least three first feature lines in the matched first feature line group;
S072, substituting the parameterized expression result into a preset trifocal tensor model to judge whether a preset contour line condition is met;
and S073, if the condition is met, screening the at least three matched first feature lines into a target matching group.
8. The image reconstruction method according to claim 1, wherein the step S05, after the step of performing the operation of reconstructing the image in three dimensions based on the matching result, further comprises:
S081, judging whether steps S01 to S05 have been performed for all the images;
S082, if yes, finishing the three-dimensional reconstruction of the images;
S083, if not, performing steps S01 to S05 on the images that have not yet been processed.
9. An image reconstruction apparatus based on feature line matching, comprising:
an obtaining module 10, configured to obtain contour lines of at least three images;
a selecting module 20, configured to determine a first contour line in the contour lines, and select a first feature line therefrom;
a searching module 30, configured to search for a second feature point in the remaining images according to a first feature point closest to the first feature line;
a matching module 40, configured to match a second feature line closest to the second feature point with the first feature line;
and an execution module 50, configured to perform the operation of three-dimensionally reconstructing an image based on the matching result.
10. A storage medium for storing the image reconstruction method based on the feature line matching according to any one of claims 1 to 8.
CN202110452921.1A 2021-04-25 2021-04-25 Image reconstruction method and device based on characteristic line matching and storage medium Pending CN113160389A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110452921.1A CN113160389A (en) 2021-04-25 2021-04-25 Image reconstruction method and device based on characteristic line matching and storage medium

Publications (1)

Publication Number Publication Date
CN113160389A true CN113160389A (en) 2021-07-23

Family

ID=76870729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110452921.1A Pending CN113160389A (en) 2021-04-25 2021-04-25 Image reconstruction method and device based on characteristic line matching and storage medium

Country Status (1)

Country Link
CN (1) CN113160389A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504684A (en) * 2014-12-03 2015-04-08 小米科技有限责任公司 Edge extraction method and device
CN106485737A (en) * 2015-08-25 2017-03-08 南京理工大学 Cloud data based on line feature and the autoregistration fusion method of optical image
US10186049B1 (en) * 2017-03-06 2019-01-22 URC Ventures, Inc. Determining changes in object structure over time using mobile device images
CN107256406A (en) * 2017-04-19 2017-10-17 深圳清华大学研究院 Overlapping fibers image partition method, device, storage medium and computer equipment
CN110738111A (en) * 2019-09-11 2020-01-31 鲁班嫡系机器人(深圳)有限公司 Multi-purpose-based matching and gesture recognition method, device and system
CN112037200A (en) * 2020-08-31 2020-12-04 上海交通大学 Method for automatically identifying anatomical features and reconstructing model in medical image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Chunxiao; ZHANG Juan: "Multi-view three-dimensional reconstruction based on the trifocal tensor", Journal of Biomedical Engineering, No. 04, pages 769 - 774 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115564837A (en) * 2022-11-17 2023-01-03 歌尔股份有限公司 Visual positioning method, device and system
CN115564837B (en) * 2022-11-17 2023-04-18 歌尔股份有限公司 Visual positioning method, device and system

Similar Documents

Publication Publication Date Title
CN111667520B (en) Registration method and device for infrared image and visible light image and readable storage medium
US6072903A (en) Image processing apparatus and image processing method
CN108537876A (en) Three-dimensional rebuilding method, device, equipment based on depth camera and storage medium
US20120177284A1 (en) Forming 3d models using multiple images
CN109410316B (en) Method for three-dimensional reconstruction of object, tracking method, related device and storage medium
KR100681320B1 (en) Method for modelling three dimensional shape of objects using level set solutions on partial difference equation derived from helmholtz reciprocity condition
CN111598993A (en) Three-dimensional data reconstruction method and device based on multi-view imaging technology
CN109640066B (en) Method and device for generating high-precision dense depth image
CN113689578B (en) Human body data set generation method and device
CN106023147B (en) The method and device of DSM in a kind of rapidly extracting linear array remote sensing image based on GPU
CN110619660A (en) Object positioning method and device, computer readable storage medium and robot
CN112184793B (en) Depth data processing method and device and readable storage medium
WO2021076185A1 (en) Joint depth prediction from dual-cameras and dual-pixels
CN115578516A (en) Three-dimensional imaging method, device, equipment and storage medium
Kallwies et al. Triple-SGM: stereo processing using semi-global matching with cost fusion
CN111310567A (en) Face recognition method and device under multi-person scene
CN113160389A (en) Image reconstruction method and device based on characteristic line matching and storage medium
CN108460368B (en) Three-dimensional image synthesis method and device and computer-readable storage medium
CN114004935A (en) Method and device for three-dimensional modeling through three-dimensional modeling system
CN117726747A (en) Three-dimensional reconstruction method, device, storage medium and equipment for complementing weak texture scene
CN109754467B (en) Three-dimensional face construction method, computer storage medium and computer equipment
CN110514140B (en) Three-dimensional imaging method, device, equipment and storage medium
CN109712230B (en) Three-dimensional model supplementing method and device, storage medium and processor
CN115713547A (en) Motion trail generation method and device and processing equipment
CN116051736A (en) Three-dimensional reconstruction method, device, edge equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination