CN111489439A - Three-dimensional line graph reconstruction method and device and electronic equipment - Google Patents


Info

Publication number
CN111489439A
Authority
CN
China
Prior art keywords
line segment
dimensional
frame image
determining
line
Prior art date
Legal status
Granted
Application number
CN202010293554.0A
Other languages
Chinese (zh)
Other versions
CN111489439B (en)
Inventor
查红彬
王求元
姜立
方奕庚
Current Assignee
Peking University
BOE Technology Group Co Ltd
Original Assignee
Peking University
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Peking University, BOE Technology Group Co Ltd filed Critical Peking University
Priority to CN202010293554.0A priority Critical patent/CN111489439B/en
Publication of CN111489439A publication Critical patent/CN111489439A/en
Application granted granted Critical
Publication of CN111489439B publication Critical patent/CN111489439B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a three-dimensional line graph reconstruction method, a three-dimensional line graph reconstruction device and electronic equipment, wherein the method comprises the following steps: acquiring a two-dimensional characteristic line segment of a previous frame image and an observation line segment of a current frame image; determining a matched line segment matched with the two-dimensional characteristic line segment in the observation line segment; determining the current pose of the shooting equipment according to the matching line segment; and solving the triangularization of the matching line segment according to the current pose to obtain a three-dimensional line graph of the current frame image. The whole calculation process is simple, the efficiency of obtaining the three-dimensional line graph is improved, and the three-dimensional line graph of the current frame image can be obtained in real time.

Description

Three-dimensional line graph reconstruction method and device and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a three-dimensional line graph reconstruction method, a three-dimensional line graph reconstruction device and electronic equipment.
Background
In existing three-dimensional line graph reconstruction methods, such as Line3D++, line segment features are processed by line segment feature matching, and a whole set of collected data must be jointly optimized. The current reconstruction quality cannot be observed in real time during collection, and the computational demand is extremely high, often requiring hours or even days of calculation.
That is, the conventional three-dimensional line graph reconstruction method has a large amount of data to process, and cannot generate a three-dimensional line graph in real time.
Disclosure of Invention
The invention aims to provide a three-dimensional line graph reconstruction method, a three-dimensional line graph reconstruction device and electronic equipment, and aims to solve the problems that the existing three-dimensional line graph reconstruction method is large in data processing amount and cannot generate a three-dimensional line graph in real time.
In order to achieve the above object, the present invention provides a three-dimensional line graph reconstruction method including:
acquiring a two-dimensional characteristic line segment of a previous frame image and an observation line segment of a current frame image;
determining a matched line segment matched with the two-dimensional characteristic line segment in the observation line segment;
determining the current pose of the shooting equipment according to the matching line segment;
and solving the triangularization of the matching line segment according to the current pose to obtain a three-dimensional line graph of the current frame image.
Further, the determining a matching line segment matching the two-dimensional feature line segment in the observation line segment includes:
sampling the two-dimensional characteristic line segment to obtain a sampling point;
acquiring a predicted point of the sampling point in the current frame image according to the sampling point;
fitting the predicted points to obtain predicted line segments;
if the observation line segment comprises a line segment which is partially or completely overlapped with the prediction line segment, determining the line segment which is partially or completely overlapped with the prediction line segment in the observation line segment as a candidate line segment;
and determining the line segment with the longest length in the candidate line segments as a matched line segment matched with the two-dimensional characteristic line segment.
Further, after the fitting the predicted points to obtain predicted line segments, the method further includes:
and if the observation line segment does not comprise a line segment partially or completely coincident with the prediction line segment, generating a line segment, and determining the generated line segment as a matching line segment matched with the two-dimensional characteristic line segment.
Further, the determining the current pose of the shooting device according to the matching line segment includes:
acquiring three-dimensional characteristic points of the previous frame image and two-dimensional characteristic points of the current frame image;
obtaining a first error according to the three-dimensional characteristic points and the two-dimensional characteristic points;
projecting the three-dimensional line segment of the three-dimensional line graph of the previous frame image to the current frame image to obtain a two-dimensional projection line segment;
obtaining a second error according to the two-dimensional projection line segment and the matching line segment;
and determining the current pose of the shooting equipment according to the first error and the second error.
An embodiment of the present invention further provides a three-dimensional line graph reconstruction apparatus, including:
the first acquisition module is used for acquiring a two-dimensional characteristic line segment of a previous frame image and an observation line segment of a current frame image;
the first determining module is used for determining a matched line segment matched with the two-dimensional characteristic line segment in the observation line segment;
the second determining module is used for determining the current pose of the shooting equipment according to the matching line segment;
and the second acquisition module is used for solving the triangularization of the matching line segment according to the current pose to acquire a three-dimensional line graph of the current frame image.
Further, the first determining module includes:
the sampling sub-module is used for sampling the two-dimensional characteristic line segment to obtain a sampling point;
the first obtaining sub-module is used for obtaining a prediction point of the sampling point in the current frame image according to the sampling point;
the second obtaining submodule is used for fitting the predicted points to obtain predicted line segments;
a first determining submodule, configured to determine, as a candidate line segment, a line segment that is partially or completely overlapped with the predicted line segment in the observation line segment if the observation line segment includes a line segment that is partially or completely overlapped with the predicted line segment;
and the second determining submodule is used for determining the line segment with the longest length in the candidate line segments as the matched line segment matched with the two-dimensional characteristic line segment.
Further, the first determining module further comprises:
and the third determining submodule is used for generating a line segment if the observation line segment does not comprise a line segment partially or completely coincident with the prediction line segment, and determining the generated line segment as a matching line segment matched with the two-dimensional characteristic line segment.
Further, the second determining module includes:
the third obtaining submodule is used for obtaining the three-dimensional characteristic points of the previous frame image and the two-dimensional characteristic points of the current frame image;
the fourth obtaining submodule is used for obtaining a first error according to the three-dimensional characteristic point and the two-dimensional characteristic point;
the fifth obtaining submodule is used for projecting the three-dimensional line segment of the three-dimensional line graph of the previous frame image to the current frame image to obtain a two-dimensional projection line segment;
the sixth obtaining submodule is used for obtaining a second error according to the two-dimensional projection line segment and the matching line segment;
and the fourth determining submodule is used for determining the current pose of the shooting equipment according to the first error and the second error.
An embodiment of the present invention further provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the three-dimensional line graph reconstruction method provided by the embodiment of the invention.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the three-dimensional line graph reconstruction method provided in the embodiment of the present invention.
In the embodiment of the invention, a two-dimensional characteristic line segment of a previous frame image and an observation line segment of a current frame image are obtained; determining a matched line segment matched with the two-dimensional characteristic line segment in the observation line segment; determining the current pose of the shooting equipment according to the matching line segment; and solving the triangularization of the matching line segment according to the current pose to obtain a three-dimensional line graph of the current frame image. The whole calculation process is simple, the efficiency of obtaining the three-dimensional line graph is improved, and the three-dimensional line graph of the current frame image can be obtained in real time.
Drawings
Fig. 1 is a flowchart of a three-dimensional line graph reconstruction method according to an embodiment of the present invention;
FIG. 1a is a schematic diagram of a coordinate system for representing three-dimensional line segments according to an embodiment of the present invention;
FIG. 1b is a schematic diagram of a reprojection method for providing three-dimensional segments according to an embodiment of the present invention;
FIG. 1c is a schematic diagram of a two-dimensional projection line segment after a three-dimensional line segment is provided for re-projection according to an embodiment of the present invention;
FIG. 1d is a schematic diagram of a Bayesian network provided by an embodiment of the present invention;
fig. 2 is a structural diagram of a three-dimensional line drawing reconstruction apparatus according to an embodiment of the present invention;
fig. 3 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, fig. 1 is a flowchart of a three-dimensional line graph reconstruction method according to an embodiment of the present invention, and as shown in fig. 1, the embodiment provides a three-dimensional line graph reconstruction method applied to an electronic device, including the following steps:
step 101, acquiring a two-dimensional characteristic line segment of a previous frame image and an observation line segment of a current frame image.
The shooting device captures consecutive frame images; the previous frame image is the image captured before and adjacent to the current frame image. In this step, the two-dimensional characteristic line segment of the previous frame image may be understood as the matching line segment of the previous frame image, which was obtained from the two-dimensional characteristic line segment of the frame before it. The same relationship holds between the previous frame image and the frame captured before and adjacent to it, and is not repeated here.
For the first frame image, the first frame image has no previous frame image, and therefore, the determination of the match line segment of the first frame image can be determined by using a method in the prior art, which is not limited herein. For other frame images except for the first frame image, the method in the present embodiment may be adopted to determine the matching line segment. Furthermore, the matching line segment of the current frame image is used as a two-dimensional characteristic line segment to participate in determining the matching line segment of the next frame image.
The observation line segment of the current frame image can be obtained by a Line Segment Detector (LSD) algorithm.
And 102, determining a matched line segment matched with the two-dimensional characteristic line segment in the observation line segment.
This step is the line segment tracking part: the two-dimensional characteristic line segment of the previous frame is provided to the current frame so as to optimize the observation line segment obtained from the current frame. The specific process includes the following steps:
sampling the two-dimensional characteristic line segment to obtain a sampling point;
acquiring a predicted point of the sampling point in the current frame image according to the sampling point;
fitting the predicted points to obtain predicted line segments;
if the observation line segment comprises a line segment which is partially or completely overlapped with the prediction line segment, determining the line segment which is partially or completely overlapped with the prediction line segment in the observation line segment as a candidate line segment;
and determining the line segment with the longest length in the candidate line segments as a matched line segment matched with the two-dimensional characteristic line segment.
Specifically, for the first frame image, the first frame image has no previous frame image, and therefore, the determination of the match line segment of the first frame image may be determined by a method in the prior art, which is not limited herein. For other frame images except for the first frame image, the method in the present embodiment may be adopted to determine the matching line segment. Furthermore, the matching line segment of the current frame image is used as a two-dimensional characteristic line segment to participate in determining the matching line segment of the next frame image.
When the two-dimensional characteristic line segment is sampled, it can be sampled at equal intervals to obtain sampling points; the Lucas-Kanade (LK) optical flow algorithm is then used to solve for the positions of the sampling points in the current frame image. These position points are the prediction points, which may also be called optical flow points, and they are then fitted into a prediction line segment using the least squares method.
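For illustration only, the fitting step above can be sketched in a few lines of numpy. The LK optical-flow output is replaced here by hypothetical precomputed prediction points, and the endpoints of the fitted prediction line segment are taken as the extremes of the points projected onto the fitted direction:

```python
import numpy as np

def fit_line_segment(points):
    """Fit a 2D line to prediction (optical-flow) points by total least
    squares, then clip it to the points' extent to get a line segment."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction of the point cloud = direction of the fitted line.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    # Project points onto the direction to find the segment endpoints.
    t = (pts - centroid) @ direction
    p1 = centroid + t.min() * direction
    p2 = centroid + t.max() * direction
    return p1, p2

# Hypothetical prediction points lying near the line y = 2x + 1.
pts = [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.0)]
p1, p2 = fit_line_segment(pts)
```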
The observation line segment may include one or more observation sub-line segments, and the prediction line segment may include one or more prediction sub-line segments. The prediction line segment is compared with the observation line segment, determining in turn whether each observation sub-line segment partially or completely coincides with any prediction sub-line segment of the prediction line segment; if so, the observation sub-line segment that partially or completely coincides with the prediction line segment is determined as a candidate line segment. Partial or complete coincidence here means that the observation sub-line segment and the prediction sub-line segment partially or completely coincide: for example, if the observation sub-line segment and the prediction sub-line segment completely overlap, they are considered completely coincident; if they intersect or share a common section, they are considered partially coincident.
The candidate line segment can comprise one or more candidate sub-line segments, and if the candidate line segment comprises one candidate sub-line segment, the candidate sub-line segment is determined as a matched line segment matched with the two-dimensional characteristic line segment; and if the candidate line segment comprises a plurality of candidate sub-line segments, determining the line segment with the longest length in the candidate line segments as a matched line segment matched with the two-dimensional characteristic line segment.
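The candidate selection above can be sketched as follows. The overlap test (closeness to the predicted line plus overlap in extent along its direction) and the distance threshold are illustrative assumptions, not values specified by the text:

```python
import numpy as np

def longest_overlapping_segment(predicted, observed, max_dist=3.0):
    """Among observed segments, keep those that partially or completely
    overlap the predicted segment, and return the longest one (or None)."""
    p1, p2 = np.asarray(predicted, dtype=float)
    d = p2 - p1
    length = np.linalg.norm(d)
    d = d / length
    n = np.array([-d[1], d[0]])  # unit normal of the predicted line

    best, best_len = None, -1.0
    for seg in observed:
        a, b = np.asarray(seg, dtype=float)
        # Reject segments whose endpoints are far from the predicted line.
        if max(abs((a - p1) @ n), abs((b - p1) @ n)) > max_dist:
            continue
        # Overlap test along the predicted direction.
        ta, tb = sorted([(a - p1) @ d, (b - p1) @ d])
        if tb < 0 or ta > length:
            continue  # no overlap in extent
        seg_len = np.linalg.norm(b - a)
        if seg_len > best_len:
            best, best_len = seg, seg_len
    return best

predicted = [(0, 0), (10, 0)]
observed = [
    [(2, 0.5), (6, 0.5)],   # overlaps, length 4
    [(1, 0.2), (9, 0.2)],   # overlaps, length 8 -> longest candidate
    [(20, 0), (25, 0)],     # no overlap in extent
    [(3, 50), (7, 50)],     # too far from the line
]
best = longest_overlapping_segment(predicted, observed)
```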
In this embodiment, when a matching line segment matching the two-dimensional characteristic line segment in the observation line segment is determined, since the whole processing process is performed on a two-dimensional layer, the processing process has a small amount of calculation and high processing efficiency.
Further, in an embodiment of the present invention, after the fitting the predicted points to obtain predicted line segments, the method further includes:
and if the observation line segment does not comprise a line segment partially or completely coincident with the prediction line segment, generating a line segment, and determining the generated line segment as a matching line segment matched with the two-dimensional characteristic line segment.
Specifically, if the observation line segment does not include a line segment partially or completely overlapping the prediction line segment, a matching line segment cannot be obtained from the observation line segment, and in order to prevent the line segment from being broken or lost, maintenance may be performed by using methods such as fusion and robust detection, for example, a line segment is generated, and the generated line segment is determined as a matching line segment matching the two-dimensional characteristic line segment.
And 103, determining the current pose of the shooting equipment according to the matching line segment.
The current pose of the capture device may include rotation and translation of the capture device, which may be a video camera, a camera, or the like. And when the shooting equipment is a camera, the current pose of the shooting equipment is the rotation and translation of the current camera.
Specifically, the processing procedure of this step includes:
acquiring three-dimensional characteristic points of the previous frame image and two-dimensional characteristic points of the current frame image;
obtaining a first error according to the three-dimensional characteristic points and the two-dimensional characteristic points;
projecting the three-dimensional line segment of the three-dimensional line graph of the previous frame image to the current frame image to obtain a two-dimensional projection line segment;
obtaining a second error according to the two-dimensional projection line segment and the matching line segment;
and determining the current pose of the shooting equipment according to the first error and the second error.
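As a toy illustration of the first error (the point re-projection error) used in these steps, with hypothetical camera intrinsics, pose, and point correspondence — in this synthetic setup the 3D point projects exactly onto the observed 2D point, so the error is zero:

```python
import numpy as np

def point_reprojection_error(P, p, R, t, K):
    """First error: distance in the current frame between the observed 2D
    feature point p and the projection of the 3D feature point P."""
    X = R @ P + t            # 3D point in current camera coordinates
    x = K @ X
    proj = x[:2] / x[2]      # dehomogenize
    return np.linalg.norm(proj - p)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                   # identity pose for the sketch
t = np.zeros(3)
P = np.array([0.2, -0.1, 2.0])  # hypothetical 3D feature point
p = np.array([370.0, 215.0])    # hypothetical matched 2D feature point
err = point_reprojection_error(P, p, R, t, K)
```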
Specifically, an ORB (Oriented FAST and Rotated BRIEF) feature extraction method is used to determine the correspondence between the three-dimensional feature points of the previous frame image and the two-dimensional feature points of the current frame image. Starting from the current pose of the shooting device, the pose of the shooting device corresponding to the current frame is solved by optimizing the point and line re-projection errors. The re-projection errors of the points and of the line segment endpoints are given by the following expressions:

ε_p = ‖π(K_p(RP + t)) − p‖

l = K_l(Rn + [t]_×Rv)

ε_l = d(c, l) + d(d, l)

where point A is a three-dimensional feature point of the previous frame image, P is the coordinate of point A, point B is the two-dimensional feature point corresponding to point A in the current frame, p is the coordinate of point B, R is the camera orientation, t is the camera translation, K_p is the camera intrinsic matrix, π(·) denotes dehomogenization, and ε_p (the first error) is the point re-projection error, i.e. the distance between point B and the projection of point A onto the current frame. T is the camera pose.

A three-dimensional line segment is parameterized with Plücker coordinates as L = (nᵀ vᵀ)ᵀ, where, as shown in Fig. 1a, v is the direction of the line segment and n is the normal vector of the plane formed by v and the origin. Using the camera intrinsic parameters and the extrinsic parameters R, t of the shooting device, the three-dimensional line segments of the three-dimensional line graph of the previous frame image are projected onto the current frame image to obtain two-dimensional projection line segments, and the error ε_l (i.e. the second error) between each two-dimensional projection line segment l and the endpoints c and d of the corresponding matching line segment is calculated, d(·, l) denoting the point-to-line distance.
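A sketch of the Plücker-line re-projection l = K_l(Rn + [t]_×Rv) and of the endpoint-to-line distances that make up the second error. For simplicity K_l is taken as the identity (the exact form of the line-projection intrinsic matrix is not given in the text), and the 3D line, with direction v and moment n, is hypothetical:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def project_pluecker_line(n, v, R, t, K_l):
    """Project a 3D line L = (n^T v^T)^T onto the image: l = K_l(Rn + [t]x Rv)."""
    return K_l @ (R @ n + skew(t) @ (R @ v))

def point_line_distance(pt, l):
    """Distance from 2D point pt to the homogeneous image line l."""
    return abs(l[0] * pt[0] + l[1] * pt[1] + l[2]) / np.hypot(l[0], l[1])

# Hypothetical 3D line: the set of points (x, 0, 2); direction v, moment n = P x v.
v = np.array([1.0, 0.0, 0.0])
n = np.cross(np.array([0.0, 0.0, 2.0]), v)   # = (0, 2, 0)
R, t, K_l = np.eye(3), np.zeros(3), np.eye(3)
l = project_pluecker_line(n, v, R, t, K_l)   # image line y = 0
# Second error for hypothetical matched endpoints c and d:
c, d = (5.0, 0.0), (-1.0, 0.5)
err_l = point_line_distance(c, l) + point_line_distance(d, l)
```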
The following error function is minimized using the Levenberg-Marquardt (LM) algorithm:

min_T  Σ ‖ε_p‖² + Σ ‖ε_l‖²

to obtain the current pose T = (R, t) of the shooting device.
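The minimization is a nonlinear least-squares problem. As a simplified stand-in for the full LM pose optimization (the rotation is fixed to the identity here, so only the translation is estimated, and the data are synthetic), a few Gauss-Newton iterations on the point re-projection residuals look like:

```python
import numpy as np

def gauss_newton_translation(P3d, p2d, K, iters=10):
    """Recover camera translation t by Gauss-Newton on point re-projection
    residuals, with the rotation fixed to the identity for simplicity."""
    t = np.zeros(3)
    for _ in range(iters):
        residuals, jac = [], []
        for P, p in zip(P3d, p2d):
            u, v, w = K @ (P + t)
            residuals.append([u / w - p[0], v / w - p[1]])
            # d(projection)/dt via the chain rule through K.
            d_proj = np.array([[1 / w, 0.0, -u / w**2],
                               [0.0, 1 / w, -v / w**2]])
            jac.append(d_proj @ K)
        r = np.asarray(residuals).ravel()
        J = np.vstack(jac)
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        t = t + delta
    return t

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.5])
P3d = np.array([[0.0, 0.0, 2.0], [0.5, 0.3, 3.0],
                [-0.4, 0.2, 2.5], [0.3, -0.5, 4.0]])

def project(P):
    X = K @ (P + t_true)
    return (X[0] / X[2], X[1] / X[2])

p2d = [project(P) for P in P3d]       # synthetic observations
t_est = gauss_newton_translation(P3d, p2d, K)
```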
As shown in Figs. 1b-1c, L_w is a three-dimensional line segment and l_c is a two-dimensional projection line segment; F_c represents the current frame image, C_w is the pose of the camera when the previous image was taken, C_c is the current pose, and R_c, t_c are the camera orientation and camera translation, respectively. The matching line segment (i.e. the segment of the observation line segments that matches the two-dimensional feature line segment) is shown in dashed lines in Fig. 1c.
And 104, solving triangularization on the matching line segments according to the current pose to obtain a three-dimensional line graph of the current frame image.
The matching line segments are triangulated using the current pose of the shooting device to obtain three-dimensional line segments, thereby obtaining the three-dimensional line graph of the current frame image.
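A linear-triangulation sketch of step 104. The patent does not give the exact triangulation formula; here each endpoint of a matched segment pair is triangulated by the standard DLT method from two views (endpoint correspondence is assumed), with hypothetical projection matrices:

```python
import numpy as np

def triangulate_point(M1, M2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views with
    3x4 projection matrices M1, M2 and 2D observations x1, x2."""
    A = np.vstack([
        x1[0] * M1[2] - M1[0],
        x1[1] * M1[2] - M1[1],
        x2[0] * M2[2] - M2[0],
        x2[1] * M2[2] - M2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A = homogeneous solution
    X = vt[-1]
    return X[:3] / X[3]

def triangulate_segment(M1, M2, seg1, seg2):
    """Triangulate a matched 2D segment pair into a 3D line segment by
    triangulating its two endpoints."""
    return (triangulate_point(M1, M2, seg1[0], seg2[0]),
            triangulate_point(M1, M2, seg1[1], seg2[1]))

# Two hypothetical views (identity intrinsics): camera 1 at the origin,
# camera 2 translated by 1 along x.
M1 = np.hstack([np.eye(3), np.zeros((3, 1))])
M2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
seg1 = [(0.0, 0.0), (0.25, 0.25)]   # projections of (0,0,2) and (1,1,4)
seg2 = [(-0.5, 0.0), (0.0, 0.25)]
A_end, B_end = triangulate_segment(M1, M2, seg1, seg2)
```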
In the embodiment, a two-dimensional characteristic line segment of a previous frame image and an observation line segment of a current frame image are obtained; determining a matched line segment matched with the two-dimensional characteristic line segment in the observation line segment; determining the current pose of the shooting equipment according to the matching line segment; and solving the triangularization of the matching line segment according to the current pose to obtain a three-dimensional line graph of the current frame image. The whole calculation process is simple, the efficiency of obtaining the three-dimensional line graph is improved, and the three-dimensional line graph of the current frame image can be obtained in real time.
In addition, the three-dimensional line graph reconstruction method provided by the invention can be used in any scene, without relying on the Manhattan-world assumption; the three-dimensional line graph and the pose of the shooting device can be obtained online in real time; and obtaining the line feature sequence by feature tracking both reduces the time spent on feature matching, improving operating efficiency, and avoids erroneous three-dimensional line segments caused by feature mismatching, improving the accuracy of the three-dimensional line graph.
As shown in Fig. 1d, the modeling process of the three-dimensional line graph reconstruction method provided by the invention is expressed in the form of a Bayesian network, where T is the camera pose, L_j is the j-th three-dimensional line segment, l_j^{t-1} is the 2D position at the previous moment of the line segment corresponding to the j-th three-dimensional line segment, and z_j^t is the observation of the j-th three-dimensional line segment at moment t. The overall optimization equation is:

max over S_T, S_L of  ∏_t P(T_t | T_{t-1}, T_{t-2}) ∏_{t,j} P(z_j^t | T_t, L_j, l_j^{t-1})

where S_T, S_L and S_z are the corresponding sets, P(z_j^t | T_t, L_j, l_j^{t-1}) is the 2D line segment observation model, and P(T_t | T_{t-1}, T_{t-2}) is the motion model. To solve the problem, the graph model is converted into a least-squares optimization, and LM iterative optimization is used to obtain the optimal solution, i.e. the current pose of the shooting device.
Referring to fig. 2, fig. 2 is a structural diagram of a three-dimensional line graph reconstruction apparatus according to an embodiment of the present invention, and as shown in fig. 2, the three-dimensional line graph reconstruction apparatus 200 includes:
a first obtaining module 201, configured to obtain a two-dimensional feature line segment of a previous frame image and an observation line segment of a current frame image;
a first determining module 202, configured to determine a matching line segment, which matches the two-dimensional feature line segment, in the observation line segment;
the second determining module 203 is used for determining the current pose of the shooting equipment according to the matching line segment;
and the second obtaining module 204 is configured to solve triangulation on the matching line segment according to the current pose, and obtain a three-dimensional line graph of the current frame image.
In an embodiment of the present invention, the first determining module 202 includes:
the sampling sub-module is used for sampling the two-dimensional characteristic line segment to obtain a sampling point;
the first obtaining sub-module is used for obtaining a prediction point of the sampling point in the current frame image according to the sampling point;
the second obtaining submodule is used for fitting the predicted points to obtain predicted line segments;
a first determining submodule, configured to determine, as a candidate line segment, a line segment that is partially or completely overlapped with the predicted line segment in the observation line segment if the observation line segment includes a line segment that is partially or completely overlapped with the predicted line segment;
and the second determining submodule is used for determining the line segment with the longest length in the candidate line segments as the matched line segment matched with the two-dimensional characteristic line segment.
In an embodiment of the present invention, the first determining module 202 further includes:
and the third determining submodule is used for generating a line segment if the observation line segment does not comprise a line segment partially or completely coincident with the prediction line segment, and determining the generated line segment as a matching line segment matched with the two-dimensional characteristic line segment.
In an embodiment of the present invention, the second determining module 203 includes:
the third obtaining submodule is used for obtaining the three-dimensional characteristic points of the previous frame image and the two-dimensional characteristic points of the current frame image;
the fourth obtaining submodule is used for obtaining a first error according to the three-dimensional characteristic point and the two-dimensional characteristic point;
the fifth obtaining submodule is used for projecting the three-dimensional line segment of the three-dimensional line graph of the previous frame image to the current frame image to obtain a two-dimensional projection line segment;
the sixth obtaining submodule is used for obtaining a second error according to the two-dimensional projection line segment and the matching line segment;
and the fourth determining submodule is used for determining the current pose of the shooting equipment according to the first error and the second error.
It should be noted that, in this embodiment, the three-dimensional line graph reconstruction apparatus 200 may implement any implementation manner in the method embodiment in the embodiment shown in fig. 1, that is, any implementation manner in the method embodiment in the embodiment shown in fig. 1 may be implemented by the three-dimensional line graph reconstruction apparatus 200 in this embodiment, and the same beneficial effects are achieved, and no further description is provided here.
Referring to fig. 3, fig. 3 is a structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 3, the electronic device 300 includes: a memory 301, a processor 302, and a computer program stored on the memory 301 and executable on the processor 302, wherein,
the processor 302 is configured to read the computing program in the memory 301, and execute the following processes:
acquiring a two-dimensional characteristic line segment of a previous frame image and an observation line segment of a current frame image;
determining a matched line segment matched with the two-dimensional characteristic line segment in the observation line segment;
determining the current pose of the shooting equipment according to the matching line segment;
and solving the triangularization of the matching line segment according to the current pose to obtain a three-dimensional line graph of the current frame image.
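The final triangulation step can be sketched as follows. This is a minimal illustration under assumed inputs, not the patented implementation: `P1` and `P2` are hypothetical names for the 3×4 projection matrices of the previous and current frames (obtained from the estimated poses and camera intrinsics), and each matched 2D segment pair is lifted to a 3D segment by linear (DLT) triangulation of its two endpoints.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    # DLT: each view contributes the two rows x*P[2]-P[0] and y*P[2]-P[1];
    # the homogeneous 3D point is the null vector of the stacked system.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def triangulate_segment(P1, P2, seg1, seg2):
    # One matched 2D segment pair -> one 3D segment (its two endpoints).
    return (triangulate_point(P1, P2, np.asarray(seg1[0], float), np.asarray(seg2[0], float)),
            triangulate_point(P1, P2, np.asarray(seg1[1], float), np.asarray(seg2[1], float)))
```

Repeating this over all matched segments yields the three-dimensional line graph of the current frame.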
Further, when executing the step of determining, among the observed line segments, a matching line segment that matches the two-dimensional feature line segment, the processor 302 specifically executes:
sampling the two-dimensional feature line segment to obtain sampling points;
obtaining, from the sampling points, predicted points of the sampling points in the current frame image;
fitting the predicted points to obtain a predicted line segment;
if the observed line segments include a line segment partially or completely coinciding with the predicted line segment, determining each such line segment as a candidate line segment;
and determining the longest of the candidate line segments as the matching line segment for the two-dimensional feature line segment.
Further, the processor 302 further executes:
if the observed line segments include no line segment partially or completely coinciding with the predicted line segment, generating a line segment, and determining the generated line segment as the matching line segment for the two-dimensional feature line segment.
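The sample-predict-fit-select procedure above can be sketched as follows. This is an assumption-laden illustration: `predict` stands in for whatever point predictor the method uses (e.g., sparse optical flow; the patent does not pin this down here), coincidence is approximated by a perpendicular-distance threshold `tol`, and the fallback generation step is reduced to returning `None`.

```python
import numpy as np

def sample_segment(p0, p1, n=10):
    # Evenly spaced sample points along the 2D feature line segment.
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1.0 - t) * p0 + t * p1

def fit_line(points):
    # Total-least-squares fit: centroid plus dominant direction via SVD.
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    return c, Vt[0]  # point on line, unit direction

def coincident_length(c, d, seg, tol=2.0):
    # Length of an observed segment whose endpoints both lie within
    # `tol` pixels of the predicted line; 0 if either endpoint is too far.
    a, b = np.asarray(seg[0], float), np.asarray(seg[1], float)
    dist = lambda p: abs((p - c)[0] * d[1] - (p - c)[1] * d[0])
    if dist(a) > tol or dist(b) > tol:
        return 0.0
    return float(np.linalg.norm(b - a))

def match_segment(feature_seg, predict, observed_segs, tol=2.0):
    # Sample -> predict -> fit -> keep coincident candidates -> take longest.
    p0 = np.asarray(feature_seg[0], float)
    p1 = np.asarray(feature_seg[1], float)
    pred_pts = np.array([predict(p) for p in sample_segment(p0, p1)])
    c, d = fit_line(pred_pts)
    lengths = [coincident_length(c, d, s, tol) for s in observed_segs]
    best = int(np.argmax(lengths))
    return observed_segs[best] if lengths[best] > 0.0 else None
```

Choosing the longest coincident candidate favors the most complete observation of the same physical edge; the `None` branch corresponds to the case where the method instead generates a new line segment from the prediction.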
Further, when executing the step of determining the current pose of the image capture device according to the matching line segment, the processor 302 specifically executes:
acquiring three-dimensional feature points of the previous frame image and two-dimensional feature points of the current frame image;
obtaining a first error from the three-dimensional feature points and the two-dimensional feature points;
projecting the three-dimensional line segments of the three-dimensional line graph of the previous frame image onto the current frame image to obtain two-dimensional projected line segments;
obtaining a second error from the two-dimensional projected line segments and the matching line segments;
and determining the current pose of the image capture device from the first error and the second error.
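The two-term cost described above can be sketched as follows, assuming a pinhole camera model with intrinsics `K` and a pose parameterized as rotation `R` and translation `t` (hypothetical names; the weight `w` and the actual optimizer, e.g., Gauss-Newton over SE(3), are not specified by this excerpt). The first error is a point reprojection residual; the second measures the distance from the projected endpoints of each 3D segment to the infinite line through its matching 2D segment.

```python
import numpy as np

def project(K, R, t, X):
    # Pinhole projection of a 3D point under pose (R, t).
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def point_error(K, R, t, pts3d, pts2d):
    # First error: residual between projected 3D feature points of the
    # previous frame and 2D feature points of the current frame.
    return sum(float(np.sum((project(K, R, t, X) - x) ** 2))
               for X, x in zip(pts3d, pts2d))

def line_error(K, R, t, segs3d, segs2d):
    # Second error: distance from the projected endpoints of each 3D line
    # segment to the infinite line through its matching 2D segment.
    err = 0.0
    for (A, B), (a, b) in zip(segs3d, segs2d):
        a = np.asarray(a, float)
        b = np.asarray(b, float)
        d = (b - a) / np.linalg.norm(b - a)
        n = np.array([-d[1], d[0]])  # unit normal of the matching 2D line
        for X in (A, B):
            err += float(n @ (project(K, R, t, X) - a)) ** 2
    return err

def pose_cost(K, R, t, pts3d, pts2d, segs3d, segs2d, w=1.0):
    # Combined cost; the current pose is the (R, t) minimizing this value.
    return point_error(K, R, t, pts3d, pts2d) + w * line_error(K, R, t, segs3d, segs2d)
```

Using the line normal makes the second term invariant to where along the edge the endpoints were detected, which is why line features are matched to lines rather than point-to-point.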
It should be noted that the electronic device of this embodiment can implement any implementation of the method embodiment shown in fig. 1 and achieves the same beneficial effects; details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the three-dimensional line graph reconstruction method shown in fig. 1.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A three-dimensional line graph reconstruction method, comprising:
acquiring a two-dimensional feature line segment of a previous frame image and observed line segments of a current frame image;
determining, among the observed line segments, a matching line segment that matches the two-dimensional feature line segment;
determining the current pose of an image capture device according to the matching line segment;
and triangulating the matching line segment according to the current pose to obtain a three-dimensional line graph of the current frame image.
2. The method of claim 1, wherein the determining, among the observed line segments, a matching line segment that matches the two-dimensional feature line segment comprises:
sampling the two-dimensional feature line segment to obtain sampling points;
obtaining, from the sampling points, predicted points of the sampling points in the current frame image;
fitting the predicted points to obtain a predicted line segment;
if the observed line segments include a line segment partially or completely coinciding with the predicted line segment, determining each such line segment as a candidate line segment;
and determining the longest of the candidate line segments as the matching line segment for the two-dimensional feature line segment.
3. The method of claim 2, further comprising, after the fitting the predicted points to obtain a predicted line segment:
if the observed line segments include no line segment partially or completely coinciding with the predicted line segment, generating a line segment, and determining the generated line segment as the matching line segment for the two-dimensional feature line segment.
4. The method of claim 1, wherein the determining the current pose of the image capture device according to the matching line segment comprises:
acquiring three-dimensional feature points of the previous frame image and two-dimensional feature points of the current frame image;
obtaining a first error from the three-dimensional feature points and the two-dimensional feature points;
projecting three-dimensional line segments of a three-dimensional line graph of the previous frame image onto the current frame image to obtain two-dimensional projected line segments;
obtaining a second error from the two-dimensional projected line segments and the matching line segments;
and determining the current pose of the image capture device from the first error and the second error.
5. A three-dimensional line graph reconstruction apparatus, comprising:
a first acquiring module, configured to acquire a two-dimensional feature line segment of a previous frame image and observed line segments of a current frame image;
a first determining module, configured to determine, among the observed line segments, a matching line segment that matches the two-dimensional feature line segment;
a second determining module, configured to determine the current pose of an image capture device according to the matching line segment;
and a second acquiring module, configured to triangulate the matching line segment according to the current pose to obtain a three-dimensional line graph of the current frame image.
6. The apparatus of claim 5, wherein the first determining module comprises:
a sampling submodule, configured to sample the two-dimensional feature line segment to obtain sampling points;
a first obtaining submodule, configured to obtain, from the sampling points, predicted points of the sampling points in the current frame image;
a second obtaining submodule, configured to fit the predicted points to obtain a predicted line segment;
a first determining submodule, configured to determine, if the observed line segments include a line segment partially or completely coinciding with the predicted line segment, each such line segment as a candidate line segment;
and a second determining submodule, configured to determine the longest of the candidate line segments as the matching line segment for the two-dimensional feature line segment.
7. The apparatus of claim 6, wherein the first determining module further comprises:
a third determining submodule, configured to generate a line segment if the observed line segments include no line segment partially or completely coinciding with the predicted line segment, and to determine the generated line segment as the matching line segment for the two-dimensional feature line segment.
8. The apparatus of claim 5, wherein the second determining module comprises:
a third obtaining submodule, configured to obtain three-dimensional feature points of the previous frame image and two-dimensional feature points of the current frame image;
a fourth obtaining submodule, configured to obtain a first error from the three-dimensional feature points and the two-dimensional feature points;
a fifth obtaining submodule, configured to project three-dimensional line segments of a three-dimensional line graph of the previous frame image onto the current frame image to obtain two-dimensional projected line segments;
a sixth obtaining submodule, configured to obtain a second error from the two-dimensional projected line segments and the matching line segments;
and a fourth determining submodule, configured to determine the current pose of the image capture device from the first error and the second error.
9. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the three-dimensional line graph reconstruction method according to any one of claims 1 to 4.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the three-dimensional line graph reconstruction method according to any one of claims 1 to 4.
CN202010293554.0A 2020-04-15 2020-04-15 Three-dimensional line graph reconstruction method and device and electronic equipment Active CN111489439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010293554.0A CN111489439B (en) 2020-04-15 2020-04-15 Three-dimensional line graph reconstruction method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111489439A 2020-08-04
CN111489439B 2024-06-07

Family

ID=71810985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010293554.0A Active CN111489439B (en) 2020-04-15 2020-04-15 Three-dimensional line graph reconstruction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111489439B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116246038A (en) * 2023-05-11 2023-06-09 西南交通大学 Multi-view three-dimensional line segment reconstruction method, system, electronic equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004030440A (en) * 2002-06-27 2004-01-29 Starlabo Corp Image processing method, image processing program, and computer readable recording medium with the program recorded thereon
CN106952312A (en) * 2017-03-10 2017-07-14 广东顺德中山大学卡内基梅隆大学国际联合研究院 It is a kind of based on line feature describe without mark augmented reality register method
CN108961410A (en) * 2018-06-27 2018-12-07 中国科学院深圳先进技术研究院 A kind of three-dimensional wireframe modeling method and device based on image
CN110631554A (en) * 2018-06-22 2019-12-31 北京京东尚科信息技术有限公司 Robot posture determining method and device, robot and readable storage medium


Also Published As

Publication number Publication date
CN111489439B (en) 2024-06-07

Similar Documents

Publication Publication Date Title
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CN111862296B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, three-dimensional reconstruction system, model training method and storage medium
CN110766716B (en) Method and system for acquiring information of space unknown moving target
CN108805917B (en) Method, medium, apparatus and computing device for spatial localization
CN110070564B (en) Feature point matching method, device, equipment and storage medium
WO2014077272A1 (en) Three-dimensional object recognition device and three-dimensional object recognition method
CN111445526A (en) Estimation method and estimation device for pose between image frames and storage medium
WO2018235923A1 (en) Position estimating device, position estimating method, and program
Repko et al. 3D models from extended uncalibrated video sequences: Addressing key-frame selection and projective drift
US11195297B2 (en) Method and system for visual localization based on dual dome cameras
JP5439277B2 (en) Position / orientation measuring apparatus and position / orientation measuring program
GB2567245A (en) Methods and apparatuses for depth rectification processing
CN115063768A (en) Three-dimensional target detection method, encoder and decoder
CN112270748B (en) Three-dimensional reconstruction method and device based on image
CN113902932A (en) Feature extraction method, visual positioning method and device, medium and electronic equipment
CN111489439B (en) Three-dimensional line graph reconstruction method and device and electronic equipment
CN111829522B (en) Instant positioning and map construction method, computer equipment and device
JP2004514228A (en) Scene restoration and camera calibration with robust use of chirality
CN112085842A (en) Depth value determination method and device, electronic equipment and storage medium
CN112288817B (en) Three-dimensional reconstruction processing method and device based on image
CN113689332B (en) Image splicing method with high robustness under high repetition characteristic scene
CN112184766B (en) Object tracking method and device, computer equipment and storage medium
CN113140031A (en) Three-dimensional image modeling system and method and oral cavity scanning equipment applying same
CN117437303B (en) Method and system for calibrating camera external parameters
CN116246038B (en) Multi-view three-dimensional line segment reconstruction method, system, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant