CN111489439B - Three-dimensional line graph reconstruction method and device and electronic equipment


Info

Publication number
CN111489439B
Authority
CN
China
Prior art keywords
line segment
dimensional
point
frame image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010293554.0A
Other languages
Chinese (zh)
Other versions
CN111489439A (en)
Inventor
查红彬
王求元
姜立
方奕庚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
BOE Technology Group Co Ltd
Original Assignee
Peking University
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University and BOE Technology Group Co Ltd
Priority to CN202010293554.0A
Publication of CN111489439A
Application granted
Publication of CN111489439B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a three-dimensional line graph reconstruction method, apparatus, and electronic device. The method includes the following steps: acquiring a two-dimensional characteristic line segment of a previous frame image and an observation line segment of a current frame image; determining, among the observation line segments, a matching line segment that matches the two-dimensional characteristic line segment; determining the current pose of the shooting equipment according to the matching line segment; and triangulating the matching line segment according to the current pose to obtain a three-dimensional line graph of the current frame image. The overall calculation process is simple, the efficiency of acquiring the three-dimensional line graph is improved, and the three-dimensional line graph of the current frame image can be acquired in real time.

Description

Three-dimensional line graph reconstruction method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a three-dimensional line graph reconstruction method, apparatus, and electronic device.
Background
Existing three-dimensional line graph reconstruction methods, such as Line3D++, process line segment features by line segment feature matching and require joint optimization over an entire batch of acquired data. The quality of the current reconstruction therefore cannot be observed in real time during acquisition, and the computational demand is extremely high, often requiring hours or even days of processing.
In other words, existing three-dimensional line graph reconstruction methods handle a large amount of data and cannot generate a three-dimensional line graph in real time.
Disclosure of Invention
The invention aims to provide a three-dimensional line graph reconstruction method, apparatus, and electronic equipment, to solve the problem that existing three-dimensional line graph reconstruction methods process a large data volume and cannot generate a three-dimensional line graph in real time.
In order to achieve the above object, the present invention provides a three-dimensional line graph reconstruction method, including:
acquiring a two-dimensional characteristic line segment of a previous frame image and an observation line segment of a current frame image;
determining a matching line segment matched with the two-dimensional characteristic line segment in the observation line segment;
Determining the current pose of shooting equipment according to the matching line segments;
and triangulating the matching line segments according to the current pose to obtain a three-dimensional line graph of the current frame image.
Further, the determining a matching line segment that matches the two-dimensional feature line segment in the observation line segment includes:
sampling the two-dimensional characteristic line segments to obtain sampling points;
Acquiring a predicted point of the sampling point in the current frame image according to the sampling point;
Fitting the predicted points to obtain predicted line segments;
If the observation line segment comprises a line segment which is partially overlapped or fully overlapped with the predicted line segment, determining the line segment which is partially overlapped or fully overlapped with the predicted line segment in the observation line segment as a candidate line segment;
and determining the line segment with the longest length in the candidate line segments as a matching line segment matched with the two-dimensional characteristic line segment.
Further, after the fitting is performed on the predicted points to obtain predicted line segments, the method further includes:
and if the observation line segment does not comprise the line segment which is partially overlapped with or is completely overlapped with the predicted line segment, generating a line segment, and determining the generated line segment as a matching line segment matched with the two-dimensional characteristic line segment.
Further, the determining, according to the matching line segment, the current pose of the shooting device includes:
acquiring three-dimensional feature points of the previous frame image and two-dimensional feature points of the current frame image;
obtaining a first error according to the three-dimensional characteristic points and the two-dimensional characteristic points;
Projecting the three-dimensional line segments of the three-dimensional line graph of the previous frame image to the current frame image to obtain two-dimensional projection line segments;
Obtaining a second error according to the two-dimensional projection line segment and the matching line segment;
And determining the current pose of the shooting equipment according to the first error and the second error.
The embodiment of the invention also provides a three-dimensional line graph reconstruction device, which comprises:
The first acquisition module is used for acquiring the two-dimensional characteristic line segment of the previous frame image and the observation line segment of the current frame image;
The first determining module is used for determining a matching line segment matched with the two-dimensional characteristic line segment in the observation line segment;
the second determining module is used for determining the current pose of the shooting equipment according to the matching line segments;
and the second acquisition module is used for triangulating the matching line segments according to the current pose to obtain a three-dimensional line graph of the current frame image.
Further, the first determining module includes:
The sampling submodule is used for sampling the two-dimensional characteristic line segments to obtain sampling points;
The first acquisition sub-module is used for acquiring a predicted point of the sampling point in the current frame image according to the sampling point;
the second acquisition submodule is used for fitting the predicted points to obtain predicted line segments;
The first determining submodule is used for determining a line segment which is partially overlapped or completely overlapped with the predicted line segment in the observed line segment as a candidate line segment if the observed line segment comprises the line segment which is partially overlapped or completely overlapped with the predicted line segment;
and the second determining submodule is used for determining the line segment with the longest length in the candidate line segments as a matching line segment matched with the two-dimensional characteristic line segment.
Further, the first determining module further includes:
And the third determining submodule is used for generating a line segment if the observed line segment does not comprise the line segment which is partially overlapped with or is completely overlapped with the predicted line segment, and determining the generated line segment as a matching line segment which is matched with the two-dimensional characteristic line segment.
Further, the second determining module includes:
A third obtaining sub-module, configured to obtain a three-dimensional feature point of the previous frame image and a two-dimensional feature point of the current frame image;
a fourth obtaining sub-module, configured to obtain a first error according to the three-dimensional feature point and the two-dimensional feature point;
a fifth obtaining sub-module, configured to project a three-dimensional line segment of the three-dimensional line graph of the previous frame image to the current frame image, to obtain a two-dimensional projection line segment;
a sixth obtaining submodule, configured to obtain a second error according to the two-dimensional projection line segment and the predicted line segment;
and the fourth determining submodule is used for determining the current pose of the shooting equipment according to the first error and the second error.
An embodiment of the invention also provides electronic equipment, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the three-dimensional line graph reconstruction method provided by the embodiments of the invention.
An embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the three-dimensional line graph reconstruction method provided by the embodiments of the invention.
In the embodiments of the invention, a two-dimensional characteristic line segment of a previous frame image and an observation line segment of a current frame image are acquired; a matching line segment that matches the two-dimensional characteristic line segment is determined among the observation line segments; the current pose of the shooting equipment is determined according to the matching line segment; and the matching line segment is triangulated according to the current pose to obtain a three-dimensional line graph of the current frame image. The overall calculation process is simple, the efficiency of acquiring the three-dimensional line graph is improved, and the three-dimensional line graph of the current frame image can be acquired in real time.
Drawings
FIG. 1 is a flow chart of a three-dimensional line graph reconstruction method provided by an embodiment of the invention;
FIG. 1a is a schematic representation of a three-dimensional line segment in a coordinate system provided by an embodiment of the present invention;
FIG. 1b is a schematic diagram of a reprojection of a three-dimensional line segment according to an embodiment of the present invention;
FIG. 1c is a schematic diagram of a two-dimensional projected line segment after three-dimensional line segment re-projection according to an embodiment of the present invention;
FIG. 1d is a schematic diagram of a Bayesian network provided by an embodiment of the present invention;
FIG. 2 is a block diagram of a three-dimensional line drawing reconstruction device according to an embodiment of the present invention;
fig. 3 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems to be solved, the technical solutions, and the advantages more apparent, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, fig. 1 is a flowchart of a three-dimensional line drawing reconstruction method provided by an embodiment of the present invention, as shown in fig. 1, the embodiment provides a three-dimensional line drawing reconstruction method, which is applied to an electronic device, and includes the following steps:
Step 101, acquiring a two-dimensional characteristic line segment of the previous frame image and an observation line segment of the current frame image.
The shooting equipment captures consecutive frame images; the previous frame image is the image captured immediately before, and adjacent to, the current frame image. In this step, the two-dimensional characteristic line segment of the previous frame image may be understood as the matching line segment of the previous frame image, which was in turn obtained from the two-dimensional characteristic line segment of the frame before it. That earlier frame is likewise the image captured immediately before, and adjacent to, the previous frame image, and so on; this is not described in further detail herein.
The first frame image has no previous frame image, so its matching line segments may be determined by any prior-art approach, which is not limited herein. For frame images other than the first, the method of this embodiment may be used to determine the matching line segments. Further, the matching line segment of the current frame image serves as the two-dimensional characteristic line segment used to determine the matching line segment of the next frame image.
The observation line segments of the current frame image can be obtained with a Line Segment Detector (LSD) algorithm.
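As a rough illustration (not part of the patent text), the following sketch extracts observation line segments with OpenCV; `createLineSegmentDetector` is assumed to be available in the installed OpenCV build (some builds expose `cv2.ximgproc.createFastLineDetector` instead), and all function and variable names here are illustrative.

```python
import cv2
import numpy as np

def detect_observation_segments(frame_bgr):
    """Detect 2D observation line segments in the current frame (LSD-style).

    Returns an (N, 4) array of segments [x1, y1, x2, y2] in pixel coordinates.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    lsd = cv2.createLineSegmentDetector()  # some builds: cv2.ximgproc.createFastLineDetector()
    lines, _, _, _ = lsd.detect(gray)      # lines has shape (N, 1, 4) or is None
    if lines is None:
        return np.empty((0, 4), dtype=np.float32)
    return lines.reshape(-1, 4)
```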
Step 102, determining a matching line segment matched with the two-dimensional characteristic line segment in the observation line segment.
This step is the line segment tracking part: the two-dimensional characteristic line segment of the previous frame is carried over to the current frame so as to refine the observation line segments acquired from the current frame. The specific process includes the following steps:
sampling the two-dimensional characteristic line segments to obtain sampling points;
Acquiring a predicted point of the sampling point in the current frame image according to the sampling point;
Fitting the predicted points to obtain predicted line segments;
If the observation line segment comprises a line segment which is partially overlapped or fully overlapped with the predicted line segment, determining the line segment which is partially overlapped or fully overlapped with the predicted line segment in the observation line segment as a candidate line segment;
and determining the line segment with the longest length in the candidate line segments as a matching line segment matched with the two-dimensional characteristic line segment.
Specifically, the first frame image has no previous frame image, so its matching line segments may be determined by any prior-art approach, which is not limited herein. For frame images other than the first, the method of this embodiment may be used to determine the matching line segments. Further, the matching line segment of the current frame image serves as the two-dimensional characteristic line segment used to determine the matching line segment of the next frame image.
When sampling the two-dimensional characteristic line segment, equidistant sampling can be adopted to obtain the sampling points. The position of each sampling point in the current frame image is then solved with the Lucas-Kanade (L-K) optical flow algorithm; this position is the predicted point, which may also be called an optical flow point. The optical flow points are then fitted into a predicted line segment by the least squares method. The observation line segment includes one or more line segments.
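A minimal sketch of this tracking step is given below, under the assumption that OpenCV's pyramidal Lucas-Kanade implementation and `cv2.fitLine` stand in for the L-K optical flow and the least-squares fit described above; the sampling count and all names are illustrative, not the patent's.

```python
import cv2
import numpy as np

def predict_segment(prev_gray, cur_gray, seg, num_samples=10):
    """Sample a 2D characteristic segment, track the samples by L-K optical flow,
    and least-squares fit the tracked points into a predicted segment.

    seg: (x1, y1, x2, y2) in the previous frame.
    Returns (x1, y1, x2, y2) of the predicted segment in the current frame, or None.
    """
    x1, y1, x2, y2 = seg
    ts = np.linspace(0.0, 1.0, num_samples)
    pts = np.stack([x1 + ts * (x2 - x1), y1 + ts * (y2 - y1)], axis=1).astype(np.float32)

    # Pyramidal Lucas-Kanade optical flow: predicted (optical-flow) points in the current frame.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts.reshape(-1, 1, 2), None)
    good = nxt.reshape(-1, 2)[status.ravel() == 1]
    if len(good) < 2:
        return None

    # Least-squares line fit through the tracked points, then clip to their extent.
    vx, vy, cx, cy = cv2.fitLine(good, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    d = np.array([vx, vy])
    proj = (good - np.array([cx, cy])) @ d
    p_start = np.array([cx, cy]) + proj.min() * d
    p_end = np.array([cx, cy]) + proj.max() * d
    return (p_start[0], p_start[1], p_end[0], p_end[1])
```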
The observation line segment may include one or more observation sub-line segments, and the predicted line segment may include one or more predicted sub-line segments. The predicted line segment is compared with the observation line segment: for each observation sub-line segment and each predicted sub-line segment, it is determined in turn whether the two partially or fully coincide, and if so, the observation sub-line segment that partially or fully coincides with the predicted line segment is determined as a candidate line segment. For example, if an observation sub-line segment and a predicted sub-line segment coincide completely, they are considered to fully coincide; if an observation sub-line segment intersects a predicted sub-line segment or shares a coincident portion, the two may be considered to partially coincide.
The candidate line segment may include one or more candidate sub-line segments, and if the candidate line segment includes one candidate sub-line segment, the candidate sub-line segment is determined to be a matching line segment matching the two-dimensional feature line segment; and if the candidate line segments comprise a plurality of candidate sub-line segments, determining the line segment with the longest length in the candidate line segments as a matching line segment matched with the two-dimensional characteristic line segment.
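The coincidence test and longest-candidate selection might look as follows; the angular and distance thresholds are assumptions chosen purely for illustration and are not specified by the patent.

```python
import numpy as np

def select_matching_segment(predicted, observed_segments,
                            max_angle_deg=5.0, max_dist_px=3.0):
    """Return the longest observed segment that partially or fully coincides with the
    predicted segment, or None if no observed segment coincides with it."""
    p = np.asarray(predicted, dtype=float)
    p0, p1 = p[:2], p[2:]
    p_len = np.linalg.norm(p1 - p0)
    p_dir = (p1 - p0) / p_len

    def dist_to_predicted_line(pt):
        # Perpendicular distance from pt to the infinite line through the predicted segment.
        return abs(p_dir[0] * (pt[1] - p0[1]) - p_dir[1] * (pt[0] - p0[0]))

    candidates = []
    for seg in observed_segments:
        s = np.asarray(seg, dtype=float)
        s0, s1 = s[:2], s[2:]
        length = np.linalg.norm(s1 - s0)
        if length == 0:
            continue
        s_dir = (s1 - s0) / length
        angle = np.degrees(np.arccos(np.clip(abs(p_dir @ s_dir), 0.0, 1.0)))
        close = max(dist_to_predicted_line(s0), dist_to_predicted_line(s1)) < max_dist_px
        # Overlap along the predicted direction: the projections of the observed endpoints
        # must intersect the parameter range [0, p_len] of the predicted segment.
        t0, t1 = sorted([(s0 - p0) @ p_dir, (s1 - p0) @ p_dir])
        overlaps = t1 > 0.0 and t0 < p_len
        if angle < max_angle_deg and close and overlaps:
            candidates.append((length, tuple(seg)))

    return max(candidates)[1] if candidates else None
```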
In this implementation, determining the matching line segment that matches the two-dimensional characteristic line segment among the observation line segments is handled entirely in the two-dimensional domain, so the amount of computation is small and the processing efficiency is high.
Further, in an embodiment of the present invention, after the fitting the predicted points to obtain a predicted line segment, the method further includes:
and if the observation line segment does not comprise the line segment which is partially overlapped with or is completely overlapped with the predicted line segment, generating a line segment, and determining the generated line segment as a matching line segment matched with the two-dimensional characteristic line segment.
Specifically, if the observation line segments do not include a line segment that partially or completely coincides with the predicted line segment, a matching line segment cannot be obtained from the observation line segments. In order to prevent the tracked line segment from being broken and lost, maintenance methods such as fusion and robust detection may be used; for example, a line segment is generated, and the generated line segment is determined as the matching line segment that matches the two-dimensional characteristic line segment.
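The patent does not detail the fusion or robust-detection maintenance; one minimal assumed fallback, reusing the selection sketch above, is to keep the fitted predicted segment itself as the matching line segment when no observed segment coincides with it.

```python
def matching_segment_with_fallback(predicted, observed_segments):
    """Longest coincident observed segment if one exists; otherwise keep the predicted
    segment itself so the tracked line is not broken or lost (assumed fallback)."""
    match = select_matching_segment(predicted, observed_segments)
    return match if match is not None else tuple(predicted)
```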
Step 103, determining the current pose of the shooting equipment according to the matching line segments.
The current pose of the shooting equipment may include its rotation and translation; the shooting equipment may be a video camera, a camera, or the like. When the shooting equipment is a camera, its current pose is the rotation and translation of the current camera.
Specifically, the processing procedure of the step comprises the following steps:
acquiring three-dimensional feature points of the previous frame image and two-dimensional feature points of the current frame image;
obtaining a first error according to the three-dimensional characteristic points and the two-dimensional characteristic points;
Projecting the three-dimensional line segments of the three-dimensional line graph of the previous frame image to the current frame image to obtain two-dimensional projection line segments;
Obtaining a second error according to the two-dimensional projection line segment and the matching line segment;
And determining the current pose of the shooting equipment according to the first error and the second error.
Specifically, an ORB (Oriented FAST and Rotated BRIEF) feature extraction method is used to determine the correspondence between the three-dimensional feature points of the previous frame image and the two-dimensional feature points of the current frame. Starting from the current pose of the shooting equipment, the pose corresponding to the current frame is solved by jointly optimizing the point and line re-projection errors, which are computed as follows.
The point re-projection error (the first error) is E_p = ‖ p − π(K_p (R P + t)) ‖, where point A is a three-dimensional feature point of the previous frame image, P is the coordinate of point A, point B is the two-dimensional feature point corresponding to point A in the current frame, p is the coordinate of point B, R is the orientation of the shooting equipment, t is its translation, K_p is the intrinsic parameter matrix of the shooting equipment, and π(·) denotes perspective division. E_p is thus the distance between point B and the projection of point A into the current frame. T denotes the pose of the shooting equipment.
The shooting equipment has intrinsic parameters K_p and extrinsic parameters R, t. A three-dimensional line segment L = (n^T, v^T)^T of the three-dimensional line graph of the previous frame image, where v is the line segment direction and n is the normal vector of the plane formed by v and the origin, is projected onto the current frame image to obtain the two-dimensional projected line segment l = K_l (R n + [t]× R v). The second error e_l is then computed from l and the end points c and d of the obtained predicted line segment as e_l = d(c, l) + d(d, l), with d(x, l) = |x^T l| / √(l_1^2 + l_2^2) and x taken in homogeneous coordinates.
The Levenberg-Marquardt (LM) algorithm is used to minimize the error function formed by summing the first errors E_p and the second errors e_l over all matched features, which yields the current pose T_φ = (R, t) of the shooting equipment.
As shown in FIGS. 1b-1c, L_w is the three-dimensional line segment, L_c is the two-dimensional projected line segment, F_c denotes the current frame image, C_w is the pose of the camera when the previous frame image was captured, C_c is the current pose, and R_c, t_c are the camera orientation and translation, respectively; the matching line segment (i.e., the line segment in the observation line segments that matches the two-dimensional characteristic line segment) is shown with dashed lines in FIG. 1c.
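A simplified sketch of the joint point-and-line pose refinement described above is given below; it parameterizes the pose update as an axis-angle rotation plus a translation and minimizes the stacked first and second errors with `scipy.optimize.least_squares` (whose trust-region solver plays the role the text assigns to the LM algorithm; `method='lm'` could be passed instead). The Plücker handling, the line intrinsic matrix K_l, and all names are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_residuals(R, t, K_p, Pw, p_obs):
    """First error E_p: pixel distance between projected 3D points and their 2D observations."""
    cam = Pw @ R.T + t                 # 3D points in the current camera frame
    proj = cam @ K_p.T
    proj = proj[:, :2] / proj[:, 2:3]  # perspective division
    return (proj - p_obs).ravel()

def line_residuals(R, t, K_l, n_w, v_w, endpoints):
    """Second error e_l: distances of the matched segment endpoints c, d to the
    projected 2D line l = K_l (R n + [t]x R v)."""
    tx = np.array([[0, -t[2], t[1]],
                   [t[2], 0, -t[0]],
                   [-t[1], t[0], 0]])
    res = []
    for n, v, (c, d) in zip(n_w, v_w, endpoints):
        l = K_l @ (R @ n + tx @ (R @ v))
        norm = np.hypot(l[0], l[1])
        for x, y in (c, d):
            res.append((l[0] * x + l[1] * y + l[2]) / norm)
    return np.array(res)

def refine_pose(R0, t0, K_p, K_l, Pw, p_obs, n_w, v_w, endpoints):
    """Refine the pose (R, t) by minimizing the stacked point and line re-projection errors."""
    def residual(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix() @ R0   # left-multiplied rotation update
        t = t0 + x[3:]
        return np.concatenate([point_residuals(R, t, K_p, Pw, p_obs),
                               line_residuals(R, t, K_l, n_w, v_w, endpoints)])
    sol = least_squares(residual, np.zeros(6))              # LM-style non-linear least squares
    R = Rotation.from_rotvec(sol.x[:3]).as_matrix() @ R0
    return R, t0 + sol.x[3:]
```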
Step 104, triangulating the matching line segments according to the current pose to obtain a three-dimensional line graph of the current frame image.
The matched line segments are triangulated using the current pose of the shooting equipment to obtain three-dimensional line segments, thereby obtaining the three-dimensional line graph of the current frame image.
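The patent does not spell out how the triangulation is solved; the sketch below shows one standard possibility, assumed here rather than asserted: triangulate the two endpoints of the matched segment between the previous and current camera poses with `cv2.triangulatePoints` and take the resulting 3D points as the endpoints of the three-dimensional line segment.

```python
import cv2
import numpy as np

def triangulate_segment(K, R_prev, t_prev, R_cur, t_cur, seg_prev, seg_cur):
    """Triangulate a matched 2D segment into a 3D segment from two camera poses.

    seg_prev / seg_cur: (x1, y1, x2, y2) endpoints of the matched segment in the
    previous and current frames. Returns the two 3D endpoints, each of shape (3,).
    """
    P1 = K @ np.hstack([R_prev, t_prev.reshape(3, 1)])
    P2 = K @ np.hstack([R_cur, t_cur.reshape(3, 1)])
    pts_prev = np.asarray(seg_prev, dtype=float).reshape(2, 2).T  # 2 x 2 (x over y)
    pts_cur = np.asarray(seg_cur, dtype=float).reshape(2, 2).T
    X_h = cv2.triangulatePoints(P1, P2, pts_prev, pts_cur)        # 4 x 2 homogeneous
    X = (X_h[:3] / X_h[3]).T                                      # 2 x 3 Euclidean
    return X[0], X[1]
```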
In this embodiment, a two-dimensional characteristic line segment of the previous frame image and an observation line segment of the current frame image are acquired; a matching line segment that matches the two-dimensional characteristic line segment is determined among the observation line segments; the current pose of the shooting equipment is determined according to the matching line segment; and the matching line segment is triangulated according to the current pose to obtain a three-dimensional line graph of the current frame image. The overall calculation process is simple, the efficiency of acquiring the three-dimensional line graph is improved, and the three-dimensional line graph of the current frame image can be acquired in real time.
In addition, the three-dimensional line graph reconstruction method provided by the invention makes no Manhattan-world assumption and can be used in any scene; the three-dimensional line graph and the pose of the shooting equipment can be obtained online in real time; the line feature sequence is obtained by feature tracking, which reduces the feature matching time and increases the operation efficiency; and because the line feature sequence is obtained by tracking, false feature matches that would yield wrong three-dimensional line segments are avoided, improving the accuracy of the three-dimensional line graph.
As shown in FIG. 1d, the modeling process of the three-dimensional line graph reconstruction method provided by the invention is formulated as a Bayesian network. T denotes the camera pose and L_j the j-th three-dimensional line segment; for each three-dimensional line segment, the corresponding 2D line segment position at the previous moment and its observed value at moment t are also modeled, and the overall optimization equation is built over the corresponding sets of these variables.
In that equation, the 2D line segment observation model describes how each three-dimensional line segment gives rise to its 2D observation, and P(T_t | T_{t-1}, T_{t-2}) is the corresponding trajectory constraint model. In addition, the model also includes a point-based observation model. For solving, the graphical model is converted into a least squares optimization problem, and the LM method is used for iterative optimization to obtain the optimal solution, from which the current pose of the shooting equipment is obtained.
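Since the overall optimization equation is described only in words above, the factorization below is a hedged reconstruction of what it could look like; the symbols z_j^t (observation of line j at moment t), ℓ_j^{t-1} (its 2D position at the previous moment), and the set notation are assumptions, not the patent's own notation.

```latex
% Assumed overall optimization (MAP) form: maximize the joint probability of the camera
% poses \mathcal{T} and 3D line segments \mathcal{L} given the 2D line observations,
% factored into the 2D line segment observation model and the trajectory constraint model.
\max_{\mathcal{T},\,\mathcal{L}}
\;\prod_{t}\prod_{j} P\!\left(z_j^{t}\mid T_t,\,L_j,\,\ell_j^{t-1}\right)
\;\prod_{t} P\!\left(T_t\mid T_{t-1},\,T_{t-2}\right)
```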
Referring to fig. 2, fig. 2 is a block diagram of a three-dimensional line drawing reconstruction device according to an embodiment of the present invention, and as shown in fig. 2, a three-dimensional line drawing reconstruction device 200 includes:
a first obtaining module 201, configured to obtain a two-dimensional feature line segment of a previous frame image and an observation line segment of a current frame image;
A first determining module 202, configured to determine a matching line segment that matches the two-dimensional feature line segment in the observation line segment;
A second determining module 203, configured to determine a current pose of the photographing apparatus according to the matching line segment;
And the second obtaining module 204 is configured to triangulate the matching line segment according to the current pose to obtain a three-dimensional line graph of the current frame image.
In one embodiment of the present invention, the first determining module 202 includes:
The sampling submodule is used for sampling the two-dimensional characteristic line segments to obtain sampling points;
The first acquisition sub-module is used for acquiring a predicted point of the sampling point in the current frame image according to the sampling point;
the second acquisition submodule is used for fitting the predicted points to obtain predicted line segments;
The first determining submodule is used for determining a line segment which is partially overlapped or completely overlapped with the predicted line segment in the observed line segment as a candidate line segment if the observed line segment comprises the line segment which is partially overlapped or completely overlapped with the predicted line segment;
and the second determining submodule is used for determining the line segment with the longest length in the candidate line segments as a matching line segment matched with the two-dimensional characteristic line segment.
In one embodiment of the present invention, the first determining module 202 further includes:
And the third determining submodule is used for generating a line segment if the observed line segment does not comprise the line segment which is partially overlapped with or is completely overlapped with the predicted line segment, and determining the generated line segment as a matching line segment which is matched with the two-dimensional characteristic line segment.
In one embodiment of the present invention, the second determining module 203 includes:
A third obtaining sub-module, configured to obtain a three-dimensional feature point of the previous frame image and a two-dimensional feature point of the current frame image;
a fourth obtaining sub-module, configured to obtain a first error according to the three-dimensional feature point and the two-dimensional feature point;
a fifth obtaining sub-module, configured to project a three-dimensional line segment of the three-dimensional line graph of the previous frame image to the current frame image, to obtain a two-dimensional projection line segment;
a sixth obtaining submodule, configured to obtain a second error according to the two-dimensional projection line segment and the matching line segment;
and the fourth determining submodule is used for determining the current pose of the shooting equipment according to the first error and the second error.
It should be noted that the three-dimensional line graph reconstruction device 200 in this embodiment can implement any implementation of the method embodiment shown in FIG. 1 and achieve the same beneficial effects, which are not repeated here.
Referring to fig. 3, fig. 3 is a block diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 3, an electronic device 300 includes: a memory 301, a processor 302 and a computer program stored on the memory 301 and executable on the processor 302, wherein,
The processor 302 is configured to read the computer program in the memory 301 and execute the following processes:
acquiring a two-dimensional characteristic line segment of a previous frame image and an observation line segment of a current frame image;
determining a matching line segment matched with the two-dimensional characteristic line segment in the observation line segment;
Determining the current pose of shooting equipment according to the matching line segments;
and triangulating the matching line segments according to the current pose to obtain a three-dimensional line graph of the current frame image.
Further, the processor 302, when executing the step of determining the matching line segment matching the two-dimensional feature line segment in the observed line segment, specifically executes:
sampling the two-dimensional characteristic line segments to obtain sampling points;
Acquiring a predicted point of the sampling point in the current frame image according to the sampling point;
Fitting the predicted points to obtain predicted line segments;
If the observation line segment comprises a line segment which is partially overlapped or fully overlapped with the predicted line segment, determining the line segment which is partially overlapped or fully overlapped with the predicted line segment in the observation line segment as a candidate line segment;
and determining the line segment with the longest length in the candidate line segments as a matching line segment matched with the two-dimensional characteristic line segment.
Further, the processor 302 also performs:
and if the observation line segment does not comprise the line segment which is partially overlapped with or is completely overlapped with the predicted line segment, generating a line segment, and determining the generated line segment as a matching line segment matched with the two-dimensional characteristic line segment.
Further, the processor 302 specifically performs, when performing the step of determining the current pose of the shooting device according to the matching line segment:
acquiring three-dimensional feature points of the previous frame image and two-dimensional feature points of the current frame image;
obtaining a first error according to the three-dimensional characteristic points and the two-dimensional characteristic points;
Projecting the three-dimensional line segments of the three-dimensional line graph of the previous frame image to the current frame image to obtain two-dimensional projection line segments;
Obtaining a second error according to the two-dimensional projection line segment and the matching line segment;
And determining the current pose of the shooting equipment according to the first error and the second error.
It should be noted that the above-mentioned electronic device in this embodiment can implement any implementation of the method embodiment shown in FIG. 1 and achieve the same beneficial effects, which are not repeated here.
An embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the three-dimensional line graph reconstruction method provided by the embodiments of the invention (the method shown in FIG. 1) are implemented.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.

Claims (8)

1. A three-dimensional line graph reconstruction method, characterized by comprising the following steps:
acquiring a two-dimensional characteristic line segment of a previous frame image and an observation line segment of a current frame image;
determining a matching line segment matched with the two-dimensional characteristic line segment in the observation line segment;
Determining the current pose of shooting equipment according to the matching line segments;
triangulating the matching line segments according to the current pose to obtain a three-dimensional line graph of the current frame image;
the determining the current pose of the shooting equipment according to the matching line segment comprises the following steps:
acquiring three-dimensional feature points of the previous frame image and two-dimensional feature points of the current frame image;
obtaining a first error according to the three-dimensional characteristic point A and the two-dimensional characteristic point B, wherein the first error is E_p = ‖ p − π(K_p (R P + t)) ‖, the point A is a three-dimensional feature point of the previous frame image, P is the coordinate of the point A, the point B is the two-dimensional feature point corresponding to the point A in the current frame, p is the coordinate of the point B, R is the orientation of the shooting equipment, t is the translation of the shooting equipment, K_p is the intrinsic parameter matrix of the shooting equipment, π(·) denotes perspective division, the first error E_p is the re-projection error of the point and represents the distance between the point B and the projection of the point A into the current frame, and T is the camera pose;
projecting a three-dimensional line segment L of the three-dimensional line graph of the previous frame image to the current frame image to obtain a two-dimensional projection line segment l;
calculating a second error e_l according to the two-dimensional projection line segment l and the end points of the matching line segment, wherein e_l = d(c, l) + d(d, l) with d(x, l) = |x^T l| / √(l_1^2 + l_2^2) and x taken in homogeneous coordinates; the three-dimensional line segment L = (n^T, v^T)^T, v is the line segment direction, n is the normal vector of the plane formed by v and the origin, the two-dimensional projection line segment l = K_l(R n + [t]× R v), and c and d are the end points of the predicted line segment;
And determining the current pose of the shooting equipment according to the first error and the second error.
2. The method of claim 1, wherein the determining a matching one of the observed line segments that matches the two-dimensional feature line segment comprises:
sampling the two-dimensional characteristic line segments to obtain sampling points;
Acquiring a predicted point of the sampling point in the current frame image according to the sampling point;
Fitting the predicted points to obtain predicted line segments;
If the observation line segment comprises a line segment which is partially overlapped or fully overlapped with the predicted line segment, determining the line segment which is partially overlapped or fully overlapped with the predicted line segment in the observation line segment as a candidate line segment;
and determining the line segment with the longest length in the candidate line segments as a matching line segment matched with the two-dimensional characteristic line segment.
3. The method of claim 2, further comprising, after said fitting the predicted points to obtain predicted line segments:
and if the observation line segment does not comprise the line segment which is partially overlapped with or is completely overlapped with the predicted line segment, generating a line segment, and determining the generated line segment as a matching line segment matched with the two-dimensional characteristic line segment.
4. A three-dimensional line graph reconstruction apparatus, characterized by comprising:
The first acquisition module is used for acquiring the two-dimensional characteristic line segment of the previous frame image and the observation line segment of the current frame image;
The first determining module is used for determining a matching line segment matched with the two-dimensional characteristic line segment in the observation line segment;
the second determining module is used for determining the current pose of the shooting equipment according to the matching line segments;
The second acquisition module is used for triangulating the matching line segments according to the current pose to obtain a three-dimensional line graph of the current frame image;
The second determining module includes:
A third obtaining sub-module, configured to obtain a three-dimensional feature point of the previous frame image and a two-dimensional feature point of the current frame image;
a fourth obtaining submodule, configured to obtain a first error according to the three-dimensional feature point A and the two-dimensional feature point B, wherein the first error is E_p = ‖ p − π(K_p (R P + t)) ‖, the point A is a three-dimensional feature point of the previous frame image, P is the coordinate of the point A, the point B is the two-dimensional feature point corresponding to the point A in the current frame, p is the coordinate of the point B, R is the orientation of the shooting equipment, t is the translation of the shooting equipment, K_p is the intrinsic parameter matrix of the shooting equipment, π(·) denotes perspective division, the first error E_p is the re-projection error of the point and represents the distance between the point B and the projection of the point A into the current frame, and T is the camera pose;
a fifth obtaining sub-module, configured to project a three-dimensional line segment L of the three-dimensional line graph of the previous frame image to the current frame image to obtain a two-dimensional projection line segment l;
a sixth obtaining submodule, configured to calculate a second error e_l according to the two-dimensional projection line segment l and the end points of the matching line segment, wherein e_l = d(c, l) + d(d, l) with d(x, l) = |x^T l| / √(l_1^2 + l_2^2) and x taken in homogeneous coordinates; the three-dimensional line segment L = (n^T, v^T)^T, v is the line segment direction, n is the normal vector of the plane formed by v and the origin, the two-dimensional projection line segment l = K_l(R n + [t]× R v), and c and d are the end points of the predicted line segment;
and the fourth determining submodule is used for determining the current pose of the shooting equipment according to the first error and the second error.
5. The apparatus of claim 4, wherein the first determining module comprises:
The sampling submodule is used for sampling the two-dimensional characteristic line segments to obtain sampling points;
The first acquisition sub-module is used for acquiring a predicted point of the sampling point in the current frame image according to the sampling point;
the second acquisition submodule is used for fitting the predicted points to obtain predicted line segments;
The first determining submodule is used for determining a line segment which is partially overlapped or completely overlapped with the predicted line segment in the observed line segment as a candidate line segment if the observed line segment comprises the line segment which is partially overlapped or completely overlapped with the predicted line segment;
and the second determining submodule is used for determining the line segment with the longest length in the candidate line segments as a matching line segment matched with the two-dimensional characteristic line segment.
6. The apparatus of claim 5, wherein the first determining module further comprises:
And the third determining submodule is used for generating a line segment if the observed line segment does not comprise the line segment which is partially overlapped with or is completely overlapped with the predicted line segment, and determining the generated line segment as a matching line segment which is matched with the two-dimensional characteristic line segment.
7. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, which, when executed by the processor, performs the steps in the three-dimensional line graph reconstruction method as claimed in any one of claims 1 to 3.
8. A computer-readable storage medium, characterized in that it has a computer program stored thereon which, when executed by a processor, implements the steps of the three-dimensional line graph reconstruction method according to any one of claims 1 to 3.
CN202010293554.0A 2020-04-15 2020-04-15 Three-dimensional line graph reconstruction method and device and electronic equipment Active CN111489439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010293554.0A CN111489439B (en) 2020-04-15 2020-04-15 Three-dimensional line graph reconstruction method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010293554.0A CN111489439B (en) 2020-04-15 2020-04-15 Three-dimensional line graph reconstruction method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111489439A CN111489439A (en) 2020-08-04
CN111489439B (en) 2024-06-07

Family

ID=71810985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010293554.0A Active CN111489439B (en) 2020-04-15 2020-04-15 Three-dimensional line graph reconstruction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111489439B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116246038B (en) * 2023-05-11 2023-08-01 西南交通大学 Multi-view three-dimensional line segment reconstruction method, system, electronic equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004030440A (en) * 2002-06-27 2004-01-29 Starlabo Corp Image processing method, image processing program, and computer readable recording medium with the program recorded thereon
CN106952312A (en) * 2017-03-10 2017-07-14 广东顺德中山大学卡内基梅隆大学国际联合研究院 It is a kind of based on line feature describe without mark augmented reality register method
CN108961410A (en) * 2018-06-27 2018-12-07 中国科学院深圳先进技术研究院 A kind of three-dimensional wireframe modeling method and device based on image
CN110631554A (en) * 2018-06-22 2019-12-31 北京京东尚科信息技术有限公司 Robot posture determining method and device, robot and readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004030440A (en) * 2002-06-27 2004-01-29 Starlabo Corp Image processing method, image processing program, and computer readable recording medium with the program recorded thereon
CN106952312A (en) * 2017-03-10 2017-07-14 广东顺德中山大学卡内基梅隆大学国际联合研究院 It is a kind of based on line feature describe without mark augmented reality register method
CN110631554A (en) * 2018-06-22 2019-12-31 北京京东尚科信息技术有限公司 Robot posture determining method and device, robot and readable storage medium
CN108961410A (en) * 2018-06-27 2018-12-07 中国科学院深圳先进技术研究院 A kind of three-dimensional wireframe modeling method and device based on image

Also Published As

Publication number Publication date
CN111489439A (en) 2020-08-04

Similar Documents

Publication Publication Date Title
CN110070564B (en) Feature point matching method, device, equipment and storage medium
KR100793838B1 (en) Appratus for findinng the motion of camera, system and method for supporting augmented reality in ocean scene using the appratus
WO2021139176A1 (en) Pedestrian trajectory tracking method and apparatus based on binocular camera calibration, computer device, and storage medium
CN112907620B (en) Camera pose estimation method and device, readable storage medium and electronic equipment
CN111951201B (en) Unmanned aerial vehicle aerial image splicing method, device and storage medium
CN111445526A (en) Estimation method and estimation device for pose between image frames and storage medium
CN110111364B (en) Motion detection method and device, electronic equipment and storage medium
CN113793370B (en) Three-dimensional point cloud registration method and device, electronic equipment and readable medium
GB2567245A (en) Methods and apparatuses for depth rectification processing
CN111882655A (en) Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction
CN113610918A (en) Pose calculation method and device, electronic equipment and readable storage medium
CN112270748B (en) Three-dimensional reconstruction method and device based on image
CN111489439B (en) Three-dimensional line graph reconstruction method and device and electronic equipment
CN113902932A (en) Feature extraction method, visual positioning method and device, medium and electronic equipment
KR20150097251A (en) Camera alignment method using correspondences between multi-images
Luong et al. Consistent ICP for the registration of sparse and inhomogeneous point clouds
CN112150529B (en) Depth information determination method and device for image feature points
CN109741245B (en) Plane information insertion method and device
CN115937002B (en) Method, apparatus, electronic device and storage medium for estimating video rotation
JP2006113832A (en) Stereoscopic image processor and program
CN112085842A (en) Depth value determination method and device, electronic equipment and storage medium
CN112288817B (en) Three-dimensional reconstruction processing method and device based on image
CN111260544B (en) Data processing method and device, electronic equipment and computer storage medium
CN113689332A (en) Image splicing method with high robustness under high repetition characteristic scene
CN117437303B (en) Method and system for calibrating camera external parameters

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant