CN112967311A - Three-dimensional line graph construction method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112967311A
CN112967311A
Authority
CN
China
Prior art keywords: line segment, frame, image, dimensional line, dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911275034.0A
Other languages
Chinese (zh)
Other versions
CN112967311B (en)
Inventor
王求元
Current Assignee
Zhejiang Shangtang Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd
Priority to CN201911275034.0A
Publication of CN112967311A
Application granted
Publication of CN112967311B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a three-dimensional line graph construction method and apparatus, an electronic device, and a storage medium. The method includes: determining a predicted position of at least one two-dimensional line segment in a t-th frame of acquired image according to an observed position of the at least one two-dimensional line segment in a (t-1)-th frame of acquired image, wherein the acquired image is a two-dimensional image of a target environment acquired by image acquisition equipment, the two-dimensional line segment corresponds to a three-dimensional line segment in a three-dimensional line graph of the target environment, and t is an integer greater than 1; respectively determining the observed position of each two-dimensional line segment in the t-th frame acquired image according to the predicted position of the at least one two-dimensional line segment in the t-th frame acquired image; and updating the three-dimensional line graph of the target environment according to the observed position of each two-dimensional line segment in the t-th frame acquired image. Embodiments of the disclosure enable rapid construction of a three-dimensional line graph and improve the robustness of three-dimensional line graph construction.

Description

Three-dimensional line graph construction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a three-dimensional line graph construction method and apparatus, an electronic device, and a storage medium.
Background
A three-dimensional line graph construction technology, also called Simultaneous Localization and Mapping (SLAM), is an important research hotspot in computer vision and is widely applied in the fields of robotics, autonomous driving, unmanned aerial vehicles, and augmented/virtual reality. SLAM technology obtains real-time self-localization information and a three-dimensional line graph construction result for the surrounding environment from the input of a monocular or binocular camera, giving a machine the ability to perceive its surroundings. In general, three-dimensional line graph construction methods that extract features from single-frame images and match inter-frame features using descriptor-based algorithms consume a great deal of time, and the robustness of the resulting three-dimensional line graph construction is poor.
Disclosure of Invention
The disclosure provides a three-dimensional line graph construction method and device, electronic equipment and a storage medium, which can realize the rapid construction of a three-dimensional line graph and improve the robustness of the three-dimensional line graph construction.
According to an aspect of the present disclosure, there is provided a three-dimensional line graph construction method including: determining a predicted position of at least one two-dimensional line segment in a t-th frame of acquired image according to an observed position of the at least one two-dimensional line segment in a (t-1)-th frame of acquired image, wherein the acquired image is a two-dimensional image of a target environment acquired by image acquisition equipment, the two-dimensional line segment corresponds to a three-dimensional line segment in a three-dimensional line graph of the target environment, and t is an integer greater than 1; respectively determining the observed position of each two-dimensional line segment in the t-th frame acquired image according to the predicted position of the at least one two-dimensional line segment in the t-th frame acquired image; and updating the three-dimensional line graph of the target environment according to the observed position of each two-dimensional line segment in the t-th frame acquired image.
In a possible implementation manner, determining a predicted position of at least one two-dimensional line segment in the t-th frame acquired image according to an observed position of the at least one two-dimensional line segment in the t-1 th frame acquired image includes: for any two-dimensional line segment, determining the motion increment of the two-dimensional line segment between the t-1 frame acquisition image and the t-frame acquisition image; and determining the predicted position of the two-dimensional line segment in the t-th frame acquired image according to the observed position of the two-dimensional line segment in the t-1 th frame acquired image and the motion increment of the two-dimensional line segment between the t-1 th frame acquired image and the t-th frame acquired image.
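The first approach above (observed position plus motion increment) can be sketched numerically. This is a minimal illustration, not the patent's implementation: the function names are invented, and the increment here comes from a simple constant-velocity assumption over the last two frames rather than a fit over the full line flow.

```python
import numpy as np

def motion_increment(history):
    """Estimate the per-frame motion increment of a segment's endpoints from
    its last two observed positions (constant-velocity stand-in for a
    least-squares fit over the whole line flow)."""
    return history[-1] - history[-2]

def predict_segment(obs_prev, increment):
    """Predicted position in frame t = observed position in frame t-1
    plus the estimated inter-frame motion increment."""
    return obs_prev + increment

# Toy line flow: a vertical segment (as a 2x2 array of endpoints)
# translating by (1, 0) per frame over three frames.
flow = [np.array([[0.0, 0.0], [0.0, 5.0]]) + np.array([i, 0.0]) for i in range(3)]
pred = predict_segment(flow[-1], motion_increment(flow))  # position in frame t
```

Here `pred` lands one translation step past the last observation, which is what the observed-position-plus-increment rule prescribes.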
In a possible implementation manner, determining a predicted position of at least one two-dimensional line segment in the t-th frame acquired image according to an observed position of the at least one two-dimensional line segment in the t-1 th frame acquired image includes: determining a predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation pose of the image acquisition equipment corresponding to the t-1 th frame of acquired image; and for any two-dimensional line segment, determining the predicted position of the two-dimensional line segment in the t-th frame acquired image according to the three-dimensional line segment corresponding to the two-dimensional line segment and the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image.
In a possible implementation manner, determining a predicted position of at least one two-dimensional line segment in the t-th frame acquired image according to an observed position of the at least one two-dimensional line segment in the t-1 th frame acquired image includes: for any two-dimensional line segment, determining the motion increment of the two-dimensional line segment between the t-1 frame acquisition image and the t-frame acquisition image; determining a first prediction position of the two-dimensional line segment in the t-th frame acquired image according to the observation position of the two-dimensional line segment in the t-1 th frame acquired image and the motion increment of the two-dimensional line segment between the t-1 th frame acquired image and the t-th frame acquired image; determining a predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation pose of the image acquisition equipment corresponding to the t-1 th frame of acquired image; determining a second predicted position of the two-dimensional line segment in the t-th frame acquired image according to the three-dimensional line segment corresponding to the two-dimensional line segment and the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image; and determining the predicted position of the two-dimensional line segment in the t-th frame acquisition image according to the first predicted position and the second predicted position.
In a possible implementation manner, determining the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image according to the observed pose of the image acquisition equipment corresponding to the (t-1)-th frame acquired image includes: determining a motion increment of the image acquisition equipment between the (t-1)-th frame acquired image and the t-th frame acquired image; and determining the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image according to the observed pose of the image acquisition equipment corresponding to the (t-1)-th frame acquired image and the motion increment of the image acquisition equipment between the (t-1)-th frame acquired image and the t-th frame acquired image.
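One common way to realize such a pose prediction, sketched below under a constant-velocity assumption (the patent does not fix how the increment is estimated): take the motion increment observed between the previous two frames and apply it once more. Poses are 4x4 homogeneous transforms; the composition order assumes one consistent convention throughout.

```python
import numpy as np

def predict_pose(T_obs_prev2, T_obs_prev1):
    """Constant-velocity pose prediction: the increment between the poses
    at frames t-2 and t-1 is applied once more to predict frame t."""
    increment = T_obs_prev1 @ np.linalg.inv(T_obs_prev2)
    return increment @ T_obs_prev1

def translation_pose(t):
    """Helper: pure-translation 4x4 pose."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

# Camera translating 0.1 along x each frame; predicted pose continues the motion.
T1 = translation_pose([0.1, 0.0, 0.0])
T2 = translation_pose([0.2, 0.0, 0.0])
T_pred = predict_pose(T1, T2)
```

With rotation present the same composition extrapolates both the rotational and translational components of the inter-frame motion.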
In one possible implementation manner, determining a predicted position of the two-dimensional line segment in the t-th frame captured image according to a three-dimensional line segment corresponding to the two-dimensional line segment and a predicted pose of the image capturing device corresponding to the t-th frame captured image includes: and projecting the three-dimensional line segment corresponding to the two-dimensional line segment into the t-th frame acquired image according to the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image to obtain the predicted position of the two-dimensional line segment in the t-th frame acquired image.
In one possible implementation manner, determining the predicted position of the two-dimensional line segment in the t-th frame acquired image according to the first predicted position and the second predicted position includes: determining the length of a first line segment of the two-dimensional line segment in a t frame acquisition image according to the first prediction position; determining the length of a second line segment of the two-dimensional line segment in the t frame acquisition image according to the second prediction position; and determining the predicted position of the two-dimensional line segment in the t-th frame acquisition image through line segment length weighting operation according to the first predicted position, the second predicted position, the first line segment length and the second line segment length.
In a possible implementation manner, determining, according to the predicted position of the at least one two-dimensional line segment in the t-th frame captured image, the observed position of each two-dimensional line segment in the t-th frame captured image respectively includes: for any two-dimensional line segment, according to the predicted position of the two-dimensional line segment in the t frame acquired image, local extraction operation is performed on the t frame acquired image; and under the condition that the observation line segment of the two-dimensional line segment in the t-th frame acquisition image is extracted, determining the observation position of the two-dimensional line segment in the t-th frame acquisition image according to the position of the extracted observation line segment.
In one possible implementation manner, for any two-dimensional line segment, according to a predicted position of the two-dimensional line segment in a t-th frame captured image, performing a local extraction operation on the t-th frame captured image, including: determining a plurality of seed pixel points corresponding to the two-dimensional line segments in the t-th frame of acquired image, and respectively determining a line support area corresponding to each seed pixel point to obtain a plurality of line support areas; respectively determining a fitting line segment corresponding to each line supporting area through a line segment fitting algorithm to obtain a plurality of fitting line segments; determining a fitted line segment which is collinear with the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image and has an overlapping part as a candidate line segment under the condition that a fitted line segment which is collinear with the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image and has an overlapping part exists, wherein the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image corresponds to the predicted position of the two-dimensional line segment in the t-th frame acquisition image; and determining an observation line segment of the two-dimensional line segment in the t-th frame acquisition image according to the candidate line segment.
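The candidate test above (a fitted segment that is collinear with the predicted segment and overlaps it) can be sketched as a geometric predicate. The tolerances and function name are illustrative assumptions, not values from the patent.

```python
import numpy as np

def collinear_overlap(seg_a, seg_b, angle_tol_deg=5.0, dist_tol=2.0):
    """True if seg_b is (nearly) collinear with seg_a and the two segments'
    projections onto seg_a's direction overlap. Segments are 2x2 arrays."""
    seg_a, seg_b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    u = seg_a[1] - seg_a[0]
    u = u / np.linalg.norm(u)
    v = seg_b[1] - seg_b[0]
    v = v / np.linalg.norm(v)
    if abs(float(u @ v)) < np.cos(np.deg2rad(angle_tol_deg)):
        return False                              # directions differ too much
    mid = 0.5 * (seg_b[0] + seg_b[1]) - seg_a[0]
    if abs(float(mid[0] * u[1] - mid[1] * u[0])) > dist_tol:
        return False                              # too far from seg_a's line
    pa = sorted([0.0, float((seg_a[1] - seg_a[0]) @ u)])
    pb = sorted(float((p - seg_a[0]) @ u) for p in seg_b)
    return pa[0] < pb[1] and pb[0] < pa[1]        # 1-D interval overlap

ok = collinear_overlap([[0, 0], [10, 0]], [[5, 0.5], [14, 0.5]])
bad = collinear_overlap([[0, 0], [10, 0]], [[0, 5], [10, 5]])
```

A fitted segment passing this predicate against the predicted segment would become a candidate line segment.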
In one possible implementation, the method further includes: in a case where no observation line segment of the two-dimensional line segment is extracted in the t-th frame acquired image but extracted observation line segments exist in the (t-k-1)-th to (t-1)-th frames of acquired images, determining the prediction line segment of the two-dimensional line segment in the t-th frame acquired image as the observation line segment of the two-dimensional line segment in the t-th frame acquired image, where k is an integer and 0 ≤ k < t; and in a case where no observation line segment of the two-dimensional line segment is extracted in the t-th frame acquired image and no extracted observation line segment exists in the (t-k-1)-th to (t-1)-th frames of acquired images, deleting the two-dimensional line segment in the t-th frame acquired image.
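The keep-or-delete policy above amounts to a small decision rule per tracked segment, which can be sketched as follows (function and label names are invented for illustration):

```python
def track_decision(extracted_now, recently_extracted):
    """Decide how to handle a tracked two-dimensional line segment in frame t:
    - extraction succeeded: use the extracted observation line segment;
    - extraction failed but the segment was extracted within the last k
      frames: let the predicted line segment stand in as the observation;
    - extraction failed and the segment was not seen recently: delete it.
    `recently_extracted` holds extraction flags for frames t-k-1 .. t-1."""
    if extracted_now:
        return "use_extracted"
    if any(recently_extracted):
        return "use_predicted"
    return "delete"
```

This lets a line flow survive short occlusions while still pruning segments that have genuinely left the view.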
In one possible implementation manner, updating the three-dimensional line graph of the target environment according to the observed position of each two-dimensional line segment in the t-th frame acquired image includes: in a case where, according to the observed positions of the at least one two-dimensional line segment in the t-th frame acquired image, at least two two-dimensional line segments are collinear and have an overlapping portion in the t-th frame acquired image, merging the at least two two-dimensional line segments in the t-th frame acquired image to obtain an updated observed position of at least one two-dimensional line segment in the t-th frame acquired image; and updating the three-dimensional line graph of the target environment according to the updated observed position of the at least one two-dimensional line segment in the t-th frame acquired image.
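Merging two collinear, overlapping segments reduces to keeping the extreme endpoints along the common direction. A minimal sketch (the function name is illustrative; collinearity is assumed to have been checked already):

```python
import numpy as np

def merge_segments(seg_a, seg_b):
    """Merge two collinear, overlapping 2D segments into the single segment
    spanning both: project all four endpoints onto the common direction and
    keep the extremes."""
    seg_a, seg_b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    u = seg_a[1] - seg_a[0]
    u = u / np.linalg.norm(u)
    pts = np.vstack([seg_a, seg_b])
    t = (pts - seg_a[0]) @ u          # scalar coordinates along the line
    return np.vstack([seg_a[0] + t.min() * u, seg_a[0] + t.max() * u])

merged = merge_segments([[0, 0], [2, 0]], [[1, 0], [4, 0]])
```

The merged segment then serves as the updated observed position fed into the three-dimensional line graph update.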
In one possible implementation, the method further includes: and correcting the predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation position of each two-dimensional line segment in the t-th frame of acquired image to obtain the observation pose of the image acquisition equipment corresponding to the t-th frame of acquired image.
In a possible implementation manner, correcting the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image according to the observed position of each two-dimensional line segment in the t-th frame acquired image, to obtain the observed pose of the image acquisition equipment corresponding to the t-th frame acquired image, includes: obtaining a corrected observation pose of the image acquisition equipment corresponding to each frame of acquired image in the (t-m)-th to t-th frames of acquired images by minimizing a first energy function, wherein the first energy function comprises at least one of the following data items: a first data item formed by a reprojection error term of the point features and an information matrix of the point features of each frame of acquired image from the (t-m)-th to the t-th frame; a second data item formed by a reprojection error term of the line segment features and an information matrix of the line segment features of each frame of acquired image from the (t-m)-th to the t-th frame; and a third data item formed by the pose information matrices of the image acquisition equipment corresponding to the (t-m)-th to t-th frames of acquired images and a pose smoothing term between the poses of the image acquisition equipment corresponding to adjacent frames of acquired images, where m is an integer and 2 ≤ m < t.
In one possible implementation manner, updating the three-dimensional line graph of the target environment according to the observed position of each two-dimensional line segment in the t-th frame acquired image includes: and under the condition that the t-th frame acquisition image is a key frame acquisition image, updating the three-dimensional line graph of the target environment according to the observation positions of the two-dimensional line segments in the key frame acquisition images in the 1 st to t-th frame acquisition images.
In one possible implementation manner, updating the three-dimensional line graph of the target environment according to the observation positions of the two-dimensional line segments in the key frame captured images in the 1 st to t-th frame captured images includes: aiming at any two-dimensional line segment, under the condition that a three-dimensional line segment corresponding to the two-dimensional line segment is not determined according to the observation position of the two-dimensional line segment in the t-th frame of acquired image and the two-dimensional line segment exists in at least two key frame of acquired images, performing line segment triangulation operation according to the observation position of the two-dimensional line segment in the at least two key frame of acquired images and the central point of the image acquisition equipment corresponding to the at least two key frame of acquired images, and determining the three-dimensional line segment corresponding to the two-dimensional line segment in the t-th frame of acquired image.
In one possible implementation manner, updating the three-dimensional line graph of the target environment according to the observation positions of the two-dimensional line segments in the key frame captured images in the 1 st to t-th frame captured images includes: aiming at a common-view acquired image of a tth frame acquired image in a key frame acquired image, correcting a three-dimensional line segment corresponding to at least one two-dimensional line segment in each frame of common-view acquired image and an observation pose of the image acquisition equipment corresponding to each frame of common-view acquired image by minimizing a second energy function, wherein the second energy function comprises at least one of the following data items: the common-view collected image of the t-th frame collected image is an image with the same or similar image content as the image content in the t-th frame collected image.
In one possible implementation manner, updating the three-dimensional line graph of the target environment according to the observation positions of the two-dimensional line segments in the key frame captured images in the 1 st to t-th frame captured images includes: under the condition that loop closure exists in the three-dimensional line graph of the target environment, updating a three-dimensional line segment corresponding to at least one two-dimensional line segment in each key frame acquisition image and the pose of the image acquisition equipment corresponding to each key frame acquisition image by minimizing a third energy function, wherein the third energy function comprises at least one data item as follows: the first data item is formed by a reprojection error item of the point characteristic of each key frame acquisition image and an information matrix corresponding to the point characteristic, the second data item is formed by a reprojection error item of the line segment characteristic of each key frame acquisition image and an information matrix corresponding to the line segment characteristic, and the scale parameter is obtained.
According to another aspect of the present disclosure, there is provided a three-dimensional line graph construction apparatus including: a first determination module, configured to determine a predicted position of at least one two-dimensional line segment in a t-th frame of acquired image according to an observed position of the at least one two-dimensional line segment in a (t-1)-th frame of acquired image, wherein the acquired image is a two-dimensional image of a target environment acquired by image acquisition equipment, the two-dimensional line segment corresponds to a three-dimensional line segment in a three-dimensional line graph of the target environment, and t is an integer greater than 1; a second determination module, configured to respectively determine the observed position of each two-dimensional line segment in the t-th frame acquired image according to the predicted position of the at least one two-dimensional line segment in the t-th frame acquired image; and an updating module, configured to update the three-dimensional line graph of the target environment according to the observed position of each two-dimensional line segment in the t-th frame acquired image.
According to another aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to another aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In embodiments of the present disclosure, the observed position of a two-dimensional line segment in the t-th frame acquired image is determined using its already-determined observed position in the (t-1)-th frame acquired image as prior information. The spatio-temporal coherence between acquired images can thus be used to update the three-dimensional line graph of the target environment, enabling rapid construction of the three-dimensional line graph and improving the robustness of its construction.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 illustrates a flow chart of a method of three-dimensional line graph construction of an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a line flow j corresponding to a three-dimensional line segment L according to an embodiment of the disclosure;
fig. 3 is a schematic diagram illustrating a plurality of seed pixel points corresponding to two-dimensional line segments in a t-th frame of an acquired image according to the embodiment of the disclosure;
FIG. 4 illustrates a schematic diagram of a plurality of line support regions determined based on a plurality of seed pixel points in FIG. 3 according to an embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of a plurality of fitted line segments determined based on the plurality of line support regions of FIG. 4 in accordance with an embodiment of the present disclosure;
FIG. 6 illustrates a schematic diagram of a line triangularization operation performed based on two keyframe captured images in accordance with an embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating an operation of performing culling of inaccurate three-dimensional line segments based on a key frame captured image according to an embodiment of the disclosure;
fig. 8 shows a block diagram of a three-dimensional line graph construction apparatus of an embodiment of the present disclosure;
FIG. 9 shows a block diagram of an electronic device of an embodiment of the disclosure;
fig. 10 shows a block diagram of an electronic device of an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a three-dimensional line graph construction method according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
Step S11: determine the predicted position of at least one two-dimensional line segment in the t-th frame acquired image according to the observed position of the at least one two-dimensional line segment in the (t-1)-th frame acquired image.
The acquired image is a two-dimensional image of a target environment acquired by image acquisition equipment, the two-dimensional line segment corresponds to a three-dimensional line segment in a three-dimensional line graph of the target environment, and t is an integer greater than 1.
Step S12: respectively determine the observed position of each two-dimensional line segment in the t-th frame acquired image according to the predicted position of the at least one two-dimensional line segment in the t-th frame acquired image.
Step S13: update the three-dimensional line graph of the target environment according to the observed position of each two-dimensional line segment in the t-th frame acquired image.
The observed position is the actual position of the two-dimensional line segment and corresponds to an observation line segment; the predicted position is a possible position of the two-dimensional line segment and corresponds to a prediction line segment. The image acquisition equipment may be any device capable of acquiring images of the target environment, such as a camera, which is not specifically limited by the present disclosure.
By using the observed position of a two-dimensional line segment already determined in the (t-1)-th frame acquired image as prior information to determine its observed position in the t-th frame acquired image, the spatio-temporal coherence between acquired images can be used to update the three-dimensional line graph of the target environment, enabling rapid construction of the three-dimensional line graph and improving the robustness of its construction.
The observed position of each two-dimensional line segment in the t-th frame acquired image is determined from the predicted position of the at least one two-dimensional line segment in the t-th frame acquired image. Ways of determining the predicted position include, but are not limited to, the following three.
The first method comprises the following steps:
in a possible implementation manner, determining a predicted position of at least one two-dimensional line segment in the t-th frame acquired image according to an observed position of at least one two-dimensional line segment in the t-1-th frame acquired image includes: for any two-dimensional line segment, determining the motion increment of the two-dimensional line segment between the t-1 frame acquisition image and the t-frame acquisition image; and determining the predicted position of the two-dimensional line segment in the t-th frame acquired image according to the observed position of the two-dimensional line segment in the t-1 th frame acquired image and the motion increment of the two-dimensional line segment between the t-1 th frame acquired image and the t-th frame acquired image.
The space-time coherence of two-dimensional line segments between consecutive frame acquired images is utilized, and a line flow model is constructed according to the correspondence between the three-dimensional line segments in the three-dimensional line graph and the two-dimensional line segments in the two-dimensional images. For a three-dimensional line segment L in the three-dimensional line graph, the two-dimensional line segment sequence formed by the two-dimensional line segments corresponding to the three-dimensional line segment L in the consecutive frame acquired images is determined as a line flow j, where j = {l_{t-n}, l_{t-n+1}, ..., l_{t-1}}, n is an integer and 1 ≤ n < t. The two-dimensional line segments corresponding to the three-dimensional line segment L can be determined in the (t-n)-th to (t-1)-th frame acquired images, where l_{t-n} is the two-dimensional line segment (observation line segment) corresponding to the three-dimensional line segment L determined for the first time in the (t-n)-th frame acquired image, and l_{t-1} is the two-dimensional line segment (observation line segment) corresponding to the three-dimensional line segment L in the (t-1)-th frame acquired image (F_{t-1}). FIG. 2 shows a schematic diagram of a line flow j corresponding to a three-dimensional line segment L in an embodiment of the present disclosure.
For any two-dimensional line segment, based on an acceleration-invariant space-time constraint, the motion increment m_l of the two-dimensional line segment between the (t-1)-th frame acquired image and the t-th frame acquired image can be determined by the least square method according to the line flow j corresponding to the two-dimensional line segment. The predicted line segment g_t of the two-dimensional line segment in the t-th frame acquired image is then determined according to the observation position indicated by the observation line segment l_{t-1} of the two-dimensional line segment in the (t-1)-th frame acquired image and the motion increment m_l:
g_t = l_{t-1} + m_l · Δt (1).
In formula (1), Δ t is the time difference between the t-1 frame acquisition image and the t-1 frame acquisition image, and the line segment g is predictedtThe predicted position of the two-dimensional line segment in the t-th frame captured image may be indicated.
The second manner is as follows:
in a possible implementation manner, determining a predicted position of at least one two-dimensional line segment in the t-th frame acquired image according to an observed position of at least one two-dimensional line segment in the t-1-th frame acquired image includes: determining the predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation pose of the image acquisition equipment corresponding to the t-1 th frame of acquired image; and for any two-dimensional line segment, determining the predicted position of the two-dimensional line segment in the t-th frame acquired image according to the three-dimensional line segment corresponding to the two-dimensional line segment and the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image.
In one possible implementation manner, determining a predicted pose of an image capturing device corresponding to a t-th frame captured image according to an observed pose of the image capturing device corresponding to the t-1-th frame captured image includes: determining a motion increment of the image acquisition equipment between the t-1 frame acquisition image and the t frame acquisition image; and determining the predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation pose of the image acquisition equipment corresponding to the t-1-th frame of acquired image and the motion increment of the image acquisition equipment between the t-1-th frame of acquired image and the t-th frame of acquired image.
According to the observation poses of the image acquisition device corresponding to the 1st to (t-1)-th frames, based on the spatial constraint of the motion continuity of the image acquisition device, the motion increment M_T of the image acquisition device between the (t-1)-th frame acquired image and the t-th frame acquired image can be determined by the least square method. The predicted pose T_t' of the image acquisition device corresponding to the t-th frame acquired image is then determined according to the observation pose T_{t-1} of the image acquisition device corresponding to the (t-1)-th frame acquired image and the motion increment M_T:
T_t' = T_{t-1} + M_T · Δt (2).
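The same least-squares extrapolation can be sketched for the pose prediction of formula (2). Representing the pose as a 6-DoF vector (3 translation components, 3 axis-angle rotation components) is an assumption made here so that the additive update is well-defined; in a full implementation SE(3) poses would typically be composed rather than added.

```python
import numpy as np

def predict_pose(poses, times, t_next):
    """Constant-velocity pose extrapolation (formula (2)).

    poses: (n, 6) array of past observed poses, each a 6-DoF vector
    (assumed layout: 3 translation + 3 axis-angle rotation components).
    Returns T_t' = T_{t-1} + M_T * dt with M_T the least-squares velocity.
    """
    poses = np.asarray(poses, dtype=float)
    times = np.asarray(times, dtype=float)
    M_T = np.polyfit(times, poses, 1)[0]   # least-squares motion increment
    return poses[-1] + M_T * (t_next - times[-1])

# Camera moving 1 unit/frame along x while slowly rotating about z:
poses = [[0, 0, 0, 0, 0, 0.00],
         [1, 0, 0, 0, 0, 0.05],
         [2, 0, 0, 0, 0, 0.10]]
T_pred = predict_pose(poses, times=[0, 1, 2], t_next=3)
print(T_pred)
```

The printed prediction continues both motions one frame forward (x = 3, rotation 0.15).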
In a possible implementation manner, determining a predicted position of a two-dimensional line segment in a t-th frame captured image according to a three-dimensional line segment corresponding to the two-dimensional line segment and a predicted pose of an image capturing device corresponding to the t-th frame captured image includes: and projecting the three-dimensional line segment corresponding to the two-dimensional line segment into the t-th frame acquired image according to the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image to obtain the predicted position of the two-dimensional line segment in the t-th frame acquired image.
For any two-dimensional line segment, according to the three-dimensional line segment L corresponding to the two-dimensional line segment and the predicted pose T_t' of the image acquisition device corresponding to the t-th frame acquired image, the predicted position h_t of the two-dimensional line segment in the t-th frame acquired image is determined by the following formula:
e_s = π(K_p · (R_t · E_s + t_t)), e_t = π(K_p · (R_t · E_t + t_t)) (3).
In formula (3), E_s and E_t are the two three-dimensional endpoints of the three-dimensional line segment L in the three-dimensional line graph; e_s and e_t are the two two-dimensional endpoints of the predicted line segment of the two-dimensional line segment corresponding to the three-dimensional line segment L in the t-th frame acquired image, the predicted line segment corresponding to the predicted position h_t of the two-dimensional line segment in the t-th frame acquired image; K_p is the internal reference matrix of the image acquisition device corresponding to the point features in the acquired image; R_t and t_t are the rotation matrix parameter and translation matrix parameter obtained from the predicted pose T_t' of the image acquisition device corresponding to the t-th frame acquired image; and π(·) represents the conversion of homogeneous coordinates to two-dimensional coordinates.
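Formula (3) amounts to projecting the two 3D endpoints of L with the predicted pose and the point intrinsics, then converting from homogeneous to 2D coordinates. A minimal sketch follows; the intrinsic values and the identity pose are illustrative assumptions.

```python
import numpy as np

def project_segment(E_s, E_t, K, R, t):
    """Project the 3D endpoints of line segment L into the frame (formula (3))."""
    def pi(x_hom):
        # Conversion of homogeneous coordinates to 2D: divide by depth.
        return x_hom[:2] / x_hom[2]
    E_s, E_t = np.asarray(E_s, float), np.asarray(E_t, float)
    e_s = pi(K @ (R @ E_s + t))
    e_t = pi(K @ (R @ E_t + t))
    return e_s, e_t

# Toy intrinsics (assumed values) and an identity predicted pose:
K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
e_s, e_t = project_segment([0, 0, 2], [1, 0, 2], K, R, t)
print(e_s, e_t)  # -> [320. 240.] [570. 240.]
```

The two projected endpoints define the second predicted line segment h_t used below.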
The third manner is as follows:
in a possible implementation manner, determining a predicted position of at least one two-dimensional line segment in the t-th frame acquired image according to an observed position of at least one two-dimensional line segment in the t-1-th frame acquired image includes: for any two-dimensional line segment, determining the motion increment of the two-dimensional line segment between the t-1 frame acquisition image and the t-frame acquisition image; determining a first prediction position of the two-dimensional line segment in the t-th frame acquired image according to the observation position of the two-dimensional line segment in the t-1 th frame acquired image and the motion increment of the two-dimensional line segment between the t-1 th frame acquired image and the t-th frame acquired image; determining the predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation pose of the image acquisition equipment corresponding to the t-1 th frame of acquired image; determining a second predicted position of the two-dimensional line segment in the t-th frame acquired image according to the three-dimensional line segment corresponding to the two-dimensional line segment and the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image; and determining the predicted position of the two-dimensional line segment in the t-th frame acquisition image according to the first predicted position and the second predicted position.
For any two-dimensional line segment, the mode of determining the first predicted position of the two-dimensional line segment in the t-th frame acquisition image is the same as the first mode, and the description is omitted here; the manner of determining the second predicted position of the two-dimensional line segment in the t-th frame captured image is the same as the second manner described above, and is not described herein again.
In one possible implementation manner, determining the predicted position of the two-dimensional line segment in the t-th frame acquired image according to the first predicted position and the second predicted position includes: determining the length of a first line segment of the two-dimensional line segment in the t frame acquisition image according to the first prediction position; determining the length of a second line segment of the two-dimensional line segment in the t frame acquisition image according to the second prediction position; and determining the predicted position of the two-dimensional line segment in the t-th frame acquisition image through line segment length weighting operation according to the first predicted position, the second predicted position, the first line segment length and the second line segment length.
For any two-dimensional line segment, the space-time coherence of the two-dimensional line segment between consecutive frame acquired images and the spatial coherence of the motion continuity of the image acquisition device are comprehensively utilized. Based on the first predicted line segment g_t of the two-dimensional line segment in the t-th frame acquired image and the second predicted line segment h_t of the two-dimensional line segment in the t-th frame acquired image, and utilizing the collinearity of the line segments, the predicted line segment l'_t indicating the predicted position of the two-dimensional line segment in the t-th frame acquired image is determined by the following line segment length weighting operation:
l'_t = (l_g · g_t + l_h · h_t) / (l_g + l_h) (4).
In formula (4), l_g is the first line segment length corresponding to the first predicted line segment g_t of the two-dimensional line segment in the t-th frame acquired image, and l_h is the second line segment length corresponding to the second predicted line segment h_t of the two-dimensional line segment in the t-th frame acquired image.
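The length-weighted fusion of formula (4) can be sketched directly; the endpoint-coordinate representation of the two predictions is an assumption of this illustration.

```python
import numpy as np

def fuse_predictions(g_t, h_t):
    """Length-weighted fusion of the two predicted segments (formula (4)).

    g_t, h_t: segments as (x1, y1, x2, y2). The longer prediction receives
    the larger weight, relying on the collinearity of the two predictions.
    """
    g_t, h_t = np.asarray(g_t, float), np.asarray(h_t, float)
    def length(seg):
        return float(np.hypot(seg[2] - seg[0], seg[3] - seg[1]))
    l_g, l_h = length(g_t), length(h_t)
    return (l_g * g_t + l_h * h_t) / (l_g + l_h)

# h_t is twice as long as g_t, so it contributes 2/3 of the weight:
fused = fuse_predictions([0, 0, 3, 0], [0, 3, 6, 3])
print(fused)  # -> [0. 2. 5. 2.]
```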
In a possible implementation manner, determining, according to a predicted position of at least one two-dimensional line segment in the t-th frame captured image, an observed position of each two-dimensional line segment in the t-th frame captured image respectively includes: for any two-dimensional line segment, according to the predicted position of the two-dimensional line segment in the t frame acquired image, local extraction operation is performed on the t frame acquired image; and under the condition that the observation line segment of the two-dimensional line segment in the t-th frame acquisition image is extracted, determining the observation position of the two-dimensional line segment in the t-th frame acquisition image according to the position of the extracted observation line segment.
For any two-dimensional line segment, the predicted position of the two-dimensional line segment in the t-th frame acquired image is used as prior information, local extraction operation is performed on the t-th frame acquired image, and the observation position of the two-dimensional line segment in the t-th frame acquired image can be determined quickly and accurately. The specific extraction process is described in detail below.
In one possible implementation manner, for any two-dimensional line segment, according to a predicted position of the two-dimensional line segment in the t-th frame captured image, performing a local extraction operation on the t-th frame captured image, including: determining a plurality of seed pixel points corresponding to the two-dimensional line segment in the t-th frame of acquired image, and respectively determining a line support area corresponding to each seed pixel point to obtain a plurality of line support areas; respectively determining a fitting line segment corresponding to each line supporting area through a line segment fitting algorithm to obtain a plurality of fitting line segments; determining a fitted line segment which is collinear with the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image and has an overlapping part as a candidate line segment under the condition that a fitted line segment which is collinear with the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image and has an overlapping part exists, wherein the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image corresponds to the predicted position of the two-dimensional line segment in the t-th frame acquisition image; and determining an observation line segment of the two-dimensional line segment in the t-th frame acquisition image according to the candidate line segment.
In a possible implementation manner, determining a plurality of seed pixel points corresponding to a two-dimensional line segment in a t-th frame of acquired image includes: determining N pixel points by equidistantly dividing the predicted line segments of the two-dimensional line segments in the t-th frame of the acquired image, wherein N is an odd number; and obtaining an NxN pixel point grid region according to the N pixel points, wherein any pixel point in the NxN pixel point grid region is a seed pixel point, and a central pixel point of the N pixel points is a central pixel point of the NxN pixel point grid region.
The specific value of N may be determined according to actual conditions, and is not specifically limited by the present disclosure.
Fig. 3 shows a schematic diagram of a plurality of seed pixel points corresponding to a two-dimensional line segment in a t-th frame acquired image according to the embodiment of the present disclosure. The line segment AB shown in fig. 3 is the predicted line segment corresponding to the predicted position of the two-dimensional line segment in the t-th frame acquired image. As shown in fig. 3, the predicted line segment AB is divided equidistantly to obtain 5 pixel points (N is 5) on the predicted line segment, where the pixel point O is the central pixel point of the 5 pixel points. A 5 × 5 pixel point grid region is then obtained by taking the pixel point O as the central pixel point, and any pixel point in the 5 × 5 pixel point grid region is a seed pixel point; that is, fig. 3 shows 25 seed pixel points corresponding to the two-dimensional line segment in the t-th frame acquired image.
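The seed-pixel construction described above, N equidistant points on the predicted segment followed by an N × N grid around the central point O, can be sketched as follows (rounding the central point to integer pixel coordinates is an assumption of this sketch):

```python
def seed_pixels(A, B, N=5):
    """Seed pixel points for the local extraction operation (N odd).

    The predicted segment AB is divided into N equidistant pixel points;
    the middle one, O, becomes the centre of an N x N grid of seed pixels.
    """
    ax, ay = A
    bx, by = B
    samples = [(ax + (bx - ax) * i / (N - 1), ay + (by - ay) * i / (N - 1))
               for i in range(N)]
    ox, oy = (int(round(c)) for c in samples[N // 2])  # central pixel point O
    half = N // 2
    return [(ox + dx, oy + dy)
            for dy in range(-half, half + 1)
            for dx in range(-half, half + 1)]

grid = seed_pixels(A=(10, 10), B=(30, 10))
print(len(grid), grid[len(grid) // 2])  # -> 25 (20, 10)
```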
In a possible implementation manner, determining a line support area corresponding to each seed pixel point respectively to obtain a plurality of line support areas includes: and performing region growing operation aiming at the pixel gradient of each seed pixel point, determining a line supporting region corresponding to each seed pixel point, and obtaining a plurality of line supporting regions.
For any seed pixel point, the region growing operation is performed according to the pixel gradient of the seed pixel point, specifically: the pixel points around the seed pixel point whose pixel gradient direction is the same as or similar to that of the seed pixel point (the difference between the pixel gradient directions is less than a third threshold) are determined as a line support region. Fig. 4 shows a schematic diagram of a plurality of line support regions determined based on the plurality of seed pixel points in fig. 3 according to an embodiment of the present disclosure. Four different line support regions are shown in fig. 4. Each line support region can yield a corresponding fitted line segment through a line segment fitting technique. Fig. 5 illustrates a schematic diagram of a plurality of fitted line segments determined based on the plurality of line support regions in fig. 4 according to an embodiment of the present disclosure. In fig. 5, 4 fitted line segments are shown: a fitted line segment GH, a fitted line segment UV, a fitted line segment MN, and a fitted line segment XY.
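A minimal sketch of the region growing operation on gradient directions follows; 4-connectivity and the tolerance value standing in for the third threshold are assumptions of this illustration.

```python
import numpy as np
from collections import deque

def grow_line_support(angle, seed, tol=np.pi / 8):
    """Region growing on the pixel gradient direction.

    angle: (H, W) array of gradient orientations. Starting from `seed`,
    4-connected pixels whose orientation differs from the seed's by less
    than `tol` (the role of the third threshold) join the support region.
    """
    H, W = angle.shape
    ref = angle[seed]
    region, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < H and 0 <= nx < W and (ny, nx) not in region:
                if abs(angle[ny, nx] - ref) < tol:
                    region.add((ny, nx))
                    queue.append((ny, nx))
    return region

# A synthetic field: only one horizontal band shares the seed's orientation.
angle = np.full((5, 7), np.pi / 2)
angle[2, :] = 0.0                       # the band that will be grown
support = grow_line_support(angle, seed=(2, 3))
print(sorted(support))  # -> the 7 pixels of row 2
```

A line segment can then be fitted to the pixels of each grown region.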
After the plurality of fitted line segments are determined, if the fitted line segments which are collinear with the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image and have the overlapped parts exist in the plurality of fitted line segments, the fitted line segments are determined as candidate line segments, and then the observation line segment of the two-dimensional line segment in the t-th frame acquisition image is determined based on the candidate line segments.
In one possible implementation manner, determining an observation line segment of the two-dimensional line segment in the t-th frame acquired image according to the candidate line segment includes: under the condition that only one candidate line segment exists, determining the candidate line segment as an observation line segment of the two-dimensional line segment in the t-th frame acquisition image; and under the condition that a plurality of candidate line segments exist, performing line segment fusion on the plurality of candidate line segments, and determining the fused line segments as observation line segments of the two-dimensional line segments in the t-th frame acquisition image.
Still taking the above-described fig. 5 as an example, in the 4 fitted line segments shown in fig. 5, the fitted line segment GH and the fitted line segment UV are both collinear with the predicted line segment AB corresponding to the predicted position of the two-dimensional line segment in the t-th captured image and have an overlapping portion, and therefore, the fitted line segment GH and the fitted line segment UV are determined as candidate line segments. And then, performing line segment fusion on the fitting line segment GH and the fitting line segment UV to obtain an observation line segment of the two-dimensional line segment in the t-th frame acquisition image.
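The fusion of collinear, overlapping candidate segments (such as GH and UV above) can be sketched by taking the extreme endpoints along the common direction. That collinearity has already been verified is an assumption of this sketch.

```python
import numpy as np

def fuse_collinear(segments):
    """Fuse candidate segments that are collinear and overlap.

    segments: list of (x1, y1, x2, y2) rows assumed to lie on one line.
    The fused observation segment spans the two extreme endpoints along
    the common direction.
    """
    pts = np.asarray(segments, float).reshape(-1, 2)
    d = pts[1] - pts[0]
    d = d / np.linalg.norm(d)          # common direction of the line
    proj = pts @ d                     # 1-D position of each endpoint
    return np.concatenate([pts[np.argmin(proj)], pts[np.argmax(proj)]])

# Fitted segments GH and UV overlap on the same line; the fused observation
# runs from the leftmost to the rightmost endpoint:
obs = fuse_collinear([[0, 0, 5, 0], [3, 0, 9, 0]])
print(obs)  # -> [0. 0. 9. 0.]
```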
The line flow of the two-dimensional line segment is updated according to the observation line segment of the two-dimensional line segment in the t-th frame acquired image. For example, for any line flow j, the line flow before the update is j = {l_{t-n}, l_{t-n+1}, ..., l_{t-1}}. Taking the observation line segment l_{t-1} of the two-dimensional line segment in the (t-1)-th frame acquired image as prior information, the observation line segment l_t of the two-dimensional line segment in the t-th frame acquired image is determined, and the observation line segment l_t is added to the line flow j to obtain the updated line flow j = {l_{t-n}, l_{t-n+1}, ..., l_{t-1}, l_t}.
In one possible implementation, the method further includes: in a case where there is no fitted line segment that is collinear with the predicted line segment of the two-dimensional line segment in the t-th frame acquired image and has an overlapping portion, determining that no observation line segment of the two-dimensional line segment is extracted in the t-th frame acquired image.
In one possible implementation, the method further includes: in a case where no observation line segment of the two-dimensional line segment is extracted in the t-th frame acquired image and an extracted observation line segment exists in the (t-k-1)-th to (t-1)-th frame acquired images, determining the predicted line segment of the two-dimensional line segment in the t-th frame acquired image as the observation line segment of the two-dimensional line segment in the t-th frame acquired image, wherein k is an integer and 0 ≤ k < t; and in a case where no observation line segment of the two-dimensional line segment is extracted in the t-th frame acquired image and no extracted observation line segment exists in the (t-k-1)-th to (t-1)-th frame acquired images, deleting the two-dimensional line segment in the t-th frame acquired image.
After the plurality of fitting line segments are determined, if no fitting line segment which is collinear with the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image and has an overlapped part exists in the plurality of fitting line segments, the local extraction operation performed on the t-th frame acquisition image is determined not to extract the observation line segment of the two-dimensional line segment in the t-th frame acquisition image.
For any two-dimensional line segment, in a case where no observation line segment of the two-dimensional line segment is extracted in fewer than k consecutive acquired images, the predicted line segment can be directly determined as the observation line segment in those acquired images, so as to improve the robustness of the three-dimensional line graph construction. For example, when no observation line segment of the two-dimensional line segment is extracted in the t-th frame acquired image, and an extracted observation line segment exists in the (t-k-1)-th to (t-1)-th frame acquired images, it indicates that, as of the current t-th frame acquired image, the number of consecutive acquired image frames in which no observation line segment is extracted is smaller than k. At this time, the predicted line segment of the two-dimensional line segment in the t-th frame acquired image may be directly determined as the observation line segment of the two-dimensional line segment in the t-th frame acquired image, and the line flow where the two-dimensional line segment is located may be updated according to this observation line segment.
For any two-dimensional line segment, in a case where no observation line segment of the two-dimensional line segment is extracted in k consecutive frames of acquired images, the two-dimensional line segment is deleted in the latest frame acquired image, so as to improve the construction accuracy of the three-dimensional line graph. For example, when no observation line segment of the two-dimensional line segment is extracted in the t-th frame acquired image, and no extracted observation line segment exists in the (t-k-1)-th to (t-1)-th frame acquired images, it indicates that, as of the current t-th frame acquired image, the number of consecutive acquired image frames in which no observation line segment is extracted has reached k. At this time, the two-dimensional line segment may be deleted from the t-th frame acquired image, and the line flow where the two-dimensional line segment is located may then be deleted.
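The k-frame tolerance logic of the two preceding paragraphs can be sketched as simple per-line-flow book-keeping; the class and method names below are illustrative, not from the disclosure.

```python
class LineTrack:
    """Book-keeping for one line flow under the k-frame miss tolerance.

    If no observation line segment is extracted, the predicted line segment
    substitutes for it for up to k consecutive frames; after k consecutive
    misses the line flow is deleted.
    """
    def __init__(self, k=3):
        self.k = k
        self.flow = []        # observation segments l_{t-n}, ..., l_{t-1}
        self.misses = 0
        self.alive = True

    def update(self, observation, prediction):
        if not self.alive:
            return
        if observation is not None:
            self.misses = 0
            self.flow.append(observation)
        elif self.misses + 1 < self.k:   # fewer than k consecutive misses
            self.misses += 1
            self.flow.append(prediction) # prediction stands in as observation
        else:
            self.alive = False           # k misses in a row: delete the flow

track = LineTrack(k=2)
track.update((0, 0, 5, 0), None)
track.update(None, (1, 0, 6, 0))   # 1st miss: prediction substitutes
track.update(None, (2, 0, 7, 0))   # 2nd consecutive miss: track deleted
print(track.alive, len(track.flow))  # -> False 2
```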
In one possible implementation, updating the three-dimensional line graph of the target environment according to the observation position of each two-dimensional line segment in the t-th frame acquired image includes: in a case where it is determined, according to the observation position of at least one two-dimensional line segment in the t-th frame acquired image, that at least two two-dimensional line segments are collinear in the t-th frame acquired image and have an overlapping portion, merging the at least two two-dimensional line segments in the t-th frame acquired image to obtain an updated observation position of at least one two-dimensional line segment in the t-th frame acquired image; and updating the three-dimensional line graph according to the updated observation position of the at least one two-dimensional line segment in the t-th frame acquired image.
In order to reduce the influence of line segment breakage, false line segment extraction and the like on the construction accuracy of the three-dimensional line graph, in a case where it is determined, according to the observation position of at least one two-dimensional line segment in the t-th frame acquired image, that at least two two-dimensional line segments are collinear in the t-th frame acquired image and have an overlapping portion, the at least two two-dimensional line segments can be merged in the t-th frame acquired image to obtain an updated observation position of at least one two-dimensional line segment in the t-th frame acquired image, and the three-dimensional line graph of the target environment is then updated, so as to improve the construction accuracy of the three-dimensional line graph.
In one possible implementation, in a case where it is determined that at least two two-dimensional line segments are collinear in the t-th frame acquired image and have an overlapping portion, and more than a fourth threshold number of the two-dimensional line segments in the line flows where the at least two two-dimensional line segments are located are collinear, the at least two line flows are merged; that is, the two-dimensional line segments corresponding to the at least two line flows are merged in each frame acquired image corresponding to the at least two line flows, so as to obtain an updated observation position of at least one two-dimensional line segment in each frame acquired image corresponding to the at least two line flows, and the three-dimensional line graph of the target environment is then updated, so as to improve the construction accuracy of the three-dimensional line graph.
In one possible implementation, the method further includes: and for any two-dimensional line segment, determining the length of the two-dimensional line segment in the t-th frame acquisition image according to the observation position of the two-dimensional line segment in the 1 st-t frame acquisition image.
For any three-dimensional line segment in the three-dimensional line graph, the length of the corresponding two-dimensional line segment in the acquired images can change significantly under different acquisition view angles. Therefore, for any two-dimensional line segment, the length of each two-dimensional line segment in the line flow where the two-dimensional line segment is located (for example, the line segment lengths determined by the observation positions of the two-dimensional line segment in the 1st to t-th frame acquired images) is comprehensively considered, so as to determine the length of the two-dimensional line segment in the t-th frame acquired image.
In one possible implementation, the length of the two-dimensional line segment in the t-th frame of the acquired image is determined by the following formula:
l_t = l̂_t, if |l̂_t − l̄| / l̄ ≤ β; otherwise l_t = l̄ (5).
In formula (5), l_t is the length of the two-dimensional line segment in the t-th frame acquired image, l̂_t is the length of the observation line segment of the two-dimensional line segment in the t-th frame acquired image, l̄ is the average of the line segment lengths of the two-dimensional line segments in the line flow where the two-dimensional line segment is located, and β is the preset line segment length change rate.
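A sketch of one plausible reading of formula (5): the observed length is kept when its relative change against the line-flow average stays within the preset change rate β, and the average is used otherwise. This threshold rule, and the value of β, are assumptions of the sketch.

```python
def update_length(observed_len, mean_len, beta=0.3):
    """Segment length update under an assumed threshold rule for formula (5).

    observed_len: length of the observation line segment in frame t;
    mean_len: average segment length over the line flow;
    beta: preset line segment length change rate (assumed value).
    """
    if abs(observed_len - mean_len) / mean_len <= beta:
        return observed_len      # small relative change: trust the observation
    return mean_len              # abrupt change: fall back to the flow average

print(update_length(10.5, 10.0))  # small change, observation kept -> 10.5
print(update_length(20.0, 10.0))  # abrupt change, mean used -> 10.0
```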
In one possible implementation, the method further includes: in a case where no full-image line segment extraction operation has been performed on the (t−N_cl−1)-th to (t−1)-th frame acquired images, performing a full-image line segment extraction operation on the t-th frame acquired image, and determining a new two-dimensional line segment in the t-th frame acquired image other than the at least one two-dimensional line segment, wherein N_cl is an integer and 1 ≤ N_cl < t−1.
Based on the temporal continuity of the acquired images, new line segments are not generated very frequently; therefore, a full-image line segment extraction operation is performed every N_cl frame acquired images. For the current t-th frame acquired image, in a case where no full-image line segment extraction operation has been performed on the (t−N_cl−1)-th to (t−1)-th frame acquired images, a full-image line segment extraction operation is performed on the t-th frame acquired image to determine a new two-dimensional line segment, a line flow corresponding to the new two-dimensional line segment is then added, and the three-dimensional line graph of the target environment is updated.
In one possible implementation, the method further includes: and correcting the predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation position of each two-dimensional line segment in the t-th frame of acquired image to obtain the observation pose of the image acquisition equipment corresponding to the t-th frame of acquired image.
After the observation position of each two-dimensional line segment in the t-th frame acquired image is determined, the correspondence between the two-dimensional line segments in the t-th frame acquired image and the three-dimensional line segments in the three-dimensional line graph can be determined based on the correspondence between the line flow where each two-dimensional line segment is located and the three-dimensional line segments in the three-dimensional line graph, and the poses of the image acquisition device corresponding to the multiple frames of acquired images in the sliding window can then be jointly corrected based on this correspondence. For example, the sliding window includes 3 acquired images: the (t-2)-th to t-th frame acquired images. Based on the correspondence between the two-dimensional line segments in each of the 3 frames of acquired images and the three-dimensional line segments in the three-dimensional line graph, the observation poses of the image acquisition device corresponding to the (t-2)-th and (t-1)-th frame acquired images are corrected to obtain corrected observation poses, and the predicted pose of the image acquisition device corresponding to the t-th frame acquired image is corrected to obtain the observation pose of the image acquisition device corresponding to the t-th frame acquired image.
In one possible implementation, correcting the predicted pose of the image acquisition device corresponding to the t-th frame acquired image according to the observation position of each two-dimensional line segment in the t-th frame acquired image to obtain the observation pose of the image acquisition device corresponding to the t-th frame acquired image includes: obtaining a corrected observation pose of the image acquisition device corresponding to each of the (t-m)-th to t-th frame acquired images by minimizing a first energy function, wherein the first energy function includes at least one of the following data items: a first data item formed by the reprojection error term of the point features and the information matrix of the point features of each of the (t-m)-th to t-th frame acquired images; a second data item formed by the reprojection error term of the line segment features and the information matrix of the line segment features of each of the (t-m)-th to t-th frame acquired images; and a third data item formed by the pose information matrix of the image acquisition device corresponding to the (t-m)-th to t-th frame acquired images and the smoothing term between the poses of the image acquisition device corresponding to adjacent frame acquired images, wherein m is an integer and 2 ≤ m < t.
For example, the first energy function C_s can be expressed as follows:

C_s = Σ e_p^T Σ_p e_p + Σ e_l^T Σ_l e_l + Σ e_T^T Σ_T e_T    (6)

In formula (6), e_l is the reprojection error term of the line segment features of an acquired image, Σ_l is the information matrix of the line segment features of an acquired image, e_p is the reprojection error term of the point features of an acquired image, Σ_p is the information matrix of the point features of an acquired image, e_T is the pose smoothness term between the poses of the image acquisition device corresponding to adjacent frame acquired images, Σ_T is the pose information matrix of the image acquisition device corresponding to the acquired images, and (·)^T denotes matrix transposition; the three sums run over the point features, the line segment features, and the adjacent-frame pose pairs of the t-m-th to t-th frame acquired images, respectively. By iteratively minimizing the first energy function C_s, the poses of the image acquisition device corresponding to the multiple frames of acquired images in the sliding window can be jointly corrected.
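For illustration, each data item of such an energy function is a quadratic form e^T Σ e built from a residual e and its information matrix Σ; a minimal sketch of how these terms accumulate (the function name and toy residuals are illustrative, not from the disclosure):

```python
import numpy as np

def energy(terms):
    """C_s as a sum of quadratic data items e^T * Sigma * e.

    terms: iterable of (residual, information_matrix) pairs, e.g. the point,
    line-segment, and pose-smoothness items of a sliding-window cost.
    """
    return sum(float(e @ sigma @ e) for e, sigma in terms)

# Two toy data items: residual (1, 2) with identity information contributes 5,
# residual (3,) with scalar information 2 contributes 18.
value = energy([(np.array([1.0, 2.0]), np.eye(2)),
                (np.array([3.0]), np.array([[2.0]]))])
```

In a real optimizer each residual would come from reprojection and smoothness terms such as formulas (7) and (8); the sketch only shows the quadratic structure of the data items.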
In one possible implementation, e_l and e_p can be determined by the following equations:

e_p = p − π(K_p(R·P + t)),  l = K_l(R·n + [t]_×·R·v),  e_l = (1/√(l_x² + l_y²))·(s_1^T·l, s_2^T·l)^T    (7)

In formula (7), K_p is the intrinsic (internal reference) matrix of the image acquisition device used for the point features in the acquired image; K_l is the intrinsic matrix of the image acquisition device used for the line segment features in the acquired image, built from the focal lengths f_x and f_y of the image acquisition device and the principal point (x_0, y_0)^T of the acquired image; π(·) denotes dehomogenization; [t]_× is the antisymmetric (skew-symmetric) form of t; p and P are respectively a two-dimensional point of the acquired image and the corresponding three-dimensional point of the three-dimensional line graph; (n, v) are the Plücker coordinates of the three-dimensional line segment L; s_1 and s_2 are the homogeneous endpoints of the observed two-dimensional line segment; and 1/√(l_x² + l_y²) is the normalization term of the three-dimensional line segment L, where l_x and l_y are the two-dimensional straight-line parameters of the projected line l.
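For illustration, residuals of this shape can be sketched as a standard pinhole point reprojection error plus a normalized point-to-line distance; the helper names, and the assumption that the projected line l = (l_x, l_y, l_z)^T is already available, are ours:

```python
import numpy as np

def point_reproj_error(K_p, R, t, P, p_obs):
    """e_p: observed pixel minus the pinhole projection of the 3-D point P."""
    x = K_p @ (R @ np.asarray(P, float) + t)
    return np.asarray(p_obs, float) - x[:2] / x[2]

def line_reproj_error(l, s1, s2):
    """e_l: distances of the observed endpoints s1, s2 to the projected 2-D
    line l = (l_x, l_y, l_z), scaled by the 1/sqrt(l_x^2 + l_y^2) term."""
    norm = np.hypot(l[0], l[1])
    dist = lambda p: (l[0] * p[0] + l[1] * p[1] + l[2]) / norm
    return np.array([dist(s1), dist(s2)])

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
# A point at (0, 0, 5) projects exactly onto its observation (320, 240).
e_p = point_reproj_error(K, np.eye(3), np.zeros(3), [0.0, 0.0, 5.0], [320.0, 240.0])
# Horizontal image line y = 240; one observed endpoint sits 2 pixels off it.
e_l = line_reproj_error(np.array([0.0, 1.0, -240.0]), [100.0, 242.0], [500.0, 240.0])
```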
In one possible implementation, e_T can be determined by the following equation:

e_T = log( (T_{t-1}·T_{t-2}^{-1}) · (T_t′·T_{t-1}^{-1})^{-1} )    (8)

In formula (8), T_{t-2} is the observation pose of the image acquisition device corresponding to the t-2-th frame acquired image, T_{t-1} is the observation pose of the image acquisition device corresponding to the t-1-th frame acquired image, and T_t′ is the predicted pose of the image acquisition device corresponding to the t-th frame acquired image, each an element of SE(3); log(·) maps a transformation in SE(3) to the Lie algebra se(3); and (·)^{-1} denotes the inverse of a matrix.
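Read as a constant-velocity prior, e_T compares the most recent observed relative motion with the predicted one. The sketch below approximates the SE(3) logarithm by separate rotation-vector and translation parts; that decomposition, and the helper names, are simplifying assumptions for illustration:

```python
import numpy as np

def rotation_log(Rm):
    """Rotation vector (axis * angle) of a 3x3 rotation matrix."""
    cos_theta = np.clip((np.trace(Rm) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-9:
        return np.zeros(3)
    w = np.array([Rm[2, 1] - Rm[1, 2], Rm[0, 2] - Rm[2, 0], Rm[1, 0] - Rm[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def pose_smoothness_error(T_tm2, T_tm1, T_t_pred):
    """e_T: discrepancy between the last observed and the predicted relative motion."""
    rel_prev = T_tm1 @ np.linalg.inv(T_tm2)        # motion from t-2 to t-1
    rel_pred = T_t_pred @ np.linalg.inv(T_tm1)     # predicted motion from t-1 to t
    delta = rel_prev @ np.linalg.inv(rel_pred)
    return np.concatenate([rotation_log(delta[:3, :3]), delta[:3, 3]])

def translate_x(x):
    T = np.eye(4)
    T[0, 3] = x
    return T

# Under perfectly constant velocity the smoothness error vanishes.
e_T = pose_smoothness_error(translate_x(0.0), translate_x(1.0), translate_x(2.0))
```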
In one possible implementation manner, updating the three-dimensional line graph of the target environment according to the observed position of each two-dimensional line segment in the t-th frame acquired image includes: and under the condition that the t-th frame acquisition image is the key frame acquisition image, updating the three-dimensional line graph of the target environment according to the observation positions of the two-dimensional line segments in the key frame acquisition images in the 1 st to t-th frame acquisition images.
Based on the corresponding relation between the two-dimensional line segments and the three-dimensional line graphs in the acquired images of the key frames, one or more of operations such as line segment triangularization, inaccurate three-dimensional line segments elimination, three-dimensional line segment fusion, iterative optimization of energy functions and the like can be performed on the acquired images of the key frames, so that the joint correction of the three-dimensional line graphs and the pose and line flow of the image acquisition equipment corresponding to the acquired images of the key frames can be realized.
In one possible implementation manner, updating the three-dimensional line graph of the target environment according to the observation positions of the two-dimensional line segments in the key frame captured images in the 1 st to t-th frame captured images includes: aiming at any two-dimensional line segment, under the condition that a three-dimensional line segment corresponding to the two-dimensional line segment is not determined according to the observation position of the two-dimensional line segment in the t-th frame of acquired image and the two-dimensional line segment exists in at least two key frame of acquired images, performing line segment triangulation operation according to the observation position of the two-dimensional line segment in the at least two key frame of acquired images and the central point of the image acquisition equipment corresponding to the at least two key frame of acquired images, and determining the three-dimensional line segment corresponding to the two-dimensional line segment in the t-th frame of acquired image. For example, the central point of the image acquisition device may be a central position of the image acquisition device.
FIG. 6 illustrates a schematic diagram of performing a line segment triangulation operation based on two key frame acquired images according to an embodiment of the present disclosure. For any two-dimensional line segment, in a case where a three-dimensional line segment corresponding to the two-dimensional line segment has not been determined according to the observation position of the two-dimensional line segment in the t-th frame acquired image, and the two-dimensional line segment exists in two key frame acquired images, the three-dimensional line segment L corresponding to the two-dimensional line segment in the t-th frame acquired image can be determined, based on the two key frame acquired images, by the following formula:

L = π_1 ∩ π_2    (9)

In formula (9) and FIG. 6, π_1 is the plane determined by the observation line segment l_1 of the two-dimensional line segment in the key frame acquired image F_1 and the center point c_1 of the image acquisition device corresponding to the key frame acquired image F_1, and π_2 is the plane determined by the observation line segment l_2 of the two-dimensional line segment in the key frame acquired image F_2 and the center point c_2 of the image acquisition device corresponding to the key frame acquired image F_2. The three-dimensional line segment L corresponding to the two-dimensional line segment in the t-th frame acquired image is determined as the intersection of the plane π_1 and the plane π_2.
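The plane-plane intersection above can be sketched numerically as follows, assuming a pinhole model with world-to-camera poses (R, t); the helper names and conventions are illustrative, not from the disclosure:

```python
import numpy as np

def backprojected_plane(K, R, t, p1, p2):
    """Plane through the camera center and a 2-D segment (p1, p2) in pixels.

    Returns (n, d) with points X on the plane satisfying n.X + d = 0,
    expressed in the world frame.
    """
    c = -R.T @ t                      # camera center in world coordinates
    r1 = R.T @ (np.linalg.inv(K) @ np.array([p1[0], p1[1], 1.0]))
    r2 = R.T @ (np.linalg.inv(K) @ np.array([p2[0], p2[1], 1.0]))
    n = np.cross(r1, r2)              # normal spanned by the two viewing rays
    n = n / np.linalg.norm(n)
    return n, -n @ c

def intersect_planes(plane1, plane2):
    """Line of intersection of two planes as (point, unit direction)."""
    (n1, d1), (n2, d2) = plane1, plane2
    v = np.cross(n1, n2)              # line direction
    # Solve n1.X = -d1, n2.X = -d2, v.X = 0 for one point on the line.
    A = np.stack([n1, n2, v])
    X = np.linalg.solve(A, np.array([-d1, -d2, 0.0]))
    return X, v / np.linalg.norm(v)

# Demo: a 3-D segment seen from two cameras; recover its line by intersection.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1, P2 = np.array([0.0, 0.0, 5.0]), np.array([1.0, 1.0, 5.0])

def project(R, t, P):
    x = K @ (R @ P + t)
    return x[:2] / x[2]

R1, t1 = np.eye(3), np.zeros(3)                    # key frame F1
R2, t2 = np.eye(3), np.array([-1.0, 0.0, 0.0])     # key frame F2, shifted camera
plane1 = backprojected_plane(K, R1, t1, project(R1, t1, P1), project(R1, t1, P2))
plane2 = backprojected_plane(K, R2, t2, project(R2, t2, P1), project(R2, t2, P2))
X, v = intersect_planes(plane1, plane2)            # a point on L and its direction
```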
In one possible implementation manner, updating the three-dimensional line graph of the target environment according to the observation positions of the two-dimensional line segments in the key frame acquired images among the 1st to t-th frame acquired images includes: for any two-dimensional line segment in any key frame acquired image, determining the uncertainty of the three-dimensional line segment corresponding to the two-dimensional line segment in the key frame acquired image according to the distance between the two-dimensional endpoints e_1, e_2 and the distance between the three-dimensional endpoints E_1, E_2, wherein the two-dimensional endpoints e_1, e_2 are points located on the two-dimensional straight line corresponding to the two-dimensional line segment in the key frame acquired image whose distance to the endpoint e of the observation line segment on that two-dimensional straight line is a first threshold, and the three-dimensional endpoints E_1, E_2 are the projection points of the two-dimensional endpoints e_1, e_2 on the three-dimensional line segment corresponding to the two-dimensional line segment in the key frame acquired image; and deleting the three-dimensional line segments whose uncertainty is greater than a second threshold.
Fig. 7 illustrates a schematic diagram of performing an operation of culling inaccurate three-dimensional line segments based on a key frame acquired image according to an embodiment of the disclosure. For any two-dimensional line segment in a key frame acquired image, the uncertainty U_e of the three-dimensional line segment corresponding to the two-dimensional line segment in the key frame acquired image is determined by the following formula:

U_e = ‖E_1 − E_2‖    (10)

In formula (10) and Fig. 7, e is a two-dimensional endpoint of the observation line segment of the two-dimensional line segment in the key frame acquired image F; e_1 and e_2 are two-dimensional points located on the two-dimensional straight line corresponding to the predicted line segment, with the distances from e_1 and e_2 to e both equal to the first threshold; and E_1 and E_2 are the projection points of e_1 and e_2 on the three-dimensional line segment L corresponding to the two-dimensional line segment. The value of the first threshold may be determined according to the actual situation; for example, the first threshold is 0.5 pixel, which is not specifically limited by the present disclosure.
A three-dimensional line segment whose uncertainty is greater than the second threshold would introduce a large error into the three-dimensional line graph construction; therefore, three-dimensional line segments whose uncertainty is greater than the second threshold are deleted to improve the accuracy of the three-dimensional line graph construction. The value of the second threshold may be determined according to the actual situation, which is not specifically limited by the present disclosure.
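Under one reading of Fig. 7, E_1 and E_2 can be obtained by back-projecting e_1 and e_2 and taking the closest points on the three-dimensional line; that construction and the helper names below are our assumptions for illustration:

```python
import numpy as np

def closest_point_on_line(line_pt, line_dir, ray_origin, ray_dir):
    """Point on the (infinite) 3-D line closest to the given viewing ray."""
    a, b, c = line_dir @ line_dir, line_dir @ ray_dir, ray_dir @ ray_dir
    w = line_pt - ray_origin
    s = (b * (w @ ray_dir) - c * (w @ line_dir)) / (a * c - b * b)
    return line_pt + s * line_dir

def line_uncertainty(K, R, t, e, dir2d, line_pt, line_dir, first_threshold=0.5):
    """U_e = ||E1 - E2||: spread on the 3-D line induced by offsetting the
    endpoint e by +/- first_threshold pixels along the 2-D line."""
    d2 = np.asarray(dir2d, float) / np.linalg.norm(dir2d)
    Kinv, c0 = np.linalg.inv(K), -R.T @ t              # c0: camera center
    ends = []
    for sgn in (1.0, -1.0):
        ei = np.asarray(e, float) + sgn * first_threshold * d2   # e1, e2
        ray = R.T @ (Kinv @ np.array([ei[0], ei[1], 1.0]))       # back-project
        ends.append(closest_point_on_line(line_pt, line_dir, c0, ray))
    return np.linalg.norm(ends[0] - ends[1])

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
# 3-D line through (0, 0, 5) along x; endpoint e observed at the image center.
U = line_uncertainty(K, np.eye(3), np.zeros(3), (320.0, 240.0), (1.0, 0.0),
                     np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.0, 0.0]))
# The segment would then be culled when U exceeds the second threshold.
```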
In one possible implementation manner, updating the three-dimensional line graph of the target environment according to the observation positions of the two-dimensional line segments in the key frame captured images in the 1 st to t-th frame captured images includes: and according to the observation position of each two-dimensional line segment in each key frame acquisition image, carrying out line segment fusion on the three-dimensional line segments corresponding to at least two-dimensional line segments under the condition that the three-dimensional line segments corresponding to at least two-dimensional line segments are determined to be collinear and have an overlapped part.
In order to improve the accuracy of constructing the three-dimensional line graph, under the condition that the three-dimensional line segments corresponding to at least two-dimensional line segments are collinear and have an overlapped part, the three-dimensional line segments corresponding to the at least two-dimensional line segments can be combined, and then the three-dimensional line graph is updated.
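A minimal sketch of this collinear-and-overlapping merge for 3-D segments (tolerances and function names are illustrative assumptions):

```python
import numpy as np

def merge_collinear_segments(seg_a, seg_b, angle_tol=np.deg2rad(1.0), dist_tol=0.01):
    """Fuse two 3-D segments if they are near-collinear and overlap; else None."""
    a0, a1 = (np.asarray(p, float) for p in seg_a)
    b0, b1 = (np.asarray(p, float) for p in seg_b)
    da = (a1 - a0) / np.linalg.norm(a1 - a0)
    db = (b1 - b0) / np.linalg.norm(b1 - b0)
    if np.arccos(min(1.0, abs(float(da @ db)))) > angle_tol:
        return None                                  # directions differ
    if np.linalg.norm(np.cross(b0 - a0, da)) > dist_tol:
        return None                                  # not on the same line
    # Project all four endpoints onto the common direction.
    ts = sorted(float((p - a0) @ da) for p in (a0, a1, b0, b1))
    if ts[3] - ts[0] > np.linalg.norm(a1 - a0) + np.linalg.norm(b1 - b0):
        return None                                  # disjoint: a gap remains
    return a0 + ts[0] * da, a0 + ts[3] * da          # union of the two spans

merged = merge_collinear_segments(((0.0, 0, 0), (1.0, 0, 0)),
                                  ((0.5, 0, 0), (2.0, 0, 0)))
apart = merge_collinear_segments(((0.0, 0, 0), (1.0, 0, 0)),
                                 ((3.0, 0, 0), (4.0, 0, 0)))
```

The same test can be run in 2-D on observation line segments before the three-dimensional line graph is updated.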
In one possible implementation manner, updating the three-dimensional line graph of the target environment according to the observation positions of the two-dimensional line segments in the key frame acquired images among the 1st to t-th frame acquired images includes: for the common-view acquired images of the t-th frame acquired image among the key frame acquired images, correcting the three-dimensional line segment corresponding to at least one two-dimensional line segment in each frame of common-view acquired image and the observation pose of the image acquisition device corresponding to each frame of common-view acquired image by minimizing a second energy function, wherein the second energy function includes at least one of the following data items: a first data item formed by the reprojection error term of the point features of each frame of common-view acquired image and the information matrix of the point features, and a second data item formed by the reprojection error term of the line segment features of each frame of common-view acquired image and the information matrix of the line segment features. A common-view acquired image of the t-th frame acquired image is an image whose image content is the same as or similar to the image content of the t-th frame acquired image.
For example, the second energy function C_s can be expressed as follows:

C_s = Σ e_p^T Σ_p e_p + Σ e_l^T Σ_l e_l    (11)

In formula (11), e_l is the reprojection error term of the line segment features of a common-view acquired image, Σ_l is the information matrix of the line segment features of a common-view acquired image, e_p is the reprojection error term of the point features of a common-view acquired image, and Σ_p is the information matrix of the point features of a common-view acquired image. By iteratively minimizing the second energy function C_s, the three-dimensional line segment corresponding to at least one two-dimensional line segment in each frame of common-view acquired image and the observation pose of the image acquisition device corresponding to each frame of common-view acquired image can be corrected.
In one possible implementation manner, updating the three-dimensional line graph of the target environment according to the observation positions of the two-dimensional line segments in the key frame acquired images among the 1st to t-th frame acquired images includes: in a case where a loop closure is detected in the three-dimensional line graph of the target environment, correcting the three-dimensional line segment corresponding to at least one two-dimensional line segment in each key frame acquired image and the pose of the image acquisition device corresponding to each key frame acquired image by minimizing a third energy function, wherein the third energy function includes at least one of the following data items: a first data item formed by the reprojection error term of the point features of each key frame acquired image and the information matrix corresponding to the point features, a second data item formed by the reprojection error term of the line segment features of each key frame acquired image and the information matrix corresponding to the line segment features, and a scale parameter.
For example, the third energy function C_s can be expressed as follows:

C_s = Σ e_p^T Σ_p e_p + Σ e_l^T Σ_l e_l    (12)

In formula (12), e_l is the reprojection error term of the line segment features of an acquired image, Σ_l is the information matrix of the line segment features of an acquired image, e_p is the reprojection error term of the point features of an acquired image, Σ_p is the information matrix of the point features of an acquired image, and s is a scale parameter under which the reprojection errors are computed. By iteratively minimizing the third energy function C_s, the three-dimensional line segment corresponding to at least one two-dimensional line segment in each key frame acquired image and the pose of the image acquisition device corresponding to each key frame acquired image can be corrected, so as to eliminate scale drift.
In the embodiment of the disclosure, the observation position of a two-dimensional line segment in the t-th frame acquired image is determined using its previously determined observation position in the t-1-th frame acquired image as prior information, so that the three-dimensional line graph of the target environment can be updated by exploiting the spatio-temporal coherence between acquired images. Therefore, rapid construction of the three-dimensional line graph can be achieved even in scenarios such as image acquisition device jitter and motion blur, a structured line graph model is obtained, and the robustness of three-dimensional line graph construction is improved.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principle and logic; due to space limitations, the details are not described in this disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a three-dimensional line graph construction apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any one of the three-dimensional line graph construction methods provided by the present disclosure; for the corresponding technical solutions and descriptions, reference may be made to the corresponding descriptions in the method section, which are not repeated here.
Fig. 8 shows a block diagram of a three-dimensional line graph construction apparatus of an embodiment of the present disclosure. As shown in fig. 8, the apparatus 80 includes:
the first determining module 81 is configured to determine, according to an observed position of at least one two-dimensional line segment in the t-1-th frame captured image, a predicted position of the at least one two-dimensional line segment in the t-th frame captured image, where the captured image is a two-dimensional image of a target environment captured by an image capture device, the two-dimensional line segment corresponds to a three-dimensional line segment in a three-dimensional line graph of the target environment, and t is an integer greater than 1;
a second determining module 82, configured to determine, according to the predicted position of at least one two-dimensional line segment in the t-th frame acquired image, the observation position of each two-dimensional line segment in the t-th frame acquired image respectively;
and the updating module 83 is configured to update the three-dimensional line graph of the target environment according to the observation position of each two-dimensional line segment in the t-th frame of the acquired image.
In one possible implementation, the first determining module 81 includes:
the first determining submodule is used for determining the motion increment of any two-dimensional line segment between the t-1 frame acquisition image and the t frame acquisition image;
and the second determining submodule is used for determining the predicted position of the two-dimensional line segment in the t-th frame acquired image according to the observed position of the two-dimensional line segment in the t-1 th frame acquired image and the motion increment of the two-dimensional line segment between the t-1 th frame acquired image and the t-th frame acquired image.
In one possible implementation, the first determining module 81 includes:
the third determining submodule is used for determining the predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation pose of the image acquisition equipment corresponding to the t-1 th frame of acquired image;
and the fourth determining submodule is used for determining the predicted position of the two-dimensional line segment in the t-th frame acquired image according to the three-dimensional line segment corresponding to the two-dimensional line segment and the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image aiming at any two-dimensional line segment.
In one possible implementation, the first determining module 81 includes:
the first determining submodule is used for determining the motion increment of any two-dimensional line segment between the t-1 frame acquisition image and the t frame acquisition image;
the second determining submodule is used for determining a first predicted position of the two-dimensional line segment in the t-frame acquired image according to the observed position of the two-dimensional line segment in the t-1 frame acquired image and the motion increment of the two-dimensional line segment between the t-1 frame acquired image and the t-frame acquired image;
the third determining submodule is used for determining the predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation pose of the image acquisition equipment corresponding to the t-1 th frame of acquired image;
the fourth determining submodule is used for determining a second predicted position of the two-dimensional line segment in the t frame acquired image according to the three-dimensional line segment corresponding to the two-dimensional line segment and the predicted pose of the image acquisition equipment corresponding to the t frame acquired image;
and the fifth determining submodule is used for determining the predicted position of the two-dimensional line segment in the t frame acquired image according to the first predicted position and the second predicted position.
In a possible implementation manner, the third determining submodule is specifically configured to:
determining a motion increment of the image acquisition equipment between the t-1 frame acquisition image and the t frame acquisition image;
and determining the predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation pose of the image acquisition equipment corresponding to the t-1-th frame of acquired image and the motion increment of the image acquisition equipment between the t-1-th frame of acquired image and the t-th frame of acquired image.
In a possible implementation manner, the fourth determining submodule is specifically configured to:
and projecting the three-dimensional line segment corresponding to the two-dimensional line segment into the t-th frame acquired image according to the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image to obtain the predicted position of the two-dimensional line segment in the t-th frame acquired image.
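A minimal sketch of this projection step, assuming a pinhole model with world-to-camera pose (R, t); the function name is illustrative:

```python
import numpy as np

def project_segment(K, R, t, P1, P2):
    """Project the endpoints of a 3-D segment into the image with pose (R, t)."""
    def pinhole(P):
        x = K @ (R @ np.asarray(P, float) + t)
        return x[:2] / x[2]          # dehomogenize to pixel coordinates
    return pinhole(P1), pinhole(P2)

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
q1, q2 = project_segment(K, np.eye(3), np.zeros(3),
                         [0.0, 0.0, 5.0], [1.0, 0.0, 5.0])
```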
In a possible implementation manner, the fifth determining submodule is specifically configured to:
determining the length of a first line segment of the two-dimensional line segment in the t frame acquisition image according to the first prediction position;
determining the length of a second line segment of the two-dimensional line segment in the t frame acquisition image according to the second prediction position;
and determining the predicted position of the two-dimensional line segment in the t-th frame acquisition image through line segment length weighting operation according to the first predicted position, the second predicted position, the first line segment length and the second line segment length.
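For illustration, one simple reading of this line-segment-length-weighted operation; the exact weighting is not spelled out above, so the scheme below, which weights the longer prediction more heavily, is an assumption:

```python
import numpy as np

def fuse_predictions(first_pred, second_pred):
    """Length-weighted fusion of two predicted 2-D segments (2 endpoints each)."""
    p = np.asarray(first_pred, float)
    q = np.asarray(second_pred, float)
    lp, lq = np.linalg.norm(p[1] - p[0]), np.linalg.norm(q[1] - q[0])
    w = lp / (lp + lq)               # longer prediction -> larger weight
    # Pair up the nearer endpoints before averaging.
    if (np.linalg.norm(p[0] - q[0]) + np.linalg.norm(p[1] - q[1]) >
            np.linalg.norm(p[0] - q[1]) + np.linalg.norm(p[1] - q[0])):
        q = q[::-1]
    return w * p + (1.0 - w) * q

# Two equally long parallel predictions fuse to the segment midway between them.
fused = fuse_predictions([[0.0, 0.0], [2.0, 0.0]], [[0.0, 1.0], [2.0, 1.0]])
```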
In one possible implementation, the second determining module 82 includes:
the local extraction submodule is used for executing local extraction operation on the t frame acquired image according to the predicted position of any two-dimensional line segment in the t frame acquired image;
and the sixth determining submodule is used for determining the observation position of the two-dimensional line segment in the t-th frame acquisition image according to the position of the extracted observation line segment under the condition that the observation line segment of the two-dimensional line segment in the t-th frame acquisition image is extracted.
In one possible implementation, the local extraction submodule is specifically configured to:
determining a plurality of seed pixel points corresponding to the two-dimensional line segment in the t-th frame of acquired image, and respectively determining a line support area corresponding to each seed pixel point to obtain a plurality of line support areas;
respectively determining a fitting line segment corresponding to each line supporting area through a line segment fitting algorithm to obtain a plurality of fitting line segments;
determining a fitted line segment which is collinear with the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image and has an overlapping part as a candidate line segment under the condition that a fitted line segment which is collinear with the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image and has an overlapping part exists, wherein the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image corresponds to the predicted position of the two-dimensional line segment in the t-th frame acquisition image;
and determining an observation line segment of the two-dimensional line segment in the t-th frame acquisition image according to the candidate line segment.
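An illustrative sketch of the collinearity-and-overlap test used to pick candidate line segments among the fitted segments (the tolerances and function name are assumptions):

```python
import numpy as np

def is_candidate(fit_seg, pred_seg, angle_tol=np.deg2rad(3.0), dist_tol=2.0):
    """Collinearity + overlap test between a fitted and a predicted 2-D segment."""
    f0, f1 = (np.asarray(p, float) for p in fit_seg)
    p0, p1 = (np.asarray(p, float) for p in pred_seg)
    dp = (p1 - p0) / np.linalg.norm(p1 - p0)
    df = (f1 - f0) / np.linalg.norm(f1 - f0)
    # 1) near-parallel directions (up to sign)?
    if np.arccos(min(1.0, abs(float(dp @ df)))) > angle_tol:
        return False
    # 2) fitted midpoint close to the predicted line (perpendicular distance)?
    m = 0.5 * (f0 + f1) - p0
    if abs(m[0] * dp[1] - m[1] * dp[0]) > dist_tol:
        return False
    # 3) projections onto the predicted direction overlap the predicted span?
    t0, t1 = sorted([float((f0 - p0) @ dp), float((f1 - p0) @ dp)])
    return t1 > 0.0 and t0 < float((p1 - p0) @ dp)

ok = is_candidate(((4.0, 0.5), (8.0, 0.5)), ((0.0, 0.0), (10.0, 0.0)))
bad = is_candidate(((4.0, 5.0), (8.0, 5.0)), ((0.0, 0.0), (10.0, 0.0)))
```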
In one possible implementation, the apparatus 80 further includes:
the third determining module is configured to determine a predicted line segment of the two-dimensional line segment in the t-th frame acquired image as the observation line segment of the two-dimensional line segment in the t-th frame acquired image in a case where the observation line segment of the two-dimensional line segment in the t-th frame acquired image is not extracted and an extracted observation line segment exists in the t-k-1-th to t-1-th frame acquired images, where k is an integer with 0 ≤ k < t;
and the fourth determining module is used for deleting the two-dimensional line segment in the t-th frame acquired image under the condition that the observation line segment of the two-dimensional line segment in the t-th frame acquired image is not extracted and the extracted observation line segment does not exist in the t-k-1 to t-1 frame acquired images.
In one possible implementation, the update module 83 includes:
the first updating submodule is used for merging at least two-dimensional line segments in the t frame acquired image under the condition that the at least two-dimensional line segments are collinear and have overlapped parts in the t frame acquired image according to the observation position of the at least one two-dimensional line segment in the t frame acquired image, so that the updated observation position of the at least one two-dimensional line segment in the t frame acquired image is obtained;
and the second updating submodule is used for updating the three-dimensional line graph of the target environment according to the updated observation position of at least one two-dimensional line segment in the t-th frame acquisition image.
In one possible implementation, the apparatus 80 further includes:
and the fifth determining module is used for correcting the predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation position of each two-dimensional line segment in the t-th frame of acquired image to obtain the observation pose of the image acquisition equipment corresponding to the t-th frame of acquired image.
In a possible implementation manner, the fifth determining module is specifically configured to:
obtaining a corrected observation pose of the image acquisition device corresponding to each frame of the t-m-th to t-th frame acquired images by minimizing a first energy function, wherein the first energy function includes at least one of the following data items: a first data item formed by the reprojection error term of the point features of each of the t-m-th to t-th frame acquired images and the information matrix of the point features; a second data item formed by the reprojection error term of the line segment features of each of the t-m-th to t-th frame acquired images and the information matrix of the line segment features; and a third data item formed by the pose information matrix of the image acquisition device corresponding to the t-m-th to t-th frame acquired images and the smoothness term between the poses of the image acquisition device corresponding to adjacent frame acquired images, where m is an integer with 2 ≤ m < t.
In one possible implementation, the update module 83 includes:
and the third updating submodule is used for updating the three-dimensional line graph of the target environment according to the observation positions of the two-dimensional line segments in the key frame acquisition images in the 1 st frame to the t th frame acquisition images under the condition that the t-th frame acquisition image is the key frame acquisition image.
In a possible implementation manner, the third update submodule is specifically configured to:
aiming at any two-dimensional line segment, under the condition that a three-dimensional line segment corresponding to the two-dimensional line segment is not determined according to the observation position of the two-dimensional line segment in the t-th frame of acquired image and the two-dimensional line segment exists in at least two key frame of acquired images, performing line segment triangulation operation according to the observation position of the two-dimensional line segment in the at least two key frame of acquired images and the central point of the image acquisition equipment corresponding to the at least two key frame of acquired images, and determining the three-dimensional line segment corresponding to the two-dimensional line segment in the t-th frame of acquired image.
In a possible implementation manner, the third update submodule is specifically configured to:
for the common-view acquired images of the t-th frame acquired image among the key frame acquired images, correcting the three-dimensional line segment corresponding to at least one two-dimensional line segment in each frame of common-view acquired image and the observation pose of the image acquisition device corresponding to each frame of common-view acquired image by minimizing a second energy function, wherein the second energy function includes at least one of the following data items: a first data item formed by the reprojection error term of the point features of each frame of common-view acquired image and the information matrix of the point features, and a second data item formed by the reprojection error term of the line segment features of each frame of common-view acquired image and the information matrix of the line segment features. A common-view acquired image of the t-th frame acquired image is an image whose image content is the same as or similar to the image content of the t-th frame acquired image.
In a possible implementation manner, the third update submodule is specifically configured to:
in a case where a loop closure is detected in the three-dimensional line graph of the target environment, updating, by minimizing a third energy function, the three-dimensional line segment corresponding to at least one two-dimensional line segment in each key frame acquired image and the pose of the image acquisition device corresponding to each key frame acquired image, wherein the third energy function includes at least one of the following data items: a first data item formed by the reprojection error term of the point features of each key frame acquired image and the information matrix corresponding to the point features, a second data item formed by the reprojection error term of the line segment features of each key frame acquired image and the information matrix corresponding to the line segment features, and a scale parameter.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The embodiments of the present disclosure also provide a computer program product, which includes computer readable code, and when the computer readable code runs on a device, a processor in the device executes instructions for implementing the three-dimensional line graph construction method provided in any one of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the three-dimensional line graph construction method provided in any of the embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 9 shows a block diagram of an electronic device of an embodiment of the disclosure. For example, the electronic device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or a similar terminal.
Referring to fig. 9, electronic device 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916.
The processing component 902 generally controls overall operation of the electronic device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 902 can include one or more modules that facilitate interaction between processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the electronic device 900. Examples of such data include instructions for any application or method operating on the electronic device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 906 provides power to the various components of the electronic device 900. The power components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 900.
The multimedia components 908 include a screen that provides an output interface between the electronic device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 900 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessments of various aspects of the electronic device 900. For example, the sensor component 914 may detect an open/closed state of the electronic device 900 and the relative positioning of components, such as the display and keypad of the electronic device 900; the sensor component 914 may also detect a change in the position of the electronic device 900 or a component of the electronic device 900, the presence or absence of user contact with the electronic device 900, the orientation or acceleration/deceleration of the electronic device 900, and a change in the temperature of the electronic device 900. The sensor component 914 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor component 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the electronic device 900 and other devices. The electronic device 900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 904, is also provided, including computer program instructions executable by the processor 920 of the electronic device 900 to perform the above-described methods.
Fig. 10 shows a block diagram of an electronic device of an embodiment of the disclosure. For example, the electronic device 1000 may be provided as a server. Referring to fig. 10, electronic device 1000 includes a processing component 1022 that further includes one or more processors, and memory resources, represented by memory 1032, for storing instructions, such as application programs, that are executable by processing component 1022. The application programs stored in memory 1032 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1022 is configured to execute instructions to perform the above-described methods.
The electronic device 1000 may also include a power supply component 1026 configured to perform power management for the electronic device 1000, a wired or wireless network interface 1050 configured to connect the electronic device 1000 to a network, and an input/output (I/O) interface 1058. The electronic device 1000 may operate based on an operating system stored in memory 1032, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1032, is also provided, including computer program instructions executable by the processing component 1022 of the electronic device 1000 to perform the above-described method.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (20)

1. A three-dimensional line graph construction method, comprising:
determining a predicted position of at least one two-dimensional line segment in a t-th frame acquired image according to an observed position of the at least one two-dimensional line segment in a (t-1)-th frame acquired image, wherein the acquired image is a two-dimensional image of a target environment acquired by image acquisition equipment, the two-dimensional line segment corresponds to a three-dimensional line segment in a three-dimensional line graph of the target environment, and t is an integer greater than 1;
respectively determining the observation position of each two-dimensional line segment in the t-th frame acquisition image according to the predicted position of the at least one two-dimensional line segment in the t-th frame acquisition image;
and updating the three-dimensional line graph of the target environment according to the observation position of each two-dimensional line segment in the t-th frame acquisition image.
2. The method of claim 1, wherein determining the predicted position of the at least one two-dimensional line segment in the t-th frame acquired image according to the observed position of the at least one two-dimensional line segment in the (t-1)-th frame acquired image comprises:
for any two-dimensional line segment, determining a motion increment of the two-dimensional line segment between the (t-1)-th frame acquired image and the t-th frame acquired image;
and determining the predicted position of the two-dimensional line segment in the t-th frame acquired image according to the observed position of the two-dimensional line segment in the (t-1)-th frame acquired image and the motion increment of the two-dimensional line segment between the (t-1)-th frame acquired image and the t-th frame acquired image.
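The prediction step of claim 2 amounts to shifting the endpoints observed in frame t-1 by the segment's inter-frame motion increment. A minimal sketch follows; the helper name and the per-endpoint representation of the motion increment (e.g. taken from optical flow at the endpoints) are assumptions:

```python
import numpy as np

def predict_segment_position(obs_prev, motion_delta):
    """Predicted position in frame t = observed position in frame t-1
    plus the segment's motion increment between the two frames.
    obs_prev:     (2, 2) endpoint array [[x1, y1], [x2, y2]] from frame t-1.
    motion_delta: (2, 2) per-endpoint displacement between frames t-1 and t."""
    return np.asarray(obs_prev, dtype=float) + np.asarray(motion_delta, dtype=float)
```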
3. The method of claim 1, wherein determining the predicted position of the at least one two-dimensional line segment in the t-th frame acquired image according to the observed position of the at least one two-dimensional line segment in the (t-1)-th frame acquired image comprises:
determining a predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation pose of the image acquisition equipment corresponding to the t-1 th frame of acquired image;
and for any two-dimensional line segment, determining the predicted position of the two-dimensional line segment in the t-th frame acquired image according to the three-dimensional line segment corresponding to the two-dimensional line segment and the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image.
4. The method of claim 1, wherein determining the predicted position of the at least one two-dimensional line segment in the t-th frame acquired image according to the observed position of the at least one two-dimensional line segment in the (t-1)-th frame acquired image comprises:
for any two-dimensional line segment, determining the motion increment of the two-dimensional line segment between the t-1 frame acquisition image and the t-frame acquisition image;
determining a first prediction position of the two-dimensional line segment in the t-th frame acquired image according to the observation position of the two-dimensional line segment in the t-1 th frame acquired image and the motion increment of the two-dimensional line segment between the t-1 th frame acquired image and the t-th frame acquired image;
determining a predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation pose of the image acquisition equipment corresponding to the t-1 th frame of acquired image;
determining a second predicted position of the two-dimensional line segment in the t-th frame acquired image according to the three-dimensional line segment corresponding to the two-dimensional line segment and the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image;
and determining the predicted position of the two-dimensional line segment in the t-th frame acquisition image according to the first predicted position and the second predicted position.
5. The method according to claim 3 or 4, wherein determining the predicted pose of the image capturing device corresponding to the t-th frame captured image according to the observed pose of the image capturing device corresponding to the t-1 th frame captured image comprises:
determining a motion increment of the image acquisition equipment between a t-1 frame acquisition image and a t frame acquisition image;
and determining the predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation pose of the image acquisition equipment corresponding to the t-1 th frame of acquired image and the motion increment of the image acquisition equipment between the t-1 th frame of acquired image and the t-th frame of acquired image.
6. The method according to claim 3 or 4, wherein determining the predicted position of the two-dimensional line segment in the t-th frame captured image according to the three-dimensional line segment corresponding to the two-dimensional line segment and the predicted pose of the image capturing device corresponding to the t-th frame captured image comprises:
and projecting the three-dimensional line segment corresponding to the two-dimensional line segment into the t-th frame acquired image according to the predicted pose of the image acquisition equipment corresponding to the t-th frame acquired image to obtain the predicted position of the two-dimensional line segment in the t-th frame acquired image.
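The projection in claim 6 can be sketched for a pinhole camera as follows; the pose convention (world-to-camera rotation R and translation t) and the intrinsic matrix K are assumptions of this sketch:

```python
import numpy as np

def project_segment(K, R, t, P1, P2):
    """Project a 3D segment (world-frame endpoints P1, P2) into the image of
    a camera with predicted pose (R, t) and intrinsics K, yielding the
    predicted 2D position of the corresponding two-dimensional line segment."""
    out = []
    for P in (P1, P2):
        p_cam = R @ P + t                    # world -> camera coordinates
        assert p_cam[2] > 0, "endpoint behind camera"
        p_img = K @ p_cam
        out.append(p_img[:2] / p_img[2])     # perspective division
    return np.array(out)
```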
7. The method of claim 4, wherein determining the predicted position of the two-dimensional line segment in the t-th frame acquired image according to the first predicted position and the second predicted position comprises:
determining a first line segment length of the two-dimensional line segment in the t-th frame acquired image according to the first predicted position;
determining a second line segment length of the two-dimensional line segment in the t-th frame acquired image according to the second predicted position;
and determining the predicted position of the two-dimensional line segment in the t-th frame acquired image through a line segment length weighting operation according to the first predicted position, the second predicted position, the first line segment length and the second line segment length.
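Claim 7 does not spell out the line segment length weighting operation. One plausible reading, sketched below under the assumption that each prediction's endpoints are weighted in proportion to that prediction's own segment length, is:

```python
import numpy as np

def seg_length(seg):
    """Euclidean length of a segment given as a (2, 2) endpoint array."""
    return np.linalg.norm(seg[1] - seg[0])

def fuse_predictions(seg_a, seg_b):
    """Fuse two predicted positions of the same 2D segment: the longer
    prediction contributes more to the fused endpoints."""
    la, lb = seg_length(seg_a), seg_length(seg_b)
    w = la / (la + lb)
    return w * seg_a + (1.0 - w) * seg_b
```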
8. The method according to any one of claims 1 to 7, wherein determining the observed position of each two-dimensional line segment in the t-th frame acquired image according to the predicted position of the at least one two-dimensional line segment in the t-th frame acquired image comprises:
for any two-dimensional line segment, performing a local extraction operation on the t-th frame acquired image according to the predicted position of the two-dimensional line segment in the t-th frame acquired image;
and under the condition that the observation line segment of the two-dimensional line segment in the t-th frame acquisition image is extracted, determining the observation position of the two-dimensional line segment in the t-th frame acquisition image according to the position of the extracted observation line segment.
9. The method according to claim 8, wherein for any two-dimensional line segment, performing a local extraction operation on the t-th frame captured image according to the predicted position of the two-dimensional line segment in the t-th frame captured image comprises:
determining a plurality of seed pixel points corresponding to the two-dimensional line segments in the t-th frame of acquired image, and respectively determining a line support area corresponding to each seed pixel point to obtain a plurality of line support areas;
respectively determining a fitting line segment corresponding to each line supporting area through a line segment fitting algorithm to obtain a plurality of fitting line segments;
determining a fitted line segment which is collinear with the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image and has an overlapping part as a candidate line segment under the condition that a fitted line segment which is collinear with the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image and has an overlapping part exists, wherein the predicted line segment of the two-dimensional line segment in the t-th frame acquisition image corresponds to the predicted position of the two-dimensional line segment in the t-th frame acquisition image;
and determining an observation line segment of the two-dimensional line segment in the t-th frame acquisition image according to the candidate line segment.
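The test that a fitted segment is collinear with the predicted segment and overlaps it (claim 9) might be sketched as below; the angle and distance tolerances are illustrative assumptions:

```python
import numpy as np

def is_collinear_overlapping(pred, fit, angle_tol_deg=3.0, dist_tol=2.0):
    """Check whether a fitted segment is approximately collinear with the
    predicted segment and overlaps it, making it a candidate observation.
    pred, fit: (2, 2) endpoint arrays."""
    d_pred = pred[1] - pred[0]
    d_fit = fit[1] - fit[0]
    # direction agreement (absolute value tolerates reversed endpoints)
    cosang = abs(d_pred @ d_fit) / (np.linalg.norm(d_pred) * np.linalg.norm(d_fit))
    if cosang < np.cos(np.radians(angle_tol_deg)):
        return False
    # perpendicular distance of fitted endpoints to the predicted line
    n = np.array([-d_pred[1], d_pred[0]]) / np.linalg.norm(d_pred)
    if max(abs(n @ (fit[0] - pred[0])), abs(n @ (fit[1] - pred[0]))) > dist_tol:
        return False
    # 1-D overlap of the two segments along the predicted direction
    u = d_pred / np.linalg.norm(d_pred)
    a = sorted([0.0, u @ (pred[1] - pred[0])])
    b = sorted([u @ (fit[0] - pred[0]), u @ (fit[1] - pred[0])])
    return min(a[1], b[1]) > max(a[0], b[0])
```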
10. The method of claim 9, further comprising:
under the condition that no observation line segment of the two-dimensional line segment is extracted in the t-th frame acquired image and extracted observation line segments exist in the (t-k-1)-th to (t-1)-th frame acquired images, determining the predicted line segment of the two-dimensional line segment in the t-th frame acquired image as the observation line segment of the two-dimensional line segment in the t-th frame acquired image, wherein k is an integer greater than or equal to 0 and less than t;
and under the condition that the observation line segment of the two-dimensional line segment in the t-th frame of the acquired image is not extracted and the extracted observation line segment does not exist in the t-k-1 to t-1 frames of the acquired image, deleting the two-dimensional line segment in the t-th frame of the acquired image.
11. The method of any of claims 1-10, wherein updating the three-dimensional line graph of the target environment based on the observed position of each two-dimensional line segment in the t-th acquired image comprises:
according to the observation position of the at least one two-dimensional line segment in the t-th frame acquired image, under the condition that at least two two-dimensional line segments are collinear in the t-th frame acquired image and have an overlapping portion, merging the at least two two-dimensional line segments in the t-th frame acquired image to obtain an updated observation position of at least one two-dimensional line segment in the t-th frame acquired image;
and updating the three-dimensional line graph of the target environment according to the updated observation position of at least one two-dimensional line segment in the t-th frame acquisition image.
12. The method of claim 3, further comprising:
and correcting the predicted pose of the image acquisition equipment corresponding to the t-th frame of acquired image according to the observation position of each two-dimensional line segment in the t-th frame of acquired image to obtain the observation pose of the image acquisition equipment corresponding to the t-th frame of acquired image.
13. The method of claim 12, wherein correcting the predicted pose of the image capturing device corresponding to the t-th frame of captured image according to the observed position of each two-dimensional line segment in the t-th frame of captured image to obtain the observed pose of the image capturing device corresponding to the t-th frame of captured image comprises:
obtaining a corrected observation pose of the image acquisition equipment corresponding to each frame of acquired image in the (t-m)-th to t-th frame acquired images by minimizing a first energy function, wherein the first energy function comprises at least one of the following data items: a first data item formed by a reprojection error item of the point features of each frame of acquired image in the (t-m)-th to t-th frame acquired images and an information matrix of the point features, a second data item formed by a reprojection error item of the line segment features of each frame of acquired image in the (t-m)-th to t-th frame acquired images and an information matrix of the line segment features, and a third data item formed by a pose information matrix of the image acquisition equipment corresponding to the (t-m)-th to t-th frame acquired images and a pose smoothing item between poses of the image acquisition equipment corresponding to adjacent frame acquired images, wherein m is an integer greater than or equal to 2 and less than t.
14. The method of claim 1, wherein updating the three-dimensional line graph of the target environment according to the observed position of each two-dimensional line segment in the t-th frame captured image comprises:
and under the condition that the t-th frame acquisition image is a key frame acquisition image, updating the three-dimensional line graph of the target environment according to the observation positions of the two-dimensional line segments in the key frame acquisition images in the 1 st to t-th frame acquisition images.
15. The method of claim 14, wherein updating the three-dimensional line graph of the target environment according to the observed positions of the respective two-dimensional line segments in the key frame captured images in the 1 st to t-th frame captured images comprises:
for any two-dimensional line segment, under the condition that a three-dimensional line segment corresponding to the two-dimensional line segment cannot be determined according to the observation position of the two-dimensional line segment in the t-th frame acquired image and the two-dimensional line segment exists in at least two key frame acquired images, performing a line segment triangulation operation according to the observation positions of the two-dimensional line segment in the at least two key frame acquired images and the center points of the image acquisition equipment corresponding to the at least two key frame acquired images, to determine the three-dimensional line segment corresponding to the two-dimensional line segment in the t-th frame acquired image.
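Line segment triangulation from two key frames can be sketched as intersecting two back-projected planes, each spanned by a camera center and the observed 2D segment in that view. The pose conventions (world-to-camera R, t) and helper names below are assumptions of this sketch:

```python
import numpy as np

def backprojected_plane(K, R, t, x1, x2):
    """Plane through the camera center and the two endpoints of a 2D segment
    (pixel coordinates), expressed in world coordinates as (n, d) with
    n . X + d = 0."""
    Kinv = np.linalg.inv(K)
    # viewing rays through the endpoints, rotated into the world frame
    r1 = R.T @ (Kinv @ np.array([x1[0], x1[1], 1.0]))
    r2 = R.T @ (Kinv @ np.array([x2[0], x2[1], 1.0]))
    C = -R.T @ t                             # camera center in world frame
    n = np.cross(r1, r2)
    return n, -n @ C

def triangulate_line(cam_a, cam_b):
    """3D line as the intersection of two back-projected planes.
    Each cam is (K, R, t, x1, x2). Returns (point_on_line, unit_direction)."""
    n1, d1 = backprojected_plane(*cam_a)
    n2, d2 = backprojected_plane(*cam_b)
    direction = np.cross(n1, n2)
    # a point satisfying both plane equations (minimum-norm solution)
    A = np.vstack([n1, n2])
    p = np.linalg.lstsq(A, -np.array([d1, d2]), rcond=None)[0]
    return p, direction / np.linalg.norm(direction)
```

The endpoints of the 3D segment would then be recovered by clipping this infinite line against the observed 2D segments, which is omitted here.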
16. The method of claim 14, wherein updating the three-dimensional line graph of the target environment according to the observed positions of the respective two-dimensional line segments in the key frame captured images in the 1 st to t-th frame captured images comprises:
for a common-view acquired image of the t-th frame acquired image among the key frame acquired images, correcting a three-dimensional line segment corresponding to at least one two-dimensional line segment in each frame of common-view acquired image and an observation pose of the image acquisition equipment corresponding to each frame of common-view acquired image by minimizing a second energy function, wherein the second energy function comprises at least one data item, and the common-view acquired image of the t-th frame acquired image is an image whose image content is the same as or similar to the image content of the t-th frame acquired image.
17. The method of claim 14, wherein updating the three-dimensional line graph of the target environment according to the observed positions of the respective two-dimensional line segments in the key frame captured images in the 1 st to t-th frame captured images comprises:
under the condition that a loop closure exists in the three-dimensional line graph of the target environment, updating a three-dimensional line segment corresponding to at least one two-dimensional line segment in each key frame acquired image and the pose of the image acquisition equipment corresponding to each key frame acquired image by minimizing a third energy function, wherein the third energy function comprises at least one of the following data items: a first data item formed by a reprojection error item of the point features of each key frame acquired image and an information matrix corresponding to the point features, a second data item formed by a reprojection error item of the line segment features of each key frame acquired image and an information matrix corresponding to the line segment features, and a scale parameter.
18. A three-dimensional line graph construction apparatus, comprising:
a first determining module, configured to determine a predicted position of at least one two-dimensional line segment in a t-th frame captured image according to an observed position of the at least one two-dimensional line segment in a (t-1)-th frame captured image, wherein the captured images are two-dimensional images of a target environment captured by an image capture device, each two-dimensional line segment corresponds to a three-dimensional line segment in a three-dimensional line graph of the target environment, and t is an integer greater than 1;
a second determining module, configured to determine an observed position of each two-dimensional line segment in the t-th frame captured image according to the predicted position of the at least one two-dimensional line segment in the t-th frame captured image; and
an updating module, configured to update the three-dimensional line graph of the target environment according to the observed position of each two-dimensional line segment in the t-th frame captured image.
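The three modules of claim 18 describe a predict-match-update tracking loop. A minimal sketch of the first two steps, under the simplifying assumption that inter-frame motion is a pure 2D translation (the patent does not fix the prediction model; the names below are illustrative):

```python
import numpy as np

def predict_position(obs_prev, motion):
    """First module: predict a segment's endpoints (2x2 array, one endpoint
    per row) in frame t by shifting its frame t-1 observation by the
    estimated inter-frame motion."""
    return obs_prev + motion

def observe_position(predicted, detections):
    """Second module: take the detected segment closest to the prediction
    as the observed position in frame t (nearest-neighbour association)."""
    dists = [np.linalg.norm(d - predicted) for d in detections]
    return detections[int(np.argmin(dists))]
```

The updating module would then feed the matched observations into the energy-function minimizations of claims 16 and 17 to refine the 3D line segments and camera poses.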
19. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any one of claims 1 to 17.
20. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 17.
CN201911275034.0A 2019-12-12 Three-dimensional line graph construction method and device, electronic equipment and storage medium Active CN112967311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911275034.0A CN112967311B (en) 2019-12-12 Three-dimensional line graph construction method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112967311A true CN112967311A (en) 2021-06-15
CN112967311B CN112967311B (en) 2024-06-07


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105096386A (en) * 2015-07-21 2015-11-25 Civil Aviation University of China Method for automatically generating geographic maps for large-range complex urban environments
CN106570507A (en) * 2016-10-26 2017-04-19 Beihang University Multi-angle consistent plane detection and analysis method for the three-dimensional structure of monocular video scenes
JP2017162024A (en) * 2016-03-07 2017-09-14 Waseda University Stereo matching processing method, processing program and processing device
CN108230437A (en) * 2017-12-15 2018-06-29 Shenzhen SenseTime Technology Co Ltd Scene reconstruction method and device, electronic equipment, program and medium
CN108510516A (en) * 2018-03-30 2018-09-07 Shenzhen Jimuyida Technology Co Ltd Three-dimensional line segment extraction method and system for scattered point clouds
CN108986037A (en) * 2018-05-25 2018-12-11 Chongqing University Monocular visual odometry localization method and system based on the semi-direct method
CN109166149A (en) * 2018-08-13 2019-01-08 Wuhan University Positioning and three-dimensional wireframe reconstruction method and system fusing a binocular camera and an IMU
CN109558879A (en) * 2017-09-22 2019-04-02 Huawei Technologies Co Ltd Visual SLAM method and apparatus based on point and line features
CN109584362A (en) * 2018-12-14 2019-04-05 Beijing SenseTime Technology Development Co Ltd Three-dimensional model construction method and device, electronic equipment and storage medium
CN109978891A (en) * 2019-03-13 2019-07-05 Zhejiang SenseTime Technology Development Co Ltd Image processing method and device, electronic equipment and storage medium
EP3508935A1 (en) * 2018-01-05 2019-07-10 iRobot Corporation System for spot cleaning by a mobile robot
CN110125928A (en) * 2019-03-27 2019-08-16 Zhejiang University of Technology Binocular visual-inertial SLAM system performing feature matching between adjacent frames
WO2019169540A1 (en) * 2018-03-06 2019-09-12 Standard Robots (Shenzhen) Co Ltd Method for tightly-coupled visual SLAM, terminal and computer-readable storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
GUOXUAN ZHANG et al.: "Building a 3-D Line-Based Map Using Stereo SLAM", IEEE Transactions on Robotics, pages 1364-1377 *
LI HAIFENG; HU ZUNHE; FAN LONGFEI; JIANG ZIZHENG; CHEN XINWEI: "Line segment feature matching algorithm between video frames based on geometric constraints and 0-1 programming", Journal of Computer Applications, no. 08 *
CHEN QIFENG; LIU JUN; LI WEI; LEI GUANGYUAN; DONG GUANGFENG: "Research on line segment matching methods in three-dimensional reconstruction", Journal of Wuhan Institute of Technology, no. 04 *
WEI XINYU et al.: "Iterative data association algorithm for line-feature-based monocular SLAM", Application Research of Computers, pages 1-8 *

Similar Documents

Publication Publication Date Title
TWI767596B (en) Scene depth and camera motion prediction method, electronic equipment and computer readable storage medium
CN110798630B (en) Image processing method and device, electronic equipment and storage medium
CN111551191B (en) Sensor external parameter calibration method and device, electronic equipment and storage medium
CN109584362B (en) Three-dimensional model construction method and device, electronic equipment and storage medium
CN111401230B (en) Gesture estimation method and device, electronic equipment and storage medium
CN109840917B (en) Image processing method and device and network training method and device
CN111881827B (en) Target detection method and device, electronic equipment and storage medium
CN112146645B (en) Method and device for aligning coordinate system, electronic equipment and storage medium
CN112432637B (en) Positioning method and device, electronic equipment and storage medium
CN112991381B (en) Image processing method and device, electronic equipment and storage medium
CN111860373B (en) Target detection method and device, electronic equipment and storage medium
CN112184787A (en) Image registration method and device, electronic equipment and storage medium
CN108171222B (en) Real-time video classification method and device based on multi-stream neural network
CN112541971A (en) Point cloud map construction method and device, electronic equipment and storage medium
KR20210142745A (en) Information processing methods, devices, electronic devices, storage media and programs
CN114581525A (en) Attitude determination method and apparatus, electronic device, and storage medium
CN113052874B (en) Target tracking method and device, electronic equipment and storage medium
CN112837372A (en) Data generation method and device, electronic equipment and storage medium
CN112767541A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN111325786A (en) Image processing method and device, electronic equipment and storage medium
CN112967311B (en) Three-dimensional line graph construction method and device, electronic equipment and storage medium
WO2022110801A1 (en) Data processing method and apparatus, electronic device, and storage medium
CN112967311A (en) Three-dimensional line graph construction method and device, electronic equipment and storage medium
CN109543544B (en) Cross-spectrum image matching method and device, electronic equipment and storage medium
CN114549983A (en) Computer vision model training method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant