CN111784842A - Three-dimensional reconstruction method, device, equipment and readable storage medium


Publication number
CN111784842A
CN111784842A (application CN202010601885.6A)
Authority
CN
China
Prior art keywords
camera
dimensional point
pose
point cloud
dimensional
Prior art date
Legal status: Granted
Application number
CN202010601885.6A
Other languages
Chinese (zh)
Other versions
CN111784842B (en)
Inventor
姚萌 (Yao Meng)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010601885.6A
Publication of CN111784842A
Application granted
Publication of CN111784842B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Abstract

The application discloses a three-dimensional reconstruction method, apparatus, device, and readable storage medium, and relates to the field of artificial intelligence and the technical field of automatic driving. The specific implementation scheme is as follows: the electronic device acquires external parameters between the first camera and each second camera, and determines a reprojection error function of the three-dimensional point cloud according to the external parameters between the first camera and each second camera and the pose of the first camera. Then, the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera are adjusted according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose. Finally, the electronic device performs three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose. The method improves the precision of the three-dimensional point cloud and the camera pose.

Description

Three-dimensional reconstruction method, device, equipment and readable storage medium
Technical Field
The embodiment of the application relates to the technical field of computer vision, in particular to a three-dimensional reconstruction method, a three-dimensional reconstruction device, three-dimensional reconstruction equipment and a readable storage medium.
Background
Three-dimensional reconstruction is one of research hotspots in the technical field of computer vision, and the three-dimensional reconstruction technology aims to reconstruct a three-dimensional virtual model of a real object in a computer based on a two-dimensional image and display the three-dimensional virtual model on a computer screen.
In the SFM (structure-from-motion) based three-dimensional reconstruction technique, a series of images are acquired by a camera, and a three-dimensional point cloud, a camera pose, and the like are generated from the images.
In the three-dimensional reconstruction method based on the SFM, the scale of the generated three-dimensional point cloud is different from the real scale, so that the use of the three-dimensional point cloud is limited.
Disclosure of Invention
The application provides a three-dimensional reconstruction method, a three-dimensional reconstruction apparatus, three-dimensional reconstruction equipment, and a readable storage medium.
In a first aspect, an embodiment of the present application provides a three-dimensional reconstruction method, including:
external reference between the first camera and each of the at least one second camera is acquired.
And determining a reprojection error function of the three-dimensional point cloud according to the external reference between the first camera and each second camera and the pose of the first camera.
And iteratively adjusting the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose.
And performing three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
In a second aspect, an embodiment of the present application provides a three-dimensional reconstruction apparatus, including:
the acquisition module is used for acquiring external parameters between the first camera and each second camera in the at least one second camera.
And the determining module is used for determining a reprojection error function of the three-dimensional point cloud according to the external reference between the first camera and each second camera and the pose of the first camera.
And the adjusting module is used for iteratively adjusting the three-dimensional coordinates of all three-dimensional points in the three-dimensional point cloud and the pose of the first camera according to the re-projection error function so as to obtain a new three-dimensional point cloud and a new pose.
And the reconstruction module is used for performing three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the first aspect or any possible implementation of the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product containing instructions which, when run on an electronic device, cause the electronic device to perform the method of the first aspect or the various possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions for causing an electronic device to perform the method of the first aspect or the various possible implementations of the first aspect.
In a sixth aspect, an embodiment of the present application provides a three-dimensional reconstruction method, including: determining a reprojection error function of the three-dimensional point cloud according to the external reference between the first camera and each second camera and the pose of the first camera; adjusting the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud according to the reprojection error function to obtain a new three-dimensional point cloud; and adjusting the pose of the first camera to obtain a new pose.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic network architecture diagram of a three-dimensional reconstruction method provided in an embodiment of the present application;
fig. 2 is a flowchart of a three-dimensional reconstruction method provided in an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a position of a second camera in a three-dimensional reconstruction method according to an embodiment of the present application;
fig. 4 is a flowchart of a three-dimensional reconstruction method provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a three-dimensional reconstruction apparatus according to an embodiment of the present disclosure;
fig. 6 is another three-dimensional reconstruction apparatus provided in an embodiment of the present application;
fig. 7 is a block diagram of an electronic device for implementing a three-dimensional reconstruction method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
At present, with the rapid development of Artificial Intelligence (AI), unmanned-driving technology is becoming increasingly mature. Unmanned driving is also known as autonomous driving. During automatic driving, the autonomous vehicle uses a map for navigation, positioning, and the like. In the process of generating the map, a plurality of cameras of the same model are arranged on a collection vehicle, a series of images are collected by the cameras, and a three-dimensional point cloud, the poses of the cameras, and the like are then generated from the images by the SFM-based three-dimensional reconstruction technique.
The scale of the three-dimensional point cloud generated by SFM-based three-dimensional reconstruction differs from the real-world scale, which limits the use of the three-dimensional point cloud in vehicle positioning and path planning. Moreover, the overlap between the visual ranges of cameras of the same model is high, which further constrains the process of generating the three-dimensional point cloud by SFM-based three-dimensional reconstruction.
In view of this, embodiments of the present application provide a three-dimensional reconstruction method, an apparatus, a device, and a readable storage medium, which perform three-dimensional reconstruction by using relative external parameters between heterogeneous cameras as optimization constraint terms, so that a finally generated three-dimensional point cloud has a real world scale, thereby expanding a use range of the three-dimensional point cloud.
Fig. 1 is a schematic network architecture diagram of a three-dimensional reconstruction method according to an embodiment of the present application. Referring to fig. 1, the network architecture includes a server 1 and a vehicle-mounted terminal 2 disposed on a collection vehicle, with a network connection established between the vehicle-mounted terminal 2 and the server 1. The collection vehicle is also provided with a first camera 3 and at least one second camera 4, and the first camera 3 and the at least one second camera 4 are used for acquiring key frames to obtain a key frame set.
When the vehicle-mounted terminal 2 executes the three-dimensional reconstruction method provided by the embodiment of the application, the vehicle-mounted terminal 2 generates an original rough three-dimensional point cloud, the pose of the first camera and the like based on the key frame set. The vehicle-mounted terminal 2 acquires external references between the first camera and each second camera from the server 1 and the like, constructs a re-projection error function based on the external references, the original three-dimensional point cloud, the pose of the first camera and the like, adjusts the coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera based on the re-projection error function, generates a new three-dimensional point cloud and a new pose, and performs three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
When the server 1 executes the three-dimensional reconstruction method provided by the embodiment of the application, the vehicle-mounted terminal 2 acquires the key frames shot by the first camera and the second cameras and sends them to the server 1, or the first camera and the second cameras directly send the shot key frames to the server 1. The server 1 then performs three-dimensional reconstruction based on the key frame set; for details, refer to the description of the vehicle-mounted terminal 2 above, which is not repeated here.
The three-dimensional reconstruction method according to the embodiment of the present application is described in detail below based on the network architecture shown in fig. 1. For example, see fig. 2.
Fig. 2 is a flowchart of a three-dimensional reconstruction method according to an embodiment of the present application. The execution subject of the present embodiment is an electronic device, which is, for example, a server or a vehicle-mounted terminal in fig. 1. The method comprises the following steps:
101. External parameters between the first camera and each of the at least one second camera are acquired.
Illustratively, the first camera is, for example, a wide-angle camera, and the second camera is, for example, a fisheye camera. The first camera typically faces directly in front of the collection vehicle and is therefore also referred to as a forward wide-angle camera. The first camera and the second cameras are heterogeneous cameras, while the second cameras are all of the same model. The external reference between the first camera and a second camera is also referred to as the external reference between heterogeneous cameras, and includes a rotation matrix, a translation vector, and the like. Here, the translation vector is also referred to as a translation matrix.
The electronic equipment obtains external parameters between the first camera and each second camera by reading the local configuration file; or, the electronic device obtains the external reference between the first camera and each second camera from a remote database, and the like, and the embodiments of the present application are not limited.
It should be noted that the external reference in the embodiment of the present application refers to a relative external reference between the first camera and the second camera, and is not an absolute external reference. This is because the absolute external reference between the two heterogeneous cameras cannot be acquired during the movement of the acquisition vehicle.
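As an illustration of step 101, a minimal Python sketch of reading such relative external parameters from a local configuration file follows; the JSON file name and its "R"/"t" keys are assumptions made for illustration, not part of the application.

    import json
    import numpy as np

    def load_relative_extrinsics(path="extrinsics.json"):
        # Load the relative external parameters (R, t) between the first
        # camera and each second camera from a local configuration file.
        with open(path) as f:
            cfg = json.load(f)
        extrinsics = {}
        for name, entry in cfg.items():              # e.g. "fisheye_front"
            R = np.asarray(entry["R"], dtype=float)  # 3x3 rotation matrix
            t = np.asarray(entry["t"], dtype=float)  # 3-vector translation
            extrinsics[name] = (R, t)
        return extrinsics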
102. And determining a reprojection error function of the three-dimensional point cloud according to the external reference between the first camera and each second camera and the pose of the first camera.
Illustratively, the pose of the first camera determines the projection plane of the first camera, also referred to as the imaging plane of the first camera. The three-dimensional point cloud is a rough, scale-free three-dimensional point cloud obtained by the electronic device in advance from the key frames in the key frame set. "Scale-free" means that the distance between any two three-dimensional points in the point cloud has no unit: for example, the distance between two three-dimensional points may be 2, but whether that is 2 meters, 2 centimeters, or 2 decimeters is unknown.
After the three-dimensional point cloud is determined from the key frames, the pose of the first camera is also determined. The electronic device determines a reprojection error function of the original three-dimensional point cloud based on the pose of the first camera and the external parameters between the first camera and each second camera. This reprojection error function is also referred to as a cost optimization function. The reprojection error is defined as follows: point P1 in key frame 1 and point P2 in key frame 2 are a pair of matching points; a three-dimensional point P is determined from these two points, projected into key frame 1 to obtain P1', and projected into key frame 2 to obtain P2'. The error between P1 and P1' plus the error between P2 and P2' is the reprojection error of the three-dimensional point P. Since key-frame matching matches any two key frames in the key frame set, if P1 and P2 are a pair of matching points and P2 and P3 are a pair of matching points (assuming P3 is a point in a key frame captured by a fisheye camera), then P1, P2, and P3 all match each other. In that case a single three-dimensional point is determined from the three two-dimensional points, and the reprojection error is calculated accordingly.
In addition, for one camera (the first camera or a second camera described above), the reprojection error can be understood as the error between the two-dimensional point obtained by projecting a three-dimensional point of the real three-dimensional world onto the imaging plane of the camera and the real two-dimensional point of that three-dimensional point on the imaging plane. For example, suppose a key frame shot by the first camera includes a guideboard. Based on the key frame, the three-dimensional coordinates of a real two-dimensional point on the guideboard are computed; those coordinates are projected onto the imaging plane of the first camera to obtain a projected two-dimensional point, and the difference between the projected two-dimensional point and the coordinates of the real two-dimensional point on the two-dimensional image is the reprojection error.
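The per-camera reprojection error just described can be sketched as follows. This is a plain pinhole-camera illustration with an assumed intrinsic matrix K; fisheye distortion is deliberately omitted, so it is a simplification rather than the application's exact formulation.

    import numpy as np

    def reproject(P, R, t, K):
        # Project a 3D point P (world frame) into a camera with
        # world-to-camera pose (R, t) and intrinsic matrix K.
        p_cam = R @ P + t            # world -> camera coordinates
        p_img = K @ p_cam            # camera -> image plane
        return p_img[:2] / p_img[2]  # perspective division -> pixel (u, v)

    def reprojection_error(P, observed_uv, R, t, K):
        # d(P, C): distance between the projection of P under pose C = (R, t)
        # and the real two-dimensional feature point observed in the image.
        return np.linalg.norm(reproject(P, R, t, K) - observed_uv)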
103. And iteratively adjusting the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose.
For example, after obtaining the reprojection error function of the three-dimensional point cloud, the electronic device adjusts the three-dimensional coordinates of each three-dimensional point P in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function, so as to obtain a new three-dimensional point cloud and a new pose of the first camera. In the adjusting process, the electronic equipment adjusts the three-dimensional coordinates of each three-dimensional point and the pose of the first camera for multiple times based on the re-projection error function until the re-projection error function is converged, namely the value of the re-projection error function is minimum.
104. And performing three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
For example, the electronic device may map the new three-dimensional point cloud into a two-dimensional space to obtain a map.
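As one possible illustration of mapping the metric point cloud into a two-dimensional space, the sketch below rasterises the points onto a ground-plane grid; the 0.1 m cell size and the hit-count representation are assumptions.

    import numpy as np

    def point_cloud_to_grid(points, cell=0.1):
        # Project a metric 3D point cloud (N x 3, x-y = ground plane)
        # into a 2D grid whose cells count the points falling in them.
        xy = points[:, :2]
        origin = xy.min(axis=0)
        idx = np.floor((xy - origin) / cell).astype(int)
        grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint16)
        np.add.at(grid, (idx[:, 0], idx[:, 1]), 1)
        return grid, origin          # grid plus its world-frame origin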
According to the three-dimensional reconstruction method provided by the embodiment of the application, the electronic device acquires the external reference between the first camera and each second camera, and determines the reprojection error function of the three-dimensional point cloud according to the external reference between the first camera and each second camera and the pose of the first camera. Then, the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud are adjusted according to the reprojection error function to obtain a new three-dimensional point cloud, and the pose of the first camera is simultaneously adjusted to obtain a new pose. Finally, the electronic device performs three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose. In this process, the electronic device introduces the external-reference constraint between the cameras into the cost optimization function, so that the new three-dimensional point cloud carries the scale of the real world, improving the precision of both the three-dimensional point cloud and the camera pose. A high-accuracy map can therefore be generated based on the reconstructed three-dimensional point cloud.
In the above embodiment, when the electronic device determines the reprojection error function of the three-dimensional point cloud according to the external reference between the first camera and each of the second cameras and the pose of the first camera, the electronic device determines the pose of each of the second cameras according to the external reference between the first camera and each of the second cameras and the pose of the first camera. Then, the electronic equipment determines a reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the poses of the second cameras and the three-dimensional point cloud.
Illustratively, the external reference between the first camera and a second camera comprises a rotation matrix R and a translation vector t, denoted (R, t); the pose of the first camera is Cf, and the pose of the second camera is then T(R, t, Cf), where T(·) denotes a transfer function.
Finally, the electronic device determines the reprojection error function of the three-dimensional point cloud according to the pose Cf of the first camera, the pose T(R, t, Cf) of each second camera, and the projection of the coordinates of each three-dimensional point of the original three-dimensional point cloud under the first camera pose Cf.
By adopting this scheme, the electronic device determines the pose of each second camera from the external parameters and the pose of the first camera, and then determines the reprojection error function of the three-dimensional point cloud from the pose of the first camera, the poses of the second cameras, and the three-dimensional point cloud, so that the reprojection error function is determined accurately and the three-dimensional points are reconstructed accurately.
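The transfer function T(R, t, Cf) is not spelled out in the text. One plausible realisation, assuming poses are expressed as world-to-camera (R, t) pairs, is the composition below:

    import numpy as np

    def transfer_pose(R_rel, t_rel, C_f):
        # One assumed realisation of T(R, t, Cf): chain the first camera's
        # world-to-camera pose Cf = (Rf, tf) with the relative extrinsics
        # (R_rel, t_rel) of a second camera.
        R_f, t_f = C_f
        R_s = R_rel @ R_f             # x_s = R_rel (R_f x_w + t_f) + t_rel
        t_s = R_rel @ t_f + t_rel
        return (R_s, t_s)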
In the above embodiment, for each three-dimensional point in the three-dimensional point cloud, a reprojection error of the three-dimensional point in the pose of the first camera and a reprojection error of the three-dimensional point in the pose of each second camera are determined, and a reprojection error of the three-dimensional point is determined according to the reprojection error of the three-dimensional point in the pose of the first camera and the reprojection error of the three-dimensional point in the pose of each second camera; and obtaining the reprojection error of each three-dimensional point in the three-dimensional point cloud. And then, the electronic equipment determines the reprojection error of the three-dimensional point cloud according to the reprojection error of each three-dimensional point in the three-dimensional point cloud.
Illustratively, for each three-dimensional point P in the three-dimensional point cloud, the reprojection error of P is the sum of the reprojection error d(P, Cf) of P under the pose Cf of the first camera and the reprojection errors of P under the poses T(Ri, ti, Cf) of the second cameras. Formulated as:

d(P, Cf) + Σ_{i=1}^{N} d(P, T(Ri, ti, Cf))

where N denotes the number of second cameras and i denotes the i-th of the N second cameras.
In the above embodiment, the reprojection error d(P, Cf) of the three-dimensional point P under the pose Cf of the first camera is obtained as follows: point P1 in key frame 1 and point P2 in key frame 2 are a pair of matching points; a three-dimensional point P is determined from the two points, projected into key frame 1 to obtain P1' and into key frame 2 to obtain P2'; the error between P1 and P1' plus the error between P2 and P2' is then d(P, Cf). In the same way, the reprojection error of P under the pose T(Ri, ti, Cf) of each second camera can be obtained.
And after the electronic equipment obtains the reprojection error of each three-dimensional point in the three-dimensional point cloud, determining a reprojection error function of the three-dimensional point cloud according to the reprojection error of each point. The reprojection error function of the three-dimensional point cloud is:
Erep = Σ_P [ d(P, Cf) + Σ_{i=1}^{N} d(P, T(Ri, ti, Cf)) ]
by adopting the scheme, the electronic equipment determines the reprojection error function of the three-dimensional point cloud according to the reprojection error of each point, and the purpose of accurately determining the reprojection error function of the three-dimensional point cloud is achieved.
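Reusing the reproject/reprojection_error helpers from the sketch above, the summed cost could be written as follows; the layout of the observation records and the use of a single intrinsic matrix K for all cameras are simplifying assumptions.

    def total_reprojection_error(points, observations, pose_f, poses_s, K):
        # Erep = sum over all 3D points P of
        #        d(P, Cf) + sum_i d(P, T(Ri, ti, Cf)).
        # observations[j] = (uv in first camera or None,
        #                    [uv in each second camera or None]).
        total = 0.0
        for P, (uv_f, uv_s_list) in zip(points, observations):
            if uv_f is not None:
                total += reprojection_error(P, uv_f, *pose_f, K)
            for pose_s, uv_s in zip(poses_s, uv_s_list):
                if uv_s is not None:
                    total += reprojection_error(P, uv_s, *pose_s, K)
        return total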
In the above embodiment, when the electronic device iteratively adjusts the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose, the electronic device iteratively adjusts the pose of the camera corresponding to each keyframe in the keyframe set according to the reprojection error function to minimize the value of the reprojection error function, determines a new three-dimensional point according to the adjusted keyframe, and generates the new three-dimensional point cloud according to the new three-dimensional point.
Illustratively, the electronic device optimizes the three-dimensional point cloud and the pose of the first camera by Bundle Adjustment (BA), with the reprojection error function as the optimization cost function. In the optimization process, the electronic device calculates the reprojection error of each three-dimensional point according to the camera pose of each key frame, adjusts the camera pose of each key frame, and iterates multiple times to minimize the value of the reprojection error function. When the value of the reprojection error function is minimal, any two key frames are matched to determine new three-dimensional points, and a new three-dimensional point cloud is generated from the new three-dimensional points.
By adopting the scheme, the purpose of optimizing the three-dimensional reconstruction by utilizing the beam adjustment is realized.
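A minimal bundle-adjustment sketch with SciPy's least_squares is given below. It optimises a single first-camera pose plus all point coordinates while holding the relative extrinsics fixed, which is a simplification of the per-key-frame optimisation described above; reproject is the helper from the earlier sketch.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def ba_residuals(x, n_pts, observations, extrinsics, K):
        # Unpack [rvec(3), tvec(3), points(3*n_pts)] and stack the
        # reprojection residuals of the first and all second cameras.
        R_f = Rotation.from_rotvec(x[:3]).as_matrix()
        t_f = x[3:6]
        pts = x[6:].reshape(n_pts, 3)
        res = []
        for j, (uv_f, uv_s_list) in enumerate(observations):
            if uv_f is not None:
                res.append(reproject(pts[j], R_f, t_f, K) - uv_f)
            for (R_rel, t_rel), uv_s in zip(extrinsics, uv_s_list):
                if uv_s is not None:
                    R_s, t_s = R_rel @ R_f, R_rel @ t_f + t_rel
                    res.append(reproject(pts[j], R_s, t_s, K) - uv_s)
        return np.concatenate(res)

    # The fixed relative extrinsics act as the scale-preserving constraint:
    # result = least_squares(ba_residuals, x0,
    #                        args=(n_pts, observations, extrinsics, K))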
In the above embodiment, the electronic device calculates the initial rough three-dimensional point cloud in advance. That is to say, before determining a reprojection error function of the three-dimensional point cloud according to the external reference between the first camera and each of the second cameras and the pose of the first camera, the electronic device further acquires a keyframe set, where the keyframe set includes M keyframes acquired by the first camera and M keyframes acquired by each of the second cameras in the at least one second camera, performs feature point matching on each pair of keyframes in the keyframe set to obtain a plurality of pairs of matching points, and determines a three-dimensional point corresponding to each matching point in the plurality of pairs of matching points to obtain the three-dimensional point cloud. The first camera and the second camera are cameras of different models.
For example, the electronic device controls the first camera and each of the second cameras to shoot the key frames in real time, or the first camera shoots the first video, each of the second cameras shoots the second video, and the electronic device acquires the key frames based on the first video and each of the second videos.
When the electronic equipment acquires the key frames shot by the cameras in real time, the first cameras and the second cameras are arranged on the collecting vehicle, and when the collecting vehicle moves for a preset distance, the first cameras and the second cameras shoot the surrounding environment of the collecting vehicle to obtain the key frames. For example, when the collection vehicle moves for 0.33 meter, the first camera and each second camera take pictures of the surroundings of the collection vehicle to obtain one group of key frames, and when the collection vehicle continues to move for 0.33 meter, the first camera and each second camera take pictures of the surroundings of the collection vehicle to obtain another group of key frames. In this way, the collection vehicle is continuously moving, and the first camera and each second camera can acquire multiple groups of key frames.
By adopting this scheme, a key frame is shot only when the collection vehicle has moved the preset distance, and no key frame is shot while the collection vehicle is not moving, so the key frames are obtained accurately.
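A distance-triggered capture loop along these lines could look like the sketch below; the cameras' grab() method and the odometry input are hypothetical stand-ins.

    import math

    CAPTURE_STEP_M = 0.33  # the preset distance from the example above

    def maybe_capture(current_xy, last_xy, cameras):
        # Trigger all cameras once the collection vehicle has moved the
        # preset distance; while the vehicle is still, nothing is captured.
        if math.dist(current_xy, last_xy) >= CAPTURE_STEP_M:
            frames = [cam.grab() for cam in cameras]  # hypothetical API
            return frames, current_xy                 # new reference point
        return None, last_xy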
When the electronic device acquires the key frames based on a first video shot by the first camera and a second video shot by each second camera, the first camera and each second camera arranged on the collection vehicle shoot the environment around the collection vehicle simultaneously; after a period of time, the first camera has produced the first video and each second camera has produced a second video. The electronic device obtains the first video and each second video, and then extracts key frames from them at preset time intervals to obtain the key frame set. While shooting the videos, the first camera and the second cameras do not sense whether the collection vehicle is moving. When the collection vehicle is halted by an obstacle or the like, the contents of two image frames within the preset time interval are likely to be identical. Therefore, compared with shooting key frames in real time, the key frames acquired in this way are less accurate.
By adopting the scheme, the purpose of acquiring the key frame is realized.
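Frame extraction at a preset time interval can be sketched with OpenCV as follows; the one-second interval is illustrative only.

    import cv2

    def extract_key_frames(video_path, interval_s=1.0):
        # Pull one frame from the video every interval_s seconds.
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS unknown
        step = max(1, int(round(fps * interval_s)))
        frames, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                frames.append(frame)
            idx += 1
        cap.release()
        return frames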
After the electronic device acquires the key frames shot by each camera, it combines the key frames shot by all the cameras into a key frame set, extracts corner points from each key frame in the set, and describes the corner points with descriptors such as Scale-Invariant Feature Transform (SIFT) descriptors. Then, for any two key frames, the corner points in the two key frames are constrained by a fundamental matrix, and erroneous corner points are eliminated. A corner point in one key frame and the corner point with the same descriptor in the other key frame are then taken as a pair of matching points, so that a plurality of pairs of matching points are obtained. In the embodiment of the present application, the two points of a pair of matching points are located in two different key frames; point o1 in key frame 1 matching point o2 in key frame 2 means that o1 and o2 are images of the same point.
After the electronic equipment obtains a plurality of pairs of matching points, a three-dimensional point is calculated for each pair of matching points, so that a plurality of three-dimensional points are obtained, and the three-dimensional points form an original three-dimensional point cloud without a real world scale.
By adopting the scheme, the purpose of acquiring the original three-dimensional point cloud without the real world scale is achieved.
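The matching-and-triangulation pipeline described above can be sketched with OpenCV; the brute-force matcher and the default RANSAC parameters are choices made for the illustration, and P1, P2 are the 3x4 projection matrices of the two key frames.

    import cv2
    import numpy as np

    def match_and_triangulate(img1, img2, P1, P2):
        # SIFT corners + descriptors, fundamental-matrix (RANSAC) filtering
        # of wrong matches, then one 3D point per surviving pair.
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
        pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
        inl1 = pts1[mask.ravel() == 1]
        inl2 = pts2[mask.ravel() == 1]
        pts4d = cv2.triangulatePoints(P1, P2, inl1.T, inl2.T)
        return (pts4d[:3] / pts4d[3]).T   # homogeneous -> Euclidean (N x 3)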
In the above embodiment, after the electronic device performs the three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose, the electronic device further controls the driving route of the autonomous vehicle and the like according to the map obtained by the three-dimensional reconstruction.
For example, when the autonomous vehicle needs to be navigated, the start position and the destination position are sent to the electronic device, and the electronic device plans the driving path of the autonomous vehicle based on the start position, the destination position and the map. In addition, the electronic equipment can also use the map to perform offline route learning and the like, so that the learning success rate is greatly improved.
By adopting the scheme, the electronic equipment utilizes the map to perform off-line route learning and on-line path planning, so that the learning success rate is greatly improved and the driving route planning is more accurate.
In the above embodiments, the first camera is, for example, a wide-angle camera, and the second camera is, for example, a fisheye camera; the first camera generally faces the front of the collection vehicle and is therefore also referred to as a forward wide-angle camera. The second cameras are arranged in different orientations on the collection vehicle; for example, there are four second cameras in total, facing the front, rear, left, and right of the collection vehicle, respectively. For example, please refer to fig. 3. Fig. 3 is a schematic diagram illustrating the positions of the second cameras in the three-dimensional reconstruction method according to the embodiment of the present application.
Referring to fig. 3, the dot-filled rectangle indicates the forward wide-angle camera, and the diagonal-filled rectangles indicate the fisheye cameras. There are four fisheye cameras, and the fisheye camera 41 faces the same direction as the wide-angle camera 3. Fisheye camera 42 faces the left side of the collection vehicle, fisheye camera 43 faces the right side of the collection vehicle, and fisheye camera 44 faces the rear of the collection vehicle. The external reference between cameras refers to the external reference between heterogeneous cameras with partially overlapping fields of view, such as the external reference between the wide-angle camera 3 and the fisheye camera 41, between the wide-angle camera 3 and the fisheye camera 42, between the wide-angle camera 3 and the fisheye camera 43, and so on.
The above three-dimensional reconstruction method will be described in detail below, taking 4 second cameras as an example. For example, please refer to fig. 4.
Fig. 4 is a flowchart of a three-dimensional reconstruction method provided in an embodiment of the present application. The embodiment comprises the following steps:
201. and acquiring key frames shot by the wide-angle camera and each fisheye camera in the moving process of the collection vehicle to obtain a key frame set.
202. And generating an original three-dimensional point cloud without a real world scale according to the key frame set.
203. And acquiring external parameters between the wide-angle camera and each fisheye camera.
Illustratively, assume that the external reference between the wide-angle camera 3 and the fisheye camera 41 is (R1, t1), the external reference between the wide-angle camera 3 and the fisheye camera 42 is (R2, t2), the external reference between the wide-angle camera 3 and the fisheye camera 43 is (R3, t3), and the external reference between the wide-angle camera 3 and the fisheye camera 44 is (R4, t4).
204. And determining the pose of each fisheye camera.
Illustratively, assume that the pose of the wide-angle camera 3 is Cf. Then the pose of the fisheye camera 41 is T(R1, t1, Cf), the pose of the fisheye camera 42 is T(R2, t2, Cf), the pose of the fisheye camera 43 is T(R3, t3, Cf), and the pose of the fisheye camera 44 is T(R4, t4, Cf).
205. And determining the reprojection error of each three-dimensional point P in the three-dimensional point cloud.
Illustratively, the reprojection error of each three-dimensional point P in the three-dimensional point cloud is:

d(P, Cf) + d(P, T(R1, t1, Cf)) + d(P, T(R2, t2, Cf)) + d(P, T(R3, t3, Cf)) + d(P, T(R4, t4, Cf))

where T(·) denotes the transfer function.
206. And determining a reprojection error function of the three-dimensional point cloud according to the reprojection error of each three-dimensional point P in the three-dimensional point cloud.
Illustratively, the reprojection error function of the three-dimensional point cloud is expressed as:

Erep = Σ_P [ d(P, Cf) + Σ_{i=1}^{4} d(P, T(Ri, ti, Cf)) ]

where Erep represents the sum of the reprojection errors of all three-dimensional points in the three-dimensional point cloud.
207. And adjusting the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud according to the reprojection error function to obtain a new three-dimensional point cloud, and adjusting the pose of the first camera to obtain a new pose.
For example, please refer to the description of step 103 above.
208. And performing three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
For example, please refer to the description of step 104 above.
In the above, a specific implementation of the three-dimensional reconstruction method mentioned in the embodiments of the present application is introduced, and the following is an embodiment of the apparatus of the present application, which can be used to implement the embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 5 is a schematic structural diagram of a three-dimensional reconstruction apparatus according to an embodiment of the present application. The apparatus may be integrated in or implemented by a server. As shown in fig. 5, in the present embodiment, the three-dimensional reconstruction apparatus 100 may include:
an obtaining module 11, configured to obtain external parameters between the first camera and each of the at least one second camera;
a determining module 12, configured to determine a reprojection error function of the three-dimensional point cloud according to the external reference between the first camera and each of the second cameras and the pose of the first camera;
the adjusting module 13 is configured to iteratively adjust the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose;
and the reconstruction module 14 is used for performing three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
In a possible design, the determining module 12 is specifically configured to determine the pose of each second camera according to the external reference between the first camera and each second camera and the pose of the first camera, and determine the reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the pose of each second camera, and the three-dimensional point cloud.
In a possible design, the determining module 12, when determining the reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the poses of the second cameras, and the three-dimensional point cloud, is configured to determine, for each three-dimensional point in the three-dimensional point cloud, a reprojection error of the three-dimensional point in the pose of the first camera and a reprojection error of the three-dimensional point in the pose of the second cameras, and determine a reprojection error of the three-dimensional point according to the reprojection error of the three-dimensional point in the pose of the first camera and the reprojection error of the three-dimensional point in the pose of the second cameras; and determining a reprojection error function of the three-dimensional point cloud according to the reprojection error of each three-dimensional point in the three-dimensional point cloud.
In a possible design, the obtaining module 11 is further configured to, before determining the reprojection error function of the three-dimensional point cloud according to the external reference between the first camera and each of the second cameras and the pose of the first camera, obtain a key frame set, where the key frame set includes M key frames obtained by the first camera and M key frames obtained by each of the second cameras in the at least one second camera, the M key frames of the first camera and the key frames in the M key frames of the second camera are in a one-to-one correspondence, and the first camera and the second camera are cameras of different models; performing feature point matching on each pair of key frames in the key frame set to obtain a plurality of pairs of matching points; and determining a three-dimensional point corresponding to each matching point in the plurality of pairs of matching points to obtain the three-dimensional point cloud.
In a feasible design, when acquiring the key frame set, the acquiring module 11 is specifically configured to acquire a first video captured by the first camera and a second video captured by the at least one second camera, and extract key frames from the first video and the second video respectively every preset time interval to obtain the key frame set.
In a feasible design, the obtaining module 11 is configured to obtain images of the surroundings of the capturing vehicle captured by the first camera and the second cameras to obtain the keyframe set when the capturing vehicle moves a preset distance, where the first camera and the second camera are disposed on the capturing vehicle.
In a feasible design, the adjusting module 13 is specifically configured to iteratively adjust the pose of the camera corresponding to each key frame in the key frame set according to the reprojection error function, so as to minimize the value of the reprojection error function, determine a new three-dimensional point according to the adjusted key frame, and generate the new three-dimensional point cloud according to the new three-dimensional point.
In one possible design, the first camera is a wide angle camera and the second camera is a fisheye camera.
Fig. 6 is another three-dimensional reconstruction apparatus provided in the embodiment of the present application. Referring to fig. 6, on the basis of fig. 5, the three-dimensional reconstruction apparatus 100 provided in this embodiment further includes: a control module 15, configured to control the driving route of the autonomous vehicle according to the map obtained by the three-dimensional reconstruction after the reconstruction module 14 performs the three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device for implementing a three-dimensional reconstruction method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 21, a memory 22, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 21 is taken as an example.
Memory 22 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the three-dimensional reconstruction method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the three-dimensional reconstruction method provided by the present application.
The memory 22, as a non-transitory computer readable storage medium, may be used for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the three-dimensional reconstruction method in the embodiment of the present application (for example, the acquiring module 11, the determining module 12, the adjusting module 13, the reconstructing module 14, and the control module 15 shown in fig. 5 and fig. 6). The processor 21 executes various functional applications of the server and data processing by running the non-transitory software programs, instructions, and modules stored in the memory 22, that is, implements the three-dimensional reconstruction method in the above-described method embodiment.
The memory 22 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created when the three-dimensional reconstruction method is performed according to the electronic device, and the like. Further, the memory 22 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 22 optionally includes a memory remotely located from the processor 21, and these remote memories may be connected via a network to an electronic device for performing the three-dimensional reconstruction method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for performing the three-dimensional reconstruction method may further include: an input device 23 and an output device 24. The processor 21, the memory 22, the input device 23 and the output device 24 may be connected by a bus or other means, and fig. 7 illustrates the connection by a bus as an example.
The input device 23 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the three-dimensional reconstruction electronic apparatus, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input device. The output devices 24 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, where the one or more computer programs are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the three-dimensional reconstruction method provided by the embodiment of the application, the electronic equipment introduces the external constraint between the cameras in the cost optimization function, so that the new three-dimensional point cloud has the characteristic of the scale of the real world, and the precision of the three-dimensional point cloud and the camera posture is improved.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; the present application is not limited in this regard, as long as the desired results of the technical solutions disclosed herein can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (20)

1. A method of three-dimensional reconstruction, comprising:
acquiring extrinsic parameters between a first camera and each second camera of at least one second camera;
determining a reprojection error function of a three-dimensional point cloud according to the extrinsic parameters between the first camera and each second camera and a pose of the first camera;
iteratively adjusting the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose;
and performing three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
2. The method of claim 1, wherein the determining the reprojection error function of the three-dimensional point cloud according to the extrinsic parameters between the first camera and each second camera and the pose of the first camera comprises:
determining a pose of each second camera according to the extrinsic parameters between the first camera and each second camera and the pose of the first camera;
and determining the reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the pose of each second camera, and the three-dimensional point cloud.
3. The method of claim 2, wherein the determining the reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the pose of each second camera, and the three-dimensional point cloud comprises:
for each three-dimensional point in the three-dimensional point cloud, determining a reprojection error of the three-dimensional point under the pose of the first camera and a reprojection error of the three-dimensional point under the pose of each second camera, and determining the reprojection error of the three-dimensional point according to the reprojection error of the three-dimensional point under the pose of the first camera and the reprojection error of the three-dimensional point under the pose of each second camera, so as to obtain the reprojection error of each three-dimensional point in the three-dimensional point cloud;
and determining the reprojection error function of the three-dimensional point cloud according to the reprojection error of each three-dimensional point in the three-dimensional point cloud.
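By way of illustration, the per-point error of claims 2 and 3 can be evaluated as in the Python sketch below; the pinhole projection, the (R, t) pose parameterization, and the helper names project and point_reprojection_error are illustrative assumptions rather than the patent's implementation.

    import numpy as np

    def project(K, R, t, X):
        # Pinhole projection of 3-D point X under pose (R, t) with intrinsics K.
        x = K @ (R @ X + t)
        return x[:2] / x[2]

    def point_reprojection_error(X, obs, K_list, pose1, extrinsics):
        # Squared reprojection error of one 3-D point over all cameras.
        # pose1 = (R1, t1) is the first camera's pose; each second camera's
        # pose is derived from it through its fixed extrinsic (claim 2).
        R1, t1 = pose1
        err = np.sum((project(K_list[0], R1, t1, X) - obs[0]) ** 2)
        for j, (R1j, t1j) in enumerate(extrinsics, start=1):
            Rj, tj = R1j @ R1, R1j @ t1 + t1j  # derived pose, never a free variable
            err += np.sum((project(K_list[j], Rj, tj, X) - obs[j]) ** 2)
        return err

Summing point_reprojection_error over all points yields the reprojection error function of the three-dimensional point cloud as a whole.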
4. The method of any one of claims 1-3, wherein before the determining the reprojection error function of the three-dimensional point cloud according to the extrinsic parameters between the first camera and each second camera and the pose of the first camera, the method further comprises:
acquiring a key frame set, where the key frame set includes M key frames acquired by the first camera and M key frames acquired by each second camera of the at least one second camera, the M key frames of the first camera and the M key frames of each second camera are in one-to-one correspondence, and the first camera and the second camera are cameras of different models;
performing feature point matching on each pair of key frames in the key frame set to obtain a plurality of pairs of matching points;
and determining a three-dimensional point corresponding to each pair of matching points in the plurality of pairs of matching points to obtain the three-dimensional point cloud.
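As an illustrative sketch of claim 4's matching and triangulation steps, the snippet below uses OpenCV with ORB features; the patent does not prescribe a particular feature detector, so ORB, the brute-force matcher, and the projection-matrix inputs are assumptions.

    import cv2
    import numpy as np

    def triangulate_pair(img1, img2, P1, P2):
        # Match feature points between one key-frame pair, then triangulate.
        # P1 and P2 are the 3x4 projection matrices K @ [R | t] of the two cameras.
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T  # 2 x N
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T
        X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)             # 4 x N homogeneous
        return (X_h[:3] / X_h[3]).T                                 # N x 3 point cloud

Running triangulate_pair over every corresponding key-frame pair and concatenating the results gives the initial three-dimensional point cloud.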
5. The method of claim 4, wherein the acquiring the key frame set comprises:
acquiring a first video shot by the first camera and a second video shot by each second camera of the at least one second camera;
and extracting key frames from the first video and each second video at intervals of a preset time length to obtain the key frame set.
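A minimal sketch of the time-interval key-frame extraction of claim 5, assuming OpenCV video capture; the one-second default interval is an illustrative value, not one specified by the patent.

    import cv2

    def extract_keyframes(video_path, interval_s=1.0):
        # Keep one frame every interval_s seconds of video.
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0        # fall back if FPS metadata is missing
        step = max(1, int(round(fps * interval_s)))
        keyframes, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                keyframes.append(frame)
            idx += 1
        cap.release()
        return keyframes

Applying the same extraction to the first video and to each second video, with the videos time-aligned, yields the one-to-one key-frame correspondence required by claim 4.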
6. The method of claim 4, wherein the acquiring the key frame set comprises:
when a collection vehicle moves a preset distance, acquiring images of the surrounding environment of the collection vehicle shot by the first camera and each second camera to obtain the key frame set, wherein the first camera and the second camera are arranged on the collection vehicle.
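For claim 6, a distance-triggered variant might look like the sketch below; the grab() camera interface and the odometry iterator are hypothetical placeholders, since the patent does not define a capture API.

    def capture_keyframes(cameras, odometry, preset_distance=5.0):
        # Fire all cameras each time the vehicle advances preset_distance metres.
        # cameras: objects with a hypothetical grab() method returning an image.
        # odometry: iterator yielding the cumulative distance travelled (metres).
        keyframes, last = [], 0.0
        for distance in odometry:
            if distance - last >= preset_distance:
                keyframes.append([cam.grab() for cam in cameras])  # one synchronized set
                last = distance
        return keyframes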
7. The method of claim 4, wherein the iteratively adjusting the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose comprises:
iteratively adjusting the pose of the camera corresponding to each key frame in the key frame set according to the reprojection error function so as to minimize the value of the reprojection error function;
determining a new three-dimensional point according to the adjusted poses of the key frames;
and generating the new three-dimensional point cloud according to the new three-dimensional point.
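One way to realize the iterative adjustment of claim 7 is a nonlinear least-squares solve; the sketch below uses SciPy and is simplified to a single key-frame pair with one second camera, with the rotation-vector parameterization and the name ba_residuals as assumptions rather than the patent's implementation.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def ba_residuals(params, n_pts, obs1, obs2, K1, K2, R12, t12):
        # Free variables: the first camera's pose (rotation vector + translation)
        # and the 3-D points. The second camera's pose is re-derived from the
        # fixed extrinsic (R12, t12) at every iteration, which locks the scale.
        R1 = Rotation.from_rotvec(params[:3]).as_matrix()
        t1 = params[3:6]
        X = params[6:].reshape(n_pts, 3)
        R2, t2 = R12 @ R1, R12 @ t1 + t12

        def proj(K, R, t):
            x = K @ (R @ X.T + t[:, None])   # 3 x n_pts camera coordinates
            return (x[:2] / x[2]).T          # n_pts x 2 pixel coordinates

        return np.concatenate([(proj(K1, R1, t1) - obs1).ravel(),
                               (proj(K2, R2, t2) - obs2).ravel()])

    # x0 packs the initial rotation vector, translation, and flattened points:
    # result = least_squares(ba_residuals, x0,
    #                        args=(n_pts, obs1, obs2, K1, K2, R12, t12))
    # new_pose, new_points = result.x[:6], result.x[6:].reshape(n_pts, 3)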
8. The method of any one of claims 1-3, wherein the first camera is a wide-angle camera and the second camera is a fisheye camera.
9. The method of any one of claims 1-3, wherein after the performing the three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose, the method further comprises:
controlling a driving route of an autonomous vehicle according to a map obtained by the three-dimensional reconstruction.
10. A three-dimensional reconstruction apparatus comprising:
an acquisition module, configured to acquire extrinsic parameters between a first camera and each second camera of at least one second camera;
a determining module, configured to determine a reprojection error function of a three-dimensional point cloud according to the extrinsic parameters between the first camera and each second camera and a pose of the first camera;
an adjusting module, configured to iteratively adjust the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose;
and a reconstruction module, configured to perform three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
11. The apparatus of claim 10, wherein the determining module is specifically configured to determine a pose of each second camera according to the extrinsic parameters between the first camera and each second camera and the pose of the first camera, and determine the reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the pose of each second camera, and the three-dimensional point cloud.
12. The apparatus of claim 11, wherein the determining module, when determining the reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the pose of each second camera, and the three-dimensional point cloud, is specifically configured to: for each three-dimensional point in the three-dimensional point cloud, determine a reprojection error of the three-dimensional point under the pose of the first camera and a reprojection error of the three-dimensional point under the pose of each second camera, and determine the reprojection error of the three-dimensional point according to the reprojection error of the three-dimensional point under the pose of the first camera and the reprojection error of the three-dimensional point under the pose of each second camera; and determine the reprojection error function of the three-dimensional point cloud according to the reprojection error of each three-dimensional point in the three-dimensional point cloud.
13. The apparatus according to any one of claims 10-12, wherein the acquisition module, before the determining module determines the reprojection error function of the three-dimensional point cloud according to the extrinsic parameters between the first camera and each second camera and the pose of the first camera, is further configured to: acquire a key frame set, the key frame set including M key frames acquired by the first camera and M key frames acquired by each second camera of the at least one second camera, the M key frames of the first camera and the M key frames of each second camera being in one-to-one correspondence, and the first camera and the second camera being cameras of different models; perform feature point matching on each pair of key frames in the key frame set to obtain a plurality of pairs of matching points; and determine a three-dimensional point corresponding to each pair of matching points in the plurality of pairs of matching points to obtain the three-dimensional point cloud.
14. The apparatus according to claim 13, wherein the acquisition module, when acquiring the key frame set, is specifically configured to acquire a first video shot by the first camera and a second video shot by each second camera of the at least one second camera, and extract key frames from the first video and each second video at intervals of a preset time length to obtain the key frame set.
15. The apparatus according to claim 13, wherein the acquisition module, when acquiring the key frame set, is specifically configured to acquire, when a collection vehicle moves a preset distance, images of the surrounding environment of the collection vehicle shot by the first camera and each second camera to obtain the key frame set, the first camera and the second camera being arranged on the collection vehicle.
16. The apparatus of claim 13, wherein the adjusting module is specifically configured to iteratively adjust the pose of the camera corresponding to each key frame in the key frame set according to the reprojection error function so as to minimize the value of the reprojection error function, determine a new three-dimensional point according to the adjusted poses of the key frames, and generate the new three-dimensional point cloud according to the new three-dimensional point.
17. The apparatus of any one of claims 10-12, wherein the first camera is a wide-angle camera and the second camera is a fisheye camera.
18. The apparatus of any one of claims 10-12, wherein the apparatus further comprises: a control module, configured to control a driving route of an autonomous vehicle according to a map obtained by the three-dimensional reconstruction after the reconstruction module performs the three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-9.
CN202010601885.6A 2020-06-29 2020-06-29 Three-dimensional reconstruction method, device, equipment and readable storage medium Active CN111784842B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010601885.6A CN111784842B (en) 2020-06-29 2020-06-29 Three-dimensional reconstruction method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010601885.6A CN111784842B (en) 2020-06-29 2020-06-29 Three-dimensional reconstruction method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111784842A true CN111784842A (en) 2020-10-16
CN111784842B CN111784842B (en) 2024-04-12

Family

ID=72761370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010601885.6A Active CN111784842B (en) 2020-06-29 2020-06-29 Three-dimensional reconstruction method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111784842B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8401276B1 (en) * 2008-05-20 2013-03-19 University Of Southern California 3-D reconstruction and registration
CN102982548A (en) * 2012-12-11 2013-03-20 清华大学 Multi-view stereoscopic video acquisition system and camera parameter calibrating method thereof
CN105844624A (en) * 2016-03-18 2016-08-10 上海欧菲智能车联科技有限公司 Dynamic calibration system, and combined optimization method and combined optimization device in dynamic calibration system
CN107784672A (en) * 2016-08-26 2018-03-09 百度在线网络技术(北京)有限公司 For the method and apparatus for the external parameter for obtaining in-vehicle camera
WO2018040017A1 (en) * 2016-08-31 2018-03-08 深圳大学 Method and system for correcting distortion of projector lens based on adaptive fringes
CN106920276A (en) * 2017-02-23 2017-07-04 华中科技大学 A kind of three-dimensional rebuilding method and system
CN108564617A (en) * 2018-03-22 2018-09-21 深圳岚锋创视网络科技有限公司 Three-dimensional rebuilding method, device, VR cameras and the panorama camera of more mesh cameras
CN111127524A (en) * 2018-10-31 2020-05-08 华为技术有限公司 Method, system and device for tracking trajectory and reconstructing three-dimensional image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shen Kexian, "Sparse Point Cloud Reconstruction Based on Image Sequences", China Master's Theses Full-text Database, no. 4 *
Xie Chaoyi; Tian Jinwen; Zhang Jun, "Three-Dimensional Surface Reconstruction of a Slowly Rotating, Fixed-Axis Non-Cooperative Space Target Based on a Monocular Hovering Camera", Ship Electronic Engineering, no. 03 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112154485A (en) * 2019-08-30 2020-12-29 深圳市大疆创新科技有限公司 Optimization method and equipment of three-dimensional reconstruction model and movable platform
CN115063485A (en) * 2022-08-19 2022-09-16 深圳市其域创新科技有限公司 Three-dimensional reconstruction method, device and computer-readable storage medium
WO2024037562A1 (en) * 2022-08-19 2024-02-22 深圳市其域创新科技有限公司 Three-dimensional reconstruction method and apparatus, and computer-readable storage medium

Also Published As

Publication number Publication date
CN111784842B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
CN108335353B (en) Three-dimensional reconstruction method, device and system of dynamic scene, server and medium
JP6768156B2 (en) Virtually enhanced visual simultaneous positioning and mapping systems and methods
CN108805917B (en) Method, medium, apparatus and computing device for spatial localization
US20180082435A1 (en) Modelling a three-dimensional space
CN112652016B (en) Point cloud prediction model generation method, pose estimation method and pose estimation device
CN111462029B (en) Visual point cloud and high-precision map fusion method and device and electronic equipment
CN112132829A (en) Vehicle information detection method and device, electronic equipment and storage medium
CN111935393A (en) Shooting method, shooting device, electronic equipment and storage medium
CN111722245B (en) Positioning method, positioning device and electronic equipment
CN111709973B (en) Target tracking method, device, equipment and storage medium
CN111539973B (en) Method and device for detecting pose of vehicle
CN110929669B (en) Data labeling method and device
CN111753961A (en) Model training method and device, and prediction method and device
Yang et al. Reactive obstacle avoidance of monocular quadrotors with online adapted depth prediction network
Mahlknecht et al. Exploring event camera-based odometry for planetary robots
CN112561978A (en) Training method of depth estimation network, depth estimation method of image and equipment
CN111612852A (en) Method and apparatus for verifying camera parameters
CN111784842B (en) Three-dimensional reconstruction method, device, equipment and readable storage medium
CN111784834A (en) Point cloud map generation method and device and electronic equipment
CN111652113A (en) Obstacle detection method, apparatus, device, and storage medium
CN112101209A (en) Method and apparatus for determining a world coordinate point cloud for roadside computing devices
CN112241718A (en) Vehicle information detection method, detection model training method and device
WO2019119426A1 (en) Stereoscopic imaging method and apparatus based on unmanned aerial vehicle
CN110517298B (en) Track matching method and device
Zhang et al. Virtual Reality Aided High-Quality 3D Reconstruction by Remote Drones

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant