CN111784842B - Three-dimensional reconstruction method, device, equipment and readable storage medium - Google Patents

Three-dimensional reconstruction method, device, equipment and readable storage medium

Info

Publication number
CN111784842B
Authority
CN
China
Prior art keywords
camera
dimensional point
pose
point cloud
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010601885.6A
Other languages
Chinese (zh)
Other versions
CN111784842A (en)
Inventor
姚萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010601885.6A priority Critical patent/CN111784842B/en
Publication of CN111784842A publication Critical patent/CN111784842A/en
Application granted granted Critical
Publication of CN111784842B publication Critical patent/CN111784842B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Abstract

The application discloses a three-dimensional reconstruction method, device, equipment and readable storage medium, relating to the field of artificial intelligence and the technical field of automatic driving. The specific implementation scheme is as follows: the electronic equipment obtains the external parameters between the first camera and each second camera, and determines a reprojection error function of the three-dimensional point cloud according to those external parameters and the pose of the first camera. Then, the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera are adjusted according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose. Finally, the electronic equipment performs three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose. The method improves the precision of both the three-dimensional point cloud and the camera pose.

Description

Three-dimensional reconstruction method, device, equipment and readable storage medium
Technical Field
The embodiment of the application relates to the technical field of computer vision, in particular to a three-dimensional reconstruction method, a device, equipment and a readable storage medium.
Background
Three-dimensional reconstruction is one of research hotspots in the technical field of computer vision, and the three-dimensional reconstruction technology aims at reconstructing a three-dimensional virtual model of a real object in a computer based on a two-dimensional image and displaying the three-dimensional virtual model on a computer screen.
In three-dimensional reconstruction based on structure from motion (SFM), a series of images is acquired by a camera, and a three-dimensional point cloud, camera poses, and the like are generated from the images.
In the SFM-based three-dimensional reconstruction method, the scale of the generated three-dimensional point cloud differs from the real-world scale, which limits the use of the three-dimensional point cloud.
Disclosure of Invention
The application provides a three-dimensional reconstruction method, a three-dimensional reconstruction device, three-dimensional reconstruction equipment and a readable storage medium.
In a first aspect, an embodiment of the present application provides a three-dimensional reconstruction method, including:
Acquiring external parameters between a first camera and each second camera of at least one second camera.
And determining a reprojection error function of the three-dimensional point cloud according to the external parameters between the first camera and each second camera and the pose of the first camera.
And iteratively adjusting the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose.
And carrying out three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
In a second aspect, embodiments of the present application provide a three-dimensional reconstruction apparatus, including:
And the acquisition module is used for acquiring external parameters between the first camera and each second camera of the at least one second camera.
And the determining module is used for determining a reprojection error function of the three-dimensional point cloud according to the external parameters between the first camera and each second camera and the pose of the first camera.
And the adjusting module is used for iteratively adjusting the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function so as to obtain a new three-dimensional point cloud and a new pose.
And the reconstruction module is used for carrying out three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the first aspect or any possible implementation of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product, comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program, the at least one processor executing the computer program to cause the electronic device to perform the method of the first aspect.
In a fifth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions for causing the electronic device to perform the method of the first aspect or the various possible implementations of the first aspect.
In a sixth aspect, an embodiment of the present application provides a three-dimensional reconstruction method, including: determining a reprojection error function of the three-dimensional point cloud according to the external parameters between the first camera and each second camera and the pose of the first camera, adjusting the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud according to the reprojection error function to obtain a new three-dimensional point cloud, and adjusting the pose of the first camera to obtain a new pose.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
fig. 1 is a schematic diagram of a network architecture of a three-dimensional reconstruction method according to an embodiment of the present application;
FIG. 2 is a flow chart of a three-dimensional reconstruction method provided by an embodiment of the present application;
fig. 3 is a schematic diagram of a position of a second camera in the three-dimensional reconstruction method according to the embodiment of the present application;
FIG. 4 is a flow chart of a three-dimensional reconstruction method provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a three-dimensional reconstruction device according to an embodiment of the present application;
FIG. 6 is another three-dimensional reconstruction apparatus provided in an embodiment of the present application;
fig. 7 is a block diagram of an electronic device for implementing a three-dimensional reconstruction method of an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Currently, with the rapid development of artificial intelligence (AI), unmanned driving technologies are becoming increasingly mature. Unmanned driving is also known as autonomous driving. During automatic driving, the automatic driving vehicle uses a map to navigate, position itself, and so on. In the process of generating the map, a plurality of cameras of the same model are arranged on the acquisition vehicle, a series of images is acquired by the cameras, and a three-dimensional point cloud, the poses of the cameras and the like are then generated from the images based on the SFM three-dimensional reconstruction technique.
The scale of the three-dimensional point cloud generated by SFM-based three-dimensional reconstruction differs from the real-world scale, which limits the use of the three-dimensional point cloud in vehicle positioning and path planning. Moreover, the visual ranges of cameras of the same model overlap heavily, which constrains the process of generating the three-dimensional point cloud in the SFM-based three-dimensional reconstruction mode.
In view of this, the embodiments of the present application provide a three-dimensional reconstruction method, apparatus, device, and readable storage medium, which uses the relative external parameters between heterogeneous cameras as optimization constraint terms to perform three-dimensional reconstruction, so that the finally generated three-dimensional point cloud has a real-world scale, thereby expanding the application range of the three-dimensional point cloud.
Fig. 1 is a schematic diagram of a network architecture of a three-dimensional reconstruction method according to an embodiment of the present application. Referring to fig. 1, the network architecture includes a server 1 and a vehicle-mounted terminal 2 disposed on a collection vehicle, with a network connection established between the vehicle-mounted terminal 2 and the server 1. The collection vehicle is further provided with a first camera 3 and at least one second camera 4; the first camera 3 and the at least one second camera 4 are arranged to acquire key frames to obtain a keyframe set.
When the vehicle-mounted terminal 2 executes the three-dimensional reconstruction method provided by the embodiment of the application, the vehicle-mounted terminal 2 generates an original rough three-dimensional point cloud, the pose of the first camera and the like based on the keyframe set. The vehicle-mounted terminal 2 acquires external parameters between the first camera and each second camera from the server 1 and the like, constructs a re-projection error function based on the external parameters, the original three-dimensional point cloud, the pose of the first camera and the like, adjusts the coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera based on the re-projection error function, generates a new three-dimensional point cloud and a new pose, and performs three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
When the server 1 executes the three-dimensional reconstruction method provided in the embodiment of the present application, the vehicle-mounted terminal 2 acquires the key frames shot by the first camera and the second cameras and sends them to the server 1, or the first camera and the second cameras directly send the shot key frames to the server 1. The server 1 then performs three-dimensional reconstruction based on the keyframe set; for details, reference may be made to the above description of the vehicle-mounted terminal 2, which is not repeated here.
The following describes the three-dimensional reconstruction method according to the embodiment of the present application in detail based on the network architecture shown in fig. 1. For example, see fig. 2.
Fig. 2 is a flowchart of a three-dimensional reconstruction method provided in an embodiment of the present application. The execution subject of the present embodiment is an electronic device, which is, for example, a server or an in-vehicle terminal in fig. 1. The method comprises the following steps:
101. Obtaining external parameters between the first camera and each second camera of the at least one second camera.
The first camera is, for example, a wide-angle camera, and the second camera is, for example, a fisheye camera. The first camera typically faces directly in front of the collection vehicle and is therefore also referred to as a forward wide-angle camera. The first camera and the second cameras are heterogeneous cameras, while the different second cameras are cameras of the same model. The external parameters between the first camera and a second camera, also referred to as the external parameters between heterogeneous cameras, include a rotation matrix, a translation vector, and the like. The translation vector is also called a translation matrix or the like.
The electronic equipment acquires the external parameters between the first camera and each second camera by reading a local configuration file; alternatively, the electronic device obtains the external parameters between the first camera and each second camera from a remote database or the like, which is not limited in the embodiments of the present application.
It should be noted that, the external parameters in the embodiments of the present application refer to the relative external parameters between the first camera and the second camera, and not the absolute external parameters. This is because the absolute external parameters between the two heterogeneous cameras cannot be obtained during the movement of the acquisition vehicle.
102. And determining a reprojection error function of the three-dimensional point cloud according to the external parameters between the first camera and each second camera and the pose of the first camera.
Illustratively, the pose of the first camera refers to the pose of the projection plane of the first camera, which is also referred to as the imaging plane of the first camera. The three-dimensional point cloud is a rough, scale-free three-dimensional point cloud obtained in advance by the electronic equipment from the key frames in the keyframe set. A scale-free three-dimensional point cloud means that the distance between any two three-dimensional points in the point cloud has no unit; for example, the distance between two three-dimensional points is 2, but it is not known whether that means 2 meters, 2 centimeters, 2 decimeters, or the like.
After the three-dimensional point cloud is determined based on the key frames, the pose of the first camera is determined. The electronic device determines a reprojection error function of the original three-dimensional point cloud based on the pose of the first camera and the external parameters between the first camera and each second camera. This reprojection error function is also called the cost optimization function. The reprojection error is defined as follows: the point P1 in key frame 1 and the point P2 in key frame 2 are a pair of matching points, a three-dimensional point P is determined from the two points, the three-dimensional point P is projected into key frame 1 to obtain P1', and it is projected into key frame 2 to obtain P2'. The error between P1 and P1' plus the error between P2 and P2' is the reprojection error of the three-dimensional point P. Because keyframe matching matches any two key frames in the keyframe set, if P1 and P2 are a pair of matching points and P2 and P3 are a pair of matching points (assuming P3 is a point in a key frame shot by a fisheye camera), then P1, P2 and P3 all match one another. In that case, a single three-dimensional point is determined from the three two-dimensional points, and the reprojection error is then calculated.
In addition, the reprojection error of one camera (the first camera or a second camera) can be understood as the error between a two-dimensional point obtained by projecting a three-dimensional point in the real three-dimensional world onto the imaging plane of the camera and the real two-dimensional point of that three-dimensional point on the imaging plane of the camera. For example, a key frame captured by the first camera includes a guideboard; the three-dimensional coordinates of a two-dimensional point (a real two-dimensional point) on the guideboard are calculated based on the key frame, those three-dimensional coordinates are projected onto the imaging plane of the first camera to obtain a projected two-dimensional point, and the reprojection error is the difference between the coordinates of the projected two-dimensional point and the real two-dimensional point on the two-dimensional image.
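The computation just described can be made concrete with a short sketch. This is a minimal illustration, assuming a pinhole camera model with intrinsic matrix K and a world-to-camera pose (R, t); the function names are illustrative and are not taken from the patent.

```python
import numpy as np

def project(K, R, t, X):
    """Project the three-dimensional world point X onto the camera's imaging plane."""
    x_cam = R @ X + t          # world frame -> camera frame
    uv = K @ x_cam             # apply the pinhole intrinsics
    return uv[:2] / uv[2]      # perspective division -> pixel coordinates

def reprojection_error(K, R, t, X, observed_uv):
    """Pixel distance between the projection of X and its observed match."""
    return np.linalg.norm(project(K, R, t, X) - observed_uv)

# For a point P triangulated from matching points p1 (key frame 1) and p2
# (key frame 2), the reprojection error of P is the sum of per-frame errors:
#   d = reprojection_error(K1, R1, t1, P, p1) + reprojection_error(K2, R2, t2, P, p2)
```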
103. And iteratively adjusting the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose.
After obtaining the reprojection error function of the three-dimensional point cloud, the electronic device adjusts the three-dimensional coordinates of each three-dimensional point P in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function, so as to obtain a new three-dimensional point cloud and a new pose of the first camera. In the adjustment process, the electronic equipment adjusts the three-dimensional coordinates of each three-dimensional point and the pose of the first camera repeatedly based on the reprojection error function until the reprojection error function converges, that is, until the value of the reprojection error function is minimal.
104. And carrying out three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
For example, the electronic device may map the new three-dimensional point cloud to a two-dimensional space to obtain a map.
According to the three-dimensional reconstruction method provided by the embodiment of the application, the electronic equipment acquires the external parameters between the first camera and each second camera, and determines the reprojection error function of the three-dimensional point cloud according to those external parameters and the pose of the first camera. Then, according to the reprojection error function, the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud are adjusted to obtain a new three-dimensional point cloud, and the pose of the first camera is adjusted at the same time to obtain a new pose. Finally, the electronic equipment performs three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose. In this process, the electronic equipment introduces the external-parameter constraints between the cameras into the cost optimization function, so that the new three-dimensional point cloud has real-world scale, and the precision of both the three-dimensional point cloud and the camera pose is improved. A highly accurate map can therefore be generated from the three-dimensionally reconstructed point cloud.
In the above embodiment, when the electronic device determines the re-projection error function of the three-dimensional point cloud according to the external parameters between the first camera and each second camera and the pose of the first camera, the pose of each second camera is determined according to the external parameters between the first camera and each second camera and the pose of the first camera. And then, the electronic equipment determines a reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the pose of each second camera and the three-dimensional point cloud.
Illustratively, the external parameters between the first camera and a second camera include a rotation matrix R and a translation vector t, denoted (R, t). The pose of the first camera is C_f, and the pose of the second camera is T(R, t, C_f), where T(·) denotes the transfer function.
Finally, the electronic equipment determines the reprojection error of each three-dimensional point in the original three-dimensional point cloud under the pose C_f of the first camera and under the pose T(R, t, C_f) of each second camera, and determines the reprojection error function of the three-dimensional point cloud from those reprojection errors.
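One possible reading of the transfer function T(R, t, C_f) is ordinary pose composition, sketched below. The convention that the relative extrinsics map first-camera coordinates to second-camera coordinates (x_second = R @ x_first + t) is an assumption made here; the patent does not fix a convention.

```python
import numpy as np  # poses and extrinsics are numpy arrays

def second_camera_pose(R, t, R_f, t_f):
    """Compose the first camera's world-to-camera pose (R_f, t_f) with the
    relative extrinsics (R, t) to obtain the second camera's pose T(R, t, C_f)."""
    R_s = R @ R_f          # rotate world axes into the second camera frame
    t_s = R @ t_f + t      # carry the translation through the same chain
    return R_s, t_s
```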
By adopting the scheme, the electronic equipment determines the pose of each second camera according to the external parameters and the pose of the first camera, and further determines the reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the pose of each second camera and the three-dimensional point cloud, so that the reprojection error function of the three-dimensional point cloud is determined accurately and, in turn, the three-dimensional reconstruction is accurate.
In the above embodiment, for each three-dimensional point in the three-dimensional point cloud, the electronic device determines the reprojection error of the three-dimensional point under the pose of the first camera and the reprojection error of the three-dimensional point under the pose of each second camera, and determines the reprojection error of the three-dimensional point from these; in this way, the reprojection error of every three-dimensional point in the three-dimensional point cloud is obtained. The electronic equipment then determines the reprojection error of the three-dimensional point cloud according to the reprojection error of each three-dimensional point in the three-dimensional point cloud.
Illustratively, for each three-dimensional point P in the three-dimensional point cloud, the reprojection error of the three-dimensional point P is the sum of the reprojection error d(P, C_f) of the point under the pose C_f of the first camera and the reprojection errors of the point under the pose T(R_i, t_i, C_f) of each second camera. Expressed as a formula:

d(P, C_f) + Σ_{i=1}^{N} d(P, T(R_i, t_i, C_f))

where N denotes the number of second cameras and i denotes the i-th camera of the N second cameras.
In the above embodiment, the reprojection error d(P, C_f) of the three-dimensional point P under the pose C_f of the first camera is obtained as follows: the point P1 in key frame 1 and the point P2 in key frame 2 are a pair of matching points, and the three-dimensional point P is determined from these two points. The three-dimensional point P is projected into key frame 1 to obtain P1' and into key frame 2 to obtain P2'; the error between P1 and P1' plus the error between P2 and P2' gives the reprojection error d(P, C_f) of the three-dimensional point P under the pose C_f of the first camera. The reprojection error of the three-dimensional point P under the pose T(R_i, t_i, C_f) of each second camera can be obtained in the same way.
After the electronic equipment obtains the reprojection errors of all three-dimensional points in the three-dimensional point cloud, it determines the reprojection error function of the three-dimensional point cloud from the reprojection errors of all the points. The reprojection error function of the three-dimensional point cloud is as follows:

E_rep = Σ_P [ d(P, C_f) + Σ_{i=1}^{N} d(P, T(R_i, t_i, C_f)) ]

where the outer sum runs over all three-dimensional points P in the three-dimensional point cloud.
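Assuming, purely for brevity of illustration, that every three-dimensional point is observed by the first camera and by all N second cameras, the cost E_rep can be sketched with the illustrative helpers from the earlier snippets; `observations` is a hypothetical structure holding the matched pixels.

```python
def total_reprojection_error(points, K_f, pose_f, extrinsics, K_second, observations):
    """E_rep: for each point, the error under the first camera's pose C_f plus
    the errors under each derived second-camera pose T(R_i, t_i, C_f).
    observations[j][0] is point j's pixel in the first camera and
    observations[j][i + 1] its pixel in the i-th second camera."""
    R_f, t_f = pose_f
    total = 0.0
    for j, P in enumerate(points):
        total += reprojection_error(K_f, R_f, t_f, P, observations[j][0])
        for i, (R_i, t_i) in enumerate(extrinsics):
            R_s, t_s = second_camera_pose(R_i, t_i, R_f, t_f)
            total += reprojection_error(K_second[i], R_s, t_s, P, observations[j][i + 1])
    return total
```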
by adopting the scheme, the electronic equipment determines the reprojection error function of the three-dimensional point cloud according to the reprojection error of each point, and the purpose of accurately determining the reprojection error function of the three-dimensional point cloud is achieved.
In the above embodiment, when the electronic device iteratively adjusts the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose, the electronic device iteratively adjusts the pose of the camera corresponding to each key frame in the key frame set according to the reprojection error function, so that the value of the reprojection error function is minimum, determines a new three-dimensional point according to the adjusted key frame, and generates the new three-dimensional point cloud according to the new three-dimensional point.
Illustratively, the electronic device optimizes the three-dimensional point cloud and the pose of the first camera with bundle adjustment (BA), using the reprojection error function as the optimization cost function. In the optimization process, the electronic equipment calculates the reprojection error of each three-dimensional point according to the camera pose of each key frame, adjusts the camera poses of the key frames, and iterates multiple times to minimize the value of the reprojection error function. When the value of the reprojection error function is minimal, any two of the adjusted key frames are matched, new three-dimensional points are determined, and a new three-dimensional point cloud is generated from the new three-dimensional points.
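As a rough sketch of such an optimization loop, the snippet below stacks the first camera's pose (a Rodrigues rotation vector plus translation) and all point coordinates into one parameter vector and hands it to scipy.optimize.least_squares, reusing the illustrative project helper from above. For brevity it keeps only the first-camera terms; a production bundle adjuster would also fold in the fisheye terms through the fixed extrinsics and exploit the sparsity of the Jacobian.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, n_points, K_f, observed_uv):
    """observed_uv[j] is the pixel observation of point j in the first camera."""
    rvec, tvec = params[:3], params[3:6]
    R_f = Rotation.from_rotvec(rvec).as_matrix()
    points = params[6:].reshape(n_points, 3)
    return np.concatenate(
        [project(K_f, R_f, tvec, P) - observed_uv[j] for j, P in enumerate(points)]
    )

# x0 stacks the initial rough pose and point cloud; the optimizer iterates
# until the (squared) reprojection error function is minimized:
#   result = least_squares(residuals, x0, args=(n_points, K_f, observed_uv))
```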
By adopting the scheme, the purpose of optimizing the three-dimensional reconstruction by means of bundle adjustment is realized.
In the above embodiment, the electronic device calculates the initial, rough three-dimensional point cloud in advance. That is, before determining the reprojection error function of the three-dimensional point cloud according to the external parameters between the first camera and each second camera and the pose of the first camera, the electronic device further obtains a keyframe set, where the keyframe set includes M key frames obtained by the first camera and M key frames obtained by each second camera of the at least one second camera; it performs feature point matching on each pair of key frames in the keyframe set to obtain a plurality of pairs of matching points, and determines a three-dimensional point corresponding to each pair of matching points to obtain the three-dimensional point cloud. The first camera and the second camera are cameras of different models, and the M key frames of the first camera correspond one-to-one to the M key frames of each second camera.
The electronic device may control the first camera and each second camera to capture the key frame in real time, or the first camera may capture the first video, each second camera may capture the second video, and the electronic device may obtain the key frame based on the first video and each second video.
When the electronic equipment acquires the key frames shot by each camera in real time, the first camera and each second camera are arranged on the acquisition vehicle, and each time the acquisition vehicle moves for a preset distance, the first camera and each second camera shoot the surrounding environment of the acquisition vehicle to obtain the key frames. For example, if the collection vehicle moves 0.33 meters, the first camera and each second camera shoot the surrounding environment of the collection vehicle to obtain a group of key frames, and if the collection vehicle continues to move 0.33 meters, the first camera and each second camera shoot the surrounding environment of the collection vehicle to obtain another group of key frames. In this way, the acquisition vehicle continuously moves, and the first camera and each second camera can acquire multiple groups of key frames.
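One way such distance-triggered capture could look in code is sketched below; the odometry callback and the cameras' capture() interface are hypothetical placeholders, not APIs named in the patent.

```python
class KeyframeTrigger:
    """Fires one synchronized group of key frames per preset travel distance."""

    def __init__(self, cameras, interval_m=0.33):
        self.cameras = cameras
        self.interval_m = interval_m
        self.next_capture_at = interval_m

    def on_odometry(self, distance_m):
        """Call with the cumulative distance travelled; returns one key frame
        per camera once another interval has been covered, else None."""
        if distance_m >= self.next_capture_at:
            self.next_capture_at += self.interval_m
            return [cam.capture() for cam in self.cameras]
        return None
```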
By adopting the scheme, key frames are shot only when the collection vehicle has moved the preset distance, and no key frames are shot while the collection vehicle is stationary, so that the purpose of accurately acquiring key frames is realized.
When the electronic equipment acquires key frames based on a first video shot by the first camera and a second video shot by each second camera, the first camera and each second camera arranged on the collection vehicle start shooting the environment around the collection vehicle at the same time; after a period of time, the first camera has shot the first video and each second camera has shot a second video. The electronic device acquires the first video and each second video, and then extracts key frames from the first video and each second video at intervals of a preset duration, thereby obtaining the keyframe set. The first camera and the second cameras do not sense whether the collection vehicle is moving while shooting video, so when the collection vehicle stands still because of an obstacle or the like, the contents of two image frames within the preset time interval are likely to be identical. Compared with shooting key frames in real time, key frames acquired in this way are therefore less accurate.
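A minimal sketch of this video-based alternative with OpenCV, sampling one key frame per preset interval; the one-second period is an assumed value, and the function would be called once for the first video and once for each second video.

```python
import cv2

def extract_keyframes(video_path, period_s=1.0):
    """Take one frame from the video every period_s seconds of footage."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    step = max(1, int(round(fps * period_s)))  # frames between samples
    keyframes, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            keyframes.append(frame)
        idx += 1
    cap.release()
    return keyframes
```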
By adopting the scheme, the purpose of acquiring the key frame is realized.
After the electronic equipment acquires the key frames shot by each camera, it forms all of them into a keyframe set, extracts corner points from each key frame in the keyframe set, and describes the corner points with descriptors such as the scale-invariant feature transform (SIFT). Then, for any two key frames, the corner points in the two key frames are constrained by the fundamental matrix, and wrong corner points are removed. A corner point in one key frame and the corner point with the same descriptor in the other key frame are then taken as a pair of matching points, so that a plurality of pairs of matching points is obtained. In this embodiment of the application, the two points in a pair of matching points are located in two different key frames; point o1 in key frame 1 matching point o2 in key frame 2 means that o1 and o2 correspond to the same physical point.
After the electronic equipment obtains the plurality of pairs of matching points, it calculates one three-dimensional point for each pair of matching points, thereby obtaining a plurality of three-dimensional points that form the original three-dimensional point cloud without real-world scale.
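The matching-and-triangulation pipeline just described maps naturally onto standard OpenCV calls. The sketch below is one possible implementation, assuming P1 and P2 are the 3x4 projection matrices of the two key frames; it is an illustration, not the patent's exact procedure.

```python
import cv2
import numpy as np

def match_and_triangulate(img1, img2, P1, P2):
    # corner points and SIFT descriptors for both key frames
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # pair corner points whose descriptors agree
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # fundamental-matrix constraint removes wrong matching points
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    pts1, pts2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]

    # one three-dimensional point per surviving pair of matching points
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # homogeneous 4xN
    return (X_h[:3] / X_h[3]).T
```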
By adopting the scheme, the purpose of acquiring the original three-dimensional point cloud without real world scale is realized.
In the above embodiment, after the electronic device performs three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose, the electronic device controls the driving route of the automatic driving vehicle and the like according to the map obtained by the three-dimensional reconstruction.
For example, when the autonomous vehicle needs navigation, the starting position and the destination position are transmitted to the electronic device, and the electronic device plans the driving path of the autonomous vehicle based on the starting position, the destination position and the map. In addition, the electronic equipment can also use the map to perform offline route learning and the like, so that the learning success rate is greatly improved.
By adopting the scheme, the electronic equipment utilizes the map to perform offline route learning and online route planning, thereby greatly improving the learning success rate and ensuring more accurate driving route planning.
In the above embodiment, the first camera is, for example, a wide-angle camera, and the second camera is, for example, a fisheye camera, and the first camera is generally directed toward the front of the collection vehicle, and is therefore also referred to as a forward wide-angle camera. Each second camera is disposed on the collection vehicle in a different orientation, for example, there are four second cameras facing the front, rear, left and right of the collection vehicle, respectively. For example, please refer to fig. 3. Fig. 3 is a schematic diagram of a position of a second camera in the three-dimensional reconstruction method according to the embodiment of the present application.
Referring to fig. 3, the dot-filled rectangle indicates the forward wide-angle camera, and the diagonally filled rectangles indicate the fisheye cameras. There are four fisheye cameras: the fisheye camera 41 faces the same direction as the wide-angle camera 3, the fisheye camera 42 faces the left side of the collection vehicle, the fisheye camera 43 faces the right side of the collection vehicle, and the fisheye camera 44 faces the rear of the collection vehicle. The external parameters between cameras refer to the external parameters between heterogeneous cameras with partially overlapping fields of view, such as the external parameters between the wide-angle camera 3 and the fisheye camera 41, between the wide-angle camera 3 and the fisheye camera 42, between the wide-angle camera 3 and the fisheye camera 43, and the like.
The three-dimensional reconstruction method described above will be described in detail below using 4 second cameras as an example. For example, please refer to fig. 4.
Fig. 4 is a flowchart of a three-dimensional reconstruction method provided in an embodiment of the present application. The embodiment comprises the following steps:
201. and acquiring key frames shot by the wide-angle camera and each fisheye camera in the moving process of the acquisition vehicle so as to obtain a key frame set.
202. and generating, from the keyframe set, an original three-dimensional point cloud without real-world scale.
203. And obtaining external parameters between the wide-angle camera and each fisheye camera.
Illustratively, assume that the external parameters between the wide-angle camera 3 and the fisheye camera 41 are (R_1, t_1), the external parameters between the wide-angle camera 3 and the fisheye camera 42 are (R_2, t_2), the external parameters between the wide-angle camera 3 and the fisheye camera 43 are (R_3, t_3), and the external parameters between the wide-angle camera 3 and the fisheye camera 44 are (R_4, t_4).
204. And determining the pose of each fish-eye camera.
Illustratively, assume that the pose of the wide-angle camera 3 is C_f. Then the pose of the fisheye camera 41 is T(R_1, t_1, C_f), the pose of the fisheye camera 42 is T(R_2, t_2, C_f), the pose of the fisheye camera 43 is T(R_3, t_3, C_f), and the pose of the fisheye camera 44 is T(R_4, t_4, C_f).
205. And determining the re-projection error of each three-dimensional point P in the three-dimensional point cloud.
Illustratively, the reprojection error of each three-dimensional point P in the three-dimensional point cloud is:

d(P, C_f) + Σ_{i=1}^{4} d(P, T(R_i, t_i, C_f))

where T(·) denotes the transfer function.
206. And determining a reprojection error function of the three-dimensional point cloud according to the reprojection error of each three-dimensional point P in the three-dimensional point cloud.
Illustratively, the reprojection error function of the three-dimensional point cloud is expressed as:

E_rep = Σ_P [ d(P, C_f) + Σ_{i=1}^{4} d(P, T(R_i, t_i, C_f)) ]

where E_rep represents the reprojection errors of all three-dimensional points in the three-dimensional point cloud.
207. And adjusting the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud according to the reprojection error function to obtain a new three-dimensional point cloud, and adjusting the pose of the first camera to obtain a new pose.
For an example, please refer to the description of step 103 above.
208. And carrying out three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
For details, please refer to the description of step 104 above.
The foregoing describes a specific implementation of the three-dimensional reconstruction method mentioned in the embodiments of the present application, and the following is an embodiment of the apparatus of the present application, which may be used to execute the embodiments of the method of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Fig. 5 is a schematic structural diagram of a three-dimensional reconstruction device according to an embodiment of the present application. The apparatus may be integrated in a server or implemented by a server. As shown in fig. 5, in the present embodiment, the three-dimensional reconstruction apparatus 100 may include:
an obtaining module 11, configured to obtain external parameters between each of the first camera and at least one second camera;
a determining module 12, configured to determine a reprojection error function of the three-dimensional point cloud according to external parameters between the first camera and each of the second cameras, and the pose of the first camera;
the adjusting module 13 is configured to iteratively adjust three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and a pose of the first camera according to the reprojection error function, so as to obtain a new three-dimensional point cloud and a new pose;
And the reconstruction module 14 is used for carrying out three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
In a possible design, the determining module 12 is specifically configured to determine the pose of each second camera according to the external parameters between the first camera and each second camera, and the pose of the first camera, and determine the reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the pose of each second camera, and the three-dimensional point cloud.
In a possible design, when determining the reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the pose of each second camera and the three-dimensional point cloud, the determining module 12 is configured to determine, for each three-dimensional point in the three-dimensional point cloud, the reprojection error of the three-dimensional point under the pose of the first camera and the reprojection error of the three-dimensional point under the pose of each second camera, and to determine the reprojection error of the three-dimensional point from these two reprojection errors; and to determine the reprojection error function of the three-dimensional point cloud according to the reprojection error of each three-dimensional point in the three-dimensional point cloud.
In a possible design, the obtaining module 11 is further configured to obtain a keyframe set before determining a reprojection error function of a three-dimensional point cloud according to external parameters between the first camera and each of the second cameras and the pose of the first camera, where the keyframe set includes M keyframes obtained by the first camera and M keyframes obtained by each of the at least one second camera, and the M keyframes of the first camera and the M keyframes of the second camera are in one-to-one correspondence, and the first camera and the second camera are different models of cameras; performing feature point matching on each pair of key frames in the key frame set to obtain a plurality of pairs of matching points; and determining a three-dimensional point corresponding to each pair of matching points in the plurality of pairs of matching points to obtain the three-dimensional point cloud.
In a possible design, the acquiring module 11 is specifically configured to acquire a first video captured by the first camera and a second video captured by the at least one second camera when acquiring a keyframe set, and extract keyframes from the first video and the second video respectively at intervals of a preset duration to obtain the keyframe set.
In a possible design, the acquiring module 11 is specifically configured to acquire, when the collection vehicle moves by a preset distance, images of the surrounding environment of the collection vehicle by the first camera and each of the second cameras to obtain the keyframe set, where the first camera and the second camera are disposed on the collection vehicle.
In a feasible design, the adjusting module 13 is specifically configured to iteratively adjust, according to the reprojection error function, a pose of a camera corresponding to each key frame in the set of key frames, so that a value of the reprojection error function is minimized, determine a new three-dimensional point according to the adjusted key frame, and generate the new three-dimensional point cloud according to the new three-dimensional point.
In one possible design, the first camera is a wide angle camera and the second camera is a fisheye camera.
Fig. 6 shows another three-dimensional reconstruction device provided in the embodiment of the present application. Referring to fig. 6, in addition to the modules shown in fig. 5 above, the three-dimensional reconstruction device 100 provided in this embodiment further includes: a control module 15, configured to control the driving route of the automatic driving vehicle according to the map obtained by three-dimensional reconstruction after the reconstruction module 14 performs the three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
According to an embodiment of the present application, there is also provided a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any one of the embodiments described above.
Fig. 7 is a block diagram of an electronic device for implementing a three-dimensional reconstruction method of an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 7, the electronic device includes: one or more processors 21, memory 22, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). In fig. 7, one processor 21 is taken as an example.
Memory 22 is a non-transitory computer-readable storage medium provided herein. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the three-dimensional reconstruction methods provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the three-dimensional reconstruction method provided herein.
The memory 22, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the three-dimensional reconstruction method in the embodiments of the present application (e.g., the acquisition module 11, the determination module 12, the adjustment module 13 and the reconstruction module 14 shown in fig. 5, and the control module 15 shown in fig. 6). The processor 21 executes the various functional applications of the server and data processing, i.e. implements the three-dimensional reconstruction method in the above method embodiments, by running the non-transitory software programs, instructions and modules stored in the memory 22.
The memory 22 may include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created when the three-dimensional reconstruction method is performed according to the electronic device, and the like. In addition, the memory 22 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 22 optionally includes memory remotely located relative to the processor 21, which may be connected via a network to an electronic device for performing the three-dimensional reconstruction method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for performing the three-dimensional reconstruction method may further include: an input device 23 and an output device 24. The processor 21, the memory 22, the input device 23 and the output device 24 may be connected by a bus or otherwise, for example in fig. 7.
The input device 23 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the three-dimensional reconstruction electronic device, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointer stick, one or more mouse buttons, a track ball, a joystick, and the like. The output means 24 may include a display device, auxiliary lighting means (e.g., LEDs), tactile feedback means (e.g., vibration motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the three-dimensional reconstruction method provided by the embodiment of the application, due to the fact that the external parameter constraint between cameras is introduced into the cost optimization function by the electronic equipment, the new three-dimensional point cloud has the characteristic of real world dimensions, and the accuracy of the three-dimensional point cloud and the pose of the cameras is improved.
It should be appreciated that, using the various forms of flow shown above, steps may be reordered, added, or deleted. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (18)

1. A three-dimensional reconstruction method, comprising:
acquiring external parameters between a first camera and each second camera in at least one second camera;
determining a reprojection error function of a three-dimensional point cloud according to external parameters between the first camera and each second camera and the pose of the first camera, wherein the reprojection error function of the three-dimensional point cloud is determined according to reprojection errors of all three-dimensional points in the three-dimensional point cloud;
iteratively adjusting three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose;
performing three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose;
wherein the reprojection error of each three-dimensional point is determined according to a reprojection error of the three-dimensional point under the pose of the first camera and a reprojection error of the three-dimensional point under the pose of the second camera.
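As a minimal illustrative sketch of claim 1 (not part of the claim itself), assuming a simple pinhole projection for both cameras and the convention x_cam = R·x_world + t, the cost below leaves only the first camera's pose and the three-dimensional points as free variables, with the second camera's pose chained through the fixed extrinsics (R21, t21):

import numpy as np

def project(K, R, t, X):
    # Pinhole projection of Nx3 world points X under pose (R, t), intrinsics K.
    Xc = X @ R.T + t                 # world -> camera coordinates
    uv = Xc[:, :2] / Xc[:, 2:3]      # perspective division
    return uv @ K[:2, :2].T + K[:2, 2]

def reprojection_cost(X, R1, t1, R21, t21, K1, K2, obs1, obs2):
    # Second-camera pose chained from the first camera through fixed extrinsics.
    R2 = R21 @ R1
    t2 = R21 @ t1 + t21
    e1 = project(K1, R1, t1, X) - obs1   # error under the pose of the first camera
    e2 = project(K2, R2, t2, X) - obs2   # error under the pose of the second camera
    return float(np.sum(e1**2) + np.sum(e2**2))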
2. The method of claim 1, wherein the determining a reprojection error function of a three-dimensional point cloud according to the external parameters between the first camera and each of the second cameras and the pose of the first camera comprises:
determining the pose of each second camera according to the external parameters between the first camera and each second camera and the pose of the first camera;
and determining a reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the pose of each second camera and the three-dimensional point cloud.
3. The method according to claim 1 or 2, wherein before the determining a reprojection error function of the three-dimensional point cloud according to the external parameters between the first camera and each of the second cameras and the pose of the first camera, the method further comprises:
acquiring a key frame set, wherein the key frame set comprises M key frames acquired by the first camera and M key frames acquired by each second camera in the at least one second camera, the M key frames of the first camera and the M key frames of the second camera are in one-to-one correspondence, and the first camera and the second camera are cameras of different models;
performing feature point matching on each pair of key frames in the key frame set to obtain a plurality of pairs of matching points;
and determining a three-dimensional point corresponding to each pair of matching points in the plurality of pairs of matching points to obtain the three-dimensional point cloud.
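A sketch of claim 3's matching and triangulation steps, assuming OpenCV, ORB features (the patent does not prescribe a particular detector), and known 3x4 projection matrices P1 and P2 for a corresponding key frame pair:

import cv2
import numpy as np

def triangulate_pair(img1, img2, P1, P2):
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Feature point matching on the key frame pair -> pairs of matching points.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T   # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T
    # One three-dimensional point per pair of matching points.
    Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)               # 4xN homogeneous
    return (Xh[:3] / Xh[3]).T                                    # Nx3 point cloud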
4. The method of claim 3, wherein the acquiring a key frame set comprises:
acquiring a first video shot by the first camera and a second video shot by each of the at least one second camera;
and respectively extracting key frames from the first video and the second video at intervals of preset time length to obtain the key frame set.
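A sketch of claim 4's time-interval extraction, assuming cv2.VideoCapture sources; applying it to the first video and the second video and pairing frames by index yields the one-to-one correspondence of claim 3:

import cv2

def extract_keyframes(video_path, interval_s=1.0):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0    # fall back if FPS is unreported
    step = max(1, round(fps * interval_s))     # frames per preset duration
    keyframes, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            keyframes.append(frame)            # one key frame per interval
        idx += 1
    cap.release()
    return keyframes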
5. The method of claim 3, wherein the acquiring a key frame set comprises:
acquiring, each time a collection vehicle moves a preset distance, images of a surrounding environment of the collection vehicle captured by the first camera and each second camera, to obtain the key frame set, wherein the first camera and the second camera are disposed on the collection vehicle.
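A sketch of claim 5's distance-triggered collection; read_odometer and capture_all_cameras are hypothetical stand-ins for the collection vehicle's odometry and a synchronized trigger across the first and second cameras:

def collect_keyframes(read_odometer, capture_all_cameras, preset_distance=5.0):
    keyframe_set = []
    next_trigger = preset_distance
    while True:
        traveled = read_odometer()      # meters since start; None at end of drive
        if traveled is None:
            return keyframe_set
        if traveled >= next_trigger:
            keyframe_set.append(capture_all_cameras())  # one image per camera
            next_trigger += preset_distance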
6. The method of claim 3, wherein the iteratively adjusting the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose comprises:
iteratively adjusting the pose of the camera corresponding to each key frame in the key frame set according to the reprojection error function so as to minimize the value of the reprojection error function;
determining a new three-dimensional point according to the adjusted key frame;
and generating the new three-dimensional point cloud according to the new three-dimensional points.
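A sketch of claim 6's iterative adjustment, assuming SciPy's least_squares as the minimizer and reusing the project function and pose chaining from the sketch after claim 1; production systems typically use a sparse bundle-adjustment solver instead:

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, R21, t21, K1, K2, obs1, obs2, n_pts):
    rvec, t1 = params[:3], params[3:6]
    X = params[6:].reshape(n_pts, 3)
    R1 = Rotation.from_rotvec(rvec).as_matrix()
    R2, t2 = R21 @ R1, R21 @ t1 + t21        # chained second-camera pose
    e1 = project(K1, R1, t1, X) - obs1
    e2 = project(K2, R2, t2, X) - obs2
    return np.concatenate([e1.ravel(), e2.ravel()])

def bundle_adjust(rvec0, t0, X0, R21, t21, K1, K2, obs1, obs2):
    x0 = np.concatenate([rvec0, t0, X0.ravel()])
    res = least_squares(residuals, x0,
                        args=(R21, t21, K1, K2, obs1, obs2, len(X0)))
    new_pose, new_points = res.x[:6], res.x[6:].reshape(-1, 3)
    return new_pose, new_points              # new pose and new point cloud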
7. The method of claim 1 or 2, wherein the first camera is a wide angle camera and the second camera is a fisheye camera.
8. The method according to claim 1 or 2, wherein after the three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose, further comprising:
controlling a driving route of an autonomous vehicle according to a map obtained by the three-dimensional reconstruction.
9. A three-dimensional reconstruction apparatus comprising:
an acquisition module, configured to acquire external parameters between a first camera and each second camera in at least one second camera;
a determining module, configured to determine a reprojection error function of a three-dimensional point cloud according to the external parameters between the first camera and each second camera and the pose of the first camera, wherein the reprojection error function of the three-dimensional point cloud is determined according to the reprojection error of each three-dimensional point in the three-dimensional point cloud;
an adjusting module, configured to iteratively adjust the three-dimensional coordinates of each three-dimensional point in the three-dimensional point cloud and the pose of the first camera according to the reprojection error function to obtain a new three-dimensional point cloud and a new pose; and
a reconstruction module, configured to perform three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose;
wherein the reprojection error of each three-dimensional point is determined according to a reprojection error of the three-dimensional point under the pose of the first camera and a reprojection error of the three-dimensional point under the pose of the second camera.
10. The apparatus of claim 9, wherein the determining module is specifically configured to determine a pose of each second camera according to the external parameters between the first camera and each second camera and the pose of the first camera, and determine a reprojection error function of the three-dimensional point cloud according to the pose of the first camera, the pose of each second camera, and the three-dimensional point cloud.
11. The apparatus of claim 9 or 10, wherein the obtaining module is further configured to obtain a keyframe set before determining a reprojection error function of a three-dimensional point cloud according to an external parameter between the first camera and each of the second cameras and a pose of the first camera, where the keyframe set includes M keyframes obtained by the first camera and M keyframes obtained by each of the at least one second camera, and the M keyframes of the first camera and the M keyframes of the second camera are in one-to-one correspondence, and the first camera and the second camera are different models of cameras; performing feature point matching on each pair of key frames in the key frame set to obtain a plurality of pairs of matching points; and determining a three-dimensional point corresponding to each pair of matching points in the plurality of pairs of matching points to obtain the three-dimensional point cloud.
12. The apparatus of claim 11, wherein the acquiring module, when acquiring a key frame set, is specifically configured to acquire a first video captured by the first camera and a second video captured by each of the at least one second camera, and extract key frames from the first video and the second video respectively at intervals of a preset duration to obtain the key frame set.
13. The apparatus of claim 11, wherein the acquiring module, when acquiring a keyframe set, is specifically configured to acquire images captured by the first camera and each of the second cameras on an environment surrounding the collection vehicle to obtain the keyframe set each time the collection vehicle moves a preset distance, where the first camera and the second camera are disposed on the collection vehicle.
14. The apparatus of claim 11, wherein the adjusting module is specifically configured to iteratively adjust a pose of a camera corresponding to each key frame in the set of key frames according to the reprojection error function, so as to minimize a value of the reprojection error function, determine a new three-dimensional point according to the adjusted key frame, and generate the new three-dimensional point cloud according to the new three-dimensional point.
15. The apparatus of claim 9 or 10, wherein the first camera is a wide angle camera and the second camera is a fisheye camera.
16. The apparatus according to claim 9 or 10, wherein the apparatus further comprises: a control module, configured to control a driving route of an autonomous vehicle according to a map obtained by the three-dimensional reconstruction after the reconstruction module performs the three-dimensional reconstruction according to the new three-dimensional point cloud and the new pose.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-8.
CN202010601885.6A 2020-06-29 2020-06-29 Three-dimensional reconstruction method, device, equipment and readable storage medium Active CN111784842B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010601885.6A CN111784842B (en) 2020-06-29 2020-06-29 Three-dimensional reconstruction method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111784842A CN111784842A (en) 2020-10-16
CN111784842B true CN111784842B (en) 2024-04-12

Family

ID=72761370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010601885.6A Active CN111784842B (en) 2020-06-29 2020-06-29 Three-dimensional reconstruction method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111784842B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112154485A (en) * 2019-08-30 2020-12-29 深圳市大疆创新科技有限公司 Optimization method and equipment of three-dimensional reconstruction model and movable platform
CN115063485B (en) * 2022-08-19 2022-11-29 深圳市其域创新科技有限公司 Three-dimensional reconstruction method, device and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127524A (en) * 2018-10-31 2020-05-08 华为技术有限公司 Method, system and device for tracking trajectory and reconstructing three-dimensional image

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8401276B1 (en) * 2008-05-20 2013-03-19 University Of Southern California 3-D reconstruction and registration
CN102982548A (en) * 2012-12-11 2013-03-20 清华大学 Multi-view stereoscopic video acquisition system and camera parameter calibrating method thereof
CN105844624A (en) * 2016-03-18 2016-08-10 上海欧菲智能车联科技有限公司 Dynamic calibration system, and combined optimization method and combined optimization device in dynamic calibration system
CN107784672A (en) * 2016-08-26 2018-03-09 百度在线网络技术(北京)有限公司 For the method and apparatus for the external parameter for obtaining in-vehicle camera
WO2018040017A1 (en) * 2016-08-31 2018-03-08 深圳大学 Method and system for correcting distortion of projector lens based on adaptive fringes
CN106920276A (en) * 2017-02-23 2017-07-04 华中科技大学 A kind of three-dimensional rebuilding method and system
CN108564617A (en) * 2018-03-22 2018-09-21 深圳岚锋创视网络科技有限公司 Three-dimensional rebuilding method, device, VR cameras and the panorama camera of more mesh cameras

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Three-dimensional surface reconstruction of a fixed-axis, slowly rotating, non-cooperative space target based on a monocular hovering camera; Xie Chaoyi; Tian Jinwen; Zhang Jun; Ship Electronic Engineering (03); full text *
Sparse point cloud reconstruction based on image sequences; Shen Kexian; China Master's Theses Full-text Database (No. 4); full text *

Similar Documents

Publication Publication Date Title
JP6768156B2 (en) Virtually enhanced visual simultaneous positioning and mapping systems and methods
CN108335353B (en) Three-dimensional reconstruction method, device and system of dynamic scene, server and medium
US11270460B2 (en) Method and apparatus for determining pose of image capturing device, and storage medium
CN107990899B (en) Positioning method and system based on SLAM
CN112652016B (en) Point cloud prediction model generation method, pose estimation method and pose estimation device
CN113436270B (en) Sensor calibration method and device, electronic equipment and storage medium
CN111709973B (en) Target tracking method, device, equipment and storage medium
Labbé et al. Single-view robot pose and joint angle estimation via render & compare
CN112561978B (en) Training method of depth estimation network, depth estimation method of image and equipment
CN110986969B (en) Map fusion method and device, equipment and storage medium
CN111652113B (en) Obstacle detection method, device, equipment and storage medium
JP2021101365A (en) Positioning method, positioning device, and electronic device
CN111612852A (en) Method and apparatus for verifying camera parameters
CN108734770B (en) Three-dimensional model reconstruction method, electronic device and non-transitory computer readable recording medium
CN111094895A (en) System and method for robust self-repositioning in pre-constructed visual maps
CN111784842B (en) Three-dimensional reconstruction method, device, equipment and readable storage medium
CN110070578B (en) Loop detection method
WO2019157922A1 (en) Image processing method and device and ar apparatus
CN111753739A (en) Object detection method, device, equipment and storage medium
CN111222579A (en) Cross-camera obstacle association method, device, equipment, electronic system and medium
US20220404460A1 (en) Sensor calibration method and apparatus, electronic device, and storage medium
CN112085842A (en) Depth value determination method and device, electronic equipment and storage medium
JP2023100258A (en) Pose estimation refinement for aerial refueling
Zhang et al. MARS: parallelism-based metrically accurate 3D reconstruction system in real-time
CN112750164A (en) Lightweight positioning model construction method, positioning method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant