CN116958452A - Three-dimensional reconstruction method and system


Info

Publication number
CN116958452A
CN116958452A (application CN202311199416.6A)
Authority
CN
China
Prior art keywords
point cloud, information, dimensional reconstruction, pose information, points
Prior art date
Legal status
Pending
Application number
CN202311199416.6A
Other languages
Chinese (zh)
Inventor
王志举
尹涵宇
Current Assignee
Beijing Ge Lei Information Technology Co., Ltd.
Original Assignee
Beijing Ge Lei Information Technology Co., Ltd.
Priority date: 2023-09-18
Filing date: 2023-09-18
Publication date: 2023-10-27
Application filed by Beijing Ge Lei Information Technology Co., Ltd.
Priority to CN202311199416.6A
Publication of CN116958452A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 - Image registration using feature-based methods
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Abstract

The application provides a three-dimensional reconstruction method and system in the technical field of three-dimensional reconstruction. The method comprises the following steps: extracting corner points, surface points and intensity characteristic information from point cloud data acquired by a laser radar; constructing a point cloud map according to the corner points, the surface points, the intensity characteristic information and image information acquired by a camera; determining target pose information according to angular velocity and acceleration information acquired by an inertial measurement unit (IMU); and obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information. Because intensity characteristic information is extracted from the lidar point cloud in addition to geometric features, the method alleviates the sparse-feature problem of existing three-dimensional reconstruction and avoids tracking loss in low-texture scenes such as a white wall, thereby effectively improving the accuracy and reliability of three-dimensional reconstruction.

Description

Three-dimensional reconstruction method and system
Technical Field
The present application relates to the field of three-dimensional reconstruction technologies, and in particular, to a three-dimensional reconstruction method and system.
Background
With the development of three-dimensional reconstruction technology, multi-sensor fusion three-dimensional reconstruction methods have emerged. Three-dimensional reconstruction can be applied to factory pipeline inspection, disaster relief in specific scenes, mining, construction, and the like.
In the related art, view-based three-dimensional reconstruction algorithms have poor accuracy and cannot faithfully reflect the features of real-world objects. How to perform three-dimensional reconstruction accurately and effectively is therefore a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
To address the problems in the prior art, embodiments of the present application provide a three-dimensional reconstruction method and system.
Specifically, the embodiment of the application provides the following technical scheme:
In a first aspect, an embodiment of the present application provides a three-dimensional reconstruction method, including:
extracting corner points, surface points and intensity characteristic information from point cloud data acquired by a laser radar;
constructing a point cloud map according to the corner points, the surface points, the intensity characteristic information and image information acquired by a camera;
determining target pose information according to angular velocity and acceleration information acquired by an inertial measurement unit (IMU);
and obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information.
Further, the determining target pose information according to the angular velocity and acceleration information acquired by the inertial measurement unit (IMU) includes:
integrating the angular velocity and acceleration information acquired by the IMU to determine first pose information;
matching adjacent frames of point cloud data acquired by the laser radar to determine second pose information;
matching adjacent frames of image information acquired by the camera to determine third pose information;
and correcting the second pose information and the third pose information by using the first pose information to obtain target pose information.
Further, the correcting the second pose information and the third pose information by using the first pose information to obtain target pose information includes:
correcting the second pose information and the third pose information by utilizing the first pose information to obtain corrected pose information;
and constructing a target factor graph optimization model, and optimizing the corrected pose information by using the target factor graph optimization model to obtain the target pose information.
Further, the constructing the target factor graph optimization model includes:
and inputting the laser-visual odometer information, the IMU pre-integration result, the intensity characteristic information and the loop detection result into a factor graph model to obtain the target factor graph optimization model.
Further, after the three-dimensional reconstruction result is obtained, the method further comprises:
acquiring laser point cloud images of consecutive frames when motion paths overlap;
matching the laser point cloud images with historical laser point cloud images to obtain a matching result;
and if the matching is successful, correcting the error of the target pose information.
Further, the matching the laser point cloud image with the historical laser point cloud image to obtain a matching result includes:
determining the number of intensity characteristic points in each frame of laser point cloud image;
marking laser point cloud images in which the number of intensity characteristic points exceeds a preset threshold as target point cloud images;
and matching the target point cloud image with the historical laser point cloud image to obtain a matching result.
In a second aspect, an embodiment of the present application further provides a three-dimensional reconstruction system, including:
a laser radar, a camera, an inertial measurement unit (IMU) and a processing unit;
the laser radar is used for collecting point cloud data;
the camera is used for collecting image information;
the inertial measurement unit IMU is used for acquiring angular velocity and acceleration information;
the processing unit is used for extracting corner points, surface points and intensity characteristic information from the point cloud data acquired by the laser radar; constructing a point cloud map according to the corner points, the surface points, the intensity characteristic information and the image information acquired by the camera; determining target pose information according to the angular velocity and acceleration information acquired by the inertial measurement unit (IMU); and obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information.
In a third aspect, an embodiment of the present application further provides a three-dimensional reconstruction apparatus, including:
the extraction module is used for extracting corner points, surface points and intensity characteristic information from point cloud data acquired by the laser radar;
the construction module is used for constructing a point cloud map according to the corner points, the surface points, the intensity characteristic information and the image information acquired by the camera;
the determining module is used for determining target pose information according to the angular velocity and acceleration information acquired by the inertial measurement unit (IMU);
and the reconstruction module is used for obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the three-dimensional reconstruction method according to the first aspect when executing the program.
In a fifth aspect, embodiments of the present application also provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the three-dimensional reconstruction method according to the first aspect.
In a sixth aspect, embodiments of the present application also provide a computer program product comprising a computer program which, when executed by a processor, implements the three-dimensional reconstruction method according to the first aspect.
According to the three-dimensional reconstruction method and system provided by the embodiments of the application, corner points and surface points are extracted from the point cloud data acquired by the laser radar, and intensity characteristic information is extracted as well. This alleviates the sparse-feature problem of the prior art and, based on the intensity characteristic information, avoids tracking loss in situations such as facing a white wall, thereby effectively improving the accuracy and reliability of three-dimensional reconstruction. Further, a point cloud map is constructed according to the corner points, the surface points, the intensity characteristic information and the image information acquired by the camera, target pose information is determined according to the angular velocity and acceleration information acquired by the inertial measurement unit (IMU), and a three-dimensional reconstruction result is then obtained accurately from the point cloud map and the target pose information, so that the complementarity of the sensors is fully utilized, the three-dimensional reconstruction data are richer and more comprehensive, and the precision and robustness of the three-dimensional reconstruction result are improved.
Drawings
In order to more clearly illustrate the technical solutions of the application or of the prior art, the drawings used in the embodiments are briefly introduced below. The drawings described below show some embodiments of the application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of a three-dimensional reconstruction method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a hardware platform integrating the lidar, the camera and the IMU provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a target factor graph optimization model provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart of another three-dimensional reconstruction method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an unmanned aerial vehicle platform and an unmanned ground vehicle platform provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a three-dimensional reconstruction device according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the application.
The method of the embodiments of the application can be applied to three-dimensional reconstruction scenes. It makes full use of the complementarity of the sensors, so that the three-dimensional reconstruction data are richer and more comprehensive and the precision and robustness of the three-dimensional reconstruction result are improved.
As noted above, view-based three-dimensional reconstruction algorithms in the related art have poor accuracy and cannot faithfully reflect the features of real-world objects, so performing three-dimensional reconstruction accurately and effectively remains a technical problem to be solved by those skilled in the art.
According to the three-dimensional reconstruction method of the embodiments of the application, corner points and surface points are extracted from the point cloud data acquired by the laser radar, and intensity characteristic information is extracted as well. This alleviates the sparse-feature problem of the prior art and, based on the intensity characteristic information, avoids tracking loss in situations such as facing a white wall, thereby effectively improving the accuracy and reliability of three-dimensional reconstruction. Further, a point cloud map is constructed according to the corner points, the surface points, the intensity characteristic information and the image information acquired by the camera, target pose information is determined according to the angular velocity and acceleration information acquired by the inertial measurement unit (IMU), and a three-dimensional reconstruction result is then obtained accurately from the point cloud map and the target pose information, so that the complementarity of the sensors is fully utilized, the three-dimensional reconstruction data are richer and more comprehensive, and the precision and robustness of the three-dimensional reconstruction result are improved.
In order to facilitate a clearer understanding of the technical solutions of the embodiments of the present application, some technical contents related to the embodiments of the present application will be first described.
With the development of three-dimensional reconstruction technology, multi-sensor fusion three-dimensional reconstruction methods have emerged. The laser radar, the vision sensor and the IMU have different characteristics and functions. The laser radar can provide high-precision distance and three-dimensional point cloud information and is suitable for building an accurate map; the vision sensor can provide rich visual information, such as images and feature points, and can be used for environment perception and feature extraction; the IMU provides inertial measurement data, such as accelerometer and gyroscope readings, for pose estimation and motion tracking. By fusing these sensors, the complementarity of each sensor can be fully utilized, and the accuracy and robustness of positioning and map construction are improved.
At present, a multi-sensor fusion algorithm is generally more complex than a single-sensor processing method. A suitable fusion algorithm must be designed and implemented, which involves problems such as data alignment, calibration and weight distribution and increases the complexity and development cost of the system. Different algorithms suit different application scenes, and a multi-sensor fusion algorithm appropriate for the specific application needs to be selected according to the circumstances. Multi-sensor fusion also requires more computing and storage resources to process and fuse the data of multiple sensors, which places demands on the real-time performance and resource utilization of the system. Inconsistencies, calibration errors or faults between sensors may make the fusion results inaccurate or misleading.
The following describes the technical scheme of the present application in detail with reference to fig. 1 to 7. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Fig. 1 is a flow chart of an embodiment of a three-dimensional reconstruction method according to an embodiment of the present application. As shown in fig. 1, the method provided in this embodiment includes:
Step 101: extracting corner points, surface points and intensity characteristic information from point cloud data acquired by a laser radar;
specifically, in order to accurately acquire a three-dimensional reconstruction result, in the embodiment of the application, angular points, surface points and intensity characteristic information are firstly extracted from point cloud data acquired by a laser radar; optionally, in the embodiment of the application, not only the angular points and the face points are extracted from the point cloud data acquired by the laser radar, but also the intensity characteristic information is extracted from the point cloud data acquired by the laser radar, so that denser characteristic information is provided for the position with larger change, the problem of sparse three-dimensional reconstruction characteristics in the prior art can be solved, the problem of tracking loss in the case of facing white walls and the like can be solved according to the intensity characteristic information extracted from the point cloud data acquired by the laser radar, and the accuracy and the reliability of three-dimensional reconstruction are further effectively improved.
For example, in the feature extraction thread, after the curvature of the point cloud is calculated, corner points and surface points are extracted, and intensity feature points are extracted as well, which alleviates the sparsity of feature points in regions with salient changes; the corner point, surface point and intensity feature point information is then published.
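For illustration only, a LOAM-style sketch of this extraction for one scan line might look as follows; the function name, the 10-neighbour curvature window and both thresholds are assumptions, not values taken from the patent:

```python
import numpy as np

def extract_features(points, intensities, curvature_thresh=0.1, intensity_grad_thresh=10.0):
    """Split one lidar scan line into corner, surface and intensity feature points.

    points:      (N, 3) array of points along a single scan ring
    intensities: (N,) reflectance value of each point
    """
    corner_idx, surface_idx, intensity_idx = [], [], []
    for i in range(5, len(points) - 5):
        # Curvature as in LOAM: how far the point deviates from the sum of
        # its 10 ring neighbours; large values indicate a sharp structure.
        diff = points[i - 5:i + 6].sum(axis=0) - 11.0 * points[i]
        curvature = float(diff @ diff)
        if curvature > curvature_thresh:
            corner_idx.append(i)      # sharp -> corner point
        else:
            surface_idx.append(i)     # flat  -> surface point
        # Intensity feature: a strong local reflectance gradient marks a
        # usable feature even on geometrically flat, low-texture surfaces
        # such as a white wall.
        if abs(intensities[i + 1] - intensities[i - 1]) > intensity_grad_thresh:
            intensity_idx.append(i)
    return corner_idx, surface_idx, intensity_idx
```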
Step 102: constructing a point cloud map according to the corner points, the surface points, the intensity characteristic information and image information acquired by a camera;
specifically, after angular point, face point and intensity characteristic information are extracted from point cloud data acquired by a laser radar, a point cloud map is further constructed according to the acquired angular point, face point, intensity characteristic information and image information acquired by a camera in the embodiment of the application; in the process of constructing the point cloud map, the image information acquired by the camera not only can provide color information, but also can be mutually supplemented, verified and mutually fused with the angular point, the face point and the intensity characteristic information acquired by the laser radar, so that the complementarity of each sensor can be fully utilized, the three-dimensional reconstruction data is richer and more comprehensive, and the positioning and map construction precision and robustness are improved.
Step 103: determining target pose information according to angular velocity and acceleration information acquired by an inertial measurement unit (IMU);
specifically, after a point cloud map is constructed according to angular points, face points, intensity characteristic information and image information acquired by a camera, target pose information, namely position and direction information, is determined according to angular speed and acceleration information acquired by an Inertial Measurement Unit (IMU) in the embodiment of the application. Optionally, the lidar, the camera and the inertial measurement unit IMU in the embodiment of the present application may be integrated to obtain a hardware platform as shown in fig. 2, where the lidar, the camera and the IMU inertial measurement unit are integrated; optionally, the hardware platform may be mounted on an unmanned plane, an unmanned vehicle, or other devices to collect surrounding environment images and radar point cloud data, and collect angular velocity and acceleration information of the hardware platform through the IMU. Optionally, after the unmanned platform and the sensor are installed, the laser radar, the camera and the IMU inertial measurement unit are connected and drive of the ROS system is started, and each node can normally receive and send information of the related sensor to ensure that time stamps of all sensor data are aligned, and the data type is a data type common to an operating system.
Step 104: obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information.
Specifically, after the point cloud map is constructed according to the corner points, the surface points, the intensity characteristic information and the image information acquired by the camera, and the target pose information is determined according to the angular velocity and acceleration information acquired by the inertial measurement unit (IMU), a three-dimensional reconstruction result can be obtained accurately according to the point cloud map and the target pose information. The complementarity of the sensors is thus fully utilized, the three-dimensional reconstruction data are richer and more comprehensive, and the accuracy and robustness of three-dimensional reconstruction are improved.
According to the method of this embodiment, corner points and surface points are extracted from the point cloud data acquired by the laser radar, and intensity characteristic information is extracted as well, which alleviates the sparse-feature problem of the prior art and avoids tracking loss in situations such as facing a white wall, thereby effectively improving the accuracy and reliability of three-dimensional reconstruction. Further, a point cloud map is constructed according to the corner points, the surface points, the intensity characteristic information and the image information acquired by the camera, target pose information is determined according to the angular velocity and acceleration information acquired by the inertial measurement unit (IMU), and a three-dimensional reconstruction result is then obtained accurately from the point cloud map and the target pose information, so that the complementarity of the sensors is fully utilized, the three-dimensional reconstruction data are richer and more comprehensive, and the precision and robustness of the three-dimensional reconstruction result are improved.
In an embodiment, determining target pose information according to the angular velocity and acceleration information acquired by the inertial measurement unit (IMU) includes:
integrating the angular velocity and acceleration information acquired by the IMU to determine first pose information;
matching adjacent frames of point cloud data acquired by the laser radar to determine second pose information;
matching adjacent frames of image information acquired by the camera to determine third pose information;
and correcting the second pose information and the third pose information with the first pose information to obtain the target pose information.
Specifically, in order to accurately obtain pose information, namely position and orientation information, during three-dimensional reconstruction, in this implementation the angular velocity and acceleration information acquired by the IMU are integrated to determine first pose information; adjacent frames of point cloud data acquired by the laser radar are matched to determine second pose information; and adjacent frames of image information acquired by the camera are matched to determine third pose information. Finally, the second and third pose information obtained from the laser radar and the camera are corrected with the first pose information obtained from the IMU to obtain the target pose information. The pose is estimated by IMU pre-integration, and the poses of the laser radar and the camera are then corrected and adjusted with the pre-integrated pose, so that the poses of the three sensors constrain and correct one another. A more accurate pose estimate is thereby obtained, which improves the accuracy of pose estimation and in turn the accuracy of the three-dimensional reconstruction result.
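As a simplified illustration of the first step, the IMU samples between two frames can be integrated into a relative pose as sketched below; this sketch ignores the bias and noise terms that full pre-integration tracks, and the gravity vector and sample period are assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def integrate_imu(gyro, accel, dt, gravity=np.array([0.0, 0.0, -9.81])):
    """Integrate IMU samples between two frames into a relative pose.

    gyro:  (N, 3) angular velocity samples in the body frame [rad/s]
    accel: (N, 3) specific-force samples in the body frame [m/s^2]
    dt:    sample period [s]
    Returns rotation R, velocity v and translation p accumulated over the window.
    """
    R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_world = R @ a + gravity                         # rotate specific force, add gravity
        p = p + v * dt + 0.5 * a_world * dt * dt          # position update
        v = v + a_world * dt                              # velocity update
        R = R @ Rotation.from_rotvec(w * dt).as_matrix()  # attitude update on SO(3)
    return R, v, p
```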
According to the method of this embodiment, the first pose information obtained from the IMU is used to correct the second and third pose information obtained from the laser radar and the camera, so that the poses of the three sensors constrain and correct one another. A more accurate pose estimate is obtained, which further improves the accuracy of pose estimation and thus the accuracy of the three-dimensional reconstruction result.
In an embodiment, correcting the second pose information and the third pose information by using the first pose information to obtain target pose information includes:
correcting the second pose information and the third pose information by using the first pose information to obtain corrected pose information;
and constructing a target factor graph optimization model, and optimizing the corrected pose information by utilizing the target factor graph optimization model to obtain target pose information.
Specifically, in the embodiment of the application, after the second and third pose information obtained from the laser radar and the camera are corrected with the first pose information obtained from the IMU, the corrected pose information is further optimized with the constructed target factor graph optimization model, so that more accurate and precise position and orientation information can be obtained, improving the accuracy and precision of the three-dimensional reconstruction result.
Optionally, constructing the target factor graph optimization model includes:
and inputting the laser-visual odometer information, the IMU pre-integration result, the intensity characteristic information and the loop detection result into the factor graph model to obtain the target factor graph optimization model.
Specifically, in the embodiment of the application, a target factor graph optimization model containing a laser-visual odometer factor, an IMU pre-integration factor, an intensity characteristic factor and a loop detection factor is constructed, and the pose is continuously estimated, constrained and optimized, so that the constructed three-dimensional reconstruction map has higher precision and accuracy. An exemplary schematic diagram of the target factor graph optimization model is shown in FIG. 3. Optionally, feature matching is performed on the corner points, the surface points and the intensity feature points respectively, LM (Levenberg-Marquardt) optimization is performed, the pose is continuously adjusted in combination with the IMU data, and the intensity characteristic factor is added to the factor graph. Optionally, a factor graph is constructed for the IMU pre-integrated measurement between two frames and the state of the current frame is optimized: the sensor continuously publishes its acceleration and angular velocity information, the pre-integrated quantity between two frames is continuously computed, the iSAM optimizer is continuously reset to perform factor graph optimization, and the IMU prediction is applied on top of the optimized state to obtain the IMU odometry at each moment.
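A minimal factor-graph sketch in the spirit of FIG. 3, using GTSAM and its iSAM2 optimizer as an assumed implementation choice (the patent names no library); only a prior and a single odometry between-factor are shown, and the IMU pre-integration, intensity and loop factors would be added analogously:

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
values = gtsam.Values()

# Illustrative noise levels, not values from the patent.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-1))

# Anchor the first pose, then add one between-factor per relative-pose
# measurement; laser-visual odometry, intensity-feature matches and loop
# closures all enter the graph in this same way.
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
values.insert(0, gtsam.Pose3())

odom = gtsam.Pose3(gtsam.Rot3(), np.array([1.0, 0.0, 0.0]))  # e.g. 1 m forward
graph.add(gtsam.BetweenFactorPose3(0, 1, odom, odom_noise))
values.insert(1, odom)

# Incremental optimisation (iSAM2), matching the incremental updates the
# description performs as new pre-integrated quantities arrive.
isam = gtsam.ISAM2()
isam.update(graph, values)
print(isam.calculateEstimate().atPose3(1))
```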
According to the method of this embodiment, a target factor graph optimization model containing the laser-visual odometer factor, the IMU pre-integration factor, the intensity characteristic factor and the loop detection factor is constructed, the corrected pose information is further optimized with this model, and the pose is continuously estimated, constrained and optimized, so that more accurate and precise position and orientation information is obtained and the accuracy and precision of the three-dimensional reconstruction result are improved.
In one embodiment, after obtaining the three-dimensional reconstruction result, the method further includes:
acquiring laser point cloud images of consecutive frames when motion paths overlap;
matching the laser point cloud images with historical laser point cloud images to obtain a matching result;
and if the matching is successful, correcting the error of the target pose information.
Specifically, after the three-dimensional reconstruction result is obtained according to the point cloud map and the target pose information, when the motion path of a device such as an unmanned aerial vehicle overlaps itself, laser point cloud images of consecutive frames are acquired, and the current laser point cloud image is matched with historical laser point cloud images to obtain a matching result. Optionally, features are extracted from the laser radar point cloud, and feature matching and similarity calculation are performed between the current frame and the local map. If the current laser point cloud image matches a historical laser point cloud image successfully, the error of the target pose information is corrected so that the pose information at the overlapping positions of the motion path becomes consistent. This minimizes the accumulated error, avoids the low reconstruction accuracy caused by an excessive accumulated error, effectively solves the inconsistency of pose information at overlapping positions caused by accumulated error, and improves the accuracy of the three-dimensional reconstruction result.
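As an illustrative sketch of the matching step, assuming Open3D's point-to-point ICP (the patent only specifies "ICP matching calculation"); the correspondence distance and fitness threshold are assumptions:

```python
import numpy as np
import open3d as o3d

def try_loop_closure(current_cloud, historical_cloud, fitness_thresh=0.6):
    """Match the current laser point cloud against a historical one with ICP.

    current_cloud, historical_cloud: open3d.geometry.PointCloud
    Returns the 4x4 relative transform on success, otherwise None.
    """
    result = o3d.pipelines.registration.registration_icp(
        current_cloud, historical_cloud,
        max_correspondence_distance=0.5,   # assumed inlier radius [m]
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.
        TransformationEstimationPointToPoint())
    # fitness = inlier fraction; a high value means the clouds overlap well,
    # i.e. "matching is successful", and a loop-closure factor can be built
    # from result.transformation to correct the accumulated pose error.
    return result.transformation if result.fitness > fitness_thresh else None
```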
According to the method of this embodiment, laser point cloud images of consecutive frames are acquired when motion paths overlap, the current laser point cloud image is matched with historical laser point cloud images, and the error of the target pose information is corrected when the matching succeeds, so that the pose information at the overlapping positions of the corrected motion path is consistent. This effectively solves the inconsistency of pose information at overlapping positions caused by accumulated error and improves the accuracy of the three-dimensional reconstruction result.
In an embodiment, matching the laser point cloud image with the historical laser point cloud image to obtain a matching result includes:
determining the number of intensity characteristic points in each frame of laser point cloud image;
marking laser point cloud images in which the number of intensity characteristic points exceeds a preset threshold as target point cloud images;
and matching the target point cloud image with the historical laser point cloud image to obtain a matching result.
Specifically, in the embodiment of the application, in the process of matching the laser point cloud image with the historical laser point cloud images, the number of intensity feature points in each frame of laser point cloud image is determined, laser point cloud images in which this number exceeds a preset threshold are marked as target point cloud images, and only these target point cloud images are matched with the historical laser point cloud images. Matching point cloud images is easier and more reliable when more intensity feature points are available, so the matching result is more accurate, and correcting the pose information based on an accurate point cloud matching result improves the accuracy of the three-dimensional reconstruction result.
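A short sketch of this selection step, with an assumed threshold value and frame structure (the patent leaves both unspecified); the matcher argument would be, for example, the ICP sketch above:

```python
def select_target_frames(frames, min_intensity_points=50):
    """Keep frames with enough intensity feature points to match reliably.

    frames: iterable of dicts like {"id": ..., "cloud": ..., "intensity_idx": [...]}
    """
    return [f for f in frames if len(f["intensity_idx"]) > min_intensity_points]

def match_history(target_frames, history_frames, matcher):
    """Run a matcher (e.g. try_loop_closure above) over historical frames and
    collect (current id, historical id, relative transform) for each success."""
    closures = []
    for cur in target_frames:
        for old in history_frames:
            T = matcher(cur["cloud"], old["cloud"])
            if T is not None:
                closures.append((cur["id"], old["id"], T))
    return closures
```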
An exemplary specific flow of the three-dimensional reconstruction method in the embodiment of the application is shown in FIG. 4:
1. Build a hardware platform integrating the laser radar, the camera and the IMU. Optionally, the hardware platform can be carried by an unmanned aerial vehicle, a wheeled unmanned trolley or a tracked unmanned trolley, or be used directly as handheld equipment; the unmanned aerial vehicle platform and the unmanned ground vehicle platform are shown in FIG. 5. After the sensors are installed, they are calibrated. The motion control system adopts a PID control algorithm or another motion control algorithm to control the motion of the robot.
2. The laser radar acquires point cloud information, the camera acquires image information, and the inertial measurement unit (IMU) acquires angular velocity and acceleration. Intensity features are extracted from the intensity information in the laser radar point cloud, and an intensity information factor is added to the factor graph for optimization. Optionally, the point cloud information collected by the laser radar includes depth, color and intensity information; corner points, surface points and intensity feature points are extracted, and the intensity feature points densify the features and improve the matching accuracy. Optionally, a light supplementing device is added to the vision sensor to prevent a lack of features in dark environments. The IMU acquires angular velocity and acceleration information for pose estimation.
3. Positioning and point cloud map construction are carried out based on the laser radar and camera information, and pose estimation is carried out on the IMU information. The poses estimated by the three sensors constrain and correct one another to obtain an accurate pose estimate. Optionally, the camera obtains its pose through frame-to-frame matching; the laser radar obtains its pose through continuous point cloud frame matching; the IMU information can be used directly to estimate the pose, and the pose and motion constraints provided by the IMU correct and adjust the poses of the laser radar and the camera through an optimization algorithm.
4. When motion paths overlap, laser point cloud images of consecutive frames are acquired and matched with historical laser point cloud images to obtain a matching result. If the matching is successful, the error of the target pose information is corrected, so that the accumulated error is minimized and an accurate reconstruction is achieved. Optionally, historical frames that are close in distance but far apart in time are matched: ICP matching is calculated, the resulting frame ID and pose are stored, the pose transformation is computed, closed-loop factor data are constructed, and the factor graph is extended for optimization. The IMU pre-integration factor, the laser-visual odometer factor, the loop detection factor and the intensity feature factor are each added in the factor graph optimization, and the latest pose is published after optimization.
According to the method of this embodiment, the number of intensity feature points in each frame of laser point cloud image is determined, and only laser point cloud images whose intensity feature points exceed the preset threshold are matched with the historical laser point cloud images. Matching is therefore easier and more reliable, the matching result is more accurate, and correcting the pose information based on the accurate point cloud matching result improves the accuracy of the three-dimensional reconstruction result.
In one embodiment, the present application further provides a three-dimensional reconstruction system, comprising: a laser radar, a camera, an inertial measurement unit (IMU) and a processing unit;
the laser radar is used for collecting point cloud data;
the camera is used for collecting image information;
the inertial measurement unit IMU is used for acquiring angular velocity and acceleration information;
the processing unit is used for extracting corner points, surface points and intensity characteristic information from the point cloud data acquired by the laser radar; constructing a point cloud map according to the corner points, the surface points, the intensity characteristic information and the image information acquired by the camera; determining target pose information according to the angular velocity and acceleration information acquired by the inertial measurement unit (IMU); and obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information.
Specifically, the three-dimensional reconstruction system in the embodiment of the application comprises a laser radar, a camera, an inertial measurement unit (IMU) and a processing unit. Optionally, the laser radar, the camera and the IMU may be integrated into a single hardware platform. Optionally, the system further comprises a camera light supplementing device to prevent a lack of features in dark environments. Optionally, the hardware platform may be mounted on a carrying device such as an unmanned aerial vehicle or a robot dog to collect images of the surrounding environment and radar point cloud data, while the IMU collects the angular velocity and acceleration of the platform. The collected radar point cloud data are input to the processing unit, which extracts corner points, surface points and intensity characteristic information from the point cloud data; constructs a point cloud map from the extracted corner points, surface points and intensity characteristic information together with the image information acquired by the camera; determines target pose information from the angular velocity and acceleration information acquired by the IMU; and finally obtains the three-dimensional reconstruction result from the point cloud map and the pose information. Positioning and point cloud map construction are performed with the laser radar point cloud and the camera images, and pose estimation is performed with the IMU information, so that the complementarity of the sensors is fully utilized, the three-dimensional reconstruction data are richer and more comprehensive, and the precision and robustness of the three-dimensional reconstruction result are improved.
The three-dimensional reconstruction system of this embodiment exploits the complementary nature of the laser radar, camera and IMU information, adopts multi-sensor fusion, and combines intensity feature matching with constraint optimization, thereby effectively improving the precision and robustness of the three-dimensional reconstruction process.
The three-dimensional reconstruction device provided by the application is described below, and the three-dimensional reconstruction device described below and the three-dimensional reconstruction method described above can be referred to correspondingly.
Fig. 6 is a schematic structural diagram of a three-dimensional reconstruction device provided by the present application. The three-dimensional reconstruction device provided in this embodiment includes:
the extracting module 710 is configured to extract corner points, surface points and intensity characteristic information from the point cloud data collected by the laser radar;
the construction module 720 is configured to construct a point cloud map according to the corner points, the surface points, the intensity characteristic information and the image information collected by the camera;
the determining module 730 is configured to determine target pose information according to the angular velocity and acceleration information acquired by the inertial measurement unit (IMU);
and the reconstruction module 740 is used for obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information.
Optionally, the determining module 730 is specifically configured to: integrate the angular velocity and acceleration information acquired by the IMU to determine first pose information;
match adjacent frames of point cloud data acquired by the laser radar to determine second pose information;
match adjacent frames of image information acquired by the camera to determine third pose information;
and correct the second pose information and the third pose information with the first pose information to obtain the target pose information.
Optionally, the determining module 730 is specifically configured to: correct the second pose information and the third pose information with the first pose information to obtain corrected pose information;
and construct a target factor graph optimization model, and optimize the corrected pose information with the target factor graph optimization model to obtain the target pose information.
Optionally, the determining module 730 is specifically configured to: input the laser-visual odometer information, the IMU pre-integration result, the intensity characteristic information and the loop detection result into the factor graph model to obtain the target factor graph optimization model.
Optionally, the reconstruction module 740 is further configured to: acquire laser point cloud images of consecutive frames when motion paths overlap;
match the laser point cloud images with historical laser point cloud images to obtain a matching result;
and if the matching is successful, correct the error of the target pose information.
Optionally, the reconstruction module 740 is further configured to: determine the number of intensity characteristic points in each frame of laser point cloud image;
mark laser point cloud images in which the number of intensity characteristic points exceeds a preset threshold as target point cloud images;
and match the target point cloud images with the historical laser point cloud images to obtain a matching result.
The device of the embodiment of the present application is configured to perform the method of any of the foregoing method embodiments, and its implementation principle and technical effects are similar, and are not described in detail herein.
FIG. 7 illustrates a schematic diagram of the physical structure of an electronic device, which may include: a processor 810, a communication interface (Communications Interface) 820, a memory 830 and a communication bus 840, wherein the processor 810, the communication interface 820 and the memory 830 communicate with one another through the communication bus 840. The processor 810 may invoke logic instructions in the memory 830 to perform a three-dimensional reconstruction method comprising: extracting corner points, surface points and intensity characteristic information from point cloud data acquired by a laser radar; constructing a point cloud map according to the corner points, the surface points, the intensity characteristic information and image information acquired by a camera; determining target pose information according to angular velocity and acceleration information acquired by an inertial measurement unit (IMU); and obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information.
Further, the logic instructions in the memory 830 may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present application may be embodied essentially, or in the part contributing to the prior art, in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present application also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the three-dimensional reconstruction method provided above, the method comprising: extracting corner points, surface points and intensity characteristic information from point cloud data acquired by a laser radar; constructing a point cloud map according to the corner points, the surface points, the intensity characteristic information and image information acquired by a camera; determining target pose information according to angular velocity and acceleration information acquired by an inertial measurement unit (IMU); and obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information.
In yet another aspect, the present application also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the three-dimensional reconstruction method provided above, the method comprising: extracting corner points, surface points and intensity characteristic information from point cloud data acquired by a laser radar; constructing a point cloud map according to the corner points, the surface points, the intensity characteristic information and image information acquired by a camera; determining target pose information according to angular velocity and acceleration information acquired by an inertial measurement unit (IMU); and obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without inventive effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially, or in the part contributing to the prior art, in the form of a software product, which may be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A three-dimensional reconstruction method, comprising:
extracting corner points, surface points and intensity characteristic information from point cloud data acquired by a laser radar;
constructing a point cloud map according to the corner points, the surface points, the intensity characteristic information and image information acquired by a camera;
determining target pose information according to angular velocity and acceleration information acquired by an inertial measurement unit (IMU);
and obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information.
2. The three-dimensional reconstruction method according to claim 1, wherein the determining target pose information according to the angular velocity and acceleration information acquired by the inertial measurement unit (IMU) includes:
integrating the angular velocity and acceleration information acquired by the IMU to determine first pose information;
matching adjacent frames of point cloud data acquired by the laser radar to determine second pose information;
matching adjacent frames of image information acquired by the camera to determine third pose information;
and correcting the second pose information and the third pose information by using the first pose information to obtain target pose information.
3. The three-dimensional reconstruction method according to claim 2, wherein correcting the second pose information and the third pose information by using the first pose information to obtain target pose information comprises:
correcting the second pose information and the third pose information by utilizing the first pose information to obtain corrected pose information;
and constructing a target factor graph optimization model, and optimizing the corrected pose information by using the target factor graph optimization model to obtain the target pose information.
4. The three-dimensional reconstruction method according to claim 3, wherein the constructing the target factor graph optimization model includes:
and inputting the laser-visual odometer information, the IMU pre-integration result, the intensity characteristic information and the loop detection result into a factor graph model to obtain the target factor graph optimization model.
5. The method according to claim 4, further comprising, after obtaining the three-dimensional reconstruction result:
acquiring laser point cloud images of consecutive frames when motion paths overlap;
matching the laser point cloud images with historical laser point cloud images to obtain a matching result;
and if the matching is successful, correcting the error of the target pose information.
6. The method of three-dimensional reconstruction according to claim 5, wherein the matching the laser point cloud image with the historical laser point cloud image to obtain a matching result includes:
determining the number of intensity characteristic points in each frame of laser point cloud image;
marking laser point cloud images in which the number of intensity characteristic points exceeds a preset threshold as target point cloud images;
and matching the target point cloud image with the historical laser point cloud image to obtain a matching result.
7. A three-dimensional reconstruction system, comprising: a laser radar, a camera, an inertial measurement unit (IMU) and a processing unit;
the laser radar is used for collecting point cloud data;
the camera is used for collecting image information;
the inertial measurement unit IMU is used for acquiring angular velocity and acceleration information;
the processing unit is used for extracting corner points, surface points and intensity characteristic information from the point cloud data acquired by the laser radar; constructing a point cloud map according to the corner points, the surface points, the intensity characteristic information and the image information acquired by the camera; determining target pose information according to the angular velocity and acceleration information acquired by the IMU; and obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information.
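
Illustrative sketch (not claim language): the system of claim 7 wires the three sensors into one processing unit. The class below is parameterized by the per-sensor routines (for instance the hypothetical extract_features and integrate_imu helpers sketched above) so the flow of the processing steps is visible in one place; all names are assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class ReconstructionSystem:
    """Processing unit of the claimed system, parameterized by the
    feature-extraction and IMU-integration routines."""
    extract: Callable    # (cloud_xyz, cloud_intensity) -> (corners, surfaces, intensity feats)
    integrate: Callable  # (gyro, accel, dt) -> pose estimate
    map_frames: List[Tuple] = field(default_factory=list)

    def process_frame(self, cloud_xyz, cloud_intensity, image, gyro, accel, dt):
        # Step 1: corner / surface / intensity features from the LiDAR frame
        corners, surfaces, intensity_pts = self.extract(cloud_xyz, cloud_intensity)
        # Step 2: grow the point cloud map from features plus the camera image
        self.map_frames.append((corners, surfaces, intensity_pts, image))
        # Step 3: IMU dead reckoning as the starting point for the target pose
        pose = self.integrate(gyro, accel, dt)
        # Step 4: the reconstruction result combines map and (optimized) pose
        return self.map_frames, pose
```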
8. A three-dimensional reconstruction apparatus, comprising:
the extraction module is used for extracting corner points, surface points and intensity characteristic information from point cloud data acquired by a laser radar;
the construction module is used for constructing a point cloud map according to the corner points, the surface points, the intensity characteristic information and image information acquired by a camera;
the determining module is used for determining target pose information according to angular velocity and acceleration information acquired by an inertial measurement unit (IMU);
and the reconstruction module is used for obtaining a three-dimensional reconstruction result according to the point cloud map and the target pose information.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the three-dimensional reconstruction method according to any one of claims 1 to 6.
10. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the three-dimensional reconstruction method according to any one of claims 1 to 6.
CN202311199416.6A 2023-09-18 2023-09-18 Three-dimensional reconstruction method and system Pending CN116958452A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311199416.6A CN116958452A (en) 2023-09-18 2023-09-18 Three-dimensional reconstruction method and system

Publications (1)

Publication Number Publication Date
CN116958452A true CN116958452A (en) 2023-10-27

Family

ID=88460475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311199416.6A Pending CN116958452A (en) 2023-09-18 2023-09-18 Three-dimensional reconstruction method and system

Country Status (1)

Country Link
CN (1) CN116958452A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112230242A (en) * 2020-09-30 2021-01-15 深兰人工智能(深圳)有限公司 Pose estimation system and method
CN113436260A (en) * 2021-06-24 2021-09-24 华中科技大学 Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN115468567A (en) * 2022-09-28 2022-12-13 中国人民解放军陆军装甲兵学院 Cross-country environment-oriented laser vision fusion SLAM method
CN116091700A (en) * 2023-02-13 2023-05-09 广东虚拟现实科技有限公司 Three-dimensional reconstruction method, three-dimensional reconstruction device, terminal equipment and computer readable medium
CN116449384A (en) * 2023-03-17 2023-07-18 天津大学 Radar inertial tight coupling positioning mapping method based on solid-state laser radar
CN116758153A (en) * 2023-05-31 2023-09-15 重庆大学 Multi-factor graph-based back-end optimization method for accurate pose acquisition of robot

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351166A (en) * 2023-12-04 2024-01-05 山东新科凯邦通信器材有限公司 Land management drawing and measuring system based on big data
CN117351166B (en) * 2023-12-04 2024-03-26 山东新科凯邦通信器材有限公司 Land management drawing and measuring system based on big data

Similar Documents

Publication Publication Date Title
CN112734852B (en) Robot mapping method and device and computing equipment
JP6812404B2 (en) Methods, devices, computer-readable storage media, and computer programs for fusing point cloud data
CN109084732B (en) Positioning and navigation method, device and processing equipment
Cvišić et al. SOFT‐SLAM: Computationally efficient stereo visual simultaneous localization and mapping for autonomous unmanned aerial vehicles
CN110084832B (en) Method, device, system, equipment and storage medium for correcting camera pose
CN109506642B (en) Robot multi-camera visual inertia real-time positioning method and device
CN107748569B (en) Motion control method and device for unmanned aerial vehicle and unmanned aerial vehicle system
Wefelscheid et al. Three-dimensional building reconstruction using images obtained by unmanned aerial vehicles
KR20200044420A (en) Method and device to estimate position
US11231283B2 (en) Localization with neural network based image registration of sensor data and map data
CN108564657B (en) Cloud-based map construction method, electronic device and readable storage medium
CN110470333B (en) Calibration method and device of sensor parameters, storage medium and electronic device
WO2022193508A1 (en) Method and apparatus for posture optimization, electronic device, computer-readable storage medium, computer program, and program product
CN112197770A (en) Robot positioning method and positioning device thereof
CN113264066A (en) Obstacle trajectory prediction method and device, automatic driving vehicle and road side equipment
CN112596071A (en) Unmanned aerial vehicle autonomous positioning method and device and unmanned aerial vehicle
CN116958452A (en) Three-dimensional reconstruction method and system
CN111721305B (en) Positioning method and apparatus, autonomous vehicle, electronic device, and storage medium
CN112379681A (en) Unmanned aerial vehicle obstacle avoidance flight method and device and unmanned aerial vehicle
CN112700486A (en) Method and device for estimating depth of road lane line in image
CN111623773A (en) Target positioning method and device based on fisheye vision and inertial measurement
CN112560769B (en) Method for detecting obstacle, electronic device, road side device and cloud control platform
CN112380933A (en) Method and device for identifying target by unmanned aerial vehicle and unmanned aerial vehicle
WO2023142353A1 (en) Pose prediction method and apparatus
WO2020019116A1 (en) Multi-source data mapping method, related apparatus, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination