CN117541714A - Outdoor scene reconstruction method and device guided by point cloud - Google Patents

Outdoor scene reconstruction method and device guided by point cloud

Info

Publication number
CN117541714A
Authority
CN
China
Prior art keywords
point
point cloud
quality
acquisition
data
Prior art date
Legal status
Pending
Application number
CN202311247351.8A
Other languages
Chinese (zh)
Inventor
高跃
王梓祺
赵曦滨
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202311247351.8A
Publication of CN117541714A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds


Abstract

The application relates to a point cloud guided outdoor scene reconstruction method and device, wherein the method comprises the following steps: acquiring point cloud data of an initial frame in an outdoor scene; performing point-cloud-based local feature extraction on the point cloud data to obtain point-by-point features of the current frame point cloud; evaluating point cloud quality according to the point-by-point density features of the current frame point cloud, and screening out low-quality points whose quality is below a preset threshold; counting the number and position distribution of the low-quality points to obtain a statistical result, and selecting, according to the statistical result, the azimuth in the current frame with the lowest quality and the greatest concentration of such points as the target acquisition position at the next moment; and controlling the acquisition carrier to move toward the target acquisition position at the next moment, continuously acquiring new point cloud data until acquisition ends, obtaining reconstruction data, and generating an outdoor scene reconstruction result from the reconstruction data. This addresses problems in the related art such as extremely difficult acquisition, heavy consumption of manpower and material resources, strong dependence on computing resources, and difficulty in achieving real-time reconstruction.

Description

Outdoor scene reconstruction method and device guided by point cloud
Technical Field
The application relates to the technical field of computer vision, in particular to a point cloud guided outdoor scene reconstruction method and device.
Background
Point cloud scene reconstruction refers to the process of constructing, from a time-ordered sequence of three-dimensional point cloud frames, a three-dimensional point cloud model of the scene a carrier passes through over a period of time. The technology is widely applied in fields such as computer vision, point cloud processing and high-precision map construction. A high-precision three-dimensional scene point cloud map obtained with this technology can serve as a ground-truth reference in fields such as automatic driving and environment perception. The results also enable higher-level applications, such as virtual roaming, traffic prediction and safety monitoring, providing strong support for smart city construction, intelligent transportation and related fields.
In the related art, the following three modes are mainly adopted:
1) Data acquisition and synthesis based on a total station. A total station, also known as an electronic total station, is a high-precision surveying instrument used to measure terrain, buildings and other structures. The device is highly integrated, combining angle measurement (horizontal and vertical circles) with electronic distance measurement, so it can measure position and angle data simultaneously and locate points accurately in three-dimensional space. Obtaining a scene reconstruction result with a total station is generally called total station surveying; the technique measures the ground to obtain information such as the position and elevation of ground features, and thereby constructs a high-precision scene point cloud.
2) Oblique photography is an emerging aerial three-dimensional data acquisition technology. It can obtain oblique imagery with high resolution, high precision and high coverage over large areas, yielding products such as digital surface models, digital terrain models and three-dimensional triangulated meshes, and can be applied to three-dimensional reconstruction, map making, urban planning, monitoring and other areas. Its basic principle is to tilt a camera at a certain angle and shoot along a flight track. While shooting, the tilt angle and position of the camera are also measured, and these data are combined with the images to construct a high-precision three-dimensional model.
3) SLAM based on a high-line-count line-scan lidar is a technology for autonomous robot navigation and environment map building. The aim is to enable a robot to localize itself and build a map in an unknown environment, without prior knowledge or manual marking. A high-line-count line-scan lidar is a special lidar sensor characterized by scanning the environment with high precision and at high frequency to obtain its three-dimensional structure. The laser point cloud data are acquired by progressive scanning, and each point carries high-precision three-dimensional coordinates. SLAM methods based on high-line-count line-scan lidar typically comprise five steps: data acquisition, feature extraction, pose estimation, map construction and loop closure detection.
However, these related-art approaches involve extremely difficult acquisition, consume substantial manpower and material resources, depend heavily on computing resources, and struggle to achieve real-time reconstruction, so improvement is needed.
Disclosure of Invention
The application provides a point cloud guided outdoor scene reconstruction method and device, which are used for solving the problems in the related art that acquisition is extremely difficult, manpower and material resources are heavily consumed, computing resources are heavily depended upon, and real-time reconstruction is difficult to achieve.
An embodiment of a first aspect of the present application provides a point cloud guided outdoor scene reconstruction method, including the following steps: acquiring point cloud data of an initial frame in an outdoor scene; performing point-cloud-based local feature extraction on the point cloud data to obtain point-by-point features of the current frame point cloud; evaluating point cloud quality according to the point-by-point density features of the current frame point cloud, and screening out low-quality points whose quality is below a preset threshold; counting the number and position distribution of the low-quality points to obtain a statistical result, and selecting, according to the statistical result, the azimuth with the lowest quality and the greatest concentration of such points in the current frame as the target acquisition position at the next moment; and controlling the acquisition carrier to move toward the target acquisition position at the next moment, continuously acquiring new point cloud data until acquisition ends, obtaining reconstruction data, and generating an outdoor scene reconstruction result from the reconstruction data.
Optionally, in an embodiment of the present application, performing point-cloud-based local feature extraction on the point cloud data to obtain the point-by-point features of the current frame point cloud includes: denoising the point cloud data to obtain a denoised point set; classifying the point set to obtain a plane point set of plane points and an edge point set of edge points, so as to obtain a non-ground plane point set; and calculating the point cloud quality of the points in the non-ground plane point set.
Optionally, in an embodiment of the present application, evaluating the point cloud quality according to the point-by-point density features of the current frame point cloud and screening out low-quality points whose quality is below the preset threshold includes: establishing weights for the density of points at different heights; calculating the density features of all relevant points based on the point cloud quality and the weights; and setting the preset threshold for low-quality points and screening out the low-quality point set of each frame.
Optionally, in an embodiment of the present application, counting the number and position distribution of the low-quality points to obtain the statistical result, and selecting, according to the statistical result, the azimuth with the lowest quality and the greatest concentration of such points in the current frame as the target acquisition position at the next moment includes: acquiring the pose of the acquisition carrier for each frame; calculating the most dense point position of the low-quality point set of each frame; and calculating, from the pose and the most dense point position, the optimal forward pose that guides the moving direction of the acquisition carrier at each moment, so as to determine the azimuth.
Optionally, in an embodiment of the present application, controlling the acquisition carrier to move toward the target acquisition position at the next moment, continuously acquiring new point cloud data until acquisition ends, obtaining reconstruction data, and generating an outdoor scene reconstruction result from the reconstruction data includes: calculating, from the pose and the most dense point position, the optimal advancing direction of the acquisition carrier at each moment so as to guide the moving direction and correct the pose of the acquisition carrier at the current moment; and controlling the acquisition carrier to move in the corrected direction so as to acquire the point cloud data that needs to be complemented.
Optionally, in an embodiment of the present application, controlling the acquisition carrier to move toward the target acquisition position at the next moment, continuously acquiring new point cloud data until acquisition ends, obtaining reconstruction data, and generating an outdoor scene reconstruction result from the reconstruction data further includes: judging whether the reconstruction data meets a preset use requirement; and if the preset use requirement is met, ending the acquisition, otherwise continuing to acquire data.
An embodiment of a second aspect of the present application provides a point cloud guided outdoor scene reconstruction device, including: an acquisition module for acquiring point cloud data of an initial frame in an outdoor scene; an extraction module for performing point-cloud-based local feature extraction on the point cloud data to obtain point-by-point features of the current frame point cloud; a screening module for evaluating point cloud quality according to the point-by-point density features of the current frame point cloud and screening out low-quality points whose quality is below a preset threshold; a selecting module for counting the number and position distribution of the low-quality points to obtain a statistical result, and selecting, according to the statistical result, the azimuth with the lowest quality and the greatest concentration of such points in the current frame as the target acquisition position at the next moment; and a reconstruction module for controlling the acquisition carrier to move toward the target acquisition position at the next moment, continuously acquiring new point cloud data until acquisition ends, obtaining reconstruction data, and generating an outdoor scene reconstruction result from the reconstruction data.
Optionally, in one embodiment of the present application, the acquiring module includes: a denoising unit for denoising the point cloud data to obtain a denoised point set; a classification unit for classifying the point set to obtain a plane point set of plane points and an edge point set of edge points, so as to obtain a non-ground plane point set; and a first calculation unit for calculating the point cloud quality of the points in the non-ground plane point set.
Optionally, in one embodiment of the present application, the screening module includes: a setting unit for setting weights of densities of points of different heights; the second calculation unit is used for calculating density characteristics of all relevant points based on the point cloud quality and the weight; and the screening unit is used for setting the preset threshold value of the low-quality points and screening out a low-quality point set of each frame.
Optionally, in one embodiment of the present application, the selecting module includes: the acquisition unit is used for acquiring the pose of the acquisition carrier of each frame; a third calculation unit configured to calculate a most dense point position of each frame of the low quality point set; and the determining unit is used for calculating the optimal forward pose of the acquisition carrier for guiding the moving direction at each moment according to the pose and the most dense point position so as to determine the position.
Optionally, in one embodiment of the present application, the reconstruction module includes: the correction unit is used for calculating the optimal advancing direction of the acquisition carrier at each moment according to the pose and the most dense positions of the points so as to guide the moving direction and correct the pose of the acquisition carrier at the current moment; and the acquisition unit is used for controlling the acquisition carrier to move towards the corrected direction so as to acquire the point cloud data required to be complemented.
Optionally, in one embodiment of the present application, the reconstruction module further includes: the judging unit is used for judging whether the reconstructed data meet the preset use requirement; and the execution unit is used for ending the acquisition when the preset use requirement is met, and continuing to acquire data if the preset use requirement is not met.
An embodiment of a third aspect of the present application provides an electronic device, including: the system comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the outdoor scene reconstruction method guided by the point cloud.
A fourth aspect of the present application provides a computer readable storage medium storing a computer program which when executed by a processor implements the above point cloud guided outdoor scene reconstruction method.
According to the embodiments of the application, local feature extraction can be performed on the collected point cloud data to obtain the point-by-point features of the current frame point cloud, and point cloud density quality evaluation can screen out low-quality points, so that new data are collected toward the low-quality target azimuth, reducing computational complexity and improving the quality of the reconstruction result. This addresses problems in the related art such as extremely difficult acquisition, heavy consumption of manpower and material resources, strong dependence on computing resources, and difficulty in achieving real-time reconstruction.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flowchart of a point cloud guided outdoor scene reconstruction method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an operating principle of a point cloud guided outdoor scene reconstruction method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an outdoor scene reconstruction device guided by a point cloud according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
The following describes the point cloud guided outdoor scene reconstruction method and device of the embodiments of the present application with reference to the accompanying drawings. Aiming at the problems mentioned in the Background section, namely that acquisition is extremely difficult, manpower and material resources are heavily consumed, computing resources are heavily depended upon, and real-time reconstruction is difficult to achieve, the application provides a point cloud guided outdoor scene reconstruction method, and these problems are thereby addressed.
Specifically, fig. 1 is a schematic flow chart of a point cloud guided outdoor scene reconstruction method according to an embodiment of the present application.
As shown in fig. 1, the outdoor scene reconstruction method guided by the point cloud comprises the following steps:
in step S101, point cloud data of an initial frame in an outdoor scene is acquired.
In the actual implementation process, the embodiment of the application can record the lidar point cloud data acquired in an open outdoor scene with a sufficient number of features as (P_1, P_2, ..., P_n), where the subscripts denote different acquisition positions.
In step S102, local feature extraction based on point cloud is performed on the point cloud data, and point-by-point feature of the point cloud of the current frame is obtained.
Optionally, in an embodiment of the present application, performing point-cloud-based local feature extraction on the point cloud data to obtain the point-by-point features of the current frame point cloud includes: denoising the point cloud data to obtain a denoised point set; classifying the point set to obtain a plane point set of plane points and an edge point set of edge points, so as to obtain a non-ground plane point set; and calculating the point cloud quality of the points in the non-ground plane point set.
Specifically, the embodiment of the application can first perform preliminary denoising on all lidar point clouds P_1, P_2, ..., P_n. The denoising may follow the principle of projecting the points onto a plane and then clustering them by depth; this step aims to prevent noise points such as leaves from degrading the registration effect and increasing the computational complexity of subsequent registration. The embodiment of the application can then classify the resulting point set into two types, plane points and edge points, obtaining all plane point sets P_{c1p}, P_{c2p}, ..., P_{cnp} and all edge point sets P_{c1e}, P_{c2e}, ..., P_{cne}, and extract the ground points P_{c1g}, P_{c2g}, ..., P_{cng} according to the characteristics of ground points.
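The plane/edge classification above can be illustrated with a small sketch. The patent does not fix the classification criterion; the curvature-style smoothness measure below (in the spirit of LOAM-like lidar feature extraction) and its threshold are assumptions introduced here for illustration only:

```python
import math

def local_curvature(points, i, k=5):
    """Smoothness of the scan around point i: norm of the summed
    difference vectors to up to k neighbours on each side. Low values
    indicate plane points, high values indicate edge points."""
    n = len(points)
    acc = [0.0, 0.0, 0.0]
    cnt = 0
    for j in range(max(0, i - k), min(n, i + k + 1)):
        if j == i:
            continue
        for d in range(3):
            acc[d] += points[j][d] - points[i][d]
        cnt += 1
    return math.sqrt(sum(a * a for a in acc)) / max(cnt, 1)

def classify_points(points, edge_thresh=0.5):
    """Split one scan into plane points and edge points (step S102)."""
    plane, edge = [], []
    for i, p in enumerate(points):
        (plane if local_curvature(points, i) < edge_thresh else edge).append(p)
    return plane, edge
```

On a straight scan segment the difference vectors cancel, so interior points classify as plane points; at corners and discontinuities they accumulate, flagging edge points.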
Further, the embodiment of the application can perform point-by-point cloud quality calculation on the acquired non-ground plane point sets P_{c1p} - P_{c1g}, P_{c2p} - P_{c2g}, ..., P_{cnp} - P_{cng}. The point cloud quality may be defined as, for a point p, D_k(p) = (1/k) |{ p_i ∈ N_k(p) : ||p - p_i|| < t }|, where N_k(p) denotes the k nearest neighbors of p, t is a set distance threshold, and k is the set number of neighbors.
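The quality formula is only partially legible in the published text, so the sketch below is one plausible reading built from its stated ingredients (the k nearest neighbours of p and a distance threshold t); the exact expression in the patent may differ:

```python
import math

def point_quality(p, cloud, k=8, t=0.5):
    """Plausible form of the per-point quality D_k(p): the fraction of
    the k nearest neighbours of p lying within distance t. k and t are
    the parameters named in the text; how they combine is an assumption."""
    # brute-force nearest neighbours; a k-d tree would be used in practice
    dists = sorted(math.dist(p, q) for q in cloud if q is not p)
    return sum(1 for d in dists[:k] if d < t) / k
```

Under this reading, a point in a dense region scores close to 1, while an isolated point scores close to 0, which matches the later step of screening out points whose quality falls below a threshold.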
In step S103, the quality of the point cloud is evaluated according to the point-by-point density features of the current frame point cloud, and low-quality points whose quality is below a preset threshold are screened out.
Optionally, in an embodiment of the present application, evaluating the quality of the point cloud according to the point-by-point density features of the current frame point cloud and screening out low-quality points whose quality is below a preset threshold includes: establishing weights for the density of points at different heights; calculating the density features of all relevant points based on the point cloud quality and the weights; and setting the preset threshold for low-quality points and screening out the low-quality point set of each frame.
It will be appreciated that, since a change in height causes a change in the spacing of the lidar scan lines, it is necessary to establish, according to the hardware conditions, a weight v_p for the density of points at different heights.
Specifically, the embodiment of the application can calculate the density features of all relevant points from the point cloud quality D_k(p) and the height-dependent density weight v_p as D(p) = v_p · D_k(p). The embodiment of the application can set the low-quality point threshold to u and screen out the low-quality point set of each frame, P_{c1d}, P_{c2d}, ..., P_{cnd}.
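The weighting and screening step can be sketched as follows. The patent states only that the height weight depends on the lidar hardware; the linear `height_weight` form below is a placeholder assumption, and `u` is the low-quality threshold from the text:

```python
def height_weight(z, ring_spacing_ref=1.0):
    """Illustrative height-dependent weight v_p: points farther from the
    sensor plane get a larger weight to compensate for wider scan-line
    spacing. The real weights would come from the lidar's hardware specs."""
    return 1.0 + abs(z) / ring_spacing_ref

def screen_low_quality(cloud, qualities, u=0.5):
    """D(p) = v_p * D_k(p); collect the points whose weighted density
    falls below the threshold u as this frame's low-quality set."""
    low = []
    for p, dk in zip(cloud, qualities):
        if height_weight(p[2]) * dk < u:
            low.append(p)
    return low
```

The returned set corresponds to P_{cid} for frame i, the input to the azimuth-selection step S104.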
In step S104, the number and position distribution of the low-quality points are counted to obtain a statistical result, and the azimuth with the lowest quality and the greatest concentration of such points in the current frame is selected, according to the statistical result, as the target acquisition position at the next moment.
Optionally, in an embodiment of the present application, counting the number and position distribution of the low-quality points to obtain a statistical result, and selecting, according to the statistical result, the azimuth with the lowest quality and the greatest concentration of such points in the current frame as the target acquisition position at the next moment includes: acquiring the pose of the acquisition carrier for each frame; calculating the most dense point position of the low-quality point set of each frame; and calculating, from the pose and the most dense point position, the optimal forward pose that guides the moving direction of the acquisition carrier at each moment, so as to determine the azimuth.
For example, the embodiment of the application may first acquire the pose of the acquisition carrier, such as a robot, for each frame, including its coordinates and a quaternion representing its rotation. Based on a clustering approach, the embodiment of the application can also calculate the most dense point position of each frame of the low-quality point set P_{c1d}, P_{c2d}, ..., P_{cnd}. Further, from the computed pose and the most dense point position, the optimal forward pose that guides the moving direction of the acquisition carrier, such as a robot, at each moment can be calculated to determine the azimuth.
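The patent says only that a clustering idea is used to find the densest position of the low-quality set, without naming the algorithm. As an illustrative stand-in, the sketch below uses simple voxel binning and returns the centroid of the fullest cell; the cell size is an assumed parameter:

```python
from collections import defaultdict

def densest_position(points, cell=1.0):
    """Approximate the most dense position of the low-quality point set:
    bin points into cubic cells of side `cell` and return the centroid
    of the cell containing the most points."""
    bins = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)
        bins[key].append(p)
    cluster = max(bins.values(), key=len)
    n = len(cluster)
    return tuple(sum(q[d] for q in cluster) / n for d in range(3))
```

Any density-seeking clusterer (mean shift, DBSCAN core points) could replace this; the output plays the same role of the per-frame target position for step S105.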
In step S105, the acquisition carrier is controlled to move toward the target acquisition position at the next moment, and new point cloud data is continuously acquired until acquisition ends; reconstruction data is thereby obtained, and an outdoor scene reconstruction result is generated from the reconstruction data.
Optionally, in an embodiment of the present application, controlling the acquisition carrier to move toward the target acquisition position at the next moment, continuously acquiring new point cloud data until acquisition ends, obtaining reconstruction data, and generating an outdoor scene reconstruction result from the reconstruction data includes: calculating, from the pose and the most dense point position, the optimal advancing direction of the acquisition carrier at each moment so as to guide the moving direction and correct the pose of the acquisition carrier at the current moment; and controlling the acquisition carrier to move in the corrected direction so as to acquire the point cloud data that needs to be complemented.
Specifically, the embodiment of the application can calculate the optimal advancing direction of the acquisition carrier, such as a robot, at each moment from the pose and the most dense point position so as to guide the moving direction, thereby correcting the pose at the current moment.
Further, the embodiment of the application can control the acquisition carrier, such as a robot, to move in the corrected direction so as to acquire the point cloud data that needs to be complemented.
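The published pose-correction formula appears only as an image and is not legible here, so the sketch below is an illustrative stand-in rather than the patent's expression: it takes the corrected yaw to be the bearing from the carrier's current position to the densest low-quality position.

```python
import math

def corrected_heading(pose_xy, target_xy):
    """Yaw angle (radians) steering the carrier from its current (x, y)
    position toward the densest low-quality position. This bearing-based
    correction is an assumed simplification of the patent's formula."""
    dx = target_xy[0] - pose_xy[0]
    dy = target_xy[1] - pose_xy[1]
    return math.atan2(dy, dx)
```

In a full system this yaw would be converted back into the quaternion part of the carrier pose before being sent to the motion controller.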
Optionally, in an embodiment of the present application, controlling the acquisition carrier to move toward the target acquisition position at the next moment, continuously acquiring new point cloud data until acquisition ends, obtaining reconstruction data, and generating an outdoor scene reconstruction result from the reconstruction data further includes: judging whether the reconstruction data meets a preset use requirement; and if the preset use requirement is met, ending the acquisition, otherwise continuing to acquire data.
For example, the embodiment of the application can check whether the reconstructed data meets the use requirement through the system, if so, the acquisition is finished, otherwise, the data is continuously acquired.
The working principle of the point cloud guided outdoor scene reconstruction method according to the embodiment of the present application is described in detail with reference to fig. 2.
As shown in fig. 2, the outdoor scene reconstruction method guided by the point cloud in the embodiment of the present application specifically includes the following steps:
step S201: and acquiring point cloud data of an initial frame by using a laser radar in an outdoor scene.
Specifically, the embodiment of the application may record the lidar point cloud data acquired in an open outdoor scene with a sufficient number of features as (P_1, P_2, ..., P_n), where the subscripts denote different acquisition positions.
Step S202: and extracting local characteristics of the acquired point cloud data based on the point cloud to obtain point-by-point characteristics of the point cloud of the current frame.
For example, the embodiment of the application may first perform preliminary denoising on all collected lidar point clouds P_1, P_2, ..., P_n. The denoising may follow the principle of projecting the points onto a plane and then clustering them by depth; this step aims to prevent noise points such as leaves from degrading the registration effect and increasing the computational complexity of subsequent registration. The embodiment of the application can then classify the resulting point set into two types, plane points and edge points, obtaining all plane point sets P_{c1p}, P_{c2p}, ..., P_{cnp} and all edge point sets P_{c1e}, P_{c2e}, ..., P_{cne}, and extract the ground points P_{c1g}, P_{c2g}, ..., P_{cng} according to the characteristics of ground points.
Further, the embodiment of the application can perform point-by-point cloud quality calculation on the acquired non-ground plane point sets P_{c1p} - P_{c1g}, P_{c2p} - P_{c2g}, ..., P_{cnp} - P_{cng}. The point cloud quality may be defined as, for a point p, D_k(p) = (1/k) |{ p_i ∈ N_k(p) : ||p - p_i|| < t }|, where N_k(p) denotes the k nearest neighbors of p, t is a set distance threshold, and k is the set number of neighbors.
Step S203: and carrying out point cloud quality assessment according to the point-by-point density characteristics of the point cloud of the current frame, and screening low-quality points according to a threshold value.
It will be appreciated that, since a change in height causes a change in the spacing of the lidar scan lines, it is necessary to establish, according to the hardware conditions, a weight v_p for the density of points at different heights. The embodiment of the application can then calculate the density features of all relevant points from the point cloud quality D_k(p) and the height-dependent density weight v_p as D(p) = v_p · D_k(p), set the low-quality point threshold to u, and screen out the low-quality point set of each frame, P_{c1d}, P_{c2d}, ..., P_{cnd}.
Step S204: count the number and position distribution of the low-quality points, and select the azimuth with the lowest quality and the greatest concentration of such points in the current frame as the target acquisition position at the next moment.
In the actual implementation process, the embodiment of the application may first acquire the per-frame pose of the acquisition carrier, such as a robot, where the pose comprises the coordinates and a set of quaternions representing the rotation angles. The embodiment of the application may also calculate, based on a clustering idea, the densest point location of each frame of the low-quality point sets Pc1d, Pc2d, ..., Pcnd. Further, from the acquired pose and the calculated densest point location, the optimal forward pose guiding the moving direction of the acquisition carrier, such as a robot, at each moment can be calculated to determine the azimuth.
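One lightweight way to realize "based on a clustering idea" is grid-occupancy clustering — an assumption, since the patent does not name a specific clustering algorithm. It returns the centroid of the most populated cell of the low-quality point set as the next acquisition target:

```python
import numpy as np
from collections import defaultdict

def densest_location(low_points, cell=2.0):
    """Centroid of the most populated (x, y) grid cell of the
    low-quality point set; serves as the next acquisition target.
    low_points: (N, 3) array, cell: grid cell size in meters."""
    buckets = defaultdict(list)
    for p in low_points:
        key = (int(np.floor(p[0] / cell)), int(np.floor(p[1] / cell)))
        buckets[key].append(p)
    best = max(buckets.values(), key=len)   # densest cell
    return np.mean(best, axis=0)
```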
Step S205: the acquisition carrier is controlled to move towards the selected low quality target azimuth and new data is acquired.
Specifically, the embodiment of the application may calculate the optimal advancing direction of the acquisition carrier, such as a robot, at each moment from the pose and the densest point location so as to guide the moving direction, and thereby correct the pose at the current moment.
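One plausible realization of this correction (the concrete formula appears only as an image in the source and is not reproduced here) is to point the carrier's yaw at the densest low-quality location and encode it as a z-axis rotation quaternion:

```python
import math

def corrected_heading(position, target):
    """Yaw from the carrier's (x, y) position toward the target
    location, plus the corresponding z-axis rotation quaternion
    in (w, x, y, z) order."""
    yaw = math.atan2(target[1] - position[1], target[0] - position[0])
    half = yaw / 2.0
    return yaw, (math.cos(half), 0.0, 0.0, math.sin(half))
```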
Further, the embodiment of the application can control the acquisition carrier, such as a robot, to move in the corrected direction so as to acquire the point cloud data that needs to be complemented.
Step S206: determining whether to end the acquisition as required, and repeating the acquisition and detection process if the acquisition continues.
In the actual execution process, the embodiment of the application can check through the system whether the reconstructed data meets the use requirement; if not, steps S202 to S205 are repeated, and if so, the acquisition is ended.
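Putting steps S201 to S206 together, the overall control flow might look like the skeleton below; `FakeCarrier` and the precomputed target list are hypothetical stand-ins for the real robot interface and the per-frame quality analysis of S202–S204:

```python
class FakeCarrier:
    """Hypothetical stand-in for the robot acquisition carrier."""
    def __init__(self):
        self.position = [0.0, 0.0]
        self.visited = []

    def capture(self):
        # pretend a "frame" is just the capture position
        return list(self.position)

    def move_toward(self, target):
        self.position = list(target)
        self.visited.append(tuple(target))

def acquisition_loop(carrier, targets):
    """Skeleton of S201-S206: capture an initial frame, then repeatedly
    move to the next low-quality target and capture again. `targets`
    stands in for the per-frame analysis of S202-S204."""
    frames = [carrier.capture()]        # S201: initial frame
    for target in targets:              # S202-S204 would select these
        carrier.move_toward(target)     # S205: move toward the target
        frames.append(carrier.capture())
    return frames                       # S206: acquisition finished
```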
According to the point cloud guided outdoor scene reconstruction method provided by the embodiment of the application, local feature extraction can be performed on the collected point cloud data to obtain the point-by-point features of the current frame point cloud, and point cloud density quality evaluation and screening of low-quality points can be performed, so that new data are collected toward the low-quality target azimuth, reducing the computational complexity and improving the quality of the reconstruction result. This solves the problems in the related art that acquisition is extremely difficult, consumes manpower and material resources, depends heavily on computing resources, and can hardly achieve real-time reconstruction.
Next, an outdoor scene reconstruction device guided by point clouds according to an embodiment of the present application will be described with reference to the accompanying drawings.
Fig. 3 is a block schematic diagram of a point cloud guided outdoor scene reconstruction device according to an embodiment of the present application.
As shown in fig. 3, the point cloud guided outdoor scene reconstruction device 10 includes: an acquisition module 100, an obtaining module 200, a screening module 300, a selection module 400 and a reconstruction module 500.
The acquisition module 100 is configured to acquire point cloud data of an initial frame in an outdoor scene.
The obtaining module 200 is configured to perform point-cloud-based local feature extraction on the point cloud data and obtain the point-by-point features of the current frame point cloud.
And the screening module 300 is used for evaluating the point cloud quality according to the point-by-point density characteristics of the current frame point cloud and screening out points whose quality is lower than a preset threshold.
The selecting module 400 is configured to count the number and position distribution of the quality points to obtain a statistical result, and select, according to the statistical result, the azimuth with the lowest quality and the most concentrated number in the current frame as the position of the target acquisition at the next moment.
The reconstruction module 500 is configured to control the acquisition carrier to move to the target acquisition position at the next moment and continuously acquire new point cloud data until the acquisition is finished, obtain reconstruction data, and generate the outdoor scene reconstruction result from the reconstruction data.
Optionally, in one embodiment of the present application, the obtaining module 200 includes: the device comprises a denoising unit, a classifying unit and a first calculating unit.
The denoising unit is used for denoising the point cloud data to obtain a denoised point set.
The classification unit is used for classifying point clouds of the point sets to obtain plane point sets belonging to the plane points and edge point sets belonging to the edge points so as to obtain non-ground plane point sets.
And the first calculation unit is used for calculating the point cloud quality of the points in the non-ground plane point set.
Optionally, in one embodiment of the present application, the screening module 300 includes: a setup unit, a second calculation unit and a screening unit.
Wherein the setting unit is used for setting the weight of the density of the points with different heights.
And the second calculation unit is used for calculating the density characteristics of all relevant points based on the point cloud quality and the weight.
And the screening unit is used for setting a preset threshold value of the low-quality points and screening out a low-quality point set of each frame.
Optionally, in one embodiment of the present application, the selecting module 400 includes: an acquisition unit, a third calculation unit, and a determination unit.
The acquisition unit is used for acquiring the pose of each frame of acquisition carrier.
And a third calculation unit for calculating the most dense point positions of each frame of the low quality point set.
And the determining unit is used for calculating, according to the pose and the densest point location, the optimal forward pose of the acquisition carrier for guiding the moving direction at each moment so as to determine the azimuth.
Optionally, in one embodiment of the present application, the reconstruction module 500 includes: a correction unit and an acquisition unit.
The correction unit is used for calculating the optimal advancing direction of the acquisition carrier at each moment according to the pose and the most dense positions of the points so as to guide the moving direction and correct the pose of the acquisition carrier at the current moment.
The acquisition unit is used for controlling the acquisition carrier to move towards the corrected direction so as to acquire the point cloud data required to be complemented.
Optionally, in an embodiment of the present application, the reconstruction module 500 further includes: a judging unit and an executing unit.
The judging unit is used for judging whether the reconstructed data meets the preset use requirement.
And the execution unit is used for ending the acquisition when the preset use requirement is met, and continuing to acquire data if the preset use requirement is not met.
It should be noted that the explanation of the embodiment of the outdoor scene reconstruction method guided by the point cloud is also applicable to the outdoor scene reconstruction device guided by the point cloud in this embodiment, and will not be repeated here.
According to the outdoor scene reconstruction device guided by the point cloud, local feature extraction can be performed on the collected point cloud data, point-by-point features of the point cloud of the current frame are obtained, point cloud density quality evaluation is performed to screen low-quality points, so that new data are collected to a low-quality target azimuth, calculation complexity is reduced, and quality of reconstruction results is improved. Therefore, the problems that in the related technology, the acquisition difficulty is extremely high, the manpower and material resources are consumed, the calculation resources are extremely depended, the real-time reconstruction is difficult to achieve and the like are solved.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include:
memory 401, processor 402, and a computer program stored on memory 401 and executable on processor 402.
The processor 402 implements the point cloud guided outdoor scene reconstruction method provided in the above-described embodiment when executing a program.
Further, the electronic device further includes:
a communication interface 403 for communication between the memory 401 and the processor 402.
A memory 401 for storing a computer program executable on the processor 402.
Memory 401 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk memory.
If the memory 401, the processor 402, and the communication interface 403 are implemented independently, the communication interface 403, the memory 401, and the processor 402 may be connected to each other by a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 4, but this does not mean there is only one bus or only one type of bus.
Alternatively, in a specific implementation, if the memory 401, the processor 402, and the communication interface 403 are integrated on a chip, the memory 401, the processor 402, and the communication interface 403 may complete communication with each other through internal interfaces.
The processor 402 may be a central processing unit (Central Processing Unit, abbreviated as CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the above point cloud guided outdoor scene reconstruction method.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "N" is at least two, such as two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order from that shown or discussed, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or N wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, for example via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, may be implemented in a combination of any one or more of the following techniques, which are well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like. Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (10)

1. The outdoor scene reconstruction method guided by the point cloud is characterized by comprising the following steps of:
acquiring point cloud data of an initial frame in an outdoor scene;
extracting local characteristics of the point cloud data based on the point cloud to obtain point-by-point characteristics of the point cloud of the current frame;
evaluating the quality of the point cloud according to the point-by-point density characteristics of the point cloud of the current frame, and screening quality points with the quality lower than a preset threshold;
counting the number and position distribution of the quality points to obtain a counting result, and selecting the azimuth with the lowest quality and the most concentrated number under the current frame according to the counting result as the position of the target acquisition at the next moment; and
and controlling the acquisition carrier to acquire the position of the target at the next moment, continuously acquiring new point cloud data until the acquisition is finished, obtaining reconstruction data, and generating an outdoor scene reconstruction result according to the reconstruction data.
2. The method according to claim 1, wherein the performing point-cloud-based local feature extraction on the point cloud data to obtain the point-by-point features of the current frame point cloud comprises:
denoising the point cloud data to obtain a denoised point set;
performing point cloud classification on the point set to obtain a plane point set belonging to the plane point and an edge point set belonging to the edge point so as to obtain a non-ground plane point set;
and calculating the point cloud quality of the points in the non-ground plane point set.
3. The method according to claim 1, wherein the evaluating the quality of the point cloud according to the point-by-point density characteristics of the current frame point cloud and screening points whose quality is lower than a preset threshold comprises:
setting up weights of densities of points with different heights;
calculating density characteristics of all relevant points based on the point cloud quality and the weight;
and setting the preset threshold value of the low-quality points, and screening out a low-quality point set of each frame.
4. A method according to claim 3, wherein the counting the number and position distribution of the quality points to obtain a counting result, and selecting the azimuth with the lowest quality and the most concentrated number under the current frame as the position of the target acquisition at the next moment according to the counting result comprises:
acquiring the pose of the acquisition carrier of each frame;
calculating the most dense point position of each frame of the low-quality point set;
and calculating the optimal forward pose of the acquisition carrier for guiding the moving direction at each moment according to the pose and the most dense point position so as to determine the azimuth.
5. The method of claim 4, wherein controlling the acquisition carrier to acquire the position of the target at the next moment and continuously acquiring new point cloud data until the acquisition is finished, obtaining reconstruction data, and generating an outdoor scene reconstruction result according to the reconstruction data, comprises:
calculating the optimal advancing direction of the acquisition carrier at each moment according to the pose and the most dense positions of the points so as to guide the moving direction and correct the pose of the acquisition carrier at the current moment;
and controlling the acquisition carrier to move towards the corrected direction so as to acquire the point cloud data required to be complemented.
6. The method of claim 1, wherein the controlling the acquisition carrier to acquire the position of the target at the next moment and continuously acquiring new point cloud data until the acquisition is finished, obtaining reconstruction data, and generating an outdoor scene reconstruction result according to the reconstruction data, further comprises:
judging whether the reconstructed data meets preset use requirements or not;
and if the preset use requirement is met, ending the acquisition, otherwise, continuing to acquire the data.
7. A point cloud guided outdoor scene reconstruction device, comprising:
the acquisition module is used for acquiring point cloud data of an initial frame in an outdoor scene;
the acquisition module is used for performing point-cloud-based local feature extraction on the point cloud data and acquiring point-by-point features of the current frame point cloud;
the screening module is used for evaluating the quality of the point cloud according to the point-by-point density characteristics of the point cloud of the current frame and screening quality points with the quality lower than a preset threshold value;
the selecting module is used for counting the number and position distribution of the quality points to obtain a counting result, and selecting the azimuth with the lowest quality and the most concentrated number under the current frame according to the counting result as the position of the target acquisition at the next moment; and
and the reconstruction module is used for controlling the acquisition carrier to acquire the position of the target at the next moment, continuously acquiring new point cloud data until the acquisition is finished, obtaining reconstruction data, and generating an outdoor scene reconstruction result according to the reconstruction data.
8. The apparatus of claim 7, wherein the acquisition module comprises:
the denoising unit is used for denoising the point cloud data to obtain a denoised point set;
the classification unit is used for classifying the point sets in a point cloud mode to obtain a plane point set belonging to the plane points and an edge point set belonging to the edge points so as to obtain a non-ground plane point set;
and the first calculation unit is used for calculating the point cloud quality of the points in the non-ground plane point set.
9. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the point cloud guided outdoor scene reconstruction method according to any one of claims 1-6.
10. A computer readable storage medium having stored thereon a computer program, wherein the program is executed by a processor for implementing the point cloud guided outdoor scene reconstruction method according to any of claims 1-6.
CN202311247351.8A 2023-09-25 2023-09-25 Outdoor scene reconstruction method and device guided by point cloud Pending CN117541714A (en)


Publications (1)

Publication Number Publication Date
CN117541714A true CN117541714A (en) 2024-02-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination