CN111507340A - Target point cloud data extraction method based on three-dimensional point cloud data - Google Patents


Info

Publication number
CN111507340A
Authority
CN
China
Prior art keywords
data
point cloud
target
cloud data
pixel
Prior art date
Legal status
Granted
Application number
CN202010301616.8A
Other languages
Chinese (zh)
Other versions
CN111507340B (en)
Inventor
Zhu Xiang (朱翔)
Current Assignee
Beijing Shenzhen Survey Technology Co ltd
Original Assignee
Beijing Shenzhen Survey Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Shenzhen Survey Technology Co ltd
Priority to CN202010301616.8A
Publication of CN111507340A
Application granted
Publication of CN111507340B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention provides a target point cloud data extraction method based on three-dimensional point cloud data, which comprises the following steps: acquiring original three-dimensional point cloud data, and performing denoising processing on the original three-dimensional point cloud data to obtain denoised three-dimensional point cloud data; extracting intensity image data from the de-noised three-dimensional point cloud data; calling a preset target extraction algorithm to perform target extraction processing on the intensity image data to obtain target intensity image data; extracting target three-dimensional point cloud data from the original three-dimensional point cloud data according to the pixel coordinate value of each pixel in the target intensity image data; and calling a preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data to obtain target point cloud data.

Description

Target point cloud data extraction method based on three-dimensional point cloud data
Technical Field
The invention relates to the field of data processing, in particular to a target point cloud data extraction method based on three-dimensional point cloud data.
Background
Time of Flight (TOF) is currently one of the most actively studied areas of three-dimensional imaging. Compared with other three-dimensional imaging technologies, TOF three-dimensional imaging can obtain a transient image, which means that little subsequent processing is required to calculate depth of field; a higher frame rate can therefore be achieved, and cost is saved because the system overhead of subsequent processing is reduced. Current TOF three-dimensional imaging research mostly focuses on fields such as transient imaging, super-resolution, non-line-of-sight detection imaging, and time-of-flight mass spectrometry. In addition, under general conditions the distance measurement range can be adjusted by changing the pulse frequency of the laser, the field-of-view size, and the light source intensity, so the detection range of TOF three-dimensional imaging is highly flexible. TOF three-dimensional imaging is thus suitable both for short-range operations such as face recognition, gesture recognition and tracking, motion-sensing recognition, and game interaction, and for detecting targets at long distances, giving it very broad potential application scenarios. However, these application scenarios require target objectification of the three-dimensional point cloud, i.e., extracting the target point cloud of interest from the background.
At present, target extraction is relatively mature for two-dimensional images: both traditional methods based on graph theory and the like, and machine-learning target extraction methods arising from the rise of artificial intelligence, offer mature extraction schemes. With the improvement of computing power and the maturity of two-dimensional target extraction technology, processing of three-dimensional point clouds is gradually becoming feasible. However, most current research on three-dimensional point cloud target extraction focuses on large-scale targets such as cities, roads, and airports acquired by laser radar systems; comparatively little work addresses target extraction from three-dimensional point clouds obtained by TOF.
Disclosure of Invention
In view of the defects of the prior art, the embodiments of the present invention aim to provide a target point cloud data extraction method based on three-dimensional point cloud data, which performs target extraction on three-dimensional point cloud data by exploiting the characteristic that TOF three-dimensional imaging can obtain point cloud position and intensity information simultaneously.
In a first aspect, an embodiment of the present invention provides a method for extracting target point cloud data based on three-dimensional point cloud data, including:
acquiring original three-dimensional point cloud data, and performing denoising processing on the original three-dimensional point cloud data to obtain denoised three-dimensional point cloud data;
extracting intensity image data from the de-noised three-dimensional point cloud data;
calling a preset target extraction algorithm to perform target extraction processing on the intensity image data to obtain target intensity image data;
extracting target three-dimensional point cloud data from the original three-dimensional point cloud data according to the pixel coordinate value of each pixel in the target intensity image data;
and calling a preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data to obtain the target point cloud data.
Preferably, the denoising processing on the original three-dimensional point cloud data to obtain the denoised three-dimensional point cloud data specifically comprises:
generating a first buffer area in an internal storage unit;
calling and caching the original three-dimensional point cloud data line by line multiple times based on an N×N Gaussian template, caching (N-1) lines into the first cache region each time; wherein N is 3 or 5;
and performing calculation processing, with the N×N Gaussian template, on the intensity image data of the N-1 lines of three-dimensional point cloud data cached in the first cache region together with the intensity image data of the Nth line of the original three-dimensional point cloud data, and obtaining the de-noised three-dimensional point cloud data according to the results of the multiple calculations.
Preferably, the invoking a preset target extraction algorithm to perform target extraction processing on the intensity image data to obtain target intensity image data specifically includes:
performing super-pixel clustering processing on the intensity image data to obtain first intensity image data;
calling a preset binarization algorithm to carry out binarization processing on the first intensity image data to obtain second intensity image data;
and calling a preset morphological processing algorithm to perform target identification processing on the second intensity image data to obtain the target intensity image data.
Further preferably, the intensity image data has N pixel points, and the performing the super-pixel clustering process on the intensity image data to obtain the first intensity image data specifically includes:
s1, performing first image model conversion processing on the intensity image data by adopting a preset image model conversion algorithm to obtain third intensity image data;
s2, determining K clustering centers for the third intensity image data according to the neighborhood size of S x S, and initializing each pixel point in the third intensity image data; wherein S represents the step length of the adjacent clustering centers;
s3, reselecting the clustering center in the 3x3 neighborhood of the clustering center to obtain a first clustering center;
s4, performing distance calculation processing according to the data value of a first pixel point and the data value of the first clustering center to obtain distance data of the first pixel point and the first clustering center; wherein the distance data comprises a color distance value and a spatial distance value;
s5, calculating according to the color distance value, the space distance value and the step length of the adjacent clustering center to generate first distance data;
s6, sorting the plurality of first distance data in ascending order, and determining the smallest first distance data as the first clustering center distance data of the first pixel point;
s7, determining the clustering center of the first pixel point according to the first clustering center distance data, and distributing a label value to the first pixel point;
s8, when the first clustering center distance data is smaller than the initial distance data, initializing the first clustering center according to the first clustering center distance data, and continuing to execute S4;
s9, when the first clustering center distance data of each first pixel point is larger than or equal to the corresponding initial distance data, generating fourth intensity image data according to the data values of the N first pixel points, wherein the first pixel points comprise the label values;
s10, performing a second image model conversion process on the fourth intensity image data to generate the first intensity image data.
Further preferably, the invoking a preset binarization algorithm to perform binarization processing on the first intensity image data to obtain second intensity image data specifically includes:
and calling a maximum between-class variance algorithm to carry out binarization processing on the first intensity image data to generate second intensity image data.
Preferably, the calling a preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data, and obtaining the target point cloud data specifically includes:
reading second pixel point data of a second pixel point in the target three-dimensional point cloud data;
obtaining third pixel point data adjacent to the second pixel point according to the second pixel point data;
performing Euclidean distance calculation processing according to the second pixel point data and the third pixel point data to obtain second distance data;
calculating according to the Gaussian distribution probability density and the plurality of second distance data to obtain a Gaussian mean value and a standard deviation;
classifying all second pixel points in the target three-dimensional point cloud data according to the second pixel point data, the second distance data, the Gaussian mean value and the standard deviation, and dividing the second pixel points into target pixel points and outlier pixel points;
and generating the target point cloud data according to the target pixel point data corresponding to the target pixel point.
Preferably, the classifying all the second pixel points in the target three-dimensional point cloud data according to the second pixel point data, the second distance data, the Gaussian mean value and the standard deviation, and the dividing the second pixel points into target pixel points and outlier pixel points specifically includes:
determining the number of third pixel points of which the second distance data is smaller than or equal to the Gaussian average value;
when the number of the third pixel points is smaller than or equal to a first preset threshold value, determining the second pixel points as outlier pixel points, and deleting the second pixel points determined as the outlier pixel points from the target three-dimensional point cloud data;
obtaining a second distance data mean value according to the mean value of the first number of second distance data of the second pixel points;
obtaining a classification judgment value according to the sum of twice of the standard deviation and the Gaussian mean value;
when the second distance data mean value is larger than or equal to the classification judgment value, determining the second pixel points as outlier pixel points, and deleting the outlier pixel points from the target three-dimensional point cloud data;
and determining the second pixel points reserved in the target point cloud data as target pixel points.
Preferably, the method further comprises:
and the flight time camera receives an image acquisition instruction, shoots a target scene according to the image acquisition instruction and generates the original three-dimensional point cloud data.
In a second aspect, an embodiment of the present invention provides an apparatus, which includes a memory for storing a program and a processor for executing the method for extracting target point cloud data based on three-dimensional point cloud data according to the first aspect.
According to the target point cloud data extraction method based on three-dimensional point cloud data provided by the embodiment of the invention, with full consideration of the characteristics of the TOF three-dimensional imaging technology and of influence factors such as the equipment system, ambient light and motion blur, the three-dimensional point cloud data obtained by TOF imaging is denoised, target extraction is performed on the intensity image data of the three-dimensional point cloud data, the target three-dimensional point cloud data is extracted according to the correspondence between the three-dimensional point cloud data and the intensity image data, and the target three-dimensional point cloud data is filtered to obtain the target point cloud data. Because target extraction is performed on the intensity data of the three-dimensional point cloud data, the complexity of target extraction, the amount of extraction computation and the system processing overhead are greatly reduced; and because different denoising algorithms are applied in turn according to the image characteristics during extraction, the obtained target point cloud data has high accuracy, high definition and small error.
Drawings
Fig. 1 is a flowchart of a target point cloud data extraction method based on three-dimensional point cloud data according to an embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be further noted that, for the convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The embodiment of the invention provides a target point cloud data extraction method based on three-dimensional point cloud data, which is used for extracting the target point cloud data from the three-dimensional point cloud data.
The following describes the target point cloud data extraction method based on three-dimensional point cloud data according to the first embodiment of the present invention. Fig. 1 is a flowchart of the method. As shown in the figure, the method comprises the following steps:
Step 101: obtaining original three-dimensional point cloud data, and performing denoising processing on the original three-dimensional point cloud data to obtain de-noised three-dimensional point cloud data.
Specifically, the raw data is three-dimensional point cloud data generated by a TOF camera acquiring image data of a target scene. The time-of-flight camera receives an image acquisition instruction, shoots the target scene according to the instruction, and generates the original three-dimensional point cloud data.
In a specific example of the embodiment of the invention, the TOF camera comprises a sensor with a resolution of 320 × 240 and a matching time-of-flight controller. The TOF camera uses an infrared light source with a wavelength of 850 nm as the emitting light source; the acquired depth data comprise phase information, intensity amplitude information, ambient light and a flag bit, and the image information acquired by the TOF camera is processed by an integrated processing module to generate the original three-dimensional point cloud data.
After the original three-dimensional point cloud data is obtained, the process of denoising the original three-dimensional point cloud data by adopting the Gaussian template is as follows:
first, a first buffer area is generated in an internal storage unit.
Specifically, a first cache region is generated in an internal storage unit; the first cache region is an area of the storage unit with a certain storage capacity, commonly known as a Cache. The first cache region is used for caching the data being processed.
Secondly, the original three-dimensional point cloud data is called and cached line by line multiple times based on an N×N Gaussian template, with (N-1) lines cached into the first cache region each time.
Specifically, the processor reads the original three-dimensional point cloud data line by line and caches it multiple times based on the N×N Gaussian template, caching (N-1) lines into the first cache region each time, where N is 3 or 5.
and finally, calculating the intensity image data of the three-dimensional point cloud data cached in the N-1 line of the first cache region and the intensity image data of the three-dimensional point cloud data cached in the Nth line of the original three-dimensional point cloud data by adopting an NxN Gauss template, and obtaining the de-noised three-dimensional point cloud data according to the results of multiple calculations.
Specifically, the intensity image data of the cached data in the first cache region at a time and the intensity image data of the nth line read from the memory are calculated by adopting an NxN Gauss template, and the denoising three-dimensional point cloud data is obtained according to the results of multiple calculations.
A Gaussian template with N = 3 or N = 5 is preferably used in this embodiment. For example, in one case N = 3: the processor caches the original three-dimensional point cloud data line by line multiple times based on a 3×3 Gaussian template, caching 2 lines into the first cache region each time; it then reads the 3rd line of original three-dimensional point cloud data from the memory and performs calculation processing, with the 3×3 Gaussian template, on the intensity image data of that third line together with the intensity image data of the 2 lines of three-dimensional point cloud data in the first cache region, obtaining 3 lines of processed three-dimensional point cloud data. The same calculation is applied in turn to all the original three-dimensional point cloud data in the memory to obtain multiple blocks of processed three-dimensional point cloud data, and the de-noised three-dimensional point cloud data is obtained from the results of the multiple calculations.
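The row-buffered Gaussian filtering described above can be sketched as follows. This is a minimal illustration of the N = 3 case, not the patented implementation itself; `read_row` is a hypothetical accessor that returns one line of intensity data from memory as a float array.

```python
import numpy as np

# Normalized 3x3 Gaussian template (the N = 3 case described above)
GAUSS_3X3 = np.array([[1, 2, 1],
                      [2, 4, 2],
                      [1, 2, 1]], dtype=np.float32) / 16.0

def denoise_rows(read_row, height, width):
    """Row-buffered 3x3 Gaussian smoothing: only N - 1 = 2 rows are cached,
    the Nth row is read from memory, and one smoothed row is emitted per step."""
    cache = [read_row(0), read_row(1)]      # first cache region holds N - 1 rows
    out = [cache[0]]                        # top border row passed through
    for i in range(2, height):
        nth = read_row(i)                   # Nth line fetched from memory
        window = np.vstack([cache[0], cache[1], nth])
        row = cache[1].copy()               # left/right borders keep their values
        for x in range(1, width - 1):
            row[x] = np.sum(window[:, x - 1:x + 2] * GAUSS_3X3)
        out.append(row)
        cache = [cache[1], nth]             # slide the two-row cache window
    out.append(cache[1])                    # bottom border row passed through
    return np.vstack(out)
```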
Step 102: extracting intensity image data from the de-noised three-dimensional point cloud data.
Specifically, the data value corresponding to each pixel of the three-dimensional point cloud data includes an intensity value and a depth value. To simplify the process and the amount of computation of target data extraction, the intensity image data is extracted from the de-noised three-dimensional point cloud data, and the target data is determined on the intensity image data.
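As a concrete illustration of this separation: if the organized point cloud from the 320 × 240 sensor is assumed to be stored as an H × W × 4 array whose channels are (x, y, z, intensity), a layout the patent does not prescribe, the intensity image is simply one channel of that array.

```python
import numpy as np

# Assumed layout: organized point cloud as H x W x 4 with channels (x, y, z, intensity)
cloud = np.random.rand(240, 320, 4).astype(np.float32)  # placeholder data
intensity_image = cloud[..., 3]  # step 102: keep only the intensity channel
```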
Step 103: performing super-pixel clustering processing on the intensity image data to obtain first intensity image data.
specifically, the steps of performing superpixel clustering processing on the intensity image data and the data in the embodiment of the present invention are as follows:
and S1, performing first image model conversion processing on the first intensity image data by adopting a preset image model conversion algorithm to obtain third intensity image data.
Specifically, the intensity image data obtained in the embodiment of the present invention is RGB color space data; to suit the superpixel clustering processing adopted in the present invention, it needs to be converted into LAB color space data, i.e., the third intensity image data.
The conversion into the third intensity image data comprises: first normalizing the R, G and B values of each pixel of the intensity image data to generate normalized intensity image data, and then correcting each pixel in the normalized intensity image data to generate the third intensity image data.
S2, determining K clustering centers for the third intensity image data according to the neighborhood size of S×S, and initializing each pixel point in the third intensity image data; wherein S represents the step length of adjacent clustering centers.
Specifically, the intensity image data has N pixel points, and the conversion in step S1 does not change the number of pixel points, so the third intensity image data also contains N pixel points. The neighborhood size of a superpixel is determined as S×S, the third intensity image data is pre-divided into K superpixels of the same size, and K clustering centers are obtained. Each pixel point in the third intensity image data is then initialized: the initial distance data from each pixel point to the clustering center to which it belongs is initialized, and since the clustering center of each pixel is not yet determined, the initial distance data of every pixel point is set to infinity. The number K of clustering centers is directly related to the superpixel neighborhood size S×S through the relationship N = S × S × K, where N, the number of pixel points of the third intensity image data, and S, the step length of adjacent clustering centers, are both positive integers. For example, for a 320 × 240 image (N = 76800), a step length of S = 16 gives K = 300 clustering centers.
And S3, reselecting the clustering center in the 3x3 neighborhood of the clustering center to obtain a first clustering center.
Specifically, the gradient values of all pixel points in the 3×3 neighborhood of each clustering center are calculated, and the clustering center is moved to the position with the smallest gradient value in that neighborhood. The purpose is to prevent a clustering center from falling on a contour boundary with a large gradient, which would affect the subsequent clustering result.
And S4, performing distance calculation processing according to the data value of the first pixel point and the data value of the first clustering center to obtain distance data of the first pixel point and the first clustering center.
Specifically, for each first clustering center and each pixel i in its neighborhood, the color distance value d_c and the spatial distance value d_s between pixel i and the first clustering center j are calculated from the l value, a value, b value and the pixel coordinate values x, y of pixel i and of clustering center j, according to the formulas:

d_c = sqrt( (l_j - l_i)² + (a_j - a_i)² + (b_j - b_i)² )

d_s = sqrt( (x_j - x_i)² + (y_j - y_i)² )

where d_c represents the color distance and d_s represents the spatial distance.
And S5, performing calculation processing according to the color distance value, the space distance value and the step length of the adjacent clustering center to generate first distance data.
Specifically, the first distance data D for each first clustering center and each pixel i in its neighborhood is calculated according to the formula:

D = sqrt( d_c² + (d_s / S)² · m² )

where d_c represents the color distance, d_s the spatial distance, and S the step length of adjacent clustering centers; m is a constant that weights the importance of spatial proximity: the larger m is, the more spatial proximity outweighs color similarity. In a specific example of the embodiment of the present invention, m is 10.
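The combined distance can be transcribed directly from the formula above; this tiny helper is only an illustration, and its parameter names are chosen here for readability rather than taken from the patent.

```python
import numpy as np

def slic_distance(dc, ds, S, m=10.0):
    """First distance data D: dc = color distance, ds = spatial distance,
    S = step length of adjacent clustering centers, m = spatial weight."""
    return np.sqrt(dc ** 2 + (ds / S) ** 2 * m ** 2)
```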
S6, sorting the plurality of first distance data in ascending order, and determining the smallest first distance data as the first clustering center distance data of the first pixel point;
and S7, determining the clustering center of the first pixel point according to the first clustering center distance data, and distributing a label value to the first pixel point.
Specifically, the first clustering center corresponding to the first clustering center distance data is taken as the clustering center of the first pixel point; the clustering center to which each first pixel point belongs is thus determined. A label value is then assigned to each first pixel point, recording which clustering center each pixel belongs to.
S8, when the first clustering center distance data is smaller than the initial distance data, the initial distance data of the pixel point is updated according to the first clustering center distance data, and execution continues from S4.
Specifically, when the superpixel clustering process is executed, initial distance data from each pixel point to the clustering center to which it belongs is initialized for each pixel point. The first clustering center distance data is compared with the initial distance data, and when the first clustering center distance data is smaller than the initial distance data, the initial distance data corresponding to the first pixel point is updated to the first clustering center distance data. Thereafter, S4 and the following steps are executed again.
And S9, when the first clustering center distance data of each first pixel point is larger than or equal to the corresponding initial distance data, generating fourth intensity image data according to the data values of the N first pixel points, wherein the first pixel points comprise label values.
Specifically, steps S3 to S8 are executed iteratively until the first clustering center distance data corresponding to each pixel point is greater than or equal to its updated initial distance data, which indicates that the superpixel clustering has converged. That is, the assignment of label values to the N first pixel points is complete, meaning the clustering center of every pixel point has been determined, and fourth intensity image data is generated from the data values of the N first pixel points.
S10, performing second image model conversion processing on the fourth intensity image data to generate the first intensity image data.
In the embodiment of the invention, an inverse conversion algorithm corresponding to the preset image model conversion algorithm is used to perform the second image model conversion processing on the fourth intensity image data, converting it from LAB color model data back into RGB color model data.
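Steps S1 to S10 follow the standard SLIC superpixel algorithm (LAB conversion, seeding on an S × S grid, gradient-based seed adjustment, iterative assignment by the combined distance, and conversion back). For comparison only, here is a sketch using scikit-image's off-the-shelf SLIC implementation, which likewise converts RGB input to LAB internally; the input image and the parameter values are assumptions rather than values fixed by the patent.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import label2rgb

rgb_image = np.random.rand(240, 320, 3)                       # assumed RGB input in [0, 1]
segments = slic(rgb_image, n_segments=300, compactness=10)    # K ~ 300, m = 10
first_intensity = label2rgb(segments, rgb_image, kind='avg')  # mean color per cluster
```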
Step 104: calling a preset binarization algorithm to perform binarization processing on the first intensity image data to obtain second intensity image data.
Specifically, in the embodiment of the present invention, the maximum between-class variance (Otsu) algorithm of the prior art is called to perform binarization processing on the first intensity image data and generate the second intensity image data.
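The maximum between-class variance method is available off the shelf, for example in OpenCV; a brief sketch follows, assuming `first_intensity` is the RGB image produced by the clustering step.

```python
import cv2
import numpy as np

gray = cv2.cvtColor((first_intensity * 255).astype(np.uint8), cv2.COLOR_RGB2GRAY)
# Otsu's method picks the threshold automatically (the 0 passed here is ignored)
_, second_intensity = cv2.threshold(gray, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```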
Step 105: calling a preset morphological processing algorithm to perform target identification processing on the second intensity image data to obtain target intensity image data.
Specifically, in the embodiment of the present invention, dilation and erosion are applied in turn to the second intensity image data to perform the target identification processing and generate the target intensity image data.
Other morphological processing methods in the prior art may also be used to perform the target identification processing on the second intensity image data to generate the target intensity image data.
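Dilation followed by erosion is a morphological closing; a sketch with OpenCV follows, where the 5 × 5 rectangular kernel is an assumed choice, not one specified by the patent.

```python
import cv2

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
# MORPH_CLOSE = dilation followed by erosion, matching the order described above
target_intensity = cv2.morphologyEx(second_intensity, cv2.MORPH_CLOSE, kernel)
```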
Step 106: extracting target three-dimensional point cloud data from the original three-dimensional point cloud data according to the pixel coordinate value of each pixel in the target intensity image data.
Specifically, the pixel coordinate values of the target intensity image data are not changed by any of the processing above, so by matching the pixel coordinates of each pixel in the target intensity image data against the original three-dimensional point cloud data, the target three-dimensional point cloud data corresponding to the target intensity image data is extracted from the original three-dimensional point cloud data.
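Under the organized-cloud layout assumed earlier, this coordinate correspondence reduces to boolean mask indexing:

```python
import numpy as np

# cloud: assumed H x W x 4 organized point cloud; target_intensity: H x W binary image.
# Pixel coordinates survive every 2D step, so the mask indexes the cloud directly.
mask = target_intensity > 0
target_points = cloud[mask][:, :3]  # (M, 3) array of x, y, z for target pixels
```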
Step 107: calling a preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data to obtain target point cloud data.
Specifically, the denoising of the generated target three-dimensional point cloud data comprises the following steps:
step 1071, reading second pixel point data of a second pixel point in the target three-dimensional point cloud data.
Step 1072, third pixel point data adjacent to the second pixel point is obtained according to the second pixel point data.
Step 1073, a euclidean distance calculation process is performed according to the second pixel point data and the third pixel point data to obtain second distance data.
Specifically, the two-point distance formula is used to calculate distances between the second pixel point data and the third pixel point data, obtaining the Euclidean distances from the second pixel point to the plurality of third pixel points; these Euclidean distances are the second distance data. Performing this calculation for each second pixel point against its plurality of third pixel points yields a plurality of second distance data.
Step 1074: calculating according to the Gaussian distribution probability density and the plurality of second distance data to obtain a Gaussian mean and a standard deviation.
Specifically, since each second pixel point is adjacent to a plurality of third pixel points, the mean values of the second distance data should approximately follow a Gaussian distribution over the point cloud. The Gaussian mean and standard deviation can therefore be calculated from the Gaussian distribution probability density.
Step 1075: classifying all second pixel points in the target three-dimensional point cloud data according to the second pixel point data, the second distance data, the Gaussian mean and the standard deviation, dividing them into target pixel points and outlier pixel points.
Specifically, the determination of outlier pixel points and target pixel points proceeds as follows. First, the number of third pixel points whose second distance data is smaller than or equal to the Gaussian mean value is determined. Second, when this number is smaller than or equal to a first preset threshold value, the second pixel point is determined to be an outlier pixel point, and the second pixel points determined to be outlier pixel points are deleted from the target three-dimensional point cloud data. Next, a second distance data mean value is obtained as the mean of the first number of second distance data of the second pixel point, and a classification judgment value is obtained as the sum of twice the standard deviation and the Gaussian mean value. When the second distance data mean value is greater than or equal to the classification judgment value, the second pixel point is determined to be an outlier pixel point and deleted from the target three-dimensional point cloud data. Finally, the second pixel points retained in the target point cloud data are determined to be target pixel points.
Step 1076: generating the target point cloud data according to the target pixel point data corresponding to the target pixel points.
Specifically, each target pixel point corresponds to target pixel point data, and the target pixel point data of all the target pixel points generate target point cloud data.
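Steps 1071 to 1076 amount to a statistical outlier-removal filter. A minimal sketch under assumed parameter values follows; the k-d tree neighbor search is a convenience of this sketch, not something the patent prescribes.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, min_neighbors=3):
    """points: (M, 3) target cloud; k and min_neighbors are assumed values."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)  # column 0 is the point itself
    dists = dists[:, 1:]                    # second distance data per point
    mu, sigma = dists.mean(), dists.std()   # Gaussian mean and standard deviation
    enough_close = (dists <= mu).sum(axis=1) > min_neighbors  # step 1075, rule 1
    below_cutoff = dists.mean(axis=1) < mu + 2.0 * sigma      # mean < mu + 2*sigma
    return points[enough_close & below_cutoff]                # target pixel points
```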
According to the target point cloud data extraction method based on three-dimensional point cloud data provided by the embodiment of the invention, with full consideration of the characteristics of the TOF three-dimensional imaging technology and of influence factors such as the equipment system, ambient light and motion blur, the three-dimensional point cloud data obtained by TOF imaging is denoised, target extraction is performed on the intensity image data of the three-dimensional point cloud data, the target three-dimensional point cloud data is extracted according to the correspondence between the three-dimensional point cloud data and the intensity image data, and the target three-dimensional point cloud data is filtered to obtain the target point cloud data. Because target extraction is performed on the intensity data of the three-dimensional point cloud data, the complexity of target extraction, the amount of extraction computation and the system processing overhead are greatly reduced; and because different denoising algorithms are applied in turn according to the image characteristics during extraction, the obtained target point cloud data has high accuracy, high definition and small error.
The second embodiment of the invention provides a device comprising a memory and a processor, where the memory stores a program and may be connected to the processor through a bus. The memory may be a non-volatile memory, such as a hard disk drive or flash memory, in which a software program and a device driver are stored. The software program can perform the various functions of the methods provided by the embodiments of the invention described above; the device driver may be a network or interface driver. When executed by the processor, the software program implements the method provided by the first embodiment of the invention.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, a software module executed by a processor, or a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. A target point cloud data extraction method based on three-dimensional point cloud data is characterized by comprising the following steps:
acquiring original three-dimensional point cloud data, and performing denoising processing on the original three-dimensional point cloud data to obtain denoised three-dimensional point cloud data;
extracting intensity image data from the de-noised three-dimensional point cloud data;
calling a preset target extraction algorithm to perform target extraction processing on the intensity image data to obtain target intensity image data;
extracting target three-dimensional point cloud data from the original three-dimensional point cloud data according to the pixel coordinate value of each pixel in the target intensity image data;
and calling a preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data to obtain the target point cloud data.
2. The method for extracting target point cloud data based on three-dimensional point cloud data as claimed in claim 1, wherein the denoising processing performed on the original three-dimensional point cloud data to obtain the de-noised three-dimensional point cloud data specifically comprises:
generating a first buffer area in an internal storage unit;
calling and caching the original three-dimensional point cloud data line by line multiple times based on an N×N Gaussian template, caching (N-1) lines into the first cache region each time; wherein N is 3 or 5;
and performing calculation processing, with the N×N Gaussian template, on the intensity image data of the N-1 lines of three-dimensional point cloud data cached in the first cache region together with the intensity image data of the Nth line of the original three-dimensional point cloud data, and obtaining the de-noised three-dimensional point cloud data according to the results of the multiple calculations.
3. The method for extracting target point cloud data based on three-dimensional point cloud data according to claim 1, wherein the step of calling a preset target extraction algorithm to perform target extraction processing on the intensity image data to obtain target intensity image data specifically comprises:
performing super-pixel clustering processing on the intensity image data to obtain first intensity image data;
calling a preset binarization algorithm to carry out binarization processing on the first intensity image data to obtain second intensity image data;
and calling a preset morphological processing algorithm to perform target identification processing on the second intensity image data to obtain the target intensity image data.
4. The method of claim 3, wherein the intensity image data has N pixel points, and the performing the super-pixel clustering process on the intensity image data to obtain the first intensity image data specifically comprises:
s1, performing first image model conversion processing on the intensity image data by adopting a preset image model conversion algorithm to obtain third intensity image data;
s2, determining K clustering centers for the third intensity image data according to the neighborhood size of S x S, and initializing each pixel point in the third intensity image data; wherein S represents the step length of the adjacent clustering centers;
s3, reselecting the clustering center in the 3x3 neighborhood of the clustering center to obtain a first clustering center;
s4, performing distance calculation processing according to the data value of a first pixel point and the data value of the first clustering center to obtain distance data of the first pixel point and the first clustering center; wherein the distance data comprises a color distance value and a spatial distance value;
s5, calculating according to the color distance value, the space distance value and the step length of the adjacent clustering center to generate first distance data;
s6, sorting the plurality of first distance data in ascending order, and determining the smallest first distance data as the first clustering center distance data of the first pixel point;
s7, determining the clustering center of the first pixel point according to the first clustering center distance data, and distributing a label value to the first pixel point;
s8, when the first clustering center distance data is smaller than the initial distance data, initializing the first clustering center according to the first clustering center distance data, and continuing to execute S4;
s9, when the first clustering center distance data of each first pixel point is larger than or equal to the corresponding initial distance data, generating fourth intensity image data according to the data values of the N first pixel points, wherein the first pixel points comprise the label values;
s10, performing a second image model conversion process on the fourth intensity image data to generate the first intensity image data.
5. The method for extracting target point cloud data based on three-dimensional point cloud data according to claim 3, wherein the step of calling a preset binarization algorithm to binarize the first intensity image data to obtain second intensity image data specifically comprises the steps of:
and calling a maximum between-class variance algorithm to carry out binarization processing on the first intensity image data to generate second intensity image data.
6. The method for extracting target point cloud data based on three-dimensional point cloud data as claimed in claim 1, wherein the invoking of a preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data to obtain the target point cloud data specifically comprises:
reading second pixel point data of a second pixel point in the target three-dimensional point cloud data;
obtaining third pixel point data adjacent to the second pixel point according to the second pixel point data;
performing Euclidean distance calculation processing according to the second pixel point data and the third pixel point data to obtain second distance data;
calculating according to the Gaussian distribution probability density and the plurality of second distance data to obtain a Gaussian mean value and a standard deviation;
classifying all second pixel points in the target three-dimensional point cloud data according to the second pixel point data, the second distance data, the Gaussian mean value and the standard deviation, and dividing the second pixel points into target pixel points and outlier pixel points;
and generating the target point cloud data according to the target pixel point data corresponding to the target pixel point.
7. The method of claim 6, wherein the classifying all the second pixels in the target three-dimensional point cloud data according to the second pixel point data, the second distance data, the Gaussian mean and the standard deviation, and the classifying the second pixels into target pixels and outlier pixels specifically comprises:
determining the number of third pixel points of which the second distance data is smaller than or equal to the Gaussian average value;
when the number of the third pixel points is smaller than or equal to a first preset threshold value, determining the second pixel points as outlier pixel points, and deleting the second pixel points determined as the outlier pixel points from the target three-dimensional point cloud data;
obtaining a second distance data mean value according to the mean value of the first number of second distance data of the second pixel points;
obtaining a classification judgment value according to the sum of twice of the standard deviation and the Gaussian mean value;
when the second distance data mean value is larger than or equal to the classification judgment value, determining the second pixel points as outlier pixel points, and deleting the outlier pixel points from the target three-dimensional point cloud data;
and determining the second pixel points reserved in the target point cloud data as target pixel points.
8. The method of extracting target point cloud data based on three-dimensional point cloud data according to claim 1, wherein the method further comprises:
and the flight time camera receives an image acquisition instruction, shoots a target scene according to the image acquisition instruction and generates the original three-dimensional point cloud data.
9. An apparatus comprising a memory for storing a program and a processor for executing the method of extracting target point cloud data based on three-dimensional point cloud data according to any one of claims 1 to 8.
CN202010301616.8A 2020-04-16 2020-04-16 Target point cloud data extraction method based on three-dimensional point cloud data Active CN111507340B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010301616.8A CN111507340B (en) 2020-04-16 2020-04-16 Target point cloud data extraction method based on three-dimensional point cloud data


Publications (2)

Publication Number Publication Date
CN111507340A 2020-08-07
CN111507340B 2023-09-01

Family

ID=71871010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010301616.8A Active CN111507340B (en) 2020-04-16 2020-04-16 Target point cloud data extraction method based on three-dimensional point cloud data

Country Status (1)

Country Link
CN (1) CN111507340B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017157967A1 (en) * 2016-03-14 2017-09-21 Imra Europe Sas Processing method of a 3d point cloud
WO2018185807A1 (en) * 2017-04-03 2018-10-11 富士通株式会社 Distance information processing device, distance information processing method, and distance information processing program
CN108257222A (en) * 2018-01-31 2018-07-06 杭州中科天维科技有限公司 The automatic blending algorithm of steel stove converter three-dimensional laser point cloud
CN110659547A (en) * 2018-06-29 2020-01-07 比亚迪股份有限公司 Object recognition method, device, vehicle and computer-readable storage medium
CN110163047A (en) * 2018-07-05 2019-08-23 腾讯大地通途(北京)科技有限公司 A kind of method and device detecting lane line
CN110232329A (en) * 2019-05-23 2019-09-13 星际空间(天津)科技发展有限公司 Point cloud classifications method, apparatus, storage medium and equipment based on deep learning
CN110363858A (en) * 2019-06-18 2019-10-22 新拓三维技术(深圳)有限公司 A kind of three-dimensional facial reconstruction method and system
CN110827339A (en) * 2019-11-05 2020-02-21 北京深测科技有限公司 Method for extracting target point cloud

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG XUDONG; WU GUOSONG; HU LIANGMEI; WANG ZHUMENG: "Research on registration of adjacent scattered point clouds based on a TOF three-dimensional camera" (基于TOF三维相机相邻散乱点云配准技术研究), Journal of Mechanical Engineering (机械工程学报) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529044A (en) * 2020-11-20 2021-03-19 西南交通大学 Railway contact net extraction and classification method based on vehicle-mounted LiDAR
CN112529044B (en) * 2020-11-20 2022-06-28 西南交通大学 Method for extracting and classifying railway contact network based on vehicle-mounted LiDAR
CN113255677A (en) * 2021-05-27 2021-08-13 中国电建集团中南勘测设计研究院有限公司 Method, equipment and medium for rapidly extracting rock mass structural plane and occurrence information
CN113255677B (en) * 2021-05-27 2022-08-09 中国电建集团中南勘测设计研究院有限公司 Method, equipment and medium for rapidly extracting rock mass structural plane and occurrence information
CN117152353A (en) * 2023-08-23 2023-12-01 北京市测绘设计研究院 Live three-dimensional model creation method, device, electronic equipment and readable medium

Also Published As

Publication number Publication date
CN111507340B (en) 2023-09-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant