CN111507340B - Target point cloud data extraction method based on three-dimensional point cloud data


Info

Publication number
CN111507340B
Authority
CN
China
Prior art keywords
point cloud data
three-dimensional point cloud
target
Prior art date
Legal status
Active
Application number
CN202010301616.8A
Other languages
Chinese (zh)
Other versions
CN111507340A (en)
Inventor
朱翔
Current Assignee
Beijing Shenzhen Survey Technology Co ltd
Original Assignee
Beijing Shenzhen Survey Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Shenzhen Survey Technology Co ltd
Priority to CN202010301616.8A
Publication of CN111507340A
Application granted
Publication of CN111507340B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G06F18/232 - Non-hierarchical techniques
    • G06F18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The application provides a target point cloud data extraction method based on three-dimensional point cloud data, which comprises the following steps: acquiring original three-dimensional point cloud data, and denoising the original three-dimensional point cloud data to obtain denoised three-dimensional point cloud data; extracting intensity image data from the denoised three-dimensional point cloud data; invoking a preset target extraction algorithm to perform target extraction processing on the intensity image data to obtain target intensity image data; extracting target three-dimensional point cloud data from the original three-dimensional point cloud data according to pixel coordinate values of pixels in the target intensity image data; and calling a preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data to obtain target point cloud data.

Description

Target point cloud data extraction method based on three-dimensional point cloud data
Technical Field
The application relates to the field of data processing, in particular to a target point cloud data extraction method based on three-dimensional point cloud data.
Background
In the current field of three-dimensional imaging, Time-of-Flight (TOF) imaging is one of the research hotspots. Compared with other three-dimensional imaging technologies, TOF three-dimensional imaging can obtain transient images, i.e., little subsequent processing is needed when calculating depth of field, so a higher frame rate can be achieved, and cost is saved because the system overhead of subsequent processing is reduced. Current TOF three-dimensional imaging research mostly focuses on fields such as transient imaging, super-resolution, non-line-of-sight detection imaging, and time-of-flight mass spectrometry. In addition, since the distance measurement range can generally be adjusted by changing the pulse frequency, field of view, and light source intensity of the laser, the detection distance of TOF three-dimensional imaging is highly flexible: it suits close-range operations such as face recognition, gesture recognition and tracking, somatosensory recognition, and game interaction, and also suits detecting targets at long range, giving it a very wide field of potential applications. However, these application scenarios require target extraction from the three-dimensional point cloud, i.e., extracting a target point cloud of interest from the background.
Target extraction is currently applied mainly to two-dimensional images, where mature extraction schemes exist, from traditional methods based on graph theory to machine-learning methods that emerged with the rise of artificial intelligence. With improved computing power and the maturity of two-dimensional target extraction techniques, processing related to three-dimensional point clouds is also becoming increasingly feasible. However, most current research on three-dimensional point cloud target extraction focuses on large-scale targets such as cities, roads, and airports acquired by lidar systems; little work addresses target extraction from three-dimensional point clouds acquired by TOF.
Disclosure of Invention
In view of the defects of the prior art, embodiments of the present application aim to provide a target point cloud data extraction method based on three-dimensional point cloud data which, exploiting the ability of the TOF three-dimensional imaging technique to acquire point cloud position and intensity information simultaneously, performs target extraction on three-dimensional point cloud data.
In a first aspect, an embodiment of the present application provides a method for extracting target point cloud data based on three-dimensional point cloud data, including:
acquiring original three-dimensional point cloud data, and denoising the original three-dimensional point cloud data to obtain denoised three-dimensional point cloud data;
extracting intensity image data from the denoised three-dimensional point cloud data;
invoking a preset target extraction algorithm to perform target extraction processing on the intensity image data to obtain target intensity image data;
extracting target three-dimensional point cloud data from the original three-dimensional point cloud data according to pixel coordinate values of pixels in the target intensity image data;
and calling a preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data to obtain the target point cloud data.
Preferably, denoising the original three-dimensional point cloud data to obtain the denoised three-dimensional point cloud data specifically includes:
generating a first cache region in an internal storage unit;
reading and caching the original three-dimensional point cloud data line by line multiple times based on an NxN Gaussian template, caching (N-1) lines into the first cache region each time; wherein N=3 or 5;
and calculating, with the NxN Gaussian template, the intensity image data of the (N-1) lines of three-dimensional point cloud data cached in the first cache region and the intensity image data of the N-th line of three-dimensional point cloud data of the original three-dimensional point cloud data, and obtaining the denoised three-dimensional point cloud data according to the results of the multiple calculations.
Preferably, the invoking a preset target extraction algorithm to perform target extraction processing on the intensity image data, and obtaining target intensity image data specifically includes:
performing superpixel clustering on the intensity image data to obtain first intensity image data;
invoking a preset binarization algorithm to perform binarization processing on the first intensity image data to obtain second intensity image data;
and calling a preset morphological processing algorithm to perform target identification processing on the second intensity image data to obtain the target intensity image data.
Further preferably, the intensity image data has N pixel points, and the performing superpixel clustering processing on the intensity image data to obtain first intensity image data specifically includes:
S1, performing first image model conversion processing on the intensity image data by adopting a preset image model conversion algorithm to obtain third intensity image data;
S2, determining K cluster centers for the third intensity image data according to the neighborhood size of S x S, and initializing each pixel point in the third intensity image data; wherein S represents the step length between adjacent cluster centers;
S3, re-selecting a cluster center in a 3x3 neighborhood of the cluster center to obtain a first cluster center;
S4, performing distance calculation processing according to the data value of the first pixel point and the data value of the first cluster center to obtain distance data of the first pixel point and the first cluster center; wherein the distance data comprises a color distance value and a spatial distance value;
S5, calculating according to the color distance value, the spatial distance value, and the step length between adjacent cluster centers to generate first distance data;
S6, performing ascending sorting on the plurality of first distance data, and determining the first-ranked first distance data as the first cluster center distance data of the first pixel point;
S7, determining the cluster center of the first pixel point according to the first cluster center distance data, and assigning a label value to the first pixel point;
S8, when the first cluster center distance data is smaller than the initial distance data, updating the initial distance data of the first pixel point to the first cluster center distance data, and continuing to execute S4;
S9, when the first cluster center distance data of each first pixel point is greater than or equal to the corresponding initial distance data, generating fourth intensity image data according to the data values of the N first pixel points, wherein each first pixel point includes its label value;
and S10, performing second image model conversion processing on the fourth intensity image data to generate the first intensity image data.
Further preferably, the invoking a preset binarization algorithm to perform binarization processing on the first intensity image data, to obtain second intensity image data specifically includes:
and invoking a maximum inter-class variance algorithm to perform binarization processing on the first intensity image data to generate the second intensity image data.
Preferably, the invoking the preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data specifically includes:
reading second pixel point data of a second pixel point in the target three-dimensional point cloud data;
acquiring third pixel point data adjacent to the second pixel point according to the second pixel point data;
performing Euclidean distance calculation processing according to the second pixel point data and the third pixel point data to obtain second distance data;
calculating according to the Gaussian distribution probability density and the second distance data to obtain a Gaussian mean value and a standard deviation;
classifying all second pixel points in the target three-dimensional point cloud data according to the second pixel point data, the second distance data, the Gaussian mean value and the standard deviation, and dividing the second pixel points into target pixel points and outlier pixel points;
and generating the target point cloud data according to the target pixel point data corresponding to the target pixel point.
Further preferably, the classifying all the second pixels in the target three-dimensional point cloud data according to the second pixel data, the second distance data, the gaussian mean value and the standard deviation, and dividing the second pixels into target pixels and outlier pixels specifically includes:
determining the number of third pixel points of which the second distance data is smaller than or equal to the Gaussian average value;
when the number of the third pixel points is smaller than or equal to a first preset threshold value, determining the second pixel points as outlier pixel points, and deleting the second pixel points determined as outlier pixel points from the target three-dimensional point cloud data;
obtaining a second distance data average value according to the average value of the first number of second distance data of the second pixel points;
obtaining a classification judgment value according to the sum of twice of the standard deviation and the Gaussian mean value;
when the second distance data average value is greater than or equal to the classification judgment value, determining the second pixel point as an outlier pixel point, and deleting the outlier pixel point from the target three-dimensional point cloud data;
and determining the second pixel point reserved in the target point cloud data as a target pixel point.
Preferably, the method further comprises:
and the time-of-flight camera receives an image acquisition instruction, shoots a target scene according to the image acquisition instruction, and generates the original three-dimensional point cloud data.
In a second aspect, an embodiment of the present application provides an apparatus, where the apparatus includes a memory for storing a program and a processor for executing the three-dimensional point cloud data-based target point cloud data extraction method according to the first aspect.
The embodiment of the application provides a target point cloud data extraction method based on three-dimensional point cloud data. Taking full account of the characteristics of the TOF three-dimensional imaging technique and of influencing factors such as the device system, ambient light, and motion blur, target point cloud data is obtained by sequentially denoising the three-dimensional point cloud data acquired by TOF imaging, performing target extraction on the intensity image data of that point cloud data, extracting target three-dimensional point cloud data according to the correspondence between the three-dimensional point cloud data and the intensity image data, and filtering the target three-dimensional point cloud data. Because target extraction is performed on the intensity data of the three-dimensional point cloud data, the complexity of target extraction, the amount of computation, and the system processing cost are greatly reduced; and because different denoising algorithms are applied in turn according to the image characteristics during extraction, the resulting target point cloud data has high accuracy, high definition, and small error.
Drawings
Fig. 1 is a flowchart of a target point cloud data extraction method based on three-dimensional point cloud data according to an embodiment of the present application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
The embodiment of the application provides a target point cloud data extraction method based on three-dimensional point cloud data, which is used for extracting target point cloud data from the three-dimensional point cloud data.
Next, the target point cloud data extraction method based on three-dimensional point cloud data according to the first embodiment of the present application is described. Fig. 1 is a flowchart of the method according to an embodiment of the present application. As shown, the method comprises the following steps:
step 101, acquiring original three-dimensional point cloud data, and denoising the original three-dimensional point cloud data to obtain denoised three-dimensional point cloud data.
Specifically, the original three-dimensional point cloud data is generated by a TOF camera capturing image data of a target scene: the time-of-flight camera receives an image acquisition instruction, shoots the target scene according to the instruction, and generates the original three-dimensional point cloud data.
In a specific example of the embodiment of the application, the TOF camera comprises a sensor with 320x240 resolution and a matching time-of-flight controller, and adopts an 850nm infrared light source as the emission light source; the acquired depth data comprises phase information, intensity amplitude information, ambient light, and a flag bit. The image information acquired by the TOF camera is processed by an integrated processing module to generate the original three-dimensional point cloud data.
After the original three-dimensional point cloud data is obtained, it is denoised using a Gaussian template. The denoising process comprises the following steps:
First, a first cache region is generated in an internal storage unit.
Specifically, the first cache region is an area with a certain storage capacity in the storage unit, i.e., a cache, and is used to cache the data being processed.
Secondly, the original three-dimensional point cloud data is read and cached line by line multiple times based on an NxN Gaussian template, with (N-1) lines cached into the first cache region each time.
Specifically, the processor reads the original three-dimensional point cloud data line by line and caches it multiple times based on the NxN Gaussian template, caching (N-1) lines into the first cache region each time; wherein N=3 or 5.
and finally, calculating the intensity image data of the three-dimensional point cloud data cached in the N-1 row of the first cache region and the intensity image data of the N-th row of the three-dimensional point cloud data of the original three-dimensional point cloud data by adopting an NxN Gaussian template, and obtaining denoising three-dimensional point cloud data according to the result of multiple times of calculation.
Specifically, intensity image data of the cached data in the first cache area and intensity image data of the Nth line of image data read from the memory are calculated by adopting an NxN Gaussian template, and denoising three-dimensional point cloud data is obtained according to the result of multiple times of calculation.
A Gaussian template with N=3 or 5 is preferably used in this embodiment. For example, with N=3, the processor caches the original three-dimensional point cloud data line by line multiple times based on a 3x3 Gaussian template, caching 2 lines into the first cache region each time; it then reads the 3rd line of original three-dimensional point cloud data from the memory, computes with the 3x3 Gaussian template over the intensity image data of that third line and the intensity image data of the 2 lines of three-dimensional point cloud data in the first cache region, and thereby obtains the processed three-dimensional point cloud data for those 3 lines. The same calculation is performed in sequence over all the original three-dimensional point cloud data in the memory, yielding multiple pieces of processed three-dimensional point cloud data, and the denoised three-dimensional point cloud data is obtained from the results of the multiple calculations.
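The row-buffered filtering described above can be illustrated with a short sketch. This is a minimal illustration under assumptions the text does not fix: the normalized 3x3 kernel weights, a NumPy array holding the intensity channel, and borders left unfiltered. Only the template size N and the (N-1)-line cache come from the description.

```python
import numpy as np

# Normalized 3x3 Gaussian template; the specific weights are an
# illustrative assumption (the text only fixes N = 3 or 5).
KERNEL_3X3 = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=np.float64) / 16.0

def gaussian_denoise_rows(intensity, kernel=KERNEL_3X3):
    """Smooth an H x W intensity array while caching only N-1 rows at a time."""
    n = kernel.shape[0]                                   # template size N
    height, width = intensity.shape
    out = intensity.astype(np.float64).copy()             # borders stay unfiltered
    cache = [intensity[r].astype(np.float64) for r in range(n - 1)]
    for row in range(n - 1, height):
        cache.append(intensity[row].astype(np.float64))   # read the N-th row
        window = np.stack(cache)                          # N x W working block
        center = row - n // 2                             # row the template centers on
        for col in range(n // 2, width - n // 2):
            out[center, col] = np.sum(window[:, col - n // 2:col + n // 2 + 1] * kernel)
        cache.pop(0)                                      # slide the cache down one row
    return out
```

The cache holds only N-1 full rows at any moment, matching the scheme above; everything else about the function is a sketch rather than the patented implementation.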
Step 102, extracting intensity image data from the denoised three-dimensional point cloud data.
Specifically, the data value corresponding to each pixel of the three-dimensional point cloud data includes an intensity data value and a depth data value. To simplify the process and the amount of computation of target extraction, the intensity image data is extracted from the denoised three-dimensional point cloud data, and the target data is determined on the intensity image data.
Step 103, performing superpixel clustering processing on the intensity image data to obtain first intensity image data;
specifically, in the embodiment of the application, the steps of performing super-pixel clustering processing on the intensity image data and the data are as follows:
s1, performing first image model conversion processing on the first intensity image data by adopting a preset image model conversion algorithm to obtain third intensity image data.
Specifically, the first intensity image data acquired in the embodiment of the present application is RGB color space data, and in order to adapt to the super-pixel clustering process adopted in the present application, the first intensity image data needs to be converted into LAB color space data, that is, third intensity image data.
In a preferred scheme of the embodiment of the application, the preset image model conversion algorithm is an existing image model conversion algorithm. And converting the first image data from RGB image model data into LAB image model conversion data through a preset image model conversion algorithm. The conversion process of converting the first image data into the third intensity image data includes: firstly, carrying out normalization processing on R value, G value and B value of each pixel of first intensity image data to generate normalized intensity image data; then, correction processing is performed for each pixel in the normalized intensity image data, and third intensity image data is generated.
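The text leaves the color model conversion to an existing algorithm. As a purely illustrative sketch, a library routine such as scikit-image's colorspace conversion performs the same RGB-to-LAB round trip; the library choice, image size, and [0, 1] value range here are assumptions.

```python
import numpy as np
from skimage import color

rgb = np.random.rand(240, 320, 3)   # placeholder RGB intensity image in [0, 1]
lab = color.rgb2lab(rgb)            # "third intensity image data" in LAB space
rgb_back = color.lab2rgb(lab)       # the inverse conversion later used in S10
```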
S2, determining K cluster centers for the third intensity image data according to the neighborhood size of S x S, and initializing each pixel point in the third intensity image data; wherein S represents the step length between adjacent cluster centers.
Specifically, the intensity image data has N pixel points, and the conversion in step S1 does not change the number of pixels, so the third intensity image data also contains N pixel points. The superpixel neighborhood size is set to S x S, and the third intensity image data is pre-segmented into K superpixels of equal size, giving K cluster centers. Each pixel point in the third intensity image data is then initialized: the initial distance data from each pixel point to the cluster center it belongs to is set to infinity, since no cluster center has yet been determined for any pixel. The number of cluster centers K is directly related to the superpixel neighborhood size S x S through the relation N = S x S x K, where N, the number of pixels of the third intensity image data, is a positive integer, and S, the step length between adjacent cluster centers, is a positive integer.
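A minimal sketch of this seeding and initialization under the relation N = S x S x K; the array layout and the (l, a, b, x, y) center representation are assumptions for illustration, not the patented data structures.

```python
import numpy as np

def init_cluster_centers(lab, k):
    """Seed K cluster centers on a regular grid; initial distances are infinity."""
    h, w = lab.shape[:2]
    s = int(round(np.sqrt(h * w / k)))         # step between adjacent centers
    centers = []
    for y in range(s // 2, h, s):
        for x in range(s // 2, w, s):
            l, a, b = lab[y, x]
            centers.append([l, a, b, x, y])
    labels = -np.ones((h, w), dtype=int)       # no cluster assigned yet
    distances = np.full((h, w), np.inf)        # initial distance data = infinity
    return np.array(centers), labels, distances
```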
S3, re-selecting the cluster center in the 3x3 neighborhood of the cluster center to obtain a first cluster center.
Specifically, the gradient values of all pixels in the 3x3 neighborhood of the cluster center are calculated, and the cluster center is moved to the position with the smallest gradient value in that neighborhood. The purpose is to prevent the cluster center from falling on a contour boundary with a large gradient, which would harm the subsequent clustering.
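A sketch of this reseeding step, using a simple central-difference LAB gradient as an assumed gradient measure (the text does not specify one); it expects the centers array from the seeding sketch above.

```python
import numpy as np

def perturb_centers(lab, centers):
    """Move each seed to the lowest-gradient pixel in its 3x3 neighborhood."""
    h, w = lab.shape[:2]
    for c in centers:
        cx, cy = int(c[3]), int(c[4])
        best = None
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                y, x = cy + dy, cx + dx
                if 1 <= y < h - 1 and 1 <= x < w - 1:
                    g = (np.sum((lab[y, x + 1] - lab[y, x - 1]) ** 2) +
                         np.sum((lab[y + 1, x] - lab[y - 1, x]) ** 2))
                    if best is None or g < best[0]:
                        best = (g, x, y)
        if best is not None:
            c[3], c[4] = best[1], best[2]
            c[0], c[1], c[2] = lab[best[2], best[1]]   # refresh the center's LAB value
    return centers
```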
And S4, performing distance calculation processing according to the data value of the first pixel point and the data value of the first clustering center to obtain distance data of the first pixel point and the first clustering center.
Specifically, for each first cluster center j and each pixel i in its neighborhood, the color distance value d_c and the spatial distance value d_s between pixel i and the first cluster center j are calculated from their l, a, and b values and their pixel coordinate values x, y according to the formulas:
d_c = sqrt( (l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2 )
d_s = sqrt( (x_j - x_i)^2 + (y_j - y_i)^2 )
where d_c represents the color distance and d_s represents the spatial distance.
S5, calculating according to the color distance value, the spatial distance value, and the step length between adjacent cluster centers to generate first distance data.
Specifically, for each first cluster center and each pixel i in its neighborhood, the first distance data D' is calculated according to the formula:
D' = sqrt( (d_c / m)^2 + (d_s / S)^2 )
where d_c represents the color distance, d_s represents the spatial distance, and S represents the step length between adjacent cluster centers; m is a constant that weights the importance of spatial proximity relative to color similarity: the larger m is, the more spatial proximity outweighs similarity in color. In a specific example of an embodiment of the present application, m=10.
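Expressed in code, the distance computation of S4-S5 with the formulas above might look like the following sketch; treating pixels and centers as (l, a, b, x, y) tuples is an assumption for illustration.

```python
import numpy as np

def slic_distance(pixel, center, s, m=10.0):
    """Combined distance D' between a pixel and a first cluster center (S4-S5)."""
    dc = np.sqrt(sum((pixel[i] - center[i]) ** 2 for i in range(3)))          # color distance
    ds = np.sqrt((pixel[3] - center[3]) ** 2 + (pixel[4] - center[4]) ** 2)   # spatial distance
    return np.sqrt((dc / m) ** 2 + (ds / s) ** 2)                             # first distance data
```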
S6, performing ascending sorting on the plurality of first distance data, and determining the first-ranked first distance data as the first cluster center distance data of the first pixel point.
S7, determining the cluster center of the first pixel point according to the first cluster center distance data, and assigning a label value to the first pixel point.
Specifically, the cluster center of the first pixel point is the first cluster center corresponding to its first cluster center distance data; in this way, the cluster center to which each first pixel point belongs is determined. A label value is then assigned to each first pixel point, recording which cluster center each pixel belongs to.
S8, when the first cluster center distance data is smaller than the initial distance data, updating the initial distance data of the first pixel point to the first cluster center distance data, and continuing to execute S4.
Specifically, when the superpixel clustering processing starts, the initial distance data from each pixel point to the cluster center it belongs to is initialized. The first cluster center distance data is compared with the initial distance data, and when the first cluster center distance data is smaller, the initial distance data corresponding to the first pixel point is updated to the first cluster center distance data. S4 and the subsequent steps are then executed again.
S9, when the first cluster center distance data of each first pixel point is greater than or equal to the corresponding initial distance data, generating fourth intensity image data according to the data values of the N first pixel points, wherein each first pixel point includes its label value.
Specifically, steps S4-S8 are executed iteratively until the first cluster center distance data corresponding to each pixel point is greater than or equal to its updated initial distance data, which indicates that the superpixel clustering is complete; that is, the assignment of label values to the N first pixel points is finished and the cluster center of each pixel point has been determined. At this point, the fourth intensity image data is generated according to the data values of the N first pixel points.
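Steps S4 through S9 form the assignment loop. The following hedged sketch reuses slic_distance and the arrays from the earlier sketches and limits each center's search to a window of roughly 2S x 2S around it, an economy implied by the per-neighborhood wording above.

```python
def assign_pixels(lab, centers, labels, distances, s, m=10.0):
    """One pass of S4-S8; returns True if any pixel's distance data was updated."""
    h, w = lab.shape[:2]
    changed = False
    for k, c in enumerate(centers):
        cx, cy = int(c[3]), int(c[4])
        for y in range(max(0, cy - s), min(h, cy + s + 1)):
            for x in range(max(0, cx - s), min(w, cx + s + 1)):
                pixel = (*lab[y, x], x, y)
                d = slic_distance(pixel, c, s, m)
                if d < distances[y, x]:          # S8: smaller than the stored distance
                    distances[y, x] = d
                    labels[y, x] = k             # S7: assign the label value
                    changed = True
    return changed

# Iterate until S9's stopping condition holds: no pixel finds a closer center.
# while assign_pixels(lab, centers, labels, distances, s):
#     pass
```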
S10, performing second image model conversion processing on the fourth intensity image data to generate the first intensity image data.
Specifically, the fourth intensity image data is LAB color space data and needs to be converted back into RGB color space data; the first intensity image data is generated after this conversion. The embodiment of the application adopts the inverse conversion algorithm corresponding to the preset image model conversion method to perform the second image model conversion processing, converting the fourth intensity image data from LAB color model data into RGB color model data.
Step 104, invoking a preset binarization algorithm to perform binarization processing on the first intensity image data to obtain second intensity image data.
Specifically, in the embodiment of the application, the maximum inter-class variance algorithm (Otsu's method) in the prior art is invoked to binarize the first intensity image data and generate the second intensity image data.
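Off-the-shelf implementations of the maximum inter-class variance algorithm exist; OpenCV's is shown below purely as an illustration. The 8-bit single-channel input (a grayscale view of the first intensity image data) is an assumption of this sketch.

```python
import cv2
import numpy as np

first_intensity = np.random.randint(0, 256, (240, 320), dtype=np.uint8)  # placeholder
threshold, second_intensity = cv2.threshold(
    first_intensity, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# `threshold` is the value Otsu selected by maximizing inter-class variance.
```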
Step 105, calling a preset morphological processing algorithm to perform target recognition processing on the second intensity image data to obtain target intensity image data.
Specifically, in the embodiment of the application, dilation and erosion are applied in sequence to the second intensity image data for target identification processing, generating the target intensity image data.
Other morphological processing methods in the prior art can also be used for performing target identification processing on the second intensity image data to generate target intensity image data.
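Dilation followed by erosion is a morphological closing. Continuing from the binarized image in the previous sketch, an OpenCV version could read as follows; the 5x5 rectangular structuring element is an assumed choice, not one fixed by the text.

```python
import cv2

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
dilated = cv2.dilate(second_intensity, kernel)    # dilation: fill small holes
target_intensity = cv2.erode(dilated, kernel)     # erosion: restore object scale
# equivalently: cv2.morphologyEx(second_intensity, cv2.MORPH_CLOSE, kernel)
```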
Step 106, extracting target three-dimensional point cloud data from the original three-dimensional point cloud data according to pixel coordinate values of pixels in the target intensity image data.
Specifically, the pixel coordinate values of the target intensity image data are not changed by any of the processing; the target three-dimensional point cloud data corresponding to the target intensity image data is therefore extracted from the original three-dimensional point cloud data by matching the pixel coordinates of each pixel in the target intensity image data against the original three-dimensional point cloud data.
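Because pixel coordinates survive every step, this extraction reduces to masked indexing. The sketch below assumes the original cloud is stored as an H x W x 4 array of (x, y, z, intensity) values per pixel, which is an assumption of this illustration.

```python
import numpy as np

raw_cloud = np.zeros((240, 320, 4))        # placeholder original 3D point cloud
mask = target_intensity > 0                # binary mask from the morphology step
target_cloud = raw_cloud[mask]             # M x 4 target three-dimensional points
pixel_coords = np.argwhere(mask)           # matching (row, col) pixel coordinates
```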
Step 107, calling a preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data to obtain the target point cloud data.
Specifically, the generated target three-dimensional point cloud data is denoised. In the embodiment of the present application, invoking the preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data and obtain the target point cloud data specifically includes the following steps:
step 1071, reading second pixel point data of the second pixel point in the target three-dimensional point cloud data.
Step 1072, obtain third pixel data adjacent to the second pixel according to the second pixel data.
And step 1073, performing Euclidean distance calculation processing according to the second pixel point data and the third pixel point data to obtain second distance data.
Specifically, the two-point distance formula is used to calculate the distance between the second pixel point data and each adjacent third pixel point data, yielding the Euclidean distances between the second pixel point and the plurality of third pixel points; these Euclidean distances are the second distance data.
Step 1074, calculating according to the Gaussian distribution probability density and the plurality of second distance data to obtain a Gaussian mean value and a standard deviation.
Specifically, since each second pixel point is adjacent to a plurality of third pixel points, the average values of the second distance data should approximately follow a Gaussian distribution over the point cloud, and the Gaussian mean and standard deviation can therefore be calculated from the Gaussian distribution probability density.
Step 1075, classifying all the second pixels in the target three-dimensional point cloud data according to the second pixel data, the second distance data, the gaussian mean value and the standard deviation, and dividing the second pixels into target pixels and outlier pixels.
Specifically, the determining of the outlier pixel and the target pixel includes: first, the number of third pixel points with the second distance data smaller than or equal to the Gaussian average value is determined. And secondly, when the number of the third pixel points is smaller than or equal to a first preset threshold value, determining the second pixel points as outlier pixel points, and deleting the second pixel points determined as outlier pixel points from the target three-dimensional point cloud data. And then, obtaining a second distance data average value according to the average value of the first number of second distance data of the second pixel points. Then, a classification judgment value is obtained according to the sum of twice the standard deviation and the Gaussian mean value. And then, when the second distance data average value is larger than or equal to the classification judgment value, determining the second pixel point as an outlier pixel point, and deleting the outlier pixel point from the target three-dimensional point cloud data. And finally, determining the second pixel point reserved in the target point cloud data as a target pixel point.
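A compact sketch of steps 1071 to 1075 using a k-d tree for the neighbor search. The neighbor count k and the first preset threshold min_close are illustrative assumptions, while the mean-plus-two-standard-deviations classification value follows the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def denoise_target_cloud(points, k=8, min_close=3):
    """Split an M x 3 array of target points into kept and outlier points."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)    # column 0 is each point itself
    dists = dists[:, 1:]                      # second distance data
    mean_d = dists.mean(axis=1)               # per-point average distance
    mu, sigma = mean_d.mean(), mean_d.std()   # Gaussian mean and standard deviation
    close_counts = (dists <= mu).sum(axis=1)  # neighbors no farther than the mean
    keep = (close_counts > min_close) & (mean_d < mu + 2.0 * sigma)
    return points[keep]                       # target point cloud data
```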
Step 1076, generating the target point cloud data according to the target pixel point data corresponding to the target pixel points.
Specifically, each target pixel point corresponds to target pixel point data, and target pixel point data of all the target pixel points generate target point cloud data.
The embodiment of the application provides a target point cloud data extraction method based on three-dimensional point cloud data. Taking full account of the characteristics of the TOF three-dimensional imaging technique and of influencing factors such as the device system, ambient light, and motion blur, target point cloud data is obtained by sequentially denoising the three-dimensional point cloud data acquired by TOF imaging, performing target extraction on the intensity image data of that point cloud data, extracting target three-dimensional point cloud data according to the correspondence between the three-dimensional point cloud data and the intensity image data, and filtering the target three-dimensional point cloud data. Because target extraction is performed on the intensity data of the three-dimensional point cloud data, the complexity of target extraction, the amount of computation, and the system processing cost are greatly reduced; and because different denoising algorithms are applied in turn according to the image characteristics during extraction, the resulting target point cloud data has high accuracy, high definition, and small error.
The second embodiment of the application provides a device comprising a memory and a processor, where the memory is used for storing programs and can be connected with the processor through a bus. The memory may be a non-volatile memory, such as a hard disk drive or flash memory, in which a software program and a device driver are stored. The software program can perform various functions of the method provided by the embodiments of the application; the device driver may be a network and interface driver. The processor is configured to execute the software program, which, when executed, implements the method provided in the first embodiment of the present application.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of function in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in Random Access Memory (RAM), memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing detailed description of the application has been presented for purposes of illustration and description, and it should be understood that the application is not limited to the particular embodiments disclosed, but is intended to cover all modifications, equivalents, alternatives, and improvements within the spirit and principles of the application.

Claims (8)

1. A method for extracting target point cloud data based on three-dimensional point cloud data, the method comprising:
acquiring original three-dimensional point cloud data, and denoising the original three-dimensional point cloud data to obtain denoised three-dimensional point cloud data;
extracting intensity image data from the denoised three-dimensional point cloud data;
invoking a preset target extraction algorithm to perform target extraction processing on the intensity image data to obtain target intensity image data;
extracting target three-dimensional point cloud data from the original three-dimensional point cloud data according to pixel coordinate values of pixels in the target intensity image data;
invoking a preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data to obtain the target point cloud data;
the step of calling a preset target extraction algorithm to perform target extraction processing on the intensity image data, and the step of obtaining target intensity image data specifically comprises the following steps:
performing superpixel clustering on the intensity image data to obtain first intensity image data;
invoking a preset binarization algorithm to perform binarization processing on the first intensity image data to obtain second intensity image data;
invoking a preset morphological processing algorithm to perform target identification processing on the second intensity image data to obtain target intensity image data;
the step of calling a preset point cloud denoising algorithm to denoise the target three-dimensional point cloud data, and the step of obtaining the target point cloud data specifically comprises the following steps:
reading second pixel point data of a second pixel point in the target three-dimensional point cloud data;
acquiring third pixel point data adjacent to the second pixel point according to the second pixel point data;
performing Euclidean distance calculation processing according to the second pixel point data and the third pixel point data to obtain second distance data;
calculating according to the Gaussian distribution probability density and the second distance data to obtain a Gaussian mean value and a standard deviation;
classifying all second pixel points in the target three-dimensional point cloud data according to the second pixel point data, the second distance data, the Gaussian mean value and the standard deviation, and dividing the second pixel points into target pixel points and outlier pixel points;
and generating the target point cloud data according to the target pixel point data corresponding to the target pixel point.
2. The method for extracting target point cloud data based on three-dimensional point cloud data according to claim 1, wherein the denoising processing is performed on the original three-dimensional point cloud data, and the obtaining of the denoised three-dimensional point cloud data specifically includes:
generating a first cache region in an internal storage unit;
calling and carrying out multiple line-by-line caching on the original three-dimensional point cloud data based on a 3x3 Gaussian template, wherein 2 lines are cached in the first cache region each time;
and calculating, with the 3x3 Gaussian template, the intensity image data of the 2 lines of three-dimensional point cloud data cached in the first cache region and the intensity image data of the 3rd line of three-dimensional point cloud data of the original three-dimensional point cloud data, and obtaining the denoised three-dimensional point cloud data according to the results of the multiple calculations.
3. The method for extracting target point cloud data based on three-dimensional point cloud data according to claim 1, wherein the denoising processing is performed on the original three-dimensional point cloud data, and the obtaining of the denoised three-dimensional point cloud data specifically includes:
generating a first cache region in an internal storage unit;
calling and caching the original three-dimensional point cloud data line by line for a plurality of times based on a 5x5 Gaussian template, wherein 4 lines are cached in the first cache region each time;
and calculating, with the 5x5 Gaussian template, the intensity image data of the 4 lines of three-dimensional point cloud data cached in the first cache region and the intensity image data of the 5th line of three-dimensional point cloud data of the original three-dimensional point cloud data, and obtaining the denoised three-dimensional point cloud data according to the results of the multiple calculations.
4. The method for extracting target point cloud data based on three-dimensional point cloud data according to claim 1, wherein the intensity image data has N pixels, and the performing superpixel clustering on the intensity image data to obtain first intensity image data specifically includes:
S1, performing first image model conversion processing on the intensity image data by adopting a preset image model conversion algorithm to obtain third intensity image data;
S2, determining K cluster centers for the third intensity image data according to the neighborhood size of S x S, and initializing each pixel point in the third intensity image data; wherein S represents the step length between adjacent cluster centers;
S3, re-selecting a cluster center in a 3x3 neighborhood of the cluster center to obtain a first cluster center;
S4, performing distance calculation processing according to the data value of the first pixel point and the data value of the first cluster center to obtain distance data of the first pixel point and the first cluster center; wherein the distance data comprises a color distance value and a spatial distance value;
S5, calculating according to the color distance value, the spatial distance value, and the step length between adjacent cluster centers to generate first distance data;
S6, performing ascending sorting on the plurality of first distance data, and determining the first-ranked first distance data as the first cluster center distance data of the first pixel point;
S7, determining the cluster center of the first pixel point according to the first cluster center distance data, and assigning a label value to the first pixel point;
S8, when the first cluster center distance data is smaller than the initial distance data, updating the initial distance data of the first pixel point to the first cluster center distance data, and continuing to execute S4;
S9, when the first cluster center distance data of each first pixel point is greater than or equal to the corresponding initial distance data, generating fourth intensity image data according to the data values of the N first pixel points, wherein each first pixel point includes its label value;
and S10, performing second image model conversion processing on the fourth intensity image data to generate the first intensity image data.
5. The method for extracting target point cloud data based on three-dimensional point cloud data according to claim 1, wherein the invoking a preset binarization algorithm to binarize the first intensity image data, and obtaining second intensity image data specifically comprises:
and invoking a maximum inter-class variance algorithm to perform binarization processing on the first intensity image data to generate the second intensity image data.
6. The method for extracting target point cloud data based on three-dimensional point cloud data according to claim 1, wherein classifying all second pixel points in the target three-dimensional point cloud data according to the second pixel point data, the second distance data, the gaussian mean value and the standard deviation specifically comprises:
determining the number of third pixel points of which the second distance data is smaller than or equal to the Gaussian average value;
when the number of the third pixel points is smaller than or equal to a first preset threshold value, determining the second pixel points as outlier pixel points, and deleting the second pixel points determined as outlier pixel points from the target three-dimensional point cloud data;
obtaining a second distance data average value according to the average value of the first number of second distance data of the second pixel points;
obtaining a classification judgment value according to the sum of twice of the standard deviation and the Gaussian mean value;
when the second distance data average value is greater than or equal to the classification judgment value, determining the second pixel point as an outlier pixel point, and deleting the outlier pixel point from the target three-dimensional point cloud data;
and determining the second pixel point reserved in the target point cloud data as a target pixel point.
7. The three-dimensional point cloud data-based target point cloud data extraction method according to claim 1, characterized in that the method further comprises:
and the time-of-flight camera receives an image acquisition instruction, shoots a target scene according to the image acquisition instruction, and generates the original three-dimensional point cloud data.
8. An electronic device for performing a three-dimensional point cloud data-based target point cloud data extraction method, characterized in that the device comprises a memory for storing a program and a processor for performing the three-dimensional point cloud data-based target point cloud data extraction method according to any of claims 1-7.
CN202010301616.8A 2020-04-16 2020-04-16 Target point cloud data extraction method based on three-dimensional point cloud data Active CN111507340B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010301616.8A CN111507340B (en) 2020-04-16 2020-04-16 Target point cloud data extraction method based on three-dimensional point cloud data

Publications (2)

Publication Number Publication Date
CN111507340A CN111507340A (en) 2020-08-07
CN111507340B (en) 2023-09-01

Family

ID=71871010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010301616.8A Active CN111507340B (en) 2020-04-16 2020-04-16 Target point cloud data extraction method based on three-dimensional point cloud data

Country Status (1)

Country Link
CN (1) CN111507340B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529044B (en) * 2020-11-20 2022-06-28 西南交通大学 Method for extracting and classifying railway contact network based on vehicle-mounted LiDAR
CN113255677B (en) * 2021-05-27 2022-08-09 中国电建集团中南勘测设计研究院有限公司 Method, equipment and medium for rapidly extracting rock mass structural plane and occurrence information
CN117152353B (en) * 2023-08-23 2024-05-28 北京市测绘设计研究院 Live three-dimensional model creation method, device, electronic equipment and readable medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017157967A1 (en) * 2016-03-14 2017-09-21 Imra Europe Sas Processing method of a 3d point cloud
WO2018185807A1 (en) * 2017-04-03 2018-10-11 富士通株式会社 Distance information processing device, distance information processing method, and distance information processing program
CN108257222A (en) * 2018-01-31 2018-07-06 杭州中科天维科技有限公司 The automatic blending algorithm of steel stove converter three-dimensional laser point cloud
CN110659547A (en) * 2018-06-29 2020-01-07 比亚迪股份有限公司 Object recognition method, device, vehicle and computer-readable storage medium
CN110163047A (en) * 2018-07-05 2019-08-23 腾讯大地通途(北京)科技有限公司 A kind of method and device detecting lane line
CN110232329A (en) * 2019-05-23 2019-09-13 星际空间(天津)科技发展有限公司 Point cloud classifications method, apparatus, storage medium and equipment based on deep learning
CN110363858A (en) * 2019-06-18 2019-10-22 新拓三维技术(深圳)有限公司 A kind of three-dimensional facial reconstruction method and system
CN110827339A (en) * 2019-11-05 2020-02-21 北京深测科技有限公司 Method for extracting target point cloud

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on registration technology for adjacent scattered point clouds from a TOF three-dimensional camera (基于TOF三维相机相邻散乱点云配准技术研究); 张旭东; 吴国松; 胡良梅; 王竹萌; Journal of Mechanical Engineering (机械工程学报), No. 12; full text *

Also Published As

Publication number Publication date
CN111507340A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
Uzkent et al. Tracking in aerial hyperspectral videos using deep kernelized correlation filters
Wang et al. Fusing bird’s eye view lidar point cloud and front view camera image for 3d object detection
Li et al. DeepI2P: Image-to-point cloud registration via deep classification
CN111222395B (en) Target detection method and device and electronic equipment
US10115209B2 (en) Image target tracking method and system thereof
CN111507340B (en) Target point cloud data extraction method based on three-dimensional point cloud data
US20170294027A1 (en) Remote determination of quantity stored in containers in geographical region
US7756296B2 (en) Method for tracking objects in videos using forward and backward tracking
WO2016026371A1 (en) Fast object detection method based on deformable part model (dpm)
Chuang et al. Automatic fish segmentation via double local thresholding for trawl-based underwater camera systems
CN111383252B (en) Multi-camera target tracking method, system, device and storage medium
CN107784669A (en) A kind of method that hot spot extraction and its barycenter determine
CN113761999B (en) Target detection method and device, electronic equipment and storage medium
Schilling et al. Detection of vehicles in multisensor data via multibranch convolutional neural networks
CN104376575B (en) A kind of pedestrian counting method and device based on multi-cam monitoring
WO2018227216A1 (en) Learning-based matching for active stereo systems
CN109816694B (en) Target tracking method and device and electronic equipment
US9747507B2 (en) Ground plane detection
US11657485B2 (en) Method for expanding image depth and electronic device
CN109376641A (en) A kind of moving vehicle detection method based on unmanned plane video
CN112949440A (en) Method for extracting gait features of pedestrian, gait recognition method and system
CN116883588A (en) Method and system for quickly reconstructing three-dimensional point cloud under large scene
US20140169684A1 (en) Distance Metric for Image Comparison
CN108875500B (en) Pedestrian re-identification method, device and system and storage medium
CN110516731B (en) Visual odometer feature point detection method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant