CN113034419B - Machine vision task-oriented radar point cloud objective quality evaluation method and device - Google Patents


Info

Publication number
CN113034419B
CN113034419B (application CN201911233989.XA)
Authority
CN
China
Prior art keywords
point cloud
quality evaluation
calculating
task
noise
Prior art date
Legal status
Active
Application number
CN201911233989.XA
Other languages
Chinese (zh)
Other versions
CN113034419A (en)
Inventor
徐异凌
赵恒
杨琦
管云峰
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201911233989.XA
Publication of CN113034419A
Application granted
Publication of CN113034419B
Legal status: Active

Classifications

    • G06T 7/0002 Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/10028 Range image; depth image; 3D point clouds
    • G06T 2207/10044 Radar image
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a machine vision task-oriented radar point cloud objective quality evaluation method and device, comprising the following steps. A spherical domain division step: dividing a first point cloud and a second point cloud into a number of adjacent spherical domains respectively, where the radius of each spherical domain is related to the longest edge a of the visual task target in the point cloud data. A point-set direction vector calculation step: calculating the eigenvalues and eigenvectors of the point set in each spherical domain, forming a vector for the direction of the point set, and calculating the vector difference between the first point cloud and the second point cloud in the same spherical domain. A model score obtaining step: taking the number of points in each spherical domain as its weight, weighting and summing the vector differences calculated for the spherical domains to obtain a model score. The invention can estimate well the quality of service of specific tasks based on radar point clouds (such as point cloud classification, segmentation, and identification) and has good robustness.

Description

Machine vision task-oriented radar point cloud objective quality evaluation method and device
Technical Field
The invention relates to the field of machine vision tasks on radar point clouds and objective point cloud quality evaluation, and in particular to a machine vision task-oriented radar point cloud objective quality evaluation method and device.
Background
In recent decades, laser radar scanning technology and systems have matured and come into increasingly wide use. Lidar yields three-dimensional data, known as a radar point cloud, which reflects the basic structural information of the target. Research on specific visual tasks based on three-dimensional laser point cloud data, such as target segmentation, target identification, target classification, and target registration, has produced results. For example, patent document CN108981616A discloses a method for inverting the effective leaf area index of a plantation forest based on an empirical model of an unmanned-aerial-vehicle laser radar, applied in forest resource investigation, forest stand quality evaluation, and forest productivity estimation.
Unmanned driving technology based on laser point clouds is a focus of current research and has been applied in actual driving. However, the way data is acquired by vehicle-mounted LiDAR (Light Detection and Ranging) introduces shadows, occlusions, and similar phenomena into the point cloud data, so specific visual tasks cannot achieve an ideal effect: the recognition rate of a point cloud segmentation algorithm drops, its robustness worsens, and computing resources are wasted. A data evaluation algorithm is needed to avoid this waste.
LiDAR data is sparse and non-uniform (dense near the sensor, sparse far from it), and the target in a specific task tends to be small. The existing point-to-point and point-to-plane distortion quality metrics used for general measurement clearly cannot express a specific vision task on LiDAR data. In a radar point cloud segmentation task on noisy data, for example, the far points we do not care about incur larger errors because of sparsity, while the near points we do care about incur smaller errors. And because the target volume is small, if the noise is uneven, the computed RMSE bears no direct relation to the result of the specific task, so the classical distortion evaluation models are not robust on radar point cloud data. In addition, owing to the sparsity of LiDAR data, local properties such as normal vectors and curvature are difficult to estimate and use accurately. The sketch below illustrates the classical point-to-point baseline in question.
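For concreteness, the point-to-point metric criticized above can be written as a nearest-neighbor RMSE. The following is a minimal sketch (numpy/scipy are assumed; this is the generic classical baseline, not part of the invention):

```python
import numpy as np
from scipy.spatial import cKDTree

def p2p_rmse(ref_pts: np.ndarray, deg_pts: np.ndarray) -> float:
    """Point-to-point RMSE: each degraded point vs its nearest reference point."""
    d, _ = cKDTree(ref_pts).query(deg_pts)  # nearest-neighbor distances
    return float(np.sqrt(np.mean(d ** 2)))
```

On LiDAR frames, the large nearest-neighbor distances of distant, sparse points dominate this average, which is exactly the failure mode described above.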
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a machine vision task-oriented radar point cloud objective quality evaluation method and device.
Aiming at the defects of the two classical algorithms in the prior art, the invention provides a radar point cloud evaluation method for the quality of service of specific visual tasks; with this model, the quality of service of specific tasks based on radar point clouds (such as point cloud classification, segmentation, and identification) can be estimated well, and the model has good robustness.
Specifically, the model yields a linear relationship between the index of a specific vision task and the model score, so the quality of service of LiDAR point cloud data can be evaluated, and the model score obtained for the data can be used to evaluate the quality of service of the vision task.
In order to achieve the purpose, the invention adopts the following technical scheme:
An evaluation model based on quality of service is provided, which uses the model to score the data and thereby estimate the accuracy the data can achieve under a particular task, an idea not previously addressed.
The specific model idea is as follows.
1. The reference point cloud a and the point cloud b are each divided into a number of adjacent spherical domains; the radius of each spherical domain is a multiple of the visual task target size, namely a multiplier of not more than 2 applied to half the length of the target's longest edge (a sketch of this partition follows this list).
2. The eigenvalues and eigenvectors of the point set are calculated for each spherical domain and combined, in one of several modes, into a vector representing the direction of the point set in the domain; the vector difference between point cloud a and point cloud b in the same spherical domain is then obtained with a vector distance measure.
3. Using an attribute of each sphere neighborhood as its weight, the vector differences calculated for the neighborhoods are weighted and summed to obtain the model score.
4. To verify the validity of the scores output by the model, the reference point cloud a and the point cloud b are brought into a specific visual task and the corresponding accuracy index is calculated; the model scores should bear a roughly linear relationship to the index, and if the linearity is good, the model can be used to estimate the quality of service corresponding to the data.
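As referenced in step 1, a minimal sketch of the sphere-domain partition; the grid layout of sphere centers and the use of scipy's cKDTree are illustrative assumptions rather than the patent's prescribed implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def partition_into_sphere_domains(points: np.ndarray, longest_edge: float,
                                  k: float = 1.5) -> list[np.ndarray]:
    """Split an (N, 3) point cloud into adjacent spherical domains.

    The radius follows the rule above: k * (longest_edge / 2) with 1 < k <= 2.
    Sphere centers are laid out on a regular grid covering the bounding box.
    """
    radius = k * (longest_edge / 2.0)
    lo, hi = points.min(axis=0), points.max(axis=0)
    # Grid spacing equal to the radius keeps neighboring spheres adjacent
    # (overlapping), so no point falls into a gap between domains.
    axes = [np.arange(l, h + radius, radius) for l, h in zip(lo, hi)]
    centers = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
    tree = cKDTree(points)
    domains = []
    for idx in tree.query_ball_point(centers, r=radius):
        if idx:  # keep only non-empty domains
            domains.append(points[np.asarray(idx)])
    return domains
```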
Preferably, the specific visual task comprises any one of:
point cloud classification, segmentation, identification, and tracking.
Preferably, the sphere-domain radius multiplier comprises any value not greater than 2, such as:
1.2, 1.5, or 1.7.
Preferably, the noise category used for model verification comprises any one of:
Gaussian noise, random noise, and regional noise.
Preferably, the eigenvector combination mode of the model comprises any one of:
multiply-accumulate over all eigenvalue-eigenvector pairs, or multiply-accumulate over one or two selected eigenvalues and their corresponding eigenvectors.
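A hedged sketch of these combination modes, computing the direction of a domain's point set from the eigen-decomposition of its covariance (function and mode names are illustrative):

```python
import numpy as np

def direction_vector(domain_pts: np.ndarray, mode: str = "full") -> np.ndarray:
    """Direction of a point set from the eigen-decomposition of its covariance."""
    cov = np.cov(domain_pts.T)               # 3x3 covariance of the domain
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    if mode == "full":   # multiply-accumulate over all three pairs
        return eigvecs @ eigvals             # sum_i a_i * alpha_i
    if mode == "max":    # largest eigenvalue times its eigenvector only
        return eigvals[-1] * eigvecs[:, -1]
    raise ValueError(mode)
```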
Preferably, the vector difference measure of the model comprises any one of:
the L1 norm, the L2 norm, and the Gaussian kernel function.
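The three measures listed above might look as follows in code (the Gaussian kernel is converted to a distance as 1 - k(u, v); the bandwidth sigma is a free parameter, an assumption on our part):

```python
import numpy as np

def l1_diff(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.abs(u - v).sum())

def l2_diff(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.linalg.norm(u - v))

def gaussian_kernel_diff(u: np.ndarray, v: np.ndarray, sigma: float = 1.0) -> float:
    # Kernel similarity turned into a distance: 1 - k(u, v).
    return float(1.0 - np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2)))
```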
Preferably, the vision task accuracy index used for model verification comprises any one of:
IoU (intersection over union), AP (average precision), and recall.
The invention also provides a machine vision task-oriented radar point cloud objective quality evaluation system, which comprises the following modules: an input point cloud data module, a noise adding module, a spherical domain segmentation module, a module for calculating the vector representation of the spherical-domain point set, a vector difference calculation module, and a quality evaluation score calculation module.
An input point cloud data module: inputting original point cloud data;
a noise adding module: adding noise to the original point cloud data to obtain noisy point cloud data;
a spherical domain segmentation module: dividing the original point cloud and the noisy point cloud into spherical domains respectively;
a module for calculating the direction vector representation of the spherical-domain point set: calculating the eigenvalues and eigenvectors of the spherical-domain point set, and calculating the vector representation from them;
a vector difference calculation module: calculating the vector difference between the original point cloud and the noisy point cloud in the same spherical domain;
a quality evaluation score calculation module: calculating the quality evaluation score, taking the attribute of each sphere neighborhood (its point count) as the weight, as formalized below.
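Written out, the weighted summation performed by the quality evaluation score calculation module takes a form like the following (one plausible formalization consistent with the steps above; the normalization by the total weight is our assumption):

```latex
F = \frac{\sum_{i=1}^{M} n_i \, d\!\left(L_i^{a}, L_i^{b}\right)}{\sum_{i=1}^{M} n_i}
```

where M is the number of spherical domains, n_i the number of points in domain i, L_i^a and L_i^b the direction vectors of the two clouds in that domain, and d the chosen vector difference measure.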
Preferably, the system further comprises:
a model score verification module: substituting the first point cloud and the second point cloud into a deep-learning-based point cloud segmentation task, and calculating the corresponding accuracy index IoU.
Compared with the prior art, the invention has the following beneficial effects:
1. Robustness is enhanced. The weighted-summation method preliminarily addresses the near-dense, far-sparse distribution of radar data. In practical applications, vehicle-mounted LiDAR is more concerned with near objects, and the target of a specific visual task is usually near, so the invention adopts a weighted summation centered on the task target, which enhances robustness.
2. Complexity is reduced. Because the volume of point cloud data is very large, point-to-point and point-to-plane modes have high complexity. The invention computes over adjacent sphere neighborhoods, which greatly reduces the number of operations.
3. The evaluation is based on structural similarity. Point-to-point and point-to-plane evaluation neglects structural similarity and cannot accurately evaluate the point cloud target of a visual task (segmentation, identification, registration, and the like). Considering the sparsity of radar point clouds, the invention adopts a method based on the direction of the point set within a sphere neighborhood, which fully accounts for the structural similarity of sparse point clouds, so the result correlates better with the index of the visual task.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of the non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of a radar point cloud data quality evaluation method based on a visual task in the embodiment of the invention;
FIG. 2 is a model usage scenario in an embodiment of the present invention.
FIG. 3 is a functional block diagram of a radar point cloud objective quality evaluation system for a vision task in the embodiment of the invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The parameter setting and method selection for the specific implementation of the invention will be described. The specific implementation process is as follows.
As shown in fig. 1, the radar point cloud data quality evaluation method based on the visual task provided by the invention comprises the following steps:
1. The reference point cloud a and the noisy point cloud b are each divided into a number of adjacent spherical domains. Let the longest edge of the visual task target observed in the point cloud data be x. So that the visual task target can, as far as possible, be computed as a whole without harming the robustness of the model, the radius of each spherical domain is set to 1.5 × (x/2), and the number of sphere neighborhoods follows accordingly. For example, for a vehicle whose longest edge is x = 4 m, the radius is 1.5 × 2 m = 3 m.
2. The eigenvalues and eigenvectors of the point set are calculated for each spherical domain. With eigenvalues a1, a2, a3 and eigenvectors α1, α2, α3, they are combined into the direction L of the point set according to the formula
L = a1·α1 + a2·α2 + a3·α3
This combination yields a vector representing the direction of the point set in the spherical domain. The vector difference between point cloud a and point cloud b in the same spherical domain is then obtained with the L1 norm, which reflects differences in both vector magnitude and direction.
3. Taking the number of points in each sphere neighborhood as the weight, the vector differences computed for the sphere neighborhoods are weighted and summed (i.e., density weighting) to obtain the model score.
4. To verify the validity of the model and its scores, the reference point cloud a and point cloud b are brought into a deep-learning-based point cloud segmentation task, graded noise is added to the validation set, and the corresponding accuracy index IoU (intersection over union) and the score of the proposed evaluation model are computed; the model score should bear a roughly linear relationship to the corresponding index. Linearity can be measured with the linear correlation indicators PLCC, KROCC, and SROCC. If the linearity is good, the model can be used to estimate the quality of service corresponding to the data. An end-to-end sketch of steps 1-4 follows.
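As referenced in step 4, the following pulls steps 1-4 together into an end-to-end scoring sketch. It is a minimal, illustrative implementation assuming numpy/scipy; the grid-based sphere centers, the minimum-point cutoff, and the normalization by total weight are our assumptions. Note also that eigenvector signs from eigh are arbitrary, so a production version would fix a sign convention before differencing.

```python
import numpy as np
from scipy.spatial import cKDTree

def direction(pts: np.ndarray) -> np.ndarray:
    """L = a1*alpha1 + a2*alpha2 + a3*alpha3 from the domain covariance."""
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts.T))
    return eigvecs @ eigvals

def model_score(ref_pts, noisy_pts, longest_edge, k=1.5):
    """Density-weighted L1 difference of per-domain direction vectors."""
    radius = k * (longest_edge / 2.0)                   # step 1: radius rule
    lo = np.minimum(ref_pts.min(0), noisy_pts.min(0))
    hi = np.maximum(ref_pts.max(0), noisy_pts.max(0))
    axes = [np.arange(l, h + radius, radius) for l, h in zip(lo, hi)]
    centers = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
    tree_a, tree_b = cKDTree(ref_pts), cKDTree(noisy_pts)
    score = total_w = 0.0
    for ia, ib in zip(tree_a.query_ball_point(centers, radius),
                      tree_b.query_ball_point(centers, radius)):
        if len(ia) < 3 or len(ib) < 3:                  # skip unstable domains
            continue
        la = direction(ref_pts[np.asarray(ia)])         # step 2: directions
        lb = direction(noisy_pts[np.asarray(ib)])
        w = len(ia)                                     # step 3: point count as weight
        score += w * np.abs(la - lb).sum()              # L1 vector difference
        total_w += w
    return score / total_w if total_w else 0.0
```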
Preferably, the specific visual task comprises any one of:
point cloud classification, segmentation, identification, and tracking.
Preferably, the noise category of point cloud b in the model comprises any one of:
Gaussian noise, random noise, and regional noise.
Preferably, the sphere-domain radius multiplier comprises any value not greater than 2, such as:
1.2, 1.5, or 1.7.
Preferably, the eigenvector combination mode of the model comprises any one of:
multiply-accumulate over all eigenvalue-eigenvector pairs, or multiply-accumulate over one or two selected eigenvalues and their corresponding eigenvectors.
Preferably, the vector difference measure of the model comprises any one of:
the L1 norm, the L2 norm, and the radial basis function.
Preferably, the vision task accuracy index used for model verification comprises any one of:
IoU, AP (average precision), and recall. The linearity check of step 4 is sketched below.
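The linearity check reduces to correlating model scores against the task index across noise levels; a sketch with scipy.stats (the numbers are purely illustrative placeholders, not measured results):

```python
import numpy as np
from scipy import stats

scores = np.array([0.12, 0.25, 0.40, 0.63, 0.81])   # model scores per noise level (illustrative)
ious   = np.array([0.78, 0.71, 0.60, 0.44, 0.30])   # segmentation IoU per noise level (illustrative)

plcc, _  = stats.pearsonr(scores, ious)     # PLCC: linear correlation
srocc, _ = stats.spearmanr(scores, ious)    # SROCC: rank (monotonic) correlation
krocc, _ = stats.kendalltau(scores, ious)   # KROCC: Kendall rank correlation
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}  KROCC={krocc:.3f}")
```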
The invention also provides a machine vision task-oriented radar point cloud objective quality evaluation system, which comprises the following modules: an input point cloud data module, a noise adding module, a spherical domain segmentation module, a module for calculating the vector representation of the spherical-domain point set, a vector difference calculation module, and a quality evaluation score calculation module.
An input point cloud data module: inputting original point cloud data;
a noise adding module: adding noise to the original point cloud data to obtain noisy point cloud data;
a spherical domain segmentation module: dividing the original point cloud and the noisy point cloud into spherical domains respectively;
a module for calculating the direction vector representation of the spherical-domain point set: calculating the eigenvalues and eigenvectors of the spherical-domain point set, and calculating the vector representation from them;
a vector difference calculation module: calculating the vector difference between the original point cloud and the noisy point cloud in the same spherical domain;
a quality evaluation score calculation module: calculating the quality evaluation score, taking the attribute of each sphere neighborhood (its point count) as the weight.
Preferably, the system further comprises:
a model score verification module: substituting the first point cloud and the second point cloud into a deep-learning-based point cloud segmentation task, and calculating the corresponding accuracy index IoU.
Based on the above, a specific application example is given below:
Taking the lidar target detection task in autonomous driving as an example, the point cloud data collected by a lidar is enormous. Such massive point cloud data burdens the computer's subsequent transmission and storage and obstructs downstream work. If, before detection, the model can be used to estimate how the target detection task will fare, data rated higher by the model can be selected preferentially, increasing task accuracy.
Referring to fig. 2, the application scenario of the model evaluation algorithm is briefly as follows: when vehicle-mounted LiDAR obtains a set of task-specific radar data during unmanned driving, a score estimate for that data can be obtained with our model algorithm (see fig. 1), which determines whether the data is feasible for its vision task. The vehicle's vision task can then decide whether to use the data, greatly improving frame utilization and saving computing resources.
In particular, the method comprises the following steps:
1. A sphere neighborhood partition is computed from the 3-dimensional reference point cloud a.
2. Based on the same sphere neighborhoods, the direction feature of the point set is computed for both the reference point cloud a and the noisy point cloud b.
3. The direction-feature difference of the point sets is obtained with a vector difference measure; common choices include the L1 norm and the L2 norm.
4. The score F is obtained by sphere-neighborhood weighting; the larger the error, the larger the score.
5. The influence of point cloud b on the visual task can be estimated from the size of F: if F > Q (a threshold), the data is considered unable to complete a normal visual task and is discarded; if F < Q, the data is used to perform the specific vision task. A usage sketch follows.
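A usage sketch of the gating rule in step 5 (the default threshold q and the function name are hypothetical; model_score refers to the sketch given after step 4):

```python
def should_use_frame(ref_cloud, frame_cloud, longest_edge, q=0.5):
    """Gate a LiDAR frame before spending compute on the vision task."""
    f = model_score(ref_cloud, frame_cloud, longest_edge)  # F from the sketch above
    return f < q   # True: F < Q, run the task; False: F >= Q, discard the frame
```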
As shown in fig. 3, the present embodiment further provides a machine vision task-oriented radar point cloud objective quality evaluation device, comprising: an input point cloud data module, a noise adding module, a spherical domain segmentation module, a module for calculating the vector representation of the spherical-domain point set, a vector difference calculation module, and a quality evaluation score calculation module.
An input point cloud data module: inputting original point cloud data;
a noise adding module: adding noise to the original point cloud data to obtain noisy point cloud data;
a spherical domain segmentation module: dividing the original point cloud and the noisy point cloud into spherical domains respectively;
a module for calculating the direction vector representation of the spherical-domain point set: calculating the eigenvalues and eigenvectors of the spherical-domain point set, and calculating the vector representation from them;
a vector difference calculation module: calculating the vector difference between the original point cloud and the noisy point cloud in the same spherical domain;
a quality evaluation score calculation module: calculating the quality evaluation score, taking the attribute of each sphere neighborhood (its point count) as the weight.
In summary, for a specific vision task (such as point cloud classification, segmentation, or identification), the machine vision task-oriented radar point cloud objective quality evaluation method and system can evaluate, under the model, how effective the input data will be for the task, and thereby improve resource utilization and the accuracy of the vision task.
In this embodiment, each functional module of the vision-task-based radar point cloud objective quality evaluation device corresponds to the vision-task-based radar point cloud objective quality evaluation method of the above embodiment, so the structure and technical elements of the device can be formed by a corresponding transformation of the method; the description is therefore omitted here and not repeated.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to limit the present invention, and those skilled in the art can make possible variations and modifications of the present invention using the methods and techniques disclosed above without departing from the spirit and scope of the present invention.
Those skilled in the art will appreciate that, in addition to implementing the system and its various devices, modules, units provided by the present invention as pure computer readable program code, the system and its various devices, modules, units provided by the present invention can be fully implemented by logically programming method steps in such a manner as to implement the same functions in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and various devices, modules and units thereof provided by the invention can be regarded as a hardware component, and the devices, modules and units included in the system for realizing various functions can also be regarded as structures in the hardware component; means, modules, units for performing the various functions may also be regarded as structures within both software modules and hardware components for performing the method.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (14)

1. A machine vision task-oriented sparse point cloud objective quality evaluation method, characterized by comprising the following steps:
inputting original point cloud data and noisy point cloud data obtained by adding noise to the original point cloud;
dividing the original point cloud and the noisy point cloud into spherical domains respectively, wherein the radius of each divided spherical domain is a multiple of the visual task target size;
calculating the eigenvalues and eigenvectors of each spherical-domain point set;
calculating the direction vector of the point set in the spherical domain from the eigenvalues and eigenvectors;
calculating the vector difference between the original point cloud and the noisy point cloud in the same spherical domain;
taking the attribute of each sphere neighborhood as a weight, weighting and summing the vector differences calculated for the sphere neighborhoods to obtain a quality evaluation score,
wherein the direction vector of the point set in the spherical domain is calculated either by multiplying the maximum eigenvalue by its corresponding eigenvector or by accumulating the products of the eigenvalues and their eigenvectors;
and the attribute of each sphere neighborhood is the number of points in that neighborhood.
2. The machine vision task-oriented sparse point cloud objective quality evaluation method as claimed in claim 1,
the sparse point cloud is a radar point cloud.
3. The machine vision task-oriented sparse point cloud objective quality evaluation method as claimed in claim 1 or claim 2, further comprising:
and bringing the original point cloud and the noise point cloud into a visual task, calculating the corresponding accuracy index, and if the quality evaluation score and the accuracy index have a linear relationship, estimating the service quality corresponding to the data by using the sparse point cloud objective quality evaluation method, otherwise, not estimating the service quality corresponding to the data.
4. The machine vision task-oriented objective quality evaluation method for sparse point clouds according to claim 1,
the multiplier that the radius of the ball separating area is the size of the visual task target is that the radius of the ball separating area is not more than 2 times of the length of the longest edge of the visual task target.
5. The machine vision task-oriented sparse point cloud objective quality evaluation method as claimed in claim 1 or claim 2,
the method for calculating the vector difference value of the original point cloud and the noisy point cloud in the same spherical domain is L1 norm or L2 norm or radial basis function.
6. The machine vision task-oriented sparse point cloud objective quality evaluation method as claimed in claim 1 or claim 2, wherein the vision task comprises any one of the following:
and (4) carrying out point cloud classification, segmentation, identification and tracking on visual tasks.
7. The machine vision task-oriented sparse point cloud objective quality evaluation method as claimed in claim 1 or claim 2, wherein the noise processing noise category comprises any one of:
Gaussian noise, random noise, regional noise.
8. The machine vision task-oriented sparse point cloud objective quality evaluation method as claimed in claim 3, wherein the accuracy index comprises any one of the following:
cross-over ratio IoU, average accuracy AP, recall index.
9. The machine vision task-oriented sparse point cloud objective quality evaluation method as claimed in claim 1 or claim 2, comprising:
inputting sparse point cloud data;
calculating the quality evaluation score using the aforementioned method for calculating the quality evaluation score;
comparing the quality evaluation score with a preset threshold: if the quality evaluation score is smaller than the threshold, the data is valid and can continue to be used; otherwise the data cannot be used and is discarded.
10. A machine vision task-oriented sparse point cloud objective quality evaluation device, characterized by comprising:
an input point cloud data module: inputting original point cloud data;
a noise adding module: adding noise to the original point cloud data to obtain noisy point cloud data;
a spherical domain segmentation module: dividing the original point cloud and the noisy point cloud into spherical domains respectively, wherein the radius of each divided spherical domain is a multiple of the visual task target size;
a module for calculating the direction vector of the spherical-domain point set: calculating the eigenvalues and eigenvectors of the spherical-domain point set, and calculating the direction vector of the point set from the eigenvalues and eigenvectors;
a vector difference calculation module: calculating the vector difference between the original point cloud and the noisy point cloud in the same spherical domain;
a quality evaluation score calculation module: calculating a quality evaluation score, taking the attribute of each sphere neighborhood as a weight;
wherein the direction vector of the spherical-domain point set is calculated either by multiplying the maximum eigenvalue by its corresponding eigenvector or by accumulating the products of the eigenvalues and their eigenvectors;
and the attribute of each sphere neighborhood is the number of points in that neighborhood.
11. The machine vision task-oriented sparse point cloud objective quality assessment device as claimed in claim 10,
the sparse point cloud is a radar point cloud.
12. The device for objective quality evaluation of sparse point cloud for machine vision task according to claim 10 or claim 11, further comprising:
a data validity judging module: bringing the original point cloud and the noisy point cloud into a visual task and calculating the corresponding accuracy index; if the quality evaluation score has a linear relationship with the accuracy index, the sparse point cloud objective quality evaluation device can be used to estimate the quality of service corresponding to the data; otherwise it cannot.
13. The machine vision task-oriented sparse point cloud objective quality evaluation device as claimed in claim 10 or 11,
the multiplier that the radius of the ball separating area is the size of the visual task target is that the radius of the ball separating area is not more than 2 times of the length of the longest edge of the visual task target.
14. The machine vision task-oriented sparse point cloud objective quality assessment device as claimed in claim 10 or claim 11,
the method for calculating the vector difference value of the original point cloud and the noise point cloud in the same spherical domain is L1 norm or L2 norm.
CN201911233989.XA 2019-12-05 2019-12-05 Machine vision task-oriented radar point cloud objective quality evaluation method and device Active CN113034419B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911233989.XA CN113034419B (en) 2019-12-05 2019-12-05 Machine vision task-oriented radar point cloud objective quality evaluation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911233989.XA CN113034419B (en) 2019-12-05 2019-12-05 Machine vision task-oriented radar point cloud objective quality evaluation method and device

Publications (2)

Publication Number Publication Date
CN113034419A CN113034419A (en) 2021-06-25
CN113034419B true CN113034419B (en) 2022-09-09

Family

ID=76450723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911233989.XA Active CN113034419B (en) 2019-12-05 2019-12-05 Machine vision task-oriented radar point cloud objective quality evaluation method and device

Country Status (1)

Country Link
CN (1) CN113034419B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116452583B (en) * 2023-06-14 2023-09-12 南京信息工程大学 Point cloud defect detection method, device and system and storage medium
CN117611805B (en) * 2023-12-20 2024-07-19 济南大学 3D abnormal region extraction method and device for regular three-dimensional semantic point cloud


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017219391A1 (en) * 2016-06-24 2017-12-28 深圳市唯特视科技有限公司 Face recognition system based on three-dimensional data
CN107749079A (en) * 2017-09-25 2018-03-02 北京航空航天大学 A kind of quality evaluation of point cloud and unmanned plane method for planning track towards unmanned plane scan rebuilding
CN108564650A (en) * 2018-01-08 2018-09-21 南京林业大学 Shade tree target recognition methods based on vehicle-mounted 2D LiDAR point clouds data
CN110246112A (en) * 2019-01-21 2019-09-17 厦门大学 Three-dimensional point cloud quality evaluating method in the room laser scanning SLAM based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A Quality Metric for 3D LiDAR Point Cloud based on Vision Tasks; Heng Zhao et al.; 2020 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB); 2021-03-19; pp. 1-5 *
Derivation of tree skeletons and error assessment using LiDAR point cloud data of varying quality; M. Bremer et al.; ISPRS Journal of Photogrammetry and Remote Sensing; 2013-04-15; pp. 39-50 *
Research and prospects of quality evaluation techniques for terrestrial 3D laser scanning point clouds; Hua Xianghong et al.; Geospatial Information; 2018-08-31; Vol. 16, No. 8; pp. 1-7 *

Also Published As

Publication number Publication date
CN113034419A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
Heinzler et al. Cnn-based lidar point cloud de-noising in adverse weather
Hodges et al. Single image dehazing using deep neural networks
CN112270252A (en) Multi-vehicle target identification method for improving YOLOv2 model
CN110020592A (en) Object detection model training method, device, computer equipment and storage medium
CN107784288B (en) Iterative positioning type face detection method based on deep neural network
US11574395B2 (en) Damage detection using machine learning
CN104751147A (en) Image recognition method
CN103279957A (en) Method for extracting remote sensing image interesting area based on multi-scale feature fusion
CN105205486A (en) Vehicle logo recognition method and device
CN113034419B (en) Machine vision task-oriented radar point cloud objective quality evaluation method and device
CN111709923A (en) Three-dimensional object detection method and device, computer equipment and storage medium
CN101482969B (en) SAR image speckle filtering method based on identical particle computation
CN104766095A (en) Mobile terminal image identification method
CN111444839A (en) Target detection method and system based on laser radar
Baloch et al. Hardware synthesize and performance analysis of intelligent transportation using canny edge detection algorithm
CN115457492A (en) Target detection method and device, computer equipment and storage medium
CN117011274A (en) Automatic glass bottle detection system and method thereof
CN113160117A (en) Three-dimensional point cloud target detection method under automatic driving scene
CN116863426A (en) Three-dimensional point cloud target detection method and device based on diffusion model
CN113763412B (en) Image processing method and device, electronic equipment and computer readable storage medium
EP4152274A1 (en) System and method for predicting an occupancy probability of a point in an environment, and training method thereof
CN102521811A (en) Method for reducing speckles of SAR (synthetic aperture radar) images based on anisotropic diffusion and mutual information homogeneity measuring degrees
CN111126617B (en) Method, device and equipment for selecting fusion model weight parameters
CN110852255B (en) Traffic target detection method based on U-shaped characteristic pyramid
Xue et al. Detection of Various Types of Metal Surface Defects Based on Image Processing.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant