CN110956137A - Point cloud data target detection method, system and medium

Info

Publication number
CN110956137A
Authority
CN
China
Prior art keywords
point cloud
cloud data
target detection
image
target
Legal status
Pending
Application number
CN201911212740.0A
Other languages
Chinese (zh)
Inventor
胡小波
吴树丽
Current Assignee
LeiShen Intelligent System Co Ltd
Original Assignee
LeiShen Intelligent System Co Ltd
Application filed by LeiShen Intelligent System Co Ltd
Priority to CN201911212740.0A
Publication of CN110956137A

Classifications

    • G06V20/00 Scenes; Scene-specific elements
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V2201/07 Target detection


Abstract

The embodiment of the invention discloses a method, a system and a medium for target detection in point cloud data. The method comprises the following steps: determining a scene image to be detected according to the acquired point cloud data; inputting the scene image into a pre-trained target detection model to obtain a target detection result; and marking a target object in the point cloud data according to the target detection result. The technical scheme of the invention improves the accuracy of point cloud data target detection results.

Description

Point cloud data target detection method, system and medium
Technical Field
The embodiment of the invention relates to the technical field of laser radars, in particular to a method, a system and a medium for detecting a target of point cloud data.
Background
With the development of artificial intelligence technology, automatic detection of environmental targets has become a common approach in the environmental sensing process.
Because a camera is strongly affected by ambient light when capturing images, in order to ensure the stability of target detection results, the prior art generally applies a clustering algorithm to perform target detection on point cloud data acquired by a laser radar. However, the clustering algorithm is highly sensitive to the choice of the number of clusters (i.e., the K value): if the K value is chosen inaccurately, the clustering result is inaccurate, which in turn degrades the accuracy of the point cloud data target detection result. This leaves considerable room for improvement.
Disclosure of Invention
The embodiment of the invention provides a method, a system and a medium for detecting a target of point cloud data, which improve the accuracy of a point cloud data target detection result.
In a first aspect, an embodiment of the present invention provides a method for detecting a target of point cloud data, including:
determining a scene image to be detected according to the acquired point cloud data;
inputting the scene image into a pre-trained target detection model to obtain a target detection result;
and marking a target object in the point cloud data according to the target detection result.
In a second aspect, an embodiment of the present invention further provides an apparatus for detecting a target in point cloud data, where the apparatus includes:
the image determining module is used for determining a scene image to be detected according to the acquired point cloud data;
the target detection module is used for inputting the scene image into a pre-trained target detection model to obtain a target detection result;
and the object marking module is used for marking a target object in the point cloud data according to the target detection result.
In a third aspect, embodiments of the present invention further provide a surveying and mapping system, which includes at least one lidar and a control device; the control equipment is connected with the at least one laser radar, and the at least one laser radar is used for collecting point cloud data; the control apparatus includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for object detection of point cloud data according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method for object detection of point cloud data according to the first aspect.
According to the method, the system and the medium for detecting targets in point cloud data, the acquired point cloud data are processed and converted into a scene image to be detected, a pre-trained target detection model performs target detection on the scene image, and the position of the target object is marked in the point cloud data according to the detection result on the scene image. When the scheme of the embodiment of the invention detects targets in point cloud data, detection on a scene image by a deep learning model replaces the existing clustering algorithm. This avoids the problem that an inaccurate choice of the K value affects the detection result when a clustering algorithm detects targets in point cloud data, improves the accuracy of the point cloud data target detection result, and provides a new approach to point cloud data target detection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of a method for detecting a target in point cloud data according to a first embodiment of the present invention;
FIG. 2 is a flowchart of a method for detecting a target in point cloud data according to a second embodiment of the present invention;
FIG. 3 is a flowchart of a method for detecting a target in point cloud data according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus for detecting a target of point cloud data according to a fourth embodiment of the present invention;
fig. 5A is a schematic structural diagram of a mapping system according to a fifth embodiment of the present invention;
fig. 5B is a schematic structural diagram of a control device of a mapping system in the fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Before the embodiments of the present invention are described, a typical use scenario is introduced. Many kinds of detection objects are applicable to the embodiments of the present invention: a large target object, such as a house; a medium object, such as a vehicle; or a small object, such as a pedestrian. The embodiments of the present invention are not limited in this respect. In the following, the embodiments of the present invention are described taking a pedestrian as the target object, but detection is not limited to pedestrians in the point cloud data.
Example one
Fig. 1 is a flowchart of a method for detecting a target in point cloud data according to an embodiment of the present invention, which is suitable for detecting a target object in point cloud data. The method may be performed by a control device in the mapping system of the embodiment of the present invention, which may be implemented in software and/or hardware. As shown in fig. 1, the method specifically includes the following steps:
s101, determining a scene image to be detected according to the acquired point cloud data.
The point cloud data may be a set of three-dimensional coordinate vectors obtained by scanning the current scene with the laser radar and recorded in the form of a point cloud, where each three-dimensional coordinate vector may be represented as (x, y, z). In addition, the point cloud data may also include the reflected light intensity value of each point. The scene image to be detected may be a two-dimensional image for target detection obtained by converting the point cloud data. A two-dimensional image from any viewing angle of the current scene may be chosen; which viewing angle is converted into the two-dimensional image may depend on the position of the laser radar, the actual detection requirements, and so on. Preferably, the scene image in the embodiment of the present invention may be a bird's-eye view of the current scene.
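For concreteness, the data layout described above can be pictured as an N×4 array. The following is a minimal sketch; the patent does not prescribe any particular in-memory format, and the random values stand in for a real scan:

```python
import numpy as np

# One row per point: x, y, z in metres plus the reflected light intensity.
# Random values here are a stand-in for a real lidar scan.
num_points = 10_000
xyz = np.random.uniform(-50.0, 50.0, size=(num_points, 3)).astype(np.float32)
intensity = np.random.uniform(0.0, 1.0, size=(num_points, 1)).astype(np.float32)
point_cloud = np.hstack([xyz, intensity])  # shape (N, 4)
```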
Optionally, in the embodiment of the present invention, a laser radar fixed at a certain position may collect point cloud data of the surrounding environment of the scene where it is located, and transmit the collected point cloud data to the control device for target detection. For example, a laser radar fixed on an autonomous vehicle collects point cloud data of the surrounding environment in real time while the vehicle is driving and transmits it to the control device of the vehicle. The point cloud data received by the control device consists of three-dimensional coordinate data of each point in the current scene; the three-dimensional coordinate data then needs to be processed, converting the three-dimensional point cloud data into two-dimensional image data to be detected, i.e., the scene image to be detected. Optionally, there are many methods for determining the scene image to be detected from the acquired point cloud data, and this embodiment is not limited in this respect.
One possible implementation may be: determining a panoramic image as the scene image to be detected according to the acquired point cloud data. Specifically, the three-dimensional coordinate data of each point in the point cloud data may be mapped to position coordinates of a two-dimensional image, and the reflected light intensity value of each point mapped to the pixel value at those position coordinates, so that a panoramic image corresponding to the point cloud data is obtained and serves as the scene image to be detected. For example, assuming that the panorama is a bird's-eye view of the current scene, the x-coordinate value and the y-coordinate value in the three-dimensional coordinate data of each point may be mapped to the position coordinates of the point in the bird's-eye view according to a preset mapping rule (e.g., dividing both by a preset value), and the reflected light intensity value of the point mapped to a pixel value between 0 and 255 according to a pixel value mapping rule and filled in at the position coordinates of the point in the bird's-eye view; the resulting bird's-eye view is the scene image to be detected. Alternatively, the panoramic image may be obtained from the three-dimensional coordinate data of each point alone, mapping the coordinates to both the position coordinates and the pixel values of the two-dimensional image; how to do so is described in detail in the following embodiments and is not repeated here. It should be noted that whether a coordinate value or the reflected light intensity value is mapped to the pixel value depends on the attribute characteristics of the target object to be detected.
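A minimal sketch of this first implementation in Python, assuming a bird's-eye view, an intensity range of 0-1, and an illustrative 5 cm-per-pixel resolution (none of these values are fixed by the description above):

```python
import numpy as np

def intensity_birdseye(point_cloud, x_range=(-50.0, 50.0),
                       y_range=(-50.0, 50.0), res=0.05):
    """Map each point's x/y coordinates to image position coordinates and
    its reflected intensity to the pixel value at those coordinates."""
    x, y, intensity = point_cloud[:, 0], point_cloud[:, 1], point_cloud[:, 3]
    # Keep only points inside the chosen ground area.
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    x, y, intensity = x[keep], y[keep], intensity[keep]
    # Divide by the preset value (the resolution) to get pixel coordinates.
    cols = ((x - x_range[0]) / res).astype(np.int32)
    rows = ((y - y_range[0]) / res).astype(np.int32)
    h = int((y_range[1] - y_range[0]) / res)
    w = int((x_range[1] - x_range[0]) / res)
    image = np.zeros((h, w), dtype=np.uint8)
    # Map intensity (assumed in [0, 1]) to a pixel value between 0 and 255;
    # where several points share a pixel, the last one written wins.
    image[rows, cols] = (np.clip(intensity, 0.0, 1.0) * 255).astype(np.uint8)
    return image
```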
Another possible implementation may be: determining a panoramic image according to the acquired point cloud data, and then cropping and/or enlarging the panoramic image to obtain the scene image to be detected. Optionally, after the panoramic image is determined as in the first implementation, instead of directly taking it as the image to be detected, the panoramic image is further cropped and/or enlarged, and a region of interest is extracted from it as the scene image to be detected. Specifically, because the coverage of the point cloud data acquired by the laser radar is large, part of the panoramic image determined from the point cloud data may not belong to the region that actually needs to be detected. To improve the efficiency and accuracy of subsequent target object detection, the embodiment of the present invention may crop away the regions of the panoramic image that do not need to be detected. For example, a laser radar fixed on an autonomous vehicle collects point cloud data within 50 m around the vehicle, but while driving it is only necessary to detect whether pedestrians exist within 20 m around the vehicle; the region beyond 20 m does not need to be detected and can be cropped out of the 50 m panoramic image. Because the collection range of the laser radar is large, to ensure the accuracy of subsequent detection results this embodiment may further enlarge the generated panoramic image by a certain proportion (for example, from a 1:1 proportion to the actual scene); the enlarged image shows more image detail, which facilitates the detection of small target objects. Optionally, the panoramic image may be cropped first and the cropped region then enlarged, so that all target objects in the region to be detected can be identified more quickly and accurately.
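A sketch of the cropping and enlarging step under the 50 m / 20 m example above, assuming the lidar sits at the image centre, with a hypothetical 5 cm resolution and 2× enlargement factor:

```python
import cv2

def crop_and_enlarge(panorama, res=0.05, keep_radius=20.0, scale=2.0):
    """Cut out the central region of interest (e.g. 20 m around the vehicle
    out of a 50 m panorama) and enlarge it to expose more detail."""
    h, w = panorama.shape[:2]
    cy, cx = h // 2, w // 2                     # lidar assumed at the centre
    half = min(int(keep_radius / res), cy, cx)  # pixels covered by the radius
    roi = panorama[cy - half:cy + half, cx - half:cx + half]
    # Enlarging makes small targets occupy more pixels in the image.
    return cv2.resize(roi, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_LINEAR)
```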
And S102, inputting the scene image into a pre-trained target detection model to obtain a target detection result.
The target detection model may be a detection model trained in advance on an initial model with a large amount of sample data, capable of detecting a target object in an image. Algorithms used to train the target detection model may include, but are not limited to: the YOLO (You Only Look Once) algorithm, the SSD (Single Shot MultiBox Detector) algorithm, R-FCN, the FPN (Feature Pyramid Network) algorithm, the M2Det algorithm, a recurrent neural network (RNN), a long short-term memory (LSTM) network, a gated recurrent unit, a simple recurrent unit, an auto-encoder, a decision tree, a random forest, feature-mean classification, a classification and regression tree, a hidden Markov model, a K-nearest neighbor (KNN) algorithm, a logistic regression model, a Bayesian model, a Gaussian model, Kullback-Leibler (KL) divergence, and the like. Optionally, the sample data for training the target detection model may include: multiple groups of point cloud data, the target objects framed in each group of point cloud data, and the like.
Optionally, in the embodiment of the present invention, the scene image to be detected obtained in S101 may be input into the pre-trained target detection model and the model run; the target detection model then performs target object detection on the input scene image based on the algorithm used during training and outputs a detection result. Optionally, the detection result output by the target detection model may consist of labeling frames selecting, in the input scene image, each position region where a target object may exist, together with the confidence of each labeling frame, i.e., the reliability of that frame.
Optionally, the target detection model in the embodiment of the present invention may be constructed based on any target detection algorithm, which is not limited in this embodiment; for example, it may include, but is not limited to: the SSD algorithm, the SNIP algorithm, the YOLOv3 algorithm, and the like. Preferably, the embodiment of the present invention may select a deep learning model with the YOLOv3 network structure as the target detection model; the YOLOv3 network structure can identify target objects of three different sizes: large, medium and small. The YOLOv3 network structure includes 106 data processing layers, in which the 82nd data processing layer is mainly used for predicting large-size objects, the 92nd for medium-size objects, and the 106th for small-size objects.
Optionally, if the current detection requirement is to detect a single type of target object, the data processing layers in the target detection model that detect the other two object sizes are redundant. For this special case, before inputting the scene image into the pre-trained target detection model, the embodiment of the present invention may further include: deleting the redundant data processing layers in the YOLOv3 network structure according to the size of the target object to be detected, to obtain the target detection model. Specifically, according to the actual detection requirement and a size judgment rule, combined with the number of pixels the target object occupies in the scene image, it is first determined whether the target object is of large, medium or small size; the data processing layers that are not needed when detecting objects of that size are then treated as redundant data processing layers and deleted from the YOLOv3 network structure, yielding the target detection model. For example, if only pedestrians in the scene image are to be detected: pedestrians occupy few pixels (e.g., 4-5 pixels) in the scene image and satisfy the small-size judgment rule, so they are small-size target objects. The 82nd data processing layer of the YOLOv3 network structure mainly predicts large-size objects and the 92nd mainly predicts medium-size objects, and neither needs to process data when detecting small-size pedestrians; the 82nd and 92nd data processing layers can therefore be treated as redundant, deleted from the complete YOLOv3 network structure, and the target detection model constructed from the remaining network. When the embodiment of the present invention detects target objects of a single size, trimming the YOLOv3 network structure and removing redundant network branches makes the model better suited to single-size detection and faster.
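The following is an illustrative sketch only, not the patent's actual pruning procedure: it merely shows the idea of keeping the single detection head that matches the target size. In a real YOLOv3 network the corresponding branches would be removed from the network definition before training.

```python
def prune_detection_heads(scale_outputs, keep="small"):
    """Keep only the prediction head for the required object size and drop
    the redundant ones (e.g. the large- and medium-size heads when only
    pedestrians are detected)."""
    return {keep: scale_outputs[keep]}

# Hypothetical per-scale outputs of a YOLOv3-style network, tagged by the
# data processing layer that produces them in the full 106-layer structure.
outputs = {"large": "layer-82 output",
           "medium": "layer-92 output",
           "small": "layer-106 output"}
pedestrian_head = prune_detection_heads(outputs, keep="small")
```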
And S103, marking the target object in the point cloud data according to the target detection result.
The content of the detection result output by the target detection model may include: a plurality of labeling boxes for labeling the regions where the target object possibly exists, and the confidence of each labeling box, namely the reliability of the labeling box.
Optionally, the target detection result contains many labeling frames, and several labeling frames may correspond to one target object, or a labeled region may not contain a target object at all. Therefore, in this step the target detection result first needs to be screened. The screened labeling frames are frames on the two-dimensional image, whereas the target object must be marked in the three-dimensional point cloud data, so the frames must first be converted into the coordinate system of the three-dimensional point cloud data before marking. The specific process may include the following two substeps:
and S1031, screening each labeling frame according to the confidence of each labeling frame in the target detection result, and determining the target labeling frame.
Specifically, each labeling frame in the target detection result has a corresponding confidence; the labeling frames may then be screened according to their confidences using a preset target detection result post-processing algorithm, for example, a non-maximum suppression (NMS) algorithm.
Optionally, in the embodiment of the present invention, before the post-processing algorithm screens the labeling frames, each labeling frame in the target detection result is preliminarily screened against a preset confidence threshold, and the post-processing algorithm then further screens the frames remaining after the preliminary screening. For example, the confidence of each labeling frame may be compared with a preset confidence threshold, the frames whose confidence is below the threshold filtered out, and the remaining frames further screened with an NMS algorithm, to obtain at least one target labeling frame that finally and accurately labels the position of a target object.
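A minimal sketch of the two-stage screening, assuming axis-aligned boxes in (x1, y1, x2, y2) pixel form and illustrative threshold values:

```python
import numpy as np

def filter_boxes(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    """Preliminary confidence screening followed by non-maximum suppression
    (NMS). `boxes` is an (N, 4) array, `scores` an (N,) array."""
    keep = scores >= conf_thresh               # preliminary screening
    boxes, scores = boxes[keep], scores[keep]
    order = scores.argsort()[::-1]             # highest confidence first
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    kept = []
    while order.size > 0:
        i = order[0]
        kept.append(i)
        # Intersection of the kept box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Suppress boxes that overlap the kept box too strongly.
        order = order[1:][iou <= iou_thresh]
    return boxes[kept], scores[kept]
```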
And S1032, marking the target object in the point cloud data according to the data information of the target marking frame on the scene image.
The data information of the target annotation box on the scene image may include, but is not limited to: position coordinate information and pixel value information of the target labeling frame on the scene image, and the like.
Specifically, the target labeling frame determined in S1031 marks the position of the target object on the two-dimensional image, so its position coordinates are coordinates on the two-dimensional image. Since the final requirement of point cloud data target detection is to mark the target object in the three-dimensional point cloud data, this substep needs to determine the position coordinates of the target labeling frame in the coordinate system of the three-dimensional point cloud data, by applying the inverse of the algorithm with which S101 converted the point cloud data into the scene image to the data information of the target labeling frame on the scene image. For example, if in S101 the x-coordinate and y-coordinate values of the point cloud data were mapped to position coordinates in the scene image and the z-coordinate value to the pixel value, then the inverse process of S101 maps the position coordinates of the target labeling frame back to x-coordinate and y-coordinate values in the point cloud data, and the pixel value of the target labeling frame back to a z-coordinate value. A point cloud frame belonging to that coordinate region is then selected in the point cloud data according to the determined three-dimensional position coordinates of the target labeling frame; the region framed in the point cloud data is the region where the target object is located.
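A sketch of the inverse mapping for one target labeling frame, assuming the forward mapping of the `intensity_birdseye` sketch above; the ranges and resolution must match whatever forward mapping was actually used:

```python
import numpy as np

def mark_target_in_cloud(point_cloud, box, x_range=(-50.0, 50.0),
                         y_range=(-50.0, 50.0), res=0.05):
    """Convert a box's pixel coordinates back into point-cloud x/y
    coordinates and select the points falling inside that region."""
    c1, r1, c2, r2 = box                      # pixel box (x1, y1, x2, y2)
    # Inverse of the forward mapping: multiply by the resolution, undo offset.
    x_min, x_max = c1 * res + x_range[0], c2 * res + x_range[0]
    y_min, y_max = r1 * res + y_range[0], r2 * res + y_range[0]
    x, y = point_cloud[:, 0], point_cloud[:, 1]
    in_box = (x >= x_min) & (x <= x_max) & (y >= y_min) & (y <= y_max)
    return point_cloud[in_box]                # the framed target region
```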
It should be noted that, in the embodiment of the present invention, the target detection may be performed on the point cloud data acquired by a plurality of laser radars at the same time. The specific processing procedure for the point cloud data collected by each lidar may be similar to the processing manner of S101-S103 described above. In addition, each lidar may acquire point cloud data in a current scene in real time or at regular time, and the operation of S101 to S103 is performed on a group of point cloud data acquired by each lidar each time.
According to the point cloud data target detection method provided by the embodiment of the invention, the acquired point cloud data are processed and converted into a scene image to be detected, a pre-trained target detection model performs target detection on the scene image, and the position of the target object is marked in the point cloud data according to the detection result on the scene image. When the scheme of the embodiment of the invention detects targets in point cloud data, detection on a scene image by a deep learning model replaces the existing clustering algorithm. This avoids the problem that an inaccurate choice of the K value affects the detection result when a clustering algorithm detects targets in point cloud data, improves the accuracy of the point cloud data target detection result, and provides a new approach to point cloud data target detection.
Further, training a deep learning network model requires a large amount of sample data, but point cloud data sets are relatively scarce. In practice, the sample data available for training cannot cover every scene, so when the trained target detection model detects targets in a new scene not seen during training, the detection effect may be poor. For this reason, in the embodiment of the present invention, if the acquired point cloud data come from a new scene, before inputting the scene image into the pre-trained target detection model, the method further includes: acquiring sample point cloud data of the new scene, and optimizing the training of the target detection model with the sample point cloud data together with the historical sample data. Specifically, when detecting a target object in a new scene, some point cloud data of the new scene may first be obtained and processed into sample point cloud data; for example, for each group of point cloud data, the corresponding scene image is determined, the target object position is marked in the scene image, and the scene image together with the marked image is taken as a group of sample point cloud data. The sample point cloud data of the new scene are then added to the historical sample data with which the target detection model was previously trained, the sample data are updated, and the target detection model is further optimized and retrained with the updated sample data; the retrained model can then detect target objects in the new scene more accurately. Continuously updating and optimizing the target detection model with ever richer sample data greatly improves both the range of application and the accuracy of the model.
Example two
Fig. 2 is a flowchart of a target detection method for point cloud data in the second embodiment of the present invention. This embodiment is further optimized on the basis of the above embodiments and specifically describes how to determine the scene image to be detected from the acquired point cloud data, taking the bird's-eye view as the scene image to be detected. As shown in fig. 2, the method of this embodiment specifically includes the following steps:
s201, according to a preset proportion, mapping the horizontal coordinate data of each point in the acquired point cloud data into the image position coordinates of the point.
The preset proportion may be set in advance according to the required detail resolution of the scene image to be detected, i.e., a value chosen for the required image resolution. The horizontal coordinate data of each point (e.g., its coordinate data under the bird's-eye view) refer to the x-coordinate value and y-coordinate value on the horizontal plane (e.g., the bird's-eye view plane) of the three-dimensional coordinates (x, y, z) of the point cloud data.
Specifically, in this step the horizontal coordinate data of each point in the point cloud data, i.e., the x-coordinate value and the y-coordinate value, are divided or multiplied by the preset proportion, and the resulting values are taken as the position coordinates of the point mapped onto the two-dimensional image. Optionally, the preset proportion may be a single proportion value, in which case both the x-coordinate value and the y-coordinate value are mapped with that value; it may also consist of a first preset proportion value for the x coordinate and a second preset proportion value for the y coordinate, in which case the x-coordinate value is mapped with the first value and the y-coordinate value with the second. For example, if the measurement unit is meters and a resolution of 5 cm is desired on the image x-coordinate axis, the negative of the y-coordinate value in the point cloud data is divided by 0.05; for a resolution of 3 cm on the image y-coordinate axis, the negative of the x-coordinate value is divided by 0.03. The axes are negated and swapped here to match image coordinate conventions.
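A sketch of this mapping under the worked example above (5 cm resolution on the image x-axis, 3 cm on the image y-axis, point-cloud axes negated and swapped):

```python
import numpy as np

def map_horizontal(point_cloud, x_res=0.05, y_res=0.03):
    """Map horizontal point-cloud coordinates to image position coordinates
    by dividing by the preset proportion values."""
    img_x = (-point_cloud[:, 1] / x_res).astype(np.int32)  # -y / 0.05
    img_y = (-point_cloud[:, 0] / y_res).astype(np.int32)  # -x / 0.03
    # Shift so all pixel coordinates are non-negative.
    return img_x - img_x.min(), img_y - img_y.min()
```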
S202, after the height coordinate data of each point in the acquired point cloud data is mapped into a pixel value, filling the pixel value in the position coordinate of the image of the point to obtain a gray panoramic image.
Optionally, the target object identified in the embodiment of the present invention is an object with an obvious height characteristic. In this step, the pixel value of each point in the image may be mapped from the height coordinate value of the point cloud data, so that the subsequent target detection model can accurately find the target object by examining pixel values and object contours. For example, if the target object to be detected is a pedestrian on the road, the height characteristics of a pedestrian are distinctive relative to other objects on the road; different height values are mapped to pixel values of the two-dimensional image, and the subsequent target detection model can accurately detect the pedestrian from the pixel value range corresponding to the pedestrian's height range and from the pedestrian's contour. Specifically, mapping the height coordinate data of each point in the acquired point cloud data to a pixel value in this step may include the following two substeps:
s2021, determining the height range to be detected according to the position of the laser radar for collecting the point cloud data.
Specifically, the height value of each point collected by the laser radar is given relative to the fixed height of the laser radar; the height range of interest is therefore determined as the height range to be detected according to the fixed position of the laser radar.
S2022, according to the height range to be detected and the pixel value range, mapping the height coordinate data of each point in the acquired point cloud data into a pixel value.
For example, we may set the height range to be detected from 5 m below the origin to 0.5 m above the origin: height values greater than 0.5 m are set to 0.5 m and height values less than -5 m are set to -5 m, i.e., any value outside the range is clamped to the maximum or minimum. The values are then rescaled to between 0 and 255 and converted to an integer data type.
Optionally, in this step, after the height value of each point in the point cloud data is mapped to its corresponding pixel value, that pixel value is the point's pixel value in the two-dimensional image; it may therefore be filled into the two-dimensional image at the image position coordinates of the point determined in S201. After the corresponding pixel values have been filled in at every position coordinate of the two-dimensional image, the resulting grayscale image is the grayscale panoramic image of the acquired point cloud data.
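A minimal sketch of S2021-S2022 using the -5 m to 0.5 m example above:

```python
import numpy as np

def height_to_pixels(z, z_min=-5.0, z_max=0.5):
    """Clamp heights to the range to be detected, then rescale the values
    to 0-255 and convert to an integer type."""
    z = np.clip(z, z_min, z_max)  # out-of-range values saturate at the limits
    return (255 * (z - z_min) / (z_max - z_min)).astype(np.uint8)
```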
And S203, processing the gray panoramic image by adopting spectral color mapping to obtain a color panoramic image.
Specifically, since gray areas and shadow areas are difficult to distinguish in a grayscale panoramic image, in order to improve the accuracy of subsequent target object detection, the embodiment of the present invention may process the grayscale panoramic image into a color image with a spectral color mapping algorithm after the grayscale panoramic image is obtained. There are many specific spectral color mapping algorithms, and this embodiment is not limited to any one of them; for example, each gray value in the grayscale panoramic image may be mapped to a corresponding color value according to a mapping relation between gray values and color values, yielding a color panoramic image. The color panoramic image obtained in this step is more conducive to feature detection of the target object by the target detection model.
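One common way to realise such a mapping (an assumption for illustration; the description above does not name a specific colormap) is to apply a spectral-style colormap to the 8-bit grayscale image:

```python
import cv2

def gray_to_color(gray_panorama):
    """Map each gray value to a colour value so that gray and shadow
    regions become distinguishable hues. Input must be 8-bit grayscale."""
    return cv2.applyColorMap(gray_panorama, cv2.COLORMAP_JET)
```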
And S204, cutting and/or amplifying the color panoramic image to obtain a scene image to be detected.
Optionally, a method for performing cropping and/or enlarging processing on the color panoramic image in this step is already described in the above embodiments, and is not described herein again. In the step, the color panoramic image is cut and/or amplified, so that the interference of areas which do not need to be detected on subsequent target detection can be avoided, the local detail information of the point cloud data is amplified, and the target object can be rapidly and accurately detected by a target detection model.
And S205, inputting the scene image into a pre-trained target detection model to obtain a target detection result.
And S206, marking the target object in the point cloud data according to the target detection result.
According to the point cloud data target detection method provided by the embodiment of the invention, the horizontal coordinate data and the height coordinate in the point cloud data are mapped respectively to the position coordinates and the pixel value of each point to obtain a grayscale panoramic image of the point cloud data; the grayscale panoramic image is processed into a color panoramic image, which is then cropped and/or enlarged to obtain the scene image to be detected; target detection is performed on the scene image with the target detection model, and the position of the target object is marked in the point cloud data. In the scheme of the embodiment of the invention, when the scene image to be detected is determined from the point cloud data, the panoramic image is obtained by mapping the horizontal coordinate values and the height coordinate values, so it contains the contour information and height information of every object in the scene, and being a color image it helps the subsequent target detection model detect the target object accurately. In addition, after the panoramic image is determined, it is cropped and/or enlarged, so that non-detection regions are filtered out while local detail is magnified, allowing the subsequent target detection model to detect the target object accurately and quickly.
EXAMPLE III
Fig. 3 is a flowchart of a target detection method for point cloud data in the third embodiment of the present invention. This embodiment is further optimized on the basis of the above embodiments and specifically describes another way to determine the scene image to be detected from the acquired point cloud data. As shown in fig. 3, the method of this embodiment specifically includes the following steps:
s301, interference point cloud data are removed from the acquired point cloud data.
The interference point cloud data may be point cloud data that interferes with the detection process when the target detection model detects the target object. For example, if the target object is a pedestrian and the scene image to be detected determined from the laser radar data is the bird's-eye view, the ground area contributes a large amount of point cloud data to the bird's-eye view, and this data is mixed with the pedestrian's point cloud data, easily interfering with pedestrian detection; the point cloud data of the ground area are thus interference point cloud data.
Optionally, there are many methods for removing the interference point cloud data from the acquired point cloud data in the embodiment of the present invention, and this embodiment is not limited in this respect. A first implementation may be: determining the height range of the interference point cloud according to the position of the laser radar collecting the point cloud data and the height characteristics of the interfering objects, and eliminating, as interference point cloud data, the point cloud data whose height value falls within that range. For example, if the laser radar is 5 m above the ground and the interfering objects are 0 m-0.3 m above the ground, the interference point cloud height range is -5 m to -4.7 m, and the point cloud data with height values between -5 m and -4.7 m can be removed as interference point cloud data. The advantage of this implementation is that interference point cloud data can be removed from the point cloud data simply and quickly.
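A minimal sketch of this first implementation, using the -5 m to -4.7 m example above:

```python
import numpy as np

def remove_by_height(point_cloud, z_low=-5.0, z_high=-4.7):
    """Drop points whose height falls inside the interference range
    (here, the ground band for a lidar mounted 5 m above the ground)."""
    z = point_cloud[:, 2]
    return point_cloud[(z < z_low) | (z > z_high)]
```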
A second possible implementation may be: performing interference region fitting on the acquired point cloud data, determining the interference point cloud data and non-interference point cloud data according to the fitting result, and removing the determined interference point cloud data from the acquired point cloud data. Specifically, there are many algorithms for fitting the interference region, and this embodiment is not limited to any one of them. For example, when the target object detected in this embodiment is a pedestrian, the ground area is the interference region; a ground region equation for the current scene can be fitted with a ground fitting algorithm, and then the point cloud data belonging to the ground region are determined, according to that equation, as interference point cloud data and removed from the acquired point cloud data. Compared with the first implementation, determining the interference point cloud data by interference region fitting improves the accuracy of that determination.
A third possible implementation may be: after the interference point cloud data and the non-interference point cloud data have been determined as in the second implementation, point cloud data whose height difference from the fitted interference region is within a preset distance are additionally found among the non-interference point cloud data and eliminated from the acquired point cloud data as interference point cloud data. For example, after the interference point cloud data belonging to the ground region have been determined, the ground region equation is used to find the point cloud data whose height difference from the ground plane is within 0.3 m, and these are eliminated from the acquired point cloud data together with the interference point cloud data belonging to the ground region. The advantage of this implementation is that interference points missed during the region fitting are merged into the interference point cloud data, so the interference point cloud data are removed more comprehensively and accurately.
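A combined sketch of the second and third implementations, using a simple RANSAC plane fit for the ground region (the description does not prescribe a particular fitting algorithm; the iteration count, inlier threshold and 0.3 m margin are illustrative):

```python
import numpy as np

def remove_ground(point_cloud, iterations=100, dist_thresh=0.05, margin=0.3):
    """Fit a ground plane by RANSAC, then drop the fitted ground points plus
    any point within `margin` metres of the plane (the third implementation's
    extra sweep for points the fit missed)."""
    pts = point_cloud[:, :3]
    rng = np.random.default_rng(0)
    best_inliers, best_plane = 0, None
    for _ in range(iterations):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        d = -normal.dot(sample[0])
        inliers = int((np.abs(pts @ normal + d) < dist_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    if best_plane is None:
        return point_cloud                # no plane found; leave cloud as-is
    normal, d = best_plane
    dist = np.abs(pts @ normal + d)
    return point_cloud[dist > margin]     # ground and near-ground removed
```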
Optionally, the interference point cloud data in the embodiment of the present invention may also be some noise data, and the noise data in the acquired point cloud data may be removed by a denoising method.
S302, determining a scene image to be detected according to the point cloud data remained after the elimination.
Optionally, in S301, after the interference point cloud data is removed from the acquired point cloud data, according to the remaining point cloud data after removal, a panoramic image corresponding to the remaining point cloud data is determined, and then the determined panoramic image is cut and/or enlarged to obtain a scene image to be detected. The specific process how to determine the panoramic image of the point cloud data and how to perform the cropping and/or enlarging processing on the panoramic image has been described in the foregoing embodiments, and is not described herein again.
And S303, inputting the scene image into a pre-trained target detection model to obtain a target detection result.
And S304, marking the target object in the point cloud data according to the target detection result.
According to the target detection method for point cloud data provided by the embodiment of the invention, after the point cloud data are obtained, the interference point cloud data are removed, the scene image to be detected is determined from the remaining point cloud data, target detection is performed on the scene image with the target detection model, and the position of the target object is marked in the point cloud data. In the scheme of the embodiment of the invention, removing the interference data immediately after the point cloud data are acquired prevents the interference point cloud data from harming the target detection accuracy. Especially when small target objects are detected, removing the interference point cloud data and cropping and enlarging the panoramic image during scene image generation increase the proportion of pixels the small target object occupies in the image, improving both the efficiency and the accuracy of small-target detection.
Example four
Fig. 4 is a schematic structural diagram of a target detection apparatus for point cloud data according to a fourth embodiment of the present invention. The device can execute the target detection method of the point cloud data provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. As shown in fig. 4, the apparatus specifically includes:
an image determining module 401, configured to determine a scene image to be detected according to the acquired point cloud data;
a target detection module 402, configured to input the scene image into a pre-trained target detection model to obtain a target detection result;
and an object labeling module 403, configured to label a target object in the point cloud data according to the target detection result.
According to the target detection device for point cloud data provided by the embodiment of the invention, the acquired point cloud data are processed and converted into a scene image to be detected, a pre-trained target detection model performs target detection on the scene image, and the position of the target object is marked in the point cloud data according to the detection result on the scene image. When the scheme of the embodiment of the invention detects targets in point cloud data, detection on a scene image by a deep learning model replaces the existing clustering algorithm. This avoids the problem that an inaccurate choice of the K value affects the detection result when a clustering algorithm detects targets in point cloud data, improves the accuracy of the point cloud data target detection result, and provides a new approach to point cloud data target detection.
Further, the image determining module 401 includes:
the panoramic image determining unit is used for determining a panoramic image according to the acquired point cloud data;
and the scene image determining unit is used for cutting and/or amplifying the panoramic image to obtain a scene image to be detected.
Further, the panoramic image determination unit specifically includes:
the position determining subunit is used for mapping the horizontal coordinate data of each point in the acquired point cloud data into the image position coordinate of the point according to a preset proportion;
the pixel determination subunit is used for mapping the height coordinate data of each point in the acquired point cloud data into a pixel value and filling the pixel value in the position coordinate of the image of the point to obtain a gray panoramic image;
and the color mapping subunit is used for processing the grayscale panoramic image by adopting spectral color mapping to obtain a color panoramic image.
Further, the pixel determination subunit is specifically configured to:
determining a height range to be detected according to the position of the laser radar for collecting the point cloud data;
and mapping the height coordinate data of each point in the acquired point cloud data into a pixel value according to the height range to be detected and the pixel value range.
Further, the image determining module 401 is further specifically configured to:
eliminating interference point cloud data from the acquired point cloud data;
and determining the scene image to be detected according to the point cloud data remaining after the elimination.
Further, the above apparatus further comprises:
and the model processing module is used for, if the target detection model is a deep learning model with the YOLOv3 network structure, deleting the redundant data processing layers in the YOLOv3 network structure according to the size of the target object to be detected before the scene image is input into the pre-trained target detection model, to obtain the target detection model.
Further, the object labeling module 403 is specifically configured to:
screening each labeling frame according to the confidence of each labeling frame in the target detection result to determine a target labeling frame;
and marking a target object in the point cloud data according to the data information of the target marking frame on the scene image.
Further, the above apparatus further comprises:
and the model training module is used for acquiring sample point cloud data of a new scene before inputting the scene image into a pre-trained target detection model if the acquired point cloud data is point cloud data under the new scene, and performing optimization training on the target detection model by adopting the sample point cloud data and historical sample data.
EXAMPLE five
Fig. 5A is a schematic structural diagram of a mapping system according to a fifth embodiment of the present invention, and fig. 5B is a schematic structural diagram of a control device of the mapping system according to the fifth embodiment of the present invention. The mapping system 5 shown in fig. 5A comprises at least one lidar 51 and a control device 50. The control device 50 is connected to each laser radar 51, and each laser radar 51 is used for collecting point cloud data. FIG. 5B illustrates a block diagram of an exemplary control device 50 suitable for use in implementing embodiments of the present invention. The control device 50 shown in fig. 5B is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention. As shown in fig. 5B, the control device 50 is in the form of a general purpose computing device. The components of the control device 50 may include, but are not limited to: one or more processors 501, a memory device 502, and a bus 503 that couples the various system components (including the memory device 502 and the processors 501).
Bus 503 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Control device 50 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by control device 50 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage 502 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)504 and/or cache memory 505. The control device 50 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 506 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5B and commonly referred to as a "hard drive"). Although not shown in FIG. 5B, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 503 by one or more data media interfaces. Storage 502 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 508 having a set (at least one) of program modules 507 may be stored, for instance, in storage 502, such program modules 507 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 507 generally perform the functions and/or methodologies of embodiments of the invention as described herein.
The control device 50 may also communicate with one or more external devices 509 (e.g., keyboard, pointing device, display 510, etc.), with one or more devices that enable a user to interact with the device, and/or with any devices (e.g., network card, modem, etc.) that enable the control device 50 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 511. Also, the control device 50 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 512. As shown in fig. 5B, the network adapter 512 communicates with the other modules of the control device 50 over the bus 503. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the control device 50, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 501 executes various functional applications and data processing by running a program stored in the storage device 502, for example, to implement the target detection method of point cloud data provided by the embodiment of the present invention.
EXAMPLE six
The sixth embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, can implement the method for detecting a target in point cloud data according to the foregoing embodiments.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-readable storage medium may be, for example but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The above embodiment numbers are for description only and do not indicate the relative merits of the embodiments.
It will be appreciated by those of ordinary skill in the art that the modules or operations of the embodiments of the invention described above may be implemented using a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices. Optionally, they may be implemented using program code executable by a computing device, such that the program code is stored in a memory device and executed by a computing device. Alternatively, the modules or operations may each be fabricated as separate integrated circuit modules, or a plurality of them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The embodiments in the present specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts, the embodiments may be referred to one another.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A target detection method of point cloud data is characterized by comprising the following steps:
determining a scene image to be detected according to the acquired point cloud data;
inputting the scene image into a pre-trained target detection model to obtain a target detection result;
and marking a target object in the point cloud data according to the target detection result.
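Claim 1's three steps amount to a project-detect-backproject loop: render the point cloud as an image, run a 2D detector, then transfer the 2D boxes back onto the points. Below is a minimal, self-contained sketch under the assumption of a bird's-eye-view projection and a `model` callable that returns (x0, y0, x1, y1) boxes in pixel coordinates; all helper names and parameter values are illustrative, not taken from the patent.

```python
# Hedged sketch of the claimed pipeline; projection scale, image size,
# and helper names are assumptions for illustration only.
import numpy as np

def bev_image(points, scale=10.0, size=512):
    """Step 1: map x/y to pixel positions and encode z as gray intensity."""
    img = np.zeros((size, size), dtype=np.uint8)
    u = np.clip((points[:, 0] * scale + size / 2).astype(int), 0, size - 1)
    v = np.clip((points[:, 1] * scale + size / 2).astype(int), 0, size - 1)
    z = points[:, 2]
    img[v, u] = ((z - z.min()) / (np.ptp(z) + 1e-6) * 255).astype(np.uint8)
    return img, u, v

def detect_targets(points, model):
    image, u, v = bev_image(points)       # step 1: point cloud -> scene image
    boxes = model(image)                  # step 2: pre-trained detection model
    marked = np.zeros(len(points), dtype=bool)
    for x0, y0, x1, y1 in boxes:          # step 3: mark points inside each box
        marked |= (u >= x0) & (u < x1) & (v >= y0) & (v < y1)
    return marked
```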
2. The method of claim 1, wherein determining the image of the scene to be detected from the acquired point cloud data comprises:
determining a panoramic image according to the acquired point cloud data;
and cutting and/or amplifying the panoramic image to obtain a scene image to be detected.
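As one concrete reading of claim 2, the panorama can be cut with array slicing and enlarged with OpenCV; the region-of-interest coordinates and the 2x factor below are assumptions for illustration.

```python
# Hedged sketch: crop a region of interest and enlarge it; the ROI bounds
# and zoom factor are illustrative, not values from the patent.
import cv2
import numpy as np

panorama = np.zeros((600, 800, 3), dtype=np.uint8)   # stand-in panoramic image
roi = panorama[100:400, 200:600]                     # cut out a region of interest
scene = cv2.resize(roi, None, fx=2.0, fy=2.0,        # enlarge 2x for small targets
                   interpolation=cv2.INTER_LINEAR)
```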
3. The method of claim 2, wherein determining a panoramic image from the acquired point cloud data comprises:
according to a preset proportion, mapping the horizontal coordinate data of each point in the acquired point cloud data into an image position coordinate of the point;
after mapping the height coordinate data of each point in the acquired point cloud data to a pixel value, filling the pixel value at the image position coordinate of that point to obtain a gray panoramic image;
and processing the gray panoramic image by adopting spectral color mapping to obtain a color panoramic image.
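For the final step of claim 3, the patent does not name a specific palette, so OpenCV's JET colormap is used below as one plausible stand-in for the spectral color mapping.

```python
# Hedged sketch: color the gray panorama; COLORMAP_JET is an assumed palette.
import cv2
import numpy as np

gray = np.random.randint(0, 256, (600, 800), dtype=np.uint8)  # gray panorama stand-in
color = cv2.applyColorMap(gray, cv2.COLORMAP_JET)             # low values -> blue, high -> red
```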
4. The method of claim 3, wherein mapping the height coordinate data of each point in the acquired point cloud data to a pixel value comprises:
determining a height range to be detected according to the position of the laser radar for collecting the point cloud data;
and mapping the height coordinate data of each point in the acquired point cloud data into a pixel value according to the height range to be detected and the pixel value range.
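Claim 4 can be read as a linear rescaling of heights within the detection range onto the 0-255 pixel range; the mounting height and detection span below are assumptions.

```python
# Hedged sketch of the height-to-pixel mapping; lidar_height and span are
# illustrative values, not specified by the patent.
import numpy as np

def height_to_pixel(z, lidar_height=2.0, span=5.0):
    lo, hi = lidar_height - span, lidar_height + span  # height range to be detected
    z = np.clip(z, lo, hi)                             # clamp out-of-range heights
    return ((z - lo) / (hi - lo) * 255).astype(np.uint8)
```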
5. The method of claim 1, wherein determining the image of the scene to be detected according to the acquired point cloud data comprises:
eliminating interference point cloud data from the acquired point cloud data;
and determining the scene image to be detected according to the point cloud data remaining after the elimination.
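The claim does not define "interference point cloud data"; one common reading is points beyond the sensor's usable range or outside a plausible height band, as sketched below under that assumption.

```python
# Hedged sketch: drop far-range and out-of-band points; thresholds are
# illustrative, not fixed by the patent.
import numpy as np

def remove_interference(points, max_range=100.0, min_z=-3.0, max_z=10.0):
    r = np.linalg.norm(points[:, :2], axis=1)          # horizontal distance to sensor
    keep = (r <= max_range) & (points[:, 2] >= min_z) & (points[:, 2] <= max_z)
    return points[keep]
```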
6. The method of claim 1, wherein, if the target detection model is a deep learning model with a YOLOv3 target detection network structure, before inputting the scene image into the pre-trained target detection model, the method further comprises:
and deleting a redundant data processing layer in the YOLOv3 network structure according to the size of the target object to be detected to obtain a target detection model.
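For context, YOLOv3 predicts at three scales (13x13, 26x26, and 52x52 grids for a 416x416 input), with the finer grids serving mainly small targets, so dropping a scale that the expected target sizes never need shrinks the network. The toy selector below only illustrates that idea; the head names and pixel thresholds are hypothetical, and this is not the actual Darknet configuration format.

```python
# Hedged sketch of scale pruning; names and thresholds are assumptions.
YOLO_HEADS = [
    ("head_13x13", "large targets"),
    ("head_26x26", "medium targets"),
    ("head_52x52", "small targets"),
]

def select_heads(min_target_px: int) -> list:
    """Keep only the detection heads needed for the smallest expected target."""
    if min_target_px >= 96:
        return YOLO_HEADS[:1]      # large targets: the coarse head suffices
    if min_target_px >= 32:
        return YOLO_HEADS[:2]      # medium targets: drop the finest head
    return YOLO_HEADS              # small targets: keep all three scales
```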
7. The method of claim 1, wherein labeling a target object in the point cloud data based on the target detection result comprises:
screening each labeling frame according to the confidence of each labeling frame in the target detection result to determine a target labeling frame;
and marking a target object in the point cloud data according to the data information of the target marking frame on the scene image.
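A minimal sketch of claim 7, assuming each labeling frame carries a confidence score and that the per-point pixel coordinates (u, v) from the projection step are available. A plain threshold is used for screening (non-maximum suppression is a common refinement); the threshold value is illustrative.

```python
# Hedged sketch: confidence screening, then back-projection to the points.
import numpy as np

def screen_boxes(boxes, threshold=0.5):
    """boxes: (M, 5) rows of (x0, y0, x1, y1, confidence)."""
    return boxes[boxes[:, 4] >= threshold]

def mark_targets(u, v, boxes):
    """u, v: per-point pixel coordinates from the panorama projection."""
    marked = np.zeros(len(u), dtype=bool)
    for x0, y0, x1, y1, _ in boxes:
        marked |= (u >= x0) & (u < x1) & (v >= y0) & (v < y1)
    return marked
```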
8. The method of claim 1, wherein if the acquired point cloud data is point cloud data of a new scene, before inputting the scene image into a pre-trained target detection model, further comprising:
and acquiring sample point cloud data of a new scene, and performing optimization training on the target detection model by adopting the sample point cloud data and historical sample data.
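A hedged sketch of the optimization training in claim 8: new-scene samples are mixed with historical samples so the detector adapts to the new scene without forgetting earlier ones. The `model.train_step` call and the (image, target) sample format are assumptions, not an API from the patent.

```python
# Hedged sketch of mixed-sample fine-tuning; the training API is hypothetical.
import random

def finetune(model, new_samples, historical_samples, epochs=10):
    data = list(new_samples) + list(historical_samples)  # mix old and new scenes
    for _ in range(epochs):
        random.shuffle(data)
        for image, target in data:
            model.train_step(image, target)              # hypothetical API
    return model
```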
9. A surveying system, characterized by comprising at least one laser radar and a control device; the control device is connected with the at least one laser radar; the at least one laser radar is used for collecting point cloud data; and the control device comprises:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the target detection method of point cloud data according to any one of claims 1-8.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the target detection method of point cloud data according to any one of claims 1 to 8.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911212740.0A CN110956137A (en) 2019-12-02 2019-12-02 Point cloud data target detection method, system and medium

Publications (1)

Publication Number Publication Date
CN110956137A (en) 2020-04-03

Family

ID=69979270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911212740.0A Pending CN110956137A (en) 2019-12-02 2019-12-02 Point cloud data target detection method, system and medium

Country Status (1)

Country Link
CN (1) CN110956137A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190026956A1 (en) * 2012-02-24 2019-01-24 Matterport, Inc. Employing three-dimensional (3d) data predicted from two-dimensional (2d) images using neural networks for 3d modeling applications and other applications
US20170116781A1 (en) * 2015-10-21 2017-04-27 Nokia Technologies Oy 3d scene rendering
US20180218513A1 (en) * 2017-02-02 2018-08-02 Intel Corporation Method and system of automatic object dimension measurement by using image processing
CN108230242A (en) * 2018-01-10 2018-06-29 大连理工大学 A kind of conversion method from panorama laser point cloud to video flowing
US20190286915A1 (en) * 2018-03-13 2019-09-19 Honda Motor Co., Ltd. Robust simultaneous localization and mapping via removal of dynamic traffic participants
CN109118500A (en) * 2018-07-16 2019-01-01 重庆大学产业技术研究院 A kind of dividing method of the Point Cloud Data from Three Dimension Laser Scanning based on image
CN109345510A (en) * 2018-09-07 2019-02-15 百度在线网络技术(北京)有限公司 Object detecting method, device, equipment, storage medium and vehicle
CN109993696A (en) * 2019-03-15 2019-07-09 广州愿托科技有限公司 The apparent panorama sketch of works based on multi-view image corrects joining method
CN110263652A (en) * 2019-05-23 2019-09-20 杭州飞步科技有限公司 Laser point cloud data recognition methods and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"地面激光雷达与近景摄影测量技术集成", 31 May 2017, 测绘出版社, pages: 36 - 39 *
陈慧岩: "地面激光与探地雷达在活断层探测中的应用", 31 August 2019, 北京理工大学出版社, pages: 38 - 42 *
陈慧岩: "智能车辆理论与应用", 北京理工大学出版社, pages: 18 - 21 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950357A (en) * 2020-06-30 2020-11-17 北京航天控制仪器研究所 Marine water surface garbage rapid identification method based on multi-feature YOLOV3
CN112349096A (en) * 2020-10-28 2021-02-09 厦门博海中天信息科技有限公司 Method, system, medium and equipment for intelligently identifying pedestrians on road
CN112488022A (en) * 2020-12-11 2021-03-12 武汉理工大学 Panoramic monitoring method, device and system
CN112488022B (en) * 2020-12-11 2024-05-10 武汉理工大学 Method, device and system for monitoring panoramic view
CN112985263B (en) * 2021-02-09 2022-09-23 中国科学院上海微系统与信息技术研究所 Method, device and equipment for detecting geometrical parameters of bow net
CN112985263A (en) * 2021-02-09 2021-06-18 中国科学院上海微系统与信息技术研究所 Method, device and equipment for detecting geometrical parameters of bow net
US20220343530A1 (en) * 2021-04-26 2022-10-27 Ubtech North America Research And Development Center Corp On-floor obstacle detection method and mobile machine using the same
WO2022227939A1 (en) * 2021-04-26 2022-11-03 深圳市优必选科技股份有限公司 Ground obstacle detection method and mobile machine using same
US11734850B2 (en) * 2021-04-26 2023-08-22 Ubtech North America Research And Development Center Corp On-floor obstacle detection method and mobile machine using the same
CN114581361A (en) * 2021-06-28 2022-06-03 广州极飞科技股份有限公司 Object form measuring method, device, equipment and storage medium
CN113591777A (en) * 2021-08-11 2021-11-02 宁波未感半导体科技有限公司 Laser radar signal processing method, electronic device, and storage medium
CN113591777B (en) * 2021-08-11 2023-12-08 宁波未感半导体科技有限公司 Laser radar signal processing method, electronic equipment and storage medium
WO2023108544A1 (en) * 2021-12-15 2023-06-22 深圳航天科技创新研究院 Single-antenna ultra-wideband radar system for imaging application

Similar Documents

Publication Publication Date Title
CN110956137A (en) Point cloud data target detection method, system and medium
Hu et al. Fast forest fire smoke detection using MVMNet
CN111427979B (en) Dynamic map construction method, system and medium based on laser radar
US10628890B2 (en) Visual analytics based vehicle insurance anti-fraud detection
CN106709475B (en) Obstacle recognition method and device, computer equipment and readable storage medium
US20200302237A1 (en) System and method for ordered representation and feature extraction for point clouds obtained by detection and ranging sensor
CN113761999B (en) Target detection method and device, electronic equipment and storage medium
CN110135396B (en) Ground mark identification method, device, equipment and medium
Trinder et al. Aerial images and LiDAR data fusion for disaster change detection
CN113706480A (en) Point cloud 3D target detection method based on key point multi-scale feature fusion
CN112364843A (en) Plug-in aerial image target positioning detection method, system and equipment
CN116027324B (en) Fall detection method and device based on millimeter wave radar and millimeter wave radar equipment
CN111121797B (en) Road screening method, device, server and storage medium
CN113177968A (en) Target tracking method and device, electronic equipment and storage medium
CN115100741A (en) Point cloud pedestrian distance risk detection method, system, equipment and medium
CN115909096A (en) Unmanned aerial vehicle cruise pipeline hidden danger analysis method, device and system
CN115082857A (en) Target object detection method, device, equipment and storage medium
CN113076889B (en) Container lead seal identification method, device, electronic equipment and storage medium
CN113838125A (en) Target position determining method and device, electronic equipment and storage medium
Ranyal et al. Automated pothole condition assessment in pavement using photogrammetry-assisted convolutional neural network
CN115482277A (en) Social distance risk early warning method and device
CN116052097A (en) Map element detection method and device, electronic equipment and storage medium
CN115527187A (en) Method and device for classifying obstacles
CN114565906A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN114202689A (en) Point location marking method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination