CN112287859A - Object recognition method, device and system, computer readable storage medium - Google Patents

Object recognition method, device and system, computer readable storage medium Download PDF

Info

Publication number
CN112287859A
CN112287859A (application CN202011211545.9A)
Authority
CN
China
Prior art keywords
point cloud
channel
height
top view
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011211545.9A
Other languages
Chinese (zh)
Inventor
许新玉
孔旗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd filed Critical Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN202011211545.9A priority Critical patent/CN112287859A/en
Publication of CN112287859A publication Critical patent/CN112287859A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes


Abstract

The present disclosure relates to an object recognition method, apparatus, and system, and a computer-readable storage medium. The object recognition method includes: acquiring point cloud data of an object collected by a lidar, the point cloud data including spatial coordinate values reflecting point cloud height; generating a multi-channel top view from the point cloud data, the multi-channel top view including a first channel representing point cloud height; and recognizing the object using the multi-channel top view.

Description

Object recognition method, device and system, computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an object identification method, apparatus, and system, and a computer-readable storage medium.
Background
In autonomous driving or robotic applications, it is common to use lidar to detect and identify a variety of obstacles in different scenarios.
There are two main methods for object (e.g., obstacle) detection based on lidar point clouds.
One method is as follows: first, the point cloud is divided into a voxel grid (Voxel Grid); features are then extracted directly on the voxel grid using a convolutional neural network (CNN). This method can make full use of the three-dimensional spatial geometric information of the point cloud. However, deep convolutional neural networks are optimized for weight sharing and parallel computation of convolution kernels, so they perform well only on regular, ordered data. Because the voxel grid data of a point cloud is irregular, unordered, and sparse, it is difficult for a deep convolutional neural network to exploit its advantages on a point cloud voxel grid for object detection and recognition. Although some recent methods alleviate this problem, their complexity is high, and they cannot be applied to unmanned vehicles, robots, and other applications with strict real-time requirements.
Another approach is to project the point cloud along the depth dimension to produce a depth feature map. This method can generate regular, ordered data similar to a depth image. However, because of perspective, obstacles at different distances vary greatly in size on the depth image: a nearby obstacle appears large while a distant obstacle appears small, so CNN-based methods have difficulty detecting distant obstacles.
In order to meet the strict real-time requirements of scenarios such as autonomous driving or robotics, a more efficient object detection scheme is required.
Disclosure of Invention
According to some embodiments of the present disclosure, there is provided an object recognition method including: acquiring point cloud data of an object acquired by a laser radar, wherein the point cloud data comprises a spatial coordinate value reflecting the height of the point cloud; generating a multi-channel top view from the point cloud data, wherein the multi-channel top view comprises a first channel representing a point cloud height; and identifying the object using the top view of the multiple channels.
In some embodiments, the first channel comprises a first color channel, a second color channel, and a third color channel each representing a different color range, and generating the top view of the multiple channels from the point cloud data comprises: projecting the point cloud data to the top view according to the corresponding relation between the point cloud coordinate system and the image coordinate system of the top view, wherein each pixel point in the top view corresponds to at least one data point in the point cloud coordinate system; and converting the point cloud heights into color values in the first color channel, the second color channel, and the third color channel, respectively.
In some embodiments, converting the point cloud heights into color values in the first, second, and third color channels, respectively, comprises: dividing the point cloud heights greater than or equal to a first threshold and less than or equal to a second threshold into a plurality of height ranges, wherein the second threshold is greater than the first threshold; and converting the point cloud heights located in the plurality of height ranges to a first color value in a first color channel, a second color value in a second color channel, and a third color value in a third color channel in the top view, respectively.
In some embodiments, converting the point cloud heights into color values in the first, second, and third color channels, respectively, comprises: in different height ranges, different conversion parameters are used to convert the height of the point cloud into color values.
In some embodiments, converting the point cloud heights into color values in the first, second, and third color channels, respectively, comprises: only the maximum value of the point cloud height of at least one data point corresponding to each pixel point is converted into color values in the first color channel, the second color channel, and the third color channel.
In some embodiments, the point cloud data further includes a reflection intensity value and a point cloud density, the multi-channel top view further includes a second channel representing the reflection intensity value and a third channel representing the point cloud density, each pixel point in the top view corresponds to at least one data point in the point cloud coordinate system, and generating the multi-channel top view from the point cloud data includes: projecting the point cloud data to the top view according to the corresponding relation between the point cloud coordinate system and the image coordinate system of the top view; and converting the point cloud height, the reflection intensity value and the point cloud density into color values in the first channel, the second channel and the third channel respectively.
In some embodiments, converting the point cloud height, the reflection intensity value, and the point cloud density to color values for the first channel, the second channel, and the third channel, respectively, comprises: for the first channel, calculating the color value of the pixel point according to the maximum point cloud height of at least one data point corresponding to each pixel point; for the second channel, calculating the color value of the pixel point according to the maximum value of the reflection intensity value of at least one data point corresponding to each pixel point; for the third channel, the color value of the pixel point is calculated according to the density of at least one data point corresponding to each pixel point.
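As an illustration of this embodiment, the following is a minimal sketch (Python/NumPy, not part of the original disclosure) of rasterizing a point cloud into a three-channel top view; the grid extent, resolution, and normalization constants are assumed example values:

    import numpy as np

    def point_cloud_to_bev(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0),
                           resolution=0.1, max_intensity=255.0, max_density=16.0):
        """points: (N, 4) array of x, y, z, intensity in the lidar coordinate system."""
        rows_n = int((x_range[1] - x_range[0]) / resolution)
        cols_n = int((y_range[1] - y_range[0]) / resolution)
        height_ch = np.full((rows_n, cols_n), -np.inf, dtype=np.float32)
        intensity_ch = np.zeros((rows_n, cols_n), dtype=np.float32)
        density_ch = np.zeros((rows_n, cols_n), dtype=np.float32)

        # Keep only the points inside the top-view extent.
        m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
             (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
        pts = points[m]

        # Correspondence between the point cloud coordinate system and image pixels.
        r = ((pts[:, 0] - x_range[0]) / resolution).astype(np.int64)
        c = ((pts[:, 1] - y_range[0]) / resolution).astype(np.int64)

        for ri, ci, (_, _, z, intensity) in zip(r, c, pts):
            height_ch[ri, ci] = max(height_ch[ri, ci], z)                 # first channel: max point height
            intensity_ch[ri, ci] = max(intensity_ch[ri, ci], intensity)   # second channel: max reflection intensity
            density_ch[ri, ci] += 1.0                                     # third channel: point count

        height_ch[np.isneginf(height_ch)] = 0.0
        intensity_ch /= max_intensity
        density_ch = np.minimum(1.0, density_ch / max_density)
        return np.stack([height_ch, intensity_ch, density_ch], axis=-1)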
In some embodiments, the object recognition method further includes acquiring image data of the object collected by an image sensor, wherein the multi-channel top view further includes a fourth channel representing object class information in the image data, and generating the multi-channel top view from the point cloud data includes converting the object class information in the image data into labels in the fourth channel.
In some embodiments, converting the object class information in the image data to the label in the fourth channel comprises: projecting at least one data point corresponding to each pixel point in the top view onto an image of the object; and recording the corresponding label to the corresponding pixel point of the fourth channel according to the object type corresponding to the projection position.
In some embodiments, identifying the object using the top view of the multiple channels comprises: inputting the multi-channel top view into a convolutional neural network to obtain a two-dimensional detection frame of an object on the top view; determining a two-dimensional detection frame of the object in a point cloud coordinate system according to the two-dimensional detection frame of the object on the top view; calculating the height of the object in the point cloud coordinate system; and outputting a three-dimensional detection frame of the object in the point cloud coordinate system based on the two-dimensional detection frame of the object on the top view and the height of the object.
In some embodiments, the two-dimensional detection frame of the object on the top view comprises a plurality of pixel points, and the calculating the height of the object in the point cloud coordinate system comprises: for each pixel point, determining the maximum value and the minimum value of the point cloud height of at least one data point corresponding to the pixel point as the maximum height and the minimum height of the pixel point; determining the maximum value of the maximum heights corresponding to the pixel points as the maximum height of the detection frame, and determining the minimum value of the minimum heights corresponding to the pixel points as the minimum height of the detection frame; and calculating the height of the detection frame as the height of the object according to the difference value between the maximum height and the minimum height of the detection frame.
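A minimal sketch (not from the original text) of this height computation, assuming per-pixel maximum and minimum height matrices H_max and H_min such as those illustrated in FIG. 3D:

    import numpy as np

    def box_height(h_max, h_min, box_pixels):
        """h_max, h_min: per-pixel maximum/minimum point heights of the top view.
        box_pixels: list of (row, col) pixels inside the 2D detection box that contain points."""
        rows, cols = zip(*box_pixels)
        top = np.max(h_max[rows, cols])      # maximum height over all pixels in the box
        bottom = np.min(h_min[rows, cols])   # minimum height over all pixels in the box
        return top - bottom                  # object height = difference of the two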
In some embodiments, the three-dimensional detection box includes the following information: a category of the object; and the position, size and orientation angle of the object in the point cloud coordinate system.
According to further embodiments of the present disclosure, there is provided an object recognition apparatus including: an acquisition unit configured to acquire point cloud data of an object collected by a lidar, the point cloud data including spatial coordinate values reflecting point cloud height; a generating unit configured to generate a multi-channel top view from the point cloud data, wherein the multi-channel top view includes a first channel representing point cloud height; and a recognition unit configured to recognize the object using the multi-channel top view.
According to still further embodiments of the present disclosure, there is provided an object recognition apparatus including: a memory; and a processor coupled to the memory, the processor configured to perform the object recognition method of any of the above embodiments based on instructions stored in the memory.
According to still further embodiments of the present disclosure, there is provided an object recognition system including: a lidar configured to acquire point cloud data of an object; and the object recognition apparatus in any of the above embodiments, configured to recognize the object by generating a multi-channel overhead view from the point cloud data.
In some embodiments, the object recognition system further comprises: an image sensor configured to acquire image data of an object.
According to further embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any of the above embodiments.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
FIG. 1 illustrates a flow diagram of a method of training an object recognition model according to some embodiments of the present disclosure;
FIG. 1A illustrates a flow diagram of a method of data augmentation, according to some embodiments of the present disclosure;
FIG. 1B shows a schematic diagram of the structure of the Resnet-FPN feature extractor of some embodiments of the present disclosure;
FIG. 1C shows a schematic diagram of the structure of the Resnet module of some embodiments of the present disclosure;
FIG. 1D illustrates a schematic diagram of the structure of Up_sample_6 according to some embodiments of the present disclosure;
FIG. 1E illustrates a schematic diagram of an Xception-FPN feature extractor of some embodiments of the present disclosure;
FIGS. 1F-1H show schematic structural diagrams of low, medium, and high resolution intermediate and exit convolutional layers, respectively, according to some embodiments of the present disclosure;
FIG. 2 illustrates a flow diagram of an object identification method according to some embodiments of the present disclosure;
FIG. 2A illustrates a flow diagram for generating an overhead view of multiple channels from point cloud data according to some embodiments of the present disclosure;
FIG. 3A illustrates a map of the correspondence between a point cloud coordinate system and an image coordinate system of a top view according to some embodiments of the present disclosure;
FIG. 3B illustrates a map of a correspondence between a point cloud coordinate system and an object coordinate system, according to some embodiments of the present disclosure;
FIG. 3C illustrates a flow diagram for converting point cloud heights to color values, in accordance with some embodiments of the present disclosure;
FIG. 3D illustrates a schematic diagram of a maximum height matrix H_max and a minimum height matrix H_min according to some embodiments of the present disclosure;
FIG. 4 shows a schematic flow diagram of an object identification method according to further embodiments of the present disclosure;
FIG. 5 illustrates a flow diagram for identifying objects using a top view of multiple channels, according to some embodiments of the present disclosure;
FIG. 6 illustrates a block diagram of the training apparatus 10 of the object recognition model of some embodiments of the present disclosure;
FIG. 6A illustrates a block diagram of a data augmentation device of some embodiments of the present disclosure;
FIG. 6B illustrates a block diagram of an object recognition device of some embodiments of the present disclosure;
FIG. 6C illustrates a block diagram of the identification unit shown in FIG. 6B in accordance with some embodiments of the present disclosure;
FIG. 7 illustrates a block diagram of an electronic device of some embodiments of the present disclosure;
FIG. 8 shows a block diagram of an electronic device of yet further embodiments of the disclosure;
FIG. 9 illustrates a block diagram of an object identification system of some embodiments of the present disclosure;
fig. 9A illustrates a block diagram of an electronic device of further embodiments of the disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The inventors have found through research that using a Bird's Eye View (BEV) of the lidar point cloud as the input to the CNN model yields a more compact and lightweight representation with a small memory footprint and fast forward inference. In addition, representing the point cloud height as a channel of the top view preserves three-dimensional information; compared with methods that project to a depth map in a Front View or Surrounding View, objects do not overlap in the top view, which effectively reduces the difficulty of detecting three-dimensional objects. Moreover, because the top-view representation preserves the real-world dimensions of objects, object size does not change with distance, and prior knowledge of object dimensions can be used to better estimate object size.
In view of this, the present disclosure proposes a high-performance object identification method that estimates the position, size, direction, and category of an object from a top view of a lidar point cloud using a convolutional neural network. In some embodiments, the point cloud is projected onto a top view, and then 3D object detection is performed using typical one-stage CNN object detection methods (e.g., SSD, single shot multi-box detection).
According to some embodiments of the present disclosure, a method of training an object recognition model is presented.
FIG. 1 illustrates a flow diagram of a method of training an object recognition model according to some embodiments of the present disclosure.
As shown in fig. 1, the method of training the object recognition model includes: step S1, acquiring a training set; step S2, generating a top view from the point cloud annotation data set; step S3, extracting a plurality of feature maps of different resolutions from the top view using a feature extractor; step S4, determining the sizes of the anchor boxes and their positions on the feature maps; step S5, generating anchor boxes of different sizes, including scale and aspect ratio, centered on each pixel of the feature maps; step S6, matching the anchor boxes with the truth bounding boxes on the feature maps of different resolutions to determine the sample type of each anchor box; and step S7, training the object recognition model based on the contributions of anchor boxes of different sample types to the loss function of the convolutional neural network.
In step S1, the acquired training set includes a point cloud annotation data set of the object acquired by the laser radar, and the point cloud annotation data set has a true value bounding box.
In some embodiments, the acquired training set further comprises: and carrying out data augmentation on the point cloud annotation data set to obtain augmented data. That is, the acquiring of the training set in step S1 includes acquiring a point cloud annotation data set of the object acquired by the laser radar, and may further include acquiring augmented data of the point cloud annotation data set by using a data augmentation method.
Fig. 1A illustrates a flow diagram of a data augmentation method according to some embodiments of the present disclosure.
As shown in fig. 1A, the data augmentation method includes: step S12, acquiring a point cloud labeling data set of an object (such as an obstacle) acquired by a laser radar; step S14, selecting a truth value bounding box from the point cloud labeling data set; step S16, perform a specified operation on the point cloud data contained in the selected true value bounding box, and obtain augmented data.
The object point cloud data collected by the lidar generally includes spatial coordinate values reflecting the height of the point cloud. In some embodiments, the point cloud data may be represented by an N x 4 matrix, where each of the N points has information such as X, Y, Z three-dimensional space coordinates and intensity values (intensity).
The point cloud annotation data set obtained in step S12 has a true value bounding box. In some embodiments, the point cloud annotation data includes the following information: a category of the object; as well as the position, size and orientation angle of the object.
At step S14, selecting a truth bounding box from the point cloud annotation data set includes: randomly selecting a truth bounding box; or selecting a truth bounding box of a specified category.
A truth bounding box may be randomly selected from all the three-dimensional detection boxes, in which case the corresponding obstacle can be of any annotated category. On the other hand, if a truth bounding box of a rare obstacle category is selected, such as the heavy-duty truck category in an autonomous driving scenario, augmenting such data can help address the imbalance in the number of labels across obstacle categories.
In step S16, at least one of the designated operations is performed on the point cloud data contained in the selected true value bounding box, resulting in augmented data.
In some embodiments, the specifying operation may include: and rotating the point cloud data contained in the selected truth value bounding box by a preset angle around the height direction of the truth value bounding box. The height direction of the true value bounding box corresponds to the height direction of the object and can be consistent with the Z-axis direction of the laser radar point cloud coordinate system.
For example, for the selected truth bounding box, a geometric center point of the point cloud contained in the truth bounding box is calculated, and the point cloud is rotated by a predetermined angle around the Z axis of the truth bounding box by taking the geometric center point as an origin of the object coordinate system, so as to obtain the rotated truth bounding box and the point cloud contained in the same. Here, the predetermined angle may be a randomly selected angle.
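A possible implementation of this rotation is sketched below (Python/NumPy; the function name and the random angle range are assumptions):

    import numpy as np

    def rotate_box_points(points, angle=None):
        """Rotate the points of one truth bounding box around the Z axis
        about their geometric center. points: (N, 3+) array; angle in radians."""
        if angle is None:
            angle = np.random.uniform(-np.pi, np.pi)   # randomly selected angle
        center = points[:, :3].mean(axis=0)            # geometric center as the object-frame origin
        c, s = np.cos(angle), np.sin(angle)
        rot_z = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
        rotated = points.copy()
        rotated[:, :3] = (points[:, :3] - center) @ rot_z.T + center
        return rotated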
The rotation operation simulates the real world, in which the same obstacle can appear in different orientations within the sensor field of view of the autonomous host vehicle; for example, a pedestrian may be facing away from or facing toward the lidar on the host vehicle.
In other embodiments, the specifying operation may also include: a portion of the contained point cloud data is deleted.
Deleting a portion of the contained point cloud data may include: the point cloud data included in a portion of the selected truth bounding box is deleted, for example, the point clouds of the top half, middle half, or bottom half of the point clouds included in the truth bounding box are deleted. Of course, the point clouds in the left half or the right half of the point clouds included in the three-dimensional detection frame may be deleted.
The deletion operation can simulate that in the real world, only a part of a real object can be observed by a laser radar installed on an autonomous vehicle due to a blind area or occlusion.
Alternatively, deleting a part of the point cloud data may include: and randomly deleting the point clouds in the specified proportion of the point cloud data contained in the selected truth value bounding box. The deletion operation can simulate point clouds with different densities acquired by different laser radars in the real world.
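The random-deletion variant might look like the following sketch (the function name and default ratio are assumptions):

    import numpy as np

    def random_drop_points(points, drop_ratio=0.3):
        """Randomly delete drop_ratio of the points contained in a truth bounding box."""
        n = points.shape[0]
        keep = np.random.rand(n) >= drop_ratio   # keep each point with probability 1 - drop_ratio
        return points[keep]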
In still other embodiments, the specifying operation may further include: random noise points are added to at least a portion of the point cloud data contained by the selected truth bounding box.
The proportion of added random noise points may be determined from the real data distribution, which needs to take into account factors such as the obstacles inside the truth bounding box and the fact that the lidar cannot see through objects. After adding random noise points, the point cloud density may increase, for example from 5 to 8 points per 0.1 m × 0.1 m area. That is, the noise-adding operation can also simulate point clouds of different densities acquired by different lidars in the real world.
In still other embodiments, the specifying operation may further include: and copying the point cloud data contained in the selected truth value bounding box from the point cloud frame to the space of other point cloud frames.
For example, the point cloud contained in the selected truth bounding box may be copied from the point cloud frame a to another randomly selected point cloud frame B. The placement in the point cloud frame B is consistent with real world requirements, for example, the point cloud within the truth bounding box is placed in free space above the ground to avoid floating in the air or co-locating with the truth bounding box of other obstacles. Therefore, the point cloud of a certain obstacle category, particularly the point cloud of a rare obstacle category can be added on an arbitrarily selected point cloud frame.
The execution of the specified operation on the selected truth bounding box to obtain augmented data is described above. It should be understood that rotation, translation, scaling, etc. operations may also be performed on the point cloud contained in the point cloud annotation data set. These operations may be used alone or in combination.
In some embodiments, a randomly selected frame of the point cloud may be rotated by a predetermined angle about a specified axis. For example, a certain point cloud frame is randomly selected, and the point cloud is randomly rotated around the Z-axis (e.g., pointed towards the sky) of the point cloud coordinate system of the lidar. The angle of rotation may be determined according to various parameters provided by the user.
In other embodiments, the point cloud contained in the point cloud annotation data set may be translated. For example, the point cloud is translated along an X-axis, Y-axis, or Z-axis, respectively, of the point cloud coordinate system of the lidar. The amount of translation may be determined from various parameters provided by the user.
In still other embodiments, the point cloud contained by the point cloud annotation data set may be scaled. For example, the point cloud is scaled along an X-axis, Y-axis, or Z-axis of the point cloud coordinate system of the lidar, respectively. The scale of the scaling may be determined according to various parameters provided by the user, with a typical recommendation between 0.9 and 1.1.
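The whole-frame rotation, translation, and scaling operations described above might be sketched as follows (the translation range is an assumption; the scale range follows the 0.9 to 1.1 recommendation mentioned in the text):

    import numpy as np

    def augment_frame(points, max_shift=2.0, scale_range=(0.9, 1.1)):
        """Apply a random Z-axis rotation, per-axis translation, and scaling to a point cloud frame."""
        out = points.copy()

        angle = np.random.uniform(-np.pi, np.pi)       # random rotation around the Z axis
        c, s = np.cos(angle), np.sin(angle)
        rot_z = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        out[:, :3] = out[:, :3] @ rot_z.T

        out[:, :3] += np.random.uniform(-max_shift, max_shift, size=3)   # translation along X/Y/Z

        out[:, :3] *= np.random.uniform(*scale_range)  # scaling, typically between 0.9 and 1.1
        return out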
In the above embodiments, the data augmentation method can generate a large amount of more diverse point cloud data based on lidar point cloud annotation data with truth bounding boxes. In particular, similar point cloud data can be generated from the annotated truth data of a small number of rare scenes, which effectively alleviates the shortage of truth annotations for rare scenes; a larger amount of similar point cloud data can also be generated from already labeled point clouds of under-represented obstacle classes, such as trucks. In other words, new truth data covering more scenes can be generated. The newly generated point cloud data does not need to be annotated, which greatly reduces the annotation cost and shortens the data collection and annotation cycle. In addition, training on the augmented point cloud annotation data obtained with this data augmentation scheme can produce a convolutional neural network (CNN) with better performance, improving the accuracy of object recognition.
The use of a data augmentation approach to obtain a large amount of valid training data is described above in connection with FIG. 1A. The following continues with a description of how these training data are used to train the object recognition model, returning to fig. 1. First, how to generate the overhead view from the point cloud data in step S2 is described, for example, the point cloud data is projected to the overhead view according to the correspondence between the point cloud coordinate system and the image coordinate system of the overhead view. Next, the description continues on how the object recognition model is trained using top-view.
In step S3, a plurality of feature maps of different resolutions may be extracted from the top view using a Resnet-FPN or an Xception-FPN feature extractor.
The Resnet-FPN feature extractor comprises: a plurality of sets of Resnet modules configured to generate a plurality of original feature maps of different resolutions, each set of Resnet modules comprising a plurality of Resnet modules, each Resnet module comprising a plurality of fused convolution FusedConv operators, each fused convolution FusedConv operator consisting of three sub-operators of two-dimensional convolution, Batch normalization Batch Norm, and RELU activation; a feature pyramid network FPN configured to combine a plurality of original feature maps of different resolutions with corresponding up-sampled feature maps; and an output header configured to output a plurality of feature maps of different resolutions.
The structure of the Resnet-FPN feature extractor is described below with reference to FIGS. 1B, 1C, and 1D, taking an example of an input BEV image having a resolution of 1024 × 512 × 3.
As shown in fig. 1B, the Resnet-FPN feature extractor includes 5 sets of Resnet modules.
The first set of Resnet modules included 2 FusedConv, each with a convolution kernel of 3 × 3 and a channel number of 32.
The second set of Resnet modules (Resnet_block_2, 24, 24, 48, /2, #2) comprises 2 Resnet modules (Resnet_block) with stride 2. The third set of Resnet modules (Resnet_block_3, 32, 32, 64, /2, #5) comprises 5 Resnet modules with stride 2. The fourth set of Resnet modules (Resnet_block_4, 48, 48, 96, /2, #5) comprises 5 Resnet modules with stride 2. The fifth set of Resnet modules (Resnet_block_5, 64, 64, 128, /2, #2) comprises 2 Resnet modules with stride 2.
As shown in fig. 1C, each Resnet module includes 3 FusedConv operators: the convolution kernel of the 1st FusedConv is 1 × 1, the kernel of the 2nd FusedConv is 3 × 3, and the kernel of the 3rd FusedConv is 1 × 1. The 2nd FusedConv is a bottleneck layer, which reduces the number of output channels, reduces the model size, and improves inference efficiency. The output of the 3rd FusedConv is combined with the input of the 1st FusedConv, for example by element-wise summation or channel concatenation, and is then activated by RELU.
In the second set of Resnet modules (Resnet_block_2, 24, 24, 48, /2, #2), the channel numbers of the 1st, 2nd, and 3rd FusedConv in each Resnet module are 24, 24, and 48, respectively. In the third set (Resnet_block_3, 32, 32, 64, /2, #5) they are 32, 32, and 64; in the fourth set (Resnet_block_4, 48, 48, 96, /2, #5) they are 48, 48, and 96; and in the fifth set (Resnet_block_5, 64, 64, 128, /2, #2) they are 64, 64, and 128.
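For illustration only, the following is a compact sketch of the FusedConv operator and a Resnet module with the bottleneck structure described above, written with PyTorch (the framework choice and the shortcut projection are assumptions):

    import torch
    import torch.nn as nn

    class FusedConv(nn.Module):
        """Conv2d + BatchNorm + ReLU, the three sub-operators of a FusedConv."""
        def __init__(self, in_ch, out_ch, kernel, stride=1):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel, stride=stride,
                                  padding=kernel // 2, bias=False)
            self.bn = nn.BatchNorm2d(out_ch)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(self.bn(self.conv(x)))

    class ResnetBlock(nn.Module):
        """1x1 -> 3x3 (bottleneck) -> 1x1 FusedConv, combined with the block input."""
        def __init__(self, in_ch, ch1, ch2, ch3, stride=1):
            super().__init__()
            self.branch = nn.Sequential(FusedConv(in_ch, ch1, 1),
                                        FusedConv(ch1, ch2, 3, stride=stride),
                                        FusedConv(ch2, ch3, 1))
            # 1x1 projection so the element-wise sum matches in shape and channel count.
            self.shortcut = nn.Conv2d(in_ch, ch3, 1, stride=stride)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(self.branch(x) + self.shortcut(x))

Under these assumptions, the first module of the second group could be instantiated, for example, as ResnetBlock(32, 24, 24, 48, stride=2).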
As shown in fig. 1B, the output of the fifth set of Resnet modules (Resnet_block_5, 64, 64, 128, /2, #2) is convolved with a 1 × 1, 128-channel filter, and a feature map (64 × 32 × 13) with 1/16 resolution is output.
As shown in fig. 1D, in the upsampling operation (Up_sample_6, 96, × 2), the feature map with original resolution 1/16 is upsampled to 1/8 resolution by an upsampling operation (e.g., a deconv operation), and the upsampled 1/8-resolution feature map is then combined with the feature map of original resolution 1/8, for example by element-wise summation or channel concatenation, to obtain the final feature map with resolution 1/8 (128 × 64 × 13). The feature map with original resolution 1/8 is obtained by convolving the output of the fourth set of Resnet modules (Resnet_block_4, 48, 48, 96, /2, #5) with a 1 × 1, 96-channel filter.
Similarly, in the operation (Up_sample_7, 64, × 2), the feature map with final resolution 1/8 is upsampled to 1/4 resolution by an upsampling operation (e.g., a deconv operation), and the upsampled 1/4-resolution feature map is then combined with the feature map of original resolution 1/4, for example by element-wise summation or channel concatenation, to obtain the final feature map with resolution 1/4 (256 × 128 × 13). The feature map with original resolution 1/4 is obtained by convolving the output of the third set of Resnet modules (Resnet_block_3, 32, 32, 64, /2, #5) with a 1 × 1, 64-channel filter.
In some embodiments, in order to increase the model's ability to extract abstract features and to improve its generalization, the Resnet-FPN feature extractor further includes several FusedConv layers after the operation (Up_sample_7, 64, × 2). As shown in fig. 1B, these are a 3 × 3, 32-channel FusedConv and a 1 × 1, N-channel FusedConv.
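As an illustrative sketch of the upsample-and-combine step (PyTorch assumed; deconvolution-based upsampling and element-wise summation are chosen here from the alternatives mentioned above, and the channel counts are assumptions):

    import torch.nn as nn

    class UpSampleCombine(nn.Module):
        """Upsample a coarser feature map by 2x and merge it with a lateral feature map."""
        def __init__(self, coarse_ch, lateral_ch, out_ch):
            super().__init__()
            self.up = nn.ConvTranspose2d(coarse_ch, out_ch, kernel_size=2, stride=2)  # deconv x2
            self.lateral = nn.Conv2d(lateral_ch, out_ch, kernel_size=1)                # 1x1 lateral conv

        def forward(self, coarse, lateral):
            return self.up(coarse) + self.lateral(lateral)   # element-wise summation

    # e.g. an Up_sample_6-like step merging a 1/16-resolution map with a 1/8-resolution map:
    up6 = UpSampleCombine(coarse_ch=128, lateral_ch=96, out_ch=96)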
In some embodiments, the initial values of the scaling parameters and the bias of the Batch normalized Batch Norm in the last performed fused convolution FusedConv operator (1 × 1, N-channel FusedConv as shown in fig. 1B) are configured to be dynamically adjusted according to the predicted object class.
It should be appreciated that the hyper-parameters of the Resnet-FPN feature extractor, such as the number of feature channels output per layer and the number of Resnet blocks, etc., may be dynamically adjusted according to the computing resources of the target computing platform.
Still taking the resolution of the input BEV image as 1024 × 512 × 3 as an example, the structure of the Xception-FPN feature extractor is described below with reference to fig. 1E-1H.
As shown in fig. 1E, the Xception-FPN feature extractor includes: entry layers (Entry Layers) comprising a plurality of separable convolution SeparableConv layers and configured to generate a plurality of original feature maps of different resolutions; a feature pyramid network FPN configured to combine the original feature maps of different resolutions with the corresponding upsampled feature maps; and a prediction head configured to output a plurality of feature maps of different resolutions.
As shown in fig. 1E, the input BEV image (1024 × 512 × 3) undergoes several layers of convolution Conv to generate feature maps of different resolutions, such as low (64 × 32 × 128), medium (128 × 64 × 64), and high resolution (256 × 128 × 32), where the number of channels is 128, 64, and 32, respectively. It will be appreciated that the number and resolution of the channels may be adjusted according to the actual requirements.
In order to make the information in the high-resolution feature maps usable at lower resolutions, shortcut connections are used to combine the high-resolution feature maps, pooled with stride 2, with the low-resolution feature maps.
For each resolution, the feature map is passed through intermediate convolutional Layers (Middle Conv Layers) and Exit convolutional Layers (Exit Conv Layers), and then upsampled to be combined with the higher resolution feature map. That is, the low, medium and high resolution raw feature maps are combined with the up-sampled low, medium and high resolution feature maps.
The number of filters in the intermediate and exit convolution layers differs for each resolution, for example 512, 256, and 128 filters at low, medium, and high resolution, respectively. FIGS. 1F-1H show schematic structural views of the Low-, Mid-, and Hi-res Middle Conv Layers and the Low-, Mid-, and Hi-res Exit Conv Layers, respectively, according to some embodiments of the present disclosure.
As shown in fig. 1F, the low resolution feature map (64 × 32 × 128) passes through the low resolution intermediate convolution layer and the exit convolution layer to generate a feature map having a size of 64 × 32 × 512. The feature map is then upsampled and combined with the medium resolution feature map, for example, the combining operation may be implemented by element-wise summing or channel-wise cascading. In some embodiments, the upsampling operation may be implemented by deconvolution (deconv) -based or interpolation-based upsampling.
Similarly, as shown in FIG. 1G, the medium resolution profile (128 × 64 × 64) passes through the medium resolution intermediate convolution layer and the exit convolution layer, generating a profile having a size of 128 × 64 × 256. The feature map is then up-sampled and combined with the high resolution feature map.
As shown in fig. 1H, the high resolution feature map (256 × 128 × 32) passes through the high resolution intermediate convolution layer and the exit convolution layer, generates a feature map having a size of 256 × 128 × 128, and is then combined with the medium resolution feature map.
Returning to fig. 1E, the prediction head contains the same operations for all resolutions: a 1 × 1 convolution applied to the low-, medium-, and high-resolution feature maps respectively; a Dropout operation; and a 1 × 1 convolution with N output channels, where N is the number of variables to predict.
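The per-resolution prediction head described above might be written as in the following sketch (PyTorch assumed; the intermediate channel count and dropout probability are assumed values):

    import torch.nn as nn

    def make_prediction_head(in_ch, num_outputs, mid_ch=64, dropout_p=0.2):
        """1x1 convolution, Dropout, then a 1x1 convolution with N output channels."""
        return nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=1),
            nn.Dropout2d(p=dropout_p),
            nn.Conv2d(mid_ch, num_outputs, kernel_size=1),  # N = number of variables to predict
        )

    # One head per resolution, e.g. for the low-, medium-, and high-resolution exit feature maps;
    # num_outputs=13 here mirrors the 13-channel outputs shown for the Resnet-FPN example.
    heads = [make_prediction_head(c, num_outputs=13) for c in (512, 256, 128)]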
In the above embodiments, using batch_norm in the FusedConv layers of the Resnet-FPN and Xception-FPN feature extractors and Dropout before the output layer of the network improves the generalization capability of the model and counters overfitting.
The extraction of multiple different resolution feature maps from a top view using a feature extractor is described above in connection with fig. 1B-1H. The following continues with the description of steps S4-S7, i.e., how the object recognition model is trained using anchor boxes (anchor boxes), returning to FIG. 1.
In step S4, the size of the anchor frame and its position on the feature map are determined.
Anchor boxes serve as training samples in both the training phase and the prediction phase. An anchor box is equivalent to taking different windows around a center point so that multiple overlapping objects can be detected. The training data contain objects of different classes, such as pedestrians/cyclists, cars, and buses/trucks. An anchor box matched to a pedestrian can be used to train and predict pedestrians, while a larger anchor box matched to a car can be used to train and predict cars. Using anchor boxes of different sizes and aspect ratios makes training and prediction more targeted.
In some embodiments, the size (in pixels) of the anchor boxes is determined based on the image density (in meters per pixel) of the top view. For example, the anchor boxes may include 3 different sizes, small, medium, and large, corresponding to different pixel sizes on the BEV feature map (e.g., 16 × 16, 32 × 64, and 32 × 128). Anchor boxes of different sizes may represent different obstacles, for example pedestrians/cyclists (small), cars (medium), and buses/trucks (large). The truth bounding boxes in the training set are statistically clustered to determine the anchor box sizes, for example by k-means clustering, to obtain the common sizes, and the anchor box sizes are selected on this basis. The positions of the anchor boxes on the feature map can be determined from the dimensions of the top view and its corresponding feature map.
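A sketch of selecting anchor sizes by statistically clustering the truth bounding box dimensions, as suggested above (scikit-learn's KMeans is an assumed dependency; the inputs are box dimensions in BEV pixels):

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_anchor_sizes(box_lengths, box_widths, n_anchors=3):
        """Cluster (length, width) pairs of all truth bounding boxes and use the
        cluster centers as candidate anchor sizes."""
        sizes = np.stack([box_lengths, box_widths], axis=1)
        kmeans = KMeans(n_clusters=n_anchors, n_init=10).fit(sizes)
        return kmeans.cluster_centers_   # e.g. roughly 16x16, 32x64, 32x128 pixel anchors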
In step S5, anchor frames of different sizes are generated centering on each pixel of the feature map. For example, anchor frames of different sizes and aspect ratios are placed on each pixel of the plurality of feature maps of different resolutions. Thus, the detection performance of objects with different dimensions can be improved.
At step S6, the anchor box is matched against the truth bounding box over a plurality of feature maps of different resolutions to determine a sample type of the anchor box.
In some embodiments, the different resolutions include a first resolution, a second resolution, and a third resolution, wherein the first resolution is greater than the second resolution, and the second resolution is greater than the third resolution. Thus, matching the anchor box to the true bounding box on a plurality of feature maps of different resolutions includes: matching an anchor box with a first size with a truth bounding box on a feature map of a first resolution; matching an anchor box with a second size with the truth bounding box on the feature map of the second resolution, wherein the second size is larger than the first size; on the feature map of the third resolution, an anchor box having a third size is matched to the truth bounding box, the third size being larger than the second size.
When performing anchor box matching, only small anchor boxes are allowed to match the truth bounding boxes on the high-resolution output feature map, because the high-resolution feature map corresponds to a small receptive field in the original input resolution and can only detect small objects; on the medium-resolution output feature map, only medium-size anchor boxes are allowed to match the truth bounding boxes; and on the low-resolution output feature map, only large anchor boxes are allowed to match the truth bounding boxes, because the low-resolution feature map corresponds to a large receptive field in the original input resolution and can detect large objects.
During training, for each loaded point cloud frame, the anchor boxes and the truth bounding boxes may be projected into the image coordinate system of the top view, and the anchor boxes are then matched with the truth bounding boxes to determine whether each anchor box is a positive sample, a negative sample, or an ignored sample. When computing the loss, only positive and negative samples contribute to the loss; ignored samples do not.
In some embodiments, whether an anchor box is a positive, negative, or ignored sample may be determined based on the distance between the geometric center of the anchor box and the geometric center of a truth bounding box. An anchor box whose distance to the nearest truth bounding box is less than a first distance threshold is a positive sample; the anchor box closest to any given truth bounding box is also a positive sample; an anchor box whose distance to the nearest truth bounding box is greater than or equal to a second distance threshold is a negative sample; and the remaining anchor boxes, which are neither positive nor negative samples, are ignored samples. The second distance threshold is greater than the first distance threshold.
In other embodiments, whether an anchor box is a positive, negative, or ignored sample may be determined based on the intersection-over-union (IoU) between the anchor box and the truth bounding boxes. An anchor box whose IoU with a truth bounding box is greater than a first ratio threshold is a positive sample; the anchor box with the largest IoU for any given truth bounding box is also a positive sample; an anchor box whose IoU with the truth bounding boxes is less than or equal to a second ratio threshold is a negative sample; and the remaining anchor boxes, which are neither positive nor negative samples, are ignored samples. The second ratio threshold is less than the first ratio threshold.
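For illustration, a simplified sketch of the distance-based assignment described above (threshold values and array layouts are assumptions):

    import numpy as np

    def assign_anchors(anchor_centers, gt_centers, d_pos=0.5, d_neg=2.0):
        """Label each anchor as positive (1), negative (0), or ignored (-1) based on the
        distance between its geometric center and the nearest truth bounding box center."""
        if len(gt_centers) == 0:
            return np.zeros(len(anchor_centers), dtype=np.int64)   # everything is background
        labels = np.full(len(anchor_centers), -1, dtype=np.int64)
        # Pairwise center distances between anchors and truth boxes.
        d = np.linalg.norm(anchor_centers[:, None, :] - gt_centers[None, :, :], axis=-1)
        nearest = d.min(axis=1)
        labels[nearest < d_pos] = 1         # closer than the first threshold: positive
        labels[nearest >= d_neg] = 0        # farther than the second threshold: negative
        labels[np.argmin(d, axis=0)] = 1    # the anchor closest to each truth box: positive
        return labels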
At step S7, an object recognition model is trained based on the contributions of the anchor boxes of the different sample types to the loss function of the convolutional neural network.
In some embodiments, the recognition result of each anchor box is predicted using a multi-task learning method. The recognition result includes whether the anchor box contains an object, and the position, size, direction, and class of the contained object. The tasks include a binary classification task deciding whether the content of the anchor box is foreground or background, a regression task for the position, size, and direction of the contained object, and a classification task for the class of the contained object, and all tasks share the feature extractor. These learning tasks share the same feature extraction backbone network, enabling end-to-end learning.
The loss function can be expressed as L_total = μ L_conf + ρ L_reg + τ L_cls, where L_conf denotes the foreground/background confidence loss, L_reg denotes the regression loss of position, size, and direction, L_cls denotes the class classification loss, and μ, ρ, and τ denote the corresponding loss weights.
In some embodiments, the foreground/background confidence loss L_conf uses a Sigmoid focal loss, the regression loss L_reg of position, size, and direction uses a SmoothL1 loss, and the class classification loss L_cls uses a Softmax focal loss.
For the foreground/background obstacle confidence loss L_conf, a focal loss can be used to address the severely unbalanced numbers of foreground objects (positive samples) and background objects (negative samples) and to improve the generalization capability of the object recognition model. For example, the foreground/background obstacle confidence loss may be expressed as L_conf = -α_t (1 - p_t)^γ log(p_t), where y ∈ {±1} denotes the true class, p ∈ [0, 1] denotes the probability predicted by the model for the class with label y = 1 (the positive class, i.e., a foreground obstacle), p_t equals p when y = 1 and 1 - p otherwise, α_t is a class-balancing weight, and γ is the focusing parameter.
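A minimal sketch of the Sigmoid focal confidence loss in the form given above (PyTorch assumed; the α and γ values are common defaults, not values taken from the disclosure):

    import torch

    def sigmoid_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        """targets: 1 for foreground (positive) anchors, 0 for background (negative) anchors.
        Implements L_conf = -alpha_t * (1 - p_t)^gamma * log(p_t)."""
        p = torch.sigmoid(logits)
        p_t = torch.where(targets == 1, p, 1.0 - p)            # p_t as in the focal loss definition
        alpha_t = torch.where(targets == 1, torch.full_like(p, alpha),
                              torch.full_like(p, 1.0 - alpha))
        eps = 1e-8                                              # numerical stability
        return -(alpha_t * (1.0 - p_t).pow(gamma) * torch.log(p_t + eps)).mean()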
the regression loss of position, magnitude and direction can be expressed as
Figure BDA0002758992820000172
Figure BDA0002758992820000173
Wherein L isloc(b, g) represents position loss and between the bounding box according to the truth value and the predicted orientation bounding boxThe SmoothL1 loss was determined,
Figure BDA0002758992820000174
indicating a directional loss, and is determined according to a similarity between a true directional vector represented by the cosine and sine of the directional angle and a predicted directional vector,
Figure BDA0002758992820000175
and
Figure BDA0002758992820000176
respectively representing predicted direction angles
Figure BDA0002758992820000177
Cosine and sine.
The direction loss can be expressed as L_dir = 1 - ((cos θ', sin θ') · (cos θ, sin θ)) / (||(cos θ', sin θ')|| · ||(cos θ, sin θ)||), where (cos θ', sin θ') denotes the predicted direction vector, with θ' the predicted direction angle, (cos θ, sin θ) denotes the true direction vector, ||(cos θ', sin θ')|| denotes the magnitude of the predicted direction vector, and ||(cos θ, sin θ)|| denotes the magnitude of the true direction vector.
The direction loss is therefore 1 minus the similarity between the true direction vector and the predicted direction vector: the two direction vectors may be normalized by their magnitudes, the dot product of the two normalized vectors is computed as their similarity, and the similarity is subtracted from 1 to obtain the direction loss.
The direction angle is determined uniquely by predicting its sine and cosine rather than by directly predicting the absolute value of the direction angle itself. This is because obstacles with direction angles θ and θ + π can be considered to have the same direction; for example, a forward-facing vehicle and a rear-facing vehicle are considered to have the same direction. However, since the values of θ and θ + π differ widely, a direction loss computed from the direction angle itself may fail to converge during training. Predicting the sine and cosine of the direction angle, rather than the angle itself, avoids the problem that θ and θ + π represent the same direction while their absolute angle values differ significantly.
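A sketch of the direction loss described above (PyTorch assumed), computing 1 minus the cosine similarity between the predicted and true direction vectors:

    import torch

    def direction_loss(cos_sin_pred, theta_true, eps=1e-8):
        """cos_sin_pred: (N, 2) predicted (cos, sin) of the direction angle.
        theta_true: (N,) true direction angles in radians."""
        v_true = torch.stack([torch.cos(theta_true), torch.sin(theta_true)], dim=1)
        # Normalize both direction vectors by their magnitudes.
        v_pred = cos_sin_pred / (cos_sin_pred.norm(dim=1, keepdim=True) + eps)
        v_true = v_true / (v_true.norm(dim=1, keepdim=True) + eps)
        similarity = (v_pred * v_true).sum(dim=1)    # dot product of the normalized vectors
        return (1.0 - similarity).mean()             # direction loss = 1 - similarity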
The position loss L_loc(b, g) is the Smooth-L1 loss between the truth bounding box g and the predicted oriented bounding box b, with the box parameters encoded relative to the matched anchor box: the geometric center of the anchor box matched to the truth box is used in the encoding, and a_l and a_w denote the averages of the lengths and widths of all anchor boxes.
For the class classification loss, the ability of the model to detect and identify rare obstacle classes (e.g., trucks) may be improved by using the Softmax focal loss to calculate the probabilities of the foreground obstacle classes.
During training, the model's ability to predict different variables can be dynamically adjusted by multiplying the foreground/background confidence loss, the regression loss of position, size, and direction, and the class classification loss by different weighting factors. All positive and negative anchor boxes are used to compute the confidence loss, while only the positive anchor boxes are used to compute the regression loss and the classification loss.
During training, the top view is input into the convolutional neural network to obtain the two-dimensional detection box of the object to be recognized on the top view (i.e., in the image coordinate system), b_bev = {c_x, c_y, l_bev, w_bev, θ_bev, t}, where c_x is the x coordinate of the object's geometric center and c_y is the y coordinate; l_bev is the length of the object and w_bev is its width, in pixels; θ_bev is the direction angle of the object in the image coordinate system, in the range [-π/2, π/2); and t is the class of the object, such as pedestrian, car, or tricycle.
The truth bounding box includes the length, width, height, three-dimensional coordinates of the geometric center, orientation angle, category, etc. of the object. During training, the truth bounding box needs to be transformed to the image coordinate system of the top view.
The object recognition model may predict the following variables:
{Conf, b_cx, b_cy, b_w, b_l, cos θ, sin θ, c_0, c_1, ..., c_C},
where Conf represents the confidence score of the predicted foreground obstacle; b_cx and b_cy represent the predicted displacement between the geometric centers of the obstacle and the anchor box, and b_w and b_l represent the logarithms of the obstacle's width and length; and c_0, c_1, ..., c_C represent the probabilities of the obstacle categories.
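For illustration, the sketch below decodes one anchor's predicted variables into a top-view box, under the assumptions that the center displacement is added directly to the anchor center and that b_w and b_l are the logarithms of the box width and length; the vector layout and all names are hypothetical.

```python
import numpy as np

def decode_prediction(pred, anchor_center):
    """Decode one anchor's raw prediction vector into a BEV box.
    Assumed layout: [Conf, b_cx, b_cy, b_w, b_l, cos_t, sin_t, c_0, ..., c_C].
    Width/length are recovered with exp() since their logarithms are predicted;
    the direction angle is recovered with atan2(sin, cos)."""
    conf = pred[0]
    cx = anchor_center[0] + pred[1]        # assumed: displacement added directly
    cy = anchor_center[1] + pred[2]
    w = np.exp(pred[3])
    l = np.exp(pred[4])
    theta = np.arctan2(pred[6], pred[5])
    cls_probs = pred[7:]
    return {"conf": conf, "cx": cx, "cy": cy, "w": w, "l": l,
            "theta": theta, "class_id": int(np.argmax(cls_probs))}
```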
The above describes how to train an object recognition model using training data, including deriving augmented data from point cloud annotation data and training a convolutional neural network for object recognition using the point cloud annotation dataset and its augmented data. Next, an object recognition method that performs object recognition using the trained object recognition model will be described with reference to fig. 2.
Fig. 2 illustrates a flow diagram of an object identification method according to some embodiments of the present disclosure.
As shown in fig. 2, the object recognition method includes: step 10, point cloud data of an object to be identified, which is acquired by a laser radar, is acquired; step 30, generating a multi-channel top view according to the point cloud data; and step 50, identifying the object to be identified by using the top view of the multiple channels.
At step 10, the point cloud data includes spatial coordinate values reflecting the height of the point cloud. In some embodiments, the point cloud data may be represented by an N x 4 matrix, where each of the N points has information such as X, Y, Z three-dimensional space coordinates and intensity values (intensity).
At step 30, a top view of the multiple channels includes a first channel representing a point cloud height. In some embodiments, the first channel includes a first color channel, a second color channel, and a third color channel that respectively represent different color ranges. In other embodiments, the object may also be identified in conjunction with image data of the object captured by an image sensor (e.g., a camera). That is, a fourth channel reflecting the object type may be further added in the top view.
In some embodiments, the object class information may be converted into labels in the fourth channel as follows: first, the data points corresponding to the pixel points in the top view are projected onto an image of the object, where the projection can be calibrated using the extrinsic parameters from the lidar to the camera; then, according to the object type at the projection position, the corresponding label is recorded at the corresponding pixel point of the fourth channel. For example, if the projection point falls on a foreground obstacle, the type ID of the obstacle is recorded at the corresponding pixel point of the fourth channel: if the obstacle is a pedestrian with ID 1, the value 1 is filled in at the corresponding pixel point of the fourth channel; if it is a vehicle with ID 2, the value 2 is filled in.
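The following sketch illustrates this label-filling step under the assumptions of a pinhole camera model with known lidar-to-camera extrinsics and intrinsics and a per-pixel class mask produced by a visual model; all names are hypothetical.

```python
import numpy as np

def fill_fourth_channel(points_lidar, bev_pixels, class_mask, T_lidar_to_cam, K, bev_shape):
    """points_lidar: (N, 3) XYZ points; bev_pixels: (N, 2) integer (x, y) top-view
    pixel of each point; class_mask: (H, W) per-pixel class IDs from the visual
    model (0 = background); T_lidar_to_cam: (4, 4) extrinsics; K: (3, 3) intrinsics."""
    fourth = np.zeros(bev_shape, dtype=np.uint8)
    pts_h = np.concatenate([points_lidar, np.ones((len(points_lidar), 1))], axis=1)
    cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]          # points in the camera frame
    z = np.clip(cam[:, 2], 1e-6, None)                 # avoid division by zero
    uv = (K @ cam.T).T
    u = (uv[:, 0] / z).astype(int)
    v = (uv[:, 1] / z).astype(int)
    H, W = class_mask.shape
    valid = (cam[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    for i in np.where(valid)[0]:
        label = class_mask[v[i], u[i]]
        if label > 0:                                  # projection falls on a foreground obstacle
            x, y = bev_pixels[i]
            fourth[y, x] = label                       # e.g., 1 = pedestrian, 2 = vehicle
    return fourth
```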
How to generate a multi-channel overhead view from point cloud data is described below in conjunction with fig. 2A, 3A, and 3B.
Fig. 2A illustrates a flow diagram for generating an overhead view of multiple channels from point cloud data according to some embodiments of the present disclosure.
As shown in fig. 2A, the step 30 of generating a top view of the multiple channels from the point cloud data includes: step 31, projecting the point cloud data to the top view according to the corresponding relation between the point cloud coordinate system and the image coordinate system of the top view; and step 32, converting the point cloud heights into color values in the first color channel, the second color channel and the third color channel respectively.
Fig. 3A illustrates a correspondence between a point cloud coordinate system and an image coordinate system of a top view according to some embodiments of the present disclosure.
Before step 31, the coordinate system of the point cloud area to be projected (i.e. the point cloud coordinate system) and the image coordinate system of the overhead view, as well as the correspondence between the two coordinate systems, are first determined.
As shown in fig. 3A, the X-Y coordinate system is the point cloud coordinate system to be projected. Its origin is at the position of the lidar, the positive X axis points forward, and the positive Y axis points to the left. Point clouds within a certain range around the origin (for example, 40 meters in front, 40 meters behind, and 20 meters to the left and right) are projected into the top view; that is, the size of the projection area is expressed as L × W × H (in meters; for example, L is 80 meters and W is 40 meters). The x-y coordinate system is the image coordinate system of the top view. Its origin is at the lower left corner of the projection area, the x axis is the width (w) direction of the image and the y axis is the height direction of the image, the image size of the top view is expressed as w × h (in pixels), the positive x axis points upward, and the positive y axis points to the right.
Then, in step 31, the point cloud data is projected into the top view according to the correspondence between the point cloud coordinate system and the image coordinate system of the top view. In some embodiments, before the point cloud data is projected into the top view, the point cloud data is preprocessed and the top view to be output is initialized as an image. The preprocessing of the point cloud data includes, for example, denoising to remove invalid points such as NaN (not a number) values. The image initialization includes, for example, setting the initial value of each pixel point in the image coordinate system to 0.
In step 31, only the point cloud data within the projection area may be projected onto the overhead view. The projection area includes points (X, Y, Z) in the point cloud coordinate system that satisfy the following condition:
X_min ≤ X ≤ X_max, Y_min ≤ Y ≤ Y_max, and Z_min ≤ Z ≤ Z_max,
where, taking a projection area of 40 meters in front of and behind the lidar and 20 meters to its left and right as an example, X_min = −40 m, X_max = 40 m, Y_min = −20 m, and Y_max = 20 m; taking an installation height of the lidar of 1.8 m above the ground, Z_min = −1.8 m, and Z_max may be set to, for example, 1 m. It should be understood that the size of the projection area and the installation height can be set according to actual requirements, and X_min, X_max, Y_min, Y_max, Z_min and Z_max can be set accordingly.
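A minimal sketch of this cropping step is shown below, using the example limits given above; the function and parameter names are hypothetical.

```python
import numpy as np

def crop_to_projection_area(points, x_lim=(-40.0, 40.0), y_lim=(-20.0, 20.0),
                            z_lim=(-1.8, 1.0)):
    """Keep only points inside the projection area and drop NaN points.
    points: (N, 4) array of (X, Y, Z, intensity). The limits follow the example
    values in the text and can be changed according to actual requirements."""
    finite = np.all(np.isfinite(points[:, :3]), axis=1)
    mask = (finite
            & (points[:, 0] >= x_lim[0]) & (points[:, 0] <= x_lim[1])
            & (points[:, 1] >= y_lim[0]) & (points[:, 1] <= y_lim[1])
            & (points[:, 2] >= z_lim[0]) & (points[:, 2] <= z_lim[1]))
    return points[mask]
```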
Assume the coordinates of the origin of the point cloud coordinate system in the BEV top-view coordinate system are (O_x, O_y), still in meters. The coordinate transformation that projects a point (X_lidar, Y_lidar) in the point cloud coordinate system to a pixel (x_bev, y_bev) in the top view can be calculated by equation (1), which uses the projection density of the top view, i.e., the size in meters of the point cloud projection area represented by a unit area of the top-view image, where w represents the image width and L represents the length of the projection area, as shown in fig. 3A. Since O_x, O_y, w and L are known, the pixel (x_bev, y_bev) to which a point (X_lidar, Y_lidar) in the point cloud coordinate system projects can be calculated according to equation (1). (Equation (1) appears as a formula image in the original publication.)
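Because equation (1) is only available as a formula image, the following sketch shows one plausible form of the mapping, assuming a shift by (O_x, O_y) followed by division by the projection density L/w; the exact signs and axis conventions follow fig. 3A and may differ from this sketch.

```python
def lidar_to_bev_pixel(X, Y, O_x, O_y, L, w):
    """Plausible form of the lidar-to-top-view mapping: shift the point by the
    origin offset (O_x, O_y) (in meters) and divide by the projection density
    delta = L / w (meters represented by one pixel). The axis conventions and
    signs are assumptions, not the formula of the original equation (1)."""
    delta = L / float(w)                 # meters represented by one pixel
    x_bev = int((X + O_x) / delta)
    y_bev = int((Y + O_y) / delta)
    return x_bev, y_bev
```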
It should be appreciated that because points in the image coordinate system correspond to discrete pixels, there may be multiple points in the point cloud coordinate system that correspond to the same pixel in the image coordinate system.
θ_lidar and θ_bev respectively denote the direction angles of the object in the point cloud coordinate system and in the image coordinate system. The direction angle θ_lidar of the object in the point cloud coordinate system can be obtained from the correspondence diagram between the point cloud coordinate system and the object coordinate system, and the direction angle θ_bev of the object in the image coordinate system can then be calculated according to equation (1).
Fig. 3B illustrates a correspondence between a point cloud coordinate system and an object coordinate system according to some embodiments of the present disclosure.
Taking the case where the object in the point cloud coordinate system is a vehicle as an example, the origin of the object coordinate system is at the geometric center of the object. As shown in fig. 3B, the positive direction of the X' axis of the object coordinate system is parallel to the length direction of the object, and the positive direction of the Y' axis is parallel to the width direction of the object. The direction angle θ_lidar of the object in the point cloud coordinate system is the angle from the positive direction of the X axis of the point cloud coordinate system to the positive direction of the X' axis of the object coordinate system. If the rotation is counterclockwise, θ_lidar is positive; if the rotation is clockwise, θ_lidar is negative. The direction angle θ_lidar is expressed in radians and lies in the range [−π/2, π/2); that is, in the disclosed embodiments no distinction is made between the head and the tail of the obstacle object.
From the correspondence diagram between the point cloud coordinate system and the object coordinate system shown in fig. 3B, the direction angle θ_lidar of the object in the point cloud coordinate system can be obtained. Furthermore, the direction angle θ_bev of the object in the image coordinate system can be calculated according to equation (1).
Next, returning to fig. 2A, in step 32, the point cloud heights located in the plurality of height ranges are converted into a first color value in a first color channel, a second color value in a second color channel, and a third color value in a third color channel in the top view, respectively. In some embodiments, the height of the point cloud is converted into three-channel RGB color values, i.e., a first color value for red, a second color value for green, and a third color value for blue. Fig. 3C illustrates a flow diagram for converting point cloud heights to color values, according to some embodiments of the present disclosure.
As mentioned above, during the projection process, it may happen that a plurality of points in the point cloud coordinate system correspond to the same pixel point in the image coordinate system. In this case, only the maximum point cloud height (i.e., the coordinate value of Z) among the plurality of data points corresponding to one pixel point is converted into the color value in the top view. That is, in step 32, for a pixel point in the image coordinate system of the top view, only the maximum point cloud height of the plurality of corresponding data points is converted into the color value of the pixel point.
As shown in fig. 3C, step 32 includes: step 321, dividing the point cloud height which is greater than or equal to a first threshold and less than or equal to a second threshold into a plurality of height ranges, wherein the second threshold is greater than the first threshold; and step 322, respectively converting the point cloud heights in the plurality of height ranges into color ranges in the top view.
In step 321, the first threshold is Z_min and the second threshold is Z_max; that is, only point cloud data whose point cloud height lies in the interval [Z_min, Z_max] is projected onto the top view.
In some embodiments, the point cloud height is divided into three different height ranges, i.e., three height intervals: a first height range H1, a second height range H2, and a third height range H3.
Next, at step 322, the point cloud heights located in the different height ranges are converted to color values in the different color ranges in the top view.
Taking the height range of 3 as an example, the first height range, the second height range and the third height range respectively correspond to the first color range, the second color range and the third color range in the top view. The different color ranges may correspond to different color channels, e.g., the first color range, the second color range, and the third color range correspond to the first color channel, the second color channel, and the third color channel, respectively.
In some embodiments, the mapping or conversion of the point cloud heights to color values is accomplished with different conversion parameters in different height ranges. For example, color values in a first color range are linearly related to point cloud heights in a first height range with a slope α, color values in a second color range are linearly related to point cloud heights in a second height range with a slope β, and color values in a third color range are linearly related to point cloud heights in a third height range with a slope γ.
For example, when generating RGB color values from the point cloud height, red may correspond to higher point cloud heights, green to intermediate heights, and blue to lower heights; that is, points with the highest point cloud height appear red in the top view, points with the lowest height appear blue, and points with intermediate heights appear green.
Within each height range, the higher the point cloud height, the higher the converted color value, so the darker the color may appear in the corresponding color channel. For example, in the green channel, a point with a higher point cloud height has a higher color value and appears, for example, dark green in the top view, while a point with a lower point cloud height has a lower color value and appears light green.
The values of the slopes α, β and γ may be different from each other for different height range to color range conversions. For example, in a red channel, points with a 2-fold difference in point cloud height have a 2-fold difference in corresponding color values; but in the green channel, the point cloud heights differ by a factor of 2, and the corresponding color values may differ by a factor of 3.
The above is an example of dividing the height of the point cloud into 3 height ranges, it being understood that it may also be divided into more than 3 height ranges, e.g. 4 or more, but still corresponding to 3 color channels. The following description will be given taking 4 height ranges as an example.
Let the four height ranges be a first height range, a second height range, a third height range and a fourth height range, corresponding, for example, to Z coordinate values in the intervals [Z_min, Z_min + k_low × (Z_max − Z_min)), [Z_min + k_low × (Z_max − Z_min), Z_min + k_mid × (Z_max − Z_min)), [Z_min + k_mid × (Z_max − Z_min), Z_min + k_top × (Z_max − Z_min)) and [Z_min + k_top × (Z_max − Z_min), Z_max), respectively. It should be understood that k_low, k_mid and k_top can be set according to actual needs.
The color values of the data points in the first height range on the top view can be calculated as follows: color values in the red channel are zero; the color value in the green channel linearly increases from zero along with the height of the point cloud, and the slope is a; the color value in the blue channel is zero.
The color values of the data points in the second height range in the top view may be calculated as follows: the color value in the red channel is zero; the color value in the green channel is 255 (or 1.0 in floating-point representation); and the color value in the blue channel increases linearly from zero with the point cloud height, with slope b.
The color values of the data points in the third height range in the top view may be calculated as follows: the color value in the red channel increases linearly from zero with the point cloud height, with slope c; the color value in the green channel is 255 (or 1.0 in floating-point representation); and the color value in the blue channel is zero.
The color values of the data points in the fourth height range in the top view may be calculated as follows: the color value in the red channel is 255 (or 1.0 in floating-point representation); the color value in the green channel increases linearly from zero with the point cloud height, with slope d; and the color value in the blue channel is zero. As previously mentioned, the values of the slopes a, b, c and d may or may not be the same for the different height-range-to-color-range conversions.
The following takes k_low, k_mid and k_top of 0.15, 0.5 and 0.75, respectively, and an initial color value of 1.0 in each color channel (floating-point representation) as an example to describe how the color values in the different color channels are calculated from the correspondence between point cloud height and color value described above.
The first height range, the second height range, the third height range and the fourth height range correspond to [Z_min, Z_min + 0.15 × (Z_max − Z_min)), [Z_min + 0.15 × (Z_max − Z_min), Z_min + 0.5 × (Z_max − Z_min)), [Z_min + 0.5 × (Z_max − Z_min), Z_min + 0.75 × (Z_max − Z_min)) and [Z_min + 0.75 × (Z_max − Z_min), Z_max), respectively.
The color values in different color channels corresponding to the first height range, the second height range, the third height range and the fourth height range are respectively as follows.
First height range: r is 0.0; g ═ a × (Z-Z)min)/(Zmax-Zmin);B=1.0。
Second height range: r is 0.0; g ═ 1.0; b is 1.0-bx [ Z-Z ]min-0.15×(Zmax-Zmin)]/(Zmax-Zmin)。
Third height range: r ═ c × [ Z-Z ]min-0.5×(Zmax-Zmin)]/(Zmax-Zmin);G=1.0;B=0.0。
Fourth height range: r is 0.0; g1.0-d x [ Z-Z ]min-0.75×(Zmax-Zmin)]/(Zmax-Zmin);B=0.0。
As described above, for a pixel in the image coordinate system of the top view, although only the maximum point cloud height of the plurality of corresponding data points is converted into the color value of the pixel, the minimum point cloud height of the plurality of corresponding data points may also be recorded as a parameter for subsequent processing.
In some embodiments, for each pixel point (x, y) in the image coordinate system, the corresponding maximum point cloud height may be recorded as H_max(x, y), and the corresponding minimum point cloud height as H_min(x, y). By performing similar processing for every pixel point in the image coordinate system, a corresponding maximum height matrix H_max and minimum height matrix H_min can be obtained, as shown in fig. 3D. The maximum height matrix H_max and the minimum height matrix H_min may be used in subsequent processing to calculate the height of the detection box.
Fig. 3D illustrates a schematic representation of the maximum height matrix H_max and the minimum height matrix H_min according to some embodiments of the present disclosure.
As shown in fig. 3D, the pixel points (x_1, y_1) and (x_2, y_2) in the image coordinate system correspond to the data points contained in column 1 and column 2, respectively, in the point cloud coordinate system. For pixel point (x_1, y_1), the maximum point cloud height of the data points contained in the corresponding column 1 is, for example, 1.5, and the minimum point cloud height is, for example, −1.2; that is, the value of the maximum height matrix H_max at (x_1, y_1) is 1.5 and the value of the minimum height matrix H_min at (x_1, y_1) is −1.2. Similarly, the value of the maximum height matrix H_max at (x_2, y_2) is 2.6 and the value of the minimum height matrix H_min at (x_2, y_2) is −0.8.
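A minimal sketch of building the maximum and minimum height matrices is shown below; the array layout (row index y, column index x) and the names are assumptions.

```python
import numpy as np

def build_height_matrices(bev_pixels, heights, bev_shape):
    """bev_pixels: (N, 2) integer (x, y) top-view pixel of each data point;
    heights:    (N,) point cloud heights (Z values);
    bev_shape:  (h, w) size of the top view.
    Returns the per-pixel maximum and minimum height matrices H_max and H_min."""
    H_max = np.full(bev_shape, -np.inf)
    H_min = np.full(bev_shape, np.inf)
    for (x, y), z in zip(bev_pixels, heights):
        H_max[y, x] = max(H_max[y, x], z)
        H_min[y, x] = min(H_min[y, x], z)
    return H_max, H_min
```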
According to the above embodiments, the point cloud within the L × W range around the lidar whose height lies in [Z_min, Z_max] can be projected onto a three-channel top view of width w and height h pixels. By visually displaying the point cloud height through different RGB colors in the three-channel top view, a user can obtain a preliminary clue for identifying an object from the colors alone. In addition, by controlling the conversion parameters (e.g., the slopes) between point cloud height and color values in a piecewise manner, the information of a specific color channel can be presented more effectively.
In the above embodiment, how the point cloud height is converted into three-channel RGB values is described. In other embodiments, the height of the point cloud may also be converted into a color value in one channel (i.e., the first channel) in the top view, and the reflection intensity value and the density of the point cloud in the point cloud data may be converted into color values in the second channel and the third channel, respectively, in the top view.
For example, the color value of each channel of a data point in the top view can be calculated using equation (2) (given as a formula image in the original publication), where G(x, y) represents the maximum height of all data points corresponding to the pixel point (x, y) in the top view, B(x, y) represents the maximum reflection intensity of all data points corresponding to the pixel point (x, y), R(x, y) represents the density of all data points corresponding to the pixel point (x, y), and n is the number of data points projected to the pixel point (x, y). In the calculation of R(x, y), 1.0 is the maximum value of the point cloud density, and N′ indicates the number of lidar lines; for example, N′ is 64 when a 64-line lidar is used to collect the point cloud data.
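The sketch below illustrates such a height/intensity/density top view. Since equation (2) is only available as an image, the logarithmic density normalization min(1.0, log(n + 1)/log(N′)) used here is an assumption, and all names are hypothetical.

```python
import numpy as np

def make_hid_bev(bev_pixels, heights, intensities, bev_shape, n_lines=64):
    """Three-channel top view: G = per-pixel maximum height, B = per-pixel
    maximum reflection intensity, R = per-pixel point density.
    The density normalization min(1.0, log(n + 1) / log(n_lines)) is an
    assumption, not the formula of the original equation (2)."""
    h, w = bev_shape
    G = np.full((h, w), -np.inf)
    B = np.zeros((h, w))
    count = np.zeros((h, w))
    for (x, y), z, inten in zip(bev_pixels, heights, intensities):
        G[y, x] = max(G[y, x], z)
        B[y, x] = max(B[y, x], inten)
        count[y, x] += 1
    G[count == 0] = 0.0                                   # empty pixels keep the initial value
    R = np.minimum(1.0, np.log(count + 1) / np.log(n_lines))
    return np.stack([R, G, B], axis=-1)
```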
By increasing the reflection intensity and the density information on the basis of the point cloud height, the characterization capability of the top view is greatly enhanced, for example, the reflection intensity can better distinguish vehicles with metal surfaces from people with soft surfaces, and the false detection of roadside flower beds can also be reduced.
FIG. 4 shows a flow diagram of an object identification method according to further embodiments of the present disclosure. Fig. 4 differs from fig. 2 in that step 20 is further included, and step 30' in fig. 4 differs from step 30 in fig. 2. Only the differences between fig. 4 and fig. 2 will be described below, and the same parts will not be described again.
As shown in fig. 4, the object recognition method further includes: step 20, image data of the object collected by the image sensor is acquired. It should be understood that the execution sequence of step 20 and step 10 is not limited, and may be executed sequentially or simultaneously, that is, step 20 may be executed before or after step 10 or simultaneously.
After point cloud data and image data are acquired at steps 10 and 20, respectively, at step 30', a multi-channel overhead view, for example, a 4-channel overhead view, is generated using the point cloud data and the image data.
In some embodiments, the 4-channel top view may include a fourth channel representing a category of objects in addition to the aforementioned three color channels representing the height of the point cloud. In other embodiments, the 4-channel top view may include a fourth channel representing object categories in addition to the first channel representing point cloud height, the second channel representing reflected intensity values, and the third channel representing point cloud density described previously.
The fourth channel is added in the top view of the RGB three channels to represent the object types acquired based on the visual image, and the capability of distinguishing the object types and the capability of distinguishing foreground and background objects can be greatly enhanced because the type information obtained based on the visual model is usually more accurate, so that the false detection is effectively reduced.
In some embodiments, the top view of the multiple channels may be smoothed, for example with gaussian filtering, to obtain a smoothed top view for subsequent processing.
After generating a multi-channel top view from the point cloud data, the objects are identified using the multi-channel top view, step 50, as shown in FIG. 2. How to identify the object is described below in conjunction with fig. 5.
FIG. 5 illustrates a flow diagram for identifying objects using a top view of multiple channels according to some embodiments of the present disclosure.
As shown in fig. 5, the step 50 of recognizing the object to be recognized by using the top view of multiple channels includes: step 51, inputting the multi-channel top view into a convolutional neural network to obtain a two-dimensional detection frame of the object to be identified on the top view (namely, an image coordinate system); step 52, determining a two-dimensional detection frame of the object to be recognized in the point cloud coordinate system according to the two-dimensional detection frame of the object to be recognized on the top view; step 53, calculating the height of the object to be recognized in the point cloud coordinate system; and step 54, outputting a three-dimensional detection frame of the object to be recognized in the point cloud coordinate system based on the two-dimensional detection frame of the object in the point cloud coordinate system and the height of the object to be recognized.
In step 51, the convolutional neural network CNN is, for example, a one-stage, proposal-free CNN (e.g., SSD), and the resulting detection box is a detection box without height.
In the image coordinate system of the top view, the coordinates of the two-dimensional detection box may be represented as b_bev = {c_x, c_y, l_bev, w_bev, θ_bev, t}, where c_x is the x coordinate of the geometric center of the object and c_y is the y coordinate of the geometric center; l_bev is the length of the object and w_bev is the width of the object, in pixels; θ_bev is the direction angle of the object in the image coordinate system, in the range [−π/2, π/2); and t is the class of the object, such as pedestrian, automobile, tricycle, etc.
In some embodiments, some post-processing, such as non-maximum suppression (NMS), is applied to the detection boxes obtained in step 51; that is, when multiple overlapping detection boxes are obtained, only the detection box with the highest probability is kept.
Next, in step 52, the two-dimensional detection box b_bev = {c_x, c_y, l_bev, w_bev, θ_bev, t} in the image coordinate system of the top view may be converted into the two-dimensional detection box b_lidar = {C_X, C_Y, l_lidar, w_lidar, θ_lidar, t} in the point cloud coordinate system of the lidar. For example, C_X, C_Y and θ_lidar can be calculated from c_x, c_y and θ_bev using equation (1). Likewise, using the correspondence between the image coordinate system and the point cloud coordinate system, the length and width l_lidar and w_lidar of the object to be recognized in the point cloud coordinate system can be obtained from l_bev and w_bev.
Then, in step 53, the height of the object to be recognized in the point cloud coordinate system is calculated.
As described above, a pixel point in the image coordinate system of the top view may correspond to a plurality of data points; for each pixel point, the maximum and minimum point cloud heights of the at least one data point corresponding to that pixel point are determined as the maximum height and minimum height of the pixel point. For the two-dimensional detection box b_bev = {c_x, c_y, l_bev, w_bev, θ_bev, t}, the maximum of the maximum heights of the pixel points it covers is determined as the maximum height H_max of the detection box, and the minimum of their minimum heights is determined as the minimum height H_min of the detection box. The height of the detection box is then calculated from the difference between the maximum and minimum heights of the detection box as the height of the object to be recognized in the point cloud coordinate system, for example h = H_max − H_min.
Next, at step 54, a three-dimensional detection frame is output.
The three-dimensional detection box contains the complete information of the object to be recognized, including the category t of the object and the position and size of the object in the lidar point cloud coordinate system, namely the geometric center (C_X, C_Y, C_Z) and the length, width and height (l_lidar, w_lidar, h), as well as the direction angle θ_lidar of the object to be recognized.
The Z coordinate C_Z of the detection box in the point cloud coordinate system of the lidar can be calculated, for example, as C_Z = (H_max + H_min)/2. Combined with the previously calculated two-dimensional detection box b_lidar = {C_X, C_Y, l_lidar, w_lidar, θ_lidar, t}, the three-dimensional detection box B_lidar = {C_X, C_Y, C_Z, l_lidar, w_lidar, h, θ_lidar, t} in the point cloud coordinate system of the lidar is obtained.
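The following sketch assembles the three-dimensional detection box from the two-dimensional box and the height matrices as described above; the data layout and names are hypothetical.

```python
def lift_to_3d(box2d_lidar, H_max, H_min, box_pixels):
    """box2d_lidar: dict with keys C_X, C_Y, l, w, theta, t (already converted
    to the lidar frame); box_pixels: iterable of (x, y) top-view pixels covered
    by the 2D box; H_max, H_min: per-pixel maximum/minimum height matrices.
    Returns the 3D detection box {C_X, C_Y, C_Z, l, w, h, theta, t}."""
    h_max = max(H_max[y, x] for x, y in box_pixels)   # maximum height of the box
    h_min = min(H_min[y, x] for x, y in box_pixels)   # minimum height of the box
    out = dict(box2d_lidar)
    out["h"] = h_max - h_min                          # object height
    out["C_Z"] = (h_max + h_min) / 2.0                # vertical center of the box
    return out
```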
The training method, the data augmentation method, and the object recognition method of the object recognition model according to some embodiments of the present disclosure are described above with reference to fig. 1 to 5, and the following describes an apparatus or system implementing these methods.
According to some embodiments of the present disclosure, there is also provided a training apparatus for an object recognition model, which can implement the training method described in any of the above embodiments.
FIG. 6 illustrates a block diagram of the training apparatus 10 for an object recognition model according to some embodiments of the present disclosure.
As shown in fig. 6, the training device 10 for an object recognition model includes:
an obtaining unit 11, configured to obtain a training set, where the training set includes a point cloud labeling data set of an object acquired by a laser radar, and the point cloud labeling data set has a true value bounding box, for example, execute step S1;
a top view generating unit 12 configured to generate a top view from the point cloud annotation data set, for example, perform step S2;
an extraction unit 13 configured to extract a plurality of feature maps of different resolutions from the top view by using a feature extractor, for example, to perform step S3;
a determining unit 14 configured to determine the size of the anchor frame and its position on the feature map, e.g. to perform step S4;
an anchor frame generating unit 15 configured to generate anchor frames of different sizes including size and aspect ratio centering on each pixel of the feature map, for example, to execute step S5;
a matching unit 16 configured to match the anchor block with the true value bounding box on a plurality of feature maps of different resolutions to determine a sample type of the anchor block, e.g., perform step S6;
the training unit 17 is configured to train the object recognition model based on the contributions of the anchor boxes of the different sample types to the loss function of the convolutional neural network, for example, to perform step S7.
According to some embodiments of the present disclosure, a data augmentation device is further provided, which can implement the data augmentation method described in any of the above embodiments.
Fig. 6A illustrates a block diagram of a data augmentation device of some embodiments of the present disclosure.
As shown in fig. 6A, the data amplification apparatus includes an acquisition unit 120, a selection unit 140, and an amplification unit 160.
The obtaining unit 120 is configured to obtain a point cloud annotation dataset of the object acquired by the lidar, for example, to perform step S12. The point cloud labeling data set is provided with a true value three-dimensional detection frame.
The selection unit 140 is configured to select a three-dimensional detection box from the point cloud annotation data set, for example, to perform step S14.
The augmentation unit 160 is configured to perform at least one of the specified operations on the point cloud data contained in the selected three-dimensional detection frame, resulting in augmented data, for example, perform step S16. The specifying operation includes: rotating the point cloud data contained in the three-dimensional detection frame by a preset angle around the height direction of the three-dimensional detection frame; deleting a portion of the contained point cloud data; adding random noise points to at least a portion of the contained point cloud data; and copying the contained point cloud data from the point cloud frame to the space of other point cloud frames.
Fig. 6B illustrates a block diagram of an object identification device of some embodiments of the present disclosure.
As shown in fig. 6B, the object recognition device 60 includes an acquisition unit 61, a generation unit 63, and a recognition unit 65.
The acquisition unit 61 is configured to acquire point cloud data of the object acquired by the laser radar, for example, to perform step S1. The point cloud data includes spatial coordinate values reflecting the height of the point cloud. In some embodiments, the acquisition unit 61 is further configured to acquire image data of the object acquired by the image sensor.
The generating unit 63 is configured to generate an overhead view of the multiple channels from the point cloud data, for example, to perform step S3. The top view of the multiple channels includes a first channel representing a height of the point cloud. In some embodiments, the top view of the multiple channels further includes a second channel representing the reflected intensity values and a third channel representing the point cloud density. In still other embodiments, the top view of the multiple channels further includes a fourth channel representing color information in the image data.
The recognition unit 65 is configured to recognize the object using the top view of the plurality of channels, for example, to perform step S5.
Fig. 6C illustrates a block diagram of the identification unit shown in fig. 6B of some embodiments of the present disclosure.
As shown in fig. 6C, the recognition unit 65 includes an input sub-unit 651, a determination sub-unit 652, a calculation sub-unit 653, and an output sub-unit 654.
The input subunit 651 is configured to input the multi-channel top view into the trained convolutional neural network to obtain a two-dimensional detection box of the object to be recognized on the top view, for example, to perform step 51.
The determining subunit 652 is configured to determine the two-dimensional detection frame of the object to be recognized in the point cloud coordinate system according to the two-dimensional detection frame of the object to be recognized on the top view, for example, execute step 52.
The calculation subunit 653 is configured to calculate the height of the object to be recognized in the point cloud coordinate system, for example, to perform step 53.
The output subunit 654 is configured to output the three-dimensional detection frame of the object to be recognized in the point cloud coordinate system based on the two-dimensional detection frame of the object to be recognized in the top view and the height of the object, for example, execute step 54.
According to some embodiments of the present disclosure, there is also provided an electronic device capable of implementing the method described in any of the above embodiments.
Fig. 7 illustrates a block diagram of an electronic device of some embodiments of the present disclosure.
As shown in fig. 7, the electronic apparatus 70 includes: a memory 710 and a processor 720 coupled to the memory 710, the processor 720 configured to perform one or more steps of a method in any of the embodiments of the present disclosure based on instructions stored in the memory 710.
The memory 710 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), a database, and other programs.
FIG. 8 shows a block diagram of an electronic device of some further embodiments of the disclosure.
As shown in fig. 8, the electronic apparatus 80 includes: a memory 810 and a processor 820 coupled to the memory 810, the processor 820 being configured to perform a method of any of the preceding embodiments based on instructions stored in the memory 810.
Memory 810 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), and other programs.
The electronic device 80 may also include an input-output interface 830, a network interface 840, a storage interface 850, and the like. These interfaces 830, 840, 850, as well as the memory 810 and the processor 820, may be connected, for example, by a bus 860. The input/output interface 830 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 840 provides a connection interface for various networking devices. The storage interface 850 provides a connection interface for external storage devices such as an SD card and a USB flash drive.
According to some embodiments of the present disclosure, there is also provided an object identification system comprising the electronic device according to any of the embodiments described above.
Fig. 9 illustrates a block diagram of an object identification system of some embodiments of the present disclosure.
As shown in fig. 9, the object recognition system 9 includes an electronic device 90. The electronic device 90 is configured to perform the method of any of the preceding embodiments. The structure of the electronic device 90 may be similar to that of the electronic device 70 or 80 described previously.
In some embodiments, the object recognition system 9 further comprises: the laser radar 91 is configured to acquire point cloud data of an object. In other embodiments, the object recognition system 9 further comprises: an image sensor 93 configured to acquire image data of the object. The image sensor 93 is, for example, a camera.
Fig. 9A illustrates a block diagram of an electronic device of further embodiments of the disclosure.
As shown in fig. 9A, the electronic device 90 includes the training apparatus 10 for object recognition model and the object recognition apparatus 60 according to any of the foregoing embodiments.
The training apparatus 10 is configured to train a convolutional neural network for object recognition using the point cloud labeling data set and its augmented data, for example, to perform the training method described in any of the previous embodiments.
The object recognition device 60 is configured to perform object recognition by using the trained convolutional neural network, for example, to perform the object recognition method according to any of the foregoing embodiments.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, apparatus, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
So far, object identification methods, apparatuses, and systems, and computer-readable storage media according to the present disclosure have been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
The method and system of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (17)

1. An object identification method comprising:
acquiring point cloud data of an object acquired by a laser radar, wherein the point cloud data comprises a spatial coordinate value reflecting the height of the point cloud;
generating a multi-channel top view from the point cloud data, wherein the multi-channel top view comprises a first channel representing a point cloud height; and
identifying the object using the multi-channel top view.
2. The object recognition method of claim 1, wherein the first channel comprises a first color channel, a second color channel, and a third color channel that respectively represent different color ranges, and generating the multi-channel top view from the point cloud data comprises:
projecting the point cloud data to the top view according to the corresponding relation between the point cloud coordinate system and the image coordinate system of the top view, wherein each pixel point in the top view corresponds to at least one data point in the point cloud coordinate system; and
the point cloud heights are converted into color values in a first color channel, a second color channel, and a third color channel, respectively.
3. The object identification method of claim 2, wherein converting the point cloud heights into color values in a first color channel, a second color channel, and a third color channel, respectively, comprises:
dividing the point cloud heights greater than or equal to a first threshold and less than or equal to a second threshold into a plurality of height ranges, wherein the second threshold is greater than the first threshold; and
the point cloud elevations located in the plurality of elevation ranges are converted to a first color value in a first color channel, a second color value in a second color channel, and a third color value in a third color channel in the overhead view, respectively.
4. The object identification method of claim 2, wherein converting the point cloud heights into color values in a first color channel, a second color channel, and a third color channel, respectively, comprises:
in different height ranges, different conversion parameters are used to convert the height of the point cloud into color values.
5. The object identification method of claim 2, wherein converting the point cloud heights into color values in a first color channel, a second color channel, and a third color channel, respectively, comprises:
only the maximum value of the point cloud height of at least one data point corresponding to each pixel point is converted into color values in the first color channel, the second color channel, and the third color channel.
6. The object identifying method of claim 1, wherein the point cloud data further comprises a reflection intensity value and a point cloud density, the multi-channel top view further comprises a second channel representing the reflection intensity value and a third channel representing the point cloud density, each pixel point in the top view corresponds to at least one data point in the point cloud coordinate system, and generating the multi-channel top view from the point cloud data comprises:
projecting the point cloud data to the top view according to the corresponding relation between the point cloud coordinate system and the image coordinate system of the top view; and
and respectively converting the point cloud height, the reflection intensity value and the point cloud density into color values in a first channel, a second channel and a third channel.
7. The object identification method of claim 6, wherein converting the point cloud height, the reflection intensity value, and the point cloud density to color values for a first channel, a second channel, and a third channel, respectively, comprises:
for the first channel, calculating the color value of the pixel point according to the maximum point cloud height of at least one data point corresponding to each pixel point;
for the second channel, calculating the color value of the pixel point according to the maximum value of the reflection intensity value of at least one data point corresponding to each pixel point;
for the third channel, the color value of the pixel point is calculated according to the density of at least one data point corresponding to each pixel point.
8. The object recognition method according to claim 2 or 6, further comprising acquiring image data of the object acquired by an image sensor, wherein:
the top view of the multiple channels further includes a fourth channel representing object class information in the image data;
generating the top view of the multiple channels from the point cloud data includes converting object class information in the image data to labels in a fourth channel.
9. The object identifying method of claim 8, wherein converting the object class information in the image data to the label in the fourth channel comprises:
projecting at least one data point corresponding to each pixel point in the top view onto an image of the object;
and recording the corresponding label to the corresponding pixel point of the fourth channel according to the object type corresponding to the projection position.
10. The object identifying method according to claim 2 or 6, wherein identifying the object using the top view of the plurality of channels includes:
inputting the multi-channel top view into a convolutional neural network to obtain a two-dimensional detection frame of an object on the top view;
determining a two-dimensional detection frame of the object in a point cloud coordinate system according to the two-dimensional detection frame of the object on the top view;
calculating the height of the object in the point cloud coordinate system; and
and outputting a three-dimensional detection frame of the object in the point cloud coordinate system based on the two-dimensional detection frame of the object on the top view and the height of the object.
11. The object recognition method of claim 10, wherein the two-dimensional detection frame of the object on the top view comprises a plurality of pixel points, and the calculating the height of the object in the point cloud coordinate system comprises:
for each pixel point, determining the maximum value and the minimum value of the point cloud height of at least one data point corresponding to the pixel point as the maximum height and the minimum height of the pixel point;
determining the maximum value of the maximum heights corresponding to the pixel points as the maximum height of the detection frame, and determining the minimum value of the minimum heights corresponding to the pixel points as the minimum height of the detection frame;
and calculating the height of the detection frame as the height of the object according to the difference value between the maximum height and the minimum height of the detection frame.
12. The object recognition method according to claim 10, wherein the three-dimensional detection frame includes information of:
a category of the object; and
the position, size and orientation angle of the object in the point cloud coordinate system.
13. An object recognition device comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is configured to acquire point cloud data of an object acquired by a laser radar, and the point cloud data comprises a spatial coordinate value reflecting the height of a point cloud;
a generating unit configured to generate a multi-channel top view from the point cloud data, wherein the multi-channel top view comprises a first channel representing a point cloud height; and
an identification unit configured to identify the object using a top view of the multiple channels.
14. An object recognition device comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the object identification method of any of claims 1-12 based on instructions stored in the memory.
15. An object identification system comprising:
a lidar configured to acquire point cloud data of an object; and
the object recognition device of claim 13 or 14, configured to identify the object using a multi-channel top view generated from the point cloud data.
16. The object identification system of claim 15, further comprising:
an image sensor configured to acquire image data of an object.
17. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the object identification method according to any one of claims 1 to 12.
CN202011211545.9A 2020-11-03 2020-11-03 Object recognition method, device and system, computer readable storage medium Pending CN112287859A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011211545.9A CN112287859A (en) 2020-11-03 2020-11-03 Object recognition method, device and system, computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011211545.9A CN112287859A (en) 2020-11-03 2020-11-03 Object recognition method, device and system, computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112287859A true CN112287859A (en) 2021-01-29

Family

ID=74351239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011211545.9A Pending CN112287859A (en) 2020-11-03 2020-11-03 Object recognition method, device and system, computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112287859A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095288A (en) * 2021-04-30 2021-07-09 浙江吉利控股集团有限公司 Obstacle missing detection repairing method, device, equipment and storage medium
CN113267761A (en) * 2021-05-28 2021-08-17 中国航天科工集团第二研究院 Laser radar target detection and identification method and system and computer readable storage medium
CN113408454A (en) * 2021-06-29 2021-09-17 上海高德威智能交通系统有限公司 Traffic target detection method and device, electronic equipment and detection system
CN113447923A (en) * 2021-06-29 2021-09-28 上海高德威智能交通系统有限公司 Target detection method, device, system, electronic equipment and storage medium
CN113610138A (en) * 2021-08-02 2021-11-05 典基网络科技(上海)有限公司 Image classification and identification method and device based on deep learning model and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102316355A (en) * 2011-09-15 2012-01-11 丁少华 Generation method of 3D machine vision signal and 3D machine vision sensor
CN102914501A (en) * 2012-07-26 2013-02-06 南京大学 Method for calculating extinction coefficients of three-dimensional forest canopy by using laser-point cloud
CN103955966A (en) * 2014-05-12 2014-07-30 武汉海达数云技术有限公司 Three-dimensional laser point cloud rendering method based on ArcGIS
CN104915982A (en) * 2015-05-15 2015-09-16 中国农业大学 Canopy layer illumination distribution prediction model construction method and illumination distribution detection method
CN107358640A (en) * 2017-07-05 2017-11-17 北京旋极伏羲大数据技术有限公司 A kind of landform of hill shading target area and the method and device of atural object
CN109145680A (en) * 2017-06-16 2019-01-04 百度在线网络技术(北京)有限公司 A kind of method, apparatus, equipment and computer storage medium obtaining obstacle information
CN109948661A (en) * 2019-02-27 2019-06-28 江苏大学 A kind of 3D vehicle checking method based on Multi-sensor Fusion
CN110070025A (en) * 2019-04-17 2019-07-30 上海交通大学 Objective detection system and method based on monocular image
CN110554409A (en) * 2019-08-30 2019-12-10 江苏徐工工程机械研究院有限公司 Concave obstacle detection method and system
CN111652964A (en) * 2020-04-10 2020-09-11 合肥工业大学 Auxiliary positioning method and system for power inspection unmanned aerial vehicle based on digital twinning
WO2020206708A1 (en) * 2019-04-09 2020-10-15 广州文远知行科技有限公司 Obstacle recognition method and apparatus, computer device, and storage medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102316355A (en) * 2011-09-15 2012-01-11 丁少华 Generation method of 3D machine vision signal and 3D machine vision sensor
CN102914501A (en) * 2012-07-26 2013-02-06 南京大学 Method for calculating extinction coefficients of three-dimensional forest canopy by using laser-point cloud
CN103955966A (en) * 2014-05-12 2014-07-30 武汉海达数云技术有限公司 Three-dimensional laser point cloud rendering method based on ArcGIS
CN104915982A (en) * 2015-05-15 2015-09-16 中国农业大学 Canopy layer illumination distribution prediction model construction method and illumination distribution detection method
CN109145680A (en) * 2017-06-16 2019-01-04 百度在线网络技术(北京)有限公司 A kind of method, apparatus, equipment and computer storage medium obtaining obstacle information
CN107358640A (en) * 2017-07-05 2017-11-17 北京旋极伏羲大数据技术有限公司 A kind of landform of hill shading target area and the method and device of atural object
CN109948661A (en) * 2019-02-27 2019-06-28 江苏大学 A kind of 3D vehicle checking method based on Multi-sensor Fusion
WO2020206708A1 (en) * 2019-04-09 2020-10-15 广州文远知行科技有限公司 Obstacle recognition method and apparatus, computer device, and storage medium
CN110070025A (en) * 2019-04-17 2019-07-30 上海交通大学 Objective detection system and method based on monocular image
CN110554409A (en) * 2019-08-30 2019-12-10 江苏徐工工程机械研究院有限公司 Concave obstacle detection method and system
CN111652964A (en) * 2020-04-10 2020-09-11 合肥工业大学 Auxiliary positioning method and system for power inspection unmanned aerial vehicle based on digital twinning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MARTIN SIMON 等: "Complex-YOLO: An Euler-Region-Proposal for Real-time 3D Object Detection on Point Clouds", 《EVVC2018》 *
XIAOZHI CHEN 等: "Multi-view 3D Object Detection Network for Autonomous Driving", 《2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
沈琦 等: "基于两级网络的三维目标检测算法", 《计算机科学》 *
点云侠: "PCL点云按高程渲染颜色", 《HTTPS://BLOG.CSDN.NET/QQ_36686437/ARTICLE/DETAILS/109076885》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095288A (en) * 2021-04-30 2021-07-09 浙江吉利控股集团有限公司 Obstacle missing detection repairing method, device, equipment and storage medium
CN113267761A (en) * 2021-05-28 2021-08-17 中国航天科工集团第二研究院 Laser radar target detection and identification method and system and computer readable storage medium
CN113408454A (en) * 2021-06-29 2021-09-17 上海高德威智能交通系统有限公司 Traffic target detection method and device, electronic equipment and detection system
CN113447923A (en) * 2021-06-29 2021-09-28 上海高德威智能交通系统有限公司 Target detection method, device, system, electronic equipment and storage medium
CN113408454B (en) * 2021-06-29 2024-02-06 上海高德威智能交通系统有限公司 Traffic target detection method, device, electronic equipment and detection system
CN113610138A (en) * 2021-08-02 2021-11-05 典基网络科技(上海)有限公司 Image classification and identification method and device based on deep learning model and storage medium

Similar Documents

Publication Publication Date Title
CN112287860B (en) Training method and device of object recognition model, and object recognition method and system
CN111328396B (en) Pose estimation and model retrieval for objects in images
CN109948661B (en) 3D vehicle detection method based on multi-sensor fusion
CN112287859A (en) Object recognition method, device and system, computer readable storage medium
CN112395962A (en) Data augmentation method and device, and object identification method and system
WO2022033076A1 (en) Target detection method and apparatus, device, storage medium, and program product
Hoang et al. Enhanced detection and recognition of road markings based on adaptive region of interest and deep learning
CN110796686A (en) Target tracking method and device and storage device
CN110298867B (en) Video target tracking method
CN115631344B (en) Target detection method based on feature self-adaptive aggregation
CN111382658B (en) Road traffic sign detection method in natural environment based on image gray gradient consistency
CN113761999A (en) Target detection method and device, electronic equipment and storage medium
CN113095152A (en) Lane line detection method and system based on regression
Cho et al. Semantic segmentation with low light images by modified CycleGAN-based image enhancement
CN113267761B (en) Laser radar target detection and identification method, system and computer readable storage medium
Zelener et al. Cnn-based object segmentation in urban lidar with missing points
CN116188999B (en) Small target detection method based on visible light and infrared image data fusion
CN117058646B (en) Complex road target detection method based on multi-mode fusion aerial view
CN113658257A (en) Unmanned equipment positioning method, device, equipment and storage medium
CN114820463A (en) Point cloud detection and segmentation method and device, and electronic equipment
CN113762003A (en) Target object detection method, device, equipment and storage medium
CN113420648A (en) Target detection method and system with rotation adaptability
US20230410561A1 (en) Method and apparatus for distinguishing different configuration states of an object based on an image representation of the object
CN115588187A (en) Pedestrian detection method, device and equipment based on three-dimensional point cloud and storage medium
CN106909936B (en) Vehicle detection method based on double-vehicle deformable component model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination