CN113537417B - Target identification method and device based on radar, electronic equipment and storage medium - Google Patents

Target identification method and device based on radar, electronic equipment and storage medium

Info

Publication number
CN113537417B
CN113537417B (application number CN202111090386.6A)
Authority
CN
China
Prior art keywords
radar
data
target
difference value
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111090386.6A
Other languages
Chinese (zh)
Other versions
CN113537417A (en)
Inventor
李仕贤
彭佳
谭俊杰
钟仁海
张燎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Hawkeye Electronic Technology Co Ltd
Original Assignee
Nanjing Hawkeye Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Hawkeye Electronic Technology Co Ltd filed Critical Nanjing Hawkeye Electronic Technology Co Ltd
Priority to CN202111090386.6A priority Critical patent/CN113537417B/en
Publication of CN113537417A publication Critical patent/CN113537417A/en
Application granted granted Critical
Publication of CN113537417B publication Critical patent/CN113537417B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The invention provides a radar-based target identification method and device, an electronic device, and a storage medium, belonging to the technical field of radar signal processing. The method comprises the following steps: calculating an estimated size value of an object from point cloud data of the object detected by a radar, so as to output radar data carrying the estimated size value; collecting video data that is temporally matched with the radar data and contains an image of the object, and preprocessing the video data to obtain the category of the object; marking the category of the object as a label in the radar data corresponding to the object to obtain labeled radar data; training a preset target recognition model with the labeled radar data as training data; and inputting radar data of a target object to be recognized into the trained target recognition model to recognize its category. The method can identify target categories automatically and accurately without manually labeling the radar data.

Description

Target identification method and device based on radar, electronic equipment and storage medium
Technical Field
The present invention relates to the field of radar signal processing technologies, and in particular, to a target identification method and apparatus based on radar, an electronic device, and a storage medium.
Background
Digital twin technology extracts, parameterizes, and models various traffic data in the environment and then reproduces the actual traffic situation, generating highly realistic traffic data that provides a decision basis for traffic scheduling. The elements of traffic data include pedestrians, motorcycles, cars, trucks, and the like on the road, and correctly identifying the categories of these elements is a basic condition for reconstructing real traffic conditions.
Vision-based target category identification is widely applied: under good weather conditions a visual sensor often performs well, but its performance degrades severely in adverse weather (rain, fog, strong light, and the like). The performance of a millimeter-wave radar sensor, by contrast, is largely unaffected even under poor light and weather conditions. However, most millimeter-wave radars acquire features such as the target's RCS (Radar Cross Section) and its spread in the range-Doppler dimensions, and then use a trained classifier to identify the target category. Training such a classifier requires labeled training data, and the prior art labels radar data manually, which is very inefficient; moreover, the target's RCS fluctuates as the vehicle moves, so misidentification easily occurs.
Disclosure of Invention
The invention provides a radar-based target identification method and device, an electronic device, and a storage medium, to solve the prior-art problems of low efficiency and frequent misidentification caused by manually labeling radar data, and to complete the labeling operation accurately and automatically, without manual work, so as to identify vehicle types.
The invention provides a target identification method based on radar, which comprises the following steps:
calculating an estimated size value of an object according to point cloud data of the object detected by the radar so as to output radar data with the estimated size value;
acquiring video data which is matched with the radar data in time and contains the image of the object, and preprocessing the video data to obtain the category of the object;
marking the category of the object as a label in radar data corresponding to the object to obtain radar data with the label;
training a preset target recognition model by using the radar data with the labels as training data;
inputting radar data of a target object to be recognized to the trained target recognition model to recognize a category of the target object to be recognized.
According to the radar-based target recognition method of the present invention, the step of calculating an estimated size value of an object from point cloud data of the object detected by the radar to output radar data with the estimated size value includes:
acquiring point cloud data of a plurality of detection points of the object, wherein each detection point comprises basic radar information, the basic radar information comprising at least one of: distance information, speed information, azimuth information, signal-to-noise ratio information, and a value of radar cross-sectional area RCS of the object;
and clustering the point cloud data of the detection points to obtain the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction of the target cluster corresponding to the object, and taking the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction as the estimated size value of the object.
According to the radar-based target identification method, the step of clustering the point cloud data of the plurality of detection points to obtain the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction of the target cluster corresponding to the object comprises the following steps:
clustering point cloud data of a plurality of detection points of the object corresponding to a plurality of radar frames to obtain a target cluster corresponding to each frame;
merging and converting the target clusters corresponding to each frame to obtain a maximum coordinate difference value in the X coordinate direction and a maximum coordinate difference value in the Y coordinate direction of the merged target clusters;
wherein the merged conversion formula is expressed as:
Figure 158090DEST_PATH_IMAGE001
Figure 202270DEST_PATH_IMAGE002
wherein the content of the first and second substances,
Figure 7415DEST_PATH_IMAGE003
the lateral distance of the point cloud data representing a target cluster of a single frame,
Figure 60821DEST_PATH_IMAGE004
a longitudinal distance of point cloud data representing a target cluster of a single frame,
Figure 635022DEST_PATH_IMAGE005
represents the lateral velocity of the object after tracking,
Figure 850103DEST_PATH_IMAGE006
representing the longitudinal velocity of the object after tracking,
Figure 627697DEST_PATH_IMAGE007
is the processing period of the radar signal.
According to the radar-based target recognition method of the present invention, the step of calculating an estimated size value of an object from point cloud data of the object detected by the radar to output radar data with the estimated size value further includes:
smoothing the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction of the merged target cluster, wherein the smoothing is performed based on the following formula:
$$S_k(t) = \alpha\, S_k(t-1) + (1-\alpha)\, D_k(t)$$

wherein $S_k(t)$ is the maximum coordinate difference value in the X coordinate direction or the Y coordinate direction at time $t$, $S_k(t-1)$ is the corresponding value at time $t-1$, $\alpha$ is a filter coefficient in the range $[0, 1]$, $D_k(t)$ is the maximum coordinate difference value in the X or Y coordinate direction calculated from the multi-frame data at time $t$, and $k$ denotes the track ID.
According to the radar-based target identification method of the present invention, the step of collecting video data that is temporally matched with the radar data and contains an image of the object, and preprocessing the video data to obtain the category of the object, comprises the following steps:
and carrying out time synchronization on the radar outputting the radar data and the camera outputting the video data through a network time protocol.
According to the radar-based target identification method, the step of preprocessing the video data to obtain the object type comprises the following steps:
analyzing the video data by using a preset target detection algorithm to obtain the class of the object.
According to the radar-based target recognition method of the present invention, the step of tagging the category of the object as a tag in radar data corresponding to the object to obtain tagged radar data includes:
determining video data matched with radar data corresponding to the object in terms of time according to the timestamp of each frame of the radar data and the timestamp of each video frame of the video data;
and marking the class of the object obtained according to the matched video data as a label in the radar data corresponding to the object to obtain the radar data with the label.
According to the radar-based target recognition method, the step of training a preset target recognition model by using the labeled radar data as training data comprises the following steps:
and training a preset target recognition model based on a naive Bayes classification algorithm by taking the labeled radar data as training data.
The invention also provides a target identification device based on the radar, which comprises:
the size calculation module is used for calculating an estimated size value of the object according to the point cloud data of the object detected by the radar so as to output radar data with the estimated size value;
the video classification module is used for collecting video data which is matched with the radar data in time and contains the image of the object, and preprocessing the video data to obtain the category of the object;
the tag adding module is used for marking the category of the object as a tag in radar data corresponding to the object to obtain the radar data with the tag;
the model training module is used for training a preset target recognition model by taking the radar data with the labels as training data;
and the target identification module is used for inputting radar data of the target object to be identified into the trained target identification model so as to identify the category of the target object to be identified.
The invention also provides an electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the radar-based target identification method described above.
The invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the radar-based target identification method described above.
According to the radar-based target identification method and device, the electronic device, and the storage medium of the present invention, the coordinate differences of the target are calculated as characteristic attributes for target classification; the labeling of the radar data is then completed automatically using the target categories obtained from the video data, yielding radar data carrying the target category; and the constructed preset target recognition model is trained with this labeled radar data to realize target identification on test samples. The method can identify target categories automatically and accurately without manually labeling the radar data.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a radar-based target identification method provided by the present invention;
FIG. 2 is a schematic flow chart of calculating a coordinate difference of a target according to the present invention;
FIG. 3 is a schematic diagram of a target cluster of a single frame provided by the present invention;
FIG. 4 is a schematic diagram of a target cluster of a multiframe provided by the present invention;
FIG. 5 is a schematic flow chart of the present invention for tagging radar data;
FIG. 6 is a schematic illustration of the matching of radar data and video data provided by the present invention;
FIG. 7 is a schematic flow chart of model training and target recognition provided by the present invention;
FIG. 8 is a schematic structural diagram of a radar-based target recognition apparatus provided in the present invention;
fig. 9 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and in the claims, and in the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein.
The technical terms to which the present invention relates are described below:
the millimeter wave radar is a radar whose working frequency band is in the millimeter wave band, and the principle of ranging is the same as that of a general radar, that is, radio waves (radar waves) are transmitted out, then echoes are received, and position data of a target is measured according to a time difference between receiving and transmitting, and the millimeter wave radar is radio waves whose frequency is in the millimeter wave band.
A millimeter-wave radar has a narrower beam (generally on the order of milliradians), which improves the radar's angular resolution and angle-measurement accuracy and helps resist electronic interference, clutter interference, multipath reflection interference, and the like. Because of its high operating frequency, it can obtain a large signal bandwidth (on the order of gigahertz) and large Doppler frequency shifts, which helps improve the measurement accuracy and resolving power for distance and speed and enables analysis of target characteristics. Furthermore, owing to its small antenna aperture, element size, and device size, a millimeter-wave radar can be applied in fields such as aircraft, satellites, and intelligent traffic systems.
The invention provides a target identification method and device based on radar, electronic equipment and a storage medium, aiming at solving the problems of low efficiency and easy error identification caused by manually labeling radar data in the prior art.
The coordinate differences of the target are calculated as characteristic attributes for target classification; the labeling of the radar data is then completed automatically using the target categories obtained from the video data, yielding radar data carrying the target category; and the constructed preset target recognition model is trained with this labeled radar data to realize target identification on test samples. The method can identify the target (for example, the vehicle type) automatically and accurately without manually labeling the radar data.
The radar-based target recognition method, apparatus, electronic device, and storage medium of the present invention are described below with reference to fig. 1 to 9.
Fig. 1 is a schematic flowchart of a radar-based target identification method provided by the present invention, as shown in fig. 1. The target identification method based on the radar comprises the following steps:
step 101, calculating an estimated size value of the object according to the point cloud data of the object detected by the radar so as to output radar data with the estimated size value.
The radar signal is processed to determine whether a target object of interest exists in the echo, for which a threshold detection method can be used. However, since threshold detection has a certain error probability (i.e., false targets may be detected), constant false alarm rate (CFAR) detection is required. After CFAR detection, DOA (Direction of Arrival) estimation is performed: by processing the received echo signals, DOA yields the distance information and direction information of the target.
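The patent does not specify which CFAR variant is used; purely for illustration, below is a minimal cell-averaging CFAR (CA-CFAR) sketch in Python over a one-dimensional power profile, with window sizes and false-alarm rate chosen arbitrarily:

```python
import numpy as np

def ca_cfar(power, num_train=16, num_guard=4, rate_fa=1e-3):
    """Cell-averaging CFAR over a 1-D power profile.

    Returns a boolean mask of cells whose power exceeds the adaptive
    threshold estimated from the surrounding training cells.
    """
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    # Threshold factor for CA-CFAR at the desired false-alarm rate.
    alpha = num_train * (rate_fa ** (-1.0 / num_train) - 1.0)
    half = num_train // 2 + num_guard
    for i in range(half, n - half):
        # Training cells on both sides of the cell under test,
        # excluding the guard cells immediately around it.
        lead = power[i - half : i - num_guard]
        lag = power[i + num_guard + 1 : i + half + 1]
        noise = (lead.sum() + lag.sum()) / num_train
        detections[i] = power[i] > alpha * noise
    return detections
```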
In step 101, after DOA estimation, the point cloud data of a target object detected by the radar is acquired. An object may produce a plurality of detection points, and each detection point carries basic radar information such as distance, speed, azimuth, signal-to-noise ratio, and radar cross-section (RCS).
The above step 101 is aimed at calculating the maximum coordinate difference of the object based on the basic information of the detection point, because the maximum coordinate difference of the object reflects the size of the object in the x-axis and the y-axis.
And 102, acquiring video data which is matched with the radar data in time and contains the image of the object, and preprocessing the video data to obtain the category of the object.
Optionally, the radar outputting the radar data and the camera outputting the video data may be time synchronized by a network time protocol NTP.
Optionally, video data may be pre-processed using a video processing algorithm (such as a YOLO algorithm) to obtain the class of the object in the video data.
And 103, marking the category of the object as a label in the radar data corresponding to the object to obtain the radar data with the label.
This step is a step of realizing automatic labeling, that is, writing the category of the object into the radar data using the result of analyzing the video data (the category of the object).
Such as for one object (i.e., object 1):
object 1 radarData (radar data) = { x, y, vx, vy, rcs, xsize, ysize };
the video analysis of object 1 is targetType (class of object 1) = car.
Labeling:
object 1 radarData (radar data with tag) = { x, y, vx, vy, rcs, xsize, ysize, car }.
Alternatively, the class of the object may be used for identification of vehicle types, including large cars, small cars, trucks, motorcycles, and the like.
And 104, training a preset target recognition model by using the radar data with the labels as training data.
Step 105, inputting radar data of a target object to be recognized into the trained target recognition model to recognize the category of the target object to be recognized.
The above steps 101 to 105 will be described in detail below.
Fig. 2 is a schematic flow chart of calculating a coordinate difference of a target according to the present invention, as shown in fig. 2. In the step 101, the step of calculating an estimated size value of the object according to the point cloud data of the object detected by the radar to output radar data with the estimated size value includes:
step 201, point cloud data of a plurality of detection points of the object are acquired, wherein each detection point comprises basic radar information.
Wherein the basic radar information comprises at least one of: distance information, speed information, azimuth information, signal-to-noise ratio information, and a value of radar cross-sectional area RCS of the object.
Step 202, clustering the point cloud data of the plurality of detection points to obtain a maximum coordinate difference value in the X coordinate direction and a maximum coordinate difference value in the Y coordinate direction of a target cluster corresponding to the object, and taking the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction as an estimated size value of the object.
Fig. 3 is a schematic diagram of a single-frame target cluster provided by the present invention, as shown in fig. 3. A plurality of detection points of a target (e.g., a car) are grouped together to form a target cluster (i.e., a set of detection points). The target cluster is formed by clustering the point cloud data of a target, and the target cluster information includes the target's XSIZE, YSIZE, and the like.
Here XSIZE is the maximum difference of these detection points in the x coordinate direction (lateral), and YSIZE is the maximum difference in the y coordinate direction (longitudinal). XSIZE and YSIZE reflect the size of the target to some extent. For example, the YSIZE of a small car is generally less than 4.5 m, while the YSIZE of a large vehicle exceeds 6.5 m. Vehicle-type dimensions are defined by national standards and are not detailed here.
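The patent does not name the clustering algorithm; as an illustrative assumption, the sketch below uses DBSCAN from scikit-learn to cluster one frame of detections and compute each cluster's XSIZE and YSIZE (all parameter values are arbitrary):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_sizes(points, eps=1.5, min_samples=2):
    """points: (N, 2) array of (x, y) positions of one frame's detections.

    Returns {cluster_label: (xsize, ysize)}, the maximum coordinate
    differences of each target cluster in the x and y directions.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    sizes = {}
    for label in set(labels):
        if label == -1:                 # DBSCAN noise points, not a cluster
            continue
        cluster = points[labels == label]
        xsize = cluster[:, 0].max() - cluster[:, 0].min()
        ysize = cluster[:, 1].max() - cluster[:, 1].min()
        sizes[label] = (xsize, ysize)
    return sizes
```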
Fig. 4 is a schematic diagram of a multi-frame target cluster provided by the present invention, as shown in fig. 4. When a millimeter-wave radar detects a moving target, the effective reflection area of the target changes because of its fluctuation characteristics, and different reflection points on the target structure may be obtained in each detection period. Therefore, the XSIZE and YSIZE of a single-frame target cluster often fail to reflect the actual size of the target correctly. The invention obtains a more accurate result by accumulating multi-frame data: target data from different periods are converted into the latest period through a motion equation, and the XSIZE and YSIZE of the target cluster information are then recalculated. The XSIZE and YSIZE obtained in this way reflect the structural information of the target well.
Specifically, clustering point cloud data of a plurality of detection points of the object corresponding to a plurality of radar frames to obtain a target cluster corresponding to each frame;
and carrying out merging conversion on the target clusters corresponding to each frame to obtain the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction of the merged target clusters.
Wherein the merged conversion formula is expressed as:
$$x' = x + v_x T$$

$$y' = y + v_y T$$

wherein $x$ represents the lateral distance and $y$ the longitudinal distance of the point cloud data of a single-frame target cluster, $x'$ and $y'$ are the corresponding coordinates converted into the latest period, $v_x$ represents the lateral velocity and $v_y$ the longitudinal velocity of the object after tracking, and $T$ is the processing period of the radar signal.
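For illustration, a sketch of this multi-frame accumulation under the constant-velocity assumption above: each older frame's cluster points are shifted into the latest period by the motion equation before XSIZE and YSIZE are recomputed (function and variable names are hypothetical):

```python
import numpy as np

def merge_frames(frames, vx, vy, period):
    """Accumulate one track's cluster points over several radar frames.

    frames: list of (N_i, 2) point arrays, oldest first.
    vx, vy: tracked lateral / longitudinal velocity of the object.
    period: radar signal processing period T.

    Each older frame is shifted into the latest period by the motion
    equation x' = x + vx*dt, y' = y + vy*dt, and XSIZE / YSIZE are then
    recomputed on the merged cluster.
    """
    latest = len(frames) - 1
    shifted = [pts + np.array([vx, vy]) * (latest - i) * period
               for i, pts in enumerate(frames)]
    merged = np.vstack(shifted)
    xsize = merged[:, 0].max() - merged[:, 0].min()
    ysize = merged[:, 1].max() - merged[:, 1].min()
    return xsize, ysize
```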
Step 203, performing smoothing processing on the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction of the merged target cluster, wherein the smoothing processing is performed based on the following formula:
$$S_k(t) = \alpha\, S_k(t-1) + (1-\alpha)\, D_k(t)$$

wherein $S_k(t)$ is the maximum coordinate difference value in the X coordinate direction or the Y coordinate direction at time $t$, $S_k(t-1)$ is the corresponding value at time $t-1$, $\alpha$ is a filter coefficient in the range $[0, 1]$, $D_k(t)$ is the maximum coordinate difference value in the X or Y coordinate direction calculated from the multi-frame data at time $t$, and $k$ denotes the track ID.
The above-described smoothing process of the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction of the target cluster is intended to reduce jitter of XSIZE, YSIZE.
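A minimal sketch of this first-order recursive filter, keeping one smoothed state per track ID (names hypothetical):

```python
def smooth_size(prev_value, measured_value, alpha=0.8):
    """S_k(t) = alpha * S_k(t-1) + (1 - alpha) * D_k(t), alpha in [0, 1].
    A larger alpha suppresses jitter more strongly but reacts more slowly."""
    return alpha * prev_value + (1.0 - alpha) * measured_value

# Per-track smoothing across frames: `track_sizes` maps track ID k to the
# previous smoothed (xsize, ysize); `measurement` is the multi-frame D_k(t).
track_sizes = {}

def update_track(k, measurement):
    prev = track_sizes.get(k, measurement)  # initialize with first measurement
    smoothed = tuple(smooth_size(p, m) for p, m in zip(prev, measurement))
    track_sizes[k] = smoothed
    return smoothed
```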
Fig. 5 is a schematic flow chart of tagging radar data provided by the present invention, as shown in fig. 5. In the above steps 102 to 103, the step of acquiring video data including an image of the object, which is temporally matched with the radar data, preprocessing the video data to obtain a category of the object, and marking the category of the object as a tag in radar data corresponding to the object to obtain the radar data with the tag includes:
step 501, analyzing the video data by using a preset target detection algorithm to obtain the class of the object.
Optionally, the radar outputting the radar data and the camera outputting the video data are time-synchronized through a network time protocol NTP.
Optionally, the preset target detection algorithm is the YOLO (You Only Look Once) algorithm. YOLO converts the object detection problem into a single regression problem that extracts bounding boxes and class probabilities directly from the image, and is used here to detect the class and position of objects in the video.
In the present invention, the YOLO algorithm marks the category and position of each target (car, truck, etc.) in the picture with a box; the labels truck, car, etc. shown on the boxes in fig. 6 indicate the target categories.
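The patent names only "YOLO", not a specific implementation; as an assumption for illustration, the sketch below uses the ultralytics package with pretrained weights to extract per-object class names and boxes from a video frame:

```python
# Assumed implementation: the `ultralytics` package and the yolov8n.pt
# weights are illustrative choices, not prescribed by the patent.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained COCO weights (hypothetical choice)

def detect_objects(frame):
    """Return [(class_name, (x1, y1, x2, y2)), ...] for one video frame."""
    result = model(frame)[0]
    detections = []
    for box in result.boxes:
        cls_id = int(box.cls[0])                     # class index
        xyxy = tuple(float(v) for v in box.xyxy[0])  # bounding box corners
        detections.append((model.names[cls_id], xyxy))
    return detections
```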
Step 502, determining the video data matched with the radar data corresponding to the object in terms of time according to the timestamp of each frame of the radar data and the timestamp of each video frame of the video data.
Optionally, each frame of data uploaded by the radar contains a timestamp, and a video frame of the video data also contains the timestamp, so that the radar data and the video data can be matched according to the timestamp. The radar data and the video data are considered to match when a timestamp of a data frame of radar and a timestamp of a video frame of video data are the same.
Step 503, marking the category of the object obtained according to the matched video data as a tag in the radar data corresponding to the object, so as to obtain the radar data with the tag.
If the timestamps do not match, the radar data cannot be used as a training sample, and matching continues on the subsequent radar data and video data.
As shown in fig. 6, as a result of the matching, a small box is visible inside each large box, i.e., the position to which the radar data is mapped, and the number indicates the track ID.
Such as for one object (i.e., object 1):
object 1 radarData (radar data) = { x, y, vx, vy, rcs, xsize, ysize };
video analysis of object 1 is targetType (class of object 1) = car;
labeling:
object 1 radarData (radar data with tag) = { x, y, vx, vy, rcs, xsize, ysize, car }.
Thus, after tagging, radar data carrying the target class is obtained.
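A minimal sketch of this timestamp-matching and tagging step, with hypothetical field names; radar frames without a matching video frame are skipped, as described above:

```python
def label_radar_data(radar_frames, video_frames):
    """Join radar frames to video frames on identical timestamps and write
    the video-derived class into the radar record as its label.

    radar_frames: [{'timestamp': t, 'data': {'x': ..., 'y': ..., ...}}, ...]
    video_frames: [{'timestamp': t, 'targetType': 'car'}, ...]
    (field names are assumptions for illustration)
    """
    class_by_time = {v["timestamp"]: v["targetType"] for v in video_frames}
    labeled = []
    for frame in radar_frames:
        cls = class_by_time.get(frame["timestamp"])
        if cls is None:
            continue  # no matching video frame: not usable as a training sample
        labeled.append(dict(frame["data"], label=cls))
    return labeled
```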
FIG. 7 is a schematic flow chart of model training and target recognition provided by the present invention, as shown in FIG. 7. In the above step 104 and step 105, the step of training a preset target recognition model by using the labeled radar data as training data, and inputting the radar data of the target object to be recognized into the trained target recognition model to recognize the category of the target object to be recognized includes:
and 701, training a preset target recognition model based on a naive Bayes classification algorithm by using the labeled radar data as training data.
The basic idea of a naive Bayes classifier is to compute, for the input data, the probability of each class separately and then select the class with the highest probability.
Specifically, the naive Bayes algorithm estimates the frequency of occurrence of each class in the training set and the conditional probability of each feature attribute partition given each class.
Step 702, classifying radar data of a target object to be recognized by using a preset target recognition model based on a naive Bayesian classification algorithm to obtain a mapping relation between the radar data of the target object to be recognized and the class of the target object, so as to recognize the class of the target object to be recognized.
The preset classification algorithm based on naive Bayes is described in detail as follows:
The workflow of the naive Bayes classifier is similar to that of a traditional target recognition model and comprises three stages: a preparation stage, a training stage, and an application stage.
(1) A preparation stage:
characteristic attributes that may be used for object classification are determined, such as distance, velocity, XSIZE, YSIZE, RCS, etc. information in the radar data as mentioned above.
Basic information such as distance, speed, RCS and the like in the radar data can be directly read from the radar data, and the XSIZE and the YSIZE are calculated according to the conversion method.
Then, labeling of the radar data is automatically completed using the target classes obtained from the YOLO video algorithm, yielding the training samples (i.e., radar data with target classes).
(2) A training stage:
the result of this stage is the generation of a classifier (i.e., a pre-set target recognition model) whose main work for the naive bayes algorithm is the conditional probability for each class for the frequency of occurrence and features of each class in the training samples.
(3) An application stage:
and classifying the test sample by using a classifier (namely a preset target recognition model), and checking the performance of the classifier. If the performance of the classifier achieves the expected effect, the classifier can be transplanted to a radar signal processing board, and the target information output by the radar comprises the attribute of the target class.
In the following, the radar-based target recognition apparatus provided by the present invention is described; the radar-based target recognition apparatus described below and the radar-based target recognition method described above may be referred to in correspondence with each other.
Fig. 8 is a schematic structural diagram of a radar-based target recognition apparatus provided in the present invention, as shown in fig. 8. A radar-based target recognition apparatus 800 includes a size calculation module 810, a video classification module 820, a label addition module 830, a model training module 840, and a target recognition module 850. Wherein:
a size calculating module 810, configured to calculate an estimated size value of the object according to the point cloud data of the object detected by the radar, so as to output radar data with the estimated size value.
A video classification module 820, configured to collect video data that includes an image of the object and is temporally matched with the radar data, and pre-process the video data to obtain a category of the object.
And a tag adding module 830, configured to mark the category of the object as a tag in radar data corresponding to the object, so as to obtain radar data with the tag.
And the model training module 840 is used for training a preset target recognition model by using the radar data with the labels as training data.
A target recognition module 850 for inputting radar data of a target object to be recognized to the trained target recognition model to recognize a category of the target object to be recognized.
Optionally, the size calculating module 810 is further configured to:
acquiring point cloud data of a plurality of detection points of the object, wherein each detection point comprises basic radar information, the basic radar information comprising at least one of: distance information, speed information, azimuth information, signal-to-noise ratio information, and a value of radar cross-sectional area RCS of the object;
and clustering the point cloud data of the detection points to obtain the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction of the target cluster corresponding to the object, and taking the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction as the estimated size value of the object.
Optionally, the size calculating module 810 is further configured to:
clustering point cloud data of a plurality of detection points of the object corresponding to a plurality of radar frames to obtain a target cluster corresponding to each frame;
merging and converting the target clusters corresponding to each frame to obtain a maximum coordinate difference value in the X coordinate direction and a maximum coordinate difference value in the Y coordinate direction of the merged target clusters;
wherein the merged conversion formula is expressed as:
$$x' = x + v_x T$$

$$y' = y + v_y T$$

wherein $x$ represents the lateral distance and $y$ the longitudinal distance of the point cloud data of a single-frame target cluster, $x'$ and $y'$ are the corresponding coordinates converted into the latest period, $v_x$ represents the lateral velocity and $v_y$ the longitudinal velocity of the object after tracking, and $T$ is the processing period of the radar signal.
Optionally, the size calculating module 810 is further configured to:
smoothing the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction of the merged target cluster, wherein the smoothing is performed based on the following formula:
$$S_k(t) = \alpha\, S_k(t-1) + (1-\alpha)\, D_k(t)$$

wherein $S_k(t)$ is the maximum coordinate difference value in the X coordinate direction or the Y coordinate direction at time $t$, $S_k(t-1)$ is the corresponding value at time $t-1$, $\alpha$ is a filter coefficient in the range $[0, 1]$, $D_k(t)$ is the maximum coordinate difference value in the X or Y coordinate direction calculated from the multi-frame data at time $t$, and $k$ denotes the track ID.
Optionally, the video classification module 820 is further configured to:
and carrying out time on the radar outputting the radar data and the camera outputting the video data through a network time protocol.
Optionally, the video classification module 820 is further configured to:
analyzing the video data by using a preset target detection algorithm to obtain the class of the object.
Optionally, the tag adding module 830 is further configured to:
determining video data matched with radar data corresponding to the object in terms of time according to the timestamp of each frame of the radar data and the timestamp of each video frame of the video data;
and marking the class of the object obtained according to the matched video data as a label in the radar data corresponding to the object to obtain the radar data with the label.
Optionally, the model training module 840 is further configured to:
and training a preset target recognition model based on a naive Bayes classification algorithm by taking the labeled radar data as training data.
Fig. 9 illustrates a physical structure diagram of an electronic device, and as shown in fig. 9, the electronic device may include: a processor (processor)910, a communication Interface (Communications Interface)920, a memory (memory)930, and a communication bus 940, wherein the processor 910, the communication Interface 920, and the memory 930 communicate with each other via the communication bus 940. Processor 910 may invoke logic instructions in memory 930 to perform the radar-based target recognition method described previously.
Furthermore, the logic instructions in the memory 930 may be implemented in software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the radar-based target recognition methods described above.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A target identification method based on radar is characterized by comprising the following steps:
calculating an estimated size value of an object according to point cloud data of the object detected by the radar so as to output radar data with the estimated size value;
acquiring video data which is matched with the radar data in time and contains the image of the object, and preprocessing the video data to obtain the category of the object;
marking the category of the object as a label in radar data corresponding to the object to obtain radar data with the label;
training a preset target recognition model by using the radar data with the labels as training data;
inputting radar data of a target object to be recognized to the trained target recognition model to recognize a category of the target object to be recognized;
wherein the step of calculating an estimated size value of the object to output radar data with the estimated size value according to the point cloud data of the object detected by the radar includes:
acquiring point cloud data of a plurality of detection points of the object;
clustering the point cloud data of the detection points to obtain the maximum coordinate difference value of the target cluster corresponding to the object in the X coordinate direction and the maximum coordinate difference value of the target cluster corresponding to the object in the Y coordinate direction;
the step of clustering the point cloud data of the plurality of detection points to obtain the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction of the target cluster corresponding to the object comprises the following steps:
clustering point cloud data of a plurality of detection points of the object corresponding to a plurality of radar frames to obtain a target cluster corresponding to each frame;
merging and converting the target clusters corresponding to each frame to obtain a maximum coordinate difference value in the X coordinate direction and a maximum coordinate difference value in the Y coordinate direction of the merged target clusters;
wherein the merged conversion formula is expressed as:
$$x' = x + v_x T$$

$$y' = y + v_y T$$

wherein $x$ represents the lateral distance and $y$ the longitudinal distance of the point cloud data of a single-frame target cluster, $x'$ and $y'$ are the corresponding coordinates converted into the latest period, $v_x$ represents the lateral velocity and $v_y$ the longitudinal velocity of the object after tracking, and $T$ is the processing period of the radar signal.
2. The radar-based target recognition method of claim 1, wherein the step of calculating an estimated size value of the object from the point cloud data of the object detected by the radar to output radar data with the estimated size value further comprises:
taking the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction as estimated size values of the object;
wherein each detection point includes basic radar information including at least one of: distance information, speed information, azimuth information, signal-to-noise ratio information, and a value of radar cross-sectional area RCS of the object.
3. The radar-based target recognition method of claim 2, wherein the step of calculating an estimated size value of the object from the point cloud data of the object detected by the radar to output radar data with the estimated size value further comprises:
smoothing the maximum coordinate difference value in the X coordinate direction and the maximum coordinate difference value in the Y coordinate direction of the merged target cluster, wherein the smoothing is performed based on the following formula:
$$S_k(t) = \alpha\, S_k(t-1) + (1-\alpha)\, D_k(t)$$

wherein $S_k(t)$ is the maximum coordinate difference value in the X coordinate direction or the Y coordinate direction at time $t$, $S_k(t-1)$ is the corresponding value at time $t-1$, $\alpha$ is a filter coefficient in the range $[0, 1]$, $D_k(t)$ is the maximum coordinate difference value in the X or Y coordinate direction calculated from the multi-frame data at time $t$, and $k$ represents the track ID.
4. The radar-based target recognition method of any one of claims 1-3, wherein the step of acquiring video data including an image of the object temporally matched to the radar data and pre-processing the video data to obtain the class of the object comprises:
and carrying out time synchronization on the radar outputting the radar data and the camera outputting the video data through a network time protocol.
5. The radar-based target recognition method of claim 4, wherein the step of preprocessing the video data to obtain the class of the object comprises:
analyzing the video data by using a preset target detection algorithm to obtain the class of the object.
6. The radar-based target recognition method of claim 5, wherein the step of tagging the class of objects as tags in the radar data corresponding to the objects, resulting in tagged radar data comprises:
determining video data matched with radar data corresponding to the object in terms of time according to the timestamp of each frame of the radar data and the timestamp of each video frame of the video data;
and marking the class of the object obtained according to the matched video data as a label in the radar data corresponding to the object to obtain the radar data with the label.
7. The radar-based target recognition method of claim 6, wherein the step of training a preset target recognition model using the tagged radar data as training data comprises:
and training a preset target recognition model based on a naive Bayes classification algorithm by taking the labeled radar data as training data.
8. A radar-based target recognition apparatus, comprising:
the size calculation module is used for calculating an estimated size value of the object according to the point cloud data of the object detected by the radar so as to output radar data with the estimated size value;
the video classification module is used for collecting video data which is matched with the radar data in time and contains the image of the object, and preprocessing the video data to obtain the category of the object;
the tag adding module is used for marking the category of the object as a tag in radar data corresponding to the object to obtain the radar data with the tag;
the model training module is used for training a preset target recognition model by taking the radar data with the labels as training data;
a target identification module for inputting radar data of a target object to be identified to the trained target identification model to identify a category of the target object to be identified;
the size calculation module is further configured to:
acquiring point cloud data of a plurality of detection points of the object;
clustering the point cloud data of the detection points to obtain the maximum coordinate difference value of the target cluster corresponding to the object in the X coordinate direction and the maximum coordinate difference value of the target cluster corresponding to the object in the Y coordinate direction;
the size calculation module is further configured to:
clustering point cloud data of a plurality of detection points of the object corresponding to a plurality of radar frames to obtain a target cluster corresponding to each frame;
merging and converting the target clusters corresponding to each frame to obtain a maximum coordinate difference value in the X coordinate direction and a maximum coordinate difference value in the Y coordinate direction of the merged target clusters;
wherein the merged conversion formula is expressed as:
$$x' = x + v_x T$$

$$y' = y + v_y T$$

wherein $x$ represents the lateral distance and $y$ the longitudinal distance of the point cloud data of a single-frame target cluster, $x'$ and $y'$ are the corresponding coordinates converted into the latest period, $v_x$ represents the lateral velocity and $v_y$ the longitudinal velocity of the object after tracking, and $T$ is the processing period of the radar signal.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the radar-based target recognition method according to any one of claims 1 to 7.
10. A non-transitory computer readable storage medium, having stored thereon a computer program, wherein the computer program, when being executed by a processor, is adapted to carry out the steps of the radar-based target recognition method according to any one of claims 1 to 7.
CN202111090386.6A 2021-09-17 2021-09-17 Target identification method and device based on radar, electronic equipment and storage medium Active CN113537417B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111090386.6A CN113537417B (en) 2021-09-17 2021-09-17 Target identification method and device based on radar, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111090386.6A CN113537417B (en) 2021-09-17 2021-09-17 Target identification method and device based on radar, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113537417A (en) 2021-10-22
CN113537417B (en) 2021-11-30

Family

ID=78092874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111090386.6A Active CN113537417B (en) 2021-09-17 2021-09-17 Target identification method and device based on radar, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113537417B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114199168A (en) * 2021-12-16 2022-03-18 珠海格力电器股份有限公司 Indoor volume detection method, device, equipment and medium
CN116859380B (en) * 2023-09-05 2023-11-21 长沙隼眼软件科技有限公司 Method and device for measuring target track, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107784320B (en) * 2017-09-27 2019-12-06 电子科技大学 Method for identifying radar one-dimensional range profile target based on convolution support vector machine
CN108460791A (en) * 2017-12-29 2018-08-28 百度在线网络技术(北京)有限公司 Method and apparatus for handling point cloud data
CN110361727A (en) * 2019-07-22 2019-10-22 浙江大学 A kind of millimetre-wave radar multi-object tracking method
CN111856448A (en) * 2020-07-02 2020-10-30 山东省科学院海洋仪器仪表研究所 Marine obstacle identification method and system based on binocular vision and radar

Also Published As

Publication number Publication date
CN113537417A (en) 2021-10-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant