CN111862157B - Multi-vehicle target tracking method integrating machine vision and millimeter wave radar

Multi-vehicle target tracking method integrating machine vision and millimeter wave radar

Info

Publication number
CN111862157B
CN111862157B (application CN202010699138.0A)
Authority
CN
China
Prior art keywords
target
millimeter wave
tracking
radar
wave radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010699138.0A
Other languages
Chinese (zh)
Other versions
CN111862157A (en)
Inventor
郑玲 (Zheng Ling)
甘耀东 (Gan Yaodong)
张翔 (Zhang Xiang)
李以农 (Li Yinong)
高锋 (Gao Feng)
詹振飞 (Zhan Zhenfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202010699138.0A priority Critical patent/CN111862157B/en
Publication of CN111862157A publication Critical patent/CN111862157A/en
Application granted granted Critical
Publication of CN111862157B publication Critical patent/CN111862157B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/50Systems of measurement based on relative movement of target
    • G01S13/58Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01S13/585Velocity or trajectory determination systems; Sense-of-movement determination systems processing the video signal in order to evaluate or display the velocity value
    • G01S13/587Velocity or trajectory determination systems; Sense-of-movement determination systems processing the video signal in order to evaluate or display the velocity value using optical means
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/66Radar-tracking systems; Analogous systems
    • G01S13/72Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar
    • G01S13/723Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar by using numerical data
    • G01S13/726Multiple target tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507Summing image-intensity values; Histogram projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The application provides a multi-vehicle target tracking method integrating machine vision and millimeter wave radar. The millimeter wave radar acquires road target information, and vehicle targets are screened with a filtering model based on kinematic parameters; vehicles ahead on the road are detected from visual information, and multiple vehicle targets are tracked based on the detection results; a machine vision and millimeter wave radar fusion model projects the vehicle targets into the image, an association decision strategy associates the visual tracking targets with the vehicle targets, and the position and size of the visual tracking bounding box in the image are corrected based on the distance information detected by the millimeter wave radar. The application solves the technical problem in the prior art that, when multiple vehicles ahead are continuously tracked, an oversized or undersized visual tracking bounding box accumulates errors and causes valid targets to be lost.

Description

Multi-vehicle target tracking method integrating machine vision and millimeter wave radar
Technical Field
The application relates to the technical field of intelligent automobile automatic driving environment sensing, in particular to a multi-vehicle target tracking method integrating machine vision and millimeter wave radar.
Background
With improving levels of intelligence, informatization, and automation, more and more enterprises and institutions are developing intelligent driving systems and advanced driver assistance systems for automobiles. Environmental perception serves as the 'eyes' of an autonomous vehicle, providing road traffic information ahead of the vehicle, and plays a very important role. Tracking, as a key link in environmental perception, is becoming increasingly important.
At present, the fusion of multi-sensor information is a research hotspot in the tracking field. The prior art provides a target object identification method based on the fusion of video images and millimeter wave radar data: a control terminal fuses the dynamic position of a target object acquired by a satellite positioning system, the motion state of the target object obtained from images acquired by image acquisition equipment, and the motion state of the target object acquired by radar equipment, so as to accurately identify and position target objects around the vehicle. However, in actual automatic driving, target tracking is required for multiple vehicles. During continuous tracking, when the target object is recognized in video images, the size of its visual tracking bounding box changes continuously because the relative positions of the autonomous vehicle and the surrounding target vehicles change in real time. While the bounding box size changes continuously, an oversized or undersized box generates accumulated errors, so a valid target may be lost during tracking.
Disclosure of Invention
Aiming at the defects in the prior art, the application provides a multi-vehicle target tracking method integrating machine vision and millimeter wave radar, so as to solve the technical problem in the prior art that, when multiple vehicles ahead are continuously tracked, an oversized or undersized visual tracking bounding box accumulates errors and causes valid targets to be lost.
The technical scheme adopted by the application is a multi-vehicle target tracking method that combines machine vision and millimeter wave radar.
In a first implementation, the method includes the steps of:
acquiring millimeter wave radar detection data, and filtering the data to obtain a vehicle target;
acquiring a road environment image, detecting surrounding environment vehicles in the road environment image by using a deep learning neural network model, and acquiring position information and size information of a visual tracking target;
performing multi-target tracking in the visual image by utilizing an improved particle filtering algorithm according to the position information and the size information;
using a machine vision and millimeter wave radar fusion model, associating the vehicle target with the vision tracking target according to an association judgment strategy, and correcting the position and the size of the vision tracking boundary frame by utilizing millimeter wave radar ranging information;
and updating track information to obtain a tracking result.
In combination with the first implementation, in a second implementation, filtering the data includes the steps of:
preprocessing millimeter wave radar data, and primarily filtering invalid radar targets;
based on a third-order Kalman filter, combining the distance, angle, speed and acceleration parameters of the target, and carrying out consistency test on the radar target;
and aiming at each radar target at the current moment, combining the adjacent k moment data to carry out continuity judgment.
With reference to the second implementation manner, in a third implementation manner, preprocessing the millimeter wave radar data includes the following steps:
returning the characteristic value of the empty target by using the radar, and screening out the empty target;
and setting longitudinal and transverse distance thresholds according to the range of the target area, and screening out radar targets outside the area.
In combination with the first implementation manner, in a fourth implementation manner, the deep learning neural network is a convolutional neural network.
In combination with the first implementation manner, in a fifth implementation manner, the improved particle filtering algorithm adopts a genetic algorithm to improve the resampling step: a single offspring is generated in the crossover operation of the genetic algorithm to replace low-weight particles, and population fitness is calculated using the standard normal distribution in the mutation operation.
With reference to the fifth implementation manner, in a sixth implementation manner, when the genetic algorithm performs individual fitness evaluation, the similarity between the tracking template and the particle window is calculated using the Bhattacharyya coefficient, satisfying the following formula:

ρ(p_i, q) = Σ_u √(p_i(u) · q(u))

where p_i represents the histogram of the i-th particle's characterization window, q represents the template histogram, and ρ is the Bhattacharyya coefficient.
With reference to the fifth implementation manner, in a seventh implementation manner, when two visual tracking bounding boxes overlap, the improved particle filtering algorithm compares their observed values, and the one with the smaller value is judged to be in an occluded state, satisfying the following formula:

O_id = arg min_i ρ(p_i, q)

where O_id is the sequence number of the occluded particle, p_i represents the histogram of the i-th particle's characterization window, and q represents the template histogram.
With reference to the first implementation manner, in an eighth implementation manner, a modeling method of a machine vision and millimeter wave radar fusion model is as follows:
establishing a conversion relation among a millimeter wave radar coordinate system, a camera coordinate system, an image physical coordinate system and an image pixel coordinate system, and accurately projecting radar coordinate points on an image;
sampling two sensors of the millimeter wave radar and the camera by using a downward compatibility principle, and keeping sampling time consistent;
setting an association decision strategy according to the positional relationship between the radar target projection point and the visual tracking bounding box, and realizing the association of the vehicle target in the millimeter wave radar data with the visual tracking target.
With reference to the first or eighth implementation manner, in a ninth implementation manner, setting the association decision strategy specifically includes the following steps:
(1) If no radar projection point exists in a visual tracking bounding box, no radar target is associated and matched with that visual tracking target;
(2) If one and only one radar projection point exists in a visual tracking bounding box, the bounding box is directly associated and matched with that radar projection point;
(3) If a plurality of radar projection points exist in one visual tracking bounding box, the radar target nearest to the center point (x + w/2, y + h/2) is associated with the bounding box; where x is the lateral pixel coordinate of the visual tracking bounding box, y is the longitudinal pixel coordinate, w is the width, and h is the height of the bounding box.
In combination with the ninth implementation manner, in a tenth implementation manner, steps (1), (2), and (3) have a priority relationship: when an earlier judgment is satisfied, the later judgments are no longer performed.
With reference to the eighth implementation manner, in an eleventh implementation manner, the positional relationship between a radar projection point and a visual tracking bounding box satisfies the following formula:

x ≤ u ≤ x + w, y ≤ v ≤ y + h

where x is the horizontal pixel coordinate of the visual tracking bounding box, y is its vertical pixel coordinate, u is the horizontal pixel coordinate of the millimeter wave radar projection point, v is the vertical pixel coordinate of the projection point, w is the width of the bounding box, and h is the height of the bounding box.
In combination with the first implementation manner, in a twelfth implementation manner, the position and size of the visual tracking bounding box are corrected using millimeter wave radar ranging information, and the corrected position and size satisfy the following formulas:

where [x_1, y_1, w_1, h_1] represents the position and size of the tracking target window at the previous moment, with corresponding longitudinal distance D_1; [x_2, y_2, w_2, h_2] represents the position and size of the target generated by visual tracking at the current moment, with corresponding longitudinal distance D_2.
According to the technical scheme, the beneficial technical effects of the application are as follows:
1. In the front multi-vehicle tracking method fusing machine vision and millimeter wave radar, the size of the visual tracking bounding box is corrected using the target kinematic parameters returned by the millimeter wave radar, which solves the problem that a valid target is lost after error accumulation caused by an oversized or undersized bounding box during continuous tracking.
2. Aiming at particle degradation in the traditional particle filter tracking algorithm and sample impoverishment in classical resampling, multi-target tracking is performed in the visual image with an improved particle filter algorithm based on the detected position and size information of the vehicle targets: a genetic algorithm improves the resampling step, a single offspring is generated in the crossover operation to replace low-weight particles, and the standard normal distribution is used to calculate population fitness and complete the mutation operation so as to approximate the real motion law. On the premise of ensuring real-time performance, the problems of scale change and occlusion in multi-vehicle target tracking are solved.
3. The strong feature learning capability of deep learning avoids the drawback of manually selecting features in traditional machine learning; the feature information extracted through convolutional neural network training is richer, its expression capability is stronger, and the obtained results are more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. Like elements or portions are generally identified by like reference numerals throughout the several figures. In the drawings, elements or portions thereof are not necessarily drawn to scale.
Fig. 1 is a flow chart of a method for tracking multiple vehicle targets by combining machine vision with millimeter wave radar.
FIG. 2 is a flow chart of an implementation of the improved particle filter algorithm in vehicle tracking.
Fig. 3 is a diagram showing the conversion relation between a millimeter wave radar coordinate system and a world coordinate system.
Fig. 4 is a diagram showing the conversion relationship between the camera coordinate system and the world coordinate system.
Fig. 5 is a schematic diagram of millimeter wave radar and camera time information.
Detailed Description
Embodiments of the technical scheme of the present application will be described in detail below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present application, and thus are merely examples, and are not intended to limit the scope of the present application.
It is noted that unless otherwise indicated, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs.
Example 1
As shown in fig. 1, the application provides a multi-vehicle target tracking method integrating machine vision and millimeter wave radar, which comprises the following steps:
acquiring millimeter wave radar detection data, and filtering the data to obtain a vehicle target;
acquiring a road environment image, detecting surrounding environment vehicles in the road environment image by using a deep learning neural network model, and acquiring position information and size information of a visual tracking target;
performing multi-target tracking in the visual image by utilizing an improved particle filtering algorithm according to the position information and the size information;
using a machine vision and millimeter wave radar fusion model, associating the vehicle target with the vision tracking target according to an association judgment strategy, and correcting the position and the size of the vision tracking boundary frame by utilizing millimeter wave radar ranging information;
and updating track information to obtain a tracking result.
The working principle of example 1 is explained in detail below.
In this embodiment, the method for tracking multiple vehicle targets by combining machine vision and millimeter wave radar specifically includes the following steps:
1. millimeter wave radar detection data are acquired, and filtering is carried out on the data to obtain a vehicle target
In the present embodiment, the millimeter wave radar outputs 64 channels of information per frame, which contain a large number of empty targets and invalid targets. After the millimeter wave radar detection data are acquired, the detection data must be filtered: invalid information is filtered out according to the motion information of targets ahead, and only target information that conforms to vehicle motion characteristics, namely the vehicle targets, is retained. The specific steps are as follows:
1.1 preprocessing millimeter wave radar data, and primarily filtering invalid radar targets
(1) Screen out empty targets using the radar's characteristic return values for an empty target. In this embodiment, for each frame of millimeter wave radar data, if a channel reports relative speed v = 81.91 km/h, relative distance d = 0, and angle α = 0, the channel data is regarded as empty target information and is filtered out.
(2) Set longitudinal and lateral distance thresholds according to the range of the target area, and screen out targets outside the area. In this embodiment, when the millimeter wave radar detects ahead, the longitudinal distance is d and the lateral distance is x. The targets to be detected and tracked are same-direction vehicles in the ego lane and the two adjacent lanes. A longitudinal distance threshold Y_0 and a lateral distance threshold X_0 are set; when |d| < Y_0 and |x| < X_0, the target remains within the region. The target information is thus obtained through preliminary filtering.
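By way of illustration only, the two preliminary screening rules above can be sketched as follows. The channel field names and the concrete threshold values Y_0 and X_0 are assumptions for the sketch, not values fixed by the application:

```python
# Illustrative sketch of radar preprocessing (step 1.1); the field names
# and threshold values below are assumptions, not the patented settings.
EMPTY_SPEED = 81.91   # characteristic relative speed of an empty channel (km/h)
Y0 = 100.0            # longitudinal distance threshold, metres (assumed)
X0 = 6.0              # lateral distance threshold, metres (assumed)

def is_empty_target(ch):
    """Rule (1): a channel whose returns equal the empty-target
    characteristic values carries no real target."""
    return ch["v"] == EMPTY_SPEED and ch["d"] == 0 and ch["alpha"] == 0

def in_region(ch):
    """Rule (2): keep only targets inside the longitudinal/lateral window."""
    return abs(ch["d"]) < Y0 and abs(ch["x"]) < X0

def preprocess(frame):
    """frame: the 64 channel records of one radar cycle, as dicts."""
    return [ch for ch in frame if not is_empty_target(ch) and in_region(ch)]
```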
1.2, designing an extended Kalman filter based on a third-order Kalman filter and combining radar target distance, angle, speed and acceleration parameters, and carrying out consistency check on the radar target.
1.3 For each radar target at the current moment, perform a continuity judgment by combining the data of the adjacent k moments; when the target passes the consistency test at more than k/2 of those moments, the radar target is judged to be a vehicle target.
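A minimal sketch of this continuity judgment, assuming that the extended Kalman filter of step 1.2 yields one boolean consistency result per target per cycle; the class name and the default k are illustrative:

```python
from collections import deque

class ContinuityChecker:
    """Majority vote over the k most recent cycles (step 1.3): a radar
    target is judged a vehicle target when it passes the consistency
    test in more than k/2 of them."""
    def __init__(self, k=7):
        self.k = k
        self.history = {}                # target id -> deque of booleans

    def update(self, target_id, consistent):
        h = self.history.setdefault(target_id, deque(maxlen=self.k))
        h.append(bool(consistent))
        return sum(h) > self.k / 2       # True -> treat as vehicle target
```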
2. Acquiring a road environment image, detecting a vehicle target in the road environment image by using a deep learning neural network model, and acquiring position information and size information of the vehicle target
Training the deep learning neural network model mainly comprises the following steps:
(1) And acquiring front road vehicle video frame data, selecting front vehicle position information for each picture frame by using Labelimg software, and manufacturing a vehicle training data set.
(2) In this embodiment, the selected neural network is a convolutional neural network: the feature information extracted through convolutional neural network training is richer, the expression capability is stronger, and the obtained results are more accurate. Since the training classes are vehicle and background, the parameters are modified to classes=2, batch=64, subdivisions=16, so that the network suits the self-made vehicle training data set.
(3) The vehicle training data set is input into the neural network for training; when the total loss value no longer decreases, training is complete and the neural network model can be used to recognize vehicle targets ahead.
And detecting the front vehicle in the road environment image by using the trained neural network model, and obtaining the position information and the size information of each vehicle target.
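As a hedged sketch of how such a trained Darknet-style detector could be run to obtain the position and size of each vehicle target: the file names, input resolution, and thresholds are assumptions, and the parsing assumes a YOLO-style output layout rather than the application's exact network:

```python
import cv2
import numpy as np

# Assumed artefacts of the training step above (hypothetical file names).
net = cv2.dnn.readNetFromDarknet("vehicle.cfg", "vehicle.weights")

def detect_vehicles(image, conf_thresh=0.5, nms_thresh=0.4):
    """Return [x, y, w, h] pixel boxes of detected vehicles."""
    h_img, w_img = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    outs = net.forward(net.getUnconnectedOutLayersNames())
    boxes, scores = [], []
    for out in outs:
        for row in out:                  # row = [cx, cy, w, h, obj, classes...]
            score = float(row[4] * row[5:].max())
            if score < conf_thresh:
                continue
            cx, cy, w, h = row[:4] * np.array([w_img, h_img, w_img, h_img])
            boxes.append([int(cx - w / 2), int(cy - h / 2), int(w), int(h)])
            scores.append(score)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    return [boxes[i] for i in np.array(keep).flatten()]
```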
3. Based on the position and size information of the vehicle object, multi-object tracking is performed in the visual image using an improved particle filter algorithm, and the resampling step is improved using a genetic algorithm
In this embodiment, specifically, a single offspring is generated in the crossover operation of the genetic algorithm to replace low-weight particles, and population fitness is calculated using the standard normal distribution in the mutation operation. As shown in fig. 2, the implementation flow of the improved particle filtering algorithm in vehicle tracking is as follows:
3.1 selection Gene coding
The position and size of the vehicle target in the image are selected as the basic state model, together with the pixel offset rates of the target in the longitudinal and transverse directions of the image and the scale transformation factor of the target's visual tracking bounding box. The finally selected gene code is s = [x, y, h, w, vx, vy, sc];
3.2 individual fitness evaluation
The HSV space of the image is selected for the colour histogram feature, and the similarity between the tracking template and the particle window is calculated using the Bhattacharyya coefficient, satisfying the following formula:

ρ(p_i, q) = Σ_u √(p_i(u) · q(u)) (1)

In formula (1), p_i is the histogram of the i-th particle's characterization window, q is the template histogram, and ρ is the Bhattacharyya coefficient; the larger its value, the more similar the two histogram distributions.
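A small sketch of this fitness evaluation, assuming L1-normalised HSV histograms as the feature; the helper names are illustrative:

```python
import cv2
import numpy as np

def hsv_histogram(window, bins=(8, 8, 4)):
    """Colour feature of a particle window: an L1-normalised HSV histogram."""
    hsv = cv2.cvtColor(window, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist, 1.0, 0.0, cv2.NORM_L1).flatten()

def bhattacharyya(p, q):
    """Formula (1): Bhattacharyya coefficient between particle-window
    histogram p and template histogram q; larger means more similar."""
    return float(np.sum(np.sqrt(p * q)))
```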
3.3 selection
The fitness of each particle in the previous frame's particle set is calculated first, and the current effective particle number is then computed from the fitness. The total particle number and the effective sampling particle number are substituted into the dynamic adaptive probability to obtain the crossover probability Pc, where the dynamic adaptive probability function is:

In formula (2), N_eff represents the real-time effective sampling particle number, k is the genetic probability coefficient, N_s is the total number of particles, and N_th is the set threshold.
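The effective particle number is commonly estimated as N_eff = 1 / Σ w_i² over normalised weights; since formula (2) itself is not reproduced in the text, the adaptive crossover probability below is one plausible form and is an assumption:

```python
import numpy as np

def effective_particle_count(weights):
    """Standard effective-sample-size estimate N_eff = 1 / sum(w_i^2)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def crossover_probability(n_eff, n_s, n_th, k=0.6):
    """Assumed dynamic adaptive form: the fewer effective particles
    remain out of the N_s total, the higher the crossover probability;
    no crossover once N_eff exceeds the threshold N_th."""
    return k * (n_s - n_eff) / n_s if n_eff < n_th else 0.0
```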
3.4 Cross
Only one child is generated by intersecting each pair of parent particles, and the following formula is satisfied:
C=α×P1+(1-α)×P2 (3)
In formula (3), P1 and P2 are the gene codes of the parent particles, C is the gene code of the generated offspring particle, and α is a scale factor. After the crossover operation, the generated offspring replace the particles with the lowest individual-fitness ranking in the previous frame's particle set, forming a new particle set containing N particles;
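Formula (3) in code form; drawing the scale factor α uniformly at random per pair is an assumption, since the text does not fix how α is chosen:

```python
import numpy as np

def crossover(p1, p2, alpha=None):
    """Formula (3): each parent pair yields a single child
    C = alpha * P1 + (1 - alpha) * P2."""
    alpha = np.random.uniform() if alpha is None else alpha
    return alpha * np.asarray(p1, float) + (1 - alpha) * np.asarray(p2, float)
```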
3.5 variation
The standard normal distribution is used to mutate the particles so that the probability distribution is closer to the target's motion law, satisfying the following formula:

In formula (4), S_i is the gene code of the i-th particle, r is a random number uniformly distributed over [0, 1], and r_th is the mutation probability.
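Since formula (4) itself is not reproduced in the text, the additive perturbation below is an assumed reading of the mutation step: with probability r_th, the gene code is perturbed with standard-normal noise:

```python
import numpy as np

def mutate(s, r_th):
    """Assumed mutation: if r ~ U[0, 1] falls below r_th, add
    standard-normal noise to the particle's gene code s."""
    s = np.asarray(s, dtype=float)
    if np.random.uniform() < r_th:
        s = s + np.random.standard_normal(s.shape)
    return s
```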
3.6 When two visual tracking bounding boxes overlap, their observed values are compared, and the one with the smaller value is judged to be in an occluded state, satisfying the following formula:

O_id = arg min_i ρ(p_i, q) (5)

In formula (5), O_id is the sequence number of the occluded particle, p_i is the histogram of the i-th particle's characterization window, and q is the template histogram. If the occlusion persists for a certain number of consecutive frames, the target is judged to have left the field of view, and the corresponding histogram template is deleted.
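A sketch of the occlusion judgment, reusing the bhattacharyya helper from the earlier sketch; the track object attributes (hist, template, id) are assumptions:

```python
def occluded_id(track_a, track_b):
    """Formula (5): of two overlapping tracking boxes, the one whose
    observation (similarity to its own template) is smaller is judged
    occluded; returns that track's id."""
    rho_a = bhattacharyya(track_a.hist, track_a.template)
    rho_b = bhattacharyya(track_b.hist, track_b.template)
    return track_a.id if rho_a < rho_b else track_b.id
```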
4. Using a machine vision and millimeter wave radar fusion model, realizing the association of a vehicle target and a vision tracking target in millimeter wave radar data according to an association judgment strategy, and correcting the position and the size of a vision tracking boundary frame by utilizing millimeter wave radar ranging information
4.1 The modeling method of the machine vision and millimeter wave radar fusion model is as follows. As shown in fig. 3, the conversion relation between the millimeter wave radar coordinate system and the world coordinate system is established; in fig. 3, O_w is the origin of the world coordinate system, X_w is the transverse coordinate axis, Y_w is the longitudinal coordinate axis, r_1 is the radar's medium-range scanning range, and r_2 is its long-range scanning range. As shown in fig. 4, the conversion relation between the camera coordinate system and the world coordinate system is established. In this way, the conversion relations among the millimeter wave radar coordinate system, the camera coordinate system, the image physical coordinate system, and the image pixel coordinate system are established, and radar coordinate points are accurately projected onto the image. Fig. 5 shows the time-information alignment of the ESR millimeter wave radar and the camera used in the application: the two sensors are sampled using the downward compatibility principle so that their sampling times remain consistent.
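A hedged sketch of the coordinate chain in section 4.1 (radar polar measurement to world frame to camera frame to pixel frame); the calibration matrices below are example values, not the application's calibration:

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],     # camera intrinsics (example values)
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                            # world -> camera rotation (example)
t = np.array([0.0, 1.2, 0.0])            # world -> camera translation (example)

def radar_to_pixel(d, alpha_rad):
    """Project a radar target (range d, azimuth alpha, in radians)
    through the world and camera frames onto pixel coordinates (u, v)."""
    p_world = np.array([d * np.sin(alpha_rad), 0.0, d * np.cos(alpha_rad)])
    p_cam = R @ p_world + t
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```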
And 4.2, designing an association judgment strategy according to the position relation between the radar target projection point and the visual tracking boundary box, and realizing association of the vehicle target and the visual tracking target in the millimeter wave radar data.
The setting of the association decision strategy specifically comprises the following steps:
(1) If no radar projection point exists in a visual tracking bounding box, no radar target is associated and matched with that visual tracking target;
(2) If one and only one radar projection point exists in a visual tracking bounding box, the bounding box is directly associated and matched with that radar projection point;
(3) If a plurality of radar projection points exist in one visual tracking bounding box, the radar target nearest to the center point (x + w/2, y + h/2) is associated with the bounding box; where x is the lateral pixel coordinate of the visual tracking bounding box, y is the longitudinal pixel coordinate, w is the width, and h is the height of the bounding box.
Steps (1), (2), and (3) have a priority relationship; when an earlier judgment is satisfied, the later judgments are no longer performed, which improves the efficiency of the association judgment.
In this embodiment, the association decision strategy means that the positional relationship between the radar projection point and the visual tracking bounding box satisfies the following formula:

x ≤ u ≤ x + w, y ≤ v ≤ y + h (6)

In formula (6), x is the horizontal pixel coordinate of the visual tracking bounding box, y is its vertical pixel coordinate, u is the horizontal pixel coordinate of the millimeter wave radar projection point, v is the vertical pixel coordinate of the projection point, w is the width of the bounding box, and h is the height of the bounding box.
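The prioritised decision rules (1)-(3) together with the in-box test of formula (6) can be sketched as follows; the data layout of the projection list is an assumption:

```python
def point_in_box(u, v, box):
    """Formula (6): the radar projection point (u, v) lies inside the
    visual tracking bounding box [x, y, w, h]."""
    x, y, w, h = box
    return x <= u <= x + w and y <= v <= y + h

def associate(box, projections):
    """Rules (1)-(3): projections is a list of (target_id, u, v);
    returns the matched radar target id, or None."""
    inside = [p for p in projections if point_in_box(p[1], p[2], box)]
    if not inside:                       # rule (1): no point, no match
        return None
    if len(inside) == 1:                 # rule (2): unique point matches
        return inside[0][0]
    x, y, w, h = box                     # rule (3): nearest to the centre
    cx, cy = x + w / 2, y + h / 2
    return min(inside, key=lambda p: (p[1] - cx) ** 2 + (p[2] - cy) ** 2)[0]
```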
4.3 correcting the position and the size of the visual tracking boundary frame by utilizing millimeter wave radar ranging information
The corrected position and size can be represented by coordinates [x', y', w', h'], calculated as follows:

In formula (7), [x_1, y_1, w_1, h_1] represents the position and size of the tracking target window at the previous moment, with corresponding longitudinal distance D_1; [x_2, y_2, w_2, h_2] represents the position and size of the target generated by visual tracking at the current moment, with corresponding longitudinal distance D_2.
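Since formula (7) itself is not reproduced in the text, the sketch below encodes one plausible reading: the apparent window size scales inversely with the radar-measured longitudinal distance, and the corrected window is re-centred on the current visual result. This scaling rule is an assumption:

```python
def correct_box(prev_box, cur_box, d1, d2):
    """Assumed correction: rescale the previous window [x1, y1, w1, h1]
    (at range D1) to the current range D2, keeping the centre of the
    current visually tracked window [x2, y2, w2, h2]."""
    x1, y1, w1, h1 = prev_box
    x2, y2, w2, h2 = cur_box
    s = d1 / d2                          # size ~ 1 / longitudinal distance
    w_c, h_c = w1 * s, h1 * s
    x_c = x2 + (w2 - w_c) / 2            # re-centre on the visual result
    y_c = y2 + (h2 - h_c) / 2
    return [x_c, y_c, w_c, h_c]
```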
5. Repeating the steps, and updating the track information to obtain a tracking result.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application and are intended to fall within the scope of the appended claims and description.

Claims (5)

1. The multi-vehicle target tracking method integrating machine vision and millimeter wave radar is characterized by comprising the following steps of:
acquiring millimeter wave radar detection data, preprocessing the millimeter wave radar data, and preliminarily filtering out invalid radar targets; based on a third-order Kalman filter, performing a consistency test on each radar target by combining the target's distance, angle, speed, and acceleration parameters; for each radar target at the current moment, performing a continuity judgment by combining the data of the adjacent k moments; when the target passes the consistency test at more than k/2 of those moments, judging the radar target to be a vehicle target;
acquiring a road environment image, detecting surrounding environment vehicles in the road environment image by using a deep learning neural network model, and acquiring position information and size information of a visual tracking target;
according to the position information and the size information, performing multi-target tracking in the visual image using an improved particle filtering algorithm, wherein the improved particle filtering algorithm adopts a genetic algorithm to improve the resampling step: a single offspring is generated in the crossover operation of the genetic algorithm to replace low-weight particles, and population fitness is calculated using the standard normal distribution in the mutation operation; performing multi-target tracking includes: selecting the position information and size information of the vehicle target in the image as a basic state model and selecting the gene code; performing individual fitness evaluation through a colour histogram; first calculating the fitness of each particle in the previous frame's particle set, then calculating the current effective particle number from the fitness, and substituting the total particle number and the effective sampling particle number into the dynamic adaptive probability to obtain the crossover probability; generating only one child from each pair of crossed parent particles; mutating the particles with the standard normal distribution so that their probability distribution is closer to the target's motion law; and comparing the observed values of two visual tracking bounding boxes when they overlap, the one with the smaller value being judged to be in an occluded state;
the improved particle filtering algorithm compares the observed values of two visual tracking bounding boxes when they overlap, and the one with the smaller value is judged to be in an occluded state, satisfying the following formula:

O_id = arg min_i ρ(p_i, q)

where O_id is the sequence number of the occluded particle, p_i represents the histogram of the i-th particle's characterization window, and q represents the template histogram;
when the genetic algorithm evaluates individual fitness, the similarity between the tracking template and the particle window is calculated using the Bhattacharyya coefficient, satisfying the following formula:

ρ(p_i, q) = Σ_u √(p_i(u) · q(u))

where p_i represents the histogram of the i-th particle's characterization window, q represents the template histogram, and ρ is the Bhattacharyya coefficient;
using a machine vision and millimeter wave radar fusion model, associating the vehicle target with the visual tracking target according to an association decision strategy, the association decision strategy comprising: S1, if no radar projection point exists in a visual tracking bounding box, no radar target is associated and matched with the visual tracking target; S2, if one and only one radar projection point exists in a visual tracking bounding box, the bounding box is directly associated and matched with that radar projection point; S3, if a plurality of radar projection points exist in one visual tracking bounding box, the radar target nearest to the center point (x + w/2, y + h/2) is associated with the bounding box; where x is the lateral pixel coordinate of the visual tracking bounding box, y is the longitudinal pixel coordinate, w is the width, and h is the height of the bounding box; steps S1, S2, and S3 have a priority relationship, and when an earlier judgment is satisfied, the later judgments are no longer performed;
correcting the position and size of the visual tracking bounding box using millimeter wave radar ranging information, the corrected position and size being representable by coordinates [x', y', w', h'] and calculated as follows:

where [x_1, y_1, w_1, h_1] represents the position and size of the tracking target window at the previous moment, with corresponding longitudinal distance D_1; [x_2, y_2, w_2, h_2] represents the position and size of the target generated by visual tracking at the current moment, with corresponding longitudinal distance D_2;
And updating track information to obtain a tracking result.
2. The method for tracking a multi-vehicle object by combining machine vision with millimeter wave radar according to claim 1, wherein preprocessing the millimeter wave radar data comprises the steps of:
returning the characteristic value of the empty target by using the radar, and screening out the empty target;
and setting longitudinal and transverse distance thresholds according to the range of the target area, and screening out radar targets outside the area.
3. The method for tracking multiple vehicle targets by combining machine vision with millimeter wave radar according to claim 1, wherein the deep learning neural network is a convolutional neural network.
4. The method for tracking multiple vehicle targets by combining machine vision and millimeter wave radar according to claim 1, wherein the modeling method of the machine vision and millimeter wave radar combined model is as follows:
establishing a conversion relation among a millimeter wave radar coordinate system, a camera coordinate system, an image physical coordinate system and an image pixel coordinate system, and accurately projecting radar coordinate points on an image;
sampling two sensors of the millimeter wave radar and the camera by using a downward compatibility principle, and keeping sampling time consistent;
and setting an association judgment strategy according to the position relation between the radar target projection point and the visual tracking boundary box, so as to realize the association of the vehicle target and the visual tracking target in the millimeter wave radar data.
5. The method for tracking multiple vehicle targets by combining machine vision and millimeter wave radar according to claim 1, wherein the positional relationship between the radar projection point and the visual tracking bounding box satisfies the following formula:

x ≤ u ≤ x + w, y ≤ v ≤ y + h

where x is the horizontal pixel coordinate of the visual tracking bounding box, y is its vertical pixel coordinate, u is the horizontal pixel coordinate of the millimeter wave radar projection point, v is the vertical pixel coordinate of the projection point, w is the width of the bounding box, and h is the height of the bounding box.
CN202010699138.0A 2020-07-20 2020-07-20 Multi-vehicle target tracking method integrating machine vision and millimeter wave radar Active CN111862157B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010699138.0A CN111862157B (en) 2020-07-20 2020-07-20 Multi-vehicle target tracking method integrating machine vision and millimeter wave radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010699138.0A CN111862157B (en) 2020-07-20 2020-07-20 Multi-vehicle target tracking method integrating machine vision and millimeter wave radar

Publications (2)

Publication Number Publication Date
CN111862157A CN111862157A (en) 2020-10-30
CN111862157B true CN111862157B (en) 2023-10-10

Family

ID=73001093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010699138.0A Active CN111862157B (en) 2020-07-20 2020-07-20 Multi-vehicle target tracking method integrating machine vision and millimeter wave radar

Country Status (1)

Country Link
CN (1) CN111862157B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112363167A (en) * 2020-11-02 2021-02-12 重庆邮电大学 Extended target tracking method based on fusion of millimeter wave radar and monocular camera
CN112505684B (en) * 2020-11-17 2023-12-01 东南大学 Multi-target tracking method for radar vision fusion under side view angle of severe environment road
CN112550287B (en) * 2020-12-16 2022-08-26 重庆大学 Driving risk assessment method for structured road
CN112767475B (en) * 2020-12-30 2022-10-18 重庆邮电大学 Intelligent roadside sensing system based on C-V2X, radar and vision
WO2022141913A1 (en) * 2021-01-01 2022-07-07 杜豫川 On-board positioning device-based roadside millimeter-wave radar calibration method
GB2621048A (en) * 2021-03-01 2024-01-31 Du Yuchuan Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN112950678A (en) * 2021-03-25 2021-06-11 上海智能新能源汽车科创功能平台有限公司 Beyond-the-horizon fusion sensing system based on vehicle-road cooperation
CN113253255A (en) * 2021-05-11 2021-08-13 浙江大学 Multi-point multi-sensor target monitoring system and method
CN113343849A (en) * 2021-06-07 2021-09-03 西安恒盛安信智能技术有限公司 Fusion sensing equipment based on radar and video
CN113848545B (en) * 2021-09-01 2023-04-14 电子科技大学 Fusion target detection and tracking method based on vision and millimeter wave radar
CN113743385A (en) * 2021-11-05 2021-12-03 陕西欧卡电子智能科技有限公司 Unmanned ship water surface target detection method and device and unmanned ship
CN116125488A (en) * 2021-11-12 2023-05-16 北京万集科技股份有限公司 Target tracking method, signal fusion method, device, terminal and storage medium
CN114266859B (en) * 2021-12-02 2022-09-06 国汽智控(北京)科技有限公司 Data processing method, device, equipment and storage medium
CN113888602B (en) * 2021-12-03 2022-04-05 深圳佑驾创新科技有限公司 Method and device for associating radar vehicle target with visual vehicle target
CN114152942B (en) * 2021-12-08 2022-08-05 北京理工大学 Millimeter wave radar and vision second-order fusion multi-classification target detection method
CN114200442B (en) * 2021-12-10 2024-04-05 合肥工业大学 Road target detection and association method based on millimeter wave radar and vision
CN115542312B (en) * 2022-11-30 2023-03-21 苏州挚途科技有限公司 Multi-sensor association method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508246A (en) * 2011-10-13 2012-06-20 吉林大学 Method for detecting and tracking obstacles in front of vehicle
CN102521612A (en) * 2011-12-16 2012-06-27 东华大学 Multiple video object active tracking method based cooperative correlation particle filtering
WO2016205951A1 (en) * 2015-06-25 2016-12-29 Appropolis Inc. A system and a method for tracking mobile objects using cameras and tag devices
CN108320300A (en) * 2018-01-02 2018-07-24 重庆信科设计有限公司 A kind of space-time context visual tracking method of fusion particle filter
CN109376493A (en) * 2018-12-17 2019-02-22 武汉理工大学 A kind of radial base neural net car speed tracking of particle group optimizing
CN109459750A (en) * 2018-10-19 2019-03-12 吉林大学 A kind of more wireless vehicle trackings in front that millimetre-wave radar is merged with deep learning vision
CN110208793A (en) * 2019-04-26 2019-09-06 纵目科技(上海)股份有限公司 DAS (Driver Assistant System), method, terminal and medium based on millimetre-wave radar

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8989442B2 (en) * 2013-04-12 2015-03-24 Toyota Motor Engineering & Manufacturing North America, Inc. Robust feature fusion for multi-view object tracking
US10852419B2 (en) * 2017-10-20 2020-12-01 Texas Instruments Incorporated System and method for camera radar fusion
US11630197B2 (en) * 2019-01-04 2023-04-18 Qualcomm Incorporated Determining a motion state of a target object

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508246A (en) * 2011-10-13 2012-06-20 吉林大学 Method for detecting and tracking obstacles in front of vehicle
CN102521612A (en) * 2011-12-16 2012-06-27 东华大学 Multiple video object active tracking method based cooperative correlation particle filtering
WO2016205951A1 (en) * 2015-06-25 2016-12-29 Appropolis Inc. A system and a method for tracking mobile objects using cameras and tag devices
CN108320300A (en) * 2018-01-02 2018-07-24 重庆信科设计有限公司 A kind of space-time context visual tracking method of fusion particle filter
CN109459750A (en) * 2018-10-19 2019-03-12 吉林大学 A kind of more wireless vehicle trackings in front that millimetre-wave radar is merged with deep learning vision
CN109376493A (en) * 2018-12-17 2019-02-22 武汉理工大学 A kind of radial base neural net car speed tracking of particle group optimizing
CN110208793A (en) * 2019-04-26 2019-09-06 纵目科技(上海)股份有限公司 DAS (Driver Assistant System), method, terminal and medium based on millimetre-wave radar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于多特征融合的粒子滤波目标跟踪算法 (Particle filter target tracking algorithm based on multi-feature fusion); Liu Shirong; Zhu Weitao; Yang Fan; Zhong Chaoliang; 信息与控制 (Information and Control), No. 06; full text *

Also Published As

Publication number Publication date
CN111862157A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN111862157B (en) Multi-vehicle target tracking method integrating machine vision and millimeter wave radar
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN108960183B (en) Curve target identification system and method based on multi-sensor fusion
CN109447018B (en) Road environment visual perception method based on improved Faster R-CNN
CN111611905B (en) Visible light and infrared fused target identification method
Nieto et al. Road environment modeling using robust perspective analysis and recursive Bayesian segmentation
CN106599792B (en) Method for detecting hand driving violation behavior
CN112750150B (en) Vehicle flow statistical method based on vehicle detection and multi-target tracking
CN109753949B (en) Multi-window traffic sign detection method based on deep learning
CN103903019A (en) Automatic generating method for multi-lane vehicle track space-time diagram
CN110379168A (en) A kind of vehicular traffic information acquisition method based on Mask R-CNN
CN115995063A (en) Work vehicle detection and tracking method and system
CN113052873B (en) Single-target tracking method for on-line self-supervision learning scene adaptation
CN103488993A (en) Crowd abnormal behavior identification method based on FAST
CN114970321A (en) Scene flow digital twinning method and system based on dynamic trajectory flow
CN111738336A (en) Image detection method based on multi-scale feature fusion
CN111259796A (en) Lane line detection method based on image geometric features
CN111681259A (en) Vehicle tracking model establishing method based on Anchor-free mechanism detection network
CN107808524A (en) A kind of intersection vehicle checking method based on unmanned plane
CN113593035A (en) Motion control decision generation method and device, electronic equipment and storage medium
CN113516853B (en) Multi-lane traffic flow detection method for complex monitoring scene
CN112801021B (en) Method and system for detecting lane line based on multi-level semantic information
CN112052768A (en) Urban illegal parking detection method and device based on unmanned aerial vehicle and storage medium
CN111881748A (en) Lane line visual identification method and system based on VBAI platform modeling
CN112288765A (en) Image processing method for vehicle-mounted infrared pedestrian detection and tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant