CN110068818A - Working method for detecting vehicles and pedestrians at a traffic intersection with radar and an image capture device - Google Patents

Working method for detecting vehicles and pedestrians at a traffic intersection with radar and an image capture device Download PDF

Info

Publication number
CN110068818A
CN110068818A (application CN201910367434.8A); also published as CN 110068818 A
Authority
CN
China
Prior art keywords
measured target
radar
target
image
high-definition camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910367434.8A
Other languages
Chinese (zh)
Inventor
李晓晖 (Li Xiaohui)
陈涛 (Chen Tao)
张强 (Zhang Qiang)
夏芹 (Xia Qin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Automotive Engineering Research Institute Co Ltd
Original Assignee
China Automotive Engineering Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Automotive Engineering Research Institute Co Ltd filed Critical China Automotive Engineering Research Institute Co Ltd
Priority to CN201910367434.8A priority Critical patent/CN110068818A/en
Publication of CN110068818A publication Critical patent/CN110068818A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867 Combination of radar systems with cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/87 Combinations of radar systems, e.g. primary radar and secondary radar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/91 Radar or analogous systems specially adapted for specific applications for traffic control
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/91 Radar or analogous systems specially adapted for specific applications for traffic control
    • G01S13/92 Radar or analogous systems specially adapted for specific applications for traffic control for velocity measurement
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/414 Discriminating targets with respect to background clutter
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/415 Identification of targets based on measurements of movement associated with the target
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/052 Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The present invention proposes a working method for detecting vehicles and pedestrians at a traffic intersection with radar and an image capture device, comprising: S1, setting up a first intersection detection system in which the signal output of the first millimeter-wave radar is connected to the radar signal input of a first single-chip microcontroller and the signal output of the first high-definition camera is connected to the camera signal input of a first embedded GPU; S2, having a database server collect the vehicle and pedestrian data of the intersections, gather the data acquired by all millimeter-wave radars, output the type of each detected target from the millimeter-wave radar, and judge that type; S3, performing target recognition on the consecutive image frames acquired by the high-definition camera with a trained deep neural network and, combined with the calibration parameters of the camera, computing the position and speed parameters of each detected target; tracking and filtering the state of moving targets with a Kalman-filter-based method; and managing the target life cycle according to the Kalman filter estimates.

Description

Working method for detecting vehicles and pedestrians at a traffic intersection with radar and an image capture device
Technical field
The present invention relates to the field of computer image recognition, and more particularly to a working method for detecting vehicles and pedestrians at a traffic intersection with radar and an image capture device.
Background technique
At present, the roadside mainly uses 24 GHz radar (commonly called a "microwave detector") or a vehicle image detector (commonly called a "checkpoint camera") to detect objects at a traffic intersection. However, because of their hardware and software bottlenecks, both the 24 GHz radar and the vehicle image detector offer relatively low ranging and speed accuracy; moreover, the target recognition rate and environmental adaptability of any single sensor are also insufficient. All in all, existing detection techniques are not sufficient to support the vehicle-road cooperative perception that future autonomous vehicles will require.
Summary of the invention
The present invention aims to solve at least the technical problems existing in the prior art, and in particular innovatively proposes a working method for detecting vehicles and pedestrians at a traffic intersection with radar and an image capture device.
To achieve the above purpose, the present invention provides a working method for detecting vehicles and pedestrians at a traffic intersection with radar and an image capture device, comprising:
S1: Set up a first intersection detection system, in which the signal output of the first millimeter-wave radar is connected to the radar signal input of a first single-chip microcontroller, the signal output of the first high-definition camera is connected to the camera signal input of a first embedded GPU, the signal output of the first microcontroller is connected to the radar signal input of a first switch, and the signal output of the first embedded GPU is connected to the camera signal input of the first switch. Set up a second intersection detection system wired in the same way through a second millimeter-wave radar, second microcontroller, second high-definition camera, second embedded GPU and second switch, and likewise an N-th intersection detection system through an N-th millimeter-wave radar, N-th microcontroller, N-th high-definition camera, N-th embedded GPU and N-th switch. The signal outputs of the first, second and N-th switches are connected to the first, second and N-th signal inputs of a master switch, and the signal output of the master switch is connected to the signal input of a database server.
S2: The database server collects the vehicle and pedestrian data of the intersections and gathers the data acquired by all millimeter-wave radars. The millimeter-wave radar outputs the type of each detected target; this type is judged and confirmed, and the width and length of the target, together with the probability that it appears at the corresponding intersection, are scanned according to that type. The position of the target relative to the road-surface origin is computed by fusing the high-definition camera and the millimeter-wave radar, the relative speed is computed from the travel time of the target in the road-surface coordinate system, and the high-definition camera outputs the image information of the monitored intersection.
S3: Perform target recognition on the consecutive image frames acquired by the high-definition camera with a trained deep neural network and, combined with the calibration parameters of the camera, compute the position and speed parameters of each detected target. Track and filter the state of moving targets with a Kalman-filter-based method, and manage the target life cycle according to the Kalman filter estimates.
Preferably, S2 includes:
S2-1: Screen the target information output by the millimeter-wave radar in real time according to the actual motion of the detected targets.
S2-2: The radar signal emitted by the millimeter-wave radar is accompanied by false targets, typically the trees, fences and utility poles picked up by the millimeter-wave radar and the high-definition camera. Using the radar data and image data acquired in real time, the fixed tree, fence and utility-pole returns are screened out and rejected.
S2-3: Based on the fixed tree, fence and utility-pole returns detected by the millimeter-wave radar, perform a first round of screening on the detected targets according to their width, length, position and confidence; then track and filter the continuously detected targets with a Kalman filtering algorithm; and manage the target life cycle according to the Kalman filter estimates.
S2-4: After obtaining the corresponding detection results of the first millimeter-wave radar and first high-definition camera at the first intersection and of the second millimeter-wave radar and second high-definition camera at the second intersection, for detected targets both in the mutually associated state and in the mutually independent state, fuse the two sets of unassociated target information with an Elman neural network, and then reject any detected target left in the unmatched state.
Preferably, S3 includes:
For detected targets that are associated with each other, detection is carried out as follows:
S3-1: Feed each frame acquired by each high-definition camera into the deep-learning SSD model on its corresponding embedded GPU; the core of the model is a modified, trained VGG16 detection network.
S3-2: Extract the feature maps of convolutional layers Conv4_3, Conv7, Conv8_2, Conv9_2, Conv10_2 and Conv11_2 of the detection network; at each feature point of each of these feature maps, construct bounding boxes of 6 different scales; then detect and classify the bounding boxes constructed at the feature points, producing multiple candidate bounding boxes.
S3-3: Combine the bounding boxes generated from the different feature maps, then use non-maximum suppression (NMS) to suppress bounding boxes that partly overlap or are incorrectly matched to a detected target, obtaining the final vehicle and pedestrian detection result.
Preferably, S3-1 includes:
S3-A: Convert the fully connected layers FC6 and FC7 of the VGG16 detection network into convolutional layers Conv6 and Conv7.
S3-B: Remove the overfitting-prevention Dropout layers and the fully connected layer FC8 of the VGG16 detection network.
S3-C: Use dilated (atrous) convolution.
S3-D: Change the kernel and stride of the pooling layer Pool5 of VGG16 from 2*2 with stride 2 (2*2-S2) to 3*3 with stride 1 (3*3-S1), where S2 denotes a stride of 2 and S1 a stride of 1.
Preferably, S2-3 further includes:
When a frame of image and a frame of radar message are captured simultaneously, the deep-learning SSD neural network model outlines the positions of all targets present in the image and gives the class of each detected target, yielding the pixel coordinates of each target in the image. Using the calibration parameters of the high-definition camera, the pixel coordinates of each target are converted to ground-plane coordinates, and the speed of each target is computed from its change of position between consecutive images. With a Kalman filter, an optimal estimate of each target's state parameters is made from the prediction based on the previous frame and the detection in the current frame. For a target whose prediction and detection differ by more than a threshold, the current detection is considered unreliable: the prediction is output directly as the result, and the target's life cycle is decremented by 1.
Preferably, S2-3 further includes:
From the multiple target reports provided by the millimeter-wave radar, first reject targets whose width, length or position clearly falls outside the physical parameter range of real targets, and targets whose confidence is below a set threshold. Then use a Kalman filter to predict each target's parameters from the previous radar message and compare the prediction with the detection in the current message. If the difference is below a threshold, the current radar message parameters are output directly as the result; if it exceeds the threshold, the current radar detection is considered unreliable, an optimal estimate is formed by combining prediction and detection, and the target's life cycle is decremented by 1.
Preferably, S2-3 further includes:
The target-level state information obtained by all millimeter-wave radars and high-definition cameras is returned to the database server over a wired or wireless network. First, targets whose position and speed parameters are close across all radars and cameras are associated and matched. For target classification, the image detection result is taken as the final output; meanwhile, the target state parameters obtained by image detection and radar detection are fed into the input layer of an Elman neural network, and the result at the output layer is taken as the target's final state parameters.
In conclusion by adopting the above-described technical solution, the beneficial effects of the present invention are:
The present invention provides traffic intersection vehicle and pedestrian detection method, using 77GHz millimetre-wave radar and industrial grade high definition Camera greatly improves existing detection scheme for the detection accuracy of target category, distance and speed to the scheme of fusion, The realization of intelligent automobile bus or train route awareness technology can effectively be supported.
Additional aspect and advantage of the invention will be set forth in part in the description, and will partially become from the following description Obviously, or practice through the invention is recognized.
Description of the drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is the image detection workflow diagram of the present invention;
Fig. 2 is the workflow diagram of the present invention;
Fig. 3 is the schematic diagram of the system of the present invention;
Fig. 4 is the overall workflow diagram of the present invention.
Specific embodiment
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numbers throughout denote identical or similar elements, or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the invention, and are not to be construed as limiting it.
Vehicles and pedestrians at the traffic intersection are detected, and intersection blind-zone information is provided for autonomous vehicles, guaranteeing the safe passage of the autonomous vehicles.
As shown in Fig. 1, this patent discloses a traffic-intersection vehicle and pedestrian detection method that monitors intersection traffic with a 77 GHz millimeter-wave radar and an industrial-grade high-definition camera. The millimeter-wave radar outputs each detected target's type, width, length, existence probability, position relative to the road-surface origin and relative speed, while the camera outputs the image information of the monitored intersection.
S1: Set up a first intersection detection system, in which the signal output of the first millimeter-wave radar is connected to the radar signal input of a first single-chip microcontroller, the signal output of the first high-definition camera is connected to the camera signal input of a first embedded GPU, the signal output of the first microcontroller is connected to the radar signal input of a first switch, and the signal output of the first embedded GPU is connected to the camera signal input of the first switch. Set up a second intersection detection system wired in the same way through a second millimeter-wave radar, second microcontroller, second high-definition camera, second embedded GPU and second switch, and likewise an N-th intersection detection system through an N-th millimeter-wave radar, N-th microcontroller, N-th high-definition camera, N-th embedded GPU and N-th switch. The signal outputs of the first, second and N-th switches are connected to the first, second and N-th signal inputs of a master switch, and the signal output of the master switch is connected to the signal input of a database server. With N intersection detection systems in place, the intersection vehicle and pedestrian data are aggregated, stored by the database server, and used for deep learning.
S2: The database server collects the vehicle and pedestrian data of the intersections and gathers the data acquired by all millimeter-wave radars. The millimeter-wave radar outputs the type of each detected target; this type is judged and confirmed, and the width and length of the target, together with the probability that it appears at the corresponding intersection, are scanned according to that type. The position of the target relative to the road-surface origin is computed by fusing the high-definition camera and the millimeter-wave radar, the relative speed is computed from the travel time of the target in the road-surface coordinate system, and the high-definition camera outputs the image information of the monitored intersection.
S2-1: Screen the target information output by the millimeter-wave radar in real time according to the actual motion of the detected targets.
S2-2: The radar signal emitted by the millimeter-wave radar is accompanied by false targets, typically the trees, fences and utility poles picked up by the millimeter-wave radar and the high-definition camera. Using the radar data and image data acquired in real time, the fixed tree, fence and utility-pole returns are screened out and rejected.
S2-3: Based on the fixed tree, fence and utility-pole returns detected by the millimeter-wave radar, perform a first round of screening on the detected targets according to their width, length, position and confidence; then track and filter the continuously detected targets with a Kalman filtering algorithm; and manage the target life cycle according to the Kalman filter estimates.
S2-4: After obtaining the corresponding detection results of the first millimeter-wave radar and first high-definition camera at the first intersection and of the second millimeter-wave radar and second high-definition camera at the second intersection, for detected targets both in the mutually associated state and in the mutually independent state, fuse the two sets of unassociated target information with an Elman neural network, and then reject any detected target left in the unmatched state.
S3: Perform target recognition on the consecutive image frames acquired by the high-definition camera with a trained deep neural network and, combined with the calibration parameters of the camera, compute the position and speed parameters of each detected target; track and filter the state of moving targets with a Kalman-filter-based method; and manage the target life cycle according to the Kalman filter estimates. Using a deep-learning method, target type recognition together with vision-based speed and distance measurement is realized: a trained deep neural network first performs target recognition on the consecutive image frames and, combined with the camera calibration parameters, the target position and speed parameters are computed; a Kalman-filter-based method then tracks and filters the moving-target state; finally, the target life cycle is managed according to the Kalman filter estimates.
The target life cycle management uses the form of a life cycle to correct missed-detection (or false-detection) states in the video and millimeter-wave radar detections, thereby smoothing jumps of the target state values in the detection results. Within a target's default life cycle, if the target is missed (or falsely detected), the original target is assumed to still exist and its state value is predicted (or corrected) with the Kalman filter; once the life cycle is exceeded, the original target is assumed to have disappeared, and the target is assigned a new ID. Notably, whenever the difference between the Kalman filter estimate and the sensor detection exceeds a specific threshold, the detected target's life cycle is decremented by 1.
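The life-cycle bookkeeping described above can be sketched as a small per-target counter. This is a minimal illustration under stated assumptions: the class name `TrackedTarget`, the default life of 5 frames, the gate threshold of 2.0 and the one-dimensional state are all hypothetical, since the patent gives no concrete values.

```python
from dataclasses import dataclass

DEFAULT_LIFE = 5        # assumed default life cycle, in frames (not specified in the patent)
GATE_THRESHOLD = 2.0    # assumed prediction/detection gate (not specified in the patent)

_next_id = 0

def new_id():
    """Issue a fresh target ID, as done when a target's life cycle expires."""
    global _next_id
    _next_id += 1
    return _next_id

@dataclass
class TrackedTarget:
    target_id: int
    state: float          # simplified 1-D state (e.g. one position coordinate)
    life: int = DEFAULT_LIFE

    def update(self, prediction, detection):
        """Apply one frame: accept the detection when it agrees with the Kalman
        prediction, otherwise coast on the prediction and decrement the life."""
        if detection is None or abs(prediction - detection) > GATE_THRESHOLD:
            self.state = prediction      # missed or unreliable detection
            self.life -= 1
        else:
            self.state = detection       # consistent detection: accept, refresh life
            self.life = DEFAULT_LIFE
        if self.life <= 0:               # life exhausted: target gone, new ID issued
            self.target_id = new_id()
            self.life = DEFAULT_LIFE
        return self.state
```

For example, a target that keeps matching its prediction retains its ID indefinitely, while one that goes undetected for `DEFAULT_LIFE` consecutive frames is treated as a new object.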
This method screens the target information output by the millimeter-wave radar according to the actual situation. It should be noted that the signals provided by the radar are usually accompanied by false targets, generally including trees, fences, utility poles and the like. Therefore, a first round of screening is performed on the detected targets using the width, length, position and confidence reported by the radar; the continuously detected targets are then tracked and filtered with a Kalman filtering algorithm; finally, the target life cycle is managed according to the Kalman filter estimates.
After the target-level detection results of the two sensor classes are obtained, an Elman neural network is used to fuse the two sets of target information.
As shown in Figs. 2 and 3, the SSD network based on VGG16 (image recognition):
S3-1: Each frame (300*300 pixels) acquired by each high-definition camera is fed into the deep-learning SSD model on its corresponding embedded GPU; the core of the model is a modified, trained VGG16 detection network. (Note: 1. the detection model resides on the embedded GPU paired with each camera, not on a back-end server; 2. the detection model is SSD, of which VGG16 is one part.)
S3-1 includes:
S3-A: Convert the fully connected layers FC6 and FC7 of the VGG16 detection network into convolutional layers Conv6 and Conv7.
S3-B: Remove the overfitting-prevention Dropout layers and the fully connected layer FC8 of the VGG16 detection network.
S3-C: Use dilated (atrous) convolution.
S3-D: Change the kernel and stride of the pooling layer Pool5 of VGG16 from 2*2 with stride 2 (2*2-S2) to 3*3 with stride 1 (3*3-S1), where S2 denotes a stride of 2 and S1 a stride of 1.
(2*2-S2 means a 2*2 kernel with a translation step of 2 per move; 3*3-S1 means a 3*3 kernel with a translation step of 1.)
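The effect of the S3-D change can be checked with the standard output-size formula for convolution and pooling layers. The sketch below uses the 19*19 map that enters Pool5 in a standard SSD300 pipeline and a padding of 1 for the modified layer; both values are assumptions chosen to match SSD's usual configuration, not figures from the patent.

```python
def out_size(n, kernel, stride, pad):
    """Side length of the output map for a conv/pool layer on an n*n input."""
    return (n + 2 * pad - kernel) // stride + 1

def dilated_extent(kernel, dilation):
    """Spatial extent covered by a dilated (atrous) kernel."""
    return dilation * (kernel - 1) + 1

# Original VGG16 Pool5 (2*2, stride 2) halves the map;
# the modified Pool5 (3*3, stride 1, pad 1) keeps its size,
# so the following dilated Conv6 can enlarge the receptive
# field without any further downsampling (step S3-C).
print(out_size(19, 2, 2, 0))      # halved map
print(out_size(19, 3, 1, 1))      # size preserved
print(dilated_extent(3, 6))       # a 3*3 kernel with dilation 6 spans 13 pixels
```

The dilation factor of 6 is the value commonly used for SSD's Conv6, given here only as an illustration.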
S3-2: Extract the feature maps of convolutional layers Conv4_3, Conv7, Conv8_2, Conv9_2, Conv10_2 and Conv11_2 of the detection network; at each feature point of each of these feature maps, construct bounding boxes of 6 different scales; then detect and classify the bounding boxes constructed at the feature points, producing multiple candidate bounding boxes.
S3-3: Combine the bounding boxes generated from the different feature maps, then use non-maximum suppression (NMS) to suppress bounding boxes that partly overlap or are incorrectly matched to a detected target, obtaining the final vehicle and pedestrian detection result.
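The combine-and-suppress step of S3-3 can be sketched as plain greedy non-maximum suppression over scored boxes. The 0.5 IoU threshold is an assumed typical value, not one given in the patent.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop any remaining box that
    overlaps it by more than iou_threshold, and repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```

In the patent's pipeline this runs over the pooled candidates from all six feature maps, leaving one box per vehicle or pedestrian.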
Elman neural network (target-level fusion):
As shown in Figure 4, the input layer receives the state parameters of the target obtained by the different sensors (abscissa, ordinate and velocity), and the output layer gives the final state parameters of the target.
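A minimal sketch of the Elman network forward pass described above, assuming NumPy and illustrative layer sizes (6 inputs: abscissa, ordinate and velocity from two sensors; 3 fused outputs). The weights here are random placeholders, not trained values from the embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)

class Elman:
    """Minimal Elman (simple recurrent) network: a context layer holds
    a copy of the previous hidden activations and is fed back in."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        self.W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.context = np.zeros(n_hidden)

    def step(self, x):
        h = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
        self.context = h          # hidden state feeds back next step
        return self.W_out @ h

# 6 inputs: (x, y, v) from camera and (x, y, v) from radar; 3 fused outputs
net = Elman(6, 12, 3)
fused = net.step(np.array([3.0, 1.2, 5.4, 3.1, 1.1, 5.6]))
```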
The system architecture is designed to realize the detection and analysis of vehicles and pedestrians at the intersection.
When a frame of image and a frame of radar message are captured simultaneously: on the one hand, the deep-learning SSD neural network model outlines the positions of all targets present in the image and gives their categories, yielding the pixel coordinates of each detected target; using the camera calibration parameters, the pixel coordinates of each object are converted into ground-plane coordinates, and the velocity of each object is computed from its change of position across consecutive images. Using the Kalman filter method, an optimal estimate of the target state parameters is made from the prediction based on the previous image frame and the detection in the current frame; for any target whose prediction and detection differ by more than a given threshold, the current detection result is deemed unreliable, the prediction is output directly as the result, and the target's life cycle is decremented by 1.
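The prediction-versus-detection gating and life-cycle decrement described above can be sketched as a constant-velocity Kalman step. This is an assumed NumPy illustration; the state model, noise covariances and gate threshold are placeholders, not values from the embodiment.

```python
import numpy as np

def kalman_gate(track, z, dt=0.05, gate=2.0):
    """Constant-velocity Kalman step with a distance gate: if the
    measurement strays too far from the prediction, output the
    prediction and decrement the track's life counter."""
    F = np.array([[1, 0, dt, 0],      # state: [x, y, vx, vy]
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    H = np.eye(2, 4)                  # we measure position only
    Q = np.eye(4) * 0.01              # process noise (placeholder)
    R = np.eye(2) * 0.1               # measurement noise (placeholder)
    x_pred = F @ track['x']
    P_pred = F @ track['P'] @ F.T + Q
    if np.linalg.norm(z - H @ x_pred) > gate:   # detection unreliable
        track['x'], track['P'] = x_pred, P_pred # coast on the prediction
        track['life'] -= 1
    else:                                       # standard Kalman update
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        track['x'] = x_pred + K @ (z - H @ x_pred)
        track['P'] = (np.eye(4) - K @ H) @ P_pred
    return track

track = {'x': np.array([0.0, 0.0, 1.0, 0.0]), 'P': np.eye(4), 'life': 5}
track = kalman_gate(track, np.array([0.05, 0.0]))   # consistent -> update
track = kalman_gate(track, np.array([10.0, 10.0]))  # outlier -> coast, life -1
```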
On the other hand, from the multiple target reports provided by the radar, targets whose width, length or position clearly fall outside the expected measured-target parameter ranges, or whose confidence is below a set threshold, are first rejected. The Kalman filter method is then used to predict each target's parameters from the previous radar message, and the prediction is compared with the detection in the current message: if the difference is below a given threshold, the current radar message parameters are output directly as the result; if the difference between prediction and detection exceeds the threshold, the current radar detection is deemed unreliable, an optimal estimate is obtained by combining prediction and detection, and the object's life cycle is decremented by 1.
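The first-round rejection of implausible radar targets can be sketched as a simple plausibility filter. The width, length and confidence thresholds below are illustrative assumptions, not values from the embodiment.

```python
def screen_radar_targets(targets,
                         width_range=(0.3, 3.0),     # metres, assumed
                         length_range=(0.3, 12.0),   # metres, assumed
                         min_confidence=0.5):        # assumed threshold
    """First-round screening of radar target-level output: drop returns
    whose width, length or confidence are implausible for a vehicle or
    pedestrian."""
    kept = []
    for t in targets:
        if not (width_range[0] <= t['w'] <= width_range[1]):
            continue
        if not (length_range[0] <= t['l'] <= length_range[1]):
            continue
        if t['conf'] < min_confidence:
            continue
        kept.append(t)
    return kept

targets = [
    {'w': 1.8, 'l': 4.5, 'conf': 0.9},    # plausible car
    {'w': 0.6, 'l': 0.6, 'conf': 0.8},    # plausible pedestrian
    {'w': 8.0, 'l': 40.0, 'conf': 0.9},   # implausible size (clutter)
    {'w': 1.8, 'l': 4.5, 'conf': 0.1},    # low confidence
]
kept = screen_radar_targets(targets)      # two plausible targets remain
```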
The target-level state information obtained by the above sensors is passed back to the background terminal over a wired or wireless network. First, across the different sensors, targets whose position and velocity parameters are close are associated and matched (treated as the same target). For the object category, the image detection result is taken as the final output; meanwhile the object state parameters obtained by image detection and radar detection are fed into the input layer of the Elman neural network, and the output-layer result is taken as the final state parameters of the target.
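The association of targets with close position and velocity parameters can be sketched as a greedy nearest-neighbour match. The tolerances are assumed for illustration; the embodiment does not specify the matching rule beyond "close".

```python
import numpy as np

def associate(cam, radar, pos_tol=1.5, vel_tol=1.0):
    """Greedy camera-radar association: pair targets whose position and
    speed both agree within tolerance (assumed thresholds), each radar
    target used at most once."""
    pairs, used = [], set()
    for i, c in enumerate(cam):
        best, best_d = None, None
        for j, r in enumerate(radar):
            if j in used:
                continue
            d = np.hypot(c['x'] - r['x'], c['y'] - r['y'])
            if d <= pos_tol and abs(c['v'] - r['v']) <= vel_tol:
                if best is None or d < best_d:
                    best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs

cam = [{'x': 0.0, 'y': 0.0, 'v': 5.0}, {'x': 10.0, 'y': 0.0, 'v': 3.0}]
radar = [{'x': 0.5, 'y': 0.2, 'v': 5.2}, {'x': 30.0, 'y': 30.0, 'v': 1.0}]
pairs = associate(cam, radar)  # only the first camera target matches
```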
Although embodiments of the present invention have been shown and described, those skilled in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and purpose of the present invention; the scope of the invention is defined by the claims and their equivalents.

Claims (7)

1. A working method for traffic intersection vehicle and pedestrian detection by radar and image capture device, characterized by comprising:
S1, a first intersection detection system is provided: the signal output of a first millimeter-wave radar is connected to the radar-signal input of a first single-chip microcomputer, the signal output of a first high-definition camera is connected to the camera-signal input of a first embedded GPU, the signal output of the first single-chip microcomputer is connected to the radar-signal input of a first switch, and the signal output of the first embedded GPU is connected to the camera-signal input of the first switch. A second intersection detection system is provided: the signal output of a second millimeter-wave radar is connected to the radar-signal input of a second single-chip microcomputer, the signal output of a second high-definition camera is connected to the camera-signal input of a second embedded GPU, the signal output of the second single-chip microcomputer is connected to the radar-signal input of a second switch, and the signal output of the second embedded GPU is connected to the camera-signal input of the second switch. An N-th intersection detection system is provided: the signal output of an N-th millimeter-wave radar is connected to the radar-signal input of an N-th single-chip microcomputer, the signal output of an N-th high-definition camera is connected to the camera-signal input of an N-th embedded GPU, the signal output of the N-th single-chip microcomputer is connected to the radar-signal input of an N-th switch, and the signal output of the N-th embedded GPU is connected to the camera-signal input of the N-th switch. The signal output of the first switch is connected to a first signal input of a main switch, the signal output of the second switch to a second signal input of the main switch, and the signal output of the N-th switch to an N-th signal input of the main switch; the signal output of the main switch is connected to the signal input of a database server;
S2, the database server collects the vehicle and pedestrian data at the intersections, gathering the data acquired by all the millimeter-wave radars; the type of each measured target is judged from the target type output by the millimeter-wave radar, thereby determining the type of the measured target; according to that type, the width and length of the measured target are scanned and the probability of its appearing at the corresponding intersection is obtained; the position of the measured target relative to the road-surface origin is computed by fusing the high-definition camera and the millimeter-wave radar, and the relative velocity is computed from the movement of the measured target in the road-surface coordinate system over time; the image information of the monitored intersection is output by the high-definition camera;
S3, target recognition is performed on the successive image frames acquired by the high-definition camera using a trained deep neural network and, combined with the calibration parameters of the high-definition camera, the position and velocity parameters of the measured targets are computed; the state of each measured target in motion is tracked and filtered by a Kalman-filter-based method; target life-cycle management is then carried out according to the Kalman-filter estimates.
2. The working method for traffic intersection vehicle and pedestrian detection by radar and image capture device according to claim 1, characterized in that S2 comprises:
S2-1, the measured-target information output by the millimeter-wave radar is screened in real time according to the actual motion of the measured targets;
S2-2, the radar signal emitted by the millimeter-wave radar is accompanied by false targets, the false targets comprising the tree, fence and utility-pole information picked up by the millimeter-wave radar and the high-definition camera; the fixed tree, fence and utility-pole information is rejected by screening the radar data and image data acquired in real time;
S2-3, after the fixed tree, fence and utility-pole information is removed, a first round of screening is applied to the detected measured targets according to the width, length, position and confidence information of the targets detected by the millimeter-wave radar; the continuously detected targets are then tracked and filtered by the Kalman filter algorithm, and target life-cycle management is carried out according to the Kalman-filter estimates;
S2-4, after the measured targets of the first millimeter-wave radar and the first high-definition camera at the first intersection and those of the second millimeter-wave radar and the second high-definition camera at the second intersection have each obtained their detection results in mutually independent states, without being associated with one another, the two sets of unassociated measured-target information are fused using the Elman neural network, and measured targets in the mismatched state are then rejected.
3. The working method for traffic intersection vehicle and pedestrian detection by radar and image capture device according to claim 1, characterized in that S3 comprises:
For measured targets that are associated with one another, detection is performed by the following method:
S3-1, each frame image acquired by each high-definition camera is input into the deep-learning SSD model on its corresponding embedded GPU, the core of the model being a modified and trained detection network VGG16;
S3-2, the feature maps of convolutional layers Conv4_3, Conv7, Conv8_2, Conv9_2, Conv10_2 and Conv11_2 of the detection network are extracted; at each feature point of each of these feature maps, 6 bounding boxes (Bounding box) of different scales are constructed; the bounding boxes constructed at the feature points are then detected and classified, generating multiple bounding boxes;
S3-3, the bounding boxes generated from the different feature maps are combined, and non-maximum suppression (NMS) is applied to suppress partially overlapping bounding boxes and incorrectly matched bounding boxes of the measured targets, yielding the final detection result for the vehicle and pedestrian measured targets.
4. The working method for traffic intersection vehicle and pedestrian detection by radar and image capture device according to claim 1, characterized in that S3-1 comprises:
S3-A, the fully-connected layers FC6 and FC7 of the detection network VGG16 are converted into convolutional layers Conv6 and Conv7;
S3-B, the over-fitting-prevention Dropout layers and the fully-connected layer FC8 of the detection network VGG16 are removed;
S3-C, dilated convolution (the atrous algorithm) is employed;
S3-D, the pooling layer Pool5 of the detection network VGG16 is changed from 2*2-S2 to 3*3-S1, where S2 is the second convolution-kernel stride and S1 is the first convolution-kernel stride.
5. The working method for traffic intersection vehicle and pedestrian detection by radar and image capture device according to claim 1, characterized in that S2-3 further comprises:
When a frame of image and a frame of radar message are captured simultaneously, the deep-learning SSD neural network model outlines the positions of all measured targets present in the image and gives their categories, yielding the pixel coordinates of each measured target in the image; using the high-definition-camera calibration parameters, the pixel coordinates of each measured target are converted into ground-plane coordinates, and the velocity of each measured target is computed from its change of position across consecutive images; using the Kalman filter method, an optimal estimate of the measured-target state parameters is made from the prediction based on the previous image frame and the detection in the current frame; for any measured target whose prediction and detection differ by more than a given threshold, the current detection result is deemed unreliable, the prediction is output directly as the result, and the life cycle of the measured target is decremented by 1.
6. The working method for traffic intersection vehicle and pedestrian detection by radar and image capture device according to claim 5, characterized in that S2-3 further comprises:
From the multiple measured-target reports provided by the millimeter-wave radar, targets whose width, length or position clearly fall outside the expected measured-target parameter ranges, or whose confidence is below a set threshold, are first rejected; then, using the Kalman filter method, the predicted value of the measured target from the previous-frame message is computed and compared with the detected value in the current-frame message; if the difference is below a given threshold, the current millimeter-wave radar message parameters are output directly as the result; if the difference between prediction and detection exceeds the threshold, the current millimeter-wave radar detection result is deemed unreliable, an optimal estimate is obtained by combining prediction and detection, and the life cycle of the measured target is decremented by 1.
7. The working method for traffic intersection vehicle and pedestrian detection by radar and image capture device according to claim 6, characterized in that S2-3 further comprises:
The target-level state information obtained by all the millimeter-wave radars and high-definition cameras is passed back to the database server over a wired or wireless network; first, among all the millimeter-wave radars and high-definition cameras, targets with close position and velocity parameters are associated and matched; for the measured-target category, the image detection result is taken as the final output, while the object state parameters obtained by image detection and radar detection are fed into the input layer of the Elman neural network, and the output-layer result is taken as the final state parameters of the target.
CN201910367434.8A 2019-05-05 2019-05-05 The working method of traffic intersection vehicle and pedestrian detection is carried out by radar and image capture device Pending CN110068818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910367434.8A CN110068818A (en) 2019-05-05 2019-05-05 The working method of traffic intersection vehicle and pedestrian detection is carried out by radar and image capture device

Publications (1)

Publication Number Publication Date
CN110068818A true CN110068818A (en) 2019-07-30

Family

ID=67369878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910367434.8A Pending CN110068818A (en) 2019-05-05 2019-05-05 The working method of traffic intersection vehicle and pedestrian detection is carried out by radar and image capture device

Country Status (1)

Country Link
CN (1) CN110068818A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532896A (en) * 2019-08-06 2019-12-03 北京航空航天大学 Road vehicle detection method based on fusion of roadside millimeter-wave radar and machine vision
CN110556005A (en) * 2019-10-11 2019-12-10 成都纳雷科技有限公司 Self-adaptive tracking method and system for improving capture rate in traffic speed measurement system
CN111123262A (en) * 2020-03-30 2020-05-08 江苏广宇科技产业发展有限公司 Automatic driving 3D modeling method, device and system
CN111754798A (en) * 2020-07-02 2020-10-09 上海电科智能系统股份有限公司 Method for realizing detection of vehicle and surrounding obstacles by fusing roadside laser radar and video
CN111787481A (en) * 2020-06-17 2020-10-16 北京航空航天大学 Road-vehicle coordination high-precision sensing method based on 5G
CN111913175A (en) * 2020-07-02 2020-11-10 哈尔滨工程大学 Water surface target tracking method with compensation mechanism under transient failure of sensor
CN112037543A (en) * 2020-09-14 2020-12-04 中德(珠海)人工智能研究院有限公司 Urban traffic light control method, device, equipment and medium based on three-dimensional modeling
CN112257698A (en) * 2020-12-23 2021-01-22 深圳佑驾创新科技有限公司 Method, device, equipment and storage medium for processing annular view parking space detection result
CN112649803A (en) * 2020-11-30 2021-04-13 南京航空航天大学 Camera and radar target matching method based on cross-correlation coefficient
CN112907975A (en) * 2021-01-23 2021-06-04 四川九通智路科技有限公司 Detection method for abnormal parking based on millimeter wave radar and video
CN112946628A (en) * 2021-02-08 2021-06-11 江苏中路工程技术研究院有限公司 Road running state detection method and system based on radar and video fusion
CN113065428A (en) * 2021-03-21 2021-07-02 北京工业大学 Automatic driving target identification method based on feature selection
CN113343849A (en) * 2021-06-07 2021-09-03 西安恒盛安信智能技术有限公司 Fusion sensing equipment based on radar and video
CN114141018A (en) * 2021-12-15 2022-03-04 阿波罗智联(北京)科技有限公司 Method and device for generating test result
CN114779180A (en) * 2022-06-20 2022-07-22 成都瑞达物联科技有限公司 Multipath interference mirror image target filtering method for vehicle-road cooperative radar

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008004077A2 (en) * 2006-06-30 2008-01-10 Toyota Jidosha Kabushiki Kaisha Automotive drive assist system with sensor fusion of radar and camera and probability estimation of object existence for varying a threshold in the radar
CN105572663A (en) * 2014-09-19 2016-05-11 通用汽车环球科技运作有限责任公司 Detection of a distributed radar target based on an auxiliary sensor
WO2017126226A1 (en) * 2016-01-22 2017-07-27 日産自動車株式会社 Vehicle driving assist control method and control device
CN107235044A (en) * 2017-05-31 2017-10-10 北京航空航天大学 Method for reconstructing road traffic scenes and driver driving behavior from multi-sensor data
CN107609522A (en) * 2017-09-19 2018-01-19 东华大学 Information-fusion vehicle detection system based on lidar and machine vision
CN108010360A (en) * 2017-12-27 2018-05-08 中电海康集团有限公司 Automatic-driving environment perception system based on vehicle-road cooperation
CN108983219A (en) * 2018-08-17 2018-12-11 北京航空航天大学 Fusion method and system for image information and radar information of traffic scenes
CN109459750A (en) * 2018-10-19 2019-03-12 吉林大学 Forward multi-vehicle tracking method fusing millimeter-wave radar with deep-learning vision
CN109490890A (en) * 2018-11-29 2019-03-19 重庆邮电大学 Millimeter-wave radar and monocular camera information fusion method for intelligent vehicles
CN109685145A (en) * 2018-12-26 2019-04-26 广东工业大学 Small-object detection method based on deep learning and image processing
CN109686108A (en) * 2019-02-19 2019-04-26 山东科技大学 Vehicle target trajectory tracking system and vehicle tracking method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
户晋文: "Research on Vehicle and Pedestrian Target Detection and Ranging Methods Based on Vision Fusion", China Master's Theses Full-text Database, Information Science and Technology *
曹伟: "Research on Vehicle Detection and Tracking Algorithms Based on SSD", China Master's Theses Full-text Database, Information Science and Technology *
杨良义、谢飞、陈涛: "Design of a Vehicle-Road Cooperation System for Urban Road Intersections", Journal of Chongqing University of Technology (Natural Science) *

Similar Documents

Publication Publication Date Title
CN110068818A (en) The working method of traffic intersection vehicle and pedestrian detection is carried out by radar and image capture device
CN106842231B (en) Road edge recognition and tracking method
US5535314A (en) Video image processor and method for detecting vehicles
US9225943B2 (en) PTZ video visibility detection method based on luminance characteristic
CN111652097B (en) Image and millimeter-wave radar fusion target detection method
CN103176185B (en) Method and system for detecting road barrier
CN103116746B (en) Video flame detection method based on multi-feature fusion technology
CN104183127B (en) Traffic surveillance video detection method and device
CN105513349B (en) Mountain highway vehicle event detection method based on dual-view learning
CN108596081A (en) Traffic detection method based on radar and camera fusion
CN103366156A (en) Road structure detection and tracking
CN107679520A (en) Lane line visual detection method suitable for complex conditions
CN106980113A (en) Object detection device and object detection method
CN107423679A (en) Pedestrian intention detection method and system
CN105825495A (en) Object detection apparatus and object detection method
CN105608431A (en) Highway congestion detection method based on vehicle count and traffic flow speed
CN103383733A (en) Lane video detection method based on semi-machine learning
CN110083099A (en) Automatic driving architecture system meeting automotive functional safety standards and working method
CN106327880B (en) Vehicle speed recognition method and system based on surveillance video
CN107909601A (en) Ship anti-collision early-warning video detection system and detection method suitable for navigation marks
CN110188606A (en) Lane recognition method, device and electronic equipment based on hyperspectral imaging
CN109272482A (en) Urban road intersection vehicle queue detection system based on image sequences
CN109615880A (en) Traffic flow measurement method based on radar image processing
CN107221175A (en) Pedestrian intention detection method and system
Heckman et al. Potential negative obstacle detection by occlusion labeling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190730