CN108960185A - Vehicle target detection method and system based on YOLOv2 - Google Patents

Vehicle target detection method and system based on YOLOv2

Info

Publication number
CN108960185A
CN108960185A (application number CN201810803074.7A)
Authority
CN
China
Prior art keywords
vehicle
frame
image
sample
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810803074.7A
Other languages
Chinese (zh)
Inventor
李鹏
马述杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taihua Wisdom Industry Group Co Ltd
Original Assignee
Taihua Wisdom Industry Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taihua Wisdom Industry Group Co Ltd filed Critical Taihua Wisdom Industry Group Co Ltd
Priority to CN201810803074.7A priority Critical patent/CN108960185A/en
Publication of CN108960185A publication Critical patent/CN108960185A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/273 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion; removing elements interfering with the pattern to be recognised
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/49 Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a vehicle target detection method and system based on YOLOv2, comprising the following steps: obtaining sample traffic video data; splitting the sample traffic video data into frame images as sample images; performing denoising, shadow removal, local histogram equalization and local invariant analysis on the sample images to obtain training images; inputting the training images into a preset YOLOv2 neural network model for training to obtain a vehicle detection model; obtaining real-time vehicle video data; performing denoising, shadow removal, local smoothing and local invariant analysis on the real-time vehicle video data to obtain secondarily processed real-time vehicle video data; and splitting the secondarily processed real-time vehicle video data into frame images and feeding them into the vehicle detection model to obtain a result figure. The YOLOv2-based vehicle target detection method and system provided by the invention improve both the accuracy and the speed of vehicle detection.

Description

Vehicle target detection method and system based on YOLOv2
Technical field
The present invention relates to the field of vehicle identification, and in particular to a vehicle target detection method and system based on YOLOv2.
Background art
Advances in science and technology provide ever richer technical support for the construction of cities. The recognition of specific targets in images has long been a core topic in computer vision, and among such targets vehicle target detection is of great importance in both civilian and military applications. On the civilian side, vehicle recognition drives applications in fields such as intelligent transportation, smart parking and security; on the military side, the recognition and tracking of vehicles (combat vehicles, armoured vehicles, etc.) in rapidly changing battlefield environments plays a key role in precision strikes, monitoring of enemy movements and the like.
With the advance of urbanization, the growth of car ownership, rising requirements for urban traffic and vehicle entrance/exit management, and the development of technologies such as the Internet, the Internet of Things, big data and machine learning, artificial intelligence is being applied to vehicle target detection scenarios at an accelerating pace. A prerequisite of vehicle target detection is separating vehicles from real-time video; in real life, however, vehicles are usually embedded in complex traffic environments, which makes vehicle target detection difficult. Known vehicle detection methods include CNN-based vehicle recognition, for example the invention patent with application publication number CN201710043464.4, which applies the Sobel operator to detect edge points in the acquired video frames, performs gradient detection to describe the outer contour of the vehicle, compares it with object candidate regions, and finally draws the rectangle enclosing the vehicle through convolution operations. This is also the earliest style of object detection; however, it generates a large number of target candidate feature maps, cannot handle occlusion, performs poorly on small targets and, because its arithmetic logic is overly complex, often fails to detect vehicles in real-time video or produces excessively offset rectangles, and is sensitive to noise.
There is therefore an urgent need for a vehicle detection method with better detection performance for vehicle targets.
Summary of the invention
In view of this, the present invention provides a vehicle target detection method and system based on YOLOv2, which solve the problems of low accuracy and slow speed of vehicle detection in the prior art.
To solve the above problems, the present invention provides a vehicle target detection method based on YOLOv2, comprising the following steps:
obtaining sample traffic video data and sample snapshot image data;
splitting the sample traffic video data into frame images, which together with the sample snapshot image data serve as sample images;
performing image denoising on the sample images;
performing shadow removal on the denoised sample images;
performing local histogram equalization on the shadow-removed sample images;
performing local invariant analysis on the equalized sample images to obtain training images;
inputting the training images into a preset YOLOv2 neural network model for training to obtain a vehicle detection model;
obtaining real-time vehicle video data;
performing denoising, shadow removal and local smoothing on the obtained real-time vehicle video data to obtain processed real-time vehicle video data;
performing local invariant analysis on the processed real-time vehicle video data to obtain secondarily processed real-time vehicle video data;
splitting the secondarily processed real-time vehicle video data into frame images to obtain target frame images;
inputting the target frame images into the vehicle detection model to obtain a result figure, wherein the result figure contains several target boxes, each target box being a rectangle enclosing a vehicle image in the result figure.
Further, the YOLOv2 neural network model comprises a convolutional layer, a pooling layer and a fully connected layer, and
the step of inputting the training images into the preset YOLOv2 neural network model for training to obtain the vehicle detection model further comprises:
inputting the training images into the convolutional layer for feature extraction, sliding a plurality of convolution kernels over the training images and outputting a plurality of multi-dimensional feature vectors;
inputting the multi-dimensional feature vectors into the pooling layer for pooling and outputting a pooled feature map;
inputting the pooled feature map into the fully connected layer and outputting the result figure.
Further, the vehicle detection model is configured to perform the following steps:
dividing the target frame image into n*p grid cells and assigning m bounding boxes to each grid cell, each bounding box having centre-point coordinates, width, height and confidence score as its four parameters;
normalizing the m bounding boxes according to the width and height of each bounding box;
calculating the confidence score of each bounding box;
for each grid cell, discarding, according to a preset score threshold, all bounding boxes whose confidence scores are below the score threshold;
for each grid cell, sorting all bounding boxes in descending order of confidence score;
for each grid cell, taking the bounding box with the highest confidence score as the target box.
Further, in the step of calculating the confidence score of each bounding box, the confidence score of the bounding box is calculated as follows:
judging whether an object is present in the region of the bounding box;
if no object is present, setting the confidence score of the bounding box to 0;
if an object is present, calculating the posterior probability Pr that the image inside the bounding box belongs to a vehicle target and calculating the value of the detection evaluation function IOU of the bounding box, the detection evaluation function IOU denoting the ratio of the intersection of the vehicle target and the bounding box to the union of the vehicle target and the bounding box;
multiplying the posterior probability Pr by the value of the detection evaluation function IOU of the bounding box to obtain the confidence score of the bounding box.
Further, a Gaussian filter is used to denoise the sample images.
Further, in the step of performing shadow removal on the denoised sample images, the image shadows are removed using the Poisson equation.
To solve the above problems, the present invention further provides a vehicle target detection system based on YOLOv2, comprising:
Sample acquisition module: for obtaining sample traffic video data and sample snapshot image data;
Sample conversion module: for splitting the sample traffic video data into frame images, which together with the sample snapshot image data serve as sample images;
Sample denoising module: for performing image denoising on the sample images;
Sample shadow-removal module: for performing shadow removal on the denoised sample images;
Sample equalization module: for performing local histogram equalization on the shadow-removed sample images;
Local invariant analysis module: for performing local invariant analysis on the equalized sample images to obtain training images;
Model training module: for inputting the training images into the preset YOLOv2 neural network model for training to obtain a vehicle detection model;
Data acquisition module: for obtaining real-time vehicle video data;
Data primary processing module: for performing denoising, shadow removal and local smoothing on the obtained real-time vehicle video data to obtain processed real-time vehicle video data;
Data secondary processing module: for performing local invariant analysis on the processed real-time vehicle video data to obtain secondarily processed real-time vehicle video data;
Data conversion module: for splitting the secondarily processed real-time vehicle video data into frame images to obtain target frame images;
Vehicle identification module: for inputting the target frame images into the vehicle detection model to obtain a result figure, wherein the result figure contains several target boxes, each target box being a rectangle enclosing a vehicle image in the result figure.
Further, when training the YOLOv2 neural network model, the model training module specifically executes the following steps:
inputting the training images into the convolutional layer for feature extraction, sliding a plurality of convolution kernels over the training images and outputting a plurality of multi-dimensional feature vectors;
inputting the multi-dimensional feature vectors into the pooling layer for pooling and outputting a pooled feature map;
inputting the pooled feature map into the fully connected layer and outputting the result figure.
Further, when processing the target frame image to obtain the result figure, the vehicle identification model specifically executes the following steps:
dividing the target frame image into n*p grid cells and assigning m bounding boxes to each grid cell, each bounding box having centre-point coordinates, width, height and confidence score as its four parameters;
normalizing the m bounding boxes according to the width and height of each bounding box;
calculating the confidence score of each bounding box;
for each grid cell, discarding, according to a preset score threshold, all bounding boxes whose confidence scores are below the score threshold;
for each grid cell, sorting all bounding boxes in descending order of confidence score and taking the bounding box with the highest confidence score as the target box.
Further, when calculating the confidence score of each bounding box, the vehicle identification model specifically executes the following steps:
judging whether an object is present in the region of the bounding box;
if no object is present, setting the confidence score of the bounding box to 0;
if an object is present, calculating the posterior probability Pr that the image inside the bounding box belongs to a vehicle target and calculating the value of the detection evaluation function IOU of the bounding box, the detection evaluation function IOU denoting the ratio of the intersection of the vehicle target and the bounding box to the union of the vehicle target and the bounding box;
multiplying the posterior probability Pr by the value of the detection evaluation function IOU to obtain the confidence score of the bounding box.
Compared with the prior art, the vehicle target detection method and system based on YOLOv2 provided by the invention achieve at least the following beneficial effects:
1. The YOLOv2 neural network has a very fast detection speed and can satisfy the speed requirements of video detection;
2. Compared with traditional vehicle detection methods, the present invention adopts an end-to-end vehicle target detection method with stronger robustness, and can recognise multiple vehicle targets in a picture at one time;
3. Before recognising the video and image data, the present invention first applies a series of processing steps to the images, such as denoising, shadow removal, local smoothing and local invariant analysis, which enhances the clarity of vehicle edge contours in the target frame images, improves the contrast of the target frame images, weakens the influence of mutual occlusion between vehicles, and improves the accuracy of vehicle recognition.
Of course, it is not necessary for any product implementing the present invention to achieve all of the above technical effects at the same time.
Other features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the present invention and, together with the description, serve to explain the principles of the invention.
Fig. 1 is a flow chart of the vehicle target detection method based on YOLOv2;
Fig. 2 is another flow chart of the vehicle target detection method based on YOLOv2;
Fig. 3 is a block diagram of the vehicle target detection system based on YOLOv2.
301, sample acquisition module; 302, sample conversion module; 303, sample denoising module; 304, sample shadow-removal module; 305, sample equalization module; 306, local invariant analysis module; 307, model training module; 311, data acquisition module; 312, data primary processing module; 313, data secondary processing module; 314, data conversion module; 315, vehicle identification module.
Detailed description of the embodiments
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the invention.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the invention, its application or its uses.
Techniques, methods and apparatus known to a person of ordinary skill in the relevant art may not be discussed in detail, but where appropriate such techniques, methods and apparatus should be regarded as part of the specification.
In all examples shown and discussed herein, any specific value should be interpreted as merely illustrative and not as a limitation; other examples of the exemplary embodiments may therefore have different values.
It should also be noted that similar reference numerals and letters denote similar items in the following drawings; accordingly, once an item has been defined in one drawing, it need not be discussed further in subsequent drawings.
Embodiment 1:
This embodiment provides a vehicle target detection method based on YOLOv2, for recognising all vehicle targets in images from real-time traffic video data or snapshot image data. The method mainly comprises two parts, model training and actual detection: before actual videos or pictures are detected, a vehicle detection model has to be trained from a large number of sample images; once training is finished, video data can be fed into the trained vehicle detection model, which directly outputs a result figure.
Specifically, Fig. 1 shows the flow chart of the vehicle target detection method based on YOLOv2, which comprises the following steps:
S101: obtaining sample traffic video data and sample snapshot image data;
The sample traffic video data and sample snapshot image data can be obtained through multiple channels, for example from the video surveillance data and snapshot image data provided by the transportation department. The method does not restrict the channel through which the sample traffic video data and sample snapshot image data are obtained, as long as vehicles are present in the video data and snapshot image data.
S102: splitting the sample traffic video data into frame images, which together with the sample snapshot image data serve as sample images;
This converts the data from video format into picture format.
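The patent does not specify how the video is split into frames. A minimal sketch using OpenCV (an assumed tool choice, not named in the patent) could look as follows; the file names and sampling stride are illustrative only.

```python
import cv2
import os

def video_to_frames(video_path, out_dir, stride=1):
    """Split a traffic video into frame images (illustrative sketch).

    video_path : path to a sample traffic video (hypothetical name)
    out_dir    : directory where the frame images are written
    stride     : keep every `stride`-th frame (1 = keep all frames)
    """
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx, kept = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if idx % stride == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{kept:06d}.jpg"), frame)
            kept += 1
        idx += 1
    cap.release()
    return kept

# e.g. video_to_frames("sample_traffic.mp4", "sample_frames", stride=5)
```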
S103: performing image denoising on the sample images;
Denoising removes noise from the sample images and improves their clarity.
S104: performing shadow removal on the denoised sample images;
Because shadows in an image interfere with vehicle recognition, shadow removal is applied to the denoised sample images to remove the shadows and improve the accuracy of vehicle recognition.
S105: performing local histogram equalization on the shadow-removed sample images;
In bad weather, for example on overcast days, the contrast of vehicles in the captured images is lower than in images captured in normal weather and the edge contours become blurred, which interferes with vehicle recognition. Local histogram equalization sharpens and smooths the vehicle edges, improves the contrast of the vehicles and makes their contours clearer.
S106: performing local invariant analysis on the equalized sample images to obtain training images;
Local invariant analysis further weakens the influence of mutual occlusion between vehicles.
S107: inputting the training images into a preset YOLOv2 neural network model for training to obtain a vehicle detection model;
Specifically, this embodiment uses the DarkNet pre-trained model, an open-source pre-trained model, as the initial model; with the training images as input and the result figure as output, the DarkNet pre-trained model is trained to obtain the vehicle detection model.
The above steps constitute the training process of the vehicle detection model. Once the vehicle detection model has been trained, actual traffic video data and snapshot image data can be processed to obtain result figures. The processing of actual real-time vehicle video data is described next.
S108: obtaining real-time vehicle video data;
This is similar to step S101; the difference is that the sample traffic video data and sample snapshot image data obtained in step S101 serve as samples for model training, whereas the vehicle video data obtained in step S108 serve as the data to be detected.
S109: performing denoising, shadow removal and local smoothing on the obtained real-time vehicle video data to obtain processed real-time vehicle video data;
This step is the same as steps S103 to S105: the obtained video data are processed to remove noise and eliminate shadows, improving the accuracy of vehicle recognition.
S110: performing local invariant analysis on the processed real-time vehicle video data to obtain secondarily processed real-time vehicle video data;
This step serves the same purpose as step S105: it likewise improves the contrast of the vehicles in the picture and makes their edge contours clearer, thereby improving the accuracy of vehicle recognition.
S111: splitting the secondarily processed real-time vehicle video data into frame images to obtain target frame images;
This step is the same as step S102: the vehicle video is split into frame images, which serve as target frame images.
S112: inputting the target frame images into the vehicle detection model to obtain a result figure.
The result figure contains several target boxes, each target box being a rectangle enclosing a vehicle image in the result figure.
Since the vehicle detection model has already been trained in steps S101 to S107, in this step the target frame images only need to be fed into the vehicle detection model to obtain the result figure immediately. In the result figure, every vehicle is enclosed by a rectangular target box.
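The patent only states that the trained model is applied to each target frame image. As one possible way to run a trained YOLOv2 network on a frame, the following sketch uses OpenCV's DNN module to load Darknet cfg/weights files; the file names, input size, threshold and the exact confidence formula are assumptions, not values given by the patent.

```python
import cv2
import numpy as np

# Hypothetical file names for a trained YOLOv2 vehicle detector
net = cv2.dnn.readNetFromDarknet("yolov2-vehicle.cfg", "yolov2-vehicle.weights")

def detect_vehicles(frame, conf_threshold=0.5):
    """Run the trained network on one target frame image and return raw boxes."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes = []
    for out in outputs:
        for det in out:  # det = [cx, cy, bw, bh, objectness, class scores...]
            scores = det[5:]
            # rough confidence: objectness times best class probability
            confidence = float(det[4]) * float(scores[np.argmax(scores)])
            if confidence >= conf_threshold:
                cx, cy = det[0] * w, det[1] * h
                bw, bh = det[2] * w, det[3] * h
                boxes.append((int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh), confidence))
    return boxes
```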
The vehicle target detection method based on YOLOv2 provided by this embodiment achieves the following beneficial effects:
1. After the obtained vehicle video data are subjected to a series of processing steps such as denoising, shadow removal, local smoothing and local invariant analysis, the clarity of vehicle edge contours in the target frame images is enhanced, the contrast of the target frame images is improved, the influence of mutual occlusion between vehicles is weakened, and the accuracy of vehicle recognition is improved;
2. The YOLOv2 neural network has a very fast detection speed and can satisfy the speed requirements of video detection;
3. Compared with traditional vehicle detection methods, the present invention adopts an end-to-end vehicle target detection method with stronger robustness and can recognise multiple vehicle targets in a picture at one time.
Embodiment 2:
On the basis of Embodiment 1, this embodiment provides a preferred vehicle target detection method based on YOLOv2; for related points, reference may be made to the description in Embodiment 1. Specifically, Fig. 2 shows another flow chart of the vehicle target detection method based on YOLOv2. The method comprises:
S201: obtaining sample traffic video data and sample snapshot image data;
The sample traffic video data and sample snapshot image data can be obtained through multiple channels, for example from the video surveillance data and snapshot image data provided by the transportation department. The method does not restrict the channel through which these data are obtained, as long as vehicles are present in the video data and snapshot image data.
S202: splitting the sample traffic video data into frame images, which together with the sample snapshot image data serve as sample images; this converts the data from video format into picture format.
S203: performing image denoising on the sample images using a Gaussian filter;
Denoising removes noise from the sample images and improves their clarity.
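As a sketch of the Gaussian denoising step, OpenCV's GaussianBlur can be applied to each sample image; the kernel size and sigma below are illustrative values, not parameters taken from the patent.

```python
import cv2

def denoise(image):
    # Gaussian filtering; kernel size and sigma are illustrative choices
    return cv2.GaussianBlur(image, ksize=(5, 5), sigmaX=1.0)
```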
S204: performing shadow removal on the denoised sample images using the Poisson equation;
Because shadows in an image interfere with vehicle recognition, shadow removal is applied to the denoised sample images to remove the shadows and improve the accuracy of vehicle recognition.
S205: performing local histogram equalization on the shadow-removed sample images;
In bad weather, for example on overcast days, the contrast of vehicles in the captured images is lower than in images captured in normal weather and the edge contours become blurred, which interferes with vehicle recognition. Local histogram equalization sharpens and smooths the vehicle edges, improves the contrast of the vehicles and makes their contours clearer.
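One common way to realise local (block-wise) histogram equalization is CLAHE; the sketch below applies it to the luminance channel only. The tile size and clip limit are assumptions, and the patent does not prescribe this particular implementation.

```python
import cv2

def local_hist_equalize(image_bgr, clip_limit=2.0, tile=(8, 8)):
    # Equalize only the luminance channel so colours are preserved
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    ycrcb[:, :, 0] = clahe.apply(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)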
S206: performing local invariant analysis on the equalized sample images to obtain training images;
Local invariant analysis further weakens the influence of mutual occlusion between vehicles.
S207: inputting the training images into a preset YOLOv2 neural network model for training to obtain a vehicle detection model;
Specifically, this embodiment uses the DarkNet pre-trained model as the initial model. The YOLOv2 neural network model comprises a convolutional layer, a pooling layer and a fully connected layer.
In this step, the training images are first fed into the convolutional layer for feature extraction: a plurality of convolution kernels slide over the training images and output a plurality of multi-dimensional feature vectors. The convolution kernels extract features from the training images and reduce their dimensionality to obtain the multi-dimensional feature vectors.
The multi-dimensional feature vectors are then fed into the pooling layer for pooling, and a pooled feature map is output.
Finally, the pooled feature map is fed into the fully connected layer, and the result figure is output.
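The patent trains a DarkNet-based YOLOv2 network. Purely to illustrate the convolution, pooling and fully-connected data flow described above, here is a toy PyTorch sketch; PyTorch, the layer sizes and the grid/box output shape are assumptions, not the patent's actual network.

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Toy convolution -> pooling -> fully-connected network; NOT the real YOLOv2/DarkNet."""
    def __init__(self, grid=13, boxes_per_cell=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution kernels slide over the image
            nn.LeakyReLU(0.1),
            nn.MaxPool2d(2),                             # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1),
            nn.AdaptiveMaxPool2d((grid, grid)),          # pooled feature map, one cell per grid cell
        )
        # fully connected layer mapping the pooled feature map to per-cell box parameters
        self.fc = nn.Linear(32 * grid * grid, grid * grid * boxes_per_cell * 5)
        self.grid, self.boxes = grid, boxes_per_cell

    def forward(self, x):                                # x: (N, 3, H, W), e.g. 416x416
        f = torch.flatten(self.features(x), start_dim=1)
        out = self.fc(f)
        # reshape to (N, grid, grid, boxes, [cx, cy, w, h, confidence])
        return out.view(-1, self.grid, self.grid, self.boxes, 5)
```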
The above steps constitute the training process of the vehicle detection model. Once the vehicle detection model has been trained, vehicle detection can be performed on actual traffic video data and snapshot image data to obtain result figures. The vehicle detection process for actual real-time vehicle video data is described next.
S208: obtaining real-time vehicle video data;
This is similar to step S201; the difference is that the sample traffic video data and sample snapshot image data obtained in step S201 serve as samples for model training, whereas the vehicle video data obtained in step S208 serve as the data to be detected.
S209: performing denoising, shadow removal and local smoothing on the obtained real-time vehicle video data to obtain processed real-time vehicle video data;
This step is the same as steps S203 to S205: the obtained video data are processed to remove noise and eliminate shadows, improving the accuracy of vehicle recognition.
S210: performing local invariant analysis on the processed real-time vehicle video data to obtain secondarily processed real-time vehicle video data;
This step serves the same purpose as step S205: it likewise improves the contrast of the vehicles in the picture and makes their edge contours clearer, thereby improving the accuracy of vehicle recognition.
S211: splitting the secondarily processed real-time vehicle video data into frame images to obtain target frame images;
This step is the same as step S202: the vehicle video is split into frame images, which serve as target frame images.
S212: inputting the target frame images into the vehicle detection model, dividing each target frame image into n*p grid cells and assigning m bounding boxes to each grid cell; each bounding box has centre-point coordinates, width, height and confidence score as its four parameters;
For example, for one target frame image, the vehicle detection model first divides it into n*p grid cells and assigns m bounding boxes to each grid cell, i.e. the image is divided into n*p*m bounding boxes in total; all bounding boxes are then obtained, each with centre-point coordinates, width, height and confidence score as its four parameters.
S213: normalizing the m bounding boxes according to the width and height of each bounding box on the target frame image;
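As an illustration of step S213, each box's parameters can be expressed relative to the frame size so that the coordinates, width and height fall in [0, 1]; this is one plausible reading of the normalization step, not a definition taken verbatim from the patent.

```python
def normalize_box(cx, cy, bw, bh, img_w, img_h):
    """Normalize a bounding box given in pixel units by the frame dimensions."""
    return cx / img_w, cy / img_h, bw / img_w, bh / img_h

# e.g. a 128x96 box centred at (640, 360) in a 1280x720 frame
# -> (0.5, 0.5, 0.1, 0.133...)
```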
S214: judging whether an object is present in the region of each bounding box on the target frame image; if an object is present, executing step S216; if no object is present, executing step S215;
S215: setting the confidence score of the bounding box to 0;
S216: calculating the posterior probability Pr that the image inside the bounding box on the target frame image belongs to a vehicle target, and calculating the value of the detection evaluation function IOU of the bounding box;
The detection evaluation function IOU denotes the ratio of the intersection of the vehicle target and the bounding box to the union of the vehicle target and the bounding box;
S217: multiplying the posterior probability Pr by the value of the detection evaluation function IOU to obtain the confidence score of the bounding box.
Steps S214 to S217 are executed for each bounding box, so that the confidence score of every bounding box is obtained.
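Steps S214 to S217 amount to confidence = Pr(vehicle) x IOU(box, vehicle target). A minimal sketch, assuming boxes are given as (x, y, w, h) in pixel coordinates with (x, y) the top-left corner:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def confidence_score(has_object, pr_vehicle, box, vehicle_box):
    # S214/S215: no object in the box region -> confidence 0
    if not has_object:
        return 0.0
    # S216/S217: posterior probability of "vehicle" times IOU with the vehicle target
    return pr_vehicle * iou(box, vehicle_box)
```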
S218: for each grid cell, discarding, according to a preset score threshold, all bounding boxes whose confidence scores are below the score threshold;
Since each grid cell contains m bounding boxes, the confidence score of each bounding box has been calculated through steps S214 to S217. The score threshold is a manually set parameter: a bounding box whose confidence score is below the score threshold indicates that the object inside it is unlikely to be a vehicle target, whereas a bounding box whose confidence score is greater than or equal to the score threshold indicates that its content is likely to be a vehicle target. By setting the score threshold and discarding all bounding boxes whose confidence scores are below it, a large number of bounding boxes that cannot contain vehicle targets are excluded at once, and only the bounding boxes that may contain vehicle targets are processed further, which greatly reduces the amount of data processing.
S219: for each grid cell, sorting all bounding boxes in descending order of confidence score and taking the bounding box with the highest confidence score as the target box.
This step sorts, by confidence score, the bounding boxes that remain after the boxes below the score threshold have been discarded in step S218; the bounding box with the highest confidence score is taken as the target box, because it is more likely than any other remaining bounding box to contain a vehicle target.
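A sketch of the per-grid-cell pruning in steps S218 and S219, assuming each cell holds a list of (box, confidence) pairs; the threshold value is illustrative.

```python
def select_target_boxes(cells, score_threshold=0.3):
    """cells: list of grid cells, each a list of (box, confidence) pairs.
    Returns at most one target box per grid cell."""
    targets = []
    for boxes in cells:
        # S218: discard boxes whose confidence is below the threshold
        kept = [(b, c) for (b, c) in boxes if c >= score_threshold]
        if not kept:
            continue
        # S219: sort by confidence (descending) and keep the best box
        kept.sort(key=lambda bc: bc[1], reverse=True)
        targets.append(kept[0][0])
    return targets
```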
S220: outputting the result figure. For one target frame image, the result figure is the picture in which all the vehicles in the target frame image are marked out by target boxes.
For example, if a target frame image contains 5 vehicles at different positions, the target boxes in the result figure are the rectangles enclosing these five vehicles, and the size of each rectangle matches the contour of the vehicle it encloses, i.e. each rectangle is slightly larger than the vehicle contour.
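To render the result figure described in S220, the selected target boxes can simply be drawn onto the frame; the colour and line width below are arbitrary choices, not specified by the patent.

```python
import cv2

def draw_result_figure(frame, target_boxes):
    """Draw each target box (x, y, w, h) as a rectangle slightly larger than the vehicle."""
    result = frame.copy()
    for (x, y, w, h) in target_boxes:
        cv2.rectangle(result, (x, y), (x + w, y + h), color=(0, 255, 0), thickness=2)
    return result
```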
The vehicle target detection method based on YOLOv2 provided by this embodiment achieves the following beneficial effects:
After the obtained vehicle video data are subjected to a series of processing steps such as denoising, shadow removal, local smoothing and local invariant analysis, the clarity of vehicle edge contours in the target frame images is enhanced, the contrast of the target frame images is improved, the influence of mutual occlusion between vehicles is weakened, and the accuracy of vehicle recognition is improved;
In this embodiment, the target frame image is divided into several grid cells and each grid cell is assigned several bounding boxes; the confidence score of each bounding box is then calculated and the bounding box with the highest confidence score is taken as the target box, which improves the accuracy of vehicle recognition. A confidence score threshold is also set and the bounding boxes whose confidence scores are below the threshold are discarded, which reduces the amount of data processing and speeds up recognition.
Embodiment 3:
On the basis of Embodiments 1 and 2, this embodiment provides a vehicle target detection system based on YOLOv2.
Fig. 3 shows the block diagram of the vehicle target detection system based on YOLOv2. The system provided by this embodiment comprises two parts as a whole, model training and actual detection: the model training part is used to train the vehicle detection model, and the actual detection part is used to perform vehicle detection on video data and obtain the result figure. Model training and actual detection are described in detail below. The model training part comprises a sample acquisition module 301, a sample conversion module 302, a sample denoising module 303, a sample shadow-removal module 304, a sample equalization module 305, a local invariant analysis module 306 and a model training module 307; the actual detection part comprises a data acquisition module 311, a data primary processing module 312, a data secondary processing module 313, a data conversion module 314 and a vehicle identification module 315.
Specifically, the sample acquisition module 301 is used to obtain sample traffic video data and sample snapshot image data.
The sample conversion module 302 is used to split the sample traffic video data into frame images, which together with the sample snapshot image data serve as sample images.
The sample denoising module 303 is used to perform image denoising on the sample images with a Gaussian filter.
The sample shadow-removal module 304 is used to perform shadow removal on the denoised sample images using the Poisson equation.
The sample equalization module 305 is used to perform local histogram equalization on the shadow-removed sample images.
In bad weather, for example on overcast days, the contrast of vehicles in the captured images is lower than in images captured in normal weather and the edge contours become blurred, which interferes with vehicle recognition. Local histogram equalization sharpens and smooths the vehicle edges, improves the contrast of the vehicles and makes their contours clearer.
The local invariant analysis module 306 is used to perform local invariant analysis on the equalized sample images to obtain training images.
The model training module 307 is used to input the training images into the preset YOLOv2 neural network model for training to obtain the vehicle detection model. When training the YOLOv2 neural network model, the model training module 307 specifically executes the following steps:
feeding the training images into the convolutional layer for feature extraction, sliding a plurality of convolution kernels over the training images and outputting a plurality of multi-dimensional feature vectors; the convolution kernels extract features from the training images and reduce their dimensionality to obtain the multi-dimensional feature vectors;
feeding the multi-dimensional feature vectors into the pooling layer for pooling and outputting a pooled feature map;
feeding the pooled feature map into the fully connected layer and outputting the result figure.
The data acquisition module 311 is used to obtain real-time vehicle video data.
The data primary processing module 312 is used to perform denoising, shadow removal and local smoothing on the obtained real-time vehicle video data to obtain processed real-time vehicle video data.
The data secondary processing module 313 is used to perform local invariant analysis on the processed real-time vehicle video data to obtain secondarily processed real-time vehicle video data.
The data conversion module 314 is used to split the secondarily processed real-time vehicle video data into frame images to obtain target frame images.
The vehicle identification module 315 has a built-in vehicle identification model and is used to input the target frame images into the vehicle detection model to obtain the result figure, wherein the result figure contains several target boxes, each target box being a rectangle enclosing a vehicle image in the result figure.
When processing a target frame image to obtain the result figure, the vehicle identification model specifically executes the following steps:
S301: dividing the target frame image into n*p grid cells and assigning m bounding boxes to each grid cell, each bounding box having centre-point coordinates, width, height and confidence score as its four parameters;
S302: normalizing the m bounding boxes according to the width and height of each bounding box;
S303: calculating the confidence score of each bounding box;
S304: for each grid cell, discarding, according to a preset score threshold, all bounding boxes whose confidence scores are below the score threshold; the score threshold is a manually set parameter: a bounding box whose confidence score is below the score threshold indicates that the object inside it is unlikely to be a vehicle target, whereas a bounding box whose confidence score is greater than or equal to the score threshold indicates that its content is likely to be a vehicle target. By setting the score threshold and discarding all bounding boxes whose confidence scores are below it, a large number of bounding boxes that cannot contain vehicle targets are excluded at once, and only the bounding boxes that may contain vehicle targets are processed further, which greatly reduces the amount of data processing;
S305: for each grid cell, sorting all bounding boxes in descending order of confidence score and taking the bounding box with the highest confidence score as the target box;
S306: outputting the result figure. For one target frame image, the result figure is the picture in which all the vehicles in the target frame image are marked out by target boxes.
More specifically, when calculating the confidence score of each bounding box, the vehicle identification model specifically executes the following steps:
judging whether an object is present in the region of the bounding box;
if no object is present, setting the confidence score of the bounding box to 0;
if an object is present, calculating the posterior probability Pr that the image inside the bounding box belongs to a vehicle target and calculating the value of the detection evaluation function IOU of the bounding box, the detection evaluation function IOU denoting the ratio of the intersection of the vehicle target and the bounding box to the union of the vehicle target and the bounding box;
multiplying the posterior probability Pr by the value of the detection evaluation function IOU to obtain the confidence score of the bounding box.
Although some specific embodiments of the present invention have been described in detail by way of example, a person skilled in the art should understand that the above examples are provided merely for illustration and are not intended to limit the scope of the invention. A person skilled in the art should also understand that the above embodiments may be modified without departing from the scope and spirit of the invention. The scope of the invention is defined by the appended claims.

Claims (10)

1. A vehicle target detection method based on YOLOv2, which comprises the following steps:
obtaining sample traffic video data and sample snapshot image data;
splitting the sample traffic video data into frame images, which together with the sample snapshot image data serve as sample images;
performing image denoising on the sample images;
performing shadow removal on the denoised sample images;
performing local histogram equalization on the shadow-removed sample images;
performing local invariant analysis on the equalized sample images to obtain training images;
inputting the training images into a preset YOLOv2 neural network model for training to obtain a vehicle detection model;
obtaining real-time vehicle video data;
performing denoising, shadow removal and local smoothing on the obtained real-time vehicle video data to obtain processed real-time vehicle video data;
performing local invariant analysis on the processed real-time vehicle video data to obtain secondarily processed real-time vehicle video data;
splitting the secondarily processed real-time vehicle video data into frame images to obtain target frame images;
inputting the target frame images into the vehicle detection model to obtain a result figure, wherein the result figure contains several target boxes, each target box being a rectangle enclosing a vehicle image in the result figure.
2. The vehicle target detection method based on YOLOv2 according to claim 1, which is characterized in that the YOLOv2 neural network model comprises a convolutional layer, a pooling layer and a fully connected layer, and
the step of inputting the training images into the preset YOLOv2 neural network model for training to obtain the vehicle detection model further comprises:
inputting the training images into the convolutional layer for feature extraction, sliding a plurality of convolution kernels over the training images and outputting a plurality of multi-dimensional feature vectors;
inputting the multi-dimensional feature vectors into the pooling layer for pooling and outputting a pooled feature map;
inputting the pooled feature map into the fully connected layer and outputting the result figure.
3. The vehicle target detection method based on YOLOv2 according to claim 1, which is characterized in that the vehicle detection model is configured to perform the following steps:
dividing the target frame image into n*p grid cells and assigning m bounding boxes to each grid cell, each bounding box having centre-point coordinates, width, height and confidence score as its four parameters;
normalizing the m bounding boxes according to the width and height of each bounding box;
calculating the confidence score of each bounding box;
for each grid cell, discarding, according to a preset score threshold, all bounding boxes whose confidence scores are below the score threshold;
for each grid cell, sorting all bounding boxes in descending order of confidence score;
for each grid cell, taking the bounding box with the highest confidence score as the target box.
4. The vehicle target detection method based on YOLOv2 according to claim 3, which is characterized in that, in the step of calculating the confidence score of each bounding box, the confidence score of the bounding box is calculated as follows:
judging whether an object is present in the region of the bounding box;
if no object is present, setting the confidence score of the bounding box to 0;
if an object is present, calculating the posterior probability Pr that the image inside the bounding box belongs to a vehicle target and calculating the value of the detection evaluation function IOU of the bounding box, the detection evaluation function IOU denoting the ratio of the intersection of the vehicle target and the bounding box to the union of the vehicle target and the bounding box;
multiplying the posterior probability Pr by the value of the detection evaluation function IOU of the bounding box to obtain the confidence score of the bounding box.
5. The vehicle target detection method based on YOLOv2 according to claim 1, which is characterized in that a Gaussian filter is used to denoise the sample images.
6. The vehicle target detection method based on YOLOv2 according to claim 1, which is characterized in that, in the step of performing shadow removal on the denoised sample images, the image shadows are removed using the Poisson equation.
7. A vehicle target detection system based on YOLOv2, which is characterized by comprising:
Sample acquisition module: for obtaining sample traffic video data and sample snapshot image data;
Sample conversion module: for splitting the sample traffic video data into frame images, which together with the sample snapshot image data serve as sample images;
Sample denoising module: for performing image denoising on the sample images;
Sample shadow-removal module: for performing shadow removal on the denoised sample images;
Sample equalization module: for performing local histogram equalization on the shadow-removed sample images;
Local invariant analysis module: for performing local invariant analysis on the equalized sample images to obtain training images;
Model training module: for inputting the training images into the preset YOLOv2 neural network model for training to obtain a vehicle detection model;
Data acquisition module: for obtaining real-time vehicle video data;
Data primary processing module: for performing denoising, shadow removal and local smoothing on the obtained real-time vehicle video data to obtain processed real-time vehicle video data;
Data secondary processing module: for performing local invariant analysis on the processed real-time vehicle video data to obtain secondarily processed real-time vehicle video data;
Data conversion module: for splitting the secondarily processed real-time vehicle video data into frame images to obtain target frame images;
Vehicle identification module: for inputting the target frame images into the vehicle detection model to obtain a result figure, wherein the result figure contains several target boxes, each target box being a rectangle enclosing a vehicle image in the result figure.
8. The vehicle target detection system based on YOLOv2 according to claim 7, which is characterized in that,
when training the YOLOv2 neural network model, the model training module specifically executes the following steps:
inputting the training images into the convolutional layer for feature extraction, sliding a plurality of convolution kernels over the training images and outputting a plurality of multi-dimensional feature vectors;
inputting the multi-dimensional feature vectors into the pooling layer for pooling and outputting a pooled feature map;
inputting the pooled feature map into the fully connected layer and outputting the result figure.
9. The vehicle target detection system based on YOLOv2 according to claim 7, which is characterized in that,
when processing the target frame image to obtain the result figure, the vehicle identification model specifically executes the following steps:
dividing the target frame image into n*p grid cells and assigning m bounding boxes to each grid cell, each bounding box having centre-point coordinates, width, height and confidence score as its four parameters;
normalizing the m bounding boxes according to the width and height of each bounding box;
calculating the confidence score of each bounding box;
for each grid cell, discarding, according to a preset score threshold, all bounding boxes whose confidence scores are below the score threshold;
for each grid cell, sorting all bounding boxes in descending order of confidence score and taking the bounding box with the highest confidence score as the target box.
10. The vehicle target detection system based on YOLOv2 according to claim 9, which is characterized in that,
when calculating the confidence score of each bounding box, the vehicle identification model specifically executes the following steps:
judging whether an object is present in the region of the bounding box;
if no object is present, setting the confidence score of the bounding box to 0;
if an object is present, calculating the posterior probability Pr that the image inside the bounding box belongs to a vehicle target and calculating the value of the detection evaluation function IOU of the bounding box, the detection evaluation function IOU denoting the ratio of the intersection of the vehicle target and the bounding box to the union of the vehicle target and the bounding box;
multiplying the posterior probability Pr by the value of the detection evaluation function IOU to obtain the confidence score of the bounding box.
CN201810803074.7A 2018-07-20 2018-07-20 Vehicle target detection method and system based on YOLOv2 Pending CN108960185A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810803074.7A CN108960185A (en) 2018-07-20 2018-07-20 Vehicle target detection method and system based on YOLOv2

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810803074.7A CN108960185A (en) 2018-07-20 2018-07-20 Vehicle target detection method and system based on YOLOv2

Publications (1)

Publication Number Publication Date
CN108960185A true CN108960185A (en) 2018-12-07

Family

ID=64482118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810803074.7A Pending CN108960185A (en) 2018-07-20 2018-07-20 Vehicle target detection method and system based on YOLOv2

Country Status (1)

Country Link
CN (1) CN108960185A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101840507A (en) * 2010-04-09 2010-09-22 江苏东大金智建筑智能化系统工程有限公司 Target tracking method based on character feature invariant and graph theory clustering
CN106023605A (en) * 2016-07-15 2016-10-12 姹ゅ钩 Traffic signal lamp control method based on deep convolution neural network
CN107134144A (en) * 2017-04-27 2017-09-05 武汉理工大学 A kind of vehicle checking method for traffic monitoring
CN107844769A (en) * 2017-11-01 2018-03-27 济南浪潮高新科技投资发展有限公司 Vehicle checking method and system under a kind of complex scene
CN108229434A (en) * 2018-02-01 2018-06-29 福州大学 A kind of vehicle identification and the method for careful reconstruct

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHOU JIQIANG: "Research on Multi-class Object Detection and Multi-object Tracking Algorithms in Surveillance Video", China Master's Theses Full-text Database, Information Science and Technology *
LI YUNPENG et al.: "Vehicle Target Detection in Complex Scenes Based on YOLOv2", Video Engineering *
XIE XIAOZHU et al.: "A Review of Vehicle Target Recognition Research in Complex Environments", Journal of Ordnance Equipment Engineering *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109817009A (en) * 2018-12-31 2019-05-28 天合光能股份有限公司 Method for acquiring dynamic traffic information required by unmanned driving
CN109936774A (en) * 2019-03-29 2019-06-25 广州虎牙信息科技有限公司 Virtual image control method, device and electronic equipment
WO2020200082A1 (en) * 2019-03-29 2020-10-08 广州虎牙信息科技有限公司 Live broadcast interaction method and apparatus, live broadcast system and electronic device
CN110222764A (en) * 2019-06-10 2019-09-10 中南民族大学 Shelter target detection method, system, equipment and storage medium
CN110399803A (en) * 2019-07-01 2019-11-01 北京邮电大学 A kind of vehicle checking method and device
CN110300241A (en) * 2019-08-05 2019-10-01 上海天诚比集科技有限公司 A kind of video detection area noise frame minimizing technology
CN110300241B (en) * 2019-08-05 2021-09-17 上海天诚比集科技有限公司 Method for removing noise frame in video detection area
CN112347819A (en) * 2019-08-08 2021-02-09 初速度(苏州)科技有限公司 Vehicle path transformation method and device based on full graph and local detection
CN112347819B (en) * 2019-08-08 2022-05-17 魔门塔(苏州)科技有限公司 Vehicle path transformation method and device based on full graph and local detection
CN111950475A (en) * 2020-08-15 2020-11-17 哈尔滨理工大学 Yalhe histogram enhancement type target recognition algorithm based on yoloV3
CN115623318A (en) * 2022-12-20 2023-01-17 荣耀终端有限公司 Focusing method and related device
CN115623318B (en) * 2022-12-20 2024-04-19 荣耀终端有限公司 Focusing method and related device

Similar Documents

Publication Publication Date Title
CN108960185A (en) Vehicle target detection method and system based on YOLOv2
CN111444821B (en) Automatic identification method for urban road signs
CN104834922B (en) Gesture identification method based on hybrid neural networks
CN104268583B (en) Pedestrian re-recognition method and system based on color area features
CN103632158B (en) Forest fire prevention monitor method and forest fire prevention monitor system
CN106845408A (en) A kind of street refuse recognition methods under complex environment
CN103034852B (en) The detection method of particular color pedestrian under Still Camera scene
CN102270308B (en) Facial feature location method based on five sense organs related AAM (Active Appearance Model)
CN107122777A (en) A kind of vehicle analysis system and analysis method based on video file
CN109636824A (en) A kind of multiple target method of counting based on image recognition technology
CN111611905A (en) Visible light and infrared fused target identification method
CN107133604A (en) A kind of pig abnormal gait detection method based on ellipse fitting and predictive neutral net
CN104537689B (en) Method for tracking target based on local contrast conspicuousness union feature
CN102194108A (en) Smiley face expression recognition method based on clustering linear discriminant analysis of feature selection
CN110334660A (en) A kind of forest fire monitoring method based on machine vision under the conditions of greasy weather
CN106127812A (en) A kind of passenger flow statistical method of non-gate area, passenger station based on video monitoring
CN106909883A (en) A kind of modularization hand region detection method and device based on ROS
CN109241814A (en) Pedestrian detection method based on YOLO neural network
CN107944392A (en) A kind of effective ways suitable for cell bayonet Dense crowd monitor video target mark
CN111191531A (en) Rapid pedestrian detection method and system
Zhang et al. Fire detection and identification method based on visual attention mechanism
CN104915642A (en) Method and apparatus for measurement of distance to vehicle ahead
CN111951283A (en) Medical image identification method and system based on deep learning
CN102722701B (en) Visual monitoring method and device in fingerprint collection process
Fan et al. Covered vehicle detection in autonomous driving based on faster rcnn

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20181207