CN109190444A - Implementation method of a video-based toll-lane vehicle feature recognition system - Google Patents

Implementation method of a video-based toll-lane vehicle feature recognition system Download PDF

Info

Publication number
CN109190444A
CN109190444A CN201810705071.XA
Authority
CN
China
Prior art keywords
vehicle
target
video
lane
drivers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810705071.XA
Other languages
Chinese (zh)
Other versions
CN109190444B (en)
Inventor
阮雅端
赵博睿
陈林凯
葛嘉琦
陈启美
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201810705071.XA priority Critical patent/CN109190444B/en
Publication of CN109190444A publication Critical patent/CN109190444A/en
Application granted granted Critical
Publication of CN109190444B publication Critical patent/CN109190444B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07B TICKET-ISSUING APPARATUS; FARE-REGISTERING APPARATUS; FRANKING APPARATUS
    • G07B15/00 Arrangements or apparatus for collecting fares, tolls or entrance fees at one or more control points
    • G07B15/06 Arrangements for road pricing or congestion charging of vehicles or vehicle users, e.g. automatic toll systems

Abstract

The invention proposes an implementation method for a video-based toll-lane vehicle feature recognition system comprising three modules: a vehicle detection module, a vehicle tracking module, and a vehicle feature recognition module. The invention uses an SSD object detector for detection, performs tracking by comparing feature-map histograms and inter-frame distances, and passes the feature maps through convolutional neural networks to recognize vehicle features. The method of the invention can recognize features efficiently, runs in real time, reduces redundant consumption of computing resources, and improves the accuracy of the system.

Description

Implementation method of a video-based toll-lane vehicle feature recognition system
Technical field
The invention belongs to the fields of image processing and computer-vision detection technology, relates to the application of object detection algorithms and deep learning algorithms to vehicle detection, and is an implementation method of a video-based toll-lane vehicle feature recognition system.
Background technique
China's expressway network is developing rapidly, and expressway transport has become one of the main modes of land cargo transport. Expressway transport is fast and stable, but fee evasion in expressway toll lanes is increasingly serious. Many vehicles are obviously buses or trucks but are fitted with the ETC charging device of a passenger car, so that when passing through the toll lane they are charged at the passenger-car rate. With the maturing of deep learning and object detection technology, automatic vehicle detection and feature recognition in toll lanes has become an important research topic in intelligent transportation systems: in expressway toll-lane management it can effectively reduce manpower consumption and efficiently combat fee evasion. However, toll-lane vehicle detection and feature recognition place high demands on the real-time performance and accuracy of the system. If real-time performance does not meet requirements, the system cannot be used normally; if accuracy does not meet requirements, the system is prone to large numbers of false judgments and disturbs normal toll-lane operation. How to improve the real-time performance and accuracy of a detection and recognition system simultaneously is therefore particularly important, is a major hot research direction at present, and has important significance and value for intelligent transportation systems with toll lanes.
At present, most vehicle feature recognition systems model the background of toll-lane video with a Gaussian mixture background subtraction algorithm (GMBSD, Gaussian mixture background subtraction division) to realize moving-vehicle detection and tracking, but this method has very low accuracy under vehicle congestion and does not generalize. Many current deep-learning object detection algorithms, such as Faster R-CNN and SSD, achieve good detection accuracy, but these object detectors have low real-time performance and cannot be deployed at scale effectively and economically. Moreover, without a subsequent vehicle tracking algorithm, the system easily performs repeated feature recognition on the same vehicle; and even if a subsequent vehicle tracking algorithm and a vehicle feature recognition algorithm are simply appended, the real-time performance of the system remains low and large-scale deployment remains difficult.
Summary of the invention
The problem to be solved by the invention: for the requirements of toll-lane vehicle feature recognition, the recognition methods used by existing systems cannot simultaneously achieve accuracy, real-time performance, and economy, and cannot satisfy the requirements of large-scale deployment and accurate real-time recognition. The purpose of the invention is to improve the real-time performance of existing vehicle feature recognition systems without losing accuracy; to realize a target tracking method tailored to the vehicle feature recognition task so as to reduce repeated vehicle feature recognition; and to perform vehicle feature recognition on the feature maps obtained during detection, improving system real-time performance.
The technical solution of the invention is as follows: a video-based toll-lane vehicle feature recognition method, comprising three steps: vehicle detection, vehicle tracking, and vehicle feature recognition:
Step S1: perform vehicle detection on the toll-lane video with a deep learning method, save the feature map of each detected vehicle after normalized pooling, and simultaneously save the position and class information of each vehicle:
S1.1) train a convolutional neural network for vehicle detection; the detected vehicles are divided into 3 classes: bus, truck, and car;
S1.2) detect each frame of the toll-lane video with the convolutional neural network; the detection output includes the position and class of each vehicle, where the position is the center-point coordinates plus the width and height of the vehicle, and the class is one of the 3 categories;
S1.3) apply normalized pooling to the feature map of the video image at each detected vehicle to obtain a sub-feature map, and save the sub-feature map, vehicle position, and vehicle class as the detection information of the vehicle, indexed by a per-vehicle ID; the saved information is expressed as:
Content (id)={ featuremap, loc, class } (1)
In the formula, featuremap is the feature map, a vector of dimension 3x3x256; loc = (x, y, w, h) is the position information, whose four components are the center-point abscissa, center-point ordinate, vehicle width, and vehicle height, each normalized to between 0 and 1; class = (cls1, cls2, cls3) is the vehicle class, whose three components are the total numbers of frames, up to the current frame, in which the target has been identified as a car, a bus, and a truck, respectively;
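The per-vehicle record of formula (1) can be sketched in Python as follows. This is a minimal illustration, not the patented implementation; the function name and the class-counter convention (car, bus, truck order) are assumptions taken from the description above.

```python
import numpy as np

def make_detection_record(featuremap, x, y, w, h):
    """Build the per-vehicle record of formula (1):
    Content(id) = {featuremap, loc, class}.
    featuremap: the 3x3x256 pooled sub-feature map;
    (x, y, w, h): normalized center coordinates and box size in [0, 1];
    class: per-class frame counters (car, bus, truck), zero until voted on.
    """
    assert featuremap.shape == (3, 3, 256)
    assert all(0.0 <= v <= 1.0 for v in (x, y, w, h))
    return {
        "featuremap": featuremap,
        "loc": (x, y, w, h),
        "class": [0, 0, 0],  # frames seen as (car, bus, truck)
    }

# hypothetical detection of one vehicle in one frame
record = make_detection_record(np.zeros((3, 3, 256), dtype=np.float32),
                               x=0.5, y=0.6, w=0.2, h=0.15)
record["class"][0] += 1  # this frame classified the target as a car
```

Each tracked vehicle keeps one such record under its ID, updated frame by frame as described in step S2.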
Step S2: compare the detection information of the current frame with the detection information of the previous frame by feature-map similarity comparison and position comparison; mark vehicles with similar comparison results as the same vehicle, realizing the vehicle tracking function:
S2.1) compare the vehicle detection information of the previous frame with that of the current frame one by one; targets whose feature-map similarity and positional distance both satisfy the set thresholds are regarded as the same vehicle target. The ID of the matching vehicle in the current frame is changed to the ID of the corresponding vehicle in the previous frame, and the record is updated with the current-frame detection information, until the vehicle target no longer appears in the video frames; this realizes target tracking, where the same vehicle corresponds to the same ID across frames and the stored detection information is that of the last frame in which the vehicle was detected. If a target in the current frame did not appear in the previous frame, it is regarded as a vehicle newly appearing in the video, its current-frame ID is taken as the vehicle's ID, and a new round of tracking begins;
S2.2) accumulate the class of the current frame with the classes of all historical frames belonging to the same vehicle target to obtain the final class of the vehicle; the class decision method is expressed as:
Cls=argmax (cls1, cls2, cls3) (3)
In the formula, argmax returns the index of the maximum value;
Step S3: when a tracked vehicle passes through the polygonal region of interest marked in the video in advance, input the normalized sub-feature map corresponding to the vehicle target into two deep-learning sub-networks, perform model recognition and color recognition respectively, and save all feature information, realizing the toll-lane vehicle feature recognition function:
S3.1) judge the position information of all vehicle targets in the current frame; if a target lies inside the region of interest, extract the sub-feature map corresponding to that target. The test for whether a target lies in the region of interest is: traverse the vertices of the polygon of the region of interest in order; if the sum of the areas of the sub-triangles formed by the vehicle center point and each pair of adjacent polygon vertices equals the area of the polygon, the point lies inside the region of interest, otherwise it lies outside. The discriminant is expressed as:
area' = Area(P, R1, R2) + Area(P, R2, R3) + ... + Area(P, Rn, R1) (4)
In the formula, Area(·) computes the area of a triangle, P is the center point of the target, Ri is the i-th vertex of the polygon in clockwise order, and n is the number of polygon vertices; if area = area', the target lies inside the polygon;
S3.2) pass the obtained sub-feature map through two convolutional neural networks to obtain the color information and the vehicle model information respectively; the two convolutional neural networks are trained on collected toll-lane video and are used to recognize vehicle color and vehicle model;
S3.3) save the color and model information under the corresponding vehicle ID; if a target with the same ID appears in any later frame, it is not saved again, completing the recognition of toll-lane vehicle features.
Preferably, the deep learning method used in step S1 is specifically:
Vehicle detection is performed on the frame images of the toll-lane video with the Single Shot MultiBox Detector algorithm; the input is a color image of size 300x300, and the convolutional neural network structure is specifically:
(1) the feature-map scales used for detection are 10x10, 5x5, 3x3 and 1x1;
(2) the detection convolution kernels of sizes 5x5, 3x3 and 1x1 are applied in parallel; the three kernel scales are zero-padded (padding) so that the feature maps after convolution have the same size, with padding scales of 2, 1, and 0 respectively;
(3) the loss function used in training is divided into a position loss and a classification loss, expressed as:
Loss = loss_loc * 0.8 + loss_class * 0.2 (5)
In the formula, loss_loc = smoothL1(·) is the position regression loss, with loss function SmoothL1; loss_class = softmax(·) is the classification loss, with loss function SoftMax.
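Formula (5) can be sketched with NumPy as below. This is an illustrative sketch, not the training code of the patent: the function names are hypothetical, SmoothL1 is the standard piecewise definition, and the classification loss is taken as softmax cross-entropy, which is the usual reading of "softmax loss".

```python
import numpy as np

def smooth_l1(pred, target):
    # SmoothL1: quadratic below 1, linear above (the loss_loc of formula (5))
    d = np.abs(pred - target)
    return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum()

def softmax_ce(logits, label):
    # loss_class: softmax cross-entropy over the 3 vehicle classes
    z = logits - logits.max()                 # stabilized log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def detection_loss(box_pred, box_true, logits, label):
    # Loss = loss_loc * 0.8 + loss_class * 0.2, per formula (5)
    return 0.8 * smooth_l1(box_pred, box_true) + 0.2 * softmax_ce(logits, label)
```

The 0.8/0.2 weighting reflects the description's point that with only 3 classes, down-weighting the easier classification task in favor of box regression does not hurt accuracy.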
Further, the feature-map normalized pooling method used in step S1 is specifically:
First, the feature map of scale 38x38 is chosen as the reference feature map, and the vehicle size is mapped onto the reference feature map to obtain a sub-feature map. The mapped sub-feature map is pooled with a variable pooling stride and pooling kernel so that the feature map output after pooling is always of size 3x3; the pooling stride and kernel size are uniquely determined by the size of the sub-feature map, and the determination method can be expressed as:
sw = [W/3], sh = [H/3] (6)
In the formula, W and H are the width and height of the sub-feature map; the horizontal pooling stride and pooling kernel width are equal, both sw; the vertical pooling stride and pooling kernel height are equal, both sh; [·] denotes rounding a real number down.
Further, in S2.1), the comparison used for target tracking includes feature-map similarity comparison and positional-distance comparison. The feature-map similarity comparison computes the distance between feature histograms, where a smaller distance means higher similarity; the positional distance is the Euclidean distance, expressed as:
d = sqrt((x1 - x2)^2 + (y1 - y2)^2) (2)
In the formula, x1, y1 and x2, y2 are the abscissa and ordinate of the vehicle center point in the current frame and in the previous frame, respectively.
The beneficial effects of the invention are:
In order to recognize the important features of vehicles passing through the toll lane, the invention combines a deep-learning-based object detection algorithm with a feature-map-comparison-based target tracking algorithm, obtains the sub-feature map of each vehicle, and uses deep learning algorithms to recognize features such as its color and class;
The invention improves the vehicle detection algorithm: in the SSD object detection algorithm, the feature maps with very low utilization are removed, saving detection time and improving the real-time performance of the system; and the training loss and convolution kernel scales are modified for this task, improving the accuracy of the system.
The method of the invention balances system real-time performance and accuracy: it cuts off the redundant parts of the object detector while improving vehicle detection accuracy by modifying the network structure and loss function; it realizes a tracking algorithm for the detected vehicles by combining feature-map histogram comparison with position comparison, with good robustness; finally, the system recognizes features such as color and model for each vehicle with a unique ID, and the input used is no longer an image but the sub-feature map produced by the detection network, improving the real-time performance of the system and the utilization efficiency of network parameters, so that the system has good real-time performance and validity.
Detailed description of the invention
Fig. 1 is the system framework diagram of the invention.
Fig. 2 illustrates the deep learning method used in step S1 of the invention, i.e. the SSD network structure.
Fig. 3 illustrates the structure of the SSD detection convolution kernels in step S1 of the invention.
Fig. 4 illustrates the normalized pooling algorithm of the invention.
Fig. 5 illustrates the method of judging the relationship between a point and a polygon in the invention: in (a) the point is outside the polygon; in (b) the point is inside the polygon.
Fig. 6 illustrates the convolutional neural networks for color and model recognition in the invention.
Fig. 7 shows the effect of each step of the invention: (a) input image; (b) detection result; (c) tracking result; (d) image corresponding to the vehicle sub-feature map; (e) vehicle feature recognition result.
Specific embodiments
The invention proposes an implementation method for a video-based toll-lane vehicle feature recognition system that can effectively realize moving-vehicle detection and tracking, avoid repeated feature recognition of the same vehicle, and thereby improve the accuracy and real-time performance of vehicle feature recognition.
The invention is further explained below with examples and with reference to the accompanying drawings.
The technical solution of the invention is an implementation method of a video-based toll-lane vehicle feature recognition system. As shown in Fig. 1, it comprises three parts: vehicle detection, vehicle tracking, and vehicle feature recognition. The steps are as follows:
Step S1: for each frame, as shown in Fig. 7(a), perform vehicle detection based on the deep learning method, save the feature map of each detected vehicle after normalized pooling, and simultaneously save the position information of each vehicle:
S1.1) capture images from part of the collected toll-lane video and annotate the vehicles; the obtained image data is used to train the convolutional neural network, which divides the detected vehicles into 3 classes: bus, truck, and car;
S1.2) detect each frame with the trained convolutional neural network, as shown in Fig. 7(b); the detection result is the position and class of each vehicle, where the position is expressed as the center-point coordinates plus width and height, and the class is one of the 3 categories;
S1.3) apply normalized pooling to the feature map of the video image at each detected vehicle to obtain a sub-feature map, and save the sub-feature map, vehicle position, and vehicle class as detection information, indexed by a per-vehicle ID; the saved information is expressed as:
Content (id)={ featuremap, loc, class } (1)
In the formula, featuremap is the feature map, a vector of dimension 3x3x256; loc = (x, y, w, h) is the position information, whose four components are the center-point abscissa, center-point ordinate, vehicle width, and vehicle height, each normalized to between 0 and 1; class = (cls1, cls2, cls3) is the vehicle class, whose three components are the total numbers of frames, so far, in which the target has been identified as a car, a bus, and a truck, respectively.
Step S2: order the frames with detected vehicles according to the video, and perform vehicle tracking between the current frame and the previous frame; compare the detection information of the current frame with that of the previous frame by feature-map similarity comparison and position comparison, as shown in Fig. 7(c); mark vehicles with similar comparison results as the same vehicle, realizing the vehicle tracking function;
S2.1) compare the vehicle detection information of the previous frame with that of the current frame one by one; targets whose feature-map similarity and positional distance both satisfy the set thresholds are regarded as the same vehicle target. The ID of the matching vehicle in the current frame is changed to the ID of the corresponding vehicle in the previous frame, and the record is updated with the current-frame detection information, until the vehicle target no longer appears in the video frames; this realizes target tracking, where the same vehicle corresponds to the same ID across frames and the stored detection information is that of the last frame in which the vehicle was detected. If a target in the current frame did not appear in the previous frame, it is regarded as a vehicle newly appearing in the video, its current-frame ID is taken as the vehicle's ID, and a new round of tracking begins;
The feature-map similarity comparison computes the distance between feature histograms, where a smaller distance means higher similarity. The feature-histogram statistics are similar to those of a color histogram; the difference is that the counted channel components change from the three color channels to the feature values of the 256 channels. The positional distance is the Euclidean distance between center points, expressed as:
d = sqrt((x1 - x2)^2 + (y1 - y2)^2) (2)
In the formula, x1, y1 and x2, y2 are the abscissa and ordinate of the vehicle center point in the current frame and in the previous frame, respectively. After the comparison results are obtained, targets whose similarity comparison and position comparison both satisfy the set thresholds are regarded as the same vehicle target;
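The 256-channel feature histogram described above can be sketched as follows. The exact binning scheme is not specified in the text, so this is one plausible interpretation, clearly an assumption: each of the 256 channels contributes one bin, obtained by summing its 3x3 spatial activations, analogous to a color histogram but over feature channels rather than color values.

```python
import numpy as np

def feature_histogram(featuremap):
    """256-bin histogram of a 3x3x256 sub-feature map.

    Interpretation (an assumption, not stated in the patent): one bin
    per feature channel, the sum of that channel's 3x3 activations,
    L2-normalized so histograms of different vehicles are comparable.
    """
    assert featuremap.shape == (3, 3, 256)
    hist = featuremap.sum(axis=(0, 1))           # shape (256,)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

The resulting vectors are then compared with the Euclidean distance, as in the positional comparison of formula (2).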
S2.2) accumulate the class of the current frame with the classes of all historical frames belonging to the same target to obtain the final class of the vehicle; the class decision method is expressed as:
Cls=argmax (cls1, cls2, cls3) (3)
In the formula, argmax returns the index of the maximum value, and cls1, cls2, cls3 are the three components of the class field.
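The per-frame voting of S2.2) and formula (3) can be sketched in a few lines. This is an illustrative sketch; the function names and the (car, bus, truck) index order are assumptions consistent with the class field defined in S1.3).

```python
def update_class_counts(counts, detected_class):
    # counts = [cls1, cls2, cls3]: frames seen as (car, bus, truck)
    counts[detected_class] += 1
    return counts

def final_class(counts):
    # formula (3): Cls = argmax(cls1, cls2, cls3)
    return max(range(len(counts)), key=lambda i: counts[i])

# a vehicle classified as "car" in 2 frames and "truck" in 5 frames
counts = [0, 0, 0]
for c in [0, 2, 2, 0, 2, 2, 2]:
    update_class_counts(counts, c)
label = ("car", "bus", "truck")[final_class(counts)]  # "truck"
```

This makes the final class robust to occasional per-frame misclassifications, since a single wrong frame cannot outvote the majority.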
Step S3: when a tracked vehicle passes through the manually pre-marked polygonal region of interest, input the normalized sub-feature map of the vehicle target (step S1.3) into two deep-learning sub-networks, perform model recognition and color recognition respectively, and save all feature information, realizing the toll-lane vehicle feature recognition function:
S3.1) judge the position information of all vehicle targets in the current frame; if a target lies inside the region of interest, extract the sub-feature map corresponding to that target; the original image corresponding to such a sub-feature map is shown in Fig. 7(d). The sub-feature map can be regarded as a feature representation of the vehicle information, similar to a color histogram. As shown in Fig. 5, the test for whether a target lies in the region of interest computes sub-triangle areas: traverse the vertices of the polygon of the region of interest in order; if the sum of the areas of the triangles formed by the vehicle center point and each pair of adjacent polygon vertices equals the area of the polygon, the point lies inside the region of interest, otherwise it lies outside. The discriminant is expressed as:
area' = Area(P, R1, R2) + Area(P, R2, R3) + ... + Area(P, Rn, R1) (4)
In the formula, Area(·) computes the area of a triangle, P is the center point of the target, Ri is the i-th vertex of the polygon in clockwise order, and n is the number of polygon vertices. If area = area', the target lies inside the polygon;
S3.2) as shown in Fig. 7(e), pass the obtained sub-feature map through two convolutional neural networks to obtain the color information and the vehicle model information respectively; both networks have been trained, with the collected toll-lane video as training data. The training method is similar to S1.1); the difference is that the input of these two networks is not an image but a feature map. The network structure is shown in Fig. 6. The colors comprise 8 classes: black, white, red, yellow, blue, green, brown, and silver; the vehicle models comprise 76 classes: BMW, Volkswagen, etc.;
S3.3) save all feature information; each vehicle has a unique ID when saved, and if a target with the same ID appears in any later frame it is not saved again.
Further, in the above scheme, the deep learning algorithm used in step S1 is specifically:
Vehicle detection is performed on the toll-lane video frame images with the Single Shot MultiBox Detector (SSD) algorithm. The input of SSD is a color image of size 300x300. As shown in Fig. 2, for the toll-lane vehicle detection problem, the SSD network structure is modified as follows:
(1) The feature-map scales used for detection are 10x10, 5x5, 3x3 and 1x1; the originally used feature maps of sizes 19x19 and 38x38 are deleted. Toll-lane vehicle feature recognition only needs to detect the vehicles passing through the toll lane, and these vehicles are close to the camera, so their scale is generally large. The 19x19 and 38x38 feature maps contribute to detecting small objects, so they can be deleted in the vehicle detection task, improving the real-time performance of the model.
(2) As shown in Fig. 3, the detection convolution kernels are changed to 5x5, 3x3 and 1x1 kernels in parallel; the three kernel scales receive different amounts of padding so that the feature maps after convolution have the same size and can be fused. The padding for the three scales is 2, 1, and 0 respectively. This structure is similar to the Inception structure; its purpose is to better extract feature information over multiple receptive fields and improve the accuracy of the network.
(3) The loss function used in training is divided into a position loss and a classification loss, and the weight ratio of the position loss is increased to make the positions of the detection results more accurate. The redefined loss function can be expressed as:
Loss = loss_loc * 0.8 + loss_class * 0.2 (5)
In the formula, loss_loc = smoothL1(·) is the position regression loss, with loss function SmoothL1; loss_class = softmax(·) is the classification loss, with loss function SoftMax.
Because there are only 3 target classes, the classification task is simpler than the position regression task, so appropriately lowering the ratio of the classification loss does not reduce detection accuracy; on the contrary, increasing the ratio of the position loss improves detection accuracy.
The feature-map normalized pooling method used in step S1 is specifically:
First, the feature map of scale 38x38 is chosen as the reference feature map, and the size of the target, i.e. the vehicle size, is mapped onto the reference feature map to obtain a sub-feature map. This scale is chosen because: (1) the semantic information of this feature map is low-level, which effectively distinguishes vehicles of the same category; (2) the largest feature map used for detection has size 10x10, so using the 38x38 feature map as the reference keeps the sub-feature map of every target no smaller than 3x3. The mapped sub-feature map is pooled with a variable pooling stride and pooling kernel so that the feature map output after pooling is always of size 3x3. As shown in Fig. 4, the pooling stride and kernel size are uniquely determined by the size of the sub-feature map; the determination method can be expressed as:
sw = [W/3], sh = [H/3] (6)
In the formula, W and H are the width and height of the sub-feature map; the horizontal pooling stride and pooling kernel width are equal, both sw; the vertical pooling stride and pooling kernel height are equal, both sh; [·] denotes rounding a real number down.
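The normalized pooling of formula (6) can be sketched as below. This is a sketch under stated assumptions: the patent does not name the pooling operator, so max pooling is used here, and any trailing rows or columns beyond the three stride-aligned windows are simply dropped.

```python
import numpy as np

def normalized_pool(sub_map):
    """Pool an arbitrary WxHxC sub-feature map down to a fixed 3x3xC.

    Stride equals kernel size per formula (6): sw = floor(W/3),
    sh = floor(H/3). Max pooling is an assumption; the patent only
    specifies that the output size is unified to 3x3.
    """
    W, H, C = sub_map.shape
    assert W >= 3 and H >= 3          # guaranteed by the 38x38 reference map
    sw, sh = W // 3, H // 3
    out = np.empty((3, 3, C), dtype=sub_map.dtype)
    for i in range(3):
        for j in range(3):
            window = sub_map[i * sw:i * sw + sw, j * sh:j * sh + sh, :]
            out[i, j] = window.max(axis=(0, 1))
    return out
```

Because stride and kernel shrink with the sub-feature map, every detected vehicle, large or small, yields the same 3x3x256 tensor expected by the recognition sub-networks.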
Through the above implementation, vehicle recognition on lane video is realized.

Claims (4)

1. An implementation method of a video-based toll-lane vehicle feature recognition system, characterized by comprising three steps: vehicle detection, vehicle tracking, and vehicle feature recognition:
Step S1: perform vehicle detection on the toll-lane video with a deep learning method, save the feature map of each detected vehicle after normalized pooling, and simultaneously save the position and class information of each vehicle:
S1.1) train a convolutional neural network for vehicle detection; the detected vehicles are divided into 3 classes: bus, truck, and car;
S1.2) detect each frame of the toll-lane video with the convolutional neural network; the detection output includes the position and class of each vehicle, where the position is the center-point coordinates plus the width and height of the vehicle, and the class is one of the 3 categories;
S1.3) apply normalized pooling to the feature map of the video image at each detected vehicle to obtain a sub-feature map, and save the sub-feature map, vehicle position, and vehicle class as detection information, indexed by a per-vehicle ID; the saved information is expressed as:
Content (id)={ featuremap, loc, class } (1)
In formula, featuremap indicates characteristic pattern, is the vector of 3x3x256 dimension;Loc=(x, y, w, h) indicates position letter Breath, four respectively indicate central point abscissa, central point ordinate, vehicle width and height of car, and value is between 0 to 1; Class=(cls1, cls2, cls3) indicates class of vehicle, and three respectively represent by present frame, and target is identified as car The totalframes of totalframes, the totalframes of motor bus and truck;
Step S2, the detection information of the detection information of present frame and former frame is subjected to characteristic pattern similarity comparison and position pair Than being same vehicle by the marking of cars similar in comparing result, realizing vehicle tracking function:
S2.1) vehicle detecting information of former frame and the vehicle detecting information of present frame are compared one by one, by characteristic pattern phase It is considered as same vehicle target like the target that degree and positional distance meet given threshold, by vehicle in the present frame with same target Corresponding ID is changed to the corresponding ID of vehicle in previous frame, and is updated using the correspondence detection information of present frame, until same vehicle Target is no longer present in video frame, realizes target following, and same at this time vehicle correspond to the same ID in multiple image, inspection Measurement information is finally to detect the corresponding detection information of this vehicle video frame, if the target in present frame is not in former frame Occur, be then considered as the new vehicle occurred in video, present frame correspond into the ID that vehicle ID is considered as the vehicle, a progress new round with Track;
S2.2) classification of the current class for belonging to same vehicle target and all historical frames is weighted and averaged, obtains vehicle Final classification, the average method of classification indicates are as follows:
Cls=argmax (cls1, cls2, cls3) (3)
In formula, argmax expression takes index value to maximum value;
Step S3, when the vehicle being tracked to is by the prior polygon area-of-interest marked in video, by vehicle The corresponding normalization subcharacter figure of target is input to two deep learning sub-networks, carries out vehicle cab recognition and color identification respectively, And save all characteristic informations, realize the lane in which the drivers should pay fees vehicle feature recognition function:
S3.1) location information of all vehicle targets of present frame is judged, if a certain target in area-of-interest, The corresponding subcharacter figure of the target is extracted, for judging whether target is in the mode of area-of-interest are as follows: successively traversal sense is emerging The vertex of the polygon in interesting region, if the sub- triangle area that constitutes of all vertex of area-of-interest and vehicle center point with Area of a polygon is equal, then the point is located in area-of-interest, is otherwise located at interested outer, discriminate expression are as follows:
In formula, Area expression quadratures to triangle, and P indicates the central point of target, RiIndicate the i-th of polygon clock-wise order A, n indicates the number of the point of polygon, and if there is area=area ', then target is located at the polygonal internal;
S3.2 obtained subcharacter figure) is respectively obtained into colouring information and vehicle classification information by two convolutional neural networks, Described two convolutional neural networks use the lane in which the drivers should pay fees video collected as training data, for identification vehicle color and vehicle Information;
S3.3) by color, vehicle model information corresponds to vehicle ID and is saved, if there is the mesh of same ID in all frames later It will not repeat to save when mark, complete the identification to the lane in which the drivers should pay fees vehicle characteristics.
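The region-of-interest test of step S3.1 compares the polygon's area with the summed areas of the triangles (P, R_i, R_{i+1}). A self-contained Python sketch follows; the function names and the floating-point tolerance are illustrative choices of ours, and the polygon area is computed with the shoelace formula.

```python
def tri_area(a, b, c):
    """Area of triangle abc via the cross-product formula."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def in_region_of_interest(p, polygon, eps=1e-9):
    """S3.1 discriminant: p is inside the polygon iff the triangles formed by
    p and each polygon edge tile the polygon exactly, i.e. their total area
    equals the polygon's own area."""
    n = len(polygon)
    # polygon area by the shoelace formula
    area = abs(sum(polygon[i][0] * polygon[(i + 1) % n][1]
                   - polygon[(i + 1) % n][0] * polygon[i][1]
                   for i in range(n))) / 2.0
    # sum of sub-triangle areas (P, R_i, R_{i+1}), with R_{n+1} = R_1
    area_p = sum(tri_area(p, polygon[i], polygon[(i + 1) % n]) for i in range(n))
    return abs(area - area_p) < eps
```

For a point outside the polygon, the triangles overlap and overshoot, so the summed area strictly exceeds the polygon area and the test fails.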
2. The method for implementing a video-based toll-lane vehicle feature recognition system according to claim 1, characterized in that the deep-learning method used in step S1 is specifically:
perform vehicle detection on the frame images of the toll-lane video with the Single Shot MultiBox Detector algorithm; the input is a color image of size 300×300, and the convolutional neural network structure is specifically:
(1) the feature-map scales used for detection are 10×10, 5×5, 3×3 and 1×1;
(2) the convolution kernels used for detection are 5×5, 3×3 and 1×1 kernels in parallel; the kernels of the three scales are padded so that the feature-map sizes after convolution are identical, the corresponding zero-padding scales of the three kernels being 2, 1 and 0 respectively;
(3) the loss function used in training is divided into a position loss and a classification loss, expressed as:
Loss = loss_loc × 0.8 + loss_class × 0.2 (5)
where loss_loc = smoothL1(·) denotes the position-regression loss, with loss function Smooth L1; loss_class = softmax(·) denotes the classification loss, with loss function SoftMax.
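For illustration, equation (5) can be evaluated numerically. The single-sample NumPy sketch below is our own (function names invented): it combines a Smooth-L1 position term and a softmax cross-entropy classification term with the 0.8 and 0.2 weights.

```python
import numpy as np

def smooth_l1(pred, target):
    """Smooth-L1 (Huber) loss, averaged over the box coordinates."""
    d = np.abs(pred - target)
    return float(np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).mean())

def softmax_cross_entropy(logits, label):
    """Softmax classification loss for a single sample."""
    z = logits - logits.max()            # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return float(-np.log(p[label]))

def detection_loss(pred_loc, gt_loc, logits, label):
    """Equation (5): Loss = loss_loc * 0.8 + loss_class * 0.2."""
    return smooth_l1(pred_loc, gt_loc) * 0.8 + softmax_cross_entropy(logits, label) * 0.2
```

With a perfect box prediction and uniform class logits over the 3 classes, the loss reduces to 0.2·ln 3, since the position term vanishes.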
3. The method for implementing a video-based toll-lane vehicle feature recognition system according to claim 1, characterized in that the feature-map normalization pooling method used in step S1 is specifically:
first, choose the feature map of 38×38 scale as the reference feature map, map the vehicle size onto the reference feature map to obtain a sub-feature map, and pool the mapped sub-feature map with a variable pooling stride and pooling kernel, guaranteeing that the feature map output after pooling has a uniform size of 3×3; the pooling stride and kernel size are uniquely determined by the size of the sub-feature map, the determination method being expressed as:
s_w = [W/3], s_h = [H/3]
where W and H are the width and height of the sub-feature map; the horizontal pooling stride and the pooling-kernel width are equal, both being s_w; the vertical pooling stride and the pooling-kernel height are equal, both being s_h; [·] denotes rounding a real number down.
4. The method for implementing a video-based toll-lane vehicle feature recognition system according to claim 1, characterized in that in S2.1), the content compared for target tracking includes the feature-map similarity comparison and the positional-distance comparison; the feature-map similarity is measured by computing the distance between feature histograms, a smaller distance meaning a higher similarity; the positional distance is computed as the Euclidean distance, expressed as:
d = √((x1 − x2)² + (y1 − y2)²)
where x1, y1 and x2, y2 are the horizontal and vertical coordinates of the vehicle center point in the current frame and in the previous frame, respectively.
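A sketch of this matching rule (feature-histogram Euclidean distance plus center-point Euclidean distance, both under thresholds) might look as follows. The bin count and threshold values are illustrative assumptions of ours, since the claims only require "set thresholds"; the detection dictionaries mirror the Content(id) fields of equation (1).

```python
import numpy as np

def hist_distance(fmap_a, fmap_b, bins=16):
    """Claim-4 feature-map similarity: Euclidean distance between the
    feature histograms of two sub-feature maps (smaller = more similar)."""
    lo = min(fmap_a.min(), fmap_b.min())
    hi = max(fmap_a.max(), fmap_b.max()) + 1e-9
    ha, _ = np.histogram(fmap_a, bins=bins, range=(lo, hi), density=True)
    hb, _ = np.histogram(fmap_b, bins=bins, range=(lo, hi), density=True)
    return float(np.linalg.norm(ha - hb))

def center_distance(loc_a, loc_b):
    """Euclidean distance between vehicle center points; loc = (x, y, w, h)."""
    return float(np.hypot(loc_a[0] - loc_b[0], loc_a[1] - loc_b[1]))

def same_vehicle(det_a, det_b, hist_thresh=0.5, dist_thresh=0.1):
    """Match a previous-frame and a current-frame detection as the same
    vehicle when both thresholds (hypothetical values) are satisfied."""
    return (hist_distance(det_a["featuremap"], det_b["featuremap"]) < hist_thresh
            and center_distance(det_a["loc"], det_b["loc"]) < dist_thresh)
```

With normalized coordinates in [0, 1], the distance threshold bounds how far a vehicle may move between consecutive frames while still being tracked under the same ID.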
CN201810705071.XA 2018-07-02 2018-07-02 Method for realizing video-based toll lane vehicle feature recognition system Active CN109190444B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810705071.XA CN109190444B (en) 2018-07-02 2018-07-02 Method for realizing video-based toll lane vehicle feature recognition system


Publications (2)

Publication Number Publication Date
CN109190444A true CN109190444A (en) 2019-01-11
CN109190444B CN109190444B (en) 2021-05-18

Family

ID=64948776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810705071.XA Active CN109190444B (en) 2018-07-02 2018-07-02 Method for realizing video-based toll lane vehicle feature recognition system

Country Status (1)

Country Link
CN (1) CN109190444B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2532075A (en) * 2014-11-10 2016-05-11 Lego As System and method for toy recognition and detection based on convolutional neural networks
CN105868700A (en) * 2016-03-25 2016-08-17 哈尔滨工业大学深圳研究生院 Vehicle type recognition and tracking method and system based on monitoring video
US20180181822A1 (en) * 2016-12-27 2018-06-28 Automotive Research & Testing Center Hierarchical system for detecting object with parallel architecture and hierarchical method thereof
CN107066953A * 2017-03-22 2017-08-18 Beijing University of Posts and Telecommunications Vehicle model recognition, tracking and rectification method and device for surveillance video
CN107133974A * 2017-06-02 2017-09-05 Nanjing University Vehicle type classification method combining Gaussian background modeling with recurrent neural networks
CN108171112A (en) * 2017-12-01 2018-06-15 西安电子科技大学 Vehicle identification and tracking based on convolutional neural networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HONG QIAO et al.: "Deep Fusion Feature for Vehicle Classification and Recognition", 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference *
CAI Yingfeng et al.: "Transfer learning algorithm for visual vehicle recognition", Journal of Southeast University (Natural Science Edition) *
CHEN Linkai et al.: "Video detection method for moving vehicles based on convolutional neural networks", Program and Proceedings of the 2016 National Conference on Communication Software *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886312A * 2019-01-28 2019-06-14 Tongji University Bridge vehicle wheel detection method based on a multilayer feature fusion neural network model
CN109886312B (en) * 2019-01-28 2023-06-06 同济大学 Bridge vehicle wheel detection method based on multilayer feature fusion neural network model
CN109902733A * 2019-02-22 2019-06-18 Beijing Sankuai Online Technology Co., Ltd. Method, apparatus and storage medium for entering item information
CN110223279A * 2019-05-31 2019-09-10 Shanghai SenseTime Intelligent Technology Co., Ltd. Image processing method and device, and electronic equipment
CN110223279B (en) * 2019-05-31 2021-10-08 上海商汤智能科技有限公司 Image processing method and device and electronic equipment
WO2021008018A1 (en) * 2019-07-18 2021-01-21 平安科技(深圳)有限公司 Vehicle identification method and device employing artificial intelligence, and program and storage medium
CN110517293A * 2019-08-29 2019-11-29 BOE Technology Group Co., Ltd. Target tracking method, device, system and computer-readable storage medium
US11393103B2 (en) * 2019-08-29 2022-07-19 Boe Technology Group Co., Ltd. Target tracking method, device, system and non-transitory computer readable medium
CN110569785B (en) * 2019-09-05 2023-07-11 杭州智爱时刻科技有限公司 Face recognition method integrating tracking technology
CN110555867B (en) * 2019-09-05 2023-07-07 杭州智爱时刻科技有限公司 Multi-target object tracking method integrating object capturing and identifying technology
CN110569785A (en) * 2019-09-05 2019-12-13 杭州立宸科技有限公司 Face recognition method based on fusion tracking technology
CN110555867A (en) * 2019-09-05 2019-12-10 杭州立宸科技有限公司 Multi-target object tracking method fusing object capturing and identifying technology
CN111523419A (en) * 2020-04-13 2020-08-11 北京巨视科技有限公司 Video detection method and device for motor vehicle exhaust emission
CN112668497A (en) * 2020-12-30 2021-04-16 南京佑驾科技有限公司 Vehicle accurate positioning and identification method and system
CN113033449A (en) * 2021-04-02 2021-06-25 上海国际汽车城(集团)有限公司 Vehicle detection and marking method and system and electronic equipment
CN113371035B (en) * 2021-08-16 2021-11-23 山东矩阵软件工程股份有限公司 Train information identification method and system
CN113371035A (en) * 2021-08-16 2021-09-10 山东矩阵软件工程股份有限公司 Train information identification method and system
US20230112822A1 (en) * 2021-10-08 2023-04-13 Realtek Semiconductor Corporation Character recognition method, character recognition device and non-transitory computer readable medium
US11922710B2 (en) * 2021-10-08 2024-03-05 Realtek Semiconductor Corporation Character recognition method, character recognition device and non-transitory computer readable medium

Also Published As

Publication number Publication date
CN109190444B (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN109190444A (en) A kind of implementation method of the lane in which the drivers should pay fees vehicle feature recognition system based on video
CN111444821B (en) Automatic identification method for urban road signs
CN106935035B (en) Parking offense vehicle real-time detection method based on SSD neural network
Li et al. Traffic light recognition for complex scene with fusion detections
CN103258213B (en) A kind of for the dynamic vehicle model recognizing method in intelligent transportation system
CN109447033A (en) Vehicle front obstacle detection method based on YOLO
CN105513349B (en) Mountainous area highway vehicular events detection method based on double-visual angle study
CN109508715A (en) A kind of License Plate and recognition methods based on deep learning
CN109902806A (en) Method is determined based on the noise image object boundary frame of convolutional neural networks
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN105354568A (en) Convolutional neural network based vehicle logo identification method
CN105184271A (en) Automatic vehicle detection method based on deep learning
CN105809121A (en) Multi-characteristic synergic traffic sign detection and identification method
CN107315998B (en) Vehicle class division method and system based on lane line
CN104298969A (en) Crowd scale statistical method based on color and HAAR feature fusion
CN104715244A (en) Multi-viewing-angle face detection method based on skin color segmentation and machine learning
CN104183142A (en) Traffic flow statistics method based on image visual processing technology
He et al. A robust method for wheatear detection using UAV in natural scenes
CN103679214B (en) Vehicle checking method based on online Class area estimation and multiple features Decision fusion
CN103679205A (en) Preceding car detection method based on shadow hypothesis and layered HOG (histogram of oriented gradient) symmetric characteristic verification
CN107273852A (en) Escalator floor plates object and passenger behavior detection algorithm based on machine vision
CN106886757B (en) A kind of multiclass traffic lights detection method and system based on prior probability image
CN111523415A (en) Image-based two-passenger one-dangerous vehicle detection method and device
CN110069982A (en) A kind of automatic identifying method of vehicular traffic and pedestrian
CN111915583A (en) Vehicle and pedestrian detection method based on vehicle-mounted thermal infrared imager in complex scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant