CN103258432B - Traffic accident automatic identification processing method and system based on videos - Google Patents

Traffic accident automatic identification processing method and system based on videos

Info

Publication number
CN103258432B
Authority
CN
China
Prior art keywords
vehicle
video
traffic accident
traffic
dimensional
Prior art date
Legal status
Expired - Fee Related
Application number
CN201310139545.6A
Other languages
Chinese (zh)
Other versions
CN103258432A (en)
Inventor
王拓
周斌
向宸薇
华莉琴
Current Assignee
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN201310139545.6A
Publication of CN103258432A
Application granted
Publication of CN103258432B


Abstract

The invention discloses a video-based method and system for automatically identifying and handling traffic accidents. The method comprises: building a three-dimensional feature library for each vehicle type; acquiring road video image sequences; separating foreground vehicles based on background modeling; judging static targets from target centroid displacement; recognizing the vehicle type with a vehicle outer-contour recognition algorithm based on three-dimensional modeling and judging whether an accident has occurred from contour deformation; and, where the contour check is inconclusive, extracting improved SIFT feature points of the vehicle with an improved SIFT feature recognition algorithm based on three-dimensional modeling and judging whether an accident has occurred from the comparison of those feature points. The method and system judge whether vehicles on the current road are travelling safely by analysing the images collected by roadside cameras, collect accident-scene information as soon as a traffic accident occurs, and transfer it to the command and monitoring centre, whose staff can then respond promptly and effectively by reviewing the video. The method and system thus play a significant role in improving the overall intelligent transportation system.

Description

Video-based automatic traffic accident identification and processing method and system
Technical field
The present invention relates to the field of intelligent-traffic video monitoring and video image analysis, and in particular to a video-based method and system for automatically identifying and handling traffic accidents.
Background technology
In recent years, with the rapid development of the national economy and continuing social progress, road networks have extended in every direction and the number of automobiles has grown day by day; the traffic problems that follow, such as congestion and accidents, have become increasingly serious. How to monitor, schedule and control traffic scenes in real time and build an effective intelligent transportation system has long been a focus of attention at home and abroad, and a problem demanding a prompt solution. Against this background, visual intelligent processing technology based on computer vision and digital image processing provides a more real-time, accurate and efficient way to handle traffic accidents, and offers technical support for follow-up work such as accident rescue and the determination of accident responsibility.
At present, there is relatively little domestic research on video-based automatic identification of traffic accidents. Existing research on traffic accident detection mainly covers: 1. the loop-inductor method, which is technically mature and unaffected by weather changes, but estimates accidents mainly from the vehicle running state; 2. the infrared detection method, which obtains more parameters, but infrared sensing is easily disturbed and cannot tell whether the detected object is a vehicle; 3. the ultrasonic detection method, which has good directionality and strong reflection capability, but obtains fewer parameters and is strongly affected by temperature and climate. Current video-based accident identification uses only a simple vehicle-speed parameter, so it can neither identify accidents well nor provide real-time alerts.
Summary of the invention
The object of this invention is to provide a video-based method and system for automatically identifying and handling traffic accidents, so as to detect traffic accidents quickly and effectively, raise real-time alerts, and manage accidents effectively.
To achieve this goal, the present invention adopts the following technical scheme:
A video-based automatic traffic accident identification and processing method is characterized in that it comprises the following steps:
Step S10: building a three-dimensional feature library for each vehicle type;
Step S11: acquiring road video image sequences;
Step S12: separating foreground vehicles based on background modeling;
Step S13: judging static targets based on target centroid displacement;
Step S14: recognizing the vehicle type with a vehicle outer-contour recognition algorithm based on three-dimensional modeling, then judging whether an accident has occurred according to whether the contour has deformed; if an accident is judged to have occurred, raising an alarm, otherwise entering step S15 and judging by improved SIFT feature points;
Step S15: applying an improved SIFT feature recognition algorithm for vehicles based on three-dimensional modeling: extracting the improved SIFT feature points of the vehicle, matching them with the local feature points in the three-dimensional vehicle feature library, and comprehensively judging from the matching result whether an accident has occurred.
A further refinement of the present invention is that, in step S10, building a three-dimensional feature library for each vehicle type comprises:
S101) acquiring images of different vehicle types at different attitude angles in three-dimensional space;
S102) extracting feature points of the vehicle outer contour in each image;
S103) extracting improved SIFT feature points inside the closed curve enclosed by the vehicle outer contour in each image;
S104) using the two categories of feature points extracted above to build the three-dimensional vehicle feature library.
A further refinement of the present invention is that, in step S11, the road video image sequences are acquired by single fixed cameras installed along the road at certain intervals.
A further refinement of the present invention is that, in step S12, foreground vehicle separation comprises:
S121) for colour cameras, adopting a background modeling method based on colour-space clustering of the road model; for black-and-white cameras, adopting a background modeling method based on a mixture-of-Gaussians model;
S122) obtaining the foreground and background images by an image difference method;
S123) improving the vehicle segmentation result with morphological operations.
A further refinement of the present invention is that, in step S13, identifying stationary vehicles based on image difference comprises:
S131) labelling each moving target with 8-neighbourhood connectivity;
S132) computing, for each labelled moving target, its centroid and the size of its connected region;
S133) comparing with the previous frame image to judge whether the target has shifted.
A further refinement of the present invention is that, in step S14, the vehicle outer-contour recognition algorithm based on three-dimensional modeling comprises:
S141) using the Canny operator to obtain the outer contour of the foreground vehicle;
S142) describing the obtained outer contour mathematically to obtain its mathematical expression;
S143) taking the corner points of the contour curve as feature points and matching them with the pre-built three-dimensional vehicle contour library;
S144) identifying the vehicle model and judging whether the vehicle's outer contour has changed;
S145) if the outer contour has changed, raising a traffic accident alarm; otherwise proceeding to the improved SIFT feature judgment of step S15.
A further refinement of the present invention is that, in step S15, the improved SIFT feature recognition algorithm for vehicles based on three-dimensional modeling comprises:
S151) obtaining the improved SIFT features of the vehicle;
S152) matching them with the pre-built three-dimensional feature library.
A video-based automatic traffic accident identification and processing system comprises a communication subsystem, an information storage subsystem, an accident identification subsystem and an accident management subsystem;
wherein the communication subsystem is used for: 1) transmitting the video sequences collected by the cameras to the workstation; 2) transmitting video images containing accidents, after processing and identification by the workstation, to the server of the district command centre; 3) completing communication between the district and city command centres and the server;
the information storage subsystem stores the original video of the period around the accident, the corresponding video identification information, the accident location information, and the related follow-up handling information for each accident grade;
the accident identification subsystem 1) identifies that a traffic accident has occurred and intercepts the video segment containing the accident process; 2) identifies the severity grade according to the relevant specifications or regulations;
the accident management subsystem is used for: 1) traffic accident alarms; 2) traffic accident grade sorting; 3) accident follow-up handling; 4) historical information query and statistics.
A further refinement of the present invention is that the accident identification subsystem comprises a background modeling module, a foreground vehicle extraction module, a vehicle contour identification module and a vehicle improved-SIFT feature identification module;
the background modeling module uses the road video image information and performs background modeling with the colour-space clustering method for colour cameras and with the Gaussian mixture model for black-and-white cameras;
the foreground vehicle extraction module uses a background difference algorithm, taking the difference between the current frame and the background to obtain the foreground vehicles, and uses an eight-connected-region algorithm to label each moving target separately;
the vehicle contour identification module uses the Canny operator to obtain the contour of the foreground vehicle and matches it with the pre-built three-dimensional vehicle contour library, first identifying the vehicle type; if the vehicle contour is obviously deformed, a traffic accident is declared, otherwise the local improved SIFT features are used for discrimination;
the vehicle improved-SIFT feature identification module extracts the improved SIFT features of the vehicle and matches them with the pre-built feature library to determine whether a traffic accident has occurred.
Compared with the prior art, the beneficial effects of the invention are: the system analyses the images collected by the cameras to judge whether the vehicles on the current road are travelling safely, collects accident-scene information as soon as a traffic accident occurs, and transfers it to the command centre. The staff of the command centre can then work promptly and effectively by reviewing the video, which saves manpower, yields more reliable conclusions, and contributes greatly to improving the overall intelligent transportation system.
Brief description of the drawings
Fig. 1 is a flow chart of the video-based automatic traffic accident identification and processing method of the present invention;
Fig. 2 is a detailed flow chart of building a three-dimensional feature library for each vehicle type;
Fig. 3 is a detailed flow chart of foreground vehicle separation based on background modeling;
Fig. 4 is a detailed flow chart of the vehicle outer-contour recognition algorithm based on three-dimensional modeling;
Fig. 5 is a detailed flow chart of the improved SIFT feature recognition algorithm for vehicles based on three-dimensional modeling;
Fig. 6 is a structural diagram of the video-based automatic traffic accident identification and processing system of the present invention;
Fig. 7 is a schematic diagram in the RGB coordinate system.
Detailed description of the embodiments
To make the objects, technical scheme and advantages of the present invention clearer, the video-based automatic traffic accident identification and processing method and system of the present invention are further elaborated below with reference to the drawings and embodiments. The specific embodiments described herein are intended only to explain the present invention, not to limit it.
By analysing road video surveillance images, the video-based automatic traffic accident identification and processing method and system of the present invention automatically identify road traffic accidents and send accident information to the traffic command centre in time, thereby reducing the losses caused by traffic accidents.
The video-based automatic traffic accident identification and processing method of the present invention is described in detail below. As shown in Fig. 1, it comprises building a three-dimensional feature library for each vehicle type, acquiring road video image sequences, foreground vehicle separation based on background modeling, a vehicle outer-contour recognition algorithm based on three-dimensional modeling, and an improved SIFT feature recognition algorithm for vehicles based on three-dimensional modeling. The processing steps are as follows:
S11: acquire real-time road video image sequences from the video camera;
S12: based on background modeling, separate the foreground vehicles captured by the camera from the road background using background subtraction, and extract the foreground vehicles;
S13: adopt an image-difference-based method: first label each moving target with 8-neighbourhood connectivity, then compute the centroid and the size of the connected region of each labelled target, and finally compare with the previous frame image to judge whether the target has shifted, thereby identifying stationary vehicles (a code sketch of this labelling and centroid-shift check follows this list);
S14: use the Canny operator to extract the contour of the stationary vehicle identified in step S13, take the corner points of the contour curve as feature points, and match them with the vehicle contour feature points in the three-dimensional contour feature library; first identify the vehicle type, then judge from contour deformation whether an accident has occurred; if an accident is judged to have occurred, raise an alarm, otherwise enter step S15 and judge by improved SIFT feature points;
S15: extract the improved SIFT feature points of the vehicle, match them with the local feature points in the three-dimensional vehicle feature library, and comprehensively judge from the matching result whether an accident has occurred. If a traffic accident has occurred, the system rapidly sends the accident information and video, graded by severity, to the district or city command centre and raises an alarm, so that the accident can be handled quickly and casualties rescued.
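As a rough illustration of steps S12 and S13, the minimal sketch below (Python with OpenCV; the function names, the minimum-area filter and the displacement threshold are illustrative assumptions, not values taken from the patent) labels foreground blobs with 8-connectivity and flags a blob as stationary when its centroid barely shifts between consecutive frames.

```python
import cv2
import numpy as np

def label_vehicles(foreground_mask, min_area=200):
    """Label foreground blobs with 8-connectivity (step S131) and return the
    centroid and area of each blob large enough to be a vehicle candidate."""
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(
        foreground_mask, connectivity=8)
    blobs = []
    for i in range(1, num):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            blobs.append((tuple(centroids[i]), int(stats[i, cv2.CC_STAT_AREA])))
    return blobs

def is_stationary(prev_centroid, curr_centroid, max_shift=2.0):
    """Step S133: treat a target as static when its centroid moves less than
    max_shift pixels between consecutive frames (illustrative threshold)."""
    dx = curr_centroid[0] - prev_centroid[0]
    dy = curr_centroid[1] - prev_centroid[1]
    return float(np.hypot(dx, dy)) < max_shift
```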
As shown in Fig. 2, the detailed process of building a three-dimensional feature library for each vehicle type comprises the following steps:
First, a three-dimensional image sequence of the vehicle to be modelled is collected to build the three-dimensional vehicle feature library. This is necessary because the attitude of the vehicles involved in any given accident cannot be predicted. In the spatial coordinate system, the vehicle is placed at the coordinate origin. The vehicle is first kept fixed in the plane formed by the Y and Z axes and rotated clockwise or counter-clockwise by a certain angle θ in the plane formed by the X and Y axes; a photo frame of the vehicle at this angle is captured, the two feature sets of the vehicle in this frame (the vehicle contour features and the improved SIFT features) are extracted and stored, and then the vehicle is rotated by θ again and another frame is captured and its two feature sets stored. After one full revolution, the vehicle is rotated by an angle ε about the X axis (i.e., in the plane formed by the Y and Z axes), and the capture-extract-store procedure is repeated until ε has also swept through 360°. In this way, multi-angle, multi-pose images of the vehicle are obtained. It should be noted that the rotation angles θ and ε must not be too large, to avoid missing feature points between poses; in this patent both θ and ε are taken as 11°.
After the multi-angle images of the vehicle are obtained, the vehicle contour is extracted and the corner points of the contour curve are taken as the contour feature points; at the same time the improved SIFT features inside the contour curve are obtained. Each obtained feature point is described and output as a feature vector. When the features are obtained, the angle information of the pose must be attached to each feature so that the feature information can later be combined.
After feature extraction is completed, the features of the images captured from all angles are screened and combined to build the three-dimensional feature model, and the screened and combined feature vectors are stored in the database.
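The library-building loop of Fig. 2 can be pictured roughly as below. This is a sketch only: capture_frame, extract_contour_corners and extract_improved_sift are placeholder helpers standing in for the acquisition and feature-extraction steps described above, while the 11° step size is the value quoted in this embodiment.

```python
def build_vehicle_feature_library(vehicle_model, capture_frame,
                                  extract_contour_corners, extract_improved_sift,
                                  theta_step=11, epsilon_step=11):
    """Sample the vehicle at multiple in-plane (theta) and out-of-plane (epsilon)
    rotation angles and store both feature sets for each pose (sketch only)."""
    library = []
    for epsilon in range(0, 360, epsilon_step):   # rotation about the X axis
        for theta in range(0, 360, theta_step):   # rotation in the X-Y plane
            frame = capture_frame(vehicle_model, theta, epsilon)
            library.append({
                "model": vehicle_model,
                "theta": theta,
                "epsilon": epsilon,
                "contour_corners": extract_contour_corners(frame),
                "improved_sift": extract_improved_sift(frame),
            })
    return library
```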
As shown in Fig. 3, the detailed process of foreground vehicle separation based on background modeling comprises the following steps:
First, for colour cameras, a background modeling method based on colour-space clustering of the road model is adopted.
According to colour distortion theory, the pixels of the current video frame are clustered one by one. Let c_k = (R_k, G_k, B_k)^T be the codeword of a pixel and I_k the brightness of c_k; the distortion interval of the codeword is defined as a cylinder whose axis is the line through this point and the origin, where Δ_c is the colour distortion radius;
The two consecutive frames f_t and f_{t+1} collected are differenced; from f_{t+1} − f_t the invariant region v_{t+1} is obtained. For each pixel in the invariant region the following parameters are set: the cluster centre c, the brightness distortion radius Δ_I, the colour distortion radius Δ_c, the sub-class weight ω, and the maximum number of sub-classes M. The first frame of the video sequence is taken as the initial background model, the colour vector v of each pixel location in this frame is taken as its first cluster centre c_1, and the corresponding weight is set to ω_1 = 1. The distortion difference D between the current pixel vector and every existing cluster centre is then computed, and the minimum value D_min and its corresponding sub-class k are chosen;
As illustrated in Fig. 7, if the distance ΔC from a pixel x_i = (R, G, B)^T of the current video image to the line joining c_k and the origin of RGB space is less than the colour distortion radius Δ_c, the pixel is considered to satisfy the colour distortion rule; ΔC is the colour distortion value of this pixel;
The brightness distortion rule takes the absolute difference ΔI = |I_i − I_k| between the brightness value I_i of a pixel of the current video image and the brightness I_k of the codeword c_k = (R_k, G_k, B_k)^T as the brightness distortion value; when ΔI is less than the brightness distortion radius Δ_I, the pixel satisfies the brightness distortion rule of that sub-class;
If D_min satisfies the clustering criterion jointly formed by the colour distortion rule and the brightness distortion rule, the current pixel belongs to sub-class k, and the parameters of that sub-class are updated according to the following formulas;
c_{k,t+1}(x, y) = (1 − α)·c_{k,t}(x, y) + α·v_{t+1}(x, y)
where c_{k,t+1}(x, y) is the cluster centre of the k-th sub-class at pixel (x, y) after the update, c_{k,t}(x, y) is the cluster centre before the update, and α is the learning rate, taken as 0.03;
ω_{k,t+1} = (1 − α)·ω_{k,t}
where ω_{k,t+1} is the weight of the k-th sub-class after the update and ω_{k,t} is the weight before the update;
If D_min does not satisfy the clustering criterion, the current pixel does not belong to any existing sub-class. If the current number of sub-classes is less than the set maximum, a new sub-class is added, its cluster centre is set to the current pixel feature vector, and its weight is initialised to ω_0 = 0.2; otherwise the cluster centre with the minimum weight is replaced by the current pixel feature vector, and its weight is likewise initialised to ω_0 = 0.2.
ω_{k,t+1} = (1 − ω_0)·ω_{k,t}, i.e. the weights of the other sub-classes are decayed.
For each pixel location, the existing sub-classes are sorted in descending order of their weights ω, and the qualifying top N sub-classes are selected as a reasonable description of the background model. If the current pixel belongs to one of the top N sub-classes it is considered a road background pixel; otherwise it is a foreground pixel.
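A compressed per-pixel sketch of the clustering update described above (one pixel, one frame, in Python/NumPy). The constants mirror the values quoted in the text (α = 0.03, ω₀ = 0.2); the distortion test is reduced to the cylinder distance and brightness difference, and the matched-weight reinforcement term is a common codebook-style assumption rather than a formula taken from the text.

```python
import numpy as np

ALPHA = 0.03          # learning rate alpha from the text
OMEGA0 = 0.2          # initial weight omega_0 for a new sub-class
MAX_SUBCLASSES = 5    # illustrative maximum sub-class number M

def update_pixel_clusters(clusters, pixel, delta_c, delta_i):
    """clusters: list of {'center': float RGB vector, 'weight': float}.
    Sketch of the colour/brightness-distortion clustering update for one pixel."""
    pixel = pixel.astype(float)
    best, best_dist = None, None
    for c in clusters:
        center = c['center']
        # colour distortion: distance from the pixel to the line through the
        # RGB origin and the cluster centre (the cylinder axis of Fig. 7)
        proj = np.dot(pixel, center) / (np.dot(center, center) + 1e-9)
        color_dist = np.linalg.norm(pixel - proj * center)
        # brightness distortion: difference of brightness (vector norms here)
        bright_dist = abs(np.linalg.norm(pixel) - np.linalg.norm(center))
        if color_dist < delta_c and bright_dist < delta_i:
            if best_dist is None or color_dist < best_dist:
                best, best_dist = c, color_dist
    if best is not None:
        # matched sub-class k: move its centre towards the pixel
        best['center'] = (1 - ALPHA) * best['center'] + ALPHA * pixel
        best['weight'] = (1 - ALPHA) * best['weight'] + ALPHA   # assumed reinforcement
    elif len(clusters) < MAX_SUBCLASSES:
        clusters.append({'center': pixel, 'weight': OMEGA0})    # new sub-class
    else:
        weakest = min(clusters, key=lambda c: c['weight'])      # replace weakest
        weakest['center'], weakest['weight'] = pixel, OMEGA0
    clusters.sort(key=lambda c: c['weight'], reverse=True)      # keep top-N first
    return clusters
```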
For black-and-white cameras, a background modeling method based on a mixture-of-Gaussians model is adopted.
The basic idea of mixture-of-Gaussians background modeling is to represent, with several Gaussian models at each pixel position, the state of that pixel as it changes over time in the video. Usually 3 to 5 single Gaussian models are selected to jointly describe the behaviour of a given pixel. Each of the K states is represented by a Gaussian function, and the states are stored sorted in descending order of their probability of being background. The mixture-of-Gaussians model is described as follows:
P(f_t) = Σ_{i=1}^{K} ω_{i,t} · η(f_t, μ_{i,t}, σ²_{i,t})
where f_t is the t-th frame image; ω_{i,t} is the weight of the i-th Gaussian distribution at frame t, with the weights summing to 1; and η(f_t, μ_{i,t}, σ²_{i,t}) is the i-th Gaussian distribution function at frame t.
Updating the mixture-of-Gaussians model mainly means updating the Gaussian parameters that describe its distribution. The update must consider both the model parameters and the weights simultaneously, the whole process is rather complicated, and finally the models have to be re-sorted by weight. During shooting the video is continuously updated and new frames keep arriving, so the mixture-of-Gaussians model is also continuously updated. The basic idea is to compare each pixel of the newly obtained image against the model: if the pixel satisfies the mixture-of-Gaussians distribution it is regarded as a background point; otherwise it is regarded as a moving target point. The matching test is |f_t − μ_{i,t−1}| < d_1 · σ_{i,t−1},
where d_1 is a threshold, generally taken as 2.5 in practice, and σ_{i,t−1} is the standard deviation of the i-th Gaussian function at time t − 1.
According to the matching result, the parameters of the mixture-of-Gaussians model, namely the weights, means and variances, are updated: ω_{i,t+1} = (1 − α_1)·ω_{i,t} + α_1·M_{i,t+1} and μ_{i,t+1} = (1 − ρ)·μ_{i,t} + ρ·T_{t+1}, where α_1 (0 ≤ α_1 ≤ 1) is the background learning rate, which decides how fast the background model is updated: the larger α_1, the faster the update; the smaller α_1, the slower the update. M_{i,t+1} denotes whether the pixel colour value at time t + 1 matches the i-th Gaussian model: it is 1 when matched and 0 when not matched. For unmatched Gaussian models the mean and variance remain unchanged; for the matched model they are updated. ρ = α_1·η(x_t | μ_k, σ_k) is the Gaussian model learning factor, representing the rate at which the Gaussian parameters are updated. If no Gaussian model matches the current pixel value, the Gaussian model with the smallest weight is replaced: the mean of the new model is the current pixel colour value, its variance is preset to a large initial value, and its weight is preset to a small initial value.
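For the black-and-white case, OpenCV's built-in mixture-of-Gaussians background subtractor can stand in for the update equations above; this is a practical substitute, not a transcription of the patent's exact model, and the parameter values are illustrative (varThreshold is set from d₁ = 2.5).

```python
import cv2

# Mixture-of-Gaussians background subtraction for a black-and-white camera.
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=2.5 ** 2, detectShadows=False)

def extract_foreground(gray_frame):
    """Return a binary foreground mask, cleaned with simple morphology (step S123)."""
    mask = subtractor.apply(gray_frame)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    return mask
```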
As shown in Fig. 4, the concrete steps of the vehicle outer-contour recognition algorithm based on three-dimensional modeling are as follows:
The Canny operator is used for edge extraction. First the gradient magnitude and direction are computed: the gradients M_x and M_y of the foreground image in the x and y directions are obtained by convolving the image with 3 × 3 templates:
M_x template: [−1 0 1; −2 0 2; −1 0 1],  M_y template: [−1 −2 −1; 0 0 0; 1 2 1]
The gradient magnitude is |∇f| = √(M_x² + M_y²), and the gradient direction angle is θ = arctan(M_y / M_x).
The gradient direction angles in 0°–360° are merged into four directions; non-maximum suppression and hysteresis thresholding then yield the vehicle contour curve f(x).
The corner points of the contour curve f(x) are taken as its feature points.
These are matched with the contour-curve feature points in the three-dimensional model library.
The vehicle type is identified first; the identification criterion is that the ratio of the number of matched feature points to the total number of feature points is greater than or equal to κ (κ is taken as 0.6 in the present invention).
Whether the contour has deformed is then judged from the feature points of the real-time contour curve f(x): they are matched against the feature points of the corresponding contour curve in the three-dimensional model library, and if the ratio of the number of matched feature points to the total number of feature points is less than λ (λ is taken as 0.9), a traffic accident is declared and an alarm is raised; otherwise the next step, the improved SIFT feature point judgment, is entered.
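A minimal sketch of steps S141-S145, assuming OpenCV 4 and a pre-built three-dimensional contour corner library. The match-ratio thresholds κ = 0.6 and λ = 0.9 are the values given in the text; match_corners is a hypothetical helper standing in for the corner-to-library matching, and the Canny and polygon-approximation parameters are illustrative.

```python
import cv2
import numpy as np

KAPPA = 0.6    # minimum match ratio to accept a vehicle type (from the text)
LAMBDA = 0.9   # below this contour-match ratio the outline is judged deformed

def contour_corner_features(foreground_roi):
    """Canny edge extraction followed by a polygonal approximation whose
    vertices serve as contour corner points (sketch)."""
    edges = cv2.Canny(foreground_roi, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.empty((0, 2), dtype=int)
    outline = max(contours, key=cv2.contourArea)
    corners = cv2.approxPolyDP(outline, 0.01 * cv2.arcLength(outline, True), True)
    return corners.reshape(-1, 2)

def check_contour(foreground_roi, library, match_corners):
    """Identify the vehicle type, then flag deformation when the contour match
    ratio falls between kappa and lambda."""
    corners = contour_corner_features(foreground_roi)
    vehicle_type, ratio = match_corners(corners, library)   # hypothetical matcher
    if ratio < KAPPA:
        return None, False                 # vehicle type not recognised
    return vehicle_type, ratio < LAMBDA    # recognised; deformed if ratio < lambda
```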
As shown in Fig. 5, the concrete steps of the improved SIFT feature recognition algorithm for vehicles based on three-dimensional modeling are as follows:
Vehicle features are extracted as follows: first a difference of Gaussians (DoG) is used to build the multi-scale representation of the image, i.e. the DoG scale space of the image is generated.
The DoG is defined as D(x, y, σ) = G(x, y, kσ) − G(x, y, σ), where the Gaussian function is G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)).
For each point, it is determined whether it is an extremum: the point is compared with its eight neighbours in the same scale (the points above, below, left, right and on the diagonals) and with the 18 neighbouring points in the scales above and below. If the point is an extremum, it is taken as a feature point, and its principal direction can be computed from the gradients in its neighbourhood;
Feature point description: in the improved SIFT feature point description method, an 8 × 8 neighbourhood around the feature point is taken and divided into 8 concentric circles; within each concentric ring the accumulated gradient-weighted magnitudes in 8 directions are computed, and the 8-dimensional vectors of the rings, taken in order from the inside outwards, are concatenated to form the final feature vector. The improved SIFT algorithm therefore describes each feature point with a 64-dimensional vector.
Matching against the features in the model library: after the image features have been extracted, feature matching is carried out. The extracted image features are described as vectors, for example P_1(x_11, x_12, …, x_1n) and P_2(x_21, x_22, …, x_2n). Feature identification is unique in the sense that the identifying point and the identified point have the highest similarity between them.
The Euclidean distance between the points P_1(x_11, x_12, …, x_1n) and P_2(x_21, x_22, …, x_2n) is expressed as:
dis(P_1, P_2) = √( Σ_{i=1}^{n} (x_1i − x_2i)² )
The minimum distance is then found by statistics, and the two feature vectors producing this minimum distance are considered the most similar. After the features have been matched, whether the vehicle has had an accident is judged from the ratio of the number of matched feature points to the total number of feature points.
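The 64-dimensional ring descriptor and the Euclidean-distance matching can be sketched as below (Python/NumPy). The keypoint locations are assumed to come from a DoG extremum detector as described above; the ring quantisation, the distance threshold and the decision ratio are illustrative assumptions rather than values from the patent.

```python
import cv2
import numpy as np

def ring_descriptor(gray, point, radius=8, n_rings=8, n_bins=8):
    """64-D descriptor: gradient-magnitude-weighted orientation histograms
    (8 bins) accumulated over 8 concentric rings around the keypoint (sketch)."""
    x, y = int(point[0]), int(point[1])
    patch = gray[max(y - radius, 0):y + radius,
                 max(x - radius, 0):x + radius].astype(np.float32)
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)               # angle in radians
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - h / 2.0, xx - w / 2.0)
    ring = np.minimum((dist / radius * n_rings).astype(int), n_rings - 1)
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    desc = np.zeros((n_rings, n_bins), np.float32)
    np.add.at(desc, (ring, bins), mag)               # accumulate per ring/direction
    return desc.ravel()                              # 8 rings x 8 bins = 64 dims

def match_ratio(descs_a, descs_b, rel_dist=0.3):
    """Fraction of descriptors in A whose nearest neighbour in B (Euclidean
    distance) lies within rel_dist of the descriptor's own norm."""
    descs_b = np.asarray(descs_b, dtype=np.float32)
    matched = 0
    for d in np.asarray(descs_a, dtype=np.float32):
        dists = np.linalg.norm(descs_b - d, axis=1)
        if dists.size and dists.min() < rel_dist * (np.linalg.norm(d) + 1e-9):
            matched += 1
    return matched / max(len(descs_a), 1)
```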
As shown in Fig. 6, the system of the present invention comprises a communication subsystem, an information storage subsystem, an accident identification subsystem and an accident management subsystem;
wherein the communication subsystem is used for: 1) transmitting the video sequences collected by the cameras to the workstation; 2) transmitting video images containing accidents, after processing and identification by the workstation, to the server of the district command centre; 3) completing communication between the district and city command centres and the server;
the information storage subsystem stores the original video of the period around the accident, the corresponding video identification information, the accident location information, and the related follow-up handling information for each accident grade;
the accident identification subsystem 1) identifies that a traffic accident has occurred and intercepts the video segment containing the accident process; 2) identifies the severity grade according to the relevant specifications or regulations;
the accident management subsystem is used for: 1) traffic accident alarms; 2) traffic accident grade sorting; 3) accident follow-up handling; 4) historical information query and statistics.
The accident identification subsystem described above comprises a background modeling module, a foreground vehicle extraction module, a vehicle contour identification module and a vehicle improved-SIFT feature identification module;
the background modeling module uses the road video image information and performs background modeling with the colour-space clustering method for colour cameras and with the Gaussian mixture model for black-and-white cameras;
the foreground vehicle extraction module uses a background difference algorithm, taking the difference between the current frame and the background to obtain the foreground vehicles, and uses an eight-connected-region algorithm to label each moving target separately;
the vehicle contour identification module uses the Canny operator to obtain the contour of the foreground vehicle and matches it with the pre-built three-dimensional vehicle contour library, first identifying the vehicle type; if the vehicle contour is obviously deformed, a traffic accident is declared, otherwise the local improved SIFT features are used for discrimination;
the vehicle improved-SIFT feature identification module extracts the improved SIFT features of the vehicle and matches them with the pre-built feature library to determine whether a traffic accident has occurred.
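Purely as an illustration of the kind of information the accident identification subsystem might hand to the storage and management subsystems, a minimal record could look like the sketch below; none of these field names appear in the patent.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class AccidentRecord:
    """Illustrative record passed from the identification subsystem to the
    storage and management subsystems (all field names are assumptions)."""
    camera_id: str
    location: str
    occurred_at: datetime
    severity_grade: int            # graded per the relevant regulations
    video_clip_path: str           # intercepted clip covering the accident process
    matched_vehicle_types: List[str] = field(default_factory=list)
    notified_centers: List[str] = field(default_factory=list)  # district/city centres alerted
```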
The system of the present invention can identify and handle traffic accidents, transmit and store accident information, replay traffic accident video, and assist in determining accident responsibility; it provides convenient and effective help for traffic management and contributes to further research on intelligent transportation systems.
The specific embodiments of the present invention have been described and illustrated above. These embodiments are exemplary only and do not limit the invention; the invention should be interpreted according to the appended claims.

Claims (8)

1. A video-based automatic traffic accident identification and processing method, characterized in that it comprises the following steps:
Step S10: building a three-dimensional feature library for each vehicle type;
Step S11: acquiring road video image sequences;
Step S12: separating foreground vehicles based on background modeling;
Step S13: judging static targets based on target centroid displacement;
Step S14: recognizing the vehicle type with a vehicle outer-contour recognition algorithm based on three-dimensional modeling, then judging whether an accident has occurred according to whether the contour has deformed; if an accident is judged to have occurred, raising an alarm, otherwise entering step S15 and judging by improved SIFT feature points;
Step S15: applying an improved SIFT feature recognition algorithm for vehicles based on three-dimensional modeling: extracting the improved SIFT feature points of the vehicle, matching them with the local feature points in the three-dimensional vehicle feature library, and comprehensively judging from the matching result whether an accident has occurred;
wherein in step S10, building a three-dimensional feature library for each vehicle type comprises:
S101) acquiring images of different vehicle types at different attitude angles in three-dimensional space;
S102) extracting feature points of the vehicle outer contour in each image;
S103) extracting improved SIFT feature points inside the closed curve enclosed by the vehicle outer contour in each image;
S104) using the extracted vehicle outer-contour feature points and improved SIFT feature points to build the three-dimensional vehicle feature library.
2. The video-based automatic traffic accident identification and processing method according to claim 1, characterized in that, in step S11, the road video image sequences are acquired by single fixed cameras installed along the road at certain intervals.
3. The video-based automatic traffic accident identification and processing method according to claim 1, characterized in that, in step S12, foreground vehicle separation comprises:
S121) for colour cameras, adopting a background modeling method based on colour-space clustering of the road model; for black-and-white cameras, adopting a background modeling method based on a mixture-of-Gaussians model;
S122) obtaining the foreground and background images by an image difference method;
S123) improving the vehicle segmentation result of step S122) with morphological operations.
4. The video-based automatic traffic accident identification and processing method according to claim 1, characterized in that, in step S13, identifying stationary vehicles based on image difference comprises:
S131) labelling each moving target with 8-neighbourhood connectivity;
S132) computing, for each labelled moving target, its centroid and the size of its connected region;
S133) comparing with the previous frame image to judge whether the target has shifted.
5. The video-based automatic traffic accident identification and processing method according to claim 1, characterized in that, in step S14, the vehicle outer-contour recognition algorithm based on three-dimensional modeling comprises:
S141) using the Canny operator to obtain the outer contour of the foreground vehicle;
S142) describing the obtained outer contour mathematically to obtain its mathematical expression;
S143) taking the corner points of the contour curve as feature points and matching them with the pre-built three-dimensional vehicle contour library;
S144) identifying the vehicle model and judging whether the vehicle's outer contour has changed;
S145) if the outer contour has changed, raising a traffic accident alarm; otherwise proceeding to the improved SIFT feature judgment of step S15.
6. The video-based automatic traffic accident identification and processing method according to claim 1, characterized in that, in step S15, the improved SIFT feature recognition algorithm for vehicles based on three-dimensional modeling comprises:
S151) obtaining the improved SIFT features of the vehicle;
S152) matching them with the pre-built three-dimensional feature library.
7. A video-based automatic traffic accident identification and processing system, characterized in that the system comprises a communication subsystem, an information storage subsystem, an accident identification subsystem and an accident management subsystem;
wherein the communication subsystem is used for: 1) transmitting the video sequences collected by the cameras to the workstation; 2) transmitting video images containing accidents, after processing and identification by the workstation, to the server of the district command centre; 3) completing communication between the district and city command centres and the server;
the information storage subsystem stores the original video of the period around the accident, the corresponding video identification information, the accident location information, and the related follow-up handling information for each accident grade;
the accident identification subsystem 1) identifies that a traffic accident has occurred and intercepts the video segment containing the accident process; 2) identifies the severity grade according to the relevant specifications or regulations;
the accident management subsystem is used for: 1) traffic accident alarms; 2) traffic accident grade sorting; 3) accident follow-up handling; 4) historical information query and statistics.
8. The automatic traffic accident identification and processing system according to claim 7, characterized in that the accident identification subsystem comprises a background modeling module, a foreground vehicle extraction module, a vehicle contour identification module and a vehicle improved-SIFT feature identification module;
the background modeling module uses the road video image information and performs background modeling with the colour-space clustering method for colour cameras and with the Gaussian mixture model for black-and-white cameras;
the foreground vehicle extraction module uses a background difference algorithm, taking the difference between the current frame and the background to obtain the foreground vehicles, and uses an eight-connected-region algorithm to label each moving target separately;
the vehicle contour identification module uses the Canny operator to obtain the contour of the foreground vehicle and matches it with the pre-built three-dimensional vehicle contour library, first identifying the vehicle type; if the vehicle contour is obviously deformed, a traffic accident is declared, otherwise the local improved SIFT features are used for discrimination;
the vehicle improved-SIFT feature identification module extracts the improved SIFT features of the vehicle and matches them with the pre-built feature library to determine whether a traffic accident has occurred.
CN201310139545.6A 2013-04-19 2013-04-19 Traffic accident automatic identification processing method and system based on videos Expired - Fee Related CN103258432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310139545.6A CN103258432B (en) 2013-04-19 2013-04-19 Traffic accident automatic identification processing method and system based on videos


Publications (2)

Publication Number Publication Date
CN103258432A CN103258432A (en) 2013-08-21
CN103258432B true CN103258432B (en) 2015-05-27

Family

ID=48962312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310139545.6A Expired - Fee Related CN103258432B (en) 2013-04-19 2013-04-19 Traffic accident automatic identification processing method and system based on videos

Country Status (1)

Country Link
CN (1) CN103258432B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0376800B1 (en) * 1988-12-21 1995-04-05 Serge Besnard Automatic site monitoring process and apparatus
CN102073851A (en) * 2011-01-13 2011-05-25 北京科技大学 Method and system for automatically identifying urban traffic accident

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hua Liqin, Xu Wei, Wang Tuo. Vehicle type recognition using an improved scale-invariant feature transform and multi-view models. Journal of Xi'an Jiaotong University, 2013, 47(4): 92-99. *
Yang Jianguo, Yin Xuquan, Fang Li. Video-based moving vehicle detection and tracking based on adaptive contour matching. Journal of Xi'an Jiaotong University, 2005, 39(4): 351-355. *
Bai Pei, Li Jinping. A video-based traffic accident detection method. Journal of University of Jinan (Science and Technology), 2012, 26(3): 282-286. *

Also Published As

Publication number Publication date
CN103258432A (en) 2013-08-21


Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant
C14 Grant of patent or utility model
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 2015-05-27; termination date: 2019-04-19)