CN110263693A - Vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier - Google Patents

Vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier

Info

Publication number
CN110263693A
CN110263693A (application CN201910510558.7A)
Authority
CN
China
Prior art keywords
vehicle
pedestrian
car
people
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910510558.7A
Other languages
Chinese (zh)
Inventor
王国举
刘慧林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Yuan Lian Sensing Technology Co Ltd
Original Assignee
Suzhou Yuan Lian Sensing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Yuan Lian Sensing Technology Co Ltd filed Critical Suzhou Yuan Lian Sensing Technology Co Ltd
Priority to CN201910510558.7A priority Critical patent/CN110263693A/en
Publication of CN110263693A publication Critical patent/CN110263693A/en
Priority to PCT/CN2019/119689 priority patent/WO2020248515A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Abstract

The present invention relates to the field of image recognition, and in particular to a vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier. By using local vehicle and pedestrian information, the invention overcomes the drawbacks of traditional inter-frame difference, namely missed detections and the inability to detect stationary objects, and can detect both the complete regions of vehicles and pedestrians moving on the road and the (less complete) regions of stationary vehicles and pedestrians. At the same accuracy, it runs faster than traditional SVM+HOG or neural-network methods and at lower cost. The method of the present invention needs only positive samples, requires fewer samples than SVM+HOG, has a simpler learning process than a neural network, and offers high real-time performance.

Description

Vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier
Technical field
The present invention relates to the field of image recognition, and in particular to a vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier.
Background technique
In conventional detection technology, the common method for vehicle and pedestrian detection is SVM+HOG. On the one hand, this method requires a large number of positive and negative samples during training and only discriminates well for pedestrians with distinct features; on the other hand, its detection speed is slow.
When the camera position is fixed and the detection environment is known, the traditional SVM+HOG approach requires collecting a large number of positive samples of vehicles and pedestrians and negative samples of non-vehicle, non-pedestrian objects in the environment in order to improve detection and recognition accuracy. Because SVM (support vector machine) detection handles a single object class, two classifiers must be trained when there are two detection targets in the scene. At the same time, SVM+HOG-based vehicle and pedestrian detection has poor real-time performance: processing one frame takes more than one second.
Using the currently popular neural-network learning to solve pedestrian and vehicle detection is somewhat excessive, since after all only two classes of objects need to be detected. Neural-network learning also requires expensive hardware, complex algorithms and a massive number of samples, and even then real-time detection cannot be guaranteed; the barrier to deployment is high, and so is the cost.
Summary of the invention
The technical problem to be solved by the present invention is to provide a vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier which overcomes the missed detections of traditional inter-frame difference and its inability to detect stationary objects, runs fast, costs little, needs only a small number of samples, and offers high real-time performance.
In order to solve the above technical problem, the present invention adopts the following technical solution:
A vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier, comprising the following steps:
S1. Continuously capture road images containing vehicles and pedestrians in the monitored area, and extract the road area of each frame by applying binary segmentation to the road images;
S2. Apply traditional inter-frame difference processing to the road images to obtain vehicle and pedestrian position pictures, then apply local inter-frame difference processing to the position pictures to obtain vehicle and pedestrian region images, and from these obtain the bounding-rectangle pictures of the vehicles and pedestrians;
S3. Divide the obtained vehicle and pedestrian bounding-rectangle pictures into a training set and a test set, and feed the training set into a naive Bayes classifier for training to obtain a trained vehicle and pedestrian recognition model;
S4. Feed the test set into the trained vehicle and pedestrian recognition model, which then recognizes the vehicles and pedestrians in the road images.
Preferably, in step S1, the binary segmentation of the road images is specifically: first apply binary segmentation to multiple road images captured at different times; fill holes in the segmented binary images; extract the contours of the road area; save the largest road contour in each image and sort these road contours; use the largest road contour as the mask image of the road area; then extract the road area in each frame through the road mask image.
Preferably, in step S2, the traditional inter-frame difference processing is specifically: after binarizing the road images, apply a morphological closing operation, then perform hole filling and morphological dilation and erosion, finally extract the vehicle and pedestrian contours within the road area and use them to obtain the vehicle and pedestrian position pictures.
Preferably, in step S2, the local inter-frame difference processing is specifically: obtain the vehicle and pedestrian position pictures through the traditional inter-frame difference processing, then enlarge and save the positions of the vehicles and pedestrians in the pictures to obtain locally widened pictures; when detecting vehicles and pedestrians in the next frame of the road images, perform inter-frame difference between the vehicle and pedestrian position pictures and the locally widened pictures to obtain the vehicle and pedestrian bounding-rectangle pictures.
Preferably, in step S3, several vehicle and pedestrian features are used in the training process of the naive Bayes classifier, and the distribution probability of each feature over different intervals is determined from the vehicle and pedestrian bounding-rectangle pictures in the training set.
Preferably, the vehicle and pedestrian features are the region perimeter X_per, the region area X_Area, the dispersion X_disp, the number of polygon vertices X_Numpoly, the gray mean X_Avegray, the gray mean square error X_grayerror, the gray trisection value X_tgd, the convex hull number X_ConHull, the point set number X_GPoint, the H-channel gray mean X_HAgray, the S-channel gray mean X_SAgray, the V-channel gray mean X_VAgray, the R-channel gray mean X_RAgray, the G-channel gray mean X_GAgray, the B-channel gray mean X_BAgray, and the geometric ratio X_Ratio.
Preferably, the vehicle and pedestrian to be recognized are denoted Y_car and Y_people respectively; the classification of vehicles and pedestrians can then be expressed as:
h_nb_car = P(X_per|Y_car) P(X_Area|Y_car) P(X_disp|Y_car) P(X_Numpoly|Y_car) P(X_Avegray|Y_car) P(X_grayerror|Y_car) P(X_tgd|Y_car) P(X_ConHull|Y_car) P(X_GPoint|Y_car) P(X_HAgray|Y_car) P(X_SAgray|Y_car) P(X_VAgray|Y_car) P(X_RAgray|Y_car) P(X_GAgray|Y_car) P(X_BAgray|Y_car) P(X_Ratio|Y_car) P(Y_car)
h_nb_people = P(X_per|Y_people) P(X_Area|Y_people) P(X_disp|Y_people) P(X_Numpoly|Y_people) P(X_Avegray|Y_people) P(X_grayerror|Y_people) P(X_tgd|Y_people) P(X_ConHull|Y_people) P(X_GPoint|Y_people) P(X_HAgray|Y_people) P(X_SAgray|Y_people) P(X_VAgray|Y_people) P(X_RAgray|Y_people) P(X_GAgray|Y_people) P(X_BAgray|Y_people) P(X_Ratio|Y_people) P(Y_people)
If h_nb_car > h_nb_people, the object is predicted to be a vehicle; otherwise it is predicted to be a pedestrian.
Preferably, step S2 further includes: applying local RGB and HSV color segmentation to the vehicle and pedestrian position pictures to obtain local segmentation pictures; performing an AND operation between the local segmentation pictures and the vehicle and pedestrian position pictures to obtain whole-frame pictures; applying HSV and RGB color segmentation to the whole-frame pictures; and performing an XOR operation between the segmented binary images and the vehicle and pedestrian region images, the resulting binary images being the vehicle and pedestrian bounding-rectangle pictures.
Beneficial effects of the present invention:
By using local vehicle and pedestrian information, the present invention overcomes the drawbacks of traditional inter-frame difference, namely missed detections and the inability to detect stationary objects, and can detect both the complete regions of vehicles and pedestrians moving on the road and the less complete regions of stationary vehicles and pedestrians. At the same accuracy, it runs faster than traditional SVM+HOG or neural-network methods and at lower cost. The method of the present invention needs only positive samples, requires fewer samples than SVM+HOG, has a simpler learning process than a neural network, and offers high real-time performance.
Detailed description of the invention
Fig. 1 is a schematic diagram of the detection and recognition process of the present invention.
Fig. 2 is the mask image of the road area of the present invention.
Fig. 3 is the HSV color space model of the present invention.
Fig. 4 is a schematic diagram of the vehicle and pedestrian detection process of the present invention.
Fig. 5 is the extracted contour of the road area of the present invention.
Fig. 6 shows vehicles and pedestrians on a binary image of the present invention.
Fig. 7 is an effect diagram of vehicle and pedestrian detection on the road of the present invention.
Fig. 8 shows the recognition effect of the Bayes classifier of the present invention.
Specific embodiment
The present invention will be further explained below with reference to the attached drawings and specific embodiments, so that those skilled in the art can better understand and practice it; however, the illustrated embodiments are not a limitation of the invention.
Referring to Figs. 1-8, the application environment of the present invention is as follows:
Source of training data: vehicles and pedestrians on the road are recorded and saved as video; the vehicle and pedestrian regions are detected and cropped out by the detection module and saved as pictures to form the sample database used as training data.
Data processing: the data are broadly divided into three types: grayscale images of vehicles and pedestrians used as training data, color images, and vehicle and pedestrian pictures with yellow bounding-rectangle information.
Data analysis: the main data analysis tools are Python (common packages: pandas, numpy, matplotlib, etc.) and C++ (common library: opencv, etc.).
A vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier comprises the following steps:
S1. Continuously capture road images containing vehicles and pedestrians in the monitored area, and extract the road area of each frame by applying binary segmentation to the road images;
S2. Apply traditional inter-frame difference processing to the road images to obtain vehicle and pedestrian position pictures, then apply local inter-frame difference processing to the position pictures to obtain vehicle and pedestrian region images, and from these obtain the bounding-rectangle pictures of the vehicles and pedestrians;
S3. Divide the obtained vehicle and pedestrian bounding-rectangle pictures into a training set and a test set, and feed the training set into a naive Bayes classifier for training to obtain a trained vehicle and pedestrian recognition model;
S4. Feed the test set into the trained vehicle and pedestrian recognition model, which then recognizes the vehicles and pedestrians in the road images.
The object detection effect of the improved inter-frame difference module is shown in Fig. 7, where vehicles and pedestrians on the road are detected. The classification effect of the Bayes classifier is shown in Fig. 8, where vehicles and pedestrians are recognized: a dark frame indicates a detected pedestrian and a light frame indicates a detected vehicle.
By using local vehicle and pedestrian information, the present invention overcomes the drawbacks of traditional inter-frame difference, namely missed detections and the inability to detect stationary objects, and can detect both the complete regions of vehicles and pedestrians moving on the road and the less complete regions of stationary vehicles and pedestrians. At the same accuracy, it runs faster than traditional SVM+HOG or neural-network methods and at lower cost. The method of the present invention needs only positive samples, requires fewer samples than SVM+HOG, has a simpler learning process than a neural network, and offers high real-time performance.
Using this method, the present invention can detect vehicles and pedestrians on the road without a trained template; its missed-detection rate is lower than that of traditional inter-frame difference, and it can detect vehicles and pedestrians that are stationary on the road.
In step S1, the binary segmentation of the road images is specifically: first apply binary segmentation to multiple road images captured at different times; fill holes in the segmented binary images; extract the contours of the road area (see Fig. 5); save the largest road contour in each image and sort these road contours; use the largest road contour as the mask image of the road area; then extract the road area in each frame through the road mask image.
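As a non-limiting illustration of step S1, the following Python/OpenCV sketch builds a road mask from several frames and applies it to one frame; the function names, the binarization threshold and the kernel size are assumptions for illustration and are not values specified in this description.

    # Sketch of step S1: build a road-region mask and apply it to a frame.
    # Threshold and kernel size are illustrative, not values fixed by this description.
    import cv2
    import numpy as np

    def build_road_mask(gray_frames, thresh=120):
        """Binarize several frames, fill holes, keep the largest contour as the road mask."""
        best_contour, best_area = None, 0
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
        for frame in gray_frames:
            _, binary = cv2.threshold(frame, thresh, 255, cv2.THRESH_BINARY)
            filled = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # simple hole filling
            contours, _ = cv2.findContours(filled, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            for c in contours:  # keep the largest road contour over all frames
                area = cv2.contourArea(c)
                if area > best_area:
                    best_area, best_contour = area, c
        mask = np.zeros_like(gray_frames[0])
        cv2.drawContours(mask, [best_contour], -1, 255, thickness=cv2.FILLED)
        return mask

    def extract_road(gray_frame, mask):
        """Keep only the road area of one frame."""
        return cv2.bitwise_and(gray_frame, gray_frame, mask=mask)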
Traditional inter-frame difference motion detection method
Inter-frame difference is a method that obtains the contour of a moving object by differencing two consecutive frames of a video sequence. When an object moves in the monitored scene, an obvious difference appears between adjacent frames; the two frames are subtracted, the absolute difference of the pixel values at corresponding positions is computed, and whether it exceeds a threshold is judged, from which the motion characteristics of the video or image sequence are analyzed:
D(x, y) = 1 if |I_t(x, y) - I_{t-1}(x, y)| > T, and D(x, y) = 0 otherwise
where D(x, y) is the differenced image, I_t is the frame captured at time t, I_{t-1} is the frame captured at time t-1, and T is the threshold used to binarize the difference; 1 indicates foreground and 0 indicates background.
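A minimal Python/OpenCV sketch of the frame-difference formula above; the threshold value T is an illustrative assumption.

    # Sketch of the classical inter-frame difference D(x, y): threshold |I(t) - I(t-1)|.
    import cv2

    def frame_difference(prev_gray, curr_gray, T=25):      # T is illustrative
        diff = cv2.absdiff(curr_gray, prev_gray)            # |I(t) - I(t-1)|
        _, foreground = cv2.threshold(diff, T, 255, cv2.THRESH_BINARY)
        return foreground                                   # 255 = foreground, 0 = background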
In step S2, the traditional inter-frame difference processing is specifically: after binarizing the road images, apply a morphological closing operation, then perform hole filling and morphological dilation and erosion, finally extract the vehicle and pedestrian contours within the road area and use them to obtain the vehicle and pedestrian position pictures.
Because a morphological closing operation is used after binarization, the binarized vehicle and pedestrian regions produced by inter-frame difference are fragmented, and after an opening operation these fragments may be removed, so the extracted vehicle and pedestrian regions are disconnected or contain holes. To solve this problem, a local inter-frame difference method is used: the binarized vehicle and pedestrian regions are added to the vehicle and pedestrian regions extracted by traditional inter-frame difference. The local inter-frame difference processing is specifically: obtain the vehicle and pedestrian position pictures through the traditional inter-frame difference processing, then enlarge and save the positions of the vehicles and pedestrians in the pictures to obtain locally widened pictures; when detecting vehicles and pedestrians in the next frame of the road images, perform inter-frame difference between the vehicle and pedestrian position pictures and the locally widened pictures to obtain the vehicle and pedestrian bounding-rectangle pictures.
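The following Python/OpenCV sketch illustrates the combination of traditional and local inter-frame difference described above; the padding, kernel size and threshold are illustrative assumptions rather than values given in this description.

    # Sketch of step S2: traditional inter-frame difference followed by local re-differencing
    # inside the widened saved boxes. Padding, kernel size and threshold are illustrative.
    import cv2

    def detect_boxes(prev_gray, curr_gray, T=25):
        """Traditional inter-frame difference: difference, close, dilate/erode, bounding boxes."""
        diff = cv2.absdiff(curr_gray, prev_gray)
        _, binary = cv2.threshold(diff, T, 255, cv2.THRESH_BINARY)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
        binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
        binary = cv2.dilate(binary, kernel)
        binary = cv2.erode(binary, kernel)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours]      # (x, y, w, h) per region

    def local_difference(prev_gray, curr_gray, boxes, pad=20, T=25):
        """Local inter-frame difference: re-difference only inside the widened saved boxes."""
        h, w = curr_gray.shape
        refined = []
        for (x, y, bw, bh) in boxes:
            x0, y0 = max(0, x - pad), max(0, y - pad)
            x1, y1 = min(w, x + bw + pad), min(h, y + bh + pad)
            roi_diff = cv2.absdiff(curr_gray[y0:y1, x0:x1], prev_gray[y0:y1, x0:x1])
            _, roi_bin = cv2.threshold(roi_diff, T, 255, cv2.THRESH_BINARY)
            cs, _ = cv2.findContours(roi_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            for c in cs:
                rx, ry, rw, rh = cv2.boundingRect(c)
                refined.append((x0 + rx, y0 + ry, rw, rh))  # box in full-frame coordinates
        return refined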
In step S3, several vehicle and pedestrian features are used in the training process of the naive Bayes classifier, and the distribution probability of each feature over different intervals is determined from the vehicle and pedestrian bounding-rectangle pictures in the training set.
The vehicle and pedestrian features are the region perimeter X_per, the region area X_Area, the dispersion X_disp, the number of polygon vertices X_Numpoly, the gray mean X_Avegray, the gray mean square error X_grayerror, the gray trisection value X_tgd, the convex hull number X_ConHull, the point set number X_GPoint, the H-channel gray mean X_HAgray, the S-channel gray mean X_SAgray, the V-channel gray mean X_VAgray, the R-channel gray mean X_RAgray, the G-channel gray mean X_GAgray, the B-channel gray mean X_BAgray, and the geometric ratio X_Ratio.
1. Region perimeter: the perimeter of the vehicle or pedestrian region extracted by the detection module; this feature is denoted X_per.
2. Region area: the area of the vehicle or pedestrian region extracted by the detection module; this feature is denoted X_Area.
3. Dispersion: a compactness measure of the vehicle or pedestrian region, computed as the ratio of the region's perimeter to its area; this feature is denoted X_disp.
4. Number of polygon vertices: the number of vertices of the polygon approximating the vehicle or pedestrian region; this feature is denoted X_Numpoly.
5. Gray mean: the mean gray value of the grayscale image of the vehicle or pedestrian region; this feature is denoted X_Avegray.
6. Gray mean square error: the mean square error of the gray values of the grayscale image of the vehicle or pedestrian region; this feature is denoted X_grayerror.
7. Gray trisection value: the bounding rectangle of the vehicle or pedestrian is divided into three parts, the gray mean of each part is computed, the differences between the gray means of adjacent parts are then computed, and finally the difference between these two differences is taken. This feature is denoted X_tgd and can be formulated as follows: if the gray mean of the first part of the trisected bounding rectangle is R_1, the gray mean of the second part is R_2, the gray mean of the third part is R_3, the height of the whole bounding rectangle is H, its width is W, and the gray trisection value is X_tgd, then
X_tgd = abs(abs(R_1 - R_2) - abs(R_3 - R_2))
8. Convex hull number: the convex hull count detected from the contour of the vehicle or pedestrian region; this feature is denoted X_ConHull.
9. Point set number: the number of points in the point set constituting the vehicle or pedestrian region; this feature is denoted X_GPoint.
10. H-channel gray mean: the gray mean of the H channel of the HSV image corresponding to the vehicle or pedestrian detection region; this feature is denoted X_HAgray.
11. S-channel gray mean: the gray mean of the S channel of the HSV image corresponding to the vehicle or pedestrian detection region; this feature is denoted X_SAgray.
12. V-channel gray mean: the gray mean of the V channel of the HSV image corresponding to the vehicle or pedestrian detection region; this feature is denoted X_VAgray.
13. R-channel gray mean: the gray mean of the R channel of the RGB image corresponding to the vehicle or pedestrian detection region; this feature is denoted X_RAgray.
14. G-channel gray mean: the gray mean of the G channel of the RGB image corresponding to the vehicle or pedestrian detection region; this feature is denoted X_GAgray.
15. B-channel gray mean: the gray mean of the B channel of the RGB image corresponding to the vehicle or pedestrian detection region; this feature is denoted X_BAgray.
16. Geometric ratio: the ratio of the height to the width of the bounding rectangle of the vehicle or pedestrian detection region; this feature is denoted X_Ratio.
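To illustrate how such a 16-dimensional feature vector can be computed, the following Python/OpenCV sketch extracts the features above from one detected region. The precise dispersion formula, the use of the standard deviation for the gray mean square error, the interpretation of the hull and point-set counts, and the horizontal trisection are assumptions made for illustration where this description leaves the details open.

    # Sketch of computing the 16 features for one detected region (BGR patch + its contour).
    # Several definitions below are assumptions, as noted in the comments.
    import cv2
    import numpy as np

    def region_features(bgr_patch, contour):
        gray = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2GRAY)
        hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
        h, w = gray.shape
        perimeter = cv2.arcLength(contour, True)
        area = max(cv2.contourArea(contour), 1.0)
        poly = cv2.approxPolyDP(contour, 0.01 * perimeter, True)
        hull = cv2.convexHull(contour)
        # Gray trisection value X_tgd: three horizontal bands, adjacent-mean differences,
        # then the difference of those two differences (band direction is an assumption).
        r1, r2, r3 = (float(part.mean()) for part in np.array_split(gray, 3, axis=0))
        return {
            "X_per": perimeter,
            "X_Area": area,
            "X_disp": perimeter / area,                    # assumed compactness-style ratio
            "X_Numpoly": len(poly),
            "X_Avegray": float(gray.mean()),
            "X_grayerror": float(gray.std()),              # std used as the gray "mean square error"
            "X_tgd": abs(abs(r1 - r2) - abs(r3 - r2)),
            "X_ConHull": len(hull),                        # assumed: number of hull points
            "X_GPoint": len(contour),                      # assumed: number of contour points
            "X_HAgray": float(hsv[:, :, 0].mean()),
            "X_SAgray": float(hsv[:, :, 1].mean()),
            "X_VAgray": float(hsv[:, :, 2].mean()),
            "X_RAgray": float(bgr_patch[:, :, 2].mean()),  # OpenCV stores images as BGR
            "X_GAgray": float(bgr_patch[:, :, 1].mean()),
            "X_BAgray": float(bgr_patch[:, :, 0].mean()),
            "X_Ratio": h / float(w),
        }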
When the conditions are mutually independent, X and Y are independent of each other, and then:
P(X, Y) = P(X)P(Y)
The formula of conditional probability is as follows:
P(Y|X) = P(X, Y)/P(X)
P(X|Y) = P(X, Y)/P(Y)
The formula of total probability:
P(X) = Σ_i P(X|Y_i)P(Y_i)
where Y_1, Y_2, … are the mutually exclusive classes.
Bayes' formula:
P(Y_i|X) = P(X|Y_i)P(Y_i) / Σ_j P(X|Y_j)P(Y_j)
Since 16 different features are used here, and assuming these 16 features are mutually independent during training, the vehicle and pedestrian to be recognized are denoted Y_car and Y_people respectively; by the above formulas, the classification of vehicles and pedestrians can be expressed as:
h_nb_car = P(X_per|Y_car) P(X_Area|Y_car) P(X_disp|Y_car) P(X_Numpoly|Y_car) P(X_Avegray|Y_car) P(X_grayerror|Y_car) P(X_tgd|Y_car) P(X_ConHull|Y_car) P(X_GPoint|Y_car) P(X_HAgray|Y_car) P(X_SAgray|Y_car) P(X_VAgray|Y_car) P(X_RAgray|Y_car) P(X_GAgray|Y_car) P(X_BAgray|Y_car) P(X_Ratio|Y_car) P(Y_car)
h_nb_people = P(X_per|Y_people) P(X_Area|Y_people) P(X_disp|Y_people) P(X_Numpoly|Y_people) P(X_Avegray|Y_people) P(X_grayerror|Y_people) P(X_tgd|Y_people) P(X_ConHull|Y_people) P(X_GPoint|Y_people) P(X_HAgray|Y_people) P(X_SAgray|Y_people) P(X_VAgray|Y_people) P(X_RAgray|Y_people) P(X_GAgray|Y_people) P(X_BAgray|Y_people) P(X_Ratio|Y_people) P(Y_people)
If h_nb_car > h_nb_people, the object is predicted to be a vehicle; otherwise it is predicted to be a pedestrian.
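As a non-limiting sketch of the classifier described above, the following Python/numpy code estimates per-feature interval (histogram-bin) probabilities from a training set and applies the h_nb_car vs. h_nb_people decision rule in log form; the bin count, the Laplace smoothing and the 0/1 class labels are illustrative assumptions, not values fixed by this description.

    # Sketch of the interval-probability (histogram-binned) naive Bayes classifier.
    # Bin count, Laplace smoothing and the 0 (car) / 1 (pedestrian) labels are illustrative.
    import numpy as np

    class BinnedNaiveBayes:
        def __init__(self, n_bins=10):
            self.n_bins = n_bins

        def fit(self, X, y):
            """X: (n_samples, 16) feature matrix; y: class labels."""
            X, y = np.asarray(X, float), np.asarray(y)
            self.classes_ = np.unique(y)
            self.edges_ = [np.histogram_bin_edges(X[:, j], bins=self.n_bins)
                           for j in range(X.shape[1])]
            self.priors_ = {c: float(np.mean(y == c)) for c in self.classes_}
            self.likelihood_ = {}
            for c in self.classes_:
                Xc = X[y == c]
                probs = []
                for j, edges in enumerate(self.edges_):
                    counts, _ = np.histogram(Xc[:, j], bins=edges)
                    probs.append((counts + 1.0) / (counts.sum() + self.n_bins))  # Laplace smoothing
                self.likelihood_[c] = probs
            return self

        def predict(self, X):
            X = np.asarray(X, float)
            preds = []
            for x in X:
                scores = {}
                for c in self.classes_:
                    log_p = np.log(self.priors_[c])       # corresponds to P(Y_car) / P(Y_people)
                    for j, edges in enumerate(self.edges_):
                        b = int(np.clip(np.digitize(x[j], edges) - 1, 0, self.n_bins - 1))
                        log_p += np.log(self.likelihood_[c][j][b])
                    scores[c] = log_p                     # log of h_nb; the larger score wins
                preds.append(max(scores, key=scores.get))
            return np.array(preds)

In such a sketch, the rows of X would be the 16-dimensional feature vectors extracted from the training-set bounding-rectangle pictures, and the fitted model would then be applied to the test set as in step S4.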
Step S2 further includes: applying local RGB and HSV color segmentation to the vehicle and pedestrian position pictures to obtain local segmentation pictures; performing an AND operation between the local segmentation pictures and the vehicle and pedestrian position pictures to obtain whole-frame pictures; applying HSV and RGB color segmentation to the whole-frame pictures; and performing an XOR operation between the segmented binary images (see Fig. 6, in which the black vehicle is stationary; because the HSV and RGB color segmentation is used during detection to detect stationary vehicles and pedestrians, the object region can be extracted, but the completeness of the region is limited) and the vehicle and pedestrian region images, the resulting binary image being the vehicle and pedestrian bounding-rectangle picture.
The improved inter-frame difference makes full use of local vehicle and pedestrian information and of the color information of vehicles and pedestrians, overcoming the missed detections of traditional inter-frame difference and its inability to detect stationary objects.
HSV is an inverted-cone color space model that describes the hue, saturation and brightness of a color. In the cone model, the angle represents the hue H, the saturation is denoted S and the brightness is denoted V.
In OpenCV, the value range of H is 0-180, and the value ranges of S and V are each 0-255. The principle of HSV color segmentation is as follows.
For the hue channel H:
f_H(x, y) = 1 if h(x, y) > T_H, otherwise f_H(x, y) = 0
For the saturation channel S:
f_S(x, y) = 1 if s(x, y) > T_S, otherwise f_S(x, y) = 0
For the brightness channel V:
f_V(x, y) = 1 if v(x, y) > T_V, otherwise f_V(x, y) = 0
An XOR operation is performed on the three channels:
f_hsv(x, y) = f_H(x, y) XOR f_S(x, y) XOR f_V(x, y)
where h(x, y) is the gray value of the H channel, T_H is the segmentation threshold of the H channel and f_H is the binary image after H-channel segmentation; s(x, y) is the gray value of the S channel, T_S is the segmentation threshold of the S channel and f_S is the binary image after S-channel segmentation; v(x, y) is the gray value of the V channel, T_V is the segmentation threshold of the V channel and f_V is the binary image after V-channel segmentation. f_hsv is the binary picture obtained by XOR-ing the three channels, i.e. the binary picture after HSV segmentation.
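The following Python/OpenCV sketch mirrors the per-channel thresholding and XOR combination described above; the concrete threshold values T_H, T_S and T_V are illustrative assumptions, since none are specified here.

    # Sketch of HSV per-channel threshold segmentation combined by XOR (f_hsv).
    # The thresholds are illustrative; this description does not fix T_H, T_S, T_V.
    import cv2

    def hsv_segmentation(bgr_patch, t_h=90, t_s=100, t_v=100):
        hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
        h, s, v = cv2.split(hsv)                  # in OpenCV: H in [0, 180], S and V in [0, 255]
        _, f_h = cv2.threshold(h, t_h, 255, cv2.THRESH_BINARY)
        _, f_s = cv2.threshold(s, t_s, 255, cv2.THRESH_BINARY)
        _, f_v = cv2.threshold(v, t_v, 255, cv2.THRESH_BINARY)
        return cv2.bitwise_xor(cv2.bitwise_xor(f_h, f_s), f_v)   # f_hsv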
The principle of RGB color segmentation is similar to that of HSV segmentation; only the meaning of the channels differs: R represents the red channel, G the green channel and B the blue channel.
Based on the contour, gray-level and color characteristics of vehicles and pedestrians, 16 vehicle and pedestrian features are defined; using the collected samples, vehicles and pedestrians are rapidly classified by the Bayes classifier, and the recognition is more real-time than that of an SVM (support vector machine) classifier.
The embodiments described above are only preferred embodiments given to fully illustrate the present invention, and the protection scope of the present invention is not limited thereto. Equivalent substitutions or transformations made by those skilled in the art on the basis of the present invention fall within the protection scope of the present invention. The protection scope of the present invention is defined by the claims.

Claims (8)

1. A vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier, characterized in that it comprises the following steps:
S1. Continuously capture road images containing vehicles and pedestrians in the monitored area, and extract the road area of each frame by applying binary segmentation to the road images;
S2. Apply traditional inter-frame difference processing to the road images to obtain vehicle and pedestrian position pictures, then apply local inter-frame difference processing to the position pictures to obtain vehicle and pedestrian region images, and from these obtain the bounding-rectangle pictures of the vehicles and pedestrians;
S3. Divide the obtained vehicle and pedestrian bounding-rectangle pictures into a training set and a test set, and feed the training set into a naive Bayes classifier for training to obtain a trained vehicle and pedestrian recognition model;
S4. Feed the test set into the trained vehicle and pedestrian recognition model, which then recognizes the vehicles and pedestrians in the road images.
2. The vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier according to claim 1, characterized in that, in step S1, the binary segmentation of the road images is specifically: first applying binary segmentation to multiple road images captured at different times, filling holes in the segmented binary images, extracting the contours of the road area, saving the largest road contour in each image and sorting these road contours, using the largest road contour as the mask image of the road area, and then extracting the road area in each frame through the road mask image.
3. The vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier according to claim 1, characterized in that, in step S2, the traditional inter-frame difference processing is specifically: after binarizing the road images, applying a morphological closing operation, then performing hole filling and morphological dilation and erosion, finally extracting the vehicle and pedestrian contours within the road area, and using the vehicle and pedestrian contours in the road area to obtain the vehicle and pedestrian position pictures.
4. The vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier according to claim 1, characterized in that, in step S2, the local inter-frame difference processing is specifically: obtaining the vehicle and pedestrian position pictures through the traditional inter-frame difference processing, enlarging and saving the positions of the vehicles and pedestrians in the pictures to obtain locally widened pictures, and, when detecting vehicles and pedestrians in the next frame of the road images, performing inter-frame difference between the vehicle and pedestrian position pictures and the locally widened pictures to obtain the vehicle and pedestrian bounding-rectangle pictures.
5. The vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier according to claim 1, characterized in that, in step S3, several vehicle and pedestrian features are used in the training process of the naive Bayes classifier, and the distribution probability of each feature over different intervals is determined from the vehicle and pedestrian bounding-rectangle pictures in the training set.
6. The vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier according to claim 5, characterized in that the vehicle and pedestrian features are the region perimeter X_per, the region area X_Area, the dispersion X_disp, the number of polygon vertices X_Numpoly, the gray mean X_Avegray, the gray mean square error X_grayerror, the gray trisection value X_tgd, the convex hull number X_ConHull, the point set number X_GPoint, the H-channel gray mean X_HAgray, the S-channel gray mean X_SAgray, the V-channel gray mean X_VAgray, the R-channel gray mean X_RAgray, the G-channel gray mean X_GAgray, the B-channel gray mean X_BAgray, and the geometric ratio X_Ratio.
7. The vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier according to claim 6, characterized in that the vehicle and pedestrian to be recognized are denoted Y_car and Y_people respectively, and the classification of vehicles and pedestrians can then be expressed as:
h_nb_car = P(X_per|Y_car) P(X_Area|Y_car) P(X_disp|Y_car) P(X_Numpoly|Y_car) P(X_Avegray|Y_car) P(X_grayerror|Y_car) P(X_tgd|Y_car) P(X_ConHull|Y_car) P(X_GPoint|Y_car) P(X_HAgray|Y_car) P(X_SAgray|Y_car) P(X_VAgray|Y_car) P(X_RAgray|Y_car) P(X_GAgray|Y_car) P(X_BAgray|Y_car) P(X_Ratio|Y_car) P(Y_car)
h_nb_people = P(X_per|Y_people) P(X_Area|Y_people) P(X_disp|Y_people) P(X_Numpoly|Y_people) P(X_Avegray|Y_people) P(X_grayerror|Y_people) P(X_tgd|Y_people) P(X_ConHull|Y_people) P(X_GPoint|Y_people) P(X_HAgray|Y_people) P(X_SAgray|Y_people) P(X_VAgray|Y_people) P(X_RAgray|Y_people) P(X_GAgray|Y_people) P(X_BAgray|Y_people) P(X_Ratio|Y_people) P(Y_people)
If h_nb_car > h_nb_people, the object is predicted to be a vehicle; otherwise it is predicted to be a pedestrian.
8. The vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier according to claim 1, characterized in that step S2 further includes: applying local RGB and HSV color segmentation to the vehicle and pedestrian position pictures to obtain local segmentation pictures, performing an AND operation between the local segmentation pictures and the vehicle and pedestrian position pictures to obtain whole-frame pictures, applying HSV and RGB color segmentation to the whole-frame pictures, and performing an XOR operation between the segmented binary images and the vehicle and pedestrian region images, the resulting binary images being the vehicle and pedestrian bounding-rectangle pictures.
CN201910510558.7A 2019-06-13 2019-06-13 Vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier Pending CN110263693A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910510558.7A CN110263693A (en) 2019-06-13 2019-06-13 Vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier
PCT/CN2019/119689 WO2020248515A1 (en) 2019-06-13 2019-11-20 Vehicle and pedestrian detection and recognition method combining inter-frame difference and bayes classifier

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910510558.7A CN110263693A (en) 2019-06-13 2019-06-13 Vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier

Publications (1)

Publication Number Publication Date
CN110263693A true CN110263693A (en) 2019-09-20

Family

ID=67918060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910510558.7A Pending CN110263693A (en) 2019-06-13 2019-06-13 Vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier

Country Status (2)

Country Link
CN (1) CN110263693A (en)
WO (1) WO2020248515A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020248515A1 (en) * 2019-06-13 2020-12-17 苏州玖物互通智能科技有限公司 Vehicle and pedestrian detection and recognition method combining inter-frame difference and bayes classifier
CN113052037A (en) * 2021-03-16 2021-06-29 蔡勇 Method for judging moving vehicle and human shape by adopting AI technology

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766141A (en) * 2020-12-31 2021-05-07 北京中科晶上科技股份有限公司 Method and system for detecting foreign matters in tobacco wrapping equipment
CN113221653A (en) * 2021-04-09 2021-08-06 浙江工业大学 Mask-RCNN-based non-motor vehicle driver front and back matching method
CN116758081B (en) * 2023-08-18 2023-11-17 安徽乾劲企业管理有限公司 Unmanned aerial vehicle road and bridge inspection image processing method
CN117636482B (en) * 2024-01-26 2024-04-09 东莞市杰瑞智能科技有限公司 Visual detection system for urban road personnel behavior

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101165720A (en) * 2007-09-18 2008-04-23 湖南大学 Medical large transfusion machine vision on-line detection method
CN103473547A (en) * 2013-09-23 2013-12-25 百年金海科技有限公司 Vehicle target recognizing algorithm used for intelligent traffic detecting system
CN105631414A (en) * 2015-12-23 2016-06-01 上海理工大学 Vehicle-borne multi-obstacle classification device and method based on Bayes classifier
CN108596129A (en) * 2018-04-28 2018-09-28 武汉盛信鸿通科技有限公司 A kind of vehicle based on intelligent video analysis technology gets over line detecting method
CN108615365A (en) * 2018-05-09 2018-10-02 扬州大学 A kind of statistical method of traffic flow based on vehicle detection and tracking

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101032726B1 (en) * 2009-09-01 2011-05-06 엘지이노텍 주식회사 eye state detection method
CN107122734A (en) * 2017-04-25 2017-09-01 武汉理工大学 A kind of moving vehicle detection algorithm based on machine vision and machine learning
CN110263693A (en) * 2019-06-13 2019-09-20 苏州元联传感技术有限公司 Vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘红等 (Liu Hong et al.): "An improved three-frame-difference moving object detection", Journal of Anhui University (Natural Science Edition) *
周文静等 (Zhou Wenjing et al.): "Based on improved inter-frame difference and local camshift", Software Guide *

Also Published As

Publication number Publication date
WO2020248515A1 (en) 2020-12-17

Similar Documents

Publication Publication Date Title
CN110263693A (en) Vehicle and pedestrian detection and recognition method combining inter-frame difference and a Bayes classifier
TWI409718B (en) Method of locating license plate of moving vehicle
CN103761529B (en) Open flame detection method and system based on multi-color model and rectangular features
CN102880863B (en) Method for positioning license number and face of driver on basis of deformable part model
CN104298969B (en) Crowd size statistics method based on color and HAAR feature fusion
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN104978567B (en) Vehicle detection method based on scene classification
CN102214309B (en) Special human body recognition method based on head and shoulder model
CN106339657B (en) Crop straw burning monitoring method and device based on surveillance video
CN106886778B (en) License plate character segmentation and recognition method in monitoring scene
CN102622584B (en) Method for detecting mask faces in video monitor
CN109101924A (en) Pavement marking recognition method based on machine learning
CN105404857A (en) Infrared-based night intelligent vehicle front pedestrian detection method
CN105069816B (en) Method and system for entrance and exit pedestrian flow statistics
CN109255326A (en) Intelligent smoke detection method for traffic scenes based on multi-dimensional information feature fusion
Wali et al. Shape matching and color segmentation based traffic sign detection system
Su et al. A new local-main-gradient-orientation HOG and contour differences based algorithm for object classification
Do et al. Speed limit traffic sign detection and recognition based on support vector machines
CN109190455A (en) Black smoke vehicle recognition method based on Gaussian mixture and autoregressive moving-average models
Sheng et al. Real-time anti-interference location of vehicle license plates using high-definition video
CN109325426B (en) Black smoke vehicle detection method based on three orthogonal planes time-space characteristics
Laroca et al. A first look at dataset bias in license plate recognition
Abdullah et al. Vehicles detection system at different weather conditions
CN110866435B (en) Far infrared pedestrian training method for self-similarity gradient orientation histogram
Hommos et al. Hd Qatari ANPR system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 215000 Building B1, 1st Floor, Dongfang Chuangzhi Park, No. 18 Jinfang Road, Suzhou Industrial Park, Jiangsu Province

Applicant after: Suzhou Jiuwu Interchange Intelligent Technology Co., Ltd.

Address before: No. 456 Puhui Road, Suzhou Industrial Park, Jiangsu Province, 215000

Applicant before: Suzhou Yuan Lian Sensing Technology Co., Ltd.

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190920