CN102622582B - Road pedestrian event detection method based on video - Google Patents

Road pedestrian event detection method based on video

Info

Publication number
CN102622582B
CN102622582B (application CN201210039553.9A)
Authority
CN
China
Prior art keywords
target
block
frame
frame image
pedestrian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210039553.9A
Other languages
Chinese (zh)
Other versions
CN102622582A (en)
Inventor
宋焕生
付洋
朱小平
杨孟拓
陈艳
刘童
施春宁
赵倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University
Priority to CN201210039553.9A
Publication of CN102622582A
Application granted
Publication of CN102622582B
Status: Expired - Fee Related

Abstract

The invention discloses a video-based road pedestrian event detection method. The method comprises three main stages: block-based binary segmentation, which separates targets from the background in each frame of the video to be processed; connected-domain labeling of the blocks, which isolates pedestrian targets by the shape characteristics of the labeled targets and records their centroid positions; and centroid-based matching, which tracks the pedestrian targets and computes their speed so as to judge whether a pedestrian is crossing the road. Compared with the prior art, the method detects all pedestrian targets within the video range and operates on real-time video; it is not limited by the environment, requires little detection time, is easy to implement, achieves high accuracy, is well suited to real-time pedestrian event detection, and has broad application prospects.

Description

Video-based road pedestrian event detection method
Technical field
The invention belongs to the field of video detection technology, and specifically relates to a road pedestrian event detection method based on traffic video.
Background
A road pedestrian event refers to the behavior of a pedestrian entering a motorway without any protective measures and obstructing the normal passage of motor vehicles. Although traffic management departments have taken countermeasures, pedestrians still intrude onto motorways from time to time, and the phenomenon is especially evident in urban traffic. This behavior is highly dangerous: it easily causes congestion and can even lead to traffic accidents, bringing inconvenience and danger to people's lives. Traditional pedestrian event detection methods mainly include temperature detection, inductive-loop detection, and digital video detection. Temperature detection is easily disturbed by vehicles, while inductive loops scale poorly and require traffic to be suspended and the road surface to be broken up for installation and maintenance, so these methods have not been widely adopted in practice.
New projects increasingly adopt video-based traffic information detection, which requires no damage to the roadbed for installation or maintenance, covers a large detection area, and is convenient and flexible to deploy. Video-based pedestrian detection has therefore become a research focus; existing methods include neural-network-based pedestrian detection and wavelet-transform-based template matching. Although these methods can raise pedestrian event alarms, their processing of video data is complex and unreliable and cannot meet the real-time requirements of practical applications.
Summary of the invention
In view of the defects or deficiencies of the prior art, the object of the invention is to provide a video-based road pedestrian event detection method that can detect all pedestrian events within the video range reliably and in real time.
In order to achieve the above objective, the present invention adopts the following technical solution:
A video-based road pedestrian event detection method, characterized in that the method is implemented according to the following steps:
Step 1: a camera geometric calibration method is used to establish the mapping relation from image pixels to actual road-surface distances, i.e. a mapping table.
Step 2: the first frame image and the background image are both divided into a plurality of blocks under the same block coordinate system.
Step 3: for each block of the first frame image, the background block at the same position is found in the background image, and the sum of the absolute grey-level differences between corresponding pixels of the block and its background block is computed.
When this sum is greater than the set threshold, the block is a target block, and the grey value of all pixels inside the block is set to 255.
When this sum is less than or equal to the set threshold, the block is a background block, and the grey value of all pixels inside the block is set to 0.
The background and the targets in the first frame image are thereby separated, yielding the binary image of the first frame.
Step 4: the binary image of the first frame is scanned block by block from left to right and top to bottom, adjacent target blocks are labeled as the same target, and the height and width of each labeled target are computed. When the height-to-width ratio falls within a given threshold range, a target structure is dynamically created for the labeled target, recording its upper, lower, left and right boundaries, its current centroid position and original centroid position, and a match-tracking counter, which is initialized to zero.
Step 5: the second frame image is processed according to steps 2, 3 and 4. Taking the centroid position recorded for the first frame as the reference, it is compared with the centroid positions of the targets recorded in the second frame. When the absolute difference of the two centroid positions is smaller than a given threshold, the target of the first frame is considered matched in the second frame; the boundaries and current centroid position recorded for the first frame are replaced by those of the current frame, the original centroid position is kept unchanged, and the match-tracking counter is incremented by 1.
When no matching target is found in the second frame, the record of the first frame is discarded.
Step 6: the third frame image through the m-th frame image are processed by repeating steps 2, 3, 4 and 5.
Step 7: when the match-tracking counter recorded in the structure exceeds a given threshold, the actual road-surface distances corresponding to the current centroid position and the original centroid position are obtained from the mapping table established in step 1, the displacement of the target is computed, and, with the counter value giving the total number of frames, the speed of the target is obtained. When the speed falls within a given threshold range, the target is judged to be a pedestrian.
Wherein:
The threshold in step 3 ranges from 10 x the block area to 20 x the block area.
The threshold range in step 4 is 2 to 10.
The threshold in step 5 is 5 block lengths.
The m in step 6 is a natural number with 40 <= m <= 60.
The match-tracking counter threshold in step 7 is 40 frames, and the speed threshold range is 0.3 m/s to 2.0 m/s.
Compared with the prior art, the video-based road pedestrian event detection method of the present invention detects all pedestrian targets within the video range, is not limited by the environment, works on real-time video, requires little detection time, is easy to implement, and achieves high accuracy; it is well suited to real-time pedestrian event detection and has broad application prospects.
Brief Description of the Drawings
Fig. 1 is the first frame of the video;
Fig. 2 is a schematic diagram of connected-domain labeling;
Fig. 3 is the binarized label image of the first frame, in which the white region is the binarized labeled target of the current frame and the white box is the boundary of this labeled target;
Fig. 4 is the second frame of the video;
Fig. 5 is the binarized label image of the second frame, in which the white box around the binarized labeled target is the boundary of this labeled target in the current frame, and the other white box is the boundary of the target labeled in the first frame;
Fig. 6 is the 20th frame of the video;
Fig. 7 is the binarized label image of the 40th frame, in which the white box around the binarized labeled target is the boundary of this labeled target in the current frame, and the remaining white boxes are the boundaries of the targets labeled in the previous 39 frames.
The present invention is described in further detail below with reference to the drawings and embodiments.
Embodiment
In the video-based road pedestrian event detection method of the present invention, the images processed are the first frame image, the second frame image, the third frame image, ..., the m-th frame image (m a natural number) of the video in forward time order.
The method is specifically realized by the following steps:
Step 1: a camera geometric calibration method is used. Based on the camera imaging principle, a geometric model is abstracted and the functional relation between a change in image pixels and the corresponding change in actual road-surface distance is derived, thereby establishing the mapping relation from image pixels to actual road-surface distances, i.e. a mapping table.
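The patent does not fix a particular calibration model for step 1. Purely as a minimal sketch, assuming the calibration yields a 3x3 image-to-road-plane homography H (an illustrative assumption, not stated in the patent), the pixel-to-distance mapping table could be precomputed in Python as follows; the function name build_pixel_to_distance_table is likewise hypothetical.

import numpy as np

def build_pixel_to_distance_table(H, width, height):
    # Project every image pixel (u, v) onto the road plane through the
    # assumed calibration homography H, and store the road-surface
    # distance (in metres) from the reference origin for that pixel.
    table = np.zeros((height, width), dtype=np.float32)
    for v in range(height):
        for u in range(width):
            p = H @ np.array([u, v, 1.0])
            x, y = p[0] / p[2], p[1] / p[2]   # road-plane coordinates (m)
            table[v, u] = np.hypot(x, y)      # distance to the origin (m)
    return table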
Step 2: the first frame image and the background image are both divided into a plurality of blocks under the same block coordinate system. The number of blocks T into which the first frame is divided is T = (W/w) x (H/h), where the frame size is W x H and the block size is w x h;
W is the number of pixels in the horizontal direction of the image, H is the number of pixels in the vertical direction, w is the width of a block, and h is the height of a block.
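As a sketch of this block partition (assuming, as in the embodiment's 720 x 288 frame with 8 x 6 blocks, that W and H are exact multiples of w and h), the frame can be reshaped into a (H/h) x (W/w) grid of blocks:

import numpy as np

def split_into_blocks(frame, w, h):
    # Reshape an H x W grey-level frame into a (H/h) x (W/w) grid of
    # non-overlapping h x w blocks, i.e. T = (W/w) * (H/h) blocks in total.
    H_img, W_img = frame.shape
    return frame.reshape(H_img // h, h, W_img // w, w).swapaxes(1, 2)

For a 720 x 288 frame with 8 x 6 blocks, split_into_blocks(frame, 8, 6) returns an array of shape (48, 90, 6, 8), i.e. the 90 x 48 block grid of the embodiment.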
Step 3: for each block of the first frame image, the background block at the same position, i.e. with the same block coordinates, is found in the background image, and the sum of the absolute grey-level differences between corresponding pixels of the block and its background block is computed.
When this sum is greater than the set threshold, the block is a target block and the grey value of all pixels inside it is set to 255; the threshold ranges from 10 x the block area to 20 x the block area, i.e. 10 x (w x h) to 20 x (w x h).
When this sum is less than or equal to the set threshold, the block is a background block and the grey value of all pixels inside it is set to 0.
The background and the targets in the first frame image are thereby separated, yielding the binary image of the first frame.
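A minimal sketch of this block-wise segmentation, using 12 x (w x h) as one illustrative threshold inside the 10 to 20 x block-area range given above:

import numpy as np

def binarize_blocks(frame, background, w, h, factor=12):
    # Step 3 sketch: a block whose summed absolute grey-level difference
    # against the co-located background block exceeds factor * (w * h)
    # is marked as a target block (255); otherwise it is background (0).
    thr = factor * w * h
    out = np.zeros_like(frame, dtype=np.uint8)
    H_img, W_img = frame.shape
    for y in range(0, H_img, h):
        for x in range(0, W_img, w):
            diff = np.abs(frame[y:y+h, x:x+w].astype(np.int32)
                          - background[y:y+h, x:x+w].astype(np.int32)).sum()
            out[y:y+h, x:x+w] = 255 if diff > thr else 0
    return out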
Step 4: connected-domain labeling is applied to the binary image.
The binary image of the first frame is traversed block by block from left to right and top to bottom, and adjacent target blocks are labeled as the same target. The height h0 and width w0 of each labeled target are computed; when they satisfy 2 <= (h0/w0) <= 10, a labeled-target structure is created for the target, recording its current centroid position (x1, y1) and original centroid position (x0, y0), and a match-tracking counter is set up for the target and initialized to zero.
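A sketch of this labeling and filtering stage, using 8-connected labeling from scipy; the dictionary layout of the labeled-target structure is an illustrative assumption, not the patent's data structure:

import numpy as np
from scipy import ndimage

def label_pedestrian_candidates(binary_img):
    # Step 4 sketch: 8-connected labeling of target blocks; keep regions
    # whose height/width ratio lies in [2, 10] and record the boundary,
    # the centroid and a match-tracking counter for each candidate.
    labels, n = ndimage.label(binary_img > 0, structure=np.ones((3, 3)))
    candidates = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        h0 = ys.max() - ys.min() + 1
        w0 = xs.max() - xs.min() + 1
        if 2 <= h0 / w0 <= 10:
            centroid = (float(xs.mean()), float(ys.mean()))
            candidates.append({
                "bbox": (int(xs.min()), int(xs.max()), int(ys.min()), int(ys.max())),
                "centroid": centroid,   # current centroid (x1, y1)
                "origin": centroid,     # original centroid (x0, y0), never updated
                "matches": 0,           # match-tracking counter
            })
    return candidates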
Step 5: the second frame image is processed according to steps 2, 3 and 4. Taking the centroid position (x1, y1) recorded for the first frame as the reference, the difference with the centroid position (x2, y2) of a target recorded in the second frame is computed. When the sum of the absolute differences satisfies
|x2 - x1| + |y2 - y1| <= 5 x (block length),
the target labeled in the first frame is considered matched and tracked in the second frame: the centroid position (x2, y2) of the current-frame target replaces (x1, y1), the original centroid position (x0, y0) is kept unchanged, and the match-tracking counter is incremented by 1.
When no matching target is found in the second frame, the record of the first frame is discarded.
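A minimal sketch of this centroid matching, assuming (as the reconstructed condition above does) that the distance measure is the sum of absolute centroid differences compared against 5 block lengths:

def match_targets(prev_targets, curr_targets, block_len):
    # Step 5 sketch: a previous-frame target is matched to the first
    # current-frame candidate whose centroid lies within 5 block lengths
    # (L1 distance); unmatched previous-frame records are discarded.
    kept = []
    for prev in prev_targets:
        x1, y1 = prev["centroid"]
        for curr in curr_targets:
            x2, y2 = curr["centroid"]
            if abs(x2 - x1) + abs(y2 - y1) <= 5 * block_len:
                prev["centroid"] = curr["centroid"]  # update current centroid
                prev["bbox"] = curr["bbox"]
                prev["matches"] += 1                 # increment the counter
                kept.append(prev)                    # "origin" stays unchanged
                break
    return kept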
Step 6: the third frame image through the m-th frame image are processed according to steps 2, 3, 4 and 5.
Step 7: when the match-tracking counter recorded in a labeled-target structure accumulates to 40 frames, the mapping table is consulted to obtain the actual distances L1 and L2 corresponding to the current centroid position (x40, y40) and the original centroid position (x1, y1), and the displacement of the target is computed as S = |L1 - L2|. The counter value is 40 frames and standard video runs at 25 fps, so the average speed of the labeled target over the 40 frames is V = S / (40 x 1/25). When the speed of the labeled target satisfies 0.3 <= V <= 2.0 (unit: m/s), the target is judged to be a pedestrian event.
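A sketch of this speed test, assuming the mapping table of step 1 is stored as a per-pixel array of road-surface distances (the pixel_to_dist layout is an illustrative assumption):

def is_pedestrian(target, pixel_to_dist, fps=25.0, frames=40):
    # Step 7 sketch: displacement between the original and current centroids,
    # converted to metres through the calibration table, divided by the
    # elapsed time (40 frames at 25 fps = 1.6 s), then compared with the
    # pedestrian speed range 0.3 to 2.0 m/s.
    x0, y0 = target["origin"]
    x1, y1 = target["centroid"]
    L1 = pixel_to_dist[int(round(y1)), int(round(x1))]
    L2 = pixel_to_dist[int(round(y0)), int(round(x0))]
    S = abs(L1 - L2)            # road-surface displacement in metres
    V = S / (frames / fps)      # average speed over the tracked frames
    return 0.3 <= V <= 2.0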
With reference to Fig. 2, the connected-domain labeling used in the above steps is illustrated. In the figure, the leftmost and rightmost axes are the vertical and horizontal image coordinates, and the binary image contains two object regions. The binary image is scanned block by block from left to right and top to bottom, and 8-connected-domain discrimination labels spatially adjacent target blocks as the same target, such as labeled targets a and b in the figure. The labeling yields the height, width and centroid position of each labeled target; labeled target b satisfies the condition that its height-to-width ratio is greater than 2 and less than 10, indicating that it may be a pedestrian target.
With reference to Figs. 3 and 5, the match tracking used in the above steps is illustrated. In Fig. 3, the white box around the binarized labeled target represents the boundary of the target labeled in the first frame image; in Fig. 5, the white box around the binarized labeled target represents the boundary of the target labeled in the second frame, and the other white box is the boundary of the target labeled in the first frame. The boundary centroid position of the labeled target in Fig. 3 is (x1, y1) and that in Fig. 5 is (x2, y2). When the two centroid positions satisfy
|x2 - x1| + |y2 - y1| <= 5 x (block length),
the labeled target in Fig. 5 is matched with the labeled target in Fig. 3, i.e. the two are the same labeled target, and the centroid position of the current-frame target becomes (x2, y2). By finding, for each labeled target appearing in the third frame image, ..., the m-th frame image, the matching target of the previous frame in turn, match tracking of the target is achieved.
The following is a specific embodiment provided by the inventors.
Embodiment:
In the video played forward, the pedestrian target first appears completely in the 1st frame image, as shown in Fig. 1; Fig. 4 is the second frame image and Fig. 6 is the 40th frame image. In the embodiment, the sampling rate of the video is 25 frames per second, the frame size is 720 x 288, the size of each block region is 8 x 6, the frame is divided into 90 x 48 block regions, and the binarization segmentation threshold for target regions is 576. These figures are consistent with the formulas above: T = (720/8) x (288/6) = 90 x 48 blocks, and 576 = 12 x (8 x 6) lies inside the 10 to 20 x block-area range of step 3. The first frame through the 40th frame are processed in turn according to the method of the present invention.
As can be seen from Fig. 7, the white boxes in the figure are, in order, the boundaries of the labeled pedestrian target in the first through the 40th frame images, the white box containing the binarized labeled target being the boundary of the labeled target in the 40th frame image. The pedestrian target has thus been matched and tracked 39 times; its average speed can be computed from the original centroid position obtained in the first frame image and the current centroid position obtained in the 40th frame, and if this average speed satisfies the pedestrian speed characteristic, the matched and tracked target is a pedestrian.
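Tying the sketches above together, a hypothetical end-to-end loop over such a 25 fps, 720 x 288 video could look as follows; the file name road.avi, the identity calibration homography and the all-zero background image are placeholders, not part of the patent:

import cv2
import numpy as np

H_calib = np.eye(3)                                   # placeholder calibration
dist_table = build_pixel_to_distance_table(H_calib, 720, 288)
background = np.zeros((288, 720), dtype=np.uint8)     # placeholder background
tracked = []
cap = cv2.VideoCapture("road.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    binary = binarize_blocks(gray, background, 8, 6)
    candidates = label_pedestrian_candidates(binary)
    tracked = match_targets(tracked, candidates, block_len=8) if tracked else candidates
    for t in tracked:
        if t["matches"] >= 40 and is_pedestrian(t, dist_table):
            print("pedestrian event detected")
cap.release()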

Claims (1)

1. A video-based road pedestrian event detection method, characterized in that the method is implemented through the following steps:
Step 1: a camera geometric calibration method is used; based on the camera imaging principle, a geometric model is abstracted and the functional relation between a change in image pixels and the corresponding change in actual road-surface distance is derived, thereby establishing the mapping relation from image pixels to actual road-surface distances, i.e. a mapping table;
Step 2: the first frame image and the background image are both divided into a plurality of blocks under the same block coordinate system; the number of blocks T into which the first frame is divided is T = (W/w) x (H/h), where the frame size is W x H and the block size is w x h;
W is the number of pixels in the horizontal direction of the image, H is the number of pixels in the vertical direction, w is the width of a block, and h is the height of a block;
Step 3: for each block of the first frame image, the background block at the same position, i.e. with the same block coordinates, is found in the background image, and the sum of the absolute grey-level differences between corresponding pixels of the block and its background block is computed;
when this sum is greater than the set threshold, the block is a target block and the grey value of all pixels inside it is set to 255, the threshold ranging from 10 x the block area to 20 x the block area, i.e. 10 x (w x h) to 20 x (w x h);
when this sum is less than or equal to the set threshold, the block is a background block and the grey value of all pixels inside it is set to 0;
the background and the targets in the first frame image are thereby separated, yielding the binary image of the first frame;
Step 4: connected-domain labeling is applied to the binary image:
the binary image of the first frame is traversed block by block from left to right and top to bottom, adjacent target blocks are labeled as the same target, and the height h0 and width w0 of each labeled target are computed; when they satisfy 2 <= (h0/w0) <= 10, a labeled-target structure is created for the target, recording its current centroid position (x1, y1) and original centroid position (x0, y0), and a match-tracking counter is set up for the target and initialized to zero;
Step 5: the second frame image is processed according to steps 2, 3 and 4; taking the centroid position (x1, y1) recorded for the first frame as the reference, the difference with the centroid position (x2, y2) of a target recorded in the second frame is computed; when the sum of the absolute differences satisfies
|x2 - x1| + |y2 - y1| <= 5 x (block length),
the target labeled in the first frame is considered matched and tracked in the second frame: the centroid position (x2, y2) of the current-frame target replaces (x1, y1), the original centroid position (x0, y0) is kept unchanged, and the match-tracking counter is incremented by 1;
when no matching target is found in the second frame, the record of the first frame is discarded;
Step 6: the third frame image through the m-th frame image are processed by repeating steps 2, 3, 4 and 5;
Step 7: when the match-tracking counter recorded in a labeled-target structure accumulates to 40 frames, the mapping table is consulted to obtain the actual distances L1 and L2 corresponding to the current centroid position (x40, y40) and the original centroid position (x1, y1), the displacement of the target is computed as S = |L1 - L2|, the counter value is 40 frames, standard video runs at 25 fps, and the average speed of the labeled target over the 40 frames is V = S / (40 x 1/25); when the speed of the labeled target satisfies 0.3 m/s <= V <= 2.0 m/s, the target is judged to be a pedestrian event.
CN201210039553.9A 2012-02-21 2012-02-21 Road pedestrian event detection method based on video Expired - Fee Related CN102622582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210039553.9A CN102622582B (en) 2012-02-21 2012-02-21 Road pedestrian event detection method based on video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210039553.9A CN102622582B (en) 2012-02-21 2012-02-21 Road pedestrian event detection method based on video

Publications (2)

Publication Number Publication Date
CN102622582A CN102622582A (en) 2012-08-01
CN102622582B (en) 2014-04-30

Family

ID=46562492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210039553.9A Expired - Fee Related CN102622582B (en) 2012-02-21 2012-02-21 Road pedestrian event detection method based on video

Country Status (1)

Country Link
CN (1) CN102622582B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103150550B (en) * 2013-02-05 2015-10-28 长安大学 A kind of road pedestrian event detection method based on gripper path analysis
KR101480348B1 (en) * 2013-05-31 2015-01-09 삼성에스디에스 주식회사 People Counting Apparatus and Method
CN104112118B (en) * 2014-06-26 2017-09-05 大连民族学院 Method for detecting lane lines for Lane Departure Warning System
US10192319B1 (en) * 2017-07-27 2019-01-29 Nanning Fugui Precision Industrial Co., Ltd. Surveillance method and computing device using the same

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101388145A (en) * 2008-11-06 2009-03-18 北京汇大通业科技有限公司 Auto alarming method and device for traffic safety
CN101799968A (en) * 2010-01-13 2010-08-11 任芳 Detection method and device for oil well intrusion based on video image intelligent analysis
KR101030257B1 (en) * 2009-02-17 2011-04-22 한남대학교 산학협력단 Method and System for Vision-Based People Counting in CCTV

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101388145A (en) * 2008-11-06 2009-03-18 北京汇大通业科技有限公司 Auto alarming method and device for traffic safety
KR101030257B1 (en) * 2009-02-17 2011-04-22 한남대학교 산학협력단 Method and System for Vision-Based People Counting in CCTV
CN101799968A (en) * 2010-01-13 2010-08-11 任芳 Detection method and device for oil well intrusion based on video image intelligent analysis

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
胡建华 et al., "Detection and recognition of vehicles and pedestrians in traffic monitoring systems", Electronic Measurement Technology (电子测量技术), 2007-01-31, pp. 16-17, 71 *
常好丽 et al., "Moving pedestrian detection and tracking method based on monocular vision", Journal of Traffic and Transportation Engineering (交通运输工程学报), 2006-06-30, pp. 55-59 *
姚亚夫 et al., "Moving pedestrian detection and tracking method based on position features", Journal of Guangxi University: Natural Science Edition (广西大学学报:自然科学版), 2009 *
常好丽 et al., "Moving pedestrian detection and tracking method based on monocular vision", Journal of Traffic and Transportation Engineering (交通运输工程学报), 2006
胡建华 et al., "Detection and recognition of vehicles and pedestrians in traffic monitoring systems", Electronic Measurement Technology (电子测量技术), 2007

Also Published As

Publication number Publication date
CN102622582A (en) 2012-08-01

Similar Documents

Publication Publication Date Title
CN102622886B (en) Video-based method for detecting violation lane-changing incident of vehicle
Luvizon et al. A video-based system for vehicle speed measurement in urban roadways
CN103117005B (en) Lane deviation warning method and system
CN103324913B (en) A kind of pedestrian event detection method of Shape-based interpolation characteristic sum trajectory analysis
US7046822B1 (en) Method of detecting objects within a wide range of a road vehicle
CN101916383B (en) Vehicle detecting, tracking and identifying system based on multi-camera
CN104575003B (en) A kind of vehicle speed detection method based on traffic surveillance videos
CN102915433B (en) Character combination-based license plate positioning and identifying method
Chen et al. Next generation map making: Geo-referenced ground-level LIDAR point clouds for automatic retro-reflective road feature extraction
CN103150549B (en) A kind of road tunnel fire detection method based on the early stage motion feature of smog
CN110379168B (en) Traffic vehicle information acquisition method based on Mask R-CNN
Yaghoobi Ershadi et al. Robust vehicle detection in different weather conditions: Using MIPM
CN105005771A (en) Method for detecting full line of lane based on optical flow point locus statistics
CN104239867A (en) License plate locating method and system
CN102622582B (en) Road pedestrian event detection method based on video
Luo et al. Multiple lane detection via combining complementary structural constraints
Lee et al. Clustering learning model of CCTV image pattern for producing road hazard meteorological information
CN112084900A (en) Underground garage random parking detection method based on video analysis
JP2007316685A (en) Traveling path boundary detection device and traveling path boundary detection method
CN109117702A (en) The detection and count tracking method and system of target vehicle
Janda et al. Road boundary detection for run-off road prevention based on the fusion of video and radar
Wu et al. Adjacent lane detection and lateral vehicle distance measurement using vision-based neuro-fuzzy approaches
Chen et al. A precise information extraction algorithm for lane lines
Yeshwanth et al. Estimation of intersection traffic density on decentralized architectures with deep networks
CN103150550B (en) A kind of road pedestrian event detection method based on gripper path analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140430

Termination date: 20160221