CN103324913B - A pedestrian event detection method based on shape features and trajectory analysis - Google Patents

A pedestrian event detection method based on shape features and trajectory analysis

Info

Publication number
CN103324913B
CN103324913B CN201310208226.6A CN201310208226A CN103324913B
Authority
CN
China
Prior art keywords
point
target
threshold value
pedestrian
section
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310208226.6A
Other languages
Chinese (zh)
Other versions
CN103324913A (en)
Inventor
宋焕生
崔华
付洋
张骁
王国锋
李东方
李建成
张鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHINA HIGHWAY ENGINEERING CONSULTING GROUP Co Ltd
Changan University
Original Assignee
CHINA HIGHWAY ENGINEERING CONSULTING GROUP Co Ltd
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHINA HIGHWAY ENGINEERING CONSULTING GROUP Co Ltd and Changan University
Priority to CN201310208226.6A priority Critical patent/CN103324913B/en
Publication of CN103324913A publication Critical patent/CN103324913A/en
Application granted
Publication of CN103324913B publication Critical patent/CN103324913B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a pedestrian event detection method based on shape features and trajectory analysis. Foreground targets are obtained by background-subtraction target segmentation; a block-based method marks the connected domain of each target while its bounding rectangle is recorded and its geometric shape features are extracted to complete target recognition. After a pedestrian-like target is recognized, its corner points are extracted, and corner position information is used to track and match the corners. Repeating this process yields the target's motion trajectory, on which segmentation inflection points are found; linear analysis is then performed on each segment formed by the inflection points to obtain the target's speed. On this basis the pedestrian event state information is analyzed and the traffic-safety early warning is completed. The detection method of the invention is suited to complex and changeable traffic scenes and can accurately recognize, track, and warn of pedestrians appearing within the range of the surveillance video; it has high practical value and broad application prospects.

Description

A pedestrian event detection method based on shape features and trajectory analysis
Technical field
The invention belongs to the field of video detection, and specifically relates to a pedestrian event detection method based on shape features and trajectory analysis.
Background technology
With the development of road traffic construction, the conflict between pedestrians and vehicles has become increasingly prominent, causing traffic accidents to occur constantly. Pedestrian rule-violation events, such as running red lights, jaywalking, and intruding onto highways, are a major cause of traffic accidents, so monitoring such events has become an important part of traffic surveillance. Current traffic monitoring is realized mainly through manual inspection of surveillance video and road patrols; this approach is inefficient, cannot achieve real-time monitoring, and wastes significant resources. In the intelligent transportation field, traditional pedestrian detection methods mainly include temperature detection, inductive-coil detection, and sound detection. Temperature detection suffers interference from the many heat-source targets in a traffic scene, causing false detections. Inductive-coil detection has low sensitivity, inconvenient installation, and poor maintainability. Because traffic scenes contain much noise, the accuracy of sound detection is also not high.
In recent years, video-based detection techniques have been widely applied because of their large sensing range, real-time capability, and the many kinds of auxiliary information they can provide. However, traffic scenes are relatively complex, and the background and moving targets change easily with factors such as light and weather. Although many pedestrian detection methods exist, such as methods based on human-parameter models or on local body features, which can realize pedestrian event alarms, they cannot meet the requirements of adaptability to environmental factors and of obtaining traffic detection information in real time and accurately.
Summary of the invention
In view of the shortcomings and deficiencies of the prior art, the object of the invention is to provide a pedestrian event detection method based on shape features and trajectory analysis that can overcome the complex and changeable factors of traffic road scenes, detect pedestrian events in the surveyed area in real time and accurately, and issue early warnings graded by danger level.
To realize the above task, the invention adopts the following technical scheme:
A pedestrian event detection method based on shape features and trajectory analysis, carried out according to the following steps:
Step 1: establish the mapping relation from image pixels to actual road-surface distance, i.e. a mapping table, and divide the road image into two parts: in-road and curb.
Step 2: divide the 1st frame image and the background image into multiple block regions under the same block coordinate system. The background has size W*H, each block has size w*h, and the number of blocks is T = (W/w)*(H/h). Subtract the background image from the current 1st frame image pixel by pixel to obtain a frame-difference image, also of size W*H, and divide it into T blocks of size w*h. Let N_j be the number of pixels in the j-th block whose value is greater than gray threshold A; if N_j is greater than threshold B, set all pixel values in that block to 255, otherwise set them to 0, where:
W is the number of pixels in the horizontal direction of the image;
H is the number of pixels in the vertical direction of the image;
w is the pixel width of a block;
h is the pixel height of a block;
j = 1, 2, 3, ..., T;
the value of threshold A is 30;
the value range of threshold B is 0.5 to 0.75 times the total number of pixels in a block.
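The block-wise background subtraction of step 2 can be sketched as follows; this is a minimal illustrative sketch in Python/NumPy, and the function name and array layout are assumptions, not from the patent:

```python
import numpy as np

def block_foreground(frame, background, w=8, h=6, A=30, B_frac=0.75):
    """Block-wise background subtraction: a block is marked foreground
    (all 255) when more than B = B_frac * w * h of its pixels differ
    from the background by more than gray threshold A; otherwise 0."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    H, W = diff.shape
    out = np.zeros((H, W), dtype=np.uint8)
    B = B_frac * w * h
    for y in range(0, H, h):          # walk the T = (W/w)*(H/h) blocks
        for x in range(0, W, w):
            block = diff[y:y + h, x:x + w]
            if np.count_nonzero(block > A) > B:
                out[y:y + h, x:x + w] = 255
    return out
```

With the embodiment's parameters (8 × 6 blocks, A = 30, B = 36 = 0.75 × 48) this reproduces the binarization described above.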
Step 3: scan the binary image block by block, from left to right and top to bottom, mark the connected domain of the same target with the same label, and obtain the minimum bounding rectangle of that connected domain. Compute the rectangle's height R_h, width R_w, aspect ratio R_a, and rectangularity R_j. When R_a lies within threshold range C and R_j lies within threshold range D, retain the target; when R_a or R_j falls outside range C or D, remove the target, where:
the range of threshold C is 1.5 to 8;
the range of threshold D is 0.5 to 1.
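The shape screening of step 3 reduces to two ratio tests on each connected domain's minimum bounding rectangle. A hedged sketch (the function name and the explicit area argument are illustrative assumptions):

```python
def keep_pedestrian_like(rect_w, rect_h, region_area,
                         C=(1.5, 8.0), D=(0.5, 1.0)):
    """Keep a connected domain when its aspect ratio R_a = R_h / R_w
    lies in range C and its rectangularity R_j (region area divided by
    bounding-rectangle area) lies in range D."""
    if rect_w == 0 or rect_h == 0:
        return False
    R_a = rect_h / rect_w
    R_j = region_area / (rect_w * rect_h)
    return C[0] <= R_a <= C[1] and D[0] <= R_j <= D[1]
```

A standing pedestrian's blob is taller than wide and fills most of its box, so typical values such as R_a ≈ 3 and R_j ≈ 0.8 pass, while a wide, sparse blob is rejected.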
Step 4: find the best corner point of the j-th foreground target marked in the 1st frame image. Taking a target pixel P_i(m, n) as the center, build a window of size a*a, and compute the sums of squared gray differences of adjacent pixels along the horizontal, vertical, and two diagonal directions through P_i(m, n); take the minimum of these sums, g_min. If g_min is greater than threshold E, the point is a corner point; if g_min is less than or equal to E, the point is not a corner point and is discarded, where:
a is the pixel width of the window side;
the value range of threshold E is 180 to 220.
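Step 4 is essentially a Moravec-style interest operator: the corner response is the minimum, over the four directions through the window, of the summed squared gray differences of adjacent pixels. A sketch under that reading (function names are assumptions):

```python
import numpy as np

def corner_score(img, m, n, a=5):
    """Corner response g_min at pixel (m, n): for an a*a window, sum
    the squared differences of adjacent grays along the horizontal,
    vertical, and two diagonal lines through the center, and return
    the minimum of the four sums."""
    r = a // 2
    win = img[m - r:m + r + 1, n - r:n + r + 1].astype(np.int32)
    row = win[r, :]                      # horizontal line through center
    col = win[:, r]                      # vertical line
    d1 = np.diagonal(win)                # main diagonal
    d2 = np.diagonal(np.fliplr(win))     # anti-diagonal
    sums = [int(np.sum(np.diff(v) ** 2)) for v in (row, col, d1, d2)]
    return min(sums)

def is_corner(img, m, n, a=5, E=200):
    """Accept the pixel as a corner point only when g_min > E."""
    return corner_score(img, m, n, a) > E
```

Taking the minimum over directions suppresses edge pixels, where at least one direction has near-zero variation, and keeps true corners, where all four directions vary strongly.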
Step 5: record the position information of the corner point and its match-tracking count in a newly created empty structure, with the target's match-tracking count initialized to zero.
Step 6: for the 2nd frame, the 3rd frame, ..., the i-th frame image, repeat the methods of steps 2, 3, and 4 to obtain the corner position of the target in the current frame, and compare it against the corner position recorded in the previous frame:
when the absolute difference of the two positions is greater than threshold F, the target at this corner in the current frame is judged to be a new target and is processed by the method of step 5;
when the absolute difference of the two positions is less than or equal to threshold F, the corner position information of the current frame replaces that of the previous frame as the new reference, and the match-tracking count is increased by 1, where:
i is a positive integer;
the value range of threshold F is 1 to 5.
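The per-frame matching rule of steps 5 and 6 can be sketched as a simple nearest-position test; the dictionary record and names below are illustrative assumptions:

```python
def match_corner(prev, curr, F=3):
    """A current corner continues the previous track when both of its
    coordinate differences are within threshold F pixels; otherwise it
    starts a new target. prev and curr are (x, y) tuples."""
    return abs(curr[0] - prev[0]) <= F and abs(curr[1] - prev[1]) <= F

# Step 5: a fresh record with the match-tracking count initialized to zero.
track = {"pos": (100, 50), "matches": 0}
for new_pos in [(101, 51), (103, 52), (150, 90)]:
    if match_corner(track["pos"], new_pos):
        track["pos"] = new_pos                    # current corner becomes the reference
        track["matches"] += 1
    else:
        track = {"pos": new_pos, "matches": 0}    # too far: treat as a new target
```

After the loop the first two positions have extended the track (count 2), while the far jump to (150, 90) has restarted it with count 0.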
Step 7: when the match count is greater than or equal to threshold G, tracking is complete and the tracked trajectory is Track = {P_i, P_{i+1}, ..., P_{i+n}}; proceed to step 8, where:
the value range of threshold G is 50 to 70.
Step 8: look up the mapping table to obtain the actual distance corresponding to each corner point in Track = {P_i, P_{i+1}, ..., P_{i+n}}, i.e. the actual motion trajectory Track' = {(s_i, 0), (s_{i+1}, 1), ..., (s_{i+n}, n)}, where:
s_{i+n} is the actual distance corresponding to pixel P_{i+n}, and n is the point subscript.
Step 9: from the first point P_i and the last point P_{i+n} of the actual motion trajectory curve, obtain the equation of the straight line through these two points: y = kx + b. The distance from any point on the trajectory to this line is:

d_r = |k·s_{i+r} − r + b| / √(k² + 1)

where k is the slope of the line, b is the intercept, (x, y) is any point on the line, (s_{i+r}, r) is any point on the trajectory, and d_r is the distance from (s_{i+r}, r) to the line.
Sort all d_r and find the maximum, d_max; if it is greater than threshold H, that point is an inflection point of the target trajectory. Save the point (s_{i+r}, r) and proceed to step 10, where:
the value of threshold H is 70 cm.
Step 10: the inflection point (s_{i+r}, r) divides the trajectory curve into segments: one segment with endpoints (s_i, 0) and (s_{i+r}, r), and one with endpoints (s_{i+r}, r) and (s_{i+n}, n). Perform step 9 in each of the two segments respectively, continuing to find the inflection point of each sub-trajectory, until every point-to-line distance d_r on the trajectory satisfies d_r ≤ H. This yields a group of inflection points {(s_{i+r0}, r_0), (s_{i+r1}, r_1), ..., (s_{i+rm}, r_m)}.
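Taken together, steps 9 and 10 are a recursive split at the point farthest from the chord through a segment's endpoints, closely resembling the Douglas-Peucker algorithm. A sketch, assuming trajectory points are (distance, frame-index) pairs:

```python
import math

def find_inflections(pts, H=70.0):
    """Recursive inflection-point search: split the trajectory at the
    point farthest from the chord through its endpoints whenever that
    distance exceeds H (cm). pts is a list of (s, r) pairs."""
    def point_line_dist(p, p1, p2):
        (x1, y1), (x2, y2) = p1, p2
        if x2 == x1:                    # vertical chord: horizontal distance
            return abs(p[0] - x1)
        k = (y2 - y1) / (x2 - x1)       # slope of y = kx + b
        b = y1 - k * x1
        return abs(k * p[0] - p[1] + b) / math.sqrt(k * k + 1)

    def recurse(lo, hi, out):
        d_max, idx = 0.0, None
        for i in range(lo + 1, hi):
            d = point_line_dist(pts[i], pts[lo], pts[hi])
            if d > d_max:
                d_max, idx = d, i
        if d_max > H:                   # farthest point is an inflection point
            out.append(pts[idx])
            recurse(lo, idx, out)       # repeat step 9 inside each sub-segment
            recurse(idx, hi, out)

    out = []
    recurse(0, len(pts) - 1, out)
    return out
```

For a trajectory that walks out and back, the turnaround point lies far from the straight chord between the endpoints and is returned as an inflection point; a near-linear trajectory returns none.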
Step 11: the inflection points divide the motion trajectory into trajectory curve segments. Fit each segment linearly by the least-squares method to obtain its correlation coefficient r:
when r ≥ 0.5, retain the segment;
when r < 0.5, remove the segment.
This finally yields a group of trajectory curve segments.
Step 12: using the actual distances of the start and end points of each trajectory curve segment retained after the screening of step 11 and the time difference between them, compute the target speed v within the segment:

v = |s_f − s_s| / (N·Δt)

where:
N is the number of tracking-point intervals in the segment;
s_f is the actual distance of the segment's end point;
s_s is the actual distance of the segment's start point;
Δt is the time interval between two adjacent tracking points in the segment.
When the speed in every segment satisfies 0.3 m/s < v < 2 m/s, the target is determined to be a pedestrian.
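Steps 11 and 12 can be sketched together: each segment is screened by its correlation coefficient, then its speed is computed from the endpoint distances. The helper names are assumptions; distances are in cm (as in step 9) and converted to m/s:

```python
import math

def correlation(seg):
    """Pearson correlation coefficient r for a segment's (s, r) pairs;
    segments with r < 0.5 are discarded as non-linear (step 11)."""
    xs, ys = [p[0] for p in seg], [p[1] for p in seg]
    n = len(seg)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def segment_speed(seg, dt=0.04):
    """Step 12: v = |s_f - s_s| / (N * dt), with s in cm and dt the
    frame interval (25 fps -> 0.04 s); result converted to m/s."""
    s_s, s_f = seg[0][0], seg[-1][0]
    N = len(seg) - 1
    return abs(s_f - s_s) / (N * dt) / 100.0

def is_pedestrian(segments, dt=0.04):
    """Pedestrian when every retained segment's speed is in (0.3, 2) m/s."""
    kept = [s for s in segments if correlation(s) >= 0.5]
    return bool(kept) and all(0.3 < segment_speed(s, dt) < 2.0 for s in kept)
```

A segment advancing 4 cm per frame gives 1.0 m/s, which is inside the pedestrian speed band; a segment advancing 40 cm per frame gives 10 m/s and is rejected as vehicle-like.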
Step 13: judge the pedestrian event danger level according to the position of the target's coordinate point P_{i+n} in the current frame:
(1) when point P_{i+n} is inside the road, the danger level of the pedestrian event is high;
(2) when point P_{i+n} is on the curb and the angle between the pedestrian's trajectory vector and the correct travel direction of the road is greater than 30 degrees, the danger level of the pedestrian event is medium;
(3) when point P_{i+n} is on the curb and that angle is less than or equal to 30 degrees, the danger level of the pedestrian event is low.
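The grading of step 13 is a small decision rule; a sketch in which the string labels and argument names are illustrative assumptions:

```python
def danger_level(in_road, heading_angle_deg=0.0):
    """Step 13 danger grading: 'high' when the pedestrian's current
    point is inside the road; on the curb, 'medium' when the motion
    vector deviates from the road's correct travel direction by more
    than 30 degrees, otherwise 'low'."""
    if in_road:
        return "high"
    return "medium" if heading_angle_deg > 30 else "low"
```

A pedestrian on the curb walking roughly parallel to traffic (angle ≤ 30°) is the least alarming case; one turning toward the roadway triggers the medium warning before ever entering it.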
Compared with the prior art, the pedestrian event detection method based on shape features and trajectory analysis of the invention addresses the target segmentation difficulty caused by the long viewing distance of traffic surveillance video and the variability of pedestrian posture: it combines the geometric shape features of pedestrians with background subtraction to complete target segmentation and preliminary recognition, extracts a stable single corner point of the target, and uses a matching algorithm to obtain the target tracking trajectory. Because of the randomness of pedestrian motion, the resulting pedestrian trajectory is nonlinear, and fitting the whole trajectory curve linearly would introduce a large error. The invention therefore adopts a segmentation approach: inflection points are found on the target trajectory curve, dividing it into several line segments with a linear relationship, and the least-squares method is then used to fit the segment between each pair of adjacent inflection points linearly. After this processing, more accurate target speed information is obtained, improving the accuracy of pedestrian event detection. In addition, by computing and analyzing the pedestrian's position information and direction of motion, the invention realizes the judgment of the pedestrian event danger level and completes the early-warning function.
Brief description of the drawings
Fig. 1 is the 1st frame image.
Fig. 2 shows the division of the road into different regions; white represents in-road.
Fig. 3 shows a pedestrian-like target recognized from shape features; the white box in the figure is the target's bounding rectangle.
Fig. 4 is a schematic diagram of the target feature point; the white dot is the target's feature corner point.
Fig. 5 illustrates the template-matching search method: the black dot is the feature corner point in the image, the small square around it is the template, and the shaded area in the image to be searched is the search region. The template traverses the search region to find the matching block that minimizes the MAD value; that block becomes the new template and its center the new corner point.
Fig. 6 is a schematic diagram of the target tracking trajectory in the video; the white line is the motion trajectory of the pedestrian's feature corner point in the image.
Fig. 7 is the target's actual motion trajectory; the position at each moment is shown as a white dot. The abscissa is time in units of 0.04 s; the ordinate is actual distance in cm.
Fig. 8 shows the segmentation inflection points found on the actual trajectory curve; the inflection points are marked with gray crosses.
Fig. 9 shows the pedestrian event danger-level warning, displaying the pedestrian's position and direction of motion and judging the danger level of the pedestrian event.
The content of the invention is described in further detail below in conjunction with the drawings and embodiments.
Embodiment
This embodiment provides a pedestrian event detection method based on shape features and trajectory analysis: target segmentation by background subtraction, block-based connected-domain labeling, target recognition based on geometric shape features, corner extraction, target trajectory tracking and search for the trajectory's segmentation inflection points, and linear analysis to obtain the target speed; on this basis the pedestrian event state is analyzed and the traffic-safety early warning is completed.
It should be noted that the images processed in the procedure of the invention are, in positive time order in the video, the 1st frame image, the 2nd frame image, the 3rd frame image, ..., the i-th frame image (i a positive integer).
It should be noted that the mapping table in this embodiment is obtained with the camera geometric calibration method described in the invention patent "A camera geometric calibration method under a linear model" (publication number CN102222332A).
Let the size of each video frame be W*H and the size of each block be w*h, where W is the number of pixels in the horizontal direction of each frame, H the number in the vertical direction, w the width of each block region, and h its height.
The method of this embodiment is specifically realized by the following steps:
Step 1: establish the mapping relation from image pixels to actual road-surface distance, i.e. a mapping table, and divide the road image into two parts: in-road and curb.
Step 2: divide the 1st frame image and the background image into multiple block regions under the same block coordinate system. The background has size W*H, each block has size w*h, and the number of blocks is T = (W/w)*(H/h). Subtract the background image from the current 1st frame image pixel by pixel to obtain a frame-difference image, also of size W*H, and divide it into T blocks of size w*h. Let N_j be the number of pixels in the j-th block whose value is greater than gray threshold A; if N_j is greater than threshold B, set all pixel values in that block to 255, otherwise set them to 0, where:
W is the number of pixels in the horizontal direction of the image;
H is the number of pixels in the vertical direction of the image;
w is the pixel width of a block;
h is the pixel height of a block;
j = 1, 2, 3, ..., T;
the value of threshold A is 30;
the value range of threshold B is 0.5 to 0.75 times the total number of pixels in a block.
Step 3: scan the binary image block by block, from left to right and top to bottom, mark the connected domain of the same target with the same label, and obtain the minimum bounding rectangle of that connected domain. Compute the rectangle's height R_h, width R_w, aspect ratio R_a, and rectangularity R_j. When R_a lies within threshold range C and R_j lies within threshold range D, retain the target; when R_a or R_j falls outside range C or D, remove the target, where:
the range of threshold C is 1.5 to 8;
the range of threshold D is 0.5 to 1.
Step 4: find the best corner point of the j-th foreground target marked in the 1st frame image. Taking a target pixel P_i(m, n) as the center, build a window of size a*a, and compute the sums of squared gray differences of adjacent pixels along the horizontal, vertical, and two diagonal directions through P_i(m, n); take the minimum of these sums, g_min. If g_min is greater than threshold E, the point is a corner point; if g_min is less than or equal to E, the point is not a corner point and is discarded, where:
a is the pixel width of the window side;
the value range of threshold E is 180 to 220.
Step 5: record the position information of the corner point and its match-tracking count in a newly created empty structure, with the target's match-tracking count initialized to zero.
Step 6: for the 2nd frame, the 3rd frame, ..., the i-th frame image, repeat the methods of steps 2, 3, and 4 to obtain the corner position of the target in the current frame, and compare it against the corner position recorded in the previous frame:
when the absolute difference of the two positions is greater than threshold F, the target at this corner in the current frame is judged to be a new target and is processed by the method of step 5;
when the absolute difference of the two positions is less than or equal to threshold F, the corner position information of the current frame replaces that of the previous frame as the new reference, and the match-tracking count is increased by 1, where:
i is a positive integer;
the value range of threshold F is 1 to 5.
Step 7: when the match count is greater than or equal to threshold G, tracking is complete and the tracked trajectory is Track = {P_i, P_{i+1}, ..., P_{i+n}}; proceed to step 8, where:
the value range of threshold G is 50 to 70.
Step 8: look up the mapping table to obtain the actual distance corresponding to each corner point in Track = {P_i, P_{i+1}, ..., P_{i+n}}, i.e. the actual motion trajectory Track' = {(s_i, 0), (s_{i+1}, 1), ..., (s_{i+n}, n)}, where:
s_{i+n} is the actual distance corresponding to pixel P_{i+n}, and n is the point subscript.
Step 9: from the first point P_i and the last point P_{i+n} of the actual motion trajectory curve, obtain the equation of the straight line through these two points: y = kx + b. The distance from any point on the trajectory to this line is:

d_r = |k·s_{i+r} − r + b| / √(k² + 1)

where k is the slope of the line, b is the intercept, (x, y) is any point on the line, (s_{i+r}, r) is any point on the trajectory, and d_r is the distance from (s_{i+r}, r) to the line.
Sort all d_r and find the maximum, d_max; if it is greater than threshold H, that point is an inflection point of the target trajectory. Save the point (s_{i+r}, r) and proceed to step 10, where:
the value of threshold H is 70 cm.
Step 10: the inflection point (s_{i+r}, r) divides the trajectory curve into segments: one segment with endpoints (s_i, 0) and (s_{i+r}, r), and one with endpoints (s_{i+r}, r) and (s_{i+n}, n). Perform step 9 in each of the two segments respectively, continuing to find the inflection point of each sub-trajectory, until every point-to-line distance d_r on the trajectory satisfies d_r ≤ H. This yields a group of inflection points {(s_{i+r0}, r_0), (s_{i+r1}, r_1), ..., (s_{i+rm}, r_m)}.
Step 11: the inflection points divide the motion trajectory into trajectory curve segments. Fit each segment linearly by the least-squares method to obtain its correlation coefficient r:
when r ≥ 0.5, retain the segment;
when r < 0.5, remove the segment.
This finally yields a group of trajectory curve segments.
Step 12: using the actual distances of the start and end points of each trajectory curve segment retained after the screening of step 11 and the time difference between them, compute the target speed v within the segment:

v = |s_f − s_s| / (N·Δt)

where:
N is the number of tracking-point intervals in the segment;
s_f is the actual distance of the segment's end point;
s_s is the actual distance of the segment's start point;
Δt is the time interval between two adjacent tracking points in the segment.
When the speed in every segment satisfies 0.3 m/s < v < 2 m/s, the target is determined to be a pedestrian.
Step 13: judge the pedestrian event danger level according to the position of the target's coordinate point P_{i+n} in the current frame:
(1) when point P_{i+n} is inside the road, the danger level of the pedestrian event is high;
(2) when point P_{i+n} is on the curb and the angle between the pedestrian's trajectory vector and the correct travel direction of the road is greater than 30 degrees, the danger level of the pedestrian event is medium;
(3) when point P_{i+n} is on the curb and that angle is less than or equal to 30 degrees, the danger level of the pedestrian event is low.
A specific embodiment of the invention is given below. It should be noted that the invention is not limited to the following specific embodiment; all equivalent transformations made on the basis of the technical scheme fall within the protection scope of the invention.
Embodiment:
In the processing of this embodiment, the video sampling rate is 25 frames per second and each frame is 720 × 288 pixels. The block size for block processing of the frame-difference image is 8 × 6, so the image is divided into 90 × 48 block regions. The gray threshold A for background subtraction is 30 and threshold B is 36; the aspect-ratio threshold range C matching pedestrian features is 1.5 to 8 and the rectangularity threshold range D is 0.5 to 1; the corner-selection threshold E is 180 to 220; the corner-matching distance threshold F is 3; the corner-matching count threshold G is 50; and the distance threshold H for judging segmentation inflection points on the actual motion trajectory is 70 cm. As shown in Figures 1 to 9, the video images are processed frame by frame from the first frame according to the above method.
As can be seen from Fig. 6, the white line in the figure is the pedestrian's motion trajectory. When the video reaches the 51st frame, the corner-matching count reaches 50, so the trajectory runs from the 1st frame to the 51st frame. The lower end of the trajectory, where it enters the scene, is the feature-point position found for the pedestrian the first time; the topmost point is the feature point found at the 50th frame.
Fig. 7 is the actual distance curve corresponding to the target tracking trajectory. Inflection points are found on this trajectory curve with the methods of steps 9 and 10; the result is shown in Fig. 8, where the inflection points are marked with cross symbols. The least-squares method is then applied to fit the trajectory in each segment, giving the pedestrian's actual motion speed of 0.71 m/s, so the target is judged to be a pedestrian. According to the pedestrian's position and direction, the danger level the pedestrian event poses to traffic safety is then judged to realize the traffic-safety early warning.

Claims (1)

1. A pedestrian event detection method based on shape features and trajectory analysis, characterized in that the method is carried out according to the following steps:
Step 1: establish the mapping relation from image pixels to actual road-surface distance, i.e. a mapping table, and divide the road image into two parts: in-road and curb.
Step 2: divide the 1st frame image and the background image into multiple block regions under the same block coordinate system. The background has size W*H, each block has size w*h, and the number of blocks is T = (W/w)*(H/h). Subtract the background image from the current 1st frame image pixel by pixel to obtain a frame-difference image, also of size W*H, and divide it into T blocks of size w*h. Let N_j be the number of pixels in the j-th block whose value is greater than gray threshold A; if N_j is greater than threshold B, set all pixel values in that block to 255, otherwise set them to 0, where:
W is the number of pixels in the horizontal direction of the image;
H is the number of pixels in the vertical direction of the image;
w is the pixel width of a block;
h is the pixel height of a block;
j = 1, 2, 3, ..., T;
the value of threshold A is 30;
the value range of threshold B is 0.5 to 0.75 times the total number of pixels in a block.
Step 3: scan the binary image block by block, from left to right and top to bottom, mark the connected domain of the same target with the same label, and obtain the minimum bounding rectangle of that connected domain. Compute the rectangle's height R_h, width R_w, aspect ratio R_a, and rectangularity R_j. When R_a lies within threshold range C and R_j lies within threshold range D, retain the target; when R_a or R_j falls outside range C or D, remove the target, where:
the range of threshold C is 1.5 to 8;
the range of threshold D is 0.5 to 1.
Step 4: find the best corner point of the j-th foreground target marked in the 1st frame image. Taking a target pixel P_i(m, n) as the center, build a window of size a*a, and compute the sums of squared gray differences of adjacent pixels along the horizontal, vertical, and two diagonal directions through P_i(m, n); take the minimum of these sums, g_min. If the g_min of a point in the window is greater than threshold E, that point in the window is a corner point; if g_min is less than or equal to E, the point is not a corner point and is discarded, where:
a is the pixel width of the window side;
the value range of threshold E is 180 to 220.
Step 5: record the position information of the corner point and its match-tracking count in a newly created empty structure, with the target's match-tracking count initialized to zero.
Step 6: for the 2nd frame, the 3rd frame, ..., the i-th frame image, repeat the methods of steps 2, 3, and 4 to obtain the corner position of the target in the current frame, and compare it against the corner position recorded in the previous frame:
when the absolute difference of the two positions is greater than threshold F, the target at this corner in the current frame is judged to be a new target and is processed by the method of step 5;
when the absolute difference of the two positions is less than or equal to threshold F, the corner position information of the current frame replaces that of the previous frame as the new reference, and the match-tracking count is increased by 1, where:
i is a positive integer;
the value range of threshold F is 1 to 5.
Step 7: when the match count is greater than or equal to threshold G, tracking is complete and the tracked trajectory is Track = {P_i, P_{i+1}, ..., P_{i+n}}; proceed to step 8, where:
the value range of threshold G is 50 to 70.
Step 8, searches mapping table, obtains track Track={P i, P i+1... P i+nin actual range corresponding to each angle point, i.e. actual motion track Track '={ (s i, 0), (s i+1, 1) ..., (s i+n, n) }, wherein:
S i+nrepresent pixel P i+ncorresponding actual range, n represents subscript a little;
Step 9, from the first point P_i and the last point P_{i+n} of the actual motion trajectory curve, obtain the equation y = kx + b of the straight line through these two points; the distance from any point on the trajectory to this line is:

d_r = |k*s_{i+r} - r + b| / sqrt(k^2 + 1)

where k is the slope of the line, b is its intercept, (x, y) is any point on the line, (s_{i+r}, r) is any point on the trajectory, and d_r is the distance from (s_{i+r}, r) to the line;
sort all the d_r and find the maximum d_max; if d_max is greater than threshold H, the corresponding point on the actual motion trajectory curve is an inflection point of the target trajectory; save this point (s_{i+r}, r) and perform step 10, wherein:
the value of threshold H is 70 cm;
Step 10, the inflection point (s_{i+r}, r) divides the trajectory curve into two segments: one with (s_i, 0) and (s_{i+r}, r) as its end points, and one with (s_{i+r}, r) and (s_{i+n}, n) as its end points. Perform step 9 on each of these two segments, continuing to seek the inflection point of each sub-trajectory, until every point-to-line distance on the trajectory satisfies d_r <= H, where H is the threshold with value 70 cm; this yields a set of inflection points {(s_{i+r0}, r_0), (s_{i+r1}, r_1), ..., (s_{i+rm}, r_m)};
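Steps 9 and 10 together amount to the classical Ramer-Douglas-Peucker split: measure each point's distance to the chord through the segment's end points and split recursively at the farthest point until every distance is at most H. A sketch, with track points given as (s, r) pairs and distances in centimetres; the vertical-chord guard is an addition not spelled out in the patent:

```python
import math

def chord_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b,
    matching d_r = |k*s_{i+r} - r + b| / sqrt(k^2 + 1) of step 9."""
    (x1, y1), (x2, y2) = a, b
    if x1 == x2:                  # vertical chord: slope k is undefined
        return abs(p[0] - x1)
    k = (y2 - y1) / (x2 - x1)     # slope of the chord
    c = y1 - k * x1               # intercept
    return abs(k * p[0] - p[1] + c) / math.sqrt(k * k + 1)

def inflection_points(track, H=70.0):
    """Recursively split the trajectory at the farthest-from-chord point
    until every d_r <= H (step 10); returns the inflection points."""
    if len(track) < 3:
        return []
    dists = [chord_distance(p, track[0], track[-1]) for p in track[1:-1]]
    d_max = max(dists)
    if d_max <= H:
        return []
    i = dists.index(d_max) + 1    # index of the inflection point
    return (inflection_points(track[:i + 1], H)
            + [track[i]]
            + inflection_points(track[i:], H))
```

A straight walk produces no inflection points, while a sharp turn whose lateral deviation exceeds H = 70 cm is detected and used to split the curve.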
Step 11, the inflection points divide the motion trajectory into trajectory curve segments; fit each segment with a straight line by the least-squares method to obtain the correlation coefficient r, then:
when r >= 0.5, retain this trajectory curve segment;
when r < 0.5, discard this trajectory curve segment;
this finally yields a set of trajectory curve segments;
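A sketch of the step-11 screening, computing the Pearson correlation coefficient of a least-squares line through each segment. Taking |r| so that segments walked in the decreasing-distance direction are not discarded is an assumption; the patent writes only r >= 0.5:

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient r of the least-squares linear fit
    over one trajectory segment (step 11)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    if sxx == 0 or syy == 0:
        return 0.0          # degenerate segment: no linear trend to measure
    return sxy / math.sqrt(sxx * syy)

def screen_segments(segments):
    """Keep the segments whose fit quality reaches 0.5; each segment is
    a list of (s, r) points as produced by step 10."""
    kept = []
    for seg in segments:
        xs = [p[1] for p in seg]    # frame index r
        ys = [p[0] for p in seg]    # actual distance s
        if abs(correlation(xs, ys)) >= 0.5:
            kept.append(seg)
    return kept
```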
Step 12, using the actual distances of the first and last points of each trajectory curve segment retained after the screening of step 11, together with the time difference, compute the target speed v within the segment:

v = |s_f - s_s| / (N*Δt)

where:
N is the number of track-point intervals within the segment;
s_f is the actual distance of the last point of the segment;
s_s is the actual distance of the first point of the segment;
Δt is the time interval between two adjacent track points in the segment;
when the speed in every segment satisfies 0.3 m/s < v < 2 m/s, the target is determined to be a pedestrian;
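The speed test of step 12 can be sketched as follows, with distances in metres and dt the inter-frame interval in seconds (function names are illustrative):

```python
def segment_speed(seg, dt):
    """Speed within one segment: v = |s_f - s_s| / (N * dt), where seg is
    a list of (s, r) points and N the number of track-point intervals."""
    s_s, s_f = seg[0][0], seg[-1][0]
    N = len(seg) - 1
    return abs(s_f - s_s) / (N * dt)

def is_pedestrian(segments, dt):
    """The target is judged a pedestrian when the speed of every retained
    segment lies in the open interval (0.3 m/s, 2 m/s) -- step 12."""
    return all(0.3 < segment_speed(seg, dt) < 2.0 for seg in segments)
```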
Step 13, judge the danger level of the pedestrian event from the position of the target's coordinate point P_{i+n} in the current frame:
(1) when point P_{i+n} is inside the road, the danger level of this pedestrian event is high;
(2) when point P_{i+n} is on the roadside and the angle between the direction of the pedestrian's trajectory vector and the correct travel direction of the road is greater than 30 degrees, the danger level of this pedestrian event is medium;
(3) when point P_{i+n} is on the roadside and this angle is less than or equal to 30 degrees, the danger level of this pedestrian event is low.
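The three cases of step 13 can be summarised in a small classifier; the handling of positions that are neither inside the road nor on the roadside is an assumption, since the patent does not cover that case:

```python
def danger_level(in_road, on_roadside, angle_deg):
    """Step 13: grade the pedestrian event from the last position P_{i+n}
    and the angle between the trajectory vector and the road's correct
    travel direction (in degrees)."""
    if in_road:
        return "high"                          # case (1): inside the road
    if on_roadside:
        return "medium" if angle_deg > 30 else "low"   # cases (2) and (3)
    return "low"   # assumption: other positions are not graded by the patent
```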
CN201310208226.6A 2013-05-29 2013-05-29 Pedestrian event detection method based on shape features and trajectory analysis Active CN103324913B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310208226.6A CN103324913B (en) Pedestrian event detection method based on shape features and trajectory analysis

Publications (2)

Publication Number Publication Date
CN103324913A CN103324913A (en) 2013-09-25
CN103324913B true CN103324913B (en) 2016-03-30

Family

ID=49193644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310208226.6A Active CN103324913B (en) Pedestrian event detection method based on shape features and trajectory analysis

Country Status (1)

Country Link
CN (1) CN103324913B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469084A (en) * 2015-11-20 2016-04-06 中国科学院苏州生物医学工程技术研究所 Rapid extraction method and system for target central point
CN105741321B (en) * 2016-01-31 2018-12-11 华南理工大学 Video object movement tendency analysis method based on trace point distribution
CN105959639B (en) * 2016-06-06 2019-06-14 南京工程学院 Pedestrian's monitoring method in avenue region based on ground calibration
CN106127826B (en) * 2016-06-27 2019-01-22 安徽慧视金瞳科技有限公司 Connected component labeling method for projection interactive systems
CN106341263B (en) * 2016-09-05 2019-06-14 南通大学 Personnel state information detecting method based on accumulated time model
CN107330919B (en) * 2017-06-27 2020-07-10 中国科学院成都生物研究所 Method for acquiring pistil motion track
CN109445587A (en) * 2018-10-22 2019-03-08 北京顺源开华科技有限公司 Kinematic parameter determines method and device
CN109670419B (en) * 2018-12-04 2023-05-23 天津津航技术物理研究所 Pedestrian detection method based on perimeter security video monitoring system
CN111447562B (en) * 2020-03-02 2021-12-24 北京梧桐车联科技有限责任公司 Vehicle travel track analysis method and device and computer storage medium
CN111914699B (en) * 2020-07-20 2023-08-08 同济大学 Pedestrian positioning and track acquisition method based on video stream of camera
CN111811567B (en) * 2020-07-21 2022-03-01 北京中科五极数据科技有限公司 Equipment detection method based on curve inflection point comparison and related device
CN112016409B (en) * 2020-08-11 2024-08-02 艾普工华科技(武汉)有限公司 Deep learning-based process specification visual identification judging method and system
CN112288975A (en) * 2020-11-13 2021-01-29 珠海大横琴科技发展有限公司 Event early warning method and device
CN112613365B (en) * 2020-12-11 2024-09-17 北京影谱科技股份有限公司 Pedestrian detection and behavior analysis method and device and computing equipment
CN113392723A (en) * 2021-05-25 2021-09-14 珠海市亿点科技有限公司 Unmanned aerial vehicle forced landing area screening method, device and equipment based on artificial intelligence
CN113221926B (en) * 2021-06-23 2022-08-02 华南师范大学 Line segment extraction method based on angular point optimization
CN113537035A (en) * 2021-07-12 2021-10-22 宁波溪棠信息科技有限公司 Human body target detection method, human body target detection device, electronic device and storage medium
CN113705355A (en) * 2021-07-30 2021-11-26 汕头大学 Real-time detection method for abnormal behaviors
CN113869166A (en) * 2021-09-18 2021-12-31 沈阳帝信人工智能产业研究院有限公司 Substation outdoor operation monitoring method and device
CN115049654B (en) * 2022-08-15 2022-12-06 成都唐源电气股份有限公司 Method for extracting reflective light bar of steel rail
CN116958189B (en) * 2023-09-20 2023-12-12 中国科学院国家空间科学中心 Moving point target time-space domain track tracking method based on line segment correlation
CN118364418A (en) * 2024-06-20 2024-07-19 无锡中基电机制造有限公司 Intelligent corrosion resistance detection method and system for bearing pedestal

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509306A (en) * 2011-10-08 2012-06-20 西安理工大学 Specific target tracking method based on video

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
付洋 et al., "A video-based road pedestrian detection method", 电视技术 (Video Engineering): Video Application and Engineering, 2012, vol. 36, no. 13, pp. 140-144. *
崔华, "An improved scheme based on the wavelet threshold denoising method", 测控技术 (Measurement & Control Technology), 2005, pp. 8-10. *
郭永涛 et al., "Background extraction algorithms in video traffic surveillance systems", 视频技术应用与工程 (Video Technology Application and Engineering), 2006, no. 5, pp. 91-93. *

Similar Documents

Publication Publication Date Title
CN103324913B (en) Pedestrian event detection method based on shape features and trajectory analysis
CN103927526B (en) Vehicle detection method based on Gaussian-difference multi-scale edge fusion
CN103971380B (en) Pedestrian tracking and detection method based on RGB-D
CN105913041B (en) Signal lamp recognition method based on advance calibration
CN104318258B (en) Time-domain fuzzy and Kalman filter-based lane detection method
CN102693423B (en) License plate localization method under intense light conditions
CN102810250B (en) Video based multi-vehicle traffic information detection method
CN111563469A (en) Method and device for identifying irregular parking behaviors
CN106842231A (en) Road edge identification and tracking method
CN105488454A (en) Monocular vision based front vehicle detection and ranging method
CN102915433B (en) Character combination-based license plate positioning and identifying method
CN108388871B (en) Vehicle detection method based on vehicle body regression
CN103150550B (en) Road pedestrian event detection method based on trajectory analysis
CN102243705B (en) Method for positioning license plate based on edge detection
CN103077387B (en) Automatic detection method of freight train carriages in video
CN110379168A (en) Vehicular traffic information acquisition method based on Mask R-CNN
CN111832388B (en) Method and system for detecting and identifying traffic sign in vehicle running
CN105426868A (en) Lane detection method based on adaptive region of interest
CN104063882A (en) Vehicle video speed measuring method based on binocular camera
CN103632376A (en) Method for suppressing partial occlusion of vehicles by aid of double-level frames
CN106951829A (en) Video salient object detection method based on minimum spanning tree
CN102663778B (en) Multi-view video based target tracking method and system
CN106803087A (en) Automatic license plate number recognition method and system
CN109272482A (en) Urban road intersection vehicle queue detection system based on image sequences
CN107808524A (en) Intersection vehicle detection method based on unmanned aerial vehicles

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant