CN108830246A - A traffic-environment pedestrian multi-dimensional motion feature visual extraction method


Info

Publication number
CN108830246A
CN108830246A (application CN201810661219.4A; granted as CN108830246B)
Authority
CN
China
Prior art keywords: pedestrian, image, frame, chicken, posture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810661219.4A
Other languages
Chinese (zh)
Other versions
CN108830246B (en)
Inventor
刘辉 (Liu Hui)
李燕飞 (Li Yanfei)
韩宇阳 (Han Yuyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201810661219.4A priority Critical patent/CN108830246B/en
Publication of CN108830246A publication Critical patent/CN108830246A/en
Application granted granted Critical
Publication of CN108830246B publication Critical patent/CN108830246B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Abstract

The invention discloses a visual extraction method for multi-dimensional pedestrian motion features in traffic environments, comprising: Step 1: construct a pedestrian motion database; Step 2: extract the detection-box images of the same pedestrian in consecutive image frames; Step 3: extract the HOG features of the same pedestrian's motion energy image; Step 4: construct a pedestrian motion posture recognition model based on an Elman neural network; Step 5: use the Elman-based recognition model to judge the pedestrian's posture in the current video; Step 6: calculate the pedestrian's instantaneous velocity sequences along the X and Y axes to obtain the pedestrian's real-time speed; Step 7: according to the three-dimensional scene of the crossing environment, obtain the pedestrian's position in the image in real time and, combined with the pedestrian's posture and real-time speed, obtain the pedestrian's real-time motion features. The scheme has high recognition accuracy, good robustness and convenient application, and has favorable application prospects.

Description

A traffic-environment pedestrian multi-dimensional motion feature visual extraction method
Technical field
The invention belongs to the field of traffic monitoring, and in particular relates to a visual extraction method for multi-dimensional pedestrian motion features in traffic environments.
Background technique
In recent years, with the rapid development of science and technology, more and more intelligent methods have been applied to traffic, especially in the field of intelligent driving. Traffic safety is a perennial topic, and collisions between vehicles and pedestrians account for a large proportion of collision accidents. Timely pedestrian detection and posture recognition are key points in current intelligent-transportation active protection systems, and accurate recognition depends above all on pedestrian motion feature extraction.
Pedestrian posture recognition comprises global-feature methods and local-feature methods. Global-feature methods mostly use motion history images, i.e., the frame-difference information of a video sequence is accumulated into a single frame. Frame differences carry some motion information but not the shape of the moving human body, and they are vulnerable to noise. Another approach extracts the static edge information of the pedestrian in each frame, but the inter-frame images must be joined manually, which hinders recognition. Current pedestrian speed measurement mostly relies on radar, which cannot be well combined with visual images.
Chinese patent CN105957103A proposes a vision-based motion feature extraction method comprising the following steps: 1. extract the motion vector of each pixel from consecutive frames; 2. extract feature points where the pixel value changes strongly along the X, Y and T directions; 3. construct direction-amplitude histogram cube feature vectors from the motion vectors, centered on the feature points; 4. form coding vectors from the local descriptors by a clustering algorithm. That patent has the following problems: 1. when extracting the motion vector of each pixel, the pixels are not screened effectively, so the data volume is large and the computation is complex; 2. the clustering algorithm it applies is prone to local convergence.
In summary, a more accurate method for extracting pedestrian motion features in traffic environments is urgently needed.
Summary of the invention
The present invention provides a visual extraction method for multi-dimensional pedestrian motion features in traffic environments, aiming to accurately extract the posture of pedestrians on the road, give timely warnings to vehicles on the roadway, and reduce the occurrence of traffic accidents.
A visual extraction method for multi-dimensional pedestrian motion features in traffic environments comprises the following steps:
Step 1: construct a pedestrian motion database.
Collect videos of the various motion postures of pedestrians, and of the road sites they occupy, under each shooting direction of a depth camera, where the shooting directions comprise seven directions relative to the lens: front, front-left, front-right, side, rear, rear-left and rear-right, and the postures comprise three kinds: walking, running and standing.
Step 2: extract images from the videos in the pedestrian motion database, pre-process the extracted images to obtain the pedestrian detection box of each frame, and then extract the detection-box images of the same pedestrian in consecutive frames.
Step 3: convert each pedestrian detection-box image to grayscale, synthesize the grayscale detection-box images of the same pedestrian in consecutive frames into a motion energy image, and extract the HOG features of the motion energy image.
Step 4: construct a pedestrian motion posture recognition model based on an Elman neural network.
Train the Elman neural network with each pedestrian's motion energy image over consecutive frames as input data and the corresponding pedestrian's posture as output data.
The standing posture output corresponds to [0 0 1], the walking posture output to [0 1 0], and the running posture output to [1 0 0].
The Elman neural network parameters are set as follows: the number of input-layer nodes corresponds to the number of motion-energy-image pixels x, the hidden layer has 2x+1 nodes, the output layer has 3 nodes, the maximum number of iterations is 1500, the learning rate is 0.001, and the threshold is 0.00001.
Step 5: use the Elman-based pedestrian motion posture recognition model to judge the pedestrian's posture in the current video.
Extract the detection-box images of the same pedestrian in consecutive frames of the current video according to Step 2, input them into the Elman-based recognition model, and obtain the corresponding posture.
Step 6: calculate the pixel-coordinate change sequence of the lower-left vertex of the same pedestrian's detection box across consecutive frames, calculate the pedestrian's instantaneous velocity sequences along the X and Y axes, and obtain the pedestrian's real-time speed.
Step 7: according to the three-dimensional scene of the crossing environment, obtain the pedestrian's position in the image in real time and, combined with the pedestrian's posture and real-time speed, obtain the pedestrian's real-time motion features.
The camera at the crossing is a depth camera. A three-dimensional scene of the crossing environment is established and the pedestrian's position in the image is obtained in real time; the scene is partitioned into pedestrian paths and roadways according to the actual road conditions. When a person enters the scene, an ID is assigned to that person, and the person's motion features are judged from consecutive frame information.
Further, a chicken swarm algorithm is used to optimize the weights and thresholds of the Elman neural network in the pedestrian motion posture recognition model, with the following steps:
Step A1: take the positions of the chicken swarm individuals as the weights and thresholds of the Elman neural network, and initialize the swarm parameters.
Population size M = [20, 100]; the search-space dimension is j, whose value is the total number of weight and threshold parameters of the Elman neural network to be optimized; maximum iteration count T = [400, 1000]; iteration counter t, initial value 0; rooster ratio Pg = 20%, hen ratio Pm = 70%, chick ratio Px = 10%; mother hens are randomly selected from the hens with ratio Pd = 10%.
Step A2: set the fitness function, and let the iteration counter t = 1.
Substitute the weights and thresholds corresponding to each individual's position into the Elman-based recognition model in turn, use the model so determined to find the pedestrian posture for the input detection-box images of the same pedestrian in consecutive frames, and take the reciprocal of the difference between the detected posture value and the actual posture value as the first fitness function f1(x).
The greater the fitness, the better the individual.
Step A3: construct the chicken swarm subgroups.
Sort all individuals by fitness value; the top M*Pg individuals are designated roosters, each rooster heading one subgroup; the bottom M*Px individuals are designated chicks; the remaining individuals are designated hens.
The swarm is divided into subgroups according to the number of roosters; each subgroup comprises one rooster, several hens and several chicks, and each chick randomly selects a hen in the population to establish a mother-child relationship.
Step A4: update individual positions and calculate the current fitness of each individual.
Rooster position update formula:
x_{i,j}^{t+1} = x_{i,j}^t × (1 + r(0, σ²))
where x_{i,j}^t denotes the position of rooster i in dimension j at the t-th iteration, x_{i,j}^{t+1} is the rooster's new position at the (t+1)-th iteration, and r(0, σ²) is a random number obeying the normal distribution N(0, σ²) with mean 0 and standard deviation σ².
Hen position update formula:
x_{g,j}^{t+1} = x_{g,j}^t + L1 × rand(0,1) × (x_{i1,j}^t − x_{g,j}^t) + L2 × rand(0,1) × (x_{i2,j}^t − x_{g,j}^t)
where x_{g,j}^t is the position of hen g in dimension j at the t-th iteration, x_{i1,j}^t is the position of the unique rooster i1 in the hen's subgroup, x_{i2,j}^t is the position of a random rooster i2 outside the hen's subgroup, rand(0,1) is a uniformly random value in (0,1), and L1 and L2 are the position-update coefficients by which the hen is influenced by its own subgroup and by other subgroups, with L1 in the range [0.25, 0.55] and L2 in the range [0.15, 0.35].
Chick position update formula:
x_{l,j}^{t+1} = ω × x_{l,j}^t + α × (x_{gm,j}^t − x_{l,j}^t) + β × (x_{i,j}^t − x_{l,j}^t)
where x_{l,j}^t is the position of chick l in dimension j at the t-th iteration, x_{gm,j}^t is the position of the chick's mother hen gm, x_{i,j}^t is the position of the unique rooster in the chick's subgroup, and ω, α, β are respectively the chick's self-update coefficient in [0.2, 0.7], follow-mother coefficient in [0.5, 0.8] and follow-rooster coefficient in [0.8, 1.5].
Step A5: update each individual's best position and the swarm's global best position according to the fitness function; judge whether the maximum iteration count is reached, exit if so, otherwise let t = t + 1 and return to Step A3. When the maximum iteration count is met, output the Elman weights and thresholds corresponding to the best swarm position, yielding the Elman-based pedestrian motion posture recognition model.
Further, the pedestrian's real-time speed is
v = sqrt((v_X)² + (v_Y)²), with v_X = ΔW_j × m and v_Y = ΔL_j × m,
where v_X and v_Y respectively denote the pedestrian's instantaneous velocity along the X and Y axes, and
ΔW_j = k|w2 − w1| = k|x2×P − x1×P|, ΔL_j = |f(l2) − f(l1)|, l1 = (N − y1) × P, l2 = (N − y2) × P.
The pixel coordinates of the pedestrian target point in the previous and current frame are (x1, y1) and (x2, y2) respectively; l1 and l2 denote the distances of the target point from the Y-axis edge of the display screen in the two adjacent frames; f(·) converts an on-screen distance l into a ground distance through the camera geometry below.
k denotes the ratio of the actual scene distance to the imaged scene distance on the display screen; M and N denote the total numbers of pixels along the X and Y axes of the display screen; P denotes the side length of each pixel, so MP and NP are respectively the total lengths of the screen's X and Y axes; ΔW_j and ΔL_j denote the displacements of the pedestrian target point along the X and Y axes between the two adjacent frames.
AB denotes the distance from the depth camera to the pedestrian, α the angle between the camera-pedestrian line and the ground plane, θ the angle between the camera-pedestrian line and the imaging plane, and m the frame rate.
The values of AB, α and θ are measured in real time by the depth camera.
Further, according to the pedestrian's real-time motion features, pedestrian-behavior-level warnings are issued to vehicles on the roadway.
The behavior levels comprise three grades: safe, threatening and dangerous.
Safe behavior comprises: a pedestrian standing more than one meter from the roadway; a pedestrian on a pedestrian path, or more than one meter from the roadway, walking parallel to the roadway or facing away from it; and a pedestrian running while facing away from the roadway.
Threatening behavior comprises: a pedestrian on a pedestrian path within one meter of the roadway; a pedestrian standing on the pedestrian path; and a pedestrian running within one meter of the roadway edge.
Dangerous behavior comprises: a pedestrian on a pedestrian path moving towards the roadway; a pedestrian running in the roadway; and a pedestrian walking in the roadway.
When a walking pedestrian in the threatening state exceeds 1.9 m/s, or a running pedestrian exceeds 8 m/s, the threatening behavior is upgraded to dangerous behavior.
The behavior level refers to the safety state of the pedestrian in the traffic environment; the different behavior levels serve as prompts to vehicle drivers travelling in the traffic environment, to ensure traffic safety.
Further, the pedestrian target point is the lower-left pixel of the pedestrian detection-box image.
Further, the pedestrian image frames are pre-processed, and pedestrian detection boxes, pedestrian target identifiers and pedestrian position label vectors are set for the pre-processed images to construct pedestrian trajectories.
The pedestrian detection box is the minimum circumscribed rectangle of the pedestrian's contour in the pedestrian image frame.
The pedestrian target identifier is the unique identifier P of each distinct pedestrian appearing in all pedestrian image frames.
The pedestrian position label vector takes the form [t, x, y, a, b], where t indicates that the current pedestrian image frame is the t-th frame of the surveillance video, x and y denote the abscissa and ordinate of the lower-left corner of the pedestrian detection box in the frame, and a and b denote the length and width of the detection box.
The appearance result of a pedestrian from the previous frame in the next frame means: if a pedestrian in the previous frame appears in the next frame, the tracking result for that pedestrian is 1, otherwise 0. If the tracking result is 1, the corresponding pedestrian position label vector from the next frame is appended to the pedestrian's trajectory.
Beneficial effect
The present invention provides a visual extraction method for multi-dimensional pedestrian motion features in traffic environments, comprising the following steps: Step 1: construct a pedestrian motion database; Step 2: extract images from the videos in the database, pre-process them to obtain the pedestrian detection box of each frame, and extract the detection-box images of the same pedestrian in consecutive frames; Step 3: convert each detection-box image to grayscale, synthesize the grayscale detection-box images of the same pedestrian into a motion energy image, and extract its HOG features; Step 4: construct an Elman-based pedestrian motion posture recognition model; Step 5: use the model to judge the pedestrian's posture in the current video; Step 6: calculate the pixel-coordinate change sequence of the lower-left vertex of the same pedestrian's detection box across consecutive frames and the instantaneous velocity sequences along the X and Y axes to obtain the real-time speed; Step 7: according to the three-dimensional scene of the crossing environment, obtain the pedestrian's position in real time and, combined with posture and speed, obtain the pedestrian's real-time motion features.
Compared with the prior art, the invention has the following advantages:
1. High recognition accuracy: the HOG features of the synthesized motion energy image contain both the pedestrian's motion information over the whole image sequence and the pedestrian's motion-energy information; the features are representative and greatly facilitate posture recognition.
2. Convenient application: the proposed pedestrian speed calculation operates directly on visual images, combining speed measurement and image recognition in one pipeline and making the method convenient to use.
3. Complete network structure: the invention realizes not only the posture recognition of pedestrians in images but also the calculation of their speed; the complete structure greatly facilitates users.
4. Good robustness: the neural network has strong nonlinear fitting capability and remains robust under illumination changes, partial occlusion of pedestrians, and similar conditions.
Description of the drawings
Fig. 1 is the flow chart of the method for the invention;
Fig. 2 is the distance between depth camera and pedestrian relation schematic diagram.
Specific embodiment
The present invention is further described below in conjunction with the drawings and embodiments.
As shown in Fig. 1, a visual extraction method for multi-dimensional pedestrian motion features in traffic environments comprises the following steps:
Step 1: construct a pedestrian motion database.
Collect videos of the various motion postures of pedestrians, and of the road sites they occupy, under each shooting direction of a depth camera, where the shooting directions comprise seven directions relative to the lens: front, front-left, front-right, side, rear, rear-left and rear-right, and the postures comprise three kinds: walking, running and standing.
Step 2: extract images from the videos in the pedestrian motion database, pre-process the extracted images to obtain the pedestrian detection box of each frame, and then extract the detection-box images of the same pedestrian in consecutive frames.
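As a sketch of Step 2, the minimum circumscribed detection box can be computed from a binary pedestrian silhouette mask. The mask itself would come from whatever foreground-segmentation or detection stage is used; the helper name and toy mask below are illustrative only:

```python
import numpy as np

def detection_box(mask):
    """Minimum axis-aligned circumscribed rectangle of a binary
    pedestrian silhouette mask; returns (x, y, w, h) with (x, y)
    the lower-left corner in image coordinates (rows grow downward,
    so the lower-left corner is the max-row, min-col corner)."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None                        # no pedestrian pixels
    x, y = cols.min(), rows.max()          # lower-left corner
    w = cols.max() - cols.min() + 1        # box width
    h = rows.max() - rows.min() + 1        # box height
    return int(x), int(y), int(w), int(h)

# toy 6x6 mask with a 3x2 "pedestrian" blob
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 1:3] = True
print(detection_box(mask))  # -> (1, 4, 2, 3)
```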
Step 3: convert each pedestrian detection-box image to grayscale, synthesize the grayscale detection-box images of the same pedestrian in consecutive frames into a motion energy image, and extract the HOG features of the motion energy image.
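The motion-energy-image synthesis and HOG extraction of Step 3 can be sketched in pure NumPy as follows. The patent does not fix the synthesis operator or the HOG parameters; averaging the grayscale crops and an unnormalized 9-bin per-cell histogram are assumptions made for brevity:

```python
import numpy as np

def motion_energy_image(gray_frames):
    """Average the grayscale detection-box crops of the same pedestrian
    across consecutive frames into one motion energy image."""
    return np.mean(np.stack(gray_frames, axis=0), axis=0)

def hog_features(img, cell=8, bins=9):
    """Minimal histogram-of-oriented-gradients descriptor: per-cell
    histograms of gradient orientation weighted by gradient magnitude
    (no block normalization, for brevity)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    H, W = img.shape
    feats = []
    for r in range(0, H - cell + 1, cell):
        for c in range(0, W - cell + 1, cell):
            a = ang[r:r+cell, c:c+cell].ravel()
            m = mag[r:r+cell, c:c+cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

frames = [np.random.rand(32, 16) for _ in range(5)]   # 5 toy grayscale crops
mei = motion_energy_image(frames)
print(mei.shape, hog_features(mei, cell=8, bins=9).shape)
```

A 32x16 image with 8-pixel cells yields 8 cells, so the descriptor has 72 components here.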
Step 4: construct a pedestrian motion posture recognition model based on an Elman neural network.
Train the Elman neural network with each pedestrian's motion energy image over consecutive frames as input data and the corresponding pedestrian's posture as output data.
The standing posture output corresponds to [0 0 1], the walking posture output to [0 1 0], and the running posture output to [1 0 0].
The Elman neural network parameters are set as follows: the number of input-layer nodes corresponds to the number of motion-energy-image pixels x, the hidden layer has 2x+1 nodes, the output layer has 3 nodes, the maximum number of iterations is 1500, the learning rate is 0.001, and the threshold is 0.00001.
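A minimal forward pass of an Elman network with the stated layer sizes (x inputs, 2x+1 hidden units, 3 one-hot posture outputs) might look as follows; the weight initialization and softmax output are illustrative assumptions, not the patent's training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

class Elman:
    """Minimal Elman (simple recurrent) network: a context layer feeds
    the previous hidden state back into the hidden layer. Layer sizes
    follow the patent: x inputs, 2x+1 hidden units, 3 outputs
    (one-hot posture codes: stand [0,0,1], walk [0,1,0], run [1,0,0])."""
    def __init__(self, n_in):
        n_h = 2 * n_in + 1
        self.Wx = rng.normal(0, 0.1, (n_h, n_in))   # input -> hidden
        self.Wc = rng.normal(0, 0.1, (n_h, n_h))    # context -> hidden
        self.Wo = rng.normal(0, 0.1, (3, n_h))      # hidden -> output
        self.context = np.zeros(n_h)

    def step(self, x):
        h = np.tanh(self.Wx @ x + self.Wc @ self.context)
        self.context = h                             # store for next step
        z = self.Wo @ h
        e = np.exp(z - z.max())
        return e / e.sum()                           # softmax posture probs

net = Elman(n_in=4)                                  # toy 4-pixel input
probs = net.step(rng.random(4))
print(probs.shape)
```

Training would adjust Wx, Wc and Wo by gradient descent with the stated learning rate 0.001, or by the chicken swarm optimization described next.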
A chicken swarm algorithm is used to optimize the weights and thresholds of the Elman neural network in the pedestrian motion posture recognition model, with the following steps:
Step A1: take the positions of the chicken swarm individuals as the weights and thresholds of the Elman neural network, and initialize the swarm parameters.
Population size M = [20, 100]; the search-space dimension is j, whose value is the total number of weight and threshold parameters of the Elman neural network to be optimized; maximum iteration count T = [400, 1000]; iteration counter t, initial value 0; rooster ratio Pg = 20%, hen ratio Pm = 70%, chick ratio Px = 10%; mother hens are randomly selected from the hens with ratio Pd = 10%.
Step A2: set the fitness function, and let the iteration counter t = 1.
Substitute the weights and thresholds corresponding to each individual's position into the Elman-based recognition model in turn, use the model so determined to find the pedestrian posture for the input detection-box images of the same pedestrian in consecutive frames, and take the reciprocal of the difference between the detected posture value and the actual posture value as the first fitness function f1(x).
The greater the fitness, the better the individual.
Step A3: construct the chicken swarm subgroups.
Sort all individuals by fitness value; the top M*Pg individuals are designated roosters, each rooster heading one subgroup; the bottom M*Px individuals are designated chicks; the remaining individuals are designated hens.
The swarm is divided into subgroups according to the number of roosters; each subgroup comprises one rooster, several hens and several chicks, and each chick randomly selects a hen in the population to establish a mother-child relationship.
Step A4: update individual positions and calculate the current fitness of each individual.
Rooster position update formula:
x_{i,j}^{t+1} = x_{i,j}^t × (1 + r(0, σ²))
where x_{i,j}^t denotes the position of rooster i in dimension j at the t-th iteration, x_{i,j}^{t+1} is the rooster's new position at the (t+1)-th iteration, and r(0, σ²) is a random number obeying the normal distribution N(0, σ²) with mean 0 and standard deviation σ².
Hen position update formula:
x_{g,j}^{t+1} = x_{g,j}^t + L1 × rand(0,1) × (x_{i1,j}^t − x_{g,j}^t) + L2 × rand(0,1) × (x_{i2,j}^t − x_{g,j}^t)
where x_{g,j}^t is the position of hen g in dimension j at the t-th iteration, x_{i1,j}^t is the position of the unique rooster i1 in the hen's subgroup, x_{i2,j}^t is the position of a random rooster i2 outside the hen's subgroup, rand(0,1) is a uniformly random value in (0,1), and L1 and L2 are the position-update coefficients by which the hen is influenced by its own subgroup and by other subgroups, with L1 in the range [0.25, 0.55] and L2 in the range [0.15, 0.35].
Chick position update formula:
x_{l,j}^{t+1} = ω × x_{l,j}^t + α × (x_{gm,j}^t − x_{l,j}^t) + β × (x_{i,j}^t − x_{l,j}^t)
where x_{l,j}^t is the position of chick l in dimension j at the t-th iteration, x_{gm,j}^t is the position of the chick's mother hen gm, x_{i,j}^t is the position of the unique rooster in the chick's subgroup, and ω, α, β are respectively the chick's self-update coefficient in [0.2, 0.7], follow-mother coefficient in [0.5, 0.8] and follow-rooster coefficient in [0.8, 1.5].
Step A5: update each individual's best position and the swarm's global best position according to the fitness function; judge whether the maximum iteration count is reached, exit if so, otherwise let t = t + 1 and return to Step A3. When the maximum iteration count is met, output the Elman weights and thresholds corresponding to the best swarm position, yielding the Elman-based pedestrian motion posture recognition model.
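The chicken swarm optimization loop of Steps A1-A5 can be sketched as below, here minimizing a toy sphere function in place of the Elman-based fitness of Step A2; the coefficient values and the subgroup bookkeeping are simplified assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def cso(fitness, dim, M=20, T=50, Pg=0.2, Pm=0.7):
    """Minimal chicken swarm optimization (minimization): roosters
    perturb themselves with Gaussian noise, hens move toward the
    rooster of their subgroup and a random other rooster, chicks
    follow their mother hen and their rooster. A sketch, not the
    patent's exact scheme."""
    X = rng.uniform(-5, 5, (M, dim))
    best = min(X, key=fitness).copy()
    n_g, n_m = int(M * Pg), int(M * Pm)
    for t in range(T):
        order = np.argsort([fitness(x) for x in X])  # best first
        roosters = order[:n_g]
        hens = order[n_g:n_g + n_m]
        chicks = order[n_g + n_m:]
        for i in roosters:                           # self-exploration
            X[i] = X[i] * (1 + rng.normal(0, 0.1, dim))
        for i in hens:                               # follow two roosters
            r1, r2 = rng.choice(roosters, 2, replace=False)
            X[i] = X[i] + 0.4 * rng.random() * (X[r1] - X[i]) \
                        + 0.2 * rng.random() * (X[r2] - X[i])
        for i in chicks:                             # follow mother hen + rooster
            mom, r = rng.choice(hens), rng.choice(roosters)
            X[i] = 0.5 * X[i] + 0.7 * (X[mom] - X[i]) + 1.0 * (X[r] - X[i])
        cand = min(X, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand.copy()
    return best

sphere = lambda x: float(np.sum(x ** 2))             # toy fitness
best = cso(sphere, dim=3)
print(best.shape)
```

In the patent's setting, `fitness` would evaluate an Elman network built from the candidate weights and thresholds, and the sign is flipped since the patent maximizes fitness.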
Step 5: use the Elman-based pedestrian motion posture recognition model to judge the pedestrian's posture in the current video.
Extract the detection-box images of the same pedestrian in consecutive frames of the current video according to Step 2, input them into the Elman-based recognition model, and obtain the corresponding posture.
Step 6: calculate the pixel-coordinate change sequence of the lower-left vertex of the same pedestrian's detection box across consecutive frames, calculate the pedestrian's instantaneous velocity sequences along the X and Y axes, and obtain the pedestrian's real-time speed.
The pedestrian's real-time speed is
v = sqrt((v_X)² + (v_Y)²), with v_X = ΔW_j × m and v_Y = ΔL_j × m,
where v_X and v_Y respectively denote the pedestrian's instantaneous velocity along the X and Y axes, and
ΔW_j = k|w2 − w1| = k|x2×P − x1×P|, ΔL_j = |f(l2) − f(l1)|, l1 = (N − y1) × P, l2 = (N − y2) × P.
The pixel coordinates of the pedestrian target point in the previous and current frame are (x1, y1) and (x2, y2) respectively; l1 and l2 denote the distances of the target point from the Y-axis edge of the display screen in the two adjacent frames; f(·) converts an on-screen distance l into a ground distance through the camera geometry below.
k denotes the ratio of the actual scene distance to the imaged scene distance on the display screen; M and N denote the total numbers of pixels along the X and Y axes of the display screen; P denotes the side length of each pixel, so MP and NP are respectively the total lengths of the screen's X and Y axes; ΔW_j and ΔL_j denote the displacements of the pedestrian target point along the X and Y axes between the two adjacent frames.
As shown in Fig. 2, AB denotes the distance from the depth camera to the pedestrian, α the angle between the camera-pedestrian line and the ground plane, and θ the angle between the camera-pedestrian line and the imaging plane; the values of AB, α and θ are measured in real time by the depth camera, and m is the frame rate.
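Under the stated pixel geometry, the speed computation of Step 6 reduces to a few lines. The identity depth mapping standing in for f(·) (which the patent derives from AB, α and θ) and all numeric values below are assumptions for illustration:

```python
import math

def pedestrian_speed(p1, p2, k, P, fps):
    """Instantaneous speed of a pedestrian target point between two
    consecutive frames. p1, p2 are pixel coordinates (x, y); k maps
    on-screen distance to real-world meters; P is the pixel side
    length; fps is the camera frame rate. The patent maps the vertical
    (depth) axis through the camera geometry f(l) built from AB, α, θ;
    a simple k-scaled identity mapping is used here as a placeholder."""
    (x1, y1), (x2, y2) = p1, p2
    dW = k * abs(x2 * P - x1 * P)          # real lateral displacement
    dL = k * abs(y2 * P - y1 * P)          # placeholder for |f(l2) - f(l1)|
    vX, vY = dW * fps, dL * fps            # displacement per second
    return vX, vY, math.hypot(vX, vY)      # speed = sqrt(vX^2 + vY^2)

vX, vY, v = pedestrian_speed((100, 200), (103, 204), k=0.02, P=0.0005, fps=30)
print(round(v, 6))  # -> 0.0015
```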
Step 7:According to the 3 D stereo scene under the environment of crossing, the location information of pedestrian in image is obtained in real time, in conjunction with Pedestrian's posture and real-time speed obtain the real time kinematics feature of pedestrian.
The camera at crossing uses depth camera, establishes the 3 D stereo scene under the environment of crossing, obtains image in real time The location information of middle pedestrian, according to real road situation by 3 D stereo scene partitioning be Route for pedestrians and carriageway, work as people Into in 3 D stereo scene, an ID is established to everyone, the motion feature of people is judged by sequential frame image information.
According to the pedestrian's real-time motion features, a pedestrian-behavior-level warning is issued to vehicles travelling on the carriageway.
The behavior levels comprise three levels: safe, threat and danger.
Safe behavior comprises: a pedestrian in the standing posture more than one meter from the carriageway; a pedestrian on the pedestrian path in the walking posture moving parallel to the carriageway, or moving away from the carriageway at more than one meter from it; and a pedestrian in the running posture moving away from the carriageway.
Threat behavior comprises: a pedestrian on the pedestrian path within one meter of the carriageway in the standing posture; and a pedestrian within one meter of the carriageway edge in the running posture.
Dangerous behavior comprises: a pedestrian on the pedestrian path moving toward the carriageway; and a pedestrian in the carriageway in the running or walking posture.
When a pedestrian at the threat level walks faster than 1.9 m/s or runs faster than 8 m/s, the threat behavior is upgraded to dangerous behavior.
The behavior level characterizes the safety of the pedestrian's state in the traffic environment; the different behavior levels prompt vehicle drivers travelling in the traffic environment, so as to ensure traffic safety.
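The three behavior levels and the escalation rule above can be sketched as a small classifier. The `Pedestrian` record, the zone names and the function names below are illustrative assumptions of ours; only the one-meter buffer and the 1.9 m/s / 8 m/s thresholds come from the text.

```python
from dataclasses import dataclass

@dataclass
class Pedestrian:
    posture: str            # "stand", "walk" or "run"
    zone: str               # "walkway" or "roadway" (hypothetical zone labels)
    dist_to_roadway: float  # metres from the carriageway edge (0 if on it)
    toward_roadway: bool    # heading component points at the carriageway

def warning_level(p: Pedestrian) -> str:
    """Three-level rule as described: in the carriageway or moving toward it
    is danger; standing or running within the one-meter buffer is threat."""
    if p.zone == "roadway":
        return "danger"
    if p.posture in ("walk", "run") and p.toward_roadway:
        return "danger"
    if p.dist_to_roadway < 1.0:
        return "threat"
    return "safe"

def escalate(level: str, posture: str, speed: float) -> str:
    # A threat upgrades to danger once the pedestrian moves fast enough.
    if level == "threat" and (
        (posture == "walk" and speed > 1.9) or
        (posture == "run" and speed > 8.0)
    ):
        return "danger"
    return level
```

For example, a pedestrian standing half a meter from the carriageway is classified as a threat, and escalates to danger only if they start walking above 1.9 m/s.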
In this example, the lower-left-corner pixel of the pedestrian detection box image is taken as the pedestrian target point.
The pedestrian image frames are preprocessed, and a pedestrian detection box, a pedestrian target identifier and a pedestrian position label vector are set for each preprocessed image to construct pedestrian tracks.
The pedestrian detection box is the minimum bounding rectangle of the pedestrian contour in the pedestrian image frame.
The pedestrian target identifier is the unique identifier P of each distinct pedestrian appearing in the pedestrian image frames.
The pedestrian position label vector has the form [t, x, y, a, b], where t indicates that the current pedestrian image frame is the t-th frame of the surveillance video, x and y respectively denote the abscissa and ordinate of the lower-left corner of the pedestrian detection box in the frame, and a and b respectively denote the length and width of the detection box.
The appearance result of a pedestrian from the previous frame in the next frame means: if a pedestrian in the previous pedestrian image frame appears in the next pedestrian image frame, the tracking result for that pedestrian is 1, otherwise 0. If the tracking result is 1, the corresponding pedestrian position label vector appearing in the next frame is appended to the pedestrian track.
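The track-building rule above (append the position label vector of a re-appearing pedestrian to that pedestrian's track) can be sketched as follows. The dict-of-lists track store and the function name are our own illustrative choices; the [t, x, y, a, b] vector layout comes from the text.

```python
def update_tracks(tracks, detections):
    """tracks: {pedestrian_id: [[t, x, y, a, b], ...]} accumulated so far.
    detections: {pedestrian_id: [t, x, y, a, b]} for the current frame.
    A pedestrian already in `tracks` has tracking result 1, so its new
    position label vector is appended; a new pedestrian opens a track."""
    for pid, vec in detections.items():
        if pid in tracks:          # tracking result 1: seen before
            tracks[pid].append(vec)
        else:                      # first appearance of this identifier
            tracks[pid] = [vec]
    return tracks
```

Feeding the per-frame detections in frame order reconstructs each pedestrian's full track.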
The specific embodiments described herein are merely illustrative of the spirit of the invention. Those skilled in the art to which the invention pertains may make various modifications, additions or similar substitutions to the described embodiments without departing from the spirit of the invention or exceeding the scope of the appended claims.

Claims (6)

1. A visual extraction method for multi-dimensional pedestrian motion features in a traffic environment, characterized by comprising the following steps:
Step 1: Construct a pedestrian motion database;
collect videos of pedestrians in various motion postures and at various road locations under each shooting direction of a depth camera, wherein the shooting directions comprise seven directions relative to the camera lens: front, front-left, front-right, side, rear, rear-left and rear-right, and the postures comprise walking, running and standing;
Step 2: Extract images from the videos in the pedestrian motion database, preprocess the extracted images to obtain the pedestrian detection box of each frame, and then extract the detection box images of the same pedestrian in successive frames;
Step 3: Convert each pedestrian detection box image to grayscale, synthesize the motion energy figure of the grayscale images corresponding to the same pedestrian's detection box images in successive frames, and extract the HOG features of the motion energy figure;
Step 4: Construct a pedestrian motion posture recognition model based on an Elman neural network;
take the motion energy figure of each pedestrian over the successive frames as input data and the corresponding pedestrian's posture as output data, and train the Elman neural network;
the standing posture output corresponds to [0 0 1], the walking posture output corresponds to [0 1 0], and the running posture output corresponds to [1 0 0];
the Elman neural network parameters are set as follows: the number of input-layer nodes equals the number of pixels x of the motion energy figure, the number of hidden-layer nodes is 2x + 1, the number of output-layer nodes is 3, the maximum number of iterations is 1500, the learning rate is 0.001, and the threshold is 0.00001;
Step 5: Use the Elman-neural-network-based pedestrian motion posture recognition model to judge the pedestrian postures in the current video;
extract from the current video the detection box images of the same pedestrian in successive frames according to Step 2, input them into the Elman-neural-network-based pedestrian motion posture recognition model to obtain the corresponding posture, and perform posture discrimination;
Step 6: Compute the pixel-coordinate change sequence of the lower-left vertex of the same pedestrian's detection box across successive frames, compute the pedestrian's instantaneous-velocity sequences along the X-axis and Y-axis directions, and obtain the pedestrian's real-time speed;
Step 7: Based on the three-dimensional scene of the crossing environment, obtain the position of the pedestrian in the image in real time, and combine it with the pedestrian's posture and real-time speed to obtain the pedestrian's real-time motion features.
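Step 3's "motion energy figure" is synthesized from the grayscale detection-box images, but the claim does not spell out the synthesis. A minimal sketch, assuming the textbook motion energy image built from thresholded differences of consecutive grayscale frames (the threshold value and function name are our own assumptions, not the patent's exact recipe):

```python
def motion_energy_image(frames, thresh=15):
    """frames: list of equal-sized 2-D lists of grayscale values (0-255),
    the same pedestrian's detection-box crops in successive frames.
    Returns a binary 2-D list with 1 wherever motion occurred in any
    consecutive frame pair (the union of thresholded frame differences)."""
    h, w = len(frames[0]), len(frames[0][0])
    mei = [[0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for i in range(h):
            for j in range(w):
                if abs(cur[i][j] - prev[i][j]) > thresh:
                    mei[i][j] = 1   # motion energy at this pixel
    return mei
```

The resulting binary map is what would then be fed to the HOG feature extractor of Step 3.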
2. The method according to claim 1, characterized in that a chicken swarm algorithm is used to optimize the weights and thresholds of the Elman neural network in the Elman-neural-network-based pedestrian motion posture recognition model, with the following specific steps:
Step A1: Take the positions of the chicken swarm individuals as the weights and thresholds of the Elman neural network, and initialize the swarm parameters;
the population size is M = [20, 100]; the search-space dimension is j, whose value is the total number of weight and threshold parameters of the Elman neural network to be optimized; the maximum number of iterations is T = [400, 1000]; the iteration counter t has initial value 0; the rooster ratio is Pg = 20%, the hen ratio is Pm = 70%, and the chick ratio is Px = 10%; mother hens are randomly selected from the hens with ratio Pd = 10%;
Step A2: Set the fitness function and let the iteration counter t = 1;
substitute the weights and thresholds corresponding to each individual's position in turn into the Elman-neural-network-based pedestrian motion posture recognition model, use the model so parameterized to determine the pedestrian posture of the same pedestrian in the input detection box images of successive frames, and take the reciprocal of the difference between the detected posture values and the corresponding actual posture values of the same pedestrian's detection box images in successive frames as the first fitness function f1(x);
Step A3: Construct the chicken swarm subgroups;
sort all individuals by fitness value: the top M·Pg individuals are designated roosters, each rooster heading one subgroup; the bottom M·Px individuals are designated chicks; the remaining individuals are designated hens;
divide the swarm into subgroups according to the number of roosters, each subgroup containing one rooster, several chicks and several hens, and let each chick randomly select a hen in the population to establish a mother-child relationship;
Step A4: Update the individual positions of the swarm and compute each individual's current fitness;
Rooster position update formula:
x_{i,j}^{t+1} = x_{i,j}^t × (1 + r(0, σ²))
wherein x_{i,j}^t denotes the position of rooster i in dimension j of the search space at the t-th iteration, x_{i,j}^{t+1} the corresponding rooster's new position at the (t+1)-th iteration, and r(0, σ²) a random number obeying the normal distribution N(0, σ²) with mean 0 and standard deviation σ²;
Hen position update formula:
x_{g,j}^{t+1} = x_{g,j}^t + L1 · rand(0,1) · (x_{i1,j}^t − x_{g,j}^t) + L2 · rand(0,1) · (x_{i2,j}^t − x_{g,j}^t)
wherein x_{g,j}^t is the position of hen g in dimension j at the t-th iteration, x_{i1,j}^t the position of the unique rooster i1 of the subgroup containing hen g, x_{i2,j}^t the position of a randomly selected rooster i2 outside that subgroup, rand(0,1) a random function taking uniformly random values in (0,1), and L1, L2 the position-update coefficients for the influence of the hen's own subgroup and of other subgroups, with L1 in the range [0.25, 0.55] and L2 in the range [0.15, 0.35];
Chick position update formula:
x_{l,j}^{t+1} = ω · x_{l,j}^t + α · (x_{gm,j}^t − x_{l,j}^t) + β · (x_{r,j}^t − x_{l,j}^t)
wherein x_{l,j}^t is the position of chick l in dimension j at the t-th iteration, x_{gm,j}^t the position of the mother hen gm in the chick's mother-child relationship, x_{r,j}^t the position of the unique rooster of the chick's subgroup, and ω, α, β respectively the chick self-update coefficient in [0.2, 0.7], the mother-hen-following coefficient in [0.5, 0.8] and the rooster-following coefficient in [0.8, 1.5];
Step A5: Update each individual's best position and the swarm's global best position according to the fitness function, and judge whether the maximum number of iterations has been reached: if so, exit; otherwise let t = t + 1 and return to Step A3. When the maximum number of iterations is met, output the weights and thresholds of the Elman neural network corresponding to the optimal swarm individual position, obtaining the optimized Elman-neural-network-based pedestrian motion posture recognition model.
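The chicken-swarm update rules of claim 2 can be sketched as follows, with a toy sphere-function fitness standing in for the Elman-network training error. The role ratios (20 % roosters, 10 % chicks) come from the claim; the concrete coefficient values are picked from the claimed ranges, and the simplified loop structure (e.g. choosing both influencing roosters at random rather than own-subgroup vs. foreign) is our own illustrative choice.

```python
import random

def cso_minimize(f, dim, pop=20, iters=100, seed=1):
    """Minimize f over dim dimensions with a simplified chicken swarm."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)[:]
    n_roost, n_chick = pop // 5, pop // 10     # Pg = 20 %, Px = 10 %
    for _ in range(iters):
        order = sorted(range(pop), key=lambda i: f(X[i]))
        roosters = order[:n_roost]             # fittest individuals lead
        chicks = order[-n_chick:]              # least fit become chicks
        hens = order[n_roost:pop - n_chick]
        for i in roosters:                     # rooster: Gaussian self-search
            X[i] = [x * (1 + rng.gauss(0, 0.1)) for x in X[i]]
        for g in hens:                         # hen: pulled by two roosters
            r1, r2 = rng.choice(roosters), rng.choice(roosters)
            X[g] = [xg + 0.4 * rng.random() * (X[r1][j] - xg)
                       + 0.25 * rng.random() * (X[r2][j] - xg)
                    for j, xg in enumerate(X[g])]
        for l in chicks:                       # chick: follows mother hen + rooster
            m, r = rng.choice(hens), rng.choice(roosters)
            X[l] = [0.5 * xl + 0.6 * (X[m][j] - xl) + 0.9 * (X[r][j] - xl)
                    for j, xl in enumerate(X[l])]
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

sphere = lambda x: sum(v * v for v in x)       # toy fitness, minimum at origin
```

In the patent's setting, `f` would instead train/evaluate the Elman network with the candidate weights and thresholds and return the posture-detection error.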
3. The method according to claim 1, characterized in that the pedestrian's real-time speed is obtained from the instantaneous velocities Vx and Vy,
wherein Vx and Vy respectively denote the pedestrian's instantaneous velocity along the X-axis and Y-axis directions,
ΔWj = k·|w2 − w1| = k·|x2·P − x1·P|, ΔLj = |f(l2) − f(l1)|, l1 = (N − y1)·P, l2 = (N − y2)·P,
the pixel coordinates of the pedestrian target point in the previous frame and the current frame are (x1, y1) and (x2, y2) respectively; l1 and l2 respectively denote the distance of the pedestrian target point from the Y-axis edge of the display screen in the two adjacent frames;
k denotes the ratio of the actual scene distance to the imaged scene distance on the display screen; M and N respectively denote the total numbers of pixels of the display screen along the X-axis and Y-axis directions; P denotes the side length of each display-screen pixel, so that MP and NP are the total lengths of the screen along the X axis and the Y axis respectively; ΔWj and ΔLj respectively denote the displacements of the pedestrian target point along the X axis and the Y axis between the two adjacent frames;
AB denotes the distance from the depth camera to the pedestrian, α the angle between the line connecting the depth camera to the pedestrian and the ground plane, θ the angle between that line and the imaging plane, and m the frame number.
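The closed-form speed formula of claim 3 is an image that did not survive extraction, so this sketch only covers the X-direction term that the surviving symbol definitions pin down: the per-frame displacement ΔW = k·|x2·P − x1·P|. Treating m as the frame rate (frames per second) is our assumption; the Y-direction term needs the camera geometry (AB, α, θ) via f(l) and is omitted here.

```python
def x_speed(x1, x2, P, k, m):
    """x1, x2: pixel abscissae of the target point in consecutive frames;
    P: pixel side length on the display; k: real-scene / on-screen scale;
    m: assumed frame rate in frames per second. Returns metres per second."""
    dW = k * abs(x2 * P - x1 * P)  # real-world X displacement in one frame
    return dW * m                  # per-frame displacement times frame rate
```

For instance, a 10-pixel shift per frame on a 1 mm pixel pitch at scale k = 50 and 30 frames per second corresponds to 15 m/s.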
4. The method according to any one of claims 1-3, characterized in that, according to the pedestrian's real-time motion features, a pedestrian-behavior-level warning is issued to vehicles on the carriageway;
the behavior levels comprise three levels: safe, threat and danger;
safe behavior comprises: a pedestrian in the standing posture more than one meter from the carriageway; a pedestrian on the pedestrian path in the walking posture moving parallel to the carriageway, or moving away from the carriageway at more than one meter from it; and a pedestrian in the running posture moving away from the carriageway;
threat behavior comprises: a pedestrian on the pedestrian path within one meter of the carriageway in the standing posture; and a pedestrian within one meter of the carriageway edge in the running posture;
dangerous behavior comprises: a pedestrian on the pedestrian path moving toward the carriageway; and a pedestrian in the carriageway in the running or walking posture;
when a pedestrian at the threat level walks faster than 1.9 m/s or runs faster than 8 m/s, the threat behavior is upgraded to dangerous behavior.
5. The method according to claim 4, characterized in that the pedestrian target point is the lower-left-corner pixel of the pedestrian detection box image.
6. The method according to claim 5, characterized in that the pedestrian image frames are preprocessed, and a pedestrian detection box, a pedestrian target identifier and a pedestrian position label vector are set for each preprocessed image to construct pedestrian tracks;
the pedestrian detection box is the minimum bounding rectangle of the pedestrian contour in the pedestrian image frame;
the pedestrian target identifier is the unique identifier P of each distinct pedestrian appearing in the pedestrian image frames;
the pedestrian position label vector has the form [t, x, y, a, b], where t indicates that the current pedestrian image frame is the t-th frame of the surveillance video, x and y respectively denote the abscissa and ordinate of the lower-left corner of the pedestrian detection box in the frame, and a and b respectively denote the length and width of the detection box;
the appearance result of a pedestrian from the previous frame in the next frame means: if a pedestrian in the previous pedestrian image frame appears in the next pedestrian image frame, the tracking result for that pedestrian is 1, otherwise 0; if the tracking result is 1, the corresponding pedestrian position label vector appearing in the next frame is appended to the pedestrian track.
CN201810661219.4A 2018-06-25 2018-06-25 Multi-dimensional motion feature visual extraction method for pedestrians in traffic environment Active CN108830246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810661219.4A CN108830246B (en) 2018-06-25 2018-06-25 Multi-dimensional motion feature visual extraction method for pedestrians in traffic environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810661219.4A CN108830246B (en) 2018-06-25 2018-06-25 Multi-dimensional motion feature visual extraction method for pedestrians in traffic environment

Publications (2)

Publication Number Publication Date
CN108830246A true CN108830246A (en) 2018-11-16
CN108830246B CN108830246B (en) 2022-02-15

Family

ID=64138303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810661219.4A Active CN108830246B (en) 2018-06-25 2018-06-25 Multi-dimensional motion feature visual extraction method for pedestrians in traffic environment

Country Status (1)

Country Link
CN (1) CN108830246B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110632636A (en) * 2019-09-11 2019-12-31 桂林电子科技大学 Carrier attitude estimation method based on Elman neural network
WO2020103462A1 (en) * 2018-11-21 2020-05-28 百度在线网络技术(北京)有限公司 Video search method and apparatus, computer device, and storage medium
CN111265218A (en) * 2018-12-05 2020-06-12 阿里巴巴集团控股有限公司 Motion attitude data processing method and device and electronic equipment
CN111338344A (en) * 2020-02-28 2020-06-26 北京小马慧行科技有限公司 Vehicle control method and device and vehicle
WO2020253499A1 (en) * 2019-06-17 2020-12-24 平安科技(深圳)有限公司 Video object acceleration monitoring method and apparatus, and server and storage medium
CN115092091A (en) * 2022-07-11 2022-09-23 中国第一汽车股份有限公司 Vehicle and pedestrian protection system and method based on Internet of vehicles
CN116935447A (en) * 2023-09-19 2023-10-24 华中科技大学 Self-adaptive teacher-student structure-based unsupervised domain pedestrian re-recognition method and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160075016A1 (en) * 2014-09-17 2016-03-17 Brain Corporation Apparatus and methods for context determination using real time sensor data
CN206033008U (en) * 2016-03-09 2017-03-22 秀景A.I.D 股份有限公司 Automatic hand track sterilization equipment of power type
CN106789214A (en) * 2016-12-12 2017-05-31 广东工业大学 Network situation awareness method and device based on the sine-cosine algorithm
CN106875424A (en) * 2017-01-16 2017-06-20 西北工业大学 A kind of urban environment driving vehicle Activity recognition method based on machine vision
CN107122707A (en) * 2017-03-17 2017-09-01 山东大学 Video pedestrian based on macroscopic features compact representation recognition methods and system again
CN107126224A (en) * 2017-06-20 2017-09-05 中南大学 A kind of real-time monitoring of track train driver status based on Kinect and method for early warning and system
CN107153800A (en) * 2017-05-04 2017-09-12 天津工业大学 Reader antenna optimized deployment scheme for a UHF RFID positioning system based on an improved chicken swarm algorithm
CN107203753A (en) * 2017-05-25 2017-09-26 西安工业大学 A kind of action identification method based on fuzzy neural network and graph model reasoning
CN107657232A (en) * 2017-09-28 2018-02-02 南通大学 A kind of pedestrian's intelligent identification Method and its system

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160075016A1 (en) * 2014-09-17 2016-03-17 Brain Corporation Apparatus and methods for context determination using real time sensor data
CN206033008U (en) * 2016-03-09 2017-03-22 秀景A.I.D 股份有限公司 Automatic hand track sterilization equipment of power type
CN106789214A (en) * 2016-12-12 2017-05-31 广东工业大学 Network situation awareness method and device based on the sine-cosine algorithm
CN106875424A (en) * 2017-01-16 2017-06-20 西北工业大学 A kind of urban environment driving vehicle Activity recognition method based on machine vision
CN107122707A (en) * 2017-03-17 2017-09-01 山东大学 Video pedestrian based on macroscopic features compact representation recognition methods and system again
CN107153800A (en) * 2017-05-04 2017-09-12 天津工业大学 Reader antenna optimized deployment scheme for a UHF RFID positioning system based on an improved chicken swarm algorithm
CN107203753A (en) * 2017-05-25 2017-09-26 西安工业大学 A kind of action identification method based on fuzzy neural network and graph model reasoning
CN107126224A (en) * 2017-06-20 2017-09-05 中南大学 A kind of real-time monitoring of track train driver status based on Kinect and method for early warning and system
CN107657232A (en) * 2017-09-28 2018-02-02 南通大学 A kind of pedestrian's intelligent identification Method and its system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KUAN ZHANG ET AL: "Assessment of human locomotion by using an insole measurement system and artificial neural networks", JOURNAL OF BIOMECHANICS *
ZUO Hang et al.: "Pedestrian detection based on walking-topology analysis", Optoelectronics · Laser (《光电子·激光》) *
NIE Jianliang et al.: "Application of the Elman neural network in regional velocity-field modeling", Journal of Geodesy and Geodynamics (《大地测量与地球动力学》) *
ZI Chunyuan: "Gait recognition based on contour features and multifractal analysis", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020103462A1 (en) * 2018-11-21 2020-05-28 百度在线网络技术(北京)有限公司 Video search method and apparatus, computer device, and storage medium
US11348254B2 (en) 2018-11-21 2022-05-31 Baidu Online Network Technology (Beijing) Co., Ltd. Visual search method, computer device, and storage medium
CN111265218A (en) * 2018-12-05 2020-06-12 阿里巴巴集团控股有限公司 Motion attitude data processing method and device and electronic equipment
WO2020253499A1 (en) * 2019-06-17 2020-12-24 平安科技(深圳)有限公司 Video object acceleration monitoring method and apparatus, and server and storage medium
US11816570B2 (en) 2019-06-17 2023-11-14 Ping An Technology (Shenzhen) Co., Ltd. Method for accelerated detection of object in videos, server, and non-transitory computer readable storage medium
CN110632636A (en) * 2019-09-11 2019-12-31 桂林电子科技大学 Carrier attitude estimation method based on Elman neural network
CN111338344A (en) * 2020-02-28 2020-06-26 北京小马慧行科技有限公司 Vehicle control method and device and vehicle
CN115092091A (en) * 2022-07-11 2022-09-23 中国第一汽车股份有限公司 Vehicle and pedestrian protection system and method based on Internet of vehicles
CN116935447A (en) * 2023-09-19 2023-10-24 华中科技大学 Self-adaptive teacher-student structure-based unsupervised domain pedestrian re-recognition method and system
CN116935447B (en) * 2023-09-19 2023-12-26 华中科技大学 Self-adaptive teacher-student structure-based unsupervised domain pedestrian re-recognition method and system

Also Published As

Publication number Publication date
CN108830246B (en) 2022-02-15

Similar Documents

Publication Publication Date Title
CN108830246A (en) A kind of traffic environment pedestrian multi-dimensional movement characteristic visual extracting method
CN106875424B (en) A kind of urban environment driving vehicle Activity recognition method based on machine vision
CN106980829B (en) Abnormal behaviour automatic testing method of fighting based on video analysis
CN107423730B (en) Human gait behavior active detection and recognition system and method based on semantic folding
CN110175576A (en) A kind of driving vehicle visible detection method of combination laser point cloud data
Polana et al. Temporal texture and activity recognition
CN103077423B (en) To run condition detection method based on crowd's quantity survey of video flowing, local crowd massing situation and crowd
CN103198302B (en) A kind of Approach for road detection based on bimodal data fusion
CN106778655B (en) Human body skeleton-based entrance trailing entry detection method
CN110533695A (en) A kind of trajectory predictions device and method based on DS evidence theory
CN106611157A (en) Multi-people posture recognition method based on optical flow positioning and sliding window detection
CN103049751A (en) Improved weighting region matching high-altitude video pedestrian recognizing method
CN104183142B (en) A kind of statistical method of traffic flow based on image vision treatment technology
CN105844663A (en) Adaptive ORB object tracking method
CN104504394A (en) Dese population estimation method and system based on multi-feature fusion
CN100565557C (en) System for tracking infrared human body target based on corpuscle dynamic sampling model
CN101388080A (en) Passerby gender classification method based on multi-angle information fusion
CN105404894A (en) Target tracking method used for unmanned aerial vehicle and device thereof
CN103198296A (en) Method and device of video abnormal behavior detection based on Bayes surprise degree calculation
CN103426179A (en) Target tracking method and system based on mean shift multi-feature fusion
CN103871081A (en) Method for tracking self-adaptive robust on-line target
CN103810703A (en) Picture processing based tunnel video moving object detection method
CN110147748A (en) A kind of mobile robot obstacle recognition method based on road-edge detection
CN109086682A (en) A kind of intelligent video black smoke vehicle detection method based on multi-feature fusion
CN104159088A (en) System and method of remote monitoring of intelligent vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant