CN108805907B - Pedestrian posture multi-feature intelligent identification method - Google Patents

Pedestrian posture multi-feature intelligent identification method

Info

Publication number
CN108805907B
CN108805907B
Authority
CN
China
Prior art keywords: pedestrian, ant, individual, image, iteration
Prior art date
Legal status
Active
Application number
CN201810578415.5A
Other languages
Chinese (zh)
Other versions
CN108805907A (en)
Inventor
刘辉 (Liu Hui)
李燕飞 (Li Yanfei)
Current Assignee
Central South University
Original Assignee
Central South University
Priority date
Filing date
Publication date
Application filed by Central South University
Priority to CN201810578415.5A
Publication of CN108805907A
Application granted
Publication of CN108805907B


Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00: Computing arrangements based on biological models
                    • G06N 3/004: Artificial life, i.e. computing arrangements simulating life
                        • G06N 3/006: based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
                    • G06N 3/02: Neural networks
                        • G06N 3/08: Learning methods
                            • G06N 3/084: Backpropagation, e.g. using gradient descent
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00: Image analysis
                    • G06T 7/20: Analysis of motion
                        • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
                • G06T 2207/00: Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10: Image acquisition modality
                        • G06T 2207/10016: Video; image sequence
                    • G06T 2207/20: Special algorithmic details
                        • G06T 2207/20081: Training; learning
                        • G06T 2207/20084: Artificial neural networks [ANN]
                    • G06T 2207/30: Subject of image; context of image processing
                        • G06T 2207/30196: Human being; person
                        • G06T 2207/30241: Trajectory

Abstract

The invention discloses a pedestrian posture multi-feature intelligent identification method comprising the following steps. Step 1: construct a pedestrian sample image database. Step 2: preprocess the pedestrian image frames in the database and assign each preprocessed image a pedestrian detection frame, a pedestrian target identifier, and a pedestrian position tag vector. Step 3: construct a pedestrian detection model based on an extreme learning machine. Step 4: construct a pedestrian tracking model based on a BP neural network. Step 5: track and identify pedestrian trajectories in real time. Because pedestrian tracking uses a BP-neural-network-based method, the approach detects and labels pedestrians quickly and effectively, meets the real-time recognition requirements of emergencies in actual traffic environments, and also suits complex settings such as intelligent factories, laboratories, and robotic transport, contributing to the intelligence of modern traffic and industry.

Description

Pedestrian posture multi-feature intelligent identification method
Technical Field
The invention belongs to the field of artificial intelligence, and particularly relates to a pedestrian posture multi-feature intelligent identification method.
Background
With the rapid development of science and technology, pedestrian detection based on computer vision has been widely applied across many areas of life, such as intelligent trains and autonomous driving. Traffic safety is a perennial concern, and collisions between vehicles and pedestrians account for a large share of vehicle-collision accidents. Conventional safety technologies such as seat belts and airbags are now widespread, but they are passive protection measures. An active protective safety system for vehicles is therefore desirable, and accurate identification and tracking of pedestrians is its central problem.
At present, most pedestrian tracking methods are descriptive: appearance features of pedestrians, such as clothing color, serve as discriminative features; a color histogram is extracted from the image, and similarity is then computed via the Euclidean or Bhattacharyya distance. Some researchers have proposed multi-feature-fusion description methods for pedestrian matching and identification, but such shallow features are limited in descriptive capacity and strongly subjective.
Chinese patent CN201610317720 discloses a multi-target tracking method based on a recurrent neural network, comprising the following steps: 1: construct a surveillance-video dataset annotated with the pedestrian positions of each frame; 2: manually expand this dataset to obtain training samples; 3: group the training samples into several training groups; 4: construct a multi-target tracking network; 5: feed each training group, sequence by sequence, into the multi-target tracking network for training; 6: input the video data under test into the trained network and run forward propagation to obtain the motion trajectories of the multiple targets. The solution described in that patent has the following problems: 1. it does not consider pedestrians that temporarily disappear and reappear in the video, or new pedestrians that enter midway, and both situations cause judgment errors; 2. the pedestrian dataset must be manually expanded, which complicates the system and reduces efficiency; 3. the recurrent neural network algorithm is prone to local convergence.
Disclosure of Invention
The invention provides a pedestrian posture multi-feature intelligent identification method, aiming to solve the low accuracy and low efficiency of pedestrian-trajectory identification in surveillance video in the prior art.
A pedestrian posture multi-feature intelligent identification method comprises the following steps:
step 1: constructing a pedestrian sample image database;
the pedestrian sample image database is built by extracting consecutive pedestrian image frames from intersection surveillance video, yielding three types of image groups;
the three types of image groups are negative samples containing no pedestrian, multi-pedestrian samples containing several pedestrians, and single-pedestrian samples containing only the same pedestrian; each type of image group comprises at least 300 frames;
step 2: preprocessing a pedestrian image frame in a pedestrian sample database, and setting a pedestrian detection frame, a pedestrian target identifier and a pedestrian position tag vector for the preprocessed image;
the pedestrian detection frame is a minimum circumscribed rectangle of a pedestrian outline in a pedestrian image frame;
the pedestrian target identification is a unique identification P of different pedestrians appearing in all the pedestrian image frames;
the pedestrian position tag vector takes the form [t, x, y, a, b], where t indicates that the current pedestrian image frame is the t-th frame of the surveillance video, x and y are the abscissa and ordinate of the lower-left corner of the pedestrian detection frame in the image, and a and b are the length and width of the detection frame;
in different frame images of the monitoring video, the target identifications of the same pedestrian are the same;
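For concreteness, the detection frame, target identifier, and position tag vector of step 2 can be held in a small record type. The sketch below is illustrative only; the class and field names are assumptions, not terms from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class PedestrianObservation:
    """One pedestrian in one frame, as described in step 2."""
    target_id: int   # unique identifier P, shared by the same pedestrian across frames
    t: int           # frame index in the surveillance video
    x: float         # abscissa of the detection frame's lower-left corner
    y: float         # ordinate of the detection frame's lower-left corner
    a: float         # length of the detection frame
    b: float         # width of the detection frame

    def tag_vector(self):
        """The [t, x, y, a, b] pedestrian position tag vector."""
        return [self.t, self.x, self.y, self.a, self.b]

@dataclass
class PedestrianTrack:
    """Trajectory of one target: its tag vectors over successive frames."""
    target_id: int
    positions: list = field(default_factory=list)  # list of [t, x, y, a, b]
```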
step 3: constructing a pedestrian detection model based on an extreme learning machine;
the extreme learning machine is trained with the preprocessed images of the pedestrian image frames in the sample database as input data and the corresponding pedestrian position tag vectors and pedestrian counts as output data, yielding the extreme-learning-machine-based pedestrian detection model (a reference sketch of ELM training follows);
for images containing no pedestrian, both the pedestrian count and the position tag vector are null; for multi-pedestrian samples, the number of position tag vectors equals the number of pedestrians;
the number of input-layer nodes of the extreme learning machine equals the number s of pixels of the input image, the number of hidden-layer wavelet elements is 2s-1, and the number of output-layer nodes is 4; the maximum number of training iterations is set to 2000, the training learning rate to 0.01, and the threshold to 0.00005;
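For reference, the sketch below shows the standard extreme-learning-machine training recipe (random hidden-layer parameters, output weights by pseudo-inverse), which is one way to realize the training described above; the sigmoid activation and function names are assumptions:

```python
import numpy as np

def train_elm(X, Y, n_hidden, seed=0):
    """Standard ELM training: random input weights, analytic output weights.

    X: (n_samples, s) flattened preprocessed images
    Y: (n_samples, n_out) targets (position tag vectors plus pedestrian count)
    """
    rng = np.random.default_rng(seed)
    s = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(s, n_hidden))   # random input weights (never trained)
    b = rng.uniform(-1.0, 1.0, size=n_hidden)        # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                     # output weights via Moore-Penrose pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```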
step 4: constructing a pedestrian tracking model based on a BP neural network;
the BP neural network model is trained with, as input-layer data, the preprocessed pedestrian tracking detection images from each pair of adjacent frames together with the corresponding pedestrian position tag vectors extracted by the extreme-learning-machine-based detection model, and, as output-layer data, the tracking result of each pedestrian from the earlier frame in the later frame, yielding the BP-neural-network-based pedestrian tracking model;
each pedestrian tracking detection image used as input-layer data is a single-pedestrian contour image extracted from one preprocessed frame; if a frame contains 4 pedestrians, it yields 4 pedestrian tracking detection images;
the appearance result of a pedestrian from the earlier frame in the later frame means: if the pedestrian of the earlier frame appears in the later frame, the tracking result is 1, otherwise 0; when the tracking result is 1, the pedestrian's position tag vector in the later frame is appended to the pedestrian's trajectory, whose initial value is the tag vector of the frame in which the pedestrian first appears in the surveillance video;
the tracking model processes two frames at a time and only judges whether each pedestrian of the earlier frame appears in the later frame; if so, the person's tag vector from the second frame is appended to that person's record from the first frame;
when the model is used, the pedestrian tracking detection images of the earlier and later frames are paired one by one as input-layer data for matching (see the sketch after this paragraph); if a pedestrian appearing in the second frame and a pedestrian appearing in the first frame are the same person, the first frame's target identifier is assigned to the corresponding pedestrian in the second frame, and the second frame's position tag vector for that pedestrian is recorded into the target's tracking trajectory; if a pedestrian appearing in the second frame matches no pedestrian of the first frame, a new target identifier is set for that pedestrian;
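A minimal sketch of this pairwise matching loop follows; the `tracker_says_same` callable stands in for the trained BP tracking model's 0/1 output, and the greedy match order and data layout are illustrative assumptions:

```python
def update_tracks(prev_peds, next_peds, tracks, tracker_says_same, next_id):
    """Match every pedestrian of the later frame against the earlier frame.

    prev_peds / next_peds: lists of (contour_image, tag_vector, target_id_or_None)
    tracks: dict mapping target_id -> list of [t, x, y, a, b] tag vectors
    tracker_says_same: callable(prev_entry, next_entry) -> bool (BP model output == 1)
    next_id: first unused target identifier
    """
    for j, nxt in enumerate(next_peds):
        matched = False
        for prv in prev_peds:
            if tracker_says_same(prv, nxt):       # tracking result == 1: same person
                tid = prv[2]                       # inherit the earlier frame's identifier
                next_peds[j] = (nxt[0], nxt[1], tid)
                tracks[tid].append(nxt[1])         # append tag vector to the trajectory
                matched = True
                break
        if not matched:                            # pedestrian newly entering the video
            next_peds[j] = (nxt[0], nxt[1], next_id)
            tracks[next_id] = [nxt[1]]
            next_id += 1
    return next_id
```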
step 5: tracking and identifying the pedestrian trajectory in real time;
two adjacent pedestrian image frames are extracted in turn from the real-time surveillance video and input into the extreme-learning-machine-based pedestrian detection model, which detects the pedestrian position tag vectors and pedestrian counts of the two frames; the pedestrian tracking detection images of the two frames are then input into the BP-neural-network-based pedestrian tracking model, which tracks each pedestrian appearing in the earlier frame, yielding the tracking trajectories of all pedestrians in the surveillance video (a sketch of this loop follows).
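The real-time loop of step 5 can be pictured as below, reusing `update_tracks` from the previous sketch; `detect_pedestrians` and `extract_tracking_images` are hypothetical wrappers around the trained ELM detector and the preprocessing of steps A1-A6:

```python
def run_realtime_tracking(video_frames, detect_pedestrians, extract_tracking_images,
                          tracker_says_same):
    """Slide over adjacent frame pairs: detect with the ELM model, then match."""
    tracks, next_id = {}, 0
    prev = None
    for frame in video_frames:
        dets = detect_pedestrians(frame)              # tag vectors + pedestrian count
        peds = extract_tracking_images(frame, dets)   # one contour image per pedestrian
        if prev is not None:
            next_id = update_tracks(prev, peds, tracks, tracker_says_same, next_id)
        else:                                         # first frame: every pedestrian opens a track
            for j, p in enumerate(peds):
                peds[j] = (p[0], p[1], next_id)
                tracks[next_id] = [p[1]]
                next_id += 1
        prev = peds
    return tracks
```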
Further, the pedestrian sample images and the surveillance images acquired in real time are preprocessed as follows (a sketch of the pipeline follows the list):
step A1: crop the image frames extracted from the intersection surveillance video to a uniform size;
step A2: convert the cropped images to grayscale and adjust image contrast with Gamma correction;
step A3: extract histogram-of-oriented-gradients (HOG) features from the contrast-adjusted images and reduce their dimensionality with PCA;
step A4: select the dimension-reduced HOG features that exceed a set HOG threshold to obtain the corresponding pedestrian regions;
step A5: smooth and denoise the pedestrian regions and extract the maximum connected component as the pedestrian contour region, yielding a person image better suited to neural-network recognition;
step A6: use the maximum width and maximum height of the pedestrian contour region as the width and height of the pedestrian detection frame.
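A minimal sketch of steps A1-A3, assuming OpenCV, scikit-image, and scikit-learn are available; the crop size, gamma value, HOG parameters, and PCA dimension are illustrative choices, not values fixed by the patent:

```python
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def preprocess_frame(frame, size=(128, 256), gamma=1.5):
    """Steps A1-A2: uniform crop/resize, grayscale, Gamma-corrected contrast."""
    img = cv2.resize(frame, size)                    # A1: uniform size
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # A2: grayscale
    lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    return cv2.LUT(gray, lut)                        # A2: Gamma correction via lookup table

def hog_pca_features(images, n_components=64):
    """Step A3: HOG features followed by PCA dimensionality reduction."""
    feats = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2)) for im in images])
    return PCA(n_components=n_components).fit_transform(feats)
```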
Further, a chicken swarm algorithm is used to optimize the weights and thresholds of the extreme learning machine in the extreme-learning-machine-based pedestrian detection model, in the following specific steps (a compact sketch of the optimizer loop follows step B5):
step B1: treat each chicken swarm individual's position as a set of extreme learning machine weights and thresholds, and initialize the parameters;
population size M ∈ [50,200]; search-space dimension j, equal to the total number of extreme learning machine weight and threshold parameters to be optimized; maximum iteration count T ∈ [500,800]; iteration counter t, initialized to 0; rooster proportion Pg = 20%; hen proportion Pm = 70%; chick proportion Px = 10%; mother hens, randomly selected among the hens, with proportion Pd = 10%;
step B2: set the fitness function and let the iteration count t = 1;
the weights and thresholds corresponding to each chicken swarm individual's position are substituted in turn into the extreme-learning-machine-based pedestrian detection model; the detection model so determined detects the pedestrian tag vectors of the input image, and the reciprocal of the absolute value of the summed differences between the detected values and the actual values of all pedestrian tag vectors contained in the input image is taken as the first fitness function f1(x) (a small sketch follows);
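Read literally, the first fitness function can be sketched as follows; `detect_with` is a hypothetical wrapper that runs the ELM detector parameterized by one swarm individual, and pairing detected with actual vectors in order is a simplifying assumption:

```python
def f1(individual, detect_with, images, true_tags):
    """First fitness (step B2): reciprocal of the absolute summed tag-vector error."""
    total = 0.0
    for img, actual_vectors in zip(images, true_tags):
        detected_vectors = detect_with(individual, img)    # detector set to this individual
        for det, act in zip(detected_vectors, actual_vectors):
            total += sum(d - a for d, a in zip(det, act))  # signed element differences
    return 1.0 / (abs(total) + 1e-12)                      # epsilon guards division by zero
```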
step B3: construct the chicken swarm subgroups;
all individuals are sorted by fitness value; the M × Pg individuals with the best fitness are designated roosters, each rooster heading one subgroup; the M × Px individuals with the worst fitness are designated chicks; the remaining individuals are designated hens;
the swarm is divided into as many subgroups as there are roosters, each subgroup comprising one rooster, several chicks, and several hens; each chick randomly selects one hen of its subgroup as its mother, establishing the mother-child relationship;
step B4: update the positions of the chicken swarm individuals and compute each individual's current fitness;
Rooster position update formula:

$$x_{i,j}^{t+1} = x_{i,j}^{t} \times \left(1 + r(0,\sigma^2)\right)$$

where $x_{i,j}^{t}$ is the position of rooster $i$ in dimension $j$ of the search space at the $t$-th iteration, $x_{i,j}^{t+1}$ is the rooster's new position at iteration $t+1$, and $r(0,\sigma^2)$ is a random number drawn from the normal distribution $N(0,\sigma^2)$ with mean 0 and variance $\sigma^2$;
Hen position update formula:

$$x_{g,j}^{t+1} = x_{g,j}^{t} + L_1 \cdot \mathrm{rand}(0,1) \cdot \left(x_{i_1,j}^{t} - x_{g,j}^{t}\right) + L_2 \cdot \mathrm{rand}(0,1) \cdot \left(x_{i_2,j}^{t} - x_{g,j}^{t}\right)$$

where $x_{g,j}^{t}$ is the position of hen $g$ in dimension $j$ at the $t$-th iteration, $x_{i_1,j}^{t}$ is the position of the unique rooster $i_1$ of hen $g$'s own subgroup, $x_{i_2,j}^{t}$ is the position of a rooster $i_2$ randomly chosen outside the hen's subgroup, rand(0,1) is a random function taking values uniformly in (0,1), and $L_1$, $L_2$ are position-update coefficients expressing the influence of the hen's own subgroup and of other subgroups, with $L_1 \in [0.3,0.6]$ and $L_2 \in [0.2,0.4]$;
Chick position update formula:

$$x_{l,j}^{t+1} = \omega \cdot x_{l,j}^{t} + \alpha \cdot \left(x_{g_m,j}^{t} - x_{l,j}^{t}\right) + \beta \cdot \left(x_{i,j}^{t} - x_{l,j}^{t}\right)$$

where $x_{l,j}^{t}$ is the position of chick $l$ in dimension $j$ at the $t$-th iteration, $x_{g_m,j}^{t}$ is the position of the mother hen $g_m$ bound to chick $l$ by the mother-child relationship, $x_{i,j}^{t}$ is the position of the unique rooster of the chick's subgroup, and $\omega$, $\alpha$, $\beta$ are, respectively, the chick's self-update coefficient in [0.2,0.7], its follow-mother coefficient in [0.5,0.8], and its follow-rooster coefficient in [0.8,1.5];
step B5: update each individual's best position and the global best position according to the fitness function; check whether the maximum iteration count has been reached: if not, let t = t + 1 and return to step B3; once the maximum iteration count is met, output the extreme learning machine weights and thresholds corresponding to the best individual's position, yielding the extreme-learning-machine-based pedestrian detection model.
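A compact sketch of the optimizer loop of steps B1-B5 under the stated proportions; the σ of the rooster update, the concrete coefficient values, and the subgroup bookkeeping are simplified assumptions:

```python
import numpy as np

def chicken_swarm_optimize(fitness, dim, M=100, T=600, Pg=0.2, Px=0.1, seed=0):
    """Maximize `fitness` over R^dim with rooster/hen/chick updates (steps B1-B5)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(M, dim))      # B1: individual = ELM weights + thresholds
    L1, L2 = 0.45, 0.3                             # hen coefficients within [0.3,0.6], [0.2,0.4]
    w, alpha, beta = 0.5, 0.65, 1.0                # chick coefficients (self / mother / rooster)
    best_x, best_f = None, -np.inf
    for t in range(T):
        f = np.array([fitness(x) for x in X])
        if f.max() > best_f:                       # B5: track the global best position
            best_f, best_x = f.max(), X[np.argmax(f)].copy()
        order = np.argsort(-f)                     # B3: rank by fitness, best first
        roosters = order[: int(M * Pg)]
        chicks = order[M - int(M * Px):]
        hens = order[int(M * Pg): M - int(M * Px)]
        group = rng.choice(roosters, size=M)       # subgroup head of every individual
        mother = rng.choice(hens, size=M)          # random mother-child relationship
        for i in roosters:                         # B4: rooster self-exploration
            X[i] = X[i] * (1.0 + rng.normal(0.0, 0.1, dim))
        for g in hens:                             # B4: hen follows own and foreign roosters
            r2 = rng.choice(roosters)              # ideally a rooster of another subgroup
            X[g] += (L1 * rng.random(dim) * (X[group[g]] - X[g])
                     + L2 * rng.random(dim) * (X[r2] - X[g]))
        for l in chicks:                           # B4: chick follows mother and rooster
            X[l] = (w * X[l] + alpha * (X[mother[l]] - X[l])
                    + beta * (X[group[l]] - X[l]))
    return best_x, best_f
```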
Further, an ant lion algorithm is used to optimize the weights and thresholds of the BP neural network in the BP-neural-network-based pedestrian tracking model, in the following specific steps (a compact sketch of the optimizer loop follows step C6):
step C1: treat each individual position in the ant lion group and ant group as a set of weights and thresholds of the BP neural network in the BP-neural-network-based pedestrian tracking model, and initialize the population parameters;
the number of ant lions equals the number of ants, N, with value range [40,100]; the maximum iteration count T ∈ [600,2000]; lb denotes the lower boundary value and ub the upper boundary value of the parameter variables to be optimized; all weight variables are bounded within [0.01,0.6], and all threshold variables within [0.0001,0.001];
step C2: initialize the positions of all ant lions and ants in the two groups;
the initial positions of the ants and ant lions are randomly initialized within the search space by the formula

$$x_i^{1} = lb + \mathrm{rand}(0,1) \times (ub - lb)$$

where $x_i^{1}$ is the position of the $i$-th individual at iteration 1, and rand(0,1) is a random function taking values uniformly in (0,1);
step C3: set the fitness function, compute each individual's fitness, select the elite ant lion accordingly, and let the iteration count t = 1;
the weights and thresholds corresponding to each individual position in the ant lion group and ant group are substituted into the BP-neural-network-based pedestrian tracking model, and the reciprocal of the absolute value of the difference between the tracking result produced by the model so determined and the actual tracking result is taken as the second fitness function f2(x);
ant lions or ants with larger fitness values are superior;
step C4: select the individual with the largest second fitness value from the combined ant and ant lion groups as the elite ant lion; then sort all individuals by fitness in descending order, taking the first N-1 individuals as ant lions and the last N as ants;
step C5: update the positions of the ants and ant lions and compute each individual's second fitness value;
(1) let each ant walk randomly, then normalize the walked position using the boundaries and the position of the ant lion selected by roulette:

$$X_i^{t} = \frac{\left(X_i^{t} - a_i\right)\left(d_i^{t} - c_i^{t}\right)}{b_i - a_i} + c_i^{t}$$

where $a_i$ and $b_i$ are the minimum and maximum of ant $i$'s boundary over the whole walk, and $c_i^{t}$ and $d_i^{t}$ are the minimum and maximum of the boundary at the $t$-th iteration, whose values are shifted by the ant lion position:

$$c_i^{t} = x_s^{t} + lb^{t}, \qquad d_i^{t} = x_s^{t} + ub^{t}$$

where $x_s^{t}$ is the position of the ant lion $s$ that ant $i$ randomly selects (by roulette) from the ant lion group at the $t$-th iteration, and $ub^{t}$ and $lb^{t}$ are the upper and lower boundaries at the $t$-th iteration;
(2) the ant lions prey on the ants and the ant lion positions are updated;
if the fitness of a walked ant's position exceeds that of the roulette-selected ant lion's position, that ant lion preys on the ant, and the walked ant's position replaces the corresponding ant lion's position;
(3) update the positions of the preyed ants using the updated ant lion positions and the elite ant lion position:

$$x_n^{t} = \frac{R_s^{t} + R_e^{t}}{2}$$

where $x_n^{t}$ is the position of the preyed ant $n$ after the $t$-th iteration, and $R_s^{t}$ and $R_e^{t}$ are the random walks around the ant lion $s$ selected at the $t$-th iteration and around the elite ant lion at the $t$-th iteration, respectively;
(4) update the boundary range of the ants' random walk:

$$ub^{t} = \frac{ub}{10^{\omega}\, t/T}, \qquad lb^{t} = \frac{lb}{10^{\omega}\, t/T}$$

where $ub^{t}$ and $lb^{t}$ are the upper and lower boundaries at the $t$-th iteration and $\omega$ is a coefficient tied to the current iteration count, increasing stepwise as $t$ approaches $T$ so that the walk boundary shrinks and the search is progressively concentrated;
(5) compute the second fitness value of all individuals;
step C6: check whether the maximum iteration count has been met; if not, let t = t + 1 and return to step C4; once the maximum iteration count is met, determine the weights and thresholds of the BP-neural-network-based pedestrian tracking model from the position of the elite ant lion individual at which the second fitness function is largest.
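A compact sketch of the ant lion loop of steps C1-C6; the roulette selection assumes positive fitness values (which holds for a reciprocal-of-error fitness), and the boundary-shrinking factor is simplified relative to the stepwise ω schedule above:

```python
import numpy as np

def random_walk(T, rng):
    """Cumulative-sum random walk X(t): steps are 2*r(t)-1 with r(t) in {0, 1}."""
    steps = np.where(rng.random(T) > 0.5, 1.0, -1.0)
    return np.concatenate(([0.0], np.cumsum(steps)))

def walk_around(center, dim, T, t, lbt, ubt, rng):
    """One boundary-normalized random-walk position per dimension around `center`."""
    pos = np.empty(dim)
    for j in range(dim):
        X = random_walk(T, rng)
        a, b = X.min(), X.max()                      # extremes over the whole walk
        c, d = center[j] + lbt, center[j] + ubt      # boundary shifted by the ant lion
        pos[j] = (X[t] - a) * (d - c) / (b - a + 1e-12) + c
    return pos

def ant_lion_optimize(fitness, dim, N=40, T=600, lb=-1.0, ub=1.0, seed=0):
    """Maximize a positive `fitness`; individuals encode BP weights and thresholds."""
    rng = np.random.default_rng(seed)
    antlions = lb + rng.random((N, dim)) * (ub - lb)     # C2: random initialization
    f_al = np.array([fitness(x) for x in antlions])
    elite = antlions[np.argmax(f_al)].copy()             # C3/C4: elite ant lion
    elite_f = f_al.max()
    for t in range(1, T + 1):
        shrink = 1.0 + 10.0 * t / T                      # simplified shrinking-boundary factor
        lbt, ubt = lb / shrink, ub / shrink
        for n in range(N):
            s = rng.choice(N, p=f_al / f_al.sum())       # roulette selection of ant lion s
            R_s = walk_around(antlions[s], dim, T, t, lbt, ubt, rng)
            R_e = walk_around(elite, dim, T, t, lbt, ubt, rng)
            ant = (R_s + R_e) / 2.0                      # C5(3): average of the two walks
            f_ant = fitness(ant)
            if f_ant > f_al[s]:                          # C5(2): ant lion preys on the fitter ant
                antlions[s], f_al[s] = ant, f_ant
        if f_al.max() > elite_f:                         # C6: keep the best solution found
            elite_f = f_al.max()
            elite = antlions[np.argmax(f_al)].copy()
    return elite, elite_f
```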
Advantageous effects
The invention provides a pedestrian posture multi-feature intelligent identification method comprising: step 1: constructing a pedestrian sample image database; step 2: preprocessing the pedestrian image frames in the database and assigning each preprocessed image a pedestrian detection frame, a pedestrian target identifier, and a pedestrian position tag vector; step 3: constructing a pedestrian detection model based on an extreme learning machine; step 4: constructing a pedestrian tracking model based on a BP neural network; step 5: tracking and identifying pedestrian trajectories in real time.
Compared with the prior art, the method has the following advantages:
1. Accurate detection: the neural-network-based pedestrian detection realizes fast and effective detection and labeling of pedestrians, meets the real-time recognition requirements of emergencies in actual traffic environments, and also suits complex settings such as intelligent factories, laboratories, and robotic transport, furthering the intelligence of modern traffic and industry.
2. High re-identification rate: the neural network automatically extracts high-level abstract features of the tracked target, enabling efficient and fast matching and re-identification.
3. Optimizing the neural-network parameters with swarm optimization algorithms improves both the networks' running efficiency and their pedestrian-recognition accuracy, allowing them to cope with high-volume pedestrian traffic with good robustness.
4. The method creatively extracts pedestrians from images by image processing and tracks them through frame-by-frame comparison, ensuring stable tracking that does not easily lose the target.
5. Image data is processed remotely, so no street-side equipment needs to be added or upgraded, saving equipment cost.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and examples.
As shown in fig. 1, a pedestrian posture multi-feature intelligent identification method includes the following steps:
step 1: constructing a pedestrian sample image database;
the pedestrian sample image database is built by extracting consecutive pedestrian image frames from intersection surveillance video, yielding three types of image groups;
the three types of image groups are negative samples containing no pedestrian, multi-pedestrian samples containing several pedestrians, and single-pedestrian samples containing only the same pedestrian; each type of image group comprises at least 300 frames;
step 2: preprocessing a pedestrian image frame in a pedestrian sample database, and setting a pedestrian detection frame, a pedestrian target identifier and a pedestrian position tag vector for the preprocessed image;
the pedestrian detection frame is a minimum circumscribed rectangle of a pedestrian outline in a pedestrian image frame;
the pedestrian target identification is a unique identification P of different pedestrians appearing in all the pedestrian image frames;
the pedestrian position tag vector takes the form [t, x, y, a, b], where t indicates that the current pedestrian image frame is the t-th frame of the surveillance video, x and y are the abscissa and ordinate of the lower-left corner of the pedestrian detection frame in the image, and a and b are the length and width of the detection frame;
in different frame images of the monitoring video, the target identifications of the same pedestrian are the same;
the pedestrian sample images and the surveillance images acquired in real time are preprocessed as follows:
step A1: crop the image frames extracted from the intersection surveillance video to a uniform size;
step A2: convert the cropped images to grayscale and adjust image contrast with Gamma correction;
step A3: extract histogram-of-oriented-gradients (HOG) features from the contrast-adjusted images and reduce their dimensionality with PCA;
step A4: select the dimension-reduced HOG features that exceed a set HOG threshold to obtain the corresponding pedestrian regions;
step A5: smooth and denoise the pedestrian regions and extract the maximum connected component as the pedestrian contour region, yielding a person image better suited to neural-network recognition;
step A6: use the maximum width and maximum height of the pedestrian contour region as the width and height of the pedestrian detection frame.
Step 3: constructing a pedestrian detection model based on an extreme learning machine;
the extreme learning machine is trained with the preprocessed images of the pedestrian image frames in the sample database as input data and the corresponding pedestrian position tag vectors and pedestrian counts as output data, yielding the extreme-learning-machine-based pedestrian detection model;
for images containing no pedestrian, both the pedestrian count and the position tag vector are null; for multi-pedestrian samples, the number of position tag vectors equals the number of pedestrians;
the number of input-layer nodes of the extreme learning machine equals the number s of pixels of the input image, the number of hidden-layer wavelet elements is 2s-1, and the number of output-layer nodes is 4; the maximum number of training iterations is set to 2000, the training learning rate to 0.01, and the threshold to 0.00005;
the chicken flock algorithm basic principle is as follows:
1) several subgroups were present throughout the chicken flock, each subgroup consisting of one cock, several hens and some chicks.
2) How the chicken flock is divided into several subgroups and how the kind of chicken is determined depends on the fitness value of the chicken itself. In the chicken group, a plurality of individuals with the best fitness value are used as cocks, and each cock is the head of a subgroup; several individuals with the worst fitness value were treated as chicks; the remaining individuals served as hens. The hen can randomly select which subgroup the hen belongs to, and the maternal-child relationship between the hen and the chick is also randomly established.
3) The individuals in each subgroup all look for food around the cocks in the subgroup, and other individuals can also be prevented from snatching their own food; and assuming that the chicks can randomly steal food that other individuals have found, each chick follows their mother looking for food. The dominant individuals in the chicken flock have a good competitive advantage in that they find food before the other individuals.
The hen generation hen is a hen in a subgroup randomly selected by the chicks and is used as a hen generation hen for follow-up learning;
the weights and thresholds of the extreme learning machine in the extreme-learning-machine-based pedestrian detection model are optimized with the chicken swarm algorithm, in the following specific steps:
step B1: treat each chicken swarm individual's position as a set of extreme learning machine weights and thresholds, and initialize the parameters;
population size M ∈ [50,200]; search-space dimension j, equal to the total number of extreme learning machine weight and threshold parameters to be optimized; maximum iteration count T ∈ [500,800]; iteration counter t, initialized to 0; rooster proportion Pg = 20%; hen proportion Pm = 70%; chick proportion Px = 10%; mother hens, randomly selected among the hens, with proportion Pd = 10%;
step B2: set the fitness function and let the iteration count t = 1;
the weights and thresholds corresponding to each chicken swarm individual's position are substituted in turn into the extreme-learning-machine-based pedestrian detection model; the detection model so determined detects the pedestrian tag vectors of the input image, and the reciprocal of the absolute value of the summed differences between the detected values and the actual values of all pedestrian tag vectors contained in the input image is taken as the first fitness function f1(x);
step B3: construct the chicken swarm subgroups;
all individuals are sorted by fitness value; the M × Pg individuals with the best fitness are designated roosters, each rooster heading one subgroup; the M × Px individuals with the worst fitness are designated chicks; the remaining individuals are designated hens;
the swarm is divided into as many subgroups as there are roosters, each subgroup comprising one rooster, several chicks, and several hens; each chick randomly selects one hen of its subgroup as its mother, establishing the mother-child relationship;
step B4: update the positions of the chicken swarm individuals and compute each individual's current fitness;
Rooster position update formula:

$$x_{i,j}^{t+1} = x_{i,j}^{t} \times \left(1 + r(0,\sigma^2)\right)$$

where $x_{i,j}^{t}$ is the position of rooster $i$ in dimension $j$ of the search space at the $t$-th iteration, $x_{i,j}^{t+1}$ is the rooster's new position at iteration $t+1$, and $r(0,\sigma^2)$ is a random number drawn from the normal distribution $N(0,\sigma^2)$ with mean 0 and variance $\sigma^2$;
Hen position update formula:

$$x_{g,j}^{t+1} = x_{g,j}^{t} + L_1 \cdot \mathrm{rand}(0,1) \cdot \left(x_{i_1,j}^{t} - x_{g,j}^{t}\right) + L_2 \cdot \mathrm{rand}(0,1) \cdot \left(x_{i_2,j}^{t} - x_{g,j}^{t}\right)$$

where $x_{g,j}^{t}$ is the position of hen $g$ in dimension $j$ at the $t$-th iteration, $x_{i_1,j}^{t}$ is the position of the unique rooster $i_1$ of hen $g$'s own subgroup, $x_{i_2,j}^{t}$ is the position of a rooster $i_2$ randomly chosen outside the hen's subgroup, rand(0,1) is a random function taking values uniformly in (0,1), and $L_1$, $L_2$ are position-update coefficients expressing the influence of the hen's own subgroup and of other subgroups, with $L_1 \in [0.3,0.6]$ and $L_2 \in [0.2,0.4]$;
Chick position update formula:

$$x_{l,j}^{t+1} = \omega \cdot x_{l,j}^{t} + \alpha \cdot \left(x_{g_m,j}^{t} - x_{l,j}^{t}\right) + \beta \cdot \left(x_{i,j}^{t} - x_{l,j}^{t}\right)$$

where $x_{l,j}^{t}$ is the position of chick $l$ in dimension $j$ at the $t$-th iteration, $x_{g_m,j}^{t}$ is the position of the mother hen $g_m$ bound to chick $l$ by the mother-child relationship, $x_{i,j}^{t}$ is the position of the unique rooster of the chick's subgroup, and $\omega$, $\alpha$, $\beta$ are, respectively, the chick's self-update coefficient in [0.2,0.7], its follow-mother coefficient in [0.5,0.8], and its follow-rooster coefficient in [0.8,1.5];
step B5: update each individual's best position and the global best position according to the fitness function; check whether the maximum iteration count has been reached: if not, let t = t + 1 and return to step B3; once the maximum iteration count is met, output the extreme learning machine weights and thresholds corresponding to the best individual's position, yielding the extreme-learning-machine-based pedestrian detection model.
Step 4: constructing a pedestrian tracking model based on a BP neural network;
the BP neural network model is trained with, as input-layer data, the preprocessed pedestrian tracking detection images from each pair of adjacent frames together with the corresponding pedestrian position tag vectors extracted by the extreme-learning-machine-based detection model, and, as output-layer data, the tracking result of each pedestrian from the earlier frame in the later frame, yielding the BP-neural-network-based pedestrian tracking model;
the ant lion algorithm rationale is as follows:
the ant lion optimization algorithm is a new colony intelligent optimization algorithm inspired by a hunting mechanism of ant lion hunting ants in nature. In nature, the ant lion moves along a circular track in sand, a conical pit for trapping ants is dug by using the lower jaw, and when the ants moving randomly sink into the pit, the ant lion catches the ants and restores the pit to wait for the next prey (ants).
The ant lion algorithm simulates the interaction of ant lion and ant to realize the optimization of the problem: ants realize the exploration of a search space by random walk around the ant lions, and learn the ant lions and elite to ensure the diversity of populations and the optimization performance of an algorithm; the ant lion is equivalent to the solution of the problem, and the updating and the storage of the approximate optimal solution are realized by hunting ants with high adaptability.
The weights and thresholds of the BP neural network in the BP-neural-network-based pedestrian tracking model are optimized with the ant lion algorithm, in the following specific steps:
step C1: treat each individual position in the ant lion group and ant group as a set of weights and thresholds of the BP neural network in the BP-neural-network-based pedestrian tracking model, and initialize the population parameters;
the number of ant lions equals the number of ants, N, with value range [40,100]; the maximum iteration count T ∈ [600,2000]; lb denotes the lower boundary value and ub the upper boundary value of the parameter variables to be optimized; all weight variables are bounded within [0.01,0.6], and all threshold variables within [0.0001,0.001];
step C2: initialize the positions of all ant lions and ants in the two groups;
the initial positions of the ants and ant lions are randomly initialized within the search space by the formula

$$x_i^{1} = lb + \mathrm{rand}(0,1) \times (ub - lb)$$

where $x_i^{1}$ is the position of the $i$-th individual at iteration 1, and rand(0,1) is a random function taking values uniformly in (0,1);
step C3: set the fitness function, compute each individual's fitness, select the elite ant lion accordingly, and let the iteration count t = 1;
the weights and thresholds corresponding to each individual position in the ant lion group and ant group are substituted into the BP-neural-network-based pedestrian tracking model, and the reciprocal of the absolute value of the difference between the tracking result produced by the model so determined and the actual tracking result is taken as the second fitness function f2(x);
ant lions or ants with larger fitness values are superior;
step C4: select the individual with the largest second fitness value from the combined ant and ant lion groups as the elite ant lion; then sort all individuals by fitness in descending order, taking the first N-1 individuals as ant lions and the last N as ants;
step C5: update the positions of the ants and ant lions and compute each individual's second fitness value;
(1) let each ant walk randomly, then normalize the walked position using the boundaries and the position of the ant lion selected by roulette;
the ant's random walk is

$$X(t) = \left[\,0,\ \mathrm{cumsum}\left(2r(t_1)-1\right),\ \mathrm{cumsum}\left(2r(t_2)-1\right),\ \ldots,\ \mathrm{cumsum}\left(2r(T)-1\right)\,\right]$$

where cumsum computes the cumulative sum, $T$ is the maximum iteration count, $t$ is the current iteration, and $r(t)$ is the random step function

$$r(t) = \begin{cases} 1, & \mathrm{rand}(0,1) > 0.5 \\ 0, & \mathrm{rand}(0,1) \le 0.5 \end{cases}$$

to keep the ants' random walk from crossing the boundary, it is normalized against the boundary:

$$X_i^{t} = \frac{\left(X_i^{t} - a_i\right)\left(d_i^{t} - c_i^{t}\right)}{b_i - a_i} + c_i^{t}$$

where $a_i$ and $b_i$ are the minimum and maximum of ant $i$'s boundary over the whole walk, and $c_i^{t}$ and $d_i^{t}$ are the minimum and maximum of the boundary at the $t$-th iteration, whose values are shifted by the ant lion position:

$$c_i^{t} = x_s^{t} + lb^{t}, \qquad d_i^{t} = x_s^{t} + ub^{t}$$

where $x_s^{t}$ is the position of the ant lion $s$ randomly selected (by roulette) by ant $i$ from the ant lion group at the $t$-th iteration, and $ub^{t}$ and $lb^{t}$ are the upper and lower boundaries at the $t$-th iteration;
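Reusing the `random_walk` helper from the sketch earlier in this description, the boundary normalization above can be exercised in isolation; every numeric value below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 600                                  # maximum iteration count
X = random_walk(T, rng)                  # X(t): cumulative sum of 2*r(t)-1 steps
a, b = X.min(), X.max()                  # boundary extremes over the whole walk

antlion_j = 0.25                         # ant lion position in one dimension (illustrative)
lbt, ubt = -0.1, 0.1                     # shrunken boundaries at iteration t
c, d = antlion_j + lbt, antlion_j + ubt  # boundary shifted around the ant lion

t = 300                                  # current iteration
x_norm = (X[t] - a) * (d - c) / (b - a) + c
assert c <= x_norm <= d                  # the walked position now lies near the ant lion
```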
(2) the ant lions prey on the ants and the ant lion positions are updated;
if the fitness of a walked ant's position exceeds that of the roulette-selected ant lion's position, that ant lion preys on the ant, and the walked ant's position replaces the corresponding ant lion's position;
(3) update the positions of the preyed ants using the updated ant lion positions and the elite ant lion position:

$$x_n^{t} = \frac{R_s^{t} + R_e^{t}}{2}$$

where $x_n^{t}$ is the position of the preyed ant $n$ after the $t$-th iteration, and $R_s^{t}$ and $R_e^{t}$ are the random walks around the ant lion $s$ selected at the $t$-th iteration and around the elite ant lion at the $t$-th iteration, respectively;
(4) update the boundary range of the ants' random walk:

$$ub^{t} = \frac{ub}{10^{\omega}\, t/T}, \qquad lb^{t} = \frac{lb}{10^{\omega}\, t/T}$$

where $ub^{t}$ and $lb^{t}$ are the upper and lower boundaries at the $t$-th iteration and $\omega$ is a coefficient tied to the current iteration count, increasing stepwise as $t$ approaches $T$ so that the walk boundary shrinks and the search is progressively concentrated;
(5) compute the second fitness value of all individuals;
step C6: check whether the maximum iteration count has been met; if not, let t = t + 1 and return to step C4; once the maximum iteration count is met, determine the weights and thresholds of the BP-neural-network-based pedestrian tracking model from the position of the elite ant lion individual at which the second fitness function is largest.
Each pedestrian tracking detection image used as input-layer data is a single-pedestrian contour image extracted from one preprocessed frame; if a frame contains 4 pedestrians, it yields 4 pedestrian tracking detection images;
the appearance result of a pedestrian from the earlier frame in the later frame means: if the pedestrian of the earlier frame appears in the later frame, the tracking result is 1, otherwise 0; when the tracking result is 1, the pedestrian's position tag vector in the later frame is appended to the pedestrian's trajectory, whose initial value is the tag vector of the frame in which the pedestrian first appears in the surveillance video;
the tracking model processes two frames at a time and only judges whether each pedestrian of the earlier frame appears in the later frame; if so, the person's tag vector from the second frame is appended to that person's record from the first frame;
when the model is used, the pedestrian tracking detection images of the earlier and later frames are paired one by one as input-layer data for matching; if a pedestrian appearing in the second frame and a pedestrian appearing in the first frame are the same person, the first frame's target identifier is assigned to the corresponding pedestrian in the second frame, and the second frame's position tag vector for that pedestrian is recorded into the target's tracking trajectory; if a pedestrian appearing in the second frame matches no pedestrian of the first frame, a new target identifier is set for that pedestrian;
and 5: tracking and identifying the pedestrian track in real time;
sequentially extracting two adjacent frames of pedestrian images from a real-time monitoring video, inputting the images into the pedestrian detection model based on the extreme learning machine, detecting pedestrian position label vectors and the number of pedestrians in the two frames of images, then inputting the pedestrian tracking detection images in the two frames of images into the pedestrian tracking model based on the BP neural network, and tracking the pedestrian tracks of the pedestrians appearing in the pedestrian image of the previous frame to obtain the tracking tracks of all the pedestrians in the monitoring video.
The specific embodiments described herein merely illustrate the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute alternatives, without departing from the spirit of the invention or the scope defined by the appended claims.

Claims (4)

1. A pedestrian posture multi-feature intelligent identification method is characterized by comprising the following steps:
step 1: constructing a pedestrian sample image database;
the pedestrian sample image database is built by extracting consecutive pedestrian image frames from intersection surveillance video, yielding three types of image groups;
the three types of image groups are negative samples containing no pedestrian, multi-pedestrian samples containing several pedestrians, and single-pedestrian samples containing only the same pedestrian, and each type of image group comprises at least 300 frames;
step 2: preprocessing a pedestrian image frame in a pedestrian sample database, and setting a pedestrian detection frame, a pedestrian target identifier and a pedestrian position tag vector for the preprocessed image;
the pedestrian detection frame is a minimum circumscribed rectangle of a pedestrian outline in a pedestrian image frame;
the pedestrian target identification is a unique identification P of different pedestrians appearing in all the pedestrian image frames;
the pedestrian position tag vector takes the form [t, x, y, a, b], where t indicates that the current pedestrian image frame is the t-th frame of the surveillance video, x and y are the abscissa and ordinate of the lower-left corner of the pedestrian detection frame in the image, and a and b are the length and width of the detection frame;
step 3: constructing a pedestrian detection model based on an extreme learning machine;
the extreme learning machine is trained with the preprocessed images of the pedestrian image frames in the sample database as input data and the corresponding pedestrian position tag vectors and pedestrian counts as output data, yielding the extreme-learning-machine-based pedestrian detection model;
the number of input-layer nodes of the extreme learning machine equals the number s of pixels of the input image, the number of hidden-layer wavelet elements is 2s-1, and the number of output-layer nodes is 5Pe, where Pe is the maximum number of pedestrians in an input image; the maximum number of training iterations is set to 2000, the training learning rate to 0.01, and the threshold to 0.00005;
step 4: constructing a pedestrian tracking model based on a BP neural network;
the BP neural network model is trained with, as input-layer data, the preprocessed pedestrian tracking detection images from each pair of adjacent frames together with the corresponding pedestrian position tag vectors extracted by the extreme-learning-machine-based pedestrian detection model, and, as output-layer data, the tracking result of each pedestrian from the earlier frame in the later frame, yielding the BP-neural-network-based pedestrian tracking model;
the appearance result of a pedestrian from the earlier frame in the later frame means: if the pedestrian of the earlier frame appears in the later frame, the tracking result is 1, otherwise 0; when the tracking result is 1, the pedestrian's position tag vector in the later frame is appended to the pedestrian's trajectory, whose initial value is the tag vector of the frame in which the pedestrian first appears in the surveillance video;
step 5: tracking and identifying the pedestrian trajectory in real time;
two adjacent pedestrian image frames are extracted in turn from the real-time surveillance video and input into the extreme-learning-machine-based pedestrian detection model, which detects the pedestrian position tag vectors and pedestrian counts of the two frames; the pedestrian tracking detection images of the two frames are then input into the BP-neural-network-based pedestrian tracking model, which tracks each pedestrian appearing in the earlier frame, yielding the tracking trajectories of all pedestrians in the surveillance video.
2. The method according to claim 1, characterized in that the following pre-processing is performed on the pedestrian sample image and the monitoring image acquired in real time:
step A1: crop the image frames extracted from the intersection surveillance video to a uniform size;
step A2: convert the cropped images to grayscale and adjust image contrast with Gamma correction;
step A3: extract histogram-of-oriented-gradients (HOG) features from the contrast-adjusted images and reduce their dimensionality with PCA;
step A4: select the dimension-reduced HOG features that exceed a set HOG threshold to obtain the corresponding pedestrian regions;
step A5: smooth and denoise the pedestrian regions and extract the maximum connected component as the pedestrian contour region;
step A6: use the maximum width and maximum height of the pedestrian contour region as the width and height of the pedestrian detection frame.
3. The method according to claim 1, characterized in that a chicken swarm algorithm is used to optimize the weights and thresholds of the extreme learning machine in the extreme-learning-machine-based pedestrian detection model, in the following specific steps:
step B1: treat each chicken swarm individual's position as a set of extreme learning machine weights and thresholds, and initialize the parameters;
population size M ∈ [50,200]; search-space dimension j, equal to the total number of extreme learning machine weight and threshold parameters to be optimized; maximum iteration count T ∈ [500,800]; iteration counter t, initialized to 0; rooster proportion Pg = 20%; hen proportion Pm = 70%; chick proportion Px = 10%; mother hens, randomly selected among the hens, with proportion Pd = 10%;
step B2: set the fitness function and let the iteration count t = 1;
the weights and thresholds corresponding to each chicken swarm individual's position are substituted in turn into the extreme-learning-machine-based pedestrian detection model; the detection model so determined detects the pedestrian tag vectors of the input image, and the reciprocal of the absolute value of the summed differences between the detected values and the actual values of all pedestrian tag vectors contained in the input image is taken as the first fitness function f1(x);
step B3: construct the chicken swarm subgroups;
all individuals are sorted by fitness value; the M × Pg individuals with the best fitness are designated roosters, each rooster heading one subgroup; the M × Px individuals with the worst fitness are designated chicks; the remaining individuals are designated hens;
the swarm is divided into as many subgroups as there are roosters, each subgroup comprising one rooster, several chicks, and several hens; each chick randomly selects one hen of its subgroup as its mother, establishing the mother-child relationship;
step B4: update the positions of the chicken swarm individuals and compute each individual's current fitness;
Rooster position update formula:

$$x_{i,j}^{t+1} = x_{i,j}^{t} \times \left(1 + r(0,\sigma^2)\right)$$

where $x_{i,j}^{t}$ is the position of rooster $i$ in dimension $j$ of the search space at the $t$-th iteration, $x_{i,j}^{t+1}$ is the rooster's new position at iteration $t+1$, and $r(0,\sigma^2)$ is a random number drawn from the normal distribution $N(0,\sigma^2)$ with mean 0 and variance $\sigma^2$;
Hen position update formula:

$$x_{g,j}^{t+1} = x_{g,j}^{t} + L_1 \cdot \mathrm{rand}(0,1) \cdot \left(x_{i_1,j}^{t} - x_{g,j}^{t}\right) + L_2 \cdot \mathrm{rand}(0,1) \cdot \left(x_{i_2,j}^{t} - x_{g,j}^{t}\right)$$

where $x_{g,j}^{t}$ is the position of hen $g$ in dimension $j$ at the $t$-th iteration, $x_{i_1,j}^{t}$ is the position of the unique rooster $i_1$ of hen $g$'s own subgroup, $x_{i_2,j}^{t}$ is the position of a rooster $i_2$ randomly chosen outside the hen's subgroup, rand(0,1) is a random function taking values uniformly in (0,1), and $L_1$, $L_2$ are position-update coefficients expressing the influence of the hen's own subgroup and of other subgroups, with $L_1 \in [0.3,0.6]$ and $L_2 \in [0.2,0.4]$;
chick position update formula:

$$x_{l,j}^{t+1} = \omega\,x_{l,j}^{t} + \alpha\,\bigl(x_{g_{m},j}^{t} - x_{l,j}^{t}\bigr) + \beta\,\bigl(x_{i,j}^{t} - x_{l,j}^{t}\bigr)$$

wherein x_{l,j}^{t} is the position of chick l in the j-dimensional space at the t-th iteration, x_{g_m,j}^{t} is the position at the t-th iteration of the mother hen g_m paired with chick l through the mother-offspring relationship, and x_{i,j}^{t} is the position at the t-th iteration of the unique cock in the subgroup of chick l; ω, α, and β are respectively the chick self-update coefficient, valued in [0.2,0.7], the mother-following coefficient, valued in [0.5,0.8], and the cock-following coefficient, valued in [0.8,1.5];
step B5: updating the individual optimal positions and the global optimal position of the swarm according to the fitness function, and judging whether the maximum iteration number has been reached; if not, letting t = t + 1 and returning to step B3; once the maximum iteration number is met, outputting the extreme learning machine weight and threshold corresponding to the optimal individual position, thereby obtaining the extreme-learning-machine-based pedestrian detection model (a code sketch of steps B1-B5 follows this claim).
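For illustration only (not part of the claims): a compact Python sketch of the chicken swarm loop in steps B1-B5, using the position update formulas given above. The fitness argument is a stub for the first fitness function f1(x), i.e. the reciprocal of the extreme learning machine's absolute labelling error; the coefficient defaults, the uniform initialization range, the random subgroup assignment, and drawing mother hens from all hens rather than a Pd-sized subset are illustrative assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def cso(fitness, dim, m=50, t_max=500, pg=0.2, px=0.1, sigma=1.0,
        l1=0.45, l2=0.3, omega=0.45, alpha=0.65, beta=1.15):
    # Step B1: each individual position encodes one set of ELM weights/thresholds.
    pos = rng.uniform(-1.0, 1.0, (m, dim))
    n_cock, n_chick = int(m * pg), int(m * px)
    best, best_fit = pos[0].copy(), -np.inf
    for t in range(t_max):
        # Steps B2/B5: evaluate fitness and track the global best position.
        fit = np.array([fitness(p) for p in pos])
        if fit.max() > best_fit:
            best, best_fit = pos[fit.argmax()].copy(), fit.max()
        # Step B3: rank by fitness; top m*Pg are cocks, bottom m*Px are chicks.
        order = np.argsort(-fit)
        cocks, chicks = order[:n_cock], order[m - n_chick:]
        hens = order[n_cock:m - n_chick]
        subgroup = rng.integers(0, n_cock, m)    # subgroup id for every bird
        mother = rng.choice(hens, m)             # chick -> random mother hen
        # Step B4: position updates.
        new = pos.copy()
        for i in cocks:                          # cock: Gaussian self-perturbation
            new[i] = pos[i] * (1.0 + rng.normal(0.0, sigma, dim))
        for g in hens:                           # hen: pulled by two cocks
            c1 = cocks[subgroup[g]]              # the cock of its own subgroup
            c2 = cocks[rng.integers(0, n_cock)]  # a random cock, standing in for
            r1, r2 = rng.random(dim), rng.random(dim)  # one outside the subgroup
            new[g] = (pos[g] + l1 * r1 * (pos[c1] - pos[g])
                             + l2 * r2 * (pos[c2] - pos[g]))
        for c in chicks:                         # chick: inertia + mother + cock
            new[c] = (omega * pos[c]
                      + alpha * (pos[mother[c]] - pos[c])
                      + beta * (pos[cocks[subgroup[c]]] - pos[c]))
        pos = new
    return best  # ELM weights/thresholds of the best individual found
```

Because the claim returns to step B3 after each update, the sketch rebuilds the subgroup membership and mother-offspring pairings inside every iteration rather than keeping them fixed.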
4. The method according to any one of claims 1 to 3, characterized in that the ant lion algorithm is used to optimize the weights and thresholds of the BP neural network in the BP-neural-network-based pedestrian tracking model, with the following specific steps:
step C1: taking each individual position in the ant lion group and the ant group as a weight and threshold of the BP neural network in the BP-neural-network-based pedestrian tracking model, and initializing the population parameters;
the number of ant lions and of ants is N, with value range [40,100]; the maximum iteration number T is in the range [600,2000]; the lower and upper boundary values of the parameter variables to be optimized are lb and ub, where the boundary value range of all weight variables is [0.01,0.6] and that of all threshold variables is [0.0001,0.001];
step C2: initializing the positions of all ant lions and ants in the ant lion group and the ant group;
the initial positions of the ants and ant lions are generated randomly within the search space according to:

$$X_{i}^{1} = lb + \mathrm{rand}(0,1)\cdot(ub - lb)$$

wherein X_i^1 is the position of the i-th individual at iteration 1, and rand(0,1) is a random function taking values uniformly at random in (0,1);
step C3: setting the fitness function, calculating the fitness of each individual, selecting the elite ant lion according to the fitness function, and letting the iteration number t be 1;
substituting the weight and threshold corresponding to each individual position in the ant lion group and the ant group into the BP-neural-network-based pedestrian tracking model, and taking the reciprocal of the absolute value of the difference between the pedestrian tracking result obtained by the model so determined and the actual tracking result as the second fitness function f2(x);
step C4: selecting the individual with the largest second fitness function value from the ant group and the ant lion group as the elite ant lion, then ranking the individuals by fitness from large to small, taking the first N-1 of them as ant lions and the last N as ants;
step C5: updating the positions of the ants and ant lions, and calculating the second fitness function value of each individual;
(1) letting the ant individuals perform random walks, and normalizing each walked ant's position using the boundaries and the position of the ant lion individual selected by roulette;
$$X_{i}^{t} = \frac{\bigl(X_{i}^{t} - a_{i}\bigr)\bigl(d_{i}^{t} - c_{i}^{t}\bigr)}{b_{i} - a_{i}} + c_{i}^{t}$$

wherein a_i and b_i are respectively the minimum and maximum of the boundary of ant individual i over the whole random walk, and c_i^t and d_i^t are respectively the minimum and maximum of the boundary at the t-th iteration, whose values are influenced by the positions of the ant lions:

$$c_{i}^{t} = X_{s}^{t} + lb^{t}, \qquad d_{i}^{t} = X_{s}^{t} + ub^{t}$$

wherein X_s^t is the position of the ant lion individual s selected by roulette by ant individual i from the ant lion group at the t-th iteration; ub^t and lb^t respectively represent the upper and lower boundaries at the t-th iteration;
(2) the ant lions prey on ants, and the ant lion individual positions are updated;
if the fitness of a walked ant individual's position is greater than that of the position of the ant lion individual selected by roulette, the ant lion preys on that ant, and the walked ant's position replaces the position of the corresponding ant lion individual;
(3) updating the positions of the preyed-upon ant individuals using the updated ant lion positions and the elite ant lion position;
$$X_{n}^{t+1} = \frac{X_{s}^{t} + X_{e}^{t}}{2}$$

wherein X_n^{t+1} represents the position of the preyed-upon ant individual n after the t-th iteration, and X_s^t and X_e^t respectively represent the position of the ant lion individual s and the position of the elite ant lion at the t-th iteration;
(4) updating the boundary range of the ants' random walk;
$$ub^{t} = \frac{ub}{10^{\omega}\,t/T}, \qquad lb^{t} = \frac{lb}{10^{\omega}\,t/T}$$

wherein ub^t and lb^t are respectively the upper and lower boundaries at the t-th iteration, and ω is a coefficient related to the current iteration number t, increasing stepwise (e.g., from 2 up to 6) as t approaches the maximum iteration number T, so that the random-walk boundary shrinks as the iterations proceed;
(5) calculating the second fitness function value of all individuals;
step C6: judging whether the maximum iteration number has been met; if not, letting t = t + 1 and returning to step C4; once the maximum iteration number is met, determining the weights and thresholds of the BP-neural-network-based pedestrian tracking model from the position of the elite ant lion individual corresponding to the largest second fitness function value (a code sketch of steps C1-C6 follows this claim).
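For illustration only (not part of the claims): a compact Python sketch of the ant lion loop in steps C1-C6, following the formulas reconstructed above (roulette selection, random-walk normalization, predation replacement, elite averaging, shrinking boundaries). The fitness argument is a stub for the second fitness function f2(x); the walk length, the clamping of the shrink factor to at least 1, and the exact ω schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def alo(fitness, dim, n=40, t_max=600, lb=0.0001, ub=0.6, walk_len=50):
    # Step C2: random initialization X = lb + rand(0,1) * (ub - lb).
    lions = lb + rng.random((n, dim)) * (ub - lb)
    ants = lb + rng.random((n, dim)) * (ub - lb)
    lion_fit = np.array([fitness(p) for p in lions])
    elite = lions[lion_fit.argmax()].copy()      # steps C3/C4: elite ant lion
    elite_fit = lion_fit.max()

    def roulette():
        w = lion_fit - lion_fit.min() + 1e-12    # shift to positive weights
        return rng.choice(n, p=w / w.sum())

    for t in range(1, t_max + 1):
        # (4) the walking boundary shrinks as iterations proceed.
        r = t / t_max
        omega = 2 + int((r > np.array([0.5, 0.75, 0.9, 0.95])).sum())
        shrink = max(1.0, 10.0 ** omega * r)
        ub_t, lb_t = ub / shrink, lb / shrink
        for i in range(n):
            s = roulette()                       # ant lion chosen by roulette
            # (1) random walk, normalized into the lion-centred interval.
            steps = np.cumsum(rng.choice([-1.0, 1.0], (walk_len, dim)), axis=0)
            a, b = steps.min(axis=0), steps.max(axis=0)
            c, d = lions[s] + lb_t, lions[s] + ub_t
            ants[i] = (steps[-1] - a) * (d - c) / (b - a + 1e-12) + c
            # (2) predation: a fitter ant replaces the selected ant lion.
            ant_fit = fitness(ants[i])
            if ant_fit > lion_fit[s]:
                lions[s], lion_fit[s] = ants[i].copy(), ant_fit
                # (3) the preyed ant restarts midway between lion and elite.
                ants[i] = (lions[s] + elite) / 2.0
        # refresh the elite ant lion for the next round (step C4).
        if lion_fit.max() > elite_fit:
            elite, elite_fit = lions[lion_fit.argmax()].copy(), lion_fit.max()
    # Step C6: the elite position gives the BP network weights/thresholds.
    return elite
```

Since f2(x) is the reciprocal of the absolute tracking error, maximizing it in this loop is equivalent to minimizing the BP network's tracking error.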
CN201810578415.5A 2018-06-05 2018-06-05 Pedestrian posture multi-feature intelligent identification method Active CN108805907B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810578415.5A CN108805907B (en) 2018-06-05 2018-06-05 Pedestrian posture multi-feature intelligent identification method

Publications (2)

Publication Number Publication Date
CN108805907A (en) 2018-11-13
CN108805907B (en) 2022-03-29

Family

ID=64088732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810578415.5A Active CN108805907B (en) 2018-06-05 2018-06-05 Pedestrian posture multi-feature intelligent identification method

Country Status (1)

Country Link
CN (1) CN108805907B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109712160B * 2018-12-26 2023-05-23 Guilin University of Electronic Technology Method for image threshold segmentation based on generalized entropy combined with an improved lion swarm algorithm
CN110288606B * 2019-06-28 2024-04-09 North University of China Three-dimensional mesh model segmentation method using an extreme learning machine based on ant lion optimization
CN112836721B * 2020-12-17 2024-03-22 Beijing Simulation Center Image recognition method and device, computer equipment, and readable storage medium
CN113239900A * 2021-06-17 2021-08-10 CloudWalk Technology Co., Ltd. Human body position detection method and device, and computer-readable storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170161591A1 * 2015-12-04 2017-06-08 Pilot AI Labs, Inc. System and method for deep-learning-based object tracking

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224912A * 2015-08-31 2016-01-06 University of Electronic Science and Technology of China Video pedestrian detection and tracking method based on motion information and track association
WO2017120336A3 * 2016-01-05 2017-08-24 Mobileye Vision Technologies Ltd. Trained navigational system with imposed constraints
CN108021848A * 2016-11-03 2018-05-11 Zhejiang Uniview Technologies Co., Ltd. Passenger flow volume statistical method and device
CN107274433A * 2017-06-21 2017-10-20 Jilin University Target tracking method, device, and storage medium based on deep learning
CN107330920A * 2017-06-28 2017-11-07 Huazhong University of Science and Technology Surveillance video multi-target tracking method based on deep learning
CN107563313A * 2017-08-18 2018-01-09 Beihang University Multi-target pedestrian detection and tracking method based on deep learning
CN107767405A * 2017-09-29 2018-03-06 Huazhong University of Science and Technology Kernel correlation filter target tracking method incorporating convolutional neural networks

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Detect to Track and Track to Detect; Christoph Feichtenhofer et al.; 2017 IEEE International Conference on Computer Vision; 2017-12-25; pp. 3057-3065 *
Object Location and Track in Image Sequences by Means of Neural Networks; Zhenghao Shi et al.; International Journal of Computational Science; 2008-12-31; vol. 2, no. 2; pp. 274-285 *
A fast fault identification method for the rotating rectifier of brushless synchronous generators; Tang Junxiang et al.; Computer and Modernization; 2017-10; pp. 66-71 *
Flood disaster assessment model based on the ALO-ENN algorithm and its application; Cui Dongwen et al.; Pearl River; 2016-05; vol. 37, no. 5; pp. 44-50 *
Real-time tracking algorithm based on convolutional neural networks; Cheng Peng et al.; Transducer and Microsystem Technologies; 2018-05; vol. 37, no. 5; pp. 144-146 *

Also Published As

Publication number Publication date
CN108805907A (en) 2018-11-13

Similar Documents

Publication Publication Date Title
CN108805907B (en) Pedestrian posture multi-feature intelligent identification method
CN107016357B (en) Video pedestrian detection method based on time domain convolutional neural network
CN109409297B (en) Identity recognition method based on dual-channel convolutional neural network
CN107657226B (en) People number estimation method based on deep learning
CN110378281A Group activity recognition method based on pseudo-3D convolutional neural networks
Wang et al. Modified salp swarm algorithm based multilevel thresholding for color image segmentation
CN109101865A Pedestrian re-identification method based on deep learning
Shi et al. Detection and identification of stored-grain insects using deep learning: A more effective neural network
CN107609512A Video face capture method based on neural networks
Arif et al. Automated body parts estimation and detection using salient maps and Gaussian matrix model
CN108491766B (en) End-to-end crowd counting method based on depth decision forest
CN108830246B (en) Multi-dimensional motion feature visual extraction method for pedestrians in traffic environment
CN111709285A (en) Epidemic situation protection monitoring method and device based on unmanned aerial vehicle and storage medium
CN107967442A Finger vein identification method and system based on unsupervised learning and deep networks
CN108537181A Gait recognition method based on large-margin deep metric learning
CN110263920A (en) Convolutional neural networks model and its training method and device, method for inspecting and device
CN112395977A (en) Mammal posture recognition method based on body contour and leg joint skeleton
JP7422456B2 (en) Image processing device, image processing method and program
CN109918995B (en) Crowd abnormity detection method based on deep learning
CN114863263B Snakehead detection method for intra-class occlusion based on cross-scale hierarchical feature fusion
CN110738166B (en) Fishing administration monitoring system infrared target identification method based on PCNN and PCANet and storage medium
CN112347930A (en) High-resolution image scene classification method based on self-learning semi-supervised deep neural network
Adiwinata et al. Fish species recognition with Faster R-CNN Inception-v2 using the QUT fish dataset
CN108846344B (en) Pedestrian posture multi-feature intelligent identification method integrating deep learning
CN107862696A Specific pedestrian clothing parsing method and system based on fashion image migration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant