CN109948490A - A method for recording specific employee behaviors based on pedestrian re-identification - Google Patents

A method for recording specific employee behaviors based on pedestrian re-identification

Info

Publication number
CN109948490A
CN109948490A (Application CN201910178684.7A)
Authority
CN
China
Prior art keywords
employee
pedestrian
training
network
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910178684.7A
Other languages
Chinese (zh)
Inventor
Zhao Yunbo
Lin Jianwu
Li Hao
Yang Chenming
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201910178684.7A priority Critical patent/CN109948490A/en
Publication of CN109948490A publication Critical patent/CN109948490A/en
Pending legal-status Critical Current


Abstract

In this method for recording specific employee behaviors based on pedestrian re-identification, a yolo-based pedestrian detection model, an mgn-based pedestrian re-identification model, and a densenet-based multi-task convolutional neural network are first constructed. The pedestrian detection model is then trained on the VOC dataset, the pedestrian re-identification model on the Market-1501 dataset, and the multi-task convolutional neural network on the BOT dataset. Next, an employee feature database is built with the pedestrian re-identification model. Finally, the monitoring feed is read: the pedestrian detection model locates pedestrians, the re-identification model extracts each pedestrian's features and compares them against the employee feature database to determine identity, and the multi-task convolutional neural network performs action recognition on employee images, saving the images of specific employee actions. The method thus records specific employee actions, and the solution generalizes across sites.

Description

A method for recording specific employee behaviors based on pedestrian re-identification
Technical field
The present invention relates to a computer-vision monitoring and management method for employee supervision.
Background technique
Shops and factories find employees difficult to supervise: improper operation by employees can cause economic losses and even safety accidents. With the development of computer vision and the wide deployment of surveillance cameras, deep-learning algorithms can detect and analyze the employees in the monitoring feed, recording the event and issuing a reminder when a specific behavior occurs. This effectively reduces the supervisor's workload and raises the level of automated enterprise management.
Employees and customers are the main people active in shops and factories, and analyzing the two groups serves different purposes: customer analysis helps optimize marketing by building customer profiles, while employee analysis helps improve management and working efficiency. However, employees and customers show no obvious distinguishing physical characteristics, so conventional methods struggle to tell whether a detected pedestrian is an employee or a customer; yet retrieving employee images from the monitoring feed is the first step of employee analysis.
Conventional computer-based methods for distinguishing employees from customers fall into three categories: first, having employees wear instruments with a signal source, such as smart bracelets; second, determining a pedestrian's identity by face recognition; third, dividing people into two classes, customers and uniformed employees, and performing image classification with a convolutional neural network.
All three methods are flawed. Smart bracelets and similar devices require additional signal-source and positioning hardware, raising costs that grow with the area of the deployment site. In surveillance video, camera resolution and shooting angles rarely yield high-quality face images, which makes face recognition very difficult and leaves many employees undetected. And since employee uniforms change from shop to shop, moving the third method to a new shop requires collecting a large new dataset of uniformed-employee images and retraining; otherwise the classifier trained by the third method fails when the shop changes.
Hence there is as yet no complete solution for detecting, analyzing, and recording specific employee behaviors.
Summary of the invention
The present invention overcomes the shortcomings of the prior art by providing a method for recording specific employee behaviors based on pedestrian re-identification.
In the method of the invention, a yolo-based pedestrian detection model, an mgn-based pedestrian re-identification model, and a densenet-based multi-task action recognition model are first trained. An employee image library is then built, and the feature vector of every employee image is extracted and saved locally. Next, the monitoring feed is processed frame by frame: the pedestrian detection model locates every pedestrian; the re-identification model extracts each pedestrian image's feature vector and compares it with the employee library, and a threshold decides whether the pedestrian is an employee. Images identified as employees are fed into the multi-task action recognition model, which judges whether the pedestrian is performing a specific action and accordingly decides whether to save the frame. The invention addresses the employee-supervision problem to a useful degree, migrates easily to different shops, and can be applied to attendance systems, dangerous-action monitoring, and the like.
The technical solution adopted by the present invention to solve the technical problem is:
A method for recording specific employee behaviors based on pedestrian re-identification, comprising the following steps:
Step 1. Train the pedestrian detection model: build the yolo convolutional neural network model and train it;
Step 2. Train the pedestrian re-identification model: improve the mgn network model and train the re-identification neural network;
Step 3. Train the multi-task action recognition model: build the densenet-based multi-task action recognition network and train it;
Step 4. Build the employee feature database: collect an employee image library, extract the employee feature vectors, and save them locally;
Step 5. Identify and record specific employee behaviors: read the monitoring camera feed, detect the pedestrians in each frame, determine each pedestrian's identity with the re-identification algorithm, recognize employee actions, and save frames showing specific actions.
Compared with the prior art, the technical solution of the present invention has the following advantages:
(1) The invention provides a complete solution for recording specific employee actions, covering pedestrian detection, re-identification, action recognition, and so on;
(2) The invention determines pedestrian identity with a pedestrian re-identification algorithm, so it can migrate to different scenes without retraining the model.
Brief description of the drawings
Fig. 1 is a schematic diagram of the pedestrian detection network structure of the invention;
Fig. 2 is a schematic diagram of the multi-task action recognition network structure of the invention;
Fig. 3 is the flow chart of the method of the present invention.
Specific embodiment
To help those of ordinary skill in the art understand and implement the present invention, it is described in further detail below with reference to the accompanying drawings and embodiments.
Embodiment 1:
A method for recording specific employee behaviors based on pedestrian re-identification, comprising the following steps:
(1) Training the pedestrian detection model
Step 11: build the yolo convolutional neural network model;
The present invention builds the pedestrian detection model on yolo v2, constructing a classification model and a detection model separately.
The classification model is constructed first, in preparation for subsequent transfer learning. Its specific structure is shown in Table 1: layers 1 to 11 consist of convolutional layers, batch-normalization layers, activation functions, and pooling layers, and layer 12 is a fully connected layer of 1000 neurons, finally constrained by a Softmax loss function. To make fuller use of image information, the input size of the classification model is set to 448 × 448 × 3.
Table 1. Classification model structure
The detection model is obtained by modifying the classification model. As shown in Table 2, Layer12 of the classification model is replaced with a convolutional layer of k kernels of size 1×1, where k depends on the number of detected object classes according to the formula:
k = (n_cls + 5) × n_anchor (1)
where n_cls is the number of object classes and n_anchor is the number of anchor boxes; experiments show that n_anchor = 5 works best. The present invention uses the VOC dataset with 20 classes, so k = 125.
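As a quick check of formula (1), a minimal sketch (the helper name is illustrative, not from the patent) computes the detection head's channel count: each anchor predicts 4 box offsets, 1 confidence score, and n_cls class scores.

```python
def detection_channels(n_cls: int, n_anchor: int = 5) -> int:
    """k = (n_cls + 5) * n_anchor: per anchor, 4 box offsets
    + 1 confidence score + n_cls class scores (formula (1))."""
    return (n_cls + 5) * n_anchor

# VOC has 20 classes, so with 5 anchor boxes the 1x1 head outputs 125 channels.
print(detection_channels(20))
```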
The detection model has three loss functions: Coordinate Loss, Class Loss, and Confidence Loss, denoted L_coord, L_class, and L_con respectively. The losses are computed as follows:
L = L_coord + L_con + L_class (5)
Here λ_coord, λ_obj, and λ_noobj are the weights of the coordinate loss, the object-confidence loss, and the no-object-confidence loss, balancing the different loss functions. x and y are the top-left coordinates of a detected object, and x̂, ŷ the object's actual top-left coordinates; w and h are the detected width and height, and ŵ, ĥ the object's actual ones. Formula (5) states that the final loss value is the sum of the coordinate, class, and confidence losses; a schematic of how the loss functions act on the yolo detection network is shown in Fig. 1.
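Formulas (2)–(4) are not reproduced in this text. For reference, a standard yolo-v2-style squared-error form of the three terms, consistent with the symbols defined above, would read (a hedged reconstruction, not necessarily the patent's exact definition):

```latex
L_{coord} = \lambda_{coord} \sum_{i} \mathbb{1}_{i}^{obj}
  \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2
       + (w_i - \hat{w}_i)^2 + (h_i - \hat{h}_i)^2 \right] \qquad (2)

L_{con} = \lambda_{obj} \sum_{i} \mathbb{1}_{i}^{obj} (C_i - \hat{C}_i)^2
        + \lambda_{noobj} \sum_{i} \mathbb{1}_{i}^{noobj} (C_i - \hat{C}_i)^2 \qquad (3)

L_{class} = \sum_{i} \mathbb{1}_{i}^{obj} \sum_{c} \left(p_i(c) - \hat{p}_i(c)\right)^2 \qquad (4)
```

Here 1_i^obj selects cells responsible for an object, C_i is the predicted confidence, and p_i(c) the predicted class probability; these auxiliary symbols are assumptions taken from the yolo v2 paper the model is based on.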
Table 2. Detection model structure
Step 12: train the yolo convolutional neural network:
Training the yolo convolutional neural network takes two steps: training the classification network, then training the detection network.
Training the classification network: build the classification model as shown in Table 1 and train it on the ImageNet dataset. Specifically, use stochastic gradient descent with a learning rate of 0.001, a polynomial rate decay set to 4, a weight decay of 0.0005, and a momentum of 0.9. The initial image resolution is 224×224; after 160 epochs the resolution is raised to 448×448, the learning rate is lowered to 0.0001, and training continues for another 10 epochs.
Training the detection network: build the detection network as shown in Table 2 and load the weight parameters of the corresponding layers from the trained classification network. Train the detection model on the VOC dataset for 160 epochs with an initial learning rate of 0.001, a weight decay of 0.0005, and a momentum of 0.9. During training, apply data augmentation such as random cropping and horizontal flipping.
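The "polynomial rate decay set to 4" above can be read as a polynomial learning-rate schedule with power 4; a minimal sketch under that assumption (the function name is illustrative):

```python
def poly_lr(base_lr: float, step: int, max_steps: int, power: float = 4.0) -> float:
    """Polynomial learning-rate decay: the rate shrinks from base_lr to 0
    over training (power=4 is one reading of the patent's setting)."""
    return base_lr * (1.0 - step / max_steps) ** power

# With base_lr = 0.001 the rate starts at 0.001 and reaches 0 at the last step.
print(poly_lr(0.001, 0, 160), poly_lr(0.001, 160, 160))
```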
(2) Training the pedestrian re-identification model
Step 21: improve the mgn network model;
Mgn contains one branch for extracting global features and two branches for extracting local features. With resnet50 as the backbone, the network splits into three independent branches after res_conv4_1 and finally produces 11 sub-embedding features, each constrained by its own loss function (8 softmax losses and 3 hard-triplet losses). The present invention improves mgn as follows: the front part of the network is kept unchanged, and at the last layer the sub-embedding features are concatenated; the concatenated embedding is constrained by an additional hard-triplet loss. Together with the original 11 loss functions, the improved mgn has 12 in total: 8 softmax losses and 4 hard-triplet losses.
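The improvement — concatenating the 11 sub-embeddings and constraining the joint vector with one more triplet loss — can be sketched as follows (a numpy stand-in, not the patent's training code; the margin value is illustrative):

```python
import numpy as np

def concat_embedding(sub_embeddings):
    """Concatenate mgn's 11 sub-embeddings (each of shape (batch, dim))
    into one joint embedding, as the improved model's last layer does."""
    return np.concatenate(sub_embeddings, axis=1)

def triplet_loss(anchor, positive, negative, margin=1.2):
    """Triplet loss on one (anchor, positive, negative) triple of joint
    embeddings: pull the positive closer than the negative by `margin`."""
    d_ap = float(np.linalg.norm(anchor - positive))
    d_an = float(np.linalg.norm(anchor - negative))
    return max(0.0, d_ap - d_an + margin)
```

In mgn-style training the triple would be chosen by hard mining within a batch; the sketch only shows the loss on one already-chosen triple.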
Step 22: train the pedestrian re-identification neural network;
The present invention trains the pedestrian re-identification model on the Market-1501 dataset. First build the improved mgn convolutional neural network; its front part uses resnet50 as the backbone and is loaded with resnet50 parameters pre-trained on the ImageNet dataset. Apply data augmentation to the pedestrian images: horizontal flipping and random cropping. Set the initial learning rate to 0.0003, use the adam optimization algorithm to compute the parameter updates during backpropagation, and train for 160 epochs with a batch size of 64.
(3) Training the multi-task action recognition model:
Step 31: build the densenet-based multi-task action recognition network:
The present invention builds the multi-task action recognition network on densenet121, so that a single network performs two classification tasks. Two tasks need to be trained: the employee's posture (standing/sitting) and the employee's action (playing with a phone or not). Specifically, as shown in Fig. 2, the last fully connected layer of densenet121 is removed and two branch networks are attached. Each branch first has a fully connected layer FC1 of 1024 neurons, followed by a fully connected layer FC2 of 2 neurons; the number of neurons in FC2 matches the number of classes of its sub-task. Each branch is finally constrained by a softmax loss function.
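The two-branch head can be sketched with a plain numpy forward pass (random weights standing in for trained ones, and a random vector standing in for densenet121 features; the dimensions follow the description above):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class BranchHead:
    """One task branch: FC1 (1024 neurons) -> ReLU -> FC2 (2 neurons) -> softmax."""
    def __init__(self, in_dim=1024, hidden=1024, n_cls=2):
        self.w1 = rng.standard_normal((in_dim, hidden)) * 0.01
        self.w2 = rng.standard_normal((hidden, n_cls)) * 0.01
    def __call__(self, x):
        return softmax(np.maximum(0.0, x @ self.w1) @ self.w2)

features = rng.standard_normal((1, 1024))  # stand-in for backbone features
posture = BranchHead()(features)           # task 1: standing / sitting
action = BranchHead()(features)            # task 2: playing with a phone or not
```

Both branches read the same shared features, which is what lets one backbone serve two classification tasks.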
Step 32: train the multi-task action recognition network model:
The present invention trains the densenet121-based multi-task action recognition network with the dataset of the BOT 2018 New Retail Technology Challenge. Specifically, densenet121 is first trained on the ImageNet dataset; the multi-task action recognition network is then built as in Step 31 and loaded with the pre-trained model parameters; finally, it is trained on the BOT 2018 New Retail Technology Challenge dataset with an initial learning rate of 0.0003, the adam optimization algorithm, and 30 epochs of training.
(4) Building the employee feature database:
Step 41: collect the employee image library:
The present invention requires images of uniformed employees taken from the monitoring viewpoint. Four employees are photographed full-body from 4 random angles, keeping only the image inside the rectangle around each employee, for a total of 16 images of uniformed employees.
Step 42: extract the employee feature vectors and save them locally:
Load the pedestrian re-identification model trained in Step 22 and feed it the employee images from Step 41; the output is 16 feature vectors of dimension 2048, which are saved to a local csv file.
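Saving and reloading the feature database might look like this (a sketch; the patent only specifies "a local csv file", so the file name and helper names are hypothetical):

```python
import csv

import numpy as np

def save_feature_db(features, path):
    """Write one feature vector (e.g. 2048-dim) per row to a csv file."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(np.asarray(features).tolist())

def load_feature_db(path):
    """Read the feature database back as an (n_images, dim) float array."""
    with open(path, newline="") as f:
        return np.array([[float(v) for v in row] for row in csv.reader(f)])
```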
(5) Identifying and recording specific employee behaviors:
Step 51: read the monitoring camera feed:
The present invention identifies and analyzes the employees seen from the monitoring viewpoint, so the camera feed is read first. A Hikvision network camera is used; the camera and the computer are connected to the same local area network, and the camera's feed is transmitted to the computer over the rtsp protocol.
Step 52: detect the pedestrians in the frame:
Load the network structure and weights of the pedestrian detection model trained in Step 12. Process the incoming feed frame by frame, taking one frame out of every 10; the frame serves as input to the pedestrian detection model, which outputs the coordinate positions of all pedestrians.
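Taking one frame out of every 10 can be sketched as a generator over any frame source (with OpenCV the source would be the frames read from `cv2.VideoCapture` on the camera's rtsp URL; that binding is not shown here):

```python
def every_nth(frames, n=10):
    """Yield one frame out of every n from a frame stream."""
    for i, frame in enumerate(frames):
        if i % n == 0:
            yield frame
```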
Step 53: determine pedestrian identity with the re-identification algorithm:
Load the network structure and weights of the pedestrian re-identification model trained in Step 22. Each pedestrian detected in Step 52 serves as input to the re-identification network, which outputs the pedestrian's feature vector. Compute the Euclidean distance between this vector and every feature vector in the employee feature database; if a distance is below the threshold α, the pedestrian is deemed an employee, otherwise not. In the present invention, α = 2000.
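The comparison in Step 53 reduces to a nearest-neighbor test against the employee feature database (a sketch with an illustrative helper name; α = 2000 as in the text):

```python
import numpy as np

def is_employee(query_feat, employee_db, alpha=2000.0):
    """Return True if the closest employee feature vector lies within
    Euclidean distance alpha of the query re-id feature vector."""
    dists = np.linalg.norm(np.asarray(employee_db) - np.asarray(query_feat), axis=1)
    return bool(dists.min() < alpha)
```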
Step 54: recognize the employee's action:
The employees identified in Step 53 are analyzed further. Load the multi-task action recognition network trained in Step 32 and feed it the employee image; the output is two classification results: the employee's posture (standing/sitting) and the employee's action (playing with a phone or not).
Step 55: save frames showing a specific action:
Which specific actions are saved is determined by the shop's requirements. Suppose the actions that currently need saving are an employee sitting down or playing with a phone; then the employee images recognized as sitting or phone-playing in Step 54 are saved, with the current time as the file name for easy later review. The main flow of saving specific employee actions is shown in Fig. 3.
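Naming the saved frame after the capture time, as Step 55 describes, could look like this (the format string and `.jpg` extension are assumptions; the patent only says the current time becomes the file name):

```python
from datetime import datetime

def snapshot_filename(when=None):
    """File name for a saved frame, derived from the capture time,
    e.g. 20190311_142305.jpg."""
    when = when or datetime.now()
    return when.strftime("%Y%m%d_%H%M%S") + ".jpg"
```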
Embodiment 2:
(1) Experimental results of pedestrian re-identification
The mgn-based pedestrian re-identification network is trained as in Step 22 of Embodiment 1; its final precision on the Market-1501 test set is shown in Table 3:
Table 3. Experimental results
(2) Experimental results of the multi-task network
The densenet121-based multi-task convolutional neural network is trained as in Step 32 of Embodiment 1; its final precision on the BOT test set is shown in Table 4:
Table 4. Experimental results
The content described in the embodiments of this specification merely illustrates the inventive concept. The protection scope of the present invention is not limited to the specific forms stated in the embodiments and also covers the equivalent technical means that those skilled in the art can conceive according to the inventive concept.

Claims (1)

1. A method for recording specific employee behaviors based on pedestrian re-identification, comprising the following steps:
(1) Training the pedestrian detection model;
Step 11: build the yolo convolutional neural network model;
the pedestrian detection model is built on yolo v2, constructing a classification model and a detection model separately;
the classification model is constructed first, in preparation for subsequent transfer learning; its specific structure is shown in Table 1: layers 1 to 11 consist of convolutional layers, batch-normalization layers, activation functions, and pooling layers, and layer 12 is a fully connected layer of 1000 neurons, finally constrained by a Softmax loss function; to make fuller use of image information, the input size of the classification model is set to 448 × 448 × 3;
Table 1. Classification model structure
the detection model is obtained by modifying the classification model; as shown in Table 2, Layer12 of the classification model is replaced with a convolutional layer of k kernels of size 1×1, where k depends on the number of detected object classes according to the formula:
k = (n_cls + 5) × n_anchor (1)
where n_cls is the number of object classes and n_anchor is the number of anchor boxes; the VOC dataset is used, with 20 classes, so k = 125;
the detection model has three loss functions: the coordinate loss function Coordinate Loss, the class loss function Class Loss, and the confidence loss function Confidence Loss, denoted L_coord, L_class, and L_con respectively; the losses are computed as follows:
L = L_coord + L_con + L_class (5)
here λ_coord, λ_obj, and λ_noobj are the weights of the coordinate loss, the object-confidence loss, and the no-object-confidence loss, balancing the different loss functions; x and y are the top-left coordinates of a detected object, and x̂, ŷ the object's actual top-left coordinates; w and h are the detected width and height, and ŵ, ĥ the object's actual ones; formula (5) states that the final loss value is the sum of the coordinate, class, and confidence loss functions;
Table 2. Detection model structure
Step 12: train the yolo convolutional neural network:
training the yolo convolutional neural network takes two steps: training the classification network, then training the detection network;
training the classification network: build the classification model as shown in Table 1 and train it on the ImageNet dataset; specifically, use stochastic gradient descent with a learning rate of 0.001, a polynomial rate decay set to 4, a weight decay of 0.0005, and a momentum of 0.9; the initial image resolution is 224×224; after 160 epochs the resolution is raised to 448×448, the learning rate is lowered to 0.0001, and training continues for another 10 epochs;
training the detection network: build the detection network as shown in Table 2 and load the weight parameters of the corresponding layers from the trained classification network; train the detection model on the VOC dataset for 160 epochs with an initial learning rate of 0.001, a weight decay of 0.0005, and a momentum of 0.9; during training, apply data augmentation such as random cropping and horizontal flipping;
(2) Training the pedestrian re-identification model;
Step 21: improve the mgn network model;
mgn contains one branch for extracting global features and two branches for extracting local features; with resnet50 as the backbone, the network splits into three independent branches after res_conv4_1 and finally produces 11 sub-embedding features, each constrained by its own loss function (8 softmax losses and 3 hard-triplet losses); mgn is improved as follows: the front part of the network is kept unchanged, and at the last layer the sub-embedding features are concatenated; the concatenated embedding is constrained by an additional hard-triplet loss; together with the original 11 loss functions, the improved mgn has 12 in total: 8 softmax losses and 4 hard-triplet losses;
Step 22: train the pedestrian re-identification neural network;
the pedestrian re-identification model is trained on the Market-1501 dataset; first build the improved mgn convolutional neural network, whose front part uses resnet50 as the backbone, loaded with resnet50 parameters pre-trained on the ImageNet dataset; apply data augmentation to the pedestrian images: horizontal flipping and random cropping; set the initial learning rate to 0.0003, use the adam optimization algorithm to compute the parameter updates during backpropagation, and train for 160 epochs with a batch size of 64;
(3) Training the multi-task action recognition model:
Step 31: build the densenet-based multi-task action recognition network:
with densenet121 as the backbone, the multi-task action recognition network is built so that a single network performs two classification tasks; two tasks need to be trained: the employee's posture (standing/sitting) and the employee's action (playing with a phone or not); specifically, the last fully connected layer of densenet121 is removed and two branch networks are attached; each branch first has a fully connected layer FC1 of 1024 neurons, followed by a fully connected layer FC2 of 2 neurons, the number of neurons in FC2 matching the number of classes of its sub-task; each branch is finally constrained by a softmax loss function;
Step 32: train the multi-task action recognition network model:
the densenet121-based multi-task action recognition network is trained with the dataset of the BOT 2018 New Retail Technology Challenge; specifically, densenet121 is first trained on the ImageNet dataset; the multi-task action recognition network is then built as in Step 31 and loaded with the pre-trained model parameters; finally, it is trained on the BOT 2018 New Retail Technology Challenge dataset with an initial learning rate of 0.0003, the adam optimization algorithm, and 30 epochs of training;
(4) Building the employee feature database:
Step 41: collect the employee image library:
collect images of uniformed employees from the monitoring viewpoint; four employees are photographed full-body from 4 random angles, keeping only the image inside the rectangle around each employee, for a total of 16 images of uniformed employees;
Step 42: extract the employee feature vectors and save them locally:
load the pedestrian re-identification model trained in Step 22 and feed it the employee images from Step 41; the output is 16 feature vectors of dimension 2048, which are saved to a local csv file;
(5) Identifying and recording specific employee behaviors:
Step 51: read the monitoring camera feed:
the employees seen from the monitoring viewpoint are identified and analyzed, so the camera feed is read first; a Hikvision network camera is used; the camera and the computer are connected to the same local area network, and the camera's feed is transmitted to the computer over the rtsp protocol;
Step 52: detect the pedestrians in the frame:
load the network structure and weights of the pedestrian detection model trained in Step 12; process the incoming feed frame by frame, taking one frame out of every 10; the frame serves as input to the pedestrian detection model, which outputs the coordinate positions of all pedestrians;
Step 53: determine pedestrian identity with the re-identification algorithm:
load the network structure and weights of the pedestrian re-identification model trained in Step 22; each pedestrian detected in Step 52 serves as input to the re-identification network, which outputs the pedestrian's feature vector; compute the Euclidean distance between this vector and every feature vector in the employee feature database; if a distance is below the threshold α, the pedestrian is deemed an employee, otherwise not; α = 2000;
Step 54: recognize the employee's action:
the employees identified in Step 53 are analyzed further; load the multi-task action recognition network trained in Step 32 and feed it the employee image; the output is two classification results: the employee's posture (standing/sitting) and the employee's action (playing with a phone or not);
Step 55: save frames showing a specific action:
which specific actions are saved is determined by the shop's requirements; suppose the actions that currently need saving are an employee sitting down or playing with a phone; then the employee images recognized as sitting or phone-playing in Step 54 are saved, with the current time as the file name for easy later review.
CN201910178684.7A 2019-03-11 2019-03-11 A method for recording specific employee behaviors based on pedestrian re-identification Pending CN109948490A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910178684.7A CN109948490A (en) 2019-03-11 2019-03-11 A method for recording specific employee behaviors based on pedestrian re-identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910178684.7A CN109948490A (en) 2019-03-11 2019-03-11 A method for recording specific employee behaviors based on pedestrian re-identification

Publications (1)

Publication Number Publication Date
CN109948490A true CN109948490A (en) 2019-06-28

Family

ID=67009415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910178684.7A Pending CN109948490A (en) 2019-03-11 2019-03-11 A method for recording specific employee behaviors based on pedestrian re-identification

Country Status (1)

Country Link
CN (1) CN109948490A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414421A (en) * 2019-07-25 2019-11-05 University of Electronic Science and Technology of China A behavior recognition method based on sequential frame images
CN111723777A (en) * 2020-07-07 2020-09-29 广州织点智能科技有限公司 Method and device for judging commodity taking and placing process, intelligent container and readable storage medium
CN111931802A (en) * 2020-06-16 2020-11-13 南京信息工程大学 Pedestrian re-identification method based on fusion of middle-layer features of Simese network structure
CN112163531A (en) * 2020-09-30 2021-01-01 四川弘和通讯有限公司 Method for identifying gestures of oiler based on pedestrian arm angle analysis
CN112541448A (en) * 2020-12-18 2021-03-23 济南博观智能科技有限公司 Pedestrian re-identification method and device, electronic equipment and storage medium
CN112699730A (en) * 2020-12-01 2021-04-23 贵州电网有限责任公司 Machine room character re-identification method based on YOLO and convolution-cycle network
CN113111859A (en) * 2021-05-12 2021-07-13 吉林大学 License plate deblurring detection method based on deep learning
CN113268641A (en) * 2020-12-14 2021-08-17 王玉华 User data processing method based on big data and big data server
CN113989499A (en) * 2021-12-27 2022-01-28 智洋创新科技股份有限公司 Intelligent alarm method in bank scene based on artificial intelligence

Citations (6)

Publication number Priority date Publication date Assignee Title
US20170140253A1 (en) * 2015-11-12 2017-05-18 Xerox Corporation Multi-layer fusion in a convolutional neural network for image classification
CN108345912A (en) * 2018-04-25 2018-07-31 电子科技大学中山学院 Commodity rapid settlement system based on RGBD information and deep learning
CN109284733A (en) * 2018-10-15 2019-01-29 Zhejiang University of Technology A shop-assistant omission monitoring method based on yolo and multi-task convolutional neural networks
CN109344787A (en) * 2018-10-15 2019-02-15 Zhejiang University of Technology A specific-target tracking method based on face recognition and pedestrian re-identification
CN109409222A (en) * 2018-09-20 2019-03-01 China University of Geosciences (Wuhan) A multi-view facial expression recognition method based on mobile terminals
CN109446946A (en) * 2018-10-15 2019-03-08 Zhejiang University of Technology A multi-camera real-time detection method based on multithreading

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170140253A1 (en) * 2015-11-12 2017-05-18 Xerox Corporation Multi-layer fusion in a convolutional neural network for image classification
CN108345912A (en) * 2018-04-25 2018-07-31 电子科技大学中山学院 Rapid commodity settlement system based on RGB-D information and deep learning
CN109409222A (en) * 2018-09-20 2019-03-01 中国地质大学(武汉) Multi-view facial expression recognition method for mobile terminals
CN109284733A (en) * 2018-10-15 2019-01-29 浙江工业大学 Monitoring method for shopping guides' acts of omission based on YOLO and multi-task convolutional neural networks
CN109344787A (en) * 2018-10-15 2019-02-15 浙江工业大学 Specific-target tracking method based on face recognition and pedestrian re-identification
CN109446946A (en) * 2018-10-15 2019-03-08 浙江工业大学 Multi-camera real-time detection method based on multithreading

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Guanshuo Wang et al., "Learning Discriminative Features with Multiple Granularities for Person Re-Identification", MM '18: Proceedings of the 26th ACM International Conference on Multimedia *
Joseph Redmon et al., "YOLO9000: Better, Faster, Stronger", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414421A (en) * 2019-07-25 2019-11-05 电子科技大学 Behavior recognition method based on sequential frame images
CN110414421B (en) * 2019-07-25 2023-04-07 电子科技大学 Behavior recognition method based on sequential frame images
CN111931802A (en) * 2020-06-16 2020-11-13 南京信息工程大学 Pedestrian re-identification method based on fusion of mid-layer features of a Siamese network structure
CN111723777A (en) * 2020-07-07 2020-09-29 广州织点智能科技有限公司 Method and device for judging the commodity pick-and-place process, intelligent container and readable storage medium
CN112163531A (en) * 2020-09-30 2021-01-01 四川弘和通讯有限公司 Method for recognizing refueling-worker gestures based on pedestrian arm-angle analysis
CN112699730A (en) * 2020-12-01 2021-04-23 贵州电网有限责任公司 Machine-room person re-identification method based on YOLO and a convolutional-recurrent network
CN113268641A (en) * 2020-12-14 2021-08-17 王玉华 User data processing method and big-data server based on big data
CN112541448B (en) * 2020-12-18 2023-04-07 济南博观智能科技有限公司 Pedestrian re-identification method and device, electronic equipment and storage medium
CN112541448A (en) * 2020-12-18 2021-03-23 济南博观智能科技有限公司 Pedestrian re-identification method and device, electronic equipment and storage medium
CN113111859A (en) * 2021-05-12 2021-07-13 吉林大学 License plate deblurring detection method based on deep learning
CN113111859B (en) * 2021-05-12 2022-04-19 吉林大学 License plate deblurring detection method based on deep learning
CN113989499B (en) * 2021-12-27 2022-03-29 智洋创新科技股份有限公司 Intelligent alarm method for bank scenarios based on artificial intelligence
CN113989499A (en) * 2021-12-27 2022-01-28 智洋创新科技股份有限公司 Intelligent alarm method for bank scenarios based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN109948490A (en) Employee specific-behavior recording method based on pedestrian re-identification
CN109284733B (en) Monitoring method for shopping guides' acts of omission based on YOLO and multi-task convolutional neural networks
CN109740413A (en) Pedestrian re-identification method, apparatus, computer device and computer storage medium
CN108564049A (en) Fast face detection and recognition method based on deep learning
CN109101865A (en) Pedestrian re-identification method based on deep learning
CN108334848A (en) Small-face recognition method based on generative adversarial networks
CN107729838A (en) Head pose estimation method based on deep learning
CN105574510A (en) Gait recognition method and device
CN109117897A (en) Image processing method, device and readable storage medium based on convolutional neural networks
CN108898620A (en) Target tracking method based on multiple Siamese neural networks and a region neural network
CN109145784A (en) Method and apparatus for processing video
CN110532920A (en) Face recognition method for small datasets based on FaceNet
CN109241871A (en) Pedestrian-flow tracking in public areas based on video data
CN107358223A (en) Face detection and face alignment method based on YOLO
CN110532970A (en) Age and gender attribute analysis method, system, device and medium for 2D face images
CN109214298B (en) Asian female facial-attractiveness scoring model based on deep convolutional networks
CN107767335A (en) Image fusion method and system based on facial feature point localization
Yacoob et al. Labeling of human face components from range data
Mania et al. A framework for self-training perceptual agents in simulated photorealistic environments
CN110427795(A) Attribute analysis method, system and computer device based on head photographs
CN108090406A (en) Face recognition method and system
CN106650670A (en) Method and device for live face video detection
CN107316029A (en) Liveness verification method and device
CN110008793A (en) Face recognition method, device and equipment
CN109377429A (en) Face recognition-based smart education quality evaluation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190628