CN110765833A - Crowd density estimation method based on deep learning - Google Patents

Crowd density estimation method based on deep learning

Info

Publication number
CN110765833A
CN110765833A
Authority
CN
China
Prior art keywords
crowd density
neural network
network model
density map
estimation method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910761952.8A
Other languages
Chinese (zh)
Inventor
齐秀梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sinocloud Wisdom Beijing Technology Co Ltd
Original Assignee
Sinocloud Wisdom Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sinocloud Wisdom Beijing Technology Co Ltd filed Critical Sinocloud Wisdom Beijing Technology Co Ltd
Priority to CN201910761952.8A
Publication of CN110765833A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 - Recognition of crowd images, e.g. recognition of crowd congestion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Abstract

The invention discloses a crowd density estimation method based on deep learning, which comprises the following steps. S1: labeling the training set. S2: generating a first crowd density map from the annotation information. S3: constructing a deep neural network model, extracting features of different layers through the model to build a multilayer feature pyramid, and fusing the output features of the feature pyramid to obtain a second crowd density map. S4: training the constructed neural network model with the first crowd density map generated in S2 to obtain a trained deep neural network model. S5: inputting an image to be estimated into the trained deep neural network model to obtain a final crowd density map of the image, and integrating the final crowd density map to obtain the estimated number of people in the image. The method estimates crowd density by fusing pyramid features from different levels and offers high robustness and good performance.

Description

Crowd density estimation method based on deep learning
Technical Field
The invention relates to the technical field of computer vision, in particular to an airport crowd density estimation method based on deep learning.
Background
As travel has become more convenient, crowding occurs increasingly often in high-traffic places such as railway stations and airports, and the associated safety hazards are significant. Estimating the number of people in a video or image with computer vision and related technologies makes it possible to evacuate overly dense crowds, prevent stampedes, and issue early warnings.
Current people-counting methods fall into three categories: detection-based, regression-based, and deep-learning-based. Detection-based methods first detect each individual in the crowd and then count the detections to obtain the crowd density; they struggle in medium- and high-density scenes, and detection is slow. Regression-based methods regress from extracted local or global features to the number of people in the image; they depend heavily on feature selection, their main difficulty is designing effective features, and they are highly scene-dependent. Deep-learning-based methods use a deep convolutional neural network trained on a large number of labeled samples to learn crowd features and output the number of people in the image. However, current deep learning algorithms generally annotate only the center coordinates of each head, so much positional information is lost and the generated density map contains no information about head size; in addition, the commonly used multi-column convolutional network architectures suffer from high complexity, large sample requirements, and long training times.
Disclosure of Invention
In view of the above, the present invention provides a crowd density estimation method based on deep learning for counting people in an airport scene. The generated density map contains both the number of people and the size of each head, so that positional information is not lost. The method comprises the following steps:
S1: labeling the training set to prepare training data, wherein the annotation information comprises the coordinates of the head center point and the width and height of the head;
S2: generating a first crowd density map from the annotation information;
S3: constructing a deep neural network model, and extracting features of different layers through the deep neural network model to build a multilayer feature pyramid; obtaining a second crowd density map by fusing the output features of the feature pyramid;
S4: training the constructed neural network model with the first crowd density map generated in S2 to obtain a trained deep neural network model;
S5: inputting an image to be estimated into the trained deep neural network model to obtain a final crowd density map of the image, and integrating the final crowd density map to obtain the estimated number of people in the image.
Further, in S1, labeling the training set to prepare training data, wherein the annotation information comprises the coordinates of the head center point and the width and height of the head, comprises: when labeling the training set, annotating a bounding box for each head.
Further, in S2, generating the first crowd density map from the annotation information comprises: processing the annotation information by Gaussian blurring.
Further, the size of the Gaussian kernel used in the Gaussian blur is the width and height of the corresponding head.
Further, the size of the first crowd density map is 1/16 of the size of the original training images.
Further, in S3, constructing a deep neural network model, extracting features of different layers through the deep neural network model to build a multilayer feature pyramid, and obtaining a second crowd density map by fusing the output features of the feature pyramid, comprises: using the multilayer feature pyramid to represent head information of different sizes.
Further, fusing the output features of the feature pyramid to obtain the second crowd density map comprises: upsampling the output features of the bottom level of the feature pyramid and fusing them by addition with the output features of the level above.
Further, the deep neural network model is ResNet18-FPN.
Further, in S4, training the constructed deep neural network model with the generated first crowd density map to obtain the trained deep neural network model comprises: using the pre-trained weight parameters of ResNet18.
The invention has the beneficial effects that:
In the proposed people-counting method, the head bounding boxes in a crowd image are first annotated, and a better label image is then obtained by applying a Gaussian kernel whose size matches the head. The convolutional network ResNet18, taken from the image classification field, serves as the base network; because it has been trained on a large amount of data, it provides strong feature extraction capability. A feature pyramid is built from the feature maps of different layers of the network, and the crowd density is estimated by fusing the different pyramid levels, which gives the method high robustness and good performance.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic diagram illustrating a crowd density estimation method based on deep learning according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The invention annotates samples by marking a bounding box around each head, generates a convolution kernel from the head size, and applies Gaussian blur at the head coordinates to obtain the label information. Features from different layers of a ResNet18 network are extracted to build a multi-scale feature pyramid that represents heads of different sizes, and the crowd density map is obtained by upsampling the output features of the deepest pyramid level and fusing them by addition with the features of the level above.
The method comprises the following specific implementation steps:
s1: and marking the training set, wherein the marking information comprises the coordinate of the center point of the head and the width and height of the head.
Specifically: in this embodiment, the training set consists of crowd images collected from an airport, showing people inside the airport, and comprises approximately 16,000 images. When labeling the crowd images, a bounding box is annotated for each head, so that more head information is captured. The annotation for each head in an image sample is its bounding box (x, y, w, h).
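The annotation format described above can be illustrated with a short sketch. This is an assumption about how the (x, y, w, h) boxes might be stored and normalized, not part of the patent text; the file layout and the choice of JSON are hypothetical, and (x, y) is assumed to be the head center (corner-style boxes are converted).

```python
# Hypothetical loader for head annotations stored as JSON lists of (x, y, w, h).
# Assumption: (x, y) is the head center; pass xy_is_top_left=True to convert
# corner-style boxes to center-style ones.
import json
from typing import List, Tuple

Head = Tuple[float, float, float, float]  # (center_x, center_y, width, height)

def load_heads(path: str, xy_is_top_left: bool = False) -> List[Head]:
    with open(path, "r", encoding="utf-8") as f:
        boxes = json.load(f)                 # e.g. [[312.0, 150.5, 24.0, 28.0], ...]
    heads = []
    for x, y, w, h in boxes:
        if xy_is_top_left:                   # convert corner coordinates to the center
            x, y = x + w / 2.0, y + h / 2.0
        heads.append((x, y, w, h))
    return heads
```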
S2: generating a first crowd density map from the annotation information.
Specifically: a first crowd density map is generated from the head bounding-box information by Gaussian blurring, and this map is then used as the label, i.e. as the supervision signal. The width and height of the Gaussian kernel are the width w and height h of the corresponding head, so the generated label contains both the head size and the number of heads. The density map of a sample is computed by placing such a Gaussian kernel at each head-center coordinate in the training image; because the kernel size equals the head size, the resulting density map simultaneously encodes the number of people and the head sizes. So that the density map and the network's feature map have the same size, the density map is reduced to 1/16 of the original image.
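A minimal sketch of this density-map generation follows. The patent specifies that the Gaussian kernel width and height equal the head width and height and that the map is reduced to 1/16 of the original image, but it does not give the Gaussian sigma or the resizing rule; the sigma of one quarter of the kernel size and the count-preserving rescaling used here are assumptions, as are the helper name and the use of OpenCV/NumPy.

```python
# Sketch of the first-crowd-density-map generation (assumptions: sigma = kernel
# size / 4, per-head Gaussians normalized to sum to 1 so the map integrates to
# the head count, and 1/16 scaling applied to each spatial dimension).
import numpy as np
import cv2

def make_density_map(image_h, image_w, heads, down=16):
    density = np.zeros((image_h, image_w), dtype=np.float32)
    for cx, cy, w, h in heads:                            # (center_x, center_y, width, height)
        kw, kh = max(int(w), 1) | 1, max(int(h), 1) | 1   # odd kernel sizes for OpenCV
        kernel = cv2.getGaussianKernel(kh, h / 4.0) @ cv2.getGaussianKernel(kw, w / 4.0).T
        kernel /= kernel.sum()                            # each head contributes exactly 1
        # paste the kernel at the head center, clipped to the image bounds
        y0, x0 = int(cy) - kh // 2, int(cx) - kw // 2
        y1, x1 = y0 + kh, x0 + kw
        ky0, kx0 = max(0, -y0), max(0, -x0)
        ky1, kx1 = kh - max(0, y1 - image_h), kw - max(0, x1 - image_w)
        density[max(0, y0):min(image_h, y1), max(0, x0):min(image_w, x1)] += kernel[ky0:ky1, kx0:kx1]
    # resize to 1/16 of the original size and rescale so the integral (the count) is preserved
    small = cv2.resize(density, (image_w // down, image_h // down), interpolation=cv2.INTER_LINEAR)
    small *= density.sum() / max(small.sum(), 1e-6)
    return small
```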
S3: constructing a deep neural network model (ResNet18-FPN), extracting features of different layers through the neural network, and establishing a multi-scale feature pyramid for representing information of heads of different sizes; obtaining a second crowd density map by up-sampling the output characteristics of the last layer of characteristic pyramid and adding and fusing the output characteristics with the characteristics of the previous layer;
In this embodiment, the deep neural network model is the convolutional network ResNet18-FPN. In a convolutional layer, features are extracted by convolution kernels. The deep network consists of many layers, and the invention takes the outputs of two of them. Each layer may use multiple convolution kernels, each kernel extracting one feature; in general, the deeper the convolutional layer, the closer the extracted features are to the object itself. A convolution kernel is an n x n x r weight tensor, where n is the kernel size and r is the kernel depth, and the kernel depth is kept consistent with the depth of the input data.
As shown in Fig. 1, ResNet18-FPN is used as the base network model; after five downsampling stages the deepest feature map is 1/32 of the original image size. Feature maps of different convolutional layers are extracted. To reduce the aliasing effect of upsampling, a 1x1 convolution is applied to the deepest feature map so that its output channel count matches that of the feature map of the previous level; the output features of the deepest pyramid level are then upsampled, fused by addition with the features of the previous level, and passed through a 3x3 convolution to obtain a second crowd density map with one channel, whose size is 1/16 of the original image.
The second crowd density map is the output of the neural network model and is the estimated crowd density map.
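A sketch of this architecture is shown below. It is an illustrative reconstruction under the constraints stated above (ResNet18 backbone, a 1x1 lateral convolution, upsampling plus addition, and a 3x3 convolution producing a single-channel map at 1/16 resolution), not the patentee's exact network; the class name and channel choices are assumptions, and the torchvision pretrained-weights flag varies across library versions.

```python
# Illustrative ResNet18-FPN-style density head (a sketch, not the exact patented
# network): the deepest feature map (stride 32) is mapped to the channel count of
# the level above by a 1x1 convolution, upsampled, added to the stride-16 features,
# and reduced to a 1-channel density map by a 3x3 convolution.
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class ResNet18FPNDensity(nn.Module):
    def __init__(self, pretrained=True):
        super().__init__()
        backbone = resnet18(pretrained=pretrained)   # ImageNet-pretrained weights, as in S4
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.layer1, self.layer2 = backbone.layer1, backbone.layer2
        self.layer3, self.layer4 = backbone.layer3, backbone.layer4
        self.lateral = nn.Conv2d(512, 256, kernel_size=1)        # match channel counts
        self.head = nn.Conv2d(256, 1, kernel_size=3, padding=1)  # single-channel density map

    def forward(self, x):
        x = self.layer2(self.layer1(self.stem(x)))
        c4 = self.layer3(x)          # stride 16, 256 channels
        c5 = self.layer4(c4)         # stride 32, 512 channels
        p5 = F.interpolate(self.lateral(c5), size=c4.shape[-2:],
                           mode="bilinear", align_corners=False)  # upsample deepest level
        return self.head(c4 + p5)    # fuse by addition, then 3x3 conv -> 1/16-scale map
```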
S4: training the constructed deep neural network model with the first crowd density map from S2 to obtain a trained deep neural network model; during training, the pre-trained weight parameters of the ResNet18 network can be used directly.
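The training step could look like the following sketch. The patent does not specify a loss function, optimizer, or schedule; the pixel-wise MSE loss, the Adam optimizer, and the hypothetical train_loader yielding (image, density map) pairs prepared as in S1/S2 are all assumptions.

```python
# Minimal training-loop sketch for S4 (assumed: MSE loss between the predicted
# and the first crowd density map, Adam optimizer, CUDA device).
import torch

def train(model, train_loader, epochs=50, lr=1e-4, device="cuda"):
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.MSELoss()
    for epoch in range(epochs):
        total = 0.0
        for images, density_gt in train_loader:        # density_gt: 1/16-scale maps from S2
            images, density_gt = images.to(device), density_gt.to(device)
            pred = model(images)                        # the second crowd density map
            loss = criterion(pred, density_gt)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            total += loss.item()
        print(f"epoch {epoch}: mean loss {total / len(train_loader):.6f}")
    return model
```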
S5: inputting the image to be estimated into the trained deep neural network model to obtain the final crowd density map of the image, and integrating the final crowd density map to obtain the estimated number of people in the image.
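The inference and counting step reduces to summing the predicted map, as in the sketch below; the estimate_count helper and the assumption that the input tensor is already preprocessed to shape 1xCxHxW are illustrative.

```python
# Sketch of the inference step S5: integrating (summing) the predicted density
# map yields the estimated number of people in the image.
import torch

@torch.no_grad()
def estimate_count(model, image_tensor, device="cuda"):
    model.to(device).eval()
    density = model(image_tensor.to(device))    # final crowd density map at 1/16 scale
    return float(density.sum().item())          # the integral of the map is the head count
```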
When the ResNet18-FPN network is trained with a pre-trained ResNet18 model, the training time is shortened while the weights are updated and the network error gradually decreases.
Convolutional neural networks have become a research focus in speech analysis and image recognition. Their weight-sharing structure is closer to a biological neural network, reduces the complexity of the network model, and reduces the number of weights. The advantage is especially clear when the network input is a multi-dimensional image: the image can be fed directly into the network, avoiding the complex feature extraction and data reconstruction of traditional recognition algorithms. A convolutional network is a multilayer perceptron designed specifically to recognize two-dimensional shapes; its structure is highly invariant to translation, scaling, skewing, and other forms of deformation.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element recited with the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Finally, it is to be noted that: the above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. A crowd density estimation method based on deep learning is characterized by comprising the following steps:
S1: labeling the training set to prepare training data, wherein the annotation information comprises the coordinates of the head center point and the width and height of the head;
S2: generating a first crowd density map from the annotation information;
S3: constructing a deep neural network model, and extracting features of different layers through the deep neural network model to build a multilayer feature pyramid; obtaining a second crowd density map by fusing the output features of the feature pyramid;
S4: training the constructed neural network model with the first crowd density map generated in S2 to obtain a trained deep neural network model;
S5: inputting an image to be estimated into the trained deep neural network model to obtain a final crowd density map of the image, and integrating the final crowd density map to obtain the estimated number of people in the image.
2. The deep learning based crowd density estimation method according to claim 1, wherein S1, labeling the training set to prepare training data, wherein the annotation information comprises the coordinates of the head center point and the width and height of the head, comprises: when labeling the training set, annotating a bounding box for each head.
3. The deep learning based crowd density estimation method according to claim 1, wherein S2, generating the first crowd density map from the annotation information, comprises: processing the annotation information by Gaussian blurring.
4. The deep learning-based crowd density estimation method according to claim 3, wherein the Gaussian kernel size in the Gaussian blur is the width and height of the human head.
5. The deep learning based crowd density estimation method of claim 3, wherein the first crowd density map size is 1/16 of the training set original size.
6. The deep learning based crowd density estimation method according to claim 1, wherein S3, constructing a deep neural network model, extracting features of different layers through the deep neural network model to build a multilayer feature pyramid, and obtaining a second crowd density map by fusing the output features of the feature pyramid, comprises: using the multilayer feature pyramid to represent head information of different sizes.
7. The deep learning based crowd density estimation method according to claim 6, wherein fusing the output features of the feature pyramid to obtain the second crowd density map comprises: upsampling the output features of the bottom level of the feature pyramid and fusing them by addition with the output features of the level above.
8. The deep learning based crowd density estimation method according to claim 6, wherein the deep neural network model is ResNet18-FPN.
9. The deep learning based crowd density estimation method according to claim 1, wherein S4, training the constructed deep neural network model with the generated first crowd density map to obtain the trained deep neural network model, comprises: using the pre-trained weight parameters of ResNet18.
CN201910761952.8A 2019-08-19 2019-08-19 Crowd density estimation method based on deep learning Pending CN110765833A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910761952.8A CN110765833A (en) 2019-08-19 2019-08-19 Crowd density estimation method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910761952.8A CN110765833A (en) 2019-08-19 2019-08-19 Crowd density estimation method based on deep learning

Publications (1)

Publication Number Publication Date
CN110765833A (en) 2020-02-07

Family

ID=69329382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910761952.8A Pending CN110765833A (en) 2019-08-19 2019-08-19 Crowd density estimation method based on deep learning

Country Status (1)

Country Link
CN (1) CN110765833A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476188A (en) * 2020-04-14 2020-07-31 山东师范大学 Crowd counting method, system, medium and electronic device based on characteristic pyramid
CN111488827A (en) * 2020-04-10 2020-08-04 山东师范大学 Crowd counting method and system based on multi-scale feature information
CN111639668A (en) * 2020-04-17 2020-09-08 北京品恩科技股份有限公司 Crowd density detection method based on deep learning
CN111738922A (en) * 2020-06-19 2020-10-02 新希望六和股份有限公司 Method and device for training density network model, computer equipment and storage medium
CN112215059A (en) * 2020-08-26 2021-01-12 厦门大学 Urban village identification and population estimation method and system based on deep learning and computer readable storage medium
CN112364788A (en) * 2020-11-13 2021-02-12 润联软件系统(深圳)有限公司 Monitoring video crowd quantity monitoring method based on deep learning and related components thereof
CN112989916A (en) * 2020-12-17 2021-06-18 北京航空航天大学 Crowd counting method combining density estimation and target detection
CN112989952A (en) * 2021-02-20 2021-06-18 复旦大学 Crowd density estimation method and device based on mask guidance
CN113327233A (en) * 2021-05-28 2021-08-31 北京理工大学重庆创新中心 Cell image detection method based on transfer learning
CN113538402A (en) * 2021-07-29 2021-10-22 燕山大学 Crowd counting method and system based on density estimation
CN116935310A (en) * 2023-07-13 2023-10-24 百鸟数据科技(北京)有限责任公司 Real-time video monitoring bird density estimation method and system based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150347820A1 (en) * 2014-05-27 2015-12-03 Beijing Kuangshi Technology Co., Ltd. Learning Deep Face Representation
KR20180020724A (en) * 2016-08-19 2018-02-28 주식회사 케이티 Pyramid history map generating method for calculating feature map in deep learning based on convolution neural network and feature map generating method
CN108921822A (en) * 2018-06-04 2018-11-30 中国科学技术大学 Image object method of counting based on convolutional neural networks
CN109101930A (en) * 2018-08-18 2018-12-28 华中科技大学 A kind of people counting method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150347820A1 (en) * 2014-05-27 2015-12-03 Beijing Kuangshi Technology Co., Ltd. Learning Deep Face Representation
KR20180020724A (en) * 2016-08-19 2018-02-28 주식회사 케이티 Pyramid history map generating method for calculating feature map in deep learning based on convolution neural network and feature map generating method
CN108921822A (en) * 2018-06-04 2018-11-30 中国科学技术大学 Image object method of counting based on convolutional neural networks
CN109101930A (en) * 2018-08-18 2018-12-28 华中科技大学 A kind of people counting method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CODING思想, "Survey of crowd behavior analysis algorithms" (人群行为分析算法调研), CSDN, HTTPS://BLOG.CSDN.NET/LOVE1055259415/ARTICLE/DETAILS/80119254?UTM_SOURCE=BLOGXGWZ3 *
SINAT_36165006, "Image pyramid upsampling and downsampling" (图像金字塔上采样降采样), CSDN, HTTPS://BLOG.CSDN.NET/SINAT_36165006/ARTICLE/DETAILS/78410572 *
荣荣闲不住, "The learning process of the Feature Pyramid Network (FPN)" (特征金字塔(FPN)的学习过程), CSDN, HTTPS://BLOG.CSDN.NET/HTML5BABY/ARTICLE/DETAILS/90312100 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488827A (en) * 2020-04-10 2020-08-04 山东师范大学 Crowd counting method and system based on multi-scale feature information
CN111476188B (en) * 2020-04-14 2023-09-12 山东师范大学 Crowd counting method, system, medium and electronic equipment based on feature pyramid
CN111476188A (en) * 2020-04-14 2020-07-31 山东师范大学 Crowd counting method, system, medium and electronic device based on characteristic pyramid
CN111639668A (en) * 2020-04-17 2020-09-08 北京品恩科技股份有限公司 Crowd density detection method based on deep learning
CN111738922A (en) * 2020-06-19 2020-10-02 新希望六和股份有限公司 Method and device for training density network model, computer equipment and storage medium
CN112215059A (en) * 2020-08-26 2021-01-12 厦门大学 Urban village identification and population estimation method and system based on deep learning and computer readable storage medium
CN112215059B (en) * 2020-08-26 2023-10-27 厦门大学 Deep learning-based urban village identification and population estimation method, system and computer-readable storage medium
CN112364788A (en) * 2020-11-13 2021-02-12 润联软件系统(深圳)有限公司 Monitoring video crowd quantity monitoring method based on deep learning and related components thereof
CN112364788B (en) * 2020-11-13 2021-08-03 润联软件系统(深圳)有限公司 Monitoring video crowd quantity monitoring method based on deep learning and related components thereof
CN112989916A (en) * 2020-12-17 2021-06-18 北京航空航天大学 Crowd counting method combining density estimation and target detection
CN112989952B (en) * 2021-02-20 2022-10-18 复旦大学 Crowd density estimation method and device based on mask guidance
CN112989952A (en) * 2021-02-20 2021-06-18 复旦大学 Crowd density estimation method and device based on mask guidance
CN113327233A (en) * 2021-05-28 2021-08-31 北京理工大学重庆创新中心 Cell image detection method based on transfer learning
CN113538402A (en) * 2021-07-29 2021-10-22 燕山大学 Crowd counting method and system based on density estimation
CN116935310A (en) * 2023-07-13 2023-10-24 百鸟数据科技(北京)有限责任公司 Real-time video monitoring bird density estimation method and system based on deep learning

Similar Documents

Publication Publication Date Title
CN110765833A (en) Crowd density estimation method based on deep learning
Idrees et al. Composition loss for counting, density map estimation and localization in dense crowds
CN108334848B (en) Tiny face recognition method based on generation countermeasure network
CN109344736B (en) Static image crowd counting method based on joint learning
CN107967451B (en) Method for counting crowd of still image
Hou et al. Change detection based on deep features and low rank
Nakamura et al. Scene text eraser
CN108062525B (en) Deep learning hand detection method based on hand region prediction
CN110287960A (en) The detection recognition method of curve text in natural scene image
CN111191667B (en) Crowd counting method based on multiscale generation countermeasure network
CN110263712B (en) Coarse and fine pedestrian detection method based on region candidates
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN110782420A (en) Small target feature representation enhancement method based on deep learning
CN109389129A (en) A kind of image processing method, electronic equipment and storage medium
Chen et al. Research on recognition of fly species based on improved RetinaNet and CBAM
CN110781980B (en) Training method of target detection model, target detection method and device
CN111881853B (en) Method and device for identifying abnormal behaviors in oversized bridge and tunnel
CN107767416B (en) Method for identifying pedestrian orientation in low-resolution image
CN111709300B (en) Crowd counting method based on video image
Lu et al. Learning attention map from images
Gammulle et al. Coupled generative adversarial network for continuous fine-grained action segmentation
CN111353544A (en) Improved Mixed Pooling-Yolov 3-based target detection method
CN116129291A (en) Unmanned aerial vehicle animal husbandry-oriented image target recognition method and device
US20230095533A1 (en) Enriched and discriminative convolutional neural network features for pedestrian re-identification and trajectory modeling
Zhu et al. Towards automatic wild animal detection in low quality camera-trap images using two-channeled perceiving residual pyramid networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200207