WO2016183766A1 - Method and apparatus for generating predictive models - Google Patents
Method and apparatus for generating predictive models
- Publication number: WO2016183766A1
- Application number: PCT/CN2015/079178
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- distribution
- patches
- cnn
- truth
- ground
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/192—Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
- G06V30/194—References adjustable by an adaptive method, e.g. learning
Definitions
- the present application relates to an apparatus and a method for generating a predictive model to predict a crowd density distribution and counts of persons in image frames.
- Counting pedestrians in crowd videos draws significant attention because of strong demands in video surveillance, and it is especially important for metropolitan security.
- Crowd counting is a challenging task due to severe occlusions, scene perspective distortions and diverse crowd distributions. Since pedestrian detection and tracking have difficulty in crowd scenes, most state-of-the-art methods are regression based, and the goal is to learn a mapping between low-level features and crowd counts. However, these works are scene-specific, i.e., a crowd counting model learned for a particular scene can only be applied to the same scene. Given an unseen scene or a changed scene layout, the model has to be re-trained with new annotations.
- Counting by global regression ignores spatial information of pedestrians.
- Lempitsky et al. introduced an object counting method through pixel-level object density map regression. Following this work, Fiaschi et al. used random forests to regress the object density and improve training efficiency.
- Another advantage of density regression based methods is that they are able to estimate object counts in any region of an image. Taking this advantage, an interactive object counting system was introduced, which visualized region counts to help users determine the relevance feedback efficiently. Rodriguez et al. made use of density map estimation to improve head detection results. These methods are scene-specific and not applicable to cross-scene counting.
- the present disclosure addresses the problem of crowd density and count estimation, the aim of which is to automatically estimate the density map and the number/count of people in a given surveillance video frame.
- the present application proposes a cross-scene density and count estimation system, which is capable of estimating both the density map and people counts for any target scene, even if the scene does not exist in the training set.
- an apparatus for generating a predictive model to predict the crowd density map and counts comprising a density map creation unit, a CNN generation unit, a similar data retrieval unit and a model fine-tuning unit.
- the density map creation unit is configured to approximate the perspective map for each training scene from a training set (annotated with pedestrian heads, i.e., the head position of every person in the region of interest (ROI) is labeled) and to create, based on the labels and the perspective map, ground-truth density maps and counts for the training set;
- the density map represents the crowd distribution for every frame and the integration over the density map is equal to the total number of pedestrians.
- the CNN generation unit is configured to construct and initialize a crowd convolutional neural network (CNN), and to train the CNN by inputting crowd patches sampled from the training set, together with their corresponding ground-truth density maps and counts, into the CNN.
- the similar data retrieval unit is configured to receive sample frames from the target scene and the samples from the training set with the ground-truth density maps and counts created by the density map creation unit, and to retrieve similar data from the training set for each target scene to overcome the scene gaps.
- the model fine-tuning unit is configured to receive the retrieved similar data and construct a second CNN, wherein the second CNN is initialized by using the trained first CNN, and the unit is further configured to fine-tune the initialized second CNN with the similar data to make the second CNN capable of predicting the density map and the pedestrian count within the region of interest for a video frame to be detected.
- a method for generating a predictive model to predict a crowd density distribution and counts of persons in image frames comprising: training a CNN by inputting one or more crowd patches from frames in a training set, each of the crowd patches having a predetermined ground-truth density distribution and counts of persons; sampling frames from a target scene image set, receiving the training images from the training set that have the determined ground-truth density distribution and counts, and retrieving similar image data from the received training frames for each of the sampled target image frames to overcome scene gaps between the target scene image set and the training images; and fine-tuning the CNN by inputting the similar image data into the CNN so as to determine a predictive model for predicting the crowd density map and counts of persons in image frames.
- an apparatus for generating a predictive model to predict a crowd density distribution and counts of persons in image frames comprising:
- a CNN training unit for training a CNN by inputting one or more crowd patches from frames in a training set, each of the crowd patches having a predetermined ground-truth density distribution and counts of persons in the inputted crowd patches;
- a similar data retrieval unit for sampling frames from a target scene image set, receiving the training images from the training set that have the determined ground-truth density distribution and counts, and retrieving similar image data from the received training frames for each of the sampled target image frames to overcome scene gaps between the target scene image set and the training images;
- a model fine-tuning unit for fine-tuning the CNN by inputting the similar image data to the CNN so as to determine a predictive model for predicting the crowd density map and counts of persons in image frames.
- a system for generating a predictive model to predict a crowd density distribution and counts of persons in image frames comprising: a memory that stores executable components; and
- a processor electrically coupled to the memory to execute the executable components to perform operations of the system, wherein the executable components comprise:
- a CNN training component for training a CNN by inputting one or more crowd patches from frames in a training set, each of the crowd patches having a predetermined ground-truth density distribution and counts of persons in the inputted crowd patches;
- a similar data retrieval component for sampling frames from a target scene image set, receiving the training images from the training set that have the determined ground-truth density distribution and counts, and retrieving similar image data from the received training frames for each of the sampled target image frames to overcome scene gaps between the target scene image set and the training images;
- a model fine-tuning component for fine-tuning the CNN by inputting the similar image data to the CNN so as to determine a predictive model for predicting the crowd density map and counts of persons in image frames.
- the multi-task system can estimate both crowd density maps and counts together.
- the count number can be calculated through integration over the density map.
- the two related tasks can also help each other to obtain a better solution for our training model.
- Fig. 1 is a schematic diagram illustrating a block view of an apparatus 1000 for generating a predictive model to predict the crowd density map and counts according to one embodiment of the present application.
- Fig. 2 is a schematic diagram illustrating a flow chart for the apparatus 1000 generating a predictive model to predict a crowd density distribution and counts of persons in image frames according to one embodiment of the present application.
- Fig. 3 is a schematic diagram illustrating a flow process chart for the density map creation unit 10 according to one embodiment of the present application.
- Fig. 4 is a schematic diagram illustrating a flow process chart for the CNN training unit according to one embodiment of the present application.
- Fig. 5 is a schematic diagram illustrating overview of the crowd CNN model with switchable objectives according to one embodiment of the present application.
- Fig. 6 is a schematic diagram illustrating a flow chart for similar data retrieval according to another embodiment of the present application.
- Fig. 7 is a schematic diagram illustrating a system for generating a predictive model according to one embodiment of the present application, in which the functions of the present invention are carried out by software.
- Fig. 1 is a schematic diagram illustrating a block view of an apparatus 1000 for generating a predictive model to predict the crowd density map and counts according to one embodiment of the present application.
- the apparatus 1000 may comprise a density map creation unit 10, a CNN generation unit 20, a similar data retrieval unit 30 and a model fine-tuning unit 40.
- Fig. 2 is a general schematic diagram illustrating a flow process chart 2000 for the apparatus 1000 according to one embodiment of the present application.
- the ground-truth density map creation unit 10 operates to select image patches from one or more training image frames in the training set; and determine the ground-truth crowd distribution in the selected patches and the ground-truth total counts of pedestrians in the selected patches.
- the CNN training unit 20 operates to train a CNN by inputting one or more crowd patches from frames in a training set, each of the crowd patches having a predetermined ground-truth density distribution and counts of persons in the inputted crowd patches.
- the similar data retrieval unit 30 operates to sample frames from a target scene image set, receive the training images from the training set that have the determined ground-truth density distribution and counts, and retrieve similar image data from the received training frames for each of the sampled target image frames to overcome scene gaps between the target scene image set and the training images.
- the model fine-tuning unit 40 operates to fine-tune the CNN by inputting the similar image data to the CNN so as to determine a predictive model for predicting the crowd density map and counts of persons in image frames. The cooperation among the density map creation unit 10, the CNN generation unit 20, the similar data retrieval unit 30 and the model fine-tuning unit 40 will be discussed below in detail.
- the initial input to the apparatus 1000 is a training set containing a number of video frames captured from various surveillance cameras, with pedestrian head labels.
- the density map creation unit 10 operates to output a density map and counts for each of the video frames based on the input training set.
- Fig. 3 is a schematic diagram illustrating a flow process chart for the density map creation unit 10 according to one embodiment of the present application.
- the density map creation unit 10 operates to approximate a perspective map/distribution for each training scene/frame from the training set.
- the pedestrian heads are labeled to indicate the head position of each person in the region of interest of each training frame. With the labeled head positions, the pedestrians' spatial locations and human body shapes can be located in each frame.
- the ground-truth density map/distribution is created based on the pedestrians' spatial locations, human body shapes and the perspective distortion of images, so as to determine a ground-truth density for the pedestrians/crowds in each frame and to estimate counts of persons in the crowds in each frame of the training set.
- the ground-truth density map/distribution represents a crowd distribution in each frame, and the integration over the density map/distribution is equal to the total number of pedestrians.
- the main objective for the crowd CNN model to be discussed later is to learn a mapping F: X → D, where X is the set of low-level features extracted from training images and D is the crowd density map/distribution of the image.
- X is the set of low-level features extracted from training images
- D is the crowd density map/distribution of the image.
- the density map/distribution is created based on the pedestrians' spatial locations, human body shapes and the perspective distortion of images. Patches randomly selected from the training images are treated as training samples, and the density maps/distributions of the corresponding patches are treated as the ground truth for the crowd CNN model, which will be further discussed later.
- the total crowd number in a selected training patch is calculated through integration over the density map/distribution. Note that the total number may be a decimal rather than an integer.
- the density map regression ground truth has been defined as a sum of Gaussian kernels centered on the locations of objects.
- This kind of density map/distribution is suitable for characterizing the density distribution of circle-like objects such as cells and bacteria.
- this assumption may fail when it comes to pedestrian crowds, where cameras are generally not in a bird's-eye view.
- An example of pedestrians in an ordinary surveillance camera reveals three visible characteristics: 1) pedestrian images in surveillance videos have different scales due to perspective distortion; 2) the shapes of pedestrians are more similar to ellipses than circles; 3) due to severe occlusions, heads and shoulders are the main cues for judging whether a pedestrian exists at each position.
- the body parts of pedestrians are not reliable for human annotation. Taking these characteristics into account, the crowd density map/distribution is created by the combination of several distributions with perspective normalization.
- Perspective normalization is necessary to estimate the pedestrian scales. For each scene, several adult pedestrians are randomly selected and labeled from head to toe. Assuming that the mean height of adults is 175 cm (for example), the perspective map M can be approximated through linear regression.
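As an illustration of this step, below is a minimal sketch in Python/NumPy (the present application prescribes no particular language or library), assuming a few head-to-toe labels are available for one scene; the label coordinates, the 480-row frame and the linear model of apparent height versus head row are illustrative assumptions.

```python
import numpy as np

MEAN_HEIGHT_M = 1.75  # assumed mean adult height (175 cm), as in the text

# (head_row, toe_row) pixel coordinates of a few labeled adult pedestrians
# (placeholder values for illustration).
labels = np.array([(120.0, 180.0), (200.0, 290.0), (300.0, 430.0)])

head_rows = labels[:, 0]
heights_px = labels[:, 1] - labels[:, 0]      # apparent pedestrian height in pixels

# Fit apparent height as a linear function of the head row, then convert it
# into a per-row perspective value M(row): the pixels one metre occupies.
slope, intercept = np.polyfit(head_rows, heights_px, deg=1)

def perspective_value(row: float) -> float:
    """Approximate pixels-per-metre at a given image row."""
    return (slope * row + intercept) / MEAN_HEIGHT_M

# One perspective value per row of a (here assumed) 480-row frame.
perspective_map = perspective_value(np.arange(480, dtype=float))
```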
- the crowd density map/distribution is created by rule of:

  \( D(p) = \sum_{P \in P_i} \frac{1}{Z} \left( N_h(p; P_h, \sigma_h) + N_b(p; P_b, \Sigma) \right) \)  (1)

  where the sum runs over the labeled head positions P in frame i.
- the crowd density distribution kernel in formula (1) contains two terms: a normalized 2D Gaussian kernel N_h as the head part and a bivariate normal distribution N_b as the body part.
- P_b is the position of the pedestrian body, estimated from the head position P_h and the perspective value.
- the whole distribution is normalized by Z.
- for each labeled person, a body shape density distribution or kernel (hereinafter, "kernel") as described in formula (1) is determined. The body shape kernels of all labeled persons are combined (overlapped) to form the ground-truth density map/distribution for each frame. The larger the values at locations in the ground-truth density map/distribution, the higher the crowd density at those locations.
- since the normalized value of each body shape kernel is equal to 1, the count of persons in the crowd equals the sum of all values of the body shape kernels in the ground-truth density map/distribution.
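A minimal sketch of this ground-truth construction follows, assuming Python/NumPy; the Gaussian widths and the head-to-body offset (the 0.2, 0.4 and 0.5 factors below) are illustrative assumptions, not values fixed by the present application, while the per-person normalization by Z and the count-as-integral property come directly from the text.

```python
import numpy as np

def gaussian2d(shape, center, sigma_y, sigma_x):
    """Un-normalized anisotropic 2D Gaussian evaluated over an image grid."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 / (2 * sigma_y ** 2)
                    + (xs - cx) ** 2 / (2 * sigma_x ** 2)))

def ground_truth_density(shape, head_positions, perspective_value):
    """Sum one normalized body-shape kernel (head Gaussian N_h plus body
    Gaussian N_b) per labeled person; the map then integrates to the count."""
    density = np.zeros(shape, dtype=np.float64)
    for (hy, hx) in head_positions:
        m = perspective_value(hy)                  # pixels per metre at this row
        head = gaussian2d(shape, (hy, hx), 0.2 * m, 0.2 * m)             # N_h
        body = gaussian2d(shape, (hy + 0.4 * m, hx), 0.5 * m, 0.2 * m)   # N_b, below the head
        kernel = head + body
        density += kernel / kernel.sum()           # normalize by Z: each person contributes 1
    return density

density = ground_truth_density((240, 320), [(60, 100), (120, 200)],
                               lambda row: 0.02 * row + 10.0)
print(density.sum())  # ~2.0: the integral equals the number of labeled persons
```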
- the CNN generation unit 20 is configured to construct and initialize a first crowd convolutional neural network (CNN) .
- the generation unit 20 operates to retrieve/sample crowd patches from the frames in the training set and obtain the corresponding ground-truth density maps and numbers of persons in the sampled crowd patches as determined by the unit 10. Then the generation unit 20 inputs the crowd patches sampled from the training set, with their corresponding ground-truth density maps/distributions and numbers of persons as target objectives, into the CNN so as to train the CNN.
- CNN: first crowd convolutional neural network
- Fig. 4 is a schematic diagram illustrating a flow chart for the process 4000 for generating and training the CNN according to one embodiment of the present application.
- the process 4000 samples one or more crowd patches from the frames in the training set and obtains the corresponding ground-truth density distributions/maps and numbers of persons/crowds in the sampled crowd patches.
- the input is the image patches cropped from training images.
- the size of each patch at different locations is chosen according to the perspective value of its center pixel.
- each patch may be set to cover a 3-meter by 3-meter square in the actual scene.
- the patches are warped to 72 pixels by 72 pixels (for example) as the input of the crowd CNN model generated in step s402 below.
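A minimal sketch of this perspective-normalized patch sampling, assuming Python with NumPy and OpenCV for the warp; the helper's name and signature are illustrative, and the count-preserving rescaling of the ground-truth density patch is an assumption implied by the integral property above.

```python
import cv2
import numpy as np

def crop_crowd_patch(frame, density, center, perspective_value, out_size=72):
    """Crop a patch covering roughly a 3 m x 3 m square around `center`
    (its pixel size taken from the perspective value at the patch centre)
    and warp both the image patch and its density patch to out_size."""
    cy, cx = center
    half = max(int(3.0 * perspective_value(cy) / 2), 1)   # 3 metres in pixels, halved
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    patch = cv2.resize(frame[y0:cy + half, x0:cx + half], (out_size, out_size))
    d = density[y0:cy + half, x0:cx + half]
    d_warped = cv2.resize(d, (out_size, out_size))
    # Rescale so the warped density patch still sums to the same person count.
    d_warped *= d.sum() / max(float(d_warped.sum()), 1e-12)
    return patch, d_warped
```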
- in step s402, the process 4000 randomly initializes a crowd convolutional neural network based on a Gaussian distribution.
- An overview of the crowd CNN model with switchable objectives is shown in Fig. 5.
- the crowd CNN model 500 contains three convolution layers (conv1-conv3) and three fully connected layers (fc4, fc5, and fc6 or fc7).
- Conv1 has 32 7×7×3 filters,
- conv2 has 32 7×7×32 filters,
- the last convolution layer (conv3) has 64 5×5×32 filters.
- Max pooling layers with a 2×2 kernel size are used after conv1 and conv2.
- the rectified linear unit (ReLU), which is not shown in Fig. 5, is the activation function applied after every convolutional layer and fully connected layer. It shall be appreciated that the numbers of filters and layers are described herein only as examples for purposes of illustration; the present application is not limited to these specific numbers, and other numbers would be acceptable.
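Under the layer sizes listed above, a minimal PyTorch sketch of such a model could look as follows; PyTorch itself, the unpadded convolutions, the fc4/fc5 widths (512 and 256) and the linear output heads are assumptions not fixed by the text, while the filter counts, 2×2 pooling and the 18×18 density output follow the description.

```python
import torch
import torch.nn as nn

class CrowdCNN(nn.Module):
    """Sketch of the crowd CNN: conv1-conv3 with ReLU and 2x2 max pooling,
    shared fully connected layers, a density head (fc6) and a count head (fc7)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),   # conv1
            nn.Conv2d(32, 32, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),  # conv2
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(),                   # conv3
        )
        # For 72x72 inputs the feature map is 64 x 9 x 9 with the sizes above.
        self.fc4 = nn.Sequential(nn.Flatten(), nn.Linear(64 * 9 * 9, 512), nn.ReLU())
        self.fc5 = nn.Sequential(nn.Linear(512, 256), nn.ReLU())
        self.fc6 = nn.Linear(256, 18 * 18)   # density-map objective
        self.fc7 = nn.Linear(256, 1)         # count objective

    def forward(self, x):                    # x: (batch, 3, 72, 72)
        h = self.fc5(self.fc4(self.features(x)))
        density = self.fc6(h).view(-1, 18, 18)
        count = self.fc7(h).squeeze(-1)
        return density, count
```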
- the process 4000 learns a mapping from the crowd patches to the density maps/distributions, for example by using mini-batch gradient descent and back-propagation, until the density maps/distributions converge to the ground-truth density/distribution created by the ground-truth density map creation unit 10.
- the process 4000 then switches the objective and learns the mapping from the crowd patches to the counts until the learned counts converge to the counts estimated by the ground-truth density map creation unit 10.
- the steps s403-s405 will be discussed in detail.
- the main task for the crowd CNN model 500 is to estimate the crowd density map/distribution of the input patches.
- the output density map/distribution is down-sampled to 18×18; therefore, the ground-truth density map/distribution is also down-sampled to 18×18. Since the density map/distribution contains rich local and detailed information, the CNN model 500 can benefit from learning to predict it and can obtain a better representation of crowd patches.
- the total count regression of the input patch is treated as the secondary task, and the count is calculated by integrating over the density map patch. The two tasks alternately assist each other to obtain a better solution.
- the two loss functions are defined by rule of:

  \( L_D(\Theta) = \frac{1}{N} \sum_{i=1}^{N} \left\| F_d(X_i; \Theta) - D_i \right\|^2 \)

  \( L_Y(\Theta) = \frac{1}{N} \sum_{i=1}^{N} \left\| F_y(X_i; \Theta) - Y_i \right\|^2 \)

- L_D is the loss between the estimated density map F_d(X_i; Θ) (the output of fc6) and the ground-truth density map D_i.
- L_Y is the loss between the estimated crowd number F_y(X_i; Θ) (the output of fc7) and the ground-truth number Y_i. Euclidean distance is adopted in both objective losses. The losses are minimized using mini-batch gradient descent and back-propagation.
- the switchable training procedure is summarized in Algorithm 1.
- L_D is set as the first objective loss to minimize, since the density map/distribution introduces more spatial information to the CNN model; density map/distribution estimation requires the model 500 to learn a general representation for crowds.
- the model 500 switches to minimize the objective of global count regression.
- Count regression is an easier task and its learning converges faster than the task of density map/distribution regression.
- the two objective losses should be normalized to similar or the same scales; otherwise the objective with the larger scale would dominate the training process.
- the scale weight of density loss can be set to 10
- the scale weight of count loss can be set to 1.
- the training loss converged after about 6 switch iterations.
- the proposed switching learning approach can achieve better performance than the widely used multi-task learning approach.
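A minimal sketch of this switchable training procedure (Algorithm 1), assuming PyTorch and a data loader yielding (patch, ground-truth density, ground-truth count) tensors; the optimizer, learning rate and iterations per objective are illustrative assumptions, while the scale weights (10 and 1) and the roughly six switch iterations come from the text.

```python
import torch

def switchable_train(model, loader, switches=6, iters_per_objective=1000,
                     w_density=10.0, w_count=1.0, lr=1e-4):
    """Alternately minimize the density loss L_D and the count loss L_Y."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    mse = torch.nn.MSELoss()
    for switch in range(switches):
        train_density = (switch % 2 == 0)   # L_D is the first objective
        for it, (patches, gt_density, gt_count) in enumerate(loader):
            density, count = model(patches)
            if train_density:
                loss = w_density * mse(density, gt_density)   # L_D term
            else:
                loss = w_count * mse(count, gt_count)         # L_Y term
            opt.zero_grad()
            loss.backward()
            opt.step()
            if it + 1 >= iters_per_objective:
                break
```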
- the similar data retrieval unit 30 is configured to receive sample frames from the target scene, receive the samples from the training set with the ground-truth density maps/distributions and counts created by the unit 10, and then obtain the similar data from the training set for each target scene to overcome the scene gaps.
- the crowd CNN model 500 is pre-trained based on all training scene data through the proposed switchable learning process.
- each query crowd scene has its unique scene properties, such as different view angles, scales and different density distributions. These properties significantly change the appearance of crowd patches and affect the performance of the crowd CNN model 500.
- a nonparametric fine-tuning scheme is designed to adapt the pre-trained CNN model 500 to unseen target scenes.
- the retrieval task consists of two steps: candidate scene retrieval and local patch retrieval.
- Candidate scene retrieval (step 601) .
- the view angle and the scale of a scene are the main factors affecting the appearance of crowd.
- the perspective map/distribution can indicate both the view angle and the scale.
- each input patch is normalized into the same scale, which covers a 3-meter by 3-meter square (for example) in the actual scene according to the perspective map/distribution. Therefore, the first step of the nonparametric fine-tuning method focuses on retrieving training scenes that have similar perspective maps/distributions with the target scene from all the training scenes. Those retrieved scenes are called candidate fine-tuning scenes.
- a perspective descriptor is designed to represent the view angle of each scene.
- the top (for example, 20) perspective-map-similar scenes are retrieved from the whole training dataset.
- the images in the retrieved scenes are treated as the candidate scenes for local patch retrieval.
- the second step is to select similar patches, which have density distributions similar to those in the test scene, from the candidate scenes.
- the crowd density distribution also affects the appearance pattern of crowds. A higher-density crowd has more severe occlusions, and only heads and shoulders can be observed; on the contrary, in a sparse crowd, pedestrians appear with their entire body shapes. Therefore, the similar data retrieval unit 30 is configured to predict the density distribution of the target scene and retrieve similar patches that match the predicted target density distribution from the candidate scenes. For example, for a crowd scene with high densities, denser patches should be retrieved to fine-tune the pre-trained model to fit the target scene.
- y_i is the integrated count over the estimated density map/distribution for sample i.
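A minimal sketch of the two-step retrieval, assuming Python/NumPy; the Euclidean distance between perspective descriptors and the matching of patch counts y_i against per-patch counts predicted for the target scene are illustrative stand-ins for the patent's perspective descriptor and density-distribution matching, and all names and parameters here are assumptions.

```python
import numpy as np

def retrieve_similar_data(target_desc, target_counts, scene_descs,
                          scene_patches, top_scenes=20, patches_per_target=10):
    """Step 1: pick candidate scenes with the closest perspective descriptors.
    Step 2: from those scenes, pick patches whose integrated counts y_i best
    match the densities predicted for the target scene's sampled patches."""
    dists = np.array([np.linalg.norm(d - target_desc) for d in scene_descs])
    candidates = np.argsort(dists)[:top_scenes]          # candidate fine-tuning scenes

    pool = [(patch, y_i) for s in candidates for (patch, y_i) in scene_patches[s]]
    retrieved = []
    for target_y in target_counts:                       # predicted count per target patch
        pool.sort(key=lambda item: abs(item[1] - target_y))
        retrieved.extend(patch for patch, _ in pool[:patches_per_target])
    return retrieved
```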
- the model fine-tuning unit 40 is configured to receive the retrieved similar data and to fine-tune the pre-trained CNN 500 with the similar data, so as to make the CNN 500 capable of predicting the density map/distribution and the pedestrian count within the region of interest for a video frame to be detected.
- the fine-tuned crowd CNN model achieves better performance for the target scene.
- the fine-tuning unit 40 samples the similar patches obtained from the unit 30 and inputs the obtained similar patches into the pre-trained CNN to fine-tune it, for example by using mini-batch gradient descent and back-propagation, until the density maps/distributions converge to the ground-truth density/distribution created by the ground-truth density map creation unit 10. Then the fine-tuning unit 40 switches the objective and learns the mapping from the crowd patches to the counts until the learned counts converge to the counts estimated by the ground-truth density map creation unit 10. Finally, it is determined whether the estimated density map/distribution and the count converge to the ground truth; if not, the above steps are repeated.
- the fine-tuned predictive model generated by the model fine-tuning unit 40 may receive video frames to be detected and a region of interest, and then predict an estimated density map and the pedestrian count in the region of interest.
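A minimal sketch of applying the fine-tuned model at test time, assuming Python with OpenCV and PyTorch and the CrowdCNN sketch above; tiling the ROI with non-overlapping sample centres and the 1/255 input scaling are illustrative assumptions.

```python
import cv2
import torch

@torch.no_grad()
def predict_roi_count(model, frame, roi_centers, perspective_value, out_size=72):
    """Crop a perspective-normalized patch around each ROI sample point,
    predict its density map, and sum the integrals for the ROI count."""
    model.eval()
    total = 0.0
    for (cy, cx) in roi_centers:
        half = max(int(3.0 * perspective_value(cy) / 2), 1)   # 3 m x 3 m square
        y0, x0 = max(cy - half, 0), max(cx - half, 0)
        patch = cv2.resize(frame[y0:cy + half, x0:cx + half], (out_size, out_size))
        x = torch.from_numpy(patch).float().permute(2, 0, 1).unsqueeze(0) / 255.0
        density, _ = model(x)
        total += float(density.sum())   # the count is the integral of the density map
    return total
```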
- the present invention may be embodied as a system, method or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects, which may all generally be referred to herein as a "unit", "circuit", "module" or "system".
- ICs: integrated circuits
- Fig. 7 illustrates a system 7000 for generating a predictive model to predict a crowd density distribution and counts of persons in image frames.
- the system 7000 comprises a memory 3001 that stores executable components and a processor 3002, electrically coupled to the memory 3001 to execute the executable components to perform operations of the system 7000.
- the executable components may comprise: a ground-truth density map creation component 701, a CNN training component 702, a similar data retrieval component 703, and a model fine-tuning component 704.
- the ground-truth density map creation component 701 is configured for selecting image patches from one or more training image frames in the training set; and determining the ground-truth crowd distribution in the selected patches and the ground-truth total counts of pedestrians in the selected patches.
- the CNN training component 702 is configured for training a CNN by inputting one or more crowd patches from frames in a training set, each of the crowd patches having a predetermined ground-truth density distribution and counts of persons in the inputted crowd patches.
- the similar data retrieval component 703 is configured for sampling frames from a target scene image set, receiving the training images from the training set that have the determined ground-truth density distribution and counts, and retrieving similar image data from the received training frames for each of the sampled target image frames to overcome scene gaps between the target scene image set and the training images.
- the model fine-tuning component 704 is configured for fine-tuning the CNN by inputting the similar image data to the CNN 500 so as to determine a predictive model for predicting the crowd density map and counts of persons in image frames.
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Databases & Information Systems (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biodiversity & Conservation Biology (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Image Analysis (AREA)
Abstract
A method for generating a predictive model to predict a crowd density distribution and counts of persons in images, comprising: training a convolutional neural network (CNN) by inputting one or more crowd patches from frames in a training set, each of the crowd patches having a predetermined ground-truth density distribution and counts of persons in the inputted crowd patches; sampling frames from a target scene image set and receiving the training images from the training set that have the determined ground-truth density distribution and counts, and retrieving similar image data from the received training images for each of the target images to overcome scene gaps between the target scene image set and the training images; and fine-tuning the CNN by inputting the similar image data into the CNN so as to determine a predictive model for predicting the crowd density map and counts of persons in images.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201580080145.XA CN107624189B (zh) | 2015-05-18 | 2015-05-18 | Method and device for generating predictive models |
PCT/CN2015/079178 WO2016183766A1 (fr) | 2015-05-18 | 2015-05-18 | Method and apparatus for generating predictive models |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2015/079178 WO2016183766A1 (fr) | 2015-05-18 | 2015-05-18 | Method and apparatus for generating predictive models |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016183766A1 (fr) | 2016-11-24 |
Family
ID=57319199
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/079178 WO2016183766A1 (fr) | 2015-05-18 | 2015-05-18 | Method and apparatus for generating predictive models |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107624189B (fr) |
WO (1) | WO2016183766A1 (fr) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090222388A1 (en) * | 2007-11-16 | 2009-09-03 | Wei Hua | Method of and system for hierarchical human/crowd behavior detection |
CN104077613A (zh) * | 2014-07-16 | 2014-10-01 | University of Electronic Science and Technology of China | Crowd density estimation method based on a cascaded multi-stage convolutional neural network |
CN104268524A (zh) * | 2014-09-24 | 2015-01-07 | Zhu Yi | Image recognition method based on a convolutional neural network with dynamically adjusted training targets |
CN104573744A (zh) * | 2015-01-19 | 2015-04-29 | Shanghai Jiao Tong University | Fine-grained category recognition and object part localization and feature extraction method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7991193B2 (en) * | 2007-07-30 | 2011-08-02 | International Business Machines Corporation | Automated learning for people counting systems |
CN103971100A (zh) * | 2014-05-21 | 2014-08-06 | State Grid Corporation of China | Video-based method for detecting disguise and peeping behavior at automatic teller machines |
- 2015-05-18: CN application CN201580080145.XA, granted as CN107624189B (zh), status Active
- 2015-05-18: WO application PCT/CN2015/079178, published as WO2016183766A1 (fr), status Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN107624189A (zh) | 2018-01-23 |
CN107624189B (zh) | 2020-11-20 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 15892145; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | EP: PCT application non-entry in European phase | Ref document number: 15892145; Country of ref document: EP; Kind code of ref document: A1