WO2022174517A1 - Crowd counting method and apparatus, computer device, and storage medium - Google Patents

Crowd counting method and apparatus, computer device, and storage medium

Info

Publication number
WO2022174517A1
WO2022174517A1 PCT/CN2021/090518 CN2021090518W WO2022174517A1 WO 2022174517 A1 WO2022174517 A1 WO 2022174517A1 CN 2021090518 W CN2021090518 W CN 2021090518W WO 2022174517 A1 WO2022174517 A1 WO 2022174517A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature map
convolution
layer
scale feature
scale
Prior art date
Application number
PCT/CN2021/090518
Other languages
English (en)
French (fr)
Inventor
刘钊
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2022174517A1 publication Critical patent/WO2022174517A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30242Counting objects in image

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular, to a crowd counting method, device, computer equipment and storage medium.
  • Dense crowd counting refers to estimating the number of people in crowded scenes by mapping an input crowd image to a corresponding density map, and it is crucial for building higher-level cognitive abilities in crowded scenes.
  • the inventors realized that the current crowd counting problem is mainly solved by regressing a crowd density map and then summing it to obtain the number of people in the image.
  • however, due to the large variation in head scale, severe head occlusion, and background noise, accurate crowd counting remains difficult.
  • to address the multi-scale problem, multi-array or multi-branch network structures are currently used to obtain different receptive fields and thereby perceive changes in crowd scale, but the number of columns or branches limits the complexity of the model.
  • the purpose of the embodiments of the present application is to propose a crowd counting method, apparatus, computer device and storage medium, so as to solve the problem in the related art of crowd counting accuracy being limited by multi-scale variation.
  • the embodiment of the present application provides a crowd counting method, which adopts the following technical solutions:
  • the general model for crowd counting includes a pyramid pooling module and a convolution module;
  • the third multi-scale feature map is decoded and converted into a crowd density map.
  • the embodiment of the present application also provides a crowd counting device, which adopts the following technical solutions:
  • a building module for building a general model for crowd counting including a pyramid pooling module and a convolution module;
  • the pooling module is used to input the original image features into the pyramid pooling module, and perform pooling of different scales according to the preset output feature size of each pyramid layer to obtain a first multi-scale feature map;
  • a convolution module configured to input the first multi-scale feature map into the convolution module to perform a convolution operation to output a second multi-scale feature map
  • a splicing module for splicing and merging the second multi-scale feature map and the original image feature to obtain a third multi-scale feature map
  • the decoding module is used for converting the third multi-scale feature map into a crowd density map after decoding.
  • the embodiment of the present application also provides a computer device, which adopts the following technical solutions:
  • the computer device includes a memory and a processor, wherein computer-readable instructions are stored in the memory, and when the processor executes the computer-readable instructions, the steps of the crowd counting method as described below are implemented:
  • the general model for crowd counting includes a pyramid pooling module and a convolution module, wherein the pyramid pooling module includes a multi-layer pyramid layer;
  • the third multi-scale feature map is decoded and converted into a crowd density map.
  • the embodiments of the present application also provide a computer-readable storage medium, which adopts the following technical solutions:
  • the computer-readable storage medium stores computer-readable instructions, and when the computer-readable instructions are executed by the processor, implements the steps of the crowd counting method as follows:
  • the general model for crowd counting includes a pyramid pooling module and a convolution module, wherein the pyramid pooling module includes a multi-layer pyramid layer;
  • the third multi-scale feature map is decoded and converted into a crowd density map.
  • the general model for crowd counting includes a pyramid pooling module and a convolution module.
  • the original image features are input into the pyramid pooling module, and pooling at different scales is performed according to the preset output feature size of each pyramid layer to obtain a first multi-scale feature map; the first multi-scale feature map is then input into the convolution module for a convolution operation to output a second multi-scale feature map; the second multi-scale feature map is spliced with the original image features to obtain a third multi-scale feature map; and finally the third multi-scale feature map is decoded and converted into a crowd density map.
  • this application uses the constructed general crowd counting model to perform pyramid pooling on the original image features followed by an adaptive convolution operation to obtain multi-scale feature information of the crowd, splices the multi-scale features with the original image features to further obtain the final multi-scale feature map, and decodes the final multi-scale feature map to output a crowd density map, which can compensate for missing and inaccurate information in dense crowd regions, thereby improving the accuracy of crowd counting at multiple scales.
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • FIG. 2 is a flowchart of one embodiment of a crowd counting method according to the present application;
  • FIG. 3 is a flowchart of a specific implementation of step S202 in FIG. 2;
  • FIG. 4 is a flowchart of a specific implementation of step S203 in FIG. 2;
  • FIG. 5 is a framework diagram of a crowd counting method according to the present application;
  • FIG. 6 is a schematic structural diagram of an embodiment of a crowd counting device according to the present application;
  • FIG. 7 is a schematic structural diagram of an embodiment of a computer device according to the present application.
  • to solve the problem in the related art of crowd counting accuracy being limited by multi-scale variation, the present application provides a crowd counting method relating to computer vision in artificial intelligence, which can be applied to the system architecture 100 shown in FIG. 1.
  • the system architecture 100 may include terminal devices 101, 102, 103, a network 104 and a server 105.
  • the network 104 is a medium used to provide a communication link between the terminal devices 101 , 102 , 103 and the server 105 .
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like.
  • Various communication client applications may be installed on the terminal devices 101 , 102 and 103 , such as web browser applications, shopping applications, search applications, instant messaging tools, email clients, social platform software, and the like.
  • the terminal devices 101, 102, and 103 can be various electronic devices that have a display screen and support web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
  • the server 105 may be a server that provides various services, such as a background server that provides support for the pages displayed on the terminal devices 101 , 102 , and 103 .
  • the crowd counting method provided by the embodiments of the present application is generally performed by a server or a terminal device, and accordingly, a crowd counting apparatus is generally set in the server or terminal device.
  • it should be understood that the numbers of terminal devices, networks and servers in FIG. 1 are merely illustrative; there can be any number of terminal devices, networks and servers according to implementation needs.
  • continuing to refer to FIG. 2, which shows a flowchart of one embodiment of the crowd counting method according to the present application, the crowd counting method includes the following steps:
  • step S201 a general model of crowd counting is constructed, and the general model of crowd counting includes a pyramid pooling module and a convolution module.
  • the constructed general model of crowd counting can be embedded in the current mainstream network, and the general model of crowd counting includes a pyramid pooling module and a convolution module.
  • the pyramid pooling module is a pyramid structure, including multiple pyramid layers.
  • Pyramid pooling refers to performing pooling operations of different sizes on the input feature maps to obtain feature information at different resolutions, effectively improving the network's recognition accuracy for features. Pooling is performed according to the preset output feature map size of each pyramid layer. Specifically, the feature map is divided by windows of different scales, each scale corresponding to one pyramid layer; the size of each feature map block after division is called window_size, and max pooling is then performed with this window_size.
  • for example, if the input feature map size of the pyramid pooling layer is a×b and the output feature map size of the pyramid pooling layer is n×n, then a pooling window of size window_size = (a/n, b/n) is used for the pooling operation, rounding up when a/n or b/n is not an integer, as in the sketch below.
  • pyramid pooling is to generate a fixed-size output for an input of any image size.
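As a minimal illustration of the window-size rule above (assuming the ceiling is applied independently to each dimension), the following Python sketch computes the pooling window for a given feature-map size and target output size; the helper name is hypothetical:

```python
import math

def pyramid_pool_window(a, b, n):
    """Hypothetical helper: pooling window for an a x b feature map pooled
    down to an n x n output, rounding up when a/n or b/n is not an integer."""
    return math.ceil(a / n), math.ceil(b / n)

# e.g. a 30 x 40 feature map pooled to a 4 x 4 output uses an 8 x 10 window
print(pyramid_pool_window(30, 40, 4))  # (8, 10)
```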
  • in this embodiment, the pyramid pooling module includes multiple pyramid layers, and each pyramid layer includes a pooling layer, a first convolution layer, and an upsampling layer.
  • it should be understood that each pyramid layer corresponds to a feature map of one scale: the input image features are pooled according to that scale to output a feature map of the corresponding size, and the scales can be set as needed; the convolution module is then used to convolve the multi-scale feature map produced by pyramid pooling.
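The following PyTorch sketch shows one possible reading of this pyramid pooling module; the level sizes (1, 2, 4), the per-level channel width, and the use of adaptive max pooling and bilinear upsampling are assumptions for illustration, not the patented implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPoolingModule(nn.Module):
    """Each pyramid layer: pooling to a preset output size, a 1x1 first
    convolution, then upsampling back to the input resolution; the levels
    are concatenated along the channel dimension (cf. steps S301-S304)."""
    def __init__(self, in_channels, channels_per_level=64, levels=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveMaxPool2d(n),                                    # pooling layer
                nn.Conv2d(in_channels, channels_per_level, kernel_size=1),  # first convolution layer
            )
            for n in levels
        ])

    def forward(self, x):
        h, w = x.shape[2:]
        level_maps = [
            F.interpolate(branch(x), size=(h, w), mode="bilinear", align_corners=False)  # upsampling layer
            for branch in self.branches
        ]
        return torch.cat(level_maps, dim=1)  # first multi-scale feature map

# original image features of assumed shape (N, 256, H, W)
ppm = PyramidPoolingModule(256)
print(ppm(torch.randn(1, 256, 32, 32)).shape)  # torch.Size([1, 192, 32, 32])
```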
  • Step S202 inputting multiple original image features into the pyramid pooling module, and performing pooling at different scales according to the preset output feature size of each pyramid layer to obtain a first multi-scale feature map.
  • the original image features are extracted by a feature extraction model, and input into the pyramid pooling module.
  • the feature extraction model may be a neural network model (backbone).
  • specifically, the original image is input into the neural network model for image feature extraction, and the extracted original image features are then input into the pyramid pooling module for pooling.
  • the output feature size of each pyramid layer can be preset, and the extracted original image features are pooled at different scales according to the preset output feature size.
  • Neural network models include VGGNet network, GoogleNet network, DenseNet network, etc.
  • the original picture may be obtained by collecting video frames in the surveillance video, or may be obtained by constructing a picture database.
  • the specified features can be extracted from the original picture.
  • the specified features may include facial features, behavioral features, skin color features, appearance features, and the like.
  • the above-mentioned original picture features may also be stored in a node of a blockchain.
  • the blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • A blockchain is essentially a decentralized database: a chain of data blocks generated and linked using cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • Step S203 the first multi-scale feature map is input into the convolution module to perform a convolution operation to output the second multi-scale feature map.
  • Convolution operates on an image with a filter; the filter is the convolution kernel, and each convolution calculation reduces the size of the image.
  • the size of the image matrix obtained after convolution follows this rule: assuming the original image is an n×n matrix and the kernel is f×f, the matrix obtained after the convolution operation is (n-f+1)×(n-f+1).
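A quick check of this size rule (assuming stride 1 and no padding), using PyTorch:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 1, kernel_size=3)   # f = 3, stride 1, no padding
out = conv(torch.randn(1, 1, 8, 8))     # n = 8
print(out.shape)                        # torch.Size([1, 1, 6, 6]), i.e. (n - f + 1)
```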
  • the convolution module is an adaptive convolution layer structure
  • the convolution module includes a second convolution layer, a third convolution layer and an output layer: the second convolution layer is used to reduce the number of channels, and the third convolution layer is used to preserve the dimensionality of the multi-scale features.
  • In step S204, the second multi-scale feature map is spliced and fused with the original image features to obtain a third multi-scale feature map. In the related art, in places with high crowd density, the pixel blocks of heads close to the camera are large with strong signals, while the pixel blocks of heads far from the camera are small with weak signals; if the pixel blocks far from the camera act synergistically with adjacent pixel blocks, their signal transmission is strengthened, which can improve the accuracy of crowd counting at multiple scales.
  • in this embodiment, after the original image features and the second multi-scale feature map are concatenated by channel, the subsequent convolution operations of the network can fuse information from different channels; that is, the network extracts features from the original image and the second multi-scale feature map at the same time, so that co-occurrence relationships can be better learned, the synergy of adjacent pixels is realized, and crowd counting accuracy is improved.
  • in this embodiment, the extracted original image features and the second multi-scale feature map are spliced and fused along the channel dimension, which can be implemented with a concatenation (concat) operation: the original image features and the second multi-scale feature map are concatenated along the channel dimension to obtain a third concatenated feature, and a 1*1 convolution kernel is then used for fusion.
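A minimal sketch of this splice-and-fuse step, assuming example channel counts (256 for the original image features, 192 for the second multi-scale feature map); the function and variable names are illustrative:

```python
import torch
import torch.nn as nn

def splice_and_fuse(original_feat, second_ms_feat, fuse_conv):
    """Concatenate along the channel dimension, then fuse with a 1x1 convolution
    to obtain the third multi-scale feature map."""
    third_concat = torch.cat([original_feat, second_ms_feat], dim=1)  # third concatenated feature
    return fuse_conv(third_concat)

fuse_conv = nn.Conv2d(256 + 192, 256, kernel_size=1)  # output width is an assumption
out = splice_and_fuse(torch.randn(1, 256, 32, 32), torch.randn(1, 192, 32, 32), fuse_conv)
print(out.shape)  # torch.Size([1, 256, 32, 32])
```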
  • Step S205 convert the third multi-scale feature map into a crowd density map after decoding.
  • a multi-layer convolution layer is used to decode the third multi-scale feature map, and the spatial size of the decoded third multi-scale feature map is restored to the original picture size to obtain a crowd density map.
  • specifically, the decoder includes multiple convolution layers; for example, five convolution layers whose kernel sizes decrease layer by layer, using kernels of 11*11, 9*9, 7*7, 5*5 and 1*1. After the five convolution layers, the feature dimension is reduced and the feature-dimension information is integrated into the spatial dimension, realizing the decoding of the image; bilinear interpolation is then used to upsample the decoded third multi-scale feature map to the same size as the original image.
  • by restoring the third multi-scale feature map to the original size, the crowd density map is obtained; this improves the quality of the crowd density map and reduces the loss of detail caused by the downsampling of pooling and convolution operations in the general crowd counting model.
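The decoder could be sketched as follows; the channel widths and ReLU activations are assumptions, while the kernel sizes (11, 9, 7, 5, 1) and the bilinear upsampling to the original image size follow the description above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    """Five convolution layers with layer-by-layer shrinking kernels, followed by
    bilinear upsampling of the decoded map back to the original picture size."""
    def __init__(self, in_channels):
        super().__init__()
        chans = [in_channels, 128, 64, 32, 16, 1]   # assumed channel schedule
        kernels = [11, 9, 7, 5, 1]
        self.convs = nn.ModuleList([
            nn.Conv2d(chans[i], chans[i + 1], k, padding=k // 2)
            for i, k in enumerate(kernels)
        ])

    def forward(self, x, original_size):
        for i, conv in enumerate(self.convs):
            x = conv(x)
            if i < len(self.convs) - 1:
                x = F.relu(x)
        # restore the spatial size to the original picture size
        return F.interpolate(x, size=original_size, mode="bilinear", align_corners=False)

density_map = Decoder(256)(torch.randn(1, 256, 32, 32), original_size=(256, 256))
print(density_map.shape)  # torch.Size([1, 1, 256, 256])
```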
  • Bilinear interpolation is a good image scaling algorithm: it makes full use of the four real pixel values around the virtual point in the source image to jointly determine one pixel value in the target image, so the scaling result is much better than simple nearest-neighbor interpolation.
  • the bilinear interpolation algorithm is described as follows: for a destination pixel, let the floating-point coordinates obtained by the inverse transform be (i+u, j+v), where i and j are the integer parts and u and v are the fractional parts (floating-point values in the interval [0, 1)); the value of this pixel, f(i+u, j+v), is determined by the values of the four surrounding pixels at (i, j), (i+1, j), (i, j+1) and (i+1, j+1) in the original image, according to the formula:
  • f(i+u, j+v) = (1-u)(1-v)f(i, j) + (1-u)v f(i, j+1) + u(1-v) f(i+1, j) + uv f(i+1, j+1)
  • where f(i, j) denotes the pixel value at (i, j) in the source image, and so on. With this method, the feature map can be restored to full spatial resolution, yielding a crowd density map of the same size as the original image.
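A direct NumPy implementation of this formula (clamping at the image border is an added assumption; the function name is illustrative):

```python
import numpy as np

def bilinear_sample(src, x, y):
    """Evaluate f(i+u, j+v) with the bilinear formula above; src is a 2-D array
    and (x, y) are floating-point (row, column) coordinates."""
    i, j = int(np.floor(x)), int(np.floor(y))
    u, v = x - i, y - j
    i1, j1 = min(i + 1, src.shape[0] - 1), min(j + 1, src.shape[1] - 1)  # clamp at the border
    return ((1 - u) * (1 - v) * src[i, j] + (1 - u) * v * src[i, j1]
            + u * (1 - v) * src[i1, j] + u * v * src[i1, j1])

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_sample(img, 1.5, 2.5))  # 8.5, the mean of the four surrounding pixels
```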
  • This application uses the constructed general crowd counting model to perform pyramid pooling on the original image features followed by an adaptive convolution operation to obtain multi-scale feature information of the crowd, splices and fuses the multi-scale features with the original image features to further obtain the final multi-scale feature map, and decodes the final multi-scale feature map to output the crowd density map, which can compensate for missing and inaccurate information in dense crowd regions, thereby improving the accuracy of crowd counting at multiple scales.
  • step 202 specifically includes the following steps:
  • step S301 the original image features are respectively input into the pooling layer of each pyramid layer to perform a pooling operation, and a corresponding first feature map is obtained on each pyramid layer.
  • in this embodiment, the pyramid pooling module includes multiple pyramid layers, each pyramid layer includes a pooling layer, a convolution layer, and an upsampling layer, and each pyramid layer corresponds to a feature map of one scale; that is, feature maps of different scales can be extracted through the pyramid pooling module.
  • it should be understood that the number of levels of the pyramid pooling module is preset; after the levels are set, the pooling kernel size of the pooling layer in each pyramid level is set. For example, the pyramid has three layers, each layer corresponds to one scale, and the pooling kernel sizes are 4x4, 2x2 and 1x1, respectively.
  • the original image features are input into the pooling layer of each pyramid layer for pooling operation, and the first feature maps corresponding to different layer features will be obtained.
  • Step S302 performing a first convolution operation on the first feature map through the first convolution layer, and outputting the corresponding first convolution feature map.
  • a convolution layer with a convolution kernel size of 1 ⁇ 1 and a stride of 1 is used in each pyramid layer to convolve the pooled first feature map.
  • Step S303, inputting the first convolution feature map into the upsampling layer to perform an upsampling operation, and outputting a first-scale feature map of a preset size.
  • the multi-scale feature sizes obtained by different scale levels are different. Therefore, an up-sampling operation is performed through an up-sampling layer, and each layer outputs a first-scale feature map of a given preset size.
  • Step S304 splicing the first-scale feature maps of each layer in the channel dimension to obtain a first multi-scale feature map.
  • Each layer in the pyramid pooling module extracts features of one scale, and finally splices these features, so as to achieve the purpose of being compatible with features of multiple scales.
  • for example, the pyramid pooling module is set to three pyramid layers: the base of the pyramid uses a 1x1 kernel, the middle of the pyramid a 2x2 kernel, and the top of the pyramid a 4x4 kernel. Pooling is performed on the conv5 layer, which has 256 filters; after the pooling operations, the resulting feature has (16+4+1)x256 dimensions, i.e., the dimension corresponding to conv5 is 256.
  • the present application combines the features extracted at different scales by splicing the first-scale feature maps of the preset size output by each pyramid layer according to the channel dimension, which ensures the accuracy of subsequent crowd density estimation, and has high robustness.
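The (16+4+1)x256 figure quoted in the example can be reproduced with adaptive pooling over 4x4, 2x2 and 1x1 bins on a 256-channel feature map; the spatial size of conv5 used below is an arbitrary assumption:

```python
import torch
import torch.nn.functional as F

conv5 = torch.randn(1, 256, 13, 13)   # assumed conv5 output with 256 filters
bins = [4, 2, 1]
pooled = [F.adaptive_max_pool2d(conv5, n).flatten(2) for n in bins]
print([tuple(t.shape) for t in pooled])   # [(1, 256, 16), (1, 256, 4), (1, 256, 1)]
print(sum(t.shape[-1] for t in pooled))   # 21 bins per channel, i.e. (16 + 4 + 1) x 256 in total
```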
  • step S203 specifically includes the following steps:
  • Step S401 inputting the first multi-scale feature map into the second convolution layer to perform a second convolution operation to obtain a second convolution feature map
  • Step S402 adjusting the number of output channels of the second convolution layer and outputting the second convolution feature map.
  • Step S403 input the second convolution feature map to the third convolution layer to perform a third convolution operation and output the second multi-scale feature map.
  • the convolution module includes a second convolution layer, a third convolution layer, and an output layer.
  • in this embodiment, the purpose of the second convolution operation is to reduce the number of channels and the amount of computation; specifically, a 1*1*c convolution layer is used for the convolution operation, where c is the number of channels and can be set as needed. The resulting second convolution feature map is output, and the output channels are adjusted through adaptive*cout to increase the number of channels and improve the subsequent feature representation capability.
  • it should be noted that the output channels of the convolution module can be flexibly adjusted, according to the actual situation, based on the number of channels output by the pyramid pooling module; they can be set manually or according to preset rules. For example, the number of output channels of the convolution module may be set equal to the number of output channels of the pyramid pooling module, or to twice that number, which is not limited here.
  • the third convolution operation can use a convolutional layer with a kernel size of 1*1 to convolve the pooled feature map.
  • the biggest advantage of using a 1*1 convolution kernel for the convolution operation is that it does not change the dimensionality of the original feature values, thereby ensuring that no redundant information is added and no original information is lost during convolution, while strengthening the positional information of each pixel.
  • after the third convolution operation, the result is output through the output layer conv m*n*cin*c_adaptive to obtain the second multi-scale feature map.
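One hedged reading of this adaptive convolution module, with channel counts chosen only for illustration (the notation conv m*n*cin*c_adaptive is kept abstract here, and the class name is hypothetical):

```python
import torch
import torch.nn as nn

class AdaptiveConvModule(nn.Module):
    """Second 1x1 convolution reduces the channel count; third 1x1 convolution
    outputs c_adaptive channels (e.g. equal to, or twice, the number of channels
    produced by the pyramid pooling module)."""
    def __init__(self, in_channels, reduced_channels, c_adaptive):
        super().__init__()
        self.second_conv = nn.Conv2d(in_channels, reduced_channels, kernel_size=1)  # reduce channels
        self.third_conv = nn.Conv2d(reduced_channels, c_adaptive, kernel_size=1)    # keep dimensionality

    def forward(self, x):
        return self.third_conv(self.second_conv(x))  # second multi-scale feature map

m = AdaptiveConvModule(in_channels=192, reduced_channels=64, c_adaptive=192)
print(m(torch.randn(1, 192, 32, 32)).shape)  # torch.Size([1, 192, 32, 32])
```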
  • it should be noted that the first multi-scale feature map is obtained by concatenating the features extracted at different scales along the channel dimension: the number of channels of the first multi-scale feature map increases and the features characterizing the image itself increase, but the information under each individual feature does not increase. The second multi-scale feature map is the first multi-scale feature map after the convolution operation, which fuses the features so that the information under each feature is increased.
  • In summary, FIG. 5 is a framework diagram of the crowd counting method provided in this embodiment. As shown in the figure, the original image passes through the feature extraction model to extract the original image features, which are input into the pyramid pooling module; the pooling layers at the different levels of the pyramid pooling module perform pooling operations to obtain the first feature map corresponding to each layer; the first feature map passes through the first convolution layer for the first convolution operation to output the corresponding first convolution feature map; the first convolution feature map is input into the upsampling layer for an upsampling operation to output a first-scale feature map of preset size; the first-scale feature maps of each layer are concatenated in the channel dimension to obtain the first multi-scale feature map; the first multi-scale feature map is input into the second convolution layer for the second convolution operation to obtain the second convolution feature map; the number of output channels of the second convolution layer is adjusted and the second convolution feature map is output; the second convolution feature map is input into the third convolution layer for the third convolution operation to output the second multi-scale feature map; the second multi-scale feature map is spliced and fused with the original image features to obtain the third multi-scale feature map; and the third multi-scale feature map is decoded and converted into a crowd density map, which can compensate for missing and inaccurate information in dense crowd regions, thereby improving the accuracy of crowd counting at multiple scales.
  • in some optional implementations, after step S205, the above electronic device may further integrate the value of each pixel in the crowd density map to obtain a crowd density estimate, and add up the values of all pixels to obtain the total head count.
  • it should be noted that the value of each pixel in the crowd density map is the crowd density at that pixel, so the integration operation is performed directly on the density map; for a digital image, adding the values of all pixels gives the final total count, as in the sketch below.
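In code, the count is simply the sum of the predicted density map; the array below is a stand-in for a real prediction:

```python
import numpy as np

density_map = np.random.rand(256, 256) * 1e-3   # placeholder for a predicted density map
total_count = float(density_map.sum())          # integral of the density = estimated head count
print(f"estimated number of people: {total_count:.1f}")
```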
  • the present application may be used in numerous general-purpose or special-purpose computer system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices, and the like.
  • the application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer storage media including storage devices.
  • the present application can be applied to the monitoring field of smart security, thereby promoting the construction of smart cities.
  • the aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM) or the like.
  • with further reference to FIG. 6, as an implementation of the method shown in FIG. 2 above, the present application provides an embodiment of a crowd counting device, which corresponds to the method embodiment shown in FIG. 2; the device may specifically be applied to various electronic devices.
  • as shown in FIG. 6, the crowd counting device described in this embodiment includes: a building module 601, a pooling module 602, a convolution module 603, a splicing module 604 and a decoding module 605, wherein:
  • the building module 601 is used to build a general model of crowd counting, and the general model of crowd counting includes a pyramid pooling module and a convolution module;
  • the pooling module 602 is configured to input a plurality of original picture features into the pyramid pooling module, and perform pooling of different scales according to the preset output feature size of each pyramid layer to obtain a first multi-scale feature map;
  • the convolution module 603 is configured to input the first multi-scale feature map into the convolution module to perform a convolution operation and output the second multi-scale feature map;
  • the splicing module 604 is used for splicing and merging the second multi-scale feature map and the original picture feature to obtain a third multi-scale feature map;
  • the decoding module 605 is configured to convert the third multi-scale feature map into a crowd density map after decoding.
  • the above-mentioned original picture features may also be stored in a node of a blockchain.
  • in this embodiment, the convolution module 603 is further configured to perform a second convolution operation on the first multi-scale feature map to obtain a second convolution feature map, and to perform a third convolution operation on the second convolution feature map and output the second multi-scale feature map.
  • in a specific implementation of this embodiment, the decoding module 605 includes a convolution unit and a generation unit, where the convolution unit is used to decode the third multi-scale feature map using multiple convolution layers, and the generation unit is used to restore the spatial size of the decoded third multi-scale feature map to the original picture size to obtain the crowd density map.
  • the above-mentioned crowd counting device uses the constructed general crowd counting model to perform pyramid pooling on the original image features followed by an adaptive convolution operation to obtain multi-scale feature information of the crowd, splices and fuses the multi-scale features with the original image features to further obtain the final multi-scale feature map, and decodes the final multi-scale feature map to output the crowd density map, which can compensate for missing and inaccurate information in dense crowd regions, thereby improving the accuracy of crowd counting at multiple scales.
  • the generating unit is further configured to use a bilinear interpolation method to upsample the decoded third multi-scale feature map to a size equal to that of the original picture.
  • the pooling module 602 includes a pooling unit, a convolution unit, an upsampling unit, and a splicing and fusion unit;
  • the pooling unit is used to input the original picture feature into the pooling layer of each pyramid layer for pooling operation, and obtain the corresponding first feature map on the pyramid layer of each layer;
  • the convolution unit is used to perform a first convolution operation on the first feature map through the first convolution layer, and output the corresponding first convolution feature map;
  • the upsampling unit is used to input the first convolution feature map to the upsampling layer to perform an upsampling operation, and output a first scale feature map of a preset size;
  • the splicing and fusion unit is used for splicing the first scale feature map of each layer in the channel dimension to obtain a first multi-scale feature map.
  • the above-mentioned crowd counting device combines the features extracted at different scales by concatenating, along the channel dimension, the preset-size first-scale feature maps output by each pyramid layer, which ensures the accuracy of subsequent crowd density estimation and has the advantages of high robustness and good performance.
  • in some optional implementations of this embodiment, the crowd counting device further includes a counting module configured to integrate the value of each pixel in the crowd density map to obtain a crowd density estimate, and to add up the values of all pixels to obtain the total head count.
  • the value of each pixel in the crowd density map is the crowd density at that pixel, so the density map is integrated directly; the total number of people is obtained by adding the values of all pixels.
  • FIG. 7 is a block diagram of the basic structure of a computer device according to this embodiment.
  • the computer device 7 includes a memory 71 , a processor 72 , and a network interface 73 that communicate with each other through a system bus. It should be pointed out that only the computer device 7 with components 71-73 is shown in the figure, but it should be understood that it is not required to implement all of the shown components, and more or less components may be implemented instead.
  • the computer device here is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes but is not limited to microprocessors, application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), digital signal processors (DSP), embedded devices, and the like.
  • the computer equipment may be a desktop computer, a notebook computer, a palmtop computer, a cloud server and other computing equipment.
  • the computer device can perform human-computer interaction with the user through a keyboard, a mouse, a remote control, a touch pad or a voice control device.
  • the memory 71 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static Random Access Memory (SRAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Programmable Read Only Memory (PROM), Magnetic Memory, Magnetic Disk, Optical Disk, etc.
  • the memory 71 may be an internal storage unit of the computer device 7 , such as a hard disk or a memory of the computer device 7 .
  • the memory 71 may also be an external storage device of the computer device 7, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, flash memory card (Flash Card), etc.
  • the memory 71 may also include both the internal storage unit of the computer device 7 and its external storage device.
  • the memory 71 is generally used to store the operating system and various application software installed on the computer device 7 , such as computer-readable instructions for a crowd counting method.
  • the memory 71 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 72 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips. This processor 72 is typically used to control the overall operation of the computer device 7 . In this embodiment, the processor 72 is configured to execute computer-readable instructions stored in the memory 71 or process data, such as computer-readable instructions for executing the crowd counting method.
  • the network interface 73 may include a wireless network interface or a wired network interface, and the network interface 73 is generally used to establish a communication connection between the computer device 7 and other electronic devices.
  • in this embodiment, when the processor executes the computer-readable instructions stored in the memory, the steps of the crowd counting method in the above embodiment are implemented: the constructed general crowd counting model performs pyramid pooling on the original image features followed by an adaptive convolution operation to obtain multi-scale feature information of the crowd; the multi-scale features are spliced and fused with the original image features to further obtain the final multi-scale feature map; and the final multi-scale feature map is decoded to output the crowd density map, which can compensate for missing and inaccurate information in dense crowd regions, thereby improving the accuracy of crowd counting at multiple scales.
  • the present application also provides another implementation manner, which is to provide a computer-readable storage medium, where the computer-readable storage medium may be non-volatile or volatile.
  • the computer-readable storage medium stores computer-readable instructions executable by at least one processor to cause the at least one processor to perform the steps of the crowd counting method described above: the constructed general crowd counting model performs pyramid pooling on the original image features followed by an adaptive convolution operation to obtain multi-scale feature information of the crowd; the multi-scale features are spliced and fused with the original image features to further obtain the final multi-scale feature map; and the final multi-scale feature map is decoded to output the crowd density map, which can compensate for missing and inaccurate information in dense crowd regions, thereby improving the accuracy of crowd counting at multiple scales.
  • through the description of the above embodiments, the method of the above embodiment can be implemented by means of software plus the necessary general hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application can be embodied in the form of a software product in essence or in a part that contributes to the prior art, and the computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk, CD-ROM), including several instructions to make a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) execute the methods described in the various embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A crowd counting method and apparatus, a computer device, and a storage medium. The method comprises: constructing a general crowd counting model, the general crowd counting model comprising a pyramid pooling module and a convolution module (S201); inputting multiple original image features into the pyramid pooling module, and performing pooling at different scales according to the preset output feature size of each pyramid layer to obtain a first multi-scale feature map (S202); inputting the first multi-scale feature map into the convolution module for a convolution operation to output a second multi-scale feature map (S203); splicing the second multi-scale feature map with the original image features to obtain a third multi-scale feature map (S204); and decoding the third multi-scale feature map and converting it into a crowd density map (S205). In addition, blockchain technology can be used to store the original image features in a blockchain. The method can improve the accuracy of crowd counting at multiple scales.

Description

一种人群计数方法、装置、计算机设备及存储介质
本申请要求于2021年02月19日提交中国专利局、申请号为202110191656.6,发明名称为“一种人群计数方法、装置、计算机设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能技术领域,尤其涉及一种人群计数方法、装置、计算机设备及存储介质。
背景技术
随着世界人口的指数增长和由此产生的城市化,导致近年来人群聚集更加频繁,在这种情况下,为了更好的管理人口、保障人口的安全,必须在公共场所监控人流密度。尤其新型冠状病毒爆发以来,准确监控车站、职场及商场等人流聚集区域的人流密度,对疫情防控、企业复工起到了重要的作用。
密集人群计数是指计算拥挤场景中的人数,是将一个输入的人流图像映射到相应的密度图上,它对于在拥挤的场景中建立更高层次的认知能力至关重要。发明人意识到,当前人群计数问题主要通过回归人群密度图,然后求和获得图像中人群的数来解决。然而由于存在人头尺度变化大、人头遮挡严重以及背景噪声等问题,准确人群计数仍然存在一些困难。针对多尺度问题,目前多采用多阵列或者多分支的网络结构来获取不同的感受野,从而感知人群大小的变化,但是列或者分支数会限制模型的复杂度。
发明内容
本申请实施例的目的在于提出一种人群计数方法、装置、计算机设备及存储介质,以解决相关技术中由于多尺度导致的人群计数准确率受限的问题。
为了解决上述技术问题,本申请实施例提供一种人群计数方法,采用了如下所述的技术方案:
构建人群计数通用模型,所述人群计数通用模型包括金字塔池化模块与卷积模块;
将多个原始图片特征输入到所述金字塔池化模块中,根据每层金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图;
将所述第一多尺度特征图输入卷积模块进行卷积操作输出第二多尺度特征图;
将所述第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图;及
将所述第三多尺度特征图进行解码后转化为人群密度图。
为了解决上述技术问题,本申请实施例还提供一种人群计数装置,采用了如下所述的技术方案:
构建模块,用于构建人群计数通用模型,所述人群计数通用模型包括金字塔池化模块与卷积模块;
池化模块,用于将原始图片特征输入到所述金字塔池化模块中,根据每个金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图;
卷积模块,用于将所述第一多尺度特征图输入卷积模块进行卷积操作输出第二多尺度特征图;
拼接模块,用于将所述第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图;及
解码模块,用于将所述第三多尺度特征图进行解码后转化为人群密度图。
为了解决上述技术问题,本申请实施例还提供一种计算机设备,采用了如下所述的技术方案:
该计算机设备包括存储器和处理器,所述存储器中存储有计算机可读指令,所述处理 器执行所述计算机可读指令时实现如下所述的人群计数方法的步骤:
构建人群计数通用模型,所述人群计数通用模型包括金字塔池化模块与卷积模块,其中所述金字塔池化模块包括多层金字塔层;
将多个原始图片特征输入到所述金字塔池化模块中,根据每层金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图;
将所述第一多尺度特征图输入卷积层模块进行卷积操作输出第二多尺度特征图;
将所述第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图;及
将所述第三多尺度特征图进行解码后转化为人群密度图。
为了解决上述技术问题,本申请实施例还提供一种计算机可读存储介质,采用了如下所述的技术方案:
所述计算机可读存储介质上存储有计算机可读指令,所述计算机可读指令被处理器执行时实现如下所述的人群计数方法的步骤:
构建人群计数通用模型,所述人群计数通用模型包括金字塔池化模块与卷积模块,其中所述金字塔池化模块包括多层金字塔层;
将多个原始图片特征输入到所述金字塔池化模块中,根据每层金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图;
将所述第一多尺度特征图输入卷积层模块进行卷积操作输出第二多尺度特征图;
将所述第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图;及
将所述第三多尺度特征图进行解码后转化为人群密度图。
与现有技术相比,本申请实施例主要有以下有益效果:
本申请通过构建人群计数通用模型,人群计数通用模型包括金字塔池化模块与卷积模块,将原始图片特征输入到金字塔池化模块中,根据每个金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图,然后将第一多尺度特征图输入卷积模块进行卷积操作输出第二多尺度特征图,再将第二多尺度特征图与原始图片特征进行拼接得到第三多尺度特征图,最后将第三多尺度特征图进行解码后转化为人群密度图;本申请通过构建的人群计数通用模型将原始图片特征进行金字塔池化后进行自适应卷积操作,获得人群的多尺度特征信息,并将多尺度特征与原始图片特征进行拼接进一步获得最终的多尺度特征图,将最终的多尺度特征图进行解码后输出人群密度图,可以修正人群密度拥挤中的信息确实和不准确的情况,从而提高多尺度下人群计数的准确性。
附图说明
为了更清楚地说明本申请中的方案,下面将对本申请实施例描述中所需要使用的附图作一个简单介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请可以应用于其中的示例性系统架构图;
图2根据本申请的人群计数方法的一个实施例的流程图;
图3是图2中步骤S202的一种具体实施方式的流程图;
图4是图2中步骤S203的一种具体实施方式的流程图;
图5为根据本申请的人群计数方法的一种框架图;
图6是根据本申请的人群计数装置的一个实施例的结构示意图;
图7是根据本申请的计算机设备的一个实施例的结构示意图。
具体实施方式
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同;本文中在申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本申请;本申请的说明书和权利要求书及上述附图说明中的术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。本申请的说明书 和权利要求书或上述附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
为了使本技术领域的人员更好地理解本申请方案,下面将结合附图,对本申请实施例中的技术方案进行清楚、完整地描述。
为了解决相关技术中由于多尺度导致的人群计数准确率受限的问题,本申请提供了一种人群计数方法,涉及人工智能计算机视觉,可以应用于如图1所示的系统架构100中,系统架构100可以包括终端设备101、102、103,网络104和服务器105。网络104用以在终端设备101、102、103和服务器105之间提供通信链路的介质。网络104可以包括各种连接类型,例如有线、无线通信链路或者光纤电缆等等。
用户可以使用终端设备101、102、103通过网络104与服务器105交互,以接收或发送消息等。终端设备101、102、103上可以安装有各种通讯客户端应用,例如网页浏览器应用、购物类应用、搜索类应用、即时通信工具、邮箱客户端、社交平台软件等。
终端设备101、102、103可以是具有显示屏并且支持网页浏览的各种电子设备,包括但不限于智能手机、平板电脑、电子书阅读器、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、膝上型便携计算机和台式计算机等等。
服务器105可以是提供各种服务的服务器,例如对终端设备101、102、103上显示的页面提供支持的后台服务器。
需要说明的是,本申请实施例所提供的人群计数方法一般由服务器或终端设备执行,相应地,人群计数装置一般设置于服务器或终端设备中。
应该理解,图1中的终端设备、网络和服务器的数目仅仅是示意性的。根据实现需要,可以具有任意数目的终端设备、网络和服务器。
继续参考图2,示出了根据本申请的人群计数的方法的一个实施例的流程图。所述的人群计数方法,包括以下步骤:
步骤S201,构建人群计数通用模型,人群计数通用模型包括金字塔池化模块与卷积模块。
在本实施例中,构建的人群计数通用模型可以嵌入目前的主流网络中,该人群计数通用模型包括金字塔池化模块与卷积模块。金字塔池化模块为金字塔结构,包括多层金字塔层。
金字塔池化是指对输入的特征图进行不同尺寸的池化操作,进一步得到不同分辨率的特征信息,有效提高网络对特征的识别精度。根据预设每个金字塔层输出特征图尺寸大小进行池化,具体的,使用不同刻度的窗口对特征图像进行划分,每一种刻度代表一个金字塔层,划分之后每个特征图像块的大小称为window_size,然后使用window_size进行最大池化操作,举例而言,金字塔池化层输入的特征图尺寸为a×b,金字塔池化层输出的特征图尺寸为n×n,则使用池化窗口大小window_size为(a/n,b/n)进行池化操作,若a/n和b/n的值为非整数时,进行向上取整。
金字塔池化的目的是对于任意图片尺寸的输入产生固定大小的输出,在本实施例中,金字塔池化模块包括多层金字塔层,每层金字塔层包括池化层、第一卷积层以及上采样层。
应当理解,每层金字塔层对应一个尺度的特征图,输入的图片特征根据尺度进行池化输出相应大小的特征图,尺度可以根据需要进行设置;卷积模块则用于将经过金字塔池化的多尺度特征图进行卷积操作。
步骤S202,将多个原始图片特征输入到金字塔池化模块中,根据每层金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图。
在本实施例中,原始图片特征经特征提取模型进行提取,将其输入至金字塔池化模块中,特征提取模型可以为神经网络模型(backbone),具体的,将原始图片输入神经网络模型进行图片特征提取,将提取出来的原始图片特征输入金字塔池化模块中进行池化。由上述可知,每层金字塔层可以预设输出特征尺寸,根据预设输出特征尺寸对提取出来的原始图片特征进行不同尺度的池化。神经网络模型包括VGGNet网络、GoogleNet网络、DenseNet网络等。
在本实施例中,原始图片可以是通过采集监控视频中视频帧来获取,也可以是通过构建图片数据库来获取。
输入原始图片后,可以对原始图片进行指定特征提取,指定特征可分为:人脸区别、行态特征、肤色特征及长相特征等,通过这些特征并基于计算机图片视觉的人群计数方法有着比较重要的意义,尤其是在一些需要对聚集人群进行监管的地方,能够通过对监控视频的分析,及时得到当前的人群数量统计以及分布情况,相关部门能够提前做好预案,尽可能的减少因为人流量过大造成的意外。
需要强调的是,为进一步保证上述原始图片特征的私密和安全性,上述原始图片特征还可以存储于一区块链的节点中。
本申请所指区块链是分布式数据存储、点对点传输、共识机制、加密算法等计算机技术的新型应用模式。区块链(Blockchain),本质上是一个去中心化的数据库,是一串使用密码学方法相关联产生的数据块,每一个数据块中包含了一批次网络交易的信息,用于验证其信息的有效性(防伪)和生成下一个区块。区块链可以包括区块链底层平台、平台产品服务层以及应用服务层等。
步骤S203,将第一多尺度特征图输入卷积模块进行卷积操作输出第二多尺度特征图。
卷积是在图像中利用过滤器进行操作,过滤器即为卷积核,每次卷积计算后,都会缩小图像的尺寸。卷积后得到的图像矩阵大小规律为:
假设原始图片是n×n的矩阵,核为f×f,则进行卷积运算后,得到的矩阵为(n-f+1)×(n-f+1)。
在本实施例中,卷积模块为自适应卷积层结构,该卷积模块包括第二卷积层、第三卷积层以及输出层,第二卷积层用于降低通道,第三卷积层用于保持多尺度特征的维度。步骤S204,将第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图。
在相关技术中,在人群密度大的地方,靠近镜头的人头像素块大、信号强,相应的远离镜头的人头像素块小、信号弱,如果远离镜头的像素块跟邻近的像素块具有协同效果,可以加强其信号传递进而可以提高多尺度下人群技术的准确性。在本实施例中,通过将原始图片特征与第二多尺度特征图按通道拼接后,后续网络进行卷积操作时,可将不同通道的信息进行融合,即网络会同时提取原始图片与第二多尺度特征图的特征,使得共生关系可以被更好地学习到,从而实现相邻像素的协同性,提高人群计数准确率。
在本实施例中,将提取出来的原始图片特征与第二多尺度特征图按通道维度进行拼接融合,可以使用concate方法实现,具体的,按照在通道维度上将原始图片特征与第二多尺度特征图进行拼接得到第三拼接特征,拼接之后使用1*1卷积核进行融合。
步骤S205,将第三多尺度特征图进行解码后转化为人群密度图。
在本实施例中,使用多层卷积层对第三多尺度特征图进行解码,并将解码后的第三多尺度特征图的空间尺寸恢复至原始图片尺寸得到人群密度图。
具体的,解码器包括多层卷积层,例如,卷积层为5层卷积层,卷积核大小逐层减小,卷积核分别使用11*11、9*9、7*7、5*5和1*1,经过5层卷积层操作,在卷积层中缩小特征维尺寸,将特征维信息整合到空间维,实现图像的解码;并采用双线性插值法对解码后的第三多尺度特征图进行上采样到与原始图片等大的尺寸;通过将第三多尺度特征图恢复 到原始尺寸得到人群密度图,可以提升人群密度图的质量,降低在人群计数通用模型中由于池化以及卷积操作而进行下采样带来的细节损失。
双线性插值法是一种比较好的图像缩放算法,它充分的利用了源图中虚拟点四周的四个真实存在的像素值来共同决定目标图中的一个像素值,因此缩放效果比简单的最邻近插值要好很多。双线性插值法的算法描述如下:
对于一个目的像素,设置坐标通过反向变换得到的浮点坐标为(i+u,j+v),(其中i、j均为浮点坐标的整数部分,u、v为浮点坐标的小数部分,是取值[0,1)区间的浮点数),则这个像素得值f(i+u,j+v)可由原图像中坐标为(i,j)、(i+1,j)、(i,j+1)、(i+1,j+1)所对应的周围四个像素的值决定,公式如下:
f(i+u,j+v)=(1-u)(1-v)f(i,j)+(1-u)vf(i,j+1)+u(1-v)f(i+1,j)+uvf(i+1,j+1)
其中f(i,j)表示源图像(i,j)处的像素值,以此类推。通过此方法,可以将特征图恢复到空间分辨率,得到与原始图片尺寸相同的人群密度图。
本申请通过构建的人群计数通用模型将原始图片特征进行金字塔池化后进行自适应卷积操作,获得人群的多尺度特征信息,并将多尺度特征与原始图片特征进行拼接融合后进一步获得最终的多尺度特征图,将最终的多尺度特征图进行解码后输出人群密度图,可以修正人群密度拥挤中的信息确实和不准确的情况,从而提高多尺度下人群计数的准确性。
在本实施例的一些可选的实现方式中,参见图3所示,步骤202具体包括如下步骤:
步骤S301,将原始图片特征分别输入至每层金字塔层的池化层中进行池化运算,在每层金字塔层上得到对应的第一特征图。
在本实施例中,金字塔池化模块包括多层金字塔层,每层金字塔层包括一个池化层、一个卷积层以及一个上采样层,每层金字塔层对应一个尺度的特征图,即通过金字塔池化模块可以提取不同尺度的特征图。
应当理解,金字塔池化模块层级是预先设置好的,设置好层级之后,设置每层金字塔中池化层的池化核的大小,例如,金字塔有三层,每层对应一个尺度,池化核大小分别为4x4、2x2和1x1。
将原始图片特征分别输入至每层金字塔层的池化层中进行池化运算,将得到对应不同层特征的第一特征图。
步骤S302,将第一特征图经过第一卷积层进行第一卷积操作,输出对应的第一卷积特征图。
在本实施例中,每层金字塔层中使用卷积核大小为1×1,步长为1的卷积层对池化后的第一特征图进行卷积。使用1×1的卷积核进行卷积操作的好处在于不会改变原始特征值的维度,从而确保不会在卷积的过程中增加冗余信息或是漏掉一些原本的信息,同时加强了像素点位置的定位信息。
步骤S303,对第一卷积特征图输入到上采样层进行上采样操作,输出预设大小的第一尺度特征图。
在本实施例中,不同尺度层级得到的多尺度特征尺寸是不相同的,因此,通过上采样层进行上采样操作,每层输出给定的预设大小的第一尺度特征图。
步骤S304,将每层的第一尺度特征图在通道维度上进行拼接得到第一多尺度特征图。
在金字塔池化模块中的每层提取一个尺度的特征,最后拼接这些特征,从而达到兼容多个尺度特征的目的。
举例说明,金字塔塔池化模型设置为三层金字塔层,金字塔底座为1x1卷积核,金字塔中间为2x2卷积核,金字塔顶座为4x4卷积核,在conv5层进行池化,该层有256个过滤器,分别进行池化操作后,出来的特征就是(16+4+1)x256维度,即conv5对应的维度为256。
本申请通过对每层金字塔层输出的预设大小的第一尺度特征图按通道维度进行拼接,融合了在不同尺度提取的特征,保证了后续人群密度估计的准确性,具有鲁棒性高,性能 好的优点。
在本实施例的一些可选的实现方式中,参见图4所示,步骤S203具体包括如下步骤:
步骤S401,将第一多尺度特征图输入到第二卷积层进行第二卷积操作,得到第二卷积特征图;
步骤S402,调整第二卷积层的输出通道数并输出第二卷积特征图。
步骤S403,把第二卷积特征图输入到第三卷积层进行第三卷积操作并输出第二多尺度特征图。
在本实施例中,卷积模块包括第二卷积层、第三卷积层以及输出层,第二卷积操作目的是降低通道数,减少计算量,具体的,采用1*1*c的卷积层进行卷积操作,其中,c为通道数,可以根据需要进行设置。将得到的第二卷积特征图输出,通过adaptive*cout调整输出通道,提高通道数,增加后续的特征表征能力。
需要说明的是,卷积模块的输出通道可以根据金字塔池化模块输出的通道数量按照实际情况进行灵活调整,可以人为进行设置,也可以按照预设规则进行设置,例如,设置卷积模块的输出通道数与金字塔池化模块输出通道数一样,或者设置卷积模块的输出通道数是金字塔池化模块输出通道数的两倍,在这里不进行限制。
第三卷积操作可以使用卷积核大小为1*1的卷积层对池化后的特征图进行卷积。使用1*1的卷积核进行卷积操作最大的好处在于不会改变原始特征值的维度,从而确保不会在卷积的过程中增加冗余信息或是漏掉一些原本的信息,同时加强了像素点位置的定位信息。
第三卷积操作之后通过输出层convm*n*cin*c adaptive进行输出,获得第二多尺度特征图。
需要说明的是,第一多尺度特征图是将提取的不同尺度特征在通道维度进行拼接得到,即第一多尺度特征图的通道数增加了,表征图片本身的特征增加了,而每一特征下的信息没有增加;第二多尺度特征图是将第一多尺度特征图进行卷积操作后,从而将特征进行融合,使得每一特征下的信息增加了。
综上所述,参见图5所示,为本实施例提供的人群计数方法的框架图。如图所示,原始图片经特征提取模型进行原始图片特征提取,将提取出来的原始图片特征输入到金字塔池化模块中,经金字塔池化模块不同层级的池化层进行池化操作,得到每层对应的第一特征图,将第一特征图经过第一卷积层进行第一卷积操作,输出对应的第一卷积特征图,对第一卷积特征图输入到上采样层进行上采样操作,输出预设大小的第一尺度特征图,将每层的第一尺度特征图在通道维度上进行拼接得到第一多尺度特征图,将第一多尺度特征图输入第二卷积层进行第二卷积操作,得到第二卷积特征图,调整第二卷积层的输出通道数并输出第二卷积特征图,对第二卷积特征图输入到第三卷积层进行第三卷积操作并输出第二多尺度特征图,将第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图,将第三多尺度特征图进行解码后转化为人群密度图,可以修正人群密度拥挤中的信息确实和不准确的情况,从而提高多尺度下人群计数的准确性。
在一些可选的实现方式中,在步骤205之后,上述电子设备可以执行以下步骤:
对人群密度图中每个像素点的值求积分得到人群密度估计,将所有像素点的值相加求和,得到总人数计数。
需要说明的是,在人群密度图中每个像素点的值为该像素点人群的密度,因此直接对密度图进行积分操作,对于数字图像而言,即将所有像素点的值相加,即可得到最终的总人数。
本申请可用于众多通用或专用的计算机系统环境或配置中。例如:个人计算机、服务器计算机、手持设备或便携式设备、平板型设备、多处理器系统、基于微处理器的系统、置顶盒、可编程的消费电子设备、网络PC、小型计算机、大型计算机、包括以上任何系统或设备的分布式计算环境等等。本申请可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类 型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本申请,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
本申请可应用于智慧安防的监控领域中,从而推动智慧城市的建设。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,该计算机可读指令可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,前述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)等非易失性存储介质,或随机存储记忆体(Random Access Memory,RAM)等。
应该理解的是,虽然附图的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,附图的流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
进一步参考图6,作为对上述图2所示方法的实现,本申请提供了一种人群计数装置的一个实施例,该装置实施例与图2所示的方法实施例相对应,该装置具体可以应用于各种电子设备中。
如图6所示,本实施例所述的人群计数装置包括:构建模块601、池化模块602、卷积模块603、拼接模块604以及解码模块605。其中:
构建模块601用于构建人群计数通用模型,所述人群计数通用模型包括金字塔池化模块与卷积模块;
池化模块602用于将多个原始图片特征输入到所述金字塔池化模块中,根据每个金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图;
卷积模块603用于将所述第一多尺度特征图输入卷积模块进行卷积操作输出第二多尺度特征图;
拼接模块604用于将所述第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图;
解码模块605用于将所述第三多尺度特征图进行解码后转化为人群密度图。
需要强调的是,为进一步保证上述原始图片特征的私密和安全性,上述原始图片特征还可以存储于一区块链的节点中。
在本实施例中,卷积模块603进一步用于将所述第一多尺度特征图进行第二卷积操作,得到第二卷积特征图;对所述第二卷积特征图进行第三卷积操作并输出第二多尺度特征图。
在本实施例的一种具体实施方式中,解码模块605包括卷积单元以及生成单元,卷积单元用于使用多层卷积层对所述第三多尺度特征图进行解码,生成单元用于将解码后的第三多尺度特征图的空间尺寸恢复至原始图片尺寸得到人群密度图。
上述人群计数装置,通过构建的人群计数通用模型将原始图片特征进行金字塔池化后进行自适应卷积操作,获得人群的多尺度特征信息,并将多尺度特征与原始图片特征进行拼接融合后进一步获得最终的多尺度特征图,将最终的多尺度特征图进行解码后输出人群密度图,可以修正人群密度拥挤中的信息确实和不准确的情况,从而提高多尺度下人群计数的准确性。
在本实施例一种具体实施方式中,生成单元进一步用于采用双线性插值法对解码后的第三多尺度特征图进行上采样到与原始图片等大的尺寸。
在本实施例的一些可选的实现方式中,池化模块602包括池化单元、卷积单元、上采样单元以及拼接融合单元;
池化单元用于将所述原始图片特征分别输入至每层金字塔层的池化层中进行池化运 算,在每层所述金字塔层上得到对应的第一特征图;
卷积单元用于将所述第一特征图经过所述第一卷积层进行第一卷积操作,输出对应的第一卷积特征图;
上采样单元用于对所述第一卷积特征图输入到上采样层进行上采样操作,输出预设大小的第一尺度特征图;
拼接融合单元用于将每层的所述第一尺度特征图在通道维度上进行拼接得到第一多尺度特征图。
上述人群计数装置,通过对每层金字塔输出的预设大小的第一尺度特征图按通道维度进行拼接,融合了在不同尺度提取的特征,保证了后续人群密度估计的准确性,具有鲁棒性高,性能好的优点。
在本实施例的一些可选的实现方式中,人群计数装置还包括计数模块,计数模块用于对所述人群密度图中每个像素点的值求积分得到人群密度估计,将所有像素点的值相加求和,得到总人数计数。
在人群密度图中每个像素点的值为该像素点人群的密度,因此直接对密度图进行积分操作,对于数字图像而言,即将所有像素点的值相加,即可得到最终的总人数。
为解决上述技术问题,本申请实施例还提供计算机设备。具体请参阅图7,图7为本实施例计算机设备基本结构框图。
所述计算机设备7包括通过系统总线相互通信连接存储器71、处理器72、网络接口73。需要指出的是,图中仅示出了具有组件71-73的计算机设备7,但是应理解的是,并不要求实施所有示出的组件,可以替代的实施更多或者更少的组件。其中,本技术领域技术人员可以理解,这里的计算机设备是一种能够按照事先设定或存储的指令,自动进行数值计算和/或信息处理的设备,其硬件包括但不限于微处理器、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程门阵列(Field-Programmable Gate Array,FPGA)、数字处理器(Digital Signal Processor,DSP)、嵌入式设备等。
所述计算机设备可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。所述计算机设备可以与用户通过键盘、鼠标、遥控器、触摸板或声控设备等方式进行人机交互。
所述存储器71至少包括一种类型的可读存储介质,所述可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等。在一些实施例中,所述存储器71可以是所述计算机设备7的内部存储单元,例如该计算机设备7的硬盘或内存。在另一些实施例中,所述存储器71也可以是所述计算机设备7的外部存储设备,例如该计算机设备7上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。当然,所述存储器71还可以既包括所述计算机设备7的内部存储单元也包括其外部存储设备。本实施例中,所述存储器71通常用于存储安装于所述计算机设备7的操作系统和各类应用软件,例如人群计数方法的计算机可读指令等。此外,所述存储器71还可以用于暂时地存储已经输出或者将要输出的各类数据。
所述处理器72在一些实施例中可以是中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器、或其他数据处理芯片。该处理器72通常用于控制所述计算机设备7的总体操作。本实施例中,所述处理器72用于运行所述存储器71中存储的计算机可读指令或者处理数据,例如运行所述人群计数方法的计算机可读指令。
所述网络接口73可包括无线网络接口或有线网络接口,该网络接口73通常用于在所述计算机设备7与其他电子设备之间建立通信连接。
本实施例通过处理器执行存储在存储器的计算机可读指令时实现如上述实施例人群计数方法的步骤,通过构建的人群计数通用模型将原始图片特征进行金字塔池化后进行自 适应卷积操作,获得人群的多尺度特征信息,并将多尺度特征与原始图片特征进行拼接融合后进一步获得最终的多尺度特征图,将最终的多尺度特征图进行解码后输出人群密度图,可以修正人群密度拥挤中的信息确实和不准确的情况,从而提高多尺度下人群计数的准确性。
本申请还提供了另一种实施方式,即提供一种计算机可读存储介质,所述计算机可读存储介质可以是非易失性,也可以是易失性。所述计算机可读存储介质存储有计算机可读指令,所述计算机可读指令可被至少一个处理器执行,以使所述至少一个处理器执行如上述的人群计数方法的步骤,通过构建的人群计数通用模型将原始图片特征进行金字塔池化后进行自适应卷积操作,获得人群的多尺度特征信息,并将多尺度特征与原始图片特征进行拼接融合后进一步获得最终的多尺度特征图,将最终的多尺度特征图进行解码后输出人群密度图,可以修正人群密度拥挤中的信息确实和不准确的情况,从而提高多尺度下人群计数的准确性。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
显然,以上所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例,附图中给出了本申请的较佳实施例,但并不限制本申请的专利范围。本申请可以以许多不同的形式来实现,相反地,提供这些实施例的目的是使对本申请的公开内容的理解更加透彻全面。尽管参照前述实施例对本申请进行了详细的说明,对于本领域的技术人员来而言,其依然可以对前述各具体实施方式所记载的技术方案进行修改,或者对其中部分技术特征进行等效替换。凡是利用本申请说明书及附图内容所做的等效结构,直接或间接运用在其他相关的技术领域,均同理在本申请专利保护范围之内。

Claims (20)

  1. 一种人群计数方法,包括下述步骤:
    构建人群计数通用模型,所述人群计数通用模型包括金字塔池化模块与卷积模块,其中所述金字塔池化模块包括多层金字塔层;
    将多个原始图片特征输入到所述金字塔池化模块中,根据每层金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图;
    将所述第一多尺度特征图输入卷积层模块进行卷积操作输出第二多尺度特征图;
    将所述第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图;及
    将所述第三多尺度特征图进行解码后转化为人群密度图。
  2. 根据权利要求1所述的人群计数方法,其中,所述每层金字塔层包括池化层、第一卷积层以及上采样层;所述将多个原始图片特征输入到所述金字塔池化模块中,根据每层金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图的步骤包括:
    将所述原始图片特征分别输入至每层金字塔层的池化层中进行池化运算,在每层所述金字塔层上得到对应的第一特征图;
    将所述第一特征图经过所述第一卷积层进行第一卷积操作,输出对应的第一卷积特征图;
    对所述第一卷积特征图输入到上采样层进行上采样操作,输出预设大小的第一尺度特征图;
    将每层的所述第一尺度特征图在通道维度上进行拼接得到第一多尺度特征图。
  3. 根据权利要求1所述的人群计数方法,其中,所述卷积模块包括第二卷积层和第三卷积层;所述将所述第一多尺度特征图输入卷积模块进行卷积操作输出第二多尺度特征图的步骤包括:
    将所述第一多尺度特征图输入到第二卷积层进行第二卷积操作,得到第二卷积特征图;
    调整所述第二卷积层的输出通道数并输出所述第二卷积特征图;
    把所述第二卷积特征图输入到第三卷积层进行第三卷积操作并输出第二多尺度特征图。
  4. 根据权利要求1所述的人群计数方法,其中,所述将所述第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图的步骤包括:
    将所述第二多尺度特征图与原始图片特征按照通道维度进行拼接得到第三拼接特征;
    使用1*1卷积核对所述第三拼接特征进行融合得到第三多尺度特征图。
  5. 根据权利要求1所述的人群计数方法,其中,所述将所述第三多尺度特征图进行解码后转化为人群密度图的步骤包括:
    使用多层卷积层对所述第三多尺度特征图进行解码;
    将解码后的第三多尺度特征图的空间尺寸恢复至原始图片尺寸,得到人群密度图。
  6. 根据权利要求5所述的人群计数方法,其中,所述将解码后的第三多尺度特征图的空间尺寸恢复至原始图片尺寸得到人群密度图的步骤包括:
    采用双线性插值法对解码后的第三多尺度特征图进行上采样,得到与原始图片等大的尺寸。
  7. 根据权利要求1至6中任一项所述的人群计数方法,其中,在所述将所述第三多尺度特征进行解码后转化为人群密度图的步骤之后还包括:
    对所述人群密度图中每个像素点的值求积分得到人群密度估计,将所有像素点的值相加求和,得到总人数计数。
  8. 一种人群计数装置,包括:
    构建模块,用于构建人群计数通用模型,所述人群计数通用模型包括金字塔池化模块与卷积模块;
    池化模块,用于将原始图片特征输入到所述金字塔池化模块中,根据每个金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图;
    卷积模块,用于将所述第一多尺度特征图输入卷积模块进行卷积操作输出第二多尺度特征图;
    拼接模块,用于将所述第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图;及
    解码模块,用于将所述第三多尺度特征图进行解码后转化为人群密度图。
  9. 一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述处理器执行所述计算机可读指令时实现如下所述的人群计数方法的步骤:
    构建人群计数通用模型,所述人群计数通用模型包括金字塔池化模块与卷积模块,其中所述金字塔池化模块包括多层金字塔层;
    将多个原始图片特征输入到所述金字塔池化模块中,根据每层金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图;
    将所述第一多尺度特征图输入卷积层模块进行卷积操作输出第二多尺度特征图;
    将所述第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图;及
    将所述第三多尺度特征图进行解码后转化为人群密度图。
  10. 根据权利要求9所述的计算机设备,其中,所述每层金字塔层包括池化层、第一卷积层以及上采样层;所述将多个原始图片特征输入到所述金字塔池化模块中,根据每层金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图的步骤包括:
    将所述原始图片特征分别输入至每层金字塔层的池化层中进行池化运算,在每层所述金字塔层上得到对应的第一特征图;
    将所述第一特征图经过所述第一卷积层进行第一卷积操作,输出对应的第一卷积特征图;
    对所述第一卷积特征图输入到上采样层进行上采样操作,输出预设大小的第一尺度特征图;
    将每层的所述第一尺度特征图在通道维度上进行拼接得到第一多尺度特征图。
  11. 根据权利要求9所述的计算机设备,其中,所述卷积模块包括第二卷积层和第三卷积层;所述将所述第一多尺度特征图输入卷积模块进行卷积操作输出第二多尺度特征图的步骤包括:
    将所述第一多尺度特征图输入到第二卷积层进行第二卷积操作,得到第二卷积特征图;
    调整所述第二卷积层的输出通道数并输出所述第二卷积特征图;
    把所述第二卷积特征图输入到第三卷积层进行第三卷积操作并输出第二多尺度特征图。
  12. 根据权利要求9所述的计算机设备,其中,所述将所述第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图的步骤包括:
    将所述第二多尺度特征图与原始图片特征按照通道维度进行拼接得到第三拼接特征;
    使用1*1卷积核对所述第三拼接特征进行融合得到第三多尺度特征图。
  13. 根据权利要求9所述的计算机设备,其中,所述将所述第三多尺度特征图进行解码后转化为人群密度图的步骤包括:
    使用多层卷积层对所述第三多尺度特征图进行解码;
    将解码后的第三多尺度特征图的空间尺寸恢复至原始图片尺寸,得到人群密度图。
  14. 根据权利要求13所述的计算机设备,其中,所述将解码后的第三多尺度特征图的空间尺寸恢复至原始图片尺寸得到人群密度图的步骤包括:
    采用双线性插值法对解码后的第三多尺度特征图进行上采样,得到与原始图片等大的尺寸。
  15. 根据权利要求9至14中任一项所述的计算机设备,其中,在所述将所述第三多尺度特征进行解码后转化为人群密度图的步骤之后还包括:
    对所述人群密度图中每个像素点的值求积分得到人群密度估计,将所有像素点的值相 加求和,得到总人数计数。
  16. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机可读指令,所述计算机可读指令被处理器执行时实现如下所述的人群计数方法的步骤:
    构建人群计数通用模型,所述人群计数通用模型包括金字塔池化模块与卷积模块,其中所述金字塔池化模块包括多层金字塔层;
    将多个原始图片特征输入到所述金字塔池化模块中,根据每层金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图;
    将所述第一多尺度特征图输入卷积层模块进行卷积操作输出第二多尺度特征图;
    将所述第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图;及
    将所述第三多尺度特征图进行解码后转化为人群密度图。
  17. 根据权利要求16所述的计算机可读存储介质,其中,所述每层金字塔层包括池化层、第一卷积层以及上采样层;所述将多个原始图片特征输入到所述金字塔池化模块中,根据每层金字塔层预设的输出特征尺寸进行不同尺度的池化,得到第一多尺度特征图的步骤包括:
    将所述原始图片特征分别输入至每层金字塔层的池化层中进行池化运算,在每层所述金字塔层上得到对应的第一特征图;
    将所述第一特征图经过所述第一卷积层进行第一卷积操作,输出对应的第一卷积特征图;
    对所述第一卷积特征图输入到上采样层进行上采样操作,输出预设大小的第一尺度特征图;
    将每层的所述第一尺度特征图在通道维度上进行拼接得到第一多尺度特征图。
  18. 根据权利要求16所述的计算机可读存储介质,其中,所述卷积模块包括第二卷积层和第三卷积层;所述将所述第一多尺度特征图输入卷积模块进行卷积操作输出第二多尺度特征图的步骤包括:
    将所述第一多尺度特征图输入到第二卷积层进行第二卷积操作,得到第二卷积特征图;
    调整所述第二卷积层的输出通道数并输出所述第二卷积特征图;
    把所述第二卷积特征图输入到第三卷积层进行第三卷积操作并输出第二多尺度特征图。
  19. 根据权利要求16所述的计算机可读存储介质,其中,所述将所述第二多尺度特征图与原始图片特征进行拼接融合得到第三多尺度特征图的步骤包括:
    将所述第二多尺度特征图与原始图片特征按照通道维度进行拼接得到第三拼接特征;
    使用1*1卷积核对所述第三拼接特征进行融合得到第三多尺度特征图。
  20. 根据权利要求16所述的计算机可读存储介质,其中,所述将所述第三多尺度特征图进行解码后转化为人群密度图的步骤包括:
    使用多层卷积层对所述第三多尺度特征图进行解码;
    将解码后的第三多尺度特征图的空间尺寸恢复至原始图片尺寸,得到人群密度图。
PCT/CN2021/090518 2021-02-19 2021-04-28 一种人群计数方法、装置、计算机设备及存储介质 WO2022174517A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110191656.6 2021-02-19
CN202110191656.6A CN112991274B (zh) 2021-02-19 2021-02-19 一种人群计数方法、装置、计算机设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022174517A1 true WO2022174517A1 (zh) 2022-08-25

Family

ID=76394183

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/090518 WO2022174517A1 (zh) 2021-02-19 2021-04-28 一种人群计数方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN112991274B (zh)
WO (1) WO2022174517A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117253184A (zh) * 2023-08-25 2023-12-19 燕山大学 一种雾先验频域注意表征引导的雾天图像人群计数方法

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117405570B (zh) * 2023-12-13 2024-03-08 长沙思辰仪器科技有限公司 一种油液颗粒度计数器自动检测方法与系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160104056A1 (en) * 2014-10-09 2016-04-14 Microsoft Technology Licensing, Llc Spatial pyramid pooling networks for image processing
CN109948553A (zh) * 2019-03-20 2019-06-28 北京航空航天大学 一种多尺度密集人群计数方法
CN111429466A (zh) * 2020-03-19 2020-07-17 北京航空航天大学 一种基于多尺度信息融合网络的空基人群计数与密度估计方法
CN111476188A (zh) * 2020-04-14 2020-07-31 山东师范大学 基于特征金字塔的人群计数方法、系统、介质及电子设备
CN111488827A (zh) * 2020-04-10 2020-08-04 山东师范大学 一种基于多尺度特征信息的人群计数方法及系统
CN111523449A (zh) * 2020-04-22 2020-08-11 山东师范大学 基于金字塔注意力网络的人群计数方法及系统

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543695B (zh) * 2018-10-26 2023-01-06 复旦大学 基于多尺度深度学习的泛密度人群计数方法
CN110705340B (zh) * 2019-08-12 2023-12-26 广东石油化工学院 一种基于注意力神经网络场的人群计数方法
CN111242036B (zh) * 2020-01-14 2023-05-09 西安建筑科技大学 一种基于编码-解码结构多尺度卷积神经网络的人群计数方法
CN111639585A (zh) * 2020-05-21 2020-09-08 中国科学院重庆绿色智能技术研究院 一种自适应人群计数系统及自适应人群计数方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160104056A1 (en) * 2014-10-09 2016-04-14 Microsoft Technology Licensing, Llc Spatial pyramid pooling networks for image processing
CN109948553A (zh) * 2019-03-20 2019-06-28 北京航空航天大学 一种多尺度密集人群计数方法
CN111429466A (zh) * 2020-03-19 2020-07-17 北京航空航天大学 一种基于多尺度信息融合网络的空基人群计数与密度估计方法
CN111488827A (zh) * 2020-04-10 2020-08-04 山东师范大学 一种基于多尺度特征信息的人群计数方法及系统
CN111476188A (zh) * 2020-04-14 2020-07-31 山东师范大学 基于特征金字塔的人群计数方法、系统、介质及电子设备
CN111523449A (zh) * 2020-04-22 2020-08-11 山东师范大学 基于金字塔注意力网络的人群计数方法及系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117253184A (zh) * 2023-08-25 2023-12-19 燕山大学 一种雾先验频域注意表征引导的雾天图像人群计数方法
CN117253184B (zh) * 2023-08-25 2024-05-17 燕山大学 一种雾先验频域注意表征引导的雾天图像人群计数方法

Also Published As

Publication number Publication date
CN112991274A (zh) 2021-06-18
CN112991274B (zh) 2023-06-30

Similar Documents

Publication Publication Date Title
US20220351390A1 (en) Method for generating motion capture data, electronic device and storage medium
CN107491174A (zh) 用于远程协助的方法、装置、系统及电子设备
CN110136229A (zh) 一种用于实时虚拟换脸的方法与设备
WO2022174517A1 (zh) 一种人群计数方法、装置、计算机设备及存储介质
WO2023035531A1 (zh) 文本图像超分辨率重建方法及其相关设备
CN110032701B (zh) 图像展示控制方法、装置、存储介质及电子设备
CN107566793A (zh) 用于远程协助的方法、装置、系统及电子设备
US20230047748A1 (en) Method of fusing image, and method of training image fusion model
CN114792355B (zh) 虚拟形象生成方法、装置、电子设备和存储介质
US20230102804A1 (en) Method of rectifying text image, training method, electronic device, and medium
CN110619670A (zh) 人脸互换方法、装置、计算机设备及存储介质
CN107393410B (zh) 在地图上展示数据的方法、介质、装置和计算设备
WO2024041235A1 (zh) 图像处理方法、装置、设备、存储介质及程序产品
WO2020155908A1 (zh) 用于生成信息的方法和装置
US11910068B2 (en) Panoramic render of 3D video
CN111489289A (zh) 一种图像处理方法、图像处理装置及终端设备
CN116129534A (zh) 一种图像活体检测方法、装置、存储介质及电子设备
CN115203487A (zh) 基于多方安全图的数据处理方法及相关装置
CN114040129A (zh) 视频生成方法、装置、设备及存储介质
CN114066790A (zh) 图像生成模型的训练方法、图像生成方法、装置和设备
CN113610856A (zh) 训练图像分割模型和图像分割的方法和装置
CN113361535A (zh) 图像分割模型训练、图像分割方法及相关装置
CN112016889A (zh) 流程构建方法、装置、电子设备及存储介质
CN112016503B (zh) 人行道检测方法、装置、计算机设备及存储介质
CN113608615B (zh) 对象数据处理方法、处理装置、电子设备以及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21926234

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21926234

Country of ref document: EP

Kind code of ref document: A1