WO2021163103A1 - Light-weight pose estimation network with multi-scale heatmap fusion - Google Patents

Light-weight pose estimation network with multi-scale heatmap fusion

Info

Publication number
WO2021163103A1
Authority
WO
WIPO (PCT)
Prior art keywords
joints
image
indication
scale
feature maps
Prior art date
Application number
PCT/US2021/017341
Other languages
French (fr)
Inventor
Yun Fu
Songyao JIANG
Bin Sun
Original Assignee
Northeastern University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University filed Critical Northeastern University
Priority to US17/759,939 priority Critical patent/US20230126178A1/en
Publication of WO2021163103A1 publication Critical patent/WO2021163103A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Definitions

  • Bottom-up approaches also have two stages. Bottom-up approaches first detect body parts and, second, associate body parts into people.
  • Pishchulin et al. [38] proposed using an Integer Linear Program method to solve the body part association problem, i.e., associating joints estimated from an image into different persons.
  • Cao et al. [2] introduced Part Affinity Fields to predict the direction and activations for each limb to help associate body parts.
  • Newell et al. [39] utilized predicted pixel-wise embeddings to assign detected body parts into different groups.
  • Embodiments follow a top-down approach and utilize a person detector to first detect a person bounding box and, second, estimate the location of body joints within the bounding box.
  • Embodiments shrink down the capacity of pose estimation networks using a novel light-weight neural network block and utilize a multi-scale heatmap extraction and fusion mechanism to solve the scaling problem and improve the performance.
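  • As an illustration of this two-stage, top-down flow only, the following sketch shows one way it could be wired together; the callables person_detector and pose_net and the coordinate handling are hypothetical placeholders, not interfaces defined in this document.

```python
def estimate_poses(image, person_detector, pose_net):
    """Hypothetical two-stage top-down pipeline (illustrative names only)."""
    poses = []
    for (x0, y0, x1, y1) in person_detector(image):        # stage 1: person bounding boxes
        crop = image[y0:y1, x0:x1]                          # crop the detected person
        joints = pose_net(crop)                             # stage 2: joint locations inside the crop
        poses.append([(x + x0, y + y0) for (x, y) in joints])  # map back to full-image coordinates
    return poses
```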
  • FIG. 3 is a flow diagram of a method 330 for identifying joints of a multi-limb body in an image according to an embodiment.
  • the method 330 unifies 331 depths of a plurality of multi-scale feature maps generated from an image of a multi-limb body to create a plurality of feature maps each having a same depth.
  • a typical feature map has four dimensions [N, C, H, W], where N is the mini-batch size, C is channels/depth, H is height, and W is width.
  • Embodiments unify 331 the feature maps so they have the same number of channels/same depth.
  • For each of the plurality of feature maps having the same depth (the depth-unified feature maps), the method 330 generates 332 an initial indication of one or more joints in the image. In other words, at 332, for each respective depth-unified feature map, a respective initial indication of joints in the image is generated. According to an embodiment, the one or more joints are located at an interconnection of a limb to the multi-limb body or at an interconnection of a limb to another limb. To continue, a final indication of the one or more joints in the image is generated 333 using each generated initial indication of the one or more joints. In an embodiment of the method 330, the unifying 331, generation 332 of the initial indications of one or more joints, and generation 333 of the final indication of the one or more joints may be implemented as described hereinbelow in relation to FIGs. 4A-B.
  • Embodiments of the method 330 may be used to identify joints of any type of object.
  • the indication of the one or more joints in the image corresponds to joints of at least one of: a human, animal, machine, and robot, amongst other examples.
  • embodiments may identify joints for multiple objects, e.g., people, in an image.
  • the initial indications of one or more joints and final indication of one or more joints are indications of locations of joints in the image.
  • the indications of one or more joints indicate a probability of a joint at each location in the image.
  • locations are x-y coordinates in the image.
  • the unit of the locations, e.g., coordinates, is pixels.
  • An example embodiment of the method 330 generates the plurality of multi-scale feature maps (that are unified 331) by processing the image using a backbone neural network.
  • processing the image using the backbone neural network comprises performing multi-scale feature extraction and multi-scale feature fusion to generate the plurality of multi-scale feature maps.
  • the plurality of multi-scale feature maps are generated using the functionality described hereinbelow in relation to FIG. 4A and, specifically, the backbone network 442 of the system 440.
  • An embodiment of the method 330 unifies 331 depths of the plurality of multi-scale feature maps by applying a respective convolutional layer to each of the plurality of multi-scale feature maps to create the plurality of feature maps each having the same depth.
  • a different convolutional layer is applied to each different feature map, and these different convolutional layers are configured to output feature maps that have the same depth. It can be said that such functionality unifies channels of the feature maps.
  • the feature maps are unified using the functionality described hereinbelow in relation to FIG. 4B.
  • Yet another embodiment generates 332 the initial indication of the one or more joints in the image for each of the plurality of feature maps having the same depth by applying a heatmap estimating layer to each of the plurality of feature maps having the same depth.
  • a respective indication of joints in the image is generated 332 for each respective feature map.
  • the heatmap estimating layer is composed of a convolutional neural network.
  • Another embodiment of the method 330 trains the heatmap estimating layer composed of the convolutional neural network that is used to generate 332 the initial indications of the one or more joints in the image.
  • the image is a training image.
  • trains the convolutional neural network by comparing each generated initial indication of the one or more joints in the image to a respective ground- truth indication of the one or more joints in the training image to determine losses. These determined losses are back propagated to the convolutional neural network to adjust weights of the neural network.
  • each respective ground-truth indication of the one or more joints corresponds to a respective scale of a given feature map of the plurality of feature maps having the same depth. Further training functionality that may be employed in embodiments is described hereinbelow in relation to FIG. 5.
  • the final indication of the one or more joints in the image is generated 333 by first, upsampling at least one initial indication of the one or more joints in the image to have a scale equivalent to a scale of a given initial indication of the one or more joints with a largest scale.
  • Such functionality may include performing upsampling on a plurality of initial indications of the one or more joints so that all of the initial indications have an equal scale, i.e., size.
  • the sizes (HxW) of the initial estimations of joint locations generated using the multi-scale feature maps are the same sizes as the feature maps. To illustrate, consider an embodiment with three multi-scale feature maps (64x64, 128x128, 256x256).
  • the initial joints/body estimation on those feature maps will have the same sizes (64x64, 128x128, 256x256).
  • the estimations can be added together, or processed with a max() operator, to generate the final indication of joints (256x256).
  • the initial estimations are matrices/tensors filled with float values. After the upsampling, the initial estimations have the same size (number of joints × height × width). These matrices can be added together elementwise.
  • the max() operator can be implemented elementwise. In such implementations, the result of adding the matrices together elementwise or applying the max() operator elementwise is the final indication of joints in the image.
  • the upsampling processes the initial indications of the one or more joints so that the indications have the same scale as the initial indication with the largest scale.
  • one initial indication is not upsampled, the initial indication with the largest scale.
  • the upsampled at least one initial indication of the one or more joints and the given initial indication of the one or more joints with the largest scale are added together to generate 333 the final indication of the one or more joints in the image.
  • the final indication of the one or more joints is generated 333 by adding together all of the initial indications of joints (which were previously upsampled).
  • An embodiment generates 333 the final indication of the one or more joints in the image as described hereinbelow in relation to FIG. 4B.
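  • As a concrete illustration of this fusion step (using the 64x64, 128x128, and 256x256 example above), the following PyTorch-style sketch upsamples the smaller initial heatmaps to the largest scale and combines them elementwise; the bilinear interpolation mode and the (num_joints, H, W) tensor layout are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def fuse_heatmaps(initial_heatmaps, use_max=False):
    """Fuse initial joint heatmaps estimated at different scales into one final heatmap.

    initial_heatmaps: list of tensors shaped (num_joints, H_i, W_i),
    e.g., 64x64, 128x128, and 256x256 estimates from the multi-scale feature maps.
    """
    target = max(h.shape[-1] for h in initial_heatmaps)          # largest scale, e.g., 256
    upsampled = [
        h if h.shape[-1] == target
        else F.interpolate(h.unsqueeze(0), size=(target, target),
                           mode="bilinear", align_corners=False).squeeze(0)
        for h in initial_heatmaps
    ]
    stacked = torch.stack(upsampled)     # (num_scales, num_joints, target, target)
    # Elementwise sum (or elementwise max) over the scale dimension gives the final indication.
    return stacked.max(dim=0).values if use_max else stacked.sum(dim=0)
```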
  • Another embodiment of the method 330 generates an indication of one or more limbs in the image from the generated 333 final indication of the one or more joints in the image. Such an embodiment may also generate an indication of pose using the generated final indication of the one or more joints in the image and the generated indication of the one or more limbs in the image.
  • $p \in \mathbb{R}^{n_p \times 2}$ denotes the 2D $x$-$y$ coordinates of the $n_p$ body joint keypoints of that person.
  • the mapping function G is obtained by training the proposed deep convolutional neural networks.
  • the estimated 2D keypoints p can be obtained by finding the location of the strongest responding signal in the heatmap.
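  • A minimal sketch of this decoding step, assuming the heatmap is stored as a (num_joints, H, W) array; the layout is an assumption for illustration.

```python
import numpy as np

def heatmap_to_keypoints(heatmap):
    """heatmap: (num_joints, H, W) array; returns (num_joints, 2) x-y pixel coordinates."""
    num_joints, h, w = heatmap.shape
    flat_idx = heatmap.reshape(num_joints, -1).argmax(axis=1)   # strongest response per joint
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1)                           # x-y order, in pixels
```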
  • an implementation utilizes a deep neural network architecture (referred to herein as backbone) which extracts features to capture the related information contained in the images. Then, a shallow convolutional neural network (referred to herein as head) is used to estimate the heatmap of joints, e.g., human joints.
  • backbone deep neural network architecture
  • head shallow convolutional neural network
  • LPRNet low-rank pointwise residual network
  • Table 2 below compares the computational cost in FLOPs and parameters of existing networks and the LPRNet that may be used in embodiments.
  • SConv, DSC, ShuffleNetv2, and LPR modules are used to build VGG [4], MobileNetv1 [6], ShuffleNetv2 [14], and the LPRNet, respectively.
  • the convolution operation is applied between each filter and the input feature map.
  • the filter applies different weights to different features while doing convolution. Afterwards, all features convolved by one filter are added together to generate a new feature map.
  • the whole procedure is equivalent to certain matrix products, which can be formally written as $\hat{F}_i = \sum_{j=1}^{m} W_{ij} \ast F_j$, where $W_{ij}$ is the weight of the filter $i$ corresponding to the feature map $j$, $F_j$ is the input feature map, and $W_{ij} \ast F_j$ means the feature map $F_j$ is convolved by a filter with the weight $W_{ij}$.
  • each $W_{ij}$ is a $3 \times 3$ matrix (filter), and together the $W_{ij}$ constitute a large matrix $[W_{ij}]$, or simply $W$.
  • Depthwise Separable Convolution layers are key components for many light-weight neural networks [13, 6, 12].
  • a DSC structure has two layers, a depthwise convolutional layer and a pointwise convolutional layer [6].
  • the depthwise convolutional layer applies a single convolutional filter to each input channel, which massively reduces the parameter count and computational cost. Following the process of its convolution, the depthwise convolution can be described as $\hat{F}_j = D_{jj} \ast F_j$ for $j = 1, \dots, m$,
  • where each $D_{jj}$ is usually a $3 \times 3$ matrix (filter) and $m$ is the number of the input feature maps.
  • An embodiment defines $D$ as the matrix $[D_{jj}]$. Because $D$ is a diagonal matrix, the depthwise layer has significantly fewer parameters than a standard convolution layer.
  • the pointwise convolutional layer uses 1x1 convolution to build the new features through computing the linear combinations of all input channels.
  • the pointwise convolutional layer is a kind of traditional convolution layer with the kernel size set as 1. Following the process of its convolution, the pointwise convolution can be described as $\hat{F}_i = \sum_{j=1}^{m} P_{ij} F_j$,
  • where each $P_{ij}$ is a scalar, $m$ is the number of input feature maps, and $n$ is the number of outputs.
  • the computational cost is $S_F \cdot S_F \cdot C_{in} \cdot C_{out}$, and the number of parameters is $C_{in} \times C_{out}$.
  • An embodiment defines $P \in \mathbb{R}^{m \times n}$ as the matrix $[P_{ij}]$. Since the depthwise separable convolution is composed of the depthwise convolution and pointwise convolution, the depthwise separable convolution can be represented as $\hat{F} = P(D \ast F)$, where $F = [F_1, \dots, F_m]^{\top}$.
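  • For reference, the generic DSC block described above can be sketched in PyTorch as a depthwise 3x3 convolution followed by a pointwise 1x1 convolution; this is the standard pattern from [6], not code taken from this document.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Generic DSC block: depthwise 3x3 (one filter per input channel) + pointwise 1x1."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)           # D
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)  # P

    def forward(self, x):
        return self.pointwise(self.depthwise(x))   # corresponds to P(D * F)
```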
  • This subsection details the proposed LPR module 220c of FIG. 2.
  • the depthwise convolution can be considered as the convolution between a diagonal matrix $\operatorname{diag}(D_{11}, \dots, D_{mm})$ and a feature map matrix $[F_1, \dots, F_m]$.
  • the LPR structure keeps this procedure, but further explores the pointwise convolution in the following manner.
  • an embodiment implements a low-rank decomposition of $P$ such that $P \approx P_1 P_2$, with $P_1 \in \mathbb{R}^{m \times r}$ and $P_2 \in \mathbb{R}^{r \times m}$.
  • the highest rank of this approximation is $r$, and the size $m \times r$ is much smaller than $m^2$.
  • an embodiment can thereby convert the original DSC module to $F_p = P_1 P_2 (D \ast F)$, where $F_p$ means the output features after this new low-rank pointwise convolution operation.
  • an embodiment may thus reduce the parameters and computational cost; however, such an embodiment may undermine the original structure of $P$ when $r$ is inappropriately small, e.g., $r < \operatorname{rank}(P)$.
  • To compensate, an embodiment adds a residual term $D \ast F$, i.e., the original feature map after the depthwise convolution with $D$. This ensures that even if the overall structure of $P$ is compromised, the depthwise convolution is still able to capture the spatial features of the input. Note, this is similar to the popular residual learning where the input $F$ is added to the module output, but embodiments use $D \ast F$ instead.
  • With this residual term, such an embodiment can formulate a low-rank pointwise residual module as $(P_1 P_2 + I_m)(D \ast F)$, where $I_m$ is an identity matrix.
  • an embodiment may normalize the features of the low-rank pointwise branch $P_1 P_2 (D \ast F)$ with L2 normalization on the channel, and apply batch normalization on $D \ast F$.
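  • A hedged sketch of one possible LPR block following the formulation above: the pointwise convolution is factored into two 1x1 convolutions through a rank-r bottleneck, and the depthwise output is added back as the residual. The exact placement of the L2 and batch normalization, and the assumption that input and output channel counts are equal, reflect one reading of the text rather than a definitive implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class LowRankPointwiseResidual(nn.Module):
    """Sketch of an LPR block: depthwise conv, rank-r factored pointwise conv, depthwise residual."""
    def __init__(self, channels, rank):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels, bias=False)   # D
        self.p2 = nn.Conv2d(channels, rank, kernel_size=1, bias=False)       # P2: m -> r
        self.p1 = nn.Conv2d(rank, channels, kernel_size=1, bias=False)       # P1: r -> m
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        d = self.bn(self.depthwise(x))                  # D * F (batch-normalized)
        low_rank = self.p1(self.p2(d))                  # P1 P2 (D * F)
        low_rank = F.normalize(low_rank, p=2, dim=1)    # L2 normalization over the channel dim
        return low_rank + d                             # (P1 P2 + I)(D * F)
```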
  • FIGs. 4A and 4B illustrate a system 440 that processes an image 441 with a backbone network 442 and head network 443 to generate a heatmap 444 that indicates the location of joints in the image 441.
  • FIG. 4A illustrates details of processing the image 441 using the backbone network 442 to create the multi-scale features 445a-c.
  • FIG. 4B illustrates details of processing the multi-scale features 445a-c using the head network 443 to generate the heatmap 444.
  • the backbone 442 is constructed in a parallel architecture.
  • the backbone 442 is a multi-stage, multi-scale feature extracting network with multi-scale feature exchanging.
  • the backbone network 442 extracts features in high-resolution, medium- resolution, and low-resolution scales.
  • the backbone 442 extracts features from the input image 441 in the original resolution without downsampling to create the feature map 448a.
  • the backbone 442 downsamples once from the original resolution of the image 441 while extracting features to create the feature map 448b.
  • the backbone extracts features and downsamples from the mid-resolution path 447b once to create the feature map 448c.
  • the backbone 442 implements multi-scale feature extraction, i.e., determines feature maps at multiple different resolutions.
  • Exchanging modules, illustrated by the merging arrows in FIG. 4A, fuse features across the resolution paths 447a-c.
  • This fusing allows features from the different resolution paths 447a-c to exchange their information and learn better representations.
  • the backbone 442 implements multi-scale feature fusion, i.e., combines feature maps from multiple different resolutions.
  • the exchanging process can be expressed as follows: $f_h^{i+1} = U(f_h^i, 1) + U(f_m^i, 2) + U(f_l^i, 4)$, $f_m^{i+1} = U(f_h^i, \tfrac{1}{2}) + U(f_m^i, 1) + U(f_l^i, 2)$, and $f_l^{i+1} = U(f_h^i, \tfrac{1}{4}) + U(f_m^i, \tfrac{1}{2}) + U(f_l^i, 1)$, where $f_h^i$, $f_m^i$, and $f_l^i$ are the feature maps in high, medium, and low resolutions at the end of stage $i$.
  • $U(f, s)$ is a unifying function which unifies the channel size as well as upsamples or downsamples the feature map $f$ with scale $s$.
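  • A hedged sketch of one plausible U(f, s): a 1x1 convolution adjusts the channel count and interpolation rescales spatially. The text does not specify the exact operators (e.g., strided convolution versus interpolation for downsampling), so this is an assumption for illustration.

```python
import torch.nn as nn
import torch.nn.functional as F

class Unify(nn.Module):
    """U(f, s): match a target channel count and rescale the feature map spatially by factor s."""
    def __init__(self, in_channels, out_channels, scale):
        super().__init__()
        self.scale = scale   # >1 upsamples, <1 downsamples, 1 leaves the resolution unchanged
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, f):
        f = self.proj(f)     # unify the channel size
        if self.scale != 1:
            f = F.interpolate(f, scale_factor=self.scale,
                              mode="bilinear", align_corners=False)
        return f
```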
  • every feature extraction is done by the LPR module 220c described hereinabove in relation to FIG. 2C. Utilizing the LPR module 220c reduces FLOPs by more than 70% and parameters by over 85% compared with traditional convolutional neural networks.
  • the backbone network 442 processes feature maps across the stages 446a-d by implementing multi-scale feature extraction (i.e., creating feature maps at multiple different resolutions) and by implementing multi-scale feature fusion, where feature maps are upsampled (e.g., so a feature map from the low resolution path 447c is combined with a feature map from the medium resolution path 447b or high resolution path 447a) or downsampled (e.g., so a feature map from the high resolution path 447a is combined with a feature map from the medium resolution path 447b or low resolution path 447c) to create the multi-scale features 445a-c.
  • the backbone network 442 is designed to extract the multi-scale features 445a-c from the input image 441.
  • the backbone network 442 is concatenated with the head network 443 to output the estimated heatmap 444.
  • the overall design of the head network 443 is depicted in FIG. 4B.
  • the multi-scale feature maps 445a-c are obtained from the backbone network 442.
  • a respective convolutional layer 449a-c with kernel size of 1 is applied on each feature map 445a-c to change each feature map’s channel size, i.e., depth, to a fixed size.
  • a heatmap estimating layer 451 with fixed kernel size is applied on the feature maps 450a-c to generate initial estimated heatmaps 452a-c at multiple scales.
  • processing of the multi-scale feature maps 450a-c by the heatmap layer 451 utilizes weight sharing.
  • weight sharing means that the CONV1x1 layers 451 depicted in FIG. 4B are the same module with the same parameters/weights.
  • the head network 443 fuses the initial heatmaps 452a-c by, first, upsampling the smaller-size heatmaps (452b-c) to the size of the largest heatmap 452a.
  • the heatmap 452b is upsampled using the upsampler 453a and the heatmap 452c is upsampled using the upsampler 453b.
  • an upsampler e.g., 453a
  • Upsampling features is similar to upsampling images, in that it increases the height and width of the estimated heatmaps.
  • the upsamplers 453a and 453b can perform the upsampling using mathematical methods such as bicubic or bilinear methods.
  • all the heatmaps (452a and upsampled 452b-c) are summed together to create a fused heatmap 444 as a final estimation of joint locations in the image 441.
  • the heatmaps (452a and upsampled 452b-c) can also be processed using a max() operation to determine the fused heatmap 444.
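  • Putting the pieces of FIG. 4B together, a hedged sketch of the head network: per-scale 1x1 convolutions (449a-c) unify channel depth, a single shared heatmap layer (451) is applied to every scale, the smaller heatmaps are upsampled (453a-b), and the results are summed into the fused heatmap (444). The channel counts, the ordering of scales (largest first), and the bilinear upsampling mode are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleHeatmapHead(nn.Module):
    """Sketch of the head: unify depths, shared heatmap layer, upsample, fuse by summation."""
    def __init__(self, in_channels_per_scale, unified_channels, num_joints):
        super().__init__()
        self.unify = nn.ModuleList(
            [nn.Conv2d(c, unified_channels, kernel_size=1) for c in in_channels_per_scale])
        # A single estimating layer shared across all scales (weight sharing).
        self.heatmap = nn.Conv2d(unified_channels, num_joints, kernel_size=1)

    def forward(self, multi_scale_features):
        # multi_scale_features: list of (N, C_i, H_i, W_i) tensors, largest resolution first.
        initial = [self.heatmap(proj(f))
                   for proj, f in zip(self.unify, multi_scale_features)]
        target = initial[0].shape[-2:]            # largest-scale heatmap sets the output size
        fused = initial[0]
        for h in initial[1:]:
            fused = fused + F.interpolate(h, size=target, mode="bilinear", align_corners=False)
        return fused
```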
  • the objective function can be described as $\mathcal{L} = \lVert G(I) - h \rVert_2^2$, where $I$ is the input cropped image and $h$ is the corresponding ground-truth heatmap generated from the ground-truth keypoints. The objective function aims to help the whole network to learn to extract features from the input image 441 and estimate the possible location of human body joints.
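  • The text does not specify how the ground-truth heatmap h is rendered from the keypoints; a common convention, shown here purely as an assumption, is to place a 2D Gaussian at each keypoint.

```python
import numpy as np

def make_gt_heatmaps(keypoints, height, width, sigma=2.0):
    """Render ground-truth heatmaps as 2D Gaussians centered on the keypoints.

    keypoints: list of (x, y) pixel coordinates, one per joint.
    The Gaussian kernel and sigma are illustrative assumptions, not values from this document.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    heatmaps = np.zeros((len(keypoints), height, width), dtype=np.float32)
    for j, (x, y) in enumerate(keypoints):
        heatmaps[j] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    return heatmaps
```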
  • Multi-scale heatmap supervision can further improve the accuracy of the pose estimation, i.e., the heatmap 444.
  • FIG. 5 is a system 550 for training the neural network(s) 552 according to an embodiment.
  • the neural network(s) 552 trained in the system 550 may be any and/or all of the networks in FIGs. 4A-B.
  • the system 550 begins with the convolutional neural network(s) 552 processing the training image 551 to generate the estimated heatmaps 553a-c which are indications of joints in the training image 551.
  • the heatmaps 553a-c are compared to respective ground-truth heatmaps 555a-c by the loss calculator 554 to calculate the losses 556.
  • the ground-truth heatmaps 555a-c are known accurate indications of joints in the training image 551.
  • each respective ground-truth heatmap 555a-c has the same respective scale as the estimated heatmaps 553a-c. As such, the heatmap 553a and ground truth 555a have the same scale, the heatmap 553b and ground truth 555b have the same scale, and the heatmap 553c and ground truth 555c have the same scale.
  • the loss calculator 554 forwards the losses 556 to the back propagator 557 and the back propagator 557 determines the gradients 558.
  • the gradients 558 are provided to the convolutional neural network(s) 552 so that weights of the neural network(s) 552 are adjusted and, in future iterations, results (e.g., the estimated heatmaps 553a-c) generated by the neural network(s) 552 are closer to the ground-truths 555a-c.
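  • A hedged sketch of one training iteration following FIG. 5: each estimated heatmap is compared to a ground-truth heatmap rendered at the matching scale, the per-scale losses are summed, and the total loss is backpropagated so the optimizer can adjust the weights. The MSE criterion and the optimizer interface are assumptions; the document only states that losses are computed and back-propagated.

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, image, gt_heatmaps_per_scale):
    """model(image) is assumed to return a list of estimated heatmaps, one per scale (553a-c);
    gt_heatmaps_per_scale holds ground-truth heatmaps at the same scales (555a-c)."""
    optimizer.zero_grad()
    estimated = model(image)
    losses = [F.mse_loss(est, gt)                      # loss calculator 554
              for est, gt in zip(estimated, gt_heatmaps_per_scale)]
    total = torch.stack(losses).sum()                  # losses 556
    total.backward()                                   # back propagator 557 computes gradients 558
    optimizer.step()                                   # weights adjusted toward the ground truths
    return total.item()
```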
  • Embodiments implement a novel light-weight deep neural network with multiscale heatmap fusion that is particularly optimized for fast pose estimation applications.
  • An embodiment introduces light-weight modular design for multi-scale feature extraction, heatmap estimation, and fusion.
  • Embodiments significantly reduce the complexity of deep neural networks and solve the scaling problem in pose estimation.
  • embodiments of the present invention greatly reduce the running time required for pose estimation while maintaining a comparable accuracy with existing state-of-the-art methods.
  • Embodiments can be deployed on mobile devices and achieve real-time and accurate pose estimation performance.
  • embodiments can be easily adapted to different network architectures because the described neural networks have an expandable modular design.
  • An example embodiment of the invention uses a low-rank pose estimation framework that reduces computational costs (in FLOPs) by more than 70% and reduces parameters by over 85% while providing comparable accuracy compared with the state-of-the-art methods.
  • Another embodiment applies a backward loop to reconstruct a previous pose estimation from current frames to improve robustness and minimize inconsistent estimation.
  • a novel head structure for pose estimation is also employed in an example embodiment.
  • An example embodiment extracts multi-scale features from an input image and estimates multi-scale joint heatmaps from those feature maps. Then, those multiscale estimations are fused together to produce a final estimation. This approach solves a scaling problem of pose estimation.
  • embodiments of the invention run much faster compared to state-of-the-art methods and achieve comparable accuracy.
  • Example embodiments of the invention have been implemented in mobile devices and run in real-time with robust and accurate performance.
  • An example embodiment of the invention solves a scaling problem of pose estimation by utilizing multi-scale feature extraction, feature fusion, and multi-scale heatmap estimation and fusion mechanisms.
  • Embodiments can be employed in numerous commercial applications. For instance, embodiments can be applied in detecting human behaviors in monitoring systems and embodiments can be applied for human-computer interaction such as in video games which use human body movement as input (e.g., Xbox Kinect). Embodiments can also be applied in many interesting mobile apps that require human body movement as input such as personal fitting and training.
  • FIG. 6 is a simplified block diagram of a computer-based system 660 that may be used to implement any variety of the embodiments of the present invention described herein.
  • the system 660 comprises a bus 663.
  • the bus 663 serves as an interconnect between the various components of the system 660.
  • Connected to the bus 663 is an input/output device interface 666 for connecting various input and output devices, such as a keyboard, mouse, display, and speakers, to the system 660.
  • a central processing unit (CPU) 662 is connected to the bus 663 and provides for the execution of computer instructions implementing embodiments.
  • Memory 665 provides volatile storage for data used for carrying out computer instructions implementing embodiments described herein, such as those embodiments previously described hereinabove.
  • Storage 664 provides non-volatile storage for software instructions, such as an operating system (not shown) and embodiment configurations, etc.
  • the system 660 also comprises a network interface 661 for connecting to any variety of networks known in the art, including wide area networks (WANs) and local area networks (LANs).
  • the various methods and systems described herein may each be implemented by a physical, virtual, or hybrid general purpose computer, such as the computer system 660, or a computer network environment such as the computer environment 770, described herein below in relation to FIG. 7.
  • the computer system 660 may be transformed into the systems that execute the methods described herein, for example, by loading software instructions into either memory 665 or non-volatile storage 664 for execution by the CPU 662.
  • the system 660 and its various components may be configured to carry out any embodiments or combination of embodiments of the present invention described herein.
  • FIG. 7 illustrates a computer network environment 770 in which an embodiment of the present invention may be implemented.
  • the server 771 is linked through the communications network 772 to the clients 773a-n.
  • the environment 770 may be used to allow the clients 773a-n, alone or in combination with the server 771, to execute any of the embodiments described herein.
  • computer network environment 770 provides cloud computing embodiments, software as a service (SAAS) embodiments, and the like.
  • Embodiments or aspects thereof may be implemented in the form of hardware, firmware, or software. If implemented in software, the software may be stored on any non-transient computer-readable medium that is configured to enable a processor to load the software or subsets of instructions thereof. The processor then executes the instructions and is configured to operate or cause an apparatus to operate in a manner as described herein.
  • firmware, software, routines, or instructions may be described herein as performing certain actions and/or functions of the data processors. However, it should be appreciated that such descriptions contained herein are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments identify joints of a multi-limb body in an image. One such embodiment unifies depth of a plurality of multi-scale feature maps generated from an image of a multi-limb body to create a plurality of feature maps each having a same depth. In turn, for each of the plurality of feature maps having the same depth, an initial indication of one or more joints in the image is generated. The one or more joints are located at an interconnection of a limb to the multi-limb body or at an interconnection of a limb to another limb. To continue, a final indication of the one or more joints in the image is generated using each generated initial indication of the one or more joints.

Description

Light-Weight Pose Estimation Network With Multi-Scale Heatmap Fusion
RELATED APPLICATION
[0001] This application claims the benefit of U.S. Provisional Application No. 62/976,099, filed on February 13, 2020. The entire teachings of the above application are incorporated herein by reference.
BACKGROUND
[0002] Locating joints in images, along with pose estimation, i.e., locating body parts in images, has been a computer vision task of increasing importance.
SUMMARY
[0003] Pose estimation aims to generate an interpretable low-dimension representation of bodies in images. Pose estimation is useful for many real-world applications in sports, security, autonomous self-driving cars, and robotics, amongst other examples. Speed and accuracy are two major concerns in pose estimation applications. As a trade-off, existing methods often sacrifice accuracy in order to boost speed. In contrast, embodiments of the present invention provide a light-weight, accurate, and fast pose estimation network with a multi-scale heatmap fusion mechanism to estimate 2D poses from a single RGB image. Advantageously, embodiments can run on mobile devices in real-time while achieving comparable performance with state-of-the-art methods in terms of accuracy.
[0004] One such example embodiment is directed to a method of identifying joints of a multi-limb body in an image. Such an example embodiment, first, unifies depth of a plurality of multi-scale feature maps generated from an image of a multi-limb body to create a plurality of feature maps each having a same depth. In turn, for each of the plurality of feature maps having the same depth, an initial indication of one or more joints in the image is generated. In such an embodiment, the one or more joints are located at an interconnection of a limb to the multi-limb body or at an interconnection of a limb to another limb. To continue, a final indication of the one or more joints in the image is generated using each generated initial indication of the one or more joints.
[0005] An embodiment generates an indication of one or more limbs in the image from the generated final indication of the one or more joints in the image. Such an embodiment may also generate an indication of pose using the generated final indication of the one or more joints in the image and the generated indication of the one or more limbs in the image. [0006] In an embodiment, the final indication of the one or more joints in the image is generated by first, upsampling at least one initial indication of the one or more joints in the image to have a scale equivalent to a scale of a given initial indication of the one or more joints with a largest scale. Second, the upsampled at least one initial indication of the one or more joints and the given initial indication of the one or more joints with the largest scale are added together to generate the final indication of the one or more joints in the image. Another embodiment unifies depth of the plurality of multi-scale feature maps by applying a respective convolutional layer to each of the plurality of multi-scale feature maps to create the plurality of feature maps each having the same depth.
[0007] Yet another embodiment applies a heatmap estimating layer to each of the plurality of feature maps having the same depth to generate each initial indication of the one or more joints in the image. According to an embodiment, the heatmap estimating layer is composed of a convolutional neural network, e.g., is a convolutional neural network layer. [0008] An embodiment trains the aforementioned convolutional neural network. In such an embodiment, the image is a training image. Such an embodiment trains the convolutional neural network by: (1) comparing each generated initial indication of the one or more joints in the image to a respective ground-truth indication of the one or more joints in the training image to determine losses and (2) back propagating the losses to the convolutional neural network. According to an embodiment, each respective ground-truth indication of the one or more joints corresponds to a respective scale of a given feature map of the plurality of feature maps having the same depth.
[0009] Another embodiment generates the plurality of multi-scale feature maps by processing the image using a backbone neural network. According to an embodiment, processing the image using the backbone neural network includes performing multi-scale feature extraction and multi-scale feature fusion to generate the plurality of multi-scale feature maps.
[0010] Another embodiment is directed to a computer system for identifying joints of a multi-limb body in an image. In one such embodiment, the system includes a processor and a memory with computer code instructions stored thereon. The processor and the memory, with the computer code instructions, are configured to cause the system to implement any embodiments or combination of embodiments described herein. [0011] Yet another embodiment is directed to a computer program product for identifying joints in an image. The computer program product comprises one or more non-transitory computer-readable storage devices and program instructions stored on at least one of the one or more storage devices. When the program instructions are loaded and executed by a processor, the program instructions cause an apparatus associated with the processor to implement any embodiments or combination of embodiments described herein.
[0012] It is noted that embodiments of the method, system, and computer program product may be configured to implement any embodiments or combination of embodiments described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The foregoing will be apparent from the following more particular description of example embodiments, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments.
[0014] FIG. 1 is a simplified diagram of a system to identify joints according to an embodiment.
[0015] FIGs. 2A-C are block diagrams of convolutional architectures that may be utilized in embodiments.
[0016] FIG. 3 is a flow diagram of a method for identifying joints of a multi-limb body in an image according to an embodiment.
[0017] FIGs. 4A-B are block diagrams of a system embodiment for identifying joints in an image.
[0018] FIG. 5 is a simplified diagram of an embodiment that trains a neural network.
[0019] FIG. 6 is a simplified block diagram of a computer system for identifying joints of a multi-limb body in an image according to an embodiment.
[0020] FIG. 7 is a simplified diagram of a computer network environment in which an embodiment of the present invention may be implemented.
DETAILED DESCRIPTION
[0021] A description of example embodiments follows.
[0022] The teachings of all patents, published applications, and references cited herein are incorporated by reference in their entirety. [0023] Two-dimensional (2D) pose estimation, which was studied before the deep learning era, is a well-studied, yet challenging problem. Given an input image, the objective of 2D pose estimation is to estimate the 2D location of body joints, e.g., human body parts. [0024] In real-world applications, pose estimation acts as a basis for other tasks such as autonomous driving, security, human action recognition, and human-computer interaction, amongst other examples. Traditionally, pose estimation is done via a graphical pose model. Recently, developments in deep convolutional neural networks (CNNs) have significantly boosted the performance of pose estimation. To improve the performance of pose estimation, existing methods tend to use a deep and high-capacity CNN architecture pretrained on a large-scale dataset and adapted to the pose estimation task [1, 2, 3] (bracketed numbers in this document refer to the enumerated list of references hereinbelow). However, the scaling problem still remains a bottleneck. The scaling problem results from people in images being different sizes (scales) and their joints/limbs also being different sizes. This occurs, for example, when only a person's upper body is in an image. Traditional network architectures tend to capture/detect joints at fixed sizes. Changes in scale greatly reduce the accuracy of these traditional architectures. In an attempt to solve the scaling problem, existing methods use large-capacity networks as the backbone for learning feature representations. The backbone networks are usually designed for image classification [4, 5].
[0025] However, it is difficult to utilize these backbone architectures for direct applications on mobile and embedded devices because of the model complexity of these backbone architectures [4, 5] in terms of time and space. Therefore, there is a need to design dedicated deep convolutional neural network (DCNN) modules to reduce the computational cost and storage size for further applications on end devices, e.g., mobile phones. Although some light-weight structures have emerged recently, their accuracy on pose estimation is unsatisfactory since these light-weight structures are designed for image classification. Thus, a fast and accurate network for pose estimation is needed.
[0026] FIG. 1 is a simplified diagram of a system 100 that provides such functionality. Specifically, the system 100 identifies joints and limbs in an image. The system 100 includes the trained neural network 101. The neural network 101 is trained to identify joints and limbs in an image as described herein. The neural network 101 may implement the system 440 described hereinbelow in relation to FIGs. 4A-B and the neural network 101 may be configured to carry out the method 330 described hereinbelow in relation to FIG. 3.
[0027] In operation, the trained neural network 101 receives the image 102 and processes the image 102 to generate the indication 103 of body parts, e.g., the joint 104 and the limb 105, in the image 102.
[0028] Embodiments, e.g., the system 100, implement a light-weight pose estimation network with a multi-scale heatmap fusion mechanism. In an embodiment, the proposed network has two parts: a backbone architecture and a head structure. To achieve low model complexity, an embodiment utilizes a plug-and-play structure referred to herein as the Low-rank Pointwise Residual (LPR) module. The structure 220c of the LPR module is shown in FIG. 2C. In contrast, FIG. 2A illustrates a standard convolution network 220a and FIG. 2B illustrates a depthwise separable convolution (DSC) module 220b [6]. While embodiments can utilize the structures 220a-c, particularly advantageous efficiencies are achieved by embodiments utilizing the LPR module structure 220c of FIG. 2C.
[0029] On one hand, the computation cost and parameters are reduced significantly when the number of point-wise layers, e.g., the 1-by-1 convolution layer P1 in FIG. 2C, is much less than the number of input channels. On the other hand, to compensate for the low-rankness of pointwise convolution and the performance recession due to this compression, a residual operation through depthwise convolution is implemented to complement the feature maps without any additional parameters.
[0030] To achieve better performance for pose estimation, an embodiment implements the LPR module (the structure 220c) on the architecture of HRNet [3], which is specifically designed for pose estimation and achieves state-of-the-art performance by maintaining high-resolution representations through the whole process of pose estimation. To further improve the performance, an embodiment implements a novel multi-scale heatmap estimation and fusion mechanism, which localizes joints from extracted feature maps at multiple scales and combines the multi-scale results together to make a final joint location estimation. The multi-scale estimation and fusion technique attempts to localize body joints on different scales using a single estimating layer. In embodiments, the estimation is done on multi-scale feature maps, and because a single shared estimating layer is applied to all of them, the layer searches for same-scale joints across feature maps of multiple scales, which is equivalent to searching for joints of multiple scales in the same image. This allows embodiments to handle different scales. Such a design of the head network further boosts the accuracy of pose estimation performance.
[0031] By implementing a light-weight structure that uses a low-rank approach and implementing the structure on the architecture of HRNet [3] as a backbone, embodiments reduce computational costs by more than 70% (in FLOPs) and reduce parameters by over 85% with only a 2% loss in accuracy. This is shown in Table 2 below where a standard convolution layer, e.g., SConv, has 589,824 parameters which require 2.36 MB of memory to store, while the LPR block used in embodiments has only 18,688 parameters and only requires 0.07 MB of memory to store. In embodiments, parameters are weights stored in a convolution layer and, thus, the number of parameters refers to the number of weights. Typically, one parameter, i.e., weight, requires 4 bytes to store. Thus, it is advantageous to reduce the number of parameters as described herein. Further, embodiments provide a novel head structure for pose estimation. Embodiments extract multi-scale feature maps from the input image and estimate multi-scale joint heatmaps from those multi-scale feature maps. Then, those multi-scale estimations (feature maps) are fused together to determine a final estimation. This approach solves the scaling problem of pose estimation.
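By way of non-limiting illustration, the short Python calculation below reproduces the storage figures quoted above under the stated assumption of 4 bytes per weight:

    # Each parameter (weight) is assumed to occupy 4 bytes (a 32-bit float).
    BYTES_PER_PARAM = 4

    def storage_mb(num_params):
        """Approximate storage, in megabytes, for a given parameter count."""
        return num_params * BYTES_PER_PARAM / 1e6

    print(storage_mb(589_824))  # standard convolution layer (SConv): ~2.36 MB
    print(storage_mb(18_688))   # LPR block: ~0.07 MB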
[0032] Deep Light-weight Structure
[0033] In recent years, methods have emerged for speeding up deep learning models.
A faster activation function, the rectified-linear activation function (ReLU), was proposed to accelerate models [7]. Jin et al. [8] presented the flattened CNN structure to accelerate the feedforward procedure. In [9], depthwise separable convolution was initially introduced; it was subsequently used in Inception models [10], the Xception network [11], MobileNet [6, 12], ShuffleNet [13, 14], and CondenseNet [15].
[0034] Besides designing architectures manually, searching CNN architectures automatically is another significant approach. Many networks are found by search algorithms, such as Darts [16], NasNet [17], PNasNet [18], ProxylessNas [19], FBNet [20], MNasNet [21], MobileNetv3 [22], and MixNet [23]. These implementations pushed the state-of-the-art performance while requiring fewer FLOPs and parameters.
[0035] Low-rank methods are another way to make light-weight models. Group Lasso [24] is an efficient method for regularizing the learning of sparse structures. Jaderberg et al. [25] applied low-rank theory to the filter weights by separating the convolution along different dimensions. An architecture referred to as SVDNet [26] also considers matrix low-rankness in its framework to optimize the deep representation learning process. IGC [27, 28, 29] utilizes grouped pointwise convolution to factorize the weight matrices as block matrices. In contrast to IGC, embodiments of the present invention implement the LPR module 220c of FIG. 2C, which employs a low-dimension pointwise layer to compress the model. Moreover, embodiments can recover the information loss with residuals from the depthwise layer and L2 layer normalization.
[0036] Pose Estimation
[0037] Pose Estimation aims to estimate poses of people, e.g., multiple person poses, in an image. Pose estimation has been studied in computer vision [30, 31, 32, 33, 34] for a long time. Before deep learning was introduced, pose estimation methods utilized pictorial structures [30] or graphical models [34]. Recently, with the development and application of deep learning models, i.e., neural networks, attempts have been made to utilize deep convolutional neural networks to do 2D multi-person pose estimation. These attempts can be categorized into two major categories: (1) top-down methods and (2) bottom-up methods. [0038] Top-down approaches have two stages. The first stage detects people in the image using a person detector. The second stage uses a single-person pose estimator to determine poses for the people detected in the first stage. He et al. [35] extended the Mask-RCNN framework to human pose estimation by predicting a one-hot mask for each body part. Papandreou et al. [36] utilized a Faster RCNN detector to predict person boxes and applied ResNet in a fully convolutional fashion to predict heatmaps for every body part. Fang et al. [37] designed a symmetric spatial transformer network to alleviate the inaccurate bounding box problem.
[0039] Bottom-up approaches also have two stages. Bottom-up approaches first detect body parts and, second, associate body parts into people. Pishchulin et al. [38] proposed using an Integer Linear Program method to solve the body part association problem, i.e., associating joints estimated from an image into different persons. Cao et al. [2] introduced Part Affinity Fields to predict the direction and activations for each limb to help associate body parts. Newell et al. [39] utilized predicted pixel-wise embeddings to assign detected body parts into different groups.
[0040] More recently, there have been efforts to develop a single-stage approach for multi-person pose estimation [40]. The speed of single-stage methods surpasses the two-stage methods, but the accuracy of single-stage methods is still much lower than the state-of-the-art top-down methods.
[0041] Embodiments follow a top-down approach and utilize a person detector to first detect a person bounding box and, second, estimate the location of body joints within the bounding box. Embodiments shrink down the capacity of pose estimation networks using a novel light-weight neural network block and utilize a multi-scale heatmap extraction and fusion mechanism to solve the scaling problem and improve the performance.
[0042] FIG. 3 is a flow diagram of a method 330 for identifying joints of a multi-limb body in an image according to an embodiment. The method 330 unifies 331 depths of a plurality of multi-scale feature maps generated from an image of a multi-limb body to create a plurality of feature maps each having a same depth. A typical feature map has four dimensions [N,C,H,W] where N is mini-batch size, C is channels/depth, H is height, and W is width. Embodiments unify 331 the feature maps so they have the same number of channels/same depth. In turn, for each of the plurality of feature maps having the same depth (the depth-unified feature maps), the method 330 generates 332 an initial indication of one or more joints in the image. In other words, at 332, for each respective depth-unified feature map, a respective initial indication of joints in the image is generated. According to an embodiment, the one or more joints are located at an interconnection of a limb to the multi-limb body or at an interconnection of a limb to another limb. To continue, a final indication of the one or more joints in the image is generated 333 using each generated initial indication of the one or more joints. In an embodiment of the method 330, the unifying 331, generation 332 of the initial indications of one or more joints, and generation 333 of the final indication of the one or more joints may be implemented as described hereinbelow in relation to FIG. 4B.
[0043] Embodiments of the method 330 may be used to identify joints of any type of object. For example, in an embodiment, the indication of the one or more joints in the image corresponds to joints of at least one of: a human, animal, machine, and robot, amongst other examples. Moreover, embodiments may identify joints for multiple objects, e.g., people, in an image.
[0044] According to an embodiment, the initial indications of one or more joints and final indication of one or more joints are indications of locations of joints in the image. In an embodiment, the indications of one or more joints indicate a probability of a joint at each location in the image. According to an embodiment, locations are x-y coordinates in the image. Further, in an embodiment, the unit of the locations, e.g., coordinates, is pixels. [0045] An example embodiment of the method 330 generates the plurality of multi-scale feature maps (that are unified 331) by processing the image using a backbone neural network. According to an embodiment, processing the image using the backbone neural network comprises performing multi-scale feature extraction and multi-scale feature fusion to generate the plurality of multi-scale feature maps. According to an embodiment of the method 330, the plurality of multi-scale feature maps are generated using the functionality described hereinbelow in relation to FIG. 4A and, specifically, the backbone network 442 of the system 440.
[0046] An embodiment of the method 330 unifies 331 depths of the plurality of multi-scale feature maps by applying a respective convolutional layer to each of the plurality of multi-scale feature maps to create the plurality of feature maps each having the same depth.
In other words, in such an embodiment, a different convolutional layer is applied to each different feature map, and these different convolutional layers are configured to output feature maps that have the same depth. It can be said that such functionality unifies channels of the feature maps. In an embodiment, the feature maps are unified using the functionality described hereinbelow in relation to FIG. 4B.
[0047] Yet another embodiment generates 332 the initial indication of the one or more joints in the image for each of the plurality of feature maps having the same depth by applying a heatmap estimating layer to each of the plurality of feature maps having the same depth. In such an embodiment, a respective indication of joints in the image is generated 332 for each respective feature map. In an embodiment, the heatmap estimating layer is composed of a convolutional neural network.
[0048] Another embodiment of the method 330 trains the heatmap estimating layer composed of the convolutional neural network that is used to generate 332 the initial indications of the one or more joints in the image. In such an embodiment, the image is a training image. Such an embodiment trains the convolutional neural network by comparing each generated initial indication of the one or more joints in the image to a respective ground-truth indication of the one or more joints in the training image to determine losses. These determined losses are back-propagated to the convolutional neural network to adjust weights of the neural network. According to an embodiment, each respective ground-truth indication of the one or more joints corresponds to a respective scale of a given feature map of the plurality of feature maps having the same depth. Further training functionality that may be employed in embodiments is described hereinbelow in relation to FIG. 5.
[0049] According to an embodiment of the method 330, the final indication of the one or more joints in the image is generated 333 by first, upsampling at least one initial indication of the one or more joints in the image to have a scale equivalent to a scale of a given initial indication of the one or more joints with a largest scale. Such functionality may include performing upsampling on a plurality of initial indications of the one or more joints so that all of the initial indications have an equal scale, i.e., size. In an embodiment, the sizes (HxW) of the initial estimations of joint locations generated using the multi-scale feature maps are the same sizes as the feature maps. To illustrate, consider an embodiment with three multi-scale feature maps (64x64, 128x128, 256x256). In such an embodiment, the initial joints/body estimation on those feature maps will have the same sizes (64x64, 128x128, 256x256). By upsampling the initial estimations to the same size (256x256), the estimations can be added together, or processed with a max() operator, to generate the final indication of joints (256x256). In an embodiment, the initial estimations are matrices/tensors filled with float values. After the upsampling, the initial estimations have the same size (number of joints x height x width). These matrices can be added together elementwise. Likewise, the max() operator can be implemented elementwise. In such implementations, the result of adding the matrices together elementwise or applying the max() operator elementwise is the final indication of joints in the image.
[0050] In an embodiment, the upsampling processes the initial indications of the one or more joints so that the indications have the same scale as the initial indication with the largest scale. As such, in an embodiment, one initial indication is not upsampled, the initial indication with the largest scale. To continue, the upsampled at least one initial indication of the one or more joints and the given initial indication of the one or more joints with the largest scale are added together to generate 333 the final indication of the one or more joints in the image. In an embodiment, the final indication of the one or more joints is generated 333 by adding together all of the initial indications of joints (which were previously upsampled). An embodiment generates 333 the final indication of the one or more joints in the image as described hereinbelow in relation to FIG. 4B.
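By way of non-limiting illustration, the following Python (PyTorch) sketch shows one way the upsampling and elementwise fusion described above could be carried out; the function name, the bilinear interpolation mode, and the 64/128/256 sizes follow the example above and are assumptions rather than requirements of any embodiment.

    import torch
    import torch.nn.functional as F

    def fuse_heatmaps(initial_heatmaps, mode="sum"):
        # initial_heatmaps: list of tensors shaped [N, num_joints, H_i, W_i],
        # e.g., 64x64, 128x128 and 256x256 initial estimations of joint locations.
        largest = max(h.shape[-1] for h in initial_heatmaps)
        upsampled = [
            F.interpolate(h, size=(largest, largest), mode="bilinear", align_corners=False)
            for h in initial_heatmaps
        ]
        stacked = torch.stack(upsampled, dim=0)
        # Combine the same-size estimations elementwise, by summation or by max().
        return stacked.sum(dim=0) if mode == "sum" else stacked.max(dim=0).values

    # Example: three initial estimations for 17 joints fused into a 256x256 final indication.
    maps = [torch.rand(1, 17, s, s) for s in (64, 128, 256)]
    fused = fuse_heatmaps(maps)  # shape [1, 17, 256, 256]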
[0051] Another embodiment of the method 330 generates an indication of one or more limbs in the image from the generated 333 final indication of the one or more joints in the image. Such an embodiment may also generate an indication of pose using the generated final indication of the one or more joints in the image and the generated indication of the one or more limbs in the image.
[0052] Hereinbelow, a problem formulation for joint identification is provided. In addition, a light-weight convolutional neural network module and a framework architecture for light-weight multi-scale feature map extraction that may be utilized in embodiments are described. Details for estimating and fusing multi-scale heatmaps according to embodiments for identifying joints in images is also further elaborated upon.
[0053] Problem Definition
[0054] Let F be an image containing multiple persons and I be a cropped image (H x W x 3) of one single person using a corresponding bounding box estimated from a pretrained person detector. Let p (np x 2) denote the 2D x-y coordinates of the body joint keypoints of that person. Then, the objective can be described as finding the estimated heatmap of human body joints h from the input cropped image I, denoted as h = G(I). The mapping function G is obtained by training the proposed deep convolutional neural networks. The estimated 2D keypoints p can be obtained by finding the location of the strongest responding signal from the heatmap.
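As a non-limiting illustration of the last step of this problem definition, the Python (PyTorch) sketch below recovers p by taking, for each joint, the location of the strongest response in the heatmap; the function name and the assumption of one heatmap channel per joint are illustrative.

    import torch

    def keypoints_from_heatmap(h):
        # h: [num_joints, H, W] estimated heatmap, one channel per body joint.
        num_joints, H, W = h.shape
        flat_idx = h.view(num_joints, -1).argmax(dim=1)  # strongest response per joint
        ys = (flat_idx // W).float()
        xs = (flat_idx % W).float()
        return torch.stack([xs, ys], dim=1)  # p: [num_joints, 2] x-y coordinates in pixels

    p = keypoints_from_heatmap(torch.rand(17, 64, 64))  # -> tensor of shape [17, 2]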
[0055] As described in further detail below, an implementation utilizes a deep neural network architecture (referred to herein as backbone) which extracts features to capture the related information contained in the images. Then, a shallow convolutional neural network (referred to herein as head) is used to estimate the heatmap of joints, e.g., human joints.
[0056] Light-weight CNN Block
[0057] Hereinbelow, a low-rank pointwise residual network (LPRNet) that may be used in embodiments is described. First, the matrix explanations of standard convolution [4] and depthwise separable convolution [6] are described. Next, a novel LPR structure and functionality for using the novel LPR structure to build the LPRNet is presented. Finally, discussions and preliminary experiments of the LPRNet are shown. Denotations used herein are summarized in Table 1.
Table 1: Denotations (presented as an image in the original publication; it defines the symbols used in the equations below)
[0058] Table 2 below compares the computational cost in FLOPs and parameters of existing networks and the LPRNet that may be used in embodiments. SConv, DSC, ShuffleNetv2, and LPR modules are used to build VGG [4], MobileNetv1 [6], ShuffleNetv2 [14], and the LPRNet, respectively.
Table 2: Comparisons of Computational Costs (FLOPs) and Parameters (presented as an image in the original publication; as noted above, the standard convolution layer in this comparison has 589,824 parameters while the LPR block has only 18,688)
[0059] Standard Convolutions (SConv)
[0060] In traditional DCNNs, the convolution operation is applied between each filter and the input feature map. Essentially, the filter applies different weights to different features while doing convolution. Afterwards, all features convoluted by one filter are added together to generate a new feature map. The whole procedure is equivalent to certain matrix products, which can be formally written as:
[F̂1, F̂2, ..., F̂n]^T = [Wij] ⊛ [F1, F2, ..., Fm]^T     (1)
where Wij is the weight of the filter i corresponding to the feature map j, Fj is the input feature map, and Wij ⊛ Fj means the feature map Fj is convoluted by a filter with the weight Wij.
Herein, each Wij is a 3 x 3 matrix (filter), and together the Wij constitute a large matrix [Wij], or simply W.
[0061] Depthwise Separable Convolutions (DSC)
[0062] Depthwise Separable Convolution layers are key components for many light-weight neural networks [13, 6, 12]. A DSC structure has two layers, a depthwise convolutional layer and a pointwise convolutional layer [6].
[0063] The depthwise convolutional layer applies a single convolutional filter to each input channel, which massively reduces the parameters and computational cost. Following the process of its convolution, the depthwise convolution can be described using the matrix:
[F̂1, ..., F̂m]^T = diag(D11, ..., Dmm) ⊛ [F1, ..., Fm]^T     (2)
In Equation 2, each Dii is usually a 3 x 3 matrix, and m is the number of the input feature maps. An embodiment defines D as the matrix diag(D11, ..., Dmm). Because D is a diagonal matrix, the depthwise layer has significantly fewer parameters than a standard convolution layer.
[0064] The pointwise convolutional layer uses 1x1 convolution to build the new features through computing the linear combinations of all input channels. The pointwise convolutional layer is a kind of traditional convolution layer with the kernel size set as 1. Following the process of its convolution, the pointwise convolution can be described using the matrix:
[F̂1, ..., F̂n]^T = [Pij] [F1, ..., Fm]^T     (3)
In Equation 3, each Pij is a scalar, m is the number of input feature maps, and n is the number of outputs. The computational cost is SF x SF x Cin x Cout, and the number of parameters is Cin x Cout. An embodiment defines P ∈ R^(m x n) as the matrix [Pij]. Since the depthwise separable convolution is composed of the depthwise convolution and the pointwise convolution, the depthwise separable convolution can be represented as:
Fdsc = P (D ⊛ F)     (4)
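For illustration only, a minimal PyTorch sketch of the depthwise separable convolution of Equations 2-4 is given below; this is the standard construction, and the class and argument names are placeholders.

    import torch.nn as nn

    class DSC(nn.Module):
        """Depthwise separable convolution: depthwise 3x3 (D) followed by pointwise 1x1 (P)."""
        def __init__(self, in_channels, out_channels):
            super().__init__()
            # One 3x3 filter per input channel, i.e., the diagonal matrix D.
            self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                       padding=1, groups=in_channels, bias=False)
            # 1x1 convolution mixing channels, i.e., the matrix P.
            self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))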
[0065] LPR Structure
[0066] This subsection details the proposed LPR module 220c of FIG. 2C. As noted in the previous section, the depthwise convolution can be considered as the convolution between a diagonal matrix diag(D11 . . . Dmm) and a feature map matrix [F1 ... Fm]. The LPR structure keeps this procedure, but further explores the pointwise convolution in the following manner. To further reduce the size of matrix P, an embodiment implements a low-rank decomposition of P such that P ≈ P1 P2, where P1 has size m x r and P2 has size r x n. Clearly, the highest rank of this approximation is r, and the size of m x r is much smaller than m^2. Thus, an embodiment can convert the original DSC module to:
Fp = P1 P2 (D ⊛ F)     (5)
where Fp means the output features after this new low-rank pointwise convolution operation. [0067] While using the strategy above, an embodiment may reduce the parameters and computational cost; however, such an embodiment may undermine the original structure of P when r is inappropriately small, e.g., r < rank(P). To address this issue, an embodiment adds the term D ⊛ F, i.e., the original feature map after the depthwise convolution with D. This ensures that if the overall structure of P is compromised, the depthwise convolution is still able to capture the spatial features of the input. Note, this is similar to the popular residual learning where the input is added to the module output, but embodiments use D ⊛ F instead. By considering this residual term, such an embodiment can formulate a low-rank pointwise residual module as:
Flpr = (P1 P2 + Im) (D ⊛ F)     (6)
where Im is an identity matrix. To further improve the performance, an embodiment may normalize the features of P1 P2 (D ⊛ F) with L2 normalization on the channel, and apply batch normalization on D.
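A non-limiting PyTorch-style sketch of this LPR formulation follows; the rank argument, the placement of the batch normalization on the depthwise branch, and the L2 channel normalization of the low-rank branch are interpretations of the description above rather than a definitive implementation.

    import torch.nn as nn
    import torch.nn.functional as F

    class LPR(nn.Module):
        """Low-rank Pointwise Residual block: Flpr = P1 P2 (D ⊛ F) + (D ⊛ F)."""
        def __init__(self, channels, rank):
            super().__init__()
            # Depthwise 3x3 convolution (the diagonal matrix D), batch-normalized.
            self.depthwise = nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                                       groups=channels, bias=False)
            self.bn = nn.BatchNorm2d(channels)
            # Low-rank factorization of the pointwise matrix P ≈ P1 P2 as two 1x1 convs.
            self.p2 = nn.Conv2d(channels, rank, kernel_size=1, bias=False)
            self.p1 = nn.Conv2d(rank, channels, kernel_size=1, bias=False)

        def forward(self, x):
            d = self.bn(self.depthwise(x))                # batch-normalized D ⊛ F
            low_rank = self.p1(self.p2(d))                # P1 P2 (D ⊛ F)
            low_rank = F.normalize(low_rank, p=2, dim=1)  # L2 normalization on the channel
            return low_rank + d                           # residual term complements the features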
[0068] With the factorization of the large matrix P, the LPR described herein successfully reduces the parameters and computational costs compared with other state-of-the-art modules. To verify these performance improvements, a set of experiments on ImageNet with the MobileNet architecture has been performed to select the best rank control parameter k during the low-rank decomposition. The results of these experiments are shown in Table 3.
Table 3: Experiments to select the best rank parameter k (presented as an image in the original publication)
[0069] The results in Table 3 show that if k is 8, good performance is achieved while also providing a significant reduction to the computational costs and parameters. With k = 8 as the rank control parameter, the theoretical comparisons among the prevalent light-weight modules are shown in Table 2. The results in Table 2 show that the LPR module has the lowest computational costs and parameters when the input and output are the same. Note that k > 4 is the sufficient and necessary condition which can result in the LPR module having lower computational costs and parameters than a ShuffleNetv2 module. Thus, k should be larger than 4. Note that P1 and P2 are learned to approximate the optimized matrices through training.
[0070] Multi-scale Feature Extraction and Fusion
[0071] An embodiment implements a multi-scale feature extraction and fusion approach to extract high-resolution features from an input image as detailed in FIG. 4A. This high-resolution feature extraction approach is suitable for the fine-grained task of body joint localization. FIGs. 4A and 4B illustrate a system 440 that processes an image 441 with a backbone network 442 and head network 443 to generate a heatmap 444 that indicates the location of joints in the image 441. FIG. 4A illustrates details of processing the image 441 using the backbone network 442 to create the multi-scale features 445a-c. FIG. 4B illustrates details of processing the multi-scale features 445a-c using the head network 443 to generate the heatmap 444.
[0072] In the system 440, the backbone 442 is constructed in a parallel architecture. The backbone 442 is a multi-stage, multi-scale feature extracting network with multi-scale feature exchanging. The backbone network 442 extracts features in high-resolution, medium-resolution, and low-resolution scales. At the first stage 446a, in the high-resolution path 447a, the backbone 442 extracts features from the input image 441 in the original resolution without downsampling to create the feature map 448a. Meanwhile, in the first stage 446a of the mid-resolution path 447b, the backbone 442 downsamples from the original resolution of the image 441 while extracting features once to create the feature map 448b. At the second stage 446b, in the low-resolution path 447c, the backbone extracts features and downsamples from the mid-resolution path 447b once to create the feature map 448c. In this way, the backbone 442 implements multi-scale feature extraction, i.e., determines feature maps at multiple different resolutions.
[0073] Meanwhile, there are exchanging modules (illustrated by merging arrows in FIG. 4A) between each stage 446b-d, which fuses multi-scale features together to be the input features of the next stage. This fusing allows features from the different resolution paths 447a-c to exchange their information and better learn representation. In this way, the backbone 442 implements multi-scale feature fusion, i.e., combines feature maps from multiple different resolutions. Mathematically the exchanging process can be expressed as follows:
f_r^(i+1) = Σ_{r'} U(f_r'^i, s_r'r),  where r, r' ∈ {h, m, l}     (7)
where f_h^i, f_m^i, and f_l^i are the feature maps in high, medium, and low resolutions at the end of stage i, s_r'r denotes the scale change from path r' to path r, and U(f, s) is a unifying function which unifies the channel size as well as upsamples or downsamples the feature map f with scale s. In an embodiment, at each stage 446a-d, every feature extraction is done by the LPR module 220c described hereinabove in relation to FIG. 2C. Utilizing the LPR module 220c reduces more than 70% FLOPs and over 85% parameters compared with traditional convolutional neural networks.
[0074] The backbone network 442 processes feature maps across the stages 446a-d by implementing multi-scale feature extraction (i.e., creating feature maps at multiple different resolutions) and by implementing multi-scale feature fusion where feature maps are upsampled (e.g., so a feature map from the low resolution path 447c is combined with a feature map from the medium resolution path 447b or high resolution path 447a) or downsampled (e.g., so a feature map from the high resolution path 447a is combined with a feature map from the medium resolution path 447b or low resolution path 447c) to create the multi-scale features 445a-c. It is noted that while three resolution paths 447a-c are implemented by the backbone network 442 illustrated in FIG. 4A, embodiments are not so limited and more or fewer resolution paths may be utilized.
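By way of non-limiting illustration, a highly simplified Python (PyTorch) sketch of such an exchanging module between resolution paths follows; the use of plain 1x1 convolutions for channel unification and bilinear resizing for scale changes, as well as the class and argument names, are assumptions made for clarity rather than the actual construction of the backbone 442.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ExchangeModule(nn.Module):
        """Fuse high/medium/low resolution feature maps into the next stage's inputs."""
        def __init__(self, channels):  # channels = (c_high, c_mid, c_low)
            super().__init__()
            # A 1x1 convolution unifies channel size between every source/target path pair.
            self.unify = nn.ModuleList([
                nn.ModuleList([nn.Conv2d(c_in, c_out, kernel_size=1, bias=False)
                               for c_out in channels])
                for c_in in channels
            ])

        def forward(self, feats):  # feats: [f_high, f_mid, f_low]
            outputs = []
            for j, target in enumerate(feats):
                fused = 0
                for i, src in enumerate(feats):
                    x = self.unify[i][j](src)
                    # Upsample or downsample to the target path's spatial size.
                    x = F.interpolate(x, size=target.shape[-2:], mode="bilinear",
                                      align_corners=False)
                    fused = fused + x
                outputs.append(fused)
            return outputs

    feats = [torch.rand(1, 32, 64, 64), torch.rand(1, 64, 32, 32), torch.rand(1, 128, 16, 16)]
    high, mid, low = ExchangeModule((32, 64, 128))(feats)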
[0075] Multi-scale Heatmap Estimation, Fusion and Supervision
[0076] As described above, in the system 440 the backbone network 442 is designed to extract the multi-scale features 445a-c from the input image 441. In the system 440, the backbone network 442 is concatenated with the head network 443 to output the estimated heatmap 444. The overall design of the head network 443 is depicted in FIG. 4B.
[0077] In the head network 443, first, the multi-scale feature maps 445a-c are obtained from the backbone network 442. A respective convolutional layer 449a-c with kernel size of 1 is applied on each feature map 445a-c to change each feature map's channel size, i.e., depth, to a fixed size. This results in the multi-scale feature maps 450a-c which all have the same channel size, i.e., unified depth. Then, a heatmap estimating layer 451 with fixed kernel size is applied on the feature maps 450a-c to generate initial estimated heatmaps 452a-c at multiple scales. It is noted that in the head network 443 there is a single heatmap layer 451, but the heatmap layer 451 is depicted multiple times to more clearly illustrate the functionality of the head network 443. In an embodiment, processing of the multi-scale feature maps 450a-c by the heatmap layer 451 utilizes weight sharing. Here, weight sharing means that the CONV1x1s 451 depicted in FIG. 4B are the same module that have the same parameters/weights. To continue, the head network 443 fuses the initial heatmaps 452a-c by, first, upsampling the smaller-size heatmaps (452b-c) to the size of the largest heatmap 452a. In such an embodiment, the heatmap 452b is upsampled using the upsampler 453a and the heatmap 452c is upsampled using the upsampler 453b. To illustrate an embodiment, an upsampler, e.g., 453a, may upsample a 64x64 heatmap to be 128x128. Upsampling features is similar to upsampling images, in that it increases the height and width of the estimated heatmaps. The upsamplers 453a and 453b can perform the upsampling using mathematical methods such as bicubic or bilinear methods. Second, all the heatmaps (452a and upsampled 452b-c) are summed together to create a fused heatmap 444 as a final estimation of joint locations in the image 441. The heatmaps (452a and upsampled 452b-c) can also be processed using a max() operation to determine the fused heatmap 444. The objective function can be described as
L = || G(I) - h ||^2     (8)
where I is the input cropped image and h is the corresponding ground-truth heatmap generated from the ground-truth keypoints. The objective function aims to help the whole network to learn to extract features from the input image 441 and estimate the possible location of human body joints. This objective can be further extended to multi-scale supervision, which supervises heatmaps at multiple scales by
Lms = Σ_{i = l, m, h} || ĥi - hi ||^2     (9)
where ĥi is the heatmap estimated at scale i and the subscript i = l, m, h indicates the low-resolution, mid-resolution, and high-resolution feature extraction and ground-truth heatmaps. Multi-scale heatmap supervision can further improve the accuracy of the pose estimation, i.e., the heatmap 444.
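A non-limiting Python (PyTorch) sketch of the head network 443 described above is shown below; the channel counts, number of joints, and the use of a 1x1 kernel for the shared heatmap layer are placeholder assumptions. The per-scale initial heatmaps this module returns would then be fused as in the fusion sketch given earlier.

    import torch.nn as nn

    class HeatmapHead(nn.Module):
        """Unify channel depth per scale, then estimate joints with a single shared layer."""
        def __init__(self, in_channels=(32, 64, 128), unified=32, num_joints=17):
            super().__init__()
            # One 1x1 convolution per scale changes each feature map's depth to a fixed size.
            self.unify = nn.ModuleList(
                [nn.Conv2d(c, unified, kernel_size=1) for c in in_channels])
            # A single heatmap-estimating layer shared (same weights) across all scales.
            self.heatmap = nn.Conv2d(unified, num_joints, kernel_size=1)

        def forward(self, multi_scale_feats):
            # Returns one initial estimated heatmap per scale; fusion is applied afterwards.
            return [self.heatmap(u(f)) for u, f in zip(self.unify, multi_scale_feats)]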
[0078] FIG. 5 is a system 550 for training the neural network(s) 552 according to an embodiment. The neural network(s) 552 trained in the system 550 may be any and/or all of the networks in FIGs. 4A-B.
[0079] The system 550 begins with the convolutional neural network(s) 552 processing the training image 551 to generate the estimated heatmaps 553a-c which are indications of joints in the training image 551. The heatmaps 553a-c are compared to respective ground-truth heatmaps 555a-c by the loss calculator 554 to calculate the losses 556. The ground-truth heatmaps 555a-c are known accurate indications of joints in the training image 551. According to an embodiment, each respective ground-truth heatmap 555a-c has the same respective scale as the estimated heatmaps 553a-c. As such, the heatmap 553a and ground truth 555a have the same scale, the heatmap 553b and ground truth 555b have the same scale, and the heatmap 553c and ground truth 555c have the same scale.
[0080] To continue, the loss calculator 554 forwards the losses 556 to the back propagator 557 and the back propagator 557 determines the gradients 558. The gradients 558 are provided to the convolutional neural network(s) 552 so that weights of the neural network(s) 552 are adjusted and, in future iterations, results (e.g., the estimated heatmaps 553a-c) generated by the neural network(s) 552 are closer to the ground-truths 555a-c.
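For illustration only, a minimal training-step sketch corresponding to FIG. 5 follows; the use of a mean-squared-error loss per scale and of a generic optimizer are assumptions, and the function name is a placeholder.

    import torch.nn.functional as F

    def training_step(model, optimizer, image, gt_heatmaps):
        # gt_heatmaps: list of ground-truth heatmaps 555a-c, one per scale.
        estimated = model(image)                 # estimated heatmaps 553a-c
        losses = [F.mse_loss(est, gt) for est, gt in zip(estimated, gt_heatmaps)]
        total_loss = sum(losses)                 # combined multi-scale losses 556
        optimizer.zero_grad()
        total_loss.backward()                    # back-propagate to compute the gradients 558
        optimizer.step()                         # adjust the weights of the network(s) 552
        return total_loss.item()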
[0081] Embodiments implement a novel light-weight deep neural network with multi-scale heatmap fusion that is particularly optimized for fast pose estimation applications. An embodiment introduces a light-weight modular design for multi-scale feature extraction, heatmap estimation, and fusion. Embodiments significantly reduce the complexity of deep neural networks and solve the scaling problem in pose estimation. As a result, embodiments of the present invention greatly reduce the running time required for pose estimation while maintaining accuracy comparable with existing state-of-the-art methods. Embodiments can be deployed on mobile devices and achieve real-time and accurate pose estimation performance. Advantageously, embodiments can be easily adapted to different network architectures because the described neural networks have an expandable modular design. [0082] An example embodiment of the invention uses a low-rank pose estimation framework that reduces computational costs by more than 70% (in FLOPs) and reduces parameters by over 85% while providing accuracy comparable with state-of-the-art methods. Another embodiment applies a backward loop to reconstruct a previous pose estimation from current frames to improve robustness and minimize inconsistent estimation. A novel head structure for pose estimation is also employed in an example embodiment. An example embodiment extracts multi-scale features from an input image and estimates multi-scale joint heatmaps from those feature maps. Then, those multi-scale estimations are fused together to produce a final estimation. This approach solves a scaling problem of pose estimation.
[0083] Advantageously, embodiments of the invention run much faster compared to state-of-the-art methods and achieve comparable accuracy. Example embodiments of the invention have been implemented in mobile devices and run in real-time with robust and accurate performance. An example embodiment of the invention solves a scaling problem of pose estimation by utilizing multi-scale feature extraction, feature fusion, and multi-scale heatmap estimation and fusion mechanisms. [0084] Embodiments can be employed in numerous commercial applications. For instance, embodiments can be applied in detecting human behaviors in monitoring systems and embodiments can be applied for human-computer interaction such as in video games which use human body movement as input (e.g., Xbox Kinect). Embodiments can also be applied in many interesting mobile apps that require human body movement as input such as personal fitting and training.
[0085] FIG. 6 is a simplified block diagram of a computer-based system 660 that may be used to implement any variety of the embodiments of the present invention described herein. The system 660 comprises a bus 663. The bus 663 serves as an interconnect between the various components of the system 660. Connected to the bus 663 is an input/output device interface 666 for connecting various input and output devices such as a keyboard, mouse, display, speakers, etc. to the system 660. A central processing unit (CPU) 662 is connected to the bus 663 and provides for the execution of computer instructions implementing embodiments. Memory 665 provides volatile storage for data used for carrying out computer instructions implementing embodiments described herein, such as those embodiments previously described hereinabove. Storage 664 provides non-volatile storage for software instructions, such as an operating system (not shown) and embodiment configurations, etc. The system 660 also comprises a network interface 661 for connecting to any variety of networks known in the art, including wide area networks (WANs) and local area networks (LANs).
[0086] It should be understood that the example embodiments described herein may be implemented in many different ways. In some instances, the various methods and systems described herein may each be implemented by a physical, virtual, or hybrid general purpose computer, such as the computer system 660, or a computer network environment such as the computer environment 770, described herein below in relation to FIG. 7. The computer system 660 may be transformed into the systems that execute the methods described herein, for example, by loading software instructions into either memory 665 or non-volatile storage 664 for execution by the CPU 662. One of ordinary skill in the art should further understand that the system 660 and its various components may be configured to carry out any embodiments or combination of embodiments of the present invention described herein. Further, the system 660 may implement the various embodiments described herein utilizing any combination of hardware, software, and firmware modules operatively coupled, internally, or externally, to the system 660. [0087] FIG. 7 illustrates a computer network environment 770 in which an embodiment of the present invention may be implemented. In the computer network environment 770, the server 771 is linked through the communications network 772 to the clients 773a-n. The environment 770 may be used to allow the clients 773a-n, alone or in combination with the server 771, to execute any of the embodiments described herein. For non-limiting example, computer network environment 770 provides cloud computing embodiments, software as a service (SAAS) embodiments, and the like.
[0088] Embodiments or aspects thereof may be implemented in the form of hardware, firmware, or software. If implemented in software, the software may be stored on any non transient computer readable medium that is configured to enable a processor to load the software or subsets of instructions thereof. The processor then executes the instructions and is configured to operate or cause an apparatus to operate in a manner as described herein.
[0089] Further, firmware, software, routines, or instructions may be described herein as performing certain actions and/or functions of the data processors. However, it should be appreciated that such descriptions contained herein are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc.
[0090] It should be understood that the flow diagrams, block diagrams, and network diagrams may include more or fewer elements, be arranged differently, or be represented differently. But it further should be understood that certain implementations may dictate the block and network diagrams and the number of block and network diagrams illustrating the execution of the embodiments be implemented in a particular way.
[0091] Accordingly, further embodiments may also be implemented in a variety of computer architectures, physical, virtual, cloud computers, and/or some combination thereof, and thus, the data processors described herein are intended for purposes of illustration only and not as a limitation of the embodiments.
[0092] The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
[0093] While example embodiments have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the embodiments encompassed by the appended claims.
[0094] References [0095] [1] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision , pages 483-499. Springer, 2016.
[0096] [2] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multiperson 2d pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7291-7299, 2017.
[0097] [3] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. In CVPR , 2019.
[0098] [4] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR , 2015.
[0099] [5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR , pages 770-778, 2016.
[00100] [6] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun
Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017. [00101] [7] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In AISTATS, pages 315-323, 2011.
[00102] [8] Jonghoon Jin, Aysegul Dundar, and Eugenio Culurciello. Flattened convolutional neural networks for feedforward acceleration. CoRR , 2014.
[00103] [9] Laurent Sifre and PS Mallat. Rigid-motion scattering for image classification.
PhD thesis, Citeseer, 2014.
[00104] [10] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML , 2015.
[00105] [11] Francois Chollet. Xception: Deep learning with depthwise separable convolutions. In CVPR , pages 1251-1258, 2017.
[00106] [12] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and
Liang-Chieh Chen. Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation. CVPR , 2018.
[00107] [13] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In CVPR , 2018.
[00108] [14] Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2:
Practical guidelines for efficient CNN architecture design. In ECCV, pages 122-138, 2018. [00109] [15] Gao Huang, Shichen Liu, Laurens van der Maaten, and Kilian Q.
Weinberger. Condensenet: An efficient densenet using learned group convolutions. In CVPR , June 2018.
[00110] [16] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In ICLR, 2019.
[00111] [17] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In CVPR , pages 8697-8710, 2018. [00112] [18] Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-
Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In ECCV, pages 19-34, 2018.
[00113] [19] Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. In ICLR , 2019.
[00114] [20] Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming
Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In CVPR , pages 10734- 10742, 2019.
[00115] [21] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, and Quoc V Le.
Mnasnet: Platformaware neural architecture search for mobile. In CVPR , 2019.
[00116] [22] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen,
Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In ICCV, 2019.
[00117] [23] Mingxing Tan and Quoc V Le. Mixnet: Mixed depthwise convolutional kernels. In BMVC, 2019.
[00118] [24] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49-67, 2006.
[00119] [25] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014.
[00120] [26] Yifan Sun, Liang Zheng, Weijian Deng, and Shengjin Wang. Svdnet for pedestrian retrieval. ICCV , 2017.
[00121] [27] Ting Zhang, Guo-Jun Qi, Bin Xiao, and Jingdong Wang. Interleaved group convolutions. In ICCV, pages 4373-4382, 2017. [00122] [28] Guotian Xie, Jingdong Wang, Ting Zhang, Jianhuang Lai, Richang Hong, and Guo-Jun Qi. Interleaved structured sparse convolutional neural networks. In CVPR , June 2018.
[00123] [29] Ke Sun, Mingjie Li, Dong Liu, and Jingdong Wang. Igcv3: Interleaved low- rank group convolutions for efficient deep neural networks. 2018.
[00124] [30] Mykhaylo Andriluka, Stefan Roth, and Bernt Schiele. Pictorial structures revisited: People detection and articulated pose estimation. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1014-1021. IEEE, 2009.
[00125] [31] Georgia Gkioxari, Pablo Arbelaez, Lubomir Bourdev, and Jitendra Malik.
Articulated pose estimation using discriminative armlet classifiers. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 3342-3349. IEEE, 2013. [00126] [32] Yi Yang and Deva Ramanan. Articulated pose estimation with flexible mixtures-of-parts. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1385-1392. IEEE, 2011.
[00127] [33] Sam Johnson and Mark Everingham. Learning effective human pose estimation from inaccurate annotation. In Computer vision and pattern recognition (CVPR), 2011 IEEE conference on, pages 1465-1472. IEEE, 2011.
[00128] [34] Xianjie Chen and Alan L Yuille. Articulated pose estimation by a graphical model with image dependent pairwise relations. In Advances in neural information processing systems, pages 1736-1744, 2014.
[00129] [35] Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask r-cnn.
In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 2980-2988. IEEE, 2017.
[00130] [36] George Papandreou, Tyler Zhu, Nori Kanazawa, Alexander Toshev, Jonathan
Tompson, Chris Bregler, and Kevin Murphy. Towards accurate multi-person pose estimation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4903-4911, 2017.
[00131] [37] Haoshu Fang, Shuqin Xie, Yu-Wing Tai, and Cewu Lu. Rmpe: Regional multi-person pose estimation. In The IEEE International Conference on Computer Vision (ICCV), volume 2, 2017.
[00132] [38] Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo
Andriluka, Peter V Gehler, and Bernt Schiele. Deepcut: Joint subset partition and labeling for multi-person pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4929-4937, 2016.
[00133] [39] Alejandro Newell, Zhiao Huang, and Jia Deng. Associative embedding: End- to-end learning for joint detection and grouping. In Advances in Neural Information Processing Systems, pages 2274-2284, 2017.
[00134] [40] Xuecheng Nie, Jiashi Feng, Jianfeng Zhang, and Shuicheng Yan. Single- stage multi-person pose machines. In Proceedings of the IEEE International Conference on Computer Vision, pages 6951-6960, 2019.

Claims

CLAIMS What is claimed is:
1. A computer-implemented method of identifying joints of a multi-limb body in an image, the method comprising: unifying depth of a plurality of multi-scale feature maps generated from an image of a multi-limb body to create a plurality of feature maps each having a same depth; for each of the plurality of feature maps having the same depth, generating an initial indication of one or more joints in the image, the one or more joints being located at an interconnection of a limb to the multi-limb body or at an interconnection of a limb to another limb; and generating a final indication of the one or more joints in the image using each generated initial indication of the one or more joints.
2. The computer-implemented method of Claim 1 further comprising: from the generated final indication of the one or more joints in the image, generating an indication of one or more limbs in the image.
3. The computer-implemented method of Claim 2 further comprising: generating an indication of pose using the generated final indication of the one or more joints in the image and the generated indication of the one or more limbs in the image.
4. The computer-implemented method of Claim 1 wherein generating the final indication of the one or more joints in the image comprises: upsampling at least one initial indication of the one or more joints in the image to have a scale equivalent to a scale of a given initial indication of the one or more joints with a largest scale; and adding together (i) the upsampled at least one initial indication of the one or more joints and (ii) the given initial indication of the one or more joints with the largest scale, to generate the final indication of the one or more joints in the image.
5. The method of Claim 1 wherein unifying depth of the plurality of multi-scale feature maps comprises: applying a respective convolutional layer to each of the plurality of multi-scale feature maps to create the plurality of feature maps each having the same depth.
6. The method of Claim 1 wherein generating the initial indication of the one or more joints in the image for each of the plurality of feature maps having the same depth comprises: applying a heatmap estimating layer to each of the plurality of feature maps having the same depth to generate each initial indication of the one or more joints in the image.
7. The method of Claim 6 wherein the heatmap estimating layer is composed of a convolutional neural network.
8. The method of Claim 7 wherein the image is a training image and the method further comprises: training the convolutional neural network by comparing each generated initial indication of the one or more joints in the image to a respective ground-truth indication of the one or more joints in the training image to determine losses and back propagating the losses to the convolutional neural network, wherein each respective ground-truth indication of the one or more joints corresponds to a respective scale of a given feature map of the plurality of feature maps having the same depth.
9. The method of Claim 1 further comprising: generating the plurality of multi-scale feature maps by processing the image using a backbone neural network.
10. The method of Claim 9 wherein processing the image using the backbone neural network comprises: performing multi-scale feature extraction and multi-scale feature fusion to generate the plurality of multi-scale feature maps.
11. A computer system for identifying joints of a multi -limb body in an image, the computer system comprising: a processor; and a memory with computer code instructions stored thereon, the processor and the memory, with the computer code instructions, being configured to cause the system to: unify depth of a plurality of multi-scale feature maps generated from an image of a multi-limb body to create a plurality of feature maps each having a same depth; for each of the plurality of feature maps having the same depth, generate an initial indication of one or more joints in the image, the one or more joints being located at an interconnection of a limb to the multi-limb body or at an interconnection of a limb to another limb; and generate a final indication of the one or more joints in the image using each generated initial indication of the one or more joints.
12. The system of Claim 11 wherein the processor and the memory, with the computer code instructions, are further configured to cause the system to: from the generated final indication of the one or more joints in the image, generate an indication of one or more limbs in the image.
13. The system of Claim 12 wherein the processor and the memory, with the computer code instructions, are further configured to cause the system to: generate an indication of pose using the generated final indication of the one or more joints in the image and the generated indication of the one or more limbs in the image.
14. The system of Claim 11 wherein, in generating the final indication of the one or more joints in the image, the processor and the memory, with the computer code instructions, are further configured to cause the system to: upsample at least one initial indication of the one or more joints in the image to have a scale equivalent to a scale of a given initial indication of the one or more joints with a largest scale; and add together (i) the upsampled at least one initial indication of the one or more joints and (ii) the given initial indication of the one or more joints with the largest scale, to generate the final indication of the one or more joints in the image.
15. The system of Claim 11 wherein, in unifying depth of the plurality of multi-scale feature maps, the processor and the memory, with the computer code instructions, are further configured to cause the system to: apply a respective convolutional layer to each of the plurality of multi-scale feature maps to create the plurality of feature maps each having the same depth.
16. The system of Claim 11 wherein, in generating the initial indication of the one or more joints in the image for each of the plurality of feature maps having the same depth, the processor and the memory, with the computer code instructions, are further configured to cause the system to: apply a heatmap estimating layer, composed of a convolution neural network, to each of the plurality of feature maps having the same depth to generate each initial indication of the one or more joints in the image.
17. The system of Claim 16 wherein the image is a training image and the processor and the memory, with the computer code instructions, are further configured to cause the system to: train the convolutional neural network by comparing each generated initial indication of the one or more joints in the image to a respective ground-truth indication of the one or more joints in the training image to determine losses and back propagating the losses to the convolutional neural network, wherein each respective ground-truth indication of the one or more joints corresponds to a respective scale of a given feature map of the plurality of feature maps having the same depth.
18. The system of Claim 11 wherein the processor and the memory, with the computer code instructions, are further configured to cause the system to: generate the plurality of multi-scale feature maps by processing the image using a backbone neural network.
19. The system of Claim 18 wherein, in processing the image using the backbone neural network, the processor and the memory, with the computer code instructions, are further configured to cause the system to: perform multi-scale feature extraction and multi-scale feature fusion to generate the plurality of multi-scale feature maps.
20. A computer program product for identifying joints of a multi-limb body in an image, the computer program product comprising: one or more non-transitory computer-readable storage devices and program instructions stored on at least one of the one or more storage devices, the program instructions, when loaded and executed by a processor, cause an apparatus associated with the processor to: unify depth of a plurality of multi-scale feature maps generated from an image of a multi-limb body to create a plurality of feature maps each having a same depth; for each of the plurality of feature maps having the same depth, generate an initial indication of one or more joints in the image, the one or more joints being located at an interconnection of a limb to the multi-limb body or at an interconnection of a limb to another limb; and generate a final indication of the one or more joints in the image using each generated initial indication of the one or more joints.
PCT/US2021/017341 2020-02-13 2021-02-10 Light-weight pose estimation network with multi-scale heatmap fusion WO2021163103A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/759,939 US20230126178A1 (en) 2020-02-13 2021-02-10 Light-Weight Pose Estimation Network With Multi-Scale Heatmap Fusion

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202062976099P 2020-02-13 2020-02-13
US62/976,099 2020-02-13

Publications (1)

Publication Number Publication Date
WO2021163103A1 true WO2021163103A1 (en) 2021-08-19

Family

ID=74845130

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/017341 WO2021163103A1 (en) 2020-02-13 2021-02-10 Light-weight pose estimation network with multi-scale heatmap fusion

Country Status (2)

Country Link
US (1) US20230126178A1 (en)
WO (1) WO2021163103A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119753A (en) * 2021-12-08 2022-03-01 北湾科技(武汉)有限公司 Transparent object 6D attitude estimation method facing mechanical arm grabbing
CN115115851A (en) * 2022-08-30 2022-09-27 广州市玄武无线科技股份有限公司 Method and device for estimating commodity attitude and storage medium
CN115861762A (en) * 2023-02-27 2023-03-28 中国海洋大学 Plug-and-play infinite deformation fusion feature extraction method and application thereof

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12081880B2 (en) * 2021-05-11 2024-09-03 Samsung Electronics Co., Ltd. Image super-resolution with reference images from one or more cameras
NL2032161B1 (en) * 2022-06-14 2023-12-21 Navinfo Europe B V Method and system for multi-scale vision transformer architecture
CN116342582B (en) * 2023-05-11 2023-08-04 湖南工商大学 Medical image classification method and medical equipment based on deformable attention mechanism
CN117612267B (en) * 2024-01-24 2024-04-12 中国海洋大学 Efficient human body posture estimation method and model building method thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109271933A (en) * 2018-09-17 2019-01-25 北京航空航天大学青岛研究院 The method for carrying out 3 D human body Attitude estimation based on video flowing
US20190357615A1 (en) * 2018-04-20 2019-11-28 Bodygram, Inc. Systems and methods for full body measurements extraction using multiple deep learning networks for body feature measurements

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778635B (en) * 2006-05-11 2016-09-28 苹果公司 For the method and apparatus processing data

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190357615A1 (en) * 2018-04-20 2019-11-28 Bodygram, Inc. Systems and methods for full body measurements extraction using multiple deep learning networks for body feature measurements
CN109271933A (en) * 2018-09-17 2019-01-25 北京航空航天大学青岛研究院 The method for carrying out 3 D human body Attitude estimation based on video flowing

LEONID PISHCHULINELDAR INSAFUTDINOVSIYU TANGBJOERN ANDRESMYKHAYLO ANDRILUKAPETER V GEHLERBERNT SCHIELE: "Deepcut: Joint subset partition and labeling for multi person pose estimation", PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 2016, pages 4929 - 4937, XP033021686, DOI: 10.1109/CVPR.2016.533
MARK SANDLERANDREW HOWARDMENGLONG ZHUANDREY ZHMOGINOVLIANG-CHIEH CHEN: "Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation", CVPR, 2018
MAX JADERBERGANDREA VEDALDIANDREW ZISSERMAN: "Speeding up convolutional neural networks with low rank expansions", BMVC, 2014
MING YUANYI LIN: "Model selection and estimation in regression with grouped variables", JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), vol. 68, no. 1, 2006, pages 49 - 67, XP055557711, DOI: 10.1111/j.1467-9868.2005.00532.x
MINGXING JTANQUOC V LE: "Mixnet: Mixed depthwise convolutional kernels", BMVC, 2019
MINGXING TANBO CHENRUOMING PANGVIJAY VASUDEVANQUOC V LE: "Mnasnet: Platformaware neural architecture search for mobile", CVPR, 2019
MYKHAYLO ANDRILUKASTEFAN ROTHBERNT SCHIELE: "Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on", 2009, IEEE, article "Pictorial structures revisited: People detection and articulated pose estimation", pages: 1014 - 1021
NINGNING MAXIANGYU ZHANGHAI-TAO ZHENGJIAN SUN: "Shufflenet v2: Practical guidelines for efficient cnn architecture design", ECCV, 2018, pages 122 - 138, XP047500421, DOI: 10.1007/978-3-030-01264-9_8
SAM JOHNSONMARK EVERINGHAM: "Computer vision and pattern recognition (CVPR), 2011 IEEE conference on", 2011, IEEE, article "Learning effective human pose estimation from inaccurate annotation", pages: 1465 - 1472
SERGEY IOFFECHRISTIAN SZEGEDY: "Batch normalization: Accelerating deep network training by reducing internal covariate shift", ICML, 2015
TING ZHANGGUO-JUN QIBIN XIAOJINGDONG WANG: "Interleaved group convolutions", ICCV, 2017, pages 4373 - 4382
XAVIER GLOROTANTOINE BORDESYOSHUA BENGIO: "Deep sparse rectifier neural networks", AISTATS, 2011, pages 315 - 323
XIANGYU ZHANGXINYU ZHOUMENGXIAO LINJIAN SUN: "Shufflenet: An extremely efficient convolutional neural network for mobile devices", CVPR, 2018
XIANJIE CHENALAN L YUILLE: "Articulated pose estimation by a graphical model with image dependent pairwise relations", ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, 2014, pages 1736 - 1744
XUECHENG NIEJIASHI FENGJIANFENG ZHANGSHUICHENG YAN: "Single-stage multi-person pose machines", PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, 2019, pages 6951 - 6960
YI YANGDEVA RAMANAN: "Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on", 2011, IEEE, article "Articulated pose estimation with flexible mixtures-of-parts", pages: 1385 - 1392
YIFAN SUNLIANG ZHENGWEIJIAN DENGSHENGJIN WANG: "Svdnet for pedestrian retrieval", ICCV, 2017
ZHAO YING ET AL: "Cluster-wise learning network for multi-person pose estimation", PATTERN RECOGNITION, ELSEVIER, GB, vol. 98, 3 October 2019 (2019-10-03), XP085886229, ISSN: 0031-3203, [retrieved on 20191003], DOI: 10.1016/J.PATCOG.2019.107074 *
ZHE CAOTOMAS SIMONSHIH-EN WEIYASER SHEIKH: "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", 2017, article "Realtime multi-person 2d pose estimation using part affinity fields", pages: 7291 - 7299

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119753A (en) * 2021-12-08 2022-03-01 北湾科技(武汉)有限公司 Transparent-object 6D pose estimation method for robotic arm grasping
CN115115851A (en) * 2022-08-30 2022-09-27 广州市玄武无线科技股份有限公司 Method, device, and storage medium for commodity pose estimation
CN115115851B (en) * 2022-08-30 2023-01-31 广州市玄武无线科技股份有限公司 Method, device, and storage medium for commodity pose estimation
CN115861762A (en) * 2023-02-27 2023-03-28 中国海洋大学 Plug-and-play infinite deformation fusion feature extraction method and application thereof

Also Published As

Publication number Publication date
US20230126178A1 (en) 2023-04-27

Similar Documents

Publication Publication Date Title
US11361546B2 (en) Action recognition in videos using 3D spatio-temporal convolutional neural networks
US20230126178A1 (en) Light-Weight Pose Estimation Network With Multi-Scale Heatmap Fusion
Mahmoudi et al. Multi-target tracking using CNN-based features: CNNMTT
US20220156554A1 (en) Lightweight Decompositional Convolution Neural Network
Abbas et al. A comprehensive review of recent advances on deep vision systems
Johnander et al. A generative appearance model for end-to-end video object segmentation
Basly et al. CNN-SVM learning approach based human activity recognition
Koyun et al. Focus-and-Detect: A small object detection framework for aerial images
Liu et al. Fg-net: A fast and accurate framework for large-scale lidar point cloud understanding
WO2016054779A1 (en) Spatial pyramid pooling networks for image processing
CN110765860A (en) Tumble determination method, tumble determination device, computer apparatus, and storage medium
EP2579184B1 (en) Mobile apparatus and method of controlling the same
Wu et al. Real-time background subtraction-based video surveillance of people by integrating local texture patterns
WO2019222383A1 (en) Multi-person pose estimation using skeleton prediction
Li et al. A lightweight multi-scale aggregated model for detecting aerial images captured by UAVs
WO2016179808A1 (en) An apparatus and a method for face parts and face detection
WO2020088763A1 (en) Device and method for recognizing activity in videos
Tsai et al. MobileNet-JDE: a lightweight multi-object tracking model for embedded systems
Nguyen Fast traffic sign detection approach based on lightweight network and multilayer proposal network
Wang et al. EMAT: Efficient feature fusion network for visual tracking via optimized multi-head attention
Rajendran et al. RelMobNet: End-to-end relative camera pose estimation using a robust two-stage training
US20230154191A1 (en) Apparatus and method with image segmentation
Gaheen et al. Students head-pose estimation using partially-latent mixture
Zhu et al. Multiple human upper bodies detection via candidate-region convolutional neural network
Thai et al. An effective deep network for head pose estimation without keypoints

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21709307

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21709307

Country of ref document: EP

Kind code of ref document: A1