WO2020098225A1 - Key point detection method and device, electronic device and storage medium - Google Patents
Key point detection method and device, electronic device and storage medium
- Publication number: WO2020098225A1
- Application: PCT/CN2019/083721
- Authority: WIPO (PCT)
- Prior art keywords: feature, feature map, processing, map, maps
Classifications
- G06F18/21 — Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
- G06F18/2413 — Classification techniques relating to the classification model, based on distances to training or reference patterns
- G06F18/253 — Fusion techniques of extracted features
- G06N3/045 — Neural network architectures; combinations of networks
- G06N3/08 — Learning methods
- G06V10/764 — Image or video recognition or understanding using classification, e.g. of video objects
- G06V10/806 — Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
- G06V10/82 — Image or video recognition or understanding using neural networks
- G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
- G06V2201/033 — Recognition of patterns in medical or anatomical images of skeletal patterns
- Y02T10/40 — Engine management systems
Definitions
- The present disclosure relates to the field of computer vision technology, and in particular to a key point detection method and device, electronic equipment, and storage medium.
- Human key point detection detects the positions of key points such as joints or facial features in an image of a human body, so that the pose of the body can be described by the positions of these key points.
- Existing approaches typically use a neural network to obtain multi-scale features of the image, which are then used to predict the positions of the key points of the human body.
- However, such methods do not fully mine and exploit the multi-scale features, and the resulting key point detection accuracy is low.
- Embodiments of the present disclosure provide an effective key point detection method and device, electronic equipment, and storage medium that improve key point detection accuracy.
- According to a first aspect of the present disclosure, a key point detection method is provided, which includes:
- obtaining first feature maps at multiple scales of an input image, where the scales of the first feature maps are in a multiple relationship with one another; performing forward processing on each first feature map using a first pyramid neural network to obtain a second feature map in one-to-one correspondence with each first feature map, where each second feature map has the same scale as its corresponding first feature map; performing reverse processing on each second feature map using a second pyramid neural network to obtain a third feature map in one-to-one correspondence with each second feature map, where each third feature map has the same scale as its corresponding second feature map; and performing feature fusion processing on the third feature maps and obtaining the positions of the key points in the input image using the feature maps after the feature fusion processing.
- Obtaining the first feature maps at multiple scales of the input image includes: adjusting the input image to a first image of a preset specification; and inputting the first image to a residual neural network, which performs downsampling at different sampling frequencies on the first image to obtain multiple first feature maps of different scales.
- The forward processing includes first convolution processing and first linear interpolation processing, and the reverse processing includes second convolution processing and second linear interpolation processing.
- Using the first pyramid neural network to perform forward processing on each first feature map to obtain a second feature map corresponding to each first feature map includes: performing convolution processing on the first feature map C_n among the first feature maps C_1...C_n using a first convolution kernel to obtain a second feature map F_n corresponding to C_n, where n represents the number of first feature maps and n is an integer greater than 1; performing linear interpolation processing on the second feature map F_n to obtain a first intermediate feature map F'_n corresponding to F_n, where the scale of the first intermediate feature map F'_n is the same as the scale of the first feature map C_{n-1}; performing convolution processing on each first feature map C_1...C_{n-1} other than C_n using a second convolution kernel to obtain second intermediate feature maps C'_1...C'_{n-1} in one-to-one correspondence, each having the same scale as its corresponding first feature map; and obtaining second feature maps F_1...F_{n-1} and first intermediate feature maps F'_1...F'_{n-1} based on the second feature map F_n and the second intermediate feature maps C'_1...C'_{n-1}, where the second feature map F_i is obtained by superimposing the second intermediate feature map C'_i and the first intermediate feature map F'_{i+1}, the first intermediate feature map F'_i is obtained by linear interpolation from the corresponding second feature map F_i, and the scale of C'_i is the same as that of F'_{i+1}, where i is an integer greater than or equal to 1 and less than n.
- Using the second pyramid neural network to perform reverse processing on each second feature map to obtain a third feature map corresponding to each second feature map includes: performing convolution processing on the second feature map F_1 among the second feature maps F_1...F_m using a third convolution kernel to obtain a third feature map R_1 corresponding to F_1, where m represents the number of second feature maps and m is an integer greater than 1; performing convolution processing on the second feature maps F_2...F_m using a fourth convolution kernel to obtain the corresponding third intermediate feature maps F''_2...F''_m, where the scale of each third intermediate feature map is the same as the scale of the corresponding second feature map; performing convolution processing on the third feature map R_1 using a fifth convolution kernel to obtain a fourth intermediate feature map R'_1 corresponding to R_1; and obtaining third feature maps R_2...R_m and fourth intermediate feature maps R'_2...R'_m based on the third intermediate feature maps F''_2...F''_m and the fourth intermediate feature map R'_1.
- Performing feature fusion processing on each third feature map and obtaining the position of each key point in the input image using the fused feature maps includes: performing feature fusion processing on the third feature maps to obtain a fourth feature map; and obtaining the positions of the key points in the input image based on the fourth feature map.
- Performing feature fusion processing on each third feature map to obtain the fourth feature map includes: using linear interpolation to adjust each third feature map to feature maps of the same scale; and connecting the feature maps of the same scale to obtain the fourth feature map.
- Before performing feature fusion processing on each third feature map to obtain the fourth feature map, the method further includes: inputting a first group of third feature maps into different bottleneck block structures for convolution processing to respectively obtain updated third feature maps, each bottleneck block structure including a different number of convolution modules, where the third feature maps include the first group of third feature maps and a second group of third feature maps, and each of the two groups includes at least one third feature map.
- In that case, performing feature fusion processing on each third feature map to obtain the fourth feature map includes: using linear interpolation to adjust the updated third feature maps and the second group of third feature maps to feature maps of the same scale; and connecting the feature maps of the same scale to obtain the fourth feature map.
- Obtaining the position of each key point in the input image based on the fourth feature map includes: performing dimensionality reduction processing on the fourth feature map using a fifth convolution kernel; and determining the positions of the key points of the input image using the dimension-reduced fourth feature map.
- Alternatively, obtaining the position of each key point in the input image based on the fourth feature map includes: performing dimensionality reduction processing on the fourth feature map using a fifth convolution kernel; purifying the features in the dimension-reduced fourth feature map using a convolutional block attention module to obtain a purified feature map; and determining the positions of the key points of the input image using the purified feature map.
- The method further includes training the first pyramid neural network using a training image data set, which includes: using the first pyramid neural network to perform the forward processing on the first feature maps corresponding to the images in the training image data set to obtain second feature maps corresponding to the images in the training image data set; determining identified key points using the second feature maps; obtaining a first loss of the key points according to a first loss function; and using the first loss to reversely adjust the convolution kernels in the first pyramid neural network until the number of training iterations reaches a set first threshold.
- The method further includes training the second pyramid neural network using the training image data set, which includes: using the second pyramid neural network to perform the reverse processing on the second feature maps, output by the first pyramid neural network, corresponding to the images in the training image data set to obtain third feature maps corresponding to the images in the training image data set; determining identified key points using the third feature maps; obtaining a second loss of each identified key point according to a second loss function; and using the second loss to reversely adjust the convolution kernels in the second pyramid neural network until the number of training iterations reaches a set second threshold, or using the second loss to reversely adjust the convolution kernels in the first pyramid neural network and the convolution kernels in the second pyramid neural network until the number of training iterations reaches the set second threshold.
- The feature fusion processing on each third feature map is performed through a feature extraction network, and before performing the feature fusion processing on each third feature map through the feature extraction network, the method further includes training the feature extraction network using the training image data set, which includes: using the feature extraction network to perform the feature fusion processing on the third feature maps, output by the second pyramid neural network, corresponding to the images in the training image data set, and identifying the key points of each image in the training image data set using the fused feature maps; obtaining a third loss of each key point according to a third loss function; and using the third loss to reversely adjust the parameters of the feature extraction network until the number of training iterations reaches a set third threshold, or using the third loss to reversely adjust the convolution kernel parameters in the first pyramid neural network, the convolution kernel parameters in the second pyramid neural network, and the parameters of the feature extraction network until the number of training iterations reaches the set third threshold.
- According to another aspect of the present disclosure, a key point detection apparatus is provided, which includes: a multi-scale feature acquisition module configured to obtain first feature maps at multiple scales of an input image, where the scales of the first feature maps are in a multiple relationship with one another; a forward processing module configured to perform forward processing on each first feature map using a first pyramid neural network to obtain a second feature map corresponding to each first feature map, where each second feature map has the same scale as its corresponding first feature map; a reverse processing module configured to perform reverse processing on each second feature map using a second pyramid neural network to obtain a third feature map in one-to-one correspondence with each second feature map, where each third feature map has the same scale as its corresponding second feature map; and a key point detection module configured to perform feature fusion processing on each third feature map and obtain the position of each key point in the input image using the feature maps after the feature fusion processing.
- The multi-scale feature acquisition module is configured to adjust the input image to a first image of a preset specification, and to input the first image to a residual neural network, which downsamples the first image at different sampling frequencies to obtain multiple first feature maps of different scales.
- The forward processing includes first convolution processing and first linear interpolation processing, and the reverse processing includes second convolution processing and second linear interpolation processing.
- The forward processing module is configured to: perform convolution processing on the first feature map C_n among the first feature maps C_1...C_n using the first convolution kernel to obtain a second feature map F_n corresponding to C_n, where n represents the number of first feature maps and n is an integer greater than 1; perform linear interpolation processing on the second feature map F_n to obtain a first intermediate feature map F'_n corresponding to F_n, where the scale of F'_n is the same as the scale of the first feature map C_{n-1}; perform convolution processing on each first feature map C_1...C_{n-1} other than C_n using the second convolution kernel to obtain second intermediate feature maps C'_1...C'_{n-1} in one-to-one correspondence with C_1...C_{n-1}, where the scale of each second intermediate feature map is the same as the scale of the first feature map corresponding to it; and obtain second feature maps F_1...F_{n-1} and first intermediate feature maps F'_1...F'_{n-1} based on the second feature map F_n and the second intermediate feature maps C'_1...C'_{n-1}, where the second feature map F_i is obtained by superimposing the second intermediate feature map C'_i and the first intermediate feature map F'_{i+1}, the first intermediate feature map F'_i is obtained by linear interpolation from the corresponding second feature map F_i, and the scale of C'_i is the same as that of F'_{i+1}, where i is an integer greater than or equal to 1 and less than n.
- The reverse processing module is configured to: perform convolution processing on the second feature map F_1 among the second feature maps F_1...F_m using a third convolution kernel to obtain a third feature map R_1 corresponding to F_1, where m represents the number of second feature maps and m is an integer greater than 1; perform convolution processing on the second feature maps F_2...F_m using a fourth convolution kernel to obtain the corresponding third intermediate feature maps F''_2...F''_m; perform convolution processing on the third feature map R_1 using a fifth convolution kernel to obtain a fourth intermediate feature map R'_1 corresponding to R_1; and obtain third feature maps R_2...R_m and fourth intermediate feature maps R'_2...R'_m based on the third intermediate feature maps F''_2...F''_m and the fourth intermediate feature map R'_1.
- The key point detection module is configured to perform feature fusion processing on each third feature map to obtain a fourth feature map, and to obtain the position of each key point in the input image based on the fourth feature map.
- The key point detection module is configured to use linear interpolation to adjust each third feature map to feature maps of the same scale, and to connect the feature maps of the same scale to obtain the fourth feature map.
- The device further includes an optimization module configured to input the first group of third feature maps into different bottleneck block structures for convolution processing to respectively obtain updated third feature maps, each bottleneck block structure including a different number of convolution modules, where the third feature maps include a first group of third feature maps and a second group of third feature maps, and both groups include at least one third feature map.
- The key point detection module is further configured to use linear interpolation to adjust the updated third feature maps and the second group of third feature maps to feature maps of the same scale, and to connect the feature maps of the same scale to obtain the fourth feature map.
- The key point detection module is further configured to perform dimensionality reduction processing on the fourth feature map using a fifth convolution kernel, and to determine the positions of the key points of the input image using the dimension-reduced fourth feature map.
- The key point detection module is further configured to perform dimensionality reduction processing on the fourth feature map using a fifth convolution kernel, to purify the features in the dimension-reduced fourth feature map using a convolutional block attention module to obtain a purified feature map, and to determine the positions of the key points of the input image using the purified feature map.
- The forward processing module is further configured to train the first pyramid neural network using a training image data set, which includes: using the first pyramid neural network to perform the forward processing on the first feature maps corresponding to the images in the training image data set to obtain second feature maps corresponding to the images in the training image data set; determining identified key points using the second feature maps; obtaining a first loss of the key points according to a first loss function; and using the first loss to reversely adjust the convolution kernels in the first pyramid neural network until the number of training iterations reaches a set first threshold.
- The reverse processing module is further configured to train the second pyramid neural network using the training image data set, which includes: using the second pyramid neural network to perform the reverse processing on the second feature maps, output by the first pyramid neural network, corresponding to the images in the training image data set to obtain third feature maps corresponding to the images in the training image data set; determining identified key points using the third feature maps; obtaining a second loss of each identified key point according to a second loss function; and using the second loss to reversely adjust the convolution kernels in the second pyramid neural network until the number of training iterations reaches a set second threshold, or using the second loss to reversely adjust the convolution kernels in the first pyramid neural network and the convolution kernels in the second pyramid neural network until the number of training iterations reaches the set second threshold.
- The key point detection module is further configured to perform the feature fusion processing on each third feature map through a feature extraction network and, before performing the feature fusion processing on each third feature map through the feature extraction network, to train the feature extraction network using the training image data set, which includes: using the feature extraction network to perform the feature fusion processing on the third feature maps, output by the second pyramid neural network, corresponding to the images in the training image data set, and identifying the key points of each image in the training image data set using the fused feature maps; obtaining a third loss of each key point according to a third loss function; and using the third loss to reversely adjust the parameters of the feature extraction network until the number of training iterations reaches a set third threshold, or using the third loss to reversely adjust the convolution kernel parameters in the first pyramid neural network, the convolution kernel parameters in the second pyramid neural network, and the parameters of the feature extraction network until the number of training iterations reaches the set third threshold.
- According to another aspect of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing processor-executable instructions, where the processor is configured to execute the method of any one of the implementations of the first aspect.
- According to another aspect of the present disclosure, a computer-readable storage medium is provided, having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the method of any one of the implementations of the first aspect.
- An embodiment of the present disclosure proposes using a bidirectional pyramid neural network to perform key point feature detection, in which forward processing is used to obtain multi-scale features and reverse processing is used to fuse more features, which can further improve the key point detection accuracy.
- FIG. 1 shows a flowchart of a key point detection method according to an embodiment of the present disclosure;
- FIG. 2 shows a flowchart of step S100 in a key point detection method according to an embodiment of the present disclosure;
- FIG. 3 shows another flowchart of a key point detection method according to an embodiment of the present disclosure;
- FIG. 4 shows a flowchart of step S200 in a key point detection method according to an embodiment of the present disclosure;
- FIG. 5 shows a flowchart of step S300 in a key point detection method according to an embodiment of the present disclosure;
- FIG. 6 shows a flowchart of step S400 in a key point detection method according to an embodiment of the present disclosure;
- FIG. 7 shows a flowchart of step S401 in a key point detection method according to an embodiment of the present disclosure;
- FIG. 8 shows another flowchart of a key point detection method according to an embodiment of the present disclosure;
- FIG. 9 shows a flowchart of step S402 in a key point detection method according to an embodiment of the present disclosure;
- FIG. 10 shows a flowchart of training a first pyramid neural network in a key point detection method according to an embodiment of the present disclosure;
- FIG. 11 shows a flowchart of training a second pyramid neural network in a key point detection method according to an embodiment of the present disclosure;
- FIG. 12 shows a flowchart of training a feature extraction network in a key point detection method according to an embodiment of the present disclosure;
- FIG. 13 shows a block diagram of a key point detection device according to an embodiment of the present disclosure;
- FIG. 14 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure;
- FIG. 15 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
- An embodiment of the present disclosure provides a key point detection method that can be used to perform key point detection on an image of a person. The method uses two pyramid network models to perform forward processing and reverse processing of multi-scale key point features, respectively; fusing more feature information can improve the accuracy of key point position detection.
- FIG. 1 shows a flowchart of a key point detection method according to an embodiment of the present disclosure.
- The key point detection method of the embodiments of the present disclosure may include the following steps.
- S100: Obtain first feature maps at multiple scales of the input image, where the scales of the first feature maps are in a multiple relationship.
- The embodiments of the present disclosure use the fusion of multi-scale features of the input image to perform the above-mentioned key point detection. First feature maps at multiple scales of the input image can be obtained; the scales of the first feature maps are different and are in a multiple relationship with one another. Embodiments of the present disclosure may use a multi-scale analysis algorithm to obtain the first feature maps at multiple scales of the input image, or may obtain them through a neural network model capable of multi-scale analysis, which is not specifically limited in the disclosed embodiments.
- S200: Perform forward processing on each first feature map using a first pyramid neural network to obtain a second feature map corresponding to each first feature map, where each second feature map has the same scale as its corresponding first feature map.
- The forward processing may include a first convolution process and a first linear interpolation process. Through the forward processing, second feature maps with the same scales as the corresponding first feature maps may be obtained. Each second feature map further integrates the features of the input image; the number of second feature maps equals the number of first feature maps, and each second feature map has the same scale as its corresponding first feature map. For example, the first feature maps obtained by the embodiment of the present disclosure may be C_1, C_2, C_3 and C_4, and the corresponding second feature maps obtained after the forward processing may be F_1, F_2, F_3 and F_4.
- S300: Perform reverse processing on each second feature map using a second pyramid neural network to obtain a third feature map corresponding to each second feature map, where each third feature map has the same scale as its corresponding second feature map.
- The reverse processing may include a second convolution process and a second linear interpolation process. Through the reverse processing, third feature maps with the same scales as the corresponding second feature maps can be obtained. Relative to the second feature maps, each third feature map further integrates the features of the input image; the number of third feature maps equals the number of second feature maps, and each third feature map has the same scale as its corresponding second feature map. For example, the second feature maps obtained by the embodiment of the present disclosure may be F_1, F_2, F_3 and F_4, and the corresponding third feature maps obtained after the reverse processing may be R_1, R_2, R_3 and R_4.
- S400: Perform feature fusion processing on each third feature map, and obtain the positions of the key points in the input image using the fused feature map.
- After the third feature maps are obtained, feature fusion processing of the third feature maps can be performed. Embodiments of the present disclosure may use corresponding convolution processing to realize the feature fusion of the third feature maps; when the scales of the third feature maps differ, scale conversion may be performed first, followed by feature map stitching and extraction of the key points.
- Embodiments of the present disclosure can detect different key points of the input image. For example, when the input image is an image of a person, the key points may be at least one of the left and right eyes, nose, left and right ears, left and right shoulders, left and right elbows, left and right wrists, left and right hips, left and right knees, and left and right ankles. In other embodiments, the input image may be another type of image, and other key points may be identified when performing key point detection. Accordingly, the embodiment of the present disclosure performs key point detection and identification according to the feature fusion result of the third feature maps.
- The embodiments of the present disclosure perform forward processing and then further reverse processing based on the first feature maps through bidirectional pyramid neural networks (the first pyramid neural network and the second pyramid neural network), which can effectively improve the degree of feature fusion of the input image and thereby further improve the detection accuracy of the key points.
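- To make the overall flow concrete, the following minimal PyTorch-style sketch composes the stages described above (backbone for step S100, forward pyramid for S200, reverse pyramid for S300, fusion head for S400). The module names and the way the stages are wired together are illustrative assumptions, not the disclosed implementation.

```python
import torch.nn as nn

class BidirectionalKeypointDetector(nn.Module):
    # Hypothetical composition of the described stages: a backbone yielding the
    # multi-scale first feature maps, a forward pyramid producing the second
    # feature maps, a reverse pyramid producing the third feature maps, and a
    # fusion head turning the fused features into key point positions.
    def __init__(self, backbone, forward_pyramid, reverse_pyramid, fusion_head):
        super().__init__()
        self.backbone = backbone
        self.forward_pyramid = forward_pyramid
        self.reverse_pyramid = reverse_pyramid
        self.fusion_head = fusion_head

    def forward(self, image):
        c_maps = self.backbone(image)            # first feature maps C_1...C_n (step S100)
        f_maps = self.forward_pyramid(c_maps)    # second feature maps F_1...F_n (step S200)
        r_maps = self.reverse_pyramid(f_maps)    # third feature maps R_1...R_n (step S300)
        return self.fusion_head(r_maps)          # fused features / key point positions (step S400)
```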
- The embodiments of the present disclosure may first obtain an input image, which may be of any image type, for example a person image, a landscape image, or an animal image. Different key points can be identified for different types of images; the embodiments of the present disclosure take a person image as an example for description.
- The first feature maps of the input image at multiple different scales can be obtained through step S100.
- FIG. 2 shows a flowchart of step S100 in a key point detection method according to an embodiment of the present disclosure.
- Obtaining the first feature maps of different scales for the input image (step S100) may include:
- S101: Adjust the input image to a first image of a preset specification.
- The embodiment of the present disclosure may first normalize the size of the input image, that is, the input image may be adjusted to a first image of a preset specification. The preset specification in the embodiment of the present disclosure may be 256 pix * 192 pix, where pix denotes pixels. In other embodiments, the input image may instead be uniformly converted into images of other specifications, which is not specifically limited in the embodiments of the present disclosure.
- S102: Input the first image to a residual neural network, and perform downsampling at different sampling frequencies on the first image to obtain first feature maps of different scales.
- After the first image of the preset specification is obtained, sampling at multiple sampling frequencies may be performed on the first image. The first feature maps of different scales for the first image can be obtained through the residual neural network, which samples the first image at different sampling frequencies. The sampling frequencies in the embodiment of the present disclosure may be 1/8, 1/16, 1/32, and so on, but the embodiment of the present disclosure does not limit this.
- A feature map in the embodiments of the present disclosure refers to the feature matrix of the image; for example, the feature matrix may be a three-dimensional matrix, and the length and width of a feature map described herein correspond to the dimensions of the feature matrix in the row and column directions.
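- A minimal sketch of steps S101 and S102 follows, assuming a torchvision ResNet-50 as the residual neural network and using its four residual stages as the first feature maps C_1...C_4; the backbone choice, the handling of the 256 * 192 preset specification, and the exact sampling frequencies are assumptions for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

resnet = models.resnet50(weights=None)  # assumed residual neural network

def first_feature_maps(image):
    # S101: adjust the input image to the preset specification (256 x 192 assumed as height x width).
    x = F.interpolate(image, size=(256, 192), mode="bilinear", align_corners=False)
    # S102: run the residual network; each stage downsamples further, giving
    # first feature maps whose scales are in a factor-of-two relationship.
    x = resnet.conv1(x)
    x = resnet.bn1(x)
    x = resnet.relu(x)
    x = resnet.maxpool(x)
    c1 = resnet.layer1(x)   # largest first feature map C_1
    c2 = resnet.layer2(c1)  # half the length and width of C_1
    c3 = resnet.layer3(c2)  # half the length and width of C_2
    c4 = resnet.layer4(c3)  # half the length and width of C_3
    return [c1, c2, c3, c4]

# Usage: c_maps = first_feature_maps(torch.randn(1, 3, 480, 640))
```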
- FIG. 3 shows another flowchart of a key point detection method according to an embodiment of the present disclosure.
- In FIG. 3, part (a) shows the process of step S100 in the embodiment of the present disclosure. Four first feature maps C_1, C_2, C_3 and C_4 can be obtained through step S100, where the length and width of the first feature map C_1 may be twice the length and width of the first feature map C_2, the length and width of C_2 may be twice the length and width of C_3, and the length and width of C_3 may be twice the length and width of C_4.
- The scale multiples between C_1 and C_2, between C_2 and C_3, and between C_3 and C_4 may be the same; for example, the exponent k_1 may take the value 1, so that adjacent first feature maps differ in length and width by a factor of 2^k_1 = 2.
- In other embodiments, k_1 may take different values; for example, the length and width of C_1 may be twice those of C_2, the length and width of C_2 may be four times those of C_3, and the length and width of C_3 may be eight times those of C_4; the embodiment of the present disclosure does not limit this.
- After the first feature maps are obtained, the forward processing of the first feature maps may be performed through step S200 to obtain a plurality of second feature maps of different scales that fuse the features of the first feature maps.
- FIG. 4 shows a flowchart of step S200 in a key point detection method according to an embodiment of the present disclosure.
- Using the first pyramid neural network to perform forward processing on each first feature map to obtain a second feature map corresponding to each first feature map (step S200) includes:
- S201: Perform convolution processing on the first feature map C_n among the first feature maps C_1...C_n using a first convolution kernel to obtain a second feature map F_n corresponding to C_n, where n represents the number of first feature maps, n is an integer greater than 1, and the length and width of the first feature map C_n are the same as the length and width of the second feature map F_n, respectively.
- The forward processing performed by the first pyramid neural network in the embodiment of the present disclosure may include first convolution processing and first linear interpolation processing, and may also include other processing procedures, which are not limited in the embodiment of the present disclosure.
- The first feature maps obtained in the embodiment of the present disclosure may be C_1...C_n, that is, n first feature maps, where C_n may be the feature map with the smallest length and width, that is, the first feature map with the smallest scale.
- The first pyramid neural network may first perform convolution processing on the first feature map C_n, that is, the first convolution kernel is used to convolve C_n to obtain the second feature map F_n, whose length and width are the same as those of C_n.
- The first convolution kernel may be a 3 * 3 convolution kernel, or may be another type of convolution kernel.
- S202: Perform linear interpolation processing on the second feature map F_n to obtain a first intermediate feature map F'_n corresponding to F_n, where the scale of the first intermediate feature map F'_n is the same as the scale of the first feature map C_{n-1}.
- After the second feature map F_n is obtained, the corresponding first intermediate feature map F'_n may be obtained; the embodiment of the present disclosure may obtain the first intermediate feature map F'_n corresponding to F_n by performing linear interpolation on F_n, where the scale of F'_n is the same as the scale of C_{n-1}. For example, when the scale of C_{n-1} is twice the scale of F_n, the length of the first intermediate feature map F'_n is twice the length of the second feature map F_n, and the width of F'_n is twice the width of F_n.
- S203: Perform convolution processing on each of the first feature maps C_1...C_{n-1} other than the first feature map C_n using a second convolution kernel to obtain second intermediate feature maps C'_1...C'_{n-1} in one-to-one correspondence with C_1...C_{n-1}, where the scale of each second intermediate feature map is the same as the scale of the first feature map corresponding to it.
- The second convolution kernel may be used to perform second convolution processing on the first feature maps C_1...C_{n-1}, respectively, to obtain the second intermediate feature maps C'_1...C'_{n-1} in one-to-one correspondence with C_1...C_{n-1}. The second convolution kernel may be a 1 * 1 convolution kernel, but this disclosure does not specifically limit it. Each second intermediate feature map obtained by the second convolution processing has the same scale as the corresponding first feature map.
- The embodiment of the present disclosure may perform dimensionality reduction on the first feature maps C_1...C_{n-1} to obtain the second intermediate feature maps C'_1...C'_{n-1} corresponding to them. That is, the second intermediate feature map C'_{n-1} corresponding to the first feature map C_{n-1} may be obtained first, then the second intermediate feature map C'_{n-2} corresponding to C_{n-2}, and so on, until the second intermediate feature map C'_1 corresponding to C_1 is obtained.
- S204: Obtain second feature maps F_1...F_{n-1} and first intermediate feature maps F'_1...F'_{n-1} based on the second feature map F_n and the second intermediate feature maps C'_1...C'_{n-1}, where the second feature map F_i is obtained by superimposing (adding) the second intermediate feature map C'_i and the first intermediate feature map F'_{i+1}, the first intermediate feature map F'_i is obtained by linear interpolation from the corresponding second feature map F_i, and the scale of C'_i is the same as that of F'_{i+1}, where i is an integer greater than or equal to 1 and less than n.
- In the embodiment of the present disclosure, the second feature maps F_i other than F_n may be obtained in reverse order. That is, the second feature map F_{n-1} may be obtained first: the second intermediate feature map C'_{n-1} corresponding to the first feature map C_{n-1} and the first intermediate feature map F'_n are superimposed to obtain F_{n-1}, where the length and width of C'_{n-1} are the same as those of F'_n, respectively, and the length and width of F_{n-1} are the same as those of C'_{n-1} and F'_n. In this case the length and width of F_{n-1} are twice the length and width of F_n (the scale of C_{n-1} being twice the scale of C_n).
- Next, the second feature map F_{n-1} can be linearly interpolated to obtain the first intermediate feature map F'_{n-1}, so that the scale of F'_{n-1} is the same as the scale of C_{n-2}. The second intermediate feature map C'_{n-2} corresponding to the first feature map C_{n-2} and the first intermediate feature map F'_{n-1} can then be superimposed to obtain the second feature map F_{n-2}, where the length and width of C'_{n-2} are the same as those of F'_{n-1}, and the length and width of F_{n-2} are the same as those of C'_{n-2} and F'_{n-1}; for example, the length and width of F_{n-2} are twice the length and width of F_{n-1}, respectively.
- Proceeding in this way, the first intermediate feature map F'_2 can finally be obtained, and the second feature map F_1 can be obtained by superimposing the first intermediate feature map F'_2 and the second intermediate feature map C'_1; the length and width of F_1 are the same as the length and width of C_1.
- In an example, step S200 may use a first pyramid neural network (Feature Pyramid Network, FPN) to obtain the multi-scale second feature maps.
- First, C_4 may be passed through a 3 * 3 convolution kernel (the first convolution kernel) to obtain a new feature map F_4 (a second feature map); F_4 and C_4 have the same length and width.
- An up-sampling operation with bilinear interpolation is performed on F_4 to obtain a feature map whose length and width are enlarged by a factor of two, that is, the first intermediate feature map F'_4.
- C_3 is passed through a 1 * 1 second convolution kernel to obtain a second intermediate feature map C'_3; C'_3 and F'_4 have the same size, and the two feature maps are added to obtain a new feature map F_3 (a second feature map), whose length and width are each twice those of F_4.
- An up-sampling operation with bilinear interpolation is performed on F_3 to obtain a feature map whose length and width are enlarged by a factor of two, that is, the first intermediate feature map F'_3.
- C_2 is passed through a 1 * 1 second convolution kernel to obtain a second intermediate feature map C'_2; C'_2 and F'_3 have the same size, and the two feature maps are added to obtain a new feature map F_2 (a second feature map), whose length and width are each twice those of F_3.
- An up-sampling operation with bilinear interpolation is performed on F_2 to obtain a feature map whose length and width are enlarged by a factor of two, that is, the first intermediate feature map F'_2.
- C_1 is passed through a 1 * 1 second convolution kernel to obtain a second intermediate feature map C'_1; C'_1 and F'_2 have the same size, and the two feature maps are added to obtain a new feature map F_1 (a second feature map), whose length and width are each twice those of F_2.
- Through the FPN, four second feature maps of different scales are obtained, denoted F_1, F_2, F_3 and F_4. The length and width multiples between F_1 and F_2 are the same as those between C_1 and C_2, the multiples between F_2 and F_3 are the same as those between C_2 and C_3, and the multiples between F_3 and F_4 are the same as those between C_3 and C_4.
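- The forward processing described above can be sketched as follows; the channel numbers are assumptions (matching a ResNet-50 backbone), while the 3 * 3 first convolution kernel, the 1 * 1 second convolution kernels and the bilinear up-sampling follow the description.

```python
import torch.nn as nn
import torch.nn.functional as F

class ForwardPyramid(nn.Module):
    # Sketch of step S200 (FPN-style forward processing).
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        self.first_conv = nn.Conv2d(in_channels[-1], out_channels, 3, padding=1)  # C_n -> F_n
        self.second_convs = nn.ModuleList(
            [nn.Conv2d(c, out_channels, 1) for c in in_channels[:-1]]             # C_i -> C'_i
        )

    def forward(self, c_maps):                       # c_maps = [C_1, ..., C_n]
        f = self.first_conv(c_maps[-1])              # F_n
        second_maps = [f]
        for i in range(len(c_maps) - 2, -1, -1):     # i = n-2 ... 0
            up = F.interpolate(f, scale_factor=2, mode="bilinear",
                               align_corners=False)  # F'_{i+1}
            c_prime = self.second_convs[i](c_maps[i])    # C'_i
            f = c_prime + up                             # F_i = C'_i + F'_{i+1}
            second_maps.insert(0, f)
        return second_maps                           # [F_1, ..., F_n]
```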
- After the second feature maps are obtained, reverse processing may be performed on each second feature map through step S300. The reverse processing may include second convolution processing and second linear interpolation processing, and other processing may also be included, which is not specifically limited in the embodiment of the present disclosure.
- FIG. 5 shows a flowchart of step S300 in the key point detection method according to an embodiment of the present disclosure.
- Using the second pyramid neural network to perform reverse processing on each second feature map to obtain third feature maps of different scales (step S300) may include:
- S301: Perform convolution processing on the second feature map F_1 among the second feature maps F_1...F_m using a third convolution kernel to obtain a third feature map R_1 corresponding to F_1, where the length and width of R_1 are the same as the length and width of the first feature map C_1, m represents the number of second feature maps, and m is an integer greater than 1; in this case m is equal to the number n of first feature maps.
- The reverse processing may start from the second feature map F_1 with the largest length and width. F_1 can be convolved with the third convolution kernel to obtain the third feature map R_1, whose length and width are the same as those of F_1. The third convolution kernel may be a 3 * 3 convolution kernel or another type of convolution kernel; those skilled in the art may select a desired convolution kernel according to different requirements.
- S302: Perform convolution processing on the second feature maps F_2...F_m using a fourth convolution kernel to obtain the corresponding third intermediate feature maps F''_2...F''_m, where the scale of each third intermediate feature map is the same as the scale of the corresponding second feature map.
- The fourth convolution kernel may be used to convolve each of the second feature maps F_2...F_m other than F_1 to obtain the corresponding third intermediate feature maps F''_2...F''_m. F_2 may be convolved first to obtain the corresponding third intermediate feature map F''_2, then F_3 may be convolved to obtain the corresponding third intermediate feature map F''_3, and so on, until the third intermediate feature map F''_m corresponding to the second feature map F_m is obtained. Each third intermediate feature map F''_j has the same length and width as the corresponding second feature map F_j.
- S303: Perform convolution processing on the third feature map R_1 using a fifth convolution kernel to obtain a fourth intermediate feature map R'_1 corresponding to R_1. The length and width of the fourth intermediate feature map R'_1 are the same as the length and width of the second feature map F_2.
- S304: Obtain the third feature maps R_2...R_m other than R_1 using the third intermediate feature maps F''_j obtained in step S302 and the fourth intermediate feature maps R'_{j-1} obtained in step S303, where each third feature map R_j other than R_1 is obtained by superimposing the third intermediate feature map F''_j and the fourth intermediate feature map R'_{j-1}.
- Specifically, the third feature map R_2 can be obtained first by adding the third intermediate feature map F''_2 and the fourth intermediate feature map R'_1. Then the fifth convolution kernel is used to convolve R_2 to obtain a fourth intermediate feature map R'_2, and the third feature map R_3 is obtained by adding the third intermediate feature map F''_3 and the fourth intermediate feature map R'_2. In the same way, the remaining fourth intermediate feature maps R'_3...R'_m and third feature maps R_4...R_m can be obtained.
- The fourth intermediate feature map R'_1 has the same length and width as the second feature map F_2, and in general the length and width of the fourth intermediate feature map R'_j are the same as those of the third intermediate feature map F''_{j+1}. The length and width of each third feature map R_j are the same as those of the second feature map F_j, and thus the lengths and widths of the third feature maps R_1...R_n correspond to those of the first feature maps C_1...C_n, respectively.
- In an example, a second feature pyramid network (Reverse Feature Pyramid Network, RFPN) is then used to further optimize the multi-scale features.
- The second feature map F_1 is passed through a 3 * 3 convolution kernel (the third convolution kernel) to obtain a new feature map R_1 (a third feature map); R_1 has the same length and width as F_1.
- The feature map R_1 is passed through a 3 * 3 convolution kernel (the fifth convolution kernel) with a stride of 2 to compute a new feature map, denoted R'_1; the length and width of R'_1 are each half those of R_1.
- The second feature map F_2 is passed through a 3 * 3 convolution kernel (the fourth convolution kernel) to obtain a new feature map, denoted F''_2. R'_1 and F''_2 have the same size, and R'_1 and F''_2 are added to obtain a new feature map R_2.
- Through the RFPN, four feature maps of different scales are likewise obtained, denoted R_1, R_2, R_3 and R_4. The length and width multiples between R_1 and R_2 are the same as those between C_1 and C_2, the multiples between R_2 and R_3 are the same as those between C_2 and C_3, and the multiples between R_3 and R_4 are the same as those between C_3 and C_4.
- In this way, the third feature maps R_1...R_n produced by the reverse processing of the second pyramid network model are obtained. The forward processing followed by the reverse processing further improves the degree of feature fusion of the image, and the key points can be accurately identified based on the third feature maps.
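- The reverse processing of the RFPN can be sketched in the same spirit; the 3 * 3 third and fourth convolution kernels and the stride-2 fifth convolution kernel follow the description, while the channel numbers are assumptions.

```python
import torch.nn as nn

class ReversePyramid(nn.Module):
    # Sketch of step S300 (RFPN-style reverse processing).
    def __init__(self, channels=256, levels=4):
        super().__init__()
        self.third_conv = nn.Conv2d(channels, channels, 3, padding=1)  # F_1 -> R_1
        self.fourth_convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(levels - 1)]            # F_j -> F''_j
        )
        self.fifth_convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, stride=2, padding=1) for _ in range(levels - 1)]  # R_{j-1} -> R'_{j-1}
        )

    def forward(self, f_maps):                       # f_maps = [F_1, ..., F_m]
        r = self.third_conv(f_maps[0])               # R_1, same size as F_1
        third_maps = [r]
        for j in range(1, len(f_maps)):
            r_prime = self.fifth_convs[j - 1](r)             # R'_{j-1}, half the size of R_{j-1}
            f_dprime = self.fourth_convs[j - 1](f_maps[j])   # F''_j, same size as F_j
            r = f_dprime + r_prime                           # R_j
            third_maps.append(r)
        return third_maps                            # [R_1, ..., R_m]
```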
- After step S300, the third feature maps R_i may be fused, and the positions of the key points of the input image obtained from the fusion result.
- Performing feature fusion processing on each third feature map and obtaining the position of each key point in the input image using the fused feature map (step S400) may include:
- S401: Perform feature fusion processing on each third feature map to obtain a fourth feature map.
- Feature fusion may be performed on the third feature maps. Because the lengths and widths of the third feature maps differ in the embodiment of the present disclosure, R_2...R_n can be linearly interpolated so that the length and width of each of R_2...R_n become the same as the length and width of the third feature map R_1; the processed third feature maps can then be combined to form the fourth feature map.
- S402: Obtain the positions of the key points in the input image based on the fourth feature map.
- After the fourth feature map is obtained, it may be subjected to dimensionality reduction, for example through convolution processing, and the dimension-reduced features may be used to identify the positions of the key points of the input image.
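- A possible reading of step S402 is that the dimensionality-reduction convolution maps the fourth feature map to one response map per key point, whose peak gives the key point position; the number of key points, the channel count of the fourth feature map, and the heatmap interpretation are all assumptions in this sketch.

```python
import torch
import torch.nn as nn

num_keypoints = 17                                            # hypothetical number of key points
reduce_conv = nn.Conv2d(1024, num_keypoints, kernel_size=1)   # assumed dimensionality-reduction kernel

def keypoint_positions(fourth_feature_map):
    heatmaps = reduce_conv(fourth_feature_map)        # (N, K, H, W) response maps
    n, k, h, w = heatmaps.shape
    flat = heatmaps.view(n, k, -1).argmax(dim=-1)     # index of the peak response per key point
    ys = torch.div(flat, w, rounding_mode="floor")    # row coordinate of the peak
    xs = flat % w                                     # column coordinate of the peak
    return torch.stack([xs, ys], dim=-1)              # (N, K, 2) positions in feature-map coordinates
```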
- A flowchart of step S401 in the key point detection method according to an embodiment of the present disclosure is shown, wherein performing feature fusion processing on each third feature map to obtain a fourth feature map (step S401) may include:
- S4012 Use linear interpolation to adjust each third feature map to feature maps with the same scale.
- Since the scales of the third feature maps R_1...R_n obtained in the embodiments of the present disclosure are different, it is first necessary to adjust the third feature maps to feature maps of the same scale.
- Different linear interpolation processing may be applied to the third feature maps so that the scale of each feature map becomes the same, where the interpolation multiple may be related to the scale multiple between the third feature maps.
- the feature maps can be stitched and combined to obtain the fourth feature map.
- Since the feature maps after the interpolation processing in the embodiments of the present disclosure have the same length and width, the feature maps can be connected in the height (channel) direction to obtain the fourth feature map.
- For example, if the feature maps after the processing of S4012 are represented as A, B, C, and D, the obtained fourth feature map can be the concatenation of A, B, C and D along that direction.
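- The adjustment and connection of step S4012 and the subsequent concatenation can be sketched as follows (an illustrative assumption, not the exact implementation): each R_i is bilinearly interpolated to the size of R_1 and the results are concatenated along the channel dimension; the function name `fuse_third_feature_maps` is hypothetical.

```python
import torch
import torch.nn.functional as F

def fuse_third_feature_maps(r_maps):
    """r_maps = [R_1, ..., R_n]; R_1 has the largest length and width."""
    target_size = r_maps[0].shape[-2:]
    resized = [r_maps[0]] + [
        F.interpolate(r, size=target_size, mode='bilinear', align_corners=False)
        for r in r_maps[1:]
    ]
    # connect A, B, C, D ... along the channel ("height") dimension
    return torch.cat(resized, dim=1)   # fourth feature map
```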
- Before step S401, in order to optimize small-scale features, the embodiment of the present disclosure may further optimize the third feature maps with smaller length and width, and may perform further convolution processing on this part of the features.
- FIG. 8 shows another flowchart of a key point detection method according to an embodiment of the present disclosure, where, before performing feature fusion processing on each third feature map to obtain a fourth feature map, S4011 may also be included:
- S4011 Input the first set of third feature maps into different bottleneck block structures for convolution processing to respectively obtain updated third feature maps, each of the bottleneck block structures including a different number of convolution modules; wherein the third feature maps include a first set of third feature maps and a second set of third feature maps, and both the first set of third feature maps and the second set of third feature maps include at least one third feature map.
- In view of their small scale, further convolution processing can be applied to these features, wherein the third feature maps R_1...R_m may be divided into two groups, and the scale of the first group of third feature maps is smaller than the scale of the second group of third feature maps.
- each third feature map in the first group of third feature maps can be input into different bottleneck block structures to obtain an updated third feature map.
- the bottleneck block structure can include at least one convolution module, and the number of convolution modules in different bottleneck block structures may be different.
- the size of the feature map obtained after the convolution processing of the bottleneck block structure is the same as the size of the third feature map before input.
- the first group of third feature maps may be determined according to a preset ratio value of the number of third feature maps.
- the preset ratio can be 50%, that is, the smaller half of the third feature maps can be input, as the first set of third feature maps, into different bottleneck block structures for feature optimization processing.
- the preset ratio may also be other ratio values, which is not limited in this disclosure.
- the first set of third feature maps input to the bottleneck block structure may also be determined according to the scale threshold.
- a feature map whose scale is smaller than the scale threshold is determined to need to be input into a bottleneck block structure for feature optimization processing.
- the scale threshold may be determined according to the scale of each feature map, which is not specifically limited in the embodiments of the present disclosure.
- bottleneck block structure is not specifically limited in the embodiments of the present disclosure, and the form of the convolution module can be selected according to requirements.
- the optimized first set of third feature maps and the second set of third feature maps may then be scale-normalized, that is, the feature maps are adjusted to feature maps of the same size.
- corresponding linear interpolation processing is performed on each third feature map optimized in S4011 and on the second group of third feature maps, respectively, so as to obtain feature maps of the same size.
- For example, R_2, R_3 and R_4 are followed by different numbers of bottleneck block structures: R_2 is followed by one bottleneck block to obtain a new feature map denoted R″_2, R_3 is followed by two bottleneck blocks to obtain a new feature map denoted R″_3, and R_4 is followed by three bottleneck blocks to obtain a new feature map denoted R″_4.
- In step S4012, the feature maps with the same scale can then be connected; for example, the above four feature maps are concatenated to obtain a new feature map, which is the fourth feature map. For example, if the four feature maps R_1, R‴_2, R‴_3 and R‴_4 are each of 256 dimensions, the obtained fourth feature map can be of 1024 dimensions.
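- One possible (hypothetical) form of the bottleneck block and its use on R_2, R_3 and R_4 is sketched below; the 1*1-3*3-1*1 layout with a residual connection is a common choice and is only an assumption here, since the present disclosure does not limit the form of the convolution modules.

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    """Keeps the spatial size and channel count of the input feature map."""
    def __init__(self, channels=256, mid=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1),
        )

    def forward(self, x):
        return x + self.body(x)        # residual connection, size unchanged

# R_2 -> 1 block, R_3 -> 2 blocks, R_4 -> 3 blocks (the smaller the map, the more blocks)
refine_r2 = nn.Sequential(Bottleneck())
refine_r3 = nn.Sequential(Bottleneck(), Bottleneck())
refine_r4 = nn.Sequential(Bottleneck(), Bottleneck(), Bottleneck())
```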
- the corresponding fourth feature map can be obtained through the configuration in the above different embodiments.
- the key point position of the input image can be obtained according to the fourth feature map.
- the fourth feature map may be directly subjected to dimensionality reduction processing, and the position of the key point of the input image may be determined using the dimensionality reduction processed feature map.
- the feature map after dimensionality reduction may also be purified to further improve the accuracy of key points.
- FIG. 9 shows a flowchart of step S402 in a key point detection method according to an embodiment of the present disclosure.
- the obtaining the position of each key point in the input image based on the fourth feature map may include:
- S4021 Perform dimensionality reduction processing on the fourth feature map using a fifth convolution kernel.
- the manner of performing the dimensionality reduction processing may be convolution processing, that is, a preset convolution module is used to perform convolution processing on the fourth feature map to achieve the dimensionality reduction of the fourth feature map, for example reducing it to a 256-dimensional feature map.
- S4022 Use the convolution block attention module to perform purification processing on the features in the fourth feature map after the dimensionality reduction process, to obtain a purified feature map.
- the convolutional block attention module can be further used to purify the fourth feature map after the dimensionality reduction process.
- the convolutional block attention module may be a convolutional block attention module in the prior art.
- the convolutional block attention module of the embodiment of the present disclosure may include a channel attention unit and an importance attention unit.
- the fourth feature map after dimensionality reduction processing can first be input into the channel attention unit, where it is first subjected to global max pooling and global average pooling based on height and width; the first result obtained by the global max pooling and the second result obtained by the global average pooling are then input into a multi-layer perceptron (MLP), the two results processed by the MLP are summed to obtain a third result, and the third result is activated to obtain a channel attention feature map.
- the channel attention feature map is input to the importance attention unit.
- the channel attention feature map can be subjected to channel-based global max pooling and global average pooling to obtain a fourth result and a fifth result, respectively; the fourth result and the fifth result are then connected, the connected result is reduced in dimension through convolution processing, and the sigmoid function is applied to the dimension-reduced result to obtain an importance attention feature map, which is then multiplied by the channel attention feature map to obtain the purified feature map.
- The above is merely an example of the convolutional block attention module in the embodiment of the present disclosure; other structures may also be used to purify the fourth feature map after dimensionality reduction.
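- The purification step can be sketched as a convolutional block attention module of the kind described above; the reduction ratio, channel counts and the 7*7 convolution in the importance (spatial) branch are assumptions made for this example.

```python
import torch
import torch.nn as nn

class ConvBlockAttention(nn.Module):
    def __init__(self, channels=256, reduction=16):
        super().__init__()
        # channel attention unit: shared MLP over globally max- and average-pooled features
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1))
        # importance (spatial) attention unit: conv over concatenated channel-wise max/avg maps
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # channel attention: sum of MLP(max pooling) and MLP(average pooling), then sigmoid
        ca = torch.sigmoid(self.mlp(torch.amax(x, dim=(2, 3), keepdim=True)) +
                           self.mlp(torch.mean(x, dim=(2, 3), keepdim=True)))
        x = x * ca                                   # channel attention feature map
        # importance attention: channel-wise max and mean, concatenated, convolved, sigmoid
        sa = torch.sigmoid(self.spatial(torch.cat(
            [torch.amax(x, dim=1, keepdim=True),
             torch.mean(x, dim=1, keepdim=True)], dim=1)))
        return x * sa                                # purified feature map
```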
- S4023 Determine the position of key points of the input image using the purified feature map.
- the feature map can be used to obtain the position information of key points, for example, the purified feature map can be input to a 3 * 3 convolution module to predict the position information of each key point in the input image .
- the predicted key points may be the positions of 17 key points, for example including the left and right eyes, the nose, the left and right ears, the left and right shoulders, the left and right elbows, the left and right wrists, the left and right hips, the left and right knees, and the left and right ankles; other key point positions may also be obtained, which is not limited in the embodiments of the present disclosure.
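- As a hedged illustration of step S4023, the purified feature map can be passed through a final 3*3 convolution that outputs one response map per key point (17 channels here), and each key point position can be read off as the arg-max of its map; the heatmap formulation and the helper names are assumptions, since the disclosure only states that a 3*3 convolution module predicts the position information.

```python
import torch
import torch.nn as nn

NUM_KEYPOINTS = 17

# assumes the purified feature map has 256 channels, as in the dimensionality reduction example
head = nn.Conv2d(256, NUM_KEYPOINTS, kernel_size=3, padding=1)

def predict_keypoints(purified):                     # purified: (B, 256, H, W)
    heatmaps = head(purified)                        # (B, 17, H, W)
    b, k, h, w = heatmaps.shape
    flat = heatmaps.view(b, k, -1)
    idx = flat.argmax(dim=-1)                        # index of the peak per key point
    ys = torch.div(idx, w, rounding_mode='floor')
    xs = idx % w
    return torch.stack([xs, ys], dim=-1)             # (B, 17, 2) pixel coordinates
```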
- FIG. 10 shows a flowchart of training a first pyramid neural network in a keypoint detection method according to an embodiment of the present disclosure.
- the embodiment of the present disclosure may use the training image data set to train the first pyramid neural network, which includes:
- S501 Perform the forward processing on the first feature map corresponding to each image in the training image data set using a first pyramid neural network to obtain a second feature map corresponding to each image in the training image data set.
- the training image data set may be input to the first pyramid neural network for training.
- the training image data set may include multiple images and the actual positions of key points corresponding to the images.
- The extraction of the multi-scale first feature maps and the forward processing may be performed in the same manner as in steps S100 and S200 described above.
- S502 Use each second feature map to determine the identified key points.
- the obtained second feature map may be used to identify key points of the training image to obtain the first position of each key point of the training image.
- S504 Use the first loss value to reversely adjust each convolution kernel in the first pyramid neural network until the number of training times reaches the set first number threshold.
- the first loss corresponding to the predicted first position can be obtained.
- the parameters of the first pyramid neural network, such as the parameters of the convolution kernels, can be reversely adjusted according to the first loss obtained in each training iteration until the number of training times reaches the first number threshold. The first number threshold can be set according to requirements and is generally a value greater than 120.
- for example, the first number threshold in the embodiment of the present disclosure may be 140.
- the first loss corresponding to the first position may be a loss value obtained by inputting the first difference between the first position and the real position into the first loss function, where the first loss function may be a logarithmic loss function.
- the first position and the real position may be input to the first loss function to obtain the corresponding first loss.
- the embodiments of the present disclosure do not limit this. Based on the above, the training process of the first pyramid neural network can be realized, and the parameters of the first pyramid neural network can be optimized.
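- A minimal training-step sketch for the first pyramid neural network is given below, under the assumption that the identified key points are compared with the real positions by a generic first loss function and that "reversely adjusting the convolution kernels" is realized as ordinary gradient descent; the helper names (`fpn`, `head`, `loader`) are hypothetical.

```python
import torch

def train_first_pyramid(fpn, head, loader, first_loss_fn, first_count_threshold=140):
    # reversely adjusting the convolution kernels = gradient descent on fpn's weights
    optimizer = torch.optim.SGD(fpn.parameters(), lr=1e-3, momentum=0.9)
    step = 0
    while step < first_count_threshold:
        for first_maps, target in loader:            # first_maps: C_1...C_n for one batch
            second_maps = fpn(first_maps)            # forward processing -> F_1...F_n
            predicted = head(second_maps)            # first positions of the key points
            loss = first_loss_fn(predicted, target)  # first loss
            optimizer.zero_grad()
            loss.backward()                          # back-propagate the first loss
            optimizer.step()                         # adjust each convolution kernel
            step += 1
            if step >= first_count_threshold:
                break
```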
- FIG. 11 shows a flowchart of training a second pyramid neural network in a keypoint detection method according to an embodiment of the present disclosure.
- the embodiment of the present disclosure may use the training image data set to train the second pyramid neural network, which includes:
- S601 Use the second pyramid neural network to perform the reverse processing on the second feature map output by the first pyramid neural network and corresponding to each image in the training image data set, to obtain a third feature map corresponding to each image in the training image data set.
- the first pyramid neural network may be used first to obtain the second feature map of each image in the training data set, and then the second feature map corresponding to each image in the training image data set may be processed through the second pyramid neural network to obtain the third feature map corresponding to each image in the training image data set; each third feature map is then used to predict the second position of each key point of the corresponding image.
- S604 Use the second loss to reversely adjust the convolution kernels in the second pyramid neural network until the number of training times reaches the set second number threshold; or, use the second loss to reversely adjust the convolution kernels in the first pyramid neural network and the convolution kernels in the second pyramid neural network until the number of training times reaches the set second number threshold.
- the second loss corresponding to the predicted second position can be obtained after the second position of each key point is obtained.
- the parameters of the second pyramid neural network, such as the parameters of the convolution kernels, can be reversely adjusted according to the second loss obtained in each training iteration until the number of training times reaches the second number threshold. The second number threshold can be set according to requirements and is generally a value greater than 120; for example, the second number threshold in the embodiment of the present disclosure may be 140.
- the second loss corresponding to the second position may be a loss value obtained by inputting the second difference between the second position and the real position into the second loss function, where the second loss function may be a logarithmic loss function.
- the second position and the real position may be input to the second loss function to obtain the corresponding second loss value.
- the embodiments of the present disclosure do not limit this.
- While training the second pyramid neural network, the first pyramid neural network can also be further optimized and trained simultaneously. That is, in the embodiment of the present disclosure, in step S604, the obtained second loss value can be used to simultaneously and reversely adjust the convolution kernel parameters in the first pyramid neural network and the convolution kernel parameters in the second pyramid neural network, so as to achieve further optimization of the entire network model.
- the training process of the second pyramid neural network can be realized, and the optimization of the first pyramid neural network can be realized.
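- If the second loss is used to adjust both pyramid networks at the same time, the optimizer simply covers the parameters of both networks; a short sketch under the same assumptions and hypothetical names as above:

```python
import itertools
import torch

def train_second_pyramid(fpn, rfpn, head, loader, second_loss_fn, second_count_threshold=140):
    # adjust the convolution kernels of both the first and the second pyramid neural network
    params = itertools.chain(fpn.parameters(), rfpn.parameters())
    optimizer = torch.optim.SGD(params, lr=1e-3, momentum=0.9)
    for step, (first_maps, target) in enumerate(loader):
        if step >= second_count_threshold:
            break
        third_maps = rfpn(fpn(first_maps))           # forward then reverse processing
        loss = second_loss_fn(head(third_maps), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```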
- Step S400 may be implemented by a feature extraction network model, and the embodiment of the present disclosure may also perform an optimization process for the feature extraction network model. FIG. 12 shows a flowchart of training the feature extraction network model in a key point detection method according to an embodiment of the present disclosure, wherein training the feature extraction network model using a training image data set may include:
- S701 Use the feature extraction network model to perform the feature fusion processing on the third feature map, output by the second pyramid neural network, corresponding to each image in the training image data set, and use the feature maps after the feature fusion processing to identify the key points of each image in the training image data set.
- the third feature maps corresponding to the training image data set, obtained through the forward processing of the first pyramid neural network and the processing of the second pyramid neural network, may be input into the feature extraction network model, and the feature extraction network model performs feature fusion, purification and other processing to obtain the third position of each key point of each image in the training image data set.
- S703 Use the third loss value to reversely adjust the parameters of the feature extraction network until the number of training times reaches the set third number threshold; or, use the third loss to reversely adjust the convolution kernel parameters in the first pyramid neural network, the convolution kernel parameters in the second pyramid neural network, and the parameters of the feature extraction network until the number of training times reaches the set third number threshold.
- the third loss value corresponding to the predicted third position can be obtained.
- the parameters of the feature extraction network model, such as the parameters of the convolution kernels or the parameters of the above pooling processing, can be reversely adjusted based on the third loss obtained in each training iteration until the number of training times reaches the third number threshold. The third number threshold may be set according to requirements and is generally a value greater than 120.
- the third times threshold may be 140 in the embodiment of the present disclosure.
- the third loss corresponding to the third position may be a loss value obtained by inputting the third difference between the third position and the real position into the third loss function, where the third loss function may be a logarithmic loss function.
- the third position and the real position may be input to the third loss function to obtain the corresponding third loss value.
- the embodiments of the present disclosure do not limit this.
- the training process of the feature extraction network model can be realized, and the parameter optimization of the feature extraction network model can be realized.
- While training the feature extraction network model, the first pyramid neural network and the second pyramid neural network can also be further optimized and trained simultaneously. That is, in the embodiment of the present disclosure, in step S703, the obtained third loss value can be used to simultaneously and reversely adjust the convolution kernel parameters in the first pyramid neural network, the convolution kernel parameters in the second pyramid neural network, and the parameters of the feature extraction network model, thereby achieving further optimization of the entire network model.
- the embodiment of the present disclosure proposes to use a bidirectional pyramid network model to perform key point feature detection, in which not only multi-scale features are obtained by forward processing, but also more features are fused by reverse processing. This can further improve the detection accuracy of key points.
- the present disclosure also provides a key point detection device, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any key point detection method provided by the present disclosure.
- FIG. 13 shows a block diagram of a key point detection device according to an embodiment of the present disclosure.
- the key point detection device includes:
- the multi-scale feature acquisition module 10 is configured to obtain a first feature map for multiple scales of the input image, and the scale of each first feature map is in a multiple relationship;
- the forward processing module 20 is configured to perform forward processing on each of the first feature maps using a first pyramid neural network to obtain a second feature map in one-to-one correspondence with each of the first feature maps, wherein each second feature map has the same scale as its corresponding first feature map;
- the reverse processing module 30 is configured to perform reverse processing on each of the second feature maps using a second pyramid neural network to obtain a third feature map in one-to-one correspondence with each of the second feature maps, wherein each third feature map has the same scale as its corresponding second feature map;
- the key point detection module 40 is configured to perform feature fusion processing on each of the third feature maps, and use the feature maps after the feature fusion processing to obtain the position of each key point in the input image.
- the multi-scale feature acquisition module is configured to adjust the input image to a first image of a preset specification, input the first image into a residual neural network, and perform down-sampling processing at different sampling frequencies on the first image to obtain multiple first feature maps of different scales.
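- A rough sketch of the multi-scale feature acquisition is given below, assuming a small residual-style backbone whose stages halve the length and width so that the first feature maps are in a 2x multiple relationship; the backbone structure and the preset specification (256*192) are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBackbone(nn.Module):
    """Produces C_1...C_4; each stage halves the length and width of the previous one."""
    def __init__(self, channels=256):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 7, stride=2, padding=3)
        self.stages = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, stride=2, padding=1) for _ in range(4)])

    def forward(self, image):
        # adjust the input image to a preset specification, e.g. 256 x 192
        x = F.interpolate(image, size=(256, 192), mode='bilinear', align_corners=False)
        x = self.stem(x)
        firsts = []
        for stage in self.stages:
            x = torch.relu(stage(x))
            firsts.append(x)          # C_1, C_2, C_3, C_4 at scales 1/4, 1/8, 1/16, 1/32
        return firsts
```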
- the forward processing includes first convolution processing and first linear interpolation processing
- the reverse processing includes second convolution processing and second linear interpolation processing
- the forward processing module is configured to: perform convolution processing on the first feature map C_n among the first feature maps C_1...C_n using a first convolution kernel to obtain a second feature map F_n corresponding to C_n, where n denotes the number of first feature maps and n is an integer greater than 1; perform linear interpolation processing on the second feature map F_n to obtain a first intermediate feature map F′_n corresponding to F_n, where the scale of the first intermediate feature map F′_n is the same as the scale of the first feature map C_{n-1}; perform convolution processing on each first feature map C_1...C_{n-1} other than C_n using a second convolution kernel to obtain second intermediate feature maps C′_1...C′_{n-1} in one-to-one correspondence with the first feature maps C_1...C_{n-1}, where the scale of each second intermediate feature map is the same as the scale of its corresponding first feature map; and obtain second feature maps F_1...F_{n-1} and first intermediate feature maps F′_1...F′_{n-1} based on the second feature map F_n and the second intermediate feature maps C′_1...C′_{n-1}, wherein the second feature map F_i is obtained by superimposing the second intermediate feature map C′_i and the first intermediate feature map F′_{i+1}, the first intermediate feature map F′_i is obtained from the corresponding second feature map F_i by linear interpolation, and the second intermediate feature map C′_i has the same scale as the first intermediate feature map F′_{i+1}, where i is an integer greater than or equal to 1 and less than n.
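- For completeness, the forward processing of the first pyramid neural network can be sketched in the same style (again an illustrative assumption, not the exact network): a first convolution kernel maps C_n to F_n, lateral convolutions (the second convolution kernel) produce C′_i, and each F_i is obtained by adding C′_i to the linearly interpolated F′_{i+1}.

```python
import torch.nn as nn
import torch.nn.functional as F

class ForwardFPN(nn.Module):
    def __init__(self, in_channels=(256, 256, 256, 256), out_channels=256):
        super().__init__()
        # lateral convolutions acting as the first/second convolution kernels (sizes assumed)
        self.lateral = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels])

    def forward(self, firsts):                 # firsts = [C_1, ..., C_n], decreasing resolution
        f = [None] * len(firsts)
        f[-1] = self.lateral[-1](firsts[-1])   # F_n from C_n
        for i in range(len(firsts) - 2, -1, -1):
            c_d = self.lateral[i](firsts[i])                      # C'_i
            f_up = F.interpolate(f[i + 1], scale_factor=2,        # F'_{i+1}, linear interpolation
                                 mode='bilinear', align_corners=False)
            f[i] = c_d + f_up                                     # F_i = C'_i + F'_{i+1}
        return f                               # [F_1, ..., F_n]
```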
- the reverse processing module is configured to: perform convolution processing on the second feature map F_1 among the second feature maps F_1...F_m using a third convolution kernel to obtain a third feature map R_1 corresponding to F_1, where m denotes the number of second feature maps and m is an integer greater than 1; perform convolution processing on the second feature maps F_2...F_m using a fourth convolution kernel to obtain corresponding third intermediate feature maps F″_2...F″_m, where the scale of each third intermediate feature map is the same as the scale of its corresponding second feature map; perform convolution processing on the third feature map R_1 using a fifth convolution kernel to obtain a fourth intermediate feature map R′_1 corresponding to R_1; and obtain third feature maps R_2...R_m and fourth intermediate feature maps R′_2...R′_m using the third intermediate feature maps F″_2...F″_m and the fourth intermediate feature map R′_1, wherein the third feature map R_j is obtained by superimposing the third intermediate feature map F″_j and the fourth intermediate feature map R′_{j-1}, and the fourth intermediate feature map R′_{j-1} is obtained from the corresponding third feature map R_{j-1} through convolution processing with the fifth convolution kernel, where j is greater than 1 and less than or equal to m.
- the key point detection module is configured to perform feature fusion processing on each third feature map to obtain a fourth feature map, and obtain each key in the input image based on the fourth feature map The location of the point.
- the key point detection module is configured to use linear interpolation to adjust each third feature map to feature maps with the same scale, and connect the feature maps with the same scale to obtain the fourth feature map.
- the device further includes: an optimization module configured to input the first set of third feature maps into different bottleneck block structures for convolution processing to respectively obtain updated third feature maps, each of the bottleneck block structures including a different number of convolution modules, wherein the third feature maps include a first set of third feature maps and a second set of third feature maps, and both the first set of third feature maps and the second set of third feature maps include at least one third feature map.
- the key point detection module is further configured to adjust each updated third feature map and the second set of third feature maps to feature maps of the same scale using linear interpolation, and connect the feature maps with the same scale to obtain the fourth feature map.
- the key point detection module is further configured to perform dimensionality reduction processing on the fourth feature map using a fifth convolution kernel, and determine the positions of the key points of the input image using the fourth feature map after the dimensionality reduction processing.
- the key point detection module is further configured to perform dimensionality reduction processing on the fourth feature map using a fifth convolution kernel, use a convolutional block attention module to purify the features in the dimensionality-reduced fourth feature map to obtain a purified feature map, and use the purified feature map to determine the positions of the key points of the input image.
- the forward processing module is further configured to train the first pyramid neural network using a training image data set, which includes: performing the forward processing on the first feature map corresponding to each image in the training image data set using the first pyramid neural network to obtain a second feature map corresponding to each image in the training image data set; determining the identified key points using each second feature map; obtaining the first loss of the key points according to a first loss function; and using the first loss to reversely adjust each convolution kernel in the first pyramid neural network until the number of training times reaches the set first number threshold.
- the reverse processing module is further configured to train the second pyramid neural network using a training image data set, which includes: performing the reverse processing, using the second pyramid neural network, on the second feature map output by the first pyramid neural network and corresponding to each image in the training image data set, to obtain a third feature map corresponding to each image in the training image data set; determining the identified key points using each third feature map; obtaining the second loss of each identified key point according to a second loss function; and using the second loss to reversely adjust the convolution kernels in the second pyramid neural network until the number of training times reaches the set second number threshold, or using the second loss to reversely adjust the convolution kernels in the first pyramid neural network and the convolution kernels in the second pyramid neural network until the number of training times reaches the set second number threshold.
- the key point detection module is further configured to perform the feature fusion processing on each of the third feature maps through a feature extraction network, and, before performing the feature fusion processing on each of the third feature maps through the feature extraction network, to train the feature extraction network using a training image data set, which includes: performing the feature fusion processing, using the feature extraction network, on the third feature maps output by the second pyramid neural network and corresponding to each image in the training image data set, and identifying the key points of each image in the training image data set using the feature maps after the feature fusion processing; obtaining the third loss of each key point according to a third loss function; and using the third loss value to reversely adjust the parameters of the feature extraction network until the number of training times reaches the set third number threshold, or using the third loss to reversely adjust the convolution kernel parameters in the first pyramid neural network, the convolution kernel parameters in the second pyramid neural network, and the parameters of the feature extraction network until the number of training times reaches the set third number threshold.
- The functions or modules provided by the apparatus in the embodiments of the present disclosure may be used to perform the methods described in the above method embodiments.
- An embodiment of the present disclosure also proposes a computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above method is implemented.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
- the electronic device may be provided as a terminal, server, or other form of device.
- FIG. 14 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
- the electronic device 800 may be a terminal such as a mobile phone, computer, digital broadcasting terminal, messaging device, game console, tablet device, medical device, fitness device, and personal digital assistant.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps in the above method.
- the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operation at the electronic device 800. Examples of these data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, and so on.
- the memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable and removable Programmable read only memory (EPROM), programmable read only memory (PROM), read only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
- the power supply component 806 provides power to various components of the electronic device 800.
- the power component 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
- the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation.
- the multimedia component 808 includes a front camera and / or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and / or the rear camera may receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and / or input audio signals.
- the audio component 810 includes a microphone (MIC).
- the microphone is configured to receive an external audio signal.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I / O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, or a button. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing the electronic device 800 with status evaluation in various aspects.
- the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components (for example, the display and keypad of the electronic device 800), and can also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800.
- the sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may further include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
- the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above method.
- a non-volatile computer-readable storage medium is also provided, for example, a memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the above method.
- FIG. 15 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
- the electronic device 1900 may be provided as a server.
- the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by the memory 1932, for storing instructions executable by the processing component 1922, such as application programs.
- the application programs stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above method.
- the electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to the network, and an input output (I / O) interface 1958 .
- the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
- a non-volatile computer-readable storage medium is also provided, for example, a memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the above method.
- the present disclosure may be a system, method, and / or computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for causing the processor to implement various aspects of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- Computer-readable storage media include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disk read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanical coding device such as a punch card or a raised structure in a groove on which instructions are stored, and any suitable combination of the above.
- the computer-readable storage medium used herein is not to be interpreted as a transient signal itself, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, optical pulses through fiber-optic cables), or electrical signals transmitted through wires.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing / processing devices, or to an external computer or external storage device through a network, such as the Internet, a local area network, a wide area network, and / or a wireless network.
- the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and / or edge servers.
- the network adapter card or network interface in each computing / processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing / processing device .
- Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages.
- the programming languages include object-oriented programming languages such as Smalltalk, C ++, etc., and conventional procedural programming languages such as "C" language or similar programming languages.
- Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
- electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing the state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
- These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, thereby producing a machine such that, when these instructions are executed by the processor of the computer or other programmable data processing apparatus, an apparatus is produced that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- the computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause the computer, the programmable data processing apparatus, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- the computer-readable program instructions can also be loaded onto a computer, another programmable data processing apparatus, or another device, so that a series of operational steps are performed on the computer, other programmable data processing apparatus, or other device to produce a computer-implemented process, such that the instructions executed on the computer, other programmable data processing apparatus, or other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowcharts or block diagrams may represent a module, a program segment, or part of an instruction, and the module, program segment, or part of an instruction contains one or more executable instructions.
- the functions marked in the blocks may also occur in an order different from that marked in the drawings. For example, two consecutive blocks can actually be executed substantially in parallel, and sometimes they can also be executed in reverse order, depending on the functions involved.
- each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or actions, or can be realized by a combination of dedicated hardware and computer instructions.
Claims (30)
- A key point detection method, comprising: obtaining first feature maps of multiple scales for an input image, the scales of the first feature maps being in a multiple relationship; performing forward processing on each of the first feature maps using a first pyramid neural network to obtain second feature maps in one-to-one correspondence with the first feature maps, wherein each second feature map has the same scale as its corresponding first feature map; performing reverse processing on each of the second feature maps using a second pyramid neural network to obtain third feature maps in one-to-one correspondence with the second feature maps, wherein each third feature map has the same scale as its corresponding second feature map; and performing feature fusion processing on each of the third feature maps, and obtaining the position of each key point in the input image using the feature maps after the feature fusion processing.
- The method according to claim 1, wherein obtaining the first feature maps of multiple scales for the input image comprises: adjusting the input image to a first image of a preset specification; and inputting the first image into a residual neural network, and performing down-sampling processing with different sampling frequencies on the first image to obtain multiple first feature maps of different scales.
- The method according to claim 1, wherein the forward processing comprises first convolution processing and first linear interpolation processing, and the reverse processing comprises second convolution processing and second linear interpolation processing.
- The method according to any one of claims 1 to 3, wherein performing the forward processing on each of the first feature maps using the first pyramid neural network to obtain the second feature maps in one-to-one correspondence with the first feature maps comprises: performing convolution processing on the first feature map C_n among the first feature maps C_1...C_n using a first convolution kernel to obtain a second feature map F_n corresponding to C_n, where n denotes the number of first feature maps and n is an integer greater than 1; performing linear interpolation processing on the second feature map F_n to obtain a first intermediate feature map F′_n corresponding to F_n, where the scale of the first intermediate feature map F′_n is the same as the scale of the first feature map C_{n-1}; performing convolution processing on each first feature map C_1...C_{n-1} other than C_n using a second convolution kernel to obtain second intermediate feature maps C′_1...C′_{n-1} in one-to-one correspondence with the first feature maps C_1...C_{n-1}, where the scale of each second intermediate feature map is the same as the scale of its corresponding first feature map; and obtaining second feature maps F_1...F_{n-1} and first intermediate feature maps F′_1...F′_{n-1} based on the second feature map F_n and the second intermediate feature maps C′_1...C′_{n-1}, wherein the second feature map F_i is obtained by superimposing the second intermediate feature map C′_i and the first intermediate feature map F′_{i+1}, the first intermediate feature map F′_i is obtained from the corresponding second feature map F_i by linear interpolation, and the second intermediate feature map C′_i has the same scale as the first intermediate feature map F′_{i+1}, where i is an integer greater than or equal to 1 and less than n.
- The method according to any one of claims 1 to 4, wherein performing the reverse processing on each of the second feature maps using the second pyramid neural network to obtain the third feature maps in one-to-one correspondence with the second feature maps comprises: performing convolution processing on the second feature map F_1 among the second feature maps F_1...F_m using a third convolution kernel to obtain a third feature map R_1 corresponding to F_1, where m denotes the number of second feature maps and m is an integer greater than 1; performing convolution processing on the second feature maps F_2...F_m using a fourth convolution kernel to obtain corresponding third intermediate feature maps F″_2...F″_m, where the scale of each third intermediate feature map is the same as the scale of its corresponding second feature map; performing convolution processing on the third feature map R_1 using a fifth convolution kernel to obtain a fourth intermediate feature map R′_1 corresponding to R_1; and obtaining third feature maps R_2...R_m and fourth intermediate feature maps R′_2...R′_m using the third intermediate feature maps F″_2...F″_m and the fourth intermediate feature map R′_1, wherein the third feature map R_j is obtained by superimposing the third intermediate feature map F″_j and the fourth intermediate feature map R′_{j-1}, and the fourth intermediate feature map R′_{j-1} is obtained from the corresponding third feature map R_{j-1} through convolution processing with the fifth convolution kernel, where j is greater than 1 and less than or equal to m.
- The method according to any one of claims 1 to 5, wherein performing the feature fusion processing on each of the third feature maps and obtaining the position of each key point in the input image using the feature maps after the feature fusion processing comprises: performing feature fusion processing on each third feature map to obtain a fourth feature map; and obtaining the position of each key point in the input image based on the fourth feature map.
- The method according to claim 6, wherein performing the feature fusion processing on each third feature map to obtain the fourth feature map comprises: adjusting each third feature map to feature maps of the same scale by means of linear interpolation; and connecting the feature maps of the same scale to obtain the fourth feature map.
- The method according to claim 6 or 7, wherein before performing the feature fusion processing on each third feature map to obtain the fourth feature map, the method further comprises: inputting a first set of third feature maps into different bottleneck block structures for convolution processing to respectively obtain updated third feature maps, each of the bottleneck block structures including a different number of convolution modules, wherein the third feature maps include the first set of third feature maps and a second set of third feature maps, and both the first set of third feature maps and the second set of third feature maps include at least one third feature map.
- The method according to claim 8, wherein performing the feature fusion processing on each third feature map to obtain the fourth feature map comprises: adjusting each updated third feature map and the second set of third feature maps to feature maps of the same scale by means of linear interpolation; and connecting the feature maps of the same scale to obtain the fourth feature map.
- The method according to any one of claims 6 to 9, wherein obtaining the position of each key point in the input image based on the fourth feature map comprises: performing dimensionality reduction processing on the fourth feature map using a fifth convolution kernel; and determining the positions of the key points of the input image using the fourth feature map after the dimensionality reduction processing.
- The method according to any one of claims 6 to 9, wherein obtaining the position of each key point in the input image based on the fourth feature map comprises: performing dimensionality reduction processing on the fourth feature map using a fifth convolution kernel; purifying the features in the fourth feature map after the dimensionality reduction processing using a convolutional block attention module to obtain a purified feature map; and determining the positions of the key points of the input image using the purified feature map.
- The method according to any one of claims 1 to 11, wherein the method further comprises training the first pyramid neural network using a training image data set, which includes: performing the forward processing on the first feature map corresponding to each image in the training image data set using the first pyramid neural network to obtain a second feature map corresponding to each image in the training image data set; determining the identified key points using each second feature map; obtaining the first loss of the key points according to a first loss function; and using the first loss to reversely adjust each convolution kernel in the first pyramid neural network until the number of training times reaches a set first number threshold.
- The method according to any one of claims 1 to 12, wherein the method further comprises training the second pyramid neural network using a training image data set, which includes: performing the reverse processing, using the second pyramid neural network, on the second feature map output by the first pyramid neural network and corresponding to each image in the training image data set, to obtain a third feature map corresponding to each image in the training image data set; determining the identified key points using each third feature map; obtaining the second loss of each identified key point according to a second loss function; and using the second loss to reversely adjust the convolution kernels in the second pyramid neural network until the number of training times reaches a set second number threshold, or using the second loss to reversely adjust the convolution kernels in the first pyramid network and the convolution kernels in the second pyramid neural network until the number of training times reaches the set second number threshold.
- The method according to any one of claims 1 to 13, wherein the feature fusion processing on each of the third feature maps is performed through a feature extraction network, and before performing the feature fusion processing on each of the third feature maps through the feature extraction network, the method further comprises training the feature extraction network using a training image data set, which includes: performing the feature fusion processing, using the feature extraction network, on the third feature maps output by the second pyramid neural network and corresponding to each image in the training image data set, and identifying the key points of each image in the training image data set using the feature maps after the feature fusion processing; obtaining the third loss of each key point according to a third loss function; and using the third loss value to reversely adjust the parameters of the feature extraction network until the number of training times reaches a set third number threshold, or using the third loss function to reversely adjust the convolution kernel parameters in the first pyramid neural network, the convolution kernel parameters in the second pyramid neural network, and the parameters of the feature extraction network until the number of training times reaches the set third number threshold.
- A key point detection device, comprising: a multi-scale feature acquisition module configured to obtain first feature maps of multiple scales for an input image, the scales of the first feature maps being in a multiple relationship; a forward processing module configured to perform forward processing on each of the first feature maps using a first pyramid neural network to obtain second feature maps in one-to-one correspondence with the first feature maps, wherein each second feature map has the same scale as its corresponding first feature map; a reverse processing module configured to perform reverse processing on each of the second feature maps using a second pyramid neural network to obtain third feature maps in one-to-one correspondence with the second feature maps, wherein each third feature map has the same scale as its corresponding second feature map; and a key point detection module configured to perform feature fusion processing on each of the third feature maps and obtain the position of each key point in the input image using the feature maps after the feature fusion processing.
- The device according to claim 15, wherein the multi-scale feature acquisition module is configured to adjust the input image to a first image of a preset specification, input the first image into a residual neural network, and perform down-sampling processing with different sampling frequencies on the first image to obtain multiple first feature maps of different scales.
- The device according to claim 15, wherein the forward processing comprises first convolution processing and first linear interpolation processing, and the reverse processing comprises second convolution processing and second linear interpolation processing.
- The device according to any one of claims 15 to 17, wherein the forward processing module is configured to: perform convolution processing on the first feature map C_n among the first feature maps C_1...C_n using a first convolution kernel to obtain a second feature map F_n corresponding to C_n, where n denotes the number of first feature maps and n is an integer greater than 1; perform linear interpolation processing on the second feature map F_n to obtain a first intermediate feature map F′_n corresponding to F_n, where the scale of the first intermediate feature map F′_n is the same as the scale of the first feature map C_{n-1}; perform convolution processing on each first feature map C_1...C_{n-1} other than C_n using a second convolution kernel to obtain second intermediate feature maps C′_1...C′_{n-1} in one-to-one correspondence with the first feature maps C_1...C_{n-1}, where the scale of each second intermediate feature map is the same as the scale of its corresponding first feature map; and obtain second feature maps F_1...F_{n-1} and first intermediate feature maps F′_1...F′_{n-1} based on the second feature map F_n and the second intermediate feature maps C′_1...C′_{n-1}, wherein the second feature map F_i is obtained by superimposing the second intermediate feature map C′_i and the first intermediate feature map F′_{i+1}, the first intermediate feature map F′_i is obtained from the corresponding second feature map F_i by linear interpolation, and the second intermediate feature map C′_i has the same scale as the first intermediate feature map F′_{i+1}, where i is an integer greater than or equal to 1 and less than n.
- The device according to any one of claims 15 to 18, wherein the reverse processing module is configured to: perform convolution processing on the second feature map F_1 among the second feature maps F_1...F_m using a third convolution kernel to obtain a third feature map R_1 corresponding to F_1, where m denotes the number of second feature maps and m is an integer greater than 1; perform convolution processing on the second feature maps F_2...F_m using a fourth convolution kernel to obtain corresponding third intermediate feature maps F″_2...F″_m, where the scale of each third intermediate feature map is the same as the scale of its corresponding second feature map; perform convolution processing on the third feature map R_1 using a fifth convolution kernel to obtain a fourth intermediate feature map R′_1 corresponding to R_1; and obtain third feature maps R_2...R_m and fourth intermediate feature maps R′_2...R′_m using the third intermediate feature maps F″_2...F″_m and the fourth intermediate feature map R′_1, wherein the third feature map R_j is obtained by superimposing the third intermediate feature map F″_j and the fourth intermediate feature map R′_{j-1}, and the fourth intermediate feature map R′_{j-1} is obtained from the corresponding third feature map R_{j-1} through convolution processing with the fifth convolution kernel, where j is greater than 1 and less than or equal to m.
- The device according to any one of claims 15 to 19, wherein the key point detection module is configured to perform feature fusion processing on each third feature map to obtain a fourth feature map, and obtain the position of each key point in the input image based on the fourth feature map.
- The device according to claim 20, wherein the key point detection module is configured to adjust each third feature map to feature maps of the same scale by means of linear interpolation, and connect the feature maps of the same scale to obtain the fourth feature map.
- The device according to claim 20 or 21, wherein the device further comprises: an optimization module configured to input the first set of third feature maps into different bottleneck block structures for convolution processing to respectively obtain updated third feature maps, each of the bottleneck block structures including a different number of convolution modules, wherein the third feature maps include a first set of third feature maps and a second set of third feature maps, and both the first set of third feature maps and the second set of third feature maps include at least one third feature map.
- The device according to claim 22, wherein the key point detection module is further configured to adjust each updated third feature map and the second set of third feature maps to feature maps of the same scale by means of linear interpolation, and connect the feature maps of the same scale to obtain the fourth feature map.
- The device according to any one of claims 20 to 23, wherein the key point detection module is further configured to perform dimensionality reduction processing on the fourth feature map using a fifth convolution kernel, and determine the positions of the key points of the input image using the fourth feature map after the dimensionality reduction processing.
- The device according to any one of claims 20 to 23, wherein the key point detection module is further configured to perform dimensionality reduction processing on the fourth feature map using a fifth convolution kernel, purify the features in the fourth feature map after the dimensionality reduction processing using a convolutional block attention module to obtain a purified feature map, and determine the positions of the key points of the input image using the purified feature map.
- The device according to any one of claims 15 to 25, wherein the forward processing module is further configured to train the first pyramid neural network using a training image data set, which includes: performing the forward processing on the first feature map corresponding to each image in the training image data set using the first pyramid neural network to obtain a second feature map corresponding to each image in the training image data set; determining the identified key points using each second feature map; obtaining the first loss of the key points according to a first loss function; and using the first loss to reversely adjust each convolution kernel in the first pyramid neural network until the number of training times reaches a set first number threshold.
- The device according to any one of claims 15 to 26, wherein the reverse processing module is further configured to train the second pyramid neural network using a training image data set, which includes: performing the reverse processing, using the second pyramid neural network, on the second feature map output by the first pyramid neural network and corresponding to each image in the training image data set, to obtain a third feature map corresponding to each image in the training image data set; determining the identified key points using each third feature map; obtaining the second loss of each identified key point according to a second loss function; and using the second loss to reversely adjust the convolution kernels in the second pyramid neural network until the number of training times reaches a set second number threshold, or using the second loss to reversely adjust the convolution kernels in the first pyramid network and the convolution kernels in the second pyramid neural network until the number of training times reaches the set second number threshold.
- The device according to any one of claims 15 to 27, wherein the key point detection module is further configured to perform the feature fusion processing on each of the third feature maps through a feature extraction network, and, before performing the feature fusion processing on each of the third feature maps through the feature extraction network, to train the feature extraction network using a training image data set, which includes: performing the feature fusion processing, using the feature extraction network, on the third feature maps output by the second pyramid neural network and corresponding to each image in the training image data set, and identifying the key points of each image in the training image data set using the feature maps after the feature fusion processing; obtaining the third loss of each key point according to a third loss function; and using the third loss value to reversely adjust the parameters of the feature extraction network until the number of training times reaches a set third number threshold, or using the third loss function to reversely adjust the convolution kernel parameters in the first pyramid neural network, the convolution kernel parameters in the second pyramid neural network, and the parameters of the feature extraction network until the number of training times reaches the set third number threshold.
- An electronic device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the method according to any one of claims 1 to 14.
- A computer-readable storage medium on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 14.
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111695519A (zh) * | 2020-06-12 | 2020-09-22 | 北京百度网讯科技有限公司 | 关键点定位方法、装置、设备以及存储介质 |
CN111709945A (zh) * | 2020-07-17 | 2020-09-25 | 成都三零凯天通信实业有限公司 | 一种基于深度局部特征的视频拷贝检测方法 |
CN111784642A (zh) * | 2020-06-10 | 2020-10-16 | 中铁四局集团有限公司 | 一种图像处理方法、目标识别模型训练方法和目标识别方法 |
CN112131925A (zh) * | 2020-07-22 | 2020-12-25 | 浙江元亨通信技术股份有限公司 | 一种多通道特征空间金字塔的构造方法 |
CN112836710A (zh) * | 2021-02-23 | 2021-05-25 | 浙大宁波理工学院 | 一种基于特征金字塔网络的房间布局估计获取方法与系统 |
CN116738296A (zh) * | 2023-08-14 | 2023-09-12 | 大有期货有限公司 | 机房状况综合智能监控系统 |
Families Citing this family (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102227583B1 (ko) * | 2018-08-03 | 2021-03-15 | 한국과학기술원 | 딥 러닝 기반의 카메라 캘리브레이션 방법 및 장치 |
CN113591755B (zh) * | 2018-11-16 | 2024-04-16 | 北京市商汤科技开发有限公司 | 关键点检测方法及装置、电子设备和存储介质 |
JP7103240B2 (ja) * | 2019-01-10 | 2022-07-20 | 日本電信電話株式会社 | 物体検出認識装置、方法、及びプログラム |
CN110378253B (zh) * | 2019-07-01 | 2021-03-26 | 浙江大学 | 一种基于轻量化神经网络的实时关键点检测方法 |
CN110378976B (zh) * | 2019-07-18 | 2020-11-13 | 北京市商汤科技开发有限公司 | 图像处理方法及装置、电子设备和存储介质 |
CN110705563B (zh) * | 2019-09-07 | 2020-12-29 | 创新奇智(重庆)科技有限公司 | 一种基于深度学习的工业零件关键点检测方法 |
CN110647834B (zh) * | 2019-09-18 | 2021-06-25 | 北京市商汤科技开发有限公司 | 人脸和人手关联检测方法及装置、电子设备和存储介质 |
KR20210062477A (ko) * | 2019-11-21 | 2021-05-31 | 삼성전자주식회사 | 전자 장치 및 그 제어 방법 |
US20220092735A1 (en) * | 2019-11-21 | 2022-03-24 | Samsung Electronics Co., Ltd. | Electronic apparatus and controlling method thereof |
US11080833B2 (en) * | 2019-11-22 | 2021-08-03 | Adobe Inc. | Image manipulation using deep learning techniques in a patch matching operation |
WO2021146890A1 (en) * | 2020-01-21 | 2021-07-29 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for object detection in image using detection model |
CN111414823B (zh) * | 2020-03-12 | 2023-09-12 | Oppo广东移动通信有限公司 | 人体特征点的检测方法、装置、电子设备以及存储介质 |
CN111382714B (zh) * | 2020-03-13 | 2023-02-17 | Oppo广东移动通信有限公司 | 图像检测方法、装置、终端及存储介质 |
CN111401335B (zh) * | 2020-04-29 | 2023-06-30 | Oppo广东移动通信有限公司 | 一种关键点检测方法及装置、存储介质 |
CN111709428B (zh) * | 2020-05-29 | 2023-09-15 | 北京百度网讯科技有限公司 | 图像中关键点位置的识别方法、装置、电子设备及介质 |
US11847823B2 (en) | 2020-06-18 | 2023-12-19 | Apple Inc. | Object and keypoint detection system with low spatial jitter, low latency and low power usage |
CN112132011B (zh) * | 2020-09-22 | 2024-04-26 | 深圳市捷顺科技实业股份有限公司 | Face recognition method, apparatus, device and storage medium |
CN112149558A (zh) * | 2020-09-22 | 2020-12-29 | 驭势科技(南京)有限公司 | Image processing method, network and electronic device for keypoint detection |
CN112232361B (zh) * | 2020-10-13 | 2021-09-21 | 国网电子商务有限公司 | Image processing method and apparatus, electronic device and computer-readable storage medium |
CN112364699B (zh) * | 2020-10-14 | 2024-08-02 | 珠海欧比特宇航科技股份有限公司 | Remote sensing image segmentation method, apparatus and medium based on a weighted-loss fusion network |
CN112257728B (zh) * | 2020-11-12 | 2021-08-17 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, computer device and storage medium |
CN112329888B (zh) * | 2020-11-26 | 2023-11-14 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device and storage medium |
CN112434713A (zh) * | 2020-12-02 | 2021-03-02 | 携程计算机技术(上海)有限公司 | Image feature extraction method and apparatus, electronic device and storage medium |
CN112581450B (zh) * | 2020-12-21 | 2024-04-16 | 北京工业大学 | Pollen detection method based on dilated convolution pyramids and multi-scale pyramids |
CN112800834B (zh) * | 2020-12-25 | 2022-08-12 | 温州晶彩光电有限公司 | Method and system for positioning colorful spotlights based on kneeling behavior recognition |
JP2023527615A (ja) * | 2021-04-28 | 2023-06-30 | ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド | Training method for a target object detection model, target object detection method, device, electronic device, storage medium and computer program |
CN113902903B (zh) * | 2021-09-30 | 2024-08-02 | 北京工业大学 | Dual-attention multi-scale fusion method based on downsampling |
KR102647320B1 (ko) * | 2021-11-23 | 2024-03-12 | 숭실대학교산학협력단 | Object tracking apparatus and method |
CN114022657B (zh) * | 2022-01-06 | 2022-05-24 | 高视科技(苏州)有限公司 | Screen defect classification method, electronic device and storage medium |
CN114724175B (zh) * | 2022-03-04 | 2024-03-29 | 亿达信息技术有限公司 | Pedestrian image detection network, detection method, training method, electronic device and medium |
WO2024011281A1 (en) * | 2022-07-11 | 2024-01-18 | James Cook University | A method and a system for automated prediction of characteristics of aquaculture animals |
KR20240083242A (ko) * | 2022-12-02 | 2024-06-12 | 주식회사 Lg 경영개발원 | Machine learning-based anomaly detection apparatus and method |
Family Cites Families (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0486635A1 (en) * | 1990-05-22 | 1992-05-27 | International Business Machines Corporation | Scalable flow virtual learning neurocomputer |
CN101510257B (zh) * | 2009-03-31 | 2011-08-10 | 华为技术有限公司 | Face similarity matching method and apparatus |
CN101980290B (zh) * | 2010-10-29 | 2012-06-20 | 西安电子科技大学 | Multi-focus image fusion method for noisy environments |
CN102622730A (zh) * | 2012-03-09 | 2012-08-01 | 武汉理工大学 | Remote sensing image fusion processing method based on the non-subsampled Laplacian pyramid and BEMD |
CN103049895B (zh) * | 2012-12-17 | 2016-01-20 | 华南理工大学 | Multimodal medical image fusion method based on the translation-invariant shearlet transform |
CN103279957B (zh) * | 2013-05-31 | 2015-11-25 | 北京师范大学 | Method for extracting regions of interest from remote sensing images based on multi-scale feature fusion |
CN103793692A (zh) * | 2014-01-29 | 2014-05-14 | 五邑大学 | Real-time identity recognition method and system using low-resolution multispectral palmprints and palm veins |
JP6474210B2 (ja) * | 2014-07-31 | 2019-02-27 | International Business Machines Corporation | Fast retrieval method for large-scale image databases |
WO2016054779A1 (en) * | 2014-10-09 | 2016-04-14 | Microsoft Technology Licensing, Llc | Spatial pyramid pooling networks for image processing |
CN104346607B (zh) * | 2014-11-06 | 2017-12-22 | 上海电机学院 | Face recognition method based on convolutional neural networks |
CN104793620B (zh) * | 2015-04-17 | 2019-06-18 | 中国矿业大学 | Obstacle-avoidance robot based on visual feature binding and reinforcement learning theory |
CN104866868B (zh) * | 2015-05-22 | 2018-09-07 | 杭州朗和科技有限公司 | Metal coin recognition method and apparatus based on deep neural networks |
US10007863B1 (en) * | 2015-06-05 | 2018-06-26 | Gracenote, Inc. | Logo recognition in images and videos |
CN105184779B (zh) * | 2015-08-26 | 2018-04-06 | 电子科技大学 | Multi-scale vehicle tracking method based on fast feature pyramids |
CN105912990B (zh) * | 2016-04-05 | 2019-10-08 | 深圳先进技术研究院 | Face detection method and apparatus |
GB2549554A (en) * | 2016-04-21 | 2017-10-25 | Ramot At Tel-Aviv Univ Ltd | Method and system for detecting an object in an image |
US10032067B2 (en) * | 2016-05-28 | 2018-07-24 | Samsung Electronics Co., Ltd. | System and method for a unified architecture multi-task deep learning machine for object recognition |
US10993697B2 (en) * | 2016-06-20 | 2021-05-04 | Butterfly Network, Inc. | Automated image acquisition for assisting a user to operate an ultrasound device |
US10365617B2 (en) * | 2016-12-12 | 2019-07-30 | Dmo Systems Limited | Auto defect screening using adaptive machine learning in semiconductor device manufacturing flow |
CN110475505B (zh) * | 2017-01-27 | 2022-04-05 | 阿特瑞斯公司 | Automated segmentation using fully convolutional networks |
CN108229490B (zh) * | 2017-02-23 | 2021-01-05 | 北京市商汤科技开发有限公司 | Keypoint detection method, neural network training method, apparatus and electronic device |
CN106934397B (zh) * | 2017-03-13 | 2020-09-01 | 北京市商汤科技开发有限公司 | Image processing method, apparatus and electronic device |
WO2018169639A1 (en) * | 2017-03-17 | 2018-09-20 | Nec Laboratories America, Inc | Recognition in unlabeled videos with domain adversarial learning and knowledge distillation |
CN108664981B (zh) * | 2017-03-30 | 2021-10-26 | 北京航空航天大学 | Salient image extraction method and apparatus |
CN107194318B (zh) * | 2017-04-24 | 2020-06-12 | 北京航空航天大学 | Scene recognition method aided by object detection |
CN108229281B (zh) * | 2017-04-25 | 2020-07-17 | 北京市商汤科技开发有限公司 | Neural network generation method and face detection method, apparatus and electronic device |
CN108229497B (zh) * | 2017-07-28 | 2021-01-05 | 北京市商汤科技开发有限公司 | Image processing method, apparatus, storage medium, computer program and electronic device |
CN107909041A (zh) * | 2017-11-21 | 2018-04-13 | 清华大学 | Video recognition method based on a spatio-temporal pyramid network |
CN108021923B (zh) * | 2017-12-07 | 2020-10-23 | 上海为森车载传感技术有限公司 | Image feature extraction method for deep neural networks |
CN108182384B (zh) * | 2017-12-07 | 2020-09-29 | 浙江大华技术股份有限公司 | Face feature point localization method and apparatus |
CN108229445A (zh) * | 2018-02-09 | 2018-06-29 | 深圳市唯特视科技有限公司 | Multi-person pose estimation method based on a cascaded pyramid network |
CN108664885B (zh) * | 2018-03-19 | 2021-08-31 | 杭州电子科技大学 | Human body keypoint detection method based on a multi-scale cascaded HourGlass network |
CN108520251A (zh) * | 2018-04-20 | 2018-09-11 | 北京市商汤科技开发有限公司 | Keypoint detection method and apparatus, electronic device and storage medium |
CN108596087B (zh) * | 2018-04-23 | 2020-09-15 | 合肥湛达智能科技有限公司 | Driving fatigue level detection regression model based on dual-network results |
CN108764133B (zh) * | 2018-05-25 | 2020-10-20 | 北京旷视科技有限公司 | Image recognition method, apparatus and system |
2018
- 2018-11-16 CN CN202110904136.5A patent/CN113591755B/zh active Active
- 2018-11-16 CN CN202110904119.1A patent/CN113569798B/zh active Active
- 2018-11-16 CN CN202110902644.XA patent/CN113569796B/zh active Active
- 2018-11-16 CN CN202110902646.9A patent/CN113569797B/zh active Active
- 2018-11-16 CN CN202110902641.6A patent/CN113591750B/zh active Active
- 2018-11-16 CN CN201811367869.4A patent/CN109614876B/zh active Active
- 2018-11-16 CN CN202110904124.2A patent/CN113591754B/zh active Active
2019
- 2019-04-22 SG SG11202003818YA patent/SG11202003818YA/en unknown
- 2019-04-22 WO PCT/CN2019/083721 patent/WO2020098225A1/zh active Application Filing
- 2019-04-22 JP JP2020518758A patent/JP6944051B2/ja active Active
- 2019-04-22 KR KR1020207012580A patent/KR102394354B1/ko active IP Right Grant
- 2019-08-26 TW TW108130497A patent/TWI720598B/zh active
2020
- 2020-04-22 US US16/855,630 patent/US20200250462A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9552510B2 (en) * | 2015-03-18 | 2017-01-24 | Adobe Systems Incorporated | Facial expression capture for character animation |
CN106339680A (zh) * | 2016-08-25 | 2017-01-18 | 北京小米移动软件有限公司 | Face keypoint positioning method and apparatus |
CN108280455A (zh) * | 2018-01-19 | 2018-07-13 | 北京市商汤科技开发有限公司 | Human body keypoint detection method and apparatus, electronic device, program and medium |
CN109614876A (zh) * | 2018-11-16 | 2019-04-12 | 北京市商汤科技开发有限公司 | Keypoint detection method and apparatus, electronic device and storage medium |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111784642A (zh) * | 2020-06-10 | 2020-10-16 | 中铁四局集团有限公司 | Image processing method, target recognition model training method and target recognition method |
CN111695519B (zh) * | 2020-06-12 | 2023-08-08 | 北京百度网讯科技有限公司 | Keypoint positioning method, apparatus, device and storage medium |
EP3869402A1 (en) * | 2020-06-12 | 2021-08-25 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for positioning key point, device, storage medium and computer program product |
JP2021197157A (ja) * | 2020-06-12 | 2021-12-27 | ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド | Keypoint identification method and apparatus, device, storage medium |
JP7194215B2 (ja) | 2020-06-12 | 2022-12-21 | ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド | Keypoint identification method and apparatus, device, storage medium |
US11610389B2 (en) | 2020-06-12 | 2023-03-21 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for positioning key point, device, and storage medium |
CN111695519A (zh) * | 2020-06-12 | 2020-09-22 | 北京百度网讯科技有限公司 | Keypoint positioning method, apparatus, device and storage medium |
CN111709945A (zh) * | 2020-07-17 | 2020-09-25 | 成都三零凯天通信实业有限公司 | Video copy detection method based on deep local features |
CN112131925A (zh) * | 2020-07-22 | 2020-12-25 | 浙江元亨通信技术股份有限公司 | Method for constructing a multi-channel feature spatial pyramid |
CN112131925B (zh) * | 2020-07-22 | 2024-06-07 | 随锐科技集团股份有限公司 | Method for constructing a multi-channel feature spatial pyramid |
CN112836710A (zh) * | 2021-02-23 | 2021-05-25 | 浙大宁波理工学院 | Room layout estimation method and system based on a feature pyramid network |
CN116738296A (zh) * | 2023-08-14 | 2023-09-12 | 大有期货有限公司 | Comprehensive intelligent monitoring system for machine room conditions |
CN116738296B (zh) * | 2023-08-14 | 2024-04-02 | 大有期货有限公司 | Comprehensive intelligent monitoring system for machine room conditions |
Also Published As
Publication number | Publication date |
---|---|
CN113591750A (zh) | 2021-11-02 |
KR20200065033A (ko) | 2020-06-08 |
CN113569798B (zh) | 2024-05-24 |
CN113591754A (zh) | 2021-11-02 |
CN113591755B (zh) | 2024-04-16 |
CN113569796B (zh) | 2024-06-11 |
CN113569798A (zh) | 2021-10-29 |
CN113591755A (zh) | 2021-11-02 |
SG11202003818YA (en) | 2020-06-29 |
CN109614876A (zh) | 2019-04-12 |
CN113591750B (zh) | 2024-07-19 |
CN113569797B (zh) | 2024-05-21 |
CN113569796A (zh) | 2021-10-29 |
US20200250462A1 (en) | 2020-08-06 |
JP6944051B2 (ja) | 2021-10-06 |
TW202020806A (zh) | 2020-06-01 |
TWI720598B (zh) | 2021-03-01 |
CN113591754B (zh) | 2022-08-02 |
KR102394354B1 (ko) | 2022-05-04 |
CN113569797A (zh) | 2021-10-29 |
CN109614876B (zh) | 2021-07-27 |
JP2021508388A (ja) | 2021-03-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020098225A1 (zh) | Keypoint detection method and apparatus, electronic device and storage medium |
KR102406354B1 (ko) | Video restoration method and apparatus, electronic device and storage medium |
JP6916970B2 (ja) | Video processing method and apparatus, electronic device and storage medium |
TWI740309B (zh) | Image processing method and apparatus, electronic device and computer-readable storage medium |
WO2021051650A1 (zh) | Face and hand association detection method and apparatus, electronic device and storage medium |
WO2020155711A1 (zh) | Image generation method and apparatus, electronic device and storage medium |
WO2020107813A1 (zh) | Method and apparatus for locating description statements in images, electronic device and storage medium |
TW202029125A (zh) | Image processing method and apparatus, electronic device and storage medium |
TWI718631B (zh) | Face image processing method and apparatus, electronic device and storage medium |
TW202109449A (zh) | Image processing method, electronic device, and computer-readable storage medium |
KR102334279B1 (ko) | Facial feature point positioning method and apparatus |
TWI719777B (zh) | Image reconstruction method, image reconstruction apparatus, electronic device and computer-readable storage medium |
CN110188865B (zh) | Information processing method and apparatus, electronic device and storage medium |
WO2023142645A1 (zh) | Image processing method and apparatus, electronic device, storage medium and computer program product |
KR102324001B1 (ko) | Position and pose detection method and apparatus, electronic device and storage medium |
CN110929616B (zh) | Human hand recognition method and apparatus, electronic device and storage medium |
WO2024124913A1 (zh) | Entity information determination method, apparatus and device |
CN111046780A (zh) | Neural network training and image recognition method, apparatus, device and storage medium |
CN114821799B (zh) | Action recognition method, apparatus and device based on spatio-temporal graph convolutional networks |
CN108227927B (zh) | VR-based product display method, apparatus and electronic device |
CN111753596B (zh) | Neural network training method and apparatus, electronic device and storage medium |
CN114489333A (zh) | Image processing method and apparatus, electronic device and storage medium |
Legal Events
Code | Title | Description |
---|---|---|
ENP | Entry into the national phase | Ref document number: 2020518758; Country of ref document: JP; Kind code of ref document: A |
ENP | Entry into the national phase | Ref document number: 20207012580; Country of ref document: KR; Kind code of ref document: A |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19884469; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18/08/2021) |
122 | Ep: pct application non-entry in european phase | Ref document number: 19884469; Country of ref document: EP; Kind code of ref document: A1 |