WO2020114236A1 - Key point detection method, apparatus, electronic device and storage medium - Google Patents

Key point detection method, apparatus, electronic device and storage medium

Info

Publication number
WO2020114236A1
WO2020114236A1 · PCT/CN2019/119388 · CN2019119388W
Authority
WO
WIPO (PCT)
Prior art keywords
hand
key point
area
channel
channels
Application number
PCT/CN2019/119388
Other languages
English (en)
French (fr)
Inventor
刘裕峰
董亚娇
郑文
Original Assignee
北京达佳互联信息技术有限公司
Application filed by 北京达佳互联信息技术有限公司
Publication of WO2020114236A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology

Definitions

  • the present disclosure relates to the field of image processing technology, and in particular to a key point detection method, apparatus, electronic device, and storage medium.
  • the gesture image detection mainly includes: detecting the position of each key point of the hand in the gesture image.
  • a regression algorithm is usually used for gesture image detection. Specifically, a regression algorithm is used to fit the key points of the hand in the gesture image containing gestures, so as to obtain the positions of the key points of the hands in the gesture image.
  • the inventor found that, because gestures change flexibly and the positions of the hand key points in the gesture image differ across gestures, fitting the hand key points with a regression algorithm leads to a low accuracy rate of hand key point detection.
  • the present disclosure provides a key point detection method, apparatus, electronic device, and storage medium to solve the problem of low accuracy in hand key point detection.
  • a key point detection method, which includes: acquiring a gesture image to be detected, and dividing the gesture image into a plurality of regions; for each preset hand key point, determining the probability of the hand key point appearing in each region and the first coordinate value in each region; and calculating the second coordinate value of each hand key point in the gesture image from the probability of each hand key point appearing and the first coordinate value.
  • a key point detection apparatus, including: a dividing unit configured to acquire a gesture image to be detected and divide the gesture image into a plurality of regions; a determining unit configured to determine, for each preset hand key point, the probability of the hand key point appearing in each region and the first coordinate value in each region; and a calculation unit configured to calculate the second coordinate value of each hand key point in the gesture image from the probability of each hand key point appearing and the first coordinate value.
  • an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the key point detection method described in the first aspect above.
  • a non-transitory computer-readable storage medium, wherein when instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is caused to perform the key point detection method provided in the first aspect above.
  • a computer program product including program instructions, wherein when the instructions in the computer program product are executed by a processor of an electronic device, the electronic device is caused to execute the key point detection method provided in the first aspect above.
  • a gesture image to be detected is obtained, and the gesture image is divided into a plurality of areas; for each preset hand key point, the probability of the hand key point appearing in each area and the first coordinate value in each area are determined; and the second coordinate value of each hand key point in the gesture image is calculated from the probability of each hand key point appearing and the first coordinate value.
  • Fig. 1 is a flow chart of a method for detecting a key point according to an exemplary embodiment.
  • Fig. 2 is a schematic diagram showing a position of a key point of a hand according to an exemplary embodiment.
  • Fig. 3 is a block diagram of a key point detection device according to an exemplary embodiment.
  • Fig. 4 is a block diagram of a device for key point detection according to an exemplary embodiment.
  • Fig. 5 is a block diagram of a device for key point detection according to an exemplary embodiment.
  • Fig. 1 is a flowchart of a key point detection method according to an exemplary embodiment. As shown in Fig. 1, the key point detection method is used in an electronic device and includes the following steps.
  • in step S11, a gesture image to be detected is acquired, and the gesture image is divided into a plurality of regions.
  • gesture images to be detected can be obtained from various sources.
  • gesture images can be grabbed from the network, or gesture images can be captured in real time, and so on.
  • the gesture image includes the hand of the human body, and the hand gesture can be any gesture.
  • the gesture image may be an image in a format such as Red, Green and Blue (RGB).
  • a deep convolutional neural network for key point detection may be pre-trained. For example, a large number of gesture images of known gestures, together with the corresponding probabilities and coordinate values of each hand key point under those gestures, can be collected, and the deep convolutional neural network can be trained on these gesture images, so that the trained deep convolutional neural network can perform the key point detection method of the disclosed embodiments.
  • the manner of dividing the area of each gesture image collected above is the same as the manner of dividing the area of the gesture image to be detected.
  • the gesture image can be divided into multiple regions, and the multiple regions can be input into the preset convolutional neural network to detect hand key points based on the multiple regions.
  • the gesture image can be divided into N areas, where N is a positive integer. The value of N is not fixed and can be selected according to specific application scenarios.
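As an illustration of this division, the following sketch splits a gesture image into a rectangular grid of N regions. The grid layout, the 224×224 image size, and the choice N = 3 × 3 = 9 are assumptions made for the example; the disclosure only requires dividing the gesture image into N regions:

```python
def divide_into_regions(width, height, gx, gy):
    """Split a width x height image into gx * gy rectangular regions.

    Returns a list of (left, top, right, bottom) pixel bounds, one per
    region, in row-major order. The uniform grid layout is an
    illustrative assumption.
    """
    regions = []
    for row in range(gy):
        for col in range(gx):
            left = col * width // gx
            top = row * height // gy
            right = (col + 1) * width // gx
            bottom = (row + 1) * height // gy
            regions.append((left, top, right, bottom))
    return regions

# Example: divide an assumed 224x224 gesture image into N = 3 x 3 = 9 regions.
regions = divide_into_regions(224, 224, 3, 3)
```

Each region can then be fed to the network's grids; non-uniform or overlapping divisions would work the same way as long as training and inference divide images identically, as the disclosure requires.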
  • in step S12, for each preset hand key point, the probability of the hand key point appearing in each area and the first coordinate value in each area are determined.
  • Fig. 2 is a schematic diagram showing a position of a key point of a hand according to an exemplary embodiment.
  • the points indicated by the numbers 0-20 are the key points of the hand.
  • the hand may include 21 hand key points. For the positions of the 21 hand key points in the “ok” gesture, see FIG. 2.
  • the probability of the hand key point appearing in each area and the first coordinate value in each area can be determined. That is, for each hand key point, it is possible to predict the probability that the hand key point appears in each area, and to predict the first coordinate value of the hand key point in each area.
  • N regions are respectively region 1, region 2, ..., region N
  • M hand key points are respectively hand key point 1, hand key point 2, ..., hand key point M.
  • For hand key point 1, predict the probability of hand key point 1 appearing in region 1, region 2, ..., region N, and predict the first coordinate value of hand key point 1 in region 1, region 2, ..., region N.
  • For hand key point 2, predict the probability of hand key point 2 appearing in region 1, region 2, ..., region N, and predict the first coordinate value of hand key point 2 in region 1, region 2, ..., region N.
  • Similarly, for hand key point M, predict the probability of hand key point M appearing in region 1, region 2, ..., region N, and predict the first coordinate value of hand key point M in region 1, region 2, ..., region N.
  • the step of determining the probability of the hand key point appearing in each area and the first coordinate value in each area may include: extracting the image features of each area, and inputting the image features of each area into the channels of the preset convolutional neural network; and, for each hand key point, obtaining the output result of the channels of the convolutional neural network after performing a convolution operation on the image features of each area.
  • the output result includes the probability that each hand key point appears in each area and the first coordinate value in each area.
  • the channel in the above convolutional neural network can be regarded as a module of the convolutional neural network, and the module has corresponding convolutional layers and pooling layers.
  • Each channel in the convolutional neural network can independently perform a convolution operation on the image to obtain the corresponding output result.
  • each image has image features that distinguish it from other images.
  • these features can include natural features that can be perceived intuitively, such as brightness, edges, textures, and colors; they can also include unnatural features that must be obtained through transformation or processing, such as histogram features and features that characterize principal components.
  • a convolutional neural network can be set to extract any kind of image feature from the gesture image; for example, it can be set to extract the histogram of oriented gradients (HOG) feature, the local binary pattern (LBP) feature, Haar-like features, and so on, which are not discussed in detail in the embodiments of the present disclosure.
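As a concrete illustration of one of the features named above, a minimal LBP computation for a single 3×3 patch might look like the sketch below. The clockwise bit ordering starting at the top-left neighbour is an assumption, since LBP variants differ in ordering and neighbourhood shape:

```python
def lbp_value(patch):
    """Basic 8-neighbour local binary pattern (LBP) code for the centre
    pixel of a 3x3 patch: each neighbour whose value is >= the centre
    contributes one bit. Returns an integer in [0, 255].
    """
    center = patch[1][1]
    # Neighbours in clockwise order starting from the top-left corner
    # (this ordering is an illustrative assumption).
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << bit
    return code

# A small example patch of grayscale intensities.
patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
code = lbp_value(patch)
```

Sliding this over an image and histogramming the codes yields an LBP feature map; in the disclosed method such feature extraction is learned inside the convolutional layers rather than hand-computed.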
  • the convolutional neural network may include classification branches and regression branches.
  • the classification branch is used to determine the probability that each hand key point exists in each area in the gesture image; the regression branch is used to determine the coordinate value of each hand key point in each area in the gesture image.
  • the output layer structure of the classification branch and the regression branch can be the same.
  • the classification branch includes M classification channels, each classification channel corresponds to a hand key point, and M classification channels correspond to M hand key points.
  • the classification branch may include 21 classification channels, respectively corresponding to 21 key points of the hand.
  • Each classification channel is composed of N grids, each grid corresponds to an area, and N grids correspond to N areas. The value of N is not fixed and can be selected according to specific application scenarios.
  • the regression branch includes M horizontal coordinate channels and M vertical coordinate channels.
  • Each abscissa channel corresponds to one key point of the hand, and M abscissa channels correspond to M key points of the hand.
  • Each horizontal coordinate channel is composed of N grids, each grid corresponds to an area, and N grids correspond to N areas.
  • Each ordinate channel corresponds to one key point of the hand, and M ordinate channels correspond to M key points of the hand.
  • Each ordinate channel is composed of N grids, each grid corresponds to an area, and N grids correspond to N areas.
  • the regression branch may include 21 horizontal coordinate channels and 21 vertical coordinate channels, the 21 horizontal coordinate channels respectively corresponding to the 21 hand key points, and the 21 vertical coordinate channels also respectively corresponding to the 21 hand key points.
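Under the channel layout just described (M classification channels, M abscissa channels, and M ordinate channels, each composed of N grids, one value per grid), the sizes of the network heads can be counted as follows. The value N = 9 is an illustrative choice, since the disclosure leaves N open:

```python
def head_layout(m_keypoints, n_regions):
    """Number of output values in each branch, per the channel/grid
    layout described above: one channel per hand key point, one grid
    per region, one scalar per grid."""
    return {
        "classification": m_keypoints * n_regions,  # appearance probabilities
        "abscissa": m_keypoints * n_regions,        # first abscissa values
        "ordinate": m_keypoints * n_regions,        # first ordinate values
    }

# With the 21 hand key points of Fig. 2 and an assumed 3x3 grid (N = 9):
layout = head_layout(21, 9)
total_outputs = sum(layout.values())  # 3 * M * N scalars in all
```

So for M = 21 and N = 9 the network emits 3 × 21 × 9 = 567 scalars, from which step S13 recovers 21 (x, y) key point coordinates.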
  • the step of inputting the image features of each region into the channels of the preset convolutional neural network may include: inputting the image features of the N regions into the N grids of the M classification channels, to obtain the first output result of each classification channel after the convolution operation on the image features of the N regions, the first output result of each classification channel including the probability that the hand key point corresponding to the classification channel appears in each region; inputting the image features of the N regions into the N grids of the M abscissa channels correspondingly, to obtain the second output result of each abscissa channel after the convolution operation on the image features of the N regions, the second output result of each abscissa channel including the first abscissa value of the hand key point corresponding to the abscissa channel in each region; and inputting the image features of the N regions into the N grids of the M ordinate channels correspondingly, to obtain the third output result of each ordinate channel after the convolution operation on the image features of the N regions, the third output result of each ordinate channel including the first ordinate value of the hand key point corresponding to the ordinate channel in each region.
  • N areas are area 1, area 2, ..., area N
  • M hand key points are hand key point 1, hand key point 2, ..., hand key point M
  • the classification channels are classification channel 1 corresponding to hand key point 1, classification channel 2 corresponding to hand key point 2, ..., classification channel M corresponding to hand key point M, and the N grids of each classification channel are grid 1 corresponding to area 1, grid 2 corresponding to area 2, ..., grid N corresponding to area N.
  • the image features of area N are input into grid N of classification channel 1, grid N of classification channel 2, ..., grid N of classification channel M, respectively, to obtain the probabilities of hand key points 1 to M appearing in area N.
  • similarly, the first abscissa values and first ordinate values of hand key points 1 to M in area 1 are predicted respectively, the first abscissa values and first ordinate values of hand key points 1 to M in area 2 are predicted respectively, and so on for each area.
  • Each grid in each classification channel outputs a value in the range [0, 1].
  • the output value of each grid represents the probability that key points of the hand corresponding to the classification channel appear in the area corresponding to the grid.
  • the sum of all grid output values on each classification channel is 1.
  • if the value output by a grid of a classification channel is large, the hand key point corresponding to the classification channel is more likely to appear in the area corresponding to the grid, and the weight will be larger in the subsequent weighted combination.
  • if the value output by a grid of a classification channel is small, the hand key point corresponding to the classification channel is less likely to appear in the area corresponding to the grid, and the weight will be smaller in the subsequent weighted combination.
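Since the N grid values of each classification channel lie in [0, 1] and sum to 1, a softmax over the channel's raw grid scores is one standard way to produce them. The disclosure does not name softmax, so this is an assumption about how the constraint could be satisfied:

```python
import math

def normalize_grid_scores(scores):
    """Map the raw grid scores of one classification channel to
    probabilities in [0, 1] that sum to 1, via a softmax. Softmax is
    used here as one standard choice; the disclosure only requires the
    range and sum constraints, not this specific function.
    """
    peak = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Example raw scores for one channel over N = 3 areas.
probs = normalize_grid_scores([2.0, 1.0, 0.1])
```

The area with the highest score receives the largest probability, and therefore the largest weight in the subsequent weighted combination.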
  • Each grid in each abscissa channel outputs a value that represents the fitted abscissa of the hand key point corresponding to the abscissa channel in the area corresponding to the grid, predicted from the image features extracted from that area.
  • Each grid in each ordinate channel outputs a value that represents the fitted ordinate of the hand key point corresponding to the ordinate channel in the area corresponding to the grid, predicted from the image features extracted from that area.
  • in step S13, the second coordinate value of each hand key point in the gesture image is calculated from the probability of each hand key point appearing and the first coordinate value.
  • for each hand key point, the probability that the hand key point appears in each region and the first coordinate value in each region are obtained, and the second coordinate value of the hand key point in the gesture image is determined from the probability of the hand key point appearing in each area and the first coordinate value in each area.
  • the step of calculating the second coordinate value of each hand key point in the gesture image through the probability of each hand key point appearing and the first coordinate value may include: for each hand key point, the hand key point The probability of occurrence in each area and the first coordinate value in each area are weighted to obtain the second coordinate value of the key point of the hand in the gesture image.
  • that is, the probability that the hand key point appears in each area is taken as the weight of the first coordinate value of the hand key point in that area, and the first coordinate values of the hand key point in each area are weighted to obtain the second coordinate value of the hand key point in the gesture image.
  • the first coordinate value includes a first abscissa value and a first ordinate value.
  • the weighted calculation of the probability that the key point of the hand appears in each area and the first coordinate value in each area to obtain the second coordinate value of the key point of the hand in the gesture image may include: The probability that the key point of the hand appears in each area and the first abscissa value in each area are weighted to obtain the second abscissa value of the hand key point in the gesture image; the hand key The probability of the point appearing in each area and the first ordinate value in each area are weighted to obtain the second ordinate value of the key point of the hand in the gesture image.
  • that is, the probability that the hand key point appears in each area is taken as the weight of the first abscissa value of the hand key point in that area, and the first abscissa values of the hand key point in each area are weighted to obtain the second abscissa value of the hand key point in the gesture image; similarly, the probability that the hand key point appears in each area is taken as the weight of the first ordinate value of the hand key point in that area, and the first ordinate values of the hand key point in each area are weighted to obtain the second ordinate value of the hand key point in the gesture image.
  • weighted calculation refers to calculating the product of the probability that the key point of the hand appears in each region and the first coordinate value in each corresponding region, and adding all the products.
  • for example, suppose the probability that hand key point 1 appears in area 1 is P1, the probability that it appears in area 2 is P2, ..., and the probability that it appears in area N is PN; and suppose the first horizontal coordinate value of hand key point 1 in area 1 is x1 and the first vertical coordinate value is y1, the first horizontal coordinate value in area 2 is x2 and the first vertical coordinate value is y2, ..., and the first horizontal coordinate value in area N is xN and the first vertical coordinate value is yN.
  • the second horizontal coordinate value of hand key point 1 in the gesture image is P1×x1+P2×x2+...+PN×xN;
  • the second vertical coordinate value of hand key point 1 in the gesture image is P1×y1+P2×y2+...+PN×yN.
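The weighted combination of step S13 can be sketched directly from these formulas. The probabilities and coordinates below are illustrative numbers, not values from the disclosure:

```python
def weighted_keypoint(probs, xs, ys):
    """Second coordinate of one hand key point: the first coordinate
    values of each area, weighted by the probability that the key point
    appears there, i.e. sum(Pi * xi) and sum(Pi * yi)."""
    x = sum(p * xv for p, xv in zip(probs, xs))
    y = sum(p * yv for p, yv in zip(probs, ys))
    return x, y

# Illustrative numbers: three areas, key point most likely in area 1.
probs = [0.7, 0.2, 0.1]   # P1 .. PN, summing to 1
xs = [10.0, 20.0, 30.0]   # first abscissa values x1 .. xN
ys = [5.0, 15.0, 25.0]    # first ordinate values y1 .. yN
x2, y2 = weighted_keypoint(probs, xs, ys)  # P1*x1 + P2*x2 + ... + PN*xN, likewise for y
```

Because every area contributes in proportion to its probability, the result is pulled toward the high-probability area while still using information from the others, which is the attention-style behaviour described below.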
  • the second coordinate value of each hand key point in the gesture image can be output.
  • the function of obtaining the second coordinate value of each hand key point in the gesture image by the above weighted calculation may be integrated in the convolutional neural network, and each hand key is output by the convolutional neural network The second coordinate value of the point in the gesture image.
  • the probability that each key point of the hand appears in each area and the first coordinate value in each area are predicted.
  • the weight given to each region is positively related to the probability of the hand key point appearing in that region, and the second coordinate value of each hand key point in the gesture image is determined from the probability of each hand key point appearing in each region and the first coordinate value in each region.
  • the second coordinate value of each hand key point in the gesture image is determined from the probability that each hand key point appears in each area and the first coordinate value in each area; this way of determining the second coordinate value follows the attention mechanism's principle of discarding the false and retaining the true.
  • the attention mechanism stems from the study of human vision. In cognitive science, owing to bottlenecks in information processing, humans selectively attend to a part of all available information while ignoring the rest; this is usually called the attention mechanism. Adding an attention mechanism applies weight-based screening to the input information.
  • This screening is not manually specified but is learned by the convolutional neural network itself; that is, through the weighted combination, the convolutional neural network learns the spatial relationships of the input information, so that it can adapt well to the diversity of gesture changes.
  • in hand key point detection, the importance of the information in each area of the gesture image is not equal.
  • the corresponding weight is obtained according to the likelihood that a hand key point exists in each area, and attention is mainly focused on specific areas with high weights; this enhances the role of high-weight areas in hand key point detection and weakens the role of low-weight areas, thereby increasing the accuracy of hand key point detection.
  • Fig. 3 is a block diagram of a key point detection device according to an exemplary embodiment.
  • the device includes a division unit 301, a determination unit 302 and a calculation unit 303.
  • the dividing unit 301 is configured to acquire a gesture image to be detected, and divide the gesture image into a plurality of regions.
  • the determining unit 302 is configured to determine, for each preset key point of the hand, the probability that the key point of the hand appears in each area and the first coordinate value in each area.
  • the calculation unit 303 is configured to calculate the second coordinate value of each hand key point in the gesture image through the probability of occurrence of each hand key point and the first coordinate value.
  • the determining unit 302 may include: an input module configured to extract the image features of each region and input the image features of each region into the channels of a preset convolutional neural network; and an acquisition module configured to obtain, for each hand key point, the output result of the channels of the convolutional neural network after performing a convolution operation on the image features of each region, the output result including the probability of each hand key point appearing in each region and the first coordinate value in each region.
  • the above regions include N regions, and the preset hand key points include M hand key points; the convolutional neural network includes a classification branch and a regression branch, the classification branch includes M classification channels, and the regression branch includes M abscissa channels and M ordinate channels; each channel corresponds to one hand key point and includes N grids, and each grid corresponds to one region; the channels include the classification channels, the abscissa channels, and the ordinate channels; the M classification channels correspond to the M hand key points, the M abscissa channels correspond to the M hand key points, and the M ordinate channels correspond to the M hand key points; the N grids included in each channel correspond to the N regions; and M and N are positive integers.
  • the above input module may include: a first input submodule configured to input the image features of the N regions into the N grids of the M classification channels, to obtain the first output result of each classification channel after the convolution operation on the image features of the N regions, the first output result of each classification channel including the probability that the hand key point corresponding to the classification channel appears in each region; a second input submodule configured to input the image features of the N regions into the N grids of the M abscissa channels correspondingly, to obtain the second output result of each abscissa channel after the convolution operation on the image features of the N regions, the second output result of each abscissa channel including the first abscissa value of the hand key point corresponding to the abscissa channel in each region; and a third input submodule configured to input the image features of the N regions into the N grids of the M ordinate channels correspondingly, to obtain the third output result of each ordinate channel after the convolution operation on the image features of the N regions.
  • the calculation unit 303 may include: a weighting module configured to, for each hand key point, weight the probability of the hand key point appearing in each area and the first coordinate value in each area, to obtain the second coordinate value of the hand key point in the gesture image.
  • the first coordinate value includes a first abscissa value and a first ordinate value.
  • the weighting module may include: a first weighting submodule configured to, for each hand key point, weight the probability of the hand key point appearing in each area and the first abscissa value in each area, to obtain the second abscissa value of the hand key point in the gesture image; and a second weighting submodule configured to, for each hand key point, weight the probability of the hand key point appearing in each area and the first ordinate value in each area, to obtain the second ordinate value of the hand key point in the gesture image.
  • Fig. 4 is a block diagram of a device 400 for key point detection according to an exemplary embodiment.
  • the apparatus 400 is provided as an electronic device, and the electronic device may be a mobile terminal.
  • the device 400 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
  • the device 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
  • the processing component 402 generally controls the overall operations of the device 400, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 402 may include one or more processors 420 to execute instructions to complete all or part of the steps in the above method.
  • the processing component 402 may include one or more modules to facilitate interaction between the processing component 402 and other components.
  • the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
  • the memory 404 is configured to store various types of data to support operation at the device 400. Examples of these data include instructions for any application or method operating on the device 400, contact data, phone book data, messages, pictures, videos, and so on.
  • the memory 404 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 406 provides power to various components of the device 400.
  • the power component 406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 400.
  • the multimedia component 408 includes a screen that provides an output interface between the device 400 and the user.
  • the screen may include a liquid crystal display (Liquid Crystal Display, LCD) and a touch panel (Touch Panel, TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor can not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation.
  • the multimedia component 408 includes a front camera and/or a rear camera. When the device 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 410 is configured to output and/or input audio signals.
  • the audio component 410 includes a microphone (Microphone, MIC).
  • the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in the memory 404 or sent via the communication component 416.
  • the audio component 410 may further include a speaker for outputting audio signals.
  • the I/O interface 412 provides an interface between the processing component 402 and the peripheral interface module.
  • the above peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
  • the sensor assembly 414 includes one or more sensors for providing the device 400 with status assessments in various aspects.
  • the sensor component 414 can detect the on/off state of the device 400 and the relative positioning of the components, such as the display and the keypad of the device 400.
  • the sensor component 414 may also detect a change in the position of the device 400 or a component of the device 400, the presence or absence of user contact with the device 400, the orientation or acceleration/deceleration of the device 400, and the temperature change of the device 400.
  • the sensor assembly 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 414 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) image sensor, for use in imaging applications.
  • the sensor component 414 may further include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 416 is configured to facilitate wired or wireless communication between the device 400 and other devices.
  • the device 400 may access a wireless network based on a communication standard, such as wireless fidelity (WiFi), a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 416 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 416 may include a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • the apparatus 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the steps of the above key point detection method.
  • a non-transitory computer-readable storage medium including instructions is also provided, for example a memory 404 including instructions, and the above instructions can be executed by the processor 420 of the device 400 to complete the steps of the above key point detection method.
  • the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, or an optical data storage device.
  • Fig. 5 is a block diagram of a device 500 for key point detection according to an exemplary embodiment.
  • the apparatus 500 is provided as an electronic device, and the electronic device may be a server.
  • the apparatus 500 includes a processing component 522, and the processing component 522 may include one or more processors.
  • the device 500 also includes memory resources represented by the memory 532 for storing instructions executable by the processing component 522, such as application programs.
  • the application program stored in the memory 532 may include one or more modules, each module corresponding to a set of instructions.
  • the processing component 522 is configured to execute instructions to perform the key point detection method described above.
  • the apparatus 500 may further include a power supply component 526, a wired or wireless network interface 550, and an input/output (I/O) interface 558.
  • the power component 526 is configured to perform power management of the device 500.
  • a wired or wireless network interface 550 is configured to connect the device 500 to the network.
  • the device 500 can operate an operating system stored in the memory 532, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • an embodiment of the present disclosure also provides a computer program product.
  • the computer program product includes program instructions.
  • when the program instructions in the computer program product are executed by a processor of an electronic device, the electronic device performs the above key point detection method.


Abstract

Embodiments of the present disclosure relate to a key point detection method, apparatus, electronic device, and storage medium, intended to solve the problem of low accuracy in hand key point detection. The method includes: acquiring a gesture image to be detected, and dividing the gesture image into multiple regions; for each preset hand key point, determining the probability that the hand key point appears in each region and a first coordinate value of the hand key point in each region; and calculating a second coordinate value of each hand key point in the gesture image from the probability of each hand key point and the first coordinate values. The embodiments of the present disclosure better accommodate the diversity of gestures and can substantially improve the accuracy of hand key point detection.

Description

Key point detection method, apparatus, electronic device, and storage medium
This disclosure claims priority to Chinese patent application No. 201811481858.9, filed with the China National Intellectual Property Administration on December 5, 2018 and entitled "Key point detection method, apparatus, electronic device and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image processing technology, and in particular to a key point detection method, apparatus, electronic device, and storage medium.
Background
With the continuous development of image processing technology, various image detection techniques such as gesture image detection and face image detection have emerged. Gesture image detection mainly involves detecting the position of each hand key point in a gesture image.
In the related art, a regression algorithm is usually used for gesture image detection. Specifically, the regression algorithm fits the hand key points in a gesture image containing a gesture, thereby obtaining the position of each hand key point in the gesture image.
However, the inventors found that because gestures vary flexibly, and the positions of the hand key points of different gestures differ greatly within the gesture image, fitting the hand key points with a regression algorithm results in low detection accuracy for the hand key points.
Summary
To overcome the problems in the related art, the present disclosure provides a key point detection method, apparatus, electronic device, and storage medium to solve the problem of low accuracy in hand key point detection.
According to a first aspect of the embodiments of the present disclosure, a key point detection method is provided, including: acquiring a gesture image to be detected, and dividing the gesture image into multiple regions; for each preset hand key point, determining the probability that the hand key point appears in each region and a first coordinate value of the hand key point in each region; and calculating a second coordinate value of each hand key point in the gesture image from the probability of each hand key point and the first coordinate values.
According to a second aspect of the embodiments of the present disclosure, a key point detection apparatus is provided, including: a dividing unit configured to acquire a gesture image to be detected and divide the gesture image into multiple regions; a determining unit configured to, for each preset hand key point, determine the probability that the hand key point appears in each region and a first coordinate value of the hand key point in each region; and a calculating unit configured to calculate a second coordinate value of each hand key point in the gesture image from the probability of each hand key point and the first coordinate values.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to perform the key point detection method described in the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided; when the instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device performs the key point detection method provided in the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, a computer program product is provided; the computer program product includes program instructions, and when the instructions in the computer program product are executed by a processor of an electronic device, the electronic device performs the key point detection method provided in the first aspect.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In the embodiments of the present disclosure, a gesture image to be detected is acquired and divided into multiple regions; for each preset hand key point, the probability that the hand key point appears in each region and its first coordinate value in each region are determined; and a second coordinate value of each hand key point in the gesture image is calculated from the probabilities and first coordinate values. Through an attention mechanism, different regions contribute differently to the coordinate value of each hand key point, fully accounting for the fact that different regions of the whole gesture image are of different importance to each hand key point. Attention is concentrated mainly on the regions where a hand key point is most likely to exist, and the role of other regions in hand key point detection is weakened, thereby reducing their interference with the prediction of hand key point coordinate values. This better accommodates the diversity of gestures and substantially improves the accuracy of hand key point detection.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the present invention.
Fig. 1 is a flowchart of a key point detection method according to an exemplary embodiment.
Fig. 2 is a schematic diagram of hand key point positions according to an exemplary embodiment.
Fig. 3 is a block diagram of a key point detection apparatus according to an exemplary embodiment.
Fig. 4 is a block diagram of a device for key point detection according to an exemplary embodiment.
Fig. 5 is a block diagram of a device for key point detection according to an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present invention as detailed in the appended claims.
Fig. 1 is a flowchart of a key point detection method according to an exemplary embodiment. As shown in Fig. 1, the key point detection method is used in an electronic device and includes the following steps.
In step S11, a gesture image to be detected is acquired, and the gesture image is divided into multiple regions.
In the embodiments of the present disclosure, the gesture image to be detected may be obtained from various sources. For example, it may be grabbed from a network, or captured in real time, and so on.
The gesture image contains a human hand, and the gesture of the hand may be any gesture, such as an "OK" gesture, a "victory" gesture, a "finger heart" gesture, and so on. The gesture image may be an image in a format such as red-green-blue (RGB).
In the embodiments of the present disclosure, a deep convolutional neural network for key point detection may be trained in advance. For example, a large number of gesture images of known gestures, together with the probabilities and coordinate values corresponding to each hand key point under those gestures, may be collected, and the deep convolutional neural network may be trained with these gesture images so that the trained network can perform the key point detection method of the embodiments of the present disclosure.
Specifically, a large number of gesture images of known gestures may be collected, each collected gesture image may be divided into multiple regions, and the probability that each hand key point of the known gesture appears in each region of these gesture images, as well as the coordinate values of each hand key point in each region, may be obtained statistically. The deep convolutional neural network is then trained with these gesture images and with the per-region probabilities and coordinate values of each hand key point, so that the trained network can perform the key point detection method of the embodiments of the present disclosure.
The region division used for each collected gesture image is the same as that used for the gesture image to be detected.
As for the specific training process of the deep convolutional neural network, those skilled in the art can carry out the relevant processing based on practical experience, and the embodiments of the present disclosure will not discuss it in detail here.
The gesture image is input into the convolutional neural network. To solve the problem of low detection accuracy when hand key points are detected directly from the whole gesture image, the gesture image may be divided into multiple regions when it is input into the convolutional neural network, and the multiple regions are separately input into the preset convolutional neural network so that the hand key points are detected on the basis of the multiple regions. For example, the gesture image may be divided into N regions, where N is a positive integer. The value of N is not fixed; a suitable value may be chosen according to the specific application scenario.
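As an illustrative, non-limiting sketch of the region-division step above (the disclosure does not prescribe a particular partitioning scheme), a gesture image might be divided into N = rows × cols equal tiles; the grid shape, tile sizes, and function name are assumptions for illustration only:

```python
import numpy as np

def divide_into_regions(image: np.ndarray, rows: int, cols: int):
    """Split an H x W (x C) image into rows*cols equal tiles (the N regions)."""
    h, w = image.shape[0], image.shape[1]
    tile_h, tile_w = h // rows, w // cols  # assumes H, W divisible for simplicity
    regions = []
    for r in range(rows):
        for c in range(cols):
            tile = image[r * tile_h:(r + 1) * tile_h,
                         c * tile_w:(c + 1) * tile_w]
            regions.append(tile)
    return regions  # length N = rows * cols

# Example: a 224x224 RGB gesture image divided into a 7x7 grid (N = 49 regions)
image = np.zeros((224, 224, 3), dtype=np.uint8)
regions = divide_into_regions(image, rows=7, cols=7)
```

In practice the value of N (here 49) would be chosen per application scenario, as the text notes.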
In step S12, for each preset hand key point, the probability that the hand key point appears in each region and a first coordinate value of the hand key point in each region are determined.
For the human hand, multiple hand key points may be set, and the hand key points may be used to characterize key positions of the hand. For example, M hand key points may be set, where M is a positive integer. Fig. 2 is a schematic diagram of hand key point positions according to an exemplary embodiment. In Fig. 2, the points labeled 0-20 are hand key points. As can be seen from Fig. 2, the hand may include 21 hand key points; for the positions of the 21 hand key points in the "OK" gesture, see Fig. 2.
For each hand key point, the probability that the hand key point appears in each region and its first coordinate value in each region may be determined. That is, for each hand key point, both the probability that the hand key point appears in each region and the first coordinate value of the hand key point in each region may be predicted.
For example, suppose the N regions are region 1, region 2, ..., region N, and the M hand key points are hand key point 1, hand key point 2, ..., hand key point M. For hand key point 1, the probabilities that it appears in region 1, region 2, ..., region N are predicted, and its first coordinate values in region 1, region 2, ..., region N are predicted. For hand key point 2, the probabilities that it appears in region 1 through region N and its first coordinate values in region 1 through region N are likewise predicted, and so on, until for hand key point M the probabilities that it appears in region 1 through region N and its first coordinate values in region 1 through region N are predicted.
In an optional implementation, the step of determining, for each preset hand key point, the probability that the hand key point appears in each region and its first coordinate value in each region may include: extracting the image features of each region, and inputting the image features of each region into channels of a preset convolutional neural network respectively; and, for each hand key point, obtaining the output results of the channels of the convolutional neural network after performing convolution operations on the image features of each region, where the output results include the probability that each hand key point appears in each region and the first coordinate value of each hand key point in each region.
A channel of the above convolutional neural network may be regarded as a module of the network, with its own convolutional layers and pooling layers. Each channel of the convolutional neural network can independently perform convolution operations on images to obtain the corresponding output results.
As for images, every image has image features that distinguish it from other classes of images. These may include natural features that can be perceived directly, such as brightness, edges, texture, and color, as well as non-natural features that can only be obtained through transformation or processing, such as histogram features and features characterizing principal components. Accordingly, in the embodiments of the present disclosure, the convolutional neural network may be configured to extract any kind of image feature from the gesture image, for example histogram of oriented gradient (HOG) features, local binary pattern (LBP) features, Haar-like features, and so on; the embodiments of the present disclosure will not discuss this in detail.
In the embodiments of the present disclosure, the convolutional neural network may include a classification branch and a regression branch. The classification branch is used to determine the probability that each hand key point appears in each region of the gesture image; the regression branch is used to determine the coordinate values of each hand key point in each region of the gesture image. The output layer structures of the classification branch and the regression branch may be the same.
The classification branch includes M classification channels; each classification channel corresponds to one hand key point, and the M classification channels correspond to the M hand key points. For example, as shown in Fig. 2, the classification branch may include 21 classification channels, corresponding respectively to the 21 hand key points. Each classification channel consists of N grid cells; each grid cell corresponds to one region, and the N grid cells correspond to the N regions. The value of N is not fixed; a suitable value may be chosen according to the specific application scenario.
The regression branch includes M abscissa channels and M ordinate channels. Each abscissa channel corresponds to one hand key point, and the M abscissa channels correspond to the M hand key points; each abscissa channel consists of N grid cells, each grid cell corresponding to one region. Likewise, each ordinate channel corresponds to one hand key point, the M ordinate channels correspond to the M hand key points, and each ordinate channel consists of N grid cells, each corresponding to one region. For example, as shown in Fig. 2, the regression branch may include 21 abscissa channels and 21 ordinate channels, corresponding respectively to the 21 hand key points.
The step of inputting the image features of each region into the channels of the preset convolutional neural network respectively may include: inputting the image features of the N regions into the N grid cells of the M classification channels correspondingly, to obtain a first output result of each classification channel after performing convolution operations on the image features of the N regions, where the first output result of each classification channel includes the probability that the hand key point corresponding to that classification channel appears in each region; inputting the image features of the N regions into the N grid cells of the M abscissa channels correspondingly, to obtain a second output result of each abscissa channel after performing convolution operations on the image features of the N regions, where the second output result of each abscissa channel includes the first abscissa value of the hand key point corresponding to that abscissa channel in each region; and inputting the image features of the N regions into the N grid cells of the M ordinate channels correspondingly, to obtain a third output result of each ordinate channel after performing convolution operations on the image features of the N regions, where the third output result of each ordinate channel includes the first ordinate value of the hand key point corresponding to that ordinate channel in each region.
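As a hedged sketch of the three channel outputs described above, the two branches can be modeled as placeholder tensors of shape (M, N): one appearance-probability table from the classification branch and one table each of first abscissa and first ordinate values from the regression branch. The softmax over grid cells, the random values, and the concrete sizes (M = 21 key points, N = 49 regions) are illustrative assumptions, not details fixed by the disclosure:

```python
import numpy as np

M, N = 21, 49  # 21 hand key points, 49 regions (illustrative values)

def softmax(logits: np.ndarray) -> np.ndarray:
    """Row-wise softmax: each classification channel's N grid outputs sum to 1."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
cls_logits = rng.normal(size=(M, N))  # classification branch, raw scores
prob = softmax(cls_logits)            # first output: P[m, n], appearance probability
x_first = rng.uniform(size=(M, N))    # second output: first abscissa values
y_first = rng.uniform(size=(M, N))    # third output: first ordinate values

# Each classification channel is a probability distribution over the N regions.
assert np.allclose(prob.sum(axis=1), 1.0)
```

This mirrors the constraint stated later in the text that the values output by all grid cells of one classification channel sum to 1.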
For example, suppose the N regions are region 1 through region N, the M hand key points are hand key point 1 through hand key point M, the M classification channels are classification channel 1 corresponding to hand key point 1, classification channel 2 corresponding to hand key point 2, ..., and classification channel M corresponding to hand key point M, and the N grid cells of each classification channel are grid cell 1 corresponding to region 1, grid cell 2 corresponding to region 2, ..., and grid cell N corresponding to region N.
For region 1, the image features of region 1 are input into grid cell 1 of classification channel 1, grid cell 1 of classification channel 2, ..., and grid cell 1 of classification channel M, yielding the probabilities that hand key points 1 through M appear in region 1. For region 2, the image features of region 2 are input into grid cell 2 of classification channels 1 through M, yielding the probabilities that hand key points 1 through M appear in region 2. And so on, until for region N the image features of region N are input into grid cell N of classification channels 1 through M, yielding the probabilities that hand key points 1 through M appear in region N.
Similarly, the first abscissa and first ordinate values of hand key points 1 through M in region 1 are predicted, the first abscissa and first ordinate values of hand key points 1 through M in region 2 are predicted, ..., and the first abscissa and first ordinate values of hand key points 1 through M in region N are predicted.
Each grid cell of each classification channel outputs a value in the interval [0, 1]. The value output by a grid cell represents the probability that the hand key point corresponding to that classification channel appears in the region corresponding to that grid cell. The sum of the values output by all grid cells of a classification channel is 1. When a grid cell of a classification channel outputs a larger value, the hand key point corresponding to that classification channel is more likely to appear in the region corresponding to that grid cell, and that region receives a larger weight in the subsequent weighted combination. When a grid cell outputs a smaller value, the hand key point is less likely to appear in the corresponding region, and that region receives a smaller weight in the subsequent weighted combination.
Each grid cell of each abscissa channel outputs a value that represents the fitted abscissa of the hand key point corresponding to that abscissa channel within the region corresponding to that grid cell, as predicted from the image features extracted from that region. Each grid cell of each ordinate channel outputs a value that represents the fitted ordinate of the hand key point corresponding to that ordinate channel within the region corresponding to that grid cell, as predicted from the image features extracted from that region.
In step S13, a second coordinate value of each hand key point in the gesture image is calculated from the probability of each hand key point and the first coordinate values.
In an optional implementation, for each hand key point, the probability that the hand key point appears in each region and its first coordinate value in each region, as output by the corresponding channels of the convolutional neural network, may be obtained, and the second coordinate value of the hand key point in the gesture image may be determined from those probabilities and first coordinate values.
The step of calculating the second coordinate value of each hand key point in the gesture image from the probability of each hand key point and the first coordinate values may include: for each hand key point, performing a weighted calculation on the probability that the hand key point appears in each region and its first coordinate value in each region, to obtain the second coordinate value of the hand key point in the gesture image.
Specifically, for each hand key point, the probability that the hand key point appears in each region is used as the weight of the first coordinate value of the hand key point in that region, and a weighted calculation is performed on the first coordinate values of the hand key point in each region to obtain the second coordinate value of the hand key point in the gesture image.
The first coordinate value includes a first abscissa value and a first ordinate value. The step of performing a weighted calculation on the probability that the hand key point appears in each region and its first coordinate value in each region to obtain the second coordinate value of the hand key point in the gesture image may include: performing a weighted calculation on the probability that the hand key point appears in each region and its first abscissa value in each region, to obtain the second abscissa value of the hand key point in the gesture image; and performing a weighted calculation on the probability that the hand key point appears in each region and its first ordinate value in each region, to obtain the second ordinate value of the hand key point in the gesture image.
Specifically, for each hand key point, the probability that the hand key point appears in each region is used as the weight of its first abscissa value in that region, and a weighted calculation is performed on the first abscissa values of the hand key point in each region to obtain the second abscissa value of the hand key point in the gesture image; likewise, the probability is used as the weight of its first ordinate value in each region, and a weighted calculation is performed on the first ordinate values to obtain the second ordinate value of the hand key point in the gesture image.
In the embodiments of the present disclosure, the weighted calculation means computing the product of the probability that the hand key point appears in each region and its first coordinate value in that region, and summing all the products.
For example, for hand key point 1, suppose its probability of appearing in region 1 is P1, in region 2 is P2, ..., and in region N is PN, and suppose its first abscissa value in region 1 is x1 and its first ordinate value is y1, its first abscissa value in region 2 is x2 and its first ordinate value is y2, ..., and its first abscissa value in region N is xN and its first ordinate value is yN. Then the second abscissa value of hand key point 1 in the gesture image is P1×x1 + P2×x2 + ... + PN×xN, and its second ordinate value is P1×y1 + P2×y2 + ... + PN×yN.
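The weighted combination in the worked example above can be sketched directly; the three-region probabilities and coordinate values below are illustrative numbers, not values from the disclosure:

```python
def expected_coordinate(probs, firsts):
    """Weighted combination: sum of P_i * v_i over all N regions."""
    assert abs(sum(probs) - 1.0) < 1e-9, "region probabilities must sum to 1"
    return sum(p * v for p, v in zip(probs, firsts))

# Hand key point 1 with N = 3 regions (illustrative numbers)
P = [0.7, 0.2, 0.1]     # probabilities of appearing in regions 1..3
x = [10.0, 40.0, 90.0]  # first abscissa values in regions 1..3
y = [5.0, 50.0, 80.0]   # first ordinate values in regions 1..3

second_x = expected_coordinate(P, x)  # 0.7*10 + 0.2*40 + 0.1*90 = 24.0
second_y = expected_coordinate(P, y)  # 0.7*5  + 0.2*50 + 0.1*80 = 21.5
```

High-probability regions dominate the sum, which is exactly the attention-style weighting the text describes.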
Therefore, the second coordinate value of each hand key point in the gesture image can be output.
In an optional implementation, the above weighted calculation that yields the second coordinate value of each hand key point in the gesture image may be integrated into the convolutional neural network, so that the convolutional neural network itself outputs the second coordinate value of each hand key point in the gesture image.
In the embodiments of the present disclosure, the probability that each hand key point appears in each region and its first coordinate value in each region are predicted. The probability that a hand key point appears in a region is positively correlated with the likelihood that the hand key point actually appears in that region, and the second coordinate value of each hand key point in the gesture image is determined from those per-region probabilities and first coordinate values. In this way, the role of high-weight regions in hand key point detection is effectively strengthened and the role of low-weight regions is weakened, thereby increasing the accuracy of hand key point detection.
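When the weighted calculation is folded into the network as described above, the same probability-weighted combination can be computed for all M hand key points at once. The shapes and sample numbers below are illustrative assumptions; rows of `prob` are assumed to sum to 1 as the classification branch requires:

```python
import numpy as np

def second_coordinates(prob, x_first, y_first):
    """For all M key points at once: second coordinates are probability-weighted
    sums of the first coordinates over the N regions."""
    # prob, x_first, y_first all have shape (M, N)
    x_second = (prob * x_first).sum(axis=1)  # shape (M,)
    y_second = (prob * y_first).sum(axis=1)  # shape (M,)
    return np.stack([x_second, y_second], axis=1)  # (M, 2) key point coordinates

# Tiny illustrative case: M = 2 key points, N = 3 regions
prob = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.1, 0.8]])
x_first = np.array([[10., 40., 90.],
                    [20., 30., 50.]])
y_first = np.array([[5., 50., 80.],
                    [15., 25., 35.]])
coords = second_coordinates(prob, x_first, y_first)  # (2, 2) array
```

Used as a final layer, this makes the second coordinate values a direct output of the network, matching the optional implementation described in the text.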
In the embodiments of the present disclosure, determining the second coordinate value of each hand key point in the gesture image from the probability that the key point appears in each region and its first coordinate value in each region is a "discard the false, keep the true" application of the attention mechanism. The attention mechanism stems from the study of human vision: in cognitive science, owing to bottlenecks in information processing, humans selectively attend to part of all available information while ignoring the rest, and this mechanism is usually called attention. Adding an attention mechanism performs a weight-based screening of the input information. This screening pattern is not hand-crafted but learned by the convolutional neural network itself; that is, through weighted combination, the convolutional neural network learns the spatial structure of the input information on its own, so that it can adapt well to the diversity of gesture variations. When detecting hand key points, the information in the different regions of the whole gesture image is not equally important: a weight is obtained for each region according to how likely it is to contain a hand key point, attention is concentrated on the specific high-weight regions, the role of high-weight regions in hand key point detection is strengthened, and the role of low-weight regions is weakened, thereby increasing the accuracy of hand key point detection.
In the embodiments of the present disclosure, through the attention mechanism, different regions contribute differently to the coordinate value of each hand key point, fully accounting for the different importance of different regions of the whole gesture image to each hand key point. Attention is concentrated mainly on the regions where a hand key point is most likely to exist, and the role of other regions in hand key point detection is weakened, thereby reducing their interference with the prediction of hand key point coordinate values. This better accommodates the diversity of gestures and substantially improves the accuracy of hand key point detection.
Fig. 3 is a block diagram of a key point detection apparatus according to an exemplary embodiment. Referring to Fig. 3, the apparatus includes a dividing unit 301, a determining unit 302, and a calculating unit 303.
The dividing unit 301 is configured to acquire a gesture image to be detected and divide the gesture image into multiple regions.
The determining unit 302 is configured to, for each preset hand key point, determine the probability that the hand key point appears in each region and a first coordinate value of the hand key point in each region.
The calculating unit 303 is configured to calculate a second coordinate value of each hand key point in the gesture image from the probability of each hand key point and the first coordinate values.
In an optional implementation, the determining unit 302 may include: an input module configured to extract the image features of each region and input the image features of each region into channels of a preset convolutional neural network; and an obtaining module configured to, for each hand key point, obtain the output results of the channels of the convolutional neural network after performing convolution operations on the image features of each region, the output results including the probability that each hand key point appears in each region and the first coordinate value of each hand key point in each region.
In an optional implementation, there are N regions and M preset hand key points; the convolutional neural network includes a classification branch and a regression branch, the classification branch includes M classification channels, and the regression branch includes M abscissa channels and M ordinate channels; each channel corresponds to one hand key point, each channel includes N grid cells, and each grid cell corresponds to one region; the channels include the classification channels, the abscissa channels, and the ordinate channels, the M classification channels correspond to the M hand key points, the M abscissa channels correspond to the M hand key points, the M ordinate channels correspond to the M hand key points, the N grid cells of each channel correspond to the N regions, and M and N are both positive integers.
The above input module may include: a first input submodule configured to input the image features of the N regions into the N grid cells of the M classification channels correspondingly, to obtain a first output result of each classification channel after performing convolution operations on the image features of the N regions, the first output result of each classification channel including the probability that the hand key point corresponding to that classification channel appears in each region; a second input submodule configured to input the image features of the N regions into the N grid cells of the M abscissa channels correspondingly, to obtain a second output result of each abscissa channel after performing convolution operations on the image features of the N regions, the second output result of each abscissa channel including the first abscissa value of the hand key point corresponding to that abscissa channel in each region; and a third input submodule configured to input the image features of the N regions into the N grid cells of the M ordinate channels correspondingly, to obtain a third output result of each ordinate channel after performing convolution operations on the image features of the N regions, the third output result of each ordinate channel including the first ordinate value of the hand key point corresponding to that ordinate channel in each region.
In an optional implementation, the calculating unit 303 may include: a weighting module configured to, for each hand key point, perform a weighted calculation on the probability that the hand key point appears in each region and its first coordinate value in each region, to obtain the second coordinate value of the hand key point in the gesture image.
In an optional implementation, the first coordinate value includes a first abscissa value and a first ordinate value. The weighting module may include: a first weighting submodule configured to, for each hand key point, perform a weighted calculation on the probability that the hand key point appears in each region and its first abscissa value in each region, to obtain the second abscissa value of the hand key point in the gesture image; and a second weighting submodule configured to, for each hand key point, perform a weighted calculation on the probability that the hand key point appears in each region and its first ordinate value in each region, to obtain the second ordinate value of the hand key point in the gesture image.
In the embodiments of the present disclosure, through the attention mechanism, different regions contribute differently to the coordinate value of each hand key point, fully accounting for the different importance of different regions of the whole gesture image to each hand key point. Attention is concentrated mainly on the regions where a hand key point is most likely to exist, and the role of other regions in hand key point detection is weakened, thereby reducing their interference with the prediction of hand key point coordinate values. This better accommodates the diversity of gestures and substantially improves the accuracy of hand key point detection.
Regarding the apparatus in the embodiments of the present disclosure, the specific manner in which each unit or module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
Fig. 4 is a block diagram of a device 400 for key point detection according to an exemplary embodiment. For example, the device 400 is provided as an electronic device, and the electronic device may be a mobile terminal. The device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 4, the device 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the device 400, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 402 may include one or more processors 420 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 402 may include one or more modules to facilitate interaction between the processing component 402 and other components; for example, the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support the operation of the device 400. Examples of such data include instructions for any application or method operated on the device 400, contact data, phone book data, messages, pictures, videos, and so on. The memory 404 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random-access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disc.
The power component 406 provides power to the various components of the device 400. The power component 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 400.
The multimedia component 408 includes a screen providing an output interface between the device 400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors can not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 408 includes a front camera and/or a rear camera. When the device 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a microphone (MIC); when the device 400 is in an operation mode such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 404 or sent via the communication component 416. In some embodiments, the audio component 410 may also include a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules. The above peripheral interface modules may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the device 400. For example, the sensor component 414 can detect the open/closed state of the device 400 and the relative positioning of components, such as the display and keypad of the device 400; the sensor component 414 can also detect a change in position of the device 400 or of one of its components, the presence or absence of user contact with the device 400, the orientation or acceleration/deceleration of the device 400, and temperature changes of the device 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the device 400 and other devices. The device 400 may access a wireless network based on a communication standard, such as wireless fidelity (WiFi), a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 may include a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the steps of the above key point detection method.
As the embodiment of the device for key point detection is basically similar to the embodiments of the key point detection method, its description is relatively brief; for relevant details, refer to the description of the key point detection method embodiments shown in Figs. 1-2.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 404 including instructions, and the instructions can be executed by the processor 420 of the device 400 to complete the steps of the above key point detection method; for details, refer to the embodiments shown in Figs. 1-2. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
As the embodiment of the non-transitory computer-readable storage medium is basically similar to the embodiments of the key point detection method, its description is relatively brief; for relevant details, refer to the description of the key point detection method embodiments shown in Figs. 1-2.
Fig. 5 is a block diagram of a device 500 for key point detection according to an exemplary embodiment. For example, the device 500 is provided as an electronic device, and the electronic device may be a server.
Referring to Fig. 5, the device 500 includes a processing component 522, and the processing component 522 may include one or more processors. The device 500 also includes memory resources, represented by the memory 532, for storing instructions executable by the processing component 522, for example application programs. The application programs stored in the memory 532 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 522 is configured to execute instructions so as to perform the above key point detection method.
The device 500 may also include a power component 526, a wired or wireless network interface 550, and an input/output (I/O) interface 558. The power component 526 is configured to perform power management of the device 500, and the wired or wireless network interface 550 is configured to connect the device 500 to a network. The device 500 can operate an operating system stored in the memory 532, for example Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
As the embodiment of the device for key point detection is basically similar to the embodiments of the key point detection method, its description is relatively brief; for relevant details, refer to the description of the key point detection method embodiments shown in Figs. 1-2.
In an exemplary embodiment, an embodiment of the present disclosure also provides a computer program product; the computer program product includes program instructions, and when the program instructions in the computer program product are executed by a processor of an electronic device, the electronic device performs the above key point detection method.
As the embodiment of the computer program product is basically similar to the embodiments of the key point detection method, its description is relatively brief; for relevant details, refer to the description of the key point detection method embodiments shown in Figs. 1-2.
Other embodiments of the present disclosure will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the present invention that follow its general principles and include common general knowledge or customary technical means in the technical field not disclosed in this disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the invention indicated by the following claims.
It should be understood that the present invention is not limited to the precise structures that have been described above and shown in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present invention is limited only by the appended claims.

Claims (17)

  1. A key point detection method, comprising:
    acquiring a gesture image to be detected, and dividing the gesture image into multiple regions;
    for each preset hand key point, determining a probability that the hand key point appears in each region and a first coordinate value of the hand key point in each region;
    calculating a second coordinate value of each hand key point in the gesture image from the probability of each hand key point and the first coordinate value.
  2. The key point detection method according to claim 1, wherein the step of determining, for each preset hand key point, the probability that the hand key point appears in each region and the first coordinate value of the hand key point in each region comprises:
    extracting image features of each region, and inputting the image features of each region into channels of a preset convolutional neural network respectively;
    obtaining output results of the channels of the convolutional neural network after performing convolution operations on the image features of each region, the output results comprising the probability that each hand key point appears in each region and the first coordinate value of each hand key point in each region.
  3. The key point detection method according to claim 2, wherein there are N regions, there are M preset hand key points, and the channels comprise classification channels, abscissa channels, and ordinate channels; the convolutional neural network comprises a classification branch and a regression branch, the classification branch comprises M classification channels, and the regression branch comprises M abscissa channels and M ordinate channels; each channel corresponds to one hand key point, each channel comprises N grid cells, and each grid cell corresponds to one region; the M classification channels correspond to the M hand key points, the M abscissa channels correspond to the M hand key points, the M ordinate channels correspond to the M hand key points, the N grid cells of each channel correspond to the N regions, and M and N are both positive integers;
    the step of inputting the image features of each region into the channels of the preset convolutional neural network respectively comprises:
    inputting the image features of the N regions into the N grid cells of the M classification channels correspondingly, to obtain a first output result of each classification channel after performing convolution operations on the image features of the N regions, the first output result of each classification channel comprising the probability that the hand key point corresponding to the classification channel appears in each region;
    inputting the image features of the N regions into the N grid cells of the M abscissa channels correspondingly, to obtain a second output result of each abscissa channel after performing convolution operations on the image features of the N regions, the second output result of each abscissa channel comprising a first abscissa value of the hand key point corresponding to the abscissa channel in each region;
    inputting the image features of the N regions into the N grid cells of the M ordinate channels correspondingly, to obtain a third output result of each ordinate channel after performing convolution operations on the image features of the N regions, the third output result of each ordinate channel comprising a first ordinate value of the hand key point corresponding to the ordinate channel in each region.
  4. The key point detection method according to claim 1, wherein the step of calculating the second coordinate value of each hand key point in the gesture image from the probability of each hand key point and the first coordinate value comprises:
    for each hand key point, performing a weighted calculation on the probability that the hand key point appears in each region and the first coordinate value of the hand key point in each region, to obtain the second coordinate value of the hand key point in the gesture image.
  5. The key point detection method according to claim 4, wherein the first coordinate value comprises a first abscissa value and a first ordinate value;
    the step of, for each hand key point, performing a weighted calculation on the probability that the hand key point appears in each region and the first coordinate value of the hand key point in each region to obtain the second coordinate value of the hand key point in the gesture image comprises:
    for each hand key point, performing a weighted calculation on the probability that the hand key point appears in each region and the first abscissa value of the hand key point in each region, to obtain a second abscissa value of the hand key point in the gesture image;
    for each hand key point, performing a weighted calculation on the probability that the hand key point appears in each region and the first ordinate value of the hand key point in each region, to obtain a second ordinate value of the hand key point in the gesture image.
  6. A key point detection apparatus, comprising:
    a dividing unit configured to acquire a gesture image to be detected and divide the gesture image into multiple regions;
    a determining unit configured to, for each preset hand key point, determine a probability that the hand key point appears in each region and a first coordinate value of the hand key point in each region;
    a calculating unit configured to calculate a second coordinate value of each hand key point in the gesture image from the probability of each hand key point and the first coordinate value.
  7. The key point detection apparatus according to claim 6, wherein the determining unit comprises:
    an input module configured to extract image features of each region and input the image features of each region into channels of a preset convolutional neural network;
    an obtaining module configured to obtain output results of the channels of the convolutional neural network after performing convolution operations on the image features of each region, the output results comprising the probability that each hand key point appears in each region and the first coordinate value of each hand key point in each region.
  8. The key point detection apparatus according to claim 7, wherein there are N regions, there are M preset hand key points, and the channels comprise classification channels, abscissa channels, and ordinate channels; the convolutional neural network comprises a classification branch and a regression branch, the classification branch comprises M classification channels, and the regression branch comprises M abscissa channels and M ordinate channels; each channel corresponds to one hand key point, each channel comprises N grid cells, and each grid cell corresponds to one region; the M classification channels correspond to the M hand key points, the M abscissa channels correspond to the M hand key points, the M ordinate channels correspond to the M hand key points, the N grid cells of each channel correspond to the N regions, and M and N are both positive integers;
    the input module comprises:
    a first input submodule configured to input the image features of the N regions into the N grid cells of the M classification channels correspondingly, to obtain a first output result of each classification channel after performing convolution operations on the image features of the N regions, the first output result of each classification channel comprising the probability that the hand key point corresponding to the classification channel appears in each region;
    a second input submodule configured to input the image features of the N regions into the N grid cells of the M abscissa channels correspondingly, to obtain a second output result of each abscissa channel after performing convolution operations on the image features of the N regions, the second output result of each abscissa channel comprising a first abscissa value of the hand key point corresponding to the abscissa channel in each region;
    a third input submodule configured to input the image features of the N regions into the N grid cells of the M ordinate channels correspondingly, to obtain a third output result of each ordinate channel after performing convolution operations on the image features of the N regions, the third output result of each ordinate channel comprising a first ordinate value of the hand key point corresponding to the ordinate channel in each region.
  9. The key point detection apparatus according to claim 6, wherein the calculating unit comprises:
    a weighting module configured to, for each hand key point, perform a weighted calculation on the probability that the hand key point appears in each region and the first coordinate value of the hand key point in each region, to obtain the second coordinate value of the hand key point in the gesture image.
  10. The key point detection apparatus according to claim 9, wherein the first coordinate value comprises a first abscissa value and a first ordinate value; the weighting module comprises:
    a first weighting submodule configured to, for each hand key point, perform a weighted calculation on the probability that the hand key point appears in each region and the first abscissa value of the hand key point in each region, to obtain a second abscissa value of the hand key point in the gesture image;
    a second weighting submodule configured to, for each hand key point, perform a weighted calculation on the probability that the hand key point appears in each region and the first ordinate value of the hand key point in each region, to obtain a second ordinate value of the hand key point in the gesture image.
  11. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to perform:
    acquiring a gesture image to be detected, and dividing the gesture image into multiple regions;
    for each preset hand key point, determining a probability that the hand key point appears in each region and a first coordinate value of the hand key point in each region;
    calculating a second coordinate value of each hand key point in the gesture image from the probability of each hand key point and the first coordinate value.
  12. The electronic device according to claim 11, wherein the processor is specifically configured to perform:
    extracting image features of each region, and inputting the image features of each region into channels of a preset convolutional neural network respectively;
    obtaining output results of the channels of the convolutional neural network after performing convolution operations on the image features of each region, the output results comprising the probability that each hand key point appears in each region and the first coordinate value of each hand key point in each region.
  13. The electronic device according to claim 12, wherein there are N regions and M preset hand key points; the convolutional neural network comprises a classification branch and a regression branch, the classification branch comprises M classification channels, and the regression branch comprises M abscissa channels and M ordinate channels; each channel corresponds to one hand key point, each channel comprises N grid cells, and each grid cell corresponds to one region; the channels comprise the classification channels, the abscissa channels, and the ordinate channels, the M classification channels correspond to the M hand key points, the M abscissa channels correspond to the M hand key points, the M ordinate channels correspond to the M hand key points, the N grid cells of each channel correspond to the N regions, and M and N are both positive integers;
    the processor is specifically configured to perform:
    inputting the image features of the N regions into the N grid cells of the M classification channels correspondingly, to obtain a first output result of each classification channel after performing convolution operations on the image features of the N regions, the first output result of each classification channel comprising the probability that the hand key point corresponding to the classification channel appears in each region;
    inputting the image features of the N regions into the N grid cells of the M abscissa channels correspondingly, to obtain a second output result of each abscissa channel after performing convolution operations on the image features of the N regions, the second output result of each abscissa channel comprising a first abscissa value of the hand key point corresponding to the abscissa channel in each region;
    inputting the image features of the N regions into the N grid cells of the M ordinate channels correspondingly, to obtain a third output result of each ordinate channel after performing convolution operations on the image features of the N regions, the third output result of each ordinate channel comprising a first ordinate value of the hand key point corresponding to the ordinate channel in each region.
  14. The electronic device according to claim 11, wherein the processor is specifically configured to perform:
    for each hand key point, performing a weighted calculation on the probability that the hand key point appears in each region and the first coordinate value of the hand key point in each region, to obtain the second coordinate value of the hand key point in the gesture image.
  15. The electronic device according to claim 14, wherein the first coordinate value comprises a first abscissa value and a first ordinate value;
    the processor is specifically configured to perform:
    performing a weighted calculation on the probability that the hand key point appears in each region and the first abscissa value of the hand key point in each region, to obtain a second abscissa value of the hand key point in the gesture image;
    performing a weighted calculation on the probability that the hand key point appears in each region and the first ordinate value of the hand key point in each region, to obtain a second ordinate value of the hand key point in the gesture image.
  16. A non-transitory computer-readable storage medium, wherein when instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is caused to perform the key point detection method according to any one of claims 1-5.
  17. A computer program product, comprising program instructions, wherein when the program instructions are executed by an electronic device, the electronic device is caused to perform the key point detection method according to any one of claims 1-5.
PCT/CN2019/119388 2018-12-05 2019-11-19 Key point detection method, apparatus, electronic device and storage medium WO2020114236A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811481858.9 2018-12-05
CN201811481858.9A CN109784147A (zh) 2018-12-05 Key point detection method, apparatus, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2020114236A1 true WO2020114236A1 (zh) 2020-06-11

Family

ID=66496734

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/119388 WO2020114236A1 (zh) 2019-11-19 Key point detection method, apparatus, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN109784147A (zh)
WO (1) WO2020114236A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818897A * 2021-02-19 2021-05-18 宁波毅诺智慧健康科技有限公司 Intelligent medical bed control method based on visual gesture recognition, and related devices
CN112861783A * 2021-03-08 2021-05-28 北京华捷艾米科技有限公司 Hand detection method and system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784147A (zh) * 2018-12-05 2019-05-21 北京达佳互联信息技术有限公司 Key point detection method, apparatus, electronic device and storage medium
CN110348412B (zh) * 2019-07-16 2022-03-04 广州图普网络科技有限公司 Key point positioning method and apparatus, electronic device and storage medium
CN111008589B (zh) * 2019-12-02 2024-04-09 杭州网易云音乐科技有限公司 Face key point detection method, medium, apparatus and computing device
CN114445716B (zh) * 2022-04-07 2022-07-26 腾讯科技(深圳)有限公司 Key point detection method and apparatus, computer device, medium and program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160267347A1 (en) * 2015-03-09 2016-09-15 Electronics And Telecommunications Research Institute Apparatus and method for detectting key point using high-order laplacian of gaussian (log) kernel
CN108121952A * 2017-12-12 2018-06-05 北京小米移动软件有限公司 Face key point positioning method, apparatus, device and storage medium
CN108520251A * 2018-04-20 2018-09-11 北京市商汤科技开发有限公司 Key point detection method and apparatus, electronic device and storage medium
CN109784147A (zh) * 2018-12-05 2019-05-21 北京达佳互联信息技术有限公司 Key point detection method, apparatus, electronic device and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105718913B (zh) * 2016-01-26 2018-11-02 浙江捷尚视觉科技股份有限公司 Robust facial feature point positioning method
US9875398B1 (en) * 2016-06-30 2018-01-23 The United States Of America As Represented By The Secretary Of The Army System and method for face recognition with two-dimensional sensing modality
CN108875723B (zh) * 2018-01-03 2023-01-06 北京旷视科技有限公司 Object detection method, apparatus and system, and storage medium



Also Published As

Publication number Publication date
CN109784147A (zh) 2019-05-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19892008

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19892008

Country of ref document: EP

Kind code of ref document: A1