WO2020088069A1 - Gesture key point detection method and apparatus, electronic device, and storage medium - Google Patents

Gesture key point detection method and apparatus, electronic device, and storage medium

Info

Publication number
WO2020088069A1
WO2020088069A1 (application PCT/CN2019/103119)
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
key point
image
category
point detection
Prior art date
Application number
PCT/CN2019/103119
Other languages
English (en)
French (fr)
Inventor
董亚娇
刘裕峰
郑文
Original Assignee
北京达佳互联信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 filed Critical 北京达佳互联信息技术有限公司
Publication of WO2020088069A1 publication Critical patent/WO2020088069A1/zh
Priority to US17/119,975 priority Critical patent/US11514706B2/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/113Recognition of static hand signs

Definitions

  • the present application relates to the technical field of gesture recognition, and in particular to a method, device, electronic device, and storage medium for detecting gesture key points.
  • a gesture recognition device can collect an image of a gesture; the computer then performs gesture recognition on the collected gesture image, converts the result into a corresponding command, executes that command, and displays the execution result on the display.
  • for gesture recognition, the gesture key points in the collected gesture image must first be detected, and the detected key points are then analyzed to complete the recognition.
  • for example, the gesture image shown in FIG. 1 can be processed to detect 21 key points, which are then identified and analyzed to complete the gesture recognition.
  • in the related art, these 21 key points are detected by first designing a deep convolutional neural network, training it on training data to obtain a multi-layer deep convolutional neural network, and finally using that network to detect key points in the collected gesture images.
  • however, the relative positions of key points differ greatly across gestures; if a single multi-layer deep convolutional neural network is used to detect key points in images of different gestures, the accuracy of the detection results is low.
  • the purpose of the embodiments of the present application is to provide a gesture key point detection method, device, electronic device, and storage medium to improve the accuracy of gesture key point detection.
  • the specific technical solutions are as follows:
  • a gesture key point detection method including:
  • each key point detection network in the multiple key point detection networks corresponds to a gesture category
  • the gesture image is input into the key point detection network corresponding to the gesture category to obtain the gesture key point corresponding to the gesture category and the position of the gesture key point in the gesture image.
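The first-aspect method above (select the key point detection network that matches the gesture category, then run it on the gesture image) can be sketched as follows. All names here (`detect_keypoints`, the toy `networks` dictionary) are illustrative, not from the application.

```python
# Sketch of the claimed flow: pick the key point detection network that
# corresponds to the gesture category, then run it on the gesture image.

def detect_keypoints(gesture_image, gesture_category, networks):
    """Route the image to the network trained for its category.

    networks: dict mapping gesture category -> callable that returns a
    list of (x, y) key point positions in the gesture image.
    """
    net = networks.get(gesture_category)
    if net is None:
        raise KeyError(f"no key point network for category {gesture_category!r}")
    return net(gesture_image)

# Toy stand-ins: each "network" just returns fixed key points.
networks = {
    "fist": lambda img: [(10, 12), (14, 18)],
    "open_palm": lambda img: [(5, 5), (9, 30), (20, 40)],
}

print(detect_keypoints(None, "open_palm", networks))  # [(5, 5), (9, 30), (20, 40)]
```

In a real system each value in `networks` would be a trained deep convolutional network, but the dispatch logic is the same.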
  • a gesture key point detection device including:
  • the acquisition module is configured to acquire a gesture image and a gesture category of the gesture image
  • the key point detection network determination module is configured to determine, among the multiple trained key point detection networks, the key point detection network corresponding to the gesture category, wherein each of the multiple key point detection networks corresponds to one gesture category;
  • the detection module is configured to input the gesture image into the key point detection network corresponding to the gesture category to obtain the gesture key point corresponding to the gesture category and the position of the gesture key point in the gesture image.
  • an electronic device including a processor and a memory for storing processor executable instructions
  • the processor is configured as:
  • each key point detection network in the multiple key point detection networks corresponds to a gesture category
  • a storage medium: when instructions in the storage medium are executed by a processor of an electronic device, the processor can execute a gesture key point detection method, the method including:
  • each key point detection network in the multiple key point detection networks corresponds to a gesture category
  • the gesture image is input into the key point detection network corresponding to the gesture category to obtain the gesture key point corresponding to the gesture category and the position of the gesture key point in the gesture image.
  • a program product containing instructions, which when executed on an electronic device, causes the electronic device to execute the steps of the method for detecting a gesture key point provided in the first aspect.
  • a computer program which, when running on an electronic device, causes the electronic device to execute the steps of the method for detecting gesture key points provided in the first aspect.
  • the gesture key point detection method, device, electronic device, and storage medium provided in the embodiments of the present application can, after acquiring a gesture image and its gesture category, determine the key point detection network corresponding to that category among the multiple trained key point detection networks, and then input the gesture image into that network to obtain the positions in the gesture image of the gesture key points corresponding to the category.
  • each key point detection network in the plurality of key point detection networks corresponds to a gesture category
  • since the parameters of the key point detection network corresponding to a gesture category are tuned for that category, using that network to detect the key points in gesture images of the same category improves the accuracy of gesture key point detection.
  • FIG. 1 is a schematic diagram of 21 key points in gestures in the related art
  • FIG. 2 is a flowchart of a first implementation manner of a gesture key point detection method according to an exemplary embodiment
  • Fig. 3 is a schematic structural diagram of a gesture key point detection device according to an exemplary embodiment
  • FIG. 4 is a flowchart of a second implementation manner of a gesture key point detection method according to an exemplary embodiment
  • Fig. 5 is a schematic structural diagram of a mobile terminal according to an exemplary embodiment
  • Fig. 6 is a schematic structural diagram of a server device according to an exemplary embodiment.
  • embodiments of the present application provide a gesture key point detection method, device, electronic device, and storage medium to improve the accuracy of gesture key point detection.
  • a gesture key point detection method according to an embodiment of the present application is first introduced.
  • As shown in FIG. 2, a flowchart of the first implementation manner of a gesture key point detection method according to an exemplary embodiment is provided.
  • the method may include:
  • Step S110 Acquire a gesture image and a gesture category of the gesture image.
  • the gesture key point detection method of the embodiment of the present application may be applied to an electronic device, and the electronic device may be a smart phone, a personal computer, or a server.
  • When the electronic device is used to detect gesture key points, the user can input the gesture image to be detected and the gesture category of that image into the electronic device, so the electronic device obtains both the gesture image and its gesture category.
  • the gesture image may be marked with a corresponding gesture category.
  • the aforementioned electronic device may extract the gesture type of the gesture image from the gesture image.
  • the gesture image may contain a gesture of a single gesture category.
  • the electronic device described above may also classify the gesture image, and determine the gesture category of the gesture image according to the gesture image.
  • the aforementioned electronic device may be provided with a pre-trained target classification algorithm; the user may then input a gesture image into the electronic device, and the electronic device may use the pre-trained target classification algorithm to classify the image and determine the gesture category of the gesture image.
  • a target classification algorithm may be preset in the above-mentioned electronic device; the target classification algorithm may be an existing target classification algorithm. Gesture training images labeled with gesture categories may then be input into the electronic device, and after receiving them, the electronic device can train the preset target classification algorithm.
  • the target classification algorithm may be a classification algorithm based on a neural network, or a K nearest neighbor classification algorithm.
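As a minimal illustration of the KNN option named above (the application only mentions the algorithm family), here is a nearest-neighbour gesture classifier over hypothetical feature vectors; the feature values and category labels are invented for the example.

```python
# Minimal k-nearest-neighbour gesture classifier over toy feature vectors.
from collections import Counter
import math

def knn_classify(features, training_set, k=3):
    """training_set: list of (feature_vector, gesture_category) pairs."""
    # Sort all training samples by Euclidean distance and keep the k nearest.
    nearest = sorted((math.dist(features, f), cat) for f, cat in training_set)[:k]
    votes = Counter(cat for _, cat in nearest)
    return votes.most_common(1)[0][0]

training_set = [
    ([0.0, 0.1], "fist"), ([0.1, 0.0], "fist"),
    ([0.9, 1.0], "open_palm"), ([1.0, 0.9], "open_palm"),
]
print(knn_classify([0.05, 0.05], training_set))  # fist
```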
  • Step S120 among the multiple key point detection networks obtained by training, determine the key point detection network corresponding to the gesture category.
  • the key point detection network may be a convolutional neural network, for example a multi-layer deep convolutional neural network.
  • a key point detection network can be trained for each gesture category. Specifically, for each gesture category, gesture images of that category are used as the input of the multi-layer deep convolutional neural network, the gesture key points and the position of each key point in the gesture image are used as its output, and the network is trained to obtain a trained key point detection network corresponding to the gesture category.
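The "one network per gesture category" training above can be sketched as a loop over the dataset; the actual deep-network fitting is replaced by a stand-in helper so the structure stays visible, and all names are illustrative.

```python
# Sketch: train one key point detection network per gesture category.

def train_one_network(images, keypoint_labels, epochs=3):
    """Stand-in trainer: a real implementation would fit a multi-layer deep
    convolutional network to (image -> key point positions) pairs."""
    return {"samples_seen": len(images) * epochs}

def train_per_category(dataset):
    """dataset: dict mapping gesture category -> (images, keypoint_labels)."""
    return {
        category: train_one_network(images, labels)
        for category, (images, labels) in dataset.items()
    }

dataset = {
    "fist": ([1, 2, 3], [[(0, 0)]] * 3),
    "open_palm": ([4, 5], [[(1, 1)]] * 2),
}
nets = train_per_category(dataset)  # one trained network per category
```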
  • After acquiring the gesture image and its gesture category, the electronic device uses a category-specific key point detection network: among the multiple pre-trained key point detection networks, it finds the one corresponding to the gesture category.
  • each key point detection network in the multiple key point detection networks corresponds to a gesture category.
  • the multiple key point detection networks may be detection networks using the same structure or different detection networks, which are all possible.
  • the key point detection network may be a stacked hourglass deep convolutional network with two hourglass stages.
  • the parameters of each key point detection network are different.
  • the aforementioned electronic device may adopt the following manner to train the key point detection network corresponding to each gesture category:
  • Step A Acquire a preset key point detection network and a gesture training image marked with the same gesture category.
  • the gesture training images are marked with gesture key points corresponding to the gesture category.
  • the marked gesture key point may be a plurality of key points among the 21 key points shown in FIG. 1 or other gesture key points.
  • the gesture training image marked with the gesture category may include the position of the key point of the gesture.
  • the trained key point detection network can recognize the position of the gesture key point when identifying the key point in the gesture.
  • Step B Input a gesture training image marked with the same gesture category into a preset key point detection network to obtain a predicted gesture key point corresponding to the gesture training image.
  • a Gaussian distribution with mean μ and variance σ² may be used to initialize the parameters of the preset key point detection network.
  • the mean μ and the variance σ² can be set empirically.
  • for example, the mean μ can be 0 and the variance σ² can be 0.01.
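With the example values above, the Gaussian initialization looks like the following sketch. Note the unit conversion: a variance σ² of 0.01 means a standard deviation of 0.1, and `random.gauss` takes the standard deviation, not the variance. The flat weight list is a pure-Python stand-in for the network's parameter tensors.

```python
# Gaussian parameter initialization: mean 0, variance 0.01 (std dev 0.1).
import random

def init_weights(n, mean=0.0, variance=0.01, seed=0):
    rng = random.Random(seed)      # seeded for reproducibility
    std = variance ** 0.5          # gauss() expects the standard deviation
    return [rng.gauss(mean, std) for _ in range(n)]

weights = init_weights(10000)
sample_mean = sum(weights) / len(weights)
sample_var = sum((w - sample_mean) ** 2 for w in weights) / len(weights)
```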
  • Step C based on the loss between the predicted gesture key point and the gesture key point marked in the gesture training image, adjust the parameters of the preset key point detection network.
  • the above steps A to C may be performed iteratively in a loop.
  • in each iteration, the prediction accuracy may be calculated from the predicted gesture key points and the gesture key points marked in the gesture training image.
  • if the prediction accuracy meets the requirement, the preset key point detection network can be used as the trained key point detection network corresponding to the gesture category.
  • Step S130 Input the gesture image into the key point detection network corresponding to the gesture category to obtain the gesture key point corresponding to the gesture category and the position of the gesture key point in the gesture image.
  • After the key point detection network corresponding to the gesture category is determined, the electronic device may input the gesture image into that network, so that the network detects the gesture key points in the gesture image and the position of each gesture key point in the image.
  • the detection result output by the key point detection network is the gesture key point corresponding to the gesture type of the gesture image and the position of the gesture key point in the gesture image.
  • The gesture key point detection method provided by the embodiments of the present application can, after acquiring a gesture image and its gesture category, determine the corresponding key point detection network among the multiple pre-trained key point detection networks, and then input the gesture image into that network to obtain the positions in the gesture image of the gesture key points corresponding to the category.
  • each key point detection network in the plurality of key point detection networks corresponds to a gesture category
  • since the parameters of the key point detection network corresponding to a gesture category are tuned for that category, using that network to detect the key points in gesture images of the same category improves the accuracy of gesture key point detection.
  • an embodiment of the present application further provides a gesture key point detection device.
  • As shown in Fig. 3, a schematic structural diagram of a gesture key point detection device according to an exemplary embodiment is provided.
  • the apparatus may include an acquisition module 210, a key point detection network determination module 220, and a detection module 230.
  • the acquisition module 210 is configured to acquire a gesture image and a gesture category of the gesture image
  • the key point detection network determination module 220 is configured to determine, among the multiple pre-trained key point detection networks, the key point detection network corresponding to the gesture category, wherein each of the multiple key point detection networks corresponds to one gesture category;
  • the detection module 230 is configured to input the gesture image into the key point detection network corresponding to the gesture category to obtain the gesture key point corresponding to the gesture category and the position of the gesture key point in the gesture image.
  • The gesture key point detection device provided by the embodiments of the present application can, after acquiring a gesture image and its gesture category, determine the corresponding key point detection network among the multiple pre-trained key point detection networks, and then input the gesture image into that network to obtain the positions in the gesture image of the gesture key points corresponding to the category.
  • each key point detection network in the plurality of key point detection networks corresponds to a gesture category
  • since the parameters of the key point detection network corresponding to a gesture category are tuned for that category, using that network to detect the key points in gesture images of the same category improves the accuracy of gesture key point detection.
  • FIG. 4 is a flowchart of a second implementation manner of a gesture key point detection method according to an exemplary embodiment. In FIG. 4, the method may include:
  • Step S111 Obtain a gesture image, and use a trained target detection algorithm to determine the gesture category in the gesture image and the first area containing the gesture corresponding to the gesture category.
  • the gesture image may include one gesture or multiple gestures.
  • the electronic device may use a pre-trained target detection algorithm to detect the category of each gesture in the gesture image and the region of the gesture image that contains the gesture of each category.
  • the target detection algorithm can be trained in the following ways:
  • Step D Obtain a gesture training image.
  • the gesture training image includes: a pre-marked gesture category and a pre-marked gesture position.
  • the user may first manually mark the gesture training image, mark the gesture category in the gesture training image and the gesture position corresponding to each gesture category, and then input the gesture training image into the aforementioned electronic device, Therefore, the aforementioned electronic device can acquire the gesture training image.
  • Step E Input the gesture training image into a preset target detection algorithm to obtain a gesture prediction image corresponding to the gesture training image.
  • the gesture prediction image includes the predicted gesture category and the predicted gesture position.
  • the electronic device may input the gesture training image into a preset target detection algorithm, so that the preset target detection algorithm predicts the gesture training image.
  • the preset target detection algorithm may be a target detection algorithm in the prior art.
  • the preset target detection algorithm may be an SSD (Single Shot MultiBox Detector) target detection algorithm.
  • Step F Adjust the preset target detection algorithm parameters based on the first loss between the pre-labeled gesture category and the predicted gesture category, and the second loss between the pre-labeled gesture location and the predicted gesture location.
  • based on the pre-labeled gesture category and the predicted gesture category, the electronic device may calculate the first loss between them; based on the pre-labeled gesture position and the predicted gesture position, it may calculate the second loss between them.
  • based on the first loss and the second loss, the parameters of the preset target detection algorithm are adjusted.
  • the above steps D to F may be performed in a loop iteration.
  • if the prediction accuracy meets the requirement, the preset target detection algorithm can be used as the trained target detection algorithm; if not, the parameters of the preset target detection algorithm are adjusted based on the first loss and the second loss, and the adjusted algorithm is used to predict the gesture training images again.
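Steps D to F can be sketched as a toy training loop: compute the first (category) loss and the second (position) loss, adjust the parameters, and repeat until the combined loss is small enough. Scalars stand in for the real SSD parameters; only the structure of the loop reflects the text above.

```python
# Toy version of steps D-F: two losses, parameter adjustment, iterate until
# the accuracy requirement (combined loss below tol) is met.

def train_detector(target_cat, target_pos, lr=0.1, tol=1e-4, max_iters=500):
    cat, pos = 0.0, 0.0  # illustrative scalar "parameters"
    for _ in range(max_iters):
        first_loss = (cat - target_cat) ** 2    # category loss
        second_loss = (pos - target_pos) ** 2   # position loss
        if first_loss + second_loss < tol:      # accuracy requirement met
            break
        # gradient of a squared error (p - t)^2 is 2 * (p - t)
        cat -= lr * 2 * (cat - target_cat)
        pos -= lr * 2 * (pos - target_pos)
    return cat, pos

cat, pos = train_detector(1.0, 5.0)
```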
  • the position of the first area in the gesture image may be represented by the positions of its upper-left and lower-right corners. It may also be represented by the position of any one corner together with the width and height of the first area in pixels; both representations are possible.
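The two region encodings described above carry the same information and can be converted into each other; a quick sketch:

```python
# Convert between (corner, corner) and (corner, width, height) box encodings.

def corners_to_xywh(x1, y1, x2, y2):
    """(upper-left, lower-right) corners -> (upper-left corner, width, height)."""
    return x1, y1, x2 - x1, y2 - y1

def xywh_to_corners(x, y, w, h):
    """(upper-left corner, width, height) -> (upper-left, lower-right) corners."""
    return x, y, x + w, y + h

print(corners_to_xywh(10, 20, 110, 220))  # (10, 20, 100, 200)
print(xywh_to_corners(10, 20, 100, 200))  # (10, 20, 110, 220)
```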
  • Step S120 among the multiple key point detection networks obtained by training, determine the key point detection network corresponding to the gesture category.
  • Step S120 in this embodiment is the same as step S120 in the first embodiment described above; reference may be made to the first embodiment, and details are not repeated here.
  • Step S131 Extract the first area from the gesture image; input the first area into the key point detection network corresponding to the gesture category to obtain the gesture key points corresponding to the gesture category and their positions in the first area.
  • the trained target detection algorithm described above can be used to identify the gesture category of each gesture and the position of each gesture in the gesture image, that is, the position of the first area.
  • the electronic device can extract the first area from the gesture image and input it into the key point detection network corresponding to the gesture category, so that the network detects the gesture key points in the first area and the position of each gesture key point within that area.
  • the detection result output by the key point detection network corresponding to the gesture category is the gesture key point corresponding to the gesture category of the gesture image and the position of the gesture key point in the first area.
  • the electronic device may then combine the position of the first area in the gesture image with the detected key point positions to calculate, for each gesture category, the positions of the corresponding gesture key points in the gesture image.
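Combining the first area's position with the key points detected inside it reduces to adding the region's offset; a sketch, assuming the region is given by its top-left corner in image coordinates:

```python
# Map key points detected inside a region back to full-image coordinates.

def to_image_coords(region_top_left, keypoints_in_region):
    """region_top_left: (x, y) of the region's top-left corner in the image.
    keypoints_in_region: list of (x, y) positions relative to the region."""
    ox, oy = region_top_left
    return [(ox + x, oy + y) for x, y in keypoints_in_region]

print(to_image_coords((100, 50), [(3, 4), (10, 20)]))  # [(103, 54), (110, 70)]
```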
  • in this way, the positions in the gesture image of the gesture key points corresponding to each gesture category can be detected. Therefore, in addition to improving detection accuracy, key points can be detected in gesture images containing gestures of multiple categories.
  • the acquisition module 210 in the gesture key point detection device is specifically configured to: acquire a gesture image, and use a pre-trained target classification algorithm to determine the gesture category of the gesture image.
  • the acquisition module 210 in the gesture key point detection device is specifically configured to: acquire a gesture image, and use a pre-trained target detection algorithm to determine the gesture category in the gesture image and the first area containing the gesture corresponding to the gesture category;
  • the detection module 230 is specifically configured to: extract the first area from the gesture image; input the first area into the key point detection network corresponding to the gesture category to obtain the gesture key point corresponding to the gesture category and the gesture category The location of the key point of the gesture in the first area.
  • The gesture key point detection device provided by an embodiment of the present application further includes a target detection algorithm training module configured to: acquire a gesture training image, the gesture training image including a pre-labeled gesture category and a pre-labeled gesture position; input the gesture training image into a preset target detection algorithm to obtain a gesture prediction image corresponding to the gesture training image, the gesture prediction image including a predicted gesture category and a predicted gesture position; and adjust the parameters of the preset target detection algorithm based on the first loss between the pre-labeled gesture category and the predicted gesture category and the second loss between the pre-labeled gesture position and the predicted gesture position.
  • The gesture key point detection device provided by an embodiment of the present application further includes a key point detection network training module configured to: obtain a preset key point detection network and gesture training images marked with the same gesture category, the gesture training images being marked with the gesture key points corresponding to that category; input the gesture training images marked with the same gesture category into the preset key point detection network to obtain predicted gesture key points corresponding to the gesture training images; and adjust the parameters of the preset key point detection network based on the loss between the predicted gesture key points and the gesture key points marked in the gesture training images.
  • The gesture key point detection method of the embodiments of the present application may be applied to a mobile terminal, which may be a mobile phone, a computer, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • the mobile terminal 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
  • the processing component 502 generally controls the overall operations of the mobile terminal 500, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 502 may include one or more processors 520 to execute instructions to complete all or part of the steps of the above method.
  • processing component 502 may include one or more modules to facilitate interaction between the processing component 502 and other components.
  • processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
  • the memory 504 is configured to store various types of data to support operation at the device 500. Examples of these data include instructions for any application or method for operating on the mobile terminal 500, contact data, phone book data, messages, pictures, videos, and so on.
  • the memory 504 may be implemented by any type of volatile or nonvolatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 506 provides power to various components of the mobile terminal 500.
  • the power supply component 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the mobile terminal 500.
  • the multimedia component 508 includes a screen between the mobile terminal 500 and the user that provides an output interface.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation.
  • the multimedia component 508 includes a front camera and / or a rear camera. When the mobile terminal 500 is in an operation mode, such as a shooting mode or a video mode, the front camera and / or the rear camera may receive external multimedia data. Each front or rear camera may be a fixed optical lens system or may have focus and optical zoom capability.
  • the audio component 510 is configured to output and / or input audio signals.
  • the audio component 510 includes a microphone (MIC).
  • when the mobile terminal 500 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals.
  • the received audio signal may be further stored in the memory 504 or transmitted via the communication component 516.
  • the audio component 510 further includes a speaker for outputting audio signals.
  • the I / O interface 512 provides an interface between the processing component 502 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, or buttons. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
  • the sensor component 514 includes one or more sensors for providing status assessments of various aspects of the mobile terminal 500.
  • the sensor component 514 can detect the on / off state of the mobile terminal 500 and the relative positioning of components, such as the display and keypad of the mobile terminal 500; the sensor component 514 can also detect a change in position of the mobile terminal 500 or of a component of the mobile terminal 500, the presence or absence of user contact with the mobile terminal 500, the orientation or acceleration / deceleration of the mobile terminal 500, and temperature changes of the mobile terminal 500.
  • the sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 514 may further include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 516 is configured to facilitate wired or wireless communication between the mobile terminal 500 and other devices.
  • the mobile terminal 500 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 516 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 516 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • the mobile terminal 500 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing all or part of the steps of the above method.
  • a mobile terminal provided by an embodiment of the present application can, after acquiring a gesture image and the gesture category of the gesture image, determine the key point detection network corresponding to the gesture category among a plurality of pre-trained key point detection networks, and then input the gesture image into the key point detection network corresponding to the gesture category, thereby obtaining the positions in the gesture image of the gesture key points corresponding to the gesture category.
  • each of the plurality of key point detection networks corresponds to one gesture category.
  • the parameters of the key point detection network corresponding to a gesture category are parameters tuned for that gesture category, so when the key point detection network corresponding to the gesture category is used to detect key points in gesture images of that category, the accuracy of gesture key point detection can be improved.
  • a computer program product is also provided, which can be stored in the memory 504; when the instructions in the computer program product are executed by the processor 520 of the mobile terminal 500, the mobile terminal 500 can execute the above-mentioned gesture key point detection method.
  • a non-transitory computer-readable storage medium including instructions is also provided, such as a memory 504 including instructions, which can be executed by the processor 520 of the mobile terminal 500 to complete the above method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, or the like.
  • a gesture key point detection method may be applied to a server.
  • referring to FIG. 6, which is a schematic structural diagram of a server 600 according to an exemplary embodiment.
  • the server 600 includes a processing component 622, which further includes one or more processors, and memory resources represented by the memory 632, for storing instructions executable by the processing component 622, such as application programs.
  • the application programs stored in the memory 632 may include one or more modules each corresponding to a set of instructions.
  • the processing component 622 is configured to execute instructions to perform all or part of the steps of the above method.
  • the server 600 may also include a power component 626 configured to perform power management of the server 600, a wired or wireless network interface 650 configured to connect the server 600 to the network, and an input / output (I / O) interface 658.
  • the server 600 may operate based on an operating system stored in the memory 632, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
  • a server provided by an embodiment of the present application may, after acquiring a gesture image and the gesture category of the gesture image, determine the key point detection network corresponding to the gesture category among a plurality of pre-trained key point detection networks, and then input the gesture image into the key point detection network corresponding to the gesture category, thereby obtaining the positions in the gesture image of the gesture key points corresponding to the gesture category.
  • each of the plurality of key point detection networks corresponds to one gesture category.
  • the parameters of the key point detection network corresponding to a gesture category are parameters tuned for that gesture category, so when the key point detection network corresponding to the gesture category is used to detect key points in gesture images of that category, the accuracy of gesture key point detection can be improved.
  • a computer program product is also provided, which can be stored in the memory 632; when the instructions in the computer program product are executed by the processing component 622 of the server 600, the server 600 can perform the above-mentioned gesture key point detection method.
  • An embodiment of the present application also provides a computer program that, when run on an electronic device, enables the electronic device to perform all or part of the steps in the above method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses a gesture key point detection method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a gesture image and the gesture category of the gesture image; determining, among a plurality of trained key point detection networks, the key point detection network corresponding to the gesture category; and inputting the gesture image into the key point detection network corresponding to the gesture category to obtain the gesture key points corresponding to the gesture category and the positions of the gesture key points in the gesture image. Since, in the embodiments of the present application, each of the plurality of key point detection networks corresponds to one gesture category, and the parameters of the key point detection network corresponding to a gesture category are parameters tuned for that category, using the key point detection network corresponding to the gesture category to detect key points in gesture images of that category improves the accuracy of gesture key point detection.

Description

Gesture key point detection method and apparatus, electronic device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201811280155.X, filed with the China National Intellectual Property Administration on October 30, 2018 and entitled "Gesture key point detection method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the technical field of gesture recognition, and in particular to a gesture key point detection method and apparatus, an electronic device, and a storage medium.
BACKGROUND
With the development of human-computer interaction technology, interaction techniques based on recognition have emerged. For example, in human-computer interaction based on gesture recognition, a gesture recognition device may capture an image of a gesture; a computer then performs gesture recognition on the captured image, converts the recognized gesture into a corresponding command, executes the command, and displays the execution result on a display.
During gesture recognition, the gesture key points in the captured gesture image need to be detected, and the detected key points are then recognized and analyzed to complete the recognition. For example, detecting the gesture image shown in FIG. 1 yields 21 key points, which are then recognized and analyzed to complete gesture recognition.
In the related art, to detect these 21 key points, a deep convolutional neural network is first designed and then trained with training data to obtain a multi-layer deep convolutional neural network, which is finally used to perform key point detection on captured gesture images.
However, the relative positions of the key points differ greatly across different gestures. If this single multi-layer deep convolutional neural network is used to detect key points in images of different gestures, the accuracy of the detection results is low.
SUMMARY
To overcome the low accuracy of key point detection on gesture images in the related art, the embodiments of the present application aim to provide a gesture key point detection method and apparatus, an electronic device, and a storage medium, so as to improve the accuracy of gesture key point detection. The specific technical solutions are as follows:
According to a first aspect of the embodiments of the present application, a gesture key point detection method is provided, including:
acquiring a gesture image and a gesture category of the gesture image;
determining, among a plurality of trained key point detection networks, a key point detection network corresponding to the gesture category, where each of the plurality of key point detection networks corresponds to one gesture category; and
inputting the gesture image into the key point detection network corresponding to the gesture category to obtain gesture key points corresponding to the gesture category and positions of the gesture key points in the gesture image.
According to a second aspect of the embodiments of the present application, a gesture key point detection apparatus is provided, including:
an acquisition module configured to acquire a gesture image and a gesture category of the gesture image;
a key point detection network determination module configured to determine, among a plurality of trained key point detection networks, a key point detection network corresponding to the gesture category, where each of the plurality of key point detection networks corresponds to one gesture category; and
a detection module configured to input the gesture image into the key point detection network corresponding to the gesture category to obtain gesture key points corresponding to the gesture category and positions of the gesture key points in the gesture image.
According to a third aspect of the embodiments of the present application, an electronic device is provided, including a processor and a memory for storing processor-executable instructions;
where the processor is configured to:
acquire a gesture image and a gesture category of the gesture image;
determine, among a plurality of trained key point detection networks, a key point detection network corresponding to the gesture category, where each of the plurality of key point detection networks corresponds to one gesture category; and
input the gesture image into the key point detection network corresponding to the gesture category to obtain gesture key points corresponding to the gesture category and positions of the gesture key points in the gesture image.
According to a fourth aspect of the embodiments of the present application, a non-transitory computer-readable storage medium is provided. When instructions in the storage medium are executed by a processor of an electronic device, the processor is enabled to perform a gesture key point detection method, the method including:
acquiring a gesture image and a gesture category of the gesture image;
determining, among a plurality of trained key point detection networks, a key point detection network corresponding to the gesture category, where each of the plurality of key point detection networks corresponds to one gesture category; and
inputting the gesture image into the key point detection network corresponding to the gesture category to obtain gesture key points corresponding to the gesture category and positions of the gesture key points in the gesture image.
According to a fifth aspect of the embodiments of the present application, a program product containing instructions is further provided, which, when run on an electronic device, causes the electronic device to perform the steps of the gesture key point detection method provided in the first aspect.
According to a sixth aspect of the embodiments of the present application, a computer program is further provided, which, when run on an electronic device, causes the electronic device to perform the steps of the gesture key point detection method provided in the first aspect.
The gesture key point detection method and apparatus, electronic device, and storage medium provided by the embodiments of the present application can, after acquiring a gesture image and the gesture category of the gesture image, determine the key point detection network corresponding to the gesture category among a plurality of trained key point detection networks, and then input the gesture image into the key point detection network corresponding to the gesture category, thereby obtaining the positions in the gesture image of the gesture key points corresponding to the gesture category. Since, in the embodiments of the present application, each of the plurality of key point detection networks corresponds to one gesture category, and the parameters of the key point detection network corresponding to a gesture category are parameters tuned for that category, using the key point detection network corresponding to the gesture category to detect key points in gesture images of that category improves the accuracy of gesture key point detection.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit the present application; a product or method implementing the present application need not achieve all of the above advantages simultaneously.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art may derive other drawings from them without creative effort.
The drawings herein are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present application, and together with the specification serve to explain the principles of the present application.
FIG. 1 is a schematic diagram of 21 key points of a gesture in the related art;
FIG. 2 is a flowchart of a first implementation of a gesture key point detection method according to an exemplary embodiment;
FIG. 3 is a schematic structural diagram of a gesture key point detection apparatus according to an exemplary embodiment;
FIG. 4 is a flowchart of a second implementation of a gesture key point detection method according to an exemplary embodiment;
FIG. 5 is a schematic structural diagram of a mobile terminal according to an exemplary embodiment;
FIG. 6 is a schematic structural diagram of a server apparatus according to an exemplary embodiment.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
To solve the problems in the related art, the embodiments of the present application provide a gesture key point detection method and apparatus, an electronic device, and a storage medium, so as to improve the accuracy of gesture key point detection. A gesture key point detection method of an embodiment of the present application is introduced first.
As shown in FIG. 2, which is a flowchart of a first implementation of a gesture key point detection method according to an exemplary embodiment, the method may include:
Step S110: acquiring a gesture image and a gesture category of the gesture image.
In some examples, the gesture key point detection method of the embodiments of the present application may be applied to an electronic device, which may be a smartphone, a personal computer, or a server.
When the electronic device is used to detect gesture key points, a user may input into the electronic device the gesture image to be detected and the gesture category of that image; the electronic device can thus acquire the gesture image and its gesture category.
In some examples, the gesture image may be labeled with the corresponding gesture category. In this case, the electronic device may extract the gesture category from the gesture image.
In some examples, the gesture image may be a gesture image of a single gesture category.
In some examples, the electronic device may also classify the gesture image and determine the gesture category of the gesture image from the image itself.
In some examples, the electronic device may be provided with a pre-trained target classification algorithm; the user may input a gesture image into the electronic device, and the device classifies the image with the pre-trained target classification algorithm to determine its gesture category.
In some examples, a target classification algorithm, which may be an existing target classification algorithm, may be preset in the electronic device; gesture training images labeled with gesture categories may then be input into the electronic device, and upon receiving them the device trains the preset target classification algorithm.
For example, the target classification algorithm may be a neural-network-based classification algorithm, a K-nearest-neighbor classification algorithm, or the like.
Step S120: determining, among the plurality of trained key point detection networks, the key point detection network corresponding to the gesture category.
The key point detection network may be one kind of convolutional neural network, such as a multi-layer deep convolutional neural network. One key point detection network may be trained for each gesture category. Specifically, for each gesture category, gesture images of that category are taken as the input of the multi-layer deep convolutional neural network, and the gesture key points together with the position of each gesture key point in the gesture image are taken as its output; the multi-layer deep convolutional neural network is trained accordingly to obtain the trained key point detection network corresponding to that gesture category.
Further, after acquiring the gesture image and its gesture category, the electronic device may look up, among the plurality of pre-trained key point detection networks, the key point detection network corresponding to the gesture category, so that the matching network is used for each different gesture category.
Each of the plurality of key point detection networks corresponds to one gesture category.
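The category-to-network correspondence described above can be sketched as a simple lookup table. In the minimal Python sketch below, the category names ("fist", "palm") and the stand-in functions are illustrative assumptions only; in practice each entry would be a trained deep convolutional key point detection network with its own parameters:

```python
# Sketch of selecting the key point detection network that matches the
# gesture category. The categories and the stand-in "networks" below are
# hypothetical; a real system would map each supported gesture category
# to a trained deep convolutional network.

def fist_network(image):
    # Stand-in: a trained network would return the detected key points
    # and their (x, y) positions in the input image.
    return [("wrist", (10, 20))]

def palm_network(image):
    return [("wrist", (12, 22)), ("thumb_tip", (30, 5))]

# One key point detection network per gesture category.
NETWORKS = {"fist": fist_network, "palm": palm_network}

def detect_key_points(image, category):
    """Look up the network for the gesture category and run it."""
    network = NETWORKS[category]  # parameters specific to this category
    return network(image)
```

Because each entry holds parameters tuned to a single gesture category, dispatching on the classified category is what lets the method apply category-specific parameters at detection time.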
In some examples, the plurality of key point detection networks may be detection networks of the same structure or of different structures; either is possible. For example, a key point detection network may be a two-stage stacked deep convolutional hourglass network.
When detection networks of the same structure are used, the parameters of the individual key point detection networks differ.
In some examples, the electronic device may train the key point detection network corresponding to each gesture category as follows:
Step A: acquiring a preset key point detection network and gesture training images labeled with the same gesture category.
The gesture training images are labeled with the gesture key points corresponding to that gesture category. The labeled gesture key points may be several of the 21 key points shown in FIG. 1, or other gesture key points. The gesture training images labeled with the gesture category may include the positions of the gesture key points.
In this way, after training through the subsequent steps, the trained key point detection network can identify the positions of the gesture key points when recognizing the key points of a gesture.
Step B: inputting the gesture training images labeled with the same gesture category into the preset key point detection network to obtain the predicted gesture key points corresponding to the gesture training images.
In some examples, before the gesture training images labeled with the same gesture category are input into the preset key point detection network, the parameters of the preset key point detection network may be initialized with a Gaussian distribution with mean μ and variance δ².
The mean μ and variance δ² may be set empirically; for example, the mean μ may be 0 and the variance δ² may be 0.01.
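The Gaussian parameter initialization described above can be sketched with NumPy; the layer shapes below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def init_params(shapes, mu=0.0, var=0.01, seed=0):
    """Draw each parameter tensor from a Gaussian with mean mu and
    variance var (i.e. standard deviation sqrt(var))."""
    rng = np.random.default_rng(seed)
    return [rng.normal(mu, np.sqrt(var), size=shape) for shape in shapes]

# Hypothetical layer shapes for a small convolutional network:
# a 3x3 conv with 16 filters, its biases, and a second 3x3 conv.
params = init_params([(3, 3, 3, 16), (16,), (3, 3, 16, 32)])
```

With μ = 0 and δ² = 0.01, each weight is drawn with standard deviation 0.1, which keeps the initial activations small before training begins.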
Step C: adjusting the parameters of the preset key point detection network based on the loss between the predicted gesture key points and the gesture key points labeled in the gesture training images.
In some examples, steps A to C may be performed iteratively. To reduce the training complexity and the time spent training the preset key point detection network, after the predicted gesture key points corresponding to the gesture training images are obtained in step B, the prediction accuracy may be computed from the predicted gesture key points and the key points labeled in the gesture training images; when the prediction accuracy is greater than or equal to a preset accuracy threshold, the preset key point detection network may be taken as the trained key point detection network corresponding to that gesture category.
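The accuracy-based stopping rule for steps A to C can be sketched as below. The patent does not define the accuracy measure; the fraction of predicted key points falling within a pixel tolerance of the labels (a PCK-style measure) is used here as an assumption:

```python
import numpy as np

def keypoint_accuracy(predicted, labeled, tol=5.0):
    """Fraction of predicted key points within tol pixels of the label."""
    predicted = np.asarray(predicted, dtype=float)
    labeled = np.asarray(labeled, dtype=float)
    distances = np.linalg.norm(predicted - labeled, axis=-1)
    return float((distances <= tol).mean())

def train_until_accurate(network, adjust_fn, images, labels,
                         threshold=0.9, max_iters=100):
    """Iterate steps A-C: predict, measure accuracy, adjust parameters,
    stopping once the preset accuracy threshold is reached."""
    for _ in range(max_iters):
        predicted = network(images)
        if keypoint_accuracy(predicted, labels) >= threshold:
            break            # accept the network for this gesture category
        adjust_fn()          # step C: adjust parameters from the loss
    return network
```

Checking accuracy after each prediction pass is what lets training end early once the network is good enough for its category, instead of running a fixed number of iterations.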
Step S130: inputting the gesture image into the key point detection network corresponding to the gesture category to obtain the gesture key points corresponding to the gesture category and the positions of the gesture key points in the gesture image.
After determining the key point detection network corresponding to the gesture category of the gesture image, the electronic device may input the gesture image into that network, so that the key point detection network detects the gesture key points in the gesture image and the position of each gesture key point in the gesture image.
The detection result output by the key point detection network is thus the gesture key points corresponding to the gesture category of the gesture image and the positions of the gesture key points in the gesture image.
The gesture key point detection method provided by the embodiments of the present application can, after acquiring a gesture image and its gesture category, determine the key point detection network corresponding to the gesture category among a plurality of pre-trained key point detection networks, and then input the gesture image into the key point detection network corresponding to the gesture category, thereby obtaining the positions in the gesture image of the gesture key points corresponding to the gesture category. Since, in the embodiments of the present application, each of the plurality of key point detection networks corresponds to one gesture category, and the parameters of the key point detection network corresponding to a gesture category are parameters tuned for that category, using the key point detection network corresponding to the gesture category to detect key points in gesture images of that category improves the accuracy of gesture key point detection.
Corresponding to the above method embodiments, an embodiment of the present application further provides a gesture key point detection apparatus. FIG. 3 is a schematic structural diagram of a gesture key point detection apparatus according to an exemplary embodiment. Referring to FIG. 3, the apparatus may include an acquisition module 210, a key point detection network determination module 220, and a detection module 230.
The acquisition module 210 is configured to acquire a gesture image and a gesture category of the gesture image.
The key point detection network determination module 220 is configured to determine, among a plurality of pre-trained key point detection networks, a key point detection network corresponding to the gesture category, where each of the plurality of key point detection networks corresponds to one gesture category.
The detection module 230 is configured to input the gesture image into the key point detection network corresponding to the gesture category to obtain the gesture key points corresponding to the gesture category and the positions of the gesture key points in the gesture image.
The gesture key point detection apparatus provided by the embodiments of the present application can, after acquiring a gesture image and its gesture category, determine the key point detection network corresponding to the gesture category among a plurality of pre-trained key point detection networks, and then input the gesture image into that network, thereby obtaining the positions in the gesture image of the gesture key points corresponding to the gesture category. Since each of the plurality of key point detection networks corresponds to one gesture category, and the parameters of the key point detection network corresponding to a gesture category are parameters tuned for that category, using that network to detect key points in gesture images of the category improves the accuracy of gesture key point detection.
Regarding the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.
Based on the gesture key point detection method shown in FIG. 2, an embodiment of the present application further provides a possible implementation. FIG. 4 is a flowchart of a second implementation of a gesture key point detection method according to an exemplary embodiment. In FIG. 4, the method may include:
Step S111: acquiring a gesture image, and determining, with a trained target detection algorithm, the gesture category in the gesture image and a first region containing the gesture corresponding to the gesture category.
In some examples, the gesture image may contain one gesture or multiple gestures. When the gesture image contains multiple gestures, the electronic device may use a pre-trained target detection algorithm to detect the category of each gesture in the gesture image and the region occupied in the gesture image by the gesture of each category.
In some examples, the target detection algorithm may be trained as follows:
Step D: acquiring gesture training images, the gesture training images including pre-labeled gesture categories and pre-labeled gesture positions.
In some examples, the user may first label the gesture training images manually, marking the gesture categories in each image and the gesture position corresponding to each category, and then input the gesture training images into the electronic device; the electronic device can thus acquire the gesture training images.
Step E: inputting the gesture training images into a preset target detection algorithm to obtain gesture prediction images corresponding to the gesture training images, the gesture prediction images including predicted gesture categories and predicted gesture positions.
After acquiring the gesture training images, the electronic device may input them into the preset target detection algorithm, so that the preset target detection algorithm makes predictions on the gesture training images. The preset target detection algorithm may be an existing target detection algorithm; for example, it may be the SSD (Single Shot MultiBox Detector) target detection algorithm.
Step F: adjusting the parameters of the preset target detection algorithm based on a first loss between the pre-labeled gesture categories and the predicted gesture categories and a second loss between the pre-labeled gesture positions and the predicted gesture positions.
After obtaining the predicted gesture categories and predicted gesture positions with the preset target detection algorithm, the electronic device may compute the first loss between the pre-labeled gesture categories and the predicted gesture categories, and the second loss between the pre-labeled gesture positions and the predicted gesture positions.
The parameters of the preset target detection algorithm are then adjusted based on the first loss and the second loss.
In some examples, steps D to F may be performed iteratively. To reduce the training complexity and the time spent training the preset target detection algorithm, after the first loss and the second loss are obtained, it may be determined whether the first loss is smaller than a first loss threshold and whether the second loss is smaller than a second loss threshold. If so, the preset target detection algorithm may be taken as the trained target detection algorithm. If not, the parameters of the preset target detection algorithm are adjusted based on the first loss and the second loss, and the adjusted target detection algorithm makes predictions on the gesture training images again.
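The two-threshold stopping condition of steps D to F can be sketched as follows; the concrete loss functions and threshold values are not specified by the patent and are treated here as inputs:

```python
def detector_converged(first_loss, second_loss,
                       first_threshold=0.1, second_threshold=0.1):
    """Training stops only when the category loss (first loss) and the
    position loss (second loss) are both below their thresholds."""
    return first_loss < first_threshold and second_loss < second_threshold

def train_detector(loss_fn, adjust_fn, max_iters=1000):
    """Iterate steps D-F: compute both losses on the training images and
    adjust the detection algorithm's parameters until both are small."""
    for _ in range(max_iters):
        first_loss, second_loss = loss_fn()   # steps D-E: predict + losses
        if detector_converged(first_loss, second_loss):
            break
        adjust_fn(first_loss, second_loss)    # step F: adjust parameters
```

Requiring both losses to fall below their thresholds ensures the detector is accepted only when category prediction and position prediction have both converged.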
In some examples, the position of the first region in the gesture image may be represented by the top-left and bottom-right corner positions of the region; alternatively, it may be represented by the position of any one corner of the first region together with the width and height of the region in pixels. Either representation is possible.
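The two region representations mentioned above are interchangeable; a small sketch of the conversion (pixel coordinates assumed, with y growing downward):

```python
def corners_to_xywh(top_left, bottom_right):
    """(top-left, bottom-right) corners -> (corner, width, height)."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    return (x1, y1), x2 - x1, y2 - y1

def xywh_to_corners(corner, width, height):
    """(corner, width, height) -> (top-left, bottom-right) corners."""
    x1, y1 = corner
    return (x1, y1), (x1 + width, y1 + height)
```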
Step S120: determining, among the plurality of trained key point detection networks, the key point detection network corresponding to the gesture category.
It should be noted that step S120 in this embodiment is the same as the corresponding step in the first embodiment above; reference may be made to that embodiment, and details are not repeated here.
Step S131: extracting the first region from the gesture image; and inputting the first region into the key point detection network corresponding to the gesture category to obtain the gesture key points corresponding to the gesture category and the positions, in the first region, of the gesture key points corresponding to the gesture category.
In some examples, when the gesture image contains multiple gesture categories, the trained target detection algorithm described above can identify the gesture category of each gesture and the position of each gesture in the gesture image, that is, the position of each first region.
To detect the gesture key points of each gesture category, the electronic device may extract the first region from the gesture image and input it into the key point detection network corresponding to the gesture category, so that the key point detection network detects the gesture key points in the first region and the position of each gesture key point in the first region.
The detection result output by the key point detection network corresponding to the gesture category is thus the gesture key points corresponding to the gesture category of the gesture image and the positions of the gesture key points in the first region.
In some examples, after obtaining the positions in the first region of the gesture key points corresponding to each gesture category, the electronic device may combine them with the position of the first region in the gesture image to compute the positions of the gesture key points corresponding to each gesture category in the gesture image.
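Combining a key point's position in the first region with the region's position in the gesture image is a simple translation; the sketch below assumes the region is represented by its top-left corner:

```python
def region_to_image_coords(key_points, region_top_left):
    """Shift key point positions from first-region coordinates into
    gesture-image coordinates by the region's top-left offset."""
    ox, oy = region_top_left
    return [(x + ox, y + oy) for (x, y) in key_points]

# Key points detected at (5, 7) and (12, 3) inside a first region whose
# top-left corner lies at (100, 50) in the gesture image.
positions = region_to_image_coords([(5, 7), (12, 3)], (100, 50))
# positions == [(105, 57), (112, 53)]
```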
Through the embodiments of the present application, when multiple gesture categories exist in a gesture image, the positions in the gesture image of the gesture key points corresponding to each gesture category can be detected. Key point detection in gesture images containing multiple gesture categories is thus achieved while the detection accuracy is improved.
Corresponding to the above method embodiments, in a gesture key point detection apparatus provided by an embodiment of the present application, the acquisition module 210 is specifically configured to: acquire a gesture image, and determine the gesture category of the gesture image with a pre-trained target classification algorithm.
Corresponding to the above method embodiments, in a gesture key point detection apparatus provided by an embodiment of the present application, the acquisition module 210 is specifically configured to: acquire a gesture image, and determine, with a pre-trained target detection algorithm, the gesture category in the gesture image and a first region containing the gesture corresponding to the gesture category;
the detection module 230 is specifically configured to: extract the first region from the gesture image; and input the first region into the key point detection network corresponding to the gesture category to obtain the gesture key points corresponding to the gesture category and the positions, in the first region, of the gesture key points corresponding to the gesture category.
Corresponding to the above method embodiments, a gesture key point detection apparatus provided by an embodiment of the present application further includes a target detection algorithm training module configured to: acquire gesture training images, the gesture training images including pre-labeled gesture categories and pre-labeled gesture positions; input the gesture training images into a preset target detection algorithm to obtain gesture prediction images corresponding to the gesture training images, the gesture prediction images including predicted gesture categories and predicted gesture positions; and adjust the parameters of the preset target detection algorithm based on a first loss between the pre-labeled gesture categories and the predicted gesture categories and a second loss between the pre-labeled gesture positions and the predicted gesture positions.
Corresponding to the above method embodiments, a gesture key point detection apparatus provided by an embodiment of the present application further includes a key point detection network training module configured to: acquire a preset key point detection network and gesture training images labeled with the same gesture category, the gesture training images being labeled with the gesture key points corresponding to that gesture category; input the gesture training images labeled with the same gesture category into the preset key point detection network to obtain predicted gesture key points corresponding to the gesture training images; and adjust the parameters of the preset key point detection network based on the loss between the predicted gesture key points and the gesture key points labeled in the gesture training images.
In some examples, the gesture key point detection method of the embodiments of the present application may be applied to a mobile terminal, which may be a mobile phone, a computer, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
FIG. 5 is a schematic structural diagram of a mobile terminal according to an exemplary embodiment. Referring to FIG. 5, the mobile terminal 500 may include one or more of the following components: a processing component 502, a memory 504, a power supply component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls the overall operations of the mobile terminal 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to complete all or part of the steps of the above method.
In addition, the processing component 502 may include one or more modules to facilitate interaction between the processing component 502 and other components. For example, the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation of the mobile terminal 500. Examples of such data include instructions for any application or method operated on the mobile terminal 500, contact data, phone book data, messages, pictures, videos, and so on. The memory 504 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable programmable read only memory (EPROM), programmable read only memory (PROM), read only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power supply component 506 provides power to the various components of the mobile terminal 500. The power supply component 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the mobile terminal 500.
The multimedia component 508 includes a screen that provides an output interface between the mobile terminal 500 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 508 includes a front camera and/or a rear camera. When the mobile terminal 500 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front or rear camera may be a fixed optical lens system or may have focus and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a microphone (MIC); when the mobile terminal 500 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 514 includes one or more sensors for providing status assessments of various aspects of the mobile terminal 500. For example, the sensor component 514 may detect the on/off state of the mobile terminal 500 and the relative positioning of components, such as the display and keypad of the mobile terminal 500; the sensor component 514 may also detect a change in position of the mobile terminal 500 or of a component of the mobile terminal 500, the presence or absence of user contact with the mobile terminal 500, the orientation or acceleration/deceleration of the mobile terminal 500, and temperature changes of the mobile terminal 500. The sensor component 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 514 may further include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 514 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the mobile terminal 500 and other devices. The mobile terminal 500 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the mobile terminal 500 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing all or part of the steps of the above method.
The mobile terminal provided by the embodiments of the present application can, after acquiring a gesture image and the gesture category of the gesture image, determine the key point detection network corresponding to the gesture category among a plurality of pre-trained key point detection networks, and then input the gesture image into the key point detection network corresponding to the gesture category, thereby obtaining the positions in the gesture image of the gesture key points corresponding to the gesture category. Since, in the embodiments of the present application, each of the plurality of key point detection networks corresponds to one gesture category, and the parameters of the key point detection network corresponding to a gesture category are parameters tuned for that category, using the key point detection network corresponding to the gesture category to detect key points in gesture images of that category improves the accuracy of gesture key point detection.
In an exemplary embodiment, a computer program product is also provided, which can be stored in the memory 504; when the instructions in the computer program product are executed by the processor 520 of the mobile terminal 500, the mobile terminal 500 can execute the above gesture key point detection method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 504 including instructions executable by the processor 520 of the mobile terminal 500 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In some examples, the gesture key point detection method of the embodiments of the present application may be applied to a server. FIG. 6 is a schematic structural diagram of a server 600 according to an exemplary embodiment. Referring to FIG. 6, the server 600 includes a processing component 622, which further includes one or more processors, and memory resources represented by the memory 632, for storing instructions executable by the processing component 622, such as application programs. The application programs stored in the memory 632 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 622 is configured to execute instructions to perform all or part of the steps of the above method.
The server 600 may further include a power supply component 626 configured to perform power management of the server 600, a wired or wireless network interface 650 configured to connect the server 600 to a network, and an input/output (I/O) interface 658. The server 600 may operate based on an operating system stored in the memory 632, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
The server provided by the embodiments of the present application can, after acquiring a gesture image and the gesture category of the gesture image, determine the key point detection network corresponding to the gesture category among a plurality of pre-trained key point detection networks, and then input the gesture image into the key point detection network corresponding to the gesture category, thereby obtaining the positions in the gesture image of the gesture key points corresponding to the gesture category. Since, in the embodiments of the present application, each of the plurality of key point detection networks corresponds to one gesture category, and the parameters of the key point detection network corresponding to a gesture category are parameters tuned for that category, using the key point detection network corresponding to the gesture category to detect key points in gesture images of that category improves the accuracy of gesture key point detection.
In an exemplary embodiment, a computer program product is also provided, which can be stored in the memory 632; when the instructions in the computer program product are executed by the processing component 622 of the server 600, the server 600 can perform the above gesture key point detection method.
An embodiment of the present application further provides a computer program which, when run on an electronic device, causes the electronic device to perform all or part of the steps of the above method.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.

Claims (13)

  1. A gesture key point detection method, the method comprising:
    acquiring a gesture image and a gesture category of the gesture image;
    determining, among a plurality of trained key point detection networks, a key point detection network corresponding to the gesture category, wherein each of the plurality of key point detection networks corresponds to one gesture category; and
    inputting the gesture image into the key point detection network corresponding to the gesture category to obtain gesture key points corresponding to the gesture category and positions of the gesture key points in the gesture image.
  2. The method according to claim 1, wherein acquiring the gesture image and the gesture category of the gesture image comprises:
    acquiring the gesture image, and determining the gesture category of the gesture image with a trained target classification algorithm.
  3. The method according to claim 1, wherein acquiring the gesture image and the gesture category of the gesture image comprises:
    acquiring the gesture image, and determining, with a trained target detection algorithm, the gesture category in the gesture image and a first region containing the gesture corresponding to the gesture category;
    and inputting the gesture image into the key point detection network corresponding to the gesture category to obtain the gesture key points corresponding to the gesture category and the positions of the gesture key points in the gesture image comprises:
    extracting the first region from the gesture image; and inputting the first region into the key point detection network corresponding to the gesture category to obtain the gesture key points corresponding to the gesture category and positions, in the first region, of the gesture key points corresponding to the gesture category.
  4. The method according to claim 3, wherein training of the target detection algorithm comprises:
    acquiring gesture training images, the gesture training images comprising pre-labeled gesture categories and pre-labeled gesture positions;
    inputting the gesture training images into a preset target detection algorithm to obtain gesture prediction images corresponding to the gesture training images, the gesture prediction images comprising predicted gesture categories and predicted gesture positions; and
    adjusting parameters of the preset target detection algorithm based on a first loss between the pre-labeled gesture categories and the predicted gesture categories and a second loss between the pre-labeled gesture positions and the predicted gesture positions.
  5. The method according to claim 1, wherein training of each key point detection network comprises:
    acquiring a preset key point detection network and gesture training images labeled with a same gesture category, the gesture training images being labeled with gesture key points corresponding to the gesture category;
    inputting the gesture training images labeled with the same gesture category into the preset key point detection network to obtain predicted gesture key points corresponding to the gesture training images; and
    adjusting parameters of the preset key point detection network based on a loss between the predicted gesture key points and the gesture key points labeled in the gesture training images.
  6. A gesture key point detection apparatus, the apparatus comprising:
    an acquisition module configured to acquire a gesture image and a gesture category of the gesture image;
    a key point detection network determination module configured to determine, among a plurality of trained key point detection networks, a key point detection network corresponding to the gesture category, wherein each of the plurality of key point detection networks corresponds to one gesture category; and
    a detection module configured to input the gesture image into the key point detection network corresponding to the gesture category to obtain gesture key points corresponding to the gesture category and positions of the gesture key points in the gesture image.
  7. An electronic device, comprising a processor and a memory for storing processor-executable instructions;
    wherein the processor is configured to:
    acquire a gesture image and a gesture category of the gesture image;
    determine, among a plurality of trained key point detection networks, a key point detection network corresponding to the gesture category, wherein each of the plurality of key point detection networks corresponds to one gesture category; and
    input the gesture image into the key point detection network corresponding to the gesture category to obtain gesture key points corresponding to the gesture category and positions of the gesture key points in the gesture image.
  8. The electronic device according to claim 7, wherein the processor is further configured to:
    acquire a gesture image, and determine the gesture category of the gesture image with a trained target classification algorithm.
  9. The electronic device according to claim 7, wherein the processor is specifically configured to:
    acquire a gesture image, and determine, with a trained target detection algorithm, the gesture category in the gesture image and a first region containing the gesture corresponding to the gesture category;
    and the processor is further specifically configured to:
    extract the first region from the gesture image; and input the first region into the key point detection network corresponding to the gesture category to obtain the gesture key points corresponding to the gesture category and positions, in the first region, of the gesture key points corresponding to the gesture category.
  10. The electronic device according to claim 9, wherein the processor is further configured to:
    acquire gesture training images, the gesture training images comprising pre-labeled gesture categories and pre-labeled gesture positions;
    input the gesture training images into a preset target detection algorithm to obtain gesture prediction images corresponding to the gesture training images, the gesture prediction images comprising predicted gesture categories and predicted gesture positions; and
    adjust parameters of the preset target detection algorithm based on a first loss between the pre-labeled gesture categories and the predicted gesture categories and a second loss between the pre-labeled gesture positions and the predicted gesture positions.
  11. The electronic device according to claim 7, wherein the processor is further configured to:
    acquire a preset key point detection network and gesture training images labeled with a same gesture category, the gesture training images being labeled with gesture key points corresponding to the gesture category;
    input the gesture training images labeled with the same gesture category into the preset key point detection network to obtain predicted gesture key points corresponding to the gesture training images; and
    adjust parameters of the preset key point detection network based on a loss between the predicted gesture key points and the gesture key points labeled in the gesture training images.
  12. A non-transitory computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor of an electronic device, the processor is enabled to perform the gesture key point detection method according to any one of claims 1 to 5.
  13. A computer program product, wherein when instructions in the computer program product are executed by a processor of an electronic device, the processor is enabled to perform the gesture key point detection method according to any one of claims 1 to 5.
PCT/CN2019/103119 2018-10-30 2019-08-28 Gesture key point detection method and apparatus, electronic device and storage medium WO2020088069A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/119,975 US11514706B2 (en) 2018-10-30 2020-12-11 Method and device for detecting hand gesture key points

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811280155.X 2018-10-30
CN201811280155.XA CN109446994B (zh) 2018-10-30 2018-10-30 Gesture key point detection method and apparatus, electronic device and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/119,975 Continuation US11514706B2 (en) 2018-10-30 2020-12-11 Method and device for detecting hand gesture key points

Publications (1)

Publication Number Publication Date
WO2020088069A1 true WO2020088069A1 (zh) 2020-05-07

Family

ID=65550275

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103119 WO2020088069A1 (zh) 2018-10-30 2019-08-28 Gesture key point detection method and apparatus, electronic device and storage medium

Country Status (3)

Country Link
US (1) US11514706B2 (zh)
CN (1) CN109446994B (zh)
WO (1) WO2020088069A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011403A (zh) * 2021-04-30 2021-06-22 恒睿(重庆)人工智能技术研究院有限公司 Gesture recognition method, system, medium, and device

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109446994B (zh) 2018-10-30 2020-10-30 北京达佳互联信息技术有限公司 Gesture key point detection method and apparatus, electronic device, and storage medium
CN110084180A (zh) * 2019-04-24 2019-08-02 北京达佳互联信息技术有限公司 Key point detection method and apparatus, electronic device, and readable storage medium
CN110287891B (zh) * 2019-06-26 2021-11-09 北京字节跳动网络技术有限公司 Gesture control method and apparatus based on human body key points, and electronic device
CN111126339A (zh) * 2019-12-31 2020-05-08 北京奇艺世纪科技有限公司 Gesture recognition method and apparatus, computer device, and storage medium
CN111160288A (zh) * 2019-12-31 2020-05-15 北京奇艺世纪科技有限公司 Gesture key point detection method and apparatus, computer device, and storage medium
CN111414936B (zh) * 2020-02-24 2023-08-18 北京迈格威科技有限公司 Classification network determination method, image detection method, apparatus, device, and medium
CN111625157B (zh) * 2020-05-20 2021-09-17 北京百度网讯科技有限公司 Fingertip key point detection method, apparatus, device, and readable storage medium
CN113706606B (zh) * 2021-08-12 2024-04-30 新线科技有限公司 Method and apparatus for determining position coordinates of mid-air gestures
CN114185429B (zh) * 2021-11-11 2024-03-26 杭州易现先进科技有限公司 Method for gesture key point localization or pose estimation, electronic apparatus, and storage medium
CN115525158A (zh) * 2022-10-14 2022-12-27 支付宝(杭州)信息技术有限公司 Interaction processing method and apparatus
CN115661142B (zh) * 2022-12-14 2023-03-28 广东工业大学 Tongue diagnosis image processing method, device, and medium based on key point detection
CN117115595B (zh) * 2023-10-23 2024-02-02 腾讯科技(深圳)有限公司 Training method and apparatus for a pose estimation model, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170068849A1 (en) * 2015-09-03 2017-03-09 Korea Institute Of Science And Technology Apparatus and method of hand gesture recognition based on depth image
CN107679512A (zh) * 2017-10-20 2018-02-09 济南大学 Dynamic gesture recognition method based on gesture key points
CN107967061A (zh) * 2017-12-21 2018-04-27 北京华捷艾米科技有限公司 Human-computer interaction method and apparatus
CN108229318A (zh) * 2017-11-28 2018-06-29 北京市商汤科技开发有限公司 Gesture recognition and gesture recognition network training methods and apparatuses, device, and medium
CN108227912A (zh) * 2017-11-30 2018-06-29 北京市商汤科技开发有限公司 Device control method and apparatus, electronic device, and computer storage medium
CN109446994A (zh) * 2018-10-30 2019-03-08 北京达佳互联信息技术有限公司 Gesture key point detection method and apparatus, electronic device, and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103415825B (zh) * 2010-12-29 2016-06-01 汤姆逊许可公司 System and method for gesture recognition
CN103513759A (zh) * 2012-06-21 2014-01-15 富士通株式会社 Gesture trajectory recognition method and apparatus
KR101526426B1 (ko) * 2013-12-31 2015-06-05 현대자동차 주식회사 Gesture recognition apparatus and method
US9552069B2 (en) * 2014-07-11 2017-01-24 Microsoft Technology Licensing, Llc 3D gesture recognition
CN106886751A (zh) * 2017-01-09 2017-06-23 深圳数字电视国家工程实验室股份有限公司 Gesture recognition method and system
CN108229277B (zh) * 2017-03-31 2020-05-01 北京市商汤科技开发有限公司 Gesture recognition, gesture control, and multi-layer neural network training methods, apparatuses, and electronic device
CN107168527B (zh) * 2017-04-25 2019-10-18 华南理工大学 First-person-view gesture recognition and interaction method based on region convolutional neural networks
CN108520251A (zh) * 2018-04-20 2018-09-11 北京市商汤科技开发有限公司 Key point detection method and apparatus, electronic device, and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170068849A1 (en) * 2015-09-03 2017-03-09 Korea Institute Of Science And Technology Apparatus and method of hand gesture recognition based on depth image
CN107679512A (zh) * 2017-10-20 2018-02-09 济南大学 Dynamic gesture recognition method based on gesture key points
CN108229318A (zh) * 2017-11-28 2018-06-29 北京市商汤科技开发有限公司 Gesture recognition and gesture recognition network training methods and apparatuses, device, and medium
CN108227912A (zh) * 2017-11-30 2018-06-29 北京市商汤科技开发有限公司 Device control method and apparatus, electronic device, and computer storage medium
CN107967061A (zh) * 2017-12-21 2018-04-27 北京华捷艾米科技有限公司 Human-computer interaction method and apparatus
CN109446994A (zh) * 2018-10-30 2019-03-08 北京达佳互联信息技术有限公司 Gesture key point detection method and apparatus, electronic device, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011403A (zh) * 2021-04-30 2021-06-22 恒睿(重庆)人工智能技术研究院有限公司 Gesture recognition method, system, medium, and device
CN113011403B (zh) * 2021-04-30 2023-11-24 恒睿(重庆)人工智能技术研究院有限公司 Gesture recognition method, system, medium, and device

Also Published As

Publication number Publication date
CN109446994B (zh) 2020-10-30
CN109446994A (zh) 2019-03-08
US11514706B2 (en) 2022-11-29
US20210097270A1 (en) 2021-04-01

Similar Documents

Publication Publication Date Title
WO2020088069A1 (zh) Gesture key point detection method and apparatus, electronic device, and storage medium
TWI766286B (zh) Image processing method and image processing apparatus, electronic device, and computer-readable storage medium
US9953506B2 (en) Alarming method and device
CN106557768B (zh) Method and apparatus for recognizing text in a picture
US10452890B2 (en) Fingerprint template input method, device and medium
CN110827253A (zh) Training method and apparatus for a target detection model, and electronic device
US10115019B2 (en) Video categorization method and apparatus, and storage medium
WO2017124773A1 (zh) Gesture recognition method and apparatus
JP6335289B2 (ja) Method and apparatus for generating an image filter
CN106228556B (zh) Image quality analysis method and apparatus
CN106650575A (zh) Face detection method and apparatus
CN111160448B (zh) Training method and apparatus for an image classification model
CN109360197B (zh) Image processing method and apparatus, electronic device, and storage medium
CN109951476B (zh) Time-series-based attack prediction method, apparatus, and storage medium
US10248855B2 (en) Method and apparatus for identifying gesture
CN110969120B (zh) Image processing method and apparatus, electronic device, and readable storage medium
US20180238748A1 (en) Pressure detection method and apparatus, and storage medium
WO2021103994A1 (zh) Model training method and apparatus for information recommendation, electronic device, and medium
WO2020108024A1 (zh) Information interaction method and apparatus, electronic device, and storage medium
KR20160150635A (ko) Method and apparatus for recommending a cloud card
CN104573642A (zh) Face recognition method and apparatus
CN105224950A (zh) Filter category recognition method and apparatus
US10133911B2 (en) Method and device for verifying fingerprint
EP3211564A1 (en) Method and device for verifying a fingerprint
CN113870195A (zh) Training of a target map detection model, and map detection method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19880878

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19880878

Country of ref document: EP

Kind code of ref document: A1