WO2022105692A1 - A gesture recognition method and device - Google Patents

A gesture recognition method and device

Info

Publication number
WO2022105692A1
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
dynamic
image frame
position information
category
Prior art date
Application number
PCT/CN2021/130458
Other languages
English (en)
French (fr)
Inventor
许哲豪
Original Assignee
展讯通信(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 展讯通信(上海)有限公司
Publication of WO2022105692A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Definitions

  • the present application relates to the field of computer technology, and in particular, to a gesture recognition method and device.
  • current technology can only recognize a single type of static gesture, which gives it poor extensibility.
  • when new gestures need to be supported, the gesture model has to be retrained, which increases the cost.
  • when the background content is complex, or multiple gestures that need to be recognized are detected, the final recognition accuracy is often poor.
  • the present application discloses a gesture recognition method and device, which can improve the accuracy and extensibility of in-air gesture recognition and reduce costs.
  • embodiments of the present application provide a gesture recognition method and device, the method comprising:
  • if the first image frame includes a first gesture that satisfies the condition for starting to detect dynamic gestures, the gesture position information of the first gesture is recorded;
  • if the second image frame does not include a second gesture that satisfies the condition for ending detection of the dynamic gesture, the gesture position information of the gesture in the second image frame is recorded;
  • if the second image frame includes a second gesture that satisfies the condition for ending the detection of the dynamic gesture, the gesture position information of each gesture that has been recorded is acquired;
  • the gesture category of the first dynamic gesture is determined according to the acquired position information of each gesture.
  • the movement track of the first dynamic gesture is determined according to the position information of each gesture; the gesture category of the first dynamic gesture is determined according to the movement track of the first dynamic gesture.
  • the similarity between the movement trajectory of the first dynamic gesture and each trajectory feature in the dynamic gesture list is determined, where the dynamic gesture list includes a plurality of trajectory features and each trajectory feature corresponds to a gesture category; if there is a similarity value higher than a preset threshold, the trajectory feature with the highest similarity to the movement trajectory of the first dynamic gesture is determined as the first trajectory feature; the gesture category of the first dynamic gesture is determined as the gesture category corresponding to the first trajectory feature.
  • the movement track of the second dynamic gesture is acquired; the track feature of the second dynamic gesture is determined according to the movement track of the second dynamic gesture; the track feature of the second dynamic gesture is added to the dynamic gesture list, and the track feature of the second dynamic gesture is used to indicate the gesture category of the second dynamic gesture.
  • instruction information corresponding to the first dynamic gesture is generated according to the gesture category of the first dynamic gesture, and the instruction information is used to instruct the terminal device to execute the content indicated by the instruction information.
  • if the first image frame includes a first gesture that satisfies the condition for starting dynamic gesture detection, then after the gesture position information of the first gesture is recorded, if no gesture is detected from the second image frame within a preset time period, prompt information is output.
  • the frequency of acquiring image frames from the image sensor is reduced.
  • an embodiment of the present application provides a gesture recognition device, including:
  • an acquisition unit, configured to acquire the first image frame collected by the image sensor;
  • a processing unit configured to record the gesture position information of the first gesture if the first image frame includes a first gesture that satisfies the condition for starting to detect the dynamic gesture;
  • the acquisition unit is further configured to acquire the second image frame collected by the image sensor, and the collection time of the second image frame is after the first image frame;
  • the processing unit is further configured to record the gesture position information of the second gesture if the second image frame does not include the second gesture satisfying the condition for ending the detection of the dynamic gesture;
  • the processing unit is further configured to acquire the gesture position information of each gesture that has been recorded if the second image frame includes a second gesture that satisfies the condition for ending the detection of the dynamic gesture;
  • the processing unit is further configured to determine the gesture category of the first dynamic gesture according to the acquired position information of each gesture.
  • an embodiment of the present application provides a gesture recognition device, comprising a processor, a memory, and a user interface, wherein the processor, the memory, and the user interface are connected to each other; the memory is used to store a computer program including program instructions, and the processor is configured to invoke the program instructions to execute the gesture recognition method described in the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores one or more instructions, and the one or more instructions are suitable for being loaded and executed by a processor to perform the gesture recognition method described in the first aspect.
  • the terminal device may acquire the first image frame collected by the image sensor; if the first image frame includes a first gesture that satisfies the condition for starting dynamic gesture detection, the gesture position information of the first gesture is recorded; the second image frame collected by the image sensor is then acquired, the collection time of the second image frame being after the first image frame; if the second image frame does not include a second gesture that satisfies the condition for ending dynamic gesture detection, the gesture position information of the gesture in the second image frame is recorded; if the second image frame includes a second gesture that satisfies the condition for ending detection of the dynamic gesture, the gesture position information of each gesture that has been recorded is obtained; and the gesture category of the first dynamic gesture is determined according to the obtained gesture position information.
  • in this way, the accuracy and extensibility of in-air gesture recognition can be improved, and the cost can be reduced.
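As an illustrative sketch of the flow summarized above, the start/end-gesture loop might look as follows. The gesture names, the `detect` callback, and all identifiers are assumptions for illustration, not taken from the patent:

```python
# Sketch of the described method: scan frames, start recording positions on
# the "start" gesture, stop and hand positions off for classification on the
# "end" gesture. Gesture names are assumed, not specified by the patent.

START_GESTURE = "fist"    # assumed gesture that starts dynamic detection
END_GESTURE = "palm"      # assumed gesture that ends dynamic detection

def recognize_dynamic_gesture(frames, detect):
    """frames: iterable of image frames;
    detect(frame) -> (gesture_name, (x, y)) or None if no gesture found."""
    positions = []
    recording = False
    for frame in frames:
        result = detect(frame)
        if result is None:
            continue                          # no gesture in this frame
        gesture, position = result
        if not recording:
            if gesture == START_GESTURE:      # condition for starting detection
                recording = True
                positions.append(position)    # record the first gesture's position
        elif gesture == END_GESTURE:          # condition for ending detection
            return positions                  # recorded positions -> classify trajectory
        else:
            positions.append(position)        # record each intermediate gesture
    return None                               # end gesture never observed
```

In the patent's terms, the returned list is the recorded gesture position information, from which the movement trajectory and the gesture category are then determined.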
  • FIG. 1 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a gesture recognition method according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of a first image frame including a first gesture according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of a dynamic gesture provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a movement trajectory of a dynamic gesture provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a unit of a gesture recognition device according to an embodiment of the present application.
  • FIG. 7 is a simplified schematic diagram of a physical structure of a gesture recognition device provided by an embodiment of the present application.
  • Gesture recognition is a topic in computer science and language technology that aims to recognize human gestures through mathematical algorithms. Gestures can originate from any body movement or state, but usually originate from the face or hands. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Users can use simple gestures to control or interact with devices without touching them. The recognition of posture, gait and human behavior is also a subject of gesture recognition technology. Gesture recognition can be seen as a way for computers to understand human body language, thereby building a richer bridge between machines and humans than plain text user interfaces or even graphical user interfaces (GUIs).
  • Artificial intelligence is a new technical science that studies and develops theories, methods, technologies and application systems for simulating, extending and expanding human intelligence. It is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can respond in a way similar to human intelligence. Research in this field includes robotics, language recognition, image recognition, natural language processing, expert systems, etc.
  • Machine learning is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in how computers simulate or realize human learning behaviors to acquire new knowledge or skills, and to reorganize existing knowledge structures to continuously improve their performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent.
  • Deep learning learns the inherent laws and representation levels of sample data. The information obtained during the learning process is of great help in interpreting data such as text, images and sounds. Its ultimate goal is to enable machines to analyze and learn like humans, and to recognize data such as words, images, and sounds. Deep learning is a complex machine learning algorithm that has achieved results in speech and image recognition far exceeding earlier related technologies.
  • the terminal device 100 may include: an RF (radio frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components.
  • the sensor 105 may include at least an image sensor, which may be included in a camera, and may be used to capture images.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
  • the user input unit 107 may be used to receive input numerical or character information, and generate key signal input related to user settings and function control of the mobile terminal.
  • the user input unit 107 may include a touch panel 1071 and other input devices 1072.
  • the memory 109 may be used to store software programs as well as various data.
  • the processor 110 is the control center of the mobile terminal; it connects the various parts of the entire mobile terminal through various interfaces and lines, runs or executes the software programs and/or modules stored in the memory 109, and calls data stored in the memory 109 to perform the various functions of the mobile terminal and process data, thereby monitoring the mobile terminal as a whole.
  • the structure shown does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than shown, combine some components, or use a different component layout.
  • Terminal devices can be implemented in various forms.
  • the terminal equipment described in this application may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets, and pedometers, as well as stationary terminals such as digital TVs and desktop computers.
  • the embodiments of the present application provide a gesture recognition method and device.
  • the following further describes the gesture recognition method and device provided by the embodiments of the present application in detail.
  • FIG. 2 provides a schematic flowchart of a gesture recognition method according to an embodiment of the present application.
  • the process can include the following steps:
  • before acquiring the first image frame, the terminal device will first start the image sensor to perform image acquisition.
  • the image sensor may be a camera on the terminal device, and the number of image frames collected per second can be determined according to specific conditions; for example, it may collect 30 frames of images per second.
  • after the image sensor is turned on, the terminal device also starts a gesture detection program, so that the terminal device enters a state of detecting gestures.
  • when the image sensor starts to collect images, it records multiple frames of images, and each frame of image is passed to the processor for detection.
  • the first image frame may be any frame of images in multiple frames of images.
  • if the first image frame includes a first gesture that satisfies the condition for starting dynamic gesture detection, record the gesture position information of the first gesture.
  • the first gesture may be set by the terminal device, or may be set by the user, for example, the first gesture may be a fist, a palm, or the like.
  • the terminal device can detect the first image frame through an intelligent algorithm: first, determine whether there is a gesture in the first image frame; if there is, determine the gesture category of the gesture; and if it is determined to be the first gesture, record the gesture position information of the first gesture.
  • the first gesture can trigger the terminal device to perform dynamic gesture detection, and identify image frames after the first image frame. If no gesture is detected in the first frame of image, the terminal device will detect the next frame of image.
  • recording the gesture position information of the first gesture may be done by determining one or more feature points on the first gesture and using the position information of the one or more feature points as the gesture position information of the first gesture.
  • the terminal device can determine the position of a feature point in the picture of the first image frame using two coordinate axes, which respectively represent horizontal and vertical pixel coordinates.
  • for subsequent images collected by the image sensor, the position information of the feature point is likewise used as the gesture position information of the gesture in each image frame.
  • the embodiment of the present application does not limit the specific method of recording the gesture position information of a gesture; the feature-point method is only an example, and other implementations may also be adopted.
  • the first image frame includes a first gesture, and the first gesture is a fist.
  • the terminal device may determine a feature point in the first gesture; the feature point may be located at any position on the first gesture, and generally a position that is easily captured by the image sensor is selected. Assuming that the resolution of the first image is 1080*1920 and the position of the feature point is (400, 1100), then (400, 1100) can be used as the gesture position information of the first gesture.
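Continuing the example above (a 1080*1920 frame with a feature point at (400, 1100)), one hedged way to pick the feature point is the center of the detected hand's bounding box. The bounding-box representation is an assumption; the patent only requires one or more feature points:

```python
# Illustrative sketch: use the center of a detected hand's bounding box as
# the single feature point whose pixel coordinates serve as the gesture
# position information. The bbox layout is an assumption for illustration.

def gesture_position(bbox):
    """bbox: (left, top, right, bottom) of the detected hand, in pixels.
    Returns the box center as the gesture position information."""
    left, top, right, bottom = bbox
    return ((left + right) // 2, (top + bottom) // 2)

# A fist detected around pixels (300..500, 1000..1200) yields the feature
# point (400, 1100), matching the example in the text.
```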
  • a target detection algorithm may be used to analyze each frame of image collected by the image sensor, for example, a recent AI object detection algorithm such as YOLOv4. Such algorithms can achieve real-time detection speed while maintaining detection accuracy.
  • the target detection algorithm may also be other better algorithms, which are not limited in this embodiment of the present application.
  • after determining that the gesture in the first image frame is the first gesture and recording the gesture position information of the first gesture, the terminal device sequentially acquires the multiple image frames captured by the image sensor.
  • the acquisition time of the second image frame is located after the first image frame, and may be any one of the multiple image frames after the first image frame.
  • if the second image frame does not include a second gesture that satisfies the condition for ending detection of the dynamic gesture, record the gesture position information of the gesture in the second image frame.
  • the second gesture is a sign to end the detection of the dynamic gesture, and the terminal device will stop detecting the dynamic gesture when the second gesture is detected. If the second image frame does not include the second gesture, each frame of image is continuously detected, and the gesture position information of the gesture in each frame of image is recorded.
  • the gesture in the second image frame may be the first gesture, or may be any gesture other than the second gesture, for example, raising one finger or raising two fingers, which is not limited here.
  • the terminal device will record the gesture position information corresponding to the gesture in each frame of image, and save it to the memory or cache in the terminal device.
  • the second image frame may also not include a gesture; for example, the user's hand leaves the capture range of the image sensor, resulting in the absence of a gesture in some second image frames.
  • in this case, the terminal device can use a specific algorithm to make corrections: it can analyze the several frames of images before the gesture disappears and the several frames where the gesture reappears, and calculate the likely gesture position information in the missing frames. In this way, the error tolerance of the terminal device when detecting dynamic gestures can be improved.
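The correction step described above is not specified in detail in the text; a minimal sketch, assuming simple linear interpolation between the last position before the gap and the first position after it, could be:

```python
# Hedged sketch of the gap-correction step: estimate gesture positions for
# frames where the hand left the sensor's view. Linear interpolation is an
# assumption; the patent only says a "specific algorithm" analyzes the
# frames before the gap and after the gesture reappears.

def fill_gap(before, after, n_missing):
    """before/after: (x, y) positions flanking the gap;
    n_missing: number of frames in which the gesture was lost."""
    (x0, y0), (x1, y1) = before, after
    filled = []
    for i in range(1, n_missing + 1):
        t = i / (n_missing + 1)               # fractional progress through the gap
        filled.append((round(x0 + t * (x1 - x0)),
                       round(y0 + t * (y1 - y0))))
    return filled
```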
  • if the terminal device cannot determine the likely gesture position information for the missing frames through analysis, it can output prompt information to notify the user that gesture recognition has failed.
  • a prompt message may also be output.
  • the second image frame includes a second gesture that satisfies the condition for ending detection of the dynamic gesture, acquire gesture position information of each gesture that has been recorded.
  • the terminal device When the terminal device detects that the second image frame includes the second gesture, it immediately stops acquiring the next frame of image from the image sensor, and acquires the gesture position information of all the gestures that have been recorded.
  • the terminal device determines a feature point as the marker for recording the gesture position information of each gesture, and then detects the second image frames, which are labeled here as second image frame 1, second image frame 2, and so on.
  • after detecting the gesture from second image frame 1, the terminal device records that gesture's position information; similarly, after detecting the gesture from second image frame 2, it records that gesture's position information.
  • when the terminal device detects second image frame 3 and determines that it includes the second gesture, it stops detecting the next image frame and acquires the gesture position information of all recorded gestures. It should be noted that only 4 frames of images are presented in FIG. 4; multiple image frames may also be included between the first image frame and second image frame 1, between second image frame 1 and second image frame 2, and so on. FIG. 4 is a simplified diagram and is not limiting.
  • the four frames of images in FIG. 4 are not displayed on the interface at the same time, but are displayed in chronological order.
  • the terminal device determines the movement trajectory of the first dynamic gesture according to the position information of each gesture.
  • the first dynamic gesture includes the first gesture detected by the terminal device and the gestures in multiple second image frames. Further, the terminal device can determine the similarity between the movement trajectory of the first dynamic gesture and each trajectory feature in the dynamic gesture list.
  • the dynamic gesture list includes a plurality of trajectory features, and each trajectory feature corresponds to a gesture category. For example, as shown in Table 1, trajectory feature 1 corresponds to gesture category 1, trajectory feature 2 corresponds to gesture category 2, and so on.
  • the trajectory feature is a simplified form of the movement trajectory, because it is difficult for the user to keep the hand moving in a straight line when making dynamic gestures.
  • the trajectory feature can be right-shift, left-shift, right-shift and then down-shift, up-shift and then left-shift, and so on.
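A hedged sketch of how a noisy movement trajectory might be reduced to coarse trajectory features such as those listed above. The dominant-axis rule and all names are illustrative assumptions, not the patent's method:

```python
# Illustrative simplification of a movement trajectory into direction
# features ("right", "up then left", ...). The dominant-axis rule below is
# an assumption used only to make the idea concrete.

def dominant_direction(p0, p1):
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    if abs(dx) >= abs(dy):
        return "right" if dx >= 0 else "left"
    return "down" if dy >= 0 else "up"        # image y grows downward

def trajectory_feature(points):
    """Collapse consecutive moves in the same dominant direction,
    yielding a compact feature like ['up', 'right']."""
    feature = []
    for p0, p1 in zip(points, points[1:]):
        d = dominant_direction(p0, p1)
        if not feature or feature[-1] != d:
            feature.append(d)
    return feature
```

For example, a hand that drifts upward and then rightward, however unevenly, reduces to the feature "up-shift then right-shift".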
  • when the terminal device determines the similarity between the movement trajectory of the first dynamic gesture and each trajectory feature in the dynamic gesture list, if it detects a similarity value higher than a preset threshold, it determines the trajectory feature with the highest similarity to the movement trajectory of the first dynamic gesture as the first trajectory feature; the first trajectory feature matches the first dynamic gesture.
  • the terminal device can determine that the gesture category of the first dynamic gesture is the gesture category corresponding to the first trajectory feature.
  • the terminal device may determine the gesture category of the dynamic gesture according to the offset and moving direction between successive pieces of gesture position information.
  • the terminal device may determine the movement trajectory of the first dynamic gesture after recording the gesture position information of the gesture in each image frame as shown in FIG. 4.
  • the trajectory roughly moves up and then right.
  • suppose the terminal device determines, according to the dynamic gesture list, that the similarity between the first dynamic gesture and trajectory feature 1 is 5%, with trajectory feature 2 is 20%, with trajectory feature 3 is 80%, and with trajectory feature 4 is 95%; with a preset threshold of 80%, the terminal device determines that trajectory feature 4 is the first trajectory feature.
  • finally, the terminal device determines that the gesture category of the first dynamic gesture is the gesture category corresponding to trajectory feature 4.
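The threshold-and-best-match step in the example above can be sketched as follows; the similarity computation itself is left abstract, and the category names are placeholders:

```python
# Sketch of matching a dynamic gesture against the dynamic gesture list:
# keep the best-scoring trajectory feature only if its similarity clears
# the preset threshold (80% in the example above).

def match_gesture(similarities, threshold=0.80):
    """similarities: {gesture_category: similarity in [0, 1]} computed
    against each trajectory feature in the dynamic gesture list.
    Returns the matched gesture category, or None if nothing clears
    the threshold."""
    best_category = max(similarities, key=similarities.get)
    if similarities[best_category] >= threshold:
        return best_category
    return None
```

With the example values (5%, 20%, 80%, 95%) the 95% entry wins; if every similarity fell below the threshold, no gesture would be recognized.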
  • the terminal device may also acquire the movement trajectory of the second dynamic gesture, and the acquisition method is the same as the method for acquiring the movement trajectory of the first dynamic gesture, which will not be repeated here.
  • the terminal device determines a trajectory feature of the second dynamic gesture according to the movement trajectory of the second dynamic gesture, and adds the trajectory feature to the dynamic gesture list, where the trajectory feature of the second dynamic gesture may indicate a gesture category of the second dynamic gesture.
  • the user can make changes to the dynamic gesture list, such as changing the content of a track feature, or adding a track feature and a corresponding gesture category.
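Because the dynamic gesture list is plain data, the user customization described above amounts to editing the list; the dict layout and gesture names below are illustrative assumptions:

```python
# Sketch of the extensibility described above: trajectory features map to
# gesture categories, so adding or changing a gesture is a data edit, not
# a model retrain. All entries here are invented for illustration.

dynamic_gesture_list = {
    ("right",): "swipe-right",
    ("up", "left"): "back",
}

# Add a new user-defined gesture: moving right then down means "minimize".
dynamic_gesture_list[("right", "down")] = "minimize"

# Change the category bound to an existing trajectory feature.
dynamic_gesture_list[("right",)] = "next-page"
```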
  • the terminal device may generate corresponding indication information according to the gesture category of the first dynamic gesture, and perform corresponding steps according to the content indicated by the indication information. Because each dynamic gesture can correspond to an operation, once the terminal device determines the gesture category of a dynamic gesture, it can perform the operation corresponding to that gesture category.
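A minimal sketch of turning a recognized gesture category into indication information and executing it, as described above; the category-to-operation mapping is invented for illustration:

```python
# Illustrative dispatch from gesture category to the operation the terminal
# device should perform. The mapping and return strings are assumptions.

ACTIONS = {
    "swipe-right": "next_page",
    "swipe-left": "previous_page",
}

def execute_gesture(category, actions=ACTIONS):
    instruction = actions.get(category)   # the "indication information"
    if instruction is None:
        return "ignored"                  # unrecognized category: do nothing
    return f"executed:{instruction}"      # terminal device performs the step
```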
  • in addition, after the terminal device determines the gesture category of the first dynamic gesture, if no gesture is detected within a preset time period, it determines that there is currently no gesture in the images captured by the image sensor, and then reduces the frequency of acquiring image frames from the image sensor, thereby achieving the purpose of saving power.
  • in the embodiment of the present application, after acquiring the first image frame, the terminal device detects whether it includes a gesture; if so, it determines the gesture category, and if the gesture is the first gesture, it records the gesture position information of the first gesture, where the first gesture satisfies the condition for starting dynamic gesture detection. Further, the terminal device continues to detect the second image frame, and records the gesture position information of its gesture if the second image frame does not include the second gesture that satisfies the condition for ending the detection of the dynamic gesture. When the second image frame includes the second gesture, the recorded gesture position information of each gesture is obtained, so that the movement trajectory of the first dynamic gesture can be determined according to the gesture position information of each gesture, and the gesture category of the first dynamic gesture can be determined according to the movement trajectory.
  • through this method, firstly, the accuracy of dynamic in-air gesture recognition can be improved; secondly, the user can add or change gestures for customization, which improves the extensibility of gesture recognition; thirdly, the method can be implemented with ordinary image frames, which reduces the cost of the solution.
  • FIG. 6 is a schematic diagram of a unit of a gesture recognition device provided by an embodiment of the present application.
  • the apparatus of the terminal device shown in FIG. 6 may be used to perform some or all of the functions in the method embodiment described in FIG. 2 above.
  • the device may be a terminal device, or a device in the terminal device, or a device that can be used in combination with the terminal device.
  • the logical structure of the apparatus may include: an acquisition unit 610 and a processing unit 620 .
  • when the apparatus is applied to a terminal device:
  • the acquisition unit 610 is configured to acquire the first image frame collected by the image sensor;
  • a processing unit 620 configured to record the gesture position information of the first gesture if the first image frame includes a first gesture that satisfies the condition for starting dynamic gesture detection;
  • the obtaining unit 610 is further configured to obtain the second image frame collected by the image sensor, and the collection time of the second image frame is after the first image frame;
  • the above-mentioned processing unit 620 is further configured to record the gesture position information of the second gesture if the second image frame does not include the second gesture that satisfies the condition for ending the detection of the dynamic gesture;
  • the above-mentioned processing unit 620 is further configured to obtain the gesture position information of each gesture that has been recorded if the second image frame includes a second gesture that satisfies the condition for ending the detection of the dynamic gesture;
  • the above-mentioned processing unit 620 is further configured to determine the gesture category of the first dynamic gesture according to the acquired position information of each gesture.
  • the above-mentioned processing unit 620 is further configured to determine the movement trajectory of the first dynamic gesture according to the position information of each gesture; and determine the gesture category of the first dynamic gesture according to the movement trajectory of the first dynamic gesture.
  • the above-mentioned processing unit 620 is further configured to determine the similarity between the movement trajectory of the first dynamic gesture and each trajectory feature in the dynamic gesture list, where the dynamic gesture list includes a plurality of trajectory features, and each trajectory feature corresponds to a gesture category; if there is a similarity with a similarity value higher than a preset threshold, then determine the trajectory feature with the highest similarity value with the trajectory feature of the first dynamic gesture as the first trajectory feature; determine the gesture category of the first dynamic gesture is the gesture category corresponding to the first trajectory feature.
  • the obtaining unit 610 is further configured to obtain the movement track of the second dynamic gesture; the processing unit 620 is further configured to determine the track feature of the second dynamic gesture according to the movement track of the second dynamic gesture, and to add the track feature of the second dynamic gesture to the dynamic gesture list, where the track feature of the second dynamic gesture is used to indicate the gesture category of the second dynamic gesture.
  • the above-mentioned processing unit 620 is further configured to, after determining the gesture category of the first dynamic gesture according to the acquired position information of each gesture, generate the indication information corresponding to the first dynamic gesture according to the gesture category of the first dynamic gesture , the indication information is used to instruct the terminal device to execute the content indicated by the indication information.
  • the above-mentioned processing unit 620 is further configured to, after acquiring the recorded gesture position information of each gesture when the second image frame includes a second gesture that satisfies the condition for ending dynamic gesture detection, output prompt information if no gesture is detected from the second image frame within a preset time period after the gesture position information of the first gesture is recorded.
  • the above-mentioned processing unit 620 is further configured to, after determining the gesture category of the first dynamic gesture according to the acquired position information of each gesture, reduce the frequency of acquiring image frames from the image sensor if no gesture is detected within a preset time period.
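The power-saving throttle described in this bullet might look like the following sketch; the interval and timeout values and all names are assumptions for illustration, not from the patent:

```python
import time

# Illustrative sketch (assumed values): poll the image sensor at full rate
# while gestures are being seen, and back off once nothing has been detected
# for IDLE_TIMEOUT seconds.
NORMAL_INTERVAL = 1 / 30   # ~30 fps while gestures are active
IDLE_INTERVAL = 1 / 5      # ~5 fps when idle, to save power
IDLE_TIMEOUT = 3.0         # seconds without a detection before backing off

def next_poll_interval(last_gesture_time, now=None):
    """Return how long to wait before acquiring the next image frame."""
    now = time.monotonic() if now is None else now
    return IDLE_INTERVAL if now - last_gesture_time > IDLE_TIMEOUT else NORMAL_INTERVAL
```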
  • FIG. 7 is a simplified schematic diagram of the physical structure of a gesture recognition device provided by an embodiment of the present application.
  • the device includes a processor 710, a memory 720, a communication interface 730, and a user interface 740.
  • the processor 710, the memory 720, the communication interface 730, and the user interface 740 are connected by one or more communication buses.
  • the processor 710 is configured to support the data transmission apparatus to perform functions corresponding to the method in FIG. 2 .
  • the processor 710 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 720 is used to store program codes and the like.
  • the memory 720 in this embodiment of the present application may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or flash memory.
  • Volatile memory may be random access memory (RAM), which acts as an external cache.
  • RAM: random access memory
  • SRAM: static random access memory
  • DRAM: dynamic random access memory
  • SDRAM: synchronous dynamic random access memory
  • DDR SDRAM: double data rate synchronous dynamic random access memory
  • ESDRAM: enhanced synchronous dynamic random access memory
  • SLDRAM: synchlink dynamic random access memory
  • DR RAM: direct rambus random access memory
  • the communication interface 730 is used to send and receive data, information or messages, etc., and can also be described as a transceiver, a transceiver circuit, and the like.
  • the user interface 740 is the medium through which the user interacts and exchanges information with the terminal; its concrete embodiment may include a display screen (Display) for output, and a keyboard (Keyboard), a touch screen, etc. for input. It should be noted that the keyboard here may be a physical keyboard, a touch-screen virtual keyboard, or a keyboard combining the physical and the touch-screen virtual.
  • the processor 710 may call program codes stored in the memory 720 to perform the following operations:
  • the processor 710 invokes the program code stored in the memory 720 to acquire the first image frame collected by the image sensor;
  • the processor 710 calls the program code stored in the memory 720 and records the gesture position information of the first gesture if the first image frame includes the first gesture that satisfies the condition for starting the detection of the dynamic gesture;
  • the processor 710 invokes the program code stored in the memory 720 to acquire the second image frame collected by the image sensor, and the collection time of the second image frame is after the first image frame;
  • the processor 710 calls the program code stored in the memory 720 to record the gesture position information of the second gesture if the second image frame does not include a second gesture that satisfies the condition for ending dynamic gesture detection;
  • the processor 710 calls the program code stored in the memory 720 to acquire the recorded gesture position information of each gesture if the second image frame includes a second gesture that satisfies the condition for ending dynamic gesture detection;
  • the processor 710 invokes the program code stored in the memory 720 to determine the gesture category of the first dynamic gesture according to the acquired position information of each gesture.
  • the processor 710 invokes the program code stored in the memory 720 to determine the movement trajectory of the first dynamic gesture according to the position information of each gesture; and determines the gesture category of the first dynamic gesture according to the movement trajectory of the first dynamic gesture .
  • the processor 710 invokes the program code stored in the memory 720 to determine the similarity between the movement track of the first dynamic gesture and each track feature in the dynamic gesture list, where the dynamic gesture list includes multiple track features and each track feature corresponds to a gesture category; if there is a similarity value higher than a preset threshold, determine the track feature with the highest similarity to the track feature of the first dynamic gesture as the first track feature; and determine that the gesture category of the first dynamic gesture is the gesture category corresponding to the first track feature.
  • the processor 710 invokes the program code stored in the memory 720 to obtain the movement trajectory of the second dynamic gesture, determine the trajectory feature of the second dynamic gesture according to its movement trajectory, and add the trajectory feature of the second dynamic gesture to the dynamic gesture list, where the trajectory feature of the second dynamic gesture is used to indicate the gesture category of the second dynamic gesture.
  • the processor 710 invokes the program code stored in the memory 720 to, after determining the gesture category of the first dynamic gesture according to the acquired gesture position information, generate the indication information corresponding to the first dynamic gesture according to the gesture category of the first dynamic gesture, where the indication information is used to instruct the terminal device to execute the content indicated by the indication information.
  • the processor 710 invokes the program code stored in the memory 720 to, after acquiring the recorded gesture position information of each gesture when the second image frame includes a second gesture that satisfies the condition for ending dynamic gesture detection, output prompt information if no gesture is detected from the second image frame within a preset time period after the gesture position information of the first gesture is recorded.
  • the processor 710 calls the program code stored in the memory 720 to, after determining the gesture category of the first dynamic gesture according to the acquired gesture position information, reduce the frequency of acquiring image frames from the image sensor if no gesture is detected within a preset time period.
  • the units in the processing device in the embodiment of the present invention may be combined, divided, and deleted according to actual needs.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, or digital subscriber line) or wireless means (e.g., infrared, radio, or microwave).
  • a computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media.
  • Available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSDs)), among others.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses a gesture recognition method and device. The method includes: acquiring a first image frame collected by an image sensor; if the first image frame includes a first gesture that satisfies a condition for starting dynamic gesture detection, recording gesture position information of the first gesture; acquiring a second image frame collected by the image sensor, the second image frame being collected after the first image frame; if the second image frame does not include a second gesture that satisfies a condition for ending dynamic gesture detection, recording gesture position information of the second gesture; if the second image frame includes a second gesture that satisfies the condition for ending dynamic gesture detection, acquiring the recorded gesture position information of each gesture; and determining a gesture category of a first dynamic gesture according to the acquired gesture position information. This method improves the accuracy and extensibility of air-gesture recognition and reduces cost.

Description

Gesture Recognition Method and Device

Technical Field

The present application relates to the field of computer technology, and in particular to a gesture recognition method and device.

Background

Human-computer interaction has been continuously reinvented as technology develops, from early keyboards and joysticks to the touch screens of recent years and even more advanced voice control. Gestures, a means of communication people frequently use with each other, are naturally applied to interaction between humans and machines; using gestures is more natural and more flexible.

Air-gesture operation has emerged as an application scenario in current technology, and it requires the machine to recognize human gestures. Traditional gesture recognition methods are implemented with technologies such as attitude sensors, 3D structured light, proprietary sensors, and radar waves. Attitude sensors place strong restrictions on usage scenarios and are costly to use; 3D structured light requires devices with high computing power, which raises cost, and its recognition accuracy is not high; some products use proprietary sensors whose hardware is often expensive, hindering wide adoption; and radar-wave detection requires the hand to be close to the sensor, which rules out many application scenarios.

In addition, current technology can only recognize a single type of static gesture and scales poorly: adding a new gesture requires retraining the gesture model, which raises cost. Moreover, when the background is complex, or multiple gestures are detected at once, the final recognition accuracy is often poor.
Summary

The present application discloses a gesture recognition method and device that can improve the accuracy and extensibility of air-gesture recognition and reduce cost.

In a first aspect, an embodiment of the present application provides a gesture recognition method, including:

acquiring a first image frame collected by an image sensor;

if the first image frame includes a first gesture that satisfies a condition for starting dynamic gesture detection, recording gesture position information of the first gesture;

acquiring a second image frame collected by the image sensor, the second image frame being collected after the first image frame;

if the second image frame does not include a second gesture that satisfies a condition for ending dynamic gesture detection, recording gesture position information of the second gesture;

if the second image frame includes a second gesture that satisfies the condition for ending dynamic gesture detection, acquiring the recorded gesture position information of each gesture;

determining a gesture category of a first dynamic gesture according to the acquired gesture position information.
In an embodiment, a movement trajectory of the first dynamic gesture is determined according to the gesture position information, and the gesture category of the first dynamic gesture is determined according to the movement trajectory of the first dynamic gesture.

In an embodiment, the similarity between the movement trajectory of the first dynamic gesture and each trajectory feature in a dynamic gesture list is determined, where the dynamic gesture list includes multiple trajectory features and each trajectory feature corresponds to one gesture category; if there is a similarity value higher than a preset threshold, the trajectory feature with the highest similarity to the trajectory feature of the first dynamic gesture is determined as a first trajectory feature; and the gesture category of the first dynamic gesture is determined to be the gesture category corresponding to the first trajectory feature.

In an embodiment, a movement trajectory of a second dynamic gesture is acquired; a trajectory feature of the second dynamic gesture is determined according to its movement trajectory; and the trajectory feature of the second dynamic gesture is added to the dynamic gesture list, where it is used to indicate the gesture category of the second dynamic gesture.

In an embodiment, after the gesture category of the first dynamic gesture is determined according to the acquired gesture position information, indication information corresponding to the first dynamic gesture is generated according to its gesture category, and the indication information is used to instruct the terminal device to execute the content it indicates.

In an embodiment, after the gesture position information of the first gesture is recorded when the first image frame includes the first gesture satisfying the condition for starting dynamic gesture detection, if no gesture is detected from the second image frame within a preset duration after the gesture position information of the first gesture is recorded, prompt information is output.

In an embodiment, after the gesture category of the first dynamic gesture is determined according to the acquired gesture position information, if no gesture is detected within a preset duration, the frequency of acquiring image frames from the image sensor is reduced.

In a second aspect, an embodiment of the present application provides a gesture recognition device, including:

an acquisition unit, which acquires a first image frame collected by an image sensor;

a processing unit, configured to record gesture position information of a first gesture if the first image frame includes the first gesture satisfying a condition for starting dynamic gesture detection;

the acquisition unit being further configured to acquire a second image frame collected by the image sensor, the second image frame being collected after the first image frame;

the processing unit being further configured to record gesture position information of a second gesture if the second image frame does not include the second gesture satisfying a condition for ending dynamic gesture detection;

the processing unit being further configured to acquire the recorded gesture position information of each gesture if the second image frame includes a second gesture satisfying the condition for ending dynamic gesture detection;

the processing unit being further configured to determine a gesture category of a first dynamic gesture according to the acquired gesture position information.

In a third aspect, an embodiment of the present application provides a gesture recognition device, including a processor, a memory, and a user interface connected to one another, where the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the gesture recognition method described in the first aspect.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing one or more instructions, the one or more instructions being adapted to be loaded by a processor to perform the gesture recognition method described in the first aspect.

In the embodiments of the present application, a terminal device can acquire a first image frame collected by an image sensor; record gesture position information of a first gesture if the first image frame includes the first gesture satisfying a condition for starting dynamic gesture detection; acquire a second image frame collected by the image sensor after the first image frame; record gesture position information of a second gesture if the second image frame does not include the second gesture satisfying a condition for ending dynamic gesture detection; acquire the recorded gesture position information of each gesture if the second image frame includes such a second gesture; and determine a gesture category of a first dynamic gesture according to the acquired gesture position information. This method improves the accuracy and extensibility of air-gesture recognition and reduces cost.
Brief Description of the Drawings

To explain the technical solutions of the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic diagram of the hardware structure of a terminal device provided by an embodiment of the present application;

FIG. 2 is a schematic flowchart of a gesture recognition method provided by an embodiment of the present application;

FIG. 3 is a schematic diagram of a first image frame including a first gesture, provided by an embodiment of the present application;

FIG. 4 is a schematic diagram of a dynamic gesture provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of the movement trajectory of a dynamic gesture provided by an embodiment of the present application;

FIG. 6 is a schematic unit diagram of a gesture recognition device provided by an embodiment of the present application;

FIG. 7 is a simplified schematic diagram of the physical structure of a gesture recognition device provided by an embodiment of the present application.
Detailed Description

The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present invention.

To better understand the embodiments of the present application, the technical terms involved are introduced first:

Gesture recognition: a topic in computer science and language technology whose goal is to recognize human gestures through mathematical algorithms. Gestures can originate from any bodily motion or state, but commonly originate from the face or hands. Current focuses in the field include emotion recognition from facial expressions and hand-gesture recognition. Users can use simple gestures to control or interact with devices without touching them. Recognition of posture, gait, and human behavior is also a subject of gesture recognition technology. Gesture recognition can be seen as a way for computers to understand human body language, building a richer bridge between machines and humans than primitive text user interfaces or even graphical user interfaces (GUIs).

Artificial intelligence (AI): a new technical science that studies and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can respond in a manner similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems.

Machine learning: a multi-disciplinary field spanning probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other subjects. It studies how computers can simulate or implement human learning behavior to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent.

Deep learning (DL): learns the intrinsic regularities and representation levels of sample data; the information obtained in the learning process greatly helps the interpretation of data such as text, images, and sound. Its ultimate goal is to give machines an analytical learning ability like that of humans, enabling them to recognize data such as text, images, and sound. Deep learning is a complex machine learning algorithm whose results in speech and image recognition far exceed previous related technologies.
To better understand the embodiments of the present application, the system architecture to which they are applicable is described below.

Referring to FIG. 1, which is a schematic diagram of the hardware structure of a terminal device implementing the embodiments of the present application, the terminal device 100 may include components such as an RF (radio frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111.

The hardware most relevant to the embodiments of the present application is introduced here. In the embodiments of the present application, the sensor 105 may at least include an image sensor, contained in a camera, which can be used to collect images. The display unit 106 is used to display information input by the user or provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The memory 109 may be used to store software programs and various data. The processor 110 is the control center of the mobile terminal; it connects all parts of the mobile terminal through various interfaces and lines, and performs overall monitoring of the mobile terminal by running or executing the software programs and/or modules stored in the memory 109, calling the data stored in the memory 109, executing the various functions of the mobile terminal, and processing data.

Those skilled in the art will understand that the terminal device structure shown in FIG. 1 does not limit the terminal device; the terminal device may include more or fewer components than shown, combine certain components, or arrange components differently. The terminal device may be implemented in various forms. For example, the terminal devices described in this application may include mobile terminals such as mobile phones, tablets, laptops, palmtop computers, personal digital assistants (PDAs), portable media players (PMPs), navigation devices, wearable devices, smart wristbands, and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
To improve the accuracy and extensibility of air-gesture recognition and reduce cost, the embodiments of the present application provide a gesture recognition method and device, which are described in detail below.

Referring to FIG. 2, which is a schematic flowchart of a gesture recognition method provided by an embodiment of the present application, when the flow is applied to a terminal device it may include the following steps:

210. Acquire a first image frame collected by an image sensor.

Before acquiring the first image frame, the terminal device first starts the image sensor to collect images. The image sensor may be a camera on the terminal device, and the number of image frames collected per second may be determined according to the specific situation, for example 30 frames per second. After turning on the image sensor, the terminal device also starts the gesture detection program, putting the terminal device into a gesture-detecting state.

Once the image sensor starts collecting images, multiple frames are recorded in sequence, and every frame is passed to the processor for detection. The first image frame may be any one of these frames.

220. If the first image frame includes a first gesture that satisfies the condition for starting dynamic gesture detection, record gesture position information of the first gesture.

The first gesture may be set by the terminal device or defined by the user; for example, it may be a fist or a palm. The terminal device can detect the first image frame with an intelligent algorithm: it first determines whether a gesture exists in the first image frame; if so, it determines the gesture type; and if it identifies the first gesture, it records the gesture position information of the first gesture. The first gesture triggers the terminal device to perform dynamic gesture detection and to recognize the image frames following the first image frame. If no gesture is detected in the first image frame, the terminal device detects the next frame.

In a possible implementation, recording the gesture position information of the first gesture may be done by determining one or more feature points on the first gesture and using the position information of those feature points as the gesture position information of the first gesture. The terminal device can determine the position of a feature point in the picture of the first image frame, for example through two coordinate axes representing horizontal and vertical pixel coordinates. After the feature point is determined, subsequent images collected by the image sensor all use the position of this feature point as the gesture position information of the gesture in each image frame. Of course, the embodiments of the present application do not limit the specific method of recording gesture position information; other implementations may also be used, and the feature-point method is only an example.

For example, as shown in FIG. 3, the first image frame includes a first gesture, which is a fist. The terminal device can determine a feature point on the first gesture; the feature point may be located anywhere on the first gesture, and a position easily captured by the image sensor is generally chosen. Assuming the resolution of the first image is 1080*1920 and the position of the feature point is (400, 1100), then (400, 1100) can serve as the gesture position information of the first gesture.

It should be noted that a target detection algorithm may be used to analyze every frame collected by the image sensor, such as the recent AI target detection algorithm Yolo V4. These algorithms achieve real-time detection speed while maintaining detection accuracy. The target detection algorithm may also be another, better algorithm; the embodiments of the present application are not limited in this respect.
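The feature-point bookkeeping of step 220 can be sketched as follows. Normalizing the pixel coordinates to the frame resolution is an added assumption (the text records raw coordinates such as (400, 1100)); all function names are illustrative.

```python
# Illustrative sketch: record one feature-point position per detected hand,
# normalized to the frame resolution so that trajectories recorded on
# different devices are directly comparable.

def record_position(trajectory, point, frame_size):
    """Append a feature-point position scaled to [0, 1] in each axis."""
    (x, y), (width, height) = point, frame_size
    trajectory.append((x / width, y / height))
    return trajectory

# The example from the text: a 1080*1920 frame with the feature point
# of the fist at pixel (400, 1100).
traj = record_position([], (400, 1100), (1080, 1920))
```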
230. Acquire a second image frame collected by the image sensor, the second image frame being collected after the first image frame.

After determining that the gesture in the first image frame is the first gesture and recording its gesture position information, the terminal device sequentially acquires the multiple image frames captured by the image sensor. The second image frame is collected after the first image frame and may be any one of the image frames following it.

240. If the second image frame does not include a second gesture that satisfies the condition for ending dynamic gesture detection, record gesture position information of the second gesture.

The second gesture is the mark for ending dynamic gesture detection; when the terminal device detects the second gesture, it stops detecting the dynamic gesture. If the second image frame does not include the second gesture, the terminal device keeps detecting every frame and records the gesture position information of the gesture in each frame. The gesture in the second image frame may be the first gesture or any gesture other than the second gesture, for example holding up one finger or two fingers, which is not limited here. The terminal device records the gesture position information corresponding to the gesture in every frame and saves it in the memory or cache of the terminal device.

In a possible implementation, the second image frame may contain no gesture, for example because the user's hand leaves the capture range of the image sensor, so the gesture is missing from some second image frames. In this case, the terminal device can correct this with a specific algorithm: it can analyze the frames before the gesture disappeared and the frames after it reappeared, and calculate the probable gesture position information for the missing frames. This improves the fault tolerance of dynamic gesture detection. If the terminal device cannot determine the probable gesture position information for the missing frames through analysis, it can output prompt information indicating a gesture recognition error. In addition, if no gesture is detected from the second image frame within a preset duration after the gesture position information of the first gesture is recorded, a prompt message may also be output.
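One simple realization of the correction described above — estimating positions for the frames in which the hand left the sensor's view — is linear interpolation between the last sample before the gap and the first sample after it. The patent does not specify the algorithm; this is an illustrative assumption.

```python
# Illustrative sketch: fill a gap of n_missing frames by linear interpolation
# between the surrounding (x, y) samples.

def fill_gap(before, after, n_missing):
    """Return n_missing interpolated positions strictly between two samples."""
    (x0, y0), (x1, y1) = before, after
    step = 1.0 / (n_missing + 1)
    return [(x0 + (x1 - x0) * step * i, y0 + (y1 - y0) * step * i)
            for i in range(1, n_missing + 1)]
```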
250. If the second image frame includes a second gesture that satisfies the condition for ending dynamic gesture detection, acquire the recorded gesture position information of each gesture.

When the terminal device detects that the second image frame includes the second gesture, it immediately stops acquiring the next frame from the image sensor and acquires all the recorded gesture position information of each gesture.

As shown in FIG. 4, after detecting that the first image frame includes the first gesture, the terminal device determines a feature point as the mark for recording the gesture position information of each gesture and detects the second image frames. To distinguish different second image frames, they are labeled second image frame 1, second image frame 2, and so on. After detecting a gesture in second image frame 1, the terminal device records its gesture position information; likewise, after detecting a gesture in second image frame 2, it records that gesture position information. When the terminal device detects second image frame 3 and determines that it includes the second gesture, it stops detecting the next frame and acquires all the recorded gesture position information. It should be noted that FIG. 4 shows only 4 frames; in practice, multiple frames may also exist between the first image frame and second image frame 1, between second image frame 1 and second image frame 2, and so on; the figure is simplified and not limiting. Moreover, the 4 frames in FIG. 4 are not displayed on the interface at the same time but in chronological order.

260. Determine the gesture category of the first dynamic gesture according to the acquired gesture position information.
Specifically, the terminal device determines the movement trajectory of the first dynamic gesture according to the gesture position information. The first dynamic gesture includes the first gesture detected by the terminal device and the gestures in the multiple second image frames. The terminal device can then determine the similarity between the movement trajectory of the first dynamic gesture and each trajectory feature in the dynamic gesture list. The dynamic gesture list includes multiple trajectory features, each corresponding to one gesture category. For example, as shown in Table 1, trajectory feature 1 corresponds to gesture category 1, trajectory feature 2 corresponds to gesture category 2, and so on. A trajectory feature is a simplified form of a movement trajectory, because it is difficult for a user to keep the hand moving in a straight line while making a dynamic gesture. For example, a trajectory feature may be move right, move left, move right then down, move up then left, and so on. After determining the similarity between the movement trajectory of the first dynamic gesture and each trajectory feature in the dynamic gesture list, if a similarity value higher than a preset threshold exists, the terminal device determines the trajectory feature with the highest similarity to the trajectory feature of the first dynamic gesture as the first trajectory feature, which matches the first dynamic gesture. Finally, the terminal device can determine that the gesture category of the first dynamic gesture is the gesture category corresponding to the first trajectory feature.

Trajectory feature | Movement direction | Gesture category
→ | move right | gesture category 1
← | move left | gesture category 2
…… | …… | ……
↓→ | move down, then right | gesture category 15
↓← | move down, then left | gesture category 16

Table 1: Dynamic gesture list

In a possible implementation, the terminal device can determine the gesture type of a dynamic gesture from the offsets and movement directions of the gesture position information.

For example, as shown in FIG. 5, after recording the gesture position information of each image frame as in FIG. 4, the terminal device determines the movement trajectory of the first dynamic gesture, which is roughly a move up followed by a move right. The terminal device then determines from the dynamic gesture list that the similarity of the first dynamic gesture to trajectory feature 1 is 5%, to trajectory feature 2 is 20%, to trajectory feature 3 is 80%, and to trajectory feature 4 is 95%; with a preset threshold of 80%, the terminal device determines trajectory feature 4 as the first trajectory feature. Finally, the terminal device determines that the gesture category of the first dynamic gesture is the gesture category corresponding to trajectory feature 4.
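The offset-and-direction variant mentioned above can be sketched as follows; the minimum-offset threshold and the four category names are assumptions for illustration:

```python
# Illustrative sketch: classify a dynamic gesture from the net offset between
# the first and last recorded positions; the axis with the larger absolute
# offset decides the direction.

def classify_by_offset(positions, min_offset=0.1):
    """Return 'left'/'right'/'up'/'down', or None if the hand barely moved."""
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < min_offset:
        return None
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"   # image y grows downward
```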
In a possible implementation, the terminal device can also acquire the movement trajectory of a second dynamic gesture, using the same method as for the first dynamic gesture, which is not repeated here. The terminal device determines the trajectory feature of the second dynamic gesture from its movement trajectory and adds that trajectory feature to the dynamic gesture list; the trajectory feature of the second dynamic gesture can indicate the gesture category of the second dynamic gesture. Through this method, the user can modify the dynamic gesture list, for example changing the content of a trajectory feature or adding a trajectory feature and its corresponding gesture category.
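The user-extensible gesture list described here is what makes retraining unnecessary: a new trajectory feature is simply registered alongside its category. A minimal sketch, with the data shape assumed:

```python
# Illustrative sketch (assumed data shape): each entry pairs a trajectory
# feature with its gesture category; adding a gesture is just an append,
# so no model retraining is required.

def add_gesture(gesture_list, trajectory_feature, category):
    """Register a new trajectory feature that maps to `category`."""
    gesture_list.append((trajectory_feature, category))
    return gesture_list

gestures = [([1, 0, 0, 0], "swipe-right")]
add_gesture(gestures, [0, 1, 0, 0], "swipe-up")
```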
In a possible implementation, after determining the gesture category of the first dynamic gesture, the terminal device can generate corresponding indication information according to the gesture category and execute the corresponding steps according to the content the indication information indicates. Because each dynamic gesture can correspond to a program, once the terminal device determines the gesture category of a dynamic gesture, it can execute the operation corresponding to that category.

In a possible implementation, after determining the gesture category of the first gesture, if no gesture is detected within a preset time period, the terminal device determines that there is currently no gesture in the images captured by the image sensor, and can reduce the frequency of acquiring image frames from the image sensor to save power.

Through the embodiments of the present application, after acquiring the first image frame, the terminal device detects whether it includes a gesture; if so, it determines the gesture category; and if it is the first gesture, it records the gesture position information of the first gesture, where the first gesture satisfies the condition for starting dynamic gesture detection. The terminal device then continues detecting second image frames; if a second image frame does not include a second gesture satisfying the condition for ending dynamic gesture detection, it records the gesture position information of the second gesture. When a second image frame includes the second gesture, the recorded gesture position information of each gesture is acquired, from which the movement trajectory of the first dynamic gesture can be determined and, from the trajectory, its gesture category. With this method, first, the accuracy of air dynamic gesture recognition is improved; second, users can add or change custom gestures, improving the extensibility of gesture recognition; and third, the scheme can be implemented merely by acquiring image frames from an image sensor, reducing its cost.
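The whole flow summarized above — wait for the start gesture, record positions until the end gesture appears, then classify — can be tied together in a small state machine. All callables are placeholders for the detector, position extractor, and classifier; this is an illustrative sketch, not the patent's code.

```python
# Illustrative sketch: a minimal detection loop over image frames.
def run_detection(frames, is_start, is_end, get_position, classify):
    """Return the classified gesture category, or None if never completed."""
    trajectory, recording = [], False
    for frame in frames:
        if not recording:
            if is_start(frame):              # first gesture seen: start recording
                trajectory.append(get_position(frame))
                recording = True
        elif is_end(frame):                  # second gesture seen: stop and classify
            return classify(trajectory)
        else:                                # intermediate frame: keep recording
            trajectory.append(get_position(frame))
    return None
```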
Referring to FIG. 6, which is a schematic unit diagram of a gesture recognition device provided by an embodiment of the present application, the device shown in FIG. 6 can be used to perform some or all of the functions of the method embodiment described in FIG. 2. The device may be the terminal device, a device in the terminal device, or a device that can be used together with the terminal device.

The logical structure of the device may include an acquisition unit 610 and a processing unit 620. When the device is applied to a terminal device:

the acquisition unit 610 acquires a first image frame collected by an image sensor;

the processing unit 620 is configured to record gesture position information of a first gesture if the first image frame includes the first gesture satisfying a condition for starting dynamic gesture detection;

the acquisition unit 610 is further configured to acquire a second image frame collected by the image sensor, the second image frame being collected after the first image frame;

the processing unit 620 is further configured to record gesture position information of a second gesture if the second image frame does not include the second gesture satisfying a condition for ending dynamic gesture detection;

the processing unit 620 is further configured to acquire the recorded gesture position information of each gesture if the second image frame includes a second gesture satisfying the condition for ending dynamic gesture detection;

the processing unit 620 is further configured to determine a gesture category of a first dynamic gesture according to the acquired gesture position information.

In a possible implementation, the processing unit 620 is further configured to determine the movement trajectory of the first dynamic gesture according to the gesture position information, and to determine the gesture category of the first dynamic gesture according to the movement trajectory of the first dynamic gesture.

In a possible implementation, the processing unit 620 is further configured to determine the similarity between the movement trajectory of the first dynamic gesture and each trajectory feature in a dynamic gesture list, where the dynamic gesture list includes multiple trajectory features and each trajectory feature corresponds to one gesture category; if there is a similarity value higher than a preset threshold, to determine the trajectory feature with the highest similarity to the trajectory feature of the first dynamic gesture as the first trajectory feature; and to determine that the gesture category of the first dynamic gesture is the gesture category corresponding to the first trajectory feature.

In a possible implementation, the acquisition unit 610 is further configured to acquire the movement trajectory of a second dynamic gesture; the processing unit 620 is further configured to determine the trajectory feature of the second dynamic gesture according to its movement trajectory and to add the trajectory feature of the second dynamic gesture to the dynamic gesture list, where it is used to indicate the gesture category of the second dynamic gesture.

In a possible implementation, the processing unit 620 is further configured to, after determining the gesture category of the first dynamic gesture according to the acquired gesture position information, generate indication information corresponding to the first dynamic gesture according to its gesture category, the indication information being used to instruct the terminal device to execute the content it indicates.

In a possible implementation, the processing unit 620 is further configured to, after acquiring the recorded gesture position information of each gesture when the second image frame includes a second gesture satisfying the condition for ending dynamic gesture detection, output prompt information if no gesture is detected from the second image frame within a preset duration after the gesture position information of the first gesture is recorded.

In a possible implementation, the processing unit 620 is further configured to, after determining the gesture category of the first dynamic gesture according to the acquired gesture position information, reduce the frequency of acquiring image frames from the image sensor if no gesture is detected within a preset duration.
Referring to FIG. 7, a simplified schematic diagram of the physical structure of a gesture recognition device provided by an embodiment of the present application, the device includes a processor 710, a memory 720, a communication interface 730, and a user interface 740, connected by one or more communication buses.

The processor 710 is configured to support the data transmission device in performing the functions corresponding to the method in FIG. 2. It should be understood that in the embodiments of the present application, the processor 710 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.

The memory 720 is used to store program code and the like. The memory 720 in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or flash memory. The volatile memory may be a random access memory (RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).

The communication interface 730 is used to send and receive data, information, or messages, and may also be described as a transceiver, a transceiver circuit, etc.

The user interface 740 is the medium through which the user interacts and exchanges information with the terminal; its concrete embodiment may include a display screen (Display) for output, and a keyboard (Keyboard), a touch screen, etc. for input. It should be noted that the keyboard here may be a physical keyboard, a touch-screen virtual keyboard, or a keyboard combining the physical and the touch-screen virtual.
In the embodiments of the present application, when the data transmission device is applied to a terminal device, the processor 710 may call the program code stored in the memory 720 to perform the following operations:

the processor 710 calls the program code stored in the memory 720 to acquire a first image frame collected by an image sensor;

the processor 710 calls the program code stored in the memory 720 to record gesture position information of a first gesture if the first image frame includes the first gesture satisfying a condition for starting dynamic gesture detection;

the processor 710 calls the program code stored in the memory 720 to acquire a second image frame collected by the image sensor, the second image frame being collected after the first image frame;

the processor 710 calls the program code stored in the memory 720 to record gesture position information of a second gesture if the second image frame does not include the second gesture satisfying a condition for ending dynamic gesture detection;

the processor 710 calls the program code stored in the memory 720 to acquire the recorded gesture position information of each gesture if the second image frame includes a second gesture satisfying the condition for ending dynamic gesture detection;

the processor 710 calls the program code stored in the memory 720 to determine a gesture category of a first dynamic gesture according to the acquired gesture position information.

In a possible implementation, the processor 710 calls the program code stored in the memory 720 to determine the movement trajectory of the first dynamic gesture according to the gesture position information, and to determine the gesture category of the first dynamic gesture according to the movement trajectory of the first dynamic gesture.

In a possible implementation, the processor 710 calls the program code stored in the memory 720 to determine the similarity between the movement trajectory of the first dynamic gesture and each trajectory feature in the dynamic gesture list, where the dynamic gesture list includes multiple trajectory features and each trajectory feature corresponds to one gesture category; if there is a similarity value higher than a preset threshold, to determine the trajectory feature with the highest similarity to the trajectory feature of the first dynamic gesture as the first trajectory feature; and to determine that the gesture category of the first dynamic gesture is the gesture category corresponding to the first trajectory feature.

In a possible implementation, the processor 710 calls the program code stored in the memory 720 to acquire the movement trajectory of a second dynamic gesture, to determine the trajectory feature of the second dynamic gesture according to its movement trajectory, and to add the trajectory feature of the second dynamic gesture to the dynamic gesture list, where it is used to indicate the gesture category of the second dynamic gesture.

In a possible implementation, after determining the gesture category of the first dynamic gesture according to the acquired gesture position information, the processor 710 calls the program code stored in the memory 720 to generate indication information corresponding to the first dynamic gesture according to its gesture category, the indication information being used to instruct the terminal device to execute the content it indicates.

In a possible implementation, after acquiring the recorded gesture position information of each gesture when the second image frame includes a second gesture satisfying the condition for ending dynamic gesture detection, the processor 710 calls the program code stored in the memory 720 to output prompt information if no gesture is detected from the second image frame within a preset duration after the gesture position information of the first gesture is recorded.

In a possible implementation, after determining the gesture category of the first dynamic gesture according to the acquired gesture position information, the processor 710 calls the program code stored in the memory 720 to reduce the frequency of acquiring image frames from the image sensor if no gesture is detected within a preset duration.

It should be noted that in the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, refer to the relevant descriptions of the other embodiments.

The steps in the methods of the embodiments of the present invention may be reordered, combined, and deleted according to actual needs.

The units in the processing device of the embodiments of the present invention may be combined, divided, and deleted according to actual needs.

The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. A computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (such as coaxial cable, optical fiber, or digital subscriber line) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.

Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or replace some or all of the technical features with equivalents; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (20)

  1. A gesture recognition method, comprising:
    acquiring a first image frame collected by an image sensor;
    if the first image frame includes a first gesture that satisfies a condition for starting dynamic gesture detection, recording gesture position information of the first gesture;
    acquiring a second image frame collected by the image sensor, the second image frame being collected after the first image frame;
    if the second image frame does not include a second gesture that satisfies a condition for ending dynamic gesture detection, recording gesture position information of the second gesture;
    if the second image frame includes a second gesture that satisfies the condition for ending dynamic gesture detection, acquiring the recorded gesture position information of each gesture;
    determining a gesture category of a first dynamic gesture according to the acquired gesture position information.
  2. The method according to claim 1, wherein determining the gesture category of the first dynamic gesture according to the acquired gesture position information comprises:
    determining a movement trajectory of the first dynamic gesture according to the gesture position information;
    determining the gesture category of the first dynamic gesture according to the movement trajectory of the first dynamic gesture.
  3. The method according to claim 2, wherein determining the gesture category of the first dynamic gesture according to the movement trajectory of the first dynamic gesture comprises:
    determining the similarity between the movement trajectory of the first dynamic gesture and each trajectory feature in a dynamic gesture list, the dynamic gesture list comprising multiple trajectory features, each trajectory feature corresponding to one gesture category;
    if there is a similarity value higher than a preset threshold, determining the trajectory feature with the highest similarity to the trajectory feature of the first dynamic gesture as a first trajectory feature;
    determining that the gesture category of the first dynamic gesture is the gesture category corresponding to the first trajectory feature.
  4. The method according to claim 1, wherein determining the gesture category of the first dynamic gesture according to the acquired gesture position information comprises:
    determining the gesture category of the first dynamic gesture according to the offsets and movement directions of the gesture position information.
  5. The method according to claim 1 or 3, further comprising:
    acquiring a movement trajectory of a second dynamic gesture;
    determining a trajectory feature of the second dynamic gesture according to the movement trajectory of the second dynamic gesture;
    adding the trajectory feature of the second dynamic gesture to the dynamic gesture list, the trajectory feature of the second dynamic gesture being used to indicate the gesture category of the second dynamic gesture.
  6. The method according to claim 1, wherein after determining the gesture category of the first dynamic gesture according to the acquired gesture position information, the method further comprises:
    generating indication information corresponding to the first dynamic gesture according to the gesture category of the first dynamic gesture, the indication information being used to instruct a terminal device to execute the content indicated by the indication information.
  7. The method according to claim 1, wherein after recording the gesture position information of the first gesture if the first image frame includes the first gesture satisfying the condition for starting dynamic gesture detection, the method further comprises:
    outputting prompt information if no gesture is detected from the second image frame within a preset duration after the gesture position information of the first gesture is recorded.
  8. The method according to claim 1, wherein after determining the gesture category of the first dynamic gesture according to the acquired gesture position information, the method further comprises:
    reducing the frequency of acquiring image frames from the image sensor if no gesture is detected within a preset time period.
  9. A gesture recognition device, comprising:
    an acquisition unit, which acquires a first image frame collected by an image sensor;
    a processing unit, configured to record gesture position information of a first gesture if the first image frame includes the first gesture satisfying a condition for starting dynamic gesture detection;
    the acquisition unit being further configured to acquire a second image frame collected by the image sensor, the second image frame being collected after the first image frame;
    the processing unit being further configured to record gesture position information of a second gesture if the second image frame does not include the second gesture satisfying a condition for ending dynamic gesture detection;
    the processing unit being further configured to acquire the recorded gesture position information of each gesture if the second image frame includes a second gesture satisfying the condition for ending dynamic gesture detection;
    the processing unit being further configured to determine a gesture category of a first dynamic gesture according to the acquired gesture position information.
  10. The gesture recognition device according to claim 9, wherein the processing unit is further configured to:
    determine a movement trajectory of the first dynamic gesture according to the gesture position information;
    determine the gesture category of the first dynamic gesture according to the movement trajectory of the first dynamic gesture.
  11. The gesture recognition device according to claim 10, wherein the processing unit is further configured to:
    determine the similarity between the movement trajectory of the first dynamic gesture and each trajectory feature in a dynamic gesture list, the dynamic gesture list comprising multiple trajectory features, each trajectory feature corresponding to one gesture category;
    if there is a similarity value higher than a preset threshold, determine the trajectory feature with the highest similarity to the trajectory feature of the first dynamic gesture as a first trajectory feature;
    determine that the gesture category of the first dynamic gesture is the gesture category corresponding to the first trajectory feature.
  12. The gesture recognition device according to claim 9, wherein the processing unit is further configured to:
    determine the gesture category of the first dynamic gesture according to the offsets and movement directions of the gesture position information.
  13. The gesture recognition device according to claim 9 or 11, wherein the acquisition unit is further configured to:
    acquire a movement trajectory of a second dynamic gesture;
    and the processing unit is further configured to:
    determine a trajectory feature of the second dynamic gesture according to the movement trajectory of the second dynamic gesture;
    add the trajectory feature of the second dynamic gesture to the dynamic gesture list, the trajectory feature of the second dynamic gesture being used to indicate the gesture category of the second dynamic gesture.
  14. The gesture recognition device according to claim 9, wherein the processing unit is further configured to:
    generate indication information corresponding to the first dynamic gesture according to the gesture category of the first dynamic gesture, the indication information being used to instruct a terminal device to execute the content indicated by the indication information.
  15. The gesture recognition device according to claim 9, wherein the processing unit is further configured to:
    output prompt information if no gesture is detected from the second image frame within a preset duration after the gesture position information of the first gesture is recorded.
  16. The gesture recognition device according to claim 9, wherein the processing unit is further configured to:
    reduce the frequency of acquiring image frames from the image sensor if no gesture is detected within a preset time period.
  17. A gesture recognition device, comprising a processor, a memory, and a user interface connected to one another, wherein the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the gesture recognition method according to any one of claims 1 to 8.
  18. A chip, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the gesture recognition method according to any one of claims 1 to 8.
  19. A chip module, comprising a transceiver component and a chip, the chip comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the gesture recognition method according to any one of claims 1 to 8.
  20. A computer-readable storage medium, wherein the computer-readable storage medium stores one or more instructions, the one or more instructions being adapted to be loaded by a processor to perform the gesture recognition method according to any one of claims 1 to 8.
PCT/CN2021/130458 2020-11-18 2021-11-12 一种手势识别方法及装置 WO2022105692A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011298230.2 2020-11-18
CN202011298230.2A CN112364799A (zh) 2020-11-18 2020-11-18 一种手势识别方法及装置

Publications (1)

Publication Number Publication Date
WO2022105692A1 true WO2022105692A1 (zh) 2022-05-27

Family

ID=74533984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/130458 WO2022105692A1 (zh) 2020-11-18 2021-11-12 一种手势识别方法及装置

Country Status (2)

Country Link
CN (1) CN112364799A (zh)
WO (1) WO2022105692A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116301363A (zh) * 2023-02-27 2023-06-23 荣耀终端有限公司 隔空手势识别方法、电子设备及存储介质

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364799A (zh) * 2020-11-18 2021-02-12 展讯通信(上海)有限公司 一种手势识别方法及装置
CN113282168A (zh) * 2021-05-08 2021-08-20 青岛小鸟看看科技有限公司 头戴式显示设备的信息输入方法、装置及头戴式显示设备
CN115643485B (zh) * 2021-11-25 2023-10-24 荣耀终端有限公司 拍摄的方法和电子设备
TWI835053B (zh) * 2022-01-18 2024-03-11 大陸商廣州印芯半導體技術有限公司 手勢感測系統及其感測方法
CN115079822B (zh) * 2022-05-31 2023-07-21 荣耀终端有限公司 隔空手势交互方法、装置、电子芯片及电子设备
CN118118778A (zh) * 2022-11-30 2024-05-31 荣耀终端有限公司 手势感知方法、设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150310264A1 (en) * 2014-04-29 2015-10-29 Avago Technologies General Ip (Singapore) Pte. Ltd. Dynamic Gesture Recognition Using Features Extracted from Multiple Intervals
CN107563286A (zh) * 2017-07-28 2018-01-09 南京邮电大学 一种基于Kinect深度信息的动态手势识别方法
CN108960177A (zh) * 2018-07-13 2018-12-07 苏州浪潮智能软件有限公司 一种将手势进行数字化处理的方法及装置
CN109960980A (zh) * 2017-12-22 2019-07-02 北京市商汤科技开发有限公司 动态手势识别方法及装置
CN111652017A (zh) * 2019-03-27 2020-09-11 上海铼锶信息技术有限公司 一种动态手势识别方法及系统
CN111680594A (zh) * 2020-05-29 2020-09-18 北京计算机技术及应用研究所 一种基于手势识别的增强现实交互方法
CN112364799A (zh) * 2020-11-18 2021-02-12 展讯通信(上海)有限公司 一种手势识别方法及装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8373654B2 (en) * 2010-04-29 2013-02-12 Acer Incorporated Image based motion gesture recognition method and system thereof

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150310264A1 (en) * 2014-04-29 2015-10-29 Avago Technologies General Ip (Singapore) Pte. Ltd. Dynamic Gesture Recognition Using Features Extracted from Multiple Intervals
CN107563286A (zh) * 2017-07-28 2018-01-09 南京邮电大学 一种基于Kinect深度信息的动态手势识别方法
CN109960980A (zh) * 2017-12-22 2019-07-02 北京市商汤科技开发有限公司 动态手势识别方法及装置
CN108960177A (zh) * 2018-07-13 2018-12-07 苏州浪潮智能软件有限公司 一种将手势进行数字化处理的方法及装置
CN111652017A (zh) * 2019-03-27 2020-09-11 上海铼锶信息技术有限公司 一种动态手势识别方法及系统
CN111680594A (zh) * 2020-05-29 2020-09-18 北京计算机技术及应用研究所 一种基于手势识别的增强现实交互方法
CN112364799A (zh) * 2020-11-18 2021-02-12 展讯通信(上海)有限公司 一种手势识别方法及装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116301363A (zh) * 2023-02-27 2023-06-23 荣耀终端有限公司 隔空手势识别方法、电子设备及存储介质
CN116301363B (zh) * 2023-02-27 2024-02-27 荣耀终端有限公司 隔空手势识别方法、电子设备及存储介质

Also Published As

Publication number Publication date
CN112364799A (zh) 2021-02-12

Similar Documents

Publication Publication Date Title
WO2022105692A1 (zh) 一种手势识别方法及装置
US20180307319A1 (en) Gesture recognition
CN111105852B (zh) 一种电子病历推荐方法、装置、终端及存储介质
US11721333B2 (en) Electronic apparatus and control method thereof
US10838508B2 (en) Apparatus and method of using events for user interface
US20180372836A1 (en) Floor Determining Method and System, and Related Device
US11256463B2 (en) Content prioritization for a display array
EP4336490A1 (en) Voice processing method and related device
WO2020200263A1 (zh) 信息流中图片的处理方法、设备及计算机可读存储介质
Yin et al. A high-performance training-free approach for hand gesture recognition with accelerometer
JP2021531589A (ja) 目標対象の動作認識方法、装置及び電子機器
CN112840313A (zh) 电子设备及其控制方法
CN114391132A (zh) 电子设备及其屏幕捕获方法
US20170177144A1 (en) Touch display device and touch display method
Yang et al. Smart control of home appliances using hand gesture recognition in an IoT-enabled system
CN112488157A (zh) 一种对话状态追踪方法、装置、电子设备及存储介质
CN103593052A (zh) 基于Kinect和OpenNI的手势捕获方法
CN107749201B (zh) 点读对象处理方法、装置、存储介质及电子设备
Yang et al. Audio–visual perception‐based multimodal HCI
Babu et al. Controlling Computer Features Through Hand Gesture
Zhu et al. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices
US11762515B1 (en) Touch and hover sensing on single-layer segmented sheets
CN114967927B (zh) 一种基于图像处理的智能手势交互方法
US10671450B2 (en) Coalescing events framework
CN115268645A (zh) 抬腕检测方法、装置、设备及可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21893839

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21893839

Country of ref document: EP

Kind code of ref document: A1